Source: no-problem/9911/nucl-th9911075.html (ar5iv)
# Twenty-Five Years of Progress in the Three-Nucleon Problem

## Introduction

The purview of my talk is progress that has been made in our understanding of the three-nucleon systems and of the dynamics that underlies that understanding. My emphasis will be on the theoretical side. My reference point in time is 1974, the date when Bates first delivered beam for an experiment. I will survey that progress by referring to two other significant events that occurred in 1974. One of these is personal: I attended the International Few-Body Conference held that year in Quebec City, CanadaQuebec . The second event is the genesis in that year of three-nucleon forces (3NFs) based on chiral-symmetry considerationsYang . On a personal note it is always a pleasure to return to MIT, where I was a postdoc. Looking back at my work during that period, I find that almost everything dealt with electron scattering, a result of the influence of Bates on the young theorists in the Center for Theoretical Physics. Part of that work involved relativistic corrections to the charge densities of few-nucleon systems, and that motivated my attendance at the Quebec meeting. There are basically three reasons why three-nucleon physics has become a subfield in its own right. The first is that the trinucleons are rich, nontrivial, and “simple” nuclear systems, and understanding their properties is a minimal criterion for success in this area. The word “simple” in this context means that we are capable of performing the very difficult calculations of three-nucleon properties. Indeed, in recent years we have not only succeeded in performing these calculations, but have achieved an understanding of most of the basic trinucleon propertiesjoe ; walter . The second reason is the classic and original goal of the field: using these systems to sort out and refine our understanding of the nuclear force.
This is the most important remaining aspect of the problem, which has been greatly aided in recent years by chiral perturbation theory ($`\chi `$PT). Much of our theoretical and experimental attention has been directed at 3NFs, because trinucleon properties show relatively little sensitivity to the details of modern $`N`$-$`N`$ forces. Our remaining problems (though few) are likely due to our lack of understanding of 3NFs 3NF . Finally, the lovely techniques used in this field are fun to work with, and this attraction has seduced two generations of theorists. Our efforts have led to the very successful application of few-body methods to heavier systems, which goes far beyond even the dreams of 1974, as shown at this symposium by Vijay Pandharipande. My strongest impression of the Quebec meeting is that the field was in a state of confusion. Many calculational techniques were in use, each giving a different answer to the same problem, the <sup>3</sup>H bound-state energy. Faddeev methods, hyperspherical expansions, variational bounds, and separable approximations all had their practitionersQuebec . There was a 10-20% uncertainty ($`\sim `$ 1-2 MeV) in the <sup>3</sup>H binding energy, implying that most (in retrospect, all) of the calculations were not converged. The situation was similar with respect to scattering calculations. In order to achieve convergence one requires brute-force computational resources on a scale that would not be available for another decade.

## Nuclear Forces

The genesis of the computational problem is the spin of the nucleon. Contrary to much folklore, nuclear physics is difficult not because the force is complicated (in shape), but because it is complex (i.e., it has many components). The origin of the problem is the spin and parity of the pion: $`J^\pi =0^{-}`$.
The $`\pi `$-nucleon vertex must have a complementary pseudoscalar structure in order to conserve angular momentum and parity, and the dominant form $`(1^+1^{})`$ is $`\stackrel{}{\sigma }_N\stackrel{}{q}`$, where $`\stackrel{}{\sigma }_N`$ is the nucleon (Pauli) spin and $`\stackrel{}{q}`$ is the pion momentum. This leads immediately to a tensor component of the force (part of the one-pion-exchange potential (OPEP)), which dominates interactions in few-nucleon systems. Indeed, $`V_{\mathrm{OPEP}}`$ is roughly 75% of the total potential energy. This spin dependence, together with isospin dependence, accounts for the complexity. Each nucleon has $`2\times 2=4`$ spin-isospin components, implying that there are roughly $`(4)^2=16`$ such components in the $`N`$-$`N`$ force, which is indeed exemplified by the 18 components of the recent AV18 potentialAV18 . Dealing with these complexities, in addition to the 3 continuous coordinates specifying the positions of 3 nucleons, is a formidable numerical problem. The importance of OPEP is illustrated in Fig. (1) from the Nijmegen group3P0 . Using a potential that vanishes out to $`b=1.4`$ fm and incorporates OPEP plus some two-pion-exchange potential beyond that value leads to the dashed curve. Clearly, the shape of the phase shift is correct. Adding a smooth background contribution from a short-range potential $`(r\lesssim 1.4\,\mathrm{fm})`$ produces the dotted curve, while fine tuning leads to the solid curve. All of the “shape”, however, is produced by pion exchange, which is hardly a surprise given that the pion is the lightest of the mesons exchanged between two nucleons. An obvious question is whether a 1-2 MeV uncertainty is a serious handicap in understanding the physics. Alternatively, if one wishes to probe the nuclear force by examining trinucleon properties, what level of calculational accuracy is a reasonable requirement?
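The spin algebra behind these statements is easy to exhibit explicitly. A minimal sketch (my own illustration, not part of the original talk) builds the two-nucleon spin space and shows that the product of two pion vertices, $`(\stackrel{}{\sigma }_1\stackrel{}{q})(\stackrel{}{\sigma }_2\stackrel{}{q})`$, contains a central piece $`\stackrel{}{\sigma }_1\stackrel{}{\sigma }_2/3`$ (its angular average) plus a direction-dependent, traceless tensor remainder — the tensor-force structure described above:

```python
import numpy as np

# Pauli matrices and the two-nucleon (4-dimensional) spin space
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)
s1 = [np.kron(s, I2) for s in sig]   # sigma_1: acts on nucleon 1
s2 = [np.kron(I2, s) for s in sig]   # sigma_2: acts on nucleon 2

s1s2 = sum(a @ b for a, b in zip(s1, s2))   # central operator sigma_1 . sigma_2

def vertex(qhat):
    """(sigma_1 . qhat)(sigma_2 . qhat): product of the two pion vertices."""
    s1q = sum(q * s for q, s in zip(qhat, s1))
    s2q = sum(q * s for q, s in zip(qhat, s2))
    return s1q @ s2q

# Averaging over the six axis directions gives <qhat_i qhat_j> = delta_ij/3,
# so the direction average of the vertex product is the central piece:
axes = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
avg = sum(vertex(np.array(q, dtype=float)) for q in axes) / len(axes)
assert np.allclose(avg, s1s2 / 3)

# For a specific direction the remainder is the tensor operator
# S_12(qhat) = 3 (sigma_1.qhat)(sigma_2.qhat) - sigma_1.sigma_2
qhat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
S12 = 3 * vertex(qhat) - s1s2
assert abs(np.trace(S12)) < 1e-12    # pure tensor: traceless in spin space
assert np.linalg.norm(S12) > 1       # and it does not vanish
```

With isospin included, each nucleon carries the $`2\times 2=4`$ components quoted in the text, and the pair-operator space grows accordingly; this is the complexity (as opposed to complication) the talk emphasizes.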
The fundamental problem is determining the structure of the $`N`$-$`N`$ force, and this is impossible to achieve using only the $`N`$-$`N`$ scattering data. Imagine that some $`N`$-$`N`$ phase shift is known at all energies and with infinite accuracy (neither assumption is true), and that there is no bound state. Under these idealized conditions a potential $`V(r)`$ (where $`r`$ is the separation of the two nucleons) can be deduced that in the Schrödinger equation will reproduce the phase shift. Unfortunately, one can also deduce a $`V(r,p)`$ (where $`p`$ is the relative nucleon momentum) that reproduces that phase shift equally well. On-shell (free-nucleon) scattering cannot produce a unique potential. This led to the idea that making the nucleons “off-shell” by placing them in a bound system with a third nucleon might provide enough additional information to fix the potential, since $`V(r)`$ and $`V(r,p)`$ defined above will definitely produce different tritons. This is one aspect of what has become known as the “off-shell problem”. We can estimate the uncertainties by noting that the $`N`$-$`N`$ system (with potential $`V`$) feels the presence of the third nucleon only through the action of another $`V`$ and the effect should scale as $`V^2`$, which has the wrong dimensions. Another related off-shell problem is that the motion of a pion propagating between nucleons is conventionally specified only by its transferred momentum, $`\stackrel{}{q}`$, while its transferred energy, $`q_0`$, is replaced by other variables such as $`p^2/2M`$. This hints that the effective off-shell interaction scale is set by $`\mathrm{\Delta }H=V^2/Mc^2`$, which is correctoffshell in spite of the intuitive derivation. Because $`V^2`$ contains terms linking three nucleons together and because of the $`1/c^2`$, this effect is at the same time a three-body force, an off-shell effect, and a relativistic correction. 
Using reasonable numbers for the triton we estimate $`\mathrm{\Delta }H\sim `$ 0.5-1.0 MeV. Thus the previously noted calculational uncertainties ($`\sim `$ 1-2 MeV) are unacceptably large, and calculational errors $`\sim `$ 100 keV (which is approximately 1% of the binding energy) are required in order to investigate the three-nucleon effects discussed above. In addition, 1% absolute experiments are extremely difficult and uncommon. Consequently, 1%-error calculations, known variously as “exact”, “complete”, or “rigorous”, have become the standard of the field. The ability to achieve this has become our field’s major success story.

## Three-Nucleon Calculations

The types of problems attacked and the period during which success was achieved are shown in Fig. (2) and Table 1 BLAST . There are four regions of energy illustrated in Fig. (2) (by arrows) that conveniently encompass the three-nucleon problem: (1) the trinucleon bound states (a pole at $`E_B`$); (2) zero-energy nucleons scattering from the deuteron; (3) $`N`$-$`d`$ scattering below deuteron-breakup threshold (viz., zero total energy); (4) $`N`$-$`d`$ scattering above breakup threshold. These problems were solved at the 1% level at times indicated in Table 1. The Los Alamos-Iowa group was fortunate enough to have participated in half of the entries (top half) in the table, beginning with the <sup>3</sup>H bound state in 1985chen and using only $`N`$-$`N`$ forces, then adding a 3NF, and finally solving <sup>3</sup>He in 1987 (which includes a $`p`$-$`p`$ Coulomb interaction)Levin . Scattering lengths were calculated a few years laterLevin . The bound-state problems are relatively easy, however. Scattering below breakup thresholdPisa\_3 is nearly an order of magnitude harder than a bound-state problem, and above-breakup scattering is nearly an order of magnitude harder stillwalter88 . Above-breakup $`p`$-$`d`$ scattering is a very recent developmentPisa\_ab . A particularly lovely example of this progress is shown in Fig.
(3), obtained from the Pisa groupPisa\_3 . Elastic scattering of 3 MeV nucleons (just below breakup threshold) from deuterons is calculated and compared to data. The solid curve ($`p`$-$`d`$) agrees superbly well with the dense, accurate data, while sparser $`n`$-$`d`$ data agree well with the (dashed) calculated values. Note the large Coulomb effect at the forward and backward angles. This plot is rather typical of differential cross sections: they are insensitive to the details of the nuclear force and agree very well with data. Most spin observables, such as tensor analyzing powers, also agree well with data. Figure (4) shows a very recent calculation A\_T of an electromagnetic spin observable, $`A_T^{}`$, in the reaction $`{}_{}{}^{3}\stackrel{}{\mathrm{He}}(\stackrel{}{e},e^{}n)pp`$. The <sup>3</sup>He target is polarized along the direction of electron momentum transfer, and the electrons are longitudinally polarized. This spin-dependent asymmetry in a response function is proportional to $`G_M^n`$ (neutron magnetic form factor) in the most naive description of the reaction. That description is based on the observation that s-waves dominate between the nucleons in <sup>3</sup>He. In that case the two protons are required by the Pauli principle to have spins anti-aligned, and the entire spin of the nucleus is carried by the neutron. The protons do contribute to the reaction because the tensor force modifies the simple s-wave picture and the protons’ spins will be aligned in D-states, and can contribute to the asymmetry through final-state $`p`$-$`n`$ charge-exchange reactions. The figure illustrates the Bates datagao compared to two theoretical calculations: the full calculation (solid curve) and a calculation (dashed curve) that neglects all final-state interactions. The latter calculation would be typical of what was available until very recently, which illustrates both the difficulty of the calculations and the progress that has been made. 
I would like to summarize this part of my talk as follows:

* We can now accurately calculate three-nucleon properties. Most of these properties, such as differential cross sections and most spin observables (e.g., the tensor analyzing power, $`T_{22}`$walter ), agree well with data and depend only weakly on a 3NF. Electromagnetic calculations are very difficult and are the state-of-the-art.
* Spin-isospin degrees of freedom are the biggest impediment to few-nucleon calculations.
* Many different techniques are now successfully employed in performing calculationsjoe .
* 1% accuracy is needed in order to disentangle the physics.
* The most demanding problems drive the progress, and Bates problems are of this type.

## Three-Nucleon Forces

Three-nucleon forces are small, as we argued earlier for a very special case. In fact that argument holds for the whole class of such forces, as we shall see. If they are so small, are they really necessary, or even interesting? The most modern potentials produce <sup>3</sup>H bound states that are underbound by up to 1 MeV. This defect can be compensated by the addition of a 3NF. Nevertheless, I do not consider this to be very compelling evidence for three-nucleon forces. Are such forces just “theorists’ toys” or is there more compelling experimental evidence? In order to answer this question, we must first establish the credentials of the physics underlying the various models of such forces, which are relatively few in number. The longest-range mechanisms are those based on 2$`\pi `$-exchange, and these have been extensively investigated. Figure (5a) illustrates the generic force of this type, while Fig. (5b) shows the single most important ingredient (other ingredients are also important). The history of this field is depicted in Fig. (6), a diagram showing the evolution of these forces, all of which are field-theory based. Time runs vertically and long lines indicate the oldest forces.
Near the bottom are the primitive models (PM). The august Fujita-Miyazawa modelFM (FM) is based on $`\mathrm{\Delta }`$-isobars, as is its offshoot the Urbana-Argonne modelUA (UA). To the left are the models based on chiral symmetry, including the Yang modelYang (Y) (the first of this type, published in 1974) and the Tucson-Melbourne modelTM (TM), the oldest such model still in use. The more recent models based on relativistic field theories (RFT) are the BrazilBrazil (BR) and RuhrPotRuhrPot (RP) models. Finally, the Texas modeltexas (TX) is based on chiral perturbation theory. It is clear from this history that the two key ingredients of 3NFs are:

* adequate phenomenology (such as isobars).
* imposing chiral constraints.

How does one accomplish this? It is believed that the theory underlying the strong interactions is QCD. The “natural” degrees of freedom of this theory are quarks and gluons. We aren’t required to use these degrees of freedom, however, and traditional nuclear physics uses effective (observable) degrees of freedom: nucleons and pions. One can imagine freezing out all other particles and constructing a theory in this compressed Hilbert space, in the fashion of (Feshbach) \[P,Q\] reaction theoryFeshbach . Although the resulting operators can be quite complicated, chiral symmetry, that most important ingredient residing in QCD, can be implemented in the new theory. This “QCD in disguise” is better known as chiral perturbation theory, and applies to both particles and nucleipower . Only one aspect of that theory is needed here: dimensional power countingpower . The latter is a kind of (not obvious!) dimensional analysis based on only two QCD internal energy scales. The first scale is $`f_\pi `$, the pion decay constant ($`\sim `$ 93 MeV), which controls the Goldstone bosons and specifically the pion.
The second scale is the energy above which we agree to freeze out all excitations, $`\mathrm{\Lambda }\sim 1`$ GeV, and is the scale appropriate to the QCD bound states, such as the nucleon, $`\rho `$ and $`\omega `$ resonances, etc. Using these scales, it can be shownGeorgi that a given term in a Lagrangian should scale as: $$\mathcal{L}^{(\mathrm{\Delta })}\sim \frac{c}{f_\pi ^\beta \mathrm{\Lambda }^\mathrm{\Delta }}(\mathrm{times}\mathrm{various}\mathrm{fields}).$$ Two important properties are that the power $`\mathrm{\Delta }`$ (used to classify Lagrangian terms) satisfies $`\mathrm{\Delta }\geq 0`$ (which is a not very obvious chiral-symmetry constraint), while the dimensionless constant $`c`$ satisfies $`|c|\sim 1`$, the condition of “naturalness” (an even less obvious constraint). Because freezing out degrees of freedom results in effective interactions with unknown coefficients, the latter condition is the only handle we have on reasonable values for those constants. This formal scheme can be implemented in nuclei to estimate the size of various contributions to potential energies (among others). An additional nuclear scale is required, the effective momentum or inverse correlation length, which is given by $`Q\sim m_\pi c`$, where $`m_\pi `$ is the pion mass. Then it can be shown thatpower $$V_\pi \sim \frac{Q^3}{f_\pi \mathrm{\Lambda }}\sim 30\mathrm{MeV}/\mathrm{pair},$$ $$V_{3NF}\sim \frac{Q^6}{f_\pi ^2\mathrm{\Lambda }^3}\sim 1\mathrm{MeV}/\mathrm{triplet}.$$ The latter relationship can also be written as $`V_{3NF}\sim V_\pi ^2/\mathrm{\Lambda }`$, which is equivalent to the expression we developed earlier (since $`M\sim \mathrm{\Lambda }`$) and is also the correct size to explain the <sup>3</sup>H binding discrepancy. The use of $`\chi `$PT is finally leading to a consensus on 2$`\pi `$-exchange 3NF terms, and a “standard” model of the 3NF is within reach. All such terms in leading order of $`\chi `$PT have been calculated, although some of them have not yet been implemented.
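Plugging in the scales quoted in the text ($`Q\sim m_\pi c^2\approx 138`$ MeV, $`f_\pi \approx 93`$ MeV, $`\mathrm{\Lambda }\approx 1`$ GeV) reproduces these estimates directly; a quick numerical check:

```python
# Chiral power-counting estimates, using the scales quoted in the text.
Q, f_pi, Lam = 138.0, 93.0, 1000.0   # MeV: Q ~ m_pi c^2, f_pi, Lambda

V_pi = Q**3 / (f_pi * Lam)           # OPEP scale per pair
V_3nf = Q**6 / (f_pi**2 * Lam**3)    # 2pi-exchange 3NF scale per triplet

print(round(V_pi, 1), round(V_3nf, 2))   # ~28 MeV and ~0.8 MeV

# The 3NF estimate is identically V_pi^2 / Lambda, the same scale as the
# Delta-H = V^2/(Mc^2) argument made earlier (with M ~ Lambda), and it
# matches the ~1 MeV 3H binding discrepancy.
assert abs(V_3nf - V_pi**2 / Lam) < 1e-9
```

The two estimates come out at roughly 28 MeV per pair and 0.8 MeV per triplet, i.e. the "30 MeV" and "1 MeV" scales of the displayed relations.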
Several of these terms have been checked by testing the tail of the $`N`$-$`N`$ potential against the set of $`p`$-$`p`$ data. That tail is calculated by using the same Lagrangian building blocks that are used to calculate $`\pi `$-$`N`$ scattering and the 2$`\pi `$-exchange 3NF. Important elements of the 2$`\pi `$-exchange $`N`$-$`N`$ force were verifiedmart , which validates the corresponding terms in the 3NF. In addition to the <sup>3</sup>H (<sup>3</sup>He) binding discrepancy, there is one other piece of experimental evidence for a 3NF that is much stronger. The Sagara discrepancy sagara is illustrated in Fig. (7), which shows $`p`$-$`d`$ elastic scattering at 65 MeV. Ignoring the forward direction (where the Coulomb interaction plays a significant role), the agreement is very good between calculations with an $`N`$-$`N`$ force only (dashed lines) and the experimental data except in the diffraction minimum. Adding the TM 3NF produces the solid curve, which is in fairly good agreement with experiment in the minimum. The small 3NF effect is depicted by the long-dashed line, which follows from keeping only those terms linear in the 3NF. This behavior is very reminiscent of Glauber scattering, with a dominant single-scattering contribution falling rapidly with angle until the smaller double-scattering term (which has a reduced slope) becomes significant. This is rather strong evidence for a 3NF, and it persists to higher energies. Our final topic is the extension of 3NFs beyond 2$`\pi `$-exchange. Chiral perturbation theory predicts that there are two mechanisms that have pion range in one pair of nucleons and short range in a second pair, and they should be comparable in size to the 2$`\pi `$-exchange mechanisms. The generic force in $`\chi `$PT is shown in Fig. (5c), and a particular example (the so-called d<sub>1</sub>-term) is illustrated in Fig. (5d). All mechanisms affect the <sup>3</sup>H binding energy, so this is a poor test of a specific mechanism. 
A tedious examination of low-energy observablesdirk finds that the d<sub>1</sub>-mechanism makes a potentially large contribution to the $`n`$-$`d`$ asymmetry, $`A_y`$. This observable at 3 MeV is depicted in Fig. (8). The calculation with only $`N`$-$`N`$ forces is the solid line, which is about 30% lower than the data. The long-dashed curve includes the effect of the TM force, which accounts for only about 1/4 of the discrepancy. Adding the d<sub>1</sub>-term in the 3NF with a dimensionless coefficient, $`c_1=1`$, produces the short-dashed curve. The size and sign of that coefficient are unknown, and the sign was chosen to move the prediction upward. Although it appears that a choice of $`c_1=3`$ (still quite acceptable in size) would resolve the problem, the algorithms used in our codes failed to converge for such a value, and that final conclusion could not be checked at the time this manuscript was written. Nevertheless, it appears that this mechanism could resolve the low-energy $`A_y`$ puzzle, which has existed for many years and in many forms, for both $`p`$-$`d`$ and $`n`$-$`d`$ scattering and in electromagnetic reactionsA\_y . It remains to be seen whether this mechanism is compatible with the $`A>3`$ bound states and other data. We summarize this section as follows.

* Most three-nucleon observables are insensitive to 3NFs.
* 3NFs are small in size but appear necessary to reproduce the <sup>3</sup>H binding energy, the Sagara discrepancy, and the $`A_y`$ puzzle.
* Chiral symmetry provides a unified approach to 3NFs; power counting identifies dominant mechanisms.
* The leading-order (dominant) 2$`\pi `$-exchange 3NFs have been calculated; they have large isobar contributions.
* New short-range plus pion-range mechanisms may resolve the low-energy $`A_y`$ puzzle.
* Although much remains to be investigated, a consensus appears to be developing for the bulk of 3NF terms, and a “standard model” of 3NFs may be possible in the near future.
* The basic building blocks of 3NFs have been recently validated by verifying the corresponding elements in the tail of the $`N`$-$`N`$ potential.

## Acknowledgements

We thank Alejandro Kievsky of the Univ. of Pisa and Jacek Golak of the Univ. of Cracow for providing figures. Walter Glöckle of Ruhr-Universität Bochum engaged in a helpful correspondence. The work of J.L.F. was performed under the auspices of the United States Department of Energy.
Source: no-problem/9911/hep-th9911053.html (ar5iv)
# 1 Quantum constraints to the sphere à la Dirac

## 1 Quantum constraints to the sphere à la Dirac

Quantum mechanics, where the sphere $`S^n`$ embedded in $`R^{n+1}`$ is considered as a configuration space, has been studied in , and the gauge fields were seen to emerge at the quantum level, which in turn specify the inequivalent quantizations that are possible on the sphere. The authors of these papers, generalizing the conventional canonical commutation relations, set up as a “fundamental algebra” the Lie algebra of the Euclidean group $`E(n+1)`$ in $`(n+1)`$-dimensional space, which is given by the semidirect product of $`SO(n+1)`$ and $`R^{n+1}`$. Then they obtain the representation of the group using Wigner’s technique, which allows them to construct the representation of $`E(n+1)`$ in terms of the irreducible representation of the ‘little group’ $`SO(n)`$ – the isometry group of $`SO(n+1)`$ – acting on $`S^n`$. Finally they show that a particle on $`S^n`$ couples to a gauge potential covariantly through the generator of the Wigner rotation and that it is related to the ‘induced’ gauge field of the (generally) nonabelian ‘monopole’ located at the center of $`S^n`$. The induced gauge fields were then shown to be nothing but the $`H`$-connections, $`i.e.`$ the gauge fields that emerge when we consider quantum mechanics on the coset space $`G/H`$, which were thoroughly studied in . These authors consider the system of a ‘free particle’ on $`G/H`$, which in the case of the sphere is $`SO(n+1)/SO(n)`$, and working within the framework of Mackey’s quantization scheme have shown that the Hamiltonian on G/H involves the induced $`H`$-connection. In this paper we reconsider this problem, $`i.e.`$ quantum mechanics on $`S^2`$ and $`S^4`$, from a somewhat different point of view. Namely we consider, à la Dirac, a square root of the “on sphere constraint”.
The results are not different, of course, from those obtained in the above mentioned approaches; however, we hope to gain a deeper insight into the problem.

## 2 Quantum Mechanics on $`S^2`$

Let us start with $`S^2`$. As our sphere $`S^2`$ is embedded in the space $`R^3`$, it is defined by the constraint $$\stackrel{}{x}^2=a^2$$ (1) among the coordinates $`\stackrel{}{x}=(x_1,x_2,x_3)`$ in $`R^3`$. A naive definition of the quantum mechanics on $`S^2`$ is simply to restrict these variables to satisfy the constraint $`\stackrel{}{x}^2=a^2`$ and to require the momentum operators to be compatible with the constraint in the Hamiltonian and the wave function. This means that in the coordinate representation $$i\frac{\partial }{\partial t}\left[\mathrm{\Psi }\left(x\right)\right]_{S^2}=\left[\widehat{H}\mathrm{\Psi }\left(x\right)\right]_{S^2}\text{ ,}$$ (2) where $`\left[\right]_{S^2}`$ is the restriction of the variables to the constraint surface $`S^2:\stackrel{}{x}^2=a^2`$. This means that for a free particle $`\left[\widehat{H}\mathrm{\Psi }\left(x\right)\right]_{S^2}`$ $`=\left[{\displaystyle -\frac{1}{2m}}{\displaystyle \frac{\partial ^2}{\partial x_i\partial x^i}}\mathrm{\Psi }\left(x\right)\right]_{S^2}`$ (3) $`={\displaystyle -\frac{1}{2m}}g^{ab}\nabla _a\nabla _b\left[\mathrm{\Psi }\left(x\right)\right]_{S^2}\text{ ,}`$ where $`\partial _a`$ and $`\nabla _a`$ denote the derivative and the covariant derivative with respect to the coordinate $`q^a`$ and the metric $`g_{ab}`$ on $`S^2`$ . As a result we have the Hamiltonian $$\mathcal{H}=-\frac{1}{2m}g^{ab}\nabla _a\nabla _b\text{ ,}$$ (4) which acts on the wave function $`\psi \left(q\right)=\left[\mathrm{\Psi }\left(x\right)\right]_{S^2}`$ .
In this paper we propose a new definition of the quantum mechanics on $`S^2`$ replacing (1) by the “quantum constraint” on the wave function $$\left(\stackrel{}{x}\stackrel{}{\sigma }-a\right)\mathrm{\Phi }\left(\stackrel{}{x}\right)=0\text{ .}$$ (5) This leads to $`\left(\stackrel{}{x}\stackrel{}{\sigma }+a\right)\left(\stackrel{}{x}\stackrel{}{\sigma }-a\right)\mathrm{\Phi }\left(\stackrel{}{x}\right)`$ $`=0`$ (6) $`\left(\left(\stackrel{}{x}\stackrel{}{\sigma }\right)^2-a^2\right)\mathrm{\Phi }\left(\stackrel{}{x}\right)`$ $`=0`$ $`\left(\stackrel{}{x}^2-a^2\right)\mathrm{\Phi }\left(\stackrel{}{x}\right)`$ $`=0\text{ ,}`$ thus the constraint $`\stackrel{}{x}^2-a^2=0`$ follows from the condition (5). Here $`\sigma `$’s are the Pauli matrices. Defining $$\mathrm{\Delta }\equiv \left(\stackrel{}{x}\stackrel{}{\sigma }-a\right)\text{ ,}$$ (7) eq.(5) is rewritten as $$\mathrm{\Delta }\mathrm{\Phi }=\left(\begin{array}{cc}x_3-a& x_1-ix_2\\ x_1+ix_2& -x_3-a\end{array}\right)\mathrm{\Phi }=0\text{ .}$$ (8) In order for this equation to have a non-trivial solution it has to be degenerate and the determinant should vanish $$det\mathrm{\Delta }=-\left(\stackrel{}{x}^2-a^2\right)=0\text{ .}$$ (9) We define $`v`$ by $`\mathrm{\Delta }v`$ $`=0\text{ ,}`$ (10) $`v^{\dagger }v`$ $`=1\text{ ,}`$ where $`v`$ is a $`2\times 1`$ matrix or eigenvector of $`\mathrm{\Delta }`$ whose eigenvalue is $`0`$. The explicit form of $`v`$ can be written as $$v=\frac{1}{\sqrt{2a\left(a+x_3\right)}}\left(\begin{array}{c}a+x_3\\ x_1+ix_2\end{array}\right)\text{ .}$$ (11) The general solution of eq.(8) can be written as $$\mathrm{\Phi }=v\varphi \text{ ,}$$ (12) with $`\varphi `$ an arbitrary complex function on $`S^2`$.
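These statements are mechanical to verify. A symbolic sketch (my own check, using sympy; the only input is the constraint $`\stackrel{}{x}^2=a^2`$, and the coordinates are taken positive purely for simplicity) confirming that $`det\mathrm{\Delta }`$ vanishes on the sphere, that $`\mathrm{\Delta }v=0`$, and that $`v`$ is normalized:

```python
import sympy as sp

# Coordinates (taken positive so conjugation acts trivially -- an assumption
# made only to simplify the symbolic check)
a, x1, x2, x3 = sp.symbols('a x1 x2 x3', positive=True)

# Pauli matrices and Delta = x.sigma - a
s = [sp.Matrix([[0, 1], [1, 0]]),
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),
     sp.Matrix([[1, 0], [0, -1]])]
Delta = x1*s[0] + x2*s[1] + x3*s[2] - a*sp.eye(2)

# The normalized zero-mode v given in the text
v = sp.Matrix([a + x3, x1 + sp.I*x2]) / sp.sqrt(2*a*(a + x3))

on_sphere = {x1**2: a**2 - x2**2 - x3**2}   # impose x^2 = a^2

detD = sp.simplify(sp.expand(Delta.det()).subs(on_sphere))
Dv = (Delta * v).applyfunc(lambda e: sp.simplify(sp.expand(e).subs(on_sphere)))
norm = sp.simplify(sp.expand((v.H * v)[0]).subs(on_sphere))

assert detD == 0                 # det(Delta) = 0 on the constraint surface
assert Dv == sp.zeros(2, 1)      # Delta v = 0
assert norm == 1                 # v is normalized
```

The same few lines, with the substitution removed, show that none of these relations hold off the sphere: the quantum constraint really does enforce $`\stackrel{}{x}^2=a^2`$.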
Thus the solution to the constraint is the space projected by $`P`$ $$P\mathrm{\Phi }=\mathrm{\Phi }\text{ ,}$$ (13) where the projection operator $`P`$ to the space spanned by $`v`$ is defined as $`P`$ $`\equiv vv^{\dagger }\text{ ,}`$ (14) $`P^2`$ $`=vv^{\dagger }vv^{\dagger }=P\text{ ,}`$ $`Pv`$ $`=v\text{ .}`$ Although $`\mathrm{\Phi }`$ lives in this projected space, its derivative $`\nabla \mathrm{\Phi }`$ does not necessarily live in this space. Then our interest is concerned with the projected derivative $`P\nabla \mathrm{\Phi }`$, i.e. $`P\nabla \mathrm{\Phi }`$ $`=vv^{\dagger }\nabla \left(v\varphi \right)`$ (15) $`=vv^{\dagger }\left(\left(\nabla v\right)\varphi +v\nabla \varphi \right)`$ $`=vD\varphi \text{ ,}`$ where $`D`$ $`\equiv \nabla +A\text{ ,}`$ (16) $`A`$ $`\equiv v^{\dagger }\nabla v\text{ .}`$ It is noticed here that the gauge connection is induced as a result of this projection. It is also obvious that for any polynomial $`F\left(\lambda \right)`$ of $`\lambda `$ $$F\left(P\nabla \right)\mathrm{\Phi }=vF\left(D\right)\varphi \text{ .}$$ (17) We define the quantum mechanics on $`S^2`$ by this projection $$i\frac{\partial }{\partial t}\left[\mathrm{\Phi }\left(x\right)\right]_{S^2}=\left[\widehat{H}(x,-iP\frac{\partial }{\partial x})\mathrm{\Phi }\left(x\right)\right]_{S^2}\text{ .}$$ (18) As a result we have the Hamiltonian $$\mathcal{H}=-\frac{1}{2m}g^{ab}\left(\nabla _a+A_a\right)\left(\nabla _b+A_b\right)\text{ ,}$$ (19) which acts on the wave function $`\varphi \left(q\right)=\left[\mathrm{\Phi }\left(x\right)\right]_{S^2}`$ . Here $`A`$ can be written as $$A\equiv v^{\dagger }dv=\frac{i}{2a\left(a+x_3\right)}\left(x_1dx_2-x_2dx_1\right)\text{ ,}$$ (20) and this is the induced magnetic monopole gauge potential obtained in .

## 3 Quantum Mechanics on $`S^4`$

Let us turn our discussion to quantum mechanics on $`S^4`$, which is a rather straightforward generalization of the arguments given above.
The coordinates are $$x_M=(x_0,x_1,x_2,x_3,x_5)\text{ ,}$$ (21) which are restricted by $$x_Mx^M-a^2=0\text{ .}$$ (22) We choose $`\gamma ^M`$ $`=(\gamma ^0,\gamma ^1,\gamma ^2,\gamma ^3,\gamma ^5)\text{ ,}`$ (23) $`\gamma ^0`$ $`=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\stackrel{}{\gamma }=\left(\begin{array}{cc}0& i\stackrel{}{\sigma }\\ -i\stackrel{}{\sigma }& 0\end{array}\right),\gamma ^5=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)\text{ ,}`$ (30) $`\stackrel{}{\gamma }`$ $`=(\gamma ^1,\gamma ^2,\gamma ^3)\text{ ,}`$ which satisfy $$\{\gamma ^M,\gamma ^N\}=2\delta ^{MN}\text{ ,}$$ (31) then our “quantum constraint” is $$\left(x_M\gamma ^M-a\right)\mathrm{\Phi }\left(x\right)=0\text{ .}$$ (32) This implies $`\left(x_N\gamma ^N+a\right)\left(x_M\gamma ^M-a\right)\mathrm{\Phi }\left(x\right)`$ $`=0\text{ ,}`$ (33) $`\left(x_Mx^M-a^2\right)\mathrm{\Phi }\left(x\right)`$ $`=0\text{ ,}`$ thus the constraint $$x_Mx^M-a^2=0$$ (34) follows from the condition (32). Explicitly, our “quantum constraint” is $$\left(\begin{array}{cc}x_5-a& x_0+i\stackrel{}{x}\stackrel{}{\sigma }\\ x_0-i\stackrel{}{x}\stackrel{}{\sigma }& -x_5-a\end{array}\right)\mathrm{\Phi }\equiv \mathrm{\Delta }\mathrm{\Phi }=0\text{ .}$$ (35) It should be noticed that the rank of the $`4\times 4`$ matrix $`\mathrm{\Delta }`$ is $`2`$ and the degree of freedom of $`\mathrm{\Phi }`$ is $`2`$.
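The rank claim can be checked numerically at a random point of $`S^4`$. A sketch (my own check, with $`a=1`$; the $`4\times 2`$ zero-mode block used here is the normalized solution $`v`$ given next in the text):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

a = 1.0
rng = np.random.default_rng(0)
x = rng.normal(size=5)
x[4] = abs(x[4])                  # keep x5 > 0 so 2a(a + x5) is safely nonzero
x *= a / np.linalg.norm(x)        # random point (x0, x1, x2, x3, x5) on S^4
x0, x5 = x[0], x[4]
xs = sum(xi * s for xi, s in zip(x[1:4], sig))   # x.sigma

# Delta = x_M gamma^M - a in 2x2 block form
Delta = np.block([[(x5 - a) * I2, x0 * I2 + 1j * xs],
                  [x0 * I2 - 1j * xs, (-x5 - a) * I2]])

# The 4x4 matrix Delta has rank 2, leaving 2 components of Phi
assert np.linalg.matrix_rank(Delta) == 2

# Normalized 4x2 zero-mode block (the v of the text) and the projector
v = np.vstack([(a + x5) * I2, x0 * I2 - 1j * xs]) / np.sqrt(2 * a * (a + x5))
P = v @ v.conj().T

assert np.allclose(Delta @ v, 0)          # Delta v = 0
assert np.allclose(v.conj().T @ v, I2)    # v† v = 1 (2x2)
assert np.allclose(P @ P, P)              # P is a projector
```

Tracing through the same block arithmetic for $`P=vv^{\dagger }`$ reproduces the explicit projector given later in the section.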
The general solution of the constraint equation is $$\mathrm{\Phi }=v\varphi \text{ ,}$$ (36) where $$v=\frac{1}{\sqrt{2a\left(a+x_5\right)}}\left(\begin{array}{c}a+x_5\\ x_0-i\stackrel{}{x}\stackrel{}{\sigma }\end{array}\right)\text{ ,}$$ (37) and $$v^{\dagger }v=1_{\left[2\times 2\right]}\text{ .}$$ (38) Using the variables $`z`$ $`\equiv x_0-i\stackrel{}{x}\stackrel{}{\sigma }\text{ ,}`$ (39) $`z^{\dagger }`$ $`\equiv x_0+i\stackrel{}{x}\stackrel{}{\sigma }\text{ ,}`$ the induced gauge field is expressed as $$A\equiv v^{\dagger }dv=\frac{1}{2}\frac{1}{2a\left(a+x_5\right)}\left(z^{\dagger }dz-dz^{\dagger }z\right)\text{ ,}$$ (40) which is the instanton gauge connection. In more familiar terms as $`z`$ $`\equiv \alpha _\mu x_\mu \text{ ,}`$ (41) $`\alpha _\mu `$ $`\equiv (1,-i\stackrel{}{\sigma })\text{ ,}`$ $`z^{\dagger }`$ $`\equiv \overline{\alpha }_\mu x_\mu \text{ ,}`$ (42) $`\overline{\alpha }_\mu `$ $`=(1,i\stackrel{}{\sigma })\text{ ,}`$ we have $`\left(z^{\dagger }dz-dz^{\dagger }z\right)`$ $`=\overline{\alpha }_\mu x_\mu \alpha _\nu dx_\nu -\overline{\alpha }_\nu dx_\nu \alpha _\mu x_\mu `$ (43) $`=\left(\overline{\alpha }_\mu \alpha _\nu -\overline{\alpha }_\nu \alpha _\mu \right)x_\mu dx_\nu `$ $`\equiv 2i\sigma _{\mu \nu }x_\mu dx_\nu \text{ ,}`$ where $$\frac{1}{2}\epsilon _{\mu \nu \alpha \beta }\sigma _{\alpha \beta }=\sigma _{\mu \nu }.$$ (44) We finally arrive at $$A=i\frac{1}{2a\left(a+x_5\right)}\sigma _{\mu \nu }x_\mu dx_\nu $$ (45) which is the instanton gauge connection discussed in . The projection operator in this case is $`P`$ $`=vv^{\dagger }`$ (46) $`={\displaystyle \frac{1}{2a}}\left(\begin{array}{cc}a+x_5& x_0+i\stackrel{}{x}\stackrel{}{\sigma }\\ x_0-i\stackrel{}{x}\stackrel{}{\sigma }& a-x_5\end{array}\right)\text{ ,}`$ (49) and we have of course $$P\nabla \mathrm{\Phi }=vD\varphi \text{ .}$$ (50)

## 4 Summary and discussion

We have seen in this paper that quantum mechanics on $`S^2`$ and on $`S^4`$ can be reformulated by using the “quantum constraint”, which is, in a sense, a square root of the usual constraint $`\stackrel{}{x}^2=a^2`$ .
Emergence of the induced gauge fields (monopoles in $`R^3`$, or two-dimensional $`CP^1`$ instantons on $`S^2`$; Yang monopoles in $`R^5`$, or instantons on $`S^4`$) can be attributed to the structure of the projection operator. We can read off from the wave function $$\mathrm{\Phi }=v\varphi \text{ ,}$$ (51) which can be rewritten as $$Z=v\phi :\qquad (Z^{\dagger }Z=\phi ^{\dagger }\phi =1)\text{ ,}$$ (52) where $$Z\equiv \frac{\mathrm{\Phi }}{\sqrt{\mathrm{\Phi }^{\dagger }\mathrm{\Phi }}},\text{ }\phi \equiv \frac{\varphi }{\sqrt{\varphi ^{\dagger }\varphi }}\text{ ,}$$ the structure of the $`S^3\to S^2`$ $`\left(S^7\to S^4\right)`$ Hopf fibering, where $`Z`$ stands for the fiber bundle $`S^3`$ $`\left(S^7\right)`$, $`v`$ denotes the section over $`S^2`$ $`\left(S^4\right)`$, and finally $`\phi `$ is the fiber $`U(1)`$ $`\left(SU(2)\right)`$. Quantum mechanics on the odd-dimensional spheres can be considered as a reduction of that on the higher, even-dimensional spheres; for example, the $`S^3`$ case can be obtained from the $`S^4`$ case by simply setting $`x_5=0`$, and we are left with the induced meron gauge field. Finally, a few comments on the relation of our approach to that of the coset space $`G/H`$ are in order. There is not much difference as far as quantum mechanics on $`S^2`$ is concerned; however, the views seem rather different on $`S^4`$. The structure of the fiber bundle looks different in the two approaches: in our case it is not a group, it has a simpler structure, and it can be considered as a reduction of the former. It is also interesting to note that our projection operator $`P`$ (43) in the case of $`S^4`$ is related to that of the ADHM construction of instanton solutions of Yang-Mills gauge theory in its simplest case. The $`S^2`$ projection operator, on the other hand, and particularly $`v`$ (11), plays a crucial role in finding the instanton solution of the $`CP^1`$ $`\sigma `$-model.
no-problem/9911/astro-ph9911364.html
# PLASMA TURBULENCE AND STOCHASTIC ACCELERATION IN SOLAR FLARES ## 1 INTRODUCTION Because of the proximity of the Sun, solar flares have been detected in a multitude of ways, and there is a considerable body of relatively detailed observations, a situation uncommon for otherwise similar astrophysical sources. Consequently, solar flares provide an excellent laboratory for testing ideas and models for all aspects of high energy astrophysical phenomena, such as energy generation and release, acceleration of particles, and radiation processes. In this paper I describe a somewhat new paradigm for particle acceleration in solar flares, in which plasma waves or turbulence play a much more dominant role than has been attributed to them in the past. In §2 I present a brief review of some of the observational features of solar flares most relevant to the acceleration process, and a short outline of current ideas on the production (and channels of release) of flare radiations. Then in §3 I describe some new observations that cannot easily be accounted for by the standard model, but that arise naturally in a model where particles are accelerated stochastically by plasma turbulence. Here I also give some theoretical justification for such a model. In §4 I develop a simplified model of acceleration based on this idea, and in §5 I compare the predictions of this model with the new observations. In §6 I give a brief summary. ## 2 A BRIEF REVIEW Flare radiation has been observed throughout the whole range of the electromagnetic spectrum, from long wavelength radio waves to gamma-rays up to hundreds of MeV. At radio wavelengths there are the various types (II, III, IV, V) of long wavelength emissions, but most relevant to the acceleration process is the continuum emission in the microwave-submillimeter range. Notable in the optical-UV range are the H<sub>α</sub> and white light emissions. 
In the soft X-ray range ($`<`$ 10 keV), there is thermal continuum emission and line emission from highly ionized heavy elements. Above 10 keV, in the hard X-ray (10 keV to 1 MeV) and gamma-ray ($`>`$ 1 MeV) range, there is a non-thermal continuum produced by bremsstrahlung of the accelerated electrons, and in the 1 to 7 MeV range there is gamma-ray line emission produced by accelerated protons and other ions. In addition to the electromagnetic radiation, there are other observations of the flare phenomenon, among which the most relevant to the acceleration process is the direct detection near the Earth of accelerated electrons, protons, neutrons and other nuclei. These radiations can be divided into two categories. The microwave, hard X-ray and all gamma-ray radiations are non-thermal radiation produced directly by the accelerated particles and constitute what is called the impulsive phase of the flare. These radiations are highly correlated and often show almost identical temporal evolution. The rest of the radiations are associated with what is called the gradual phase of the flare and are manifestations of flare plasma energized by the primary energy release process. These secondary radiations evolve at a rate that is approximately proportional to the integral of the impulsive time profiles, reach their maximum at the end of the impulsive phase, and then decay gradually. Initially, when only thermal radiation was observed, it was believed that most of the released energy goes directly into heating. But since the discovery of the non-thermal microwave and hard X-ray radiation it is believed that a large fraction, perhaps all, of the flare energy goes into acceleration of particles, mostly electrons with a power law distribution, $`f(E)=\kappa E^{-\delta }`$. These electrons produce X-rays through collisions with the ambient protons and ions as they travel down the flaring loop. 
Some of the radiation comes from the loop, but the bulk of it is emitted in the high density regions below the transition layer. The total bremsstrahlung yield of X-rays with energy $`k>E_0`$ (in units of $`m_ec^2`$) is $`Y=\frac{2\pi \alpha }{\mathrm{ln}\mathrm{\Lambda }}\frac{E_0(\delta -1)}{\delta ^2(\delta -2)}`$, where $`\alpha `$ is the fine structure constant and $`\mathrm{ln}\mathrm{\Lambda }\approx 20`$ is the Coulomb logarithm. For typical values of $`E_0=20`$ keV and $`\delta =4`$, $`Y\approx 10^{-5}`$. This means that almost all of the accelerated electron energy goes into heating or evaporation of plasma in the lower atmosphere, which gives rise to the softer X-rays and the other gradual thermal radiations. This scenario agrees with the above-mentioned integral relation, the usual power law X-ray spectra, and the rudimentary observations of the spatial structure (double foot point sources) obtained at hard X-rays by the SMM and HINOTORI spacecraft. The electrons, as they spiral down the flaring loop (defined by the magnetic lines of force), also emit synchrotron radiation at microwave frequencies. The interpretation of this emission is more complicated because of its dependence on the magnetic field strength and geometry, and because of the presence of various kinds of absorption. Accelerated protons and ions give rise to nuclear excitation and other gamma-ray lines in the 1-7 MeV range and to a continuum emission around 50 MeV from the decay of pions they produce. For a review of gamma-rays from protons and ions the reader is referred to the excellent review by Ramaty and Murphy (1989). We shall not discuss the microwave emission, nor describe the models for gamma-ray line emission, whose presence complicates the interpretation of the electron bremsstrahlung emission. To avoid this complication we will deal with the so-called electron dominated flares, in which only a small number of ions are accelerated and their emission can be ignored (cf. e.g. Marschhäuser et al. 1991). ## 3 WHY PLASMA TURBULENCE? 
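The quoted yield estimate is easy to reproduce; a minimal sketch, using the typical values given in the text:

```python
import math

# Bremsstrahlung yield Y for a power-law electron spectrum;
# E0 is in units of m_e c^2, as in the text.
alpha, lnLambda = 1/137.036, 20.0
E0 = 20.0/511.0          # 20 keV
delta = 4.0
Y = (2*math.pi*alpha/lnLambda) * E0 * (delta - 1)/(delta**2 * (delta - 2))
print(f"Y = {Y:.2e}")    # Y = 8.41e-06, i.e. of order 1e-5 as quoted
```

This confirms that only about one photon in $`10^5`$ of the electron energy budget emerges as hard X-rays; the rest heats the lower atmosphere.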
There are, however, several more recent observations that do not agree with this scenario, at least in its simplest form. In this section we describe two such observations and show how the presence of plasma turbulence can account for them. Then we present some theoretical arguments in favor of a model where the bulk of the energy released by the reconnection process does not go directly into heating of the plasma or acceleration of the particles, but into generation of plasma waves or turbulence, which in turn can accelerate particles and heat the plasma (directly, or indirectly via the accelerated particles). ### 3.1 Observational Motivation The two observations that provide evidence for the above scenario are the wide dynamic range spectral observations and the high spatial resolution hard X-ray observations. There may be other evidence, such as the impulsive soft X-ray emission from foot points (Hudson et al. 1994, Petrosian 1994, 1996). #### 3.1.1 Spectral Evidence Observed spectra over a limited range (30-500 keV) can be fitted by bremsstrahlung spectra emitted by electrons with a power law energy spectrum. But higher resolution spectra, especially those with a wider dynamic range (10 keV to 100 MeV), observed by the gamma-ray spectrometer (GRS) onboard SMM and by the combined BATSE and EGRET instruments on the Compton Gamma-Ray Observatory (CGRO), show considerable deviation from a simple power law; as shown in Figure 1, there is a spectral hardening (flatter spectra) above a few hundred keV and a steep cutoff above 40 MeV. These kinds of deviations can be produced by the action of Coulomb collisions and synchrotron losses during the transport of the electrons from the top of a loop to the foot points. However, as shown by Petrosian, McTiernan & Marschhäuser (1994), the observed deviations would require plasma densities $`n`$ and magnetic fields $`B`$ much higher than those believed to be present (see Figure 1). 
We conclude that these signatures must be present in the spectra of the accelerated electrons. As we shall show in §5, the stochastic acceleration model described in the next section can reproduce these observations with more reasonable values of the parameters. #### 3.1.2 Spatial Structure The second observation, which provides more direct evidence for the presence of turbulence at the top of loops, is the observation by YOHKOH of loop top hard X-ray (10-50 keV) emission (Masuda et al. 1994, Masuda 1994). Figure 2 shows an image of one such flare along with the time variations of the emission intensities from the loop top, the foot points, and their ratio. Several other flares show similar images and emission ratios. There also exists limited spectral information (Masuda 1994 and Alexander & Metcalf 1997). In the standard thick target model, the spatial variation of the bremsstrahlung emission depends primarily on the relative values of the length $`L`$ of the loop and the Coulomb collision mean free path, $`\lambda _{\mathrm{Coul}}=\beta ^2E/(4\pi r_0^2n\mathrm{ln}\mathrm{\Lambda })`$, where $`\beta =v/c`$ is the electron velocity in units of the speed of light and $`r_0\approx 2.8\times 10^{-13}`$ cm is the classical electron radius. In general, $`L\ll \lambda _{\mathrm{Coul}}`$ in the corona, so that most particles travel through the coronal portion of the loop freely, but they undergo rapid collisions and bremsstrahlung emission once they reach the transition region and the chromosphere, where the density increases by several orders of magnitude within a much shorter distance. Loop top emission is possible only if $`\lambda _{\mathrm{Coul}}\ll L`$, in which case the accelerated electrons lose most of their energy at the top of the loop with no emission from the foot points. If $`\lambda _{\mathrm{Coul}}\sim L`$, one would expect a uniform emission from throughout the loop. For further details see J. Leach’s Ph.D. thesis (1984), Fletcher (1995) and Holman (1996). 
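The coronal ordering $`L\ll \lambda _{\mathrm{Coul}}`$ is easy to check with representative numbers; the density and loop length below are assumed illustrative values, not fitted quantities:

```python
import math

# Coulomb mean free path of a deka-keV electron in the corona,
# lambda_Coul = beta^2 E / (4 pi r0^2 n lnLambda), with E in m_e c^2 units.
r0, lnLambda = 2.82e-13, 20.0          # classical electron radius (cm)
n = 1e10                               # assumed coronal density (cm^-3)
L = 1e9                                # assumed loop length (cm)
Ek = 25.0/511.0                        # 25 keV electron
gamma = 1.0 + Ek
beta2 = 1.0 - 1.0/gamma**2
lam = beta2*Ek/(4*math.pi*r0**2*n*lnLambda)
print(f"lambda_Coul = {lam:.1e} cm, L = {L:.0e} cm")
# lambda_Coul ~ 2e10 cm, more than an order of magnitude above L
```

So deka-keV electrons cross the coronal loop essentially collisionlessly, and the thick-target emission is confined to the dense foot points unless some other scattering agent intervenes.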
Significant loop top emission is also possible if the loop is highly inhomogeneous. Since the emission is proportional to the ambient density, a higher density at the loop top can mimic the observed variation (Wheatland and Melrose 1995). However, it is difficult to see how large density gradients can be maintained in a $`T\sim 10^6`$ K plasma. Enhancement of loop top emission can be produced more naturally in loops with a converging field geometry. The field convergence increases the pitch angle of the particles as they travel down the loop and, if strong enough, can reflect them back up the loop before they reach the transition region. This increases the density of the accelerated electrons near the top of the loop, giving rise to a corresponding enhancement of the bremsstrahlung emission. Such models were investigated by Leach (1984), who found significant increases of the emission from loop tops for large convergence ratios (see also Leach & Petrosian 1983 and Figures 1 and 2 in Petrosian & Donaghy 1999; PD for short), and by Fletcher (1996), who found an increase in the average height of the X-ray emission. Finally, trapping of electrons near the top of a loop can come about through enhanced pitch angle scattering with a mean free path $`\lambda _{\mathrm{scat}}\ll L\ll \lambda _{\mathrm{Coul}}`$. In this case electrons will undergo many pitch angle scatterings and achieve an isotropic pitch angle distribution before they escape within a time $`T_{\mathrm{esc}}\approx \frac{L}{v}\times \frac{L}{\lambda _{\mathrm{scat}}}\gg \frac{L}{v}`$. It can be shown that the ratio of the loop top emission to the emission from other parts of the coronal portion of the loop (i.e. the legs of the loop, which do not contain such a scattering agent but are of comparable length) will be approximately $`J_{LT}/J_{Loop}\approx L/\lambda _{\mathrm{scat}}\gg 1`$, in which case we may detect emission only from the top of the loop and the high density foot points, but not from the legs of the loop. 
A possible scattering agent that could satisfy this condition is plasma turbulence. If this is the case, however, one must also include the possibility of acceleration of the particles by the same plasma turbulence. ### 3.2 Theoretical Arguments Various mechanisms have been proposed and investigated for the acceleration of particles in all kinds of astrophysical situations (see, e.g., the review articles in the Proc. of IAU Colloq. 142, 1994, ApJ Suppl. 90). The three most common mechanisms are the following. Electric fields parallel to the magnetic field lines can accelerate charged particles. For fields greater than the Dreicer field, $`𝐄_D=kT/(e\lambda _{\mathrm{Coul}})`$, particles of charge $`e`$ gain energy on a timescale shorter than the mean collision time $`\tau _{\mathrm{Coul}}=\lambda _{\mathrm{Coul}}/v`$. This can lead to an unstable particle distribution, which in turn can give rise to turbulence and cause mostly heating (Boris et al. 1970, Holman 1985). Sub-Dreicer fields, in order to accelerate charged particles to relativistic energies, must extend over a region $`L\gg \lambda _{\mathrm{Coul}}`$, which, as pointed out above, cannot be the case for a loop-size acceleration region. An anomalously large resistivity is one way to get around this difficulty (Tsuneta 1985). It seems that some other mechanism is required for acceleration to the MeV and GeV ranges (needed for electrons and protons, respectively). Shocks are the most commonly considered acceleration mechanism because they can quickly accelerate particles to very high energies. However, this requires the existence of some scattering agent to force repeated passages of the particles across the shock. The most likely agent for this scattering is plasma turbulence or plasma waves. The rate of energy gain is then governed by the scattering rate, which in this case is proportional to the pitch angle diffusion coefficient $`D_{\mu \mu }`$. 
However, the turbulence needed for the scattering can also accelerate particles stochastically (a second order Fermi process) at a rate $`D_{EE}/E^2`$, where $`D_{EE}`$ is the energy diffusion coefficient. At high energies $`D_{\mu \mu }\gg D_{EE}/E^2`$, favoring shock acceleration. Stochastic acceleration is favored for the acceleration of the low energy background particles, because several recent analyses, by Hamilton & Petrosian (1992) and Dung & Petrosian (1994), and by Miller and collaborators (e.g. Miller & Reames 1996, Schlickeiser & Miller 1998), have shown that under the right conditions plasma waves can accelerate low energy particles within the required times. More importantly, as our more recent work has shown (Pryadko & Petrosian 1997), at low energies the above inequality is reversed, $`D_{EE}/E^2\gg D_{\mu \mu }`$, so that stochastic acceleration (rate $`D_{EE}/E^2`$) becomes more efficient than shock acceleration, whose rate is governed by the smaller coefficient $`D_{\mu \mu }`$. Thus, low energy particles are accelerated more efficiently stochastically than by shocks. We can, therefore, imagine two scenarios. In the first, low energy particles are accelerated initially by electric fields (or stochastically) and then to higher energies by shocks. In the second, simpler scenario, stochastic acceleration by turbulence does the whole job, accelerating the background particles to high energies (cf. Petrosian 1994). We now describe the second scenario for flares. ## 4 STOCHASTIC ACCELERATION Stochastic acceleration by turbulence was first proposed by Ramaty and collaborators (see e.g. Ramaty 1979 and Ramaty & Murphy 1987) for solar flare protons and ions. As argued above, this process appears to be the most promising one for the acceleration of flare electrons as well (see also Petrosian 1994 and 1996). Similar arguments have been put forth by Schlickeiser and collaborators (Schlickeiser 1989; Bech et al. 
1990), and Miller and collaborators (Miller 1991; Miller, LaRosa & Moore 1996; Miller, Guessoum & Ramaty 1990). For the purpose of comparison with the observations mentioned above we need the energy spectrum and spatial distribution of the accelerated electrons. Their exact evaluation requires the solution of the time dependent, coupled kinetic equations for the waves and the particles. This is beyond the scope of this work and is not warranted for comparison with the existing data. We therefore make the following simplifying and somewhat justifiable assumptions. We assume that the flare energizing process, presumably magnetic reconnection, produces plasma turbulence throughout the impulsive phase at a rate faster than the damping rate of the turbulence, so that the observed variation of the impulsive phase emissions is due to modulation of the energizing process. This assumption decouples the kinetic equation of the waves from that of the electrons. We also assume that the turbulence is confined to a region of size $`L`$ near the top of the flaring loop, where particles undergo scattering and acceleration but eventually escape within a time $`T_{\mathrm{esc}}\approx (1+L/\lambda _{\mathrm{scat}})\tau _{tr}`$, where $`\lambda _{\mathrm{scat}}\propto 1/D_{\mu \mu }`$ and $`\tau _{tr}\equiv L/v`$ is the traverse time across the acceleration region. Since both of these times are shorter than the observational integration time, we can use the steady state equation. And because loop top emission requires $`\lambda _{\mathrm{scat}}\ll L`$, we can assume isotropy of the pitch angle distribution and evaluate the electron spectrum integrated throughout the finite acceleration site. 
The kinetic equation then simplifies to $$\frac{\partial ^2}{\partial E^2}[D(E)f]-\frac{\partial }{\partial E}[(A(E)-|\dot{E}_L|)f]-\frac{f}{T_{\mathrm{esc}}(E)}+Q(E)=0.$$ Here $`D`$ and $`A`$ are the diffusion and systematic acceleration coefficients due to the stochastic process; they are obtained from the standard Fokker-Planck equation and are related to the coefficient $`D_{EE}`$, and $$\dot{E}_L=\dot{E}_{\mathrm{Coul}}+\dot{E}_{\mathrm{synch}}=4\pi r_0^2cn\mathrm{ln}\mathrm{\Lambda }/\beta +4r_0^2B^2\beta ^2\gamma ^2/9m_ec$$ (1) describes the Coulomb and synchrotron energy loss rates (in units of $`m_ec^2`$ per unit time). For the source function, $`Q(E)`$, we use a Maxwellian distribution with temperature $`kT=1.5`$ keV. Following our earlier formalism (Hamilton & Petrosian 1992; Park, Petrosian & Schwartz 1997, PPS for short) we use the following parametric forms for the coefficients: $$D(E)=𝒟\beta (\gamma \beta )^{q^{\prime }},\qquad A(E)=𝒟(q^{\prime }+2)(\gamma \beta )^{q^{\prime }-1},\qquad T_{\mathrm{esc}}(E)=𝒯_{\mathrm{esc}}(\gamma \beta )^s/\beta +L/(\beta c\sqrt{2}).$$ (2) Here $`q^{\prime }`$ is related to $`q`$, the spectral index of the (assumed) power law distribution of the plasma wave vectors. The above forms are more general than what one obtains from a single type of plasma wave, say the whistler waves (where $`𝒟`$ and $`𝒯_{\mathrm{esc}}`$ are related and $`s=q^{\prime }=q-2`$). They can accommodate other scattering processes, such as the hard sphere model ($`q^{\prime }=2,s=0`$; see Ramaty 1979) or more general turbulence. For a more general spectrum of turbulence that includes all possible waves, more accurate numerical values are obtained by Dung & Petrosian (1994) and Pryadko & Petrosian (1997, 1999). For a more complete analysis one should use these numerical results; the existing data, however, do not warrant consideration of such details. We therefore choose the parameters in the above expressions so that the relevant coefficients qualitatively behave like the ones obtained from these numerical results. 
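As a concrete illustration, the steady-state equation can be integrated with a simple finite-difference scheme. The sketch below is not the fitting code used for Table 1; the density, field, acceleration rate and escape time are assumed values chosen only to make the example run, with hard-sphere-like coefficients ($`q^{\prime }=2`$, $`s=0`$) and the $`L/(\beta c)`$ term of the escape time neglected:

```python
import numpy as np

# E in units of m_e c^2, times in seconds (cgs elsewhere)
r0, c, me, lnL = 2.82e-13, 3e10, 9.11e-28, 20.0
n_AS, B = 1e10, 500.0        # assumed density (cm^-3) and field (G)
Dcal, Tcal = 0.1, 1.0        # assumed acceleration rate and escape-time constant

N = 800
E = np.linspace(0.02, 5.0, N)           # ~10 keV to ~2.5 MeV
h = E[1] - E[0]
gam = 1.0 + E
beta = np.sqrt(1.0 - 1.0/gam**2)
gb = gam*beta

D = Dcal*beta*gb**2                     # D(E), q' = 2
A = 4.0*Dcal*gb                         # A(E) = D(q'+2)(gamma beta)^{q'-1}
Edot = 4*np.pi*r0**2*c*n_AS*lnL/beta + 4*r0**2*B**2*beta**2*gam**2/(9*me*c)
Tesc = Tcal/beta                        # s = 0 case
Q = np.exp(-E/(1.5/511.0))              # Maxwellian-like source, kT = 1.5 keV
W = A - Edot

# d^2/dE^2 [D f] - d/dE [W f] - f/Tesc = -Q, with f -> 0 at both boundaries
M = np.zeros((N, N))
for i in range(1, N - 1):
    M[i, i-1] = D[i-1]/h**2 + W[i-1]/(2*h)
    M[i, i]   = -2*D[i]/h**2 - 1.0/Tesc[i]
    M[i, i+1] = D[i+1]/h**2 - W[i+1]/(2*h)
M[0, 0] = M[-1, -1] = 1.0
rhs = -Q
rhs[0] = rhs[-1] = 0.0
f = np.linalg.solve(M, rhs)
print(f.max() > 0, np.all(np.isfinite(f)))  # True True
```

The resulting $`f(E)`$ plays the role of $`f_{AS}`$ below; a quantitative treatment would of course use the numerical coefficients of Dung & Petrosian and Pryadko & Petrosian rather than these parametric forms.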
As shown in these papers, the rate of acceleration $`𝒟`$ and the escape time $`𝒯_{\mathrm{esc}}`$ are proportional to the electron gyrofrequency, $`\mathrm{\Omega }_e`$, and to the level of turbulence $`f_{turb}=8\pi ℰ_{tot}/B^2`$, where $`ℰ_{tot}`$ is the total energy density of the turbulence. We solve equation (4) numerically for the spectrum of electrons at the acceleration site, $`f_{AS}(E)`$, and then evaluate the (thin target) bremsstrahlung spectrum, i.e. the number of photons as a function of the photon energy $`k`$ (in units of $`m_ec^2`$), emitted by these electrons: $$J_{AS}(k)=V\int _k^{\infty }dE\,f_{AS}(E)\,\beta c\,n_{AS}\,\frac{d\sigma }{dk}(E,k).$$ (3) Here $`V`$ and $`n_{AS}`$ are the volume and the background density of the acceleration site, and $`d\sigma /dk`$, given by Koch & Motz (1959, eq. \[3BN\]), is the relativistically correct, angle integrated bremsstrahlung cross section. The electrons escaping the acceleration site travel to the foot points, maintaining a nearly isotropic pitch angle distribution in the downward direction, with a spectral flux $`F_{\mathrm{esc}}=Lf_{AS}/T_{\mathrm{esc}}`$, and emit bremsstrahlung at the foot points. As is well known (see, e.g. Petrosian 1973), the effective spectrum of the electrons (the so-called cooling spectrum) in the thick target foot point sources is $$f_{\mathrm{thick}}(E)=\frac{1}{\dot{E}_L}\int _E^{\infty }\frac{f_{AS}(E^{\prime })}{T_{\mathrm{esc}}(E^{\prime })}\,dE^{\prime },$$ (4) so that the photon spectrum at the foot points, $`J_{FP}(k)`$, is obtained from equation (3) by substituting this spectrum for $`f_{AS}`$. For a steep power law electron spectrum ($`f(E)=\kappa E^{-\delta }`$, $`\delta \gg 1`$) and a relatively slow variation of $`T_{\mathrm{esc}}`$ with energy $`E`$ (i.e. small $`s`$), one can obtain simple analytic expressions for the photon spectra. 
For example, the ratio of the foot point to the loop top emission can be approximated by the simple expression $$\frac{J_{FP}}{J_{AS}}\approx \frac{\tau _{\mathrm{Coul}}(k)}{T_{\mathrm{esc}}(k)}=(4\pi r_0^2cn_{AS}\mathrm{ln}\mathrm{\Lambda }\,𝒯_{\mathrm{esc}})^{-1}\{\begin{array}{cc}2^{(2-s)/4}k^{2-s/2}\hfill & k\ll 1,\hfill \\ k^{1-s}\hfill & k\gg 1,\hfill \end{array}$$ (5) where the Coulomb collision time is given by $$\tau _{\mathrm{Coul}}(k)\equiv \left(\frac{E}{\dot{E}_{\mathrm{Coul}}}\right)_{E=k}=(4\pi r_0^2\mathrm{ln}\mathrm{\Lambda }n_{AS}c)^{-1}\{\begin{array}{cc}\sqrt{2}k^{3/2}\hfill & k\ll 1,\hfill \\ k\hfill & k\gg 1.\hfill \end{array}$$ (6) Steep spectra are obtained for slow acceleration rates (low levels of turbulence) or rapid escape; in the opposite case, for larger values of $`𝒟𝒯`$, the electron spectra become flat up to some high energy, $`E_{\mathrm{synch}}`$, where synchrotron losses set in and the spectrum falls off rapidly. In this case the above ratio is modified by setting $`k=E_{\mathrm{synch}}`$. For more details see PD. ## 5 COMPARISON WITH OBSERVATIONS ### 5.1 High Resolution Total Spectra The high resolution and wide dynamic range spectra are available only for the total emission, not for the loop top and foot point sources separately. We therefore combine the two spectra and fit $`J=J_{FP}+J_{LT}`$ to the observed spectra of electron dominated flares from GRS on SMM and BATSE-EGRET on CGRO. Figure 3 and Table 1 show these fits and the values of the model parameters obtained for three different forms of the acceleration coefficients. As is evident, the simple hard sphere model fails to reproduce the spectra and can easily be ruled out. But for a model based on whistler waves, and for a more general model, we obtain very good fits and reasonable values of the parameters. The values for the density are somewhat larger than those usually assumed, but the required magnetic field and level of turbulence, $`f_{turb}`$, are very reasonable. 
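The two limits of the ratio in eq. (5) are simple enough to tabulate directly; in the sketch below the acceleration-site density, escape-time constant and $`s`$ are assumed values for illustration, not the fitted parameters of Table 1:

```python
import math

r0, c, lnL = 2.82e-13, 3e10, 20.0
n_AS, Tesc, s = 1e11, 1.0, 0.0     # assumed: density (cm^-3), T_esc (s), index s

pref = 1.0/(4*math.pi*r0**2*c*n_AS*lnL*Tesc)

def ratio(k):
    """Foot-point to loop-top ratio of eq. (5); k in units of m_e c^2."""
    if k < 1.0:
        return pref * 2**((2 - s)/4.0) * k**(2 - s/2.0)
    return pref * k**(1 - s)

for keV in (20.0, 100.0, 1000.0):
    k = keV/511.0
    print(f"{keV:6.0f} keV: J_FP/J_LT ~ {ratio(k):.2e}")
```

With these numbers the ratio rises steeply with photon energy: at deka-keV energies the loop top can rival or exceed the foot points, while at MeV energies the foot points dominate, which is the qualitative behavior used in the comparison with Figure 4.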
Similar parameter values are obtained for two other electron dominated flares (see PPS). ### 5.2 High Spatial Resolution Data The best high spatial resolution flare data in the hard X-ray range are those mentioned in §3 and shown in Figure 2. In Figure 4 we show the model predictions for the intensity ratio $`J_{FP}/J_{LT}`$ and for the spectral indices, obtained by a power law fit to the spectra over the short range 20 to 50 keV, as functions of the acceleration rate $`𝒟`$ and for several values of the important parameters, such as density, magnetic field, and escape time. The ranges of the observed values of these quantities obtained by Masuda (1994) and Alexander & Metcalf (1997) are shown by the horizontal dotted lines. As is evident from Figure 4, some of the values of the model parameters can be constrained by the observations. For high ambient densities ($`n\sim 10^{11}`$ cm<sup>-3</sup>) the observed ratios agree with a wide range of acceleration rates, except for short escape times. But at lower densities only smaller values of the acceleration rate are acceptable. On the other hand, a stronger constraint can be obtained from consideration of the spectral indices, which require a slow acceleration rate, $`𝒟<0.15`$ s<sup>-1</sup>, and show a weak dependence on the ambient density and the escape time $`𝒯_{\mathrm{esc}}`$. There is also some dependence on the exponents $`q^{\prime }`$ and $`s`$ (not shown here) and essentially no dependence on the value of the magnetic field $`B`$, whose effects come in above MeV photon energies. The parameters from this comparison are in agreement with those given in Table 1, obtained from the spectral fits to different flares. For further details the reader is referred to PD. ## 6 SUMMARY I have shown that some recent observations do not agree with the predictions of the standard thick target model for the impulsive phase of solar flares. 
I have demonstrated that the higher resolution spectral observations over a wide dynamic range, and some high spatial resolution observations in the hard X-ray regime, can be explained by a modification of the standard model in which the electrons are accelerated stochastically by plasma turbulence. Based on the models obtained in PPS and PD, I have shown that the resultant model parameters are reasonable. A more accurate determination of the validity of the model and of the range of its parameters can be obtained with the simultaneous high spectral and spatial resolution observations of many flares that are expected during the upcoming solar maximum from HESSI, a new mission to be launched some time in 2000. I would like to thank Tim Donaghy and Jim McTiernan for help with the preparation of the figures. This work is supported in part by NASA grant NAG-5-7144-0002.
no-problem/9911/hep-th9911054.html
# Geodesic Flow on the $`n`$-Dimensional Ellipsoid as a Liouville Integrable System Petre Diţă<sup>1</sup><sup>1</sup>1email: dita@hera.theory.nipne.ro National Institute of Physics & Nuclear Engineering Bucharest, PO Box MG6, Romania ## Abstract We show that the motion on the $`n`$-dimensional ellipsoid is completely integrable by exhibiting $`n`$ integrals in involution. The system is separable at the classical and quantum level, the separation of the classical variables being realized by the inverse of the momentum map. This system is a generic one in a new class of $`n`$-dimensional completely integrable Hamiltonians defined by an arbitrary function $`f(q,p)`$, invertible with respect to the momentum $`p`$ and rational in the coordinate $`q`$. Completely integrable Hamiltonian systems in the Liouville sense have been a main subject of interest in the last decades, both at the classical and the quantum level. Classical examples of such systems are the geodesic flow on the triaxial ellipsoid, Neumann’s dynamical system and the integrable cases of rigid body motion. The number of interesting examples increased after the seminal papers by Lax and by Olshanetsky and Perelomov, especially in connection with the inverse scattering method and the relation with the classical simple Lie algebras. The aim of this paper is the study of the geodesic motion on the $`n`$-dimensional ellipsoid, which is a direct generalization of the $`2`$-dimensional case studied by Jacobi, since it seems that no solution is known for this system in the case $`n>2`$. We obtain a complete description of the problem at the classical level by finding $`n`$ prime integrals in involution, the separation of variables, the explicit solution of the Hamilton-Jacobi equation and the equations of the geodesics. We also show that the Schrödinger equation separates. This system is to a great extent universal among the various integrable systems and generates a new class of completely integrable models. 
The Lagrangean for a particle of unit mass constrained to move on the $`n`$-dimensional ellipsoid $$\frac{x_1^2}{a_1}+\frac{x_2^2}{a_2}+\mathrm{\dots }+\frac{x_{n+1}^2}{a_{n+1}}=1$$ $`(1)`$ where the $`a_i`$, $`i=1,2,\mathrm{\dots },n+1`$, are positive numbers, $`a\in R_+^{n+1}`$, is $$𝔏=\frac{1}{2}\sum _{i=1}^{n+1}\dot{x}_i^2$$ $`(2)`$ The preceding equations define a constrained system, and the model can be formulated in terms of constrained dynamical variables with Dirac brackets, or in an unconstrained form with canonical Poisson brackets. In the following we will use a third, classical, way, which consists in reducing the Lagrangean by eliminating one degree of freedom using the equation of constraint (1). The simplest way to do that would be to solve Eq.(1) with respect to the last coordinate and use the result in the free Lagrangean, Eq.(2), but the drawback is that we would obtain a non-diagonal metric. We follow here the idea of Jacobi, which was to find a clever parametrization such that the corresponding metric is diagonal. For what follows it is useful to define the two polynomials $$P(x)=\prod _{i=1}^{n+1}(x-a_i),\qquad Q(x)=\prod _{i=1}^{n}(x-u_i)$$ $`(3)`$ where $`a_i`$ and $`u_i,i=1,\mathrm{\dots },n`$ are the positive numbers entering the parametrization of the ellipsoid and the ellipsoidal coordinates, respectively. The orthogonal parametrization of the quadric (1) is given by $$x_j^2=\frac{a_jQ(a_j)}{P^{\prime }(a_j)},\qquad j=1,2,\mathrm{\dots },n+1$$ $`(4)`$ where $`P^{\prime }(a_j)=dP(x)/dx|_{x=a_j}`$, and the ellipsoidal coordinates $`u_1,\mathrm{\dots },u_n`$ satisfy $`a_1<u_1<a_2<\mathrm{\dots }<u_n<a_{n+1}`$. Using this parametrization in Eq.(2) we find $$𝔏=\frac{1}{8}\sum _{i=1}^{n}g_{ii}\dot{u}_i^2$$ $`(5)`$ where the (diagonal) metric is given by $`g_{ii}=-u_iQ^{\prime }(u_i)/P(u_i),i=1,2,\mathrm{\dots },n`$, and $`Q^{\prime }(u_i)=dQ(x)/dx|_{x=u_i}`$. 
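That the parametrization (4) indeed lands on the ellipsoid (1), with $`x_j^2>0`$ for interleaved coordinates, can be checked numerically; a sketch (the values of $`n`$ and the random ranges are arbitrary choices for the test):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
a = np.sort(rng.uniform(0.5, 5.0, n + 1))
u = rng.uniform(a[:-1], a[1:])   # a_1 < u_1 < a_2 < ... < u_n < a_{n+1}

Qa = np.array([np.prod(aj - u) for aj in a])                            # Q(a_j)
Pp = np.array([np.prod(a[j] - np.delete(a, j)) for j in range(n + 1)])  # P'(a_j)
x2 = a*Qa/Pp                                                            # eq. (4)

print(np.all(x2 > 0), np.isclose(np.sum(x2/a), 1.0))  # True True
```

The identity behind the check is the partial-fraction expansion of $`Q(x)/P(x)`$: multiplying it by $`x`$ and letting $`x\to \infty `$ gives $`\sum _jQ(a_j)/P^{\prime }(a_j)=1`$, which is exactly $`\sum _jx_j^2/a_j=1`$.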
Defining as usual the generalized momenta by $`p_i=\partial 𝔏/\partial \dot{u}_i`$ and using the Legendre transform, we find the Hamiltonian of the problem $$ℋ=\sum _{i=1}^{n}p_i\dot{u}_i-𝔏=2\sum _{i=1}^{n}g^{ii}p_i^2$$ $`(6)`$ where $`g^{ii}=-P(u_i)/u_iQ^{\prime }(u_i)`$. We now define the symmetric functions of the polynomials $`Q^{(j)}(x)=Q(x)/(x-u_j)`$: $$Q^{(j)}(x)=\sum _{k=0}^{n-1}x^kS_{n-k-1}^{(j)},\qquad j=1,2,\mathrm{\dots },n$$ $`(7)`$ where $`S_0^{(j)}=1`$, $`S_1^{(j)}=-(u_1+\mathrm{\dots }+u_{j-1}+u_{j+1}+\mathrm{\dots }+u_n)`$, etc. The upper index means that the coordinate $`u_j`$ does not enter the symmetric sum $`S_k^{(j)}`$, $`k=1,\mathrm{\dots },n-1`$. We define the following functions $$H_k=\sum _{l=1}^{n}S_{k-1}^{(l)}g^{ll}p_l^2=-\sum _{i=1}^{n}S_{k-1}^{(i)}\frac{P(u_i)}{u_iQ^{\prime }(u_i)}p_i^2,\qquad k=1,2,\mathrm{\dots },n$$ $`(8)`$ where $`H_1`$ differs from $`ℋ`$ only by a numerical factor. A careful inspection of Eqs.(8) shows that for each degree of freedom the contribution to the Hamiltonian $`H_k`$ is given by the product of two different factors. The first one depends on the “Vandermonde” structure, $`f_1=S_{k-1}^{(i)}/Q^{\prime }(u_i)`$, and the second one, $`f_2=(P(u_i)/u_i)p_i^2`$, depends on the “singularities”, i.e. on the hyperelliptic curve defined by the parameters $`a_i`$. Let $`g(p,u)=ℋ(p,u)`$ be an arbitrary function depending on the canonical variables $`p`$ and $`u`$ which is invertible with respect to the momentum $`p`$. As we will see later, the invertibility condition is necessary for the separation of variables in the Hamilton-Jacobi equation. In particular we may suppose that $`ℋ(p,u)`$ is a one-dimensional Hamiltonian. For each $`n`$ we define an $`n`$-dimensional integrable model by giving $`n`$ integrals in involution $$ℋ_k(p,u)=\sum _{i=1}^{n}\frac{S_{k-1}^{(i)}}{Q^{\prime }(u_i)}g(p_i,u_i),\qquad k=1,2,\mathrm{\dots },n$$ $`(8^{\prime })`$ Our main result is contained in the following proposition. 
Proposition. Let $`M^{2n}=T^{\ast}(\mathbb{R}^n)`$ be the canonically symplectic phase space of the dynamical system defined by the Hamilton function Eq.(6). Then

i) the functions $`H_i,i=1,2,\ldots,n`$ are in involution $$\{H_i,H_j\}=0,\qquad i,j=1,2,\ldots,n$$

ii) for the momentum map $$\mathcal{M}:M^{2n}\to \mathbb{R}^n,\qquad M_{𝐡}=\{(u_i,p_i):H_i=h_i,\ i=1,2,\ldots,n\},\quad h_i\in \mathbb{R}$$ the preimage $`\mathcal{M}^{-1}(M_{𝐡})`$ realizes the separation of variables, giving an explicit factorization of Liouville’s tori into one-dimensional ovals;

iii) the canonical equations are integrable by quadratures.

In the following we sketch a proof of the above proposition. As will be easily seen, the same proof also holds for the Hamiltonians defined in Eqs.(8′).

Proof. i) We calculate the Poisson bracket $$\{H_k,H_l\}=\sum_{j=1}^{n}\left(\frac{\partial H_k}{\partial u_j}\frac{\partial H_l}{\partial p_j}-\frac{\partial H_k}{\partial p_j}\frac{\partial H_l}{\partial u_j}\right)=2\sum_{j=1}^{n}p_j\frac{P(u_j)}{u_jQ^{\prime}(u_j)}\left(S_{l-1}^{(j)}\frac{\partial H_k}{\partial u_j}-S_{k-1}^{(j)}\frac{\partial H_l}{\partial u_j}\right)=$$ $$2\sum_{j=1}^{n}\sum_{i=1}^{n}p_i^2p_j\frac{P(u_j)}{u_jQ^{\prime}(u_j)}\left[S_{l-1}^{(j)}\frac{\partial}{\partial u_j}\left(\frac{S_{k-1}^{(i)}P(u_i)}{u_iQ^{\prime}(u_i)}\right)-S_{k-1}^{(j)}\frac{\partial}{\partial u_j}\left(\frac{S_{l-1}^{(i)}P(u_i)}{u_iQ^{\prime}(u_i)}\right)\right]=$$ $$2\sum_{j=1}^{n}\sum_{i=1}^{n}p_i^2p_j\frac{P(u_j)}{u_jQ^{\prime}(u_j)}\left[\frac{\partial}{\partial u_j}\left(\frac{S_{l-1}^{(j)}S_{k-1}^{(i)}P(u_i)}{u_iQ^{\prime}(u_i)}\right)-\frac{\partial}{\partial u_j}\left(\frac{S_{k-1}^{(j)}S_{l-1}^{(i)}P(u_i)}{u_iQ^{\prime}(u_i)}\right)\right]=$$ $$2\sum_{j=1}^{n}\sum_{i=1}^{n}p_i^2p_j\frac{P(u_j)}{u_jQ^{\prime}(u_j)}\,\frac{\partial}{\partial u_j}\left[\left(S_{l-1}^{(j)}S_{k-1}^{(i)}-S_{k-1}^{(j)}S_{l-1}^{(i)}\right)\frac{P(u_i)}{u_iQ^{\prime}(u_i)}\right]$$ The last step was possible because the symmetric functions $`S_k^{(j)}`$ and $`S_l^{(j)}`$ depend on all of $`u_1,u_2,\ldots,u_n`$ except $`u_j`$. Looking at the last expression it is easily seen that the partial derivative with respect to $`u_j`$ vanishes for $`i=j`$.
For $`i\ne j`$ we have to show that $$\frac{\partial}{\partial u_j}\left[\frac{S_{l-1}^{(j)}S_{k-1}^{(i)}-S_{k-1}^{(j)}S_{l-1}^{(i)}}{u_i-u_j}\right]=0$$ but this is a direct consequence of the following identities $$\frac{\partial}{\partial u_j}S_{k-1}^{(i)}=-S_{k-2}^{(i,j)}\qquad \mathrm{and}\qquad S_{k-1}^{(i)}-S_{k-1}^{(j)}=(u_i-u_j)S_{k-2}^{(i,j)}$$ where the upper index $`(i,j)`$ means that the corresponding expression depends on neither $`u_i`$ nor $`u_j`$. In this way we have shown that $`\{H_k,H_l\}=0`$.

ii) The surface $`M_{𝐡}=\{(u_i,p_i):H_i=h_i,\ i=1,2,\ldots,n\},h_i\in \mathbb{R}`$, is given by the system of equations $$\sum_{i=1}^{n}S_{k-1}^{(i)}\frac{P(u_i)}{u_iQ^{\prime}(u_i)}p_i^2=h_k,\qquad k=1,2,\ldots,n$$ $`(9)`$ and solving it for the $`p_i`$ amounts to the calculation of the following determinant $$V_n=\left|\begin{array}{ccccc}1& 1& \cdots& \cdots& 1\\ S_1^{(1)}& S_1^{(2)}& \cdots& \cdots& S_1^{(n)}\\ \cdots& \cdots& \cdots& \cdots& \cdots\\ \cdots& \cdots& \cdots& \cdots& \cdots\\ S_{n-1}^{(1)}& S_{n-1}^{(2)}& \cdots& \cdots& S_{n-1}^{(n)}\end{array}\right|$$ which is equal to the Vandermonde determinant, i.e. $$V_n=\prod_{1\le i<j\le n}(u_j-u_i)$$ Let $`V_{n-1}^{(j)}`$ be the determinant obtained by removing the $`j`$th column and the last row in $`V_n`$, and $`W_{n,j}`$ the determinant obtained by replacing the $`j`$th column of $`V_n`$ by $`(h_1,\ldots,h_n)^t`$. It is easily seen that $$V_{n-1}^{(j)}=\prod_{\substack{1\le k<l\le n\\ k\ne j,\ l\ne j}}(u_l-u_k)$$ i.e. $`V_{n-1}^{(j)}`$ is the Vandermonde determinant of the variables $`u_1,\ldots,u_n`$ with $`u_j`$ omitted.
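The involution $`\{H_k,H_l\}=0`$ established above can also be checked symbolically in the smallest nontrivial case $`n=2`$; a minimal sketch (not from the paper) with sympy, keeping the $`a_i`$ symbolic:

```python
import sympy as sp

u1, u2, p1, p2, a1, a2, a3 = sp.symbols('u1 u2 p1 p2 a1 a2 a3')

P = lambda x: (x - a1) * (x - a2) * (x - a3)
# For n = 2: S_0^{(i)} = 1, S_1^{(1)} = -u2, S_1^{(2)} = -u1,
# and Q'(u1) = u1 - u2, Q'(u2) = u2 - u1.
f1 = P(u1) / (u1 * (u1 - u2))     # P(u_1)/(u_1 Q'(u_1))
f2 = P(u2) / (u2 * (u2 - u1))
H1 = f1 * p1**2 + f2 * p2**2
H2 = -u2 * f1 * p1**2 - u1 * f2 * p2**2

def pb(F, G):   # canonical Poisson bracket in the variables (u_i, p_i)
    return sum(sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)
               for q, p in [(u1, p1), (u2, p2)])

print(sp.simplify(pb(H1, H2)))    # 0
```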
We have the identities $$\prod_{j=1}^{n}V_{n-1}^{(j)}=(V_n)^{n-2}$$ $$\frac{V_n}{V_{n-1}^{(j)}}=(-1)^{n-j}Q^{\prime}(u_j),\qquad j=1,\ldots,n$$ $$W_{n,j}=(-1)^{n-j}V_{n-1}^{(j)}\sum_{i=0}^{n-1}h_{n-i}u_j^i,\qquad j=1,\ldots,n$$ Using these identities it is easily seen that $`\mathcal{M}^{-1}(M_{𝐡})`$ is equivalent to the relations $$\frac{P(u_i)p_i^2}{u_i}=\sum_{k=0}^{n-1}h_{n-k}u_i^k,\qquad i=1,2,\ldots,n$$ $`(10)`$ which shows that $`\mathcal{M}^{-1}(M_{𝐡})`$ is an $`n`$-dimensional submanifold of $`M^{2n}`$ and, more important, that the $`p_i^2`$ are functions depending only on the single variable $`u_i`$. The last relation shows that the map $`\mathcal{M}^{-1}(M_{𝐡})`$ realizes the separation of variables for the geodesic motion on the ellipsoid. For the Hamiltonians given by Eqs.(8′) the preceding equations take the form $$g(p_i,u_i)=\sum_{k=0}^{n-1}h_{n-k}u_i^k,\qquad i=1,2,\ldots,n$$ $`(10^{\prime})`$ The relations (10)–(10′) have the classical form $$\phi (x_i,p_i,h_1,\ldots,h_n)=0,\qquad i=1,2,\ldots,n$$ which is an explicit factorization of Liouville’s tori into one-dimensional ovals. This is important for the quantization problem. With the notation $`R(u)=\sum_{k=0}^{n-1}h_{n-k}u^k`$ the above relations can be written $$p_i=ϵ_i\sqrt{\frac{u_iR(u_i)}{P(u_i)}},\qquad i=1,\ldots,n$$ $$p_i=g^{-1}(R(u_i)),\qquad i=1,\ldots,n$$ where $`ϵ_i=\pm 1`$ and $`g^{-1}`$ is the inverse of the relation (10′) with respect to the momentum $`p`$. The last relations allow us to solve the Hamilton-Jacobi equation immediately, because in this case the action splits into a sum of terms $$S(𝐡,u_1,\ldots,u_n)=S_1(𝐡,u_1)+\cdots+S_n(𝐡,u_n)$$ each of them satisfying an ordinary differential equation. Only the solution of the Hamilton-Jacobi equation for the geodesic motion on the ellipsoid will be presented, the other, more general, case being similar.
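Numerically, the equivalence between the linear system (9) and the separated relations (10) can be spot-checked; a sketch for $`n=3`$ with arbitrary values of $`u_i`$ and $`h_k`$ (working with $`A_i=P(u_i)p_i^2/u_i`$, so that the polynomial $`P`$ drops out of the check):

```python
import numpy as np

u = np.array([1.5, 3.0, 5.0])     # arbitrary ellipsoidal coordinates
h = np.array([0.7, -0.3, 1.1])    # arbitrary values h_1, h_2, h_3
n = len(u)

# Matrix of the system (9): M[k, i] = S_k^{(i)} / Q'(u_i), where the S_k^{(i)}
# are the coefficients of Q^{(i)}(x) = prod_{j != i} (x - u_j).
M = np.empty((n, n))
for i in range(n):
    coeffs = np.poly(np.delete(u, i))          # [S_0^{(i)}, ..., S_{n-1}^{(i)}]
    Qprime = np.prod(u[i] - np.delete(u, i))   # Q'(u_i)
    M[:, i] = coeffs / Qprime

A = np.linalg.solve(M, h)                       # A_i = P(u_i) p_i^2 / u_i
R = np.array([sum(h[n - 1 - k] * ui**k for k in range(n)) for ui in u])
print(np.allclose(A, R))                        # True: p_i^2 depends on u_i only
```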
$$S(𝐡,u_1,\ldots,u_n)=\sum_{i=1}^{n}ϵ_i\int_{u_i^0}^{u_i}\sqrt{\frac{wR(w)}{P(w)}}\,dw$$

iii) The above formulae allow us to choose new canonical variables as follows: $`𝒬_1=H_1,𝒬_k=H_k`$, $`k=2,\ldots,n`$, and the corresponding conjugate variables $`𝒫_i`$, $`i=1,\ldots,n`$. The Hamilton equations take the form $$\dot{𝒬}_i=0,\qquad i=1,\ldots,n$$ $$\dot{𝒫}_1=1,\qquad \dot{𝒫}_i=0,\qquad i=2,\ldots,n$$ and therefore $`𝒬_i=h_i,i=1,\ldots,n`$ and $`𝒫_1=t+g_1,𝒫_k=g_k,k=2,\ldots,n`$, with $`g_i,h_i\in \mathbb{R},i=1,\ldots,n`$. Because $$𝒫_j=\frac{\partial S}{\partial 𝒬_j}=\frac{\partial S}{\partial h_j}=\frac{1}{2}\sum_{i=1}^{n}ϵ_i\int_{u_i^0}^{u_i}\frac{t^{\,n-j+1/2}}{\sqrt{P(t)R(t)}}\,dt$$ we obtain the system $$t\,\delta _{1j}+g_j=\frac{1}{2}\sum_{i=1}^{n}ϵ_i\int_{u_i^0}^{u_i}\frac{t^{\,n-j+1/2}}{\sqrt{P(t)R(t)}}\,dt,\qquad j=1,\ldots,n$$ which gives the implicit equations of the geodesics. In this way the integration of the Hamilton equations is reduced to quadratures. From the last expressions one can see that all the subtleties of the geodesic motion on the $`n`$-dimensional ellipsoid are encoded in the hyperelliptic curve $`y^2=P(x)R(x)`$ whose genus is $`g=n`$. For quantization we use both the forms (9) and (10) and show first that the quantization of $`H_1`$ is equivalent to the quantization of Eq.(10). It is well known that because of the ambiguities concerning the ordering of $`p`$ and $`u`$ we must use the Laplace-Beltrami operator . Its general form is $`\mathrm{\Delta }_n=\frac{1}{\sqrt{g}}p_i(\sqrt{g}g^{ij}p_j)`$, $`i,j=1,\ldots,n`$, where $`g=det(g_{ij})`$ and $`g_{ij}`$ is the metric tensor.
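As a further cross-check of the classical integrability statement (a sketch with hypothetical parameter values, not from the paper): along the Hamiltonian flow generated by $`H_1`$, the second integral $`H_2`$ must remain constant. Integrating the canonical equations numerically for $`n=2`$:

```python
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

u1, u2, p1, p2 = sp.symbols('u1 u2 p1 p2')
a = [1, 2, 4]
P = lambda x: (x - a[0]) * (x - a[1]) * (x - a[2])
f1 = P(u1) / (u1 * (u1 - u2))             # P(u_i)/(u_i Q'(u_i)) for i = 1
f2 = P(u2) / (u2 * (u2 - u1))
H1 = f1 * p1**2 + f2 * p2**2
H2 = -u2 * f1 * p1**2 - u1 * f2 * p2**2   # S_1^{(1)} = -u2, S_1^{(2)} = -u1

state = (u1, u2, p1, p2)
rhs = sp.lambdify(state, [sp.diff(H1, p1), sp.diff(H1, p2),
                          -sp.diff(H1, u1), -sp.diff(H1, u2)])
H2_num = sp.lambdify(state, H2)

# Short integration, staying inside the region a_1 < u_1 < a_2 < u_2 < a_3
sol = solve_ivp(lambda t, y: rhs(*y), (0.0, 0.2), [1.5, 3.0, 0.3, -0.2],
                rtol=1e-10, atol=1e-12, dense_output=True)
vals = [H2_num(*sol.sol(t)) for t in np.linspace(0.0, 0.2, 11)]
print(max(vals) - min(vals))   # ~0: H_2 is constant along the H_1 flow
```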
Taking into account that $$g_{ii}=u_iQ^{\prime}(u_i)/P(u_i)=(-1)^{n-i}\frac{V_n}{V_{n-1}^{(i)}}\frac{u_i}{P(u_i)}$$ after some simplifications the Schrödinger equation generated by the Hamiltonian $`H_1`$ is written in the following form $$-\sum_{i=1}^{n}\frac{1}{V_n}\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial}{\partial u_i}\left((-1)^{n-i}V_{n-1}^{(i)}\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial \mathrm{\Psi }}{\partial u_i}\right)=h_1\mathrm{\Psi }$$ Since the term $`V_{n-1}^{(i)}`$ does not depend on $`u_i`$ it can be pulled out of the bracket, and the preceding equation takes the form $$-\sum_{i=1}^{n}(-1)^{n-i}V_{n-1}^{(i)}\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial}{\partial u_i}\left(\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial \mathrm{\Psi }}{\partial u_i}\right)=h_1V_n\mathrm{\Psi }$$ Now we make use of the Jacobi identity for the Vandermonde determinant. With the above notations the identity is $$\sum_{j}(-1)^{n-j}V_{n-1}^{(j)}u_j^k=\delta _{n-1,k}V_n$$ and using it in the preceding relation we obtain $$\sum_{i=1}^{n}(-1)^{n-i}V_{n-1}^{(i)}\left[\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial}{\partial u_i}\left(\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial \mathrm{\Psi }}{\partial u_i}\right)+\sum_{k=0}^{n-1}c_{n-k}u_i^k\,\mathrm{\Psi }\right]=0$$ which is equivalent to $`n`$ independent equations of the form $$\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial}{\partial u_i}\left(\sqrt{\frac{P(u_i)}{u_i}}\frac{\partial \mathrm{\Psi }_i}{\partial u_i}\right)+\left(\sum_{k=0}^{n-1}c_{n-k}u_i^k\right)\mathrm{\Psi }_i=0,\qquad i=1,\ldots,n$$ $`(11)`$ Here $`c_1=h_1`$ and the other $`c_k`$ are arbitrary. The direct approach, starting from Eq.(10), is simpler, the problem being one-dimensional, and one arrives at the same equation, Eq.(11). It has the advantage that the arbitrary coefficients $`c_k`$ are identified as $`c_k=h_k`$, i.e. the $`c_k`$ are the eigenvalues of the Hamiltonians $`H_k`$. Eq.(11) has the general form of a Sturm-Liouville problem $$\frac{d}{dx}\left(p(x)\frac{df(x)}{dx}\right)+v(x)f(x)=\lambda r(x)f(x)$$ which has to be solved on an interval $`[a,b]`$.
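The Jacobi identity for the Vandermonde determinant used above is easy to spot-check numerically ($`n=3`$, arbitrary $`u`$ values; the indices below are 0-based, hence the exponent $`n-1-j`$):

```python
import numpy as np

u = np.array([1.0, 2.0, 4.0])
n = len(u)

def vandermonde(v):
    # prod_{i<j} (v_j - v_i)
    return float(np.prod([v[j] - v[i]
                          for i in range(len(v)) for j in range(i + 1, len(v))]))

Vn = vandermonde(u)
lhs = [sum((-1) ** (n - 1 - j) * vandermonde(np.delete(u, j)) * u[j] ** k
           for j in range(n)) for k in range(n)]
rhs = [(k == n - 1) * Vn for k in range(n)]
print(lhs, rhs)   # both [0.0, 0.0, 6.0] for these u values
```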
It is well known that its eigenfunctions will live in a Hilbert space iff $`p(x)r(x)>0`$ on $`[a,b]`$. If $`p(x)`$ has a continuous first derivative and $`p(x)r(x)`$ a continuous second derivative, then by the following coordinate and function transforms $$\phi =\int^u\left(\frac{r(x)}{p(x)}\right)^{1/2}dx,\qquad \mathrm{\Phi }=(r(u)p(u))^{1/4}f(u)$$ $`(12)`$ the preceding equation takes the standard form $$\frac{d^2\mathrm{\Phi }}{d\phi ^2}+q(\phi )\mathrm{\Phi }=\lambda \mathrm{\Phi }$$ where $$q(\phi )=\frac{\mu ^{\prime \prime }(\phi )}{\mu (\phi )}-\frac{v(u)}{r(u)},\qquad \mu (\phi )=(p(u)r(u))^{1/4}$$ and $`u=u(\phi )`$ is the solution of the Jacobi inverse problem (12). In our case, Eq.(11), the transformation is $$\phi =\int_{u_0}^u\left(\frac{uR(u)}{P(u)}\right)^{1/2}du$$ and the Schrödinger equation takes the form $$\frac{d^2\mathrm{\Phi }}{d\phi ^2}+\frac{\mu ^{\prime \prime }(\phi )}{\mu (\phi )}\mathrm{\Phi }=h_1\mathrm{\Phi }$$ $`(13)`$ where $`\mu (\phi )=(R(u(\phi )))^{1/4}`$ and in $`R(u)`$ we have made the rescaling $`h_k\to h_k/h_1,k=1,\ldots,n`$. Thus solving the Schrödinger equation (11) is equivalent to solving the motion of a one-dimensional particle in the potential generated by $`R(u(\phi ))`$. For $`n=1`$ Eq.(13) is the equation of the one-dimensional rotator $$\frac{d^2\mathrm{\Psi }}{d\phi ^2}+l^2\mathrm{\Psi }=0$$ with the solution $`\mathrm{\Psi }(\phi )=\frac{1}{\sqrt{2\pi }}e^{il\phi }`$, $`l\in \mathbb{Z}`$, etc. In all the other cases we have to make use of the theory of hyperelliptic curves, $`\theta `$-functions and/or hyperelliptic Abelian functions in order to obtain explicit solutions. This problem will be treated elsewhere. What is remarkable is that solving the classical problem and solving the associated Schrödinger equation lead to the same mathematical formalism: $`\theta `$-functions or hyperelliptic Kleinian functions.
However there is a simpler alternative to Eq.(13): by the change of variable $$\phi =\int^u\sqrt{\frac{t}{P(t)}}\,dt$$ Eq.(11) takes the form $$\frac{d^2\mathrm{\Psi }}{d\phi ^2}+\left(\sum_{k=0}^{n-1}h_{n-k}u(\phi )^k\right)\mathrm{\Psi }=0$$ which is the equation of a particle moving in a potential generated by the integrals in involution. As concerns the quantization of the Hamiltonian $`\mathcal{H}_1`$, it depends on its explicit form and we do not pursue it here. From the proof of our results it follows that the hyperelliptic curve was only a tool in obtaining the separation of variables, Eq.(10); in fact the separation was a direct consequence of the properties of the Vandermonde determinant. In the following we exhibit a few examples of new $`n`$-dimensional integrable models. Two models which show that the dimension $`n`$ of the system has no direct connection with the number of zeros and/or poles of the function $`g(p,u)`$ could be: $`g(p,u)=(\mathrm{sin}u/u)p^2`$ and $`g(p,u)=\mathrm{tan}u\,e^{\alpha p}`$, the first example being a function with a denumerable number of zeros and the second one with a denumerable number of poles and zeros; in both cases the hyperelliptic curve is of infinite genus. Other examples can be deduced, for example, from the many-body elliptic Calogero-Moser , or the elliptic Ruijsenaars models . Starting with the corresponding one-dimensional Hamiltonians $`H_{CM}(p,u)=f(p,u)=p^2/2+\nu ^2\wp _\tau (u)`$ and $`f_R(p,u)=\mathrm{cosh}(\alpha p)\sqrt{1-2(\alpha \nu )^2\wp _\tau (u)}`$ respectively, where $`\wp _\tau (u)`$ is the Weierstrass function, we obtain $`n`$-dimensional models. In conclusion, we have found in this paper a new class of completely integrable systems, which allows us to uncover the origin of their integrability or solvability.
We have shown that there is a simple and general mechanism allowing us to construct completely integrable Hamiltonian systems with an arbitrary number of degrees of freedom, and that for all these systems the separation of classical variables is given by the inverse of the momentum map.
# Millisecond Pulsars in 47 Tucanae

## 1. Pulsar searching

The cluster 47 Tucanae (47 Tuc) was long known to contain 11 MSPs, with four of them in binary systems (Robinson et al. 1995). In 20 cm observations made since 1997 at Parkes, we have detected all the previously known pulsars and discovered nine new MSPs, 47 Tuc O–W, all of which are in binary systems. Of these, 47 Tuc R has the shortest orbital period for any radio pulsar (96 minutes). For details on the search, orbital parameters, pulse profiles, and luminosities, see Camilo et al. (2000).

## 2. Pulsar timing

The frequent detections of 14 pulsars allowed the determination of 12 new coherent timing solutions, and the confirmation of the two previously known. All pulsars lie in a circle of 1.5 arcmin about the centre of the cluster (see Fig. 1). Nine pulsars out of 14 have negative period derivatives (see Table 1), indicating that they are accelerating towards the Earth in the cluster’s gravitational potential, which can thus be probed in some detail. The number of pulsars with projected distance from the centre of the cluster smaller than $`R_{\perp}`$ is proportional to $`R_{\perp}`$. This is typical of an isothermal distribution. According to Phinney (1992) this indicates that neutron stars are the dominant mass species in the core of 47 Tuc.

## References

Camilo, F., Lorimer, D. R., Freire, P., Lyne, A. G., & Manchester, R. N. 2000, ApJ, in press; astro-ph/9911234

Robinson, C. R., Lyne, A. G., Manchester, A. G., Bailes, M., D’Amico, N., & Johnston, S. 1995, MNRAS, 274, 547

Phinney, E. S. 1992, Philos. Trans. Roy. Soc. London A, 341, 39
# Persistence exponent of the diffusion equation in ϵ dimensions

## Abstract

We consider the $`d`$-dimensional diffusion equation $`\partial _t\varphi (\text{x},t)=\mathrm{\Delta }\varphi (\text{x},t)`$ with random initial condition, and observe that, when appropriately scaled, $`\varphi (0,t)`$ is Gaussian and Markovian in the limit $`d\to 0`$. This leads via the Majumdar-Sire perturbation theory to a small $`d`$ expansion for the persistence exponent $`\theta (d)`$. We find $`\theta (d)=\frac{1}{4}d-0.12065\,d^{3/2}+\cdots`$ LPT ORSAY 99/71 <sup>1</sup>Laboratoire associé au Centre National de la Recherche Scientifique - UMR 8627

We consider the $`d`$-dimensional linear diffusion equation $$\frac{\partial \varphi (\text{x},t)}{\partial t}=\mathrm{\Delta }\varphi (\text{x},t)$$ $`(1)`$ with initial condition $`\varphi (\text{x},0)=\psi (\text{x})`$, where $`\psi (\text{x})`$ is a zero mean Gaussian random field of covariance $`\langle \psi (\text{x})\psi (\text{x}^{\prime})\rangle =\delta (\text{x}-\text{x}^{\prime})`$. We are interested in the persistence of the stochastic process $`\mathrm{\Phi }_d(t)=\varphi (0,t),`$ that is, in the $`t\to \infty `$ limit of the probability $`Q(t)`$ that in the interval $`0\le t^{\prime}\le t`$ the process $`\mathrm{\Phi }_d(t)`$ does not change sign. This question was first asked by Majumdar et al. and Derrida et al. . The asymptotic decay of $`Q(t)`$ appears to be given by a power law $$Q(t)\sim t^{-\theta (d)}$$ $`(2)`$ with a persistence exponent $`\theta (d)`$ whose precise value has been the main focus of study. One particular application concerns the coarsening of a phase-ordering system with nonconserved order parameter . Persistence exponents of random processes, also of interest in many other contexts in physics (see a recent review due to Majumdar ), have turned out to be very hard to calculate. A perturbative method in this field was designed by Majumdar and Sire and simplified by Oerding et al. (see also Majumdar et al. ).
It consists of expanding around a Gaussian Markovian process, for which the persistence exponent is known. This method was applied to zero temperature and critical point coarsening in the Ising model. More recently Majumdar and Bray showed how to expand the persistence exponent of a Gaussian process in powers of a parameter $`1-p`$ interpretable as a “fugacity” weighting the sign changes of the random process. For the diffusion equation (1) no exact expressions for $`\theta (d)`$ are known. Approximate numerical values come from simulation data , from the “independent interval approximation” (IIA) , and from work by Newman and Toroczkai based on a hypothesis concerning the “sign time distribution” (see also Dornic and Godrèche and Drouffe and Godrèche ). In this note we show that the persistence exponent $`\theta (d)`$ allows for an $`ϵ`$ expansion in dimension $`0+ϵ`$. Using the Green function of the diffusion equation and writing $`S_d`$ for the surface of the $`d`$-dimensional unit sphere we can solve (1) and find $$\mathrm{\Phi }_d(t)=\frac{S_d^{1/2}}{(4\pi t)^{d/2}}\int_0^{\infty }\text{d}r\,r^{\frac{1}{2}(d-1)}\text{e}^{-r^2/4t}\mathrm{\Psi }(r)$$ $`(3)`$ where $`\mathrm{\Psi }(r)`$ is the appropriately normalized integral of the initial field $`\psi (\text{x})`$ on a spherical shell of radius $`r`$ around the origin, $$\mathrm{\Psi }(r)=S_d^{-1/2}r^{-\frac{1}{2}(d-1)}\underset{\mathrm{\Delta }r\to 0}{lim}\frac{1}{\mathrm{\Delta }r}\int_{r<x<r+\mathrm{\Delta }r}\text{d}\text{x}\,\psi (\text{x})$$ $`(4)`$ Hence $`\mathrm{\Psi }(r)`$ is Gaussian of mean zero and covariance $`\langle \mathrm{\Psi }(r)\mathrm{\Psi }(r^{\prime})\rangle =\delta (r-r^{\prime}).`$ We shall henceforth write $`d=ϵ`$ to indicate that the RHS of (3) is continued analytically to noninteger dimensions and that we prepare for the limit of zero dimension.
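From Eq.(3) the covariance of $`\mathrm{\Phi }_d`$ follows as $`\langle \mathrm{\Phi }_d(t)\mathrm{\Phi }_d(t^{\prime})\rangle =(4\pi (t+t^{\prime}))^{-d/2}`$. As a quick numerical aside (not in the paper; the test values are arbitrary), one can verify that the rescaling introduced below in Eq.(5) turns this into the stationary correlator (6):

```python
import numpy as np

eps, tau, taup = 0.7, 1.3, 0.4                  # arbitrary epsilon, tau, tau'
t, tp = np.exp(tau / eps), np.exp(taup / eps)   # tau = eps * log t

raw = (4 * np.pi * (t + tp)) ** (-eps / 2)      # <Phi(t) Phi(t')>
scaled = (8 * np.pi) ** (eps / 2) * np.exp((tau + taup) / 4) * raw
F = np.cosh((tau - taup) / (2 * eps)) ** (-eps / 2)   # Eq. (6)
print(scaled, F)    # equal: the rescaled correlator depends on tau - tau' only
```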
The remaining discussion is easiest in terms of the new time variable $`\tau =ϵ\mathrm{log}t`$ and the rescaled process $$\stackrel{~}{\mathrm{\Phi }}_ϵ(\tau )=(8\pi )^{ϵ/4}\text{e}^{\tau /4}\mathrm{\Phi }_ϵ(\text{e}^{\tau /ϵ})$$ $`(5)`$ which has the advantage of being stationary. Let us denote by $`\stackrel{~}{Q}(\tau )`$ the persistence probability of $`\stackrel{~}{\mathrm{\Phi }}_ϵ(\tau )`$. This quantity then decays as $`\stackrel{~}{Q}(\tau )\sim \mathrm{exp}(-\stackrel{~}{\theta }(ϵ)\tau )`$ with $`ϵ\stackrel{~}{\theta }=\theta .`$ We consider now the explicit expression for the correlator $`F_ϵ(\tau -\tau ^{\prime})\equiv \langle \stackrel{~}{\mathrm{\Phi }}_ϵ(\tau )\stackrel{~}{\mathrm{\Phi }}_ϵ(\tau ^{\prime})\rangle ,`$ which reads $$F_ϵ(\tau -\tau ^{\prime})=\left(\mathrm{cosh}\frac{\tau -\tau ^{\prime}}{2ϵ}\right)^{-ϵ/2}$$ $`(6)`$ We observe that in the limit $`ϵ\to 0`$ we have the exponential decay $`F_0(\tau -\tau ^{\prime})=\mathrm{exp}(-\stackrel{~}{\lambda }|\tau -\tau ^{\prime}|)`$ with $`\stackrel{~}{\lambda }=\frac{1}{4}.`$ It follows that $`\stackrel{~}{\mathrm{\Phi }}_0(\tau )`$ is Markovian and that it has $`\stackrel{~}{\theta }(0)=\stackrel{~}{\lambda }`$ (see e.g. ), whence we conclude that $`\theta (0)=0.`$ For $`ϵ>0`$ the process $`\stackrel{~}{\mathrm{\Phi }}_ϵ(\tau )`$ is non-Markovian and there is no known way to calculate $`\stackrel{~}{\theta }(ϵ)`$ exactly. We therefore consider $`F_ϵ-F_0`$ as a small perturbation. This allows us to apply the principal result of the Majumdar-Sire perturbation theory . Cast in a form first explicitly exhibited by Oerding et al.
and applied to the problem at hand it states that to lowest order in $`F_ϵ-F_0`$ one has $$\stackrel{~}{\theta }(ϵ)=\stackrel{~}{\lambda }\left[1-\frac{2\stackrel{~}{\lambda }}{\pi }\int_0^{\infty }\text{d}\tau \frac{F_ϵ(\tau )-F_0(\tau )}{(1-\text{e}^{-2\stackrel{~}{\lambda }\tau })^{3/2}}\right]$$ $`(7)`$ Upon substituting in (7) the explicit expressions for $`\stackrel{~}{\lambda },F_ϵ,`$ and $`F_0`$, and multiplying by $`ϵ`$, we get $$\theta (ϵ)=\frac{ϵ}{4}\left[1-\frac{1}{2\pi }\int_0^{\infty }\text{d}\tau \frac{(\mathrm{cosh}\frac{\tau }{2ϵ})^{-ϵ/2}-\text{e}^{-\tau /4}}{(1-\text{e}^{-\tau /2})^{3/2}}\right]$$ $`(8)`$ We extract from (8) the first two terms of the $`ϵ`$ expansion by setting $`\tau =ϵu`$, expanding the integrand for small $`ϵ`$, and performing an integration by parts. The result is $`\theta (ϵ)`$ $`=`$ $`{\displaystyle \frac{ϵ}{4}}-{\displaystyle \frac{1}{\pi }}\left({\displaystyle \frac{ϵ}{2}}\right)^{3/2}{\displaystyle \int_0^{\infty }}{\displaystyle \frac{\text{d}u}{\sqrt{u}}}{\displaystyle \frac{1}{1+\text{e}^u}}+\cdots`$ (9) $`=`$ $`{\displaystyle \frac{ϵ}{4}}-(8\pi )^{-1/2}(1-\sqrt{2})\zeta (\frac{1}{2})\,ϵ^{3/2}+\cdots`$ $`=`$ $`{\displaystyle \frac{ϵ}{4}}-0.12065\,ϵ^{3/2}+\cdots`$ Truncating this series after the $`ϵ^{3/2}`$ term we find $`\theta (1)=0.1293`$… , which should be compared to the Monte Carlo estimate $`\theta (1)=0.1207\pm 0.0005`$ , to the IIA result $`\theta (1)=0.1203`$… , and to the value $`\theta (1)=0.1253`$… of Ref. . No numerical values for $`\theta (1)`$ are available from the $`1-p`$ expansion of Ref. . Finally, retaining in (8) the full integral – which is the expansion proposed by the original authors – gives for $`ϵ=1`$ the somewhat improved result $`\theta (1)=0.1267`$… . We conclude that the $`ϵ`$ expansion (9) is significant as one of the rare analytical results that exist today for the persistence exponent of the diffusion equation.
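The value of the $`ϵ^{3/2}`$ coefficient in (9) can be reproduced directly; a minimal numerical sketch (not part of the paper) using mpmath:

```python
from mpmath import mp, quad, zeta, sqrt, pi, exp

mp.dps = 25
# The integral appearing in the first line of Eq. (9)
I = quad(lambda u: 1 / (sqrt(u) * (1 + exp(u))), [0, 1, mp.inf])
coeff = (1 / pi) * (mp.mpf(1) / 2) ** mp.mpf('1.5') * I

# Closed form from the second line of Eq. (9)
closed = (1 - sqrt(2)) * zeta(mp.mpf(1) / 2) / sqrt(8 * pi)
print(coeff, closed)    # both ~ 0.12066
```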
To make it numerically competitive, higher orders in $`ϵ`$ should be calculated.
OUTP-99-61P November 1999 hep-th/9911252

What does $`E_8`$ know about 11 dimensions?

Ian I. Kogan <sup>1</sup><sup>1</sup>1e-mail: i.kogan@physics.ox.ac.uk and John F. Wheater<sup>2</sup><sup>2</sup>2e-mail: j.wheater@physics.ox.ac.uk

Department of Physics, University of Oxford Theoretical Physics, 1 Keble Road, Oxford OX1 3NP, UK

Abstract. We discuss some possible relationships in gauge theories, string theory and M theory in the light of some recent results obtained in gauge invariant supersymmetric quantum mechanics. In particular this reveals a new relationship between the gauge group $`E_8`$ and 11-dimensional space. PACS: 11.25.-w, 11.25.Mj Keywords: M-theory, matrix models, heterotic string

A long time ago it was proposed by Eguchi and Kawai that the dynamics of a four dimensional $`SU(N)`$ lattice gauge theory at large $`N`$ could be described by the same $`SU(N)`$ gauge theory on a single hypercube with periodic boundary conditions. Somehow the extensive nature of space-time in such a theory is not important, at least in the large $`N`$ limit. The Eguchi Kawai model turned out to have a number of problems which required various modifications, and it has never yielded a systematic way of solving lattice gauge theories in general. However the idea is clearly appealing, especially in the context of modern developments in which field theories in different numbers of space-time dimensions are related by the compactification of some of the dimensions; the Eguchi Kawai model is simply $`SU(N)`$ lattice gauge theory compactified on $`T^4`$ with the compactification radius sent to zero (or rather one lattice spacing). One practical difficulty with this is precisely that in a lattice theory one cannot continuously vary the compactification radius by changing the lattice size; really we want to keep the lattice size fixed and vary the correlation length so that the size of the box occupied by the continuum effective theory is sent to zero.
Perhaps it would be simpler to deal with a continuum theory from the outset. One might suspect that one crucial difference between the modern compactification relations and the Eguchi Kawai model is supersymmetry; this ensures that the energy of the ground state does not diverge under compactification and seems (at least from the technical point of view) to be a vital ingredient in demonstrating the network of dualities relating different string theories and M-theory. In a recent paper Kac and Smilga have analyzed the zero mode structure of the supersymmetric quantum mechanics (SQM) obtained by dimensionally reducing the $`D=3+1`$-dimensional $`𝒩=4`$ supersymmetric Yang-Mills field theory (SYM) with gauge group $`G`$ to $`D=0+1`$. In turn the $`D=3+1`$ theory can be regarded as the dimensional reduction of $`D=9+1`$, $`𝒩=1`$ SYM. The dynamics of this SQM was first analysed in , . The SQM theory is known to have a continuous spectrum with states of all energies $`E`$ from zero (as guaranteed by supersymmetry) upwards; these states are not normalizable. There is also a discrete spectrum of normalized $`E=0`$ states. If the gauge group is $`G=SU(N)`$ then it is known that the SQM Hamiltonian can be regarded as describing the regularized quantum 11-dimensional supermembrane (see for example and references therein) in the light-cone gauge, just as ordinary $`SU(N)`$ QM emerges in the light-cone quantization of the bosonic membrane . It was shown in that the spectrum of $`SU(N)`$ SQM is continuous, which kills the old (i.e. first quantized) supermembrane. However, this is precisely the property which is necessary in a matrix formulation of M-theory, where $`N`$ in $`SU(N)`$ is related to the number of ‘parton’ $`D0`$ branes in a light-cone formulation of M-theory in a flat background. More about M-theory and Matrix theory can be found in , and , and in references therein.
If the SQM is to work as a formulation of M-theory then it is crucial that there is only one $`E=0`$ ground state, and much effort has gone into investigating this. The proof was given for $`SU(2)`$ in , and a lot of evidence accumulated for $`SU(N)`$ . Kac and Smilga have now shown that this is indeed the case for all $`SU(N)`$ and $`U(N)`$ gauge groups. However their results are much more far reaching and show that the other classical groups, $`SO(N)`$, $`Sp(2r)`$, and the exceptional groups, have a much richer structure of $`E=0`$ modes. Consider a $`D=d+1`$ SYM theory with the $`d`$ spatial dimensions forming a compact manifold with isometry group $`\mathcal{H}`$ and compactification scale $`\lambda `$. We can dimensionally reduce the theory to SQM by sending $`\lambda `$ to zero. In doing so we lose the isometry group as a symmetry of the theory. However we also create a vector space of $`n_G`$ $`E=0`$ modes which we will denote $`\{|0,i\rangle ,i=1,\ldots,n_G\}`$; note that these states are time-independent because $`E=0`$ and can therefore be regarded as forming a real vector space, not a complex one. A general zero energy state can then be written $$|X\rangle =\sum_{i=1}^{n_G}X^i|0,i\rangle .$$ $`(1)`$ The normalization condition applied to $`|X\rangle `$ gives the constraint $$1=\sum_{i=1}^{n_G}X^iX^i$$ $`(2)`$ and thus the $`X^i`$ live on the sphere $`S^{n_G-1}`$. Thus the destruction of the original isometry group $`\mathcal{H}`$ by the dimensional reduction is accompanied by the emergence of a new one, $`\mathcal{H}^{\prime}=SO(n_G)`$. We propose that if $`\mathcal{H}\subset \mathcal{H}^{\prime}`$ then the process of compactification and reduction is continuous (i.e. the original $`d`$-dimensional manifold flows into the new one) and the SQM is equivalent to the original field theory; in this sense such models provide a realization of the Eguchi Kawai idea. From the rules given by Kac and Smilga we list in Table 1 the simple groups which have (possibly) physically interesting values of $`n_G`$.
If we dimensionally reduce $`D=3+1`$, $`𝒩=4`$ SYM compactified on $`S^3`$ then we see from the table that there are three possible simple $`G`$ with $`n_G=4`$ which will reproduce the $`S^3`$. In addition there is a substantial number of direct products made up from pairs of groups with $`n_G=2`$. For $`D=9+1`$, $`𝒩=1`$ SYM compactified on $`S^9`$ there is only one possible simple group, $`Sp(20)`$, but again many direct product groups with one member taken from the $`n_G=2`$ list and the other from the $`n_G=5`$ list. The groups involved in this dimensional reduction do not seem to have any particular physical significance; it could be pure numerology, just a mathematical game. However, let us now take $`D=10+1=11`$, and we see that there are two groups, $`SO(24)`$ and $`E_8`$. We have here a very surprising and striking fact - the group $`E_8`$ knows about 11 dimensions! There is a natural place for $`E_8`$ in string theory - through the heterotic construction and Narain compactification - roughly speaking it appears as a group with a self-dual and even root lattice <sup>3</sup><sup>3</sup>3We are not going to discuss $`SO(24)`$ beyond pointing out that it is the spatial symmetry group of the critical bosonic string in light-cone gauge, but the relation between 11 dimensions and bosonic strings is unclear to us for now.. The fact that these lattices exist in dimensions $`0\ (\mathrm{mod}\ 8)`$ fits nicely with the fact that the critical dimensions of bosonic and fermionic strings are 26 and 10, and the difference fits the group $`E_8\times E_8`$ (also $`SO(32)`$ of course, but for $`SO(32)`$ one may have open strings, contrary to $`E_8\times E_8`$). The fact that this group can be obtained in closed string theory, which is finite, guarantees that the Green-Schwarz mechanism of anomaly cancellation works.
Let us note that in the Narain toroidal compactification of the heterotic string on a lattice $`\mathrm{\Gamma }_{26-d,10-d}`$ with $`d<10`$ one may get other groups - but in this case the space has a smaller symmetry $`SO(d)`$. Now consider the Horava-Witten construction of heterotic M-theory. Here the ten-dimensional $`E_8\times E_8`$ heterotic string is related to an eleven-dimensional theory on the orbifold $`R^{10}\times S^1/Z_2`$, and the presence of 10-dimensional boundaries of 11-dimensional space leads to the existence of an $`E_8`$ gauge group on each boundary in order to cancel diffeomorphism anomalies. However it seems that $`E_8`$ is not directly related to 11 dimensions and knows nothing about the maximal Lorentz group $`SO(11)`$. But now we should take account of this new information about $`E_8`$ SQM. Consider heterotic M-theory on a space $`R^1\times T^9\times S^1/Z_2`$ - each $`E_8`$ gauge theory lives on a 9-torus $`T^9`$ and the $`R^1`$ factor is time. Let the radii of the torus be $`R_i=\lambda _iR,i=1,\ldots,9`$, where the $`\lambda _i`$ are conformal factors, and consider the limit $`R\to 0`$. We get a $`1+1`$-dimensional theory on the orbifold $`S^1/Z_2`$. This theory contains the conformal factors of $`T^9`$, whose quantum mechanics is trivial, and two-dimensional gravity with a copy of $`E_8`$ SQM at each of the two singular points on the orbifold. It is tempting to argue that the two-dimensional gravity is non-dynamical and that therefore we are just left with the two copies of $`E_8`$ SQM, each with an $`SO(11)`$ isometry group on their zero energy normalizable subspace; note that this would not depend on the value of $`R_{11}`$. If $`R_{11}`$ is small we have weakly coupled string theory compactified on a shrinking torus, and if it is large we have the strong coupling limit.
However it seems to us that the conclusion that there are two $`SO(11)`$ isometry groups cannot be true and that in fact the gravity must somehow mediate a coupling between the boundaries so that the quantum theory has only one $`SO(11)`$ symmetry. Our reason for this is a string theory argument that we give in the next paragraph. String theory on a $`T^9`$ which is shrunk to a point is $`T`$-dual to string theory on an infinitely large torus, i.e. essentially on $`R^9`$. Since the theory on $`T^9`$ is heterotic, so is the $`T`$-dual theory on $`R^9`$ (in contrast to non-heterotic M-theory, where $`d=11`$ supergravity corresponds to the strongly coupled $`IIA`$ string and the T-dual transforms it into type $`IIB`$). Now under $`T`$-duality we have that $$R\to \stackrel{~}{R}=\frac{1}{R},\qquad g_s\to \stackrel{~}{g}_s=g_s\sqrt{\frac{\stackrel{~}{R}}{R}}=\frac{g_s}{R}$$ $`(3)`$ so when $`R\to 0`$ we see that $`\stackrel{~}{R}\to \infty `$ and $`\stackrel{~}{R}_{11}\sim \stackrel{~}{g}_s\to \infty `$. Thus string theory arguments show that we should recover the $`SO(11)`$ symmetry. However if we started from strong coupling from the very beginning we cannot apply string arguments, but this is precisely the situation discussed in the previous paragraph. The group $`SO(11)`$ we get is hypothetically the full Lorentz group of M-theory. Because the membrane Lorentz algebra is defined in light-cone gauge we have to check that there is full Lorentz invariance, just as in a light-cone string theory. The classical Lorentz algebra becomes closed only in the $`N\to \infty `$ limit, and quantum-mechanically it is still unknown whether $`D=11`$ is a critical dimension. Although the full quantum commutator is still unknown, it has been shown that the lowest non-trivial anomalous terms in the commutators $`[M^{i-},M^{j-}]`$ are zero . The $`SO(11)`$ should be an exact quantum symmetry. In the case of $`R^{10}\times S^1/Z_2`$ there is an obvious $`SO(10)\times P(10)`$, where $`P(10)`$ denotes the Poincaré symmetry.
The full $`P(11)`$ is broken by the existence of the orbifold planes. This group is nothing but a contraction of the full $`SO(11)`$; in a picture of two concentric 10-spheres the full $`SO(11)`$ acts faithfully on the space between them. If we consider matrix theory on a nine-dimensional torus we have, instead of $`0+1`$ SQM, a fully fledged $`9+1`$ SYM; again this theory only has a maximal $`SO(10)`$ Lorentz symmetry, but if we add the $`9+1`$ dimensional $`E_8`$ theory and reduce the system as a whole again to $`0+1`$ dimensions, the full $`SO(11)`$ symmetry will be produced by the $`E_8`$ SQM. We have argued that the two $`E_8`$s should produce only one $`SO(11)`$. Suppose this is wrong; is there any other interpretation? A generic state of the $`E_8\times E_8`$ theory has the form $$|X,Y\rangle =\sum _{i,j=1}^{11}X^iY^j|0,i\rangle _1|0,j\rangle _2=|X\rangle _1|Y\rangle _2$$ (4) which looks as if we have $`R^{11}\otimes R^{11}=R^{121}`$, not at all what we are looking for. A possible loophole is to consider, instead of states $`|X\rangle _1|Y\rangle _2`$, operators $`|X\rangle \langle Y|`$. This simply means that we regard the second Hilbert space as conjugate to the first one, and the quantum mechanical description is given by a density matrix $$\rho (X,Y)=\sum _{i,j=1}^{11}X^iY^j|0,i\rangle \langle 0,j|$$ (5) This is actually a very nice picture, because so far we have no idea how even to start to construct dynamics on this 10-dimensional sphere. But now, if we have the space of non-trivial ground states in $`E_8`$ providing coordinates, and momenta being the cotangent bundle to this space, we can at least formulate the canonical symplectic form $$\mathrm{\Omega }=dp^i\wedge dq^i$$ (6) and start to develop the dynamics. For normalised wave functions the phase space is going to be $`S^{10}\times T^{*}S^{10}`$, so the two boundaries of 11-dimensional space in heterotic M-theory play the role of the two coordinates in the density matrix $`\rho (x+\eta ,x-\eta )`$ of a phase space of some dual theory which has exact $`SO(11)`$ symmetry. 
The difference between the coordinates, $`\eta `$, is parametrised by the tangent bundle to $`S^{10}`$, and one can see from a Wigner function that a momentum is dual to this difference, which justifies the structure of the phase space. How to find a Hamiltonian on this phase space and formulate a dynamics is a question which remains to be answered!

We would like to thank André Smilga for telling us about the results of and the PPARC Fast Travel fund for supporting his visit here.
# The redshift–space two–point correlation function of galaxy groups in the CfA2 and SSRS2 surveys

## 1 Introduction

Loose groups of galaxies, the low–mass tail of the mass distribution of galaxy systems, fill an important gap in the mass range from galaxies to rich clusters. Until now, clustering properties of loose groups have been studied on the basis of rather small samples. The results are consequently uncertain and even contradictory. Nevertheless, clustering properties of groups are shown to be robust against the choice of the identification algorithm, provided systems are identified with comparable number overdensity thresholds (Frederic 1995b). The two–point correlation function, CF, of galaxies and of galaxy systems constitutes an important measure of the large–scale distribution of galaxies (e.g., Davis & Peebles davis (1983); Bahcall & Soneira bahcall83 (1983); de Lapparent et al. delapparent (1988); Tucker et al. tucker (1997); Croft et al. croft (1997)). From galaxies to clusters, the two–point correlation function in redshift–space, $`\xi (s)`$ (e.g., Peebles peebles (1980)), is consistent, within errors, with a power–law form $`\xi (s)=(s/s_0)^{-\gamma }`$ with $`\gamma \simeq 1.5`$–$`2`$ for a variety of systems. The correlation length, $`s_0`$, ranges from about 5–7.5 $`\mathrm{h}^{-1}`$ Mpc for galaxies (e.g., Davis & Peebles davis (1983); Loveday et al. loveday (1996); Tucker et al. tucker (1997); Willmer et al. willmer (1998); Guzzo et al. guzzo (1999)) to $`s_0>15`$ $`\mathrm{h}^{-1}`$ Mpc for galaxy clusters (e.g., Bahcall & Soneira bahcall83 (1983); Postman et al. postman (1991); Peacock & West peacock (1992); Croft et al. croft (1997); Abadi et al. abadi (1998); Borgani et al. borgani (1999); Miller et al. miller (1999); Moscardini et al. moscardini (1999)). As far as loose groups are concerned, previous determinations of the CF are very uncertain. 
From the study of 137 groups (within CfA1) and 87 groups (within SSRS1), Jing & Zhang (jing (1988)) and Maia & da Costa (maia (1990)) respectively find that the group–group CF, $`\xi _{\mathrm{GG}}(s)`$, has a lower amplitude than the galaxy–galaxy CF, $`\xi _{\mathrm{gg}}(s)`$. Analyzing 128 groups in a sub–volume of CfA2N, Ramella et al. (ramella90 (1990); hereafter RGH90) find that the amplitudes of $`\xi _{\mathrm{GG}}(s)`$ and $`\xi _{\mathrm{gg}}(s)`$ are consistent (see also Kalinkov & Kuneva kalinkov (1990)). Finally, Trasarti-Battistoni et al. (trasarti (1997)) study 192 groups in the Perseus–Pisces region and find that the amplitude of $`\xi _{\mathrm{GG}}(s)`$ exceeds that of $`\xi _{\mathrm{gg}}(s)`$. The theoretical expectations for the relative strength of group and galaxy clustering are also contradictory. Frederic (1995b) determines the correlation function for galaxy and group halos in the CDM numerical simulations by Gelb (gelb (1992)) and finds that groups are more strongly correlated than galaxies. In contrast, Kashlinsky (kashlinsky (1987)), on the basis of an analytical approach to the clustering properties of collapsed systems of different masses, concludes that groups and individual galaxies should be correlated with the same amplitude. Here we compute the two–point correlation function (in redshift space) for 885 groups of galaxies identified in the combined CfA2 and SSRS2 redshift surveys. This sample is characterized by its large extent (more than five times the volumes previously studied) and by the homogeneity of the identification process (the friends–of–friends algorithm FOFA; Ramella et al. ramella97 (1997) – hereafter RPG97). Moreover, we compare the group–group CF to that computed for galaxies in order to determine the relative clustering properties of groups and galaxies. Because we use the same galaxy sample in which groups are identified, we avoid possible effects of fluctuations due to the volume sampled. In Sect. 
2 we briefly describe the data; in Sect. 3 we describe the estimation of the two–point correlation function; in Sect. 4 we compute the correlation function of groups and compare it to that for galaxies; in Sect. 5 we summarize our results and draw our conclusions. Throughout the paper, errors are at the $`68\%`$ confidence level, and the Hubble constant is $`H_0=100\mathrm{h}`$ km s$`^{-1}`$ Mpc$`^{-1}`$.

## 2 Galaxy and group catalogs

We extract the sample of galaxies from the CfA2 North (CfA2N) and South (CfA2S) (Geller & Huchra geller (1989); Huchra et al. huchra (1995); Falco et al. falco (1999)), and the SSRS2 North (SSRS2N) and South (SSRS2S) (da Costa et al. dacosta98 (1998)) redshift surveys. These surveys are complete to $`m_{\mathrm{B}(0)}=15.5`$ and cover more than one–third of the sky, i.e. most of the extragalactic sky. The original papers contain detailed descriptions of the observations and of the data reduction. The velocities we use are heliocentric; they include corrections for solar motion with respect to the Local Group and for infall toward the center of the Virgo cluster (see RPG97 for details). As in previous analyses of the CfA2 surveys (e.g., Park et al. park (1994); Marzke et al. marzke95 (1995)), we discard regions of large galactic extinction. The total sample includes 13435 galaxies with radial velocity $`V<15000`$ km s$`^{-1}`$. We use the catalogs of groups identified within CfA2N by RPG97 and within SSRS2 by Ramella et al. (in preparation). The identification method is a friends–of–friends algorithm (FOFA) which selects systems of at least three members above a given number density threshold in redshift space. In particular, RPG97 and Ramella et al. (in preparation) use the number density threshold $`\delta \rho _\mathrm{N}/\rho _\mathrm{N}=80`$ and a line–of–sight link $`V_0=350`$ km s$`^{-1}`$ at the fiducial velocity $`V_\mathrm{f}=1000`$ km s$`^{-1}`$. 
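The friends–of–friends idea behind FOFA can be sketched with a toy implementation. This is a minimal version of our own with a single fixed linking length in an abstract coordinate space; the actual FOFA uses separate projected and line–of–sight links that scale with the selection function:

```python
import numpy as np

def friends_of_friends(pos, link):
    """Toy friends-of-friends: any two points closer than `link` are
    'friends'; groups are the connected components (union-find)."""
    n = len(pos)
    parent = list(range(n))

    def find(i):
        # Path-halving find of the component root.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < link:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # merge the two components
    return np.array([find(i) for i in range(n)])
```

A group catalog would then keep only components with at least three members, mirroring the criterion above.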
We run FOFA with these parameters on CfA2S and produce a group catalog for this survey, too. The combined catalog contains a total of 885 groups that constitute a homogeneous set of systems objectively identified in redshift space. The group catalogs are limited to radial velocities $`V\le 12000`$ km s$`^{-1}`$, but members are allowed out to $`V\le 15000`$ km s$`^{-1}`$. We confine the galaxy sample to $`V\le 12000`$ km s$`^{-1}`$ and are left with a total of 12290 galaxies. Table 1 lists the numbers of groups, $`N_\mathrm{G}`$, and the numbers of galaxies, $`N_\mathrm{g}`$, for each sample. Fig. 1 shows the distribution of the galaxy and group samples on the sky.

## 3 Estimation of the correlation function

We compute the two–point correlation functions in redshift space for groups and galaxies (hereafter $`\xi _{\mathrm{GG}}(s)`$ and $`\xi _{\mathrm{gg}}(s)`$, respectively). The formalism in the two cases is the same. We define the separation in redshift space, $`s`$, as: $$s=\frac{\sqrt{V_\mathrm{i}^2+V_\mathrm{j}^2-2V_\mathrm{i}V_\mathrm{j}\mathrm{cos}\theta _{\mathrm{ij}}}}{H_0},$$ (1) where $`V_\mathrm{i}`$ and $`V_\mathrm{j}`$ are the velocities of two groups (or galaxies) separated by an angle $`\theta _{\mathrm{ij}}`$ on the sky. Following Hamilton (hamilton (1993)) we estimate $`\xi (s)`$ with: $$\xi (s)=\frac{DD(s)RR(s)}{[DR(s)]^2}-1,$$ (2) where $`DD(s)`$, $`RR(s)`$, and $`DR(s)`$ are the numbers of data–data, random–random, and data–random pairs with separations in the interval $`(s,s+ds)`$. We build the control sample by filling the survey volume with a uniform random distribution of the same number of points as in the data. The points are distributed in depth according to the selection function of the surveys, $`\mathrm{\Phi }(V)`$. In order to decrease the statistical fluctuations in the determination of $`\xi (s)`$, we average the results obtained using several different realizations of the control sample. 
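Equations (1) and (2) can be sketched in a few lines. This is a minimal illustration of our own, not the pipeline actually used: it does brute-force O(N²) pair counts, normalizes each count by its total number of pairs so the Hamilton ratio tends to unity for an unclustered sample, and assumes the data and random catalogs are given as `(V, ra, dec)` tuples:

```python
import numpy as np

def pair_seps(v1, ra1, dec1, v2, ra2, dec2, H0=100.0):
    """All cross-pair redshift-space separations s of Eq. (1), in h^-1 Mpc.
    Velocities in km/s, angles in radians."""
    cos_th = (np.sin(dec1)[:, None] * np.sin(dec2)[None, :]
              + np.cos(dec1)[:, None] * np.cos(dec2)[None, :]
              * np.cos(ra1[:, None] - ra2[None, :]))
    s2 = v1[:, None]**2 + v2[None, :]**2 - 2.0 * np.outer(v1, v2) * cos_th
    return np.sqrt(np.clip(s2, 0.0, None)) / H0

def xi_hamilton(data, rand, bins):
    """Hamilton (1993) estimator of Eq. (2): xi = DD*RR/DR^2 - 1.
    `data` and `rand` are (V, ra, dec) tuples of 1-d arrays."""
    iu_d = np.triu_indices(len(data[0]), k=1)   # unique data-data pairs
    iu_r = np.triu_indices(len(rand[0]), k=1)   # unique random-random pairs
    dd = np.histogram(pair_seps(*data, *data)[iu_d], bins)[0] / len(iu_d[0])
    rr = np.histogram(pair_seps(*rand, *rand)[iu_r], bins)[0] / len(iu_r[0])
    dr = (np.histogram(pair_seps(*data, *rand).ravel(), bins)[0]
          / (len(data[0]) * len(rand[0])))
    return dd * rr / np.maximum(dr, 1e-30)**2 - 1.0
```

Feeding in a random catalog drawn from the same distribution as the data should return $`\xi (s)\approx 0`$ in every bin, which is a useful sanity check before applying the estimator to real catalogs.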
We compute 50 realizations in the case of groups and 5 in the case of galaxies. Unless otherwise specified, we compute the “weighted” correlation function by substituting the counts of pairs with $`w_\mathrm{i}w_\mathrm{j}`$, the weighted sum of pairs, which takes into account the selection effects of the sample used. In the case of a sample characterized by the same selection function, $`\mathrm{\Phi }(V)`$, volumes are equally weighted and $`w_\mathrm{i}=1/\mathrm{\Phi }(V_\mathrm{i})`$ is the weight of a group (or galaxy) with velocity $`V_\mathrm{i}`$. The appropriate selection function for a magnitude–limited sample (e.g., de Lapparent et al. delapparent (1988); Park et al. park (1994); Willmer et al. willmer (1998)) is: $$\mathrm{\Phi }(V)=\frac{\int _{-\mathrm{\infty }}^{M(V)}\varphi (M)dM}{\int _{-\mathrm{\infty }}^{M_{\mathrm{max}}}\varphi (M)dM},$$ (3) where $`\varphi (M)`$ is the Schechter (schechter (1976)) form of the luminosity function and $`M_{\mathrm{max}}`$ is a low–luminosity cut–off. We chose $`M_{\mathrm{max}}=-14.5`$, the absolute magnitude corresponding to the limiting apparent magnitude of the survey at the fiducial velocity $`V_\mathrm{f}=1000`$ km s$`^{-1}`$. This value of $`V_\mathrm{f}`$ is the same as in RPG97. The Schechter parameters of the galaxy luminosity function, before the Malmquist bias correction, are: $`M^{}=-19.1`$, $`\alpha =-1.1`$ for CfA2, and $`M^{}=-19.7`$, $`\alpha =-1.2`$ for SSRS2 (Marzke et al. marzke94 (1994); marzke98 (1998)). We assume that the group selection function is the same as for galaxies. In fact, the velocity distributions of groups, $`N_\mathrm{G}(V)`$, and of galaxies, $`N_\mathrm{g}(V)`$, are not significantly different according to the Kolmogorov–Smirnov test (cf. Fig. 2). RGH90 and Trasarti-Battistoni et al. (trasarti (1997)), and Frederic (1995b) make the same assumption for observed and simulated catalogs, respectively. The different luminosity functions of CfA2 and SSRS2 correspond to different selection functions. 
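The selection function of Eq. (3) is straightforward to evaluate numerically. A sketch of our own (not the authors' code), using the CfA2 Schechter parameters and the m = 15.5 magnitude limit quoted above, and approximating distances by pure Hubble flow:

```python
import numpy as np

def schechter(M, M_star=-19.1, alpha=-1.1):
    # Schechter (1976) luminosity function of absolute magnitude M;
    # the overall normalization phi* cancels in the ratio of Eq. (3).
    x = 10.0**(0.4 * (M_star - M))
    return x**(alpha + 1) * np.exp(-x)

def selection_function(V, m_lim=15.5, M_max=-14.5, H0=100.0):
    """Phi(V) of Eq. (3) for a magnitude-limited sample (V in km/s)."""
    # Faintest absolute magnitude visible at velocity V (distance V/H0 Mpc).
    M_V = m_lim - 25.0 - 5.0 * np.log10(V / H0)
    # Trapezoidal integrals; the -infinity (bright) end is truncated at
    # M = -30, where the luminosity function is utterly negligible.
    g_num = np.linspace(-30.0, min(M_V, M_max), 2000)
    g_den = np.linspace(-30.0, M_max, 2000)
    trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
    return trapz(schechter(g_num), g_num) / trapz(schechter(g_den), g_den)
```

At the fiducial velocity $`V_\mathrm{f}=1000`$ km s$`^{-1}`$ the visibility limit equals $`M_{\mathrm{max}}=-14.5`$ and $`\mathrm{\Phi }=1`$; beyond it $`\mathrm{\Phi }`$ falls monotonically, so the weights $`1/\mathrm{\Phi }`$ grow with distance.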
For this reason we assign to a group (galaxy) $`i`$, belonging to the subsample $`k`$, the weight given by: $$w_\mathrm{i}=\frac{1}{\mathrm{\Phi }_\mathrm{k}(V_\mathrm{i})n_\mathrm{k}},$$ (4) where $`n_\mathrm{k}`$ is the mean number density of groups (galaxies) of that subsample (e.g. Hermit et al. hermit (1996)). We compute the density as $`n_\mathrm{k}=1/𝒱_\mathrm{k}\mathrm{\Sigma }_\mathrm{i}[1/\mathrm{\Phi }(V_\mathrm{i})]`$, where the sum is over all the groups (galaxies) in the subsample volume, $`𝒱_\mathrm{k}`$ (Yahil et al. yahil (1991)). In our analysis, $`𝒱_\mathrm{k}`$ is the effective volume of the subsample. Because of the different selection functions, we also build a control sample for each subsample separately, and, conservatively, we do not consider pairs of groups (galaxies) linking two different subsamples. In this way we also avoid crossing large unsurveyed regions of the sky. We compute the errors on $`\xi (s)`$ from 100 bootstrap re–samplings of the data (e.g., Mo et al. mo (1992)). Note that the bootstrap–resampling technique, which overestimates the error in individual bins, represents a conservative choice in this work.

## 4 The group–group correlation function

We plot the group–group CF, $`\xi _{\mathrm{GG}}(s)`$, in Fig. 3. In the same figure we also plot the galaxy–galaxy CF, $`\xi _{\mathrm{gg}}(s)`$. On small scales ($`s<3.5`$ $`\mathrm{h}^{-1}`$ Mpc) $`\xi _{\mathrm{GG}}(s)`$ starts dropping because of the anti–correlation due to the typical size of groups. On large scales ($`s>15`$ $`\mathrm{h}^{-1}`$ Mpc) the signal–to–noise ratio of $`\xi _{\mathrm{GG}}(s)`$ drops drastically. We thus limit our analysis to the separation range $`3.5<s<15`$ $`\mathrm{h}^{-1}`$ Mpc. The main physical result in Fig. 3 is that $`\xi _{\mathrm{GG}}(s)`$ has a larger amplitude than $`\xi _{\mathrm{gg}}(s)`$. This property of the CFs is also evident in Fig. 4, where we plot the ratio $`\xi _{\mathrm{GG}}(s)`$/$`\xi _{\mathrm{gg}}(s)`$ on a linear scale. 
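The bootstrap error estimate described above can be sketched generically. In this illustration of our own, `xi_func` is a placeholder standing for whatever pipeline maps a catalog to binned $`\xi (s)`$ values:

```python
import numpy as np

def bootstrap_errors(catalog, xi_func, n_boot=100, seed=1):
    """Per-bin bootstrap errors: resample the catalog with replacement
    n_boot times, recompute xi each time, and take the standard
    deviation over the resamplings."""
    rng = np.random.default_rng(seed)
    n = len(catalog)
    xi_samples = [xi_func(catalog[rng.integers(0, n, n)])
                  for _ in range(n_boot)]
    return np.std(xi_samples, axis=0)
```

As noted above, resampling individual objects tends to overestimate the per-bin error, which is the conservative choice.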
Over the $`s`$–range of interest, the values of the ratio are roughly constant within the errors. In order to give an estimate of the relative behavior of groups and galaxies we compute the mean of the values of the ratio. We obtain $`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>`$=$`1.64\pm 0.16`$.

### 4.1 The CF of rich groups

Groups with a number of members $`N_{\mathrm{mem}}\ge 5`$ are generally reliable, as shown both by optical and X–ray analyses (Ramella et al. ramella95 (1995); Mahdavi et al. mahdavi (1997)). On the other hand, the reliability of groups with fewer members is often questionable. In particular, the analysis of the CfA2N survey performed by RPG97 and the analysis of a CDM model by Frederic (1995a) show that a significant fraction of the triples and quadruples in group catalogs could be spurious. We consider the 321 rich groups with $`N_{\mathrm{mem}}\ge 5`$ and find again that groups are more correlated than galaxies. Moreover, we find evidence that the CF computed for richer groups is higher than that computed for poorer groups, i.e. $`<\xi _{\mathrm{GG},\mathrm{rich}}/\xi _{\mathrm{gg}}>=1.48\pm 0.16`$ and $`<\xi _{\mathrm{GG},\mathrm{poor}}/\xi _{\mathrm{gg}}>=1.10\pm 0.14`$ (cf. Fig. 5). The component of spurious groups among poor groups could be responsible for the lower amplitude of the $`\xi _{\mathrm{GG}}(s)`$ of poor groups compared to that of rich groups. In fact, spurious groups should be distributed like non–member galaxies. However, at least part of the observed higher clustering amplitude of rich groups could be due to the existence of a clustering amplitude vs. richness relationship. The relationship has been discussed for a variety of systems by several authors (e.g. Bahcall & West bahcall92 (1992); Croft et al. croft (1997); Miller et al. miller (1999)). 
Richness is usually taken as a measure of the mass of the system, mass being the real, interesting physical quantity directly related to the predictions of cosmological models.

### 4.2 The CF in the volume–limited sample

For our groups, richness is not a good physical parameter. A better parameter is the group (line–of–sight) velocity dispersion, $`\sigma _\mathrm{v}`$ (e.g. RPG97). In a magnitude–limited sample any group selection based on velocity dispersion will affect the selection function in an “a priori” unknown way. To avoid this problem, we analyze the volume–limited group sample built by Ramella et al. (in preparation), who run an appropriately modified version of FOFA within volume–limited sub–samples of the CfA2 and SSRS2 galaxy surveys. In particular, we consider the 139 distance–limited groups within $`V\le 7800`$ km s$`^{-1}`$, roughly corresponding to the effective depth of CfA2. We cut the volume–limited galaxy catalogs in the same way. Within this sample we compute the “unweighted” CF estimator, i.e. we set $`w=1`$ for all groups/galaxies. For this sample the useful $`s`$–range is $`3.5<s<12`$ $`\mathrm{h}^{-1}`$ Mpc (see Fig. 6). We find that the ratio for the total volume–limited sample is $`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>`$$`=1.58\pm 0.10`$, similar to that computed for the magnitude–limited sample ($`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>\simeq 1.6`$). This result reassures us about the reliability of the selection function we assume for groups. In order to check a possible dependence of $`\xi _{\mathrm{GG}}(s)`$ on $`\sigma _\mathrm{v}`$, we divide the group volume–limited sample into two subsamples of equal size, one containing groups with $`\sigma _\mathrm{v}\ge 214`$ km s$`^{-1}`$, the other including the remaining low velocity dispersion groups. 
We find that high–$`\sigma _\mathrm{v}`$ systems are more correlated than those with low $`\sigma _\mathrm{v}`$ ($`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>`$=$`2.14\pm 0.37`$ and $`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>`$=$`1.29\pm 0.17`$, respectively). This evidence is in agreement with that found for clusters, and suggests a continuum of clustering properties for all galaxy systems. In this context, it is appropriate to compare the groups of the volume–limited sample, characterized by the median velocity dispersion $`\sigma _\mathrm{v}=214`$ km s$`^{-1}`$ and by the mean group separation $`d\sim 16`$ $`\mathrm{h}^{-1}`$ Mpc, to rich clusters ($`\sigma _\mathrm{v}\sim 700`$ km s$`^{-1}`$; $`d\sim 50`$ $`\mathrm{h}^{-1}`$ Mpc; e.g. Zabludoff et al. zabludoff (1993); Peacock & West peacock (1992)). We fit $`\xi _{\mathrm{GG}}(s)`$ to the form $`\xi (s)=(s/s_0)^{-\gamma }`$ with a non–linear weighted least squares method and find $`\gamma =1.9\pm 0.7`$ and $`s_0=8\pm 1`$ $`\mathrm{h}^{-1}`$ Mpc. Note that groups show a similar slope but a significantly smaller correlation length than optically or X–ray selected clusters, for which $`s_0>15`$ $`\mathrm{h}^{-1}`$ Mpc (e.g. Bahcall & West bahcall92 (1992); Croft et al. croft (1997); Abadi et al. abadi (1998); Borgani et al. borgani (1999); Miller et al. miller (1999)). Our results agree with the predictions of those N–body cosmological simulations that also correctly predict the observed cluster–cluster CF (e.g. cf. our ($`s_0,d`$) with Fig. 8 of Governato et al. governato (1999)).

### 4.3 The unweighted CF

In order to verify the stability of our results against variations of the weighting scheme, we compute the unweighted CF, $`\xi ^{\mathrm{UW}}`$, for the magnitude–limited sample. We find that, as in the weighted case, the amplitude of $`\xi _{\mathrm{GG}}^{\mathrm{UW}}(s)`$ is still significantly higher than the amplitude of $`\xi _{\mathrm{gg}}^{\mathrm{UW}}(s)`$, $`<\xi _{\mathrm{GG}}^{\mathrm{UW}}/\xi _{\mathrm{gg}}^{\mathrm{UW}}>`$=$`1.18\pm 0.05`$. 
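The power-law fit of Sect. 4.2 can be sketched with a short script. As a minimal illustration of our own (not the authors' code), we exploit the fact that the power law is a straight line in log space, so a weighted linear fit suffices; the paper itself fits $`\xi (s)`$ directly with a non-linear method, and for clean power-law data the two agree:

```python
import numpy as np

def fit_power_law(s, xi, xi_err):
    """Fit xi(s) = (s/s0)^(-gamma) by weighted least squares in log
    space: log xi = -gamma*log s + gamma*log s0."""
    ls, lxi = np.log(s), np.log(xi)
    w = xi / xi_err                      # ~1/sigma of log(xi)
    slope, intercept = np.polyfit(ls, lxi, 1, w=w)
    gamma = -slope
    s0 = np.exp(intercept / gamma)
    return s0, gamma
```

Applied to the binned $`\xi _{\mathrm{GG}}(s)`$ values and their bootstrap errors over the useful separation range, a fit of this kind returns the correlation length and slope quoted above.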
We also find that the amplitudes of $`\xi _{\mathrm{GG}}^{\mathrm{UW}}(s)`$ and $`\xi _{\mathrm{gg}}^{\mathrm{UW}}(s)`$ are both significantly lower than the weighted estimates. The differences between the results of the two weighting schemes arise from the fact that the weighted CF weights each volume of space equally and therefore better traces the clustering of more distant objects. In fact, when we divide the group/galaxy catalogs into two subsamples of equal size according to group/galaxy distances, the distant samples ($`V>6680`$ km s$`^{-1}`$) give CFs with higher amplitude, i.e. $`<\xi _{\mathrm{GG},\mathrm{distant}}^{\mathrm{UW}}/\xi _{\mathrm{GG},\mathrm{nearby}}^{\mathrm{UW}}>=2.15\pm 0.31`$ and $`<\xi _{\mathrm{gg},\mathrm{distant}}^{\mathrm{UW}}/\xi _{\mathrm{gg},\mathrm{nearby}}^{\mathrm{UW}}>=1.43\pm 0.08`$. Moreover, the distant samples ($`V>6680`$ km s$`^{-1}`$) give $`<\xi _{\mathrm{GG},\mathrm{distant}}^{\mathrm{UW}}/\xi _{\mathrm{gg},\mathrm{distant}}^{\mathrm{UW}}>=1.43\pm 0.12`$, in closer agreement with the result of the weighted analysis. The ratio $`\xi _{\mathrm{GG}}^{\mathrm{UW}}/\xi _{\mathrm{gg}}^{\mathrm{UW}}`$ for the whole sample, as well as for its nearby and distant parts, is shown in Fig. 7. As for a physical explanation, the fact that $`\xi _{\mathrm{gg}}^{\mathrm{UW}}(s)`$$`<`$$`\xi _{\mathrm{gg}}(s)`$ could be the consequence of a dependence of clustering on luminosity, since the unweighted CF estimator is more sensitive to the clustering of nearer, fainter groups/galaxies (e.g., Park et al. park (1994)). In fact, a dependence of clustering on luminosity has been pointed out for the galaxy–galaxy CF (e.g., Benoist et al. benoist (1996); Cappi et al. cappi (1998); Willmer et al. willmer (1998)). In addition, the greater strength of $`\xi _{\mathrm{gg}}(s)`$ could be explained by different clustering properties in different volumes of the Universe: e.g., Ramella et al. 
(ramella92 (1992)) find that the strength of the galaxy CF is very high in the Great Wall. In the volume we examine, the two biggest structures, the Great Wall and the Southern Wall, both lie in distant regions (e.g. da Costa et al. dacosta94 (1994)), and therefore their weight is larger in the weighted CF scheme. It is reasonable to expect also that distant groups, which are brighter (and presumably more massive) and which preferably lie in the two big structures, are more strongly correlated than nearby groups, leading to the observed $`\xi _{\mathrm{GG}}^{\mathrm{UW}}(s)`$$`<`$$`\xi _{\mathrm{GG}}(s)`$.

## 5 Summary and conclusions

We measure the two–point redshift–space correlation function of loose groups, $`\xi _{\mathrm{GG}}(s)`$, for the combined CfA2 and SSRS2 surveys. Our combined group catalog constitutes the largest homogeneous sample available (885 groups). We compare $`\xi _{\mathrm{GG}}(s)`$ with the correlation function of galaxies, $`\xi _{\mathrm{gg}}(s)`$, in the same volumes. Our main results are the following:

1. Using the whole sample we find that groups are significantly more clustered than galaxies, $`<\xi _{\mathrm{GG}}/\xi _{\mathrm{gg}}>`$=$`1.64\pm 0.16`$, consistent with the result by Trasarti-Battistoni et al. (1997), based on a much smaller sample. This ratio can be considered a lower limit, given the possible presence of unphysical groups.

2. Groups are significantly less clustered than clusters. In particular, we find $`\gamma =1.9\pm 0.7`$ and $`s_0=8\pm 1`$ $`\mathrm{h}^{-1}`$ Mpc for 139 groups identified in a volume–limited sample ($`V\le 7800`$ km s$`^{-1}`$, median velocity dispersion $`\sigma _\mathrm{v}\sim 200`$ km s$`^{-1}`$, and mean group separation $`d\sim 16`$ $`\mathrm{h}^{-1}`$ Mpc). This result can be compared with that for galaxy clusters ($`s_0\sim 15`$–$`20`$ $`\mathrm{h}^{-1}`$ Mpc for systems with $`\sigma _\mathrm{v}\sim 700`$ km s$`^{-1}`$ and $`d\sim 50`$ $`\mathrm{h}^{-1}`$ Mpc; e.g., Bahcall & West bahcall92 (1992); Croft et al. croft (1997); Abadi et al. 
abadi (1998); Borgani et al. borgani (1999); Miller et al. miller (1999)).

3. There is a tendency for the clustering amplitude to increase with the group velocity dispersion $`\sigma _\mathrm{v}`$, which is the best indicator of group mass at our disposal.

We conclude that there is a continuum of clustering properties of galaxy systems, from poor groups to very rich clusters, with correlation length increasing with increasing mass of the system.

###### Acknowledgements.
We thank Stefano Borgani, Antonaldo Diaferio, and Margaret Geller for useful discussions. Special thanks to Massimo Ramella for enlightening suggestions. M.G. wishes to acknowledge Osservatorio Astronomico di Trieste for a grant received during the preparation of this work. This work has been partially supported by the Italian Ministry of University and Scientific and Technological Research (MURST), by the Italian Space Agency (ASI), and by the Italian Research Council (CNR-GNA).
# The Evolution of Neutrino Astronomy

How did neutrino astronomy evolve? Are there any useful lessons for astronomers and physicists embarking on new observational ventures today? We will answer the first question from our perspective. You, the reader, can decide for yourself whether there are any useful lessons. The possibility of observing solar neutrinos began to be discussed seriously following the 1958 experimental discovery by Holmgren and Johnston that the cross section for production of the isotope <sup>7</sup>Be by the fusion reaction <sup>3</sup>He + <sup>4</sup>He → $`{}_{}{}^{7}\mathrm{Be}+\gamma `$ was more than a thousand times larger than was previously believed. This result led Willy Fowler and Al Cameron to suggest that <sup>8</sup>B might be produced in the sun in sufficient quantities by the reaction <sup>7</sup>Be + $`p`$ → $`{}_{}{}^{8}\mathrm{B}+\gamma `$ to produce an observable flux of high-energy neutrinos from <sup>8</sup>B beta-decay. Figure 1 shows the early evolution of neutrino astronomy as described in a viewgraph from a colloquium given by Ray at Brookhaven National Laboratory in 1971. We begin our story in 1964, when we published back-to-back papers in Physical Review Letters arguing that a $`100,000`$ gallon detector of perchloroethylene could be built which would measure the solar neutrino capture rate on chlorine.<sup>1</sup> Our motivation was to use neutrinos to look into the interior of the sun and thereby test directly the theory of stellar evolution and nuclear energy generation in stars. The particular development that made us realize that the experiment could be done was the demonstration (by John in late 1963) that the principal neutrino absorption cross section on chlorine was twenty times larger than previously calculated, due to a super-allowed nuclear transition to an excited state of argon.

If you have a good idea today, it will likely require many committees, many years, and many people in order to get the project from concept to observation. The situation was very different in 1964. Once the decision to go ahead was made, a very small team designed and built the experiment; the entire team consisted of Ray, Don Harmer (on leave from Georgia Tech), and John Galvin (a technician who worked part-time on the experiment). Kenneth Hoffman, a (then) young engineer, provided expert advice on technical questions. The money came out of the chemistry budget at Brookhaven National Laboratory. Neither of us remembers a formal proposal ever being written to a funding agency. The total capital expenditure to excavate the cavity in the Homestake Gold Mine in South Dakota, to build the tank, and to purchase the liquid was $`0.6`$ million dollars (in 1965 dollars).

<sup>1</sup> We were asked to write a brief Millennium Essay for the PASP on the evolution of neutrino astronomy from our personal perspective. We present the way the history looks to us more than thirty-five years after our collaboration began, and emphasize those aspects of the development of neutrino astronomy that may be of interest or of use to physicists and astrophysicists today. We stress that all history is incomplete and distorted by the passage of time and the fading of memories. For earlier, more detailed reviews, the reader can consult two articles we wrote when the subject was still in its childhood and our memories were more immediate (Bahcall & Davis 1976, 1982). The interested reader can find references in these articles to the early works of Bethe, of Holmgren and Johnston, and of Fowler and Cameron, and to the works of many other early pioneers in stellar fusion and stellar astrophysics.
During the period 1964-1967, Fred Reines and his group worked on three solar neutrino experiments in which recoil electrons produced by neutrino interactions would be detected by observing the associated light in an organic scintillator. Two of the experiments, which exploited the elastic scattering of neutrinos by electrons, were actually performed and led to a (higher-than-predicted) upper limit on the <sup>8</sup>B solar neutrino flux. The third experiment, which was planned to detect neutrinos absorbed by <sup>7</sup>Li, was abandoned after the initial chlorine results showed that the solar neutrino flux was low. These three experiments introduced the technology of organic scintillators into the arena of solar neutrino research, a technique that will only finally be used in 2001, when the BOREXINO detector will begin to detect low energy solar neutrinos. Also during this period, John investigated the properties of neutrino-electron scattering and showed that the forward peaking from <sup>8</sup>B neutrinos is large, a feature that was incorporated two and a half decades later in the Kamiokande (and later SuperKamiokande) water Cherenkov detectors.

Figure: Ray Davis shows John Bahcall the tank containing 100,000 gallons of perchloroethylene. The picture was taken in the Homestake mine shortly before the experiment began operating.

The first results from the chlorine experiment were published in 1968, again in a back-to-back comparison (in PRL) between measurements and standard predictions. The initial results have been remarkably robust; the conflict between chlorine measurements and standard solar model predictions has lasted over three decades. The main improvement has been in the slow reduction of the uncertainties in both the experiment and the theory. 
The efficiency of the Homestake chlorine experiment was tested by recovering carrier solutions, by producing <sup>37</sup>Ar in the tank with neutron sources, and by recovering <sup>36</sup>Cl inserted in a tank of perchloroethylene. The solar model was verified by comparison with precise helioseismological measurements. For more than two decades, the best estimates for the observation and for the theoretical prediction have remained essentially constant. The discrepancy between the standard solar model prediction and the chlorine observation became widely known as “the solar neutrino problem.” Very few people worked on solar neutrinos during the period 1968-1988. The chlorine experiment was the only solar neutrino experiment to provide data in these two decades. It is not easy for us to explain why this was the case; we certainly tried hard to interest others in doing different experiments, and we gave many joint presentations about the solar neutrino problem. Each of us had one principal collaborator during this long period, Bruce Cleveland (experimental) and Roger Ulrich (solar models). A large effort to develop a chlorine experiment in the Soviet Union was led by George Zatsepin, but it was delayed by the practical difficulties of creating a suitable underground site for the detector. Eventually, the effort was converted into a successful gallium detector, SAGE, led by Vladimir Gavrin and Tom Bowles, which gave its first results in 1990. Only one year after the first (1968) chlorine results were published, Vladimir Gribov and Bruno Pontecorvo proposed that the explanation of the solar neutrino problem was that neutrinos oscillated between the state in which they were created and a more difficult to detect state. This explanation, which is the consensus view today, was widely disbelieved by nearly all of the particle physicists we talked to in those days. 
In the form in which solar neutrino oscillations were originally proposed by Gribov and Pontecorvo, the process required that the mixing angles between neutrino states be much larger than the quark mixing angles, something which most theoretical physicists believed, at that time, was unlikely. Ironically, a flood of particle theory papers explained, more or less ‘naturally’, the large neutrino mixing angle that was decisively demonstrated thirty years later in the SuperKamiokande atmospheric neutrino experiment. One of the most crucial events for early solar neutrino research occurred in 1968 while we were relaxing in the sun after a swim at the Caltech pool. Gordon Garmire (now a PI for the Chandra X-ray satellite) came up to Ray, introduced himself, and said he had heard about the chlorine experiment. He suggested to Ray that it might be possible to reduce the background significantly by using pulse rise time discrimination, a technique used for proportional counters in space experiments. The desired fast-rising pulses from <sup>37</sup>Ar Auger electrons are different from the slower rising pulses from a background gamma or cosmic ray. Ray went back to Brookhaven and asked the local electronics experts if it would be possible to implement this technique for the very small counters he used. The initial answer was that the available amplifiers were not fast enough to be used for this purpose with the small solar neutrino counters. But, in about a year, three first-class electronics engineers at BNL (Veljko Radeka, Bob Chase, and Lee Rogers) were able to build electronics fast enough to be used to measure the rise time in Ray’s counters. This ‘swimming-pool’ improvement was crucial for the success of the chlorine experiment and the subsequent radio-chemical gallium solar neutrino experiments, SAGE, GALLEX, and GNO. Measurements of the rise time as well as the pulse energy greatly reduce the background for radio-chemical experiments.
The backgrounds can be as low as one event in three months. In 1978, after a decade of disagreement between the Homestake neutrino experiment and standard solar model predictions, it was clear to everyone that the subject had reached an impasse and a new experiment was required. The chlorine experiment is, according to standard solar model predictions, sensitive primarily to neutrinos from a rare fusion reaction that involves <sup>8</sup>B neutrinos. These neutrinos are produced in only 2 of every $`10^4`$ terminations of the basic $`pp`$ fusion chain. In the early part of 1978, there was a conference of interested scientists who got together at Brookhaven to discuss what to do next. The consensus decision was that we needed an experiment that was sensitive to the low energy neutrinos from the fundamental $`pp`$ reaction. The only remotely practical possibility appeared to be another radiochemical experiment, this time with <sup>71</sup>Ga (instead of <sup>37</sup>Cl) as the target. But a gallium experiment (originally proposed by the Russian theorist V. A. Kuzmin in 1965) was expensive; we needed about three times the world’s annual production of gallium to do a useful experiment. In an effort to generate enthusiasm for a gallium experiment, we wrote another Physical Review Letters paper, this time with a number of interested experimental colleagues. We argued that a gallium detector was feasible and that a gallium measurement, which would be sensitive to the fundamental $`pp`$ neutrinos, would distinguish between broad classes of explanations for the discrepancy between prediction and observation in the <sup>37</sup>Cl experiment. Over the next five or six years, the idea was reviewed a number of times in the United States, always very favorably. DOE appointed a blue-ribbon panel headed by Glen Seaborg that enthusiastically endorsed both the experimental proposal and the theoretical justification.
To our great frustration and disappointment, the gallium experiment was never funded in the United States, although the experimental ideas that gave rise to the Russian experiment (SAGE) and the German-French-Italian-Israeli-US experiment (GALLEX) largely originated at Brookhaven. Physicists strongly supported the experiment and said the money should come out of an astronomy budget; astronomers said it was great physics and should be supported by the physicists. DOE could not get the nuclear physics and the particle physics sections to agree on who had the financial responsibility for the experiment. In a desperate effort to break the deadlock, John was even the PI of a largely Brookhaven proposal to the NSF (which did not support proposals from DOE laboratories). A pilot experiment was performed with 1.3 tons of gallium by an international collaboration (Brookhaven, University of Pennsylvania, MPI, Heidelberg, IAS, Princeton, and the Weizmann Institute) which developed the extraction scheme and the counters eventually used in the GALLEX full scale experiment. In strong contrast to what happened in the United States, Moissey Markov, the Head of the Nuclear Physics Division of the Russian Academy of Sciences, helped establish a neutrino laboratory within the Institute for Nuclear Research, participated in the founding of the Baksan neutrino observatory, and was instrumental in securing $`60`$ tons of gallium, free of charge, to Russian scientists for the duration of a solar neutrino experiment. The Russian-American gallium experiment (SAGE) went ahead under the leadership of Vladimir Gavrin, George Zatsepin (Institute for Nuclear Research, Russia), and Tom Bowles (Los Alamos), and the mostly European experiment (GALLEX) was led by Till Kirsten (Max Planck Institute, Germany). Both experiments had a strong but not primary US participation.
The two gallium experiments were performed in the decade of the 1990’s and gave very similar results, providing the first experimental indication of the presence of $`pp`$ neutrinos. Both experiments were tested by measuring the neutrino rate from an intense laboratory radioactive source. There were two dramatic developments in the solar neutrino saga, one theoretical and one experimental, before the gallium experiments produced observational results. In 1985, two Russian physicists proposed an imaginative solution of the solar neutrino problem that built upon the earlier work of Gribov and Pontecorvo and, more directly, the insightful investigation by Lincoln Wolfenstein (of Carnegie Mellon). Stanislav Mikheyev and Alexei Smirnov showed that, if neutrinos have masses in a relatively wide range, then a resonance phenomenon in matter (now universally known as the MSW effect) could efficiently convert many of the electron-type neutrinos created in the interior of the sun to more difficult-to-detect muon and tau neutrinos. The MSW effect can work for small or large neutrino mixing angles. Because of the elegance of the theory and the possibility of explaining the experimental results with small mixing angles (analogous to what happens in the quark sector), physicists immediately began to be more sympathetic to particle physics solutions to the solar neutrino problem. More importantly, they became enthusiasts for new solar neutrino experiments. The next big breakthrough also came from an unanticipated direction. The Kamiokande water Cherenkov detector was developed to study proton decay in a mine in the Japanese Alps; it set an important lower limit on the proton lifetime. In the late 1980’s, the detector was converted by its Japanese founders, Masatoshi Koshiba and Yoji Totsuka, together with some American colleagues (Gene Beier and Al Mann of the U. of Pennsylvania), to be sensitive to the lower energy events expected from solar neutrinos.
With incredible foresight, these experimentalists completed in late 1986 their revisions to make the detector sensitive to solar neutrinos, just in time to observe the neutrinos from Supernova 1987a emitted in the LMC 170,000 years earlier. (Supernova and solar neutrinos have similar energies, $`10`$ MeV, much less than the energies that are relevant for proton decay.) In 1996, a much larger water Cherenkov detector (with 50,000 tons of pure water) began operating in Japan under the leadership of Yoji Totsuka, Kenzo Nakamura, Yoichiro Suzuki (from Japan), and Jim Stone and Hank Sobel (from the United States). So far, five experiments have detected solar neutrinos in approximately the numbers (within a factor of two or three) and in the energy range ($`<15`$ MeV) predicted by the standard solar model. This is a remarkable achievement for solar theory since the <sup>8</sup>B neutrinos that are observed primarily in three of these experiments (chlorine, Kamiokande, and its successor SuperKamiokande) depend upon approximately the $`25`$th power of the central temperature. The same set of nuclear fusion reactions that are hypothesized to produce the solar luminosity also give rise to solar neutrinos. Therefore, these experiments establish empirically that the sun shines by nuclear fusion reactions among light elements in essentially the way described by solar models. Nevertheless, all of the experiments disagree quantitatively with the combined predictions of the standard solar model and the standard theory of electroweak interactions (which implies that nothing much happens to the neutrinos after they are created). The disagreements are such that they appear to require some new physics that changes the energy spectrum of the neutrinos from different fusion sources. Solar neutrino research today is very different from what it was three decades ago.
The primary goal now is to understand the neutrino physics, which is a prerequisite for making more accurate tests of the neutrino predictions of solar models. Solar neutrino experiments today are all large international collaborations, each typically involving of order $`10^2`$ physicists. Nearly all of the new experiments are electronic, not radiochemical, and the latest generation of experiments measure typically several thousand events per year (with reasonable energy resolution), compared to rates that were typically 25 to 50 per year for the radiochemical experiments (which have no energy resolution, only an energy threshold). Solar neutrino experiments are currently being carried out in Japan (SuperKamiokande, in the Japanese Alps), in Canada (SNO, which uses a kiloton of heavy water in Sudbury, Ontario), in Italy (BOREXINO, ICARUS, and GNO, each sensitive to a different energy range and all operating in the Gran Sasso Underground Laboratory), in Russia (SAGE, in the Caucasus region), and in the United States (Homestake chlorine experiment). The SAGE, chlorine, and GNO experiments are radiochemical; the others are electronic. Since 1985, the chlorine experiment has been operated by the University of Pennsylvania under the joint leadership of Ken Lande and Ray Davis. Lande and Paul Wildenhain have introduced major improvements in the extraction and measurement systems, making the chlorine experiment a valuable source of new precision data. The most challenging and important frontier for solar neutrino research is to develop experiments that can measure the energies of individual low-energy neutrinos from the basic $`pp`$ reaction, which constitutes (we believe) more than $`90`$% of the solar neutrino flux. Solar neutrino research is a community activity. Hundreds of experimentalists have collaborated to carry out difficult, beautiful measurements of the elusive neutrinos.
Hundreds of other researchers helped refine the solar model predictions, measuring accurate nuclear and solar parameters and calculating input data such as opacities and equation of state. Three people played special roles. Hans Bethe was the architect of the theory of nuclear fusion reactions in stars, as well as our mentor and hero. Willy Fowler was a powerful and enthusiastic supporter of each new step and his keen physical insight motivated much of what was done in solar neutrino research. Bruno Pontecorvo opened everyone’s eyes with his original insights, including his early discussion of the advantages of using chlorine as a neutrino detector and his suggestion that neutrino oscillations might be important. In the next decade, neutrino astronomy will move beyond our cosmic neighborhood and, we hope, will detect distant sources. The most likely candidates now appear to be gamma-ray bursts. If the standard fireball picture is correct and if gamma-ray bursts produce the observed highest-energy cosmic rays, then very high energy ($`10^{15}`$ eV) neutrinos should be observable with a $`\mathrm{km}^2`$ detector. Experiments with the capability to detect neutrinos from gamma-ray bursts are being developed at the South Pole (AMANDA and ICECUBE), in the Mediterranean Sea (ANTARES, NESTOR) and even in space. Looking back on the beginnings of solar neutrino astronomy, one lesson appears clear to us: if you can measure something new with reasonable accuracy, then you have a chance to discover something important. The history of astronomy shows that very likely what you will discover is not what you were looking for. It helps to be lucky.
# Spin Analogs of Proteins: Scaling of “Folding” Properties

## I Introduction

Recent numerical studies indicate that characteristic folding times, $`t_{fold}`$, of model proteins grow with the number of amino acids, $`N`$, as a power law with an exponent which is non-universal – it depends on the class of sequences studied and on the temperature. The resulting deterioration of the folding properties also manifests itself in the way in which the temperatures that relate to folding scale with $`N`$. There are two such characteristic temperatures: $`T_f`$ and $`T_{min}`$. The first of these is a measure of the thermodynamic stability – it can be defined operationally as the temperature at which the probability of occupying the native (lowest energy) state crosses $`\frac{1}{2}`$. The second is the temperature at which the folding kinetics is fastest. At temperatures, $`T`$, below $`T_{min}`$, glassy effects set in. Amino-acid sequences that correspond to proteins should have a $`T_f`$ that is bigger than $`T_{min}`$, or at least comparable to $`T_{min}`$. Otherwise the sequences are bad folders. Studies of two- and three-dimensional lattice Go models of proteins suggest that $`T_{min}`$ grows with $`N`$ whereas $`T_f`$ first grows and then either saturates or grows at a lower rate than $`T_{min}`$. There exists then a characteristic size, $`N_c`$, at which $`T_{min}`$ starts exceeding $`T_f`$, and for $`N>N_c`$ the sequences necessarily become bad folders. This suggests the existence of a size-related limit to the physiological functionality of proteins. The question we ask in this paper is to what extent the scaling behavior of $`t_{fold}`$, $`T_f`$, and $`T_{min}`$ found in the lattice Go model of proteins is typical or, in other words, what the universality classes for these quantities are. Specifically, we consider Ising spin systems: uniform ferromagnets, disordered ferromagnets, and spin glasses.
Disordered ferromagnets have recently been shown to have a phase space structure, as described by the so-called disconnectivity graphs, quite akin to that characterizing proteins, at least for a small number of spins, $`N`$. Spin glasses, on the other hand, have been found to have a phase space structured like that of random sequences of amino acids, which are bad folders. The spin systems do not “fold,” but an evolution into their ground states can be considered analogous to the folding process, and $`t_{fold}`$ can be defined as the characteristic time needed to pass through the ground state for the first time, which generally does not coincide with a relaxation time. Thus $`t_{fold}`$, $`T_f`$, and $`T_{min}`$ can be determined as for the proteins, and we may additionally ask how $`T_f`$ and $`T_{min}`$ relate to the effective critical temperature as determined from the specific heat and magnetic susceptibility. Another motivation to consider “folding” in spin systems is that analogies between spin systems and proteins have already permeated the language in which the physics of proteins is couched. It is not clear, however, to what extent these analogies are accurate when it comes to actual details. One qualitative concept in this category is that of the energy landscape: spin glasses are said to have rugged energy landscapes, whereas proteins should have a landscape which is much smoother and funnel-like. Another such concept is frustration: the structural frustration in proteins should be “minimal,” whereas the frustration in the exchange couplings leads to the slow kinetics found in spin glasses. These concepts have been probed, e.g., in the random energy model, which again originated in the context of spin glasses. The basic message of this paper is that the spin–protein analogies are indeed valid but the details of the behavior are usually distinct.
What is analogous, for instance, is that the folding times have a characteristic U-shaped dependence on $`T`$. Furthermore, the folding properties are best for small system sizes and then deteriorate with $`N`$. In particular, the “folding” times at $`T_{min}`$ in spin systems do grow as a power law with $`N`$. On the other hand, both $`T_f`$ and $`T_{min}`$ of simple spin systems generally decrease with $`N`$, and the nature of the phase transition is not a finite-size version of a first-order transition, as is the case with the proteins. The origins of the difference between spin systems and the Go models of proteins in the behavior of $`T_f`$ and $`T_{min}`$ remain to be elucidated. It should be noted that there are no kinematic constraints on flips of any spin, whereas the possible moves in the protein folding process must preserve the chain connectivity and they have to depend on the actual conformation and thus on the history. The constrained character of the protein dynamics makes it acquire aspects of the packing problem, especially so if the native state is maximally compact – such as considered in the studies of scaling in model proteins. The packing aspects become insignificant when dealing with longer and longer $`\alpha `$-helices. We illustrate this point here by considering a 2-dimensional lattice version of the $`\alpha `$-helices (H), as described within the Go scheme, and show that these objects indeed come to behave like spin systems when $`N`$ becomes bigger and bigger. Notice that the helices have monomer-monomer interactions of a local kind. Thus the energy barrier against unfolding essentially does not depend on $`N`$, which is not expected of structures with more complex contacts.
Most of this paper, however, will be focused on systems described by the Ising spin Hamiltonian: $$\mathcal{H}=-\underset{<ij>}{\sum }J_{ij}S_iS_j,$$ (1) where $`S_i=\pm 1`$, and the exchange couplings, $`J_{ij}`$, connect nearest neighbors on the square and cubic lattices with periodic boundary conditions. There are $`L^D`$ spins in the system, where $`D`$ denotes the dimensionality and $`L`$ the linear size of the system. We consider four models of the exchange couplings: 1) spin glasses (SG), in which the $`J_{ij}`$’s are numbers drawn from the Gaussian probability distribution with a zero mean and a unit dispersion; 2) uniform ferromagnets (FM), with $`J_{ij}=1`$; 3) disordered ferromagnets (DFM), with $`J_{ij}`$ chosen as the absolute values of the Gaussian numbers; 4) weakly disordered ferromagnets (DFM’), with the $`J_{ij}`$’s being random numbers between 0.9 and 1.1. We find that it is the latter system which is the most protein-like. In Sections 2 and 3 we discuss the $`T`$- and $`N`$-dependencies of the “folding” times, respectively. In Section 4 we present results on the scaling behavior of $`T_f`$ and $`T_{min}`$ in systems SG, FM, DFM, DFM’, and H. Finally, in Section 5, we demonstrate that the temperatures we study are quite distinct from the critical temperature of the spin systems.

## II Temperature dependence of “folding” times

The concepts used in this paper are illustrated in Figure 1, which shows the temperature dependence of the characteristic “folding” time in the $`D`$=2 Ising spin systems considered. In each category, data for a representative example system are shown. We obtain $`t_{fold}`$ by a standard Monte Carlo process in which one typically starts from 1000 random initial spin configurations and determines the median time to reach the ground state for the first time. The spins are updated sequentially, and the “folding” times are given in Monte Carlo steps per spin, i.e. the total number of spin updates divided by the number of spins.
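As an illustration of this procedure, here is a minimal sketch (not the code used to produce the paper’s data) of measuring the median first-passage time to the ground state for a small uniform ferromagnet, with sequential Metropolis updates and time counted in Monte Carlo steps per spin. The lattice size, temperature, trial count, and sweep cap are arbitrary illustrative choices:

```python
import math
import random

def neighbors(i, j, L):
    """Nearest neighbors on the square lattice with periodic boundary conditions."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def first_passage_time(L, T, max_sweeps=5000, rng=None):
    """Sweeps (Monte Carlo steps per spin) until the FM ground state is first reached."""
    rng = rng or random.Random()
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for sweep in range(1, max_sweeps + 1):
        for i in range(L):                      # sequential spin updates
            for j in range(L):
                h = sum(s[a][b] for a, b in neighbors(i, j, L))
                dE = 2.0 * s[i][j] * h          # flip cost for J_ij = 1
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i][j] = -s[i][j]
        if abs(sum(sum(row) for row in s)) == L * L:
            return sweep                        # all spins aligned: the ground state
    return max_sweeps                           # did not "fold" within the cap

rng = random.Random(1)
times = sorted(first_passage_time(4, 1.8, rng=rng) for _ in range(101))
t_med = times[50]                               # median over the random starts
```

The same loop applies to the disordered variants by drawing explicit couplings $`J_{ij}`$ and using $`\mathrm{\Delta }E=2S_i\sum _jJ_{ij}S_j`$ in place of the implicit $`J=1`$.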
In the SG case, the ground state (or at least its close approximation) is obtained by multiple slow annealing processes followed by a quenching procedure. The general shape of the $`T`$-dependence is like in the protein systems – it corresponds to a U-shaped curve with a minimum at $`T_{min}`$. The bigger the disorder, the higher the $`T_{min}`$ – the DFM system has the highest $`T_{min}`$ among the systems with the ferromagnetic ground state. Figure 1 also shows that the SG system has a lower value of $`T_{min}`$ than the DFM. However, this does not reflect the degree of the disorder, since the two systems are different in nature. The fact that the SG system has a lower value of $`T_{min}`$ than the corresponding DFM system is related to the fact that local energy barriers against spin flipping are generally higher in a DFM than in a SG, due to a nonzero value of the average exchange coupling. The phase space structure of the uniform ferromagnet is so simple, containing few local energy minima, that the low temperature upturn does not develop down to $`T`$=0. In this case, we shall attribute a zero value to $`T_{min}`$. A similar phenomenon has been observed in the lattice Go model with repulsive non-native contacts, for which only few local minima are available. The shortest “folding” time, $`t_{min}`$, corresponds to $`t_{fold}`$ determined at $`T_{min}`$. The temperature dependence of the folding time is also computed for the Go model of “helices” on the two-dimensional square lattice. A “helical” native state for $`N`$=16 is shown at the top of Figure 2. The meanders shown in the figure become longer and longer as $`N`$ grows. The Hamiltonian for the system is given by $$\mathcal{H}=-\underset{i<j}{\sum }B_{ij}\mathrm{\Delta }_{ij},$$ (2) where $`\mathrm{\Delta }_{ij}`$ is either 1 or 0 depending on whether the monomers $`i`$ and $`j`$ are nearest neighbors on the lattice but not nearest neighbors along the chain, or not.
When $`\mathrm{\Delta }_{ij}`$ is non-zero, the two monomers are said to form a contact. The definition of the Go model is that $`B_{ij}`$ is 1 for the native contacts (such as seen in Figure 2) and 0 otherwise. Thus the properties of the system are determined entirely by the native conformation. The dynamics are defined in terms of a Monte Carlo process which satisfies the detailed balance conditions, as explained in . The lower part of Figure 2 shows the characteristic U-shaped dependence of $`t_{fold}`$ on $`T`$ for system H. What is different compared to the models of maximally compact proteins is that the positions of both $`T_{min}`$ and $`T_f`$ are seen to go down with $`N`$ – a point to which we shall come back in Section 4.

## III Scaling properties of folding times at $`T_{min}`$

In order to study the scaling properties of disordered systems, such as the spin systems with random exchange couplings, one needs to consider ensembles of samples with properties which are statistically similar. Thus for each $`N`$ we have considered up to 50 samples, and for each we have performed simulations of “folding” in the Monte Carlo process. The median folding times as a function of $`T`$ have been calculated for each sample separately, and we determine their fastest folding condition. Typically this is done by considering 1000 folding trajectories at each $`T`$, but for the FM and DFM’ systems of small size up to 40000 trajectories at each $`T`$ have been used, due to the broadness of the minimum. The value of $`t_{min}`$ has been determined at the $`T`$=$`T_{min}`$ that corresponds to a given sample, and only then has the average of $`t_{min}`$ over samples been calculated. Figure 3 shows the scaling of the average $`t_{min}`$ for the 2$`D`$ systems: FM, DFM’, DFM, SG and H. Figure 4, on the other hand, deals with the 3$`D`$ Ising systems. All of the results are consistent with the power law: $$t_{min}\sim N^\lambda ,$$ (3) where the values of $`\lambda `$ are shown in Table 1.
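An exponent like $`\lambda `$ in Eq. (3) can be read off as the least-squares slope of $`\mathrm{log}t_{min}`$ versus $`\mathrm{log}N`$. A minimal sketch follows; the data below are synthetic, generated from an exact $`t=N^2`$ law purely to show that the fit recovers the exponent, and are not the measured times behind Table 1:

```python
import math

def power_law_exponent(Ns, ts):
    """Least-squares slope of log(t) vs log(N), i.e. lambda in t ~ N^lambda."""
    xs = [math.log(n) for n in Ns]
    ys = [math.log(t) for t in ts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

Ns = [9, 16, 25, 36, 49]          # system sizes (illustrative)
ts = [float(n) ** 2 for n in Ns]  # synthetic times obeying t = N^2 exactly
lam = power_law_exponent(Ns, ts)  # recovers 2.0 up to floating-point error
```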
Interestingly, $`\lambda `$ for the spin systems depends much more strongly on the type of the spin system than on its dimensionality. On the other hand, in the Go models of proteins with the maximally compact native state, the dependence on $`D`$ is strong: the exponent is of order 6 and 3 in 2 and 3 $`D`$, respectively. It should also be noted that for the 2$`D`$ lattice “helices”, $`\lambda \approx 4.59`$ is substantially smaller than the exponent found for the Go proteins with the maximally compact native state, which points to the role of the packing effects. The strong dependence of $`\lambda `$ on the choice of the exchange couplings is similar to the lack of universality found in model proteins. Also in analogy to the models of proteins, the scaling exponent depends on the temperature. Figure 5 shows that $`t_{fold}`$ evaluated not at $`T_{min}`$ but at $`T_f`$ grows with an even bigger exponent, or possibly the growth becomes exponential. This emphasizes the optimality of the kinetics at $`T_{min}`$. In the DFM’ case, $`t_f`$ and $`t_{min}`$ merge together because, as we shall see in the next section, the temperatures $`T_f`$ and $`T_{min}`$ themselves merge. The possibility of a power law scaling for the folding time has been proposed theoretically by Thirumalai (see also Ref. ), based on scaling concepts from polymer physics combined with some phenomenological assumptions. In particular, the power law scaling is argued to be relevant to proteins which fold through direct pathways with a nucleation mechanism. For indirect pathways, the folding time is determined primarily by activation processes with barriers which were argued to scale as $`N^{1/2}`$. There have also been a number of other studies of how a typical free energy barrier, $`B`$, in model proteins scales with the number of monomers. All of these studies are phenomenological in nature, and the barrier $`B`$ is often calculated at the folding transition temperature $`T_f`$.
One assumes that the folding time is related to the barrier $`B`$ through an Arrhenius-like law, $`\tau \sim \mathrm{exp}(B/k_BT)`$, as is typically written for the relaxation time. In the random energy model, and also in another mean field approach for the Go model with a nonspecific critical folding nucleus, the barrier scales linearly with $`N`$. Recently, Finkelstein and Badredtinov, and also Wolynes, have proposed a $`N^{2/3}`$ law by using a capillarity approximation. Gutin et al.’s and our power laws for $`t_{fold}`$ obtained in simulations of the lattice proteins would formally correspond to a logarithmic dependence of the barrier on $`N`$ at the temperature of the fastest folding. It should be noted, however, that the physics of folding coincides with that of equilibration only in the limit of low temperatures. At high temperatures, for instance, the relaxation times are short but the folding times are long, since the search for the ground state takes place primarily in the regions of phase space which are energetically remote from the target native state. Thus the behavior of the barriers may have little bearing on the folding times at $`T_{min}`$, which corresponds to the crossover between the physics of folding through equilibration and the physics of folding through a search that takes place in equilibrium. At low temperatures, the roughness of the energy landscape becomes more and more significant, and the changed nature of the local barriers against reconfiguration is expected to affect the scaling laws. The understanding of the scaling behavior of the folding time at $`T_{min}`$ and at low temperatures still needs to be worked out – both in the protein and spin systems. The latter systems may prove to be easier conceptually and computationally.

## IV Scaling properties of $`T_f`$ and $`T_{min}`$

We now discuss the scaling of the characteristic temperatures. $`T_{min}`$ is determined from the kinetic data.
$`T_f`$, on the other hand, is calculated by starting from the ground state and performing a long run that determines the equilibrium probability of the system staying in the ground state. The probabilities are determined as a function of $`T`$, and $`T_f`$ is obtained by interpolation to where the value of 1/2 is crossed. For the spin systems, our results are based on up to 200 “unfolding” trajectories which last for up to 10000 Monte Carlo steps per spin. As a point of reference, we first consider the scaling properties of $`T_f`$ and $`T_{min}`$ which were found in the two- and three-dimensional lattice Go models of proteins. The corresponding data points are now shown, in Figure 6, as a function of $`N`$ on the logarithmic scale. Both $`T_f`$ and $`T_{min}`$ grow with $`N`$. The data points suggest that $`T_{min}`$ grows indefinitely – the larger the system size, the higher the $`T`$ needed to secure the optimal folding conditions. On the other hand, $`T_f`$ appears to tend to a saturation value – there is a limit to the thermodynamical stability. This finding is consistent with an analytical result obtained by Takada and Wolynes for Go-like proteins studied within a droplet approximation. Figure 7 shows the scaling of $`T_f`$ and $`T_{min}`$ for the 2$`D`$ lattice “helix” system. At $`N\lesssim 8`$ the foldability is good, but on increasing $`N`$ the behavior is entirely different: the glassy effects decrease in importance – $`T_{min}`$ goes down – but also the thermodynamic stability becomes more and more insignificant. The slopes of the $`N`$-dependence of the two temperatures are somewhat different, and the corresponding plots may cross at some large value of $`N`$. Thus it is possible that good foldability reappears at some large value of $`N`$ – but at a very low $`T`$. We now ask: what kind of scaling behavior of $`T_f`$ and $`T_{min}`$ characterizes the spin systems?
Figures 8-11, for systems FM, DFM’, DFM and SG respectively, demonstrate that in no case is the scaling like that of the Go lattice models with the maximally compact native state, while in some cases it is akin to the behavior exhibited by the 2$`D`$ “helix”. Both for the “helix” and for all of the spin systems studied here, $`T_f`$ decreases with $`N`$ monotonically, which is not what happens in the Go models of proteins. This difference in behavior can be traced to the following observation. $`T_f`$ is defined through the equation $`P_N={\displaystyle \frac{1}{1+{\sum _l^{\prime }}\mathrm{exp}(-(E_l-E_N)/k_BT_f)}}={\displaystyle \frac{1}{2}},`$ (4) where $`P_N`$ is the probability of being in the ground state, $`E_N`$ is the energy of the ground state, $`E_l`$ is the energy of the $`l`$’th state, and the primed sum in the denominator excludes the ground state. At temperatures which do not exceed $`T_f`$, the sum is dominated by the low energy excitations. In the spin systems and in the “helix”, the energies of these excitations do not depend on $`N`$. For instance, in the Ising case they are of order $`2zJ`$, where $`z`$ is the coordination number and $`J`$ denotes a characteristic value of the exchange interactions. It is only the number of terms in the sum that grows with $`N`$. This leads to a $`T_f`$ that decreases with $`N`$. On the other hand, in the model proteins, the energies of the excitations typically do depend on $`N`$, which may have a competing effect on $`T_f`$ relative to the impact of the number of states. We now turn to a discussion of the scaling properties of $`T_{min}`$. From Figures 8-11 it is clear that it has opposite tendencies in proteins and spin systems. For the 2$`D`$ FM and DFM’ systems one observes an increase followed by a saturation. In all other spin systems, instead of the saturation, one observes a maximum followed by an asymptotic decrease.
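Returning to Eq. (4): for a system small enough to enumerate, the crossing of the ground-state occupation through 1/2 can be located directly. The sketch below does this by bisection for a 3×3 open-boundary uniform ferromagnet; note that it counts the two spin-reversed ground states together (an assumption on our part, since Eq. (4) as written refers to a single state), and the open boundaries and tiny size are conveniences, not the systems studied above:

```python
import itertools
import math

def energy(s, bonds):
    """H = -sum_{<ij>} J_ij S_i S_j with J_ij = 1 (uniform ferromagnet)."""
    return -sum(s[i] * s[j] for i, j in bonds)

def ground_occupation(T, energies, E0):
    """Equilibrium probability of sitting in a lowest-energy configuration."""
    Z = sum(math.exp(-(E - E0) / T) for E in energies)
    return sum(1 for E in energies if E == E0) / Z

L = 3
idx = lambda i, j: i * L + j
bonds = [(idx(i, j), idx(i + 1, j)) for i in range(L - 1) for j in range(L)] + \
        [(idx(i, j), idx(i, j + 1)) for i in range(L) for j in range(L - 1)]
energies = [energy(s, bonds) for s in itertools.product((-1, 1), repeat=L * L)]
E0 = min(energies)                     # -12 here: all 12 bonds satisfied

# bisection for the crossing P(T_f) = 1/2; the occupation decreases with T
lo, hi = 1e-3, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ground_occupation(mid, energies, E0) > 0.5:
        lo = mid
    else:
        hi = mid
T_f = 0.5 * (lo + hi)
```

Because the low-lying excitation energies here (flipping one spin costs at most $`8J`$) do not grow with the lattice size while the number of excited states does, repeating this with larger $`L`$ illustrates the decrease of $`T_f`$ with $`N`$ argued for in the text.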
The difference may reflect the presence of kinematic constraints on possible moves in proteins due to their polymeric nature. Such constraints may cause the emergence of barriers which depend on $`N`$ and result in a growing $`T_{min}`$. In spin systems such kinematic constraints do not exist: each spin configuration has $`N`$ possible ways to move out, with a cost which does not depend on $`N`$. Such a large number of degrees of freedom gives the spin systems great flexibility in crossing from one local minimum to another. Thus there is no potential for an indefinite growth of $`T_{min}`$. The initial growth, in the random systems, reflects the role of the growing number of local energy minima which may form kinetic traps and make the kinetics glassy. The asymptotic decrease of $`T_{min}`$, however, suggests that the relevant traps do not require an $`N`$-dependent energy to overcome, or points to some entropic effect. The case of the “helix” system may appear puzzling at first glance since it possesses polymeric constraints and yet these do not lead to a growing $`T_{min}`$. Note that, in contrast to the model proteins with maximally compact native states , the Hamiltonian for the “helix” contains terms related only to the local contacts. Thus the chain is much more flexible: it is not tightly packed in its native state. The energy barriers against escaping from the traps do not depend on the chain length, and therefore the “helix” exhibits the spin-like properties. High energy barriers in proteins are often associated with the breaking of some tertiary contacts. ## V The specific heat and susceptibility We now compare $`T_f`$ and $`T_{min}`$ to the usual critical temperatures that characterize spin systems. We focus on the properties of the specific heat, $`C`$, and susceptibility, $`\chi `$, defined through $$C=\frac{\langle E^2\rangle -\langle E\rangle ^2}{NT^2}$$ (5) and $$\chi =\frac{\langle M^2\rangle -\langle M\rangle ^2}{NT}$$ (6) respectively, where $`M`$ is the magnetization.
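Definitions (5) and (6) can be checked by exact enumeration on a system small enough to sum over all configurations; the sketch below does this for a hypothetical $`L\times L`$ Ising ferromagnet with periodic boundaries (it assumes $`L\ge 3`$, since for $`L=2`$ the periodic wrap counts each bond twice, and it uses $`|M|`$ as is customary for finite systems):

```python
import itertools
import math

def ising_thermo(L, T, J=1.0):
    """Specific heat C (Eq. 5) and susceptibility chi (Eq. 6), per spin,
    for an L x L Ising ferromagnet with periodic boundaries, by exact
    enumeration over all 2**(L*L) configurations."""
    N = L * L
    Z = sE = sE2 = sM = sM2 = 0.0
    for config in itertools.product((-1, 1), repeat=N):
        E = 0.0
        for i in range(L):
            for j in range(L):
                s = config[i * L + j]
                # each bond counted once: right and down neighbors
                E -= J * s * (config[i * L + (j + 1) % L] +
                              config[((i + 1) % L) * L + j])
        M = abs(sum(config))
        w = math.exp(-E / T)
        Z += w
        sE += E * w
        sE2 += E * E * w
        sM += M * w
        sM2 += M * M * w
    Em, E2m, Mm, M2m = sE / Z, sE2 / Z, sM / Z, sM2 / Z
    C = (E2m - Em * Em) / (N * T * T)
    chi = (M2m - Mm * Mm) / (N * T)
    return C, chi
```

Plotting the returned $`C(T)`$ on a grid of temperatures locates $`T_C`$ as the position of the maximum, and likewise $`T_\chi `$ for $`\chi (T)`$.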
The temperatures at which $`C`$ and $`\chi `$ have a maximum will be denoted $`T_C`$ and $`T_\chi `$, respectively. In addition, and in analogy to the proteins , we study the structural fluctuations $$\mathrm{\Delta }\chi _s=\langle \chi _s^2\rangle -\langle \chi _s\rangle ^2$$ (7) which are defined in terms of the structural overlap function $$\chi _s=\frac{1}{N}\left|\sum _{i=1}^{N}S_iS_i^{(N)}\right|,$$ (8) where $`\{S_i^{(N)}\}`$ is the spin configuration in the ground state. (For the ferromagnets $`\chi _s`$ is the same as the absolute value of the magnetization per spin.) These fluctuations also have a maximum at some temperature, which will be denoted by $`T_s`$. It has been suggested that for proteins $`T_s`$ should be about $`T_f`$ and that a small difference between $`T_s`$ and $`T_C`$ is a signature of fast folding. All of these thermodynamic quantities are averaged over 10 to 20 samples and 100 trajectories each. In each trajectory, the first 5 000 to 10 000 Monte Carlo steps per spin are spent on equilibration. The trajectories are then evolved further for between 20 000 and 50 000 steps per spin. The lower values refer to the DFM’ system and the higher to the SG system. Figure 12 shows the scaling behavior of $`T_C`$, $`T_\chi `$, and $`T_s`$ for the 2$`D`$ DFM’ and SG systems. In the case of DFM’, the three temperatures converge to one common critical temperature. Note that none of these temperatures has anything to do with $`T_f`$ or $`T_{min}`$. In the SG system, $`T_s`$ and $`T_\chi `$ tend to different asymptotics than $`T_C`$, but again none of these temperatures coincides with $`T_f`$ or $`T_{min}`$. The physics of folding is not related to the critical phenomena. It should be noted that a phase transition in spin glasses shows up as a singularity in the nonlinear susceptibility.
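The structural overlap of Eq. (8) and its fluctuation, Eq. (7), reduce to a few lines; a minimal sketch (configurations here are plain ±1 sequences, not data from the simulations):

```python
def structural_overlap(config, ground):
    """chi_s of Eq. (8): |sum_i S_i * S_i^(N)| / N, the overlap of a
    configuration with the ground state."""
    N = len(config)
    return abs(sum(s * g for s, g in zip(config, ground))) / N

def overlap_fluctuation(samples, ground):
    """Delta chi_s of Eq. (7): variance of chi_s over sampled configurations."""
    vals = [structural_overlap(c, ground) for c in samples]
    mean = sum(vals) / len(vals)
    return sum(v * v for v in vals) / len(vals) - mean * mean
```

The overlap equals 1 in the ground state itself and fluctuates between 0 and 1 along a trajectory; the temperature at which `overlap_fluctuation` peaks is the $`T_s`$ of the text.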
In the 2$`D`$ SG system, the peak position of the nonlinear susceptibility should be at $`T`$=0 for any system size. In summary, we have studied Ising spin systems from the perspective of protein folding. We have demonstrated that there exist many similarities between the spin and polymeric systems. In particular, we have shown that both kinds of systems have the property of a power law scaling of the folding time at $`T_{min}`$ as a function of $`N`$. We point out that this holds independent of whether the system is a good or bad folder and is thus a universal feature of folding. Among the random spin systems studied here, the DFM’ systems have the widest range of small $`N`$ values at which $`T_f`$ is larger than $`T_{min}`$, both in 2 and 3 $`D`$. Thus these small-sized systems are the best analogs of good folders and can serve as toy models that mimic the physics of proteins. Spin glasses of any size, on the other hand, do indeed mimic the physics of random heteropolymers. Asymptotically, though, every random spin system is a bad “folder”. This work was supported by KBN (Grant No. 2P03B-025-13). M.S.L. thanks H. Rieger for useful discussions.
## 1 Introduction Coordination is a well-defined concept in crystalline systems, where it is closely related to the concept of bonding. For instance, silicon atoms are tetrahedrally coordinated, and exhibit covalent bonds with their four nearest neighbors (NNs) in the ordinary crystalline phase (c-Si). Whenever symmetry is broken, as in the case of the amorphous silicon (a-Si) studied here, the link between coordination and bonding becomes less obvious. Even if atomic positions are intimately connected with the electronic ground state, we are usually unable to extract detailed information about bonding from a purely geometrical investigation. Standard analytical tools based on coordination-number analysis are thus insufficient to characterize bonding properties, since they are insensitive to the details of the electronic charge distribution. Whenever strong topological disorder is present, as in the case of atoms with three or five nearest neighbors (T<sub>3</sub> or T<sub>5</sub> defects), the ionic potential and the charge distribution can be significantly different from the crystalline case, and great care should be taken to identify the nature of the bonding. In this work we use density-functional theory in the local density approximation to calculate the electronic properties of over-coordinated defects in a-Si. Density-functional theory provides an accurate and parameter-free description of the electronic ground state, and is equally capable of dealing with four-fold coordinated atoms and with more complex topologies. The characterization of defects is then based on the decomposition of the electronic ground state into localized orbitals, using the technique of maximally-localized Wannier functions (MLWFs) . In this approach, the extended Bloch orbitals are transformed via unitary transformations into a representation where they are maximally localized.
The role and importance of such localized Wannier functions in the study of disordered systems (amorphous silicon in particular) has been advocated by Silvestrelli and coworkers , who have shown that the coordination analysis is often insensitive to the electronic charge distribution and that similarly coordinated atoms can be surrounded by rather different bonding environments. Our present results support these findings. The conjecture that over-coordination could play an important role in the formation of mid-gap electronic levels in amorphous silicon has been recently validated by some of us , using accurate density functional calculations. In that work we argued that T<sub>5</sub> defects can be responsible, as much as T<sub>3</sub> ones, for states close to the Fermi level. We also showed that in the case of T<sub>3</sub> defects the mid-gap electronic state originates from the dangling bond and is well localized on the T<sub>3</sub> defect itself. Conversely, in T<sub>5</sub> defects the midgap electronic state is delocalized over several NNs of the T<sub>5</sub> site, in agreement with the findings of tight-binding calculations reported in Ref. . In this work we pursue further the study of the extended nature of T<sub>5</sub> defects. Our goal is to understand how many atoms can be actively involved in a defect, since this number influences the shape of the “super-hyperfine” structure of the D center in the electron-spin resonance (EPR) spectrum. In turn, this should discriminate dangling bonds from floating ones. It should be noted that both T<sub>3</sub> and T<sub>5</sub> centers generate a D signal; it is the contributions of the secondary ions involved in the uncompensated spin distribution (the super-hyperfine structure) that should be very different in the case of a dangling or a floating bond. The plan of the work is as follows: in Sec. 2 we describe our ab-initio calculations and we review the method used to compute the MLWFs. 
We also comment on the link with the dynamical Born effective charges $`Z`$. In Sec. 3 we discuss the results for our a-Si sample, in comparison with the results obtained using the “electron-localization function” (ELF) and the “atomic-projected charge” (APC) analysis . Sec. 4 is devoted to the conclusions. ## 2 Computational tools The calculations performed here are based on density-functional theory in the local-density approximation, using a cubic super-cell containing 64 Si atoms , a plane wave basis set, and accurate norm-conserving pseudopotentials. The electronic ground state is obtained using an all-bands conjugate gradient minimization . The sampling in reciprocal space is performed with 8 k points in the full Brillouin zone. The analysis of the Kohn-Sham orbitals and their charge density has previously been performed using the APC and the ELF analysis . APC is a measure of the contribution of the different atoms to the charge density, and it is obtained by projecting the Bloch orbitals $`\mathrm{\Psi }_{n𝐤}`$ on localized atomic functions . ELF, defined as in Ref. , is a measure of the conditional probability of having one electron close to another one with the same spin. It approaches its upper limit (1.0) when the electron density resembles a covalent bond or in the presence of unpaired electrons. Both these approaches provide useful results in the characterization of the electronic distribution; still, there are several limitations that will be mentioned in the next section. The analysis based on localized Wannier functions overcomes these limitations; as a byproduct, it also provides information on the dielectric properties of the system. Wannier functions are an alternative representation of the electronic ground state, one that classifies states using a spatial coordinate $`𝐑`$ and a band index $`n`$.
Wannier functions are not eigenstates of the Kohn-Sham Hamiltonian, but are related to them via the unitary transformation $$|𝐑n\rangle =\frac{V}{(2\pi )^3}\int _{BZ}|\mathrm{\Psi }_{n𝐤}\rangle e^{-i𝐑\cdot 𝐤}\,d𝐤.$$ (1) In the electronic structure problem there are degrees of freedom that do not affect the self-consistent ground state, but determine the shape of the WFs obtained from Eq. (1). They consist of arbitrary unitary rotations $`U_{mn}(𝐤)`$ that mix together, at any given $`𝐤`$ in the Brillouin zone, the fully occupied Bloch orbitals (in the case of a single band, these unitary matrices reduce to a phase factor $`\varphi (𝐤)`$). It is of paramount importance, in order to provide a meaningful real-space representation, to choose these arbitrary matrices $`U_{mn}(𝐤)`$ so that the resulting WFs are well localized. The approach used here follows the lines of Ref. , where the unitary rotations are refined until the resulting Wannier functions are maximally localized, i.e. they have minimum spread $$\mathrm{\Omega }=\sum _n\left[\langle \mathrm{𝟎}n|r^2|\mathrm{𝟎}n\rangle -\langle \mathrm{𝟎}n|𝐫|\mathrm{𝟎}n\rangle ^2\right].$$ (2) Incidentally, the sum of the Wannier function centers (WFCs) $`𝐫_n^c=\langle \mathrm{𝟎}n|𝐫|\mathrm{𝟎}n\rangle `$ is directly related to the macroscopic polarization of the sample , and this makes the Wannier function analysis attractive for the study of dielectric properties. In particular, the change in polarization induced by the displacement $`\mathrm{\Delta }\tau _N`$ of an atom $`N`$ is directly related to the electronic component of its Born dynamical-charge tensor $`Z_N`$ by $$(Z_N)_{i,j}=-2\frac{\sum _n\mathrm{\Delta }𝐫_n^c\cdot 𝐞_i}{\mathrm{\Delta }\tau _N\cdot 𝐞_j}$$ (3) where $`n`$ runs over all the occupied WFs in the unit cell and $`\mathrm{\Delta }𝐫_n^c`$ are the displacements of the WFCs. ## 3 Results on a-Si The presence of mid-gap states in a-Si is usually ascribed to the existence of T<sub>3</sub> defects in the system.
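As an aside, the finite-difference content of Eq. (3) above can be illustrated with a toy sketch; the numbers are hypothetical, and the overall factor of two with a minus sign (two electrons per doubly occupied WF) is an assumption chosen so that the ideal c-Si geometry reproduces the $-4$ electronic contribution quoted later in the text:

```python
def born_charge_element(wfc_shifts, atom_shift, e_i, e_j):
    """One element of the electronic Born charge in the spirit of Eq. (3):
    (-2) electrons per filled Wannier function times the total WFC shift
    along e_i, divided by the atomic displacement along e_j."""
    num = sum(dr[k] * e_i[k] for dr in wfc_shifts for k in range(3))
    den = sum(atom_shift[k] * e_j[k] for k in range(3))
    return -2.0 * num / den
```

For a tetrahedral atom whose four bond-center WFCs each follow half of a unit atomic displacement, this gives $-2\times 4\times 1/2=-4$, the crystalline value.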
Such a conclusion relies on EPR experiments, which measure the unpaired spin oscillations of the electrons in the dangling bonds. Nevertheless, certain features of the EPR signals (the “super-hyperfine structure”) derive also from the ionic magnetic moment and the quadrupolar term of the secondary ions in the defect . Thus, the EPR signal is indirectly sensitive to the presence of complex geometries. $`T_5`$ defects are expected to give rise to more delocalized states involving several atoms , and could thus provide a distinctive shape to the EPR signal. Prompted by these considerations, we investigate in this work the electronic properties of over-coordinated defects in amorphous silicon. Our samples have been obtained by annealing configurations sampled during a first-principles molecular dynamics simulation . In particular, we have selected for this study a sample presenting two $`T_5`$ coordination defects. Although this corresponds to a density of defects higher than the experimental one, it still provides a useful model to investigate the effects of local strain and topological disorder. The defects were first identified using a geometrical analysis: an atom is counted as a neighbor if it resides inside a sphere of radius $`R_c`$ = 2.6 $`\mathrm{\AA }`$ (to compare with the theoretical NN distance in c-Si, $`d_{NN}`$ = 2.357 $`\mathrm{\AA }`$). This radius is chosen as the first minimum in the correlation function $`g(𝐫)`$, since the peaks correspond, intuitively, to the succession of shells surrounding the reference atom. The two $`T_5`$ defects (labelled $`T_5^A`$ and $`T_5^B`$ in Fig. 1) belong to a “defect group” (Fig. 1, left panel), mostly composed of bonds which are longer and weaker than four-fold ones. Such an environment seems favorable to induce delocalization of the wavefunction over several neighbors; our previous APC analysis (see Ref. ) supported this conjecture, as did a similar investigation (Ref.
) based on the electronic localization function. Both approaches, however, suffer from inherent limitations: the projection technique used for the APC does not provide meaningful information on the mid-bond region; on the other hand, ELF (once it has been used to unambiguously distinguish between a $`T_3`$ and a $`T_5`$ site) is not able to improve the qualitative description of the defect beyond what can be obtained from direct inspection of the charge density. To improve our understanding of the electronic structure of the defect group, we calculated the MLWFs for this configuration. In crystalline Si each WF is centered in the middle of a bond and oriented along the bond ; a covalent “barrel” of charge is shared between the two bonded atoms, while some back-bonding charge is distributed between each of these atoms and its remaining three NNs. We expect that in a slightly distorted environment, where short-range order is conserved, the shape of the WFs would not change drastically. This is clearly confirmed by our calculations (see Fig. 2). Small deviations from the crystalline case are due to the bending and stretching of bonds, but no doubt exists about the nature of the bond, which closely resembles the bond of crystalline silicon. Such “regular” bonds are clearly related to their crystalline counterparts, having a very peaked distribution of spreads with a maximum around 2.30 $`\mathrm{\AA }^2`$ (vs. 2.04 $`\mathrm{\AA }^2`$ for crystalline silicon, using the same 8 k-point sampling). We started our analysis by focusing on the positions of the WFCs, as suggested by Silvestrelli et al. . In that work it was observed that the centers of charge of anomalous WFs tend to be closer to one specific atom than those of regular WFs, which are instead equidistant from the two bonded atoms. In Fig.
1 we show the WFCs belonging to our “defect group”; it can be seen that around the interstitial atom I the Wannier centers depart strongly from their ideal mid-bond positions. Similarly, the shape of the corresponding Wannier functions is clearly anomalous, with a delocalization extending over more than two atoms and with a much larger spread than average, of the order of 4 to 8 $`\mathrm{\AA }^2`$. This confirms our conjecture of delocalized orbitals connecting different atoms inside the “defect group”; an inspection of the shape of the WFs clearly confirms the delocalization of the electronic states. In Fig. 3 the left panel shows the WF whose center is close to the $`T_5^B`$ defect (C1); its shape suggests a bond shared between two NNs. In the right panel of Fig. 3 the delocalized orbital centered at C2 is shown. The shape of this orbital is also influenced by the additional interaction between the two T<sub>5</sub> defects. The anomalous shape and spread of some WFs present in our sample suggest that unusual polarization properties could be present, and could be related to topological disorder. For this reason, we investigated the dynamical charge tensor $`Z`$, to quantify the local anisotropy and the deviation from crystalline order. Our preliminary results suggest that disorder has a considerable effect on the dynamical charges of a-Si. Following Ref. , we decompose the $`Z`$ tensor into an isotropic contribution (corresponding to the $`l=0`$ spatial rotation representation), an $`l=1`$ antisymmetric one, and an $`l=2`$ traceless symmetric contribution. In crystalline Si the effective charges are zero, and the electronic contribution (-4 times the identity matrix) cancels exactly the ionic contribution (+4 times the identity matrix). This is not the case in amorphous silicon, and we find that atoms belonging to topological defects exhibit a very different behavior.
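The $`l=0,1,2`$ decomposition just described is elementary linear algebra; a minimal sketch (applied here to a hypothetical tensor, not to the data of Table 1):

```python
def decompose_charge_tensor(Z):
    """Split a 3x3 dynamical-charge tensor into its isotropic (l=0),
    antisymmetric (l=1) and traceless-symmetric (l=2) parts,
    with Z = iso + anti + sym."""
    tr3 = sum(Z[i][i] for i in range(3)) / 3.0
    iso = [[tr3 if i == j else 0.0 for j in range(3)] for i in range(3)]
    anti = [[0.5 * (Z[i][j] - Z[j][i]) for j in range(3)] for i in range(3)]
    sym = [[0.5 * (Z[i][j] + Z[j][i]) - iso[i][j] for j in range(3)]
           for i in range(3)]
    return iso, anti, sym
```

By construction the three parts sum back to $`Z`$, the $`l=2`$ part is traceless, and the $`l=1`$ part is antisymmetric, so the decomposition is unique.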
To this purpose it is instructive to inspect Table 1, where the electronic effective charge tensors of some selected atoms are given, together with their decompositions. Even for regular T<sub>4</sub> atoms ($`T_4^A`$ and $`T_4^B`$), the effective charges can show strong anomalies, although this could be an artifact of the high density of defects in the sample. The anisotropy around atom $`T_5^B`$, on the other hand, is clearly shown in Fig. 3 (right panel): $`Z`$ reflects the directionality in the polarization of this floating bond. Even if our results are still preliminary, they clearly show that the deviations of the effective charges from the crystalline value are noteworthy, and much larger than expected. ## 4 Conclusions We have presented our results on the microscopic features of floating bonds in a-Si, which have been obtained with a maximally-localized Wannier functions approach. We confirm the conjecture that T<sub>5</sub> defects are accompanied by well-defined delocalized states. Such states can be accurately characterized in terms of Wannier functions; a quantitative measure of delocalization is then provided by the corresponding spreads. The delocalized states correspond to anomalous covalent bonds extending over more than two atoms. The dielectric properties are readily available as a byproduct of the Wannier analysis; we find strongly anisotropic effective charges that are significantly different from zero. The authors wish to acknowledge INFM (Istituto Nazionale di Fisica della Materia) for the “Iniziativa Trasversale di Calcolo Parallelo”, and David Singh and the Naval Research Laboratory, where part of this work was performed with the support of ONR. We are also grateful to S. de Gironcoli for insightful comments.
MZ-TH/99-52 November 1999 On the nonrelativistic dynamics of heavy particles near the production threshold A.A.Pivovarov Institut für Physik, Johannes-Gutenberg-Universität, Staudinger Weg 7, D-55099 Mainz, Germany and Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312 ## Abstract A solution to the Schrödinger equation for the nonrelativistic Green function which is used for describing heavy quark-antiquark pair production near the threshold in $`e^+e^-`$ annihilation is presented. A quick comparison with existing results is given. A choice of the effective mass scale for the nonrelativistic system with Coulomb interaction is discussed. PACS numbers: 14.65.Ha, 13.85.Lg, 12.38.Bx, 12.38.Cy Hadron production near the heavy quark threshold will be thoroughly studied experimentally at future accelerators, e.g. . The dynamics of a slowly moving pair of a heavy quark and antiquark near the production threshold is nonrelativistic to high accuracy, which justifies the use of nonrelativistic quantum mechanics as a proper theoretical framework for describing such a system . Being much simpler than a comprehensive relativistic treatment, this approach allows one to take into account exactly such essential features of the dynamics as the Coulomb interaction . The spectrum of hadronic states produced near the quark-antiquark threshold is contained in the Green function $`G(E)=(H-E)^{-1}`$ of the effective nonrelativistic Hamiltonian $`H`$. In the position space it reads $$G(E;𝐫,𝐫^{\prime })=\langle 𝐫|(H-E)^{-1}|𝐫^{\prime }\rangle $$ (1) and satisfies the Schrödinger equation $$(H-E)G(E;𝐫,𝐫^{\prime })=\delta (𝐫-𝐫^{\prime }).$$ (2) The Hamiltonian $`H`$ is represented in the following general form $`H=H_0+\mathrm{\Delta }H`$, where the first term $`H_0`$ is the Coulomb Hamiltonian $$H_0=\frac{p^2}{m}-\frac{C_F\alpha _s}{r}$$ (3) with $`p^2=-\mathrm{\Delta }^2`$.
The color factor for the fermion representation of the gauge group is $`C_F=(N_c^2-1)/2N_c`$, $`\alpha _s`$ is the QCD coupling constant, and $`m`$ is the mass of the heavy quark. The normalization point for $`\alpha _s`$ will be fixed later. The second term $`\mathrm{\Delta }H`$ accounts for relativistic and perturbative strong interaction contributions which are assumed to be small, i.e. they are treated as corrections to the Coulomb spectrum. The term $`\mathrm{\Delta }H`$ has the explicit form $$\mathrm{\Delta }H=\mathrm{\Delta }_kV+\mathrm{\Delta }_{pot}V+\mathrm{\Delta }_{NA}V+\mathrm{\Delta }_{BF}V.$$ (4) The quantity $`\mathrm{\Delta }_kV`$ in eq. (4) is the relativistic kinetic energy correction $$\mathrm{\Delta }_kV=-\frac{p^4}{4m^3},$$ (5) the second term $`\mathrm{\Delta }_{pot}V`$ collects the strong interaction perturbative corrections to the Coulomb potential , and $`\mathrm{\Delta }_{BF}V`$ is the Breit-Fermi potential with the color factor $`C_F`$ added . $`\mathrm{\Delta }_{NA}V=-C_AC_F\alpha _s^2/(2mr^2)`$ is the non-Abelian potential of the quark-antiquark interaction , with $`C_A=N_c`$ the color factor for the gluons. The corrections to the Coulomb Green function at the origin due to the terms $`\mathrm{\Delta }_kV`$, $`V_{NA}`$ and $`V_{BF}`$ were first presented in . Numerous applications of these results and further references can be found in the recent literature, e.g. . The treatment of the problem in the present paper is slightly different.
For the $`s`$-wave production of a quark-antiquark pair in the triplet spin state ($`L=0`$, $`S=1`$) the Breit-Fermi potential reads $$\mathrm{\Delta }_{BF}V=-\frac{C_F\alpha _s}{2m^2}\left(\frac{1}{r}p^2+p^2\frac{1}{r}\right)+\frac{11\pi C_F\alpha _s}{3m^2}\delta (𝐫).$$ The correction $`\mathrm{\Delta }H`$ can be rewritten as $$\mathrm{\Delta }H=\mathrm{\Delta }_{pot}V-\frac{1}{4m}H_0^2-\frac{3C_F\alpha _s}{4m}[H_0,\frac{1}{r}]_+$$ $$+\frac{\alpha _s}{m}\left(\frac{5C_F}{4}+\frac{C_A}{2}\right)[H_0,ip_r]_{-}-\frac{4\pi \alpha _s}{m^2}\left(\frac{C_F}{3}+\frac{C_A}{2}\right)\delta (𝐫)$$ (6) where $`ip_r=\partial _r+1/r`$. Note that a similar form of the representation of the correction in terms of powers of the leading order operator is usual for anharmonic oscillator problems . The part of the potential proportional to the $`\delta `$-function $`\delta (𝐫)`$ is a separable potential. The equation for the full GF with such a potential can be exactly solved for the quantity we need. Noticing this fact, one rewrites the Hamiltonian in the form $$H=H_0+\mathrm{\Delta }H=H_{ir}+\alpha _sV$$ (7) with $$V=-\frac{4\pi }{m^2}\left(\frac{C_F}{3}+\frac{C_A}{2}\right)\delta (𝐫)=V_0\delta (𝐫).$$ (8) Here $`H_{ir}`$ is the irreducible Hamiltonian. The new representation for the GF reads $$G(E)=(H-E)^{-1}=(H_{ir}+\alpha _sV-E)^{-1}.$$ (9) Introducing the GF $`G_{ir}(E)`$ of the irreducible Hamiltonian $$G_{ir}(E)=(H_{ir}-E)^{-1}$$ (10) one obtains the following equation for the full Green function $`G(E)`$ (written in the operator form) $$G(E)=G_{ir}(E)-\alpha _sG_{ir}(E)VG(E).$$ (11) Eq. (11) is exactly solved for the position representation component $`G(E;0,0)`$ with the result $$G(E;0,0)=\frac{G_{ir}(E;0,0)}{1+\alpha _sV_0G_{ir}(E;0,0)}.$$ (12) Eq. (12) accounts for the $`\delta (𝐫)`$ part of the correction to the irreducible Hamiltonian exactly. It is analogous to the usual representation of the vacuum polarization function through the one-particle irreducible block.
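The Dyson resummation leading from Eq. (11) to Eq. (12) can be checked numerically at the level of the scalar value at the origin; the sketch below uses arbitrary illustrative numbers for $`G_{ir}(E;0,0)`$ and $`\alpha _sV_0`$ and is not part of the original derivation:

```python
def g_full_origin(g_ir, a_v0):
    """Eq. (12): exact resummation of the separable delta-potential,
    G(E;0,0) = G_ir / (1 + a_v0 * G_ir), with a_v0 = alpha_s * V_0."""
    return g_ir / (1.0 + a_v0 * g_ir)

def dyson_residual(g_ir, a_v0):
    """Residual of Eq. (11) at the origin: G - (G_ir - a_v0 * G_ir * G);
    it vanishes when Eq. (12) solves the Dyson equation."""
    g = g_full_origin(g_ir, a_v0)
    return g - (g_ir - a_v0 * g_ir * g)
```

For any nonsingular choice of the two inputs the residual vanishes, confirming that Eq. (12) is the exact fixed point of Eq. (11).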
In this case the irreducible object $`G_{ir}(E;0,0)`$ is characterized by the absence of the $`\delta (𝐫)`$ interaction in it. The function $`G_{ir}(E;0,0)`$ can be found perturbatively using the Coulomb solution as a leading order approximation. The Green function $`G(E)`$ emerges as the nonrelativistic limit of the relativistic scattering amplitude near the production threshold. The nonrelativistic Hamiltonian can be constructed from the QCD Lagrangian. In the leading order of the nonrelativistic expansion there is an energy independent factor (matching coefficient) $`C(\alpha _s,m)`$ that allows one to map the quantum mechanical quantities onto the relativistic cross section near the threshold. In higher orders of the nonrelativistic expansion, further terms of the expansion of the current itself and new vertices of the effective Lagrangian are generated that should be accounted for when the cross section in QCD is calculated. We do not discuss these terms here because their contributions start at higher orders in the expansion parameters. Therefore the corresponding expressions should be taken only in the leading order of the hard loop expansion. The cross section of heavy quark-antiquark pair production near the threshold in $`e^+e^-`$ annihilation contains a part with a nontrivial loop expansion. It has the form $$R^{th}(s)\sim C(\overline{\alpha }_s,m)\mathrm{Im}G(E;0,0),\quad s=(2m+E)^2.$$ (13) Here $`C(\overline{\alpha }_s,m)`$ is the matching (hard, or high energy) coefficient and $`\overline{\alpha }_s`$ is the coupling constant. Generally, one can use different normalization points (or different subtraction procedures) for $`\overline{\alpha }_s`$ and for the corresponding coupling constant $`\alpha _s`$ which enters the expression for the nonrelativistic Green function. The vacuum polarization function near the threshold, $`C(\overline{\alpha }_s,m)G(E;0,0)`$, requires subtraction.
This feature is familiar from the PT analysis of the vacuum polarization function in the full theory. To obtain a finite quantity one can differentiate $`C(\overline{\alpha }_s,m)G(E;0,0)`$ with respect to $`E`$ (constructing the $`D`$-function) or take the discontinuity across the physical cut (constructing the imaginary part, or the cross section $`R^{th}(s)`$). Both the $`D`$-function and $`R^{th}(s)`$ are finite. We write $$G(E;0,0)=\frac{1}{\alpha _sV_0}-\frac{1}{\alpha _sV_0}\frac{1}{1+\alpha _sV_0G_{ir}(E;0,0)}$$ and obtain the following expression for the $`D`$-function $$C(\overline{\alpha }_s,m)\frac{d}{dE}G(E;0,0)=\frac{C(\overline{\alpha }_s,m)}{(1+\alpha _sV_0G_{ir}(E;0,0))^2}\frac{d}{dE}G_{ir}(E;0,0).$$ (14) The imaginary part of $`C(\overline{\alpha }_s,m)G(E;0,0)`$ reads $$C(\overline{\alpha }_s,m)\mathrm{Im}G(E;0,0)=\frac{C(\overline{\alpha }_s,m)\mathrm{Im}G_{ir}(E;0,0)}{(1+\alpha _sV_0\mathrm{Re}G_{ir}(E;0,0))^2+(\alpha _sV_0\mathrm{Im}G_{ir}(E;0,0))^2}.$$ (15) The quantities in eqs. (14) and (15) are finite. The explicit expression for the coefficient $`C(\overline{\alpha }_s,m)`$ has been found at the $`\overline{\alpha }_s^2`$ order within dimensional regularization . It contains a singularity of the form $$C_S(\overline{\alpha }_s,m)=1-C_F\overline{\alpha }_s^2\left(\frac{C_F}{3}+\frac{C_A}{2}\right)\frac{1}{\epsilon }$$ (16) where only the singular part $`C_S(\overline{\alpha }_s,m)`$ of the coefficient $`C(\overline{\alpha }_s,m)`$ is written. This singularity cancels in eqs. (14) and (15). We consider this cancellation (the renormalization procedure) for the case of the $`D`$-function only. To carry out the renormalization procedure we use the dimensionally regularized Green function in the expression for the correction proportional to $`V_0`$. It suffices to substitute the pure Coulomb Green function for this purpose.
The dimensionally regularized Coulomb Green function at the origin takes the explicit form $$G_C^{DR}(\kappa ;0,0)=\frac{m}{4\pi }\left\{-\kappa +\frac{C_F\alpha _sm}{2}\left(\frac{1}{\epsilon }+\mathrm{ln}\frac{\mu ^2}{\kappa ^2}-2\psi \left(1-\frac{C_F\alpha _sm}{2\kappa }\right)\right)\right\}$$ (17) where $`\kappa ^2=-mE`$, $`\psi (z)=\mathrm{\Gamma }^{\prime }(z)/\mathrm{\Gamma }(z)`$ is the digamma function and $`\mathrm{\Gamma }(z)`$ is Euler’s $`\mathrm{\Gamma }`$-function. One finds $$(1+\alpha _sV_0G_{ir}(E;0,0))^2\to (1+\alpha _sV_0G_C^{DR}(E;0,0))^2=Z^{-1}(1+\alpha _sV_0G_C(E;0,0))^2$$ (18) where $$Z^{-1}=1+C_F\frac{\alpha _s^2}{4\pi }m^2V_0\frac{1}{\epsilon }=1-C_F\alpha _s^2\left(\frac{C_F}{3}+\frac{C_A}{2}\right)\frac{1}{\epsilon }.$$ (19) The finite (renormalized) Coulomb Green function at the origin has the form $$G_C(\kappa ;0,0)=\frac{m}{4\pi }\left\{-\kappa +\frac{C_F\alpha _sm}{2}\left(\mathrm{ln}\frac{\mu ^2}{\kappa ^2}-2\psi \left(1-\frac{C_F\alpha _sm}{2\kappa }\right)\right)\right\}.$$ (20) The renormalization constant $`Z`$ cancels the divergence $`C_S(\overline{\alpha }_s,m)`$ of the high energy coefficient $`C(\overline{\alpha }_s,m)`$ in the proper order of the loop expansion. One uses $`\overline{\alpha }_s=\alpha _s+O(\alpha _s^2)`$ as a formal PT relation to achieve the cancellation. The finite coefficient of order $`\alpha _s^2`$ (or $`\overline{\alpha }_s^2`$) depends on the particular way of subtraction in $`C(\overline{\alpha }_s,m)`$ and $`G_{ir}(E)`$. If the subtraction procedures for $`C(\overline{\alpha }_s,m)`$ and $`G_{ir}(E)`$ are not properly coordinated during the calculation, the finite coefficient is fixed by matching . If the calculation of both $`C(\overline{\alpha }_s,m)`$ and $`G_{ir}(E)`$ has been done within one and the same subtraction scheme, this matching is automatic, e.g. . Therefore, after the standard renormalization, eq. (12) gives the representation of $`G(E;0,0)`$ as a Dyson sum of irreducible terms.
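Eq. (20) is straightforward to evaluate numerically below threshold; the sketch uses an elementary stdlib implementation of $`\psi (z)`$ and illustrative parameter values (not physical quark masses), and exhibits the bound-state poles of $`G_C`$ at $`\kappa _n=C_F\alpha _sm/2n`$, where the argument of $`\psi `$ hits a non-positive integer:

```python
import math

def digamma(x):
    """psi(z) = Gamma'(z)/Gamma(z) for real non-integer x, via the
    standard recurrence plus the asymptotic expansion at large argument."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return r + math.log(x) - 0.5 * inv - inv2 * (
        1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def green_coulomb_origin(E, m, alpha_s, mu, CF=4.0 / 3.0):
    """Renormalized Coulomb Green function at the origin, Eq. (20),
    for E < 0 below threshold (kappa^2 = -m E)."""
    kappa = math.sqrt(-m * E)
    lam = CF * alpha_s * m / (2.0 * kappa)
    return m / (4.0 * math.pi) * (
        -kappa + 0.5 * CF * alpha_s * m *
        (math.log(mu * mu / (kappa * kappa)) - 2.0 * digamma(1.0 - lam)))
```

For $`m=1`$ and $`\alpha _s=0.3`$ the $`n=1`$ pole sits at $`E_1=-(C_F\alpha _s)^2m/4=-0.04`$, and the function changes sign across it.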
The spectrum of the full system is determined by the equation $$G_{ir}(E;0,0)^{-1}+\alpha _sV_0=0$$ (21) where $`G_{ir}(E;0,0)`$ is constructed perturbatively. Eq. (21) can be solved exactly or perturbatively. The isolated roots of eq. (21) give the discrete spectrum of the system. There is also a continuous spectrum given by the discontinuity of $`G_{ir}(E;0,0)`$ across the cut at positive values of $`E`$. For the continuous spectrum one can use the Coulomb GF in the denominator of eq. (15). The problem of calculating the near-threshold cross section reduces to the construction of the spectrum of the irreducible Hamiltonian $`H_{ir}`$ $$H_{ir}=H_0+\mathrm{\Delta }V_{pot}-\frac{1}{4m}H_0^2-\frac{3C_F\alpha _s}{4m}[H_0,\frac{1}{r}]_++\frac{\alpha _s}{m}\left(\frac{5C_F}{4}+\frac{C_A}{2}\right)[H_0,ip_r]_{-}.$$ (22) The spectrum is found within PT. The leading order spectrum is given by the renormalized pure Coulomb solution, eq. (20). The term $`\mathrm{\Delta }_{pot}V`$ represents the first and second order perturbative QCD corrections to the Coulomb potential which have been studied. The correction due to the first iteration of the $`\mathrm{\Delta }_{pot}V`$ term was found in ref. , where a simple and efficient framework for computing the iterations of higher orders was formulated. The explicit formulas for these corrections can be found in . For the kinetic $`H_0^2`$ term one finds $$\rho _k(E)=\sum _{E^{\prime }}\delta \left(E^{\prime }-\frac{E^{\prime 2}}{4m}-E\right)\rho (E^{\prime })$$ (23) where $`\rho (E)`$ is the density of Coulomb states with energy $`E`$ $$\rho (E)=\frac{1}{\pi }\mathrm{Im}G_C(E;0,0).$$ (24) The sum is over the whole spectrum. For the discrete levels it gives $$\rho _k(E)|_{disc}=\sum _n\delta \left(E_n-\frac{E_n^2}{4m}-E\right)|\psi _n(0)|^2$$ (25) with $`\psi _n`$ being a bound state with energy level $`E_n`$. The position of the pole is now at $`E_n^{pole}=E_n-E_n^2/4m`$.
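For reference, the leading-order Coulomb levels and the shifted pole positions read off from Eq. (25) can be tabulated as follows (illustrative parameter values only):

```python
def coulomb_level(n, m, alpha_s, CF=4.0 / 3.0):
    """Leading-order Coulomb bound-state energy E_n = -(CF alpha_s)^2 m / (4 n^2)."""
    return -(CF * alpha_s) ** 2 * m / (4.0 * n * n)

def kinetic_corrected_pole(n, m, alpha_s, CF=4.0 / 3.0):
    """Pole position E_n - E_n^2/(4m) after the kinetic H_0^2 correction,
    as in Eq. (25)."""
    En = coulomb_level(n, m, alpha_s, CF)
    return En - En * En / (4.0 * m)
```

With $`m=1`$ and $`\alpha _s=0.3`$ the ground state moves from $`E_1=-0.04`$ to $`E_1^{pole}=-0.0404`$; the shift is a relative $`E_1/4m`$ effect, as expected of an $`O(v^2)`$ correction.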
For the continuous spectrum the correction reads $$\rho _k(E)|_{cont}=\int _0^{\mathrm{\infty }}dE^{\prime }\delta \left(E^{\prime }-\frac{E^{\prime 2}}{4m}-E\right)\rho (E^{\prime })=\frac{1}{1-\overline{E}/2m}\rho (\overline{E})$$ (26) with $$E-\overline{E}+\frac{\overline{E}^2}{4m}=0.$$ (27) Eq. (27) can be solved perturbatively for small $`E`$. One obtains $$\overline{E}=E+\frac{E^2}{4m}+O(E^3)$$ (28) which is the substitution used in refs. . Note that there is no correction to the wave functions because $`H_0^2`$ has no off-diagonal matrix elements between the Coulomb states. The last term in eq. (22) gives no correction to the spectrum. The term with the anticommutator $`[H_0,r^{-1}]_+`$ in eq. (22) gives the correction to the spectrum which was presented earlier as an energy dependent shift of the coupling constant. This correction can be written in the form $$G_C(E)+\mathrm{\Delta }_aG_C(E)=G_C\left(E;\alpha _s\to \alpha _s\left(1+\frac{3E}{2m}\right)\right)$$ (29) which is valid at small $`E`$ and was presented in . Note, however, that while the $`\delta `$-function part of the correction to the Hamiltonian can be considered unique because it represents a reducible vertex, the irreducible potential $`H_{ir}`$ can be chosen in different forms. Indeed, one can rewrite the sum of the kinetic and anticommutator terms of $`H_{ir}`$ in the form $$-\frac{1}{4m}H_0^2-\frac{3C_F\alpha _s}{4m}[H_0,\frac{1}{r}]_+=\frac{5}{4m}H_0^2-\frac{3}{4m^2}[H_0,p^2]_+$$ (30) which is a reshuffling of contributions within the irreducible correction.
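The perturbative inversion in eq. (28) is easy to check numerically: the quadratic equation (27) has the exact root $`\overline{E}=2m(1-\sqrt{1-E/m})`$, and its difference from the truncated expansion is of order $`E^3`$, as claimed. A minimal sketch, in illustrative units with $`m=1`$:

```python
import math

def ebar_exact(E, m=1.0):
    """Exact root of eq. (27), E - Ebar + Ebar^2/(4m) = 0, vanishing as E -> 0."""
    return 2.0 * m * (1.0 - math.sqrt(1.0 - E / m))

def ebar_pert(E, m=1.0):
    """Perturbative solution, eq. (28), dropping the O(E^3) remainder."""
    return E + E**2 / (4.0 * m)

E = 0.01
residual = E - ebar_exact(E) + ebar_exact(E)**2 / 4.0   # eq. (27); should vanish
mismatch = abs(ebar_exact(E) - ebar_pert(E))            # should be O(E^3)
```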
While the kinetic term $`H_0^2`$ has only its coefficient changed as a result of such a reshuffling, the new anticommutator term leads to a modification of the leading order GF of the following form $$\frac{1}{H_0-E}+\frac{3}{4m^2}\left(p^2\frac{1}{H_0-E}+\frac{1}{H_0-E}p^2\right)+\frac{1}{H_0-E}\frac{3Ep^2}{2m^2}\frac{1}{H_0-E}$$ $$=\frac{3}{4m^2}\left(p^2\frac{1}{H_0-E}+\frac{1}{H_0-E}p^2\right)+\left(\frac{p^2}{m}\left(1-\frac{3E}{2m}\right)-\frac{C_F\alpha _s}{r}-E\right)^{-1}.$$ (31) The first term of this equation does not affect the structure of the spectrum. The last term in eq. (31) can be interpreted as a correction to the mass. To a certain degree this is equivalent to the previous case (the correction to the coupling $`\alpha _s`$ in eq. (29)) because the genuine parameter of the Coulomb problem is the Bohr radius or momentum, $`p_B=\alpha _sm`$ (or $`C_F\alpha _sm/2`$). One can see that both changes $`\alpha _s\to \alpha _s\left(1+3E/2m\right)`$ and $`m\to m/\left(1-3E/2m\right)`$ lead to the same result for $`p_B`$ within the accuracy of the approximation, i.e. up to higher order terms in $`E`$ $$p_B\to p_B\left(1+\frac{3E}{2m}\right)=\frac{p_B}{1-\frac{3E}{2m}}+O(E^2).$$ (32) One can check that the total correction to the discrete energy levels, for instance, is the same. The old solution eq. (29) gives $$m\mathrm{\Delta }E_n=-\frac{1}{4}E_n^2+3E_n^2=\frac{11}{4}E_n^2$$ (33) while the new decomposition eq. (31) results in $$m\mathrm{\Delta }E_n=\frac{5}{4}E_n^2+\frac{3}{2}E_n^2=\frac{11}{4}E_n^2.$$ (34) This coincidence is valid only parametrically in the region of applicability of perturbation theory in the expansion parameters $`E/m`$ and/or $`\alpha _s`$. It can result in numerical differences when extrapolated to larger energies in the continuous spectrum. The different forms of the decomposition may lead to different numerical predictions within a numerical evaluation . The expression given in eq. (12) has the standard structure of a polarization function.
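The agreement of eqs. (33) and (34), and the equivalence of the two parameter shifts in eq. (32), can be verified mechanically; a small sketch (illustrative numbers) using exact rational arithmetic:

```python
from fractions import Fraction

# eq. (33): kinetic shift -1/4 plus coupling-shift contribution 3, in units of E_n^2/m
old_shift = Fraction(-1, 4) + Fraction(3)
# eq. (34): kinetic shift 5/4 plus mass-shift contribution 3/2
new_shift = Fraction(5, 4) + Fraction(3, 2)

# eq. (32): the two shifts of the Bohr momentum agree up to O(E^2)
m, E, p_B = 1.0, 0.01, 1.0
coupling_shift = p_B * (1.0 + 3.0 * E / (2.0 * m))   # alpha_s -> alpha_s(1 + 3E/2m)
mass_shift = p_B / (1.0 - 3.0 * E / (2.0 * m))       # m -> m/(1 - 3E/2m)
```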
The solution $`E_f`$ to eq. (21) (one, or the first, of the poles of the generalized “propagator” of the full system with the Hamiltonian $`H`$) is an important dimensional parameter for the system. For the observables which are saturated by the contributions of the discrete spectrum the quantity $`E_f`$ can serve as a natural mass parameter. In this case the position of a pole (the numerical value of some solution $`E_f`$ to eq. (21)) can be chosen as a scale for the system instead of the heavy quark pole mass. However, for the observables which are saturated by the contributions of the continuous spectrum, or have a considerable admixture of such contributions, the natural mass scale is not necessarily related to $`E_f`$ and is determined by other properties of the full interaction. To conclude, we have presented a solution to the Schrödinger equation for the nonrelativistic Green function. The solution has the form of a resummed geometric series (Dyson resummation) of irreducible blocks, which is usual in quantum field theory. The property of irreducibility is defined with respect to the $`\delta `$-function part of the interaction potential. Different decompositions of the irreducible Hamiltonian $`H_{ir}`$ for the treatment within PT are considered. A new form of the first order correction to the spectrum of the irreducible Hamiltonian $`H_{ir}`$ is given. Acknowledgments This work is partially supported by the Volkswagen Foundation under contract No. I/73611 and the Russian Fund for Basic Research under contract Nos. 97-02-17065 and 99-01-00091. A.A. Pivovarov is an Alexander von Humboldt fellow.
# Note on the Kaplan–Yorke dimension and linear transport coefficients ## 1. INTRODUCTION The Kaplan–Yorke (KY) or Lyapunov dimension was introduced$`^{\text{[1]}}`$ as a conjecture relating the Hausdorff (H) dimension and the Lyapunov exponents of the invariant measure of a given dynamical system.<sup>5</sup>In the original paper only the natural invariant measure, i.e. the weak limit of the Lebesgue measure or Sinai–Ruelle–Bowen measure, was considered. In later work the result is formulated in a more general way such that it holds for every invariant measure. This allows a computation of the H–dimension of the attractor of a dissipative dynamical system in terms of its Lyapunov exponents. Its validity has been proven for two dimensional dynamical systems$`^{\text{[2]}}`$ and for a rather large class of stochastic systems.$`^{\text{[3]}}`$ In general one only knows that the KY–dimension is an upper bound for the H–dimension.$`^{\text{[4]}}`$ Although it is not hard to construct rather artificial counterexamples, it is generally believed that the conjecture holds for “generic” dynamical systems.$`^{\text{[5]}}`$ In this paper we will discuss a relation (see Eq. (4.10) below) between the KY–dimension in large thermostatted systems and their physical properties, such as the transport coefficients in a nonequilibrium stationary state. Such a relation allows us to estimate the KY–dimension from a measurement of the transport coefficient. In doing so we obtain a new relation between a dynamical quantity (the KY–dimension) and a physical quantity (the transport coefficient).
A difficulty in doing this is that while the dynamical quantities are usually defined for any finite number of particles $`N`$, the physical quantities usually refer to systems of very large $`N`$, so that one can meaningfully define intensive quantities, which only depend on intrinsic parameters like the number density $`n=N/V`$ rather than on $`N`$ and $`V`$ separately, where $`V`$ is the volume of the system. Strictly speaking, then, a thermodynamic limit has to be taken, i.e. $`N\to \mathrm{\infty }`$, $`V\to \mathrm{\infty }`$ with the number density ($`N/V\to n`$) and other intensive quantities, such as in particular the shear rate considered in this paper, held constant. This is straightforward if the linear transport coefficients are required and the limit of the external field $`F_e\to 0`$ is taken before the limit $`N\to \mathrm{\infty }`$, as in linear response theory. However, for finite fields, changes in the behavior of the system can occur when the thermodynamic limit is approached - as e.g. the onset of turbulence in a sheared system.$`^{\text{[6]}}`$ In this paper we are interested only in the behavior of systems before such a transition takes place, like the laminar flow of a sheared fluid considered in section 5. We think, nevertheless, that our results can be usefully formulated for large systems using expressions like “for sufficiently large $`N`$” without taking the mathematical limit (see the comment after Eq. (2.4) for a more precise discussion). Although this expression is not mathematically well defined, we think that its meaning will be clear in any practical application (see footnote 7 below for an attempt to clarify this point). Moreover, because errors in intensive quantities are typically $`O(N^{-1})`$ where $`N\approx 10^{23}`$, the approach is physically reasonable.
The above mentioned connection between a dynamical and a thermodynamic treatment requires the usually discrete Lyapunov spectrum to be effectively replaced by a continuous intensive spectrum and an intrinsic version of the KY–dimension to be introduced. In section 4 we show how this can be implemented, after having introduced the basic equations which connect the dynamical and physical quantities in section 2, and deriving a new exact relation for the linear transport coefficients in section 3. ## 2. BASIC RELATIONS AND DEFINITIONS As has been shown before,$`^{\text{[7, 8]}}`$ there is a direct relationship between the sum of the non-zero Lyapunov exponents $`\lambda _i`$ with $`\lambda _i\ge \lambda _{i+1}`$, $`1\le i\le 2dN-f`$ (where $`N`$ is the number of particles, $`d`$ the Cartesian dimension and $`f`$ the number of zero Lyapunov exponents) and the phase space contraction rate in a thermostatted system, subject to an external force $`F_e`$, in a nonequilibrium stationary state, of the form: $$\underset{i=1}{\overset{2dN-f}{\sum }}\lambda _{i,N}(F_e)=\mathrm{\Lambda }_N(F_e).$$ (2.1) Here the subscript $`N`$ indicates the $`N`$-dependence of the various quantities in Eq. (2.1), $`2dN-f`$ is the effective number of degrees of freedom in phase space of the system and $`\mathrm{\Lambda }_N(F_e)=\left\langle \frac{\partial }{\partial \mathbf{\Gamma }}\cdot \dot{\mathbf{\Gamma }}\right\rangle `$ is the (time averaged) phase space contraction rate, where $`\mathbf{\Gamma }`$ stands for the collection of the coordinates and momenta of the $`N`$ particles and $`\dot{\mathbf{\Gamma }}`$ for its time derivative. For macroscopic systems, i.e. systems with very large $`N`$, one can use the equality of the dynamical phase space contraction and the physical entropy production$`^{\text{[8, 9]}}`$ and obtain from Eq.
(2.1): $$\frac{1}{N}\underset{i=1}{\overset{2dN-f}{\sum }}\lambda _{i,N}(F_e)=-\frac{\sigma _N(F_e)}{nk_B}=\frac{J_N(F_e)F_e}{nk_BT}=-\frac{L_N(F_e)F_e^2}{nk_BT}.$$ (2.2) Here $`\sigma _N(F_e)`$, $`J_N(F_e)`$ and $`L_N(F_e)`$ are the entropy production rate per unit volume, the dissipative flux and the transport coefficient respectively, induced in the system in the stationary state by the external force $`F_e`$, where a nonlinear constitutive relation $`J_N(F_e)=-L_N(F_e)F_e`$ has been used. The subscript $`N`$ indicates the $`N`$-dependence for finite systems. The kinetic temperature $`T`$ is determined by the relation: $$\frac{1}{dN-d-1}\underset{i=1}{\overset{N}{\sum }}\frac{\mathbf{p}_i^2}{m}\equiv k_BT,$$ (2.3) where $`m`$ is the particle mass and $`\{\mathbf{p}_i,i=1,\mathrm{\dots },N\}`$ are the peculiar momenta. For systems at equilibrium the thermodynamic temperature appearing in Eq. (2.3) should strictly only be calculated in the thermodynamic limit. However, for nonequilibrium systems, as mentioned above, the application of the limit $`N\to \mathrm{\infty }`$ is not straightforward. Still, for sufficiently large values of $`N`$, Eq. (2.2) can be interpreted as: $$\frac{1}{N}\underset{i=1}{\overset{2dN-f}{\sum }}\lambda _{i,N}(F_e)=-\frac{\sigma (F_e)}{nk_B}+O(N^{-1})=-\frac{L(F_e)F_e^2}{nk_BT}+O(N^{-1}),$$ (2.4) where by $`O(N^{-1})`$ we mean that - at least at equilibrium - the finite size corrections can be bounded by a function of the form $`CN^{-1}`$ with $`C`$ of order 1. Although this condition on the constant $`C`$ is not mathematically precisely defined, we discuss in section 5 numerical experiments which give an indication of the magnitude of the $`O(N^{-1})`$ corrections in the relationship between the KY–dimension and the viscosity. In what follows, when we write $`O(N^{-1})`$, we will always intend it to carry the particular meaning given by the above discussion.
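As a concrete illustration of the estimator in Eq. (2.3), the following sketch computes the kinetic temperature from a set of peculiar momenta (the momenta and units are hypothetical, chosen only to make the bookkeeping of degrees of freedom explicit):

```python
def kinetic_temperature(momenta, m=1.0, d=2):
    """Kinetic temperature k_B*T from peculiar momenta, Eq. (2.3):
    sum(p_i^2/m) divided by the dN - d - 1 independent momentum degrees of
    freedom (d removed by momentum conservation, 1 by the kinetic-energy
    constraint of the Gaussian thermostat)."""
    N = len(momenta)
    p_sq = sum(px * px + py * py for (px, py) in momenta)
    return (p_sq / m) / (d * N - d - 1)

# four particles in d = 2 with zero total peculiar momentum
momenta = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
kBT = kinetic_temperature(momenta)   # 4 / (8 - 2 - 1) = 0.8
```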
Since we are interested here in the linear (small $`F_e`$) regime one can use that $`L_N(F_e)=L_N+O(F_e^2)`$,<sup>6</sup>Here and in what follows we assume that the transport coefficients are even in $`F_e`$. where $`L_N=L_N(0)>0`$ is the linear transport coefficient. We note that the left hand side of Eq. (2.2) is not restricted to small fields, due to its dynamical origin, and that the right hand side is related for small fields to the usual entropy production rate per unit volume of Irreversible Thermodynamics.$`^{\text{[10]}}`$ We now introduce the Kaplan–Yorke (KY) dimension. If $`N_{KY}`$ is the largest integer for which $`\underset{i=1}{\overset{N_{KY}}{\sum }}\lambda _{i,N}(F_e)>0`$, the KY–dimension, $`D_{KY,N}`$,$`^{\text{[1]}}`$ for a finite system with a discrete Lyapunov spectrum, is defined by$`^{\text{[1, 7]}}`$: $$D_{KY,N}=N_{KY}+\frac{\underset{i=1}{\overset{N_{KY}}{\sum }}\lambda _{i,N}(F_e)}{|\lambda _{N_{KY}+1,N}(F_e)|}$$ (2.5) where we have not indicated the $`N`$-dependence of $`N_{KY}`$. ## 3. SMALL PHASE SPACE REDUCTION In case the phase space dimension reduction is smaller than one, Eq. (2.5) can be reduced to the exact equation: $$\underset{i=1}{\overset{2dN-f}{\sum }}\lambda _{i,N}(F_e)=\lambda _{min,N}(F_e)(2dN-f-D_{KY,N}(F_e))=-\frac{\sigma _N(F_e)V}{k_B},$$ (3.1) where the minimum Lyapunov exponent is $`\lambda _{min,N}(F_e)=\lambda _{2dN-f,N}(F_e)`$. From Eqs. (3.1) and (2.2) we can then trivially calculate the linear (i.e. the limiting zero field) transport coefficient in linear response theory as, $$L_N=\underset{F_e\to 0}{lim}\frac{(2dN-f-D_{KY,N}(F_e))\lambda _{max,N}(F_e)nk_BT}{NF_e^2}.$$ (3.2) A similar relation has been obtained for the periodic Lorentz gas on the basis of periodic orbit theory.$`^{\text{[11]}}`$ We remark that from the point of view of linear response theory, i.e. Eq. (3.2), a phase space dimension reduction smaller than one occurs for any $`N`$, including $`N\to \mathrm{\infty }`$. In that case one can let $`N\to \mathrm{\infty }`$ in Eq.
(3.2) and obtain a new exact relation for the linear transport coefficients, $`L=lim_{N\to \mathrm{\infty }}L_N`$, equivalent to the Green-Kubo formulae. The corresponding expression for the KY–dimension of the steady state attractor for sufficiently small fields is given by: $$D_{KY,N}(F_e)=2dN-f-\frac{LF_e^2N}{\lambda _{max,N}nk_BT}+O(F_e^4).$$ (3.3) To obtain the Eqs. (3.2) and (3.3), we have used that for systems which are symplectic at equilibrium (i.e. all Hamiltonian equilibrium systems), one can write for small $`F_e`$: $`\lambda _{max,N}(F_e)=\lambda _{max,N}+O(F_e^2)=-\lambda _{min,N}+O(F_e^2)`$, where $`\lambda _{max,N}\equiv \lambda _{max,N}(0)`$ and $`\lambda _{min,N}\equiv \lambda _{min,N}(0)`$. For systems which satisfy the Conjugate Pairing Rule$`^{\text{[12]}}`$ (CPR) the sum of each conjugate pair, $`i,i^{\prime }=2dN-f-i+1`$, of Lyapunov exponents is $$\lambda _{i,N}(F_e)+\lambda _{i^{\prime },N}(F_e)=-\frac{2\sigma _N(F_e)V}{k_B(2dN-f)},\forall i.$$ (3.4) We note that in nonequilibrium systems, the Conjugate Pairing Rule is expected to hold only in systems that are thermostatted homogeneously. In some systems there is numerical evidence that the maximal exponents satisfy the Conjugate Pairing Rule, while the other pairs do not (that is, Eq. (3.4) is true for $`i=1`$).$`^{\text{[13]}}`$ These systems are said to satisfy the weak Conjugate Pairing Rule (WCPR). By combining the Eqs. (3.1) and (3.4), one obtains for sufficiently small fields: $$\frac{D_{KY,N}(F_e)}{(dN-f/2)}=1-\frac{\lambda _{max,N}(F_e)}{\lambda _{min,N}(F_e)}.$$ (3.5) Substituting Eq. (3.4) into Eq. (2.2) and using Eq. (3.3), one obtains another expression for the limiting KY–dimension, for sufficiently small fields: $$\frac{D_{KY,N}(F_e)}{(dN-f/2)}=3+\frac{\lambda _{min,N}(F_e)}{\lambda _{max,N}(F_e)}+O(F_e^4)$$ (3.6) In Eqs. (3.5) and (3.6), we have chosen to use the maximal Lyapunov exponents as the conjugate pair in Eq. (3.4), and therefore these equations are valid provided WCPR is obeyed. A similar looking formula, as Eq.
(3.6) with the first two terms on the right hand side only, has been quoted in . As mentioned before, all the results in this section hold under the hypothesis that the phase space dimension reduction is smaller than unity. For this to be true for any given small $`F_e`$, $`N`$ is constrained to be of $`O(F_e^{-2})`$, or equivalently, for any given large $`N`$, $`F_e`$ has to be of $`O(N^{-1/2})`$, as can be seen from Eq. (3.3), so that $`F_e`$ and $`N`$ are coupled. <sup>7</sup>One might think that a phase space dimension reduction of one could hardly have any practical consequence in a macroscopic system whose phase space dimension is of the order of $`10^{23}`$. As we will discuss in Section 6, such a very small phase space dimension reduction is expected to occur under physically realizable conditions. However, from a general physical point of view, we would like to have a theory for $`D_{KY,N}`$ which holds uniformly in $`N`$, i.e. with Eq. (3.3) valid for every $`N`$ with a small but fixed $`F_e`$, so that $`N`$ and $`F_e`$ are independent variables. Now, it is trivial to generalize Eq. (3.1) to the case in which the dimensional reduction is greater than one. In fact one then obtains that: $`{\displaystyle \underset{i=1}{\overset{2dN-f}{\sum }}}\lambda _{i,N}(F_e)`$ $`=`$ $`\lambda _{N_{KY}+1,N}(F_e)(2dN-f-D_{KY,N}(F_e))`$ (3.7) $`+{\displaystyle \underset{i=N_{KY}+2}{\overset{2dN-f}{\sum }}}(\lambda _{i,N}(F_e)-\lambda _{N_{KY}+1,N}(F_e)).`$ If we assume, as is usually done, that for sufficiently large $`N`$, $`\lambda _{2dN-f-j}=\lambda _{2dN-f}+O(N^{-1})`$ for fixed $`j`$ not varying with $`N`$, then for any fixed dimensional reduction Eq. (3.7) simply becomes Eq. (3.1) with a correction of $`O(N^{-1})`$. Keeping the phase space dimension reduction fixed as $`N`$ increases still imposes a condition on $`F_e`$ of the form discussed above. To better control the errors when $`2dN-f-N_{KY}`$ becomes large ($`O(N)`$), a more careful treatment of the Eq.
(3.1) is needed in order to obtain an expression uniform in $`N`$ for the phase space dimension reduction. ## 4. LYAPUNOV SPECTRUM FOR VERY LARGE $`N`$ AND LARGE PHASE SPACE REDUCTION For large $`N`$ one can indeed reformulate the definition of $`D_{KY,N}`$ in a more analytical way. Consider thereto the stepwise continuous function of a continuous variable $`0<x\le 2dN-f`$: $`\stackrel{~}{\lambda }_N(x,F_e)=\lambda _{i,N}(F_e)`$ for $`i-1<x\le i`$. If one introduces the integral: $$\stackrel{~}{\mu }_N(x,F_e)=\int _0^x\stackrel{~}{\lambda }_N(y,F_e)dy,$$ (4.1) Eq. (2.5) can be rewritten in the form: $$\stackrel{~}{\mu }_N(D_{KY,N}(F_e),F_e)=0.$$ (4.2) Since $`D_{KY,N}(F_e)`$ as well as $`\stackrel{~}{\mu }_N(x,F_e)`$ are expected to be extensive quantities, i.e. they are proportional to $`N`$ for large $`N`$, it is natural to define the quantity $$\delta _N(F_e)=\frac{D_{KY,N}(F_e)}{2dN-f}$$ (4.3) where $`\delta _N(F_e)`$ is the dimension per effective degree of freedom ($`2dN-f`$) in phase space. We now use again that, as $`N`$ grows, the difference $`\lambda _{i,N}-\lambda _{i+1,N}`$ is expected to go to zero as $`N^{-1}`$. This suggests a possible rescaling of the variable $`x`$ in $`\stackrel{~}{\lambda }_N(x,F_e)`$ to define the function $`\lambda _N(x,F_e)=\stackrel{~}{\lambda }_N(x(2dN-f),F_e)`$, i.e. $`\lambda _N(x,F_e)=\lambda _{i,N}(F_e)`$ for $`\frac{i-1}{2dN-f}<x\le \frac{i}{2dN-f}`$. The function $`\lambda _N(x,F_e)`$ can then be expected to be well approximated by a continuous function of the variable $`x`$ when $`N`$ is sufficiently large. Thus rewriting Eq. (4.1) as $$\mu _N(x,F_e)=\int _0^x\lambda _N(y,F_e)dy$$ (4.4) where $`0\le x\le 1`$, Eq. (4.2) is equivalent to: $$\mu _N(\delta _N(F_e),F_e)=0$$ (4.5) where $`\delta _N(F_e)`$ and $`\mu _N(x,F_e)`$ can be assumed to be intensive quantities. We can make this more precise by the following crucial assumption on the nature of the function $`\lambda _N(y,F_e)`$, i.e.
of the Lyapunov spectrum: Smoothness Hypothesis: If $`N`$ is sufficiently large, one can write: $$\lambda _N(x,F_e)=l(x,F_e)+O(N^{-1})$$ (4.6) with $`l(x,F_e)`$ a smooth function<sup>8</sup>It is enough to assume that $`l(x,F_e)\in L^1`$ in $`x`$ and $`C^2`$ near $`x=1`$. Moreover we will require that $`l(x,F_e)`$ is $`C^2`$ in $`F_e`$ for every $`x`$. This may not be the case when a phase transition occurs.$`^{\text{[15]}}`$ of $`x`$ and $`F_e`$. Eq. (4.6), as Eq. (2.4), is to be interpreted as stating that for sufficiently large $`N`$, $`|\lambda _N(x,F_e)-l(x,F_e)|<C^{\prime }N^{-1}`$ with $`C^{\prime }`$ of order 1. Similarly, as mentioned already above, all the $`O(N^{-1})`$ terms appearing in forthcoming equations (see e.g. Eq. (4.12)) have to be interpreted in this way, and the constants obtained then can be directly expressed in terms of $`C^{\prime }`$. We now define, in analogy with Eqs. (4.4) and (4.5): $$m(x,F_e)=\int _0^xl(y,F_e)dy$$ (4.7) and $`d(F_e)`$ through the equation: $$m(d(F_e),F_e)=0$$ (4.8) respectively. Clearly our Hypothesis implies that: $$\delta _N(F_e)=d(F_e)+O(N^{-1})$$ (4.9) where $`d(F_e)`$ and $`m(x,F_e)`$ are now intensive quantities. The function $`m(x,F_e)`$ is sketched in Fig. 1, especially near $`x=1`$. From this figure one easily deduces that: $$d(F_e)=1-\frac{m(1,F_e)}{m^{\prime }(1,F_e)}+O(m(1,F_e)^2)=1+\frac{1}{l(1,F_e)}\frac{\sigma (F_e)}{2dnk_B}+O(F_e^4),$$ (4.10) where $`m^{\prime }(1,F_e)`$ is the derivative of $`m(x,F_e)`$ with respect to $`x`$ at $`x=1`$. Here we used that $`m^{\prime }(1,F_e)=l(1,F_e)=\lambda _{min}(F_e)`$ and that $`m(1,F_e)`$ is the sum over all Lyapunov exponents, divided by $`(2dN-f)`$, i.e. the phase space contraction (or entropy production) rate per (effective) degree of freedom in phase space.<sup>9</sup>In a more analytic way Eq. (4.10) follows from the inverse function theorem together with our Smoothness Hypothesis and the fact that $`m(1,0)=0`$. Using Eq. (2.2), Eq.
(4.10) can be rewritten as: $$d(F_e)=1-\frac{1}{\lambda _{max}}\frac{LF_e^2}{2dnk_BT}+O(F_e^4)$$ (4.11) where the maximum and minimum Lyapunov exponents at equilibrium (i.e. when $`F_e=0`$) are $`\lambda _{max}=l(0,0)`$ and $`\lambda _{min}=l(1,0)`$, respectively. Moreover we have used that $`\lambda _{max}=-\lambda _{min}`$ and that $`l(1,F_e)=l(1,0)+O(F_e^2)`$. In terms of the extensive quantity $`D_{KY,N}(F_e)`$, Eq. (4.11) can be rewritten as: $$\frac{D_{KY,N}(F_e)}{2dN-f}=1-\frac{1}{\lambda _{max}}\frac{LF_e^2}{2dnk_BT}+O(F_e^4)+O(N^{-1}).$$ (4.12) where we kept the term $`f/(2dN)`$ of $`O(N^{-1})`$ on the left hand side of Eq. (4.12) to facilitate comparison in figures 4 and 5 of section 5 for $`N=32`$ particles. We observe that Eq. (4.12) is formally very similar to Eq. (3.3), except for the correction term $`O(N^{-1})`$ and the substitution of the asymptotic $`\lambda _{max}`$ for the finite $`N`$ value $`\lambda _{max,N}`$. It is clear that the same generalization can be performed for the Eqs. (3.5) and (3.6), if one notes that in the present context the CPR, Eq. (3.4), becomes: $$l(x,F_e)+l(1-x,F_e)=-\frac{\sigma (F_e)}{dnk_B}.$$ (4.13) From Eq. (4.11) it follows that in the linear regime the reduction in phase space dimension in large thermostatted dissipative systems is extensive. For those systems for which the Smoothness Hypothesis holds, this result is exact. The extensive nature of the reduction has been noted before.$`^{\text{[8, 16, 17, 18]}}`$ ## 5. NUMERICAL TEST We tested our Smoothness Hypothesis as well as Eq. (4.12) and equations derived from it assuming that the WCPR is valid, for a system of 32 WCA particles$`^{\text{[19]}}`$ undergoing shear flow in two dimensions. Although this system does not satisfy the CPR, it does appear to satisfy WCPR to within $`0.7\%`$ when $`N=32`$$`^{\text{[13]}}`$ (which was within the limits of numerical error achieved).
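Before turning to the simulation results, it may help to make Eqs. (2.5) and (4.8) concrete. The sketch below computes the KY–dimension of Eq. (2.5) from a discrete spectrum, and then, for an illustrative smooth spectrum $`l(x)=\lambda _{max}(1-2x)-c`$ that satisfies the continuous CPR of Eq. (4.13), compares the exact root of $`m(x)=0`$ with the linearized estimate of Eq. (4.10). All numbers are hypothetical, chosen only for illustration:

```python
def kaplan_yorke(lyap):
    """KY-dimension of Eq. (2.5) from a Lyapunov spectrum (sorted descending)."""
    lyap = sorted(lyap, reverse=True)
    csum, n_ky = 0.0, 0
    for lam in lyap:
        if csum + lam <= 0.0:
            break
        csum += lam
        n_ky += 1
    if n_ky == len(lyap):
        return float(n_ky)               # no net contraction
    return n_ky + csum / abs(lyap[n_ky])

# classic Lorenz-attractor exponents (0.906, 0, -14.57) give D_KY ~ 2.06
d_lorenz = kaplan_yorke([0.906, 0.0, -14.57])

# continuous version: l(x) = lam_max*(1 - 2x) - c obeys Eq. (4.13) with
# sigma/(d*n*k_B) = 2c; then m(x) = lam_max*x*(1 - x) - c*x vanishes at
# d_exact = 1 - c/lam_max, while Eq. (4.10) gives a linearized estimate
lam_max, c = 2.0, 0.02
d_exact = 1.0 - c / lam_max
m1 = -c                                  # m(1, F_e)
l1 = -lam_max - c                        # l(1, F_e) = m'(1, F_e)
d_lin = 1.0 - m1 / l1                    # Eq. (4.10), first equality
```

The mismatch between `d_lin` and `d_exact` is of order $`m(1,F_e)^2`$, as stated in Eq. (4.10).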
The equations of motion for particles in such a system are the so-called SLLOD equations,$`^{\text{[20]}}`$ $$\dot{\mathbf{q}}_i=\mathbf{p}_i/m+\mathbf{i}\gamma y_i,\dot{\mathbf{p}}_i=\mathbf{F}_i-\mathbf{i}\gamma p_{yi}-\alpha \mathbf{p}_i.$$ (5.1) Here, at not too large Reynolds numbers, the momenta $`\mathbf{p}_i`$ are peculiar momenta, $`\mathbf{i}`$ is the unit vector in the $`x`$ direction and $`\mathbf{F}_i`$ is the force exerted on particle $`i`$ by all the other particles, due to a Weeks-Chandler-Andersen pair interaction potential$`^{\text{[19]}}`$ between the particles. The value of $`\alpha `$ is determined using Gauss’ Principle of Least Constraint to keep the kinetic temperature fixed.$`^{\text{[20]}}`$ The SLLOD equations of motion given by Eq. (5.1) then model Couette flow when the Reynolds number is sufficiently small so that laminar flow is stable.$`^{\text{[20]}}`$ <sup>10</sup>The $`\alpha `$ appearing in the Eq. (5.1) is related to the phase space contraction rate $`\mathrm{\Lambda }`$ in Eq. (2.1). It is given by the relation $`\sum _{i=1}^{2dN-f}\lambda _i=\mathrm{\Lambda }=-dN\alpha +O(1)`$. For this system the dissipative flux $`J`$ is just the $`xy`$ element of the pressure tensor, $`P_{xy}`$; the transport coefficient $`L`$ is the Newtonian shear viscosity $`\eta `$; and the external field $`F_e`$ is the shear rate $`\gamma =\partial u_x/\partial y`$,$`^{\text{[20]}}`$ where $`u_x`$ is the local flow velocity in the $`x`$-direction in the system. The calculations were carried out at a reduced kinetic temperature of unity and a reduced density $`n=N/V=0.8`$. All physical quantities occurring in the Eq. (5.1) as well as the temperature and density are made dimensionless by reducing them with appropriate combinations of molecular quantities. In particular the reduction factor for $`\gamma `$ amounts to about 1 ps<sup>-1</sup> for Argon. (c.f.
) We note that for the SLLOD equations for shear flow in 2 dimensions using non autonomous Lees-Edwards periodic boundary conditions,$`^{\text{[20, 21]}}`$ $`f=5`$ (due to the conservation of kinetic energy, momentum and position of the center of mass). Figures 2 and 3 represent a direct test of our Smoothness Hypothesis. Figure 2 gives the Lyapunov spectrum for a system at a shear rate $`\gamma =0.05`$ and for a number of particles $`N=8,18,32`$. As can be easily observed, the Lyapunov exponents for $`N`$=18 and 32 just fill the “open spaces” left by the exponents for $`N`$=8 and 18, respectively. Figure 3 shows the behavior of $`\lambda _{max}(\gamma )`$ and $`\lambda _{min}(\gamma )`$ as functions of the externally applied field $`\gamma `$. Although no numerical experiment can in general confirm a mathematical hypothesis, the numerical results seem to agree very well with our Smoothness Hypothesis. We now use the Lyapunov spectrum to compute $`D_{KY,N}(\gamma )`$. This is shown in figure 4a where the $`D_{KY,N}(\gamma )`$ is plotted for $`N=32`$, $`d=2`$ and $`0<\gamma <0.5`$. As can be easily seen, although from the definition Eq. (2.5) one would generally expect to see discontinuities in the first derivatives of this function for those values of $`\gamma `$ for which $`N_{KY}`$ changes (c.f. ), the function appears very smooth for a rather large range of values of $`\gamma `$. This indicates that $`N=32`$ is already “big enough” to consider the Lyapunov spectrum as “effectively continuous”.<sup>11</sup>More precisely one can say that for $`N=32`$ the numerical errors involved in computing the Lyapunov exponents are already larger than the errors introduced by neglecting the $`O(N^{-1})`$ correction. This observation provides a more precise meaning of expressions like “sufficiently large $`N`$” or “a constant $`C`$ of order 1”. This also permits us to check the validity of Eq. (4.12), and the large $`N`$ versions of Eqs.
(3.5) and (3.6), which in this case take the form: $`{\displaystyle \frac{D_{KY,N}(\gamma )}{2dN-f}}=1-{\displaystyle \frac{\eta _N\gamma ^2}{\lambda _{max,N}2dnk_BT}}+O(\gamma ^4)+O(N^{-1})`$ (5.2) $`{\displaystyle \frac{D_{KY,N}(\gamma )}{dN-f/2}}=1-{\displaystyle \frac{\lambda _{max,N}(\gamma )}{\lambda _{min,N}(\gamma )}}+O(\gamma ^4)+O(N^{-1})`$ (5.3) $`{\displaystyle \frac{D_{KY,N}(\gamma )}{dN-f/2}}=3+{\displaystyle \frac{\lambda _{min,N}(\gamma )}{\lambda _{max,N}(\gamma )}}+O(\gamma ^4)+O(N^{-1}),`$ (5.4) respectively. Figure 4a shows that these expressions can describe the system studied over the range of fields considered and that the correction terms are small. The deviations in the values obtained using the various expressions are within the numerical error in the data. We observe here that $`D_{KY,N}`$ of Eq. (2.5) is well approximated by a fourth order even polynomial in $`\gamma `$ for fields up to $`\gamma \approx 0.5`$ and that at $`\gamma \approx 0.3`$ the terms of $`O(\gamma ^4)`$ are just $`5\%`$ of the terms of order $`O(\gamma ^2)`$, while the phase space dimension reduction is clearly greater than unity. We note that the quadratic dependence of $`D_{KY,N}`$ on $`\gamma `$ implies a linear dependence of the shear stress on $`\gamma `$,$`^{\text{[23]}}`$ so the linear regime for $`N=32`$ extends beyond the strain rates where the phase space dimension reduction is smaller than one, which runs (for $`N=32`$) up to approximately $`\gamma =0.175`$. At $`\gamma =0.3`$, $`\eta (\gamma )`$ is just $`8\%`$ smaller than $`\eta (0)`$. For completeness, in Figure 4b, we expand Figure 4a in the regime where the phase space reduction is smaller than unity. In this regime, Eqs.
(3.3), (3.5) and (3.6) are expected to be valid for the calculation of $`D_{KY,N}`$ and any deviations of the data from the value calculated using (2.5) are due to the limited numerical precision of the independent calculations of the viscosity and the Lyapunov exponents, $`O(F_e^4)`$ corrections, or to the assumption that the WCPR is obeyed. The numerical results indicate that the numerical error gives the most significant contribution to the deviations observed. The WCPR is found to be valid at least to within numerical error for the state points considered. This is consistent with previous work.$`^{\text{[13]}}`$ Using the fourth order fit to $`D_{KY,N}`$, mentioned above, i.e. $`D_{KY,N}(\gamma )=2dN-f+\frac{1}{2}D_{KY,N}^{\prime \prime }\gamma ^2+\frac{1}{24}D_{KY,N}^{\prime \prime \prime \prime }\gamma ^4`$ for the shear rate range $`[-0.2,+0.2]`$ (c.f. figure 4b), one can calculate the zero strain rate or Newtonian shear viscosity from $`\eta =-D_{KY,N}^{\prime \prime }\lambda _{max}nk_BT/2N`$. Here $`D_{KY,N}^{\prime \prime }`$ and $`D_{KY,N}^{\prime \prime \prime \prime }`$ are the second and fourth derivatives of $`D_{KY,N}(\gamma )`$, respectively, with respect to $`\gamma `$, taken at $`\gamma =0`$. This leads to a value of $`\eta =2.54\pm 0.07`$, which agrees very well within statistical uncertainties with the Newtonian viscosity directly measured in the simulation and calculated from the defining constitutive relation $`\eta (\gamma )\equiv -P_{xy}(\gamma )/\gamma `$, viz. $`\eta =2.52\pm 0.05`$ (see the + points in figure 4b). Note that although the precision of $`D_{KY,N}`$ is high (less than $`0.03\%`$ statistical error), the viscosity is related to the phase space contraction $`(2dN-f-D_{KY,N})`$ and thus calculation of the viscosity from the phase space dimension reduction involves, at small fields, a very small difference between two large numbers, resulting in a larger relative error in the viscosity.
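The extraction of $`\eta `$ from the curvature of $`D_{KY,N}(\gamma )`$ can be sketched as follows: synthetic $`D_{KY,N}`$ values are generated from the quadratic relation of Eq. (5.2) with a known viscosity (all parameter values here are illustrative, not the simulation data), an even quartic is fitted through three points, and the input viscosity is recovered from the coefficient of $`\gamma ^2`$:

```python
def solve3(M, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            fac = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= fac * A[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                      # back substitution
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in (1, 2) if c > r)) / A[r][r]
    return x

# synthetic data from Eq. (5.2): D(g) = (2dN - f) - N*eta*g^2/(lam_max*n*kB*T) + c4*g^4
d, N, f = 2, 32, 5
eta_true, lam_max, n_dens, kBT, c4 = 2.5, 2.0, 0.8, 1.0, 0.3
D = lambda g: (2*d*N - f) - N * eta_true * g**2 / (lam_max * n_dens * kBT) + c4 * g**4

gammas = [0.1, 0.15, 0.2]
a0, a2, a4 = solve3([[1.0, g**2, g**4] for g in gammas], [D(g) for g in gammas])
# eta = -D''(0) * lam_max * n * kB*T / (2N) with D''(0) = 2*a2
eta_fit = -a2 * lam_max * n_dens * kBT / N
```

Since the synthetic model is an exact even quartic, the fit through three points recovers the input viscosity to machine precision; with noisy simulation data a least-squares fit over many shear rates would be used instead.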
Furthermore, since the dimensional contraction is a quadratic function of the shear rate, it results in a more difficult calculation of the viscosity from the phase space dimension reduction than from the constitutive equation, which is linear in the shear rate at low strain rates. Figure 5 compares the values of $`D_{KY,N}`$ determined from Eqs. (5.2), (5.3) and (5.4) with those calculated from its definition in Eq. (2.5), as a function of $`N`$ for fields of $`\gamma =0.15`$ and $`\gamma =0.5`$. In Figure 5a the results for the field of $`\gamma =0.15`$ are shown, and at this strain rate the dimensional contraction will be less than unity for system sizes up to approximately $`N=50`$. Therefore, for $`N\stackrel{<}{\sim }50`$, Eq. (3.3), which contains no $`O(N^{-1})`$ corrections, will be valid, as will Eqs. (3.5) and (3.6) if the WCPR is obeyed. All the numerical results are consistent with the theory and with the assumption that the WCPR is obeyed.$`^{\text{[13]}}`$ At $`\gamma =0.15`$, the values determined using the various methods have a maximum difference of $`0.2\%`$ and are within the numerical errors at each particle number. In Figure 5b, the results for a strain rate of $`\gamma =0.5`$ are shown. In this case, a dimensional contraction of less than unity is only obtained for $`N\stackrel{<}{\sim }5`$. Again, the deviations between the results calculated using Eqs. (5.2)-(5.4) and the definition of $`D_{KY,N}`$ given by Eq. (2.5) are small (at most $`1\%`$), and within the limits of error for all particle numbers considered. This confirms that the coefficients of the $`O(N^{-1})`$ and $`O(F_e^4)`$ terms are small for this system. ## 6. CONCLUSIONS We mention here a few implications of the results presented in this paper.
* The extensivity of the phase space reduction for large $`N`$ and small fields is here, to the best of our knowledge, demonstrated for the first time, on the basis of the Smoothness Hypothesis of the Lyapunov spectrum and the extensivity of the total entropy production. * The relationships given by Eqs. (3.2), (3.3) and (5.2) also apply to systems where not all particles are thermostatted. That is, they can be applied to systems where the Gaussian thermostat operates on selected particles, say those in the boundaries, while the remaining particles evolve under Newtonian dynamics, supplemented by a dissipative field.$`^{\text{[16, 24]}}`$ We note that Eqs. (3.5), (3.6), (5.3) and (5.4) can in general only be assumed to apply to homogeneously thermostatted systems, since only for such systems can the WCPR be expected to hold. * A simple calculation shows that for a typical case, such as that of one mole of Argon at its triple point sheared at the rate of 1 Hz, the difference between the Kaplan-Yorke dimension and the phase space dimension $`(O(10^{23}))`$ is tiny, namely $`3`$. This follows from Eq. (3.3), which shows that the dimension loss, when measured in moles, is equal to the product of the total entropy production rate of the system and the reciprocal of the largest Lyapunov exponent. Since the largest Lyapunov exponent is controlled by the most unstable atomic processes, it is always large, $`\sim `$ 1 ps<sup>-1</sup>, whether for atomic, molecular or even polymeric systems. We note that this smallness of the phase space dimension reduction in irreversible processes near equilibrium could well be the reason that linear Irreversible Thermodynamics provides such a good description of nonequilibrium systems close to equilibrium. 
This is because the thermodynamic properties are insensitive to the high order distribution functions, including the full N-particle distribution function of the entire system, since they are determined by a few low order distribution functions, which “do not know” that the dimension of the steady state attractor is only a few dimensions smaller than the $`10^{23}`$ of the phase space of the system. * For the system studied, the $`O(N^{-1})`$ corrections to $`D_{KY,N}/(2dN-f)`$ that appear in Eqs. (5.2), (5.3) and (5.4) due to the smoothness hypothesis are small (see Figure 5), less than $`1.0\%`$ even for system sizes of $`N=6`$. * In conclusion, although the new relations involving $`D_{KY,N}`$ are simple consequences of Eq. (4.10), they could nevertheless be useful for applications. In particular, the equations give a simple expression for the Kaplan–Yorke dimension of the attractor of a class of many particle systems close to equilibrium, i.e. in the regime of linear dissipation near equilibrium. ## 7. ACKNOWLEDGEMENTS EGDC gratefully acknowledges the hospitality of the Research School of Chemistry of the Australian National University as well as financial support from the Australian Research Council, the Australian Defence Forces Academy and the Engineering Research Program of the Office of Basic Energy Sciences of the US Department of Energy under Grant No. DE-FG02-88-ER13847. DJS and DJE thank the Australian Research Council for support of this project.
# Self-force approach for radiation reaction ## Abstract We overview the recently proposed mode-sum regularization prescription (MSRP) for the calculation of the local radiation-reaction forces, which are crucial for the orbital evolution of binaries. We then describe some new results which were obtained using MSRP, and discuss their importance for gravitational-wave astronomy. The problem of including the radiation-reaction (RR) forces in the orbital evolution of a binary is a long-standing open problem. This problem is as yet unresolved even in the extreme mass-ratio limit, with the particle orbiting a non-rotating black hole, although remarkable progress has been obtained from various directions schutz . The conventional approach is to consider the fields in the far zone, and then use a balance argument to relate the far-zone fields to the local properties of the particle. The generic failure of such approaches hughes prompted the idea to calculate the local forces acting on the particle, including the RR forces. In the following we discuss the RR forces acting on a scalar point-like charge, but for electric or gravitational charges the basic ideas are similar. The RR force $`{}_{}{}^{\mathrm{RR}}F_{}^{\mu }`$ which acts on a point-like scalar charge $`q`$ is given by quinn-wiseman $${}_{}{}^{\mathrm{RR}}F_{}^{\mu }(\tau )=q^2\left[\frac{1}{3}\left(\ddot{u}^\mu -u^\mu \dot{u}_\nu \dot{u}^\nu \right)+\frac{1}{6}𝒱^\mu +\int _{-\infty }^{\tau }\nabla ^\mu G_\mathrm{R}\,d\tau ^{\prime }\right],$$ (1) where $`𝒱^\mu =R_\nu ^\mu u^\nu +R_{\nu \sigma }u^\nu u^\sigma u^\mu -\frac{1}{2}R_\nu ^\nu u^\mu `$, $`R_{\nu \sigma }`$ is the Ricci tensor, $`u^\mu `$ is the charge’s 4-velocity, a dot denotes a (covariant) derivative with respect to proper time $`\tau `$, $`\nabla _\mu `$ denotes covariant differentiation, and $`G_\mathrm{R}`$ is the retarded Green’s function. 
The first term is a local Abraham-Lorentz-Dirac type damping force, the second is a local force which couples to the Ricci curvature and preserves conformal invariance, and the third is the so-called “tail” term, which arises from the failure of the Huygens principle in curved spacetime. The greatest problems in the calculation of the RR forces lurk in the tail term, because it requires knowledge of $`G_\mathrm{R}`$ along the entire past world line of the charge. In addition, the self field of any particle diverges at the position of the particle, and the calculation of the RR forces has to handle the infinities connected with the self field by providing a regularization prescription. Recently, Ori proposed to approach the RR problem via mode decomposition ori-rr . Ori observed that the individual Fourier-harmonic modes of the self field are bounded, also for a point-like particle, although the sum over all modes diverges. This observation is very useful, because the calculation of the individual modes is relatively easy. This still leaves the second, harder problem of having a regularization prescription to handle the mode sum. Very recently, Ori suggested MSRP ori-unpublished , which is very successful for the few simple cases to which it has already been applied. In what follows, we overview MSRP very briefly, and describe some of the recent results which were obtained using it. The tail part of the RR force can be decomposed into stationary Teukolsky modes, and then summed over the frequencies $`\omega `$ and the azimuthal numbers $`m`$. This force then equals the limit $`ϵ\to 0^-`$ of the sum over all $`\ell `$ modes of the difference between the force sourced by the entire world line (the bare force $`{}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell }`$) and the force sourced by the half-infinite world line to the future of $`ϵ`$, where the particle has proper time $`\tau =0`$, and $`ϵ`$ is an event along the past ($`\tau <0`$) world line. 
Next, we seek a function $`h_\mu ^{\ell }`$ which is independent of $`ϵ`$, such that the series $`\sum _{\ell }({}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell }-h_\mu ^{\ell })`$ converges. Once such a function is found, the regularized self force is then given by $`{}_{}{}^{\mathrm{tail}}F_{\mu }^{}=\sum _{\ell }({}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell }-h_\mu ^{\ell })-d_\mu `$, where $`d_\mu `$ is a finite valued function. MSRP ori-unpublished then shows, from a local integration of $`G_\mathrm{R}`$, that $`h_\mu ^{\ell }=a_\mu \ell +b_\mu +c_\mu \ell ^{-1}`$. MSRP also provides an algorithm for calculating the functions $`a_\mu ,b_\mu ,c_\mu `$ and $`d_\mu `$ analytically. It has been conjectured that for all orbits $`a_\mu =0=c_\mu `$. (It was found to be true for all the special cases calculated so far.) If this is indeed the case, the tail force is given by $${}_{}{}^{\mathrm{tail}}F_{\mu }^{}=\sum _{\ell }({}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell }-b_\mu )-d_\mu .$$ (2) Note that $`b_\mu `$ is just the limit $`{}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell \to \infty }`$, and that $`{}_{}{}^{\mathrm{bare}}F_{\mu }^{\ell }`$ can be computed using the Teukolsky formalism. Alternatively, $`b_\mu `$ can also be calculated analytically using MSRP. The only remaining problem, then, is to calculate $`d_\mu `$. Even though there is an algorithmic way to calculate $`d_\mu `$, this calculation is by no means easy. Although MSRP has as yet been developed only for very simplified cases, the approach is likely to be amenable to generalization to more realistic cases. If robust, MSRP can be of the greatest importance for the calculation of templates for gravitational-wave detection. Table 1 displays the values of the MSRP parameters for the cases which have already been calculated. Note that the motion is not necessarily geodesic. 
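The structure of the regularization can be illustrated with a toy numerical sketch. Everything here is invented for illustration only: the constant `b` stands in for the analytically known $`b_\mu `$, and an $`\ell ^{-2}`$ remainder stands in for the convergent part of the physical mode sum; no actual Teukolsky modes are computed.

```python
import numpy as np

b = 0.7    # placeholder for the analytically known large-l limit b_mu
k = 1.3    # placeholder strength of the convergent remainder

def bare_mode(l):
    # Toy "bare modal force": tends to the constant b at large l,
    # with an l**-2 tail standing in for the convergent physical part.
    return b + k / l**2

lmax = 200000
l = np.arange(1.0, lmax + 1)
naive = np.sum(bare_mode(l))         # grows like b * lmax: divergent without regularization
regular = np.sum(bare_mode(l) - b)   # converges, here to k * pi**2 / 6

print(naive / lmax)                  # ~ b: the naive sum diverges linearly in the cutoff
print(regular, k * np.pi**2 / 6)
```

In the actual prescription the subtracted series would then be corrected by the analytically computed $`d_\mu `$; the sketch only shows why removing the $`\ell `$-independent piece renders the sum convergent.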
The data in Table 1 may suggest the conjecture that $`d_\mu `$ exactly equals the sum of the two local terms of Eq. (1) (or its analog for other charge types). If this hypothesis is proved to hold in general, then a truly remarkable thing happens: the full RR force can be calculated directly from Eq. (2) with $`d_\mu `$ simply omitted. It should be emphasized that presently the only support for this far-reaching hypothesis is the special cases listed in Table 1. Next, we present some results which were obtained by application of this new approach. For the case of a static, minimally-coupled, massless scalar charge in Schwarzschild, the self force is known to equal zero static-scalar . For a static electric charge $`q`$ in Schwarzschild the self force is known to be purely radial and to be given by $`f_r=q^2Mr^{-3}(1-2M/r)^{1/2}`$ smith-will . These results were recovered using MSRP in burko-cqg . Note that for these two simple static cases the solution for the modes can be obtained analytically. In general, however, this is not expected to be possible, and the solution can be obtained only numerically. The case of a scalar charge $`q`$ in uniform circular orbit around a Schwarzschild black hole was recently considered in burko-prl . The RR force was calculated numerically without any simplifying assumptions, such as far field or slow motion, and the solution is fully relativistic. Both the temporal and azimuthal (dissipative) components and the radial (conservative) component of the RR force were computed. Figure 1 displays the behavior of the radial component of the RR force for both geodesic and non-geodesic orbits. In the slow motion and far field limits the force is repulsive, and behaves like $`{}_{}{}^{\mathrm{RR}}F_{r}^{}\sim q^2M^2\mathrm{\Omega }^2/r^2.`$ However, in strong fields the force grows faster, and for fast motion it changes from repulsive to attractive. 
This expression for the radial, conservative RR force may be very important for the detection of gravitational waves, and also for gravitational-wave astronomy. The conservative radial force causes an additional precession of the periastron of the particle’s orbit, and thus induces a change in the frequency and phase of the emitted radiation burko-br . Although the radial self force has been obtained only for the simple case of a point-like scalar charge, the result indicates that one can expect a non-zero periastron precession also for a small mass. However, the magnitude of the effect will presumably depend on the type of the charge. A large-magnitude effect can cause the entire search algorithm to fail in the very detection of the signal (depending also on the size of the template library), and a small effect will introduce errors in the inferred parameters of the observed binary; namely, the waveform would fit the template of a system with parameters different from the parameters of the actual binary. Note that the conservative force depends not only on the radiative modes of the field, but also on the non-radiative modes burko-br . I thank Leor Barack and Amos Ori for discussions and for letting me use their results before their publication. This work was supported by NSF grants AST-9731698 and PHY-9900776 and by NASA grant NAG5-6840.
# Structure of Quantum Chaotic Wavefunctions: Ergodicity, Localization, and Transport ## 1 Introduction The structure of quantum wavefunctions and the closely related problem of quantum transport in classically non-integrable systems have received much attention recently from a variety of physics communities. Questions concerning the quantum behavior of systems with a generic classical limit are of course of great fundamental interest; they are also very relevant not only for nanostructure and mesoscopics experiments , but also for understanding phenomena in areas as diverse as atomic physics , molecular and chemical physics , microwave physics , nuclear physics , and optics . Combined with knowledge about spectral properties, wavefunction information can be used to address conductance curves, susceptibilities, resonance statistics, and delay times in ballistic quantum dots. Similar wavefunction and transport issues arise in other fields, in the study of resonance statistics in microwave cavities, photoionization cross sections, chemical reaction rates, spectra of Rydberg atoms, lifetimes and emission intensities for resonant optical cavities, and $`S`$-matrix properties in many systems. In the classically integrable case, quantum wavefunctions are known to be associated with the invariant tori of the corresponding classical dynamics, satisfying the Einstein–Brillouin–Keller (EBK) quantization conditions . Classical–quantum correspondence in the ergodic case is, however, more subtle. Here, the typical classical trajectory uniformly visits all of the energetically available phase space, so naively the typical quantum wavefunction should also have uniform amplitude over an entire energy hypersurface, up to the inevitable (Gaussian random) fluctuations. Such behavior follows directly from Random Matrix Theory (RMT), which has been proposed by Bohigas, Giannoni, and Schmit to be the proper description of quantum chaotic behavior in the semiclassical limit (i.e. 
in the limit where the de Broglie wavelength $`\lambda `$ becomes small compared to the system size). A similar conjecture by Berry states that a typical quantum wavefunction in this same limit should look locally like a random superposition of plane waves of fixed energy, with momenta pointing in all possible directions. RMT , a natural quantum analogue of classical ergodicity, turns out to describe well spectral properties on small energy scales (e.g. the distribution of nearest neighbor level spacings in the spectrum), but does not always provide a valid description of wavefunction structure and transport properties. Scars provide one of the most visually striking examples of strong deviation from RMT wavefunction behavior; other examples include slow ergodic systems and Sinai-type systems . In all these cases, non-RMT behavior can be quantified, semiclassically predicted, and observed. Section 2 addresses the problem of wavefunction ergodicity in broad terms. We are interested both in general questions concerning the implications of global classical properties (such as ergodicity or mixing) on quantum behavior and also in the way in which specific classical structures (such as unstable periodic orbits) may leave their imprints on the quantum eigenstates. Several examples, including that of Sinai-type systems, are discussed in Section 2.3. Sections 3 and 4 focus on the scar phenomenon and quantitative predictions. The overview format of this presentation requires us to omit most derivations; references to more detailed discussions in the literature can be found throughout. ## 2 Ergodic wavefunction structure and quantum transport ### 2.1 Coarse-grained vs. microscopic ergodicity We must distinguish between two ways of extending classical notions of ergodicity to the quantum case . 
First, we may consider wavefunction intensity integrated over a classically defined region $``$, in the regime where the wavelength $`\lambda `$ becomes small compared to the size of $``$. As can be shown rigorously , in this limit the integrated intensity approaches a constant (equal to the area of $``$ as a fraction of total phase space) for almost all wavefunctions. The result requires only long-time classical ergodicity and quantum–classical correspondence at short times; a half-page physicists’ derivation can be found in . We note that this macroscopic or “coarse–grained” type of quantum ergodicity is clearly implied by RMT, but is in fact a much weaker condition. Quantum wavefunction structure on coarse–grained scales is relevant for studying conductance and conductance fluctuations through wide, multichannel leads, for fast decay processes, and generally for analyzing open systems in the overlapping resonance regime. RMT, on the other hand, is a prediction about wavefunction uniformity at the quantum scale, i.e. on the scale of a single wavelength (or single momentum channel, or most generally at the scale of a single $`\hbar `$-sized cell in phase space). This kind of uniformity is a much stronger condition than coarse-grained ergodicity, and several examples will be given later in this section where microscopic quantum ergodicity is violated in classically ergodic systems. Wavefunction structure at the quantum scale is relevant, obviously, for transport through narrow (or tunneling) leads. ### 2.2 Measures of microscopic ergodicity First we must define quantitative measures of ergodicity or localization at the microscopic scale . Let $`|n\rangle `$ be an $`N`$-dimensional basis of eigenstates, and $`|a\rangle `$ some localized test basis, e.g. position, momentum, or phase space Gaussians, as is physically appropriate in a given system. 
For example, in discussing Anderson-type lattice localization, we may choose our test basis $`|a\rangle `$ to be the position basis, whereas in scattering problems plane waves may be a more natural choice. Sometimes we take $`|a\rangle `$ to be the eigenstates of a zeroth-order Hamiltonian $`H_0`$ or of a zeroth-order scattering matrix $`S_0`$. We are then interested in the wavefunction intensities $`P_{an}=|\langle a|n\rangle |^2`$ and their correlations. For simplicity of presentation we assume there are no conserved quantities, so classically nothing prevents each eigenstate $`|n\rangle `$ from overlapping equally with each $`|a\rangle `$ (of course, the formalism generalizes naturally to the more general case of classical symmetries, see ). We adopt the normalization convention $$\langle P_{an}\rangle =1,$$ (1) where the averaging is done over test states $`|a\rangle `$, over wavefunctions $`|n\rangle `$, or over an appropriate ensemble of systems. The first nontrivial moment of the $`P_{an}`$ distribution is the inverse participation ratio: $$\mathrm{IPR}_a=\frac{1}{N}\underset{n=1}{\overset{N}{\sum }}P_{an}^2\ge 1,$$ (2) which measures the mean squared wavefunction intensity at $`|a\rangle `$, or, alternatively, the inverse fraction of wavefunctions having significant intensity at $`|a\rangle `$. $`\mathrm{IPR}_n`$, the mean squared intensity of a single wavefunction $`|n\rangle `$ averaged over position $`|a\rangle `$, is defined analogously: it measures the inverse fraction of phase space covered by $`|n\rangle `$. A global IPR measure may also be conveniently defined: $$\mathrm{IPR}=\frac{1}{N}\underset{a=1}{\overset{N}{\sum }}\mathrm{IPR}_a=\frac{1}{N}\underset{n=1}{\overset{N}{\sum }}\mathrm{IPR}_n\ge 1,$$ (3) and provides a simple one-number measure of the degree of localization in a quantum system. An IPR of unity corresponds to perfect ergodicity (each wavefunction having equal overlaps with all test states), while $`\mathrm{IPR}=N`$ indicates the greatest possible degree of localization (the wavefunctions $`|n\rangle `$ being identical with the test states $`|a\rangle `$). 
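The definitions in Eqs. (1)-(3) are easy to exercise numerically. The sketch below (not from the original work) uses a random orthogonal matrix as a stand-in eigenbasis and checks that the two averages in Eq. (3) coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
# Random orthogonal eigenbasis standing in for the eigenstates |n> expressed
# in the test basis |a>; rows index a, columns index n.
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
P = N * U**2                        # intensities P_an, normalized so their mean is 1

ipr_a = np.sum(P**2, axis=1) / N    # localization of each test state (Eq. 2)
ipr_n = np.sum(P**2, axis=0) / N    # localization of each eigenstate
print(ipr_a.mean(), ipr_n.mean())   # equal, as in Eq. (3)
```

For this Gaussian-random stand-in basis the common value comes out close to 3, the real-overlap RMT figure discussed in Section 2.3.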
Besides being the first nontrivial moment of the intensity distribution, the IPR measure is useful because of its connection with dynamics. For example, $`\mathrm{IPR}_a`$ is proportional to the averaged long-time return probability for a particle launched initially in state $`|a\rangle `$: $$\mathrm{IPR}_a=N\underset{T\to \infty }{lim}\frac{1}{T}\underset{t=0}{\overset{T-1}{\sum }}|\langle a|e^{-i\widehat{H}t/\hbar }|a\rangle |^2.$$ (4) It is intuitively clear that enhanced long-time return probability is associated with increased localization. We may generalize $`\mathrm{IPR}_a`$ to a transport or wavefunction correlation measure between local states $`|a\rangle `$ and $`|b\rangle `$: $$P_{ab}=\underset{n=1}{\overset{N}{\sum }}\frac{P_{an}P_{bn}}{N}=\underset{T\to \infty }{lim}\frac{N}{T}\underset{t=0}{\overset{T-1}{\sum }}|\langle a|e^{-i\widehat{H}t/\hbar }|b\rangle |^2.$$ (5) Of course, the mean $`\frac{1}{N}\sum _bP_{ab}=1`$ by unitarity; the simplest nontrivial measure of transport efficiency is thus $$Q_a=\frac{1}{N}\underset{b}{\sum }P_{ab}^2\ge 1,$$ (6) which measures the inverse of the fraction of phase space accessible from $`|a\rangle `$ at long times. $`Q_a=1`$ for all $`|a\rangle `$ (the RMT result for $`N\to \infty `$) indicates perfect long-time transport, and vanishing wavefunction correlations. ### 2.3 Examples The Random Matrix Theory description is free of all dynamical information about the system under study and thus can hardly be expected to provide a correct statistical description of all quantum behavior. It serves, however, as a very useful baseline with which real quantum chaotic behavior may be compared. In RMT, the wavefunction intensities $`P_{an}`$ are squares of Gaussian random variables and so are drawn from a $`\chi ^2`$ distribution (of one degree of freedom for real overlaps $`\langle a|n\rangle `$ or two for complex overlaps), with a mean value of unity. This easily leads to $`\mathrm{IPR}_{\mathrm{RMT}}=3`$ in the real case or $`2`$ in the complex case. 
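The identity (4) between the time-averaged return probability and the eigenstate sum can be checked directly on a small random Hamiltonian. The sketch below is purely illustrative (a GOE-like matrix, not a physical system), with the spectrum rescaled so that unit time steps resolve all frequencies:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40
H = rng.standard_normal((N, N))
H = (H + H.T) / (4 * np.sqrt(N))      # symmetric, spectrum well inside (-pi, pi)
E_n, V = np.linalg.eigh(H)            # eigenbasis |n>

c = V[0, :]                           # overlaps <n|a> for the test state |a> = e_0
ipr_static = N * np.sum(c**4)         # (1/N) sum_n P_an^2 with P_an = N c_n^2

T = 20000
t = np.arange(T)
A = (c**2 * np.exp(-1j * np.outer(t, E_n))).sum(axis=1)   # <a|exp(-iHt)|a>
ipr_dynamic = N * np.mean(np.abs(A)**2)
print(ipr_static, ipr_dynamic)        # agree up to small finite-T cross terms
```

The two numbers differ only by cross terms that average away as $`T\to \infty `$, which is the content of Eq. (4).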
Notice that even in RMT, quantum fluctuations cause wavefunctions to be less ergodic than the classical expectation $`\mathrm{IPR}_{\mathrm{Clas}}=1`$. By quantum localization, however, we always mean fluctuation in the intensities in excess of what would be expected from a Gaussian random model, i.e. $`\mathrm{IPR}>\mathrm{IPR}_{\mathrm{RMT}}`$. Examples abound of such anomalous quantum behavior in classically ergodic systems, and several are described below. We also mention here that in RMT, the channel–to–channel transport efficiency $`P_{ab}`$ approaches unity for all channels $`|a\rangle `$, $`|b\rangle `$ in the $`N\to \infty `$ semiclassical limit. (i) Scarring is the anomalous enhancement or suppression of quantum wavefunction intensity on the unstable periodic orbits of the corresponding classical system. This localization behavior is perhaps surprising from a naive classical point of view, since in the time domain (see Eq. 4) it implies an enhanced long-time return probability for a wavepacket launched on an unstable periodic orbit. Paradoxically, this is in contrast with the classical behavior, where a probability distribution spreads itself evenly over the entire ergodic space at long times and retains no memory of its initial state. Quantum long-time dynamics, which of course contains phase information, thus retains a much better memory of the short-time classical behavior than does the long-time classical dynamics, for arbitrarily small values of $`\hbar `$. Specifically, for a wavepacket $`|a\rangle `$ optimally oriented on an orbit of instability exponent $`\beta `$, we have $`\mathrm{IPR}_{\mathrm{Scar}}`$ $`=`$ $`\left[{\displaystyle \underset{m=-\infty }{\overset{\infty }{\sum }}}{\displaystyle \frac{1}{\mathrm{cosh}\beta m}}\right]\mathrm{IPR}_{\mathrm{RMT}}`$ (7) $`\approx `$ $`{\displaystyle \frac{\pi }{\beta }}\mathrm{IPR}_{\mathrm{RMT}}\gg \mathrm{IPR}_{\mathrm{RMT}},`$ (8) where the limiting form is valid for weakly unstable orbits ($`\beta \ll 1`$). 
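The bracketed sum in Eq. (7) and its small-$`\beta `$ limit $`\pi /\beta `$ are easy to check numerically:

```python
import numpy as np

def scar_ipr_factor(beta, mmax=2000):
    # Bracketed sum of Eq. (7): recurrences at every period, damped by cosh(beta*m).
    m = np.arange(-mmax, mmax + 1)
    return np.sum(1.0 / np.cosh(beta * m))

for beta in (0.5, 0.2, 0.1):
    print(beta, scar_ipr_factor(beta), np.pi / beta)
```

Already at $`\beta =0.5`$ the sum and $`\pi /\beta `$ agree to high accuracy, so the IPR enhancement over RMT is essentially $`\pi /\beta `$ for any weakly unstable orbit.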
A finite fraction of order $`\beta `$ of all wavefunctions are scarred: they have intensity $`O(\beta ^{-1})`$ on the periodic orbit compared with the mean intensity, while most of the remaining wavefunctions are antiscarred, and have intensities on the orbit as small as $`\mathrm{exp}(-\pi ^2/2\beta )`$ compared with the mean. The source of this localization effect will be outlined in Section 3. The transport measure $`P_{ab}`$ is also affected by scarring: if both leads $`|a\rangle `$ and $`|b\rangle `$ are located near periodic orbits, $`P_{ab}`$ can be enhanced or suppressed depending on whether the two orbits are “in-phase” or “out-of-phase” in the range of energies considered. These long-time transport results are $`\hbar `$-independent and are obtained directly from the short-time linearized classical dynamics. (ii) The tilted wall billiard (a rectangular box with one wall tilted relative to the other three) is a classically ergodic system in which the wavefunctions (as measured by the IPR) become less and less ergodic in the classical $`\hbar \to 0`$ (or $`\lambda \to 0`$) limit . (This kind of behavior is possible because the $`\hbar \to 0`$ limit fails to commute with the infinite time $`t\to \infty `$ limit.) Specifically, for wavefunctions expressed in channel (momentum) space, we have $$\mathrm{IPR}_{\mathrm{Tilted}}\sim \lambda ^{-1/2}/\mathrm{log}\lambda ^{-1}\to \infty .$$ (9) This anomalous wavefunction behavior at the single-channel scale is entirely consistent with ergodicity on coarse-grained scales; coarse-graining over $`\lambda ^{-1}/\mathrm{log}\lambda ^{-1}`$ channels is required to retrieve the classical behavior. Quantum transport in these systems is also anomalous and is dominated by diffractive effects (due to the slowness of classical phase space exploration). (iii) As a final example we mention the Sinai billiard, a paradigm of classical chaos . This consists of a circular obstruction of diameter $`d\gg \lambda `$ placed inside a rectangle. 
Again using momentum channels for the test basis $`|a\rangle `$, we obtain $$\mathrm{IPR}_{\mathrm{Sinai}}\sim (\mathrm{log}\lambda ^{-1})/d\to \infty $$ (10) for fixed $`d`$ in the $`\lambda \to 0`$ limit. Again, we see nonergodic $`\lambda \to 0`$ wavefunctions in a classically ergodic system. The distributions of wavefunction intensities $`P_{an}`$ and IPR’s ($`\mathrm{IPR}_a`$ and $`\mathrm{IPR}_n`$, see Eq. 2) all display power-law tails (in contrast with the exponential tail prediction of RMT). Transport is similarly anomalous: for the typical channel $`|a\rangle `$, $`Q_a\sim d^{-1}`$ (see Eq. 6); i.e. at long times a given channel is coupled to only a fraction $`d\ll 1`$ of all other channels. ## 3 Scars of quantum chaotic wavefunctions ### 3.1 Phenomenology and conceptual issues Scars have been observed experimentally and numerically in a wide variety of systems. These include semiconductor heterostructures, where scars are observed to affect tunneling rates from a 2D electron gas into a quantum well , microwave cavities , the hydrogen atom in a uniform magnetic field , acoustic radiation from membranes , and the stadium billiard, for which wavefunctions as high as the millionth state in the spectrum can be numerically analyzed . Nevertheless, confusion over the definition and measures of scars (and the dearth of quantitative predictions) have until recently led some to question the existence of a scar theory, and even to doubt the survival of the phenomenon in the classical limit. Recent theoretical developments enable us to make robust, quantitative predictions about how strongly a given orbit will be scarred and how often, as a function of energy and other system parameters; real comparison with numerical and experimental data is therefore made possible. 
We also note that statistical and quantitative data, rather than anecdotal evidence (in the form of wavefunction plots), are necessary as a test of scar theory or of any other theory of a localization phenomenon, since considerable fluctuations of wavefunction intensity occur even in the context of RMT (see discussion in the first paragraph of Section 2.3). We address some common misconceptions and summarize key facts about the scar phenomenon: (i) Scarring is associated with unstable periodic orbits, not with stable or marginally stable ones, though these of course also do attract wavefunction intensity; a qualitative difference arises because of classical–quantum noncorrespondence in the unstable case (Section 2.3(i)). (ii) Weak scars are not always visible to the naked eye; scars are defined and measured according to a statistical definition. (iii) Scarring predictions are robust, valid even when the exact dynamics is not known well enough to allow individual eigenstates to be determined either quantum mechanically or semiclassically. (iv) The amount of scarring associated with individual eigenfunctions varies significantly from state to state, but in accordance with a theoretically predicted distribution. (v) If an optimally chosen phase space basis is used, the typical intensity enhancement factor as well as the full distribution of scar intensities can be given as a function of the instability exponent $`\beta `$. ### 3.2 Short-time effects on stationary properties We define the autocorrelation function for test state $`|a\rangle `$: $$A(t)=\langle a|\mathrm{exp}[-iHt]|a\rangle ,$$ (11) the Fourier transform of which is the local density of states (LDOS) $$S(E)=\underset{n}{\sum }P_{an}\delta (E-E_n).$$ (12) Then, if we know the statistical properties of the return amplitude $`A(t)`$, we also know the statistical properties of the LDOS at $`|a\rangle `$, and specifically of the wavefunction intensities $`P_{an}`$ (see e.g. Eq. 4). 
Furthermore, given information about the return amplitude $`A(t)`$ for short times only (say for $`|t|<T_0`$), we immediately obtain the LDOS envelope $`S_{\mathrm{smooth}}(E)`$, which is the energy-smoothed (on scale $`\hbar /T_0`$) version of the true line-spectrum $`S(E)`$. So large short-time recurrences in $`A(t)`$ get ‘burned into’ the spectrum. Specifically, if $`|a\rangle `$ is a minimum uncertainty wavepacket optimally placed on a periodic orbit of period $`P`$, then at integer multiples $`t=mP`$ of this period, by Gaussian integration we easily obtain $$A_{\mathrm{short}}(t)=\mathrm{exp}[imS/\hbar ]/\sqrt{\mathrm{cosh}\beta m},$$ (13) where the classical orbit has action $`S`$ and instability exponent $`\beta `$. For small $`\beta `$, the Fourier transformed spectral envelope $`S_{\mathrm{smooth}}(E)`$ has bumps of width scaling as $`\beta /P`$ and height scaling as $`\beta ^{-1}`$. After the mixing time ($`\beta ^{-1}\mathrm{log}N`$), however, probability that escaped at earlier times starts returning to the origin, leading to fluctuations in the spectral envelopes, and eventually to discrete delta-function peaks (Eq. 12). The nonlinear scar theory allows the long-time returning amplitude to be analyzed as a homoclinic orbit sum , and leads to the prediction that the spectral intensities are given by $`\chi ^2`$ variables multiplying the short-time envelope: $$P_{an}=|r_{an}|^2S_{\mathrm{smooth}}(E_n),$$ (14) where the $`r_{an}`$ follow a Gaussian random distribution of variance unity. ## 4 Quantitative predictions for scar effects ### 4.1 Wavefunction intensity statistics As a simple application, we may compute the distribution of wavefunction intensities on a periodic orbit of instability exponent $`\beta `$ . 
For complex wavefunctions, the probability to have an intensity exceeding the mean by a factor $`x`$ is given by $`\mathrm{exp}(-x)`$ in RMT; on a periodic orbit the tail of the distribution is instead given by $$P(\beta ,x)=C\beta (\beta x)^{-1/2}e^{-\beta x/Q},$$ (15) where $`C`$ and $`Q`$ are known numerical constants. Similarly, the tail of the overall intensity distribution (sampled over the entire phase space) is given by $$P(\beta ,x)=C^{}\beta \hbar (\beta x)^{-3/2}e^{-\beta x/Q},$$ (16) where $`\beta `$ is now the exponent of the least unstable periodic orbit. These two results are illustrated in Fig. 1. Finally, for an ensemble of systems, a power-law tail is obtained, in contrast with the exponential prediction of RMT, and large intensities have been observed numerically with frequency exceeding the RMT predictions by $`10^{30}`$. ### 4.2 Enhancement in probability to remain The above methods can also be applied to the case of open systems . Consider the probability to remain inside a chaotic quantum well coupled to the outside via a tunneling lead. For a lead optimally placed with respect to a periodic orbit of exponent $`\beta \ll 1`$, the long-time probability to remain is enhanced by a factor $`\mathrm{exp}(\pi ^2/2\beta )`$ (see Fig. 2), compared with the RMT expectation. (The enhancement is due to the antiscarred states (Section 2.3(i)), which have very small coupling to the lead.) Of course, the classical probability to remain is exponential and independent of lead position for a chaotic system with a narrow lead. So once again we see long-time quantum mechanics retaining an imprint of short-time classical structures which is absent from long-time classical behavior. The analysis can of course be extended to the study of conductance peak statistics in two-lead systems, where one or both leads are located near short periodic orbits (compare with the discussion of $`P_{ab}`$ statistics in Section 2.3(i)). 
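The intensity statistics of Section 4.1 follow from Eqs. (13) and (14), and the resulting heavy tail can be seen in a small simulation. The sketch below is illustrative only: the eigenenergies are taken uniform modulo the orbit period, the envelope is built directly from the $`1/\sqrt{\mathrm{cosh}\beta m}`$ recurrences, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, P_orbit, S_action = 0.2, 1.0, 0.0       # weakly unstable orbit (arbitrary values)

# Smooth LDOS envelope: Fourier sum of the cosh-damped recurrences of Eq. (13).
E = rng.uniform(-np.pi, np.pi, 100000)        # eigenenergies, uniform for the sketch
env = np.zeros_like(E)
for m in range(-100, 101):
    env += np.cos(m * (S_action - E * P_orbit)) / np.cosh(beta * m)

r2 = rng.standard_normal(E.size) ** 2          # chi-square weights of Eq. (14), real case
x = r2 * env                                   # intensities on the orbit, mean ~ 1

rmt = rng.standard_normal(E.size) ** 2         # RMT comparison (pure chi-square)
print(np.mean(x > 10.0), np.mean(rmt > 10.0))  # scarred tail is much heavier
```

A small fraction of states sitting near the envelope peaks acquires intensities of order $`\pi /\beta `$ times the mean, while states in the envelope gaps are antiscarred, exactly the scenario described in Section 2.3(i).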
## 5 Conclusions We have seen that wavefunction structure and transport in classically ergodic systems, including the paradigmatic Sinai billiard system, can differ greatly from RMT expectations, and in fact can deviate further and further from RMT in the semiclassical limit. This non-ergodic quantum behavior at the scale of single wavelengths or single quantum channels can be quantitatively and robustly predicted using only short-time classical information. This anomalous small-scale quantum behavior is also entirely consistent with ergodicity on coarse-grained scales, as studied by Schnirelman, Zelditch, and Colin de Verdiere. Scarring is a fascinating example of the influence of identifiable classical structures on stationary quantum properties (e.g. eigenstates) and long-time quantum transport in classically chaotic systems. Short unstable periodic orbits leave a strong imprint on the long-time properties of the quantum chaotic system, even though classical dynamics loses all memory of these structures at long times. Scar theory makes robust and quantitatively verified predictions about properties such as the wavefunction intensity distribution in a chaotic system, including the power-law tail observed after ensemble averaging. A lead centered on an unstable periodic orbit has been shown to produce many exponentially narrow quantum resonances, even though the decay times are very long compared with all other time scales in the problem. Enhancement factors of $`10`$ to $`100`$ in the total probability to remain are easily observed for moderate values ($`1.0`$ to $`0.5`$) of the instability exponent. ## 6 Acknowledgments It is a pleasure to thank E. J. Heller for many fruitful discussions. This work was supported by the National Science Foundation under Grant No. 66-701-7557-2-30.
no-problem/9911/gr-qc9911072.html
ar5iv
text
# SOLVING THE HORIZON PROBLEM WITH A DELAYED BIG-BANG SINGULARITY ## 1 The Cosmological Principle The standard justification of this principle rests on two arguments. One, observation-based, is the quasi-isotropy of the CMBR around us. The second, philosophical, asserts that our location in the universe is not special, and what we observe around us must thus be observed the same from everywhere. The standard conclusion is that the matter distribution of the universe is homogeneous. One can, however, reject the Copernican argument. That is what is done here, where the homogeneity assumption is replaced by another way of taking into account the observed anisotropy of the CMBR: the universe is spherically symmetric and inhomogeneous around us, who are located more or less near its center. ## 2 An inhomogeneous “delayed Big-Bang” In an expanding universe, going backward along the parameter called “cosmic time” $`t`$ means going to growing energy densities $`\rho `$ and temperatures $`T`$. Starting from our present matter-dominated age, defined by $`T\simeq 2.73`$ K, one reaches an epoch where the radiation energy density overtakes the matter one. To deal with the horizon problem, we have to compute light cones. As will be further shown, this can be done by means of light cones mostly located inside the matter-dominated area. In this first approach, the Tolman-Bondi solution for spherically symmetric dust (equation of state $`p=0`$) models is therefore retained. ### 2.1 Class of Tolman-Bondi models retained As the observed universe does not present appreciable spatial curvature, it can be approximated by a flat Tolman-Bondi model.
In comoving coordinates ($`r,\theta ,\phi `$) and proper time $`t`$, its Bondi line-element is: $$ds^2=-c^2dt^2+R^{\prime 2}(r,t)dr^2+R^2(r,t)(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2)$$ (1) The arbitrary “mass” function $`M(r)`$ can be used to define the radial coordinate $`r`$: $$M(r)=M_0r^3,\qquad M_0=const.$$ (2) Einstein’s equations thus give: $$R(r,t)=\left(\frac{9GM_0}{2}\right)^{1/3}r[t-t_0(r)]^{2/3}$$ (3) This model presents two singularity surfaces: $`t=t_0(r)`$, interpreted as the Big-Bang surface, for which $`R(r,t)=0`$; and $`t=t_0(r)+\frac{2}{3}rt_0^{\prime }(r)`$, usually referred to as the “shell-crossing surface”, for which $`R^{\prime }(r,t)=0`$. With the choice $`t_0(r=0)=0`$, increasing $`t`$ means going from the past to the future. From Einstein’s equations, the energy density is: $$\rho (r,t)=\frac{1}{2\pi G[3t-3t_0(r)-2rt_0^{\prime }(r)][t-t_0(r)]}$$ (4) ### 2.2 Singularities Expression (4) above implies that the energy density goes to infinity not only on the Big-Bang surface, but also on the shell-crossing surface. A rapid calculation shows that the invariant scalar curvature also takes infinite values on these surfaces. They can thus both be considered as physical singularities. As will be seen further on, we here need $`t_0^{\prime }(r)>0`$. In this case, the shell-crossing surface is situated above the Big-Bang in the $`(r,t)`$ plane. For cosmological applications, shell-crossing is not an actual problem. As the energy density increases while reaching the neighbourhood of the shell-crossing surface from higher values of $`t`$, radiation becomes the dominant component, pressure can no longer be neglected, and Tolman-Bondi models no longer hold. Furthermore, in the following, the light cones of interest never leave the $`t>t_0(r)+\frac{2}{3}rt_0^{\prime }(r)`$ region.
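The divergence of the energy density on both surfaces can be spot-checked numerically. The sketch below is illustrative only: it assumes units with $`G=1`$, an example Big-Bang function $`t_0(r)=br^n`$, and reads expression (4) with the conventional minus signs of the Tolman-Bondi dust solution.

```python
import math

G = 1.0          # illustrative units
b, n = 1.0, 2.0  # example Big-Bang function t0(r) = b * r**n

def t0(r):   return b * r**n
def t0p(r):  return b * n * r**(n - 1)   # t0'(r)

def rho(r, t):
    """Energy density, expression (4), for t above the shell-crossing surface."""
    return 1.0 / (2.0 * math.pi * G
                  * (3.0*t - 3.0*t0(r) - 2.0*r*t0p(r))
                  * (t - t0(r)))

r = 1.0
t_bb = t0(r)                           # Big-Bang surface
t_sc = t0(r) + (2.0/3.0) * r * t0p(r)  # shell-crossing surface
print(t_bb, t_sc)  # the density blows up on both surfaces
```

Approaching the shell-crossing surface from above makes the first bracket in (4) vanish, so the density grows without bound, as asserted in the text.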
### 2.3 Definition of the temperature To provide local thermodynamical equilibrium, the assumption made is that the characteristic scale of the $`\rho `$ inhomogeneity is much larger than the characteristic length of the photon-baryon interaction. The local specific entropy is defined as: $$S(r)\equiv \frac{k_Bn_\gamma (r,t)m_b}{\rho (r,t)}$$ (5) and the local temperature $`T`$ by: $$n_\gamma =a_nT^3$$ (6) One thus obtains, from expression (4), an equation for the $`T=const.`$ surfaces of interest: $$t=t_0(r)+\frac{r}{3}t_0^{\prime }(r)+\frac{1}{3}\sqrt{r^2t_0^{\prime 2}(r)+\frac{3S(r)}{2\pi Gk_Ba_nm_bT^3}}$$ (7) ### 2.4 The nearly centered-Earth assumption In this first approach, the Earth is assumed to be situated sufficiently close to the center of the universe so as to justify the approximation $`r_p=0`$ (the subscript “p” referring to “here and now”). Adding to the specifications of the $`t_0(r)`$ function: $`rt_0^{\prime }|_{r=0}=0`$, one can show, from equation (7), that, in the vicinity of $`r=0`$, the evolution scenario of the universe approximately reproduces the hot Big-Bang one. The centered-Earth assumption is not an inevitable feature of the model. In a work to be published , it is shown that the dipole and quadrupole moments of the CMBR temperature anisotropies can be reproduced with an inhomogeneous Big-Bang function whose gradient can be chosen all the smaller as the location of the observer is removed from the center of the universe. Other work is in progress to show that the horizon problem can be solved with an observer located off this center. ## 3 Solving the horizon problem Light travels from the last scattering surface to a present local observer on a light cone going from ($`r_p=0`$, $`t_p`$) to a 2-sphere ($`r_{es}`$, $`t_{ls}`$) on the last scattering 3-sphere, defined by $`T=4000`$ K.
To solve the horizon problem, it is sufficient to show that this 2-sphere can be contained inside the future light cone of any ($`r=0`$, $`t>0`$) point of space-time, thus restoring causality before reaching the singularity. Consider a $`t_0(r)`$ function such that the shell-crossing surface is situated above the Big-Bang surface and monotonously increasing with $`r`$. The equation for the radial null geodesics is: $$\frac{dt}{dr}=\pm \frac{1}{3c}\left(\frac{9GM_0}{2}\right)^{1/3}\frac{3t-3t_0(r)-2rt_0^{\prime }(r)}{[t-t_0(r)]^{1/3}}$$ (8) These curves possess an horizontal tangent on and only on the shell-crossing surface, where: $$3t-3t_0(r)-2rt_0^{\prime }(r)=0$$ (9) The past light cone $`t(r)`$ from ($`r_p`$, $`t_p`$) verifies equation (8) with the minus sign. As it is situated above the Big-Bang and shell-crossing surfaces, $`3t-3t_0(r)-2rt_0^{\prime }(r)`$ and $`t-t_0(r)`$ remain positive. $`t(r)`$ is thus a strictly decreasing function of $`r`$ and crosses the shell-crossing surface at a finite point where its derivative goes to zero. On its way, it crosses in turn each $`T=const.`$ surface at a finite point, which is labelled ($`r_{ls1}`$, $`t_{ls1}`$) on the last scattering. Now consider a backward null radial geodesic from any point P located above the shell-crossing surface, solution of equation (8) with the plus sign. Its derivative remains positive as long as it does not reach the shell-crossing surface. Since this surface is strictly increasing with $`r`$, it cannot be horizontally crossed from upper values of $`t`$ by a strictly increasing curve. This geodesic therefore reaches $`r=0`$, without crossing the shell-crossing surface, at $`t_c>0`$. This holds for every light cone issued from any point on the last scattering surface. Every point on this sphere can thus be causally connected, and, in particular, the point ($`r_{ls1}`$, $`t_{ls1}`$), representing the CMBR as seen from the observer.
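The behavior of the past light cone can be illustrated by integrating equation (8) with the minus sign. The sketch below is purely illustrative: it sets $`c=1`$, absorbs the constant $`(9GM_0/2)^{1/3}`$ into $`A=1`$, assumes an example Big-Bang function $`t_0(r)=br^n`$, and uses a simple Euler step.

```python
c = 1.0
A = 1.0           # stands in for (9*G*M0/2)**(1/3); illustrative units
b, n = 0.1, 2.0   # example Big-Bang function t0(r) = b * r**n

def t0(r):  return b * r**n
def t0p(r): return b * n * r**(n - 1)

def dtdr(r, t, sign):
    """Radial null geodesic, equation (8)."""
    num = 3.0*t - 3.0*t0(r) - 2.0*r*t0p(r)
    return sign * (A / (3.0*c)) * num / (t - t0(r))**(1.0/3.0)

# Past light cone from (r = 0, t_p = 10): integrate with the minus sign
# until the shell-crossing surface (where the bracket vanishes) is reached.
t, r, dr = 10.0, 0.0, 1e-4
while r < 5.0 and 3.0*t - 3.0*t0(r) - 2.0*r*t0p(r) > 1e-3:
    t += dtdr(r, t, -1.0) * dr
    r += dr
print(r, t)  # the cone flattens out (dt/dr -> 0) as it nears shell crossing
```

As the text argues, the cone decreases strictly with $`r`$ while staying above the Big-Bang surface, and its slope vanishes as it reaches the shell-crossing surface at a finite radius.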
## 4 Examples of appropriate Big-Bang functions The conditions previously imposed upon the Big-Bang function $`t_0(r)`$ can be summarized as: $`t_0(r=0)`$ $`=`$ $`0`$, $`t_0^{\prime }(r)`$ $`>`$ $`0\;\text{for all}\;r`$, $`5t_0^{\prime }(r)+2rt_0^{\prime \prime }(r)`$ $`>`$ $`0\;\text{for all}\;r`$, $`rt_0^{\prime }|_{r=0}`$ $`=`$ $`0`$. A class of functions fulfilling these conditions is: $$t_0(r)=br^n,\qquad b>0,\;n>0$$ (10) In these models, matter is the dominant component for, at least, 99% of the cosmic time $`t`$ elapsed between the last scattering and the shell-crossing surfaces, where the pertinent light cones are located. The dust approximation can therefore be considered as correct. ## 5 Conclusion In this preliminary work, it has been shown that the horizon problem can be solved with an inhomogeneous singularity and no inflationary phase. To deal with the other major problems of standard cosmology, it can be stressed that: - the flatness and cosmological constant problems both proceed from Friedmann’s equations and are thus irrelevant in an inhomogeneous model. - the monopoles problem can be considered as a particle physics problem. - perturbations on the Big-Bang surface can produce density fluctuations at all scales which could account for the origin of structure formation. Observational data at large redshift will have to be analysed in the formalism of inhomogeneous cosmology to discriminate between the standard homogeneous-plus-inflation paradigm and the new solution proposed here. Such an analysis will yield limits upon the model parameters. ## References
no-problem/9911/hep-ph9911223.html
ar5iv
text
# 1 Introduction ## 1 Introduction In a recent interesting paper the quantities appearing in the effective action that we had proposed for the description of the massless modes in the color-flavor-locking (CFL) phase of QCD have been evaluated in the perturbative limit of very high density. Furthermore, an evaluation of the pseudoscalar masses generated from nonvanishing quark masses in the QCD lagrangian has been given. First let us discuss the expected scaling of the pseudoscalar masses with the quark masses in the new phase. We know the standard arguments at zero density giving the square mass of the pseudoscalar as proportional to the quark mass times the condensate $`\overline{\psi }\psi `$. However, in the CFL phase, for massless quarks, there is a discrete symmetry $`Z_{2L}`$ (a change of sign for the left-handed quarks) which requires the condensate to be zero . When we add the symmetry breaking induced by the quark masses the previous statement is no longer valid. Nevertheless we can still think of the condensate as vanishing with the quark masses. This suggests that higher-dimension operators should be considered, giving a quadratic dependence of the pseudogoldstone masses on the quark masses. However, also the mechanism discussed above, that is the usual Adler-Dashen theory, when implemented with the idea of a condensate vanishing for massless quarks, can give the quadratic dependence. In fact, in ref. it has been shown that starting from the mass term in the QCD lagrangian $$\mathcal{L}_m=\psi _L^{\dagger }M\psi _R+\mathrm{h}.\mathrm{c}.$$ (1) and considering a mass matrix proportional to the identity matrix $`M=m1_3`$, one gets at very high density a shift of the vacuum energy given by $$\mathrm{\Delta }_{\mathrm{vac}}=\frac{9\mu ^2}{8\pi }m^2+\mathrm{h}.\mathrm{c}.$$ (2) with $`\mu `$ the chemical potential (the calculation is made perturbatively since we are considering the limit of very high density).
This shows that for such a mass matrix the condensate is given by $$\sum _{u,d,s}\langle \psi _R^{\dagger }\psi _L\rangle =\frac{9\mu ^2}{8\pi }m$$ (3) Therefore, at the level of the effective action of the pseudogoldstones, one can expect that there are different contributions to the masses: one contribution from the usual Adler-Dashen mechanism, plus others coming from higher-dimension operators. One higher-dimension operator is explicitly constructed in ref. . An important role is played by the $`U(1)_A`$ symmetry. Such a symmetry is effectively restored at large values of $`\mu `$. In this note we will show that there is another term in the effective lagrangian, which gives rise to a pseudoscalar mass pattern of the usual hierarchical type, and still respects all the needed symmetries. Most importantly, this new term solves a problem present in the paper . In fact the operator considered in turns out to imply that all pseudoscalar masses continue to vanish for $`m_u=m_d=0`$ even when $`m_s\ne 0`$. This result would be very puzzling: a symmetry breaking term ($`m_s\ne 0`$) in the fundamental lagrangian would have no effect in the effective action. We cannot expect that, when an explicit strange quark mass is introduced, all the goldstones of the fully massless theory remain massless. We must expect that those which are no longer goldstones acquire a mass. As we shall see, the additional invariant that we will construct gives pseudoscalar masses which all vanish only when the strange quark mass vanishes. This avoids the puzzle. ## 2 The effective theory We shortly review the effective action for the color-flavor locked (CFL) phase of QCD introduced in .
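The scalings in Eqs. (2) and (3) above — a vacuum-energy shift quadratic in the quark mass, driven by a condensate that vanishes linearly with it — can be spot-checked with a trivial numerical sketch (the value of $`\mu `$ is an arbitrary example and units are ignored):

```python
import math

def delta_vac(mu, m):
    """Vacuum-energy shift of Eq. (2) for M = m * 1_3 (units ignored)."""
    return 9.0 * mu**2 * m**2 / (8.0 * math.pi)

def condensate(mu, m):
    """Flavour-summed condensate of Eq. (3)."""
    return 9.0 * mu**2 * m / (8.0 * math.pi)

mu = 500.0  # arbitrary example chemical potential
print(delta_vac(mu, 2.0) / delta_vac(mu, 1.0))    # doubling m quadruples Delta_vac
print(condensate(mu, 2.0) / condensate(mu, 1.0))  # doubling m doubles the condensate
```

The quadratic dependence of the vacuum-energy shift on the quark mass is exactly the behavior the pseudogoldstone masses inherit.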
The symmetry breaking pattern is $$G=SU(3)_c\otimes SU(3)_L\otimes SU(3)_R\otimes U(1)\to H=SU(3)_{c+L+R}$$ (4) from dynamical condensates $$\langle \psi _{ai}^L\psi _{bj}^L\rangle =-\langle \psi _{ai}^R\psi _{bj}^R\rangle =\gamma _1\delta _{ai}\delta _{bj}+\gamma _2\delta _{aj}\delta _{bi}$$ (5) ($`\psi _{ai}^{L(R)}`$ are Weyl spinors and spinor indices are summed). The indices $`a,b`$ and $`i,j`$ refer to $`SU(3)_c`$ and to $`SU(3)_L`$ (or $`SU(3)_R`$) respectively. To be precise, $`H`$ contains an additional $`Z_2`$, which plays an essential role. The effective lagrangian describes the CFL phase for momenta smaller than the energy gap (existing estimates range between 10 and 100 $`MeV`$). Before gauging $`SU(3)_c`$ we need 17 goldstones. We define coset matrix fields $`X`$ ($`Y`$) transforming under $`G`$ as a left-handed (right-handed) quark. We require $$X\to g_cXg_L^T,\qquad Y\to g_cYg_R^T$$ (6) with $`g_c\in SU(3)_c`$, $`g_L\in SU(3)_L`$, $`g_R\in SU(3)_R`$. $`X`$ and $`Y`$ are $`SU(3)`$ matrices, breaking respectively $`SU(3)_c\otimes SU(3)_L`$ and $`SU(3)_c\otimes SU(3)_R`$. An additional goldstone is related to the breaking of baryon number. It corresponds to a $`U(1)`$ factor transforming under $`G`$ as $$U\to g_{U(1)}U,\qquad g_{U(1)}\in U(1)$$ (7) We define the anti-hermitian and traceless currents $$J_X^\mu =X\partial ^\mu X^{\dagger },\qquad J_Y^\mu =Y\partial ^\mu Y^{\dagger },\qquad J_\varphi ^\mu =U\partial ^\mu U^{\dagger }$$ (8) which transform under $`G`$ as $$J_X^\mu \to g_cJ_X^\mu g_c^{\dagger },\qquad J_Y^\mu \to g_cJ_Y^\mu g_c^{\dagger },\qquad J_\varphi ^\mu \to J_\varphi ^\mu $$ (9) At finite density Lorentz invariance is broken. Barring WZW terms (see ref.
for a brief discussion), the most general $`O(3)`$ symmetric lagrangian invariant under $`G`$, with at most two derivatives, is $`\mathcal{L}`$ $`=`$ $`-{\displaystyle \frac{F_T^2}{4}}Tr[(J_X^0-J_Y^0)^2]-\alpha _T{\displaystyle \frac{F_T^2}{4}}Tr[(J_X^0+J_Y^0)^2]-{\displaystyle \frac{f_T^2}{2}}(J_\varphi ^0)^2`$ (10) $`+`$ $`{\displaystyle \frac{F_S^2}{4}}Tr[(\vec{J}_X-\vec{J}_Y)^2]+\alpha _S{\displaystyle \frac{F_S^2}{4}}Tr[(\vec{J}_X+\vec{J}_Y)^2]+{\displaystyle \frac{f_S^2}{2}}(\vec{J}_\varphi )^2`$ We have required invariance under parity (symmetry under $`X\leftrightarrow Y`$). With the parametrization $$X=e^{i\tilde{\mathrm{\Pi }}_X^aT_a},\qquad Y=e^{i\tilde{\mathrm{\Pi }}_Y^aT_a},\qquad U=e^{i\tilde{\varphi }},\qquad a=1,\mathrm{\ldots },8$$ (11) (the $`SU(3)`$ matrices $`T_a`$ satisfy $`Tr[T_aT_b]=\frac{1}{2}\delta _{ab}`$) and by defining $$\mathrm{\Pi }_X=\sqrt{\alpha _T}\frac{F_T}{2}(\tilde{\mathrm{\Pi }}_X+\tilde{\mathrm{\Pi }}_Y),\qquad \mathrm{\Pi }_Y=\frac{F_T}{2}(\tilde{\mathrm{\Pi }}_X-\tilde{\mathrm{\Pi }}_Y),\qquad \varphi =f_T\tilde{\varphi }$$ (12) the kinetic term is $$\mathcal{L}_{\mathrm{kin}}=\frac{1}{2}(\dot{\mathrm{\Pi }}_X^a)^2+\frac{1}{2}(\dot{\mathrm{\Pi }}_Y^a)^2+\frac{1}{2}\dot{\varphi }^2-\frac{v_X^2}{2}|\vec{\nabla }\mathrm{\Pi }_X^a|^2-\frac{v_Y^2}{2}|\vec{\nabla }\mathrm{\Pi }_Y^a|^2-\frac{v_\varphi ^2}{2}|\vec{\nabla }\varphi |^2$$ (13) where $$v_X^2=\frac{\alpha _SF_S^2}{\alpha _TF_T^2},\qquad v_Y^2=\frac{F_S^2}{F_T^2},\qquad v_\varphi ^2=\frac{f_S^2}{f_T^2}$$ (14) The three types of goldstones satisfy linear dispersion relations $`E=vp`$, with different velocities. For local $`SU(3)_c`$ invariance we make the derivatives covariant $$\partial _\mu X\to D_\mu X=\partial _\mu X-g_\mu X,\qquad \partial _\mu Y\to D_\mu Y=\partial _\mu Y-g_\mu Y,\qquad g_\mu \in \mathrm{Lie}\,SU(3)_c$$ (15) The currents become $$J_X^\mu =X\partial ^\mu X^{\dagger }+g^\mu ,\qquad J_Y^\mu =Y\partial ^\mu Y^{\dagger }+g^\mu $$ (16) giving for the lagrangian $`\mathcal{L}`$ $`=`$ $`-{\displaystyle \frac{F_T^2}{4}}Tr[(X\partial ^0X^{\dagger }-Y\partial ^0Y^{\dagger })^2]-\alpha _T{\displaystyle \frac{F_T^2}{4}}Tr[(X\partial ^0X^{\dagger }+Y\partial ^0Y^{\dagger }+2g^0)^2]`$ (17) $`-`$ $`{\displaystyle \frac{f_T^2}{2}}(J_\varphi ^0)^2+\mathrm{spatial}\;\mathrm{terms}\;\mathrm{and}\;\mathrm{kinetic}\;\mathrm{part}\;\mathrm{for}\;g^\mu `$ We introduce $$g_\mu =ig_s\frac{T_a}{2}g_\mu ^a$$ (18) where $`g_s`$ is the QCD coupling constant. The gluon field becomes massive. This can be easily seen in the unitary gauge $`X=Y^{\dagger }`$, where $$\tilde{\mathrm{\Pi }}_X=-\tilde{\mathrm{\Pi }}_Y$$ (19) or $$\mathrm{\Pi }_X=0,\qquad \mathrm{\Pi }_Y=F_T\tilde{\mathrm{\Pi }}_X$$ (20) The gluon mass (for the expected velocities of order one) is $$m_g^2=\alpha _Tg_s^2\frac{F_T^2}{4}$$ (21) The $`X\leftrightarrow Y`$ symmetry, in this gauge, implies $`\mathrm{\Pi }_Y\to -\mathrm{\Pi }_Y`$. The gluon kinetic term can be neglected for energies much smaller than the gluon mass. The lagrangian (17) is then the hidden gauge symmetry version of the chiral QCD lagrangian (except for the field $`\varphi `$). In fact, in this limit, the gluon field becomes auxiliary and can be eliminated through its equation of motion $$g_\mu =-\frac{1}{2}(X\partial _\mu X^{\dagger }+Y\partial _\mu Y^{\dagger })$$ (22) obtaining $$\mathcal{L}=-\frac{F_T^2}{4}Tr[(X\partial ^0X^{\dagger }-Y\partial ^0Y^{\dagger })^2]-\frac{f_T^2}{2}(J_\varphi ^0)^2+\mathrm{spatial}\;\mathrm{terms}$$ (23) or $$\mathcal{L}=\frac{F_T^2}{4}\left(Tr[\dot{\mathrm{\Sigma }}\dot{\mathrm{\Sigma }}^{\dagger }]-v_Y^2Tr[\vec{\nabla }\mathrm{\Sigma }\cdot \vec{\nabla }\mathrm{\Sigma }^{\dagger }]\right)-\frac{f_T^2}{2}\left((J_\varphi ^0)^2-v_\varphi ^2|\vec{J}_\varphi |^2\right)$$ (24) where $`\mathrm{\Sigma }=Y^{\dagger }X`$ transforms under the group $`SU(3)_c\otimes SU(3)_L\otimes SU(3)_R`$ as $`\mathrm{\Sigma }\to g_R^{\ast }\mathrm{\Sigma }g_L^T`$. The goldstone $`\varphi `$ could be interpreted as a particularly light dibaryon state, $`(udsuds)`$, considered by R. Jaffe . After the breaking of color, one has the massless photon and 9 physical goldstones transforming as $`1+8`$ under the unbroken $`SU(3)`$. In our previous paper we did not consider the pseudogoldstone mode associated to the $`U(1)_A`$ symmetry since it is not massless.
However, in the present context it plays an important role. Its kinetic term can be introduced in complete analogy with what we have done for the mode associated to the vector $`U(1)`$. The contribution to the lagrangian is given by $$-\frac{f_A^2}{2}\left((J_\theta ^0)^2-v_\theta ^2|\vec{J}_\theta |^2\right)$$ (25) where $$J_\theta ^\mu =V\partial ^\mu V^{\dagger },\qquad V=e^{i\tilde{\theta }}$$ (26) and, for the correct normalization of the kinetic term, one has to define the field $$\theta =f_A\tilde{\theta }$$ (27) Under a $`U(1)_A`$ transformation all the fields are invariant except for $$V\to e^{i\alpha }V,\qquad e^{i\alpha }\in U(1)_A$$ (28) An equivalent description can be obtained by including the $`U(1)`$ fields into $`X`$ and $`Y`$ fields belonging to $`U(3)`$, but we will stay with the formalism just described. ## 3 Mass terms We are now in the position of discussing the mass terms for the pseudogoldstones. The quark mass term in the QCD lagrangian is given by $$\mathcal{L}_m=\psi _L^{\dagger }M\psi _R+\mathrm{h}.\mathrm{c}.$$ (29) By thinking of $`M`$ as a set of external fields, the QCD lagrangian preserves all its symmetries if we require that under the global group $`SU(3)_L\otimes SU(3)_R\otimes U(1)\otimes U(1)_A`$ the fields $`M`$ transform as $$M\to e^{2i\alpha }g_LMg_R^{\dagger }$$ (30) As said before, we insist on keeping the $`U(1)_A`$ (in the CFL phase this symmetry is restored at very high density). The effective lagrangian must respect this symmetry extended to the external fields. In ref. it has been observed that an invariant term is given by $$\mathrm{\Delta }_1=c\,det(M)Tr(M^{-1\,T}\mathrm{\Sigma })V^4+\mathrm{h}.\mathrm{c}.$$ (31) (this term differs in form from the one of ref. because, as explained above, our $`\mathrm{\Sigma }`$ field is invariant under $`U(1)_A`$). The main observation of the present paper is to notice that at order $`M^2`$ another invariant term exists. It reproduces the mass mechanism of the usual low density chirally broken phase.
In the present case, however, the condensate has a dependence on the quark masses. In fact, we notice that the quantity $$\mathrm{\Delta }_2=f(Tr(M^{\dagger }M))Tr[M^{\dagger }\mathrm{\Sigma }]V^2+\mathrm{h}.\mathrm{c}.$$ (32) is invariant, for any choice of the function $`f`$, since the transformation of $`V`$ (see eq. (28)) under $`U(1)_A`$ compensates the change in $`M`$. For an evaluation at high density of the constant $`c`$ and of the function $`f`$ we make use of the results of ref. for the region of very high densities. We can then match the shift of the vacuum energy in QCD to that of the effective theory. For a mass matrix proportional to the unit matrix, $`M=m1_3`$, and for $`M=\mathrm{diag}(0,0,m_s)`$, one has in QCD, respectively, $$\mathrm{\Delta }_{\mathrm{vac}}=\frac{9\mu ^2}{8\pi }m^2,\qquad \mathrm{\Delta }_{\mathrm{vac}}=\frac{3\mu ^2}{8\pi }m_s^2$$ (33) where $`\mu `$ is the chemical potential. Since $`\mathrm{\Delta }_1`$ vanishes for $`m_d=m_u=0`$, it follows that $$f(Tr(M^{\dagger }M))=\frac{3\mu ^2}{8\pi }\sqrt{Tr(M^{\dagger }M)}$$ (34) Then, choosing the case $`M=m1_3`$, we get $$c=\frac{9(1-\sqrt{3})}{8\pi }\mu ^2$$ (35) The invariant $`\mathrm{\Delta }_2`$ in the effective lagrangian solves the puzzle discussed in the introduction. The pseudogoldstone masses no longer vanish for $`m_u=m_d=0`$ and for a massive strange quark. Notice that the invariant $`\mathrm{\Delta }_2`$ gives a mass pattern identical to the usual one at zero density, except for the factor arising from the function $`f`$, that is $`\sqrt{m_u^2+m_d^2+m_s^2}`$. This is what one would expect from an Adler-Dashen type mechanism, except that in the present case one has to think of a condensate which vanishes with the quark masses due to the $`Z_2`$ invariance, as discussed in the Introduction. The invariant $`\mathrm{\Delta }_2`$ also gives a contribution to the mass of the pseudogoldstone $`\theta `$ associated to the axial symmetry and to its mixings with $`\pi ^0`$ and $`\eta `$.
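A minimal numerical sketch of the role of $`\mathrm{\Delta }_2`$ (illustrative only: it assumes $`\mathrm{\Sigma }=1`$ and $`V=1`$ at the vacuum, a real diagonal $`M`$, and an arbitrary example value of $`\mu `$) shows that its vacuum contribution is nonzero for $`m_u=m_d=0`$, $`m_s\ne 0`$, and scales quadratically with the quark mass:

```python
import math

def f_of_M(mu, m_u, m_d, m_s):
    """f(Tr(M^dagger M)) of Eq. (34): (3 mu^2 / 8 pi) sqrt(m_u^2+m_d^2+m_s^2)."""
    return 3.0 * mu**2 / (8.0 * math.pi) * math.sqrt(m_u**2 + m_d**2 + m_s**2)

def delta2_vacuum(mu, m_u, m_d, m_s):
    """Vacuum value of Delta_2 for real diagonal M with Sigma = 1, V = 1:
    f(Tr(M^dagger M)) * (m_u + m_d + m_s)."""
    return f_of_M(mu, m_u, m_d, m_s) * (m_u + m_d + m_s)

mu = 500.0  # arbitrary example value
print(delta2_vacuum(mu, 0.0, 0.0, 150.0) > 0.0)  # True: nonzero with m_u = m_d = 0
```

This is exactly the resolution of the puzzle: unlike $`\mathrm{\Delta }_1`$, which is proportional to $`det(M)`$ and so vanishes whenever any quark mass vanishes, $`\mathrm{\Delta }_2`$ survives when only the strange quark is massive.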
On the other hand it does not contribute to the mass of the pseudogoldstone $`\varphi `$ associated to the baryonic number. We notice that in the phenomenological situation, where $`m_u,m_d\ll m_s`$, the invariant $`\mathrm{\Delta }_2`$ dominates the pseudogoldstone masses. Using the results of ref. one finds that the pseudogoldstone masses are completely determined by the quark masses. In fact, in our notations, from ref. we get $$F_T^2=(21-8\mathrm{ln}2)\frac{\mu ^2}{9\pi ^2},f_T^2=\frac{9\mu ^2}{\pi ^2}f_A^2$$ (36) Since the pseudogoldstone masses scale as the inverse of the square of the decay constants, one finds that they do not depend on the chemical potential. As a last observation we notice that for $`m_u=m_d=0`$ and $`m_s\ne 0`$, the mass pattern is the same as in the chiral phase (except for a multiplicative constant). Therefore the pions $`\pi ^\pm `$ and $`\pi ^0`$ remain massless, whereas all the other pseudogoldstones become massive (excluding the $`\varphi `$ particle). Then, as expected, there are 3 massless Goldstone bosons coming from the breaking of $$SU(3)_c\otimes SU(2)_L\otimes SU(2)_R\to SU(2)$$ (37) (notice that 8 goldstones are eaten up by the gluons). Since it has been shown that for two flavors at high density the symmetry breaking pattern differs from the one in eq. (37) , we see that some interesting phenomenon has to happen in the transition between the cases of two and three flavors. ## 4 Conclusions In this note we have discussed the pattern of the masses of the pseudogoldstones of the color-flavor-locked phase of QCD at high density when quark masses are considered. We have examined the different mass terms that have to be introduced in the effective lagrangian. A previously puzzling feature, that all goldstones remain massless for $`m_u=m_d=0`$ and $`m_s\ne 0`$, was due to the neglect of a mass invariant, whose structure recalls, in a properly modified way, the Adler-Dashen mechanism of the chirally broken phase at low densities.
The pseudogoldstone mass pattern is then completely understood. Acknowledgements We would like to thank Krishna Rajagopal for an enlightening exchange of correspondence on the subject.
no-problem/9911/hep-lat9911039.html
ar5iv
text
# UTCCP-P-76 Heavy-light spectrum and decay constant from NRQCD with two flavors of dynamical quarks (talk presented by A. Ali Khan) ## 1 Introduction The decay constant $`f_B`$ is being studied extensively on the lattice because of its importance for the determination of CKM matrix elements. The spectrum of excited $`B`$ mesons and $`b`$ baryons is being measured in present experiments, whereas there exist only a few lattice results on this subject. In this article we report on our study of $`B`$ mesons in two-flavor full QCD, employing the NRQCD action for the heavy quark and a tadpole-improved clover action for the light quark. The dynamical configurations have been generated using the same light quark action and an RG-improved gauge action with a plaquette and a rectangular term. Details on our full QCD configurations can be found in Refs. . A parallel study of $`B`$ mesons using the clover action for the heavy quark is presented in Ref. . ## 2 Simulation Details We present results for two sets of dynamical lattices corresponding to the heaviest and the lightest sea quark in our configuration set at $`\beta =1.95`$. The results are compared to those from quenched lattices generated with the same RG-improved gauge action at $`\beta =2.187`$, the lattice spacing from the string tension being matched to the dynamical lattice with $`\kappa _{sea}=0.1375`$. Some details on these runs are given in Table 1. We take 5 $`\kappa `$ values for the light valence quark, corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\simeq 0.8`$–$`0.5`$. The strange quark mass $`m_s`$ is fixed using the $`K`$ and the $`\varphi `$ meson. Our results for the $`B_s`$ meson are obtained with $`m_s`$ from the $`K`$, and the $`\varphi `$ is used to estimate the systematic error. For the heavy quarks, we use NRQCD at $`O(1/M)`$ with a symmetric evolution equation as defined in . We employ 5 bare heavy quark masses, covering a range of roughly $`2.5`$–$`4.5`$ GeV.
The heavy-light meson mass $`M`$ is determined from the difference of the meson energy at finite momentum and at rest, assuming the dispersion relation $`E(\vec{p})-E(0)=\sqrt{\vec{p}^2+M^2}-M`$. As a consistency check, we use both the $`B_d`$ and the $`B_s`$ meson to determine the b quark mass. In our calculation of decay constants, the heavy-light current is corrected through $`O(\alpha /M)`$. The mixing coefficients between the lattice operators contributing at this order to the time component of the axial vector current $`J_4`$, and the matching factor to the continuum current, have been calculated in one-loop perturbation theory: $`J_4`$ $`=`$ $`(1+\alpha \rho _0)J_{4,lat}^{(0)}+(1+\alpha \rho _1)J_{4,lat}^{(1)}`$ $`+`$ $`\alpha \rho _2J_{4,lat}^{(2)}.`$ (1) For the RG-improved gluon action, $`\alpha _V`$ has not been calculated, and we use a tadpole-improved one-loop expression for the $`\overline{MS}`$ coupling, $`\alpha _{\overline{MS}}^{TI}(1/a)`$. ## 3 Decay Constants Our preliminary results for $`f_B`$, $`f_{B_s}`$ and $`f_{B_s}/f_B`$ are given in Table 2, along with the statistical error and, where applicable, the uncertainty in the determination of $`m_s`$. Additional systematic errors are estimated as follows: $`O(\alpha ^2)`$ corrections, taken to be $`\alpha ^2\times O(1)`$, are 5%. A previous NRQCD calculation using the plaquette gluon action at $`a^{-1}\simeq 1`$ GeV finds the tree-level $`O(1/M^2)`$ corrections to be $`2\%`$ ; we estimate our error from the truncation of the $`1/M`$ expansion to be $`4\%`$. The leading discretization effects from the light quarks and gluons of $`O(\alpha a\mathrm{\Lambda }_{QCD})`$ and $`O(a^2\mathrm{\Lambda }_{QCD}^2)`$ are 5%. Added in quadrature, these estimates give $`7\%`$. Our two-flavor results for $`f_B`$ and $`f_{B_s}`$ given in Table 2 show a $`10\%`$ increase compared to the quenched values (see also Fig. 1). We do not resolve any sea quark mass dependence.
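The kinetic-mass determination described at the beginning of this section amounts to inverting the dispersion relation for $`M`$; a short sketch (the numbers are arbitrary examples in lattice-like units) is:

```python
import math

def kinetic_mass(p_sq, dE):
    """Invert E(p) - E(0) = sqrt(p^2 + M^2) - M for the meson mass M,
    which gives M = (p^2 - dE^2) / (2*dE)."""
    return (p_sq - dE**2) / (2.0 * dE)

# Round trip with an arbitrary example mass and momentum (lattice-like units).
M_true, p_sq = 5.3, 0.49
dE = math.sqrt(p_sq + M_true**2) - M_true
print(kinetic_mass(p_sq, dE))  # recovers M_true
```

Squaring $`\mathrm{\Delta }E+M=\sqrt{\vec{p}^2+M^2}`$ gives the closed-form inversion used above, so a single nonzero momentum suffices in principle; in practice several momenta are used to check the dispersion relation.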
The dependence on the value of $`\alpha _s`$ is weaker than for the plaquette gauge action, and the difference between renormalized and bare decay constants is only about $`5\%`$. In Fig. 1 we show the one-loop corrections to the current $`J_4`$ as a function of the heavy-light meson mass. In the $`B`$ region, $`1/M\simeq 0.2`$, we find the correction to $`J_{4,lat}^{(1)}`$ to be very small and the two other terms to contribute about the same amount. The $`J_{4,lat}^{(2)}`$ contribution also contains a discretization correction to the current, first pointed out in . We note that this discretization correction is considerably smaller for the RG gauge action than for the plaquette gauge action . For $`f_{B_s}/f_B`$, we cannot resolve a difference between the three lattices. In a parallel study of B mesons using clover heavy quarks , we have obtained $`f_B`$ and $`f_{B_s}`$ taking the chiral limit for the sea quark at $`\beta =1.8,1.95`$ and 2.1. The results from that study at $`\beta =1.95`$ agree within the estimated errors with the present results from NRQCD. ## 4 Spectrum In Fig. 2, we give our results for several $`B`$ splittings from the lattices with $`n_f=0`$ and $`n_f=2,\kappa _{sea}=0.1375`$. The top part of the figure shows the $`B^{\ast }-B`$ splitting. At present, we cannot resolve any unquenching effects. For quarkonia, on the same lattices, the hyperfine splitting is found to increase from the quenched value by only a few MeV . We find the $`B^{\ast }-B`$ splitting to be $`30\%`$ smaller than the experimental value. Possible sources of systematic error are the finiteness of the sea quark mass, the $`O(\alpha )`$ correction to the coefficient of the $`\sigma \cdot B`$ operator, and higher order relativistic corrections. In the middle part of Figure 2, we show results for the $`B_2^{\ast }-B`$ splitting, and in the lower part, the spin-averaged $`\mathrm{\Lambda }_b-\overline{B}`$ splitting. We do not find significant unquenching effects.
However, for definite conclusions, we need to study several sea quark masses and lattice spacings, which is in progress. This work is supported in part by the Grants-in-Aid of Ministry of Education, Science and Culture (Nos. 09304029, 10640246, 10640248, 10740107, 11640250, 11640294, 11740162). SE and KN are JSPS Research Fellows. AAK and TM are supported by the Research for the Future Program of JSPS.
# Supernova Resonance-Scattering Profiles in the Presence of External Illumination ## 1 Introduction In a simple but useful model of spectral line formation during the photospheric phases of a supernova, a line forms by resonance scattering in a homologously expanding atmosphere above a sharp photosphere. An unblended line has a P-Cygni profile with an emission feature near the observer-frame line center wavelength and a blueshifted absorption. Synthetic spectra calculated on the basis of this simple model have been found to fit the observed spectra of most supernovae reasonably well, and to be useful for establishing constraints on the composition structure of the ejected matter. For a detailed discussion of the model, see Jeffery & Branch (1990), who also provided an atlas of line profiles, and compared synthetic spectra to numerous observed spectra of the Type II SN 1987A. Recent studies based on the simple model, using the SYNOW supernova synthetic-spectrum code, include Millard et al. (1999) on the Type Ic SN 1994I and Hatano et al. (1999) on the Type Ia SN 1994D. In the bright Type IIn SN 1998S we have encountered an event for which the above simple model fails. The supernova SN 1998S has been observed extensively with the Hubble Space Telescope (HST) as a target of opportunity by the Supernova INtensive Study (SINS) group (Garnavich et al. 1999, Lentz et al. 1999, and Blaylock et al. 1999, all in preparation), as well as from the ground (Leonard et al. 1999). The observed spectra, especially at short wavelengths and early epochs, contain numerous narrow ($`\sim 500`$ km s<sup>-1</sup>) absorption and emission features that formed in circumstellar matter. The optical and ultraviolet spectra also contain broad ($`\sim 5000`$–10,000 km s<sup>-1</sup>) features that formed in the supernova ejecta. All of the broad features show little contrast with the continuum, especially at early times. 
It is not plausible that the radial dependence of the optical depths of all of the supernova lines should be such that the line profiles come out to be shallow, and in synthetic spectrum calculations with the SYNOW code in its simplest form we cannot account for the relative strengths of the spectral features of SN 1998S. The presence of the circumstellar features suggests that the broad supernova features were being affected by light from the region of circumstellar interaction: i.e., the line formation region was illuminated not only from below by light from the photosphere, but also from above by light from the circumstellar interaction. We refer to this external illumination of the supernova line-forming region as “toplighting”. In this paper we present a simple model of resonance-scattering line formation in the presence of toplighting. This simple model provides a clear understanding of the most conspicuous toplighting effect: a rescaling or, as we prefer to call it, a “muting” of the line profiles relative to the continuum. This effect would be present in more realistic models, but would be harder to isolate. Section 2 presents the model. The emergent specific intensities are derived in § 3. The line profile and a muting factor are derived in § 4. In § 5, we offer a picturesque description of radiative transfer in the model atmosphere. A final discussion appears in § 6. ## 2 The Model Suppose that supernova line formation takes place by isotropic resonance scattering in a spherically symmetric atmosphere that has a sharp photosphere with radius $`R_{\mathrm{ph}}`$ as an inner boundary and a circumstellar-interaction region (CSIR) as an outer boundary. Assume that the CSIR is a shell of zero physical width and optical depth, and let its radius be $`R_{\mathrm{cs}}`$ (see Fig. 1): $`R_{\mathrm{cs}}\ge R_{\mathrm{ph}}`$ in all cases, of course. 
For simplicity, assume that the photosphere emits an outward angle-independent specific intensity $`I_{\mathrm{ph}}`$, that the CSIR emits isotropically with specific intensity $`I_{\mathrm{cs}}`$, and that $`I_{\mathrm{ph}}`$ and $`I_{\mathrm{cs}}`$ are constant over the wavelength interval of interest. When $`I_{\mathrm{cs}}`$ is set to zero, we have what we will call the standard case. Nonzero $`I_{\mathrm{cs}}`$ gives the toplighting case. We assume that the atmosphere is in homologous expansion. In homologous expansion the radius of a matter element $`r`$ is given by $$r=vt,$$ $`(1)`$ where $`v`$ is the constant radial velocity of the matter element and $`t`$ is the time since explosion, which is assumed sufficiently large that initial radii are negligible. Line formation is treated by the (non-relativistic) Sobolev method (e.g., Rybicki & Hummer 1978; Jeffery & Branch 1990; Jeffery & Vaughan 1999) in which the line profile in the emergent flux spectrum depends on the radial behavior of the Sobolev line optical depth $`\tau `$ and the line source function $`S`$ (which is a pure resonance scattering source function following our earlier assumption). We consider only isolated (unblended) line formation and do not include any continuous opacity in the atmosphere. We note for homologous expansion that the resonance surfaces for observer-directed beams are planes perpendicular to the line-of-sight. (A resonance surface is the locus of points on which beams are Doppler shifted into resonance with a line in an atmosphere with a continuously varying velocity field.) If we take $`z`$ to be the line-of-sight coordinate with the positive direction toward the observer, the resonance plane for a wavelength shift $`\mathrm{\Delta }\lambda `$ from the line center wavelength $`\lambda _0`$ is at $$z=-\frac{\mathrm{\Delta }\lambda }{\lambda _0}ct,$$ $`(2)`$ where $`z/t`$ is the plane’s velocity in the $`z`$-direction. 
Blueshifts give positive $`z`$ planes and redshifts, negative $`z`$ planes. ## 3 The Emergent Specific Intensities For resonance scattering the source function of a line is just equal to the mean intensity. In our model the line source function is $$S(r)=WI_{\mathrm{ph}}+[1-W]I_{\mathrm{cs}},$$ $`(3)`$ where W is the usual geometrical dilution factor, $$W=\frac{1}{2}\left[1-\sqrt{1-\left(\frac{R_{\mathrm{ph}}}{r}\right)^2}\right]$$ $`(4)`$ (e.g., Mihalas 1978, p. 120). The first term in equation (3) accounts for radiation from the photosphere and the second, for radiation from the CSIR. Consider a line-of-sight specific intensity beam that does not intersect the photosphere (i.e., one that has impact parameter $`p>R_{\mathrm{ph}}`$ in the standard $`p,z`$ coordinate system) and that has a wavelength in the observer’s frame that differs from the line center wavelength by $`\mathrm{\Delta }\lambda `$. In a toplighting case such a beam originates from the CSIR as shown in Figure 1. From the Sobolev method, the emergent specific intensity of such a beam can be seen to be $$I_{\mathrm{\Delta }\lambda }(p>R_{\mathrm{ph}})=I_{\mathrm{cs}}e^{-\tau }+[WI_{\mathrm{ph}}+(1-W)I_{\mathrm{cs}}](1-e^{-\tau })+I_{\mathrm{cs}},$$ $`(5)`$ where, of course, $`W`$ and $`\tau `$ must be evaluated at the Sobolev resonance point for the $`\mathrm{\Delta }\lambda `$ value under consideration. 
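These expressions are straightforward to evaluate numerically. The sketch below is our own illustration in Python, not code from the paper (all function and variable names are ours); it implements the dilution factor, the resonance-scattering source function, and the emergent intensity of a beam that misses the photosphere, with the Sobolev attenuation written as $`e^{-\tau }`$:

```python
import math

def dilution_factor(r, r_ph):
    """Geometrical dilution factor W(r) above a sharp photosphere of radius r_ph."""
    return 0.5 * (1.0 - math.sqrt(1.0 - (r_ph / r) ** 2))

def source_function(r, r_ph, i_ph, i_cs):
    """Pure resonance-scattering source function S = W*I_ph + (1 - W)*I_cs."""
    w = dilution_factor(r, r_ph)
    return w * i_ph + (1.0 - w) * i_cs

def emergent_intensity_limb(tau, r, r_ph, i_ph, i_cs):
    """Emergent intensity of a beam with p > R_ph: attenuated far-side CSIR
    light, plus line-scattered light, plus the unattenuated near-side CSIR
    contribution."""
    s = source_function(r, r_ph, i_ph, i_cs)
    return i_cs * math.exp(-tau) + s * (1.0 - math.exp(-tau)) + i_cs

# Standard case (I_cs = 0): an opaque line scatters roughly W*I_ph into the beam.
print(emergent_intensity_limb(tau=50.0, r=2.0, r_ph=1.0, i_ph=1.0, i_cs=0.0))
```

With $`\tau =0`$ the beam reduces to the continuum value $`2I_{\mathrm{cs}}`$, as it should for a beam that never interacts with the line.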
Similarly, the emergent specific intensity of a line-of-sight beam that originates from the photosphere (i.e., has $`p\le R_{\mathrm{ph}}`$) can be seen to be $$I_{\mathrm{\Delta }\lambda }(p\le R_{\mathrm{ph}})=I_{\mathrm{ph}}e^{-\tau }+[WI_{\mathrm{ph}}+(1-W)I_{\mathrm{cs}}](1-e^{-\tau })+I_{\mathrm{cs}}.$$ $`(6)`$ Equations (5) and (6) can be rearranged into the convenient expressions $$I_{\mathrm{\Delta }\lambda }(p>R_{\mathrm{ph}})=(I_{\mathrm{ph}}-I_{\mathrm{cs}})W(1-e^{-\tau })+2I_{\mathrm{cs}}$$ $`(7)`$ and $$I_{\mathrm{\Delta }\lambda }(p\le R_{\mathrm{ph}})=(I_{\mathrm{ph}}-I_{\mathrm{cs}})e^{-\tau }+(I_{\mathrm{ph}}-I_{\mathrm{cs}})W(1-e^{-\tau })+2I_{\mathrm{cs}}.$$ $`(8)`$ If $`I_{\mathrm{cs}}=0`$, these expressions reduce to the standard or non-toplighting case Sobolev expressions for emergent specific intensity. If the resonance point is not in the line-forming region (i.e., it occurs at $`r<R_{\mathrm{ph}}`$ or $`r>R_{\mathrm{cs}}`$, or it is in the occulted region from which no beam can reach the observer), then $`\tau `$ is just set to zero in the expressions which then reduce to the continuum expressions $$I_{\mathrm{\Delta }\lambda }(p>R_{\mathrm{ph}})=2I_{\mathrm{cs}}$$ $`(9)`$ and $$I_{\mathrm{\Delta }\lambda }(p\le R_{\mathrm{ph}})=I_{\mathrm{ph}}+I_{\mathrm{cs}}.$$ $`(10)`$ The emission component of a P-Cygni line is largely due to $`I_{\mathrm{\Delta }\lambda }(p>R_{\mathrm{ph}})`$ beams and the absorption component, to $`I_{\mathrm{\Delta }\lambda }(p\le R_{\mathrm{ph}})`$ beams. To see how going from the standard to the toplighting case affects the components it is best to consider equations (7) and (6). From equation (7), we see that toplighting tends to reduce the line emission by changing $`I_{\mathrm{ph}}`$ to $`I_{\mathrm{ph}}-I_{\mathrm{cs}}`$. From equation (6), we see that toplighting tends to fill in the absorption trough (caused by the $`e^{-\tau }`$ factor of the $`I_{\mathrm{ph}}e^{-\tau }`$ term) by adding a positive term to the source function. 
Both emission component and absorption trough are reduced relative to the continuum by the addition of continuum terms: $`2I_{\mathrm{cs}}`$ in the first case and $`I_{\mathrm{cs}}`$ in the second. Thus we can see (at least for $`I_{\mathrm{ph}}>I_{\mathrm{cs}}`$) that adding toplighting to an atmosphere is likely to reduce the relative size of line components (i.e., to mute them). When $`I_{\mathrm{ph}}=I_{\mathrm{cs}}`$, the line components should vanish altogether as comparing equations (7)–(8) and equations (9)–(10) shows. In § 4, we give a definite analytical analysis of the effect of toplighting on line profile formation and confirm the muting effect. ## 4 The Line Profile and the Muting Factor The flux profile seen by a distant observer is obtained from $$F_{\mathrm{\Delta }\lambda }=2\pi \int _0^{R_{\mathrm{cs}}}dp\,p\,I_{\mathrm{\Delta }\lambda }(p),$$ $`(11)`$ where the integration is over impact parameter. (Actually, the quantity in equation (11) divided by the square of the distance to the observer is the flux measured by the observer. But for brevity here and below we just call this quantity flux.) We can obtain semi-analytic expressions for the flux in the standard and toplighting cases by breaking the integration for the flux into components. These semi-analytic expressions then allow a fully analytic formula for a muting factor which describes the muting effect of toplighting. By inspection of equations (7)–(11) with $`I_{\mathrm{cs}}=0`$, we obtain the standard-case results for flux in the continuum and in the line. 
The continuum flux is $$F(\mathrm{con})=I_{\mathrm{ph}}F_0,$$ $`(12)`$ where $$F_0=\pi R_{\mathrm{ph}}^2.$$ $`(13)`$ The line flux is $$F_{\mathrm{\Delta }\lambda }=I_{\mathrm{ph}}\left(F_1+F_2\right),$$ $`(14)`$ where $$F_1=2\pi \int _0^{R_{\mathrm{ph}}}dp\,p\,e^{-\tau }$$ $`(15)`$ and $$F_2=2\pi \int _0^{\sqrt{R_{\mathrm{cs}}^2-z^2}}dp\,p\,W\left(1-e^{-\tau }\right).$$ $`(16)`$ Note that the $`F_1`$ and $`F_2`$ factors are dependent on $`z`$ and therefore on wavelength. Also recall that $`\tau =0`$ for a resonance point outside of the atmosphere or in the occulted region. The contrast factor (i.e., relative difference of line flux from continuum flux) for the standard case is $$\frac{F_{\mathrm{\Delta }\lambda }-F(\mathrm{con})}{F(\mathrm{con})}=\frac{F_1+F_2-F_0}{F_0}.$$ $`(17)`$ Again from equations (7)–(11), but with nonzero $`I_{\mathrm{cs}}`$, we obtain by inspection the toplighting-case results for flux in the continuum and line. The continuum flux is $$F^{\mathrm{top}}(\mathrm{con})=I_{\mathrm{ph}}F_0+I_{\mathrm{cs}}\left(2G_0-F_0\right)=\left(I_{\mathrm{ph}}-I_{\mathrm{cs}}\right)F_0+2I_{\mathrm{cs}}G_0,$$ $`(18)`$ where $$G_0=\pi R_{\mathrm{cs}}^2.$$ $`(19)`$ The line flux is $$F_{\mathrm{\Delta }\lambda }^{\mathrm{top}}=\left(I_{\mathrm{ph}}-I_{\mathrm{cs}}\right)\left(F_1+F_2\right)+2I_{\mathrm{cs}}G_0.$$ $`(20)`$ The $`2I_{\mathrm{cs}}G_0`$ terms in equations (18) and (20) just account for the toplighting specific intensity beams aimed toward the observer from both the near and far hemisphere of the CSIR. These beams of course can interact with the line and the photosphere and this interaction is accounted for in the other terms. Note that $`G_0\ge F_0`$ and $`G_0\ge F_1`$, where the equalities hold only in the degenerate case where $`R_{\mathrm{cs}}=R_{\mathrm{ph}}`$. Also note that $`G_0\ge F_2`$, where the equality holds only in the degenerate case where both coefficients are zero: i.e., $`R_{\mathrm{cs}}=R_{\mathrm{ph}}=0`$. 
From these inequalities, it is clear that $`F^{\mathrm{top}}(\mathrm{con})`$ and $`F_{\mathrm{\Delta }\lambda }^{\mathrm{top}}`$ can never be less than zero: a physically obvious result, of course. The contrast factor in the toplighting case is $$\frac{F_{\mathrm{\Delta }\lambda }^{\mathrm{top}}-F^{\mathrm{top}}(\mathrm{con})}{F^{\mathrm{top}}(\mathrm{con})}=\frac{\left(I_{\mathrm{ph}}-I_{\mathrm{cs}}\right)\left(F_1+F_2-F_0\right)}{\left(I_{\mathrm{ph}}-I_{\mathrm{cs}}\right)F_0+2I_{\mathrm{cs}}G_0}=\frac{\left(1-E\right)\left(F_1+F_2-F_0\right)}{(1-E)F_0+2EG_0},$$ $`(21)`$ where $$E\equiv \frac{I_{\mathrm{cs}}}{I_{\mathrm{ph}}}.$$ $`(22)`$ We define the muting factor $`m`$ to be the ratio of the toplighting-case contrast factor to the standard-case one: $$m=\frac{1-E}{1-E+2E\left(R_{\mathrm{cs}}/R_{\mathrm{ph}}\right)^2}.$$ $`(23)`$ Since all $`F_1`$ and $`F_2`$ factors have cancelled out, $`m`$ is fully analytic and wavelength independent. The muting factor $`m`$ is a monotonically decreasing function of $`E`$ with a physical maximum of 1 at $`E=0`$ and with only one stationary point, a minimum at $`E=\infty `$. The muting factor in fact goes to zero at $`E=1`$ and becomes negative for $`E>1`$. Thus for $`E=1`$ the P-Cygni profile of a line vanishes and for $`E>1`$ the line flips: there is an absorption feature at the line center wavelength and a blueshifted emission feature. (Note that this flipped P-Cygni profile is not the same as the “inverse” P-Cygni profile that would be produced by a contracting rather than an expanding line-forming region.) A picturesque way of understanding a flipped P-Cygni line is given in § 5. The minimum value of $`m`$ at $`E=\infty `$ is given by $$m(E=\infty )=\frac{1}{1-2\left(R_{\mathrm{cs}}/R_{\mathrm{ph}}\right)^2}.$$ $`(24)`$ Note that $`m(E=\infty )\ge -1`$ with the equality holding only for $`R_{\mathrm{cs}}/R_{\mathrm{ph}}=1`$ which is the lower limit on $`R_{\mathrm{cs}}/R_{\mathrm{ph}}`$. 
Thus $$|m|\le 1.$$ $`(25)`$ Since the absolute value of $`m`$ is always less than or equal to $`1`$, toplighting always mutes a line profile: hence our choice of “muting” for the name of the toplighting effect on line profiles. We can consider a couple of simple examples of muting by toplighting. First, consider $`E=1/2`$ and $`R_{\mathrm{cs}}/R_{\mathrm{ph}}=2`$. Note that geometrically thin supernova atmospheres have not been identified, and thus $`R_{\mathrm{cs}}/R_{\mathrm{ph}}\ge 2`$ may be typical in real supernovae. With the given input values, $`m=1/9`$ and the contrast factor of the line with respect to the continuum is reduced by this factor in going from a standard case to a toplighting case. The contrast factor of an absorption trough of a standard-case P-Cygni line has absolute lower limit of $`-1`$ (i.e., zero flux). Thus, even if this lower-limit case were realized for a standard-case P-Cygni line, the toplighting counterpart absorption would have absorption trough depth of only $`1/9`$ of the continuum level. Next consider $`E=\infty `$ and $`R_{\mathrm{cs}}/R_{\mathrm{ph}}=2`$. Here $`m=-1/7`$. Since $`m`$ is negative, the toplighting has produced a flipped P-Cygni profile. The standard-case P-Cygni absorption trough would be turned into a toplighting-case emission peak with an upper limit on the contrast factor of $`1/7`$. Figure 2 shows examples of P-Cygni profiles with $`E`$ values that effectively span much of the $`E`$ parameter range. Since there is no upper limit on the contrast factor of a standard-case P-Cygni emission peak, prima facie it seems that a negative muting factor with large absolute value could lead to negative observed flux in a corresponding toplighting-case absorption component. This does not happen of course. As we showed above, the observed flux in the toplighting case is never mathematically negative. For another point of view, consider the following argument. 
The standard-case P-Cygni line emission peak is largest for an opaque line (i.e., one with $`\tau =\infty `$ everywhere in the atmosphere) and large $`R_{\mathrm{cs}}/R_{\mathrm{ph}}`$. For such a line, the emission peak contrast factor can only increase with increasing $`R_{\mathrm{cs}}/R_{\mathrm{ph}}`$ as $`\mathrm{ln}(R_{\mathrm{cs}}/R_{\mathrm{ph}})`$ ($`R_{\mathrm{cs}}`$ considered here just as an outer boundary radius) (e.g., Jeffery & Branch 1990, p. 189), but the absolute value of the muting factor for large $`R_{\mathrm{cs}}/R_{\mathrm{ph}}`$ decreases for increasing $`R_{\mathrm{cs}}/R_{\mathrm{ph}}`$ like $`(R_{\mathrm{cs}}/R_{\mathrm{ph}})^{-2}`$. Thus the muting factor always scales to prevent negative observed flux from arising mathematically. The ratio of the monochromatic luminosities of the CSIR and the photosphere (taken independently of each other) is $$\mathrm{\Gamma }=\frac{L_{\mathrm{cs}}}{L_{\mathrm{ph}}}=\frac{4\pi \left(2\pi R_{\mathrm{cs}}^2I_{\mathrm{cs}}\right)}{4\pi \left(\pi R_{\mathrm{ph}}^2I_{\mathrm{ph}}\right)}=2E\left(\frac{R_{\mathrm{cs}}}{R_{\mathrm{ph}}}\right)^2,$$ $`(26)`$ where the factor of $`2`$ accounts for the fact that both hemispheres of the CSIR region contribute to flux in any given direction. The ratio $`\mathrm{\Gamma }`$, in fact, appears in the denominator of the muting factor formula, equation (23). If we use equation (26) to eliminate $`E`$ from equation (23), we obtain $$m=\frac{2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2-\mathrm{\Gamma }}{2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2-\mathrm{\Gamma }+2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2\mathrm{\Gamma }}.$$ $`(27)`$ Now $`m`$ as a function of $`\mathrm{\Gamma }`$ monotonically decreases with $`\mathrm{\Gamma }`$ from $`m=1`$ at $`\mathrm{\Gamma }=0`$ to a minimum $`m=1/[1-2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2]`$ at $`\mathrm{\Gamma }=\infty `$ (the only stationary point). 
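As a check on the algebra, the muting factor is easy to evaluate directly. The sketch below is ours, not the authors' code; it reproduces the worked examples quoted earlier for $`R_{\mathrm{cs}}/R_{\mathrm{ph}}=2`$ and verifies that the two parameterizations, in terms of $`E`$ and in terms of $`\mathrm{\Gamma }`$, agree:

```python
def muting_factor(e, ratio):
    """Muting factor m(E) for E = I_cs/I_ph and ratio = R_cs/R_ph."""
    return (1.0 - e) / (1.0 - e + 2.0 * e * ratio ** 2)

def muting_factor_gamma(gamma, ratio):
    """The same factor written in terms of the luminosity ratio
    Gamma = L_cs/L_ph = 2*E*(R_cs/R_ph)**2."""
    r2 = 2.0 * ratio ** 2
    return (r2 - gamma) / (r2 - gamma + r2 * gamma)

# Worked examples from the text, all with R_cs/R_ph = 2:
assert abs(muting_factor(0.5, 2.0) - 1.0 / 9.0) < 1e-12   # E = 1/2 gives m = 1/9
assert muting_factor(1.0, 2.0) == 0.0                     # E = 1: the line vanishes
assert abs(muting_factor(1e9, 2.0) + 1.0 / 7.0) < 1e-6    # E -> infinity: m -> -1/7
# E = 1/2 with ratio = 2 corresponds to Gamma = 2*E*ratio**2 = 4:
assert abs(muting_factor(0.5, 2.0) - muting_factor_gamma(4.0, 2.0)) < 1e-12
```

The sign flip of $`m`$ past $`E=1`$, which produces the flipped P-Cygni profile, falls out of the same two-line function.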
It is clear that $`\mathrm{\Gamma }`$ will have to be large in some sense in order to obtain strong muting. For the sake of definiteness say $`m\le 1/2`$ is “strong” muting. Then $$\mathrm{\Gamma }\ge \mathrm{\Gamma }\left(m=\frac{1}{2}\right)=\frac{2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2}{2(R_{\mathrm{cs}}/R_{\mathrm{ph}})^2+1}$$ $`(28)`$ is required for strong muting. Since $`R_{\mathrm{cs}}/R_{\mathrm{ph}}\ge 1`$ is required physically, a necessary, but not sufficient, condition for strong muting is $`\mathrm{\Gamma }\ge 2/3`$. If, as suggested above, $`R_{\mathrm{cs}}/R_{\mathrm{ph}}\ge 2`$ for supernovae, then a necessary, but not sufficient, condition for strong muting in supernovae is $`\mathrm{\Gamma }\ge 8/9`$. Consequently, only those supernovae whose monochromatic luminosities are strongly enhanced by circumstellar interaction will have line profiles that are strongly muted by toplighting. ## 5 Picturesque Description To complement the mathematical description of the radiative transfer in the standard and toplighting cases of our simple model, we present here a picturesque description. First consider the standard case. We use $`R_{\mathrm{cs}}`$ just as an outer boundary of the atmosphere in this case. If no line is present, the photons emitted by the photosphere just escape to infinity and the flux in any direction is just $`I_{\mathrm{ph}}F_0`$: this is the continuum emission. Adding a resonance-scattering line to the atmosphere has the overall effect of reducing the wavelength-integrated emission of the supernova in the wavelength interval that the line can affect. 
This interval is $`(\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{min}},\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{max}})`$, where $$\mathrm{\Delta }\lambda _{\mathrm{min}}=-\lambda _0\frac{R_{\mathrm{cs}}}{ct}\quad \mathrm{and}\quad \mathrm{\Delta }\lambda _{\mathrm{max}}=\lambda _0\frac{\sqrt{R_{\mathrm{cs}}^2-R_{\mathrm{ph}}^2}}{ct}.$$ $`(29)`$ The reason for the loss in wavelength-integrated emission is that the line, scattering isotropically, will scatter some photons back to the photosphere where in our simple model they are simply absorbed. In a more realistic model, the photons absorbed by the photosphere are a feedback that helps determine the photospheric state. The particular observer-directed photons which are absorbed are those scattered toward the observer in the occulted region: they simply hit the photosphere as they head toward the observer. Because of the homologous expansion, photons continuously redshift in the comoving frame of the atmosphere. Thus, photons emitted by the photosphere at or redward of $`\lambda _0`$ escape the atmosphere without scattering. (Note formally they can scatter at $`\lambda _0`$ at the point of emission on the photosphere, but this effect is assumed accounted for in specifying $`I_{\mathrm{ph}}`$, the constant photospheric specific intensity.) Thus the line scatters photons that before scattering are blueward of $`\lambda _0`$ in observer-frame wavelength. The line scattering does not change a photon’s comoving frame wavelength (not at all in our simple model and not significantly in reality), but by changing its propagation direction relative to the matter flow it does change its observer-frame wavelength. Consider photons scattered toward a distant observer. If the scattering occurs from matter moving away from, mainly perpendicular to, or toward the observer, then the scattered photons in the observer frame have wavelengths redward of $`\lambda _0`$, near $`\lambda _0`$, or blueward of $`\lambda _0`$, respectively. 
From the preceding remarks, it is clear that the observer receives all the photons emitted by the photosphere into the line-of-sight at and redward of $`\lambda _0`$ plus the unscattered photons from blueward of $`\lambda _0`$. In addition there are photons scattered into the line-of-sight by the line from the whole wavelength interval $`(\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{min}},\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{max}})`$. The scattering component is strongest near $`\lambda _0`$ where the resonance planes for scattering are largest and they come closest to (and even touch) the photosphere where the source function and usually the scattering opacity are largest. (Because of the nature of homologous expansion all the photons scattered toward the observer from a plane perpendicular to the line-of-sight have the same observer-frame wavelength: see § 2.) These planes are near $`z=0`$. The scattering component grows progressively weaker as one moves away from near $`\lambda _0`$: i.e., as the resonance planes get farther from $`z=0`$. This scattering behavior results in the P-Cygni profile emission feature with its peak near $`\lambda _0`$. The scattering of photons out of the line-of-sight from blueward of $`\lambda _0`$ causes a flux deficiency relative to the continuum: this is the P-Cygni profile blueshifted absorption. As we argued above the wavelength-integrated emission in the wavelength interval $`(\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{min}},\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{max}})`$ is less than in the line’s absence. Consequently, the P-Cygni absorption feature will be larger than the P-Cygni emission feature. (This ratio of feature size is not necessarily obtained if the line is not a pure resonance scattering line.) Now consider the toplighting case. First, imagine that there is no photosphere or line; there is just the radiating, optically transparent, spherical shell CSIR. 
The flux in any direction is $`2I_{\mathrm{cs}}G_0`$ at all wavelengths. Now add a line to the atmosphere enclosed by the CSIR. The line has, in fact, no effect on the flux emission. The comoving-frame radiation field at the line center wavelength at any point inside the atmosphere is isotropic. The line scattering is isotropic. In the comoving frame, an isotropic field isotropically scattered is unchanged. Since the comoving frame radiation field is unchanged, the observer-frame radiation field is also unchanged. Now remove the line, but add a non-emitting (but, of course, opaque) photosphere. The flux in all directions at all wavelengths is now $`I_{\mathrm{cs}}\left(2G_0-F_0\right)`$. The photosphere just acts as a net absorber. But if one now adds a line, the line will scatter some photons directed toward the photosphere into directions leading to escape. The result is that the wavelength-integrated flux in the wavelength interval $`(\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{min}},\lambda _0+\mathrm{\Delta }\lambda _{\mathrm{max}})`$ is increased by the addition of the line. Again one has to consider how the line scattering shifts the observer-frame wavelength. The non-emitting photosphere causes there to be a “cone of emptiness” in the radiation field converging at any point in the atmosphere. This reduces the line source function at every point in the atmosphere from the no-photosphere case. Thus, observer-directed beams in the limb region of the atmosphere that interact with the line must be reduced from the no-photosphere case. The closer the resonance point for a beam is to the $`z=0`$ location, the greater the effect of the “cone of emptiness”, and the weaker the emergent beam will be. Beams that do not interact with the line will be unchanged from the no-photosphere case. The upshot is that centered on $`\lambda _0`$ there will be flux deficit relative to the continuum: an absorption feature around the line center wavelength. 
On the other hand, beams from the photodisk region (i.e., the non-limb part of the atmosphere on the near side of the photosphere) are enhanced by the line scattering. Without the line, the photodisk specific intensity is just $`I_{\mathrm{cs}}`$ from the near hemisphere of the CSIR; the beams from the far hemisphere of the CSIR are occulted by the photosphere. But with the line there is scattering into the line-of-sight in the photodisk region. Consequently, there is an enhancement in flux over the continuum: i.e., a flux emission feature. Since the photodisk region is moving toward the observer this flux emission feature is blueshifted from $`\lambda _0`$. One thus finds that toplighting with a non-emitting photosphere gives rise to a flipped P-Cygni line with a blueshifted emission and an absorption centered near the line center wavelength. From the argument about wavelength-integrated flux above, we see that the blueshifted emission must be larger than the absorption trough centered near line center wavelength. In a general toplighting case the photosphere emits, and so there is a competition between P-Cygni and flipped P-Cygni line formation. Clearly in most supernovae the competition is won by P-Cygni line formation. For flipped P-Cygni line formation to win, $`E=I_{\mathrm{cs}}/I_{\mathrm{ph}}`$ must be greater than 1 (see § 4). ## 6 Discussion The treatment of toplighting in supernova resonance-scattering line formation has been presented here in its simplest form for its heuristic value. For example, we have not discussed how the toplighting might change the radial dependence of the line optical depth $`\tau (r)`$, and we have not considered the possibility that the CSIR reflects supernova light back into the line–forming region. The toplighting version of the SYNOW code that we (Blaylock et al. 
1999, in preparation) are using to analyze the spectra of SN 1998S allows for an angular dependence of the radiation from an optically thin circumstellar shell and for a wavelength dependence of the photospheric and CSIR specific intensities. Including toplighting allows us to obtain improved fits to the observed spectra of SN 1998S and shows that some of the P-Cygni line profiles in the early-time spectra of SN 1998S may be flipped. (It is not easy to be sure of the flipped profiles because of the numerous superimposed circumstellar features.) Lentz et al. (1999, in preparation) find that allowing for toplighting in detailed NLTE calculations also leads to improved fits. Among core-collapse events, SN 1998S has been uniquely well observed at ultraviolet wavelengths, but physically it is not an exceptional case. Line profiles in other circumstellar-interacting core-collapse events presumably also are affected by toplighting. But as we showed in § 4, only those supernovae whose monochromatic luminosities are strongly enhanced by circumstellar interaction will have line profiles that are strongly muted by toplighting. Examples of supernovae with relatively strong circumstellar interaction (as known from relatively strong radio emission) are SN 1979C, SN 1980K and SN 1993J. All these supernovae showed rather featureless UV spectra in the $`1800`$–$`2900`$Å region in comparison to supernovae known not to have had strong circumstellar interaction (Jeffery et al. 1994). Perhaps a UV-peaked toplighting continuum was muting the UV spectra of SN 1979C, SN 1980K and SN 1993J. Spectropolarimetry and nebular-phase line profiles indicate that SN 1998S and its circumstellar shell were not spherically symmetric (Leonard et al. 1999) as has been assumed here, and core-collapse events in general appear to be asymmetric (Wang et al. 1996). Eventually, asymmetric toplighting will have to be taken into account in spectrum calculations. 
This work was supported by NASA grant NAG5–3505, NASA grant GO-2563.001 to the SINS group from the Space Telescope Science Institute, which is operated by AURA under NASA contract NAS 5–26555, and Middle Tennessee State University.
# 1 Introduction The Penna bit-string model for biological ageing was published in 1995, and since then it has been increasingly used to study different characteristics of real populations, as for instance the inheritance of longevity and the advantages of sexual reproduction (for a review, see and references therein). Parental care was first introduced in the model by Thoms et al. More recently, it has been shown that different strategies of child-care can lead to smaller or larger effects on the survival probabilities of the population, and that when combined with reproduction risk, it leads to a self-organisation of female menopause in sexual populations. In this paper we are mainly interested in two points: the first one is to study the spatial distribution of populations subjected to different strategies of child-care. Two of our strategies are specifically related to the movements of the mother on the lattice, and so different from those mentioned above. The third one is also related to the survival of the child, depending on whether the mother is alive or not. The second point is to compare the usual survival rate of the model with that obtained when the lattice is introduced. In the next section we describe how the asexual Penna model works on a lattice and our strategies of child-care. In section 3 we present the results considering child-care and in section 4 the results when the standard model is compared with the lattice one (without any child-care strategy). In section 5 we present our conclusions. ## 2 The Penna Model on a lattice and strategies of child-care In the asexual version of the Penna model each individual is represented by a single bitstring of 32 bits, which plays the role of a chronological genome. Each individual can live at most for 32 timesteps (“years”). The first bit indicates whether the individual will suffer from the effects of a genetic disease in its first year of life. 
The second bit corresponds to the second year of life, and so on. Diseases are represented by bits 1. Whenever a bit 1 appears it is computed and the total amount of diseases until the current individual’s age is compared with a limit $`T`$. The individual dies when the limit is reached. There is a minimum reproduction age $`R`$, from which the individual generates, with probability 0.5 (in this paper), $`b`$ offspring every year. We also consider a “pregnancy” period: after giving birth, a mother stays one timestep (year) without reproducing. The offspring genome is a copy of the parent’s one, with $`M`$ random mutations. Only deleterious mutations are considered: if a bit 0 is randomly chosen in the parent’s genome, it is mutated to 1 in the offspring one. If a bit 1 is randomly chosen, it stays 1 in the offspring genome. Besides dying if the number of allowed diseases exceeds $`T`$ or for reaching 32 years, an individual may also die for lack of space and food. Without a spatial lattice, such lack is taken into account through the general Verhulst factor $`V=1-N(t)/N_{max}`$, where $`N(t)`$ is the actual size of the population at time $`t`$ and $`N_{max}`$ is the maximum size (carrying capacity) the population can sustain, which is defined at the beginning of the simulation. At every timestep and for each individual, a random number between 0 and 1 is generated and compared with $`V`$. If this random number is greater than $`V`$, the individual dies independently of its age or genome. In the present case each individual lives in a given site (i,j) of a square lattice. Instead of considering a single carrying capacity for the whole population, we consider a maximum allowed occupation per site. Moreover, at every timestep each individual has a probability to walk to the neighbouring site that presents the smallest occupation, if this occupation is also smaller than or equal to that of the current individual’s site. 
The Verhulst factor is now given by: $$V_{i,j}=1-N_{i,j}(t)/N_{i,j(max)}.$$ In this way, it is possible to consider the same carrying capacity for all sites, or to define regions with more or less resources. Each individual is first tested: if it does not die because of genetic diseases, lack of food and space (Verhulst) or for reaching 33 years, it moves according to the probability mentioned above. We start the simulations randomly distributing one individual per site on a diluted lattice. That is, if an already occupied site is chosen for a new individual, the choice is disregarded and another random number is generated. Our strategies of child-care which are purely related to the movements on the lattice consist in defining a period $`C_m`$ during which the mother and the child are forced to stay together: no child can move alone before reaching an age greater than $`C_m`$. In this way, $`C_m`$ is the period during which the child is under maternal care. We have studied the following cases: (a) if the mother moves, she brings all her young children with her; (b) the mother cannot move if she has any child still under maternal care. We also considered a condition that can be added to any of the other two or even be considered alone: it consists in killing the motherless child with age $`\le C_m`$ according to a given probability which decreases with age. We have considered $`C_m=2`$, a probability 0.9 that the child dies in its first year of life if its mother dies, and a probability 0.3 if it happens in its second year of life. This strategy is in fact an improved version of that used before in refs. , where the offspring die if the mother dies, with probability 1, at any age inside the child-care period. This improvement is based on data given in ref. , fig.5, about baboons and lions. In the next section we present our results for cases (a), (b) and (c), where (c) corresponds to case (a) plus the survival condition just mentioned.
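The bookkeeping described above (bitstring genome, deleterious-only mutations, per-site Verhulst test) can be sketched in a few lines. This is only an illustrative sketch of the rules as stated, not the code used for the simulations in this paper:

```python
import random

GENOME_BITS = 32  # one bit per "year" of life

def diseases_until(genome: int, age: int) -> int:
    """Total number of diseases (set bits) switched on up to the given age."""
    mask = (1 << age) - 1
    return bin(genome & mask).count("1")

def mutate(genome: int, n_mutations: int, rng: random.Random) -> int:
    """Deleterious-only mutation: a randomly chosen bit 0 becomes 1,
    while a randomly chosen bit 1 stays 1."""
    for _ in range(n_mutations):
        genome |= 1 << rng.randrange(GENOME_BITS)
    return genome

def verhulst_survives(n_site: int, n_site_max: int, rng: random.Random) -> bool:
    """Per-site Verhulst test with V = 1 - N_ij(t)/N_ij(max)."""
    return rng.random() < 1.0 - n_site / n_site_max
```

An individual would then die when `diseases_until(genome, age)` reaches the limit $`T`$, upon reaching the maximum age, or when the per-site Verhulst test fails.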
## 3 Results comparing child-care strategies

In this paper we have worked with the same maximum occupation per site. The figures presented in this section correspond to the following parameters:

I - General parameters of the model
1) Initial population = 10,000 individuals;
2) Limit number of diseases $`T`$ = 2;
3) Carrying capacity (per site) = 34;
4) Minimum reproduction age = 9;
5) Birth probability = 0.5;
6) Birth rate $`b`$ = 2;
7) Total number of steps = 800,000.

II - Parameters related to the lattice and child-care
1) Lattice size = 150 × 150 sites;
2) Probability to walk = 0.2;
3) $`C_m=2`$, the age up to which moving alone is forbidden.

With these parameters a maximum population of 160,000 individuals is reached, this number decreasing and stabilizing between 5,000 and 25,000 individuals, depending on the child-care strategy. In figure 1 we show the final configuration of the lattice for case (a), and in figure 2, for case (b). In figure 3 we present the results for case (c), where two strategies of child-care are working together, as mentioned at the end of section 2. From these figures it can be noticed that the final configuration is strongly dependent on the child-care strategy. Comparing cases (a) and (b), it can be noticed that when the mother cannot move, more empty sites are produced, as a simple consequence of the Verhulst factor: more individuals accumulate at the same site and die for lack of space and food. When the condition that the child dies if the mother dies is added (fig.3), the final configuration presents more empty sites than fig.1. In this third case two effects are superimposed: a stronger action of the Verhulst factor (if the mother is randomly chosen to die, all young children also die), and deaths caused by weak genotypes (mothers that die from accumulation of deleterious mutations).
Since the Verhulst factor does not interfere with the genetic features of the population, the survival rates of cases (a) and (b) are the same, and different from that of case (c). In fact, the genetic deaths of mothers with weak genotypes select the best fitted families to survive, increasing the lifespan , as can be seen from figure 4. On the other hand, the stronger the strategy of child-care considered, the more the total population decreases .

## 4 Comparison between the standard results and those obtained with the lattice

A natural question that appears when the lattice is introduced is: ignoring any child-care strategy, how can the standard results of the Penna model be compared with those obtained when the lattice is introduced? Or, what is the behaviour of the survival rate with and without the lattice? In order to answer this question we performed several simulations and observed that: a) For a probability to walk equal to one, the survival rates of the model with and without the lattice are the same, and no special care must be taken. It means that if one considers, as usual, a given carrying capacity $`N_{max}`$ for the whole population, the same survival rate can be obtained simply by considering a carrying capacity per site equal to the same $`N_{max}`$ divided by the number of sites. b) As the probability to walk decreases, finite-size effects appear. They are observed through a corresponding decrease of the maximum lifetime, that is, the survival rates drop to zero before the standard ones, as can be seen from fig.5a (where the maximum occupation per site is 34). In these cases, if one is interested in comparing the lattice results with those of the standard model, a higher occupation per site must be considered. If this occupation is large enough, the results are the same, as shown in fig.5b, where the maximum occupation per site is 200. A maximum occupation per site equal to 100 produces the same result and is thus enough to avoid finite-size effects.
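Observation (a) is transparent algebraically: if the walkers spread the population uniformly, splitting the global capacity $`N_{max}`$ evenly among the sites leaves the Verhulst survival probability unchanged, because the number-of-sites factor cancels. A minimal check (illustrative only):

```python
def verhulst_global(n_total: float, n_max: float) -> float:
    """Survival probability with a single global carrying capacity."""
    return 1.0 - n_total / n_max

def verhulst_per_site(n_total: float, n_max: float, n_sites: int) -> float:
    """Survival probability with uniform occupation and the same capacity
    divided evenly among the sites: the n_sites factor cancels exactly."""
    return 1.0 - (n_total / n_sites) / (n_max / n_sites)
```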
c) Periodic boundary conditions cannot avoid the finite-size effect mentioned above, for small occupations per site.

## 5 Conclusions

We show how to implement the Penna model on a lattice, in order to study the spatial evolution of the population. A walking probability is given to each individual, and the Verhulst factor is now per site, i.e., there is a maximum capacity per site. We start the simulations randomly distributing one individual per site, on a diluted lattice. We consider that no individual with age $`\le C_m`$ can move alone, i.e., while under maternal care. The following strategies of child-care have been considered: (a) If the mother moves she brings the offspring under maternal care with her; (b) The mother cannot move if she has any child under care. Comparing these two situations, we notice that the second one produces more empty sites than the first one, as a consequence of the Verhulst factor (the occupations of many sites exceed the maximum allowed capacity). An extra condition has also been added to the first strategy: if the mother dies, the offspring has a probability to die that depends on its age. In this case also, more empty sites are produced, compared to the first case. We present the survival rates of the three cases, showing that only the last one interferes with the population genetics. We also compare the survival rates obtained with the standard model with those obtained when the lattice is introduced (without any kind of child-care). We show that for large enough maximum occupations per site, these survival rates are the same.

Acknowledgements: To P.M.C. de Oliveira, J.S. Sá Martins and A.T. Bernardes for useful discussions; to CAPES and CNPq for financial support.

References

T.J.P. Penna, J.Stat.Phys. 78 (1995) 1629.
Paulo Murilo C. de Oliveira, Suzana Moss de Oliveira, Americo T. Bernardes and Dietrich Stauffer, Lancet 352 (1998) 911.
J.S. Sá Martins and S. Moss de Oliveira, Int.J.Mod.Phys. C9 (1998) 421.
A.T.
Bernardes, Annual Reviews of Computational Physics IV (1996) 359, edited by D. Stauffer, World Scientific, Singapore.
S. Moss de Oliveira, Physica A257 (1998) 465; Suzana Moss de Oliveira, Paulo Murilo Castro de Oliveira and Dietrich Stauffer, Evolution, Money, War and Computers, Teubner, Stuttgart-Leipzig (1999).
J. Thoms, P. Donahue, D. Hunter and N. Jan, J.Phys. I5 (1995) 1689.
K.M. Fehsenfeld, J.S. Sá Martins, S. Moss de Oliveira and A.T. Bernardes, Int.J.Mod.Phys. C9 (1998) 935.
S. Moss de Oliveira, A.T. Bernardes and J.S. Sá Martins, Eur.Phys.J. B (1998) in press.
C. Packer, M. Tatar and A. Collins, Nature 392 (1998) 807.

FIGURE CAPTIONS

Fig.1 - Final distribution of the population on the lattice considering that if the mother moves, she brings the children under maternal care ($`age\le 2`$) with her (case a).
Fig.2 - Final distribution of the lattice considering that the mother cannot move if she has children under maternal care (case b).
Fig.3 - Final distribution of the lattice considering that if the mother moves she brings the children under maternal care with her, and that if she dies, her children have a probability 0.9 to die in their first year of life, and 0.3 in their second year of life (case c).
Fig.4 - Survival rates for the cases: (a) filled circles; (b) triangles; (c) squares.
Fig.5a - Survival rates of the standard model without lattice (triangles), and with lattice for different probabilities to walk. In all cases the initial population is $`N(0)=10,000`$ and the common parameters ($`T,b,R`$, birth probability) are those described in section 3. In the lattice cases, the maximum occupation per site is 34 and the lattice size is 150 × 150. In the standard case $`N_{max}=10N(0)`$.
Fig.5b - Triangles: the same standard survival rate of fig.5a; Full squares: the survival rate with lattice for a maximum occupation per site equal to 200 and a probability to walk equal to 0.2.
# Crossing Probabilities in Critical 2-D Percolation and Modular Forms

## 1 Introduction

Percolation is perhaps the simplest non-trivial model in statistical mechanics. It is very easy to define, and exhibits a second-order phase transition between the percolating and non-percolating states. A broad array of techniques have been brought to bear on it over a period of many years. Its behavior is of current interest, and has been studied via renormalization group, conformal field theory, Coulomb gas methods, computer simulation, as an example of supersymmetry, and using rigorous mathematical methods (for general reviews, see , ; for some recent results including a list of references see ). In this contribution, we restrict ourselves to the study of crossing probabilities for two-dimensional systems at the percolation (phase transition) point $`p_c`$. Although percolation is, as mentioned, arguably the simplest model that exhibits a second-order phase transition, the ease of formulation of the model is in another sense deceptive, tending to conceal its inherent complexity. The wide range of approaches taken to it already attests to this subtlety. The ultimate reason is suspected to be the unconstrained nature of the model, encompassing a variety of symmetries. In Section 2 we review percolation and crossing probabilities. Exact analytic expressions for these quantities are known from boundary conformal field theory. We transform these results into a form suitable for the present analysis. Section 3 briefly introduces modular forms. Then the unusual modular behavior of the derivatives of the crossing probabilities is examined. The results here are preliminary; a full treatment will appear elsewhere .

## 2 Crossing Probabilities

The properties that we consider here are critical and therefore universal, the same for a wide variety of types of (isotropic) percolation and lattice structures.
However, for definiteness, when we are specific we will refer to bond percolation on a square lattice. Bond percolation is defined by placing a bond with (independent) probability $`p`$ on each edge of the lattice. Consequently, there are $`2^N`$ possible bond configurations with $`0\le N_B\le N`$, where $`N_B`$ is the number of bonds in a given configuration and $`N`$ is the total number of edges. The connected bonds in each configuration form clusters. At $`p_c`$ (note that duality implies $`p_c=1/2`$ on a square lattice), as the lattice is taken to infinity, one or more infinite clusters just appears. For $`p<p_c`$, there is no infinite cluster. The crossing probabilities are defined by considering a finite rectangular $`L\times L^{\prime }`$ lattice as $`L,L^{\prime }\to \infty `$ with fixed aspect ratio $`r`$ = width/height = $`L/L^{\prime }`$. (Below, we will allow the shape of the lattice to change.) Then the probability of a configuration connecting the left side and the right side of the rectangle is the horizontal crossing probability $`\mathrm{\Pi }_h(r)`$. The probability of a configuration connecting all four sides is the horizontal-vertical crossing probability $`\mathrm{\Pi }_{hv}(r)`$. These quantities are known to depend only on the aspect ratio $`r`$ (for a rectangle) by extensive numerical work and the hypothesis (and consequences) of conformal invariance. In fact, they enjoy an even wider invariance, as discussed below. Next, to motivate the conformal approach to this problem, we consider the $`Q`$-state Potts model. This generalization of the Ising model employs a spin variable $`s_i`$ with values $`s_i=1,2,\mathrm{},Q`$ ($`Q=2`$ corresponds to the Ising model) on each site $`i`$ of the lattice. The (reduced) Hamiltonian is $$H=K\underset{<ij>}{\sum }\delta _{s_i,s_j}$$ (1) where $`<ij>`$ denotes nearest neighbor sites.
By introducing the variable $`x=e^K-1`$ one may rewrite the partition function as follows: $$Z=\underset{\{s_i\}}{\sum }\underset{<i,j>}{\prod }(1+x\delta _{s_i,s_j})$$ (2) Expanding the product, one may perform the sum over $`s_i`$ in each term. Representing the presence of a factor $`x`$ by a bond then gives rise to a graphical representation of $`Z`$, known as the random cluster or bond-correlated Potts representation ,\[6-9\] $$Z=\underset{G}{\sum }Q^{N_c}x^{N_b}$$ (3) where the sum is over all possible graphs consisting of $`N_b`$ bonds arranged in $`N_c`$ clusters (counting single isolated sites as clusters). This model is known to have a critical point for all $`0\le Q\le 4`$. (On the square lattice, by duality, $`x_c=Q^{1/2}`$). For $`Q=1`$, the set of configurations included is exactly that, including the weighting, as for bond percolation, with $`p=x/(x+1)`$. For other $`Q`$ values, the configurations are the same but weighted differently. In addition, Eq. (3) allows us to extend the number of states to $`Q\in 𝐑`$. Thus we can envision a continuous change from the Ising model, say, to percolation. Further, the central charge is known as a function of $`Q`$; in particular, $`c=0`$ for $`Q=1`$ (critical percolation). In order to study the crossing probabilities, following , consider for definiteness $`\mathrm{\Pi }_h`$ on a rectangle. Let $`Z_{ab}`$ be the partition function of the Potts model with the spins on the left vertical side fixed in state $`a`$ and those on the right vertical side fixed in state $`b`$. The spins on the rest of the boundary are unrestricted. Then $$\mathrm{\Pi }_h=\underset{Q\to 1}{lim}\left(1-\frac{Z_{ab}}{Z_{aa}}\right)$$ (4) with $`a\ne b`$. Note that this expression differs from the one in due to the normalization of $`Z`$. The expression makes no sense for $`Q=1`$, of course, but it allows a solution to the problem if we first express the partition functions using boundary operators and then take the limit.
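The random-cluster identity of Eq. (3) is easy to verify by brute force on a very small graph. The following check (an illustration, not part of the original paper) compares it with the spin sum of Eq. (2) for integer $`Q`$, and with the free sum $`(1+x)^N`$ for $`Q=1`$:

```python
from itertools import combinations, product

def n_clusters(n_sites, bonds):
    """Count connected clusters (isolated sites included) via union-find."""
    parent = list(range(n_sites))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in bonds:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return len({find(s) for s in range(n_sites)})

def potts_Z_fk(n_sites, edges, Q, x):
    """Eq. (3): sum over all bond subsets G of Q^{N_c} x^{N_b}."""
    return sum(Q ** n_clusters(n_sites, bonds) * x ** len(bonds)
               for r in range(len(edges) + 1)
               for bonds in combinations(edges, r))

def potts_Z_spin(n_sites, edges, Q, x):
    """Eq. (2): sum over spin configurations of prod_edges (1 + x*delta)."""
    total = 0.0
    for s in product(range(Q), repeat=n_sites):
        w = 1.0
        for i, j in edges:
            w *= 1 + x * (s[i] == s[j])
        total += w
    return total
```

On a single square of four sites and four edges this reproduces the equality of the two representations exactly.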
Since we are at criticality, the partition function can (in the limit of a large lattice) be expressed using conformal field theory. The critical Potts models are known to correspond to a certain series of minimal models. A change of boundary conditions is introduced by means of a boundary operator . If the coordinates of the corners of the rectangle are $`z_1,z_2,z_3,z_4`$ we thus have $$Z_{ab}=Z_f\varphi _{fa}(z_1)\varphi _{af}(z_2)\varphi _{fb}(z_3)\varphi _{bf}(z_4)$$ (5) where $`Z_f`$ is the partition function with free boundary conditions and the $`\varphi `$s are boundary operators. The next step is to identify $`\varphi _{af}`$. This is done by comparison with known results for the Ising and $`Q=3`$ state Potts models. In these cases, the operator that changes between fixed boundary conditions $`a`$ and $`b`$ is known to be $`\varphi _{(1,3)}`$. On the other hand, one can implement this change by bringing together two points $`z_1`$ and $`z_2`$ where the boundary conditions go from $`a`$ to $`f`$ and $`f`$ to $`b`$, respectively. Using the operator product expansion then gives a term that must be the operator in question. By the fusion rules, the only operator that can satisfy this is seen to be $`\varphi _{af}=\varphi _{(1,2)}`$ . This argument is doubly satisfying, since the conformal dimension of $`\varphi _{(1,2)}`$ is $`h=0`$ in the limit $`Q\to 1`$, which is a necessary requirement for the crossing probability to be conformally invariant. Further, the operator is level two, so that the differential equation satisfied by its four-point function is second order. It is conventional to consider the problem on the upper half plane, which may subsequently be mapped onto a rectangle via the Schwarz-Christoffel transformation, with the four corner points taken as images of $`-1/k,-1,1,1/k`$ (with $`0<k<1`$). Then the crossing is between the intervals $`-1/k<x<-1`$ and $`1<x<1/k`$ on the real axis.
The four-point correlation functions depend only on the cross-ratio $$\lambda =\frac{(x_4-x_3)(x_2-x_1)}{(x_3-x_1)(x_4-x_2)}=\left(\frac{1-k}{1+k}\right)^2$$ (6) One then finds, by means of standard conformal manipulations, that the correlation function satisfies a Riemann equation with the two solutions $`F(\lambda )=1`$ and $`F(\lambda )=\lambda ^{1/3}{}_{2}{}^{}F_{1}^{}(1/3,2/3;4/3;\lambda )`$. One can pick the correct linear combination by imposing the physical constraints that $`F\to 0`$ as $`\lambda \to 0`$ ($`r\to \infty `$) and $`F\to 1`$ as $`\lambda \to 1`$ ($`r\to 0`$). The result is $$\mathrm{\Pi }_h(\lambda )=\frac{2\pi \sqrt{3}}{\mathrm{\Gamma }(1/3)^3}\lambda ^{1/3}{}_{2}{}^{}F_{1}^{}(1/3,2/3;4/3;\lambda )$$ (7) Mapping Eq. (7) onto a rectangle, one finds that the aspect ratio becomes $`r=2K/K^{\prime }`$, where $`K^{\prime }`$ and $`K`$ are the complete elliptic integrals. This result has been extensively tested via Monte Carlo simulations ,, and there is little doubt as to its correctness. Note that one can transform it to any compact shape with four identified points, not just a rectangle; the crossing probability will be the same as the crossing on the half-plane with the corresponding half-plane cross-ratio. Thus, for instance, by consideration of the Schwarz-Christoffel formula it is easy to see that the crossing probability on a rhombus of any angle is the same as on a square. This point has been investigated numerically in Fig. 4.1 of . Of course the same invariance holds for the ”horizontal-vertical” crossing considered below, only the function $`F`$ changes. In general, when one makes a conformal transformation $`z\to w(z)`$ of a correlation function, factors of $`(w^{\prime }(z))^h`$ appear. In addition, transforming from the upper half plane to a shape with corners, the partition function gains a (non-scale invariant) factor $`L^{ac}`$, where $`L`$ is the length scale, $`c`$ is the central charge and $`a`$ depends on the geometry.
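Eq. (7) is straightforward to evaluate numerically. As a sanity check (not in the original paper), at $`\lambda =1/2`$, i.e. on the square, duality gives $`\mathrm{\Pi }_h=1/2`$ exactly, and more generally $`\mathrm{\Pi }_h(\lambda )+\mathrm{\Pi }_h(1-\lambda )=1`$:

```python
from mpmath import mp, mpf, gamma, hyp2f1, pi, sqrt

mp.dps = 30  # working precision

def cardy_pi_h(lam):
    """Horizontal crossing probability of Eq. (7), as a function of the
    cross-ratio lambda."""
    pref = 2 * pi * sqrt(3) / gamma(mpf(1) / 3) ** 3
    return pref * lam ** (mpf(1) / 3) * hyp2f1(mpf(1) / 3, mpf(2) / 3, mpf(4) / 3, lam)
```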
Similarly, a correlation function with boundary operators sitting at the corner points gains a factor $`L^{(\pi /\gamma )h}`$, where $`\gamma `$ is the interior angle at the corner . The last two (non-scale invariant) effects occur because the transformation is only piecewise analytic and has singular points at the corners. However, for critical percolation, $`c=0=h`$, and all of these factors become unity. Thus the crossing probabilities are invariant under transformations that are conformally invariant in the interior of a region and only piecewise conformally invariant on its edges. The ”horizontal-vertical” crossing probability $`\mathrm{\Pi }_{hv}`$ may be obtained similarly . We omit most details of the argument. A four-point boundary operator correlation function arises once again. The main differences with $`\mathrm{\Pi }_h`$ are the complexity of the boundary conditions and the absence of any direct identification of the boundary operator. Instead, one considers low-lying null vectors in the $`c=0=h`$ Verma module. Through level 5, there is only one which leads to solutions that satisfy the physical requirements. These are $`\mathrm{\Pi }_{hv}(r)=\mathrm{\Pi }_{hv}(1/r)`$ and $`\mathrm{\Pi }_{hv}(r)\stackrel{r\to \infty }{\to }\mathrm{\Pi }_h(r)`$ (8) which translate into $`F(\lambda )=F(1-\lambda )`$ and $`F(\lambda )\stackrel{\lambda \to 0}{\to }\mathrm{\Pi }_h(\lambda )`$ (9) respectively. Applying these conditions to the level 5 solutions, one finds $$\mathrm{\Pi }_{hv}(\lambda )=\frac{2\pi \sqrt{3}}{\mathrm{\Gamma }(1/3)^3}\lambda ^{1/3}{}_{2}{}^{}F_{1}^{}(1/3,2/3;4/3;\lambda )-\frac{\sqrt{3}}{2\pi }\lambda {}_{3}{}^{}F_{2}^{}(1,1,4/3;2,5/3;\lambda )$$ (10) where $`{}_{3}{}^{}F_{2}^{}`$ is a generalized hypergeometric function. The first term is just $`\mathrm{\Pi }_h`$ and the second subtracts configurations with horizontal but no vertical crossings.
The differential equation satisfied by $`F`$ may be written $`{\displaystyle \frac{d^3}{d\lambda ^3}}(\lambda (\lambda -1))^{4/3}{\displaystyle \frac{d}{d\lambda }}(\lambda (\lambda -1))^{2/3}{\displaystyle \frac{d}{d\lambda }}F=`$ $`\left[{\displaystyle \frac{d^2}{d\lambda ^2}}(\lambda (\lambda -1))+{\displaystyle \frac{1}{2\lambda -1}}{\displaystyle \frac{d}{d\lambda }}(2\lambda -1)^2\right]\left[{\displaystyle \frac{d}{d\lambda }}(\lambda (\lambda -1))^{1/3}{\displaystyle \frac{d}{d\lambda }}(\lambda (\lambda -1))^{2/3}{\displaystyle \frac{d}{d\lambda }}\right]F=0`$ The factorized form exhibited in the second line is of interest since 1, $`\mathrm{\Pi }_h`$, and $`\mathrm{\Pi }_{hv}`$ span the solutions of the equation formed by letting the rightmost factor act on $`F`$ alone, i.e. $$\left[\frac{d}{d\lambda }(\lambda (\lambda -1))^{1/3}\frac{d}{d\lambda }(\lambda (\lambda -1))^{2/3}\frac{d}{d\lambda }\right]F=0$$ (12) In what follows, it is convenient to consider the $`r`$-derivatives of $`\mathrm{\Pi }_h`$ and $`\mathrm{\Pi }_{hv}`$ (on the rectangle), which we will denote $`\mathrm{\Pi }_h^{\prime }(r)`$ and $`\mathrm{\Pi }_{hv}^{\prime }(r)`$. Note that $`\mathrm{\Pi }_h^{\prime }`$, for instance, is interpretable physically as the probability density that the maximum horizontal extent of a cluster attached to one vertical side of an infinitely wide rectangle of unit height is greater than $`r`$ . Additionally, since the $`r`$-derivative is proportional to the $`\lambda `$-derivative, Eq. (12) reduces to second order. We next proceed to express $`\mathrm{\Pi }_h^{\prime }`$ and $`\mathrm{\Pi }_{hv}^{\prime }`$ on the rectangle as functions of $`r`$, using the result for the cross-ratio $$\lambda =\left(\frac{\vartheta _2(\widehat{q})}{\vartheta _3(\widehat{q})}\right)^4$$ (13) where $`\vartheta _2`$ and $`\vartheta _3`$ are the elliptic theta-functions and $`\widehat{q}=e^{-\pi r}`$ (note that $`\widehat{q}`$ is the square root of the usual $`q`$). Eq.
(13) follows by applying Landen’s transformation to $`r=2K/K^{\prime }`$ and Eq. (6), resulting in $$r=\frac{K(\sqrt{1-\lambda })}{K(\sqrt{\lambda })}$$ (14) where $`K`$ is the complete elliptic integral written as a function of the modulus. Expressing the latter in terms of a ratio of theta-functions ( 8.197.1,2) one obtains Eq. (13). We also note, for future reference, some identities involving $`\lambda `$. One has $`\lambda =16\frac{\eta (\tau /2)^8\eta (2\tau )^{16}}{\eta (\tau )^{24}}`$, $`1-\lambda =\frac{\eta (\tau /2)^{16}\eta (2\tau )^8}{\eta (\tau )^{24}}`$, and $`\lambda ^{\prime }=16\frac{\eta (\tau /2)^{16}\eta (2\tau )^{16}}{\eta (\tau )^{28}}`$, where $`\eta `$ is the Dedekind $`\eta `$-function (see Eq. (18)) with $`q=e^{2\pi i\tau }`$, and the differentiation is with respect to the independent variable $`\tau =ir`$. Eq. (13) makes it possible to re-write the differential Eq. (12) for $`F`$ directly in terms of the aspect ratio $`r`$. One obtains $$\frac{d^2f}{dr^2}+a(r)\frac{df}{dr}+b(r)f=0$$ (15) where $`f(r)\equiv F^{\prime }(r)`$, the $`r`$-derivative of the four-point function, and $$a(r)=-\frac{3\lambda ^{\prime \prime }}{\lambda ^{\prime }}+\frac{5\lambda ^{\prime }}{3(\lambda -1)}+\frac{5\lambda ^{\prime }}{3\lambda }$$ $$b(r)=-\frac{5\lambda ^{\prime \prime }}{3\lambda }+3\left(\frac{\lambda ^{\prime \prime }}{\lambda ^{\prime }}\right)^2-\frac{\lambda ^{\prime \prime \prime }}{\lambda ^{\prime }}-\frac{5\lambda ^{\prime \prime }}{3(\lambda -1)}+\frac{4(\lambda ^{\prime })^2}{3\lambda (\lambda -1)}$$ (16) where differentiation is with respect to $`r`$. One may identify two independent solutions of Eq. (15) as $$f_1=\left[\eta (\widehat{q}^2)\right]^4$$ $$f_2=\frac{1}{2}\left[\vartheta _2(\widehat{q})\right]^4-f_W$$ (17) where $`\eta `$ is the Dedekind $`\eta `$-function $$\eta (q)=q^{1/24}\underset{n=1}{\overset{\infty }{\prod }}(1-q^n)$$ (18) and $`f_W`$ is an even, and apparently new, function of $`\widehat{q}`$.
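Eqs. (13), (14) and the first $`\eta `$-quotient identity above can be checked numerically. The snippet below (a verification aid, not part of the paper) picks an arbitrary aspect ratio and confirms that the theta-function, elliptic-integral, and eta-quotient expressions agree. Note that mpmath's `ellipk` takes the parameter $`m=k^2`$, so $`K(\sqrt{\lambda })`$ is `ellipk(lam)`:

```python
from mpmath import mp, mpf, exp, pi, jtheta, ellipk, qp

mp.dps = 30

def dedekind_eta(tau):
    """eta(tau) = q^{1/24} prod_{n>=1} (1 - q^n), with q = exp(2*pi*i*tau)."""
    q = exp(2j * pi * tau)
    return q ** (mpf(1) / 24) * qp(q)

r = mpf('1.3')                     # arbitrary aspect ratio
qhat = exp(-pi * r)                # \hat q = e^{-pi r}
lam = (jtheta(2, 0, qhat) / jtheta(3, 0, qhat)) ** 4   # Eq. (13)

tau = 1j * r
lam_eta = (16 * dedekind_eta(tau / 2) ** 8 * dedekind_eta(2 * tau) ** 16
           / dedekind_eta(tau) ** 24)                   # eta-quotient identity
```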
Its first few terms are given by $$f_W=\frac{16}{5}(\widehat{q}^2+\frac{16}{11}\widehat{q}^4+\frac{364}{187}\widehat{q}^6+\frac{13568}{4301}\widehat{q}^8+\frac{458070}{124729}\widehat{q}^{10}+\mathrm{\cdots })$$ (19) The solution $`f_1`$ is seen to be proportional to $`\mathrm{\Pi }_h^{\prime }`$ by the above, or directly . We note for completeness that $$\mathrm{\Pi }_h^{\prime }(r)=-\frac{2^{7/3}\pi ^2}{\sqrt{3}\mathrm{\Gamma }(1/3)^3}\left[\eta (\widehat{q}^2)\right]^4$$ (20) The connection with $`\mathrm{\Pi }_{hv}^{\prime }`$ involves including a term proportional to $`f_2`$. This is specified below. Note that, because of the way we have defined the aspect ratio $`r`$, our $`\mathrm{\Pi }_h`$ coincides with $`\mathrm{\Pi }_v`$ in .

## 3 Modular Forms

This section follows the excellent introduction given in . A modular function or form assigns a complex number $`G(\mathrm{\Lambda })`$ to each lattice $`\mathrm{\Lambda }`$. Here the term ’lattice’, following mathematical usage, refers to an infinite regular array of points, defined by the basis $`\{\omega _1,\omega _2\}:\mathrm{\Lambda }=𝐙\omega _1+𝐙\omega _2`$. In addition, for any complex number $`\lambda \ne 0`$, $`G`$ satisfies $`G(\mathrm{\Lambda })=\lambda ^kG(\lambda \mathrm{\Lambda })`$, where $`k`$ is some integer, called the weight (with $`k=0`$ for modular functions). Because of this, on dividing by $`\omega _2`$, we see that $`G`$ is completely specified by $`g(\tau )=G(𝐙\tau +𝐙)`$, where $`\tau =\omega _1/\omega _2`$. Since $`g`$ is also even in $`\tau `$, we may restrict $`\tau `$ to the upper half plane. In addition, there are analyticity and growth (as $`\tau \to i\infty `$) conditions on $`g`$. The modular properties arise on considering a change of basis. We can replace $`\{\omega _1,\omega _2\}`$ by $`\{\omega _1^{\prime },\omega _2^{\prime }\}`$ = $`\{a\omega _1+b\omega _2,c\omega _1+d\omega _2\}`$ with $`a,b,c,d\in 𝐙`$, $`ad-bc=\pm 1`$ without changing the lattice.
This implies that $`g`$ must satisfy the modular transformation property $$g\left(\frac{a\tau +b}{c\tau +d}\right)=(c\tau +d)^kg(\tau )$$ (21) Restricting $`\tau `$ to the upper half plane imposes $`ad-bc=+1`$. The group of matrices implementing such transformations is the (full) modular group $`\mathrm{\Gamma }_1`$. The matrix $`T=\left(\begin{array}{cc}1& 1\\ 0& 1\end{array}\right)`$ , which implements $`\tau \to \tau +1`$, together with $`S=\left(\begin{array}{cc}0& -1\\ 1& 0\end{array}\right)`$ , for $`\tau \to -1/\tau `$, generate $`\mathrm{\Gamma }_1`$. It follows from Eq. (21) that modular forms of a given weight are a vector space over $`𝐂`$. The dimension of this space is very small, in general, a fact which leads to some very non-trivial relations. In what follows, we examine the modular properties of $`f_1`$ and $`f_2`$, considered as functions of $`\tau =ir`$, with $`r`$ complex. (Here it is useful to envision the rectangle rotated by $`90^{\circ }`$, since $`\tau `$ is in the upper half plane for real $`r`$.) In order to make sense of this, we need to understand the physical meaning of a modular transformation in the context of percolation crossing probabilities. We do not include a full treatment of this question here, but restrict ourselves to the case of a conformally invariant physical quantity $`\mathrm{\Pi }`$, which is also a modular function (i.e. invariant under any $`\gamma \in \mathrm{\Gamma }_1`$). Suppose further that $`\mathrm{\Pi }`$ is initially defined on a rectangle of aspect ratio $`r`$. Now the modular transformation, as indicated, acts only on the basis vectors $`\{\omega _1,\omega _2\}`$. (For a rectangle, Re$`\{\omega _1\}=0=`$ Im $`\{\omega _2\}`$). To construct a conformal map of the rectangle, we consider these as displacements from a fixed origin at 0. Thus the map must satisfy $`\{0,\omega _1,\omega _2\}\to \{0,\omega _1^{\prime },\omega _2^{\prime }\}=\{0,a\omega _1+b\omega _2,c\omega _1+d\omega _2\}`$.
This can be implemented, for instance, by a projective transformation $$w=\frac{\alpha z}{ϵz+\delta },$$ $$\alpha =(\omega _2-\omega _1)\omega _1^{\prime }\omega _2^{\prime },$$ $$ϵ=\omega _1^{\prime }\omega _2-\omega _1\omega _2^{\prime },$$ $$\delta =\omega _1\omega _2(\omega _2^{\prime }-\omega _1^{\prime })$$ (22) which will take the rectangle into a figure with sides that are straight or arcs of circles. Note that the necessary conditions $`\omega _1\ne \omega _2`$ and $`\omega _1^{\prime }\ne \omega _2^{\prime }`$ imply $`\alpha ,\delta \ne 0`$. This choice of map also has the advantage of always being 1-to-1. In addition, one can show that it preserves the group structure. However, the map is explicitly dependent on the basis vectors. Note that the quantity $`\mathrm{\Pi }`$ will remain invariant. The parameter $`\tau `$, defined as a ratio of displacements, transforms in the usual way, so that $`\mathrm{\Pi }(\gamma (\tau ))=\mathrm{\Pi }(\tau )`$. Thus for each $`\gamma `$, modular invariance corresponds to conformal invariance under a particular map. Of course we can also begin with the physical quantity defined on a parallelogram, removing the conditions on $`\{\omega _1,\omega _2\}`$. Note, however, that the derivative of $`w`$ at either of the three corners differs from the derivative of the modular transformation. We now return to the crossing probabilities. We focus on the functions $`f_1`$ and $`f_2`$, solutions of Eq. (15), considering them as functions of $`\tau =ir`$, so that $`\widehat{q}=e^{\pi i\tau }`$. We have already specified the connection between $`f_1`$ and $`\mathrm{\Pi }_h^{\prime }`$. As mentioned, the solutions of Eq. (12) (and therefore those of Eq. (15)) span $`\mathrm{\Pi }_h^{\prime }`$ and $`\mathrm{\Pi }_{hv}^{\prime }`$. Thus it only remains to find the correct linear combination of $`f_1`$ and $`f_2`$ . Now $`\mathrm{\Pi }_{hv}^{\prime }`$ must, by the first of Eqs. (9), be invariant under $`W:f(\tau )\to \tau ^{-2}f(-1/\tau )`$.
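The projective map of Eq. (22) can be verified directly: it fixes the origin and carries the two basis displacements to the transformed ones. A quick numerical check (an illustration only), using the unimodular basis change $`(a,b,c,d)=(1,1,0,1)`$ on the unit square:

```python
def projective_map(w1, w2, w1p, w2p):
    """Eq. (22): w(z) = alpha*z/(eps*z + delta), constructed so that
    {0, w1, w2} -> {0, w1p, w2p}."""
    alpha = (w2 - w1) * w1p * w2p
    eps = w1p * w2 - w1 * w2p
    delta = w1 * w2 * (w2p - w1p)
    return lambda z: alpha * z / (eps * z + delta)
```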
The linear combination $`f_2-\frac{C}{2}f_1`$ with $`C=\frac{2^{1/3}\pi ^2}{3\mathrm{\Gamma }(1/3)^3}`$ satisfies this condition. Combining this with Eq. (20) we find that $$\mathrm{\Pi }_{hv}^{\prime }(r)=-\frac{2^{7/3}\pi ^2}{\sqrt{3}\mathrm{\Gamma }(1/3)^3}\left[\eta (\widehat{q}^2)\right]^4+\frac{24}{\sqrt{3}}f_2(\widehat{q})$$ (23) Now this function behaves like a weight 2 modular form under the operations $`\tau \to -1/\tau `$ and $`\tau \to \tau +6`$, but these transformations generate a subgroup of infinite index of $`SL_2(𝐙)`$, and $`\mathrm{\Pi }_{hv}^{\prime }`$ is not a modular form. However, the vector $`F=\left(\begin{array}{c}f_1\\ f_2\end{array}\right)`$ transforms under $`T^2`$ and $`S`$, which generate the theta-group $`\mathrm{\Gamma }_\theta `$, according to $$F(\tau +2)=\left(\begin{array}{cc}\omega & 0\\ 0& 1\end{array}\right)F(\tau )$$ $$\tau ^{-2}F(-1/\tau )=\left(\begin{array}{cc}1& 0\\ C& 1\end{array}\right)F(\tau )$$ (24) where $`C`$ is as above and $`\omega =e^{2\pi i/3}`$. Thus $`f_1`$ is a weight 2 modular form (with multiplier system, cf. ) on $`\mathrm{\Gamma }_\theta `$, but $`f_2`$ is a kind of ”second order” modular form, i.e. instead of $`\gamma |f_2-f_2=0`$ for any $`\gamma \in \mathrm{\Gamma }_\theta `$, (where $`|`$ denotes the modular weight 2 operation of $`2\times 2`$ matrices) we have that $`\gamma |f_2-f_2`$ is a multiple of $`f_1`$. Another way to look at this situation is to consider the differential equation (15). For convenience we take the independent variable to be $`\tau =ir`$. Then, writing $`f_2=uf_1`$, we find a second-order equation $`f_1u^{\prime \prime }+(2f_1^{\prime }+af_1)u^{\prime }=0`$ for $`u^{\prime }`$. Now $`a=h^{\prime }/h`$, where $`h=(\lambda ^{\prime })^{-3}\lambda ^{5/3}(1-\lambda )^{5/3}`$, so that $`u^{\prime \prime }/u^{\prime }=-a-2f_1^{\prime }/f_1`$. Hence $`u^{\prime }=2/(3hf_1^2)`$. Using Eq.
(17) for $`f_1`$ and the identities for $`\lambda `$ above, we have finally $$u^{\prime }(\tau )=\frac{2}{3}\frac{\eta (\tau /2)^8\eta (2\tau )^8}{\eta (\tau )^{12}}$$ (25) It follows from the modular properties of $`\eta `$ that $`u^{\prime }`$ is a modular form of weight 2 on $`\mathrm{\Gamma }_\theta `$, so that $`f_2`$ is the product of a modular form and the integral of a modular form. Our examination of the modular properties of $`f_1`$ and $`f_2`$ reveals a close connection between the two functions. Now modular forms, as mentioned, form vector spaces of very small dimension. Thus it is likely that one can derive $`f_2`$ given $`f_1`$ and some physical conditions. This would be very interesting; whether it is indeed possible remains to be seen, however. We have succeeded in showing that $`f_1`$ follows (up to an overall multiplicative constant) if one assumes that only one conformal block contributes to it. One can also show that the function $`u`$ corresponds to the Weierstrass $`\zeta `$-function on the elliptic curve $`Y^2=X^3-1728`$. ## Acknowledgements We acknowledge useful conversations with D. Bradley, J. Cardy, A. Özlük, I. Peschel, C. Snyder, D. Zagier and R. Ziff.
no-problem/9911/astro-ph9911267.html
ar5iv
text
# Evolution of primordial 𝐻₂ for different cosmological models (talk given at the International Conference "𝐻₂ in Space", IAP, Paris, France, September 28th–October 1st, 1999) ## 1 Introduction At early times the Universe was filled with an extremely dense and hot gas. Due to the expansion it cooled below the binding energies of hydrogen, deuterium, helium and lithium, and thus one can expect the formation of these nuclei. As soon as neutrons and protons leave equilibrium, the formation of deuterium, followed by fast reactions, leads finally to the formation of tritium and helium. Thus deuterium is the first building block of nucleosynthesis, but also the obligatory gateway to heavier elements such as lithium, beryllium and boron. The basic conclusion of big bang nucleosynthesis on the baryon density $`\mathrm{\Omega }_b`$ is $$0.01<\mathrm{\Omega }_bh^2<0.025$$ (1.1) See Sarkar (1996), Olive (1999) and references in Signore & Puy (1999). ## 2 Post-recombination chemistry The study of chemistry in the post-recombination epoch has grown considerably in recent years. From the pioneering works of Saslaw & Zipoy (1967), Shchekinov & Entél (1983), Lepp & Shull (1987), Dalgarno & Lepp (1987) and Black (1988), many authors have developed studies of primordial chemistry in different contexts: Latter & Black (1991), Puy et al. (1993) and Stancil et al. (1996) for the chemical network and the thermal balance; Palla, Galli & Silk (1995), Puy & Signore (1996, 1997, 1998a, 1998b), Abel et al. (1997) and Galli & Palla (1998) for the study of the initial conditions of the formation of the first objects. ### 2.1 History From the recombination phase, the electron density decreases, which leads to the decoupling between the temperature of matter and the temperature of radiation. Chemistry of the early Universe (i.e. $`z<2000`$) is the gaseous chemistry of the hydrogen, helium, lithium and electron species.
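For orientation, the bound (1.1) can be turned into a range for the baryon density itself once a value of the dimensionless Hubble parameter is chosen. Taking h = 0.7 (an illustrative value, not specified in the text):

```python
h = 0.7  # illustrative Hubble parameter; the text leaves h unspecified

# 0.01 < Omega_b * h^2 < 0.025  =>  divide through by h^2
omega_b_low = 0.01 / h**2
omega_b_high = 0.025 / h**2
# for h = 0.7 this gives roughly 0.020 < Omega_b < 0.051
```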
The efficiencies of the molecular formation processes are controlled by collisions, the matter temperature and the temperature of the cosmic microwave background radiation (CMBR). In the cosmological context we have a metal-free gas, and thus the formation of $`H_2`$ is not similar to its formation in the interstellar medium by adsorption on the surface of interstellar grains. The chemical composition of the primordial gas consists of a mixture of: $`H`$, $`H^+`$, $`H^{-}`$, $`D`$, $`D^+`$, $`He`$, $`He^+`$, $`He^{2+}`$, $`Li`$, $`Li^+`$, $`Li^{-}`$, $`H_2`$, $`H_2^+`$, $`HD`$, $`HD^+`$, $`HeH^+`$, $`LiH`$, $`LiH^+`$, $`H_3^+`$, $`H_2D^+`$, $`e^{-}`$ and $`\gamma `$, which leads to 90 reactions in the chemical network. It is more convenient to keep only those reactions that are essential to accurately model the chemistry, and thereby to reduce computing time. We adopt the concept of the minimal model developed by Abel et al. (1997), and focus on the formation of molecular hydrogen. This way we can reduce the chemical network to 20 reactions. From Saslaw & Zipoy (1967) and Shchekinov & Entél (1983) we know that the formation of primordial molecular hydrogen is due to the two main reactions: $$H^-+H\to H_2+e^-$$ (2.2) $$H_2^++H\to H_2+H^+.$$ (2.3) These reactions are coupled with the photo-reactions, and the associative, recombination and charge-exchange reactions (Puy et al. 1993, Galli & Palla 1998). ### 2.2 Equations of evolution We consider here the chemical and thermal evolution in the framework of the Friedmann cosmological models. The relation between the time $`t`$ and the redshift $`z`$ is given by: $$\frac{dz}{dt}=-H_o(1+z)^2\sqrt{1+\mathrm{\Omega }_oz}$$ (2.4) where $`H_o`$ is the Hubble constant and $`\mathrm{\Omega }_o`$ the density parameter of the Universe (open Universe $`\mathrm{\Omega }_o<1`$, flat Universe $`\mathrm{\Omega }_o=1`$ and closed Universe $`\mathrm{\Omega }_o>1`$).
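The redshift-time relation can be integrated numerically to give the cosmic time elapsed between two redshifts. A minimal sketch, assuming the standard matter-dominated relation $`|dz/dt|=H_o(1+z)^2\sqrt{1+\mathrm{\Omega }_oz}`$ (the value of $`H_o`$ and the redshift limits below are illustrative choices, not from the text):

```python
import math

def elapsed_time(z_hi, z_lo, omega0, H0=2.3e-18, steps=100_000):
    # t = integral over z of dz / (H0*(1+z)^2*sqrt(1+omega0*z)),
    # midpoint rule; H0 in s^-1 (2.3e-18 s^-1 is about 70 km/s/Mpc)
    dz = (z_hi - z_lo) / steps
    total = 0.0
    for i in range(steps):
        z = z_lo + (i + 0.5) * dz
        total += dz / (H0 * (1 + z)**2 * math.sqrt(1 + omega0 * z))
    return total  # seconds
```

Since the integrand decreases pointwise as $`\mathrm{\Omega }_o`$ grows, a denser Universe reaches a given redshift in less elapsed time, which is why the chemistry runs on different clocks in the open, flat and closed cases.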
The expansion is characterized by the adiabatic cooling: $$\mathrm{\Lambda }_{ad}=-3nkT_mH_o(1+z)\sqrt{1+\mathrm{\Omega }_oz},$$ (2.5) with the matter density $`n`$ and the matter temperature $`T_m`$. The evolution of the matter density is given by: $$\frac{dn}{dt}=-3nH_o(1+z)\sqrt{1+\mathrm{\Omega }_oz},$$ (2.6) and for the temperature of matter: $$\frac{dT_m}{dt}=\frac{2}{3nk}\left[\mathrm{\Lambda }_{ad}+\mathrm{\Gamma }_{compt}+\mathrm{\Psi }_{molec}\right],$$ (2.7) where $`\mathrm{\Gamma }_{compt}`$ is the heating due to Compton scattering of CMBR photons on electrons. Below 4000 K only the rotational levels of $`H_2`$ can be excited (quadrupole transitions). In the cosmological context Puy et al. (1993) have shown that the molecules heat the medium (due to the interactions between primordial molecules and the CMBR photons). The molecular thermal function $`\mathrm{\Psi }_{molec}`$ (heating minus cooling) is positive in this context. All these equations are coupled with the set of chemical equations in order to calculate the evolution of the abundance of $`H_2`$. ### 2.3 Evolution of molecular hydrogen We consider three values of $`\mathrm{\Omega }_o`$, which characterize three particular Universes (open with $`\mathrm{\Omega }_o=0.1`$, flat with $`\mathrm{\Omega }_o=1`$ and closed with $`\mathrm{\Omega }_o=2`$). Moreover we consider two values for the baryonic fraction: the lower value obtained from primordial nucleosynthesis, $`\mathrm{\Omega }_b=0.02`$, and another which characterizes a fully baryonic Universe, $`\mathrm{\Omega }_b=1`$. In Fig. 1 we have plotted the different curves for the evolution of $`H_2`$. We see the classical two steps of $`H_2`$ formation (the first step corresponds to the $`H_2^+`$ channel and the second one to the $`H^-`$ channel). After this transient growth the $`H_2`$ abundance becomes constant. ## 3 Cosmological thermal decoupling In Fig.
2, we have plotted the ratio between $`T_{molec}`$, the temperature of matter when primordial molecules are included, and $`T_{compt}`$, the temperature of matter without molecules (in this case only the Compton heating is considered). The ratio is close to unity for the lower value of the baryonic fraction and close to 2.5 for the higher value. For $`\mathrm{\Omega }_b>0.02`$ we can expect that $`T_{molec}\gtrsim 1.25\times T_{compt}`$. ## 4 Outlook The change of temperature due to $`H_2`$ at the thermal decoupling could play a role during the transition between the linear regime and the non-linear regime (the turn-around point of gravitational collapse) (Puy & Signore 1996, Signore & Puy 1999). The temperature at the turn-around point is given by $$T_{turn}\simeq \left(\frac{3\pi }{4}\right)^{4/3}T_{compt}$$ where $`T_{compt}`$ is the temperature of matter without the influence of molecules (Padmanabhan 1993). Taking into account the influence of the molecules, the temperature is 25 per cent higher than $`T_{compt}`$. Thus molecules could provide initial conditions for the dynamics of the gravitational collapse other than those predicted by the classical theory. ###### Acknowledgements. I would like to thank Tom Abel, Lukas Grenacher, Philippe Jetzer and Monique Signore for valuable discussions. I thank Françoise Combes for organizing such a pleasant conference. This work has been supported by the Dr Tomalla Foundation and by the Swiss National Science Foundation.
no-problem/9911/hep-ex9911019.html
ar5iv
text
# 1 Physics at a High Energy Linear Collider ## 1 Physics at a High Energy Linear Collider A high-energy linear e<sup>+</sup>e<sup>-</sup> collider designed to operate in the c.m. energy range around and above 500 GeV is an obvious next step for particle physics investigations of the origin of mass and the mechanism of electroweak symmetry-breaking, and for searches for new dynamics such as Supersymmetry (SUSY). The collider represents a natural facility for discovery and precision measurement of new particles, and would complement the physics potential of the LHC. For example, for $`M_H\approx 100`$ GeV, favoured by current data, tens of thousands of clean $`H^0`$ bosons per year would be delivered at design luminosity. The anticipated event sample comprises: • Multijet states containing heavy flavours, e.g. e<sup>+</sup>e<sup>-</sup> $`\to `$ $`Z^0`$ $`H^0`$ $`\to `$ $`f\overline{f}`$ $`b\overline{b}`$/$`c\overline{c}`$/$`\tau ^+\tau ^{-}`$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\mathrm{t}\overline{\mathrm{t}}`$ $`\to `$ $`bW^+`$ $`\overline{b}W^{-}`$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\mathrm{t}\overline{\mathrm{t}}`$ $`H^0`$ $`\to `$ $`bW^+`$ $`\overline{b}W^{-}`$ $`b\overline{b}`$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`H^0`$ $`A^0`$ $`\to `$ $`\mathrm{t}\overline{\mathrm{t}}`$ $`\mathrm{t}\overline{\mathrm{t}}`$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\stackrel{~}{t}\stackrel{~}{\overline{t}}`$ $`\to `$ $`\stackrel{~}{\chi ^0}c\stackrel{~}{\chi ^0}\overline{c}`$ Events such as these require a high-resolution tracking system with excellent secondary decay vertex resolution. Jet energies will typically be in the range 50–200 GeV, and the track momentum distribution will peak around 2 GeV/$`c`$, so multiple scattering will be important and necessitates low-mass tracking detectors. • Events with missing energy, e.g.
e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\stackrel{~}{l^+}\stackrel{~}{l^{-}}`$ or $`\stackrel{~}{q}\stackrel{~}{\overline{q}}`$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\stackrel{~}{\chi ^+}\stackrel{~}{\chi ^{-}}`$ or $`\stackrel{~}{\chi ^0}\stackrel{~}{\chi ^0}`$ • and perhaps exotic processes, e.g. e<sup>+</sup>e<sup>-</sup> $`\to `$ $`\stackrel{~}{\chi ^0}\stackrel{~}{\chi ^0}`$ $`\to `$ $`\stackrel{~}{G}\gamma \stackrel{~}{G}\gamma `$ e<sup>+</sup>e<sup>-</sup> $`\to `$ $`G\gamma `$ due to gauge-mediated SUSY breaking and extra compact dimensions, respectively. Such signatures demand a hermetic calorimeter with good energy resolution and high granularity for energy-flow measurement. This will allow precise jet-jet invariant mass determination and reconstruction of new heavy states above a large combinatorial background. Exotic signatures comprising photons and large missing energy also motivate consideration of a continuous event readout mode with a software trigger. ## 2 Accelerator and Detector Environment The TESLA collider, utilising superconducting RF cavities for the main linac, is being designed by an international consortium based around DESY. The collider operates in a ‘one-shot’ mode at a frequency of 5 Hz. In each cycle a train of 2820 e<sup>-</sup> bunches of transverse size $`5\times 550`$ nm<sup>2</sup> meets a similar e<sup>+</sup> bunch-train, with a bunch separation of 337 ns. The resulting backgrounds for the detector require careful planning. For example, at $`\sqrt{s}`$ = 500 GeV one expects, per bunch crossing: • ≈120k e<sup>+</sup>e<sup>-</sup> pairs ⇒ a large detector B-field; • ≈1000 $`\gamma `$ in the tracking volume ⇒ a highly granular tracking system, and a possible bunch tagger with time resolution ≲ 100 ns; • several TeV of EM energy in the forward regions, $`\theta `$ $`<`$ 100 mrad ⇒ shielding and masking; as well as ≈10<sup>9</sup> neutrons/cm<sup>2</sup>/year, requiring shielding of the inner detector.
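The quoted train structure fixes the collision timing: 2820 bunches separated by 337 ns give a train of about 0.95 ms, repeated at 5 Hz, i.e. 14100 bunch crossings per second and a beam duty factor of roughly 0.5%. A quick check (all numbers taken from the text):

```python
bunches = 2820       # bunches per train
spacing_ns = 337     # bunch separation in ns
rep_rate_hz = 5      # trains per second

train_length_s = bunches * spacing_ns * 1e-9   # ~0.95 ms
crossings_per_s = bunches * rep_rate_hz        # 14100
duty_factor = train_length_s * rep_rate_hz     # fraction of time with beam
```

The long 337 ns gap between bunches is what makes a bunch tagger with ~100 ns time resolution useful, as noted above.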
## 3 Overview of Detector Design A schematic of the current design is shown in Fig. 1. The general concept is a large detector with a gaseous main tracking chamber and a hermetic, highly granular calorimeter. A first iteration was presented in . The design is evolving, and R&D is underway in all areas, but the options and technology choices are being focussed and refined. A brief summary of the current thinking is given below. ### 3.1 Vertex Detector (VXD) The requirements are high granularity (for low occupancy), low mass (for low multiple scattering), good spatial resolution (for precise vertex-finding), and neutron radiation tolerance (see above). A multi(4 or 5)-layer ‘self-tracking’ device would be optimal. CCDs and LHC-style active pixel sensors (APS) are being considered; all are radiation hard at the expected level. Large-area CCD arrays have been ‘combat tested’ at SLD/SLC and offer $`<`$ 4 $`\mu `$m space-point resolution in 20$`\times `$20 $`\mu `$m<sup>2</sup> pixels, with the possibility of devices as thin as 0.12% $`X_0`$/layer. The APS pixels will be larger ($`50\times 50\mu `$m<sup>2</sup>) and thicker (0.8% $`X_0`$/layer), but are more radiation tolerant. CMOS pixel devices have also been suggested and may allow CCD-like resolution with higher radiation tolerance. A 3 or 4 T solenoidal magnetic field would confine most of the background e<sup>+</sup>e<sup>-</sup> flux within the beampipe, allowing the vertex detector to be placed close to the beamline, with the first layer perhaps as close as 1 cm. ### 3.2 Tracking System A large-volume time projection chamber (TPC) offers high effective spatial granularity and yields low occupancy in the expected background $`\gamma `$ flux. With a compensating coil to achieve $`\mathrm{\Delta }B/B`$ ≲ 0.2%, a momentum resolution $`\mathrm{\Delta }p_t/p_t`$ ≈ $`4.5\times 10^{-5}`$ (4 T) could be achieved.
A wire-chamber TPC readout offers a useful degree of particle identification, with $`\pi /K`$ separation up to or beyond $`p`$ = 20 GeV/$`c`$. Other possible readout technologies, such as gas electron multipliers and micromesh gaseous structures, are under active development. An ‘intermediate’ tracker would provide linking hits between the VXD and TPC; two planes of $`50\times 500\mu `$m<sup>2</sup> pixels appear to be sufficient. Several planes of silicon pixels ($`50\times 200\mu `$m<sup>2</sup>) and crossed strips ($`50\times 25\mu `$m<sup>2</sup>) are also planned to improve tracking performance in the forward regions $`7^{\circ }<\theta <30^{\circ }`$ from the beamline. ### 3.3 Calorimetry Current thinking is to put the electromagnetic and main hadron calorimeters inside the solenoid. These would be roughly 25 $`X_0`$ and 5–6 $`\lambda _0`$ thick, respectively. Several options are being considered: a ‘tile’ calorimeter based on a sandwich of absorber and scintillator with wavelength-shifting fibres taking the signal out; the same materials, but in a ‘Shashlik’ (nearly longitudinal fibres) configuration; and a ‘high-granularity’ calorimeter based on highly-segmented W/Si or Pb/scintillator or Pb/Ar for the electromagnetic part and W, Fe or Pb with gas chambers for the hadronic part. The first two options would have ≈$`3\times 3`$ cm<sup>2</sup> transverse segmentation and coarse longitudinal segmentation, with O($`10^5`$) channels; the third option might have $`1\times 1`$ cm<sup>2</sup> transverse segmentation with readout of every layer, yielding as many as $`10^7`$ channels. All three options are roughly comparable in terms of their energy resolution: 10%/$`\sqrt{E}`$ (or better) (EM) and 40%/$`\sqrt{E}`$ (had). The high-granularity option offers additionally superb photon and neutral-hadron identification capability.
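The quoted stochastic terms translate directly into absolute resolutions, e.g. a 100 GeV electromagnetic shower would be measured to about 1 GeV. A sketch keeping only the stochastic term (real calorimeters add noise and constant terms on top of this):

```python
import math

def calo_sigma(E_gev, stochastic):
    # sigma_E / E = stochastic / sqrt(E)  =>  sigma_E = stochastic * sqrt(E)
    # E in GeV; stochastic = 0.10 for the EM option, 0.40 for the hadronic one
    return stochastic * math.sqrt(E_gev)
```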
### 3.4 Muon System Space limitations prevent an adequate description, but an instrumented return yoke is being considered for the muon identification and tracking system, as well as to provide a tail-catcher hadron calorimeter. Iron instrumented with gas chambers, such as resistive plate chambers or streamer tubes, provides a well-tested and robust technology base for strip and pad readout systems. ## 4 Future Milestones A detailed technical design report for both the TESLA collider and detector is in preparation, and will be presented in spring 2001, with subsequent evaluation by the German Science Council. A decision on this multinational project might be made as early as 2002, with construction starting in 2003. This would allow turn-on in 2009, just a few years after the startup of the LHC, providing a powerful partnership for exploration of new physics.
no-problem/9911/hep-lat9911006.html
ar5iv
text
# First Evidence for Center Dominance in SU(3) Lattice Gauge Theory ## 1 Introduction The idea that center vortices play a decisive role in the mechanism of color confinement in quantum chromodynamics was proposed more than 20 years ago by ’t Hooft and other authors. Recently, the center-vortex picture of confinement has found remarkable confirmation in numerical simulations of the SU(2) lattice gauge theory. Our group has proposed a technique for locating center vortices in thermalized lattice configurations based on fixing to the so-called maximal center gauge, followed by center projection. In SU(2) lattice gauge theory, the maximal center gauge is a gauge in which the quantity $$R=\sum _x\sum _\mu \left|\text{Tr}[U_\mu (x)]\right|^2$$ (1) reaches a maximum. This gauge condition forces each link variable to be as close as possible, on average, to a $`Z_2`$ center element, while preserving a residual $`Z_2`$ gauge invariance. Center projection is a mapping of each SU(2) link variable to the closest $`Z_2`$ center element: $$U_\mu (x)\to Z_\mu (x)\equiv \text{sign}\,\text{Tr}[U_\mu (x)].$$ (2) The excitations on the projected $`Z_2`$ lattice are point-like, line-like, or surface-like objects, in $`D=2,3`$, or $`4`$ dimensions respectively, called “P-vortices.” These are thin objects, one lattice spacing across. There is substantial numerical evidence that thin P-vortices locate the middle of thick center vortices on the unprojected lattice. The string tension computed on center-projected configurations reproduces the entire asymptotic SU(2) string tension. It has also been demonstrated recently that removal of center vortices not only removes the asymptotic string tension, but restores chiral symmetry as well, and the SU(2) lattice is then brought to trivial topology. The vortex density has been seen to scale as predicted by asymptotic freedom.
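Equation (2) is simple enough to state in code: for SU(2) the trace of a group element is real, so the projection is just its sign. A sketch acting on the trace alone (not a full lattice implementation):

```python
def center_project_su2(trU):
    # Eq. (2): Z = sign(Tr U); Tr U is real for SU(2), and the
    # Z2 center elements are +1 and -1
    return 1 if trU >= 0 else -1
```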
The properties of vortices have also been studied at finite temperature, and it has been argued that the non-vanishing string tension of spatial Wilson loops in the deconfined phase can be understood in terms of vortices winding through the periodic time direction. We have also proposed a simple model which explains the Casimir scaling of higher-representation string tensions at intermediate distance scales in terms of the finite thickness of center vortices. Putting the above and other pieces of evidence together, it seems clear that our procedure of maximal-center-gauge fixing and center projection identifies physical objects that play a crucial role in the mechanism of color confinement. However, the gauge group of QCD is color SU(3), not SU(2), and it is of utmost importance to demonstrate that the observed phenomena are not specific to the SU(2) gauge group only. Some preliminary results for SU(3) were presented in Section 5 of Ref. . They came from simulations on very small lattices and at strong couplings. It was shown that center-projected Wilson loops reproduce results of the strong-coupling expansion of the full theory up to $`\beta \approx 4`$. The purpose of the present letter is to present further evidence on center dominance in SU(3) lattice gauge theory, very similar to the results that arose from SU(2) simulations. Though not as convincing as the SU(2) data, the first SU(3) results support the view that the vortex mechanism works in SU(3) in the same way as in SU(2). ## 2 Maximal Center Gauge in SU(3) The maximal center gauge in SU(3) gauge theory is defined as the gauge which brings link variables $`U`$ as close as possible to elements of its center $`Z_3`$. This can be achieved as in SU(2) by maximizing a “mesonic” quantity $$R=\sum _x\sum _\mu \left|\text{Tr}U_\mu (x)\right|^2,$$ (3) or, alternatively, a “baryonic” one $$R^{}=\sum _x\sum _\mu \text{Re}\left(\left[\text{Tr}U_\mu (x)\right]^3\right).$$ (4) The latter was the choice of Ref.
, where we used the method of simulated annealing for the iterative maximization procedure. The convergence to the maximum was rather slow and forced us to restrict simulations to small lattices and strong couplings. The results that will be presented below were obtained in a gauge defined by the “mesonic” condition (3). The maximization procedure for this case is inspired by the Cabibbo–Marinari–Okawa SU(3) heat bath method (a similar approach was applied for SU(3) cooling by Hoek et al.). The idea of the method is as follows: In the maximization procedure we update link variables to locally maximize the quantity (3) with respect to a chosen link. At each site we thus need to find a gauge-transformation matrix $`\mathrm{\Omega }(x)`$ which maximizes a local quantity $$R(x)=\sum _\mu \left\{\left|\text{Tr}\left[\mathrm{\Omega }^{\dagger }(x)U_\mu (x)\right]\right|^2+\left|\text{Tr}\left[U_\mu (x-\widehat{\mu })\mathrm{\Omega }^{\dagger }(x)\right]\right|^2\right\}.$$ (5) Instead of trying to find the optimal matrix $`\mathrm{\Omega }(x)`$, we take an SU(2) matrix $`g(x)`$ and embed it into one of the three diagonal SU(2) subgroups of SU(3). The expression (5) is then maximized with respect to $`g`$, with the constraint of $`g`$ being an SU(2) matrix. This reduces to an algebraic problem (plus a solution of a non-linear equation). Once we obtain the matrix $`g(x)`$, we update link variables touching the site $`x`$, and repeat the procedure for all three subgroups of SU(3) and for all lattice sites. This constitutes one center-gauge-fixing sweep. We made up to 1200 sweeps for each configuration. Center projection is then done by replacing the link matrix by the closest element of $`Z_3`$. The above iterative procedure was independently developed by Montero, and described in full detail in his recent publication . Montero, building on the work of Ref. , has constructed classical SU(3) center vortex solutions on a periodic lattice.
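The final projection step, replacing each link by the closest element of $`Z_3`$, amounts to choosing the center phase that maximizes Re(z* Tr U), i.e. the center element closest in phase to the trace of the link. A sketch operating on the trace alone (the helper name is ours, not the authors' code):

```python
import cmath

# the three Z3 center elements exp(2*pi*i*k/3), k = 0, 1, 2
Z3 = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def center_project_su3(trU):
    # pick the center element z maximizing Re(conj(z) * Tr U),
    # i.e. the z whose phase is nearest to that of Tr U
    return max(Z3, key=lambda z: (z.conjugate() * trU).real)
```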
He has found that P-vortex plaquettes accurately locate the middle of the classical vortex, which is evidence of the ability of maximal center gauge to properly find vortex locations. ## 3 Center Dominance in SU(3) Lattice Gauge Theory The effect of creating a center vortex linked to a given Wilson loop in SU(3) lattice gauge theory is to multiply the Wilson loop by an element of the gauge group center, i.e. $$W(C)\to e^{\pm 2\pi i/3}W(C).$$ (6) Quantum fluctuations in the number of vortices linked to a Wilson loop can be shown to lead to its area-law falloff; the simplest, but most urgent, question is whether center disorder is sufficient to produce the whole asymptotic string tension of full, unprojected lattice configurations. We have computed Wilson loops and Creutz ratios at various values of the coupling $`\beta `$ on a $`12^4`$ lattice, from full lattice configurations, from center-projected link configurations in maximal center gauge, and also from configurations with all vortices removed. Figure 1 shows a typical plot at $`\beta =5.6`$. It is obvious that center elements themselves produce a value of the string tension which is close to the asymptotic value of the full theory. On the other hand, if center elements are factored out from link matrices and Wilson loops are computed from SU(3)/$`Z_3`$ elements only, the Creutz ratios tend to zero for sufficiently large loops. The errorbars are, however, rather large, and one cannot draw an unambiguous conclusion from the data. Recently we have argued that center dominance by itself does not prove the role of center degrees of freedom in QCD dynamics; some sort of center dominance exists also without any gauge fixing and can hardly be attributed to center vortices. Distinctive features of center-projected configurations in maximal center gauge in SU(2), besides center dominance, were that: 1. Creutz ratios were approximately constant starting from small distances (this we called “precocious linearity”), 2.
the vortex density scaled with $`\beta `$ exactly as expected for a physical quantity with dimensions of inverse area. Precocious linearity, the absence of the Coulomb part of the potential on the center-projected lattice at short distances, can be quite clearly seen from Fig. 1. One observes some decrease of the Creutz ratios at intermediate distances. A similar effect is present also at other values of $`\beta `$. It is not clear to us whether this decrease is of any physical relevance, or whether it should be attributed to imperfect fixing to the maximal center gauge. The issue of scaling is addressed in Figure 2. Here the values of various Creutz ratios are shown as a function of $`\beta `$ and compared to those quoted in Ref. . All values for a given $`\beta `$ lie close to each other (precocious linearity once again) and are in reasonable agreement with asymptotic values obtained in time-consuming simulations of SU(3) pure gauge theory. The plot in Fig. 2 is at the same time a hint that the P-vortex density also scales properly. The density is approximately proportional to the value of $`\chi (1)`$ in center-projected configurations, and $`\chi (1)`$ follows the same scaling curve as Creutz ratios obtained from larger Wilson loops. A closer look at Fig. 2 reveals that there is no perfect scaling, similar to the SU(2) case, in our SU(3) data. Broken lines connecting the data points tend to bend at higher values of $`\beta `$. In our opinion, this is a finite-volume effect and should disappear for larger lattices. In our simulations we use the QCDF90 package (supplemented by subroutines for MCG fixing and center projection), which becomes rather inefficient on a larger lattice. CPU and memory limitations do not allow us at present to extend simulations to larger lattice volumes. An important test of the vortex-condensation picture is the measurement of vortex-limited Wilson loops.
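The Creutz ratios referred to throughout are the standard combination $`\chi (R)=-\mathrm{ln}\left[W(R,R)W(R-1,R-1)/(W(R,R-1)W(R-1,R))\right]`$, which cancels perimeter and corner contributions of the Wilson loops and isolates the string tension. A sketch with a toy area-plus-perimeter law, for which $`\chi (R)=\sigma `$ exactly (the toy parameters are illustrative):

```python
import math

def creutz_ratio(W, R):
    # chi(R) = -ln( W(R,R)*W(R-1,R-1) / (W(R,R-1)*W(R-1,R)) )
    return -math.log(W(R, R) * W(R - 1, R - 1) / (W(R, R - 1) * W(R - 1, R)))

# toy Wilson loops: area law plus a perimeter term (the latter cancels in chi)
sigma, mu = 0.1, 0.25
W_toy = lambda I, J: math.exp(-sigma * I * J - mu * 2 * (I + J))
```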
Let us denote by $`W_n(C)`$ the expectation value of the Wilson loop evaluated on a sub-ensemble of unprojected lattice configurations, selected such that precisely $`n`$ P-vortices, in the corresponding center-projected configurations, pierce the minimal area of the loop. For large loop areas one expects $$\frac{W_n(C)}{W_0(C)}\to e^{2n\pi i/3}.$$ (7) We tried to measure quantities like $`W_1(C)/W_0(C)`$ and $`W_2(C)/W_0(C)`$ in Monte Carlo simulations. The trend of our data is in accordance with the expectation based on the $`Z_3`$ vortex-condensation theory, Eq. (7), but before the evidence becomes conclusive, the errorbars become too large. ## 4 Conclusion We have presented evidence for center dominance, precocious linearity, and scaling of center-projected Creutz ratios and of the P-vortex density from simulations of SU(3) lattice gauge theory. Our data, and the conclusions that can be drawn from them, look quite similar to the case of SU(2). However, the SU(3) data at present are not as convincing and unambiguous as those of SU(2); the errorbars are still quite large, and much more CPU time would be required to reduce them. The reason essentially is that the gauge-fixing maximization for SU(3) is very time consuming, either with simulated annealing or with the Cabibbo–Marinari–Okawa-like method used in the present investigation (typically thousands of iterations were needed for gauge fixing also in the investigation of Montero). Moreover, the maximal center gauge is known to suffer from the Gribov problem, which makes gauge fixing notoriously difficult (in this context see also Refs. ). A better alternative is badly needed, and may be provided by the recent proposal of de Forcrand et al. based on fixing to the so-called Laplacian center gauge. Their first SU(2) results are promising, and the method can readily be extended to the case of SU(3).
It is encouraging that none of the pieces of data, which we have accumulated in SU(3) lattice gauge theory until now, contradicts conclusions drawn from earlier SU(2) results. If future extensive simulations with a more suitable, Gribov-copy free center-gauge fixing method confirm the evidence obtained in our exploratory investigation, center vortices will have a very strong claim to be the true mechanism of color confinement in QCD. ### Acknowledgements Our research is supported in part by Fonds zur Förderung der Wissenschaftlichen Forschung P13997-PHY (M.F.), the U.S. Department of Energy under Grant No. DE-FG03-92ER40711 (J.G.), and the Slovak Grant Agency for Science, Grant No. 2/4111/97 (Š.O.). In earlier stages of this work Š.O. was also supported by the “Action Austria–Slovak Republic: Cooperation in Science and Education” (Project No. 18s41). Portions of our numerical simulations were carried out on computers of the Technical University of Vienna, and of the Computing Center of the Slovak Academy of Sciences in Bratislava.
no-problem/9911/math-ph9911024.html
ar5iv
text
# Lattice representations of Penrose tilings of the plane ## I introduction The subject of Penrose tiles has been studied extensively, and the concept of quasiperiodicity has found applications in physics. The definition of the Penrose tiles discussed in this paper is the same as that of Refs. and , but the methods presented here can be applied to other types of tiles as well. We consider a tiling of the plane by a rhombus containing an angle of $`36^{}`$ and a rhombus containing an angle of $`72^{}`$; the edges of these two tiles have unit length. Some authors work with additional matching rules, according to which certain arrangements of tiles are forbidden. The methods of this paper can be applied when matching rules are present as well as when they are absent. An example of a tiling is shown in Fig. 1. Although there are only finitely many directions possible for the edges in such a tiling, the vertices do not lie on any lattice (a lattice is a set of points obtained by taking all integer linear combinations of a set of linearly independent vectors). The subject of this paper is two-, three- and four-dimensional representations of Penrose tilings of the plane. Given a tiling of the plane, a lattice representation is defined by placing tiles with four vertices on a lattice, which has a dimensionality of two, three or four. The tiles share edges with neighboring tiles in the same way as in the original tiling of the plane. Figure 2 shows an example of a three-dimensional representation of the tiling in Fig. 1. ## II four-dimensional representation We begin by defining a four-dimensional representation of a Penrose tiling of the plane. Such representations are well-known, but we present a discussion here because the later sections of this paper make use of this material. 
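The two unit-edge rhombi have areas $`\mathrm{sin}36^{\circ }0.588`$ and $`\mathrm{sin}72^{\circ }0.951`$, and their ratio is the golden mean, since $`\mathrm{sin}72^{\circ }/\mathrm{sin}36^{\circ }=2\mathrm{cos}36^{\circ }=(1+\sqrt{5})/2`$. A quick numerical confirmation of this identity:

```python
import math

# area of a rhombus with unit edges and angle theta is sin(theta)
area_thin = math.sin(math.radians(36))   # 36-degree rhombus
area_thick = math.sin(math.radians(72))  # 72-degree rhombus
golden = (1 + math.sqrt(5)) / 2

# sin(72)/sin(36) = 2*cos(36) = golden ratio
```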
We define $`x`$ and $`y`$ axes in the plane in such a way that one of the vertices of the tiling is located at the origin, and that the angles between edges of tiles and the $`x`$ axis are multiples of $`36^{\circ }`$. Because of the relationships $`\mathrm{cos}36^{\circ }`$ $`=`$ $`{\displaystyle \frac{1+\sqrt{5}}{4}},`$ (1) $`\mathrm{cos}72^{\circ }`$ $`=`$ $`{\displaystyle \frac{-1+\sqrt{5}}{4}},`$ (2) and $`\mathrm{cos}0=1`$, the $`x`$ coordinates of all of the vertices in the tiling can be expressed as integer linear combinations of $`1/4`$ and $`\sqrt{5}/4`$. The $`y`$ coordinates of all of the vertices in the tiling can be expressed as integer linear combinations of $`\mathrm{sin}36^{\circ }`$ and $`\mathrm{sin}72^{\circ }`$. Thus the $`x`$ and $`y`$ coordinates of any vertex in the tiling can be described by four integers $`x_1`$, $`x_2`$, $`x_3`$ and $`x_4`$: $`x`$ $`=`$ $`{\displaystyle \frac{x_1+x_2\sqrt{5}}{4}},`$ (3) $`y`$ $`=`$ $`x_3\mathrm{sin}36^{\circ }+x_4\mathrm{sin}72^{\circ }.`$ (4) The integers $`x_1`$, $`x_2`$, $`x_3`$ and $`x_4`$ are the coordinates of the given vertex in a four-dimensional space. The set of points in I R<sup>4</sup> with integer coordinates is a lattice. Not all of the points of this lattice are used in our description of a Penrose tiling. Figure 3 shows the coordinates in I R<sup>4</sup> of ten points in the plane. These are the ten possible displacement vectors for a step, as one traces a path along the edges in a tiling of the plane. The four integers corresponding to such a vector give the changes in the coordinates in I R<sup>4</sup> as one moves along the edge of a tile. In the four-dimensional representation, a tile is specified by four points in I R<sup>4</sup>; going around the tile along the edges involves taking steps in I R<sup>4</sup> given by the numbers in Fig. 3. Thus we see that a tile in I R<sup>4</sup> has displacement vectors along its edges that are independent of the location of the original tile in the plane.
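Equations (3)–(4) define a linear map from the integer coordinates back to the plane. For instance, the unit step at 36° corresponds to the coordinates (1,1,1,0), and the unit step along the $`x`$ axis to (4,0,0,0). A minimal sketch (the function name is ours):

```python
import math

SIN36 = math.sin(math.radians(36))
SIN72 = math.sin(math.radians(72))

def embed(x1, x2, x3, x4):
    # Eqs. (3)-(4): (x1, x2, x3, x4) in Z^4  ->  vertex (x, y) in the plane
    x = (x1 + x2 * math.sqrt(5)) / 4
    y = x3 * SIN36 + x4 * SIN72
    return x, y
```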
What does matter is the orientation of the original tile in the plane. Different orientations of the same tile in the plane correspond to tiles having different shapes in ℝ<sup>4</sup>. In this sense, the number of types of tiles that occur in the four-dimensional representation is greater than the original value of two. Given the four-dimensional representation of a tiling of the plane, the original tiling can be recovered using Eq. (4). This corresponds to projecting the four-dimensional tiling onto a two-dimensional subspace, with suitably defined coordinates in the subspace. Since objects in four dimensions are difficult to visualize, it is also of interest to consider projections onto three-dimensional subspaces. This can be done without creating self-intersections of the higher-dimensional tiling. Then a further projection results in the original tiling of the plane. These matters are discussed in the next section. Rotating a Penrose tiling of the plane by $`36^{}`$ results in a tiling that has the same set of allowed directions for the edges, so the vertices can be described by four integers using the same procedure as for the original tiling. Using Eq. (4), one finds that the effect of a counterclockwise rotation by $`36^{}`$ on the four integers identifying a vertex can be described by multiplication by the following matrix: $$T=\frac{1}{4}\left(\begin{array}{cccc}1& 5& -10& 0\\ 1& 1& 2& -4\\ 1& -1& 0& 2\\ 0& 2& 2& 2\end{array}\right).$$ (5) Because of the definition of $`T`$ as a rotation in the plane by an angle of $`36^{}`$ (one tenth of a revolution), we have the following identities for higher powers of $`T`$: $`T^5`$ $`=`$ $`-I,`$ (6) $`T^{10}`$ $`=`$ $`I.`$ (7) Here $`I`$ denotes the $`4\times 4`$ identity matrix. 
The matrix $`T`$ is not orthogonal (that is, $`TT^t`$ is not the identity matrix), but it does possess the property $$TMTM=I,$$ (9) where the matrix $`M`$ is defined to be $$M=\left(\begin{array}{cccc}-1& 0& 0& 0\\ 0& -1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\end{array}\right).$$ (10) The reason Eq. (9) is true is that the action of $`M`$ is the same as a reflection about the $`y`$ axis in the original two-dimensional space, and the conjugation of a rotation by a reflection in ℝ<sup>2</sup> is the inverse of the rotation. The result of multiplying a column vector of four integers that represent a vertex in a Penrose tiling by the matrix $`T`$ must again be a column vector of four integers, for reasons explained above. This statement yields some constraints on the possible coordinates in ℝ<sup>4</sup> for vertices in a Penrose tiling of the plane. For each row in the matrix $`T`$ we have a constraint. For example, from the first row, we see that $`x_1+5x_2-10x_3`$ must be a multiple of four. Because the $`x_i`$ are integers, this statement is equivalent to the statement that $`x_1+x_2+2x_3`$ must be a multiple of four. Another way to prove this statement is to look at the coordinates of the fundamental displacement vectors shown in Fig. 3. For each of these steps, $`x_1+x_2+2x_3`$ is a multiple of four, so $`x_1+x_2+2x_3`$ must be a multiple of four for arbitrary combinations of these steps. Looking at the other rows in the matrix $`T`$ we obtain further statements about sets of four integers that represent a vertex in a Penrose tiling. Some of this information is redundant. 
The information may be summarized as $`x_1+x_2+2x_3`$ $`=`$ $`0\ \mathrm{mod}\ 4,`$ (11) $`x_2+x_3+x_4`$ $`=`$ $`0\ \mathrm{mod}\ 2.`$ (12) If we define new coordinates according to $`x_1^{\prime }`$ $`=`$ $`{\displaystyle \frac{x_1+x_2+2x_3}{4}},`$ (13) $`x_2^{\prime }`$ $`=`$ $`{\displaystyle \frac{x_2+x_3+x_4}{2}},`$ (14) $`x_3^{\prime }`$ $`=`$ $`x_3,`$ (15) $`x_4^{\prime }`$ $`=`$ $`x_4,`$ (16) then we may use $`x_1^{\prime }`$, $`x_2^{\prime }`$, $`x_3^{\prime }`$ and $`x_4^{\prime }`$ as coordinates on ℝ<sup>4</sup>. These coordinates have the property that all possible integer values are taken on when representing arbitrary finite sums of the fundamental displacement vectors. If complex numbers are used to describe points in the plane, these sums become sums of unimodular complex numbers of the form $`\mathrm{exp}(i\pi n_j/5)`$, where the $`n_j`$ are integers. The points obtained in this way represent vertices that would occur in tilings of the plane, if overlapping tiles were allowed. The fact that all possible integer values are taken on by the primed coordinates follows from the observation that (in the complex notation for points in the plane) $`\mathrm{exp}(0)`$ is represented by $`(1,0,0,0)`$, $`\mathrm{exp}(2\pi i/5)+\mathrm{exp}(-2\pi i/5)`$ is represented by $`(0,1,0,0)`$, $`\mathrm{exp}(4\pi i/5)`$ is represented by $`(0,0,1,0)`$, and $`\mathrm{exp}(3\pi i/5)`$ is represented by $`(0,0,0,1)`$. With respect to the primed coordinates, the matrix $`T`$ becomes $$T^{\prime }=\left(\begin{array}{cccc}1& 0& -1& 0\\ 1& 0& 0& 0\\ 1& -1& 0& 1\\ 0& 1& 0& 0\end{array}\right).$$ (17) The fact that only the numbers 1, -1 and 0 are present in this matrix indicates that an argument such as the one presented above for the matrix $`T`$ will not give any constraints on the values of the new coordinates. 
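The coordinate change of Eqs. (13)–(16) can be exercised directly. The sketch below (Python; not part of the original paper, with the step table reconstructed as in Fig. 3) confirms that sums of the fundamental displacement vectors always satisfy the constraints (11)–(12), and that the four generators quoted above map to unit vectors.

```python
from itertools import product

# fundamental steps (reconstructed table) plus their opposites
STEPS = [(4, 0, 0, 0), (1, 1, 1, 0), (-1, 1, 0, 1), (1, -1, 0, 1), (-1, -1, 1, 0)]
STEPS += [tuple(-c for c in s) for s in STEPS]

def primed(x1, x2, x3, x4):
    """Eqs. (13)-(16); the divisions are exact on valid vertex coordinates."""
    q1, r1 = divmod(x1 + x2 + 2 * x3, 4)
    q2, r2 = divmod(x2 + x3 + x4, 2)
    assert r1 == 0 and r2 == 0   # the mod-4 and mod-2 constraints, Eqs. (11)-(12)
    return (q1, q2, x3, x4)

# every 3-step walk satisfies the constraints and has integer primed coordinates
for a, b, c in product(STEPS, repeat=3):
    v = tuple(ai + bi + ci for ai, bi, ci in zip(a, b, c))
    assert all(isinstance(t, int) for t in primed(*v))

# the four generators quoted in the text map to the unit vectors
assert primed(4, 0, 0, 0) == (1, 0, 0, 0)
assert primed(-2, 2, 0, 0) == (0, 1, 0, 0)
assert primed(-1, -1, 1, 0) == (0, 0, 1, 0)
assert primed(1, -1, 0, 1) == (0, 0, 0, 1)
```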
The $`x`$ and $`y`$ coordinates of any vertex in the tiling can be described by the four integers $`x_1^{\prime }`$, $`x_2^{\prime }`$, $`x_3^{\prime }`$ and $`x_4^{\prime }`$ as $`x`$ $`=`$ $`{\displaystyle \frac{4x_1^{\prime }-2x_2^{\prime }-x_3^{\prime }+x_4^{\prime }+(2x_2^{\prime }-x_3^{\prime }-x_4^{\prime })\sqrt{5}}{4}},`$ (18) $`y`$ $`=`$ $`x_3^{\prime }\mathrm{sin}36^{}+x_4^{\prime }\mathrm{sin}72^{}.`$ (19) Because of the complexity of this transformation, we prefer to use the unprimed coordinates, shown in Eq. (4). ## III three-dimensional representations In this section we describe three-dimensional representations of Penrose tilings of the plane. These have the advantage that they are easier to visualize than the four-dimensional representation described in the previous section. Two projections are described. We call these the “$`\mu `$-projection” and the “(1,2)-projection.” The $`\mu `$-projection has the advantage that the original tiling of the plane can be obtained by a simple projection onto a certain two-dimensional subspace of ℝ<sup>3</sup>. It has the disadvantage that only the $`x`$ and $`y`$ coordinates of the vertices are integers. The set of $`z`$ values that occur are not integer multiples of a basic step. The (1,2)-projection has the advantage that the $`x`$, $`y`$ and $`z`$ coordinates of the vertices are all integers. The disadvantage is that the original tiling cannot be recovered by a further simple projection, although it can be reconstructed by a different means, as explained below. ### A The $`\mu `$-projection We define a mapping from ℝ<sup>4</sup> to ℝ<sup>3</sup> using the matrix $$P^{(\mu )}=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& \mu \mathrm{sin}\frac{\pi }{5}& \mu \mathrm{sin}\frac{2\pi }{5}\end{array}\right),$$ (20) where $`\mu `$ is a real number specified below. (The $`\mu `$ on the left-hand side of the equation is not an index.) Under this mapping, the vertices in ℝ<sup>4</sup> map to points in ℝ<sup>3</sup>. 
The resulting arrangement of tiles in ℝ<sup>3</sup> can be projected onto the plane perpendicular to $`(\sqrt{5},-1,0)`$ to recover the original tiling. The proper choice for the value of $`\mu `$ results in a two-dimensional tiling in which the angles are the same as in the original tiling. We will use $`\frac{1}{\sqrt{6}}(1,\sqrt{5},0)`$ and $`(0,0,1)`$ as orthonormal basis vectors for the two-dimensional subspace perpendicular to $`(\sqrt{5},-1,0)`$, and we let $`x^{\prime }`$ and $`y^{\prime }`$ denote the coordinates of projected points with respect to this basis. The reason for this choice of subspace will become clear in the following calculation. The values of $`x^{\prime }`$ and $`y^{\prime }`$ for a point $`(x_1,x_2,x_3,x_4)`$ in ℝ<sup>4</sup> can be obtained from $`\left(\begin{array}{c}x^{\prime }\\ y^{\prime }\end{array}\right)`$ $`=`$ $`\left(\begin{array}{ccc}\frac{1}{\sqrt{6}}& \frac{\sqrt{5}}{\sqrt{6}}& 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& \mu \mathrm{sin}\frac{\pi }{5}& \mu \mathrm{sin}\frac{2\pi }{5}\end{array}\right)\left(\begin{array}{c}x_1\\ x_2\\ x_3\\ x_4\end{array}\right)`$ (21) $`=`$ $`\left(\begin{array}{cccc}\frac{1}{\sqrt{6}}& \frac{\sqrt{5}}{\sqrt{6}}& 0& 0\\ 0& 0& \mu \mathrm{sin}\frac{\pi }{5}& \mu \mathrm{sin}\frac{2\pi }{5}\end{array}\right)\left(\begin{array}{c}x_1\\ x_2\\ x_3\\ x_4\end{array}\right)`$ (22) $`=`$ $`\left(\begin{array}{c}2\sqrt{\frac{2}{3}}x\\ \mu y\end{array}\right),`$ (23) where we have used Eq. (4). Thus, we must take $$\mu =2\sqrt{\frac{2}{3}}.$$ (24) For arrangements of a finite number of Penrose tiles, the process of projecting onto the plane perpendicular to $`(\sqrt{5},-1,0)`$ can be implemented by viewing the three-dimensional representation from a viewpoint in the $`(\sqrt{5},-1,0)`$ direction. 
The image obtained in the limit as the distance from the tiles to the observer becomes infinite (together with suitable magnification of the image) is the projection onto the plane perpendicular to $`(\sqrt{5},-1,0)`$. Figure 5 shows the three-dimensional representation corresponding to the tiles in the plane shown in Fig. 4, using the $`\mu `$-projection. Figure 6 contains a cut-out model of the object shown in Fig. 5. The lower portion is a frame to support the model. Its location in the fully assembled model corresponds roughly to the right half of the encompassing box in Fig. 5. ### B The (1,2)-projection The (1,2)-projection is defined by the matrix $$P^{(1,2)}=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 2\end{array}\right).$$ (25) The numbers in the lowest row of this matrix provide the motivation for the name “(1,2)-projection.” The numerical values in this matrix are close to those in $`P^{(\mu )}`$, so the resulting projected tilings are similar in appearance. However, since the coordinates $`x_3`$ and $`x_4`$ occur in the combination $`x_3+2x_4`$, it is not possible to obtain $`y=x_3\mathrm{sin}36^{}+x_4\mathrm{sin}72^{}`$ by simple algebraic operations. In spite of this, it is possible to recover the original tiling from the (1,2)-projection because the tiles in the projection indicate the types of tiles (narrow or wide) and their connections (shared edges) in the original tiling. Therefore, the information that is necessary to assemble the original tiling is contained in the (1,2)-projection. As mentioned above, this three-dimensional representation has the property that the $`x`$, $`y`$ and $`z`$ coordinates of the vertices are integers. Figure 2 shows the (1,2)-projection corresponding to Fig. 1. ## IV two-dimensional representations It is also possible to create a two-dimensional lattice representation of a Penrose tiling of the plane. 
One way to do this is to map the vertices in ℝ<sup>4</sup> to ℝ<sup>2</sup> using the matrix $$\left(\begin{array}{cccc}\frac{1}{4}& \frac{1}{2}& 0& 0\\ \\ 0& 0& \frac{1}{2}& 1\end{array}\right).$$ (26) The resulting tiling of the plane (see Fig. 7) resembles the original tiling, but the tiles are slightly different. Also, the set of directions of edges is no longer invariant under a rotation through an angle of $`36^{}`$. There are three types of narrow tiles and three types of wide tiles (modulo reflections about the coordinate axes), corresponding to different orientations of the tiles in the original tiling. As above, the types of tiles (narrow or wide) and their connections to each other (shared edges) can be identified, so the original tiling can be reconstructed, but this process is not a simple projection operation. Although some of the vertices move quite a bit in this process, nearby vertices are moved by a similar amount, and no self-intersections of the pattern are created. This can be seen by applying the transformation to the points in Fig. 3. These displacement vectors occur along the edges of the tiles. The point (1,0) \[which is represented by (4,0,0,0) in the four-dimensional representation\] gets mapped to (1,0); the point $`(\mathrm{cos}36^{},\mathrm{sin}36^{})`$ \[which is represented by (1,1,1,0) in the four-dimensional representation\] gets mapped to $`(\frac{3}{4},\frac{1}{2})`$, etc. Thus, the shapes of the tiles are not changed drastically, and the identities and arrangement of the original tiles can be read off of the transformed tiling. This is as in the case of the (1,2)-projection above. Another way to say this is that if a vertex is selected in Fig. 7, its original coordinates can be found by choosing an arbitrary path back to the origin along edges in the diagram. The steps in this path identify unit vectors which must be summed to obtain the coordinates of the vertex in Fig. 1. 
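The map defined by the matrix (26) can be sketched with exact rational arithmetic (Python; not part of the original paper — only the two example images below are taken from the text):

```python
from fractions import Fraction as F

# the two rows of matrix (26), kept exact with rationals
ROW_X = (F(1, 4), F(1, 2), F(0), F(0))
ROW_Y = (F(0), F(0), F(1, 2), F(1))

def to_2d(v):
    """Apply matrix (26) to a four-integer vertex coordinate."""
    return (sum(r * c for r, c in zip(ROW_X, v)),
            sum(r * c for r, c in zip(ROW_Y, v)))

assert to_2d((4, 0, 0, 0)) == (1, 0)            # the point (1, 0)
assert to_2d((1, 1, 1, 0)) == (F(3, 4), F(1, 2))  # the point (cos 36, sin 36)
```

Storing the images of all vertices in a set keyed by these lattice coordinates is one way to realize the bit-array description of the tiling.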
The description of a tiling can be further simplified by removing the edges of the tiles from the list of information that is recorded about the tiling. It is sufficient to record only the locations of the vertices of the tiles on the two-dimensional lattice, as shown in Fig. 8. Only certain types of displacement vectors can occur as edges. These were mentioned in the preceding paragraph. Thus, given the information in Fig. 8, the edges can be reconstructed. The description of a tiling using only a record of the locations of the vertices in the two-dimensional lattice representation, with no further information about the vertices (such as which vertices belong to the same tile), is a description using a two-dimensional array of bits (binary digits). At each lattice point, a tiling vertex can either be present or absent. Not all arrays of bits represent tilings, however. Some displacements between vertices are forbidden. It must be possible to reconstruct a tiling from the bits, with all of the edges belonging to allowed tiles. The two-dimensional representation provides a simple description of a tiling using integers. One advantage that it has is that it is “random access.” It is possible to view the representation of an arbitrary patch of the tiling, using only information about that patch. Some other descriptions using integers do not have this property. For example, if a history of assembly is recorded (a sequence of tile types, edge numbers and orientations) then more data must be accessed to view a patch of the tiling. The two-dimensional representation is also useful for certain types of computer tiling programs and tile-overlap detection. It should be noted that the two-dimensional lattice representation is not the same as simply superimposing a grid over the original tiling and moving the vertices to the nearest grid point. 
This would result in different shapes representing a given orientation of a tile, depending on how it was positioned relative to the grid. In the two-dimensional representation, a given orientation of a tile is always represented by the same shape, regardless of its position. The two-dimensional representation also provides an interesting distance measure to work with when tiling the plane. This is the Euclidean metric in the lattice space. Because the original tiling is not obtained by a simple projection, this metric is difficult to describe in the original two-dimensional space. Figure 9 shows a tiling generated by a simple algorithm using the two-dimensional representation (basically placing new tiles as close as possible to the origin). The subjects of quasicrystallography and the spiral of Archimedes occur in the literature. ## V conclusion The description of Penrose tilings of the plane using lattices has a variety of applications, some of which are discussed in this paper and the references. As a further application, we describe in the appendix an algorithm for checking for tile overlap using calculations with the integers that occur in the four-dimensional representation. In this paper we have shown that lower-dimensional lattice descriptions of Penrose tilings of the plane are possible, but the reconstruction process becomes more complicated. Such representations may be of use in analyzing tilings, and they provide a simple means of describing a tiling. An open question is the large-scale structure and curvature of the higher-dimensional representations, for tilings of the entire plane. For example, what would Fig. 2 look like as the number of tiles becomes very large? Another question is what the most efficient way to describe a tiling is. The most efficient method presented in this paper is the two-dimensional array of bits. Looking at Fig. 8, it is clear that some compression of this data is possible. 
Furthermore, if one of the bits is made unknown, it is still possible to reconstruct the tiling. Thus, further improvements in efficiency are possible. ## A Checking for tile overlap using integer math The subject of this appendix is an algorithm for checking for tile overlap using calculations with integers. Additional information, such as vertex-vertex contact, is also given. We will consider here the case of a narrow tile with its long diagonal at an angle of $`54^{}`$ to the $`x`$-axis and a wide tile with its long diagonal at an angle of $`36^{}`$ to the $`x`$-axis. Other cases can be treated in a similar manner. Since the orientations of the tiles are fixed in this discussion, we need only introduce variables to describe the relative positioning of the tiles. We let the four integers $`x_1`$, $`x_2`$, $`x_3`$ and $`x_4`$ describe the displacement vector from the lower-left vertex of the narrow tile to the lower-left vertex of the wide tile. It is useful to think of the lower-left vertex of the narrow tile as being at the origin. Then the location of the wide tile is determined by $`x_1`$, $`x_2`$, $`x_3`$ and $`x_4`$. As we move the wide tile around the plane (without rotating it), some of the positions will have contact between the tiles; others will have no contact. The set of positions for the lower-left vertex of the wide tile for which there is contact is a hexagon. Opposite sides of the hexagon are parallel, but the hexagon is not regular. The second set of Mathematica instructions below draws a diagram to go along with this discussion. For each edge of the hexagon, we define a line by extending the edge infinitely in both directions. The side of the line on which a given test point is located is determined by the sign of the dot product of a normal vector to the line and the vector difference between the test point and a point on the line. We choose the normal vector to point to the inside of the hexagon. 
Because the normal vector is at $`90^{}`$ to the line, $`\mathrm{sin}36^{}`$ appears in its $`x`$-component, as may be seen from Eq. (4) and the fact that $`\mathrm{sin}72^{}=\mathrm{sin}36^{}(1+\sqrt{5})/2`$. A simplification that therefore occurs is that an overall factor of $`\mathrm{sin}36^{}`$ may be dropped in computing the above-mentioned sign. The problem reduces to finding the sign of numbers of the form $`a+b\sqrt{5}`$, which can be done using integer math (see sign\[ \], below). A test point strictly inside the hexagon will have all six signs positive. If at least one sign is negative, the point is strictly outside the hexagon. If exactly one of the signs is zero (and the others positive), the point is on one of the edges of the hexagon, and it is not one of the endpoints. This means we have vertex-edge or partial edge contact between the tiles. If two of the signs are zero (and the others positive), the test point is at one of the vertices of the hexagon. This means the tiles have vertex-vertex contact. Two of the vertices of the hexagon represent perfect edge contact. These are tested for at the beginning of the program check\[ \]. The program check\[ \] calculates the six signs and returns a character string describing the overlap. The second set of Mathematica instructions below draws a figure showing how the plane is divided up. The narrow tile with its lower-left vertex at the origin is shown. The wide tile is located at an arbitrary position. The dots summarize the information obtained from check\[ \] at some representative points. These include special points, such as the two points for which we have perfect edge contact (large filled circles), points which indicate vertex-vertex contact (medium filled circles), and vertex-edge or partial edge contact (medium unfilled circles). Most of the points in the figure are small circles. 
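The sign test can be phrased in any language with exact integers. The following is a Python rendering (not from the paper) of the sign\[ \] routine in the Mathematica code below; the enumeration of cases follows the text.

```python
import math

def sign_sqrt5(a: int, b: int) -> int:
    """Sign of a + b*sqrt(5), using integer arithmetic only."""
    sa = (a > 0) - (a < 0)
    sb = (b > 0) - (b < 0)
    if sa * sb == 1:
        return sa                    # same sign: no cancellation possible
    if sa * sb == 0:
        return sb if a == 0 else sa  # one term vanishes
    # opposite signs: compare |a| with |b|*sqrt(5) via the conjugate product,
    # since (a + b*sqrt(5)) * (a - b*sqrt(5)) = a^2 - 5 b^2
    s = a * a - 5 * b * b
    return sa * ((s > 0) - (s < 0))

# spot-check against floating point
for a in range(-9, 10):
    for b in range(-9, 10):
        v = a + b * math.sqrt(5)
        expected = (v > 1e-12) - (v < -1e-12)
        assert sign_sqrt5(a, b) == expected
```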
The unfilled ones are points that indicate overlap with nonzero area, and the filled ones are points that indicate no overlap at all. ``` (* ========= check for overlapping tiles, using integer math: ========= *) (* the sign of a + b Sqrt[5] : *) sign[a_, b_] := Switch[Sign[a] Sign[b], 1, Sign[a], 0, If[a == 0, Sign[b], Sign[a]], -1, Sign[a] Sign[a^2 - 5 b^2]] (* some parameters: *) Evaluate[Table[b[i, j], {i, 3},{j, -1, 1, 2}]] = {{{-1, -1}, {3, 1}}, {{-4, 0}, {8, 0}}, {{-8, 0}, {4, 4}}} (* check for overlap: *) check[x1_, x2_, x3_, x4_] := ( If[{x1, x2, x3, x4} == {1, 1, 1, 0} || {x1, x2, x3, x4} == {-4,0,0,0}, Return["perfect edge contact"]]; p[1] = {2 x3 + x4, x4}; p[2] = {-x1 + x3 + 3 x4, -x2 + x3 + x4}; p[3] = {-x1 - 5 x2 - 2 x3 + 4 x4, -x1 - x2 + 2 x3}; signs = Sort@Flatten@Table[-j Apply[sign, p[i] - b[i,j]], {i, 3}, {j,-1,1,2}]; If[signs[[1]] == -1, Return["no contact"]]; If[signs[[2]] == 0, Return["vertex-vertex contact"]]; If[signs[[1]] == 0, Return["vertex-edge or partial edge contact"]]; "nonzero-area overlap" ) (* ============== The instructions below draw the figure. ============== *) f["perfect edge contact"] = {Disk, .1} f["no contact"] = {Disk, .025} f["vertex-vertex contact"] = {Disk, .05} f["vertex-edge or partial edge contact"] = {Circle, .05} f["nonzero-area overlap"] = {Circle, .025} v1 = {c1, s1} = {Cos[Pi/5], Sin[Pi/5]} v2 = {c2, s2} = {Cos[2 Pi/5], Sin[2 Pi/5]} DisksCircles = {}; Do[x = N[(x1 + x2 Sqrt[5])/4]; y = N[x3 s1 + x4 s2]; If[Abs[x] < 2 && Abs[y] < 2, temp = f[check[x1, x2, x3, x4]]; AppendTo[DisksCircles, temp[[1]][{x,y}, temp[[2]]]]], {x1, -5, 5}, {x2, -3, 3}, {x3, -2, 2}, {x4, -2, 2}] Show[Graphics[{DisksCircles, Line[{{0, 0}, v1, v1 + v2, v2, {0, 0}}], Line[Map[# + {c1 - 1/2, -s2 - s1}&, {{0, 0}, {1, 0}, {1, 0} + v2, v2, {0, 0}}]]}], AspectRatio -> Automatic, Axes -> True] ``` FIGURE CAPTIONS Figure 1: An arrangement of Penrose tiles in the plane. The vertices of the tiles do not lie on a lattice. 
Figure 2: A three-dimensional representation of the tiling shown in Fig. 1. The $`x`$, $`y`$ and $`z`$ coordinates of the vertices are integers. Figure 3: The fundamental displacement vectors in the plane, and their coordinates in ℝ<sup>4</sup>. Figure 4: The arrangement of Penrose tiles referred to in Figs. 5 and 6. Figure 5: The $`\mu `$-projection of the tiling shown in Fig. 4. Figure 6: A three-dimensional cut-out model of the representation shown in Fig. 5. The letters indicate points that come together to a single point in the assembled model. For example, the four points labeled “A” come together in the final assembly. When viewed from a distance in the direction of the arrow, the model looks like the image shown in Fig. 4. Illuminating the model from the left helps to eliminate distracting shadows. Further information about the assembly of the model is given in the text. Figure 7: A two-dimensional lattice representation of the tiling shown in Fig. 1. Figure 8: A representation of the tiling shown in Fig. 1 as a two-dimensional array of bits. The original tiling can be reconstructed from this information alone. Figure 9: An arrangement of Penrose tiles generated by a simple algorithm using the two-dimensional lattice representation.
# Motion of a spin 1/2 particle in shape invariant scalar and magnetic fields ## 1 Introduction The concept of supersymmetry in quantum mechanical models was first introduced by Nicolai . A few years later Witten introduced supersymmetric quantum mechanics as a laboratory to examine supersymmetry breaking in quantum field theoretical models. Subsequently SUSYQM has proved to be interesting on its own and has been studied by many authors from different points of view . Over the years it has been shown that SUSYQM plays an important role in obtaining exact solutions of quantum mechanical problems. In fact all solvable problems of quantum mechanics are either supersymmetric or can be made so. Now among the various exactly solvable potentials there is a certain class of potentials which are characterized by a property known as shape invariance . Potentials which are shape invariant satisfy certain conditions and it has been shown that solutions of the Schrödinger equation with any shape invariant potential can be obtained in a trivial manner without solving the differential equation. In fact shape invariance is a sufficient condition for exact solvability. In the present paper our aim is to use the formalism of SUSYQM to study one dimensional motion of a spin $`\frac{1}{2}`$ particle in the presence of a scalar potential as well as a magnetic field. It may be noted that SUSYQM has previously been used to study the motion of a particle in a magnetic field . However, in the present paper the problem is similar in nature to a coupled channel problem . To solve this problem we shall introduce a definition of shape invariance which will require not only the scalar potential but also the magnetic field to satisfy certain conditions. Using this shape invariance property we shall then obtain exact solutions of the problem of a spin $`\frac{1}{2}`$ particle moving in a scalar potential and a magnetic field. 
The organisation of the paper is as follows: in section 2 we describe the construction of the hamiltonian describing the motion of a spin $`\frac{1}{2}`$ particle in a scalar potential and a magnetic field; in section 3 we introduce the shape invariance conditions and use them to obtain algebraically exact solutions; finally section 4 is devoted to a conclusion. ## 2 Supersymmetric approach to the motion of a spin $`\frac{1}{2}`$ particle on the real line In Witten’s model of SUSY quantum mechanics the Hamiltonian consists of two factorized Schrödinger operators $$H_{\mp }(z;\gamma )=A^{\pm }(z;\gamma )A^{\mp }(z;\gamma )=-\frac{d^2}{dz^2}+W^2(z;\gamma )\mp W^{\prime }(z;\gamma )$$ (1) where $`\gamma `$ denotes a set of parameters and the operators $`A^+(z;\gamma )`$ and $`A^{-}(z;\gamma )`$ are given by $$A^{\pm }(z;\gamma )=\mp \frac{d}{dz}+W(z;\gamma ),$$ (2) where the function $`W(z;\gamma )`$ is called the superpotential. The pair of Hamiltonians in (1) are called SUSY partner Hamiltonians and each of these Hamiltonians describes the motion of a spinless particle in a one dimensional potential $`V_\pm (z;\gamma )=W^2(z;\gamma )\pm W^{\prime }(z;\gamma )`$. Among the various potentials $`V_\pm (z;\gamma )`$ those which satisfy the relation $$V_+(z;\gamma )=V_{-}(z;\gamma _1)+ϵ_1,$$ (3) where $`\gamma _1=f(\gamma )`$ is a function of $`\gamma `$ and $`ϵ_1`$ is a constant, are called shape invariant potentials . The shape invariant potentials are always exactly solvable and their solutions can be obtained purely algebraically. We shall now generalize Witten’s model of SUSY quantum mechanics in such a way that each of the Hamiltonians $`H_{-}`$, $`H_+`$ will describe the motion of a spin $`\frac{1}{2}`$ particle in a magnetic field and a scalar potential. 
In order to do this we generalise the operators $`A^\pm `$ in the following way: $$A^{\pm }(z;\gamma ,\beta )=\mp \frac{d}{dz}+W(z;\gamma )+𝐕(z;\beta )𝐒.$$ (4) It may be noted that here we consider motion of the particle along the $`z`$-axis and the components of the spin operator $`𝐒`$ are $`S_\alpha =\sigma _\alpha /2`$ ($`\alpha =x,y,z`$), $`\sigma _\alpha `$ being the Pauli matrices. Then SUSY partner Hamiltonians can be obtained as in (1) and are given by $$H_\pm (z;\gamma ,\beta )=-\frac{d^2}{dz^2}+V_\pm (z;\gamma ,\beta )+𝐁_\pm (z;\gamma ,\beta )𝐒,$$ (5) where $`V_\pm (z;\gamma ,\beta )=W^2(z;\gamma )\pm W^{\prime }(z;\gamma )+V^2(z;\beta )/4,`$ (6) $`𝐁_\pm (z;\gamma ,\beta )=2W(z;\gamma )𝐕(z;\beta )\pm 𝐕^{\prime }(z;\beta ).`$ (7) The Hamiltonians $`H_\pm `$ in (5) describe a spin $`\frac{1}{2}`$ particle moving along the $`z`$-axis in a scalar potential $`V_\pm (z;\gamma ,\beta )`$ and a magnetic field $`𝐁_\pm (z;\gamma ,\beta )`$. The SUSY Hamiltonian reads $$H=\left(\begin{array}{cc}H_+& 0\\ 0& H_{-}\end{array}\right)=\{Q^+,Q^{-}\},$$ (8) where the supercharges $`Q^+`$ and $`Q^{-}`$ have the form $$Q^{+}=A^{-}\sigma ^+,Q^{-}=A^{+}\sigma ^{-}.$$ (9) The supercharges and SUSY Hamiltonian fulfil the well known $`N=2`$ SUSY algebra $$\{Q^+,Q^{-}\}=H,[Q^\pm ,H]=0,(Q^\pm )^2=0.$$ (10) Note that in the present case the SUSY Hamiltonian and supercharges act on four-component wave functions. The standard Witten model of SUSY quantum mechanics can be reproduced by setting $`𝐕=0`$. The Hamiltonians $`H_+`$ and $`H_{-}`$ have exactly the same energy levels (perhaps with the exception of the zero energy state). For the zero energy ground state the following scenarios are possible: (1) Zero energy ground state does not exist (broken SUSY) (2) Zero energy ground state exists for one of the Hamiltonians $`H_{-}`$ or $`H_+`$ (exact SUSY) (3) Zero energy ground state exists for both Hamiltonians $`H_{-}`$ and $`H_+`$ (exact SUSY). 
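The algebra behind Eqs. (5)–(7) can be verified numerically. The sketch below (Python; not part of the paper — the particular $`W`$, $`𝐕`$, and test spinor are arbitrary illustrative choices) applies $`A^+A^{-}`$, with $`A^\pm =\mp d/dz+W+𝐕𝐒`$ as in Eq. (4), to a spinor using analytic derivatives, and compares the result with $`H_{-}`$ built from $`V_{-}`$ and $`𝐁_{-}`$:

```python
import math

# Pauli matrices; the spin operator is S_a = sigma_a / 2
SIG = (((0, 1), (1, 0)), ((0, -1j), (1j, 0)), ((1, 0), (0, -1)))

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def scale(c, v): return (c * v[0], c * v[1])

def spin_term(V, v):
    """(V . S) v with S = sigma / 2."""
    out = (0, 0)
    for sig, comp in zip(SIG, V):
        out = add(out, scale(comp / 2, (sig[0][0] * v[0] + sig[0][1] * v[1],
                                        sig[1][0] * v[0] + sig[1][1] * v[1])))
    return out

# illustrative superpotentials (an assumed choice, not from the paper)
beta = 0.7
W, dW = math.sin, math.cos
V  = lambda z: (math.cos(z), 0.0, beta)
dV = lambda z: (-math.sin(z), 0.0, 0.0)

# analytic test spinor with its first and second derivatives
psi   = lambda z: (math.sin(z), math.cos(z))
dpsi  = lambda z: (math.cos(z), -math.sin(z))
ddpsi = lambda z: (-math.sin(z), -math.cos(z))

def deviation(z):
    """max | (A+ A- psi)(z) - (H- psi)(z) | over the two spin components."""
    phi  = lambda v: add(scale(W(z), v), spin_term(V(z), v))    # W + V.S
    dphi = lambda v: add(scale(dW(z), v), spin_term(dV(z), v))  # its z-derivative
    am   = add(dpsi(z), phi(psi(z)))                # A- psi = psi' + (W + V.S) psi
    d_am = add(ddpsi(z), add(dphi(psi(z)), phi(dpsi(z))))       # (A- psi)'
    lhs  = add(scale(-1, d_am), phi(am))            # A+ A- psi
    vsq  = sum(c * c for c in V(z))
    scal = W(z) ** 2 - dW(z) + vsq / 4              # V_- of Eq. (6)
    B    = tuple(2 * W(z) * V(z)[a] - dV(z)[a] for a in range(3))  # B_- of Eq. (7)
    rhs  = add(scale(-1, ddpsi(z)), add(scale(scal, psi(z)), spin_term(B, psi(z))))
    return max(abs(l - r) for l, r in zip(lhs, rhs))

for z in (-1.3, 0.2, 2.7):
    assert deviation(z) < 1e-12
```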
In a previous paper it was shown that this last scenario can be realised when a particle moves in a rotating magnetic field and a zero scalar potential. For the standard Witten model of SUSY quantum mechanics such a situation arises when the superpotential is a periodic function . In the present paper we shall consider the case when the zero energy ground state exists for one of the Hamiltonians $`H_{-}`$ or $`H_+`$, say for $`H_{-}`$. In this case the eigenvalues $`E_n^\pm `$ and eigenfunctions $`\psi _n^\pm `$ of the Hamiltonians $`H_\pm `$ are related by the following SUSY transformations: $`E_{n+1}^{-}=E_n^+,E_0^{-}=0,`$ (11) $`\psi _{n+1}^{-}={\displaystyle \frac{1}{\sqrt{E_n^+}}}A^+\psi _n^+,`$ (12) $`\psi _n^+={\displaystyle \frac{1}{\sqrt{E_{n+1}^{-}}}}A^{-}\psi _{n+1}^{-}.`$ (13) In equations (12) and (13) the operators $`A^\pm `$ are $`2\times 2`$ matrices and the wave functions $`\psi _n^\pm `$ are 2-component wave functions. As a result equations (12) and (13) are matrix differential equations although in appearance they look similar to the standard SUSY transformations . ## 3 Shape invariant potentials and magnetic fields In this section we shall generalise the idea of shape invariance for obtaining exact solutions of the eigenvalue problem for a spin $`\frac{1}{2}`$ particle moving in both a scalar potential as well as a magnetic field. We begin with the eigenvalue problem corresponding to the Hamiltonian $`H_{-}`$ with superpotentials $`W=W(z,\gamma )`$, $`𝐕(z,\beta )`$ which depend on some parameters $`\gamma `$ and $`\beta `$. Since the zero energy ground state of this Hamiltonian is annihilated by the operator $`A^{-}(z;\gamma ,\beta )`$ we have $$A^{-}(z;\gamma ,\beta )\psi _0^{-}(z;\gamma ,\beta )=0.$$ (14) Note that in the case of the standard Witten model of SUSY quantum mechanics this equation is a single first order differential equation and can be easily solved. 
But in the present case the operator $`A^{-}`$ is a $`2\times 2`$ matrix differential operator and therefore the above equation is a set of two first order coupled differential equations. So in general the ground state can not be obtained in terms of the superpotentials. It is similar to the situation in SUSY quaternionic quantum mechanics . We shall return to the problem of determining the ground state later. For the time being let us assume that we have a solution of equation (14). Now let us consider the SUSY partner of $`H_{-}(z;\gamma ,\beta )`$, i.e. $`H_+(z;\gamma ,\beta )`$. If we calculate the ground state of $`H_+(z;\gamma ,\beta )`$ we immediately find the first excited state of $`H_{-}(z;\gamma ,\beta )`$ using the SUSY transformations (11) - (13). Now in order to calculate the ground state of $`H_+`$ let us rewrite it in the form $$H_+(z;\gamma ,\beta )=H_{-}(z;\gamma _1,\beta _1)+ϵ_1=A^+(z;\gamma _1,\beta _1)A^{-}(z;\gamma _1,\beta _1)+ϵ_1,ϵ_1>0,$$ (15) where $`ϵ_1`$ is the factorisation energy. The operators $`A^{\pm }(z;\gamma _1,\beta _1)`$ corresponding to $`H_{-}(z;\gamma _1,\beta _1)`$ have the same form as in (4) but with superpotentials $`W=W(z,\gamma _1)`$, $`𝐕=𝐕(z,\beta _1)`$. We note that the wave function of the ground state of $`H_+(z;\gamma ,\beta )`$ is also the wave function of the ground state of $`H_{-}(z;\gamma _1,\beta _1)`$, i.e. $`\psi _0^+(z,\gamma ,\beta )=\psi _0^{-}(z,\gamma _1,\beta _1)`$, and it satisfies the equation $$A^{-}(z;\gamma _1,\beta _1)\psi _0^{-}(z,\gamma _1,\beta _1)=0.$$ (16) Using the SUSY transformations we can now obtain the energy level and corresponding wave function of the first excited state of the Hamiltonian $`H_{-}(z;\gamma ,\beta )`$: $$E_1^{-}=ϵ_1,\psi _1^{-}=\frac{1}{\sqrt{ϵ_1}}A^+(z,\gamma ,\beta )\psi _0^{-}(z,\gamma _1,\beta _1).$$ (17) From (15) we can now obtain the conditions of shape invariance involving the superpotential and the magnetic field. 
In explicit form these conditions read $$\begin{array}{ccc}W^2(z;\gamma )+W^{\prime }(z;\gamma )+𝐕^2(z;\beta )/4\hfill & =& W^2(z;\gamma _1)-W^{\prime }(z;\gamma _1)\hfill \\ & & +𝐕^2(z;\beta _1)/4+ϵ_1,\hfill \end{array}$$ (18) $$2W(z;\gamma )𝐕(z;\beta )+𝐕^{\prime }(z;\beta )=2W(z;\gamma _1)𝐕(z;\beta _1)-𝐕^{\prime }(z;\beta _1).$$ (19) Equation (18) is the condition for a shape invariant scalar superpotential while (19) is the equation for a shape invariant magnetic field. Comparing with equation (3) we find that in the present case the shape invariance conditions consist of four equations rather than a single one. In general it is very difficult to solve these equations for superpotentials (magnetic fields) when an arbitrary magnetic field (superpotential) is prescribed. However, for specific choices of the superpotential and magnetic field, solutions of equations (18) and (19) can still be obtained. To this end let us choose V in the form $$𝐕=g(z)𝐚+\beta 𝐛,$$ (20) where $`𝐚`$ and $`𝐛`$ are perpendicular unit vectors, i.e. $`\mathrm{𝐚𝐛}=0`$. Then equation (18) reads $$W^2(z;\gamma )+W^{\prime }(z;\gamma )+\beta ^2/4=W^2(z;\gamma _1)-W^{\prime }(z;\gamma _1)+\beta _1^2/4+ϵ_1,$$ (21) and the vector equation (19) splits into two scalar equations $`2W(z;\gamma )g(z)+g^{\prime }(z)=2W(z;\gamma _1)g(z)-g^{\prime }(z),`$ (22) $`W(z;\gamma )\beta =W(z;\gamma _1)\beta _1.`$ (23) Then from (22) we obtain $$g(z)=\lambda e^{\int ^z(W(z^{\prime };\gamma _1)-W(z^{\prime };\gamma ))𝑑z^{\prime }},$$ (24) where $`\lambda `$ is some constant. Here it is important to note that since $`g(z)`$ does not depend on the parameter $`\gamma `$, the difference between the new and the old superpotential, $`(W(z;\gamma _1)-W(z;\gamma ))`$, also does not depend on this parameter.
In order to satisfy equation (23) we now choose $$W=\gamma f(z),$$ (25) which leads to the following relation between the parameters $$\gamma \beta =\gamma _1\beta _1.$$ (26) Note that only superpotentials of the form (25) ensure that the difference $`(W(z;\gamma _1)-W(z;\gamma ))`$ is independent of the parameter $`\gamma `$. Thus the superpotentials (20) and (25) lead to a shape invariant scalar potential and magnetic field, and so the corresponding eigenvalue problem can be solved exactly. To find the exact solutions we now continue the shape invariant construction recursively and obtain the energy levels and the corresponding wave functions of $`H_-`$ in the following form $`E_n^-=\sum _{i=0}^{n}ϵ_i,ϵ_0=0,`$ (27) $`\psi _n^-(z;\gamma ,\beta )=`$ $`C_n^-A^+(z;\gamma ,\beta )\mathrm{}A^+(z;\gamma _{n-2},\beta _{n-2})A^+(z;\gamma _{n-1},\beta _{n-1})\psi _0^-(z;\gamma _n,\beta _n),`$ (28) where $`C_n^-`$ are normalization constants, $`\psi _0^-(z;\gamma _n,\beta _n)`$ is the zero energy eigenfunction of $`H_-(z;\gamma _n,\beta _n)`$ which satisfies the equation $`A^-(z;\gamma _n,\beta _n)\psi _0^-(z;\gamma _n,\beta _n)=0`$, and $`A^\pm (z;\gamma _n,\beta _n)`$ and $`H_-(z;\gamma _n,\beta _n)`$ are of the form (4) and (5) respectively with superpotentials $`W(z;\gamma _n)`$, $`𝐕(z;\beta _n)`$. In our notation $`\gamma _0=\gamma `$ and $`\beta _0=\beta `$.
In explicit form the equation determining $`\psi _0^-(z,\gamma _n,\beta _n)`$ reads $$\left(\frac{d}{dz}+\gamma _nf(z)+(g(z)𝐚+\beta _n𝐛)𝐒\right)\psi _0^-(z,\gamma _n,\beta _n)=0.$$ (29) The superpotential $`\gamma _nf(z)`$ can be eliminated from this equation by using the following transformation $$\psi _0^-(z,\gamma _n,\beta _n)=\varphi (z,\beta _n)e^{-\gamma _n\int ^zf(z^{\prime })𝑑z^{\prime }},$$ (30) where $`\varphi `$ is a two-component function which satisfies the equation $$\left(\frac{d}{dz}+(g(z)𝐚+\beta _n𝐛)𝐒\right)\varphi (z,\beta _n)=0.$$ (31) Let us now choose $`𝐚`$ parallel to the z-axis and $`𝐛`$ parallel to the x-axis. Then equation (31), which is a set of two first order coupled differential equations, can be rewritten in the form $`a^-\varphi _1(z,\beta _n)=-\frac{\beta _n}{2}\varphi _2(z,\beta _n),`$ (32) $`a^+\varphi _2(z,\beta _n)=\frac{\beta _n}{2}\varphi _1(z,\beta _n),`$ (33) where the operators $`a^\pm `$ are given by $$a^\pm =\mp \frac{d}{dz}+g(z)/2.$$ (34) The above set of first order coupled equations can easily be transformed into second order equations for $`\varphi _1`$ and $`\varphi _2`$, which read $`a^+a^-\varphi _1(z,\beta _n)=h_-\varphi _1=-\frac{\beta _n^2}{4}\varphi _1(z,\beta _n),`$ (35) $`a^-a^+\varphi _2(z,\beta _n)=h_+\varphi _2=-\frac{\beta _n^2}{4}\varphi _2(z,\beta _n).`$ (36) It is interesting to note that equations (35) and (36) have the form of eigenvalue equations corresponding to $`H_\pm `$ of one dimensional SUSY quantum mechanics (see equation (1)) with superpotential $`g(z)/2`$, where $`-\beta _n^2/4`$ can be treated as the energy, which is negative in the present case. The solutions of equations (35) and (36) need not necessarily be square integrable functions. However nonsquare integrable solutions of (35) and (36) can still be used to obtain physical solutions of the original eigenvalue equation (see equation (30)).
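As a numerical cross-check of the transformation (30), the scalar part of the reduction can be verified directly (a sketch in Python; the profile $`f(z)=\mathrm{tanh}(z)`$, the test function $`\varphi (z)=\mathrm{sin}(z)`$ and the value $`\gamma _n=2.5`$ are arbitrary illustrative choices, and the spin term is untouched because the exponential factor is a scalar):

```python
import math

def deriv(fn, z, h=1e-5):
    """Central finite-difference derivative."""
    return (fn(z + h) - fn(z - h)) / (2.0 * h)

# Illustrative choices: f(z) = tanh(z), so its antiderivative is
# F(z) = log cosh(z); phi(z) = sin(z) is an arbitrary test function.
gamma_n = 2.5
f, phi = math.tanh, math.sin
F = lambda z: math.log(math.cosh(z))
psi = lambda z: phi(z) * math.exp(-gamma_n * F(z))   # transformation (30)

# scalar part of A^- acting on psi should equal exp(-gamma_n F) * phi'
max_dev = max(abs(deriv(psi, z) + gamma_n * f(z) * psi(z)
                  - math.exp(-gamma_n * F(z)) * deriv(phi, z))
              for z in (-1.3, 0.2, 0.9))
```

The deviation `max_dev` is zero up to finite-difference error, confirming that the superpotential term drops out of equation (29) and leaves equation (31) for $`\varphi `$.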
### 3.1 Examples Case 1: The simplest superpotential which we can choose is $`W=\gamma z`$. But in this case from (21) it follows that $`\gamma _1=\gamma `$ and we find from (24) that $`g=const`$. As a result we obtain a magnetic field which does not change its direction. Therefore this case can be reduced to the standard Witten model of SUSY quantum mechanics. Case 2: Let us now consider the following superpotential: $$W=\gamma \mathrm{tanh}(z),\gamma >0.$$ (37) Then iterating the shape invariance condition (21) $`n`$ times we obtain $`ϵ_n=\gamma _{n-1}^2-\gamma _n^2+(\beta _{n-1}^2-\beta _n^2)/4,`$ (38) $`\gamma _{n-1}(\gamma _{n-1}-1)=\gamma _n(\gamma _n+1).`$ (39) Equation (39) has two solutions with respect to $`\gamma _n`$, but only one of them is acceptable from the point of view of square integrability of the wave function, and this is given by $$\gamma _n=\gamma _{n-1}-1=\gamma -n.$$ (40) Now iterating the relation (26) $`n`$ times we obtain $$\beta _n=\frac{\gamma _{n-1}}{\gamma _n}\beta _{n-1}=\frac{\gamma }{\gamma _n}\beta .$$ (41) To determine $`g(z)`$ we use (24) and obtain $$g(z)=\frac{\lambda }{\mathrm{cosh}(z)}.$$ (42) From (42) it is seen that the function $`g(z)`$ indeed does not depend on the parameters appearing in the superpotential. Now using $`W(z;\gamma )`$ and $`g(z)`$ we can calculate the scalar potential and the magnetic field in which the spin $`\frac{1}{2}`$ particle is moving: $`V_\pm =\frac{\lambda ^2/4-\gamma (\gamma \mp 1)}{\mathrm{cosh}^2(z)}+\gamma ^2+\beta ^2,`$ (43) $`𝐁_\pm =\frac{\lambda ^2}{2}(2\gamma \mp 1)\frac{\mathrm{tanh}(z)}{\mathrm{cosh}(z)}𝐚+2\gamma \beta \mathrm{tanh}(z)𝐛.`$ (44) Now let us study the eigenvalue equations (35) and (36). To determine whether the spectrum is finite or infinite it is now necessary to establish the maximum value of $`n`$.
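The recursion behind this example can be spot-checked numerically (a sketch; the parameter values $`\gamma =5`$, $`\beta =2`$, $`\lambda =0.7`$ are arbitrary): once (38), (40) and (41) are imposed, conditions (21) and (22) should hold identically in $`z`$.

```python
import math

def W(z, g):
    """Superpotential (37): W = g * tanh(z)."""
    return g * math.tanh(z)

def dW(z, g):
    """Its derivative: g / cosh(z)**2."""
    return g / math.cosh(z) ** 2

def condition_21(z, gamma, beta):
    """Both sides of condition (21) after one iteration."""
    g1 = gamma - 1.0                                    # eq. (40)
    b1 = gamma * beta / g1                              # eq. (41)
    eps1 = gamma**2 - g1**2 + (beta**2 - b1**2) / 4.0   # eq. (38)
    lhs = W(z, gamma)**2 + dW(z, gamma) + beta**2 / 4.0
    rhs = W(z, g1)**2 - dW(z, g1) + b1**2 / 4.0 + eps1
    return lhs, rhs

def condition_22(z, gamma, lam):
    """Both sides of condition (22) with g(z) = lam / cosh(z), eq. (42)."""
    g = lam / math.cosh(z)
    dg = -lam * math.tanh(z) / math.cosh(z)
    return 2 * W(z, gamma) * g + dg, 2 * W(z, gamma - 1.0) * g - dg
```

Evaluating both conditions at several values of $`z`$ gives equal left- and right-hand sides to machine precision.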
In the present case equation (35) reads $$\left(-\frac{d}{dz}+\frac{\lambda }{2\mathrm{cosh}(z)}\right)\left(\frac{d}{dz}+\frac{\lambda }{2\mathrm{cosh}(z)}\right)\varphi _1(z,\beta _n)=-\frac{\beta _n^2}{4}\varphi _1(z,\beta _n).$$ (45) The asymptotic behaviour of the solutions of equation (45) at $`|z|\rightarrow \mathrm{\infty }`$ is given by $$\varphi _1(z,\beta _n)=\mathrm{const}e^{\pm \beta _nz/2}.$$ (46) Using (32) for the second component we obtain $$\varphi _2(z,\beta _n)=\mathrm{const}e^{\pm \beta _nz/2}.$$ (47) From (46) and (47) it is seen that the solutions are not square integrable. Now to determine the asymptotic behaviour of $`\psi _0^-(z,\gamma _n,\beta _n)`$ we use (46) and (47) in (30) and obtain $$\psi _0^-(z,\gamma _n,\beta _n)=\mathrm{const}\left(\begin{array}{c}1\\ \mp 1\end{array}\right)\frac{e^{\pm \beta _nz/2}}{\mathrm{cosh}^{\gamma _n}(z)}.$$ (48) Then from the condition of square integrability of $`\psi _0^-(z,\gamma _n,\beta _n)`$ we get $$\gamma _n>|\beta _n|/2.$$ (49) Thus it follows from (49) that $$n<\gamma -\sqrt{\gamma |\beta |/2}.$$ (50) The energy levels of the Hamiltonian $`H_-(z;\gamma ,\beta )`$ are then given by $$E_n^-=\gamma ^2-(\gamma -n)^2+\frac{\beta ^2}{4}\left(1-\frac{\gamma ^2}{(\gamma -n)^2}\right).$$ (51) From (48) it follows that there are two independent square integrable solutions of equation (29) and as a result the ground state of $`H_-(z;\gamma _n,\beta _n)`$ is twofold degenerate. Now from (28) and (29) it can be shown that each energy level $`E_n^-`$ is doubly degenerate. We now proceed to determine the eigenfunctions of $`H_-(z;\gamma _n,\beta _n)`$ in explicit form. In order to do this we need the general solutions of equation (45). To solve this equation we first transform it into an equation for hypergeometric functions. Let us introduce a new variable $`x=\mathrm{sinh}(z)`$.
Then equation (45) becomes $$\left[-(1+x^2)\frac{d^2}{dx^2}-x\frac{d}{dx}+\frac{\lambda ^2}{4}\frac{1}{1+x^2}+\frac{\lambda }{2}\frac{x}{1+x^2}\right]\varphi _1=-\frac{\beta _n^2}{4}\varphi _1.$$ (52) We now introduce a new function $`f`$ defined by the relation $$\varphi _1=fe^{-\frac{\lambda }{2}\mathrm{arctan}(x)}$$ (53) and use a new variable $`\xi =(1-ix)/2`$ to obtain from equation (52) $$\left[(1-\xi )\xi \frac{d^2}{d\xi ^2}+\left(\frac{1}{2}-i\frac{\lambda }{2}-\xi \right)\frac{d}{d\xi }\right]f=-\frac{\beta _n^2}{4}f.$$ (54) This equation has two linearly independent solutions: $`f^{(1)}=F(a,b;c;\xi ),`$ (55) $`f^{(2)}=\xi ^{1-c}(1-\xi )^{c-a-b}F(1-a,1-b;2-c;\xi ),`$ (56) where $`F(a,b;c;\xi )`$ is the hypergeometric function and $`a=\beta _n/2`$, $`b=-\beta _n/2`$, $`c=1/2-i\lambda /2`$. Then using (28) we obtain in explicit form two eigenfunctions which correspond to the same energy level given by (51). As a result we conclude once more that the energy levels of $`H_-(z;\gamma ,\beta )`$ are twofold degenerate. Now let us analyze the reason for this double degeneracy of the energy levels of $`H_-(z;\gamma ,\beta )`$. Note however that this double degeneracy is not related to the SUSY of the original Hamiltonian, which consists of the two partner Hamiltonians $`H_-(z;\gamma ,\beta )`$ and $`H_+(z;\gamma ,\beta )`$. The degeneracy of $`H_-(z;\gamma ,\beta )`$ is related to the spin degrees of freedom of the Hamiltonian and also to the existence of an additional integral of motion $`T=I\sigma _y`$ in the case when $`W(-z)=-W(z)`$, where $`I`$ is the parity operator and acts according to $`If(z)=f(-z)`$. Also $`T^2=1`$ and thus this operator has two eigenvalues $`\pm 1`$. We also note that the operator of complex conjugation $`R`$, acting according to $`Rf=f^{*}`$, commutes with $`A^\pm (z;\gamma _n,\beta _n)`$ in the case when $`𝐚`$ is parallel to the z-axis and $`𝐛`$ is parallel to the x-axis.
These operators satisfy the (anti)commutation relations $`TA^\pm (z;\gamma _n,\beta _n)+A^\pm (z;\gamma _n,\beta _n)T=0,`$ (57) $`TR+RT=0,`$ (58) $`RA^\pm (z;\gamma _n,\beta _n)-A^\pm (z;\gamma _n,\beta _n)R=0.`$ (59) Furthermore the operators $`R`$ and $`T`$ commute with the Hamiltonian $`H_-(z;\gamma _n,\beta _n)`$: $$[T,H_-(z;\gamma _n,\beta _n)]=[R,H_-(z;\gamma _n,\beta _n)]=0.$$ (60) Let us now demonstrate using the above algebra that the zero energy level of $`H_-(z;\gamma _n,\beta _n)`$ is doubly degenerate. To show this let us suppose that we have at least one zero energy ground state. As a result of the commutation relation (60) this state can also be chosen as an eigenfunction of the operator $`T`$. Thus the zero energy ground state satisfies the equations $`T\psi _\lambda =\lambda \psi _\lambda ,`$ (61) $`A^-\psi _\lambda =0,`$ (62) where $`\lambda `$ takes one of the values $`+1`$ or $`-1`$. Then from (58) and (61) it follows that $`R\psi _\lambda =\psi _{-\lambda }`$. Now operating with $`R`$ from the left on (62) and using (59) we obtain $$A^-R\psi _\lambda =A^-\psi _{-\lambda }=0.$$ (63) Thus $`\psi _\lambda `$ together with $`\psi _{-\lambda }`$ are wave functions of the zero energy ground state. We can conclude that the zero energy level of the Hamiltonian $`H_-(z;\gamma _n,\beta _n)`$ is doubly degenerate. Since the $`n`$th excited state of the Hamiltonian $`H_-(z;\gamma ,\beta )`$ is related by (28) to the ground state of the Hamiltonian $`H_-(z;\gamma _n,\beta _n)`$ we conclude that all the energy levels of the Hamiltonian $`H_-(z;\gamma ,\beta )`$ are doubly degenerate. Finally we note that as a consequence of the relation (11) the zero energy ground level of the full Hamiltonian (8) is doubly degenerate while the excited levels are fourfold degenerate.
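Putting the example together, the finite spectrum (51) subject to the bound (50) is easily tabulated (a sketch; the values of $`\gamma `$ and $`\beta `$ are illustrative only, and each level listed is doubly degenerate as shown above):

```python
import math

def energy_levels(gamma, beta):
    """Levels E_n^- of H_-(z; gamma, beta) from eq. (51), for n below the
    bound n < gamma - sqrt(gamma*|beta|/2) of eq. (50)."""
    n_max = gamma - math.sqrt(gamma * abs(beta) / 2.0)
    levels = []
    n = 0
    while n < n_max:
        gn = gamma - n                                   # eq. (40)
        levels.append(gamma**2 - gn**2
                      + (beta**2 / 4.0) * (1.0 - gamma**2 / gn**2))
        n += 1
    return levels

levels = energy_levels(5.0, 2.0)   # three bound levels for this choice
```

For $`\gamma =5`$, $`\beta =2`$ the bound (50) gives $`n<5-\sqrt{5}`$, i.e. three levels starting from $`E_0^-=0`$ and increasing monotonically.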
## 4 Conclusions In the present paper we have extended the definition of shape invariance to obtain exact solutions of the eigenvalue problem for a spin $`\frac{1}{2}`$ particle moving in a scalar potential and a magnetic field. The shape invariance conditions are more complicated than in the standard case: instead of one equation for the superpotential $`W`$ we have four equations coupling the superpotential with the components of the vector function $`𝐕`$. It has been shown that if we choose a superpotential and a magnetic field satisfying the above mentioned shape invariance conditions we can obtain exact analytical solutions of the eigenvalue problem. The spectrum of the full Hamiltonian is fourfold degenerate while those of the component Hamiltonians are doubly degenerate. We have also analysed the reason for this double degeneracy and it has been shown to be due to the existence of additional integrals of motion rather than SUSY. We feel it would be of interest to find other superpotentials and magnetic fields which are shape invariant and are thus exactly soluble.
## 1 Preamble Pulsar astronomy began serendipitously in 1967 when Jocelyn Bell and Antony Hewish discovered periodic signals originating from distinct parts of the sky via pen-chart recordings taken during an interplanetary scintillation survey of the radio sky at 81.5 MHz (Hewish et al. 1968). This remarkable phenomenon has since been unequivocally linked with the radiation produced by a rotating neutron star (Gold 1968, Pacini 1968). Baade & Zwicky (1934) were the first to hypothesise the existence of neutron stars as a stable configuration of degenerate neutrons formed from the collapsed remains of a massive star after it has exploded as a supernova. Although the theory of pulsar emission is complex, the basic idea can be simply stated as follows: as a neutron star rotates, charged particles are accelerated out along its magnetic poles and emit electromagnetic radiation. The combination of rotation and the beaming of particles along the magnetic field lines means that a distant observer records a pulse of emission each time the magnetic axis crosses his/her line of sight, i.e. one pulse per stellar rotation. Like many things in life, the emission does not come for free. It takes place at the expense of the neutron star’s rotational kinetic energy — one of the key predictions of the Gold/Pacini theory. Measurements of the secular increase in pulse period through pulsar timing techniques (§4) are in excellent agreement with this idea. Pulsar astronomy has come a long way in a remarkably short space of time. Systematic surveys with the world’s largest radio telescopes over the last 30 years have revealed more than 1200 pulsars in a rich variety of astrophysical settings. The present “zoo” of objects is summarised in Fig. 1. Some of the highlights so far include: (1) The original binary pulsar B1913+16 (Hulse & Taylor 1975) — a pair of neutron stars in a 7.75-hr eccentric orbit.
The measurement of the orbital decay due to gravitational radiation from the binary system resulted in the 1993 Physics Nobel Prize. (2) The original millisecond pulsar, B1937+21, discovered by Backer et al. (1982) has a period of only 1.5578 ms. The implied spin frequency of 642 Hz means that this neutron star is close to being torn apart by centrifugal forces. (3) Pulsars with planetary companions. Well before the discoveries of Jupiter-mass planets by optical astronomers, Wolszczan & Frail (1992) discovered the millisecond pulsar B1257+12 accompanied by three Earth-mass planets — the first planets discovered outside our Solar system. (4) Pulsars in globular clusters. Dense globular clusters are unique breeding grounds for exotic binary systems and millisecond pulsars. Discoveries of 40 or so “cluster pulsars” have permitted detailed studies of pulsar dynamics in the cluster potential, and of the cluster mass distribution. The plan for the rest of this review is as follows: §2 covers pulse dispersion and what it tells us about pulsars and the interstellar medium; §3 discusses pulse profiles and their implications. In §4 the essential aspects of pulsar timing observations are reviewed. §5 reviews the techniques employed by pulsar searchers. Finally, in §6, we summarise some of the recent results from searches at Parkes. More complete discussions on the observational aspects covered here can be found in Lyne & Smith (1998). ## 2 Pulse Dispersion and the Interstellar Medium Newcomers to pulsar astronomy would do well to begin their studies by reading the discovery paper (Hewish et al. 1968), a classic article packed with observational facts and their implications. One of the phenomena clearly noted in the discovery paper was pulse dispersion — pulses at higher radio frequencies arrive earlier at the telescope than their lower frequency counterparts. An example of this is shown in Fig. 2. Hewish et al. 
correctly interpreted the effect as the frequency dependence of the group velocity of radio waves as they propagate through the interstellar medium — a cold ionised plasma. Applying standard plasma physics formulae, it can be shown (see e.g. Lyne & Smith 1998) that the difference in arrival times $`\mathrm{\Delta }t`$ between a high frequency $`\nu _{\mathrm{hi}}`$ (MHz) and a low frequency $`\nu _{\mathrm{lo}}`$ (MHz) is given by $$\mathrm{\Delta }t=4.15\times 10^6\mathrm{ms}\times (\nu _{\mathrm{lo}}^{-2}-\nu _{\mathrm{hi}}^{-2})\times \mathrm{DM},$$ (1) where the dispersion measure DM (cm<sup>-3</sup> pc) is the integrated column density of free electrons along the line of sight: $$\mathrm{DM}=\int _0^dn_\mathrm{e}𝑑l.$$ (2) Here, $`d`$ is the distance to the pulsar (pc) and $`n_\mathrm{e}`$ is the free electron density (cm<sup>-3</sup>). Pulsars at large distances have higher column densities, and therefore larger DMs, than pulsars closer to Earth so that, from Eq. 1, the dispersive delay across the bandwidth is greater. In the original discovery paper, Hewish et al. (1968) measured a delay for the first pulsar (B1919+21) of $`0.2`$ s between 81.5 and 80.5 MHz. From Eq. 1, we infer a DM of about 13 cm<sup>-3</sup> pc. Assuming, as a first-order approximation, that the mean Galactic electron density is 0.03 cm<sup>-3</sup> (Ables & Manchester 1976), this implies a distance of about 0.4 kpc. (Hewish et al. (1968) assumed 0.2 cm<sup>-3</sup>, which resulted in an underestimated distance, but still clearly demonstrated that the source is located well beyond the solar system.) The most straightforward method to compensate for pulse dispersion is to use a filterbank to sample the passband as a number of contiguous channels and apply successively larger time delays (calculated from Eq. 1) to higher frequency channels before summing over all channels to produce a sharp “de-dispersed” profile. This can be carried out either in hardware or in software. Fig.
3 shows the clear gain in signal-to-noise ratio and time resolution achieved when the data from Fig. 2 are properly de-dispersed, as opposed to a simple detection over the whole bandwidth. The fact that the free electrons in the Galaxy are finite in extent is well demonstrated by Fig. 4 which shows the dispersion measures of 700 pulsars plotted against the absolute value of their respective Galactic latitudes. It is straightforward to show that, for a simple slab model of free electrons with a mean density $`n`$ and half-height $`H`$, the maximum DM for a given line of sight along a latitude $`b`$ is $`Hn/\mathrm{sin}b`$. The solid curve in Fig. 4 shows fairly convincingly that this simple model accounts for the trend rather well, implying that $`Hn\simeq 30`$ pc cm<sup>-3</sup>. Taking $`n=0.03`$ cm<sup>-3</sup> as before gives us a first-order estimate of the thickness of the electron layer — about 1 kpc. Independent measurements of pulsar distances can, for a large enough sample, be fed back into Eq. 2 to calibrate the Galactic distribution of free electrons. There are three basic distance measurement techniques: neutral hydrogen absorption, trigonometric parallax (measured either with an interferometer or through pulse time-of-arrival techniques) and from associations with objects of known distance (i.e. supernova remnants, globular clusters and the Magellanic Clouds). Together, these provide measurements of (or limits on) the distances to over 100 pulsars. Taylor & Cordes (1993) have used these distances, together with measurements of interstellar scattering for various Galactic and extragalactic sources, to calibrate an electron density model. In a statistical sense, the model can be used to provide distance estimates with an uncertainty of $`\sim `$ 30%. Although the model is free of large systematic trends, its use to estimate distances to individual pulsars may result in systematic errors by as much as a factor of two.
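A short sketch makes Eq. 1 concrete (the filterbank layout in the de-dispersion helper is invented for illustration): the first part recovers the DM of B1919+21 from the delay quoted above, and the second applies the same delay formula channel by channel.

```python
import numpy as np

def dispersion_delay_ms(nu_lo_mhz, nu_hi_mhz, dm):
    """Arrival-time delay of nu_lo behind nu_hi (Eq. 1), in milliseconds."""
    return 4.15e6 * (nu_lo_mhz**-2 - nu_hi_mhz**-2) * dm

# DM of PSR B1919+21 from the 0.2 s delay between 81.5 and 80.5 MHz
dm_1919 = 200.0 / dispersion_delay_ms(80.5, 81.5, 1.0)   # about 13 cm^-3 pc
dist_kpc = dm_1919 / 0.03 / 1000.0                        # n_e = 0.03 cm^-3, about 0.4 kpc

def dedisperse(data, freqs_mhz, dm, tsamp_ms):
    """Incoherent de-dispersion: shift each filterbank channel so it lines up
    with the highest-frequency channel, then sum over channels."""
    f_ref = freqs_mhz.max()
    out = np.zeros(data.shape[1])
    for ch, f in enumerate(freqs_mhz):
        shift = int(round(dispersion_delay_ms(f, f_ref, dm) / tsamp_ms))
        out += np.roll(data[ch], -shift)
    return out
```

With the correct trial DM the per-channel pulses add coherently into one sharp profile, which is exactly the gain shown in Fig. 3.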
## 3 Erratic Individual Pulses and Stable Integrated Profiles Pulsars are weak radio sources. Mean flux densities, usually quoted in the literature at a radio frequency of 400 MHz, vary between 1 and 100 mJy (1 Jy $`=10^{-26}`$ W m<sup>-2</sup> Hz<sup>-1</sup>). This means that the addition of many thousands of pulses is required in order to produce a discernible profile. Only a handful of sources presently known are strong enough to allow studies of individual pulses. A remarkable fact from these studies is that, although the individual pulses vary quite dramatically, at any particular observing frequency the integrated profile is very stable. This is illustrated in Fig. 5. In the above examples of stable pulse profiles, which have been normalised to represent 360 degrees of rotational phase, the astute reader will notice two examples of so-called interpulses — a secondary pulse separated by about 180 degrees from the main pulse. The most natural interpretation for this phenomenon is that the two pulses originate from opposite magnetic poles of the neutron star (see however Manchester & Lyne 1977). Geometrically speaking, this is a rather unlikely situation. As a result, the fraction of the known pulsars with interpulses is only a few percent. The integrated pulse profile should really be thought of as a unique “fingerprint” of the radio emission beam of each neutron star. The rich variety of pulse shapes can be attributed to different line-of-sight cuts through the radio beam of the neutron star as it sweeps past the Earth. Two contrasting phenomenological models which account for this are shown in Fig. 6. The “core and cone” model, proposed by Rankin (1983), depicts the beam as a core surrounded by a series of nested cones. Alternatively, the “patchy beam” model, championed by Lyne & Manchester (1988), has the beam populated by a series of emission regions.
## 4 Pulsar Timing Basics Soon after their discovery, it became clear that pulsars are excellent celestial clocks. Hewish et al. (1968) demonstrated that the period of the first pulsar, B1919+21, was stable to one part in $`10^7`$ over a time-scale of a few months. Following the discovery of the millisecond pulsar, B1937+21, in 1982 (Backer et al. 1982) it was demonstrated that its period could be measured to one part in $`10^{13}`$ or better (Davis et al. 1985). This unrivaled stability leads to a host of applications including time keeping, probes of relativistic gravity and natural gravitational wave detectors. Subsequently, a whole science has developed to accurately measure the pulse time-of-arrival in order to extract as much information about each pulsar as possible. Fig. 7 summarises the essential steps involved in a pulse “time-of-arrival” (TOA) measurement. Incoming pulses emitted by the rotating neutron star traverse the interstellar medium before being received by the radio telescope. After amplification by high sensitivity receivers, the pulses are de-dispersed (§2) and added to form a mean pulse profile. During the observation, the data regularly receive a time stamp, usually based on a maser at the observatory, plus a signal from the GPS (Global Positioning System of satellites) time system. The TOA of this mean pulse is then defined as the arrival time of some fiducial point on the profile. Since the mean profile has a stable form at any given observing frequency (§3) the TOA can be accurately determined by a simple cross-correlation of the observed profile with a high signal-to-noise “template” profile — obtained from the addition of many observations of the pulsar at a particular observing frequency. In order to properly model the rotational behaviour of the neutron star, we require TOAs as measured by an inertial observer. 
Due to the Earth’s orbit around the Sun, an observatory located on Earth experiences accelerations with respect to the neutron star. The observatory is therefore not in an inertial frame. To a very good approximation, the centre-of-mass of the solar system, the solar system barycentre, can be regarded as an inertial frame. It is standard practice to transform the observed TOAs to this frame using a planetary ephemeris. Following the accumulation of about ten to twenty barycentric TOAs from observations spaced over at least several months, a surprisingly simple model can be applied to the TOAs and optimised so that it is sufficient to account for the arrival time of any pulse emitted during the time span of the observations, and predict the arrival times of subsequent pulses. The model is based on a Taylor expansion of the angular rotational frequency about a model value at some reference epoch to calculate a model pulse phase as a function of time. Based upon this simple model, and using initial estimates of the position, dispersion measure and pulse period, a “timing residual” is calculated for each TOA as the difference between the observed and predicted pulse phases (see e.g. Lyne & Smith 1998). Ideally, the residuals should have a zero mean and be free from any systematic trends (Fig. 8a). Inevitably, however, due to our a-priori ignorance of the rotational parameters, the model needs to be refined in a bootstrap fashion. Early sets of residuals will exhibit a number of trends indicating a systematic error in one or more of the model parameters, or a parameter not initially incorporated into the model. For example, a parabolic trend results from an error in the period time derivative (Fig. 8b). Additional effects will arise if the assumed position of the pulsar used in the barycentric time calculation is incorrect. A position error of just one arcsecond results in an annual sinusoid (Fig. 
8c) with a peak-to-peak amplitude of about 5 ms for a pulsar on the ecliptic; this is easily measurable for typical TOA uncertainties of order one milliperiod or better. Similarly, the effect of a proper motion produces an annual sinusoid of linearly increasing magnitude (Fig. 8d). Proper motions are, for many long-period pulsars, more difficult to measure due to the confusing effects of timing noise (a random walk process seen in the timing residuals; see e.g. Cordes & Helfand 1980). For these pulsars, interferometric techniques can be used to obtain proper motions (see Ramachandran’s contribution and references therein). In summary, a phase-connected timing solution obtained over an interval of a year or more will, for an isolated pulsar, provide an accurate measurement of the period, the rate at which the neutron star is slowing down, and the position of the neutron star. Presently, measurements of these essential parameters are available for about 600 pulsars. The implications of these measurements for the ages and magnetic fields of neutron stars will be discussed in my other contribution to this volume. For binary pulsars, the model needs to be extended to incorporate the additional radial accelerations of the pulsar as it orbits the common centre-of-mass of the binary system. Treating the binary orbit using just Kepler’s laws to refer the TOAs to the binary barycentre requires five additional model parameters: the orbital period, projected semi-major orbital axis, orbital eccentricity, longitude of periastron and the epoch of periastron passage. The Keplerian description of the orbit is identical to that used for spectroscopic binary stars where a characteristic orbital “velocity curve” shows the radial component of the star’s velocity as a function of time. The analogous plot for pulsars is the apparent pulse period against time. For circular orbits the behaviour is sinusoidal whilst for eccentric orbits the curve has a “saw-tooth” appearance. 
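The phase model and residuals described above can be sketched as follows (the spin parameters are invented for illustration): the model phase is a Taylor expansion in rotational frequency, and a residual is the offset of each barycentric TOA from the nearest integer pulse number.

```python
import numpy as np

def model_phase(t, nu, nudot, t0=0.0):
    """Pulse phase in turns: Taylor expansion of the spin frequency about t0."""
    dt = np.asarray(t) - t0
    return nu * dt + 0.5 * nudot * dt**2

def timing_residuals(toas, nu, nudot):
    """Offset of each TOA from the nearest integer pulse number, in turns."""
    ph = model_phase(toas, nu, nudot)
    return ph - np.round(ph)

# Fake TOAs for a 10 Hz pulsar with nudot = -1e-12 s^-2 (hypothetical values)
nu, nudot = 10.0, -1e-12
N = np.arange(0.0, 1.0e6, 5.0e4)                  # pulse numbers over ~1 day
toas = N / nu - 0.5 * nudot * (N / nu)**2 / nu    # first-order inversion of the phase model

res_good = timing_residuals(toas, nu, nudot)      # essentially zero: the model fits
res_bad = timing_residuals(toas, nu, 0.0)         # wrong nudot: parabolic trend
```

With the correct parameters the residuals are flat (Fig. 8a); zeroing the period derivative reproduces the parabolic trend of Fig. 8b.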
Two examples are shown in Fig. 9. Also by analogy with spectroscopic binaries, constraints on the mass of the orbiting companion can be placed by combining the projected semi-major axis $`a_\mathrm{p}\mathrm{sin}i`$ and the orbital period $`P_\mathrm{o}`$ to obtain the mass function: $$f(m_\mathrm{p},m_\mathrm{c})=\frac{4\pi ^2}{G}\frac{(a_\mathrm{p}\mathrm{sin}i)^3}{P_\mathrm{o}^2}=\frac{(m_\mathrm{c}\mathrm{sin}i)^3}{(m_\mathrm{p}+m_\mathrm{c})^2},$$ (3) where $`G`$ is the universal gravitational constant. Assuming a pulsar mass $`m_\mathrm{p}`$ of 1.35 M<sub>⊙</sub> (see below), the mass of the orbiting companion $`m_\mathrm{c}`$ can be estimated as a function of the (initially unknown) angle $`i`$ between the orbital plane and the plane of the sky. The minimum companion mass $`m_{\mathrm{min}}`$ occurs when the orbit is assumed edge-on ($`i=90^{\circ }`$). Further information on the orbital inclination and component masses may be obtained by studying binary systems which exhibit a number of relativistic effects not described by Kepler’s laws. Up to five “post-Keplerian” parameters exist within the framework of general relativity. Three of these parameters (the rate of periastron advance, a gravitational redshift parameter, and the orbital period derivative) have been measured for the original binary pulsar, B1913+16 (Taylor & Weisberg 1989), allowing high-precision measurements of the masses of both components, as well as stringent tests of general relativity. A further two post-Keplerian parameters related to the Shapiro delay in the double neutron star system PSR B1534+12 have now also been measured (Stairs et al. 1998). Based on these results, and other radio pulsar binary systems, Thorsett & Chakrabarty (1999) have recently demonstrated that the range of neutron star masses has a remarkably narrow underlying Gaussian distribution with a mean of 1.35 M<sub>⊙</sub> and a standard deviation of only 0.04 M<sub>⊙</sub>.
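Eq. 3 is easily inverted numerically for the minimum companion mass (a sketch; the example value of the mass function is made up):

```python
from math import pi

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg

def mass_function_msun(ap_sini_m, porb_s):
    """f(m_p, m_c) = 4 pi^2 (a_p sin i)^3 / (G P_o^2), in solar masses (Eq. 3)."""
    return 4.0 * pi**2 * ap_sini_m**3 / (G * porb_s**2) / M_SUN

def min_companion_mass(f_msun, mp=1.35):
    """Smallest m_c (solar masses) solving m_c^3/(m_p+m_c)^2 = f, i.e. i = 90 deg.
    The left-hand side is monotonic in m_c, so simple bisection suffices."""
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid**3 / (mp + mid)**2 < f_msun:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mc_min = min_companion_mass(0.13)   # f = 0.13 solar masses (illustrative value)
```

Any inclination smaller than 90 degrees only increases the inferred companion mass, which is why this is a strict lower limit.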
## 5 Pulsar Searching Pulsar searching is, conceptually at least, a rather simple process — the detection of a dispersed, periodic signal hidden in noisy time series data taken with a radio telescope. In what follows we give only a brief description of the basic search techniques. Further discussions can be found in Lyne (1988), Nice (1992) and Lorimer (1998) and references therein. Most pulsar searches can be pictured as shown in Fig. 10. The multi-channel search data is typically collected using a filterbank or a correlator (see e.g. Backer et al. 1990), either of which usually provides a much finer channelisation than the eight channels shown for illustrative purposes in Fig. 10. The channels are then incoherently de-dispersed (see §2) to form a single noisy time series. An efficient way to find a periodic signal in these data is to take the Fast Fourier Transform (FFT) and plot the resulting amplitude spectrum. For a narrow duty cycle the spectrum will show a family of harmonics which show clearly above the noise. To detect weaker signals still, a harmonic summing technique is usually implemented at this stage (see e.g. Lyne 1988). The best candidates are then saved and the whole process is repeated for another trial DM. After a sufficiently large number of DMs have been processed, a list of pulsar candidates is compiled and it is then a matter of folding the raw time series data at the candidate period. Fig. 11 is an excellent example of the characteristics of a strong pulsar candidate. The high signal-to-noise integrated profile (top left panel) can be seen as a function of time and radio frequency in the grey scales (lower left and right panels). In addition, the dispersed nature of the signal is immediately evident in the upper right hand panel which shows the signal-to-noise ratio as a function of trial DM. This combination of diagnostics proves extremely useful in differentiating between pulsar candidates and spurious interference.
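The FFT-plus-harmonic-summing stage can be sketched as follows (a toy example with invented numbers; production codes add interbinning, multiple harmonic folds and many other refinements):

```python
import numpy as np

def harmonic_sum(spectrum, n_harm=4):
    """Add the amplitude at 2f, 3f, ..., n_harm*f onto each fundamental bin."""
    total = spectrum.copy()
    for h in range(2, n_harm + 1):
        idx = np.arange(len(spectrum)) * h
        ok = idx < len(spectrum)
        total[ok] += spectrum[idx[ok]]
    return total

rng = np.random.default_rng(42)
n, period = 16384, 128                 # samples; one pulse every 128 samples
ts = rng.normal(0.0, 1.0, n)           # de-dispersed noise
ts[::period] += 6.0                    # weak, narrow pulses
spec = np.abs(np.fft.rfft(ts))
spec[0] = 0.0                          # ignore the DC bin
summed = harmonic_sum(spec)
# the fundamental sits at bin n/period = 128, far above the noise floor
```

Because a narrow pulse spreads its power over many harmonics, summing them recovers sensitivity that a single-bin search would lose.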
In the discussion hitherto we have implicitly assumed that the apparent pulse period remains constant throughout the observation. For searches with long integration times (Fig. 11 represents a 35-min observation), this assumption is only valid for solitary pulsars, or those in binary systems whose orbital periods are longer than about a day. For shorter-period binary systems, as noted by Johnston & Kulkarni (1992), the Doppler shifting of the period results in a spreading of the total signal power over a number of frequency bins in the Fourier domain. Thus, a narrow harmonic becomes smeared over several spectral bins. To quantify this effect, consider the resolution in fluctuation frequency (the width of a Fourier bin) $`\mathrm{\Delta }f=1/T`$, where $`T`$ is the length of the integration. It is straightforward to show that the drift in frequency of a signal due to a constant acceleration $`a`$ during this time is $`aT/(Pc)`$, where $`P`$ is the true period of the pulsar and $`c`$ is the speed of light. (The smearing is even more severe if $`a`$ varies, i.e. for extremely short orbital periods.) Comparing these two quantities, we note that the signal will drift into more than one spectral bin if $`aT^2/(Pc)>1`$. Thus, without due care, long integration times potentially kill off all sensitivity to short-period pulsars in exciting tight orbits where the line-of-sight accelerations are high! As an example of this effect, as seen in the time domain, Fig. 12 shows a 22.5-min search-mode observation of Hulse & Taylor’s binary pulsar B1913+16. Although this observation covers only about 5% of the 7.75-hr orbit, the effects of the Doppler smearing on the pulse signal are very apparent.
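A short numerical check of the $`aT^2/(Pc)>1`$ criterion, using round numbers appropriate to the B1913+16 observation discussed above:

```python
C = 2.998e8  # speed of light (m/s)

def drift_in_bins(a, t_obs, period):
    """Number of Fourier bins (width 1/T) the pulse frequency drifts across
    during an observation of length t_obs for line-of-sight acceleration a."""
    return a * t_obs**2 / (period * C)

# Round numbers for the observation above: P = 59 ms, a ~ 17 m/s^2
bins = drift_in_bins(17.0, 22.5 * 60.0, 0.059)   # ~1.8 bins: already smeared
# Longest integration that keeps the signal inside a single spectral bin
t_max = (0.059 * C / 17.0) ** 0.5                # ~17 minutes
```

Even this modest acceleration limits a simple FFT search on B1913+16 to integrations of a quarter of an hour or so.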
Whilst the search code nominally detects the pulsar with a signal-to-noise ratio of 9.5 for this observation, it is clear that the Doppler shifting of the pulse period seen in the individual sub-integrations results in a significant reduction in the signal-to-noise ratio. Pulsar searches of distant globular clusters are most prone to this effect, since long integration times are required to reach a reasonable level of sensitivity. Since one of the motivations for searching clusters is their high specific incidence of low-mass X-ray binaries, it is likely that short orbital period pulsars will also be present. Anderson et al. (1990) were the first to really address this problem during their survey of a number of globular clusters using the Arecibo radio telescope. In the so-called acceleration search, the pulsar is assumed to have a constant acceleration ($`a`$) during the integration. Each de-dispersed time series can then be re-sampled to refer it to the frame of an observer with an identical acceleration. This transformation is readily achieved by applying the Doppler formula to relate a time interval in the pulsar frame, $`\tau `$, to that in the observed frame at time $`t`$, as $`\tau (t)=\tau _0(1+at/c)`$, where $`a`$ is the observed radial acceleration of the pulsar along the line-of-sight, $`c`$ is the speed of light, and $`\tau _0`$ is a normalising constant (for further details, see Camilo et al. 2000a). If the correct acceleration is chosen, then the net effect is a time series containing a signal with a constant period which can be found using the standard pulsar search outlined above. An example of this is shown in the right panel of Fig. 12, where the time series has been re-sampled assuming a constant acceleration of –17 m s<sup>-2</sup>. The signal-to-noise ratio is increased to 67! The true acceleration is, of course, a priori unknown, meaning that a large number of acceleration values must be tried in order to “peak up” on the correct value.
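The re-sampling step can be demonstrated on a synthetic signal. The acceleration used below is enormously exaggerated (a real pulsar acceleration produces negligible smearing over an 8-s series), purely so that a short example shows the effect:

```python
import numpy as np

C = 2.998e8                      # speed of light (m/s)
fs, t_obs = 4096.0, 8.0          # sampling rate (Hz) and series length (s)
f0 = 500.0                       # true spin frequency (Hz)
a_true = 1.5e5                   # line-of-sight acceleration (m/s^2); hugely
                                 # exaggerated so a short series shows the effect

t = np.arange(int(fs * t_obs)) / fs
# Integrating the interval relation above gives a pulsar-frame clock
# tau(t) = t + a t^2 / (2c) for constant acceleration
tau_of_t = t + a_true * t**2 / (2.0 * C)
data = np.sin(2.0 * np.pi * f0 * tau_of_t)     # observed drifting-period signal

def peak_amplitude(series):
    """Height of the tallest peak in the amplitude spectrum."""
    return float(np.abs(np.fft.rfft(series)).max())

# Re-sample onto a grid uniform in tau: the series an observer co-accelerating
# with the pulsar would have recorded
resampled = np.interp(t, tau_of_t, data)

smeared = peak_amplitude(data)          # power spread over many Fourier bins
recovered = peak_amplitude(resampled)   # power concentrated back into one bin
```

With the correct trial acceleration the re-sampled series is a constant-period tone again, and the spectral peak is restored to a single bin.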
Although this necessarily adds an extra dimension to the parameter space searched, it can pay handsome dividends, particularly in globular clusters where the dispersion measure is well constrained by pulsars already discovered in the same cluster. Anderson et al. (1990) used this technique to find PSR B2127+11C — a double neutron star binary in M15 which has parameters similar to B1913+16. Camilo et al. (2000a) have recently applied the same technique to 47 Tucanae, a globular cluster previously known to contain 11 millisecond pulsars, to aid the discovery of a further 9 binary millisecond pulsars in the cluster. The new discoveries in 47 Tucanae include a 3.48-ms pulsar in a 96-min orbit around a low-mass companion (Camilo et al. 2000a). Whilst this is presently the shortest orbital period for a radio pulsar binary, the mere existence of this pulsar, as well as the 11-min X-ray binary X1820-303 in the globular cluster NGC 6624 (Stella et al. 1987), strongly suggests that there is a population of extremely short-period radio pulsar binaries residing in globular clusters, just waiting to be found. As Camilo et al. demonstrate, the assumption of a constant acceleration during the observation clearly breaks down for such short orbital periods, requiring alternative techniques. One obvious extension is to include a search over the time derivative of the acceleration. This is currently being tried on some of the 47 Tucanae data. Although this does improve the sensitivity to short-period binaries, it is computationally rather costly. An alternative technique developed by Ransom, Cordes & Eikenberry (in prep.; see also astro-ph/9911073) looks to be particularly efficient at finding binaries whose orbits are so short that many orbits can take place during an integration. This phase modulation technique exploits the fact that the periodic signals from such a binary are modulated by the orbit to create a family of periodic sidebands around the nominal spin period of the pulsar.
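A toy illustration of the sideband structure that the phase modulation technique exploits (the modulation depth below is an arbitrary demonstration value):

```python
import numpy as np

fs, t_obs = 1024.0, 4.0        # sampling rate (Hz) and series length (s)
f_spin, f_orb = 100.0, 4.0     # spin and orbital frequencies (Hz)
a_mod = 2.0                    # phase-modulation depth (rad); set by the
                               # projected orbit size in a real binary

t = np.arange(int(fs * t_obs)) / fs
# Spin signal whose phase is swept back and forth by the orbital motion
s = np.sin(2.0 * np.pi * f_spin * t + a_mod * np.sin(2.0 * np.pi * f_orb * t))

spec = np.abs(np.fft.rfft(s))
bin_of = lambda f: int(round(f * t_obs))   # frequency (Hz) -> Fourier bin

carrier = float(spec[bin_of(f_spin)])           # the nominal spin frequency
sideband = float(spec[bin_of(f_spin + f_orb)])  # first orbital sideband
```

The spectrum contains a comb of sidebands at the spin frequency plus and minus integer multiples of the orbital frequency; for strong modulation the sidebands can even exceed the carrier, which is why searching for the sideband family is more sensitive than searching for the spin peak alone.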
This technique appears to be extremely promising and is currently being applied to radio and X-ray search data. ## 6 Recent Survey Highlights — The Parkes Multibeam Survey No current review on pulsar searching would be complete without summarising the revolution in the field that is presently taking place at the 64-m Parkes radio telescope in New South Wales, Australia. With $`13\times 20`$-cm 25-K receivers on the sky, along with $`13\times 2\times 288`$-MHz filterbanks, the telescope is presently making major contributions to a number of different pulsar search projects. In its main use for a Galactic plane survey (Camilo et al. 2000b), the system achieves a sensitivity of 0.15 mJy in 35 min and covers about one square degree of sky per hour of observing — a standard that is far beyond the present capabilities of any other observatory. The staggering total of over 500 new pulsars has come from an analysis of about half the total data. Such a large haul is resulting in significant numbers of interesting individual objects: several of the new pulsars are observed to be spinning down at high rates, suggesting that they are young objects with large magnetic fields. The inferred age for the 400-ms pulsar J1119-6127, for example, is only 1.6 kyr. Another member of this group is the 4-s pulsar J1814-1744, an object that may fuel the ever-present “injection” controversy surrounding the initial spin periods of neutron stars (see, however, my other contribution to these proceedings). A number of the new discoveries from the survey have orbiting companions. Several low-eccentricity systems are known where the likely companions are white dwarf stars. Two possible double neutron star systems are presently known: J1811-1736 (see Fig. 9) is one, while J1141-65 has a lower eccentricity but an orbital period of only 4.75 hr. The fact that J1141-65 may have a characteristic age of just over 1 Myr implies that the likely birth-rate of such objects may be large.
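The ages quoted above (1.6 kyr for J1119-6127, just over 1 Myr for J1141-65) are characteristic spin-down ages, $`\tau =P/(2\dot{P})`$, a standard estimate that assumes dipole braking; the period derivative used below is an approximate value assumed for J1119-6127, not a measurement quoted in this text:

```python
SEC_PER_YEAR = 3.156e7

def characteristic_age_yr(p, p_dot):
    """Spin-down age tau = P / (2 Pdot), valid for dipole braking and a birth
    period much shorter than the present period."""
    return p / (2.0 * p_dot) / SEC_PER_YEAR

# Approximate parameters for J1119-6127: P ~ 0.41 s, Pdot ~ 4e-12 s/s
age = characteristic_age_yr(0.41, 4.0e-12)   # ~1.6 kyr, as quoted above
```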
Although tempting, it is premature to extrapolate from the properties of one object. It is, however, clear that these binary systems, and the many which will undoubtedly come from this survey, will teach us much about the still poorly-understood population of double neutron star systems (see Kalogera’s contribution in this volume). The Parkes multibeam system has not only been finding young, distant pulsars along the Galactic plane. Edwards et al. (in prep.; see also astro-ph/9911221) have been using the same system to search intermediate Galactic latitudes ($`5^{\circ }<|b|<15^{\circ }`$). The discovery of 8 short-period pulsars during this search, not to mention 50 long-period objects, strongly supports a recent suggestion by Toscano et al. (1998) that L-band ($`\lambda \sim `$ 20 cm) searches are an excellent means of finding relatively distant millisecond pulsars. The most massive binary system yet from either of these two multibeam surveys is J1740-3052, whose orbiting companion must be at least 11 M<sub>⊙</sub>! Recent optical observations (Manchester et al. 2000) reveal a K-supergiant as the likely companion star in this system. With such high-mass systems in the Galaxy, surely it is only a matter of time before a radio pulsar is found orbiting a stellar-mass black hole. Having finally made the connection between neutron stars and black holes, I will finish by reiterating that pulsar astronomy is currently enjoying one of the most productive phases in its history. The new discoveries are sparking off a variety of follow-up studies of all the exciting new objects. There will surely be plenty of surprises in the coming years and new students are encouraged to join this hive of activity. ###### Acknowledgements. Many thanks to Chris Salter and Fernando Camilo for their comments on an earlier version of this manuscript. The Arecibo Observatory is operated by Cornell University under a cooperative agreement with the NSF.
# CHANDRA X-RAY DETECTION OF THE RADIO HOTSPOTS OF 3C295

<sup>1</sup>Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 <sup>2</sup>Department of Engineering Physics, University of Wollongong, Wollongong NSW 2522, Australia <sup>3</sup>School of Physics & Astronomy, University of Birmingham, Birmingham B15 2TT, UK <sup>4</sup>Massachusetts Institute of Technology, Center for Space Research, Cambridge, MA 02139 <sup>5</sup>Dept. Physics, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, UK <sup>6</sup>Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822 <sup>7</sup>University of Manchester, Jodrell Bank Observatory, Macclesfield, Cheshire SK11 9DL, UK <sup>8</sup>Dept. Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218 <sup>9</sup>http://asc.harvard.edu/udocs/docs/docs.html

## 1. Introduction X-ray emission from knots and hotspots in radio jets has been detected in only a handful of objects. The three processes normally considered for X-ray emission from these features are synchrotron, thermal, and synchrotron self-Compton (SSC) emissions. The hotspots of 3C295 (Taylor & Perley 1992) have radio brightness temperatures comparable to those of Cygnus A but are so close to the nucleus that previous X-ray systems could not resolve them. However, the Chandra X-ray Observatory<sup>9</sup> has the ability not only to separate the emission from the 3C295 cluster gas, core, and hotspots, but in addition, it allows us to obtain spectra of each component. SSC models predict X-ray intensities in agreement with those observed only for the case of the hotspots of Cygnus A (Harris, Carilli, & Perley 1994). The fact that the observed X-ray flux agrees with the calculated SSC flux for a magnetic field strength equal to the classical estimate from equipartition lends credence to the SSC model but does not prove it. For all the other previously detected knots and hotspots, the predicted SSC flux falls well short of the observed flux, often by two or three orders of magnitude. In this paper we present Chandra observations of 3C295, describe the basic results, and evaluate the emission process for the X-rays from the radio hotspots. We use H<sub>0</sub>=70 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.7, and $`\mathrm{\Omega }_M`$=0.3. At a redshift of 0.461, the luminosity distance is 2564 Mpc and 1<sup>′′</sup>=5.8 kpc. In discussing power law spectra we follow the convention that the flux density is S=$`\mathrm{k}\nu ^{-\alpha }`$. ## 2. Data Analysis The calibration observation of 3C295 was performed on 1999 Aug 30 for an elapsed time of 20,408 s. The target was near the aim point on the S3 ACIS chip. We generated a clean data set by selecting the standard grade set (0,2,3,4,6), energies less than 10 keV, and excising times with enhanced background rates. The screened exposure time for the observation is 17,792 s. During standard processing, photon events are assigned fractional pixel locations due to spacecraft dither, rotation between detector and sky coordinates, and an additional randomization within each $`0^{\prime \prime }\text{.}5`$ ACIS pixel. We used the fractional pixel values to generate images with $`0^{\prime \prime }\text{.}1`$ pixels. Figure 1 shows the central region after adaptive smoothing with a Gaussian constrained to have $`\sigma 0^{\prime \prime }\text{.}3`$. Overlaid are the radio contours from a 20 cm MERLIN+VLA image (Leahy et al., in preparation). ### 2.1.
X-ray Morphology The ACIS-S image in Figure 1 shows that the central region of the 3C295 cluster exhibits significant structure, with an X-ray core and two outer features aligned with the radio hotspots. This observation is an excellent example of the resolving power of Chandra. 3C295 was previously observed by the Einstein HRI (Henry & Henriksen 1986) and the ROSAT HRI (Neumann 1999), but the spatial resolution of these instruments was comparable to the separation of the two X-ray hotspots and these features were thus not detected. The ACIS image in Figure 2 shows that the core and two hotspots are not simple, azimuthally symmetric features, suggesting that these regions are resolved by Chandra. To determine if the core and hotspots are extended we must model the cluster emission. The region outside a $`3^{\prime \prime }`$ radius is well fit with a $`\beta `$ model with $`\beta `$=0.54 and core radius $`a=4^{\prime \prime }\text{.}1`$ (24 kpc). The residual X-ray data above this model are shown superimposed on the HST image in Figure 2. Inside $`r=3^{\prime \prime }`$ there is clearly excess emission above the $`\beta `$ model, surrounding the nuclear and hotspot sources. We attempted to remove this by fitting a double $`\beta `$ model to the data with the nuclear and hotspot regions excluded. This removed the surrounding emission fairly effectively, allowing us to study the size of the central components.

Fig. 1. The 20 cm MERLIN data (contours) overlaid on the ACIS image. The radio contours are logarithmically spaced from 4 mJy to 2.0 Jy, shown with a restoring beamsize of $`0^{\prime \prime }\text{.}14`$. The X-ray data have been shifted by $`0^{\prime \prime }\text{.}66`$ to align the X-ray and radio cores. The NW hotspot is at a projected distance of $`1^{\prime \prime }\text{.}9`$ (11 kpc) from the core and the SE hotspot is $`2^{\prime \prime }\text{.}75`$ (16 kpc) from the core.
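For reference, the $`\beta `$ model used for the cluster fit is the standard isothermal surface-brightness profile; a minimal sketch with an arbitrary normalisation:

```python
def beta_model(r_arcsec, beta=0.54, a_arcsec=4.1, s0=1.0):
    """Isothermal beta-model surface brightness:
    S(r) = S0 * (1 + (r/a)^2) ** (0.5 - 3*beta)."""
    return s0 * (1.0 + (r_arcsec / a_arcsec) ** 2) ** (0.5 - 3.0 * beta)

# With beta = 0.54 the brightness falls to a little under half of S0 at one
# core radius, and declines as a power law well outside it
ratio_at_core = beta_model(4.1) / beta_model(0.0)
```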
To test these components for intrinsic extension, we compared the ratio of net counts in two concentric regions centered on these features in the residual image. The ratios of net counts between 0 and $`0^{\prime \prime }\text{.}5`$ and from 0.5 to $`1^{\prime \prime }\text{.}0`$ for the northwest (NW) hotspot and southeast (SE) hotspot are $`1.35\pm 0.38`$ and $`0.97\pm 0.35`$, respectively (Poisson errors). We use the ACIS-S image of PKS0637-72 to determine this ratio for an imaged point source, finding a ratio of $`1.93\pm 0.08`$. Based on this statistic, there is tentative evidence in the Chandra data for some extension of the hotspot X-ray sources. However, this result is subject to uncertainties in the shape of the underlying diffuse distribution. In the case of the nuclear source these systematics are dominant, and no useful extension test is possible. ### 2.2. X-ray Spectral Results The response matrices for the S3 chip have been calibrated on 32 by 32 pixel regions. There are also twelve effective area files covering the S3 chip. A separate photon-weighted response matrix and area file was generated for each extracted spectrum based on the chip coordinates of the detected photons. Spectral bins were chosen to include at least 25 net counts per bin, and the data were fit over the energy range from 0.5 to 7 keV. The results are summarized in Table 1.

TABLE 1: Summary of Spectral Analysis Results

| | Cluster | Nucleus | NW | SE |
| --- | --- | --- | --- | --- |
| Region | $`r`$=90<sup>′′</sup> | $`r=0^{\prime \prime }\text{.}9`$ | $`1^{\prime \prime }\text{.}5\times 1^{\prime \prime }\text{.}5`$ | $`r`$=$`1^{\prime \prime }`$ |
| Net counts | 4856$`\pm `$104 | 137$`\pm `$15 | 138$`\pm `$14 | 42$`\pm `$13 |
| Model | Thermal | PL | PL | |
| kT or $`\alpha `$ | $`4.4\pm `$0.6 | -0.8$`\pm `$0.3 | 0.9$`\pm `$0.5 | |
| $`\chi ^2`$/DOF | 173/143 | 5.3/4 | 3.1/3 | |
| Flux | 120. | 19.0 | 3.8 | 1.1 |
| Lum | $`9.6\times 10^{44}`$ | $`7.3\times 10^{43}`$ | $`2.9\times 10^{43}`$ | $`9\times 10^{42}`$ |

Notes: The temperatures are given in keV; the confidence ranges for the spectral parameters are for $`\mathrm{\Delta }\chi ^2`$=2.7 (90% for one parameter). The fluxes are ’unabsorbed’ for the 0.2 to 10 keV band at the Earth in units of $`10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. The source’s rest-frame luminosity in the 0.2 to 10 keV band is given in ergs s<sup>-1</sup>.

Fig. 2. A contour map of the residual Chandra image after subtraction of the best fit $`\beta `$ model, superimposed on the residual optical emission from an HST observation after subtracting the emission from the central galaxy. The X-ray data have been smoothed with a Gaussian function of FWHM=$`0^{\prime \prime }\text{.}5`$. Contour levels are 8, 24, 40, 56, 72, 104 counts per square arcsecond. Coordinate pixels are $`0^{\prime \prime }\text{.}045`$.

#### 2.2.1 The cluster gas A spectrum was extracted for the whole cluster within $`90^{\prime \prime }`$ of the radio nucleus, excluding the central radio source, hotspots, and two background sources. A background spectrum was extracted $`5^{\prime }\text{.}3`$ northwest of the cluster center. An absorbed single-temperature thermal model, with $`N_\mathrm{H}`$ fixed at $`1.34\times 10^{20}\mathrm{cm}^{-2}`$ (the galactic value), provides an adequate fit for abundances from 0.2 to 0.5 cosmic. The results in Table 1 are based on a fixed value of 0.3 for the abundance. Our temperature is not formally consistent with the $`7.13_{-1.35}^{+2.06}`$ keV obtained from analysis of ASCA data by Mushotzky & Scharf (1997), but our analysis is confined to the central $`90^{\prime \prime }`$ and excludes the emission from the core and two hotspots. A more detailed discussion of the cluster emission will be presented in a subsequent paper. #### 2.2.2 The hotspots There are insufficient counts to perform a spectral analysis of the SE hotspot.
For the NW hotspot, a spectrum was extracted from a $`1^{\prime \prime }\text{.}5`$ square around the X-ray peak, centered $`1^{\prime \prime }\text{.}9`$ from the nucleus. There are no counts above 3.5 keV and only 5 bins are used in the fit. In order to minimize contamination by cluster thermal emission we used a background spectrum extracted from an identical area at a position $`1^{\prime \prime }\text{.}8`$ SW of the nucleus, in a direction perpendicular to the radio jets. The results for a power law fit are given in Table 1. A thermal model (with $`N_\mathrm{H}=1.34\times 10^{20}\mathrm{cm}^{-2}`$ and $`Z=0.3`$ fixed) gives a best fit temperature of $`kT=4.4_{-2.2}^{+19}`$ keV and $`\chi ^2=5.1`$ for 3 degrees of freedom, which is also acceptable.

TABLE 2: Uniform Density Hot Gas: Thermal Model Parameters

| | Size | Mass | $`n_e`$ | P | RM |
| --- | --- | --- | --- | --- | --- |
| | (arcsec) | (M<sub>⊙</sub>) | (cm<sup>-3</sup>) | (erg cm<sup>-3</sup>) | (rad/m<sup>2</sup>) |
| NW | $`r`$=0.1 | $`2.9\times 10^8`$ | 12 | $`1.7\times 10^{-7}`$ | 58,000. |
| | $`r`$=0.75 | $`5.9\times 10^9`$ | 0.60 | $`8.2\times 10^{-9}`$ | 21,000. |
| SE | $`r`$=0.1 | $`1.6\times 10^8`$ | 6.8 | $`9.2\times 10^{-8}`$ | 32,000. |
| | $`r`$=0.75 | $`3.3\times 10^9`$ | 0.33 | $`4.5\times 10^{-9}`$ | 12,000. |

Notes: the total mass (Mass), electron number density ($`n_e`$), and pressure (P) required to reproduce the observed X-ray emission, assuming a temperature of 4.4 keV. The computed rotation measure (RM) assumes B=10 $`\mu `$G and a path length equal to $`r`$.

#### 2.2.3 The nucleus An X-ray spectrum was extracted from a region $`0^{\prime \prime }\text{.}9`$ in radius around the central peak. In order to remove cluster thermal emission, the background for this spectrum was taken to be the same as that for the northern hotspot. The spectrum is extremely hard; most of the photons have E$`>`$1 keV and about half have E$`>`$3 keV.
An absorbed power law fit provides an adequate representation with the best fitting column density consistent with that from galactic foreground absorption. Thermal fits are unacceptable with $`\chi ^2`$ per degree of freedom $`>34/3`$. Since the best fit value for $`\alpha `$ corresponds to a rather steep inverted spectrum, we also attempted fits with column densities up to 10<sup>24</sup> cm<sup>-2</sup>, but these all gave a worse fit due to the low energy photons in the spectrum. This could be caused by residual thermal emission, but there are not enough events to pursue this further. ### 2.3. Other data MERLIN and VLA data between 1.5 and 43 GHz were taken from Perley & Taylor (1991) and Leahy et al. (in preparation). We obtained archival HST data and after subtracting the emission from the central galaxy obtained a flux of S=0.078 $`\mu `$Jy at $`\nu `$=4.32$`\times `$10<sup>14</sup> Hz for the NW hotspot (with a one $`\sigma `$ uncertainty of 20%) within a $`0^{\prime \prime }\text{.}1`$ radius aperture. The optical emission from the hotspot appears to be extended on the same scale as the brightest radio structure and there are fainter features extending back about $`0^{\prime \prime }\text{.}5`$ towards the galaxy center. The SE hotspot is marginally detected at $`3\sigma `$ with a flux density of S=0.02$`\mu `$Jy (not visible in Figure 2). ## 3. Emission processes for the hotspots ### 3.1. Thermal model for the NW hotspot X-ray emission We have calculated the electron density required to produce the observed $`L_x`$ for a temperature of 4.4 keV and two representative volumes: (a) for a sphere with radius $`0^{\prime \prime }\text{.}1`$ (i.e. the ’unresolved’ case) and (b) a sphere with radius $`0^{\prime \prime }\text{.}75`$ (the maximum size allowed by the data). 
The resulting values are shown in Table 2, along with the excess rotation measure predicted for a B field component along the line of sight of 10 $`\mu `$G and a path length equal to the radius of the sphere. Combining the best fit $`\beta `$ model and the X-ray luminosity of 3C295 yields a central electron density of $`0.083\mathrm{cm}^{-3}`$. For a temperature of 4.4 keV, this gives an ambient gas pressure of $`1.1\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-3}`$. The minimum pressure in the NW hotspot is $`7\times 10^{-8}\mathrm{erg}\mathrm{cm}^{-3}`$ (Taylor & Perley 1992), and hence the jet could drive a shock into the surrounding gas. Applying the shock jump conditions for a postshock pressure of $`8.2\times 10^{-9}\mathrm{erg}\mathrm{cm}^{-3}`$ (the most favorable case from Table 2) gives a postshock density and temperature of $`0.22\mathrm{cm}^{-3}`$ and 12 keV, respectively. Although this high temperature is allowable within our 90% confidence range, the density is still below the values required to account for the X-ray luminosity (Table 2), and the cooling time of the shocked gas ($`3\times 10^8`$ yr) is far too long for post-shock cooling to allow a rise in density. For a smaller X-ray emitting region the problem becomes more acute, and we conclude that it is unlikely that the X-ray emission is due to shocked hot gas. In addition, the largest allowed change in rotation measure (RM) between the NW hotspot and its surroundings is about 2000 rad m<sup>-2</sup> (Perley & Taylor 1991). At the redshift of 3C295, this converts to an intrinsic RM$`\sim `$4000 rad m<sup>-2</sup>. A somewhat smaller excess is allowed for the SE hotspot. These values are significantly less than the predicted RMs in Table 2.

Fig. 3. The observed spectrum of the hotspots. The data for the NW and SE hotspots are shown as filled and open circles, respectively. The $`3\sigma `$ optical flux density for the SE hotspot is shown as an upper limit. The X-ray data are plotted at 1 keV (log$`\nu `$=17.38).
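The Table 2 entries can be reproduced to order of magnitude from the free-free emissivity and the standard rotation-measure formula. The emissivity constant below absorbs a Gaunt factor of order unity, and line emission is ignored, so this is only a consistency check:

```python
import math

KPC_CM = 3.086e21              # kpc in cm
KEV_K = 1.16e7                 # 1 keV in Kelvin

def ne_from_lx(l_x, r_kpc, kt_kev):
    """Electron density (cm^-3) for a uniform sphere of radius r emitting l_x
    (erg/s) in thermal bremsstrahlung, with emissivity
    eps ~ 1.4e-27 sqrt(T) n_e^2 erg cm^-3 s^-1 (Gaunt factor of order unity)."""
    vol = 4.0 / 3.0 * math.pi * (r_kpc * KPC_CM) ** 3
    return math.sqrt(l_x / (1.4e-27 * math.sqrt(kt_kev * KEV_K) * vol))

def rotation_measure(n_e, b_par_mug, l_pc):
    """RM = 0.81 n_e B_par L in rad/m^2 (n_e in cm^-3, B in microgauss, L in pc)."""
    return 0.81 * n_e * b_par_mug * l_pc

# NW hotspot, 'unresolved' case: r = 0.1 arcsec = 0.58 kpc, L_x = 2.9e43 erg/s
ne_nw = ne_from_lx(2.9e43, 0.58, 4.4)         # Table 2 lists 12 cm^-3
rm_nw = rotation_measure(ne_nw, 10.0, 580.0)  # Table 2 lists 58,000 rad/m^2
```

Both numbers land within a few tens of percent of the tabulated values, which is as close as this simplified emissivity warrants.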
While multiple field reversals along the line of sight to the hotspots could reduce the predicted values, we note that the observed RMs are fairly constant over spatial scales of 2 kpc, particularly for the NW hotspot. There are thus several problems with a thermal origin for the emission from the NW hotspot. ### 3.2. Synchrotron Models Successful synchrotron models have been presented for knot A in the M87 jet (Biretta, Stern, & Harris 1991) and hotspot B in the northern jet of 3C 390.3 (Harris, Leighly, & Leahy 1998). An extension of the power laws from lower frequencies requires that the electron population responsible for the radio (and optical) emission extends to a Lorentz factor $`\gamma `$=10<sup>7</sup>. Extrapolating from the radio/optical spectrum under-predicts the observed X-ray emission by a factor of 500 for the NW hotspot, so a simple synchrotron model is unacceptable. For the SE hotspot, the discrepancy is more than a factor of 1000 (see Figure 3). Although we have not calculated synchrotron spectra from proton induced cascades (PIC; Mannheim, Krulis, & Biermann 1991), we suspect that such a model would be feasible. The primary difference between the PIC and the SSC models is that PIC involves a very high energy density in relativistic protons, which would indicate a much higher B field (i.e. $`>1000\mu `$G) than those estimated from the minimum energy conditions assuming the electrons are the major contributor to the particle energy density.

TABLE 3: Synchrotron Input Spectra for SSC Calculation

| component | $`\nu _1`$ | $`\nu _b`$ | $`\nu _2`$ | $`\alpha _l`$ | $`\alpha _h`$ | S<sub>b</sub> |
| --- | --- | --- | --- | --- | --- | --- |
| | (Hz) | (Hz) | (Hz) | | | (cgs) |
| NW | 1E9 | 1E10 | 1E15 | 0.70 | 1.50 | 7.08E-24 |
| SE | 1E9 | 1.33E10 | 1E15 | 0.70 | 1.58 | 3.16E-24 |

Notes: $`\nu _b`$ is the frequency at which the spectral slope changes and S<sub>b</sub> is the flux density at $`\nu _b`$. The radio spectrum is the peak brightness as observed with a beam size of $`0^{\prime \prime }\text{.}2`$ (large enough to include the brightest structure seen at 43 GHz, but not so large as to include a lot of surrounding emission). The spectrum is extended to 10<sup>15</sup> Hz to accommodate the HST data although this has little effect on the derived parameters for the SSC model. The radio spectra are strongly curved (Fig. 3), which cannot be explained either by self-absorption or entirely by spectral aging (given the optical emission). So, following Carilli et al. (1991), we assume that the electron energy spectrum cuts off near the bottom of the observed band. Flux densities are given in ergs cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup>.

### 3.3. Synchrotron Self-Compton Model The radio structure of the hotspots is quite complex, and if we estimate photon energy densities for the brightest regions (from the 43 GHz VLA data), we would expect the ACIS detections to be unresolved. Since it is difficult to verify this, we base our calculations on the small volumes, realizing that there may be additional, weaker contributions from somewhat larger scale features (weaker because the photon energy density will be smaller). Our estimates involve defining the radio spectrum for the brightest and smallest structure of the hotspots (Table 3) and then calculating the synchrotron parameters: the minimum pressure magnetic field, B<sub>minP</sub>, the luminosity, L<sub>sync</sub>, and the photon energy density, $`u(\nu )`$. The spectral coverage of SSC will be determined by $`\nu _{out}\sim \gamma ^2\times \nu _{in}`$, and the ratio of energy losses in the IC and synchrotron channels will be $`R=u(\nu )/u(B)\approx L_{ic}/L_{sync}`$, where $`u(B)`$ is the energy density in the magnetic field. We can then compute the amplitude of the IC spectrum for which $`\alpha `$(x)=$`\alpha `$(radio) and which contains the proper luminosity over the defined frequency band.
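The ratio R defined above follows directly from $`u(B)=B^2/8\pi `$; a quick numerical check using the spherical-geometry values tabulated for the hotspots (Table 4):

```python
import math

def ic_to_sync_ratio(u_photon, b_gauss):
    """R = u(nu) / u(B), with magnetic energy density u(B) = B^2 / (8 pi)."""
    return u_photon / (b_gauss ** 2 / (8.0 * math.pi))

# Spherical-geometry entries: B_minP in gauss, u(nu) in erg cm^-3
r_nw = ic_to_sync_ratio(4.4e-10, 319e-6)   # Table 4 lists R = 0.11
r_se = ic_to_sync_ratio(2.3e-10, 271e-6)   # Table 4 lists R = 0.08
```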
The calculated SSC values (Table 4) demonstrate that the results are not particularly sensitive to the assumed geometry and that the SSC process is most likely the major contributor to the observed X-ray intensities. In addition to the values shown in Table 4, an SSC estimate for the more extended radio structures of the NW hotspot (i.e. extending $`0^{\prime \prime }\text{.}4`$ back towards the nucleus) provides an additional 10% of the total observed flux density at 1 keV. For magnetic field values of $`410\mu G`$ and $`310\mu G`$ in the NW and SE hotspots (slightly less than the equipartition values in Table 4), the SSC model can account for the entire observed X-ray emission. For even lower magnetic fields, more relativistic electrons would be required, which would result in a greater X-ray luminosity than observed. Hence, these magnetic field values are strict lower limits. The near coincidence of the magnetic field value required by the SSC model and equipartition lends weight to the assumption of equipartition and suggests that the proton energy density is not dominant.

TABLE 4: SSC parameters for the hotspots

| | geom. | B<sub>minP</sub> | u($`\nu `$) | R | S(1keV) | S(ssc)/S(obs) |
| --- | --- | --- | --- | --- | --- | --- |
| | | ($`\mu `$G) | (erg cm<sup>-3</sup>) | | (cgs) | |
| NW | sphere | 319 | 4.4E-10 | 0.11 | 2.1E-32 | 0.57 |
| NW | cylin. | 561 | 1.6E-9 | 0.13 | 1.9E-32 | 0.51 |
| SE | sphere | 271 | 2.3E-10 | 0.08 | 1.3E-32 | 1.16 |
| SE | cylin. | 336 | 2.9E-10 | 0.06 | 9.7E-33 | 0.84 |

Notes: For the sphere, $`r=0^{\prime \prime }\text{.}1`$ (to match the radio beamsize). The cylinder for the NW hotspot has $`r=0^{\prime \prime }\text{.}043`$ and $`l=0^{\prime \prime }\text{.}1`$. For the SE hotspot, $`r=0^{\prime \prime }\text{.}05`$ and $`l=0^{\prime \prime }\text{.}25`$. B<sub>minP</sub> is the minimum pressure field for no protons and filling factor=1. ## 4.
Conclusions The SSC model provides good agreement between the classical equipartition magnetic field estimates and the average field values required to produce the observed X-ray intensities. Since the equipartition estimates were made with no contribution to the particle energy density from protons, this agreement supports the hypothesis that relativistic electrons account for a major part of the energy density. If future observations show that the X-ray emission is actually extended over a physical distance of 9 kpc, it is unlikely that SSC can be the sole process operating, because the photon energy density will be much lower outside the compact core of the radio hotspot. SSC is a mandatory process and the only uncertainty in the predictions is the value of the magnetic field. As was the case for the hotspots of Cygnus A, the only viable model to negate these conclusions is one with a much stronger magnetic field: i.e. significantly greater than 500 $`\mu `$G. ## Acknowledgments A complex observatory such as Chandra represents a tremendous effort by an extensive team. We thank in particular the ACIS PI, G. Garmire, who was responsible for the detector which allowed us to obtain our results. The work at CfA was partially supported by NASA contracts NAS5-30934 and NAS8-39073. PM gratefully acknowledges support from CNAA bando 4/98. PEJN and TJP gratefully acknowledge the hospitality of the Harvard-Smithsonian Center for Astrophysics.

## References

Biretta, J.A., Stern, C.P., & Harris, D.E. 1991, AJ, 101, 1632
Carilli, C.L., Perley, R.A., Dreher, J.W., & Leahy, J.P. 1991, ApJ, 383, 554
Harris, D.E., Leighly, K.M., & Leahy, J.P. 1998, ApJ, 499, L149
Harris, D.E., Carilli, C.L., & Perley, R.A. 1994, Nature, 367, 713
Henry, J.P. & Henriksen, M.J. 1986, ApJ, 301, 689
Mannheim, K., Krulis, W.M., & Biermann, P.L. 1991, A&A, 251, 723
Mushotzky, R.F. & Scharf, C.A. 1997, ApJ, 482, L13
Neumann, D.M. 1999, ApJ, 520, 87
Perley, R.A. & Taylor, G.B. 1991, AJ, 101, 1623
Taylor, G.B. & Perley, R.A. 1992, A&A, 262, 417
no-problem/9911/cond-mat9911372.html
ar5iv
text
Figure 1: Normalized conductance $`g\equiv G_{\mathrm{NS}}/G_\mathrm{N}`$ as a function of $`k_\mathrm{F}W^{\prime }/\pi `$ for an ideal interface. The data point (∘) corresponds to the numerical result $`(k_\mathrm{F}W^{\prime }/\pi ;g)=(3.2;1.87)`$ of De Raedt et al. [].

# Andreev scattering and conductance enhancement in mesoscopic semiconductor–superconductor junctions

Niels Asger Mortensen<sup>1</sup>, Antti-Pekka Jauho<sup>1</sup>, Karsten Flensberg<sup>2</sup>

<sup>1</sup>Mikroelektronik Centret, Technical University of Denmark, Building 345 east, DK-2800 Lyngby, Denmark
<sup>2</sup>Ørsted Laboratory, Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark

Quantum transport in hybrid semiconductor–superconductor nanostructures has been shown to exhibit many of the mesoscopic effects known from semiconductor systems, for instance the quantized conductance of the quantum point contact (QPC). Due to Andreev scattering these effects are modified, and the presence of a superconductor (S) opens up the study of new mesoscopic phenomena, such as the quantized critical current in Josephson junctions, but also leads to a deeper understanding of the basic effects found in semiconductor structures. An inherent difficulty in studying mesoscopic effects in semiconductor–superconductor hybrid structures is the large Schottky barrier which often forms at the interface. A large technological effort has been invested in improving the contact between the superconductor and the two-dimensional electron gas (2DEG) of a semiconductor heterostructure, and in recent years this has become possible for e.g. GaAs-Al, GaAs-In, and InAs-Nb junctions. This development motivates quantitative theoretical modeling of sample-specific transport properties. The aim of our work is to model the conducting properties of a ballistic 2DEG–S interface with a QPC in the normal region, and also to take into account scattering due to a weak Schottky barrier and/or non-matching Fermi properties of the semiconductor and superconductor.
A theoretical framework is provided by the Bogoliubov–de Gennes (BdG) formalism, where the scattering states are eigenfunctions of the BdG equation, a Schrödinger-like equation in electron–hole space. The scattering approach to phase-coherent dc transport in superconducting hybrids follows closely the scattering theory developed for non-superconducting mesoscopic structures. In zero magnetic field, Beenakker found that the Andreev approximation and the rigid boundary condition for the pairing potential lead to a linear-response sub-gap conductance given by $$\frac{G_{\mathrm{NS}}}{2G_0}=\mathrm{Tr}\left(tt^{\dagger }\left[2-tt^{\dagger }\right]^{-1}\right)^2=\underset{n=1}{\overset{N}{\sum }}\frac{T_n^2}{\left(2-T_n\right)^2}$$ (1) which, in contrast to the Landauer formula , $$\frac{G_\mathrm{N}}{G_0}=\mathrm{Tr}\,tt^{\dagger }=\underset{n=1}{\overset{N}{\sum }}T_n,$$ (2) is a non-linear function of the transmission eigenvalues $`T_n`$ ($`n=1,2,\dots ,N`$) of $`tt^{\dagger }`$. Here $`G_0=2e^2/h`$ and $`t`$ is the $`N\times N`$ transmission matrix of the normal region, $`N`$ being the number of propagating modes. The computational advantage of Eq. (1) over the time-dependent BdG approach of De Raedt *et al.* is that we only need to consider the time-independent Schrödinger equation with a potential which describes the disorder in the normal region, so that we can use the techniques developed for quantum transport in normal conducting mesoscopic structures. We study the geometry shown in the inset of Figure 1 and, following recent work, we model the QPC by a wide-narrow-wide constriction , the interface scattering by a delta-function potential , and the transverse confinement by a hard-wall confining potential. The scattering due to non-matching Fermi velocities and Fermi momenta of the semiconductor and the superconductor is taken into account by replacing the interface transmission and reflection matrices of Ref.
by $`\left(t_\delta \right)_{ww^{\prime }}`$ $`=`$ $`\delta _{ww^{\prime }}{\displaystyle \frac{1}{\sqrt{\frac{\left[\mathrm{\Gamma }(\theta _w)r_v+1\right]^2}{4\mathrm{\Gamma }(\theta _w)r_v}}+i\frac{Z\sqrt{\mathrm{\Gamma }(\theta _w)}}{\mathrm{cos}\theta _w}}}`$ (3) $`\left(r_\delta \right)_{ww^{\prime }}`$ $`=`$ $`\delta _{ww^{\prime }}{\displaystyle \frac{\sqrt{\frac{\left[\mathrm{\Gamma }(\theta _w)r_v-1\right]^2}{4\mathrm{\Gamma }(\theta _w)r_v}}-i\frac{Z\sqrt{\mathrm{\Gamma }(\theta _w)}}{\mathrm{cos}\theta _w}}{\sqrt{\frac{\left[\mathrm{\Gamma }(\theta _w)r_v+1\right]^2}{4\mathrm{\Gamma }(\theta _w)r_v}}+i\frac{Z\sqrt{\mathrm{\Gamma }(\theta _w)}}{\mathrm{cos}\theta _w}}}`$ (4) where $`r_v\equiv v_\mathrm{F}^{\left(\mathrm{N}\right)}/v_\mathrm{F}^{\left(\mathrm{S}\right)}`$ is the Fermi velocity ratio, $`r_k\equiv k_\mathrm{F}^{\left(\mathrm{N}\right)}/k_\mathrm{F}^{\left(\mathrm{S}\right)}`$ is the Fermi momentum ratio, and $`\mathrm{\Gamma }(\theta )\equiv \mathrm{cos}\theta /\sqrt{1-r_k^2\mathrm{sin}^2\theta }`$ . We consider the device of Refs. with a relative width $`W/W^{\prime }=31.72`$, an aspect ratio $`L_1/W^{\prime }=5/1.6`$, and a relative length $`L_2/W^{\prime }=20/1.6`$. Furthermore we consider a junction between a GaAs 2DEG (in a GaAs-AlGaAs heterostructure) and a superconducting Al film. For $`T_{\mathrm{F},\mathrm{GaAs}}\simeq 100\mathrm{K}`$, appropriate parameters are given by $`r_v=0.10`$ and $`r_k=0.007`$ . Figure 1 shows the normalized conductance $`g\equiv G_{\mathrm{NS}}/G_\mathrm{N}`$ as a function of $`k_\mathrm{F}W^{\prime }/\pi `$ for an ideal interface. In Figure 2 we show the effect of a finite barrier at an interface with matching Fermi properties. Compared to a similar system without a QPC, see Fig. 2c of Ref. , the normalized conductance is only weakly suppressed for low barrier scattering, $`Z<1`$, and only for a very high barrier strength is there a cross-over from an excess conductance, $`g>1`$, to a deficit conductance, $`g<1`$.
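To make the interplay of Eqs. (1)–(4) concrete, here is a small self-contained sketch (all parameter values illustrative, not those of the figures): it builds the diagonal interface amplitudes, feeds the resulting transmissions into the Beenakker and Landauer formulas, and checks flux conservation and the factor-of-two Andreev limit. A full calculation would of course compose these amplitudes with the QPC scattering matrix.

```python
import math

def interface_tr(theta, r_v, r_k, Z):
    """Diagonal amplitudes (t_delta, r_delta) of Eqs. (3,4) for mode angle theta."""
    gamma = math.cos(theta) / math.sqrt(1.0 - r_k**2 * math.sin(theta)**2)
    plus = math.sqrt((gamma * r_v + 1.0)**2 / (4.0 * gamma * r_v))
    minus = math.sqrt((gamma * r_v - 1.0)**2 / (4.0 * gamma * r_v))
    zbar = 1j * Z * math.sqrt(gamma) / math.cos(theta)
    return 1.0 / (plus + zbar), (minus - zbar) / (plus + zbar)

def g_ns_over_g0(Ts):
    """Beenakker's sub-gap formula (1): G_NS/G_0 = 2 sum T_n^2/(2-T_n)^2."""
    return 2.0 * sum(T**2 / (2.0 - T)**2 for T in Ts)

def g_n_over_g0(Ts):
    """Landauer formula (2): G_N/G_0 = sum T_n."""
    return sum(Ts)

# interface transmissions for a few mode angles (illustrative numbers)
pairs = [interface_tr(th, r_v=0.5, r_k=0.5, Z=0.3) for th in (0.0, 0.3, 0.6)]
Ts = [abs(t)**2 for t, _ in pairs]
g = g_ns_over_g0(Ts) / g_n_over_g0(Ts)          # normalized conductance

# sanity checks: flux conservation and the factor-2 Andreev limit
flux_ok = all(abs(abs(t)**2 + abs(r)**2 - 1.0) < 1e-12 for t, r in pairs)
andreev_doubling = g_ns_over_g0([1.0]) / g_n_over_g0([1.0])   # open channel
```

The unitarity check also serves as a consistency test of the sign structure in Eqs. (3,4): the identity $`[\mathrm{\Gamma }r_v+1]^2-[\mathrm{\Gamma }r_v-1]^2=4\mathrm{\Gamma }r_v`$ guarantees $`|t_\delta |^2+|r_\delta |^2=1`$ for every angle.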
In Figure 3 we show how these results are modified when the different Fermi properties are taken into account. The detailed behavior changes, but the overall weak effect of the non-ideal interface on the normalized conductance is the same. Compared to a similar system without a QPC, see Fig. 2a of Ref. , the normalized conductance changes from $`g\simeq 0.2`$ (at $`Z=0`$) to $`g>1.3`$ in the presence of a QPC in the normal region. In conclusion, the studied effect of a non-ideal interface with a Schottky barrier and non-matching Fermi properties is very similar to the reflectionless-tunneling behavior in diffusively disordered junctions, where the net result is as if tunneling through the barrier were reflectionless. In the case of a QPC instead of a diffusive region, the presence of the QPC enhances the normalized conductance, although a weak dependence on the barrier strength remains, so that the tunneling is not perfectly reflectionless.
no-problem/9911/gr-qc9911086.html
ar5iv
text
# On Holography in Brans-Dicke Cosmology<sup>1</sup>

<sup>1</sup>Talk given at the 9th Midwest Relativity Conference held at the University of Illinois at Urbana-Champaign, November 12-13, 1999.

## 1 Introduction

The dilaton field appears naturally both in Kaluza-Klein compactification and in the string spectrum. The simplest way to incorporate the scalar field as the spin-0 partner of the spin-2 gravitational field is Brans-Dicke theory, in which the gravitational coupling constant is replaced by a scalar field. In , the author considered the holographic bound in the region within the particle horizon for a general Brans-Dicke universe. The discussion for the $`k=1`$ matter-dominated case is not complete because there is no analytical solution for that case. In this paper, we will investigate the holographic bound for the $`k=1`$ matter-dominated Brans-Dicke universe in regions within the particle horizon and the apparent horizon. The particle-horizon idea was first proposed by Fischler and Susskind . Because the holographic bound in the region within the particle horizon is not satisfied for a closed matter-dominated universe, Bak and Rey used the apparent horizon to solve the problem . However, Kaloper and Linde showed that the holographic bound in regions within the apparent horizon can be violated in anti-de Sitter space . Bousso's covariant entropy conjecture gave a means to select a null hypersurface starting from any two-dimensional spacelike surface so that the holographic bound would be satisfied on the null hypersurface . This holographic bound can be applied to general spacetimes starting from any surface. A generalized version of this conjecture was proved by Flanagan, Marolf and Wald under some assumptions on the entropy flux . Other cosmological entropy bounds were discussed in .
The Friedmann–Robertson–Walker metric for $`k=1`$ is $$\begin{array}{cc}\hfill ds^2& =-dt^2+a^2(t)\left(\frac{dr^2}{1-r^2}+r^2d\mathrm{\Omega }^2\right)\hfill \\ & =-dt^2+a^2(t)(d\chi ^2+\mathrm{sin}^2\chi d\mathrm{\Omega }^2),\hfill \end{array}$$ (1) where $`r=\mathrm{sin}\chi `$. Here I use $`c=\hbar =1`$. The standard cosmological equations and solutions are $$H^2+\frac{1}{a^2}=\frac{8\pi G}{3}\rho ,$$ (2) $$\rho a^3=\rho _0a_0^3,$$ (3) $$a(\eta )=\frac{a_{max}}{2}(1-\mathrm{cos}\eta )=a_{max}\mathrm{sin}^2(\eta /2),$$ (4) where the conformal time $`\eta `$ is defined by $`d\eta =dt/a(t)`$ and $`a_{max}=8\pi G\rho _0a_0^3/3`$. Note that as $`\eta \to \pi `$, $`a\to a_{max}`$. The particle horizon is $$\chi _{PH}=\int _0^t\frac{d\stackrel{~}{t}}{a(\stackrel{~}{t})}=\eta .$$ (5) The apparent horizon is $$r_{AH}=\frac{1}{a(t)\sqrt{H^2+1/a^2(t)}}=|\mathrm{sin}(\eta /2)|,$$ (6) $$\chi _{AH}=\frac{\eta }{2}.$$ (7) The idea of the holographic bound is that the matter entropy inside a spatial region $`V`$ does not exceed 1/4 of the area $`A`$ of the boundary of that region measured in Planck units. From the metric (1), the holographic bound in a region with radius $`r=\mathrm{sin}\chi `$ for the closed universe is $$\frac{S}{GA/4}=\frac{ϵV}{GA/4}=\frac{ϵ(2\chi -\mathrm{sin}2\chi )}{Ga^2(t)\mathrm{sin}^2\chi }\le 1,$$ (8) where $`ϵ`$ is the constant comoving entropy density. If we consider the spherical region inside the particle horizon $`\chi _{PH}=\eta `$, then the holographic bound (8) becomes $$\frac{S}{GA/4}=\frac{ϵ(2\eta -\mathrm{sin}2\eta )}{Ga_{max}^2\mathrm{sin}^4(\eta /2)\mathrm{sin}^2\eta }\le 1.$$ It is obvious that the bound is violated as $`\eta \to \pi `$. Therefore, the holographic bound proposed by Fischler and Susskind does not apply to the closed universe.
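The two horizon choices can be compared numerically. The sketch below evaluates the dimensionless ratio S/(GA/4) of Eq. (8) for both, dropping the constant prefactor ϵ/(G a_max²) (i.e. setting it to 1):

```python
import math

def ratio_ph(eta):
    """S/(G A/4) inside the particle horizon (chi = eta), prefactor set to 1."""
    return (2*eta - math.sin(2*eta)) / (math.sin(eta/2)**4 * math.sin(eta)**2)

def ratio_ah(eta):
    """S/(G A/4) inside the apparent horizon (chi = eta/2), prefactor set to 1."""
    return (eta - math.sin(eta)) / math.sin(eta/2)**6

# particle horizon: the ratio diverges as eta -> pi, so the bound must fail there;
# apparent horizon: the ratio stays finite (it tends to pi at eta = pi)
ph_late, ph_mid = ratio_ph(3.14), ratio_ph(2.0)
ah_early, ah_late = ratio_ah(0.5), ratio_ah(3.0)
```

Evaluating these functions confirms the statements in the text: the particle-horizon ratio grows without bound near maximal expansion, while the apparent-horizon ratio decreases from its early-time value and never exceeds it.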
If we consider the spherical region inside the apparent horizon $`\chi _{AH}=\eta /2`$, the holographic bound (8) becomes $$\frac{S}{GA/4}=\frac{ϵ(\eta -\mathrm{sin}\eta )}{Ga_{max}^2\mathrm{sin}^6(\eta /2)}\le 1.$$ Therefore, the holographic bound is satisfied if it is satisfied initially. So the holographic bound proposed by Bak and Rey applies to the closed universe.

## 2 Brans-Dicke Cosmology

The Brans-Dicke Lagrangian in the Jordan frame is given by $$\mathrm{}_{BD}=\frac{\sqrt{-\gamma }}{16\pi }\left[\varphi \stackrel{~}{R}-\omega \gamma ^{\mu \nu }\frac{\partial _\mu \varphi \partial _\nu \varphi }{\varphi }\right]-\mathrm{}_m(\psi ,\gamma _{\mu \nu }),$$ (9) with $`\varphi =1/G`$. The cosmological equations are $$H^2+\frac{k}{a^2}+H\frac{\dot{\varphi }}{\varphi }-\frac{\omega }{6}\left(\frac{\dot{\varphi }}{\varphi }\right)^2=\frac{8\pi }{3\varphi }\rho ,$$ (10) $$\ddot{\varphi }+3H\dot{\varphi }=\frac{8\pi }{2\omega +3}(\rho -3p),$$ (11) $$\rho a^3=\rho _0a_0^3.$$ (12) To solve the above equations for the case $`k=1`$ and $`p=0`$, we take the solutions for $`k=0`$ as the initial conditions: $$a(t_i)=t_i^p,\varphi (t_i)=\frac{4\pi (4+3\omega )}{2\omega +3}t_i^q,$$ (13) $$p=\frac{2+2\omega }{4+3\omega },q=\frac{2}{4+3\omega }.$$ (14) The holographic bound for the particle horizon is shown in Fig. 1; we see that the bound is violated at late times. The holographic bound for the apparent horizon is shown in Fig. 2; we see that the bound is satisfied if it is satisfied initially. In the Einstein frame, the Brans-Dicke Lagrangian is $$\mathrm{}=\sqrt{-g}\left[\frac{1}{2\kappa ^2}R-\frac{1}{2}g^{\mu \nu }\partial _\mu \sigma \partial _\nu \sigma \right]-\mathrm{}_m(\psi ,e^{\alpha \sigma }g_{\mu \nu }).$$ (15) The above Lagrangian (15) follows from Eq. (9) by the transformations $$g_{\mu \nu }=e^{-\alpha \sigma }\gamma _{\mu \nu },$$ (16) $$\varphi =\frac{1}{G}e^{-\alpha \sigma },$$ (17) where $`\kappa ^2=8\pi G`$, $`\alpha =\beta \kappa `$, and $`\beta ^2=2/(2\omega +3)`$.
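Since no analytical solution exists for k = 1, the behavior shown in Figs. 1 and 2 requires integrating Eqs. (10)–(12) numerically. A minimal RK4 sketch of the Jordan-frame system is given below; ω and the normalization of φ are illustrative, and ρ₀a₀³ is fixed from the constraint (10) at t_i rather than taken from Eq. (13):

```python
import math

omega = 500.0                        # Brans-Dicke parameter (illustrative)
p = (2 + 2*omega) / (4 + 3*omega)    # flat-space matter-era exponents, Eq. (14)
q = 2.0 / (4 + 3*omega)

t_i = 1.0
a, phi = t_i**p, 1.0                 # overall phi normalization is arbitrary here
dphi = q * phi / t_i                 # phi ~ t^q initially

def hubble(a, phi, dphi, rho):
    """Expanding root of the quadratic constraint, Eq. (10), with k = 1."""
    x = dphi / phi
    disc = x*x + 4.0 * (8*math.pi*rho/(3*phi) + omega*x*x/6.0 - 1.0/a**2)
    return 0.5 * (-x + math.sqrt(disc))

# fix rho_0*a_0^3 so that the constraint (10) holds exactly at t_i with H = p/t_i
H_i = p / t_i
rho_a3 = (H_i**2 + 1.0/a**2 + H_i*dphi/phi
          - omega*(dphi/phi)**2/6.0) * 3*phi/(8*math.pi) * a**3

def rhs(y):
    a, phi, dphi = y
    rho = rho_a3 / a**3                               # Eq. (12)
    H = hubble(a, phi, dphi, rho)
    ddphi = -3*H*dphi + 8*math.pi*rho/(2*omega + 3)   # Eq. (11), dust p = 0
    return [H*a, dphi, ddphi]

y, dt = [a, phi, dphi], 1e-3         # classical 4th-order Runge-Kutta
for _ in range(1000):
    k1 = rhs(y)
    k2 = rhs([y[i] + 0.5*dt*k1[i] for i in range(3)])
    k3 = rhs([y[i] + 0.5*dt*k2[i] for i in range(3)])
    k4 = rhs([y[i] + dt*k3[i] for i in range(3)])
    y = [y[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(3)]
```

With the entropy density supplied, the bound ratios of Figs. 1 and 2 then follow from the integrated a(t) exactly as in the k = 1 general-relativity case above.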
Remember that the Jordan-Brans-Dicke Lagrangian is not invariant under the above transformations (16) and (17). The corresponding cosmological equations are $$H^2+\frac{k}{a^2}=\frac{\kappa ^2}{3}\left(\frac{1}{2}\dot{\sigma }^2+e^{2\alpha \sigma }\rho \right),$$ (18) $$\ddot{\sigma }+3H\dot{\sigma }=-\frac{1}{2}\alpha e^{2\alpha \sigma }\rho ,$$ (19) $$\dot{\rho }+3H\rho =-\frac{3}{2}\alpha \dot{\sigma }\rho .$$ (20) Here we consider the solutions for the case $`k=1`$ and $`p=0`$ only. With $`8\pi G/3=1`$, the initial conditions are $$a(t_i)=\left[\frac{\sqrt{2}(18+\alpha ^2)t_i}{4\sqrt{18-\alpha ^2}}\right]^{12/(18+\alpha ^2)},\sigma (t_i)=\frac{\alpha }{3}\mathrm{ln}a(t_i).$$ The holographic bound for the particle horizon is shown in Fig. 3; we see that the bound is violated at late times. The holographic bound for the apparent horizon is shown in Fig. 4; we see that the bound is satisfied if it is satisfied initially.

## 3 Conclusions

The holographic bound for Brans-Dicke cosmology is not satisfied if we use the particle horizon, but it is satisfied if we use the apparent horizon. We can understand the above result by using Bousso's conjecture: In any spacetime satisfying Einstein's equation and the dominant energy condition, the total entropy $`S`$ contained in any null hypersurface $`L`$ bounded by some connected ($`D-2`$)-dimensional spacelike surface $`B`$ with area $`A`$ and generated by null geodesics with non-positive expansion must satisfy $`S\le A/4G`$. In the Einstein frame, the spacetime satisfies Einstein's equation with $`\rho _{tot}=e^{2\alpha \sigma }\rho +\dot{\sigma }^2/2`$ and $`p_{tot}=e^{2\alpha \sigma }p+\dot{\sigma }^2/2`$. Therefore, the matter source satisfies the dominant energy condition. Bousso's covariant entropy conjecture tells us that the holographic bound is satisfied in the region within the apparent horizon.
In this paper, we found that the holographic bound with the apparent horizon is satisfied for the $`k=1`$ matter-dominated Brans-Dicke universe. This can be taken as evidence supporting Bousso's conjecture.

## Acknowledgements

The author would like to thank Raphael Bousso for helpful correspondence concerning the subject of Bousso's conjecture.
no-problem/9911/cond-mat9911342.html
ar5iv
text
# Surmounting Oscillating Barriers

## Abstract

Thermally activated escape over a potential barrier in the presence of periodic driving is considered. By means of novel time-dependent path-integral methods we derive asymptotically exact weak-noise expressions for both the instantaneous and the time-averaged escape rate. The agreement with accurate numerical results is excellent over a wide range of driving strengths and driving frequencies.

PACS numbers: 05.40.-a, 82.20.Mj, 82.20.Pm

The problem of noise-driven escape over a potential barrier is ubiquitous in the natural sciences . Typically, the noise is weak and the escape time is governed by an exponentially leading Arrhenius factor. This scheme, however, meets formidable difficulties in far-from-equilibrium systems due to the extremely complicated interplay between global properties of the metastable potential and the noise . Prominent examples are systems driven by time-periodic forces , exemplified by strongly laser-driven semiconductor heterostructures, stochastic resonance , directed transport in rocked Brownian motors , or periodically driven “resonant activation” processes like AC-driven biochemical reactions in protein membranes. Despite its experimental importance, the theory of oscillating-barrier crossing is still in its infancy. Previous attempts have been restricted to weak (linear response), slow (adiabatic regime), or fast (sudden regime) driving . In this Letter we address the most challenging regime of strong and moderately fast driving by means of path-integral methods. In fact, our approach becomes asymptotically exact for any finite amplitude and period of the driving as the noise strength tends to zero, and comprises a conceptually new, systematic treatment of the rate prefactor multiplying the exponentially leading Arrhenius factor.
Closest in spirit is the recent work , which is restricted, however, to the linear-response regime for the exponentially leading part and treats the prefactor with a matching procedure involving the barrier region only. Our analytical theory is tested for a sinusoidally rocked metastable potential against very precise numerical results. Conceptually, our approach should be of considerable interest for many related problems: generalizations to higher-dimensional systems and to non-periodic driving forces. Model — We consider the overdamped escape dynamics of a Brownian particle $`x(t)`$ in properly scaled units $$\dot{x}(t)=F(x(t),t)+\sqrt{2D}\xi (t),$$ (1) with unbiased $`\delta `$-correlated Gaussian noise $`\xi (t)`$ (thermal fluctuations) of strength $`D`$. The force-field $`F(x,t)`$ is assumed to derive from a metastable potential with a well at $`\overline{x}_s`$ and a barrier at $`\overline{x}_u>\overline{x}_s`$, subject to periodic modulations with period $`𝒯`$. For $`D=0`$, the deterministic dynamics (1) is furthermore assumed to exhibit a stable periodic orbit (attractor) $`x_s(t)`$ and an unstable periodic orbit (basin boundary) $`x_u(t)>x_s(t)`$. For weak noise $`D`$, there is a small probability that a particle obeying (1) escapes from the basin of attraction $`𝒜(t):=(-\infty ,x_u(t)]`$ of $`x_s(t)`$ and disappears towards infinity. For an ensemble of particles with probability density $`p(x,t)`$, the population $`P_𝒜(t)`$ within the basin of attraction is $`\int _{-\infty }^{x_u(t)}p(x,t)𝑑x`$ and the instantaneous rate of escape $`\mathrm{\Gamma }(t)`$ equals $`-\dot{P}_𝒜(t)/P_𝒜(t)`$. Apart from transients at early times, this rate $`\mathrm{\Gamma }(t)`$ is independent of the initial conditions at time $`t_0`$. Without loss of generality we can thus focus on $`x(t_0)=x_s(t_0)`$. Small $`D`$ implies rare escape events, i.e. the deviation of $`P_𝒜(t)`$ from its initial value $`P_𝒜(t_0)=1`$ is negligible.
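For orientation, the escape rate defined here can be estimated directly by simulating Eq. (1) with the Euler–Maruyama scheme. The sketch below uses a rocked piecewise-linear force (anticipating the archetype example treated later) with illustrative parameter values, not those of the paper's figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# rocked piecewise-linear force (illustrative parameters)
lam_s, lam_u, xs_bar, xu_bar = -1.0, 1.0, -1.0, 1.0
A, Omega, D = 0.2, 1.0, 0.3

def force(x, t):
    drive = A * np.sin(Omega * t)
    return np.where(x <= 0.0,
                    lam_s * (x - xs_bar) + drive,
                    lam_u * (x - xu_bar) + drive)

# Euler-Maruyama integration of Eq. (1); record first passages past the barrier
n_traj, dt, t_max = 200, 0.01, 500.0
x = np.full(n_traj, xs_bar)                  # start in the well
alive = np.ones(n_traj, dtype=bool)
escape_time = np.full(n_traj, np.nan)
t = 0.0
while t < t_max and alive.any():
    xi = rng.standard_normal(n_traj)
    x[alive] += force(x[alive], t) * dt + np.sqrt(2 * D * dt) * xi[alive]
    t += dt
    newly = alive & (x > xu_bar + 1.0)       # safely past the unstable orbit
    escape_time[newly] = t
    alive &= ~newly

rate_estimate = 1.0 / np.nanmean(escape_time)   # crude time-averaged rate
```

Such brute-force sampling becomes prohibitively expensive precisely in the weak-noise regime where the rare escape events occur, which is what motivates the asymptotic path-integral treatment that follows.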
Exploiting $`\dot{x}_u(t)=F(x_u(t),t)`$ and the Fokker-Planck equation $`\partial _tp=\partial _x\{-F(x,t)+D\partial _x\}p`$ governing $`p=p(x,t)`$, we find for the instantaneous rate $$\mathrm{\Gamma }(t)=-D\partial _xp(x=x_u(t),t).$$ (2) Path-integral approach — With the choice $`p(x,t_0)=\delta (x-x_s(t_0))`$ we obtain for the conditional probability $`p(x,t)`$ the path-integral representation $$p(x_f,t_f)=\int 𝒟x(t)e^{-S[x(t)]/D},$$ (3) where the “action” is given by $$S[x(t)]:=\int _{t_0}^{t_f}𝑑t[\dot{x}(t)-F(x(t),t)]^2/4$$ (4) and where $`x(t_0)=x_s(t_0)`$ and $`x(t_f)=x_f`$ are the “initial” and “final” conditions for the paths $`x(t)`$. For weak noise, the integral (3) is dominated by a set of paths $`x_k^{\ast }(t)`$, corresponding to minima of the action (4) (distinguished by the label $`k`$). These satisfy an Euler-Lagrange equation equivalent to the following Hamiltonian dynamics $`\dot{p}_k^{\ast }(t)`$ $`=`$ $`-p_k^{\ast }(t)F^{\prime }(x_k^{\ast }(t),t)`$ (5) $`\dot{x}_k^{\ast }(t)`$ $`=`$ $`2p_k^{\ast }(t)+F(x_k^{\ast }(t),t)`$ (6) with $`F^{\prime }(x,t):=\partial _xF(x,t)`$. For well separated paths $`x_k^{\ast }(t)`$, a functional saddle-point approximation in (3) yields $$p(x_f,t_f)=\underset{k}{\sum }\frac{e^{-S[x_k^{\ast }(t)]/D}}{[4\pi DQ_k^{\ast }(t_f)]^{1/2}}[1+𝒪(D)],$$ (7) where the quantity $`Q_k^{\ast }(t)`$ relates to the determinant of fluctuations around $`x_k^{\ast }(t)`$. Following the reasoning in , we find for our case that $`Q_k^{\ast }(t)`$ obeys the relation $`\ddot{Q}_k^{\ast }(t)/2`$ $`-`$ $`d[Q_k^{\ast }(t)F^{\prime }(x_k^{\ast }(t),t)]/dt`$ (8) $`+`$ $`Q_k^{\ast }(t)p_k^{\ast }(t)F^{\prime \prime }(x_k^{\ast }(t),t)=0`$ (9) with initial conditions $`Q_k^{\ast }(t_0)=0`$ and $`\dot{Q}_k^{\ast }(t_0)=1`$. Exploiting that the derivative of the action at its end-point equals the momentum $`p_k^{\ast }(t_f)`$, we can infer from (2,7) our first main result, namely $$\mathrm{\Gamma }(t_f)=\underset{k}{\sum }\frac{p_k^{\ast }(t_f)e^{-S[x_k^{\ast }(t)]/D}}{[4\pi DQ_k^{\ast }(t_f)]^{1/2}}[1+𝒪(D)],$$ (10) where the boundary conditions $`x_k^{\ast }(t_0)=x_s(t_0)`$ and $`x_k^{\ast }(t_f)=x_u(t_f)`$ are understood in (5,6).
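A quick consistency check of the Hamiltonian dynamics (5,6): in the static (undriven) limit the escape path has p* = −F, so that (6) gives ẋ = −F (time-reversed relaxation), and its action ∫ p dx reproduces the bare barrier height. The sketch below verifies this for a piecewise-linear force with hypothetical parameters:

```python
import math

# static limit: along the escape path p* = -F, Eqs. (5,6) reduce to
# xdot = -F, and S = int p dx = int (-F) dx = Delta V (barrier height)
lam_s, lam_u, xs_bar, xu_bar = -1.0, 1.0, -1.0, 1.0   # hypothetical parameters

def F(x):
    return lam_s * (x - xs_bar) if x <= 0.0 else lam_u * (x - xu_bar)

# action S = int_{xs}^{xu} (-F(x)) dx, midpoint rule
n = 100000
h = (xu_bar - xs_bar) / n
S = sum(-F(xs_bar + (j + 0.5) * h) for j in range(n)) * h

dV = (lam_u * xu_bar**2 - lam_s * xs_bar**2) / 2.0    # static barrier height
```

This recovers the familiar Arrhenius exponent exp(−ΔV/D) as the static limit of the rate formula (10).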
In view of (7), the instantaneous rate (10) has the suggestive form of probability at the separatrix times “velocity”. Closer inspection of (4-6) reveals the following generic features of each path $`x_k^{\ast }(t)`$ which notably contributes to the rate (10), see fig. 1: Starting at $`x_k^{\ast }(t_0)=x_s(t_0)`$, it continues to follow rather closely the stable periodic orbit $`x_s(t)`$ for some time. At a certain moment, it crosses over into the vicinity of the unstable periodic orbit $`x_u(t)`$ and remains there for the rest of its time, ending at $`x_k^{\ast }(t_f)=x_u(t_f)`$. Without loss of generality, we can sort the paths $`x_k^{\ast }(t)`$ by the time they spend near the unstable periodic orbit, such that $`x_0^{\ast }(t)`$ is that path which crosses over from $`x_s(t)`$ to $`x_u(t)`$ at the “latest possible moment”. Apart from a time shift, each path $`x_k^{\ast }(t)`$ then closely resembles the same “master path” $`x^{\ast }(t)`$ (see fig. 1). This path $`x^{\ast }(t)`$ is defined as an absolute minimum of the action (4) in the limit $`t_0\to -\infty `$, $`t_f\to \infty `$, and is fixed uniquely by demanding that $`x^{\ast }(t+k𝒯)`$ is the “master path” associated with $`x_k^{\ast }(t)`$. The basic qualitative features of each minimizing path $`x_k^{\ast }(t)`$ are thus quite similar to the well-known barrier-crossing problem in a static potential . However, in the limit $`t_0\to -\infty `$, $`t_f\to \infty `$ we have, in contrast to this latter situation, not a continuous symmetry (Goldstone mode), but a discrete degeneracy of the minimizing paths. As a consequence, in our case the minimizing paths $`x_k^{\ast }(t)`$ remain well separated and thus the rate formula (10) is valid for any (arbitrary but fixed) finite values of the driving amplitude and period, provided the noise strength $`D`$ is sufficiently small. On the other hand, for a given $`D`$ we have to exclude extremely small amplitudes and extremely long or short periods since this would lead effectively back to the static case.
As long as the “master path” $`x^{\ast }(t)`$ remains sufficiently close to the stable periodic orbit $`x_s(t)`$, say for $`t\le t_s`$, the force-field is well approximated by $$F(x,t)=F(x_s(t),t)+(x-x_s(t))F^{\prime }(x_s(t),t).$$ (11) An analogous approximation for $`F(x,t)`$ is valid while $`x^{\ast }(t)`$ remains in a sufficiently small neighborhood of $`x_u(t)`$, say for $`t\ge t_u`$. The corresponding local solutions of the Hamilton equations (5,6) can then be written as $`p^{\ast }(t)`$ $`=`$ $`p^{\ast }(t_{s,u})e^{-\mathrm{\Lambda }_{s,u}(t,t_{s,u})}`$ (12) $`x^{\ast }(t)`$ $`=`$ $`x_{s,u}(t)\pm p^{\ast }(t)I_{s,u}(t).`$ (13) Here ‘$`s,u`$’ means that the index is either ‘$`s`$’ or ‘$`u`$’ and the upper and lower signs in (13) refer to ‘$`s`$’ and ‘$`u`$’, respectively. Further, we have introduced $`\mathrm{\Lambda }_{s,u}(t,t_{s,u}):={\displaystyle \int _{t_{s,u}}^t}F^{\prime }(x_{s,u}(\widehat{t}),\widehat{t})𝑑\widehat{t}`$ (14) $`I_{s,u}(t):=\left|2{\displaystyle \int _{\mp \infty }^t}e^{2\mathrm{\Lambda }_{s,u}(t,\widehat{t})}𝑑\widehat{t}\right|.`$ (15) Similarly, the local solutions for the prefactor in (9) can be written as $`Q^{\ast }(t\le t_s)`$ $`=`$ $`I_s(t)/2`$ (16) $`Q^{\ast }(t\ge t_u)`$ $`=`$ $`c_1/p^{\ast }(t)^2-c_2I_u(t).`$ (17) The parameters $`p^{\ast }(t_{s,u})`$ in (12) and $`c_{1,2}`$ in (17) cannot be fixed within such a local analysis around $`x_{s,u}(t)`$; they require the global solution of (5,6,9). We furthermore observe that due to the time-periodicity of $`F(x,t)`$ and $`x_{s,u}(t)`$, the quantities $$\lambda _{s,u}:=\mathrm{\Lambda }_{s,u}(t+𝒯,t)/𝒯$$ (18) are indeed $`t`$-independent. The stability/instability of the periodic orbits $`x_{s,u}(t)`$ implies $`\lambda _s<0`$ and $`\lambda _u>0`$. It follows that $`I_{s,u}(t)`$ from (15) are finite, $`𝒯`$-periodic functions. The expressions for $`x_k^{\ast }(t)`$, $`p_k^{\ast }(t)`$, and $`Q_k^{\ast }(t)`$ are somewhat more complicated than in (12-17), but since $`x_k^{\ast }(t)`$ is well approximated by $`x^{\ast }(t+k𝒯)`$, the same follows for $`p_k^{\ast }(t)`$ and $`Q_k^{\ast }(t)`$.
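For the special case of a constant F′ = λ_u (realized by the piecewise-linear example treated below), the definition (15) can be evaluated numerically and reproduces the closed form I_u = 1/λ_u that is used later in the text:

```python
import math

def I_u_numeric(lam_u, t=0.0, upper=40.0, n=100000):
    """Eq. (15) for constant F' = lam_u: |2 int_{+inf}^{t} e^{2 lam_u (t - th)} dth|,
    evaluated as a trapezoidal integral over [t, upper] (the tail is negligible)."""
    h = (upper - t) / n
    s = 0.5 * (1.0 + math.exp(2 * lam_u * (t - upper)))   # endpoint terms
    for j in range(1, n):
        s += math.exp(-2 * lam_u * j * h)                 # interior points
    return abs(2.0 * s * h)

lam_u = 0.7
approx = I_u_numeric(lam_u)
exact = 1.0 / lam_u          # closed form for the piecewise-linear model
```

The same construction with the integral running from −∞ gives I_s(t) near the stable orbit; for a genuinely time-periodic F′ both integrals pick up the 𝒯-periodic modulation noted after Eq. (18).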
Closer inspection shows that in (10) the pre-exponential factors $`p_k^{\ast }(t_f)`$ and $`Q_k^{\ast }(t_f)`$ can be approximated by $`p^{\ast }(t_f+k𝒯)`$ and $`Q^{\ast }(t_f+k𝒯)`$ without further increasing the error $`𝒪(D)`$ in (10). Within this same accuracy, the exponential in (10) requires – due to the small denominator $`D`$ – a somewhat more elaborate approximation, yielding $$S[x_k^{\ast }(t)]=S[x^{\ast }(t)]+\int _{t_f+k𝒯}^{\infty }p^{\ast }(t)^2𝑑t.$$ (19) Rate formula — By introducing these approximations into (10), exploiting (12-17), and dropping the index ‘$`f`$’ of $`t_f`$, we obtain as the central result of this work the instantaneous rate $`\mathrm{\Gamma }(t)=\sqrt{D}\alpha e^{-S[x^{\ast }(t)]/D}\kappa (t,D)[1+E(D)]`$ (20) $`\alpha :=[4\pi 𝒯^2\underset{t\to \infty }{lim}p^{\ast }(t)^2Q^{\ast }(t)]^{-1/2}`$ (21) $`\kappa (t,D):=𝒯{\displaystyle \underset{k=-\infty }{\overset{\infty }{\sum }}}{\displaystyle \frac{\beta _k(t)^2}{D}}e^{-\beta _k(t)^2I_u(t)/2D}`$ (22) $`\beta _k(t):=e^{-\lambda _uk𝒯}\underset{\widehat{t}\to \infty }{lim}p^{\ast }(\widehat{t})e^{\mathrm{\Lambda }_u(\widehat{t},t)}.`$ (23) The relative error $`E(D)`$ is found to be of order $`𝒪(D)`$ if $`F^{\prime \prime }(x_u(t),t)\not\equiv 0`$, and $`𝒪(D^{1/2})`$ otherwise. By use of (14-18) one finds that the average of (22) over a single time-period $`𝒯`$ equals $`1`$. For the time-averaged rate $`\overline{\mathrm{\Gamma }}`$ we thus obtain $$\overline{\mathrm{\Gamma }}=\sqrt{D}\alpha e^{-S[x^{\ast }(t)]/D}[1+E(D)].$$ (24) It consists of an Arrhenius-type exponentially leading part and, in contrast to equilibrium rates , a non-trivial pre-exponential $`D`$-dependence. An archetype example — In general, the explicit quantitative evaluation of $`S[x^{\ast }(t)]`$, $`\alpha `$, and $`\kappa (t,D)`$ in (20,24) is not possible in closed analytical form.
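The statement that the period average of κ(t, D) in (22) equals 1 can be checked numerically, e.g. with the exponential form β_k(t) = b exp(−λ_u(k𝒯 + t − t_c)) appropriate for a constant F′ = λ_u and I_u = 1/λ_u (all numbers illustrative):

```python
import math

# period average of kappa(t, D), Eq. (22); illustrative parameter values
lam_u, T, D, b, t_c = 1.3, 2.0, 0.05, 0.8, 0.4
I_u = 1.0 / lam_u

def kappa(t):
    total = 0.0
    # the Gaussian cutoff (large beta) and exponential decay (small beta)
    # make the k-sum converge fast in both directions
    for k in range(-40, 81):
        beta = b * math.exp(-lam_u * (k * T + t - t_c))
        total += (beta**2 / D) * math.exp(-beta**2 * I_u / (2 * D))
    return T * total

n = 2000                         # midpoint rule over one period
avg = sum(kappa((j + 0.5) * T / n) for j in range(n)) / n
```

Summing over k here turns the period integral into an integral over the whole real line, which after the substitution u = β²I_u/2D reduces to ∫₀^∞ e^{−u}du / (λ_u I_u) = 1, exactly as claimed.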
An exception is the piecewise-linear force-field with additive sinusoidal driving $`F(x\le 0,t)`$ $`=`$ $`\lambda _s(x-\overline{x}_s)+A\mathrm{sin}(\mathrm{\Omega }t)`$ (25) $`F(x\ge 0,t)`$ $`=`$ $`\lambda _u(x-\overline{x}_u)+A\mathrm{sin}(\mathrm{\Omega }t),`$ (26) corresponding to a periodically rocked piecewise-parabolic potential, with parameters $`\overline{x}_s<0`$, $`\overline{x}_u>0`$, $`\lambda _s<0`$, $`\lambda _u>0`$, respecting $`\lambda _s\overline{x}_s=\lambda _u\overline{x}_u`$ (continuity at $`x=0`$). To simplify the analytical calculations we further restrict ourselves to the case that the “master path” $`x^{\ast }(t)`$ crosses the matching point $`x=0`$ in (26) only once , say at $`t=t_c`$. The periodic orbits $`x_{s,u}(t)`$ then assume the simple form $$x_{s,u}(t)=\overline{x}_{s,u}-\frac{A[\lambda _{s,u}\mathrm{sin}(\mathrm{\Omega }t)+\mathrm{\Omega }\mathrm{cos}(\mathrm{\Omega }t)]}{\lambda _{s,u}^2+\mathrm{\Omega }^2}.$$ (27) Moreover, Eqs. (12,13) are now valid with index ‘$`s`$’ for all $`t\le t_c`$ and with ‘$`u`$’ for all $`t\ge t_c`$. By matching these solutions at $`t=t_c`$ the global parameters $`p^{\ast }(t_{s,u})`$ in (12) are fixed and one obtains $$t_c=\frac{1}{\mathrm{\Omega }}\mathrm{arctan}\left(\frac{\lambda _u\lambda _s-\mathrm{\Omega }^2}{\mathrm{\Omega }(\lambda _u+\lambda _s)}\right).$$ (28) To ensure that $`x^{\ast }(t)`$ is a minimum of the action in (4), one has $$(\lambda _u+\lambda _s)A\mathrm{\Omega }\mathrm{cos}(\mathrm{\Omega }t_c)<0.$$ (29) In the same manner, the prefactor (17) has to be matched at $`t=t_c`$. While $`Q^{\ast }(t)`$ is still continuous, $`\dot{Q}^{\ast }(t)`$ develops a jump at $`t=t_c`$ which can be determined from (9).
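Equation (27) is easy to verify numerically: along one linear branch the orbit must satisfy ẋ = λ(x − x̄) + A sin Ωt identically. A sketch with illustrative parameters:

```python
import math

lam, xbar, A, Omega = -1.0, -1.0, 0.3, 0.9    # stable branch: lam = lam_s < 0

def x_orbit(t):
    """Periodic orbit of Eq. (27) for one linear branch."""
    return xbar - A * (lam * math.sin(Omega*t) + Omega * math.cos(Omega*t)) \
                  / (lam**2 + Omega**2)

def residual(t, h=1e-6):
    """xdot - [lam (x - xbar) + A sin(Omega t)] along the orbit; should vanish."""
    dxdt = (x_orbit(t + h) - x_orbit(t - h)) / (2 * h)
    return dxdt - (lam * (x_orbit(t) - xbar) + A * math.sin(Omega * t))

worst = max(abs(residual(0.1 * j)) for j in range(70))   # scan one full period
```

The residual is zero up to finite-difference error, confirming the amplitude and phase structure of Eq. (27); the unstable branch works identically with λ = λ_u and x̄ = x̄_u.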
Upon collecting everything, the final result reads $`S[x^{\ast }(t)]=\mathrm{\Delta }V\left[1-{\displaystyle \frac{|A\lambda _u\lambda _s|}{(R\mathrm{\Delta }V)^{1/2}}}\right]^2`$ (30) $`\alpha =\left[{\displaystyle \frac{|A|(\mathrm{\Omega }^2+\lambda _u\lambda _s)+(R\mathrm{\Delta }V)^{1/2}}{16\pi ^3|A|S[x^{\ast }(t)]}}\right]^{1/2}`$ (31) $`\beta _k(t)=e^{-\lambda _u(k𝒯+t-t_c)}\left[\lambda _u\overline{x}_u-{\displaystyle \frac{|A\lambda _u\lambda _s|}{H^{1/2}}}\right],`$ (32) where $`H:=(\lambda _u^2+\mathrm{\Omega }^2)(\lambda _s^2+\mathrm{\Omega }^2)`$, $`R:=2H/(\lambda _u^{-1}-\lambda _s^{-1})`$, and $`\mathrm{\Delta }V:=(\lambda _u\overline{x}_u^2-\lambda _s\overline{x}_s^2)/2`$ is the potential barrier corresponding to the undriven ($`A=0`$) force-field (26). With $`𝒯=2\pi /\mathrm{\Omega }`$, $`I_u(t)=1/\lambda _u`$, and $`t_c`$ from (28,29), the rate (20,24) is thus determined completely. Comparison — These analytical predictions for the instantaneous rate (20) are compared in fig. 2 for a representative set of parameter values with very accurate numerical results. The agreement indeed improves with decreasing noise strength $`D`$. While the absolute values of $`\mathrm{\Gamma }(t)`$ and the location of the extrema depend strongly on $`D`$, the overall shape changes very little and does not develop singularities as $`D\to 0`$. The corresponding time-averaged rates (24) are depicted in fig. 3, exhibiting excellent agreement between theory and numerics even for relatively large $`D`$. The inset of fig. 3 confirms our prediction that the relative error $`E(D)`$ in (24) decreases asymptotically like $`D`$. Finally, fig. 4 illustrates the dependence of the averaged rate $`\overline{\mathrm{\Gamma }}`$ upon the amplitude $`A`$ of the periodic driving force. As expected, our theoretical prediction compares very well with the (numerically) exact rate, except for very small driving amplitudes $`A`$ (see the discussion above eq. (11)).
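The final formulas can be evaluated directly. The sketch below computes the action (30) and the time-averaged rate (24) with the prefactor (31) for illustrative parameters, checking that the driving lowers the effective activation energy below the static barrier ΔV:

```python
import math

lam_s, lam_u = -1.0, 1.0
xs_bar, xu_bar = -1.0, 1.0          # continuity: lam_s*xs_bar = lam_u*xu_bar
Omega = 1.0

dV = (lam_u * xu_bar**2 - lam_s * xs_bar**2) / 2.0     # static barrier
H = (lam_u**2 + Omega**2) * (lam_s**2 + Omega**2)
R = 2.0 * H / (1.0/lam_u - 1.0/lam_s)

def action(A):
    """Eq. (30): effective activation energy of the driven escape."""
    return dV * (1.0 - abs(A * lam_u * lam_s) / math.sqrt(R * dV))**2

def mean_rate(A, D):
    """Time-averaged rate, Eq. (24), with the prefactor alpha of Eq. (31)."""
    S = action(A)
    alpha = math.sqrt((abs(A) * (Omega**2 + lam_u * lam_s) + math.sqrt(R * dV))
                      / (16 * math.pi**3 * abs(A) * S))
    return math.sqrt(D) * alpha * math.exp(-S / D)

S_weak, S_strong = action(1e-9), action(0.3)   # driving lowers the barrier
rate = mean_rate(0.3, 0.1)
```

In the limit A → 0 the action returns to ΔV, recovering the static Arrhenius exponent, while finite driving suppresses it quadratically, in line with the strong amplitude dependence shown in fig. 4.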
The approximation from is complementary to ours in that it is very accurate for small $`A`$ but develops considerable deviations with increasing $`A`$. Those approximations have been omitted in figs. 2 and 3 since they are not valid in this parameter regime and indeed are way off. This work was supported by DFG-Sachbeihilfe HA1517/13-2 and the Graduiertenkolleg GRK283.
no-problem/9911/hep-ph9911360.html
ar5iv
text
Table 1: Results of fits to the parameters of the GMSB model described in the text, starting from a possible set of light-sparticle mass measurements from threshold scans. A 200 fb<sup>-1</sup> run at the LC is assumed.

hep-ph/9911360
CERN-TH/99-348

# Extracting GMSB Parameters at a Linear Collider<sup>1</sup>

<sup>1</sup>Work supported also by Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany

Sandro Ambrosanio
CERN – Theory Division, CH-1211 Geneva 23, Switzerland
e-mail: ambros@mail.cern.ch

and

Grahame A. Blair
Royal Holloway and Bedford New College, University of London, Egham Hill, Egham, Surrey TW20 0EX, U.K.
e-mail: g.blair@rhbnc.ac.uk

Abstract

Assuming gauge-mediated supersymmetry (SUSY) breaking (GMSB), we simulate precision measurements of fundamental parameters at a 500 GeV $`e^+e^{-}`$ linear collider (LC) in the scenario where a neutralino is the next-to-lightest supersymmetric particle (NLSP). Information on the SUSY-breaking and messenger sectors of the theory is extracted from realistic fits to the measured mass spectrum of the Minimal SUSY Model (MSSM) particles and the NLSP lifetime.

Contribution to the Workshops: 2<sup>nd</sup> ECFA/DESY Study on Physics and Detectors for a Linear $`e^+e^{-}`$ Collider, Lund, Frascati, Oxford, Obernai – June 1998 to October 1999

Supersymmetry must be broken if it is to describe nature, and GMSB is one attractive way to realize this, also providing natural suppression of the SUSY contributions to flavour-changing neutral currents at low energies. In GMSB models, the gravitino $`\stackrel{~}{G}`$ is the LSP, with mass given by $`m_{\stackrel{~}{G}}=\frac{F}{\sqrt{3}M_P^{\prime }}\simeq 2.37\left(\frac{\sqrt{F}}{100\mathrm{TeV}}\right)^2\mathrm{eV}`$, where $`\sqrt{F}`$ is the fundamental SUSY-breaking scale and $`M_P^{\prime }`$ is the reduced Planck mass. The GMSB phenomenology is characterised by decays of the NLSP to its Standard Model partner and the $`\stackrel{~}{G}`$, with a non-negligible or even macroscopic lifetime.
In the simplest GMSB realizations, depending on the parameters $`M_{\mathrm{mess}}`$, $`N_{\mathrm{mess}}`$, $`\mathrm{\Lambda }`$, $`\mathrm{tan}\beta `$, sign($`\mu `$) defining the model, the NLSP can be either the lightest neutralino $`\stackrel{~}{N}_1`$ or the light stau $`\stackrel{~}{\tau }_1`$. For this study , we generated several thousand GMSB models following the standard phenomenological approach and focused on the neutralino NLSP scenario, for which we selected several representative points for simulation. Our aim was to explore the potential of an LC in extracting the fundamental model parameters. Firstly, we investigated the sensitivity in determining the GMSB parameters at the messenger and electroweak scales from the knowledge of the sparticle masses that could be obtained from threshold-scanning techniques. We used a sample model with $`M_{\mathrm{mess}}=`$ 161 TeV; $`N_{\mathrm{mess}}=1`$; $`\mathrm{\Lambda }=76`$ TeV; $`\mathrm{tan}\beta =3.5`$; $`\mu >0`$, producing a rather light sparticle spectrum, and assumed a total of 200 fb<sup>-1</sup> collected between 200 and 500 GeV c.o.m. energies at an LC.
By just considering the shape of the total cross sections for several kinematically allowed SUSY production processes as functions of $`\sqrt{s}`$ close to the thresholds, we inferred the following approximate precisions for the sparticle masses: $`\mathrm{\Delta }(m_{\stackrel{~}{N}_1})\simeq 0.2\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{N}_2})\simeq 0.8\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{C}_1})\simeq 0.1\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{e}_L})\simeq 0.2\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{e}_R})\simeq 0.2\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{\mu }_R})\simeq 0.8\mathrm{GeV};`$ (1) $`\mathrm{\Delta }(m_{\stackrel{~}{\tau }_1})\simeq 0.8\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{\stackrel{~}{\tau }_2})\simeq 2.0\mathrm{GeV};`$ $`\mathrm{\Delta }(m_{h^0})\simeq 0.1\mathrm{GeV}.`$ By performing a fit to minimise a $`\chi ^2`$ based on these errors, with the true (model-dependent) values as the central ones in the fit, we obtained an estimate of the precisions on the underlying parameters, as shown in Tab. 1. We checked that these precisions are typical for the class of models we considered. Then, we considered $`\stackrel{~}{N}_1`$ lifetime measurements in the whole allowed $`c\tau _{\stackrel{~}{N}_1}`$ range, performing detailed event simulation for our set of representative GMSB models. Indeed, since the $`\stackrel{~}{N}_1`$ lifetime is related to $`\sqrt{F}`$ by $$c\tau _{\stackrel{~}{N}_1}=\frac{16\pi }{\mathcal{B}}\frac{F^2}{m_{\stackrel{~}{N}_1}^5}\simeq \frac{1}{100\mathcal{B}}\left(\frac{\sqrt{F}}{100\mathrm{TeV}}\right)^4\left(\frac{m_{\stackrel{~}{N}_1}}{100\mathrm{GeV}}\right)^{5}\mathrm{cm},$$ (2) the GMSB framework provides an opportunity to extract information on the SUSY breaking sector of the theory from collider experiments that is not available, e.g., in supergravity-inspired models. Typical neutralino lifetimes for our models range from microns to tens of metres.
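The scaling of the neutralino lifetime with $`\sqrt{F}`$ and $`m_{\stackrel{~}{N}_1}`$ can be checked numerically. An illustrative sketch (not the authors' code), reading the relation above as cτ = 16π F²/(ℬ m⁵) with the order-unity factor ℬ set to 1, and converting from natural units via ħc ≈ 1.97327×10⁻¹⁴ GeV·cm:

```python
import math

HBARC_CM = 1.97327e-14  # hbar*c in GeV*cm

def ctau_cm(sqrtF_TeV, mN1_GeV, B=1.0):
    """c*tau = 16*pi*F^2 / (B * m^5), converted from GeV^-1 to cm.

    B is the order-unity factor depending on the neutralino physical
    composition; it is set to 1 here purely for illustration."""
    F = (sqrtF_TeV * 1.0e3) ** 2  # F in GeV^2
    return 16.0 * math.pi * F**2 / (B * mN1_GeV**5) * HBARC_CM
```

For $`\sqrt{F}`$ = 100 TeV and a 100 GeV neutralino this gives roughly 10⁻² cm (100 μm), and cτ grows as the fourth power of $`\sqrt{F}`$, consistent with the micron-to-tens-of-metres range quoted in the text.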
While the lower bound on $`c\tau _{\stackrel{~}{N}_1}`$ comes from requiring perturbativity up to the grand unification scale , the upper bound is only valid if the $`\stackrel{~}{G}`$ mass is restricted to be lighter than about 1 keV, as suggested by some cosmological arguments (cf. Fig. 1). For given $`\stackrel{~}{N}_1`$ mass and lifetime, the residual theoretical uncertainty on determining $`\sqrt{F}`$ is due to the factor of order unity $`\mathcal{B}`$ in Eq. (2), whose variation is quite limited in GMSB models (cf. Fig. 2a). In addition to the dominant $`\stackrel{~}{N}_1\to \gamma \stackrel{~}{G}`$ decay, it was fundamental to our analysis to take the $`\stackrel{~}{N}_1\to \stackrel{~}{G}f\overline{f}`$ decays into account, in order to use the tracking detectors for measurements of shorter neutralino lifetimes. We performed a complete study of these channels and found that in most cases of interest for our study the total width is given approximately by $$\mathrm{\Gamma }(\stackrel{~}{N}_1\to f\overline{f}\stackrel{~}{G})\simeq \mathrm{\Gamma }(\stackrel{~}{N}_1\to \gamma \stackrel{~}{G})\frac{\alpha _{\mathrm{em}}}{3\pi }N_f^cQ_f^2\left[2\mathrm{ln}\frac{m_{\stackrel{~}{N}_1}}{m_f}-\frac{15}{4}\right]+\mathrm{\Gamma }(\stackrel{~}{N}_1\to Z\stackrel{~}{G})B(Z\to f\overline{f}),$$ (3) where the expressions for the widths of the 2-body $`\stackrel{~}{N}_1`$ decays are well-known . In Fig. 2b, the branching ratio (BR) of the $`\stackrel{~}{N}_1\to \gamma \stackrel{~}{G}`$ decay is compared to those of $`\stackrel{~}{N}_1\to Z\stackrel{~}{G}`$ and $`\stackrel{~}{N}_1\to h^0\stackrel{~}{G}`$ (in the on-shell approximation) and those of the main $`\stackrel{~}{N}_1\to f\overline{f}\stackrel{~}{G}`$ channels (including virtual-photon exchange contributions only). To generate GMSB events, we modified SUSYGEN 2.2/03 , to take the 3-body neutralino decays into account as follows.
We implemented in CompHEP 3.3.18 a custom Lagrangian including the relevant gravitino interaction vertices in a suitable approximation. Then, we studied the kinematical distributions of the $`\stackrel{~}{N}_1\to f\overline{f}\stackrel{~}{G}`$ channels and passed the results to the event generator numerically. For each sample GMSB model, we considered in most cases an LC run at a c.o.m. energy such that the only SUSY production process open is NLSP pair production $`e^+e^{-}\to \stackrel{~}{N}_1\stackrel{~}{N}_1`$, followed by $`\stackrel{~}{N}_1`$ decays through all possible channels. For more challenging models where the light SUSY thresholds are close to each other, we also simulated events from $`R`$-slepton pair production and applied a selection to isolate the $`\stackrel{~}{N}_1\stackrel{~}{N}_1`$ events, for which the $`\stackrel{~}{N}_1`$ production energy is fixed by the beam energy (we also took into account initial-state radiation as well as beamstrahlung effects), allowing a cleaner $`c\tau _{\stackrel{~}{N}_1}`$ measurement. The primary vertex of the events was first smeared according to the assumed beamspot size of 5 nm in $`y`$, 500 nm in $`x`$ and 400 $`\mu `$m in $`z`$, and then the events were passed through a full GEANT 3.21 simulation of the detector as described in the ECFA/DESY CDR . The tracking detector components essential to our analysis included a 5-layer vertex detector with a point precision of 3.5 $`\mu `$m in $`r\varphi `$ and $`z`$, and a TPC with 118 padrows and point resolutions of 160 $`\mu `$m in $`r\varphi `$ and 0.1 cm in $`z`$. In addition, we assumed an electromagnetic calorimeter with energy resolution given by ($`10.3/\sqrt{E}+0.6`$)%, angular pointing resolution of $`50/\sqrt{E}`$ mrad and timing resolution of $`2/\sqrt{E}`$ ns. The dimensions of the whole calorimeter (electromagnetic and hadronic) were 172 cm $`<r<210`$ cm and 280 cm $`<|z|<330`$ cm.
The probability for a single neutralino produced with energy $`E_{\stackrel{~}{N}_1}`$ to decay before travelling a distance $`\lambda `$ is given by $`P(\lambda )=1-\mathrm{exp}(-\lambda /L)`$, where $`L=c\tau _{\stackrel{~}{N}_1}(\beta \gamma )_{\stackrel{~}{N}_1}`$ is the $`\stackrel{~}{N}_1`$ “average” decay length and $`(\beta \gamma )_{\stackrel{~}{N}_1}=(E_{\stackrel{~}{N}_1}^2/m_{\stackrel{~}{N}_1}^2-1)^{1/2}`$. For $`L`$ less than a few cm, we used tracking for measuring the vertex of $`\stackrel{~}{N}_1\to \stackrel{~}{G}f\overline{f}`$ decays. When $`L`$ is very short, less than a few hundred $`\mu `$m, the beamspot size becomes important and a 3D procedure is not appropriate. Instead, the reconstructed vertex was projected onto the $`xy`$ plane, where the beamspot size is very small, and we used the resulting distributions to measure the $`\stackrel{~}{N}_1`$ lifetime. We studied several GMSB models with $`c\tau _{\stackrel{~}{N}_1}`$ in the allowed range and found that the intrinsic resolution of the method was approximately 10 $`\mu `$m. An example of the reconstructed 2D decay length distribution for a challenging model where the neutralino lifetime can be very short is shown in Fig. 3 for statistics corresponding to 200 fb<sup>-1</sup> ($`r`$ is the $`xy`$ component of $`\lambda `$). For 500 $`\mu `$m $`<L<15`$ cm, we used 3D vertexing to determine the decay length distribution and hence the lifetime of the $`\stackrel{~}{N}_1`$. Vertices arising from $`\stackrel{~}{N}_1\to \gamma \stackrel{~}{G}`$ decays and photon conversions in detector material were essentially eliminated using cuts on the invariant mass of the daughter pairs together with geometrical projection cuts involving the mass of the $`\stackrel{~}{N}_1`$ and the topology of the daughter tracks. Methods of measuring the $`\stackrel{~}{N}_1`$ mass using the endpoints of photon energies or threshold techniques, together with details of the projection cuts, have been described .
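The decay-length and decay-probability formulas above translate directly into code; a minimal sketch:

```python
import math

def decay_length_cm(ctau_cm, E_GeV, m_GeV):
    """Mean lab-frame decay length L = c*tau * (beta*gamma)."""
    beta_gamma = math.sqrt((E_GeV / m_GeV) ** 2 - 1.0)
    return ctau_cm * beta_gamma

def decay_prob(lam_cm, ctau_cm, E_GeV, m_GeV):
    """Probability P(lambda) = 1 - exp(-lambda/L) that the neutralino
    decays before travelling a distance lambda."""
    L = decay_length_cm(ctau_cm, E_GeV, m_GeV)
    return 1.0 - math.exp(-lam_cm / L)
```

For example, a neutralino with $`(\beta \gamma )=1`$ (i.e. $`E=\sqrt{2}\,m`$) has probability $`1-1/e\approx 0.63`$ of decaying within one cτ of the production point.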
Using 200 fb<sup>-1</sup> of data, we concluded that a $`c\tau _{\stackrel{~}{N}_1}`$ measurement with statistical error of $`\simeq 4\%`$ could be made using this method. For $`L`$ larger than a few cm, we used the $`\stackrel{~}{N}_1\to \gamma \stackrel{~}{G}`$ channel, providing much larger statistics. The calorimeter was assumed to have pointing capability, using the shower shapes together with appropriate use of pre-shower detectors. Assuming the pointing angular resolution mentioned above, we demonstrated how a decay length measurement can be made. We concluded that for lifetimes ranging from approximately 5 cm to approximately 2 m this method worked excellently, with statistical precisions ranging from a few % at the shorter end to about 6% at the upper end of the range. We also investigated the use of timing information to provide a lifetime measurement, but found it to be less useful than calorimeter pointing. However, the use of timing might be relevant to assign purely photonic events to bunch crossings and to reject cosmic backgrounds. For very long lifetimes, we employed a statistical technique where the ratio of the number of one-photon events in the detector to the number of two-photon events was determined as a function of $`c\tau _{\stackrel{~}{N}_1}`$. This allowed a largely model-independent measurement out to $`c\tau _{\stackrel{~}{N}_1}`$ of a few tens of metres. The possibility of using the ratio of the number of no-photon events to one-photon events was also discussed . The latter allows a greater length reach, but relies on model-dependent assumptions. In Fig. 4, we summarise the techniques we have used as a function of $`L`$ for a sample model. The criterion for indicating a method as successful is a measurement of $`L`$ and the $`\stackrel{~}{N}_1`$ lifetime to 10% or better. It can be seen that $`L`$ can be well measured for tens of $`\mu `$m $`<L<`$ tens of m, which is in most cases enough to cover the wide range allowed by theory and suggested by cosmology.
With reference to Eq. (2), we note that a 10% error in $`c\tau _{\stackrel{~}{N}_1}`$ corresponds to a 3% error in $`\sqrt{F}`$. This is of the same order of magnitude as the uncertainty on the factor $`\mathcal{B}`$, which parameterises mainly the different possible $`\stackrel{~}{N}_1`$ physical compositions in GMSB models (cf. Fig. 2a). We also checked explicitly that, in comparison, the contributing error from a neutralino mass measurement using threshold-scanning techniques or end-point methods is negligible . Hence we conclude that, for the models considered and under conservative assumptions, a determination of $`\sqrt{F}`$ with a precision of approximately 5% is achievable at an LC by performing only $`\stackrel{~}{N}_1`$ lifetime and mass measurements in the context of GMSB with a neutralino NLSP. Less model-dependent and more precise results can be obtained by adding information on the $`\stackrel{~}{N}_1`$ physical composition from other observables, such as $`\stackrel{~}{N}_1`$ decay BR’s, cross sections and distributions.
no-problem/9911/hep-ex9911010.html
ar5iv
text
# Jet and hadron production in e+e- annihilations To be published in the Proceedings of DIS 99 - Berlin 1999 ## 1 Introduction This note covers three topics in the study of identified hadrons and jet production, coming from recent analyses by the DELPHI experiment at LEP. A description of DELPHI can be found in ; its performance is discussed in . The selection of $`q\overline{q}`$ events at the Z peak (LEP 1) is easy: since the production of hadronic final states is enhanced by three orders of magnitude over the continuum, simple cuts against lepton-antilepton and two-photon events easily allow the definition of samples with contaminations at the per-mil level. The situation is different at LEP 2 (above the Z). The hadronic cross-section at the present energies is of the order of 100 pb, and it is dominated by the radiative return to the Z peak. The cross-section corresponding to effective c.m. energies above 0.85$`\sqrt{s}`$ is presently of the order of 20 pb, with a sizeable contamination from WW events; thus, even with large luminosities, we are left with a few hundred events per energy point (see the table below). As a consequence, the cuts against contaminations cannot be too severe (reaching purities of about 0.8, with efficiencies of about 0.8).

| $`\sqrt{s}`$ (GeV) | Lumi/exp. (pb<sup>-1</sup>) | Events with $`\sqrt{s^{\prime }}>0.85\sqrt{s}`$ |
| --- | --- | --- |
| 133 | 12 | 900 |
| 161 | 10 | 350 |
| 172 | 10 | 290 |
| 183 | 55 | 1,300 |
| 189 | 175 | 3,800 |

The system of hadron identification of DELPHI covers the momentum region between 100 MeV/$`c`$ and 30 GeV/$`c`$ by combining the separations given by: the $`dE/dX`$, mainly in the region below 1 GeV/$`c`$; the liquid radiator of the Ring Imaging CHerenkov detector (RICH) between 1 and 5 GeV/$`c`$; and the gaseous radiator of the RICH at high momenta.
$`K_s^0`$ ($`\mathrm{\Lambda }`$) are reconstructed from invariant mass distributions for opposite-sign particles coming from a secondary vertex, with typical efficiency and purity of 30% (25%) and 95% (90%) respectively. The influence of the detector on the analysis is studied with the full DELPHI simulation program, DELSIM . The efficiency of identification of charged hadrons is computed directly from the data, using samples of $`\pi ^\pm `$ and protons coming from the decay of $`K_s^0`$ and $`\mathrm{\Lambda }`$. ## 2 Identified hadrons in high energy $`q\overline{q}`$ events The way quarks and gluons transform into hadrons is not entirely understood by present theories; the most satisfactory description is given by Monte Carlo simulations. A different, analytical approach (see e.g. and references therein) is provided by QCD calculations using the so-called Modified Leading Logarithmic Approximation (MLLA) under the assumption of Local Parton Hadron Duality (LPHD) . In this picture the particle yield is described by a parton cascade, and the virtuality cut-off $`Q_0`$ is lowered to values of the order of 100 MeV, comparable to the hadron masses; it is assumed that the results obtained for partons apply to hadrons as well. The momentum spectra of particles produced can be calculated as a function of the variable $`\xi _p`$, where $`\xi _p=-\mathrm{log}(\frac{2p}{\sqrt{s}})`$ ($`p`$ is the momentum of the particle), and depend on three parameters: an effective scale parameter $`\mathrm{\Lambda }_{eff}`$, a momentum cut-off $`Q_0`$ in the evolution of the parton cascade and an overall normalization factor $`k`$. The function has the form of a “hump-backed plateau”, approximately Gaussian in $`\xi _p`$ . To check the validity of the MLLA+LPHD approach, it is interesting to study the evolution with the centre of mass energy of the maximum, $`\xi ^{*}`$, of the $`\xi _p`$ distribution.
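As an illustration of how $`\xi _p`$ and the peak position $`\xi ^{*}`$ can be extracted from a measured momentum spectrum (a sketch, not the DELPHI analysis code; it exploits the approximately Gaussian shape near the maximum noted above, so the log of the histogram is fitted with a parabola):

```python
import numpy as np

def xi_p(p, sqrt_s):
    """Scaled-momentum variable xi_p = -log(2p/sqrt(s))."""
    return np.log(sqrt_s / (2.0 * p))

def xi_star(xi_values, nbins=50):
    """Estimate the peak xi* by fitting a parabola to the log of the
    histogram around its maximum (the log of a Gaussian is a parabola)."""
    counts, edges = np.histogram(xi_values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    k = int(np.argmax(counts))
    window = slice(max(k - 3, 0), min(k + 4, nbins))
    a2, a1, _ = np.polyfit(centers[window], np.log(counts[window] + 1e-9), 2)
    return -a1 / (2.0 * a2)  # vertex of the fitted parabola
```

The quadratic-in-log fit uses only the bins near the maximum, so it is insensitive to the non-Gaussian tails of the spectrum.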
In the framework of the MLLA+LPHD the dependence of $`\xi ^{*}`$ on the centre of mass energy can be expressed as: $$\xi ^{*}\simeq a+b\mathrm{ln}\sqrt{s}$$ (1) where the slope $`b`$ depends just on the effective $`\mathrm{\Lambda }`$ and not on $`Q_0`$: one can thus assume that $`b`$ is independent of the mass of the particle produced. The evolution of $`\xi ^{*}`$ with the centre of mass energy for identified hadrons was compared with lower energy data; the data up to centre of mass energies of 91 $`\mathrm{GeV}`$ were taken from previous measurements . The fit to expression (1) follows the data points rather well. The average multiplicity of the identified hadrons was obtained from the integration of the $`\xi _p`$ distributions inside a range varying according to the particle type and energy; outside this range the fraction of particles was extrapolated using the JETSET 7.4 prediction. The predictions from JETSET and HERWIG , tuned at the Z, are consistent with the observation. ## 3 Charged particle multiplicity in $`b\overline{b}`$ events QCD predicts that the difference in charged multiplicity between light quark and heavy quark initiated events in $`e^+e^{-}`$ annihilations is energy independent; this is motivated by mass effects on the hadronization (see for a review). In a model in which the hadronization is independent of the mass, this difference decreases with the c.m. energy, eventually becoming negative at LEP 2 energies. Experimental tests at LEP 1 energies and below were not conclusive. At LEP 2, the difference between the QCD prediction and the model ignoring mass effects is large, and the experimental measurement can firmly distinguish between the two hypotheses. Data collected by DELPHI at centre-of-mass energies around 183 GeV and 189 GeV were analysed . 90% $`b`$-enriched samples were then obtained by tagging on the presence of tracks with nonzero impact parameter with respect to the primary vertex .
From such samples the average $`b\overline{b}`$ multiplicity has been measured by unfolding via simulation for detector effects, selection criteria and initial state radiation. The difference $`\delta _{bl}`$ between the $`b\overline{b}`$ multiplicity and the multiplicity in generic light quark $`l=u,d,s`$ events was measured: $`\delta _{bl}(183GeV)`$ $`=`$ $`4.23\pm 1.09(stat)\pm 1.40(syst);`$ $`\delta _{bl}(189GeV)`$ $`=`$ $`4.69\pm 0.89(stat)\pm 1.55(syst).`$ The result (Fig. 1) is fully consistent with the hypothesis of energy independence, and larger than predicted by mass-independent fragmentation. The systematic error will be further reduced by directly measuring the $`c\overline{c}`$ multiplicity. ## 4 Identified hadrons in quark and gluon jets Jets originating from quarks and gluons are expected to show differences in their particle multiplicity, energy spectrum, and angular distributions, due to the different colour charges carried. The LEP detectors can select gluon jets in $`b\overline{b}g`$ events by tagging the $`b`$ quarks. 2.2 million Z decays collected by DELPHI are considered in the analysis . 3-jet events are clustered using Durham with $`y_{cut}=0.015`$, optimizing the performance and still allowing a reliable comparison with perturbative QCD. For a detailed comparison with minimal bias, it is necessary to obtain samples of quark and gluon jets with similar kinematics. To fulfill this condition, two different 3-jet event topologies have been used: Y events (mirror symmetric) and Mercedes events (three-fold symmetric). The numbers of Mercedes and Y 3-jet events were 11,685 and 110,628 respectively. After anti-tagging of heavy quark jets, gluon jet purities of $`82\%`$ were achieved for both Y and Mercedes events. There are 24,449 Y events and 1,806 Mercedes events with identified gluon jets.
Three different jet classes, namely normal mixture jets, $`b`$ tagged jets and gluon tagged jets, with different compositions in terms of quark and gluon jets, were selected. From the comparison of these, the content in terms of protons, $`K^\pm `$, $`K^0`$ and $`\varphi `$ was calculated as a function of the momentum of the hadron for quark and gluon jets. As for inclusive charged particles, the production spectrum of identified hadrons is softer in $`g`$ jets compared to $`q`$ jets, and the multiplicity is larger. The ratio of the average multiplicity in $`g`$ jets over $`q`$ jets is consistent with the ratio for charged particles, except for protons, for which: $$\frac{<n_p>^{(g)}/<n_p>^{(q)}}{<n_{ch.}>^{(g)}/<n_{ch.}>^{(q)}}=1.205\pm 0.041.$$ HERWIG underestimates both the $`K`$ and the p production in $`g`$ jets; JETSET and ARIADNE overestimate the proton production in $`g`$ jets.
no-problem/9911/physics9911038.html
ar5iv
text
# Super-paramagnetic clustering of yeast gene expression profiles ## 1 Introduction DNA microarray technologies have made it straightforward to monitor simultaneously the expression levels of thousands of genes during various cellular processes . The new challenge is to make sense of such massive expression data . In most such experiments, investigators compare the relative change of gene expression levels between two samples (one is called the target, such as a disease sample; the other is called the control, such as a normal sample). In a typical experiment simultaneous expression levels of thousands of genes are viewed over a few tens of time-points (or different tissues ). Hence one needs to analyse arrays that contain $`10^5`$–$`10^6`$ measurements. The aims of such analysis are typically to (a) group genes with correlated expression profiles; (b) focus on those groups which seem to participate in some biological process; (c) provide a biological interpretation of the clusters. Interpretations could be co-regulation of the mean cluster expression with a known process, a promoter common to most of the genes in the cluster, etc. (d) In experiments that compare data from different tissues (such as tumor and normal ) one also tries to differentiate them on the basis of their genetic expression profiles. The sizes of the datasets and their complexity call for multivariate clustering techniques, which are essential for extracting correlated patterns in the swarm of data points in multidimensional space (for example, each relative gene expression profile with k time-points may be regarded as a k-dimensional vector). ## 2 SPC Currently, two clustering approaches are very popular among biologists. One is average linkage, a hierarchical clustering method , with the Pearson correlation used as a similarity measure . The other is self-organizing maps (SOMs) , whose most popular implementation for array data analysis is GENECLUSTER .
We present here clustering performed by SPC, a hierarchical clustering method recently introduced by Blatt et al . It is based on an analogy to the physics of inhomogeneous ferromagnets. Full details of the algorithm and the underlying philosophy are given elsewhere ; here only a brief description is provided. The input required for SPC is a distance matrix between the $`N`$ data points that are to be clustered. From such a distance matrix one constructs a graph, whose vertices are the data points and edges identify neighboring points. Two points $`i`$ and $`j`$ are called neighbors (and connected by an edge) if they satisfy the K-mutual-neighbor criterion, i.e. iff $`j`$ is one of the $`K`$ nearest points to $`i`$ and vice versa. With each edge we associate a weight $`J_{ij}>0`$, which decreases as the distance between points $`i`$ and $`j`$ increases. Assignment of the data points to clusters is equivalent to partitioning this weighted graph. Cluster indices play the role of the states of Potts spins assigned to each vertex (i.e. to each original data point). Two neighboring spins interact ferromagnetically with strength $`J_{ij}`$. This Potts ferromagnet is simulated at a sequence of temperatures $`T`$. The susceptibility and the correlation function for neighboring pairs of spins are measured. The pair correlation function serves to identify clusters: high correlation means that the two data points belong to the same cluster. The temperature $`T`$ controls the resolution at which clustering is performed; the algorithm finds typical clusters at all resolutions. At very low temperatures all points belong to a single cluster and as $`T`$ is increased, clusters break into smaller ones, until at high enough temperatures each point forms its own cluster. The clusters found at all temperatures form a dendrogram. Blatt et al. showed that the SPC algorithm is robust, since the clusters are formed due to collective behavior of the system.
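The K-mutual-neighbor graph construction described above can be sketched as follows (illustrative code, not the original SPC implementation):

```python
import numpy as np

def mutual_knn_edges(D, K):
    """Edges of the K-mutual-neighbor graph for a symmetric distance
    matrix D: points i and j are connected iff each is among the K
    nearest points of the other (the point itself is excluded)."""
    N = len(D)
    # K nearest neighbours of each point, skipping the point itself
    nn = [set(np.argsort(D[i])[1:K + 1]) for i in range(N)]
    return {(i, int(j)) for i in range(N) for j in nn[i]
            if i < j and i in nn[j]}
```

On a toy line of points at 0, 1, 2 and 10 with K = 2, the three close points are connected to each other while the outlier stays disconnected; this mutuality requirement is part of what makes the subsequent Potts clustering robust to background points.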
The major splits can be easily identified by a peak in the susceptibility. For more details see . ## 3 Yeast Cell Cycle and Microarray Data We applied SPC to a recently published data set to determine whether it could automatically expose known clusters without using prior knowledge. Eisen et al. clustered the genes on the basis of data combined from 7 different experiments. We suspected that mixing the results of different experiments may introduce noise into the data associated with a single one. Therefore we chose to use only a single time course, that of gene expression as measured in a single process (cell division cycles following alpha-factor block-and-release ). Furthermore, we focused on genes that have characterized functions (2467 genes) for easier interpretation. Genetic controls and regulation play a central role in determination of cell fate during development. They are also important for the timing of cell cycle events such as DNA replication, mitosis and cytokinesis. Yeast is a unicellular organism which has become a favorite model in molecular biology, due to the ease of genetic and biochemical manipulation and the availability of the complete genome. Like that of all living cells, the yeast cell cycle consists of four phases: G1$`\to `$S$`\to `$G2$`\to `$M$`\to `$G1…, where S is the phase of DNA synthesis (replicating the genome); M stands for mitosis (division into two daughter cells), and the two gap phases are called G1 (preceding the S phase) and G2 (following the S phase). At least four different classes of cell cycle regulated genes exist in yeast : G1 cyclins and DNA synthesis genes are expressed in late G1; histone genes in S; genes for transcription factors, cell cycle regulators and replication initiation proteins in G2; and genes needed for cell separation are expressed as cells enter G1. Early and late G1-specific transcription is mediated by the Swi5/Ace2 and Swi4/Swi6 classes of factors, respectively.
Changes in the master cyclin/Cdc28 kinases are involved in all classes of regulation. In the alpha-factor block-release experiments, MATa cells were first arrested in G1 by using alpha pheromone. Then the blocker was removed; from this point on the cell division cycle starts and the population progresses with significant cell cycle synchrony. RNA was extracted from the synchronized sample, as well as a control sample (asynchronous cultures of the same cells growing exponentially at the same temperature in the same medium). Fluorescently labeled cDNA was synthesized using Cy3 (“green”) for the control and Cy5 (“red”) for the target. Mixtures of equal amounts of the two samples were taken every 7 min and competitively hybridized to individual microarrays containing essentially all yeast genes. The ratio of red to green light intensity (proportional to the RNA concentrations) was measured by scanning laser microscopy (see for experimental details). The actual data provided at the Stanford website are the log ratios. In their analysis, Spellman et al. focused on the identification of 800 cell cycle regulated genes (that may have periodic expression profiles). In our test of SPC, in addition to oscillatory genes we were also looking for any groups of genes with highly correlated expression patterns. ## 4 SPC Analysis of Yeast Gene Expression Profiles We clustered the expression profiles of the 2467 yeast genes of known function over data taken at 18 time intervals (of 7 min) during two cell division cycles, synchronised by alpha arrest and release. Denote by $`E_{ij}`$ the relative expression of gene $`i`$ at time interval $`j`$. Our data consist of 2467 points in an 18-dimensional space, normalised in the standard way: $`G_{ij}=\frac{E_{ij}-<E_i>}{\sigma _i};<E_i>=\frac{1}{18}\sum _{j=1}^{18}E_{ij};\sigma _i^2=\frac{1}{18}\sum _{j=1}^{18}E_{ij}^2-<E_i>^2`$ We looked for clusters of genes with correlated expression profiles over the two division cycles.
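The normalization just defined is straightforward to apply to the full expression matrix; a NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def normalize_profiles(E):
    """Row-normalize an expression matrix E (genes x time points):
    subtract each gene's mean and divide by its population standard
    deviation, giving the G_ij of the text."""
    mean = E.mean(axis=1, keepdims=True)
    sigma = E.std(axis=1, keepdims=True)  # population std, matching the 1/18 definition
    return (E - mean) / sigma
```

After this step every row has zero mean and unit standard deviation, so Euclidean distances between rows depend only on the shapes of the temporal profiles, not on their overall scale.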
The SPC algorithm was used with $`q=20`$-component Potts spins, each interacting with those neighbors that satisfy the $`K`$-mutual neighbor criterion with $`K=10`$. Euclidean distance between the normalized vectors was used as the distance between two genes. This distance is directly related to the Pearson correlation used by Eisen et al. At $`T=0`$ all data points form one cluster, which splits as the system is “heated”. The resulting dendrogram of genes is presented in Fig. 1. Each node represents a cluster; only clusters of size larger than 6 genes are shown. The last such clusters of each branch, as well as non-terminal clusters that were selected for presentation and analysis (in a way described below), are shown as boxes. The circled boxes represent the clusters that are analysed below. The position of every node along the horizontal axis is determined for the corresponding cluster according to a method introduced by Alon et al ; proximity of two clusters along this axis indicates that the corresponding temporal expression profiles are not very different. The vertical axis represents the resolution, controlled by the “temperature” $`T\ge 0`$. The vertical position of a node or box is determined by the value of $`T`$ at which it splits. A high vertical position indicates that the cluster is stable, i.e. contains a fair number of closely-spaced data points (genes with similar expression profiles). In order to identify clusters of genes whose temporal variation is on the scale of the cell cycle, we calculated for each cluster a cycle score $`S_1`$, defined as follows.
First, for each cluster $`C`$ (with $`N_C`$ genes) we calculate the average normalized expression level at all $`j=1,\dots ,18`$ time intervals and the corresponding standard deviations $`\sigma ^C(j)`$: $$\overline{G}^C(j)=\frac{1}{N_C}\sum _{i\in C}G_{ij};\qquad [\sigma ^C(j)]^2=\frac{1}{N_C}\sum _{i\in C}(G_{ij})^2-[\overline{G}^C(j)]^2$$ Next, we evaluated the Fourier transform of the mean expression profiles $`\overline{G}^C(j)`$ for every gene cluster $`C`$. To suppress initial transients, the Fourier transform is performed only over $`j=4,\dots ,18`$. Denote the absolute values of the Fourier coefficients by $`A_k`$; the ratio between low-frequency coefficients and the high-frequency ones was used as a figure of merit for the time scale of the variation. We observed that clusters that satisfy the condition $$S_1^C=\frac{\sum _{k=2}^4A_k}{\sum _{k=6}^8A_k}>2.15$$ (1) have the desired time dependence, and found 29 clusters (consisting of 167 genes) to have such scores. For many of these clusters, however, the temporal variation was very weak, i.e. of the same order as the standard deviations $`\sigma ^C(j)`$ of the individual gene expressions of the cluster. We defined another score, $`S_2^C`$, for which we required $$S_2^C=\frac{1}{18}\sum _{j=1}^{18}\left[\frac{\overline{G}^C(j)}{\sigma ^C(j)}\right]^2>5.6$$ (2) For clusters $`C`$ that satisfy this condition the “signal” significantly exceeds the noise. We select a cluster if its score exceeds 5.6, while its parent’s score does not. Only 4 clusters, containing 86 genes, satisfy both conditions (1) and (2); these are numbered 1 – 4 in Fig. 1. Seven additional relatively stable clusters which did not satisfy our two criteria, but are of interest, are also selected and circled in Fig. 1. The corresponding time sequences are shown in Fig. 2: $`\overline{G}^C(j)`$ is plotted for each cluster versus time $`j`$, with the error bars representing the standard deviations $`\sigma ^C(j)`$.
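The two scores just defined can be computed per cluster as follows (an illustrative sketch, not the original analysis code; profiles are assumed to be stored as rows of a NumPy array, with the Fourier transform taken over the last 15 time points as in the text):

```python
import numpy as np

def cycle_scores(cluster):
    """cluster: (N_C, 18) array of normalized profiles G_ij.
    Returns (S1, S2) for the cluster's mean profile."""
    Gbar = cluster.mean(axis=0)
    sigma = cluster.std(axis=0)
    A = np.abs(np.fft.fft(Gbar[3:]))   # skip initial transient: j = 4..18
    S1 = A[2:5].sum() / A[6:9].sum()   # low- over high-frequency coefficients
    S2 = np.mean((Gbar / sigma) ** 2)  # signal-to-noise of the mean profile
    return S1, S2
```

A cluster of noisy copies of a slowly oscillating profile passes both thresholds, while for pure noise the mean profile averages away and S2 falls to roughly 1/N_C, far below 5.6.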
Clusters 1, 2 and 4 clearly correspond to the cell cycle. ## 5 Details and Interpretation of gene clustering. The full lists of genes that constitute the 11 selected clusters are given on our website . We present here a short analysis of our clusters. We use standard notation for bases: R stands for A or G, W for A or T, K for G or T, N for any base. Cluster # 1: These are mostly Late G1 phase specific genes. They contain the major cell cycle regulators: Cln1,2, Clb5,6 and Swi4 as well as DNA replication and repair genes. One can easily detect MCB (ACGCGT) or SCB (CRCGAAA) sites in their promoters, to which MBF (Swi6p+Mbp1p) and SBF (Swi6p+Swi4p) bind respectively . Cluster # 4: This cluster contains mostly S phase genes and is dominated by the histones. Histones are required for wrapping up nascent DNA into nucleosomes; their promoters are regulated by CCA (GCGAARYTNGRGAACR), NEG (CATTGNGCG) as well as SCB (CGCGAAA) . Cluster # 2: These are mostly G2/M phase genes. They contain the major cell cycle regulators: Clb1,2 and Swi5/Ace2. It is known that all genes co-regulated with Clb1,2 are mainly controlled either by Mcm1 at the P-box (TTWCCYAAWNNGGWAA) or by Mcm1+SFF through the composite site (P-box)N2-4RTAAAYAA . Clusters # 5, # 6 and # 8: These are mostly ribosomal protein (RP) genes. The genome of Saccharomyces cerevisiae contains 137 genes coding for ribosomal proteins . Since 59 genes are duplicated, the ribosomal gene family encodes 78 different proteins, 32 of the small and 46 of the large ribosomal subunit. They are co-regulated because they are sub-components of the ribosome machinery for protein translation. All genes in cluster #6 reside on chromosomes 2, 4 and 5, except rpl11b which resides on chromosome 9. All genes in clusters #5 and #8 (which are very close in the dendrogram of Figure 1) reside on chromosomes 8-16, except rps17b which resides on chromosome 4.
It is likely that the expression of these ribosomal genes is correlated with their chromosomal locations. It is interesting that the expression profiles appear to have pronounced oscillations (throughout in #5, at early times in #6 and at late times in #8). Like most of the RP genes, the ribosomal genes in the 3 clusters also contain multiple binding sites for the global regulator Rap1p in their promoters, within a preferred window of 15-26 bp. The transcription of most RP genes is activated by two Rap1p binding sites, 250 to 400 bp upstream from the initiation of transcription. Since Rap1p can be both an activator and a silencer, it is not known whether Rap1p is responsible for the oscillation. This oscillation could be a result of the interplay between the cell cycle and the Rap1p activity which determines the mean half-life of the RP mRNAs (5-7 min). As fresh medium was added at 91 min during the alpha-factor experiments, the genes in #6 and in #8 may have different responses to the nutrient change. Cluster #7: This cluster has 42 genes that are largely not cell cycle regulated. These genes have diverse functions in general metabolism. When searching the promoter regions for regulatory elements using gibbsDNA, we found a highly conserved motif, GCGATGAGNT, shared by 90% of the genes. This element seems to be novel; it has some similarity to the Gcn4p site TGACTC and the Yap1p site GCTGACTAATT. When searching the yeast promoter database SCPD, we found that the BUF site in the HO gene promoter and the UASPHR site in the Rad50 promoter appear to contain the core motif GATGAG. Although we do not know whether this element is functional or what the trans-factor might be, it is still very likely that it contributes to the co-regulation of this cluster of genes. Cluster #10: This cluster is characterized by a pronounced dip towards the end of the profile. Its genes are by and large not cell cycle regulated, except Clb4 (an S/G2 cyclin) and Rad54 (a G1 DNA repair gene).
By searching the promoter elements, we found a conserved motif, RNNGCWGCNNC, that is shared by a subset of the genes (Clb4, YNL252C, Rad54, Rpb10, Atp11 and Pex13). It partially matches a PHO4 binding motif (TCGGGCCACGTGCAGCGAT) in the promoter of Pho8. However, the PHO4 consensus, CACGTK, does not appear in the conserved motif of our cluster. We therefore suspect that it is a novel motif, which should be tested by experiments. ## 6 Summary We used the SPC algorithm to cluster gene expression data for the yeast genome. We were able to identify groups of genes with highly correlated temporal variation. Three of the groups found clearly correspond to well-known phases of the cell cycle; some of our observations of other clusters reveal features that have not been identified previously and may serve as the basis of future experimental investigations. Acknowledgements Research of E. Domany was partially supported by the Germany-Israel Science Foundation (GIF) and the Minerva Foundation. Research of M. Q. Zhang was partially supported by NIH/NHGRI under grant number HG01696.
no-problem/9911/hep-ex9911033.html
## 1 Introduction The existing calorimetric complexes (CDF, D0, H1, etc.), as well as the future huge ones (ATLAS, CMS, etc.) at the CERN Large Hadron Collider (LHC), are combined calorimeters with electromagnetic and hadronic compartments. For the energy reconstruction and the description of the longitudinal development of a hadronic shower it is necessary to know the $`e/h`$ ratios (the degree of non-compensation) of these calorimeters. For the ATLAS Tile barrel calorimeter detailed information about the $`e/h`$ ratio has been presented. For the liquid argon electromagnetic calorimeter, however, such information is practically absent. The aim of the present work is to develop a method to determine the value of the $`e/h`$ ratio of the LAr electromagnetic compartment. This work has been performed on the basis of the 1996 combined test beam data. Data were taken on the H8 beam of the CERN SPS, with pion and electron beams of 10, 20, 40, 50, 80, 100, 150 and 300 GeV/c. ## 2 The Combined Prototype Calorimeter The future ATLAS experiment will include in the central (“barrel”) region a calorimeter system composed of two separate units: the liquid argon electromagnetic calorimeter (LAr) and the iron-scintillator tile hadronic calorimeter (Tile). For a detailed understanding of the performance of the future ATLAS combined calorimeter, a combined calorimeter prototype setup was built, consisting of the LAr electromagnetic calorimeter prototype inside its cryostat and, downstream, the Tile calorimeter prototype, as shown in Fig. 1. The dead material between the two calorimeters was about $`2.2X_0`$ or $`0.28\lambda _I^\pi `$. Early showers in the liquid argon were kept to a minimum by placing light foam material in the cryostat upstream of the calorimeter. The two calorimeters were placed with their central axes at an angle of $`12^{\circ}`$ to the beam.
At this angle the two calorimeters have an active thickness of 10.3 $`\lambda _I`$. Between the active part of the LAr and the Tile detectors a layer of scintillator, called the midsampler, was installed. The midsampler consists of five scintillators, $`20\times 100`$ cm<sup>2</sup> each, fastened directly to the front face of the Tile modules. The scintillator is 1 cm thick. Beam quality and geometry were monitored with a set of beam wire chambers (BC1, BC2, BC3) and trigger hodoscopes placed upstream of the LAr cryostat. To detect punchthrough particles and to measure the effect of longitudinal leakage, a “muon wall” consisting of 10 scintillator counters (each 2 cm thick) was located behind the calorimeters at a distance of about 1 metre. ### 2.1 The Electromagnetic Liquid Argon Calorimeter The electromagnetic LAr calorimeter prototype consists of a stack of three azimuthal modules, each one spanning $`9^{\circ}`$ in azimuth and extending over 2 m along the Z direction. The calorimeter structure is defined by 2.2 mm thick steel-plated lead absorbers, folded to an accordion shape and separated by 3.8 mm gaps filled with liquid argon. The signals are collected by Kapton electrodes located in the gaps. The calorimeter extends from an inner radius of 131.5 cm to an outer radius of 182.6 cm, representing (at $`\eta =0`$) a total of 25 radiation lengths ($`X_0`$), or 1.22 interaction lengths ($`\lambda _I`$) for protons. The calorimeter is longitudinally segmented into three compartments of $`9X_0`$, $`9X_0`$ and $`7X_0`$, respectively. More details about this prototype can be found elsewhere. In front of the EM calorimeter a presampler was mounted. The active depth of liquid argon in the presampler is 10 mm and the strip spacing is 3.9 mm.
The cryostat has a cylindrical form with a 2 m internal diameter, is filled with liquid argon, and is made out of an 8 mm thick inner stainless-steel vessel, insulated by 30 cm of low-density foam (Rohacell), itself protected by a 1.2 mm thick aluminum outer wall. ### 2.2 The Hadronic Tile Calorimeter The hadronic Tile calorimeter is a sampling device using steel as the absorber and scintillating tiles as the active material. The innovative feature of the design is the orientation of the tiles, which are placed in planes perpendicular to the Z direction. For a better sampling homogeneity the 3 mm thick scintillators are staggered in the radial direction. The tiles are separated along Z by 14 mm of steel, giving a steel/scintillator volume ratio of 4.7. Wavelength shifting fibers (WLS) running radially collect light from the tiles at both of their open edges. The hadron calorimeter prototype consists of an azimuthal stack of five modules. Each module covers $`2\pi /64`$ in azimuth and extends 1 m along the Z direction, such that the front face covers $`100\times 20`$ cm<sup>2</sup>. The radial depth, from an inner radius of 200 cm to an outer radius of 380 cm, accounts for 8.9 $`\lambda `$ at $`\eta =0`$ (80.5 $`X_0`$). Read-out cells are defined by grouping together a bundle of fibers into one photomultiplier (PMT). Each of the 100 cells is read out by two PMTs and is fully projective in azimuth (with $`\mathrm{\Delta }\varphi =2\pi /64\approx 0.1`$), while the segmentation along the Z axis is made by grouping fibers into read-out cells spanning $`\mathrm{\Delta }Z=20`$ cm ($`\mathrm{\Delta }\eta \approx 0.1`$) and is therefore not projective. Each module is read out in four longitudinal segments (corresponding to about 1.5, 2, 2.5 and 3 $`\lambda _I`$ at $`\eta =0`$). More details of this prototype can be found elsewhere.
## 3 Event Selection We applied a set of cuts to eliminate non-single-track pion events, the beam halo, events with an interaction before the LAr calorimeter, events with longitudinal leakage, and electron and muon events. The cuts are the following: * the single-track pion events were selected by requiring the pulse height of the beam scintillation counters and the energy released in the presampler of the electromagnetic calorimeter to be compatible with that of a single particle; * the beam halo events were removed with appropriate cuts on the horizontal and vertical positions of the incoming track impact point and on the space angle with respect to the beam axis, as measured with the beam chambers; * the electron events were removed by requiring that the energy deposited in the LAr calorimeter be less than 90% of the beam energy; * a cut on the total energy rejects incoming muons; * the events with obvious longitudinal leakage were removed by requiring no signal from punchthrough particles in the muon wall; * to select the events with the hadronic shower origin in the first sampling of the LAr calorimeter, events with energy depositions in this sampling compatible with that of a single minimum-ionizing particle were rejected; * to select the events with well-developed hadronic showers, the energy depositions were required to be more than 10% of the beam energy in the electromagnetic calorimeter and less than 70% in the hadronic calorimeter.
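The cuts above amount to a sequence of boolean filters on the event sample. The sketch below is purely illustrative: the field names and the event representation are our assumptions, and only the 90%, 10% and 70% thresholds are taken from the text:

```python
def select_pions(events, e_beam):
    """Apply the main event-selection cuts as sequential boolean filters.

    `events` is a list of dicts with hypothetical keys; thresholds are
    expressed as fractions of the beam energy, as in the text.
    """
    selected = []
    for ev in events:
        if not ev["single_track"]:           # one beam particle only
            continue
        if ev["muon_wall_hit"]:              # punchthrough / longitudinal leakage
            continue
        if ev["e_lar"] > 0.9 * e_beam:       # electron rejection
            continue
        if ev["e_lar"] < 0.1 * e_beam:       # well-developed shower in the LAr
            continue
        if ev["e_tile"] > 0.7 * e_beam:      # limit energy in the hadronic part
            continue
        selected.append(ev)
    return selected
```

In practice each cut would of course be applied to the full beam-chamber and calorimeter readout rather than to precomputed scalars.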
## 4 The $`e/h`$ ratio of the LAr Electromagnetic Compartment The response, $`R_h`$, of a calorimeter to a hadronic shower is the sum of the contributions from the electromagnetic, $`E_e`$, and hadronic, $`E_h`$, parts of the incident energy $$E=E_e+E_h,$$ (1) $$R_h=eE_e+hE_h=eE(f_{\pi ^o}+(h/e)(1-f_{\pi ^o})),$$ (2) where $`e`$ ($`h`$) is the energy-independent coefficient of transformation of the electromagnetic (purely hadronic, low-energy hadronic activity) energy into response, and $`f_{\pi ^o}=E_e/E`$ is the fraction of electromagnetic energy. From this $$E=\frac{e}{\pi }\frac{1}{e}R_h,$$ (3) where $$\frac{e}{\pi }=\frac{e/h}{1+(e/h-1)f_{\pi ^o}}.$$ (4) In the case of the combined calorimeter the incident beam energy, $`E_{beam}`$, is deposited into the LAr compartment, $`E_{LAr}`$, into the Tilecal compartment, $`E_{Tile}`$, and into the dead material between the LAr and Tile calorimeters, $`E_{dm}`$: $$E_{beam}=E_{LAr}+E_{Tile}+E_{dm}.$$ (5) Using relation (3) the following expression is obtained: $$E_{beam}=c_{LAr}\left(\frac{e}{\pi }\right)_{LAr}R_{LAr}+c_{Tile}\left(\frac{e}{\pi }\right)_{Tile}R_{Tile}+E_{dm},$$ (6) where $`c_{LAr}=1/e_{LAr}`$ and $`c_{Tile}=1/e_{Tile}`$. From this expression the value of the $`(e/\pi )_{LAr}`$ ratio can be obtained: $$\left(\frac{e}{\pi }\right)_{LAr}=\frac{E_{beam}-E_{Tile}-E_{dm}}{c_{LAr}R_{LAr}},$$ (7) where $$E_{Tile}=c_{Tile}\left(\frac{e}{\pi }\right)_{Tile}R_{Tile}$$ (8) is the energy released in the Tile calorimeter. The $`(e/h)_{LAr}`$ ratio and $$f_{\pi ^o,LAr}=k_{LAr}\mathrm{ln}E_{beam}$$ (9) can be inferred from the energy-dependent $`(e/\pi )_{LAr}`$ ratios: $$\left(\frac{e}{\pi }\right)_{LAr}=\frac{(e/h)_{LAr}}{1+((e/h)_{LAr}-1)f_{\pi ^o,LAr}}.$$ (10) We used the value $`(e/h)_{Tile}=1.3`$ and the following expression for the electromagnetic fraction of a hadronic shower in the Tilecal calorimeter: $$f_{\pi ^o,Tile}=k_{Tile}\mathrm{ln}E_{Tile},$$ (11) with $`k_{Tile}=0.11`$.
For the $`c_{LAr}`$ constant the value of 1.1, obtained previously, was used. The algorithm for finding the $`c_{Tile}`$ and $`c_{dm}`$ constants is considered in the next sections. ## 5 The $`c_{Tile}`$ Constant To determine the $`c_{Tile}`$ constant the following procedure was applied. We selected the events which start to shower only in the hadronic calorimeter. To select these events, the energies deposited in each sampling of the LAr calorimeter and in the midsampler are required to be compatible with those of a beam particle. We used the following expression for the normalized hadronic response: $$\frac{R_{Tile}^c}{E_{beam}}=\frac{c_{Tile}}{(e/h)_{Tile}}\left(1+\left(\left(\frac{e}{h}\right)_{Tile}-1\right)f_{\pi ^0,Tile}\right),$$ (12) where $$R_{Tile}^c=R_{Tile}+\frac{c_{LAr}}{c_{Tile}}R_{LAr}$$ (13) is the Tile calorimeter response corrected for the energy loss in the LAr calorimeter, and $`f_{\pi ^0,Tile}`$ is determined by formula (11). The values of $`R_{Tile}^c`$ are shown in Fig. 2 together with the fitting line. The obtained value of $`c_{Tile}`$ is equal to $`0.145\pm 0.002`$. ## 6 The Energy Loss in the Dead Material Special attention has been devoted to understanding the energy loss in the dead material placed between the active parts of the LAr and Tile detectors. The term which accounts for the energy loss in the dead material between the LAr and Tile calorimeters, $`E_{dm}`$, is taken to be proportional to the geometric mean of the energies released in the last electromagnetic compartment ($`E_{LAr,3}`$) and the first hadronic compartment ($`E_{Tile,1}`$): $$E_{dm}=c_{dm}\sqrt{E_{LAr,3}E_{Tile,1}}.$$ (14) The validity of this approximation has been tested by Monte Carlo simulation and by the study of the correlation between the energy released in the midsampler and the cryostat energy deposition. We used the value $`c_{dm}=0.31`$.
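Equations (7), (11) and (14) combine into a small numerical sketch. Since $`E_{Tile}`$ enters its own $`\pi ^0`$ fraction through Eq. (11), it has to be obtained by a short fixed-point iteration. Only the constants $`c_{LAr}=1.1`$, $`c_{Tile}=0.145`$, $`(e/h)_{Tile}=1.3`$, $`k_{Tile}=0.11`$ and $`c_{dm}=0.31`$ are taken from the text; all event-level numbers below are illustrative and the units of the responses are arbitrary:

```python
import math

def dead_material(E_lar3, E_tile1, c_dm=0.31):
    """Eq. (14): dead-material loss from the two bracketing compartments."""
    return c_dm * math.sqrt(E_lar3 * E_tile1)

def e_over_pi_lar(E_beam, R_lar, R_tile, E_dm,
                  c_lar=1.1, c_tile=0.145, eh_tile=1.3, k_tile=0.11):
    """Eq. (7): extract (e/pi)_LAr from the beam energy and the responses.

    E_Tile depends on its own pi0 fraction (Eq. 11), so it is found by
    a fixed-point iteration starting from the raw Tile response.
    """
    E_tile = c_tile * R_tile                                   # first guess
    for _ in range(50):
        f_pi0 = k_tile * math.log(E_tile)                      # Eq. (11)
        e_pi_tile = eh_tile / (1.0 + (eh_tile - 1.0) * f_pi0)  # Eq. (4)
        E_tile = c_tile * e_pi_tile * R_tile                   # Eq. (8)
    return (E_beam - E_tile - E_dm) / (c_lar * R_lar)
```

With a 100 GeV beam and illustrative responses chosen so that roughly 40 GeV lands in the Tile compartment, the iteration converges in a few steps and the extracted $`(e/\pi )_{LAr}`$ comes out near 1.3, i.e. in the physically expected range.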
This value was obtained on the basis of the results of the Monte Carlo simulation performed by I. Efthymiopoulos. These Monte Carlo (Fluka) results (solid circles) are shown in Fig. 3 together with the values (open circles) obtained by using expression (14). Reasonable agreement is observed. The average energy loss in the dead material is equal to about $`3.7\%`$. The typical distribution of the energy losses in the dead material between the LAr and Tile calorimeters for the real events at a beam energy of 50 GeV, obtained by using Eq. (14), is shown in Fig. 4. ## 7 The $`(e/\pi )_{LAr}`$ and $`(e/h)_{LAr}`$ Ratios Figs. 5 and 6 show the distributions of the $`(e/\pi )_{LAr}`$ ratio derived from formula (7) for different energies. The mean values of these distributions are given in Table 1 and shown in Fig. 7 as a function of the beam energy. The fit of this distribution by expression (10) yields $`(e/h)_{LAr}=1.74\pm 0.04`$ and $`k_{LAr}=0.108\pm 0.004`$ ($`\chi ^2/NDF=0.93`$). For the fixed value of the parameter $`k_{LAr}=0.11`$ the result is $`(e/h)_{LAr}=1.77\pm 0.02`$ ($`\chi ^2/NDF=0.86`$). The quoted errors are the statistical ones obtained from the fit. The systematic error on the $`(e/h)_{LAr}`$ ratio, which is a consequence of the uncertainties in the input constants used in equation (7), is estimated to be $`\pm 0.04`$. Wigmans showed that the $`e/h`$ ratio for non-uranium calorimeters with high-Z absorber material is satisfactorily described by the formula $$\frac{e}{h}=\frac{e/mip}{0.41+0.12n/mip},$$ (15) in which $`e/mip`$ and $`n/mip`$ represent the calorimeter response to e.m. showers and to MeV-type neutrons, respectively. These responses are normalized to the one for minimum ionizing particles. The Monte Carlo calculated $`e/mip`$ and $`n/mip`$ values for the RD3 Pb-LAr electromagnetic calorimeter are $`e/mip=0.78`$ and $`n/mip<0.5`$, leading to $`(e/h)_{LAr}>1.66`$.
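The quoted lower bound follows directly from Eq. (15); a one-line check (the function name is ours):

```python
def eh_wigmans(e_mip, n_mip):
    """Eq. (15): e/h of a non-uranium, high-Z-absorber calorimeter."""
    return e_mip / (0.41 + 0.12 * n_mip)
```

With $`e/mip=0.78`$ the ratio decreases monotonically with $`n/mip`$, so $`n/mip<0.5`$ gives $`e/h>0.78/0.47\approx 1.66`$, the bound quoted above.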
Our measured value of the $`(e/h)_{LAr}`$ ratio agrees with this prediction. An estimate of $`(e/h)_{LAr}=3.7\pm 1.7`$ for this electromagnetic compartment was obtained elsewhere on the basis of data from a combined lead-iron-LAr calorimeter. This value agrees with ours within errors. However, we consider their method incorrect, since the $`(e/\pi )_{LAr}`$ ratios were determined using calibration constants obtained by minimizing the energy resolution, which distorts the true $`(e/\pi )_{LAr}`$ ratios. ## 8 Conclusions A method for extracting the $`e/h`$ ratio, the degree of non-compensation, of the electromagnetic compartment of the ATLAS barrel combined prototype calorimeter is suggested. On the basis of the 1996 combined test beam data we have determined this value, which turned out to be equal to $`1.74\pm 0.04`$ and agrees with the Monte Carlo prediction of Wigmans that $`e/h>1.7`$ for this LAr calorimeter. ## 9 Acknowledgments This work is the result of the efforts of many people from the ATLAS Collaboration. The authors are greatly indebted to the whole Collaboration for the test beam setup and data taking. The authors are grateful to Peter Jenni and Marzio Nessi for fruitful discussions and support of this work. We thank Julian Budagov and Jemal Khubua for their attention and support of this work. We also thank Illias Efthymiopoulos for providing the results of the Monte Carlo simulation, and Irene Vichou and Marina Cobal for constructive advice and fruitful discussions.
no-problem/9911/hep-ph9911487.html
# Supersymmetric Models with Approximate CP ## 1 Introduction and Motivation Only two CP violating parameters have been measured to high accuracy so far: $`\epsilon _K`$ $`=`$ $`(2.280\pm 0.013)\times 10^{-3}e^{i\frac{\pi }{4}},`$ (1) $`Re(\epsilon _K^{\prime }/\epsilon _K)`$ $`=`$ $`(2.11\pm 0.46)\times 10^{-3},`$ (2) where $$\epsilon _K=\frac{\langle (\pi \pi )_{I=0}|H_W|K_L\rangle }{\langle (\pi \pi )_{I=0}|H_W|K_S\rangle },$$ (3) $$Re(\epsilon _K^{\prime }/\epsilon _K)=\frac{1}{6}\left(\left|\frac{\langle \pi ^+\pi ^{-}|H_W|K_L\rangle }{\langle \pi ^+\pi ^{-}|H_W|K_S\rangle }\frac{\langle \pi ^0\pi ^0|H_W|K_S\rangle }{\langle \pi ^0\pi ^0|H_W|K_L\rangle }\right|^2-1\right).$$ (4) Within the Standard Model (SM) the value of $`\epsilon _K`$ can be accounted for if the single CP violating phase, $`\delta _{KM}`$ in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, is of $`O(1)`$. Although the phase is large, the effect is small due to flavor parameters. The theoretical interpretation of $`\epsilon _K^{\prime }/\epsilon _K`$ suffers from large hadronic uncertainties. The SM theoretically preferred range is somewhat lower than the experimental range. Yet, if all the hadronic parameters take values at the extremes of their reasonable ranges, the experimental result can be accommodated.
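Definition (4) is straightforward to evaluate once the two amplitude ratios are known. A small numerical sketch (the function name and the input values are illustrative, not fits):

```python
def re_eps_prime_over_eps(eta_pm, eta_00):
    """Eq. (4): Re(eps'_K/eps_K) from the two amplitude ratios
    eta_+- = <pi+pi-|H_W|K_L>/<pi+pi-|H_W|K_S> and
    eta_00 = <pi0pi0|H_W|K_L>/<pi0pi0|H_W|K_S>.
    """
    return (abs(eta_pm / eta_00) ** 2 - 1.0) / 6.0
```

If the two ratios are equal the result vanishes; since $`((1+x)^2-1)/6\approx x/3`$ for small $`x`$, a fractional difference $`|\eta _{+-}/\eta _{00}|-1\approx 6.3\times 10^{-3}`$ reproduces the measured value (2).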
There are CP violating observables that have not yet been accurately measured, for example: $$a_{\psi K_S}\mathrm{sin}(\mathrm{\Delta }m_Bt)=\frac{\mathrm{\Gamma }(\overline{B}_{phys}^0(t)\to \psi K_S)-\mathrm{\Gamma }(B_{phys}^0(t)\to \psi K_S)}{\mathrm{\Gamma }(\overline{B}_{phys}^0(t)\to \psi K_S)+\mathrm{\Gamma }(B_{phys}^0(t)\to \psi K_S)},$$ (5) $$a_{\pi \nu \overline{\nu }}=\frac{\mathrm{\Gamma }(K_L\to \pi ^0\nu \overline{\nu })}{\mathrm{\Gamma }(K^+\to \pi ^+\nu \overline{\nu })}.$$ (6) The values of these observables are predicted within the SM to be: $`(a_{\psi K_S})_{SM}`$ $`=`$ $`0.4-0.8,`$ (7) $`(a_{\pi \nu \overline{\nu }})_{SM}`$ $`=`$ $`O(0.2).`$ (8) The smallness of the measured parameters (1)-(2), however, suggests that there might be new physics that allows a viable description of CP violating phenomena with approximate CP, that is, with all CP violating phases smaller than $`O(1)`$. In such a framework it is possible that the predictions for other CP violating observables are substantially different from those of the SM. In particular, $`a_{\psi K_S}`$ and $`a_{\pi \nu \overline{\nu }}`$ are both much smaller than one. Below we present a framework where the idea of approximate CP is realized. This framework was introduced in ref. , where two explicit supersymmetric (SUSY) models were given. Here, all the CP violating phases are small. In particular $`\delta _{KM}`$ is small, and $`\epsilon _K`$ is accounted for by new physics, requiring at least one phase larger than or of $`O(10^{-3})`$. We also report the results of a recent reexamination of this framework, in light of the accurate measurement of $`\epsilon _K^{\prime }/\epsilon _K`$. ## 2 The Framework Our high-energy theory is supersymmetric and has CP and abelian horizontal symmetries. At low energies we assume that SUSY is softly broken. Generic values of the SUSY parameters might lead to too large flavor changing neutral currents (FCNC).
In our framework we use the breaking of the horizontal symmetry, supplemented by the mechanism of alignment, to avoid this problem. In order to account for CP violation, we break CP spontaneously in such a way that in the low energy effective theory the CP violating phases are small. With approximate CP, the potential CP problem of SUSY models, that is, too large contributions to electric dipole moments (EDMs), is avoided. Below we describe in more detail the various ingredients of our framework. ### 2.1 Abelian Horizontal Symmetry Models of abelian horizontal symmetries are able to provide a natural explanation for the hierarchy in the quark and lepton flavor parameters. The full high energy theory has an exact horizontal symmetry, $`H`$. The superfields of the supersymmetric standard model (SSM) carry $`H`$-charges. In addition there is usually at least one SM singlet superfield, $`S`$, that also carries an $`H`$-charge. The horizontal symmetry is spontaneously broken when the SM singlet field assumes a vacuum expectation value (vev), $`\langle S\rangle `$. The breaking scale is somewhat lower than a scale $`M`$ where the information about this breaking is communicated to the SSM, presumably by heavy quarks in vector-like representations of the SM (the Froggatt-Nielsen mechanism). The smallness of the ratio between the two scales, $`\lambda \equiv \langle S\rangle /M\ll 1`$, is the source of the smallness and hierarchy of the Yukawa couplings. The parameter $`\lambda `$ is taken to be of the order of the Cabibbo angle, $`O(0.2)`$. Models are defined by the horizontal symmetry, the assigned horizontal charges and the hierarchy of vevs. For most purposes it is sufficient to analyze the effective low energy theory, which is the SSM supplemented with the following selection rules: (i) Terms in the superpotential that carry charge $`n`$ under $`H`$ are suppressed by $`\lambda ^n`$ if $`n\geq 0`$ and vanish otherwise.
(ii) Terms in the Kähler potential that carry charge $`n`$ under $`H`$ are suppressed by $`\lambda ^{|n|}`$. These selection rules allow estimation of the various entries in the quark mass matrices $`M^q`$ and the squark mass-squared matrices $`M_{\stackrel{~}{q}}^2`$ (the coefficients of $`O(1)`$ which appear in each entry are not known). The sizes of the bilinear $`\mu `$ and $`B`$ terms can also be estimated. From the mass matrices, one can further estimate the mixing parameters in the CKM matrix and in the gaugino couplings to quarks and squarks. A convenient way to parameterize SUSY contributions to various processes is by using the $`(\delta _{MN}^q)_{ij}`$ parameters. In the basis where quark masses and gluino couplings are diagonal, the dimensionless $`(\delta _{MN}^q)_{ij}`$ parameters stand for the ratio between $`(M_{\stackrel{~}{q}}^2)_{ij}^{MN}`$, the $`(ij)`$ entry ($`i,j=1,2,3`$) in the squark mass-squared matrix ($`M,N=L,R`$ and $`q=u,d`$), and $`\stackrel{~}{m}^2`$, the average squark mass-squared. If there is no mass degeneracy among the squarks, then these parameters can be related to the SUSY mixing angles. The naive values of the different parameters can be calculated using the horizontal symmetry $`U(1)`$. For example, the naive estimate of the $`(\delta _{LR}^d)_{12}`$ parameter, which is relevant to $`\epsilon _K^{\prime }/\epsilon _K`$, is given by: $$(\delta _{LR}^d)_{12}\equiv \frac{(M_{\stackrel{~}{d}}^2)_{12}^{LR}}{\stackrel{~}{m}^2}\sim \frac{\stackrel{~}{m}M_{12}^d}{\stackrel{~}{m}^2}\sim \frac{m_s|V_{us}|}{\stackrel{~}{m}}\sim \lambda ^6\frac{m_t}{\stackrel{~}{m}}.$$ (9) ### 2.2 Alignment The naive suppression of the supersymmetric flavor changing couplings is not strong enough to solve all the SUSY FCNC problems.
To solve the $`\mathrm{\Delta }m_K`$ problem, one can use the horizontal symmetry and holomorphy to induce a very precise alignment of the quark mass matrices and the squark mass-squared matrices, resulting in a very strong suppression of the relevant mixing angles in the gaugino couplings to quarks and squarks. In order to achieve alignment, some of the entries in $`M^d`$ should be suppressed compared to their naive values. The required suppression is achieved by the use of holomorphy, which causes some of the Yukawa couplings to vanish. In order to achieve this, more than one $`U(1)`$ horizontal symmetry is required. These holomorphic zeroes are lifted when the kinetic terms are canonically normalized, but their values are suppressed by at least a factor of $`\lambda ^2`$ relative to their naive values. Returning to our example, we now find (horizontal symmetry + alignment): $$(\delta _{LR}^d)_{12}\sim \frac{\stackrel{~}{m}M_{12}^d}{\stackrel{~}{m}^2}<\lambda ^2\frac{m_s|V_{us}|}{\stackrel{~}{m}}\sim \lambda ^8\frac{m_t}{\stackrel{~}{m}}.$$ (10) ### 2.3 Spontaneous CP Breaking As stated above, the high energy theory is CP symmetric. CP is spontaneously broken in the following way. There are two SM singlet superfields, $`S_1`$ and $`S_2`$, that carry charges under the same $`U(1)`$ horizontal symmetry. Both of them receive vevs, with $`\langle S_2\rangle \ll \langle S_1\rangle `$. While one of the vevs can be chosen to be real, the second is in general complex, with a phase of $`O(1)`$. The hierarchy between the vevs and the relative $`O(1)`$ phase are naturally induced in this framework. This complex vev feeds down to all the couplings. In the low energy effective theory, there are many independent CP violating phases, in particular in the mixing matrices of the gaugino couplings to fermions and sfermions. Furthermore, the ratio of vevs enables all CP violating phases to be suppressed, giving approximate CP. The suppression of phases in the effective theory is by even powers of the breaking parameter.
Returning to our example, we find in this case (horizontal symmetry + alignment + approximate CP): $$Im(\delta _{LR}^d)_{12}<\lambda ^4\frac{m_s|V_{us}|}{\stackrel{~}{m}}\sim \lambda ^{10}\frac{m_t}{\stackrel{~}{m}}.$$ (11) ## 3 Models and Predictions In ref. two representative models of approximate CP were constructed. One of the models (model II) has the smallest viable CP breaking parameter, of $`O(0.001)`$, and the other (model I) has an intermediate value, of $`O(0.04)`$. Regarding FCNC processes, we find in our models: (i) The contributions to $`\mathrm{\Delta }m_D`$ saturate the experimental upper bound in both models. This is a generic feature of models of alignment, related to the fact that in these models the Cabibbo mixing ($`|V_{us}|\sim \lambda `$) comes from the up sector. (ii) The contributions to $`\mathrm{\Delta }m_B`$ are very small. (iii) The contributions to $`\mathrm{\Delta }m_K`$ are of $`O(10\%)`$ in model I and saturate the experimental value in model II. This is in contrast to all previous models of alignment where, to satisfy the $`\epsilon _K`$ constraint, SUSY contributions to $`\mathrm{\Delta }m_K`$ were negligibly small. (iv) The contributions to other FCNC processes, such as $`\mathrm{\Delta }m_{B_s}`$ and $`b\to s\gamma `$, are very small. As concerns the rare $`K^+\to \pi ^+\nu \overline{\nu }`$ decay, in both our models the SUSY contributions are of $`O(10\%)`$. While both the SM and the SUSY amplitudes are real to a good approximation, so that there is maximal interference between the two, the relative sign is unknown, so that the rate could be either enhanced or suppressed compared to the SM. In both models $`\epsilon _K`$ is accounted for by SUSY gluino-mediated diagrams. Our results concerning CP violation are summarized in Table 1, where $`a_{\psi K_S}`$ and $`a_{\pi \nu \overline{\nu }}`$ are defined above, and $`d_N`$ is the EDM of the neutron (given in units of $`10^{-23}e\,cm`$, so that the present experimental bound is $`d_N<\lambda ^2`$).
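The suppression chain of Eqs. (9)-(11) can be put into illustrative numbers; the inputs below are representative choices of ours ($`\lambda =0.2`$, $`m_s=0.1`$ GeV, $`|V_{us}|=0.22`$, $`\stackrel{~}{m}=500`$ GeV), not fitted values:

```python
# Numerical illustration of the suppression chain in Eqs. (9)-(11).
lam = 0.2                             # horizontal-symmetry breaking parameter
m_s, V_us, m_sq = 0.1, 0.22, 500.0    # GeV: m_s, |V_us|, average squark mass

naive = m_s * V_us / m_sq             # Eq. (9): naive (delta^d_LR)_12
aligned = lam**2 * naive              # Eq. (10): extra lambda^2 from alignment
small_phase = lam**2 * aligned        # Eq. (11): extra lambda^2 from approximate CP

print(f"naive={naive:.2e}  aligned={aligned:.2e}  Im-part={small_phase:.2e}")
```

With these inputs the three stages come out at roughly $`4.4\times 10^{-5}`$, $`1.8\times 10^{-6}`$ and $`7\times 10^{-8}`$; the last number lies an order of magnitude below the $`7\times 10^{-7}`$ requirement derived in Section 4, which is the source of the tension discussed there.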
## 4 $`\epsilon _K^{\prime }/\epsilon _K`$ With approximate CP, $`\delta _{KM}`$ is small and SM contributions cannot account for the experimental measurement of $`\epsilon _K^{\prime }/\epsilon _K`$. New physics is required. (If the relevant hadronic matrix element is much larger than its value in the vacuum insertion approximation, as suggested by a recent lattice calculation, then the SM contribution with a small value of $`\delta _{KM}`$ can account for $`\epsilon _K^{\prime }/\epsilon _K`$.) For SUSY to account for $`\epsilon _K^{\prime }/\epsilon _K`$, at least one of the following conditions should be satisfied: $`Im[(\delta _{LL}^d)_{12}]\gtrsim \lambda \left(\frac{\stackrel{~}{m}}{500\,GeV}\right)^2`$, $`Im[(\delta _{LR}^d)_{12}]\gtrsim \lambda ^7\left(\frac{\stackrel{~}{m}}{500\,GeV}\right)`$, (12) $`Im[(\delta _{LR}^d)_{21}]\gtrsim \lambda ^7\left(\frac{\stackrel{~}{m}}{500\,GeV}\right)`$, $`Im[(\delta _{LR}^u)_{13}(\delta _{LR}^u)_{23}^{*}]\gtrsim \lambda ^2`$, $`Im[V_{td}(\delta _{LR}^u)_{23}^{*}]\gtrsim \lambda ^3\left(\frac{M_2}{m_W}\right)`$, (13) $`Im[V_{ts}^{*}(\delta _{LR}^u)_{13}]\gtrsim \lambda ^3\left(\frac{M_2}{m_W}\right)`$. In our framework only the conditions involving $`(\delta _{LR}^d)_{12}`$ and $`(\delta _{LR}^d)_{21}`$ can be met. Checking the lower bounds on these parameters for extreme values of the parameters (for details see ref. ), we find: $$Im(\delta _{LR}^d)_{12}>7\times 10^{-7},$$ (14) that is, $`O(\lambda ^9)`$, or even $`O(\lambda ^{10})`$ if $`\lambda \approx 0.24`$. A similar bound applies to $`Im[(\delta _{LR}^d)_{21}]`$. In models in which the flavor problems are solved by alignment, but the CP problems are solved by approximate CP, eq. (11) holds. This is consistent with the experimental constraint of eq.
(14) only if all of the following conditions are simultaneously satisfied: (i) The suppression of the relevant CP violating phases is ‘minimal’, that is, $`O(\lambda ^2)`$. (ii) The alignment of the first two down squark generations is ‘minimal’, that is, $`O(\lambda ^2)`$. (iii) The mass scale of the SUSY particles is low, $`\stackrel{~}{m}\sim 150`$ GeV. (iv) The hadronic matrix element is larger than what hadronic models suggest. (v) The mass of the strange quark is at the lower side of the theoretically preferred range. (vi) The value of $`\epsilon _K^{\prime }/\epsilon _K`$ is at the lower side of the experimentally allowed range. We conclude that models that combine alignment and approximate CP are disfavored by the measurement of $`\epsilon _K^{\prime }/\epsilon _K`$. More than that, the explicit models (models I and II) described above are ruled out by this measurement. We do note, however, that models of abelian horizontal symmetries and approximate CP in which the flavor problems are solved by a mechanism different from alignment can account for $`\epsilon _K^{\prime }/\epsilon _K`$. ## 5 Conclusions In the near future, we expect first measurements of various CP asymmetries in $`B`$ decays, such as $`B\to \psi K_S`$ or $`B^\pm \to \pi ^0K^\pm `$. If these asymmetries are measured to be of order one, it will support the SM picture, in which the CP violation that has been measured in neutral $`K`$ decays is small because it is screened by small mixing angles, while the idea that CP violation is small because all CP violating phases are small will be excluded. It is interesting, however, that various specific models that realize the latter idea, such as those discussed in this work, can already be excluded by the measurement of a tiny CP violating effect, $`\epsilon _K^{\prime }\approx 5\times 10^{-6}`$. Acknowledgments I thank Antonio Masiero, Yossi Nir and Luca Silvestrini for enjoyable collaborations on the topics presented here.
# BU-CCS-991001 A Particulate Basis for an Immiscible Lattice-Gas Model ## 1 Introduction In 1986 it was discovered that certain mass- and momentum-conserving lattice-gas automata gave rise to the isotropic Navier-Stokes equations in the hydrodynamic limit . In 1988, Rothman and Keller (RK) extended this discovery by introducing a hydrodynamic lattice-gas model of immiscible fluids . Their model, and lattice Boltzmann variants thereof, have become an important tool for simulating the hydrodynamics of multiphase flow . In both the original and RK lattice-gas models, the dynamics can be decomposed into two steps: in the first, the particles propagate along the lattice vectors to new sites; in the second, the particles entering each site collide by redistributing mass and momentum. In the RK model, the masses of the various immiscible fluid species and the total momentum are conserved locally, but the choice of collision outcome at each site depends on the water-minus-oil order parameter, or “color,” of the neighboring sites. See Fig. 1 for one set of possible collision outcomes that might be allowed by the conservation laws. The flux of the color is determined for each such outgoing state, and the local color gradient, or color field, is determined by examining the neighboring sites. The negative of the dot product of this color flux and color field is then a measure of the propensity of outgoing particles to move to sites dominated by particles of their own type, and was called the color work by RK; their prescription was to choose the outcome that minimizes this work, in order to create cohesion and interfacial tension. (In case of a tie, the outcome is chosen randomly from among the states with minimal color work.) In later work, Chen, Chen, Doolen and Lee , and Chan and Liang noted that this minimization of color work is really just the low-temperature limit of a Boltzmann sampling procedure.
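As a concrete illustration, the RK collision rule described above can be written in a few lines. This is a minimal two-dimensional sketch with invented names; the candidate states, the lattice vectors, and the way the color field is obtained from the neighbors are assumptions of the example, not details taken from the original papers.

```python
import random

def color_flux(state, lattice_vectors):
    """Color flux of one outgoing state: sum_i (water_i - oil_i) * c_i."""
    fx = fy = 0.0
    for (n_water, n_oil), (cx, cy) in zip(state, lattice_vectors):
        q = n_water - n_oil          # "color" (order parameter) in this channel
        fx += q * cx
        fy += q * cy
    return fx, fy

def rk_collision(candidates, color_field, lattice_vectors):
    """Pick the candidate outgoing state with minimal color work
    (work = -flux . field); ties are broken randomly, as in RK."""
    def work(state):
        fx, fy = color_flux(state, lattice_vectors)
        return -(fx * color_field[0] + fy * color_field[1])
    w_min = min(work(s) for s in candidates)
    return random.choice([s for s in candidates if work(s) == w_min])
```

For instance, with the field pointing toward a water-rich region, the rule selects the outgoing state whose water particles move up the field, which is precisely the cohesion mechanism described in the text.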
This paper proposes a microscopic interpretation of the RK model. In the limit where the ratio of the mean-free path to the interaction range is small, we show that an arbitrary interaction potential can be expanded in terms of dot products of fluxes and fields, of which the RK model is merely the first term. This observation accounts for much of the success and utility of the RK model in describing the hydrodynamics of multiphase flow. ## 2 Hydrodynamic Lattice-Gas Automata In lattice-gas models of hydrodynamics, incoming particles collide at each site $`𝐱`$ of a lattice $``$, in a manner to be discussed at length shortly, and then propagate to one of $`B`$ neighboring sites $`𝐱+𝐜_i`$, where $`i\{1,\mathrm{},B\}`$. (Note that some of the $`𝐜_i`$’s may be zero in order to accommodate “rest particles” in the model.) We suppose that the occupancy of each of these $`B`$ channels can be represented by $`L`$ bits $`n_i^{\mathrm{}}(𝐱)\{0,1\}`$, where $`i\{1,\mathrm{},B\}`$ and $`\mathrm{}\{1,\mathrm{},L\}`$. Throughout the remainder of this paper, we shall illustrate various concepts by applying them to three concrete examples: * Example 1: In a lattice gas for a single-species Navier-Stokes fluid , we take one bit ($`L=1`$) in each direction that represents the presence or absence of a particle moving in that direction. * Example 2: In a lattice gas for two immiscible fluids , on the other hand, we might take two bits per direction ($`L=2`$) so that there can be one bit for water particles $`n_i^W(𝐱)`$ and one bit for oil particles $`n_i^O(𝐱)`$ in each direction. * Example 3: We consider the model of Chen, Chen, Doolen and Lee , in which there is one bit in each direction ($`L=1`$), one “rest” particle direction $`i=R`$ such that $`𝐜_R=0`$, and only the rest particles feel an interaction potential. 
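The propagation (streaming) step described above is straightforward to write down explicitly. The sketch below is illustrative only: it stores the channel bits of each site in a dictionary and assumes periodic boundaries, neither of which is specified in the text.

```python
def propagate(occupancy, lattice_vectors, nx, ny):
    """Streaming step: the particle in channel i at site (x, y) moves to
    the neighboring site (x, y) + c_i, with periodic wrap-around."""
    B = len(lattice_vectors)
    streamed = {site: [0] * B for site in occupancy}
    for (x, y), channels in occupancy.items():
        for i, (cx, cy) in enumerate(lattice_vectors):
            if channels[i]:
                streamed[((x + cx) % nx, (y + cy) % ny)][i] = 1
    return streamed
```

A rest-particle channel (as in Example 3) is simply a lattice vector $`𝐜_R=0`$, which leaves its bit in place during streaming.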
We suppose that there is a charge-like attribute $`q_i(𝐱)`$ associated with the bits in direction $`i`$ at site $`𝐱`$, and we specialize to potential energies of the form $$V=\frac{1}{2}\underset{𝐱,𝐲}{}\underset{i,j}{}q_i(𝐱)q_j(𝐱+𝐲)\varphi (|𝐲|),$$ (1) where the factor of $`1/2`$ prevents double counting. In Example 1, the charge-like attribute might be equal to the channel occupancy $`n_i(𝐱)`$. In Example 2, on the other hand, the charge-like attribute might be the order parameter $$q_i(𝐱)=n_i^W(𝐱)n_i^O(𝐱)$$ (2) which measures the excess of water over oil. In Example 3, we take $`q_i(𝐱)=n_i(𝐱)\delta _{iR}`$, since only the rest particles have a charge-like attribute. In what follows, we shall also have occasion to refer to the total (summed over directions) charge-like attribute of a site $`q(𝐱)_iq_i(𝐱)`$. As noted above, any fluid model has some set of quantities that must be conserved in collisions. In Examples 1 and 3, we should demand conservation of the mass $`_in_i(𝐱)`$ and the momentum $`_in_i(𝐱)𝐜_i`$. In Example 2, on the other hand, collisions must conserve water mass $`_in_i^W(𝐱)`$, oil mass $`_in_i^O(𝐱)`$ and total momentum $`_i[n_i^W(𝐱)+n_i^O(𝐱)]𝐜_i`$. Alternatively, we can say that all three examples conserve total mass and the total momentum, while Example 2 also conserves the charge-like attribute, $`q(𝐱)`$. These conserved quantities naturally partition the set of all states of a given site(s) into equivalence classes of states with the same values for all of the conserved quantities. For example, Fig. 1, relevant to Example 2 above, illustrates the equivalence class comprised of the six possible collisional outcomes that may result when one water particle and one oil particle enter a single site on a two-dimensional triangular lattice from opposite directions (and, hence, with zero total momentum). ## 3 Collisional Energetics We denote the postcollision charge-like attribute with velocity $`𝐜_i`$ at site $`𝐱`$ by $`q_i^{}(𝐱)`$. 
Upon subsequent propagation, the charge $`q_i^{}(𝐱)`$ will be at position $`𝐱+𝐜_i`$, and the charge $`q_j^{}(𝐱+𝐲)`$ will be at position $`𝐱+𝐲+𝐜_j`$. This is illustrated in Fig. 2. The change in the potential energy due to both collision and propagation is then given by $`\mathrm{\Delta }V`$ $`=`$ $`{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i^{}(𝐱)q_j^{}(𝐱+𝐲)\varphi \left(\left|𝐲+𝐜_j𝐜_i\right|\right){\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i(𝐱)q_j(𝐱+𝐲)\varphi \left(\left|𝐲\right|\right)`$ (3) $`=`$ $`{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i^{}(𝐱)q_j^{}(𝐱+𝐲)\left[\varphi \left(\left|𝐲+𝐜_j𝐜_i\right|\right)\varphi \left(\left|𝐲\right|\right)\right]`$ $`+{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}\left[q_i^{}(𝐱)q_j^{}(𝐱+𝐲)q_i(𝐱)q_j(𝐱+𝐲)\right]\varphi \left(\left|𝐲\right|\right)`$ $`=`$ $`\mathrm{\Delta }V_c+\mathrm{\Delta }V_n,`$ where we have defined the contribution to $`\mathrm{\Delta }V`$ due to the movement of the interacting particles, $$\mathrm{\Delta }V_c\underset{𝐱,𝐲}{}\underset{i,j}{}q_i^{}(𝐱)q_j^{}(𝐱+𝐲)\left[\varphi \left(\left|𝐲+𝐜_j𝐜_i\right|\right)\varphi \left(\left|𝐲\right|\right)\right],$$ (4) and that due to nonconservation of the charge-like attribute, $$\mathrm{\Delta }V_n\underset{𝐱,𝐲}{}\left[q^{}(𝐱)q^{}(𝐱+𝐲)q(𝐱)q(𝐱+𝐲)\right]\varphi \left(\left|𝐲\right|\right).$$ (5) Note that $`\mathrm{\Delta }V_c`$ vanishes for systems in which the interacting particles do not move, including our Example 3. Likewise, note that $`\mathrm{\Delta }V_n`$ vanishes for systems with a conserved charge-like attribute, including our Examples 1 and 2. 
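Since $`\mathrm{\Delta }V`$ in Eq. (3) is just the potential of Eq. (1) evaluated after the outgoing charges have streamed, minus its value beforehand, it can be checked directly on a toy configuration. The Gaussian kernel and the point charges below are illustrative choices, not taken from the paper.

```python
import math

def potential(charges, phi):
    """Eq. (1): V = (1/2) sum_{a,b} q_a q_b phi(|r_a - r_b|).
    `charges` is a list of ((x, y), q) pairs; the constant a = b
    self-energy cancels in any difference of potentials."""
    V = 0.0
    for (xa, ya), qa in charges:
        for (xb, yb), qb in charges:
            V += 0.5 * qa * qb * phi(math.hypot(xa - xb, ya - yb))
    return V

def delta_V(incoming, outgoing, phi):
    """Exact energy change between incoming and streamed-outgoing charges."""
    return potential(outgoing, phi) - potential(incoming, phi)
```

For two unit charges interacting through an illustrative kernel $`\varphi (r)=e^{r^2}`$, moving one charge from (0,0) to (1,0) while the other sits at (3,0) changes the energy by exactly the difference of the pair terms.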
## 4 The Flux-Field Decomposition If the potential kernel $`\varphi (y)`$ is analytic, we may expand $`\mathrm{\Delta }V_c`$ in a Taylor series in the ratio of the characteristic lattice spacing<sup>1</sup><sup>1</sup>1Since lattice gases are usually dense fluids for which particles undergo a collision at every step, $`c`$ can also be thought of as a mean-free path. $`c`$ to the characteristic interaction range $`y`$. This is done in Appendix A where it is shown that every term of this series can be expressed as the complete inner product of a tensor flux with a tensor field. More specifically, we find that $$\mathrm{\Delta }V_c=\underset{𝐱}{}\underset{n=1}{\overset{\mathrm{}}{}}\underset{r=n/2}{\overset{n}{}}𝒥_r^{}(𝐱)\stackrel{r}{}_{n,r}(𝐱),$$ (6) where we have defined the (local) $`r`$th-rank tensor flux of outgoing particles, $$𝒥_r^{}(𝐱)\underset{i}{}q_i^{}(𝐱)\left(\stackrel{r}{}𝐜_i\right),$$ (7) and where the (generally nonlocal) $`r`$th-rank tensor fields are defined in terms of $`(nr)`$th-rank outgoing tensor fluxes of neighboring sites, $`_{n,r}^{}(𝐱)`$ $``$ $`{\displaystyle \frac{(1)^r}{(1+\delta _{n,2r})n!}}\left({\displaystyle \genfrac{}{}{0pt}{}{n}{r}}\right){\displaystyle \underset{𝐲}{}}𝒦_n(𝐲){\displaystyle \stackrel{nr}{}}𝒥_{nr}^{}(𝐱+𝐲).`$ (8) In all the above expressions, primes are used to denote dependence on postcollision values, $`^r`$ denotes an $`r`$-fold outer product, $`^r`$ denotes an $`r`$-fold inner product, and we have defined the $`n`$th-rank completely symmetric kernel $$𝒦_n(𝐲)\underset{m=n/2}{\overset{n}{}}\frac{\varphi _m(y)}{(nm)!}\text{per}\left[\left(\stackrel{2mn}{}𝐲\right)\left(\stackrel{nm}{}\mathrm{𝟏}\right)\right],$$ (9) where “per” indicates a summation over all distinct permutations of indices, and where we have defined the following functions related to the derivatives of $`\varphi (y)`$: $$\varphi _m(y)\left(\frac{1}{y}\frac{d}{dy}\right)^m\varphi (y).$$ (10) The first few such kernels are thus given by 
$`\left[𝒦_1(𝐲)\right]_i`$ $`=`$ $`\varphi _1(y)y_i`$ (11) $`\left[𝒦_2(𝐲)\right]_{ij}`$ $`=`$ $`\varphi _1(y)\delta _{ij}+\varphi _2(y)y_iy_j`$ (12) $`\left[𝒦_3(𝐲)\right]_{ijk}`$ $`=`$ $`\varphi _2(y)\left(y_i\delta _{jk}+y_j\delta _{ik}+y_k\delta _{ij}\right)+\varphi _3(y)y_iy_jy_k`$ (13) $`\left[𝒦_4(𝐲)\right]_{ijkl}`$ $`=`$ $`\varphi _2(y)\left(\delta _{ij}\delta _{kl}+\delta _{ik}\delta _{jl}+\delta _{il}\delta _{jk}\right)/2+`$ (14) $`\varphi _3(y)\left(y_iy_j\delta _{kl}+y_iy_k\delta _{jl}+y_iy_l\delta _{jk}+y_jy_k\delta _{il}+y_jy_l\delta _{ik}+y_ky_l\delta _{ij}\right)+`$ $`\varphi _4(y)y_iy_jy_ky_l.`$ For completeness, we note that, since the zero-rank flux is just the charge, the portion of $`\mathrm{\Delta }V`$ arising from nonconservation of the charge-like attribute may also be written in terms of these fluxes, $$\mathrm{\Delta }V_n=\frac{1}{2}\underset{𝐱,𝐲}{}\left[𝒥_0^{}(𝐱)𝒥_0^{}(𝐱+𝐲)𝒥_0(𝐱)𝒥_0(𝐱+𝐲)\right]\varphi _0(y)$$ (15) as follows immediately from Eqs. (5) and (7). Because the fields themselves depend on the post-collision fluxes at neighboring sites, it is generally necessary to include $`\mathrm{\Delta }V_n`$ in the collisional energetics. There are, however, some very useful exceptions to this rule. Some of the fluxes $`𝒥_r(𝐱)`$ may be conserved, in which case $`𝒥_r^{}(𝐱)=𝒥_r(𝐱)`$, since their values for incoming and outgoing states must be identical. If all of the fluxes that go into the calculation of a field $`_{n,r}^{}(𝐱)`$ are conserved quantities, then we can write $`_{n,r}^{}(𝐱)=_{n,r}(𝐱)`$ as well. For example, let us return to consider our Example 2. At zeroth order, Eq. (5) indicates that $`\mathrm{\Delta }V_n`$ vanishes because $`𝒥_0`$ is conserved. At first order ($`n=1`$) the field $`_{1,1}^{}(𝐱)`$ depends only on the zero-order flux – namely the total order parameter (water minus oil) – at a given site, and this is conserved. 
Hence, the first-order energy change can be computed from the order-parameter flux of each outgoing state and the field based on the incoming states of the neighbors $`𝐱+𝐲`$. If we restrict this set of neighbors to immediately adjacent sites, and look no further than lowest order in the Taylor expansion, we see that this reduces precisely to the RK model. The RK model is thus an approximation that is valid to first order in $`c/y`$. While this fact may explain much of the utility of the RK model, in the following section we shall see that it may also indicate that the scaling limit of the RK model is more subtle than previously suspected. In any case, this also means that an alternative model that uses the exact potential energy, $`\mathrm{\Delta }V`$ of Eq. (3), to sample post-collision states at each site will certainly be no worse than the RK model, which is known to capture much of the phenomenology of immiscible fluid dynamics. Note that if both the flux and field in one of the terms of the Taylor expansion can be evaluated based on incoming quantities, then that term of $`\mathrm{\Delta }V`$ is the same for all outgoing states, and therefore does not discriminate between them, so that it is necessary to go to higher order (at least until one encounters the first nonconserved fluxes) to obtain any interaction at all. We saw this with the vanishing of $`\mathrm{\Delta }V_n`$ for systems with a conserved charge-like attribute, such as Example 2 above. To see this happen at higher order, let us consider the simpler Example 1, which, perhaps for this very reason, has been less studied. Both fluxes $`𝒥_0`$ and $`𝒥_1`$ are conserved, since they are the mass and momentum, respectively. It follows that the entire $`n=1`$ term of $`\mathrm{\Delta }V_c`$ is the same for all outgoing states.
Nontrivial interaction between particles therefore does not even begin until the $`n=2`$ term of the expansion, since $`𝒥_2`$ (which is, in fact, related to the pressure tensor) is not a conserved quantity. ## 5 The Hydrodynamic Limit of the RK Model In order to obtain useful quantitative information from a hydrodynamic lattice gas, one must be careful to work in the correct asymptotic regime. This usually involves scaling the various dimensionless parameters of the problem with the Knudsen number $`\text{Kn}\lambda /L`$, where $`\lambda `$ is the mean-free path and $`L`$ is the characteristic size. In incompressible Navier-Stokes flow, for example, one desires that the Mach number $`\text{M}U/C`$, where $`U`$ is the characteristic hydrodynamic velocity and $`C`$ is the sound speed, scale with the Knudsen number $`\text{M}𝒪(\text{Kn})`$. Since the viscosity $`\nu `$ goes as the product of mean-free path and sound speed, $`\lambda C`$, this implies that the Reynolds number Re scales as $`\text{M}/\text{Kn}𝒪(1)`$<sup>2</sup><sup>2</sup>2Note that this does not mean that the numerical value of the Reynolds number must be near unity. Rather it means that Re approaches a constant value in the scaling limit.. We also demand that the Strouhal number $`\text{St}U\tau /L`$ and the fractional density fluctuation $`\delta \rho /\rho _0`$, where $`\tau `$ is the mean-free time and $`\rho _0`$ is the average background density, both scale as $`𝒪(\text{Kn}^2)`$. This limit is well known to reduce the compressible Navier-Stokes equations to their incompressible counterparts. Since, for a dense LGA, the mean-free path $`\lambda `$ goes as the grid size $`c`$, this means that in order to approach the continuum limit, every time one doubles the size of the lattice (halves Kn), one must quadruple the number of time steps (since St is quartered), and verify that the fractional density fluctuation (a measured “output” quantity in a lattice-gas simulation) is also quartered. 
Only when this scaling is verified can one be sure that one is working in the correct asymptotic regime. The presence of an interparticle potential adds an additional length scale – the range $`y`$ of the force – and therefore a new dimensionless parameter $`\lambda /y`$, or equivalently $`c/y`$. To derive the flux-field decomposition, we demanded that this ratio be small, but of order unity; that is, we did not scale this parameter with the Knudsen number. Operationally, this means that every time the size of the lattice is doubled, the range of the force in lattice units should be kept the same. If it is ten lattice units at one resolution, it should be ten lattice units at all resolutions<sup>3</sup><sup>3</sup>3In some sense, this means that lattice artifacts never completely disappear, as the set of sites within a ten-lattice-unit radius of a given site are not distributed uniformly or isotropically. If this is deemed problematic, it may be possible to scale $`c/y`$ with $`\text{Kn}^{1/2}`$, or in some other way such that both $`c/y`$ and $`y/L`$ vanish in the scaling limit; this has the attraction of completely removing such problems in the scaling limit, but further exploration of such considerations lies outside the scope of this paper.. While Eq. (4) for $`\mathrm{\Delta }V_c`$ is exact, the flux-field decomposition of Eq. (6) is usually used to approximate $`\mathrm{\Delta }V_c`$ only to some specified order in $`c/y`$. Having determined that the RK model is just such an approximation, we are now in a position to examine how the error incurred by this approximation scales in the continuum limit. Let us compare the RK model to a variant of our model, in which we adopt the strategy of permitting the number of terms $`n_{\mathrm{max}}`$ that we retain in the Taylor expansion for $`\mathrm{\Delta }V_c`$ to increase in the hydrodynamic limit. We shall justify this strategy a posteriori. 
Specifically, let us take $`\xi `$ more terms each time the lattice size $`N`$ is doubled. If we take the system size in physical units $`L`$ to be fixed, then the lattice spacing is $`cL/N`$. For a dense lattice gas, the mean-free path is of order $`c`$, so the Knudsen number Kn scales as $`c/L`$. It follows that $$n_{\mathrm{max}}=\xi \mathrm{log}_2\left(\frac{N}{N_0}\right)=n_0\xi \mathrm{log}_2\left(\text{Kn}\right),$$ (16) where $`N_0`$ and $`n_0`$ are constants. If we then take the interaction range in lattice units $`y/c`$ to be fixed, then the error term $`\epsilon `$ in the Taylor expansion goes as $$\epsilon \left(\frac{c}{y}\right)^{n_{\mathrm{max}}}\left(\frac{c}{y}\right)^{n_0}\left(\frac{y}{c}\right)^{\xi \mathrm{log}_2(\text{Kn})}\left(\frac{c}{y}\right)^{n_0}\text{Kn}^{\xi \mathrm{log}_2(\frac{y}{c})}.$$ (17) For $`\xi =1`$ (adding one more term to the series at each lattice refinement), it follows that by keeping the lattice spacing less than half of the characteristic interaction range, the error will scale as the Knudsen number; likewise, by making the lattice spacing less than a fourth of the characteristic interaction range, the error will scale as the square of the Knudsen number; and so on. One can also increase $`\xi `$ to raise the power of the Knudsen number to which the error scales. Thus, for small but finite values of $`c/y`$ (order unity in the scaling limit), the error can be made to scale subdominantly to terms that are usually neglected in a Chapman-Enskog expansion anyway, providing the a posteriori justification promised above. A potential problem with the RK model is that it does not refine the definition of the energy in this way at each level of the scaling limit ($`\xi =0`$), and so the corrections that it neglects may indeed matter in that limit. 
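The scaling in Eq. (17) is easy to check numerically. In the sketch below (all parameter values are illustrative) we keep $`y/c=2`$ fixed and add one term per lattice doubling ($`\xi =1`$, $`n_0=0`$), so the truncation error should track Kn exactly; with $`y/c=4`$ it should track Kn squared.

```python
import math

def truncation_error(kn, xi=1, n0=0, y_over_c=2):
    """Eq. (17): error ~ (c/y)**n_max with n_max = n0 - xi*log2(Kn)."""
    n_max = n0 - xi * math.log2(kn)
    return (1.0 / y_over_c) ** n_max

for kn in (1 / 8, 1 / 16, 1 / 32):
    print(kn, truncation_error(kn) / kn)                  # ratio stays near 1.0
    print(kn, truncation_error(kn, y_over_c=4) / kn**2)   # ratio stays near 1.0
```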
Of course, in most situations, one will choose a particular fixed value of $`n_{\mathrm{max}}`$; in fact, all studies of the RK model to date have used $`n_{\mathrm{max}}=1`$, simply because it has not been previously recognized that the RK model is only the first-order approximation to a more exact model. The RK model is known to exhibit certain anomalous phenomena; for example, spurious currents are known to develop near interfaces, even if there is no bulk flow, and the surface tension is known to be slightly anisotropic . It is possible that such anomalies would be eliminated by the more exact treatment of the scaling limit advocated here, but a numerical test of this conjecture is outside the scope of the present work. Such anomalies may, of course, be regarded as tolerable as long as one appreciates that one is working only to lowest order of an asymptotic series. Indeed, because the series is asymptotic, there is no point in being overzealous about the value of $`n_{\mathrm{max}}`$, since the series may begin to diverge at some point. There is at least one situation, however, in which the observation that it is necessary to let $`n_{\mathrm{max}}`$ scale with Knudsen number may be critically important, and that is when one is studying the scaling of (possibly divergent) quantities with system size. For example, Rothman and Flekkøy recently studied the scaling properties of fluctuating interfaces using the RK model, measuring, among other things, the saturated width of the interface as a function of system size. Superimposed upon the usual power-law behavior of the saturated width, they found a logarithmic correction that resisted theoretical explanation. We suggest that this anomaly may be due to scaling the system size with fixed $`n_{\mathrm{max}}`$ (since the RK model effectively fixes $`n_{\mathrm{max}}`$ at unity), rather than letting $`n_{\mathrm{max}}`$ grow logarithmically with system size (linearly with the number of lattice doublings) as advocated here.
Again, it would be interesting to verify this conjecture by redoing the numerical experiment using our model, but that is outside the scope of the present work. The main point of this section has been to demonstrate that the flux-field decomposition allows one to understand in what sense the RK model is an approximation to an exact interaction potential, why it may be used for trial-state sampling, how it might be corrected at higher order in $`c/y`$, and why its scaling limit may be more subtle than previously suspected. We emphasize that the strategy of scaling $`n_{\mathrm{max}}`$ with Kn was invoked only to facilitate this discussion of the scaling limit, and we are certainly not suggesting that it be used in practical simulations. If one is planning to work at any order of $`c/y`$ at which $`\mathrm{\Delta }V_c`$ involves nonlocal interactions, it makes much more sense to use Eq. (4) directly than to try to deal with higher-order terms of the flux-field decomposition. For Monte Carlo sampling of the outgoing states, it is possible that the lowest-order local term of the flux-field decomposition could be used for sampling, while the exact expression for $`\mathrm{\Delta }V_c`$ could be used for the acceptance criterion. ## 6 Conclusions We have shown that the lattice-gas model developed by Rothman and Keller for immiscible-fluid hydrodynamics can be derived from an underlying model of particle interactions. From the enhanced understanding provided by our observation, we elucidated the nature of the hydrodynamic limit of the Rothman-Keller model, demonstrating that it is more subtle than previously suspected. Though practical simulations of the particulate model are likely to be significantly more compute-intensive than the original version of the Rothman-Keller model, this work is offered in the spirit that it is always useful to know the exact model corresponding to any given approximation.
We hope that this work helps to provide some theoretical basis for these models’ success, and perhaps for their ultimate improvement. ## Acknowledgements BMB would like to thank the International Centre for Theoretical Physics (ICTP) for its hospitality during a portion of this work. BMB was supported in part by the United States Air Force Office of Scientific Research under grant number F49620-95-1-0285. The collaboration of BMB and PVC was facilitated by NATO grant number CRG 950356. ## Appendix A Derivation of Flux-Field Decomposition To derive Eq. (6) for $`\mathrm{\Delta }V_c`$, we begin with the Taylor expansion of a function of the magnitude of a displaced vector, $$\varphi (|𝐲+𝐜|)=\underset{n=0}{\overset{\mathrm{}}{}}\frac{ϵ^n}{n!}\underset{m=n/2}{\overset{n}{}}\frac{n!\varphi _m(y)}{2^{nm}(2mn)!(nm)!}\left(𝐲𝐜\right)^{2mn}\left(𝐜𝐜\right)^{nm},$$ (18) where $`ϵ`$ has been introduced as an expansion parameter to keep track of the order in $`c/y`$ (it is numerically equal to one), $`y|𝐲|`$, and we have defined the following functions related to the derivatives of $`\varphi (y)`$: $$\varphi _m(y)\left(\frac{1}{y}\frac{d}{dy}\right)^m\varphi (y).$$ (19) We let $`𝐜𝐜_j𝐜_i`$ and use the binomial theorem to write $`\left(𝐲𝐜\right)^{2mn}`$ $`=`$ $`\left(𝐲𝐜_j𝐲𝐜_i\right)^{2mn}`$ (20) $`=`$ $`{\displaystyle \underset{l=0}{\overset{2mn}{}}}{\displaystyle \frac{(2mn)!}{l!(2mnl)!}}\left(𝐲𝐜_j\right)^{2mnl}\left(𝐲𝐜_i\right)^l,`$ and $`\left(𝐜𝐜\right)^{nm}`$ $`=`$ $`\left(|𝐜_j|^2+|𝐜_i|^22𝐜_j𝐜_i\right)^{nm}`$ (21) $`=`$ $`{\displaystyle \underset{k=0}{\overset{nm}{}}}{\displaystyle \underset{p=0}{\overset{nmk}{}}}{\displaystyle \frac{(nm)!|𝐜_i|^{2k}|𝐜_j|^{2p}}{k!p!(nmkp)!}}\left(2𝐜_i𝐜_j\right)^{nmkp}.`$ Inserting these into Eq.
(4) for $`\mathrm{\Delta }V_c`$, we get $`\mathrm{\Delta }V_c=`$ (22) $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i^{}(𝐱)q_j^{}(𝐱+𝐲){\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{ϵ^n}{n!}}{\displaystyle \underset{m=n/2}{\overset{n}{}}}`$ $`{\displaystyle \underset{l=0}{\overset{2mn}{}}}{\displaystyle \underset{k=0}{\overset{nm}{}}}{\displaystyle \underset{p=0}{\overset{nmk}{}}}{\displaystyle \frac{(1)^{nm+lkp}n!}{2^{k+p}l!k!p!(2mnl)!(nmkp)!}}`$ $`\varphi _m(y)\left(𝐲𝐜_j\right)^{2mnl}\left(𝐲𝐜_i\right)^l|𝐜_i|^{2k}|𝐜_j|^{2p}\left(𝐜_i𝐜_j\right)^{nmkp}.`$ Eliminating $`p`$ in favor of the new summation index $`rnm+l+kp`$, and reordering the summations, this becomes $`\mathrm{\Delta }V_c=`$ (23) $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{ϵ^n}{n!}}{\displaystyle \underset{r=0}{\overset{n}{}}}{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i^{}(𝐱)q_j^{}(𝐱+𝐲){\displaystyle \underset{m=n/2}{\overset{n}{}}}{\displaystyle \underset{l=0}{\overset{2mn}{}}}{\displaystyle \underset{k=\mathrm{max}(0,mn+rl)}{\overset{\mathrm{min}(nm,(rl)/2)}{}}}`$ $`{\displaystyle \frac{(1)^rn!\varphi _m(y)|𝐜_i|^{2k}|𝐜_j|^{2n2r2m+2l+2k}}{2^{nrm+l+2k}l!k!(nrm+l+k)!(2mnl)!(rl2k)!}}`$ $`\left(𝐲𝐜_j\right)^{2mnl}\left(𝐲𝐜_i\right)^l\left(𝐜_i𝐜_j\right)^{rl2k},`$ where we have adopted the convention that a sum is zero if its upper limit is less than its lower limit. 
By reinterpreting dot products raised to the $`s`$ power as the $`s`$-fold inner product of two $`s`$-fold outer products, we can rewrite this as follows $`\mathrm{\Delta }V_c`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{ϵ^n}{n!}}{\displaystyle \underset{r=0}{\overset{n}{}}}(1)^r\left({\displaystyle \genfrac{}{}{0pt}{}{n}{r}}\right){\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{i,j}{}}q_i^{}(𝐱)q_j^{}(𝐱+𝐲)`$ (24) $`\left[\left({\displaystyle \stackrel{r}{}}𝐜_i\right){\displaystyle \stackrel{r}{}}𝒦_n(𝐲){\displaystyle \stackrel{nr}{}}\left({\displaystyle \stackrel{nr}{}}𝐜_j\right)\right],`$ where $`^r`$ denotes an $`r`$-fold outer product and $`^r`$ denotes an $`r`$-fold inner product, and where we have defined the kernel $`𝒦_n(𝐲)`$ $``$ $`{\displaystyle \underset{m=n/2}{\overset{n}{}}}{\displaystyle \underset{l=0}{\overset{2mn}{}}}{\displaystyle \underset{k=\mathrm{max}(0,mn+rl)}{\overset{\mathrm{min}(nm,(rl)/2)}{}}}`$ (25) $`{\displaystyle \frac{r!(nr)!\varphi _m(y)}{2^{nrm+l+2k}l!k!(nrm+l+k)!(2mnl)!(rl2k)!}}`$ $`\left[\left({\displaystyle \stackrel{2mn}{}}𝐲\right)\left({\displaystyle \stackrel{nm}{}}\mathrm{𝟏}\right)\right]`$ $`=`$ $`{\displaystyle \underset{m=n/2}{\overset{n}{}}}{\displaystyle \frac{n!\varphi _m(y)}{2^{nm}(2mn)!(nm)!}}\left[\left({\displaystyle \stackrel{2mn}{}}𝐲\right)\left({\displaystyle \stackrel{nm}{}}\mathrm{𝟏}\right)\right].`$ In the very last step above we performed the sums over $`k`$ and $`l`$. The expression for the potential energy, Eq. (24), has a remarkable symmetry, inherited from Eq. (3). By making the substitutions $`r`$ $``$ $`nr`$ $`i`$ $``$ $`j`$ $`j`$ $``$ $`i`$ $`𝐱`$ $``$ $`𝐱+𝐲`$ $`𝐲`$ $``$ $`𝐲`$ (26) and noting that $`𝒦_n(𝐲)=(1)^n𝒦_n(𝐲)`$, we can see that the $`r`$th term of Eq. (24) is equal to the $`(nr)`$th term. It also follows that the kernel $`𝒦_n`$ can be chosen to be completely symmetric under interchange of any two of its $`n`$ indices. 
If we notice that the combinatorial factor $$\frac{n!}{2^{nm}(2mn)!}$$ (27) is precisely equal to the number of distinct ways to assign $`n`$ indices to the tensor $`(^{2mn}𝐲)(^{nm}\mathrm{𝟏})`$, and recalling that only the symmetric part of $`𝒦_n`$ matters, we can rewrite the kernel, Eq. (25), in the remarkably compact form of Eq. (9). The simplicity of this result suggests that there may be an easier way to derive it. If we now introduce the completely symmetric $`r`$th-rank outgoing tensor fluxes, as defined in Eq. (7), then Eq. (24) may be written as $`\mathrm{\Delta }V_c=`$ (28) $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{𝐱,𝐲}{}}{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{ϵ^n}{n!}}{\displaystyle \underset{r=0}{\overset{n}{}}}(1)^r\left({\displaystyle \genfrac{}{}{0pt}{}{n}{r}}\right)𝒥_r^{}(𝐱){\displaystyle \stackrel{r}{}}𝒦_n(𝐲){\displaystyle \stackrel{nr}{}}𝒥_{nr}^{}(𝐱+𝐲).`$ Because of the symmetry, Eq. (26), we can remove the factor of $`1/2`$ and sum $`r`$ from $`n/2`$ to $`n`$, instead of from $`0`$ to $`n`$. The exception to this arises when $`n`$ is even and $`r=n/2`$; in that case, the factor of $`1/2`$ must be retained. We accommodate this case by dividing by $`1+\delta _{n,2r}`$, where $`\delta `$ is the Kronecker delta. If we then define the $`r`$th-rank tensor fields, as in Eq. (8), the expression of Eq. (6) for the potential energy change follows immediately. As noted in the text, the primes on the fluxes and fields indicate that these are evaluated using the outgoing (post-collision) states. Thus, at each order $`n`$, Eq. (6) expresses the change in potential energy due to the propagation step as the sum of $`n`$ terms, the $`r`$th of which is an $`r`$-fold inner product of an $`r`$th-rank outgoing tensor flux, Eq. (7), with an $`r`$th-rank tensor field, Eq. (8). Note that we can rearrange the order of summation of $`n`$ and $`r`$ to write Eq. 
(6) in the following alternative format $$\mathrm{\Delta }V_c=\underset{𝐱}{}\underset{r=1}{\overset{\mathrm{}}{}}ϵ^r𝒥_r^{}(𝐱)\stackrel{r}{}\left(\underset{n=0}{\overset{r}{}}ϵ^n_{n+r,r}^{}(𝐱)\right).$$ (29) This makes it clear that at each order in $`r`$ a new $`r`$th-rank tensor flux must be introduced, and the corresponding $`r`$th-rank field is the sum of $`r`$ terms. The disadvantage of writing the result in this way is that each term of the outer sum over $`r`$ contains terms of differing order in $`ϵ`$. Finally, we note in passing that tensor fluxes and fields have been considered in the context of an LGA model of microemulsions . While that model did indeed derive its update rule from considerations of single-particle interactions, it did not employ the method used here. In particular, where $`𝐜_j𝐜_i`$ appears in Eq. (3), that study took only $`𝐜_i`$, and the expansion was not carried out to all orders. On the other hand, that work also included vector-valued charge attributes, in order to properly model the orientation of the surfactant molecules. It is likely that a more exact formulation, analogous to our Eq. (3), exists for this microemulsion model, and may well connect this work with the static lattice models of microemulsions due to Matsen and Sullivan .
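As a closing numerical sanity check of the kernel definitions in this appendix, note that Eq. (11) says $`[𝒦_1(𝐲)]_i=\varphi _1(y)y_i`$, which is simply the gradient of $`\varphi (|𝐲|)`$, as the $`n=1`$ term of the Taylor expansion requires. The sketch below verifies this with finite differences; the Gaussian choice of $`\varphi `$ and the sample point are assumptions of the example.

```python
import math

def phi(r):
    return math.exp(-r * r / 2)            # illustrative analytic kernel

def phi_m(r, m, h=1e-5):
    """phi_m(y) = ((1/y) d/dy)^m phi(y), Eq. (19), by central differences."""
    if m == 0:
        return phi(r)
    return (phi_m(r + h, m - 1, h) - phi_m(r - h, m - 1, h)) / (2 * h * r)

def grad_phi(y1, y2, h=1e-6):
    """Numerical gradient of phi(|y|) in the plane."""
    f = lambda a, b: phi(math.hypot(a, b))
    return ((f(y1 + h, y2) - f(y1 - h, y2)) / (2 * h),
            (f(y1, y2 + h) - f(y1, y2 - h)) / (2 * h))

# Eq. (11): [K_1(y)]_i = phi_1(y) y_i reproduces the gradient of phi(|y|)
y1, y2 = 0.7, -0.4
r = math.hypot(y1, y2)
g1, g2 = grad_phi(y1, y2)
assert abs(g1 - phi_m(r, 1) * y1) < 1e-6
assert abs(g2 - phi_m(r, 1) * y2) < 1e-6
```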
# Modelling Hard Gamma-Ray Emission From Supernova Remnants ## Introduction It is widely believed that supernova remnants (SNRs) are the primary sources of cosmic-ray ions and electrons up to energies of at least $`10^{15}`$ eV, where the so-called knee in the spectrum marks a deviation from almost pure power-law behavior. Such cosmic rays are presumed to be generated by diffusive (also called first-order Fermi) acceleration at the remnants’ forward shocks. These cosmic rays can generate gamma rays via interactions with the ambient interstellar medium, including nuclear interactions between relativistic and cold interstellar ions, by bremsstrahlung of energetic electrons colliding with the ambient gas, and inverse Compton (IC) emission off cosmic background radiation. Rudimentary models of gamma-ray production in remnants involving nuclear interactions date back to the late 1970s hl75 ; chev77 . These preceded the first tentative associations of two COS-B gamma-ray sources poll85 with the remnants $`\gamma `$ Cygni and W28. Apart from the work of Dorfi dorfi91 , who provided the first model including a more sophisticated study of non-linear effects of shock acceleration to treat gamma-ray production, the study of gamma-ray SNRs remained quietly in the background until the observational program of the EGRET experiment aboard the Compton Gamma Ray Observatory. This provided a large number of unidentified sources above 50 MeV, a handful of which have interesting associations with relatively young SNRs espos96 . Following the EGRET advances, the modelling of gamma-ray and other non-thermal emission from supernova remnants “burgeoned,” beginning with the paper of Drury, Aharonian, & Völk dav94 (hereafter DAV94), who computed the photon spectra expected from the decay of neutral pions generated in collisions of power-law shock-accelerated ions with those of the interstellar medium (ISM). 
This work spawned a number of subsequent papers that used different approaches, as discussed in the next section, and propelled the TeV gamma-ray astronomy community into a significant observational program given the prediction of substantial TeV fluxes from the DAV94 model. The initial expectations of TeV gamma-ray astronomers were dampened by the lack of success of the Whipple and HEGRA groups less95 ; prosch96 ; buck97 in detecting emission from SNRs after a concerted campaign. While sectors of the community contended that the constraining TeV upper limits posed difficulties for SNR shock acceleration models, these observational results were naturally explained mdeJ96 ; baring97 ; sturn97 by the maximum particle energies expected (in the 1–50 TeV range) in remnants and the concomitant anti-correlation between maximum energy of gamma-ray emission and the gamma-ray luminosity bergg99 (discussed below). The observational breakthrough in this field came with the recent report of a spatially-resolved detection of SN1006 (not accessible by northern hemisphere atmospheric Čerenkov telescopes (ACTs) such as Whipple and HEGRA) by the CANGAROO experiment Tani98 at energies above 1.7 TeV. The interpretation (actually predicted for SN 1006 by mdeJ96 ; Pohl96 ) that evolved was that this emission was due to energetic electrons accelerated in the low density environs of this high-latitude remnant, generating flat-spectrum inverse Compton radiation seeded by the cosmic microwave background. This suggestion was influenced, if not motivated by the earlier detection Koya95 of the steep non-thermal X-ray emission from SN 1006 that has been assumed to be the upper end of a broad synchrotron component, implying the presence of electrons in the 20–100 TeV range. Studies of gamma-ray emission from remnants have adapted to this discovery by suggesting (e.g. 
baring97 ; bergg99 ) that galactic plane remnants such as Cas A that possess denser interstellar surroundings may have acceleration and emission properties distinct from high-latitude sources; the exploration of such a contention may be on the horizon, given the detection of Cas A by HEGRA announced at this meeting Voelk99 . Given the complexity of recent shock acceleration/SNR emission models, the range of spectral possibilities is considerable, and a source of confusion for both theorists and observers. It is the aim of this paper to elucidate the study of gamma-ray remnants by pinpointing the key spectral variations/trends with changes in model parameters, and thereby identify the principal parameters that impact TeV astronomy programs.

## Models: A Brief History

Reviews of recent models of gamma-ray emission from SNRs can be found in baring97 ; bergg99 ; deJbar97 ; Voelk97 ; a brief exposition is given here. Drury, Aharonian, & Völk dav94 provided impetus for recent developments when they calculated gamma-ray emission from protons using the time-dependent, two-fluid analysis (thermal ions plus cosmic rays) of dmv89 , following on from the similar work of dorfi91 . They assumed a power-law proton spectrum, so that no self-consistent determination of the spectral curvature of the distributions eich84 ; ee84 ; je91 , or of the temporal or spatial limits to the maximum energy of acceleration, was made. The omission of environmentally-determined high energy cutoffs in their model was a critical driver for the interpretative discussion that ensued. dav94 found that during much of the Sedov evolution, maximal diffusion length scales are considerably less than a remnant’s shock radius. Gaisser et al. gps98 computed emission from bremsstrahlung, inverse Compton scattering, and pion-decay from proton interactions, but did not consider non-linear shock dynamics or time-dependence and assumed test-particle power-law distributions of protons and electrons with arbitrary $`e/p`$ ratios.
In order to suppress the flat inverse Compton component and thereby accommodate the EGRET observations of $`\gamma `$ Cygni and IC443, gps98 obtained approximate constraints on the ambient matter density and the primary $`e/p`$ ratio. A time-dependent model of gamma-ray emission from SNRs using the Sedov solution for the expansion was presented by Sturner, et al. sturn97 . They numerically solved equations for electron and proton distributions subject to cooling by inverse Compton scattering, bremsstrahlung, $`\pi ^0`$ decay, and synchrotron radiation (to supply a radio flux). Expansion dynamics and non-linear acceleration effects were not treated, and power-law spectra were assumed. Sturner et al. (1997) introduced cutoffs in the distributions of the accelerated particles (following mdeJ96 ; Reyn96 ; deJm97 ), which are defined by the limits (discussed below) on the achievable energies in Fermi acceleration. Hence, given suitable model parameters, they were able to accommodate the constraints imposed by Whipple’s upper limits buck97 to $`\gamma `$ Cygni and IC 443. To date, the two most complete models coupling the time-dependent dynamics of the SNR to cosmic ray acceleration are those of Berezhko & Völk bv97 , based on the model of byk96 , and Baring et al. bergg99 . Berezhko & Völk numerically solve the gas dynamic equations including the cosmic ray pressure and Alfvén wave dissipation, following the evolution of a spherical remnant in a homogeneous medium. Originally only pion decay was considered, though this has now been extended bkp99 to include other components. Baring et al. simulate the diffusion of particles in the environs of steady-state planar shocks via a well-documented Monte Carlo technique je91 ; ebj96 that has had considerable success in modelling particle acceleration at the Earth bow shock emp90 and interplanetary shocks boef97 in the heliosphere. 
They also solve the gas dynamics numerically, and incorporate the principal effects of time-dependence through constraints imposed by the Sedov solution. These two refined models possess a number of similarities. Both generate upward spectral curvature (predicted by eich84 ; see the review in baring97 ), a signature that is a consequence of the higher energy particles diffusing on larger scales and therefore sampling larger effective compressions, and both obtain overall compression ratios $`r`$ well above standard test-particle Rankine-Hugoniot values. Yet, there are two major differences between these two approaches. First, Berezhko et al. bv97 ; bkp99 include time-dependent details of energy dilution near the maximum particle energy self-consistently, while Baring et al. bergg99 mimic this property by using the Sedov solution to constrain parametrically the maximum scale of diffusion (defining an escape energy). These two approaches merge in the Sedov phase eb99 , because particle escape from strong shocks is a fundamental part of the non-linear acceleration process and is determined primarily by energy and momentum conservation, not time-dependence or a particular geometry. Second, bergg99 injects ions into the non-linear acceleration process automatically from the thermal population, and so determines the dynamical feedback self-consistently, whereas bv97 must specify the injection efficiency as a free parameter. Berezhko & Ellison eb99 recently demonstrated that, for most cases of interest, the shock dynamics are relatively insensitive to the efficiency of injection, and that there is good agreement between the two approaches when the Monte Carlo simulation bergg99 ; ebj96 specifies injection for the model of bv97 .
This convergence of results from two complementary methods is reassuring to astronomers, and underpins the expected reliability of emission models to the point that a hybrid “simple model” has been developed be99 to describe the essential acceleration features of both techniques. This has been extended to a new and comprehensive parameter survey ebb99 of broad-band SNR emission that provides results that form the basis of much of the discussion below.

## Global Theoretical Predictions

Since there is considerable agreement between the most developed acceleration/emission models, we are in the comfortable position of being able to identify the salient global properties that should be characteristics of any particular model. Clearly a treatment of non-linear dynamics and associated spectral curvature is an essential ingredient of more accurate predictions of emission fluxes, particularly in the X-ray and gamma-ray bands where large dynamic ranges in particle momenta are sampled, so that discrepancies of factors of a few or more arise when test-particle power-laws are used. Concomitantly, test-particle shock solutions considerably over-estimate ebj96 ; be99 ; ebb99 the dissipational heating of the downstream plasma in high Mach number shocks, thereby introducing errors that propagate into predictions of X-ray emission and substantially influence the overall normalization of hard X-ray to gamma-ray emission (which depends on the plasma temperature bergg99 ; ebb99 ). These points emphasize that a cohesive treatment of the entire particle distributions is requisite for the accuracy of a given model. In addition, finite maximum energies of cosmic rays imposed by spatial and temporal acceleration constraints (e.g. bergg99 ; berez96 ) must be integral to any model, influencing feedback that modifies the non-linear acceleration problem profoundly.
In SNR evolutionary scenarios, a natural scaling of this maximum energy $`E_{\mathrm{max}}`$ arises, defined approximately by the energy attained at the onset of the Sedov phase bergg99 ; berez96 : $$E_{\mathrm{max}}\simeq 60\frac{Q}{\eta }\left(\frac{B_{\text{ISM}}}{3\mu \mathrm{G}}\right)\left(\frac{n_{\text{ISM}}}{1\mathrm{cm}^{-3}}\right)^{-1/3}\left(\frac{E_{\text{SN}}}{10^{51}\mathrm{erg}}\right)^{1/2}\left(\frac{M_{\mathrm{ej}}}{M_{\odot }}\right)^{-1/6}\mathrm{TeV},$$ (1) where $`Q`$ is the particle’s charge, $`\eta `$ ($`\geq 1`$) is the ratio between its scattering mean-free-path and its gyroradius, $`E_{\text{SN}}`$ is the supernova energy, $`M_{\mathrm{ej}}`$ is its ejecta mass, and other quantities are self-explanatory. At earlier epochs, the maximum energy scales approximately linearly with time, while in the Sedov phase, it slowly asymptotes bergg99 ; bv99 to a value a factor of a few above that in Eq. (1). Three properties emerge as global signatures of models that impact observational programs. The first is that there is a strong anti-correlation of $`E_{\mathrm{max}}`$ (and therefore the maximum energy of gamma-ray emission) with gamma-ray luminosity, first highlighted by bergg99 . High ISM densities are conducive to brighter sources in the EGRET to sub-TeV band bergg99 ; bv97 ; ebb99 , but reduce $`E_{\mathrm{max}}`$ in Eq. (1) and accordingly act to inhibit detection by ACTs. Low ISM magnetic fields produce a similar trend, raising the gamma-ray flux by flattening the cosmic ray distribution (discussed below). Clearly, high density, low $`B_{\text{ISM}}`$ remnants are the best candidates for producing cosmic rays up to the knee. Fig. 1 displays a sample model spectrum for Cas A, which has a high density, high $`B_{\text{ISM}}`$ environment.
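As an illustration, the scaling of Eq. (1) can be evaluated directly. The following minimal Python sketch (the function and parameter names are mine, not from the paper) makes the density and magnetic-field trends quoted above explicit:

```python
# Illustrative sketch of the E_max scaling of Eq. (1); function and
# parameter names are my own, and the prefactor is the quoted ~60 TeV.

def e_max_tev(Q=1.0, eta=1.0, B_ism_uG=3.0, n_ism=1.0,
              E_sn_erg=1e51, M_ej_msun=1.0):
    """Approximate maximum cosmic-ray energy (TeV) at the onset of the
    Sedov phase, for a particle of charge Q (in units of e)."""
    return (60.0 * Q / eta
            * (B_ism_uG / 3.0)
            * n_ism ** (-1.0 / 3.0)
            * (E_sn_erg / 1e51) ** 0.5
            * M_ej_msun ** (-1.0 / 6.0))

# Fiducial parameters give ~60 TeV for protons; a tenfold denser medium
# lowers E_max (inhibiting detection by ACTs) even as it raises the GeV flux.
print(e_max_tev())               # 60.0
print(e_max_tev(n_ism=10.0))     # ~27.8
```

Note how the anti-correlation between gamma-ray brightness and maximum emission energy discussed above falls out of the density dependence alone.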
In it the various spectral components are evident, and the lower $`E_{\mathrm{max}}`$ for electrons (relative to that for protons) that is generated by strong cooling shows up in the bremsstrahlung and inverse Compton spectra. The other two global properties are of a temporal nature. The first is the approximate constancy of the observed gamma-ray flux (and $`E_{\mathrm{max}}`$ bv99 ) in time during the Sedov phase, an insensitivity first predicted by dorfi91 and confirmed in the analyses of dav94 ; bergg99 ; bv99 . The origin of this insensitivity to SNR age $`t_{\text{SNR}}`$ is an approximate compensation between the SNR volume $`𝒱`$ that scales as $`t_{\text{SNR}}^{6/5}`$ (radius $`\propto t_{\text{SNR}}^{2/5}`$) in the Sedov phase, and the normalization coefficient $`𝒩`$ of the roughly $`E^{-2}`$ particle distribution function: since the shock speed (and therefore also the square root of the temperature $`T_{\mathrm{pd}}`$) declines as $`t_{\text{SNR}}^{-3/5}`$, it follows that $`𝒩\propto T_{\mathrm{pd}}\propto t_{\text{SNR}}^{-6/5}`$ and flux $`\propto 𝒩𝒱\approx `$ const. There is also a limb brightening with age dav94 that follows from the maximum particle diffusion length scale remaining constant while the remnant continues to expand.

### Key Parameters and Model Behavioural Trends

The principal aim here is to distill the complexity of non-linear acceleration models for time-dependent SNR expansions and discern the key parameters controlling spectral behaviour and simple reasons for behavioural trends. This should elucidate for theorist and experimentalist alike the scientific gains to be made by present and next generation experimental programs. Parameters are grouped according to whether they are of model or environmental origin (trends associated with the age of a remnant were discussed just above), and details can be found in the comprehensive survey of Ellison, Berezhko & Baring ebb99 .
There are three relevant model parameters in non-linear acceleration: the ratio of downstream electron and proton temperatures $`T_{\mathrm{ed}}/T_{\mathrm{pd}}`$, the injection efficiency $`\eta _{\mathrm{inj}}`$ (after bv97 ; byk96 ), and the electron-to-proton ratio $`(e/p)_{\mathrm{rel}}`$ at relativistic energies (i.e. $`\sim 1`$–$`10`$ GeV). The injection efficiency is the most crucial of these, since it controls the pressure contained in non-thermal ions, and therefore the non-linearity of the acceleration process. It mainly impacts the X-ray to soft gamma-ray bremsstrahlung contribution, a component that is generally dominated by pion decay emission in the hard gamma-ray band. The shape and normalization of the $`\pi ^0`$ decay gamma-rays are only affected when $`\eta _{\mathrm{inj}}`$ drops below $`\sim 10^{-4}`$ and the shock solution becomes close to the test-particle one, i.e. an overall spectral steepening arises. Variations in $`(e/p)_{\mathrm{rel}}`$ influence the strength of the inverse Compton and bremsstrahlung components, which modify the total gamma-ray flux only if $`(e/p)_{\mathrm{rel}}\gtrsim 0.1`$, a high value relative to cosmic ray abundances, or the ambient field is strong. The most interesting behavioural trends are elicited by the environmental parameters $`n_{\text{ISM}}`$ and $`B_{\text{ISM}}`$, and the results adapted from ebb99 are illustrated in Fig. 2. Naively, one expects that the radio-to-X-ray synchrotron and gamma-ray inverse Compton components should scale linearly with density increases, while the bremsstrahlung and pion decay contributions intuitively should be proportional to $`n_{\text{ISM}}^2`$. However, global spectral properties are complicated by the non-linear acceleration mechanism and the evolution of the SNR.
As $`n_{\text{ISM}}`$ rises, the expanding supernova sweeps up its ejecta mass sooner, and therefore decelerates on shorter timescales, thereby reducing the volume $`𝒱`$ of a remnant of given age and lowering both the shock speed and the associated downstream ion temperature $`T_{\mathrm{pd}}`$. Hence, the density increase is partially offset by the “shifting” of the particle distributions to lower energies (due to lower $`T_{\mathrm{pd}}`$), so that the normalization $`𝒩`$ of the non-thermal distributions at a given energy is a weakly increasing function of $`n_{\text{ISM}}`$. Clearly $`𝒱`$ times this normalization controls the observed flux of the synchrotron and inverse Compton components, while the product of $`𝒩`$, the target density $`n_{\text{ISM}}`$ and $`𝒱`$ determines the bremsstrahlung and $`\pi ^0\to \gamma \gamma `$ emission, with results shown in Fig. 2. Observe that the approximate constancy of the inverse Compton contribution effectively provides a lower bound to the gamma-ray flux in the 1 GeV–1 TeV band, a property that is of significant import in defining experimental goals. The principal property in Fig. 2 pertaining to variations in $`B_{\text{ISM}}`$ is the anti-correlation between radio and TeV fluxes: the higher the value of $`B_{\text{ISM}}`$, the brighter the radio synchrotron, but the fainter the hard gamma-ray pion emission. This property is dictated largely by the influence of the field on the shock dynamics and total compression ratio $`r`$: the higher the value of $`B_{\text{ISM}}`$, the more the field contributes to the overall pressure, reducing the Alfvénic Mach number and accordingly $`r`$, as the flow becomes less compressible. This weakening of the shock steepens the particle distributions and the overall photon spectrum. An immediate offshoot of this behaviour is the premise ebb99 that radio-selected SNRs may not provide the best targets for TeV observational programs.
Case in point: Cas A is a very bright radio source while SN 1006 is not, and the latter was observed first. Since $`n_{\text{ISM}}`$ and $`B_{\text{ISM}}`$ principally determine the gamma-ray spectral shape, the flux normalization, and whether or not the gamma-ray signatures indicating the presence of cosmic ray ions are apparent, they are the most salient parameters to current and future ACT programs and the GLAST experiment.

## Key Issues and Experimental Potential

There are a handful of quickly-identifiable key issues that define goals for future experiments, and these can be broken down into two categories: spatial and spectral. First and foremost, the astronomy community needs to know whether the EGRET band gamma-ray emission is actually shell-related. While the associations of espos96 were enticing, subsequent research braz96 ; keohane97 ; braz98 has suggested that perhaps compact objects like pulsars and plerions or concentrated regions of dense molecular material may be responsible for the EGRET unidentified sources. If a connection to the shell is eventually established, it is desirable to know if it is localized only to portions of the shell. One naturally expects that shock obliquity effects ebj96 can play an important role in determining the gamma-ray flux in “clean” systems like SN 1006, and that turbulent substructure within the remnant (e.g. Cas A) can complicate the picture dramatically. Such clumping issues impact radio/gamma-ray flux determinations, since the radio-emitting electrons diffuse on shorter length scales and therefore are more prone to trapping. Another contention that needs observational verification is whether limb-brightening increases with SNR age.
Improvements in the angular resolution of ACTs can resolve these issues and discern variations in gamma-ray luminosities across SNR shells: the typical capability of planned experiments such as HESS, Veritas, MAGIC and CANGAROO-III is of the order of 2–3 arcminutes in the TeV band weekes99 ; kohnle99 . The principal gamma-ray spectral issue is whether or not there is evidence of cosmic ray ions near remnant shocks. The goal in answering this is obviously the detection of the $`\sim 70`$ MeV $`\pi ^0`$ bump, the unambiguous signature of cosmic ray ions, and given the GLAST differential sensitivity (the measure of capability in performing spectral diagnostics as opposed to detection above a given energy) plotted in Fig. 2, GLAST will be sensitive to remnants with $`n_{\text{ISM}}\gtrsim 0.1`$ cm<sup>-3</sup>. Atmospheric Čerenkov experiments can also make progress on this issue, with the dominant component in the super-TeV band for moderately or highly magnetized remnant environs being that of pion decay emission (see Fig. 1). Such a circumstance may already be realized in the recent marginal detection Voelk99 of Cas A by HEGRA. The most powerful diagnostic the sensitive TeV experiments will provide is the determination of the maximum energy (see Fig. 2) of emission (and therefore also that of cosmic ray ions or electrons), thereby constraining $`n_{\text{ISM}}`$, $`B_{\text{ISM}}`$ and the $`e/p`$ ratio. Furthermore, the next generation of ACTs should be able to discern the expected anti-correlation between $`E_{\mathrm{max}}`$ and $`\gamma `$-ray flux, and with the help of GLAST, search for spectral concavity, a principal signature of non-linear acceleration theory. In view of the anticipated increase in the number of TeV SNRs, a population classification may be possible, determining whether or not SN 1006 and other out-of-the-plane remnants differ intrinsically in their gamma-ray and cosmic ray production from the Cas A-type SNRs.
These potential probes augur well for exciting times in the next 5–10 years in the field of TeV gamma-ray astronomy. Acknowledgments: I thank my collaborators Don Ellison, Steve Reynolds, Isabelle Grenier, Frank Jones and Philippe Goret for many insightful discussions, Seth Digel for providing results of simulations of GLAST spectral capabilities, and Rod Lessard for supplying the Veritas integral flux sensitivity data for Fig. 2.
# Evidence for an ∼80 day periodicity in the X–ray transient pulsar XTE J1946+274

## 1 Introduction

The hard X–ray transient (HXRT) source XTE J1946+274 was discovered with the RossiXTE All Sky Monitor (ASM) during a scan of the Vul-Cyg region on Sept. 5, 1998. The source was detected at a flux level of $`\sim 13`$ mCrab (2–12 keV; Smith & Takeshima 1998), which rose steeply in the following days, reaching a peak of $`\sim 110`$ mCrab around Sept. 17, 1998 (Takeshima & Chakrabarty 1998). X–ray pulsations at 15.8 s were discovered by BATSE (GRO J1944+26, Wilson et al. 1998a) and subsequently confirmed through RossiXTE pointed observations (Smith & Takeshima 1998). The 1998 outburst of XTE J1946+274 was extensively monitored also through a BeppoSAX observational campaign (Campana et al. 2000; Santangelo et al. 2000). The position error circle obtained with RossiXTE ($`2^{\prime }.4`$ radius, 90% confidence level; Takeshima & Chakrabarty 1998) was reduced to $`30^{\prime \prime }`$ (95% confidence level) through pointed observations with the BeppoSAX Narrow Field Instruments (Campana et al. 1998). The best X–ray position is R.A.= 19<sup>h</sup> 45<sup>m</sup> 38<sup>s</sup> and Dec.= +27<sup>o</sup> $`21^{\prime }.5`$ (equinox 2000). XTE J1946+274 lies in the error box of the 1976 Ariel V transient 3A 1942+274 (Warwick et al. 1981). Assuming $`\sim 1000`$ HXRTs in the Galaxy (e.g. Bildsten et al. 1997), we estimate a chance probability of $`\sim 7\%`$ of finding a new transient within the 3A 1942+274 error box. This probability is such that we cannot infer a firm association of the two sources. The likely optical counterpart has recently been identified (Verrecchia et al. 2000) with an $`\mathrm{R}\sim 14`$ mag Be star, showing a strong $`H_\alpha `$ emission line. Earlier reports of a B counterpart (Israel, Polcaro & Covino 1998; Ghavamian & Garcia 1998) have been ruled out by the BeppoSAX error circle, lying $`\sim 10^{\prime \prime }`$ outside it.
All these characteristics indicate that XTE J1946+274 is a Be star HXRT. Here we report the discovery of an $`\sim 80`$ d X–ray flux modulation in the RossiXTE-ASM data of the source and argue that this (or its double) likely represents the orbital period of the system.

## 2 Period determination

The ASM (Levine et al. 1996) on board RossiXTE (Bradt, Rothschild & Swank 1993) routinely scans about 80% of the X–ray sky every orbit. It consists of three Scanning Shadow Cameras with a 1.3–12 keV energy band, an intrinsic angular resolution of a few arcmin and a large field of view ($`6^\mathrm{o}\times 90^\mathrm{o}`$ FWHM). A sensitivity of $`5`$–$`10`$ mCrab is reached over one day. The intensity is calculated in three energy bands (1.3–3, 3–5 and 5–12 keV) and normalized in units of source count at the center of the field of view. Errors are computed considering the uncertainties due to the counting statistics and a $`2\%`$ systematic error obtained from the Crab calibration. A description of the ASM and its light curves can be found in Levine et al. (1996) and Levine (1998) (see the web page http://heasarc.gsfc.nasa.gov/docs/xte/asm_products.html#access). Fig. 1 shows the light curve of the XTE J1946+274 outburst, which began in Sept. 1998. The source took $`\sim 25`$ d to reach the peak and then decayed smoothly, leading to a markedly asymmetric profile during the first $`\sim 80`$ d of the outburst. Following the main flare, four secondary flares are evident in Fig. 1. Fig. 2 shows the light curves in the three ASM energy channels. The low energy curve contains only a small signal, a result of the fact that the source is heavily absorbed. The hardness ratio from the higher energy channels is consistent with being constant.
The overall behaviour of the light curve is suggestive of a relatively intense outburst in the initial phases, during which the neutron star mass capture rate was likely driven primarily by the time variations in the mass ejection from the Be star. The recurrent behaviour and lower flux of the following four flares suggest instead that later in the outburst the mass capture rate variations were induced mainly by the motion of the neutron star in an eccentric orbit. The fairly regular $`\sim 80`$ d recurrence of the four secondary flares is more apparent by looking at the light curve minima, which are evidently less influenced by the varying amplitude and shape of the outburst. A simple power spectrum analysis of the four secondary flares (from MJD 51140 to MJD 51450) reveals a significant periodicity ($`3.3\sigma `$ in the 10–250 d range) at $`73\pm 5`$ d. The inclusion of the first flare (from MJD 51040 to MJD 51450) considerably lowers the significance. Similarly, by adopting an epoch folding analysis of the light curve we detect significant power at a frequency corresponding to $`79\pm 6`$ d. The inclusion of the first flare causes the folding peak to shift to $`87\pm 6`$ d. We also fitted the light curve of the last four flares with a sinusoid plus a constant: we determined a best fit period of $`\sim 77`$ d. The same fit on the total light curve provides instead a period of $`\sim 95`$ d. The discrepancy between the periods determined with and without the first flare arises because this flare starts some $`30`$ d before the extrapolation of the ephemerides derived from the subsequent four flares. In order to model the flare profiles more accurately we adopted a model consisting of a smooth burst profile with a power law rise and an exponential decay, namely $`CR=CR_0\left((t-t_0)/(t_\mathrm{c}\alpha )\right)^\alpha \mathrm{exp}\left(-(t-t_0)/t_\mathrm{c}\right)`$.
Here $`t_0`$ is the starting time of the flare, $`CR_0`$ is the count rate normalization, $`\alpha `$ the power law slope of the rise and $`t_\mathrm{c}`$ the exponential decay timescale; for times smaller than $`t_0`$ the function value is 0. This model was fitted to each of the four secondary flares by forcing all parameters but the normalisation to be the same and by imposing a periodic recurrence of the outbursts, with the period a free parameter. Only the first flare, for the reasons described above, was allowed a different rise and decay profile and a shift in time. Fig. 1 shows the best fit obtained in this way. The reduced $`\chi ^2`$ is 2.5. The rising exponents are 2.0 and 8.2, and the decay times $`t_\mathrm{c}=12`$ and 6 d, for the first and the other flares, respectively. These values are significantly different and confirm ‘a posteriori’ the differences between the first and the subsequent flares. The best separation between the flares is 77.3 d with a nominal statistical uncertainty of $`\sim 1`$ d (90% confidence level). We speculate that the slightly different results obtained with the epoch folding search and the power spectrum density are due to the small number of points, the strength of the first and third flares and the width of the second and fourth flares, for which the maximum cannot be identified with certainty. The onset of the first flare is shifted by –7 d with respect to the other flares; moreover, the different behaviour of its decay makes it occur some 30 d before the value extrapolated from the following flares. The approximate epochs of the maxima are MJD 51080.5, 51187.4, 51264.8, 51342.1 and 51419.5.

## 3 Discussion

In recent years, growing evidence has been gathered that the two classes of outbursts from HXRT sources (type I: normal; type II: giant, cf. Stella, White & Rosner 1986) are associated with wind and disk accretion, respectively (Bildsten et al. 1997).
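For concreteness, the smooth burst profile adopted for the fits above can be sketched numerically. This toy check (the function name and time grid are mine; the parameter values are those quoted for the secondary flares) verifies that the profile vanishes before $`t_0`$ and peaks a time $`\alpha t_\mathrm{c}`$ after onset:

```python
import numpy as np

def flare(t, cr0=1.0, t0=0.0, tc=6.0, alpha=8.2):
    """Smooth burst profile: power-law rise, exponential decay, 0 for t < t0."""
    dt = np.clip(t - t0, 0.0, None)
    return cr0 * (dt / (tc * alpha)) ** alpha * np.exp(-dt / tc)

# Setting d/dt [alpha*ln(dt) - dt/tc] = 0 places the maximum at t0 + alpha*tc,
# i.e. ~49 d after onset for the secondary-flare parameters (alpha=8.2, tc=6 d).
t = np.linspace(0.0, 120.0, 12001)
print(round(t[np.argmax(flare(t))], 1))   # 49.2
```

The ~49 d rise-to-peak is comfortably shorter than the 77.3 d flare separation, so consecutive secondary flares remain distinguishable in the folded light curve.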
Raguzova & Lipunov (1998) described the Be disk-fed accretion in HXRTs, finding that the outburst peaks occur at phase 0–0.5, depending on the wind characteristics and/or orbital eccentricity (i.e. the wind rose effect). The fairly regular spacing of the last four flares in XTE J1946+274 hints at their classification as Type I outbursts, even if the flux at minimum does not drop to very low values as, e.g., in the case of V 0332+53 (Stella, White & Rosner 1986). The different rise and decay timescales, as well as the larger intensity, make the first flare markedly different from the others, however. This would argue in favour of a type II outburst. Despite the small number of cases, type II outburst peaks have been observed to be delayed in orbital phase with respect to periastron (e.g. 4U 0115+63, Whitlock et al. 1989; A 0535+26, Motch et al. 1991; Bildsten et al. 1997, see however 2S 1417–624 ibidem) and sometimes they last for several orbital cycles (e.g. V 0332+53; Stella, White & Rosner 1986). The phasing of this outburst should therefore be dictated mainly by the time variability of the Be star mass outflow rate, especially in the first phases of the shell ejection episode. As can be noted in Fig. 1, the first, third and fifth flares are stronger than the second and the fourth. This might suggest that the neutron star is orbiting the Be companion in an inclined orbit and crosses the Be disk plane twice, giving rise to two outbursts per orbit. In this case the orbital period would be $`\sim 155`$ d. Two flares per orbit have already been observed in 4U 1907+097 (with an orbital period of 8.4 d) at orbital phases $`0.04`$ and $`0.48`$ (Makishima et al. 1984) and in GRO J2058+42 (110 d) with the two flares separated in phase by $`0.5`$ (Wilson et al. 1998b).
## 4 Conclusions

We discovered an $`\sim 80`$ d modulation in the RossiXTE-ASM data of XTE J1946+274. This periodicity likely represents the orbital period of the system or, perhaps, half this value if the neutron star orbits the Be star out of its shell ejection plane. A correlation between the spin and orbital periods of Be star/X–ray pulsar binary systems was described by Corbet (1984, 1986). Despite the considerable scatter in this relationship, given the 15.8 s spin period of XTE J1946+274, one would obtain for an orbital period of $`\sim 77`$ d a moderate eccentricity ($`e<0.4`$) and for its double a somewhat more eccentric system ($`e<0.6`$). An orbital solution, affording an independent accurate measurement of the orbital period and eccentricity, might be obtained from the analysis of the pulse arrival times during the pointed RossiXTE observations.

###### Acknowledgements.

We acknowledge the use of quick-look results provided by the RossiXTE-ASM team and useful comments from H. Bradt and an anonymous referee.
# Possible re-acceleration regions above the inner gap and pulsar gamma-ray emission

## References

Arons, J., & Scharlemann, E.T. 1979, ApJ, 231, 854
Goldreich, P., & Julian, W.H. 1969, ApJ, 157, 869
Harding, A.K., & Muslimov, A.G. 1998, ApJ, 508, 328
Ruderman, M.A., & Sutherland, P.G. 1975, ApJ, 196, 51 (RS75)
Thompson, D.J. 1996, in: Pulsars: Problems & Progress, (eds.) S. Johnston, M.A. Walker, & M. Bailes, ASP Conf. Ser. Vol. 105, p. 307
Zhang, B., & Harding, A.K. 1999, ApJ, in press
# The origin of scale-scale correlations of the density perturbations during inflation

## Abstract

We show that scale-scale correlations are a generic feature of slow-roll inflation theories. These correlations result from the long-time tails characteristic of the time dependent correlations because the long wavelength density perturbation modes are diffusion-like. A relationship between the scale-scale correlations and time-correlations is established, providing a way to reveal the time correlations of the perturbations during inflation. This mechanism provides for a testable prediction that the scale-scale correlations at two different spatial points will vanish.

PACS: 98.70.Cq, 98.80.Vc

The inflation paradigm for the evolution of the universe not only solves classical cosmological dilemmas such as the horizon and flatness problems, but also provides the seeds for structure formation in the form of quantum or thermal fluctuations. These fluctuations grow by gravitational instability to form the structure we see today. The nature of the fluctuations leads to the belief that the primordial density field is likely Gaussian. However, recent analysis of Ly$`\alpha `$ forest lines in quasar absorption spectra reveals that the Ly$`\alpha `$ clouds at redshifts between 2 and 3 are significantly scale-scale correlated on scales of 40 to 80 $`h^{-1}`$ Mpc. Since the gravitational clustering of the universe remains in the linear regime on scales larger than about 30 $`h^{-1}`$ Mpc at redshifts higher than 2, this result implies that the primordial density fluctuations of the universe are probably scale-scale correlated. More direct evidence for the scale-scale correlation in the primordial density field has recently been found in the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) 4-year all sky maps.
In particular, the observed cosmic temperature fluctuations on the North Galactic Pole are found, with $`>99\%`$ confidence, to be scale-scale correlated on angular scales of 10 to 20 degrees. These scales are much larger than the Hubble size at the decoupling era and cannot be attributed to any known foregrounds or correlated noise maps. Moreover, the correlation is on the order of $`10^{-5}`$ K, i.e., the same order as the temperature fluctuations, and therefore it is unlikely to be a higher order effect. Although confirmation of scale-scale correlations in the primordial density field must await further observations, it is appropriate now to study the possible origin of these correlations. At first glance, these correlations might appear to be incompatible with inflation. However, in this Letter, we show that scale-scale correlations are not only possible, but a generic feature of the inflationary scenario. It is known that there are models which lead to a density field with a Poisson or Gaussian distribution in the one-point distribution functions, but that are highly scale-scale correlated. The physical reason for the co-existence of Gaussian one-point functions and scale-scale correlations can easily be seen in phase space, i.e., position-wavevector space. Any density field $`\rho (𝐫)`$ can be decomposed by an orthogonal and complete basis $`\mathrm{\Psi }_{𝐤,𝐱}`$, where the indices $`𝐤`$ and $`𝐱`$ denote, respectively, the wavevector and position of a volume element $`d^3x\,d^3k\sim 1`$ in phase space. One way to realize this decomposition is by the discrete wavelet transform (DWT). With this type of decomposition, the behavior of $`\rho (𝐫)`$ in phase space is described by the projection of $`\rho (𝐫)`$ on $`\mathrm{\Psi }_{𝐤,𝐱}`$, which we denote by $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$.
The density perturbations of component $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$ are localized at $`𝐤,𝐱`$ in phase space, and thus one can have distributions for which $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$ is Gaussian in its one-point distribution with respect to $`𝐱`$ while scale-scale correlated in terms of $`𝐤`$. For instance, consider a distribution of $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$ which is Gaussian in $`𝐱`$ at a given scale $`|𝐤|=k_1`$. Then $$\langle \stackrel{~}{ϵ}_{𝐤,𝐱}\stackrel{~}{ϵ}_{𝐤,𝐱^{}}\rangle =P_{dwt}(k_1)\delta _{𝐱,𝐱^{}},$$ (1) where $`P_{dwt}(k_1)`$ is the variance or the power spectrum with respect to the DWT decomposition. All higher order correlations of $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$ are zero at this scale. If perturbation $`\stackrel{~}{ϵ}_{𝐤,𝐱}`$ at scale $`|𝐤|=k_2`$ is related to that at scale $`|𝐤|=k_1`$ as $$\stackrel{~}{ϵ}_{|𝐤|=k_2,𝐱}=\alpha \stackrel{~}{ϵ}_{|𝐤|=k_1,𝐱},$$ (2) where $`\alpha `$ is a constant, the distribution of $`\stackrel{~}{ϵ}_{|𝐤|=k_2,𝐱}`$ is also Gaussian in $`𝐱`$ with variance $`P_{dwt}(k_2)=\alpha ^2P_{dwt}(k_1)`$, but is strongly scale-scale correlated. That is, at a given position $`𝐱`$, the scale $`k_2`$ perturbation is always proportional to the scale $`k_1`$ perturbation. Scale-scale correlations depend only on the statistical behavior of the fluctuation distribution along the $`𝐤`$ direction but localized in the $`𝐱`$ direction of the phase space. Obviously such scale-scale correlations can only be effectively detected by a scale-space decomposition. In the inflationary scenario, the statistical behavior of the primeval density perturbations in the $`𝐤`$ direction are determined by the time-dependent correlations of the fluctuations during inflation. This is because the density perturbations on comoving wavenumber $`k`$ originate from fluctuations crossing over the Hubble radius at time $`t_c`$, and $$k\propto e^{Ht_c},$$ (3) where $`H`$ is the Hubble constant during inflation.
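A minimal numerical sketch (with hypothetical wavelet coefficients, not actual DWT data) illustrates how Eqs. (1) and (2) allow Gaussian one-point statistics to coexist with strong scale-scale correlations: for Gaussian coefficients, the fourth-moment ratio used later in Eq. (23) equals 3 when the two scales are perfectly correlated, and 1 when they are independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200000
eps_k1 = rng.normal(size=n)              # scale-k1 coefficients, Gaussian in x
alpha = 0.5
eps_k2_corr = alpha * eps_k1             # Eq. (2): same Gaussian mode, rescaled
eps_k2_ind = alpha * rng.normal(size=n)  # independent control sample

def scale_scale(u, v):
    # fourth-moment estimator in the spirit of Eq. (23)
    return np.mean(u**2 * v**2) / (np.mean(u**2) * np.mean(v**2))

print(scale_scale(eps_k1, eps_k2_corr))  # ~3: scale-scale correlated
print(scale_scale(eps_k1, eps_k2_ind))   # ~1: uncorrelated scales
```

Both samples are Gaussian in their one-point distributions; only the estimator along the scale direction distinguishes them.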
Equation (3) is a well-known feature of inflation: the larger the scale of the perturbations, the earlier the time of the formation. Equation (3) is actually a mapping of time $`t`$ to scale space $`k`$ of the density perturbations. Thus, the time-time correlations during the inflation, $`\langle \varphi (t)\varphi (0)\rangle `$, where $`\varphi (t)`$ denotes fluctuations of the inflaton $`\mathrm{\Phi }`$, will be mapped into scale-scale correlations. The primeval perturbations will not be scale-scale correlated if the time-dependent correlation functions decay faster than the Hubble time $`H^1`$. In general, the time-time correlation functions of fluctuations in equilibrium decay exponentially within the mean-free time $`t_f`$ between collisions, which is not longer than about $`1/H`$. Such fluctuations do not lead to scale-scale correlations. However, it is well known both experimentally and theoretically that there is a generic mechanism for long-range correlations in nonequilibrium systems. For these systems, the temporal correlation functions of perturbations decay only as a power law, $$\langle \varphi (t)\varphi (0)\rangle \propto (t_f/t)^{d/2},$$ (4) where $`d`$ is the dimension of the system. This long-time tail phenomenon is found to be applicable to all systems where the relaxation is dominated by long wavelength perturbations via diffusion-like dissipation. Inflation can be treated as such a system, thus implying the existence of scale-scale correlations. We can confirm this by considering a standard slow-roll inflation governed by a scalar field $`\mathrm{\Phi }`$ satisfying the equation of motion $$\ddot{\mathrm{\Phi }}+3H\dot{\mathrm{\Phi }}-e^{-2Ht}\nabla ^2\mathrm{\Phi }+V^{\prime }(\mathrm{\Phi })=0,$$ (5) where $`V(\mathrm{\Phi })`$ is the potential of $`\mathrm{\Phi }`$.
For our purposes, it is convenient to employ the stochastic inflation formalism , in which the $`\mathrm{\Phi }`$ field is decomposed into large scale component $`\varphi (𝐱,t)`$ and small scale modes $`q(𝐱,t)`$ as $$\mathrm{\Phi }(𝐱,t)=\varphi (𝐱,t)+q(𝐱,t).$$ (6) The component $`\varphi (𝐱,t)`$ is actually a coarse-grained $`\mathrm{\Phi }`$ over size $`1/k_c>1/H`$. It contains the uniform background and perturbations on scales larger than $`1/k_c`$, and therefore can be treated classically. (Here and below we use $`k_c`$ and $`k`$ to describe the wavenumbers of the fluctuations during inflation. It should not be confused with the $`k`$ in Eq. (3), where it is the wavenumber of the density perturbations after inflation). The term $`q(𝐱,t)`$ contains all high frequency quantum or thermal fluctuations. For slow-roll inflation, the evolution of $`\varphi (𝐱,t)`$ modes obeys a Langevin equation $$\frac{\partial \varphi (𝐱,t)}{\partial t}=-\frac{1}{3H}\frac{\delta F[\varphi (𝐱)]}{\delta \varphi }+\eta (𝐱,t),$$ (7) where $`F[\varphi ]=\int d^3𝐱\left[(e^{-Ht}\nabla \varphi )^2/2+V(\varphi )\right]`$ is the Ginzburg-Landau free energy. The noise term is given by the short-wavelength modes $$\eta (𝐱,t)=-\left(\frac{\partial }{\partial t}-\frac{1}{3H}e^{-2Ht}\nabla ^2\right)q(𝐱,t),$$ (8) and its time correlation function is approximately $$\langle \eta (𝐱,t)\eta (𝐱^{},t^{})\rangle \simeq D\delta (𝐱-𝐱^{})\delta (t-t^{}),$$ (9) where $`D=H^3/2\pi `$ or $`H^2T/2\pi `$ ($`T`$ being temperature of heat bath) for quantum or thermal fluctuations, respectively. As expected, Eq. (9) shows that the short wavelength modes rapidly decay and are Gaussian. However, the primordial density perturbations originate from the long wavelength modes $`\varphi `$. The time correlation behavior of $`\varphi `$ can be seen from a mode analysis of Eq. (7). Linearizing Eq.
(7), one has $$\frac{\partial \varphi }{\partial t}=-\left[\frac{\varphi }{\tau _0}-\frac{e^{-2Ht}}{3H}\nabla ^2\varphi \right]+\eta $$ (10) with the zero wavenumber relaxation time given by $$\tau _0^{-1}=V^{\prime \prime }(0)/3H.$$ (11) Taking the Fourier transform of Eq. (10) gives $$\frac{\partial \widehat{\varphi }_𝐤}{\partial t}=-\frac{\widehat{\varphi }_𝐤}{\tau _𝐤}+\eta _𝐤,$$ (12) where $`𝐤`$ is a comoving wave-vector. The relaxation time of the mode $`𝐤`$ is $$\frac{1}{\tau _𝐤}=\frac{1}{\tau _0}+\frac{1}{3H}e^{-2Ht}k^2=\frac{1}{\tau _0}+\frac{k_p^2(t)}{3H},$$ (13) where $`k_p(t)=k\mathrm{exp}(-Ht)`$ is the physical wavenumber of $`k`$. Equation (13) is a dispersion relation of mode $`𝐤`$, i.e., $`-i\omega =\tau _𝐤^{-1}`$. If the potential $`V`$ is flat, $`V^{\prime }\simeq V^{\prime \prime }\simeq 0`$, we have $`\tau _0^{-1}\simeq 0`$, and $`\omega =-ik_p^2(t)/3H`$. This is typical of diffusion-damping soft modes, for which the relaxation time $`\tau _𝐤\rightarrow \infty `$ when $`k_p\rightarrow 0`$. These modes lead to long-range correlations in general. Since perturbations $`\varphi (𝐱,t_0)`$ only consist of modes with physical wavelength larger than $`1/k_c`$ at $`t_0`$, the Fourier transform of $`\varphi (𝐱,t_0)`$ is limited to these modes as $$\varphi (𝐱,t_0)=\int _{k_p(t_0)<k_c}d^3𝐤\widehat{\varphi }_𝐤e^{-i\omega t_0+i𝐤𝐱}.$$ (14) We are concerned with perturbations $`\varphi (𝐱,t_0)`$ localized within the Hubble radius $`1/H`$ ($`<1/k_c`$) and in this range, the phase factor $`\mathrm{exp}(i𝐤𝐱)`$ in Eq. (14) is almost a constant. Moreover $`\varphi (𝐱,t_0)`$ should be described by an ensemble of solutions of Eq. (8) with various realizations of $`\eta `$. Because $`\eta `$ is white noise, the amplitude of fluctuations $`\widehat{\varphi }_𝐤`$ will be $`k`$-independent on average, i.e., $`\langle \widehat{\varphi }_𝐤\widehat{\varphi }_𝐤^{}\rangle \equiv \langle \widehat{\varphi }^2\rangle `$ is constant, where $`\langle \cdots \rangle `$ denotes averaging with respect to the ensemble of $`\eta `$. Hence, we have $`\langle \varphi \varphi ^{}\rangle \equiv \langle \varphi (𝐱,t_0)\varphi ^{}(𝐱,t_0)\rangle \propto k_c^6\langle \widehat{\varphi }^2\rangle `$.
The values $`\langle \widehat{\varphi }^2\rangle `$ for quantum and thermal fluctuations have been calculated in the literature. The Fourier transform of $`\varphi (𝐱,t)`$ is given by $`\varphi (𝐱,t)`$ $`=`$ $`{\displaystyle \int _{k_p(t)<k_c}}d^3𝐤\widehat{\varphi }_𝐤e^{-i\omega t+i𝐤𝐱}`$ (15) $`=`$ $`{\displaystyle \int _{k_p(t)<k_c}}d^3𝐤\widehat{\varphi }_𝐤e^{-i\omega t_0+i𝐤𝐱}\mathrm{exp}\left(-\left[{\displaystyle \frac{1}{\tau _0}}+{\displaystyle \frac{k_p^2(t)}{3H}}\right](t-t_0)\right).`$ (16) Thus, the time-dependent correlation function is $$\langle \varphi (𝐱,t)\varphi ^{}(𝐱,t_0)\rangle =k_c^3\langle \widehat{\varphi }^2\rangle \int _{k_p(t)<k_c}d^3𝐤\mathrm{exp}\left(-\left[\frac{1}{\tau _0}+\frac{k_p^2(t)}{3H}\right](t-t_0)\right).$$ (17) In the case of a flat potential $`\tau _0^{-1}=0`$, we have the long-time tail correlation function as $$\frac{\langle \varphi (𝐱,t)\varphi ^{}(𝐱,t_0)\rangle }{\langle \varphi \varphi ^{}\rangle }\propto \left(\frac{t_f}{t-t_0}\right)^{3/2}\mathrm{with}\;(t-t_0)\gg t_f,$$ (18) and $$t_f=\frac{3H}{k_c^2}>\frac{1}{H}.$$ (19) Comparing Eq. (17) with Eq. (4), we have as expected $`d=3`$, and the “mean-free time” is determined by the cut-off $`1/k_c`$, i.e., the minimum scale of the fluctuations $`\varphi `$. When the potential $`V`$ is not constant, the time correlation function $`\langle \varphi (t)\varphi (t_0)\rangle `$ \[Eq. (17)\] contains an exponential decay factor $`\mathrm{exp}[-(t-t_0)/\tau _0]`$. However, the slow-roll dynamics requires that $`V^{\prime \prime }\ll H^2`$. Thus, the factor $`\mathrm{exp}[-(t-t_0)/\tau _0]`$ can always be ignored as it is equal to about 1 provided $`H(t-t_0)`$ is not very large. Therefore, the long temporal correlations do not depend on the details of inflation. In fact, the slow-roll condition in inflation plays the same role as the critical slowing down in phase transitions. Under these conditions, the relaxation is dominated by soft modes and long range temporal correlations occur.
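The $`d=3`$ long-time tail of Eqs. (17) and (18) can be checked directly: in the flat-potential case the mode integral decays as a power law in $`(t-t_0)`$ rather than exponentially. A small numerical sketch (illustrative units $`H=k_c=1`$, not tied to any specific inflation model):

```python
import numpy as np

H, kc = 1.0, 1.0
t_f = 3 * H / kc**2                    # Eq. (19)

def corr(dt):
    # flat-potential case of Eq. (17): integral of 4*pi*k^2 * exp(-k^2*dt/(3H))
    k = np.linspace(0.0, kc, 20001)
    y = 4.0 * np.pi * k**2 * np.exp(-k**2 * dt / (3.0 * H))
    return np.sum((y[1:] + y[:-1]) / 2.0) * (k[1] - k[0])  # trapezoid rule

ratio = corr(200 * t_f) / corr(50 * t_f)
print(ratio)   # close to (50/200)**1.5 = 0.125, the (t_f/(t-t0))^{3/2} tail
```

Doubling the time separation twice suppresses the correlation by the power-law factor $`4^{3/2}=8`$, not by an exponential.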
Because the background space is uniform, these soft modes do not induce long range spatial correlations and therefore, the perturbations at different positions $`𝐱`$ are uncorrelated. In other words, the perturbation field has a Gaussian one-point distribution with respect to $`𝐱`$. For inflation with thermal dissipation, Eq. (17) still holds. In this case $`1/k_c`$ can be as small as the Hubble size $`1/H`$, and therefore $$t_f\simeq \frac{1}{H},$$ (20) i.e., the mean-free time is given by the Hubble time. It is important to stress that although the long temporal correlations are calculated here using the stochastic formalism, this result is far more general than any specific stochastic inflation ansatz. The Langevin equation (10) for the long wavelength fluctuations of the scalar field can be derived from first principles by considering the quantum evolution of the reduced density matrix. Since it is necessary that quantum coherence be lost in order to treat the initial density perturbations classically with general relativity, a quantum-to-classical decoherence is necessary. The decoherence via a coarse-grained quantum temporal evolution produces classical correlations in phase space like Eq. (17) through the interaction between long wavelength modes and the short wavelength “thermal” noise of the Hawking temperature $`H`$. Thus long temporal correlations are a generic feature of inflationary models with a slow-roll regime. Since the evolution of perturbations from horizon-crossing to the time of decoupling is linear, we have $$\stackrel{~}{ϵ}_{k,𝐱}\propto \varphi (𝐱,t_c),$$ (21) where $`\stackrel{~}{ϵ}_{k,𝐱}`$ is the wavelet coefficient of density or temperature fluctuations on scale $`k`$ at position $`𝐱`$, and $`k`$ is related to $`t_c`$ via Eq. (3).
Thus, the scale-scale correlations of the cosmic temperature fluctuations should be $$\langle \stackrel{~}{ϵ}_{k_1,x_1}\stackrel{~}{ϵ}_{k_2,x_2}\rangle \propto \langle \varphi (t_{c1})\varphi ^{}(t_{c2})\rangle .$$ (22) The scale-scale correlations of the cosmic temperature fluctuations have been detected using $$C(k_1,k_2,x)\equiv \frac{\langle \stackrel{~}{ϵ}_{k_1,x}^2\stackrel{~}{ϵ}_{k_2,x}^2\rangle }{\langle \stackrel{~}{ϵ}_{k_1,x}^2\rangle \langle \stackrel{~}{ϵ}_{k_2,x}^2\rangle },$$ (23) where the correlation is between wavelet coefficients at different scales but at the same physical location. From Eqs. (21) and (22), we have approximately that $$C(k_1,k_2,x)-1\propto \left[\frac{\langle \varphi (t_{c1})\varphi ^{}(t_{c2})\rangle }{\langle \varphi ^2\rangle }\right]^2.$$ (24) Since $`k_c\sim H`$, the scale-scale correlation given by Eq. (17) is very significant. From Eq. (3), the time difference of forming perturbations on scales $`k`$ and $`2k`$ is $`H\mathrm{\Delta }t=\mathrm{log}2`$. Hence, $`\langle \varphi (t_{c1}+\mathrm{log}2/H)\varphi ^{}(t_{c1})\rangle /\langle \varphi ^2\rangle `$ is comparable to 1 according to Eq. (17). Therefore $`C(k,2k,x)-1`$ can also be as large as about 1, implying that long-time tails can produce scale-scale correlations on the same order as the temperature fluctuations. The correlation (17) is localized, i.e., the fluctuations $`\varphi (𝐱,t)`$ and $`\varphi (𝐱,t_0)`$ are at the same position $`𝐱`$ with size about $`1/H^3`$. Therefore, $$\langle \varphi (𝐱,t)\varphi ^{}(𝐱^{},t_0)\rangle \simeq 0,$$ (25) if $`𝐱`$ and $`𝐱^{}`$ are different. As a consequence, one can predict that the scale-scale correlations at different positions will be zero, or $$C(k_1,k_2,x_1-x_2)\equiv \frac{\langle \stackrel{~}{ϵ}_{k_1,x_1}^2\stackrel{~}{ϵ}_{k_2,x_2}^2\rangle }{\langle \stackrel{~}{ϵ}_{k_1,x_1}^2\rangle \langle \stackrel{~}{ϵ}_{k_2,x_2}^2\rangle }\simeq 1,$$ (26) where $`x_1-x_2\neq 0`$. We have tested this prediction for the Ly-$`\alpha `$ forests. The results show a good agreement with Eq. (25). It has generally been held that mode-mode coupling effects are negligible when perturbations are small and the linear approximation holds.
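The position dependence of the prediction in Eq. (26) can also be illustrated with a toy model (hypothetical wavelet coefficients): shifting one set of coefficients by a single position destroys the scale-scale correlation, while the same-position estimator stays large.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
e1 = rng.normal(size=n)      # scale-k1 coefficients over positions x
e2 = 0.7 * e1                # scale-k2 coefficients, correlated at the same x

def C(u, v):                 # estimator of Eqs. (23) and (26)
    return np.mean(u**2 * v**2) / (np.mean(u**2) * np.mean(v**2))

same_x = C(e1, e2)               # ~3: correlated at x1 = x2
diff_x = C(e1, np.roll(e2, 1))   # ~1: x1 != x2, as in Eq. (26)
print(same_x, diff_x)
```

The correlation lives along the scale direction at fixed position; across positions the toy field is white, in line with Eq. (25).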
The justification for this view comes from the decay of perturbations described by the linearized Boltzmann equation, which, when diffusion effects are not considered, is always exponential. However, this result is no longer true for processes in which diffusion-like long wavelength modes are important. In this case, even though the short wavelength modes decay exponentially \[Eq. (9)\], the long wavelength modes develop the long-time tail \[Eq. (17)\], which leads to scale-scale correlations. Furthermore, scale-scale correlations also arise in the conventional approach to inflation due to the loss of quantum coherence via the mechanism of coarse-grained histories. As such, scale-scale correlations are a generic feature of the inflation scenario with a slow-roll regime. The detected $`C(11^{\prime },22^{\prime })\simeq 2`$ implies that the first evidence of inflation-induced scale-scale correlations may have been found. Finally, the detection of scale-scale correlations provides a way to reveal the time correlations of the fluctuations during inflation.
no-problem/9911/quant-ph9911003.html
# A New Class of Adiabatic Cyclic States and Geometric Phases for Non-Hermitian Hamiltonians ## Abstract For a $`T`$-periodic non-Hermitian Hamiltonian $`H(t)`$, we construct a class of adiabatic cyclic states of period $`T`$ which are not eigenstates of the initial Hamiltonian $`H(0)`$. We show that the corresponding adiabatic geometric phase angles are real and discuss their relationship with the conventional complex adiabatic geometric phase angles. We present a detailed calculation of the new adiabatic cyclic states and their geometric phases for a non-Hermitian analog of the spin 1/2 particle in a precessing magnetic field. Since the publication of Berry’s paper on the adiabatic geometrical phase, the subject has undergone rapid development. Berry’s adiabatic geometric phase for periodic Hermitian Hamiltonians with a discrete nondegenerate spectrum has been generalized to arbitrary changes of a quantum state. In particular, the conditions on the adiabaticity and cyclicity of the evolution, Hermiticity of the Hamiltonian, and degeneracy and discreteness of its spectrum have been lifted. Moreover, the classical and relativistic analogues of the geometric phase have been considered. The purpose of this note is to offer an alternative generalization of Berry’s treatment of the adiabatic geometric phase for a non-Hermitian parametric Hamiltonian $`H[R]`$ with a nondegenerate discrete spectrum. The parameters $`R=(R^1,\mathrm{\ldots },R^n)`$ are assumed to be real coordinates of a smooth parameter manifold, with the eigenvalues $`E_n[R]`$ and eigenvectors $`|\psi _n,R\rangle `$ of $`H[R]`$ depending smoothly on $`R`$. The approach pursued in the present paper differs from the earlier ones in the choice of the adiabatically evolving state vectors. The corresponding geometric phase angle, which is shown to be real, has the same expression as the geometric phase angle for a Hermitian Hamiltonian.
It differs from the conventional complex geometric phase angle by terms which are small for an adiabatically evolving system. In particular, this implies that in a generic adiabatic evolution the imaginary part of the complex adiabatic geometric phase angle is small. The generalization of the results of Berry to non-Hermitian Hamiltonians was originally considered by Garrison and Wright and further developed by Dattoli et al., Miniatura et al., and Mondragón and Hernandez. The main ingredient used by all of these authors in their derivation of a non-Hermitian analog of Berry’s phase is the biorthonormal eigenbasis of the Hamiltonian. More specifically, one writes the evolving state vector $`|\psi (t)\rangle `$ in an eigenbasis $`|\psi _n,t\rangle :=|\psi _n,R(t)\rangle `$ of the Hamiltonian $`H(t):=H[R(t)]`$, $$|\psi (t)\rangle =\underset{n}{\sum }C_n(t)|\psi _n,t\rangle ,$$ (1) and enforces the Schrödinger equation $$i|\dot{\psi }(t)\rangle =H(t)|\psi (t)\rangle ,$$ (2) where $`R(t)`$ denotes the curve traced in the parameter space, $`C_n(t)`$ are complex coefficients, a dot denotes a time-derivative, and $`\hbar `$ is set to 1. The resulting equation together with the eigenvalue equation for the Hamiltonian, $$H[R]|\psi _n,R\rangle =E_n[R]|\psi _n,R\rangle $$ (3) where $`R=R(t)`$, yield $$\underset{n}{\sum }\left[iC_n(t)|\dot{\psi }_n,t\rangle +[i\dot{C}_n(t)-E_n(t)C_n(t)]|\psi _n,t\rangle \right]=0.$$ (4) Next one takes the inner product of both sides of Eq. (4) with the eigenvectors $`|\varphi _n,t\rangle =|\varphi _n,R(t)\rangle `$ of $`H^{\dagger }(t)=H^{\dagger }[R(t)]`$ which are defined by $`H^{\dagger }[R]|\varphi _n,R\rangle =E_n^{\ast }[R]|\varphi _n,R\rangle ,`$ (5) $`\langle \varphi _m,R|\psi _n,R\rangle =\delta _{mn}.`$ (6) One then finds $$iC_m(t)\langle \varphi _m,t|\dot{\psi }_m,t\rangle +i\dot{C}_m(t)-E_m(t)C_m(t)+\underset{n\neq m}{\sum }iC_n(t)\langle \varphi _m,t|\dot{\psi }_n,t\rangle =0.$$ (7) For an adiabatically changing Hamiltonian, $$\langle \varphi _m,t|\dot{\psi }_n,t\rangle =\frac{\langle \varphi _m,t|\dot{H}(t)|\psi _n,t\rangle }{E_n(t)-E_m(t)},\mathrm{for}\;m\neq n$$ (8) are small and the sum in (7) may be neglected. In this case, Eqs.
(7) can be easily integrated to yield $`C_m(t)`$ $`\simeq `$ $`C_m(0)e^{i[\delta _m(t)+\gamma _m(t)]},`$ (9) $`\delta _m(t)`$ $`:=`$ $`-{\displaystyle \int _0^t}E_m(t^{})𝑑t^{},`$ (10) $`\gamma _m(t)`$ $`:=`$ $`{\displaystyle \int _0^t}i\langle \varphi _m,t^{}|\dot{\psi }_m,t^{}\rangle 𝑑t^{}.`$ (11) The validity of the adiabatic approximation, i.e., Eq. (9), is measured by the value of the adiabaticity parameter which is defined by $$\eta :=\frac{1}{\omega _0}\mathrm{Sup}_{n,m\neq n,t}(|\langle \varphi _m,t|\dot{\psi }_n,t\rangle |).$$ (12) Here ‘Sup’ stands for ‘Supremum’ and $`\omega _0`$ is the frequency (energy) scale of the system. The adiabatic approximation is valid if and only if $`\eta \ll 1`$. It is exact if and only if $`\eta =0`$. For a periodic Hamiltonian with period $`T`$, where $`R(T)=R(0)`$, the eigenvectors of the initial Hamiltonian undergo approximate cyclic evolutions, i.e., for $`|\psi (0)\rangle =|\psi _n,0\rangle `$, $$|\psi (T)\rangle \simeq e^{i[\delta _n(T)+\gamma _n(T)]}|\psi _n,0\rangle .$$ (13) In this case $`\delta _m(T)`$ and $`\gamma _m(T)`$ are called the adiabatic dynamical and geometrical phase angles, respectively. The geometric phase angle $`\gamma _m(T)`$, which can also be expressed in the form $$\gamma _m(T)=\oint i\langle \varphi _m,R|d|\psi _m,R\rangle ,$$ (14) is the complex analogue of Berry’s adiabatic geometrical phase angle. In Eq. (14), $`d`$ stands for the exterior derivative with respect to the parameters $`R^i`$. The approximate equations (9) and (13) tend to exact equations only in the extreme adiabatic limit, $`\eta =0`$, where the eigenvectors of the Hamiltonian become stationary and the adiabatic approximation is exact. In this case, however, the geometric phase angle (14) vanishes identically. For a system with nonstationary energy eigenstates, $`\eta \neq 0`$, and the adiabatic approximation is never exact. In this case the eigenstates of the initial Hamiltonian are only approximately cyclic.
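The biorthonormal eigenbasis of Eqs. (3), (5), and (6) is easy to construct numerically. The sketch below uses a generic, randomly chosen non-Hermitian matrix and the fact that the left eigenvectors are eigenvectors of $`H^{\dagger }`$ with eigenvalues $`E_n^{\ast }`$ (an illustration, not the paper's specific model):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # generic non-Hermitian H

E, psi = np.linalg.eig(H)            # right eigenvectors |psi_n>, Eq. (3)
Ed, phi = np.linalg.eig(H.conj().T)  # eigenvectors of H^dagger, Eq. (5)

# pair each E_n with the corresponding eigenvalue E_n^* of H^dagger
order = [int(np.argmin(np.abs(Ed - e.conj()))) for e in E]
phi = phi[:, order]

# rescale each |phi_n> so that <phi_m|psi_n> = delta_mn, Eq. (6)
G = phi.conj().T @ psi
phi = phi / G.diagonal().conj()

ok = np.allclose(phi.conj().T @ psi, np.eye(4), atol=1e-8)
print(ok)   # True: biorthonormality holds
```

The off-diagonal products vanish automatically for distinct eigenvalues; only the normalization of each left eigenvector has to be fixed by hand.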
Strictly speaking, the state represented by $`|\psi (T)\rangle `$ lies in a neighborhood of the state represented by $`|\psi (0)\rangle `$ whose radius is of order $`\eta `$. (The states belong to the projective Hilbert space which is endowed with the Fubini-Study metric or its infinite-dimensional analog. The latter is used to define the notion of a neighborhood of a state.) The main motivation for the present analysis is the fact that although (due to the adiabaticity of the evolution) the components of $`|\dot{\psi }_m,t\rangle `$ along the ‘normal’ directions to $`|\psi _n,t\rangle `$ are negligible, $`|\dot{\psi }_m,t\rangle `$ does have non-negligible components along other eigenvectors. This is mainly because for a non-Hermitian Hamiltonian the $`|\psi _m,t\rangle `$ do not form an orthogonal basis. This observation suggests an alternative expansion for the evolving state $`|\psi (t)\rangle `$, namely $$|\psi (t)\rangle =\underset{n\neq m}{\sum }C_n(t)|\psi _n,t\rangle +\stackrel{~}{C}(t)|\varphi _m,t\rangle =\stackrel{~}{C}(t)\left[\underset{n\neq m}{\sum }\stackrel{~}{C}_n(t)|\psi _n,t\rangle +|\varphi _m,t\rangle \right].$$ (15) Here $`m`$ is a fixed label, $`C_n(t)`$ and $`\stackrel{~}{C}(t)`$ are complex coefficients, $`\stackrel{~}{C}_n:=C_n/\stackrel{~}{C}`$, and $`\stackrel{~}{C}(t)`$ is assumed not to vanish. Although the expansions (1) and (15) are mathematically equivalent, the latter allows for the construction of a new class of adiabatic cyclic states of period $`T`$ which are not eigenstates of the initial Hamiltonian. It should be emphasized that these states also have approximate cyclic evolutions. More specifically, after each complete period $`T`$ of the Hamiltonian, the corresponding final state lies in a neighborhood of the initial state with the radius of the neighborhood being of order $`\eta `$.
Substituting (15) in the Schrödinger equation (2), taking the inner product of both sides of the resulting equation first with $`|\varphi _m,t\rangle `$ and then with $`|\psi _k,t\rangle `$ for $`k\neq m`$, and enforcing the adiabaticity condition $`\eta \ll 1`$, which means $$\langle \varphi _m,t|\dot{\psi }_n,t\rangle \simeq 0\simeq \langle \psi _k,t|\dot{\varphi }_m,t\rangle ,\mathrm{for}\;n\neq m\;\mathrm{and}\;k\neq m,$$ (16) one obtains $`i\langle \varphi _m,t|\varphi _m,t\rangle \dot{\stackrel{~}{C}}(t)+\left[-\langle \varphi _m,t|\varphi _m,t\rangle E_m(t)+i\langle \varphi _m,t|\dot{\varphi }_m,t\rangle \right]\stackrel{~}{C}(t)\simeq 0,`$ (17) $`{\displaystyle \underset{n\neq m}{\sum }}\left\{\langle \psi _k,t|\psi _n,t\rangle \left[\dot{\stackrel{~}{C}}_n(t)+\left(iE_n(t)+{\displaystyle \frac{\dot{\stackrel{~}{C}}(t)}{\stackrel{~}{C}(t)}}\right)\stackrel{~}{C}_n(t)\right]+\langle \psi _k,t|\dot{\psi }_n,t\rangle \stackrel{~}{C}_n(t)\right\}`$ $`\simeq -i\langle \psi _k,t|H(t)|\varphi _m,t\rangle ,\mathrm{for}\;k\neq m.`$ (18) Eq. (17) can be immediately integrated to yield $`\stackrel{~}{C}(t)`$ $`\simeq `$ $`\stackrel{~}{C}(0)e^{i[\stackrel{~}{\delta }_m(t)+\stackrel{~}{\gamma }_m(t)]},`$ (19) $`\stackrel{~}{\delta }_m(t)`$ $`:=`$ $`-{\displaystyle \int _0^t}E_m(t^{})𝑑t^{},`$ (20) $`\stackrel{~}{\gamma }_m(t)`$ $`:=`$ $`{\displaystyle \int _0^t}{\displaystyle \frac{i\langle \varphi _m,t^{}|\dot{\varphi }_m,t^{}\rangle }{\langle \varphi _m,t^{}|\varphi _m,t^{}\rangle }}𝑑t^{}.`$ (21) Using Eq. (17), one can write Eq. (18) in the form: $`{\displaystyle \underset{n\neq m}{\sum }}\left\{\langle \psi _k,t|\psi _n,t\rangle \left[\dot{\stackrel{~}{C}}_n(t)+i\left(E_n(t)-E_m(t)+{\displaystyle \frac{i\langle \varphi _m,t|\dot{\varphi }_m,t\rangle }{\langle \varphi _m,t|\varphi _m,t\rangle }}\right)\stackrel{~}{C}_n(t)\right]+\langle \psi _k,t|\dot{\psi }_n,t\rangle \stackrel{~}{C}_n(t)\right\}`$ $`\simeq -i\langle \psi _k,t|H(t)|\varphi _m,t\rangle \mathrm{for}\;k\neq m.`$ (22) This is a system of first order linear ordinary differential equations with $`T`$-periodic coefficients.
If the initial conditions are such that its solution is $`T`$-periodic, i.e., $`\stackrel{~}{C}_n(T)\simeq \stackrel{~}{C}_n(0)`$, then the evolving state undergoes an approximate cyclic evolution, $$|\psi (T)\rangle \simeq e^{i[\stackrel{~}{\delta }_m(T)+\stackrel{~}{\gamma }_m(T)]}|\psi (0)\rangle .$$ (23) As seen from Eq. (23), the dynamical part of the total (complex) phase angle, namely $`\stackrel{~}{\delta }_m(T)`$, has the same form as (10), but the geometric part, which can be written in the form $$\stackrel{~}{\gamma }_m(T)=\oint \frac{i\langle \varphi _m,R|d\varphi _m,R\rangle }{\langle \varphi _m,R|\varphi _m,R\rangle },$$ (24) differs from (14). An interesting property of this geometric phase angle is that it has the same form as the geometric phase angle for a Hermitian Hamiltonian. It is also very easy to show that $`\stackrel{~}{\gamma }_m(T)`$ is real. We wish to emphasize that the (cyclic) geometric phase obtained here is a physical quantity, if Eq. (22) has a periodic solution. Although this is a linear first order system with $`T`$-periodic coefficients, its solutions are not generally $`T`$-periodic. In the following, we shall first elaborate on the relationship between the conventional complex geometric phase angle (14) and our real geometric phase angle (21). We shall then explore the condition of the existence of periodic solutions of (22) for the simplest nontrivial case, i.e., the two-level system. We first introduce $`𝒜_{mn}(t):=i\langle \varphi _m,t|\dot{\psi }_n,t\rangle ,\stackrel{~}{𝒜}_{mn}(t):={\displaystyle \frac{i\langle \varphi _m,t|\dot{\varphi }_n,t\rangle }{\langle \varphi _n,t|\varphi _n,t\rangle }},`$ (25) $`𝒜_m(t):=𝒜_{mm}(t),\stackrel{~}{𝒜}_m(t):=\stackrel{~}{𝒜}_{mm}(t),`$ (26) so that $$\gamma _m(t)=\int _0^t𝒜_m(t^{})𝑑t^{},\mathrm{and}\stackrel{~}{\gamma }_m(t)=\int _0^t\stackrel{~}{𝒜}_m(t^{})𝑑t^{}.$$ Now using the completeness of the biorthonormal basis vectors, i.e., $`\underset{n}{\sum }|\psi _n,t\rangle \langle \varphi _n,t|=1`$, we can write $`|\varphi _m,t\rangle =\underset{n}{\sum }\langle \varphi _n,t|\varphi _m,t\rangle |\psi _n,t\rangle `$.
Substituting this relation in the definition of $`\stackrel{~}{𝒜}_m(t)`$ and performing the necessary algebra, we obtain $$\stackrel{~}{𝒜}_m(t)=𝒜_m(t)+i\frac{d}{dt}\mathrm{ln}\langle \varphi _m,t|\varphi _m,t\rangle +\underset{n\neq m}{\sum }\left(\frac{\langle \varphi _n,t|\varphi _m,t\rangle }{\langle \varphi _m,t|\varphi _m,t\rangle }𝒜_{mn}(t)\right).$$ (27) Therefore, the complex and real geometric phase angles are related by $$\stackrel{~}{\gamma }_m(T)=\gamma _m(T)+\underset{n\neq m}{\sum }\int _0^T\frac{\langle \varphi _n,t^{}|\varphi _m,t^{}\rangle 𝒜_{mn}(t^{})dt^{}}{\langle \varphi _m,t^{}|\varphi _m,t^{}\rangle }.$$ (28) Next we observe that according to Eqs. (12), (25), and (27) the difference between $`\stackrel{~}{𝒜}_m`$ and $`𝒜_m`$ consists of terms which are bounded by $`\eta `$. Therefore, in the extreme adiabatic limit $`\eta \rightarrow 0`$, the two phase angles coincide. This in turn shows that in this limit the imaginary part of the complex geometric phase angle $`\gamma _m(T)`$ tends to zero. In practice, however, $`\eta `$ has a small but nonzero value (except in the trivial case where the energy eigenvectors are stationary and $`\gamma _m(T)=0=\stackrel{~}{\gamma }_m(T)`$). In this case, $`\gamma _m(T)\neq \stackrel{~}{\gamma }_m(T)`$. This is consistent with the fact that $`\gamma _m(T)`$ is not necessarily real. In the remainder of this paper we shall apply our general results to study the two-level system. This system has been the subject of detailed study for its physical applications, in particular in connection with the spontaneous decay of the excited states of atoms. Consider a two-dimensional Hilbert space where the Hamiltonian is a possibly non-Hermitian $`2\times 2`$ complex matrix with distinct eigenvalues. Setting $`m=2`$ in Eq. (15) and using Eqs.
(22), one has $`|\psi (t)\rangle =\stackrel{~}{C}(t)\left[\stackrel{~}{C}_1(t)|\psi _1,t\rangle +|\varphi _2,t\rangle \right],`$ (29) $`\dot{\stackrel{~}{C}}_1+Q(t)\stackrel{~}{C}_1=\mathcal{R}(t),`$ (30) where $`Q(t)`$ $`:=`$ $`i[E_1(t)-E_2(t)]+{\displaystyle \frac{\langle \psi _1,t|\dot{\psi }_1,t\rangle }{\langle \psi _1,t|\psi _1,t\rangle }}-{\displaystyle \frac{\langle \varphi _2,t|\dot{\varphi }_2,t\rangle }{\langle \varphi _2,t|\varphi _2,t\rangle }},`$ $`\mathcal{R}(t)`$ $`:=`$ $`-{\displaystyle \frac{i\langle \psi _1,t|H(t)|\varphi _2,t\rangle }{\langle \psi _1,t|\psi _1,t\rangle }}.`$ In Eqs. (30) and the remainder of this paper, the adiabatic approximation is assumed to be valid and $`\simeq `$’s are replaced by $`=`$’s. Eq. (30) can be easily integrated to yield $$\stackrel{~}{C}_1(t)=W(t)\left(\stackrel{~}{C}_1(0)+\int _0^t\frac{\mathcal{R}(t^{})}{W(t^{})}𝑑t^{}\right),$$ (31) where $$W(t):=e^{-\int _0^tQ(s)𝑑s}.$$ (32) Having obtained the general solution, one can easily check for the periodic solutions. In view of the fact that $`Q`$ and $`\mathcal{R}`$ are periodic functions of time with the same period $`T`$ as the Hamiltonian, one can show that $$\stackrel{~}{C}_1(t+T)-\stackrel{~}{C}_1(t)=W(t)\left(\stackrel{~}{C}_1(0)[W(T)-1]+W(T)\int _0^T\frac{\mathcal{R}(t^{})}{W(t^{})}𝑑t^{}\right).$$ (33) Therefore, the initial condition leading to a periodic solution is given by $$\stackrel{~}{C}_1(0)=\frac{W(T)\int _0^T\frac{\mathcal{R}(t^{})}{W(t^{})}𝑑t^{}}{1-W(T)}.$$ (34) If both the numerator and denominator on the right hand side of (34) vanish, then the right hand side of (33) vanishes and the solution (31) is always periodic. If the denominator vanishes, i.e., $$W(T):=e^{i\int _0^T[E_2(s)-E_1(s)]𝑑s}e^{-i{\scriptscriptstyle \oint \left({\scriptscriptstyle \frac{i\langle \varphi _2,R|d\varphi _2,R\rangle }{\langle \varphi _2,R|\varphi _2,R\rangle }}-{\scriptscriptstyle \frac{i\langle \psi _1,R|d\psi _1,R\rangle }{\langle \psi _1,R|\psi _1,R\rangle }}\right)}}=1,$$ (35) but the numerator does not, there are no periodic solutions. If the denominator does not vanish, $`W(T)\neq 1`$, then there is a particular initial condition given by Eq. (34) that leads to a periodic solution.
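The periodicity condition (34) is easy to verify numerically for Eq. (30). The sketch below uses hypothetical $`T`$-periodic coefficients (not derived from any specific Hamiltonian) and checks that the solution (31) with the initial condition (34) returns to its initial value after one period:

```python
import numpy as np

# hypothetical T-periodic coefficients for Eq. (30); illustrative only
T = 2 * np.pi
Q = lambda t: 1.0 + 0.5j + 0.3 * np.cos(t)
R = lambda t: 0.2 * np.sin(t) + 0.1j

M = 100000                       # steps per period
t = np.linspace(0.0, 2 * T, 2 * M + 1)
dt = t[1] - t[0]

def cumtrap(y):
    # cumulative trapezoidal integral from t = 0
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2.0) * dt))

W = np.exp(-cumtrap(Q(t)))       # Eq. (32)
I = cumtrap(R(t) / W)            # the integral appearing in Eq. (31)

WT = W[M]                        # W(T); t[M] is exactly T
C0 = WT * I[M] / (1.0 - WT)      # periodic initial condition, Eq. (34)
C = W * (C0 + I)                 # general solution, Eq. (31)

print(abs(C[M] - C[0]))          # ~0: the solution is T-periodic
```

Any other initial value leaves a residual proportional to $`W(T)-1`$ after each period, in line with Eq. (33).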
It is for this solution that the formula (21) for the (cyclic) adiabatic geometric phase applies. Clearly a similar treatment may be carried out for the choice $`m=1`$ in Eq. (15). The corresponding results are given by the same formulas as for the case $`m=2`$ except that one must interchange the labels 1 and 2. For the case of a Hermitian Hamiltonian where $`|\varphi _n,R\rangle =|\psi _n,R\rangle `$, the right hand side of Eq. (22) vanishes and the trivial solution $`\stackrel{~}{C}_n=0`$ is periodic. This solution corresponds to the well-known result that for an adiabatically changing Hamiltonian the eigenvectors $`|\psi _n,0\rangle `$ of the initial Hamiltonian undergo cyclic evolutions. In this case, Eq. (21) gives Berry’s phase. Let us next consider the two-level system with the parametric Hamiltonian $$H=H[E,\theta ,\phi ]:=E\left(\begin{array}{cc}\mathrm{cos}\theta & e^{-i\phi }\mathrm{sin}\theta \\ e^{i\phi }\mathrm{sin}\theta & -\mathrm{cos}\theta \end{array}\right),$$ (36) where $`\theta `$ is real and $`E,\phi `$ are complex. Note that due to the form of the Hamiltonian (36), $`\theta \in [0,\pi ]`$ and the real part $`\phi _r`$ of $`\phi `$ has the range $`[0,2\pi )`$, whereas $`E`$ and the imaginary part $`\phi _i`$ of $`\phi `$ can take arbitrary complex and real values, respectively. It is not difficult to show that the eigenvalues of $`H`$ are given by $`\pm E`$. Hence for $`E\neq 0`$ one has two distinct eigenvalues: $`E_1=-E`$ and $`E_2=E`$.
The corresponding eigenvectors of $`H`$ and $`H^{\dagger }`$ are given by

$$|\psi _1\rangle =\left(\begin{array}{c}e^{-i\phi }\mathrm{sin}(\frac{\theta }{2})\\ -\mathrm{cos}(\frac{\theta }{2})\end{array}\right),\qquad |\psi _2\rangle =\left(\begin{array}{c}\mathrm{cos}(\frac{\theta }{2})\\ e^{i\phi }\mathrm{sin}(\frac{\theta }{2})\end{array}\right),\qquad \mathrm{and} \qquad (41)$$

$$|\varphi _1\rangle =\left(\begin{array}{c}e^{-i\phi ^{*}}\mathrm{sin}(\frac{\theta }{2})\\ -\mathrm{cos}(\frac{\theta }{2})\end{array}\right),\qquad |\varphi _2\rangle =\left(\begin{array}{c}\mathrm{cos}(\frac{\theta }{2})\\ e^{i\phi ^{*}}\mathrm{sin}(\frac{\theta }{2})\end{array}\right), \qquad (46)$$

respectively. The geometric phase angles $`\gamma _m(T)`$ and $`\tilde{\gamma }_m(T)`$ can be easily computed,

$$\gamma _1(T)=-\gamma _2(T)=\frac{1}{2}\oint (1-\mathrm{cos}\theta )\,d\phi , \qquad (47)$$

$$\tilde{\gamma }_1(T)=\oint \frac{d\phi _r}{1+e^{-2\phi _i}\mathrm{cot}^2(\frac{\theta }{2})},\qquad \tilde{\gamma }_2(T)=-\oint \frac{d\phi _r}{1+e^{2\phi _i}\mathrm{cot}^2(\frac{\theta }{2})}. \qquad (48)$$

Let us next consider the case where $`E,\theta `$ and $`\phi _i`$ are constant and $`\phi _r=\omega t`$. In this case, the Hamiltonian (36) is a non-Hermitian analogue of the Hamiltonian of a magnetic dipole interacting with a precessing magnetic field . For this system, one can easily evaluate the integrals in (47) and (48), and obtain

$$\gamma _1(T)=-\gamma _2(T)=\pi (1-\mathrm{cos}\theta ),$$

$$\tilde{\gamma }_1(T)=\frac{2\pi }{1+e^{-2\phi _i}\mathrm{cot}^2(\frac{\theta }{2})},\qquad \tilde{\gamma }_2(T)=-\frac{2\pi }{1+e^{2\phi _i}\mathrm{cot}^2(\frac{\theta }{2})},$$

where $`T:=2\pi /\omega `$. Note that in the Hermitian limit $`\phi _i=0`$ these reduce to $`\pm \pi (1-\mathrm{cos}\theta )`$. Furthermore, one can obtain the explicit form of the initial condition for which the state vector (29) performs an adiabatic cyclic evolution. Using Eqs.
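The reconstructed eigenvectors can be checked numerically. The sketch below (parameter values are arbitrary) verifies the eigenvalue equations for $`H`$ and $`H^{\dagger }`$ and the biorthogonality of the two sets:

```python
import numpy as np

E, theta, phi = 1.3, 0.7, 0.4 + 0.25j   # arbitrary; Im(phi) != 0 makes H non-Hermitian

H = E * np.array([[np.cos(theta), np.exp(-1j * phi) * np.sin(theta)],
                  [np.exp(1j * phi) * np.sin(theta), -np.cos(theta)]])

# Right eigenvectors of H (Eq. (41)) ...
psi1 = np.array([np.exp(-1j * phi) * np.sin(theta / 2), -np.cos(theta / 2)])
psi2 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
# ... and of H^dagger (Eq. (46)), obtained from them by phi -> phi*.
phi1 = np.array([np.exp(-1j * np.conj(phi)) * np.sin(theta / 2), -np.cos(theta / 2)])
phi2 = np.array([np.cos(theta / 2), np.exp(1j * np.conj(phi)) * np.sin(theta / 2)])

assert np.allclose(H @ psi1, -E * psi1) and np.allclose(H @ psi2, E * psi2)
Hd = H.conj().T
assert np.allclose(Hd @ phi1, -E * phi1) and np.allclose(Hd @ phi2, E * phi2)
# The two sets are biorthogonal: <phi_1|psi_2> = <phi_2|psi_1> = 0.
assert abs(np.vdot(phi1, psi2)) < 1e-12 and abs(np.vdot(phi2, psi1)) < 1e-12
print("eigenvalue equations and biorthogonality verified")
```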
(34), (41), and (46), one finds

$$\tilde{C}_1(0)=\frac{\mathrm{sinh}\phi _i\,\mathrm{sin}\theta }{1+\left(\frac{\pi }{ET}\right)\left(\frac{e^{2\phi _i}-\mathrm{cot}^2(\theta /2)}{e^{2\phi _i}+\mathrm{cot}^2(\theta /2)}\right)}.$$

One can repeat the above analysis for the state vector

$$|\psi (t)\rangle =\tilde{C}(t)\left[\tilde{C}_2(t)|\psi _2,t\rangle +|\varphi _1,t\rangle \right]$$

which corresponds to the choice $`m=1`$ in Eqs. (14) – (23). In this case the initial condition leading to an adiabatic cyclic evolution is given by

$$\tilde{C}_2(0)=\frac{\mathrm{sinh}\phi _i\,\mathrm{sin}\theta }{1+\left(\frac{\pi }{ET}\right)\left(\frac{e^{2\phi _i}-\mathrm{cot}^2(\theta /2)}{e^{2\phi _i}+\mathrm{cot}^2(\theta /2)}\right)}.$$

It is not difficult to see that for $`\phi _i=0`$ the above results reduce to those of Berry . It is also remarkable that unlike $`\gamma _m(T)`$ the new geometric phase angles $`\tilde{\gamma }_m(T)`$ are sensitive to the imaginary part $`\phi _i`$ of $`\phi `$. In conclusion, we wish to emphasize that the issue of the existence of (exact) cyclic states of arbitrary period for a general Hermitian Hamiltonian is addressed in Ref. . The results of Ref.  also generalize to the non-Hermitian Hamiltonians. This is simply because the existence of cyclic states of period $`\tau `$ is identical with the existence of eigenstates of the evolution operator $`U(\tau )`$. In the present article, we have considered the adiabatic cyclic states which have approximate cyclic evolutions. What we have established is the existence and construction of a class of adiabatic cyclic states which have the same period as the Hamiltonian but are not eigenstates of the initial Hamiltonian. These states have the same cyclicity properties as the well-known eigenstates of the initial Hamiltonian. The only difference is that the new adiabatic cyclic states acquire real geometric phase angles.
For a two-level non-Hermitian Hamiltonian (36) we have constructed these states explicitly and computed the corresponding geometric phases. For this system if we choose $`E,\theta `$ and the imaginary part $`\phi _i`$ of $`\phi `$ to be constant and require the real part $`\phi _r`$ of $`\phi `$ to be proportional to time, i.e., $`\phi _r(t)=\omega t`$, then both the conventional geometric phase angle $`\gamma _m(T)`$ and the geometric phase angle $`\stackrel{~}{\gamma }_m(T)`$ introduced in this paper are real. However, unlike $`\gamma _m(T)`$, $`\stackrel{~}{\gamma }_m(T)`$ depend on $`\phi _i`$. Finally we wish to note that one may follow the approach of Refs. to define a noncyclic analog of the adiabatic geometric phase obtained in the present paper. In this case, one is not limited to the periodic solutions of Eq. (22).
no-problem/9911/astro-ph9911074.html
# Time-series Spectroscopy of EC14026 Stars: Preliminary Results

### Discussion and Further Work

The main results from our preliminary spectroscopic studies are:

- We have shown the feasibility of using modest size telescopes (such as the Danish 1.54m) to do time-series spectroscopy of EC14026 stars.
- Velocity variations in EC14026 stars have been detected. This confirms that the previous photometric detection of variability is due to stellar pulsation (not that there were many doubts).
- We have obtained 11 nights of observations on the Danish 1.54m telescope, as well as 10 nights on the 74 inch telescope at Mt Stromlo. With these observations we will have the opportunity to use the complete array of asteroseismological tools to identify oscillation modes, and hence probe the atmospheres of these stars. Analysis of these data will follow.

## References

Koen, C., O'Donoghue, D., Kilkenny, D., Lynas-Gray, A. E., Marang, F., & van Wyk, F. 1998, MNRAS, 296, 317
no-problem/9911/astro-ph9911451.html
# PSR B0656+14: Combined Optical, X-ray & EUV Studies

## 1. Introduction

Considerable uncertainty remains regarding the fundamental thermal parameters ($`T`$, $`N_H`$ & $`R/d`$) for PSR B0656+14. Radio derived DM estimates (790 $`\pm `$ 190 pc) disagree with the best $`N_H`$ model fits (250 – 550 pc). Reported calibration uncertainties associated with the low energy channels of the $`ROSAT`$ PSPC compromise the latter - although agreement between other $`ROSAT`$ PSPC & observed $`EUVE`$ fluxes has been obtained via a correction (e.g. for RX J185635-3754, Walter & An, 1998). We outline the results of such a correction to the existing PSPC datasets archived for PSR B0656+14 via substitution of the low energy channels with measured $`EUVE`$ fluxes, and by incorporating independently derived constraints on the Rayleigh-Jeans tail in the optical, discuss the implications for the neutron star's thermal parameters.

## 2. Technical & Analytical Overview

Optimum thermal fits for $`T_{soft},T_{hard},N_H,R/d`$ were obtained for the archived $`ROSAT`$ PSPC data alone and for the PSPC data with the suspect low energy channels substituted with the archival normalised $`EUVE`$ flux. This substitution results in a significant change in solution space, as shown in Figure 1 (Edelstein et al. 1999). Based on integrated optical photometry, Pavlov et al. (1997) fitted a two component nonthermal/thermal model, the thermal fit defined by a parameter $`G\equiv T_{10^6\mathrm{K}}(R_{10\mathrm{k}\mathrm{m}}/d_{500\mathrm{p}\mathrm{c}})^2`$, where $`G`$ = \[1 – 7\] (see Figure 1). A 1$`\sigma `$ upper limit on the unpulsed component from the optical $`B`$ band light curve of Shearer et al. (1997) limits $`G\le `$ 4.4, 4.8 and 5.2, based on various optical extinction models to the pulsar (Golden, 1999). These optical results yield tighter constraints on parameter space, as can be seen.

## 3.
Discussion & Conclusions

Combining the $`EUVE`$ & $`ROSAT`$ datasets in this way yields new solutions in parameter space that are further constrained independently via recent optical work. Assuming a simple blackbody form, then $`T_{surface}\simeq `$ 5.0$`\times 10^5`$ K and, for the $`N_H`$-derived distances of \[250 – 280\] pc, $`R_{\mathrm{\infty }}\simeq `$ \[17.7 – 14.7\] km. Using the estimate of $`R_{\mathrm{\infty }}\simeq 9.5_{-2.0}^{+3.5}`$ km for Geminga as a working upper limit (Golden & Shearer, 1999) places PSR B0656+14 at a distance of no less than $`d=152_{-32}^{+55}`$ pc. This suggests the possibility of parallax observations to independently derive $`d`$, with immediate implications for the $`R`$ parameter, and consequently models of the condensed matter equation of state.

## References

Edelstein, J., Seon, K.-I., Golden, A., & Kwok-in, K., 1999, submitted to ApJ.
Golden, A., 1999, Ph.D. Thesis, National University of Ireland, Galway.
Golden, A., & Shearer, A., 1999, A&A, 342, L5.
Greiveldinger, C., Camerini, U., Fry, U., Markwardt, C. B., Ogelman, H., et al., 1996, ApJ, 465, L35.
Pavlov, G. G., Welty, A. D., & Cordova, F. A., 1997, ApJ, 489, L75.
Shearer, A., Redfern, R. M., et al., 1997, ApJ, 487, L181.
Walter, F. M., & An, P., 1998, AAS, 192, 82.07.
no-problem/9911/astro-ph9911022.html
# 1 Meet the galaxies ## 1 Meet the galaxies Vorontsov-Vel’yaminov (1967) was one of the first to draw attention to a unique subset of edge-on spiral galaxies that exhibit extraordinarily large disk axial ratios and no discernible bulge component. Goad & Roberts (1981) dubbed these galaxies ‘superthins’ and recognized that spirals selected on the basis of their superthin morphologies tend to share other unique properties, including low optical surface brightness disks, high neutral gas fractions, low metallicities, and slowly rising, dwarf-like rotation curves (see also Karachentsev & Xu 1991; Bergvall & Rönnback 1995; Matthews et al. 1999a). Superthins are by no means uncommon in nearby space. From an inspection of photographic survey plates, Karachentsev et al. (1993) compiled a catalogue of 4454 edge-on, pure disk galaxies (the Flat Galaxy Catalogue or FGC). Our group has surveyed 474 of the FGC objects in the H i 21-cm line using the Nançay Radio Telescope and Green Bank 140-ft Telescope (Matthews & van Driel, in preparation). We detected over 50% of our targets within $`V_h<9500`$ km s<sup>-1</sup> (see also Giovanelli et al. 1997). The high detection rate of our survey demonstrates that optically organized, gas-rich, pure disk galaxies are abundant in the nearby universe, and hence represent one of the most common products of galaxy disk formation. Galaxy formation paradigms must therefore explain the abundance of these small disks and their unique properties, even if their contribution to the overall matter density of the universe is small. We have obtained follow-up optical imaging and photometry of 95 of our H i-detected galaxies using the WIYN telescope at Kitt Peak. By definition, all of the galaxies we surveyed have large disk axial ratios and little or no bulge component. 
However, we find the FGC galaxies are nonetheless morphologically diverse; many are true superthin objects with small stellar scale heights, while a number exhibit thicker, more flocculent disks. Interestingly, galaxies of both types may have similar disk rotational velocities, H i contents, and optical luminosities. The objects exhibiting superthin morphologies are of particular interest, since the thinness of their stellar disks implies they are among the least dynamically evolved of nearby disk galaxies. Our optical images reveal that the superthins frequently appear rather diffuse, indicating the stellar densities in their disks are low. Since corresponding H i contents are generally high, this suggests that these galaxies have been inefficient star-formers. Thus superthins can be viewed as highly ‘underevolved’ systems. For this reason, superthins can offer us a glimpse of the conditions during the early stages of quiescent disk galaxy evolution without looking beyond the local universe. Moreover, these simple disks allow us to probe disk structure and dynamics without the complication of a bulge or large internal extinction. As an illustration of these ideas, we summarize the results from a detailed analysis of the nearby superthin UGC 7321. ## 2 A nearby superthin studied in detail: UGC 7321 ### 2.1 Global properties Located at a distance of $``$10Mpc, UGC 7321 is a prototypical example of a superthin spiral (Fig. 1). Using the WIYN telescope, we obtained photometrically-calibrated $`B`$ and $`R`$ images of UGC 7321 under $`0^{\prime \prime }\mathrm{\hspace{0.17em}.6}`$ seeing conditions. In addition, we obtained complementary NIR $`H`$-band imaging with IRIM on the Kitt Peak 2.1-m telescope, an H i pencil-beam map using the Nançay Telescope, and we have analyzed archival VLA H i observations of this galaxy. For more details on the observations and their analysis, we refer the reader to Matthews et al. (1999a). 
For UGC 7321 we derive the following global properties (all quantities have been corrected for Galactic and internal extinction and projected to a face-on value; see Matthews et al. 1999a): $`i\simeq 88^{\circ }`$; $`M_B=-17.0`$; $`\overline{\mu }_B=27.6`$ mag arcsec<sup>-2</sup>; $`A_{opt}`$=16.3 kpc; $`M_{HI}=1.1\times 10^9\,M_{\odot }`$; $`\frac{M_{HI}}{L_B}=1.1\,M_{\odot }/L_{\odot }`$; $`W_{20}`$=233 km s<sup>-1</sup>; $`h_r`$=2.1 kpc. We find the rotation curve of UGC 7321 rises slowly, and begins to flatten only well outside the stellar disk. UGC 7321 can therefore be characterized as a low surface brightness, gas-rich galaxy with a rather weak central mass concentration. Its global properties are thus in some ways more reminiscent of an Irregular galaxy than an Sd spiral. Nonetheless, the stellar disk of UGC 7321 is clearly highly organized, and its double-horned global H i profile is distinctly spiralesque.

### 2.2 The radial light distribution

A fit to its azimuthally-averaged brightness distribution reveals that UGC 7321 does not have a simple exponential stellar disk (Fig. 2). At small radii a brightness excess over the best exponential fit is observed, while at large radii, the light profile falls off faster than predicted for an exponential, suggesting the stellar disk of UGC 7321 may be truncated. In addition, a major axis brightness profile extracted from our $`H`$-band data exhibits distinct 'steps' in the light distribution. These observations suggest that perhaps viscous evolution has been inefficient in this low-density galaxy (see also Matthews & Gallagher 1997). This also provides evidence against the hypothesis that the exponential nature of the stellar disk of spirals is established by the initial conditions of galaxy formation (cf. Dalcanton et al. 1997).
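As a rough consistency check, the quoted $`\frac{M_{HI}}{L_B}`$ follows from $`M_B`$ and $`M_{HI}`$ alone, assuming a solar $`B`$-band absolute magnitude of about 5.48 (an assumed value, not quoted in the text):

```python
# Assumed solar B-band absolute magnitude (not quoted in the text).
M_B_sun = 5.48

M_B = -17.0        # corrected absolute blue magnitude of UGC 7321
M_HI = 1.1e9       # HI mass in solar masses

L_B = 10.0 ** (-0.4 * (M_B - M_B_sun))   # blue luminosity in solar units
print(round(M_HI / L_B, 2))              # close to the quoted 1.1
```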
### 2.3 Disk colors and color gradients

The examination of disk color gradients offers an important means of constraining disk formation mechanisms (e.g., de Jong 1996) and dynamical evolution histories (e.g., Just et al. 1996). UGC 7321 is particularly suited to the exploration of disk color gradients, since it suffers minimally from internal dust extinction (Matthews et al. 1999a). Near the center of its disk, UGC 7321 exhibits a small, very red nuclear feature ($`B-R\simeq `$1.5), only a few arcseconds across. Surrounding this feature is a more extended red region with $`B-R\simeq `$1.2, which extends to $`r\simeq \pm 20^{\prime \prime }`$ on either side of the disk center, and has a rather distinct boundary. Intriguingly, this region corresponds very closely to the region over which we observe a light excess over a pure exponential disk (see above). Similar regions have also been found in 3 other superthins (Matthews et al. 1999b); perhaps these represent ancient starbursts, the cores of the original protogalaxies, or kinematically distinct subsystems analogous to the bulges of other spirals. Cutting into the red central region of UGC 7321, we find thin blue bands of stars along the midplane of the galaxy. These bands grow both thicker and bluer with increasing galactocentric radius, having $`B-R\simeq `$1.05 at $`|r|=20^{\prime \prime }`$, and reaching $`B-R\simeq `$0.5 at the visible edges of the disk. This radial bluing cannot be explained solely by dust, and is consistent with the type of color gradient predicted by 'inside-out' galaxy formation models (e.g., White & Frenk 1991). A faint, thicker, but highly flattened disk of unresolved stars is also visible surrounding the UGC 7321 disk at $`|r|\lesssim 2.0^{\prime }`$. This component has $`B-R\simeq `$1.1, and shows little change in color with galactocentric distance. The color of this component is consistent with a population of 'old disk' stars.
Their location at higher $`z`$-heights than the bluer stars along the disk midplane implies that some dynamical heating has occurred even in dynamically cold superthins. We note that the outer disk regions of UGC 7321 are too blue to be explained by low metallicity alone and must be quite young. Nonetheless, the simultaneous presence of stars with $`B-R>1`$ implies that UGC 7321 is not a young galaxy, but rather one which has evolved very slowly. This is contrary to the picture of blue, gas-rich, low surface brightness galaxies as young systems (see also Jimenez et al. 1998).

### 2.4 Vertical disk structure

Measurements of the vertical light profiles of galaxy disks provide insight into their formation, stability, and evolutionary histories (e.g., de Grijs 1997 and references therein). In order to characterize the vertical light distribution of UGC 7321, we have performed functional fits to the brightness profiles extracted at various galactocentric radii from our $`H`$- and $`R`$-band images. We find the disk of UGC 7321 is not locally isothermal over most of its radial extent. At $`r`$=0, the vertical light profile can be well characterized by a single exponential function with a scale height $`h_{z,c}\simeq `$140 pc (Fig. 3). At intermediate galactic radii ($`0^{\prime }.5\lesssim |r|\lesssim 1^{\prime }.5`$), the vertical light profile becomes less peaked than an exponential and can be represented as the sum of two 'sech' functions of differing scale heights ($`h_{z,2}\simeq `$120 pc and $`h_{z,3}\simeq `$218 pc). For $`|r|\gtrsim 1^{\prime }.5`$ we cannot rule out that the disk may be approximately isothermal. We interpret these results as evidence for the existence of disk subcomponents in UGC 7321, analogous to the disk subcomponents of the Milky Way (MW; e.g., Freeman 1993).
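To illustrate what "less peaked than an exponential" means for these fits, the sketch below compares an exponential and a sech profile matched to the same large-$`z`$ wings (the scale height is the central value quoted above; the normalization choice is ours):

```python
import numpy as np

h = 140.0                        # pc, the central exponential scale height
z = np.linspace(0.0, 1000.0, 2001)

f_exp = np.exp(-z / h)           # exponential vertical profile
f_sech = 0.5 / np.cosh(z / h)    # sech profile scaled to share the same wings,
                                 # since sech(z/h) -> 2 exp(-z/h) for z >> h

print(f_exp[0] / f_sech[0])                # 2.0: the exponential is more peaked at z = 0
print(abs(f_exp[-1] / f_sech[-1] - 1.0))   # ~0: the two coincide far from the midplane
```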
At intermediate galactic radii, the two fitted sech functions may represent components similar to the “young disk” and the “thin disk” of the MW, while near the disk center, the exponential nature of the brightness profile suggests the existence of an additional component of extremely small scale height, perhaps analogous to the MW’s “nuclear disk”. This multi-disk interpretation appears consistent with the existence of various disk subpopulations delineated in our color maps (see above). Even apparently simple disks like the superthins thus appear to be quite structurally complex and to have been subject to some degree of dynamical evolution.
no-problem/9911/hep-th9911028.html
# Large Gauge Ward Identity

## 1 Introduction:

The question of "large" gauge invariance at finite temperature has been a very interesting one for several years now . It is well known that the Chern-Simons action in an odd dimensional non-Abelian gauge theory is not invariant under "large" gauge transformations . Rather, it shifts by a constant proportional to the topological winding number associated with the "large" gauge transformation. To be specific, consider the three dimensional Chern-Simons action

$$S_{CS}=M\int d^3x\,\mathrm{Tr}\,ϵ^{\mu \nu \lambda }A_\mu \left(\partial _\nu A_\lambda +\frac{2g}{3}A_\nu A_\lambda \right) \qquad (1)$$

where $`A_\mu `$ is a matrix valued gauge field in some representation of the gauge algebra. Then, under a gauge transformation $`A_\mu \to U^{-1}A_\mu U+\frac{1}{g}U^{-1}\partial _\mu U`$ the Chern-Simons action changes as

$$S_{CS}\to S_{CS}-\frac{4\pi M}{g^2}\times 2\pi W \qquad (2)$$

where $`W`$ is the winding number of the gauge transformation defined to be

$$W=\frac{1}{24\pi ^2}\int d^3x\,\mathrm{Tr}\,ϵ^{\mu \nu \lambda }\,\partial _\mu UU^{-1}\,\partial _\nu UU^{-1}\,\partial _\lambda UU^{-1} \qquad (3)$$

The winding number is a topological number (integer) and unless the gauge transformation belongs to the trivial topology class, it is clear that the Chern-Simons action will not be gauge invariant. However, if the coefficient of the Chern-Simons term is quantized in units of $`\frac{g^2}{4\pi }`$, then the path integral, which involves $`\mathrm{exp}(iS_{CS})`$, will be invariant and we can define a consistent quantum theory . Chern-Simons actions can be induced radiatively, with a perturbatively calculable coefficient. For example, for massive fermions interacting with a gauge field at zero temperature, radiative corrections due to the fermions induce a Chern-Simons term with a coefficient $`\frac{1}{2}`$ (in units of $`\frac{g^2}{4\pi }`$ for every flavor) . Taking into account the intrinsic global anomaly , the effective action is in fact invariant. Alternatively, we can simply consider an even number of fermion flavors.
These radiative corrections at finite temperature are even more interesting. At one loop, in the static limit, they induce a Chern-Simons term with a temperature dependent coefficient such that

$$M\to M-\frac{g^2}{8\pi }\frac{m}{|m|}\mathrm{tanh}\frac{\beta |m|}{2} \qquad (4)$$

where $`\beta `$ is the inverse temperature (in units of the Boltzmann constant). This is now a continuous function of temperature and, consequently, it can no longer be quantized in units of $`\frac{g^2}{4\pi }`$ even for an even number of fermion flavors. It would appear, therefore, that "large" gauge invariance would be lost at finite temperature which is quite mysterious since gauge invariance has no direct relation with temperature. An interesting possible resolution to this puzzle comes from a study of the Chern-Simons theory in $`0+1`$ dimensions which has all the features of the $`2+1`$ dimensional theory and yet is much simpler so that the theory can be exactly solved. It was observed there that, at finite temperature, the radiative corrections give rise to an infinite number of non-extensive terms besides the induced Chern-Simons term and that the effective action due to radiative corrections can be exactly summed in a closed form. This has to be contrasted with the case at zero temperature, where the only nontrivial radiative correction was the Chern-Simons term. Furthermore, it was observed that the summed effective action at finite temperature is invariant under "large" gauge transformations once the tree level coefficient is quantized and we have an even number of fermion flavors. This is, indeed, quite interesting since it points out that even when the Chern-Simons term itself may violate "large" gauge invariance, there may be other terms in the effective action which can compensate to make the total effective action gauge invariant.
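A quick numerical look at Eq. (4) (with the induced coefficient measured in units of $`\frac{g^2}{4\pi }`$ and an arbitrary mass) shows how it interpolates continuously between the quantized zero-temperature value and zero:

```python
import numpy as np

m = 1.0  # arbitrary fermion mass

def induced_coefficient(beta):
    # Magnitude of the one-loop induced Chern-Simons coefficient of Eq. (4),
    # in units of g^2 / (4 pi): (1/2) tanh(beta |m| / 2).
    return 0.5 * np.tanh(beta * abs(m) / 2.0)

for beta in [1000.0, 2.0, 1.0, 0.001]:
    print(beta, induced_coefficient(beta))
# beta -> infinity (T -> 0) recovers the quantized value 1/2; at any finite
# temperature the coefficient sits strictly between 0 and 1/2.
```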
This mechanism extends to $`2+1`$ dimensional Abelian theories for a restricted class of static backgrounds $`A_\mu =(A_0(t),\vec{A}(\vec{x}))`$ . However, this is not the full answer, since for these backgrounds (and for their trivial non-abelian generalizations) the "large" gauge transformations in fact have zero winding number - the shift in the Chern-Simons action comes from a total derivative term, not from the winding number piece. Furthermore, such backgrounds only address the static limit, while the non-static limit is known to be very different . Of course, calculations are more difficult in the $`2+1`$ dimensional theory and one does not expect to be able to sum all the terms in the effective action of this theory. Therefore, to study the problem of "large" gauge invariance in this theory, we must develop a systematic procedure. The natural idea, of course, would be to write a Ward identity for "large" gauge invariance, which relates different amplitudes and, therefore, can be perturbatively checked even if the complete effective action is difficult to evaluate. It is with this in mind that we have chosen to study the question of the Ward identity for "large" gauge transformations in $`0+1`$ dimensions. Clearly, the derivation of the Ward identity for gauge transformations which are topologically nontrivial is hard, but, at least, in $`0+1`$ dimensions, we have the exact effective actions in closed forms and, therefore, such theories provide a natural starting point. In this paper, we carry out such a study in detail. In section 2, we recapitulate briefly all the relevant facts known from the studies in the $`0+1`$ dimensional theories. In section 3, we try to derive the relevant Ward identity for a single flavor fermion theory from the effective action, directly by brute force. As we will show, this is quite hard since the Ward identities are extremely nonlinear.
An alternate method is to look at the Ward identities in terms of the exponential of the effective action which we do in section 4. These identities are linear and easier to handle. (Of course, the nonlinearity creeps in when we transform back to the effective action.) The nonlinearity of these identities brings in many interesting features which we are not used to. Thus, for example, unlike the Ward identities for small gauge invariance, here the identities do not obey superposition. Consequently, if a theory has two distinct sectors, the sum of the effective action coming from the two sectors does not have the same structure of the identity as satisfied by the individual contributions. We point out all such features and present a brief conclusion in section 5 pointing out which features are likely/unlikely to extend to the $`2+1`$ dimensional theory.

## 2 Recapitulation of Results:

We first recapitulate all the relevant results known from studies in $`0+1`$ dimensional theories. Recall that the theory of massive fermions with $`N_f`$ flavors interacting with an Abelian gauge field including a Chern-Simons term is described by the action (We assume $`m>0`$ for simplicity.)

$$S_{\mathrm{fermion}}=\int dt\,\overline{\psi }(i\partial _t-m-A)\psi -\kappa \int dt\,A \qquad (5)$$

where we have suppressed the flavor index for the fermions and we note that the last term is the Chern-Simons term in $`0+1`$ dimension. It is worth emphasizing that even though the gauge field here is Abelian, this theory has all the properties of a $`2+1`$ dimensional non-Abelian theory. Under a gauge transformation, $`\psi \to e^{-i\lambda }\psi `$, $`A\to A+\partial _t\lambda `$, the fermion action is invariant, but the complete action changes:

$$S_{\mathrm{fermion}}\to S_{\mathrm{fermion}}-2\pi \kappa N \qquad (6)$$

where $`N`$ is the appropriate winding number and it is clear that the tree level coefficient $`\kappa `$ of the Chern-Simons term must be quantized for the theory to be consistent.
The mass term for the fermion breaks charge conjugation invariance and, consequently, the radiative corrections due to the fermions generate a Chern-Simons term at finite temperature modifying the coefficient as

$$\kappa \to \kappa -\frac{N_f}{2}\mathrm{tanh}\frac{\beta m}{2} \qquad (7)$$

This is analogous to the behavior in the $`2+1`$ dimensional theory. In particular, we note that, at zero temperature ($`\beta \to \mathrm{\infty }`$) with an even number of flavors, this is compatible with the quantization of the coefficient of the Chern-Simons term, but it poses a problem at finite temperature suggesting that gauge invariance may be violated if the temperature is nonzero. In this case, of course, the effective action can be exactly evaluated and has the form

$$\mathrm{\Gamma }=\mathrm{\Gamma }_f^{(N_f)}-\kappa a=-iN_f\mathrm{log}\frac{\mathrm{cosh}\frac{(\beta m+ia)}{2}}{\mathrm{cosh}\frac{\beta m}{2}}-\kappa a=-iN_f\mathrm{log}\left(\mathrm{cos}\frac{a}{2}+i\mathrm{tanh}\frac{\beta m}{2}\mathrm{sin}\frac{a}{2}\right)-\kappa a \qquad (8)$$

where we have defined (We have normalized the effective action so that it vanishes for $`A=0`$.)

$$a=\int dt\,A(t) \qquad (9)$$

There are several things to note from the form of this effective action. First, this is a non-extensive action (involves powers of an integrated quantity). Non-extensive actions do not arise at zero temperature from requirements of locality, but locality is not necessarily respected at finite temperature. In fact, it is easily seen from small gauge invariance that, in this theory, if higher order terms do not vanish, the effective action must be non-extensive.
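The two closed forms in Eq. (8), and the behavior of their common argument under $`a\to a+2\pi N`$, can be verified directly with complex arithmetic:

```python
import cmath

beta_m = 1.7  # arbitrary value of beta*m

# The two forms of the argument of the log in Eq. (8) agree:
#   cosh((beta m + i a)/2) / cosh(beta m / 2) = cos(a/2) + i tanh(beta m / 2) sin(a/2)
W = lambda a: cmath.cos(a / 2) + 1j * cmath.tanh(beta_m / 2) * cmath.sin(a / 2)
for a in [0.3, 1.1, 2.9, 5.0]:
    lhs = cmath.cosh((beta_m + 1j * a) / 2) / cmath.cosh(beta_m / 2)
    assert abs(lhs - W(a)) < 1e-12

# Under a -> a + 2 pi N the argument flips sign for odd N, so Gamma_f shifts by
# pi N_f N; for an even number of flavors exp(i Gamma) is unchanged.
a = 0.8
assert abs(W(a + 2 * cmath.pi) + W(a)) < 1e-12
print("Eq. (8) identities verified")
```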
For example, let us note that if we have a quadratic term in the effective action, of the form $`S_q=\frac{1}{2}\int dt_1dt_2\,A(t_1)F(t_1-t_2)A(t_2)`$, then invariance under small gauge transformations would require

$$\delta S_q=\int dt_1dt_2\,\partial _{t_1}\lambda \,F(t_1-t_2)A(t_2)=-\int dt_1dt_2\,\lambda \,\partial _{t_1}F(t_1-t_2)A(t_2)=0 \qquad (10)$$

whose general solution is $`F=`$ constant. If the constant is nonzero, the quadratic action becomes non-extensive (a quadratic function of $`a`$). Thus, one of the important features of this theory is that $`\mathrm{\Gamma }_f^{(N_f)}`$ and, therefore, $`\mathrm{\Gamma }`$ is a function of $`a`$. This simple feature is unlikely to generalize to the $`2+1`$ dimensional theory. In fact, it is already known that this does not hold even in $`1+1`$ dimensions because the Ward identities for small gauge invariance are not restrictive enough. Another feature to note is that, under a "large" gauge transformation, for which

$$a\to a+2\pi N, \qquad (11)$$

the effective action coming from the radiative corrections due to the fermions transforms as

$$\mathrm{\Gamma }_f^{(N_f)}(a)\to \mathrm{\Gamma }_f^{(N_f)}(a+2\pi N)=\mathrm{\Gamma }_f^{(N_f)}(a)+\pi N_fN \qquad (12)$$

so that the theory continues to be well defined for an even number of fermion flavors. That is, even though the coefficient of the Chern-Simons term is no longer quantized at finite temperature, the noninvariance of this term is completely compensated for by all the higher order terms in the effective action. In fact, the most important thing to observe in this connection is that the inhomogeneous transformation of the fermion effective action is independent of temperature, as we should expect since gauge transformations are not related to temperature. In $`0+1`$ dimensions, we also know the results for the effective action of a massive, complex scalar field interacting with the Abelian gauge field .
Consider the theory with action

$$S_{\mathrm{scalar}}=\int dt\left((\partial _t-iA)\varphi ^{*}(\partial _t+iA)\varphi -m^2\varphi ^{*}\varphi \right)-\kappa \int dt\,A \qquad (13)$$

where, we have again suppressed the number of flavors for simplicity. The mass term in this theory does not break parity and, consequently, there is no Chern-Simons term generated. In fact, at zero temperature, the radiative corrections due to the scalar fields identically vanish, which follows from a combination of invariance under small gauge transformation and the absence of parity violation. Nonetheless, at finite temperature, the effective action coming from the scalar fields is nontrivial and has the form

$$\mathrm{\Gamma }_s^{(N_f)}=iN_f\mathrm{log}\frac{\mathrm{sinh}\frac{(\beta m+ia)}{2}\mathrm{sinh}\frac{(\beta m-ia)}{2}}{\mathrm{sinh}^2\frac{\beta m}{2}}=iN_f\mathrm{log}\left(\mathrm{cos}^2\frac{a}{2}+\mathrm{coth}^2\frac{\beta m}{2}\mathrm{sin}^2\frac{a}{2}\right)=iN_f\mathrm{log}\left(\frac{\mathrm{cosh}\beta m-\mathrm{cos}a}{2\mathrm{sinh}^2(\beta m/2)}\right) \qquad (14)$$

We see that even though there is no Chern-Simons term, we would have run into the problem of "large" gauge invariance had we done a perturbative calculation and looked at the quadratic terms alone. The effective action, once again, is non-extensive and is a function of $`a`$, namely, $`\mathrm{\Gamma }_s=\mathrm{\Gamma }_s(a)`$. Furthermore, under a large gauge transformation,

$$\mathrm{\Gamma }_s^{(N_f)}(a)\to \mathrm{\Gamma }_s^{(N_f)}(a+2\pi N)=\mathrm{\Gamma }_s^{(N_f)}(a) \qquad (15)$$

Namely, the action is invariant independent of the temperature.
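The equivalent forms of the scalar effective action argument in Eq. (14), and its periodicity in $`a`$, can likewise be checked numerically:

```python
import cmath

bm = 1.3  # arbitrary value of beta*m
coth2 = (cmath.cosh(bm / 2) / cmath.sinh(bm / 2)) ** 2

for a in [0.2, 1.4, 3.3, 6.0]:
    f1 = cmath.sinh((bm + 1j * a) / 2) * cmath.sinh((bm - 1j * a) / 2) / cmath.sinh(bm / 2) ** 2
    f2 = cmath.cos(a / 2) ** 2 + coth2 * cmath.sin(a / 2) ** 2
    f3 = (cmath.cosh(bm) - cmath.cos(a)) / (2 * cmath.sinh(bm / 2) ** 2)
    assert abs(f1 - f2) < 1e-12 and abs(f1 - f3) < 1e-12
    # Invariance under a -> a + 2 pi N: the argument is strictly periodic in a.
    f3_shift = (cmath.cosh(bm) - cmath.cos(a + 2 * cmath.pi)) / (2 * cmath.sinh(bm / 2) ** 2)
    assert abs(f3 - f3_shift) < 1e-12
print("Eq. (14) identities verified")
```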
Finally, consider a simple supersymmetric model in $`0+1`$ dimensions, with action

$$S_{\mathrm{super}}=\int dt\left((\partial _t-iA)\varphi ^{*}(\partial _t+iA)\varphi -m^2\varphi ^{*}\varphi +\overline{\psi }(i\partial _t-m-A)\psi \right)+\int dt\left(\frac{1}{2}(A+\dot{\theta })^2+\frac{i}{2}(\lambda +\xi )(\dot{\lambda }+\dot{\xi })\right)-\kappa \int dt\,A \qquad (16)$$

Here, in addition to the usual scalar and fermionic fields with identical number of flavors, we also have a Stückelberg multiplet of fields. The effective action for this theory is quite simple

$$\mathrm{\Gamma }=\mathrm{\Gamma }_{susy}^{(N_f)}(a)+\int dt\left(\frac{1}{2}(A+\dot{\theta })^2+\frac{i}{2}(\lambda +\xi )(\dot{\lambda }+\dot{\xi })\right)-\kappa \int dt\,A \qquad (17)$$

where we recognize that

$$\mathrm{\Gamma }_{susy}^{(N_f)}(a)=\mathrm{\Gamma }_f^{(N_f)}(a)+\mathrm{\Gamma }_s^{(N_f)}(a)=-iN_f\mathrm{log}\frac{2\mathrm{sinh}^2\frac{\beta m}{2}\left(\mathrm{cos}\frac{a}{2}+i\mathrm{tanh}\frac{\beta m}{2}\mathrm{sin}\frac{a}{2}\right)}{\mathrm{cosh}\beta m-\mathrm{cos}a} \qquad (18)$$

and the transformation properties follow from our earlier discussion. With these basics, we are now ready to get into the question of the Ward identity for "large" gauge invariance.

## 3 Ward Identity (Hard Way):

To begin with, let us consider the model for a single (flavor) massive fermion interacting with an Abelian gauge field. The Lagrangian is trivially obtained from eq. (5). Let us denote by $`\mathrm{\Gamma }_f^{(1)}`$ the effective action which results from integrating out the fermions.
From the general arguments of the last section, we know that this effective action will be a function of $a$ such that under a large gauge transformation $$\Gamma_f^{(1)}(a)\rightarrow\Gamma_f^{(1)}(a+2\pi N)=\Gamma_f^{(1)}(a)+\pi N$$ (19) The Taylor expansion of this relation gives $$\sum_{n=1}^{\infty}\sum_{m=0}^{\infty}\frac{a^{m}(2\pi N)^{n}}{m!\,n!}\,\frac{\partial^{n+m}\Gamma_f^{(1)}}{\partial a^{n+m}}\bigg|_{a=0}=\pi N$$ (20) This is an infinite number of constraints which can also be rewritten in the form $$\sum_{n=1}^{\infty}\frac{(2\pi N)^{n}}{n!}\,\frac{\partial^{n}\Gamma_f^{(1)}}{\partial a^{n}}\bigg|_{a=0}=\pi N,\qquad \frac{a^{m}}{m!}\sum_{n=1}^{\infty}\frac{(2\pi N)^{n}}{n!}\,\frac{\partial^{n+m}\Gamma_f^{(1)}}{\partial a^{n+m}}\bigg|_{a=0}=0\quad\mathrm{for}\ m>0$$ (21) Solving this set of equations is tedious, but with a little bit of work, we can determine that the simplest relation which will satisfy the infinity of relations has the form (this is, however, not the most general relation, as we will discuss later in the case of the supersymmetric theory) $$\frac{\partial^{2}\Gamma_f^{(1)}}{\partial a^{2}}=i\left(\frac{1}{4}-\left(\frac{\partial\Gamma_f^{(1)}}{\partial a}\right)^{2}\right)$$ (22) This relation (as well as the ones following from it) can be checked explicitly for the first few low order amplitudes of the theory and, consequently, eq. (22) can be thought of as the Ward identity for “large” gauge invariance (or, better yet, the master equation from which all the relevant relations can be obtained by taking further derivatives with respect to $a$). This relates higher point functions to lower ones, as we would expect a Ward identity to do, and must hold at zero as well as nonzero temperature.
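Both the transformation property (19) and the master equation (22) can be verified numerically from the closed form of the single-flavor effective action quoted below in eq. (24) (equivalently, eq. (34)); a sketch using central finite differences (parameter values are illustrative):

```python
import cmath, math

def gamma_f(a, beta_m):
    # Closed form of the single-flavor effective action (cf. eqs. (24), (34)):
    # Gamma_f = -i log(cos(a/2) + i tanh(beta m / 2) sin(a/2))
    t = math.tanh(beta_m / 2.0)
    return -1j * cmath.log(cmath.cos(a / 2.0) + 1j * t * cmath.sin(a / 2.0))

beta_m, a = 1.1, 0.4
# eq. (19): a -> a + 2*pi*N shifts Gamma_f by pi*N (mod 2*pi, from the log branch)
shift = (gamma_f(a + 2 * math.pi, beta_m) - gamma_f(a, beta_m)).real
assert abs(shift % (2 * math.pi) - math.pi) < 1e-12

# eq. (22): Gamma'' = i (1/4 - Gamma'^2), checked with central differences
h = 1e-4
d1 = (gamma_f(a + h, beta_m) - gamma_f(a - h, beta_m)) / (2 * h)
d2 = (gamma_f(a + h, beta_m) - 2 * gamma_f(a, beta_m) + gamma_f(a - h, beta_m)) / h**2
assert abs(d2 - 1j * (0.25 - d1 ** 2)) < 1e-6
```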
However, unlike conventional Ward identities associated with small gauge invariance, we note that this relation is nonlinear. In some sense, this is to be expected for “large” gauge transformations, and we comment on the consequences of this feature later. We note from the explicit form of the effective action determined earlier (see eq. (8) for $N_f=1$) that eq. (22) indeed holds for the single flavor fermion theory. Let us also note here that all the higher point functions can be determined from the Ward identity in eq. (22) in terms of the one point function, which has to be calculated from the theory and is known to be $$\frac{\partial\Gamma_f^{(1)}}{\partial a}\bigg|_{a=0}=\frac{1}{2}\tanh\frac{\beta m}{2}$$ (23) It follows from this (using the identity in eq. (22)) that, at zero temperature, the two point and all other higher point functions vanish. In fact, from the Ward identity (22), we can determine the form of the effective action to be (recall that $\Gamma_f(a=0)=0$) $$\Gamma_f^{(1)}(a)=-i\log\left(\cos\frac{a}{2}+2i\,\frac{\partial\Gamma_f^{(1)}}{\partial a}\bigg|_{a=0}\sin\frac{a}{2}\right)$$ (24) Namely, the effective action can be completely determined from the knowledge of the one point function which, of course, coincides with the exact calculations. It is clear, however, that this way of deriving the Ward identity is extremely hard and, in particular, if the theory is complicated (remember that so far we have only considered a single flavor of massive fermion), then it may be much more difficult to determine the Ward identity. ## 4 Simple Derivation of the Ward Identity: To find a simpler way of deriving the “large” gauge Ward identity, let us define $$\Gamma(a)=\mp i\log W(a)$$ (25) where the upper sign is for a fermion theory while the lower one corresponds to a scalar theory. Namely, we are interested in looking at the exponential of the effective action (i.e.
up to a factor of $i$, $W$ is the basic determinant that would arise from integrating out a particular field). Once again, we will restrict ourselves to a single flavor massive fermion or a single flavor massive complex scalar, since any other theory can be obtained from these basic components. The advantage of studying $W(a)$ as opposed to the effective action lies in the fact that, in order for $\Gamma(a)$ to have the right transformation properties under a large gauge transformation (see eqs. (12), (14)), $W(a)$ simply has to be quasi-periodic. Consequently, from the study of the harmonic oscillator (as well as Floquet theory), we see that $W(a)$ has to satisfy a simple equation of the form $$\frac{\partial^{2}W(a)}{\partial a^{2}}+\nu^{2}W(a)=g$$ (26) where $\nu$ and $g$ are parameters to be determined from the theory. In particular, let us note that the constant $g$ can depend on parameters of the theory such as temperature, whereas we expect the parameter $\nu$, also known as the characteristic exponent, to be independent of temperature and equal to an odd half integer for a fermionic mode or an integer for a scalar mode. However, all these properties should automatically result from the structure of the theory. Let us also note here that the relation (26) is simply the equation for a forced oscillator, whose solution has the general form $$W(a)=\frac{g}{\nu^{2}}+A\cos(\nu a+\delta)=\frac{g}{\nu^{2}}+\alpha_{1}\cos\nu a+\alpha_{2}\sin\nu a$$ (27) The constants $\alpha_{1}$ and $\alpha_{2}$ appearing in the solution can again be determined from the theory.
Namely, from the relation between $W(a)$ and $\Gamma(a)$, we recognize that we can identify $$\nu^{2}\alpha_{1}=-\frac{\partial^{2}W}{\partial a^{2}}\bigg|_{a=0}=\left(\left(\frac{\partial\Gamma}{\partial a}\right)^{2}\mp i\,\frac{\partial^{2}\Gamma}{\partial a^{2}}\right)\bigg|_{a=0},\qquad \nu\alpha_{2}=\frac{\partial W}{\partial a}\bigg|_{a=0}=\pm i\,\frac{\partial\Gamma}{\partial a}\bigg|_{a=0}$$ (28) From the general properties of the scalar and fermion theories we have discussed, we intuitively expect $g_f=0$ and $\alpha_{2,s}=0$. However, these should really follow from the structure of the theory, and they do, as we will show shortly. The identity (26) is a linear relation, as opposed to the Ward identity (22) in terms of the effective action, and holds both for a fermionic as well as a scalar mode. In fact, rewriting this in terms of the effective action (using eq. (25)), we have $$\frac{\partial^{2}\Gamma(a)}{\partial a^{2}}=\pm i\left(\nu^{2}-\left(\frac{\partial\Gamma(a)}{\partial a}\right)^{2}\right)\mp i\,g\,e^{\mp i\Gamma(a)}$$ (29) This is reminiscent of the identity in eq. (22), but is not identical. So, let us investigate this a little more in detail, first for a fermionic mode. In this case, we know that the fermion mass term breaks parity and, consequently, the radiative corrections would generate a Chern-Simons term; namely, in this theory we expect the one-point function to be nonzero. Consequently, by taking derivatives of eq.
(29) (as well as remembering that $\Gamma(a=0)=0$), we determine $$(\nu_f^{(1)})^{2}=\left[\left(\frac{\partial\Gamma_f^{(1)}}{\partial a}\right)^{2}-3i\,\frac{\partial^{2}\Gamma_f^{(1)}}{\partial a^{2}}-\left(\frac{\partial\Gamma_f^{(1)}}{\partial a}\right)^{-1}\left(\frac{\partial^{3}\Gamma_f^{(1)}}{\partial a^{3}}\right)\right]_{a=0},\qquad g_f^{(1)}=\left[-2i\,\frac{\partial^{2}\Gamma_f^{(1)}}{\partial a^{2}}-\left(\frac{\partial\Gamma_f^{(1)}}{\partial a}\right)^{-1}\left(\frac{\partial^{3}\Gamma_f^{(1)}}{\partial a^{3}}\right)\right]_{a=0}$$ (30) This is quite interesting, for it says that the two parameters in eq. (26) or (29) can be determined from a perturbative calculation. Let us note here some of the perturbative results in this theory, namely, $$\frac{\partial\Gamma_f^{(1)}}{\partial a}\bigg|_{a=0}=\frac{1}{2}\tanh\frac{\beta m}{2},\qquad \frac{\partial^{2}\Gamma_f^{(1)}}{\partial a^{2}}\bigg|_{a=0}=\frac{i}{4}\,\mathrm{sech}^{2}\frac{\beta m}{2},\qquad \frac{\partial^{3}\Gamma_f^{(1)}}{\partial a^{3}}\bigg|_{a=0}=\frac{1}{4}\tanh\frac{\beta m}{2}\,\mathrm{sech}^{2}\frac{\beta m}{2}$$ (31) Using these, we immediately determine from eq. (30) that $$(\nu_f^{(1)})^{2}=\frac{1}{4},\qquad g_f^{(1)}=0$$ (32) so that equation (29) coincides with (22) for a single fermion flavor. Furthermore, we now determine from eq. (28) $$\alpha_{1,f}^{(1)}=1,\qquad \alpha_{2,f}^{(1)}=\pm i\tanh\frac{\beta m}{2}$$ (33) The two signs of $\alpha_{2,f}^{(1)}$ simply correspond to the two possible signs of $\nu_f^{(1)}$.
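Plugging the perturbative data (31) into the relations (30) is simple arithmetic; a short numerical sketch (illustrative $\beta m$; the identity $\tanh^2+\mathrm{sech}^2=1$ does the work) confirms eq. (32):

```python
import math

beta_m = 0.9
t = math.tanh(beta_m / 2.0)
s2 = 1.0 / math.cosh(beta_m / 2.0) ** 2     # sech^2(beta m / 2)

# Perturbative inputs of eq. (31):
g1 = 0.5 * t             # one-point function at a = 0
g2 = 0.25j * s2          # two-point function at a = 0
g3 = 0.25 * t * s2       # three-point function at a = 0

nu2 = g1 ** 2 - 3j * g2 - g3 / g1      # first relation of eq. (30)
g_f = -2j * g2 - g3 / g1               # second relation of eq. (30)
assert abs(nu2 - 0.25) < 1e-12         # eq. (32): nu_f^2 = 1/4
assert abs(g_f) < 1e-12                # eq. (32): g_f = 0
```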
With this then, we can solve for $W(a)$ in the single flavor fermion theory, and we have (independent of the sign of $\nu_f^{(1)}$) $$W_f^{(1)}(a)=\cos\frac{a}{2}+i\tanh\frac{\beta m}{2}\,\sin\frac{a}{2}$$ (34) which can be compared with eq. (8). For a scalar theory, however, we know that the mass term does not break parity. Consequently, we do not expect a Chern-Simons term to be generated, simply from symmetry arguments. In fact, in the scalar theory with parity as a symmetry, there cannot be any odd terms (in $a$) in the effective action. Consequently, taking derivatives of eq. (29) and keeping this in mind, we obtain $$(\nu_s^{(1)})^{2}=\left[3i\,\frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}-\left(\frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}\right)^{-1}\frac{\partial^{4}\Gamma_s^{(1)}}{\partial a^{4}}\right]_{a=0},\qquad g_s^{(1)}=\left[2i\,\frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}-\left(\frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}\right)^{-1}\frac{\partial^{4}\Gamma_s^{(1)}}{\partial a^{4}}\right]_{a=0}$$ (35) Once again, we see that the two parameters in the Ward identity can be determined from the first two nontrivial amplitudes of the theory. For the scalar theory, the necessary nontrivial amplitudes can be easily computed. (Calculationally, the scalar theory coincides with two fermionic theories with masses of opposite sign if we neglect the negative sign associated with fermion loops. The amplitudes can also be determined from the explicit form of the effective action in eq. (14).)
$$\frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}\bigg|_{a=0}=\frac{i}{2\sinh^{2}(\beta m/2)},\qquad \frac{\partial^{4}\Gamma_s^{(1)}}{\partial a^{4}}\bigg|_{a=0}=-\frac{i}{2\sinh^{2}(\beta m/2)}\left(1+\frac{3}{2\sinh^{2}(\beta m/2)}\right)$$ (36) It follows from this that $$(\nu_s^{(1)})^{2}=1,\qquad g_s^{(1)}=\frac{\cosh\beta m}{2\sinh^{2}(\beta m/2)}$$ (37) This is indeed consistent with our expectations. Furthermore, we now determine from eq. (28) $$\alpha_{1,s}^{(1)}=-\frac{1}{2\sinh^{2}(\beta m/2)},\qquad \alpha_{2,s}^{(1)}=0$$ (38) so that we can write $$W_s^{(1)}(a)=\frac{(\cosh\beta m-\cos a)}{2\sinh^{2}(\beta m/2)}$$ (39) This can be compared with eq. (14). Thus, we see that the Ward identities for the single flavor fermion and scalar theories are given, in terms of the effective actions, respectively by $$\frac{\partial^{2}\Gamma_f^{(1)}}{\partial a^{2}}=i\left(\frac{1}{4}-\left(\frac{\partial\Gamma_f^{(1)}}{\partial a}\right)^{2}\right),\qquad \frac{\partial^{2}\Gamma_s^{(1)}}{\partial a^{2}}=-i\left(1-\left(\frac{\partial\Gamma_s^{(1)}}{\partial a}\right)^{2}\right)+\frac{i\cosh\beta m}{2\sinh^{2}(\beta m/2)}\,e^{i\Gamma_s^{(1)}}$$ (40) As we have already pointed out, these are nonlinear identities and, therefore, superposition does not hold.
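The same bookkeeping for the scalar mode, eqs. (35)-(37), can be checked in a few lines (illustrative value of $\beta m$):

```python
import math

beta_m = 0.9
s2 = math.sinh(beta_m / 2.0) ** 2           # sinh^2(beta m / 2)

# Perturbative inputs of eq. (36):
g2 = 1j / (2 * s2)                          # two-point function at a = 0
g4 = -1j / (2 * s2) * (1 + 3 / (2 * s2))    # four-point function at a = 0

nu2 = 3j * g2 - g4 / g2        # first relation of eq. (35)
g_s = 2j * g2 - g4 / g2        # second relation of eq. (35)
assert abs(nu2 - 1.0) < 1e-12                               # eq. (37)
assert abs(g_s - math.cosh(beta_m) / (2 * s2)) < 1e-12      # eq. (37)
```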
In fact, even if we are considering only fermions (or scalars) of $N_f$ flavors, the identity is modified in a nontrivial manner (which can be derived by simply noting that $W^{(1)}=e^{\pm(i/N_f)\Gamma^{(N_f)}}$), namely, $$\frac{\partial^{2}\Gamma_f^{(N_f)}}{\partial a^{2}}=iN_f\left(\frac{1}{4}-\frac{1}{N_f^{2}}\left(\frac{\partial\Gamma_f^{(N_f)}}{\partial a}\right)^{2}\right),\qquad \frac{\partial^{2}\Gamma_s^{(N_f)}}{\partial a^{2}}=-iN_f\left(1-\frac{1}{N_f^{2}}\left(\frac{\partial\Gamma_s^{(N_f)}}{\partial a}\right)^{2}\right)+\frac{iN_f\cosh\beta m}{2\sinh^{2}(\beta m/2)}\,e^{\frac{i}{N_f}\Gamma_s^{(N_f)}}$$ (41) Incidentally, let us note that although the Ward identity is linear in $W(a)$, for $N_f$ flavors $W^{(N_f)}$ is a product of $W^{(1)}$’s, not a sum, and so superposition is lost. It is clear, therefore, that while the Ward identity is simple for the basic fermion and scalar modes, for arbitrary combinations of these modes the identity is bound to be much more complicated. However, we can still derive the Ward identity from the basic identities for a single flavor fermion and scalar theory. Thus, as a simple example, let us consider the supersymmetric theory discussed in section 2 (see eqs. (16)-(18)) for a single flavor. In this case, we have simply a sum of a fermionic and a complex bosonic degree of freedom, and defining $$\Gamma_{susy}^{(1)}(a)=-i\log W_{susy}^{(1)}(a)$$ (42) where $$W_{susy}^{(1)}(a)=\frac{W_f^{(1)}(a)}{W_s^{(1)}(a)}$$ (43) we see that we can no longer write a single identity for $W_{susy}^{(1)}(a)$. Rather, we will have a coupled set of identities, one of which, say the one for the fermions, will be decoupled. On the other hand, since the fermion equation is uncoupled and can be solved (see eq.
(34)), we can use the solution to write a single identity for $W_{susy}^{(1)}(a)$ and, therefore, $\Gamma_{susy}^{(1)}(a)$: $$i\,\frac{\partial^{2}\Gamma_{susy}^{(1)}}{\partial a^{2}}+\left(\frac{\partial\Gamma_{susy}^{(1)}}{\partial a}\right)^{2}-\frac{3}{4}=\tanh\frac{(\beta m+ia)}{2}\,\frac{\partial\Gamma_{susy}^{(1)}}{\partial a}-\frac{\cosh\beta m\,\cosh\frac{\beta m}{2}}{2\sinh^{2}\frac{\beta m}{2}\,\cosh\frac{(\beta m+ia)}{2}}\,e^{i\Gamma_{susy}^{(1)}}$$ (44) This is very different from eq. (22) and yet satisfies the infinite set of constraints in eq. (21). (Let us note here that the invariance properties of $\Gamma_{susy}^{(1)}(a)$ are the same as those for a single flavor fermion theory.) Thus, as we had mentioned earlier, the identity for a basic single flavor theory is simpler. The identities following from eq. (44) can be checked perturbatively. In fact, the identity can even be solved in the following way. Consider eq. (44). Using the method of Fourier decomposition, it is seen after some algebra that the solution to eq. (44) has the form $$W_{susy}^{(1)}(a)=(1-e^{-\beta m})\sum_{k=0}^{\infty}e^{-k\beta m}\left[\cos\left(k+\frac{1}{2}\right)a+i\tanh^{2}\frac{\beta m}{2}\,\sin\left(k+\frac{1}{2}\right)a\right]$$ (45) It is interesting that even for this simple model, which contains just a single flavor of fermion and a complex scalar, $W(a)$ becomes a sum over infinitely many distinct Fourier modes, as opposed to the case of either a single fermion or a single complex scalar, where $W(a)$ involves only a single Fourier component. Let us note here that, although $W_{susy}^{(1)}(a)$ in eq. (45) appears different from that in eq. (18), the series in (45) can, in fact, be summed and coincides with the result in (18) for a single flavor.
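That the series (45) resums to $W_f^{(1)}/W_s^{(1)}$ can be confirmed numerically by truncating the sum at large $k$ (the truncation error is of order $e^{-k_{\max}\beta m}$); a short sketch:

```python
import math

def w_fermion(a, bm):
    # eq. (34)
    return complex(math.cos(a / 2), math.tanh(bm / 2) * math.sin(a / 2))

def w_scalar(a, bm):
    # eq. (39)
    return (math.cosh(bm) - math.cos(a)) / (2 * math.sinh(bm / 2) ** 2)

def w_susy_series(a, bm, kmax=200):
    # Truncated Fourier representation of eq. (45)
    t2 = math.tanh(bm / 2) ** 2
    total = 0j
    for k in range(kmax + 1):
        total += math.exp(-k * bm) * complex(math.cos((k + 0.5) * a),
                                             t2 * math.sin((k + 0.5) * a))
    return (1 - math.exp(-bm)) * total

a, bm = 0.8, 1.5
closed = w_fermion(a, bm) / w_scalar(a, bm)   # eq. (43)
assert abs(w_susy_series(a, bm) - closed) < 1e-12
```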
## 5 Conclusion: In this paper, we have systematically studied the question of “large” gauge invariance in the $0+1$ dimensional Chern-Simons theory. The effective actions, in $0+1$ dimensions, are functions of $a=\int dt\,A(t)$, which is a consequence of Ward identities for small gauge invariance and makes the derivation of the “large” gauge identities relatively simple. This is a feature that we do not expect to generalize to $2+1$ dimensions, except for special static backgrounds. Explicitly, we have derived the “large” gauge Ward identities for a single flavor fermion theory as well as a single flavor complex scalar theory interacting with an Abelian gauge field. These identities are simple, but nonlinear. The Ward identity for any other theory can be derived from them and is, in general, more complicated because of the nonlinear nature of the identities. In particular, we have shown that the solution of the Ward identity for a single flavor fermion theory or a single flavor complex scalar theory involves a single characteristic index and is simple, while for a more complex theory (even for a sum of just a single fermion and a single scalar) it involves a sum over an infinity of Fourier modes. This is a feature which we believe would generalize to $2+1$ dimensions. ## Acknowledgments This work was supported in part by the U.S. Dept. of Energy Grant DE-FG 02-91ER40685, NSF-INT-9602559 as well as CNPq. GD acknowledges the support of the DOE under grant DE-FG02-92ER40716.00, and thanks the Technion Physics Department for its hospitality.
no-problem/9911/astro-ph9911029.html
ar5iv
text
# Detection of new emission components in PSR B0329+54 ## 1. Data Analysis For the strong pulsar PSR B0329+54, high-quality single pulses are easily observable. We obtained the data at 325 MHz in March 1999 using the Giant Metrewave Radio Telescope near Pune, and the data at 606 MHz in August 1996 using the 76-m Lovell Telescope at Jodrell Bank. For the analysis, we considered about 2500 single pulses at each frequency. The time resolutions of the data at 325 and 606 MHz were 0.516 and 0.250 ms, respectively. To estimate the pulse profiles which clearly show the presence of weaker components, we developed a ‘window-threshold’ technique. In this technique, we set a window in longitude and employ an intensity threshold while selecting the single pulses for making average profiles, i.e. we consider all those pulses which have intensity levels above the threshold within the window. For the threshold, the rms intensity was computed from the off-pulse region. In Figs. 1a&b we have plotted the average pulse profiles which were obtained by applying this technique to nine component windows. The average profile obtained from all those pulses is plotted in Figs. 1c&d, which clearly shows the presence of 9 emission components (I, II, III, IV, V, VI, VII, VIII, IX) in PSR B0329+54. The components on the trailing side of the profile are more closely spaced than those on the leading side. ## 2. Conclusion We have developed a technique, based on windowing and thresholding, to detect weak emission components in pulsar profiles. By applying it to the single pulse data of PSR B0329+54 we have detected three new emission components (VII, VIII and IX) of this pulsar, and also confirmed the presence of a component (VI) proposed by Kuzmin & Izvekova (1996). The near-symmetric distribution of components around the core/centre of the profile favours the idea that the pulsar emission beam is annular or conal (Oster & Sieber 1977; Rankin 1983). ### Acknowledgments. We thank A. G.
Lyne for providing the Jodrell Bank data, and J. M. Rankin for useful discussions. ## References Kuzmin, A. D., & Izvekova, V. A. 1996, PASP, Vol. 105, Pulsars: Problems & Progress, ed. S. Johnston, M.A. Walker & M. Bailes (San Francisco: ASP), 217 Oster, L., & Sieber, W. 1977, ApJ, 58, 303 Rankin, J. M. 1983, ApJ, 274, 333
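The window-threshold selection described in § 1 is straightforward to implement; the sketch below is our own illustrative version (array shapes and the off-pulse estimate are our choices), in which the off-pulse rms is estimated from bins outside the chosen window and pulses are kept when they exceed a multiple of that rms inside it:

```python
import numpy as np

def window_threshold_profile(pulses, window, n_sigma=3.0):
    """Average only the single pulses whose intensity inside the
    longitude window exceeds n_sigma times the off-pulse rms.

    pulses : (n_pulse, n_bin) array of single-pulse intensities
    window : (lo, hi) bin range defining the component window
    """
    lo, hi = window
    # off-pulse rms estimated from bins outside the window
    # (an illustrative choice; a dedicated off-pulse region also works)
    off = np.concatenate([pulses[:, :lo], pulses[:, hi:]], axis=1)
    rms = off.std()
    keep = pulses[:, lo:hi].max(axis=1) > n_sigma * rms
    if not keep.any():
        return np.zeros(pulses.shape[1])
    return pulses[keep].mean(axis=0)

# synthetic demonstration: a weak component present in 10% of 500 pulses
rng = np.random.default_rng(0)
pulses = rng.normal(0.0, 1.0, (500, 64))
pulses[::10, 30:34] += 8.0
prof = window_threshold_profile(pulses, (30, 34))
assert prof[30:34].mean() > 5.0   # component stands out in the selected average
```

On synthetic pulses like these, the selected average recovers the component well above the noise of the plain average over all pulses.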
no-problem/9911/astro-ph9911482.html
ar5iv
text
# The Parkes Multibeam Pulsar Survey: preliminary results ## 1. Introduction Young pulsars are relatively rare objects in the pulsar population because they evolve rapidly, so on average their distance is relatively high. They are usually found at low Galactic latitudes, close to their places of birth, where their detection is limited by the high background temperature and by the broadening of pulses due to dispersion and interstellar scattering. On the other hand, they are interesting objects for many reasons: they are likely to be $\gamma$-ray sources; they exhibit rotational glitches, which are of interest in the understanding of the interior structure of neutron stars; they are likely to be associated with supernova remnants. High frequency ($\nu\geq 1400$ MHz) surveys (Clifton et al. 1992; Johnston et al. 1992) and searches (Kaspi et al. 1996; Manchester, D'Amico & Tuohy 1985) for young, low latitude, distant pulsars proved to be successful, because the contribution of the Galactic synchrotron radiation to the radiotelescope system temperature is highly reduced, because the effect of dispersion is more easily removed, and because the broadening of pulses due to interstellar scattering varies with frequency approximately as $\nu^{-4.4}$. Triggered by the above motivations, we are undertaking a new survey for pulsars along the Galactic plane at 1.4 GHz, using the 13-element multibeam receiver recently installed on the 64-m Parkes radiotelescope. In this paper we present the experiment configuration, the survey plan and the preliminary results of about 50% of the survey. ## 2. The Multibeam Survey Each beam of the multibeam receiver system at Parkes is approximately 0.23° wide and the beam centres are spaced 2 beamwidths apart (see Fig. 1). The survey pointings are interleaved to give complete sky coverage on a hexagonal grid containing a total of 2670 pointings of 13 beams each.
The parameters of the present experiment and those of two previous high frequency surveys of the Galactic plane are summarized in Table 1. Thanks to the long integration time adopted (35 min) and the high sensitivity of the new receiver system, the present survey has a sensitivity 7 times better than previous surveys. Fig. 2 shows the theoretical sensitivity as a function of the pulsar period and dispersion measure. ## 3. Results and discussion So far we have observed about 1600 pointings, 90% of which are analysed, corresponding to about 50% of the total survey region. The data reduction system is similar to that used in the Parkes low frequency survey (Manchester et al. 1996), and is carried out on a network of workstations. Because of the relatively long integrations adopted, we complement the standard search analysis with an “acceleration search” to take into account possible binary motions. To date we have discovered 513 new pulsars, and have detected 190 known pulsars. Accounting for the fact that so far we have searched the regions closest to the Galactic plane, we believe that the number of new discoveries for the entire survey should be somewhat over 800. Timing observations of the newly discovered pulsars are carried out at Jodrell Bank and Parkes. Observations are made at intervals of 4 – 8 weeks, or more closely spaced when pulse-counting statistics need to be resolved. Full timing solutions have been obtained so far for 80 pulsars. At least eight of the new discoveries are young pulsars ($\tau_c < 10^5$ years). Two radio pulsars with the highest known surface magnetic fields have been discovered (Kaspi et al., these proceedings). One of these objects, PSR J1119-6127, is very young, with a characteristic age $\tau_c$ = 1600 years. For this pulsar we also measured a braking index $n$ = 3.0 $\pm$ 0.1.
So far, eight of the newly discovered pulsars have proved to be members of binary systems, including a pulsar (PSR J1811-1736) in a highly eccentric binary system (Lyne et al. 1999) and a pulsar (PSR J1740-3052) with a very massive companion ($\sim$ 11 M$_\odot$). The basic parameters of the binary pulsars are shown in Table 2. ## REFERENCES Clifton, T.R., Lyne, A.G., Jones, A.W., McKenna, J., Ashworth, M. 1992, MNRAS, 254, 177 Johnston, S., Lyne, A.G., Manchester, R.N., Kniffen, D.A., D'Amico, N., Lim, J., Ashworth, M. 1992, MNRAS, 255, 401 Kaspi, V.M., Manchester, R.N., Johnston, S., Lyne, A.G., D'Amico, N. 1996, AJ, 111, 2028 Lyne, A.G., et al. 1999, MNRAS, in press; astro-ph/9911313 Manchester, R.N., D'Amico, N., Tuohy, I.R. 1985, MNRAS, 212, 975 Manchester, R.N., et al. 1996, MNRAS, 279, 1235
no-problem/9911/cond-mat9911410.html
ar5iv
text
# Untitled Document PAPER HAS BEEN WITHDRAWN
no-problem/9911/astro-ph9911328.html
ar5iv
text
# Cosmic Shear from Galaxy Spins ## 1 Introduction In the gravitational instability picture of structure formation, galaxy angular momentum arises from tidal torquing during the early protogalactic stages (Hoyle 1949). Gravitational torquing results whenever the inertia tensor of an object is misaligned with the local gravitational shear tensor. One might thus expect that the observed galaxy spin field would contain some information about the gravitational shear field, and possibly allow a statistical reconstruction thereof. Although the observational searches have yet to detect any significant alignments of the galaxy spins so far (Han et al. 1995), it has been shown that the sample size has to be increased in order to detect weak alignments (Cabanela & Dickey 1999 and references therein). It was argued that angular momentum only arises at second order in perturbation theory (Peebles 1969). This picture involved a spherical protogalaxy, which at first order has no quadrupole moment, and thus cannot be tidally torqued. Later, it was realized that protogalaxies should be expected to have order unity variations from spherical symmetry, and should be tidally torqued at first order (Doroshkevich 1970; White 1984). However, recent quantitative predictions of this picture (Catelan & Theuns 1996, hereafter CT) appear to be contradicted by simulations (Lemson & Kauffmann 1999), which leaves the field in a state of confusion. The Eulerian angular momentum of a halo in a region $V_E$ relative to its center of mass in comoving coordinates, $\mathbf{L}^{Eul}=\int_{V_E}\rho\,\mathbf{x}\times\mathbf{v}\,d^3\mathbf{x}$, can be written in terms of Lagrangian variables to second order accuracy using the Zel'dovich approximation: $$\mathbf{L}^{\mathrm{2nd}}\propto\int_{V_L}\overline{\rho}\,\mathbf{q}\times\nabla\varphi\,d^3\mathbf{q},$$ (1) where $V_L$ is the Lagrangian counterpart of $V_E$, and $\varphi$ is the gravitational potential.
Approximating $\varphi$ in equation (1) by the first three terms of the Taylor-series expansion about the center of mass, we obtain the first order expression for the angular momentum (White 1984) in terms of the shear tensor $\mathbf{T}=(T_{jl})=(\partial_j\partial_l\varphi)$ and the inertia tensor $\mathbf{I}=(I_{lk})=(\int q_lq_k\,d^3\mathbf{q})$: $$L_i^{\mathrm{1st}}\propto\epsilon_{ijk}T_{jl}I_{lk}.$$ (2) CT used equation (2) to calculate the linearly predicted angular momentum under the approximation that the principal axes of $\mathbf{I}$ and $\mathbf{T}$ are uncorrelated. They also discussed the small factor by which the neglect of the correlation between $\mathbf{I}$ and $\mathbf{T}$ overestimates the angular momentum in the context of the Gaussian-peak formalism. In $\S$ 2, we show by numerical simulations that this factor is in fact dominant. For galaxies, the direction of angular momentum is relatively straightforward to observe, while the magnitude is very difficult. We will thus concentrate on the statistics of the direction of the spin, dealing with $unit$ spin vectors. We note that the spin in equation (2) does not depend on the trace of either $\mathbf{I}$ or $\mathbf{T}$. So we can consider unit trace-free tensors, $\hat{T}_{ij}$ ($\hat{T}_{ij}\hat{T}_{ij}=1$), and the one point statistics of the spins. The most general quadratic relation between a unit spin vector and a unit traceless shear tensor is $$\langle\hat{L}_i\hat{L}_j|\mathbf{T}\rangle\equiv Q_{ij}=\frac{1+a}{3}\delta_{ij}-a\hat{T}_{ik}\hat{T}_{kj},$$ (3) where $a\in[0,1]$ is the correlation parameter, measuring how well aligned $\mathbf{I}$ and $\mathbf{T}$ are. If $\mathbf{I}$ and $\mathbf{T}$ are uncorrelated, $a=1$. If they are perfectly correlated, $a=0$.
## 2 Simulations We ran the N-body simulations using the PM (Particle-Mesh) code described by Klypin & Holtzman (1997) with $128^3$ particles on a $256^3$ mesh in a periodic box of size $L_b=80h^{-1}\mathrm{Mpc}$ for the cold dark matter model ($\Omega_0=1$, $\sigma_8=0.55$ and $h=0.5$). We identified halos in the final output of the N-body runs (total number of halos, $N_t=2975$) by the standard friends-of-friends algorithm with a linking length of $0.2$. The angular momentum of each halo was measured from the positions and velocities of the component particles in the center-of-mass frame. We also calculated the initial shear tensor by taking second derivatives of the gravitational potential at the Lagrangian center-of-mass of each halo. Through the simulations, we first tested the validity of the linear perturbation theory by calculating the average correlations among the 1st order, the 2nd order Lagrangian, and the Eulerian angular momentum obtained from the simulations. We have found that $\langle\hat{\mathbf{L}}^{\mathrm{2nd}}\cdot\hat{\mathbf{L}}^{\mathrm{Eul}}\rangle=0.55$, $\langle\hat{\mathbf{L}}^{\mathrm{1st}}\cdot\hat{\mathbf{L}}^{\mathrm{Eul}}\rangle=0.51$, and $\langle\hat{\mathbf{L}}^{\mathrm{1st}}\cdot\hat{\mathbf{L}}^{\mathrm{2nd}}\rangle=0.62$. This shows that nonlinear effects only add a factor of 2 scatter to the linearly predicted spin-shear correlation (Sugerman et al. 1999). Rotating the frame into the shear principal axes, we find the optimal estimation formula for the parameter $a$ from equation (3): $$a=2-6\sum_{i=1}^{3}\tilde{\lambda}_i^2|\hat{L}_i|^2,$$ (4) where $\{\tilde{\lambda}_i\}_{i=1}^3$ are the three eigenvalues of the trace-free unit shear tensor, satisfying $\sum_i\tilde{\lambda}_i^2=1$ and $\sum_i\tilde{\lambda}_i=0$.
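The estimator (4) can be checked with a quick Monte Carlo sketch (our own construction, not the simulation pipeline of this paper): we draw spin directions whose second moments follow eq. (3) in the shear principal-axes frame for an assumed true value of $a$, and verify that the halo-averaged estimator recovers it. (The constraints on $\tilde{\lambda}_i$ force $\sum_i\tilde{\lambda}_i^4=1/2$, which is what makes (4) work for any shear.)

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 0.237                  # illustrative input value
n = 200000

# trace-free unit shear eigenvalues: sum lam = 0, sum lam^2 = 1
lam = np.array([0.7, 0.1, -0.8])
lam = lam / np.linalg.norm(lam)

# spin realizations with <|L_i|^2> given by eq. (3) in the shear frame
p = (1 + a_true) / 3 - a_true * lam**2
p = p / p.sum()                 # sums to 1 analytically; renormalize roundoff
axes = rng.choice(3, size=n, p=p)
L2 = np.zeros((n, 3))
L2[np.arange(n), axes] = 1.0    # |L_i|^2 for each mock halo

a_est = 2 - 6 * np.mean(L2 @ lam**2)    # estimator of eq. (4), halo-averaged
assert abs(a_est - a_true) < 0.02
```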
Using equation (4) along with the initial shear tensors and the angular momenta of the final dark halos measured from the simulations, we calculated the average value of $a$ to be $0.237\pm 0.023$, showing a $10\sigma_a$ deviation \[$\sigma_a=\sqrt{8/(5N_t)}$\] from the value of $0$. The inferred value of $a$ results in a significant correlation of the shear intermediate principal axis with the direction of the angular momentum of halos. We have shown by simulations that the shear and the inertia tensors are misaligned with each other to a detectable degree, generating net tidal torques even at first order, although they are quite strongly correlated (it was found in the simulations that $\langle I_{11}\rangle=0.85$, $\langle I_{22}\rangle=-0.11$, $\langle I_{33}\rangle=-0.74$ for the trace-free unit inertia tensor $\mathbf{I}$ in the shear principal axes frame). Thus we conclude that the CT approximations are useful for determining the direction of the spin vector, but poor for predicting its magnitude (Lemson & Kauffmann 1999) due to the strong correlation between $\mathbf{I}$ and $\mathbf{T}$. The strong correlation can be understood physically if one considers $\mathbf{I}$ to be the collection of particles which have shell crossed, in which case it is identical to $\mathbf{T}$. ## 3 Shear Reconstruction As in the CT prescription, the probability distribution $P(\mathbf{L}|\mathbf{T})$ can be described as Gaussian, and its directional part, $P(\hat{\mathbf{L}}|\mathbf{T})$, is calculated as $$P(\hat{\mathbf{L}}|\mathbf{T})=\int P(\mathbf{L}|\mathbf{T})L^2\,dL=\frac{|\hat{\mathbf{Q}}|^{-1/2}}{4\pi}\left(\hat{\mathbf{L}}^{\mathrm{t}}\hat{\mathbf{Q}}^{-1}\hat{\mathbf{L}}\right)^{-3/2}$$ (5) where $\mathbf{Q}$ is given by equation (3). According to Bayes' theorem, $P(\mathbf{T}|\hat{\mathbf{L}})=P(\hat{\mathbf{L}}|\mathbf{T})P(\mathbf{T})/P(\hat{\mathbf{L}})$. Here, $P(\mathbf{T})=P[\mathbf{T}(\mathbf{x}_1),\mathbf{T}(\mathbf{x}_2),\ldots]$ is the joint probability distribution of the random process linking different points with each other.
In the standard picture of galaxy formation from random Gaussian fields, $P(\mathbf{T})$ is given as $P(\mathbf{T})=N\exp(-\mathbf{T}^t\mathbf{C}^{-1}\mathbf{T}/2)$ where $\mathbf{C}=(C_{ijkl})=\langle T_{ij}(\mathbf{x})T_{kl}(\mathbf{x}+\mathbf{r})\rangle$ is the two-point covariance matrix of the shear tensors: $$C_{ijkl}(\mathbf{r})=(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\left\{\frac{J_3}{6}-\frac{J_5}{10}\right\}+(\hat{r}_i\hat{r}_j\hat{r}_k\hat{r}_l)\left\{\xi(r)+\frac{5J_3}{2}-\frac{7J_5}{2}\right\}+(\delta_{ij}\hat{r}_k\hat{r}_l+\delta_{ik}\hat{r}_j\hat{r}_l+\delta_{il}\hat{r}_k\hat{r}_j+\delta_{jk}\hat{r}_i\hat{r}_l+\delta_{jl}\hat{r}_i\hat{r}_k+\delta_{kl}\hat{r}_i\hat{r}_j)\left\{\frac{J_5}{2}-\frac{J_3}{2}\right\}.$$ (6) Here $\hat{\mathbf{r}}=\mathbf{r}/r$, $J_n\equiv nr^{-n}\int_0^r\xi(r')r'^{n-1}dr'$, and $\xi(r)=\langle\delta(\mathbf{x})\delta(\mathbf{x}+\mathbf{r})\rangle$ is the density correlation function. We would like to compute the posterior expectation value, for example $\langle\mathbf{T}|\hat{\mathbf{L}}\rangle$. By symmetry, both the expectation value and the maximum likelihood of $\mathbf{T}$ occur at $\mathbf{T}=\mathbf{0}$, which is not the solution we are looking for. We must thus consider the constrained expectation, with the constraint that $\int T_{ij}T_{ij}\,d^3\mathbf{x}=1$. In the limit that $P(\mathbf{T}|\hat{\mathbf{L}})$ is Gaussian, this is given by the maximum eigenvector of the posterior correlation function $\xi_{ijlm}(\mathbf{x}_\alpha,\mathbf{x}_\beta)\equiv\langle T_{ij}(\mathbf{x}_\alpha)T_{lm}(\mathbf{x}_\beta)|\hat{\mathbf{L}}\rangle$. The solution satisfies $$\int\xi_{ijlm}(\mathbf{x}_\alpha,\mathbf{x}_\beta)T_{ij}(\mathbf{x}_\alpha)\,d^3\mathbf{x}_\alpha=\Lambda T_{lm}(\mathbf{x}_\beta)$$ (7) where $\Lambda$ is the largest eigenvalue for which equation (7) holds.
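The coefficient structure in eq. (6) can be checked by noting that $T_{ii}=\nabla^2\varphi=\delta$, so the double trace $C_{iikk}$ must reduce to $\xi(r)$. A small numerical sketch (the values of $\xi$, $J_3$, $J_5$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
rhat = rng.normal(size=3)
rhat /= np.linalg.norm(rhat)
xi_r, J3, J5 = 0.8, 0.5, 0.3        # illustrative values at some separation r

d = np.eye(3)
A = J3 / 6 - J5 / 10
B = xi_r + 5 * J3 / 2 - 7 * J5 / 2
E = J5 / 2 - J3 / 2

# Assemble C_ijkl of eq. (6) as a rank-4 array
C = (A * (np.einsum('ij,kl->ijkl', d, d)
          + np.einsum('ik,jl->ijkl', d, d)
          + np.einsum('il,jk->ijkl', d, d))
     + B * np.einsum('i,j,k,l->ijkl', rhat, rhat, rhat, rhat)
     + E * (np.einsum('ij,k,l->ijkl', d, rhat, rhat)
            + np.einsum('ik,j,l->ijkl', d, rhat, rhat)
            + np.einsum('il,k,j->ijkl', d, rhat, rhat)
            + np.einsum('jk,i,l->ijkl', d, rhat, rhat)
            + np.einsum('jl,i,k->ijkl', d, rhat, rhat)
            + np.einsum('kl,i,j->ijkl', d, rhat, rhat)))

# double trace must recover the density correlation function
assert abs(np.einsum('iikk', C) - xi_r) < 1e-12
```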
In the asymptotic case that $`a\ll 1`$, as in our simulation results, we have a simple expression for the $`\widehat{L}_i\widehat{L}_j`$-dependent part of the posterior correlation function in terms of the traceless shear, $`\stackrel{~}{T}_{ij}=T_{ij}-\delta _{ij}T_{ll}/3`$: $$\stackrel{~}{\xi }_{ijlm}(𝐱_\alpha ,𝐱_\beta )=\int \stackrel{~}{C}_{ijnk}(𝐱_\alpha -𝐱_\gamma )\stackrel{~}{C}_{lmok}(𝐱_\beta -𝐱_\gamma )\widehat{L}_n(𝐱_\gamma )\widehat{L}_o(𝐱_\gamma )d^3𝐱_\gamma ,$$ (8) where $`\stackrel{~}{𝐂}`$ is now the trace-free two-point covariance matrix of the shear tensors, related to $`𝐂`$ by $`\stackrel{~}{C}_{ijkl}=C_{ijkl}-\delta _{kl}C_{ijnn}/3-\delta _{ij}C_{mmkl}/3+\delta _{ij}\delta _{kl}C_{mmnn}/9`$. Substituting (3) into (8) explicitly satisfies a modified version of (7). It is readily described in Fourier space: $$\int \frac{\stackrel{~}{\xi }_{ijlm}(𝐤_\alpha ,𝐤_\beta )}{P(k_\alpha )P(k_\beta )}\stackrel{~}{T}_{ij}(𝐤_\alpha )d^3𝐤_\alpha =\mathrm{\Lambda }\stackrel{~}{T}_{lm}(𝐤_\beta ).$$ (9) Equation (9) differs from (7) by the appropriate Wiener filter due to small $`a`$, since we have assumed no noise. Since we have discarded the constant diagonal component which does not depend on $`\widehat{L}_i\widehat{L}_j`$, equation (8) is not positive definite, and in fact has zero trace. We can nevertheless use a power iteration to quickly obtain the eigenvector corresponding to the largest eigenvalue. One starts with an initial guess $`\stackrel{~}{T}_{ij}^0(𝐱_\alpha )`$, and defines iterates $`\stackrel{~}{T}_{ij}^{n+1/2}=\stackrel{~}{\xi }_{ijlm}\stackrel{~}{T}_{lm}^n`$ and $`\stackrel{~}{T}_{ij}^{n+1}=\stackrel{~}{T}_{ij}^{n+1/2}/\sqrt{(\stackrel{~}{T}_{ij}^{n+1/2})^2}+\stackrel{~}{T}_{ij}^{n-1}`$. This effectively eliminates the negative eigenvectors from the iteration, and converges to the correct solution. 
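A minimal sketch of the idea on a generic symmetric indefinite matrix (a stand-in for the discretized correlator (8), not the correlator itself). Here the negative eigenvalues are suppressed with an explicit diagonal shift, which plays the same stabilizing role as the $`\stackrel{~}{T}^{n-1}`$ term in the iteration above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # symmetric, indefinite: eigenvalues of both signs

# Shifted power iteration: adding shift*x makes the algebraically largest
# eigenvalue dominant in magnitude, so the iteration converges to its
# eigenvector even when |lambda_min| exceeds lambda_max.
shift = np.linalg.norm(A, 1)         # any upper bound on |lambda_min| works
x = rng.standard_normal(n)
for _ in range(20000):
    x = A @ x + shift * x
    x /= np.linalg.norm(x)

lam = x @ A @ x                      # Rayleigh-quotient estimate of Lambda_0
w, V = np.linalg.eigh(A)
assert np.isclose(lam, w[-1], rtol=1e-4)
assert np.isclose(abs(x @ V[:, -1]), 1.0, atol=1e-5)
```

As in the text, the fractional error after $`n`$ steps falls off geometrically in the ratio of the two leading (shifted) eigenvalues, so only matrix-vector products are ever needed.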
After $`n`$ iterations, the fractional error is proportional to $`(\mathrm{\Lambda }_1/\mathrm{\Lambda }_0)^n`$, where $`\mathrm{\Lambda }_0`$ and $`\mathrm{\Lambda }_1`$ are the largest and second largest eigenvalues, respectively. In our experience, fifty iterations converge the result to within a percent of the exact solution. In practice, one can expect the correlation to drop off rapidly at large separations, requiring only an evaluation of a sparse matrix-vector multiplication. The final step is the reconstruction of the density given the traceless shears. It is convenient to consider the full shear as an orthonormal reparametrization, in terms of trace and trace-free components, into a vector $`T_i=\{\delta /\sqrt{3},v_2,v_3,\sqrt{2}T_{12},\sqrt{2}T_{23},\sqrt{2}T_{31}\}`$ where $`\delta =T_{11}+T_{22}+T_{33}`$, $`v_2=[(3-\sqrt{3})T_{11}+2\sqrt{3}T_{22}-(3+\sqrt{3})T_{33}]/6`$ and $`v_3=[-(3+\sqrt{3})T_{11}+2\sqrt{3}T_{22}+(3-\sqrt{3})T_{33}]/6`$. The two-point correlation function (6) is just linearly transformed into these new variables, which we will denote $`\langle T_i(𝐱)T_j(𝐱+𝐫)\rangle =C_{ij}^l(r)`$, where the indices $`i,j`$ run from $`1`$ to $`6`$. The inverse correlation will be denoted $`𝐃=(𝐂^l)^{-1}`$. From $`𝐃`$ we extract a scalar correlation $`D_{11}(r)`$ and a 5 component vector correlation $`E_\mu =\{D_{12},D_{13},D_{14},D_{15},D_{16}\}`$. Similarly, we will use a Greek index $`T_\mu `$ to denote the 5 component vector of $`T_i`$ where $`i>1`$. We then obtain the expression for the density $`\langle \delta |T_\mu \rangle =-D_{11}^{-1}E_\mu T_\mu `$. If the galaxies are uniformly spaced on a lattice, one could also use a Fast Fourier Transform to rapidly perform the shear reconstruction iterations, and the projection of traceless shear to density. To test the reconstruction procedure, we have made a large stochastic spin field on a $`128^3`$ lattice and applied the reconstruction procedure with a Wiener filtering scale of $`k=32`$ for the case of a power-law spectrum, $`P(k)=k^{-2}`$. 
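The six-vector repackaging of the symmetric shear can be checked directly. The diagonal combinations $`v_2,v_3`$ below are written with the minus signs required for orthonormality (one consistent sign choice; the printed formulas lost their signs in extraction), so the Frobenius norm of $`T_{ij}`$ is preserved:

```python
import numpy as np

S3 = np.sqrt(3.0)

def to_vector(T):
    """Orthonormal repackaging of a symmetric 3x3 shear into the 6-vector
    {delta/sqrt3, v2, v3, sqrt2*T12, sqrt2*T23, sqrt2*T31}."""
    delta = T[0, 0] + T[1, 1] + T[2, 2]
    v2 = ((3 - S3) * T[0, 0] + 2 * S3 * T[1, 1] - (3 + S3) * T[2, 2]) / 6
    v3 = (-(3 + S3) * T[0, 0] + 2 * S3 * T[1, 1] + (3 - S3) * T[2, 2]) / 6
    s2 = np.sqrt(2.0)
    return np.array([delta / S3, v2, v3, s2 * T[0, 1], s2 * T[1, 2], s2 * T[2, 0]])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
T = (A + A.T) / 2
v = to_vector(T)
# orthonormality of the reparametrization <=> the Frobenius norm is preserved
assert np.isclose(v @ v, np.sum(T * T))
# the first component carries the trace; the remaining five are trace-free
assert np.isclose(v[0], np.trace(T) / S3)
```

Because the map is orthogonal, the correlation matrix $`C_{ij}^l`$ in the new variables is just the congruence transform of equation (6), as stated in the text.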
Figure 1 plots the correlation coefficient $`c=\langle \delta (𝐤)\delta _r^{*}(𝐤)\rangle /\sqrt{P(k)\langle |\delta _r(𝐤)|^2\rangle }`$ between the reconstructed density field $`\delta _r`$ and the original density field $`\delta `$. We see that for $`a=1`$, corresponding to the case of uncorrelated shear and inertia tensors, the reconstruction is very accurate on large scales (small $`k`$) and noisy on small scales (large $`k`$), as expected. Even for the realistic case of $`a\simeq 0.2`$ given the nonlinear effects, Figure 1 shows that a reconstruction is still quite possible, but with less accuracy. Note that the reconstructed signal automatically becomes Wiener filtered on large scales. ## 4 Observational Effects Let us now mention several complications that might arise with real data, and speculate on how they might be addressed. The three-dimensional spin axis of a disk galaxy can in principle be determined (Han et al. 1995), but this requires knowledge of the rotation curve as well as morphological information, such as extinction maps or the direction of spiral arms. A much easier observational task is to measure only the position angle $`\alpha `$ on the sky and the projected axis ratio $`r`$. Assuming a flat disk geometry for the light and the $`z`$ coordinate along the radial direction, this corresponds to observation of $`r=|\widehat{L}_z|`$ and $`\mathrm{tan}(\alpha )=\widehat{L}_y/\widehat{L}_x`$. We immediately note two discrete degeneracies: $`\widehat{L}_z\rightarrow -\widehat{L}_z`$ and $`\{\widehat{L}_x,\widehat{L}_y\}\rightarrow \{-\widehat{L}_x,-\widehat{L}_y\}`$, giving four possible spin orientations consistent with the observables. Since we only care about the spatial orientation of the spin axis, but not its sign, only a twofold degeneracy exists, which we call $`\widehat{𝐋}^a,\widehat{𝐋}^b`$. We can readily take that into account. 
Noting that $`P(\widehat{𝐋}^a\cup \widehat{𝐋}^b|𝐓)=P(\widehat{𝐋}^a|𝐓)+P(\widehat{𝐋}^b|𝐓)-P(\widehat{𝐋}^a\cap \widehat{𝐋}^b|𝐓)`$, where the last term is zero, we simply replace all occurrences of $`\widehat{L}_i\widehat{L}_j`$ in equation (8) with $`\widehat{L}_i^a\widehat{L}_j^a+\widehat{L}_i^b\widehat{L}_j^b`$. The reconstruction itself is quite noisy for an individual galaxy, so we will treat as noise the displacement between the observable galaxy positions in Eulerian redshift space and their Lagrangian positions used in our analysis. If galaxies are randomly displaced from their origins by $`\sigma _r`$, we simply convolve the two-point density correlation function $`\xi (r)`$ with a Gaussian of variance $`\sigma _r^2`$, and appropriately update the shear correlation matrix. We furthermore note that the use of redshift for distance introduces an anisotropy in the smoothing Gaussian, which can also be readily taken into account. This procedure could in principle be inverted to measure the nonlinear length scale $`\sigma _r`$ if one knows the intrinsic power spectrum: one could vary $`\sigma _r`$ until one maximized the eigenvalue $`\mathrm{\Lambda }`$ in equation (7). The shear reconstruction correlator (8) is only accurately defined at each galaxy position, so the integral should be replaced by a sum over galaxies. The eigenvector iteration scheme works the same way with the discretized matrix. The density is recovered from the trace-free shear in the same fashion as in the continuum limit, where again we only reconstruct the density at the galaxy positions. The density at any other point can be reconstructed in a similar fashion using the posterior expectation value given the Gaussian random field. The observed spin vectors are those of the luminous material; they can be observed out to large radii, tens of kpc, through radio emission of the gas. 
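The twofold deprojection degeneracy can be made concrete: from an observed axis ratio $`r=|\widehat{L}_z|`$ and position angle $`\alpha `$, the two admissible orientations differ only in the sign of $`\widehat{L}_z`$, and equation (8) then uses the sign-insensitive combination $`\widehat{L}_i^a\widehat{L}_j^a+\widehat{L}_i^b\widehat{L}_j^b`$. A minimal sketch:

```python
import numpy as np

def degenerate_spins(r, alpha):
    """Two unit spin orientations consistent with observed r = |L_z| and
    position angle alpha (tan alpha = L_y / L_x), modulo overall sign."""
    s = np.sqrt(1.0 - r ** 2)                  # in-plane component of the unit spin
    La = np.array([s * np.cos(alpha), s * np.sin(alpha), r])
    Lb = np.array([s * np.cos(alpha), s * np.sin(alpha), -r])   # L_z -> -L_z
    return La, Lb

La, Lb = degenerate_spins(0.6, 0.3)
for L in (La, Lb):
    assert np.isclose(L @ L, 1.0)                      # unit vector
    assert np.isclose(abs(L[2]), 0.6)                  # reproduces the axis ratio
    assert np.isclose(np.arctan2(L[1], L[0]), 0.3)     # reproduces the position angle

# the combination entering equation (8) in place of L_i L_j:
Q = np.outer(La, La) + np.outer(Lb, Lb)
assert np.allclose(Q, np.outer(-La, -La) + np.outer(Lb, Lb))   # blind to overall sign
```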
While some warping of the disks is seen, the direction of the spin vector generally changes only very modestly as one moves to larger radii. This suggests that the luminous angular momentum is well correlated with the angular momentum of the whole halo. Standard galaxy biasing is not expected to affect the reconstruction, since we have not used the galaxy densities but only the spin directions. Merging of galaxies is also taken into account by this reconstruction procedure, since the resulting spin vectors are expected to align with the constituent orbital angular momentum vectors, which are again predicted by the same shear formula. The merger effects are included in the N-body simulations presented above. ## 5 Conclusions We have presented a direct inversion procedure to reconstruct the gravitational shear and density fields from the observable unit galaxy spin field. The procedure is algebraically unique, involving only linear algebra, and is computationally tractable when done iteratively. We have shown that the angular momentum-shear correlation signal in simulations is strong enough to allow useful reconstructions. Direct N-body simulations suggest that nonlinear collapse effects only reduce the linearly predicted spin-shear correlation by a factor of 2. The deprojection degeneracy can be incorporated in a straightforward fashion. Large galaxy surveys with a million galaxy redshifts are coming on line soon. This procedure may allow a density reconstruction which is completely independent of any other method that has thus far been proposed. We have demonstrated its effectiveness in simulated spin catalogs. Due to the nature of this letter, only the key results essential to the reconstruction algorithm have been presented. Detailed intermediate steps and justifications will be shown elsewhere (Lee & Pen 2000). We thank Giuseppe Tormen and Antonaldo Diaferio for providing the halo-finding algorithm, and the referee for useful suggestions. J. 
Lee also thanks Anatoly Klypin for helpful comments on the use of the PM code. This work has been supported by Academia Sinica, and partially by NSERC grant 72013704 and computational resources of the National Center for Supercomputing Applications.
# Quantum fluctuations of the angular momentum and energy of the ground state M.N. Sergeenko The National Academy of Sciences of Belarus, Institute of Physics, Minsk 220072, Belarus and Gomel State University, Gomel 246699, Belarus ## Abstract A quasiclassical solution of the three-dimensional Schrödinger equation is given. The existence of a nonzero minimal angular momentum $`M_0=\frac{\hbar }{2}`$ is shown, which corresponds to the quantum fluctuations of the angular momentum and contributes to the energy of the ground state. PACS number(s): 03.05.Ge, 03.65.Sq One of the fundamental features of quantum mechanical systems is the nonzero minimal energy which corresponds to zero-point oscillations. The corresponding wave function has no zeros in the physical region. A typical example is the harmonic oscillator. The eigenvalues of the one-dimensional harmonic oscillator are $`E_n=\hbar \omega _0(n+\frac{1}{2})`$, i.e. the energy of zero-point oscillations is $`E_0=\frac{1}{2}\hbar \omega _0`$. In the three-dimensional case, in Cartesian coordinates, the eigenvalues of the oscillator are $`E_n=\hbar \omega _0(n_x+n_y+n_z+\frac{3}{2})`$, i.e. each degree of freedom contributes to the energy of the ground state, $`E_0=E_{0,x}+E_{0,y}+E_{0,z}=\frac{3}{2}\hbar \omega _0`$. The energy of the ground state should not depend on the coordinate system. This means that, in spherical coordinates, each degree of freedom (radial and angular) should contribute to the energy of zero-point oscillations. In many applications and physical models a nonzero minimal angular momentum $`M_0`$ is introduced (phenomenologically) in order to obtain a physically meaningful result. However, the existence of $`M_0`$ follows from the quasiclassical solution of the three-dimensional Schrödinger equation. Known exact methods to solve the Schrödinger equation are usually mathematical methods. 
However, together with quantum mechanics, an appropriate method to solve the Schrödinger equation has been developed; it is general for all types of problems in quantum mechanics, and its correct application results in the exact energy eigenvalues for all solvable potentials. This is the phase-space integral method, also known as the WKB method. The general form of the semiclassical description of quantum-mechanical systems has been considered in the literature. It was shown that the semiclassical description resulting from Weyl’s association of operators to functions is identical with the quantum description and no information need be lost in going from one to the other. What is more, ”the semiclassical description is more general than the quantum mechanical description…”. The semiclassical approach merely becomes a different representation of the same algebra as that of the quantum mechanical system, and then the expectation values, dispersions, and dynamics of both become identical. The WKB method was originally proposed for obtaining approximate eigenvalues of one-dimensional and radial Schrödinger problems in the limiting case of large quantum numbers. The exactness of the method for solvable potentials has been proved in many works. As for multi-dimensional problems, the quasiclassical method is even more efficient and predictive. In the quasiclassical method, classical quantities such as the classical momentum, classical action, phase, etc. are used. The WKB quantization condition and WKB solution are written via the classical momentum. However, the generalized momenta obtained from the separation of the three-dimensional Schrödinger equation are different from the corresponding classical momenta. 
The standard WKB method in leading order in $`\hbar `$ always reproduces the exact spectrum for the solvable spherically symmetric potentials $`V(|\stackrel{}{r}|)`$ if the Langer correction $`l(l+1)\rightarrow (l+\frac{1}{2})^2`$ is made in the centrifugal term of the radial Schrödinger equation. The justification of this correction for the special case of the Coulomb potential was given by Langer ($`1937`$) by means of reducing the Schrödinger equation to canonical form (without first derivatives). However, the Langer replacement $`l(l+1)\rightarrow (l+\frac{1}{2})^2`$ is universal for any spherically symmetric potential and requires modification of the squared angular momentum. The Schrödinger equation for a spherically symmetric potential $`V(r)`$, in the representation of the wave function $`\psi (\stackrel{}{r})`$, admits separation of variables \[$`\psi (\stackrel{}{r})=R(r)\mathrm{\Theta }(\theta )\mathrm{\Phi }(\phi )`$\]. After excluding the first derivatives, this equation can be written in the form of the classical equation $`-\hbar ^2{\displaystyle \frac{\stackrel{~}{R}_{rr}^{\prime \prime }}{\stackrel{~}{R}}}+{\displaystyle \frac{1}{r^2}}\left(-\hbar ^2{\displaystyle \frac{\stackrel{~}{\mathrm{\Theta }}_{\theta \theta }^{\prime \prime }}{\stackrel{~}{\mathrm{\Theta }}}}-{\displaystyle \frac{\hbar ^2}{4}}\right)+{\displaystyle \frac{1}{r^2\mathrm{sin}^2\theta }}\left(-\hbar ^2{\displaystyle \frac{\stackrel{~}{\mathrm{\Phi }}_{\phi \phi }^{\prime \prime }}{\stackrel{~}{\mathrm{\Phi }}}}-{\displaystyle \frac{\hbar ^2}{4}}\right)=`$ (1) $`2m[E-V(r)],`$ where $`\stackrel{~}{R}(r)=rR(r)`$, $`\stackrel{~}{\mathrm{\Theta }}(\theta )=\sqrt{\mathrm{sin}(\theta )}\mathrm{\Theta }(\theta )`$, $`\stackrel{~}{\mathrm{\Phi }}(\phi )=\mathrm{\Phi }(\phi )`$. 
Separation of equation (1) results in three second-order differential equations in canonical form: $$\left(-i\hbar \frac{d}{dr}\right)^2\stackrel{~}{R}=\left[2m(E-V)-\frac{\stackrel{}{M}^2}{r^2}\right]\stackrel{~}{R},$$ (2) $$\left[\left(-i\hbar \frac{d}{d\theta }\right)^2-\left(\frac{\hbar }{2}\right)^2\right]\stackrel{~}{\mathrm{\Theta }}(\theta )=\left(\stackrel{}{M}^2-\frac{M_z^2}{\mathrm{sin}^2\theta }\right)\stackrel{~}{\mathrm{\Theta }}(\theta ),$$ (3) $$\left[\left(-i\hbar \frac{d}{d\phi }\right)^2-\left(\frac{\hbar }{2}\right)^2\right]\stackrel{~}{\mathrm{\Phi }}(\phi )=M_z^2\stackrel{~}{\mathrm{\Phi }}(\phi ),$$ (4) where $`\stackrel{}{M}^2`$ and $`M_z^2`$ are the constants of separation and, at the same time, integrals of motion. Equations (2)-(4) have the quantum-mechanical form $`\widehat{f}\psi =f\psi `$, where $`f`$ is the physical quantity and $`\widehat{f}`$ is the corresponding operator. The generalized momenta in the right-hand sides of Eqs. (2)-(4) should be used in the WKB quantization condition and WKB solution; this solves the known problem of applying the WKB method to the three-dimensional Schrödinger equation<sup>1</sup><sup>1</sup>1Quantities $`-\hbar ^2\stackrel{~}{R}_{rr}^{\prime \prime }/\stackrel{~}{R}`$, $`-\hbar ^2\stackrel{~}{\mathrm{\Theta }}_{\theta \theta }^{\prime \prime }/\stackrel{~}{\mathrm{\Theta }}`$, $`-\hbar ^2\stackrel{~}{\mathrm{\Phi }}_{\phi \phi }^{\prime \prime }/\stackrel{~}{\mathrm{\Phi }}`$ obtained after separation of equation (1) are usually considered as the squared momenta which are used in the WKB quantization condition; this results in the known difficulties of the WKB method.. We solve each of the equations obtained after separation by the same method, i.e. the WKB method. For the projection of the angular momentum, from the quantization condition $`\oint p(\phi )𝑑\phi =2\pi m\hbar `$ (here $`p(\phi )=M_z`$), we have $`M_z=m\hbar `$, $`m=0,\pm 1,\pm 2,\mathrm{\dots }`$. 
The squared angular momentum eigenvalues, $`\stackrel{}{M}^2`$, are defined from the WKB quantization condition ($`\theta _1`$ and $`\theta _2`$ are the classical turning points) $$\int _{\theta _1}^{\theta _2}\sqrt{\stackrel{}{M}^2-\frac{M_z^2}{\mathrm{sin}^2\theta }}𝑑\theta =\pi \hbar \left(n_\theta +\frac{1}{2}\right),n_\theta =0,1,2,\mathrm{\dots }$$ (5) Integration of (5) gives for $`\stackrel{}{M}^2`$, $$\stackrel{}{M}^2=\left(l+\frac{1}{2}\right)^2\hbar ^2,l=|m|+n_\theta .$$ (6) Energy eigenvalues are defined from the condition $$\int _{r_1}^{r_2}\sqrt{2m[E-V(r)]-\frac{\stackrel{}{M}^2}{r^2}}𝑑r=\pi \hbar \left(n_r+\frac{1}{2}\right),n_r=0,1,2,\mathrm{\dots },$$ (7) where $`r_1`$, $`r_2`$ are the classical turning points. The WKB solution corresponding to the eigenvalues (6) has the correct asymptotic behavior at $`\theta \rightarrow 0`$ and $`\pi `$ for all values of $`l`$. In the representation of the wave function $`\psi (\stackrel{}{r})`$, $`\mathrm{\Theta }_l^m(\theta )=\stackrel{~}{\mathrm{\Theta }}^{WKB}(\theta )/\sqrt{\mathrm{sin}\theta }\propto \theta ^{|m|}`$, which corresponds to the behavior of the exact wave function $`Y_{lm}(\theta ,\phi )`$ at $`\theta \rightarrow 0`$. The normalized quasiclassical solution \[far from the turning points, where $`p(\theta )\simeq (l+\frac{1}{2})\hbar `$\] in the representation of the wave function $`\stackrel{~}{\psi }(\stackrel{}{r})=\stackrel{~}{R}(r)\stackrel{~}{\mathrm{\Theta }}(\theta )\stackrel{~}{\mathrm{\Phi }}(\phi )`$ is written in elementary functions in the form of a standing wave, $$\stackrel{~}{Y}_{lm}(\theta ,\phi )=\frac{1}{\pi }\sqrt{\frac{2l+1}{l-|m|+\frac{1}{2}}}\mathrm{cos}\left[\left(l+\frac{1}{2}\right)\theta +\frac{\pi }{2}\left(l-|m|\right)\right]e^{im\phi },$$ (8) where we have taken into account that the phase-space integral at the classical turning point $`\theta _1`$ is $`\chi (\theta _1)=\frac{\pi }{2}(n_\theta +\frac{1}{2})`$. Consider now important consequences of the above solution. First of all note that Eq. 
(8) shows the existence of a nontrivial solution at $`l=0`$. Setting in (8) $`m=0`$, $`l=0`$ we obtain $$\stackrel{~}{Y}_{00}(\theta ,\phi )=\frac{\sqrt{2}}{\pi }\mathrm{cos}\frac{\theta }{2}.$$ (9) Note that the angular eigenfunction $`\stackrel{~}{Y}_{00}(\theta ,\phi )`$ of the ground state is symmetric and has the form of a standing half-wave. The corresponding eigenvalue is $$M_0=\frac{\hbar }{2}.$$ (10) The eigenvalue (10) contributes to the energy of zero-point oscillations. This means that (9) and (10) can be considered as a solution which describes the quantum fluctuations of the angular momentum. Quantization condition (7) results in the exact energy eigenvalues for all solvable spherically symmetric potentials in quantum mechanics. In particular, for the isotropic oscillator, we have $`E_n=\omega _0[2\hbar (n_r+\frac{1}{2})+M]`$. Since $`M=(l+\frac{1}{2})\hbar `$, this results in the exact energy eigenvalues for the harmonic oscillator. For the energy of zero-point oscillations, $`E_0`$, we have $`E_0=\omega _0(\hbar +M_0)`$, which explicitly shows the contribution of the quantum fluctuations of the angular momentum to the energy of the ground state $`E_0`$. The Coulomb problem is another classic example. Quantization condition (7) again reproduces the exact result, $`E_n=-\frac{1}{2}\alpha ^2m[(n_r+\frac{1}{2})\hbar +M]^{-2}`$. As in the previous example, the quantum fluctuations of the angular momentum, $`M_0`$, contribute to the energy of the ground state, $`E_0=-\frac{1}{2}\alpha ^2m(\frac{\hbar }{2}+M_0)^{-2}`$. A third example, the Hulthén potential, is of special interest in atomic and molecular physics. The radial problem for this potential is usually considered at $`l=0`$. However, the quasiclassical approach results in a nonzero centrifugal term at $`l=0`$ and allows one to obtain the analytic result for this potential at any $`l`$. 
The leading-order WKB quantization condition (7) for the Hulthén potential is $$I=\int _{r_1}^{r_2}\sqrt{2m\left(E+V_0\frac{e^{-r/r_0}}{1-e^{-r/r_0}}\right)-\frac{\stackrel{}{M}^2}{r^2}}𝑑r=\pi \hbar \left(n_r+\frac{1}{2}\right).$$ (11) Calculation of this integral results in the energy spectrum $$E_n=-\frac{1}{8mr_0^2}\left(\frac{2mV_0r_0^2}{N}-N\right)^2,$$ (12) where $`N=(n_r+\frac{1}{2})\hbar +M`$ denotes the principal quantum number. As in the previous examples, this formula explicitly shows the contribution of the quantum fluctuations of the angular momentum to the energy of the ground state $`E_0`$. Setting in (12) $`M=0`$, we arrive at the energy eigenvalues obtained from the known exact solution of the Schrödinger equation at $`l=0`$. However, in our case $`M_{min}=\frac{\hbar }{2}`$ at $`l=0`$ and the principal quantum number is $`N=(n_r+\frac{1}{2})\hbar +M_0`$. Thus the quasiclassical method is an appropriate method to solve the three-dimensional Schrödinger equation. The Langer correction has a deep physical origin as a correction to the squared angular momentum eigenvalues. It points out the existence of the integral of motion $`\stackrel{}{M}^2=(l+\frac{1}{2})^2\hbar ^2`$. The squared angular momentum eigenvalues $`\stackrel{}{M}^2`$ are the same for all spherically symmetric potentials and no further corrections are necessary. The eigenfunctions far from the turning points can be written in terms of elementary functions in the form of a standing wave and can be treated as a special class of the exact solutions. The angular eigenfunction at $`l=0`$ is of the type of a standing half-wave. This solution has been treated as one which describes the quantum fluctuations of the angular momentum corresponding to the eigenvalue $`M_0=\frac{\hbar }{2}`$.
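Both quantization integrals can be checked numerically in units with $`\hbar =m=1`$ (`scipy` is assumed available). The angular integral (5) evaluates to $`\pi (M-|M_z|)`$, reproducing (6), and for the isotropic oscillator the radial condition (7) is satisfied exactly at $`E=\omega _0[2(n_r+\frac{1}{2})+M]`$ with $`M=l+\frac{1}{2}`$:

```python
from math import asin, pi, sin, sqrt
from scipy.integrate import quad

def angular_action(M, Mz):
    """Left side of equation (5); turning points at sin(theta) = |Mz|/M."""
    t1 = asin(abs(Mz) / M)
    f = lambda th: sqrt(max(M**2 - Mz**2 / sin(th)**2, 0.0))
    return quad(f, t1, pi - t1, limit=200)[0]

def radial_action_osc(E, M):
    """Left side of equation (7) for the isotropic oscillator (m = omega_0 = 1)."""
    disc = sqrt(E**2 - M**2)
    r1, r2 = sqrt(E - disc), sqrt(E + disc)   # roots of 2E - r^2 - M^2/r^2 = 0
    f = lambda r: sqrt(max(2*E - r*r - M*M/(r*r), 0.0))
    return quad(f, r1, r2, limit=200)[0]

for l, mq in [(0, 0), (1, 0), (2, 1), (5, 3)]:
    # equation (5) with M = l + 1/2, Mz = mq gives pi (n_theta + 1/2), n_theta = l - |mq|
    assert abs(angular_action(l + 0.5, mq) - pi * (l - abs(mq) + 0.5)) < 1e-6

for n_r in (0, 1, 2):
    for l in (0, 1, 3):
        M = l + 0.5
        E = 2 * (n_r + 0.5) + M               # E_n = omega_0 [2 (n_r + 1/2) + M]
        assert abs(radial_action_osc(E, M) - pi * (n_r + 0.5)) < 1e-6
```

In particular, the $`l=0`$, $`m=0`$ case carries the residual action $`\pi M_0=\pi /2`$ from the angular degree of freedom, which is the quantum fluctuation contribution discussed above.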
# Polarization Characteristics of Pulsar Profiles ## 1. Circular Polarization Han et al. (1998) systematically studied the circular polarization of pulsar profiles. They found that circular polarization is common in profiles but diverse in nature. Circular polarization is about 9% on average, weaker than linear polarization (15%) in general. We emphasize the following points about circular polarization (CP): (1). CP not unique to core emission: One misleading concept is that circular polarization always accompanies core emission. It is generally strongest in the central or ‘core’ regions of a profile, but is by no means confined to these regions. Circular polarization has been detected from conal components of many pulsars, for example, conal-double pulsars. (2). Sense reversal not unique to core: Circular polarization often changes sense near the middle of the profile. But sense reversals have been observed at other longitudes, e.g. conal components of PSRs B0834+06, B1913+16, B2020+28, B1039-19, J1751-4657, and B0329+54 in the abnormal mode. (3). No PA correlation for core emission: There is no correlation between the sense of the sign change of circular polarization and the sense of variation of the linear polarization position angle (PA), in contrast to earlier conclusions on this issue. (4). Correlation for cone-dominated pulsars: We found a strong correlation between the sign of PA variation and the sense of circular polarization in conal-double pulsars, with right-hand (negative) circular polarization accompanying increasing PA and vice versa. No good examples contrary to this trend have been found. (5). Variation with frequency: Circular polarization generally does not vary systematically with frequency. Multifrequency profiles of many pulsars show similar CP across a wide frequency range (e.g. PSRs B0329+54, B0525+21). However, we have found several examples with significant variations (Table 5 of Han et al. 1998). 
For example, PSR B1749-28 has now been confirmed by the data of GL98 to change from dominant right-hand (“$`-`$”) CP at low frequencies to a sense change “$`-/+`$” at high frequencies, while PSRs B1859+03 and B1900+01 change from “$`+`$” to “$`+/-`$”. ## 2. High Linear Polarization Strong linear polarization is an outstanding characteristic of pulsars. Almost all pulsars with log $`\dot{E}>34.5`$, if a polarization profile is available, have (at least) one highly linearly polarized component. Using our profile database, we found that highly linearly polarized components do not have to be associated with high $`\dot{E}`$. Several types of pulsar profiles have highly polarized components: (1). Leading-polarized component: The prototype of this kind of pulsar is PSR B0355+54. The leading component is almost 100% polarized. The component does not dominate the profile below several hundred MHz, but becomes stronger towards high frequency. It may be emitted from a different region. Other examples are PSRs B0450+55, B1842+14, B2021+51, B0809+74, B0626+24, B1822-09. (2). Trailing-polarized component: PSR B0559-05 is a mirror-symmetrical type to PSR B0355+54. Its trailing component is highly linearly polarized, and becomes stronger with increasing frequency. Good examples are PSRs B2224+65, J1012+5307 and J1022+1001. The latter two are millisecond pulsars. (3). Polarized multicomponents: The prototype PSR B0740-28 has 7 Gaussian components fitted to a high time resolution profile. These pulsars have a sharp leading edge and a more gradual trailing edge, and are highly polarized. Good examples are PSRs J0538+2817, B0540+23, B0833-45, B0950+08i, J1359-6038, B1929+10m. PSRs B0538-75, J0134-2937 and the postcursor of B0823+26 have profiles that are just mirror-symmetrical to these examples. (4). Polarized single component: Almost 100% polarized single components may be conal emission emitted from the very outer edge of the beam. 
Two dozen examples have been found. The best examples are PSRs B0105+65, B0611+22, B0628-28, B1322+83, B1556-44, J1603-5657, B1706-44, B1828-10, B1848+13, B1913+10, B1915+13. (5). Interpulse pulsars: PSRs B0906-49 and B1259-63 are young, energetic (log $`\dot{E}>35`$), interpulse pulsars with extremely high polarization. These may be pulsars where both sides of a wide conal beam from a single pole are observed. PSR J0631+1036 may be an off-centre cut through a wide cone. The polarization characteristics of the mean pulse profile provide a framework for understanding the emission processes in pulsars. The characteristics of pulsar circular polarization summarized by Han et al. (1998) should be considered by all emission models. The points on high linear polarization we make here should be included in future pulsar classifications and geometrical studies of the pulsar emission beam. Acknowledgements JLH thanks the National Natural Science Foundation (NNSF) and the Educational Ministry of China for financial support. ## References Gould D.M., Lyne A.G., 1998, MNRAS 301, 235 Han J.L., Manchester R.N., Xu R.X., Qiao G.J., 1998, MNRAS 300, 373 Weisberg J.M., Cordes J.M., Lundgren S.C., et al., 1999, ApJS 121, 171
# Wave Function Structure in Two-Body Random Matrix Ensembles ## Abstract We study the structure of eigenstates in two-body interaction random matrix ensembles and find significant deviations from random matrix theory expectations. The deviations are most prominent in the tails of the spectral density and indicate localization of the eigenstates in Fock space. Using ideas related to scar theory we derive an analytical formula that relates fluctuations in wave function intensities to fluctuations of the two-body interaction matrix elements. Numerical results for many-body fermion systems agree well with the theoretical predictions. PACS numbers: 24.10.Cn, 05.45.+b, 24.60.Ky, 05.30.-d Random matrix theory (RMT) has become a powerful tool for describing statistical properties of wave functions and energy levels in complex quantum systems . The use of a two-body random matrix ensemble (TBRE) is of particular interest for many-body systems since classical RMT implies the presence of $`k`$-body forces ($`k>2`$) and gives the unphysical semicircle as the spectral density. The TBRE displays the same spectral fluctuations as classical RMT while its Gaussian spectral density agrees well with nuclear shell model calculations. The situation is not so clear for the structure of wave functions in TBRE. Recent results show that ground states of shell model Hamiltonians with two-body random interactions favor certain quantum numbers and thereby differ considerably from RMT expectations. In this letter we examine the structure of wave functions in TBRE, and compare numerical results with theoretical predictions. Such a study is not only interesting on its own but is also motivated by the ongoing importance of TBRE for nuclear and mesoscopic physics . 
We recall that deviations from RMT indicate some degree of wave function non-ergodicity and are related to phenomena like Fock space localization in many-body systems, and scars of periodic orbits or invariant manifolds in classically chaotic systems. To quantify the degree of localization of a given wave function, it is useful to introduce the notion of an inverse participation ratio (IPR). Thus, let $`D`$ be the total dimension of the relevant Fock subspace, let $`|b\rangle `$ label the single-particle basis states (Fock states), and let $`|\alpha \rangle `$ represent the eigenstates of the Hamiltonian. Then the overlap intensities $$P_{\alpha b}=|\langle \alpha |b\rangle |^2$$ (1) are the squares of the expansion coefficients, and have mean value $`1/D`$. The IPR of eigenstate $`|\alpha \rangle `$ is defined as the first nontrivial moment of the intensity distribution, namely the ratio of the mean squared $`P_{\alpha b}`$ to the square of the mean: $$\mathrm{IPR}_\alpha =\frac{\frac{1}{D}\sum _{b=1}^{D}P_{\alpha b}^2}{\left(\frac{1}{D}\sum _{b=1}^{D}P_{\alpha b}\right)^2}=D\sum _{b=1}^{D}P_{\alpha b}^2.$$ (2) The IPR measures the inverse fraction of Fock states that participate in building up the full wave function $`|\alpha \rangle `$, i.e. $`\mathrm{IPR}_\alpha =1`$ for a wave function that has equal overlaps $`P_{\alpha b}`$ with all basis states, and $`\mathrm{IPR}_\alpha =D`$ for the other extreme of a wave function that is composed entirely of one basis state. While complete information about wave function ergodicity is contained in the full distribution of intensities $`P_{\alpha b}`$, the IPR serves as a very useful one-number measure of the degree of Fock-space localization. In RMT, the $`P_{\alpha b}`$ are given (in the large-$`D`$ limit) by squares of Gaussian random variables, in accordance with the Porter-Thomas distribution, leading to $`\mathrm{IPR}_{\mathrm{RMT}}=3`$ for real wave functions. 
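The RMT baseline is easy to reproduce numerically: for a GOE matrix (used here as a generic ergodic stand-in, not the TBRE itself) the eigenvector intensities follow Porter-Thomas statistics and the mean IPR is $`3D/(D+2)`$, approaching 3 as $`D\rightarrow \mathrm{\infty }`$, uniformly across the spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 400
A = rng.standard_normal((D, D))
H = (A + A.T) / np.sqrt(2)           # GOE stand-in for an ergodic Hamiltonian
_, V = np.linalg.eigh(H)             # columns of V are the eigenstates |alpha>
P = V ** 2                           # intensities P_{alpha b}

ipr = D * np.sum(P ** 2, axis=0)     # equation (2); note sum_b P_{alpha b} = 1
# Porter-Thomas: mean IPR = 3 D / (D + 2)  ->  3 for large D
assert abs(ipr.mean() - 3 * D / (D + 2)) < 0.1
# and the IPR is uniform over the spectrum: edge states look like bulk states
assert abs(ipr[:40].mean() - ipr[D//2 - 20:D//2 + 20].mean()) < 0.5
```

It is precisely this uniformity that the TBRE violates at the spectral edges, as discussed next.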
For finite $`D`$, the IPR is slightly below its asymptotic value (with the deviation falling off as $`1/D`$), but is still uniform over the entire spectrum. A very simple two-body interaction model, however, already displays behavior which is qualitatively different from this naive RMT expectation. Let the Hamiltonian be given by $$H=\frac{1}{2}\sum _{i,j,k,l}V_{ijkl}a_i^{\dagger }a_j^{\dagger }a_ka_l,$$ (3) where the single-particle indices $`i`$, $`j`$, $`k`$, $`l`$ run from $`1`$ to $`M`$ (the number of available single-particle states) and the $`V_{ijkl}`$ are Gaussian random variables with unit variance. The operators $`a_j^{\dagger }`$ and $`a_j`$ create and annihilate a fermion in the single-particle state labeled by $`j`$, respectively, and obey the usual anti-commutation rules. The dimension of the $`N`$-particle Fock subspace is given by $`D=\left(\genfrac{}{}{0pt}{}{M}{N}\right)`$. We notice that this model contains no explicit one-body terms and therefore does not display Anderson-type localization effects. On physical grounds, we assume that the two-body interaction is generated by a real symmetric potential, $`V(r_1,r_2)=V(r_2,r_1)`$, and choose real single-particle wave functions. These conditions lead to the constraints $`V_{ikjl}=V_{ljki}=V_{jilk}=V_{ijkl}`$ and reduce the number of independent variables describing a given realization (Eq. 3) of the ensemble. While such correlations between matrix elements will determine factors of $`2`$ in the calculation below, they do not qualitatively affect our results. The choice of bosons rather than fermions also does not qualitatively change the localization behavior. Fig. 1(top) shows a smoothed ensemble-averaged spectral density $$\rho (E)=\mathrm{Tr}\delta (E-H)=\sum _{\alpha =1}^{D}\delta (E-E_\alpha ),$$ (4) for $`M=13`$ and $`N=6`$, while in Fig. 1(bottom) we plot $`\mathrm{IPR}/\mathrm{IPR}_{\mathrm{RMT}}-1`$ as a function of energy. 
Obviously, we observe strong deviations from RMT behavior at the edges of the spectrum. This implies that the wave function intensities there do not have a Porter-Thomas distribution. To understand this surprising result, we may adapt a formalism previously used successfully to understand wave function scars and other types of anomalous quantum localization behavior in single-particle systems. Let $`\rho _b(E)`$ be the local density of states (strength function) of the basis state $`b`$: $$\rho _b(E)=\langle b|\delta (E-H)|b\rangle =\sum _{\alpha =1}^{D}P_{\alpha b}\,\delta (E-E_\alpha );$$ (5) it is given by the Fourier transform of the autocorrelation function $$A_b(t)=\langle b|e^{-iHt}|b\rangle .$$ (6) Let us assume that $`A_b(t)`$ displays only two different time scales: the initial decay time $`T_{\mathrm{decay}}`$ of the Fock state $`|b\rangle `$ due to interactions, and the Heisenberg time $`T_H\sim D\,T_{\mathrm{decay}}`$ (i.e. $`\hbar `$ over the mean level spacing) at which individual eigenlevels are resolved. Following the initial decay, random long-time recurrences in $`A_b(t)`$ can be shown to be convolved with the short-time behavior. In the energy domain this produces random oscillations $`f_b(E)`$ multiplying a smooth envelope $`\rho _b^{\mathrm{sm}}(E)`$ given by the short-time dynamics: $$\rho _b(E)=f_b(E)\rho _b^{\mathrm{sm}}(E).$$ (7) Here $$\sum _{b=1}^{D}\rho _b^{\mathrm{sm}}(E)=\rho ^{\mathrm{sm}}(E),$$ (8) while $`f_b(E)`$ is a fluctuating function with mean value of unity: $$f_b(E)=\frac{\sum _{\alpha =1}^{D}r_{\alpha b}\,\delta (E-E_\alpha )}{\rho ^{\mathrm{sm}}(E)}.$$ (9) The $`r_{\alpha b}`$ are random $`\chi ^2`$ variables with mean value one. Then, substituting into Eq.
5, we have the individual wave function intensities given by $$P_{\alpha b}=r_{\alpha b}\frac{\rho _b^{\mathrm{sm}}(E_\alpha )}{\rho ^{\mathrm{sm}}(E_\alpha )}.$$ (10) Both $`r_{\alpha b}`$ and $`\rho _b^{\mathrm{sm}}`$ have $`b`$-dependent fluctuations, which are uncorrelated under our assumption of separated time scales. Then using Eq. 10 we can express the IPR (Eq. 2) as ($`\delta \rho _b^{\mathrm{sm}}(E)\equiv \rho _b^{\mathrm{sm}}(E)-\langle \rho _b^{\mathrm{sm}}(E)\rangle `$) $`\mathrm{IPR}_\alpha `$ $`=`$ $`{\displaystyle \frac{\langle r_{\alpha b}^2\rangle }{\langle r_{\alpha b}\rangle ^2}}\times {\displaystyle \frac{\langle \rho _b^{\mathrm{sm}}(E_\alpha )^2\rangle }{\langle \rho _b^{\mathrm{sm}}(E_\alpha )\rangle ^2}}`$ (11) $`=`$ $`\mathrm{IPR}_{\mathrm{RMT}}\left(1+{\displaystyle \frac{\langle \delta \rho _b^{\mathrm{sm}}(E_\alpha )^2\rangle }{\langle \rho _b^{\mathrm{sm}}(E_\alpha )\rangle ^2}}\right),`$ (12) where all averages are over the Fock basis index $`b`$, i.e. $`\langle \cdots \rangle \equiv D^{-1}\sum _{b=1}^{D}`$. In the limit of many particles (and many holes), $`N`$, $`M-N\gg 1`$, the spectrum approaches a Gaussian shape $$\rho ^{\mathrm{sm}}(E)=\frac{D}{\sqrt{2\pi E_0^2}}\mathrm{exp}(-E^2/2E_0^2),$$ (13) where $`E_0^2=D^{-1}\mathrm{Tr}\,H^2`$ is given by the mean sum of squares of matrix elements in any given row of the Hamiltonian. The same arguments lead to the vanishing of higher-order cumulants for the individual strength functions $`\rho _b^{\mathrm{sm}}(E)`$, so each of these should also have a Gaussian shape, but with centroid $`c_b=H_{bb}`$ and variance $`v_b=\sum _{b^{\prime }\ne b}H_{bb^{\prime }}^2`$.
Due to these fluctuations in the centroids and widths as one goes through the different basis states $`b`$, we have $`\delta \rho _b^{\mathrm{sm}}(E)`$ $`=`$ $`{\displaystyle \frac{\partial \rho _b^{\mathrm{sm}}(E)}{\partial c_b}}\delta c_b+{\displaystyle \frac{\partial \rho _b^{\mathrm{sm}}(E)}{\partial v_b}}\delta v_b`$ (14) $`=`$ $`\left[{\displaystyle \frac{E}{E_0}}{\displaystyle \frac{\delta c_b}{E_0}}+\left({\displaystyle \frac{E^2}{E_0^2}}-1\right){\displaystyle \frac{\delta v_b}{2E_0^2}}\right]\rho ^{\mathrm{sm}}(E),`$ (15) where in the second line we have used the Gaussian form of Eq. 13. Substituting into Eq. 11, we obtain the general form: $$\frac{\mathrm{IPR}(E)}{\mathrm{IPR}_{\mathrm{RMT}}}-1=\frac{\langle (\delta c_b)^2\rangle }{E_0^2}\left(\frac{E^2}{E_0^2}\right)+\frac{\langle (\delta v_b)^2\rangle }{4E_0^4}\left(\frac{E^2}{E_0^2}-1\right)^2,$$ (16) valid of course not only for two-body interactions but for any ergodic Hamiltonian with a Gaussian density of states. The quantities $`\langle (\delta c_b)^2\rangle `$ and $`\langle (\delta v_b)^2\rangle `$ do depend on the parameters of the model, but remarkably the IPR behavior is always given to leading order by a quartic polynomial in energy. From this point of view, it is easy to understand the enhancement in fluctuations near the edge of the spectrum (i.e. for large $`|E|`$): two Gaussians differing only slightly in their centroid or width may look almost identical in the bulk, but the relative difference increases dramatically as one moves into the tail of the spectrum. For our simple model (Eq. 3) one may easily compute the coefficients in Eq. 16. The squared width $`v_b=E_0^2`$ of the full spectrum is given by the sum of squares of entries in one row of the Hamiltonian and equals the number of independent terms in Eq. 3 that couple any given Fock state to other Fock states, times the mean squared value ($`=1`$) of each such term. A simple counting argument then shows $`E_0^2=2\left(\genfrac{}{}{0pt}{}{N}{2}\right)\left[1+2(M-N)+\left(\genfrac{}{}{0pt}{}{M-N}{2}\right)\right]`$.
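The Gaussian variation that produces the quartic energy dependence can be checked directly; this snippet (our consistency check) compares the first-order expression $`\left[(E/E_0)(\delta c/E_0)+(E^2/E_0^2-1)\,\delta v/(2E_0^2)\right]\rho ^{\mathrm{sm}}(E)`$ with an exact finite difference of a Gaussian whose centroid and variance are slightly perturbed.

```python
import math

def gauss(E, c, v):
    # normalized Gaussian with centroid c and variance v
    return math.exp(-(E - c) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

E0sq = 1.0
dc, dv = 1e-6, 1e-6
errors = []
for E in (0.5, 1.0, 2.0, 3.0):
    rho = gauss(E, 0.0, E0sq)
    # first-order variation: [ (E/E0)(dc/E0) + (E^2/E0^2 - 1) dv/(2 E0^2) ] rho
    predicted = (E * dc / E0sq
                 + (E * E / E0sq - 1.0) * dv / (2.0 * E0sq)) * rho
    exact = gauss(E, dc, E0sq + dv) - rho
    errors.append(abs(exact - predicted))
```

The residuals are second order in the perturbations, which is why the leading IPR formula is quadratic in $`\delta c_b`$ and $`\delta v_b`$.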
Because all the contributions are independent $`\chi ^2`$ variables of mean $`1`$ and variance $`2`$, the variance in the sum is given by twice the number of contributions: $`\langle (\delta v_b)^2\rangle =2E_0^2`$. Finally, the variance in the centroid is given by the number of terms in the Hamiltonian (Eq. 3) which contribute to each diagonal element in the Fock basis, namely $`\langle (\delta c_b)^2\rangle =\left(\genfrac{}{}{0pt}{}{N}{2}\right)`$. Then $`{\displaystyle \frac{\mathrm{IPR}(E)}{\mathrm{IPR}_{\mathrm{RMT}}}}-1`$ $`=`$ $`\left[1+2(M-N)+\left({\displaystyle \genfrac{}{}{0pt}{}{M-N}{2}}\right)\right]^{-1}`$ (17) $`\times `$ $`\left[\left({\displaystyle \frac{E^2}{E_0^2}}\right)+\left[2\left({\displaystyle \genfrac{}{}{0pt}{}{N}{2}}\right)\right]^{-1}\left({\displaystyle \frac{E^2}{E_0^2}}-1\right)^2\right]`$ (18) $`\approx `$ $`{\displaystyle \frac{2}{M^2}}\left[{\displaystyle \frac{E^2}{E_0^2}}+{\displaystyle \frac{1}{N^2}}\left({\displaystyle \frac{E^2}{E_0^2}}-1\right)^2\right],`$ (19) where in the last line we have taken the dilute many-particle limit $`M\gg N\gg 1`$. The fluctuations are always strongest near the edge of the spectrum; in particular close to the ground state ($`E_{\mathrm{gs}}^2\approx 2E_0^2\mathrm{ln}D\approx 2E_0^2N\mathrm{ln}(M/N)`$), we find $$\frac{\mathrm{IPR}(E_{\mathrm{gs}})}{\mathrm{IPR}_{\mathrm{RMT}}}-1=\frac{4}{M^2}\left[N\mathrm{ln}\frac{M}{N}+2\mathrm{ln}^2\frac{M}{N}\right].$$ (20) Let us compare the IPR prediction of Eq. 18 with the numerical data presented in Fig. 1(bottom). While the prediction qualitatively describes the correct trend of the IPR as a function of energy, it is not in quantitative agreement with the data and fails by as much as a factor of $`2`$ at the edge of the spectrum. This is not very surprising, given that the actual behavior of the spectral density $`\rho ^{\mathrm{sm}}(E)`$ is far from Gaussian for the parameters we have chosen.
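The counting that fixes $`E_0^2`$ can be reproduced by brute force (our check): starting from one Fock state, enumerate every way of annihilating an occupied pair and recreating a pair in the orbitals then available.

```python
from itertools import combinations

def comb2(n):
    # binomial coefficient "n choose 2"
    return n * (n - 1) // 2

M, N = 13, 6                              # the values used in Fig. 1
occ = tuple(range(N))                     # one reference Fock state
count = 0
for i, j in combinations(occ, 2):         # annihilate the occupied pair (i, j)
    rest = [q for q in occ if q not in (i, j)]
    # the pair can be recreated in i, j or in any of the M - N empty orbitals
    allowed = [q for q in range(M) if q not in rest]
    count += comb2(len(allowed))          # choices of the created pair (k, l)

bracket = 1 + 2 * (M - N) + comb2(M - N)  # the bracket quoted in the text
```

The enumeration confirms that the total number of coupling terms per Fock state factorizes into the number of annihilated pairs times the bracketed count of recreated pairs.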
Although it is known that the spectrum approaches a Gaussian form in the many-particle limit, the number of particles that can be simulated numerically is not nearly large enough for the Gaussian form to be a good quantitative approximation, particularly in the tail (see Fig. 1(top)). A good way to measure deviations from the Gaussian form is to compute the fourth cumulant divided by the square of the second cumulant $`E_0^2`$: for a Gaussian this quantity vanishes, while for our system it equals $`-0.7`$, actually closer to the semicircle value of $`-1`$. To obtain quantitatively valid predictions, we must correct for deviations from the Gaussian shape. The ansatz $$\rho ^{\mathrm{sm}}(E)=\frac{1-6ϵ\left(\frac{E}{E_0}\right)^2+ϵ\left(\frac{E}{E_0}\right)^4}{1-3ϵ}\frac{D}{\sqrt{2\pi E_0^2}}\mathrm{exp}\left(-\frac{E^2}{2E_0^2}\right)$$ (21) allows for nonzero higher cumulants while keeping the number of states and the width $`E_0^2`$ fixed. The parameter $`ϵ`$ is obtained by a least-squares fit of the numerical data to this form. A derivation starting from the first line of Eq. 14 yields an IPR that depends on the energy through a rational function instead of the quartic polynomial of Eq. 18. As we can see in Fig. 1(bottom), this leads to a surprisingly good quantitative prediction for the IPR behavior, given that we are applying perturbation theory around a Gaussian shape for a spectrum that in reality is very far from the Gaussian limit. Gaussian spectral densities have been observed in nuclear shell model calculations with realistic or random two-body interactions. This is mainly due to the presence of spin and isospin degrees of freedom, which cause important correlations between Hamiltonian matrix elements. Let us therefore consider adding spin to the Hamiltonian (Eq.
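One can verify with exact rational arithmetic that a correction factor of the form $`(1-6ϵx^2+ϵx^4)/(1-3ϵ)`$ multiplying a unit Gaussian preserves both the normalization and the width for any $`ϵ`$, as the ansatz requires (our check, using the standard normal moments 1, 3, 15, 105).

```python
from fractions import Fraction

# Moments <x^n> of the standard normal distribution for even n
moment = {0: Fraction(1), 2: Fraction(1), 4: Fraction(3),
          6: Fraction(15), 8: Fraction(105)}

def corrected_moment(n, eps):
    """<x^n (1 - 6 eps x^2 + eps x^4)> / (1 - 3 eps), cf. Eq. (21)."""
    num = moment[n] - 6 * eps * moment[n + 2] + eps * moment[n + 4]
    return num / (1 - 3 * eps)

checks = []
for eps in (Fraction(1, 10), Fraction(-3, 100), Fraction(1, 7)):
    # normalization (n = 0) and width (n = 2) should be unchanged by eps
    checks.append((corrected_moment(0, eps), corrected_moment(2, eps)))
```

Only the fourth and higher moments depend on $`ϵ`$, which is exactly the freedom needed to fit the observed non-Gaussian cumulant.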
3), so that $`a_j^{\dagger }`$ and $`a_j`$ now create and annihilate a fermion in the single-particle state labeled by $`j\equiv (n_j,s_j)`$, with $`n_j`$ and $`s_j=\pm 1/2`$ denoting the orbital and spin quantum numbers. In what follows we consider $`N`$ fermions in a shell of fixed total spin $`S=\sum _{j=1}^{N}s_j=0`$. We assume that the random matrix elements $`V_{ijkl}`$ depend only on the orbital quantum numbers; this reduces the number of independent matrix elements considerably. The (sparse) two-body interaction matrix is constructed using a code similar to the one described in Ref. The spectral density of this TBRE agrees very well with a Gaussian, and the IPR is predicted by Eq. 16. However, the determination of the quantities $`\langle (\delta c_b)^2\rangle `$ and $`\langle (\delta v_b)^2\rangle `$ is more difficult, since the spin degree of freedom makes the required counting of matrix elements a non-trivial task. Alternatively, one may obtain these quantities directly from the numerically generated Hamiltonians. Fig. 2 shows that the numerical data and theoretical results are in good agreement. This confirms the validity of the simple theory derived in this letter. We note that our results are in qualitative agreement with nuclear shell model calculations using realistic interactions and with calculations in atomic physics. Our calculation shows that deviations from ergodicity near the ground state do not require the presence of one-body terms in the Hamiltonian, though such terms may of course enhance the degree of localization. In summary, we have studied numerically and analytically the wave function structure in many-body fermion systems with random two-body interactions. Near the edge of the spectrum, wave function intensities of this two-body random ensemble exhibit fluctuations that deviate strongly from random matrix theory predictions, while good agreement is obtained in the bulk of the spectrum.
The numerical results agree well with the theoretical prediction that is derived from arguments used in scar theory. In particular, we have presented a simple formula that relates fluctuations of the wave function intensities to fluctuations of the two-body matrix elements of the Hamiltonian. We thank George F. Bertsch for suggesting this study and for several very useful discussions. This research was supported by the DOE under Grant DE-FG-06-90ER40561.
no-problem/9911/astro-ph9911376.html
ar5iv
text
# Studying pulsars with the SKA and other new facilities ## 1. Astronomers’ Dreams To meet the goals outlined in the SKA science case (available on www.ras.ucalgary.ca/SKA), astronomers have drafted the following specifications for the SKA: | Frequency coverage | 0.2 – 20 GHz | | --- | --- | | Collecting area | 1,000,000 m<sup>2</sup> ($`A_e/T_{sys}=2\times 10^4`$) | | Simultaneous frequency bands | $`\ge `$2 | | Resolution at 1.4 GHz | 0.1 arcsec | | Field of view at 1.4 GHz | $`1^{\circ }\times 1^{\circ }`$ | | Number of simultaneous beams | 100 | This is a subset of the more detailed specifications which are available on the various SKA web pages listed in Section 8. The specifications are being refined as the science case is refined, and it is not too early for people to be contributing their ideas to the science case. ## 2. Potential Construction Methods The proposed construction methods may be of some interest to astronomers, because they lead to considerably different capabilities. Existing large radio telescopes like Jodrell Bank, Bonn and Parkes cost thousands of dollars per square metre to build. To be realistic, the total cost of the SKA has to be no more than 1 billion US dollars. This implies that the collecting area of the telescope has to be built for 200–300 million dollars, or about $200 per square metre. This section summarises some of the methods proposed for achieving this. More details are on the SKA web pages listed in Section 8. KARST (China) 10–14 dishes of 300–500 m in diameter, like Arecibo. The aim is to take advantage of naturally occurring holes in the ground of suitable shape. Substantially improved sky coverage would be obtained by only using part of the dish (at any one time), which is dynamically deformed to focus on a vastly more mobile feed system. Greatly improved sensitivity would come from using multiple-beam feed/receiver systems.
This method is a good candidate (especially for pulsars), but the small number of stations (10–14) does not measure up to the desired image fidelity, which requires 500–1000 stations. LAR - Large Adaptive Reflector (Canada) Similar to KARST, but uses a much shallower parabola, requiring much greater deformations of the surface and the feed/receiver system to be supported at a height of several kilometres on a tethered balloon. It gains some sensitivity by using more of the surface, but suffers from foreshortening at low elevations. It is unproven technology which is unlikely to be as cheap as the FAST proposal, due to greater engineering complexity, and offers no clear advantages over the FAST proposal apart from more flexibility in location. Scaled GMRT technology (India) 9000 12-m dishes built using wire rope and cheap Indian labour. A more conventional design, which in some respects could be considered the current pace setter, as it satisfies a large fraction of the specifications and the affordability has already been demonstrated in the construction of GMRT, which has 30 45-m dishes and cost about $500 m<sup>-2</sup>. The success or otherwise of GMRT in the coming years may have a strong bearing on the future success of this proposal for the SKA. Commercial 5-m Dishes (USA) 50000 5-m commercial off-the-shelf dishes. The idea is to take advantage of the cost savings from using components that are already under mass production. Apart from the construction method of the dishes, this proposal has a lot in common with the Indian proposal. The prototype presently being constructed, the 1hT (1 hectare telescope), should give a good indication of the affordability.
Planar Phased Arrays (Netherlands) This proposal aims to take the concept of riding on components with commercially driven costs a step further, by moving as much as possible of the telescope infrastructure into a technology that not only has high commercial demand, but also has a rapidly improving cost curve and future scalability, i.e. to move away from mechanically based infrastructure to information-based technology that presently has an exponential growth curve. The main proposal is to have a flat collecting area and steer the telescope electronically by forming beams in the appropriate direction. A great advantage of this proposal is that it makes it possible to do several experiments simultaneously, by using multiple beams. However it does suffer from foreshortening, and reduced sensitivity, because it cannot have cooled receivers. Luneberg Lenses (Australia & Russia) While the above proposals have been around for some time, this one is quite recent. People are taken with the flexibility and multiple beams of the planar arrays, but frustrated with the poorer sensitivity and complexity of the planar phased arrays. Luneberg lenses may provide a way to overcome that. A Luneberg lens is a sphere of dielectric material which has a radial gradient of refractive index. A plane wave impinging on the sphere is focussed at a point on the other side of the sphere, making it possible to have multiple cooled receivers without enormous beamforming networks. ## 3. Future Growth - Multiple Beams To date the sensitivity of radio telescopes has followed an exponential growth curve. The first and most obvious point about exponential growth is that it cannot be sustained indefinitely. Can we maintain it for some time into the future? There are 2 basic ways to stay on this exponential curve: 1) Spend more money, which requires bigger and bigger collaborations; 2) Take advantage of technological advances in other areas.
Radio astronomy is at the point where it needs to do both 1) and 2) to stay on the curve! ### 3.1. Extensibility Through Improved Technologies System Temperature: Reber started out with a 5000 K system temperature. Modern systems now run at around 20 K, meaning that if everything else was kept constant, Reber’s telescope would now be 250 times more sensitive than when first built. There are possibilities of some improvements in future, but nothing like what was possible in the past. Band Width: Telescopes like the GBT (Green Bank Telescope), having bandwidths some 500 times greater than Reber’s, will give factors of 20–25 improvement in sensitivity. Some future improvements will be possible, but again they will not be as large as in the past. Multiple Beams: In the focal or aperture plane, multiple beam systems provide an excellent extensibility path, allowing vastly deeper surveys than were possible in the past. Although multiple beam systems have been used in the past, the full potential of this approach is yet to be exploited. A notable example that has made a stride forward in this direction is the Parkes 13-beam L-band system (Staveley-Smith et al. 1996, PASA, 13, 243). The fully sampled focal-plane phased array system being developed at NRAO by Fisher and Bradley highlights the likely path for the future. The sensitivity of the 64-m Parkes telescope, for example, has improved by a factor of 400 since 1962. Scope for continuing this evolution looks good for the next decade, but beyond that more collecting area will be needed. Putting a 100-beam system on Arecibo by 2005 is technically feasible and would allow Arecibo to jump out in front of the curve as it did when first built in 1964. ## 4. Other New Facilities The aim of this and the following section is to give you a feel for which new radio facilities are likely to be the most useful for pulsar studies. Other spectral bands are covered by other speakers at this meeting, e.g. Trumper on X-rays.
Table 1 summarises the new radio facilities and their likely completion dates. At present, Arecibo and Parkes (with the 13-beam receiver system) are roughly equivalent in terms of their survey capability (the rate at which a given area can be searched to a given sensitivity) and are well ahead of the next best facilities and some of the new facilities such as ALMA, GBT, 1hT and VLA. GMRT may join this group, as could GBT or any of the other large single dishes if they opted for multibeam receiver systems. ## 5. Pulsar Survey Strategies The SKA is almost certain to be an array with a large number of elements (N). In the past, large-area surveys for pulsars at radio wavelengths have been dominated by large single dishes, Molonglo being a notable exception. This is mainly because when an array is used to form coherent beams, the beam size is very small and therefore sky coverage is slow. A much faster alternative (despite the $`\sqrt{N}`$ loss of sensitivity) is to form incoherent sums of the N array outputs, thereby surveying the whole primary beam. However, this requires a detection system (e.g. a filter bank) for every antenna element, which becomes expensive. For example (see Table 2), a better rate is achieved in a much more cost-effective way by putting 13 receivers and detection systems on Parkes than by putting 27 on the VLA! A particularly striking fact shown in Table 2 is that this may continue to be true even if the SKA is built! A 100-beam system on Arecibo can find pulsars as fast as a 7-beam system on the SKA! For a few million dollars Arecibo could be kitted out as the search engine to find all the pulsars, which can then be timed quickly, using the enhanced sensitivity of coherent beams on the full SKA. Forming coherent SKA beams would also be the method of choice for small-area searches, for example in globular clusters. To date, pulsar surveys are both sensitivity and dispersion limited, including the 1.4 GHz Parkes multibeam survey.
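The trade-off described here can be put into a toy figure of merit. The scaling below is our simplification (survey speed proportional to the number of beams times sensitivity squared, with coherent gain N and incoherent gain $`\sqrt{N}`$), and the inputs are schematic, not measured telescope parameters.

```python
def relative_survey_speed(n_beams, n_elements, element_sensitivity, coherent):
    """Toy pulsar survey-speed figure of merit: (number of beams on the sky)
    times (instantaneous sensitivity)^2.  A coherent sum of n_elements gains
    sensitivity ~ n_elements per (tiny) beam; an incoherent sum covers the
    whole primary beam but only gains ~ sqrt(n_elements)."""
    gain = n_elements if coherent else n_elements ** 0.5
    return n_beams * (gain * element_sensitivity) ** 2

# Schematic comparison in arbitrary units (NOT real telescope numbers):
single_dish_13_beams = relative_survey_speed(13, 1, 1.0, coherent=True)
incoherent_27_element_array = relative_survey_speed(1, 27, 1.0, coherent=False)
```

In this crude picture, adding beams multiplies the survey speed directly, while adding incoherently summed elements only adds linearly, which is the qualitative point made in the text.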
The SKA, LOFAR and a 100-beam Arecibo will remove the sensitivity limitation. LOFAR is unlikely to find vast numbers of millisecond pulsars because it will continue to be dispersion limited. The SKA and a 100-beam Arecibo can push to GHz frequencies and beat the dispersion, scattering and sky-noise limits while still having enough sensitivity. This is in contrast to the common (and incorrect) assumption that pulsars provide a scientific driver for a low-frequency SKA. ## 6. Timing Except for a few pulsars, the precision of pulsar timing is presently limited by sensitivity rather than systematics, uncalibrated instrumental effects or unmodelled drifts in the pulsar signal. Sensitivity will no longer be a problem with the SKA, and systematics will dominate. Trying to assess what sort of timing precision is achievable is therefore quite difficult, because we cannot characterise the systematics yet or know whether they can be dealt with. Pulsar timing is limited by systematics around 1 $`\mu `$s or a bit below. It therefore seems likely that the SKA would allow a large number (perhaps 100’s) of pulsars to be timed to this level of precision. Using 1 beam, it might require something like 10 hours to obtain suitable data to time 500 pulsars at the 1 $`\mu `$s level. Using 10 beams that could be randomly positioned, it would take only 1 hour. Whether the precision can be pushed well below 1 $`\mu `$s remains to be seen. As discussed by Britton at this meeting, poor calibration of polarisation is a dominant systematic effect at present, and the invariant profile method he proposes should help solve that problem. However, the SKA is likely to have a planar phased array either in the aperture plane or the focal plane. The polarisation characteristics of these devices are substantially different to anything we currently use for timing pulsars. It would be advisable for somebody to do some experiments to see how well we can do pulsar timing with such systems.
Another question is the number of pulses (normally 100’s) needed to form a stable profile for timing. This may limit the speed with which a group of pulsars can be timed with the SKA. ## 7. Some Other Pulsar Uses What other things may be possible with the SKA that are not possible now? Of course there will be many that we cannot yet think of, but it does not hurt to mention a few of those that we know now. Simultaneous multi-frequency, multi-pulsar studies will be a lot easier. The SKA will make it possible to do single-pulse studies for a large number of pulsars, compared to the handful at present. One interesting possibility arises if the SKA is made using a planar phased array or Luneberg lens. One could collect signals from every element and store them in a FIFO buffer of, say, 1 hour in length. In the meantime, collect timing points on pulsars that are likely to glitch. When one sees that a pulsar has glitched, the FIFO buffer should be saved so that a beam can then be formed towards the pulsar, thereby obtaining data during the glitch. This idea of course extends beyond pulsars and could be used for detecting any transient source where there is a trigger available by other means. ## 8. SKA References and Web Resources | Reference | Web Address | Item of Interest | | --- | --- | --- | | Australia | www.atnf.csiro.au/SKA/ | Luneberg Lens | | Netherlands | www.nfra.nl/skai/ | Planar Array | | Canada | www.ras.ucalgary.ca/SKA/ | Science Case | | SETI | www.seti.org/ | 1hT | | China | 159.226.63.50/bao/LT | FAST/KARST | | USA | www.usska.org | USA SKA consortium | | Backer | www.nfra.nl | 1999 Amsterdam SKA meeting | | Bailes | www.atnf.csiro.au/SKA/WS/wsmb | 1997 Sydney SKA meeting | | Fisher | www.nrao.edu.au/~rfisher/ | Array Feed |
no-problem/9911/nucl-th9911048.html
ar5iv
text
# A renormalisation-group treatment of two-body scattering ## Introduction Recently there has been much interest in the possibility of developing a systematic treatment of low-energy nucleon-nucleon scattering using the techniques of effective field theory [mcb:wein2; mcb:ksw2; mcb:vk3]. Here we approach the problem using Wilson’s continuous renormalisation group [mcb:wrg] to examine the low-energy scattering of nonrelativistic particles interacting through short-range forces [mcb:bmr]. The starting point for the renormalisation group (RG) is the imposition of a momentum cut-off, $`|𝐤|<\mathrm{\Lambda }`$, separating the low-momentum physics which we are interested in from the high-momentum physics which we wish to “integrate out”. Provided that there is a separation of scales between these two regimes, we may demand that low-momentum physics should be independent of $`\mathrm{\Lambda }`$. The second step is to rescale the theory, expressing all dimensioned quantities in units of $`\mathrm{\Lambda }`$. As the cut-off $`\mathrm{\Lambda }`$ approaches zero, all physics is integrated out until only $`\mathrm{\Lambda }`$ itself is left to set the scale. In units of $`\mathrm{\Lambda }`$ any couplings that survive are just numbers, and these define a “fixed point”. Such fixed points correspond to systems with no natural momentum scale. Examples include the trivial case of a zero scattering amplitude and the more interesting one of a bound state at exactly zero energy. Real systems can then be described in terms of perturbations away from one of these fixed points. For perturbations that scale as definite powers of $`\mathrm{\Lambda }`$, we can set up a power-counting scheme: a systematic way to organise the terms in an effective potential or an effective field theory.
A fixed point is said to be stable if all perturbations vanish like positive powers of $`\mathrm{\Lambda }`$ as $`\mathrm{\Lambda }\to 0`$ and unstable if one or more of them grows with a negative power of $`\mathrm{\Lambda }`$. ## Two-body scattering We consider $`s`$-wave scattering by a potential that consists of contact interactions only. Expanded in powers of energy and momentum this has the form $$V(k^{\prime },k,p)=C_{00}+C_{20}(k^2+k^{\prime 2})+C_{02}p^2+\mathrm{\cdots },$$ (1) where $`k`$ and $`k^{\prime }`$ denote momenta and energy-dependence is expressed in terms of the on-shell momentum $`p=\sqrt{ME}`$. Below all thresholds for production of other particles, this potential should be an analytic function of $`k^2`$, $`k^{\prime 2}`$ and $`p^2`$. Low-energy scattering is conveniently described in terms of the reactance matrix, $`K`$. This is similar to the scattering matrix $`T`$, except for the use of standing-wave boundary conditions. It satisfies the Lippmann-Schwinger (LS) equation (see [mcb:newt]) $$K(k^{\prime },k,p)=V(k^{\prime },k,p)+\frac{M}{2\pi ^2}𝒫\int q^2𝑑q\frac{V(k^{\prime },q,p)K(q,k,p)}{p^2-q^2},$$ (2) where $`𝒫`$ denotes the principal value. On-shell, with $`k=k^{\prime }=p`$, the $`K`$-matrix is related to the phase-shift by $$\frac{1}{K(p,p,p)}=-\frac{M}{4\pi }p\mathrm{cot}\delta (p),$$ (3) which means it has a simple relation to the effective-range expansion [mcb:ere], $$p\mathrm{cot}\delta (p)=-\frac{1}{a}+\frac{1}{2}r_ep^2+\mathrm{\cdots },$$ (4) where $`a`$ is the scattering length and $`r_e`$ is the effective range. We shall see that this turns out to be equivalent to an expansion around a nontrivial fixed point of the RG. ## Renormalisation group To set up the RG we first impose a momentum cut-off on the intermediate states in the LS equation (2).
This can be written $$K=V(\mathrm{\Lambda })+V(\mathrm{\Lambda })G_0(\mathrm{\Lambda })K,$$ (5) where we have included a sharp cut-off in the free Green’s function, $$G_0=\frac{M\theta (\mathrm{\Lambda }-q)}{p^2-q^2}.$$ (6) We now demand that $`V(k^{\prime },k,p,\mathrm{\Lambda })`$ varies with $`\mathrm{\Lambda }`$ in order to keep the off-shell $`K`$-matrix independent of $`\mathrm{\Lambda }`$: $$\frac{\partial K}{\partial \mathrm{\Lambda }}=0.$$ (7) This is sufficient to ensure that all scattering observables do not depend on $`\mathrm{\Lambda }`$. Differentiating the LS equation (5) with respect to $`\mathrm{\Lambda }`$ and then operating from the right with $`(1+G_0K)^{-1}`$, we get $$\frac{\partial V}{\partial \mathrm{\Lambda }}=\frac{M}{2\pi ^2}V(k^{\prime },\mathrm{\Lambda },p,\mathrm{\Lambda })\frac{\mathrm{\Lambda }^2}{\mathrm{\Lambda }^2-p^2}V(\mathrm{\Lambda },k,p,\mathrm{\Lambda }).$$ (8) We now introduce dimensionless momentum variables, $`\widehat{k}=k/\mathrm{\Lambda }`$ etc., and a rescaled potential, $$\widehat{V}(\widehat{k}^{\prime },\widehat{k},\widehat{p},\mathrm{\Lambda })=\frac{M\mathrm{\Lambda }}{2\pi ^2}V(\mathrm{\Lambda }\widehat{k}^{\prime },\mathrm{\Lambda }\widehat{k},\mathrm{\Lambda }\widehat{p},\mathrm{\Lambda }).$$ (9) From the equation (8) satisfied by $`V`$ we find that the rescaled potential satisfies the RG equation $$\mathrm{\Lambda }\frac{\partial \widehat{V}}{\partial \mathrm{\Lambda }}=\widehat{k}^{\prime }\frac{\partial \widehat{V}}{\partial \widehat{k}^{\prime }}+\widehat{k}\frac{\partial \widehat{V}}{\partial \widehat{k}}+\widehat{p}\frac{\partial \widehat{V}}{\partial \widehat{p}}+\widehat{V}+\widehat{V}(\widehat{k}^{\prime },1,\widehat{p},\mathrm{\Lambda })\frac{1}{1-\widehat{p}^2}\widehat{V}(1,\widehat{k},\widehat{p},\mathrm{\Lambda }).$$ (10) ## Fixed points We are now in a position to look for fixed points: solutions of (10) that are independent of $`\mathrm{\Lambda }`$. These provide the possible low-energy limits of theories as $`\mathrm{\Lambda }\to 0`$ and hence the starting points for systematic expansions of the potential.
### The trivial fixed point One obvious solution of (10) is the trivial fixed point, $$\widehat{V}(\widehat{k}^{\prime },\widehat{k},\widehat{p},\mathrm{\Lambda })=0,$$ (11) which describes a system with no scattering. For systems described by potentials close to the fixed point we can expand in terms of eigenfunctions, $`\widehat{V}=\mathrm{\Lambda }^\nu \varphi (\widehat{k}^{\prime },\widehat{k},\widehat{p})`$, of the linearised RG equation, $$\widehat{k}^{\prime }\frac{\partial \varphi }{\partial \widehat{k}^{\prime }}+\widehat{k}\frac{\partial \varphi }{\partial \widehat{k}}+\widehat{p}\frac{\partial \varphi }{\partial \widehat{p}}+\varphi =\nu \varphi .$$ (12) These have the form $$\widehat{V}(\widehat{k}^{\prime },\widehat{k},\widehat{p},\mathrm{\Lambda })=C\mathrm{\Lambda }^\nu \widehat{k}^{\prime l}\widehat{k}^m\widehat{p}^n,$$ (13) with eigenvalues $`\nu =l+m+n+1`$, where $`l`$, $`m`$ and $`n`$ are non-negative even integers. The eigenvalues are all positive and so the fixed point is a stable one: all nearby potentials flow towards it as $`\mathrm{\Lambda }\to 0`$. The corresponding unscaled potential has the expansion $$V(k^{\prime },k,p,\mathrm{\Lambda })=\frac{2\pi ^2}{M}\underset{l,m,n}{\sum }\widehat{C}_{lmn}\mathrm{\Lambda }_0^{-\nu }k^{\prime l}k^mp^n,$$ (14) where we have written the coefficients in dimensionless form by taking out powers of $`\mathrm{\Lambda }_0`$, the scale of the short-distance physics. The power counting in this expansion is just the one proposed by Weinberg [mcb:wein2] if we assign an order $`d=\nu -1`$ to each term in the potential. This fixed point can be used to describe systems where the scattering at low energies is weak and can be treated perturbatively. It is not the appropriate starting point for $`s`$-wave nucleon-nucleon scattering, where the scattering length is large. ### A nontrivial fixed point The simplest nontrivial fixed point is one that depends on energy only, $`\widehat{V}=\widehat{V}_0(\widehat{p})`$.
It satisfies $$\widehat{p}\frac{\partial \widehat{V}_0}{\partial \widehat{p}}+\widehat{V}_0(\widehat{p})+\frac{\widehat{V}_0(\widehat{p})^2}{1-\widehat{p}^2}=0.$$ (15) The solution, which must be analytic in $`\widehat{p}^2`$, is $$\widehat{V}_0(\widehat{p})=-\left[1-\frac{\widehat{p}}{2}\mathrm{ln}\frac{1+\widehat{p}}{1-\widehat{p}}\right]^{-1}.$$ (16) Although the detailed form of this potential is specific to our particular choice of cut-off, the fact that it tends to a constant as $`\widehat{p}\to 0`$ is a generic feature, which is present for any regulator. The corresponding unscaled potential is $$V_0(p,\mathrm{\Lambda })=-\frac{2\pi ^2}{M}\left[\mathrm{\Lambda }-\frac{p}{2}\mathrm{ln}\frac{\mathrm{\Lambda }+p}{\mathrm{\Lambda }-p}\right]^{-1}.$$ (17) The solution to the LS equation for $`K`$ with this potential is infinite, or rather $`1/K=0`$. This corresponds to a system with infinite scattering length, or equivalently a bound state at exactly zero energy. To study the behaviour near this fixed point we consider small perturbations about it that scale with definite powers of $`\mathrm{\Lambda }`$: $$\widehat{V}(\widehat{k}^{\prime },\widehat{k},\widehat{p},\mathrm{\Lambda })=\widehat{V}_0(\widehat{p})+C\mathrm{\Lambda }^\nu \varphi (\widehat{k}^{\prime },\widehat{k},\widehat{p}).$$ (18) These satisfy the linearised RG equation $$\widehat{k}^{\prime }\frac{\partial \varphi }{\partial \widehat{k}^{\prime }}+\widehat{k}\frac{\partial \varphi }{\partial \widehat{k}}+\widehat{p}\frac{\partial \varphi }{\partial \widehat{p}}+\varphi +\frac{\widehat{V}_0(\widehat{p})}{1-\widehat{p}^2}\left[\varphi (\widehat{k}^{\prime },1,\widehat{p})+\varphi (1,\widehat{k},\widehat{p})\right]=\nu \varphi .$$ (19) Solutions to (19) that depend only on energy ($`\widehat{p}`$) can be found straightforwardly by integrating the equation. They are $$\varphi (\widehat{p})=\widehat{p}^{\nu +1}\widehat{V}_0(\widehat{p})^2.$$ (20) Requiring that these be well-behaved as $`\widehat{p}^2\to 0`$, we find the RG eigenvalues $`\nu =-1,1,3,\mathrm{\dots }`$. The fixed point is unstable: it has one negative eigenvalue.
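The fixed-point solution can be verified numerically (our check, with the sign convention $`\widehat{V}_0(0)=-1`$): the function below satisfies the fixed-point equation to within finite-difference error.

```python
import math

def V0(p):
    """Rescaled nontrivial fixed point: V0(p) = -[1 - (p/2) ln((1+p)/(1-p))]^-1."""
    return -1.0 / (1.0 - 0.5 * p * math.log((1.0 + p) / (1.0 - p)))

h = 1e-5
residuals = []
for p in (0.1, 0.3, 0.5, 0.7):
    dV0 = (V0(p + h) - V0(p - h)) / (2.0 * h)   # central-difference derivative
    # fixed-point equation: p V0' + V0 + V0^2 / (1 - p^2) = 0
    residuals.append(p * dV0 + V0(p) + V0(p) ** 2 / (1.0 - p * p))
```

The residuals vanish to numerical accuracy at every sampled momentum, confirming that the closed form solves the RG equation for energy-dependent potentials.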
The instability can be seen from the RG flow in Fig. 1. Only potentials that lie exactly on the “critical surface” flow into the nontrivial fixed point as $`\mathrm{\Lambda }\to 0`$. Any small perturbation away from this surface eventually builds up and drives the potential either to the trivial fixed point at the origin or to infinity. The corresponding unscaled potential is $$V(k^{\prime },k,p,\mathrm{\Lambda })=V_0(p,\mathrm{\Lambda })+\frac{M}{2\pi ^2}\left(C_{-1}+C_1p^2+\dots \right)V_0(p,\mathrm{\Lambda })^2.$$ (21) For perturbations around the nontrivial fixed point, we can assign an order $`d=\nu -1=-2,0,2,\dots `$ to each term in the potential. This power counting for (energy-dependent) perturbations agrees with that found by Kaplan, Savage and Wisemcb:ksw2 using a “power divergence subtraction” scheme and also by van Kolckmcb:vk3 in a more general subtractive renormalisation scheme. The equivalence can be seen by making the replacement $$V_0=-\frac{2\pi ^2}{M\mathrm{\Lambda }}+\dots \;\longrightarrow \;-\frac{4\pi }{M\mu },$$ (22) where $`\mu `$ is the renormalisation scale introduced by Kaplan, Savage and Wise in their subtraction scheme, and which plays an analogous role to the cut-off $`\mathrm{\Lambda }`$ in our approach. The on-shell $`K`$-matrix for this potential is (to any order in the $`C`$’s) $$\frac{1}{K(p,p,p)}=\frac{M}{2\pi ^2}\left(C_{-1}+C_1p^2+\dots \right).$$ (23) This is just the effective-range expansion (4). There is a one-to-one correspondence between the perturbations in $`V`$ and the terms in that expansion, $$C_{-1}=\frac{\pi }{2a},\qquad C_1=\frac{\pi r_e}{4}.$$ (24) The expansion around the nontrivial fixed point is the relevant one for systems with large scattering lengths, such as $`s`$-wave nucleon-nucleon scattering. ## Weak long-range forces The treatment outlined above is only valid at very low momenta, where all pieces of the potential can be regarded as short-range.
To extend it to describe nucleon-nucleon scattering at higher momenta, we would like to include pion-exchange forces explicitly. The longest-ranged of these is single pion exchange, which provides a central Yukawa potential, $$V_{1\pi }(𝐤^{\prime },𝐤)=-\frac{4\pi \alpha _\pi }{(𝐤-𝐤^{\prime })^2+m_\pi ^2},$$ (25) where $$\alpha _\pi =\frac{g_A^2m_\pi ^2}{16\pi f_\pi ^2}\simeq 0.072.$$ (26) As in chiral perturbation theory, we want to treat the pion mass as a new low-energy scale (in addition to the momentum and energy variables). This can be done by defining a rescaled variable $`\widehat{m}_\pi =m_\pi /\mathrm{\Lambda }`$ and applying the RG as above. The corresponding term in the rescaled potential is $$\widehat{V}_{1\pi }(\widehat{𝐤}^{\prime },\widehat{𝐤},\widehat{m}_\pi ,\mathrm{\Lambda })=-\mathrm{\Lambda }\frac{Mg_A^2}{8\pi ^2f_\pi ^2}\frac{\widehat{m}_\pi ^2}{(\widehat{𝐤}-\widehat{𝐤}^{\prime })^2+\widehat{m}_\pi ^2}.$$ (27) It scales as $`\mathrm{\Lambda }^1`$, like the effective-range term in the potential above. This suggests that one-pion exchange (OPE) can be treated as a perturbation. It would contribute at next-to-leading order (NLO) in the potential. However questions remain about whether OPE is really weak enough for a perturbative treatment to be useful. A possible scale for nonperturbative long-range physics is the pionic “Bohr radius”: $$R=\frac{2}{\alpha _\pi M}\simeq 5.8\text{ fm}.$$ (28) This should be compared with the range of the Yukawa potential, $`r_\pi =1/m_\pi =1.4`$ fm, which cuts off the potential at long distances, preventing the formation of a bound state. The ratio of these scales is $$\frac{r_\pi }{R}\simeq 0.24.$$ (29) Although this is smaller than the critical value of 0.84, at which a bound state formsmcb:newt , one might expect relatively slow convergence of the perturbation series. Further questions are raised when the contribution of OPE to the effective range is examined.
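As an aside, the scales quoted in Eqs. (26)–(29) are easy to reproduce; a minimal numerical sketch (the constants below are standard values, chosen by us rather than taken from the text):

```python
import math

hbarc = 197.327   # MeV fm, conversion constant
g_A   = 1.26      # nucleon axial coupling
f_pi  = 92.4      # pion decay constant in MeV
m_pi  = 139.6     # pion mass in MeV
M     = 938.9     # nucleon mass in MeV

alpha_pi = g_A**2 * m_pi**2 / (16.0 * math.pi * f_pi**2)   # Eq. (26)
R        = 2.0 / (alpha_pi * M) * hbarc                    # pionic "Bohr radius", Eq. (28)
r_pi     = hbarc / m_pi                                    # Yukawa range 1/m_pi

print(f"alpha_pi = {alpha_pi:.3f}")   # ~0.072
print(f"R        = {R:.1f} fm")       # ~5.8 fm
print(f"r_pi / R = {r_pi / R:.2f}")   # ~0.24
# OPE contribution to the effective range used in the next section, ~1.4 fm:
print(f"2 alpha_pi M / m_pi^2 = {2 * alpha_pi * M / m_pi**2 * hbarc:.2f} fm")
```

Slightly different choices of $`g_A`$, $`f_\pi `$ and $`m_\pi `$ shift these numbers at the few-per-cent level, which is why the quoted values are only approximate.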
A perturbative treatment (to NLO in an expansion in powers of momenta, $`m_\pi `$ and $`1/a`$, as inmcb:ch ) gives a short-range contribution to the effective $`{}^1S_0`$ range of $`r_e^0`$ $`=`$ $`r_e-{\displaystyle \frac{2\alpha _\pi M}{m_\pi ^2}}`$ (30) $`=`$ $`2.62-1.38=1.24\text{ fm}.`$ (31) It is also possible to set up a distorted-wave effective-range expansion, in which the long-range interaction is treated to all ordersmcb:hk . This is essentially an expansion in powers of energy of $`p\mathrm{cot}(\delta -\delta _{1\pi })/|\mathcal{F}_{1\pi }(p)|^2`$ where $`\delta _{1\pi }`$ is the OPE phase shift and $`\mathcal{F}_{1\pi }(p)`$ the corresponding Jost functionmcb:newt . The resulting purely short-range effective range ismcb:r99 (see alsomcb:sf ) $$r_e^0=4.2\text{ fm}.$$ (32) This is significantly different from the perturbatively corrected effective range (30). The difference may be an indication of either strong forces with two-pion range, or of strong short-range forces with a complicated structuremcb:ks . ## Summary We have applied Wilson’s renormalisation group to nonrelativistic two-body scattering and identified two important fixed pointsmcb:bmr . The first is the trivial fixed point. Perturbations around it can be used to describe systems with weak scattering. These perturbations can be organised according to Weinberg’s power countingmcb:wein2 . The second fixed point describes systems with a bound state at exactly zero energy. In this case the relevant power-counting is the one found by Kaplan, Savage and Wisemcb:ksw2 and van Kolckmcb:vk3 . The expansion around this fixed point is exactly equivalent to the effective-range expansion. These ideas can be extended in various ways. Short-range interactions in other numbers of spatial dimensions can be studied. The critical dimension for instability of the nontrivial fixed point is $`D=2`$, which has been studied for some time in the context of anyonsmcb:j ; mcb:mt .
Three-body systems are also being studied from the point of view of effective field theorymcb:bhvk1 ; mcb:bg . In some cases these display much more complicated behaviour under the RG than the two-body ones discussed abovemcb:bhvk2 . Various nucleon-nucleon scattering observables as well as deuteron properties have been calculated using the expansion around the nontrivial fixed pointmcb:ksw2 ; mcb:cgss ; mcb:ssw ; mcb:fms . In this approach, pion-exchange forces are treated as perturbations. An alternative approach which is being explored by other groups is to use Weinberg’s power counting in the expansion of the potential, but then to iterate that potential to all orders in the LS equationmcb:orvk ; mcb:egm ; mcb:vkrev ; mcb:bmpvk ; mcb:pc . This may provide a way to evade the problems of slow convergence when OPE is included explicitlymcb:ch ; mcb:ks . Finally, strong long-ranged interactions, such as the Coulomb force, lead to quite different behaviour from the examples discussed here. They can still be treated using similar techniques, as in NRQEDmcb:nrqed and NRQCDmcb:nrqcd .
# Magnetic Fields in Star-Forming Molecular Clouds I. The First Polarimetry of OMC-3 in Orion A ## 1 Introduction It is now well established that magnetic fields play a significant role in the evolution of molecular clouds and their associated star formation (see Heiles et al. 1993 and references therein). At a distance of 500 pc, the Orion complex is the closest star forming region that is undergoing massive star formation, and as such, has been the object of intense study. Many studies have focused on the region known as OMC-1, a massive cloud core which lies behind the Orion Nebula (M42). This core is embedded in the integral-shaped filament, identified in the <sup>13</sup>CO $`J=1\to 0`$ transition by Bally et al. (1987) and most recently mapped at 850 and 450 µm by Johnstone & Bally (1999). The OMC-3 region lies at the northern tip of the filament (Bally et al. 1987), near the HII region NGC 1977 (see Kutner, Evans & Tucker 1976). Molecular studies reveal the dust temperatures to be considerably cooler (T $`\sim `$ 20–25 K) in the OMC-3 region than in OMC-1 (Chini et al. 1997). Dust condensations in OMC-3 were identified in 1.3 mm continuum by Chini et al. (1997). An evolutionary sequence has been suggested, with source ages declining as one moves north along the filament. Reipurth, Rodríguez & Chini (1999) have done a VLA search at 6 cm for compact sources in the region and report no sources coincident with the positions of MMS1 and MMS4, two of the most northern condensations identified by Chini et al. (1997). Additionally, there is no outflow associated with either source (Yu, Bally & Devine 1999; Chini et al. 1997; Castets & Langer 1995). These results suggest that these objects may in fact be in a pre-collapse phase, since outflows are known to be associated with the earliest phases of collapse (Shu, Adams & Lizano 1987). Goodman et al.
(1995) illustrated that measurements of polarization of background starlight in the optical and near-infrared are not effective tracers of magnetic field structure in dense molecular gas due to poor alignment and/or amorphous dust grain structure. At submillimeter and millimeter wavelengths, aligned, rotating grains produce polarized thermal emission. Draine & Weingartner (1996, 1997) have shown that radiative torques are highly effective at aligning grains of specific sizes ($`\sim `$0.2 $`\mu `$m). Lazarian, Goodman & Myers (1997) suggest that the effects of radiative torques could be comparable to paramagnetic inclusions (Purcell 1975, 1979; Spitzer & McGlynn 1979) at aligning grains in the star-forming interstellar medium. Both these mechanisms result in grains aligned with their long axes perpendicular to the magnetic field. In this paper, we present 850 µm imaging polarimetry of OMC-3. These data represent the first polarimetry of this active star-forming region and are also some of the first obtained with the new imaging polarimeter at the James Clerk Maxwell Telescope (JCMT). This is the first publication from a project designed to study magnetic field structure in a variety of molecular clouds in different phases of star formation. In $`\mathrm{\S }`$2, we describe the polarimeter and the observing and reduction techniques; in $`\mathrm{\S }`$3, a mosaic of OMC-3 is presented and discussed; $`\mathrm{\S }`$4 summarizes the results thus far. ## 2 Observations and Data Reduction The observations were taken from 1998 September 5 to 7 using the new imaging polarimeter on SCUBA (Submillimeter Common User Bolometric Array) at the JCMT. These nights were stable, with $`\tau `$(225 GHz) ranging from 0.05 to 0.07 during the period of observations. Calibration of the polarizer was performed on 1998 September 5 using the Crab Nebula, for which percentage polarization $`p=19.3\pm 4`$% and position angle $`\theta =155^\circ \pm 5^\circ `$ were measured.
The polarimeter consists of a rotating quartz half-waveplate and a fixed analyzer which are used to measure linear polarization of thermal emission. The waveplate introduces a phase-lag of one half wavelength between the plane of polarization along the ‘fast axis’ and the orthogonal plane. Rotation of the plate changes the angle between the fast axis and the plane of the incoming source polarization, so a varying component of the polarized emission is retarded. The analyzer is a photo-lithographically etched grid of 6 $`\mu `$m spacing which transmits only one plane of polarization (by absorbing photons with an E-component parallel to the wires) to SCUBA. The detector thus sees a modulated signal as the waveplate rotates. The variations are used to deduce the percentage polarization and polarization position angle of the source. Seven pointing centers were observed along the OMC-3 filament, four the first night and only three (slightly shifted in position) on subsequent nights to provide better coverage of low signal-to-noise regions. Data for each pointing center were co-added to create seven maps of $`I`$, $`Q=q/I`$ and $`U=u/I`$, where $`I`$, $`q`$ and $`u`$ are Stokes’ parameters. Each polarization cycle consists of 16 integrations at 22.5° rotation intervals (i.e. 4 independent measurements at each angle) in 8 minutes integration time. The brightest source, MMS6, was observed for only 6 cycles, or 48 minutes integration time. Observations centered on MMS1 and MMS4 required $`\sim `$3 hours integration time. The sensitivity of the SCUBA detector yielded polarization measurements on sources of 0.5 Jy beam<sup>-1</sup> at the 6$`\sigma `$ level for 4% polarized flux. For each polarization cycle map, the standard preliminary reduction was done. This included subtraction of sky levels as well as instrumental polarization (IP) for each bolometer. Polarizations less than 1% are unreliable, since the IP has been found to vary by $`\pm 0.5`$% for about 30% of the bolometers.
The average IP for all off-axis bolometers is $`0.88\pm 0.06`$% @ $`166^\circ \pm 2^\circ `$ while the IP of the central bolometer is $`1.08\pm 0.10`$% @ $`158^\circ \pm 3^\circ `$. After correction for source rotation across the array, the Stokes’ parameters were calculated by comparing measurements offset by 45° in waveplate rotation (90° on the sky). The $`I`$, $`Q`$ and $`U`$ maps for each pointing center were averaged, and standard deviations were derived by comparing the individual data sets. The maps were then binned spatially by a factor of two to yield 6″ (approximately half-beamwidth) sampling. At this point, it was necessary to diverge from the standard reduction technique to make mosaics of the $`I`$, $`Q`$ and $`U`$ maps before calculating the percentage polarization $`p`$ and position angle $`\theta `$. Mosaicing was done using the MAKEMOS tool in Starlink’s CCDPACK (a UK-based software package). Variances were used to weight the overlapping data values, and variances were also generated for the map. The calculation of $`p`$ (and its uncertainty) is given by: $$p=\sqrt{Q^2+U^2};\qquad dp=p^{-1}\sqrt{dQ^2Q^2+dU^2U^2}.$$ (1) A bias exists which tends to increase the $`p`$ value, even when $`Q`$ and $`U`$ are consistent with $`p=0`$, because $`p`$ is forced to be positive. The polarization percentages were debiased according to the expression: $$p_{db}=\sqrt{p^2-dp^2}.$$ (2) The position angle can then be calculated by the following relations: $$\theta =0.5\mathrm{arctan}(U/Q);\qquad d\theta =28.6^\circ /\sigma _p,$$ (3) where $`\sigma _p`$ is the ratio $`p_{db}/dp`$. The $`p_{db}`$ values were then thresholded such that $`p_{db}\le 100`$% and $`\sigma _p\ge 2.5`$. The position angles can of course take on any value, but we note that offsets of 180° cannot be distinguished in linear polarization. ## 3 Results ### 3.1 The Polarization Pattern Figure 4 shows the OMC-3 filament in 850 µm continuum (colored greyscale) overlaid with polarization vectors.
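The debiasing and thresholding steps of Eqs. (1)–(3) can be sketched as follows (a minimal illustration written by us, not the actual SCUBA reduction pipeline; the input values are invented):

```python
import numpy as np

def debias(Q, U, dQ, dU):
    """Percentage polarization and position angle, following Eqs. (1)-(3)."""
    p = np.hypot(Q, U)                              # Eq. (1)
    dp = np.sqrt(dQ**2 * Q**2 + dU**2 * U**2) / p
    p_db = np.sqrt(np.maximum(p**2 - dp**2, 0.0))   # Eq. (2), debiased
    theta = 0.5 * np.degrees(np.arctan2(U, Q))      # Eq. (3), in degrees
    sigma_p = p_db / dp
    dtheta = 28.6 / sigma_p
    return p_db, theta, sigma_p, dtheta

# two invented vectors, in per cent
Q  = np.array([3.0, 0.5]); U  = np.array([2.0, 0.1])
dQ = np.array([0.3, 0.4]); dU = np.array([0.3, 0.4])
p_db, theta, sigma_p, dtheta = debias(Q, U, dQ, dU)

# keep only vectors with p_db <= 100% and sigma_p >= 2.5, as in the text
keep = (p_db <= 100.0) & (sigma_p >= 2.5)
print(keep)   # [ True False]
```

The second, low signal-to-noise vector is rejected by the $`\sigma _p\ge 2.5`$ cut, which is exactly the behaviour the thresholding is meant to enforce.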
Only vectors up to 20% are plotted on the figure; 27 vectors in total were omitted. The blue contours indicate where $`\sigma _p=6`$ and $`\sigma _p=10`$. By equation (3), the vectors enclosed by these contours have $`d\theta \le 5^\circ `$ and $`d\theta \le 3^\circ `$, respectively. No vectors have $`\sigma _p<2.5`$, so no plotted vector has $`d\theta >12^\circ `$. The polarization vectors are well-ordered along the filament with the best alignment in the northern region between the sources MMS1 and MMS6 (as identified by Chini et al. 1997, see Figure 4). Since the vectors show a high degree of ordering, it is tempting to think that they are uniform across the field. However, extraction of data subsets centered on each of the four “regions” indicated in Figure 4 illustrates that this is not the case. The distribution of $`\theta `$ in each region is fit by either 1 or 2 Gaussians by minimizing chi-squared. The position angle (mean and dispersion) of each of these Gaussians is noted on the figure, as well as the reduced chi-squared of each fit. The dispersions in $`\theta `$ are relatively narrow, when compared with the sample presented in Myers & Goodman (1991) for 26 dark cloud regions. If a single Gaussian is fit to each subregion’s distribution (regardless of the goodness-of-fit), it is found that Regions A through D have dispersions of 8°, 8°, 9° and 12° respectively. Such low dispersions were identified only for dark clouds without clusters, i.e. less than 15 associated stars in 1 pc<sup>2</sup> (Myers & Goodman 1991), yet OMC-3 has a condensation density of 135 pc<sup>-2</sup> (taking 9 sources in a $`6^{\prime }\times 30^{\prime \prime }`$ area). According to the Myers & Goodman model of uniform and non-uniform field components, this result implies either a low ratio of non-uniform to plane-of-sky uniform components of the magnetic field, or a low number of magnetic field correlation lengths along the line of sight.
To implement their analysis fully will require measurement of the line-of-sight component of the magnetic field toward several positions in OMC-3. It is interesting to compare the changes in the position angle of the filament on the sky with the change in orientation of the polarization vectors. The filament’s orientation can be traced easily due to the positions of the condensations themselves, which without exception are embedded within it. Measuring an angle E of N, three main segments of OMC-3 can be distinguished. From MMS1 to MMS6, the filament is at an angle of $`\sim `$130° ($`-50^\circ `$); this area is covered by Regions A and B as denoted on Figure 4. These histograms reveal that the peaks fit to these distributions agree with the position angle of the filament to within 10°. The situation is similar for Region C, which contains MMS7, the only IRAS source in OMC-3 (05329$`-`$0505). The angle of the filament steepens from MMS6 to MMS7 to $`\sim `$160° ($`-20^\circ `$). The distribution of $`\theta `$ exhibits two peaks in position angle. The strongest peak is in fact at $`-19^\circ \pm 7^\circ `$ which indicates excellent alignment with the filament. (The uncertainty represents the 1$`\sigma `$ dispersion in the distribution.) However, a second peak exists at $`-36^\circ \pm 5^\circ `$. Finally, from MMS7 through MMS9, the filament aligns north-south (position angle 0°). None of the polarization vectors exhibit such an angle; instead, the distribution is double peaked at $`-33^\circ \pm 5^\circ `$ and $`-47^\circ \pm 15^\circ `$. Since the maximum uncertainty associated with any value of $`\theta `$ is $`12^\circ `$, Region D is the only one where no alignment exists between the filament and the polarization pattern. In short, the polarization, and hence inferred field direction (in the plane of the sky), $`B_{\perp }`$, changes as one moves along OMC-3. $`B_{\perp }`$ is predominantly perpendicular to the filament along most of its length, diverging by 30°–50° from the filament only in the southernmost part of OMC-3.
These results are roughly consistent with the work of Schleuning (1998), which also established $`B_{\perp }`$ perpendicular to the filament direction in OMC-1. The field direction does not appear to be affected by the presence of the dust condensations, but rather is aligned with the structure of the filament itself. At a resolution of 15″ (7500 A.U.), these data simply may not have enough resolution to detect the details of fields associated with the starless cores or protostellar envelopes and their associated outflows. ### 3.2 The Influence of Outflows? Many previous works have noted that the inferred $`B_{\perp }`$ field direction from long wavelength polarization is oriented either parallel (e.g. IRAS 16293, Tamura et al. 1993; NGC 1333, Minchin, Sandell & Murray 1995; Tamura, Hough & Hayashi 1995) or perpendicular (e.g. VLA 1623, Holland et al. 1996) to the observed protostellar outflow direction. The relative orientations of the outflows in OMC-3 are illustrated by the green lines on Figure 4, as measured by Chini et al. (1997) and Yu, Bally & Devine (1997) using <sup>12</sup>CO $`J=2\to 1`$ and H<sub>2</sub> shocks, respectively, to identify outflow signatures. With the exception of MMS6, the outflows are aligned E-W. Hence, in Regions A-C, the outflows are perpendicular neither to the filament nor the $`B_{\perp }`$ field. In Region D, they are aligned perpendicular to the filament, but are offset from $`B_{\perp }`$ by 30°–50°; however, it is possible that we may be detecting a superposition of the magnetic fields of the filament and the outflow(s). Reipurth, Rodríguez & Chini (1999) suggest that MMS9 is in fact the driving source for the most powerful outflow in the OMC-3 region. If the evolutionary sequence proposed by Chini et al. (1997) is correct, then MMS9’s outflow may have had sufficient time to alter the magnetic field in its vicinity. These results suggest that the field of the filament alone does not determine the outflow direction.
Thus, there must be other relevant factors which determine the structure of a protostellar system. Were these data interpreted as a uniform field along which material had collapsed, one would naively expect protostellar disks to be aligned parallel to the filament, and outflows perpendicular to it but aligned with the ambient field; however, this is not observed. ### 3.3 Percentage Polarization The distribution of $`p_{db}`$ along the filament exhibits a mean value of 4.2%, with a 1$`\sigma `$ dispersion of 1%. Values up to 100% are allowed, but only those $`<20`$% (i.e. all but 27) are plotted on Figure 4 since larger values are unlikely to be physical. The rms $`dp`$ and $`\sigma _p`$ values of unplotted data are 15% and 2.9, compared to 2.5% and 7.3 for all values of $`p`$ with $`\sigma _p>2.5`$. Polarizations up to 11.9% have been detected with $`\sigma _p\ge 7`$. Several authors have discussed whether observations of decreased polarization percentage toward regions of higher flux are due to changes in physical conditions or averaging of small scale variations in a large beam. Figure 4 shows the percentage polarization along a cut perpendicular to the filament through MMS4 (indicated by the red cross in Figure 4), which has the highest $`\sigma _p`$ in our data set. Although the uncertainties are increasing toward the edge of the filament, the data show a clear trend of decreasing polarization percentage toward the filament’s center. These changes could be due to changes in either grain properties or field strength, but they have also been suggested as an observable signature of helical fields (Fiege & Pudritz 1999). The interpretation of this depolarization effect will be discussed more fully in a forthcoming paper. ## 4 Summary Submillimeter wavelength polarimetry with the JCMT has revealed a highly ordered polarization pattern along the filament known as OMC-3.
These data indicate that $`B_{\perp }`$ is perpendicular to the filament along most of its length, diverging only in the most southern regions by 30°–50°. The outflows which have been observed in OMC-3 are aligned with neither the ambient field nor the filament in any consistent way. The field of the filament is thus unlikely to be the dominant factor in determining the configuration of the protostellar systems embedded within it. The mean percentage polarization is 4.2%, with a 1$`\sigma `$ dispersion of 1%. Values as high as 11.9% have been measured with $`\sigma _p\ge 7`$. A depolarization effect is measured toward the denser parts of the filament. The authors would like to thank J. Greaves, T. Jenness, G. Moriarty-Schieven and A. Chrysostomou at the JCMT for their assistance with problems both large and small during and especially after observing, and the referee for an insightful and thorough review. BCM would like to thank J. Fiege for many constructive conversations and MATLAB expertise. The research of BCM and CDW is supported through grants from the Natural Sciences and Engineering Research Council of Canada. The JCMT is operated by the Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the UK, the Netherlands Organization for Scientific Research, and the National Research Council of Canada.
# First Experimental Evidence for Chaos-Assisted Tunneling in a Microwave Annular Billiard ## Abstract We report on first experimental signatures for chaos-assisted tunneling in a two-dimensional annular billiard. Measurements of microwave spectra from a superconducting cavity with high frequency resolution are combined with electromagnetic field distributions experimentally determined from a normal conducting twin cavity with high spatial resolution to resolve eigenmodes with properly identified quantum numbers. Distributions of so-called quasi-doublet splittings serve as basic observables for the tunneling between whispering gallery type modes localized to congruent, but distinct tori which are coupled weakly to irregular eigenstates associated with the chaotic region in phase space. For two decades a new kind of tunneling mechanism has attracted great interest, since it demonstrates how the dynamical features of a classical Hamiltonian system affect the behavior of its quantum counterpart . This so-called “dynamical tunneling” occurs whenever a discrete symmetry of the system leads to distinct but symmetry related parts of the underlying classical phase space. In contrast to the well-known barrier tunneling, dynamical tunneling only depends upon the probability for a quantum particle to leave certain regions of phase space and travel into others, even though this is classically forbidden. This basically involves the coupling strength between distinct phase space regions. In the special case of two symmetry related regular regions separated by a chaotic area in a mixed phase space, semiclassical quantization yields pairs of quantum states which are localized to the corresponding sets of congruent, but distinct tori. These so-called quasi-doublets show a very sensitive splitting behavior which depends upon the coupling to irregular eigenstates associated with the intermediate chaotic sea.
This non-direct, enhanced coupling of regular eigenmodes via chaotic ones is what defines chaos-assisted tunneling in its original sense . The aim here is to demonstrate for the first time that chaos-assisted tunneling can be observed experimentally, even for a case where the size of the splitting is several orders of magnitude below the typical mean level spacing of the system. For this purpose we performed measurements on superconducting as well as normal conducting microwave cavities constituting a special family of Bohigas’ annular billiard . This system has been proven in very extensive and certainly also very accurate computer simulations, especially in Refs. , to be a paradigm for chaos-assisted tunneling and provides access for experimental investigation. The two-dimensional geometry of the annular billiard is defined by two circles with radii $`r`$ and $`R`$, respectively, the latter being set to unity, and by the center displacement or eccentricity $`\delta `$, see left part of Fig.1. In the following, only the special one-parameter family $`r+\delta =0.75`$ will be considered, since it provides all the features which are relevant for chaos-assisted tunneling: From the classical point of view the system shows a transition from integrable ($`\delta =0`$) to mixed behavior ($`\delta >0`$), thus developing a growing chaoticity with increasing $`\delta `$. Furthermore, the discrete reflection symmetry leads to congruent but classically distinct regions in phase space. To demonstrate this, Fig.1 (right part) shows a typical Poincaré surface of section for the configuration ($`\delta =0.20/r=0.55`$). Here the area preserving Birkhoff coordinates $`L`$ (point of impact on the outer circle) and $`S=\mathrm{sin}\alpha `$ (angular momentum of the billiard particle) have been used.
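The invariance of the high-$`|S|`$ region can be made concrete with a little geometry (our sketch, not from the paper): a chord of the unit circle traversed with angular momentum $`S`$ stays at perpendicular distance $`S`$ from the center, so an orbit with $`S>r+\delta `$ can never reach the inner circle, wherever that circle sits inside:

```python
import numpy as np

def min_distance_to_inner_center(S, delta, n_bounces=500):
    """Iterate a whispering-gallery orbit with conserved angular momentum S
    in the unit circle and return the minimal perpendicular distance of its
    chords from the inner-circle center at (delta, 0)."""
    dtheta = 2.0 * np.arccos(S)     # polar angle advanced per bounce
    theta, dmin = 0.1, np.inf
    c = np.array([delta, 0.0])
    for _ in range(n_bounces):
        p1 = np.array([np.cos(theta), np.sin(theta)])
        theta += dtheta
        p2 = np.array([np.cos(theta), np.sin(theta)])
        t = p2 - p1                 # chord direction
        # perpendicular distance from c to the chord through p1 and p2
        d = abs(t[0] * (c[1] - p1[1]) - t[1] * (c[0] - p1[0])) / np.hypot(t[0], t[1])
        dmin = min(dmin, d)
    return dmin

r, delta = 0.55, 0.20               # the family r + delta = 0.75
print(min_distance_to_inner_center(0.80, delta) > r)  # True: S > 0.75, orbit misses the inner circle
print(min_distance_to_inner_center(0.60, delta) > r)  # False: such an orbit would intersect it
```

Every chord lies at distance between $`S-\delta `$ and $`S+\delta `$ from the displaced center, which is why the coastal region $`|S|>r+\delta =0.75`$ survives unchanged for every member of the family.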
Beside a large chaotic sea with chains of stable islands in the center, both of which are influenced by a change of the eccentricity $`\delta `$, two symmetry related but distinct neutrally stable coastal regions for $`|S|>0.75`$ can be observed. By construction of the family $`r+\delta =0.75`$ these regular regions are invariant under variations of $`\delta `$, since the corresponding trajectories do not hit the inner circle. As a consequence $`S`$ is conserved, indicated by horizontal lines in the surface of section. Those lines correspond to two so-called whispering gallery trajectories . From this, the only difference between the two distinct regular regions is the sign of $`S`$, i.e. the sense of motion for the propagating particle. The fundamental question then concerns the quantum counterpart of the classically forbidden transport between the distinct coastal regions: dynamical tunneling. Since the coupling between both regions crucially depends upon the topology and the size of the chaotic sea, the system is particularly well suited to study chaos-assisted tunneling. But what is the basic observable for the tunneling strength in the corresponding quantum system? To answer this it is very instructive to start with the integrable case ($`\delta =0`$). Solving the Schrödinger equation with Dirichlet boundary conditions leads to eigenvalues $`k_{n,m}`$ and eigenstates $`\mathrm{\Psi }_{n,m}`$, with the angular momentum quantum number $`n`$ and the radial quantum number $`m`$. Due to EBK quantization the property $`S=n/k_{n,m}`$ is the quantum angular momentum which has to be compared with the classical $`S=\mathrm{sin}\alpha `$ in order to find the location of a certain quantum state in the phase space. While in the classical system the reflection symmetry of the billiard leads to two distinct but related regular regions with opposed sense of motion for the propagating particle (i.e.
the whispering gallery trajectories clockwise and counterclockwise), the corresponding quantum eigenstates are organized in doublets for $`\delta =0`$ with two parities, even and odd, respectively. However, continuously increasing the eccentricity $`\delta `$ systematically destroys this doublet structure, yielding singlets for states with $`S=n/k`$ right within the chaotic sea ($`S<0.75`$) and quasi-doublets on the remaining regular coast ($`S>0.75`$). As in the case of the well-known double-well potential the very small splitting of those quasi-doublets is directly determined by the classically forbidden tunneling, thus presenting a very effective observable for the hardly accessible tunneling strength. Since the location of a certain quasi-doublet on the regular coast (defined by $`S=n/k`$) as well as the transport features of the chaotic sea (defined by the eccentricity $`\delta `$) have a direct impact on the splitting, its systematic investigation allows the experimental study of chaos-assisted tunneling in the annular billiard. As in earlier studies (for an overview, see ), we simulated the quantum billiard by means of a two-dimensional electromagnetic microwave resonator of the same shape (see l.h.s. of Fig.1). The measurements were divided into two parts: Taking in total three different configurations of the family $`r+\delta =0.75`$ (i.e. $`\delta =0.10`$, 0.15 and 0.20) we performed on one hand experiments with a superconducting Niobium resonator (scaled to $`R=\frac{1}{8}`$ m) at 4.2 K in order to measure quasi-doublet splittings within the frequency range up to $`f=20`$ GHz. The very high quality factor of up to $`Q\approx 10^6`$ allows a resolution of $`\mathrm{\Gamma }/f\approx 10^{-6}`$, where $`\mathrm{\Gamma }`$ is equal to the full width at half maximum of a resonance. For demonstration, Fig.2 shows transmission spectra of the superconducting resonator in the vicinity of 9 GHz.
Beside several singlets exactly one quasi-doublet can be observed which shows a small but systematic displacement with the eccentricity $`\delta `$. In all cases the very small splitting of the quasi-doublet is clearly detectable. This is only due to the high frequency resolution of the superconducting resonator. In this context, however, it is important to note that the position of the exciting antennas has to be chosen very carefully in order to minimize the perturbation on the whispering gallery type modes in the coastal region (see Fig.1) and thus not to influence the size of their physical quasi-doublet splitting. Using antennas right within the whispering gallery region of the billiard (see sketch on top of Fig.2) always produces “false” splittings even for the concentric system ($`\delta =0`$) with twofold degenerate states. We therefore used antennas in the shadow region of the inner circle also preserving the symmetry of the whole geometry, cf. Fig.2. For a proper identification of the quantum numbers $`(n|m)`$ of the modes associated with the quasi-doublets on the other hand we used a normal conducting Copper twin of the Niobium billiard cavity. There we measured the corresponding wavefunctions, i.e. electromagnetic field distributions, from which the relevant quantum numbers $`n`$ and $`m`$ could be deduced, even if they are far from being “good quantum numbers” in the given eccentric systems which are non-integrable. This second part of the experiment was based on a field perturbation method originally introduced in accelerator physics and used successfully in billiard research before . According to Slater’s theorem a small metallic body inside the cavity locally interacts with the electromagnetic field in such a way that a frequency shift of the excited mode results from the compensation of the non-equilibrium between the totally stored electric and magnetic field energy.
This frequency shift, $$\mathrm{\Delta }f=f_0-f=f_0\left(a\stackrel{}{E_0}^2-b\stackrel{}{H_0}^2\right)$$ (1) with respect to the unperturbed mode (index 0), directly depends upon the superposition of the squared electric and magnetic fields, $`\stackrel{}{E_0}`$ and $`\stackrel{}{H_0}`$, respectively. Since the quantum wavefunction is related to the electric field only, the magnetic component has to be removed by a proper choice of the geometry constants $`a`$ and $`b`$ in Eq.(1), which is achieved by using needle-like bodies (1.84 mm in length and 1.00 mm in diameter). Moving the body across the whole two-dimensional surface of the billiard with a spatial resolution of about one tenth of a wavelength by means of a guiding magnet and detecting the frequency shift $`\mathrm{\Delta }f`$ at each position finally provides the complete field distribution. Examples for the configuration $`\delta =0.20`$ in the vicinity of 9 GHz are plotted in the upper part of Fig.3. Of the three distributions, only the middle one with the quantum numbers $`n=18`$ (36 field maxima in the polar direction) and $`m=1`$ (one field maximum in the radial direction) is characteristic of the modes which are localized in the whispering gallery region (see Fig.2). In contrast, the distributions on the l.h.s. and on the r.h.s. show a totally different pattern. The parity of the distributions is determined in the following way: if there is maximum field strength on the line which defines the reflection symmetry of the billiard, positive parity can be assigned to the mode; likewise, negative parity for zero field strength on this line. Underneath the squared electric field strength distributions in Fig.3, the corresponding transmission spectrum taken at 300 K with the normal conducting Copper cavity is shown. The three broad resonances associated with the field distributions are much better resolved in the measurement at 4.2 K of the superconducting Niobium twin cavity. 
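As a rough illustration (our own sketch, not the authors' analysis code), the conversion from a measured map of frequency shifts to the squared electric field distribution, together with the parity assignment described above, could look as follows; the grid, the resonance frequency and the geometry constant `a` are hypothetical placeholders.

```python
import numpy as np

# Hypothetical post-processing sketch: a grid of measured shifts
# df = f0 - f is converted to the squared electric field via Slater's
# theorem; with needle-like bodies the magnetic term b*H0^2 is negligible,
# so E0^2 is proportional to df / (a * f0).
f0 = 9.0e9                      # unperturbed resonance frequency in Hz (illustrative)
a = 1.0e-12                     # geometry constant of the needle (hypothetical)

df = np.zeros((5, 5))           # toy 5x5 shift map, row 2 = symmetry line
df[2, :] = 2.0e3                # maximum shift on the symmetry line
df[1, :] = df[3, :] = 1.0e3

E2 = df / (a * f0)              # squared field distribution (up to normalization)

# parity assignment: maximum field strength on the symmetry line -> even
# parity (+1); zero field strength on that line -> odd parity (-1)
sym_line = E2[2, :]
parity = +1 if sym_line.max() > 0.5 * E2.max() else -1
```

In this toy map the field peaks on the symmetry line, so even parity is assigned.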
The small displacement in frequency of the resonances in the two measurements is due to mechanical imperfections of each individual cavity and positioning errors of the respective inner circles within the resonators. Mechanical uncertainties of order $`\pm 100`$ $`\mu `$m relative to the radius of the outer circle $`R=`$125 mm are sufficient to account for the observed displacements. The spectrum at 4.2 K in Fig.3 shows that the resonance magnified in the insert is in fact one of the expected quasi-doublets characteristic of chaos-assisted tunneling. Naturally, this quasi-doublet is not resolved in the spectrum taken at room temperature, and the field distribution in the upper part of Fig.3 proves that, of the two modes corresponding to the doublet in the particular case considered, the one with negative parity is excited more strongly than the other. Calculating the quantum angular momentum $`S=n/k`$ (with $`k=2\pi R/\lambda =2\pi Rf/c_0`$, where $`f`$ denotes the centroid frequency of the quasi-doublet and $`c_0`$ the speed of light) finally yields the position of the whispering gallery type mode on the corresponding classical surface of section, see r.h.s. of Fig.1. In the case of mode $`(18|1)`$ one obtains $`S\simeq 0.77`$, characteristic of a mode in the so-called “beach region” defined by the borderline $`S=0.75`$ between the chaotic sea and the regular coast. This comparison demonstrates that the measurements combine the high spatial resolution of about $`\lambda /10`$ for the normal conducting billiard with the high frequency resolution of about $`1/Q`$ for the superconducting one, thus allowing a very effective classification of regular quasi-doublets as well as chaotic singulets in the range up to approximately 14 GHz, above which the splittings become smaller than the resonance widths of the superconducting resonator. The difference in frequency between the peaks of each quasi-doublet was estimated by a non-linear fit to “skew Lorentzians” (see Eq. 
(4) in ). In what follows, we only consider the family with quantum numbers $`(n|1)`$, since it consists of some 30 resolved and unambiguously identified quasi-doublets within the measured frequency range. To uncover effects due to chaos-assisted tunneling, the splitting of a certain quasi-doublet has to be analyzed as a function of its corresponding position in the classical phase space. As mentioned above, this position may be expressed in terms of $`S=n/k`$ as the quantized analog of the classical angular momentum $`S=\mathrm{sin}\alpha `$, directly representing its location in the underlying surface of section. The resulting curve, again for the configuration $`\delta =0.20`$, is shown in the lower part of Fig.4. Here, the distribution of normalized splittings $`|\mathrm{\Delta }f/f|`$ shows a very smooth transition from chaotic states defined by large splittings right within the chaotic sea ($`S<0.75`$) to regular quasi-doublets with very small splittings in the classical coastal region ($`S>0.75`$). The measured field distributions also show an increasing regularity with growing $`S`$, as can be seen from the examples in the upper part of Fig.4. In particular, mode $`(29|1)`$ is hardly distinguishable from the corresponding concentric mode (with splitting zero) although the system is strongly eccentric. Besides this global behavior a first strong signature of chaos-assisted tunneling can be observed in the particular shape of the splitting curve: In the direct vicinity of the beach at $`S=0.75`$ the quasi-doublets show a locally enhanced splitting amplitude, thus indicating a very effective coupling between the regular coast and the chaotic sea. As described above, this corresponds to a locally enhanced tunneling strength in the beach region as theoretically predicted in . To evaluate the influence of the chaoticity of the system, Fig.5 shows a direct comparison of the splittings for all eccentricities $`\delta `$ in the vicinity of the beach at $`S=0.75`$. 
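As a quick cross-check of the classification described above, the quantized angular momentum of mode $`(18|1)`$ can be computed from $`S=n/k`$ with $`k=2\pi Rf/c_0`$; the centroid frequency used below is only approximate, since the doublet lies in the vicinity of 9 GHz:

```python
import math

# Quantized angular momentum S = n/k for the whispering-gallery mode (18|1),
# with k = 2*pi*R*f/c0 and the outer radius R = 0.125 m of the experiment.
c0 = 299_792_458.0     # speed of light in m/s
R = 0.125              # outer radius in m
f = 8.9e9              # approximate centroid frequency in Hz
n = 18                 # polar quantum number

k = 2 * math.pi * R * f / c0
S = n / k
print(f"S = {S:.2f}")  # about 0.77, i.e. in the beach region near S = 0.75
```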
Note that the splittings for different $`\delta `$ not only enhance the visibility of the maximum, they furthermore reveal an additional feature of chaos-assisted tunneling: while splittings on the rising left part of the maximum, i.e. within the chaotic sea ($`S<0.75`$), are distributed quite systematically (e.g. the data points for $`\delta =0.20`$ always correspond to the largest splittings), the falling right part shows large fluctuations for a given eccentricity $`\delta `$. This effect has also been predicted theoretically and is attributed to the high rate of anti-crossings with chaotic modes at high angular momenta $`S`$. Thus on the r.h.s. of $`S=0.75`$ the tunneling strength shows a very random dependence on the eccentricity $`\delta `$, leading to strong fluctuations in the distribution of splittings. Finally, for even larger values of $`S`$ the splitting amplitudes are of the order of the inverse quality factor, $`\mathrm{\Delta }f/f\simeq 1/Q\simeq 10^{-6}`$, defining the resolution limit of the present setup. In summary, we have presented first experimental signatures of chaos-assisted tunneling in a billiard. As the basic observable we have investigated the splittings of quasi-doublets with respect to their position in the classical phase space and their dependence on the eccentricity $`\delta `$. A local maximum in the vicinity of the beach region, with a systematically rising and randomly falling part, has been found, which directly reflects the enhanced tunneling strength at this critical location between the regular coast and the chaotic sea. In this context, a combined experimental setup using normal as well as superconducting billiards has offered a very effective tool for measuring highly resolved quasi-doublets with properly identified quantum numbers. We are particularly grateful to O. Bohigas for encouraging us to study this novel mechanism of tunneling, and to him, S. Tomsovic and D. Ullmo for their kind invitations to Orsay and many fruitful discussions. One of us (A.R.) 
has also profited very much from M.C. Gutzwiller’s insight. We thank E. Doron and S. Frischat especially for guiding us to “look on the beach”. This work has been supported by the DFG under contract number Ri 242/16-1 and through the SFB 185 “Nichtlineare Dynamik”.
# Effects of Large CP-violating Soft Phases on Supersymmetric Electroweak Baryogenesis ## I Introduction The universe is not matter-antimatter symmetric; as shown by primordial nucleosynthesis measurements, the baryon density to entropy density ratio $`n_B/s`$ is constrained to be about $`4\times 10^{-11}`$ . The necessary ingredients for a theory to explain this asymmetry are the Sakharov criteria: baryon number violation, C and CP violation, and non-equilibrium conditions . Electroweak baryogenesis is an attractive mechanism with the best chance of verification through traditional collider experiments . The central feature of electroweak baryogenesis is spontaneous symmetry breaking. At high temperatures, the vacuum expectation value of the Higgs field is zero, but as the universe cools down over time, a second minimum appears in the potential at $`v\ne 0`$. The universe undergoes a phase transition, from a “symmetric” phase where $`v=0`$ to the “broken” phase where $`v\ne 0`$. If the phase transition is second order, which means that no potential barrier exists, then the universe is approximately in equilibrium throughout the transition and no baryogenesis can result (since the third Sakharov criterion is violated). If the phase transition is first order, it occurs through quantum tunneling. As the universe cools, the probability of making a transition from the symmetric to the broken phase grows, and when the transition occurs at a certain point in space, a bubble of broken phase forms and expands. Because of its first order nature, the phase transition does not occur throughout the whole universe at the same time, resulting in non-equilibrium conditions. The baryon asymmetry is generated as the wall of the expanding bubble passes through points in space. Particles in the unbroken phase close to the wall interact with the changing Higgs field profile in the wall, and the presence of CP-violating couplings produces source terms for the participating particles. 
Different chiralities couple with different strengths when CP is violated, and a difference occurs in the reflection and transmission probabilities for the two chiralities. Due to rapid gauge, Yukawa, and strong sphaleron interactions, the CP-violating source terms are translated into a net left handed weak doublet quark density, which is finally converted into a baryon asymmetry by weak sphaleron transitions. The asymmetry then diffuses through the bubble wall into the broken phase, where the weak sphaleron interactions are exponentially suppressed. Subsequent washout of the baryon asymmetry can therefore be kept under control provided the first order phase transition is strong enough. The Standard Model satisfies the three Sakharov criteria, but it cannot generate the required value of $`n_B/s`$. This is because the only source of CP violation in the SM comes from the phase $`\delta `$ in the CKM quark mixing matrix. The baryon asymmetry is proportional to $`\mathrm{sin}\delta `$, but this contribution is suppressed by flavor mixing factors, and the resulting asymmetry is far too small. Moreover, the phase transition has to be strongly first order to avoid washing out the produced baryon number, which translates into the criterion $`v(T_c)/T_c\stackrel{>}{}1`$ . However, in the Standard Model, the phase transition is too weakly first order to prevent washout unless the SM Higgs mass is less than 50 GeV, far below the present experimental limit . The situation can be improved in the Minimal Supersymmetric Standard Model (MSSM), as the light right handed top squark contribution to the temperature dependent effective potential can push the value of $`v(T_c)/T_c`$ up to acceptable values even for light Higgs masses allowed by experimental searches . Also, supersymmetric extensions of the SM include additional sources of CP violation, which in general can further enhance the possibility of baryon asymmetry production during the electroweak phase transition. 
As a result, a specific region in the MSSM parameter space corresponding to a heavy CP-odd Higgs boson, a light CP-even Higgs boson, a light right-handed stop, and a heavy left-handed stop can provide a plausible framework for electroweak baryogenesis. The CP-violating interactions in the MSSM arise from the complex phases of the soft supersymmetry breaking terms in the Lagrangian and from the phase of $`\mu `$. Since the largest contributions come from charginos and neutralinos, and less importantly from the right handed stops, only the gaugino mass phases $`\phi _1`$, $`\phi _2`$ and $`\phi _3`$, the phase of $`\mu `$, and the phase of the stop trilinear parameter $`A_t`$ are relevant for baryogenesis. These phases have to be small, typically $`\stackrel{<}{}10^{-2}`$, if they are considered individually with sparticle masses of $`𝒪`$(TeV); otherwise they induce contributions to the electric dipole moments (EDMs) of the neutron and electron exceeding the experimental limits . However, these constraints can be avoided if relations among the soft breaking parameters ensure cancellations of individual contributions to the EDMs . In the most general case this possibility leads to models with light superpartner spectra and CP-violating phases of $`𝒪`$(1) . Remarkably, string motivated scenarios with non-universal gaugino masses can be constructed in which such EDM cancellations occur naturally as a result of the SM embedding on five-branes . Different methods of calculating the baryon asymmetry, using both classical and quantum Boltzmann equations, have produced the required asymmetry, provided that $`\mathrm{sin}\phi _\mu \simeq 10^{-2}`$ to $`10^{-4}`$, in agreement with the experimental limits on the EDMs of the electron and neutron, when the other MSSM phases are taken to be zero. In this paper we assume that large phases are allowed by the cancellation mechanism and that at the same time the right handed stop is light enough to significantly modify the finite temperature Higgs potential. 
All other sfermions are heavier and their contribution to the Higgs potential is Boltzmann suppressed. We briefly review the generation of the baryon asymmetry and the factorization of the phase dependence, emphasizing the significance of the $`\phi _2+\phi _\mu `$ phase combination. In the large phase scenario it is then possible that the amount of baryon asymmetry generated at the phase transition is $`10^2`$ to $`10^4`$ times larger than what is observed. We solve the full quantum Boltzmann equations to extract the CP-violating source terms, which also provides an enhancement compared to classical Boltzmann equations due to quantum memory effects . It is then necessary to wash out some of the asymmetry; therefore the constraint $`v(T_c)/T_c\stackrel{>}{}1`$ can be relaxed. We reevaluate the washout calculation, taking into account the effects of requiring some washout to occur, and derive our limits for the light Higgs mass and the right handed stop based on electroweak baryogenesis. We are able to show that even in the one-loop approximation for the temperature dependent effective Higgs potential the light Higgs mass upper limit can be pushed up to 115 GeV without invoking negative values of the right handed stop soft breaking mass parameter $`m_U^2`$, which can lead to color-breaking global minima and scalar potential instability . ## II Baryon Asymmetry Calculation In order to calculate the baryon asymmetry of the universe, one has to start with a self-consistent computation of the CP-violating sources resulting from particle interactions with the changing Higgs profile in the bubble wall. We follow the procedure of reference , which uses the closed-time-path formalism of finite-temperature field theory to derive quantum Boltzmann equations for Higgsinos and stops. Quantum memory effects resulting from the correct treatment of particle propagation in the plasma strengthen the non-equilibrium character of the particle scattering off the bubble wall and produce larger source terms. 
The dominant sources of CP-violation that are relevant to the electroweak baryogenesis scenario come from the interactions of the Higgs fields with charginos and neutralinos. These interactions couple the Higgsino and gaugino components of charginos and neutralinos and involve potentially large CP-violating phases originating from the mixing. For the charginos, the CP-violating Lagrangian in the symmetric phase is $$\mathcal{L}=gH_1^0\overline{\stackrel{~}{H}}P_L\stackrel{~}{W}-gH_2^0\overline{\stackrel{~}{W}}P_L\stackrel{~}{H}+h.c.$$ (1) The CP-violating phases $`\phi _\mu `$ and $`\phi _2`$ are introduced by switching to the basis of mass eigenstates in the broken phase (see Appendix). Following the steps in to generate the CP-violating source term in the Boltzmann equation, we obtain for any point $`X`$ inside the bubble wall $$𝒮_C=2g^2\mathrm{Im}(M_2\mu )v^2(X)\dot{\beta }(X)\mathcal{I}_{\stackrel{~}{W}}=\gamma _{\stackrel{~}{W}}\mathrm{sin}(\phi _\mu +\phi _2),$$ (2) where $`v^2=v_1^2+v_2^2`$ and $`\dot{\beta }(X)=d\beta (X)/dt`$ characterizes the temporal variation of the Higgs profile $`\mathrm{tan}\beta (X)=v_2(X)/v_1(X)`$ as the wall passes through point $`X`$. $`\mathcal{I}_{\stackrel{~}{W}}`$ is a temperature dependent phase-space integral (the explicit form can be found in ) and includes information about the thermal behavior of the winos and Higgsinos in the high temperature plasma. 
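The factorization of the reparametrization-invariant phase combination in Eq. (2) can be checked numerically; the parameter values below are illustrative only:

```python
import cmath
import math

# Check that Im(M2 * mu) = |M2||mu| sin(phi2 + phi_mu): only the
# reparametrization-invariant combination phi2 + phi_mu enters the source.
abs_M2, phi2 = 250.0, 0.8        # |M2| in GeV, phase in rad (illustrative)
abs_mu, phi_mu = 200.0, 0.5      # |mu| in GeV, phase in rad (illustrative)

M2 = abs_M2 * cmath.exp(1j * phi2)
mu = abs_mu * cmath.exp(1j * phi_mu)

lhs = (M2 * mu).imag
rhs = abs_M2 * abs_mu * math.sin(phi2 + phi_mu)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```

Shifting both phases by a common rephasing of the gaugino and Higgsino fields leaves the sum, and hence the source, unchanged.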
The neutralino CP-violating interactions are $$\mathcal{L}=\frac{1}{2}[H_1^0\overline{\stackrel{~}{H_1}}P_L(g_2\stackrel{~}{W_3}-g_1\stackrel{~}{B})+H_2^0(g_2\overline{\stackrel{~}{W_3}}-g_1\overline{\stackrel{~}{B}})P_L\stackrel{~}{H_2}]+h.c.$$ (3) Following the same steps as in the chargino case, we find the source term $`𝒮_N`$ $`=`$ $`g_2^2\mathrm{Im}(M_2\mu )v^2(X)\dot{\beta }(X)\mathcal{I}_{\stackrel{~}{W}}+g_1^2\mathrm{Im}(M_1\mu )v^2(X)\dot{\beta }(X)\mathcal{I}_{\stackrel{~}{B}}`$ (4) $`=`$ $`\gamma _{\stackrel{~}{W}}\mathrm{sin}(\phi _\mu +\phi _2)+\gamma _{\stackrel{~}{B}}\mathrm{sin}(\phi _\mu +\phi _1).`$ (5) We can combine the source terms for the charginos and neutralinos to obtain the total source term for Higgsinos $$𝒮_{\stackrel{~}{H}}=3\gamma _{\stackrel{~}{W}}\mathrm{sin}(\phi _2+\phi _\mu )+\gamma _{\stackrel{~}{B}}\mathrm{sin}(\phi _1+\phi _\mu )$$ (6) where $`\gamma _{\stackrel{~}{W}}`$ $`=`$ $`|\mu ||M_2|g_2^2v^2(X)\dot{\beta }(X)\mathcal{I}_{\stackrel{~}{W}},`$ (7) $`\gamma _{\stackrel{~}{B}}`$ $`=`$ $`|\mu ||M_1|g_1^2v^2(X)\dot{\beta }(X)\mathcal{I}_{\stackrel{~}{B}}.`$ (8) Here we emphasize that the source depends on the full physical (reparametrization invariant) combinations of the phases $`\phi _1+\phi _\mu `$ and $`\phi _2+\phi _\mu `$ which factorize from the rest of the source term. In this sense our considerations are independent of the particular details going into the calculation of the Higgsino thermal production rate. The phase space integrals $`\mathcal{I}_{\stackrel{~}{W}}`$ and $`\mathcal{I}_{\stackrel{~}{B}}`$ exhibit strong resonant behavior leading to a maximum for $`m_{\stackrel{~}{H}}\simeq m_{\stackrel{~}{W}}`$ ($`m_{\stackrel{~}{H}}\simeq m_{\stackrel{~}{B}}`$) . In terms of the soft breaking parameters the enhancement occurs when the gaugino masses $`M_1`$ or $`M_2`$ are close in value to the Higgsino mass parameter $`\mu `$. For similar reasons it is easy to understand why the right handed stop contribution to the CP-violating source is always subdominant. 
The corresponding phase-space integral $`\mathcal{I}_{\stackrel{~}{t}_R}`$ is typically far away from its maximum, as $`m_{\stackrel{~}{t}_L}\gg m_{\stackrel{~}{t}_R}`$ in our framework, and the stop source can therefore be neglected. The baryon asymmetry can be directly related to the Higgsino source by solving a set of coupled diffusion equations for the Higgs and Higgsino, top and stop, and the first two generation quark and squark densities . We assume that strong sphaleron transitions and interactions induced by the top Yukawa coupling are very fast, allowing us to reduce the number of relevant equations to the diffusion equations for Higgses and Higgsinos and baryon number. They can be solved analytically, with the result $$\frac{n_B}{s}=\frac{81𝒜\overline{D}\mathrm{\Gamma }_{ws}}{82v_w^2s},$$ (9) where $`\overline{D}`$ is an effective diffusion constant, $`\mathrm{\Gamma }_{ws}`$ is the weak sphaleron transition rate, and $`v_w`$ is the velocity of the bubble wall. The coefficient $`𝒜`$ is determined by boundary conditions to be $$𝒜=\frac{1}{\overline{D}\lambda _+}\int _0^{\mathrm{\infty }}𝑑u𝒮_He^{-\lambda _+u},$$ (10) where $$\lambda _+=\frac{v_w+\sqrt{v_w^2+4\stackrel{~}{\mathrm{\Gamma }}\overline{D}}}{2\overline{D}},$$ (11) and $`\stackrel{~}{\mathrm{\Gamma }}`$ is an effective decay constant. ## III Calculation of Washout Once the phase transition has occurred, weak sphaleron transitions tend to erase any asymmetry that has been created. In the broken phase, the weak sphaleron rate is $$\mathrm{\Gamma }_{ws}\simeq 2.8\times 10^5T(\frac{\alpha _W}{4\pi })^4\kappa (\frac{E_{sp}}{BT})^7e^{-\frac{E_{sp}}{T}},$$ (12) where $$E_{sp}=\frac{gv(T)B}{\alpha _W},$$ (13) and $$B=B(\frac{\lambda }{g^2})\simeq 1.87.$$ (14) $`\kappa `$ is a constant in the range $`10^{-4}\stackrel{<}{}\kappa \stackrel{<}{}10^{-1}`$. 
The differential equation satisfied by $`n_B`$ in the broken phase is $$\frac{dn_B}{dt}=-Cn_f\mathrm{\Gamma }_{ws}n_B,$$ (15) where $`C`$ is a number of $`𝒪`$(1) and we take $`C\simeq 1`$, absorbing the uncertainty into the uncertainty of $`\kappa `$ and $`B`$. Solving this equation we get $$n_B(t)=n_B(t_c)\mathrm{exp}(-\int _{t_c}^t𝑑tn_f\mathrm{\Gamma }_{ws}).$$ (16) The traditional bound is found by assuming that all of the washout happens at $`T=T_c`$, which amounts to saying that the integral in Eq. (16) is equal to the value of the integrand at $`T=T_c`$. However, we want to evaluate the integral more carefully, and in the process derive a new bound. Since we do not have an analytic formula for $`v(T)`$, we need to examine the integrand to see if we can make any simplifying assumptions. The sphaleron rate is proportional to $`(v(T)/T)^7\mathrm{exp}(-36v(T)/T)`$, and we know that $`1\stackrel{<}{}v(T)/T<\mathrm{\infty }`$. If we approximate that $`v(T)=v(T_c)`$ throughout this period, then the washout rate is negligible when $`T\stackrel{<}{}0.75T_c\simeq 75\mathrm{GeV}`$. Since $`v(T)`$ changes by about a factor of 2.5 in the temperature range from 0 to 100 GeV, treating $`v(T)`$ as a constant is a good approximation; it is also conservative, because it will actually slightly overestimate the washout. We can make a change of variables to $$\zeta =\frac{E_{sp}(T_c)}{T}$$ (17) by using the relation between time and temperature, $$t\simeq (3\times 10^{-2})\frac{M_{Pl}}{T^2}.$$ (18) The integral we now have to do is $`I`$ $`=`$ $`(3.4\times 10^{-8}){\displaystyle \frac{M_{Pl}}{E_{sp}(T_c)}}\kappa {\displaystyle \int _{\zeta _c}^{\mathrm{\infty }}}\zeta ^7e^{-\zeta }𝑑\zeta `$ (19) $`=`$ $`(3.4\times 10^{-8}){\displaystyle \frac{M_{Pl}}{E_{sp}(T_c)}}\kappa e^{-\zeta _c}\times {\displaystyle \underset{n=0}{\overset{7}{\sum }}}{\displaystyle \frac{7!}{n!}}\zeta _c^n`$ (20) Because $`\zeta _c=4\pi Bv(T_c)/gT_c\simeq 36`$, the $`\zeta _c^7`$ term will dominate all of the other terms. (Again, dropping the smaller terms is a conservative approximation.) 
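The closed form used above is the standard identity for the upper incomplete gamma function with integer index, and the dominance of the leading term at $`\zeta _c\simeq 36`$ can be checked numerically (our own cross-check, not the paper's code):

```python
import math

# Verify that the washout integral of zeta^7 * exp(-zeta) from zeta_c to
# infinity equals exp(-zeta_c) * sum_{n=0}^{7} (7!/n!) * zeta_c^n, and that
# the n = 7 term dominates the sum for zeta_c = 36.
zc = 36.0
exact = math.exp(-zc) * sum(math.factorial(7) // math.factorial(n) * zc**n
                            for n in range(8))

# crude left-endpoint quadrature of the same integral (the tail beyond
# zeta = 236 is utterly negligible)
h = 0.001
numeric = sum((zc + h * i) ** 7 * math.exp(-(zc + h * i)) * h
              for i in range(200_000))
assert abs(numeric - exact) / exact < 1e-3

# keeping only the zeta_c^7 term underestimates the full sum by roughly
# 20 %, which is the conservative direction for the washout bound
lead = math.exp(-zc) * zc**7
assert 0.7 < lead / exact < 1.0
```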
Then $`I`$ $`\simeq `$ $`(3.4\times 10^{-8}){\displaystyle \frac{M_{Pl}}{T_c}}\kappa e^{-\zeta _c}\zeta _c^6`$ (21) $`\simeq `$ $`(4.1\times 10^9)\kappa \zeta _c^6e^{-\zeta _c}`$ (22) Inserting this expression into Eq. (16), we now have an approximate expression for the observed $`n_B/s`$: $$n_B(T\simeq 0)/s=(n_B(T=T_c)/s)\mathrm{exp}(-(4.1\times 10^9)\kappa \zeta _c^6e^{-\zeta _c})\stackrel{>}{}4\times 10^{-11},$$ (23) from which we can obtain $$\zeta _c-6\mathrm{log}\zeta _c-\mathrm{log}\kappa -9\mathrm{log}10-\mathrm{log}4.1+\mathrm{log}(\mathrm{log}\frac{n_B/s(T_c)}{4\times 10^{-11}})\stackrel{>}{}0.$$ (24) By choosing a certain value for $`\kappa `$ and $`n_B/s(T_c)`$, this equation can be solved numerically. We will do this in the next section after we have presented our numerical result for $`n_B/s(T_c)`$. Once this is done, we can use the relation $$\zeta _c\simeq 36\frac{v(T_c)}{T_c}$$ (25) to find the lower bound on $`v(T_c)/T_c`$. This lower bound on $`v(T_c)/T_c`$ can be translated into an upper bound on the light Higgs mass. The one-loop finite-temperature effective potential has the form $$V(\phi )=\frac{1}{2}m^2(T)\phi ^2-T[E_{SM}\phi ^3+F_{MSSM}(\phi ,T)]+\frac{1}{8}\lambda (T)\phi ^4.$$ (26) For the MSSM with heavy decoupled stops, the potential becomes SM-like and one has $$\frac{v(T_c)}{T_c}\simeq \frac{2E_{SM}}{\lambda },$$ (27) where $$E_{SM}=\frac{2M_W^3+M_Z^3}{4\pi v^3}.$$ (28) Using $`m_h^2=2\lambda v^2`$, Eq. (27) becomes $$m_h^2\stackrel{<}{}\frac{4E_{SM}v^2}{v(T_c)/T_c}.$$ (29) The other extreme is a light right-handed stop whose temperature dependent self energy, responsible for screening of the stop interactions in the plasma, is balanced by a negative soft squared mass term, $`m_U^2\simeq -\mathrm{\Pi }_{\stackrel{~}{t}_R}`$, with $`\mathrm{\Pi }_{\stackrel{~}{t}_R}=(\frac{4}{9}g_3^2+\frac{4}{9}g^2)T^2+\frac{1}{6}h_t^2[1+\mathrm{sin}^2\beta (1-\frac{\stackrel{~}{A}_t^2}{m_Q^2})]T^2`$. 
In such a case $`F_{MSSM}`$ is $`F_{MSSM}(\phi ,T)`$ $`=`$ $`\phi ^3E_{MSSM}`$ (30) $`=`$ $`\phi ^3{\displaystyle \frac{m_t^3(1-\frac{\stackrel{~}{A}_t^2}{m_Q^2})^{\frac{3}{2}}}{2\pi v^3}},`$ (31) where $`\stackrel{~}{A}_t=A_t-\mu /\mathrm{tan}\beta `$ is the stop left-right mixing parameter. Then $`{\displaystyle \frac{v(T_c)}{T_c}}`$ $`\simeq `$ $`{\displaystyle \frac{2(E_{SM}+E_{MSSM})}{\lambda }},`$ (32) $`m_h^2`$ $`\stackrel{<}{}`$ $`{\displaystyle \frac{4(E_{SM}+E_{MSSM})v^2}{v(T_c)/T_c}}.`$ (33) When screening is present ($`m_U^2+\mathrm{\Pi }_{\stackrel{~}{t}_R}>0`$) the temperature dependent Higgs potential can be analyzed numerically. ## IV Numerical Results For the calculation of the baryon asymmetry, we have to adopt a model for the electroweak phase transition and evaluate $`v^2(X)`$ and $`\beta (X)`$. These functions have been calculated numerically using the two-loop finite-temperature effective potential in . However, we have instead chosen to use the model of , which lends itself more easily to our computation: $`v(X)`$ $`=`$ $`\frac{v(L_w)}{2}[1-\mathrm{cos}({\displaystyle \frac{X\pi }{L_w}})][\theta (X)-\theta (X-L_w)]+v(L_w)\theta (X-L_w),`$ (34) $`\beta (X)`$ $`=`$ $`\frac{\mathrm{\Delta }\beta }{2}[1-\mathrm{cos}({\displaystyle \frac{X\pi }{L_w}})][\theta (X)-\theta (X-L_w)]+\beta (X=0)+\mathrm{\Delta }\beta \theta (X-L_w).`$ (35) We have tested that the answer depends only weakly on the model used for the phase transition, so the results should be of at least the right order of magnitude. We chose the value $`\mathrm{\Delta }\beta =0.001`$, which is suggested by the results of , and a value $`v(L_w)\simeq 100\mathrm{GeV}`$. The width of the wall is chosen to be $`L_w\simeq 25/T`$ and the typical velocity of the bubble wall is expected to be $`v_w\simeq 0.1`$. For the averaged diffusion coefficient we use $`\overline{D}=0.8\mathrm{GeV}^{-1}`$ and $`\stackrel{~}{\mathrm{\Gamma }}=1.7\mathrm{GeV}`$ as in . 
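With the transport parameters just quoted, the decay scale $`\lambda _+`$ of Eq. (11), which sets how deep into the symmetric phase the source is integrated in Eq. (10), can be evaluated directly; a simple numerical sketch:

```python
import math

# lambda_+ = (v_w + sqrt(v_w^2 + 4 * Gamma_tilde * D_bar)) / (2 * D_bar)
# with the parameter values quoted in the text.
v_w = 0.1        # bubble wall velocity (in units of c)
D_bar = 0.8      # effective diffusion constant, GeV^-1
Gamma = 1.7      # effective decay constant Gamma_tilde, GeV

lam_plus = (v_w + math.sqrt(v_w**2 + 4 * Gamma * D_bar)) / (2 * D_bar)
print(f"lambda_+ = {lam_plus:.2f} GeV")   # about 1.52 GeV
```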
All of these parameters enter the calculation of the baryon asymmetry; however, the scaling dependence of $`n_B`$ on these quantities is straightforward and does not interfere with CP-violation effects. Figure 1 shows the baryon asymmetry $`n_B/s`$ generated as a function of $`|\mu |`$ for several combinations of $`\phi _\mu `$ and $`\phi _1`$, with $`\phi _2=0`$, $`|M_1|=140\mathrm{GeV}`$, $`|M_2|=250\mathrm{GeV}`$, $`\mathrm{tan}\beta \simeq 3`$. The asymmetry $`n_B/s`$ can be as large as $`10^{-7}`$ for $`|\mu |\simeq |M_2|`$. Let us note in passing that the signs of $`\phi _1`$ and, most importantly, of $`\phi _\mu `$ (as it appears in both the chargino and neutralino matrices) determine the sign of $`n_B`$. This fact is also demonstrated in Fig. 1, since $`n_B`$ is positive for $`\mathrm{sin}\phi _\mu >0`$ with the exception of case e and $`\mu \simeq M_1`$, where $`n_B`$ turns negative, being dominated by the neutralino contribution to the Higgsino source. Experimental determination of the magnitude and sign of the soft phases is therefore essential if the feasibility of supersymmetric electroweak baryogenesis is to be verified. Alternatively, the observed sign of $`n_B`$ could be used to determine the sign of $`\phi _\mu `$ until it can be measured in other ways. In order to calculate the lower bound on $`\zeta _c`$, we have to specify the sphaleron parameters $`B`$ and $`\kappa `$. The non-perturbative scaling factor $`B`$ comes from numerical minimization of the sphaleron energy and was evaluated for the MSSM in . The usual range, depending on the coupling strengths, is $`1.5<B(\frac{\lambda }{g^2})<2.7`$ with a typical median of 1.87. The value of $`\kappa `$ is obtained as a functional determinant associated with fluctuations about the sphaleron and was estimated to be $`\kappa \simeq 0.1`$ . However, when the Higgs propagator uncertainty is absorbed, the allowed range for $`\kappa `$ is $`10^{-4}<\kappa <10^{-1}`$. 
The required value of $`v(T_c)/T_c`$ depends on the amount of dilution of the baryon number produced during the electroweak phase transition that can be allowed in order to explain the observed value of $`n_B/s`$. Inclusion of both large CP-violating soft phases and quantum memory effects leads to a substantial enhancement in the value of the produced asymmetry $`n_B/s(T_c)`$. Consequently, the constraints on the strength of the first order phase transition can be softened as a function of $`n_B(T_c)/n_B(0)`$. This decrease of $`v(T_c)/T_c`$ is illustrated in Fig. 2 for several values of $`\kappa `$. The enhancement in $`n_B(T_c)`$ coming from quantum effects (about a factor of $`10^2`$ ) for typical order of magnitude values of $`\mathrm{sin}\phi _\mu `$ is demonstrated by the intervals, with the position of the left (right) arrow corresponding to no (full) quantum enhancement, respectively. The minimal value of $`\mathrm{sin}\phi _\mu `$ required in order to generate $`n_B/s(T_c)\simeq n_B/s(0)`$ (negligible washout) without including the quantum effects in the sources is about $`5\times 10^{-2}`$, which is in agreement with the values obtained in Ref. . Since we are looking for an absolute lower bound on $`v(T_c)/T_c`$, we will take $`n_B/s(T_c)`$ to have its maximum value, $`10^{-7}`$. For $`\kappa =10^{-1}`$, the numerical solution gives $$\zeta _c\stackrel{>}{}39.6,$$ (36) while the solution for $`\kappa =10^{-4}`$ yields $$\zeta _c\stackrel{>}{}31.3.$$ (37) This translates into a bound on $`v(T_c)/T_c`$ of $`{\displaystyle \frac{v(T_c)}{T_c}}`$ $`\stackrel{>}{}`$ $`1.1,\kappa =10^{-1},`$ (38) $`{\displaystyle \frac{v(T_c)}{T_c}}`$ $`\stackrel{>}{}`$ $`0.87,\kappa =10^{-4}.`$ (39) Using the bound for $`\kappa =10^{-4}`$, we can find the upper bound on the light Higgs mass. 
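The numerical solution of the washout condition can be reproduced with a simple fixed-point iteration; this is our own re-derivation, and the small offsets from the quoted 39.6 and 31.3 presumably reflect rounded inputs:

```python
import math

# Solve zeta_c = 6 ln(zeta_c) + ln(kappa) + 9 ln(10) + ln(4.1)
#                - ln ln[(n_B/s(T_c)) / (4e-11)]
# for the maximal produced asymmetry n_B/s(T_c) = 1e-7.
def zeta_c(kappa, nbs_tc=1e-7, nbs_obs=4e-11):
    z = 35.0                          # starting guess
    for _ in range(100):              # contraction: d(rhs)/dz = 6/z << 1
        z = (6 * math.log(z) + math.log(kappa) + 9 * math.log(10)
             + math.log(4.1) - math.log(math.log(nbs_tc / nbs_obs)))
    return z

for kappa in (1e-1, 1e-4):
    z = zeta_c(kappa)
    print(f"kappa = {kappa:.0e}: zeta_c = {z:.1f}, v(Tc)/Tc = {z / 36:.2f}")
```

Dividing the resulting $`\zeta _c`$ by 36 reproduces the quoted lower bounds on $`v(T_c)/T_c`$ of about 1.1 and 0.87.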
For heavy stops, the upper bound is $$m_h\stackrel{<}{}51.7\mathrm{GeV}.$$ (40) For light stops with no thermal screening and negligible mixing ($`\stackrel{~}{A}_t^2\ll m_Q^2`$) the upper bound is $$m_h\stackrel{<}{}138.1\mathrm{GeV}.$$ (41) In Figure 3 we show a plot of the upper Higgs mass limit as a function of the right handed stop mass calculated in our framework. The stop mass range corresponds to variation of the right handed stop soft mass parameter $`m_U^2`$ from $`-\mathrm{\Pi }_{\stackrel{~}{t}_R}`$ (no thermal screening) to $`+\mathrm{\Pi }_{\stackrel{~}{t}_R}`$ (strong thermal screening). The wide band corresponds to variation of $`B`$ and $`\kappa `$ within their full range, and the narrow central band shows the set of curves resulting from taking $`B=1.87`$ and varying $`\kappa `$ over the full range. It is important to stress that the role of large CP-violating phases is crucial in this context. If the baryon asymmetry is overproduced during the electroweak phase transition due to $`𝒪`$(1) values of $`\phi _\mu `$ and/or $`\phi _1`$, the washout conditions (36) and (37) are less stringent than if the phases are constrained to be $`\stackrel{<}{}10^{-2}`$. As a result, the upper bound on the Higgs mass can be as much as 15 GeV higher compared to the situation where $`n_B/s(T_c)\simeq n_B/s(0)`$ and no washout is allowed. It is obvious from our results that inclusion of large CP-violating phases in the calculation of the baryon asymmetry relaxes the stringent constraints on the strength of the first order transition, and consequently the light Higgs mass can be pushed towards larger values. Even for stop masses $`m_{\stackrel{~}{t}_R}\stackrel{>}{}170`$ GeV, resulting from positive values of the soft breaking parameter $`m_U^2`$, the upper bound on the Higgs mass can be as high as 115 GeV. Our considerations are based on the one-loop evaluation of the temperature dependent effective Higgs potential. 
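The two limiting bounds quoted in Eqs. (40) and (41) follow from Eq. (33) with $`v(T_c)/T_c\stackrel{>}{}0.87`$; a sketch with standard inputs (the value $`m_t=175`$ GeV is our assumption, and the light-stop number shifts by a few GeV with the choice of top mass):

```python
import math

# m_h^2 <~ 4 (E_SM + E_MSSM) v^2 / (v(Tc)/Tc), with
# E_SM = (2 M_W^3 + M_Z^3) / (4 pi v^3) and, for a light right-handed stop
# with negligible mixing, E_MSSM = m_t^3 / (2 pi v^3).
MW, MZ, v = 80.4, 91.2, 246.0     # GeV
mt = 175.0                        # GeV (assumed top mass)
ratio = 0.87                      # lower bound on v(Tc)/Tc for kappa = 1e-4

E_SM = (2 * MW**3 + MZ**3) / (4 * math.pi * v**3)
E_MSSM = mt**3 / (2 * math.pi * v**3)

mh_heavy = math.sqrt(4 * E_SM * v**2 / ratio)               # heavy, decoupled stops
mh_light = math.sqrt(4 * (E_SM + E_MSSM) * v**2 / ratio)    # light right-handed stop
print(f"m_h < {mh_heavy:.1f} GeV (heavy stops)")    # about 51.7 GeV
print(f"m_h < {mh_light:.1f} GeV (light stop)")     # about 136 GeV with these inputs
```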
Two loop corrections are known to significantly enhance the strength of the first order phase transition and further relax the upper bound on the Higgs mass. In this respect our results represent a conservative estimate of the upper Higgs mass limit, which is likely to move upward when two loop corrections are included. ## V Summary and Conclusions The upper bound on the Higgs mass which still allows electroweak baryogenesis is an important issue which is becoming relevant as the Higgs mass experimental lower limits approach 100 GeV. The baryon asymmetry produced during the electroweak phase transition can successfully explain the observed value provided there are additional degrees of freedom contributing to the finite temperature Higgs potential. Supersymmetric models with a light right handed stop are a very good candidate for a theory that can provide these degrees of freedom. Also, the CP-violating phases appearing in the soft breaking terms of the Lagrangian can supplement (or entirely replace) the effects of a CP-violating phase in the CKM matrix. Previously it had been thought, based on the electron and neutron EDM experimental limits, that the supersymmetric CP-violating phases have to be small, and consequently that there is no room for washout of the produced baryon asymmetry. This translates into stringent constraints on the Higgs and right handed stop masses, often leading to problems with color breaking minima and potential stability. We have shown that once there is a possibility of cancellations among individual contributions to the EDMs and the CP-violating phases are allowed to be $`𝒪`$(1), the produced baryon asymmetry exceeds the observed value and the baryon density can be allowed to be diluted by a factor of $`10^3`$ or more. In such a case the upper Higgs mass bound is increased and, depending on the right handed squark mass, it can go beyond 115-120 GeV while at the same time the scalar potential is stable.
This scenario opens new possible implications for supersymmetric phenomenology. As pointed out in , it is important to independently measure the CP-violating phases to correctly interpret experimental observables. If the phases are small, the window for supersymmetric electroweak baryogenesis will get increasingly smaller as the LEP and Tevatron experiments push up the lower Higgs mass limit. It is natural to expect that in the case of small phases the experimentally determined Higgs mass should not be too far above 100 GeV if electroweak baryogenesis is expected to work. On the other hand, if the Higgs is discovered neither at LEP nor at the Tevatron, one can expect that the CP-violating phases of the MSSM are of $`𝒪`$(1) if baryogenesis occurs at the electroweak phase transition, and they should be measurable if the superpartners are discovered. Of course, finding a light Higgs boson at LEP or Fermilab is consistent with having large supersymmetric soft phases. ## VI Acknowledgments We thank Jim Cline for valuable discussions, A. Riotto for helpful clarifications, and L. Everett for suggestions and comments on the manuscript. We also thank M. Quiros for correspondence. ## VII Appendix Above the electroweak scale, the chargino mass matrix is $`\mathcal{M}_C\simeq \left(\begin{array}{cc}|M_2|e^{i\phi _2}& 0\\ 0& |\mu |e^{i\phi _\mu }\end{array}\right).`$ (A3) This matrix is made real and diagonal by two complex matrices, $$\mathcal{M}_C^{diag}=U^{*}\mathcal{M}_CV^{-1},$$ (A4) where we can take $`U=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right),V=\left(\begin{array}{cc}e^{i\phi _2}& 0\\ 0& e^{i\phi _\mu }\end{array}\right).`$ (A9) We now switch to a basis of two-component Weyl spinors, $$P_L\stackrel{~}{W}=P_LV_{j1}^{*}\stackrel{~}{\psi _j},P_R\stackrel{~}{W}=P_RU_{j1}\stackrel{~}{\psi _j},$$ (A10) $$P_L\stackrel{~}{H}=P_LV_{j2}^{*}\stackrel{~}{\psi _j},P_R\stackrel{~}{H}=P_RU_{j2}\stackrel{~}{\psi _j},$$ (A11) where $`\stackrel{~}{\psi _j}`$ are two-component spinors.
In terms of $`\stackrel{~}{\psi _j}`$, the interaction becomes $$\mathcal{L}=-gH_1^0e^{i\phi _2}\overline{\psi _2}P_L\psi _1-gH_2^0e^{i\phi _\mu }\overline{\psi _1}P_L\psi _2+h.c.$$ (A12) Above the electroweak scale, the neutralino mass matrix is $`\mathcal{M}_N\simeq \left(\begin{array}{cccc}|M_1|e^{i\phi _1}& 0& 0& 0\\ 0& |M_2|e^{i\phi _2}& 0& 0\\ 0& 0& 0& |\mu |e^{i\phi _\mu }\\ 0& 0& |\mu |e^{i\phi _\mu }& 0\end{array}\right).`$ (A17) It is made real and diagonal by the complex matrix $`N`$, $$\mathcal{M}_N^{diag}=N^{T}\mathcal{M}_NN,$$ (A18) where we can take $`N=\left(\begin{array}{cccc}e^{\frac{i\phi _1}{2}}& 0& 0& 0\\ 0& e^{\frac{i\phi _2}{2}}& 0& 0\\ 0& 0& \frac{i}{\sqrt{2}}e^{\frac{i\phi _\mu }{2}}& \frac{i}{\sqrt{2}}e^{\frac{i\phi _\mu }{2}}\\ 0& 0& \frac{1}{\sqrt{2}}e^{\frac{i\phi _\mu }{2}}& \frac{1}{\sqrt{2}}e^{\frac{i\phi _\mu }{2}}\end{array}\right).`$ (A23) Switching to the two-component spinor basis, we have $$P_L\stackrel{~}{H_1}=P_LN_{j3}^{*}\stackrel{~}{\psi _j^0},P_R\stackrel{~}{H_1}=P_RN_{j3}\stackrel{~}{\psi _j^0},$$ (A24) $$P_L\stackrel{~}{H_2}=P_LN_{j4}^{*}\stackrel{~}{\psi _j^0},P_R\stackrel{~}{H_2}=P_RN_{j4}\stackrel{~}{\psi _j^0},$$ (A25) $$P_L\stackrel{~}{W_3}=P_LN_{j2}^{*}\stackrel{~}{\psi _j^0},P_R\stackrel{~}{W_3}=P_RN_{j2}\stackrel{~}{\psi _j^0},$$ (A26) $$P_L\stackrel{~}{B}=P_LN_{j1}^{*}\stackrel{~}{\psi _j^0},P_R\stackrel{~}{B}=P_RN_{j1}\stackrel{~}{\psi _j^0},$$ (A27) resulting in the interaction $`\mathcal{L}=`$ $`-`$ $`{\displaystyle \frac{g_2^2}{2\sqrt{2}}}e^{\frac{i(\phi _2+\phi _\mu )}{2}}[iH_1^0\overline{\stackrel{~}{\psi _3^0}}P_L\stackrel{~}{\psi _2^0}+H_1^0\overline{\stackrel{~}{\psi _4^0}}P_L\stackrel{~}{\psi _2^0}-iH_2^0\overline{\stackrel{~}{\psi _2^0}}P_L\stackrel{~}{\psi _3^0}-H_2^0\overline{\stackrel{~}{\psi _2^0}}P_L\stackrel{~}{\psi _4^0}]+h.c.`$ (A28) $`+`$ $`{\displaystyle \frac{g_1^2}{2\sqrt{2}}}e^{\frac{i(\phi _1+\phi _\mu )}{2}}[iH_1^0\overline{\stackrel{~}{\psi _3^0}}P_L\stackrel{~}{\psi _1^0}+H_1^0\overline{\stackrel{~}{\psi _4^0}}P_L\stackrel{~}{\psi _1^0}-iH_2^0\overline{\stackrel{~}{\psi _1^0}}P_L\stackrel{~}{\psi _3^0}-H_2^0\overline{\stackrel{~}{\psi _1^0}}P_L\stackrel{~}{\psi _4^0}]+h.c.`$ (A29)
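As a quick numerical sanity check of the chargino diagonalization above, one can verify that the matrices $`U`$ and $`V`$ of eq. (A9) render the mass matrix real, positive and diagonal via $`\mathcal{M}^{diag}=U^{*}\mathcal{M}_CV^{-1}`$. The mass and phase values below are arbitrary illustrative inputs, not values taken from this paper:

```python
# Check that U* Mc V^{-1} is real, positive and diagonal for the U, V above.
# Numerical inputs (|M_2|, |mu| in GeV; phases in radians) are illustrative.
import cmath

def matmul(A, B):  # plain 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M2, mu = 200.0, 350.0            # |M_2|, |mu|
phi2, phimu = 0.7, 1.9           # phi_2, phi_mu

Mc = [[M2 * cmath.exp(1j * phi2), 0], [0, mu * cmath.exp(1j * phimu)]]
Ustar = [[1, 0], [0, 1]]                                          # U = 1
Vinv = [[cmath.exp(-1j * phi2), 0], [0, cmath.exp(-1j * phimu)]]  # V^{-1}

Mdiag = matmul(matmul(Ustar, Mc), Vinv)
# Diagonal entries reduce to |M_2| and |mu|; off-diagonal entries vanish:
print(abs(Mdiag[0][0] - M2) < 1e-9 and abs(Mdiag[1][1] - mu) < 1e-9)  # True
```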
no-problem/9911/hep-ph9911261.html
# Theoretical Analysis of Static Hyperon Data for HYPERON99 ## I Introduction - What is Meant by Static Properties In the context of this meeting, hyperon masses and magnetic moments are considered static properties to be discussed in this talk, while hyperon decays and spin structure determined from deep inelastic scattering are not considered static properties. But all of them depend upon the spin and flavor structure of the hyperons. Any theoretical model for the hyperons with parameters to be determined from experiment should use input from all these data. The masses and magnetic moments are very well measured, and they are well described by the simple constituent quark model. Going beyond this model is difficult without input from other data, some of which are not so well measured. Thus progress in understanding hyperon structure will come from combining input from all relevant experimental data and improvements in the precision of data other than masses and magnetic moments. ### A New Data on $`\mathrm{\Xi }^o\mathrm{\Sigma }^+`$ decays The new data for the semileptonic decay $`\mathrm{\Xi }^o\mathrm{\Sigma }^+`$ agrees with the SU(3) prediction $$g_A(\mathrm{\Xi }^o\mathrm{\Sigma }^+)=g_A(np)$$ (1) where we use the shortened form $`g_A`$ to denote $`G_A/G_V`$. The essential physics of this prediction is that the spin physics in the nucleon system which is probed in the neutron decay is unchanged when the $`d`$ quarks in the nucleon are changed to strange quarks. This very striking result is completely independent of any fitting of weak decays using the conventional D/F parametrization. We can immediately carry this physics further by inserting the $`d\to s`$ transformation into the well-known prediction for the ratio of the proton ($`uud`$) to neutron ($`udd`$) magnetic moments and obtain a prediction for the ratio of the $`\mathrm{\Sigma }^+(uus)`$ to $`\mathrm{\Xi }^o(uss)`$ magnetic moments.
It is convenient to write the prediction in the form: $$\frac{\mu _p}{\mu _n}=-1.46=\frac{4\xi _{ud}-1}{4-\xi _{ud}}=-1.5$$ (2) where $`\xi _{ud}`$ is the ratio of the quark magnetic moments $$\xi _{ud}=\mu _u/\mu _d=-2$$ (3) This is easily generalized to give $$\frac{\mu _\mathrm{\Sigma }^+}{\mu _\mathrm{\Xi }^o}=-1.96=\frac{4\xi _{us}-1}{4-\xi _{us}}=-1.89$$ (4) where $`\xi _{us}`$ is the ratio of the quark magnetic moments, $$\xi _{us}=\mu _u/\mu _s=(\mu _u/\mu _d)(\mu _d/\mu _s)=-3.11$$ (5) and we have determined $`(\mu _d/\mu _s)`$ by the ratio between the experimental value of $`\mu _\mathrm{\Lambda }`$ and the SU(3) prediction $`\mu _\mathrm{\Lambda }=\mu _n/2`$ which assumes that $`\mu _d=\mu _s`$ $$\mu _d/\mu _s=\mu _n/2\mu _\mathrm{\Lambda }=1.56$$ (6) The fact that the prediction for this ratio (4) agrees with experiment much better than either moment agrees with the SU(6) quark model is very interesting. ### B New $`\mathrm{\Lambda }`$ polarization measurements from Z decays and DIS When a $`\mathrm{\Lambda }`$ is produced either from $`Z^o`$ decay or in deep inelastic scattering, the accepted mechanism is the production of a polarized quark produced in a pointlike vertex from a $`W`$ boson or a photon, and the eventual fragmentation of this quark into the $`\mathrm{\Lambda }`$ directly or into a $`\mathrm{\Sigma }^o`$ or $`\mathrm{\Sigma }^{*}`$ which eventually decays into a $`\mathrm{\Lambda }`$. Now that experimental data on $`\mathrm{\Lambda }`$ polarization are becoming available in both processes, a central theoretical question is which model to use for the spin structure of the $`\mathrm{\Lambda }`$. In the simple quark model, the strange quark carries the spin of the $`\mathrm{\Lambda }`$ and the $`u`$ and $`d`$ are coupled to spin zero. This model has been used in the first analysis of experimental data from $`Z`$ decay and found to be consistent with the data.
But the deep inelastic experiments have shown that the spin structure of the proton is different from that given by the simple quark model. An alternative approach is presented in where SU(3) symmetry is assumed and the spin structure of SU(3) octet hyperons is deduced from that of the proton. But SU(3) symmetry is known to be broken. Several approaches to this symmetry breaking have been proposed by theorists, and other mechanisms are discussed in . How to include the $`\mathrm{\Lambda }`$’s produced by the fragmentation of a quark into $`\mathrm{\Sigma }^{*}`$ which eventually decays into a $`\mathrm{\Lambda }`$ remains controversial, since the decay via a strong interaction may be already included in the fragmentation function. The question of how to get all this right remains open. ## II Masses and Magnetic Moments ### A The Sakharov-Zeldovich 1966 Quark model (SZ66) Andrei Sakharov was a pioneer in hadron physics who took quarks seriously already in 1966. He asked “Why are the $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }`$ masses different? They are made of the same quarks!”. His answer that the difference arose from a flavor-dependent hyperfine interaction led to relations between meson and baryon masses in surprising agreement with experiment. Sakharov and Zeldovich $`anticipated`$ QCD by assuming a quark model for hadrons with a flavor dependent linear mass term and hyperfine interaction, $$M=\underset{i}{}m_i+\underset{i>j}{}\frac{\stackrel{}{\sigma }_i\cdot \stackrel{}{\sigma }_j}{m_im_j}v_{ij}^{hyp}$$ (7) where $`m_i`$ is the effective mass of quark $`i`$, $`\stackrel{}{\sigma }_i`$ is a quark spin operator and $`v_{ij}^{hyp}`$ is a hyperfine interaction with different strengths but the same flavor dependence for $`qq`$ and $`\overline{q}q`$ interactions. Hadron magnetic moments are described simply by adding the contributions of the moments of these constituent quarks with Dirac magnetic moments having a scale determined by the same effective masses.
The model describes low-lying excitations of a complex system with remarkable success. Sakharov and Zeldovich already in 1966 obtained two relations between meson and baryon masses in remarkable agreement with experiment. Both the mass difference $`m_s-m_u`$ between strange and nonstrange quarks and their mass ratio $`m_s/m_u`$ have the same values when calculated from baryon masses and meson masses. The mass difference between $`s`$ and $`u`$ quarks calculated in two ways from the linear term in meson and baryon masses showed that it costs exactly the same energy to replace a nonstrange quark by a strange quark in mesons and baryons, when the contribution from the hyperfine interaction is removed. $$\langle m_s-m_u\rangle _{Bar}=M_\mathrm{\Lambda }-M_N=177\mathrm{MeV}$$ (8) $$\langle m_s-m_u\rangle _{mes}=\frac{3(M_{K^{*}}-M_\rho )+M_K-M_\pi }{4}=180\mathrm{MeV}$$ (9) $$\left(\frac{m_s}{m_u}\right)_{Bar}=\frac{M_\mathrm{\Delta }-M_N}{M_{\mathrm{\Sigma }^{*}}-M_\mathrm{\Sigma }}=1.53$$ (10) $$\left(\frac{m_s}{m_u}\right)_{Mes}=\frac{M_\rho -M_\pi }{M_{K^{*}}-M_K}=1.61$$ (11) Further extension of this approach led to two more relations for $`m_s-m_u`$ when calculated from baryon masses and meson masses, and to three magnetic moment predictions with no free parameters $$\langle m_s-m_u\rangle _{mes}=\frac{3M_\rho +M_\pi }{8}\left(\frac{M_\rho -M_\pi }{M_{K^{*}}-M_K}-1\right)=178$$ (12) $$\langle m_s-m_u\rangle _{Bar}=\frac{M_N+M_\mathrm{\Delta }}{6}\left(\frac{M_\mathrm{\Delta }-M_N}{M_{\mathrm{\Sigma }^{*}}-M_\mathrm{\Sigma }}-1\right)=190.$$ (13) $$\mu _\mathrm{\Lambda }=-0.61=-\frac{\mu _p}{3}\frac{m_u}{m_s}=-\frac{\mu _p}{3}\frac{M_{\mathrm{\Sigma }^{*}}-M_\mathrm{\Sigma }}{M_\mathrm{\Delta }-M_N}=-0.61$$ (14) $$-1.46=\frac{\mu _p}{\mu _n}=-\frac{3}{2}$$ (15) $$\mu _p+\mu _n=0.88=\frac{M_p}{3m_u}=\frac{2M_p}{M_N+M_\mathrm{\Delta }}=0.865$$ (16) where masses are given in MeV and magnetic moments in nuclear magnetons.
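These relations are easy to verify numerically. The sketch below assumes approximate PDG hadron masses (in MeV) and the measured proton moment as inputs; only the relations themselves come from the text:

```python
# Sakharov-Zeldovich 1966: m_s - m_u and m_s/m_u extracted from baryon masses
# agree with the same quantities extracted from meson masses, and the same
# effective masses predict the Lambda magnetic moment, eq. (14).
M_N, M_Lam, M_Del = 939.0, 1116.0, 1232.0      # N, Lambda, Delta masses (MeV)
M_Sig, M_Sigstar = 1193.0, 1385.0              # Sigma, Sigma* masses (MeV)
M_pi, M_rho, M_K, M_Kstar = 138.0, 770.0, 494.0, 892.0
mu_p = 2.793                                   # proton moment (nuclear magnetons)

dm_bar = M_Lam - M_N                                 # eq. (8):  ~ 177 MeV
dm_mes = (3 * (M_Kstar - M_rho) + M_K - M_pi) / 4    # eq. (9):  ~ 180 MeV
r_bar = (M_Del - M_N) / (M_Sigstar - M_Sig)          # eq. (10): ~ 1.53
r_mes = (M_rho - M_pi) / (M_Kstar - M_K)             # eq. (11): ~ 1.59
mu_Lam = -(mu_p / 3) * (M_Sigstar - M_Sig) / (M_Del - M_N)  # eq. (14)

print(round(dm_bar), round(dm_mes), round(r_bar, 2), round(r_mes, 2))
print(round(mu_Lam, 2))  # ~ -0.61, vs. the measured -0.613
```

Small differences from the quoted numbers (e.g. 1.59 vs. 1.61) reflect the rounded input masses assumed here.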
### B Problems in going beyond Sakharov-Zeldovich These successes and the success of the new relation (4) make it difficult to improve on the results of the simple constituent quark model by introducing new physics effects like higher order corrections. Any new effect also introduces new parameters. In order to keep any analysis significant, it is necessary to include large amounts of data in order to keep the total amount of data much larger than the number of parameters. In contrast to the successes of the simple quark model in magnetic moments and hyperon decay, there are also failures. Pinpointing these failures and comparing them with the successes may offer clues to how to improve the simple picture. Combining the experimental data for hyperon magnetic moments and semileptonic decays has provided some contradictions for models of hyperon structure. The essential difficulty is expressed in the experimental value of the quantity $$\frac{(g_a)_{\mathrm{\Lambda }p}}{(g_a)_{\mathrm{\Sigma }^{-}n}}\frac{\mu _{\mathrm{\Sigma }^+}+2\mu _{\mathrm{\Sigma }^{-}}}{3\mu _\mathrm{\Lambda }}=0.12\pm 0.04$$ (17) The theoretical prediction for this quantity from the standard SU(6) quark model is unity, and it is very difficult to see how this enormous discrepancy by a factor of $`8\pm 2`$ can be fixed in any simple way. The expression (17) is chosen to compare two ways of determining the ratio of the contributions of strange quarks to the spins of the $`\mathrm{\Sigma }`$ and $`\mathrm{\Lambda }`$.
In the commonly used notation where $`\mathrm{\Delta }u(p)`$, $`\mathrm{\Delta }d(p)`$ and $`\mathrm{\Delta }s(p)`$ denote the contributions to the proton spin of the $`u`$, $`d`$ and $`s`$ \- flavored current quarks and antiquarks respectively, the SU(6) model gives $$\mathrm{\Delta }s(\mathrm{\Lambda })_{SU(6)}=1$$ (18) $$\mathrm{\Delta }s(\mathrm{\Sigma })_{SU(6)}=-1/3$$ (19) and $$\frac{\mathrm{\Delta }s(\mathrm{\Sigma })_{SU(6)}}{\mathrm{\Delta }s(\mathrm{\Lambda })_{SU(6)}}=\frac{(g_a)_{\mathrm{\Sigma }^{-}n}}{(g_a)_{\mathrm{\Lambda }p}}=\frac{\mu _{\mathrm{\Sigma }^+}+2\mu _{\mathrm{\Sigma }^{-}}}{3\mu _\mathrm{\Lambda }}=-\frac{1}{3}$$ (20) whereas experimentally $$\frac{(g_a)_{\mathrm{\Sigma }^{-}n}}{(g_a)_{\mathrm{\Lambda }p}}=-0.473\pm 0.026$$ (21) $$\frac{\mu _{\mathrm{\Sigma }^+}+2\mu _{\mathrm{\Sigma }^{-}}}{3\mu _\mathrm{\Lambda }}=-0.06\pm 0.02$$ (22) The semileptonic decays give a value which is too large in magnitude for the $`\mathrm{\Sigma }/\mathrm{\Lambda }`$ ratio; the magnetic moments give a value which is too low. Thus the most obvious corrections to the naive SU(6) quark model do not help. If they fix one ratio, they make the other worse. Furthermore, the excellent agreement obtained by De Rujula, Georgi and Glashow for $`\mu _\mathrm{\Lambda }`$ assuming that the strange quark carries the full spin of the $`\mathrm{\Lambda }`$ suggests that eq. (18) is valid, while the excellent agreement of the experimental value $`-0.340\pm 0.017`$ for $`(g_a)_{\mathrm{\Sigma }^{-}n}`$ with the prediction -(1/3) suggests that eq.(19) is valid. The disagreement sharpens the paradox of other disagreements previously discussed because it involves only the properties of the $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }`$ and does not assume flavor SU(3) symmetry or any relation between states containing different numbers of valence strange quarks.
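The size of the mismatch between the two determinations is simple arithmetic. The sketch below assumes PDG-era experimental values for the decay constants and moments (with the sign convention of eq.(21)); none of the inputs are computed here:

```python
# Two determinations of Delta_s(Sigma)/Delta_s(Lambda) that SU(6) says
# should both equal -1/3 (eq. 20).
gA_Lam_p, gA_Sig_n = 0.718, -0.340              # assumed experimental g_A's
mu_Sigp, mu_Sigm, mu_Lam = 2.458, -1.160, -0.613  # assumed moments (n.m.)

from_decays = gA_Sig_n / gA_Lam_p                      # eq. (21): ~ -0.473
from_moments = (mu_Sigp + 2 * mu_Sigm) / (3 * mu_Lam)  # eq. (22): ~ -0.075

print(round(from_decays, 3))   # larger in magnitude than 1/3
print(round(from_moments, 3))  # much smaller in magnitude than 1/3
print(round(from_decays / from_moments, 1))  # a factor of several apart
```

The quotient of the two determinations is a factor of several, consistent with the $`8\pm 2`$ quoted in the text given the sizable experimental error on the moment combination.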
There is also the paradox that the magnetic moment of the $`\mathrm{\Lambda }`$ fits the value predicted by the naive SU(6) quark model, while the magnetic moments of the $`\mathrm{\Sigma }`$ are in trouble. In the semileptonic decays it is the opposite. It is the $`\mathrm{\Sigma }`$ which fits naive SU(6) and both the $`\mathrm{\Lambda }`$ and the nucleon are in trouble. If one assumes the obvious fix for the semileptonic decays by assuming a difference between constituent quarks and current quarks, one can fit the nucleon and $`\mathrm{\Lambda }`$ decays but then the $`\mathrm{\Sigma }`$ is in trouble. The magnetic moments thus seem to indicate that the contribution of the strange quark to the spin of the $`\mathrm{\Sigma }`$ is smaller than any reasonable model can explain, when the scale is determined by the $`\mathrm{\Lambda }`$ moment. This result is far more general than the simple naive SU(6) quark model. But the new relation (4) between the $`\mathrm{\Sigma }^+`$ and $`\mathrm{\Xi }^o`$ moments seems to indicate that the strange quark contributions to these moments are the same. ## III Semileptonic Decays We now consider the semileptonic weak decays and begin by comparing the available data for four semileptonic decays with several theoretical predictions. The $`\mathrm{\Xi }^o\mathrm{\Sigma }^+`$ decay considered above and equal to the neutron decay is omitted here. TABLE 1. 
Theoretical Predictions and Experimental Values of $`G_A/G_V`$ | $`DECAY`$ | $`Simple`$ | $`Constituent`$ | $`SU(3)`$ | $`Experiment`$ | | --- | --- | --- | --- | --- | | | $`SU(6)`$ | $`SU(6)`$ | | | | $`np`$ | $`5/3`$ | $`input`$ | $`input`$ | $`1.261\pm 0.004`$ | | $`\mathrm{\Lambda }p`$ | $`1`$ | $`0.756\pm 0.003`$ | $`0.727\pm 0.007`$ | $`0.718\pm 0.015`$ | | $`\mathrm{\Xi }^{-}\mathrm{\Lambda }`$ | $`1/3`$ | $`0.252\pm 0.001`$ | $`0.193\pm 0.012`$ | $`0.25\pm 0.05`$ | | $`\mathrm{\Sigma }^{-}n`$ | $`-1/3`$ | $`-0.252\pm 0.001`$ | $`input`$ | $`-0.340\pm 0.017`$ | | $`\frac{\mathrm{\Sigma }^{-}n}{\mathrm{\Lambda }p}`$ | $`-1/3`$ | $`-1/3`$ | $`no prediction`$ | $`-0.473\pm 0.026`$ | The nucleon and $`\mathrm{\Lambda }`$ data are seen to be in strong disagreement with simple SU(6) but are smaller by roughly the same factor of about 5/4. Thus they are both fit reasonably well by the SU(6) constituent quark model, which fixes $`G_A/G_V`$ for the constituent quark to fit the nucleon decay data and reduces the other simple SU(6) predictions by the same factor. But the $`\mathrm{\Sigma }`$ data agree with simple SU(6) and therefore disagree with constituent SU(6). The SU(3) analysis fixes its two free parameters by using the nucleon and $`\mathrm{\Sigma }`$ decays as input; its predictions for the $`\mathrm{\Lambda }`$ and $`\mathrm{\Xi }`$ fit the experimental data within two standard deviations. However the error on the $`\mathrm{\Xi }`$ data is considerably larger than the other errors, and all three predictions fit the $`\mathrm{\Xi }`$ data within two standard deviations. Thus the significance of this fit can be questioned. Our SU(3) fit deals directly with observable quantities rather than introducing D and F parameters not directly related to physical observables. This makes both the underlying physics and the role of experimental errors much more transparent.
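The SU(3) column of the table follows an "equal spacing" pattern: with the $`np`$ and $`\mathrm{\Sigma }^{-}n`$ values as input, the four entries form an arithmetic progression that fixes the $`\mathrm{\Lambda }p`$ and $`\mathrm{\Xi }^{-}\mathrm{\Lambda }`$ predictions. A quick numerical check (the input values and the $`g_A(\mathrm{\Sigma }^{-}n)=-0.340`$ sign convention are assumed):

```python
# SU(3) "equal spacing rule": the column (np, Lambda-p, Xi-Lambda, Sigma-n)
# is an arithmetic progression fixed by its two endpoints.
gA_np, gA_Sig_n = 1.261, -0.340          # the two inputs
step = (gA_Sig_n - gA_np) / 3            # common spacing, ~ -0.534

gA_Lam_p = gA_np + step                  # Lambda -> p prediction
gA_Xi_Lam = gA_np + 2 * step             # Xi- -> Lambda prediction
print(round(gA_Lam_p, 3), round(gA_Xi_Lam, 3))  # 0.727 0.194
```

Both numbers reproduce the SU(3) column of the table within its quoted errors.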
The neutron decay which has the smallest experimental error fixes one of the two free parameters. The $`\mathrm{\Sigma }^{-}`$ decay provides the smallest error in fixing the remaining parameter, the spacing between successive entries in Table I, required to be equal by the SU(3) “equal spacing rule” . The success of this procedure is evident since the errors on the predictions introduced by using these two decays as input are much smaller than the experimental errors on the remaining decays. The contrast between the good SU(6) fit of the $`\mathrm{\Sigma }`$ and the bad SU(6) fit of the others may give some clues to the structure of these baryons. The $`\mathrm{\Sigma }`$ data rule out the constituent SU(6) model which otherwise seems attractive as it preserves all the good SU(6) results for strong and electromagnetic properties at the price of simply renormalizing the axial vector couplings to constituent quarks. Any success of SU(3) remains a puzzle since no reasonable quark model has been proposed which breaks SU(6) without breaking SU(3). ## IV The spin structure of baryons ### A Results from DIS experiments Surprising conclusions about proton spin structure have arisen from an analysis combining data from polarized deep inelastic electron scattering and weak baryon decays. Polarized deep inelastic scattering (DIS) experiments provided high quality data for the spin structure functions of the proton, deuteron and neutron -. The first moments of the spin dependent structure functions can be interpreted in terms of the contributions of the quark spins ($`\mathrm{\Delta }\mathrm{\Sigma }=\mathrm{\Delta }u+\mathrm{\Delta }d+\mathrm{\Delta }s`$) to the total spin of the nucleon. The early EMC results were very surprising, implying that $`\mathrm{\Delta }\mathrm{\Sigma }`$ is rather small (about 10%) and that the strange $`sea`$ is strongly polarized.
More recent analyses , incorporating higher-order QCD corrections, together with the most recent data, suggest that $`\mathrm{\Delta }\mathrm{\Sigma }`$ is significantly larger, but still less than 1/3 of the nucleon’s helicity, $`\mathrm{\Delta }\mathrm{\Sigma }\simeq 0.24\pm 0.04`$ and $`\mathrm{\Delta }s=-0.12\pm 0.03`$. Conventional analyses to determine the quark contributions to the proton spin, commonly denoted by $`\mathrm{\Delta }u`$, $`\mathrm{\Delta }d`$ and $`\mathrm{\Delta }s`$, use three experimental quantities. The connection of two of them to proton spin structure is reasonably clear and well established. The third is obtained from hyperon weak decay data rather than nucleon data via SU(3) flavor symmetry relations, and its use has been challenged. ### B How should SU(3) be used in analyzing hyperon decays and relating data to baryon spin structure? We first note that the Bjorken sum rule together with isospin tell us that the neutron weak decay constant $$g_A(np)=\mathrm{\Delta }u(p)-\mathrm{\Delta }d(p)=1.261\pm 0.004$$ (23) and that $$\mathrm{\Delta }u(p)-\mathrm{\Delta }d(p)=\mathrm{\Delta }d(n)-\mathrm{\Delta }u(n)=1.261\pm 0.004$$ (24) Its SU(3) rotations give $$g_A(\mathrm{\Sigma }^{-}n)=\mathrm{\Delta }u(n)-\mathrm{\Delta }s(n)=-0.340\pm 0.017$$ (25) and $`\mathrm{\Delta }u(n)-\mathrm{\Delta }s(n)=\mathrm{\Delta }d(p)-\mathrm{\Delta }s(p)=`$ (26) $`=\mathrm{\Delta }s(\mathrm{\Sigma }^{-})-\mathrm{\Delta }u(\mathrm{\Sigma }^{-})=-0.340\pm 0.017`$ (27) as well as the prediction now satisfied by experiment $`g_A(\mathrm{\Xi }^o\mathrm{\Sigma }^+)=\mathrm{\Delta }s(\mathrm{\Xi }^o)-\mathrm{\Delta }u(\mathrm{\Xi }^o)=`$ (28) $`=g_A(np)=1.261\pm 0.004`$ (29) The two independent linear combinations of $`\mathrm{\Delta }u(p)`$, $`\mathrm{\Delta }d(p)`$ and $`\mathrm{\Delta }s(p)`$ obtained directly from the data without any assumptions about the $`D`$ and $`F`$ couplings commonly used can be combined to project out the isoscalar component of eq.(24) and eq.(27), $`\mathrm{\Delta }u+\mathrm{\Delta }d-2\mathrm{\Delta }s=g_A(np)+2g_A(\mathrm{\Sigma }^{-}n)=`$ (30) $`=0.58\pm 0.03`$ (31) The commonly used procedure to determine these two linear combinations includes the data for the $`\mathrm{\Lambda }p`$ and $`\mathrm{\Xi }^{-}\mathrm{\Lambda }`$ decays, which do not directly determine any linear combination of these quantities but require an additional parameter, the $`D/F`$ ratio. Thus the standard procedure includes two more pieces of data at the price of an additional free parameter. Since the $`\mathrm{\Xi }^{-}\mathrm{\Lambda }`$ decay has a much larger error than all the other decays, there seems to be little point in introducing the D/F ratio. ### C How does SU(3) symmetry relate the valence and sea quarks in the octet baryons We first note the following relations between the baryon spin structures following from SU(3) symmetry $`\mathrm{\Delta }u(p)=\mathrm{\Delta }d(n)=\mathrm{\Delta }u(\mathrm{\Sigma }^+)=\mathrm{\Delta }d(\mathrm{\Sigma }^{-})=`$ (32) $`=\mathrm{\Delta }s(\mathrm{\Xi }^o)=\mathrm{\Delta }s(\mathrm{\Xi }^{-})`$ (33) $`\mathrm{\Delta }d(p)=\mathrm{\Delta }u(n)=\mathrm{\Delta }s(\mathrm{\Sigma }^+)=\mathrm{\Delta }s(\mathrm{\Sigma }^{-})=`$ (34) $`=\mathrm{\Delta }s(\mathrm{\Sigma }^o)=\mathrm{\Delta }u(\mathrm{\Xi }^o)=\mathrm{\Delta }d(\mathrm{\Xi }^{-})`$ (35) $`\mathrm{\Delta }s(p)=\mathrm{\Delta }s(n)=\mathrm{\Delta }d(\mathrm{\Sigma }^+)=\mathrm{\Delta }u(\mathrm{\Sigma }^{-})=`$ (36) $`=\mathrm{\Delta }d(\mathrm{\Xi }^o)=\mathrm{\Delta }u(\mathrm{\Xi }^{-})`$ (37) $$\mathrm{\Delta }u(\mathrm{\Sigma }^o)=\mathrm{\Delta }d(\mathrm{\Sigma }^o)=(1/2)[\mathrm{\Delta }u(\mathrm{\Sigma }^+)+\mathrm{\Delta }d(\mathrm{\Sigma }^+)]$$ (38) $$\mathrm{\Delta }q(\mathrm{\Sigma }^o)+\mathrm{\Delta }q(\mathrm{\Lambda })=(2/3)[\mathrm{\Delta }u(n)+\mathrm{\Delta }d(n)+\mathrm{\Delta }s(n)]$$ (39) These relations allow all the baryon spin structures to be obtained from the values of $`\mathrm{\Delta }u(n)`$, $`\mathrm{\Delta }d(n)`$ and $`\mathrm{\Delta }s(n)`$. However, we know that SU(3) symmetry is badly broken. This can be seen easily by noting that all these SU(3) relations apply separately to the valence quark and sea quark spin contributions. Thus SU(3) requires that the sea contributions satisfy eq.(27). Since the strange contribution of the sea in the proton is known experimentally to be suppressed, this suggests that the strange sea in the $`\mathrm{\Sigma }`$ must be enhanced. This simply does not make sense in any picture where SU(3) is broken by the large mass of the strange quark. We are therefore led naturally to a model in which SU(3) symmetry holds for the valence quarks and is badly broken in the sea, while the sea is the same for all octet baryons, is a spectator in weak decays and does not contribute to the magnetic moments. The sea thus does not contribute to the coupling of the photon or the charged weak currents to the nucleon. The one place where the sea contribution is crucial is in the DIS experiments, which measure the coupling of the neutral axial current to the nucleon. The Bjorken sum rule and its SU(3) rotations relate the weak decays to the spin contributions of the active quarks to the baryon, without separating them into valence and sea contributions. The effects of the flavor symmetry breaking in the sea can be avoided by assuming that the flavor symmetry is exact for the algebra of currents, but that the hadron wave functions are not good SU(3) states and are broken in the sea. In this way one can obtain relations for the differences between spin contributions in which the sea contribution cancels out if the sea is the same for all octet baryons, even if SU(3) is broken in the sea. ## V Where is the physics? What can we learn? ### A How is SU(3) broken? We now examine the underlying physics of some of these decays in more detail.
The weak decays measure charged current matrix elements, in contrast to the EMC experiment which measures neutral current matrix elements related directly via the Bjorken sum rule to $`\mathrm{\Delta }u(p)`$, $`\mathrm{\Delta }d(p)`$ and $`\mathrm{\Delta }s(p)`$. The charged and neutral current matrix elements have been related by the use of symmetry assumptions whose validity has been questioned . We now examine the $`\mathrm{\Sigma }^{-}n`$ decay and see how SU(3) breaking affects the relations $$g_A(\mathrm{\Sigma }^{-}n)=\mathrm{\Delta }u(n)-\mathrm{\Delta }s(n)=\mathrm{\Delta }d(p)-\mathrm{\Delta }s(p)$$ (40) $$\mathrm{\Delta }s(\mathrm{\Sigma }^{-})-\mathrm{\Delta }u(\mathrm{\Sigma }^{-})=\mathrm{\Delta }d(p)-\mathrm{\Delta }s(p)$$ (41) $$|G_V(\mathrm{\Sigma }^{-}n)|=|G_V(np)|$$ (42) The quantity denoted by $`g_A`$ is a ratio of axial-vector and vector matrix elements. Although only the axial matrix element is relevant to the spin structure, breaking SU(3) in the baryon wave functions breaks both the relations between axial and vector couplings and those following from CVC for strangeness changing currents. Serious constraints on possible SU(3) breaking in the baryon wave functions are placed by the known agreement with Cabibbo theory of the experimental vector matrix elements, which are uniquely determined in the SU(3) symmetry limit. On the other hand, the strange quark contribution to the proton sea is already known from experiment to be reduced roughly by a factor of two from that of a flavor-symmetric sea , due to the effect of the strange quark mass. This suppression is expected to violate the $`\mathrm{\Sigma }^{-}n`$ mirror symmetry, since it is hardly likely that the strange sea should be enhanced by a factor of two in the $`\mathrm{\Sigma }^{-}`$. Yet Cabibbo theory requires retaining the relation between the vector matrix elements eq.(42).
### B A model which breaks SU(3) only in the sea We now discuss the model described above, a mechanism for breaking SU(3) which keeps all the good results of Cabibbo theory like eq.(42) by introducing a baryon wave function $$|B_{phys}\rangle =|B_{bare}\rangle \varphi _{sea}(Q=0)$$ (43) where $`|B_{bare}\rangle `$ denotes a valence quark wave function which is an SU(3) octet satisfying the condition eq.(42) and $`\varphi _{sea}(Q=0)`$ denotes a sea with zero electric charge which may be flavor asymmetric but is the same for all baryons. The wave function eq.(43) is shown to satisfy eq.(42) and to give all charged current matrix elements by the valence quark component. This provides an explicit justification for the hand-waving argument that the sea behaves as a spectator in hyperon decays. Unlike the charged current, the matrix elements of the neutral components of the weak currents do have sea contributions, and these contributions are observed in the DIS experiments. The SU(3) symmetry relations eqs.(33-39) are no longer valid. However, the weaker relation obtained from current algebra still holds. $$g_A(\mathrm{\Sigma }^{-}n)=\frac{n\left|\mathrm{\Delta }u-\mathrm{\Delta }s\right|n-\mathrm{\Sigma }^{-}\left|\mathrm{\Delta }u-\mathrm{\Delta }s\right|\mathrm{\Sigma }^{-}}{2}$$ (44) SU(3) says $$\mathrm{\Xi }^o\left|\mathrm{\Delta }s-\mathrm{\Delta }u\right|\mathrm{\Xi }^o=p\left|\mathrm{\Delta }u-\mathrm{\Delta }d\right|p$$ (45) If the strange sea is suppressed, this is clearly wrong. However, current algebra relations require only that $`\mathrm{\Xi }^o\left|\mathrm{\Delta }s-\mathrm{\Delta }u\right|\mathrm{\Xi }^o+\mathrm{\Sigma }^+\left|\mathrm{\Delta }u-\mathrm{\Delta }s\right|\mathrm{\Sigma }^+=`$ (46) $`=p\left|\mathrm{\Delta }u-\mathrm{\Delta }d\right|p+n\left|\mathrm{\Delta }d-\mathrm{\Delta }u\right|n`$ (47) This is immune to strange sea suppression in all baryons.
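The sea-spectator logic of eq.(44) can be illustrated with a toy calculation. The valence values below are textbook naive-SU(6) numbers for the neutron and $`\mathrm{\Sigma }^{-}`$, the half-difference form of eq.(44) is assumed as written, and the common "sea" term is an arbitrary illustrative input:

```python
# In eq.(44) an identical sea contribution to <B|Delta_u - Delta_s|B> drops
# out of the half-difference, leaving the valence (SU(6)) prediction -1/3.
from fractions import Fraction as F

def gA_Sigma_n(sea):
    n_val = F(-1, 3) - F(0)    # neutron valence:  Delta_u = -1/3, Delta_s = 0
    sig_val = F(0) - F(-1, 3)  # Sigma- valence:   Delta_u = 0, Delta_s = -1/3
    # a sea common to all octet baryons adds the same amount to both terms
    return ((n_val + sea) - (sig_val + sea)) / 2

print(gA_Sigma_n(F(0)), gA_Sigma_n(F(7, 10)))  # -1/3 -1/3, sea-independent
```

Whatever value the common sea piece takes, it cancels in the difference, so the charged-current prediction is untouched by (even badly SU(3)-broken) sea structure.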
### C Getting $`\mathrm{\Delta }u`$, $`\mathrm{\Delta }d`$ and $`\mathrm{\Delta }s`$ From Data Breaking up the quark contributions into valence and sea contributions becomes necessary to treat SU(3) breaking and the suppression of the strange sea. Two ways of doing this have been considered, one using hyperon decay data and the other using the ratio of the proton and neutron magnetic moments. What is particularly interesting is that each of the two approaches makes assumptions that can be questioned; although these assumptions are qualitatively very different, both approaches give very similar results. The use of hyperon data requires a symmetry assumption between nucleon and hyperon wave functions, which is not needed for the magnetic moment method. But the use of magnetic moments requires that the sea contribution to the magnetic moments be negligible, which is not needed for the hyperon decay method. ## VI Conclusions The question of how flavor symmetry is broken remains open. Model builders must keep track of how proposed SU(3) symmetry breaking effects affect the good Cabibbo results for hyperon decays confirmed by experiment. The observed violation of the Gottfried sum rule remains to be clarified, along with the experimental question of whether this violation of $`\overline{u}-\overline{d}`$ flavor symmetry in the nucleon exists for polarized as well as for unpolarized structure functions. The question of how SU(3) symmetry is broken in the baryon octet can be clarified by experimental measurements of $`\mathrm{\Lambda }`$ polarization in various ongoing experiments. ###### Acknowledgements. It is a pleasure to thank Danny Ashery, Marek Karliner and Joe Lach for helpful discussions and comments. This work was partially supported by a grant from the US-Israel Bi-National Science Foundation.
no-problem/9911/astro-ph9911010.html
# Puzzling Pulsars and Supernova Remnants
no-problem/9911/patt-sol9911001.html
# Crossover from a square to a hexagonal pattern in Faraday surface waves ## Abstract We report on surface wave pattern formation in a Faraday experiment operated at a very shallow filling level, where modes with a subharmonic and harmonic time dependence interact. Associated with this distinct temporal behavior are different pattern selection mechanisms, favoring squares or hexagons, respectively. In a series of bifurcations running through a pair of superlattices the surface wave pattern transforms between the two incompatible symmetries. The close analogy to 2D and 3D crystallography is pointed out. When a fluid layer is vibrated vertically, patterns of standing waves occur at the liquid-air interface. This experiment, first studied by Faraday in 1831, has become a paradigm for the investigation of spontaneous pattern formation . From an experimental point of view, the Faraday setup is particularly attractive since the characteristic length and time scales are under external control and relaxation times are pretty short. If the vibration signal is sinusoidal and weakly supercritical one generically observes well ordered standing wave patterns in the form of squares or lines, oscillating with half the frequency of the excitation (subharmonic response). By carefully tuning the drive frequency within the crossover regime between gravity and capillary surface waves, more complicated (quasi-periodic) patterns with an 8- or even 10-fold point symmetry have been predicted and observed . Edwards and Fauve introduced the idea of a two-frequency excitation signal. The resulting interplay between the competing modes gave rise to unexpected new phenomena like hexagonal, triangular, quasi-periodic, spatially localized structures and even superlattice-type patterns . 
However, the large number of control parameters of a multi-frequency drive signal renders such an experiment problematic as far as a systematic exploration of parameter space is concerned, and it obscures an intuitive understanding of the fundamental principles. By use of a viscoelastic liquid, it has been shown that the familiar subharmonic Faraday resonance can be preempted by a synchronous (harmonic) response, leading to a bicriticality as well. However, such fluids are more complicated than Newtonian liquids and the characterization of their nonlinear rheology is incomplete. For Newtonian fluids the parameter region in which the response switches from subharmonic to harmonic is difficult to access since it requires a very shallow filling depth, large shaking elevations and thus a powerful vibrator. This article describes a systematic investigation of a Newtonian liquid under a single-frequency drive operated close to the bicritical crossover. The interesting new feature is a "phase transition" from a quadratic pattern to a structure of hexagonal symmetry. The symmetry change proceeds via a sequence of superlattices with 2-fold and 6-fold point symmetry. Our new shaker system has a force of 4800 N and a maximum peak-to-peak elevation of 5.4 cm. It is operated at a rather low drive frequency of $`8-10Hz`$. The combination of these parameters with a filling depth of only $`0.7mm`$ is necessary to access the harmonic-subharmonic bicriticality. The container with an inner diameter of $`290mm`$ (equivalent to about $`15`$ wavelengths) is sealed by a glass plate and temperature controlled at $`25^{\circ }\pm 0.1^{\circ }`$C. At this temperature the sample fluid (low viscosity silicone oil, Dow Corning 200) is specified by a kinematic viscosity $`\nu =9.7\times 10^{-6}m^2/s`$, a surface tension of $`\sigma =0.0201N/m`$ and a density $`\rho =934kg/m^3`$. The actual acceleration experienced by the container is recorded by a piezoelectric device. 
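As a rough consistency check on these parameters (a sketch that ignores viscosity, which shifts the actual Faraday threshold), one can evaluate the inviscid finite-depth dispersion relation $`\omega ^2=(gk+\sigma k^3/\rho )\mathrm{tanh}(kh)`$ at the subharmonic response frequency $`\omega =\mathrm{\Omega }/2`$ and recover the quoted pattern wavelength:

```python
import math

# Fluid and drive parameters quoted in the text (SI units).
g, h = 9.81, 0.7e-3                 # gravity, filling depth
sigma, rho = 0.0201, 934.0          # surface tension, density
f_drive = 9.5                       # drive frequency in Hz (subharmonic regime)
omega = 2 * math.pi * f_drive / 2   # subharmonic response at half the drive

def disp(k):
    """Inviscid finite-depth gravity-capillary dispersion relation minus omega^2."""
    return (g * k + sigma * k**3 / rho) * math.tanh(k * h) - omega**2

# Bisection for the wavenumber k (rad/m); disp changes sign on this bracket.
lo, hi = 10.0, 5000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if disp(lo) * disp(mid) <= 0:
        hi = mid
    else:
        lo = mid
k = 0.5 * (lo + hi)
wavelength = 2 * math.pi / k
print(wavelength, 0.290 / wavelength)   # ~1.9 cm; ~15 wavelengths across the dish
```

The estimate lands at a wavelength of about 1.9 cm, consistent with the statement that the 290 mm container spans roughly 15 wavelengths; viscous damping, neglected here, mainly shifts the onset acceleration rather than the wavelength.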
A feedback loop control limits disturbing anharmonicities to a level of less than 0.2%. Our visualization method for the evaluation of pattern symmetries has been described elsewhere . Although this simple light reflection technique provides high contrast pictures, the relation between the recorded intensity and the surface profile is not trivial. Nevertheless, for simple structures like squares, we were able to reconstruct the profile by the following procedure: Starting from an estimated surface profile composed of a small number of spatial Fourier modes, we computed the light distribution of the expected video image by means of a ray tracing algorithm. Then we adapted the mode amplitudes and their relative phases to optimize the agreement between the calculated and recorded video pictures. The linear stability theory evaluated for our fluid predicts that the bicritical threshold, at which the system changes from the harmonic instability at lower to the subharmonic instability at higher frequencies, occurs at a drive frequency of $`\mathrm{\Omega }_B/2\pi =8.7Hz`$. This is in good agreement with the present results. Fig. 1 depicts a calculated neutral stability diagram (acceleration amplitude $`a`$ vs. wavenumber $`k`$) for an excitation frequency $`\mathrm{\Omega }/2\pi =9.5Hz`$ slightly above $`\mathrm{\Omega }_B`$. The minimum of the left (right) resonance tongue defines the wavenumber $`k_s`$ ($`k_h`$) of the subharmonic S (harmonic H) instability. The absolute minimum $`a_c`$ corresponds to the onset of Faraday waves. Fig. 2 presents a phase diagram classifying the surface patterns as observed during a quasistatic amplitude ramp at fixed drive frequencies. Starting from a subcritical drive $`ϵ=a/a_c-1=-2\%`$ the amplitude $`a`$ is increased by steps of $`0.2\%`$. After each increment the scan is suspended for 240 sec and a surface picture is taken thereafter. Having reached $`ϵ=10\%`$ the ramp is reversed. 
There is no noticeable hysteresis for the primary onset between upward and downward scans. Within the drive frequency region of the harmonic Faraday instability ($`\mathrm{\Omega }<\mathrm{\Omega }_B`$) the bifurcation scenario is rather conventional: Entering subregion $`V`$ from below (see Fig. 2), we find a perfect hexagonal surface tiling which persists up to the maximum drive amplitude. The focus of the present paper is on the subharmonic region $`\mathrm{\Omega }>\mathrm{\Omega }_B`$, where the onset pattern (region $`II`$) is quadratic, while hexagons (region $`V`$) occur at rather elevated shaking amplitudes as a higher bifurcation. Our aim is to describe the bifurcation sequence, which results from an amplitude scan at the fixed drive frequency of $`\mathrm{\Omega }/2\pi =9.5Hz`$ (gravity wave regime). The primary Faraday pattern (region $`II`$ in Fig. 2) has a subharmonic time dependence and exhibits a perfect quadratic symmetry as shown in Fig. 3a. The associated spatial power spectrum (Fig. 3b) indicates the fundamental wave vectors $`𝐤_{S1}`$ and $`𝐤_{S2}`$ but also pronounced contributions from higher harmonics, in particular $`𝐤_{S1}+𝐤_{S2}`$. The appearance of square patterns in low viscosity gravity waves agrees with a small amplitude theory expanded around the subharmonic instability threshold . At those small values of $`\epsilon `$ the competing harmonic Faraday modes do not noticeably affect the pattern selection process. The shallow filling level used in our setup makes the surface elevation profile very anharmonic with high, narrow tips but broad, shallow hollows. We have reconstructed the surface profile belonging to Fig. 3a by our ray tracing algorithm. 
The spatial dependence of the interface deformation is decomposed according to $$\eta (𝐫)=\sum _{i}A_i\mathrm{cos}(𝐤_i𝐫+\varphi _i),$$ (1) where $`𝐤_i\in \{𝐤_{S1},𝐤_{S2},(𝐤_{S1}\pm 𝐤_{S2}),\mathrm{\hspace{0.17em}2}𝐤_{S1},\mathrm{\hspace{0.17em}2}𝐤_{S2},\mathrm{\hspace{0.17em}2}(𝐤_{S1}\pm 𝐤_{S2})\}`$ and $`𝐫=(x,y)`$. A density plot with the gray level proportional to the local surface elevation is given in Fig. 3d together with the related video image (c) as computed by ray tracing the reflected light. With increasing drive amplitude the harmonic Faraday instability gradually gains influence upon the pattern selection dynamics. When entering region $`III`$ we observe a continuous transition to a $`\sqrt{2}\times \sqrt{2}`$ superlattice (see Fig. 4a,b), characterized by a new subharmonic mode with wave vector $`𝐤_{D1}`$. Even though the associated Fourier spectrum also shows a second mode $`𝐤_{D2}`$ orthogonal to the first one, the amplitude ratio $`A_{D1}/A_{D2}\approx 4`$ indicates that $`𝐤_{D1}`$ is strongly prevailing. Therefore the original quadratic invariance of the pattern is broken and replaced by the simpler rectangular symmetry. The surface profile as derived by ray tracing suggests that the mode $`𝐤_{D1}`$ enters Eq. 1 in the form $`\mathrm{cos}(𝐤_{D1}𝐫-\pi /2)`$. Assuming $`\varphi _{S1}=\varphi _{S2}=0`$ (by a proper choice of the origin) the cubic nonlinearity $`A_{S1}A_{S2}A_{D1}^{*}`$ enters the amplitude equation for $`A_{D1}`$ and thus restricts the relative phase $`\varphi _{D1}`$ to $`0`$ or $`\pi /2`$, depending on the sign of the related coupling coefficient. The former (latter) value is associated with a modulative (displacive) mode. Details will be published elsewhere. For our system the second case applies, which becomes evident by comparing the video images Figs. 3a and 4a: With increasing order parameter $`A_{D1}`$ the rows of elevation maxima (connected by dashed lines in Fig. 3a) are displaced in opposite directions as indicated by the arrows. 
The displacive mode $`𝐤_{D1}`$ is excited by a pair of wavevector triads according to the geometrical relations $`𝐤_{D1}=𝐤_{H1}-𝐤_{S1}`$ and $`𝐤_{D1}=𝐤_{H2}-𝐤_{S2}`$. The modes $`𝐤_{H1}`$, $`𝐤_{H2}`$ oscillate synchronously with the drive. Their contribution to the triad-couplings becomes energetically efficient since $`|𝐤_{H1}|=|𝐤_{H2}|=\sqrt{5/2}\approx 1.58`$ coincides almost exactly with the wavenumber $`k_h\approx 1.59`$ associated with the harmonic Faraday instability (Fig. 1). The transition from region $`III`$ to $`IV`$ is accompanied by a very slow rearrangement of the pattern and the occurrence of defects. A snapshot taken during this crossover process is shown in Fig. 4c. As $`\epsilon `$ is increased, the displacive mode $`𝐤_{D1}`$ gradually dies out and thus $`𝐤_{H1}`$ and $`𝐤_{H2}`$ enter into a new triad together with the nonlinear harmonic mode $`𝐤_{H3}=𝐤_{S2}-𝐤_{S1}`$. Even though $`|𝐤_{H3}|=\sqrt{2}`$ does not exactly match the unstable wavenumber band around $`k_h`$ (see Fig. 1) the drive amplitude is apparently high enough to allow this detuning. The six-armed star of wavevectors $`𝐤_{Hi}`$ shown in Fig. 4d already suggests a 6-fold rotational invariance, but the true symmetry of this "quasi-hexagonal" transient state is still 2-fold: The angles between $`𝐤_{H1}`$ and $`𝐤_{H2}`$ and between $`𝐤_{H2}`$ and $`𝐤_{H3}`$ are $`56^{\circ }`$ and $`62^{\circ }`$, respectively, rather than $`60^{\circ }`$. From the horizontal streaks of the Fourier spectrum of Fig. 4d it can be seen that the translational coherence in $`x`$-direction is in a process of disintegration, while the spatial periodicity in $`y`$-direction and the 2-fold point symmetry are still preserved. Nevertheless the mutual resonance among the $`𝐤_{Hi}`$ is the precursor of the hexagonal symmetry, which will follow. The transient re-orientation process comes to a halt when the pattern has accomplished the ideal hexagonal symmetry (region IV and Figs. 4e,f). 
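The triad wavenumbers quoted above follow from elementary vector algebra. In the sketch below the wavevectors are measured in units of the subharmonic onset wavenumber, and the displacive mode is taken as half the diagonal of the square reciprocal lattice, $`𝐤_{D1}=(𝐤_{S1}+𝐤_{S2})/2`$, an assumption consistent with both triad conditions:

```python
import math

def norm(v):
    return math.hypot(v[0], v[1])

# Fundamental square modes in units of the subharmonic wavenumber, |k_S| = 1.
kS1, kS2 = (1.0, 0.0), (0.0, 1.0)

# Assumed displacive mode of the sqrt(2) x sqrt(2) superlattice:
# half the diagonal of the square reciprocal lattice.
kD1 = (0.5 * (kS1[0] + kS2[0]), 0.5 * (kS1[1] + kS2[1]))

# Harmonic partners from the two triad conditions k_D1 = k_H - k_S,
# plus the nonlinear mode k_H3 = k_S2 - k_S1 of the later transient state.
kH1 = (kD1[0] + kS1[0], kD1[1] + kS1[1])
kH2 = (kD1[0] + kS2[0], kD1[1] + kS2[1])
kH3 = (kS2[0] - kS1[0], kS2[1] - kS1[1])

assert abs(norm(kH1) - math.sqrt(5 / 2)) < 1e-12   # ~1.58, near k_h ~ 1.59
assert abs(norm(kH2) - math.sqrt(5 / 2)) < 1e-12
assert abs(norm(kH3) - math.sqrt(2)) < 1e-12       # the detuned harmonic mode
print(norm(kH1), norm(kH3))
```

Both harmonic partners land exactly on $`\sqrt{5/2}`$, which is why the triad coupling is energetically efficient, while the third mode sits at the detuned value $`\sqrt{2}`$.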
At this stage the harmonic Faraday modes govern the pattern selection process. The structure depicted in Figs. 4e,f can be understood as a $`\sqrt{3}\times \sqrt{3}`$ superlattice of the hexagonal lattice of the final state (region $`V`$). Apparently, the subharmonic contributions to the Fourier spectrum are dynamically slaved, since the 6-fold symmetry dictated by the $`𝐤_H`$-modes is recovered in number as well as in orientation by the new subharmonic set $`\{𝐤_{S1},𝐤_{S2},𝐤_{S3}\}`$. Note that the subharmonic time dependence does not allow a mutual resonance among the $`𝐤_{Si}`$. The relative $`30^{\circ }`$-orientation between the S-star and the H-star results from the triad cross-coupling between harmonic and subharmonic modes. The appropriate geometrical resonance conditions $`𝐤_{S1}+𝐤_{S2}=𝐤_{H1}`$, etc., also enforce the length of the subharmonic wavevectors to reduce from unity to $`|𝐤_{Si}|=\sqrt{5/6}\approx 0.91`$. It can be shown that the fundamental modes contributing to the $`\sqrt{3}\times \sqrt{3}`$ superlattice enter Eq. 1 with equal spatial phases $`\varphi _{Hi}=\varphi _{Si}=0`$. The equality $`\varphi _{H1}=\varphi _{H2}=\varphi _{H3}=0`$ follows from the mutual resonance among the $`𝐤_H`$-modes, while the cross-coupling between $`𝐤_S`$- and $`𝐤_H`$-modes enforces $`\varphi _{S1}=\varphi _{S2}=\varphi _{S3}`$. The remaining condition $`\varphi _{Si}=0`$ is more subtle and established only at quintic order in the associated amplitude equation. With the reasoning outlined in Ref. one obtains either $`\varphi _{S1}+\varphi _{S2}+\varphi _{S3}=0`$ or $`\pi /2`$. Since the former (latter) case is associated with a 6-fold (3-fold) symmetry, we conclude from Fig. 4e that $`\varphi _{Si}=0`$ applies in our system. If the drive amplitude is raised into phase region $`V`$ the long wavelength modulation of the $`\sqrt{3}\times \sqrt{3}`$ superlattice disappears and the pure hexagonal state as depicted in Fig. 4g survives. 
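The reduced subharmonic wavenumber follows from triad closure alone: with the S-star rotated by 30° against the H-star, adjacent subharmonic vectors enclose 60°, so $`|𝐤_H|=2|𝐤_S|\mathrm{cos}30^{\circ }`$, and $`|𝐤_H|=\sqrt{5/2}`$ then gives $`|𝐤_S|=\sqrt{5/6}`$. A minimal numerical check of this geometry:

```python
import math

# Hexagonal star of harmonic modes at |k_H| = sqrt(5/2), in units of the
# subharmonic onset wavenumber, as selected in the quasi-hexagonal state.
kH = math.sqrt(5 / 2)
kH1 = (kH, 0.0)

# Subharmonic star rotated by 30 degrees; the triad k_S1 + k_S2 = k_H1
# with a 60 degree opening angle fixes the subharmonic wavenumber.
kS = kH / (2 * math.cos(math.radians(30)))
kS1 = (kS * math.cos(math.radians(-30)), kS * math.sin(math.radians(-30)))
kS2 = (kS * math.cos(math.radians(+30)), kS * math.sin(math.radians(+30)))

assert abs(kS - math.sqrt(5 / 6)) < 1e-12       # ~0.91, reduced from unity
assert abs(kS1[0] + kS2[0] - kH1[0]) < 1e-12    # triad closes exactly
assert abs(kS1[1] + kS2[1] - kH1[1]) < 1e-12
print(kS)
```

So the 30° relative orientation and the shrunken subharmonic wavenumber are two sides of the same resonance condition, not independent observations.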
Simultaneously the $`\mathrm{\Omega }/2`$-component in the temporal power spectrum of the surface oscillation dies out. To summarize: The present letter reports a Faraday experiment in a very thin fluid layer. Drive frequency and filling depth are adjusted such that waves with subharmonic and harmonic time dependence become simultaneously unstable. The different time dependencies imply distinct wavelengths, but they are also responsible for different nonlinear pattern selection mechanisms, which favor either squares or hexagons. In a series of bifurcations the transition from one symmetry to the other takes place in a surprisingly coherent manner running through a displacive $`\sqrt{2}\times \sqrt{2}`$ and a $`\sqrt{3}\times \sqrt{3}`$ superlattice. A comparison to crystallographic phase transitions is in order. Hexagonal structures on one side and square (in $`2D`$) or cubic (in 3D) structures on the other side are incompatible since there is no group-subgroup relation connecting the different space groups. Hence such transitions involve a reconstruction of the lattice via the formation of lattice defects. The hcp-fcc transition, occurring e.g. in solid $`{}^{4}\mathrm{He}`$, is a prominent 3D example. Here the relevant defects are stacking faults. The same principles hold of course for the present patterns. At the same stage along the phase sequence from $`II`$ to $`IV`$ there has to be a reconstructive transition involving defects. In fact the pattern of Fig. 4c taken just at the $`III`$–$`IV`$ boundary shows such a line defect (running in the vertical direction in the right half of the picture). In contrast to the hcp-fcc transition, the transition between the two incompatible symmetries of our system does not occur in a single step, but involves two intermediate phases. Transitions of the type $`II\to III`$ and $`V\to IV`$ are also known in $`2D`$ crystallography. 
The transition from a simple hexagonal lattice to a $`\sqrt{3}\times \sqrt{3}`$ superstructure has been observed for instance in monolayers of $`C_2F_5Cl`$ adsorbed on graphite . The displacive transition $`II\to III`$ is analogous to the reconstruction of the (100) surface of tungsten crystals . Here the surface atoms are displaced in exactly the same way as the elevation maxima of the surface profile in the present study. In terms of 2D space groups the transition is from $`p4`$ to $`p2mg`$, implying a doubling of the unit cell. The rectangular $`p2mg`$ symmetry calls for a rectangular metric, that is for different lattice parameters along two orthogonal directions, but this has not been observed in our experiment, presumably because of the insufficient resolution. Acknowledgements — We thank J. Albers for his support. This work is supported by the Deutsche Forschungsgemeinschaft.
no-problem/9911/cond-mat9911273.html
# $`d_c=4`$ is the upper critical dimension for the Bak-Sneppen model ## Abstract Numerical results are presented indicating $`d_\mathrm{c}=4`$ as the upper critical dimension for the Bak-Sneppen evolution model. This finding agrees with previous theoretical arguments, but contradicts a recent Letter \[Phys. Rev. Lett. 80, 5746-5749 (1998)\] that placed $`d_\mathrm{c}`$ as high as $`d=8`$. In particular, we find that avalanches are compact for all dimensions $`d\le 4`$, and are fractal for $`d>4`$. Under those conditions, scaling arguments predict a $`d_\mathrm{c}=4`$, where hyperscaling relations hold for $`d\le 4`$. Other properties of avalanches, studied for $`1\le d\le 6`$, corroborate this result. To this end, an improved numerical algorithm is presented that is based on the equivalent branching process. PACS number(s): 64.60.Ak, 05.40.-a, 05.65.+b. The Bak-Sneppen evolution model B+S has received considerable attention as an archetype of self-organized criticality BTW , which has been put forward as a general mechanism leading to many non-equilibrium scaling phenomena observed in Nature Bakbook . Although the Bak-Sneppen model was originally developed as an attempt to interpret paleontological data indicating co-evolutionary activity in biological evolution B+S ; PNAS , it has also been used to interpret power law distributions in quiescent periods between earthquakes Ito . Its generic mechanism of extremal dynamics scaling has even inspired an algorithm to approximate combinatorial optimization problems EOperc . In a recent Letter highd , numerical results for several critical exponents of the Bak-Sneppen model in high dimensions were presented. On the basis of that study, and certain scaling arguments, Ref. highd concluded that the upper critical dimension, where those exponents obtain mean field values, was $`d=8`$. Avalanches, cascading through the system via nearest-neighbor activation events, form domains that were found to be fractal for $`d>2`$. 
These two findings are surprising: On one hand, avalanches would proceed on a filamentary domain where surface sites outnumber bulk sites. On the other hand, activity would have to return mostly — by pure chance — to sites already within that domain to keep it from growing linearly with time, as required in the mean-field limit. The claims of Ref. highd are in stark contrast to earlier derived scaling relations (see Tab. 1 in Ref. scaling ), which would predict an upper critical dimension of $`d=4`$. For instance, the scaling relation for the avalanche cut-off is $`\sigma =1-\tau +d/D`$. Its mean-field value is $`\sigma =1/2`$ meanfield ; BoPa1 , and the avalanche distribution exponent attains $`\tau =3/2`$ meanfield ; BoPa1 , while Ref. BoPa1 explicitly derived that $`D=4`$ is the mean field value for the avalanche dimension exponent, implying $`d=4`$ as the upper critical dimension. But the scaling relations in scaling were derived under the clearly stated assumption that avalanches would always be compact for $`d<d_\mathrm{c}`$, and activity within them would proceed homogeneously over their domain, with many returns to each site. Ref. highd argues that avalanches are ramified already for $`d>2`$, allowing for a $`d_\mathrm{c}>4`$. In this letter we test the claims of Ref. highd by independent means. To this end, we have developed an alternative algorithm, which has the added benefit of improving temporal cut-offs by more than two decades while eliminating finite lattice size effects entirely. As a result, we find the assumptions underlying the scaling theory in scaling intact. In particular, our numerical simulation results show that avalanches in $`d\le 4`$ are compact, and that scaling exponents, when asymptotic behavior is extracted, take on mean-field values for $`d=5`$ and 6, very much in contradiction to Ref. highd . 
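For reference, the mean-field consistency of these exponents is a one-line check: inserting $`\tau =3/2`$ and $`D=4`$ into the cut-off relation returns $`\sigma =1/2`$ exactly at $`d=4`$.

```python
# Cut-off scaling relation sigma = 1 - tau + d/D with the
# mean-field exponents tau = 3/2 and D = 4.
tau, D = 1.5, 4.0
for d in range(1, 7):
    print(d, 1 - tau + d / D)

# The mean-field value sigma = 1/2 is reached exactly at d = 4.
assert 1 - tau + 4 / D == 0.5
```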
Thus we show that the Bak-Sneppen model possesses an upper critical dimension, $`d_c=4`$, below which hyperscaling relations hold, similar to equilibrium critical phenomena. In fact, we have studied a whole range of different properties of the Bak-Sneppen model for $`1\le d\le 6`$ bsd.pre . Here, we focus directly on those properties relevant with regard to the upper critical dimension. The Bak-Sneppen model is very easy to state B+S . It consists of random numbers $`\lambda _𝐫`$ between 0 and 1, occupying sites $`𝐫`$ on a $`d`$-dimensional lattice. At each update step, $`s`$, the smallest random number $`\lambda _{min}(s)`$ is located. That site as well as its $`2d`$ nearest neighbors each receive new random numbers drawn independently from a flat distribution on the unit interval. This update step is repeated at the site with the next $`\lambda _{min}(s+1)`$, and so on. The process inevitably evolves toward a self-organized critical state BTW in which almost all $`\lambda _𝐫`$ are larger than a critical threshold $`\lambda _\mathrm{c}`$ (see Tab. 1), while those $`\lambda _𝐫`$ below are part of an avalanche of activity. The current minimum $`\lambda _{min}(s)`$ is the “active” site while those $`\lambda <\lambda _\mathrm{c}`$ are “unstable” and potentially active; all $`\lambda >\lambda _\mathrm{c}`$ are “stable”. These avalanches are critical and their properties have been described in terms of scaling relations scaling . The obvious way to implement the model is to specify a lattice, place a random number $`\lambda _𝐫`$ on each site $`𝐫`$, keep an efficient list of those numbers, ordered by size, and repeatedly replace the smallest $`\lambda _{min}(s)`$ and its neighbors. In this method, knowledge of $`\lambda _\mathrm{c}`$ is a priori not necessary to determine the critical properties of the avalanches. This is, in fact, the algorithm employed in Ref. highd and some other, low-dimensional studies. 
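The "obvious" implementation reads almost directly off this definition. A minimal pure-Python sketch (hypercubic lattice with periodic boundaries; a linear scan for the minimum stands in for the efficient sorted list mentioned in the text):

```python
import random

def bak_sneppen(L=64, d=1, steps=10000, seed=0):
    """Plain lattice version of the Bak-Sneppen model (periodic boundaries).

    At each update the site holding the global minimum and its 2d nearest
    neighbors receive fresh random numbers, uniform on the unit interval.
    """
    rng = random.Random(seed)
    sites = [()]
    for _ in range(d):                       # enumerate all L**d lattice sites
        sites = [s + (x,) for s in sites for x in range(L)]
    lam = {s: rng.random() for s in sites}
    for _ in range(steps):
        r = min(lam, key=lam.get)            # extremal dynamics: global minimum
        updates = [r]
        for axis in range(d):                # the 2d nearest neighbors
            for shift in (-1, 1):
                nb = list(r)
                nb[axis] = (nb[axis] + shift) % L
                updates.append(tuple(nb))
        for s in updates:
            lam[s] = rng.random()
    return lam

lam = bak_sneppen(L=200, d=1, steps=20000)
# In the self-organized state almost all numbers sit above the threshold
# (lambda_c is about 0.667 in d=1), so this fraction should be close to 1.
print(sum(v > 0.5 for v in lam.values()) / len(lam))
```

The linear `min` scan makes this O(L^d) per update, which is exactly the kind of cost the branching-process algorithm described next avoids.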
But in higher dimensions feasible length scales in the pre-defined lattice, with a fixed number of sites $`N=L^d`$, exponentially diminish with dimension. For instance, Ref. highd used $`N=2^{16}`$, giving $`L=16`$ in $`d=4`$ (only $`L=4`$ in $`d=8`$), or a maximal spatial separation of $`r_{\mathrm{max}}=dL/2=32`$, and a temporal cut-off of maximally $`s_{\mathrm{co}}\sim r_{\mathrm{max}}^D\approx 10^6`$, since $`D\approx 4`$. A more efficient way to describe an avalanche utilizes a generalized branching process PMB ; scaling that has been shown to be exactly equivalent to the Bak-Sneppen model. In the update procedure, the values of the new random numbers that enter the system are unrelated to each other, their predecessors, or any other number in the system. Thus, any number that has no chance of becoming active in itself (i. e. those with $`\lambda >\lambda _\mathrm{c}`$) will not affect the dynamics of the critical avalanche through its particular value. Now, assume that we are at the beginning of a critical avalanche, i. e. all numbers in the system are above $`\lambda _\mathrm{c}`$. All we need to know to describe the ensuing critical avalanche is the existence of a smallest number in the system to initiate the avalanche. In updating that number and its neighbors, either no number below $`\lambda _\mathrm{c}`$ will be created and that critical avalanche happens to be empty, or it will produce numbers below $`\lambda _\mathrm{c}`$ and will progress through more updates until no more numbers below $`\lambda _\mathrm{c}`$ remain at some later time step. Thereafter, a new critical avalanche will start at a new, unrelated location. To describe an individual avalanche, we can assume that it started at the origin, and at all times we need only keep track of those numbers below $`\lambda _\mathrm{c}`$. Lattice addresses appear only as parameters characterizing exclusively those numbers currently unstable. 
While the number of sites covered by an avalanche $`n_{\mathrm{cov}}`$ can grow as much as linearly in time $`s`$, the number of currently unstable sites grows at most like $`s^{1/2}`$ scaling . Thus, our temporal cut-off due to memory constraints $`N`$ behaves like $`s_{\mathrm{co}}\sim N^2`$. Since our results depend so strongly on results from numerical simulations, we describe our improved method in great detail. To simulate this process we create a list $`ℒ`$ of currently unstable numbers, $`ℒ=\{(\lambda _𝐫,𝐫)|\lambda _𝐫<\lambda _\mathrm{c}\}`$. We initialize $`ℒ`$ with a single entry $`(\lambda _𝐫=0,𝐫=\mathrm{𝟎})`$. At each update we first remove the smallest $`\lambda _𝐫`$ in $`ℒ`$ and any of its neighbors in the lattice, if those happen to be in $`ℒ`$. Then, we draw a new number for each of those sites, but only sort into $`ℒ`$ those numbers below $`\lambda _\mathrm{c}`$ by storing $`(\lambda _𝐫,𝐫)`$. Addresses $`𝐫`$ could extend arbitrarily far from the origin, thus eliminating any spatial cut-off. It is very easy to keep $`ℒ`$ compact and sorted according to $`\lambda `$. In fact, it is sufficient to sort in a new $`\lambda _𝐫`$ linearly from the bottom (instead of using a heap, say): the dynamics quickly leads to a list in which almost all numbers are very densely packed just below $`\lambda _\mathrm{c}`$, and almost every number inserted at the bottom moves only a few steps through $`ℒ`$. Without explicit lattice reference, it is not easy for this algorithm to check the minimum’s neighbors in the lattice which, if unstable, would need to be removed from $`ℒ`$. To be sure, we would have to search $`ℒ`$ at each update to eliminate those neighbors, which would be unreasonably time consuming. Instead, we utilize a procedure similar to hash tables hash : When a lattice address $`𝐫`$ with $`\lambda _𝐫<\lambda _\mathrm{c}`$ is stored in some entry $`ℒ_k`$, $`𝐫`$ is mapped into an index $`i=i(𝐫)`$ for a large, sparse array $`𝒜`$, $`|𝒜|\gg |ℒ|`$. 
$`𝒜_i`$ in turn stores the index $`k`$ of $`ℒ_k`$, or is empty otherwise. When activity returns to the neighborhood of $`𝐫`$, $`i(𝐫)`$ is calculated and $`𝒜_i`$ can be checked at once to track an unstable number in $`ℒ`$. It is crucial that $`i(𝐫)`$ is unique, of course, but since $`|𝒜|`$ is finite there exist lattice addresses $`𝐫\ne 𝐫^{\prime }`$ such that $`i(𝐫)=i(𝐫^{\prime })`$. Those conflicts are rare between any two currently unstable sites, if $`|𝒜|\gg |ℒ|`$. They can be resolved by moving to the next available index $`i(𝐫)+I`$, where $`I`$ is the offset to the nearest free entry in $`𝒜`$ (for important details, see Ref. bsd.pre ). We have first used this simulated branching process to extrapolate the values of $`\lambda _\mathrm{c}`$ bsd.pre given in Tab. 1 by simulating avalanches at various $`\lambda `$ below $`\lambda _\mathrm{c}`$ (see Ref. scaling ). This extrapolation uses the domain covered by an avalanche, $`n_{\mathrm{cov}}`$, requiring $`ℒ`$ to record all currently and previously unstable sites. At $`\lambda _\mathrm{c}`$, $`n_{\mathrm{cov}}`$ can increase as much as linearly with $`s`$, quickly exhausting memory (see below). Fortunately, for $`\lambda `$ well below $`\lambda _\mathrm{c}`$, avalanche duration and coverage are quickly cut off. We have run up to $`2\times 10^6`$ critical avalanches in each $`d`$, $`1\le d\le 6`$, to determine their statistical properties. We used $`|ℒ|_{\mathrm{max}}=2^{16}`$ and $`|𝒜|=2^{20}`$, easily sufficient to run avalanches up to $`s_{\mathrm{max}}=10^8\ll |ℒ|_{\mathrm{max}}^2`$ update steps. \[Avalanches were terminated at $`s_{\mathrm{max}}`$ due to time constraints and due to the error in $`\lambda _\mathrm{c}`$, see Tab. 1.\] $`𝒜`$ is already much larger than the lattice used in Ref. highd , but we point out that even on a fixed lattice with $`N=2^{24}`$ sites in $`d=6`$, the largest distance is $`r_{\mathrm{max}}=48`$, i. e. $`s_{\mathrm{co}}<10^6`$. 
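A minimal sketch of this branching-process idea is particularly compact in Python, where a dict keyed by lattice address plays the role of both the sorted list and the sparse hash array described above (so collision handling comes for free, at the price of the constant-factor efficiency the original algorithm is after). The threshold is an assumed input, and the avalanche ends when no unstable numbers remain:

```python
import random

def critical_avalanche(lam_c, d=1, s_max=10**5, seed=0):
    """One Bak-Sneppen avalanche in the branching-process picture.

    Only numbers below lam_c are stored, keyed by lattice address;
    the avalanche is seeded by a single unstable site at the origin
    and lattice addresses may wander arbitrarily far (no spatial cut-off).
    Returns the duration and the set of sites visited by the minimum.
    """
    rng = random.Random(seed)
    unstable = {(0,) * d: 0.0}        # site -> lambda, seeded at the origin
    covered = set()
    s = 0
    while unstable and s < s_max:
        s += 1
        r = min(unstable, key=unstable.get)   # currently active site
        covered.add(r)
        for shift in neighborhood(d):
            site = tuple(x + dx for x, dx in zip(r, shift))
            unstable.pop(site, None)  # the old number is replaced in any case...
            lam = rng.random()
            if lam < lam_c:           # ...but only unstable numbers are stored
                unstable[site] = lam
    return s, covered

def neighborhood(d):
    """Offsets of the active site itself and its 2d nearest neighbors."""
    shifts = [(0,) * d]
    for axis in range(d):
        for sign in (-1, 1):
            shifts.append(tuple(sign if a == axis else 0 for a in range(d)))
    return shifts

# Example: a subcritical avalanche in d=1 (lambda below lambda_c ~ 0.667).
s, covered = critical_avalanche(lam_c=0.55, d=1, seed=2)
print(s, len(covered))
```

Memory scales with the number of currently unstable sites rather than with the lattice volume, which is the point of the construction.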
In contrast, our algorithm has $`s_{\mathrm{co}}\sim |ℒ|_{\mathrm{max}}^2\approx 5\times 10^9`$ in any dimension, yielding significant spatial correlations for $`r>100`$ even in $`d=6`$ (see Fig. 1). In a long critical avalanche, almost all unstable numbers cluster densely in $`ℒ`$ with values just below $`\lambda _\mathrm{c}`$, which puts high demands on a random number generator. Typically, in an ongoing avalanche that has grown to $`10^4`$ unstable numbers at some time $`s`$, $`10^3`$ of those are placed within $`\mathrm{\Delta }\lambda /\lambda _\mathrm{c}\approx 10^{-4}`$ just below $`\lambda _\mathrm{c}`$. Using $`\sigma =1/2`$, those numbers can expect to survive up to $`s\sim (\mathrm{\Delta }\lambda /\lambda _\mathrm{c})^{-1/\sigma }\approx 10^8`$ update steps. To record temporal correlations accurately, it is crucial to resolve these numbers accurately. This requires numbers that are sufficiently random for more than $`8`$ digits! Our data has been obtained with a sophisticated 64-bit random number generator provided to us by the authors of Ref. RNG . First, we discuss the data for the distribution of avalanche activity, $`P(r,s)`$, which leads to the avalanche dimension exponent $`D`$. To find $`P(r,s)`$, we record the instances of having activity at a (positive) distance $`r`$ relative to the origin at update step $`s`$. Similar to a random walk, the moments of the distribution define the exponent $`D`$ via $`\langle r^q\rangle _s\sim s^{q/D}`$. In Fig. 1 we have plotted the data as $`s/r^D`$ vs. $`r`$ for $`d\ne 4`$. For $`d<4`$ we have used the value of our best fit for the exponent $`D`$ scaling ; bsd.pre as given for each dimension in Tab. 1. For $`d\ge 4`$, the fitted value corresponded to the mean-field result BoPa1 , $`D=4`$. For $`d=4`$, we conjecture a simple asymptotic form for the scaling behavior, $`s\sim {\displaystyle \frac{r^4}{\mathrm{ln}r}}\qquad (d=4).`$ (1) Thus, in Fig. 1, unlike for the other dimensions $`d`$, for $`d=4`$ we have plotted $`s\mathrm{ln}r/r^4`$. 
For all dimensions the data levels out horizontally for increasing $`r`$. Logarithmic factors, as seen in Eq. (1) for $`d=4`$, are common for scaling behavior at the upper critical dimension logscale . Similar evidence for $`d_\mathrm{c}=4`$ is provided by the data for the backward avalanche exponent $`\tau _\mathrm{b}^{\mathrm{all}}`$, which is related to the avalanche distribution exponent through $`\tau =3-\tau _\mathrm{b}^{\mathrm{all}}`$ scaling . This particular scaling relation does not rely on avalanches being compact and is valid both above and below the upper critical dimension. The data exhibits mean-field behavior, $`\tau _\mathrm{b}^{\mathrm{all}}=3/2(=\tau )`$, already for $`d=5`$ and 6. In Fig. 2 we show the data for the backward-avalanche distribution reduced by the mean field scaling behavior, $`P_\mathrm{b}^{\mathrm{all}}(s)/s^{-3/2}`$, as a function of $`1/\mathrm{ln}(s)`$, for $`d=1`$ through $`d=6`$, and for the multi-trait model BoPa1 , which analytic calculations show exhibits mean field behavior. The curves for $`d<4`$ clearly approach zero rapidly for increasing times $`s`$ on this scale, indicating that $`\tau _\mathrm{b}^{\mathrm{all}}>3/2`$ in those dimensions. The corresponding data for $`d>4`$ is clearly bounded away from zero, comparable to the data from the multi-trait model, where $`\tau _\mathrm{b}^{\mathrm{all}}=3/2`$ exactly. This data also shows that there are significant corrections to scaling and cut-off effects, even for the multi-trait model. In this case, simply fitting a power law over the largest region available from the simulation data can give misleading results due to systematic curvature on increasing scales. (Fitted values for $`\tau _\mathrm{b}^{\mathrm{all}}`$ are discussed in Ref. bsd.pre .) In $`d=4`$, the data is again consistent with some logarithmic correction to scaling behavior. These findings receive strong theoretical support. The only assumption in the scaling theory of Ref. 
scaling is that the avalanches are compact below the upper critical dimension, where each site that is visited in an avalanche is typically visited many times. Our numerical results indicate that avalanches indeed remain compact for $`d\le 4`$. To test this explicitly, we have determined the probability distribution $`P(n_{\mathrm{cov}},R)`$ of having a finished avalanche that has covered a domain $`n_{\mathrm{cov}}`$ with a radius of gyration $`R`$. As mentioned above, $`n_{\mathrm{cov}}`$ may grow linearly with $`s`$, rapidly exhausting memory for $`𝒜`$. Thus, we used $`s_{\mathrm{max}}=|\mathcal{L}|_{\mathrm{max}}=2^{20}\approx 10^6`$ and $`|𝒜|=2^{22}`$. The exponent $`d_{cov}`$ is defined through the moments of the distribution $`P(n_{\mathrm{cov}},R)`$ via $`n_{\mathrm{cov}}\sim R^{d_{cov}}`$ for large $`R`$, providing a measure of the fractal structure of avalanches if $`d_{cov}<d`$. In turn, avalanches are compact if $`d_{cov}=d`$. Using our numerical results, we plot in Fig. 3 the quantity $`n_{\mathrm{cov}}/R^d`$ as a function of $`1/\mathrm{ln}(R^d)`$. If avalanches are fractal ($`d_{cov}<d`$), we would find that asymptotically this quantity should approach zero (like $`R^{-(d-d_{cov})}`$). In fact, we find that this quantity clearly remains finite for large $`R`$ for all dimensions $`d<4`$. For $`d>4`$, we find a rapid approach to zero on this logarithmic scale, indicating fractal behavior. In $`d=4`$ this quantity appears to vanish as well, but as a linear function of $`1/\mathrm{ln}(R^d)`$. Similar to Eq. (1), we find that the numerical simulations indicate marginal behavior in $`d=4`$ with logarithmic corrections. Thus, in $`d=4`$ avalanches are marginally compact (a “fat fractal”), and the conditions underlying the scaling theory in Ref. scaling are upheld for all $`d\le 4`$, implying $`d_\mathrm{c}=4`$. We demonstrate the consistency of our argument by also measuring the scaling of the coverage with the lifetime of an avalanche, $`n_{cov}\sim s^\mu `$ highd . 
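As a sanity check on this diagnostic, the $`n_{\mathrm{cov}}\sim R^{d_{cov}}`$ definition can be applied to sets that are compact by construction, e.g. filled lattice disks in $`d=2`$, where the log-log slope must return $`d_{cov}=d=2`$. A small sketch (illustrative only, unrelated to the avalanche data):

```python
import math

def disk_coverage(R):
    """Number of lattice sites covered by a filled disk of radius R (compact, d = 2)."""
    return sum(1
               for x in range(-R, R + 1)
               for y in range(-R, R + 1)
               if x * x + y * y <= R * R)

radii = [8, 16, 32, 64]
cov = [disk_coverage(R) for R in radii]
# log-log slope of n_cov vs R; a compact set gives d_cov ~ d = 2
d_cov = math.log(cov[-1] / cov[0]) / math.log(radii[-1] / radii[0])
```

A genuinely fractal set would instead give a slope strictly below the embedding dimension.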
Since there should be only one characteristic length for a compact avalanche, we expect that $`r\sim R`$, and thus, $`\mu =d/D`$ for $`d\le 4`$, and $`\mu =1`$ in the mean-field limit. In Fig. 4, we plot the coverage reduced by its mean-field scaling, $`n_{cov}/s`$, as a function of duration $`s`$. Clearly, the data for $`d=5`$ and 6 is in perfect agreement with mean-field behavior, while for $`d=4`$ the deviations from mean-field behavior are minute, $`\mu _{\mathrm{meas}}\approx 0.99`$, even without considering the effect of logarithmic factors. For $`d=3`$ we find $`\mu \approx 0.905`$, quite consistent with the value predicted from $`\mu =d/D\approx 0.896`$ using our measured value $`D=3.35`$ in $`d=3`$ (see Tab. 1). SB acknowledges helpful discussions with M. J. Creutz and T. T. Warnock.
no-problem/9911/astro-ph9911423.html
ar5iv
text
# Magnetic fields and the large-scale structure ## Abstract The large-scale structure of the Universe has been observed to be characterized by long filaments, forming polyhedra, with a remarkable 100-200 Mpc periodicity, suggesting a regular network. The introduction of magnetic fields into the physics of the evolution of structure formation provides some clues to understanding this unexpected lattice structure. A relativistic treatment of the evolution of pre-recombination inhomogeneities, including magnetic fields, is presented to show that equivalent-to-present field strengths of the order of $`10^{-8}`$ G could have played an important role. Primordial magnetic tubes generated at inflation, at scales larger than the horizon before recombination, could have produced filamentary density structures, with comoving lengths larger than about 10 Mpc. Structures shorter than this would have been destroyed by diffusion due to the small pre-recombination conductivity. If filaments constitute a lattice, the primordial magnetic field structures that produced the post-recombination structures of matter, impose several restrictions on the lattice. The simplest lattice compatible with these restrictions is a network of octahedra contacting at their vertexes, which is indeed identifiable in the observed distribution of superclusters. The very large structure of the Universe is characterized by filaments and voids. In supercluster distribution maps, such as those by Tully et al. (1992) and Einasto et al. (1997), one can identify filaments larger than 600 $`h^{-1}Mpc`$, such as the one connecting the Tucana and the Ursa Major superclusters, probably extended to the Draco supercluster. With galactic peculiar velocities of less than $`10^3km/s`$, in a time of the order of Hubble’s, a galaxy is only able to travel about 10 Mpc. Therefore it is difficult to explain how the primordial mass inhomogeneities could be rearranged from a chaotic distribution to one with ordered alignments. 
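The travel-distance estimate above is a one-line computation; assuming a round Hubble time of about 14 Gyr ($`4.4\times 10^{17}`$ s, an assumed value):

```python
# Distance covered in ~ a Hubble time at the quoted peculiar-velocity bound.
v_km_s = 1.0e3            # peculiar-velocity bound, < 10^3 km/s
t_hubble_s = 4.4e17       # ~ 14 Gyr in seconds (assumed round value)
km_per_mpc = 3.086e19     # kilometres in one megaparsec
d_mpc = v_km_s * t_hubble_s / km_per_mpc   # of order 10 Mpc
```

This is two orders of magnitude below the observed filament lengths, which is the crux of the argument.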
Galaxies, or their pregalactic inhomogeneities, have no time to redistribute themselves. Present hydrodynamic forces, and particularly magnetic forces, have had no time to produce a redistribution and explain the large scale structures. These large structures are also much larger than the horizon at Recombination (about 10 Mpc, comoving length), which implies they could only have originated at the Inflation epoch. Inflation magnetogenesis is one of the most interesting possibilities for the origin of cosmic magnetic fields (Turner & Widrow 1988; Ratra 1992; Garretson, Field, & Carrol 1992; Dolgov 1993; Gasperini & Veneziano 1995 and others). If present magnetic fields do not have an important influence on the 100 Mpc large-scale structure, there remains the interesting possibility that they were important in the past. In this paper, we analyze the effects of primordial magnetic fields on the present large-scale density distribution. The history of magnetic fields during the different epochs of the Universe is a complicated one, in which the Radiation Dominated Epoch was critical, as the electron-photon interaction was responsible for a resistivity capable of destroying small scale fields (Lesch & Birk 1998). Magnetic diffusion induced by a finite scalar conductivity cannot affect large scale fields because it does not have sufficient time. It is likely that field structures larger than comoving 3 kpc were able to survive and, with complete certainty, structures larger than the horizon survived this hostile epoch. Some of these field structures became sub-horizon after Recombination and were not destroyed because, in this epoch, the assumption of infinite conductivity is reasonably satisfied and magnetic diffusivity is negligible at all scales. 
As shown by Battaner, Florido & Jimenez-Vicente (1997) and Florido & Battaner (1997), primordial magnetic flux tubes were able to induce filamentary radiative energy density inhomogeneities during the radiative epoch between Annihilation and Recombination, or more precisely until the so-called Acoustic Epoch. Magnetic flux tubes arise in cosmic MHD systems and are the $`\vec{B}`$-structures necessary when magnetic coherence cells exist. They anisotropically affect photon distribution because magnetic fields are present in the energy-momentum tensor. Energy density distributions created during this epoch produced potential wells and seeds for baryonic and CDM inhomogeneities. They originally consisted of filaments. Post recombination non-linear and imperfect fluid effects distorted these structures, but the largest ones were relatively unaffected and should be recognizable today. Moreover, the original primordial magnetic flux tubes were distorted by small scale effects, such as field amplification in growing $`\rho `$-inhomogeneities, ejections by radio galaxies and other effects, which again, kept the very large scales relatively unaffected. Therefore, both the density and the magnetic field large structures would have survived and should be recognizable today. A linear perturbation of the Maxwell, conservation of momentum-energy and Einstein Field equations was carried out by Battaner, Florido & Jimenez-Vicente (1997) to study the evolution of magnetic field and density inhomogeneities during the Radiation dominated era, even if not all these equations are independent. The perturbed Robertson-Walker metric in a flat universe was considered. A mean cosmological magnetic field cannot exist but a mean magnetic energy density $`\langle B^2/8\pi \rangle `$ may be non-vanishing. Provided that this is negligible compared with the radiative energy density, the general laws of expansion ($`R\propto t^{1/2}`$) and cooling ($`T\propto R^{-1}`$) are unaffected by the presence of a magnetic energy. 
During this epoch, the magnetic field strength always decreases, being diluted by expansion, but the shape of the structures remains unmodified in the expansion. The distribution of $`\vec{B}_0=\vec{B}R^2`$, where $`R`$ is the cosmological scale factor, taking its present value as unity, is constant. This field, $`\vec{B}_0`$, would coincide with the present field only if the complicated post-recombination effects had not distorted the structures and amplified the field. The equivalent-to-present magnetic field strength during the radiation dominated era should be of the order of $`10^{-8}G`$. If it were less than this, then magnetic fields would have no influence on the structure. If higher, the growth of galaxies and clusters would have proceeded too efficiently. A primordial magnetic flux tube with $`10^{-8}`$ G at its centre would produce a density filament with a relative over-density of $`\delta =5\times 10^{-4}`$ at Recombination, starting with complete density homogeneity, or considering primordial isocurvature inhomogeneities. After Recombination $`\delta `$ has increased without the influence of magnetic fields. These would have increased from $`B_0\approx 10^{-8}G`$ before Recombination to $`B_0\approx 1`$–$`3\times 10^{-6}`$ G at present, as observed in clusters and in the intercluster medium (Kronberg, 1994). A rough scheme is depicted in figure 1. It is an observational fact that large scale filaments form polyhedra which form a lattice (Broadhurst et al. 1990; Tully et al. 1992; Einasto et al. 1997 and references therein). Battaner & Florido (1997) considered the properties of this lattice, as if they were produced by primordial magnetic flux tubes, as present filaments would have inherited the topological characteristics of the primordial tubes. They concluded that the simplest network matching the magnetic restrictions consists of octahedra only contacting at their vertexes. 
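Since $`B_0=BR^2`$ is conserved and the present scale factor is unity, the physical field at redshift $`z`$ follows from $`R=1/(1+z)`$. A quick sketch (taking $`z\simeq 1100`$ for Recombination as an assumed round value):

```python
def field_at_redshift(B0_gauss, z):
    """Physical field B = B0 / R^2 = B0 * (1 + z)^2, with B0 = B * R^2 conserved."""
    return B0_gauss * (1.0 + z) ** 2

# equivalent-to-present strength of 1e-8 G evaluated at Recombination (z ~ 1100)
B_rec = field_at_redshift(1.0e-8, 1100.0)
```

The physical field before Recombination is thus some six orders of magnitude stronger than its equivalent-to-present value.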
If the real structures actually consisted of such octahedra, the present supercluster distribution would resemble an egg-carton Universe. Battaner, Florido & Garcia-Ruiz (1997) identified octahedra only contacting at their vertexes in the real sky, with all the important superclusters and all the important voids (taken from the ETJEA, i.e. Einasto et al. 1997, and the EETDA, i.e. Einasto et al. 1994 catalogues, respectively) forming part of the web. Therefore, the egg-carton network is perfectly recognizable in the present large-scale structure. The size of the octahedra would be $`150h^{-1}Mpc`$. As shown by Battaner (1998) this network is compatible with a fractal structure, there being sub-octahedra within the octahedra, and so on. The lower limit of the fractal range could be about 10 Mpc, because filaments smaller than that probably had no chance of surviving the Radiation Dominated era. The upper limit is that imposed by observations and even by the present horizon. The fractal dimension would be either 1.77 or 2 depending on the ratio of octahedra/sub-octahedra sizes. Conclusions Though some of the above ideas seem to be rather speculative, it is very noticeable that the cosmic web actually matches the theoretical results. Our basic conclusion is that primordial magnetic fields have played an important role in establishing the presently observed supercluster network. References Battaner, E. 1998, Astron. Astrophys., 334, 770 Battaner, E., Florido, E. & Garcia-Ruiz, J.M. 1997, Astron. Astrophys., 327, 8 Battaner, E., Florido, E. & Jimenez-Vicente, J. 1997, Astron. Astrophys., 326, 13 Broadhurst, T.J., Ellis, R.S., Koo, D.C. & Szalay, A.S. 1990, Nature, 343, 726 Dolgov, A.D. 1993, Phys. Rev. D, 48, 2499 Einasto, M., Einasto, J., Tago, E., Dalton, G.B. & Andernach, H. 1994, Mon. Not. Roy. Astron. Soc., 269, 301 Einasto, M., Tago, E., Jaaniste, J., Einasto, J. & Andernach, H. 1997, Astron. Astrophys. Supp. Ser., 123, 129 Florido, E. & Battaner, E. 
1997, Astron. Astrophys., 327, 1 Garretson, W.D., Field, G.B. & Carrol, S.M. 1992, Phys. Rev. D, 46, 5346 Gasperini, M. & Veneziano, G. 1995, Astroparticle Physics, 1, 317 Kronberg, P.P. 1994, Reports on Progress in Physics, 57, 325 Lesch, H. & Birk, G. 1998, Phys. Plasmas, 5, 2773 Ratra, B. 1992, Astrophys. J., 391, L1 Tully, R.B., Scaramella, R., Vettolani, G. & Zamorani, G. 1992, Astrophys. J., 388, 9 Turner, M.S. & Widrow, L.M. 1988, Phys. Rev. D, 37, 2743
no-problem/9911/chao-dyn9911002.html
ar5iv
text
# Can Strange Nonchaotic Dynamics be induced through Stochastic Driving? ## I Introduction The effect of noise on the dynamics of low–dimensional nonlinear systems has been widely studied. One major motivation has been to verify the robustness of observed dynamical phenomena , but a large number of studies are directed toward studying whether additive (or multiplicative) noise can induce novel dynamical phenomena. In this context, noise–induced ordering has been extensively explored in the past few years . At first glance, such results appear counterintuitive since the addition of randomness would normally be expected to enhance the effects of chaos in any system. At the same time, it is well–established that additive noise causes phenomena such as stochastic resonance , or otherwise stabilizes chaotic motion . In other situations the effect is to reduce the value of the largest Lyapunov exponent (LE), namely to make the system less chaotic , or to create new random attractors . Can strange nonchaotic attractors (SNAs) be formed via stochastic driving of a nonlinear system? While it has been suggested that additive noise can create SNAs, this question touches upon an important open issue. The only examples of SNAs known to date have quasiperiodic driving in the dynamics , although there are some experimentally studied systems where the dynamics appears to be on strange nonchaotic attractors, but where there is no explicit quasiperiodic driving. This question therefore has considerable practical relevance. In this article we examine the mechanism whereby the addition of a chaotic or stochastic signal to a general chaotic system has the effect of reducing the degree of disorder . The particular systems where this occurs all appear to have large contracting regions in the phase space, typified by, say, an exponential tail in the Poincaré map. An additional motivation here is to understand the different mechanisms through which chaotic attractors are “made” nonchaotic. 
The contrast here is with quasiperiodically driven chaotic dynamical systems which can often transform a strange chaotic attractor into a strange nonchaotic one . On strange nonchaotic attractors (SNAs) the dynamics is aperiodic since the attractor is fractal, but the largest Lyapunov exponent is nonpositive, so there is no sensitivity to initial conditions. The dynamics is intermediate between quasiperiodic and chaotic; there are features of both regularity and chaos. Our present results suggest that stochastic driving alone cannot create SNAs: noise–induced stabilization differs in important respects from strange nonchaotic dynamics. We find that the noise induced order proceeds as follows. By adding noise, the invariant measure on the (noisy) attractor is modified. If the measure on those regions where the dynamics is locally contracting is enhanced, then this has the effect of lowering the Lyapunov exponent. (Alternately, the Lyapunov exponent can be enhanced by increasing the measure in regions where the local dynamics is expanding). On the other hand, given a quasiperiodically driven system where there are strange nonchaotic attractors, the addition of noise may also destroy such attractors . Our main results are presented in Section II, where we discuss model systems with stochastic forcing. We analyse the dynamics in terms of local Lyapunov exponents and show the methodology of this mechanism for inducing ordering. A number of previously studied control methods appear to fall in this class of techniques, as does a related anticontrol method . A summary follows in Section III. ## II Results Consider a stochastically driven nonlinear dynamical system specified by, say, the iterative mapping $$x_{n+1}=f(x_n)+\sigma \xi _n,$$ (1) where $`\xi _n`$ is additive stochastic or random noise of strength $`\sigma `$. We consider the case when the system has positive Lyapunov exponent with no driving, i.e. for $`\sigma =0`$. 
For nonzero $`\sigma `$ it can happen that the Lyapunov exponent corresponding to the $`x`$ degree of freedom $$\lambda =\underset{N\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\mathrm{ln}|f^{\prime }(x_i)|$$ (2) can become negative. A number of related situations show a very similar property, namely that the Lyapunov exponent decreases on addition of an extra stochastic or chaotic term in the dynamics. For example, driving via another chaotic system, $`x_{n+1}`$ $`=`$ $`f(x_n)+\sigma \xi _n`$ (3) $`\xi _{n+1}`$ $`=`$ $`g(\xi _n),`$ (4) where the maps $`f`$ and $`g`$ can be different from each other, or in an extreme case, where $`\xi _n`$ is a constant, namely the case of constant feedback studied in some detail by Parthasarathy and Sinha . This type of feedback causes the system, Eq. (1), to display a form of “control”. It may happen that a periodic orbit is stabilized via feedback . Alternately, the motion continues to be aperiodic, but since the Lyapunov exponent is negative, two systems with different initial conditions which are driven by exactly the same noise will actually show synchronization. However, in a trivial sense, the above dynamical system cannot possess a nonchaotic attractor because the largest Lyapunov exponent, namely that corresponding to the $`\xi `$ degree of freedom, is positive. If one considers two separate initial conditions, namely driving two systems with independent realizations of the noise or chaotic driving, then there is no synchronization, as can be expected. In this feature, such dynamics differs from the motion on strange nonchaotic attractors where there can be robust synchronization . One cannot have true SNA dynamics in the presence of stochastic driving alone. Some understanding of the above results can be obtained by considering typical examples. 
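The exponent in Eq. (2) is straightforward to estimate numerically for a driven map of the form of Eq. (1). The sketch below (not the authors' code) uses the exponential logistic map $`f(x)=x\mathrm{exp}[\alpha (1-x)]`$ at $`\alpha =3`$ and, as an assumption, noise $`\xi _n`$ uniform on $`[0,1)`$:

```python
import math
import random

def lyapunov(f, df, x0, n, sigma=0.0, seed=1, transient=1000):
    """Estimate the Lyapunov exponent of x_{n+1} = f(x_n) + sigma * xi_n."""
    rng = random.Random(seed)
    x = x0
    total = 0.0
    for i in range(n + transient):
        if i >= transient:
            # guard against log(0) at the rare points where f'(x) = 0
            total += math.log(max(abs(df(x)), 1e-300))
        x = f(x) + sigma * rng.random()
    return total / n

alpha = 3.0
f = lambda x: x * math.exp(alpha * (1.0 - x))
df = lambda x: math.exp(alpha * (1.0 - x)) * (1.0 - alpha * x)

lam_clean = lyapunov(f, df, 0.5, 100_000)             # positive: chaos
lam_noisy = lyapunov(f, df, 0.5, 100_000, sigma=1.0)  # reduced by the driving
```

With strong driving the trajectory spends most of its time in the contracting tail of the map, which is the mechanism discussed below.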
As specific maps we considered the exponential logistic map, $$x_{n+1}=F(x_n)=x_n\mathrm{exp}[\alpha (1-x_n)],$$ (5) and the quadratic logistic map, $$x_{n+1}=F(x_n)=\alpha ^{\prime }x_n(1-x_n).$$ (6) These show the typical bifurcation diagram as a function of $`\alpha `$ or $`\alpha ^{\prime }`$, with chaotic dynamics over a range in parameter space. In the latter case, Eq. (6), e. g. at $`\alpha ^{\prime }=4`$, it is observed that on adding the restricted noise the LE does not decrease: a pair of such chaotic systems synchronize with identical driving noise . The logistic map also synchronizes with restricted noise, and in this case, the Lyapunov exponent remains positive . Both maps have contracting and expanding sub–regions but behave in different ways. This difference can be analysed by considering Eq. (5), for instance, at $`\alpha =3`$; the dynamics is chaotic for almost every initial condition on $`[0,\mathrm{\infty })`$. Beyond $`x\approx 1.4`$ and in the region around the map maximum \[Fig. 1(a)\], the map is contracting, so that only a small region of the phase space is effectively responsible for the chaotic dynamics. Most of the natural invariant measure is, however, concentrated away from these contracting regions; see Fig. 1(b). However for Eq. (6), the phase space is restricted only to $`[0,1]`$ and the contracting region is relatively much narrower. The mapping Eq. (5) has positive Lyapunov exponent for $`\alpha =3`$. By adding a noise term as in Eq. (1), the natural measure can be modified so as to increase the sampling of the contracting regions of phase space \[Fig. 1(b)\]: this reduces the Lyapunov exponent of the driven system. Consider the partial sums, $$\lambda _+=\underset{N_+\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{N_+}\underset{i}{\sum }\mathrm{ln}|f^{\prime }(x_i)|,|f^{\prime }(x_i)|>1,$$ (7) and $$\lambda _{-}=-\underset{N_{-}\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{N_{-}}\underset{i}{\sum }\mathrm{ln}|f^{\prime }(x_i)|,|f^{\prime }(x_i)|<1,$$ (8) namely the separate contributions to the Lyapunov exponent, with $`\lambda _{-}`$ defined as a positive magnitude. 
These are obtained by partitioning a long trajectory ($`N\to \mathrm{\infty }`$) into $`N_+`$ points on expanding regions and $`N_{-}`$ points on contracting regions. Clearly, $`N=N_++N_{-}`$ and $`\lambda =\frac{N_+}{N}\lambda _+-\frac{N_{-}}{N}\lambda _{-}`$. As the intensity of the noise term increases, the dynamics is pushed out onto those parts of phase space where the average slope of the map is less than 1. Thus the latter partial sum, $`\lambda _{-}`$, increases in magnitude at the expense of $`\lambda _+`$ and eventually becomes larger than $`\lambda _+`$, leading to a Lyapunov exponent which is zero or negative. The variation of these quantities with noise strength $`\sigma `$ is shown in Fig. 2, and it is clear that the system can be made “nonchaotic” both in the case of additive noise, as well as for driving via an added chaotic signal. Similar ideas apply to noise–driven flows where analogous results can be obtained. Again, we find that systems where the Lyapunov exponents can be reduced by adding noise are characterized by having a large contracting region in the phase space; this can be detected by examination of the return map, for instance. An important class of such continuous systems that we have studied are equations corresponding to the kinetics of coupled chemical reactions, as for example the cubic non-isothermal autocatalator (see Chapter 4 in Ref. and references therein for more details). $`{\displaystyle \frac{dx}{dt}}`$ $`=`$ $`\mu \mathrm{exp}(z)-xy^2-\kappa x`$ (9) $`{\displaystyle \frac{dy}{dt}}`$ $`=`$ $`xy^2+\kappa x-y`$ (10) $`{\displaystyle \frac{dz}{dt}}`$ $`=`$ $`\delta y-\gamma z`$ (11) Shown in Fig. 3(a) is the return map for the system in the regime where the dynamics is chaotic, which shows an exponential tail similar to the simple iterative mapping, Eq. (5). Upon addition of noise the Lyapunov exponent decreases as shown in Fig. 3(b). 
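The partial sums of Eqs. (7) and (8) can be accumulated along a single trajectory. In the sketch below (an illustration, with $`\lambda _{-}`$ taken as a positive magnitude, an assumed sign convention), the full exponent is recovered as $`\lambda =(N_+\lambda _+-N_{-}\lambda _{-})/N`$:

```python
import math

def lyapunov_split(f, df, x0, n, transient=1000):
    """Separate expanding (|f'| > 1) and contracting (|f'| < 1) contributions."""
    x = x0
    n_plus = n_minus = 0
    s_plus = s_minus = 0.0
    for i in range(n + transient):
        if i >= transient:
            term = math.log(max(abs(df(x)), 1e-300))
            if term > 0.0:
                n_plus += 1
                s_plus += term
            else:
                n_minus += 1
                s_minus += term
        x = f(x)
    lam_plus = s_plus / n_plus
    lam_minus = -s_minus / n_minus   # positive magnitude
    lam = (n_plus * lam_plus - n_minus * lam_minus) / n
    return lam_plus, lam_minus, lam

alpha = 3.0
f = lambda x: x * math.exp(alpha * (1.0 - x))
df = lambda x: math.exp(alpha * (1.0 - x)) * (1.0 - alpha * x)
lam_plus, lam_minus, lam = lyapunov_split(f, df, 0.5, 50_000)
```

For the undriven map of Eq. (5) at $`\alpha =3`$ the expanding part dominates, giving a positive $`\lambda `$.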
Other examples with very similar behaviour in higher dimensions are, for example, equations that model the Belousov-Zhabotinsky reaction . Since these reactions can be studied experimentally, it is possible that the effect of noise in reducing chaos in such systems can be verified in practice . A number of different systems which share the above features can be controlled in this manner, namely by adding noise or chaotic driving. Note, however, that the system is not truly “nonchaotic”. Unlike the dynamics on periodic or quasiperiodic attractors or SNAs where all the Lyapunov exponents are nonpositive, here the dynamics is not confined to a single global attractor, but to some region of the phase space for each realization of noise. Therefore the fluctuations of all dynamical quantities, and in particular the Lyapunov exponents, actually increase because of the additive noise. ## III Summary Nonuniform attractors in nonlinear dynamical systems typically have interwoven contracting and expanding subregions. By increasing the measure on regions where the dynamics is locally contracting relative to those which are locally unstable, one can render the motion “nonchaotic”. This can be effected through the action of additive noise: the invariant measure on some chaotic attractors can be so modified that the dynamics is taken to those regions of the attractor which are contracting on average, and this results in a nonpositive Lyapunov exponent. Indeed, noisy experimental data can yield a negative value for the Lyapunov exponent even though the actual dynamics of the system may be chaotic. Adding stochastic noise, does not, however, create strange nonchaotic attractors . For each realization of the noise, the limiting set is different, and thus there are no attractors per se. One important property of SNAs is the synchronization of two trajectories driven by the same external quasiperiodic force . 
Motion on the nonchaotic sets obtained by adding noise does not have this property unless the systems are driven by identical noise. Whether it is reasonable to expect that this can be realized in practice is a moot question. In the present work, we have considered only additive stochastic driving. It is possible that some other forms of stochastic driving (perhaps via parametric modulation) can create true SNAs; this question remains to be explored. ACKNOWLEDGMENT This research has been supported by a grant from the Department of Science and Technology, India.
no-problem/9911/astro-ph9911092.html
ar5iv
text
# Detection of 33.8 ms X-ray pulsations in SAX J0635+0533 ## 1 Introduction The X-ray source SAX J0635$`+`$0533 was discovered by Kaaret et al. (1999) thanks to a BeppoSAX observation within the error box of the unidentified Galactic gamma-ray source 2EG J0635$`+`$0521 (Thomson et al. (1995)), a candidate gamma-ray pulsar as suggested by its hard gamma-ray spectrum (Merck et al. (1996)). The X-ray source is characterized by quite hard X-ray emission detected up to 40 keV (Kaaret et al. (1999)). Its energy spectrum is consistent with a power-law model with a photon index of 1.5, an absorption column density of $`2.0\times 10^{22}\mathrm{cm}^{-2}`$, and a flux of $`1.2\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the $`2`$–$`10`$ keV energy band. A search for pulsed emission over a period range from 0.030 s to 1000 s did not detect any pulsed signal. Due to the large error box of the gamma-ray source, the identification of SAX J0635$`+`$0533 with 2EG J0635$`+`$0521 is not definitive: such an identification could only be made through pulsed detection in both X-ray and gamma-ray emission or a much improved gamma-ray position. Follow up optical observations (Kaaret et al. (1999)) suggest as a counterpart of SAX J0635$`+`$0533 a Be star with a V-magnitude of 12.8, located within the $`1^{\prime }`$ X-ray source error box. The estimated distance is in the range $`2.5`$–$`5`$ kpc. The total Galactic 21 cm column density along the optical counterpart direction is $`7\times 10^{21}\mathrm{cm}^{-2}`$ (Stark et al. (1992)), in agreement with an estimation derived from the extinction of the optical spectrum. Kaaret et al. (1999) assert that the larger column density for SAX J0635$`+`$0533, estimated from the X-ray spectrum, implies the presence of circumstellar gas around the X-ray source. 
Moreover, taking into account the positional consistency between the X-ray and the gamma-ray sources as well as the hard X-spectrum, the strong X-ray absorption, and the optical association with the Be star, they suggest that SAX J0635$`+`$0533 is an X-ray binary emitting gamma-rays. We revisited the BeppoSAX observation of SAX J0635$`+`$0533, available from the public archive (obs.code #30326001). In this letter we present new imaging and timing results. Our analysis has revealed a 33.8 ms pulsation of the X-ray source. §2 describes the observation and data reduction. In §3, we report the data analysis procedures and results. We conclude with a discussion of the results in §4. ## 2 Observation and Data Reduction The field including the serendipitous source SAX J0635$`+`$0533 was observed on 23-24 October 1997 by the Narrow Field Instruments (NFIs) on board BeppoSAX (Boella et al. 1997a ). In our analysis, we use only data coming from the two NFI imaging instruments, namely the Low Energy Concentrator Spectrometer (LECS) operating in the energy range 0.1–10 keV (Parmar et al. (1997)) and the Medium Energy Concentrator Spectrometer (MECS) operating in the energy range 1.3–10 keV (Boella et al. 1997b ). At the observation epoch only two of the three MECS detector units were operating. LECS and MECS raw data have been reduced in cleaned photon list files by using the SAXDAS v.2.0 software package and adopting standard selection criteria<sup>1</sup><sup>1</sup>1http://www.sdc.asi.it/software/saxdas. The source was located $`3^{\prime }`$ off-axis. The total duration of the observation is 72 714 s and the net exposure is 14 043 s and 34 597 s for LECS and MECS, respectively. The different exposures are due to the fact that the LECS was only operated during satellite nighttime. 
## 3 Data Analysis In our analysis, we include only events within a $`3.7^{\prime }`$ radius around the source position; this radius optimizes the signal–to–noise ratio for SAX J0635$`+`$0533 and contains $`\approx `$80% of the source signal for both LECS and MECS. Due to the fact that the source is strongly absorbed, we further improve the signal–to–noise ratio by using the energy range $`1.8`$–$`10`$ keV for both the instruments. The resulting number of selected events is then 479 for LECS and 3 658 for MECS, respectively. We estimate that less than $`1\%`$ of these events is due to the diffuse and instrumental background. ### 3.1 Spatial Analysis The position of SAX J0635$`+`$0533 has been re-evaluated by considering MECS data accumulated under specific constraints of the spacecraft; in particular, only data referring to the Z star tracker (the one aligned with the NFIs) in use have been taken into account. This allows us to reduce the attitude reconstruction error to $`30^{\prime \prime }`$ (90% confidence level)<sup>2</sup><sup>2</sup>2http://www.sdc.asi.it/software/cookbook/attitude.html. After suitable smoothing of the screened data and then by using a Gaussian centering procedure, the position has been found at $`RA=06^\mathrm{h}35^\mathrm{m}18^\mathrm{s}`$, $`DEC=+05^{\circ }33^{\prime }11^{\prime \prime }`$ (J2000) with no significant statistical error. The error box is only determined by the attitude reconstruction error. The Be star (Kaaret et al. 1999) is within the $`30^{\prime \prime }`$ error box of SAX J0635$`+`$0533 and $`6.8^{\prime \prime }`$ distant from its center. Fig. 1 shows a finding chart for SAX J0635$`+`$0533 from the Digitized Sky Survey in a field of $`4^{\prime }\times 4^{\prime }`$. Within the reduced X-ray source error circle, only two of the seven stars referred to in Kaaret et al. (1999) are now present. The Be star is the brightest one which is positioned close to the center of the X-ray source error circle. 
### 3.2 Temporal Analysis The SAX J0635$`+`$0533 light–curve (1000 s bin size) for the MECS is shown in Fig. 2a (gaps are present due to non–observing time intervals during South Atlantic Anomaly and Earth occultation). Fig. 2a shows that the emission of SAX J0635$`+`$0533 is variable up to a factor of 10. Fig. 2b shows the hardness ratio between 4–10 keV and 1.8–4 keV bands. The relative contributions of these two energy bands are not constant, but no strong correlation with respect to the total intensity is evident. In order to search for periodicity, the arrival times of all selected events have been converted to the Solar System Baricentric Frame, using the BARICONV code<sup>3</sup><sup>3</sup>3 http://www.sdc.asi.it/software/saxdas/baryconv.html. The $`Z_1^2`$ test (Buccheri et al. (1983)) on the fundamental harmonics with the maximum resolution ($`\delta f=1/\mathrm{72\hspace{0.17em}714}`$ Hz) applied to the MECS baricentered arrival times does not reveal significant deviations from a statistically flat distribution up to 50 Hz. If SAX J0635$`+`$0533 is a binary pulsar of rotational spin period $`P_s`$ and orbital period $`P_o`$, the observed $`P_s`$ is modulated by the orbital motion. Thus, a direct search for a coherent oscillation at $`P_s`$ can be successful only if the modulation amplitude is small over the time interval $`\mathrm{\Delta }T`$ in which the search is performed. This condition is satisfied if $`\mathrm{\Delta }T\ll P_o`$. To reduce the effect of a possible orbital motion in the periodicity search, we divide the whole data span into $`M`$ subintervals, calculating the $`Z_1^2`$ statistics for each trial period in each subinterval, and then adding together the $`M`$ statistics for each trial period. This procedure results in a less noisy spectrum. The sum of the $`M`$ separate $`Z_1^2`$ statistics results in a statistics with $`2M`$ degrees of freedom (Bendat & Piersol (1971)) which we refer to as the $`Z_1^2(\nu =2M)`$ statistics. 
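The first-harmonic $`Z_1^2`$ statistic is simple to implement and to test on synthetic data. In the sketch below (not the analysis code; all event parameters are invented for illustration), arrival times are drawn with a rate sinusoidally modulated at the 29.5364 Hz frequency, and the power is evaluated on and off the true frequency:

```python
import math
import random

def z1_squared(times, freq):
    """Rayleigh power Z_1^2 = (2/N) * [ (sum cos)^2 + (sum sin)^2 ]."""
    n = len(times)
    c = sum(math.cos(2.0 * math.pi * freq * t) for t in times)
    s = sum(math.sin(2.0 * math.pi * freq * t) for t in times)
    return 2.0 * (c * c + s * s) / n

# synthetic pulsed events: accept times with probability ~ (1 + cos(2*pi*phase))
rng = random.Random(0)
f0 = 29.5364
times = []
while len(times) < 2000:
    t = rng.uniform(0.0, 3000.0)
    phase = (f0 * t) % 1.0
    if rng.uniform(0.0, 2.0) < 1.0 + math.cos(2.0 * math.pi * phase):
        times.append(t)

z_on = z1_squared(times, f0)          # large: pulsation detected
z_off = z1_squared(times, f0 + 0.05)  # ~ chi^2 with 2 degrees of freedom
```

Summing such powers over $`M`$ independent subintervals yields the $`Z_1^2(\nu =2M)`$ statistics used in the search.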
Because the power depends on the square of the pulsed signal, its strength decreases with $`M`$, and only sufficiently strong signals can be detected. We selected time slices corresponding to intervals of continuous observation between two Earth occultations. The total number of these slices is $`M=13`$, each lasting about $`3300\mathrm{s}`$. We adopted a frequency step $`\delta f=2\times 10^{-4}\mathrm{Hz}`$, spanning a 50 Hz search range with 250 000 trial frequencies. Fig. 3 shows the power spectrum obtained from the MECS and LECS data with the $`Z_1^2(\nu =26)`$ statistic as a function of frequency; a clear excess appears at $`f_0=29.5364\pm 0.0001`$ Hz. The value of $`Z_1^2(\nu =26)`$ at this frequency is $`99.6`$. Because $`Z_1^2(\nu =26)`$ follows the $`\chi ^2`$ statistic with 26 degrees of freedom, the single-trial chance probability of an excess greater than 99 is $`2\times 10^{-10}`$, as shown in Fig. 4. Taking into account the number of trial frequencies used, the probability becomes $`5\times 10^{-5}`$, corresponding to 4 standard deviations in Gaussian statistics. When we use only MECS data, the $`Z_1^2(\nu =26)`$ value decreases to 91, in agreement with the reduction in source counts. We also tested the power at half and double the detected frequency $`f_0`$: no signal is present at $`f=f_0/2`$ or at $`f=2f_0`$. In order to check the persistence of the periodicity over the whole observation, we also binned the maximum-resolution spectrum of the entire data set by a factor of 13. We obtain a high value, $`Z_1^2(\nu =26)>104`$, at a frequency of 29.5364 Hz, indicating that the high power does not come from just one or a few selected time intervals. From the $`Z_1^2(\nu =26)`$ value we can estimate the pulsed fraction: for $`N_p`$ pulsed counts out of $`N_t`$ total counts, $`Z_1^2(\nu =26)=2\alpha N_p^2/N_t+\nu `$, where $`\alpha `$ is a shape constant.
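The chance probabilities quoted above can be reproduced with the closed-form survival function of a $`\chi ^2`$ variable with an even number of degrees of freedom (a sketch; the trial count of 250 000 comes from the search grid described above):

```python
import math

def chi2_sf_even_dof(x, dof):
    """P(X > x) for a chi-square variable with an even number of degrees
    of freedom: exp(-x/2) * sum_{k < dof/2} (x/2)^k / k!."""
    assert dof % 2 == 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, dof // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

p_single = chi2_sf_even_dof(99.0, 26)               # ~2e-10, as quoted
n_trials = 250_000
p_post_trial = 1.0 - (1.0 - p_single) ** n_trials   # ~5e-5, i.e. ~4 sigma
```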
In our case, $`\alpha =0.25`$ (sinusoidal shape), and the pulsed fraction is then about 0.2. We also performed a periodicity search over the full observation including the first and second derivatives of the frequency, $`\dot{f}`$ and $`\ddot{f}`$ respectively, at the maximum resolution step. The search was performed within the interval $`\mathrm{\Delta }f=2\times 10^{-4}`$ Hz centered on the detected value $`f_0`$; the search ranges for $`\dot{f}`$ and $`\ddot{f}`$ were chosen accordingly. We obtained a maximum in the power spectrum ($`Z_1^2(\nu =2)=52`$) for $`f=29.53643\pm 0.00001`$ Hz, $`\dot{f}=(3.1\pm 0.2)\times 10^{-9}`$ Hz s<sup>-1</sup> and $`\ddot{f}=(1.1\pm 0.1)\times 10^{-14}`$ Hz s<sup>-2</sup>, where the errors refer to the parameter resolution. Assuming a circular orbit, the second-order polynomial expansion of the frequency vs. time relation is locally compatible with an orbital period $`P_o\sim 18`$ days and a projected semi-major axis $`a_p\mathrm{sin}i\sim 63`$ lt-s. However, we stress that the observation interval is much shorter than the derived orbital period, and that the parabolic fit may not be a good representation if the orbit is eccentric. Thus, these results should be taken only as a possible indication of the orbital parameters, not as a firm detection of orbital motion. Figure 5 shows the pulse profile obtained by folding the data with the frequency-derivative terms taken into account. Note that if the orbit is eccentric, the real pulse profile could be narrower.

## 4 Discussion

The coherence of the detected periodicity is high, $`Q=f/\delta f\sim 10^5`$. This value is much greater than those observed for the quasiperiodic oscillations (QPOs) often detected in the X-ray emission of X-ray binaries (van der Klis et al. (1996)). The high coherence leads us to interpret this periodicity as a neutron star spin period. The association between SAX J0635$`+`$0533 and the Be star has been reinforced by the reduced error circle of the X-ray position.
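The pulsed-fraction estimate above can be checked by inverting $`Z_1^2(\nu =26)=2\alpha N_p^2/N_t+\nu `$ for the pulsed counts, using the LECS and MECS event totals from Sect. 3 (a sketch; variable names are ours):

```python
import math

alpha = 0.25            # shape constant for a sinusoidal pulse profile
z, dof = 99.6, 26       # detected Z_1^2(nu=26) power and degrees of freedom
n_tot = 479 + 3658      # selected LECS + MECS events

n_pulsed = math.sqrt((z - dof) * n_tot / (2.0 * alpha))
pulsed_fraction = n_pulsed / n_tot   # ~0.19, i.e. about 0.2
```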
The timing analysis indicates that the neutron star could orbit a companion star. The tentative orbital parameters are consistent, for orbital inclinations less than $`25^{\circ }`$, with a primary mass greater than 10 $`M_{\odot }`$, as expected for a Be star. The X-ray emission may be powered either by accretion or by the spin-down of the neutron star; we consider these possibilities in turn. The SAX J0635$`+`$0533 system may consist of a rotation-powered pulsar orbiting the Be star. In this case, the X-ray emission could be magnetospheric emission similar to the power-law component in the X-ray emission of known X-ray/gamma-ray pulsars. The pulsation frequency we have found and the X-ray luminosity of the source are similar to those of the known X-ray/gamma-ray pulsars. The strong X-ray variability of SAX J0635$`+`$0533 on time scales of 1000 s is, however, unlike the steady X-ray emission seen from isolated pulsars; it could nevertheless be produced by variable X-ray absorption caused by matter in the binary system or by a wind from the Be star. An alternative, but still rotation-powered, scenario is that SAX J0635$`+`$0533 is similar to the Be radio pulsar PSR J1259–63. These two sources have similar spin frequencies and similar X-ray spectra (Nicastro et al. (1998)). In this case, the X-ray emission of SAX J0635$`+`$0533 would arise from the shock interaction of energetic particles from the pulsar with the wind from the Be star. However, the upper bound of 8% on the X-ray pulsed fraction of PSR J1259–63 in the 2–10 keV band (Kaspi et al. 1995) is well below the value estimated for SAX J0635$`+`$0533 in the same energy band. Finally, the X-ray emission from SAX J0635$`+`$0533 may be powered by accretion. Strong X-ray variability would occur naturally in such a system. Under this interpretation we can infer the magnetic field strength of the neutron star.
For accretion to proceed, the centrifugal force on the accreting matter co-rotating in the magnetosphere must be less than the local gravitational force (Illarionov & Sunyaev (1975); Stella, White, & Rosner (1986)). Assuming a bolometric luminosity of $`1.2\times 10^{35}\mathrm{erg}\mathrm{s}^{-1}`$ (0.1–40 keV), estimated from the spectral results given in Kaaret et al. (1999) for a 5 kpc distance, and a neutron star mass of $`1.4\mathrm{M}_{\odot }`$ and radius of 10 km, we can set an upper limit on the magnetic field strength of $`2\times 10^9\mathrm{G}`$. This is a factor of $`10^3`$ lower than the fields measured in typical accreting X-ray pulsars, but similar to those inferred for the 2.49 ms low-mass X-ray binary SAX J1808.4$`-`$3658 (Wijnands & van der Klis (1998)) and for millisecond radio pulsars. The X-ray luminosity of SAX J0635$`+`$0533 is a factor of 10 below that of most Be/X-ray binaries and below the peak luminosity of SAX J1808.4$`-`$3658, but this may simply indicate a low mass accretion rate. A definitive association of SAX J0635+0533 with the EGRET source requires the detection of a periodicity in gamma-rays at the pulsar spin period. Due to the long integration time required to obtain a detectable gamma-ray signal, only a priori knowledge of the binary parameters would permit a sensitive search for such a periodicity; these parameters can be obtained with additional X-ray observations of SAX J0635$`+`$0533. We wish to thank Enrico Massaro (University of Rome) and Ignacio Negueruela (BeppoSAX SDC) for scientific discussions, and Guido Vizzini (IFCAI) for technical support in the data reduction.
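The field limit follows from requiring the magnetospheric radius to lie inside the corotation radius. A sketch of the estimate in cgs units (the order-unity coefficients in the Alfvén radius and the dipole-moment-to-surface-field relation are conventional choices, so the result is an order-of-magnitude bound rather than the exact published value):

```python
import math

G = 6.674e-8                  # gravitational constant [cgs]
M = 1.4 * 1.989e33            # neutron star mass [g]
R = 1.0e6                     # neutron star radius [cm]
L = 1.2e35                    # bolometric luminosity [erg/s]
P = 1.0 / 29.5364             # spin period [s]

mdot = L * R / (G * M)        # accretion rate from L = G*M*mdot/R
# Corotation radius, where the Keplerian and spin angular velocities match:
r_co = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
# Accretion requires the Alfven radius r_m = (mu^4 / (2*G*M*mdot^2))^(1/7)
# to satisfy r_m < r_co; the limiting dipole moment is therefore:
mu_max = (2.0 * G * M * mdot**2 * r_co**7) ** 0.25
B_max = 2.0 * mu_max / R**3   # polar surface field [G], of order 1e9 G
```

This reproduces the $`\sim 10^9`$ G scale quoted in the text.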
# Thermodynamics of a Tiling Model

## 1 Introduction to the model

In this work we propose a model that shows non-trivial thermodynamic behaviour due to its particular geometric structure. This two-dimensional model is built from square tiles, called Wang tiles, on a square lattice. The edges of these tiles can be of six different 'colours'. Of all the possible types ($`6^4`$) that can be created by varying the colours, only a particular group of sixteen, found by Ammann , is taken into account (see figure 1). This is one of the minimal sets of Wang tiles for which tilings exist and are all non-periodic. A tiling is a configuration of tiles placed edge-to-edge on the plane, such that all contiguous edges have the same colour. If at least one tiling is allowed and none of the possible tilings shows a periodic pattern, then the set of tiles composing them is called aperiodic. Other aperiodic sets of tiles are often used as models for quasi-crystalline materials , but this is not the case here. The Wang tiles were the first aperiodic tiles to be discovered, in 1966 by Berger . They were, and still are, important because of their use in problems of mathematical logic . The use we make of them in this work is, however, far from that point of view: we look at the behaviour of a system built from Wang tiles in a thermal bath, defining thermodynamic observables on these tilings. The main motivation for studying this model is the high degeneracy of the perfectly matched configurations and their non-trivial aperiodic structure. Putting the system in a thermal bath also allows tiles to occupy positions where the edges of neighbouring tiles do not match, forming in this way an unmatched tiling. We assign to each configuration an energy equal to the number of links whose facing edges have different colours.
The energy of the exactly matched configurations, which from now on we will call ground states, is then zero. If we label the type of tile at position $`(x,y)`$ in the plane by $`T_{x,y}`$ and the types of its four edges towards South, East, North and West by $`T_{x,y}^{(S)}`$, $`T_{x,y}^{(E)}`$, $`T_{x,y}^{(N)}`$ and $`T_{x,y}^{(W)}`$, we can write a Hamiltonian for this system in the following way: $$\mathcal{H}=\underset{(x,y)}{\overset{1,L-1}{\sum }}\left(1-\delta \left(T_{x,y}^{(E)}-T_{x+1,y}^{(W)}\right)\right)+\underset{(x,y)}{\overset{1,L-1}{\sum }}\left(1-\delta \left(T_{x,y}^{(N)}-T_{x,y+1}^{(S)}\right)\right).$$ (1) $`L`$ is the number of tiles of the lattice in one direction, and $`\delta (z)`$ is the Kronecker delta.

## 2 Equilibrium analysis and critical behaviour

The numerical simulations performed to study the equilibrium characteristics used the parallel tempering algorithm . We have chosen open boundary conditions on the two-dimensional lattice: with periodic boundary conditions, the non-existence of periodic tilings would have implied a non-zero ground-state energy. With open boundary conditions the ground-state energy is, by the definition (1), equal to zero; equivalently, all the tiles are perfectly matched edge to edge, forming some non-periodic structure.

### 2.1 Phase transition

For every system we have computed the energy and the specific heat at different temperatures. Numerical simulations have been carried out on square lattices of linear size ranging from 8 to 32. Every equilibrium simulation has been run for between 10 million and 100 million MC steps, depending on the size of the lattice. A temperature range between 0 and 5 has been explored, then concentrating on small intervals in what turns out to be the critical region, around $`T=0.4`$.
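The Hamiltonian (1) is straightforward to evaluate for a given configuration. A minimal sketch, in which each tile is represented as a (S, E, N, W) tuple of edge colours (the actual Ammann colourings of figure 1 are not listed in the text, so the tiles used below are illustrative):

```python
def tiling_energy(grid):
    """Mismatch energy of eq. (1) for an L x L grid of tiles with open
    boundary conditions; each tile is a (S, E, N, W) colour tuple."""
    S, E, N, W = 0, 1, 2, 3
    L = len(grid)
    energy = 0
    for x in range(L):
        for y in range(L):
            if x + 1 < L and grid[x][y][E] != grid[x + 1][y][W]:
                energy += 1  # unmatched East-West link
            if y + 1 < L and grid[x][y][N] != grid[x][y + 1][S]:
                energy += 1  # unmatched North-South link
    return energy
```

A Metropolis or parallel-tempering move would then accept a proposed tile change with probability min(1, exp(−ΔE/T)) computed from this energy.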
We checked that at zero temperature the energy goes to zero and that the specific heat computed from the derivative of the energy coincides with the one computed from the energy fluctuations; these are checks of correct thermalization. The specific heat presents a smeared but clear change around the temperature $`0.4`$ (see figure 2), from a lower value at lower temperatures to a higher value at higher ones. Increasing the size of the system, we observe that the crossing points between two curves of different sizes move to the right and that the slope of the $`C_L(T)`$ line in this temperature interval increases with $`L`$, but we also see that a peak grows around $`T=0.42`$. Performing a finite-size scaling (FSS) analysis of the crossing points of the specific heat curves, we find that their abscissae tend to $`T_{cross}(\mathrm{\infty })=0.398\pm 0.007`$ as $`L\to \mathrm{\infty }`$, with an exponent $`\nu =1.6\pm 0.5`$, and that the behaviour of the slope with increasing size is compatible with a divergence at that temperature. In this case we would have a jump at $`T_c=T_{cross}(\mathrm{\infty })`$, which corresponds to a critical exponent $`\alpha `$ equal to zero. At the same time, however, the fit of the peak height is consistent with a divergence at $`T_c=0.42\pm 0.01`$. Knowing that at criticality $`C\sim |T-T_c|^{-\alpha }`$ and $`\xi \sim |T-T_c|^{-\nu }\sim L`$, we get $`C\sim L^{\alpha /\nu }`$. Our data are consistent both with a power-law divergence ($`\alpha /\nu =0.35\pm 0.01`$) and with a logarithmic one ($`\alpha =0`$). A similar value of the critical temperature was found, under this latter hypothesis, by Janowsky and Koch . In any case there is evidence for a second-order phase transition.

### 2.2 Order parameter

The Wang tiles satisfy a particular property: if all the tiles of the two contiguous west and south sides of the square lattice (or equally of the north and east sides) are fixed, then at most one aperiodic tiling can be formed.
This is due to the fact that each tile of the Wang set has a different combination of west–south (or north–east) colours. Thus at most one of them can be placed in the bottom-left (or top-right) corner. The same holds for the tiles to be placed in the two corners created by the first tile: if any fits at all, only one choice is available. Of course this is valid at $`T=0`$ in our system. This property implies that the ground-state degeneracy cannot increase faster than $`16^{2L-1}`$, and therefore the entropy density is zero at zero temperature: $`s_L(0)\le a/L`$, where $`a<8\mathrm{log}2`$. Since only a few north–south or west–east combinations are allowed, the actual constant in the entropy is smaller than $`8\mathrm{log}2`$ (a stricter upper bound of $`a=\mathrm{log}12=3.585\mathrm{log}2`$ is easily obtained). In order to identify an order parameter we have to break this degeneracy. We have therefore simulated the parallel evolution of two copies of the system, with the following procedure: one system reaches equilibrium, then a copy of the equilibrium configuration is made and this second copy is evolved. One fundamental constraint is imposed: the boundary tiles on the south and east sides of the lattice stay unchanged during the second copy's dynamics towards equilibrium. For $`T>0`$ the equilibrium states are no longer the exactly matched tilings: because of thermal noise, some pairs of neighbouring tile edges can be unmatched. Furthermore, there is no longer a unique tiling minimizing the energy, so the second copy can evolve to a different configuration. We are interested in what happens when the temperature is increased from below to above the critical temperature, and in how the behaviour depends on the size of the system.
With this aim we introduce an overlap that depends on the distance $`l`$ of the tiles from the two contiguous boundary sides that are fixed once the first copy has reached equilibrium: $$q(l)=\frac{1}{2(L-l)+1}\left(\underset{y\ge l,x=l}{\sum }+\underset{x\ge l,y=l}{\sum }\right)\overline{\left\langle \delta \left(T_{x,y}^{(1)}-T_{x,y}^{(2)}\right)\right\rangle }$$ (2) Here the average $`\overline{(\mathrm{})}`$ is performed over different realizations of the configurations, $`\langle \mathrm{}\rangle `$ is the time average at equilibrium, and $`l=1,\mathrm{\dots },L`$. The overlap $`q(l)`$ is computed along the diagonal, starting from the vertex shared by the two fixed contiguous sides, $`q(1)`$, and ending at the opposite vertex of the lattice, $`q(L)`$. In order to gain statistics, the overlap is built by averaging over the $`2(L-l)+1`$ elements in the row of ordinate $`l`$ with $`x\ge l`$ and in the column of abscissa $`l`$ with $`y\ge l`$, and assigning this average value to the 'diagonal' function $`q(l)`$. The statistical sample is built by repeating the simulation several times with different initial configurations, in order to obtain different equilibrium configurations (we always checked whether the same equilibrium configuration appeared more than once starting from different initial configurations, which would bias the statistical sample, but this never happened). Every size has been simulated for at least 100 different initial configurations. At zero temperature the overlap is always one. At higher temperatures, but below the phase transition, it goes to a constant less than one (far enough from the boundary at $`L`$, where finite-size effects are overwhelming). Above the transition point $`q(l)`$ decays very rapidly, compatibly with an exponential, to the lowest possible value, corresponding to completely uncorrelated copies (the hot, disordered phase). Since there are sixteen tile types and the probability distribution of the tile types is uniform, this lowest value is equal to $`1/16`$.
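The diagonal overlap of eq. (2) can be computed directly. A sketch using 0-based indices, so that the hook at depth $`l`$ contains $`2(L-1-l)+1`$ sites (illustrative code, not the production analysis):

```python
def diagonal_overlap(t1, t2):
    """q(l) of eq. (2): fraction of identical tiles on the hook
    {x = l, y >= l} union {y = l, x >= l} of two L x L configurations."""
    L = len(t1)
    q = []
    for l in range(L):
        same = sum(t1[l][y] == t2[l][y] for y in range(l, L))       # column x = l
        same += sum(t1[x][l] == t2[x][l] for x in range(l + 1, L))  # row y = l
        q.append(same / (2 * (L - 1 - l) + 1))
    return q
```

Identical copies give $`q(l)=1`$ for every $`l`$, while fully decorrelated configurations give the base value set by the number of tile types.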
In figure 3 we show the behaviour of $`q(l)`$ at different temperatures, both above and below the critical one, for different sizes. From these figures we can see that $`q(l)`$ seems to approach a plateau for $`l\sim L/2`$ at temperatures below $`T\sim 0.42`$; this is more evident at larger sizes. We can compute a $`q(T)`$ for every size by averaging over the plateau values of $`q(l)`$. The plateau is in each case identified by taking a small window around $`L/2`$ and then enlarging it as long as the plateau value stays constant; as $`l`$ grows further, finite-size effects destroy the plateau. We observe that this parameter changes on approaching $`T_c`$: for $`T<T_c`$ it increases from the value of $`1/16`$ that it takes at high temperature towards the value of 1 that it reaches at zero temperature. The profiles of $`q_L(T)`$ are plotted in figure 4. The probability distribution of the overlap values at the plateau has a non-trivial shape as soon as $`T<T_c`$ (figure 5). This is not so astonishing, since by fixing the boundary conditions we have broken the degeneracy of the equilibrium states. (We stress that, due to the overlap's definition, the distribution does not show any symmetry $`q\to -q`$.) The form of the $`P(q)`$, averaged over different realizations of the tile configurations on two contiguous edges of the lattice, does not depend strongly on the size of the samples, at least for the simulated cases. The probability distributions of the overlap for different realizations of the equilibrium boundary conditions on the bottom and left edges of the tiling are shown in figure 6.

## 3 Off-equilibrium analysis

In order to study systems with very slow relaxation, with relaxation times much greater than the experimental times, it is very important to look at the off-equilibrium dynamics.
This approach describes glasses, spin glasses and, in general, any system with many equilibrium states separated by high free-energy barriers more realistically than an equilibrium approach, since equilibrium is in practice never reached during experiments. To keep the sample out of equilibrium during a numerical simulation, the typical distance $`\xi (t)`$ over which the system has equilibrated at a given time must always remain smaller than the linear size $`L`$ of the lattice. This distance is also called the dynamical correlation length. In practice, in order to be sure of avoiding thermalization, we choose sizes larger than those used in the static analysis: the larger the size, the longer the time needed for the correlation length to reach the size of the system. Moreover, in this case we used a standard Monte Carlo algorithm instead of parallel tempering, so the thermalization times increase considerably below the transition point. In order to study the response of the system to an external field, we embed the system in a perturbative field, namely a field directed along one of the sixteen possible directions in 'tile-type space', so that a particular tile of the set of sixteen can be favoured (positive field) or disfavoured (negative field). This can be a uniform or non-uniform field. To avoid any preference towards one particular type of tile, we have chosen a non-uniform random field whose value at each site is independent of the other sites. Thus we add to the Hamiltonian (1) the perturbative term: $$\underset{(x,y)}{\sum }h_{x,y}^{sign}\left(1-\delta \left(T_{x,y}-h_{x,y}^{type}\right)\right).$$ (3) where $`h_{x,y}=h_{x,y}^{type}h_{x,y}^{sign}`$ is the random external field pointing along one of the tile types or opposite to it.
$`h_{x,y}^{type}`$ is drawn uniformly from the sixteen possible choices of tile type, and $`h_{x,y}^{sign}`$ gives the magnitude of the field and its sign (randomly positive or negative, again according to a uniform distribution). The probability distribution can then be written as: $$𝒫(h_{x,y})=\frac{1}{16}\underset{n}{\overset{0,15}{\sum }}\delta \left(h_{x,y}^{type}-e^{i\pi \frac{n}{16}}\right)\times \frac{1}{2}\left(\delta (h_{x,y}^{sign}-h_o)+\delta (h_{x,y}^{sign}+h_o)\right)$$ (4) with $`\overline{h_{x,y}}=0`$ and $`\overline{h_{x,y}^2}=h_o^2`$. Here $`n=0,\mathrm{\dots },15`$ gives the 'direction' of the field in a representation in the complex plane (see figure 7). Once the system has been cooled from high temperature, it is left to evolve until a certain time $`t_w`$, usually called the waiting time. At $`t=t_w`$ the field is switched on and we begin recording the values of the temporal correlation function $`C(t,t_w)`$ and of the integrated response function $`m(t,t_w)`$. For our model they are defined as follows: $$C(t,t_w)=\frac{1}{L^2}\underset{x,y}{\sum }\delta (T_{x,y}(t)-T_{x,y}(t_w)),$$ (5) $$m(t,t_w;h)=\frac{1}{L^2}\underset{x,y}{\sum }\frac{\overline{h_{x,y}^{sign}(t_w)\delta (T_{x,y}(t)-h_{x,y}^{type}(t_w))}}{h_o}$$ (6) and the susceptibility is $$\chi (t,t_w)=\underset{h_o\to 0}{lim}\frac{m(t,t_w;h)}{h_o}.$$ (7) For numerical computation it becomes: $$\chi (t,t_w)\approx \frac{m(t,t_w;h)}{h_o}.$$ (8) Here $`\langle \mathrm{}\rangle `$ is the average over different dynamical processes and $`\overline{(\mathrm{})}`$ the average over the random realizations of the perturbative external field. For a system at equilibrium the fluctuation–dissipation theorem (FDT) holds: $$\chi (t-t_w)=\frac{1-C(t-t_w)}{T}.$$ (9) where we have made explicit use of the fact that the correlation function is defined in such a way that $`C(t_w,t_w)=C(0)=1`$. Out of equilibrium this theorem is no longer valid. Nevertheless, a generalization is possible , at least in the early times of the dynamics.
This generalization is made by introducing a multiplicative factor $`X(t,t_w)`$ depending on two times, such that : $$\chi (t,t_w)=\frac{X(t,t_w)}{T}\left(1-C(t,t_w)\right)$$ (10) In a certain regime, called the aging regime, $`X(t,t_w)<1`$ and the FDT is violated. An important assumption is that this modified coefficient $`X(t,t_w)/T`$ depends on $`t`$ and $`t_w`$ only through $`C(t,t_w)`$. Its inverse is also called the effective temperature $`T_e\equiv T/X\left[C(t,t_w)\right]`$, since in this regime, on a given time scale, the system seems to behave like one in equilibrium at a temperature different from the heat-bath temperature. In terms of susceptibility and correlation functions we can write a general functional dependence: $$\chi (t,t_w)=\frac{1}{T}S\left[C(t,t_w)\right]$$ (11) The $`S[C]`$ defined here would be $`1-C`$ if we were at equilibrium. From our probe we find that there is first a regime where the relation is linear, with slope given by the inverse heat-bath temperature ($`X=1`$). This early regime is sometimes called stationary, since the observables computed at very short times (compared with $`t_w`$) do not depend on the age of the system; during this regime the system relaxes quickly towards a local minimum. After this first stage $`S[C]`$ bends, and the coefficient $`X(t,t_w)/T`$ is no longer the inverse heat-bath temperature: $`X(t,t_w)`$ is now less than one, as if the system were at an effective temperature higher than that of the heat bath. It also seems to change continuously as the system evolves, until it reaches zero (figures 8–10). The value of the autocorrelation function at which $`S[C]`$ leaves the $`1-C`$ line, equal to the average overlap value $`q`$ of the statics, increases continuously with decreasing temperature, as already observed in the static analysis. The $`S[C]`$ shows no strong dependence on the magnitude of the field, at least for the values that we have used to perturb the system.
Furthermore, the system shows only a slight dependence on the size (measured for $`L=64,128,256`$; fig. 9). From figure 10 we can see that $`S[C]`$ moves towards an asymptotic curve as $`t_w`$ increases. The initial difference between $`1-C`$ and $`S[C]`$ is due to lack of statistics: if $`N_{fr}`$ is the number of field realizations performed in the simulation, there is a difference of order $`N_{fr}/L^2`$ between the value of the integrated response function in the thermodynamic limit and its value at finite size. We can also look at the link with the static analysis. As in the case of mean-field spin glasses we can suppose, following , that for $`t,t_w\to \mathrm{\infty }`$, $`C(t,t_w)\to q`$ and $`X[C(t,t_w)]\to x(q)`$, where $`x(q)`$ is the cumulative distribution of $`P(q)`$: $$x(q)=\int _0^q𝑑q^{\prime }P(q^{\prime })$$ (12) If this link is valid, we can connect the susceptibility multiplied by the heat-bath temperature at a given correlation value ($`S[C]`$) to the integral $`\int _C^1𝑑qx(q)`$. In our case there is agreement between the dynamic data and the values of this integral computed from the static data. The two approaches are consistent if the external field is not too large, so as to avoid the non-linear effects neglected in the derivation of formula (6). The field cannot be too small either, though: it must move the system out of the metastable state reached during $`t_w`$. The lower the temperature to which we cool the system, the larger the external field necessary to make it explore other parts of the space of states. A probe at too low a temperature is therefore not practicable, either because non-linear effects interfere heavily or because the system does not reach the aging regime. Two examples of the agreement between dynamic and static data are shown in figure 10.

## 4 Conclusions

We have studied the thermodynamics of a tiling model built from Wang tiles.
For this system we have found evidence of a phase transition from a completely disordered phase, in which the tiles on the plane are completely uncorrelated with one another, to a phase in which they begin to display an organized, albeit very complicated, structure. For $`T\to 0`$ this structure becomes an exactly matched tiling. In order to characterize the phase of the system we have determined an order parameter with a non-zero mean value below $`T_c`$: the overlap between two tilings built with the same boundary conditions on the lower and left sides of the lattice. Our data hint that it could have a non-trivial probability distribution below the phase transition. Below the critical point the tiling system shows the aging phenomenon: the response of the system to an external perturbation and the time autocorrelation function depend on the history of the system. This leads to a violation of the fluctuation–dissipation theorem for $`T<T_c`$. The integrated response function times the temperature ($`S[C]`$) bends progressively as the system leaves the stationary regime, until it reaches a constant value ($`X=0`$), and the dynamically and statically determined $`q`$ values increase continuously with decreasing temperature. From this behaviour it is not clear whether the model belongs to the class of systems showing domain growth or whether it is more similar to a spin glass in a magnetic field. Indeed, for very long times (small values of the correlation function) the fluctuation–dissipation ratio eventually goes to zero, and it cannot be excluded that the dynamics evolves through domain growth , even though in our case the nature of the tile domains remains to be understood theoretically. Nevertheless, for a very large interval of time, i.e.
of values of $`C`$ in the plot of figure 10, the $`S(C)`$ is continuously bending ($`0<X<1`$) as in a spin glass model , especially in the regions $`0.3<C<0.6`$, for $`T=0.35`$, and $`0.4<C<0.7`$, for $`T=0.3`$, as shown in figure 10. For values of $`C`$ smaller than these, the $`S(C)`$ curves flatten, but the predictions obtained from the static behaviour (the $`P(q)`$ are shown in figure 5) are also nearly flat, making it quite difficult to distinguish between domain-growth dynamics, where the response function is constant in the aging regime, and a more complicated behaviour in which even the response function shows aging. Finally, we show that our data are consistent with the equivalence between the equilibrium function $`\int _C^1𝑑qx(q)`$ and the dynamically determined $`S[C]`$ as $`t,t_w\to \mathrm{\infty }`$. Acknowledgments: We warmly thank J. Kurchan for having stimulated this work and for his advice.
# Viscous Boundary Layer Damping of R-Modes in Neutron Stars ## 1. Introduction The recently discovered r-mode instability (Andersson 1998; Friedman & Morsink 1998) might play a significant role in setting the spin frequencies ($`\nu _s`$) of rapidly rotating neutron stars (Lindblom et al. 1998; Owen et al. 1998; Andersson et al. 1999a; Bildsten 1998; Andersson et al. 1999b). The unstable regime for the r-modes in the neutron star (NS) core depends on the competition between the gravitational radiation excitation and viscous dissipation. Lindblom et al. (1998) and Andersson et al. (1999a) computed the dissipation due to shear and bulk viscosities in normal fluids (not superfluids) and found that gravitational excitation exceeds viscous damping in all but the most slowly rotating stars. Lindblom & Mendell (1999) showed that superfluid mutual friction is also not competitive with gravitational radiation unless the superfluid entrainment parameter assumes a very special value. Hence, on theoretical grounds, r-modes are expected to be unstable over much of the parameter space occupied by newborn NSs, NSs in low-mass X-ray binaries (LMXBs), and millisecond radio pulsars (MSPs) during spin-up. However, observations pose challenges to this theoretical picture. The existence of two $`1.6`$ ms radio pulsars means that rapidly rotating NSs are formed in spite of the r-mode instability (Andersson et al. 1999b). While it is not clear that their current core temperatures place these MSPs inside the r-mode instability region, current theory says that they were certainly unstable during spin-up. For accreting NSs in LMXBs, Bildsten (1998) and Andersson et al. (1999b) conjectured that the gravitational radiation from an r-mode with a small constant amplitude could balance the accretion torque. 
This would possibly explain the preponderance of $`\sim 300`$ Hz spins among LMXBs (van der Klis 1999), which would otherwise have been spun up to $`>1000\mathrm{Hz}`$ during their $`10^9`$ yr lifetime (van Paradijs & White 1995). However, for normal fluid cores, Levin (1999) showed that the temperature dependence of the shear viscosity makes this spin equilibrium thermally unstable, leading to a limit-cycle behavior of rapid spin-downs after prolonged periods of spin-up. For superfluid cores, Brown & Ushomirsky (1999) showed that such constant-amplitude spin-up is inconsistent with the quiescent luminosities of several known LMXBs. All but the hottest NSs have solid crusts that occupy the outer $`\sim 1`$ km of the star. In this paper, we calculate a previously overlooked source of dissipation: the viscous boundary layer between the oscillating fluid core and the static crust. The shear viscosity of NS matter is relatively small and does not affect the r-mode structure in the stellar interior. However, the transverse fluid motions of the r-modes are very large at the crust–core boundary and must, of course, go to zero relative velocity at the crust. The dissipation in the resulting viscous boundary layer (hereafter referred to as the VBL) substantially shortens (typically by a factor $`\sim 10^5`$) the r-mode damping times. Previous work (Lindblom et al. 1998, 1999; Andersson et al. 1999a) included damping from the shear viscosity acting only on the gradient of the transverse velocity of the mode in the interior of the star (on the length scale of the stellar radius, $`R`$). The shear in the VBL acts on a much shorter length scale and is hence stronger. This new source of dissipation raises the minimum frequency for the r-mode instability in NSs with a crust to $`\sim 500`$ Hz for $`T\sim 10^{10}`$ K, and to even higher frequencies at lower temperatures.
It thus alleviates the conflict between the accretion-driven spin-up scenario for MSPs and the r-mode instability and significantly alters the spin-down scenario proposed by Owen et al. (1998) for newborn NSs. ## 2. The Viscous Boundary Layer The transverse fluid motions of an r-mode cause a time-dependent “rubbing” of the core fluid against the otherwise co-rotating crust spinning at $`\mathrm{\Omega }=2\pi \nu _s`$. In previous works, the boundary condition applied at this location allowed the fluid to have large-amplitude transverse motion. Neglecting viscosity is an excellent assumption far away from the crust-core interface. However, there can be no relative motion at the boundary for a viscous fluid, leading to a VBL where the transverse velocity drops from a large value to zero (the “no-slip” condition). The VBL is mediated by the kinematic viscosity, $`\nu `$, in the fluid just beneath the base of the crust. The density there has most recently been estimated as $`\rho _\mathrm{b}\approx 1.5\times 10^{14}\mathrm{g}\mathrm{cm}^{-3}`$ (Pethick et al. 1995). We use the values for $`\nu `$ as found by Flowers & Itoh (1979) and fit by Cutler & Lindblom (1987), $$\nu =1.8\times 10^4\mathrm{cm}^2\mathrm{s}^{-1}\frac{f}{T_8^2}.$$ (1) In this fit, three different cases are handled with the parameter $`f`$. When both neutrons and protons are normal, the predominant scatterers are neutrons, and $`f=(\rho /\rho _\mathrm{b})^{5/4}`$. If neutrons are superfluid and protons normal, the viscosity is mediated by electron-proton scattering, and $`f=1/15`$. Finally, when both protons and neutrons are superfluid, electron-electron scattering yields $`f=5(\rho /\rho _\mathrm{b})`$. We begin by assuming that the crust is infinitely rigid, and hence the waves do not penetrate into the crust. 
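The three cases of the fit in eq. (1) can be evaluated numerically. The following is a minimal sketch (our own function name and interface, not code from the paper), assuming the fiducial density $`\rho =\rho _\mathrm{b}`$ unless overridden:

```python
# Hedged sketch: evaluate the Flowers & Itoh / Cutler & Lindblom
# shear-viscosity fit, eq. (1): nu = 1.8e4 cm^2/s * f / T_8^2,
# for the three pairing cases described in the text.

def kinematic_viscosity(T8, case="normal", rho_ratio=1.0):
    """Kinematic viscosity in cm^2/s just below the crust.

    T8        -- temperature in units of 10^8 K
    case      -- 'normal', 'n_superfluid', or 'both_superfluid'
    rho_ratio -- rho / rho_b
    """
    if case == "normal":             # neutron-neutron scattering
        f = rho_ratio ** (5.0 / 4.0)
    elif case == "n_superfluid":     # electron-proton scattering
        f = 1.0 / 15.0
    elif case == "both_superfluid":  # electron-electron scattering
        f = 5.0 * rho_ratio
    else:
        raise ValueError(case)
    return 1.8e4 * f / T8 ** 2

for case in ("normal", "n_superfluid", "both_superfluid"):
    print(case, kinematic_viscosity(1.0, case))
```

Note that the superfluid cases bracket the normal-fluid value, which is why the instability curves below carry a shaded band rather than a single line.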
In an oscillating flow, the thickness of the VBL, $`\delta `$, is found by setting the oscillation frequency of the r-mode in the rotating frame, $`\omega =2m\mathrm{\Omega }/l(l+1)=2\mathrm{\Omega }/3`$ ($`l=m=2`$), to the inverse of the time it takes vorticity to diffuse across its width, i.e. $`\delta ^2/\nu \sim \omega ^{-1}`$. In the plane-parallel case, Landau & Lifshitz (1959) show that the correction to the velocity in the VBL of an incompressible oscillatory flow falls off exponentially with a length $$\delta =\left(\frac{2\nu }{\omega }\right)^{1/2}\approx 3\mathrm{cm}\frac{f^{1/2}}{T_8}\left(\frac{1\mathrm{kHz}}{\nu _s}\right)^{1/2},$$ (2) much smaller than $`R`$. This estimate neglects the Coriolis force. When the VBL is determined by the balance between the Coriolis force, $`2\mathrm{\Omega }v`$, and viscosity, $`\nu v/\delta ^2`$ (the Ekman problem), then the thickness is $`\delta \sim (\nu /\mathrm{\Omega })^{1/2}`$, roughly the same as eq. (2). Hence, while the Coriolis force will change the angular dependence of the velocity in the VBL, it is unlikely to change its thickness. The viscous dissipation can be handled quite simply for this thin VBL. Since all of the kinetic energy there is damped in one cycle, the $`Q`$ of the oscillation is roughly just the ratio of the total volume to that in the VBL, or $`Q\sim R/3\delta \sim 10^5T_8(\nu _s/1\mathrm{kHz})^{1/2}/f^{1/2}`$. The damping time is then a few hundred seconds, thus competitive with the growth time from gravitational wave emission, $`\tau _{\mathrm{gw}}=146\mathrm{s}M_{1.4}^3R_6^{-9}(\nu _s/1\mathrm{kHz})^{-6}`$, where $`M_{1.4}=M/1.4M_{}`$ and $`R_6=R/10\mathrm{km}`$, of the $`l=m=2`$ r-mode (Owen et al. 1998; Lindblom et al. 1998). 
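The order-of-magnitude chain above (eq. (2) for $`\delta `$, then $`Q\sim R/3\delta `$) can be checked numerically. A sketch under the assumptions $`f=1`$, $`T_8=1`$, and $`R=10`$ km; the function name is ours:

```python
# Order-of-magnitude check of eq. (2) and the Q ~ R/(3*delta) estimate.
import math

def vbl_thickness(nu_visc, spin_hz):
    """delta = (2*nu/omega)^(1/2), with omega = (2/3)*Omega for the
    l = m = 2 r-mode in the rotating frame."""
    omega = (2.0 / 3.0) * 2.0 * math.pi * spin_hz
    return math.sqrt(2.0 * nu_visc / omega)

nu_visc = 1.8e4          # cm^2/s: eq. (1) with f = 1, T_8 = 1
R = 1.0e6                # cm (10 km assumed)
delta = vbl_thickness(nu_visc, 1000.0)
Q = R / (3.0 * delta)
print(f"delta ~ {delta:.1f} cm, Q ~ {Q:.1e}")  # ~3 cm and ~1e5, as in the text
```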
To calculate the damping more accurately, we use the dissipation per unit area from Landau & Lifshitz (1959) and integrate over the surface area at the crust-core boundary $$\frac{dE}{dt}=\int \frac{\rho v^2}{2}\left(\frac{\omega \nu }{2}\right)^{1/2}R^2\mathrm{sin}\theta d\theta d\varphi .$$ (3) The fluid velocity is given by $`\stackrel{}{v}=(\alpha \mathrm{\Omega }R)\stackrel{}{Y}_{lm}^Be^{i\omega t}`$, and $`E=1.64\times 10^{-2}M(\alpha \mathrm{\Omega }R)^2/2`$ is the mode energy (Owen et al. 1998). Integrating and using the mode frequency in the rotating frame, we find the damping time due to rubbing $$\tau _{\mathrm{rub}}=\frac{2E}{dE/dt}\approx 100\mathrm{s}\frac{M_{1.4}T_8}{R_6^2f^{1/2}}\left(\frac{\rho _\mathrm{b}}{\rho }\right)\left(\frac{1\mathrm{kHz}}{\nu _s}\right)^{1/2},$$ where $`\rho `$ is the density at the crust-core boundary. So far, we have assumed that the large-scale relative motion between the oscillating core and the crust is simply set by the toroidal motion of the r-mode. This is fine in the inviscid limit ($`\nu =0`$). The transverse displacement $`\xi _{}`$ at the crust-core boundary is then discontinuous, and the r-modes do not couple to the crust’s torsional modes, which have $`\stackrel{}{\xi }=\xi _{}\stackrel{}{Y}_{lm}^B`$. For $`\nu =0`$, r-modes can only couple to the crust via their $`𝒪(\mathrm{\Omega }^2)`$ radial motion. However, the VBL exerts a time-dependent shear stress, $`\sigma _r=(\omega \nu )^{1/2}\rho v`$ (Landau & Lifshitz 1959), that shakes the crust and can potentially couple the crust to the core. If this coupling can drive the amplitude of the crustal motion $`\xi _{}`$ to be comparable to the transverse motions in the r-modes ($`\xi _{}=3\alpha R/2`$ for $`l=m=2`$), then our picture of the core rubbing against a static crust would need to be reconsidered. 
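The scaling for $`\tau _{\mathrm{rub}}`$ can be packaged for quick evaluation. A hedged sketch in the fiducial units of the text (the function name and defaults are ours):

```python
# Sketch (not the authors' code): evaluate the tau_rub scaling derived
# in the text, tau_rub ~ 100 s * M_1.4 * T_8 / (R_6^2 f^(1/2))
#               * (rho_b/rho) * (1 kHz / nu_s)^(1/2).

def tau_rub(M14=1.0, R6=1.0, T8=1.0, f=1.0, rho_ratio=1.0, spin_khz=1.0):
    """Boundary-layer damping time in seconds; rho_ratio = rho/rho_b."""
    return 100.0 * M14 * T8 / (R6**2 * f**0.5) / rho_ratio / spin_khz**0.5

# Fiducial case: ~100 s, competitive with the ~146 s gravitational growth time.
print(tau_rub())
# A colder, more slowly spinning star is damped far inside the stable region.
print(tau_rub(T8=10.0, spin_khz=0.3))
```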
The energy in an r-mode oscillation is $`10^{51}\alpha ^2(\nu _s/1\mathrm{kHz})^2\mathrm{ergs}`$, while the typical energy in torsional oscillations of the crust is roughly $`10^{48}\alpha ^2\mathrm{ergs}`$ (McDermott et al. 1988). Hence, if the r-mode could couple efficiently to the crust, it would have adequate energy to drive crustal pulsations of comparable amplitude and potentially break the crust. We now estimate the magnitude of this coupling and show that, under most circumstances, it should be quite small. Let us look at a toy problem of a simple harmonic oscillator (i.e., the crust) with a mode frequency $`\omega _0`$ and weak damping $`\gamma \ll \omega _0`$ driven by a harmonic force per unit mass $`f_0e^{i\omega t}`$. Away from resonances with crustal pulsation modes, the response of the crust is then just $`\xi _{}\sim f_0/(\omega _0^2-\omega ^2)`$. Ignoring the Coriolis force, the fundamental $`l=2`$ torsional mode has a frequency $`\sim 40\mathrm{Hz}`$, while the overtones have frequencies $`\sim 500n\mathrm{Hz}`$, where $`n\geq 1`$ is the radial order of the overtone (McDermott et al. 1988). Damping times of these oscillations are on the order of days to decades (McDermott et al. 1988), so the resonances are narrow. The spacing between the crustal modes, $`\sim 500`$ Hz, is comparable to the r-mode frequency, so on average, $`\omega _0^2-\omega ^2\sim \omega ^2`$. The mass per unit area in the crust is $`M_{\mathrm{crust}}/4\pi R^2=\rho h`$, where $`h\approx 1`$ km is the scale height at the base of the crust. Therefore, $`f_0=\sigma _r/\rho h`$ and, in the likely situation that the r-mode frequency is not in resonance with a normal mode frequency of the crust, we find $`\xi _{}/R\sim \alpha (\delta /h)\sim 10^{-4}\alpha /T_8`$. This simple model (and a preliminary calculation) shows that the induced toroidal displacements in the crust are much smaller than the r-mode displacements. In other words, even though the r-mode has plenty of energy to drive crustal pulsations, it cannot effectively shake the crust. 
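The off-resonance estimate $`\xi _{}/R\sim \alpha (\delta /h)`$ can be reproduced from the boundary-layer thickness of eq. (2). A sketch assuming $`f=1`$ and a crustal scale height $`h=1`$ km (both values from the text; the interface is ours):

```python
# Toy estimate of the crust's off-resonance response to the VBL shear,
# xi_perp / R ~ alpha * (delta / h), with delta from eq. (2).
import math

def crust_response(alpha, T8, spin_hz=1000.0, f=1.0, h=1.0e5):
    """Fractional crustal displacement xi_perp / R (dimensionless)."""
    nu_visc = 1.8e4 * f / T8**2                    # eq. (1), cm^2/s
    omega = (2.0 / 3.0) * 2.0 * math.pi * spin_hz  # rotating-frame frequency
    delta = math.sqrt(2.0 * nu_visc / omega)       # eq. (2), cm
    return alpha * delta / h

print(crust_response(alpha=1.0, T8=1.0))  # ~3e-5, i.e. of order 1e-4 * alpha / T_8
```

Since this is orders of magnitude below the r-mode displacement $`\xi _{}=3\alpha R/2`$, the static-crust picture used for $`\tau _{\mathrm{rub}}`$ is self-consistent.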
Hence, neglecting the coupling should not invalidate our estimate of the damping time $`\tau _{\mathrm{rub}}`$. ## 3. The R-mode Instability We now find the region in $`(\nu _s,T)`$ space where a NS is unstable to the r-mode instability by balancing the viscous damping with the excitation due to gravitational radiation. When the crust is present, the r-mode is unstable for $`\tau _{\mathrm{rub}}<|\tau _{\mathrm{gw}}|`$, or $$\nu _s\gtrsim 1070\mathrm{Hz}\frac{M_{1.4}^{4/11}}{R_6^{14/11}}\frac{f^{1/11}}{T_8^{2/11}}\left(\frac{\rho }{\rho _\mathrm{b}}\right)^{2/11}.$$ (4) However, other viscous mechanisms in the core also contribute to the damping. In Figure 2 we show the critical spin frequency where $`1/\tau _{\mathrm{gw}}-1/\tau _{\mathrm{rub}}-1/\tau _{\mathrm{visc}}=0`$, where we use previously published calculations of shear (Lindblom et al. 1998) and bulk (Lindblom et al. 1999) viscous r-mode damping for the interior viscous damping time $`\tau _{\mathrm{visc}}`$. The solid line corresponds to the case where all nucleons are normal ($`f=1`$), while the shading around it represents the range in frequencies when either neutrons or all nucleons are superfluid. For comparison, the dashed line shows the instability curve that neglects the effect of the viscous boundary layer. Our curves presume that the crust is present for all temperatures less than $`2\times 10^{10}`$ K. For temperatures higher than that, the bulk viscosity is the dominant damping mechanism. The shaded box in Figure 2 indicates the region of the $`(\nu _s,T)`$ space where the observed LMXBs ($`250\mathrm{Hz}\lesssim \nu _s\lesssim 600\mathrm{Hz}`$, $`T\approx (1-8)\times 10^8`$ K; Brown & Bildsten 1998) reside. The arrows on the left-hand side of the plot show frequencies and the upper limits on the core temperatures for the two fastest MSPs. Clearly, the strength of the VBL dissipation means that r-modes are not excited in the accreting and colder ($`T\lesssim 10^9`$ K) NSs. 
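Eq. (4) is easy to evaluate for the temperatures of interest; a sketch with fiducial parameters (our own function, assuming $`f=1`$ and $`\rho =\rho _\mathrm{b}`$ by default):

```python
# Sketch: critical spin frequency from eq. (4), above which the
# boundary-layer damping loses to gravitational radiation.

def nu_crit(T8, M14=1.0, R6=1.0, f=1.0, rho_ratio=1.0):
    """Critical spin frequency in Hz; rho_ratio = rho/rho_b."""
    return (1070.0 * M14**(4.0 / 11.0) / R6**(14.0 / 11.0)
            * f**(1.0 / 11.0) / T8**(2.0 / 11.0) * rho_ratio**(2.0 / 11.0))

print(nu_crit(T8=1.0))    # ~1070 Hz at 1e8 K
print(nu_crit(T8=100.0))  # ~460 Hz at 1e10 K, the ~500 Hz quoted in the text
```

The very weak $`T_8^{-2/11}`$ and $`f^{1/11}`$ dependences are why the instability boundary with a crust is nearly flat across the LMXB temperature range.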
This result removes the discrepancy between the existence of $`1.6`$ ms ($`625`$ Hz) millisecond pulsars and the r-mode instability (Andersson et al. 1999b) and alleviates the disagreement between the observed and predicted quiescent luminosities of NS transients (Brown & Ushomirsky 1999). We now re-examine the thermogravitational runaway scenario for LMXBs (Levin 1999). The two closed loops in Figure 2 show the evolution of the spin and temperature of a NS accreting at $`10^{-8}M_{}`$ yr<sup>-1</sup> (i.e., the same as in Levin 1999), but including the VBL damping. The evolution is qualitatively the same. First, the star spins up until it reaches the instability line. The r-mode amplitude then grows until saturation (at amplitude $`\alpha _s`$), heating the star, and then spinning it down due to gravitational wave emission. The r-mode eventually stabilizes and the star cools down. For the chosen $`\alpha _s`$, the thermal runaway spins the star down to $`200-400`$ Hz. The final frequency is not sensitive to the initial temperature of the star, but only to the assumed $`\alpha _s`$. The duration of the active r-mode phase is $`<1`$ yr, while the duration of the cool-down at constant spin is $`10^5`$ yrs. The spinup from $`400`$ to $`1100`$ Hz takes roughly $`5\times 10^6`$ yrs. Therefore, in agreement with Levin (1999), we do not expect that any of the currently observed LMXBs are in the active r-mode phase. Moreover, this evolution cannot explain the clustering of LMXB spins around $`300`$ Hz (van der Klis 1999), since the duration of the cool-down at constant spin phase (horizontal leg in Figure 2) is much shorter than that of the spin-up phase (the vertical leg). Other mechanisms, such as mass quadrupole radiation from the crust (Bildsten 1998), can likely explain such clustering. What about newborn NSs? NSs born in supernovae are thought to rotate near breakup, and have initial temperatures $`T_i\sim 10^{11}`$ K. 
They quickly cool, roughly according to $`T_9=(t/\tau _c)^{-1/6}`$ for $`T\ll T_i`$, where $`\tau _c\sim 1`$ yr is the Urca cooling time at $`10^9`$ K (Shapiro & Teukolsky 1983). During the initial cooling stage, the r-mode instability line is as calculated previously (dashed line in Figure 2), and the spin-down evolution proceeds according to the scenario described by Owen et al. (1998). However, a solid crust forms when the NS has cooled to $`T_\mathrm{m}\sim 10^{10}`$ K (the exact temperature depends on the composition), after which we expect the evolution to be significantly altered. The dotted line in Figure 2 shows the evolution of $`\nu _s`$ and $`T`$ of a newborn NS, computed as in Owen et al. (1998), with initial spin $`\nu _{s,i}=1320`$ Hz, but including the VBL and the heating from it. The star cools from $`T_i=10^{11}`$ K and enters the instability region. The r-mode then grows, saturates at $`\alpha _s=1`$, and spins the star down. We find that the final spin frequency of the star is $`\nu _f=415`$ Hz including the VBL, rather than $`120`$ Hz (Owen et al. 1998). The spindown is also of shorter duration, lasting only $`10^4`$ s, rather than the $`1`$ yr of the original calculations of Owen et al. (1998). For shorter Urca cooling time $`\tau _c`$, the final frequency is somewhat higher. The effect of the heating due to the VBL on the cooling time is negligible. In our simulation, the r-mode amplitude grows only by a factor of 3 from its initial value by the time the star cools to $`T_\mathrm{m}`$, so we do not expect the presence of the r-mode to affect crust formation. In general, the rapid spindown phase begins when the Urca cooling time at the current temperature, $`T/(dT/dt)\propto T^{-6}`$, exceeds the gravitational wave spindown time, $`\mathrm{\Omega }/(d\mathrm{\Omega }/dt)\propto \alpha _s^{-2}`$. Therefore, unless $`T_\mathrm{m}`$ is much less than $`10^{10}`$ K, the crust will form while the r-mode amplitude is still rather small. 
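The cooling law quoted above can be inverted to give the time to reach a given temperature; a minimal sketch taking $`\tau _c=1`$ yr at face value:

```python
# Sketch of the Urca cooling law T_9 = (t/tau_c)^(-1/6), inverted:
# t = tau_c * T_9^(-6), with tau_c ~ 1 yr assumed.

YEAR = 3.156e7  # seconds

def cooling_time(T9, tau_c=YEAR):
    """Time (s) for a newborn NS to cool to temperature T9 (units of 1e9 K)."""
    return tau_c * T9 ** (-6.0)

# Crust formation at T_m ~ 1e10 K happens within the first minute of cooling:
print(cooling_time(10.0))  # ~30 s
# Cooling to 1e9 K takes ~1 yr, the timescale of the original Owen et al. spindown:
print(cooling_time(1.0))   # ~3e7 s
```

The steep $`T^{-6}`$ dependence is why the crust forms long before the r-mode amplitude has grown appreciably, so the VBL is in place for essentially the whole spin-down.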
The strain amplitude is a very shallow function of $`\nu _s`$ during the spin-down stage, $`h_c\propto \nu _s^{1/2}`$ (Owen et al. 1998). However, most signal-to-noise ($`S/N`$) in a gravitational wave detector is accumulated at low $`\nu _s`$, where the source spends most of its time. Hence, the total $`S/N`$ depends sensitively on the low-frequency cutoff of the spin-down evolution. Using the results of Owen et al. (1998), we find that raising the cutoff frequency to $`415`$ Hz from $`120`$ Hz reduces the $`S/N`$ from $`8`$ to $`2.5`$ for “enhanced” LIGO and a NS at a distance of $`20`$ Mpc, where the event rate is expected to be a few per year. ## 4. Conclusions and Future Work Our initial work on the viscous boundary layer between the oscillating fluid in the core and the co-rotating crust shows that the dissipation there is very large, making it the predominant r-mode damping mechanism when the crust is present. The smallest spin frequency which allows the r-mode to operate is about 500 Hz, nearly a factor of five higher than previous estimates. This substantially reduces the parameter space for the instability to operate, especially for older, colder NSs, such as those accreting in binaries and millisecond pulsars. It resolves the discrepancy between the theoretical understanding of the r-mode instability and the observations of millisecond pulsars and LMXBs. In our estimates we presumed that the boundary layer is laminar. Of course, turbulence in the VBL would increase the viscosity there and move the instability to higher rotation frequencies. We also showed that the r-modes induce some transverse motions in the crust, though these motions appear to be negligibly small compared to the r-mode amplitude. The Coriolis force is likely to change the details of the eigenfunctions in the boundary layer, but it is unlikely to change its gross structure. 
Finally, based on the large conductivity contrast between the core and the crust, we have presumed that the VBL heating evenly spreads throughout the isothermal core. This may be an oversimplification, and a more detailed time-dependent calculation of the heat transport will yield the radial temperature profile. This is especially important for the accreting systems, where the crust melting temperatures are lower because of the smaller mean nuclear charges (Haensel & Zdunik 1990). Can a weak magnetic field substantially change our arguments? The Ohmic diffusion time across the VBL is many years, much longer than the oscillation period. Hence, any magnetic field protruding into the fluid from the crust will be pulled and sheared in a nearly ideal MHD response. The field produced by the shear is $`\delta B/B\sim \alpha R/\delta `$, and hence the magnetic field does not compete with viscous shear when $`B\lesssim (\rho \nu \mathrm{\Omega })^{1/2}\approx 10^{11}\mathrm{G}(\nu _s/1\mathrm{kHz})^{1/2}/T_8`$. The surface dipole field strengths in LMXBs and MSPs are lower than this value. If the same can be said about the $`B`$ field at the base of the crust, then the viscous boundary layer we have discussed should apply. Of course, for younger and potentially much more magnetic neutron stars, the story will be modified, as the restoring force from the stretched field lines needs to be self-consistently included. We thank Andrew Cumming, Yuri Levin, and Lee Lindblom for helpful conversations. L.B. is a Cottrell Scholar of the Research Corporation and G.U. is a Fannie and John Hertz Foundation Fellow. This work was supported by the National Science Foundation through Grants AST97-31632 and PHY 94-07194.
# Low temperature ellipsometry of 𝛼'-NaV2O5 ## Abstract The dielectric function of $`\alpha ^{}`$-NaV<sub>2</sub>O<sub>5</sub> was measured with electric field along the a and b axes in the photon energy range 0.8-4.5 eV for temperatures down to 4K. We observe a pronounced decrease of the intensity of the 1 eV peak upon increasing temperature with an activation energy of about 25meV, indicating that a finite fraction of the rungs becomes occupied with two electrons while others are emptied as temperature increases. No appreciable shifts of peaks were found, showing that the change in the valence state of individual V atoms at the phase transition is very small. A remarkable inflection of this temperature dependence at the phase transition at 34 K indicates that charge ordering is associated with the low temperature phase. $`\alpha ^{}`$-NaV<sub>2</sub>O<sub>5</sub> is the subject of intensive research as a result of its remarkable physical properties. The compounds AV<sub>2</sub>O<sub>5</sub> (A= Li, Na, Ca, Mg, etc.) all have the same lattice structure, similar to that of V<sub>2</sub>O<sub>5</sub>. The structure can be described as two-legged ladders with VO<sub>5</sub> pyramids forming the corners, arranged in two-dimensional sheets. In AV<sub>2</sub>O<sub>5</sub> the A atoms enter the space between the layers and act as electron donors for the V<sub>2</sub>O<sub>5</sub> layers. In $`\alpha ^{}`$-NaV<sub>2</sub>O<sub>5</sub> the average valence of the V-ions corresponds to V<sup>+4.5</sup>. X-ray diffraction indicates that at room temperature all V-ions are crystallographically equivalent. At 35K a phase transition occurs, below which the following changes take place: (i) a quadrupling of the unit cell, (ii) opening of a spin gap. In addition there are several experimental hints for a charge redistribution below the phase transition, e.g. unaccounted-for changes in entropy, splitting of the V-NMR lines, and inequivalent V-sites observed with XPS. 
In this paper we investigate the charge redistribution using optical spectroscopic ellipsometry as a function of temperature. In our spectra we observe a clear indication of a strong charge redistribution between the rungs of the ladders at elevated temperature, which at the same time provides a channel for electrical conductivity with an activation energy of about 25 meV. We also report a remarkable inflection of the temperature dependence at the phase transition, which we interpret as an inflection of the charge redistribution process due to a particular correlated electronic state in which the charge and spin degrees of freedom are frozen out simultaneously. The crystal (sample CR8), with dimensions of approximately 2, 3 and 0.3 mm along the a, b, and c axes respectively, was mounted in a UHV optical cryostat in order to prevent the formation of ice on the surface. The pressure was about 10<sup>-8</sup> mbar at 300K and reached 10<sup>-9</sup> mbar at 4K. We performed ellipsometric measurements on the (001) surfaces of the crystals, both with the plane of incidence of the light along the a and the b axis. An angle of incidence $`\mathrm{\Theta }`$ of $`80^{\circ}`$ was used in all experiments. In Ref. we describe the details of the procedure followed to obtain $`ϵ(\omega )`$. The room temperature results are in general agreement with previous results using Kramers-Kronig analysis of reflectivity data. Along the a-direction we observe a peak at 0.9 eV with a shoulder at 1.4 eV, a peak at 3.3 eV and the slope of a peak above 4.2 eV, outside our spectral window. A similar blue-shifted sequence is observed along the b-direction. The 1 eV peak drops rather sharply and extrapolates to zero at 0.7 eV. However, weak absorption has been observed within the entire far- and mid-infrared range. The strong optical absorption within the entire visible spectrum causes the characteristic black appearance of this material. 
Based on the doping dependence of the optical spectra of Na<sub>1-x</sub>Ca<sub>x</sub>VO<sub>5</sub>, we established in Ref. the assignments made in Refs., namely that the peak at around 1 eV along the a and b direction is due to transitions between linear combinations of V 3d<sub>xy</sub>-states of the two V-ions forming the rungs. In Refs. and even and odd combinations were considered. The 0.9 eV peak in $`\sigma _a(\omega )`$ (peak A) would then correspond to the transition from V-V bonding to antibonding combinations on the same rung. In Ref. this model was extended to allow lop-sided linear combinations of the same orbitals, so that the 0.9 eV peak then is a transition between left- and right-oriented linear combinations. The work presented in Ref. definitely rules out the assignment of these peaks to crystal field-type V $`dd`$ transitions proposed in Refs. . The 1.1 eV peak in $`\sigma _b(\omega )`$ (peak B) involves transitions between neighboring rungs along the ladder. As a result of the correlation gap in the density of states, the optically induced transfer of electrons between neighboring rungs results in a final state with one rung empty and a neighboring rung doubly occupied; in other words, an electron-hole pair consisting of a hole in the band below E<sub>F</sub> and an electron in the empty state above E<sub>F</sub>. Note that the final state wavefunction is qualitatively different from the on-rung bonding-antibonding excitations considered above (peak A), even though the excitation energies are the same: it involves one rung with no electron, and a neighboring rung with one electron occupying each V-atom. We associate the lower energy of peak A compared to peak B with the attractive electron-hole Coulomb interaction, favoring on-rung electron-hole pairs. Optical transitions with energies below 2 eV were also seen in V<sub>6</sub>O<sub>13</sub> and VO<sub>2</sub>. 
In V<sub>2</sub>O<sub>5</sub> they have very small intensities, and were attributed to defects. The peak at 3.3 eV in $`\sigma _a(\omega )`$ we could attribute to a transition from the $`2p`$ orbital of oxygen to the antibonding level within the same V<sub>2</sub>O cluster. Let us now address the temperature dependence of the spectra. Perhaps most striking of all is the fact that the peak positions turn out to be temperature independent throughout the entire temperature range. This behavior should be contrasted with the remarkable splitting of the V-NMR lines in two components below the phase transition. It has been suggested that T<sub>c</sub> marks the transition from a high temperature phase where every rung is occupied with an electron residing in a H$`{}_{}{}^{+}{}_{2}{}^{}`$ type bonding orbital (formed by the two V3d<sub>xy</sub> orbitals), to a low temperature phase, where the system is in a charge ordered state (e.g. the zigzag ordered state with an alternation of V<sup>4+</sup> and V<sup>5+</sup> states, or, as suggested in Refs. , with V<sup>4+</sup>/V<sup>5+</sup> ladders and V<sup>4.5+</sup>/V<sup>4.5+</sup> ladders alternating). In Refs. estimates have been made of the potential energy difference between the left- and right-hand V-sites on the same rung, in order to reproduce the correct intensity and photon energy of the 1 eV peak along $`a`$, as well as producing a V<sup>4.9+</sup>/V<sup>4.1+</sup> distribution between left and right. This turned out to be $`\mathrm{\Delta }=V_L-V_R\approx 0.8eV`$, with an effective hopping parameter $`t_{}\approx 0.4eV`$. To have V<sup>4.5+</sup>/V<sup>4.5+</sup> above and V<sup>4+</sup>/V<sup>5+</sup> below the phase transition requires that the potential energy difference change from $`\mathrm{\Delta }=0.8eV`$ below T<sub>c</sub> to $`0`$ at and above $`T_c`$. As a result the ”1 eV peak” would shift from 0.89 eV to 0.8 eV in the temperature interval between 0 and 34 K, and would remain constant above $`T_c`$. 
The observed shift is less than 0.03 eV within the entire temperature interval, and less than 0.01 eV between 0 and 34 K. This suggests that the change in $`\mathrm{\Delta }`$ (and consequently the charge of the V atoms) at the phase transition is very small. In fact, a change of $`\mathrm{\Delta }`$ from $`0.1eV`$ to $`0`$ across T<sub>c</sub>, compatible with the experimental results, would yield a change in the valence state from V<sup>4.44+</sup>/V<sup>4.56+</sup> to V<sup>4.5+</sup>/V<sup>4.5+</sup> between 0 and 34 K, which is an almost negligible effect. Thus we conclude that, irrespective of the possible charge configurations V<sup>4.5+</sup>/V<sup>4.5+</sup> or V<sup>5+</sup>/V<sup>4+</sup>, the changes in the charges of the V atoms at the phase transition are very small (smaller than 0.06e). As we can see in Figs. 1b and 2b, there is a strong decrease of the intensity of the peaks A and B with increasing temperature. The spectral weight for both cases is not transferred to low frequencies. The spectral weight of the B peak seems to be recovered up to and above 4 eV. The spectral weight of the 3.3 eV peak in the a direction is recovered also in the nearby high frequencies, whereas the intensity of the A peak seems to be recovered at even higher photon energy, probably 4.5 eV. The evolution of the 1 eV peaks can be seen from Fig. 3, where the integrated intensities in $`\sigma _1(\omega )`$ from 0.75 eV to 2.25 eV were plotted as a function of temperature. The data fitted with the formula $`I(T)=I_0(1-fe^{-E_0/T})`$ gave $`f`$=0.35 and $`E_0`$=286K for the a direction and $`f`$=0.47 and $`E_0`$=370K for the b direction. From the fits we see that the activation energy $`E_0`$ is about 25meV, which is very small for the frequency range of the peaks. A decrease of the intensity of the A peak takes place below the phase transition, but otherwise there are no features related to it. The splitting of the A peak of about 55meV (Fig.1c) exists even at 100K. 
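The activated fit used for Fig. 3 can be written down directly; a sketch with the quoted a-axis parameters ($`f`$=0.35, $`E_0`$=286 K), plus the conversion of $`E_0`$ to the ~25 meV activation energy (the function name is ours):

```python
# Sketch: the activated intensity fit I(T) = I_0 * (1 - f * exp(-E_0/T)),
# with the a-axis parameters quoted in the text (E_0 in kelvin).
import math

def intensity(T, I0=1.0, f=0.35, E0=286.0):
    """Integrated 1 eV peak intensity versus temperature T (in K)."""
    return I0 * (1.0 - f * math.exp(-E0 / T))

print(intensity(4.0))    # essentially I_0 at the lowest measured temperature
print(intensity(300.0))  # suppressed by roughly 13% at room temperature
# E_0 = 286 K corresponds to k_B * E_0 ~ 25 meV:
print(286.0 * 8.617e-5)  # in eV
```

At $`T\to \mathrm{}`$ the fit saturates at $`I_0(1-f)`$, which is why $`f=0.5`$ corresponds to half the rungs becoming doubly occupied, as discussed below.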
Judging from its sharp shape and the value of the splitting, it can be attributed to a phonon side-band. Band structure calculations have indicated that the d<sub>xy</sub> orbitals are well separated from the other d orbitals, and ESR experiments have led to g-values which indicate the complete quenching of the orbital momentum. There are then no other low-lying crystal levels, about 25 meV above the ground state, to play a role in the temperature dependence of the A peak. Comparing the doping dependence of the A peak in Ca<sub>x</sub>Na<sub>1-x</sub>V<sub>2</sub>O<sub>5</sub> and the high temperature dependence from Fig.1b, we see that the two behaviors resemble each other, presenting no shifting or splitting. But, as discussed in Ref. , the intensity of the A peak decreases upon doping because doping induces doubly occupied rungs. The same mechanism can then be responsible for the decrease of the intensity of the A peak with increasing temperature. The bonding-antibonding transition (A peak) on the rung will have a reduced intensity, as there are fewer singly occupied rungs, as in the case of Ca<sub>x</sub>Na<sub>1-x</sub>V<sub>2</sub>O<sub>5</sub>. The transitions on the doubly occupied rungs are at an energy $`U`$, around 4 eV, with a factor $`4t_{}/U`$ reduction of the original spectral weight. The activation energy of 25 meV would then be the energy required to redistribute the electrons between the rungs, either on the same ladder, or between different ladders. Eventually, at very high temperatures, only half of the rungs would be occupied with one electron, so the intensity of the A peak would be at half the low temperature value ($`f=0.5`$ in the fitting formula of Fig.3). At first glance the processes leading to partial emptying of rungs, while doubly occupying others, seem to be of the order of the energy of peak B (1 eV), which corresponds exactly to such a process, and one may wonder how a low energy scale could exist. 
However, processes involving the collective motion of charge can be at a much lower energy than the single particle charge transfer, as a result of short range (nearest neighbor) Coulomb interactions. An example of such a collective mode is the zig-zag ordering involving an (almost) soft charge mode for $`k`$ at the Brillouin zone boundary. These charge modes, because $`k`$ is at the BZ boundary, can appear only indirectly (e.g. phonon assisted) in $`\sigma (\omega )`$, and therefore are at best weakly infrared active. Under favorable conditions the spin degrees of freedom in addition result in a weak but finite $`\sigma (\omega )`$. Another way in which the electrons can move from one rung to another is by forming topological defects, such as domain walls separating charge ordering domains. Macroscopically this could lead to double occupancy of some rungs and the emptying of others. In fact, even though the optical gap is 1 eV, there are experiments which indicate charge degrees of freedom at a much lower energy. Resistivity measurements yielded an energy gap ranging from 30 meV at lower temperatures to 75 meV at high temperatures. The dielectric loss $`ϵ^{\prime \prime }`$ for a frequency of 16.5 GHz along the b direction is rather constant up to 150 K and then increases very rapidly above 200K (so that the microwave signal is lost at room temperature), meaning that an absorption peak could start to evolve at 200K for very low frequencies. A low frequency continuum was observed near 25 meV with infrared spectroscopy and at 75 meV with Raman spectroscopy. Also, infrared measurements found that $`\sigma _{1,a}`$ increases with increasing temperature. The suppression of intensity below the phase transition in this context (Fig.3) seems to mark a redistribution of charge which is associated with the spin gap. 
X-ray diffraction indicates that the superstructure below T<sub>c</sub> consists of a group of 4 rungs: 2 neighboring rungs of the central ladder, 1 on the left-hand and 1 on the right-hand ladder. The presence of a spin-gap indicates that the 4 spins of this structural unit form an $`S=0`$ state below T<sub>c</sub>. To account for the absence of a change of the valence of the V atoms at the phase transition, as well as for this slight double occupancy below the phase transition, the following scenario can be put forward. Below T<sub>c</sub> the structure would be formed by singlets (see Fig. 4). A possible arrangement, which is motivated by the observed crystal structure at low temperature, is indicated in Fig. 4. It corresponds to mainly two degenerate configurations involving one electron on each rung (top and bottom), engaged in a singlet formed of two electrons on nearest neighbor V-positions on different ladders. These diagonal singlets were originally proposed by Chakraverty et al. for Na<sub>0.33</sub>V<sub>2</sub>O<sub>5</sub>. The middle two configurations are higher energy states of order 1 eV, hence they are only slightly mixed in. Because the latter configurations have one empty and one doubly occupied rung, the intensity of the A and B peaks should again be reduced, as was discussed for temperatures above the phase transition. The reduced intensity in our spectra below $`T_c`$ then reflects the amount of singlet character involving doubly occupied rungs. Passing the phase transition, the coherence of this state would vanish. This would result in a random configuration with an average valence of +4.5 for the V atoms, and also in a spin susceptibility for the high temperature phase due to the appearance of some free spins. The nature of the weak charge redistribution which we observe at low temperature would then be manifestly quantum mechanical. 
In conclusion, we have measured the temperature dependence of the dielectric function along the a and b axes of $`\alpha ^{}`$-NaV<sub>2</sub>O<sub>5</sub> in the photon energy range 0.8-4.5 eV for temperatures down to 4 K. No appreciable shifts of the 1 eV peaks were found, showing that the change in the valence state of the V atoms at the phase transition is very small (smaller than 0.06e). A strong decrease of the 1 eV peaks with increasing temperature was observed. We assign this temperature dependence to a collective charge redistribution, namely the redistribution of the electrons among the rungs resulting in double occupation of some rungs as the temperature increases, with an activation energy of about 25 meV. Below the phase transition, a small but sharp decrease of the intensity of the 0.9 eV peak in $`\sigma _a(\omega )`$ was found. It is attributed to a finite probability of having, in the singlet state below T<sub>c</sub>, configurations with electron pairs occupying the same rung. We gratefully acknowledge T.T.M. Palstra, J.G. Snijders, M. Cuoco, and A. Revcolevschi for stimulating discussions. This investigation was supported by the Netherlands Foundation for Fundamental Research on Matter (FOM) with financial aid from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
# Neutron Stars in Supernova Remnants ### Acknowledgments. I am indebted to E. Amato, R. Bandiera, M. Salvati, L. Woltjer for many discussions on the subject of this talk. This work was partly supported by the Italian Space Agency.
# Embedded Solitons in a Three-Wave System ## I Introduction Recent studies have revealed a novel class of embedded solitons (ESs) in various nonlinear-wave systems. An ES is a solitary wave which exists despite having its internal frequency in resonance with linear (radiation) waves. ESs may exist as codimension-one solutions, i.e., at discrete values of the frequency, provided that the spectrum of the corresponding linearized system has (at least) two branches, one corresponding to exponentially localized solutions, the other one to delocalized radiation modes. In such systems, quasilocalized solutions (or “generalized solitary waves” ) in the form of a solitary wave resting on top of a small-amplitude continuous-wave (cw) background are generic . However, at some special values of the internal frequency, the amplitude of the background may exactly vanish, giving rise to an isolated soliton embedded into the continuous spectrum. Examples of ESs are available in water-wave models, taking into account capillarity , and in several nonlinear-optical systems, including a Bragg grating incorporating wave-propagation terms and second-harmonic generation in the presence of the self-defocusing Kerr nonlinearity (the latter model with competing nonlinearities was introduced earlier in a different context ). It is relevant to stress that ESs, although they are isolated solutions, are not structurally unstable. Indeed, a small change of the model’s parameters will slightly change the location of ES (e.g., its energy and momentum, see below), but will not destroy it, which is clearly demonstrated by the already published results . In this respect, they may be called generic solutions of codimension one. ESs are interesting because they naturally appear when higher-order (singular) perturbations are added to the system, which may completely change its soliton spectrum. 
Optical ESs have a potential for applications, due to the very fact that they are isolated solitons, rather than occurring in continuous families. The stability problem for ESs was solved in a fairly general analytical form in Ref. , which was also verified by direct simulations of the model considered. It was demonstrated that an ES is a semi-stable object which is fully stable in the linear approximation, but is subject to a slowly growing (sub-exponential) one-sided nonlinear instability. The development of this weak instability depends on the values of the system’s parameters; in some cases, it develops so slowly that the ES, to all practical purposes, may be regarded as a fully stable object . In the previously studied models, only a few branches of ESs were found, and only after careful numerical searching, which suggests that they may be hard to observe in a real experiment. The present work aims to investigate ESs in a recently introduced model of a three-wave interaction in a quadratically nonlinear planar waveguide with a quasi-one-dimensional Bragg grating , which can be quite easily fabricated. It will be found that ESs occur in abundance in this model, hence it may be much easier to observe them experimentally. It should also be stressed that, unlike the previously studied models, in which ESs appear under relatively exotic conditions, e.g., as a result of adding singular perturbations or specially combining different nonlinearities , the model considered below, which gives rise to a rich variety of ESs, is exactly the same one that was known to support vast families of ordinary (non-embedded) gap solitons. This, in particular, implies that ESs can be found in the corresponding system under the same conditions which are necessary for the observation of the regular solitons, i.e., the experiment may be quite straightforward. An estimate of the relevant physical parameters will be given at the end of the paper. The rest of the paper is organized as follows.
In section 2, we recapitulate the model and obtain solutions in the form of fundamental zero-walkoff ESs, which, physically, correspond to the case when the Poynting vector of the carrier waves is aligned with the propagation direction. The analysis is extended in section 3 to the case of fundamental walking ESs, for which the Poynting vector and the propagation direction are misaligned. Concluding remarks are collected in section 4. ## II The Model and Zero-Walkoff Solitons The model describes spatial solitons produced by second-harmonic generation (SHG) in a planar waveguide, in which two components of the fundamental harmonic (FH), $`v_1`$ and $`v_2`$, are linearly coupled by Bragg reflection on a grating in the form of a system of scores parallel to the propagation direction $`z`$ (for a more detailed description of the model see ): $`i(v_1)_z+i(v_1)_x+v_2+v_3v_2^{*}`$ $`=`$ $`0,`$ (1) $`i(v_2)_z-i(v_2)_x+v_1+v_3v_1^{*}`$ $`=`$ $`0,`$ (2) $`2i(v_3)_z-qv_3+D(v_3)_{xx}+v_1v_2`$ $`=`$ $`0.`$ (3) Here $`v_3`$ is the second-harmonic (SH) field, $`x`$ is a normalized transverse coordinate, $`q`$ is a real phase-mismatch parameter, and $`D`$ is an effective diffraction coefficient. The diffraction terms in the FH equations (1) and (2) are neglected, as they are much weaker than the artificial diffraction induced by the Bragg scattering, while the SH wave, propagating parallel to the grating, undergoes no reflection, hence the diffraction term is kept in Eq. (3). Experimental techniques for the generation and observation of spatial solitons in planar waveguides are now well elaborated (), and a waveguide carrying a set of parallel scores with a spacing commensurate with the light wavelength (which is necessary to realize the resonant Bragg scattering) can be easily fabricated. Therefore, the present system provides a medium in which experimental observation of ESs is most plausible.
As mentioned above, the observation of ESs in this system should be further facilitated by the fact that it supports a multitude of distinct ES states, see below. Eqs. (1)–(3) have three dynamical invariants: the Hamiltonian, which will not be used below, the energy flux (norm) $$E\equiv \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\left[|v_1(x)|^2+|v_2(x)|^2+4|v_3|^2\right]dx,$$ (4) and the momentum, $$P\equiv i\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\left((v_1)_x^{*}v_1+(v_2)_x^{*}v_2+2(v_3)_x^{*}v_3\right)dx.$$ (5) The norm played a crucial role in the analysis of the ES stability carried out in . Soliton solutions to Eqs. (1)–(3) are sought in the form $$v_{1,2}(x,z)=\mathrm{exp}(ikz)u_{1,2}(\xi ),v_3(x,z)=\mathrm{exp}(2ikz)u_3,$$ (6) where $`\xi \equiv x-cz`$, with $`c`$ being the walkoff (slope) of the spatial soliton’s axis relative to the light propagation direction $`z`$. The substitution of (6) into Eqs. (1)–(3) leads to an 8th-order system of ordinary differential equations (ODEs) for the real and imaginary parts of $`v_{1,2,3}`$ (primes standing for $`d/d\xi `$): $`-ku_1+i(1-c)u_1^{\prime }+u_2+u_3u_2^{*}`$ $`=`$ $`0,`$ (7) $`-ku_2-i(1+c)u_2^{\prime }+u_1+u_3u_1^{*}`$ $`=`$ $`0,`$ (8) $`-(4k+q)u_3+Du_3^{\prime \prime }-2icu_3^{\prime }+u_1u_2`$ $`=`$ $`0.`$ (9) Before looking for ES solutions to the full nonlinear equations, it is necessary to investigate the eigenvalues $`\lambda `$ of their linearized version, in order to isolate the region in which ESs may exist. Substituting $`u_1,u_2\propto \mathrm{exp}(\lambda \xi )`$, and $`u_3\propto \mathrm{exp}(2\lambda \xi )`$ into Eqs. (7)–(9) and linearizing, one finds that the FH and SH equations decouple in the linearized approximation. The FH equations give rise to a biquadratic characteristic equation, $$(1-c^2)^2\lambda ^4+2\left[(1+c^2)k^2-(1-c^2)\right]\lambda ^2+(k^2-1)^2=0,$$ (10) and the SH equation produces another four eigenvalues given by $$\left[D\lambda ^2-(4k+q)\right]^2+4c^2\lambda ^2=0.$$ (11) A necessary condition for the existence of ESs is that the eigenvalues given by Eq.
(10) have non-zero real parts - this is necessary for the existence of exponentially localized solutions - while the eigenvalues from Eq. (11) should be purely imaginary (otherwise, one will have ordinary, rather than embedded, solitons). This discrimination between the two sets of eigenvalues is due to the fact that Eqs. (7) and (8) for the FH components are always linearizable, while the SH equation (9) may be nonlinearizable, which opens the possibility for the existence of ESs . As follows from Eqs. (10) and (11), these two conditions imply $$k^2+c^2<1;\mathrm{\hspace{0.33em}}4k+q<c^2/D.$$ (12) For the case $`c=0`$, the parametric region defined by the inequalities (12) is displayed in Fig. 1. In Ref. , numerous ordinary (gap) soliton solutions to the present model were found by means of a numerical shooting method. To construct ES solutions, we applied the same method to Eqs. (7), (8) and (9), allowing just one parameter to vary. From each ES solution found this way, branches of solutions were continued in the parameters $`k,q`$ and $`c`$ by means of the software package AUTO . Note that the $`c=0`$ solutions admit an invariant reduction $`u_2=u_1^{*}`$, $`u_3=u_3^{*}`$, which reduces the system to a 4th-order ODE system, thus making numerical shooting feasible. We confine the analysis to fundamental solitons, which implies that the SH component $`u_3`$ has a single-humped shape (a distinctive feature of gap solitons in the same system is that not only fundamental solitons, but also certain double-humped two-solitons (bound states of two fundamental solitons) appear to be stable ). Note that double- and multi-humped ESs must exist too, as per a theorem from Ref. , but leaving them aside, we will still find a rich structure of fundamental ESs. We begin with a description of the results for the reduced case $`c=0`$, when an additional scaling allows us to set $`D\equiv 1`$ without loss of generality. The results are displayed in Figs. 1 – 3.
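The eigenvalue conditions just derived are easy to check numerically. The following sketch (our own illustration, not code from the paper) evaluates the roots of Eqs. (10) and (11) at a sample point inside the region (12) and verifies that the FH eigenvalues have nonzero real parts while the SH eigenvalues are purely imaginary:

```python
import numpy as np

def fh_eigenvalues(k, c):
    """Roots lambda of the biquadratic FH equation (10),
    (1-c^2)^2 L^2 + 2[(1+c^2)k^2 - (1-c^2)] L + (k^2-1)^2 = 0, L = lambda^2."""
    lam2 = np.roots([(1 - c**2) ** 2,
                     2 * ((1 + c**2) * k**2 - (1 - c**2)),
                     (k**2 - 1) ** 2]).astype(complex)
    return np.concatenate([np.sqrt(lam2), -np.sqrt(lam2)])

def sh_eigenvalues(k, q, c, D):
    """Roots lambda of the SH equation (11), expanded in L = lambda^2:
    D^2 L^2 + [4c^2 - 2D(4k+q)] L + (4k+q)^2 = 0."""
    lam2 = np.roots([D**2,
                     4 * c**2 - 2 * D * (4 * k + q),
                     (4 * k + q) ** 2]).astype(complex)
    return np.concatenate([np.sqrt(lam2), -np.sqrt(lam2)])

# Sample point satisfying (12): k^2 + c^2 < 1 and 4k + q < c^2/D
k, q, c, D = 0.5, -3.0, 0.0, 1.0
fh = fh_eigenvalues(k, c)
sh = sh_eigenvalues(k, q, c, D)
print("FH eigenvalues have nonzero real part:", bool(np.all(np.abs(fh.real) > 1e-3)))
print("SH eigenvalues are purely imaginary:  ", bool(np.all(np.abs(sh.real) < 1e-5)))
```

For c = 0 and D = 1, Eq. (11) collapses to λ² = 4k + q, so any point with 4k + q < 0 yields the required imaginary SH pair.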
There is strong evidence for the existence of an infinite “fan” of fundamental ES branches. Among them, we define the ground-state soliton as the one which has the simplest internal structure (Fig. 2a). The next, “first excited state” differs by adding one (spatial) oscillation to the FH field (Fig. 2c). Adding an extra oscillation each time, we obtain an indefinitely large number of “excited states” (as an example, see the 8th state in Fig. 2d). We stress, however, that all the “excited states” belong to the class of fundamental solitons, rather than being bound states thereof. In Fig. 1, the first nine states (branches) are shown in the $`(k,q)`$ parametric plane. Note that the whole bundle of branches originates from the point $`(k=1,q=-4)`$, which is precisely the intersection of the two lines which limit the existence region of ESs (see Eq. (12) with $`c=0`$). At this degenerate point, the linearization (see above) gives four zero eigenvalues. More branches than those depicted in Fig. 1 have been found, the numerical results clearly pointing towards the existence of infinitely many branches, accumulating on the border $`q+4k=0`$ of the ES region. In the accumulation process, each $`u_3`$ component is successively wider, while the $`u_{1,2}`$ ones have more and more internal oscillations. Since $`k`$ is an arbitrary propagation constant, on physical grounds the results obtained for the $`c=0`$ solutions are better summarized in terms of the energy flux $`E`$ vs. the mismatch $`q`$ (Fig. 3). Note that all the branches shown in Fig. 3 really terminate at their edge points, which exactly correspond to hitting the boundary $`k=1`$, see Fig. 1. It is also noteworthy that all the solutions are exponentially localized, except at the edge point $`k=1`$, where a straightforward consideration of Eqs. (7)–(9) demonstrates that, in this case, ESs are weakly (algebraically) localized as $`|x|\rightarrow \mathrm{\infty }`$ (cf.
Fig 2b): $`\begin{array}{c}u_1\simeq \sqrt{-(4k+q)}|x|^{-1},u_2\simeq (1/2)\sqrt{-(4k+q)}|x|^{-2},\\ u_3\propto x^{-2}.\end{array}`$ Finally, we observe from Figs. 1 and 3 that the first “excited-state” branch has the remarkable property that it corresponds to a nearly constant value of $`q`$. This means that while, generally, ESs are isolated (codimension-one) solutions for fixed values of the physical parameters, this branch is nearly generic, existing in a narrow interval of $`q`$-values between $`-4.0`$ and $`-3.74`$. ## III Walking Solitons We now turn to ESs with $`c\ne 0`$, i.e., walking ones. These were sought systematically by returning to the full 8th-order-ODE model and allowing the AUTO package to detect bifurcations (of the pitchfork type) while moving along branches of the $`c=0`$ solutions. It transpires that all the bifurcating branches have $`c\ne 0`$, i.e., they are walking ESs. Such solutions are of codimension two in the parameter space (i.e., the solutions can be represented by curves $`k(q)`$, $`c(q)`$), which can be established by a simple counting argument after noting that the 8th-order linear system has two pairs of purely imaginary eigenvalues. Alternatively, the walking ESs can be represented, in terms of the energy flux and momentum (see Eqs. (4) and (5)), by curves $`E(q)`$ and $`P(q)`$. We present results only for the walking solutions which bifurcate from the ground and first excited $`c=0`$ states, while other walking ESs can also be readily found. It was found that the ground-state branch has exactly two bifurcation points, giving rise to two distinct walking-ES solution branches (up to a symmetry). These new branches are shown, in terms of the most physically representative $`c(q)`$ and $`E(q)`$ dependences, in Fig. 4. Note that they eventually coalesce and disappear. As the inset to Fig. 4b shows, they disappear via a tangent (fold, or saddle-node) bifurcation. The first excited state has three bifurcation points.
One of them gives rise to a short branch of walking ESs that terminates, while the two others appear to extend to $`q=-\mathrm{\infty }`$ (their ostensible “merger” in Fig. 5 is an artifact of plotting). It is known that, in the large-mismatch limit $`q\rightarrow -\mathrm{\infty }`$, the present three-wave model with quadratic nonlinearity goes over into a modified Thirring model with cubic nonlinear terms . This suggests that the latter model may also support ESs. However, consideration of this issue is beyond the scope of the present work. Fig. 4 clearly shows that, in a certain interval of the mismatch parameter $`q`$, the system gives rise to multistability, i.e., the coexistence of different types of spatial solitons in the planar optical waveguide (for instance, taking into account the fact that each $`c\ne 0`$ branch has symmetric parts with opposite values of $`c`$, we conclude that there are five coexisting solutions at $`q`$ taking values between about $`-8`$ and $`-11`$). This situation is of obvious interest for applications, especially in terms of all-optical switching . Indeed, switching from a state with a larger value of the energy flux to a neighboring one with a smaller flux can easily be initiated by a small localized perturbation, in view of the above-mentioned one-sided semistability of ESs, shown in a general form in . Such a switching perturbation can readily be made controllable and movable if created by a laser beam launched normally to the planar waveguide and focused on a suitable spot on its surface . Switching between the two branches with $`c\ne 0`$ can be quite easy to realize too, due to the small energy-flux and walkoff/momentum differences between them, see Fig. 4. ## IV Conclusion To conclude the analysis, it is necessary to estimate the actual size of the relevant physical parameters. This is, in fact, quite easy to do, as the estimate does not differ essentially from that presented in Ref. for the ordinary solitons in exactly the same model.
This means that a diffraction length of $`\sim 1`$ cm is expected for the SH component, and, certainly, the diffraction lengths for the FH components, which are subject to the strong Bragg scattering, will be no larger than that. Thus, a sample with a size of a few cm may be sufficient for the experimental observation of ESs. The sample may be an ordinary planar quadratically nonlinear waveguide with a set of parallel scores written on it. The other parameters, such as the power of the laser beam that generates the solitons, etc., are expected to be the same as in the usual experiments with spatial solitons . As for the weak semi-instability of ESs, it may be of no practical consequence for the experiment, as it would manifest itself only in a much larger sample. In this connection, it may be relevant to mention that, strictly speaking, the usual spatial solitons observed in numerous experiments are all unstable (e.g., against transverse perturbations) in the usual (linear) sense, but the instability has no room to develop in real experimental samples. Finally, we see from Figs. 4 and 5 that the maximum walkoff that ESs can achieve is, in the present notation, slightly smaller than $`1`$. According to the geometric interpretation of the underlying equations (1) - (3) (see details in the original work ), this implies that the maximum misalignment angle between the propagation direction and the axis of the spatial soliton may be nearly the same as the (small) angle between the Poynting vectors of the two FH waves and that of the SH wave. To summarize this work, we have found a rich spectrum of isolated solitons residing inside the continuous spectrum in a simple model of the three-wave spatial interaction in a second-harmonic-generating planar optical waveguide equipped with a quasi-one-dimensional Bragg grating. An infinite sequence of fundamental embedded solitons was found. They differ by the number of internal oscillations.
The embedded solitons are localized exponentially, except in a limiting degenerate case, when they become algebraically localized. Branches of the zero-walkoff spatial solitons give rise, through bifurcations, to several branches of walking solitons. The structure of the bifurcating branches provides for a multistable configuration of the spatial optical solitons. This may find straightforward applications to all-optical switching. ## Acknowledgements The stay of B.A.M. at the University of Bristol was supported by a Benjamin Meaker fellowship. A.R.C. holds a U.K. EPSRC Advanced Fellowship.
# 1 Molecular Cooling ## 1 Molecular Cooling The ultimate fate of the gas which cools in cooling flows is still unknown. A possibility is that a fraction of the gas forms cold molecular clouds . As a result of fragmentation we expect the formation of small clouds with higher density and the possible production of molecules, in particular $`H_2`$ and traces of $`HD`$ and $`CO`$. Taking into account radiative transfer effects, we computed analytically the cooling function $`\mathrm{\Lambda }\left(T\right)`$ for the transition between the ground state and the first rotational level. In this way we determined a lower limit on $`\mathrm{\Lambda }`$. We have since improved this calculation by including numerically all rotational transitions which are relevant at low temperatures (i.e. up to $`J\simeq 5`$). The following column densities are adopted for a typical small cloud (with $`n_{H_2}=10^6`$cm<sup>-3</sup>): $`N_{CO}=10^{14}\mathrm{cm}^2`$ and $`N_{H_2}=2\times 10^{18}\mathrm{cm}^2`$, which corresponds to a $`CO`$ abundance $`\eta _{CO}\simeq 5\times 10^5`$. For $`HD`$ we instead assume the primordial ratio $`\eta _{HD}\simeq 7\times 10^5`$. In Figure 1 we plot the molecular cooling function $`\mathrm{\Lambda }\left(T\right)`$, taking into account $`CO`$, $`HD`$ and $`H_2`$, in the range 3-300 K. $`CO`$ is the main coolant in the temperature range 3-80 K, $`HD`$ in the range 80-150 K, and $`H_2`$ dominates above 150 K. ## 2 Thermal equilibrium The heating is given by the external X-ray flux produced by the hot intracluster gas. The thermal balance between heating and cooling leads to an equilibrium temperature for the clouds. We have calculated the minimum equilibrium temperature $`T_{clump}`$ of the clumps inside the cooling flow region. Table 1 shows the equilibrium temperature for different clusters. For comparison we give the values we find using our analytical approximation ($`N`$=1) and those obtained by taking into account higher excited rotational levels ($`N`$=5).
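The thermal-balance condition, heating = Λ(T), can be sketched with a toy calculation. All numbers below are illustrative placeholders (a quadratic toy cooling law and a constant heating rate), not the actual molecular rates behind Table 1:

```python
# Toy illustration of the thermal-balance condition Lambda(T) = heating.
# The quadratic cooling law and the constant heating rate are placeholders.

def cooling(T):
    """Assumed per-molecule cooling rate (erg/s); the power law is a toy."""
    return 1e-28 * T ** 2

HEATING = 1e-26  # assumed constant X-ray heating rate per molecule (erg/s)

def equilibrium_temperature(lo=1.0, hi=1000.0, tol=1e-6):
    """Bisection on cooling(T) - HEATING = 0 (cooling grows with T)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cooling(mid) < HEATING:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"toy equilibrium temperature: {equilibrium_temperature():.2f} K")
```

With these toy rates the balance settles at 10 K, the same order of magnitude as the entries in Table 1.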
One clearly sees that the inclusion of the higher excited levels in the calculation lowers the equilibrium temperature, particularly for hot clusters such as, for instance, Abell 478. We conclude that thermal equilibrium can be achieved at very low temperatures inside the cooling flow region, mainly due to $`CO`$ cooling. Molecules other than $`CO`$, for example $`CN`$ or $`H_2CO`$, could also be important. Thus the study of the chemistry in cooling flows might lead to important insights. | Table 1. Equilibrium temperature | | | | --- | --- | --- | | Cluster | $`T_{clump}\left(N=1\right)`$ | $`T_{clump}\left(N=5\right)`$ | | | (in K) | (in K) | | PKS 0745-191 | 4 | 3 | | Hydra A | 4 | 3 | | Abell 478 | 75 | 10 | | Centaurus | 25 | 4 | Acknowledgements. We would like to thank M. Plionis and I. Georgantopoulos for organizing such a pleasant conference. This work has been supported by the Dr Tomalla Foundation and by the Swiss National Science Foundation.
# Distant Cluster Search: Welcoming some Newcomers from the EIS ## 1. Introduction The quest for high-redshift ($`z>0.5`$) clusters of galaxies has recently received a lot of well-deserved attention. The physical mechanisms that govern galactic evolution still lack a clear understanding, and clusters at different redshifts are privileged observational targets for developing related studies. Moreover, knowing the number density of these systems, as well as their epoch of formation, provides crucial ways of testing the different theoretical cosmological models put forward in the literature (see e.g. Bahcall, Fan & Cen 1997). ## 2. The Algorithm However, finding clusters at cosmologically interesting look-back times ($`z>0.5`$), not to mention defining a complete sample, is a time-consuming and difficult task. Successful attempts to gather optically selected samples were made by Postman et al (1996; hereafter P96) in the northern hemisphere. Prompted by the release of the EIS data, we developed an algorithm - see Lazzati et al (1998), Lobo et al (1998) - to be applied to catalogues of galaxy positions and magnitudes. One of its main advantages is that the spatial and luminosity parts of the filter are run separately on the catalogue, with no assumption on the typical size or typical $`M^{*}`$ for clusters, as these parameters enter our algorithm only as a typical angular scale - a set of Gaussian $`\sigma `$’s - and a typical apparent magnitude $`m^{*}`$. In this way, the significance of a cluster detection is always enhanced, by combining the most probable $`m^{*}`$ with the most probable angular size. Moreover, a local background is also used, allowing us to adapt well to, and overcome the hazards of, inhomogeneous data sets (the quality of EIS Patch A data was somewhat affected by El Niño).
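A minimal illustration of the spatial part of such a filter is sketched below. This is our own toy version, not the actual pipeline: the mock catalogue, the Gaussian kernel and the offset positions used for the local background are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock catalogue: uniform field galaxies plus one compact overdensity
field = rng.uniform(0.0, 10.0, size=(2000, 2))
clump = rng.normal(loc=5.0, scale=0.1, size=(80, 2))
gals = np.vstack([field, clump])

def gaussian_counts(gals, center, sigma):
    """Gaussian-weighted galaxy count around `center` (the spatial filter)."""
    r2 = np.sum((gals - center) ** 2, axis=1)
    return np.sum(np.exp(-r2 / (2.0 * sigma ** 2)))

def snr(gals, center, sigma):
    """Signal-to-noise against a local background estimated away from `center`."""
    signal = gaussian_counts(gals, center, sigma)
    # Local background: mean filter response at four offset positions
    offsets = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
    bg = np.array([gaussian_counts(gals, center + o, sigma) for o in offsets])
    return (signal - bg.mean()) / (bg.std() + bg.mean() ** 0.5)

print(snr(gals, np.array([5.0, 5.0]), 0.2))  # high S/N at the overdensity
print(snr(gals, np.array([2.0, 2.0]), 0.2))  # low S/N in the field
```

In this toy, a fixed S/N threshold (the paper adopts S/N = 4) keeps the overdensity and rejects generic field positions; the real algorithm additionally combines this with the luminosity filter.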
Extensive simulations using the EIS data directly allowed us to obtain completeness and contamination rates that do seem advantageous relative to those obtained with the P96 algorithm (see the next section). ## 3. New Cluster Candidates Running our new algorithm on the EIS data produced a new “robust” set of cluster candidates up to $`z\sim 1.1`$ (as estimated via the respective $`m^{*}`$), namely $`41`$ for Patch A ($`3`$ sq. degs.) and $`21`$ for Patch B ($`1.5`$ sq. degs.). Out of these, nearly half are not present in the list of candidates obtained from the same data using the P96 algorithm (Olsen et al 1999; see Lobo et al 1998). The false-detection rate we estimated is $`1.3`$ spurious candidates per square degree once the threshold is set at $`S/N=4`$ (our final adopted detection threshold). For comparison, P96 report an estimated contamination rate for their final catalogue of at most $`30\%`$ in their $`5.1`$ square degree area. As for completeness, setting the detection threshold at $`S/N=4`$ still allows us to achieve a completeness of $`95\%`$ up to $`z\sim 0.9`$ for richness-2, Coma-like clusters. Some of our cluster candidates have already been followed up with multi-waveband BVRI imaging to estimate photometric redshifts and to get a handle on cluster members and determine their color properties. The complete list of candidates, finding charts and further details will be presented in Lobo et al (1999, in preparation). ## 4. Future Three of our highest estimated redshift ($`z\sim 0.6`$) cluster candidates were selected for spectroscopic observations carried out with FORS/VLT last September. The data we obtained (aimed at reaching down to $`L\sim 0.3L^{*}`$, and out to the cluster virial radius) allowed us to confirm their “cluster identity” and to secure their redshifts as $`z\simeq 0.64`$, $`z\simeq 0.66`$ and $`z\simeq 0.71`$, respectively.
Once these data are complete, they will form a small but homogeneous sample of clusters of very similar richness and redshift that can be used to provide the absolute normalization of the bright end of the luminosity function in high-$`z`$ clusters, an important constraint for theories of structure formation and evolution. We will also explore the dynamics and morphological evolution of member galaxies at redshifts associated with the assembly of large galaxy clusters. ### Acknowledgments. C. Lobo acknowledges financial support through the CNAA fellowship, reference D.D. n.37 08/10/1997. ## References Renzini A., da Costa L. 1997, The Messenger, 87, 23 Bahcall N., Fan X., Cen R. 1997, ApJ, 485, L53 Postman M. et al 1996, AJ, 111, 615 Lazzati D. et al 1998, in proceedings of the XIVth IAP Meeting, Wide Field Surveys in Cosmology, eds. S. Colombi & Y. Mellier, 400 Lobo C. et al 1998, astro-ph/9809162 Olsen et al 1999, A&A, 345, 363 Olsen et al 1999, A&A, 345, 681
# A basic obstacle for electrical spin-injection from a ferromagnetic metal into a diffusive semiconductor ## Abstract We have calculated the spin-polarization effects of a current in a two-dimensional electron gas which is contacted by two ferromagnetic metals. In the purely diffusive regime, the current may indeed be spin-polarized. However, for a typical device geometry the degree of spin polarization of the current is limited to less than 0.1%. The change in device resistance for parallel and antiparallel magnetization of the contacts is up to quadratically smaller, and will thus be difficult to detect. (Accepted by PRB Rap. com.) Spin-polarized electron injection into semiconductors has been a field of growing interest during the last years. The injection and detection of a spin-polarized current in a semiconducting material could combine magnetic storage of information with electronic readout in a single semiconductor device, yielding many obvious advantages. However, up to now experiments on spin injection from ferromagnetic metals into semiconductors have only shown effects of less than 1%, which are sometimes difficult to separate from stray-field-induced Hall or magnetoresistance effects. In contrast, spin injection from magnetic semiconductors has already been demonstrated successfully using an optical detection method. Typically, the experiments on spin injection from a ferromagnetic contact are performed using a device with a simple injector-detector geometry, where a ferromagnetic metal contact is used to inject spin-polarized carriers into a two dimensional electron gas (2DEG). A spin polarization of the current is expected from the different conductivities for spin-up and spin-down electrons in the ferromagnet, which result from the different densities of states. For the full device, this should result in a conductance which depends on the relative magnetization of the two contacts.
A simple linear-response model for transport across a ferromagnetic/normal-metal interface, which nonetheless incorporates the detailed behaviour of the electrochemical potentials for both spin directions, was first introduced by van Son et al.. Based on a more detailed (Boltzmann) approach, the model was developed further by Valet and Fert for all-metal multilayers and GMR. Furthermore, it was applied by Jedema et al. to superconductor-ferromagnet junctions. For the interface between a ferromagnetic and a normal metal, van Son et al. obtain a splitting of the electrochemical potentials for spin-up and spin-down electrons in the region of the interface. The model was applied only to a single contact and its boundary resistance. We now apply a similar model to a system in which the material properties differ considerably. Our theory is based on the assumption that spin scattering occurs on a much slower timescale than other electron scattering events. Under this assumption, two electrochemical potentials $`\mu _{\uparrow }`$ and $`\mu _{\downarrow }`$, which need not be equal, can be defined for both spin directions at any point in the device. If the current flow is one-dimensional in the $`𝐱`$-direction, the electrochemical potentials are connected to the current via the conductivity $`\sigma `$, the diffusion constant D, and the spin-flip time constant $`\tau _{sf}`$ by Ohm’s law and the diffusion equation, as follows: $$\frac{\partial \mu _{\uparrow ,\downarrow }}{\partial x}=\frac{ej_{\uparrow ,\downarrow }}{\sigma _{\uparrow ,\downarrow }}$$ (2) $$\frac{\mu _{\uparrow }-\mu _{\downarrow }}{\tau _{sf}}=D\frac{\partial ^2(\mu _{\uparrow }-\mu _{\downarrow })}{\partial x^2}$$ (3) where D is a weighted average of the different diffusion constants for both spin directions. Without loss of generality, we assume a perfect interface without spin scattering or interface resistance, such that the electrochemical potentials $`\mu _{\uparrow ,\downarrow }`$ and the current densities $`j_{\uparrow ,\downarrow }`$ are continuous.
Starting from these equations, straightforward algebra leads to a splitting of the electrochemical potentials at the boundary of the two materials, which is proportional to the total current density at the interface. The difference $`(\mu _{\uparrow }-\mu _{\downarrow })`$ between the electrochemical potentials decays exponentially inside the materials, approaching zero at $`\pm \mathrm{\infty }`$: $$\mu _{\uparrow }(\pm \mathrm{\infty })=\mu _{\downarrow }(\pm \mathrm{\infty })$$ (4) A typical lengthscale for the decay of $`(\mu _{\uparrow }-\mu _{\downarrow })`$ is the spin-flip length $`\lambda =\sqrt{D\tau _{sf}}`$ of the material. In a semiconductor, the spin-flip length $`\lambda _{sc}`$ can exceed its ferromagnetic counterpart $`\lambda _{fm}`$ by several orders of magnitude. In the limit of infinite $`\lambda _{sc}`$, this leads to a splitting of the electrochemical potentials at the interface which stays constant throughout the semiconductor. If the semiconductor extends to $`\mathrm{\infty }`$, Eqs. (2) and (3) in combination with Eq. (4) imply a linear and parallel slope of the electrochemical potentials for spin-up and spin-down in the semiconductor, forbidding injection of a spin-polarized current if the conductivities for both spin channels in the 2DEG are equal. At the same time, we see that the ferromagnetic contact influences the electron system of the semiconductor over a lengthscale of the order of the spin-flip length in the semiconductor. A second ferromagnetic contact applied at a distance smaller than the spin-flip length may thus lead to a considerably different behaviour depending on its spin-polarization. In the following, we will apply the theory to a one dimensional system in which a ferromagnet (index $`i=1`$) extending from $`x=-\mathrm{\infty }`$ to $`x=0`$ is in contact with a semiconductor (index $`i=2`$, $`0<x<x_0`$), which again is in contact with a second ferromagnet (index $`i=3`$, $`x_0\le x<\mathrm{\infty }`$).
This system corresponds to a network of resistors $`R_{1\uparrow ,\downarrow }`$, $`R_{SC\uparrow ,\downarrow }`$, and $`R_{3\uparrow ,\downarrow }`$, representing the two independent spin channels in the three different regions as sketched in Fig. (1a). The ($`x`$-dependent) spin-polarization of the current density at position $`x`$ is defined as $$\alpha _i(x)\equiv \frac{j_{i\uparrow }(x)-j_{i\downarrow }(x)}{j_{i\uparrow }(x)+j_{i\downarrow }(x)}$$ (5) where we set the bulk spin-polarization in the ferromagnets far from the interface $`\alpha _{1,3}(\pm \mathrm{\infty })\equiv \beta _{1,3}`$. The conductivities for the spin-up and spin-down channels in the ferromagnets can now be written as $`\sigma _{1,3\uparrow }=\sigma _{1,3}(1+\beta _{1,3})/2`$ and $`\sigma _{1,3\downarrow }=\sigma _{1,3}(1-\beta _{1,3})/2`$. We assume that the physical properties of both ferromagnets are equal, but allow their magnetization to be either parallel ($`\beta _1=\beta _3`$ and $`R_{1\uparrow ,\downarrow }=R_{3\uparrow ,\downarrow }`$) or antiparallel ($`\beta _1=-\beta _3`$ and $`R_{1\uparrow ,\downarrow }=R_{3\downarrow ,\uparrow }`$). In the linear-response regime, the difference in conductivity for the spin-up and the spin-down channel in the ferromagnets can easily be deduced from the Einstein relation with $`D_{i\uparrow }\ne D_{i\downarrow }`$ and $`\rho _{i\uparrow }(E_F)\ne \rho _{i\downarrow }(E_F)`$, where $`\rho (E_F)`$ is the density of states at the Fermi energy, and $`D`$ the diffusion constant. To separate the spin-polarization effects from the normal current flow, we now write the electrochemical potentials in the ferromagnets for both spin directions as $`\mu _{i\uparrow ,\downarrow }=\mu _i^0+\mu _{i\uparrow ,\downarrow }^{}`$ $`(i=1,3)`$, $`\mu ^0`$ being the electrochemical potential without spin effects. For each part $`i`$ of the device, Eqs. (2) and (3) apply separately. As solutions of the diffusion equation, we make the Ansatz $$\mu _{i\uparrow ,\downarrow }=\mu _i^0+\mu _{i\uparrow ,\downarrow }^{}=\mu _i^0+c_{i\uparrow ,\downarrow }\mathrm{exp}\left(\pm (x-x_i)/\lambda _{fm}\right)$$ (6) for $`i=1,3`$ with $`x_1=0`$, $`x_3=x_0`$, and the + (-) sign referring to index $`1`$ ($`3`$), respectively.
From the boundary conditions $`\mu _{1\uparrow }(-\mathrm{\infty })=\mu _{1\downarrow }(-\mathrm{\infty })`$ and $`\mu _{3\uparrow }(\mathrm{\infty })=\mu _{3\downarrow }(\mathrm{\infty })`$, we have that the slope of $`\mu ^0`$ is identical for both spin directions, and also equal in regions 1 and 3 if the conductivity $`\sigma `$ is identical in both regions, as assumed above. In addition, these boundary conditions imply that the exponential part of $`\mu `$ must behave as $`c\mathrm{exp}(x/\lambda _{fm})`$ in region 1 and as $`c\mathrm{exp}(-(x-x_0)/\lambda _{fm})`$ in region 3. In the semiconductor we set $`\tau _{sf}=\mathrm{\infty }`$, based on the assumption that the spin-flip length $`\lambda _{sc}`$ is several orders of magnitude longer than in the ferromagnet and much larger than the spacing between the two contacts. This is correct for several material systems, as semiconductor spin-flip lengths up to $`100\mu `$m have already been demonstrated. In this limit, we thus can write the electrochemical potentials for spin-up and spin-down in the semiconductor as $$\mu _{2\uparrow ,\downarrow }(x)=\mu _{1\uparrow ,\downarrow }(0)+\gamma _{\uparrow ,\downarrow }x\text{,}\gamma _{\uparrow ,\downarrow }=constant$$ (7) While the conductivities of both spin channels in the ferromagnet are different, they have to be equal in the two dimensional electron gas. This is because in the 2DEG the density of states at the Fermi level is constant, and in the diffusive regime the conductivity is proportional to the density of states at the Fermi energy. Each spin channel will thus exhibit half the total conductivity of the semiconductor ($`\sigma _{2\uparrow ,\downarrow }=\sigma _{sc}/2`$). If we combine Eqs. (2) and (6) and solve in region 1 at the boundary $`x=0`$ and in region 3 at $`x=x_0`$, we are in a position to sketch the band bending in the overall device.
From symmetry considerations and the fact that $`j_{2\uparrow }`$ and $`j_{2\downarrow }`$ remain constant through the semiconductor (no spin-flip) we have $$\mu _{1\uparrow }(0)-\mu _{1\downarrow }(0)=\pm \left(\mu _{3\downarrow }(x_0)-\mu _{3\uparrow }(x_0)\right)$$ (8) where the +(-) sign refers to parallel (antiparallel) magnetization, respectively. This yields $`c_{1\uparrow }=-c_{3\uparrow }`$ and $`c_{1\downarrow }=-c_{3\downarrow }`$ in the expression for $`\mu ^{}`$ in Eq. (6) for the parallel case, which is shown schematically in Fig. (1b). The antisymmetric splitting of the electrochemical potentials at the interfaces leads to a different slope and a crossing of the electrochemical potentials at $`x=x_0/2`$. We thus obtain a different voltage drop for the two spin directions over the semiconductor, which leads to a spin-polarization of the current. In the antiparallel case, where the minority spins on the left couple to the majority spins on the right, the solution is $`c_{1\uparrow }=c_{3\uparrow }`$ and $`c_{1\downarrow }=c_{3\downarrow }`$ with $`j_{\uparrow }=j_{\downarrow }`$. A schematic drawing is shown in Fig. (1c). The splitting is symmetric and the current is unpolarized. The physics of this result may readily be understood from the resistor model (Fig. 1a). For parallel (antiparallel) magnetization we have $`R_{1\uparrow }+R_{3\uparrow }\ne R_{1\downarrow }+R_{3\downarrow }`$ $`(R_{1\uparrow }+R_{3\uparrow }=R_{1\downarrow }+R_{3\downarrow })`$, respectively. Since the voltage across the complete device is identical for both spin channels, this results either in a different (parallel) or an identical (antiparallel) voltage drop over $`R_{SC\uparrow }`$ and $`R_{SC\downarrow }`$.
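The resistor network of Fig. (1a) can be evaluated directly. The following sketch (all parameter values are illustrative assumptions, not fitted numbers) shows that the parallel configuration carries a spin-polarized current while the antiparallel one does not, and that the resistance change is tiny:

```python
# Two-channel resistor network of Fig. (1a). Each spin channel consists of a
# resistance lambda_fm/sigma_(contact,spin) per ferromagnetic contact plus
# x0/(sigma_sc/2) in the 2DEG. All numbers below are illustrative.
beta, lam_fm, x0 = 0.6, 10e-9, 1e-6
sigma_sc, sigma_fm = 1.0, 1e4

r_maj = lam_fm / (sigma_fm * (1 + beta) / 2)   # one majority-spin contact
r_min = lam_fm / (sigma_fm * (1 - beta) / 2)   # one minority-spin contact
r_sc = x0 / (sigma_sc / 2)                     # 2DEG, per spin channel

# Parallel: the up channel crosses two majority contacts, the down channel
# two minority contacts -> different channel resistances, polarized current.
R_up, R_dn = 2 * r_maj + r_sc, 2 * r_min + r_sc
alpha_par = (1 / R_up - 1 / R_dn) / (1 / R_up + 1 / R_dn)
R_par = R_up * R_dn / (R_up + R_dn)

# Antiparallel: each channel crosses one majority and one minority contact,
# so the two channel resistances are equal -> unpolarized current.
R_ch = r_maj + r_min + r_sc
alpha_anti = 0.0                               # by symmetry of the channels
R_anti = R_ch / 2

dR_rel = (R_anti - R_par) / R_par
print(alpha_par, dR_rel)   # both tiny for metallic contacts (sigma_fm >> sigma_sc)
```

For these metallic-contact values the polarization is of order $`10^{-6}`$ and the relative resistance change smaller still, in line with the discussion above.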
For parallel magnetization ($`\beta _1=\beta _3=\beta `$) the finite spin-polarization of the current density in the semiconductor can be calculated explicitly by using the continuity of $`j_{i\uparrow ,\downarrow }`$ at the interfaces under the boundary condition of charge conservation for $`(j_{i\uparrow }+j_{i\downarrow })`$ and may be expressed as: $$\alpha _2=\beta \text{ }\frac{\lambda _{fm}}{\sigma _{fm}}\text{ }\frac{\sigma _{sc}}{x_0}\text{ }\frac{2}{\left(2\frac{\lambda _{fm}\sigma _{sc}}{x_0\sigma _{fm}}+1\right)-\beta ^2}$$ (9) where $`\alpha _2`$ is evaluated at $`x=0`$ and is constant throughout the semiconductor, because above we have set $`\tau _{sf}=\mathrm{\infty }`$ in the semiconductor. For a typical ferromagnet, $`\alpha _2`$ is dominated by $`(\lambda _{fm}/\sigma _{fm})/(x_0/\sigma _{sc})`$, where $`x_0/\sigma _{sc}`$ and $`\lambda _{fm}/\sigma _{fm}`$ are the resistance of the semiconductor and the relevant part of the resistance of the ferromagnet, respectively. The maximum obtainable value for $`\alpha _2`$ is $`\beta `$. However, this maximum can only be obtained in certain limiting cases, i.e., $`x_0\rightarrow 0`$, $`\sigma _{sc}/\sigma _{fm}\rightarrow \mathrm{\infty }`$, or $`\lambda _{fm}\rightarrow \mathrm{\infty }`$, which are far away from a real-life situation. If, e.g., we insert some typical values for a spin-injection device ($`\beta =60\%`$, $`x_0=1\mu `$m, $`\lambda _{fm}=10`$ nm, and $`\sigma _{fm}=10^4\sigma _{sc}`$), we obtain $`\alpha _2\approx 0.002\%`$. The dependence of $`\alpha _2`$ on the various parameters is shown graphically in Figs. (2a) and (2b), where $`\alpha _2`$ is plotted versus $`x_0`$ and $`\lambda _{fm}`$, respectively, for three different values of $`\beta `$. Apparently, even for $`\beta >80\%`$, $`\lambda _{fm}`$ must be larger than $`100`$ nm or $`x_0`$ well below $`10`$ nm in order to obtain significant (i.e., $`>1\%`$) current polarization. The dependence of $`\alpha _2`$ on $`\beta `$ is shown in Fig. (3a) for three different ratios $`\sigma _{fm}/\sigma _{sc}`$.
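Eq. (9) can be evaluated directly; in the sketch below the helper name `alpha2` and the limiting-case parameter values are our own illustrative choices:

```python
def alpha2(beta, lam_fm, x0, sigma_ratio):
    """Current spin-polarization in the semiconductor, Eq. (9).
    sigma_ratio = sigma_sc / sigma_fm (illustrative helper, our naming)."""
    X = lam_fm * sigma_ratio / x0      # (lambda_fm/sigma_fm) / (x0/sigma_sc)
    return beta * X * 2.0 / ((2.0 * X + 1.0) - beta**2)

# Typical device values from the text: beta = 60%, x0 = 1 um,
# lambda_fm = 10 nm, sigma_fm = 1e4 * sigma_sc.
a_typ = alpha2(0.6, 10e-9, 1e-6, 1e-4)
print(a_typ)          # a very small number, far below the 0.1% bound

# In the limiting cases (e.g. x0 -> 0) alpha2 approaches its maximum, beta.
a_lim = alpha2(0.6, 10e-9, 1e-15, 1e-4)
print(a_lim)          # close to 0.6
```

The typical-device value is orders of magnitude below a percent, while an (unphysically) vanishing contact spacing recovers the maximum $`\beta `$.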
Even for a ratio of 10, $`\alpha _2`$ is smaller than $`1\%`$ for $`\beta <98\%`$, where the other parameters correspond to a realistic device. By calculating the electrochemical potential throughout the device we may also obtain $`R_{par}`$ and $`R_{anti}`$, which we define as the total resistance in the parallel or antiparallel configuration, respectively. The resistance is calculated for a device with ferromagnetic contacts of thickness $`\lambda _{fm}`$, because this is the only lengthscale on which spin-dependent resistance changes will occur. In a typical experimental setup, the difference in resistance $`\mathrm{\Delta }R=(R_{anti}-R_{par})`$ between the antiparallel and the parallel configuration will be measured. To estimate the magnitude of the magnetoresistance effect, we calculate $`\mathrm{\Delta }R/R_{par}`$ and readily find $$\frac{\mathrm{\Delta }R}{R_{par}}=\frac{\beta ^2}{1-\beta ^2}\text{ }\frac{\lambda _{fm}^2}{\sigma _{fm}^2}\text{ }\frac{\sigma _{sc}^2}{x_0^2}\text{ }\frac{4}{\left(2\frac{\lambda _{fm}\sigma _{sc}}{x_0\sigma _{fm}}+1\right)^2-\beta ^2}$$ (10) Now, for metallic ferromagnets, $`\mathrm{\Delta }R/R_{par}`$ is dominated by $`(\lambda _{fm}/\sigma _{fm})^2/(x_0/\sigma _{sc})^2`$ and is approximately $`\alpha _2^2`$. In the limit of $`x_0\rightarrow 0`$, $`\sigma _{sc}/\sigma _{fm}\rightarrow \mathrm{\infty }`$, or $`\lambda _{fm}\rightarrow \mathrm{\infty }`$, we again obtain a maximum, which is now given by $$\frac{\mathrm{\Delta }R}{R_{par}}=\frac{\beta ^2}{(1-\beta )(1+\beta )}$$ (11) Fig. (3b) shows the dependence of $`\alpha _2`$ and $`\mathrm{\Delta }R/R_{par}`$ on $`\beta `$, for a realistic set of parameters. Obviously, the change in resistance will be difficult to detect in a standard experimental setup. We have thus shown that, in the diffusive transport regime, only a current with a small spin-polarization can be injected from a typical ferromagnet into a semiconductor 2DEG with a long spin-flip length, even if the conductivities of semiconductor and ferromagnet are equal (Fig. (3a)).
This situation is dramatically exacerbated when ferromagnetic metals are used; in this case the spin-polarization in the semiconductor is negligible. Evidently, for efficient spin-injection one needs a contact where the spin-polarization is almost 100%. One example of such a contact has already been demonstrated: the giant Zeeman splitting in a semimagnetic semiconductor can be utilized to force all current-carrying electrons to align their spin with the lower Zeeman level. Other promising routes are ferromagnetic semiconductors, the so-called Heusler compounds, or other half-metallic ferromagnets. Experiments in the ballistic transport regime (where $`x_0/\sigma _{sc}`$ has to be replaced by the Sharvin contact resistance) may circumvent part of the problem outlined above. However, a splitting of the electrochemical potentials in the ferromagnets, necessary to obtain spin-injection, will again only be possible if the resistance of the ferromagnet is of comparable magnitude to the contact resistance. ###### Acknowledgements. This work was supported by the European Commission (ESPRIT-MELARI consortium ’SPIDER’), the German BMBF under grant #13N7313 and the Dutch Foundation for Fundamental Research FOM.
# Review on $`\alpha _s`$ at LEP ## 1 INTRODUCTION The theoretical description of hadron production in $`e^+e^{-}`$-annihilation consists of three parts. The first part is based on the fundamentals of the standard model: Feynman diagrams are used to (perturbatively) calculate the electroweak process of $`e^+e^{-}`$-annihilation and the evolution of partons under the strong interaction. At some stage the partons must be combined to become hadrons. This hadronisation process cannot be described by perturbation theory and thus constitutes a second (non-perturbative) part. Finally, the decay of unstable hadrons, which can be described by kinematics using experimentally measured decay rates, needs to be included before the prediction can be confronted with data. Several different predictions exist for the first two parts. Calculations of the evolution of quarks and gluons are available in fixed order $`𝒪`$($`\alpha _s^2`$) (recently some observables have become available in $`𝒪`$($`\alpha _s^3`$)) and in the next-to-leading-log approximation (NLLA), which resums large logarithms to all orders of $`\alpha _s`$. When combining $`𝒪`$($`\alpha _s^2`$) with NLLA, matching ambiguities occur, leading to even more competing predictions. To describe the second part (the hadronisation), usually generator-based models are used. The most reliable of these models are the Lund string fragmentation, implemented in Jetset, and the cluster fragmentation, implemented in Herwig. Recently, analytical predictions, so-called power corrections, became available to describe the influence of hadronisation on event shape observables. Given the many different possibilities to determine $`\alpha _s`$, it was necessary to restrict this review due to time and space limitations. I have chosen to review the analyses using event shape observables to determine $`\alpha _s`$, as this field is currently very active at LEP.
In the following I will first discuss the standard method for measuring $`\alpha _s`$ from event shapes, which uses the $`𝒪`$($`\alpha _s^2`$), NLLA or combined $`𝒪`$($`\alpha _s^2`$)+NLLA prediction in combination with generator-based hadronisation models. In the second part the energy dependence of $`\alpha _s`$ will be discussed, introducing the alternative analytic description of hadronisation by power corrections. Finally, first results obtained using the new $`𝒪`$($`\alpha _s^3`$) predictions are presented. In all three sections Delphi analyses shall serve as showcases. ## 2 CONSISTENT DETERMINATION OF THE STRONG COUPLING The original motivation for repeating an $`\alpha _s`$-measurement with LEP1 data in Delphi was to include the event orientation, given e.g. by the polar angle of the thrust axis $`\theta _T`$, in the fit. Second order coefficients including such an event orientation became available in $`𝒪`$($`\alpha _s^2`$) through the Event2 program by Catani and Seymour. Using the fully reprocessed LEP1 data, event shapes were measured very precisely in eight bins of $`\theta _T`$. The small systematic errors result not only from the good quality of the final data reprocessing, but also from the fact that all detector corrections in this measurement were naturally calculated for each $`\theta _T`$-bin separately. The second order prediction for such distributions contains coefficients $`A`$ and $`B`$ which depend on the value of the observable itself and on $`\theta _T`$: $$\frac{1}{N}\frac{\mathrm{d}^2N}{\mathrm{d}y\mathrm{d}\theta _T}=A(y,\theta _T)\frac{\alpha _s(\mu )}{2\pi }+\left(A(y,\theta _T)2\pi b_0\mathrm{ln}\left(\frac{\mu ^2}{E_{\mathrm{cm}}^2}\right)+B(y,\theta _T)\right)\left(\frac{\alpha _s(\mu )}{2\pi }\right)^2$$ (1) $`\mu `$ being the renormalisation scale and $`b_0=(33-2N_f)/12\pi `$.
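To illustrate how a prediction of the form of Eq. (1) is used in a fit, the following sketch extracts $`\alpha _s`$ from a single hypothetical bin; the coefficients `A`, `B` and the "measured" value are invented for illustration only (the $`\theta _T`$ dependence is suppressed):

```python
import math

# Toy illustration of the renormalisation-scale ambiguity in an O(alpha_s^2)
# extraction based on Eq. (1). A, B and R_meas are hypothetical numbers chosen
# only to mimic a typical event-shape bin; they are not Delphi results.
Nf = 5
b0 = (33 - 2 * Nf) / (12 * math.pi)

def extract_alpha_s(R_meas, A, B, x_mu):
    """Solve A*h + (A*2*pi*b0*ln(x_mu) + B)*h^2 = R_meas for h = alpha_s/2pi."""
    Beff = A * 2 * math.pi * b0 * math.log(x_mu) + B
    h = (-A + math.sqrt(A**2 + 4 * Beff * R_meas)) / (2 * Beff)
    return 2 * math.pi * h

A, B = 2.0, 50.0
h0 = 0.12 / (2 * math.pi)
R_meas = A * h0 + B * h0**2      # constructed so that x_mu = 1 returns 0.12

as_scale1 = extract_alpha_s(R_meas, A, B, 1.0)     # recovers 0.12 exactly
as_scale001 = extract_alpha_s(R_meas, A, B, 0.01)  # alpha_s(mu) at mu^2 = 0.01*E^2
print(as_scale1, as_scale001)
```

The coupling extracted at the smaller scale comes out larger; at a fixed perturbative order this scale dependence does not cancel, which is precisely the ambiguity discussed in the text.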
Besides $`\alpha _s`$, this formula contains the renormalisation scale, parametrised as $`x_\mu =\mu ^2/E_{\mathrm{cm}}^2`$, as a free parameter. As a first step, the dependence of the resulting $`\alpha _s`$-values on this parameter was investigated. It is found that different observables acquire largely different dependencies on $`x_\mu `$. Also the scales with the optimal $`\chi ^2/\mathrm{ndf}`$ vary widely (see Fig. 1 left and middle). In spite of this large variation of the optimal scale, the $`\alpha _s`$ results corresponding to the experimentally optimised scales show a smaller spread for a large number of different event shapes than the results obtained with $`x_\mu =1`$. In contrast to the results with scale $`1`$, optimised scales yield consistent values of $`\alpha _s`$ without assuming any renormalisation scale error (see Fig. 1 right). In addition, the scale dependence of $`\alpha _s`$ near the optimised scale is smaller than for $`x_\mu =1`$, so that the scale variation between half and twice the chosen value of $`x_\mu `$ leads to a smaller scale uncertainty for optimised scales. Averaging over all 18 investigated observables, the final result is $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.1228\pm 0.0119\text{ (}x_\mu =1\text{)}`$ (2) $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.1173\pm 0.0026\text{ (opt. scales),}`$ (3) where for the optimised scales also the $`b`$-quark mass corrections are included. The overall fit quality of these results is far better for the optimised scales than for $`x_\mu =1`$, as can be seen from the $`\mathrm{\Delta }\chi ^2`$ in Fig. 1. Thus the larger spread of the results with $`x_\mu =1`$ can be attributed to a less stable fit procedure, which is caused by the poor agreement between data and prediction. To cross-check the results, the scale dependence was also investigated for NLLA and combined $`𝒪`$($`\alpha _s^2`$)+NLLA predictions.
In contrast to the $`𝒪`$($`\alpha _s^2`$) results, the required relative renormalisation scales are close to one, and thus there is no significant change compared to the results obtained with $`x_\mu =1`$. Moreover, because of the limited fit range available for NLLA and because of the matching ambiguities for combined $`𝒪`$($`\alpha _s^2`$)+NLLA predictions, the total error for these methods is larger than for the $`𝒪`$($`\alpha _s^2`$) result: $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.116\pm 0.006`$ (NLLA) (4) $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.119\pm 0.005`$ ($`𝒪`$($`\alpha _s^2`$)+NLLA) (5) Both results are in good agreement with each other and with the $`𝒪`$($`\alpha _s^2`$) results. There are two other publications in which optimised scales are used to determine $`\alpha _s`$ at $`E_{\mathrm{cm}}=M_\mathrm{Z}`$: Already in 1992, Opal reported a clear improvement in the fit quality with optimised scales, using 14 observables. They quote $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.118_{-0.003}^{+0.007}\text{ (opt. scales),}`$ (6) where the error includes a scale variation from the optimised scale up to a scale of 1. In 1996, Burrows et al. investigated 15 observables using SLD data. In contrast to Opal and Delphi, they found no significant reduction of the spread of $`\alpha _s`$-values, though the shift to lower values of $`\alpha _s`$ when using optimised scales is reproduced: $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.1265\pm 0.0076\text{ (}x_\mu =1\text{)}`$ (7) $`\alpha _s(M_\mathrm{Z})`$ $`=`$ $`0.1173\pm 0.0071\text{ (opt. scales).}`$ (8) In spite of the extra theoretical uncertainties due to matching ambiguities, three of four LEP experiments today use the combined $`𝒪`$($`\alpha _s^2`$)+NLLA calculation to determine their central $`\alpha _s`$-value (the second error for L3 is the theoretical component): $$\begin{array}{cccc}\hfill \alpha _s(M_\mathrm{Z})& =& 0.1216\pm 0.0039\hfill & \text{Aleph}\text{ }\text{[5]}\hfill \\ \hfill \alpha _s(M_\mathrm{Z})& =& 0.1220\pm 0.0015\pm 0.0060\hfill & \text{L3}\text{ }\text{[6]}\hfill \\ \hfill \alpha _s(M_\mathrm{Z})& =& 0.120\pm 0.006\hfill & \text{Opal}\text{ }\text{[3]}\text{.}\hfill \end{array}$$ (9) ## 3 ENERGY DEPENDENCE The increase of beam energy accomplished during the LEP2 programme gives access to the energy dependence of event shapes and thereby to the energy dependence of $`\alpha _s`$. ### 3.1 Power Corrections Using data at different energies also allows one to replace the generator-based hadronisation models by an analytical ansatz. For mean values this ansatz (which was developed by Dokshitzer and Webber) describes the hadronisation by an additive term: $$\langle f\rangle =\frac{1}{\sigma _{\mathrm{tot}}}\int f\frac{\mathrm{d}\sigma }{\mathrm{d}f}\mathrm{d}f=f_{\mathrm{pert}}+f_{\mathrm{pow}}\text{.}$$ (10) The second order perturbative prediction is given by Eq. (1) with the coefficients $`A`$ and $`B`$ integrated over $`y`$ and $`\theta _T`$. The power correction term falls off like the inverse centre-of-mass energy and is given by $$f_{pow}=c_f\frac{4C_F}{\pi ^2}\mathcal{M}\frac{\mu _I}{E_{\mathrm{cm}}}\left[\alpha _0(\mu _I)-\alpha _s(\mu )-\left(b_0\mathrm{log}\frac{\mu ^2}{\mu _I^2}+\frac{K}{2\pi }+2b_0\right)\alpha _s^2(\mu )\right]$$ where $`\alpha _0`$ is a non-perturbative parameter accounting for the contributions to the event shape below an infrared matching scale $`\mu _I`$, and $`K=(67/18-\pi ^2/6)C_A-5N_f/9`$. The Milan factor $`\mathcal{M}`$ is set to 1.8, which corresponds to three active flavours in the non-perturbative region.
The observable-dependent coefficient $`c_f`$ is 2 and 1 for $`f=1-T`$ and $`f=M_\mathrm{h}^2/E_{\mathrm{vis}}^2`$, respectively. For $`B_{\mathrm{max}}`$ the coefficient is itself energy dependent: $`c_{B_{\mathrm{max}}}\propto 1/\sqrt{\alpha _s(E_{\mathrm{cm}})}`$. The infrared matching scale is set to 2 GeV as suggested by the authors, and the renormalisation scale $`\mu `$ is set equal to $`E_{\mathrm{cm}}`$. Besides $`\alpha _s`$, these formulae contain $`\alpha _0`$ as the only free parameter. In order to measure $`\alpha _s`$ from individual high energy data, this parameter has to be known. To infer $`\alpha _0`$, a combined fit of $`\alpha _s`$ and $`\alpha _0`$ to a large set of measurements at different energies is performed. For $`E_{\mathrm{cm}}\ne M_\mathrm{Z}`$ only Delphi measurements are included in the fit. The resulting values of $`\alpha _0`$ for $`B_{\mathrm{max}}`$, $`1-T`$ and $`M_\mathrm{h}^2/E_{\mathrm{vis}}^2`$ are summarised in Tab. 1. Even though the $`\alpha _0`$ values found are experimentally inconsistent, the universality of $`\alpha _0`$ is not violated, because the expected theoretical precision allows deviations of up to 20%. The $`\alpha _0`$ values are consistent with the corresponding analyses of L3 and Opal/Jade. After fixing $`\alpha _0`$ to the values found, $`\alpha _s`$ can be calculated individually for each energy using Eq. (10) together with the power-correction formula above. The results for energies between 65 GeV and 189 GeV of this method and of the traditional methods described in the previous section are compared in Fig. 2. All methods give consistent results. ### 3.2 Energy dependence of $`\alpha _s`$ To measure the energy dependence of $`\alpha _s`$, Delphi uses the logarithmic energy slope of the inverse coupling.
This quantity is directly proportional to the Callan-Symanzik $`\beta `$-function and is independent of $`\alpha _s`$ and of $`E_{\mathrm{cm}}`$ to first order: $`{\displaystyle \frac{\mathrm{d}\alpha _s^{-1}}{\mathrm{d}\mathrm{log}E_{\mathrm{cm}}}}`$ $`=`$ $`-{\displaystyle \frac{1}{\alpha _s^2}}\beta _{\alpha _s}={\displaystyle \frac{\beta _0}{2\pi }}+{\displaystyle \frac{\beta _1}{4\pi ^2}}\alpha _s+\mathrm{\dots }\approx 1.27`$ (11) The numerical value represents the QCD prediction calculated in second order for energies between 91 GeV and 200 GeV using the PDG's world average of $`\alpha _s`$. The energy dependence of this derivative in the given range and the uncertainty of $`\alpha _s`$ influence this value by about one unit in the last digit. The result with the smallest systematic error is obtained from $`𝒪`$($`\alpha _s^2`$)+NLLA fits: $`{\displaystyle \frac{\mathrm{d}\alpha _s^{-1}}{\mathrm{d}\mathrm{log}E_{\mathrm{cm}}}}`$ $`=`$ $`1.12\pm 0.22\pm 0.18`$ (12) in good agreement with the QCD expectation. Instead of determining a value for the energy dependence explicitly, L3 checks the running of $`\alpha _s`$ by applying a combined fit to all energies, assuming the Standard Model running to $`𝒪`$($`\alpha _s^3`$). This yields a $`\chi ^2/\mathrm{ndf}=15.4/12`$, also indicating good agreement. ## 4 $`𝒪`$($`\alpha _s^3`$) Third order calculations for four-parton final states have become available recently. These calculations can be used to measure $`\alpha _s`$ in next-to-leading order from event shapes that acquire non-trivial values only for four and more partons, like the four-jet rate $`R_4`$: $$R_4=B\left(\frac{\alpha _s(\mu )}{2\pi }\right)^2+\left(2B\cdot 2\pi b_0\mathrm{log}\frac{\mu ^2}{E_{\mathrm{cm}}^2}+C\right)\left(\frac{\alpha _s(\mu )}{2\pi }\right)^3$$ (13) Observables of this kind are uncorrelated with the observables discussed so far, which are based on three-jet-like configurations.
The reduced number of relevant events is partly compensated by the increased sensitivity of Eq. (13), which is due to its quadratic leading-order $`\alpha _s`$ dependence. Delphi investigated four-jet rates $`R_4(y_{\mathrm{cut}})`$ at a given $`y_{\mathrm{cut}}`$ for the Durham cluster algorithm. The measured rate shows good agreement with the prediction for $`y_{\mathrm{cut}}>0.002`$. Using $`y_{\mathrm{cut}}=0.0025`$, the resulting $`\alpha _s`$ values are consistent with the results shown in the previous section, but they have larger statistical errors. Also the running obtained from this analysis is in good agreement: $`{\displaystyle \frac{\mathrm{d}\alpha _s^{-1}}{\mathrm{d}\mathrm{log}E_{\mathrm{cm}}}}`$ $`=`$ $`1.16\pm 0.46\text{.}`$ (14) This new calculation has also been investigated by Aleph, but has not yet been used to determine the strong coupling. ## 5 SUMMARY This talk has attempted to give an overview of the current state of the art in measuring $`\alpha _s`$ from event shape observables at LEP. Besides the different perturbative calculations, several hadronisation models (generator-based and analytical ones) exist. The previous standard choice for the perturbative calculation, $`𝒪`$($`\alpha _s^2`$)+NLLA, is questioned by Delphi due to poor data description and unsatisfactory consistency. In that sense, measurements of $`\alpha _s`$ from LEP data are still in full progress. In addition to the methods used, special attention is paid to tests of the running, and new theoretical developments like power corrections and $`𝒪`$($`\alpha _s^3`$) calculations gain due recognition.
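As a closing numerical cross-check of the QCD expectation quoted in Eq. (11), one can integrate the two-loop running of $`\alpha _s`$ between 91.2 GeV and 200 GeV and compute the logarithmic slope of the inverse coupling directly. The $`\beta `$-function convention below is one common choice and is assumed here, not taken from the talk:

```python
import math

# Two-loop running, assuming d alpha_s / d ln E = -(b0/2pi) a^2 - (b1/4pi^2) a^3
# with b0 = 11 - 2*Nf/3 and b1 = 51 - 19*Nf/3 (Nf = 5, one common convention).
# Starting value alpha_s(M_Z) = 0.119 is the world average quoted in the text.
Nf = 5
beta0 = 11.0 - 2.0 * Nf / 3.0
beta1 = 51.0 - 19.0 * Nf / 3.0

def rhs(a):
    return -(beta0 / (2 * math.pi)) * a**2 - (beta1 / (4 * math.pi**2)) * a**3

a, E1, E2, n = 0.119, 91.2, 200.0, 10000
h = math.log(E2 / E1) / n
for _ in range(n):                 # 4th-order Runge-Kutta in t = ln E
    k1 = rhs(a); k2 = rhs(a + h * k1 / 2)
    k3 = rhs(a + h * k2 / 2); k4 = rhs(a + h * k3)
    a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

slope = (1.0 / a - 1.0 / 0.119) / math.log(E2 / E1)
print(slope)                       # approximately 1.27, as in Eq. (11)
```

The average slope over this energy range indeed comes out near 1.27, the value the measured slopes in Eqs. (12) and (14) are compared against.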
# Critical Temperature and Nonextensivity in Long-range Interacting Lennard-Jones-like Fluids In recent years, much attention has been paid to physical systems with microscopic long-range interactions. Nonextensive behavior and difficulties with the standard formulation of thermodynamics have long been known for this class of systems (see, for instance,). More precisely, in order to have a well defined thermodynamic limit in the usual sense, a system must have a finite free energy per particle. In other words, an increase of $`N`$ (number of particles) is expected to leave the free energy per particle unaltered if $`N>>1`$, i.e., the free energy $`F_N`$ should be asymptotically proportional to $`N`$. However, when the effective range of the interactions between particles decays slowly enough with the distance, $`F_N`$ increases with a higher power of $`N`$, hence $`F_N/N`$ diverges when $`N\rightarrow \mathrm{\infty }`$. According to many text-book interpretations of thermodynamics, the very concept of the thermodynamic limit is ill-defined in such cases and the formalism becomes unphysical (see, for instance,). In some sense (that will become transparent later on), this is a restricted interpretation of thermodynamics, and a possible way out of this difficulty has been recently proposed and illustrated through several examples. In the present effort, we shall address along these lines a Lennard-Jones-like fluid, with special emphasis on its gas-liquid phase transition and the associated critical point. To do this, we perform molecular dynamics simulations for the two- and three-dimensional cases. In order to understand the thermodynamical behavior of systems with long-range interacting particles, we consider a Lennard-Jones-like model with pair interactions characterized by a $`(12,\alpha )`$ potential. A variety of values for $`\alpha `$ can be associated with standard interactions in models for fluids.
For instance, for $`D=3`$, $`\alpha =6`$ corresponds to the standard Lennard-Jones fluid, $`\alpha =3`$ essentially corresponds to dipole-dipole interactions in systems with dipole configurations such that the interaction is attractive, $`\alpha =2`$ corresponds to dipole-monopole interactions (such as those relevant for the tides), $`\alpha =1`$ mimics the gravitational and the (unscreened) Coulombic interactions, and finally $`\alpha =0`$ corresponds to a mean field approach. For arbitrary $`D`$, the choice $`\alpha =D-2`$ corresponds to the isotropic solutions of the $`D`$-dimensional Poisson equation. Also, there are some special situations in which neutral, glassy, small spheres can interact through long-range potentials with low, and not even integer, $`\alpha `$. Such cases might occur, through a Casimir-like effect, due to the large fluctuations at the critical point of the standard fluid within which the little spheres are immersed. We might occasionally refer to such fluids as Casimir-like ones. For all these reasons, we shall focus herein on all the values of $`\alpha `$ in the interval $`0\le \alpha \le 6`$. An interesting though preliminary discussion of this class of fluids has been given by Grigera. Our aim in the present work is to provide a more detailed description which will hopefully give a suitable picture of some general thermodynamic properties of systems with long-range (and not singular at the origin) interactions. The two-body potential we shall assume is given as follows: $$v(r)=4ϵ\left[\left(\frac{\sigma }{r}\right)^{12}-\frac{1}{4}\frac{(48/\alpha )^{\alpha /12}}{(1-\alpha /12)^{1-\alpha /12}}\left(\frac{\sigma }{r}\right)^\alpha \right],$$ (1) where $`r`$ is the distance between particles and $`\sigma >0`$ and $`ϵ>0`$ are specific Lennard-Jones-like parameters. For the long-tailed attractive term $`r^{-\alpha }`$ in potential (1) we consider $`0\le \alpha <12`$.
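Eq. (1) can be checked numerically. The short sketch below (reduced units $`\sigma =ϵ=1`$; helper names are our own) verifies that the well depth is $`-ϵ`$ for several values of $`\alpha `$ and that $`\alpha =6`$ reduces to the standard Lennard-Jones form:

```python
import numpy as np

# Sanity check of Eq. (1): for alpha in [0, 12) the attractive coefficient is
# normalized so that the potential minimum equals -epsilon. Reduced units,
# sigma = epsilon = 1; the function name v is ours.
def v(r, alpha):
    c = 0.25 * (48.0 / alpha)**(alpha / 12.0) / (1.0 - alpha / 12.0)**(1.0 - alpha / 12.0)
    return 4.0 * (r**-12.0 - c * r**-alpha)

r = np.linspace(0.8, 5.0, 200001)
depths = {a: v(r, a).min() for a in (1.0, 3.0, 6.0)}
print(depths)     # every well depth is close to -1 (= -epsilon)

# For alpha = 6 the coefficient c equals 1, i.e. v(r) = 4[(1/r)^12 - (1/r)^6],
# the standard Lennard-Jones potential.
c6 = 0.25 * 8.0**0.5 / 0.5**0.5
```

The fine grid locates each minimum to high accuracy, confirming the normalization convention described in the text.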
The $`r^{-12}`$ term describes the short-range repulsive potential (due to the nonbonding overlap between the electron clouds for the $`D=3`$ case). The usual convention in the theory of fluids (for instance, standard Lennard-Jones fluids) is to define the diameter $`\sigma `$ via $`v(\sigma )=0`$. Due to the range of the interactions, this would not seem particularly helpful here. So, we rather make a different convention, namely the $`r^{-\alpha }`$ coefficient has been chosen so that the minimal value is $`v(r_{\text{min}})=-ϵ`$ in all cases. The Lennard-Jones-like $`(12,\alpha )`$ potentials as a function of $`r`$ are plotted in FIG 1 for typical values of $`\alpha `$. A main difficulty for fluid systems with long-range (but not infinitely long) interactions is the absence of any exact solution. In our present case, calculations were performed with the molecular dynamics method for systems of $`N`$ spherically-symmetric particles with periodic boundary conditions. Standard mean field (or van der Waals) theory for an integrable (i.e., $`\alpha >D`$) potential with a cutoff $`r_c`$ uses corrections $`\int _{r_c}^{\mathrm{\infty }}v(r)d^Dr`$. However, our main aim is to discuss the trend of thermodynamic quantities as a function of $`N`$ (the size of the system) and to provide a unified approach to the thermodynamic limit for the entire range ($`0\le \alpha \le 6`$). Consequently, no corrections are considered in the present treatment, and a cutoff distance of half of the box size was applied to the interaction; we verified that, in the $`N\rightarrow \mathrm{\infty }`$ limit, no physical consequences seemed to emerge from the adoption of this computational convenience. The equations of motion were solved using the Verlet algorithm and the temperature was kept constant by a weak coupling of the system to an external thermal bath. When starting a simulation from scratch, the initial configuration is generated by randomly distributing the molecules.
Then, the molecules evolve in such a way as to achieve the energetically most favorable positions and velocities. Full equilibrium is assumed only when the pressure, the potential energy and the total energy exhibit stable values. In general, the computational work is relatively heavy, since for systems with long-range interactions it is difficult to obtain very precise numerical data for quantities such as the critical temperature, for example. Nevertheless, as we will see, the trends of the present numerical data clearly exhibit, in all the cases we have studied here, the conjectured thermodynamical scalings . In the remainder of this paper, quantities are specified in the usual reduced (dimensionless) form in terms of the Lennard-Jones parameters $`\sigma `$ and $`ϵ`$. The reduced number density is $`\rho \sigma ^D`$ where $`\rho =N/V`$; the reduced temperature is $`k_BT/ϵ`$ where $`k_B`$ is the Boltzmann constant; the reduced pressure is $`\sigma ^DP/ϵ`$; finally, the reduced time is $`\sqrt{ϵ/m}t/\sigma `$, where $`m`$ is the mass of the particles. From now on we consider $`\sigma =ϵ=m=k_B=1`$. Special attention is paid to the point at which the critical isotherm ($`T=T_c`$, where $`T_c`$ corresponds to the critical temperature) simultaneously has a vanishing slope $`\left(\partial P/\partial V\right)_{T_c}=0`$ and an inflection point, i.e., where the curve changes from convex to concave, hence $`\left(\partial ^2P/\partial V^2\right)_{T_c}=0`$, in the pressure-volume plane. In fact, these properties simultaneously occur only at the critical point of the system. It is known that the standard Lennard-Jones fluid provides a handy model for testing liquid theories and for investigating such phenomena as melting, the liquid-vapor surface, nucleation, etc. Consistently, our interest in the present more general fluid stems mainly from the critical point of the liquid-vapor curve. The $`NVT`$-ensemble was used in our simulations. 
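The integration and thermostatting scheme just described (Verlet integration plus weak coupling to an external thermal bath) can be sketched as follows. This is a minimal toy version with a harmonic stand-in force and illustrative parameter values of our own choosing, not the paper's actual $`N`$-body code; the point is only the structure of the velocity-Verlet step followed by the bath-coupling velocity rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, T0 = 256, 0.01, 0.1, 1.0    # illustrative values, not the paper's
x = rng.normal(0.0, 1.0, N)             # 1-D particles in unit harmonic wells
vel = rng.normal(0.0, 3.0, N)           # start far from the target temperature

def force(x):
    return -x                           # harmonic stand-in for the pair forces

f = force(x)
for _ in range(2000):
    vel += 0.5 * dt * f                 # velocity Verlet: first half kick
    x += dt * vel                       # drift
    f = force(x)
    vel += 0.5 * dt * f                 # second half kick
    # weak coupling to a bath at temperature T0 (Berendsen-style rescaling)
    T = np.mean(vel ** 2)               # kinetic temperature with k_B = m = 1
    vel *= np.sqrt(1.0 + (dt / tau) * (T0 / T - 1.0))

T_final = np.mean(vel ** 2)             # relaxes to ~T0
```

The rescaling factor drives the kinetic temperature toward $`T_0`$ with relaxation time $`\tau `$, which is the sense in which the simulation is "weakly coupled" to the bath.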
Naturally, strictly speaking, only in the thermodynamic limit can we speak of the critical temperature. Nevertheless, even for finite $`N`$, we can define an effective “critical” temperature $`T_c(\alpha ;N)`$ by using the zero-slope and inflection-point criteria just mentioned. We have obtained the values for $`T_c(\alpha ;N)`$ from simulations with systems containing $`N=27,64,125,216`$ and $`512`$ particles in three dimensions, and $`N=25,49,100,225`$ and $`400`$ in two dimensions. In FIG 2, $`T_c(\alpha ;N)`$ as a function of $`N`$ is depicted for typical values of $`\alpha `$ for the $`D=3`$ case. For $`\alpha >D`$, the critical temperature shows a tendency to saturate in the limit $`N\to \infty `$, but the convergence becomes very slow when $`\alpha `$ approaches $`D`$ from above, consistently with what was already discussed for the total and the potential energies of a similar fluid ; as we shall see, for $`\alpha \le D`$, the approximate critical temperature diverges with $`N`$. A suitable plot for the curves associated with the extensive regime (i.e., $`\alpha >D`$) is depicted in the inset of FIG 2. Labels on the lines correspond to several values of $`\alpha `$ in the main figure ($`\alpha =1,2,3,4,5,6`$) and in the inset ($`\alpha =3.5,4,5,6`$). The $`D=2`$ case exhibits similar trends. In fact, all our results (for all $`(\alpha ,D)`$) are satisfactorily fitted with the simple expression $`T_c(\alpha ;N)\simeq T_c(\alpha )+a_\alpha N^{1-\alpha /D}`$, where the parameters $`T_c(\alpha )`$ and $`a_\alpha `$ are obtained from the fit. For the extensive region ($`\alpha >D`$), $`T_c(\alpha )`$ represents the thermodynamic limit of the critical temperature; $`a_\alpha <0`$. In FIG 3, we depict some values of $`T_c(\alpha )`$ given by the present approach as a function of $`\alpha `$. We verify that $`T_c`$ diverges for $`\alpha \le D`$ and that, when $`\alpha \to D`$ from above, $`T_c(\alpha )`$ diverges like $`1/(\alpha -D)`$ (see the inset in the figure). 
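The two-parameter fit quoted above is linear in $`T_c(\alpha )`$ and $`a_\alpha `$ once the exponent $`1-\alpha /D`$ is fixed, so ordinary least squares suffices. A sketch (with synthetic data and made-up coefficients, purely to illustrate the fitting form; the function name is ours):

```python
import numpy as np

def fit_tc(alpha, D, Ns, Tcs):
    """Least-squares fit of T_c(alpha; N) ~ T_c(alpha) + a * N**(1 - alpha/D).
    The model is linear in the two parameters once the exponent is fixed."""
    Ns = np.asarray(Ns, dtype=float)
    X = np.column_stack([np.ones_like(Ns), Ns ** (1.0 - alpha / D)])
    (tc_lim, a), *_ = np.linalg.lstsq(X, np.asarray(Tcs, dtype=float), rcond=None)
    return tc_lim, a

# synthetic check with made-up coefficients (alpha = 4 > D = 3, extensive regime):
Ns = [27, 64, 125, 216, 512]
Tcs = [1.3 - 2.0 * n ** (1.0 - 4.0 / 3.0) for n in Ns]
tc_lim, a = fit_tc(4.0, 3, Ns, Tcs)   # recovers 1.3 and -2.0 exactly
```

On noiseless synthetic data the fit recovers the input coefficients to machine precision; on the simulation data the same design matrix is used, with $`T_c(\alpha )`$ interpreted as the $`N\to \infty `$ critical temperature in the extensive regime.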
This behavior has been conjectured for generic systems with long-range interactions. The thermodynamic limit is not reached in the usual way in the nonextensive region. Nevertheless, if we appropriately scale the various relevant quantities, our numerical results will show that this limit is in fact as well defined as that corresponding to the extensive region. Let us now clarify how this happens. Thermodynamic quantities $`𝒜_N`$ like the internal energy, the free energy, Gibbs energy, etc., associated with systems including potentials having an attractive tail that decays as $`r^{-\alpha }`$, do not always scale with $`N`$. Indeed, we generically have $$\frac{𝒜_N}{N}\propto \int _1^{N^{1/D}}𝑑r\frac{r^{D-1}}{r^\alpha }=\frac{1}{D}\frac{N^{1-\alpha /D}-1}{1-\alpha /D}.$$ (2) Consistently, we can introduce a new variable given by $$N^{}\equiv \frac{N^{1-\alpha /D}-1}{1-\alpha /D}$$ (3) which, in the limit $`N\to \infty `$, behaves asymptotically as $`1/(\alpha /D-1)`$ for $`\alpha /D>1`$; as $`\mathrm{ln}N`$ for $`\alpha /D=1`$; and as $`N^{1-\alpha /D}/(1-\alpha /D)`$ for $`0\le \alpha /D<1`$. For $`\alpha =0`$, $`N^{}\simeq N`$, which precisely recovers the value frequently used in mean field approximations in order to renormalize the coupling constants in such a way as to make the system (artificially) become extensive. It has been known for several decades ( and references therein) that thermodynamical extensivity imposes, in classical systems like the present one, $`\alpha /D>1`$. Our definition (3) is clearly consistent with this, but also provides the correct scaling for $`0\le \alpha /D\le 1`$, where no essential knowledge had been developed until very recently, and along the present lines. We expect the thermodynamic variables to generically include scaling with $`N^{}`$ as recently conjectured . For instance, we expect $`lim_{N\to \infty }𝒜_N/(NN^{})`$ to be finite for all values of $`\alpha `$, whether it is in the extensive region or in the nonextensive one. 
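The variable $`N^{}`$ of Eq. (3) and its three asymptotic regimes can be encoded directly (a sketch; the function name is ours, and the $`\alpha =D`$ case is implemented as the logarithmic limit):

```python
import math

def n_star(N, alpha, D):
    """N* of Eq. (3); alpha = D is taken as the logarithmic limit."""
    x = 1.0 - alpha / D
    if abs(x) < 1e-12:
        return math.log(N)
    return (N ** x - 1.0) / x

# the three asymptotic regimes quoted in the text, for D = 3:
extensive  = n_star(10 ** 9, 6.0, 3)   # alpha/D > 1: tends to 1/(alpha/D - 1) = 1
marginal   = n_star(10 ** 9, 3.0, 3)   # alpha = D: grows as ln N
mean_field = n_star(10 ** 9, 0.0, 3)   # alpha = 0: N* = N - 1, i.e. N* ~ N
```

Dividing extensive quantities by $`NN^{}`$ and "intensive" ones by $`N^{}`$, as done below, therefore changes nothing in the extensive regime (where $`N^{}`$ tends to a constant) while regularizing the nonextensive one.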
Thus, we can write $$\frac{G_N}{N^{}N}=\frac{U_N}{N^{}N}-\frac{T}{N^{}}\frac{S_N}{N}+\frac{P}{N^{}}\frac{V_N}{N}.$$ (4) It can be easily verified that this type of scaling preserves the Legendre transformation structure of thermodynamics, even in the nonextensive region ($`0\le \alpha \le D`$) . These features are in fact included in a more general context, namely that of a recent statistical-mechanical formalism addressing nonextensive systems . In what concerns the so-called intensive variables ($`T,P`$, etc.), we expect them to scale with $`N^{}`$. For instance, the critical temperatures must scale as $`T_c(\alpha ;N)\simeq N^{}T_c^{}(\alpha )`$. Indeed, in FIG 4$`(a)`$, the critical temperature $`T_c(\alpha ;N)`$ vs $`N^{}`$ is exhibited for typical values of $`\alpha `$, and a linear dependence on $`N^{}`$ is clearly observed. As before, it is possible to fit the data with the simple expression $`T_c(\alpha ;N)\simeq b_\alpha +T_c^{}(\alpha )N^{}`$, where $`b_\alpha <0`$ and $`T_c^{}(\alpha )`$ are fitting parameters. Since $`T_c^{}`$ is finite in all cases, it is to be considered as the correctly scaled expression for the critical point of the system. In FIG 4$`(b)`$, we plot $`T_c^{}`$ as a function of $`\alpha `$ in both the extensive and nonextensive regimes. We see that $`T_c^{}(\alpha )`$ increases continuously with $`\alpha `$ but its derivative $`dT_c^{}(\alpha )/d\alpha `$ possibly presents a discontinuity at $`\alpha =D`$. The error bars illustrate the mean error (less than 5%) observed in the measurements of $`T_c`$ in our numerical simulations. In FIG 5$`(a)`$, we plot $`P_c(\alpha ;N)/T_c(\alpha ;N)`$ as a function of $`1/N^{1/3}`$ for the $`D=3`$ system, and it turns out to be once again linear. 
This enables a simple extrapolation (the precise finite-size scalings associated with this problem have not been taken into account here, since they do not appear to add anything specially relevant for our present discussion) which provides $`P_c(\alpha )/T_c(\alpha )`$, which of course coincides with $`P_c^{}(\alpha )/T_c^{}(\alpha )`$, since both $`P`$ and $`T`$ scale with $`N^{}`$. We notice that, as happened with $`T_c^{}(\alpha )`$, the values are finite in both the extensive and nonextensive regions (i.e., for all $`\alpha `$): see FIG 5$`(b)`$. The observed error ($`\sim `$10% or less) in the measurement of $`P_c`$ is roughly double that observed for $`T_c`$. Let us finally address a more general situation, namely functional relationships such as the equation of state, in the present case the function $`f(P^{},\rho ,T^{})=0`$, which relates the density $`\rho `$ with the temperature and the pressure. In FIG 6$`(a)`$, we illustrate the ideas with the $`D=3`$ system using several sizes, namely $`N=27,64,125,216`$, at the same fixed temperature $`T=3.4`$ and $`\alpha =2`$. We notice, as expected, that the curves in the $`(P,\rho )`$ plane do not remain finite as $`N`$ increases. There is no usual thermodynamic limit and consequently no interesting properties can be exhibited under these conditions. The present class of models might produce a negative (here, unphysical) pressure $`P_c(2;N)`$ when $`N`$ increases. Such behavior is commonly observed in $`N`$-dependent isotherms in the $`P`$–$`\rho `$ plane. However, and interestingly enough, a completely different picture emerges if, instead of presenting the results at fixed $`T`$, we do it at fixed $`T^{}=T/N^{}`$. Indeed, in FIG 6$`(b)`$, we plot the curves corresponding to $`T^{}=0.43`$ ($`>T_c^{}(2)`$) and we observe that the curves start converging as $`N`$ increases. 
In fact, the convergence can be exhibited versus $`1/N^{1/D}`$ (see inset), which allows the establishment of a well defined thermodynamic limit even for this example in the nonextensive region. In addition to these results, we point out that a new family of isotherms is obtained in the present way, when the normalized function $`T^{}(\alpha ;N)`$ instead of $`T(\alpha ;N)`$ is used, but the true temperature value is always given by $`T(\alpha ;N)`$ in all cases. Along the same lines, we verify that the values of $`P(2;N)/T(2;N)`$ appear to converge, as desired, to nonnegative values when $`N\to \infty `$. At this point, it is important to clarify that the thermodynamical variables we are analyzing are those which appear in relations such as the equation of state. The parameters which characterize thermodynamical equilibrium are the usual ones, i.e., the temperature $`T`$ of the external thermal bath, its pressure $`P`$, etc. Summarizing, we have shown an example, a Lennard-Jones-like fluid, that can violate the usual scalings of thermodynamics, namely those corresponding to extensive systems, i.e., those associated with interactions whose range is not too long, more precisely satisfying $`\alpha >D`$. We have shown how the appropriate scaling (through $`N^{}`$ given by definition (3)) of the various thermodynamical quantities enables a unified picture for both short- and long-range interacting systems. By so doing, we have empirically established a variety of secondary finite-size scalings. In addition to this, we have verified that the critical temperatures, pressures, etc., diverge like $`1/(\alpha -D)`$ when $`\alpha `$ approaches $`D`$ from above, a fact which is expected to be model-independent. Let us finally add that we hope to have provided enough arguments to show that thermodynamics and the concept of thermodynamical equilibrium can very well accommodate the (until now almost unexplored) nonextensive systems. 
The detailed study of more such systems (quantum, frustrated, etc.) certainly is very welcome. One of us (S.C.) acknowledges partial support by FONDECYT, grant 3980014. The other one (C.T.) is thankful to Per Bak for useful remarks and warm hospitality at the Niels Bohr Institute, where this work was concluded. Finally, CNPq and PRONEX (Brazilian Agencies) are also acknowledged for partial support.
# The Tip of the Red Giant Branch Distance to the Large Magellanic Cloud ## 1 Introduction The distance to the Large Magellanic Cloud (LMC) continues to serve as the cornerstone in the extragalactic distance scale. Distances to external galaxies measured using the Cepheid variable star period–luminosity (PL) relation are usually determined with respect to the distance of the LMC. Recently, Mould et al. (2000) reported the final results from the Hubble Space Telescope (HST) Key Project on the Extragalactic Distance Scale: H<sub>0</sub> = 71 $`\pm `$ 6 km s<sup>-1</sup> Mpc<sup>-1</sup>. One of the largest contributors to the systematic error is the uncertainty in the LMC distance modulus, assumed to be $`18.50\pm 0.13`$ mag (Mould et al. 2000). The uncertainty in this value was derived by examining the distribution of various LMC distance estimates published in the literature. As will become clear below, it is not surprising that the distance to the LMC is not determined more precisely. Distance estimates for the LMC span from 18.1 to 18.7 mag. For details of this field, readers are referred to discussions in Westerlund (1997), Jha et al. (1999), and Mould et al. (2000). Here, we restate some of the latest results, many of which were derived using HIPPARCOS parallax measurements. Feast & Catchpole (1997) reported a dereddened distance modulus of $`(m-M)_0^{LMC}=18.70\pm 0.10`$ mag, using the HIPPARCOS distances to Galactic Cepheid variables, which were then compared with the LMC Cepheids from Caldwell & Laney (1991). However, three independent re–analyses suggest that the distance modulus derived by Feast & Catchpole can be lowered by as much as $`0.35`$ mag (Madore & Freedman 1998, Oudmaijer, Groenewegen & Schrijver 1998, Luri et al. 1998). Likewise, HIPPARCOS parallaxes to Galactic subdwarfs have been used to redetermine the RR Lyrae distance to the LMC, yielding $`(m-M)_0^{LMC}`$ = 18.65 mag (Reid 1997), 18.60 $`\pm `$ 0.07 mag (Gratton et al. 
1997), 18.54 $`\pm `$ 0.04 mag (Carretta et al. 1999), and 18.61 $`\pm `$ 0.28 mag (Groenewegen & Salaris 1999). A HIPPARCOS-based calibration of Mira variables by van Leeuwen et al. (1997) yields an LMC distance modulus of 18.54 $`\pm `$ 0.2 mag. These results imply LMC distances that are 2–10% higher than the conventional value of 50 kpc \[$`(m-M)_0`$ = 18.50\]. Conversely, evidence for a shorter distance comes from a relatively new method that utilizes the luminosity of red clump stars (Udalski 1998, Stanek, Zaritsky, & Harris 1998). The derived distance modulus is sensitive to corrections for age and metallicity differences between the Galactic calibrating stars and the LMC, and values ranging from $`(m-M)_0`$ = 18.08 to 18.36 have been reported (Stanek et al. 1998, Udalski 1998, Cole 1998, Girardi et al. 1998). Recently Zaritsky (1999) reanalyzed the reddening distribution in the LMC, taking into account differences in reddening as a function of stellar color, and he concludes that the published red clump distance moduli need to be revised upwards by $`\sim 0.2`$ mag (to $`\sim `$18.3–18.55 mag) to correct for an overestimate in the previously adopted reddening values. Romaniello et al. (1999) also obtained a low extinction to the red clump stars. They further applied a theoretical metallicity correction to obtain a larger distance modulus of 18.59 mag. All of these values can be compared to the geometric distance measurement to the LMC based on the SN 1987A expanding ring, which yields $`\mu _0^{LMC}`$ = 18.44 – 18.58 mag (Gould & Uza 1998, Panagia et al. 1997). Unfortunately, convergence on a precise distance still eludes us. Zaritsky, Harris & Thompson (1997: hereafter ZHT) have been undertaking the Magellanic Clouds Photometric Survey (MCPS) using the Las Campanas 1m telescope. A roughly 4° by 2.7° region of the LMC has been analyzed to date using the U-, B-, V-, and I-band filters. 
This database presents us with an opportunity to re-measure the distance to the LMC – this time, using the tip of the red giant branch (TRGB) method, which uses the I–band luminosity function of the red giant branch stars. These stars evolve upward on the red giant branch, but undergo a drastic physical change at the onset of the core helium flash. The method is currently calibrated using the distances to Galactic globular clusters (Da Costa & Armandroff 1990, Lee, Freedman & Madore 1993). The TRGB method is powerful because, as a bright Population II distance indicator, it is applicable to any morphological type of galaxy, and red giants are found in abundant numbers, yielding good number statistics. In addition, one can select the RGB stars strategically by avoiding regions with dust and young stars, thereby minimizing uncertainties due to crowding and reddening corrections. An earlier application of the TRGB method to determine the LMC distance was reported by Reid, Mould & Thompson (1987), who examined the stellar populations of Shapley Constellation III in the LMC. They detected the TRGB at $`I=14.60\pm 0.05`$ mag. By adopting a bolometric correction of 0.39 mag (corresponding to M stars), extinction of $`A_I=0.07`$ mag, and an absolute bolometric magnitude of the TRGB of $`-3.5\pm 0.1`$ implied from Frogel, Cohen & Persson (1983), Reid et al. (1987) reported $`(m-M)_0^{LMC}=18.42\pm 0.15`$ mag. More recently, Romaniello et al. (1999) used the HST/WFPC2 multiband observations of a field around SN 1987A to estimate the magnitude of the TRGB. Using a theoretical calibration of Salaris & Cassisi (1998), they obtained a distance modulus of $`(m-M)_0^{LMC}=18.69\pm 0.25`$ mag. We re-determine the distance to the LMC using the TRGB method, using the significantly larger photometric database of ZHT. 
Because this method is based on a distance scale that is completely independent of the Pop I Cepheid distance indicators, it can provide an alternative Pop II check on the distance to the LMC. The data used for the analysis presented in this paper are described in §2. We then discuss the derivation of the TRGB distance (§3), followed by a discussion of the errors (§4), and a summary of our conclusions (§5). ## 2 The Data The data used for this study come from the ongoing MCPS described initially by ZHT. The survey is being conducted using the Las Campanas Swope (1m) telescope with the Great Circle Camera (Zaritsky, Shectman, & Bredthauer 1996) and a 2K by 2K CCD with a pixel scale of 0.7″ per pixel. The effective exposure time per filter is about 4 min. Details of the data reduction are described by ZHT and Zaritsky (1999). The region of the LMC being studied here is roughly 4° by 2.7°, centered at $`\alpha =5^h20^m`$ and $`\delta =-66^{\circ }48^{\prime }`$. Roughly 4 million stars are photometered in both the B- and V-band images (about half as many in $`U`$ and $`I`$). We recover the reddening along the line of sight to individual stars as described in detail by Zaritsky (1999). We fit stellar spectra plus extinction to stars in our survey with $`UBVI`$ photometry. We recover both an estimate of the effective temperature of the star and $`A_V`$, for the assumed standard extinction curve. We find that the technique is reliable only for stars with effective temperatures 5500 K $`<T_{eff}<`$ 6500 K and $`T_{eff}>12000`$ K. For stars with other effective temperatures, the recovered $`T_{eff}`$ and $`A_V`$ are sufficiently degenerate that the results of the fitting algorithm are suspect. The lower of the two temperature ranges includes the LMC red clump stars and some of the giant branch. 
Because the distribution of extinction values is different for the hotter and cooler stars, it is critical to measure the reddening to a population that is as similar as possible to the population being studied. Therefore, to derive reddenings for the TRGB stars we choose to use those derived from the 39,613 stars that are in the cooler temperature range. To correct the TRGB star photometry, we convert our extinction measurements for individual stars to a map of extinction across the region. Because the extinction to any individual star is significantly uncertain ($`\sigma \sim 0.2`$ mag) we average local values and interpolate to produce our extinction map. Again, details and tests of the procedure are described by Zaritsky (1999). The map has a variable spatial resolution that depends on the local surface density of extinction measurements, with the highest spatial resolution being 84″. The reddening corresponding to each resolution element is determined from at least 3 stars, so typical formal uncertainties per resolution element are $`\sigma _{A_V}<`$ 0.1 mag (thus $`\sigma _{A_I}<0.05`$ mag). ## 3 Tip of the Red Giant Branch Distance to the LMC ### 3.1 TRGB Calibration The TRGB marks the core helium flash of first–ascent red giant branch stars. In the I–band, the TRGB magnitude is insensitive to both age and metallicity (Iben & Renzini 1983). In composite stellar populations, it is observed as a sharp discontinuity in the I–band luminosity function. Currently, the TRGB calibration is based on RR Lyrae distances to six Galactic globular clusters spanning a range of metallicities $`-2.1\le `$ \[Fe/H\] $`\le -0.7`$, estimated using the metallicity–$`M_V`$ correlation by Lee, Demarque & Zinn (1990) (Da Costa & Armandroff 1990: hereafter DA90, Lee, Freedman & Madore 1993: hereafter LFM93). 
Using the I–band TRGB magnitude ($`I_{TRGB}`$), the distance modulus is estimated via the relation: $$(m-M)_I=I_{TRGB}+BC_I-M_{bol,TRGB}.$$ (1) Both the $`BC_I`$ and $`M_{bol,TRGB}`$ terms are functions of the metallicity (LFM93): $$M_{bol,TRGB}=-0.19[Fe/H]-3.81,$$ (2) $$BC_I=0.881-0.243(V-I)_{TRGB}.$$ (3) The bolometric correction was derived by DA90 by comparing their optical photometry of globular clusters with the IR photometry from Frogel, Persson & Cohen (1983). The metallicity, \[Fe/H\], is determined from the de–reddened $`(V-I)`$ colors of the RGB stars: $$[Fe/H]=-12.64+12.6(V-I)_{-3.5}-3.3(V-I)_{-3.5}^2,$$ (4) where $`(V-I)_{-3.5}`$ is measured at the absolute I magnitude of $`-3.5`$. The calibration presented by equations 1–4 is semi–empirical, based on an RR Lyrae distance scale using the theoretical models of horizontal branch stars. Cassisi & Salaris (1997) recently presented a purely theoretical calibration of the TRGB method, by examining their stellar evolution models (Salaris & Cassisi 1996). The main result of that work is that the theoretical calibration is $`\sim `$0.1 mag brighter than the empirical calibration. They attribute this difference to a poor sampling of the RGB stars in the Galactic globular clusters used for the DA90 calibration, and the small sample of stars observed by Frogel, Persson & Cohen (1983) whose photometric data were used to estimate the bolometric corrections. In §4.2, we discuss how the uncertainties in the calibrations affect our LMC distance estimate. ### 3.2 TRGB Magnitude In Figure 1, we show the reddening-corrected $`I`$ vs. $`(V-I)`$ and $`I`$ vs. $`(B-I)`$ color–magnitude diagrams for stars in the upper RGB region for the entire LMC region surveyed. In order to make a reliable TRGB detection, we select RGB stars using the following criteria: (1) Stars in the shaded region in Figure 1 are excluded, as their magnitudes and colors are obviously inconsistent with RGB stars. Also, those with $`I\ge 16.0`$ were not included, to minimize the computational time. 
(2) Using the extinction values derived from stars of effective temperatures $`T_{eff}>12,000`$ K as described in Section 2, those regions with $`A_V\ge 0.2`$ are excluded, to avoid dusty regions. (3) Any stars that lie in a crowded region are excluded. This was done by excluding those stars lying in a region for which the density map value, $`\sigma `$, is greater than 1.0 star/pixel. The stellar density map was constructed by counting stars with $`V<21`$ in 21″ pixels. We are left with a sample of 2072 stars, consisting mostly of RGB stars and some intermediate–age asymptotic giant branch stars (those brighter than $`I\simeq 14.5`$). In Figure 2 we show the corresponding I–band luminosity function from $`I=13`$ to 16 mag (top), and the output of the edge–detection filtering of the I–band luminosity function. The details of the edge–detection filter were described by Sakai, Madore & Freedman (1997). The technique is designed to pick out the luminosity at which the slope reaches its maximum. Briefly, the filter is a modified Sobel kernel, $`[-1,0,+1]`$, and it is applied to a luminosity function that has been smoothed using a Gaussian of dispersion equal to the photometric error. The position of the TRGB is indicated by the highest peak in the lower panel of Figure 2. We detect the TRGB at $`I_0=14.54\pm 0.04`$ mag. The error quoted here, 0.04 mag, refers to the FWHM of the peak profile in the filter output function. We note that our TRGB magnitude is 0.06 mag brighter than the value of $`14.60\pm 0.05`$ derived by Reid et al. (1987). Although the discrepancy is slight ($`\sim 1.5\sigma `$), it may reflect a sampling bias. Romaniello et al. (1999) detected the TRGB using the HST/WFPC2 observations at $`I_0`$ = 14.50 $`\pm `$ 0.25 mag, which agrees with our value well within 1$`\sigma `$. 
However, they obtain a larger distance modulus of 18.69 $`\pm `$ 0.25 mag, mainly due to their use of a theoretical (Salaris & Cassisi 1998) rather than the empirical calibration of Lee et al. (1993). From the RGB sample used to determine this TRGB magnitude, we find $`\overline{(V-I)}_{TRGB}=1.7\pm 0.1`$ mag and $`\overline{(V-I)}_{-3.5}=1.5\pm 0.1`$ mag. Substituting these colors into the calibration formulae, we obtain $`M_{I,TRGB}=-4.05\pm 0.06`$ mag. Thus, the distance modulus to the LMC determined by the TRGB method is $`(m-M)_0^{LMC}=18.59\pm 0.07`$ mag, corresponding to a linear distance of $`52\pm 2`$ kpc. The error quoted here only includes random terms: 0.04 mag uncertainty in the edge–detection method, and 0.06 mag from the color spread in the RGB population. We discuss the systematic errors in the distance in §4. ## 4 Uncertainties in the TRGB Distance to the LMC In Table 1, the uncertainties in the LMC distance modulus are summarized. They include those due to the photometric and reddening estimates, the TRGB calibration zero point, and the effects of the line–of–sight depth of the LMC itself. We review some of these uncertainties in detail in the following subsections. ### 4.1 Photometry and Reddening Internal photometric errors are calculated by DAOPHOT II (Stetson 1987). From comparison of results from overlapping images, we conclude that the internal uncertainties at worst underestimate the true uncertainties in the instrumental magnitudes by a factor of 1.5 (ZHT). We have multiplied the DAOPHOT uncertainties by this factor to be conservative. Because of the large number of stars ($`\sim `$1000) used in determining the TRGB position, the internal photometric errors have an insignificant effect on the fitted distance modulus. 
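Returning briefly to the §3 result: the arithmetic from the measured tip magnitude and mean colors to the distance modulus, via Eqs. (1)–(4), is compact enough to sketch (the function name is ours; the inputs are the dereddened values quoted above):

```python
def trgb_distance_modulus(I_trgb, VI_trgb, VI_m35):
    """Distance modulus from the LFM93 TRGB calibration, Eqs. (1)-(4).
    Inputs are the dereddened tip magnitude and mean colors; VI_m35 is
    (V-I) measured at M_I = -3.5."""
    feh = -12.64 + 12.6 * VI_m35 - 3.3 * VI_m35 ** 2   # Eq. (4)
    m_bol = -0.19 * feh - 3.81                          # Eq. (2)
    bc_i = 0.881 - 0.243 * VI_trgb                      # Eq. (3)
    return I_trgb + bc_i - m_bol                        # Eq. (1)

mu = trgb_distance_modulus(14.54, 1.7, 1.5)   # values measured in the text
# mu ~ 18.60, i.e. (m-M)_0 = 18.59 +/- 0.07 within rounding;
# the implied M_I,TRGB = -(mu - 14.54) ~ -4.06, matching -4.05 +/- 0.06
```

The chain also makes explicit where the color (hence metallicity) uncertainty enters: through Eq. (4) into both the bolometric magnitude and the bolometric correction.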
The uncertainty in the zero point of the photometric scale appears small ($`0.05`$ mag) as well from some comparisons to previous data; as will be shown later in Section 6, there is good agreement between the red clump magnitude obtained here and independent measurements, and also between our TRGB magnitude and that from Reid et al. (1987). The median $`A_V`$ in a single resolution element is determined to $`\pm `$0.1 mag. Small-scale reddening structures are not resolved, and so the uncertainty in the extinction of any single star can be much larger. However, all of our measurements are based on large samples, for which the mean extinction should be well determined from the map. We have used the uncertainty of 0.1 mag per resolution element and calculated the number of resolution elements included in the selected regions to determine the error in the mean extinction of the region to be $`\pm `$ 0.03 mag. ### 4.2 Uncertainties in the TRGB Calibration The TRGB calibration used in this paper is based on the distances to six Galactic globular clusters which were determined by the metallicity–$`M_V`$ relation for RR Lyrae variable stars given by Lee, Demarque & Zinn (1990) for $`Y_{MS}=0.23`$, expressed as $`M_V(RR)=0.17[Fe/H]+0.82`$. Unfortunately, both the zero point and the degree of metallicity dependence are uncertain. In particular, the range of zero point estimates by different groups is as much as 0.3 mag. Chaboyer (1999) has published an excellent review of the RR Lyrae distance scale, in which he calibrated the method using several different methods that included statistical parallax fitting, theoretical horizontal branch models, and main sequence fitting using the HIPPARCOS database. For the LMC, where \[Fe/H\] = $`-1.18`$, adopting the five different calibrations given by Chaboyer (1999), we obtain RR Lyrae magnitudes, M<sub>V</sub>(RR), that range from 0.55 to 0.89 mag with an average value of 0.68 $`\pm `$ 0.11 mag. The RR Lyrae calibration of Lee et al. 
(1990), which we adopted in the TRGB calibration, would yield, for \[Fe/H\] = $`-1.18`$, M<sub>V</sub>(RR) = 0.62, which is in agreement with the mean of these more recent calibrations. We therefore adopt a systematic error in the RR Lyrae distance scale of 0.11 mag. A purely theoretical calibration of the TRGB method was presented recently by Cassisi & Salaris (1997: hereafter CS97), who used the evolutionary models of stars for a combination of various masses and metallicities for $`Y_{MS}=0.23`$ (Salaris & Cassisi 1996). CS97 find that the empirical zero point of the TRGB calibration is fainter by $`\sim `$0.1 mag than that of the theoretical calibration, mainly due to (1) sampling errors: not enough stars populate the TRGB region in the Galactic globular clusters used in the empirical calibration, such that the probability of actually observing the brightest TRGB stars is very small; and (2) statistical uncertainties introduced by the small number of stars used by Frogel, Persson & Cohen (1983) to derive the bolometric correction. The importance of sampling the TRGB was confirmed empirically by Sakai & Madore (1999), who reported that when only one in five stars of the true stellar population was used in estimating the position of the TRGB, its magnitude became fainter by $`0.06`$ mag. The CS97 calibration is given as follows: $$M_{I,TRGB}=-3.953+0.437[Fe/H]+0.147[Fe/H]^2,$$ (5) $$[Fe/H]=-39.270+64.687(V-I)_{-3.5}-36.351(V-I)_{-3.5}^2+6.838(V-I)_{-3.5}^3.$$ (6) If we were to adopt this calibration instead of that of Lee et al., we would obtain $`M_{I,TRGB}=-4.13`$ mag, and thus a distance modulus of $`(m-M)_0^{LMC}=18.67`$ mag, $`\sim `$4% further than the value derived using the Lee et al. calibration. Here, we adopt an uncertainty of 0.1 mag as a systematic error in the calibration due to the small globular cluster samples. Another possible uncertainty in the TRGB calibration originates from the fact that the zero–point calibration of LFM93 did not use the edge–detection filtering. 
Because the method used to determine the TRGB magnitudes for Galactic globular clusters is different from that used to measure the extragalactic TRGB magnitudes, there could be a systematic difference in the results. Unfortunately, because the globular cluster data are much less densely populated than most other extragalactic data, it is difficult to securely measure the TRGB magnitude using the filtering method. Nevertheless, we apply the edge filter to the globular cluster data to examine whether we observe any large systematic errors. By combining the data from all six globular clusters, we obtain a TRGB magnitude of $`-4.05\pm 0.03`$ mag. This result agrees precisely with the zero point we have adopted, suggesting that there is no major systematic error from inconsistent TRGB measuring techniques between the calibrators and the galaxies. We have adopted an error of 0.06 mag due to the color spread in the LMC RGB population (see Table 1). However, we feel that this is a very conservative estimate. Using the LMC data, we find that the TRGB magnitude is insensitive to the $`(V-I)`$ color range sampled. Dividing the RGB sample into red and blue subgroups, the TRGB magnitude is the same for both of them. This result confirms that adopting one zero point for the entire TRGB star population that spans a broad range in color should not make the TRGB distance modulus estimate any more uncertain. ### 4.3 Effects of Crowding and Extinction on the TRGB Distance Determination When applying the edge–detection filter to the I–band luminosity function to determine the TRGB magnitude, one usually avoids the central region of the galaxy where the extinction is large and the crowding becomes a severe problem. Because the TRGB method requires an independent estimate of the internal extinction of the galaxy, the degree of extinction, and the uncertainty in the adopted value, must be minimized. 
For our analysis of the LMC, we are fortunate to have excellent extinction maps, determined from two sets of stars: those with temperatures higher than 12,000 K; and those with effective temperatures between 5,500 and 6,500 K. As discussed earlier, the extinction map inferred from the hot stars was used to exclude the high–extinction regions. Here, we examine whether the TRGB magnitude is sensitive to this particular selection, by showing in Figure 3 the I–band luminosity function and corresponding edge–detection filter outputs for four different extinction cutoff values. The number of stars used in each subsample is indicated in brackets. Most of the stars in high–extinction regions also lie in more crowded regions. For the bottom case (highest extinction cutoff), even though the TRGB position is visible as the highest peak in the filter output function, it is barely visible in the luminosity function. Similarly, we use the density map to examine how crowding affects the determination of the TRGB magnitude. In Figure 4 the luminosity functions and filter output functions are shown for four different surface density ranges (density ranges increasing from top to bottom). We apply the standard extinction cutoff, $`A_V\le 0.3`$ mag, to all of the samples. Figure 4 illustrates the rapid degradation of the TRGB detection at high stellar density, which justifies our exclusion of regions with density $`>1.0`$. 
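The edge–detection step referred to throughout this section can be illustrated schematically. The snippet below is a minimal stand-in for the actual Sobel-type filter (bin the I–band magnitudes into a luminosity function, then locate the largest positive jump in counts); the kernel, bin width, and the toy stellar population in the usage example are our own choices, not the paper's:

```python
import numpy as np

def trgb_from_edge_filter(mags, bright=-5.0, faint=-2.0, bin_width=0.02):
    # Bin the I-band magnitudes into a luminosity function, then locate the
    # largest positive jump in counts with a simple [1, 0, -1] edge kernel.
    edges = np.arange(bright, faint + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    # np.convolve flips the kernel, so this response is ~ counts[i+1] - counts[i-1]
    response = np.convolve(counts, [1, 0, -1], mode="same")
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(response)]
```

Feeding it a sparse bright population on top of a dense one with a sharp onset at $`M_I=-4.0`$ recovers the discontinuity to within a bin or two.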
We begin with a luminosity function that is roughly defined as $`N(M_I<-4)=0`$ and $`N(M_I\ge -4)\propto M^{0.6}`$. About 7500 stars were generated in the magnitude range $`-4\le M_I\le -3`$. Then random photometric noise following a Gaussian distribution with a FWHM of 0.03 mag was added to each magnitude to simulate the observed photometric error in the true LMC data. This distribution simulates a sample with zero line–of–sight depth. We then displaced each star along the line–of–sight randomly following a boxcar distribution, and for each simulation, the TRGB magnitude and the dispersion of the profile corresponding to the TRGB in the edge–detection filtering output were measured. For boxcar back–to–front depths of 0.05, 0.10, 0.15, and 0.20 mag, we obtained TRGB magnitudes of $`-4.052\pm 0.003`$, $`-4.033\pm 0.012`$, $`-4.016\pm 0.021`$, and $`-4.007\pm 0.026`$ mag respectively. Contrary to one’s naive expectation, the TRGB magnitude becomes slightly fainter as the line–of–sight depth becomes larger. A similar trend is observed when a Gaussian line–of–sight distribution is assumed instead of a boxcar. The result of the broadening is translated into a smoothing of the luminosity function. Because the luminosity function is steeper towards magnitudes fainter than the TRGB than towards brighter magnitudes, the largest derivative, which is what the filtering selects, is displaced to fainter magnitudes. From the same set of simulations, we calculate that the dispersion of the TRGB filter peak “profile” is $`0.023\pm 0.003`$, $`0.032\pm 0.012`$, $`0.046\pm 0.022`$, and $`0.057\pm 0.028`$. Our measured profile width for the LMC is $`\pm `$0.04 mag. This suggests that the LMC depth along the line of sight lies in the range $`0.10-0.20`$ mag, which corresponds to 2.3–4.6 Kpc at a distance of 50 Kpc. This range probably overestimates the actual depth because other sources of uncertainty, such as reddening errors, will contribute to the width of the TRGB feature. 
Xu, Crotts & Kunkel (1995) studied the structure of the interstellar medium around SN 1987A in the LMC and suggested that it extended for up to $`\sim `$1 Kpc. We can also put an upper limit on the LMC line–of–sight depth from the magnitude distribution of red clump stars, which is shown in Figure 5. We observe a dispersion of $`0.200\pm 0.003`$ mag, corresponding to 10% in distance ($`\sim `$5 Kpc). This can be interpreted as a very conservative upper limit on the LMC depth, and it compares well with the values obtained from the TRGB study. Bessell, Freeman & Wood (1986) found a much thinner disk (scale height of only 0.3 Kpc), which is certainly consistent with our upper limits. We note however that a variation in reddening could also affect and dominate the width of the filter peak profile, leaving little room for the thickness of the LMC itself. For the purpose of this study, the relevant conclusion is not that we obtain a 5 Kpc thickness, but rather that even when our simulations are run with such a large upper limit, our determination of the TRGB magnitude is unaffected. The tilt of the LMC with respect to the plane of the sky would contribute an additional systematic uncertainty, in a very similar fashion to the observable line-of-sight depth of the LMC disk that was discussed in the previous section. Several papers have dealt with this issue previously, suggesting tilt angles from $`\sim `$25° up to $`\sim `$55° (de Vaucouleurs 1980, Caldwell & Coulson 1986, Laney & Stobie 1986, Welch et al. 1987). The MCPS survey region that was used to determine the TRGB distance in this paper is located about 3° almost directly north of the center of the LMC, which turns out to be a fortuitous situation since the line-of-nodes roughly bisects this surveyed region; an HI map of the LMC suggests that the line of nodes has a position angle of $`12^{\circ }`$ (Kim et al. 1998). 
Thus, the TRGB distance measured in this paper should be close to that of the center of the LMC, and any systematic uncertainty in our TRGB magnitude originating from the tilt of the LMC should be negligible. ## 5 Discussion and Conclusion Using the Large Magellanic Cloud Survey by Zaritsky, Harris & Thompson (1997), we derive a distance modulus to the LMC of $`18.59\pm 0.09`$ (random) $`\pm 0.16`$ (systematic) mag. This value is $`0.09`$ mag further than the conventional value adopted by many groups when measuring extragalactic distances using the Cepheid variable stars’ PL relation (e.g. Mould et al. 2000). A caveat of the TRGB method at present, however, is its dependence on the measured distances to six Galactic globular clusters. If we were to correct for the proposed incompleteness effects in the calibrating globular cluster RGB population, then our estimate of the LMC distance would be raised by $`3-5`$%. On the other hand, using the RR Lyrae calibration by Carney et al. (1992) could shorten the distance by 9%, even though we believe that the Lee et al. calibration that is used in this paper agrees better with other methods. If the TRGB calibration is based on a more robust, HIPPARCOS–based RR Lyrae distance scale, such as the ones derived by Reid (1997) or Gratton et al. (1997), it will yield an even larger LMC distance modulus, by $`\sim `$0.1 mag. Our systematic uncertainty estimate of $`\pm `$0.16 mag is reasonably conservative, and includes uncertainties in the RR Lyrae zeropoint calibration, and possible depth and incompleteness effects. The MCPS data used in this paper provide an opportunity to compare the TRGB distance to the LMC with that measured using the magnitude of the red clump (RC) stars. The latter is a fairly new method; in §1, several estimates of the LMC distances using the red clump method were listed. Here, we are able to compare the RC distance that is determined using data that are on the same photometric and extinction system. 
In Figure 5, the unreddened magnitude distribution of RC stars in the same regions used for the TRGB method is shown. The red clump centroid is observed at $`18.056\pm 0.003`$ mag. Using the HIPPARCOS calibration derived by Stanek & Garnavich (1998), which yields a red clump absolute magnitude of $`-0.23\pm 0.03`$ mag, we obtain a distance modulus to the LMC of $`18.29\pm 0.03`$ mag. Stanek et al. (1998) had obtained a distance modulus of 18.07 mag using the MCPS data and the red clump method. The difference of $`\sim `$0.2 mag between this previous work and our results comes solely from the revised reddening map (Zaritsky 1999) which is used in our analysis. Udalski’s (1998) metallicity correction (0.09\[Fe/H\]) would increase the distance modulus to 18.36. The TRGB and RC distance estimates are inconsistent at the $`\sim `$5$`\sigma `$ level when only internal errors are used. A larger metallicity correction for the RC (cf. Cole 1998, Girardi et al. 1997) would yield better agreement. A full analysis as to which set of corrections to apply is beyond the scope of this paper. However, we point out that even if the true distance modulus to the LMC turns out to be $`\sim `$18.3 mag, this would not necessarily discredit the TRGB method if the problem lies with the RR Lyrae distance scale. We conclude by pointing out that there is excellent consistency between the TRGB distance derived in this paper and the RR Lyrae (horizontal branch) distances determined by several authors (e.g. Reid 1997, Gratton et al. 1997, Carretta et al. 1999). This agreement is not entirely unexpected, because both the TRGB and RR Lyrae distances are based on Galactic sub-dwarfs and RR Lyrae variable stars. 
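The quoted distance moduli can be cross-checked arithmetically. All numbers below are taken from the text; the +0.07 mag metallicity shift is simply the difference between the quoted 18.29 and 18.36 moduli, and the sign of the red clump absolute magnitude (−0.23) is restored by us:

```python
# Red clump: dereddened apparent centroid minus the calibrated absolute magnitude.
mu_rc = 18.056 - (-0.23)        # -> 18.286, quoted in the text as 18.29
mu_rc_metal = mu_rc + 0.07      # after the Udalski-style metallicity correction

# The two TRGB calibrations must imply the same dereddened apparent tip
# magnitude I_0(TRGB), since both were applied to the same LMC data:
i_tip_lee = 18.59 + (-4.05)     # modulus + M_I, Lee et al. zero point
i_tip_cs97 = 18.67 + (-4.13)    # modulus + M_I, CS97 zero point
```

Both calibrations give $`I_0`$(TRGB) = 14.54, which is an internal consistency check on the quoted numbers rather than new information.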
However, the consistency (between the TRGB and RR Lyrae distances to the LMC) suggests that the relative magnitude offset between the TRGB and the RR Lyrae stars matches well between the LMC and the Galactic globular clusters which have been used to calibrate the TRGB method (taking into account metallicity effects). This offers reassurance that the TRGB calibration that has been used must be robust, implying that there cannot be large problems, beyond the 0.1 mag level, with incompleteness effects in the TRGB calibration as discussed in Cassisi & Salaris (1997), or with the metallicity dependence. We would like to thank Brad Gibson for providing us with an extensive list of published distance estimates to the LMC. S.S. acknowledges support from NASA through the Long Term Space Astrophysics Program, NAS-7-1260. DZ acknowledges financial support from an NSF grant (AST-9619576), a NASA LTSA grant (NAG-5-3501), a David and Lucile Packard Foundation Fellowship, and an Alfred P. Sloan Foundation Fellowship. RCK acknowledges the support of NSF Grants AST–9421145 and AST–9900789.
# Untitled Document Caroline Santos<sup>1</sup> (<sup>1</sup> dma3cds@gauss.dur.ac.uk; on leave from: Departamento de Física da Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4150-Porto, Portugal.) Department of Mathematical Sciences, University of Durham, England. PACS numbers: 04.40.-b, 11.27.+d. Keywords: gravity, topological defects
Figure 1: In fig.1a and 1b we show the behaviour of Ω_𝑄 (solid line), 𝛾_𝑄 (dash-dot), 𝛼=(𝜌+3𝑝)/(𝜌_𝑏+3𝑝_𝑏) (short dashed line) and Ω_𝑄/Ω̃_𝑄 (long dashed line) for an exponential potential with 𝜆=3/4 and for two different initial conditions at 𝑁=0, 𝑥=0,𝑦=0.001 and 𝑥=0.5,𝑦=0.001 for graphs 1a and 1b respectively. In fig.1c we show the behaviour of Ω_𝑄, 𝛾_𝑄 and the ratio Ω_𝑄/Ω̃_𝑄 for a potential 𝑉≃𝑒^{-1/𝑄} (thin solid line, short dashed and long dashed lines respectively) and for a potential 𝑉≃1/𝑄 (thick solid line, dash-dot-dot and dash-dot line respectively) with the same initial conditions 𝑥=0,𝑦=0.001 at 𝑁=0. Notice that in all cases the model independent solution agrees quite well with the exact solution for a large number of e-folds, and the discrepancy becomes significant only when 𝛾_𝑄 starts to increase. Finally, in fig.1d we plot Ω̃_𝑄 vs. N. astro-ph/9911079 IFUNAM-FT-9909 Model Independent Accelerating Universe and the Cosmological Coincidence Problem A. de la Macorra<sup>1</sup><sup>1</sup>1e-mail: macorra@fenix.ifisicacu.unam.mx | Instituto de Física, UNAM | | --- | | Apdo. Postal 20-364 | | 01000 México D.F., México | ABSTRACT We show that the evolution of the quintessence energy density $`\mathrm{\Omega }_Q`$ is model independent in an accelerating universe. The accelerating behaviour has lasted at most 0.5 e-folds of expansion assuming a present day value of $`\mathrm{\Omega }_Q=2/3`$. The generic evolution differs from the exact solution by a percentage given by $`r=\gamma _Q/2`$, with $`\gamma _Q=(\rho _Q+p_Q)/\rho _Q`$. For a small $`\gamma _Q`$ the evolution remains a good approximation for a long period. Nucleosynthesis bounds on $`\mathrm{\Omega }_Q`$ suggest that the model independent solution is valid for at least 12 e-folds of expansion, which then includes the scale of radiation and matter equality. We can, therefore, establish model independent conditions on the cosmological parameters at some scale factor $`a_i\ll a_o`$. 
Finally, we discuss the relevance of this result to the cosmological coincidence problem. Recent observations show that the universe has entered an accelerating expansion regime . If these observations are confirmed it would be the first experimental evidence for an energy density different from radiation or matter and with negative pressure, i.e. for a ”cosmological constant”. The cosmological constant can be given by a constant energy density with equation of state $`p_\mathrm{\Lambda }=-\rho _\mathrm{\Lambda }`$, where $`p_\mathrm{\Lambda }`$ is the pressure and $`\rho _\mathrm{\Lambda }`$ the energy density, or it could be parameterised in terms of a slowly varying scalar field Q, quintessence, with a potential $`V(Q)`$ and a time varying equation of state $`p_Q=(\gamma _Q-1)\rho _Q`$, with $`p_Q=\frac{1}{2}\dot{Q}^2-V(Q)`$ and $`\rho _Q=\frac{1}{2}\dot{Q}^2+V(Q)`$. The equation of state has $`0\le \gamma _Q\le 2`$; it is model dependent and it may vary with time. For an accelerating universe we have the condition $`(\gamma _Q-1)\mathrm{\Omega }_Q<-1/3`$, where $`\mathrm{\Omega }_Q`$ is the ratio between the energy density of $`Q`$ and the critical energy density. An important question is why the energy density of the cosmological constant is of the same order of magnitude as matter even though the expansion rate of both energies is quite different today. This is the coincidence problem . A possible answer is to set the initial conditions such that the Q field starts to dominate only recently. However, this solution introduces a fine tuning problem on the initial conditions. To avoid such a fine tuning problem tracker solutions have been proposed . In these models the energy density of the scalar field Q redshifts mimicking the dominant energy density (radiation) at the beginning, with $`\gamma _Q>1`$, and when matter dominates the equation of state for $`Q`$ changes to $`\gamma _Q<1`$. 
With this kind of fields one avoids the initial fine tuning problem, but they have difficulties explaining (without any fine tuning of the potential) the central values of $`\mathrm{\Omega }_Q=2/3\pm 0.05,\gamma _Q\simeq 0.35\pm 0.07`$ since they have $`\gamma _Q>0.3`$ . However, they remain a very interesting possibility. Other models with more than one term in the potential $`V=V_1+V_2`$, where one term dominates during radiation and scales as radiation or matter while the other dominates at a recent time giving the acceleration of the universe, have been studied . In particular, if during the radiation dominated epoch one has a potential $`V_1=aQ^4`$, it will redshift as radiation and the ratio $`\rho _r/\rho _Q`$ remains constant. On the other hand, for $`V_1=aQ^2`$ the ratio $`\rho _m/\rho _Q`$ is constant. In either case the initial value can be set to $`\rho _r/\rho _Q=O(1)`$ or $`\rho _m/\rho _Q=O(1)`$, avoiding an initial fine tuning problem but not the coincidence problem. Finally, we can invoke the anthropic principle to try to explain the coincidence problem, where the likelihood of our present day cosmological parameters is determined . Different models have been proposed to give an accelerating universe and a general analysis can be found in , . We have the exponential potential, $`V\propto e^{-\lambda Q}`$ with $`\lambda <3`$ ,, the inverse power potentials $`V\propto 1/Q^n,n>0`$ ,, $`V\propto e^{-1/Q}`$ or $`V\propto (c+(Q-Q_0)^n)^ae^{-\lambda Q}`$ and mixtures of any of these potentials. All these potentials have a finite $`\lambda =-V_Q/V<O(1)`$, with $`V_Q\equiv \partial V/\partial Q`$, when $`V`$ approaches its minimum, and this is indeed required for any model to give an accelerating universe since in this limit the potential energy dominates the energy density of Q. Some of these potentials may arise from non-perturbative effects or from string theory , . In this work we address the reverse coincidence problem. 
That is, we start by imposing present day values of $`\mathrm{\Omega }_Q,\mathrm{\Omega }_m`$ and $`\gamma _Q`$ and we go backwards in time. We will show that the evolution of $`\mathrm{\Omega }_Q`$ while the universe accelerates is model independent. We can therefore establish generic initial conditions for $`\mathrm{\Omega }_Q`$ at the beginning of the accelerating epoch. Furthermore, as long as $`\gamma _Q`$ is small (say $`\gamma _Q<0.4`$) the evolution of $`\mathrm{\Omega }_Q`$ remains model independent (within an error of $`r=\gamma _Q/2=20\%`$), and this allows one to set the initial conditions at a much smaller scale factor $`a_i`$ compared to today’s value $`a_o`$, e.g. at radiation and matter equality $`a_i/a_o\simeq 10^{-3}`$ or at nucleosynthesis $`a_i/a_o\simeq 10^{-10}`$. The range of validity is model and initial conditions dependent; however, in most models the universe has a scaling regime with $`\mathrm{\Omega }_Q\ll 1,\gamma _Q\simeq 1`$ before the acceleration epoch, leading to a large number of e-folds of model independent expansion. This regime seems to be necessary due to the condition $`\mathrm{\Omega }_Q<0.1`$ at nucleosynthesis . Here, we do not solve the coincidence problem but we contribute to establishing the model independent behaviour of the universe at the accelerating era and some e-folds before that, say $`a_{m.i.}`$. The last step would be to match those initial conditions at $`a_{m.i.}`$ with a particular model coming from earlier times. In a spatially flat Friedmann–Robertson–Walker (FRW) Universe, the Hubble parameter is given by $`H^2=\frac{1}{3}\rho =\frac{1}{3}(\rho _b+\rho _Q)`$ where the barotropic fluid is described by an energy density $`\rho _b`$ and a pressure $`p_b`$ with a standard equation of state $`p_b=(\gamma _b-1)\rho _b`$, with $`\gamma _b=1`$ for matter and $`\gamma _b=4/3`$ for radiation. Since we will study the evolution starting from today, we will take $`\gamma _b=1`$; however, we will leave the $`\gamma _b`$ dependence in the equations. 
The interaction between the barotropic fluid and Q is gravitational only (we take $`8\pi G=1`$). In terms of the variables $`x\equiv \dot{Q}/\sqrt{6}H`$, $`y\equiv \sqrt{V/3}/H`$ the cosmological evolution of the universe is given by $`x_N`$ $`=`$ $`-3x+\sqrt{\frac{3}{2}}\lambda y^2+\frac{3}{2}x[2x^2+\gamma _b(1-x^2-y^2)]`$ $`y_N`$ $`=`$ $`-\sqrt{\frac{3}{2}}\lambda xy+\frac{3}{2}y[2x^2+\gamma _b(1-x^2-y^2)]`$ (1) $`H_N`$ $`=`$ $`-\frac{3}{2}H[\gamma _b(1-x^2-y^2)+2x^2]`$ where $`N`$ is the logarithm of the scale factor $`a`$, $`N\equiv ln(a)`$, $`f_N\equiv df/dN`$ for $`f=x,y,H`$ and $`\lambda (N)\equiv -V_Q/V`$. Notice that all model dependence in eqs.(1) is through the quantity $`\lambda (N)`$ and the constant parameter $`\gamma _b`$, and generic solutions can be obtained ,. The flatness condition gives $`1=\mathrm{\Omega }_Q+\mathrm{\Omega }_b`$ and from eqs.(1) the evolution of $`\mathrm{\Omega }_Q=x^2+y^2`$ and $`\gamma _Q=2x^2/(x^2+y^2)`$ is $`(\mathrm{\Omega }_Q)_N`$ $`=`$ $`3(\gamma _b-\gamma _Q)\mathrm{\Omega }_Q(1-\mathrm{\Omega }_Q)`$ $`(\gamma _Q)_N`$ $`=`$ $`\gamma _Q(2-\gamma _Q)(-3+\sqrt{\frac{3}{2}}\frac{\lambda }{x}\mathrm{\Omega }_Q)`$ (2) An accelerating expanding universe requires $`\rho +3p=\rho _b(3\gamma _b-2)+\rho _Q(3\gamma _Q-2)<0`$, which gives the constraint $$(\gamma _Q-1)\mathrm{\Omega }_Q<-1/3$$ (3) where we have taken the barotropic fluid as matter, i.e. $`\gamma _b=1`$. Equivalently, we can write condition (3) in terms of $`x`$ as $`x^2<(\mathrm{\Omega }_Q-1/3)/2`$, and at present day this sets an upper value of $`x^2\le 1/6`$ and $`\gamma _Q<1/2`$ for today’s central value of $`\mathrm{\Omega }_{oQ}=2/3`$ (from now on the subscript $`o`$ refers to present day quantities). 
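The system (1) can be integrated directly. The sketch below restores the minus signs dropped by the extraction (our reading, matching the standard quintessence phase-space system) and takes a constant $`\lambda `$, i.e. the exponential potential, with a matter background $`\gamma _b=1`$; $`H`$ decouples and is omitted:

```python
import numpy as np

SQ32 = np.sqrt(1.5)

def rhs(state, lam, gamma_b=1.0):
    # Right-hand side of eqs. (1) for (x, y); signs restored as an assumption.
    x, y = state
    p = 2.0 * x**2 + gamma_b * (1.0 - x**2 - y**2)
    dx = -3.0 * x + SQ32 * lam * y**2 + 1.5 * x * p
    dy = -SQ32 * lam * x * y + 1.5 * y * p
    return np.array([dx, dy])

def evolve(x0, y0, lam, n_folds=40.0, dn=0.01):
    # Fixed-step RK4 integration in N = ln(a); returns (Omega_Q, gamma_Q).
    s = np.array([x0, y0], dtype=float)
    for _ in range(int(n_folds / dn)):
        k1 = rhs(s, lam)
        k2 = rhs(s + 0.5 * dn * k1, lam)
        k3 = rhs(s + 0.5 * dn * k2, lam)
        k4 = rhs(s + dn * k3, lam)
        s = s + (dn / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    x, y = s
    return x**2 + y**2, 2.0 * x**2 / (x**2 + y**2)
```

For $`\lambda =3/4`$ and the initial conditions of fig.1a ($`x=0,y=0.001`$), the trajectory approaches the standard scalar-dominated attractor $`\mathrm{\Omega }_Q\to 1`$, $`\gamma _Q\to \lambda ^2/3=0.1875`$, which is a useful check that the restored signs are self-consistent.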
If we take the preferred value $`\gamma _{oQ}\simeq 0.35`$ the condition on $`x`$ is $`x_o^2=\frac{1}{2}\mathrm{\Omega }_{oQ}\gamma _{oQ}\simeq 0.11`$, which is slightly smaller than the previously obtained one, and $`y_o^2=\mathrm{\Omega }_{oQ}(1-\gamma _{oQ}/2)\simeq 0.55`$, i.e. we have $`y_o^2\simeq 5x_o^2`$. The solution to eq.(2) is only model dependent through the quantity $`\lambda (N)`$. Solving eq.(2) for $`\mathrm{\Omega }_Q`$ we obtain $$\mathrm{\Omega }_Q(N)=\frac{\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)+3\int \gamma _QdN}}{1-\mathrm{\Omega }_{oQ}+\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)+3\int \gamma _QdN}}$$ (4) where $`\mathrm{\Omega }_Q(N_o)=\mathrm{\Omega }_{oQ}`$ and we have $`N<N_o`$ for a time $`t<t_o`$. In the limit of $`\mathrm{\Omega }_{oQ}\ll 1`$ and $`\gamma _Q\simeq cte`$ eq.(4) reduces to $`\mathrm{\Omega }_Q\propto e^{3N(\gamma _b-\gamma _Q)}`$. However we are not interested in this region since the initial condition has $`\mathrm{\Omega }_{oQ}\simeq 2/3`$. During the accelerating period we have $`x^2<y^2`$ and $`\gamma _Q<\gamma _b`$ and we can expand eq.(4) as a function of $`R\equiv 3\int \gamma _QdN`$. Eq.(4) becomes $$\mathrm{\Omega }_Q=\stackrel{~}{\mathrm{\Omega }}_Q\left(1+\frac{1-\mathrm{\Omega }_{oQ}}{1-\mathrm{\Omega }_{oQ}+\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)}}R+O(R^2)\right)$$ (5) with $$\stackrel{~}{\mathrm{\Omega }}_Q=\frac{\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)}}{1-\mathrm{\Omega }_{oQ}+\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)}}.$$ (6) Notice that as long as $`R`$ is constant or $`R\ll 1`$, eq.(5) is model independent and it is certainly valid during the acceleration period of the universe. We see, therefore, that the accelerating expansion of the universe is model independent and the energy density of quintessence evolves as $`\stackrel{~}{\mathrm{\Omega }}_Q`$. Furthermore, $`\stackrel{~}{\mathrm{\Omega }}_Q`$ remains a good approximation to eq.(4) as long as $`\gamma _Q`$ is small. 
We can establish a relationship between the amount of discrepancy that we wish to accept between $`\mathrm{\Omega }_Q`$ and $`\stackrel{~}{\mathrm{\Omega }}_Q`$. For an $`r`$ percentage of difference, i.e. $`\mathrm{\Omega }_Q/\stackrel{~}{\mathrm{\Omega }}_Q=1+r`$, we have $`r=\frac{1-\mathrm{\Omega }_{oQ}}{1-\mathrm{\Omega }_{oQ}+\mathrm{\Omega }_{oQ}e^{-3\gamma _b(N_o-N)}}R\simeq R`$ (at large $`N_o-N`$). Since $`(\gamma _Q)_N`$ is proportional to $`\gamma _Q`$ it varies rapidly when $`\gamma _Q`$ is not small, as can be seen from eq.(2). $`R`$ becomes large when $`\gamma _{QN}\simeq -6\gamma _Q`$ (as a first approximation) and we find $`R=3\int \gamma _QdN=(\mathrm{\Delta }\gamma _Q)/2`$. If we allow an $`r=20\%`$ discrepancy between the exact and the model independent $`\stackrel{~}{\mathrm{\Omega }}_Q`$, this will happen at $`\mathrm{\Delta }\gamma _Q=2r=0.4`$. Since before the accelerating epoch one has $`x\ll y`$ and $`\gamma _Q\ll 1`$, the model independent energy density stops being a good approximation only when $`\gamma _Q\simeq 2r`$. When this will happen is model and initial condition dependent; however, for a large number of cases $`N_o-N_i`$ can be quite large ($`N_i`$ is the beginning of the model independent evolution of Q). In fact, if, for example, we set initial conditions such that $`x<y/2`$, i.e. $`\gamma _Q<0.4`$, at $`N_i`$ (regardless of its value), the evolution of $`\mathrm{\Omega }_Q`$ is model independent (within a maximum of 20% discrepancy). In fig.1a and 1b we show the behaviour of the exact $`\mathrm{\Omega }_Q`$, obtained by solving the dynamical eqs.(1) numerically, compared to $`\stackrel{~}{\mathrm{\Omega }}_Q`$ for the same exponential potential $`V=V_0e^{-3Q/4}`$ but with different initial conditions $`x=0,y=0.001`$ and $`x=0.5,y=0.001`$ respectively at $`N=0\equiv N_o`$. 
To appreciate more the difference between these two quantities we plot $`\mathrm{\Omega }_Q/\stackrel{~}{\mathrm{\Omega }}_Q`$ and we include in the graph the acceleration parameter $`\alpha =(\rho +3p)/(\rho _b+3p_b)`$ and $`\gamma _Q`$. We observe from the graphs that the concordance between $`\stackrel{~}{\mathrm{\Omega }}_Q`$ and $`\mathrm{\Omega }_Q`$ is remarkable as long as the universe expands in an accelerating way ($`\alpha <0`$) and $`\gamma _Q`$ remains small. We would like to mention that, running the models down from present day, the solution is very sensitive to the initial conditions at $`N_o`$. This is because we have a late time attractor, and models with small variations at a late time may have come from quite different values at large $`\mathrm{\Delta }N`$. We also show in fig.1c two different models ($`V\propto 1/Q`$ and $`V\propto e^{-1/Q}`$) with initial conditions $`x=0,y=0.001`$ at $`N=0`$. From fig.1 we see that as long as $`x,\gamma _Q`$ are small the agreement between $`\mathrm{\Omega }_Q`$ and $`\stackrel{~}{\mathrm{\Omega }}_Q`$ is very good. In order to avoid a rapid increase in $`\mathrm{\Omega }_Q`$ (going backwards in time), which would contradict the nucleosynthesis bound ($`\mathrm{\Omega }_Q<0.1`$), we require precisely this kind of behaviour. So, we expect our $`\stackrel{~}{\mathrm{\Omega }}_Q`$ to be a good approximation for a long period of expansion that could go as far as matter and radiation equality or nucleosynthesis. Furthermore, if we run the models backwards the ”early” time attractor has $`\mathrm{\Omega }_Q=1`$ (with $`x=1,y=0`$), and the scale at which the transition between $`\gamma _Q>1`$ and $`\gamma _Q<1`$ takes place is a symmetric point in $`\mathrm{\Omega }_Q`$ as a function of $`N`$. Therefore if $`\mathrm{\Omega }_Q\ll 1`$ for $`\mathrm{\Delta }N=2M`$ then our generic solution will be valid for at least $`M`$ e-folds of expansion. If we assume that the quintessence energy density has been increasing since nucleosynthesis (i.e. 
no extra complications) then $`\stackrel{~}{\mathrm{\Omega }}_Q`$ is valid for at least 12 e-folds, i.e. it includes the matter and radiation equality scale. We can invert eq.(4) to get the number of e-folds of expansion $`\mathrm{\Delta }N=N_o-N_i`$ (see fig.2b) as a function of $`\mathrm{\Omega }_{iQ}(N_i)/\mathrm{\Omega }_{oQ}(N_o)`$, $$\mathrm{\Delta }N=\frac{1}{3}\left(Log\left[\frac{\mathrm{\Omega }_{oQ}}{\mathrm{\Omega }_{iQ}}\left(\frac{1-\mathrm{\Omega }_{iQ}}{1-\mathrm{\Omega }_{oQ}}\right)\right]+Log[1+R]\right)$$ (7) We can therefore determine the amount of quintessence $`\mathrm{\Omega }_Q`$ at different earlier times. For example, the number of e-folds of accelerating expansion is at most $`\mathrm{\Delta }N=0.46`$ since $`\mathrm{\Omega }_Q`$ must be greater than or equal to $`1/3`$ (cf. eq.(3)). During structure formation, with $`a_i/a_o\simeq 1/2.2`$, we have $`\mathrm{\Omega }_Q=0.15`$; at matter and radiation equality (useful for tracker models), $`a_i/a_o\simeq 10^{-3}`$, we get $`\mathrm{\Omega }_Q=2\times 10^{-9}`$; and at nucleosynthesis, with $`a_i/a_o\simeq 10^{-10}`$, we have $`\mathrm{\Omega }_Q=2\times 10^{-23}`$. With these results we can establish the initial conditions from which the model independent expansion is valid and we only need to concentrate on the behaviour at earlier times. To conclude, we have seen that the expansion of the accelerating universe is model independent and is given by $`\stackrel{~}{\mathrm{\Omega }}_Q`$. As long as $`\gamma _Q`$ is small the evolution of $`\stackrel{~}{\mathrm{\Omega }}_Q`$ remains a good approximation to the exact solution and the percentage difference is given by $`r=\gamma _Q/2`$. Due to the nucleosynthesis bound ($`\mathrm{\Omega }_Q<0.1`$), we do not expect (going backwards in time from today’s value) $`\mathrm{\Omega }_Q`$ to have a decrease and then a rapid increase, and therefore our solution should be valid for at least 12 e-folds of expansion. 
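Eqs. (6) and (7) are easy to check numerically. The sketch below reproduces the quoted maximum of $`\mathrm{\Delta }N\simeq 0.46`$ accelerating e-folds and $`\mathrm{\Omega }_Q\simeq 2\times 10^{-9}`$ at matter–radiation equality; the sign of the small $`Log[1+R]`$ correction in eq. (7) is our reading of the garbled original and is irrelevant when the correction is dropped:

```python
import math

def omega_tilde(dn, omega_o=2.0 / 3.0, gamma_b=1.0):
    # Eq. (6): model-independent Omega_Q, dn = N_o - N e-folds before today
    e = math.exp(-3.0 * gamma_b * dn)
    return omega_o * e / (1.0 - omega_o + omega_o * e)

def delta_n(omega_i, omega_o=2.0 / 3.0, r_corr=0.0):
    # Eq. (7); set r_corr = 0 to drop the R-dependent correction term
    main = math.log((omega_o / omega_i) * (1.0 - omega_i) / (1.0 - omega_o))
    return (main + math.log(1.0 + r_corr)) / 3.0
```

For instance, `delta_n(1/3)` gives $`\frac{1}{3}Log\,4\simeq 0.462`$, and `omega_tilde(math.log(1e3))` gives $`\simeq 2\times 10^{-9}`$ at the equality scale.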
We can then impose model independent conditions at much smaller scales than our present day one, like at radiation and matter equality. This result contributes to solving the reverse coincidence problem since we can trace back the problem of initial conditions to a scale at which the universe expanded in a non-accelerating way and possibly as matter or radiation. A.M.’s research was supported in part by CONACYT project 32415-E and by DGAPA, UNAM, project IN-103997.
# Thermodynamics of self-gravitating systems with softened potentials ## I Introduction The statistical mechanics of self-gravitating systems is amazing. It has been studied since long ago by Antonov , Lynden-Bell and Wood , Thirring , and Kiessling , among others . One reason for the interesting and peculiar behavior of these systems is that they are thermodynamically unstable. The usual thermodynamical limit exists only for those systems which are thermodynamically stable . For a system of $`N_p`$ classical particles interacting via a two body potential $`\varphi (r)`$, a sufficient condition for thermodynamical stability states that there must exist a positive constant $`E_0`$ such that, for each configuration $`\{𝒓_1,\dots ,𝒓_{N_p}\}`$, the following inequality is obeyed : $$\mathrm{\Phi }(𝒓_1,\dots ,𝒓_{N_p})=\frac{1}{2}\underset{i\ne j}{\sum }\varphi (|𝒓_i-𝒓_j|)\ge -N_pE_0.$$ (1) In contrast, self-gravitating systems do not possess a proper thermodynamical limit. Moreover, due to the short distance singularity of the gravitational potential, the entropy is not even well defined: it diverges for any value of the energy . To define the thermodynamics of these systems the potential must be regularized at short distances. This can be done in many different ways. Particles endowed with a hard core are one possibility . In this case, the potential is repulsive and singular at short distances. Other popular choices are the so called softened potentials, which are smooth at the origin. As shown by Thirring , the thermodynamical instability is caused neither by the singularity nor by the long range nature of the potential, but is due to the fact that the potential is always attractive<sup>*</sup><sup>*</sup>*In the case of hard core particles the potential is repulsive at short distances and the thermodynamical instability is actually due to the long range forces.. 
The essential common feature of these purely attractive potentials is the appearance of a phase transition separating a high energy homogeneous phase (HP) from a low energy collapsing phase (CP) . The phase transition takes place in an energy interval with negative microcanonical specific heat. From the dynamical point of view, both phases are also different: the single particle motion is superdiffusive in the CP and ballistic in the HP . The dynamics and statistics of simple low dimensional models with long range attractive forces have been studied in . Their conclusions support the idea of a collapsing phase transition as in the Thirring model. If angular momentum is conserved, the situation could be notably altered . As mentioned, the usual thermodynamical limit does not exist for unstable systems. To have well defined thermodynamics when the number of particles $`N_p`$ is huge, the following scaling must be considered: the potential energy is rescaled by $`1/N_p`$, and then the energy and entropy scale with $`N_p`$. It has been proved for the canonical ensemble that this scaling reproduces mean field theory exactly in the limit $`N_p\to \mathrm{\infty }`$ . This means that correlations among two or more particles vanish, and therefore the equilibrium state is characterized by a one particle density only, which minimizes the free energy functional. Although we are not aware of any rigorous proof, we shall assume here that the same holds for the microcanonical ensemble, changing minimization of the free energy to maximization of the entropy functional. If the troubles caused by the short distance singularity are ignored, it is possible to write down a mean field entropy functional for self-gravitating systems, which depends only on the particle density. This functional is not upper bounded, and, therefore, has no absolute maximum, a reflection of the fact that the entropy is not defined in the finite system. 
For energies larger than $`E_c\approx -0.335\,GM^2/R`$ there is, however, a local maximum. Below this energy, no local maximum of the entropy exists . This fact was explained in terms of a transition from the homogeneous isothermal sphere behavior to the CP at $`E_c`$. The transition produces negative specific heat, and was called the gravo-thermal catastrophe . Very recently, it has been pointed out that the low energy phase might be described by a spherically non-symmetric deformation of the singular solution of the isothermal Lane-Emden equation . The gravitational potential must be modified at short distances to make equilibrium statistical mechanics applicable to self-gravitating systems. As shown by Kiessling for the canonical ensemble , in the limit where the classical gravitational potential is recovered the equilibrium state approaches a particle distribution with all particles collapsed at a single point. The behavior of the system will depend on the scales at which the regularization is effective. There might be regularized potentials which, in the mean field limit, produce the global maximum of their associated entropy close to the solution of the isothermal Lane-Emden equation for energies $`E\gtrsim E_c`$. If this is the case, a collapsing transition should occur at some energy close to $`E_c`$. The CP is expected to be very sensitive to the details of the regularization at short distances, and the HP almost insensitive to them. On the other hand, if the regularization is effective only at very short distances, the solutions of the isothermal Lane-Emden equation will be global maxima of the entropy only at very high energies, and therefore the collapsing transition will take place at some energy much larger than $`E_c`$. The smaller the scale at which the regularized potential differs significantly from the unregularized one, the higher the critical energy. An entirely similar picture was rigorously established by Kiessling for the canonical ensemble .
In this paper we introduce a convenient new softening procedure for the regularization of the gravitational potential, and we investigate its consequences for the microcanonical thermodynamics of self-gravitating systems. The rest of the article is organized as follows: in Sec. II we introduce the family of potentials to be studied; in Sec. III we derive the mean field equation and its general solution in terms of a set of algebraic equations; Sec. IV is devoted to the discussion of the results and Sec. V to a summary of the conclusions. ## II Softened potential As mentioned in the introduction, the short distance singularity of the gravitational potential causes many problems. Moreover, for a real system such a singularity is not physical, since at short distances new physics must be taken into account. Thus, the potential should be modified at short distances to avoid the singularity. In simulations of cosmological problems a widely used choice is the so called Plummer softened potential : $$\varphi (r)=-\frac{GM^2}{\sqrt{r^2+\sigma ^2}}.$$ (2) For $`r\gg \sigma `$, (2) coincides with the gravitational potential. Other softened potentials are those known as spline softened . The equilibrium thermodynamics of systems with these softened potentials has been studied in , and the dynamical effects of softening were considered in . The form of the potential at short distances is arbitrary to a large extent, since we do not know how the new interactions modify it. The problems with the singularity of the gravitational potential in statistical mechanics also disappear if the equilibrium distribution is modified appropriately. For instance, if the approach to equilibrium is collisionless, via violent relaxation, the equilibrium state is described by the Lynden-Bell statistics , whose one-particle distribution is of Fermi-Dirac type and produces an effective repulsion at short distances .
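As a minimal numerical illustration of softening, the Plummer form of Eq. (2) can be compared with the unsoftened $`1/r`$ potential. This is a sketch in units where the prefactor is set to one; the function names are ours:

```python
import math

def plummer_potential(r, sigma, prefactor=1.0):
    """Plummer-softened pair potential, Eq. (2): finite at r = 0."""
    return -prefactor / math.sqrt(r * r + sigma * sigma)

def newtonian_potential(r, prefactor=1.0):
    """Unsoftened attractive 1/r potential, singular at the origin."""
    return -prefactor / r

# At the origin the softened potential saturates at -1/sigma,
# while for r >> sigma it is indistinguishable from the Newtonian one.
sigma = 0.05
for r in (0.0, 0.05, 0.5, 1.0):
    soft = plummer_potential(r, sigma)
    hard = newtonian_potential(r) if r > 0 else float("-inf")
    print(f"r = {r:4.2f}  softened = {soft:9.4f}  Newtonian = {hard:9.4f}")
```

The softening scale `sigma` plays the role of the regularization scale discussed above: the smaller it is, the closer the regularized potential follows the Newtonian one.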
The regularized potential we propose, which, as we shall discuss, has several remarkable features that make it very convenient for thermodynamical purposes, is based on the following identity<sup>†</sup><sup>†</sup>†Equation (3) follows immediately from the sine series expansion of the constant function, $`f(x)=1`$, in the interval $`(0,1)`$.: $$\frac{1}{x}=4\sum _{k=1}^{\mathrm{\infty }}\frac{\mathrm{sin}[(2k-1)\pi x]}{(2k-1)\pi x},0<x<1.$$ (3) The singularity at the origin is removed by truncating the series at a given order $`N`$. Let us consider a system of $`N_p`$ particles confined within a sphere of radius $`R`$ in 3 dimensions. The maximum distance between two particles is $`2R`$. Hence, our potential must represent $`1/r`$ for distances $`0<r<2R`$. Therefore, we choose the following interaction energy between two particles of mass $`m`$ located at $`𝒓`$ and $`𝒓^{\prime }`$: $$\varphi (|𝒓-𝒓^{\prime }|)=-\frac{Gm^2}{R}\,2\sum _{k=1}^{N}\varphi _k(|𝒓-𝒓^{\prime }|/R),$$ (4) where $`\varphi _k(x)=\mathrm{sin}(\omega _kx)/(\omega _kx)`$ are spherical Bessel functions of order zero and $`\omega _k=(2k-1)\pi /2`$. Fig. 1 displays the singular potential and the regularized potential with $`N=10`$ and $`N=20`$. A similar expansion has been used to introduce simple models in low dimensions which make it possible to perform numerical simulations of systems with long range attractive forces with CPU time growing only linearly with the number of particles . What is remarkable about (4) is that each term obeys the following differential equation: $$\left(\nabla ^2+\frac{\omega _k^2}{R^2}\right)\varphi _k(r/R)=0,$$ (5) so that the potential (4) verifies $$𝒟_N\varphi (r)=0,$$ (6) where $$𝒟_N=\prod _{k=1}^{N}\left(\nabla ^2+\frac{\omega _k^2}{R^2}\right).$$ (7) This relation will prove very useful in the mean field analysis of the next section.
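The truncated expansion is easy to check numerically. In units $`R=1`$, Eqs. (3)–(4) say that $`2\sum _{k=1}^{\mathrm{\infty }}\varphi _k(x)`$ reproduces $`1/x`$ for $`0<x<2`$. A sketch, with our own function names:

```python
import math

def phi_k(x, k):
    """Zeroth-order spherical Bessel term sin(w_k x)/(w_k x), w_k = (2k-1)*pi/2."""
    w = (2 * k - 1) * math.pi / 2.0
    return 1.0 if x == 0.0 else math.sin(w * x) / (w * x)

def truncated_inverse(x, N):
    """Truncation of 1/x = 2 * sum_k phi_k(x) at order N; finite at x = 0."""
    return 2.0 * sum(phi_k(x, k) for k in range(1, N + 1))

# The truncation removes the singularity: each term equals 1 at x = 0,
# so the regularized "1/x" saturates at the finite value 2N there.
print(truncated_inverse(0.0, 10))    # 20.0
print(truncated_inverse(1.0, 200))   # close to 1
```

As with a truncated Fourier series, convergence away from the origin is slow (the error falls roughly like $`1/N`$), which is why moderate values such as $`N=10`$ or $`N=20`$ already change the potential appreciably at short distances only.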
## III Mean Field analysis It is well known that long range forces suppress fluctuations, and thus in these cases a mean field analysis is accurate or even exact. We expect that, dealing with an unstable system in the scaling regime described in the introduction, the description of the thermodynamical state in terms of a one particle density, neglecting correlations between two or more particles, captures the essential physical behavior . We will derive in this section the mean field equation and the corresponding thermodynamic quantities for a system whose dynamics is governed by a potential of the form (4). ### A Mean Field Equation Let us consider a system of particles enclosed in a spherical region of radius $`R`$ and volume $`V=4\pi R^3/3`$, with a total mass $`M`$ distributed according to a smooth density $`\rho (𝒓)`$, normalized such that $`\int d^3r\,\rho (𝒓)=1`$, and interacting via a two body central potential $`\varphi (|𝒓-𝒓^{\prime }|)`$. If the potential is smooth, the entropy per particle in the microcanonical ensemble can be written in terms of the particle density as : $$𝒮=-\int d^3r\,\rho (𝒓)\left[\mathrm{ln}V\rho (𝒓)-1\right]+\frac{3}{2}\mathrm{ln}\left(E-\mathrm{\Phi }\right),$$ (8) where $`E`$ is the total energy and $`\mathrm{\Phi }`$ is the potential energy: $$\mathrm{\Phi }[\rho ]=\frac{1}{2}\int d^3r\int d^3r^{\prime }\,\rho (𝒓)\varphi (|𝒓-𝒓^{\prime }|)\rho (𝒓^{\prime }).$$ (9) The volume $`V`$ entering the first term on the r.h.s. of (8) has been included to make the entropy dimensionally correct; it plays no significant role since it only shifts the entropy by a constant. The physical density is the absolute maximum of (8) under the constraint $`\int \rho =1`$. Differentiating with respect to $`\rho `$ we arrive at the following integral equation: $$\mathrm{ln}V\rho (𝒓)=\mu -\frac{3}{2}\beta \int d^3r^{\prime }\,\varphi (|𝒓-𝒓^{\prime }|)\rho (𝒓^{\prime }),$$ (10) where $`\beta =1/(E-\mathrm{\Phi })`$ and $`\mu `$ is the Lagrange multiplier for the constraint $`\int \rho =1`$.
Defining $`\nu (𝒓)`$ by $$\rho (𝒓)=\frac{1}{V}\mathrm{exp}[\mu +\nu (𝒓)],$$ (11) the constraint is solved by taking $$e^\mu =\frac{V}{\int d^3r\,e^{\nu (𝒓)}}.$$ (12) Substituting (11) and (12) in (10), we obtain for $`\nu (𝒓)`$: $$\nu (𝒓)=-\frac{3}{2}\beta \frac{\int d^3r^{\prime }\,\varphi (|𝒓-𝒓^{\prime }|)e^{\nu (𝒓^{\prime })}}{\int d^3r^{\prime }\,e^{\nu (𝒓^{\prime })}}.$$ (13) If we take for $`\varphi (r)`$ the Newtonian potential, we know that the entropy is not well defined. Nevertheless, it is still possible to start formally with the entropy functional (8), which gives a finite result for any smooth distribution $`\rho (𝒓)`$ but is unbounded (see IV C). There can still exist local maxima, which are then solutions of (13). By expanding the right-hand side of this equation in a series of spherical Bessel functions and truncating after $`N`$ terms, one would obtain results equivalent to the ones we get using the softened potential. If we now particularize (13) to the softened potential (4), we see that $`\nu (𝒓)`$ obeys the same differential equation (6) as the potential. Imposing rotational symmetry on $`\nu `$, we obtain the following ordinary differential equation: $$\prod _{k=1}^{N}\left(\frac{d^2}{dr^2}+\frac{2}{r}\frac{d}{dr}+\frac{\omega _k^2}{R^2}\right)\nu (r)=0.$$ (14) The general solution of this equation is a linear combination of $`\{\mathrm{sin}(\omega _kr/R)/r\}`$ and $`\{\mathrm{cos}(\omega _kr/R)/r\}`$. The cosines must be absent from the solution, since (13) implies that $`\nu `$ is smooth at the origin. (Only at $`T=0`$, i.e., $`\beta =\mathrm{\infty }`$, is $`\nu `$ singular.)
Indeed, it is shown explicitly in appendix A that the solution of (13) can be written as $$\nu (r)=\sum _{k=1}^{N}\nu _k\varphi _k(r/R),$$ (15) where the $`\nu _k`$ are $`N`$ numerical coefficients determined by the following set of equations: $$\nu _k=3\beta \frac{GM^2}{R}\frac{\int _0^1dx\,x^2\varphi _k(x)\mathrm{exp}\{\sum _j\nu _j\varphi _j(x)\}}{\int _0^1dx\,x^2\mathrm{exp}\{\sum _j\nu _j\varphi _j(x)\}}.$$ (16) The integral equation (13) has been reduced to a system of $`N`$ non-linear algebraic equations with $`N`$ unknowns. It can be solved by iteration, for instance with a Newton algorithm (see appendix B for a summary of the method used in this work). ### B Thermodynamical quantities Using formula (A2) of appendix A, it is straightforward to verify that, for a spherically symmetric mass distribution $`\mathrm{exp}(\mu +\nu (r))`$, the potential energy is given by: $$\mathrm{\Phi }=-\frac{GM^2}{R}\sum _{k=1}^{N}\left[\frac{\int _0^Rdr\,r^2\varphi _k(r/R)e^{\nu (r)}}{\int _0^Rdr\,r^2e^{\nu (r)}}\right]^2.$$ (17) For an equilibrium distribution of the form (15), Eq. (16) implies $$\mathrm{\Phi }=-\frac{1}{9\beta ^2}\frac{R}{GM^2}\sum _{k=1}^{N}\nu _k^2.$$ (18) Hence, the total energy is $$E=\frac{1}{\beta }-\frac{1}{9\beta ^2}\frac{R}{GM^2}\sum _k\nu _k^2.$$ (19) From (8) we easily obtain the equilibrium entropy: $$𝒮=\mathrm{ln}\left(\int _0^1dx\,x^2\mathrm{exp}(\sum _k\nu _k\varphi _k(x))\right)-\frac{R}{GM^2}\frac{\sum _k\nu _k^2}{3\beta }-\frac{3}{2}\mathrm{ln}\beta .$$ (20) Since the entropy is stationary under variations of the mass distribution, the inverse temperature is $`1/T\equiv \partial 𝒮/\partial E=\beta =1/(E-\mathrm{\Phi })`$. ## IV Results In order to present specific numerical results, it is convenient to work with dimensionless quantities. We measure the energy in units of the characteristic energy, $`GM^2/R`$, where $`M`$ is the total mass and $`R`$ the radius of the confining sphere. The dimensionless energy is then $`ϵ=ER/(GM^2)`$.
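A minimal numerical sketch of Eqs. (16)–(19) in these units ($`GM^2/R=1`$): the residuals $`F_k=\nu _k-3\beta I_k`$, with $`I_k`$ the ratio of integrals in (16), are evaluated here with a simple midpoint rule (the paper itself uses Romberg integration); the function names are ours.

```python
import math

def phi_k(x, k):
    """sin(w_k x)/(w_k x) with w_k = (2k-1)*pi/2."""
    w = (2 * k - 1) * math.pi / 2.0
    return 1.0 if x == 0.0 else math.sin(w * x) / (w * x)

def residuals(nu, beta, n_grid=400):
    """F_k = nu_k - 3*beta*I_k, cf. Eq. (16), in units GM^2/R = 1."""
    N = len(nu)
    dx = 1.0 / n_grid
    xs = [(i + 0.5) * dx for i in range(n_grid)]
    # x^2 * exp(sum_j nu_j phi_j(x)) at each grid point (dx cancels in I_k)
    w = [x * x * math.exp(sum(nu[j] * phi_k(x, j + 1) for j in range(N)))
         for x in xs]
    denom = sum(w)
    return [nu[k - 1]
            - 3.0 * beta * sum(wi * phi_k(x, k) for x, wi in zip(xs, w)) / denom
            for k in range(1, N + 1)]

def thermo(nu, beta):
    """Potential and total energy from Eqs. (18)-(19), in units GM^2/R = 1."""
    phi = -sum(v * v for v in nu) / (9.0 * beta * beta)
    return phi, 1.0 / beta + phi
```

A microcanonical solver would drive these residuals to zero with the Newton–Raphson iteration of appendix B, updating $`\beta =1/(ϵ-\mathrm{\Phi })`$ self-consistently at fixed $`ϵ`$.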
Any other quantity with dimensions of energy (like the temperature $`1/\beta `$ and the potential energy $`\mathrm{\Phi }`$) must also be understood to be expressed in units of $`GM^2/R`$, and, similarly, magnitudes with dimensions of length are given in units of $`R`$. As a matter of terminology, we shall call the unregularized potential, $`-GM^2/r`$, the Newtonian potential (NP), its corresponding entropy the Newtonian entropy (NE), the potential regularized by Eq. (4) the regularized potential (RP), and its associated entropy the regularized entropy (RE). ### A N = 10 For values of $`N`$ which are not too large, computations are easy. Let us describe the case $`N=10`$ in detail. (See appendix B for a summary of the numerical methods used in this work.) The solution of Eq. (16) as a function of the energy $`ϵ`$ provides all thermodynamic functions. For each value of $`ϵ`$ we found only one solution, which should then be the absolute maximum of (8). We shall return to this point later on, in Sec. IV C. Fig. 2 displays the inverse temperature $`\beta =1/T`$ versus $`ϵ`$. In the thermodynamics of stable systems, this function must be monotonically decreasing, since the entropy is a concave function of the energy . In the present case, however, $`\beta `$ decreases with $`ϵ`$ in the low and high energy regimes, but it increases for $`ϵ`$ in $`(-4.46,0.2)`$ and, consequently, the specific heat is negative in this energy interval. This is a consequence of the instability of the system. As is usually the case with these systems , the negative specific heat region is associated with a transition to a collapsed phase. To investigate this, let us define an order parameter $`\kappa =R_0/R`$, where $`R_0`$ is the radius of the sphere centered at the origin which contains 95% of the mass (the value of 95% is, of course, arbitrary). Fig. 3 displays $`\kappa `$ versus $`ϵ`$. For $`ϵ\to \mathrm{\infty }`$ the mass is distributed homogeneously, and then $`\kappa =(0.95)^{1/3}\approx 0.9830`$.
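The order parameter just defined can be evaluated for any spherically symmetric trial density by accumulating mass shells. A minimal sketch in units $`R=1`$ (function names ours):

```python
import math

def kappa(density, fraction=0.95, n_grid=20000):
    """Radius of the sphere containing `fraction` of the total mass of rho(r)."""
    dr = 1.0 / n_grid
    # Mass in each thin shell: 4*pi*r^2*rho(r)*dr, midpoint rule.
    shells = [4.0 * math.pi * ((i + 0.5) * dr) ** 2 * density((i + 0.5) * dr) * dr
              for i in range(n_grid)]
    total = sum(shells)
    acc = 0.0
    for i, s in enumerate(shells):
        acc += s
        if acc >= fraction * total:
            return (i + 1) * dr
    return 1.0

# Homogeneous density rho = 3/(4*pi): kappa = 0.95**(1/3) ~ 0.9830.
print(kappa(lambda r: 3.0 / (4.0 * math.pi)))
```

For a density with a dense core and a tenuous halo, the same routine returns a small value of $`\kappa `$, which is how the collapsed phase shows up in Fig. 3.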
When the energy is reduced, $`\kappa `$ decreases monotonically and slowly. Notice the anomaly at $`ϵ\approx -0.335`$; we shall discuss it in Sec. IV B. The collapsing order parameter $`\kappa `$ varies abruptly in the region where the specific heat is negative. It decays from $`\kappa \approx 0.95`$, corresponding to a homogeneous phase, to $`\kappa \approx 0.1`$. In the latter case, the mass distribution consists of a small dense core and a homogeneous tenuous halo. The results of this section are similar to those found by regularizing the potential with hard core repulsions , and to those derived from the Lynden-Bell statistics applied to the unregularized potential . It is remarkable that different regularizations lead to similar results. ### B Newtonian potential It is interesting to compare the maximum of the RE with the local maximum of the NE. Substituting $`\varphi (|𝒓-𝒓^{\prime }|)=-GM^2/|𝒓-𝒓^{\prime }|`$ in (13), and using the fact that this potential is a Green function of the Laplacian, we get the following differential equation $$\nabla ^2\nu (𝒓)=-\frac{3}{2}\beta e^\mu \mathrm{exp}(\nu (𝒓)),$$ (21) which, for spherically symmetric $`\nu `$, is equivalent to the isothermal Lane-Emden equation : $$\frac{d^2\nu (r)}{dr^2}+\frac{2}{r}\frac{d\nu (r)}{dr}+\frac{3}{2}\beta e^\mu e^{\nu (r)}=0$$ (22) The proper solutions of Eq. (22), with $`\beta `$ and $`\mu `$ such that $`\beta =1/(ϵ-\mathrm{\Phi })`$ and $`\mu =-\mathrm{ln}\int dr\,r^2\mathrm{exp}\nu (r)`$, give local maxima of the entropy if $`ϵ>-0.335`$ . The high energy phase should depend only weakly on the form of the potential at short distances. Therefore, the maximum of the RE in the high energy phase might be an approximation to the local maximum of the NE given by Eq. (22). This is indeed the case. To see how close the maximum of the RE is to the appropriate solution of eq. (22), we define a distance between functions by $`D=\mathrm{max}\{|\nu _{N=10}(r)-\nu _{LE}(r)|\}`$, where the subscripts indicate the solutions of Eq.
(16) with $`N=10`$, and of Eq. (22), respectively. For $`ϵ>-0.335`$, i.e., when the Lane-Emden equation determines a local maximum of the NE, $`D<10^{-4}`$. The absolute maximum of the RE is indeed a very good approximation to the local maximum of the NE. Now we can understand the anomaly in $`\kappa `$ around $`ϵ\approx -0.335`$, which was mentioned in Sec. IV A and which can be appreciated in Fig. 3. At this point, which is close to the energy at which the solutions of the Lane-Emden equation cease to be local maxima of the NE, the nature of the maximum of the RE also changes, giving rise to anomalies such as the peak in $`1/T`$ (Fig. 2) and the dip in $`\kappa `$ (Fig. 3). The effect of the regularization is to deform the entropy functional dramatically for mass distributions $`\rho (r)`$ which are very concentrated at the origin. These distributions acquire a huge amount of negative entropy after softening the potential, at least for $`N=10`$, in such a way that there are no maxima of the RE close to them. On the other hand, the entropy of smooth distributions which are not concentrated is sensitive to the global form of the potential rather than to the short distance details. Therefore, these distributions have similar NE and RE, and they essentially do not feel the regularization. The solutions of the Lane-Emden equation belong to this class, and, consequently, close to them there is a local maximum of the RE, which is indeed the global maximum for not too large $`N`$, in particular for $`N=10`$. ### C $`N`$ dependence The NP can be arbitrarily well approximated at short distances by a RP with $`N`$ sufficiently large. Consequently, the maximum of the RE close to the local maximum of the NE will approach the latter in the $`N\to \mathrm{\infty }`$ limit, and must obviously cease to be the global maximum of the RE, becoming a local one, at some value of $`N`$, which will be denoted by $`N_c`$. Since the maximum of the entropy depends on the energy, $`N_c`$ is a function of $`ϵ`$.
In principle, we can compute $`N_c(ϵ)`$ by solving Eq. (16) for large values of $`N`$. In practice, however, this is very difficult and we must content ourselves with an estimate of $`N_c`$. To obtain the estimate, let us first analyze how matter distributions with arbitrarily high NE can be built. The entropy functional (8) has an upper bound if the potential energy (9) is bounded from below ($`\mathrm{\Phi }\ge \mathrm{\Phi }_{min}`$ for any $`\rho (r)`$): $$𝒮\le 1+\frac{3}{2}\mathrm{ln}(ϵ-\mathrm{\Phi }_{min}).$$ (23) In the case of the RP, $`\mathrm{\Phi }_{min}=-N`$. Since the entropy has an upper bound, it is reasonable to assume that it has a global maximum, given by a regular function $`\nu (r)`$ of the form (15) with coefficients $`\nu _k`$ verifying (16). The potential energy associated with the NP has no lower bound, and therefore $`\mathrm{\Phi }_{min}=-\mathrm{\infty }`$. Hence, (23) does not provide an upper bound for the NE. Indeed, it is straightforward to verify that the distribution $$\rho (r)=\{\begin{array}{cc}\frac{3\alpha }{4\pi r_0^3}\hfill & 0<r<r_0\hfill \\ \frac{3(1-\alpha )}{4\pi (1-r_0^3)}\hfill & r_0<r<1\hfill \end{array}$$ (24) with $`0<\alpha <1`$, has arbitrarily large entropy in the limit $`r_0\to 0`$ taken with $`\alpha \mathrm{ln}r_0`$ held constant<sup>†</sup><sup>†</sup>†Notice that in the limit $`r_0\to 0`$ with $`\alpha \mathrm{ln}r_0`$ constant only an infinitesimal amount of matter collapses, while the rest is homogeneously distributed. This reflects the fact that it is enough that two particles (hard binaries) become arbitrarily close to make the potential energy arbitrarily negative, and therefore the kinetic energy arbitrarily large. However, as they constitute only two degrees of freedom, their contribution to the purely configurational entropy term $`-\int \rho (\mathrm{ln}V\rho -1)`$ of (8) is negligible.. This is true for any value of $`ϵ`$.
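The unbounded growth of the entropy (8) for distributions of the form (24) can be illustrated numerically. In the sketch below (our notation, units $`GM^2/R=1`$, $`ϵ=0`$) the potential energy is approximated by the Newtonian self-energy of the uniform core alone, $`\mathrm{\Phi }\approx -\frac{3}{5}\alpha ^2/r_0`$, an assumption that neglects halo and cross terms; the constant $`c=\alpha \mathrm{ln}r_0`$ is chosen negative so that $`0<\alpha <1`$:

```python
import math

def sd_entropy(r0, c=-0.1, eps=0.0):
    """Entropy (8) of the special distribution (24) with alpha*ln(r0) = c held fixed.

    Phi is approximated by the core self-energy -(3/5)*alpha^2/r0
    (an assumption; halo contributions are dropped).
    """
    alpha = c / math.log(r0)
    # Configurational term -int rho (ln V rho - 1) for the two-step density (24)
    s_conf = (-alpha * math.log(alpha / r0**3)
              - (1.0 - alpha) * math.log((1.0 - alpha) / (1.0 - r0**3))
              + 1.0)
    phi = -0.6 * alpha**2 / r0
    return s_conf + 1.5 * math.log(eps - phi)

# The configurational term stays bounded (it behaves like 3c plus constants),
# while (3/2)*ln(eps - Phi) diverges as r0 -> 0:
for r0 in (1e-2, 1e-4, 1e-8):
    print(r0, sd_entropy(r0))
```

This is the mechanism described in the footnote above: an infinitesimal collapsed fraction drives $`\mathrm{\Phi }\to -\mathrm{\infty }`$ and hence the kinetic term of the entropy to $`+\mathrm{\infty }`$, while the configurational cost remains finite.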
We shall call these distributions, for any values of $`\alpha `$ and $`r_0`$, special distributions (SD). If $`N`$ is large enough, there are SD with larger RE than the maximum close to the solution of the Lane-Emden equation. As already mentioned, it is very difficult to obtain the solutions of eq. (16) for large values of $`N`$. To overcome this problem and obtain an estimate of $`N_c`$, we shall study the restriction of the RE to particle distributions of the form (24) (SD). In this way, we have a RE which depends only on two parameters, $`\alpha `$ and $`r_0`$. Now, the maximization of this entropy with respect to $`\alpha `$ and $`r_0`$ is an easy task, even for very large values of $`N`$. Obviously, the smallest value of $`N`$ for which the maximum of the restricted RE is larger than the RE of the corresponding solution of the Lane-Emden equation gives an estimate of $`N_c`$. Strictly speaking, this estimate is an upper bound on $`N_c`$. Before analyzing the behavior with $`N`$, let us look again at the $`N=10`$ case, for which computations are easy. Fig. 4 displays the maximum of the restricted RE and the RE of the corresponding solution of (16), both for $`N=10`$, as a function of $`ϵ`$. The latter distribution always has a larger RE than any SD. This, together with the fact that we did not find other solutions by varying the initial guess, confirms that for each $`ϵ`$ only one local maximum of the $`N=10`$ RE exists. It is, obviously, the global maximum. To investigate the behavior with $`N`$, we computed the estimate of the critical $`N_c`$ for several values of $`ϵ`$. As could have been anticipated, $`N_c`$ grows with $`ϵ`$. Table B shows the results. Column one displays $`ϵ`$, column two the entropy of the solutions of the Lane-Emden equation, column three the maximum of the restriction of the RE to SD for the estimated $`N_c`$, and column four the estimated $`N_c`$.
It is apparent that, in the high energy phase ($`ϵ\gtrsim -0.335`$), we must go to $`N`$ larger than 30 to find global maxima different from the solutions of the Lane-Emden equation. It is worth noting that a similar scenario was rigorously established by Kiessling for the equilibrium state of self-gravitating systems in contact with a thermal bath . ## V Conclusions To define the thermodynamics of gravitational systems properly, the Newtonian potential must be regularized at short distances, removing its singularity. Only then is the entropy well defined, or, in a mean field approach, the entropy functional bounded from above. One way to introduce a regularization is by softening, i.e., by making the potential smooth at short distances while keeping it basically unchanged at long distances. There are infinitely many ways to achieve this. One interesting possibility is given by the truncation of the expansion of the gravitational potential in spherical Bessel functions at a given order $`N`$, as in Eq. (4). This regularization has the virtue of reducing the mean field integral equation to a system of $`N`$ algebraic equations with $`N`$ unknowns, which simplifies the solution of the problem considerably. The result which emerges from this approach is the following: if the regularization is mild enough, $`N<30`$, the system undergoes a phase transition separating a high energy homogeneous phase from a low energy collapsed phase. In the high energy phase, the mass distribution and the thermodynamic quantities are those of an isothermal sphere. Quantitatively, they are very close to the solutions of the corresponding Lane-Emden equation. The low energy phase is characterized by a mass distribution consisting of a dense core surrounded by a tenuous halo. As usual in these cases , the transition from the HP to the CP takes place in an energy interval with negative specific heat, an indication of the thermodynamical instability of the system.
These results are remarkably similar to those found with a different regularization (hard core spheres) , and to those derived from the Lynden-Bell statistics applied to the unregularized potential . We can then conclude that the thermodynamics is not very sensitive to the form of the regularization. The effect of a mild regularization is to deform the entropy functional in such a way that, in the high energy phase, the global maximum of the entropy with the regularized potential is very close to the local maximum with the unregularized potential. The analysis based on the Lane-Emden equation is therefore very accurate, and the conclusions extracted from it hold. If the potential is too sharp at short distances, $`N>30`$, we also expect a collapsing transition, which however will take place at a much higher energy than the one predicted by the analysis of the stability of the solutions of the Lane-Emden equation. Below the critical energy the global maximum of the RE will not be in the vicinity of the solution of the Lane-Emden equation, where, nevertheless, there will be a local maximum of the RE. Besides describing such metastable states, the Lane-Emden equation might be physically relevant in dilute self-gravitating systems with an interparticle distance large enough to be insensitive to a truncation of the expansion of the gravitational potential in spherical Bessel functions, Eq. (4), to 30 terms. Finally, let us comment on the structure of the low energy microcanonical equilibrium state when the regularized potential is very sharp ($`N>30`$) at short distances. In this regime there are high entropy mass distributions consisting of a small amount (infinitesimal when $`N\to \mathrm{\infty }`$) of condensed matter, with the rest homogeneously distributed. This might indicate that at these scales the system is not well described by a smooth density, and granularity plays a major role. ## A Let us show that the solutions of Eq.
(13) for rotationally symmetric $`\nu `$ are of the form (15) if the potential is $`\varphi (r)=-(2GM^2/R)\sum _{k=1}^N\varphi _k(r/R)`$. Introducing spherical coordinates, eq. (13) can be written $$\nu (r)=\frac{3\beta GM^2}{R}\sum _{k=1}^{N}\frac{\int _0^1dr^{\prime }\,r^{\prime 2}e^{\nu (r^{\prime })}\int _0^\pi d\theta \,\mathrm{sin}\theta \,\varphi _k\left(\sqrt{r^2+r^{\prime 2}-2rr^{\prime }\mathrm{cos}\theta }\right)}{2\int _0^1dr^{\prime }\,r^{\prime 2}e^{\nu (r^{\prime })}}.$$ (A1) The integral in $`\theta `$ can be readily performed, and gives $$\int _0^\pi d\theta \,\mathrm{sin}\theta \frac{\mathrm{sin}[\omega _k(r^2+r^{\prime 2}-2rr^{\prime }\mathrm{cos}\theta )^{1/2}]}{\omega _k(r^2+r^{\prime 2}-2rr^{\prime }\mathrm{cos}\theta )^{1/2}}=2\varphi _k(r)\varphi _k(r^{\prime }).$$ (A2) Eqs. (A1) and (A2) imply $`\nu (r)=\sum _{k=1}^N\nu _k\varphi _k(r)`$, with the coefficients $`\nu _k`$ determined by (16), QED. ## B Since the numerical solution of Eq. (16) is central to this work, we shall outline in this appendix the method used to solve it. The problem is to find the roots of a vector function defined in a multidimensional space. Eq. (16) can be written as $$F_i(\nu _1,\dots ,\nu _N)=0$$ (B1) with $`i=1,\dots ,N`$. We shall use matrix notation and denote by $`𝝂`$ the complete set $`(\nu _1,\dots ,\nu _N)`$ and by $`𝑭`$ the vector $`(F_1,\dots ,F_N)`$. To solve systems of equations like (B1) we chose the Newton-Raphson method, which works as follows : provided we have an initial guess $`𝝂`$ which is close to the solution of (B1), we can expand $`F_i`$ in a Taylor series in a neighbourhood of $`𝝂`$: $$F_i(𝝂+\delta 𝝂)=F_i(𝝂)+\sum _{j=1}^{N}J_{ij}\delta \nu _j+O(\delta 𝝂^2)$$ (B2) where $`J_{ij}=\partial F_i/\partial \nu _j`$ is the Jacobian matrix.
In matrix notation we have: $$𝑭(𝝂+\delta 𝝂)=𝑭(𝝂)+𝑱\delta 𝝂+O(\delta 𝝂^2)$$ (B3) By neglecting terms of order higher than linear in $`\delta 𝝂`$ and setting $`𝑭(𝝂+\delta 𝝂)=0`$, we obtain a set of linear equations for the corrections $`\delta 𝝂`$ that move each function $`F_i`$ closer to zero simultaneously: $$𝑱\delta 𝝂=-𝑭$$ (B4) This linear system is a standard problem in numerical linear algebra and can be solved by LU decomposition. The corrections are then added to the current guess, $$𝝂_{new}=𝝂+\delta 𝝂$$ (B5) and the process is iterated to convergence. It can be shown that the method always converges provided the initial guess is close enough to the root. It can also fail spectacularly to converge, indicating (though not proving) that no root exists nearby. To avoid problems with the poor global convergence of the method, we started from many different initial guesses. We always found convergence to the same solution, except for $`N`$ larger than $`30`$, where we found convergence only at high energies. We never obtained two different solutions (within our convergence criterion, see below) by starting at two different points. Numerically, a convergence criterion is necessary. We stopped the computations when one of these two conditions $$\sum _{i=1}^{N}\left|\delta \nu _i\right|<10^{-10}\quad \mathrm{or}\quad \sum _{i=1}^{N}\left|F_i\right|<10^{-10}$$ (B6) was satisfied. Each time the function $`F_i`$ was evaluated, several integrals entering eq. (16) were performed numerically, using a Romberg algorithm . The integrands are smooth functions and it was possible to achieve high precision with a relatively modest numerical effort.
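The iteration just described can be sketched generically as follows; this is an illustration, not the code used in the paper (`numpy.linalg.solve` performs the LU-based solution of (B4)):

```python
import numpy as np

def newton_raphson(F, J, nu0, tol=1e-10, max_iter=100):
    """Multidimensional Newton-Raphson with the stopping rule (B6)."""
    nu = np.asarray(nu0, dtype=float)
    for _ in range(max_iter):
        f = np.asarray(F(nu), dtype=float)
        if np.abs(f).sum() < tol:
            return nu
        dnu = np.linalg.solve(J(nu), -f)   # J * dnu = -F, Eq. (B4)
        nu = nu + dnu                      # Eq. (B5)
        if np.abs(dnu).sum() < tol:
            return nu
    raise RuntimeError("Newton-Raphson did not converge")

# Toy system with root (1, 1): x^2 + y^2 - 2 = 0, x - y = 0.
F = lambda v: [v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]
J = lambda v: [[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]]
print(newton_raphson(F, J, [2.0, 0.5]))   # close to [1, 1]
```

Restarting from many initial guesses, as done in the paper, is the standard cheap safeguard against the method's poor global convergence.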
# Prompt photon processes in photoproduction at HERA ## 1 Introduction Isolated high transverse energy (“prompt”) photon processes at HERA (fig. 1) could yield information about the quark and gluon content of the photon, together with the gluon structure of the proton . The particular virtue of prompt photon processes is that the observed final state photon emerges directly from a QCD diagram, without the subsequent hadronisation which complicates the study of high $`E_T`$ quarks and gluons. The ZEUS collaboration has recently published the first observation at HERA of prompt photons at high transverse momentum in photoproduction reactions , based on an integrated luminosity of 6.4 pb<sup>-1</sup>. An NLO calculation by Gordon was found to be in agreement with the ZEUS results, and indicates the feasibility of distinguishing between different models of the photon structure. In the present study we extend our earlier analysis of prompt photon production, using a data sample of 37 pb<sup>-1</sup>. Differential cross sections are given for the final state containing a prompt photon, or a prompt photon accompanied by a jet, as functions of pseudorapidity and of transverse photon energy. Comparison is made with several LO and NLO (next to leading order) predictions, with the goal of testing different proposed hadronic structures of the incoming photon. ## 2 Event Selection The data used here were obtained from $`e^+p`$ running in 1996–97 at HERA, with $`E_e=27.5`$ GeV, $`E_p=820`$ GeV. The ZEUS experiment is described elsewhere . The major components used in the analysis are the central tracking detector (CTD) and the uranium-scintillator calorimeter (UCAL). Prompt photons are detected in the barrel section of the calorimeter (BCAL), which consists of an electromagnetic section (BEMC) followed by two hadronic sections; the BEMC consists of pointing cells of 20 cm length and 5 cm width at a minimum radius of 1.23 m from the beamline.
This width is not small enough to resolve the photons from the decays $`\pi ^0\to 2\gamma `$, $`\eta \to 2\gamma `$ and $`\eta \to 3\pi ^0`$ on an event by event basis. It does, however, enable a partial discrimination between single photon signals and the decay products of neutral mesons. A standard ZEUS electron finding algorithm was used to identify candidate photon signals in the BCAL with measured $`E_T^\gamma >4.5`$ GeV. The measured photon energy was corrected for energy loss in dead material using MC generated single photons; this correction amounted typically to 200–300 MeV. After the photon energy correction, events were retained for the final analysis if a photon candidate with transverse energy $`E_T^\gamma >5`$ GeV was found in the BCAL. To identify jets, a cone jet finding algorithm with a cone radius of 1 radian was used. Jets with $`E_T^{jet}>4.5`$ GeV and pseudorapidity $`-1.5<\eta ^{jet}<1.8`$ were accepted, where pseudorapidity is defined as $`\eta =-\mathrm{ln}(\mathrm{tan}\theta /2)`$. Events with an identified DIS positron were removed, restricting the acceptance of the present analysis to incoming photons of virtuality $`Q^2\lesssim 1`$ GeV<sup>2</sup>. The quantity $`y_{JB}`$, defined as the sum of $`(E-p_Z)`$ over all the UCAL cells divided by twice the positron beam energy $`E_e`$, provides a measure of the fractional energy $`E_{\gamma 0}/E_e`$ of the interacting quasi-real photon. A requirement of $`0.15<y_{JB}<0.7`$ was imposed, the lower cut removing some residual proton-gas backgrounds and the upper cut removing remaining DIS events. Wide-angle Compton scatters were also excluded by this cut. A photon candidate was rejected if a CTD track pointed within 0.3 rad of it. An isolation cone was also imposed around photon candidates: within a cone of unit radius in $`(\eta ,\varphi )`$, the total $`E_T`$ from other particles was required not to exceed $`0.1E_T(\gamma )`$.
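The isolation requirement can be sketched as follows (our notation; each particle is given as an $`(E_T,\eta ,\varphi )`$ triple, and the cut values are those quoted above):

```python
import math

def passes_isolation(photon, particles, cone_radius=1.0, max_fraction=0.1):
    """True if the summed E_T of other particles inside a cone of unit
    radius in (eta, phi) around the photon stays below 0.1 * E_T(photon)."""
    et_gamma, eta_gamma, phi_gamma = photon
    et_cone = 0.0
    for et, eta, phi in particles:
        dphi = abs(phi - phi_gamma)
        if dphi > math.pi:                 # wrap azimuth into [0, pi]
            dphi = 2.0 * math.pi - dphi
        if math.hypot(eta - eta_gamma, dphi) < cone_radius:
            et_cone += et
    return et_cone <= max_fraction * et_gamma

photon = (6.0, 0.2, 1.0)                   # E_T = 6 GeV -> 0.6 GeV allowed
print(passes_isolation(photon, [(0.4, 0.5, 1.3)]))   # True
print(passes_isolation(photon, [(1.0, 0.5, 1.3)]))   # False
```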
This greatly reduces backgrounds from dijet events with part of one jet misidentified as a single photon ($`\pi ^0,\eta `$, etc.). In addition, as discussed in , it removes most dijet events in which a high $`E_T`$ photon is radiated from a final state quark. A remainder of such events is included as part of the signal in the data and the theoretical calculations. ## 3 Signal/background separation A typical high-$`E_T`$ photon candidate in the BEMC consists of a cluster of 4–5 cells selected by the electron finder. Two shape-dependent quantities were studied in order to distinguish photon, $`\pi ^0`$ and $`\eta `$ signals. These were (i) the mean width $`<\delta Z>`$ of the BEMC cluster in $`Z`$ and (ii) the fraction $`f_{max}`$ of the cluster energy found in the most energetic cell in the cluster. $`<\delta Z>`$ is defined as the energy-weighted mean absolute deviation in $`Z`$ of the cells in the cluster, measured from the energy-weighted mean $`Z`$ value of the cells in the cluster. Its distribution shows two peaks at low $`<\delta Z>`$, identified with photons and $`\pi ^0`$ mesons, and a tail at higher values. This tail was used to quantify the $`\eta `$ background; photon candidates in this region were removed. The remaining candidates consisted of genuine high $`E_T`$ photons and remaining $`\pi ^0`$ and $`\eta `$ mesons. The numbers of candidates with $`f_{max}\ge 0.75`$ and $`f_{max}<0.75`$ were calculated for the sample of events occurring in each bin of any measured quantity. From these numbers, and the ratios of the corresponding numbers for the $`f_{max}`$ distributions of the single particle samples, the number of photon events in the given bin was evaluated. Further details of the background subtraction method are given in . The distribution of $`f_{max}`$ for prompt photon candidates in selected events is shown in fig. 2, well fitted by a sum of photon and background distributions.
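The two shape variables above can be sketched as follows (a simplified illustration; the (energy, Z) cell representation is an assumption, not the ZEUS reconstruction code):

```python
def cluster_shape(cells):
    """Shape variables for a BEMC cluster given as (energy, z) pairs:
    f_max  - fraction of the cluster energy in the most energetic cell;
    dz_bar - energy-weighted mean absolute deviation in Z, measured from
             the energy-weighted mean Z of the cluster."""
    e_tot = sum(e for e, _ in cells)
    f_max = max(e for e, _ in cells) / e_tot
    z_bar = sum(e * z for e, z in cells) / e_tot
    dz_bar = sum(e * abs(z - z_bar) for e, z in cells) / e_tot
    return f_max, dz_bar
```

A single-photon cluster concentrates its energy in one cell (large $`f_{max}`$, small $`<\delta Z>`$), while multi-photon meson decays spread the energy over several cells.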
## 4 Results We evaluate cross sections for prompt photon production corrected by means of PYTHIA using GRV photon structures . A bin-by-bin factor is applied to the detector-level measurements so as to correct to cross sections in the specified kinematic intervals, calculated in terms of the final state hadron system photoproduced in the range $`0.16<y^{true}<0.8`$, i.e. $`\gamma p`$ centre of mass energies in the range 120–270 GeV. The virtuality of the incoming photon is restricted to the range $`Q^2<1`$ GeV<sup>2</sup>. When a jet was demanded, the hadron-level selections $`E_T^{jet}>5`$ GeV, $`-1.5<\eta ^{jet}<1.8`$ were imposed. Systematic errors were evaluated and combined in quadrature, amounting to typically 15%; the main contributions are from the energy scale of the calorimeter and the background subtraction. Fig. 3 shows an inclusive cross section $`d\sigma /d\eta ^\gamma `$ for prompt photons in the range $`5<E_T^\gamma <10`$ GeV. Reasonable agreement between data and MC is seen at forward values of rapidity, but the data tend to lie above the MC at negative rapidity. The data are also compared with NLO calculations of Gordon (LG) and of Krawczyk and Zembrzuski (KZ), using the GS and GRV photon structures . The curves are subject to a calculational uncertainty of 5%, and uncertainties in the QCD scale could raise the numbers by up to $`8\%`$. Away from the most forward directions, the LG calculation using GS tends to lie low, while the LG implementation of the GRV photon structure gives a reasonable description of the data. The KZ calculation has detailed differences from LG, including a box diagram contribution for the process $`\gamma g\to \gamma g`$ . In fig. 3 inclusive cross sections $`d\sigma /dE_T^\gamma `$ for prompt photons in the range $`-0.9<\eta ^\gamma <0.7`$ are compared to the theoretical models. All six theoretical models describe the shape of the data well; however, the HERWIG prediction is systematically low.
The two NLO calculations are in better agreement with the data, and cannot be experimentally distinguished. Similar features can be seen in fig. 4, which shows cross sections for the production of a photon accompanied by a jet in the kinematic range specified above. The KZ calculation is too high at low $`E_T^\gamma `$, attributable to the lack of a true jet algorithm in this approach . Fig. 4 shows corresponding cross sections for the photon accompanied by at least one jet. The results were corrected to hadron-level jets in the kinematic range $`E_T^{jet}>5`$ GeV, $`-1.5<\eta ^{jet}<1.8`$. In a comparison with NLO calculations, the GS photon structure again provides a less good description of the data overall than that of GRV. As with the photoproduction of a dijet final state , the information from the prompt photon and the measured jet can be used to measure a value of $`x_\gamma `$, the fraction of the incoming photon energy which participates in the hard interaction. A “measured” value of $`x_\gamma `$ at the detector level was evaluated as $`x_\gamma ^{meas}=\sum (E-p_Z)/2E_ey_{JB}`$, where the sum is over the jet plus the photon. The resulting distribution is shown in fig. 5 compared with PYTHIA predictions. Reasonable agreement is seen, and a dominant peak near unity indicates clearly the presence of the direct process. Corrected to hadron level, the cross section integrated over $`x_\gamma >0.8`$ is $`15.4\pm 1.6(stat)\pm 2.2(sys)`$ pb. This may be compared with results from Gordon , which vary in the range 13.2 to 16.6 pb according to the photon structure taken and the QCD scale (approximately an 8% effect). Here, the experiment is in good agreement with the range of theoretical predictions but does not discriminate between the quoted models. ## 5 Conclusions The photoproduction of inclusive prompt photons, and prompt photons accompanied by jets, has been measured with the ZEUS detector at HERA using an integrated luminosity of 37 pb<sup>-1</sup>.
Cross sections as a function of pseudorapidity and transverse energy have been measured for photon transverse energies in the range $`5<E_T^\gamma <10`$ GeV and for jet transverse energies in the range $`E_T^{jet}>5`$ GeV. The results are compared with parton-shower Monte Carlo simulations of prompt photon processes and with NLO QCD calculations incorporating the currently available parameterisations of the photon structure. NLO QCD calculations describe the shape and magnitude of the measurements reasonably well. ## Acknowledgments We are grateful to L. E. Gordon, Maria Krawczyk and Andrzej Zembrzuski for helpful conversations, and for providing theoretical calculations.
# Different Traces of Quantum Systems Having the Same Classical Limit ## 1 Introduction The analysis of a quantum system in the semiclassical regime allows one to approximate quantum traces by a sum over periodic orbits of the corresponding classical system. This relation is provided by the famous Gutzwiller Trace Formula, which establishes a fruitful link between a quantum system and its classical counterpart. It is well known that infinitely many quantum systems may have the same classical limit. A natural question arises: how different can such quantum systems be? Let us concentrate on quantum traces, which allow us to compute the spectrum. The Gutzwiller-like formula approximates in the semiclassical limit the traces of the evolution operator $`U`$ by $$\text{Tr}U^n=\underset{\mathrm{p}.\mathrm{o}.}{\sum }A_oe^{iS_o/\hbar }.$$ (1) The sum is taken over all periodic orbits with periods equal to or less than $`n`$. The orbits contribute with amplitudes $`A_o`$ which depend on their stability, while the phases are equal to their classical actions (measured in units of $`\hbar `$). Possible Maslov indices, not written explicitly in formula (1), do not influence the following argument since they are related to the classical system. An example of two quantum systems with the same classical limit whose traces of the Floquet operator are constant with respect to $`\hbar `$ and equal to $`0`$ and $`\sqrt{2}`$, respectively, is provided by the quantum baker map on the sphere . To approximate these traces semiclassically in the spirit of (1), $`\hbar `$ corrections to the classical actions, $`S_o=S_{\mathrm{class}}+\hbar S_{\mathrm{quant}}`$, are needed. These corrections disappear in the classical limit $`\hbar \to 0`$, but they result in corrections of order unity to the traces. The traces of such quantum systems cannot be expressed using only classical quantities and may not converge in the classical limit.
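The key point — that a first-order correction $`\hbar S_{\mathrm{quant}}`$ contributes a phase of order unity which survives the classical limit — can be illustrated with a small sketch (the orbit-list format here is hypothetical):

```python
import cmath
import math

def semiclassical_trace(orbits, hbar):
    """Gutzwiller-like sum over periodic orbits.  Each orbit is given as
    (A_o, S_class, S_quant); its action S_o = S_class + hbar * S_quant
    enters through the phase S_o / hbar, so the quantum correction
    contributes a phase S_quant that is independent of hbar."""
    total = 0j
    for a_o, s_class, s_quant in orbits:
        total += a_o * cmath.exp(1j * (s_class + hbar * s_quant) / hbar)
    return total
```

For an orbit with $`S_{\mathrm{class}}=0`$ and $`S_{\mathrm{quant}}=\pi `$ the contribution is $`e^{i\pi }=-1`$ for every value of $`\hbar `$: an order-unity effect that does not vanish as $`\hbar \to 0`$.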
Many quantum models for which the semiclassical treatment is known do not seem to suffer from this problem — their traces do converge to those predicted by the sum over periodic orbits without any quantum corrections. An example where such $`\hbar `$ corrections are needed in the semiclassical analysis is the original version of the kicked top . On the other hand, a small change of this quantum model is sufficient to eliminate this problem. The Fourier transform with respect to $`1/\hbar `$ of the quantity $`\langle \mathrm{\Psi }_0|U^n|\mathrm{\Psi }_0\rangle `$ for some state $`|\mathrm{\Psi }_0\rangle `$ may be used to check the semiclassical predictions. The modulus of this transform has peaks at the classical actions of orbits of $`n`$ iterations of the map, provided the state $`|\mathrm{\Psi }_0\rangle `$ overlaps with the state localized on the periodic orbit. The phase of the transform will contain Maslov indices and first order quantum corrections to the actions of periodic orbits. Contributions stemming from periodic points with the same classical action form a single peak in the transform. In this work we propose a way to analyze the classical action and its quantum correction coming from a single periodic point of $`n`$ iterations of the map. The paper is organized as follows. The decomposition of the Husimi representation of the evolution operator into a sum over periodic orbits is presented in section 2. The analysis of actions of periodic orbits and their corrections found for the quantum models of the baker map on the sphere is performed in section 3. Conclusions and consequences for the spectra are discussed in section 4. ## 2 Coherent State Decomposition of the Evolution Operator Let $`\mathrm{\Omega }`$ be a classical phase space. Consider a classical area-preserving map $`\mathrm{\Theta }:\mathrm{\Omega }\to \mathrm{\Omega }`$ and a corresponding quantum operator $`U`$ acting on a Hilbert space $`\mathcal{H}`$.
Assume we have a family of generalized coherent states $`|\alpha \rangle `$ for any $`\alpha \in \mathrm{\Omega }`$ fulfilling the minimal uncertainty relation. Such a family of states is found for several examples of classical phase spaces. The coherent states form an overcomplete basis allowing the decomposition of the identity operator $`\int _\mathrm{\Omega }|\alpha \rangle \langle \alpha |d\alpha =\mathbf{1}`$. We introduce the generalized Husimi representation of an operator $`\mathcal{A}`$ $$H_\mathcal{A}(\alpha )=\langle \alpha |\mathcal{A}|\alpha \rangle .$$ (2) If $`\mathcal{A}`$ is a projection operator onto a state $`|\mathrm{\Psi }\rangle `$, it reduces to the standard Q-representation of a pure state $`H_{|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|}(\alpha )=|\langle \alpha |\mathrm{\Psi }\rangle |^2`$. Representing the identity operator as a sum over coherent states we have $$\int _\mathrm{\Omega }H_\mathcal{A}(\alpha )d\alpha =\text{Tr}\mathcal{A},\int _\mathrm{\Omega }H_{|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|}(\alpha )d\alpha =1.$$ (3) The family of coherent states allows us to establish a link between classical and quantum dynamics . If for any $`\rho >0`$ $$\underset{\hbar \to 0}{\mathrm{lim}}\underset{\alpha \in \mathrm{\Omega }}{\mathrm{inf}}\int _{C(\mathrm{\Theta }(\alpha ),\rho )}|\langle \alpha ^{\prime }|U|\alpha \rangle |^2d\alpha ^{\prime }=1,$$ (4) where $`C(\mathrm{\Theta }(\alpha ),\rho )`$ denotes a circle centered at $`\mathrm{\Theta }(\alpha )`$ with the radius $`\rho `$, we say that the quantization of $`\mathrm{\Theta }`$ into $`U`$ is *regular* with respect to the family of coherent states $`|\alpha \rangle `$. The formula (4) means that $`|\langle \alpha ^{\prime }|U|\alpha \rangle |^2`$ tends to $`\delta \left(\alpha ^{\prime }-\mathrm{\Theta }(\alpha )\right)`$ in the classical limit, so the quantum state $`U|\alpha \rangle `$ is then concentrated in the vicinity of the classical image $`\mathrm{\Theta }(\alpha )`$. Let us consider the Husimi representation of the evolution operator $`H_{U^n}(\alpha )=\langle \alpha |U^n|\alpha \rangle `$.
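A minimal numerical sketch of the Husimi representation (2), using the $`SU(2)`$ coherent states relevant for the spherical phase space studied below (the phase convention of the coherent-state components is an assumption; this is an illustration, not the code used for the figures):

```python
import cmath
import math

def su2_coherent(j2, theta, phi):
    """Components of a spin-j coherent state |theta, phi> in the |j,m> basis
    (j2 = 2j, so the list has N = 2j+1 entries).  The convention assumed
    here is c_{j+m} = sqrt(C(2j, j+m)) * cos(theta/2)^(j+m)
    * sin(theta/2)^(j-m) * exp(-i m phi)."""
    return [math.sqrt(math.comb(j2, k))
            * math.cos(theta / 2.0) ** k * math.sin(theta / 2.0) ** (j2 - k)
            * cmath.exp(-1j * (k - j2 / 2.0) * phi)
            for k in range(j2 + 1)]

def husimi(op, state):
    """Generalized Husimi representation H_A(alpha) = <alpha|A|alpha>."""
    n = len(state)
    return sum(state[a].conjugate() * op[a][b] * state[b]
               for a in range(n) for b in range(n))
```

Normalization follows from the binomial theorem, so $`\langle \alpha |\mathbf{1}|\alpha \rangle =1`$ for any $`(\theta ,\varphi )`$.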
The above argument shows that in the semiclassical regime the state $`U^n|\alpha \rangle `$ is concentrated at $`\mathrm{\Theta }^n(\alpha )`$ and is close to $`0`$ everywhere else. Therefore $`H_{U^n}(\alpha )`$ is localized on the periodic points of the classical map ($`\mathrm{\Theta }^n(\alpha )=\alpha `$). This useful feature was observed in and served to demonstrate classical to quantum correspondence. The author of investigated only the squared modulus of the function $`H_{U^n}(\alpha )`$. We know that $`\text{Tr}U^n=\int _\mathrm{\Omega }\langle \alpha |U^n|\alpha \rangle d\alpha `$ and the integrated function is localized on the periodic points in the classical limit ($`\hbar \to 0`$). This fact allows us to decompose the quantum traces into sums over periodic orbits for any quantum map, under the standard assumption that all periodic points are isolated, but without employing the saddle point approximation. The modulus of the integrated overlap $`\langle \alpha |U^n|\alpha \rangle `$ near the periodic point corresponds to the amplitude $`A_o`$ in equation (1) and depends on the stability of this periodic orbit. The phase of the function $`H_{U^n}(\alpha )`$ seems to have a stationary point at the periodic orbit, as observed in for the quantum kicked top and reported later on in this work for the baker map on the sphere. ## 3 Various Quantizations of the Baker Map on the Sphere Quantizing the baker map on the sphere we define four different quantum maps $`B_k`$ ($`k=0,\mathrm{\dots },3`$), corresponding to the same classical system. In spite of this fact, they have different traces and spectra. By a quantum map on the sphere we understand an $`N=2j+1`$ dimensional unitary operator acting in the Hilbert space of angular momentum, where $`j(j+1)`$ is the eigenvalue of the $`\widehat{J}^2`$ operator, which is conserved by the evolution.
The four unitary operators $`B_k`$ may be expressed in the eigenbasis $`|j,m\rangle `$ of the $`\widehat{J}_z`$ operator $$B_{(ab)}=R^{-1}\left[\begin{array}{cc}R^{\prime (a)}& 0\\ 0& R^{\prime \prime (b)}\end{array}\right],$$ (5) where the index $`(ab)`$ with $`a,b=0,1`$ corresponds to the binary representation of $`k`$ ($`B_{(00)}=B_0`$, etc.), $`R`$ is the Wigner rotation matrix $`R_{m^{\prime },m}=\langle j,m|e^{-i\frac{\pi }{2}\widehat{J}_y}|j,m^{\prime }\rangle `$ and the matrices $`R^{\prime }`$ and $`R^{\prime \prime }`$ are constructed by choosing odd or even columns of the matrix $`R`$ in the following way $$R_{m,l}^{\prime (a)}:=\sqrt{2}R_{m,2l+j+a},m,l=-j,\mathrm{\dots },-\frac{1}{2};$$ $$R_{m,l}^{\prime \prime (b)}:=\sqrt{2}R_{m,2l-j-1+b},m,l=\frac{1}{2},\mathrm{\dots },j.$$ (6) This construction requires the dimension $`N`$ to be an even integer, so the quantum number $`j`$ is a half integer. The volume of the classical phase space is equal to $`4\pi `$ (the unit sphere), so $`N=\frac{4\pi }{2\pi \hbar }`$, which relates $`\hbar `$ to the size of the Hilbert space ($`\hbar =2/N`$), and the classical limit corresponds to $`N\to \mathrm{\infty }`$. Now the coherent state $`|\alpha \rangle `$ will refer to an $`SU(2)`$ vector coherent state $`|\theta ,\varphi \rangle `$ . In figure 1 we plot the Husimi representation of the operator $`(B_1)^3`$ for $`N=100`$; the coordinates are the azimuthal angle $`\varphi `$ and $`t=\mathrm{cos}\theta `$. The complex function $`\langle \alpha |(B_1)^3|\alpha \rangle `$ is well localized near the periodic points of three iterations of the classical baker map on the sphere. The analogous plots for the other versions of the model look almost the same. We want to investigate the phase of the contribution coming from a single periodic point. The numerical integration performed for several dimensions $`N`$ shows that the phases of such contributions are very well approximated by $`\mathrm{arg}(\langle \alpha _o|(B_k)^n|\alpha _o\rangle )`$, where $`|\alpha _o\rangle `$ is the coherent state centered at the periodic point.
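The column-selection construction of eqs. (5)–(6) can be sketched as follows (illustrative only: the 0-based index convention, and the use of $`R^T`$ for $`R^{-1}`$ — valid when the Wigner matrix is real orthogonal — are assumptions of this sketch):

```python
import math

def baker_matrix(R, a, b):
    """Sketch of eqs. (5)-(6).  With a hypothetical 0-based index convention
    (row r corresponds to m = r - j), R' takes every second column (offset a)
    of the top half-rows of R, and R'' every second column (offset b) of the
    bottom half-rows, both scaled by sqrt(2).  R is assumed real orthogonal,
    so R^{-1} is its transpose."""
    N = len(R)
    h = N // 2
    block = [[0.0] * N for _ in range(N)]
    for i in range(h):
        for l in range(h):
            block[i][l] = math.sqrt(2.0) * R[i][2 * l + a]              # R'
            block[h + i][h + l] = math.sqrt(2.0) * R[h + i][2 * l + b]  # R''
    # B_(ab) = R^{-1} . blockdiag(R', R'') = R^T . block
    return [[sum(R[k][i] * block[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]
```

For $`j=1/2`$ ($`N=2`$) the rotation matrix is a rotation by $`\pi /2`$; the block collapses to the identity and $`B_{(01)}`$ reduces to $`R^T`$, a unitary map as required.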
This is so due to the stationarity of the phase of the function $`\langle \alpha |(B_k)^n|\alpha \rangle `$ at the periodic point, as pointed out before. We have calculated the phase of the Husimi representation of $`(B_k)^3`$ for the four versions of the model defined by equations (5) and (6) at the periodic point $`\theta =\mathrm{arccos}\left(\frac{5}{7}\right)`$, $`\varphi =\frac{2}{7}\pi `$. The results are plotted in figure 2, where four different symbols denote the four different quantum maps. The phase $`\chi `$ is plotted in the interval $`(-\pi ,\pi ]`$. We observe that, starting from some dimension, the phase depends linearly on $`N`$ modulo $`2\pi `$. The points for the four versions of the model, marked with different symbols, form parallel lines and do not converge one to another. The phase $`\chi _k=\mathrm{arg}(\langle \alpha _o|(B_k)^3|\alpha _o\rangle )`$ can then be well parametrized by $`S_{\mathrm{class}}\frac{N}{2}+S_{\mathrm{quant}}^{(k)}`$. The slope of these lines is the same and corresponds to the classical action of the orbit, $`S_{\mathrm{class}}`$, equal to $`\frac{2}{7}2\pi `$ (only its value modulo $`2\pi `$ is important here). The quantum corrections $`S_{\mathrm{quant}}`$ to the actions of periodic orbits may be found by an extrapolation of the linear behavior to $`N=0`$, as demonstrated in figure 2 for $`k=2`$. The asymptotic dotted line crosses the $`N=0`$ axis at $`\chi =S_{\mathrm{quant}}^{(2)}=\frac{2}{7}\pi `$. The quantum corrections for the other quantizations are $`S_{\mathrm{quant}}^{(0)}=S_{\mathrm{quant}}^{(3)}=0`$ and $`S_{\mathrm{quant}}^{(1)}=-\frac{2}{7}\pi `$. The investigation performed for other periodic points shows that the quantum corrections $`S_{\mathrm{quant}}`$ are the same for periodic points belonging to the same periodic orbit and are equal to $`p\times S_{\mathrm{quant}}`$(primary orbit) if the periodic orbit is $`p`$ repetitions of a primary orbit; in this respect they therefore behave like classical actions.
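The extraction of $`S_{\mathrm{class}}`$ and $`S_{\mathrm{quant}}`$ from the phases $`\chi (N)`$ amounts to unwrapping the mod-$`2\pi `$ jumps and fitting a straight line in $`N/2`$; a sketch (not the authors' procedure; note that the intercept is only defined modulo $`2\pi `$):

```python
import math

def action_from_phases(Ns, phases):
    """Fit chi(N) = S_class * (N/2) + S_quant to phases given modulo 2*pi.
    The phases are first unwrapped (successive differences forced into
    (-pi, pi]), then a least-squares line in N/2 is fitted.  The intercept
    S_quant is recovered only modulo 2*pi."""
    unwrapped = [phases[0]]
    for p in phases[1:]:
        d = p - unwrapped[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
        unwrapped.append(unwrapped[-1] + d)
    x = [n / 2.0 for n in Ns]
    n = len(x)
    xm = sum(x) / n
    ym = sum(unwrapped) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, unwrapped))
             / sum((xi - xm) ** 2 for xi in x))
    return slope, ym - slope * xm
```

The unwrapping assumes the phase advances by less than $`\pi `$ between successive $`N`$ values, which holds for the $`\mathrm{\Delta }N=2`$ spacing of even dimensions whenever $`S_{\mathrm{class}}<\pi `$.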
## 4 Conclusions In this paper we have focussed on the semiclassical behavior of quantum systems having the same classical limit. The corrections to the quantum traces, which do not disappear in the classical limit, arise from $`\hbar `$ corrections to the actions of periodic orbits, which do disappear in the classical limit. It means that the traces of a quantum system having such corrections may not be approximated by a trace formula constructed from purely classical properties of the system. For any quantum system we propose a way to find the contributions to the traces coming from single periodic points of $`n`$ iterations of the classical map by means of the autocorrelation function $`\langle \alpha |U^n|\alpha \rangle `$. This function is localized at the periodic orbits of the classical system ($`U`$ stands for the evolution operator and $`|\alpha \rangle `$ denotes the coherent state – the wave packet). The four versions of the quantum baker map on the sphere serve as an example of such behavior. We investigate the phase $`\chi =\mathrm{arg}(\langle \alpha _o|B^n|\alpha _o\rangle )`$ for the coherent state $`|\alpha _o\rangle `$ pointing at the periodic points of the classical baker map on the sphere. We have observed the linear behavior of $`\chi `$ as a function of $`1/\hbar `$ in the semiclassical regime. We believe that quantum systems having the same classical limit may have different traces, not converging one to another in the classical limit. These traces can be explained by means of quantum corrections (first order in $`\hbar `$) to the actions of periodic orbits in the semiclassical approximation. The spectra of the Floquet operators of such systems contain $`N\sim 1/\hbar `$ eigenvalues placed on the unit circle. They may be expressed in terms of the traces. A difference of quantum traces of order unity requires differences between eigenvalues of the different quantum systems of at least order $`1/N`$, i.e. the same as the mean level spacing.
The arguments presented in this paper should be valid even in the case of an infinite Hilbert space (e.g. for continuous dynamics). I acknowledge fruitful discussions with F. Haake and K. Życzkowski. It is a pleasure to thank the organizers of the workshop *Dynamics of Complex Systems* for the invitation to Dresden.
# Oscillations During Thermonuclear X-ray Bursts: A New Probe of Neutron Stars ## 1. Introduction During the past 25 years X-ray astronomers have expended a good deal of effort in an attempt to identify the spin frequencies of neutron stars in LMXB (see for example Wood et al. 1991; Vaughan et al. 1994). These efforts were largely prompted by the discovery in the radio band of rapidly rotating neutron stars, the millisecond radio pulsars (see Backer et al. 1984), and subsequent theoretical work suggesting their origin lies in an accretion-induced spin-up phase of LMXB (see the review by Bhattacharya 1995 and references therein). However, up to the mid-90’s there was little or no direct evidence to support the existence of rapidly spinning neutron stars in LMXB. This situation changed dramatically with the launch of the Rossi X-ray Timing Explorer (RXTE) in December, 1995. Within a few months of its launch RXTE observations had provided strong evidence suggesting that neutron stars in LMXB are spinning with frequencies $`\gtrsim 300`$ Hz. These first indications came with the discovery of high frequency (millisecond) X-ray brightness oscillations, “burst oscillations,” during thermonuclear (Type I) X-ray bursts from several neutron star LMXB systems (see Strohmayer et al. 1996; Smith, Morgan & Bradt 1997; Zhang et al. 1996). At present these oscillations have been observed from six different LMXB systems (see Strohmayer, Swank & Zhang 1998). The observed frequencies are in the range from $`300`$ to $`600`$ Hz, similar to the observed frequency distribution of binary millisecond radio pulsars (Taylor, Manchester & Lyne 1993), and consistent with some theoretical determinations of spin periods which can be reached via accretion-induced spin-up (Webbink, Rappaport & Savonije 1983). In this contribution I will review our observational understanding of these oscillations, with emphasis on how they can be understood in the context of spin modulation of the X-ray burst flux.
I will discuss how detailed modelling of the oscillation amplitudes and harmonic structure can be used to place interesting constraints on the masses and radii of neutron stars and therefore the equation of state of supranuclear density matter. Inferences which can be drawn regarding the physics of thermonuclear burning will also be discussed. I will conclude with some outstanding theoretical questions and uncertainties and where future observations and theoretical work may lead. ## 2. Observational properties of burst oscillations Burst oscillations with a frequency of 363 Hz were first discovered from the LMXB 4U 1728-34 by Strohmayer et al. (1996). Since then an additional five sources with burst oscillations have been discovered. The burst oscillation sources and their observed frequencies are given in table 1. In the remainder of this section I review the important observational properties of these oscillations and attempt to lay out the evidence supporting the spin modulation hypothesis. ### 2.1. Oscillations at burst onset Many bursts show detectable oscillations during the $`1-2`$ s rise times typical of thermonuclear bursts. For example, Strohmayer, Zhang & Swank (1997) showed that some bursts from 4U 1728-34 have oscillation amplitudes as large as 43% within 0.1 s of the observed onset of the burst. They also showed that the oscillation amplitude decreased monotonically as the burst flux increased during the rising portion of the burst lightcurve. Figure 1 shows this behavior in a burst from 4U 1636-53. This burst had an oscillation amplitude near onset of $`\sim 80`$%, and then showed an episode of radius expansion beginning near the time when the oscillation became undetectable (see Strohmayer et al. 1998a). The presence of modulations of the thermal burst flux approaching 100% right at burst onset fits nicely with the idea that early in the burst there exists a localized hot spot which is then modulated by the spin of the neutron star.
In this scenario the largest modulation amplitudes are produced when the spot is smallest; as the spot grows to encompass more of the neutron star surface, the amplitude drops, consistent with the observations. X-ray spectroscopy during burst rise also suggests that the emission is localized near the onset of bursts. Prior to RXTE few instruments had the collecting area and temporal resolution to study spectral evolution during the short rise times of thermonuclear bursts. Day & Tawara (1990) used GINGA observations of 4U 1728-34 in an attempt to constrain the e-folding spreading time of the burning front to $`\lesssim 0.1`$ s in two bursts. With RXTE Strohmayer, Zhang & Swank (1997) investigated the spectral evolution of bursts from 4U 1728-34. They fit a black body model to intervals during several bursts and plotted the flux $`F_{bol}`$ versus $`F_{bol}^{1/4}/kT_{BB}`$. For black body emission from a spherical surface this ratio is a constant proportional to $`(R/d)^{1/2}`$, where $`d`$ and $`R`$ are the source distance and radius, respectively. Figure 2 shows such a plot for a burst from 4U 1728-34. In this plot the solid line connects successive time intervals, with the burst beginning in the lower left and evolving diagonally to the upper right and then across to the left. This evolution indicates that the X-ray emitting area is not constant, but increases with time during the burst rise. The spectra of type I bursts are not true black bodies (see London, Taam & Howard 1986; Ebisuzaki 1987; Lewin, van Paradijs & Taam 1993); however, the argument here concerns the energetics and not the detailed shape of the spectrum. Since the effect is seen in bursts that do not show photospheric radius expansion, the atmosphere is always geometrically thin compared with the stellar radius, so the physics of spectral formation depends only on conditions locally.
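The constancy of $`F_{bol}^{1/4}/kT_{BB}`$ for a fixed emitting area is elementary but worth making explicit (a one-line illustration):

```python
def radius_distance_proxy(f_bol, kt):
    """For black-body emission from a sphere, F = sigma * T^4 * (R/d)^2,
    so F^(1/4) / kT is proportional to (R/d)^(1/2): it stays constant
    during a burst only if the emitting area does."""
    return f_bol ** 0.25 / kt
```

Raising the temperature by a factor of 2 at fixed $`R/d`$ raises the flux by 16 and leaves the ratio unchanged; a deficit in the ratio during the rise therefore signals a smaller emitting area.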
The same spectral hardening corrections will apply to intervals during the rising and cooling portions of the burst which have the same measured black body temperatures (ie. spectral shapes), thus, the deficit in $`F_{bol}^{1/4}/kT_{BB}`$ evident during the rising phase cannot be due solely to spectral formation effects. ### 2.2. Expectations from the theory of thermonuclear burning The thermonuclear instability which triggers an X-ray burst burns in a few seconds the fuel which has been accumulated on the surface over several hours. This $`>`$ 10<sup>3</sup> difference between the accumulation and burning timescales means that it is unlikely that the conditions required to trigger the instability will be achieved simultaneously over the entire stellar surface. This realization, first emphasized by Joss (1978), led to the study of lateral propagation of the burning over the neutron star surface (see Fryxell & Woosley 1982, Nozakura, Ikeuchi & Fujimoto 1984, and Bildsten 1995). The subsecond risetimes of thermonuclear X-ray bursts suggests that convection plays an important role in the physics of the burning front propagation, especially in the low accretion rate regime which leads to large ignition columns (see Bildsten (1998) for a review of thermonuclear burning on neutron stars). Bildsten (1995) has shown that pure helium burning on neutron star surfaces is in general inhomogeneous, displaying a range of behavior which depends on the local accretion rate, with low accretion rates leading to convectively combustible accretion columns and standard type I bursts, while high accretion rates lead to slower, nonconvective propagation which may be manifested in hour long flares. These studies emphasize that the physics of thermonuclear burning is necessarily a multi-dimensional problem and that localized burning is to be expected, especially at the onset of bursts. 
The properties of oscillations near burst onset described above fit well into this picture of thermonuclear burning on neutron stars. Miller (1999) has recently found evidence for a significant 290 Hz subharmonic of the strong 580 Hz oscillation seen in bursts from 4U 1636-53. Evidence for the subharmonic was found by adding together in phase data from the rising intervals of 5 bursts. This result suggests that in 4U 1636-53 the spin frequency is 290 Hz, and that the strong signal at 580 Hz is caused by nearly antipodal hot spots. If correct, this result has interesting implications for the physics of nuclear burning, in particular, how the burning can be spread from one pole to the other within a few tenths of a second, and how fuel is pooled at the poles, perhaps by a magnetic field. ### 2.3. The coherence of burst oscillations One of the most interesting aspects of the burst oscillations is the frequency evolution evident in many bursts. The frequency is observed to increase by $`1-3`$ Hz in the cooling tail, reaching a plateau or asymptotic limit (see Strohmayer et al. 1998a). An example of this behavior in a burst from 4U 1702-429 is shown in figure 3. However, increases in the oscillation frequency are not universal; Strohmayer (1999) and Miller (1999) have recently reported on an episode of spin down in the cooling tail of a burst from 4U 1636-53. Frequency evolution has been seen in five of the six burst oscillation sources and appears to be commonly associated with the physics of the modulations. Strohmayer et al. (1997) have argued this evolution results from angular momentum conservation of the thermonuclear shell. The burst expands the shell, increasing its rotational moment of inertia and slowing its spin rate. Near burst onset the shell is thickest and thus the observed frequency lowest. The shell then spins back up as it cools and recouples to the bulk of the neutron star.
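The size of the expected shift can be estimated in one line (the stellar parameters in the usage note are illustrative):

```python
def shell_frequency_shift(nu_spin, delta_r, radius):
    """Angular-momentum-conserving estimate: lifting the burning shell by
    delta_r on a star spinning at nu_spin changes the shell frequency by
    roughly delta_nu = 2 * nu_spin * (delta_r / R)."""
    return 2.0 * nu_spin * delta_r / radius
```

For illustrative values nu_spin = 400 Hz, delta_r = 20 m and R = 10 km this gives about 1.6 Hz, of the order of the observed drifts.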
Calculations indicate that the $`\sim 10`$ m thick pre-burst shell expands to $`\sim 30`$ m during the flash (see Joss 1978; Bildsten 1995), which gives a frequency shift due to angular momentum conservation of $`2\nu _{spin}(20\mathrm{m}/R)`$, where $`\nu _{spin}`$ and $`R`$ are the stellar spin frequency and radius, respectively. For the several hundred Hz spin frequencies inferred from burst oscillations this gives a shift of $`\sim 2`$ Hz, similar to that observed. In bursts where frequency drift is evident the drift broadens the peak in the power spectrum, producing quality factors $`Q\equiv \nu _0/\mathrm{\Delta }\nu _{FWHM}\sim 300`$. In some bursts a relatively short train of pulses is observed during which there is no strong evidence for a varying frequency. Recently, Strohmayer & Markwardt (1999) have shown that with accurate modeling of the frequency drift quality factors as high as $`Q\sim 4,000`$ are achieved in some bursts. They modelled the frequency drift and showed that a simple exponential “chirp” model of the form $`\nu (t)=\nu _0(1-\delta _\nu \mathrm{exp}(-t/\tau ))`$ works remarkably well. The resulting quality factors derived from the frequency modelling are very nearly consistent with the factors expected from a perfectly coherent signal of finite duration equal to the length of the data trains in the bursts. These results argue strongly that the mechanism which produces the modulations is a highly coherent process, such as stellar rotation, and that the asymptotic frequencies observed during bursts represent the spin frequency of the neutron star. ### 2.4. The long-term stability of burst oscillation frequencies The accretion-induced rate of change of the neutron star spin frequency in a LMXB is approximately $`1.8\times 10^{-6}`$ Hz yr<sup>-1</sup> for typical neutron star and LMXB parameters.
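The exponential chirp parametrization used in the drift modelling is simple enough to state as a one-line sketch (the parameter values in the usage note are illustrative):

```python
import math

def chirp_frequency(t, nu0, delta, tau):
    """Exponential chirp model for the burst-oscillation frequency drift:
    nu(t) = nu0 * (1 - delta * exp(-t / tau)); the frequency recovers from
    a fractional deficit delta at t = 0 toward the asymptotic value nu0."""
    return nu0 * (1.0 - delta * math.exp(-t / tau))
```

With illustrative values nu0 = 330 Hz and delta = 0.01, the oscillation starts 3.3 Hz below nu0 and approaches nu0 for t much larger than tau.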
The Doppler shift due to orbital motion of the binary can produce a frequency shift of magnitude $`\mathrm{\Delta }\nu /\nu =v\mathrm{sin}i/c\approx 2.05\times 10^{-3}`$, again for canonical LMXB system parameters. This Doppler shift easily dominates over any possible accretion-induced spin change on orbital to several-year timescales. Therefore the extent to which the observed burst oscillation frequencies are consistent with possible orbital Doppler shifts, but otherwise stable over $`\sim `$ year timescales, provides strong support for a highly coherent mechanism which sets the observed frequency. At present, the best source available to study the long term stability of burst oscillations is 4U 1728-34. Strohmayer et al. (1998b) compared the observed asymptotic frequencies in the decaying tails of bursts separated in time by $`\sim 1.6`$ years. They found the burst frequency to be highly stable, with an estimated time scale to change the oscillation period of about 23,000 years. It was also suggested that the stability of the asymptotic periods might be used to infer the X-ray mass function of LMXB by comparing the observed asymptotic period distribution of many bursts and searching for an orbital Doppler shift. ## 3. Burst oscillations as probes of neutron stars Detailed studies of the burst oscillation phenomenon hold great promise for providing new insights into a variety of physics issues related to the structure and evolution of neutron stars. In particular, the burst oscillations have given astronomers their first direct method to investigate the two dimensional nature of nuclear flame front propagation. In this section I will outline how the burst oscillations can be used to probe neutron stars. ### 3.1.
Mass - Radius constraints and the EOS of dense matter Using the rotating hot spot model it is possible to determine constraints on the mass and radius of the neutron star from measurements of the maximum observed modulation amplitudes during X-ray bursts as well as the harmonic content of the pulses. The physics that makes such constraints possible is the bending of photon trajectories in a strong gravitational field. The strength of the deflection is a function of the stellar compactness, $`GM/c^2R`$, with more compact stars producing greater deflections and therefore weaker spin modulations. An upper limit on the compactness can be set since a star more compact than this limit would not be able to produce a modulation as large as that observed. Complementary information comes from the pulse shape, which can be inferred from the strength of harmonics. Information on both the amplitude and harmonic content can thus be used to bound the compactness. Detailed modelling, during burst rise for example, can then be used to determine a confidence region in the mass - radius plane for neutron stars. Miller & Lamb (1998) have investigated the amplitude of rotational modulation pulsations as well as harmonic content assuming emission from a point-like hot spot. They also show that knowledge of the angular and spectral dependence of the emissivity from the neutron star surface can have important consequences for the derived constraints. More theoretical as well as data modelling in this area are required. ### 3.2. Doppler shifts and pulse phase spectroscopy Stellar rotation will also play a role in the observed properties of spin modulation pulsations. For example, a 10 km radius neutron star spinning at 400 Hz has a surface velocity of $`v_{spin}/c2\pi \nu _{spin}R0.084`$ at the rotational equator. 
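The magnitudes quoted in the preceding paragraphs (orbital Doppler shift versus accretion-induced drift, stellar compactness, equatorial surface velocity) are easy to reproduce; the sketch below simply evaluates those expressions for canonical parameters.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def orbital_doppler_shift_hz(nu, dnu_over_nu=2.05e-3):
    """Frequency shift from orbital motion, Delta_nu = nu * (v sin i / c),
    using the canonical Delta_nu/nu quoted in the text."""
    return nu * dnu_over_nu

def accretion_shift_hz(baseline_yr, nu_dot=1.8e-6):
    """Accretion-induced frequency change accumulated over a baseline."""
    return nu_dot * baseline_yr

def compactness(mass_msun, radius_km):
    """Dimensionless compactness GM/(c^2 R): larger values mean stronger
    light bending and hence weaker spin modulations."""
    return G * mass_msun * M_SUN / (C ** 2 * radius_km * 1e3)

def surface_velocity_frac_c(nu_spin_hz, radius_km):
    """Equatorial surface velocity v_spin/c = 2*pi*nu_spin*R/c."""
    return 2.0 * math.pi * nu_spin_hz * radius_km * 1e3 / C

dop = orbital_doppler_shift_hz(363.0)        # ~0.7 Hz orbital Doppler shift
acc = accretion_shift_hz(1.6)                # ~3e-6 Hz over 1.6 yr: negligible
u = compactness(1.4, 10.0)                   # ~0.21 for a canonical neutron star
beta = surface_velocity_frac_c(400.0, 10.0)  # ~0.084, as quoted in the text
```

The orders-of-magnitude gap between `dop` and `acc` is the quantitative basis for the stability argument, and `beta` sets the scale of the spectroscopic effects discussed next.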
This motion of the hot spot produces a Doppler shift of magnitude $`\mathrm{\Delta }E/E\approx v_{spin}/c`$; thus the observed spectrum is a function of pulse phase (see Chen & Shaham 1989). Measurement of a pulse-phase-dependent Doppler shift in the X-ray spectrum would provide additional evidence supporting the spin modulation model and would also yield a means of constraining the neutron star radius, perhaps one of the few direct methods to infer this quantity for neutron stars. The rotationally induced velocity also produces a relativistic aberration which results in asymmetric pulses; thus the pulse shapes also contain information on the spin velocity and therefore the stellar radius (Chen & Shaham 1989). The component of the spin velocity along the line of sight is proportional to $`\mathrm{cos}\theta `$, where $`\theta `$ is the latitude of the hot spot measured with respect to the rotational equator. The modulation amplitude also depends on the latitude of the hot spot, as spots near the rotational poles produce smaller amplitudes than those at the equator. Thus a correlation between the observed oscillation amplitude and the size of any pulse-phase-dependent Doppler shift is to be expected. Detection of such a correlation in a sample of bursts would provide strong confirmation of the rotational modulation hypothesis. Searches for a Doppler shift signature are just beginning to be carried out. Studies of single bursts have shown that spectral variations with pulse phase can be detected (see Strohmayer, Swank, & Zhang 1998). The variations with pulse phase show a 4-5 % modulation of the fitted black body temperature, consistent with the idea that a temperature gradient is present on the stellar surface, which when rotated produces the flux modulations. Ford (1999) has analysed data during a burst from Aql X-1 and finds that the softer photons lag the higher energy photons in a manner qualitatively similar to that expected from a rotating hot spot.
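The cos θ scaling of the line-of-sight spin velocity can be sketched as a simple geometric toy (not the full aberration treatment of Chen & Shaham 1989); it shows why equatorial spots should combine large modulation amplitudes with large Doppler shifts.

```python
import math

def spot_doppler_amplitude(beta_eq, latitude_deg):
    """Line-of-sight Doppler amplitude Delta_E/E of a hot spot: the spot's
    rotational velocity scales as cos(theta), with theta the latitude
    measured from the rotational equator."""
    return beta_eq * math.cos(math.radians(latitude_deg))

# An equatorial spot shows the full ~8.4% shift; a near-polar spot very little.
equator = spot_doppler_amplitude(0.084, 0.0)
high_lat = spot_doppler_amplitude(0.084, 80.0)
```

Since the modulation amplitude falls off toward the poles as well, both observables shrink together, which is the correlation the text proposes to search for.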
Strohmayer & Markwardt (1999) have shown that signals from multiple bursts can be added in phase by modelling the frequency drifts present in individual bursts. This provides a stronger signal with which to test for Doppler shift effects. So far, burst oscillation signals from 4U 1702-429 have been added in phase in an attempt to identify a rotational Doppler shift. A difficulty in analysing the phase-resolved spectra from bursts is the systematic change in the black body temperature produced as the surface cools. A simpler measure of spectral hardness than the black body temperature is the mean energy channel of the spectrum. I computed the distribution of mean channels in the RXTE proportional counter array (PCA) as a function of pulse phase using spectra from 4 different bursts from 4U 1702-429. Figure 4 shows the results. A strong modulation of the mean PCA channel is clearly seen. There is a hint of an asymmetry in that the leading edge of the pulse appears harder (as expected for a rotational Doppler shift) than the trailing edge, but the difference does not have a high statistical significance. More data will be required to settle the rotational Doppler shift issue.

### 3.3. Physics of thermonuclear burning

The properties of burst oscillations can tell us a great deal about the processes of nuclear burning on neutron stars. The amplitude evolution during the rising phase of bursts contains information on how rapidly the flame front is propagating. If the antipodal spot hypothesis to explain the presence of a subharmonic in 4U 1636-53 is correct, then it has important implications for the propagation of the instability from one pole to the other in $`\sim `$ 0.2 s (see Miller 1999). In addition, a two-pole flux anisotropy suggests that the nuclear fuel is likely pooled by some mechanism, perhaps associated with the magnetic field of the star. Further detections and study of the subharmonic in 4U 1636-53 could shed more light on these issues.
Until recently, much of the work concerning burst oscillations has concentrated on studies of the pulsations themselves and their relation to individual bursts. With the samples of bursts growing, it is now possible to conceive of more global studies which correlate the properties of oscillations with other measures of these sources, for example their spectral state and mass accretion rate. This will allow researchers to investigate the system parameters which determine the likelihood of producing bursts which show oscillations. Such investigations will provide insight into how the properties of thermonuclear burning (as evidenced by the presence or absence of oscillations) are influenced by other properties of the system. Furthermore, we can test whether theoretical predictions of how the burning should behave are consistent with the hypothesis that the oscillations result from rotational modulation of nonuniformities produced by thermonuclear burning. Initial work in this regard suggests that bursts which occur at higher mass accretion rates show stronger burst oscillations more often (see Franco et al. 1999). Although preliminary, this result appears roughly consistent with theoretical descriptions of the thermonuclear burning which indicate an evolution from vigorous, rapid (thus uniform) burning at lower mass accretion rates (lower persistent count rates) to weaker, slower burning (thus more non-uniform) at higher mass accretion rates (see Bildsten 1995, for example).

## 4. Remaining puzzles and the future

Although much of the burst oscillation phenomenology is well described by the spin modulation hypothesis, several important hypotheses need to be confronted with more detailed theoretical investigations. Perhaps the most interesting is the mechanism which causes the observed frequency drifts. Expressed as a phase slip, the frequency drifts seen during the longest pulse trains correspond to about 5 - 10 revolutions around the star.
Whether or not a shear layer can persist that long needs to be further investigated. The recent observation of a so far unique spin down in the decaying tail of a burst from 4U 1636-53 (see Strohmayer 1999), which might be the first detection of spin down caused by thermal expansion of the burning layers, needs to be better understood in the context of thermonuclear energy release at late times in bursts. Another perplexing issue is the mechanism which allows flux asymmetries both to form and then to persist at late times in bursts. Although RXTE provided the technical advancements required to discover the burst oscillations, it may take future, larger-area instruments such as Constellation-X or a successor timing mission to RXTE to fully exploit their potential for unlocking the remaining secrets of neutron stars.

## REFERENCES

Backer, D. C., Kulkarni, S. R., Heiles, C., Davis, M. M. & Goss, W. M. 1982, Nature, 300, 615
Bhattacharya, D. 1995, in X-ray Binaries, ed. W. H. G. Lewin, J. van Paradijs, & E. P. J. van den Heuvel (Cambridge: Cambridge Univ. Press), p. 233
Bildsten, L. 1995, ApJ, 438, 852
Bildsten, L. 1998, in “The Many Faces of Neutron Stars”, ed. R. Buccheri, A. Alpar & J. van Paradijs (Dordrecht: Kluwer), p. 419
Chen, K. & Shaham, J. 1989, ApJ, 339, 279
Day, C. S. R. & Tawara, Y. 1990, MNRAS, 245, 31P
Ebisuzaki, T. 1987, PASJ, 39, 287
Ford, E. C. 1999, ApJ, 519, L73
Franco, L. et al. 1999, in preparation
Fryxell, B. A. & Woosley, S. E. 1982, ApJ, 261, 332
Joss, P. C. 1978, ApJ, 225, L123
Lewin, W. H. G., van Paradijs, J. & Taam, R. E. 1993, Space Sci. Rev., 62, 233
London, R. A., Taam, R. E. & Howard, W. M. 1986, ApJ, 306, 170
Miller, M. C. 1999a, ApJ, 515, L77
Miller, M. C. 1999b, ApJ, submitted
Miller, M. C., Lamb, F. K. & Psaltis, D. 1998, ApJ, 508, 791
Miller, M. C. & Lamb, F. K. 1998, ApJ, 499, L37
Nozukura, T., Ikeuchi, S. & Fujimoto, M. Y. 1984, ApJ, 286, 221
Smith, D., Morgan, E. H. & Bradt, H. V. 1997, ApJ, 479, L137
Strohmayer, T. E. 1999, ApJ, 523, L51
Strohmayer, T. E. & Markwardt, C. B. 1999, ApJ, 516, L81
Strohmayer, T. E., Swank, J. H. & Zhang, W. 1998, Nuclear Phys. B (Proc. Suppl.), 69/1-3, 129
Strohmayer, T. E., Zhang, W., Swank, J. H., White, N. E. & Lapidus, I. 1998a, ApJ, 498, L135
Strohmayer, T. E., Zhang, W., Swank, J. H. & Lapidus, I. 1998b, ApJ, 503, L147
Strohmayer, T. E., Zhang, W. & Swank, J. H. 1997, ApJ, 487, L77
Strohmayer, T. E. et al. 1996, ApJ, 469, L9
Taylor, J. H., Manchester, R. N. & Lyne, A. G. 1993, ApJS, 88, 529
Vaughan, B. A. et al. 1994, ApJ, 435, 362
Webbink, R. F., Rappaport, S. A. & Savonije, G. J. 1983, ApJ, 270, 678
Wood, K. S. et al. 1991, ApJ, 379, 295
Zhang, W., Lapidus, I., Swank, J. H., White, N. E. & Titarchuk, L. 1996, IAUC 6541
# Kinematical trends among the field horizontal branch stars

Based in part on HIPPARCOS data

## 1 Introduction: HB stars, their population membership and galactic structure

Stars of horizontal-branch nature are important objects in studies of the older stellar populations and in studies of galactic structure in relation to formation theories for the Milky Way. Their virtue lies in three properties. First, they have rather well defined absolute magnitudes, and their distances are therefore easy to determine. Second, their nature is such that they are easy to discover, in particular at higher galactic latitudes. Third, the stellar evolution leading to such stars is in principle relatively well understood. The horizontal branch (HB) stellar phase is the core helium burning late stage of evolution of originally low to medium mass stars. During the red giant phase the stars have lost mass such that a helium core of $`\sim 0.5`$ M<sub>☉</sub>, surrounded by an outer shell of hydrogen gas, remains. For very thin shells, $`M_{\mathrm{shell}}<0.02`$ M<sub>☉</sub>, the stars are rather blue and of spectral class sdB (subdwarf B); for ever thicker shells their atmospheres are cooler, leading to spectral types such as horizontal branch B and A (HBB and HBA) and to the red HB (RHB) stars with $`M_{\mathrm{shell}}\simeq 0.5`$ M<sub>☉</sub>. Between the HBA and RHB stars lies the pulsational instability strip with the RR Lyrae stars. (For the systematics of the spectral classification see de Boer et al. 1997c.) HBA and HBB stars are often called blue HB (BHB) stars; the even hotter sdB and sdO stars are also known as extended or extreme HB (EHB) stars. BHB and EHB stars can easily be found based on their blueness, the RR Lyr stars due to their variability. Of the BHB stars, the HBAs are easily identified, as their physical parameters differ from those of main sequence A stars, while HBBs may be confused with main sequence B stars. However, at higher galactic latitudes A or B main sequence stars are very rare.
RHB stars can easily be confused with subgiants and giants because they lie in the same region of the HRD. To trace the structure of the (local part of the) Milky Way one employs several techniques (see also the review by Majewski 1993). The classical method is to perform star counts (see, e.g., Bahcall & Soneira 1984). Selecting stars based on their proper motion allows one, if their radial velocities and distances are also determined, to study the true kinematics of the stars (see, e.g., Carney et al. 1996 and papers cited there). A proper-motion selection naturally yields the general group of more nearby stars, extended by true high-velocity stars. Another method is to sample distances and radial velocities of a specific set of stars, such as BHB or RR Lyr stars, and to investigate these parameters in a statistical manner (see, e.g., Kinman et al. 1994, 1996). A more specific method is to observe statistically complete samples of stars of a special type in several directions to derive scale heights (for sdB stars see, e.g., Heber 1986, Moehler et al. 1990, Theissen et al. 1993) or scale lengths in the Milky Way. A further possibility is to go beyond the present-day kinematic parameters by calculating orbits based on distances, radial velocities, and proper motions. This method has been used for high proper motion stars (Carney et al. 1996), other dwarf stars (Schuster & Allen 1997), and for globular clusters (Dauphole et al. 1996). Also the orbits of sdB stars have been investigated (Colin et al. 1994, de Boer et al. 1997a). The latter study showed that most of the sdB stars have orbits staying fairly well inside the Milky Way disk, indicating that the sdB stars are not generally part of the halo population. In the present study we attempt a similar analysis for HBA and HBB stars (for short: HBA/B stars). Our sample consists of the local HB stars which were observed by the Hipparcos satellite.
These are the HB-like stars with the most accurate spatial and kinematic data available to date. However, only for a few HBA/B stars are the parallaxes accurate enough to calculate reliable distances (de Boer et al. 1997b). For the other stars the distance must still be derived from photometry. Here one needs to know the absolute magnitude of field horizontal branch stars. Especially since the publication of the Hipparcos catalogue much effort has gone into fixing this value; however, this has not yet led to full agreement. For a review of the various approaches to solving this problem we refer to de Boer (1999) and Popowski & Gould (1999). An important parameter in these studies is the metallicity of the stars, as it is generally thought to be correlated with age. For dwarf stars metallicities can be estimated using photometric indices or spectroscopy (see the summary by Majewski 1993). For HB stars this is, unfortunately, not a trustworthy method. The atmospheres of many HB stars have most probably been altered chemically with respect to the original composition. Gravitational settling in the atmospheres of sdB/O and possibly HBB stars leads to a reduced present-day content of elements like He, while radiative levitation of heavy elements leads to atmospheres with enhanced abundances of certain elements like Fe or Au, as found in several field horizontal branch stars, e.g. Feige 86 (Bonifacio et al. 1995). Levitation must also be the explanation for the high metal abundances in blue HB stars in M 13 (Behr et al. 1999) and NGC 6752 (Moehler et al. 1999), finally invoked to explain the deviant flux distributions near the Balmer jump of globular cluster blue HB stars (Grundahl et al. 1999). Therefore, original metallicities (as well as the original masses) are no longer accessible quantities.
Determining the kinematic properties can help in deciding which of the HB stars are intrinsically more metal poor and which are more metal rich, and hence of somewhat younger origin. The main subjects of our study are the HBA/B stars, which are located in the colour-magnitude diagram on the horizontal branch between the RR Lyrae stars and the hot subdwarfs. Unfortunately the main sequence crosses the HB in the HBB region, so that HBB stars can be confused with normal B stars. Therefore we have only a few HBB stars in our sample. We mainly focus on HB stars with temperatures lower than 10,000 K, which lie above the main sequence. Sect. 2 deals with the data necessary for our study. In Sect. 2.3 we determine the absolute magnitudes and distances of the HBA/B and sdB/O stars with the method of auto-calibration, using the shape of the HB defined by the stars with the best Hipparcos parallaxes. In Sect. 3 we discuss the kinematical behaviour of the HBA/B and sdB/O stars and make comparisons with the results of de Boer et al. (1997a). To further explore a possible trend in kinematics of stars along the HB we investigate (Sect. 4) the orbits of a sample of RR Lyrae stars.

## 2 The data

### 2.1 Composition of the sample

Our sample consists of the Hipparcos (ESA 1997) measured HB stars. In order to identify them we searched through lists of bright HB candidates in publications concerning horizontal branch stars, such as Kilkenny et al. (1987) for sdB/O stars, and Corbally & Gray (1996), Huenemoerder et al. (1984), and de Boer et al. (1997b) for the HBA/B stars. However, for a few stars in these lists indications exist that they are probably not horizontal branch stars. Among these are HD 64488 (Gray et al. 1996), HD 4772 (Abt & Morell 1995, Philip et al. 1990), HD 24000 (Rydgren 1971), HD 52057 (Waelkens et al. 1998, Stetson 1991) and HD 85504 (Martinet 1970).
This sample, although of limited size, represents the HB stars with by far the best kinematical data currently available. Two further stars are HB-like but were nevertheless excluded from the study. BD +32 2188 has a rather low value of log $`g`$, so that it lies considerably above the ZAHB in the log $`g`$ – $`T_{\mathrm{eff}}`$ diagram. Being metal deficient (Corbally & Gray 1996), it can be considered a horizontal branch star evolving away from the ZAHB. Because the evolutionary state is not fully HB, the star cannot be part of our sample. HD 49798 is a subluminous O star. However, its log $`g`$ is relatively low and its trigonometric parallax implies a star with an absolute brightness of about $`-2`$ mag, far too bright for a normal sdO star. It is probably on its way from the horizontal branch to becoming a white dwarf, or it is a former post-AGB star. Because of these aspects we excluded this star. A large fraction of the known horizontal branch stars has no published radial velocity and could therefore not be used for our study. A few stars had radial velocities but no Hipparcos data. There is no constraint on the position, so the sample stars are located in all parts of the sky. However, as many studies were made in fields near the galactic poles, we have relatively more stars at very high galactic latitudes. Although our sample of stars is certainly not statistically complete in any way, we do not expect noticeable selection effects due to position in the sky (see Sect. 5.2).

### 2.2 Physical properties of the stars, extinction

While many of the stars are classical template HB stars, like HD 2857, HD 109995, HD 130095 or HD 161817, others are not as well studied. For most of our stars values for $`\mathrm{log}g`$ and $`T_{\mathrm{eff}}`$ are available in the literature from a variety of methods. Sources are given in Table 1.
For HD 78913 and HD 106304, $`\mathrm{log}g`$ and $`T_{\mathrm{eff}}`$ were derived from a fit of Kurucz models to spectrophotometric IUE data and photometry. For BD +25 2602 no data are available to determine $`\mathrm{log}g`$ and $`T_{\mathrm{eff}}`$. We keep it as part of our sample, as it was identified as a horizontal branch star by Stetson (1991). Wherever possible we took the values for $`E_{B-V}`$ from de Boer et al. (1997b), supplemented by values listed in Gratton (1998). For the other stars we derived $`E_{B-V}`$ using ($`B-V`$) and ($`U-B`$) values taken from the SIMBAD archive and a two-colour diagram. Note that with this method there may well be metallicity-dependent effects influencing the derived reddening. For the star CD $`-`$38 222 no ($`U-B`$) data are available; the reddening is very small, as follows from the IRAS maps of Schlegel et al. (1998). We adopted the value from that study.

### 2.3 Absolute magnitudes and distances

We obtained the distances of the HB stars using the absolute magnitude of the relevant portion of the HB rather than directly using the Hipparcos parallaxes. The reason for this is that most of the parallaxes are smaller than 3 mas, which means that their error of on average 1 mas is too large to calculate accurate distances. The absolute magnitudes $`M_V`$, which are a function of the temperature and thus of $`(B-V)_0`$, have been derived through self-calibration as follows. We started with the determination of the shape of the field horizontal branch. For this we calculated the absolute magnitudes of those HB and sdB/O stars which have reasonably good parallaxes. For the determination of the mean absolute magnitude of the HB sample we excluded HD 74721 and BD +42 2309, because their absolute magnitudes, calculated from their parallaxes, are too bright by more than 3.5 magnitudes. Also excluded at this point are HD 14829 and HD 117880, whose parallaxes lead to absolute magnitudes far too faint.
With this medianization (leaving out the extremes on both sides) we ensure that our result is not affected by stars with extreme values. Furthermore, the stars having parallaxes with $`\mathrm{\Delta }\pi /\pi >1`$ were excluded from the determination of the shape of the HB. We then fitted by eye a curve to our sample in the colour-magnitude diagram. In order to smooth this curve, it was approximated by a polynomial. Note that we aim to fit the observed parameters of the field horizontal branch and that we do not rely on a shape taken from globular clusters or theoretical models (see Fig. 1). From this we determined the value $`\delta M_V`$, giving the difference of $`M_V`$ for each ($`B-V`$)<sub>0</sub> with respect to $`M_V`$ at ($`B-V`$)<sub>0</sub>=0.2 mag. Although the available metallicity measurements show a large spread for individual stars (see table II of Philip 1987), the averages for each star lie around \[Fe/H\] $`=-1.5`$ dex. Since the effect of metallicity on $`M_V`$ is small for RR Lyr stars (about 0.1 mag per 0.5 dex, see de Boer 1999), we will neglect metallicity effects for the HBA stars. Distances and absolute magnitudes of a sample of stars obtained through trigonometric parallaxes have to be corrected for the Lutz-Kelker bias (Lutz & Kelker 1973). This statistical effect, depending on the relative error of the parallaxes, leads on average to an over-estimation of the parallax, and thus to too faint absolute magnitudes and too short distances for the sample. The correction we applied is based on the averaging of parallaxes. For that we have to correct the parallaxes of individual stars, acknowledging that such a correction is only valid in a statistical sense. The expected parallax $`\pi ^{}`$ is given by

$$\pi ^{}=10^{0.2\left[M_V-V+\delta M_V\right]-1+0.62E_{B-V}}$$ (1)

with $`M_V`$ being the absolute magnitude, $`V`$ the apparent magnitude, and $`E_{B-V}`$ the reddening.
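Eq. (1) is straightforward to evaluate. The sketch below is a hedged reading of the formula (π′ in arcsec; the 0.62 E(B−V) term is 0.2 × A<sub>V</sub> with A<sub>V</sub> = 3.1 E(B−V)), and checks that 1/π′ reproduces the photometric distance for an unreddened star.

```python
def expected_parallax_arcsec(M_V, delta_M_V, V, E_BV):
    """Expected parallax (arcsec) following Eq. (1):
    pi' = 10**(0.2*(M_V - V + delta_M_V) - 1 + 0.62*E_BV).
    The 0.62*E_BV term is 0.2*A_V with A_V = 3.1*E(B-V)."""
    return 10.0 ** (0.2 * (M_V - V + delta_M_V) - 1.0 + 0.62 * E_BV)

# Consistency check: for an unreddened star, 1/pi' must equal the
# photometric distance d = 10**((V - M_V + 5)/5) pc.
pi_exp = expected_parallax_arcsec(0.63, 0.0, 8.0, 0.0)
d_phot = 10.0 ** ((8.0 - 0.63 + 5.0) / 5.0)   # ~298 pc
```

Extinction enters with a positive sign: for a fixed apparent magnitude, a reddened star must be intrinsically closer, so its expected parallax is larger.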
$`\delta M_V`$ is a term which accounts for the temperature and/or $`B-V`$ dependence of the absolute magnitude of BHB stars, in the same way as done by Gratton (1998). Now $`M_V`$ is varied and $`\chi ^2(M_V)=\mathrm{\Sigma }_i(\pi _i^{}(M_V)-\pi _i)^2/(\mathrm{\Delta }\pi _i)^2`$ is calculated (formula as revised by Popowski & Gould 1999). At the correct $`M_V`$ the value of $`\chi ^2`$ is minimal. $`M_V`$ is now found using all stars, regardless of their $`\mathrm{\Delta }\pi /\pi `$, except the four excluded above. We arrived at an absolute magnitude of $`M_V=0.63\pm 0.08`$ mag for the horizontal part ($`(B-V)_0\ge 0.2`$ mag) of the horizontal branch. As stated before, this value should be valid for \[Fe/H\] $`=-1.5`$ dex. However, as the curve defining the shape of the HB is subjective to a certain extent, the real error of the HB’s absolute magnitude is somewhat larger. The absolute magnitudes, and thus the distances, of the individual stars (including those omitted earlier) are obtained by adding their $`\delta M_V`$ to the mean absolute magnitude of the HB.

### 2.4 Proper motions and positions

Positions and proper motions used in this work were taken from the Hipparcos catalogue (ESA 1997). The mean error of the proper motions is below 1.5 mas/yr (see Table 2), which translates into an error in the tangential velocity of 3.5 km s<sup>-1</sup> for a star at a distance of 500 pc. As most of our stars have smaller distances, the error caused by the proper motion uncertainty is even smaller. No star of the sample of HBA/B or sdB/O stars has an astrometric flag in the Hipparcos catalogue, indicating there were no problems in the data reduction. The Hipparcos goodness-of-fit statistic is below +3 in all cases, meaning that the astrometric data derived from the Hipparcos catalogue should be reliable, and there are no indications that our sample contains double stars.
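The χ² minimization just described can be sketched as a simple grid search. The star data below are synthetic, generated so that a known "true" HB magnitude is recovered; they are not the actual sample.

```python
def expected_parallax(M_V, delta_M_V, V, E_BV):
    """Photometric parallax in arcsec, following Eq. (1) of the text."""
    return 10.0 ** (0.2 * (M_V - V + delta_M_V) - 1.0 + 0.62 * E_BV)

def chi2(M_V, stars):
    """chi^2(M_V) = sum_i (pi'_i(M_V) - pi_i)^2 / (Delta pi_i)^2."""
    return sum((expected_parallax(M_V, dMV, V, E) - pi) ** 2 / dpi ** 2
               for (V, dMV, E, pi, dpi) in stars)

def best_mv(stars, lo=0.0, hi=1.5, step=0.01):
    """Grid search for the M_V that minimizes chi^2 over the sample."""
    grid = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    return min(grid, key=lambda m: chi2(m, stars))

# Synthetic sample drawn with a "true" HB magnitude of 0.63 mag, no
# reddening, and 1 mas parallax errors (invented numbers, for illustration):
TRUE_MV = 0.63
sample = [(V, 0.0, 0.0, expected_parallax(TRUE_MV, 0.0, V, 0.0), 1e-3)
          for V in (7.5, 8.0, 8.5, 9.0, 9.5)]
recovered = best_mv(sample)
```

Fitting in parallax space rather than magnitude space is what makes the procedure robust against the Lutz-Kelker bias, since the parallax errors (not the magnitude errors) are the quantities that are approximately Gaussian.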
### 2.5 Radial velocities

The radial velocities were taken from original sources (see Table 2), in part found via the Hipparcos Input Catalogue (Turon et al. 1987). The typical uncertainties are about 10 km s<sup>-1</sup>, so they should not have a large effect on our results. The size of our sample was limited to a large extent by the lack of radial velocities; they could be found for only about 30% of the HB candidates. Radial velocities can be affected by binarity of the star. We cannot absolutely exclude this possibility for some of the stars, but, as noted in Sect. 2.4, there are no indications of binary nature for any of our stars. For some stars, Corbally & Gray (1996) found drastically different values for the radial velocity. They note, however, that in many of these cases their values may be affected for some reason (see their Sect. 4), as they show strong deviations with respect to values from the literature. We therefore used radial velocities from Corbally & Gray only for HD 2857, for which no other value is available.

## 3 Kinematics and orbits

In order to gain information about the nature and population membership of the stars, we analyse their kinematic behaviour and calculate their orbits.

### 3.1 Calculating orbits and velocities

Before calculating the orbits, the observational data have to be transformed into the coordinates of the galactic system ($`X,Y,Z;U,V,W`$). In this coordinate system, which has its origin in the galactic centre, $`X`$ points in the direction from the Sun to the galactic centre, $`Y`$ in the direction of galactic rotation at the position of the Sun, and $`Z`$ toward the north galactic pole. The same applies to the corresponding velocities $`U`$, $`V`$, $`W`$. The orbits are calculated using the model for the gravitational potential of our Milky Way by Allen & Santillan (1991), which was developed to be used in an orbit calculating program (Odenkirchen & Brosche 1992).
This model has been extensively used in the studies of de Boer et al. (1997a), Geffert (1998) and Scholz et al. (1996). There are several other models available which yield similar results as long as the orbits do not extend to extreme distances from the galactic centre (Dauphole et al. 1996). The model of Allen & Santillan (1991) is based on $`\mathrm{\Theta }_{\mathrm{LSR}}=220`$ km s<sup>-1</sup> and $`R_{\mathrm{LSR}}=8.5`$ kpc. The values for the peculiar velocity of the Sun used in the calculations in this paper are $`U_{\mathrm{pec},\odot }=10`$ km s<sup>-1</sup>, $`V_{\mathrm{pec},\odot }=15`$ km s<sup>-1</sup>, $`W_{\mathrm{pec},\odot }=8`$ km s<sup>-1</sup>. To determine the parameters $`z_{\mathrm{max}}`$, the maximum height reached above the galactic plane, and $`R_a`$ and $`R_p`$, the apo- and perigalactic distances, we calculated the orbits over 10 Gyr. This certainly does not give the true orbits, as orbits are probably altered over time by heating processes; however, this long timespan better shows the area the orbit can occupy in the meridional plane (see Fig. 2). As in de Boer et al. (1997a), we also calculated the eccentricity $`ecc`$ of the orbit, given by

$$ecc=\frac{R_\mathrm{a}-R_\mathrm{p}}{R_\mathrm{a}+R_\mathrm{p}}$$ (2)

and the normalised $`z`$-extent, $`nze`$, given by

$$nze=\frac{z_{\mathrm{max}}}{\varpi (z_{\mathrm{max}})}.$$ (3)

The parameter $`nze`$ is more relevant than $`z_{\mathrm{max}}`$, since it accounts for the effect of the diminished gravitational potential at larger galactocentric distance $`\varpi `$. To assign a star to a population, the $`U,V,W`$ velocities and their dispersions are often used, as well as the orbital velocity $`\mathrm{\Theta }`$. For stars near the Sun (small $`Y`$), the $`V`$ velocity is nearly the same as $`\mathrm{\Theta }`$. However, for stars further away from the Sun’s azimuth, $`\mathrm{\Theta }`$ becomes a linear combination of $`U`$ and $`V`$. Therefore $`\mathrm{\Theta }`$ should be preferred.
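Eqs. (2) and (3) translate directly into two one-line helpers; the numerical values below use the solar and HD 86986-like numbers quoted elsewhere in this section.

```python
def eccentricity(r_apo, r_peri):
    """Orbital eccentricity, ecc = (R_a - R_p) / (R_a + R_p)  (Eq. 2)."""
    return (r_apo - r_peri) / (r_apo + r_peri)

def nze(z_max, varpi_at_zmax):
    """Normalised z-extent, nze = z_max / varpi(z_max)  (Eq. 3)."""
    return z_max / varpi_at_zmax

# A halo-like orbit with R_a = 10 kpc and R_p = 0.4 kpc (HD 86986-like)
# is far more eccentric than a near-circular disk orbit:
halo_ecc = eccentricity(10.0, 0.4)   # ~0.92
disk_ecc = eccentricity(9.0, 7.5)    # ~0.09, close to the solar value
```

Dividing z_max by the galactocentric distance at which it is reached, rather than using z_max alone, compensates for the weaker vertical restoring force in the outer Galaxy.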
In order to make comparisons with results from other studies, we use both $`U,V,W`$ and $`\mathrm{\Theta }`$. We calculated the errors of the velocity components and of the orbital velocity using Monte Carlo simulations, varying the input parameters within their errors using Gaussian distributions, as described by Odenkirchen (1991). This is necessary, rather than simply applying Gaussian error propagation laws, because the parameters are significantly correlated. For the error calculation we used the software of Odenkirchen (priv. comm.). The proper motion errors were taken from the Hipparcos catalogue. The errors of the distances were calculated from the error in absolute magnitude as derived in Sect. 2.3. We took the errors of the radial velocities as published in the respective articles. For those radial velocities of Wilson (1953) and Evans (1967) having quality mark “e”, meaning the error is larger than 10 km s<sup>-1</sup>, we used 15 km s<sup>-1</sup> as the error. This is justified, as can be seen by comparing these values with those of other studies. Generally the error in the velocity components is less than 10 km s<sup>-1</sup>. Only a few stars have somewhat larger errors, the largest error in $`\mathrm{\Theta }`$ being 12 km s<sup>-1</sup>. For the HBA/B stars the typical value of $`\mathrm{\Delta }\mathrm{\Theta }`$ is about 7 km s<sup>-1</sup>; for the sdB/O stars, which are on average closer, $`\mathrm{\Delta }\mathrm{\Theta }`$ is 1 to 2 km s<sup>-1</sup>. We did not estimate errors for $`nze`$, $`ecc`$, $`R_\mathrm{a}`$ and $`R_\mathrm{p}`$, because they have not been used individually in the interpretation. Moreover, the larger values of $`nze`$ are very sensitive to small variations in the shape of the orbit. This especially applies to stars having chaotic orbits. Variations in the input distance modulus showed that the resulting variations in all of these quantities except $`nze`$ are relatively small in most cases.
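The Monte Carlo error propagation described above can be sketched generically. The toy Θ function and the input errors below are invented for illustration; the real calculation uses the full orbit model and the correlated astrometric errors.

```python
import random

def monte_carlo_error(func, params, sigmas, n=2000, seed=42):
    """Propagate observational errors into a derived quantity (e.g. the
    orbital velocity Theta) by redrawing each input from a Gaussian and
    recomputing, in the spirit of the method described in the text
    (after Odenkirchen 1991).  Returns the sample standard deviation."""
    rng = random.Random(seed)
    draws = [func([rng.gauss(p, s) for p, s in zip(params, sigmas)])
             for _ in range(n)]
    mean = sum(draws) / n
    return (sum((d - mean) ** 2 for d in draws) / (n - 1)) ** 0.5

# Toy check: for a linear combination of two independent inputs the
# recovered dispersion should approach sqrt(5^2 + 5^2) ~ 7.1 km/s.
sigma_theta = monte_carlo_error(lambda p: p[0] + p[1],
                                params=(200.0, 20.0), sigmas=(5.0, 5.0))
```

For a nonlinear mapping such as velocity components to orbital velocity, and for correlated inputs, this resampling approach remains valid where simple Gaussian error propagation does not, which is the point made in the text.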
For a discussion of overall effects on a sample see de Boer et al. (1997a).

### 3.2 Morphology of the orbits

The orbits of the HBA/B stars show a large variety of shapes. Nearly all of the cooler HBA stars have a small perigalactic distance ($`R_\mathrm{p}\lesssim 3`$ kpc), and the most extreme case, HD 86986, reaches a perigalactic distance of only 0.4 kpc. The single exception is HD 117880, which has an $`R_\mathrm{p}`$ of nearly 4 kpc. Four stars have truly chaotic orbits; the rest have box-type orbits, but some of these show signs of chaotic behaviour as well. HD 79813 has an orbit staying very close to the disk, while HD 117880 orbits nearly perpendicular to the galactic plane. On the whole, about half of our stars have orbits which are chaotic or show signs of chaotic behaviour. This agrees quite well with the results of Schuster & Allen (1997), who analysed a sample of local halo subdwarfs. Most of the stars have apogalactic distances of $`\sim `$ 8 to 11 kpc; just one star (HD 86986) goes well beyond. The reason for this clumping in $`R_\mathrm{a}`$ is not physical but due to selection effects. Stars with $`R_\mathrm{a}\lesssim 7.5`$ kpc never venture into the observable zone (at least observable by Hipparcos). On the other hand, the probability of finding the stars is greatest when they are near their major turning point, $`R_\mathrm{a}`$. So it is clear that the mean $`R_\mathrm{a}`$, as well as, to a lesser extent, the eccentricity, is affected by selection effects. Stars belonging to the thin disk would have orbits with very small eccentricities and $`nze`$ values (solar values: $`ecc`$ = 0.09, $`nze`$ = 0.001, see de Boer et al. 1997a), while thick disk stars would have larger values on average. Halo stars generally have orbits with large eccentricities, while their $`nze`$ values show a large range. The eccentricities of the HBA star orbits are very large, ranging from 0.5 to nearly 1.0; the values for $`nze`$ vary enormously, from 0.04 (HD 78913) to 5 (HD 117880).
The stars BD +36 2242 and Feige 86 are exceptions; their values for both parameters are more appropriate for disk objects. We note that these two stars are the hottest of the HBA/B sample. The kinematics of the four HBB stars from Schmidt (1996) show overall behaviour similar to that of BD +36 2242 and Feige 86 (Fig. 2). All of these are hotter than 11000 K, the $`T_{\mathrm{eff}}`$ of BD +36 2242. The star HD 117880 features an orbit somewhat dissimilar from the others: while its $`nze`$ is very high, its eccentricity is by far the lowest of the sample of HBA stars.
### 3.3 Velocity components and dispersions
The HBA stars ($`T_{\mathrm{eff}}\lesssim `$ 10,000 K) have a mean orbital velocity of $`\mathrm{\Theta }`$ = 17 km s<sup>-1</sup>, lagging about 200 km s<sup>-1</sup> behind the local standard of rest. However, the velocity dispersions are large: 102, 53 and 95 km s<sup>-1</sup> in $`U`$, $`V`$, $`W`$ respectively. This shows that there are many stars with non-disk-like kinematical behaviour in the sample of HBA/B stars. They therefore belong to the galactic halo population rather than to the disk. The orbital velocities of the HBA stars in the sample do not have a Gaussian distribution, as one might have expected. Instead, they seem to have a somewhat flatter distribution (see Fig. 4). About 75% of our stars have prograde velocities; four stars have retrograde orbits. However, the exact distribution cannot be studied reliably due to the limited number of stars at our disposal. Both the analysis of the kinematic properties and the shapes of the orbits imply that the HBA/B stars are mostly members of the galactic halo population. However, there seems to be a difference in kinematics, and hence population membership, between the cooler and the hotter stars. Stars cooler than about 10,000 K have low orbital velocities and a large spread in $`nze`$. In contrast, the hotter stars have kinematics and orbits consistent with those of disk objects.
The HBB stars of Schmidt (1996), which are all hotter than 10,000 K, behave like sdB stars, too.
### 3.4 Kinematics of sdB/O stars
The sample of sdB/O stars shows classical disk behaviour: their mean orbital velocity is $`\mathrm{\Theta }=219`$ km s<sup>-1</sup>, meaning a negligible asymmetric drift. The $`V`$ velocity dispersion (which is also the dispersion in $`\mathrm{\Theta }`$, because the stars are in the solar vicinity) is relatively small, similar to that of old thin disk orbits, while the dispersion in $`U`$ is much larger, fitting thick disk values. The dispersion $`\sigma _W`$ lies somewhere in between. These values are quite similar to those of the sdB star sample of de Boer et al. (1997a). Until now no population of field sdB stars with halo kinematics has been found. Yet hot subdwarfs on the horizontal branches of halo globular clusters are, of course, well known (see e.g. Moehler et al. 1997).
### 3.5 Trend of kinematics along the HB?
Given the results above, there seems to be a trend in the kinematics of star types along the blue part of the horizontal branch (see Fig. 3). The sdB/O stars have disklike orbits. The same probably applies to the HBB stars hotter than about 10,000 K, though the statistics are rather poor for this part of the HB. In contrast stand the cooler HBA stars, which have much smaller orbital velocities, large orbital eccentricities and large ranges of $`nze`$, thus showing a behaviour fitting halo rather than disk objects. This result suggests analysing the kinematics of the adjoining cooler stars of the HB, the RR Lyraes.
## 4 RR Lyrae stars
### 4.1 A sample of RR Lyrae stars from the literature
Recently, Martin & Morrison (1998) carried out an investigation of the kinematics of RR Lyrae stars which is mainly based on the study of Layden (1994). For our analysis we use only those stars having Hipparcos data. Six Hipparcos stars were excluded because they have a proper motion error larger than 5 mas/yr.
The RR Lyrae stars present an observational difficulty in that they are variables, with both $`V`$ and $`BV`$ changing continuously. For most of the sample we were able to take the mean magnitudes from Layden (1994). For the remaining stars we derived the intensity-mean magnitudes with the help of the formula given by Fitch et al. (1966) and revised by Barnes & Hawley (1986), which is the same method as used by Layden (1994), using the photometric data of Bookmeyer et al. (1977). The Layden photometry was dereddened using the Burstein & Heiles (1982) reddening maps. For later steps in this study it is necessary to know the mean $`BV`$ of the RR Lyrae stars. As the colour curves of the stars are quite similar to the brightness curves, with a star being bluest when it is near maximum brightness, we used the same formula as for the mean magnitude. This is not entirely correct but gives a $`BV`$ close to the actual one. For six stars we did not have the appropriate light curve data, so we could not determine their mean $`BV`$. Therefore only 26 RR Lyraes are shown in Fig. 3. As the RR Lyrae stars are in most cases fainter, and therefore farther away, than our HBA/B stars, they have a rather large $`\mathrm{\Delta }_\pi /\pi `$. For this reason we used the absolute magnitude derived in Sect. 2.3 to calculate the distances for these stars. We have thus ignored the effects of metallicity on $`M_V`$ for individual stars. Possible evolutionary effects on $`M_V`$ (see Clement & Shelton 1999) have also been ignored, an aspect Groenewegen & Salaris (1999) did not consider in their determination of the RR Lyrae $`M_V`$ either. Since we study the orbits of the RR Lyrae stars as a sample, these limitations will not affect our conclusions. For most RR Lyrae stars we took the radial velocities from the sources mentioned in Sect. 2.5, supplemented by radial velocities from Layden (1994). The metallicities of the RR Lyrae stars were taken from Layden (1994) as far as possible.
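The two photometric steps above, averaging a variable star in intensity rather than in magnitude and converting apparent magnitude to distance, can be sketched as follows. This is the generic recipe, not the specific Fitch et al. (1966) formula, and the adopted absolute magnitude is a placeholder, not the Sect. 2.3 value.

```python
import math

def intensity_mean_mag(mags):
    """Mean magnitude averaged in intensity, not in magnitude:
    <m> = -2.5 log10( mean(10^(-0.4 m_i)) )."""
    fluxes = [10.0 ** (-0.4 * m) for m in mags]
    return -2.5 * math.log10(sum(fluxes) / len(fluxes))

def photometric_distance_pc(m_app, m_abs):
    """Distance in pc from the distance modulus m - M."""
    return 10.0 ** (0.2 * (m_app - m_abs) + 1.0)

# Hypothetical light curve of an RR Lyrae-like variable (V magnitudes)
lc = [10.0, 10.3, 10.6, 10.9, 10.6, 10.3]
m_mean = intensity_mean_mag(lc)              # brighter than the plain mean
d = photometric_distance_pc(m_mean, 0.7)     # 0.7 mag: placeholder M_V
```

Because bright phases dominate the flux average, the intensity mean is always brighter than the straight magnitude mean, which is why the distinction matters for distance determinations.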
A few values come from Layden et al. (1996) and Preston (1959).
### 4.2 RR-Lyrae kinematics
We calculated the orbits for the RR Lyrae stars in the same manner as for the HBA/B and sdB stars. The RR-Lyrae stars show a spread in kinematical behaviour wider than that of the HBA/B stars. Many stars have orbits similar to those of the HBA stars; others show disklike orbits with orbital velocities in the vicinity of 200 km s<sup>-1</sup>. Of the halo RR-Lyrae stars many have perigalactic distances smaller than 1 kpc, as we also found for the HBA stars. The RR-Lyrae stars have orbital velocities typically spanning the entire range found for disk and halo stars (see Fig. 3). Three members of our sample of RR Lyr stars have orbits shaped somewhat differently from the rest of the halo orbits, looking similar to that of HD 117880. In Fig. 3 we have sorted the RR Lyr stars according to their metallicity using different plot symbols. The stars with \[Fe/H\]$`>-0.9`$ dex have high $`\mathrm{\Theta }`$ like disk stars. The stars with lower metallicities are more evenly distributed in $`\mathrm{\Theta }`$. There are several stars with disk-like kinematics and very low metallicity, in some cases as low as \[Fe/H\]$`<-2.0`$ dex (see Table 4).
## 5 Selection effects
The study of the spatial distribution of HB stars involves, unfortunately, several selection effects. The general aspects have been reviewed by Majewski (1993) and will not be repeated in detail here. Yet, for each stellar type discussed in this paper a few comments are in order. HBA stars have in most cases been identified from photometry, notably because of a larger than normal Balmer jump. This larger jump is mostly due to a lower metallicity of the stellar atmosphere. If the atmospheric metallicity is identical to the original one, then the criterion favours intrinsically metal poor stars, which are presumably the older ones.
However, stars that started with a little more mass than the Sun, and thus with solar composition, will also become HB stars; when as old as the Sun they are by now solar-metallicity HB stars. If they were of HBA type, they would not have been recognized in photometry of the Balmer jump. Such stars would be underrepresented in our sample. The HBA stars considered here come from all galactic latitudes, so that selection effects due to galactic latitude are not to be expected. However, stars whose orbits take them far from the disk are always underrepresented, as the fraction of time they spend near the disk (and hence are observable) is much smaller than for those which do not go far from the disk. RR Lyraes, being variables, are not prone to such selection effects. Most of them are identified solely by their variability. Metallicity or high velocity are generally not used as criteria for the identification of RR Lyrae stars. For a discussion of selection effects due to galactic latitude we refer to Martin & Morrison (1998), as our sample is a subsample of theirs. The sdB/O stars were identified in surveys for quasars, e.g. the PG catalogue (Green et al. 1986) or the Hamburg Quasar Survey (Hagen et al. 1995). This means their blue colour is the criterion, rather than proper motion, radial velocity or metallicity. Therefore we do not expect a selection bias towards metal poor halo stars. Moreover, de Boer et al. (1997a) showed that the sdB/O stars observed now near the Sun come from widely differing locations in the Milky Way. As these catalogues only map objects which are somewhat away from the galactic plane, they miss the majority of stars with solar-type orbits. sdO stars may be confused with pAGB stars descending the HRD towards the white dwarf regime. The HBB stars of Schmidt (1996) are also taken from the PG catalogue, so that there should not be noticeable selection effects, either.
However, HBB stars and main sequence stars have similar physical properties, such as log $`g`$, so that there may be confusion with the latter. Apart from this, the selection effects mentioned for the sdB/O stars apply to the HBB stars, too. Finally, some words concerning the distribution of distances of the different samples are in order. Generally, if one deals with stars having different absolute magnitudes, as in our case where the sdBs are several magnitudes fainter than the HBAs, one gets samples with different mean distances. The intrinsically fainter stars are on average much nearer than the brighter stars, if the two groups have similar apparent magnitudes. This means that the spatial regions sampled differ depending on the absolute magnitude of the stars. This would imply that the sdB sample is biased towards disk stars, as we do not sample them far enough from the galactic plane, where there may be a higher concentration of halo stars than further in. This is, however, not the case. We include some of the results of de Boer et al. (1997a), which come from a completely different source, namely mostly from the PG catalogue (Green et al. 1986), dealing with significantly fainter stars; these PG stars actually have on average larger distances than any of our HBA stars. For this reason we do not expect that the difference in kinematics arises from the distribution of the distances in the samples.
## 6 Discussion: trends and population membership
### 6.1 Overall trends
As shown in Figs. 3 and 4, the kinematics of the stars of horizontal branch type does indeed appear to show a trend along the HB. The sdB stars have in general rather disk-like orbits and kinematical properties. The ones analysed here (Table 4) show the same behaviour as those from the large sample of sdB stars investigated previously (de Boer et al. 1997a).
The HB stars, the prime goal of our investigation, span a wide range in orbit parameters, but when this group is split into HBB and HBA stars a dichotomy is present. The (hotter) HBB stars behave rather like the sdBs, with orbits of disk-like characteristics. However, such stars are difficult to recognize and our sample is small. A much larger sample may show a larger variation in kinematics. The HBA stars have mostly halo orbits (mean $`\mathrm{\Theta }\simeq 17`$ km s<sup>-1</sup>). This is very similar to the value at which most other studies concerning metal poor stars in the solar neighbourhood arrive (see Table 2 of Kinman 1995). However, the known sample may be observationally skewed toward stars with low atmospheric metallicity (large Balmer jump). The RR-Lyrae stars have orbits spanning a large range in orbital parameters, too. However, a trend with metallicity seems to be present. The metal poor stars have halo orbits similar to those of the HBA stars, with rather low orbital velocities of less than 100 km s<sup>-1</sup>, and large $`ecc`$ and $`nze`$. The metal rich stars, on the other hand, have rather disk-like kinematical characteristics. A similar distribution of metallicities and orbital velocities was also found in the studies of Chen (1999) and Martin & Morrison (1998). Although there are a few RR Lyraes having high orbital velocities ($`\mathrm{\Theta }\gtrsim 160`$ km s<sup>-1</sup>) and clearly disk-like orbits (some of which are very metal poor), HBA stars with such characteristics are not found in our sample. On the other hand, no RR Lyraes with \[Fe/H\]$`>-0.9`$ dex and halo-like orbits or kinematics are present. This means that a high metallicity for an RR Lyr star is a good indicator that it is a disk star. However, a low metallicity does not mean that a star necessarily belongs to the halo. For an overview of literature data on values of $`\mathrm{\Theta }`$ (or asymmetric drift) for various star groups we refer to Fig. 3 in the review of Gilmore et al. (1989).
### 6.2 Discussion
Since the sdB stars (and possibly the HBB stars) have disk-like orbits, these stars must be part of a relatively younger, more metal rich group among the HB stars. Majewski (1993) uses the expression ‘intermediate Population II’; other authors use the words ‘thick’ or ‘extended disk’. In addition to the disk nature of their orbits, the vertical distribution is consistent with a scale height of the order of 1 kpc (Villeneuve et al. 1995, de Boer et al. 1997a). Since the amount of metals in their atmospheres may have been altered by diffusion, it is not possible to estimate the true age from the metallicity. The HBA stars truly have halo orbits. This must mean they belong to a very old population. Their atmospheric metal content is indeed low, the determinations showing a large scatter per star and from star to star, ranging between $`-1`$ and $`-2`$ dex. However, metal rich HBA stars, which are known to exist in star clusters (see Peterson & Green 1998), would likely be underrepresented in the sample. If the halo contains mostly old stars, like globular cluster stars, then the resulting halo HB stars should occupy the HB in ranges related with metallicity as in the globular clusters (see Renzini 1983). The very metal poor ones (\[$`M/H`$\]$`\lesssim -2`$ dex) would be HB stars of HBB and HBA nature as well as RR Lyrae, the ones of intermediate metallicity (\[$`M/H`$\]$`\simeq -1.5`$ dex) would be very blue down to sdB-like, and the metal rich ones (\[$`M/H`$\]$`\gtrsim -1`$ dex) would be RHB stars, perhaps including some RR Lyrae. This behaviour may also explain the existence of the two Oosterhoff groups (see van Albada & Baker 1973 or Lee et al. 1990) of RR Lyrae, since only the very metal poor and the relatively metal rich globular clusters contain RR Lyrae. Evolutionary changes of the HB stars may also affect the location on the HB (Sweigart 1987, Clement & Shelton 1999). However, sdB stars with halo kinematics have not been found (de Boer et al. 1997a).
Instead, they have only disk orbits. This must mean that the stars which originally formed in the halo had an initial mass, a metallicity and a red giant mass loss such that RR Lyrae and HBA stars were the end product, and not sdB stars. As for the RR Lyrae stars, they show a wide range in kinematic behaviour, more or less in line with the atmospheric metal content. The actual metallicity did not bias the identification of these stars, since they are selected based on variability. One tends to divide the RR Lyrae sample into metal poor and metal rich RR Lyrae (see Layden 1994). Here we recall that in HB stars the content of heavier elements in the atmosphere may be altered (see Sect. 1). In RR Lyrae stars the continuous upheaval of the pulsation may stimulate mixing, so that their atmospheres probably show the true metallicity. Thus, for RR Lyraes the metallicity may be used as a general population tracer. The observed range of metallicities would mean that there are old as well as younger RR Lyraes. Old RR Lyrae must be very metal poor and should have halo orbits. The majority of the RR Lyrae included in our analysis fit these parameters. There are, however, a substantial number of RR Lyr stars in our sample with disk-like kinematics but low metallicities, in several cases as low as $`-2`$ dex. The origin of this group of stars, dubbed the ‘metal weak thick disk’, is still unknown (see Martin & Morrison 1998 for a discussion). Young (or younger) RR Lyrae should be relatively metal rich and have disk orbits. The investigated sample contains such stars. These objects should have an age, main-sequence mass, metallicity and RGB mass loss such that RR Lyrae emerge, i.e. HB stars with a thicker hydrogen shell. Being relatively metal rich, they are also of slightly different $`M_V`$ than the metal poor and old ones. In fact, they are fainter, and their distances should be based on the appropriate brightness-metallicity relation.
The dependence is, however, weak and amounts to just 0.1 mag per 0.5 dex. We tested how serious ignoring this effect is for the derived orbits by reducing the RR Lyr star distances by 10%. It does not lead to a significant change in the histogram of Fig. 4.
### 6.3 Summary
Our orbit studies allow us to see a trend in the kinematics of the field HB stars along the horizontal branch. This appears to give us access to the structure of the Milky Way and its halo, as well as information about possible formation scenarios. The trends related with age and history could only be found using the kinematics, since it has become clear that the atmospheric metallicity of HB-like stars has no relation to that of the main sequence progenitor. The location of the stars on the HB must be a complicated function of age, main-sequence mass, initial metallicity, and mass loss on the RGB. For the HB-like stars of today, indications of the age can be derived from the present kinematic parameters. Only detailed models of metallicity dependent stellar evolution from the main sequence through the RG phase with mass loss should, in comparison with the observables of horizontal branch stars, eventually be able to retrieve the true origin of the HB stars.
###### Acknowledgements.
We thank Oliver Cordes for supplying the values of log $`g`$ and $`T_{\mathrm{eff}}`$ for two stars. We are very grateful to Michael Odenkirchen, who supplied us with the orbit calculating software. Furthermore we thank Michael Geffert for enlightening discussions, and Wilhelm Seggewiss and Jörg Sanner for carefully and critically reading the manuscript. This research project was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Bo 779/21. For our research we made use, with pleasure, of the SIMBAD database in Strasbourg.
# Timing of the young pulsar J1907+0918
## 1. The search for radio pulsations from SGR 1900+14
In early June 1998 we observed the soft gamma-ray repeater SGR 1900+14 using the Arecibo telescope, seven days after the source became active following a long period of quiescence. The search for radio pulsations at 430 MHz and 1.4 GHz was carried out with the Penn State Pulsar Machine (PSPM), a filterbank which records the total power outputs of the receiver over $`128\times 60`$ kHz frequency channels every 80 $`\mu `$s. Our search did not reveal the 5.16-s period reported for SGR 1900+14 by Kouveliotou et al. (1998). Based on our observations we place an upper limit of approximately 150 $`\mu `$Jy on the flux density of the magnetar at frequencies around 430 MHz. Following the announcement of a low-frequency detection of this pulsar by Shitov (see contribution elsewhere in these proceedings) we observed the magnetar using the 47 MHz dipole feed. Although this system could detect B0950+08 and B0823+26, we were unable to detect the magnetar. Further, more sensitive, low-frequency Arecibo observations would be worthwhile. The 1410-MHz observations did, however, reveal the presence of a very promising 113-ms pulsar candidate with a dispersion measure of 350 cm<sup>-3</sup> pc. Subsequent observations made around the end of September, both at Arecibo and Effelsberg, confirmed the existence of the pulsar (PSR J1907+0918) and identified its true period to be 226 ms (Xilouris et al. 1998, IAUC No. 7023).
## 2. Timing observations of PSR J1907+0918
Follow-up timing results show that PSR J1907+0918 is an interesting radio pulsar in its own right. Regular timing observations using the PSPM were initiated in mid October 1998. A standard tempo analysis of pulse time-of-arrival measurements spanning a 9-month baseline yields the following timing solution: R.A. (J2000) 19 h 07 m 22.4 sec, Dec.
(J2000) +09<sup>o</sup> 18’ 31.8”, $`P=0.226106270831`$ sec, $`\dot{P}=94.286\times 10^{-15}`$. These parameters apply to the reference epoch MJD 51216. Uncertainties for each parameter are one unit of the least significant digit quoted. Current post-fit residuals are 98.6 $`\mu `$s. In spite of present covariances between position and $`\dot{P}`$, it is clear from high-precision period measurements over the 9-month baseline that the quoted $`\dot{P}`$ is correct. The characteristic age is 38 kyr and the implied dipole surface magnetic field is $`4.7\times 10^{12}`$ G.
## 3. Discussion
Apart from globular cluster pulsars, PSR J1907+0918 and SGR 1900+14 are the closest pair of neutron stars in the sky that do not presently constitute a binary. The angular separation between them is $`\sim `$ 2 arcmin. An assumed distance of 5 kpc implies a spatial separation of 3.2 pc, while a distance of 7 kpc implies a separation of 4.5 pc. Either this close proximity is simply a coincidence, or the neutron stars both originated from a disrupted massive binary system. Such scenarios have been invoked to explain the proximity of the Crab pulsar to B0525+21 (Gott, Gunn & Ostriker 1970, ApJ, 160, L91) and of PSR B1853+01 to PSR B1854+00 (Wolszczan, Cordes & Dewey 1991, ApJ, 372, L99). Regardless of the fact that these two neutron stars may have had a common origin, we would like to point out that it is presently not clear to us whether the nearby supernova remnant G42.8+0.6 is associated with SGR 1900+14 (which has an estimated age of 10 kyr; Kouveliotou et al. 1999, ApJ, 510, L115) or PSR J1907+0918 (with a characteristic age of 38 kyr). As mentioned above, Shitov has recently detected 5.16-s pulsations from SGR 1900+14 at 100 MHz and determined a dispersion measure of 281.4 cm<sup>-3</sup> pc. Based on this and the dispersion measure for J1907+0918, both these neutron stars are at a comparable distance from the Earth (5–7 kpc).
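The quoted characteristic age and surface field follow from the standard magnetic-dipole spin-down relations $`\tau =P/(2\dot{P})`$ and $`B\simeq 3.2\times 10^{19}\sqrt{P\dot{P}}`$ G (taking $`\dot{P}=94.286\times 10^{-15}`$, as required for the quoted 38 kyr):

```python
import math

P = 0.226106270831           # spin period in s
P_DOT = 94.286e-15           # period derivative (dimensionless)

tau_s = P / (2.0 * P_DOT)                # characteristic age in seconds
tau_kyr = tau_s / (3.15576e7 * 1e3)      # -> kiloyears (Julian years)
B_gauss = 3.2e19 * math.sqrt(P * P_DOT)  # standard dipole field estimate

print(f"tau = {tau_kyr:.1f} kyr, B = {B_gauss:.2e} G")
# -> tau = 38.0 kyr, B = 4.67e+12 G, matching the values quoted above
```

Note that the characteristic age assumes a braking index of 3 and a birth period much shorter than the present one, so it is only an estimate of the true age.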
If PSR J1907+0918 is associated with G42.8+0.6, then the spatial separation between them is 20 pc (assuming a distance to the remnant of 5 kpc) or 28 pc (assuming a distance to the remnant of 7 kpc). The transverse velocity required for the remnant and the pulsar to be associated is then between 550 and 760 km s<sup>-1</sup>. It should also be noted that, since this region of the Galactic plane has a high density of supernova remnants and pulsars, it is possible that neither PSR J1907+0918 nor SGR 1900+14 has any connection with G42.8+0.6. Future VLBI proper motion measurements of PSR J1907+0918, perhaps using Arecibo-Effelsberg-GBT, would certainly help to clarify this situation.
### Acknowledgments.
We wish to thank A. Wolszczan and D. Backer for providing access to their data-taking equipment and hence making these observations possible. We would also like to thank F. Camilo and I. Stairs for useful discussions concerning the Arecibo timing observations. Arecibo Observatory is run by Cornell University under contract with the National Science Foundation.
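As an order-of-magnitude check on the required transverse velocity, one can simply divide the separation by the characteristic age; the exact 550–760 km s<sup>-1</sup> range quoted above evidently rests on slightly different inputs (e.g. the adopted age or birth site), so this is only a consistency sketch.

```python
PC_KM = 3.0857e13            # km per parsec
YR_S = 3.15576e7             # seconds per Julian year
age_s = 38e3 * YR_S          # characteristic age of PSR J1907+0918

v20 = 20.0 * PC_KM / age_s   # km/s needed to cover 20 pc in 38 kyr
v28 = 28.0 * PC_KM / age_s   # km/s needed to cover 28 pc in 38 kyr
# v20 and v28 come out near 515 and 720 km/s, comparable in magnitude
# to the 550-760 km/s range quoted above
```

Velocities of several hundred km s<sup>-1</sup> are within the observed pulsar kick distribution, so the association cannot be ruled out on kinematic grounds alone.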
# Distribution and Kinematics of the Circum-nuclear Molecular Gas in the Seyfert 1 Galaxy NGC 3227
## 1 INTRODUCTION
The distribution of circum-nuclear molecular gas plays an important role in the proposed unified scheme for Seyfert galaxies. In the standard picture a torus of dense molecular gas and dust surrounds the black hole and its accretion disk (e.g. overviews by Antonucci 1993 and Peterson 1997; see also Pier & Krolik 1992, 1993, Edelson & Malkan 1987). As the relative orientation of the AGN to the plane of its host galaxy is probably random, the appearance of the AGN as a Seyfert 1 or Seyfert 2 nucleus depends only on whether the viewing angle onto the central engine is blocked by the torus or not. Two important questions arise for the distribution and kinematics of the circum-nuclear molecular gas: (1) Does the circum-nuclear molecular gas already participate in the obscuration of the AGN? Recent observations of about 250 nearby active galaxies with the HST by Malkan, Gorjian & Tam (1998) suggest that molecular material at distances of about 100 pc, rather than a nuclear torus, is responsible for the obscuration of the nucleus. Similarly, Cameron et al. (1993) and Schinnerer, Eckart & Tacconi (1999) find indications for a more complex picture in NGC 1068. (2) What is the fueling mechanism for AGNs? Nuclear bars are considered a very effective mechanism for bringing the molecular gas to small radii and fueling the central engine (e.g. Shlosman, Frank & Begelman 1989). However, Regan & Mulchaey (1999) searched, with little success, for signatures of strong nuclear bars in 12 nearby Seyfert galaxies by combining HST NICMOS 1.6 $`\mu `$m images with HST optical images to study the dust morphology (see also Martini & Pogge 1999). The recent improvements in mm-interferometry now allow one to obtain sub-arcsecond resolution observations of the molecular line emission in combination with high spectral resolution and high sensitivity.
This combination is ideal to study the kinematics and distribution of molecular gas in the circum-nuclear region of nearby Seyfert galaxies. NGC 3227 (Arp 94b) is a Seyfert 1 galaxy located at a distance of 17.3 Mpc (group distance; Garcia 1993; see also Tab. 1). Rubin & Ford (1968) studied the system NGC 3226/7 for the first time in the optical. In NGC 3227 they found indications for a nuclear outflow as well as a spiral arm that stretches toward the close elliptical companion. The NLR is extended towards the northeast (Mundell et al. 1992a, Schmitt & Kinney 1996) and the BLR clearly shows variations in the optical continuum and line emission (Salamanca et al. 1994, Winge et al. 1995). NGC 3227 is classified as SAB(s) pec in the RC3 catalog (de Vaucouleurs et al. 1991). Due to its inclination, the end of the bar and the starting points of the spiral arms are difficult to identify. As indicated by optical and NIR observations (Mulchaey, Regan & Kundu 1997, De Robertis et al. 1998), the galaxy probably has a bar of 6.7 kpc to 8.4 kpc radius at a position angle of about -20<sup>o</sup> relative to the major kinematic axis. Mundell et al. (1995b) studied the system NGC 3226/7 in HI line emission. Two gas plumes north and south of the system are seen, with velocities between about 1200 km/s and 1300 km/s. The HI gas of the plumes with the largest spatial separation from NGC 3227 also shows the largest velocity difference to NGC 3227 ($`v_{sys}`$ = 1110$`\pm `$10 km/s; derived in section 4). These plumes can be identified with the tidal tails often observed in interacting galaxies. Here we mainly study the distribution and kinematics of the molecular gas in the central few arcseconds of NGC 3227. The observations are summarized in section 2, followed by the description of the interferometric data (section 3).
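Throughout the paper, angular sizes are converted to physical scales at the adopted distance with the small-angle approximation, which can be sketched as follows:

```python
D_PC = 17.3e6                  # adopted distance to NGC 3227 in pc
RAD_PER_ARCSEC = 1.0 / 206265.0

def arcsec_to_pc(theta_arcsec, d_pc=D_PC):
    """Projected size subtended by a small angle at distance d."""
    return theta_arcsec * RAD_PER_ARCSEC * d_pc

scale = arcsec_to_pc(1.0)      # ~84 pc per arcsecond at 17.3 Mpc
ring = arcsec_to_pc(6.0)       # the 6" structure discussed later -> ~500 pc
```

This reproduces the conversions used in the text, e.g. 1” corresponding to roughly 85 pc and 6” to about 500 pc.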
In section 4 we outline the molecular gas kinematics, including the derivation of the rotation curve and a discussion of the general structural and dynamical properties. A decomposition of the molecular gas in NGC 3227 into components of circular and non-circular motion is given in section 5 (and Appendix A). The mass and thickness of the molecular gas disk are derived in section 6. In section 7 (and Appendices B and C) a 3-dimensional analysis of the kinematics in the inner 1” (85 pc) of NGC 3227 in the framework of a bar and a warp model for the gas motion is presented. The final summary and implications are given in section 8.
## 2 OBSERVATIONS AND DATA REDUCTION
The <sup>12</sup>CO (1-0) and <sup>12</sup>CO (2-1) lines were observed in January/February 1997 using the IRAM PdB interferometer with 5 antennas in its A, B1 and B2 configurations, providing 30 baselines from 40 m to 408 m. The resulting resolution is 1.5” $`\times `$ 0.9” (PA 34<sup>o</sup>) at 2.6 mm and 0.7” $`\times `$ 0.5” (PA 31<sup>o</sup>) at 1.3 mm using uniform weighting. We applied standard data reduction and calibration procedures (15% accuracy at 2.6 and 1.3 mm). After cleaning we reconvolved the data with a CLEAN beam of FWHM 1.2” for <sup>12</sup>CO (1-0) and FWHM 0.6” for <sup>12</sup>CO (2-1). To increase the S/N we smoothed the data spectrally to a resolution of 20 km/s in both lines. In addition, a <sup>12</sup>CO (2-1) data set with a spectral resolution of 7 km/s was made, ranging from -182 km/s to 182 km/s, since only this interval was covered by all three configurations. The HCN (1-0) line was observed between March 1995 and March 1996 using the IRAM PdB interferometer with 4 antennas in its B1, B2, C1 and C2 configurations, providing baselines from 24 m to 288 m. The spatial resolution is 3.3” $`\times `$ 1.7” (PA 42<sup>o</sup>) using natural weighting. The uncertainties of the flux calibration are about 15%.
For the interferometric observations we used a velocity of v<sub>obs</sub> = 1154 km/s, which is 44 km/s larger than the systemic velocity of v<sub>sys</sub> = 1110 km/s derived from the data (section 4).
## 3 RESULTS
The IRAM 30 m data (Schinnerer et al. in prep.) show an elongation of the nuclear line emission at a PA of $`\sim `$130<sup>o</sup>. A comparison to published data reveals that even the 30 m telescope already starts to spatially resolve parts of the central 2 kpc diameter emission. At high angular resolution most of the molecular gas is located in a 3” diameter circum-nuclear ring. In the following subsections we describe the distribution and kinematics of the molecular gas as observed with the PdBI in the <sup>12</sup>CO(1-0), <sup>12</sup>CO(2-1), and HCN(1-0) lines. The detection of molecular line emission at radii as small as 13 pc is used to estimate an upper limit to the enclosed nuclear mass. No millimeter radio continuum emission was detected in our interferometric observations. The corresponding upper limits are 8 mJy at 1.3 mm and 5 mJy at 2.6 mm.
### 3.1 The extended emission
The <sup>12</sup>CO emission mapped with the PdBI is concentrated in an uneven ring-like structure in the inner 6” (500 pc; Fig. 1). The eastern part is about six times brighter than the western part (Fig. 1). In the following we refer to this structure as the ring. In the channel maps, <sup>12</sup>CO (1-0) line emission is seen from -260 km/s to 220 km/s (relative to the central velocity $`v_{obs}`$ = 1154 km/s we used for the observations) and from -240 km/s to 180 km/s for the <sup>12</sup>CO (2-1) line at a spectral resolution of 20 km/s. The difference in velocity range is probably due to the fact that the smaller 0.6” beam of the <sup>12</sup>CO (2-1) line has already resolved out some emission which is still detected with the 1.2” <sup>12</sup>CO (1-0) beam.
A comparison to single dish measurements shows that the PdBI <sup>12</sup>CO (1-0) and <sup>12</sup>CO (2-1) maps contain about 20% and 10% of the total line flux, respectively. Combined with the results from the 30 m maps (Schinnerer et al. in prep.), this means that most of the remaining <sup>12</sup>CO gas is distributed in a fairly smooth gas disk, and the structures seen in the PdBI maps are the main concentrations of the molecular gas. However, the <sup>12</sup>CO line flux observed by the PdBI traces the compact circum-nuclear gas component that constitutes the reservoir out of which both the nucleus and circum-nuclear star formation can be fed. A much weaker additional component (molecular bar) is detected east and west of the center. This component stretches out to a radius of about 7” (590 pc). The east and west parts of the bar have an NS offset of about 3”, connecting the NW-region with the circum-nuclear ring (Fig. 1). In the de-projected map this component resembles a bar which encloses an angle of about 85<sup>o</sup> with the kinematic major axis (Fig. 2). For de-projection the intensity maps were rotated by the position angle so that the kinematic major axis is parallel to the $`x`$-axis. The $`y`$-axis was then corrected via $`y=y^{\prime }/\mathrm{cos}(i)`$, where $`i`$ is the inclination and $`y^{\prime }`$ the observed offset from the major axis. In addition, emission regions to the north-west, south-east and south (NW-region, SE-region, S-region; Fig. 2) lie at de-projected distances of 10” to 20” from the center and have typical sizes of about 1.5” to 2”. The NW- and SE-regions are stronger in the <sup>12</sup>CO (1-0) line than in the <sup>12</sup>CO (2-1) line, whereas the S-region is more prominent in the <sup>12</sup>CO (2-1) line. This might indicate that the molecular line emission in the S-region is partly due to optically thin gas. The NW-region lies at the tip of the molecular bar and is also twice as bright as the SE-region in the <sup>12</sup>CO (1-0) line emission.
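The de-projection step just described amounts to a rotation by the position angle followed by a stretch of the minor-axis coordinate. A minimal sketch is given below; the PA sign convention is one possible choice (rotation angle measured so that the major axis ends up along $`+x`$), and the inclination value is illustrative, not the NGC 3227 value.

```python
import math

def deproject(x_sky, y_sky, pa_deg, incl_deg):
    """De-project sky-plane offsets onto the galaxy plane.
    Rotate so the kinematic major axis lies along x, then stretch
    the apparent minor-axis offset by 1/cos(i)."""
    pa = math.radians(pa_deg)
    x = x_sky * math.cos(pa) + y_sky * math.sin(pa)    # along major axis
    yp = -x_sky * math.sin(pa) + y_sky * math.cos(pa)  # along minor axis
    y = yp / math.cos(math.radians(incl_deg))          # inclination correction
    return x, y

# A point 1" along the apparent minor axis of a disk inclined by 60 deg
# lies 2" from the center in the galaxy plane (PA = 0 for simplicity).
x, y = deproject(0.0, 1.0, 0.0, 60.0)
```

The stretch only acts on the minor-axis coordinate, which is why apparent structures near the minor axis (such as the offset bar ends) change their shape most strongly under de-projection.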
### 3.2 The nuclear emission

Over a velocity range from -140 km/s to 35 km/s we detect emission with an extent of $`\sim `$0.6” at the dynamical center. In all pv-diagrams the structure of this component appears to be S-shaped and symmetric both with respect to the central position and with respect to a velocity about 19 km/s below the systemic velocity $`v_{sys;HI}`$ = 1135 km/s derived from HI observations by Mundell et al. 1995b (44 km/s below the central velocity of $`v_{obs}`$ = 1154 km/s we assumed for the observations).

The position of the dynamical center: To derive the exact position of the dynamical center we fitted a Gaussian to the nuclear component in the channel map at -63 km/s. We chose that map since its velocity is relatively close to the true systemic velocity and the nuclear component is clearly separated from the emission in the ring. The nucleus is positioned (0.28$`\pm `$0.02)” west and (0.84$`\pm `$0.03)” north of the interferometer phase center of RA 10<sup>h</sup>23<sup>m</sup>30.590<sup>s</sup> and DEC 19<sup>o</sup>51’54.00” (J2000.0). We note that the position of the phase center may itself have an error of up to 0.2”. Within this uncertainty the positions of the northern and southern radio components (Mundell et al. 1995b) are both included in our error budget.

### 3.3 The HCN (1-0) Data

The HCN (1-0) line emission in NGC 3227 was observed with the IRAM 30 m (Schinnerer et al. in prep.) as well as the PdBI (Fig. 3). The comparison of both measurements shows that the PdBI map contains the total flux of the 30 m observation. The HCN (1-0) line emission is concentrated in a barely resolved nuclear source. A Gaussian fit yields a source size of (2.67$`\pm `$0.24)”, very similar to the beam FWHM of 2.4”. The HCN source structure is therefore much more compact than the <sup>12</sup>CO ring structure.
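Since the fitted size is so close to the beam, one can gauge how marginally resolved the source is by subtracting the beam in quadrature; this back-of-the-envelope check is ours, not the paper's:

```python
import math

# Quadrature estimate of the intrinsic Gaussian FWHM (illustrative only):
# FWHM_intrinsic^2 = FWHM_fitted^2 - FWHM_beam^2
fitted = 2.67  # arcsec, Gaussian fit to the HCN(1-0) source
beam = 2.4     # arcsec, clean-beam FWHM
intrinsic = math.sqrt(fitted**2 - beam**2)  # ~1.2 arcsec
```

Given the $`\pm `$0.24” uncertainty of the fit, the intrinsic size is poorly constrained: the source is only marginally resolved, consistent with the compactness argued in the text.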
The strongest HCN(1-0) emission (about twice as strong as in the neighboring channel maps) is found at -40 km/s (close to the systemic velocity), indicating a strong nuclear component. The position of the peak in this map is, within its errors, identical to the position of the dynamical center. The shape of the pv-diagram (Fig. 5) along the kinematic major axis is different from that of the strong and extended <sup>12</sup>CO (1-0) line emission (Fig. 5). The shape of the HCN(1-0) pv-diagram is similar to the inner part of the <sup>12</sup>CO (2-1) pv-diagram at 7 km/s spectral resolution (Fig. 6) convolved to a lower resolution. This indicates that most of the HCN(1-0) emission comes from a region of $`\sim `$ 0.6” diameter.

## 4 MOLECULAR GAS KINEMATICS

In this section we describe the properties of the molecular gas kinematics in the circum-nuclear region of NGC 3227. We outline how we derived a rotation curve and where the dynamical resonances associated with the possible kpc-scale bar in NGC 3227 are expected.

### 4.1 General kinematic properties of NGC 3227

In their HI study of the system NGC 3226/7, Mundell et al. (1995b) detected an HI cloud or dwarf galaxy about 60” (5 kpc) west of the nucleus of NGC 3227. The observed features (tidal tails, enhanced star formation at the position of the western spiral arm) indicate that the interaction with NGC 3226, and probably also with the HI cloud, is ongoing. On the other hand the HI disk is relatively undisturbed, since its velocity field is in good agreement with an inclined rotating disk (Mundell et al. 1995b). González Delgado & Perez (1998) observed HII regions in NGC 3227 via their H$`\alpha `$ emission. The HII regions show an offset to the NW and SE relative to the major axis of the galaxy, in agreement with the position of a bar. Theoretical calculations (e.g. Athanassoula 1992a) predict that gas at the leading side of the bar is shocked and compressed, favoring the formation of stars there.
This indicates that NGC 3227 is rotating clockwise. Therefore the spiral arms are trailing and the southwestern side is closer to the observer. The locations of prominent dust lanes seen in the optical HST images of Malkan et al. (1998) are consistent with this geometry.

### 4.2 The nuclear molecular gas

The peculiar nuclear kinematics: In the <sup>12</sup>CO (2-1) data set with a spectral resolution of 7 km/s the contrast of the more compact structures was enhanced over that of the more extended components to allow for a detailed study of the inner 1”. The pv-diagrams presented in Fig. 6 now have a nominal resolution of 0.3”. Along all position angles the emission in the inner 1” does not drop linearly to zero at the center. Along the kinematic major axis an apparent counter rotation is observed between 0.2” $`\lesssim r\lesssim `$ 0.5” (see Fig. 5). At even smaller radii the velocity flips back again to the rotation sense of the outer structure at $`r>`$ 0.5”. This behavior forms an S-shape in the inner 1” of the pv-diagrams. These changes in the rotation sense are present in all pv-diagrams.

The enclosed nuclear mass: If the velocity of the nuclear emission inside $`r\sim `$ 0.2” is due to Keplerian motion of molecular gas, these data can be used to estimate the enclosed mass. Assuming an inclined disk, the two emission regions have their largest angular separation of 0.28” at a position angle of PA $`\sim `$ 110<sup>o</sup>. This is not coincident with the position angles of other components like the radio jet ($`\sim `$ -10<sup>o</sup>; Mundell et al. 1995b), the \[O III\] ionization cone ($`\sim `$ 15<sup>o</sup>; Schmitt & Kinney 1996) or the H$`\alpha `$ outflow ($`\sim `$ 50<sup>o</sup>; Arribas & Mediavilla 1994). Therefore a cause for this high velocity other than motion in the nuclear potential appears unlikely. The extent of 0.28” at $`PA\sim `$ 110<sup>o</sup> translates into a radial distance of about 12 pc.
Together with a velocity difference of $`\mathrm{\Delta }v(12pc)\approx 75km/s`$ (not corrected for inclination) this gives a lower limit for the enclosed mass of about 1.5 $`\times `$ 10<sup>7</sup> M<sub>⊙</sub>. This limit is in approximate agreement with the enclosed mass derived for the central black hole in NGC 3227 using H$`\beta `$ reverberation mapping with a BLR size scale of $`\sim `$ 17 light days and a FWHM of the H$`\beta `$ line of 3900 km/s. With this technique Ho (1998) finds an enclosed mass of 3.8 $`\times `$ 10<sup>7</sup> M<sub>⊙</sub>, whereas Salamanca et al. (1994) and Winge et al. (1995) estimate a black hole mass of $`\sim `$ 10<sup>8</sup> M<sub>⊙</sub>.

### 4.3 The Rotation Curve

We derived rotation curves from the <sup>12</sup>CO (1-0) and <sup>12</sup>CO (2-1) data (using the routine ’ROTCUR’ from GIPSY). This routine does not correct for the effect of beam smearing. However, as the peaks in the <sup>12</sup>CO velocity fields occur at distances of about 3.5” and the smallest beam has a FWHM of 0.6”, beam smearing should not affect the rotation curve at radii between 1” and 3”. Using our <sup>12</sup>CO measured values for the position of the dynamical center and a systemic velocity of v<sub>sys</sub>=1110$`\pm `$10 km/s, we obtain $`i`$ = (56$`\pm `$3)<sup>o</sup> and $`PA`$ = (160$`\pm `$2)<sup>o</sup>, in excellent agreement with the values Mundell et al. (1995b) derived from HI data. To obtain a rotation curve ranging from the center to the outer HI disk ($`\sim `$ 100”) our <sup>12</sup>CO rotation curve was combined with the HI data (Mundell et al. 1995b) from the literature (Fig. 7). To analyze our data we use two rotation curves: (1) For the central 0.5” we assumed Keplerian rotation, in agreement with the enclosed mass estimates in the inner 25 pc, until the velocities of the Keplerian curve dropped well below the <sup>12</sup>CO velocities derived from the observations.
(2) For the decomposition of the motion with the program 3DMod (see section 5) we extrapolated the rotation curve from its value at r=0.5” to 0 km/s at r=0.0”. Comparing this extrapolation of the rotation curve towards the center to the observed data allows us to find non-circular velocities associated with the nuclear region. This curve is referred to as our model rotation curve. Despite this difference the two rotation curves are basically identical for radii $`>`$ 0.7”. Our <sup>12</sup>CO data allow us to obtain a rotation curve for the range 0.5” $`<r\le `$ 5”. A rise of the rotation velocity out to about 3” to 4” is observed. No reliable rotation curve measurements are available between $`r=5^{\prime \prime }`$ (end of the <sup>12</sup>CO rotation curve) and $`r=22^{\prime \prime }`$ (innermost point of the HI rotation curve not affected by beam smearing). To connect the <sup>12</sup>CO curve with the HI curve we used a Keplerian velocity fall-off starting at $`r=5^{\prime \prime }`$. For radii r $`>`$ 22” the HI rotation curve was fully adopted. In order to test the deduced rotation curve as well as the values for the inclination and position angle we looked for differences between the observed <sup>12</sup>CO velocity field and a velocity field derived from the model rotation curve. In general, residuals (Fig. 8) in such a difference field show characteristic patterns for mismatched parameters (see van der Kruit & Allen 1978). In our case they are less than about $`\pm `$20 km/s and indicate that most of the line emission can be described by the derived rotation curve, consistent with the assumption of a simple rotating disk. Exceptions are the molecular bar and a region about 1” south of the center, which show residuals of $`\sim `$ 35 km/s.

### 4.4 Dynamical Resonances

It is possible to estimate the positions of the dynamical resonances from the rotation curve in conjunction with the bar length (Fig. 9).
These resonances can be compared to the distribution of the molecular gas and the HII regions.

#### 4.4.1 Theoretical positions of resonances

NGC 3227 probably has a bar, as indicated by optical and NIR observations (Mulchaey, Regan & Kundu 1997, De Robertis et al. 1998), with a radius of 6.7 kpc - 8.4 kpc and a position angle of about -20<sup>o</sup> relative to the major kinematic axis. The presence of a bar is also supported by the locations of the HII regions (González Delgado & Perez 1998), which are consistent with a bar for radii $`\le `$ 50” (the size of their field of view). Under the assumption that the end of the bar is close to corotation (CR), this gives an angular pattern speed of $`\mathrm{\Omega }_p`$ = (32$`\pm `$5) km/s/kpc, since the HI rotation curve (Mundell et al. 1995b) is relatively constant at these radii. The bar would have an ILR at $`r\sim `$ 20” (1.7 kpc). At these distances one finds the SE-region of the <sup>12</sup>CO line emission with two associated HII regions (Nr. 14 and 15, González Delgado & Perez 1998), as well as three more HII regions to the northeast (Nr. 18 - 20). These <sup>12</sup>CO emission line regions and the associated HII regions are consistent with the presence of an ILR. However, the gas distribution and kinematics in the inner 40” are not easily described by structures at the positions of resonances due to the large-scale bar. Possible reasons for this are that the gas flow is disturbed by the interaction with NGC 3226, the HI cloud, or the HII regions. Another possibility is that the gas dynamics are just beginning to be influenced by the bar or that a stable semi-equilibrium has not yet been reached. The analysis is hampered by the poor knowledge of the rotation curve for radii 5” $`<r<`$ 22”.

## 5 DECOMPOSITION OF GAS MOTIONS IN NGC 3227

We succeeded in decomposing the molecular gas motions in NGC 3227 into their circular and non-circular components using 3DMod with a model rotation curve (see section 4).
The results of the decomposition are essential since they highlight complex non-circular features in the velocity field that will be modeled in section 7. 3DMod uses as input the intensity map, the rotation curve and a velocity dispersion distribution to calculate 3-dimensional spatial cubes of these properties. These spatial 3-dimensional cubes are rotated according to the inclination and position angle. Afterwards they are merged into a 3-dimensional $`xyv`$ cube which can then be compared directly to the measured data cube. A detailed description of the decomposition algorithm used in 3DMod is given in Appendix A.

### 5.1 The Decomposition of the interferometric <sup>12</sup>CO data

The best decompositions were achieved using a systemic velocity of $`v_{sys}`$ = 1110 km/s, in agreement with the derived rotation curve and visual inspection of the nuclear pv-diagrams. For the velocity dispersion a value of 30 km/s combined with a thin disk (0.2” = 1 resolution element) gave the best fit to the data. The decomposition shows that the observed higher velocity dispersions are due to a spatial superposition of circular and non-circular components. The results are summarized in Table 2. About 80 % of the total line emission in the inner 8” $`\times `$ 8” of NGC 3227 is in excellent agreement with line emission from gas in circular motion. However, the decomposition of the <sup>12</sup>CO (2-1) data reveals 3 distinct components in non-circular motion: the nuclear region, the molecular bar, and a $`\sim `$0.6” knot 1” south of the center. The nuclear region accounts for 3% of the total flux in the inner 8” $`\times `$ 8” and allows the following possible interpretations: (1) The velocity field of this component is in agreement with a counter-rotating disk seen at a position angle of (41$`\pm `$3)<sup>o</sup> with an inclination of $`\sim `$ 32<sup>o</sup> and an axial ratio of $`\sim `$ 0.85. At this inclination the nuclear disk is oriented orthogonal to the galaxy plane, as $`32^o+56^o\approx 90^o`$.
This disk would also be responsible for most of the HCN(1-0) line emission (as in the case of the NGC 1068 data; Tacconi et al. 1997). (2) A different possible cause for the complex nuclear velocity field could be radial motions. However, such motions never change the direction of rotation on the kinematic major axis and can therefore be ruled out. (3) Further possibilities are motions in a bar potential or a warping of the molecular gas disk. We explore these issues further in section 7. The molecular bar is about 4” longer in the western direction than along its eastern extension and accounts for $`\sim `$17% of the CO line flux in the inner 25” $`\times `$ 25”. The NS offset between the two sides of the bar might reflect its width. Its velocity is blue-shifted (on both sides of the nucleus) relative to the circular velocity by about 50 km/s.

## 6 MASS AND THICKNESS OF THE GAS DISK

A knowledge of the molecular gas mass and the dynamical mass of the nuclear region is required in order to estimate the thickness of the gas disk as well as the torques acting on it in the case of a possible warping (see section 7 and Appendix C). A comparison of the molecular gas mass to the dynamical mass of the nuclear region also shows that the molecular gas is a probe of the nuclear gravitational potential in NGC 3227 rather than a dominant component of it. The molecular gas masses of the various components in the interferometric maps are given in Table 2. We used the $`\frac{N_{H_2}}{I_{CO}}`$ conversion factor of 2 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup> (K km/s)<sup>-1</sup> from Strong et al. (1989) (see e.g. Schinnerer, Eckart & Tacconi 1998 for discussion and references). In addition we estimated the dynamical mass by using the inclination-corrected circular velocity at a given radius via $`M_{dyn}[M_{\odot }]=232\times (v_{rot}(r)[km/s])^2\times r[pc]`$. The molecular gas contributes about 6 % to the dynamical mass in the inner 50 pc.
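The dynamical-mass relation above is simply M = v<sup>2</sup>r/G with G expressed in pc (km/s)<sup>2</sup> M<sub>⊙</sub><sup>-1</sup>. As a consistency check (ours, not the paper's), applying it to the nuclear values of section 4.2 ($`\mathrm{\Delta }v\approx `$ 75 km/s at r $`\approx `$ 12 pc) recovers the quoted $`\sim `$1.5 $`\times `$ 10<sup>7</sup> M<sub>⊙</sub>:

```python
# Sketch: dynamical mass M_dyn [M_sun] = 232 * v^2 [km/s]^2 * r [pc].
# The coefficient 232 is just 1/G for G = 4.301e-3 pc (km/s)^2 / M_sun.
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def m_dyn(v_kms, r_pc):
    """Enclosed dynamical mass in solar masses for circular speed v at radius r."""
    return v_kms**2 * r_pc / G

# Nuclear values from section 4.2 (velocity not inclination corrected,
# so this is a lower limit): ~1.5e7 M_sun
m_nuc = m_dyn(75.0, 12.0)
```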
We find an average velocity dispersion of $`\sigma _{obs}\approx 30km/s`$ in the areas of the dispersion map (2<sup>nd</sup>-order moment) that show circular motions. The velocity gradient of the rotation curve in the region from 0.35” to 2.0” is about 68 km/s arcsec<sup>-1</sup>. This translates into an observed velocity spread with a $`FWHM_{rot}`$ of $`\sim `$ 41 km/s for a beam of 0.6”. The resulting velocity dispersion is $`\sigma _{rot}=\frac{FWHM_{rot}}{2\sqrt{\mathrm{ln}(2)}}\approx 25km/s`$. Using quadratic deconvolution we therefore find an intrinsic velocity dispersion of $`\sigma _{real}\approx 17km/s`$ in the inner 6” of NGC 3227. This implies that the symmetrical structures in the pv-diagrams that result in an increased apparent velocity dispersion have to be explained via a complex (ordered) velocity field rather than by an increased turbulence of the central molecular gas. Following the equations used in Schinnerer et al. (1999; see also Quillen et al. 1992, Combes & Becquaert 1997, Downes & Solomon 1998) we derive for the inner 550 pc a molecular gas disk height of about 15 pc.

## 7 RESULTS OF THE KINEMATIC MODELING

To analyze the complex kinematics in the inner 50 pc of NGC 3227, which show clear deviations from pure circular motion, we modeled the data with 3DRings (see appendix B). 3DRings allows us to model non-circular motions (1) via elliptical orbits with changing position angles, characterizing gas motions in a bar, and (2) via circular orbits leaving the plane of the galaxy, representing a warp. The best bar solution fails to fully explain the data, whereas the warp model gives a very satisfactory fit. The model subdivides the disk into many individual (circular or elliptical) orbits of molecular gas. The inclination, position angle and shape of the rotation curve for the overall galaxy were held fixed. Each fitting process was started at large radii and successively extended towards the center.
In each case we tried several starting set-ups, which all converged to similar (best) solutions with mean deviations from the data of less than about 10 km/s and 0.1” for each radius and velocity in the pv-diagrams, and 10<sup>o</sup> in the position angle of the mapped structures. To test the quality and uniqueness of a model we used these criteria to derive the internal errors of the various model parameters ($`\alpha _0`$, $`\xi \mathrm{\Delta }t`$, $`\omega (r)`$ for the warp and $`ϵ(r)`$, $`PA(r)`$ for the bar approach; see appendix B). For both approaches we used a rotation curve in which the motions of the inner few parsecs were assumed to be Keplerian due to the presence of an enclosed mass. The best results for the warp model are consistent with an enclosed mass of 2 $`\times `$ 10<sup>7</sup> M<sub>⊙</sub>. The pv-diagrams, the velocity field and the intensity map were used as guidance during the fitting process. To allow for an easy comparison between the data and the models, we also display the pv-diagrams at an angular resolution of 0.3” in order to enhance the contrast of the small-scale structures in the cleaned data (Fig. 6). All source components in this representation of the data can also be identified in the images at the nominal angular resolution of 0.6”.

### 7.1 The bar approach

Elliptical orbits caused by a bar potential are a generally accepted way to explain non-circular but well ordered motion in external galaxies; they are the only way to describe non-circular planar motion that is stable over several orbital time scales. However, high angular resolution NIR data show no evidence for a strong nuclear bar (Schinnerer et al., in prep.). In Fig. 11 we show the curves of the position angle $`PA`$ and eccentricity $`ϵ`$ that describe the ellipses of our best fit in the framework of the bar approach. Fig. 12 shows that we are not able to account for the observed amount of counter rotation along the kinematic major axis, and especially along PA $`\sim `$ 40<sup>o</sup> close to the kinematic minor axis; in all pv-diagrams the S-shape in the inner 1” is not fully reproduced. The model fails completely to reproduce the pv-diagrams close to the kinematic minor axis, in particular the counter rotation observed at $`r\sim `$ 0.5”. The fit also cannot explain the second velocity flip, with the velocity rising according to the enclosed central mass. Nor can the bar model fully reproduce the central 1” intensity map and velocity field (Fig. 13). Comparison to the calculated velocity fields and rotation curves of Wozniak & Pfenniger (1997), who used self-consistent models of barred galaxies, shows that no strong apparent counter rotation along the minor axis is possible in such a scenario (their figure 5). This makes us confident that our bar approach is valid and that the bar scenario is a less likely solution for the central 1” in NGC 3227.

### 7.2 The warp approach

Since the bar approach failed to fully explain the complex but well ordered motion in the inner 1” of NGC 3227, we inspected the second possibility to explain non-circular motion: a warping of the gas disk, mimicked by tilting circular orbits out of the plane of the galaxy. This approach seems reasonable, as gas can leave the plane of the galaxy at the position of vertical resonances (Pfenniger 1984; Combes et al. 1990) and therefore probably populates the essential orbits. As warps have also been observed in accretion disks (e.g. NGC 4258; Miyoshi et al. 1997) this seems a plausible way to pursue. In Fig. 15 we show the excellent fit of our warp model to the pv-diagrams. The intensity map and velocity field of this model are shown in Fig. 16. The remaining differences (mainly reflecting the uneven intensity distribution in the data) are mostly due to the assumption of a uniform density distribution.
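As a toy illustration (ours, not the 3DRings algorithm) of why tilting rings can produce apparent counter rotation: for a ring whose line of nodes lies along the major axis, the line-of-sight velocity there scales with the sine of the combined inclination, so it changes sign once the combined tilt exceeds 180<sup>o</sup>. The fitted geometry ($`\alpha _o\approx `$ 120<sup>o</sup>) is more complicated; the circular speed and tilt angles below are purely illustrative.

```python
import math

def vlos_major_axis(v_c, i_gal_deg, omega_deg):
    """Toy model: line-of-sight speed on the apparent major axis for a
    circular ring of circular speed v_c, tilted by omega out of a galaxy
    plane inclined at i_gal, with the line of nodes along the major axis
    (i.e. the special case alpha_0 = 0)."""
    return v_c * math.sin(math.radians(i_gal_deg + omega_deg))

# Untilted ring (omega = 0) in the i = 56 deg galaxy plane: normal rotation sense.
v_plane = vlos_major_axis(150.0, 56.0, 0.0)      # positive
# Strongly tilted ring: combined inclination beyond 180 deg flips the sign,
# i.e. apparent counter rotation along the major axis.
v_flipped = vlos_major_axis(150.0, 56.0, 130.0)  # negative
```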
The $`\omega (r)`$ curve (Fig. 14) required to fit the data is remarkably smooth. Starting from the south at $`r\sim `$ 100 pc (1.2”), the gas disk warps until it covers the nucleus. At $`r\sim `$ 30 pc (0.36”) the warped disk is orthogonal to the host plane, in agreement with the disk interpretation in section 5. At smaller radii the warp continues and the curvature becomes stronger. In this geometry the AGN is obscured at least once. The best fits result in $`\alpha _o=(120\pm 20)^o`$ with respect to the major axis. The corresponding $`\xi \mathrm{\Delta }t`$ is $`(2.7\pm 0.7)\times 10^5yrs`$. Therefore $`\sim `$10 rotations at $`r=0.3^{\prime \prime }`$ are required in order to precess by 360<sup>o</sup>. For the opposite sign of $`\xi \mathrm{\Delta }t`$ an $`\alpha _o`$ exists such that the resulting solution is kinematically identical. This indicates that a single geometrical solution exists for both prograde and retrograde precession relative to the rotation of the host galaxy. To fit the counter rotation it is required that $`\omega (r)`$ rises above 90<sup>o</sup>. This is in agreement with Pringle (1997), who has shown that in the case of accretion disks it is possible to obtain a stable warp up to tilt angles of 180<sup>o</sup>.

### 7.3 Discussion of both approaches

The comparison of the two best solutions shows that the warp model is a much better, and therefore preferred, description of the data. The greatest shortcoming of the planar bar model is that it fails to reproduce the counter rotation along the minor axis, although the fit is satisfactory along the kinematic major axis. As a strong stellar bar is not observed in the NIR (Chapman et al. 1999 and our SHARP K-band speckle image reconstructions), strong streaming motions cannot be invoked to explain the remaining differences between the bar model and the data. These facts, combined with the poor fit to the kinematic minor axis $`pv`$-diagram and to the observed spectra (Fig.
17) make it difficult to explain the kinematics in the inner 0.8” with a bar, i.e. via a purely planar system with ordered (elliptical) orbits. Our modeling therefore suggests that the nuclear molecular gas disk in NGC 3227 is likely to be warped in the inner 70 pc. It also indicates that the gas observed at radii $`\sim `$ 13 pc is relatively uniformly distributed rather than confined to a few distinct areas in the nuclear region. The warp approach results in a very good fit to the $`xyv`$ data cube (see also the spectra in Fig. 17) and we therefore definitely favor this model. Theoretical analyses of bulge formation and bar dissolution using orbit calculations or N-body simulations show that with a sufficient central mass density 3-dimensional stellar orbits form that can support and enhance the so-called boxy or peanut appearance of bulges. The interesting zone is the region where the bar potential is only as strong as the bulge potential of the central mass; this often happens at about the distance of the radial ILR. There, in addition to the radial ILR, an inner vertical resonance (IVR) forms which allows stars to leave the plane of the disk (Pfenniger 1984, Combes et al. 1990). Friedli & Benz (1993) have shown that even well within the plane of a barred galaxy vertical resonances can be highlighted by the gas being pushed out of the plane. Since $`x_4`$ orbits are vertically unstable, the gas can leave the plane very quickly after the formation of the bar and becomes trapped in stable anomalous orbits inclined with respect to the major axis $`x`$ (ANO<sub>x</sub> orbits, see Pfenniger & Friedli 1991). Similarly, Garcia-Burillo et al. (1999) observe substantial amounts of molecular gas well above the plane in the warped galaxy NGC 4013; in this case, however, the star formation activity in the disk might be responsible. For NGC 3227 (and NGC 1068, Schinnerer et al.
1999) we find that the gas disk starts to warp at the radius at which the bulge begins to dominate the gravitational potential. Once out of the plane, the gas is mainly supported by the potential of the bulge, competing with that of the bar or disk. As the gas is dissipative compared to the stars, it is reasonable to assume that it will stay on ordered non-crossing orbits, which strongly favors the formation of warps. In order to move the gas out of the plane a torque is needed. Torques can be induced by a non-spherical galactic potential (similar to the effect of the halo on the HI disk), by the radiation pressure of the radio jet (similar to the central radiation source causing the warp in the accretion disk; Pringle 1996, 1997), by gas pressure in the ionization cone (Quillen & Bower 1999), or, as a transient phenomenon, by the gravitational force of a dislocated molecular cloud complex (see appendix C for a more detailed discussion). Estimates of these effects in the nuclear region of NGC 3227 (see the table in the appendix) show that torques induced by gas pressure or GMCs are the two most likely causes. The only worrying aspect of the warp model is seen in the 3-dimensional view (Fig. 18): the direct view onto the AGN is seemingly blocked by the warped gas disk. This contradicts the unified scheme for Seyfert galaxies, which proposes a clear view of the central engine for Seyfert 1 types. This problem can be softened if the orbits are not homogeneously filled with molecular gas but rather with molecular clumps smaller than the size of the Seyfert 1 nucleus; in this case no effective shadowing of the compact Seyfert nucleus itself occurs. The interpretation of the X-ray data demands a warm absorber located in the immediate vicinity of the AGN (Komossa & Fink 1997). Recent observations of UV absorption lines in Seyfert 1 galaxies show that these absorptions occur in galaxies with a warm absorber (such as NGC 3227; Crenshaw et al.
1999) at larger radial distances than the BLR. The different classifications of NGC 3227 as a Seyfert 2 or Seyfert 1 found in the literature are most likely caused by variability. NGC 3227 was first classified as a Seyfert 2 by Khachikian & Weedman (1974). However, from data with higher S/N, Osterbrock (1977) identified it as a Seyfert 1.2. Further measurements by Heckman et al. (1981) and Peterson et al. (1982) confirmed this classification. On the other hand, Schmidt & Miller (1985) note that the generally low (and variable) strength of the emission line spectrum is in better agreement with a Seyfert 2 nucleus. This indicates that the nucleus of NGC 3227 is either intrinsically relatively weak for a Seyfert 1 galaxy or is weakened by obscuration. Obscuring gas in the warped molecular disk may well be responsible for the observed variability and the varying classification of the nuclear source in NGC 3227. It is also possible to compare the warp model with results from optical polarimetry. Thompson et al. (1980) as well as Schmidt & Miller (1985) have observed a polarization of about 1% for the continuum as well as for the permitted and forbidden emission lines of the BLR and NLR. The degree and position angle of the polarization for the continuum and the emission lines are similar, implying a common cause for the polarization. The position angle (131<sup>o</sup>$`\pm `$8<sup>o</sup>) of the polarization is not in agreement with the galactic major axis or the axis of the radio jet, but it is in agreement with the apparent enhancement of disk material due to projection effects in the warp geometry proposed here. These circumstances provide strong independent support both for the presence of material on the line of sight to the nucleus and for the particular warp geometry we obtained by analyzing the kinematics of the molecular gas.

## 8 SUMMARY AND IMPLICATIONS

1.
Molecular gas close to the nucleus.— We obtained PdBI data of the HCN(1-0) and <sup>12</sup>CO line emission of the nuclear region in NGC 3227 with sub-arcsecond spatial resolution. These data allow for the first time a detailed and quantitative analysis of the molecular gas kinematics in the inner 500 pc of this Seyfert 1 galaxy. NGC 3227 shows a nuclear gas ring with a diameter of about 250 pc, similar to the one observed in NGC 1068 (Tacconi et al. 1997, Schinnerer et al. 1999). Gas emission at a radius of about 13 pc is detected in the <sup>12</sup>CO (2-1) line in NGC 3227. This emission shows a remarkable velocity offset from the systemic velocity and allows us for the first time to use the molecular line emission to estimate the enclosed mass in the inner 25 pc: about 1.5$`\times `$10<sup>7</sup> M<sub>⊙</sub> (not corrected for inclination effects). This is in agreement with estimates from other wavelength ranges.

2. The HCN(1-0) line emission is very concentrated.— Comparison of the HCN(1-0) and <sup>12</sup>CO data with single dish observations suggests that the <sup>12</sup>CO line emission is distributed in a disk of FWHM $`\sim `$ 25” whereas the HCN(1-0) line emission is concentrated on the nucleus. The direct comparison to our high resolution interferometric <sup>12</sup>CO (2-1) data furthermore suggests that the HCN(1-0) emission mainly arises from a region of size $`\sim `$ 0.6” which shows unusual kinematical behavior.

3. The nuclear molecular gas disk is likely to be warped.— To model the nuclear kinematics of NGC 3227 observed in the <sup>12</sup>CO (2-1) line emission we used a modified tilted ring model which is able to describe gas motions in a thin ($`\sim `$17 pc), warped disk as well as in a bar potential. Our modeling of the nuclear kinematics with 3DRings suggests that a warped gas disk provides a better explanation of the observed gas motions than motions evoked by a bar potential.
The warp of the gas disk starts at an outer radius of $`\sim `$ 75 pc and is perpendicular to the outer disk of the host at $`\sim `$ 30 pc. This warping implies an obscuration of the nucleus at small radii by a thin gas disk, in agreement with findings at other wavelengths which suggest an obscuration at radii larger than the BLR, including parts of the NLR. The most likely cause of the warping of the gas disk is the gas pressure in the ionized gas cones, as traced by the NLR; this pressure exerts a torque on the gas disk. This mechanism was recently also discussed by Quillen & Bower (1999) as a possible cause of the warp in M 84.

4. Small molecular gas tori are not needed.— Our observations of NGC 1068 (Schinnerer et al. 1999) and NGC 3227 suggest that warps of the circumnuclear gas disk may be common. Even though nothing in our observations can rule out the existence of small molecular tori with radii $`\le `$25 pc, our finding implies that the molecular gas torus postulated by most unified schemes for Seyfert galaxies is not required under all circumstances to obscure the nuclei. A somewhat peculiar distribution of molecular gas or dust in the host galaxy, as proposed e.g. by Malkan et al. (1998), appears more likely. However, the obscuring gas and dust may still be in well ordered motion.

Acknowledgments: IRAM is financed by INSU/CNRS (France), MPG (Germany) and IGN (Spain). We thank the staff on Plateau de Bure for performing the observations, and the staff at IRAM Grenoble for help with the data reduction, especially D. Downes, R. Neri and J. Wink. For fruitful discussions we thank A. Baker, D. Downes, P. Englmaier, J. Gallimore, O. Gerhard, R. Maiolino, A. Quillen, N. Scoville and L. Sparke. We used the NASA/IPAC Extragalactic Database (NED), maintained by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
APPENDIX Here we give a detailed description of the algorithms we used to analyze (Appendix A) and describe (Appendix B) the kinematics and intensity distribution of the molecular gas in the nuclear region of NGC 3227. We also summarize and compare possible physical mechanisms that may lead to a warp of the molecular disk in the circum-nuclear region of active galactic nuclei (Appendix C). ## Appendix A DECOMPOSITION OF MOTIONS: 3DMod The application of 3DMod allows us to decompose the observed kinematics into their components of non-circular motion and circular rotation around the nuclear position. As an input it uses information on the integrated line flux distribution, the rotation curve and the spatial distribution of the velocity dispersion. The 3-dimensional spatial distributions of the intensity, the velocity and the velocity dispersion are generated separately. Intensity distribution: We use the deprojected measured intensity distribution. We correct for resolution-dependent deprojection effects by deconvolution before the deprojection and a later reconvolution. The deprojected, deconvolved intensity map is loaded into the $`xyz`$ model cube, which also allows us to introduce a thickness of the molecular disk in the $`z`$ direction, if required. The velocity field: The spatial velocity cube is constructed assuming that the gas is dynamically coupled via interaction between the individual molecular clouds and clumps and that the velocity field does not vary significantly with height $`z`$. The rotation curve is extrapolated to $`v=0km/s`$ towards the origin (r = 0) and used to model the axisymmetric velocity field in the plane of the galaxy assuming circular rotation. Velocity Dispersion: We assume that the velocity dispersion of the gas is locally isotropic. We allow for a possible power-law variation $`\sigma (x,y,z)=A\times r(x,y,z)^\alpha `$. Here $`r(x,y,z)`$ is the radial distance from the origin. 
Such a description is appropriate for the transition between a possibly large nuclear velocity dispersion and the low dispersions in the disk. A minimum velocity dispersion of $`7km/s`$ is assumed for the dispersion between the individual clouds (e.g. Combes & Becquaert 1997). Creation of Model Cubes: Each spatial cube is corrected for inclination $`i`$ and position angle $`PA`$ of the galaxy. After rotation the 3 individual spatial cubes are merged to obtain a $`xyv`$ cube with two spatial axes (on the sky) and a spectral axis. The spectral axis, with a resolution similar to that of the observed data, is generated via integration along the line of sight. For each spatial pixel in the plane of the sky the central velocity is determined and the flux is distributed into the spectral pixels according to the local velocity dispersion. Convolution along the remaining two axes gives the required spatial resolution. After scaling to the observed flux the calibrated model $`xyv`$ cube can directly be compared to the measured data. Decomposition of Motions: The $`xyv`$ cube is generated as described above assuming that all the flux originates from gas in circular motion. The difference between the data cube and the model cube then shows positive and negative residuals at all positions for which the assumption of circular motion is not correct. In general one can demand that the integrated flux at each position is positive. This means that the positions which have negative residuals have to be corrected. As a first approximation of the correction we create an intensity map of the negative residuals by integrating them over velocity, subtracting a mean negative noise level, and then setting all positive values in the resulting map to zero. This map is added to the observed intensity map. The new map should now contain only emission from components in circular motion and is used as a new input to calculate the intensity cube. 
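The correction step just described can be sketched as follows (an illustrative reconstruction, not the actual 3DMod code; the array shapes and the `mean_neg_noise` argument are our own assumptions):

```python
import numpy as np

def corrected_intensity_map(obs_map, data_cube, model_cube, mean_neg_noise):
    """One iteration of the negative-residual correction (illustrative sketch).

    data_cube, model_cube : (n_vel, ny, nx) arrays; obs_map : (ny, nx).
    mean_neg_noise is the (negative) mean noise level of the residuals.
    """
    residual = data_cube - model_cube
    neg = np.where(residual < 0.0, residual, 0.0)  # keep only negative residuals
    neg_map = neg.sum(axis=0)                      # integrate over the velocity axis
    neg_map = neg_map - mean_neg_noise             # subtract the mean negative noise level
    neg_map[neg_map > 0.0] = 0.0                   # set remaining positive values to zero
    return obs_map + neg_map                       # new input intensity map
```

The corrected (reduced) map is then fed back to regenerate the intensity cube, as described in the text.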
Then a new $`xyv`$ cube is generated and subtracted from the data cube. The residual cube should now contain only significant positive residuals which represent components in non-circular motion. ## Appendix B KINEMATIC MODELING: 3DRings The model 3DRings allows us to create kinematic models of circular and non-circular motions. The model is based on the following considerations: Since gas is dissipative it can only move for a longer time on orbits which are not self-intersecting, do not cross each other and do not have strong cusps. Therefore only two basic possibilities are available for gas orbits: (1) planar circular or elliptical orbits and (2) tilted circular orbits. The first possibility can be identified with the $`x_1`$ and $`x_2`$ orbits (e.g. Contopoulos & Papayannopoulos 1980) which exist in bar potentials in the plane of the host galaxy. The second possibility describes gas motion that is no longer confined to the plane of the galaxy so that the gas disk is warped. We use the inclination, position angle, and the rotation curve derived from the molecular gas velocity field. The gas disk is subdivided into a number of elliptical (bar approach) or circular (warp approach) rings between a given inner and outer radius. For both approaches the properties of these 3-dimensional rings are given by continuous parameter input curves as a function of radius. The fitting is started at large radii where simple circular motion dominates and successively extended towards the nuclear region. The fit is done by matching the pv-diagrams and the moment maps. The bar approach: Rather than calculating the gas motion in a given gravitational potential, 3DRings fits the observed $`xyv`$ data cube under the simplifying assumption of closed elliptical $`x_1`$ and $`x_2`$ orbits with continuous curves of ellipticity $`ϵ(r)`$ and position angle $`PA(r)`$, centered on the nucleus. This approach has been adopted from Telesco & Decher (1988; Fig. 
7 therein) who discussed its validity in great detail. To match observations and theoretical calculations we implemented the orbits such that the velocities at the minor and major axis are inversely proportional to the axial ratio. The warp approach: 3DRings is similar to other tilted ring models (e.g. Rogstad et al. 1974; Nicholson, Bland-Hawthorn & Taylor 1992; see also Fig. 16 in Schwarz 1985) and follows the method described in Quillen et al. (1992). The inclination and precession of the rings (representing gas orbits) are given by continuous curves $`\omega (r)`$ and $`\alpha (r)`$, respectively (see Fig. 19). A torque acting on an orbit with a circular velocity $`v_c(r)`$ introduces a precession rate $`d\alpha /dt=\xi v_c/r`$. After a time $`\mathrm{\Delta }`$$`t`$ one obtains $`\alpha (r)=\xi \mathrm{\Omega }\mathrm{\Delta }t+\alpha _0`$. Here $`\xi `$ is given by the acting torque (see Appendix C) and $`\mathrm{\Omega }`$=$`v_c`$/$`r`$. We considered for our analysis models with constant travel time $`\xi \mathrm{\Delta }t`$ and assume the molecular gas to be uniformly distributed. Verification of the bar approach: To test how far our simplified bar models that follow the original approach of Telesco & Decher (1988) are justified, we compare them to theoretical calculations. In our model continuous, smoothly varying curves of the ellipticities and position angles of the 3DRings ellipses were chosen under the boundary condition that they do not cross each other. The resulting model resembles the intensity map and velocity field of theoretical models (e.g. Model 001 of Athanassoula 1992b). The velocity field is shown in the rest-frame of the rotating bar by subtracting a circular disk model with constant intensity and a linearly increasing rotation curve which results in the same angular velocity as the bar model at the co-rotation radius of the bar model (Fig. 20). We also compare the density and velocity profile across the dust lane. 
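Stepping back to the warp approach, the geometry of tilted, precessing rings can be sketched in a few lines (a minimal illustration assuming the angle conventions above; the function names are our own):

```python
import math

def precession_angles(radii, v_circ, xi_dt, alpha0=0.0):
    """alpha(r) = xi*Omega(r)*Delta_t + alpha0, with Omega = v_c/r and a
    constant travel time xi*Delta_t, as assumed in the text (illustrative)."""
    return [xi_dt * v / r + alpha0 for r, v in zip(radii, v_circ)]

def tilted_ring(r, omega, alpha, n=360):
    """Points of a circular ring of radius r, tilted by omega out of the
    disk plane and precessed by alpha about the disk normal (radians)."""
    pts = []
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0        # ring in its own plane
        y, z = (y * math.cos(omega) - z * math.sin(omega),
                y * math.sin(omega) + z * math.cos(omega))         # tilt about the x-axis
        x, y = (x * math.cos(alpha) - y * math.sin(alpha),
                x * math.sin(alpha) + y * math.cos(alpha))         # precess about the z-axis
        pts.append((x, y, z))
    return pts
```

Since the travel time is held constant, inner rings (larger Omega) precess further, which is what winds the disk into a warp.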
Due to shocks, jumps in the density and velocity profiles occur at the position of the dust lanes (Fig. 21). The density profile shows a maximum at that position whereas the velocity has its maximum upstream and attains much smaller values downstream of the shock in the dust lane. As shown in Figs. 20 and 21 the results of 3DRings are also qualitatively comparable to those of Athanassoula (1992b). Therefore we are confident that our chosen bar approach is in sufficient agreement with the results of theoretical calculations and N-body simulations and very well suited to search for bar signatures in the observed $`xyv`$ cubes without having to calculate orbits in assumed or inferred gravitational potentials. ## Appendix C CAUSES FOR WARPS IN CIRCUM-NUCLEAR GAS DISKS Since a warped molecular gas disk in the circum-nuclear regions of galaxies is a relatively new concept, we give a more detailed description of possible causes for warps in the region between a few and several 100 pc. To generate a warp in a thin gas disk, the disk has to be moved out of the plane of the galaxy by an acting torque. Several mechanisms can account for a torque in the central few hundred parsecs. The most important ones are the gas pressure of the ionization cone, radiation pressure of a radio jet or a nuclear source, the gravitational forces of individual molecular cloud complexes and an axisymmetric, non-spherical galactic potential (e.g. representing the stellar bulge of a galaxy). In general the torque $`M`$ can be expressed as $$|\stackrel{}{M}|=|\stackrel{}{F}\times \stackrel{}{l}|=|\stackrel{}{F}|\,l\,\mathrm{sin}(\psi )\approx pAl$$ (C1) For this approximation we assume that the force $`\stackrel{}{F}`$ is perpendicular to the lever arm $`\stackrel{}{l}`$ (i.e. $`\psi `$ = 90<sup>o</sup>). In addition the force can be expressed by a pressure $`p`$ onto the local disk area $`A`$, relating the torque to a pressure gradient between the top and bottom side of the disk. 
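The estimate of Eq. (C1) is trivial to evaluate numerically; a one-line helper (our own naming, for illustration only):

```python
import math

def torque_from_pressure(p, area, lever_arm, psi_deg=90.0):
    """|M| = |F| l sin(psi) with |F| = p*A (Eq. C1); psi = 90 deg by default."""
    return p * area * lever_arm * math.sin(math.radians(psi_deg))
```

All mechanism-specific estimates below reduce to supplying an effective pressure (or force) and a lever arm to this expression.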
In the following we describe, for the different mechanisms, approximations of the torques as well as the related observational quantities that are required to evaluate them. The results of the calculations are summarized in Tab. 3 and Tab. 4. Only the torque by the gas pressure can sustain a warping of the disk over several dynamical time scales. As a transient phenomenon a GMC above or below the disk may induce a torque as well. ### C.1 Warp caused by an axisymmetric galactic potential The observed HI warps are explained by this mechanism (see review by Binney 1992). The axisymmetric potential (Fig. 22 a) of the halo applies a torque to the HI gas disk, evoking a warping of the HI disk. The derivation of the torque is given in Goldstein (1980) and in Arnaboldi & Sparke (1994). The torque $`M`$ acting on a mass $`m`$ at a point ($`r,\theta `$), with the inclination $`\theta `$ of the rotation axis of the orbit of the mass $`m`$ relative to the rotation axis of the main system, is here defined as $`M\sim m\frac{\delta V}{\delta \theta }`$ (Stacey 1969). Here $`V`$ is the axisymmetric potential. Arnaboldi & Sparke (1994) deduced in their analysis of orbits in polar rings in an axisymmetric potential the torque far outside of the core of a slightly oblate halo. For a potential with oblate symmetry (axial ratio $`a=b>c`$) the torque is described by $$M\sim m\pi G\rho _o\frac{a^2(a^2-c^2)}{3c^2}\mathrm{sin}(2\theta ).$$ (C2) This relation is adequate to estimate the strength of an axisymmetric potential outside the core radius. Inside the core radius of the potential (halo) the torque varies with $`r^2`$ (Sparke 1996). The author also gives a relation between the precession rate and the torque: $$M\sim m\dot{\mathrm{\Phi }}\mathrm{sin}(\theta )(r^2\mathrm{\Omega }(r)).$$ (C3) Here $`\dot{\mathrm{\Phi }}`$ is the precession rate (in units of \[rad/s\]) and $`\mathrm{\Omega }(r)=\frac{v(r)}{r}`$ is the angular velocity. 
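Under the stated assumptions, Eqs. (C2) and (C3) translate directly into order-of-magnitude estimates (a sketch in our own notation; the "∼" signs in the text mean these hold only up to factors of order unity):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def torque_oblate_halo(m, rho0, a, c, theta):
    """Eq. (C2): torque on mass m far outside the core of a slightly
    oblate halo (axial ratio a = b > c) with central density rho0."""
    return m * math.pi * G * rho0 * a**2 * (a**2 - c**2) / (3.0 * c**2) * math.sin(2.0 * theta)

def torque_from_precession(m, phi_dot, theta, r, v_c):
    """Eq. (C3): M ~ m * Phi_dot * sin(theta) * r^2 * Omega(r), with
    Omega = v_c / r, so r^2 * Omega = r * v_c."""
    return m * phi_dot * math.sin(theta) * r * v_c
```

As a consistency check, a spherical halo (a = c) exerts no torque in Eq. (C2), as it must.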
Since these quantities are known or used as input parameters for 3DRings (see Appendix B), the above relation can be used to estimate the torque of the observed or modeled (with 3DRings) warp. We neglected the effects of dissipation, radial mass transport (only circular orbits are used) and self-gravity within the rings. Both relations can only give rough estimates, as the core size of the axisymmetric potential is unknown, and the precession rate $`\dot{\mathrm{\Phi }}`$ (some fraction of $`\frac{2\pi }{T}`$; $`T`$ is the period of a circular orbit) and the inclination angle $`\theta `$ have to be determined from the model 3DRings. However, for a first approximation these estimates can be used as reference values in order to determine whether a given mechanism is sufficient to account for the required torques. ### C.2 Torque imposed by a molecular cloud As a transient phenomenon a GMC above or below the disk may induce a torque as well (Fig. 22 b). The decomposition of the motions in the molecular gas in NGC 3227 (see section 5 and Appendix A) has shown that in the inner 300 pc molecular cloud complexes exist that do not participate in the circular motion of the underlying molecular gas disk. Therefore these complexes are likely not part of the gas disk but are probably located above or below it. In this case they interact gravitationally with the underlying gas disk, which can result in a warp. Under the assumptions that the mass $`m_{GMC}`$ of the GMC is similar to the mass $`m`$ of the underlying gas disk segment and that the GMC has a height above the disk of the order of its radius, the acting force between the GMC and the disk section can simply be estimated via $$F=G\frac{m_{GMC}m}{r^2}.$$ (C4) This then allows us to estimate the torque following equation C1. The mass of the GMC can be estimated from the <sup>12</sup>CO line flux (see section 6). 
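Combining Eq. (C4) with Eq. (C1) gives a numerical sketch of the GMC torque (SI units; the function name is our own):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gmc_torque(m_gmc, m_disk, r, lever_arm):
    """Gravitational force between a GMC at height ~r above a disk segment
    of comparable mass (Eq. C4), converted to a torque with lever arm l (Eq. C1)."""
    force = G * m_gmc * m_disk / r**2
    return force * lever_arm
```

Because the force scales linearly with each mass, doubling the GMC mass doubles the torque estimate.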
### C.3 Warp caused by gas pressure In active galaxies ionization cones extending from the nucleus out to a distance of a few hundred parsecs are observed. These cones can be regarded as parts of the NLR (narrow line region), for which the number density $`n`$ and excitation temperature $`T`$ can be deduced from the observed forbidden lines. The cones partly have large opening angles and are in general strongly inclined relative to the normal axis of the disk. In many cases the cones can touch or even intersect the disk. Then there will be a pressure gradient between the top and bottom part of the disk (Fig. 22 c). Even if the molecular gas is clumpy, the response of clumps to the pressure gradient imposed by the gas in the ionization cone can be substantial (e.g. Eckart, Ageorges, Wild 1999), and due to self-gravitative interaction (e.g. Lovelace 1998) the entire molecular disk will respond. In this case the gas pressure applies a force at a distance $`r`$ from the center onto the disk. Via a torque $`\stackrel{}{M}`$ this causes a change of the angular momentum $`\stackrel{}{L}`$. This process can be stable over time-scales which are long compared to the dynamical time-scales at the corresponding radii. Therefore this scenario is very well suited to cause and maintain warps in circum-nuclear molecular gas disks. Such a scenario has also been proposed by Quillen & Bower (1999) for M 84. To first order the gas pressure can be estimated from the equation of state for an ideal gas $$p=\frac{N}{V}kT=nkT.$$ (C5) The number density $`n`$ is the particle number $`N`$ per volume $`V`$, and $`k`$ is the Boltzmann constant. The torque can then again be calculated following equation C1. The area $`A`$ is the interaction area of the ionization cone with the galaxy disk and $`l`$, the lever arm, is the distance to the center of this area. 
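With NLR-type densities and temperatures, Eqs. (C5) and (C1) combine as follows (an illustrative sketch; the cm⁻³-to-SI conversion is our own convenience):

```python
K_BOLTZ = 1.380649e-23  # Boltzmann constant, J/K

def gas_pressure(n_cm3, T):
    """Ideal-gas pressure p = n k T (Eq. C5); n in cm^-3, T in K, result in Pa."""
    return n_cm3 * 1.0e6 * K_BOLTZ * T  # convert n to m^-3

def cone_torque(n_cm3, T, area, lever_arm):
    """Torque of the ionization-cone gas pressure on the disk (Eqs. C5 + C1)."""
    return gas_pressure(n_cm3, T) * area * lever_arm
```

For example, n = 10⁴ cm⁻³ at T = 10⁴ K gives a pressure of order 10⁻⁹ Pa, which then multiplies the cone-disk interaction area and lever arm.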
For a purely geometrical dilution of the number density it can be shown that the torque is independent of the radial distance from the center. The ionization cone and its influence on the disk can therefore extend over a few 100 pc and thus provide a continuous warping of the gas disk. ### C.4 Warp caused by radiation pressure In the case of a 1 pc diameter circum-nuclear accretion disk a warping of the disk can be caused by non-uniform illumination of the gas disk by the AGN radiation field (UV, X-ray; Pringle 1996, 1997; Fig. 22 d). This is due to the fact that, independent of the angle of incidence, the absorbed radiation is re-radiated perpendicular to the disk (Pringle 1996). The non-uniform illumination can be evoked via a non-isotropic radiation field of the AGN or via a small instability in the disk that moves gas out of the plane (as shown here). Due to extinction in the disk this process cannot act at large distances from the nucleus. The radio jet, however, may represent a possibly strong radiation source. If the jet is inclined with respect to the gas disk the jet components can be quite close to the disk even at large radii from the nucleus (Fig. 22 d). Therefore we estimate the jet’s possible contribution to the warping of the gas disk. To calculate the radiation pressure of the jet, its radiation power has to be estimated first. We assume that the radiation flux density $`S`$ is given by a power law $`S=b\nu ^\alpha `$. We assume synchrotron emission and $`\alpha \approx -1`$. In order to obtain an upper limit to the torque induced by a jet we also assume that the upper cut-off frequency is in the NIR to optical range, as is found for powerful jets (e.g. M87; Meisenheimer et al. 1996). In the radio to sub-mm range the disk can be regarded as being transparent. To estimate the total luminosity we used integration limits of 10 $`\mu `$m - 0.5 $`\mu `$m. 
For isotropic radiation the jet luminosity is given via $`L=4\pi D^2\int _{\nu _1}^{\nu _2}S\,d\nu =4\pi D^2\int _{\nu _1}^{\nu _2}b\nu ^\alpha \,d\nu `$ Here $`D`$ is the distance to the source. The radiation density $`I`$ onto a given area $`A`$ (seen under a given solid angle from the jet) at distance $`l`$ can now be calculated. The radiation pressure is then estimated via: $$p=\gamma \frac{I}{c}.$$ (C6) Here $`\gamma `$ equals $`1`$ in the case of a black body, $`2`$ for an ideally reflecting body, and $`c`$ is the speed of light. Again equation C1 can be used to obtain an upper limit of the torque due to radiation pressure from the jet.
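The jet estimate can be sketched numerically as follows (our own helper functions; the closed form for α = −1 follows from integrating the power law, the 1/(4π l²) dilution assumes isotropic emission as in the text, and Eq. (C6) with Eq. (C1) then gives the torque):

```python
import math

C_LIGHT = 2.998e8  # speed of light, m/s

def jet_luminosity(b, D, nu1, nu2, alpha=-1.0):
    """L = 4 pi D^2 * integral of b nu^alpha over [nu1, nu2]."""
    if abs(alpha + 1.0) < 1e-12:
        integral = b * math.log(nu2 / nu1)  # special case alpha = -1
    else:
        integral = b * (nu2**(alpha + 1.0) - nu1**(alpha + 1.0)) / (alpha + 1.0)
    return 4.0 * math.pi * D**2 * integral

def jet_radiation_torque(L, lever_arm, area, gamma=2.0):
    """p = gamma I / c (Eq. C6) with I = L / (4 pi l^2), then M = p A l (Eq. C1)."""
    intensity = L / (4.0 * math.pi * lever_arm**2)
    pressure = gamma * intensity / C_LIGHT
    return pressure * area * lever_arm
```

Because the assumed cut-off and reflection coefficient are both chosen generously, the resulting torque is an upper limit, as stated in the text.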
no-problem/9911/hep-ph9911402.html
# Multibaryons in the Skyrme model* *Contribution to the Proc. of “Hadron Physics 99”, Coimbra, Portugal, September 10-15, 1999, to be published by AIP. ## I Introduction In the last few years there have been several important developments in the determination of the lowest energy multiskyrmion configurations. This type of solution is essential for the understanding of multibaryons and, perhaps, nuclei in the framework of the topological chiral soliton models. So far, these models have proven to be useful for the description of quantities such as the masses, strong and electromagnetic properties of the octet and decuplet baryons, baryon-baryon interactions, etc. (see e.g. Refs. and references therein). The knowledge of the properties of the multiskyrmion configurations opens the possibility of studying more complex baryonic objects. In fact, several investigations concerning non-strange multiskyrmion systems have been reported in the literature (see, e.g., Refs.). Of particular interest are, however, the strange multibaryons. Perhaps the most celebrated example is the $`H`$ dibaryon predicted in the context of the MIT bag model more than twenty years ago. This exotic state has been studied in various other models, including the Skyrme model, but its existence remains controversial both theoretically and experimentally. It has also been speculated that strange matter could be stable. This has led to numerous investigations of the properties of strange matter in bulk and in finite lumps (for a recent review see Ref.). Moreover, with the new heavy ion colliders there is now the possibility of producing strange multibaryons in the laboratory. In this situation the study of multibaryon systems within the $`SU(3)`$ Skyrme model appears to be very interesting. For general soliton configurations this is a quite hard numerical task, since one has to deal with several coupled partial differential equations. 
However, the problem is greatly simplified if one introduces the (approximate) rational map ansätze for the multiskyrmion configurations. The construction of these ansätze is based on the analogy between BPS monopoles and skyrmions and requires that the approximate solutions have the same symmetries as the exact numerical ones. In fact, it is now known that up to $`B=9`$ these configurations are very symmetric. Namely, for $`B=2`$ the solution corresponds to an axially symmetric torus while configurations with $`B=3-9`$ possess the symmetries of the platonic polyhedra (e.g. tetrahedron for $`B=3`$, etc) . In contrast with the exact solution, however, the rational map approximation assumes that the modulus of the static pionic field is radially symmetric while its direction depends only on the polar coordinates. In this contribution we will report on how to describe multibaryon states in the $`SU(3)`$ Skyrme model using these approximate ansätze. ## II Symmetric multiskyrmions and rational maps A rational map of order $`N`$ is a map $`S^2\to S^2`$ of the form $`R_N(z)={\displaystyle \frac{p(z)}{q(z)}}`$ (1) where $`p,q`$ are polynomials of degree at most $`N`$ in the stereographic coordinate $`z=\mathrm{tan}(\theta /2)\mathrm{exp}(i\varphi )`$. It was shown by Donaldson that there is a one-to-one correspondence between BPS monopoles of order $`k`$ and rational maps of degree $`N=k`$. Using the analogy between this type of monopoles and the skyrmions, the authors of Ref. 
proposed the following ansätze for the static soliton chiral field $`U_N^{rat.}(\stackrel{}{r})=\mathrm{exp}\left[i\stackrel{}{\tau }\cdot \widehat{n}_NF(r)\right]`$ (2) where $`\widehat{n}_N=({\displaystyle \frac{2\,\mathrm{Re}(R_N)}{1+|R_N|^2}},{\displaystyle \frac{2\,\mathrm{Im}(R_N)}{1+|R_N|^2}},{\displaystyle \frac{1-|R_N|^2}{1+|R_N|^2}})`$ (3) Replacing Eq.(2) in the Skyrme model effective action $`\mathrm{\Gamma }_{eff}={\displaystyle \frac{f_\pi ^2}{4}}{\displaystyle \int d^4x\,\text{Tr}\,\partial _\mu U\partial ^\mu U^{\dagger }}+{\displaystyle \frac{1}{32e^2}}{\displaystyle \int d^4x\,\text{Tr}[U^{\dagger }\partial _\mu U,U^{\dagger }\partial _\nu U]^2}`$ (4) one gets the following expression for the soliton mass $`M_{sol}={\displaystyle \frac{f_\pi ^2}{2}}{\displaystyle \int d^3r\left[F^{\prime 2}+2N\frac{\mathrm{sin}^2F}{r^2}\left(1+\frac{F^{\prime 2}}{e^2f_\pi ^2}\right)+\frac{\mathcal{I}_N}{e^2f_\pi ^2}\frac{\mathrm{sin}^4F}{r^4}\right]}`$ (5) where $`\mathcal{I}_N={\displaystyle \frac{1}{4\pi }}{\displaystyle \int \frac{2i\,dz\,d\overline{z}}{(1+|z|^2)^2}\left(\frac{1+|z|^2}{1+|R_N|^2}\left|\frac{dR_N}{dz}\right|\right)^4}`$ (6) To obtain the ansatz for a given baryon number $`B=N`$ one should proceed as follows. First, one constructs the most general map of degree $`N`$ that has the symmetries of the exact solutions. Then, the resulting $`\mathcal{I}_N`$ has to be minimized with respect to the remaining free parameters. To perform the first step it is useful to recall that under a general $`SO(3)`$ transformation the stereographic coordinate $`z`$ transforms as $`z\to {\displaystyle \frac{\alpha z+\beta }{-\overline{\beta }z+\overline{\alpha }}}`$ (7) where $`\alpha ,\beta `$ are entries of the $`J=1/2`$ representation of the corresponding rotation operator. We illustrate the method by considering the case $`B=2`$. 
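Before specializing to $`B=2`$, note that for axially symmetric maps the angular integral of Eq.(6) reduces to a one-dimensional quadrature over the polar angle. The following sketch (our own code, not part of the original text) evaluates it with the midpoint rule, passing $`|R|`$ and $`|dR/dz|`$ as functions of $`t=\mathrm{tan}(\theta /2)`$:

```python
import math

def rational_map_integral(R_mod, dR_mod, n_theta=4000):
    """I_N = (1/4 pi) * integral over the sphere of
    [ (1+|z|^2)/(1+|R|^2) * |dR/dz| ]^4, for maps with an axially
    symmetric integrand; z = tan(theta/2) e^{i phi}, midpoint rule in theta."""
    total = 0.0
    for k in range(n_theta):
        theta = math.pi * (k + 0.5) / n_theta
        t = math.tan(theta / 2.0)
        z2 = t * t
        f = ((1.0 + z2) / (1.0 + R_mod(t) ** 2) * dR_mod(t)) ** 4
        total += f * math.sin(theta)
    return 0.5 * total * (math.pi / n_theta)  # trivial phi integral already done
```

For the identity map $`R(z)=z`$ this returns $`\mathcal{I}_1=1`$, and for the torus map $`R_2=z^2`$ derived below it returns approximately 5.81 (doing the integral by hand gives $`\pi +8/3`$), in line with the rational-map values quoted for $`B=2`$.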
The most general map of degree $`N=2`$ is $`R_2={\displaystyle \frac{\mu z^2+\nu z+\lambda }{\delta z^2+\gamma z+\xi }}`$ (8) If we impose the symmetries of the exact torus configuration (axial symmetry plus $`\pi `$ rotations around the three cartesian axes) such general form reduces to $`R_2={\displaystyle \frac{z^2-a}{-az^2+1}}`$ (9) The value of $`a`$ can now be determined by requiring that it minimize $`M_{sol}`$ (that is, $`\mathcal{I}_2`$). In this way one finds $`a=0`$. Thus, the appropriate ansatz is $`R_2=z^2`$ (10) The explicit expressions of the rational maps corresponding to the other baryon numbers have been given in Ref.. Once such maps are determined, the Euler-Lagrange equation for the soliton profile $`F(r)`$ can be numerically solved for each baryon number and the multiskyrmion masses $`M_{sol}`$ evaluated. The values of the soliton masses (per baryon number) for the different baryon numbers as calculated using the rational map ansätze are given in Table I. For reference, the results corresponding to the skyrmion configurations which fully minimize the static energies and the associated symmetry groups are also given. From this table one observes that the rational map ansätze indeed provide a very good approximation to the exact numerical solutions. ## III Strange multibaryons We turn now to the study of the strange multibaryons within the $`SU(3)`$ Skyrme model using the rational map ansätze described in the previous section. For this purpose, the effective action Eq.(4) has to be supplemented with the Wess-Zumino term and some suitable flavor symmetry breaking terms. In the calculations described below we have included terms that account for the different pseudoscalar meson masses and also for the difference between their decay constants. To extend the model to $`SU(3)`$ flavor space we use the bound state approach, in which strange baryons appear as bound kaon-soliton systems. 
Thus, we introduce a generalized Callan-Klebanov ansatz $`U=\sqrt{U_N}U_K\sqrt{U_N}`$ (11) where $`U_N`$ is the $`SU(2)`$ multiskyrmion field properly embedded into $`SU(3)`$ and $`U_K`$ is the field that carries the strangeness. Its form is $`U_K=\mathrm{exp}\left[i{\displaystyle \frac{\sqrt{2}}{f_K}}\left(\begin{array}{cc}0& K\\ K^{\dagger }& 0\end{array}\right)\right]`$ (14) where $`K`$ is the usual kaon isodoublet. In the spirit of the bound state approach we consider first the problem of a kaon field in the background of a static multiskyrmion configuration. To describe such a configuration we use the rational map ansatz approximation Eq.(2). Consequently, the ansatz for the kaon field should be $`K=k_N(r,t)\stackrel{}{\tau }\cdot \widehat{n}\chi `$ (15) where $`\chi `$ is a 1/2 spinor. Replacing Eqs.(11-15) in the effective action and performing the corresponding canonical transformations we obtain a quadratic Hamiltonian whose diagonalization leads to $`\left[-{\displaystyle \frac{1}{r^2}}\partial _r\left(r^2h\partial _r\right)+m_K^2+V-fϵ_N^2-2\lambda ϵ_N\right]k(r)=0`$ (16) The radial functions $`f`$, $`h`$, $`\lambda `$ and $`V`$ depend on the baryon number $`B`$ only through the integral $`\mathcal{I}_N`$. Their explicit expressions can be found in Ref.. Eq.(16) has been solved numerically for different values of $`B`$ using the values of Ref. for $`f_\pi `$ and $`e`$ and setting $`m_K`$ and $`f_K/f_\pi `$ to their corresponding empirical values. The resulting eigenenergies are listed in Table II. Also listed are the masses (per baryon number) of the corresponding $`Y=0`$ states in the adiabatic approximation, $`M_{Y=0}^{adiab}/B=M_{sol}+ϵ`$. These states are of particular interest since it has been claimed that some of them can be stable against strong decays. As a general trend we see that the kaon binding energies $`D_N^K=m_K-ϵ_N`$ decrease with increasing baryon number. 
However, as in the case of the energy required to liberate a single $`B=1`$ skyrmion from the multisoliton background, we observe some deviation from a smooth behaviour, namely, $`D_4^K>D_3^K`$ and $`D_7^K>D_6^K`$. Consequently, such deviations will also be present in the multiskyrmion mass per baryon. Interestingly, this kind of phenomenon has also been observed in some MIT bag model calculations. There it is due to shell effects. Using the values given in Table II we obtain $`\begin{array}{ccc}M_{2\mathrm{\Lambda }}-2M_\mathrm{\Lambda }& =& 12MeV\\ M_{4\mathrm{\Lambda }}-2M_{2\mathrm{\Lambda }}& =& -176MeV\\ M_{7\mathrm{\Lambda }}-(M_{3\mathrm{\Lambda }}+M_{4\mathrm{\Lambda }})& =& -177MeV\end{array}`$ (20) in the static soliton approximation (i.e. to $`𝒪(N_c^0)`$). These results seem to confirm previous speculations about the stability of the tetralambda in the Skyrme model and open up the possibility of a stable heptalambda. On the other hand, they indicate that the $`H`$-particle, although very close to threshold, is not stable. Within the static multiskyrmion approximation considered so far the spin and isospin quantum numbers of the bound kaon-multiskyrmion systems are not well defined. To recover good spin and isospin quantum numbers we proceed with the standard semi-classical collective quantization. For $`B>1`$, however, we should introduce independent spin and isospin rotations. The collective Lagrangian reads $`\begin{array}{ccc}L_{coll}& =& \frac{1}{2}\left[\mathrm{\Theta }_{ab}^J\mathrm{\Omega }_a\mathrm{\Omega }_b+\mathrm{\Theta }_{ab}^I\omega _a\omega _b+2\mathrm{\Theta }_{ab}^M\mathrm{\Omega }_a\omega _b\right]-\left(c_{ab}^J\mathrm{\Omega }_a+c_{ab}^I\omega _a\right)T_b\end{array}`$ (22) Here, $`\stackrel{}{\mathrm{\Omega }}`$ is the angular velocity corresponding to the spin rotation, $`\stackrel{}{\omega }`$ that of the isospin rotation and $`T_b`$ is the kaon spin. 
$`\mathrm{\Theta }_{ab}^J`$ and $`\mathrm{\Theta }_{ab}^I`$ are the corresponding moments of inertia while $`\mathrm{\Theta }_{ab}^M`$ is an inertia that mixes spin and isospin. The constants $`c_{ab}^J`$ and $`c_{ab}^I`$ are the hyperfine splitting constants which for $`B=1`$ provide the $`\mathrm{\Lambda }`$-$`\mathrm{\Sigma }`$ mass splitting. The explicit expressions of these inertia and hyperfine splitting tensors in terms of the soliton profile function $`F(r)`$ and the rational map $`R_N(z)`$ can be found in Ref.. Using the standard definitions for the canonically conjugate momenta $`\begin{array}{ccc}J_a& =& \frac{\partial L_{coll}}{\partial \mathrm{\Omega }_a}=\mathrm{\Theta }_{ab}^J\mathrm{\Omega }_b+\mathrm{\Theta }_{ab}^M\omega _b-c_{ab}^JT_b\\ I_a& =& \frac{\partial L_{coll}}{\partial \omega _a}=\mathrm{\Theta }_{ab}^M\mathrm{\Omega }_b+\mathrm{\Theta }_{ab}^I\omega _b-c_{ab}^IT_b\end{array}`$ (25) it is rather simple to find the general form of the collective Hamiltonian $`H_{coll}`$. Details are given in Ref.. It is important to stress that the structure of the inertia and hyperfine splitting tensors appearing in Eq.(22) is strongly determined by the multiskyrmion symmetries. Using group theory arguments, it can be shown that (for symmetric skyrmions) such tensors are always diagonal. The number of independent diagonal entries, as well as whether the mixing inertias vanish or not, is also fixed by the properties of the corresponding symmetry group $`G`$. For example, for $`B=3`$ the three components of the spin and isospin operators transform as the 3-dim irrep $`F_2`$ of the group $`T_d`$. Therefore, there is only one independent component for the spin inertia, one for the isospin inertia and one for the mixing inertia. A similar analysis can be done for the hyperfine splittings. For $`B=4`$, however, $`I_1,I_2`$ transform as the 2-dim irrep $`E_g`$ of the group $`O_h`$ while $`I_3`$ transforms as the 1-dim irrep $`A_{2g}`$ and the three components of $`\stackrel{}{J}`$ as the 3-dim irrep $`T_{1g}`$. 
Thus, for $`B=4`$ we should have $`\mathrm{\Theta }_{11}^I=\mathrm{\Theta }_{22}^I\ne \mathrm{\Theta }_{33}^I;\mathrm{\Theta }_{11}^J=\mathrm{\Theta }_{22}^J=\mathrm{\Theta }_{33}^J;\mathrm{\Theta }_{aa}^M=0`$ (26) Finally, we have to determine the collective wave-functions. Their general form must be $`|JJ_z,II_z,S\rangle ={\displaystyle \sum _{J_3I_3T_3}}\beta _{J_3I_3T_3}^{JIT}D_{J_zJ_3}^JD_{I_zI_3}^IK_{T_3}^T`$ (27) where $`D_{J_zJ_3}^J`$ and $`D_{I_zI_3}^I`$ are $`SU(2)`$ Wigner functions and $`\beta _{J_3I_3T_3}^{JIT}`$ are some numerical coefficients that have to be fixed by requiring that these wave-functions transform as a 1-dim irrep of $`G`$. It is very important to notice that such an irrep may not coincide with the trivial one. As is well known, when one performs an adiabatic symmetry operation on a skyrmion configuration one can pick up a non-trivial phase. These are the so-called Finkelstein-Rubinstein phases. A detailed analysis of these phases for the configurations we are dealing with has been done by Irwin. Using these phases one finds that, except for $`B=5,6`$, the wavefunctions should transform as the trivial irrep of $`G`$. For $`B=5`$ they should transform as the $`A_2`$ irrep of $`D_{2d}`$ and for $`B=6`$ as the $`A_2`$ irrep of $`D_{4d}`$. Having obtained the explicit form of the collective Hamiltonians and wavefunctions, the $`𝒪(N_c^{-1})`$ rotational contribution $`E_{rot}`$ to the multiskyrmion masses can be calculated using first order perturbation theory. The numerical values of such contributions to the masses of the lowest lying non-strange baryons are given in Table III while those corresponding to the zero-hypercharge multibaryons are listed in Table IV. From Table III we note that the quantum numbers of the ground states are consistent with those known for light nuclei with the exception of the odd values $`B=5,7,9`$. 
We also observe that the lowest lying state has the lowest possible value of isospin and that, on average, mass splittings decrease with increasing baryon number. This is a consequence of the fact that, although all the moments of inertia increase with increasing baryon number, the increase of the spin inertia is much faster than that of the isospin one. Using the values of the rotational corrections to the lowest lying $`Y=I=0`$ states (some of which are listed in Table IV), one can see that the stability of the $`4\mathrm{\Lambda }`$ and the $`7\mathrm{\Lambda }`$ is not affected by these corrections. For example, for the $`4\mathrm{\Lambda }`$ there is a decrease of $`36`$ MeV in the binding energy, while that of the $`7\mathrm{\Lambda }`$ is increased by $`45`$ MeV. ## IV Conclusions In this contribution we have reported on the description of multibaryons within the bound state approach to the $`SU(3)`$ Skyrme model. To describe the multiskyrmion backgrounds we have used ansätze based on rational maps. Such configurations are known to provide a good approximation to the exact numerical ones, and lead to a great simplification in the treatment of the kaon-soliton system. An important property of these approximate configurations is that they have the same symmetries as the exact ones. We have shown that the properties of the associated symmetry groups completely determine the explicit form of the collective Hamiltonians (namely, the detailed structure of the inertia and hyperfine splitting tensors). The same happens for the collective wavefunctions. In particular, we have shown how the Finkelstein-Rubinstein phases fix, in a unique way, the one-dimensional irreducible representation according to which each wave function should transform. Thus, the method to obtain the collective Hamiltonians and wave functions described here is also valid in the exact case.
On the other hand, the numerical values of the meson bindings and of the independent inertia parameters and hyperfine splitting constants will depend on the detailed form of the ansätze and will therefore be approximate. Using an effective action that provides a good description of the hyperon static properties, we have studied the spectra of non-strange and strange multibaryons. In the case of non-strange baryons we found that, for even baryon number, the ground state quantum numbers coincide with those of known stable nuclei. It should be stressed, however, that in our opinion these quite compact multiskyrmion configurations should be interpreted as “multiquark bags” rather than normal nuclei. How these configurations are related to normal nuclei is not yet clear. Another feature of the predicted spectra is that the low lying non-strange multibaryons always have the lowest possible value of isospin. This can be understood in terms of the behaviour of the inertia tensors as a function of the baryon number. The situation is more complicated in the case of strange particles, for which there is a quite delicate interplay between the different terms contributing to the rotational energies. From the calculated spectra of strange multibaryons it follows that some $`Y=0`$ configurations could be stable against strong decays. Such configurations, usually called strangelets, are expected to be seen at RHIC. Many of the ideas discussed in the present contribution can be extended to the case of heavy flavor (e.g. charmed) multibaryons. In such a case, however, a proper treatment requires the use of an effective Lagrangian that accounts for both chiral symmetry and heavy quark symmetry. The present model has also been applied to the study of the binding of the $`\eta `$ meson to few non-strange baryon systems. We finish with a comment on the Casimir corrections to the multibaryon masses.
Although these corrections are not expected to affect the kaon eigenvalues and the rotational energies shown here in any significant way, they might play some role in the determination of the multibaryon binding energies. Within the $`SU(2)`$ Skyrme model it has been shown that they are responsible for the reduction of the otherwise large $`B=1`$ soliton mass to a reasonable value when the empirical value of $`f_\pi `$ is used. Here, we have avoided the large $`B=1`$ mass problem by using the customary method of fitting $`f_\pi `$ to reproduce the nucleon mass. A more consistent approach should certainly use the empirical $`f_\pi `$ and include the Casimir corrections. In this respect, there have recently been some efforts to evaluate the corrections to the $`B=1`$ mass in the $`SU(3)`$ Skyrme model. Unfortunately, even in the $`SU(2)`$ sector, almost nothing is known for $`B>1`$. This is, of course, a very difficult task, since it requires the knowledge of the meson excitation spectrum around the non-trivial multiskyrmion up to rather large energies. ## ACKNOWLEDGEMENTS The material presented here is based on work done with J.P. Garrahan and M. Schvellinger. Support provided by grant PICT 03-00000-00133 from ANPCYT, Argentina, is acknowledged. The author is a fellow of CONICET, Argentina. He would like to thank the members of the Organizing Committee for their warm hospitality during the workshop.
no-problem/9911/astro-ph9911070.html
# Studies of Mira and semiregular variables using visual databases ### Acknowledgments. We thank the many observers and those who maintain the visual databases of the AAVSO, RASNZ, AFOEV, VSOLJ and BAAVSS.
no-problem/9911/astro-ph9911129.html
# Radio Triggered Star Formation in Cooling Flows ## 1 Introduction More than half of clusters within redshift $`z\lesssim 0.1`$ contain bright, central X-ray emission from $`\sim `$ keV gas that appears to be cooling at rates of $`10`$–$`1000\mathrm{M}_{}\mathrm{yr}^{-1}`$ (Fabian 1991). Commonly referred to as cooling flows, persistent accretion of this cooling material onto the bright, central galaxies in clusters (CDGs) at even a fraction of these rates would be capable of fueling vigorous star formation and the central engines generating their radio sources. Enhanced levels of cold gas and star formation are indeed seen in cooling flows (see McNamara 1997 for a review). However, the inferred star formation rates are only $`<1`$–$`10\%`$ of the cooling rates derived from X-ray observations, and the amounts of cold gas detected outside of the X-ray band would account for $`<10^8`$ yr of accumulated material. Between 60% and 70% of CDGs in cooling flows harbor Fanaroff-Riley Type I (FR I) radio sources, while only $`\sim 20\%`$ of CDGs in non cooling flow clusters have bright radio sources (Burns et al. 1997). The presence of a cooling flow increases dramatically the likelihood of detecting a bright radio source in a CDG. The radio sources in cooling flows are often interacting with the cool (and hot) intracluster medium, influencing the gas dynamics (e.g. Burns et al. 1997 and this conference), and in some cases possibly triggering star formation (McNamara 1997). The origin of the cool material, whether direct cooling from the intracluster medium, as would follow from the standard cooling flow model, or an external source, such as cold gas stripped from surrounding cluster galaxies, has not been identified conclusively. I discuss these issues in this article, illustrating several points with a brief analysis of new optical imagery for the Abell 1068 cluster CDG. ## 2 Host Galaxy Properties A giant CDG resides at the base of all known cooling flow clusters.
CDGs are, as a class, the largest and most luminous elliptical galaxies. Their envelopes have been traced to radii of several hundred kiloparsecs (Uson et al. 1991; Johnstone et al. 1991). The absolute magnitudes of CDGs are typically $`M_\mathrm{V}\sim -20`$ to $`-22`$ within a 16 kpc radius (Schombert 1987), but they can be as luminous as $`M_\mathrm{I}\sim -26`$ including the envelope (Johnstone et al. 1991). Unusually blue colors associated with young, massive stars are often seen in the central $`5`$–$`30`$ kpc of cooling flow CDGs (McNamara 1997; Cardiel et al. 1998). The likelihood of detecting a blue population correlates strongly with $`\dot{m}_x`$. This correlation is shown using a $`U-B`$ color excess relative to a non-accreting galaxy template in Figure 1; it is the strongest evidence linking star formation to the presence of a cooling flow. The star formation rates associated with the objects in Figure 1 range from $`<1`$ to $`100\mathrm{M}_{}\mathrm{yr}^{-1}`$ (McNamara & O’Connell 1989; McNamara 1997; Cardiel et al. 1998). Beyond the central regions, the spatially averaged surface brightness profiles usually follow the de Vaucouleurs $`r^{1/4}`$ law well into their halos. If the CDG has the characteristic envelope of a cD galaxy (Schombert 1987), the profile rises above the $`r^{1/4}`$ law extrapolated outward from the halo. In Figure 2 I show $`U`$ and $`R`$ surface brightness profiles for the CDG in the distant, $`z=0.1386`$ cooling flow cluster Abell 1068, whose cooling rate is estimated to be $`\dot{m}_x\sim 400\mathrm{M}_{}\mathrm{yr}^{-1}`$ (Allen et al. 1995). The $`U`$-band profile rises above the $`r^{1/4}`$ profile in the inner several kpc of the galaxy. Beyond the inner few arcsec, both the $`U`$ and $`R`$ profiles follow the $`r^{1/4}`$ profile until reaching the cD envelope at $`\mu (R)\sim 25\mathrm{mag}\mathrm{arcsec}^{-2}`$, where the surface brightness rises above the $`r^{1/4}`$ profile with an amplitude of $`\sim 0.5`$ mag.
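The $`r^{1/4}`$-law comparison can be sketched numerically with the standard de Vaucouleurs form $`\mu (r)=\mu _e+8.327[(r/r_e)^{1/4}-1]`$; the effective radius, effective surface brightness and envelope parameters below are illustrative placeholders, not fitted values for Abell 1068.

```python
# Minimal sketch of a de Vaucouleurs r^{1/4} surface brightness profile
# plus a crude cD envelope excess.  All parameters are illustrative
# placeholders, not a fit to the Abell 1068 photometry.

def mu_r14(r_kpc, mu_e=22.0, r_e=20.0):
    """r^{1/4}-law surface brightness in mag arcsec^-2."""
    return mu_e + 8.327 * ((r_kpc / r_e) ** 0.25 - 1.0)

def mu_cd(r_kpc, r_env=80.0, excess=0.5):
    """Same profile with a ~0.5 mag envelope excess beyond r_env (kpc)."""
    return mu_r14(r_kpc) - (excess if r_kpc > r_env else 0.0)

# The cD envelope shows up as light brighter (numerically smaller mu)
# than the extrapolated r^{1/4} law in the outer halo.
print(mu_r14(20.0))                  # equals mu_e at r = r_e
print(mu_r14(120.0) - mu_cd(120.0))  # envelope excess: 0.5 mag
```

In this convention, an envelope appears as a positive difference between the extrapolated law and the measured profile; inside the envelope radius the two curves coincide.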
Apart from the blue core, this surface brightness profile is typical for cD galaxies in clusters with and without cooling flows (Porter et al. 1991). There is little evidence to suggest that the average halo structure and colors of cooling flow galaxies have recent star formation in excess of what is seen in non cooling flow galaxies. The blue inner regions appear to be the result of accretion concentrated onto the core of a preexisting galaxy, but evidently not throughout its volume. ## 3 Radio Triggered Star Formation Most cooling flows harbor luminous $`10^{40}`$–$`10^{42}\mathrm{ergs}\mathrm{s}^{-1}`$ emission line nebulae extending several to tens of kpc around the CDG nuclei (Heckman et al. 1989; Baum 1991). The line emission and blue optical continuum are usually extended on similar spatial scales (Cardiel et al. 1998), and the radio and emission line morphologies and powers are correlated, although with a large degree of scatter (Baum 1991, but also see Allen 1995). The tendency for strong line emission from warm, $`\sim 10^4`$ K gas to lie along the edges of radio sources is particularly germane to understanding star formation in these objects. An early example was seen in the Abell 1795 CDG (van Breugel et al. 1984), and a more striking example is seen in H$`\alpha `$ imagery of the Abell 2597 CDG with the Hubble Space Telescope (Koekemoer et al. 1999). Furthermore, the radio jets in Abell 1795 and Abell 2597 bend at roughly 90-degree angles and inflate into radio lobes at the locations of dust clouds embedded in the emission-line nebulae (Sarazin et al. 1994; McNamara et al. 1996). Their disrupted (i.e. bending) radio morphologies are almost surely the result of collisions between the radio jets and cold, dense clouds associated with the line-emitting gas.
At the same time, Abell 2597 and Abell 1795 have bright blue optical continuum (blue lobes) along their radio lobes (McNamara & O’Connell 1993; McNamara 1997), much like the so-called alignment effect seen in distant radio galaxies (McCarthy 1993). That this phenomenon is seen in a relatively small sample of CDGs is particularly interesting. Unlike distant radio galaxies, the cooling flow CDGs were selected on the basis of their X-ray properties, rather than their radio properties. Upon their discovery, two models emerged to explain the blue lobes: jet-induced star formation (De Young 1995) and scattered light from an obliquely directed active nucleus (Sarazin & Wise 1993; Murphy & Chernoff 1993; Crawford & Fabian 1993). The scattered light hypothesis predicts the blue lobe light should be polarized, as is found in many distant radio galaxies exhibiting the alignment effect (Jannuzi & Elston 1991; di Serego Alighieri 1989). $`U`$-band continuum polarization measurements for the Abell 1795 and Abell 2597 CDGs obtained with the KPNO 4m Mayall telescope gave upper limits of $`<6\%`$ to the degree of polarization in both objects, which effectively excluded the scattering hypothesis (McNamara et al. 1996; 1999). Subsequent HST images of both objects resolved the blue lobes into knots of young star formation (McNamara et al. 1996; Pinkney et al. 1996; Koekemoer et al. 1999). The HST $`R`$-band image of Abell 1795’s blue knots is shown against a contour map of the radio source in Figure 3. The stellar knots are found along the edges of the radio lobes and near the collision sites of the radio plasma and cold gas. They are not found primarily along the radio jets, as would be expected if the triggering mechanism were shocks traveling transverse to the jet trajectory, as predicted in jet-induced star formation models (De Young 1995; Daly 1990; Begelman & Cioffi 1989).
The observations suggest that momentum transferred through direct collisions between the radio plasma and cold gas clouds may be a more suitable triggering mechanism. (D. De Young pointed out that the strongest shocks would occur at the point of impact, and these shocks provide a possible triggering mechanism.) Although star formation at rates of $`10`$–$`40\mathrm{M}_{}\mathrm{yr}^{-1}`$ appears to be occurring in these objects, the radio sources may not have triggered all star formation. In addition to the blue light along the radio lobes, a more diffuse blue component that accounts for more than half the blue light is seen. Therefore, the radio source may be augmenting star formation in preexisting star bursts. ## 4 A Burst Mode of Star Formation in Cooling Flows Tracing the history of a stellar population, even in isolation, is difficult. The problem is further complicated when the population is embedded in a bright background galaxy. The blue lobes in the Abell 1795 and Abell 2597 CDGs are the first clear-cut evidence for a burst mode of star formation in cooling flows. The blue lobes cannot be old because the alignment between the radio and optical structures can last only a fraction of the radio source lifetime and the stellar diffusion time scale, both $`\sim 10^7`$ yr. Additional evidence supporting a burst mode of star formation in cooling flows has accumulated in recent years. Cardiel et al. (1998) have argued using the Mg II absorption line index, the 4000 Å break, and far UV colors that short duration bursts ($`<10^7`$ yr) or constant star formation with ages $`\lesssim 1`$ Gyr best fit Bruzual model isochrones. While acknowledging the large uncertainties in the population isochrones, a burst mode of star formation is unexpected in simple, continuous cooling flow models (e.g. Fabian 1991). If star formation is indeed being fueled by cooling flows, it would seem that gas is not accreting continuously.
Transient sources of fuel, such as mergers or stripping, may also be contributing. ## 5 Are CDGs in Cooling Flows Low Radio Power Siblings of High Redshift Radio Galaxies? The premise that blue lobes are sites of star formation is supported by several facts. The absence of a polarized signal from the blue lobes effectively excludes the scattered light hypothesis. Synchrotron radiation can be excluded by the absence of a detailed correlation between the radio source and blue lobes, and the nebular continuum is insufficiently strong to account for the blue color excesses. However, Balmer absorption is seen in the spectra of some objects (Allen 1995), and the emission line luminosities and H II region characteristics are often consistent with powering by young stars (Shields & Filippenko 1990; Voit & Donahue 1997), so star formation is almost certainly the primary source of the color excesses in CDGs. The situation is more complex in the high redshift powerful radio galaxies (HzRGs) exhibiting the alignment effect. The aligned optical continuum in HzRGs is often strongly polarized, which has been interpreted as the signature of scattered light from an obliquely-directed active nucleus (di Serego Alighieri et al. 1989; Jannuzi & Elston 1991). In Figure 4, I plot our polarized flux upper limits for the blue lobes in Abell 1795, Abell 2597, and the alignment regions of several HzRGs against rest frame 20 cm radio power (see McNamara et al. 1999). The polarized fluxes are measured in the rest frame $`U`$-band, and can be compared directly. Although the HzRGs are 2–3 orders of magnitude more powerful in their radio and polarized fluxes, a linear extrapolation downward between radio power and polarized flux from the mean HzRG value to the cooling flows would predict a lower polarized flux than is observed.
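The extrapolation argument can be sketched in a few lines; every number below is a hypothetical placeholder (the measured powers and limits are given in McNamara et al. 1999), and the only point is the assumed proportionality between polarized flux and radio power.

```python
# Sketch of the scaling argument: if polarized flux scales in
# proportion to rest-frame radio power, the mean HzRG polarized flux
# extrapolated down ~3 orders of magnitude in radio power falls below
# the cooling-flow detection limits.  All numbers are hypothetical.

p_radio_hzrg = 1.0e28   # W/Hz, placeholder mean HzRG radio power
f_pol_hzrg = 1.0        # placeholder mean HzRG polarized flux (arb. units)
p_radio_cdg = 1.0e25    # W/Hz, placeholder cooling-flow CDG radio power
f_pol_limit = 0.06      # placeholder polarized-flux detection limit (arb. units)

# Linear scaling: f_pol proportional to P_radio.
f_pol_predicted = f_pol_hzrg * (p_radio_cdg / p_radio_hzrg)

# The predicted polarized flux sits well below the detection limit, so
# a non-detection in the CDGs is expected under this scaling.
print(f_pol_predicted, f_pol_predicted < f_pol_limit)
```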
Assuming similar host galaxy properties and scattering environments in both types of object, and further assuming the polarized flux scales approximately in proportion to radio power (see McNamara et al. 1999), at the precision of our measurements we should not have detected a polarized flux in Abell 2597 and Abell 1795. In addition, it would seem that the polarized fluxes of HzRGs generally account for a large but incomplete fraction of the blue light, and occasionally unpolarized star light dominates (e.g. van Breugel et al. 1998). It is possible, then, that the blue lobes in cooling flows and the alignment effect in powerful radio galaxies are similar phenomena. But while starlight dominates the aligned continuum in lower radio power CDGs, scattered light dominates in HzRGs owing to their more powerful nuclei (McNamara et al. 1999). ## 6 An Analysis of New Imagery for the Abell 1068 CDG In this section I discuss new optical imagery of the Abell 1068 central cluster galaxy. The data provide new clues to the relationship between star formation and the radio source, and raise new questions regarding the mechanism fueling star formation. $`U`$-band CCD imaging is the most sensitive means of isolating and studying the bluest galaxy populations from the ground. The blue populations in CDGs often contribute more than half of the central $`U`$-band light, while the fraction decreases to $`10\%`$ or less in the $`R`$ and $`I`$ bands. The blue populations can therefore be isolated by modeling and subtracting the background galaxy, leaving the blue regions in residual. By doing so in two or more pass bands, intrinsic colors of the blue population can be estimated. I applied this procedure to the $`z=0.1386`$ Abell 1068 CDG, one of the most distant and largest cooling flows ($`\dot{m}_x\sim 400\mathrm{M}_{}\mathrm{yr}^{-1}`$) discovered in the $`ROSAT`$ All Sky Survey (Allen et al. 1995). It is also one of the bluest CDGs in my sample.
Figure 5 presents four panels showing the $`U`$-band image to the upper left, a $`U-R`$ color map (grayscale) superposed on $`R`$-band contours to the upper right, $`U`$-band contours, after subtracting a smooth $`U`$-band model CDG galaxy, on the 20 cm FIRST radio grayscale image ($`FWHM=5.4`$ arcsec), lower left, and an H$`\alpha `$ map, lower right. The panels are registered to the same scale; north is at top and east is to the left. Gray regions in the color map are abnormally blue. Several features are noteworthy. First, the central region within a 13 kpc diameter is $`0.5`$–$`0.9`$ mag bluer than normal. The nuclear colors, after K correction, range between $`(U-R)_{\mathrm{K},0}\sim 1.5`$–$`2.3`$ (the foreground reddening is negligible). An arc of blue light lies 8 arcsec (25 kpc) in projection to the north-west of the nucleus, and a large wisp or arc of blue light extends to the south-west, until meeting a bright blue patch of light 13 arcsec to the east of the nucleus, and about 8 arcsec to the north of the bright neighboring galaxy to the south-west of the nucleus. This feature is nearly as blue as the nucleus, with $`(U-R)_{\mathrm{K},0}\sim 1.6`$. Finally, several blue knots, 15–30 arcsec north-west of the nucleus, appear along a line between the nucleus and a disturbed galaxy 35 arcsec to the north-west of the nucleus. The remaining colors of the off-nuclear features range from $`(U-R)_{\mathrm{K},0}\sim 2.0`$ to the normal background color $`(U-R)_{\mathrm{K},0}\sim 2.4`$. After subtracting a model galaxy from the $`U`$ and $`R`$ CDG images, I find an intrinsic nuclear blue population color $`(U-R)_{\mathrm{K},0}\sim 0.2`$. This color is consistent with Bruzual-Charlot population model colors for a $`\sim 10^7`$ yr old burst population or continuous star formation for $`\lesssim 0.1`$ Gyr. The colors are bluer than expected for star formation in a cooling flow that has been accreting continuously for $`>1`$ Gyr.
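The two-band model subtraction used above can be sketched as follows; the magnitudes are hypothetical placeholders, and only the procedure (subtract the model fluxes in each band, then form a color from the residuals) follows the text.

```python
import math

# Sketch: isolate the blue population's intrinsic color by subtracting
# a smooth model of the old galaxy in two bands.  The magnitudes are
# hypothetical placeholders, not the Abell 1068 photometry.

def mag_to_flux(m):
    """Relative flux for magnitude m (arbitrary zero point)."""
    return 10.0 ** (-0.4 * m)

def flux_to_mag(f):
    return -2.5 * math.log10(f)

# Hypothetical aperture magnitudes: total light vs. smooth-model light.
u_total, u_model = 18.0, 18.8
r_total, r_model = 16.5, 16.7

# Residual (young-population) fluxes in each band.
u_res = mag_to_flux(u_total) - mag_to_flux(u_model)
r_res = mag_to_flux(r_total) - mag_to_flux(r_model)

# Intrinsic U-R color of the residual population: much bluer than the
# raw U-R color of the aperture as a whole.
ur_res = flux_to_mag(u_res) - flux_to_mag(r_res)
print(round(ur_res, 2))
```

Because the zero points cancel in the color, the residual color depends only on the flux differences, which is why the subtraction must be done in flux, not in magnitudes.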
The accretion population’s luminous mass is $`\sim 2\times 10^8\mathrm{M}_{}`$, which would correspond to a star formation rate of $`80\mathrm{M}_{}\mathrm{yr}^{-1}`$. The off-nuclear colors, being a few tenths of a magnitude redder than the nuclear colors, are consistent with a several $`\times 10^7`$ yr old burst or continuous star formation for $`<1`$ Gyr. The off-nuclear blue regions are apparently not in dynamical equilibrium. They appear to be stripped debris, possibly from the bright neighboring galaxies to the north-west and south-west of the nucleus. The disturbed appearance of the north-west galaxy’s $`R`$-band isophotes supports the stripping hypothesis. The blue regions are considerably bluer than their putative parent galaxies, which would be consistent with the blue material being composed primarily of young stars that formed out of cold material stripped from the galaxies. ### 6.1 Radio Triggered Star Formation in Abell 1068? Both the Abell 1068 CDG and the bright galaxy to the south-west of the CDG are radio sources. Each has a radio power of $`8.5\times 10^{24}`$ W/Hz, which is typical for FR I radio sources. In addition, the nucleus is embedded in a luminous emission line nebula with an H$`\alpha `$ luminosity $`>2\times 10^{42}\mathrm{ergs}\mathrm{s}^{-1}`$ (Allen et al. 1992). Although only a low resolution radio map is available, the radio source appears extended to the north-west in the same direction as a tongue of H$`\alpha `$ emission extending from the nucleus. Both the radio source and the tongue of H$`\alpha `$ emission terminate 8 arcsec (25 kpc) to the north-west of the nucleus at the location of the bright blue arc. Such a close spatial relationship between the radio source, nebular emission, and knots of star formation is common in powerful radio galaxies in general, and in cooling flows in particular.
It is tempting to speculate that, with high resolution radio maps in hand, the radio and optical morphologies will again be consistent with radio triggered star formation in the blue arc to the north-west, much like Minkowski’s Object (van Breugel et al. 1985). ## 7 The Fueling Mechanism The origin of the material fueling star formation is of fundamental interest. A cooling flow origin is supported by the correlation between the central blue color excess in CDGs and the cooling rate of the intracluster gas, derived independently from X-ray observations, shown in Figure 1 (e.g. McNamara 1997; Cardiel et al. 1998). Were major galaxy mergers supplying the fuel, this correlation would be difficult to explain. I would then expect CDGs experiencing significant bursts of star formation to be observed with equal frequency in cooling flow and non-cooling flow clusters alike, but they are not. Nonetheless, the evidence supporting periodic bursts of star formation implies an intermittent source of fuel. Ram pressure stripping of cold gas from neighboring cluster galaxies may be such a source of fuel, and might account for the $`\dot{m}_x`$–blue color correlation. The cooling rate scales as $`\dot{m}_x\propto \rho _{\mathrm{gas}}^2`$, and the ram pressure force on a parcel of gas scales as $`\rho _{\mathrm{gas}}v^2`$. Therefore, the dense cooling flow regions provide a large stripping cross section capable of sweeping cold, dense molecular gas from cluster dwarf galaxies and spirals, which would rain onto the parent CDG. Abell 1068 may be a case in point, as might the Abell 1795 CDG (McNamara et al. 1996). ## 8 Cooling Flows and the Chandra X-ray Observatory As I wrote this article, Chandra was launched and began sending astonishingly crisp images of cosmic X-ray sources. During the next few years, many of Chandra’s targets will be clusters of galaxies, and the cooling flows promise some of the most interesting and productive cluster science.
Their bright cores, the characteristic signature of a cooling flow, afford Chandra the opportunity to take full advantage of its nearly perfect, half arcsecond mirrors. For the first time, we will be capable of mapping structure in the X-ray-emitting gas on angular scales smaller than the radio sources and star formation regions. The temperature and density maps on these small scales will provide local cooling rates that can be compared directly to optically-derived star formation rates. Perhaps more than any other X-ray telescope planned or in queue, Chandra will advance our understanding of the dynamical and thermal state of cluster cores, which hopefully will bring the long-standing cooling flow problem to resolution. ## 9 Summary • Unusually blue colors associated with young, massive stars are frequently seen in the central regions of cooling flow CDGs. The probability of detecting a blue population increases sharply with $`\dot{m}_x`$ derived from X-ray observations. • Star formation in cooling flows apparently occurs in repeated, short duration ($`<1`$ Gyr) bursts, not continuously, as would be expected in standard cooling flow models. • Bursts of star formation are often triggered by the radio sources. • Cold material stripped from neighboring galaxies may feed the radio source and fuel some star formation in CDGs.
no-problem/9911/hep-ph9911500.html
RAL-TR-99-070 LC-TH-1999-017 November 1999 # Six-jet production at $`e^+e^{-}`$ linear colliders<sup>1</sup><sup>1</sup>1Talk given at the 2nd ECFA/DESY Study on Physics and Detectors for a Linear Electron-Positron Collider, Lund, Sweden, 28-30 June 1998. S. Moretti Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK. ## Abstract The calculation of the tree-level QCD processes $`e^+e^{-}\to q\overline{q}gggg`$, $`q\overline{q}q^{\prime }\overline{q}^{\prime }gg`$ and $`q\overline{q}q^{\prime }\overline{q}^{\prime }q^{\prime \prime }\overline{q}^{\prime \prime }`$ has recently been accomplished. We highlight here the relevance of such reactions for some of the physics at future electron-positron linear accelerators. Figure 1: The total cross section of six-jet events at LO in the C scheme, decomposed in terms of the three contributions $`q\overline{q}gggg`$ (solid), $`q\overline{q}q^{\prime }\overline{q}^{\prime }gg`$ (dashed) and $`q\overline{q}q^{\prime }\overline{q}^{\prime }q^{\prime \prime }\overline{q}^{\prime \prime }`$ (dotted). 1. Motivations for an exact calculation of $`e^+e^{-}\to 6`$ partons As accelerator physics enters the Linear Collider (LC) epoch, one will encounter a long series of resonant processes ending up with six-jet signatures. One should recall top quark production and decay for a start, whose study will represent one of the main areas of activity at a future LC. Top quarks will be produced in pairs, via $`e^+e^{-}\to \gamma ^{*},Z^{*}\to t\overline{t}`$, followed most of the time by $`t\overline{t}\to b\overline{b}W^+W^{-}\to \text{6 jets}`$. Then one should not forget the new generation of gauge boson resonances, such as $`e^+e^{-}\to ZW^+W^{-}`$ and $`ZZZ`$, and their dominant six-jet decays. The interest in these reactions resides primarily in the possibility of an accurate study of the gauge structure of the electroweak (EW) model. In the same respect, one could also add highly-virtual photonic processes, like $`e^+e^{-}\to \gamma ^{*}W^+W^{-}`$, $`\gamma ^{*}ZZ`$, $`\gamma ^{*}\gamma ^{*}Z`$ and $`\gamma ^{*}\gamma ^{*}\gamma ^{*}`$, in which the photons split into quark-antiquark pairs.
In addition, of particular relevance are reactions involving the Higgs particle, $`\varphi `$, e.g., in the Standard Model (SM), such as $`e^+e^{-}\to Z\varphi \to ZW^+W^{-}`$ and $`Z\varphi \to ZZZ`$ – as discovery channels of a heavy scalar boson – or $`e^+e^{-}\to Z\varphi \to Z\varphi \varphi `$ – as a means to study the Higgs potential of a light scalar (the latter decaying to $`b\overline{b}`$). Given such a wide scope offered by six-jet final states, it is of paramount importance to have strong control on the backgrounds. The parton-shower (PS) event generators (e.g., HERWIG and JETSET/PYTHIA) represent a valuable instrument in this respect, as they are able to describe the full event, from the initial hard scattering down to the hadron level. However, Matrix Element (ME) models are acknowledged to describe the large angle distributions of the QCD radiation better than the PS generators do (see, e.g., ), while the latter are in fact superior in the small angle dynamics. As in the processes we just mentioned the final state jets are typically produced at large angle and are isolated (being the decay products of massive objects), the need for exact ME computations should be manifest<sup>2</sup><sup>2</sup>2The matching of fixed-order (as well as resummed) multi-parton final states with the subsequent PS to finally reach the hadronisation stage is also a pressing matter, towards which some progress has recently been made. As for theoretical advances in this respect, studies of $`e^+e^{-}`$ 6-quark EW processes are well under way (see Ref. for a review). However, a large fraction of the six-jet cross section comes from QCD interactions. The case of QCD six-jet production from $`W^+W^{-}`$ decays was considered in Ref.
In this note, we discuss the dominant, tree-level QCD contributions to six-jet final states through the order $`𝒪(\alpha _s^4)`$, i.e., the processes: $$e^+e^{-}\to \gamma ^{*},Z^{*}\to q\overline{q}gggg,q\overline{q}q^{\prime }\overline{q}^{\prime }gg,q\overline{q}q^{\prime }\overline{q}^{\prime }q^{\prime \prime }\overline{q}^{\prime \prime },$$ (1) where $`q,q^{\prime }`$ and $`q^{\prime \prime }`$ represent any possible flavours of quarks (massless and/or massive) and $`g`$ is a gluon, whose computation has recently been tackled. 2. Numerical results In order to select a six-‘jet’ sample we apply a jet clustering algorithm directly to the ‘quarks’ and ‘gluons’ in the final state of the processes (1). For illustrative purposes, we use the Cambridge (C) jet-finder only. This is based on the ‘measure’ $$y_{ij}=\frac{2\mathrm{min}(E_i^2,E_j^2)(1-\mathrm{cos}\theta _{ij})}{s_{ee}},$$ (2) where $`E_i`$ and $`E_j`$ are the energies and $`\theta _{ij}`$ the separation of any pair $`ij`$ of particles in the final state, with $`i<j=2,\dots ,6`$, to be compared against a resolution parameter denoted by $`y`$. In our tree-level approximation, the selected rate is nothing other than the total partonic cross section with a cut $`y_{ij}>y`$ on any possible $`ij`$ combination. The summations over the three reactions (1) and over all possible ‘massless’ (see Ref. for a dedicated study of mass effects) combinations of quark flavours in each of these have been performed here. As for numerical inputs, they can be found in Ref. The six-jet event rate induced by the $`𝒪(\alpha _s^4)`$ QCD events at $`\sqrt{s_{ee}}=500`$ GeV – the value that we use here for the centre-of-mass (CM) energy of a LC – can be rather large. Adopting a yearly luminosity of, e.g., 100 fb<sup>-1</sup> and assuming a standard evolution of $`\alpha _s`$ with increasing energy, at $`y=0.001`$ one should expect some 2300 events per annum: see Fig. 1. However, these rates decrease rapidly as $`y`$ gets larger. From Fig.
1, one can appreciate how the dominant component is due to two-quark-four-gluon events, followed by the four-quark-two-gluon and six-quark ones, respectively. These relative rates are of particular relevance to a LC environment. In fact, the capability of the detectors of distinguishing between jets due to quarks (and among these, bottom flavours in particular: e.g., in selecting top and light Higgs decays) and gluons is of crucial importance there, in order to perform dedicated searches for old and new particles. The concern about background effects at a LC due to six-jet events via $`𝒪(\alpha _s^4)`$ QCD comes about if one considers that they may naturally survive some of the top signal selection criteria. It is well known that the large value of the top mass (here, $`m_t=175`$ GeV) leads to rather spherical events. Therefore, shape variables such as thrust and sphericity represent useful means to disentangle $`e^+e^{-}\to t\overline{t}`$ events. For example, a selection strategy that exploits neither lepton identification nor the tagging of $`b`$-jets was outlined in Ref. The requirements are a large particle multiplicity, a high number of jets (eventually forced to six) and a rather low(high) thrust(sphericity). Jets are selected according to a jet clustering algorithm (in our case, the C one with $`y=0.001`$, for the sake of illustration). Clearly, six-jet events (1) meet the first two criteria. As for the thrust and sphericity distributions, these are shown in Fig. 2. From there, the overlap of the $`t\overline{t}`$ and QCD spectra is evident. Searches for resonances will often need to rely on the mass reconstruction of di-jet systems. Therefore, it is worth looking at the invariant mass distributions which will be produced by all possible two-parton combinations $`ij`$ in (1). As usual in multi-jet analyses, we first order the jets in energy, so that $`E_1>\dots >E_6`$.
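The pairwise measure of Eq. (2) is straightforward to code; the six-parton configuration below is a hypothetical, symmetric toy event (not one generated from the matrix elements), used only to show the resolution cut at work.

```python
import itertools
import math

# Sketch of the Cambridge-style measure of Eq. (2): a parton-level
# configuration counts as a resolved six-jet event when every pair
# satisfies y_ij > y_cut.  Partons are (E, px, py, pz) in GeV; the toy
# event below is hypothetical, not generated from the matrix elements.

def y_ij(p1, p2, s_ee):
    e1, e2 = p1[0], p2[0]
    n1 = math.sqrt(p1[1] ** 2 + p1[2] ** 2 + p1[3] ** 2)
    n2 = math.sqrt(p2[1] ** 2 + p2[2] ** 2 + p2[3] ** 2)
    cos_th = (p1[1] * p2[1] + p1[2] * p2[2] + p1[3] * p2[3]) / (n1 * n2)
    return 2.0 * min(e1, e2) ** 2 * (1.0 - cos_th) / s_ee

def min_y(partons, s_ee):
    return min(y_ij(a, b, s_ee) for a, b in itertools.combinations(partons, 2))

sqrt_s = 500.0            # GeV, the LC energy used in the text
s_ee = sqrt_s ** 2
e = sqrt_s / 6.0          # six massless partons of equal energy

# Toy event: a planar, momentum-balanced 'hexagon' of partons.
partons = [(e, e * math.cos(k * math.pi / 3.0),
               e * math.sin(k * math.pi / 3.0), 0.0) for k in range(6)]

# Adjacent pairs give the smallest measure, y_ij = 1/36, well above
# y = 0.001, so this configuration is resolved as six jets at that cut.
print(min_y(partons, s_ee) > 0.001)
```

In a real analysis the same pairwise loop would also feed the clustering step; here only the resolution cut of the tree-level rate is illustrated.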
Then, we construct $$m_{ij}\equiv \frac{M_{ij}^2}{s_{ee}}=\frac{2E_iE_j(1-\mathrm{cos}\theta _{ij})}{s_{ee}}.$$ (3) These fifteen quantities are shown in Fig. 3 for the C scheme at $`y=0.001`$. We found it convenient to plot the ‘reduced’ invariant masses $`m_{ij}`$ rather than the actual ones $`M_{ij}`$, as energies and angles ‘scale’ with the CM energy in such a way that the shape of the distributions is largely unaffected by changes in the value of $`\sqrt{s_{ee}}`$ in the CM energy range relevant to a LC. In the figure, it is interesting to notice the ‘resonant’ behaviour of some of the distributions. It is well known that photon bremsstrahlung generated by the incoming $`e^+e^{-}`$ beams (or Initial State Radiation, ISR) can be quite sizable at a LC<sup>3</sup><sup>3</sup>3Another peculiar feature is the presence of Linac energy spread and beamstrahlung effects. For a TESLA collider design, these are however smaller in comparison , so we leave them aside here.. In order to implement ISR we resort to the so-called Electron Structure Function approach by using the $`𝒪(\alpha _{em}^2)`$ expressions of Ref. . As a simple exercise, we plot the production rates for the sum of the processes (1) in the presence of ISR in Fig. 4, as a function of the resolution parameter $`y`$ in the C scheme. They are compared to those obtained without ISR (that is, the sum of the rates in Fig. 1). We see that the curve corresponding to the processes convoluted with ISR lies above the lowest-order one. This is rather intuitive, as it is well known that the radiation of photons from the incoming electrons and positrons tends to lower the ‘effective’ CM energy of the collision . At $`y=0.001`$, the difference in the rates is around $`25\%`$ and this tends to increase as $`y`$ gets larger. In contrast, we have checked that the shape of the differential distributions (such as those in Fig. 3) suffers little from ISR. 3. 
Summary The exact calculation of the processes $`e^+e^{-}\to q\overline{q}gggg`$, $`q\overline{q}q^{\prime }\overline{q}^{\prime }gg`$ and $`q\overline{q}q^{\prime }\overline{q}^{\prime }q^{\prime \prime }\overline{q}^{\prime \prime }`$ (for both massless and massive quarks) at leading order in perturbative QCD has been completed. In this short note we have emphasised the strong impact that such reactions can have as backgrounds to many of the multi-jet studies foreseen at a LC. A numerical program is available for simulations, based on helicity amplitudes, thus also allowing for initial state polarisation. Acknowledgements We thank the conveners of the QCD/$`\gamma \gamma `$ working group for the stimulating environment they have been able to create during the workshops.
no-problem/9911/nucl-th9911037.html
ar5iv
text
# NUCLEAR EFFECTS IN HIGH ENERGY PHOTON AND NEUTRAL PION PRODUCTION ## Acknowledgments This work is supported by US-DOE grant DE-FG02-86ER40251 and by Hungarian grant OTKA F019689.
no-problem/9911/hep-th9911147.html
ar5iv
text
# A brief review of “little string theories” ## 1 Introduction One of the most surprising results which came out of the developments in string theory in the last few years is the existence of consistent non-gravitational theories in five and six space-time dimensions, even though no consistent Lagrangians are known for interacting theories in these dimensions. Generally these theories have been discovered by considering some limit of string theory configurations involving 5-branes and/or singularities. The higher dimensional non-gravitational theories may be divided into two classes. One class, which generally arises from a low-energy limit of string theory (or M theory), includes superconformal field theories in five and six dimensions<sup>1</sup><sup>1</sup>1Supersymmetry plays an essential role in proving the existence of these theories; it is not clear whether non-supersymmetric theories also exist above four dimensions.. These theories seem to be standard local field theories, even though they have no good Lagrangian description (they are sometimes called “tensionless string theories” since many of them have BPS-saturated strings on their moduli space whose tensions go to zero at the conformal point). We will not discuss these theories here. The other class of theories, which was given the name “little string theories” (LSTs) by , is generally obtained by taking the string coupling to zero in some configuration of NS 5-branes and/or singularities which is not well-described by perturbation theory, and which in fact remains non-trivial (but decoupled from gravity) even after taking the string coupling to zero. The string scale $`M_s`$ remains constant in this limit and plays an important role in the dynamics of these theories; they appear to be non-local theories with some string theory-like properties. 
In this contribution I will try to summarize all that is currently known about the simplest theories of this type, which are six dimensional theories with 16 supercharges arising as the $`g_s\to 0`$ limit of NS 5-branes in type IIA or type IIB string theory. There are many interesting results on lower dimensional LSTs, LSTs with less supersymmetry and compactifications of LSTs which I will not have room to review here. The transparencies and audio for this talk are available at . ### 1.1 Definition and simple properties of “little string theories” The simplest definition of six dimensional LSTs with 16 supercharges comes from looking at $`k`$ parallel and overlapping NS 5-branes in type IIA or type IIB string theory, with $`k>1`$, and taking the string coupling $`g_s\to 0`$ (see also ). The gravitational interactions go to zero in this limit, but the couplings of the fields associated with the NS 5-branes remain non-trivial even after taking $`g_s`$ to zero; this can be seen, for instance, by analyzing the low-energy theory (as described below). The string scale $`M_s`$ is kept finite in the limit, and it is the only parameter of the LSTs (except for the discrete parameter $`k`$); in particular, there is no continuous dimensionless coupling parameter in these theories, so unlike conventional string theories they have no obvious weakly coupled limits which can serve as the starting point for a perturbative expansion. The NS 5-branes break half of the supersymmetry of type II string theories, and thus there are no forces between them and we can indeed put $`k`$ of them on top of each other. In six dimensions (like in ten or two dimensions) supersymmetry is generally chiral, and the dimensional reduction of type II string theory to six dimensions has $`𝒩=(2,2)`$ supersymmetry. In the type IIA case the NS 5-branes preserve a chiral half of the supersymmetry so the resulting LST has $`𝒩=(2,0)`$ supersymmetry, while in the type IIB case it has $`𝒩=(1,1)`$ supersymmetry. 
We can obtain equivalent definitions of the LSTs by using various duality symmetries. Using the duality between type IIA string theory and M theory we see that the $`(2,0)`$ LST can also be derived from $`k`$ 5-branes in M theory, with a transverse circle of radius $`R`$, in the limit $`R\to 0`$, $`M_p\to \mathrm{\infty }`$ with $`RM_p^3=M_s^2`$ kept constant. Using S-duality in type IIB we see that the $`(1,1)`$ LST can be derived from the $`g_s\to \mathrm{\infty }`$ limit of $`k`$ D5-branes in type IIB string theory. Additional definitions arise from recalling that $`k`$ NS 5-branes with a transverse circle are T-dual to an $`A_{k-1}`$ singularity with a compact circle , and noting that the existence of such a circle does not affect the decoupled theory on the NS 5-branes (it affects only the bulk modes which decouple when $`g_s\to 0`$). Thus, by using this T-duality, we conclude that the $`(2,0)`$ LSTs arise also as the $`g_s\to 0`$ limit of type IIB string theory on an $`A_{k-1}`$ singularity, and the $`(1,1)`$ LSTs arise also as the $`g_s\to 0`$ limit of type IIA string theory on an $`A_{k-1}`$ singularity<sup>2</sup><sup>2</sup>2There is a subtlety here involving the degree of freedom corresponding to the center of mass position of the NS 5-branes which is not evident in the T-dual picture; generally we can ignore this degree of freedom since it is free and decoupled.. The last definition suggests a generalization to the $`g_s\to 0`$ limit of $`D_n`$ and $`E_n`$ type singularities, which also correspond to LSTs with 16 supercharges; we will not discuss these theories in detail here, but let us note that the full classification of LSTs in six dimensions with 16 supercharges includes theories with $`(2,0)`$ and $`(1,1)`$ supersymmetry of type $`G`$ (where $`G=A_k,D_k`$ or $`E_k`$) for any simply-laced Lie algebra $`G`$. Some simple properties of the LSTs with 16 supercharges follow from the above definitions : 1. 
Low-energy behaviour : For the $`(2,0)`$ LSTs it follows from the M theory definition that the low-energy behaviour (well below the characteristic scale $`M_s`$) is given by the low-energy theory on $`k`$ M5-branes, which is an $`𝒩=(2,0)`$ superconformal theory (see, e.g., ). For the $`(1,1)`$ LSTs it follows from the definition using D5-branes in type IIB string theory that the low-energy behaviour is given by an $`𝒩=(1,1)`$ $`U(k)`$ gauge theory whose gauge coupling is $`g_{YM}^2=1/M_s^2`$. For large $`k`$, ’t Hooft scaling suggests that the perturbative gauge theory (which is free at low energies) breaks down at an energy squared scale of order $`1/(g_{YM}^2k)=M_s^2/k`$. 2. The moduli space metric of theories with 16 supercharges cannot receive any quantum corrections. For the $`(1,1)`$ LSTs the type IIB constructions show that it is $`\mathbb{R}^{4k}/S_k`$, corresponding to the transverse positions of the $`k`$ identical 5-branes. For the $`(2,0)`$ LSTs the M theory construction shows that it is $`(\mathbb{R}^4\times S^1)^k/S_k`$, where the radius of the $`S^1`$ is $`M_s^2`$ (recall that the canonical dimension of a scalar field in six dimensions is two). The low-energy theory at generic points in the moduli space involves $`k`$ tensor multiplets in the $`(2,0)`$ case and $`k`$ vector multiplets in the $`(1,1)`$ case. 3. Type IIA string theory on a circle of radius $`R`$ is T-dual to type IIB string theory on a circle of radius $`1/(M_s^2R)`$, and an NS 5-brane wrapped on the circle is transformed under this duality to an NS 5-brane wrapped on the dual circle. Thus, T-duality commutes with the limit defining the LSTs, and the $`(2,0)`$ $`A_{k-1}`$ LST compactified on a circle of radius $`R`$ is dual to the type $`(1,1)`$ $`A_{k-1}`$ LST compactified on a circle of radius $`1/(M_s^2R)`$. Similarly, the LSTs compactified on $`T^d`$ have an $`O(d,d,\mathbb{Z})`$ T-duality symmetry. Note that this shows that T-duality can exist even in non-gravitational theories. 
The existence of such a T-duality symmetry is the first indication we see that the LSTs are non-local; in particular, after toroidal compactification, they do not have a unique energy-momentum tensor (the same theory can be coupled to different gravitational backgrounds). 4. All LSTs have BPS-saturated strings of tension $`T=M_s^2`$ (at the origin of moduli space; in some cases there are more BPS-saturated strings away from the origin). These may be viewed as marginally bound states of fundamental strings with the NS 5-branes. At the origin of moduli space, the $`(2,0)`$ LST has no other BPS states, while in the $`(1,1)`$ LST the low-energy gluons (and their superpartners) are massless BPS particles. After compactification there are many additional BPS states which we will not discuss here (see, e.g., ). ### 1.2 Motivations At first sight there is no reason to be interested in “little string theories”, since they are neither directly relevant for studying quantum field theories (being non-local theories) nor for studying quantum gravity (being non-gravitational theories). However, the very fact that these theories are, in some sense, intermediate between local field theories and “standard” string theories means that they may be able to teach us about both. For example, the fact that LSTs have some string theory-like properties, like T-duality and a Hagedorn spectrum (to be discussed below), means that such properties can appear also in non-gravitational theories, and it may be easier to understand them in that context. The existence of non-gravitational Lorentz-invariant theories which are intrinsically non-local is very interesting in itself, and it would be nice to know how to define these theories and what is the proper way to think about them. Some more concrete motivations for studying these theories are : 1. 
The original construction of LSTs was motivated by the fact that they (or rather their compactification on $`T^5`$) arise as the discrete light-cone quantization (DLCQ) of M theory on $`T^5`$ with $`k`$ units of longitudinal momentum. 2. Compactifications of the LSTs lead to many interesting local and non-local field theories. For example, the low-energy limit of the LSTs on $`T^2`$ gives four dimensional $`𝒩=4`$ SYM theories, and the compactification of LSTs on other manifolds gives theories related to QCD . 3. As we will see in the next section, the LSTs are a rare example of a theory whose discrete light-cone quantization (DLCQ) is simple, and we can use it as a toy model to learn about DLCQ. 4. As we will see in section 3, linear dilaton backgrounds of string theory seem to be holographically dual to LSTs, and thus studying LSTs can teach us about holography in linear dilaton backgrounds (and perhaps give us clues about how to define holography in general backgrounds). ## 2 DLCQ constructions The definitions of LSTs given above have not been useful so far for making any explicit computations in these theories, so one would like to have more direct definitions of the LSTs. The first such definition was found using discrete light-cone quantization (DLCQ), which is a quantization of the theory compactified on a light-like circle of radius $`R`$ with $`N`$ units of momentum around the compact circle. In the large $`N`$ limit the momentum around the circle becomes effectively continuous, and it is believed that all six dimensional results may be reproduced. The advantage of DLCQ comes from the fact that negative-momentum modes decouple in the light-cone frame, so the dynamics involves just the positive-momentum modes (for example it can only involve a finite number of particles). To derive a DLCQ theory we need to exactly integrate out the zero modes of fields (carrying no momentum in the compact direction), which is usually complicated. 
However, there is a small class of theories where one can derive the exact DLCQ theory, since they have enough supersymmetry to determine the Lagrangian after integrating out the zero modes, and the six dimensional LSTs fall into this class. The DLCQ of LSTs may be derived either by following Seiberg’s prescription of regarding the DLCQ as the limit of a compactification on a small space-like circle , or by starting from M(atrix) theory, which is the DLCQ of string theory or M theory, in configurations with 5-branes or singularities, and taking the limit defining the LSTs. All the derivations lead to the same result and they are detailed in the literature (and reviewed in ); we will present here only the result. The DLCQ of the $`A_{k-1}`$ $`(2,0)`$ LST with $`N`$ units of momentum is the $`1+1`$ dimensional $`𝒩=(4,4)`$ supersymmetric sigma model on the $`4Nk`$-dimensional moduli space of $`N`$ instantons in $`SU(k)`$ (on $`\mathbb{R}^4`$), compactified on a circle of radius $`\mathrm{\Sigma }=1/(RM_s^2)`$. Equivalently (by the ADHM construction), it is the conformal theory describing the low-energy limit of the Higgs branch of the $`𝒩=(4,4)`$ $`U(N)`$ SQCD theory with an adjoint hypermultiplet and $`k`$ hypermultiplets in the fundamental representation. The moduli space of instantons is singular, but the sigma model on it seems to make sense. The spectrum of chiral operators of the $`(2,0)`$ LST may be computed in this description (a similar computation in the $`0+1`$ dimensional sigma model on the moduli space of instantons was performed in ). For the $`A_{k-1}`$ $`(1,1)`$ LSTs it turns out to be complicated to write down the usual DLCQ, but simple to write down a DLCQ with a Wilson line of the low-energy $`U(k)`$ gauge group around the light-like circle . In the large $`N`$ limit the effect of the Wilson line is expected to disappear. 
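The $`4Nk`$ dimension quoted above can be verified by the standard hyper-Kähler quotient bookkeeping for this gauge theory (a consistency check of ours, not spelled out in the text): the adjoint hypermultiplet carries $`4N^2`$ real scalars and the $`k`$ fundamental hypermultiplets carry $`4Nk`$, while the triplet of moment-map (D- and F-term) constraints removes $`3N^2`$ degrees of freedom and the $`U(N)`$ gauge identifications remove another $`N^2`$: $$\mathrm{dim}_{\mathbb{R}}\mathrm{}_{N,k}=4N^2+4Nk-3N^2-N^2=4Nk,$$ i.e. the Higgs branch has quaternionic dimension $`Nk`$, matching the moduli space of $`N`$ $`SU(k)`$ instantons.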
The DLCQ with the Wilson line is the conformal theory which arises as the low-energy limit of the Coulomb branch of the $`1+1`$ dimensional $`𝒩=(4,4)`$ $`U(N)^k`$ gauge theory with bifundamental hypermultiplets for adjacent groups (when we arrange the $`k`$ gauge groups in a circle, as in a quiver diagram). Again, the conformal theory is compactified on a circle of radius $`\mathrm{\Sigma }=1/(M_s^2R)`$. Removing the Wilson line corresponds to taking some of the gauge couplings to infinity before taking the low-energy limit. The DLCQ gives us an explicit non-perturbative definition of the “little string theories”, enabling us in principle to compute all their states and correlation functions. Unfortunately, in practice such computations are extremely difficult; the conformal theories involved are very complicated even before we take the large $`N`$ limit, and we have to take this limit to obtain results about the LSTs in six dimensions. Thus, no dynamical computations have been made so far using the DLCQ, but only identifications of some operators and states (see, e.g., ). As usual in DLCQ, it is quite complicated to study other points on the moduli space of the LSTs or compactifications, both of which lead to quite different DLCQ theories. For example, the DLCQ of the $`(2,0)`$ $`A_{k-1}`$ theory at a generic point on its moduli space is given by the deformation of the Higgs branch SCFT described above by generic masses for the $`k`$ hypermultiplets in the fundamental representation (which is a relevant deformation of the Higgs branch SCFT). ## 3 Holographic constructions The second useful construction of LSTs is a holographic dual along the lines of the AdS/CFT correspondence; unlike the previous construction this one cannot serve as a definition of the LSTs since we have no non-perturbative definition of the string/M theory backgrounds that are involved, but it seems to be more useful in computing correlation functions in the LSTs (at least at low energies). 
The AdS/CFT correspondence states that local conformal field theories are often dual to string/M theory compactifications including AdS spaces, and it was generalized to some other classes of local field theories as well (though these are often dual to spaces with regions of high curvature). However, there is no reason for general backgrounds of string/M theory (or any other theory of quantum gravity) to be holographically dual to a local field theory, and it seems likely that the theory which is holographically dual to quantum gravity in Minkowski space is highly non-local. LSTs provide the first example of holography for a non-local non-gravitational theory, and it turns out that generally LST-like theories are holographically dual to backgrounds which asymptote to string theory in a linear dilaton background . As in the AdS/CFT correspondence, the space which is holographically dual to LSTs may be derived by starting from the string theory background corresponding to NS 5-branes and taking the limit of $`g_s\to 0`$ (where $`g_s`$ is the asymptotic string coupling) which defines the LSTs . For $`(2,0)`$ LSTs the correct procedure is actually to start from the background corresponding to M5-branes with a transverse circle, as described above, since a background of NS 5-branes in string theory does not correspond to a configuration which is localized in the $`S^1`$ coordinates of the moduli space. Starting with such a background and taking the appropriate limit leads to the space $$l_p^2ds^2=H^{-1/3}dx_6^2+H^{2/3}(dx_{11}^2+dU^2+U^2d\mathrm{\Omega }_3^2),$$ (3.1) where $`dx_6^2`$ is the metric on $`\mathbb{R}^6`$, $`d\mathrm{\Omega }_3^2`$ is the metric on $`S^3`$, $`x_{11}`$ is compactified on a circle with radius $`M_s^2`$, $$H=\underset{j=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}\frac{\pi k}{(U^2+(x_{11}-2\pi jM_s^2)^2)^{3/2}},$$ (3.2) and there are also $`k`$ units of 4-form flux on $`S^3\times S^1`$. 
M theory compactified on this space is holographically dual to the $`A_{k-1}`$ $`(2,0)`$ LST. The space (3.1) is quite complicated, but it simplifies in the asymptotic regions of space. For small $`U`$ and $`x_{11}`$ the space (3.1) becomes just $`AdS_7\times S^4`$, which is holographically dual to the $`(2,0)`$ SCFTs, as expected since these SCFTs are the low-energy limit of the $`(2,0)`$ LSTs. For large $`U`$ the physical radius of the $`x_{11}`$ circle becomes very small and it is more appropriate to view the background as a type IIA string theory compactification. Defining a new variable $`\varphi `$ by $`l_s^2U=\sqrt{k}e^{\varphi /\sqrt{k}l_s}`$, the string metric is simply $$ds_{string}^2=dx_6^2+d\varphi ^2+kl_s^2d\mathrm{\Omega }_3^2,$$ (3.3) with a linear dilaton in the $`\varphi `$ direction, $$g_s(\varphi )=e^{-\varphi /\sqrt{k}l_s},$$ (3.4) and $`k`$ units of 3-form flux on the $`S^3`$. Note that for large $`k`$ the curvatures are small everywhere, so supergravity is a good approximation at low energies. Even though the behaviour of this space near the boundary is very different from that of AdS space, it seems that the same principles apply also in this case; for example, as in AdS/CFT, correlation functions in the LSTs are identified with the response of string/M theory on the background (3.1) to turning on boundary values for non-normalizable modes of the fields. In the space (3.1) some of these modes are just the incoming and outgoing waves in the $`\varphi `$ direction, so that some correlation functions of the LST correspond to the S-matrix for scattering in the $`\varphi `$ direction. 
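As a quick consistency check of the change of variables (our own check, using the standard near-horizon NS 5-brane dilaton profile $`e^{2\mathrm{\Phi }}=k/(l_s^4U^2)`$ and radial string metric $`kl_s^2dU^2/U^2`$ appropriate to these conventions): differentiating $`l_s^2U=\sqrt{k}e^{\varphi /\sqrt{k}l_s}`$ gives $$d\varphi =\sqrt{k}l_s\frac{dU}{U}\Rightarrow kl_s^2\frac{dU^2}{U^2}=d\varphi ^2,\qquad g_s=e^{\mathrm{\Phi }}=\frac{\sqrt{k}}{l_s^2U}=e^{-\varphi /\sqrt{k}l_s},$$ so the radial metric becomes flat in $`\varphi `$ and the string coupling decays exponentially towards the boundary (large $`\varphi `$), i.e. the dilaton is indeed linear in $`\varphi `$.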
Similarly, the type $`(1,1)`$ LSTs are holographically dual to the near-horizon limit of type IIB NS 5-branes; unfortunately this background becomes singular for small $`U`$ (there is no analog of the M theory region in (3.1)) so this description is less useful for computations of correlation functions, but it can still be used for identifying the operators and some states of these LSTs as described below. The holographic description of the LSTs is useful for : 1. As in the AdS/CFT correspondence, some correlation functions may be computed using supergravity. In this case the supergravity approximation is valid for the $`(2,0)`$ LSTs at large $`k`$ and at energies well below the string scale $`M_s`$, above which stringy corrections in the background (3.3) become important. The computation of the 2-point function of the energy-momentum tensor in the LSTs was described in , and it is possible to compute also other correlation functions (though the actual computations are quite complicated). In particular, supergravity gives an exact description of the analog of the ’t Hooft limit for the $`(2,0)`$ LSTs, in which $`M_s`$ and $`k`$ are taken to infinity with the scale $`M_s^2/k`$ (which in the $`(1,1)`$ case is the inverse ’t Hooft gauge coupling) kept constant. 2. A big difference between the background (3.1) and backgrounds which are dual to other field theories is that near the boundary of (3.1) the string coupling goes to zero and the curvatures are small, so one can compute the spectrum of fields exactly (and not just for large $`k`$ as in other cases). The full spectrum of chiral fields in the LSTs was computed in this way in , and turned out to be exactly the same for all values of $`k`$ (in the $`A_{k-1}`$ case) as the spectrum of chiral fields in the field theories which arise as the low-energy limit of the LSTs. 3. 
The holographic description can be used to reliably compute some of the states in the LSTs, which are states propagating in the weakly coupled region of (3.1). For example, one has the states in the supergraviton multiplet propagating with some momentum in the $`\varphi `$ direction, and these look like a continuum of states from the six dimensional point of view (the mass shell condition relates the 6-dimensional momentum to the $`\varphi `$-momentum but it does not determine its magnitude). It turns out that for the states in the supergravity multiplet this continuum of states starts at a scale of order $`M^2\sim M_s^2/k`$, while other string states in the weak coupling region also give rise to continuous spectra in the LST, starting at higher values of $`M^2`$. The implications of the existence of this continuum are not clear. The same continuum of states can also be seen in the DLCQ formalism , where it is related to the continuum of states found in sigma models on orbifold singularities with zero theta angle. 4. The holographic description can be used to analyze the behaviour at finite temperature and energy density, as discussed in the next section. ## 4 Behaviour at finite energy density We have already seen some indications that LSTs are not local field theories, but the strongest indication for this seems to come from analyzing the equation of state of the LSTs at finite temperature or energy density. For local field theories the high-energy behaviour of the density of states is always an exponential of a power of the energy density which is less than one, while for the LSTs we will see that it is exponential in the energy density. The most reliable way to compute the equation of state of the LSTs is to use their holographic description. 
Holographic dualities relate finite temperature states of non-gravitational theories to black hole configurations, with the Hawking temperature of the black hole equated with the field theory temperature, and the field theory energy equated with the mass of the black hole. The entropy of such black holes can be computed by the usual Bekenstein-Hawking formula which relates it to their area. In the case of LSTs the appropriate black holes (at large enough energy densities) are the near-horizon limits of near-extremal NS5-branes and this computation was first done in this context in . The result is $$E=T_HS;\qquad T_H=M_s/\sqrt{6k},$$ (4.1) and it is reliable when the curvatures in the black hole background are small (requiring $`k\gg 1`$) and when the string coupling at the black hole horizon is small (recall that the string coupling becomes weaker and weaker as one goes towards the boundary), requiring that the energy density $`\mu \equiv E/V`$ satisfies $`\mu \gg kM_s^6`$. Thus, we find that for large $`k`$ and large energy densities the equation of state is Hagedorn-like, corresponding to a limiting temperature in the theory of $`T_H=M_s/\sqrt{6k}`$; above some energy density which is smaller than $`kM_s^6`$ the specific heat diverges and increasing the energy density no longer increases the temperature, which remains $`T=T_H`$. This behaviour is similar to the behaviour of single-string states in free string theory, where $`T_H\sim M_s`$; however, in string theory it is believed (at least in some cases) that at the temperature $`T_H`$ there is a phase transition to a different phase where a state becoming massless at $`T=T_H`$ condenses , while in the LSTs there is no evidence for any states becoming massless at $`T=T_H`$, so it seems that $`T_H`$ is really a limiting temperature (and the canonical ensemble cannot be defined beyond this temperature). It would be nice to reproduce the behaviour (4.1) also in the DLCQ description of the LSTs, but this seems rather difficult. 
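The statement that $`T_H`$ is a limiting temperature follows from (4.1) by a standard thermodynamic argument (a textbook-style check, added here for clarity): the entropy $`S=E/T_H`$ corresponds to a density of states $`\rho (E)\sim e^{E/T_H}`$, so $$\frac{1}{T}=\frac{\partial S}{\partial E}=\frac{1}{T_H},\qquad Z(T)=\int dE\rho (E)e^{-E/T}\sim \int dEe^{E(1/T_H-1/T)};$$ the temperature is pinned at $`T_H`$ independently of the energy (a formally divergent specific heat), and the canonical partition function diverges for $`T\ge T_H`$.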
The behaviour (4.1) is, in fact, reproduced by a naive counting of the density of states in the DLCQ description , but the relevant states are really only those whose energy scales as $`1/N`$ in the large $`N`$ limit, and identifying all these states is much more difficult. Some states whose energy scales as $`1/N`$ were identified in , and they correspond to a density of states in space-time which is exponential but with a smaller exponent corresponding to $`\stackrel{~}{T}_H=M_s/\sqrt{12}`$; it would be interesting to identify in the DLCQ the other states which give rise to (4.1). It is not clear how to interpret the exponential behaviour of the equation of state in the LSTs. The fact that the density of states depends only on the energy and not on the volume suggests that generic high-energy states are single-object configurations, perhaps similar to long, space-filling strings. It is not yet known at which energy density the behaviour (4.1) begins, and what is the equation of state in the regime $`\mu <M_s^6`$ (which is relevant, in particular, for the ’t Hooft limit of the LSTs described above). General arguments (with some assumptions about the behaviour of generic operators in the LSTs) suggest that the behaviour (4.1) implies the non-existence of correlation functions of operators at separations smaller than $`T_H^{-1}`$. Presumably, there are no local operators in these theories, but only operators “smeared” on distance scales of at least $`T_H^{-1}`$. Computations in the holographic formulation give correlation functions of operators (which are naively local) in momentum space, but it seems that the Fourier transform of the results to position space does not exist for small separations , consistent with this non-locality. Note that the scale of non-locality suggested by these arguments is different (by a factor of order $`\sqrt{k}`$) from the scale of non-locality suggested by the T-duality symmetry. 
## 5 Future directions Much progress has been made in the last few years in understanding “little string theories”, but many open questions remain. Some interesting open questions are : 1. The behaviour of LSTs at high energies is clearly not governed by a field theoretical fixed point, and it is interesting to understand what the high-energy behaviour is. There are some indications that the theory becomes weakly coupled at high energies, such as the fact that in the holographic description the string coupling vanishes near the boundary, and that the string coupling is small everywhere in the holographic dual of the theory at high energy densities. It would be interesting to understand if there is some sense in which the theory becomes weakly coupled at high energies (perhaps as in asymptotically free field theories) and if there is any simple description of the high-energy limit. 2. The behaviour at intermediate energy scales, such as those governed by the ’t Hooft limit (where we take large $`k`$ and look at energies of order $`E\sim M_s/\sqrt{k}`$), is still not clear. In principle correlation functions at these scales may be computed from supergravity, but so far it is not known if their behaviour (and the behaviour of the equation of state in this regime) is more like a field theory (and, if so, which field theory?) or more like a string theory. 3. We have not had a chance to discuss here LSTs in lower dimensions or with less supersymmetry. It would be interesting to try to classify these theories and to see if they are similar to the six dimensional LSTs described above or not. It is also interesting to analyze compactifications of the LSTs, which give rise to many interesting theories. Unlike local field theories, the behaviour of non-local field theories upon compactification is not determined by the behaviour of the uncompactified theory, so it should be studied independently and may reveal new results. 
In particular, the large $`k`$ behaviour of the $`T^5`$ compactification of the LSTs is related to M(atrix) theory on $`T^5`$, and it would be interesting to understand it further and to identify the states whose energy scales as $`1/k`$ in the large $`k`$ limit. It has recently been suggested that the holographic description of LSTs becomes weakly coupled if one looks at particular configurations which are far out on the LST moduli space. For example, configurations in the $`(1,1)`$ theory where the low-energy $`U(k)`$ gauge theory is broken at a scale $`M_W`$ were argued to have a perturbation expansion in $`M_s/M_W`$, which can be used to reliably compute correlation functions in these theories (far out on the moduli space) at energies well below $`M_W`$ (but potentially above the scales $`M_s`$ and $`M_s/\sqrt{k}`$). If perturbation theory is indeed reliable in these configurations it would be very interesting to use them for various computations, and to see what they can teach us about the LSTs. ## Acknowledgments I would like to thank T. Banks, M. Berkooz, S. Kachru, D. Kutasov, N. Seiberg and E. Silverstein for enjoyable collaboration and discussions on the results presented here. This research is supported in part by DOE grant DE-FG02-96ER40559. ## References
no-problem/9911/hep-ph9911303.html
ar5iv
text
# THE NON-EQUILIBRIUM DISTRIBUTION FUNCTION OF PARTICLES AND ANTI-PARTICLES CREATED IN STRONG FIELDS ## 1 Introduction In recent years much attention has been devoted to the back-reaction (BR) problem in high-energy particle physics and especially in early cosmology . A flux-tube model based on the Schwinger mechanism of vacuum particle creation by a strong field is commonly used for the dynamical description of multiple particle phenomena. Over the years, this process of spontaneous pair creation has been investigated intensively for the case of a given external field, electromagnetic or gravitational . In the majority of BR studies, some phenomenological source is introduced into a kinetic equation in close analogy with the exact Schwinger result for a constant electric field (e.g. ). Recently, a more consistent derivation of the source term has been given in Refs. . In particular, an exact kinetic equation for scalar and spinor QED with a non-Markovian source term for a time-dependent but spatially homogeneous field was obtained in Ref. . Some properties of kinetic equations of this type were studied in . A common feature of this approach is the observation of a very complicated pattern of oscillations in the density of created particles. The frequency of these oscillations is of the order of the zitterbewegung frequency, which indicates that a high particle density is reached in the system. It is quite obvious that under such conditions the interactions of the created particles should be taken into consideration. At the level of a self-consistent mean field, this is just the back influence of the created particles on the field that produced them (the back-reaction problem). Since the thermalization process is highly relevant for ultra-relativistic heavy-ion collisions, accounting for direct particle-particle interactions is important. To simplify the problem in question, the relaxation-time approximation is traditionally used. 
More than ten years of studies in these directions have raised more new questions than they have answered. This is related to the high computational complexity of the equations under consideration, which are integro-differential and highly non-linear. In addition, these equations need a renormalization procedure to remove the logarithmic divergences in the observable densities of energy and current. Besides, the standard relaxation-time approximation turned out to be too rough and should be improved for a proper treatment of collisions . This contribution addresses a single specific problem, namely, the influence of the BR on the temporal evolution of the creation process and the manifestation of this dynamics in the observable fields and especially in the particle distribution functions. For simplicity, we do not take into account the non-Abelian structure of color electric fields and concentrate on the back-reaction problem applied only to strong electromagnetic fields. However, the characteristics considered and the region of field parameters used are of interest for applications of the flux-tube model to hadronic processes. We restrict ourselves to the simplest situation of a time-dependent, spatially homogeneous field. 
## 2 Basic equations Our approach to the BR problem is based on the following exact Vlasov-like KE for the distribution function $`f(\stackrel{}{P},t)`$: $$\frac{\partial f(\stackrel{}{P},t)}{\partial t}+eE(t)\frac{\partial f(\stackrel{}{P},t)}{\partial P_3}=C(\stackrel{}{P},t),$$ (1) where $`E(t)`$ is a strong homogeneous electric field <sup>1</sup>1) We use the units $`\mathrm{}=c=1`$ and the metric is chosen to be $`g^{\mu \nu }=\mathrm{diag}(1,-1,-1,-1).`$ ) and $`C(\stackrel{}{P},t)`$ is the dynamical source term describing the vacuum creation and annihilation processes within the Schwinger mechanism, $$C(\stackrel{}{P},t)=\frac{W(\stackrel{}{P},t,t)}{2}\int _{-\infty }^t𝑑t^{\prime }W(\stackrel{}{P},t,t^{\prime })\left[1\pm 2f(\stackrel{}{P},t^{\prime })\right]\mathrm{cos}\theta (\stackrel{}{P},t,t^{\prime }),$$ (2) where the upper sign refers to bosons and the lower one to fermions. Here we have introduced the transition amplitudes $$W(\stackrel{}{P},t,t^{\prime })=eE(t^{\prime })\frac{P(t,t^{\prime })}{\omega ^2(\stackrel{}{P},t,t^{\prime })}\left(\frac{\epsilon _{\perp }}{P(t,t^{\prime })}\right)^{g-1},$$ (3) with the kinetic 3-vector momentum $`\stackrel{}{P}=(\stackrel{}{P}_{\perp },P_3),`$ the degeneracy factor $`g`$ and $`P(t,t^{\prime })=P_3+e[A(t)-A(t^{\prime })],`$ (4) $`\omega ^2(\stackrel{}{P},t,t^{\prime })=\epsilon _{\perp }^2+P^2(t,t^{\prime }),\epsilon _{\perp }=\sqrt{m^2+P_{\perp }^2},`$ (5) $`\theta (\stackrel{}{P},t,t^{\prime })=2{\displaystyle \int _{t^{\prime }}^t}𝑑t^{\prime \prime }\omega (\stackrel{}{P},t,t^{\prime \prime }).`$ (6) The KE (1) is a direct consequence of the corresponding one-particle equation of motion in the presence of a quasi-classical electric field $`A_\mu =(0,0,0,A(t)),`$ $`E(t)=\dot{A}(t)`$. 
For the subsequent calculations it is convenient to use the following local form of the KE (1): $`{\displaystyle \frac{\partial f}{\partial t}}+eE(t){\displaystyle \frac{\partial f}{\partial P_3}}={\displaystyle \frac{1}{2}}Wv,`$ $`{\displaystyle \frac{\partial v}{\partial t}}+eE(t){\displaystyle \frac{\partial v}{\partial P_3}}=W[1\pm 2f]-2\omega u,`$ (7) $`{\displaystyle \frac{\partial u}{\partial t}}+eE(t){\displaystyle \frac{\partial u}{\partial P_3}}=2\omega v,`$ where two auxiliary real functions $`u(\stackrel{}{P},t)`$ and $`v(\stackrel{}{P},t)`$ have been introduced, with the initial conditions $`f(t_0)=v(t_0)=u(t_0)=0`$ and $`W=W(\stackrel{}{P},t,t),\omega =\omega (\stackrel{}{P},t,t).`$ In the mean-field approximation, the distribution function $`f(\stackrel{}{P},t)`$ allows one to find the densities of observable physical quantities. In particular, the conduction $`j_{cond}(t)`$ and polarization $`j_{pol}(t)`$ terms contribute to the electromagnetic current density $`j_{in}(t)=j_{cond}(t)+j_{pol}(t),`$ (8) $`j_{cond}(t)=2eg{\displaystyle \int \frac{d^3P}{(2\pi )^3}\frac{P_3}{\omega }f(\stackrel{}{P},t)},`$ (9) $`j_{pol}(t)=eg{\displaystyle \int \frac{d^3P}{(2\pi )^3}\frac{P_3}{\omega }v(\stackrel{}{P},t)\left(\frac{\epsilon _{\perp }}{P_3}\right)^{g-1}}.`$ (10) The KE (1) should be combined with the Maxwell equation $$\dot{E}(t)=j_{tot}(t),$$ (11) which closes the set of equations of the BR problem. We assume that a particle-antiparticle plasma was initially formed by some external field $`E_{ex}(t)`$ excited by an external current $`j_{ex}(t)`$. The internal field and current are denoted by $`E_{in}(t)`$ and $`j_{in}(t)`$, so that $$E(t)=E_{in}(t)+E_{ex}(t),j_{tot}(t)=j_{in}(t)+j_{ex}(t).$$ (12) It is well known that vacuum expectation values of the type (9)–(10) can have ultraviolet divergences and need a regularization procedure. We use here the method suggested in . To regularize the various observables (currents, energy density, etc.) it is necessary to subtract the relevant counterterms from each of the functions $`f,v`$ and $`u`$. 
These subtraction terms are constructed as the coefficients of the asymptotic expansion of the corresponding functions in powers of $`\omega ^{-1}(\stackrel{}{P})`$. The leading terms of such expansions are easily found from Eqs. (7): $$f_a=\left[\frac{eE(t)P_3}{4\omega ^3}\left(\frac{\epsilon _{\perp }}{P_3}\right)^{g-1}\right]^2,v_a=e\dot{E}(t)\frac{P_3}{4\omega ^4}\left(\frac{\epsilon _{\perp }}{P_3}\right)^{g-1}.$$ (13) The conduction current (9) is regular, while the polarization current (10) contains a logarithmic divergence. For its regularization it is enough to perform the single subtraction $`v\to v-v_a`$ in (10), which can be interpreted as a charge renormalization. As the final result, the regularized Maxwell equation can be written in the following form (it is implied here that the coupling constant $`e`$ and the fields $`E_{in}`$ and $`E_{ex}`$ have been renormalized as well): $$\dot{E}_{in}=\frac{ge}{4\pi ^3}\int d^3P\frac{P_3}{\omega }\left[f+\frac{v}{2}\left(\frac{\epsilon _{\perp }}{P_3}\right)^{g-1}-e\dot{E}\frac{P_3}{8\omega ^4}\left(\frac{\epsilon _{\perp }}{P_3}\right)^{2(g-1)}\right].$$ (14) This regularization procedure is well suited to the method based on reducing the KE to the system of partial differential equations (7). This approach leads to numerical results (see ) which are quite consistent with those obtained with the adiabatic regularization scheme . ## 3 Numerical results The distribution function of a strongly non-equilibrium state of the particle-antiparticle plasma is investigated numerically for the cases with ($`E_{in}(t)\ne 0`$) and without ($`E_{in}(t)=0`$) the BR mechanism taken into account. Various shapes of the external field impulse were studied. 
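As a minimal illustration of what such a numerical integration involves, the sketch below integrates the local system (7) for a single fermionic mode (lower sign, Pauli factor $`1-2f`$, $`g=2`$) along its characteristic in a prescribed Sauter-type pulse, with the back reaction neglected; the pulse shape and all parameter values here are illustrative choices of ours, not the ones used for the figures.

```python
import math

def integrate_mode(q=-7.0, eps_perp=1.0, e=2.0, A0=7.0, b=0.5,
                   t0=-10.0, t1=10.0, n=40000):
    """Occupation f of one fermionic mode from the local system (7),
    integrated along the characteristic P3(t) = q + e*A(t) in the
    prescribed pulse E(t) = A0/cosh^2(t/b), A(t) = A0*b*(tanh(t/b)+1);
    back reaction neglected, units hbar = c = m = 1."""
    A = lambda t: A0 * b * (math.tanh(t / b) + 1.0)
    E = lambda t: A0 / math.cosh(t / b) ** 2

    def rhs(t, y):
        f, v, u = y
        P3 = q + e * A(t)                       # kinetic momentum
        w = math.sqrt(eps_perp ** 2 + P3 ** 2)  # omega(P, t, t)
        W = e * E(t) * eps_perp / w ** 2        # amplitude (3) for g = 2
        return (0.5 * W * v,
                W * (1.0 - 2.0 * f) - 2.0 * w * u,
                2.0 * w * v)

    h, t, y = (t1 - t0) / n, t0, (0.0, 0.0, 0.0)
    for _ in range(n):                          # classic fixed-step RK4
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, tuple(a + h / 2 * k for a, k in zip(y, k1)))
        k3 = rhs(t + h / 2, tuple(a + h / 2 * k for a, k in zip(y, k2)))
        k4 = rhs(t + h, tuple(a + h * k for a, k in zip(y, k3)))
        y = tuple(a + h / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                  for a, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4))
        t += h
    return y[0]
```

For a supercritical pulse like this one the final occupation comes out sizable but, as Pauli statistics requires, stays below one; the boson case would use the upper sign $`1+2f`$ and $`g=1`$ in the amplitude (3).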
It turned out that the system behavior is only weakly sensitive to the particular shape of the impulse, and below we present the calculation results only for an impulse of the Narozhny type: $$A_{ex}(t)=A_0b[\mathrm{tanh}(t/b)+1],E_{ex}(t)=A_0\mathrm{cosh}^{-2}(t/b).$$ (15) The parameters of this potential are chosen in accordance with the conditions of the flux-tube model . In particular, the coupling constant is taken rather large, $`e^2=4`$, and the dimensionless variables used are $`t\to tm`$, $`P\to P/m`$, $`A\to A/m`$. The initial impulse is characterized by the width $`bm=0.5`$ and the amplitude $`A_0/m=7.0`$. The distribution functions of bosons and fermions obtained when the BR mechanism is neglected are shown in Fig. 1. Because in this case their momentum dependence is determined only by the transition amplitudes (3), these distributions are smooth. Such behavior is regarded as regular. A valley region in the boson distribution function arising near small values of $`P_3`$ is caused by the linear $`P_3`$-dependence of the amplitude (3). When the BR mechanism is taken into account, the regular momentum dependence of the distribution function is destroyed (Figs. 2-5; a result like that in Fig. 3 was obtained previously in , but within a different approach). At the same time, Fig. 4 and Fig. 5 demonstrate the existence of periodic temporal behavior of the distribution function. The two-dimensional representation in Fig. 5 clearly shows how a ’dog-brush’ structure of the distribution function along the $`P_3`$ axis is combined with the periodic structure along the time axis. ## 4 Conclusion Thus, the BR equations generate a large-scale structure on the background of small-scale multi-mode complex dynamics. The small-scale trembling is a manifestation of vacuum oscillations. The trembling frequency increases with time. The smoothed initial distribution function in Fig. 5 corresponds to the external field impulse. 
Large-scale wiggles in the distribution function are a consequence of the self-organization of the system due to the growth of collective plasma oscillations. The presented results provide some evidence that the inclusion of the BR mechanism in the treatment of pair creation gives rise to stochastic behavior of the system. It is not excluded that dynamical chaos can be found also at other values of the parameter $`eE`$; however, this requires additional numerical investigation. It is quite possible that the revealed stochastic features of vacuum pair creation are a source of the statistical behavior observed in multiple particle production. ## Acknowledgments One of the authors (S. A. S.) gratefully acknowledges the hospitality of the University of Rostock. He wishes to thank G. Röpke, D. Blaschke, V. G. Morozov and S. M. Schmidt for valuable comments. This work was supported in part by the Russian State Committee of Higher Education under grant N 97-0-6.1-4. ## References
no-problem/9911/quant-ph9911024.html
ar5iv
text
# Comment on “Non-Contextual Hidden Variables and Physical Measurements”Submitted to Phys. Rev. Lett. ## Abstract Kent’s conclusion that “non-contextual hidden variable theories cannot be excluded by theoretical arguments of the Kochen-Specker type once the imprecision in real world experiments is taken into account” \[Phys. Rev. Lett. 83, 3755 (1999)\], is criticized. The Kochen-Specker theorem just points out that it is impossible even to conceive a hidden variable model in which the outcomes of all measurements are pre-determined; it does not matter if these measurements are performed or not, or even if these measurements can be achieved only with finite precision. In a recent Letter , Kent generalizes a result advanced by Meyer , and concludes that: (i) “Non-contextual hidden variable \[NCHV\] theories cannot be excluded by theoretical arguments of the K\[ochen-\]S\[pecker\] type once the imprecision in real world experiments is taken into account”. (ii) “This does not (…) affect the situation regarding local hidden variable \[LHV\] theories, which can be refuted by experiment, modulo reasonable assumptions .” In my view, the situation is the opposite: The KS theorem holds, precisely because it is a theoretical argument which deals with gedanken concepts such as ideal yes-no questions. However, the empirical refutation of LHV theories can be questioned precisely on the grounds of the inevitable finiteness of the precision of real measurements. Allow me to illustrate both points. 
The KS theorem is a mathematical statement which asserts that for a physical system described in quantum mechanics (QM) by a Hilbert space of dimension greater than or equal to three, it is possible to find a set of $`n`$ projection operators, which represent yes-no questions about an individual physical system, so that none of the $`2^n`$ possible sets of “yes” or “no” answers is compatible with the sum rule of QM for orthogonal resolutions of the identity (i.e., if the sum of a subset of mutually orthogonal projection operators is the identity, one and only one of the corresponding answers ought to be “yes”) . The smallest example currently known of such a set has only $`18`$ yes-no questions (about a physical system described by a four-dimensional Hilbert space) . As far as I can see, the plain new result contained in is the following: For any physical system described by a finite Hilbert space, it is always possible to construct a set of projection operators, which is dense in the set of all projection operators, so that an assignation of “yes” or “no” answers is possible in a way compatible with the sum rule of QM. From a mathematical point of view, it is clear that this new result by no means nullifies the KS theorem. However, Kent affirms that this is so when one takes into account that realistic physical measurements are always of finite precision. Kent seems to assume that the KS theorem concerns the results of a (counterfactual) set of measurements, instead of (the plain non-existence of) a set of yes-no questions with pre-determined answers. The KS theorem just points out that it is impossible even to conceive a hidden variable model in which the outcomes of all measurements are pre-determined; it does not matter if these measurements are performed or not, or even if these measurements can be achieved only with finite precision. 
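The combinatorial core of the 18-question argument is a parity obstruction: nine four-element bases, with each of the 18 questions appearing in exactly two of them, cannot all contain exactly one “yes”. The sketch below checks this by exhaustion for a stand-in incidence structure with the same counts (the edge sets of a 4-regular circulant graph on nine vertices, a hypothetical pattern chosen purely for illustration); the genuine proof additionally requires that the nine contexts be realizable as orthogonal bases in four dimensions, which is what the explicit 18 projection operators provide.

```python
from itertools import product

# Nine four-element "contexts" in which each of 18 "questions" appears
# exactly twice: the questions are the edges of a 4-regular circulant
# graph on 9 vertices, and the context of a vertex is its edge set.
edges = sorted({tuple(sorted((v, (v + d) % 9)))
                for v in range(9) for d in (1, 2)})
index = {e: i for i, e in enumerate(edges)}
contexts = [[index[e] for e in edges if v in e] for v in range(9)]

def admits_assignment(contexts, n):
    """True iff some 0/1 assignment to the n questions gives exactly
    one 'yes' in every context."""
    return any(all(sum(bits[q] for q in ctx) == 1 for ctx in contexts)
               for bits in product((0, 1), repeat=n))
```

No assignment exists: summing the “yes” counts over the nine contexts would give the odd number nine, while each question, lying in two contexts, contributes an even amount — the same parity clash that drives the 18-question proof.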
The KS theorem assumes that any NCHV theory is a classical theory, and since in classical physics there is in principle no difficulty in conceiving ideal (i.e., defined with infinite precision) yes-no questions, it is quite legitimate to handle ideal yes-no questions when one is trying to prove that such a theory does not even exist. The only possible loophole in the KS theorem would be caused by the nonexistence of some of the yes-no questions involved in any of its proofs. However, this loophole would have very weird consequences. For instance, consider a physical system described by a three-dimensional real Hilbert space, and assume that the only yes-no questions with a real existence would be those represented by projection operators defined by vectors with rational components. This subset is dense in the set of yes-no questions and admits an assignation of “yes” or “no” answers compatible with the sum rule . However, the initial assumption is in conflict with the superposition principle because some linear combinations of “legal” yes-no questions would be illegal, since their normalization would demand irrational components . On the other hand, the finite precision measurement problem matters in real experiments. It will affect any real experiment based on the KS theorem , and indeed affects the theoretical analysis of any real experiment to refute LHV. In fact, real experiments, like those of Aspect et al. mentioned by Kent, admit LHV models . These models still work even assuming perfect efficiency of detectors, but vanish when infinite precision of preparations and (of all required) measurements is assumed. The author thanks A. Peres and E. Santos for useful comments and clarifications. This work was supported by the Universidad de Sevilla (Grant No. OGICYT-191-97) and the Junta de Andalucía (Grant No. FQM-239).
no-problem/9911/hep-ph9911351.html
ar5iv
text
## 1 Introduction In deep inelastic scattering (DIS), the current hemisphere of the Breit frame is expected to be very similar to one hemisphere in an $`e^+e^{-}`$ experiment. This expectation relies on the fundamental assumption of quark fragmentation universality. However, a set of features in DIS introduce corrections to this assumption. In this letter I will give an abbreviated presentation of the investigations in , addressing two main issues: QCD radiation can give rise to high-$`p_{\perp }`$ emissions, without correspondence in an $`e^+e^{-}`$ event. A high-$`p_{\perp }`$ emission in DIS sometimes manifests itself with a completely empty current region . A sizeable rate of such events introduces uncertainties into the interpretation of the data. Furthermore, problems arise even if the current region is not empty, but merely depopulated. It is therefore of interest to find a way to exclude high-$`p_{\perp }`$ events using a signal other than an empty current Breit hemisphere. In sections 2 and 3 we discuss an approach based on jet reconstruction. The flavour composition in $`e^+e^{-}`$ and DIS is not exactly the same. When excluding high-$`p_{\perp }`$ events from the DIS analysis, the boson-gluon fusion channel for heavy quark production is suppressed. This implies a lower heavy quark rate in the studied DIS sample, as compared to $`e^+e^{-}`$ data at corresponding energies. A uds enriched $`e^+e^{-}`$ data sample with high statistics is available from the $`\mathrm{Z}^0`$ peak. In section 4 we discuss a method to study properties of $`e^+e^{-}`$ quark hemispheres at different scales, using data from the fixed energy $`\mathrm{Z}^0`$ experiments. ## 2 Jet Algorithms In order to exclude from the DIS sample high-$`p_{\perp }`$ events without correspondence in $`e^+e^{-}`$, it is natural to use $`k_{\perp }`$-type cluster algorithms with the jet resolution set to $`Q/2`$. 
This scale is in analogy with the kinematical constraint for gluon emissions in $`e^+e^{-}`$, which is $`p_{\perp \mathrm{g}}\le E_\mathrm{g}\le \sqrt{s}/2`$. In the HERA experiments, particles which in the lab frame have a large pseudo-rapidity w.r.t. the proton direction are not detected. In our analysis, we have chosen to exclude all particles with pseudo-rapidity larger than $`3.8`$ to take this into account. For clustering purposes, an initial cluster is introduced along the proton direction, carrying the missing longitudinal momentum. Historically, $`k_{\perp }`$ algorithms were first designed for $`e^+e^{-}`$ physics, and a set of different algorithms exist . In DIS, jet observables depend on structure functions, and an algorithm where the jet properties factorize into perturbatively calculable coefficients convoluted with the structure functions is in general preferred. A $`k_{\perp }`$ algorithm carried out in the Breit frame and designed to fulfill jet requirements in DIS is presented in , but we have chosen not to use it in the present investigation for the following reasons: We do not intend to investigate jet cross sections or the specific jets found by the cluster algorithm, but merely to exclude high-$`p_{\perp }`$ events. In the accepted sample, the analysis is performed on the current Breit hemisphere, which is defined independently of any jet reconstruction scheme. Thus the infrared properties of the algorithm used, and in particular the problems addressed by the Breit frame algorithm in , are not essential for the present study. It is stressed in that the appealing benefits of the algorithm require the jet resolution scale $`E_t`$ to satisfy $`\mathrm{\Lambda }_{QCD}^2\ll E_t^2\ll Q^2`$. By setting $`E_t=Q/2`$, we would in the present study use the Breit frame algorithm in a way for which it is not intended. For these reasons, we have chosen to use the simpler $`e^+e^{-}`$ $`k_{\perp }`$ algorithms. 
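For orientation, a generic Durham-style $`k_{\perp }`$ clustering with E-scheme recombination can be sketched in a few lines; this is a textbook version written for illustration, not the published Luclus, Durham or Diclus implementations used for the results below. Particles are $`(E,p_x,p_y,p_z)`$ four-vectors, and the Durham measure is $`y_{ij}=2\,\mathrm{min}(E_i^2,E_j^2)(1-\mathrm{cos}\theta _{ij})/E_{vis}^2`$.

```python
import math

def durham_jets(particles, ycut):
    """Minimal Durham k_perp clustering (E-scheme recombination):
    repeatedly merge, by four-momentum addition, the pair with the
    smallest y_ij until no pair remains below ycut."""
    ps = [list(p) for p in particles]          # (E, px, py, pz)
    evis2 = sum(p[0] for p in ps) ** 2
    def y(a, b):
        pa, pb = ps[a], ps[b]
        na = math.sqrt(pa[1]**2 + pa[2]**2 + pa[3]**2)
        nb = math.sqrt(pb[1]**2 + pb[2]**2 + pb[3]**2)
        cos = (pa[1]*pb[1] + pa[2]*pb[2] + pa[3]*pb[3]) / (na * nb)
        return 2.0 * min(pa[0], pb[0])**2 * (1.0 - cos) / evis2
    while len(ps) > 1:
        i, j = min(((a, b) for a in range(len(ps))
                    for b in range(a + 1, len(ps))),
                   key=lambda ab: y(*ab))
        if y(i, j) >= ycut:
            break
        ps[i] = [x + z for x, z in zip(ps[i], ps[j])]  # E-scheme merge
        del ps[j]
    return ps
```

Two nearly collinear particles are merged first; a resolution cut placed between their $`y_{ij}`$ and the two-jet scale then returns two jets.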
To estimate the reliability of the results obtained using jets, we have used three different algorithms, Luclus , Durham and Diclus , applied in the hadronic CMS. ## 3 Current Breit Hemisphere Properties To examine whether a cut in jet $`p_{\perp }`$ is suitable for isolating a sample where universality is expected to hold, we study results for the current Breit hemisphere in MC generated events at HERA energies. The results are compared to $`e^+e^{-}`$ MC results. We simulate the electro-weak interaction in DIS using the Lepto MC . In both the $`e^+e^{-}`$ and DIS simulations, we use the Colour Dipole Model , implemented in Ariadne , to describe the parton cascade. The Lund string fragmentation model , implemented in Jetset , is then used to describe hadronization. The chosen MC programs give a generally good description of data in DIS and $`e^+e^{-}`$. The kinematical region of the DIS simulation is constrained by the cut $`Q^2>4\mathrm{G}\mathrm{e}\mathrm{V}^2`$. For each $`Q^2`$, the whole range of possible $`x`$-values is considered. Ideally, the properties of the current Breit hemisphere would depend on the scale $`Q^2`$ only, but QCD radiative corrections introduce an implicit $`x`$ dependence. When high-$`p_{\perp }`$ events are excluded, the implicit $`x`$ dependence is strongly reduced. To show that radiative corrections to the properties of the current Breit hemisphere are reduced by the cuts, it will suffice to consider the results integrated over $`x`$. #### Effects of the $`p_{\perp }`$-cut In Fig 2, the fraction of events in DIS with an empty current Breit hemisphere is shown to be significantly reduced after a $`p_{\perp }<Q/2`$ cut, for all jet finding algorithms. Another interesting observable is the average energy in the current Breit hemisphere, which in general differs from that of a corresponding $`e^+e^{-}`$ hemisphere. Consider two hemispheres in the rest frame of an $`e^+e^{-}`$ annihilation event. In general, they have different masses and hence different energies. 
In other words, a high-$`p_{\perp }`$ emission in one hemisphere reduces the energy of the other, due to energy–momentum conservation. As for an $`e^+e^{-}`$ hemisphere, the energy of the current Breit hemisphere at fixed $`Q^2`$ depends on emissions in this hemisphere and in the nearby phase space of the opposite hemisphere. The latter is very large in DIS at low $`x`$, but after our suggested event cuts, it is reduced by the condition $`p_{\perp }<Q/2`$. The corresponding kinematical constraint in $`e^+e^{-}`$ is however $`\left|𝐩\right|<Q/2`$, which is more restrictive. Thus the region for high-$`p_{\perp }`$ emissions which can reduce the energy of the considered hemisphere is larger in DIS than in $`e^+e^{-}`$. Without reaching for a quantitative prediction, we expect the mean energy and multiplicity of the current Breit hemisphere to satisfy $$E_{\mathrm{CBH}}<\frac{Q}{2},N_{\mathrm{CBH}}<\frac{1}{2}N_{ee}(Q^2),$$ (1) where $`N_{ee}(Q^2)`$ is the average multiplicity in an $`e^+e^{-}`$ experiment at squared invariant mass $`Q^2`$. If $`2E_{\mathrm{CBH}}/Q`$ were much smaller than 1, the similarity between the current Breit hemisphere and an unbiased $`e^+e^{-}`$ hemisphere would be poor, and it is not likely that reliable conclusions could be drawn from a comparison. On the other hand, if $`2E_{\mathrm{CBH}}/Q\approx 1`$, the current Breit hemisphere sample may be closely related to an unbiased $`e^+e^{-}`$ hemisphere. In Fig 2, the relative energy shift $`2E_{\mathrm{CBH}}/Q-1`$ is shown to be sizeable for the unrestricted event sample, but significantly reduced after imposing a $`p_{\perp }<Q/2`$ cut. #### Flavour Compositions A cut in $`p_{\perp }`$ suppresses the boson-gluon fusion channel for charm production in DIS and reduces the charm rate. In our MC simulations, charm is suppressed by roughly a factor 2 in a $`p_{\perp }<Q/2`$ event sample. A problem for the interpretation of this result is the fact that the MC simulation program significantly underestimates the rate of charm events. 
In the MC this is about 10%, while data indicate a much larger value, around 25% . However, a similar suppression ought to be expected also for real data. If boson-gluon fusion is a more important source of charm in nature than assumed in the MC model, the suppression may be even stronger. It would then be appropriate to compare with uds enriched $`e^+e^{-}`$ data. How to obtain such data at different energies will be discussed in section 4. Monte Carlo results for average multiplicities are presented in the left plot of Fig 3. The results for the low-$`p_{\perp }`$ DIS sample are essentially equivalent for the three considered jet algorithms, and only the Diclus result is shown. In this MC generated event sample, heavy quark rates are well below 10%, and it is therefore better compared to a MC $`e^+e^{-}`$ sample with only uds. The agreement between the low-$`p_{\perp }`$ and the $`e^+e^{-}`$ uds samples is better than between the unrestricted event samples also shown in Fig 3. #### Energy Scale Corrections As discussed previously in this section, high-$`p_{\perp }`$ emissions close in rapidity to the considered hemisphere imply a reduction of the average energy in the current Breit hemisphere, $`E_{\mathrm{CBH}}`$, to a value slightly smaller than $`Q/2`$. It is then natural to compare the current Breit hemisphere multiplicities with $`e^+e^{-}`$ data at a squared mass $`s=4E_{\mathrm{CBH}}^2`$. The agreement between $`e^+e^{-}`$ and Breit frame multiplicities is then significantly improved, as shown in the right plot of Fig 3. Only the Durham result is presented, but the other results are very similar. Though the focus here is on radiative corrections, we cannot firmly exclude the possibility that there are differences between $`e^+e^{-}`$ and DIS also in the hadronization phase, which could imply a non-perturbative flow of energy and particles between the current region and the target region. 
Such an effect is masked by the radiative corrections, and an unexpected breaking of quark fragmentation universality in the hadronization phase could hardly be seen in the average multiplicity. However, once the expected differences are corrected for, unexpected features which break quark fragmentation universality can still be searched for in less blunt observables, like strangeness rates , energy spectra and higher multiplicity moments. Comparing the left and right plots in Fig 3, we see that the high-$`p_{\perp }`$ cut has a relatively small influence on the average multiplicity, as compared to the energy scale shift. However, in Fig 2 we note that the rate of empty current Breit hemisphere events, which are excluded from the analysis more by necessity than by theoretical understanding, is reduced by the high-$`p_{\perp }`$ cut. This is also the case for the energy shift, as seen in Fig 2. Thus we find that a cut in jet $`p_{\perp }`$ is a powerful step towards an event sample where quark fragmentation universality is expected to hold. ## 4 Scale Evolutions in Fixed Energy $`𝐞^+𝐞^{-}`$ Annihilation A large sample of uds enriched events is available from LEP1 at $`\sqrt{s}=90`$ GeV. This is not the case for other energies corresponding to the HERA kinematical range. In this section we discuss how the scale evolution of $`e^+e^{-}`$ uds hemispheres can be examined, using data from a fixed energy $`e^+e^{-}`$ experiment. In an $`e^+e^{-}`$ experiment at squared mass $`s`$, we consider an artificial scale $`Q_{\mathrm{max}}^2<s`$. Three jets are reconstructed with a $`k_{\perp }`$ cluster algorithm, and events where $`p_{\perp }>Q_{\mathrm{max}}/2`$ are excluded. 
We study the multiplicity of particles whose rapidity, measured along the thrust direction, satisfies $$\left|y\right|>\frac{1}{2}\mathrm{ln}(s/Q_{\mathrm{max}}^2).$$ (2) These cones around the thrust direction correspond to $`e^+e^{-}`$ hemispheres with squared invariant mass $`Q_{\mathrm{max}}^2`$, and their evolution with this scale can thus be studied in a fixed energy experiment. (For a more detailed discussion, see e.g. .) This “thrust-cone” method is designed to be similar to the Breit frame analysis. Two differences are present, due to the absence of a $`t`$-channel probe in $`e^+e^{-}`$. The scale $`Q^2`$ and the direction which defines the quark jets are determined by the probe in DIS. In $`e^+e^{-}`$, the scale $`Q_{\mathrm{max}}^2`$ is chosen freely and the rapidity is measured along the thrust direction. It is possible to use the thrust direction also in the DIS analysis, which could improve the similarity with the $`e^+e^{-}`$ thrust-cone algorithm. A thorough investigation of the quantitative effects of this adjustment is however better performed with a full detector simulation, as they may depend on how the thrust direction is reconstructed in DIS events with a large fraction of the target region undetected. In Fig 4 the multiplicity evolution of an $`e^+e^{-}`$ uds sample is compared to results including all flavours and to results obtained by the thrust-cone algorithm in a fixed energy uds sample. The thrust-cone multiplicities are plotted as a function of the average energy scales obtained in the cones. These scales are somewhat smaller than the chosen $`Q_{\mathrm{max}}/2`$, and the deviations are of order 5 to 10%. The aim of the thrust-cone algorithm is to reproduce the scale evolution of an $`e^+e^{-}`$ uds sample better than a sample including all flavours does. As seen in Fig 4, this is achieved for moderate $`Q_{\mathrm{max}}`$, between 4 and 8 GeV. 
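The mapping from the chosen scale $`Q_{\mathrm{max}}`$ to the rapidity cut in Eq. (2) is elementary; a small helper (the numerical values in the note below are purely illustrative):

```python
import math

def cone_rapidity_cut(sqrt_s, q_max):
    """Thrust-axis rapidity cut |y| > (1/2) ln(s / Q_max^2) of Eq. (2);
    the retained cones correspond to hemispheres of squared invariant
    mass Q_max^2."""
    return 0.5 * math.log(sqrt_s ** 2 / q_max ** 2)
```

At the $`\mathrm{Z}^0`$ pole ($`\sqrt{s}\approx 91`$ GeV) the window $`Q_{\mathrm{max}}=4`$ to 8 GeV corresponds to cuts ranging from roughly $`\left|y\right|>3.1`$ down to $`\left|y\right|>2.4`$.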
This corresponds to current Breit hemispheres in DIS with $`Q^2`$ between 16 and 64 $`\mathrm{GeV}^2`$, which is an important range at the HERA experiments. In rare cases where an emitted gluon gives the hardest jet, the thrust will be directed along the gluon momentum. Thus the average multiplicity in the thrust-cone analysis gets some contribution from gluon jets, which results in a systematic overestimation of the multiplicity. For $`Q_{\mathrm{max}}>`$ 8 GeV, the effect of gluon “pollution” is of the same order as the effect of the heavy quarks. However, in a realistic experimental analysis, the thrust-cone results applied to uds events at the $`\mathrm{Z}^0`$ peak will benefit from very large statistics compared to experiments at lower energies, and also from the fact that the scale evolution can be studied over a large range with the same detector. To conclude this section, our investigation indicates that it would be interesting to compare DIS data not only to full $`e^+e^{-}`$ results at different $`s`$, but also with thrust-cone results in uds samples from the $`\mathrm{Z}^0`$ experiments. ## 5 Summary The assumption of quark fragmentation universality implies that the current Breit hemisphere in DIS is expected to be very similar to a hemisphere in $`e^+e^{-}`$ annihilation. However, the experimental situations are different, and several corrections to universality are present. We have here proposed event cuts to improve the expected validity of quark fragmentation universality. We suggest to exclude from the DIS analysis events with high-$`p_{\perp }`$ emissions, which have no correspondence in $`e^+e^{-}`$ events. The $`p_{\perp }`$-scale can be reconstructed using jet cluster algorithms with the resolution scale $`Q/2`$. For these purposes, it is suitable to use $`k_{\perp }`$ clustering algorithms originally designed for analyses of $`e^+e^{-}`$ annihilation events. 
Using Monte Carlo simulations, we investigate three different types of $`k_{\perp }`$ cluster schemes, and find that the agreement between $`e^+e^{-}`$ and Breit frame results improves after a cut in jet $`p_{\perp }`$, independently of the specific choice of $`e^+e^{-}`$ $`k_{\perp }`$ algorithm. In the accepted low-$`p_{\perp }`$ DIS sample, heavy quarks are suppressed. This motivates a comparison with uds enriched $`e^+e^{-}`$ data, which are available from the experiments at the $`\mathrm{Z}^0`$ pole, but not at lower energies. We have here presented a method, the “thrust-cone algorithm”, to study scale evolutions of $`e^+e^{-}`$ quark hemispheres using data from fixed energy experiments. With this algorithm, uds enriched data with high statistics from the LEP1 experiments can be compared to results for the current Breit hemisphere over a large range of energies. ### Acknowledgments I thank Leif Lönnblad and Gösta Gustafson for their significant contributions to this investigation.
# Thermoelectric Response Near the Density Driven Mott Transition ## Abstract We investigate the thermoelectric response of correlated electron systems near the density driven Mott transition using the dynamical mean field theory. Thermoelectric effects, which are responsible for the direct conversion of heat energy into electrical energy and vice versa, have recently received renewed attention. So far optimal thermoelectric response has been achieved in traditional doped semiconductors, but recent improvements in both the synthesis of correlated electron systems and the theoretical methods for treating them have generated new interest in the thermoelectric properties of correlated materials. The thermoelectric performance of a material reflects its ability to convert applied voltages into temperature gradients while minimizing the irreversible effects of Joule heating and thermal conduction. This can be quantified by the so-called dimensionless figure of merit, denoted here by $`Z_TT`$: $$Z_TT=\frac{S^2\sigma T}{\kappa +\kappa _L}.$$ (1) Here $`\sigma `$ is the electrical conductivity, $`S`$ the thermopower (or Seebeck coefficient) and $`T`$ the temperature; $`\kappa `$ is the electronic contribution to the thermal conductivity and $`\kappa _L`$ is the lattice thermal conductivity. In weakly correlated systems the Seebeck coefficient can be interpreted as the logarithmic derivative of the conductivity with respect to the Fermi energy. In strongly correlated electron systems $`S`$ is a more difficult quantity to interpret, and this is the subject of this Letter. We consider a system of fermions doped away from a Mott insulating state, where the magnetic correlations are weak so that the magnetism is not the driving force behind the metal to insulator transition. This situation is realized experimentally in titanate and vanadate perovskite compounds.
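For orientation, Eq. (1) can be evaluated directly once the transport coefficients are known; a minimal sketch in SI units, with sample numbers that are purely illustrative and not taken from any real material:

```python
def figure_of_merit(S, sigma, T, kappa_e, kappa_L):
    """Dimensionless figure of merit Z_T T = S^2 sigma T / (kappa_e + kappa_L).

    S in V/K, sigma in S/m, T in K, thermal conductivities in W/(m K).
    """
    return S ** 2 * sigma * T / (kappa_e + kappa_L)

# Illustrative numbers: S = 200 uV/K, sigma = 1e5 S/m, T = 300 K,
# kappa_e = kappa_L = 1 W/(m K):
zt = figure_of_merit(200e-6, 1e5, 300.0, 1.0, 1.0)  # -> 0.6
```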
For example, La<sub>1-x</sub>Y<sub>x</sub>TiO<sub>3</sub> is ferromagnetic for $`x`$ near 1 and anti-ferromagnetic at small values of $`x`$, but in all these compounds the Néel and Curie temperatures are quite small, of the order of 130 K or less, so we focus on the paramagnetic phase of the doped Mott insulator. The experimental motivation for our study was mainly provided by the work of Y. Tokura’s group, which has demonstrated that in the class of ternary perovskites one can control to a large extent various parameters such as orbital degeneracy, bandwidth, and carrier concentration to provide an experimental realization of the filling driven Mott transition. To approach this problem we use the Dynamical Mean Field Theory (DMFT), which has successfully described many aspects of the physics of three dimensional transition metal oxides. The goal of our study is to understand qualitatively the thermoelectric response of a doped Mott insulator and to derive explicit formulas for the transport coefficients which are valid at very low and high temperatures. Using these insights and numerical results we discuss the figure of merit of this particular class of materials. An early numerical calculation of the Seebeck coefficient in the large-$`d`$ Hubbard model appeared in ref. . We consider perfect periodic solids described by the $`N`$-fold degenerate Hubbard model: $$H=-\sum_{ij\sigma }t_{ij}c_{i\sigma }^+c_{j\sigma }+\frac{U}{2}\sum_{j\sigma \sigma ^{\prime }}n_{j\sigma }n_{j\sigma ^{\prime }}-\mu \sum_{j\sigma }n_{j\sigma }$$ (2) and ignore electron-phonon interactions. The index $`\sigma `$ can be thought of as a spin or an orbital index, and will run from 1 to $`N`$. The single band case corresponds to $`N=2`$. We now summarize the relevant aspects of the DMFT.
The transport properties are obtained from a Green’s function with a frequency dependent but momentum independent self energy: $$G(k,\omega )=G(ϵ_k,\omega )=\frac{1}{\omega +\mu -ϵ_k-\mathrm{\Sigma }(\omega )}$$ (3) where $`ϵ_k`$ is the dispersion relation. The self-energy is computed by solving an Anderson impurity model in a bath described by a hybridization function $`\mathrm{\Delta }(i\omega )`$. Regarded as a functional of the hybridization function, it obeys the self-consistency condition $$\frac{1}{i\omega +\mu -\mathrm{\Delta }(i\omega )-\mathrm{\Sigma }(i\omega )}=\int d\epsilon \,G(\epsilon ,i\omega )D(\epsilon )$$ (4) where $`D(\epsilon )`$ is the bare density of states, $`D(\epsilon )=\sum_k\delta (\epsilon -ϵ_k).`$ The transport coefficients that govern the electrical and thermal responses of the model are given in terms of current-current correlation functions. Within the DMFT they reduce to averages over the spectral density $`\rho (ϵ,\omega )`$: $$\sigma =\frac{e^2}{T}A_0,\quad S=\frac{k_B}{e}\frac{A_1}{A_0},\quad \kappa =k_B^2(A_2-\frac{A_1^2}{A_0}).$$ (5) where $`A_n`$ $`=`$ $`{\displaystyle \frac{\pi }{\hbar V}}{\displaystyle \sum_{k,\sigma }}{\displaystyle \int d\omega \,\rho _\sigma (k,\omega )^2\left(\frac{\partial ϵ_k}{\partial k_x}\right)^2\left(-T\frac{\partial f(\omega )}{\partial \omega }\right)(\beta \omega )^n}`$ (6) $`=`$ $`{\displaystyle \frac{N\pi }{\hbar k_B}}{\displaystyle \int _{-\infty }^{\infty }d\omega \int d\epsilon \,\frac{\rho ^2(\epsilon ,\omega )(\omega \beta )^n}{4\mathrm{cosh}^2(\frac{\beta \omega }{2})}\mathrm{\Phi }(\epsilon )}.`$ (7) In this expression the relevant information about the bare band structure is contained in the spectral density, $`\rho `$, and the transport function $`\mathrm{\Phi }`$, $$\mathrm{\Phi }(\epsilon )=\frac{1}{V}\sum_k\left(\frac{\partial ϵ_k}{\partial k_x}\right)^2\delta (\epsilon -ϵ_k).$$ (8) The DMFT expressions bear a superficial similarity with recent work of Mahan and Sofo, if we identify their transport function, which we denote $`\mathrm{\Sigma }_{\text{MS}}(\epsilon )`$, as:
$$\mathrm{\Sigma }_{\text{MS}}(\omega +\mu )=\left(\frac{N\pi }{\hbar }\right)\int d\epsilon \,\rho ^2(\epsilon ,\omega )\mathrm{\Phi }(\epsilon ).$$ (9) Ref. stressed the relevance of correlated materials to thermoelectricity and suggested that the optimal response is obtained when $`\mathrm{\Sigma }_{\text{MS}}`$ takes the form of a delta-function-like peak at the Fermi level. The DMFT allows us to derive explicit expressions for $`\mathrm{\Sigma }_{\text{MS}}`$ starting from microscopic Hamiltonians. Near the Mott transition, the low energy spectral function does assume a delta-like form at low temperatures. We find, however, that the optimum of the figure of merit is achieved at high temperatures, when the quasi-delta-function-like resonance at the Fermi level is absent. In electronic systems near a Mott transition there are two widely separated energy scales: the bare bandwidth $`D`$, which sets the scale for incoherent excitations, and the coherence temperature $`T_{\text{coh}}`$, where Fermi-liquid-like properties begin to be observed. $`T_{\text{coh}}`$ vanishes as we approach half filling. As a result, when we decrease the temperature starting from very high temperatures, we expect two crossovers to take place, one when $`k_BT\sim D`$ and the other when $`T\sim T_{\text{coh}}`$. We assume in this analysis that the interaction energy $`U`$ is much larger than the bare bandwidth, i.e. we are close to the $`U=\infty `$ limit. At very low temperature the Fermi liquid picture is applicable and momentum space offers a natural description of the transport processes. The only states that contribute come from a narrow region around the Fermi surface, which in the local mean-field picture shows up as a narrow Kondo-like resonance near the Fermi level. At very high temperatures the entire Brillouin zone is equally important and a real-space picture of the transport is more appropriate. The transport is then entirely due to incoherent motion of charge carriers.
We discuss these two asymptotic regimes below. Low temperature regime: In this regime we use Fermi liquid ideas to parameterize the transport coefficients in terms of a few parameters, which contain all the effects of the interactions: $`\gamma =-\mathrm{Im}\mathrm{\Sigma }(0)`$ is the scattering rate at the Fermi level, $`Z=(1-\frac{\partial \mathrm{Re}\mathrm{\Sigma }}{\partial \omega }|_{\omega =0})^{-1}`$ is the quasiparticle residue at the Fermi level and $`\stackrel{~}{\mu }=\mu -\mathrm{Re}\mathrm{\Sigma }(0)`$ is the effective chemical potential. Close to the Mott transition there is only one effective scale controlling the low-temperature physics, and therefore the scattering rate behaves as $`A(k_BT)^2/Z^2D`$, where $`A`$ is a dimensionless constant. Performing a low-temperature expansion of equation (6) and substituting into (5), we obtain: $$\sigma =\frac{Ne^2DZ^2\mathrm{\Phi }(\stackrel{~}{\mu })E_0}{2A(k_BT)^2},\quad \kappa =\frac{k_BNDZ^2}{2A(k_BT)}\mathrm{\Phi }(\stackrel{~}{\mu })E_2$$ (10) We find that the thermopower and the figure of merit do not depend on $`\gamma `$ and are given by $`S`$ $`=`$ $`{\displaystyle \frac{k_B}{e}}\left({\displaystyle \frac{k_BT}{Z}}{\displaystyle \frac{d\mathrm{ln}\mathrm{\Phi }(\stackrel{~}{\mu })}{d\stackrel{~}{\mu }}}\right){\displaystyle \frac{E_2}{E_0}}`$ (11) $`Z_TT`$ $`=`$ $`{\displaystyle \frac{E_2}{E_0}}\left({\displaystyle \frac{k_BT}{Z}}{\displaystyle \frac{d\mathrm{ln}\mathrm{\Phi }(\stackrel{~}{\mu })}{d\stackrel{~}{\mu }}}\right)^2.`$ (12) Here the numbers $`E_n`$ are given by $$E_n=\int _{-\infty }^{\infty }\frac{x^ndx}{4\mathrm{cosh}^2\left(\frac{x}{2}\right)\left[1+\left(\frac{x}{\pi }\right)^2\right]}.$$ (13) Numerical calculation gives $`E_0=0.82`$ and $`E_2=1.75`$. Note that the Wiedemann-Franz law holds here, as it does for transport dominated by impurity scattering. However, since the scattering rate is energy dependent around the Fermi surface, the ratio of $`\sigma T`$ and $`\kappa `$ is not equal to the classical Lorenz number.
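The numbers $`E_0`$ and $`E_2`$ quoted above can be checked by direct quadrature of Eq. (13); a simple trapezoidal sketch, where the step size and integration cutoff are our own choices:

```python
import math

def E_n(n, x_max=40.0, steps=200000):
    """Trapezoidal estimate of E_n = int x^n dx / (4 cosh^2(x/2) [1 + (x/pi)^2])."""
    h = 2.0 * x_max / steps
    total = 0.0
    for i in range(steps + 1):
        x = -x_max + i * h
        f = x ** n / (4.0 * math.cosh(x / 2.0) ** 2 * (1.0 + (x / math.pi) ** 2))
        total += f if 0 < i < steps else 0.5 * f  # half weight at the endpoints
    return total * h

# Reproduces the quoted values E_0 = 0.82 and E_2 = 1.75 to two decimals:
e0, e2 = E_n(0), E_n(2)
```

The resulting ratio $`E_2/E_0\approx 2.1`$ differs from the Sommerfeld value $`\pi ^2/3\approx 3.3`$, which is precisely the deviation from the classical Lorenz number mentioned in the text.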
The physical content of equation (12) is transparent. Correlations enhance the figure of merit relative to that of a non-interacting system with the same density of states by a factor of $`\frac{1}{Z^2}`$, which in this context can be thought of as the square of the mass enhancement. This factor is expected to be large and in fact diverges as we approach the density driven Mott transition in La<sub>1-x</sub>Sr<sub>x</sub>TiO<sub>3</sub>. Note however that expression (12) is only valid in the low-temperature regime because it is restricted to $`\beta ZD\gg 1`$. Thus the figure of merit will be very low in the low-temperature regime unless the logarithmic derivative of the transport function becomes appreciable. Unlike the density of states, however, the transport function does not have any Van Hove singularities, and the only singular points in the logarithmic derivative are at the band edges where the transport function vanishes. High temperature regime: To describe this region we observe that the spectral function shifted by the value of the chemical potential $$\stackrel{~}{\rho }(ϵ,\omega )=\rho (ϵ,\omega -\mu )$$ (14) converges to a well-defined shape centered around $`\omega =0`$ as the temperature tends to infinity. This agrees with a rigid-band interpretation of the lower Hubbard band, except that the carriers in this band are completely incoherent near the band edge. The high-temperature behavior of the chemical potential can also be found analytically and is given by: $`\beta \mu =\mathrm{ln}(\frac{n}{N(1-n)})`$. We can now obtain the leading high-temperature behavior of the transport coefficients by inserting the scaling form (14), with $`\stackrel{~}{\rho }`$ temperature independent, and expanding the equations for the coefficients $`A_n`$ to lowest order in $`\beta `$. The results are parameterized in terms of the moments $$\gamma _m=\frac{a}{D^{m+1}}\int d\epsilon \,d\omega \,\omega ^m\stackrel{~}{\rho }^2(ϵ,\omega )\mathrm{\Phi }(ϵ)$$ (15) which we evaluate numerically.
To leading order in $`\beta `$ we find: $`\sigma `$ $`=`$ $`\left({\displaystyle \frac{e^2}{a\hbar }}\right)\pi N(D\beta )\gamma _0{\displaystyle \frac{\frac{n}{N}(1-n)}{(\frac{n}{N}+(1-n))^2}}`$ (16) $`S`$ $`=`$ $`\left({\displaystyle \frac{k_B}{e}}\right)\mathrm{ln}({\displaystyle \frac{n}{N(1-n)}})`$ (17) $`\kappa `$ $`=`$ $`\left({\displaystyle \frac{k_BD}{a\hbar }}\right)\pi N(D\beta )^2\gamma _2{\displaystyle \frac{\frac{n}{N}(1-n)}{(\frac{n}{N}+(1-n))^2}}.`$ (18) Near the Mott transition the density and degeneracy dependence of the moments $`\gamma _0`$ and $`\gamma _2`$ is given by: $`\gamma _m=\stackrel{~}{\gamma }_m(1-n+\frac{n}{N})^2`$ where the $`\stackrel{~}{\gamma }_m`$’s are constants, $`\stackrel{~}{\gamma }_0\approx 0.05`$ and $`\stackrel{~}{\gamma }_2\approx 0.01`$. The factor in parentheses is simply the total integrated spectral weight of the Green’s function. The high-temperature equation that we get for the thermopower corresponds to the well known Heikes formula, generalized for degeneracy $`N`$. Comparison with numerical solutions of the dynamical mean field equations reveals that, for the resistivity, the high temperature expansion formula is quite accurate over a very wide temperature range and breaks down only at temperatures of the order of $`T_{\text{coh}}`$. This is surprising, since the high temperature expansion is a priori only valid for $`\beta D\ll 1`$. This result gives some insight into the origin of the linear resistivity which was observed in early studies of the Hubbard model in infinite dimensions. Furthermore, if one keeps the next term in the high-temperature expansion of the thermopower, one can fit the numerical data over a comparable region. Notice that the thermopower close to the Mott transition is hole-like, which agrees with the picture of holes in a paramagnetic spin background.
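The generalized Heikes formula of Eq. (17), $`S=(k_B/e)\mathrm{ln}(n/(N(1-n)))`$, is easy to evaluate; a small sketch, where the helper name is ours and $`k_B/e\approx 86.17`$ µV/K:

```python
import math

KB_OVER_E_UV = 86.17  # k_B / e in microvolts per kelvin

def heikes_thermopower(n, N=2):
    """High-temperature thermopower S = (k_B/e) ln(n / (N (1 - n))),
    for filling 0 < n < 1 and degeneracy N, in microvolts per kelvin."""
    return KB_OVER_E_UV * math.log(n / (N * (1.0 - n)))

# S changes sign at n_0 = N / (1 + N) (n_0 = 2/3 for N = 2) and is
# hole-like (positive) close to half filling:
s_at_n0 = heikes_thermopower(2.0 / 3.0)   # -> 0 (to machine precision)
s_near_mott = heikes_thermopower(0.9)     # positive
```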
The high-temperature expression for the figure of merit is, to lowest order in $`\beta `$, given by: $$Z_TT=\frac{\pi \stackrel{~}{\gamma _0}\mathrm{ln}^2\left(\frac{n}{N(1-n)}\right)n(1-n)}{\frac{\kappa _L}{\kappa _D}+\pi \stackrel{~}{\gamma _2}(D\beta )^2n(1-n)},$$ (19) where $`\kappa _D=\frac{k_BD}{a\hbar }`$. Here the lattice contribution to the thermal conductivity has been included, since the electronic contribution tends to zero in the high temperature region. As temperature increases the figure of merit increases monotonically to a constant value that grows linearly with the bandwidth, $`D`$, of the system. At any finite temperature, however, the equation above gives a figure of merit that increases with $`D`$ up to a maximum which lies at a bandwidth larger than $`k_BT`$, and thus outside the region of validity of our formula. Thus we conclude that the optimum figure of merit is obtained when the bandwidth is of the order of the temperature. We now turn to quantitative calculations of the thermopower in the intermediate temperature range, which is the range most pertinent to experiments and possible applications. The best characterized system in the class of materials that we seek to describe is the La<sub>1-x</sub>Sr<sub>x</sub>TiO<sub>3</sub> system with $`x`$ small. We model this system with a Hubbard model on a three-dimensional hypercubic lattice with half bandwidth $`D=0.5`$ eV and interaction strength $`U=2.0`$ eV in the Mott insulating end of the series ($`x=0`$). We take into account the $`x`$ dependence of the bandwidth by using the fact that the Ti-O-Ti bond-angle, $`\theta `$, changes with doping. The bandwidth then depends on $`\theta `$ through $`D(\theta )=D(180^{\circ })\mathrm{cos}^2\theta `$. To select the bond-angles corresponding to a given doping we use data for ab-plane bond-angles from . The results from the calculations of $`S`$, using IPT, with this choice of parameters are displayed in Fig. 1.
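Equation (19) can likewise be sketched as a function of density. In the snippet below, the ratio $`\kappa _L/\kappa _D`$ and the value of $`D\beta `$ are free inputs that we set to illustrative values; only $`\stackrel{~}{\gamma }_0\approx 0.05`$ and $`\stackrel{~}{\gamma }_2\approx 0.01`$ are taken from the text:

```python
import math

def zt_high_T(n, N=2, D_beta=1.0, kL_over_kD=0.1, g0=0.05, g2=0.01):
    """High-temperature figure of merit of Eq. (19); g0, g2 are the constants
    gamma~_0 and gamma~_2 quoted in the text."""
    log_term = math.log(n / (N * (1.0 - n)))
    num = math.pi * g0 * log_term ** 2 * n * (1.0 - n)
    den = kL_over_kD + math.pi * g2 * D_beta ** 2 * n * (1.0 - n)
    return num / den

# Z_T T vanishes where the thermopower changes sign (n = N/(1+N))
# and grows towards half filling:
zt_zero = zt_high_T(2.0 / 3.0)   # -> 0 (the log term vanishes)
zt_doped = zt_high_T(0.95)       # positive
```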
We notice here that in the high temperature end of the data the values of the thermopower are actually close to what the Heikes formula predicts, indicating that the high temperature thermopower can be used as a rough estimate of the carrier density. At lower temperature we see a marked decrease in the magnitude of the thermopower, which heralds the transition into the Fermi-liquid regime where the thermopower is small and electron-like. We note, however, that for doping values lower than 0.1 the thermopower is already hole-like above 300 K, which indicates that the quasiparticle peak has almost disappeared and that the carriers are completely incoherent. This regime does not appear for smaller values of the interaction $`U`$, when the system is below the Mott transition. The thermopower near the Mott transition therefore seems to be a sensitive indicator of the character of the carriers (i.e. localized vs. itinerant). Since bandwidth and carrier concentration can be controlled to a large degree in this class of materials, we use our results to assess the prospects for a large figure of merit in this class of systems. We are not aware of any measurements of the thermal conductivity in the titanate compounds, so for the purpose of obtaining an order of magnitude estimate of the figure of merit we use a value of $`2.0`$ W/mK for $`\kappa _L`$, based on measurements carried out recently in the manganite oxides. The results from the numerical calculation of the figure of merit in the intermediate temperature region are displayed in Fig. 2. The data for T = 0.05 and T = 0.10 were obtained with iterated perturbation theory (IPT) modified for finite doping. For the T = 0.20 data we use the infinite-$`U`$ non-crossing approximation (NCA). This approach has also been shown to give results in good agreement with exact methods at high temperatures. The behavior of the figure of merit can be understood qualitatively as follows.
It vanishes at a density $`n_0(T)`$ where the Seebeck coefficient vanishes. At very high temperature this occurs at $`n_0=\frac{N}{1+N}`$, according to equation (17). As the temperature is lowered, $`n_0(T)`$ moves towards half filling and is given by the condition $`T_{\text{coh}}(n)\sim T`$. This is due to the formation of the coherent quasi-particles, which give a negative contribution to the thermopower. For densities lower than $`n_0(T)`$ the system is Fermi-liquid-like and we find a low figure of merit; closer to the Mott transition, at temperatures higher than $`T_{\text{coh}}(n)`$, the figure of merit is larger. The lowest temperature results show little density dependence below $`n_0`$. This is because both the logarithmic derivative of the transport function and the quasiparticle residue depend essentially linearly on the doping, and therefore the figure of merit varies little with density. The T = 0.1 data seem to display a trend similar to the T = 0.05 data, but the system is essentially out of the Fermi-liquid regime at that temperature. To conclude, we have investigated the thermoelectric coefficient and the thermoelectric figure of merit near the density driven Mott transition. We provided simple expressions for the transport coefficients of this model, obtaining a qualitative understanding of the thermoelectric coefficient. In the light of our results, a large figure of merit in this class of systems seems rather unlikely. It would require very small doping, and a very small value of the bare half-bandwidth parameter $`D`$, to be able to access the high temperature regime $`T>D`$ where the figure of merit is of order unity. We have argued, however, that the thermopower is a sensitive probe of the degree of itinerancy of the carriers, and an experimental investigation of the thermoelectric response in the region where a large mass enhancement has been observed is highly desirable.
Acknowledgements: We acknowledge useful discussions with Ekkehard Lange, Henrik Kajueter, A. Ramirez and P.B. Littlewood. This work was supported by the NSF under grant DMR 95-29138.
# Quantum Cryptography using entangled photons in energy-time Bell states ## Abstract We present a setup for quantum cryptography based on photon pairs in energy-time Bell states and show its feasibility in a laboratory experiment. Our scheme combines the advantages of using photon pairs instead of faint laser pulses and the possibility to preserve energy-time entanglement over long distances. Moreover, using 4-dimensional energy-time states, no fast random change of bases is required in our setup: Nature itself decides whether to measure in the energy or in the time basis. PACS Nos. 3.67.Dd, 3.67.Hk Quantum communication is probably one of the most rapidly growing and most exciting fields of physics within the last years. Its most mature application is quantum cryptography (also called quantum key distribution), ensuring the distribution of a secret key between two parties. This key can be used afterwards to encrypt and decrypt secret messages using the one-time pad. In contrast to the widely used “public key” systems, the security of quantum cryptography is not based on mathematical complexity but on an inherent property of single quanta. Roughly speaking, since it is not possible to measure an unknown quantum system without modifying it, an eavesdropper manifests herself by introducing errors in the transmitted data. During the last years, several prototypes based on faint laser pulses mimicking single photons have been developed, demonstrating that quantum cryptography not only works inside the laboratory, but in the “real world” as well. Besides, it has been shown that two-photon entanglement can be preserved over large distances, especially when the photons are entangled in energy and time.
As pointed out by Ekert in 1991, the nonlocal correlations engendered by such states can also be used to establish sequences of correlated bits at distant places, the advantage compared to systems based on faint laser pulses being the smaller vulnerability against a certain kind of eavesdropper attack. Besides improvements in the domain of quantum key distribution, recent experimental progress in generating, manipulating and measuring the so-called Bell states has led to fascinating applications like quantum teleportation, dense coding and entanglement swapping. In a recent paper, we proposed and tested a novel source for quantum communication, generating a new kind of Bell states based on energy-time entanglement. In this paper, we present a first application, exploiting this new source for quantum cryptography. Our scheme follows Ekert’s initial idea concerning the use of photon-pair correlations. However, in contrast to Ekert’s scheme, it implements Bell states and can thus be seen in the broader context of quantum communication. Moreover, the fact that energy-time entanglement can be preserved over long distances renders our source particularly interesting for long-distance applications. To understand the principle of our idea, we look at Fig. 1. A short light pulse emitted at time $`t_0`$ enters an interferometer having a path length difference which is large compared to the duration of the pulse. The pulse is thus split into two pulses of smaller amplitudes, following each other with a fixed phase relation. The light is then focused into a nonlinear crystal where some of the pump photons are downconverted into photon pairs.
Working with pump energies low enough to ensure that the generation of two photon pairs, by the same as well as by two subsequent pulses, can be neglected, a created photon pair is described by $`|\psi \rangle ={\displaystyle \frac{1}{\sqrt{2}}}\left(|s\rangle _A|s\rangle _B+e^{i\varphi }|l\rangle _A|l\rangle _B\right).`$ (1) $`|s\rangle `$ and $`|l\rangle `$ denote a photon created by a pump photon having traveled via the short or the long arm of the interferometer, and the indices A, B label the photons. The state (1) is composed of only two discrete emission times and not of a continuous spectrum. This contrasts with the energy-time entangled states used up to now. Please note that, depending on the phase $`\varphi `$, Eq. (1) describes two of the four Bell states. Interchanging $`|s\rangle `$ and $`|l\rangle `$ for one of the two photons leads to the generation of the remaining two Bell states. In general, the coefficients describing the amplitudes of the $`|s\rangle |s\rangle `$ and $`|l\rangle |l\rangle `$ states can be different, leading to nonmaximally entangled states. However, in this article we will deal only with maximally entangled states. Behind the crystal, the photons are separated and are sent to Alice and Bob, respectively (see Fig. 1). There, each photon travels via another interferometer, introducing exactly the same difference of travel times through one or the other arm as did the previous interferometer acting on the pump pulse. If Alice looks at the arrival times of the photons with respect to the emission time of the pump pulse $`t_0`$ – note that she has two detectors to look at – she will find the photons in one of three time slots. For instance, detection of a photon in the first slot corresponds to “pump photon having traveled via the short arm and downconverted photon via the short arm”. To keep it short, we refer to this process as $`|s\rangle _P;|s\rangle _A`$, where $`P`$ stands for the pump photon and $`A`$ for Alice’s photon.
However, the characterization of the complete photon pair is still ambiguous, since, at this point, the path of the photon having traveled to Bob (short or long in his interferometer) is unknown to Alice. Fig. 1 illustrates all processes leading to a detection in the different time slots both at Alice’s and at Bob’s detector. Obviously, this reasoning holds for any combination of two detectors. In order to build up the secret key, Alice and Bob now publicly agree about the events where both detected a photon in one of the satellite peaks – without revealing in which one – or both in the central peak – without revealing the detector. This additional information enables both of them to know exactly via which arm the sister photon, detected by the other person, has traveled. For instance, to come back to the example given above, if Bob tells Alice that he also detected his photon in a satellite peak, she knows that the process must have been $`|s\rangle _P;|s\rangle _A|s\rangle _B`$. The same holds for Bob, who now knows that Alice’s photon traveled via the short arm of her interferometer. If both find the photons in the right peak, the process was $`|l\rangle _P;|l\rangle _A|l\rangle _B`$. In either case, Alice and Bob have correlated detection times. The cross terms, where one of them detects a photon in the left and the other one in the right satellite peak, do not occur. Assigning now bit values 0 (1) to the short (long) processes, Alice and Bob finally end up with a sequence of correlated bits. Otherwise, if both find the photon in the central slot, the process must have been $`|s\rangle _P;|l\rangle _A|l\rangle _B`$ or $`|l\rangle _P;|s\rangle _A|s\rangle _B`$. If both possibilities are indistinguishable, we face the situation of interference, and the probability for detection by a given combination of detectors (e.g. the “+”-labeled detector at Alice’s and the “–”-labeled one at Bob’s) depends on the phases $`\alpha `$, $`\beta `$ and $`\varphi `$ in the three interferometers.
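The bookkeeping behind this sifting step can be mimicked with a classical toy simulation (this is only the arrival-time arithmetic, not a quantum-mechanical model, and the function name is ours): the pump path is shared by the pair, the analyzer paths are drawn independently, and conditioning both parties on the satellite peaks then forces identical time slots, so the left-right cross terms never survive.

```python
import random

def sift_time_basis(n_events=20000, seed=1):
    """Classical toy model of the time-basis sifting: keep only events where
    both photons fall in a satellite peak (slot 0 = left, 2 = right)."""
    rng = random.Random(seed)
    sifted = []
    for _ in range(n_events):
        pump = rng.randint(0, 1)      # pump interferometer: 0 = short, 1 = long
        a = rng.randint(0, 1)         # Alice's analyzer path
        b = rng.randint(0, 1)         # Bob's analyzer path
        t_alice = pump + a            # arrival slot: 0, 1 or 2
        t_bob = pump + b
        if t_alice != 1 and t_bob != 1:   # both in a satellite peak
            sifted.append((t_alice, t_bob))
    return sifted

events = sift_time_basis()
# every sifted pair is perfectly correlated: (0, 0) or (2, 2)
```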
The quantum mechanical treatment leads to $`P_{i,j}=\frac{1}{2}\left(1+ij\mathrm{cos}(\alpha +\beta -\varphi )\right)`$, with $`i`$,$`j`$ = $`\pm 1`$ denoting the detector labels. Hence, choosing appropriate phase settings, Alice and Bob will always find perfect correlations in the output ports. Either both detect the photons in detector “–” (bit value “0”), or both in detector “+” (bit value “1”). Since the correlations depend on the phases and thus on the energy of the pump, signal and idler photons, we refer to this basis as the energy basis (showing wave-like behaviour), stressing the complementarity with the other, the time basis (showing particle-like behaviour). Like in the BB84 protocol, it is the use of complementary bases that ensures the detection of an eavesdropper. If we consider for instance the most intuitive intercept/resend strategy, the eavesdropper intercepts the photons, measures them in one of the two bases and sends new, accordingly prepared photons instead. Since she never knows in which basis Bob’s measurement will take place, she will in half of the cases eavesdrop and resend the photons in the “wrong” basis and therefore will statistically introduce errors in Bob’s results, revealing in turn her presence. For a more general treatment of quantum key distribution and eavesdropping using energy-time complementarity, we refer the reader to . To generate the short pump pulses, we use a pulsed diode laser (PicoQuant PDL 800), emitting 600 ps (FWHM) pulses of 655 nm wavelength at a repetition frequency of 80 MHz. The average power is $`\sim `$ 10 mW, equivalent to an energy of 125 pJ per pulse. The light passes a dispersive prism, preventing the small quantity of infrared light that is also emitted from entering the subsequent setup, and a polarizing beamsplitter (PBS), serving as an optical isolator. The pump is then focused into a single-mode fiber and is guided into a fiber-optical Michelson interferometer made of a 3 dB fiber coupler and chemically deposited silver end mirrors.
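The coincidence probabilities in the energy basis follow directly from the expression above; a minimal sketch:

```python
import math

def coincidence_prob(i, j, alpha, beta, phi):
    """P_{i,j} = (1/2) (1 + i j cos(alpha + beta - phi)), with i, j = +1 or -1
    labeling the "+" and "-" detectors at Alice's and Bob's."""
    return 0.5 * (1.0 + i * j * math.cos(alpha + beta - phi))

# For alpha + beta - phi = 0 the outcomes are perfectly correlated:
p_same = coincidence_prob(+1, +1, 0.3, -0.3, 0.0)    # -> 1.0
p_cross = coincidence_prob(+1, -1, 0.3, -0.3, 0.0)   # -> 0.0
```

For a fixed outcome on Alice's side, the two possible outcomes on Bob's side are exhaustive, so their probabilities sum to one for any phase setting.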
The path length difference corresponds to a difference of travel times of $`\sim `$ 1.2 ns, splitting the pump pulse into two well-separated pulses. The arm-length difference of the whole interferometer can be controlled using a piezoelectric actuator in order to ensure any desired phase difference. In addition, the temperature is kept stable. In order to control the evolution of the polarization in the fibers, we implement three fiber-optical polarization controllers, each consisting of three inclinable fiber loops – equivalent to three waveplates in the case of bulk optics. The first device is placed before the interferometer and ensures that all light leaving the Michelson interferometer by the input port will be reflected by the already mentioned PBS and thus will not impinge on the laser diode. The second controller serves to equalize the evolution of the polarization states within the different arms of the interferometer, and the last one enables control of the polarization state of the light that leaves the interferometer by the second output port. The horizontally polarized light is now focused into a 4×3×12 mm $`KNbO_3`$ crystal, cut and oriented to ensure degenerate collinear phasematching, hence producing photon pairs at 1310 nm wavelength – within the so-called second telecommunication window. Due to injection losses of the pump into the fiber and losses within the interferometer, the average power before the crystal drops to $`\sim `$ 1 mW, and the energy per pulse – remember that each initial pump pulse is now split into two – to $`\sim `$ 6 pJ. The probability of creating more than one photon pair within the same or within two subsequent pulses is smaller than 1%, justifying the assumption that led to Eq. (1). Behind the crystal, the red pump light is absorbed by a filter (RG 1000). The downconverted photons are then focused into a fiber coupler, separating them in half of the cases, and are guided to Alice and Bob, respectively.
The interferometers (type Michelson) located there have been described in detail in . They consist of a 3-port optical circulator, providing access to the second output arm of the interferometer, a 3 dB fiber coupler, and Faraday mirrors to compensate any birefringence within the fiber arms. To control their phases, the temperature can be varied or kept stable. Overall losses are about 6 dB. The path length differences of both interferometers are equal with respect to the coherence length of the downconverted photons – approximately 20 $`\mu `$m. In addition, the travel time difference is the same as the one introduced by the interferometer acting on the pump pulse. In this case, “the same” refers to the coherence time of the pump photons, around 800 fs or 0.23 mm, respectively. To detect the photons, the output ports are connected to single-photon counters – passively quenched germanium avalanche photodiodes, operated in Geiger mode and cooled to 77 K. We operate them at dark count rates of 30 kHz, leading to quantum efficiencies of $`\sim `$ 5%. The single photon detection rates are 4-7 kHz, the discrepancy being due to different detection efficiencies and losses in the circulators. The signals from the detectors, as well as signals coincident with the emission of a pump pulse, are fed into fast AND gates. To demonstrate our scheme, we first measure the correlated events in the time basis. Conditioning the detection at Alice’s and Bob’s detectors both on the left satellite peaks ($`|s\rangle _P`$,$`|s\rangle _A`$ and $`|s\rangle _P`$,$`|s\rangle _B`$, respectively), we count the number of coincident detections between both AND gates, that is, the number of triple coincidences between the emission of a pump pulse and detections at Alice’s and Bob’s. In subsequent runs, we measure these rates for the right-right ($`|l\rangle _P`$,$`|l\rangle _A`$ AND $`|l\rangle _P`$,$`|l\rangle _B`$) events, as well as for the right-left cross terms.
We find values around 1700 coincidences per 100 s for the correlated, and around 80 coincidences for the non-correlated events (table 1). From the four times four rates – remember that we have four pairs of detectors – we calculate the different quantum bit error rates QBER, defined as the ratio of wrong to detected events. We find values between 3.7 and 5.7 %, leading to a mean QBER for the time basis of (4.6 $`\pm `$0.1)%. In order to evaluate the QBER in the energy basis, we condition the detection at Alice’s and Bob’s on the central peaks. Changing the phases in any of the three interferometers, we observe interference fringes in the triple coincidence count rates (Fig.2). Fits yield visibilities of 89.3 to 94.5 % for the different detector pairs (table 2). In the case of appropriately chosen phases, the number of correlated events is around 800 in 50 s, and the number of errors is around 35. From these values, we calculate the QBERs for the four detector pairs. We find values between 2.8 and 5.4 %, leading to a mean QBER for the energy basis of (3.9$`\pm `$0.4) %. Note that this experiment can be seen as a Franson-type test of Bell inequalities as well . From the mean visibility of (92.2$`\pm `$0.8)%, we infer a violation of the Bell inequality by 27 standard deviations. As in all experimental quantum key distribution schemes, the observed QBERs are non-zero, even in the absence of any eavesdropping. Nevertheless, they are still small enough to guarantee the detection of an eavesdropper attack, thus allowing secure key distribution. The remaining $`\approx`$ 4 % of errors are due to (1) accidental coincidences from uncorrelated events at the single-photon detectors, (2) a not perfectly localized pump pulse, (3) the non-perfect time resolution of the photon detectors, and, (4) in the case of the energy basis, non-perfect interference.
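The numbers quoted above can be cross-checked directly: for an interference basis with fringe visibility $`V`$, the error fraction is $`(1-V)/2`$, and the distance of $`V`$ from the local bound $`1/\sqrt{2}`$ of the Franson-type Bell inequality, in units of the quoted uncertainty, gives the number of standard deviations of the violation. A minimal sketch (the function names are ours, not from the experiment):

```python
import math

def qber_from_visibility(v):
    """QBER of an interference basis with fringe visibility v:
    at the optimal phase settings a fraction (1 - v)/2 of the
    detections falls on the wrong detector."""
    return (1.0 - v) / 2.0

def bell_violation_sigmas(v, sigma_v):
    """Number of standard deviations by which the visibility v
    (with uncertainty sigma_v) exceeds the local-variable bound
    1/sqrt(2) of the Franson-type Bell inequality."""
    return (v - 1.0 / math.sqrt(2.0)) / sigma_v

# mean visibility quoted for the energy basis: (92.2 +/- 0.8) %
qber_energy = qber_from_visibility(0.922)        # -> 0.039, i.e. 3.9 %
sigmas = bell_violation_sigmas(0.922, 0.008)     # -> about 27

# time-basis QBER directly from the coincidence counts:
# ~1700 correlated vs ~80 uncorrelated events per 100 s
qber_time = 80.0 / (1700.0 + 80.0)               # -> about 4.5 %
```

Both results reproduce the values in the text: a QBER of 3.9 % in the energy basis and a Bell violation by roughly 27 standard deviations; the raw-count estimate for the time basis falls inside the quoted 3.7–5.7 % range. The residual errors themselves come from the four sources just listed: accidental coincidences, the imperfectly localized pump pulse, the detector time resolution, and non-perfect interference.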
Note that the last-mentioned errors (non-perfect interference) decrease at the same rate as the number of correlated events when increasing the distance between Alice and Bob (that is, when increasing the losses). In contrast, the number of errors due to uncorrelated events (point 1) stays almost constant, since it is dominated by the detector noise. Thus the QBER increases with distance, although only at a small rate, since this contribution was found to be small. Experimental investigations show that introducing 6 dB overall losses – in the best case equivalent to 20 km of optical fiber – leads to an increase of only around 1 %, hence to a QBER of 5-6 %. Besides detector noise, another major problem of all quantum key distribution schemes developed up to now is stability, the only exception being . In order to implement our setup for practical quantum cryptography, the interferometers have to be actively stabilized, taking advantage, for instance, of the free ports. The chosen passive stabilization by controlling the temperature is not sufficient to ensure stable phases over a long time. To conclude, we presented a new setup for quantum cryptography using Bell states based on energy-time entanglement, and demonstrated its feasibility in a laboratory experiment. We found bit rates of around 33 Hz and quantum bit error rates of around 4 %, which is low enough to ensure secure key distribution. Besides a smaller vulnerability to eavesdropper attacks, the advantage of using discrete energy-time states, up to dimension 4, in our scheme is that no fast change between non-commuting bases is necessary. Nature itself chooses between the complementary properties energy and time. Furthermore, the recent demonstration that energy-time entanglement can be preserved over long distances shows that this scheme is perfectly adapted to long-distance quantum cryptography. We would like to thank J.D. Gautier for technical support and PicoQuant for fast delivery of the laser.
This work was supported by the Swiss PPOII and the European QuCom IST projects.
no-problem/9911/hep-ph9911411.html
# Solving multi-loop Feynman diagrams using light-front coordinates ## I Introduction Multi-loop Feynman diagrams pose a serious challenge, especially in the massive case where multiple scales arise. Apart from the successes of high-order expansions in $`\alpha `$ in QED, which provide a stringent test of quantum field theory, Feynman diagrams are also the way in which we understand most of field theory, since they allow us to quantify it. They form the basis for our understanding of many phenomena, such as asymptotic freedom and gauge invariance. Therefore, we should strive to refine our handling of Feynman diagrams and to extend its context and its applications. In the late 40’s, the move from time-ordered perturbation theory towards covariant perturbation theory brought about a revolution in field theory. It created a handle on the calculations and people were able to control the divergences. However, despite the successes for scattering experiments, the method failed to deliver for bound-state calculations. Therefore, there has been a constant move “backwards” to quasi-potential and time-ordered formulations, as we can formulate a bound state only in a single time frame, not with the relative time as it exists in the covariant formulation. However, it is often hard to follow the route back from covariant to time-ordered perturbation theory, and then to apply it to an intrinsically non-perturbative problem, such as a bound-state problem. One distinct method for describing bound states in field theory is discrete light-cone quantization, in which a light-like direction is chosen as the time direction. This has certain advantages, extensively discussed in the literature. However, here too renormalization poses a serious problem, often dealt with rather callously. Although the renormalization of perturbative expansions is reasonably under control, its extension to non-perturbative calculations is far from satisfactory.
In this paper I will tackle a class of simple $`n`$-loop Feynman diagrams, determine the finite part, and show that the light-front approach is particularly useful for this purpose. I will also discuss how to look upon the large set of counterterms in this highly divergent case. These diagrams have been studied extensively in recent years. However, the simplicity of this approach in Minkowski space is striking. No special functions are needed, and eventually everything depends on the introduction of the collective coordinate $`\beta `$, analogous to the radial coordinate $`r^2`$ in Euclidean space. In this case, specifically, the coordinate interpolates smoothly from the threshold value at the center of the kinematical domain to its edges. Eventually, such simple collective coordinates for many-particle systems might extend the applicability of Hamiltonian light-front field theory, as, generally, the problem of the Hamiltonian approach is the control over the number of variables. ## II The sunset diagram We consider the $`n`$-loop Feynman diagram, which consists of $`n+1`$ lines between two vertices: $$\mathcal{I}_n=\frac{1}{(2\pi i)^n}\int \frac{d^4k_1\cdots d^4k_n}{(k_1^2-m_1^2)(k_2^2-m_2^2)\cdots (k_n^2-m_n^2)((p-k_1-k_2-\cdots -k_n)^2-m_{n+1}^2)}.$$ (1) This diagram is the generalized sunset diagram. Lines can be added or removed (see Fig. 1). The Feynman diagram is covariant and therefore it is only a function of $`p^2`$ and the masses $`m_1,\mathrm{\dots },m_{n+1}`$. We solve it in the frame where $`p_{\perp }=(p^1,p^2)=0`$. We introduce light-front coordinates: $`k^\pm =\frac{1}{\sqrt{2}}(k^0\pm k^3)`$, and $`x_i=k_i^+/p^+`$. The transverse momenta $`k^1`$ and $`k^2`$ are unaltered.
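The usefulness of these coordinates rests on the identity $`k^2=2k^+k^--k_{}^2`$ (with $`k_{}^2=(k^1)^2+(k^2)^2`$), which makes every propagator linear in the light-front energy $`k^-`$ and thus allows the residue integration performed below. A quick numerical sketch of the identity:

```python
import math

def lightfront(k0, k1, k2, k3):
    """Map Minkowski components to light-front ones:
    k^{+/-} = (k^0 +/- k^3)/sqrt(2); transverse components unchanged."""
    kp = (k0 + k3) / math.sqrt(2.0)
    km = (k0 - k3) / math.sqrt(2.0)
    return kp, km, k1, k2

def minkowski_sq(k0, k1, k2, k3):
    # k^2 in the (+,-,-,-) metric
    return k0**2 - k1**2 - k2**2 - k3**2

def lightfront_sq(kp, km, k1, k2):
    # k^2 = 2 k^+ k^- - k_perp^2
    return 2.0 * kp * km - k1**2 - k2**2

# arbitrary four-vector: both expressions agree
k = (1.3, 0.4, -0.7, 2.1)
assert abs(minkowski_sq(*k) - lightfront_sq(*lightfront(*k))) < 1e-12
```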
After the residue integration over the light-front energies $`k_i^-`$, we find: $$\mathcal{I}_n=\int_\mathrm{\Delta }dx_1\cdots dx_n\mathrm{d}^2k_1\cdots \mathrm{d}^2k_n\frac{1}{2^{n+1}x_1\cdots x_n(1-\sum_i x_i)\left(2p^+p^--\beta ^{-1}-\alpha \right)},$$ (2) where $`\beta ^{-1}`$ $`=`$ $`{\displaystyle \frac{m_1^2}{x_1}}+\cdots +{\displaystyle \frac{m_n^2}{x_n}}+{\displaystyle \frac{m_{n+1}^2}{1-\sum_i x_i}},`$ (3) $`\alpha `$ $`=`$ $`{\displaystyle \frac{k_1^2}{x_1}}+\cdots +{\displaystyle \frac{k_n^2}{x_n}}+{\displaystyle \frac{(\sum_i k_i)^2}{1-\sum_i x_i}}.`$ (4) The domain $`\mathrm{\Delta }`$ is given by $`x_i\ge 0`$ and $`\sum_i x_i\le 1`$. This integral is the corresponding light-front diagram, equivalent to the Feynman diagram. If we translate the transverse momenta successively, starting from $`k_n`$: $$k_i=l_i-\left(\underset{j=1}{\overset{i-1}{\sum }}k_j\right)\left(\frac{1}{x_i}+\frac{1}{1-\sum_{j=1}^ix_j}\right)^{-1}\left(\frac{1}{1-\sum_{j=1}^ix_j}\right),$$ (5) we find that $`\alpha `$ reduces to a pure quadratic form in the $`l_i`$: $$\alpha =\underset{i=1}{\overset{n}{\sum }}l_i^2\left(\frac{1}{x_i}+\frac{1}{1-\sum_{j=1}^ix_j}\right).$$ (6) The integral is divergent, and requires counterterms $`c_0`$, $`c_1p^2`$, $`\mathrm{\dots }`$, $`c_{n-1}(p^2)^{n-1}`$.
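The successive translation (5) can be checked numerically: evaluating $`\alpha `$ from its definition (4) with the reconstructed momenta must reproduce the diagonal form (6). A small sketch for $`n=2`$ transverse 2-vectors (the numerical values are arbitrary choices of ours):

```python
# Check that the successive shifts of Eq. (5) diagonalize alpha, Eq. (6).
x = [0.2, 0.3]                     # longitudinal fractions, sum < 1
l = [(0.5, -0.2), (0.1, 0.4)]      # shifted transverse momenta l_i

def s(i):
    """Partial sum x_1 + ... + x_i."""
    return sum(x[:i])

# reconstruct the original momenta k_i from the l_i via Eq. (5)
k = []
for i in range(1, len(x) + 1):
    c = (1.0 / x[i-1] + 1.0 / (1.0 - s(i))) ** -1 * (1.0 / (1.0 - s(i)))
    prev = [sum(comp) for comp in zip(*k)] if k else [0.0, 0.0]
    k.append(tuple(li - c * pi for li, pi in zip(l[i-1], prev)))

def sq(v):
    return sum(c * c for c in v)

# alpha from the original definition, Eq. (4)
ktot = [sum(comp) for comp in zip(*k)]
alpha_def = (sum(sq(k[i]) / x[i] for i in range(len(x)))
             + sq(ktot) / (1.0 - s(len(x))))

# alpha from the diagonal quadratic form, Eq. (6)
alpha_diag = sum(sq(l[i]) * (1.0 / x[i] + 1.0 / (1.0 - s(i + 1)))
                 for i in range(len(x)))

assert abs(alpha_def - alpha_diag) < 1e-12
```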
We do so by subtracting the $`(n-1)`$-th order Taylor expansion in $`p^-`$ around $`p^2=0`$, implemented through the multiplication of the integrand with the proper moments: $$\mathcal{I}_n^{\mathrm{reg}}=\int_\mathrm{\Delta }I_n^{\mathrm{reg}}=\int_\mathrm{\Delta }I\,J^n,$$ (7) where $$J=\frac{p^2}{\beta ^{-1}+\alpha },$$ (8) which leads to $$\mathcal{I}_n^{\mathrm{reg}}=\int_\mathrm{\Delta }\frac{dx_1\cdots dx_nd^2l_1\cdots d^2l_n(p^2)^n}{2^{n+1}x_1\cdots x_n(1-\sum_i x_i)(2p^+p^--\beta ^{-1}-\alpha )(\beta ^{-1}+\alpha )^n}.$$ (9) If we scale the transverse momenta accordingly and perform the angular integrations, we obtain the integral: $$\mathcal{I}_n^{\mathrm{reg}}=\pi ^n\int_0^{\infty }dz_1\cdots \int_0^{\infty }dz_n\int_\mathrm{\Delta }dx_1\cdots dx_n\frac{(p^2)^n}{2^{n+1}(p^2-\sum_i z_i-\beta ^{-1})(\beta ^{-1}+\sum_i z_i)^n},$$ (10) where $$z_i=l_i^2\left(\frac{1}{x_i}+\frac{1}{1-\sum_{j=1}^ix_j}\right).$$ (11) In order to integrate over the $`z_i`$ we express the integrand as a series in $`p^2`$. For each separate term we can integrate over all $`z_i`$, which yields: $$\mathcal{I}_n^{\mathrm{reg}}=\frac{\pi ^n}{2^{n+1}}\int_\mathrm{\Delta }\mathrm{d}^nx\,(p^2)^{n-1}\underset{i=0}{\overset{\mathrm{\infty }}{\sum }}\frac{1}{(n+i)\cdots (i+1)}\left(p^2\beta \right)^{i+1}.$$ (12) We can write the series as an analytical function: $$\mathcal{I}_n^{\mathrm{reg}}=\int_\mathrm{\Delta }\mathrm{d}^nx\frac{(-\pi )^n}{2^{n+1}(n-1)!\beta ^{n-1}}(1-𝒯_{n-1})\left(1-p^2\beta \right)^{n-1}\mathrm{ln}\left[1-p^2\beta -iϵ\right],$$ (13) where $`𝒯_{n-1}`$ stands for the $`(n-1)`$-th order Taylor expansion around $`p^2=0`$. The Taylor expansion yields the following polynomial: $$𝒯_{n-1}(1-z)^{n-1}\mathrm{ln}[1-z]=\underset{k=1}{\overset{n-1}{\sum }}\left(\underset{j=0}{\overset{k}{\sum }}\frac{(-1)^jn!}{(n-j)!j!(k-j+1)}\right)z^k.$$ (14) The imaginary part follows directly from the natural logarithm of Eq. (13): $$\underset{ϵ\to 0}{lim}\mathrm{ln}(x-iϵ)=\mathrm{ln}|x|-i\pi \theta (-x),$$ (15) which leads to a finite amplitude, unaffected by the renormalization procedure, whose only trace appears in Eq.
(13) in the form of the subtracted Taylor expansion. ## III Longitudinal integration After the integration over the $`k_i^-`$’s and the transverse $`k_i`$’s, we are left with an integration over the longitudinal momentum fractions $`x_i`$, which cannot be performed analytically. However, note that the integrand of Eq. (13) depends only on one particular combination of the longitudinal momenta, namely $`\beta `$. This $`\beta `$ is a smooth function of the longitudinal momenta $`x_i`$, and ranges between: $$0\le \beta \le \left(\underset{i=1}{\overset{n+1}{\sum }}m_i\right)^{-2}=b,$$ (16) where for $`\beta =b`$ the longitudinal momentum fractions $`x_i`$ equal $`m_i\sqrt{b}`$. Therefore the $`n`$-dimensional integral over $`\mathrm{\Delta }`$ reduces to the determination of the integration measure $`\mu `$ for the integration over $`\beta `$: $$\int_\mathrm{\Delta }d^nxf(\beta )=\int_0^b\mu (\beta )d\beta f(\beta ).$$ (17) The volume of the domain $`\mathrm{\Delta }`$ is $`\mathrm{\Gamma }(n+1)^{-1}`$. Once this measure is determined, it can be used for all values of $`p^2`$. The measure $`\mu `$ can be determined by several means, for example with Monte Carlo integration. The threshold behavior of the diagram is dominated by the values of $`\beta `$ close to $`b`$. For this purpose we can make an analytical expansion of $`\mu `$ around $`b`$. We find that: $$\mu (\beta )\approx \frac{1}{2}\mathrm{\Omega }_nb^{\frac{1-n}{4}}\left[\underset{i=1}{\overset{n+1}{\prod }}\sqrt{m_i}\right](b-\beta )^{\frac{n-2}{2}},$$ (18) where $`\mathrm{\Omega }_n`$ is the surface area of the unit sphere in $`n`$ dimensions. Note that as some of the masses tend to zero, the exponent in the measure will be larger than $`(n-2)/2`$. The addition of a zero-mass particle, $`m_i=0`$, leads to a flat direction in $`\beta `$ with respect to the longitudinal momentum fraction $`x_i`$ at the threshold. Therefore the harmonic approximation breaks down.
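The range of the collective coordinate can be illustrated by direct sampling: drawing points uniformly on the simplex $`\mathrm{\Delta }`$ and evaluating $`\beta `$ from Eq. (3) shows that $`\beta `$ indeed stays below $`b`$, the inverse square of the sum of the masses, and approaches that bound at $`x_i=m_i\sqrt{b}`$. A minimal sketch for $`n=2`$, i.e. three particles of unit mass (the sample size and seed are arbitrary choices of ours):

```python
import random

random.seed(1)

m = [1.0, 1.0, 1.0]          # masses m_1 ... m_{n+1}
n = len(m) - 1               # number of independent momentum fractions

def beta(x):
    """Collective coordinate beta for fractions x_1..x_n, Eq. (3)."""
    rest = 1.0 - sum(x)
    return 1.0 / (sum(mi**2 / xi for mi, xi in zip(m, x)) + m[-1]**2 / rest)

b = 1.0 / sum(m)**2          # upper bound of Eq. (16)

# uniform sampling of the simplex x_i >= 0, sum x_i <= 1
# (spacings of sorted uniforms are uniformly distributed on the simplex)
samples = []
for _ in range(20000):
    u = sorted(random.random() for _ in range(n))
    x = [u[0]] + [u[i] - u[i-1] for i in range(1, n)]
    samples.append(beta(x))

assert all(bb <= b + 1e-12 for bb in samples)    # beta never exceeds b
assert max(samples) > 0.9 * b                    # the bound is approached
assert abs(beta([mi * b**0.5 for mi in m[:n]]) - b) < 1e-12  # max at x_i = m_i*sqrt(b)
```

The same sampling, binned in $`\beta `$, is one way to estimate the measure $`\mu (\beta )`$ discussed below.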
However, $`\beta `$ and the measure $`\mu `$ are well-defined as long as at least one particle is massive. ### A The integration measure Apart from the series expansion and Monte-Carlo integration mentioned above, we can determine the measure iteratively. Given the integration measure $`\mu _n(\beta _n)`$ for $`n`$ momenta, the integration measure $`\mu _{n+1}(\beta _{n+1})`$ for $`n+1`$ variables can be expressed as $$\int_0^{b_{n+1}}\mu _{n+1}(\beta _{n+1})d\beta _{n+1}=\int_0^{b_n}\int_0^1\mu _n(\beta _n)d\beta _n\,y^ndy,$$ (19) where $$\beta _{n+1}=\left(\frac{m^2}{1-y}+\frac{1}{\beta _ny}\right)^{-1},$$ (20) with $`m`$ the mass of the added particle, which carries longitudinal momentum fraction $`1-y`$. The other longitudinal momenta are scaled by a factor $`y`$ such that the total longitudinal momentum remains 1. ## IV Renormalization In this paper we described a method to find the finite part of any diagram of the type of Fig. 1. The divergences are removed using the Taylor expansion in the external momentum. The counterterms, $`c_0`$, $`c_1p^2`$ up to $`c_{n-1}(p^2)^{n-1}`$, for the sunset diagram can be expressed as divergent integrals depending on the masses $`m_1,\mathrm{\dots },m_{n+1}`$: $$c_k=\int_\mathrm{\Delta }dx_1\cdots dx_n\frac{d^2k_1\cdots d^2k_n}{2^{n+1}x_1\cdots x_n(1-\sum_i x_i)}\frac{1}{(\alpha +\beta ^{-1})^{k+1}},$$ (21) which defines their relation with other renormalization schemes. The other renormalization schemes will find that the counterterms in Eq. (21) equal an infinite constant, to be removed, and a finite part, a function of the masses, which constitutes the finite renormalization. The use of the Taylor expansion fell into disfavor because of two complications. Firstly, in the case of multiple external momenta, it is not clear which combination of external momenta should serve as the variable of the Taylor expansion; different choices will lead to different results, and do not automatically guarantee locality.
Secondly, in the case of gauge theories, an extremely consistent scheme, which treats a whole class of integrals in the same way, is required such that gauge invariance is preserved. Dimensional regularization has for a long time been the only scheme satisfying this consistency, preserving the algebraic relations existing among the different integrands of Feynman integrals. For example, the fermion-loop correction to the gauge propagator $`\mathrm{\Pi }^{\mu \nu }`$ must be transverse; therefore the two parts of $`\mathrm{\Pi }^{\mu \nu }`$, namely $`g^{\mu \nu }\mathrm{\Pi }_s`$ and $`p^\mu p^\nu \mathrm{\Pi }_t`$, must be handled in the same way such that $`p^2\mathrm{\Pi }_t=-\mathrm{\Pi }_s`$, which is a difficult problem for an arbitrary regularization scheme, since the two terms have different degrees of divergence. However, for a Hamiltonian approach, such as light-front field theory, renormalization at the level of the integrand is required if one wants to carry the renormalization procedure over from the covariant renormalization. Note that a straightforward cut-off procedure breaks covariance, since it cannot be applied to the energy part of the covariant integration. The integration over the energies and the regularization should be interchangeable, such that locality is guaranteed. So, although it leads to further complications which require careful analysis, the Taylor expansion is the way forward for the Hamiltonian approach. The natural choice of renormalization for a Hamiltonian approach is to make the self-energy contributions vanish when all the particles are on-shell. However, this is not consistent with the covariant, local, and therefore true, renormalization. Even if the total energy equals the sum of the on-shell energies, this does not mean that the energy is shared out evenly; a large part of the amplitude might arise from the case where both particles are off-shell in different directions.
Therefore it is essential to treat the subtractions as pure constants. Moreover, although we can generate finite terms in light-front perturbation theory, for a proper light-front approach we should take the procedure one step further and determine the corresponding finite wave function. However, this is far beyond the scope of this paper. ## V Results Central to this approach is the actual shape of the measure $`\mu (\beta )`$. For particles of equal mass, we find that the measure is spread the most over the whole range of $`\beta `$. As the masses start to deviate from one another, the measure peaks more and more at low values of $`\beta `$. However, the measure stays finite, even for massless particles. In Fig. 2 we show two scaled, normalized sets of measures, one for equal-mass particles, and one for particles with increasing masses. Note that the measures for increasing masses peak more at lower values of $`\beta `$, due to the leading contributions from the heavy particles carrying large momentum fractions. In Fig. 3 we compare the measures for two massive particles and a number of particles with equal, but small, or vanishing, masses. For the inspection of the amplitudes, I fitted the measures with a five-parameter function, which reproduces the measure with an accuracy within a few percent. The accuracy for the massless case is higher than for the massive case; in the former case it is below one percent. The function depends on the parameters $`\gamma _1,\gamma _2,\mathrm{\dots },\gamma _5`$: $$\overline{\mu }(\overline{\beta })=\gamma _1(1-\overline{\beta })^{\gamma _2}+\gamma _3(1-\overline{\beta })^{\gamma _5}\overline{\beta }^{\gamma _4},$$ (22) where $`\overline{\beta }=\beta /b`$ and $`\overline{\mu }=\mathrm{\Gamma }(n+1)\mu /b`$, such that the axis and the measure are normalized to unity. The fitted parameters for the two extreme cases, one with all masses equal, and one with the first two masses 1.0 and all the other masses zero, are given in Table I and Table II, respectively.
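The fit form (22) is straightforward to evaluate. A sketch with illustrative parameter values (our own choices, not the fitted values of Tables I and II, which are not reproduced here) shows the expected endpoint behavior: $`\overline{\mu }\to 0`$ at the threshold $`\overline{\beta }=1`$ whenever $`\gamma _2,\gamma _5>0`$, consistent with the power law of Eq. (18):

```python
def mu_bar(bb, g1, g2, g3, g4, g5):
    """Five-parameter fit of the normalized measure, Eq. (22)."""
    return g1 * (1.0 - bb)**g2 + g3 * (1.0 - bb)**g5 * bb**g4

# illustrative parameters only; for n+1 = 4 equal-mass particles the
# threshold exponent of Eq. (18) would be (n-2)/2 = 1/2
pars = (0.8, 0.5, 1.5, 1.0, 0.5)

assert mu_bar(1.0, *pars) == 0.0       # measure vanishes at threshold
assert mu_bar(0.0, *pars) == pars[0]   # endpoint value is gamma_1
assert mu_bar(0.5, *pars) > 0.0        # positive in the interior
```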
## VI Conclusions I have derived a straightforward, and largely analytical, way to determine the finite part of the sunset diagram. Both the threshold behavior and the full amplitude can be determined accurately. I removed the divergent parts by subtracting the corresponding Taylor expansion. The mass dependence of the diagram, for the scaled external momentum $`bp^2`$, is only weak over the whole range of masses, and this dependence appears solely in the integration measure $`\mu (\beta )`$. A careful analysis of the Feynman parameterization could also yield a similar variable $`\beta `$. However, it requires one to work in Euclidean space, where the imaginary part does not come for free. Also, the poles in dimension space, which correspond to the different subdivergences of the integral, are transferred to singularities in the parameter space, which renders dimensional regularization invalid. Therefore the removal of the lower-order Taylor expansion is essential. It seems possible to extend the light-front approach to more complicated diagrams, which could contain two or more light-front intermediate states. This requires the introduction of more $`\beta `$ variables. The determination of the measures stays essentially the same. Eventually, the light-front approach might be more convenient for the calculation of multi-loop diagrams, as it sees such a diagram as a transition via a collection of successive intermediate states, which can all be handled separately, and whose number does not grow as wildly as the number of subgraphs of a complicated covariant multi-loop Feynman diagram.
no-problem/9911/nucl-th9911024.html
# Parity violating elastic electron scattering and neutron density distributions in the Relativistic Hartree-Bogoliubov model ## I Introduction Measurements of ground state distributions of nucleons provide fundamental nuclear structure information. The ground state densities reflect the basic properties of effective nuclear forces, and an accurate description of these distributions represents the primary goal of nuclear structure models. Extremely accurate data on charge densities, and therefore on proton distributions in nuclei, are obtained from elastic scattering of electrons. Experimental data of comparable precision on neutron density distributions are, however, not available. It is much more difficult to measure the distribution of neutrons, though more recently accurate data on differences between the radii of the neutron and proton density distributions have been obtained. Various experimental methods have been used, or suggested, for the determination of the neutron density in nuclei . Among them, one that is also very interesting from the theoretical point of view is parity violating elastic electron scattering. In principle, the elastic scattering of longitudinally polarized electrons provides a direct and very accurate measurement of the neutron distribution. It has been shown that the parity-violating asymmetry parameter, defined as the difference between cross sections for the scattering of right- and left-handed longitudinally polarized electrons, provides direct information on the Fourier transform of the neutron density . In a recent article , Horowitz used a relativistic optical model to calculate the parity-violating asymmetry parameters for the elastic scattering of 850 MeV electrons on a number of spherical, doubly closed-shell nuclei. Ground state densities were calculated using a three-parameter Fermi formula and the relativistic mean-field model. Coulomb distortion corrections to the parity-violating asymmetry were calculated exactly.
It has been shown that a parity violation experiment to measure the neutron density in a heavy nucleus is feasible. Information about the distribution of neutrons in nuclei should also constrain the isovector channel of the nuclear matter energy functional. A correct parameterization of the isovector channel of effective nuclear forces is essential for the description of unique phenomena in exotic nuclei with extreme isospin values. For neutron-rich nuclei, these include the occurrence of nuclei with diffuse neutron densities, the formation of the neutron skin and the neutron halo. At the neutron drip-line, different proton and neutron quadrupole deformations are expected, and these will give rise to low-energy isovector modes. In the present work we describe parity violating elastic electron scattering on neutron-rich nuclei in the framework of relativistic mean-field theory. The ground state density distributions will be calculated with the relativistic Hartree-Bogoliubov (RHB) model. This model represents a relativistic extension of the Hartree-Fock-Bogoliubov (HFB) framework, and it has been successfully applied in studies of the neutron halo phenomenon in light nuclei , properties of light nuclei near the neutron drip-line , ground state properties of Ni and Sn isotopes , the deformation and shape coexistence phenomena that result from the suppression of the spherical N=28 shell gap in neutron-rich nuclei , the structure of proton-rich nuclei, and the phenomenon of ground state proton emission . In particular, it has been shown that neutron radii calculated with the RHB model are in excellent agreement with experimental data . In studies of the structure of nuclei far from the $`\beta `$-stability line, both on the proton- and neutron-rich sides, it is important to use models that include a unified description of mean-field and pairing correlations .
For the ground state density distributions of neutron-rich nuclei in particular, the neutron Fermi level is found close to the particle continuum and the lowest particle-particle modes are embedded in the continuum. An accurate description of neutron densities can only be obtained with a correct treatment of the scattering of neutron pairs from bound states into the positive-energy continuum. Starting from the RHB self-consistent ground state neutron densities in isotope chains that also include neutron-rich nuclei, we calculate the parity-violating asymmetry parameters for the elastic scattering of 850 MeV electrons. The main point of the present analysis is to determine how sensitive the asymmetry parameters are to the variations of the neutron density distribution along an isotope chain. An interesting question is whether parity violating electron scattering could be used, in principle at least, to measure the neutron skin, or even the formation of the neutron halo. Of course, studies of electron scattering from exotic nuclei require very complex experimental facilities, as for example the double storage ring MUSES, under construction at RIKEN. By injecting an electron beam, generated by a linear accelerator, into one of the storage rings, and storing a beam of unstable nuclei in the other, collision experiments between radioactive beams and electrons could be performed. In Section II we present an outline of the relativistic Hartree-Bogoliubov model and calculate the self-consistent ground state neutron densities of Ne, Na, Ni and Sn isotopes. For the elastic scattering of longitudinally polarized electrons on these nuclei, in Section III we use a relativistic optical model to calculate the parity-violating asymmetry parameters. Coulomb distortion corrections are included in the calculation, and the resulting asymmetries are related to the Fourier transforms of the neutron densities. The results are summarized in Section IV.
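Since the asymmetry probes the Fourier transform of the neutron density, a basic ingredient of such an analysis is the spherical form factor $`F(q)=\frac{1}{N}\rho (r)j_0(qr)d^3r`$ of a given density profile. A minimal numerical sketch for a two-parameter Fermi density (a simpler variant of the three-parameter Fermi formula mentioned above; the radius and diffuseness values are illustrative only, not taken from any fit):

```python
import math

def fermi_density(r, c, a):
    """Two-parameter Fermi (Woods-Saxon) density profile, unnormalized."""
    return 1.0 / (1.0 + math.exp((r - c) / a))

def form_factor(q, c, a, rmax=25.0, nstep=5000):
    """Spherical form factor F(q) = int rho j0(qr) r^2 dr / int rho r^2 dr,
    evaluated with a simple quadrature; F(0) = 1 by construction."""
    dr = rmax / nstep
    num = den = 0.0
    for i in range(1, nstep + 1):
        r = i * dr
        rho = fermi_density(r, c, a)
        j0 = math.sin(q * r) / (q * r) if q > 0.0 else 1.0
        num += rho * j0 * r * r * dr
        den += rho * r * r * dr
    return num / den

# illustrative half-density radius and diffuseness of a medium-heavy nucleus (fm)
c, a = 5.0, 0.55
assert abs(form_factor(0.0, c, a) - 1.0) < 1e-12
assert form_factor(0.5, c, a) < 1.0     # falls off with momentum transfer q (fm^-1)
```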
## II Relativistic Hartree-Bogoliubov description of ground state densities In the framework of the relativistic Hartree-Bogoliubov model , the ground state of a nucleus $`|\mathrm{\Phi }>`$ is represented by the product of independent single-quasiparticle states. These states are eigenvectors of the generalized single-nucleon Hamiltonian which contains two average potentials: the self-consistent mean-field $`\widehat{\mathrm{\Gamma }}`$ which encloses all the long range particle-hole (ph) correlations, and a pairing field $`\widehat{\mathrm{\Delta }}`$ which sums up the particle-particle (pp) correlations. The single-quasiparticle equations result from the variation of the energy functional with respect to the hermitian density matrix $`\rho `$ and the antisymmetric pairing tensor $`\kappa `$. The relativistic Hartree-Bogoliubov equations read $`\left(\begin{array}{cc}\widehat{h}_D-m-\lambda & \widehat{\mathrm{\Delta }}\\ -\widehat{\mathrm{\Delta }}^{*}& -\widehat{h}_D+m+\lambda \end{array}\right)\left(\begin{array}{c}U_k(𝐫)\\ V_k(𝐫)\end{array}\right)=E_k\left(\begin{array}{c}U_k(𝐫)\\ V_k(𝐫)\end{array}\right).`$ (1) $`\widehat{h}_D`$ is the single-nucleon Dirac Hamiltonian, $`m`$ is the nucleon mass, and $`\widehat{\mathrm{\Delta }}`$ denotes the pairing field. The column vectors represent the quasi-particle spinors and $`E_k`$ are the quasi-particle energies. The chemical potential $`\lambda `$ has to be determined by the particle number subsidiary condition, in order that the expectation value of the particle number operator in the ground state equals the number of nucleons. The Hamiltonian $`\widehat{h}_D`$ describes the dynamics of the relativistic mean-field model : nucleons are described as point particles; the theory is fully Lorentz invariant; the nucleons move independently in the mean fields which originate from the nucleon-nucleon interaction.
Conditions of causality and Lorentz invariance impose that the interaction is mediated by the exchange of point-like effective mesons, which couple to the nucleons at local vertices: the isoscalar scalar $`\sigma `$-meson, the isoscalar vector $`\omega `$-meson, and the isovector vector $`\rho `$-meson. The single-nucleon Hamiltonian in the Hartree approximation reads $$\widehat{h}_D=-i\mathbf{\alpha }\cdot \mathbf{\nabla }+\beta (m+g_\sigma \sigma (𝐫))+g_\omega \omega (𝐫)+g_\rho \tau _3\rho _3(𝐫)+e\frac{(1-\tau _3)}{2}A(𝐫),$$ (2) where $`\sigma `$, $`\omega `$, and $`\rho `$ denote the mean-field meson potentials. $`g_\sigma `$, $`g_\omega `$ and $`g_\rho `$ are the corresponding meson-nucleon coupling constants, and the photon field $`A`$ accounts for the electromagnetic interaction. The meson potentials are determined self-consistently by the solutions of the corresponding Klein-Gordon equations. The source terms for these equations are sums of bilinear products of baryon amplitudes, calculated in the no-sea approximation. The pairing field $`\widehat{\mathrm{\Delta }}`$ in the RHB single-quasiparticle equations (1) is defined by $$\mathrm{\Delta }_{ab}(𝐫,𝐫^{\prime })=\frac{1}{2}\underset{c,d}{\sum }V_{abcd}(𝐫,𝐫^{\prime })\kappa _{cd}(𝐫,𝐫^{\prime }),$$ (3) where $`a,b,c,d`$ denote quantum numbers that specify the Dirac indices of the spinors, $`V_{abcd}(𝐫,𝐫^{\prime })`$ are matrix elements of a general two-body pairing interaction, and the pairing tensor is $$\kappa _{cd}(𝐫,𝐫^{\prime })=\underset{E_k>0}{\sum }U_{ck}^{*}(𝐫)V_{dk}(𝐫^{\prime }).$$ (4) The input parameters of the RHB model are the coupling constants and the masses for the effective mean-field Hamiltonian, and the effective interaction in the pairing channel.
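The structure of Eq. (1) can be illustrated in a schematic one-level limit: for a single canonical state with energy $`ϵ`$ and a constant pairing gap $`\mathrm{\Delta }`$, the $`2\times 2`$ analogue of the quasiparticle matrix yields eigenvalues $`\pm E`$ with $`E=\sqrt{(ϵ-\lambda )^2+\mathrm{\Delta }^2}`$, the familiar BCS quasiparticle energy. A toy check (the numerical values are arbitrary; this is not the full RHB equation, which couples Dirac spinors):

```python
import math

def quasiparticle_energy(eps, lam, gap):
    """One-level (BCS-like) limit of the HFB eigenvalue problem:
    eigenvalues of [[eps - lam, gap], [gap, -(eps - lam)]] are +/-E."""
    return math.sqrt((eps - lam)**2 + gap**2)

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

eps, lam, gap = -7.2, -8.0, 1.3   # MeV, arbitrary illustration
e_plus, e_minus = eigvals_2x2(eps - lam, gap, gap, -(eps - lam))
E = quasiparticle_energy(eps, lam, gap)
assert abs(e_plus - E) < 1e-12 and abs(e_minus + E) < 1e-12
```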
In most applications we have used the NL3 set of meson masses and meson-nucleon coupling constants for the effective interaction in the particle-hole channel: $`m=939`$ MeV, $`m_\sigma =508.194`$ MeV, $`m_\omega =782.501`$ MeV, $`m_\rho =763.0`$ MeV, $`g_\sigma =10.217`$, $`g_2=-10.431`$ fm<sup>-1</sup>, $`g_3=-28.885`$, $`g_\omega =12.868`$ and $`g_\rho =4.474`$. Results of NL3 model calculations have been found to be in excellent agreement with experimental data, both for stable nuclei and for nuclei far away from the line of $`\beta `$-stability. The NL3 interaction will also be used in the present analysis of ground state neutron densities. For the pairing field we employ the pairing part of the Gogny interaction $$V^{pp}(1,2)=\underset{i=1,2}{\sum }e^{-((𝐫_1-𝐫_2)/\mu _i)^2}(W_i+B_iP^\sigma -H_iP^\tau -M_iP^\sigma P^\tau ),$$ (5) with the set D1S for the parameters $`\mu _i`$, $`W_i`$, $`B_i`$, $`H_i`$ and $`M_i`$ $`(i=1,2)`$. This force has been carefully adjusted to the pairing properties of finite nuclei all over the periodic table. In particular, the finite range of the Gogny force automatically guarantees a proper cut-off in momentum space. On the phenomenological level, the fact that we are using a non-relativistic interaction in the pairing channel of a relativistic Hartree-Bogoliubov model has no influence on the calculated ground state properties. A detailed discussion of this approximation can be found, for instance, in Ref. . The ground state of a nucleus results from a self-consistent solution of the RHB single-quasiparticle equations (1). The iteration procedure is performed in the quasi-particle basis. A simple blocking prescription is used in the calculation of odd-proton and/or odd-neutron systems. The resulting eigenspectrum is transformed into the canonical basis of single-particle states, in which the RHB ground state takes the BCS form. The transformation determines the energies and occupation probabilities of the canonical states. In Ref.
we have performed a detailed analysis of ground state properties of Ni ($`28\le N\le 50`$) and Sn ($`50\le N\le 82`$) nuclei in the framework of the RHB model. In a comparison with available experimental data, we have shown that the NL3 + Gogny D1S effective interaction provides an excellent description of binding energies, neutron separation energies, and proton and neutron $`rms`$ radii, both for even and odd-A isotopes. The RHB model predicts a reduction of the spin-orbit potential with the increase of the number of neutrons. The resulting energy splittings between spin-orbit partners have been discussed, as well as pairing properties calculated with the finite-range effective interaction in the $`pp`$ channel. In Figs. 1 and 2 we plot the self-consistent ground state neutron densities for the even-A Ni ($`30\le N\le 48`$) and Sn ($`56\le N\le 74`$) isotopes. The density profiles display pronounced shell effects and a gradual increase of the neutron radii. In the inserts we include the corresponding differences between neutron and proton $`rms`$ radii. For Ni, the value of $`r_n-r_p`$ increases from $`-0.03`$ fm for <sup>58</sup>Ni to 0.39 fm for <sup>76</sup>Ni. The neutron skin is less pronounced for the Sn isotopes: $`r_n-r_p=0.27`$ fm for <sup>124</sup>Sn. In Fig. 3 the difference of the neutron and proton $`rms`$ radii for the Sn isotopes is compared with recent experimental data . The experimental values result from measured cross sections of the isovector spin-dipole resonances in Sn nuclei. The agreement between theoretical and experimental values is very good. The calculated differences are slightly larger than the measured values, but they are still within the experimental error bars. This result, together with the analysis of Ref. , indicates that the ground state neutron densities of the Ni and Sn isotope chains are correctly described by the RHB model with the NL3 + Gogny D1S effective interaction.
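The quantity $`r_n-r_p`$ discussed here is simply the difference of the rms radii of the neutron and proton density profiles. A sketch of how such a skin estimate is extracted from two Fermi-type densities (the half-density radii and diffuseness values below are illustrative stand-ins, not the RHB output):

```python
import math

def rms_radius(c, a, rmax=30.0, nstep=6000):
    """rms radius of a two-parameter Fermi density
    rho(r) = 1/(1 + exp((r - c)/a)), by direct numerical integration."""
    dr = rmax / nstep
    num = den = 0.0
    for i in range(1, nstep + 1):
        r = i * dr
        rho = 1.0 / (1.0 + math.exp((r - c) / a))
        num += rho * r**4 * dr   # integrand of <r^2>, with r^2 volume weight
        den += rho * r**2 * dr
    return math.sqrt(num / den)

# illustrative proton and neutron profiles of a neutron-rich nucleus (fm):
# the neutron distribution is both more extended and more diffuse
r_p = rms_radius(c=4.8, a=0.50)
r_n = rms_radius(c=5.1, a=0.60)
skin = r_n - r_p
assert skin > 0.0            # a neutron skin forms
assert skin < 1.0            # and amounts to a fraction of a fermi
```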
The first experimental evidence of the formation of the neutron skin along a chain of stable and unstable isotopes was reported in Ref. for the Na nuclei. By combining $`rms`$ charge radii, determined from isotope-shift data, with interaction cross sections of Na isotopes on a carbon target, it has been shown that the neutron skin monotonically increases to 0.4 fm in neutron-rich $`\beta `$-unstable Na nuclei. In Ref. we have applied the RHB model in the description of properties of light nuclei with large neutron excess. The ground state properties of a number of neutron-rich nuclei have been analyzed: the location of the neutron drip-line, the reduction of the spin-orbit interaction, $`rms`$ radii, changes in surface properties, and the formation of the neutron skin and the neutron halo. In particular, we have also calculated the chain of Na isotopes. The neutron density profiles are plotted in Fig. 4, with the differences of the neutron and proton $`rms`$ radii in the insert. The calculations have been performed assuming spherical symmetry, i.e. deformations of proton and neutron densities were not taken into account. The blocking procedure has been used both for protons and neutrons. Strong shell effects are observed in the interior. In the central region the neutron density increases from $`0.065`$ fm<sup>-3</sup> for <sup>26</sup>Na to more than $`0.09`$ fm<sup>-3</sup> for <sup>27</sup>Na. This effect, of course, corresponds to the filling of the s<sub>1/2</sub> neutron orbital. At $`r\approx 2`$ fm the density decreases from $`0.1`$ fm<sup>-3</sup> to $`0.09`$ fm<sup>-3</sup>. The calculated neutron radii are compared with the experimental values in Fig. 5. The model reproduces the trend of the experimental data and, with a possible exception of $`N=11`$, the theoretical values are in excellent agreement with the measured radii. The smooth increase of $`r_n-r_p`$ after $`N=11`$ (Fig. 4) is interpreted as the formation of the neutron skin.
For $`N=11`$ the number of protons and neutrons is the same, and therefore both the odd proton and odd neutron occupy the $`sd`$ orbitals with the same probabilities. An explicit proton-neutron short-range interaction, not included in our model, is probably responsible for a possible reduction of the neutron radius in this nucleus. In some loosely bound systems at the drip-lines, the neutron density distribution displays an extremely long tail: the neutron halo. The resulting large interaction cross sections have provided the first experimental evidence for halo nuclei . The neutron halo phenomenon has been studied with a variety of theoretical models . For very light nuclei in particular, models based on the separation into core plus valence space nucleons (three-body Borromean systems) have been employed. In Ref. the RHB model has been applied in the study of the formation of the neutron halo in the mass region above the s-d shell. Model calculations with the NL3 + Gogny D1S effective interaction predict the occurrence of neutron halo in heavier Ne isotopes. The study has shown that, in a mean-field description, the neutron halo and the stability against nucleon emission can only be explained with the inclusion of pairing correlations. Both the properties of single-particle states near the neutron Fermi level, and the pairing interaction, are important for the formation of the neutron halo. In Fig. 6 we plot the proton and neutron density distributions for <sup>30</sup>Ne, <sup>32</sup>Ne and <sup>34</sup>Ne. The proton density profiles do not change with the number of neutrons. The neutron density distributions display an abrupt change between <sup>30</sup>Ne and <sup>32</sup>Ne. A long tail emerges, revealing the formation of a multi-particle halo. The formation of the neutron halo is related to the quasi-degeneracy of the triplet of states 1f<sub>7/2</sub>, 2p<sub>3/2</sub> and 2p<sub>1/2</sub>. 
For $`N<22`$ the triplet of states is in the continuum; it approaches zero energy at $`N=22`$, and a gap is formed between these states and all other states in the continuum. The pairing interaction promotes neutrons from the 1f<sub>7/2</sub> orbital to the 2p levels. Since these levels are so close in energy, the total binding energy does not change significantly. Due to their small centrifugal barrier, the 2p<sub>3/2</sub> and 2p<sub>1/2</sub> neutron orbitals form the halo. ## III Parity violating elastic electron scattering In this section we will illustrate how the neutron density distributions shown in Figs. 1–6 can be measured with polarized elastic electron scattering. We will calculate the parity-violating asymmetry parameter, defined in terms of the difference between cross sections for the scattering of right- and left-handed longitudinally polarized electrons. This difference arises from the interference of one-photon and $`Z^0`$ exchange. As has been shown in Ref. , the asymmetry in parity violating elastic polarized electron scattering represents an almost direct measurement of the Fourier transform of the neutron density. The calculation procedure closely follows the derivation and definitions of Ref. . We consider elastic electron scattering on a spin-zero nucleus, i.e. on the potential $$\widehat{V}(r)=V(r)+\gamma _5A(r),$$ (6) where V(r) is the Coulomb potential, and A(r) results from the weak neutral current amplitude $$A(r)=\frac{G_F}{2^{3/2}}\rho _W(r).$$ (7) The weak charge density is defined as $$\rho _W(r)=\int d^3r^{\prime }G_E(|𝐫-𝐫^{\prime }|)[\rho _n(r^{\prime })+(1-4\mathrm{sin}^2\mathrm{\Theta }_W)\rho _p(r^{\prime })],$$ (8) where $`\rho _n`$ and $`\rho _p`$ are point neutron and proton densities, and the electric form factor of the proton is $`G_E(r)=\frac{\mathrm{\Lambda }^3}{8\pi }e^{-\mathrm{\Lambda }r}`$ with $`\mathrm{\Lambda }=4.27`$ fm<sup>-1</sup>; $`\mathrm{sin}^2\mathrm{\Theta }_W=0.23`$ for the Weinberg angle.
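As a quick numerical cross-check of the folding in Eq. (8): the exponential proton form factor $`G_E(r)=\frac{\mathrm{\Lambda }^3}{8\pi }e^{-\mathrm{\Lambda }r}`$ integrates to unity over all space, so folding the point densities preserves the total weak charge. A minimal sketch (plain NumPy trapezoidal quadrature; the 10 fm cutoff is an assumption justified by the exponential falloff):

```python
import numpy as np

LAM = 4.27  # fm^-1, proton form factor parameter from Eq. (8)

r = np.linspace(0.0, 10.0, 20001)                 # fm; exp(-LAM*10) is negligible
G_E = LAM**3 / (8.0 * np.pi) * np.exp(-LAM * r)   # proton electric form factor in r-space
integrand = 4.0 * np.pi * r**2 * G_E              # spherical volume element
norm = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(norm)  # ~1.0: folding with G_E conserves the integrated weak charge
```

The analytic result is exactly 1, since $\int_0^\infty 4\pi r^2 (\Lambda^3/8\pi)e^{-\Lambda r}\,dr = (\Lambda^3/2)(2/\Lambda^3) = 1$.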
In the limit of vanishing electron mass, the electron spinor $`\mathrm{\Psi }`$ defines the helicity states $$\mathrm{\Psi }_\pm =\frac{1}{2}(1\pm \gamma _5)\mathrm{\Psi },$$ (9) which satisfy the Dirac equation $$[\mathbf{\alpha }\cdot 𝐩+V_\pm (r)]\mathrm{\Psi }_\pm =E\mathrm{\Psi }_\pm ,$$ (10) with $$V_\pm (r)=V(r)\pm A(r).$$ (11) The parity-violating asymmetry $`A_l`$, or helicity asymmetry, is defined as $$A_l=\frac{d\sigma _+/d\mathrm{\Omega }-d\sigma _-/d\mathrm{\Omega }}{d\sigma _+/d\mathrm{\Omega }+d\sigma _-/d\mathrm{\Omega }},$$ (12) where $`+(-)`$ refers to elastic scattering on the potential $`V_\pm (r)`$. The calculation starts with the self-consistent relativistic Hartree-Bogoliubov ground state proton and neutron densities. The charge and weak densities are calculated by folding the point proton and neutron densities (see Eq. (8)). The resulting Coulomb potential $`V(r)`$ and weak potential $`A(r)`$ (7) are used to construct $`V_\pm (r)`$. The cross sections for elastic electron scattering are obtained by summing the partial-wave series with the phase shifts that result from the numerical solution of the partial wave Dirac equation. The calculation includes the Coulomb distortion effects. The cross sections for positive and negative helicity electron states are calculated, and the resulting asymmetry parameter $`A_l`$ is plotted as a function of the scattering angle $`\theta `$, or the momentum transfer $`q`$. In order to check the correctness and accuracy of the computer code which calculates the elastic electron scattering cross sections, we have performed the same tests as those reported in Ref. : experimental data are reproduced for the elastic cross sections from <sup>208</sup>Pb at 502 MeV ; plane wave approximation results are reproduced; and it is verified that the asymmetry parameter $`A_l`$ is linear in the potential $`A(r)`$ (7). We have also reproduced the parity-violating asymmetries calculated in Ref.
: elastic scattering at 850 MeV on <sup>16</sup>O, <sup>48</sup>Ca, and <sup>208</sup>Pb (relativistic mean-field densities), as well as on the three-parameter Fermi densities. In Figs. 7 and 8 we plot the parity-violating asymmetry parameters $`A_l`$ for the <sup>58-76</sup>Ni isotopes, for elastic electron scattering at 500 MeV and 850 MeV, respectively. The ground state neutron densities for these nuclei are shown in Fig. 1. For electron energies below 500 MeV the asymmetry parameters are small ($`<10^{-5}`$), and the differences between neighboring isotopes are $`<10^{-6}`$. At 850 MeV (the energy for which most of the calculations of Ref. have been performed), the values of $`A_l`$ are of the order of $`10^{-5}`$. The differences between neighboring isotopes ($`\sim 10^{-6}`$) are especially pronounced for <sup>58-66</sup>Ni, at $`\theta \approx 20^o`$ and $`\theta \approx 35^o`$. These differences reflect the strong shell effects calculated for the ground state neutron densities of the lighter Ni isotopes (see Fig. 1). The heavier Ni isotopes display more uniform neutron densities in the interior, and the resulting asymmetry parameters $`A_l`$ are not very different. Similar results are also obtained at 1000 MeV electron energy. Of course, above 1 GeV the approximation of elastic scattering on continuous charge and weak densities is not valid any more, and the structure of individual nucleons becomes important. In the remainder of this section we show the results for $`A_l`$ at 850 MeV electron energy. For all chains of isotopes we have also calculated the asymmetry parameters at 250 MeV, 500 MeV and 1000 MeV. The energy dependence, however, is similar to that observed for the Ni isotopes: below 850 MeV the differences in the calculated $`A_l`$ for neighboring isotopes are too small. The figures of merit $$F=A_l^2\frac{d\sigma }{d\mathrm{\Omega }},$$ (13) for 850 MeV electron scattering on the Ni isotopes are shown in Fig. 9.
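The connection exploited in what follows — the minima of the asymmetry tracking the diffraction minima of the neutron form factor — can be illustrated numerically with an assumed two-parameter Fermi profile as a stand-in for the self-consistent RHB neutron density. The half-density radius c = 4.1 fm, diffuseness a = 0.55 fm, and neutron number N = 30 below are illustrative values, not fitted ones, and the form factor is taken in a standard normalization, F(q) = 4π∫dr r² j₀(qr) ρ_n(r), for which F(0) equals the neutron number (an overall constant does not move the minima):

```python
import numpy as np

def trap(y, x):  # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

r = np.linspace(1e-6, 15.0, 4000)              # fm
c, a, N = 4.1, 0.55, 30                        # assumed Fermi parameters, N neutrons
rho = 1.0 / (1.0 + np.exp((r - c) / a))        # two-parameter Fermi profile
rho *= N / trap(4 * np.pi * r**2 * rho, r)     # normalize to N neutrons

def F(q):
    j0 = np.sinc(q * r / np.pi)                # np.sinc(x) = sin(pi x)/(pi x), so this is j0(qr)
    return 4 * np.pi * trap(j0 * rho * r**2, r)

q = np.linspace(0.01, 3.0, 600)                # fm^-1
Fq = np.abs([F(x) for x in q])
# local minima of |F(q)| on the grid: these are the diffraction minima
dips = q[1:-1][(Fq[1:-1] < Fq[:-2]) & (Fq[1:-1] < Fq[2:])]
print(F(0.01), dips[:3])                       # F(0) ~ N; first few minima in fm^-1
```

Shifting c or a moves the dips, which is the sense in which the measured positions of the asymmetry minima constrain the neutron density profile.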
The figure of merit is, of course, strongly peaked at forward angles (see also Fig. 12 of Ref. ). In order to emphasize the differences between Ni isotopes, the $`F`$’s are plotted in the interval $`10^o\le \theta \le 30^o`$. The figure of merit defines the optimal kinematics for an experiment in which the neutron density distribution could be determined from the measured parity-violating asymmetries. The asymmetry parameter $`A_l`$ provides a direct measurement of the Fourier transform of the neutron density . In Figs. 10 and 11 we plot the asymmetries for the Ni isotopes (Fig. 8) as functions of the momentum transfer $`q=2E\mathrm{sin}(\theta /2)`$, and compare them with the squares of the Fourier transforms of the neutron densities $$F(q)=\frac{4\pi }{q}\int 𝑑rr^2j_0(qr)\rho _n(r).$$ (14) The differences between the asymmetries can be directly related to the form factors. Note that the positions of the minima of $`A_l`$ correspond almost exactly to the minima of the form factors. Of course, the agreement would have been perfect if we had plotted the Fourier transforms of the weak density (8), but the differences are indeed very small. More important is the observation that the measurement of the parity-violating asymmetry at high momentum transfer might provide information about the details of the density profile of the neutron distribution. In Fig. 3 we have shown that the differences in the neutron and proton $`rms`$ radii, calculated with the RHB NL3 + Gogny D1S model, are in very good agreement with recent experimental data on Sn isotopes. In Fig. 12 we plot the asymmetry parameters $`A_l`$ for the 850 MeV elastic electron scattering on the even <sup>106-124</sup>Sn isotopes. The angular dependence is similar to that observed for the Ni nuclei; significant differences between neighboring isotopes are only found at $`\theta >20^o`$. The asymmetry parameters are compared to the Fourier transforms of the Sn neutron densities in Figs. 13 and 14. The Na isotopes (Figs.
4 and 5) display the formation of the neutron skin, as well as strong shell effects in the central region of the neutron densities. These shell effects are clearly seen in the plots of the asymmetry parameters $`A_l`$ in Fig. 15. Especially pronounced is the transition between <sup>26</sup>Na and <sup>27</sup>Na, which corresponds to the filling of the s<sub>1/2</sub> neutron orbital. The Fourier transforms of the neutron densities are compared to the asymmetry parameters in Figs. 16 and 17. The differences between the minima of the asymmetry parameters for neighboring isotopes are the same as the differences between the minima of the Fourier transforms of the densities. In principle, it should be possible to deduce the neutron density distribution from the measured asymmetries. An interesting theoretical question is whether the asymmetry parameters are sensitive to the formation of the neutron halo, i.e. whether parity violating electron scattering could be used to detect the formation of the halo. For the even Ne nuclei, the neutron halo phenomenon is illustrated in Fig. 6. The tail in the neutron density develops in <sup>32</sup>Ne. The Fourier transforms of the neutron densities and the asymmetry parameters $`A_l`$ are shown in Fig. 18. At small momentum transfer the differences in the asymmetry parameters are very small. Only at $`q\approx 2.5`$ fm<sup>-1</sup> do the differences become of the order of $`10^{-5}`$. This probably means that, even if it became possible to measure polarized elastic electron scattering on extremely neutron-rich nuclei, the parity-violating asymmetries would not be sensitive to the formation of the neutron halo. ## IV Conclusions The relativistic mean-field theory has been used to study parity violating elastic electron scattering on neutron-rich nuclei.
The parity-violating asymmetry parameter, defined in terms of the difference between cross sections for the scattering of right- and left-handed longitudinally polarized electrons, provides direct information about the neutron density distribution. The ground state neutron densities of neutron-rich Ne, Na, Ni and Sn have been calculated with the relativistic Hartree-Bogoliubov model. The NL3 effective interaction has been used for the mean-field Lagrangian, and pairing correlations have been described by the pairing part of the finite range Gogny interaction D1S. The NL3 + Gogny D1S interaction produces results in excellent agreement with experimental data, not only for spherical and deformed $`\beta `$-stable nuclei, but also for nuclear systems with large isospin values on both sides of the valley of $`\beta `$-stability. In the present work, in particular, the calculated neutron $`rms`$ radii are shown to reproduce recent experimental data on Na and Sn isotopes. Starting from the relativistic Hartree-Bogoliubov solutions for the self-consistent ground states, the charge and weak densities are calculated by folding the point proton and neutron densities. These densities define the Coulomb and weak potentials in the Dirac equation for the massless electron. The partial wave Dirac equation is solved with the inclusion of Coulomb distortion effects, and the cross sections for positive and negative helicity electron states are calculated. The parity-violating asymmetry parameters are plotted as functions of the scattering angle $`\theta `$, or the momentum transfer $`q`$, and they are compared with the Fourier transforms of the neutron density distributions. We have compared the parity-violating asymmetry parameters for chains of neutron-rich isotopes. For low electron energies ($`\lesssim 500`$ MeV), the differences between neighboring isotopes are very small. At 850 MeV, significant differences in the parity-violating asymmetries are found at $`\theta >20^o`$.
They can be related to the differences in the neutron density distributions. In particular, if plotted as a function of the momentum transfer $`q`$, the asymmetry parameter can be related to the Fourier transform of the neutron density. It has been shown that parity violating elastic electron scattering is sensitive to the formation of the neutron skin in Na, Ni and Sn isotopes, and also to the shell effects of the neutron density distributions. On the other hand, from the example of neutron-rich Ne nuclei, it appears that the asymmetry parameters would not be sensitive to the formation of the neutron halo. We conclude that, if it became possible to measure parity violating elastic electron scattering on neutron-rich nuclei, the asymmetry parameters would provide detailed information on neutron density distributions, neutron radii, and differences between neutron and charge radii. This knowledge is, of course, essential for constraining the isovector channel of effective interactions in nuclei, and therefore for our understanding of the structure of nuclear systems far from the $`\beta `$-stability line. ACKNOWLEDGMENTS This work has been supported in part by the Bundesministerium für Bildung und Forschung under project 06 TM 875, and by the Deutsche Forschungsgemeinschaft. The relativistic optical code is based on the program for elastic electron scattering DREPHA, written by B. Dreher, J. Friedrich and S. Klein.
no-problem/9911/hep-ph9911407.html
# Testing models with a nonminimal Higgs sector through the decay 𝑡→𝑞+𝑊𝑍 ## INTRODUCTION The mass of the top quark, which is larger than any other fermion mass in the standard model (SM) and almost as large as its scale of electroweak symmetry breaking (EWSB), cannot be explained within the SM . This has prompted speculation about a possible relationship between the top quark and the nature of the mechanism responsible for EWSB. Several models have been proposed in which such a large mass can be accommodated or plays a significant role. In the supersymmetric (SUSY) extensions of the SM , the large value of the top quark mass can drive the radiative breaking of the electroweak symmetry; furthermore, within the context of SUSY grand unified theories (GUT’s), the fermions of the third family can be accommodated in schemes where their masses arise from a single Yukawa term . On the other hand, in some top–condensate (TC) models it is postulated that new strong interactions bind the heavy top quark into a composite Higgs scenario. From a more phenomenological point of view, it is also intriguing to notice that the top quark decay seems to be dominated by the SM mode ($`t\to bW`$), not only within the SM but also in theories beyond it, which makes the top quark decay width almost insensitive to the presence of new physics, unless the scale of new physics is lighter than the top quark mass itself, such that new states can appear in its decays. This is the case, for instance, in the general two Higgs doublet model (THDM–III) , where the flavor changing mode $`t\to c+h`$ can be important for a light Higgs boson ($`h`$), or in SUSY models with a light stop quark and neutralinos, in whose case the decay $`t\to \stackrel{~}{t}+\stackrel{~}{\chi ^0}`$ can also be relevant.
But in general, the rare decays of the top quark have undetectable branching ratios (BR’s); for instance, the flavor–changing neutral current (FCNC) rare decays $`t\to cV`$ ($`V=\gamma ,Z,g`$) have a very small BR in the SM, of the order of $`10^{-11}`$ , and are out of reach of present and future colliders. A similar result is obtained in several extensions of the SM; for instance in the THDM–II, minimal SUSY extensions of the SM (MSSM) and left–right models, to mention some cases . The rare decay $`t\to qWZ`$ $`(q=b,s,d)`$ may be above the threshold for the production of a real $`WZ`$ state, provided that $`m_t\ge m_q+m_W+m_Z`$. The possibilities to satisfy this relation depend on the final state $`q`$ and the precise value of the top quark mass, which according to the Particle Data Group is $`m_t=173.8\pm 5.2`$ GeV. For the case when $`q=b`$, the top quark mass must satisfy $`m_t\ge 176.1\pm 0.5`$ GeV, where the uncertainty on the right–hand side is mostly due to the ambiguity in the bottom quark mass; thus $`t\to bWZ`$ can occur on–shell only if $`m_t`$ takes its upper allowed value (at $`1\sigma `$). However, if $`q=d`$ or $`s`$, the decays $`t\to qWZ`$ can occur even when $`m_t`$ takes its central value. The value of BR($`t\to bWZ`$) predicted in the SM is $`5.4\times 10^{-7}`$ , which is beyond the sensitivity of Tevatron Run II or even the CERN Large Hadron Collider (LHC); thus its observation would truly imply the presence of new physics. For $`q=d,s`$ the SM result is even smaller, since the amplitude is suppressed by the Cabibbo–Kobayashi–Maskawa (CKM) matrix elements $`V_{tq}`$. On the other hand, the decay mode $`t\to qWZ`$ can proceed through an intermediate charged Higgs boson that couples to both $`tq`$ and $`WZ`$ currents, and thus can be used to test the couplings of Higgs sectors beyond the SM .
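The threshold statement above is simple arithmetic. A sketch with PDG-era gauge boson masses (the quark mass values are illustrative; the text notes that the exact $`m_b`$ carries the quoted $`\pm 0.5`$ GeV ambiguity):

```python
m_W, m_Z = 80.41, 91.19          # GeV, W and Z masses
m_b, m_s, m_d = 4.5, 0.1, 0.005  # GeV; assumed quark mass values for illustration
m_t_central = 173.8              # GeV, PDG central value quoted in the text

for q, m_q in [("b", m_b), ("s", m_s), ("d", m_d)]:
    thr = m_q + m_W + m_Z        # minimum m_t for an on-shell q + W + Z final state
    print(f"t -> {q} W Z threshold: {thr:6.1f} GeV, open at central m_t: {m_t_central >= thr}")
```

This reproduces the pattern in the text: the $`b`$ channel opens only near 176 GeV, above the central $`m_t`$, while the $`d`$ and $`s`$ channels are open already at the central value.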
The construction of extensions of the SM Higgs sector must satisfy the constraints imposed by the successful phenomenological relation $`\rho \equiv m_W^2/m_Z^2\mathrm{cos}^2\theta _w=1`$, which also measures the ratio between the neutral and charged current coupling strengths. At tree level this relation is satisfied naturally in models that include only Higgs doublets, but in more general scenarios there could be tree level contributions to $`\rho -1`$. Since the vertex $`HWZ`$ arises at tree level only for Higgs bosons lying in representations higher than the usual SM Higgs doublet, there could be violations of the constraints imposed by the $`\rho `$ parameter. However, tree–level deviations of the electroweak $`\rho `$ parameter from unity can be avoided by arranging the nondoublet fields and the vacuum expectation values (V.E.V.’s) of their neutral members, so that a custodial $`SU(2)_c`$ symmetry is maintained . On the other hand, a generic coupling of the charged Higgs boson with fermions may be associated with the possible appearance of FCNC in the Higgs–Yukawa sector. FCNC are automatically absent in the minimal SM with one Higgs doublet; however, in multiscalar models large FCNC can appear if each quark flavor couples to more than one Higgs doublet . FCNC can be avoided either by imposing some ad hoc discrete symmetry on the Yukawa Lagrangian, i.e., by coupling each type of fermion only to one Higgs doublet, or by using flavor symmetries. The former case is used in the so–called two Higgs doublet models I and II, whereas the latter is associated with model III, where FCNC are suppressed only by some ansatz for the Yukawa matrices, for instance the Li–Sher one: $`(Y_q)_{ij}=\sqrt{m_im_j}/m_W`$, whose phenomenology was studied in .
In this Rapid Communication we shall consider, in a very general setting, the contribution of a charged Higgs boson to the decay $`t\to qWZ`$, and present the results in terms of two factors that parametrize the doublet–triplet mixing and the nonminimal Yukawa couplings, respectively. Then, we discuss the values that these parameters can take for specific extensions of the SM, when the constraints from both the custodial symmetry and FCNC are satisfied, and present the predicted values for BR($`t\to qWZ`$). ## The decay $`t\to qWZ`$ We are interested in studying the contribution of a charged Higgs boson to the rare decay of the top quark $`t\to qWZ`$ ($`q=d,s,b`$), within the context of models with an extended Higgs sector that include additional Higgs doublets and triplets. The charged Higgs will be assumed to be the lightest charged mass eigenstate that results from the general mixing of doublets and triplets in the charged sector. <sup>*</sup>This is justified by our explicit analysis of the Higgs potential for several models with Higgs doublets and triplets, which will be presented elsewhere. Higgs doublets are needed in order to couple the charged Higgs with quarks; the vertex $`tqH^\pm `$ will be written as follows $$\frac{ig}{2\sqrt{2}m_W}\eta _{tq}\mathrm{cos}\alpha \left[(m_t\mathrm{cot}\beta +m_q\mathrm{tan}\beta )+(m_t\mathrm{cot}\beta -m_q\mathrm{tan}\beta )\gamma _5\right],$$ (1) which can be considered as a modification of the result obtained for the Yukawa sector of the general THDM, where $`\mathrm{cos}\alpha `$ is included to account for the doublet–triplet mixing; $`\mathrm{tan}\beta `$ is the ratio of the V.E.V.’s of the two scalar doublets. The charged Higgs coupling to the quarks is also determined by the parameter $`\eta _{tq}`$, which is equal to the CKM mixing matrix element only for model–II, i.e., $`\eta _{tq}^{II}=V_{tq}`$; however, in the general case (THDM–III), one can have $`\eta _{tq}^{III}>V_{tq}`$ .
On the other hand, we require a representation higher than the doublet in order to obtain a sizeable coupling $`HWZ`$ at tree level, which is written as $$\frac{igm_W}{\mathrm{cos}\theta _w}\mathrm{sin}\alpha g_{\mu \nu }.$$ (2) In order to evaluate the decay $`t\to qWZ`$, we shall write a general amplitude to describe the contribution of the intermediate charged Higgs, neglecting the SM contribution, which is a good approximation since the corresponding BR is very suppressed. To calculate the amplitude one also needs to take into account the finite width of the intermediate charged Higgs boson with momentum $`p_H`$, mass $`m_H`$, and width $`\mathrm{\Gamma }_H`$; for this we shall use the relativistic Breit–Wigner form of the propagator in the unitary gauge. Then, the amplitude can be written in general as $$\mathcal{M}=A\left[\overline{u}(p_q)(a+b\gamma _5)u(p_t)\right]\left(\frac{i}{p_H^2-\widehat{m}_H^2}\right)\left[g_{\mu \nu }ϵ_W^\mu ϵ_Z^\nu \right],$$ (3) where $`\widehat{m}_H\equiv m_H+(i/2)\mathrm{\Gamma }_H`$; $`a`$, $`b`$, and $`A`$ are constants related to the parameters $`\alpha `$ and $`\eta _{tq}`$ previously mentioned: $`a`$ $`=`$ $`m_t\mathrm{cot}\beta +m_q\mathrm{tan}\beta ,`$ (4) $`b`$ $`=`$ $`m_t\mathrm{cot}\beta -m_q\mathrm{tan}\beta ,`$ (5) $`A`$ $`=`$ $`\frac{g^2}{2\sqrt{2}\mathrm{cos}\theta _w}\eta _{tq}\mathrm{cos}\alpha \mathrm{sin}\alpha .`$ (6) To calculate the partial decay width, we shall perform a numerical integration of the expression for the squared amplitude over the standard three–body phase space, namely $$\mathrm{\Gamma }(t\to qWZ)=\frac{1}{(2\pi )^3}\frac{1}{32m_t^3}\int \overline{|\mathcal{M}|^2}𝑑s𝑑t.$$ (7) $`\overline{|\mathcal{M}|^2}`$ denotes the squared amplitude, averaged over initial spins and summed over final polarizations; it has the form $$\overline{|\mathcal{M}|^2}=\frac{A^2[(a^2+b^2)(m_t^2+m_q^2-s)+(a^2-b^2)2m_tm_q]}{(s-m_H^2)^2+m_H^2\mathrm{\Gamma }_H^2}\left[2+\left(\frac{s-m_W^2-m_Z^2}{2m_Wm_Z}\right)^2\right].$$ (8) The integration limits are
$$(m_W+m_Z)^2\le s\le (m_t-m_q)^2,$$ (9) and $$t^{-}\le t\le t^{+},$$ (10) where $$t^\pm =m_t^2+m_Z^2-\frac{1}{2s}[(s+m_t^2-m_q^2)(s+m_Z^2-m_W^2)\mp \lambda ^{1/2}(s,m_t^2,m_q^2)\lambda ^{1/2}(s,m_Z^2,m_W^2)],$$ (11) and $`\lambda (x,y,z)=(x+y-z)^2-4xy`$. The branching ratio for this decay is obtained as the ratio of Eq. (7) to the total width of the top quark, which will include the modes $`t\to qW`$ and $`t\to qH`$; the expressions for the widths are $$\mathrm{\Gamma }(t\to qW^+)=\frac{G_Fm_t^3}{8\pi \sqrt{2}}|V_{tq}|^2\left(1-\frac{m_W^2}{m_t^2}\right)^2\left(1+2\frac{m_W^2}{m_t^2}\right)\left[1-\frac{2\alpha _s}{3\pi }\left(\frac{2\pi ^2}{3}-\frac{5}{2}\right)\right],$$ (12) and $`\mathrm{\Gamma }(t\to qH^+)`$ $`=`$ $`\frac{g^2}{128\pi m_W^2m_t}|\eta _{tq}|^2\mathrm{cos}^2\alpha \left[a^2[(m_t+m_q)^2-m_H^2]+b^2[(m_t-m_q)^2-m_H^2]\right]`$ (13) $`\times `$ $`\lambda ^{1/2}(1,\frac{m_q^2}{m_t^2},\frac{m_H^2}{m_t^2}).`$ (14) On the other hand, the Higgs width will include the fermionic decays $`H^+\to c\overline{s}`$ and $`H^+\to \tau ^+\nu _\tau `$; adding them we obtain $$\mathrm{\Gamma }(H^+\to f\overline{f^{\prime }})=\frac{g^2m_H}{32\pi m_W^2}\mathrm{cos}^2\alpha \left[3|\eta _{cs}|^2(m_c^2\mathrm{cot}^2\beta +m_s^2\mathrm{tan}^2\beta )+m_\tau ^2\mathrm{tan}^2\beta \right]$$ (15) as well as the bosonic mode $`H^+\to W^+Z`$: $`\mathrm{\Gamma }(H^+\to W^+Z)`$ $`=`$ $`\frac{g^2m_H}{64\pi }\mathrm{sin}^2\alpha \left[1+\left(\frac{m_W^2}{m_H^2}\right)^2+\left(\frac{m_Z^2}{m_H^2}\right)^2-2\frac{m_W^2}{m_H^2}-2\frac{m_Z^2}{m_H^2}+10\frac{m_W^2}{m_H^2}\frac{m_Z^2}{m_H^2}\right]`$ (16) $`\times `$ $`\lambda ^{1/2}(\frac{m_H^2}{m_W^2},\frac{1}{\mathrm{cos}^2\theta _w},1).`$ (17) ## Results and conclusions In order to present the results for the mode $`t\to bWZ`$, i.e., $`q=b`$, we shall assume that the top quark mass takes its upper allowed value, and will consider a Yukawa
sector similar to that of model–II, in which case the factor $`\eta _{tb}`$ is equal to the CKM matrix element $`V_{tb}`$ $`(\approx 1)`$; the results are shown for two values of $`\mathrm{tan}\beta `$ (2 and $`m_t/m_b`$), which are acceptable for GUT–Yukawa unification. For the factor $`\mathrm{cos}\alpha \mathrm{sin}\alpha `$, which is part of the constant $`A`$, we shall consider first the value $`\frac{1}{2}`$, which corresponds to the maximum value that can be expected to arise in a scenario where the custodial symmetry is respected, for instance in a model with one Higgs doublet and two Higgs triplets of hypercharges 0 and 2, respectively, where one can align the V.E.V.’s to respect the custodial symmetry and obtain $`\rho =1`$. Although our framework is similar to the one of Ref. , in our case we are allowing full mixing between all the scalar multiplets of the model, which allows us to have charged and neutral Higgs bosons that couple simultaneously to both fermion and gauge boson pairs. On the other hand, to consider a model without a custodial symmetry, we take the value $`\mathrm{sin}\alpha =0.04`$, which corresponds to the maximum value that is allowed by the experimental error in the $`\rho `$ parameter . With all these considerations, we show in Fig. 2 our results for the BR of the decay $`t\to bWZ`$; we notice that it can reach a maximum value of order $`1.78\times 10^{-2}`$. For the decays into the light quarks, still working within the framework of model II, we obtain a very suppressed result, where we now take the central value for the top quark mass; namely, for $`t\to sWZ`$ we get a maximum value for the BR of order $`1.95\times 10^{-6}`$ for $`\mathrm{sin}\alpha \mathrm{cos}\alpha =\frac{1}{2}`$ and $`\mathrm{tan}\beta =2`$; for $`t\to dWZ`$ we get results even smaller and thus uninteresting.
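Results of this kind follow from the numerical integration of Eq. (7). Since the spin-averaged squared amplitude (8) depends only on $`s`$, the $`t`$-integration just supplies the length of the Dalitz strip, $`t^+-t^-=\lambda ^{1/2}(s,m_t^2,m_q^2)\lambda ^{1/2}(s,m_Z^2,m_W^2)/s`$, and the width reduces to a one-dimensional quadrature. A minimal numerical sketch (the charged-Higgs mass and width, and the remaining inputs, are illustrative assumptions rather than the paper's scanned parameter sets):

```python
import numpy as np

# inputs in GeV; illustrative choices, not the paper's exact parameter scan
m_t, m_q, m_W, m_Z = 179.0, 4.5, 80.41, 91.19   # m_t at its upper 1-sigma value, 173.8 + 5.2
m_H, Gam_H = 174.0, 1.0                          # assumed charged-Higgs mass and total width
tan_b, eta_tq = 2.0, 1.0                         # model-II-like choice, eta_tb ~ V_tb ~ 1
cos_a = sin_a = np.sqrt(0.5)                     # cos(a)*sin(a) = 1/2, maximal mixing case
g, cos_w = 0.653, m_W / m_Z                      # SU(2) gauge coupling, on-shell cos(theta_w)

a = m_t / tan_b + m_q * tan_b                    # Eq. (4)
b = m_t / tan_b - m_q * tan_b                    # Eq. (5)
A = g**2 / (2 * np.sqrt(2) * cos_w) * eta_tq * cos_a * sin_a   # Eq. (6)

def lam(x, y, z):                                # Kallen function, lambda(x,y,z)
    return (x + y - z)**2 - 4 * x * y

def msq(s):
    # spin-averaged squared amplitude, Eq. (8); independent of t
    num = A**2 * ((a**2 + b**2) * (m_t**2 + m_q**2 - s) + (a**2 - b**2) * 2 * m_t * m_q)
    bw = (s - m_H**2)**2 + (m_H * Gam_H)**2
    pol = 2.0 + ((s - m_W**2 - m_Z**2) / (2 * m_W * m_Z))**2
    return num / bw * pol

# s-range of Eq. (9), endpoints trimmed to avoid sqrt roundoff at the phase-space edges
s = np.linspace((m_W + m_Z)**2, (m_t - m_q)**2, 200001)[1:-1]
strip = np.sqrt(np.maximum(lam(s, m_t**2, m_q**2) * lam(s, m_Z**2, m_W**2), 0.0)) / s
integrand = msq(s) * strip
Gamma = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)) / ((2*np.pi)**3 * 32 * m_t**3)
print(Gamma)   # partial width in GeV; divide by the total top width for the BR
```

Moving $`m_H`$ through the accessible $`\sqrt{s}`$ window, $`m_W+m_Z\le \sqrt{s}\le m_t-m_q`$, shows the resonant enhancement that drives the large BR values quoted in the text.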
On the other hand, if we consider a model with a Yukawa sector of the type THDM–III, the coupling of the charged Higgs with the quarks is not determined by the CKM mixing matrix; then the couplings $`t\overline{d}H^{-}`$ and $`t\overline{s}H^{-}`$ may not be suppressed. Although in model III there can be dangerous FCNC, it happens that such effects have not been tested by top quark decays, and thus can give large and detectable signals . For the parameter $`\alpha `$ we take the same values as in the previous case; assuming also $`\eta _{ts}=\eta _{td}`$, we get a maximum value for the BR of order $`1.31\times 10^{-3}`$ for both $`t\to (d,s)+WZ`$, as shown in Fig. 2. We conclude from our results that there exists a region of parameters where it is possible to obtain a large BR for the decay $`t\to bWZ`$. Moreover, for $`m_H=162`$ GeV to $`m_H=182`$ GeV, $`\mathrm{cos}\alpha \mathrm{sin}\alpha =\frac{1}{2}`$, and $`\mathrm{tan}\beta =m_t/m_b`$ we obtain a BR larger than the one predicted by the SM. Furthermore, the maximum value for the BR, of order $`10^{-2}`$, seems feasible to detect at the future CERN LHC, where about $`10^8`$ top quark pairs could be produced, and one would have $`10^6`$ events of interest with only one top quark decaying rarely. If we also include the decays of the W and Z into leptonic modes, to allow a clear signal, one would end up with about $`1.3\times 10^4`$ events, which is interesting enough to warrant a future detailed study of backgrounds; however, this is beyond the scope of the present work. On the other hand, we observe from Fig. 2 that even within models without a custodial symmetry, with $`\mathrm{sin}\alpha =0.04`$, it is possible to get a BR for the decay $`t\to sWZ`$ larger than the SM result, or than the result obtained within models where $`\mathrm{sin}\alpha \mathrm{cos}\alpha =\frac{1}{2}`$, depending on the value of $`\mathrm{tan}\beta `$; in some cases it can reach a BR of order $`1.31\times 10^{-3}`$.
In conclusion, we find that the decay $`t\to qWZ`$ is sensitive to the contribution of new physics, in particular from a charged Higgs boson, which makes this mode an interesting arena for testing physics beyond the SM. ###### Acknowledgements. This research was supported in part by the Benemérita Universidad Autónoma de Puebla with funds granted by the Vicerrectoría de Investigación y Estudios de Posgrado under contract VIEP/930/99, and in part by the CONACyT under Contract G 28102 E.