# From Historical Perspectives to Some Modern Possibilities
# A-Tract Induced DNA Bending is a Local Non-Electrostatic Effect
## Results and Discussion
Although the macroscopic curvature of DNA induced by adenine tracts (A-tracts) was discovered almost two decades ago, the structural basis of this phenomenon remains unclear. A few models considered originally suggested that it is caused by intrinsic conformational preferences of certain sequences, but all these and similar theories failed to explain experimental data obtained later. Calculations show that the B-DNA duplex is mechanically anisotropic, that bending towards the minor grooves of some A-tracts is strongly facilitated, and that the macroscopic curvature becomes energetically preferable once the characteristic A-tract structure is maintained by freezing or imposing constraints. However, static curvature never appears spontaneously in calculations that are unbiased a priori, and these results leave all doors open as to the possible physical origin of the effect. In recent years attention has shifted to specific interactions between DNA and solvent counterions, which can bend the double helix by selectively neutralizing some phosphate groups. The possibility of such a mechanism is evident in many protein-DNA complexes, and it has also been demonstrated by direct chemical modification of a duplex DNA. For free DNA in solution, however, the available experimental observations are contradictory. Molecular dynamics simulations of B-DNA in an explicit counterion shell could neither confirm nor disprove this hypothesis. Here we report the first example where stable static curvature emerges spontaneously in molecular dynamics simulations. Its direction is in striking agreement with expectations based upon experimental data. However, we use a minimal B-DNA model without counterions, which strongly suggests that counterions do not play a key role in this effect.
Figure 1 exhibits results of a 10 ns simulation of the dynamics of a 25-mer B-DNA fragment including three A-tracts separated by one helical turn. This sequence was constructed after many preliminary tests with shorter sequence motifs. Our general strategy followed from these considerations. Although the A-tract sequences that induce the strongest bends are known from experiments, probably not all of them would work in simulations. There are natural limitations, such as the precision of the model, and, in addition, the limited duration of trajectories may be insufficient for some A-tracts to adopt their specific conformation. Also, we can study only short DNA fragments; it is therefore preferable to place A-tracts at both ends in order to maximize the possible bend. There is, however, little experimental evidence of static curvature in short DNA fragments, and one may well expect the specific A-tract structure to be unstable near the ends. That is why we did not simply take the strongest experimental “benders”, but looked for sequence motifs that in calculations readily adopt the characteristic local structure, with a narrow minor groove profile and high propeller twist, both in the middle and near the ends of the duplex. The complementary duplex $`\mathrm{AAAATAGGCTATTTTAGGCTATTTT}`$ was constructed by repeating and inverting one such motif.
The upper trace in plate (a) shows the time dependence of the rmsd from the canonical B-DNA model. It fluctuates below 4 Å, sometimes falling to 2 Å, which is very low for a double helix of this length and indicates that all helical parameters are well within the range of the B-DNA family. The lower surface plot shows the time evolution of the minor DNA groove. The surface is formed by 75 ps time-averaged successive minor groove profiles, with that on the front face corresponding to the final DNA conformation. The groove width is evaluated by using space traces of C5’ atoms as described elsewhere. Its value is given in ångströms, and the corresponding canonical B-DNA level of 7.7 Å is marked by the straight dotted lines on the faces of the box. It is seen that the overall groove shape was established after 2 ns and remained stable thereafter, with noticeable local fluctuations. In all A-tracts the groove strongly narrows towards the 3’ ends and widens significantly at the boundaries. There are two less pronounced relative narrowings inside the non-A-tract sequences as well.
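As a minimal illustration of the rmsd measure plotted in plate (a), the deviation between two conformations can be computed as follows (a sketch assuming the coordinates are already optimally superimposed; the paper's procedure also involves axis fitting, which is not reproduced here):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation (same units as the input, e.g. angstrom)
    between two conformations given as (N, 3) arrays of atom coordinates,
    assumed to be already optimally superimposed."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```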
Dynamics of $`\mathrm{B}_\mathrm{I}\leftrightarrow \mathrm{B}_{\mathrm{II}}`$ backbone transitions are shown in plate (b). The B<sub>I</sub> and B<sub>II</sub> conformations are distinguished by the values of two consecutive backbone torsions, $`\epsilon `$ and $`\zeta `$. In a transition they change concertedly from (t,g<sup>-</sup>) to (g<sup>-</sup>,t). The difference $`\zeta -\epsilon `$ is, therefore, positive in the B<sub>I</sub> state and negative in B<sub>II</sub>, and it is used in plate (b) as a monitoring indicator, with the corresponding gray-scale levels shown on the right. Each base pair step is characterized by a column consisting of two sub-columns, with the left sub-columns referring to the sequence written at the top in the 5’-3’ direction from left to right. The right sub-columns refer to the complementary sequence shown at the bottom. It is seen that, in A-tracts, the B<sub>II</sub> conformation is preferably found in ApA steps and that $`\mathrm{B}_\mathrm{I}\leftrightarrow \mathrm{B}_{\mathrm{II}}`$ transitions in neighboring steps often occur concertedly, so that along a single A-strand $`\mathrm{B}_\mathrm{I}`$ and $`\mathrm{B}_{\mathrm{II}}`$ conformations tend to alternate. The pattern of these transitions reveals rather slow dynamics and suggests that MD trajectories on the 10 ns time scale are still not long enough to sample all relevant conformations. Note, for instance, the very stable $`\mathrm{B}_{\mathrm{II}}`$ conformation in both strands at one of the GpG steps.
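The sign test on the torsion difference described above is straightforward to implement. The sketch below (a hypothetical helper, with torsions in degrees) classifies a backbone step, wrapping the difference to handle torsion periodicity:

```python
def classify_backbone(epsilon, zeta):
    """Classify a backbone step as B_I or B_II from the torsions epsilon
    and zeta (degrees). Per the convention in the text, a positive
    (wrapped) zeta - epsilon indicates B_I, a negative one B_II."""
    d = (zeta - epsilon + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
    return "B_I" if d > 0 else "B_II"
```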
Plate (c) shows the time evolution of the overall shape of the helical axis. The optimal curved axes of all DNA conformations saved during the dynamics were rotated, with the two ends fixed on the OZ axis, so as to put the middle point on the OX axis. The axis is then characterized by two perpendicular projections labeled X and Y. Any time section of the surfaces shown in the figure gives the corresponding axis projection averaged over a time window of 75 ps. The horizontal deviation is given in ångströms and, for clarity, its scale is doubled with respect to the true DNA length. Shown on the right are two perpendicular views of the last one-nanosecond-average conformation. Its orientation is chosen to correspond approximately to that of the helical axis in the surface plots.
It is seen that the molecule maintained a planar bent shape during a substantial part of the trajectory, and that at the end the bending plane passed through the three A-tracts. The X-surface clearly shows an increase in bending during the second half of the trajectory. In the perpendicular Y-projection the helical axis is locally wound, but straight on average. The fluctuating pattern in the Y-projection sometimes reveals two local maxima between A-tracts, which correspond to two independent bends with slightly divergent directions. One may also note that there were at least two relatively long periods when the axis was almost straight, namely, around 3 ns and during the fifth nanosecond. At the same time, straightening of only one of the two bending points is a more frequent event, observed several times in the surface plots.
Finally, plate (d) shows the time fluctuations of the bending direction and angle. The bending direction is characterized by the angle between the X-projection plane in plate (c) and the $`xz`$ plane of the local DNA coordinate frame constructed in the center of the duplex. According to the Cambridge convention, the local $`x`$ direction points to the major DNA groove along the short axis of the base pair, while the local $`z`$ axis direction is adjacent to the optimal helicoidal axis. Thus, a zero angle between the two planes corresponds to an overall bend towards the minor groove exactly at the central base pair. In both plots, short-time-scale fluctuations are smoothed by averaging with a window of 15 ps. The total angle measured between the opposite axis ends fluctuates around 10-15° in the least bent states and rises to 40-50° on average during periods of strong bending. The maximal instantaneous bend of 58° was observed at around 8 ns.
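The total bend angle measured between the opposite axis ends can be sketched as the angle between the terminal tangent vectors of the fitted helical axis (a minimal numpy sketch; the paper's actual axis-fitting procedure is not reproduced here):

```python
import numpy as np

def bend_angle(axis_points):
    """Angle (degrees) between the first and last segments of a polyline
    representing the helical axis; 0 for a perfectly straight axis."""
    p = np.asarray(axis_points, dtype=float)
    v1 = p[1] - p[0]    # tangent at one end
    v2 = p[-1] - p[-2]  # tangent at the other end
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```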
The bending direction was much more stable during the last few nanoseconds; nevertheless, it fluctuated around a roughly constant value of 50° starting from the second nanosecond. This value means that the center of the observed planar bend is shifted by approximately two steps from the middle base pair, so that its preferred direction is towards the minor groove at the two ATT triplets, which is well distinguished in plate (c) as well and corresponds to the local minima in the minor groove profiles in plate (a). During the periods when the molecule straightened, the bending direction fluctuated strongly. This effect is due to the fact that when the axis becomes straight the bending plane is not defined, which in our case occurs when the central point of the curved axis passes close to the line between its ends. It is very interesting, however, that after such straightening, bending resumed in approximately the same direction.
Figure 2 exhibits similar data for another 10 ns trajectory of the same DNA fragment, computed in order to check the reproducibility of the results. A straight DNA conformation was taken from the initial phase of the previous trajectory, energy minimized, and restarted with random initial velocities. It shows surprisingly similar results as regards the bending direction and dynamics, in spite of a somewhat different minor groove profile and a significantly different distribution of $`\mathrm{B}_\mathrm{I}`$ and $`\mathrm{B}_{\mathrm{II}}`$ conformers along the backbone. Note that in this case the helical axis was initially S-shaped in the X-projection, with one of the A-tracts exhibiting a completely opposite bending direction. Fluctuations of the bending direction are reduced and are similar to those in the final part of the first trajectory, which apparently results from the additional re-equilibration. In this case the maximal instantaneous bend of 71° was observed at around 4 ns.
Comparison of the traces in plates (a) and (d) of Figs. 1 and 2 clearly shows that the large-scale slow fluctuations of the rmsd are caused by bending. The rmsd drops to 2 Å when the duplex is straight and rises beyond 6 Å in strongly bent conformations. In both trajectories the molecule experienced many temporary transitions to straight conformations, which usually were very short-lived. These observations suggest that the bent state is more stable than the straight one and, therefore, that the observed behavior corresponds to static curvature. In conformations averaged over successive one-nanosecond intervals the overall bending angle is 35-45° except for a few periods in the first trajectory. Figure 3 shows a snapshot from around 8.5 ns of the second trajectory, where the rmsd from the straight canonical B-DNA reached its maximum of 6.5 Å. The strong smooth bend towards the minor grooves of the three A-tracts is evident, with an overall bending angle of around 61°.
All transformations exhibited in Figs. 1 and 2 are isoenergetic, with the total energy fluctuating around the level already established during the first nanosecond, and the same is true for the average helicoidal parameters. Plates (b), however, indicate that there are much slower motions in the system, and this observation precludes any conclusions concerning the global stability of the observed conformations. Moreover, we have computed yet another trajectory for the same molecule starting from the canonical A-DNA form. During 10 ns it converged to a similarly good B-DNA structure with the same average total energy, but the bending pattern was not reproduced. It appears, therefore, that the conformational space is divided into distinct domains, with transitions between them probably occurring on much longer time scales. However, the very fact that stable curvature in good agreement with experimental data emerges in trajectories starting from a featureless straight canonical B-DNA conformation strongly suggests that the true molecular mechanism of A-tract induced bending is reproduced. Therefore, it cannot depend upon the components discarded in our calculations, notably, specific interactions with solvent counterions and long-range electrostatic effects.
We are not yet ready to present a detailed molecular mechanism responsible for the observed curvature, because even in this relatively small system it is difficult to distinguish cause from consequence. We believe, however, that all kinds of bending of the double-helical DNA, including that produced by ligands and that due to intrinsic sequence effects, share a common origin in its limited but high flexibility. Its own conformational energy has a global minimum in the straight form, but this minimum is very broad and flat, and DNA responds by distinct bending to even small perturbations. The results reported here prove that in the case of A-tracts these perturbations are produced by DNA-water interactions in the minor groove. Neither long-range phosphate repulsion nor counterions are essential. The curvature is certainly connected with the specific A-tract structure and modulations of the minor groove width, but it does not seem to be strictly bound to them. In dynamics, conformations both smoothly bent and kinked at the two insertions between the A-tracts are observed periodically. Note also that the minor groove profile differs somewhat between the two trajectories and that it does not change when the molecule straightens. We strongly believe, however, that the experimental data already available will finally allow one to solve this problem by theoretical means, including the approach described here, and we continue these attempts.
## Methods
Molecular dynamics simulations have been performed with the internal coordinate method (ICMD), including a special technique for flexible sugar rings. The so-called “minimal B-DNA” model was used, which consists of a double helix with the minor groove filled with explicit water. Unlike the more widely used models, it does not involve explicit counterions and damps long-range electrostatic interactions in a semi-empirical way by distance scaling of the electrostatic constant and reduction of the phosphate charges. The DNA model was the same as in earlier reports: all torsions were free, as were the bond angles centered at sugar atoms, while the other bonds and angles were fixed and the bases were held rigid. The AMBER94 force field and atom parameters were used with TIP3P water and no cutoff schemes. With a time step of 10 fs, these simulation conditions require around 75 hours of CPU time per nanosecond on a Pentium II-200 microprocessor.
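Distance scaling of the electrostatic constant is commonly realized as a distance-dependent dielectric, for example ε(r) = r. The sketch below illustrates that general idea only; the constant `ke` and the linear scaling are illustrative assumptions, not the paper's exact parameters:

```python
def coulomb_damped(qi, qj, r, ke=332.0636):
    """Coulomb interaction (kcal/mol for charges in units of e and r in
    angstrom) with a distance-scaled dielectric eps(r) = r, so the energy
    falls off as 1/r^2 instead of 1/r -- a generic sketch of semi-empirical
    electrostatic damping, not the paper's exact scheme."""
    return ke * qi * qj / (r * r)
```

With this scaling, the interaction at 10 Å is an order of magnitude weaker than the plain 1/r Coulomb term, mimicking solvent screening without explicit ions.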
The initial conformations were prepared by vacuum energy minimization starting from the fiber B-DNA model constructed from the published atom coordinates. The subsequent hydration protocol to fill the minor groove normally adds around 16 water molecules per base pair. The heating and equilibration protocols were the same as before. During the runs, every 200 ps, water positions were checked in order to identify molecules penetrating into the major groove and molecules completely separated from the duplex. Such molecules, if found, were removed and then re-introduced into the simulation by placing them with zero velocities at random positions around the hydrated duplex, so that they could readily re-join the core system. This procedure assures stable conditions, notably a constant number of molecules in the minor groove hydration cloud and the absence of water in the major groove, which is necessary for fast sampling. The interval of 200 ps between checks is small enough to ensure that on average less than one molecule is repositioned, so the perturbation introduced is considered negligible.
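The periodic water bookkeeping described above can be sketched as follows. This is a hypothetical helper, not the paper's code: the `cutoff` and `spread` values are illustrative, and "separated" is approximated as being farther than the cutoff from every solute atom:

```python
import numpy as np

def reposition_separated(waters, solute, cutoff=5.0, spread=15.0, rng=None):
    """Return water coordinates with 'escaped' molecules (farther than
    `cutoff` from every solute atom) moved to random positions within
    `spread` of the solute's center, mimicking the periodic check
    described in the text."""
    rng = np.random.default_rng(rng)
    waters = np.asarray(waters, dtype=float)
    solute = np.asarray(solute, dtype=float)
    center = solute.mean(axis=0)
    for k, w in enumerate(waters):
        dmin = np.min(np.linalg.norm(solute - w, axis=1))
        if dmin > cutoff:  # separated: reinsert near the duplex
            waters[k] = center + rng.uniform(-spread, spread, size=3)
    return waters
```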
## Acknowledgements
I thank R. Lavery for useful discussions as well as critical comments and suggestions to the paper.
## Appendix
This section contains comments from anonymous referees of peer-reviewed journals where the manuscript was considered for publication but rejected.
### A Journal of Molecular Biology
#### 1 First referee
Dr. Mazur describes molecular dynamics simulations where a correct static curvature of DNA with phased A-tracts emerges spontaneously in conditions where any role of counterions or long range electrostatic effects can be excluded.
I have several problems with this manuscript:
1) The observed curvature is dependent on the starting model. In fact the manuscript uses the phrase ‘stable static curvature’ incorrectly to describe what is probably a trapped metastable state. The observed curve is neither stable nor static.
2) The choice of DNA sequence seems to be biased toward that which gives an altered structure in simulations, and is not that which gives the most pronounced bend in solution. I would suggest a comparison of (CAAAATTTTTG)n and (CTTTTAAAAG)n.
3) The result is not consistent with solution results. See for example:
Prodin, F., Cocchione, S., Savino, M., & Tuffillaro, A. “Different Interactions of Spermine With a Curved and a Normal DNA Duplex - (Ca(4)T(4)G)(N) and (Ct(4)a(4)G)(N) - Gel-Electrophoresis and Circular-Dichroism Studies” (1992) Biochemistry International 27, 291-901.
Brukner, I., Susic, S., Dlakic, M., Savic, A., & Pongor, S. “Physiological concentrations of magnesium ions induces a strong macroscopic curvature in GGGCCC-containing DNA” (1994) J. Mol. Biol. 236, 26-32.
Diekmann, S., & Wang, J. C. “On the sequence determinants and flexibility of the kinetoplast DNA fragment with abnormal gel electrophoretic mobilities” (1985) J. Mol. Biol. 186, 1-11.
Laundon, C. H., & Griffith, J. D. “Cationic metals promote sequence-directed DNA bending” (1987) Biochemistry 26, 3759-3762.
4) The result is not consistent with other simulations. See for example:
Feig, M., & Pettitt, B. M. “Sodium and Chlorine ions as part of the DNA solvation shell” (1999) Biophys. J. 77, 1769-81.
5) The results should be given by objective statistical descriptions rather than a series of spot examples, as in “sometimes reveals two independent bends”.
#### 2 Second referee
This manuscript describes the modeling of a 25-residue DNA duplex using molecular dynamics simulations. The DNA sequence in question contains 3 A/T tracts arranged in-phase with the helix screw and thus is expected to manifest intrinsic bending. Unlike previous MD studies of intrinsically bent DNA sequences, these calculations omit explicit consideration of the role of counterions. Because recent crystallographic studies of A-tract-like DNA sequence have attributed intrinsic bending to the localization of counterions in the minor groove, the present finding that intrinsic bending occurs in the absence of explicit counterions is important for understanding the underlying basis of A-tract-dependent bending.
Overall, the MD procedure appears sound and the calculations were carried out with obvious care and attention to detail. There are two specific issues raised by this study that should be addressed in revision, however.
1. Although the sequence chosen for this study was based on a canonical, intrinsically bent motif consisting of three A-tracts, it is unclear to what extent intrinsic bending has been experimentally shown for this particular sequence. There are known sequence-context effects that modulate A-tract-dependent bending, and thus the author should refer the reader to data in the literature or show experimentally that intrinsic bending of the expected magnitude occurs for this particular sequence. Moreover, one A-tract is out of phase with respect to the others, and it is therefore not clear how this contributes to the overall bend. The author is understandably concerned about end effects with short sequences; this problem can be ameliorated by examining DNA fragments that contain multiple copies of the chosen motif or by extending the ends of the motif with mixed-sequence DNA.
2. Notwithstanding the author’s remark about separating the cause and the effects with respect to intrinsic bending, some comments about the underlying mechanism of bending seem appropriate. It would be particularly useful to know whether average values of any specific conformational variables are unusual, or whether strongly bent states are consistent with narrowing of the minor groove within A-tracts, for example.
# Shape anisotropy and Voids
## Abstract
Numerical simulations on a two-dimensional model system show that voids are induced primarily by shape anisotropy in binary mixtures of interacting disks. The results of this simple model account for the key features seen in a variety of flux experiments using liposomes and biological membranes Sita .
A variety of lipid molecules contribute to the structure and barrier function of biological membranes. Part of each molecule is hydrophilic (head) and part hydrophobic (tail). This amphipathic nature, in the presence of water, leads to a bilayer structure in which hydrophilic head groups have maximum contact with water and hydrophobic tails have minimum contact with water. The process of membrane formation is one of minimizing the free energy and maximizing the stability of the structure Stein .
A major question has been whether the bilayer is to be viewed as an isotropic homogeneous phase or as a heterogeneous phase Lee . If osmotic contraction of the bilayer vesicles leads to an altered hydraulic conductivity (water flux coefficient), one obviously favors a heterogeneous membrane model. Otherwise a homogeneous, isotropic phase model would be adequate, obviating the need to look for fine structure within the bilayer. Using the erythrocyte as an experimental system (in which the area of the biconcave cell does not change when it is osmotically expanded to a spherical shape), it was concluded that hydraulic conductivity is stretch independent, i.e., in support of the isotropic model Sita . An alternative way to assess hydraulic conductivity is to use hydrogen peroxide as an analog of water: since many experimental systems have catalase (an enzyme that degrades hydrogen peroxide to molecular oxygen and water) within the vesicle/cell, an assay of this occluded catalase directly permits one to measure the conductivity to exogenous hydrogen peroxide. Under equilibrium conditions of assay, the rate of degradation is the same as the rate of permeation of the peroxide into the vesicle. Thus, one can directly assess the stretch sensitivity of the membrane by osmotic titrations with osmolites, using non-electrolytes like hydrogen peroxide as probes of flux.
In the course of these experiments it was found that Sita : (i) among all the lipid combinations tested, phosphatidylcholine (PC) vesicles and intact erythrocytes both failed to show a decrease in occluded catalase activity on osmotic compression of the membrane; (ii) all other membrane systems, such as peroxisomes, E. coli and macrophages, showed stretch (osmotic) sensitivity; (iii) so did liposomes made from these cells and organelles; (iv) when binary mixtures were investigated, only cardiolipin and cerebrosides, when added to PC (5 to 10$`\%`$ of PC), conferred stretch sensitivity in liposomes; (v) these binary mixtures also exhibited an enhanced activation volume (osmotic sensitivity) and a diminished activation energy for hydrogen peroxide flux; (vi) glucose was readily permeable across these binary-mixture membranes; (vii) addition of cholesterol, which is abundant in erythrocytes, inhibited the stretch sensitivity to peroxide permeation; (viii) this diffusion was seen to increase with decreasing temperature, i.e., the process has a negative temperature coefficient.
These studies on biological membranes and liposomes, in which composition as well as dynamics vary considerably, prompted us to ask what constitutes a minimal description accounting for the variable permeability induced by doping across a liposomal membrane. For instance, cardiolipin enhanced permeation of hydrogen peroxide and of molecules as large as glucose Sita . Though possible descriptors could be many (composition, structure, dynamics in terms of inter- and intramolecular potentials), we adopted a bare-bones approach to resolve this complex issue and arrive at a minimal description adequate to account for the observations.
Structural changes in the membrane are best identified by non-interactive molecules, and therefore leaks across bilayers are commonly studied using non-electrolytes Stein . The diagnostic for non-specific permeation is size dependence: hydrated solutes intercalate, penetrate and navigate through interstices, spaces or voids, stochastically or in files, to reach the other side of the membrane Lee . In order to capture the diverse features in a parsimonious manner, we restrict ourselves to a two-dimensional cross-section of the three-dimensional system. Such a restriction is reasonable since the probe particle permeating across the membrane at any instant of time experiences the effective cross-section rather than the three-dimensional obstruction. Permeation across the membrane depends primarily on the availability of free space, or voids. Thus the problem reduces, in the first instance, to the study of the packing of two-dimensional objects. One then needs to determine which factor(s) determine the appearance and size distribution of voids in such a two-dimensional system.
The configuration space of this model system (membrane) is a two-dimensional box with toroidal boundary conditions. The constituents of this box are circular disks (and/or rigid combinations of circular disks as dopants) of unit radii. A circular disk simply represents the hard-core scattering cross-section, seen by the passing particle (a non-electrolyte which acts as a probe), across the thickness spanned by two lipid molecules, viewed somewhat as cylinders. A typical dopant is two or more circular disks rigidly joined in a prespecified geometry. These constituents are identified by the position coordinates of their centers, the angle made by the major axis with the side of the box (in the case of dopants) and the radii of the circular disks. It is reasonable to assume that the disks (which represent molecules, with long-range attractive interaction and hard-core repulsion near the center, contained within a structure) interact pairwise via the Lennard-Jones potential (a measure of the interaction energy), which has the form
$`V=\sum _{i=1}^{N}\sum _{j=i+1}^{N}V_{LJ}(r_{ij}),\qquad V_{LJ}(r_{ij})=4\epsilon \left(\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right)`$
where $`r_{ij}`$ is the distance between the centers of the $`i^{\mathrm{th}}`$ and $`j^{\mathrm{th}}`$ disks, $`\sigma `$ determines the range of the hard-core part of the potential, and $`\epsilon `$ sets the depth of the attractive part. In studying binary mixtures, we consider different shape-anisotropic combinations (impurities or dopants) of $`\kappa `$ circular disks and treat each such combination as one unit; e.g., rod<sub>n</sub> denotes a single dopant made of $`n`$ unit circular disks rigidly joined one after another in a straight line. The impurities interact with the constituent circular disks via the potential
$`V(r_{ij})={\displaystyle \underset{\alpha =1}{\overset{\kappa }{}}}V_{LJ}(r_{i_\alpha j})`$
where $`r_{i_\alpha j}`$ is the distance between the centers of the $`\alpha ^{\mathrm{th}}`$ disk in the $`i^{\mathrm{th}}`$ impurity and the $`j^{\mathrm{th}}`$ circular disk; among themselves the impurities interact via
$`V(r_{ij})={\displaystyle \underset{\alpha =1}{\overset{\kappa _1}{}}}{\displaystyle \underset{\beta =1}{\overset{\kappa _2}{}}}V_{LJ}(r_{i_\alpha j_\beta })`$
where $`r_{i_\alpha j_\beta }`$ is the distance between the centers of the $`\alpha ^{\mathrm{th}}`$ disk in the $`i^{\mathrm{th}}`$ impurity and the $`\beta ^{\mathrm{th}}`$ disk in the $`j^{\mathrm{th}}`$ impurity.
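The interaction sums above can be sketched directly (a minimal numpy sketch; ε = σ = 1 and the coordinates are illustrative):

```python
import numpy as np

def v_lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy for two disk centers a distance r apart."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def v_impurity_disk(impurity_centers, disk_center, eps=1.0, sigma=1.0):
    """Interaction of a rigid impurity (kappa constituent disks) with one
    free disk: the sum of V_LJ over the impurity's constituent disks."""
    d = np.linalg.norm(np.asarray(impurity_centers, dtype=float)
                       - np.asarray(disk_center, dtype=float), axis=1)
    return float(sum(v_lj(r, eps, sigma) for r in d))
```

The impurity-impurity interaction is the same construction, summed over the constituent disks of both impurities.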
An $`r`$-void is defined as a closed area in the membrane devoid of disks or impurities and sufficient to accommodate a circular disk of radius $`r`$ Gauri . Clearly, larger voids also accommodate smaller probes, i.e., an $`r`$-void is also an $`r^{\prime }`$-void if $`r^{\prime }<r`$. The voids for a particle of size zero are the voids defined in the conventional sense, i.e., a measure of the net space unoccupied by the disks.
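Whether a probe of radius r fits at a given point can be tested directly from this definition (a brute-force sketch for disks of unit radius; the actual digitization algorithm of Gauri is more elaborate):

```python
import numpy as np

def fits_probe(point, disk_centers, probe_r, disk_r=1.0):
    """True if a probe disk of radius probe_r centered at `point` overlaps
    none of the disks of radius disk_r, i.e. the point lies inside an
    r-void with r >= probe_r."""
    d = np.linalg.norm(np.asarray(disk_centers, dtype=float)
                       - np.asarray(point, dtype=float), axis=1)
    return bool(np.all(d >= disk_r + probe_r))
```

Note that a point accepting a probe of radius r also accepts any smaller probe, matching the monotonicity stated above.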
Equilibrium configurations of the model system are obtained by a Monte Carlo method (using the Metropolis algorithm) starting from a random placement of the disks Gauri ; Binder (the equilibrium configurations thus obtained were further confirmed by simulated annealing NR ). The box was filled with disks occupying 70% of its area, i.e., loosely packed, so as to facilitate the formation of voids. The temperature parameter $`T`$ was chosen such that $`k_BT<4\epsilon `$. This ensured an approximately hexagonal arrangement of the disks and the presence of very few large voids in the absence of dopants (Fig. 1a; very few $`r`$-voids of size 0.5 and above; it may be recalled that glucose presents approximately half the radius of the PC cross-section, yielding a relevant definition for the larger voids of interest). Similar numerical simulations were performed on the model system with dopants. The number of dopants was chosen to be 10% Sita of the number of circular disks, with the constraint that the total occupied area of the box remains 70%, since the focus was on the redistribution of void sizes. Fig. 1b illustrates the formation of larger voids in the vicinity of the rod<sub>2</sub> impurities.
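The Metropolis sampling can be sketched as repeated single-disk trial moves (a minimal sketch; the actual simulations also translate and rotate the rigid dopants and apply toroidal boundary conditions):

```python
import math
import random

def metropolis_step(positions, energy_fn, kT, max_disp=0.1, rng=random):
    """One Metropolis trial: displace a randomly chosen disk and accept
    the move with probability min(1, exp(-dE / kT)); on rejection the
    old position is restored. Returns True if the move was accepted."""
    i = rng.randrange(len(positions))
    old = positions[i]
    e_old = energy_fn(positions)
    positions[i] = (old[0] + rng.uniform(-max_disp, max_disp),
                    old[1] + rng.uniform(-max_disp, max_disp))
    d_e = energy_fn(positions) - e_old
    if d_e > 0 and rng.random() >= math.exp(-d_e / kT):
        positions[i] = old  # reject: restore the previous configuration
        return False
    return True
```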
The variation in the number of $`r`$-voids as a function of the size of the permeating particle (using the digitization algorithm described in Gauri ) is shown in Fig. 2. When only circular disks are present (dotted curve), hardly any large $`r`$-voids are seen. When mixed with anisotropic impurities, say rod<sub>2</sub>, a distinct increase in the number of large $`r`$-voids is seen, with a corresponding redistribution of smaller $`r`$-voids (solid curve). This result is consistent with the unexpected permeation of large molecules such as glucose through the doped membrane, observed experimentally Sita . The difference curve showed the formation of a significant fraction ($``$ 30%) of $`r`$-voids of size $`0.5`$ and above.
Is the induction of large voids due to the anisotropy in the potential of the impurities and, if so, do the large voids form around the rods, the centers of anisotropy? First, we carried out simulations with large circular disks in place of rod<sub>2</sub> as impurities. The radius of the large disks was chosen such that the area occupied by each large disk is the same as that of a rod<sub>2</sub>. Fig. 3 shows the result of such simulations. Curve (a) in Fig. 3 is the difference between the $`r`$-void distribution of the pure membrane and that of the membrane doped with rod<sub>2</sub> impurities. Curve (b) is the same for a membrane doped with the large circular disks. The number of larger $`r`$-voids is clearly always smaller in the latter case, confirming the role of shape anisotropy in the induction of large $`r`$-voids. Further simulations were carried out with rod<sub>2</sub> of smaller size and with rod<sub>4</sub>-type impurities; curves (c) and (d) in Fig. 3 respectively show the corresponding difference curves. They reveal an interesting feature: the peak of the difference curve shifts with the type of anisotropy. This suggests a possible way of constructing membranes with selective permeability properties. The simulation regime adopted here limited the exploration of ternary mixtures in yielding statistically significant results on transport. However, by using rod<sub>2</sub> of size 0.5 (Fig. 3, curve (c)) (an oval approximation of the small dopant cholesterol), we could demonstrate a shift of the void sizes to the left in binary mixtures, consistent with our experimental results in ternary mixtures with cholesterol Sita .
Further, we considered rod<sub>n</sub>-type impurities. Fig. 4 shows the relation between the length of the rod<sub>n</sub> and the number of $`r`$-voids (for $`r=0.55`$). The anisotropy in the potential of rod<sub>n</sub> increases with $`n`$, so the number of large $`r`$-voids should also increase with $`n`$. Fig. 4 indeed shows a jump when the rod<sub>2</sub> impurities are added, followed by a slow, almost linear increase with $`n`$.
Since the dopants induce voids, their influence is most likely to be seen in their own vicinity, enhancing the “local transport”. As the dopants exhibit a different potential in different directions, certain positions of the constituents are preferred from the point of view of energy minimization, eventually giving rise to voids in the vicinity of the impurities. To verify this, we calculate the local permeation probability for a particle of size $`r`$, defined as the ratio of the area of $`r`$-voids to the area of the local neighborhood. Fig. 5 shows the local permeation probability around ten randomly chosen impurities and ten randomly chosen circular disks. The higher local permeation probability is clearly associated with rod<sub>2</sub>.
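The local permeation probability lends itself to a simple Monte Carlo estimate. The sketch below is our own construction (the neighborhood radius and sampling density are arbitrary choices): it samples points uniformly in a circular neighborhood of a chosen site and counts the fraction at which a probe of radius $`r`$ fits between the disks.

```python
import numpy as np

def local_permeation(centers, disk_r, site, probe_r, nbhd_r, n=400):
    """Monte Carlo estimate of the local permeation probability: the
    fraction of a circular neighborhood of radius nbhd_r around `site`
    in which a probe of radius probe_r can be centered without
    overlapping any disk of radius disk_r at `centers`."""
    rng = np.random.default_rng(1)
    # uniform sampling over the neighborhood disk (sqrt for area weight)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = nbhd_r * np.sqrt(rng.uniform(0.0, 1.0, n))
    pts = site + np.stack([rad * np.cos(phi), rad * np.sin(phi)], axis=1)
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)
    fits = (d >= probe_r + disk_r).all(axis=1)
    return fits.mean()
```

Evaluating this quantity around each impurity and around ordinary disks reproduces the kind of comparison shown in Fig. 5.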
The model is realistic in that one can compute elastic properties (a surface-tension-like attribute) by stretching the membrane from one side and computing the energy change; this yielded a change of $``$ 13 dyn/cm, which is of the same order as the observed surface tension, and changes thereof, in bilayers Jan . The model is limited in relation to the investigation of temperature effects, which requires the incorporation of multiple time scales. The model is quite general because it is not restricted by the parameter space of the components, and it is therefore extendible to a variety of phenomena including transport in weakly bound granular media, voids in polymers, and the modeling of zeolites, which may act like a sponge absorbing only desired species.
In summary, we proposed a two-dimensional computational model system comprising a mixture of objects interacting via a Lennard-Jones potential to explain the anomalous permeation seen in bilayers. The significance of these observations on permeation, in what is essentially a granular medium (with long-range attraction as an added feature), relates to the development of large voids seeded by impurities. Unlike shape anisotropy, a change in composition (via a change in $`\sigma `$), in lateral pressure/density, and presumably in temperature (though dynamic simulations would need to be done) would all simply produce voids within the bounds of a hexagonal array. Even among various dopants in shapes X, L, Y, Z, T, symmetrical or otherwise, all other factors being equal, size for size, rod<sub>n</sub> produce the largest voids Gauri .
Thus the large $`r`$-voids induced by anisotropy account for the biological observations of interest, without manifesting as ordered geometric structures. In this sense these $`r`$-voids differ from the conventional results obtained in studies on granular media (which also do not usually incorporate long-range interactions) Granular . It is increasingly becoming clear that voids play a pivotal role in relating the dynamics of biopolymers to specific functional states Raj .
We thank C. N. Madhavarao for discussions. The author (GRP) is grateful to CSIR (India) for fellowship and (ADG) is grateful to DBT (India) for research grant.
FIGURE CAPTIONS
Typical equilibrium configurations of interacting disks. The parameter $`\sigma `$ in the Lennard-Jones potential is 2 units. The size of the box is 50$`\times `$50. (a) A pure membrane with 556 unit circular disks. (b) A doped membrane with 464 circular disks and 46 rod<sub>2</sub> (shown in gray).
Distribution of $`r`$-voids in two different configurations. The main graph shows the number of $`r`$-voids as a function of the relative size ($`r`$) of the probe particle. The vertical bars represent the error margins at the corresponding points. The dotted curve gives the distribution in the pure membrane, while the solid curve shows the same in a membrane doped with rod<sub>2</sub> (10:1). The difference curve clearly demonstrates the presence of large voids in the doped membrane.
Difference curves of the distribution of $`r`$-voids. The curves are obtained by treating the void distribution of the pure membrane as the base. Difference curves for membranes (a) doped with rod<sub>2</sub>; (b) doped with large circular disks (of radius $`\sqrt{2}`$), which occupy the same area as a rod<sub>2</sub> and induce a smaller number of large voids than rod<sub>2</sub>; (c) doped with small rods, which despite their small size induce large voids, with the peak shifted towards the left; (d) doped with rod<sub>4</sub>, which induces significantly larger voids, with the peak shifted towards the right.
Dependence of the number of $`r`$-voids on the length of the rod-shaped impurities. The graph shows a steady increase in the number of $`r`$-voids (for $`r=0.55`$) with $`n`$. The large jump expected from shape anisotropy is clearly seen as soon as the configuration contains rod<sub>2</sub> molecules in addition to the circular disks.
Local permeation probability in a doped model system. The points show the local permeation probability around ten randomly chosen unit disks and ten randomly chosen rod<sub>2</sub>s. Further, as a guide line, averages are shown by the heights of the boxes, clearly indicating significantly more permeation in the neighborhood of rod<sub>2</sub>.
# FZJ-IKP(TH)-2000-05 Complete one–loop analysis of the nucleon’s spin polarizabilities
## Abstract
We present a complete one–loop analysis of the four nucleon spin polarizabilities in the framework of heavy baryon chiral perturbation theory. The first non–vanishing contributions to the isovector and first corrections to the isoscalar spin polarizabilities are calculated. No unknown parameters enter these predictions. We compare our results to various dispersive analyses. We also discuss the convergence of the chiral expansion and the role of the delta isobar.
PACS numbers: 13.40.Cs, 12.39.Fe, 14.20.Dh
Low energy Compton scattering off the nucleon is an important probe to unravel the nonperturbative structure of QCD, since the electromagnetic interactions in the initial and final state are well understood. In the long wavelength limit only the charge of the target can be detected, and the experimental cross sections, up to photon energies $`\omega `$ of about 50 MeV in the centre-of-mass system, can be described reasonably by the Powell formula . At higher energies, $`50<\omega <100`$ MeV, the internal structure of the system slowly becomes visible. Historically this nucleon structure-dependent effect in unpolarized Compton scattering was taken into account by introducing two free parameters into the cross-section formula, commonly denoted the electric $`(\overline{\alpha })`$ and magnetic $`(\overline{\beta })`$ polarizabilities of the nucleon, in analogy to the structure dependent response functions for light-matter interactions in classical electrodynamics. Over the past few decades several experiments on low energy Compton scattering off the proton have taken place, resulting in several extractions of the electromagnetic polarizabilities of the proton. At present, the commonly accepted numbers are $`\overline{\alpha }^{(p)}=(12.1\pm 0.8\pm 0.5)\times 10^{-4}\mathrm{fm}^3`$, $`\overline{\beta }^{(p)}=(2.1\mp 0.8\mp 0.5)\times 10^{-4}\mathrm{fm}^3`$ , indicating that the proton, compared to its volume of about $`1`$ fm<sup>3</sup>, is a rather stiff object. In parallel to the ongoing experimental efforts, theorists have tried to understand the internal dynamics of the nucleon that would give rise to such (small) structure effects. At present, several quite different theoretical approaches provide qualitative and quantitative explanations for these 2 polarizabilities; this can be considered one of the striking successes of chiral perturbation theory (for a general overview, see e.g. ref.).
Quite recently, with the advent of polarized targets and new sources with a high flux of polarized photons, the case of polarized Compton scattering off the proton $`\stackrel{}{\gamma }\stackrel{}{p}\gamma p`$ has come close to experimental feasibility. On the theoretical side it has been shown that one can define 4 spin-dependent electromagnetic response functions $`\gamma _i,i=1,\mathrm{},4`$, which in analogy to $`\overline{\alpha },\overline{\beta }`$ are commonly called the “spin-polarizabilities” of the proton. First studies have been published , claiming that the information on the low-energy spin structure of the proton parameterized in this way can really be extracted from the upcoming double-polarization Compton experiments. A success of this program would clearly shed new light on our understanding of the internal dynamics of the proton and at the same time serve as a check on the theoretical explanations of the polarizabilities. The new challenge to theorists will then be to explain all 6 of the leading electromagnetic response functions simultaneously. At present there exists only one experimental analysis that has shed some light on the magnitude of the (essentially) unknown spin-polarizabilities $`\gamma _i^{(p)}`$ of the proton: the LEGS group has reported a result<sup>*</sup><sup>*</sup>*Note that we have subtracted off the contribution of the pion-pole diagram in order to be consistent with the definition of the spin-polarizabilities given in . for a linear combination involving three of the $`\gamma _i`$, namely
$`\gamma _\pi ^{(p)}|_{\mathrm{exp}.}`$ $`=`$ $`\gamma _1^{(p)}+\gamma _2^{(p)}+2\gamma _4^{(p)}`$ (1)
$`=`$ $`\left(17.3\pm 3.4\right)\times 10^{-4}\mathrm{fm}^4.`$ (2)
We note that this pioneering result was obtained from an analysis of an unpolarized Compton experiment in the backward direction, where the spin-polarizabilities come in as one contribution in a whole class of subleading order nucleon structure effects in the differential cross-section. Given these structure subtleties and the fact that most theoretical calculations have predicted this particular linear combination of spin-polarizabilities to be a factor of 2 smaller than the number given in Eq.(2), we can only reemphasize the need for the upcoming polarized Compton scattering experiments.
In this note we are taking up the challenge on the theory side within the context of Heavy Baryon Chiral Perturbation Theory (HBChPT), extending previous efforts in a significant way. Previously an order $`𝒪(p^3)`$ SU(2) HBChPT calculation was performed, which showed that the leading (i.e. long-range) structure effects in the spin-polarizabilities are given by 8 different $`\pi N`$ loop diagrams giving rise to a $`1/m_\pi ^2`$ behavior in the $`\gamma _i`$. Subsequently it was shown in an $`𝒪(ϵ^3)`$ SU(2) “small scale expansion” (SSE) calculation —which in contrast to HBChPT includes the first nucleon resonance $`\mathrm{\Delta }`$(1232) as an explicit degree of freedom —that 2 ($`\gamma _2,\gamma _4`$) of the 4 spin-polarizabilities receive large corrections due to $`\mathrm{\Delta }`$(1232) related effects, resulting in a big correction to the leading $`1/m_\pi ^2`$ behavior . Another important conclusion of was that any HBChPT calculation that wants to calculate $`\gamma _2,\gamma _4`$ would have to be extended to $`𝒪(p^5)`$ before it can incorporate the large $`\mathrm{\Delta }`$(1232) related corrections found in . Recently, two $`𝒪(p^4)`$ SU(2) HBChPT calculations of polarized Compton scattering in the forward direction appeared, from which one can extract one particular linear combination$`\gamma _0`$ can also be calculated from the absorption cross sections of polarized photons on polarized nucleons via the GGT sum rule , as pointed out in . In the absence of such data several groups have tried to extract the required cross sections via a partial wave analysis of unpolarized absorption cross sections. Recent results of these efforts are given in table 2. of 3 of the 4 $`\gamma _i`$, which is usually called $`\gamma _0`$:
$`\gamma _0=\gamma _1-\left(\gamma _2+2\gamma _4\right)\mathrm{cos}\theta |_{\theta \to 0}.`$ (3)
The authors of claimed to have found a huge correction to $`\gamma _0`$ at $`𝒪(p^4)`$ relative to the $`𝒪(p^3)`$ result already found in , casting doubt on the usefulness/convergence of HBChPT for spin-polarizabilities. Given that $`\gamma _0`$ involves the very 2 polarizabilities $`\gamma _2,\gamma _4`$, which were already shown in to receive huge corrections even up to $`𝒪(p^5)`$ when one tries to calculate them in an effective field theory without explicit $`\mathrm{\Delta }`$(1232) degrees of freedom, the (known) poor convergence for $`\gamma _0`$ found in should not have come as a surprise. We will come back to this point later.
In the following we report on the results of a $`𝒪(p^4)`$ calculation of all 4 spin-polarizabilities $`\gamma _i`$, which allows to study the issue of convergence in chiral effective field theories for these important new spin-structure parameters of the nucleon. The pertinent results of our investigation can be summarized as follows:
1) We first want to comment on the extraction of polarizabilities from nucleon Compton scattering amplitudes. In previous analyses it has always been stated that in order to obtain the spin-polarizabilities from the calculated Compton amplitudes, one only has to subtract off the nucleon tree-level graphs from the fully calculated amplitudes. The remainder in each (spin-amplitude) then started with a factor of $`\omega ^3`$ and the associated Taylor-coefficient was related to the spin-polarizabilities. Due to the (relatively) simple structure of the spin-amplitudes at this order, this prescription gives the correct result in the $`𝒪(p^3)`$ HBChPT and the $`𝒪(ϵ^3)`$ SSE calculations. However, at $`𝒪(p^4)`$ (and also at $`𝒪(ϵ^4)`$ ) one has to resort to a definition of the (spin-) polarizabilities that is soundly based on field theory, in order to make sure that one only picks up those contributions at $`\omega ^3`$ that are really connected with (spin-) polarizabilities. In fact, at $`𝒪(p^4)`$ ($`𝒪(ϵ^4)`$) the prescription given in leads to an admixture of effects resulting from 2 successive, uncorrelated $`\gamma NN`$ interactions with a one nucleon intermediate state. In order to avoid these problems we advocate the following definition for the spin-dependent polarizabilities in (chiral) effective field theories: Given a complete set of spin-structure amplitudes for Compton scattering to a certain order in perturbation theory, one first removes all one-particle (i.e. one-nucleon or one-pion) reducible (1PR) contributions from the full spin-structure amplitudes. Specifically, starting from the general form of the T-matrix for real Compton scattering assuming invariance under parity, charge conjugation and time reversal symmetry, we utilize the following six structure amplitudes $`A_i(\omega ,\theta )`$ in the Coulomb gauge, $`ϵ_0=ϵ_0^{}=0`$,
$`T`$ $`=`$ $`A_1(\omega ,\theta )\stackrel{}{ϵ}^{}\cdot \stackrel{}{ϵ}+A_2(\omega ,\theta )\stackrel{}{ϵ}^{}\cdot \widehat{k}\stackrel{}{ϵ}\cdot \widehat{k}^{}`$ (4)
$`+`$ $`iA_3(\omega ,\theta )\stackrel{}{\sigma }\cdot (\stackrel{}{ϵ}^{}\times \stackrel{}{ϵ})+iA_4(\omega ,\theta )\stackrel{}{\sigma }\cdot (\widehat{k}^{}\times \widehat{k})\stackrel{}{ϵ}^{}\cdot \stackrel{}{ϵ}`$ (5)
$`+`$ $`iA_5(\omega ,\theta )\stackrel{}{\sigma }\cdot [(\stackrel{}{ϵ}^{}\times \widehat{k})\stackrel{}{ϵ}\cdot \widehat{k}^{}-(\stackrel{}{ϵ}\times \widehat{k}^{})\stackrel{}{ϵ}^{}\cdot \widehat{k}]`$ (6)
$`+`$ $`iA_6(\omega ,\theta )\stackrel{}{\sigma }\cdot [(\stackrel{}{ϵ}^{}\times \widehat{k}^{})\stackrel{}{ϵ}\cdot \widehat{k}^{}-(\stackrel{}{ϵ}\times \widehat{k})\stackrel{}{ϵ}^{}\cdot \widehat{k}],`$ (7)
where $`\theta `$ corresponds to the c.m. scattering angle, $`\stackrel{}{ϵ},\widehat{k}(\stackrel{}{ϵ}^{},\widehat{k}^{})`$ denote the polarization vector, direction of the incident (final) photon while $`\stackrel{}{\sigma }`$ represents the (spin) polarization vector of the nucleon. Each (spin-)structure amplitude is now separated into 1PR contributions and a remainder, that contains the response of the nucleon’s excitation structure to two photons:
$`A_i(\omega ,\theta )=A_i(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}+A_i(\omega ,\theta )^{\mathrm{exc}.},i=3,\mathrm{},6.`$ (8)
Taylor-expanding the spin-dependent $`A_i(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ for the case of a proton target in the c.m. frame into a power series in $`\omega `$, the leading terms are linear in $`\omega `$ and are given by the venerable LETs of Low, Gell-Mann and Goldberger :
$`A_3(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ $`=`$ $`{\displaystyle \frac{\left[1+2\kappa ^{(p)}-(1+\kappa ^{(p)})^2\mathrm{cos}\theta \right]e^2}{2M_N^2}}\omega +𝒪(\omega ^2),`$ (9)
$`A_4(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ $`=`$ $`-{\displaystyle \frac{(1+\kappa ^{(p)})^2e^2}{2M_N^2}}\omega +𝒪(\omega ^2),`$ (10)
$`A_5(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ $`=`$ $`{\displaystyle \frac{(1+\kappa ^{(p)})^2e^2}{2M_N^2}}\omega +𝒪(\omega ^2),`$ (11)
$`A_6(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ $`=`$ $`-{\displaystyle \frac{(1+\kappa ^{(p)})e^2}{2M_N^2}}\omega +𝒪(\omega ^2).`$ (12)
While it is not advisable to actually perform this Taylor expansion for the spin-dependent $`A_i(\omega ,\theta )^{1\mathrm{P}\mathrm{R}}`$ due to the complex pole structure, one can do so without problems for the $`A_i(\omega ,\theta )^{\mathrm{exc}.}`$ as long as $`\omega \ll m_\pi `$. For the case of a proton one then finds
$`A_3(\omega ,\theta )^{\mathrm{exc}.}`$ $`=`$ $`4\pi \left[\gamma _1^{(p)}-(\gamma _2^{(p)}+2\gamma _4^{(p)})\mathrm{cos}\theta \right]\omega ^3+𝒪(\omega ^4),`$ (13)
$`A_4(\omega ,\theta )^{\mathrm{exc}.}`$ $`=`$ $`4\pi \gamma _2^{(p)}\omega ^3+𝒪(\omega ^4),`$ (14)
$`A_5(\omega ,\theta )^{\mathrm{exc}.}`$ $`=`$ $`4\pi \gamma _4^{(p)}\omega ^3+𝒪(\omega ^4),`$ (15)
$`A_6(\omega ,\theta )^{\mathrm{exc}.}`$ $`=`$ $`4\pi \gamma _3^{(p)}\omega ^3+𝒪(\omega ^4).`$ (16)
We therefore take Eq.(16) as starting point for the calculation of the spin-polarizabilities, which are related to the $`\omega ^3`$ Taylor-coefficients of $`A_i(\omega ,\theta )^{\mathrm{exc}.}`$. As noted above, both the $`𝒪(p^3)`$ HBChPT and the $`𝒪(ϵ^3)`$ SSE results are consistent with this definition.
2) Utilizing Eqs.(8,16) we have calculated the first subleading correction, $`𝒪(p^4)`$, to the 4 isoscalar spin-polarizabilities $`\gamma _i^{(s)}`$ already determined to $`𝒪(p^3)`$ in in SU(2) HBChPT. We employ here the convention
$`\gamma _i^{(p)}=\gamma _i^{(s)}+\gamma _i^{(v)};\gamma _i^{(n)}=\gamma _i^{(s)}-\gamma _i^{(v)}.`$ (17)
Contrary to popular opinion we show that even at subleading order all 4 spin-polarizabilities can be given in closed-form expressions which are free of any unknown chiral counterterms! The only parameters appearing in the results are the axial-vector nucleon coupling constant $`g_A=1.26`$, the pion decay constant $`F_\pi =92.4`$ MeV, the pion mass $`m_\pi =138`$ MeV, the mass of the nucleon $`M_N=938`$ MeV, as well as its isoscalar, $`\kappa ^{(s)}=-0.12`$, and isovector, $`\kappa ^{(v)}=3.7`$, anomalous magnetic moments. All $`𝒪(p^4)`$ corrections arise from 25 one-loop $`\pi N`$ continuum diagrams, with the relevant vertices obtained from the well-known SU(2) HBChPT $`𝒪(p)`$ and $`𝒪(p^2)`$ Lagrangians given in detail in ref.. To $`𝒪(p^4)`$ we find
$`\gamma _1^{(s)}`$ $`=`$ $`+{\displaystyle \frac{e^2g_A^2}{96\pi ^3F_\pi ^2m_\pi ^2}}\left[1-\mu \pi \right],`$ (18)
$`\gamma _2^{(s)}`$ $`=`$ $`+{\displaystyle \frac{e^2g_A^2}{192\pi ^3F_\pi ^2m_\pi ^2}}\left[1+\mu {\displaystyle \frac{(6+\kappa ^{(v)})\pi }{4}}\right],`$ (19)
$`\gamma _3^{(s)}`$ $`=`$ $`+{\displaystyle \frac{e^2g_A^2}{384\pi ^3F_\pi ^2m_\pi ^2}}\left[1-\mu \pi \right],`$ (20)
$`\gamma _4^{(s)}`$ $`=`$ $`-{\displaystyle \frac{e^2g_A^2}{384\pi ^3F_\pi ^2m_\pi ^2}}\left[1-\mu {\displaystyle \frac{11}{4}}\pi \right],`$ (21)
with $`\mu =m_\pi /M_N\simeq 1/7`$ and the numerical values given in table 1. The leading $`1/m_\pi ^2`$ behavior of the isoscalar spin-polarizabilities is not touched by the $`𝒪(p^4)`$ correction, as expected. With the notable exception of $`\gamma _4^{(s)}`$, which even changes its sign due to a large $`𝒪(p^4)`$ correction, we show that this first subleading order of $`\gamma _1^{(s)},\gamma _2^{(s)},\gamma _3^{(s)}`$ amounts to a 25-45% correction to the leading order result. This does not quite correspond to the expected $`m_\pi /M_N`$ correction of (naive) dimensional analysis, but can be considered acceptable. The physical origin of the large correction in $`\gamma _4^{(s)}`$ is not yet understood, but we remind the reader again of our comments above, that it was shown in the SSE calculation of that one should not expect a good convergence behavior for $`\gamma _2^{(s)},\gamma _4^{(s)}`$ in HBChPT at all.
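As a rough numerical cross-check of the scale set by the common prefactor, the leading $`𝒪(p^3)`$ pieces of Eqs. (18)-(21) can be evaluated directly with the constants quoted above. The sketch below is our own and only reproduces the well-known third-order terms (the $`+4.5\times 10^{-4}`$ fm<sup>4</sup> scale of $`\gamma _1`$ and of the forward combination $`\gamma _0`$), not the $`𝒪(p^4)`$ brackets.

```python
import math

# constants quoted in the text (MeV units)
alpha_em = 1.0 / 137.036
e2 = 4.0 * math.pi * alpha_em          # e^2 = 4*pi*alpha
gA, Fpi, mpi = 1.26, 92.4, 138.0
hbarc = 197.327                        # MeV*fm, converts MeV^-4 to fm^4

# common prefactor e^2 gA^2 / (96 pi^3 Fpi^2 mpi^2), expressed in fm^4
C = e2 * gA**2 / (96.0 * math.pi**3 * Fpi**2 * mpi**2) * hbarc**4

# leading O(p^3) terms of Eqs. (18)-(21): (1, 1/2, 1/4, -1/4) * C
g1, g2, g3, g4 = C, C / 2.0, C / 4.0, -C / 4.0
gamma0 = g1 - g2 - 2.0 * g4            # forward combination of Eq. (3)
# both g1 and gamma0 come out near +4.5e-4 fm^4 at this order
```

Note that at third order the forward combination collapses back to the prefactor itself, since the $`\gamma _2`$ and $`2\gamma _4`$ pieces cancel.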
3) We further report the first results for the 4 isovector spin-polarizabilities $`\gamma _i^{(v)}`$ obtained in the framework of chiral effective field theories. Previous calculations at $`𝒪(p^3)`$ and $`𝒪(ϵ^3)`$ were only sensitive to the isoscalar spin-polarizabilities $`\gamma _i^{(s)}`$, therefore this calculation gives the first indication from a chiral effective field theory about the magnitude of the difference in the low-energy spin structure between proton and neutron. As in the case of the isoscalar spin-polarizabilities there are again no unknown counterterm contributions to this order in the $`\gamma _i^{(v)}`$. All $`𝒪(p^4)`$ contributions arise from 16 one-loop $`\pi N`$ continuum diagrams with the relevant $`𝒪(p),𝒪(p^2)`$ vertices again obtained from the Lagrangians given in ref.. To $`𝒪(p^4)`$ one finds
$`\gamma _1^{(v)}`$ $`=`$ $`{\displaystyle \frac{e^2g_A^2}{96\pi ^3F_\pi ^2m_\pi ^2}}\left[0-\mu {\displaystyle \frac{5\pi }{8}}\right],`$ (22)
$`\gamma _2^{(v)}`$ $`=`$ $`{\displaystyle \frac{e^2g_A^2}{192\pi ^3F_\pi ^2m_\pi ^2}}\left[0-\mu {\displaystyle \frac{(1+\kappa ^{(s)})\pi }{4}}\right],`$ (23)
$`\gamma _3^{(v)}`$ $`=`$ $`{\displaystyle \frac{e^2g_A^2}{384\pi ^3F_\pi ^2m_\pi ^2}}\left[0+\mu {\displaystyle \frac{\pi }{4}}\right],`$ (24)
$`\gamma _4^{(v)}`$ $`=`$ $`0,`$ (25)
with the numerical values again given in table 1. The result of our investigation is that the size of the $`\gamma _i^{(v)}`$ indeed tends to be an order of magnitude smaller than that of the $`\gamma _i^{(s)}`$ (with the possible exception of $`\gamma _1^{(v)}`$), supporting the scaling expectation $`\gamma _i^{(v)}\sim (m_\pi /M_N)\gamma _i^{(s)}`$ from (naive) dimensional analysis. This is reminiscent of the situation in the spin-independent electromagnetic polarizabilities $`\overline{\alpha }^{(v)},\overline{\beta }^{(v)}`$ , which are also suppressed by one chiral power relative to their isoscalar partners $`\overline{\alpha }^{(s)},\overline{\beta }^{(s)}`$.
4) Finally, we want to comment on the comparison between our results and existing calculations using dispersion analyses. Given our comments on the convergence of the chiral expansion for the (isoscalar) spin-polarizabilities, we reiterate that we do not believe our $`𝒪(p^4)`$ HBChPT results for $`\gamma _2^{(s)},\gamma _4^{(s)}`$ to be meaningful. Their large inherent $`\mathrm{\Delta }`$(1232)-related contribution just cannot be included (via a counterterm) before $`𝒪(p^5)`$ in a HBChPT that only deals with pion and nucleon degrees of freedom. In table 1 it is therefore interesting to note that by adding (“by hand”) the delta-pole contribution of $`-2.5\times 10^{-4}`$ fm<sup>4</sup> found in to $`\gamma _2^{(s)}`$ one could get quite close to the range for this spin-polarizability suggested by the dispersion analyses . Similarly, adding $`+2.5\times 10^{-4}`$ fm<sup>4</sup> to $`\gamma _4^{(s)}`$ as suggested by also leads quite close to the range advocated by the dispersion results . However, such a procedure is of course not legitimate in an effective field theory, but it raises the hope that an extension of the $`𝒪(ϵ^3)`$ SSE calculation of that includes explicit delta degrees of freedom could lead to a much better behaved perturbative expansion for the isoscalar spin-polarizabilities. Whether this expectation holds true will be known quite soon . For the isovector spin-polarizabilities we have given the first predictions available from effective field theory. In general the agreement with the ranges advocated by the dispersion analyses is quite good. Judging from table 1, we note that the main difference between the 2 analyses from Mainz seems to lie in the treatment of the isovector structure, indicating that the isospin separation might pose some difficulties in the dispersion approaches.
In table 2 we give a comparison of our results for those linear combinations of the $`\gamma _i`$ that are typically the main focus of attention in the literature. However, we re-emphasize that we do not consider our $`𝒪(p^4)`$ HBChPT predictions for $`\gamma _0^{(s)},\gamma _\pi ^{(s)}`$ to be meaningful, because they involve $`\gamma _2^{(s)},\gamma _4^{(s)}`$. The corresponding isovector combinations, however, again seem to agree quite well with the dispersive results, and so far we have no reason to suspect that they might be affected by the poor convergence behavior of some of their isoscalar counterparts. We further note that our $`𝒪(p^4)`$ HBChPT predictions for $`\gamma _0^{(s,v)}`$ differ from the ones given in 2 recent calculations . As noted above, this difference solely arises from a different definition of the nucleon spin-polarizabilities. If we (“by hand”) Taylor-expand our $`\gamma NN`$ vertex functions in powers of $`\omega `$ and include the resulting terms in the $`\gamma _0`$ structure, we obtain the $`𝒪(p^4)`$ corrections $`\gamma _0^{(s)}=-6.9,\gamma _0^{(v)}=-1.6`$ in units of $`10^{-4}`$ fm<sup>4</sup>, in numerical (and analytical) agreement with . This brings us to an important point: once the first polarized Compton asymmetries have been measured, it has to be checked very carefully whether the same input data, fitted to the terms we define as 1PR plus the additional free $`\gamma _i`$ parameters, lead to the same numerical fit results for the spin-polarizabilities as the dispersion theoretical codes usually employed to extract polarizabilities from Compton data. Small differences, for example in the treatment of the pion/nucleon pole, could lead to quite large systematic errors in the determination of the $`\gamma _i`$. Such studies are under way .
###### Acknowledgements.
G.C.G. would like to acknowledge financial support from the TMR network HaPHEEP under contract FMRX-CT96-0008.
# Observation of the cluster spin-glass phase in La2-xSrxCuO4 by anelastic spectroscopy
## I INTRODUCTION
The low doping region of the phase diagram of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> is attracting considerable interest, due to the appearance of unconventional correlated spin dynamics and ordering processes (for a review see Ref. ). In undoped La<sub>2</sub>CuO<sub>4</sub> the Cu<sup>2+</sup> spins order into a 3D antiferromagnetic (AF) state with the staggered magnetization in the $`ab`$ plane. Doping by Sr rapidly destroys the long range AF order, with $`T_N`$ passing from 315 K to practically 0 K around $`x_c\simeq 0.02`$. Above this critical value of the Sr content no long range AF order is expected at finite temperature. There are also indications that the holes are segregated into domain walls, sometimes identified as charge stripes, which separate hole-poor regions where the AF correlations build up. The holes should be mobile along these “charge rivers”, but at low $`x`$ they localize near the Sr atoms below about 30 K, causing a distortion of the spin texture of the surrounding Cu<sup>2+</sup> atoms. For $`x<x_c`$ the spin distortions around the localized holes are decoupled from the AF background, and freeze into a spin-glass (SG) state below $`T_f\left(x\right)\simeq \left(815\text{K}\right)x`$. For $`x>x_c`$ a cluster spin-glass (CSG) state is argued to freeze below $`T_g\left(x\right)\propto 1/x`$, and AF correlations develop within the domains defined by the charge walls, with the easy axes of the staggered magnetization uncorrelated between different clusters. The formation of the SG and CSG states is inferred from sharp maxima in the <sup>139</sup>La NQR and $`\mu `$SR relaxation rates, which indicate the slowing of the AF fluctuations below the measuring frequency ($`10^7`$-$`10^8`$ Hz in those experiments) on cooling, and from the observation of irreversibility, remnant magnetization, and scaling behavior in magnetic susceptibility experiments.
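For orientation, the two freezing branches quoted above can be combined into a toy phase-boundary function. This sketch is ours: the (815 K) slope is the one quoted for the SG branch, while the prefactor of the $`1/x`$ cluster-glass branch is an assumption fixed only by continuity at $`x_c=0.02`$.

```python
def t_freeze(x, xc=0.02, slope=815.0):
    """Illustrative spin-freezing temperature (K) vs Sr content x.
    SG branch: T_f = slope * x for x < xc; CSG branch: T_g ~ 1/x,
    with the prefactor chosen so that the two branches match at xc."""
    if x < xc:
        return slope * x           # spin glass, T_f proportional to x
    return slope * xc**2 / x       # cluster spin glass, T_g proportional to 1/x
```

With these (assumed) choices the freezing temperature peaks near 16 K at $`x_c`$ and falls to a few kelvin by $`x=0.06`$, the temperature scale of the features discussed below.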
Here we report the observation of a step-like increase of the low-frequency acoustic absorption close to the temperature at which the spin freezing process is detected in the NQR measurements. The absorption is ascribed to changes of the sizes of the frozen clusters induced by the vibration stress through magnetoelastic coupling, or equivalently to the motion of the walls between them.
## II EXPERIMENTAL AND RESULTS
The samples where prepared by standard solid state reaction as described in Ref. and cut in bars approximately $`40\times 4\times 0.6`$ mm<sup>3</sup>. The final Sr contents and homogeneities where checked from the temperature position and sharpness of the steps in the Young’s modulus and acoustic absorption due to the tetragonal (HTT) / orthorhombic (LTO) transition, which occurs at a temperature $`T_t`$ linearly decreasing with doping. The transitions appear narrower in temperature than the one of a Sr-free sample, indicating that the width was mostly intrinsic and not due to Sr inhomogeneity, except for the sample at the lowest Sr content. The Sr concentrations estimated in this way turned out $`x=0.0185\pm 0.0015`$, $`0.0315\pm 0.0015`$ and $`0.0645\pm 0.002`$, in good agreement with the nominal compositions. In the following the samples will be referred as $`x=0.019`$, 0.03 and 0.06.
The complex Young’s modulus $`E`$ was measured by electrostatically exciting either of the lowest three flexural modes and detecting the vibration amplitude by a frequency modulation technique. The elastic energy loss coefficient (or reciprocal of mechanical $`Q`$) is related to the imaginary part $`E^{\prime \prime }`$ of $`E`$ by $`Q^1(\omega ,T)=E^{\prime \prime }(\omega ,T)/E^{}(\omega ,T)`$, and it was measured by the decay of the free oscillations or the width of the resonance peak.
In Fig. 1 the anelastic spectra below $`\simeq 16`$ K of the three samples with $`x=0.019`$, 0.03, 0.06, measured exciting the first flexural mode, are reported. A step-like increase of the absorption is observed around or slightly below $`T_g`$ (Ref. ). The gray arrows indicate the values of $`T_g`$ in the magnetic phase diagrams deduced from NQR (lower values) and $`\mu `$SR (higher values) experiments, which are in agreement with the data in Ref. (for the sample with $`x=0.019`$ the $`T_g\left(x=0.02\right)`$ values are indicated). The black arrows indicate the temperature of the maximum of the <sup>139</sup>La NQR relaxation rate measured on the same samples in a separate study, which signals freezing into the spin-glass phase, as discussed later. The coincidence of the temperatures of the absorption steps with those of freezing of the spin fluctuations suggests a correlation between the two phenomena.
The sample with $`x=0.03`$ was outgassed from excess O by heating in vacuum up to $`\simeq 790`$ K, while the other two samples were in the as-prepared state, therefore containing some interstitial O. The concentration $`\delta `$ of excess O is a decreasing function of $`x`$ (Ref. ) and should be negligible for $`x=0.06`$ but not for $`x=0.019`$. This fact allowed us to observe the absorption step singled out from the high-temperature tail of an intense peak that occurs at lower temperature (see the sharp rise of dissipation below 3 K for $`x=0.019`$ in Fig. 1). Such a peak has been attributed to the tunneling-driven tilt motion of a fraction of the O octahedra. The LTO phase is inhomogeneous on a local scale, and a fraction of the octahedra would be unstable between different tilt orientations, forming tunneling systems which cause the anelastic relaxation process. The interstitial O atoms force the surrounding octahedra into a fixed tilt orientation, resulting in a decrease of the fraction of mobile octahedra and therefore in a depression of the absorption peak. In addition, doping shifts the peak to lower temperature at a very high rate, due to the coupling between the tilted octahedra and the hole excitations. Therefore, it is possible to reduce the weight of the low temperature peak by introducing concentrations of interstitial O atoms small enough not to change appreciably the doping level due to the Sr substitutionals. Figure 2 compares the absorption curves of the $`x=0.019`$ sample in the as-prepared state, with a concentration $`\delta \simeq 0.002`$ of excess O, and after removing it in vacuum at high temperature. The initial concentration $`\delta `$ has been estimated from the intensity of the anelastic relaxation process due to the hopping of interstitial O, whose maximum occurs slightly below room temperature at our measuring frequencies (not shown here).
The presence of excess O indeed decreases and shifts to lower temperature the tail of the peak in Fig. 2, while the effect on the absorption step is negligible. This justifies the comparison of the sample with $`x=0.019`$ and $`\delta >0`$ together with the other samples with $`\delta 0`$, and demonstrates that the nature of the low temperature peak is different from that of the step-like absorption.
## III DISCUSSION
The present data show the presence of a step in the acoustic absorption at the boundary of the spin-glass quasi-ordered state in the $`T,x`$ magnetic phase diagram. The case of the $`x=0.019`$ sample is less clear-cut, since the step is rather smooth. Furthermore, the Sr content is within the range $`0.018<x<0.02`$, at the boundary between the SG and the CSG phases, where the phase diagram is largely uncertain. The $`T_f\left(x\right)`$ line ends at $`\simeq 15`$ K for $`x\simeq 0.018`$, and the line $`T_g\left(x\right)`$ starts from 10-12 K at $`x\simeq 0.02`$ (Refs. ). A larger spread of experimental data (from 7.8 to 12.5 K) is actually observed just at $`x=0.02`$.
A mechanism which in principle produces acoustic absorption is the slowing down of the magnetic fluctuations toward the spin-glass freezing. When measuring the spectral density $`J_{\text{spin}}(\omega ,T)`$ of the spin fluctuations (the Fourier transform of the spin-spin correlation function), e.g. through the <sup>139</sup>La NQR relaxation rate, a peak in $`J_{\text{spin}}`$ is found at the temperature at which the fluctuation rate $`\tau ^{-1}\left(T\right)`$ becomes equal to the measuring angular frequency $`\omega `$. Near the glass transition the magnetic fluctuation rate was found to approximately follow the law $`\tau ^{-1}\propto \left[\left(T-T_g\right)/T_g\right]^2`$, and the temperature at which the condition $`\omega \tau =1`$ for the maximum of relaxation is satisfied for $`\omega /2\pi =12÷19`$ MHz is close to $`T_g`$. A similar peak would be observed in the spectral density of the lattice strain $`J_{\text{latt}}(\omega ,T)`$, if the spin fluctuations cause strain fluctuations through magnetoelastic coupling. The acoustic absorption is proportional to the spectral density of the strain and hence to $`J_{\text{latt}}`$, $`Q^{-1}=\omega J_{\text{strain}}/T\equiv \omega J_{\text{latt}}/T`$, and therefore at our frequencies ($`\omega \simeq 50`$ kHz) we should observe a narrow peak at a temperature slightly lower than the ones detected by NQR relaxation. The absorption steps in Fig. 1 can hardly be identified in a strict way as due to the contribution from the freezing magnetic fluctuations because they appear as steps instead of peaks. We propose that the main contribution comes from the stress-induced movement of the domain boundaries between the clusters of quasi-frozen antiferromagnetically correlated spins. The mechanism is well known for ferromagnetic materials, but is possible also for an ordered AF state, if an anisotropic strain is coupled with the easy magnetization axis.
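The frequency dependence of the expected freezing temperature can be illustrated by solving $`\omega \tau (T)=1`$ with the power law above; the microscopic time below is an arbitrary assumed value, used only to show that a kHz probe meets the condition much closer to $`T_g`$ than an NQR probe does:

```python
import math

TG = 10.0        # K, representative cluster spin-glass temperature
TAU0 = 1.0e-11   # s, hypothetical microscopic correlation time (assumption)

def t_star(omega):
    """Temperature where omega * tau(T) = 1, for
    tau(T) = TAU0 * (TG / (T - TG))**2 (the power law quoted in the text):
    T* = TG * (1 + sqrt(omega * TAU0))."""
    return TG * (1.0 + math.sqrt(omega * TAU0))

t_nqr = t_star(2 * math.pi * 15.0e6)  # NQR probe, omega/2pi in the tens of MHz
t_acoustic = t_star(5.0e4)            # acoustic probe, omega ~ 5e4 rad/s
# t_acoustic < t_nqr: the low-frequency condition is satisfied closer to TG
```

Whatever the assumed attempt time, the hierarchy `t_acoustic < t_nqr` holds, which is why a slowing-down peak at acoustic frequencies would be expected slightly below the NQR one.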
In this case, the elastic energy of domains with different orientations of the easy axis would be differently affected by a shear stress, and the lower energy domains would grow at the expense of the higher energy ones. The dynamics of the domain boundaries is different from that of the domain fluctuations and generally produces broad peaks in the susceptibilities. An example is the structural HTT/LTO transformation in the same samples, where the appearance of the orthorhombic domains is accompanied by a step-like increase of the acoustic absorption. We argue that the features in the anelastic spectra just below $`T_g`$ are associated with the stress-induced motion of the walls enclosing the clusters of AF correlated spins. More properly, the anelastic relaxation is attributed to the stress-induced changes of the sizes of the different domains.
The $`x=0.019`$ sample is at the border $`x_c\simeq 0.02`$ between the SG and CSG states. The NQR measurements on the same sample indicate a spin-freezing temperature of $`\simeq 9`$ K, closer to the CSG $`T_g\left(x_c\right)`$ than to the SG $`T_f\left(x_c\right)`$, which is consistent with the presence of moving walls, otherwise absent in the SG state. Nonetheless, following the model proposed by Gooding et al. we do not expect a sharp transition between the SG and the CSG states. According to that model, at low temperature the holes localize near the Sr dopants, and in the ground state an isolated hole circulates clockwise or anti-clockwise over the four Cu atoms neighboring the Sr. Such a state induces a distortion of the surrounding Cu spins, otherwise aligned according to the prevalent AF order parameter. The spin texture arising from the frustrated combination of the spin distortions from the various localized holes produces domains with differently oriented AF order parameters, which can be identified with the frozen AF spin clusters. The dissipative dynamics which we observe in the acoustic response should arise from the fact that the energy surface of the possible spin textures has many closely spaced minima, and the vibration stress, through magnetoelastic coupling, can favor jumps to different minima. In this picture, one could argue that the random distribution of Sr atoms may cause the formation of spin clusters also for $`x\lesssim x_c`$, and it is possible to justify the fact that for $`x=0.019`$ the absorption step does not start below the maximum of the <sup>139</sup>La NQR relaxation rate, which signals the freezing of the spin clusters. Rather, the acoustic absorption slowly starts increasing slightly before the $`T_g`$ determined by the NQR maximum is reached. This may indicate that the spin dynamics is not only governed by cooperative freezing, but is also determined by the local interaction with the holes localized at the surrounding Sr atoms.
Then, the regions in which the Sr atoms induce a particularly strong spin-texture could freeze and cause anelastic relaxation before the cooperative transition to the glass state is completed. Systematic measurements around the $`x=0.02`$ doping range are necessary to clarify this point.
The dependence of the intensity of the absorption step on $`x`$, which is sharpest and most intense at $`x=0.03`$, qualitatively supports the above picture. In fact, at lower doping one has only few domains embedded in a long range ordered AF background, while above $`x\simeq 0.05`$ the fraction of walls of disordered spins connecting the Sr atoms increases at the expense of the ordered domains, with a cross-over to incommensurate spin correlations. The anelasticity due to the stress-induced change of the domain sizes is expected to be strongest in correspondence with the greatest fraction of ordered spins, namely between $`0.03`$ and $`0.05`$, in accordance with the spectra in Fig. 1.
Finally we point out the insensitivity of the absorption step to the presence of interstitial O (Fig. 2 and Ref. ), in view of the marked effects that even small quantities of excess O have on the low temperature peak (Fig. 2) and on the rest of the anelastic spectrum. This is consistent with a dissipation mechanism of magnetic rather than structural origin.
## IV CONCLUSION
The elastic energy loss coefficient of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (proportional to the imaginary part of the elastic susceptibility) measured around $`10^3`$ Hz in samples with $`x=0.019`$, 0.03 and 0.06 shows a step-like rise below the temperature of the transition to a quasi-frozen cluster spin-glass state. The origin of the acoustic absorption is thought to be magnetoelastic coupling, namely anisotropic in-plane strain associated with the direction of the local staggered magnetization. The absorption is not peaked at $`T_g`$ and therefore does not directly correspond to the peak in the dynamic spin susceptibility due to the spin freezing. Rather, it has been ascribed to the stress-induced changes of the sizes of the spin clusters, or equivalently to the motion of the walls. The phenomenology is qualitatively accounted for in the light of the model of Gooding et al. of magnetic correlations of the Cu<sup>2+</sup> spins induced by the holes localized near the Sr dopants.
## Acknowledgments
The authors thank Prof. A. Rigamonti for useful discussions and for a critical review of the manuscript. This work has been done in the framework of the Advanced Research Project SPIS of INFM.
## V Figure captions
Fig. 1 Elastic energy loss coefficient of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x=0.019`$ ($`1.3`$ kHz), $`x=0.03`$ ($`1.7`$ kHz), $`x=0.06`$ ($`0.43`$ kHz). The gray arrows indicate the temperature $`T_g`$ of freezing into the cluster spin-glass state deduced from NQR (lower values) and $`\mu `$SR (higher values) experiments. The black arrows indicate the temperature of the maximum of the <sup>139</sup>La NQR relaxation rate measured on the same samples.
Fig. 2 Elastic energy loss coefficient of the sample with $`x=0.019`$ in the as-prepared state (with interstitial O) and after outgassing the excess O, measured at 1.3 kHz.
# CENTRAL RAPIDITY DENSITIES OF CHARGED PARTICLES AT RHIC AND LHC
## 1 Introduction
In the last years many experimental and theoretical efforts have been devoted to the search for Quark Gluon Plasma (QGP) and/or collective effects<sup>1</sup><sup>1</sup>1Collective effects are those considered to explain the event, which go beyond the superposition of elementary nucleon-nucleon collisions. in heavy ion collisions, as a tool to study the nonperturbative aspects of Quantum Chromodynamics (QCD). To achieve this goal several signatures of the phase transition(s) from confined to deconfined quarks and gluons and of chiral symmetry restoration have been proposed.
The finding of three of the proposed signals at the Super Proton Synchrotron (SPS) at CERN has generated great excitement and debate in the scientific community in this field. In fact, an abnormal suppression of J/$`\psi `$ and a strong enhancement of strange baryons and antibaryons have been observed in central<sup>2</sup><sup>2</sup>2By central collision we mean a head-on one, in which most of the matter of the lightest nucleus participates. In practice different criteria are used, both theoretically (upper bound in impact parameter, minimum number of participant wounded nucleons,$`\mathrm{}`$) and experimentally (percentage of the cross section, lower bound in the number of charged particles,$`\mathrm{}`$). Pb-Pb collisions, compared with those measured in collisions between lighter projectiles and targets. Also an enhancement in the dilepton spectrum for dilepton masses below 0.8 GeV/c<sup>2</sup> has been seen in Pb-Au collisions . Whether or not these three experimental observations are really an unambiguous proof of the existence of a QGP is still an open question , due to their possible explanation using more conventional, but still interesting, physics. In any case, it is expected that the forthcoming heavy ion experiments in the Relativistic Heavy Ion Collider (RHIC) at BNL and the Large Hadron Collider (LHC) at CERN will definitively clarify the point<sup>3</sup><sup>3</sup>3RHIC and LHC will provide center of mass energies of 200 GeV and 5.5 TeV per nucleon respectively, to be compared with $`\simeq 20`$ GeV per nucleon at the SPS. .
In spite of the work done in these last years, there are many fundamental aspects of the physics of heavy ions at high energies which are not clear at all. Fundamental questions like, e.g.:
* Is particle rapidity density proportional to the number of participant nucleons or to the number of elementary nucleon-nucleon collisions?
* What is the physical explanation of the SPS particle correlation data?
* How large will particle multiplicities be at RHIC and LHC?
are answered in very different ways by several models, all of them claiming to agree with the existing experimental data.
Referring to the last question, Fig. 1 shows the pseudorapidity distribution of charged particles from different models for central Pb-Pb collisions at a beam energy of 3 TeV per nucleon; this plot has been taken from the ALICE<sup>4</sup><sup>4</sup>4ALICE (A Large Ion Collider Experiment) is the approved detector at the LHC fully dedicated to Heavy Ion Physics. Technical Proposal prepared by the ALICE Event Generator Pool in December 1995. The results according to Monte Carlo codes of several models show large differences at central pseudorapidity. Indeed, between the String Fusion Model (SFM) and the VENUS or SHAKER codes there is a factor larger than 4 at $`\eta =0`$, while the difference in the fragmentation regions ($`|\eta |\simeq 5`$) is smaller. At RHIC energy<sup>5</sup><sup>5</sup>5An updated set of predictions for RHIC can be found in . the difference is about a factor of 2; most models give results in the range $`700÷1500`$.
These uncertainties in one of the most elementary aspects of the collision underline the need to keep the conventional physics of heavy ions under control in order to distinguish clearly the signatures of QGP and/or collective effects in the proposed observables. Needless to say, from the experimental point of view it is crucial for the design of the detectors to know whether there will be 2000 or 8000 charged particles per unit rapidity in central Pb-Pb collisions at the LHC. For these reasons we review in this paper the charged particle central rapidity density predictions of different models for central collisions between the largest nuclei that will be available at RHIC and LHC, discussing the origin of the differences among the results.
According to their origin, models can be classified into three categories. On the one hand, some models like Dual Parton Model (DPM) , its Monte Carlo implementation DPMJET , Quark-Gluon String Model (QGSM) , FRITIOF , SFM , Relativistic Quantum Molecular Dynamics (RQMD) , Ultrarelativistic Quantum Molecular Dynamics (UrQMD) , VENUS or its new version NEXUS , and LUCIAE mainly pay attention to the soft part of the collision (there is no need for a hard perturbative part at SPS energies). In some of these models the hard part is included by adding the jet cross section to the elementary soft one, as input for the eikonalized cross section.
On the contrary, other models like the Heavy-Ion Jet Interaction Generator (HIJING) , Eskola et al. , and Geiger and Müller are mainly focused on the hard part. They compute the number of minijets or partons with transverse momentum larger than a given $`p_0\simeq 1÷2`$ GeV/c. These hard partons are taken as the starting point of an evolution and expansion prior to hadronization (for discussions on this point see for example ). A soft part, extracted from the SPS data, is added with an energy dependence taken from some model.
A third kind of models are the statistical and thermodynamical ones . In these models the main predictions refer to ratios between different kinds of particles and not to absolute values of each kind. Usually, to get absolute rapidity densities the volume at freeze-out has to be specified. The volumes used are $`\simeq 3600`$ and $`\simeq 14400`$ fm<sup>3</sup>, giving charged particle densities at midrapidity of 1200 and 8000 at RHIC and LHC energies respectively<sup>6</sup><sup>6</sup>6The value of 8000 charged particles per unit rapidity at $`y=0`$ was the preferred value for many models before 1995. Indeed only the SFM gave values close to 2500. Now, for different although probably related reasons, several models have lowered their predictions to values close to the SFM one. .
The plan of this review will be the following: After this Introduction, in the next Section the DPM and the DPMJET Monte Carlo code will be discussed in some detail, introducing several concepts which will also be used in the other models. In Sections 3, 4, 5 and 6 the SFM, RQMD, HIJING and Perturbative QCD (PQCD) and Hydrodynamical models respectively, together with their predictions for charged particle densities at midrapidity, will be briefly reviewed. Other models will be discussed in Section 7. Afterwards, in Section 8 we will discuss percolation in heavy ion collisions, and in Section 9 possible implications for Cosmic Ray Physics will be commented on. In the last Section the different results will be compared and discussed.
## 2 The Dual Parton Model and the DPMJET Monte Carlo code
The DPM is a dynamical model for low $`p_{\perp }`$ hadronic and nuclear interactions, based on the large $`N`$ expansion of QCD with $`N_c/N_f`$ fixed . The dominant lowest order configuration in p-p scattering at high energy consists in the production of two strings between valence constituents, of type $`(qq)_vq_v`$, see Fig. 2.
There are also more complicated terms, corresponding to higher order diagrams in the large $`N`$ expansion, involving 4, 6,$`\mathrm{}`$ strings. These extra strings are of the type $`q_s\overline{q}_s`$, with sea quarks and antiquarks at their ends (Fig. 3). These configurations correspond to multiple inelastic scattering in the $`S`$-matrix approach, the number of strings being equal to twice the number of inelastic collisions. The contribution of each configuration to the cross section is determined using the generalized eikonal approach (see below) or the perturbative Reggeon calculus in hadron-hadron collisions and the Glauber-Gribov model in collisions involving nuclei.
For A-A collisions, the rapidity distribution of secondaries is given by
$$\frac{dN^{\mathrm{AA}}}{dy}=\overline{n}_\mathrm{A}\left[N^{(qq)_v^{\mathrm{A}_p}q_v^{\mathrm{A}_t}}(y)+N^{q_v^{\mathrm{A}_p}(qq)_v^{\mathrm{A}_t}}(y)\right]+2(\overline{n}-\overline{n}_\mathrm{A})N^{q_s\overline{q}_s}(y),$$
(1)
where $`N(y)`$ are the rapidity distributions of produced particles in the individual strings stretched between the projectile ($`p`$) and target ($`t`$) nuclei, $`\overline{n}_\mathrm{A}`$ is the average number of wounded nucleons of A and $`\overline{n}`$ is the average number of nucleon-nucleon collisions. Both $`\overline{n}_\mathrm{A}`$ and $`\overline{n}`$ are computed in the Glauber model. For instance, for minimum bias collisions
$$\overline{n}=\frac{\mathrm{A}^2\sigma _{\mathrm{NN}}}{\sigma _{\mathrm{AA}}}\simeq \frac{\mathrm{A}^2\sigma _{\mathrm{NN}}}{\pi (2R_\mathrm{A})^2}\simeq \frac{\mathrm{A}^{4/3}}{4},$$
(2)
with $`\sigma _{\mathrm{NN}}`$ and $`\sigma _{\mathrm{AA}}`$ the nucleon-nucleon and nucleus-nucleus cross sections respectively; for central collisions,
$$\sigma _{\mathrm{AA}}\simeq \pi R_\mathrm{A}^2,\overline{n}\propto \mathrm{A}^{4/3}.$$
(3)
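A quick numerical check of the estimate in (2) for Pb (A = 208), with the standard rounded inputs $`\sigma _{\mathrm{NN}}`$ of about 40 mb and $`R_\mathrm{A}=r_0\mathrm{A}^{1/3}`$ with $`r_0`$ of about 1.2 fm:

```python
import math

A = 208            # Pb
SIGMA_NN = 4.0     # fm^2, i.e. ~40 mb inelastic NN cross section (rounded input)
R0 = 1.2           # fm, nuclear radius parameter (standard rounded value)

r_a = R0 * A ** (1.0 / 3.0)                                # nuclear radius
n_glauber = A**2 * SIGMA_NN / (math.pi * (2 * r_a) ** 2)   # middle member of eq. (2)
n_shorthand = A ** (4.0 / 3.0) / 4.0                       # last member of eq. (2)
# n_glauber ~ 272 and n_shorthand ~ 308: the A^{4/3}/4 rule reproduces
# the geometric estimate to within ~12 %
```

The shorthand $`\mathrm{A}^{4/3}/4`$ is thus a good mnemonic for the minimum bias number of nucleon-nucleon collisions in a symmetric A-A system.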
If all strings had the same plateau height (i.e. the same value of $`N(0)`$), $`dN^{\mathrm{AA}}/dy|_{y=0}`$ would increase like A<sup>4/3</sup>. However, at present energies the plateau height of the $`q_s\overline{q}_s`$ strings is much smaller than that of the $`(qq)_vq_v`$ ones, and the first term in (1) dominates. One obtains in this way the result of the Wounded Nucleon Model (WNM) . At SPS energies only for central collisions some departure from the law
$$\frac{dN^{\mathrm{AA}}}{dy}\propto \overline{n}_\mathrm{A}$$
(4)
is expected, and indeed has been seen in the experimental data .
At higher energies the contribution of the sea strings becomes increasingly important, not only because their plateau height gets higher but also due to the need to introduce multistring configurations in each nucleon-nucleon collision. If the average number of strings in each nucleon-nucleon collision is $`2\overline{k}`$ (this number can be computed in the generalized eikonal model), the total number of strings is $`2\overline{k}\overline{n}`$ and (1) is changed into
$`{\displaystyle \frac{dN^{\mathrm{AA}}}{dy}}`$ $`=`$ $`\overline{n}_\mathrm{A}\left[N^{(qq)_v^{\mathrm{A}_p}q_v^{\mathrm{A}_t}}(y)+N^{q_v^{\mathrm{A}_p}(qq)_v^{\mathrm{A}_t}}(y)+(2\overline{k}-2)N^{q_s\overline{q}_s}(y)\right]`$ (5)
$`+`$ $`2\overline{k}(\overline{n}-\overline{n}_\mathrm{A})N^{q_s\overline{q}_s}(y).`$
The hadronic spectra of the individual strings $`N(y)`$ are obtained from a convolution of momentum distribution functions and fragmentation functions . Both functions can be determined to a large extent from known Regge trajectories.
For RHIC and LHC energies $`\overline{k}\simeq 2`$ and 3 respectively. Using these values in (5) one obtains :
$`{\displaystyle \frac{dN^{\mathrm{SS}}}{dy}}|_{y=0}=170,`$ $`{\displaystyle \frac{dN^{\mathrm{PbPb}}}{dy}}|_{y=0}=1890\text{ at }\sqrt{s}=200\text{ GeV per nucleon},`$
$`{\displaystyle \frac{dN^{\mathrm{SS}}}{dy}}|_{y=0}=500,`$ $`{\displaystyle \frac{dN^{\mathrm{PbPb}}}{dy}}|_{y=0}=7900\text{ at }\sqrt{s}=7\text{ TeV per nucleon},`$ (6)
for charged particles in central ($`\overline{n}_\mathrm{A}>28`$ in S-S and 200 in Pb-Pb, corresponding to $`b\simeq 0`$) A-A collisions (in Ref. a value of 8500 for Pb-Pb, $`b<3`$ fm, at $`\sqrt{s}=6`$ TeV per nucleon is given).
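The structure of (5) can be explored with a small sketch. The plateau heights and the Glauber numbers below are illustrative guesses, not the DPM fits; the point is only how the valence and sea terms combine and grow with $`\overline{k}`$:

```python
def dn_dy0(n_a, n_coll, k_bar, n_val=2.0, n_sea=1.0):
    """Central rapidity density from eq. (5), before energy-momentum conservation.
    n_a: wounded nucleons of one nucleus; n_coll: NN collisions;
    k_bar: soft string pairs per collision; n_val, n_sea: plateau heights of
    valence and sea strings (the numeric defaults are illustrative guesses,
    not the DPM fits)."""
    valence_part = n_a * (2 * n_val + (2 * k_bar - 2) * n_sea)
    sea_part = 2 * k_bar * (n_coll - n_a) * n_sea
    return valence_part + sea_part

rhic = dn_dy0(n_a=208, n_coll=1200, k_bar=2)   # RHIC-like, k_bar ~ 2
lhc = dn_dy0(n_a=208, n_coll=1200, k_bar=3)    # LHC-like, k_bar ~ 3
```

As discussed later for DPMJET, the unconstrained sum largely overshoots the Monte Carlo results, since energy-momentum conservation suppresses many of the ($`\overline{n}`$, $`\overline{n}_\mathrm{A}`$, $`\overline{k}`$) configurations.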
In these results no semihard collisions were taken into account. The inclusion of this kind of collisions cannot modify significantly the numbers in (6), since the total number of strings is constrained by unitarity. The fact that some of the $`q_s\overline{q}_s`$ strings can be the result of a semihard gluon-gluon interaction will affect the $`p_{\perp }`$ distribution of the produced particles. However, average multiplicities are practically unchanged if one neglects changes from energy-momentum conservation due to the larger $`p_{\perp }`$ in the semihard contribution.
Another effect not taken into account in these first estimates, done in 1991, is shadowing corrections, which can be important at RHIC and LHC energies. The physical origin of shadowing corrections<sup>7</sup><sup>7</sup>7For a discussion on the relation between unitarity, parton saturation and shadowing see for example . can be traced back to the difference between the space-time picture of the interaction in the Glauber model and in Glauber-Gribov field theory . In Glauber we have successive collisions of the incident hadron to explain multiple scattering in hadron-nucleus interactions, while in Gribov theory simultaneous collisions of different projectile constituents with nucleons in the target nuclei are considered. Nevertheless, the h-A scattering amplitude can be written as a sum of multiple scattering diagrams with elastic intermediate states, which have the same expressions in both cases. In addition to these diagrams there are other ones which contain, as intermediate states, all possible diffractive excitations of the projectile hadron, whose influence at SPS energies is small. The size of the high mass excitations of the initial hadron is controlled by the triple Pomeron coupling. The value of this coupling, determined from soft diffraction experimental data, allows one to describe hard diffraction measured at the Hadron-Electron Ring Accelerator (HERA) at DESY and also the size of the shadowing effects in the nuclear structure functions at small $`x`$ . These considerations imply a reduction of particle densities at midrapidity of a factor 2 at RHIC and 3 at the LHC .
This shadowing can be alternatively seen as a way of introducing the interaction among strings, see next Section. In (1) and (5) it is assumed that strings fragment independently of each other. As the number of strings grows with the energy of the collision, with the size of the projectile or the target and with the degree of centrality of the collision, interaction of strings is expected at very high energies in central heavy ion collisions. This approach is equivalent to taking into account the triple Pomeron coupling, whose effects on $`dN/dy`$ are very small at SPS energies and become large at RHIC and LHC.
In order to include the hard part in the DPM, the eikonal, depending on impact parameter $`b`$ and energy, is divided into a sum of soft plus hard pieces ,
$$\chi (b^2,s)=\chi _s(b^2,s)+\chi _h(b^2,s),$$
(7)
normalized to the corresponding elementary cross sections,
$$\int d^2b2\chi _i(b^2,s)=\sigma _i^0,i=s,h;$$
(8)
in terms of the eikonal, the inelastic cross section for the collision is
$$\sigma _{in}=\int d^2b\left[1-e^{-2\chi (b^2,s)}\right].$$
(9)
The soft eikonal is parametrized as
$$\chi _s(b^2,s)=\frac{\sigma _s^0}{8\pi \left[c+\alpha ^{\prime }\mathrm{log}(s/s_0)\right]}\mathrm{exp}\left(-\frac{b^2}{4\left[c+\alpha ^{\prime }\mathrm{log}(s/s_0)\right]}\right)$$
(10)
and the hard one as
$$\chi _h(b^2,s)=\frac{\sigma _h^0}{8\pi d}\mathrm{exp}\left(-\frac{b^2}{4d}\right).$$
(11)
The soft input is a soft Pomeron with a linear trajectory, $`\alpha _s(t)=1+\mathrm{\Delta }_s+\alpha ^{\prime }t`$,
$$\sigma _s^0=g^2s^{\mathrm{\Delta }_s},$$
(12)
and the hard cross section $`\sigma _h^0`$ is calculated from PQCD using a lower $`p_{\perp }`$ cut-off and conventional structure functions. Unitarity of the cross section is explicit in (9), which can be expanded as
$$\sigma _{in}=\int d^2b\underset{l_c+m_c\ge 1}{\sum }\sigma (l_c,m_c,b^2,s),$$
(13)
the sum running over $`l_c`$ soft elementary collisions and $`m_c`$ hard ones.
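The normalization (8) and the unitarization in (9) can be verified numerically for Gaussian eikonals of the form (10)-(11); the weights and slopes below are placeholders, not fitted parameters:

```python
import math

def chi_gauss(b2, sigma0, slope):
    """Gaussian eikonal profile of total weight sigma0, as in eqs. (10)-(11)."""
    return sigma0 / (8 * math.pi * slope) * math.exp(-b2 / (4 * slope))

def radial_integral(f, b_max=20.0, n=4000):
    """Midpoint-rule integral of f(b^2) over d^2b = 2*pi*b db."""
    db = b_max / n
    return sum(2 * math.pi * ((i + 0.5) * db) * f(((i + 0.5) * db) ** 2) * db
               for i in range(n))

# Placeholder inputs in fm^2 (not fitted values): soft/hard weights and slopes
S_S, D_S, S_H, D_H = 5.0, 0.5, 8.0, 0.5

norm_soft = radial_integral(lambda b2: 2 * chi_gauss(b2, S_S, D_S))  # eq. (8): -> 5.0
sigma_in = radial_integral(
    lambda b2: 1 - math.exp(-2 * (chi_gauss(b2, S_S, D_S) + chi_gauss(b2, S_H, D_H))))
# sigma_in ~ 8.5 < S_S + S_H = 13: the eikonal exponentiation of eq. (9)
# keeps the inelastic cross section below the sum of the elementary inputs
```

This is the unitarity constraint invoked in the text: the total number of elementary collisions can grow, but $`\sigma _{in}`$ saturates towards the black-disk limit instead of growing with the sum of the elementary cross sections.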
DPMJET is a Monte Carlo code for sampling hadron-hadron, hadron-nucleus, nucleus-nucleus, lepton-hadron and lepton-nucleus collisions at accelerator and cosmic ray energies . It uses the DPM for hadronic and nuclear interactions, the hard part being simulated using PYTHIA and, in its latest version DPMJET-II.5 , one of the most recent sets of parton distribution functions, GRV-LO-98 . The code includes intranuclear cascade processes of the created secondaries with formation time considerations, and also nuclear evaporation and fragmentation of the residual nucleus.
In the first versions of the code, in addition to diagrams where the valence diquarks at the end of one string fragment into hadrons preserving the diquark, diquark breaking was allowed. This is the so-called popcorn mechanism, see Fig. 4. However the mechanism is not enough to explain the large baryon stopping observed in A-B collisions. For this reason in the DPM new diagrams for diquark breaking, like the one in Fig. 5, have been proposed and discussed. In both Figs. 4 and 5 the dashed line is the string junction: at large $`N_c`$ a baryon can be pictured as made out of three valence quarks together with three strings which join in the string junction<sup>8</sup><sup>8</sup>8Using some supergravity solution and the recently conjectured duality between gauge and string theory the large $`N_c`$ baryon wave function has been constructed from $`N_c`$ strings connected via a junction . . These diagrams, included in DPMJET-II.5, shift the baryon spectrum to the central rapidity region and also produce an enhancement of strange baryons and antibaryons.
In the code the presence of diquarks and antidiquarks at sea string ends is also included. This increases baryon and antibaryon rapidity densities and, due to energy-momentum conservation, reduces that of pions.
The results of DPMJET-II.5 for charged particles in central Pb-Pb collisions at RHIC (3% most central events) and LHC (4% most central events) are
$$\frac{dN}{dy}|_{y=0}^{\mathrm{RHIC}}=1280,\frac{dN}{dy}|_{y=0}^{\mathrm{LHC}}=2800.$$
(14)
These values agree with the ones computed in the DPM , see above. The previous version of the code gives a higher value at LHC, $`dN_{ch}/d\eta |_{\eta =0}=3700`$ (although for $`\sqrt{s}=6`$ TeV per nucleon and $`b\le 3`$ fm). This reduction is due to the inclusion of new diagrams and to the energy-momentum conservation consequences of the inclusion of $`(qq)_s(\overline{q}\overline{q})_s`$ strings. Notice that the value obtained in the code is much smaller than that obtained in the DPM using (5). This fact is essentially due to energy-momentum conservation, which prevents some of the ($`\overline{n}`$,$`\overline{n}_\mathrm{A}`$,$`\overline{k}`$) configurations from taking place.
## 3 The String Fusion Model
The SFM is based on the QGSM, a model which is quite similar to the DPM with only minor differences. The main ingredient added in the SFM is the fusion of strings . The basic idea is that strings fuse as soon as their transverse positions come within a certain interaction area, of the order of the string proper transverse dimension as dictated by its mean $`p_{\perp }`$. In a Monte Carlo approach, such a picture can be realized by assuming that strings fuse as soon as the partons which act as their sources have their transverse positions close enough. In this language the fusion probability is determined by the parton transverse dimension, that is, by the parton-parton cross section. Energy conservation can be taken into account by distributing the available energy among these active partons, as has always been done in string models . Then the emerging strings occupy different intervals in rapidity space, determined by the energy-momentum of their sources. The fusion of strings may only take place when their rapidity intervals overlap. In particular, for two pairs of partons from the projectile and target with rapidities $`y_1`$, $`y_2`$ and $`y_1^{\prime }`$, $`y_2^{\prime }`$ respectively, the two corresponding strings fuse in the interval $`[\mathrm{max}\{y_1^{\prime },y_2^{\prime }\},\mathrm{min}\{y_1,y_2\}]`$. If this interval becomes small, the resulting object will have a total energy of the order of a typical hadron mass and, as with ordinary strings, is no longer a string but rather an observed hadron. The exact value of the minimal string energy, and thus of its minimal rapidity length, is taken the same as for ordinary strings.
The color and flavor properties of the formed strings follow from the properties of their ancestor strings. The fusion of several quark-antiquark $`q\overline{q}`$ strings produces a $`Q\overline{Q}`$ complex with color $`Q`$ (quadratic Casimir operator of the representation $`Q^2`$), which is determined by the SU(3) color composition laws. For example, the fusion of two $`q\overline{q}`$ triplet strings produces a $`[\overline{3}]`$ string (that is, a diquark-antiquark string) with probability 1/3 and a $`[6]`$ string with probability 2/3 ($`[3][3]=[6][\overline{3}]`$). On the other hand, if two triplet strings with opposite color flux directions fuse (a quark $`[3]`$ state fuses with a $`[\overline{3}]`$ antiquark state), either colorless states at the end of the new string or a $`[8]`$ string are formed with probabilities 1/9 and 8/9 respectively ($`[3][\overline{3}]=[1][8]`$). The flavor of the fused string ends is evidently composed of the flavor of the partons sitting there. As a result of string fusion, we thus obtain strings with arbitrarily large color and differently flavored ends, in accordance with the probability to create the color $`Q`$ from several (anti)quarks. Crude characteristics of hadron interactions depend only on the fact that the total number of strings of whatever color in a given transverse area becomes limited because of their fusion. In other words, string density cannot grow infinitely but is bounded from above . More detailed properties of hadron spectra require knowledge of a particular manner in which the new fused strings decay into hadrons.
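As a minimal sketch (not the actual SFM code), the SU(3) color composition rules above can be sampled with the quoted probabilities:

```python
import random

# Decomposition probabilities quoted in the text:
# parallel color fluxes:   [3] x [3]    -> [6] (p = 2/3) or [3bar] (p = 1/3)
# antiparallel fluxes:     [3] x [3bar] -> [8] (p = 8/9) or [1]    (p = 1/9)
FUSION_TABLE = {
    "parallel":     [("[6]", 2 / 3), ("[3bar]", 1 / 3)],
    "antiparallel": [("[8]", 8 / 9), ("[1]", 1 / 9)],
}

def fuse_pair(flux, rng=random):
    """Sample the color state of two fused triplet strings."""
    r = rng.random()
    acc = 0.0
    for state, p in FUSION_TABLE[flux]:
        acc += p
        if r < acc:
            return state
    return FUSION_TABLE[flux][-1][0]

# Frequencies over many fusions should approach the SU(3) weights.
random.seed(1)
n = 200_000
frac6 = sum(fuse_pair("parallel") == "[6]" for _ in range(n)) / n
frac8 = sum(fuse_pair("antiparallel") == "[8]" for _ in range(n)) / n
print(round(frac6, 2), round(frac8, 2))  # close to 0.67 and 0.89
```
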
In all color string models, it is assumed that the homogeneous color field corresponding to the strings creates pairs of colored partons, which neutralize this field and provide for its subsequent decay. The basic formula which describes the probability of such a process is taken in the spirit of the famous Schwinger expression for the probability to create an electron-positron pair in a constant electromagnetic field . With a constant color field which originates from two opposite color charges $`\stackrel{}{Q}`$ and $`-\stackrel{}{Q}`$ (8-vectors in SU(3)), the probability rate to create a pair of partons with color charges $`\stackrel{}{C}`$ and $`-\stackrel{}{C}`$, flavor $`f`$ and mass $`M_f`$ per unit string length is assumed to be given by
$$\frac{d\omega (\stackrel{}{Q},\stackrel{}{C})}{d^2p_{}}A_t(kQC)^2\mathrm{exp}\left(\frac{M_{}^2}{kQC}\right).$$
(15)
The parameter $`A_t`$ has the meaning of the string transverse area and $`M_{}`$ is the transverse mass. $`k`$ is proportional to the string tension $`\kappa `$,
$$\kappa =\frac{\pi kQ^2}{2}.$$
(16)
In , strings of high color, denoted as color ropes, break as a result of successive production of $`q\overline{q}`$ pairs, which gradually neutralize the color flux of the string until it decays. In SFM, however, the process considered is that of creation of a pair of parton complexes with color $`\stackrel{}{Q}`$ equal to that of the ends of the string. This is the main contribution to the breaking of the string for low values of $`Q`$ . Since, in the Monte Carlo code, fusion of strings is taken into account in an effective way and only fusion of two strings is allowed, the mechanism of string breaking for high color strings is the one just mentioned. It is also assumed that $`[3]`$, $`[\overline{3}]`$, $`[6]`$ and $`[8]`$ strings have the same transverse area<sup>9</sup><sup>9</sup>9This assumption is a very strong one; other possibilities will be discussed in Section 8 in relation to percolation of strings . . The string tension is proportional to $`Q^2`$,
$$Q_{[3]}^2=4/3=Q_{[\overline{3}]}^2,Q_{[8]}^2=3,Q_{[6]}^2=10/3.$$
(17)
So, approximately $`\kappa _{[8]}\simeq \kappa _{[6]}\simeq 2.5\kappa _{[3]}=2.5\kappa _{[\overline{3}]}`$.
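The flavor dependence of (15) can be illustrated numerically. In this sketch the created pair is taken with color $`C=Q`$, as in the breaking mechanism described above, while the quark masses and the triplet string tension are illustrative assumptions; the point is only to show how the larger tension (17) of a fused octet string relaxes the strangeness suppression relative to a triplet string:

```python
import math

# Illustrative sketch of the Schwinger-type suppression in eq. (15):
# rate ∝ exp(-M_perp^2 / (k Q C)), with k Q^2 = 2 kappa / pi from eq. (16)
# and C = Q. The numbers below (quark masses, triplet tension) are
# illustrative assumptions, not values fixed by the text.
KAPPA_TRIPLET = 0.2   # GeV^2, roughly 1 GeV/fm triplet tension (assumed)
M_S, M_U = 0.45, 0.0  # GeV, effective strange / light masses (assumed)

def strangeness_suppression(q2, q2_triplet=4 / 3):
    """s-sbar over u-ubar production rate for a string with Casimir Q^2 = q2."""
    kqc = (2 * KAPPA_TRIPLET / math.pi) * (q2 / q2_triplet)  # k Q C with C = Q
    return math.exp(-(M_S**2 - M_U**2) / kqc)

lam_3 = strangeness_suppression(4 / 3)  # ordinary triplet string
lam_8 = strangeness_suppression(3.0)    # fused octet string, eq. (17)
print(f"triplet: {lam_3:.2f}  octet: {lam_8:.2f}")  # octet > triplet
```
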
As can be inferred from the above, fusion of strings leads to an enhancement of baryon and antibaryon production, due to the possibility of having $`(qq)`$ and $`(\overline{q}\overline{q})`$ at the ends of the fused strings. In DPM a similar mechanism is introduced by considering diquarks in the sea, as mentioned in the previous Section. In addition to this mechanism there is another source of baryon enhancement: the larger tension of fused strings (17) which, through (16) and (15), implies a more efficient production of heavy quarks and diquarks, at higher $`p_{}`$. Therefore heavy flavor enhancement and some increase of transverse momentum are also expected.
Another important consequence of string fusion is the possibility of producing particles in collisions involving nuclei outside the nucleon-nucleon kinematical region, the so-called cumulative effect (part of this effect is usually ascribed to the Fermi motion of nucleons inside nuclei). In fact the resulting fused string has an energy-momentum corresponding to the sum of the energy-momenta of its ancestor strings, which can be larger than the energy-momentum available in an isolated nucleon-nucleon collision .
In the SFM code, the nuclear parton wave function is taken as a convolution of the parton distribution in a nucleon with the nucleon distribution in a nucleus. In this way, hadrons and nuclei are treated on the same footing, unlike in DPMJET. Also, in a previous version of SFM most of the computations were done at SPS energies and no hard part was considered. This part is now introduced in the code in a standard way and modifies the central rapidity region at energies higher than those of SPS.
The probability of fusion of two strings is controlled by the parton-parton cross section,
$$\sigma _p=2\pi r^2.$$
(18)
Its numerical value is fixed to $`\sigma _p\simeq 8`$ mb, in order to reproduce the $`\overline{\mathrm{\Lambda }}`$ enhancement seen in central S-S and S-Ag collisions at SPS energies . This value, which corresponds to $`r\simeq 0.36`$ fm, has been obtained implementing in the code fusion of only two strings, and therefore has to be considered as an effective one. The actual transverse size of a string should be smaller, a more realistic value being $`r\simeq 0.2÷0.25`$ fm , which agrees with other considerations .
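The quoted effective radius follows directly from inverting (18); a quick numerical check, using 1 mb = 0.1 fm²:

```python
import math

# Back out the effective parton radius from sigma_p = 2*pi*r^2, eq. (18).
SIGMA_P_MB = 8.0   # mb, fitted to the Lambda-bar enhancement at SPS
MB_TO_FM2 = 0.1    # 1 mb = 0.1 fm^2

sigma_fm2 = SIGMA_P_MB * MB_TO_FM2
r = math.sqrt(sigma_fm2 / (2 * math.pi))
print(f"r = {r:.2f} fm")  # ~0.36 fm, as quoted in the text
```
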
For the purpose of this review, the most important consequence of fusion of strings is that it suppresses total multiplicities, reducing the number of pions in the central rapidity region, although the rapidity distribution becomes larger at the extremes of the fragmentation regions. In the predictions made with the previous version of the model , no hard part was included and the charged particle densities at midrapidity for central ($`b=0`$) Au-Au collisions were
$`{\displaystyle \frac{dN}{dy}}|_{y=0}`$ $`=`$ $`1000\text{ at }\sqrt{s}=200\text{ GeV per nucleon},`$
$`{\displaystyle \frac{dN}{dy}}|_{y=0}`$ $`=`$ $`1900\text{ at }\sqrt{s}=6.3\text{ TeV per nucleon}.`$ (19)
The corresponding values without considering fusion of strings were 1850 and 4000 respectively. A strong suppression of the central density is produced (note the agreement with the values from DPMJET). Including the hard part , the charged particle densities corresponding to the 5% most central events<sup>10</sup><sup>10</sup>10In the model, this translates into $`b\leq 3.2`$ fm for Au-Au at RHIC and $`b\leq 3.3`$ fm for Pb-Pb at LHC. are
$`{\displaystyle \frac{dN^{\mathrm{AuAu}}}{dy}}|_{y=0}`$ $`=`$ $`910\text{ at }\sqrt{s}=200\text{ GeV per nucleon},`$
$`{\displaystyle \frac{dN^{\mathrm{PbPb}}}{dy}}|_{y=0}`$ $`=`$ $`3140\text{ at }\sqrt{s}=5.5\text{ TeV per nucleon},`$ (20)
and now the suppression due to string fusion is smaller (the corresponding values without fusion are 1300 and 3690 respectively). The reason for this is that the strings coming from hard scatterings do not fuse in the code. At LHC a large proportion of the strings are hard ones and therefore the relative size of the suppression is smaller. The hard strings have a size $`\sim 1/p_{}`$ and should indeed interact and fuse, although with smaller probability than the soft ones. Effects of overlapping of strings will be further discussed in Section 8.
## 4 The Relativistic Quantum Molecular Dynamics Model
RQMD is a semiclassical microscopic approach which combines classical propagation with stochastic interactions. Strings and resonances can be excited in elementary collisions, their fragmentation and decay leading to the production of particles. The nature of the active degrees of freedom in RQMD depends on the relevant length and time scales of the processes considered. In low energy collisions (around 1 GeV per nucleon in the center of mass) RQMD reduces to solving transport equations for a system of nucleons, other hadrons and eventually resonances interacting in binary collisions and via mean fields. At large beam energies ($`>10`$ GeV per nucleon in the center of mass) the description of a projectile hadron interacting in a medium (a cold nucleus) as a sequence of separated hadron or resonance collisions breaks down. A multiple collision series is formulated on the partonic level, following the Glauber-Gribov picture. In RQMD these multiple collisions correspond to strings formed between partons of the projectile and target, including sea quarks and antiquarks. The string excitation law $`dP\propto dx^+/x^+`$ is the same used in FRITIOF . The decay of elementary color strings is done using JETSET . Rescattering is included: four classes of binary interactions, BB, BM, MM and $`\overline{\mathrm{B}}`$B (B denoting baryon, M denoting meson) are considered.
One of the main ingredients of RQMD is the inclusion of string interactions by means of the formation of color ropes, see the previous Section, when strings overlap. These ropes are chromoelectric flux tubes whose sources are charge states in representations of color SU(3) with dimension higher than the triplet one. They are equivalent to the fused strings of SFM. As already mentioned, as a simplification only fusion of two strings is considered in SFM, as an effective way to take string interaction into account. In RQMD all possibilities are considered. The breaking of these higher color strings proceeds through successive production of $`q\overline{q}`$ pairs due to the Schwinger mechanism.
As in the case of SFM, the introduction of color ropes in RQMD leads to heavy flavor and baryon and antibaryon enhancement. In version RQMD 2.3 the model reproduces the SPS rapidity distributions of h<sup>-</sup>, K<sup>0</sup>, $`\mathrm{\Lambda }`$, $`\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^{}`$ and $`\overline{\mathrm{\Xi }}^+`$. It slightly underestimates the yields of $`\mathrm{\Omega }^{}`$ and $`\overline{\mathrm{\Omega }}^+`$ (by less than a factor of 2). It is also able to reproduce the $`m_{}`$ spectra of all these particles. Let us mention that independent string models are not able to reproduce these slopes.
The formation of color ropes leads to a strong suppression of central rapidity distributions. The prediction of RQMD for central ($`b=3`$ fm) Pb-Pb collisions at RHIC is $`dN/dy|_{y=0}\simeq 700`$ . This number is lower than the SFM value, 910. The reason probably lies in the strong fusion probability used in RQMD. The effect of this strong string interaction on some observables (like antibaryon enhancement) is compensated by other processes (a large $`\overline{\mathrm{B}}`$B annihilation).
## 5 The Heavy-Ion Jet Interaction Generator (HIJING)
In HIJING the soft contribution is modeled by diquark-quark strings with gluon kinks induced by soft gluon radiation, in a way very similar to the FRITIOF model . Since this model treats minijet physics explicitly through PQCD, the transverse momentum in string kinks due to soft processes is limited from above by a minijet scale $`p_0=2`$ GeV/c. Gluon radiation is extended to the hard part of high $`p_{}`$ which, together with the use of momentum distribution functions for partons similar to those of DPM, constitutes a difference from FRITIOF. Strings decay independently by means of the JETSET routines. In addition to the low $`p_{}<p_0`$ gluon kinks, HIJING includes an extra low $`p_{}`$ transfer between the constituent quarks and diquarks at the string ends. This extra $`p_{}`$ is chosen to ensure a smooth extrapolation of the $`p_{}`$ distributions from the soft to the hard regime.
Multiple minijet production with initial and final state radiation is included along the lines of the PYTHIA model . First, the cross section for hard parton scattering $`\sigma _{jet}`$ is computed in PQCD at leading order (LO), using a K-factor $`\simeq 2`$ to simulate higher order corrections. The eikonal formalism, see Section 2, is employed to calculate the number of minijets per inelastic nucleon-nucleon collision. For A-A collisions at impact parameter $`b`$ the total number of jets is given by
$$N_{jet}^{\mathrm{AA}}(b)=\frac{\mathrm{A}^2T_{\mathrm{AA}}(b)}{\sigma _{\mathrm{AA}}(b)}\sigma _{jet},$$
(21)
with
$$T_{\mathrm{AA}}(b)=\int d^2b^{}T_\mathrm{A}(b-b^{})T_\mathrm{A}(b^{}),$$
(22)
$`T_\mathrm{A}(b)`$ being the nuclear profile function normalized to 1 and $`\sigma _{\mathrm{AA}}(b)\equiv 1-\mathrm{exp}[-\sigma _{\mathrm{NN}}A^2T_{\mathrm{AA}}(b)]`$ the probability of an inelastic A-A interaction at impact parameter $`b`$. For central collisions $`b=0`$, $`\sigma _{\mathrm{AA}}(b=0)\simeq 1`$ and
$$N_{jet}^{\mathrm{AA}}(b=0)\simeq \frac{\mathrm{A}^2}{\pi R_\mathrm{A}^2}\sigma _{jet}\propto \mathrm{A}^{4/3}.$$
(23)
Therefore, at high energies and for central nucleus-nucleus collisions there will be many minijets.
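The A<sup>4/3</sup> scaling in (23) is independent of the value of $`\sigma _{jet}`$; a short numerical check (the jet cross section and nuclear radius parameter below are placeholder values):

```python
import math

# Central-collision minijet number, eq. (23): N_jet ≈ A^2 sigma_jet / (pi R_A^2),
# with R_A = r0 * A^(1/3), hence N_jet ∝ A^(4/3).
# SIGMA_JET is an illustrative placeholder, not a PQCD calculation.
R0 = 1.2         # fm, nuclear radius parameter (standard assumption)
SIGMA_JET = 1.0  # fm^2, placeholder minijet cross section

def n_jet_central(A):
    RA = R0 * A ** (1 / 3)
    return A**2 * SIGMA_JET / (math.pi * RA**2)

# Doubling A raises N_jet by 2^(4/3) ≈ 2.52, regardless of SIGMA_JET and R0.
ratio = n_jet_central(400) / n_jet_central(200)
print(f"{ratio:.3f}")  # ≈ 2.520
```
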
In the model, jet quenching is included to enable the study of the dependence of moderate and high $`p_{}`$ observables on an assumed energy loss per unit length $`dE/dx`$ of high energy partons traversing the dense matter produced in the collision. The effect of including jet quenching is a moderate enhancement of particle production in the central rapidity region and a reduction of the yield in the fragmentation regions. Furthermore, in the latest version of the model the mechanism of string junction migration explained in Section 2 is included, in order to shift baryons from the fragmentation to the central rapidity regions.
The results for charged densities at midrapidity in central ($`b<3`$ fm) Au-Au collisions at $`\sqrt{s}=200`$ GeV per nucleon are shown in Fig. 6. The different curves refer to different versions of the model, with and without quenching and shadowing of the nucleon structure functions in the nucleus . For LHC, HIJING predictions lie in the range $`5000÷7500`$, depending on the structure functions used and on whether quenching is included.
## 6 Perturbative Quantum Chromodynamics and Hydrodynamical models
It has been argued that the initial state (the initial distribution of partons) in a high energy heavy ion collision could be computed using PQCD. Several groups have developed models along this line. Concretely, Eskola et al. have computed charged densities and transverse energies at midrapidity, using PQCD at a scale taken equal to a saturation scale, the scale at which parton distributions stop increasing at small $`x`$.
In ultrarelativistic heavy ion collisions the number of produced gluons and quarks with $`p_{}`$ greater than some cut-off $`p_0`$, $`N_{\mathrm{AA}}(b,p_0,\sqrt{s})`$, increases when $`p_0`$ decreases, when the size of the nuclei increases or $`b`$ decreases, see (21) and (23), and when $`\sqrt{s}`$ increases due to the small $`x`$ enhancement of parton distribution functions. Shadowing of nucleon structure functions in nuclei decreases $`N_{\mathrm{AA}}(b,p_0,\sqrt{s})`$, while next-to-leading order (NLO) corrections increase it. At sufficiently large cut-off $`p_0\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$ the system of produced gluons is dilute and usual perturbation theory is applicable. However, at some transverse momentum $`p_0=p_{sat}`$ the gluon and quark phase space densities saturate and no further increase is expected. In this case one may conjecture that evaluating the number of charged particles $`N_{ch}`$ and the transverse energy $`E_T`$ using QCD formulae at this saturation scale $`p_{sat}`$ gives a good estimate of the total $`N_{ch}`$ and $`E_T`$ (partons with $`p_{}\gg p_{sat}`$ are rare, while partons with $`p_{}\ll p_{sat}`$ saturate and contribute little to the total $`E_T`$).
In , first $`N_{\mathrm{AA}}\left(b=0,p_0,\sqrt{s}\right)`$ for $`|y|<0.5`$ is computed using standard PQCD expressions at LO. Nuclear effects on parton distribution functions are implemented using the EKS98 parameterization of nuclear corrections. To simulate NLO contributions, a K-factor $`\mathrm{K}=2`$ is used. The scale in the PQCD calculation is fixed by requiring that at saturation the $`N_{\mathrm{AA}}(b=0,p_{sat},\sqrt{s})`$ partons, each with transverse area $`\pi /p_{sat}^2`$, fill the whole transverse area $`\pi R_\mathrm{A}^2`$,
$$N_{\mathrm{AA}}\left(b=0,p_{sat},\sqrt{s}\right)=p_{sat}^2R_\mathrm{A}^2.$$
(24)
In Fig. 7, $`N_{\mathrm{AA}}\left(b=0,p_0,\sqrt{s}\right)`$ is plotted for $`\mathrm{A}=208`$ as a function of $`p_0`$ at SPS, RHIC and LHC energies. The dashed curve is $`p_0^2R_\mathrm{A}^2`$. The intersection points give $`p_{sat}`$ at the corresponding energies. Of course, all this is only valid as long as $`p_0\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$ for perturbation theory to be justified, which is doubtful at SPS and RHIC (see the Figure; the saturation momenta are $`\simeq 0.5`$, $`1.4`$ and $`2.3`$ GeV/c at SPS, RHIC and LHC energies respectively).
The values of $`N_i=N_{\mathrm{AA}}\left(b=0,p_{sat},\sqrt{s}\right)`$ and $`p_{sat}`$ can be well fitted by the expressions
$`N_i`$ $`=`$ $`1.383\mathrm{A}^{0.922}(\sqrt{s})^{0.383},`$ (25)
$`p_{sat}`$ $`=`$ $`0.208\mathrm{A}^{0.128}(\sqrt{s})^{0.191}\mathrm{GeV}/\mathrm{c}.`$ (26)
The initial state computed in this way very nearly fulfills the kinetic thermalization condition for bosons, $`ϵ/n=2.7T`$ (the number of gluons is much larger than that of quarks), and there is also some justification to consider the further hydrodynamical expansion as locally thermal, i.e. entropy conserving. Thus initially the entropy is $`S_i=3.6N_i`$ (ideal system of bosons). For the final hadronic gas $`S_i=S_f\simeq 4N_f`$, so that $`N_f\simeq 0.9N_i`$, i.e. the number of hadrons in the final state is, up to 10 % corrections, equal to the number of initially produced gluons at the scale $`p_{sat}`$. The multiplicity prediction
$$N_{ch}=\frac{2}{3}0.9N_i$$
(27)
is directly obtained from (25) and plotted as a dashed line in Fig. 8. The values at RHIC and LHC for central ($`b=0`$) Pb-Pb collisions are 900 and 3100, not very different from those obtained by DPM, DPMJET and SFM on very different grounds<sup>11</sup><sup>11</sup>11Theoretical models based on a semiclassical treatment of gluon radiation by partons in the colliding nuclei give values compatible with these ones, see . .
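The quoted midrapidity multiplicities follow from evaluating the fits (25) and (27); a quick check for Pb (A = 208):

```python
# Charged multiplicity at midrapidity from the saturation fits (25)-(27):
# N_i = 1.383 A^0.922 (sqrt(s))^0.383 and N_ch = (2/3) * 0.9 * N_i.
def n_ch(A, sqrt_s_gev):
    n_i = 1.383 * A**0.922 * sqrt_s_gev**0.383
    return (2 / 3) * 0.9 * n_i

rhic = n_ch(208, 200.0)   # central Pb-Pb at RHIC energy
lhc = n_ch(208, 5500.0)   # central Pb-Pb at LHC energy
print(round(rhic), round(lhc))  # close to the ~900 and ~3100 quoted above
```
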
## 7 Other models
In this Section we comment briefly on some other models. The fact that these models are grouped together does not mean at all that they are less important or successful than those discussed above; it is simply shortage of space which prevents a longer study.
The VENUS model is an extension of DPM. The main difference is the inclusion of diagrams with two color exchanges: the first provides two $`(qq)_vq_v`$ strings, one of them being intermediate because a second color exchange breaks the diquark, giving a different $`(qq)_vq_v`$ string and a double string which consists of a forward moving quark linked to two backward moving quarks, see Fig. 9. These diagrams become increasingly important with a growing number of inelastic collisions, as in h-A or A-B, although they are of course also present in N-N collisions. They enhance stopping power, shifting the baryon spectrum towards midrapidity.
VENUS gives large values for central rapidity densities. At $`\sqrt{s}=6`$ TeV per nucleon for central ($`b\leq 3`$ fm) Pb-Pb collisions its result is $`dN_{ch}/d\eta |_{\eta =0}=8400`$ . The model has lately been extended to deal with $`\gamma ^{}`$-$`\gamma ^{}`$, $`\gamma ^{}`$-h, $`\nu `$-h, h-h, h-A and A-B collisions in the same unified approach . A unique Pomeron describes both soft and hard interactions by means of the evolution of structure functions from properly chosen initial conditions. The new model , denoted NEXUS, has not yet given values for LHC. Preliminary predictions for central ($`b<2`$ fm) Au-Au collisions at RHIC are $`dN_{ch}/dy|_{y=0}\simeq 1100`$ . In this model particles and resonances produced in string fragmentation are allowed to rescatter and, if more than two of them are close enough, are joined into a quark cluster which decays isotropically .
The LUCIAE event generator (Lund University and China Institute of Atomic Energy) is a version of the FRITIOF model where collective interactions among strings and rescattering of the produced particles are included. The collective interactions are incorporated following . LUCIAE has studied several observables at SPS in comparison with experimental data but, to our knowledge, has not worked out predictions for rapidity densities at RHIC and LHC. In any case, the inclusion of collective string effects produces very fast particles at the extremes of phase space, similar to the cumulative effect which occurs in the SFM. Also, simply by energy-momentum conservation, a suppression of particles in the central rapidity region should occur.
The Ultrarelativistic Quantum Molecular Dynamics (UrQMD) is a microscopic hadronic approach based on the covariant propagation of mesonic and baryonic degrees of freedom. It allows for the formation of strings and resonances, and for rescattering among them and among the produced particles. In this respect it is quite similar to RQMD. In the low energy region, i.e. $`\sqrt{s}\lesssim 2`$ GeV per nucleon, the inelastic cross sections are dominated by s-channel formation of resonances, which decay into particles isotropically in their local rest frame according to their lifetimes. A large variety of baryonic and mesonic states has been incorporated in the model. All corresponding antiparticles are included and treated on the same footing. At higher energies strings are considered. Much attention is paid in the model to the intermediate energy region between AGS, the Alternating Gradient Synchrotron at BNL ($`\sqrt{s}\simeq 5`$ GeV per nucleon), and SPS, in order to achieve a smooth transition between the low and high energy regimes. The prediction of the model for central ($`b\leq 3`$ fm) Au-Au collisions at RHIC is shown in Fig. 10. The central density of charged pions is $`\simeq 750`$, while that of all charged particles is $`\simeq 1100`$.
Another approach also using UrQMD is the model denoted VNI+UrQMD, where a combined microscopic partonic/hadronic transport scenario is introduced . The initial high density partonic phase of the heavy ion reaction is calculated in the framework of the parton cascade model VNI , using cross sections obtained from PQCD at LO (see for a discussion of the uncertainties introduced by NLO corrections and K-factors in VNI and other parton cascade models). The partonic state is then hadronized via a configuration space coalescence and cluster hadronization model, and used as the initial condition for a hadronic transport calculation using UrQMD. In Fig. 11 the time evolution of the parton and on-shell hadron rapidity densities for central ($`b\leq 1`$ fm) collisions at RHIC can be seen . From this curve, the charged particle rapidity density is $`\simeq 1000`$. When, instead of calculating the initial phase with a parton cascade approach, QGP formation is assumed, the plasma evolved hydrodynamically until hadronization, and UrQMD then used for hadronic transport (the model considers a first order phase transition and is denoted Hydro+UrQMD ), a smaller charged particle central rapidity density is obtained for central ($`b=0`$) Au-Au collisions at RHIC, $`\simeq 750`$.
Also using VNI, there is the model denoted VNI+HSD , where HSD stands for the Hadron String Dynamics model . This model involves quarks, diquarks, antiquarks, antidiquarks, strings and hadrons as degrees of freedom. The parton cascade model VNI is extended by the hadronic rescattering described by HSD. Its results for central ($`b\leq 2`$ fm) Au-Au collisions at RHIC are shown in Fig. 12, where the VNI and HSD predictions are the four plots at the top and those of VNI+HSD the two plots at the bottom. It is worth noting that VNI+HSD gives almost the same results as HSD, which already includes final state interactions. This fact, also observed in DPM and SFM (see Section 2), is based on unitarity, which controls the number of inelastic collisions independently of their soft or hard origin. The total number of charged particles at midrapidity is $`\simeq 1150`$.
Another RHIC prediction comes from a modification of the HIJING model to include a parton cascade model and final state interactions based on the ART model . In this model (HIJING+ZPC+ART) the central rapidity density of charged particles for central ($`b=0`$) collisions at RHIC is of the order of 1100.
The assumption that local thermodynamical equilibrium is attained by the system of two heavy ions colliding at high energies is a basic hypothesis of macroscopic statistical and thermodynamical models (see for a discussion of statistical equilibrium in heavy ion collisions). This idea goes back a long time . Following , the statistical model treats the system as a grand canonical ensemble with two free parameters, a temperature $`T`$ and a chemical potential $`\mu _B`$. Interactions of the produced particles are taken into account through an excluded volume correction, corresponding to a repulsion setting in for all hadrons at a radius of 0.3 fm (a hard core). Hadron yield ratios resulting from this model are in reasonable agreement with SPS central Pb-Pb data. Taking these results and looking at the expected phase boundary between the QGP and the hadron gas, the hadrochemical freeze-out points lie where one expects them, which suggests that hadron yields are frozen at the point when hadronization of the QGP is complete. This gives for RHIC a freeze-out temperature of 170 MeV, the same as found at SPS. The chemical potential is expected to be small; 10 MeV is used as an upper limit. Strangeness and $`I_3`$ conservation then require values of $`\mu _s=2.5`$ MeV and $`\mu _{I_3}=0.2`$ MeV. In order to predict absolute yields one has to estimate the volume per unit rapidity at the time when hadronization is complete. Starting from an initial temperature of $`T_i=500`$ MeV at a time $`\tau =0.2`$ fm/c and using a transverse expansion with $`\beta =0.16`$, this volume is 3600 fm<sup>3</sup> at the freeze-out temperature of 170 MeV. The number of charged pions per unit rapidity at $`y=0`$ is then 1260, that of charged kaons 194, that of protons 62 and that of antiprotons 56. Modifying the freeze-out temperature to 160 MeV results in a reduction of the hadron yields of about 10 %.
For central Pb-Pb collisions at the LHC, taking again $`T_f=170`$ MeV and $`\mu _B=10`$ MeV and performing similar calculations, the fireball volume per unit rapidity at chemical freeze-out is 14400 fm<sup>3</sup>, resulting in 5000 charged pions, 770 charged kaons, 250 protons and 220 antiprotons per unit rapidity. The total charged particle density is
$$\frac{dN}{dy}|_{y=0}=7560.$$
(28)
As at RHIC, small modifications of $`T_f`$ and $`\mu _B`$ lead to small changes in this prediction. Both at RHIC and LHC the centrality of these results should correspond roughly to that of the SPS experimental data which were fitted to extract the parameters used in the predictions. As different experiments use different centrality criteria, this is not fully determined, so let us take $`5÷10`$ % as an estimate.
The quark coalescence model assumes that a QGP will be produced at RHIC, which will expand and cool, hadronization proceeding via quark coalescence as described by the ALCOR model . In this nonlinear coalescence model, subprocesses are not independent but compete with each other. The coalescence equations relate the number of hadrons of a given type to the product of the numbers of the different quarks of which the hadron consists. The main predictions of the model relate different particle ratios; unfortunately the absolute values cannot be obtained in the model.
Finally, let us mention an extrapolation by the WA98 Collaboration from central pseudorapidity densities measured at SPS at different centralities. The predicted maximum charged particle pseudorapidity density for central Pb-Pb collisions is 1000 at RHIC and 2500 at LHC, for the $`10`$ % most central events.
## 8 Percolation
In many models, multiparticle production at high energies is described in terms of color strings stretched between the projectile and target, see the previous Sections. In principle, these strings fragment independently, the only correlation among them being energy-momentum conservation. However, with growing energy, centrality and/or size of the colliding particles, the number of strings grows and one expects that the hypothesis of independent fragmentation is no longer valid, interaction among strings becoming essential. For these reasons we have seen in Sections 3 and 4 different ways of taking such interaction into account. In particular, in SFM or RQMD the strings fuse or form color ropes, in such a way that the transverse size of the new string or rope is the same as that of the original strings. In this case it can be shown that there is no phase transition . However, other possibilities can be discussed. It could be the case that the new string has a transverse size corresponding to the sum of the sizes of the original strings; in this case, a first order phase transition occurs .
An alternative and natural way to the formation of a (non-thermal) QGP is percolation of strings , which in any case can be used as an estimate of the failure of the independent fragmentation hypothesis. This is a purely classical, geometrical mechanism. At a given energy and impact parameter $`b`$ in an A-B collision, there is an available transverse area. For simplicity, let us take A=B and $`b=0`$, so that this area is $`\pi R_\mathrm{A}^2`$. Inside it, the strings formed in the collision can be viewed as circles of radius $`r_0`$. Some of the circles may overlap and form clusters of strings. Above a critical density of strings percolation occurs, so that clusters of overlapping strings are formed across the whole collision area. Percolation gives rise to the formation of a collective state at a nuclear scale, which can be identified as QGP. The phenomenon of continuum percolation is well known and has been used to explain many different physical processes . The percolation threshold $`\eta _c`$ is related to the critical density of strings $`n_c`$ by
$`\eta _c`$ $`=`$ $`\pi r_0^2n_c,`$ (29)
$`n_c`$ $`=`$ $`{\displaystyle \frac{N_c}{\pi R_\mathrm{A}^2}}\mathrm{at}b=0,`$ (30)
where $`N_c`$ is the number of exchanged strings and $`r_0\simeq 0.2÷0.25`$ fm , see Section 3. $`\eta _c`$ has been computed using Monte Carlo simulation and direct connectedness expansions . The results lie in the range $`1.12÷1.18`$ using step functions for the nuclear profile function (i.e. strings homogeneously distributed over the whole available transverse area). The use of Woods-Saxon or Gaussian nuclear densities leads to higher values of $`\eta _c`$ , up to $`\simeq 1.5`$. The corresponding value of $`n_c`$ lies in the range $`6÷12`$ strings/fm<sup>2</sup>. In Table 1 we show the number of strings exchanged in different central ($`b=0`$) collisions together with their densities, as obtained in the SFM . It is seen that percolation could already occur for Pb-Pb at SPS. At RHIC and LHC, even collisions between much lighter nuclei could lead to the phase transition.
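The quoted range of critical densities follows from inverting (29) over the quoted spans of the threshold and of the string radius:

```python
import math

# Critical string density from eq. (29): n_c = eta_c / (pi * r0^2).
def n_critical(eta_c, r0_fm):
    return eta_c / (math.pi * r0_fm**2)

# Span of the quoted ranges eta_c ~ 1.12-1.5 and r0 ~ 0.2-0.25 fm:
lo = n_critical(1.12, 0.25)  # smallest threshold, largest string radius
hi = n_critical(1.5, 0.20)   # largest threshold, smallest string radius
print(f"{lo:.1f} - {hi:.1f} strings/fm^2")  # roughly the 6-12 range quoted
```
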
Notice that string percolation occurs in two dimensions. Percolation of hadrons was proposed long ago as a possible way to reach QGP. However, in that case percolation is three-dimensional and the critical density lies below even normal nuclear matter density. This is in agreement with lattice studies which show that the so-called energy radius of the hadron is about 0.2 fm. Therefore the color fields inside hadrons occupy only a few percent of the transverse area, $`(r_0/R_h)^2\simeq (1/5)^2`$. This also explains the relatively weak string-string interaction (for instance the triple Pomeron coupling of Glauber-Gribov theory used in DPM and DPMJET, see Section 2).
Percolation is a second order phase transition . The corresponding scaling law gives the behavior of the number of clusters of $`n`$ strings, $`\nu _n`$, in terms of $`\eta `$,
$$\nu _n=n^{-\tau }f\left(n^\sigma \left[\eta -\eta _c\right]\right),\qquad |\eta -\eta _c|\ll 1,\;n\gg 1,$$
(31)
where $`\tau =187/91`$ and $`\sigma =36/91`$. The fraction $`\varphi `$ of the total surface occupied by strings is determined by
$$\varphi =1-e^{-\eta }.$$
(32)
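Equation (32) can be checked with a quick Monte Carlo experiment that is not part of the original calculation: throw discs of radius $`r_0`$ uniformly into a periodic box at the density corresponding to a given $`\eta `$ and probe random points. All function names and parameter values below are illustrative.

```python
import math
import random

def covered_fraction(eta, r0=0.25, box=5.0, probes=10000, seed=7):
    """Monte Carlo estimate of the disc-covered area fraction phi.

    N discs of radius r0 are thrown uniformly into a periodic box x box
    area, with N chosen so that the density n = N / box**2 satisfies
    eta = n * pi * r0**2.  For Poisson-distributed discs the expected
    result is phi = 1 - exp(-eta), Eq. (32)."""
    rng = random.Random(seed)
    n_discs = int(round(eta * box * box / (math.pi * r0 * r0)))
    discs = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n_discs)]
    hits = 0
    for _ in range(probes):
        x, y = rng.uniform(0, box), rng.uniform(0, box)
        for cx, cy in discs:
            dx = min(abs(x - cx), box - abs(x - cx))  # periodic distance
            dy = min(abs(y - cy), box - abs(y - cy))
            if dx * dx + dy * dy < r0 * r0:
                hits += 1
                break
    return hits / probes
```

At the percolation threshold $`\eta _c\simeq 1.15`$ this gives a covered fraction of roughly $`0.68`$, i.e. about two thirds of the transverse area.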
It can be seen that the multiplicity $`\mu _n`$ due to a cluster of $`n`$ overlapping strings, compared to the multiplicity of one string $`\mu _1`$, is given by
$`\mu _n`$ $`=`$ $`n\mu _1F(\eta ),`$ (33)
$`F(\eta )`$ $`=`$ $`\sqrt{{\displaystyle \frac{1-e^{-\eta }}{\eta }}}.`$ (34)
From Table 1 and taking $`r_0=0.25`$ fm, the values of $`F(\eta )`$ for central ($`b=0`$) Pb-Pb collisions at RHIC and LHC are 0.59 and 0.46 respectively, quite close to the values of the reduction factor of multiplicities in DPM due to triple Pomeron couplings, 1/2 and 1/3 respectively, see Section 2. A naive calculation of central rapidity densities of charged particles can be done by multiplying the corresponding values obtained in the SFM without fusion of strings for Pb-Pb at $`b=0`$ by these reduction factors $`F(\eta )`$. The results for RHIC and LHC are 910 and 1980 respectively (380 at $`\sqrt{s}=19.4`$ GeV with $`F(\eta )=0.68`$) and clearly, from the way they were obtained, should be considered as lower bounds.
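The reduction factor of Eqs. (33)-(34) is straightforward to evaluate; a minimal sketch (function names are illustrative, not from the papers cited):

```python
import math

def F(eta):
    """Multiplicity reduction factor of Eq. (34); F -> 1 as eta -> 0."""
    if eta == 0.0:
        return 1.0  # limiting value for isolated strings
    return math.sqrt((1.0 - math.exp(-eta)) / eta)

def cluster_multiplicity(n_strings, mu_single, eta):
    """Multiplicity of a cluster of n overlapping strings, Eq. (33)."""
    return n_strings * mu_single * F(eta)
```

With the string densities of Table 1 and $`r_0=0.25`$ fm this reproduces the quoted $`F(\eta )\simeq 0.68`$, 0.59 and 0.46 for central Pb-Pb at SPS, RHIC and LHC; multiplying the unfused SFM densities by these factors gives the lower bounds above.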
Percolation, in addition to reducing central rapidity multiplicities and enhancing heavy flavor production (as string fusion does), also gives rise to other consequences in long range correlations and transverse momentum correlations , and J/$`\psi `$ suppression . Also, as the energy-momentum of the clusters is the sum of the energy-momentum of the original strings, a huge cumulative effect is expected.
The critical point of percolation is the fixed point of a scale transformation (renormalization group equation) which eliminates short range correlations, so that only long range ones survive. Close to that point, observables should depend only on $`\eta `$, and not on energy or nuclear size separately.
In passing, let us notice that a similar intrinsic scale has been proposed in the small $`x`$ physics domain, related to the saturation of structure functions or minijets, which was discussed in Section 6. This quantity is defined by
$$\mathrm{\Lambda }^2=\frac{N_{\mathrm{AA}}(p_{sat})}{\pi R_\mathrm{A}^2},$$
(35)
with $`p_{sat}`$ the transverse momentum at which saturation starts. When the number of partons $`N_{\mathrm{AA}}(p_{sat})`$, each one with a transverse size of the order $`\pi /p_{sat}^2`$, verifies $`\mathrm{\Lambda }^2\pi \simeq p_{sat}^2`$, partons cover the whole nuclear area. Physics should depend only on the value of $`\mathrm{\Lambda }^2`$. Furthermore, the effective action which describes small $`x`$ physics should become critical at some fixed point of some renormalization group, the correlation functions depending only on critical exponents determined by symmetry considerations and dimensionality. Comparing with percolation, indeed $`\eta /(\pi r_0^2)=N/(\pi R_\mathrm{A}^2)`$ is formally $`\mathrm{\Lambda }^2`$, with the exchange of $`N`$ soft strings by $`N_{\mathrm{AA}}(p_{sat})`$ partons. Let us indicate that, from the arguments exposed above, overlapping partons will only cover the whole transverse area $`\pi R_\mathrm{A}^2`$ asymptotically. According to (32), the transverse area $`S`$ covered by $`N_{\mathrm{AA}}(p_{sat})`$ partons of transverse size $`\pi /p_{sat}^2`$ is
$$S=\pi R_\mathrm{A}^2\left[1-\mathrm{exp}\left(-\mathrm{\Lambda }^2\pi r_0^2\right)\right].$$
(36)
## 9 Cosmic Ray Physics and heavy ion accelerators
Usually it is considered that the highest cosmic ray energies, say $`10^{15}÷10^{20}`$ eV, are much higher than energies reached at accelerators. With the advent of RHIC and LHC, this is not true any longer. Pb-Pb collisions at RHIC and LHC will reach total energies of $`10^{15}`$ and $`10^{18}`$ eV respectively. This means that, although no participant nuclei larger than Fe are expected, there will be collective physics to explore in Cosmic Ray Physics. In particular, changes in the multiplicity induce changes in the development of atmospheric showers. Unfortunately, atmospheric showers are dominated by forward particles, and it is in the fragmentation regions where models which are quite different in the central rapidity region are more similar. Nevertheless, collective effects like color rope formation will influence the fragmentation regions (for example, through the enhancement of the cumulative effect) and have observable effects on the development of the shower . Besides, PQCD effects may be of importance for the transverse broadening of the shower. It would be convenient to apply the different models for multiparticle production to simulations of cosmic ray atmospheric showers .
## 10 Conclusions
In Table 2 predictions of the different models for charged particle densities produced in central Au-Au or Pb-Pb collisions at RHIC and LHC are presented, together with the corresponding centrality criteria. Some of the predictions of the models are not available and their places have been left empty. In order to estimate the discrepancy between different predictions induced by different definitions of centrality, by the fact that some of the results are $`dN/dy|_{y=0}`$ and others $`dN/d\eta |_{\eta =0}`$, and by using Au-Au or Pb-Pb at RHIC or slightly different energies at LHC, in Table 3 we present results obtained in the SFM for different reactions. From them it can be concluded that results should be compared allowing for a $`20÷30`$ % discrepancy.
For completeness, we have included predictions from percolation and from the WNM . As discussed in Section 8, the former should be considered as lower bounds. The latter have been obtained computing in SFM $`dN_{ch}/dy|_{y=0}`$ for p-p, p-n and n-n collisions at the corresponding energies, making an isospin weighted average and multiplying by the number of wounded nucleons in a central collision, taking $`\overline{n}_\mathrm{A}=200`$ in Pb-Pb and $`\overline{n}_\mathrm{A}=190`$ in Au-Au (which correspond to $`b\simeq 0`$); the result that we get in this way for Pb-Pb at $`\sqrt{s}=17.3`$ GeV per nucleon (SPS) is 370. In the following we will not discuss these two quite naive predictions.
At RHIC energies all the predictions are in the range $`700÷1550`$ and most of them in $`1000÷1100`$. The lowest value corresponds to RQMD. The reason for that, as already commented at the end of Section 4, is the formation of color ropes with a large probability, which is required in the model to describe antibaryon enhancement at SPS because a rather large baryon-antibaryon annihilation cross section is used. In SFM, which considers a similar mechanism (string fusion), the charged density obtained is larger. The low value also obtained in Hydro+UrQMD is due to the initial QGP state and the first order transition assumed.
At LHC the differences among the predictions are larger, more than a factor 3. They lie in the range $`2500÷8500`$. Comparing Table 2 with Fig. 1, which summarizes the situation before 1996, it can be seen that nowadays predictions tend to gather around the lowest values; at that time only the SFM gave predictions below 4000.
Essentially, models based on parton shower evolution predict larger values. Also statistical models obtain very large charged densities. However, the model described in Section 6, which uses mainly partonic degrees of freedom, obtains a low value (3100). This is due to the saturation of minijet production, which plays the role of an upper cut-off in the number of minijets. This saturation is a consequence of unitarity .
Unitarity is a basic ingredient that controls the number of soft and hard elementary scatterings (which are no more independent one from each other) in models like DPM, DPMJET, SFM and RQMD. In addition to that, energy-momentum conservation reduces the possible number of scatterings. Finally, interaction among strings is another collective effect which reduces central pion densities. In DPM these interactions are taken into account by means of the triple Pomeron. Its coupling is fixed to describe soft diffraction and HERA data. In other models, such as RQMD or SFM, the interactions among strings are taken into account via the formation of color ropes or fusion of strings. Its strength is essentially fixed to reproduce heavy flavor and antibaryon enhancement at SPS. It is not unexpected that these three different forms of quantifying the shadowing give rise to similar predictions for global and simple observables such as central rapidity densities of charged particles. Probably the knowledge of shadowing from small $`x`$ physics can help to reconcile models based on partonic degrees of freedom with those based on strings as degrees of freedom. On the other hand, statistical thermal models predict a larger value close to 8000 as a consequence of the rather large volume at freeze-out. A reduction in a factor 3 would mean a strong reduction in this volume and a large change of the ratio between different particles, or in the temperature and chemical potential values.
In the summary talks of three major conferences , values of 8000, 8000, and between 3000 and 8000 were quoted for the probable number of charged particles per unit rapidity at the center of mass in central Pb-Pb collisions at the LHC. Models with interaction among strings obtain a lower value. The interplay between soft and hard physics is one of the main issues in the study of strong interactions and, together with the search and characterization of QGP, one of the main goals of Heavy Ion Physics. Doubtless, the new experiments at RHIC and LHC will shed light on this subject, even measuring such a simple observable as central rapidity densities of charged particles.
## Acknowledgements
This work has been done under contract AEN99-0589-C02 of CICYT (Spain). We express our gratitude to N. S. Amelin, M. A. Braun, A. Capella, W. Cassing, K. J. Eskola, E. G. Ferreiro, A. B. Kaidalov, J. Ranft, C. A. Salgado, H. Sorge, D. Sousa and K. Werner for useful discussions and comments.
## 1 Introduction
For the last two decades, pion-nucleus elastic scattering has been studied by a number of people using a variety of theoretical methods, some of which are given in Refs. \[1-16\]. The aim of most theoretical works is to understand the pion-nucleus scattering and thereby the nuclear structure in the framework of the multiple scattering theory by using elementary pion-nucleon scattering amplitudes obtained from the pion-nucleon scattering data. Normally, the Klein-Gordon equation is solved either in coordinate space or in momentum space with various forms of elementary t-matrices including nuclear structure effects \[1-12\]. There are also works using approaches other than directly solving the Klein-Gordon equation, such as the $`\mathrm{\Delta }`$-isobar model taking into account production and propagation of the $`\mathrm{\Delta }`$-isobar in nuclei. Though numerous works, essentially solving the Klein-Gordon equation in different ways, have revealed much of pion-nucleus dynamics, the understanding of the pion-nucleus scattering is still not quite satisfactory.
On the other hand, when one tries to do a quantitative calculation of cross sections for a reaction involving pions, such as ($`\pi `$,$`\pi ^{}`$) or ($`\pi `$,$`K`$), by using the distorted wave Born approximation (DWBA) or the distorted wave impulse approximation (DWIA), it is essential to have the pion distorted wave functions. Even if accurate pion distorted wave functions cannot be achieved, it would be convenient to have a way of treating the distortion effects in a simple manner. The present study was motivated by such a need for distorted wave functions that reproduce the pion scattering data, so that the distortion effects can be treated.
The optical potentials normally used in the Klein-Gordon equation for the pion-nucleus scattering are known to be nonlocal, particularly in the $`\mathrm{\Delta }`$-resonance region due to the $`P`$-wave nature of the resonance. However, it would be not only easier to visualize but also interesting if one can localize the nonlocal potential and look at the dynamics of the scattering from a different point of view. Recently, Satchler showed a method of reducing the Klein-Gordon equation into the form of a Schrödinger equation by redefining some kinematical quantities. He then reproduced not only elastic but also inelastic differential cross sections of the pion near the $`\mathrm{\Delta }`$(1232)-resonance energy for various target nuclei ranging from <sup>40</sup>Ca to <sup>208</sup>Pb by using local potentials of the Woods-Saxon form. In Section 2 we follow Satchler, reduce the Klein-Gordon equation to a Schrödinger equation, and search for phenomenological local optical potentials which can reproduce the scattering data.
We choose the system of $`\pi ^{}+^{12}`$C because for this system many experimental data are available in a wide range of energies, 120 - 766 MeV, and we are interested in examining the energy dependence of the pion-nucleus local potentials. We employ the Woods-Saxon form of the local potential and search for the potential parameters to fit the experimental data. The real and imaginary parts of the resulting local potentials are found to be consistent with the dispersion relation. The energy dependence of the phenomenological local potential shows that the imaginary part of the potential peaks near the $`\mathrm{\Delta }`$-resonance energy, and it can be explained by the decay of the $`\mathrm{\Delta }`$’s in the nuclear medium, which is manifested as absorption of the pion projectile in the scattering.
We then compare in Section 3 our phenomenological local potential with the local potential exactly phase-shift equivalent to the Kisslinger potential obtained by using the Krell-Ericson transformation method , which has been used for instance by Parija and recently by Johnson and Satchler . Section 4 summarizes the paper.
## 2 Phenomenological Local Potentials
### 2.1 The Model
In most works dealing with solving the Klein-Gordon equation, a so-called truncated Klein-Gordon equation is used, which means terms quadratic in the potentials are neglected compared to the pion energy. Then one gets
$$\left[-(\hbar c)^2\nabla ^2+2\omega (V_N+V_C)\right]\varphi =(\hbar kc)^2\varphi ,$$
(1)
where $`\varphi `$ is the distorted wave function for the relative motion between the pion and the target nucleus, $`V_C`$ and $`V_N`$ are the Coulomb and the nuclear potentials, respectively, and $`k`$ is the relativistic center-of-mass momentum of the pion. In regarding Eq. (1) as the equation for the scattering between the pion and the target nucleus, Stricker, McManus and Carr defined $`\omega `$ by
$$\omega =\frac{M_\pi m_Tc^2}{M_\pi +m_T},$$
(2)
where $`m_T`$ is the target mass given by the target mass number multiplied by the atomic mass unit and $`M_\pi `$ is the total energy of the pion in the pion-nucleus center-of-mass system. Satchler then introduced a reduced mass $`\mu `$ defined by $`\mu =\omega /c^2`$ and put Eq. (1) into the form of a Schrödinger equation
$$\left[-\frac{\hbar ^2}{2\mu }\nabla ^2+V_N+V_C\right]\varphi =E_{c.m.}\varphi $$
(3)
for the scattering of two masses, $`M_\pi `$ and $`m_T`$ with a center-of-mass kinetic energy $`E_{c.m.}=(\hbar k)^2/2\mu `$. (The incident pion bombarding energy was modified so that a standard nonrelativistic optical model computer program could be used .) In what follows, we use this method of solving Eq. (3) with phenomenological Woods-Saxon local potentials.
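As an illustration of the kinematical dictionary above, the sketch below computes $`k`$, $`M_\pi `$, $`\omega `$ (Eq. (2)) and $`E_{c.m.}`$ for a $`\pi ^{}`$ beam on <sup>12</sup>C. The mass values are standard inputs, not taken from the paper, and the function name is illustrative.

```python
import math

M_PI = 139.570       # pi- rest mass (MeV)
M_T = 12 * 931.494   # 12C mass (MeV), approximated by 12 atomic mass units

def cm_kinematics(t_lab):
    """Return (kc, M_pi_cm, omega, E_cm) in MeV for a pion of lab kinetic
    energy t_lab.

    kc      : relativistic c.m. momentum of the pion times c
    M_pi_cm : total pion energy in the pion-nucleus c.m. system
    omega   : the quantity of Eq. (2) (units with c = 1)
    E_cm    : (hbar k)^2 / (2 mu) with mu = omega / c^2, as below Eq. (3)
    """
    e_lab = t_lab + M_PI
    p_lab = math.sqrt(e_lab ** 2 - M_PI ** 2)
    roots = math.sqrt(M_PI ** 2 + M_T ** 2 + 2 * M_T * e_lab)  # invariant mass
    kc = p_lab * M_T / roots                 # c.m. momentum
    m_pi_cm = math.sqrt(M_PI ** 2 + kc ** 2)
    omega = m_pi_cm * M_T / (m_pi_cm + M_T)  # Eq. (2)
    e_cm = kc ** 2 / (2 * omega)             # c.m. kinetic energy
    return kc, m_pi_cm, omega, e_cm
```

Because the target is much heavier than the pion, $`\omega `$ stays close to $`M_\pi `$ (the pion's total c.m. energy), which is why the slightly different definitions of $`\omega `$ mentioned in Section 3 are numerically unimportant.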
### 2.2 Results for Phenomenological Local Potentials
The Woods-Saxon form of $`V_N`$ in Eq. (3) can be written as
$$V_N(r)=\frac{V}{1+exp(X_V)}+i\frac{W}{1+exp(X_W)}$$
(4)
with
$`X_i=(r-R_i)/a_i,\quad R_i=r_iA^{1/3}\ (i=V,W),`$
where $`r_i`$ and $`a_i`$ are radius and diffuseness parameters, respectively, and $`A`$ is the target mass number. The Coulomb potential $`V_C`$ is given in a simple form obtained from a uniform charge distribution of radius $`R_C=1.2A^{1/3}`$ fm. There are 6 adjustable parameters in Eq. (4). We fixed them by using a $`\chi ^2`$-fitting method. The $`\chi ^2`$, written explicitly as
$$\chi ^2=\frac{1}{N}\sum _{i=1}^N\left[\frac{\sigma _{ex}^i-\sigma _{th}^i}{\mathrm{\Delta }\sigma _{ex}^i}\right]^2$$
(5)
is evaluated at each energy, and the potential parameters are adjusted so as to minimize the $`\chi ^2`$. In Eq. (5), $`\sigma _{ex}^i`$’s ($`\sigma _{th}^i`$’s) and $`\mathrm{\Delta }\sigma _{ex}^i`$’s are the experimental (theoretical) cross sections and uncertainties, respectively, and $`N`$ is the number of data used in the fitting. Since experimental total elastic ($`\sigma _E`$), reaction ($`\sigma _R`$) and total ($`\sigma _T`$) cross sections were also available at all energies except for 400 and 500 MeV, we used not only differential cross sections but also $`\sigma _E`$, $`\sigma _R`$ and $`\sigma _T`$ as the data, $`\sigma _{ex}^i`$, to be fitted.
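The figure of merit of Eq. (5) is simple to code; a minimal sketch (the data arrays in the test are placeholders, not the actual cross sections):

```python
def chi2_per_point(sigma_ex, sigma_th, dsigma_ex):
    """Chi-square per data point, Eq. (5).  Differential cross sections
    and the integral quantities sigma_E, sigma_R and sigma_T can all be
    appended to the same lists, each with its own uncertainty."""
    assert len(sigma_ex) == len(sigma_th) == len(dsigma_ex)
    terms = ((e - t) / d for e, t, d in zip(sigma_ex, sigma_th, dsigma_ex))
    return sum(x * x for x in terms) / len(sigma_ex)
```

A parameter search then amounts to minimizing this quantity over the six Woods-Saxon parameters of Eq. (4), recomputing the model cross sections at each step.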
In searching for the parameters we first kept the radius parameters, $`r_i`$, as 0.9 or 1.0 fm and the diffuseness parameters, $`a_i`$, as 0.4 or 0.5 fm. When we could not get good fits to the data with these fixed parameters, we let them vary. We tried to find both repulsive and attractive potentials at all energies. It was possible to find attractive potentials at all energies considered here, but repulsive potentials were obtained only at 230, 260 and 280 MeV, which are just above the $`\mathrm{\Delta }`$-resonance energy. Satchler also obtained a repulsive Woods-Saxon potential for $`\pi ^\pm `$ \+ <sup>208</sup>Pb at 291 MeV, whereas at other (lower) energies he found attractive potentials . At these 3 energies the cross sections calculated with attractive potentials were indistinguishable from the ones calculated with repulsive potentials. However, one may normally expect a repulsive potential above the resonance. Also, the dispersion relation calculations to be discussed in the next subsection prefer repulsive real potentials just above the $`\mathrm{\Delta }`$-resonance energy. Thus, we shall henceforth include in our discussion only repulsive potentials at 230, 260 and 280 MeV. (At energies higher than the $`\mathrm{\Delta }`$-resonance there exist several $`N^{}`$ resonances, but these resonances are not so pronounced as the $`\mathrm{\Delta }`$, and they overlap with each other due to broad widths. Indeed, at other energies we could not find repulsive potentials.)
There is a well-known ambiguity in determining the optical potential parameters . Due to the strong absorption taking place in the nuclear surface region, different potentials can often fit the scattering data equally well as long as they have similar values near the surface region. Actually, we found that when we used only the elastic differential cross sections as the data to be fitted, the extracted potential parameters could not always reproduce the experimental total elastic ($`\sigma _E`$), reaction ($`\sigma _R`$) and total ($`\sigma _T`$) cross sections. However, when we included in the fitting $`\sigma _E`$, $`\sigma _R`$ and $`\sigma _T`$ as the data to be reproduced in addition to the differential cross section, the resulting potential parameters reproduced all the cross sections quite well as shown in Figs. 1 and 2. (In Figs. 1 and 2, the experimental data for the pion kinetic energies, $`E`$ = 120 - 280 MeV and $`E`$ = 486, 584, 663 and 766 MeV are taken from Refs. and , respectively. Recently, differential elastic cross section data at $`E`$ = 400 and 500 MeV became available from Ref. .) The searched parameters and the $`\chi ^2`$-values for each energy are listed in Table 1. The fits to the experimental cross sections are in general very good. But there is some discrepancy in the differential cross sections at low bombarding energies, particularly at $`E`$ = 150 MeV. At this energy the second minimum of the differential cross section was not reproduced correctly, and the third maximum was underestimated. When we tried to reproduce the data at larger angles by further adjusting the potential parameters, it was possible to fit the larger angle data, but then the first minimum was not reproduced at the right angle. 
This may be because we have used the Woods-Saxon potentials, whereas more realistic potentials such as local potentials phase-shift equivalent to Kisslinger potentials look considerably different from the Woods-Saxon form, particularly inside the nucleus, as will be shown in Section 3. However, we also note that similar discrepancies between the calculated and the experimental cross sections at $`E`$ = 150 MeV can be seen from Refs. and , in which the $`\mathrm{\Delta }`$-isobar model and the Kisslinger potential are used, respectively.
### 2.3 Dispersion Relation and Discussion
Because the extracted potentials have an ambiguity as mentioned above, it is worthwhile to check whether they satisfy the dispersion relation, which is known to be satisfied by the real and imaginary parts of the optical potentials . Also, since the relativistic Klein-Gordon equation, normally solved with nonlocal potentials, is reduced to a nonrelativistic Schrödinger equation, it would be interesting to see whether the phenomenological local potentials are consistent with the dispersion relation. The relation is often written in the form of a so-called subtracted dispersion relation as
$$V(E,r)=V(E^{\prime },r)+\frac{E-E^{\prime }}{\pi }P\int _0^{\mathrm{\infty }}dE^{\prime \prime }\frac{W(E^{\prime \prime },r)}{(E^{\prime \prime }-E^{\prime })(E^{\prime \prime }-E)},$$
(6)
where $`P`$ stands for the principal value and $`E^{\prime }`$ is the energy where $`V(E=E^{\prime },r)`$ is known . This equation tells us that once the imaginary part of the potential at a certain radius is known as a function of the energy, the real part can be calculated from the relation. Thus, we inserted the imaginary potentials extracted from the $`\chi ^2`$-fitting into the $`W(E,r)`$ of Eq. (6), computed the real potential using the relation, and compared the results with the real potentials extracted from the $`\chi ^2`$-fitting. In so doing, since the potential values in the nuclear surface region are most significant in determining the cross section, we first evaluate a strong absorption radius ($`R_S`$) defined as the apsidal distance on the Rutherford trajectory corresponding to the angular momentum $`L=L_{1/2}`$, where $`L_{1/2}`$ is the angular momentum for which the $`S`$-matrix element has the magnitude $`|S_L|=\sqrt{1/2}`$. Here, we used non-integer $`L_{1/2}`$ values at which $`|S_L|=\sqrt{1/2}`$, following Ref. . The $`L_{1/2}`$ values thus obtained are listed in Table 1. The strong absorption radius parameter ($`r_S=R_S/A^{1/3}`$) computed from these $`L_{1/2}`$ values is also listed in Table 1 and plotted in Fig. 3(a) as a function of the pion energy. The $`r_S`$ becomes as large as 1.6 fm near the $`\mathrm{\Delta }`$-resonance region and about 1.1 fm at $`E\simeq `$ 400 - 500 MeV. Note that Satchler’s strong absorption radius parameters were also roughly around 1.5 fm near the $`\mathrm{\Delta }`$-resonance energy . To compare the values of the real and imaginary parts of the potentials at a certain radius, we took $`r=1.5A^{1/3}`$ fm in Eq. (6), which is close to a strong absorption radius near the $`\mathrm{\Delta }`$-resonance energy.
In Fig. 4 the extracted real and imaginary parts of the potentials, evaluated at $`r=1.5A^{1/3}`$ fm, are plotted by the solid circles as a function of the energy. We could roughly fit the solid circles in Fig. 4(b) by the sum of a Gaussian function and a constant of the form
$$W(E,r)=W_0\mathrm{exp}\left(-\left(\frac{E-E_0}{\mathrm{\Delta }E}\right)^2\right)+W_1,$$
(7)
where $`W_0`$, $`E_0`$, $`\mathrm{\Delta }E`$, and $`W_1`$ were found to be -13.9, 220, 110, and -3.34 MeV, respectively. The $`W(E,r)`$ in Eq. (7) with these parameters is plotted by the curve in Fig. 4(b). We then inserted Eq. (7) into Eq. (6), chose the value of $`E^{\prime }`$ in Eq. (6) as 500 MeV, carried out the integral over the energy, and obtained the real part, $`V(E,r)`$. The resulting $`V(E,r)`$ is plotted by the curve in Fig. 4(a), which roughly fits the extracted real potentials (the solid circles). As mentioned earlier, at 230, 260 and 280 MeV both attractive and repulsive potentials could fit the cross section data equally well. But the $`V(E,r)`$ calculated from the dispersion relation (the full curve) prefers repulsive potentials at these energies.
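The subtracted dispersion integral of Eq. (6) is easy to evaluate numerically. The sketch below uses the fitted parameters of Eq. (7) quoted above; the subtraction constant $`V(E^{\prime },r)`$ is not given numerically in the text, so `V_REF` is an illustrative placeholder. The principal value is handled by subtracting $`W`$ at the pole, which leaves a smooth integrand.

```python
import math

W0, E0, DE, W1 = -13.9, 220.0, 110.0, -3.34   # MeV, parameters of Eq. (7)
E_REF = 500.0   # subtraction point E' (MeV)
V_REF = -5.0    # V(E', r): illustrative placeholder, not from the paper

def W(e):
    """Imaginary potential of Eq. (7) at the chosen radius."""
    return W0 * math.exp(-((e - E0) / DE) ** 2) + W1

def pv_integral(pole, upper=5000.0, n=20000):
    """P int_0^upper W(x)/(x - pole) dx.  Writing W(x) as
    [W(x) - W(pole)] + W(pole), the first part is smooth (midpoint rule;
    the nodes never hit the pole for the energies used here) and the
    second part integrates to a logarithm."""
    h = upper / n
    smooth = sum((W((i + 0.5) * h) - W(pole)) / ((i + 0.5) * h - pole)
                 for i in range(n)) * h
    return smooth + W(pole) * math.log((upper - pole) / pole)

def V(e):
    """Real potential from Eq. (6), via the partial fractions
    1/((x-E')(x-E)) = [1/(x-E) - 1/(x-E')] / (E-E')."""
    if e == E_REF:
        return V_REF
    return V_REF + (pv_integral(e) - pv_integral(E_REF)) / math.pi
```

The finite upper cutoff introduces only a small error from the constant $`W_1`$ tail; by construction, the curve passes through $`V(E^{\prime },r)`$ at the subtraction point.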
The dispersion relation is also applied at other radii, from $`1.3A^{1/3}`$ to $`2.0A^{1/3}`$ fm. In this radial region the extracted imaginary potentials can be fitted by Eq. (7) (but with different values of $`W_0,E_0,\mathrm{\Delta }E,`$ and $`W_1`$, of course, for each energy) with $`E_0\simeq `$ 205 MeV on the average. The real potentials calculated from the dispersion relation in this radial region reproduce the extracted real potentials just as well as in Fig. 4(a), so the figures are not shown here again. But outside this radial region, the extracted imaginary potentials are not so well fitted by Eq. (7). Also, the extracted real potentials are somewhat more scattered around the real potential curves calculated from the dispersion relation. Thus, it seems that the extracted phenomenological local potentials are consistent with the dispersion relation in this outer surface region.
Here, we note that although we reproduce the cross sections quite well as shown in Figs. 1 and 2 and the extracted local potentials in the outer surface region are reasonably consistent with the dispersion relation, it still does not necessarily mean that the extracted potentials are the unique ones which can describe the data. Particularly the inner part of the potentials can be quite different, as will be seen in Section 3, because this method cannot well determine the potentials inside the nucleus due to the absorption in the nuclear surface region in addition to the fact that we have assumed the Woods-Saxon form of the local potential.
However, an interesting feature obtained from the results is the broad peak in the imaginary potential as seen in Fig. 4(b). For $`1.3A^{1/3}`$ fm $`<r<2.0A^{1/3}`$ fm, the peaks are located at $`E=E_0\simeq `$ 205 MeV on the average, which is near the $`\mathrm{\Delta }`$-resonance energy, and the $`\mathrm{\Delta }E`$ of the peaks is about 110 MeV. The $`\mathrm{\Delta }`$’s produced in nuclei can get absorbed via processes such as quasi-free decay or spreading . Such an absorption is reflected in the flux of the incident pion as the imaginary part of the local potential peaked at about $`E=E_0`$. The region where the $`\mathrm{\Delta }`$ decays in nuclei through the quasi-free channel and the spreading was studied . It was shown that the quasi-free decay takes place at $`r\simeq 1.6A^{1/3}`$ fm and the spreading at $`r\simeq 0.9A^{1/3}`$ fm. Since the quasi-free decay is the dominant decay process over the spreading by a factor of roughly 2 , the strong absorption radius near the $`\mathrm{\Delta }`$-resonance energy is mainly determined by the region where the quasi-free decay takes place, which is as large as about $`1.6A^{1/3}`$ fm. Fig. 3(a) and Table 1 show just that the extracted strong absorption radius parameters near the $`\mathrm{\Delta }`$-resonance are about 1.6 fm, consistent with the results of Ref. . Here, we also remark that 1.59 times $`\pi R_S^2`$ roughly reproduces the total cross section as plotted by the crosses in Fig. 3(b), i.e., $`\sigma _T\simeq 1.59\pi R_S^2`$, where $`R_S=r_SA^{1/3}`$ with $`r_S`$ being the strong absorption radius parameter plotted by the squares in Fig. 3(a).
## 3 Comparison with Local Potentials Equivalent to Kisslinger potential
In this Section we briefly describe the Krell-Ericson transformation method closely following Johnson and Satchler , where it was extensively applied to both $`\pi ^+`$ and $`\pi ^{}`$ scattered from various target nuclei at pion energies from 20 to 291 MeV. A detailed study of the resulting equivalent local potentials can be found there.
We return to the truncated Klein-Gordon equation in Eq. (1). $`\omega `$ is now taken as the total energy of the pion in the pion-nucleus center-of-mass system. These slightly different definitions of $`\omega `$ do not make any significant difference in the values of $`\omega `$ because the mass of the pion is very small compared to that of target nuclei. For the nuclear potential $`V_N`$ in Eq. (1) the Kisslinger form of the potential has been frequently used, which can be written as
$$V_N=\frac{(\hbar c)^2}{2\omega }\left[q(r)+\vec{\nabla }\cdot \alpha (r)\vec{\nabla }\right],$$
(8)
where the first term $`q(r)`$ is mainly due to the pion-nucleon $`S`$-wave interaction and the second term comes from the $`P`$-wave part.
By rewriting the pion distorted wave function $`\varphi (𝐫)`$ in Eq. (1) as
$$\varphi (𝐫)=P(r)\psi (𝐫)$$
(9)
with the Perey factor $`P(r)=\left[1-\alpha (r)\right]^{-1/2}`$, one can get a Schrödinger equation for $`\psi (𝐫)`$,
$$\left[-\frac{\hbar ^2}{2\mu }\nabla ^2+U_L+V_C\right]\psi =E_{c.m.}\psi ,$$
(10)
where $`\mu =\omega /c^2`$ and $`E_{c.m.}=(\hbar k)^2/2\mu `$. Here, $`U_L`$ is a local potential dependent only on $`r`$ as follows:
$$U_L=U_1+U_2+U_3+\mathrm{\Delta }U_C$$
(11)
with
$`U_1`$ $`=`$ $`{\displaystyle \frac{(\hbar c)^2}{2\omega }}{\displaystyle \frac{q(r)}{1-\alpha (r)}},`$
$`U_2`$ $`=`$ $`-{\displaystyle \frac{(\hbar c)^2}{2\omega }}{\displaystyle \frac{k^2\alpha (r)}{1-\alpha (r)}},`$
$`U_3`$ $`=`$ $`-{\displaystyle \frac{(\hbar c)^2}{2\omega }}\left[{\displaystyle \frac{\frac{1}{2}\nabla ^2\alpha (r)}{1-\alpha (r)}}+\left\{{\displaystyle \frac{\frac{1}{2}\nabla \alpha (r)}{1-\alpha (r)}}\right\}^2\right],`$ (12)
$`\mathrm{\Delta }U_C`$ $`=`$ $`{\displaystyle \frac{\alpha (r)V_C}{1-\alpha (r)}}.`$
Thus if $`q(r)`$ and $`\alpha (r)`$ are given, the local potential $`U_L`$, exactly phase-shift equivalent to the Kisslinger potential, can be calculated. The expressions and parameters for $`q(r)`$ and $`\alpha (r)`$ for the system of $`\pi ^{}`$ \+ <sup>12</sup>C at 7 energies from 120 to 280 MeV are available from the work of Sternheim and Auerbach , where simple forms of $`q(r)`$ and $`\alpha (r)`$ are used: $`q(r)=b_0k^2\rho (r)`$ and $`\alpha (r)=b_1\rho (r)`$ with $`\rho (r)`$ being the target nuclear density. (Note that $`b_1`$ here corresponds to $`c_0`$ in the notation of Johnson and Satchler .) We have used “Fermi averaged parameters” for $`b_0`$ and “Fitted parameters” for $`b_1`$ as listed in Table I of Ref. . The same parameter sets were used by Di Marzio and Amos to calculate approximate analytic pion distorted wave functions . We took the <sup>12</sup>C density from Ref. , which is consistent with Ref. .
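Given $`q(r)`$ and $`\alpha (r)`$, the terms of Eqs. (11)-(12) are easy to tabulate. The sketch below uses a Gaussian density and illustrative complex $`b_0`$, $`b_1`$ (they are not the Sternheim-Auerbach fit values); for a spherically symmetric $`\alpha `$ the gradient and Laplacian are taken analytically.

```python
import math

HBARC = 197.327                  # MeV fm
RC = 1.2 * 12 ** (1.0 / 3.0)     # fm, uniform-charge radius used in Sec. 2
ZE2 = 6 * 1.43996                # Z e^2 in MeV fm (attractive for pi-)

def v_coulomb(r):
    """Uniform-sphere Coulomb potential for pi- on 12C (MeV)."""
    if r >= RC:
        return -ZE2 / r
    return -ZE2 * (3.0 - (r / RC) ** 2) / (2.0 * RC)

def u_local(r, k, omega, b0, b1, rho0=0.089, a=1.64):
    """Complex local potential U_L of Eqs. (11)-(12) at radius r (fm),
    for a Gaussian density rho = rho0 exp(-r^2/a^2), with q = b0 k^2 rho
    and alpha = b1 rho.  All numerical inputs here are illustrative:
    k in fm^-1, omega in MeV, b0 and b1 in fm^3, rho0 in fm^-3."""
    rho = rho0 * math.exp(-(r / a) ** 2)
    q = b0 * k ** 2 * rho
    alpha = b1 * rho
    grad_a = -2.0 * r / a ** 2 * alpha                       # d(alpha)/dr
    lap_a = (4.0 * r ** 2 / a ** 4 - 6.0 / a ** 2) * alpha   # Laplacian of alpha
    u = 1.0 - alpha
    pref = HBARC ** 2 / (2.0 * omega)
    u1 = pref * q / u
    u2 = -pref * k ** 2 * alpha / u
    u3 = -pref * (0.5 * lap_a / u + (0.5 * grad_a / u) ** 2)
    duc = alpha * v_coulomb(r) / u
    return u1 + u2 + u3 + duc
```

Evaluating this near the $`\mathrm{\Delta }`$ resonance (e.g. $`k\simeq 1.4`$ fm<sup>-1</sup>, $`\omega \simeq 300`$ MeV) shows the $`k^2\alpha /(1-\alpha )`$ term dominating the overall shape, as discussed for Fig. 6, while $`U_L`$ vanishes at large radii where the density dies out.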
The real and imaginary parts of $`U_L`$ calculated with these parameters are plotted by the full curves in Figs. 5(a) and 5(b), respectively, for $`E=`$ 120, 150, 230, and 280 MeV only. (For brevity, figures for 180 and 260 MeV are omitted. $`U_L`$ for 200 MeV is shown in Fig. 6.) Both real and imaginary parts of $`U_L`$ display a wiggly behaviour, as already observed in Refs. and , but the wiggles disappear as the energy increases. The reason for the gradual disappearance of the wiggles at higher energies can be easily seen, when the Kisslinger potential is expressed in a simple form as in Refs. and . In Fig. 6, each term of $`U_L`$ is plotted for two different pion energies; one below the $`\mathrm{\Delta }`$-resonance, the other above the resonance. The real and imaginary parts are plotted by the full and broken curves, respectively. It is easily seen that $`U_2`$ and $`U_3`$ are the terms that characterize the shape of the summed local potential $`U_L`$. $`U_2`$ determines the overall shape of $`U_L`$ for both below and above the resonance, and $`U_3`$ brings in more fluctuations, particularly below the resonance. As the energy increases the wiggles in all terms of $`U_L`$ become less prominent. Such gradual disappearance of oscillatory behaviours of the real potential at higher energies was already observed in a model-independent Fourier-Bessel analysis of the pion potential by Friedman , who extracted the real potential by assuming the Woods-Saxon form for the imaginary potential. The same tendency of disappearance of wiggles at higher energies can be also seen from the figures in Refs. and .
We can also see that the wiggles do not appear in the outer nuclear surface region, where the scattering is most sensitive to the potential. Thus, as the energy increases, the equivalent local potentials at large radii become more or less close to the form of a Woods-Saxon potential. In Fig. 5 we plotted by the broken curves the phenomenological Woods-Saxon potentials extracted in Section 2. At 230, 260 and 280 MeV the phenomenological local potentials are close to the equivalent local potentials $`U_L`$ (the full curves) in the outer surface region. Especially, the imaginary parts at large radii are very close to each other. But at lower energies there are large discrepancies between the phenomenological Woods-Saxon potentials and the equivalent local potentials. Even the signs of the real potentials are opposite except for large radii at 120 MeV. (We may, however, remark that even the equivalent local potentials could have different signs of real potentials at smaller radii depending on the interaction parameters used as shown in Fig. 13 of Ref. , while they produce similar scattering cross sections, because the scattering is most sensitive to the potential at large radii.) As pointed out earlier, we can see from Fig. 4(a) that the real potentials at $`1.3A^{1/3}`$ fm $`<r<2.0A^{1/3}`$ fm are repulsive only at the energies just above the $`\mathrm{\Delta }`$-resonance. Thus, the dispersion relation calculations seem to require attractive potentials at energies below the $`\mathrm{\Delta }`$-resonance in the outer surface region.
Also, although the equivalent local potentials are theoretically better founded, the phenomenological Woods-Saxon potentials reproduce the cross sections much better, as shown in Fig. 1. (The equivalent local potentials obtained here result in the same differential cross sections as in Figs. 1 and 2 of Ref. , so they are not repeated here.)
## 4 Summary
We assumed the Woods-Saxon form of phenomenological local potentials in solving a Schrödinger equation reduced from the Klein-Gordon equation and searched for the potential parameters. The calculated cross sections reproduced the experimental cross sections quite well over a wide range of energies. The real and imaginary parts of the phenomenological potentials in the outer nuclear surface region are found to satisfy the dispersion relation. The imaginary part of the phenomenological local potentials as a function of the energy has a peak near the $`\mathrm{\Delta }`$-resonance energy due to the decay of the $`\mathrm{\Delta }`$’s in the nuclear medium, which is reflected in the pion flux as absorption of the incident pion. The strong absorption radius ($`1.6A^{1/3}`$ fm) in the $`\mathrm{\Delta }`$-resonance region is found to be consistent with the previous studies of the region where the $`\mathrm{\Delta }`$ decays in the nuclear medium. But we again stress that the phenomenological local potentials obtained here are not necessarily unique. This method of calculating the pion cross sections may rather be taken as a simple way of taking into account the distortion effects in the DWBA or DWIA calculations with a relatively good accuracy as in Ref. . It is well known that for high energy beams the distortion effects can often be treated by an eikonal approximation with a so-called distortion factor or attenuation factor. Indeed, Table 1 shows that at higher energies the real part of the phenomenological local potential becomes much smaller than the imaginary part.
Very recently this approach to the treatment of the distortion effects has been applied to the <sup>12</sup>C($`\pi ^+,K^+`$)$`{}_{\mathrm{\Lambda }}{}^{}{}_{}{}^{12}`$C reaction, and the distorted wave functions of $`\pi ^+`$ and $`K^+`$ calculated in this method have been successfully used in reproducing the hypernuclear production cross sections in DWIA . Even though the distorted wave functions calculated in this way may not be accurate, especially inside the nucleus, this simple method seems quite useful in dealing with the distortion effects, in view of the fact that most of the cross sections are well reproduced.
Acknowledgements
We are grateful to Professor T. Udagawa for his hospitality at the University of Texas at Austin and for his careful reading of the manuscript and helpful discussions. SWH owes thanks to Drs. H. W. Fearing and B. K. Jennings for their hospitality at TRIUMF and for discussions, and to Professor B. C. Clark for sending some of the experimental data in numerical form. This work was supported in part by the Ministry of Education of Korea (BSRI 98-2422) and by Korea Science and Engineering Foundation (951-0202-033-2).
Table 1. Parameters of the phenomenological Woods-Saxon potentials, the fit quality, and the strong-absorption quantities at each pion energy.

| $`E`$ | $`V`$ | $`r_V`$ | $`a_V`$ | $`W`$ | $`r_W`$ | $`a_W`$ | $`\chi ^2`$ | $`L_{1/2}`$ | $`r_S`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (MeV) | (MeV) | (fm) | (fm) | (MeV) | (fm) | (fm) | | ($`\mathrm{}`$) | (fm) |
| 120 | -31.2 | 1.55 | .257 | -149. | 0.9 | .536 | 1.7 | 3.5 | 1.58 |
| 150 | -54.0 | 1.20 | .574 | -103. | 1.0 | .566 | 8.9 | 4.1 | 1.58 |
| 180 | -86.4 | 0.9 | .553 | -62.0 | 1.30 | .455 | 2.7 | 4.7 | 1.59 |
| 200 | -93.6 | 0.9 | .571 | -66.35 | 1.20 | .4905 | 2.0 | 5.0 | 1.54 |
| 230 | 137. | 1.0 | .2156 | -58.53 | 1.40 | .3556 | 1.8 | 5.1 | 1.44 |
| 260 | 111. | 1.0 | .3136 | -53.8 | 1.35 | .3694 | 1.2 | 5.3 | 1.37 |
| 280 | 109. | 1.0 | .319 | -46.74 | 1.35 | .381 | 0.63 | 5.5 | 1.34 |
| 400 | -39.4 | 1.12 | .4 | -59.8 | 0.9 | .474 | 1.5 | 5.7 | 1.05 |
| 486 | -22.0 | 1.124 | .4 | -70.3 | 0.9 | .4 | 1.8 | 7.0 | 1.11 |
| 500 | -31.8 | 1.05 | .4 | -53.8 | 0.9 | .540 | 1.7 | 6.7 | 1.05 |
| 584 | -12.4 | 1.20 | .366 | -69.8 | 0.9 | .437 | 1.3 | 8.3 | 1.13 |
| 663 | -4.90 | 1.37 | .300 | -64.0 | 0.965 | .442 | 1.9 | 9.6 | 1.17 |
| 766 | -4.11 | 1.40 | .526 | -60.3 | 1.0 | .462 | 1.8 | 11.6 | 1.25 |
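For readers who want to reconstruct a potential from the table, a minimal sketch follows. The Fermi (Woods-Saxon) shape is the one named in the text; the geometry convention $`R_i=r_iA^{1/3}`$ is an assumption here, consistent with the $`A^{1/3}`$ scaling of the strong absorption radius but not spelled out in this excerpt:

```python
import numpy as np

def woods_saxon(r, V, rV, aV, W, rW, aW, A=12):
    """Complex local potential U(r) = V f(r;RV,aV) + i W f(r;RW,aW) in MeV,
    with f the Fermi shape and R_i = r_i * A**(1/3) (assumed convention)."""
    RV = rV * A ** (1.0 / 3.0)
    RW = rW * A ** (1.0 / 3.0)
    fV = 1.0 / (1.0 + np.exp((r - RV) / aV))
    fW = 1.0 / (1.0 + np.exp((r - RW) / aW))
    return V * fV + 1j * W * fW

# 200 MeV row of the table above
r = np.linspace(0.0, 6.0, 61)   # fm
U = woods_saxon(r, V=-93.6, rV=0.9, aV=0.571, W=-66.35, rW=1.20, aW=0.4905)
```

The resulting complex potential is strongly absorptive at the center and falls off steeply beyond the surface region to which the scattering is most sensitive.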
## 1 Introduction
The large, $`A`$-dependent enhancement of the cross section at low invariant masses of the dipion system found in Refs. , could be a signal of very strong nuclear medium effects on correlated pion pairs in the $`I=J=0`$ (“$`\sigma `$”) channel. This strength accumulation had been predicted in Ref. , and similar results were obtained by Hatsuda et al. , where the enhancement at low invariant masses of the spectral function in the $`\sigma `$ channel appears as a consequence of the partial restoration of chiral symmetry in the nuclear medium.
In the last few years, several non-perturbative models have been developed, which describe very successfully the $`\pi \pi `$ interaction in vacuum . When nuclear medium effects were included, some accumulation of strength was found close to the two-pion threshold, which could be consistent with the experimental results. However, a full calculation of the $`A(\pi ,2\pi )X`$ process was needed in order to take into account all other nuclear effects and the detailed structure of the scattering amplitude.
A first attempt was presented in ref. . In that work, a very simple model for the elementary $`\pi N\to \pi \pi N`$ amplitude was used, and the most important medium effects were included. The results showed a clear peak in the $`M_{\pi \pi }`$ distribution slightly above threshold, in good agreement with experimental data for medium nuclei. However, the agreement was not as satisfactory for deuterium, where the $`M_{\pi \pi }`$ distribution was overestimated at threshold.
In this paper we present a new study of the reaction, including a more realistic $`\pi N\to \pi \pi N`$ amplitude, which reproduces very well the cross section on hydrogen and deuterium, and we also consider some nuclear effects omitted previously, like the reduction of the incoming pion flux due to absorption and quasielastic scattering, which drastically modifies the effective density at which the reaction occurs.
## 2 $`\pi \pi `$ scattering in the scalar isoscalar channel
The $`\pi \pi `$ scattering amplitude is obtained by solving the Bethe-Salpeter (BS) equation
$$T=𝒱+𝒱𝒢T.$$
(1)
Fully detailed formulas and many technicalities can be found in Refs. . The $`|\pi \pi ,I=0>`$ and $`|K\overline{K},I=0>`$ states are included in the coupled channels calculation. The potential $`𝒱`$ is obtained from the lowest order chiral lagrangians and $`𝒢`$ is the two meson propagator. A cutoff of 1 GeV is used to regularize the momentum integral appearing in the calculation of $`𝒢`$. The method guarantees both unitarity and consistency with chiral perturbation theory at low energies. This theoretical $`\pi \pi `$ scattering amplitude agrees well with experimental phase shifts and inelasticities from threshold up to energies around 1.2 GeV, and therefore provides a good starting point for our analysis.
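In the on-shell factorized form used in this class of chiral-unitarity approaches, Eq. (1) becomes an algebraic matrix equation, $`T=(1-𝒱𝒢)^{-1}𝒱`$, in the $`(\pi \pi ,K\overline{K})`$ channel space. The sketch below solves it at a single energy; the numbers for $`𝒱`$ and the loop functions are illustrative placeholders, whereas a real calculation takes $`𝒱`$ from the lowest order chiral Lagrangian and $`𝒢`$ from the cutoff-regularized loop integral:

```python
import numpy as np

# Channel space: index 0 -> |pi pi, I=0>, index 1 -> |K Kbar, I=0>.
# V and G are illustrative placeholder numbers at one energy, NOT the
# chiral-Lagrangian potential or the cutoff-regularized loop functions.
V = np.array([[-2.1, -1.3],
              [-1.3, -1.8]])                       # assumed symmetric potential
G = np.diag([-0.021 + 0.006j, -0.015 + 0.0j])      # assumed loop functions

# Solve T = V + V G T  <=>  (1 - V G) T = V
T = np.linalg.solve(np.eye(2) - V @ G, V)
```

Scanning the energy through such a solve, with the true loop functions (vacuum or in-medium), is what produces the amplitudes compared below.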
To account for nuclear effects, the BS equation is modified by the substitution of vacuum meson propagators by the medium ones. Namely,
$$\stackrel{~}{𝒢}=i\int \frac{d^4k}{(2\pi )^4}\frac{1}{k^2-m^2-\mathrm{\Pi }(k)}\frac{1}{(P-k)^2-m^2-\mathrm{\Pi }(P-k)},$$
(2)
where $`\mathrm{\Pi }(k)`$ is the meson self-energy in nuclear matter, which accounts for the particle-hole and $`\mathrm{\Delta }`$-hole excitations. The resulting $`\pi \pi `$ scattering amplitude shows a strong dependence on the baryon density, as can be appreciated in Fig. 1. Whereas at high energies (around 600 MeV) the imaginary part of $`T_{\pi \pi }`$ is reduced, there is a large enhancement around 300 MeV, where the CHAOS data show a well-marked peak.
Similar results have been found using quite different approaches. See for instance Ref. and Fig. 3 of Ref. .
Finally, let us mention that the other isospin channels either give a very small contribution to the $`A(\pi ,\pi \pi )X`$ reaction at low energies ($`I=1`$) or show a weak interaction between the pions ($`I=2`$); thus, for those channels, we do not consider the final-state interaction between the pions.
## 3 Elementary $`\pi N\to \pi \pi N`$ reaction
In order to compare detailed effects on the differential cross section, a high-quality description of the elementary cross section is clearly required. Fortunately, such models are readily available in the literature . The model used in this paper follows closely that of ref. , although with some improvements to accommodate it to new resonance data in the PDG book , and to properly include the final-state $`\pi \pi `$ interaction in the scalar isoscalar channel. The Lagrangians and coupling constants used can be found in the appendix of ref. .
Although the model is quite complex, and includes many mechanisms, it has no free parameters. Some coupling constants, related to the Roper resonance, have an uncertainty band associated with the uncertainties quoted in the PDG book. In those cases we have always taken the central values.
The results agree well with the experimental data for total and differential cross sections in all isospin channels, including of course the two-pion invariant-mass distributions measured by CHAOS.
## 4 $`\pi A\to \pi \pi X`$ reaction
Many different nuclear effects modify the pion production cross sections. First, the initial and final pions undergo a strong distortion. This is implemented in the calculation following the methods of refs. . The incoming pion flux is reduced by absorption and quasielastic scattering. Both are very large because we are close to the $`\mathrm{\Delta }`$ resonance peak. The pions scattered quasielastically are simply removed because they lose energy, and thus the probability that they participate in a pion production process is drastically reduced. Distortion is less important for the final pions because of their lower energy; only absorption has been considered for them. Second, the incoming pion collides with a nucleon which is moving in a Fermi sea, and the emitted nucleon is Pauli blocked; therefore only momenta above a certain value are allowed to contribute, and this is implemented by means of a local density approximation. Third, the intermediate resonances ($`\mathrm{\Delta }`$’s, $`N^*`$’s) also see their properties modified by the medium. New reaction mechanisms, like meson exchange currents, could also play some role, although it has been shown in Ref. that the reaction is essentially quasifree, and these possible mechanisms are not included in our calculation. Finally, the pion-pion final-state interaction in the nuclear medium is considered. We select the part of the amplitude in which the two final pions are in the scalar isoscalar channel, and then we modify this part by incorporating the nuclear medium $`\pi \pi `$ interaction .
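To see why the distortion pushes the reaction toward the nuclear surface, a schematic straight-line (eikonal) attenuation exp(−σ∫ρ dl) already captures the effect. This is not the distortion treatment of the references used above; the effective $`\pi N`$ cross section and the Fermi-density parameters below are assumed, illustrative numbers:

```python
import numpy as np

# Schematic eikonal survival probability of the incoming pion along a
# straight line at impact parameter b: P(b) = exp(-sigma * int rho dl).
# sigma, rho0, c, a are assumed illustrative numbers (not fitted values).
sigma = 6.0                    # fm^2 (~60 mb effective piN cross section near the Delta)
rho0, c, a = 0.17, 2.3, 0.55   # fm^-3, fm, fm: Fermi density parameters

z = np.linspace(-10.0, 10.0, 2001)
dz = z[1] - z[0]

def survival(b):
    r = np.hypot(b, z)                         # distance from the center along the path
    rho = rho0 / (1.0 + np.exp((r - c) / a))   # Fermi density profile
    return np.exp(-sigma * np.sum(rho) * dz)   # attenuation over the whole trajectory

P_center = survival(0.0)   # head-on trajectory through the nucleus
P_edge = survival(4.0)     # grazing trajectory
```

The strong central attenuation in such estimates is what drives the low effective density discussed in the next section.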
## 5 Results and discussion
As shown in Fig. 2, we find that our model describes fairly well the $`(\pi ^+,\pi ^+\pi ^+)`$ reaction both in deuterium and in heavier nuclei. Details and comments on normalization can be found in ref. . This gives us much confidence in our treatment of the nuclear medium effects. Note that these effects are the same for this and the $`\pi ^+\pi ^-`$ channel except for the two-pion final-state interaction, which is pure isospin 2 in $`\pi ^+\pi ^+`$, and mostly isospin 0 in the $`\pi ^+\pi ^-`$ case.
However, although the $`(\pi ^-,\pi ^+\pi ^-)`$ reaction is well reproduced in deuterium, the model fails for this channel in heavier nuclei (see Fig. 3). Furthermore, we find the effect of the in-medium final-state interaction of the pions to be rather small. Similar results are obtained for heavier nuclei.
The main reason for the small enhancement found is the very small effective density (see Fig. 4) at which the pion production process occurs. As explained before, the initial pion has a large probability of being absorbed or quasielastically scattered. As a consequence, the flux reaching the center of the nucleus is small and the reaction occurs mainly at the surface. An estimation of the average density gives $`\rho _{av}=\rho _0/4`$, considerably lower than those used in Ref. .
Imposing a high fixed average density, we get a much larger, although still insufficient, enhancement, and that at the price of destroying the nice agreement found for the $`\pi ^+\pi ^+`$ case, due to the then too large Fermi motion of the nucleons. Better agreement with the nuclear data can be reached by selecting a simplified version of the model used for the elementary $`\pi N\to \pi \pi N`$ reaction, one that overestimates the low-mass region in the deuteron case.
There are several possibilities which could explain the large discrepancy between the data and our model. One could be a much stronger pion-pion interaction in the medium in the scalar isoscalar channel, and some recent works point in that direction. On the other hand, more trivial effects could be playing an important role. Probably, apart from concentrating on the peak appearing in medium and heavy nuclei, one should look more carefully at the very low values of the cross section at low invariant masses in deuterium and hydrogen. According to our model, these are due to a destructive interference between large pieces of the amplitude. If some of these pieces are substantially modified in nuclei, the interference could disappear, and the spectral function would have some additional strength close to threshold.
Some of these questions could be answered soon. New experimental data are being analyzed which cover a wider phase space than CHAOS, and there are also other CHAOS measurements studying the dependence of the peak on the incoming pion energy. In our model this is important because the interference effects are smaller at lower energies.
Finally, we think that lepton-induced reactions, which are free from the initial-state interaction and would allow the pion production to happen at higher densities, could be a better probe. In particular, $`(\gamma ,\pi \pi )`$ is currently under theoretical investigation.
Acknowledgements
This work has been partially supported by DGYCIT contract no. PB-96-0753.
# Fossils of turbulence and non-turbulence in the primordial universe: the fluid mechanics of dark matter
## 1 Introduction
Was the primordial universe turbulent or non-turbulent soon after the Big Bang? How did the hydrodynamic state of the early universe affect the formation of structure from gravitational forces, and how did the formation of structure by gravity affect the hydrodynamic state of the flow? What can be said about the dark matter that comprises $`99.9\%`$ of the mass of the universe according to most cosmological models? Space telescope measurements show that answers to these questions persist, literally frozen, as fossils of the primordial turbulence and non-turbulence that controlled structure formation, contrary to standard cosmology, which relies on the erroneous Jeans 1902 linear-inviscid-acoustic theory and a variety of associated misconceptions (e.g., cold dark matter). When effects of viscosity, turbulence, and diffusion are included, vastly different structure scenarios and a clear explanation for the dark matter emerge . From Gibson’s 1996 theory the baryonic (ordinary) dark matter is comprised of proto-globular-star-cluster (PGC) clumps of hydrogenous planetoids termed “primordial fog particles” (PFPs), observed by Schild 1996 as “rogue planets … likely to be the missing mass” of a quasar lensing galaxy . The weakly collisional non-baryonic dark matter diffuses to form outer halos of galaxies and galaxy clusters .
## 2 Fluid mechanics of structure formation
Before the $`1989`$ Cosmic Microwave Background Experiment (COBE) satellite, it was generally assumed that the fluid universe produced by the hot Big Bang singularity must be enormously turbulent, and that galaxies were nucleated by density perturbations produced by this primordial turbulence. George Gamow $`1954`$ suggested galaxies were a form of “fossil turbulence”, thus coining a very useful terminology for the description of turbulence remnants in the stratified ocean and atmosphere, Gibson $`1980`$–$`1999`$. Other galaxy models based on turbulence were proposed by von Weizsacker $`1951`$, Chandrasekhar $`1952`$, Ozernoi and colleagues in $`1968`$–$`1971`$, Oort $`1970`$, and Silk and Ames $`1972`$. All such theories were rendered moot by COBE measurements showing temperature fluctuation values $`\delta T/T`$ of only $`10^{-5}`$ at $`300,000`$ years, compared to at least $`10^{-2}`$ for the plasma if it were turbulent. At this time, the opaque plasma of hydrogen and helium had cooled to $`3,000`$ K and become a transparent neutral gas, revealing a remarkable photograph of the universe as it existed at $`10^{13}`$ s, with spectral redshift z of $`1100`$ due to straining of space at rate $`\gamma \approx 1/t`$.
Why was the primordial plasma before $`300,000`$ years not turbulent? Steady inviscid flows are absolutely unstable. Turbulence always forms in flows with Reynolds number $`Re=\delta vL/\nu `$ exceeding $`Re_{cr}\approx 100`$, where $`\nu `$ is the kinematic viscosity of a fluid with velocity differences $`\delta v`$ on scale $`L`$, Landau-Lifshitz 1959. Thus either $`\nu `$ at $`10^{13}`$ s had an unimaginably large value of $`9\times 10^{27}`$ m<sup>2</sup> s<sup>-1</sup> at horizon scales $`L_H=ct`$ with light speed velocity differences $`c`$, or else gravitational structures formed in the plasma at earlier times and viscosity plus buoyancy forces of the structures prevented strong turbulence.
## 3 Fossils of first structure (proto-supervoids)
The power spectrum of temperature fluctuations $`\delta T`$ measured by COBE peaks at a length $`3\times 10^{20}`$ m which is only $`1/10`$ the horizon scale ct, suggesting the first structure formed earlier at $`10^{12}`$ s ($`30,000`$ years). The photon viscosity of the plasma $`\nu =c/(n\sigma _\tau )`$ was $`4\times 10^{26}`$ m<sup>2</sup> s<sup>-1</sup> then, with free electron number density $`n=10^{10}`$ m<sup>-3</sup> and $`\sigma _\tau `$ the Thomson cross section for Compton scattering. The baryon density $`\rho `$ was $`3\times 10^{-17}`$ kg m<sup>-3</sup>, which matches the density of present globular-star-clusters as a fossil of the weak turbulence at this time of first structure. The fragmentation mass $`\rho (ct)^3`$ of $`10^{46}`$ kg matches the observed mass of superclusters of galaxies, the largest structures of the universe. Because $`Re\approx Re_{cr}`$, the horizon scale $`ct=3\times 10^{20}`$ m matches the Schwarz viscous scale $`L_{SV}=(\gamma \nu /\rho G)^{1/2}`$ at which viscous forces $`F_V=\rho \nu \gamma L^2`$ equal gravitational forces $`F_G=\rho ^2GL^4`$, and also the Schwarz turbulence scale $`L_{ST}=\epsilon ^{1/2}/(\rho G)^{3/4}`$ at which inertial-vortex forces $`F_I=\rho \epsilon ^{2/3}L^{8/3}`$ equal $`F_G`$, where $`\epsilon `$ is the viscous dissipation rate . Further fragmentation to proto-galaxy scales is predicted in this scenario, with the nonbaryonic dark matter diffusing to fill the voids between constant density proto-supercluster to proto-galaxy structures for scales smaller than the diffusive Schwarz scale $`L_{SD}=(D^2/\rho G)^{1/4}`$, where $`D`$ is the diffusivity of the nonbaryonic dark matter . Fragmentation of the nonbaryonic material to form superhalos implies $`D=10^{28}`$ m<sup>2</sup> s<sup>-1</sup>, from observation of present superhalo sizes $`L_{SD}`$ and densities $`\rho `$ , trillions of times larger than $`D`$ for H-He gas with the same $`\rho `$.
## 4 Fossils of the first condensation (as “fog”)
Photon decoupling dramatically reduced viscosity values to $`\nu =3\times 10^{12}`$ m<sup>2</sup> s<sup>-1</sup> in the primordial gas of the nonturbulent $`10^{20}`$ m size proto-galaxies, with $`\gamma =10^{-13}`$ s<sup>-1</sup> and $`\rho =10^{-17}`$ kg m<sup>-3</sup>, giving a PFP fragmentation mass range $`M_{SV}`$ to $`M_{ST}`$ of $`10^{23}`$–$`10^{25}`$ kg, the mass of a small planet. Pressure decreases in voids during fragmentation as the density decreases, to maintain constant temperature from the perfect gas law $`T=p/\rho R`$, where $`R`$ is the gas constant, for scales smaller than the acoustic scale $`L_J=V_S/(\rho G)^{1/2}`$ of Jeans $`1902`$, where $`V_S`$ is the sound speed. However, the pressure cannot propagate fast enough in voids larger than $`L_J`$ so they cool. Hence radiation from the warmer surroundings can heat such large voids, increasing their pressure and accelerating the void formation, causing a fragmentation within proto-galaxies at the Jeans mass of $`10^{35}`$ kg, the mass of globular-star-clusters. These proto-globular-cluster (PGC) clumps of PFPs provide the materials of construction for everything else to follow, from stars to people. Leftover PGCs and PFPs thus comprise present galactic dark matter inner halos, which typically have expanded to about $`10^{21}`$ m (30 kpc) around the core and exceed the luminous (star) mass by factors of $`10`$–$`30`$.
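The numbers quoted in the last two sections follow directly from the scale definitions. A minimal check, taking the post-decoupling values as the assumed round numbers $`\gamma \approx 10^{-13}`$ s<sup>-1</sup> (i.e. $`1/t`$ at $`t\approx 10^{13}`$ s), $`\nu \approx 3\times 10^{12}`$ m<sup>2</sup> s<sup>-1</sup>, $`\rho \approx 10^{-17}`$ kg m<sup>-3</sup>, and $`D\approx 10^{28}`$ m<sup>2</sup> s<sup>-1</sup> for the non-baryonic component:

```python
import math

# Evaluate the Schwarz scales from their definitions with decoupling-era
# values taken from the discussion (assumed round numbers).
G = 6.674e-11      # m^3 kg^-1 s^-2
gamma = 1e-13      # s^-1, strain rate ~ 1/t at t ~ 1e13 s
nu = 3e12          # m^2 s^-1, gas kinematic viscosity after decoupling
rho = 1e-17        # kg m^-3, proto-galaxy gas density
D = 1e28           # m^2 s^-1, assumed non-baryonic diffusivity

L_SV = math.sqrt(gamma * nu / (rho * G))     # Schwarz viscous scale, m
M_SV = rho * L_SV**3                         # viscous fragmentation mass, kg
L_SD = (D**2 / (rho * G)) ** 0.25            # diffusive Schwarz scale, m
```

The viscous fragmentation mass comes out near $`10^{23}`$ kg, the small-planet (PFP) end of the quoted range, and the diffusive scale near $`10^{21}`$ m, galaxy-halo size.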
## 5 Observations
Observations of quasar image twinkling frequencies reveal that the point mass objects which dominate the mass of the lens galaxy are not stars, but “rogue planets… likely to be the missing mass”, Schild $`1996`$, independently confirming this prediction of Gibson $`1996`$. Other evidence of the predicted primordial fog particles (PFPs) is shown in Hubble Space Telescope photographs, such as thousands of $`10^{25}`$ kg “cometary globules” in the halo of the Helix planetary nebula and possibly like numbers in the Eskimo planetary nebula halo. These dying stars are very hot ($`100,000`$ K versus $`6,000`$ K normal) so that many PFPs nearby can be brought out of cold storage by evaporation to produce the $`10^{13}`$ m protective cocoons that make them visible to the HST at $`10^{19}`$ m distances.
## 6 Summary and conclusions
The Figure summarizes the evolution of structure and turbulence in the early universe, as inferred from the present nonlinear fluid mechanical theory. It is very different, very early, and very gentle compared to the standard model, where structure formation in baryonic matter is forbidden in the plasma epoch because $`L_J`$ is larger than $`L_H=ct`$ and galaxies collapse at 140 million years (redshift z=20) producing $`10^{36}`$ kg Population III superstars that explode and re-ionize the universe to explain the missing gas (sequestered in PFPs). No such stars, no galaxy collapse, and no re-ionization occurs in the present theory. To produce the structure observed today, the concept “cold dark matter” (CDM) was invented; that is, a hypothetical non-baryonic fluid of “cold” (low speed) collisionless particles with adjustable $`L_J`$ small enough to produce gravitational potential wells to drive galaxy collapse. Cold dark matter is unnecessary in the present theory. Even if it exists it would not behave as required by the standard model. Its necessarily small collision cross section requires $`L_{SD}>L_J`$, so it would diffuse out of its own well, without fragmentation if $`L_{SD}>L_H`$. The immediate formation of “primordial fog particles” from all the neutral gas of the universe emerging from the plasma epoch permits their gradual accretion to form the observed small ancient stars in dense globular-star-clusters known to be only slightly younger than the universe. These could never form in the intense turbulence of galaxy collapse in the standard model because $`L_{ST}`$ scales would be too large.
# COLLISIONAL DARK MATTER AND THE STRUCTURE OF DARK HALOS
## 1. Introduction
Cold dark matter scenarios within the standard inflationary universe have proved remarkably successful in fitting a wide range of observations. While structure on large scales is well reproduced by the models, the situation is more controversial in the highly nonlinear regime. Navarro, Frenk & White (1995, 1996, 1997; NFW) claimed that the density profiles of near-equilibrium dark halos can be approximated by a “universal” form with singular behaviour at small radii. Higher resolution studies have confirmed this result, finding even more concentrated dark halos than the original NFW work and showing, in addition, that CDM halos are predicted to have a very rich substructure with of order 10% of their mass contained in a host of small subhalos (Frenk et al 1999, Moore et al 1999a, 1999b, Ghigna et al 1999, Klypin et al 1999, Gottloeber et al 1999, White & Springel 1999). Except for a weak anticorrelation of concentration with mass, small and large mass halos are found to have similar structure. Many of these studies note that the predicted concentrations appear inconsistent with published data on the rotation curves of dwarf galaxies, and that the amount of substructure exceeds that seen in the halo of the Milky Way (see also Moore 1994; Flores and Primack 1994; Kravtsov et al 1998; Navarro 1998).
It is unclear whether these discrepancies reflect a fundamental problem with the Cold Dark Matter picture, or are caused by overly naive interpretation of the observations of the galaxy formation process (see Eke, Navarro & Frenk 1998; Navarro & Steinmetz 1999; van den Bosch 1999). On the assumption that an explanation should be sought in fundamental physics, Spergel & Steinhardt (1999) have argued that a large cross-section for elastic collisions between CDM particles may reconcile data and theory. They suggest a number of modifications of standard particle physics models which could give rise to such self-interacting dark matter, and claim that cross-sections which lead to a transition between collisional and collisionless behaviour at radii of order 10 – 100 kpc in galaxy halos are preferred on astrophysical grounds. Ostriker (1999) argues that the massive black holes observed at the centres of many galactic spheroids may arise from the accretion of such collisional dark matter onto stellar mass seeds. Miralda-Escude (2000) argues that such dark matter will produce galaxy clusters which are rounder than observed and so can be excluded.
At early times the CDM distribution is indeed cold, so the evolution of structure is independent of the collision cross-section of the CDM particles. At late times, however, a large cross-section leads to a small mean free path and so to fluid behaviour in collapsed regions. In this Letter we explore how the structure of nonlinear objects (“dark halos”) is affected by this change. We simulate the formation of a massive halo from CDM initial conditions in two limits: purely collisionless dark matter and “fluid” dark matter. We do not try to simulate the more complex intermediate case in which the mean free path is large in the outer regions of halos but small in their cores. If this intermediate case (which is the one favoured by Spergel & Steinhardt (1999) and by Ostriker (1999)) produces nonlinear structure intermediate between the two extremes we do treat, then our results show that collisional CDM would give poorer fits to the rotation curves of dwarf galaxies than standard collisionless CDM. Further work is needed to see if this is indeed the case.
## 2. THE N-BODY/SPH SIMULATION
Our simulations use the parallel tree code GADGET developed by Springel (1999, see also Springel, Yoshida & White 2000b). Our chosen halo is the second most massive cluster in the $`\mathrm{\Lambda }`$CDM simulation of Kauffmann et al (1999). We analyse its structure in the original simulation and in two higher resolution resimulations. In the collisionless case these are the lowest resolution members of a set of four resimulations carried out by Springel et al (2000a) using similar techniques to those of NFW. Details may be found there and in Springel et al (2000b). These collisionless resimulations use GADGET as an N-body solver, whereas our collisional resimulations start from identical initial conditions but use the code’s Smoothed Particle Hydrodynamics (SPH) capability to solve the fluid equations. The SPH method regards each simulation particle as a “cloud” of fluid with a certain kernel shape. These clouds interact with each other over a length scale which is determined by the local density and so varies both in space and time. The basic parameters of our simulations are tabulated in Table 1, where N$`_{\text{tot}}`$ is the total number of particles in the simulation, N$`_{\text{high}}`$ the number of particles in the central high-resolution region, $`m_p`$ is the mass of each high-resolution particle, and $`l_s`$ stands for the gravitational softening length. Our cosmological model is flat with matter density $`\mathrm{\Omega }_m=0.3`$, cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ and expansion rate $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. It has a CDM power spectrum normalised so that $`\sigma _8=0.9`$. The virial mass of the final cluster is $`M_{200}=7.4\times 10^{14}h^{-1}M_{\odot }`$, determined as the mass within the radius $`R_{200}=1.46h^{-1}`$ Mpc where the enclosed mean density is 200 times the critical value.
## 3. RESULTS
On scales larger than the final cluster, the matter distribution in all our simulations looks similar. This is no surprise. The initial conditions in each pair of simulations are identical, so particle motions only begin to differ once pressure forces become important. Furthermore the initial perturbation fields in simulations of differing resolution are identical on all scales resolved in both models, and even S0 resolves structure down to scales well below that of the cluster. As is seen clearly in Figure 1, a major difference between the collisional and collisionless models is that the final cluster is nearly spherical in the former case and quite elongated in the latter. The axial ratios determined from the inertia tensors of the matter at densities exceeding 100 times the critical value are 1.00:0.96:0.84 and 1.00:0.72:0.63 respectively. Again this is no surprise. A slowly rotating fluid body in hydrostatic equilibrium is required to be nearly spherical, but no such constraint applies in the collisionless case (see also Miralda-Escude 2000).
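The axial ratios quoted above follow from the eigenvalues of the inertia (second-moment) tensor of the selected particles. A minimal sketch, using the common convention that each axis length scales as the square root of the corresponding eigenvalue of the mass-weighted tensor I_jk = Σ_i m_i x_{i,j} x_{i,k} (the paper may use a variant of this definition):

```python
import numpy as np

def axial_ratios(pos, mass):
    """Axis ratios a:b:c (major axis normalized to 1) from the mass-weighted
    second-moment tensor of positions relative to the center of mass."""
    com = np.average(pos, weights=mass, axis=0)
    x = pos - com
    I = np.einsum('i,ij,ik->jk', mass, x, x)     # second-moment tensor
    eig = np.sort(np.linalg.eigvalsh(I))[::-1]   # eigenvalues, descending
    axes = np.sqrt(eig)
    return axes / axes[0]

# Sanity check on a 1 : 0.7 : 0.6 triaxial Gaussian particle cloud
rng = np.random.default_rng(1)
pos = rng.normal(size=(200_000, 3)) * np.array([1.0, 0.7, 0.6])
ratios = axial_ratios(pos, np.ones(len(pos)))
```

Applied to the particles above a chosen density threshold, this recovers ratios near unity for the nearly spherical fluid cluster and markedly smaller minor axes for the collisionless one.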
In Figure 2 we show circular velocity profiles for our simulations. These are defined as $`V_c(r)=\sqrt{GM(r)/r}`$, where $`M(r)`$ is the mass within a sphere radius $`r`$; they are plotted at radii between 2$`l_s`$ and $`5R_{200}`$. They agree reasonably well along each sequence of increasing resolution, showing that our results have converged numerically on these scales. Along the fluid sequence the profiles resemble the collisionless case over the bulk of the cluster. In the core, however, there is a substantial and significant difference; the fluid cluster has a substantially steeper central cusp. The difference extends out to radii of about $`0.5R_{200}`$ and has the wrong sign to improve the fit of CDM halos to published rotation curves for dwarf and low surface brightness galaxies.
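The profile itself is straightforward to extract from particle data; a minimal sketch follows (the kpc, solar-mass, km s<sup>-1</sup> units are a conventional choice assumed here, not specified in the text):

```python
import numpy as np

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

def circular_velocity(pos, mass, radii, center):
    """V_c(r) = sqrt(G M(<r)/r): enclosed mass from sorted particle radii."""
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)
    M_enc = np.interp(radii, r[order], np.cumsum(mass[order]))
    return np.sqrt(G * M_enc / radii)

# Point-mass sanity check: 1e12 Msun at the origin gives ~207 km/s at 100 kpc
pos = np.zeros((1, 3))
V_c = circular_velocity(pos, np.array([1.0e12]), np.array([100.0]), np.zeros(3))
```

Evaluating this on radii between twice the softening length and the virial radius reproduces the kind of convergence comparison described here.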
(Note that in the fluid case we expect small halos to approximate scaled down but slightly more concentrated versions of cluster halos, as in the collisionless case studied by Moore et al (1999a); this scaling will fail for intermediate cross-sections because the ratio of the typical mean free path to the size of the halo will increase with halo mass.)
In Figure 3 we compare the level of substructure within $`R_{200}`$ in our various simulations. Subhalos are identified using the algorithm SUBFIND by Springel (1999) which defines them as maximal, simply connected, gravitationally self-bound sets of particles which are at higher local density than all surrounding cluster material. (Our SPH scheme defines a local density in the neighbourhood of every particle.) Using this procedure we find that 1.0%, 3.4% and 6.7% of the mass within $`R_{200}`$ is included in subhalos in S0, S1 and S2 respectively. Along the fluid sequence the corresponding numbers are 3.0%, 6.4% and 3.1%. The difference in the total amount results primarily from the chance inclusion or exclusion of infalling massive halos near the boundary at $`R_{200}`$. In Figure 3 we show the mass distributions of these subhalos. We plot each simulation to a mass limit of 40 particles, corresponding approximately to the smallest structures we expect to be adequately resolved in our SPH simulations. Along each resolution sequence the agreement is quite good, showing this limit to be conservative. For small subhalo masses there is clearly less substructure in the fluid case, but the difference is more modest than might have been anticipated.
## 4. Summary and Discussion
An interesting question arising from our results is why our fluid clusters have more concentrated cores than their collisionless counterparts. The density profile of an equilibrium gas sphere can be thought of as being determined by its Lagrangian specific entropy profile, i.e. by the function $`m(s)`$ defined to be the mass of gas with specific entropy less than $`s`$. The larger the mass at low specific entropy, the more concentrated the resulting profile. Thus our fluid clusters have more low entropy gas than if their profiles were similar to those of the collisionless clusters. The entropy of the gas is produced by a variety of accretion and merger shocks during the build-up of the cluster, so the strong central concentration reflects a relatively large amount of weakly shocked gas.
We study gas shocking in our models by carrying out one further simulation. We take the initial conditions of S1 and replace each particle by two superposed particles, a collisionless dark matter particle containing 95% of the original mass and a gas particle containing 5%. These two then move together until SPH pressure forces are strong enough to separate them. The situation is similar to the standard 2-component model for galaxy clusters except that our chosen gas fraction is significantly smaller than observed values.
In this mixed simulation the evolution of the collisionless matter (and its final density profile) is almost identical to that in the original S1. This is, of course, a consequence of the small gas fraction we have assumed. In agreement with the simulations in Frenk et al (1999) we find that the gas density profile parallels that of the dark matter over most of the cluster but is significantly shallower in the inner $`200h^{-1}`$kpc. Comparing this new simulation (S1M) with its fluid counterpart (S1F) we find that in both cases the gas which ends up near the cluster centre lay at the centre of the most massive cluster progenitors at $`z=13`$. In addition it is distributed in a similar way among the progenitors in the two cases. In Figure 4 we compare the specific entropy profiles of the cluster gas. These are scaled so that they would be identical if each gas particle had the same shock history in the two simulations. Over most of the cluster there is indeed a close correspondence, but near the centre the gas in the mixed simulation has higher entropy. (This corresponds roughly to $`r<100h^{-1}`$kpc.)
As Figure 4 shows, this is partly a numerical artifact; the two entropies differ only at radii where two-body heating of the gas by the dark matter particles is predicted to be important in the mixed case. (The effect is absent in the pure fluid simulation.) The weaker shocking in the fluid case is evident from the equivalent “entropy” profile of S1 in Figure 4. This lies between those of the two fluid simulations, and in particular significantly above that of S1F in the central regions.
In conclusion the effective heating of gas by shocks in the fluid case is similar to but slightly weaker than that in the mixed case. This is presumably a reflection of the fact that the detailed morphology of the evolution also corresponds closely. The difference in final density profile is a consequence of three effects. In the mixed case the gas is in equilibrium within the external potential generated by the dark matter, whereas in the pure fluid case it must find a self-consistent equilibrium. In addition the core gas is heated by two-body effects in the mixed case. Finally in the pure fluid case the core gas experiences weaker shocks.
Overall our results show that in the large cross-section limit collisional dark matter is not a promising candidate for improving the agreement between the predicted structure of CDM halos and published data on galaxies and galaxy clusters. The increased concentration at halo centre will worsen the apparent conflict with dwarf galaxy rotation curves. Furthermore, clusters are predicted to be nearly spherical and galaxy halos to have similar mass in substructure to the collisionless case, although with fewer low mass subhalos. Intermediate cross-sections would lead to collisional behaviour in dense regions and collisionless behaviour in low density regions with a consequent breaking of the approximate scaling between high and low mass halos. The resulting structure may not lie between the two extremes we have simulated. Self-interacting dark matter might then help resolve the problems with halo structure in CDM models, if indeed these problems turn out to be real rather than apparent.
SW thanks Jerry Ostriker and Mike Turner for stimulating discussions which started him thinking about this project.
|
no-problem/0002/hep-th0002233.html
|
ar5iv
|
text
|
# The exotic Galilei group and the “Peierls substitution”
## 1 Introduction
The rule called “Peierls substitution” says that a charged particle in the plane subject to a strong magnetic field $`B`$ and to a weak electric potential $`V(x,y)`$ will stay in the lowest Landau level, so that its energy is approximately $`E=eB/(2m)+ϵ`$, where $`ϵ`$ is an eigenvalue of the potential $`eV(X,Y)`$ alone. The operators $`X`$ and $`Y`$ satisfy, however, the anomalous commutation relation
$$[X,Y]=\frac{i}{eB}.$$
(1)
Similar ideas emerged, more recently, in the context of the Fractional Quantum Hall Effect , where it is argued that the system condensates into a collective ground state. This “new state of matter” is furthermore represented by the “Laughlin” wave functions (22) below, which all belong to the lowest Landau level .
Dunne, Jackiw, and Trugenberger justify the Peierls rule by considering the $`m\to 0`$ limit, reducing the classical phase space from four to two dimensions, parametrized by non-commuting coordinates $`X`$ and $`Y`$, whereas the potential $`V(X,Y)`$ becomes an effective Hamiltonian. While this yields the essential features of the Peierls substitution, it has the disadvantage that the divergent ground state energy $`eB/(2m)`$ has to be removed by hand. In this Letter, we derive a similar model from first principles, without resorting to such an unphysical limit.
First we construct, following Souriau , a model for a non-relativistic particle in the plane associated with the two-parameter central extension of the Galilei group. Our model, parametrized by the mass, $`m`$, and a new invariant, $`\kappa `$, turns out to be the non-relativistic limit of the relativistic anyon model of Jackiw and Nair .
For a free particle the usual equations of motion hold unchanged and $`\kappa `$ only contributes to the conserved quantities, (6). More importantly, it yields non-commuting position coordinates, see below. Minimal coupling to an external gauge field unveils, however, new and interesting phenomena, which seem to have escaped attention so far. The interplay between the internal structure associated with $`\kappa `$ and the external magnetic field $`B`$ yields, in fact, an effective mass $`m^{*}`$. For vanishing effective (rather than real) mass, we get some curiously simple motions, which satisfy a kind of generalized Hall law, Eq. (11) below. For a constant electric field the usual cycloidal motions degenerate to a pure drift of their guiding centers alone. Such motions form a two-dimensional submanifold of the four-dimensional space of motions. Reduction to this subspace is the classical manifestation of Laughlin’s condensation into a collective motion. Then the quantization of the reduced model allows us to recover the Laughlin description.
## 2 Exotic particle in the plane
First we construct a classical model of our “exotic” particle in the plane. Let us start with the Faddeev-Jackiw framework . A mechanical system is described by the classical action $`{\displaystyle \int \theta }`$ defined through the “Lagrange one-form” $`\theta =a_\alpha d\xi ^\alpha -Hdt`$, where $`\xi =(\stackrel{}{r},\stackrel{}{v})`$ is a point in phase space. The Euler-Lagrange equation is expressed using $`\omega ={\scriptscriptstyle \frac{1}{2}}\omega _{\alpha \beta }d\xi ^\alpha \wedge d\xi ^\beta `$, the $`t=const`$ restriction of the two-form $`d\theta `$, as
$$\omega _{\alpha \beta }\dot{\xi }^\beta =_{\xi ^\alpha }H.$$
(2)
For a system with a first-order Lagrangian $`ℒ=ℒ(\stackrel{}{x},\stackrel{}{v},t)`$, for example, one can choose in particular $`\theta =ℒdt`$; when $`\omega `$ is regular, we get Hamilton’s equations. The construction works, however, under more general conditions: on the one hand, not all one-forms $`\theta `$ come from a Lagrangian $`ℒ`$ which would only depend on position, velocity and time . On the other hand, the two-form $`\omega `$ can suffer singularities, necessitating “Hamiltonian reduction”, which amounts to eliminating some of the variables and writing the reduced one-form using intrinsic canonical coordinates on the reduced manifold .
The Faddeev-Jackiw framework is actually equivalent to that of Souriau , who proposed to describe the dynamics by a closed two-form, $`\sigma `$, of constant rank on the “evolution space” $`𝒱`$ of positions $`\stackrel{}{r}`$, velocities $`\stackrel{}{v}`$, and time $`t`$. Then the classical motions are the integral curves of the null space of $`\sigma `$, viz
$$(\dot{\stackrel{}{r}},\dot{\stackrel{}{v}},\dot{t})\mathrm{ker}\sigma .$$
(3)
Writing $`\sigma `$ as $`\omega -dH\wedge dt`$, the Euler-Lagrange equations (2) are recovered. Being closed, $`\sigma `$ is furthermore locally $`d\theta `$, showing that the two approaches are indeed equivalent.
Working with the two-form $`\sigma `$ is actually more convenient than working with the one-form $`\theta `$. For example, a symmetry is a transformation which leaves $`\sigma `$ invariant, while the Lagrange one-form $`\theta `$ changes by a total derivative.
Souriau actually goes one step farther, and (as advocated also by Crnkovic and Witten ), argues that the fundamental space to look at is $`ℳ`$, the space of solutions of the equations of motion. Souriau calls this abstract substitute of the phase space the space of motions. In our case, $`ℳ`$ is simply the set of motion curves in the evolution space $`𝒱`$.
Our classical particle model is then constructed as follows. Let us recall that the elementary particles correspond to irreducible, unitary representations of their symmetry groups. According to geometric quantization, though, these representations are associated with some coadjoint orbits of the symmetry group ; the idea of Souriau was to view these orbits, endowed with their canonical two-forms, as spaces of motions.
Now, as discovered by Lévy-Leblond , the planar Galilei group admits a two-parameter central extension, parametrized by two real constants $`m`$ and $`\kappa `$ (see, e.g., ). The new invariant $`\kappa `$ has the dimension of $`ℏ/c^2`$. The coadjoint orbits of the doubly-extended Galilei group coincide with those of the singly-extended one, but carry a modified symplectic structure. The interesting ones are those associated with the mass $`m>0`$ and $`\kappa \neq 0`$; they are $`ℳ=𝐑^4`$ with coordinates $`(v_i)`$ and $`(q^i)`$, endowed with the noncanonical twisted-in-the-wrong-way symplectic two-form
$$\omega =mdv_i\wedge dq^i+{\scriptscriptstyle \frac{1}{2}}\kappa \epsilon _{ij}dv^i\wedge dv^j.$$
(4)
Owing to the new term in (4), the Poisson bracket of the configuration coordinates is nonvanishing, $`\{x,y\}=\kappa /m^2`$. For these orbits, the evolution space is $`𝒱=ℳ\times 𝐑≃𝐑^5`$, endowed with the two-form
$$\sigma =mdv_i\wedge (dr^i-v^idt)+{\scriptscriptstyle \frac{1}{2}}\kappa \epsilon _{ij}dv^i\wedge dv^j.$$
(5)
This two-form is exact, namely $`\sigma =d\theta `$ with $`\theta =mv_idr^i-{\scriptscriptstyle \frac{1}{2}}m|\stackrel{}{v}|^2dt+{\scriptscriptstyle \frac{1}{2}}\kappa ϵ_{ij}v^idv^j`$. However, because of the “exotic” contribution, it is not of the form $`ℒdt`$ with a first-order Lagrangian $`ℒ`$ ; thus, this model has no ordinary Lagrangian. Both generalized formalisms work nevertheless perfectly, and we choose to pursue along these lines. (Let us mention that a Lagrangian could be constructed—but it would be acceleration-dependent .)
Most interestingly, the “exotic” term $`{\scriptscriptstyle \frac{1}{2}}\kappa \epsilon _{ij}dv^i\wedge dv^j`$ in (4) has already been used, namely to describe relativistic anyons ; our presymplectic form (5) appears to be the non-relativistic limit of that in Ref. when their spin, $`s`$, is identified with our parameter $`\kappa `$. (We believe in fact that our particles are indeed non-relativistic anyons.) Group contraction of the (trivially) centrally-extended Poincaré group yields furthermore the doubly-extended planar Galilei group .
It is readily seen that the modified two-form (5) yields the usual equations of free motion, despite the presence of the new invariant $`\kappa `$. The two-form (5) on $`𝒱`$ flows down to $`ℳ`$ as $`\omega `$ in (4) along the projection $`(\stackrel{}{r},\stackrel{}{v},t)\mapsto (\stackrel{}{q},\stackrel{}{v})`$, where $`\stackrel{}{q}=\stackrel{}{r}-\stackrel{}{v}t`$. The space of free motions is hence $`ℳ`$, endowed with the symplectic form $`\omega `$.
For completeness, let us mention that $`\sigma `$ is invariant with respect to the natural action of the Galilei group on $`𝒱`$ whose “moment map” consists of the conserved quantities
$$\{\begin{array}{cc}\hfill ȷ& =m\stackrel{}{r}\times \stackrel{}{v}+{\scriptscriptstyle \frac{1}{2}}\kappa |\stackrel{}{v}|^2,\hfill \\ & \\ \hfill k_i& =m(r_i-v_it)+\kappa \epsilon _{ij}v^j,\hfill \\ & \\ \hfill p_i& =mv_i,\hfill \\ & \\ \hfill h& ={\scriptscriptstyle \frac{1}{2}}m|\stackrel{}{v}|^2.\hfill \end{array}$$
(6)
These same quantities were found before (see ), using rather different methods. Let us observe that, owing to the exotic structure, the angular momentum $`ȷ`$ and the boosts $`\stackrel{}{k}`$ in (6) contain new terms (which are, however, also separately conserved). By construction, they satisfy the commutation relations of the doubly-extended planar Galilei group which only differ from the usual ones in that the boosts no longer commute, $`\{k_i,k_j\}=\kappa \epsilon _{ij}`$, cf. .
Let us now put our charged particle into an external electromagnetic field $`F=(\stackrel{}{E},B)`$. Applying the minimal coupling prescription $`\sigma \to \sigma +eF`$, the system is now described by the two-form
$$\sigma =(mdv_i-eE_idt)\wedge (dr^i-v^idt)+{\scriptscriptstyle \frac{1}{2}}\kappa \epsilon _{ij}dv^i\wedge dv^j+{\scriptscriptstyle \frac{1}{2}}eB\epsilon _{ij}dr^i\wedge dr^j$$
(7)
on the evolution space $`𝒱`$. It is interesting to note that our two-form (7)—which is again exact if $`F`$ is exact, but is in no way Lagrangian—is the non-relativistic limit of the relativistic expression in Ref. . A short computation shows that a tangent vector $`(\delta \stackrel{}{r},\delta \stackrel{}{v},\delta t)`$ satisfies the Euler-Lagrange equations (3) when
$$\begin{array}{ccc}\{\begin{array}{c}m^{*}\delta r^i=m\left(v^i-\frac{e\kappa }{m^2}\epsilon _j^iE^j\right)\delta t,\hfill \\ \\ m\delta v^i=e\left(E^i\delta t+B\epsilon _j^i\delta r^j\right),\hfill \\ \\ mv_i\delta v^i=eE_i\delta r^i,\hfill \end{array}& \text{where}& m^{*}=m-\frac{\kappa eB}{m}.\end{array}$$
(8)
If the effective mass $`m^{*}`$ is nonzero, the third equation is automatically satisfied; the middle one becomes
$$m^{*}\delta v^i=e\left(E^i+B\epsilon _j^iv^j\right)\delta t.$$
(9)
Thus, for $`\kappa \neq 0`$, the velocity $`\delta \stackrel{}{r}/\delta t`$ and the “momentum” $`\stackrel{}{v}`$ are different (not even parallel); it is the latter which satisfies the Lorentz equations of motion (9) with effective mass $`m^{*}`$.
If, however, the effective mass $`m^{*}`$ vanishes, i.e., when the magnetic field $`B`$ takes the critical (constant) value
$$B=\frac{m^2}{e\kappa },$$
(10)
then $`\sigma `$ suffers singularities. The curious “motions” with instantaneous propagation can be avoided and we can still have consistent equations of motion, provided $`v^i=(e\kappa /m^2)\epsilon _j^iE^j`$. But this latter condition, together with Eq. (10), astonishingly reads
$$v^i=\frac{1}{B}\epsilon _j^iE^j.$$
(11)
This generalized Hall law requires that particles move with “momentum” $`\stackrel{}{v}`$ perpendicular to the electric field and determined by the ratio of the (possibly position and time dependent) electric and the (constant) magnetic fields.
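The content of Eqs. (8)–(11) can be illustrated numerically: for $`m^{*}\neq 0`$, Eq. (9) makes the “momentum” circle at frequency $`eB/m^{*}`$ around the constant drift value $`v_d^i=(1/B)\epsilon _j^iE^j`$, so that its time average over one cyclotron period already obeys the generalized Hall law; as $`m^{*}\to 0`$ only the drift survives. The following sketch (illustrative values with $`e=B=m^{*}=1`$; the sign convention $`\epsilon _2^1=+1`$ is an assumption) integrates Eq. (9) with a standard fourth-order Runge–Kutta step:

```python
import math

def rk4_step(v, dt, E, B, e, m_star):
    # Eq. (9): m* dv/dt = e (E + B eps v), with (eps v)^1 = v^2,
    # (eps v)^2 = -v^1 under the assumed convention eps^1_2 = +1
    def f(v):
        return ((e / m_star) * (E[0] + B * v[1]),
                (e / m_star) * (E[1] - B * v[0]))
    k1 = f(v)
    k2 = f((v[0] + 0.5 * dt * k1[0], v[1] + 0.5 * dt * k1[1]))
    k3 = f((v[0] + 0.5 * dt * k2[0], v[1] + 0.5 * dt * k2[1]))
    k4 = f((v[0] + dt * k3[0], v[1] + dt * k3[1]))
    return (v[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            v[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

# Illustrative parameter values (not from the paper); E along x
e, B, m_star = 1.0, 1.0, 1.0
E = (0.5, 0.0)

# Hall drift value v_d^i = (1/B) eps^i_j E^j
v_drift = (E[1] / B, -E[0] / B)

# Integrate over one cyclotron period T = 2 pi m* / (e B)
T = 2.0 * math.pi * m_star / (e * B)
n = 2000
dt = T / n
v = (0.0, 0.0)
vx_sum = vy_sum = 0.0
for _ in range(n):
    vx_sum += v[0]
    vy_sum += v[1]
    v = rk4_step(v, dt, E, B, e, m_star)

v_mean = (vx_sum / n, vy_sum / n)
print("time-averaged momentum:", v_mean, " Hall drift:", v_drift)
```

The momentum returns to its initial value after one period, and its time average reproduces the Hall drift; for vanishing $`m^{*}`$ the circle shrinks to a point and only the drift remains.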
Assume, from now on, that the electric field $`\stackrel{}{E}=-\stackrel{}{\nabla }V`$ is time-independent. On the three-dimensional submanifold $`𝒲`$ of $`𝒱`$ defined by Eq. (11), the two-form (7) induces a well-behaved closed two-form $`\sigma _𝒲`$ of rank $`2`$. Upon defining the new “position” variables
$$Q^i=r^i-\frac{mE^i}{eB^2},$$
(12)
one readily finds that
$$\sigma _𝒲={\scriptscriptstyle \frac{1}{2}}eB\epsilon _{ij}dQ^i\wedge dQ^j-dH\wedge dt$$
(13)
with the (reduced) Hamiltonian $`H=eV(\stackrel{}{r})+m|\stackrel{}{E}|^2/(2B^2)`$. The second term, here, represents the drift energy. The equations of motion are simply
$$\begin{array}{ccc}\{\begin{array}{c}\dot{Q}^i=\frac{1}{B}\epsilon _j^iE^j,\hfill \\ \dot{H}=0,\hfill \end{array}& & \end{array}$$
(14)
confirming that the Hamiltonian descends to the reduced space of motions. The latter is two-dimensional and endowed with a symplectic two-form, which we call $`\mathrm{\Omega }`$, inherited from $`\sigma _𝒲`$. An easy calculation shows that $`\partial H/\partial Q^i=-eE_i`$, hence
$$H=eV(X,Y)$$
(15)
where $`(X,Y)`$ are coordinates on the reduced space of motions, $`ℳ`$, obtained by integrating the equations of motion (cf. Eq. (17) below). Note that the drift energy has been absorbed into $`H`$ by the redefinition of the position, Eq. (12). Finally, one finds that the coordinates $`X`$ and $`Y`$ on $`ℳ`$ have anomalous Poisson bracket
$$\{X,Y\}=\frac{1}{eB}.$$
(16)
In conclusion, we have established via Eqs (15) and (16) the classical counterpart of the Peierls rule. Let us insist that our construction does not rely on any unphysical limit of the type $`m0`$, rather it uses the new freedom of having a vanishing effective mass.
## 3 Hall motions
Let us assume henceforth that the electric field $`\stackrel{}{E}`$ is constant. The equations of motion are readily solved. For nonzero effective mass $`m^{*}`$, i.e., when the magnetic field does not take the critical value (10), one recovers the usual motion, composed of uniform rotation (but with modified frequency $`eB/m^{*}`$) plus the drift of the guiding center.
When the magnetic field takes the critical value (10) and when the constraint (11) is also satisfied, velocity and “momentum” become the same, $`\stackrel{}{v}=\delta \stackrel{}{r}/\delta t`$, so that the constraint (11) requires that all particles move collectively, according to… Hall’s law! This is understood by noting that for vanishing effective mass $`m^{*}=0`$, the circular motion degenerates to a point, and we are left with the uniform drift of the guiding center alone.
The reduced space of motions $`ℳ`$ (which we suggestively call the space of Hall motions) can now be described explicitly. It is parametrized (see Eqs (12) and (14)) by the coordinates $`(X,Y)\equiv (R^i)`$ where
$$R^i=Q^i-\frac{1}{B}\epsilon _j^iE^jt.$$
(17)
The constraint (11) implies now that $`\delta \stackrel{}{v}=0`$; the induced presymplectic two-form on the three-dimensional submanifold $`𝒲`$ is hence simply $`eF`$. The symplectic structure of the space of Hall motions is therefore
$$\mathrm{\Omega }={\scriptscriptstyle \frac{1}{2}}eB\epsilon _{ij}dR^i\wedge dR^j=eBdX\wedge dY.$$
(18)
The coordinates $`X`$ and $`Y`$ have therefore the Poisson bracket (16).
The symmetries and conserved quantities can now be found. Firstly, the ordinary space translations $`(\stackrel{}{r},\stackrel{}{v},t)\mapsto (\stackrel{}{r}+\stackrel{}{c},\stackrel{}{v},t)`$ are symmetries for the reduced dynamics, since they act on $`ℳ`$ according to $`\stackrel{}{R}\mapsto \stackrel{}{R}+\stackrel{}{c}`$. The associated conserved quantities identified as the “reduced momenta” are linear in the position and time; they read
$$\begin{array}{cc}P_i=eB\epsilon _{ij}R^j=eB\epsilon _{ij}Q^j-eE_it.& \end{array}$$
(19)
(Their conservation can also be checked directly using the Hall law (11)). The reduced momenta do not commute but have rather the Poisson bracket of “magnetic translations”,
$$\{P_X,P_Y\}=eB.$$
(20)
The time translations $`(\stackrel{}{r},\stackrel{}{v},t)\mapsto (\stackrel{}{r},\stackrel{}{v},t+\tau )`$ act on $`ℳ`$ according to $`R^i\mapsto R^i-\epsilon _j^iE^j\tau /B`$, which is a combination of space translations. The reduced Hamiltonian is (see (15))
$$H=-e\stackrel{}{E}\cdot \stackrel{}{R}=-e\stackrel{}{E}\cdot \stackrel{}{r}$$
(21)
and is related to the reduced momenta by $`H=\stackrel{}{E}\times \stackrel{}{P}/B`$. The remaining Galilean generators $`ȷ`$ and $`\stackrel{}{k}`$ are plainly broken by the external fields. (The system admits instead “hidden” symmetries that will be discussed elsewhere.)
It is amusing to compare the reduced expressions with the conserved quantities $`\stackrel{}{p}`$ and $`h`$ associated with these same symmetries acting on the original (but “exotic”) evolution space $`𝒱`$ “before” reduction. We find $`\stackrel{}{p}=m\stackrel{}{v}+\stackrel{}{P}`$ and $`h={\scriptscriptstyle \frac{1}{2}}m|\stackrel{}{v}|^2+eV≃{\scriptscriptstyle \frac{1}{2}}m|\stackrel{}{v}|^2+H`$, where the velocity is of course fixed by the Hall law. Our reduced expressions are hence formally obtained by the “$`m\to 0`$ limit”, as advocated in Ref. .
Our construction here appears as a nice illustration of Hamiltonian reduction . The restriction to the $`t=const`$ phase space of our two-form $`\sigma `$ in (7) is a closed two-form, $`\omega `$. The generic case, $`m^{*}\neq 0`$, above arises when $`\omega `$ is regular, so that the matrix $`\omega `$ is invertible. On the other hand, vanishing effective mass, $`m^{*}=0`$, as in (10), means precisely that $`\omega `$ is singular. In Faddeev-Jackiw language, our reduction amounts to eliminating the velocities by the constraint (11) to yield $`X`$ and $`Y`$ as conjugate canonical variables and $`H`$ as the Hamiltonian, on reduced space. This is seen by writing $`\sigma _𝒲`$, in (13), as $`d\theta _𝒲`$ with Lagrange form $`\theta _𝒲={\scriptscriptstyle \frac{1}{2}}eB\epsilon _{ij}Q^idQ^j-H(\stackrel{}{v},\stackrel{}{Q})dt`$; note that the $`dv^i`$ are absent and the $`\stackrel{}{v}`$ only appear in the Hamiltonian and are determined by (11).
## 4 Quantization of the Hall motions
The quantization is simplified by observing that the space of Hall motions is actually the same as that of a one-dimensional harmonic oscillator with cyclotron frequency $`eB/m`$. The standard procedures can therefore be applied.
Let us assume that we work on the entire plane and introduce the complex coordinate $`Z=\sqrt{eB}(X+iY)`$; the symplectic form (18) is then $`\mathrm{\Omega }=d\overline{Z}\wedge dZ/(2i)`$, hence $`\{\overline{Z},Z\}=2i`$. Now $`\mathrm{\Omega }`$ is exact, $`\mathrm{\Omega }=d\mathrm{\Theta }`$ with the choice $`\mathrm{\Theta }=(\overline{Z}dZ-Zd\overline{Z})/(4i)`$ corresponding to the “symmetric gauge”. The prequantum line-bundle is therefore trivial; it carries a connection with covariant derivative $`D=\partial -\frac{1}{4}\overline{Z}`$ along $`\partial `$. Choosing the antiholomorphic polarization, spanned by $`\overline{\partial }`$, yields the wave “functions” as half-forms $`\psi (Z,\overline{Z})\sqrt{dZ}`$ that are covariantly constant along the polarization, i.e., such that $`\overline{D}\psi =0`$. This yields
$$\psi (Z,\overline{Z})=f(Z)e^{-|Z|^2/4}$$
(22)
with $`f(Z)`$ holomorphic, $`\overline{\partial }f=0`$. The inner product is $`⟨f,g⟩={\displaystyle \int _ℳ}\overline{f(Z)}g(Z)e^{-|Z|^2/2}\mathrm{\Omega }`$. We hence recover the “Bargmann-Fock” wave functions proposed by Laughlin , and by Girvin and Jach to explain the FQHE. These wave functions span a subspace of the Hilbert space of the “unreduced” system and, indeed, represent the ground states in the FQHE . (The details of the projection to the lowest Landau level are not yet completely clear, though .)
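As a small numerical aside (not in the original), the orthogonality of the monomials $`f_n(Z)=Z^n`$ in this inner product is easy to check directly, taking the measure $`\mathrm{\Omega }`$ simply proportional to $`dXdY`$:

```python
import math

def inner(m, n, L=6.0, N=241):
    """Approximate <Z^m, Z^n> = integral of conj(Z)^m Z^n exp(-|Z|^2/2) dX dY
    by a Riemann sum on a symmetric grid over [-L, L]^2."""
    h = 2.0 * L / (N - 1)
    total = 0.0 + 0.0j
    for i in range(N):
        x = -L + i * h
        for j in range(N):
            y = -L + j * h
            z = complex(x, y)
            total += (z.conjugate() ** m) * (z ** n) * math.exp(-(x * x + y * y) / 2.0)
    return total * h * h

I00 = inner(0, 0)   # analytic value: 2*pi
I01 = inner(0, 1)   # analytic value: 0 (orthogonality)
I11 = inner(1, 1)   # analytic value: 4*pi

print(abs(I00), abs(I01), abs(I11))
```

The off-diagonal term vanishes by symmetry, while the diagonal entries reproduce the analytic values $`2\pi `$ and $`4\pi `$: the Gaussian weight is precisely what makes the holomorphic monomials an orthogonal basis of lowest-Landau-level states.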
The quantum operators associated to the polarization-preserving classical observables are readily found . For example, the quantum operators $`\widehat{Z}`$ and $`\widehat{\overline{Z}}`$ are given as $`\widehat{Z}\psi =(2\overline{D}+Z)\psi `$ and $`\widehat{\overline{Z}}\psi =(2D+\overline{Z})\psi `$. Acting on the holomorphic part alone, this yields for the complex momenta $`\widehat{P}=\widehat{Z}`$ and $`\widehat{\overline{P}}=\widehat{\overline{Z}}`$
$$\{\begin{array}{c}[\widehat{Z}f](Z)=Zf(Z),\hfill \\ [\widehat{\overline{Z}}f](Z)=2\partial f(Z).\hfill \end{array}$$
(23)
(See also .) Quantization of polarization-preserving observables takes Poisson brackets into commutators; in particular, we have $`{\scriptscriptstyle \frac{1}{2}}[\widehat{\overline{Z}},\widehat{Z}]=1`$, so that $`\widehat{Z},\widehat{\overline{Z}}`$ and the identity span the Heisenberg algebra, just like their classical counterparts.
Being a combination of translations, the reduced Hamiltonian (21)—different from the usual quadratic oscillator Hamiltonian—becomes $`H=-(\overline{ℰ}Z+ℰ\overline{Z})/(2B)`$, once we have put $`ℰ=\sqrt{eB}(E_1+iE_2)`$. Its quantum counterpart is found as $`\widehat{H}=-(\overline{ℰ}\widehat{Z}+ℰ\widehat{\overline{Z}})/(2B)`$ with (23). For an electric field in the $`x`$ direction, for example,
$$[\widehat{H}f](Z)=-a(2\partial +Z)f(Z),$$
(24)
where $`a={\scriptscriptstyle \frac{1}{2}}E\sqrt{e/B}`$. (The subtle problem of ordering does not arise here.) The eigenfunctions of $`\widehat{H}`$ in (24) are readily found as $`f(Z)=Ae^{-(Z-Z_0)^2/4}`$ associated with the (real) eigenvalue $`ϵ=-aZ_0`$, cf. .
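That these Gaussians solve the eigenvalue problem can be verified directly: for $`f(Z)=e^{-(Z-Z_0)^2/4}`$ one has $`2f^{\prime }(Z)+Zf(Z)=Z_0f(Z)`$, so $`f`$ is an eigenfunction of the operator $`2\partial +Z`$ with eigenvalue $`Z_0`$. A quick finite-difference check (illustrative; the value of $`Z_0`$ is chosen arbitrarily):

```python
import cmath

Z0 = 1.3  # chosen real, so that the corresponding eigenvalue is real

def f(z):
    # Candidate eigenfunction f(Z) = exp(-(Z - Z0)^2 / 4)
    return cmath.exp(-(z - Z0) ** 2 / 4.0)

def fprime(z, h=1e-5):
    # Central finite difference; valid since f is holomorphic
    return (f(z + h) - f(z - h)) / (2.0 * h)

# Check the eigenvalue equation 2 f'(Z) + Z f(Z) = Z0 f(Z)
# at a few sample points in the complex plane
max_rel_err = 0.0
for z in (0.2 + 0.1j, -0.7 + 0.4j, 1.5 - 0.3j):
    lhs = 2.0 * fprime(z) + z * f(z)
    rhs = Z0 * f(z)
    max_rel_err = max(max_rel_err, abs(lhs - rhs) / abs(rhs))

print("max relative error:", max_rel_err)
```

The residual is at the level of the finite-difference error, confirming the eigenvalue equation pointwise.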
Thus the Peierls rule is confirmed also at the quantum level, for a linear potential.
## 5 Discussion
In the spirit of Dirac, we believe that “it would be surprising if Nature would not seize the opportunity to use the new invariant $`\kappa `$.” While it plays little rôle as long as the particle is free, this invariant becomes important when the particle is coupled to an external field: albeit the classical motions are similar to those in the case $`\kappa =0`$, it yields effective terms responsible for the reduction we found here. This curious interplay between the “exotic” structure and the external magnetic field is linked to the two-dimensionality of space and to the Galilean invariance of the theory. Mathematically, the second extension parameter arises owing to the commutativity of planar rotations—just like for exotic statistics of anyons . The physical origin of $`\kappa `$ is, perhaps, the band structure. In a solid the effective mass can be as much as 30 times smaller than that of a free electron. Our formula (10) could indeed serve to measure the new invariant $`\kappa `$ using the data in the FQHE.
Had we worked over the two-torus $`𝐓^2`$ rather than over the whole plane, prequantization would require the integrality condition $`{\displaystyle \int \mathrm{\Omega }}=2\pi ℏN`$ for some integer $`N`$ . The actual meaning of this condition is that the “Feynman” factor
$$\mathrm{exp}\left(\frac{i}{ℏ}\oint \theta \right)$$
(25)
be well-defined, independently of the choice of the one-form $`\theta `$ .
Representing $`𝐓^2`$ by a rectangle of sides $`L_x`$ and $`L_y`$ would then imply the well-known magnetic flux quantization condition $`eBL_xL_y=2\pi ℏN`$, analogous to the Dirac quantization of monopoles. Furthermore, the non-simply-connectedness of the torus implies that the factor (25) can have different inequivalent meanings, labeled by the characters of $`𝐙\times 𝐙`$, the homotopy group of the two-torus .
## 6 Acknowledgement
We are indebted to Prof. R. Jackiw for acquainting us with the Peierls substitution and for advice, and to Profs. J. Balog, P. Forgács and Z. Horváth for discussions. Correspondence with Profs. V. P. Nair and G. Dunne is also acknowledged.
|
no-problem/0002/cond-mat0002365.html
|
ar5iv
|
text
|
# Three-body recombination in Bose gases with large scattering length
## Abstract
An effective field theory for the three-body system with large scattering length is applied to three-body recombination to a weakly-bound $`s`$-wave state in a Bose gas. Our model-independent analysis demonstrates that the three-body recombination constant $`\alpha `$ is not universal, but can take any value between zero and $`67.9ℏa^4/m`$, where $`a`$ is the scattering length. Other low-energy three-body observables can be predicted in terms of $`a`$ and $`\alpha `$. Near a Feshbach resonance, $`\alpha `$ should oscillate between those limits as the magnetic field $`B`$ approaches the point where $`a\to \infty `$. In any interval of $`B`$ over which $`a`$ increases by a factor of 22.7, $`\alpha `$ should have a zero.
preprint: DOE/ER/40561-84-INT00
The successful achievement of Bose-Einstein condensation has triggered a large interest in interacting Bose gases. One of the main factors limiting the achievable density in these experiments is the loss of atoms through 3-body recombination. Such losses occur when three atoms scatter to form a molecular bound state (called a “dimer” for brevity) and a third atom. The kinetic energy of the final state particles allows them to escape from the trapping potential. This 3-body recombination process is interesting in its own right as it provides a unique window on 3-body dynamics.
The number of recombination events per unit time and volume can be parametrized as $`\nu _{rec}=\alpha n^3`$, where $`\alpha `$ is the recombination constant and $`n`$ the density of the gas. The calculation of $`\alpha `$ in general is a complicated problem, because it is sensitive to the detailed behavior of the interaction potential . The simplest case is 3-body recombination to a weakly-bound $`s`$-wave state. For atoms of scattering length $`a`$ and mass $`m`$, the binding energy of the dimer is $`B_d=ℏ^2/(ma^2)`$ and the size of the bound state is comparable to $`a`$. Assuming that $`a`$ is the only important length scale, dimensional analysis implies $`\alpha =𝒞ℏa^4/m`$, where $`𝒞`$ is dimensionless. The problem of 3-body recombination to a weakly-bound $`s`$-wave state has been studied previously . Fedichev et al. found that the coefficient $`𝒞`$ has the universal value $`𝒞=3.9`$. Nielsen and Macek and Esry et al. found that $`𝒞`$ could take any value between $`0`$ and $`𝒞_{\mathrm{max}}\simeq 65`$.
An ideal means to study the dependence of $`\alpha `$ on the scattering length $`a`$ is to use Feshbach resonances , which occur when the energy of a spin-singlet molecular bound state is tuned to the energy threshold for two separated atoms by applying an external magnetic field $`B`$. Such resonances have, e.g., been observed for <sup>23</sup>Na and <sup>85</sup>Rb atoms . When the magnetic field is varied in the vicinity of the resonance, the scattering length varies according to
$$a(B)=a_0\left(1+\frac{\mathrm{\Delta }_0}{B_0-B}\right),$$
(1)
where $`a_0`$ is the off-resonant scattering length and $`\mathrm{\Delta }_0`$ and $`B_0`$ characterize the width and position of the resonance, respectively. On the side of the resonance where $`a`$ increases towards $`+\mathrm{\infty }`$, the spin-singlet molecule becomes a weakly-bound $`s`$-wave state.
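As a small illustration, Eq. (1) can be coded directly. The resonance parameters below are hypothetical placeholders, not values from the text:

```python
def scattering_length(B, a_off, B0, Delta0):
    # Eq. (1): a(B) = a_off * (1 + Delta0 / (B0 - B)); diverges as B -> B0
    return a_off * (1.0 + Delta0 / (B0 - B))

# Hypothetical resonance: position B0 = 907 G, width 1 G, off-resonant a_off = 1 (arb. units)
a_far = scattering_length(0.0, 1.0, 907.0, 1.0)     # ~ a_off far below resonance
a_near = scattering_length(906.9, 1.0, 907.0, 1.0)  # large and positive just below B0
```

On the other side of the resonance the correction changes sign, so the same formula also reproduces large negative scattering lengths just above $`B_0`$.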
In this Letter, we use effective field theory methods to calculate the recombination constant $`\alpha `$ for a weakly-bound $`s`$-wave state. We find that the naive scaling relation $`\alpha =𝒞\hbar a^4/m`$ is modified by renormalization effects, so the coefficient $`𝒞`$ is not universal, and that its maximum value is $`𝒞_{\mathrm{max}}=67.9`$. Our results confirm those of Refs. , demonstrate that they are model-independent, and provide a new tool for precise studies of the recombination rate. On the side of a Feshbach resonance where $`a\to \mathrm{\infty }`$, the $`B`$-dependence of $`\alpha `$ can be predicted in terms of one adjustable parameter. It has a remarkable behavior, with $`𝒞`$ oscillating between zero and 67.9 more and more rapidly as $`B`$ approaches the resonance.
Effective field theory (EFT) is a powerful method for describing systems composed of particles with wave number $`k`$ much smaller than the inverse of the characteristic range $`R`$ of their interaction. (For a van der Waals potential $`C_6/r^6`$, $`R=(2mC_6/\hbar ^2)^{1/4}`$). EFT focuses on the aspects of the problem that are universal, independent of the details of short-distance interactions, by modelling the interactions as pointlike. The separation of scales $`k\ll 1/R`$ allows a systematic expansion in powers of the small parameter $`kR`$. Generically, the scattering length $`a`$ is comparable to $`R`$, and the expansion is effectively in powers of $`ka`$. The pointlike interactions of the EFT generate ultraviolet divergences, but they can be absorbed into the renormalized coupling constants of the effective Lagrangian. All information about the influence of short-distance physics on low-energy observables is captured by these constants. At any given order in $`kR`$, only a finite number of coupling constants enter and this gives the EFT its predictive power. The domain of validity of EFT is $`kR\ll 1`$, even in the case of large scattering length $`a\gg R`$. Thus it should accurately describe weakly-bound states with size of order $`a`$. However, the dependence on $`ka`$ is nonperturbative for $`k\gtrsim 1/a`$, and it is necessary to reorganize the perturbative expansion into a new expansion in $`kR`$ by resumming higher order terms to all orders in $`ka`$. There has been significant progress recently in carrying out this resummation for the 3-body system. At leading order in $`kR`$, a single 3-body parameter is necessary and sufficient to carry out the renormalization . The scattering length and this 3-body parameter are sufficient to describe all low-energy 3-body observables up to errors of order $`R/a`$. In nuclear physics, this result has allowed a successful description of low-energy neutron-deuteron scattering and the binding energy of the triton.
The variation of the 3-body parameter provides a natural explanation for the Phillips line .
We will apply this EFT for systems with large scattering length to the 3-body recombination problem. For simplicity, we now set $`\hbar =1`$. We start by writing down a general local Lagrangian for a non-relativistic boson field $`\psi `$:
$$\mathcal{L}=\psi ^{\dagger }\left(i\frac{\partial }{\partial t}+\frac{\nabla ^2}{2m}\right)\psi -\frac{C_0}{2}(\psi ^{\dagger }\psi )^2-\frac{D_0}{6}(\psi ^{\dagger }\psi )^3+\cdots .$$
(2)
The dots denote terms with more derivatives and/or fields; those with more fields will not contribute to the 3-body amplitude, while those with more derivatives are suppressed at low momentum. In order to set up integral equations for 3-body amplitudes, it is convenient to rewrite $`\mathcal{L}`$ by introducing a dummy field $`d`$ with the quantum numbers of two bosons,
$`\mathcal{L}`$ $`=`$ $`\psi ^{\dagger }\left(i{\displaystyle \frac{\partial }{\partial t}}+{\displaystyle \frac{\nabla ^2}{2m}}\right)\psi +d^{\dagger }d-{\displaystyle \frac{g}{\sqrt{2}}}(d^{\dagger }\psi \psi +\text{h.c.})`$ (3)
$`+`$ $`hd^{\dagger }d\psi ^{\dagger }\psi +\cdots .`$ (4)
The original Lagrangian (2) is easily recovered by a Gaussian path integration over the $`d`$ field, which implies $`d=(g/\sqrt{2})\psi ^2/(1+h\psi ^{\dagger }\psi )`$, $`C_0=g^2`$, and $`D_0=-3hg^2`$. The atom propagator has the usual non-relativistic form $`i/(\omega -p^2/2m)`$. The bare propagator for $`d`$ is simply $`i`$, but the exact propagator, including atom loops to all orders, is
$$iS_d(\omega ,\stackrel{}{p})=\frac{-i4\pi /(mg^2)}{-1/a+\sqrt{-m\omega +\stackrel{}{p}^{\mathrm{\hspace{0.17em}2}}/4-iϵ}},$$
(5)
where $`a`$ is the scattering length, which is related to the bare parameter $`g`$ and the ultraviolet (UV) cutoff $`\mathrm{\Lambda }`$ by
$$a=\frac{mg^2}{4\pi }\left(1+\frac{mg^2\mathrm{\Lambda }}{2\pi ^2}\right)^{-1}.$$
(6)
The propagator (5) has a pole at $`\omega =-1/(ma^2)+\stackrel{}{p}^{\mathrm{\hspace{0.17em}2}}/(4m)`$ corresponding to a weakly-bound state. Attaching four atom lines to this propagator gives the exact two-atom scattering amplitude. Taking the incoming atoms to have momenta $`\pm \stackrel{}{p}`$, the amplitude is $`(-1/a-ip)^{-1}`$, confirming the identification of $`a`$ as the scattering length.
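As a quick consistency check (a sketch added here, not from the original; it assumes the conventional inverse-propagator form $`-1/a+\sqrt{-m\omega +p^2/4}`$ and units with $`\hbar =1`$), the quoted pole position can be verified numerically:

```python
from math import sqrt

def inv_propagator(omega, p, a, m=1.0):
    # Denominator of the dressed dimer propagator, written for kinematics below
    # threshold where the square-root argument is non-negative.
    return -1.0/a + sqrt(-m*omega + p**2/4.0)

a, m, p = 2.0, 1.0, 0.3
omega_pole = -1.0/(m*a**2) + p**2/(4.0*m)            # bound-state pole quoted in the text
residue_check = inv_propagator(omega_pole, p, a, m)  # vanishes exactly at the pole
```

Lowering $`\omega `$ below the pole makes the denominator positive, as expected for energies below the bound state.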
We now consider the 3-body recombination process. We take the momenta of the incoming atoms to be small compared to the momenta of the final particles, which have magnitude $`p_f`$. Using Fermi’s golden rule, the recombination coefficient can be written
$$\alpha =\frac{map_f^2}{6\sqrt{3}\pi }\left|T(p_f)\right|^2,$$
(7)
where $`T(p)`$ is the amplitude for the transition between three atoms at rest and a final state consisting of an atom and a dimer in an $`s`$-wave state with momentum $`p`$ in their center-of-momentum frame. In Eq. (7), this amplitude is evaluated on shell at the value $`p_f=2/(\sqrt{3}a)`$ prescribed by energy conservation. However, $`T(p)`$ is also defined at off-shell values of $`p`$. The first few diagrams contributing to $`T`$ are illustrated in Fig. 1. All loop diagrams are of the same order as the tree diagrams, and they therefore have to be summed to all orders. This is conveniently accomplished by solving the integral equation represented by the second equality in Fig. 1. Corrections to this equation are of order $`R/a`$. The integral equation is
$`T(p)={\displaystyle \frac{96\pi ^{3/2}\sqrt{a}}{m}}\left({\displaystyle \frac{1}{p^2}}+{\displaystyle \frac{h}{2mg^2}}\right)+{\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\mathrm{\Lambda }}}𝑑q`$ (8)
$`\times {\displaystyle \frac{q^2T(q)}{-1/a+\sqrt{3}q/2-iϵ}}\left[{\displaystyle \frac{1}{pq}}\mathrm{ln}\left|{\displaystyle \frac{q^2+pq+p^2}{q^2-qp+p^2}}\right|+{\displaystyle \frac{h}{mg^2}}\right],`$ (9)
where we have inserted an UV cutoff $`\mathrm{\Lambda }`$ on the integral over $`q`$. If we were allowed to set $`h=0`$ and take $`\mathrm{\Lambda }\to \mathrm{\infty }`$, a rescaling of the variables in Eq. (8) would lead to $`T(p)=K(ap)a^{3/2}/(mp)`$, with $`K(x)`$ a dimensionless function. Evaluating this in Eq. (7), the scaling relation $`\alpha =𝒞a^4/m`$ would follow immediately. However, the integral equation (8) has the same properties as the one describing atom-dimer scattering and the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$ cannot be taken. The individual diagrams are finite as $`\mathrm{\Lambda }\to \mathrm{\infty }`$, but their sum is sensitive to the cutoff. In an EFT, the dependence on the UV cutoff is cancelled by local counterterms. In Ref. , it was shown that the dependence of the low-energy observables on the cutoff $`\mathrm{\Lambda }`$ could be precisely compensated by varying $`h`$ appropriately. Writing $`h=2mg^2H(\mathrm{\Lambda })/\mathrm{\Lambda }^2`$, it was found that $`H(\mathrm{\Lambda })`$ could be well approximated by
$$H(\mathrm{\Lambda })\approx \mathrm{tan}\left[s_0\mathrm{ln}(\mathrm{\Lambda }/\mathrm{\Lambda }_{*})-\pi /4\right],$$
(10)
where $`s_0\approx 1.0064`$ is determined by the asymptotic behavior of the integral equation. This expression defines a parameter $`\mathrm{\Lambda }_{*}`$ that characterizes the effect of the 3-body force on physical observables. A remarkable feature of this expression is its periodicity in $`\mathrm{ln}\mathrm{\Lambda }`$. As $`\mathrm{\Lambda }`$ is increased, $`H(\mathrm{\Lambda })`$ decreases to $`-\mathrm{\infty }`$, changes discontinuously to $`+\mathrm{\infty }`$, and continues decreasing.
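The log-periodicity is easy to see numerically: multiplying $`\mathrm{\Lambda }`$ by $`e^{\pi /s_0}`$ shifts the argument of the tangent by $`\pi `$ and returns the same $`H`$. The phase offset used below ($`-\pi /4`$) follows the form quoted above and should be treated as an assumption of this sketch:

```python
from math import tan, log, pi, exp

s0 = 1.0064

def H(Lam, Lam_star=1.0):
    # Eq. (10): H(Lambda) ~ tan(s0 * ln(Lambda/Lambda_*) - pi/4)
    return tan(s0 * log(Lam / Lam_star) - pi / 4.0)

period = exp(pi / s0)   # ~ 22.7: the discrete scaling factor of the 3-body force
h1 = H(3.0)
h2 = H(3.0 * period)    # same value: tan has period pi in its argument
```

The same factor $`e^{\pi /s_0}\approx 22.7`$ reappears below as the spacing between successive zeros of the recombination coefficient.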
The scaling violations in $`T(p)`$ from the renormalization of the 3-body force can be expressed as a dependence on $`a\mathrm{\Lambda }_{*}`$. The simple scaling relation for $`\alpha `$ is therefore replaced by $`\alpha =𝒞(\mathrm{\Lambda }_{*}a)a^4/m`$. Thus the value of $`𝒞`$ is not universal. In Fig. 2, we show $`\alpha `$ as a function of $`a`$ for $`\mathrm{\Lambda }_{*}a_0=1.78,\mathrm{\hspace{0.17em}4.15},\mathrm{\hspace{0.17em}7.26}`$ and 19.77, where $`a_0`$ is an arbitrary but fixed length scale. Interestingly, $`\alpha `$ appears to oscillate as a function of $`\mathrm{ln}a`$ between zero and a maximum value $`𝒞_{\mathrm{max}}`$. We find that the curves can be very well fit by the expression
$$\alpha \approx \frac{\hbar a^4}{m}𝒞_{\mathrm{max}}\mathrm{cos}^2\left[s_0\mathrm{ln}(a\mathrm{\Lambda }_{*})+\delta \right],$$
(11)
with $`𝒞_{\mathrm{max}}=67.9\pm 0.7`$ and $`\delta =1.74\pm 0.02`$.
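A direct consequence of the fit (11) is that successive zeros of $`𝒞`$ sit at scattering lengths separated by factors of $`e^{\pi /s_0}`$, with the fitted maximum reached halfway between them on a logarithmic scale. A minimal sketch, with $`\mathrm{\Lambda }_{*}`$ set to 1 in arbitrary units:

```python
from math import cos, log, pi, exp

s0, C_max, delta = 1.0064, 67.9, 1.74

def C_of_a(a, Lam_star=1.0):
    # Fitted coefficient in alpha = C * hbar * a^4 / m, Eq. (11)
    return C_max * cos(s0 * log(a * Lam_star) + delta) ** 2

a_zero = exp((pi/2.0 - delta) / s0)   # first a where the cosine argument hits pi/2
next_zero = a_zero * exp(pi / s0)     # zeros repeat under a -> a * exp(pi/s0)
```

Evaluating `C_of_a` at the geometric midpoint `a_zero * exp(pi/(2*s0))` returns the full `C_max`, reproducing the oscillation pattern of Fig. 2.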
We now compare our result to those of Refs. . Using an approximate solution to the 3-body wavefunction in hyperspherical coordinates, Fedichev et al. found that the coefficient $`𝒞`$ has the universal value $`𝒞=3.9`$, independent of the interaction potential. We find that the value of $`𝒞`$ is not universal, but can vary from zero to about 67.9, depending on the details of 3-body interactions at short distances. We suggest that the specific value in Ref. must correspond to an implicit assumption about the short-distance behavior of the 3-body wavefunction. Our results are consistent with those of Refs. , which were obtained using the hyperspherical adiabatic approximation. Nielsen and Macek obtained $`𝒞_{\mathrm{max}}\approx 68.4`$ by applying the hidden crossing theory. Esry et al. used coupled channel calculations to obtain $`\alpha `$ numerically for over 120 different 2-body potentials. For $`a\gg R`$, their empirical result has the form of Eq. (11) with $`s_0=1`$ and $`𝒞_{\mathrm{max}}=60\pm 13`$. Refs. show that the zeroes in $`\alpha `$ arise from interference effects involving 2-body and 3-body hyperspherical adiabatic potentials. The origin of these interference effects is less obvious in our EFT approach. However, our approach has several other advantages. First, it is completely model independent. Second, it is a controlled approximation, with corrections from finite range effects suppressed by powers of $`R/a`$. Third, it has predictive power in that all other low-energy 3-body observables can be determined in terms of $`a`$ and $`\mathrm{\Lambda }_{*}`$. For example, the atom-dimer scattering length $`a_d`$ can be fit by
$$a_d\approx a\left(1.4-1.8\mathrm{tan}[s_0\mathrm{ln}(a\mathrm{\Lambda }_{*})+3.2]\right).$$
(12)
We now apply the EFT to Feshbach resonances, where the value of $`a`$ is varied by changing the external magnetic field (cf. Eq. (1)). Our formalism is only applicable close to the Feshbach resonance on the side where $`a>0`$, because only in that region is there a weakly-bound molecule with $`B_d\approx 1/(ma^2)`$. Away from the resonance, or close to the resonance but on the side where $`a<0`$, 3-body recombination must involve more deeply-bound molecules with binding energies of order $`1/(mR^2)`$. To predict $`\alpha `$ as a function of the magnetic field $`B`$, we must specify how the parameter $`\mathrm{\Lambda }_{*}`$ in Eq. (10) varies as a function of $`B`$. A Feshbach resonance is characterized by nonanalytic dependence of the scattering length $`a`$ on $`B`$. In a field theory, nonanalytic dependence on external parameters arises from long-distance fluctuations . The explicit ultraviolet cutoff $`\mathrm{\Lambda }`$ in our EFT excludes long-distance effects, which could introduce nonanalytic dependence on $`B`$, from the coefficients in $`\mathcal{L}`$. Thus the bare parameters $`C_0=g^2`$ and $`D_0=-3hg^2`$ in Eq. (2) should be smooth functions of $`B`$ for a fixed value of $`\mathrm{\Lambda }`$. The resonant behavior of the scattering length near $`B=B_0`$ in Eq. (1) can be reproduced by approximating $`g^2`$ by a linear function of $`B`$ in the resonance region. The parameter $`h`$ should also be a smooth function of $`B`$, but we can take $`h`$ to be approximately constant over the narrow resonance region. This assumption implies via Eq. (10) that $`\mathrm{\Lambda }_{*}`$ should be constant while $`a(B)`$ varies as in Eq. (1).
The behavior of the recombination coefficient near the Feshbach resonance can therefore be read off from Fig. 2. If $`\alpha `$ is measured at one value of $`B`$ for which $`a(B)\gg |a_0|`$, it determines $`\alpha `$ as a function of $`B`$ up to a two-fold ambiguity corresponding to whether the slope of $`𝒞`$ is positive or negative at that value of $`B`$. As $`B`$ approaches $`B_0`$, $`𝒞`$ should oscillate between 0 and $`𝒞_{\mathrm{max}}`$ in the manner shown in Fig. 2. The successive zeros correspond to values of $`a(B)`$ that differ roughly by multiplicative factors of $`\mathrm{exp}(\pi /s_0)\approx 22.7`$. Thus EFT makes the remarkable prediction that there are values of the magnetic field close to a Feshbach resonance where the contribution to $`\alpha `$ from a weakly-bound state vanishes.
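Combining Eq. (1) with the zero spacing gives the magnetic fields at which the weakly-bound-state contribution vanishes. In the sketch below the resonance parameters and the scattering length at the first zero are hypothetical placeholders:

```python
from math import exp, pi

s0 = 1.0064
A_OFF, B0, DELTA0 = 1.0, 907.0, 1.0   # hypothetical: off-resonant a, position, width

def a_of_B(B):
    # Eq. (1) on the low-field side of the resonance
    return A_OFF * (1.0 + DELTA0 / (B0 - B))

def zero_fields(n, a_first):
    # Fields B_k < B0 where C vanishes, given the scattering length a_first at the
    # first zero; successive zeros sit at a-values separated by exp(pi/s0) ~ 22.7.
    out = []
    for k in range(n):
        a_k = a_first * exp(k * pi / s0)
        out.append(B0 - DELTA0 / (a_k / A_OFF - 1.0))  # invert Eq. (1) for B < B0
    return out

fields = zero_fields(3, a_first=30.0)   # zeros accumulate toward B0 from below
```

Because each successive zero requires a roughly 22.7 times larger scattering length, the zeros crowd exponentially fast against $`B_0`$, which is why only the first one or two could realistically be resolved.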
The loss rate of <sup>23</sup>Na atoms from a Bose-Einstein condensate near a Feshbach resonance has been studied by Stenger et al. . Our theory applies to the low-field side of the resonance at $`B=907`$ G. Taking into account the 3 atoms lost per recombination event and a Bose factor of $`1/3!`$, the loss rate from the condensate is $`\dot{N}=-\alpha Nn^2/2`$. The loss rates measured in Ref. correspond to a coefficient $`𝒞\approx 300`$ both off and near the resonance. This value is a factor of 4 larger than our maximum value. If $`𝒞>𝒞_{\mathrm{max}}`$, the 3-body recombination rate must be dominated not by the weakly-bound Feshbach resonance but instead by molecules with much larger binding energies $`\sim 1/(mR^2)`$. Alternatively, the large loss rate in Ref. could be due to collective effects associated with the Bose-Einstein condensate, such as a 2-body recombination process involving atoms and dimers in a molecular condensate . In Fig. 3, we show the contribution to $`\alpha `$ from the weakly-bound state as a function of the magnetic field $`B`$ for $`\mathrm{\Lambda }_{*}=1.94\text{ nm}^{-1}`$, $`1.16\text{ nm}^{-1}`$, and $`0.65\text{ nm}^{-1}`$. If this contribution to $`\alpha `$ could be isolated, the first zero may be wide enough to be observed by experiment. The higher zeroes, however, are increasingly narrow and very close to the resonance.
We have applied an EFT for atoms with large scattering length to the problem of 3-body recombination into a weakly-bound $`s`$-wave state. We find that the coefficient $`𝒞`$ in the scaling relation $`\alpha =𝒞\hbar a^4/m`$ is not universal, but must be in the range $`0\le 𝒞\le 67.9`$. If the 3-body recombination rate is measured to be larger than the maximum value, there must be a large contribution from molecules that are more deeply bound. Other low-energy 3-body observables, such as the atom-dimer scattering length, can be predicted in terms of $`a`$ and $`𝒞`$. Near a Feshbach resonance as $`a\to \mathrm{\infty }`$, we find that $`𝒞`$ should oscillate between zero and 67.9. In any interval of $`B`$ over which $`a`$ increases by a factor of $`\mathrm{exp}(\pi /s_0)\approx 22.7`$, $`\alpha `$ should have a zero. Assuming that it is dominated by recombination into the weakly-bound state, the 3-body loss rate should have a minimum at that value of $`B`$. If a Bose-Einstein condensate were prepared at such a value of the magnetic field, one could study its behavior with large scattering length and relatively small 3-body losses.
We thank C.H. Greene, G.P. Lepage, J.H. Macek, and U. van Kolck for useful discussions. This research was supported in part by NSF grant PHY-9800964 and by DOE grants DE-FG02-91-ER4069 and DOE-ER-40561.
# Regularities in football goal distributions
## Abstract
Despite the complexity of football championships, some regularities can be identified in them. These regularities concern the distributions of goals by goal-players and by games. In particular, the goal distribution by goal-players is well fitted by the Zipf-Mandelbrot law, suggesting a connection with an anomalous decay.
Regularity in some complex systems can sometimes be identified and expressed in terms of simple laws. Typical examples of such situations are found in a wide range of contexts such as the frequency of words in a long text, the population distribution in big cities, forest fires, the distribution of species lifetimes for North American breeding bird populations, scientific citations, www surfing, ecology, solar flares, economic indices, epidemics in isolated populations, among others. Here, universal behaviours in the most popular sport, football, are discussed. More precisely, this work focuses on regularities in the goal distributions by goal-players and by games in championships. Furthermore, the goal distribution by goal-players is connected with an anomalous decay related to the Zipf-Mandelbrot law and with Tsallis nonextensive statistical mechanics.
In many contexts, it is common that a few high-intensity phenomena arise alongside many low-intensity ones. For instance, a long text generally contains many words that are used only a few times, in contrast with a small number that occur very frequently. The systems mentioned above are good examples too. In particular, this kind of behaviour usually occurs in football championships, because there are many players that score only a few goals, in contrast with the topscorers.
A detailed visualization of this behaviour can be well illustrated by considering some of the most competitive and traditional championships of the world. Our choice of championships was guided by the criterion of easy accessibility of the corresponding data to anyone. We therefore consider here some of the main league football championships from Italy, England, Spain and Brazil. Each of these championships involves about twenty teams, around three hundred games, and approximately eight hundred goals. In Fig. 1 we exhibit data from these championships. In these plots, the abscissa shows the number of goals $`x`$ divided by an average of goals $`m`$ (total number of goals per total number of goal-players), and the ordinate indicates the number $`N(x)`$ of players with $`x`$ goals divided by the number of players with one goal, $`N(1)`$. The regular shape of the curves in Fig. 1 suggests a general law for the distribution of goals.
In the study of the majority of the previously cited systems, Zipf's law, $`N(x)=a/x^b`$, arises naturally, at least in part of the analysis. In Zipf's law, $`a`$ and $`b`$ are constants and $`x`$ is the independent variable. In order to give a better fit to a larger part of the data, and based on information theory, Mandelbrot proposed $`N(x)=a/(c+x)^b`$ as a generalization of Zipf's law, with $`a`$, $`b`$, and $`c`$ all being constants. This Zipf-Mandelbrot distribution also arises in the context of a generalized statistical mechanics proposed some years ago, and can be equivalently rewritten as
$$N(x)=N_0[1-(1-q)\lambda x]^{\frac{1}{1-q}},$$
(1)
where $`N_0`$, $`\lambda `$, and $`q`$ are real parameters. In addition, this function satisfies an anomalous decay equation,
$$\frac{d}{dx}\left(\frac{N(x)}{N_0}\right)=-\lambda \left(\frac{N(x)}{N_0}\right)^q.$$
(2)
The parameter $`q`$ can be considered as a measure of how anomalous the decay is. In particular, equation (1) is reduced to the usual exponential decay, $`N(x)=N_0\mathrm{exp}(-\lambda x)`$, in the limit $`q\to 1`$.
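The $`q\to 1`$ limit is easy to verify numerically. The sketch below (added for illustration, with the signs written in the standard q-exponential form) evaluates the generalized decay next to its exponential limit:

```python
from math import exp

def q_decay(x, q, lam=1.0, N0=1.0):
    # Eq. (1): N(x) = N0 * [1 - (1-q)*lam*x]**(1/(1-q)); tends to N0*exp(-lam*x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return N0 * exp(-lam * x)
    return N0 * (1.0 - (1.0 - q) * lam * x) ** (1.0 / (1.0 - q))

x = 0.7
near_exp = q_decay(x, 1.001)   # already close to exp(-0.7) at q = 1.001
```

For $`q>1`$ the function decays as a power law at large $`x`$, which is what produces the Zipf-Mandelbrot tail discussed above.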
Motivated by these physical connections, we employ the distribution (1) to fit the goals data. Following the construction of Fig. 1, we use the number of goal-players with one goal, $`N(1)`$, and the average number of goals per goal-player, $`m`$, to eliminate $`N_0`$ and $`\lambda `$. Furthermore, it is a good approximation to replace the discrete average with a continuous one in the present analysis, i.e.,
$$m=\frac{\int _0^{\mathrm{\infty }}xN(x)dx}{\int _0^{\mathrm{\infty }}N(x)dx}=\frac{1}{\lambda (3-2q)}\mathrm{\hspace{1em}}(q<3/2).$$
(3)
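The closed form in Eq. (3) can be checked against a direct numerical quadrature. The sketch below uses a simple midpoint Riemann sum; truncating the upper limit introduces a small, controlled tail error:

```python
def mean_goals(q, lam, dx=2e-3, xmax=1500.0):
    # Riemann-sum estimate of m = Int x N dx / Int N dx for
    # N(x) = [1 + (q-1)*lam*x]**(1/(1-q)), valid for 1 < q < 3/2
    # where both integrals converge.
    num = den = 0.0
    x = 0.5 * dx
    while x < xmax:
        N = (1.0 + (q - 1.0) * lam * x) ** (1.0 / (1.0 - q))
        num += x * N * dx
        den += N * dx
        x += dx
    return num / den

m_numeric = mean_goals(1.33, 1.0)
m_exact = 1.0 / (1.0 * (3.0 - 2.0 * 1.33))   # Eq. (3): 1/(lam*(3-2q))
```

For $`q=1.33`$ the two agree to about one percent, the residual difference coming from the truncated power-law tail.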
Thus, the distribution of goals dictated by equation (1) can be rewritten as
$$N(x)=N(1)\frac{\left[1-\frac{(1-q)}{(3-2q)m}\right]^{\frac{1}{q-1}}}{\left[1-\frac{(1-q)}{(3-2q)m}x\right]^{\frac{1}{q-1}}},$$
(4)
where $`q`$ becomes the unique parameter that remains to be adjusted, since $`N(1)`$ and $`m`$ are obtained directly from the data. Fig. 2 illustrates applications of equation (4) for four championships, thereby indicating the goodness of formula (4). The same conclusion is obtained for the other championships shown in Fig. 1. Here, $`q=1.33`$ was employed as an approximate value, leading to the Zipf-Mandelbrot exponent $`b\approx 3`$. In this way, $`q\approx 1.33`$ can be interpreted as the universal parameter for this kind of championship. Also, it is interesting to remark that $`b\approx 3`$ occurs in the distribution of scientific citations.
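A sketch of the fitted curve follows; the average $`m`$ used here is a hypothetical value for illustration (the real averages come from the championship data), and the exponent relation $`b=1/(q-1)`$ is checked on the large-$`x`$ tail:

```python
from math import log

Q = 1.33
M_AVG = 1.5   # hypothetical goals-per-scorer average, for illustration only

def goals_curve(x):
    # Eq. (4) rewritten for q > 1:
    # N(x)/N(1) = [(1 + c) / (1 + c*x)]**(1/(q-1)), with c = (q-1)/((3-2q)*M)
    c = (Q - 1.0) / ((3.0 - 2.0 * Q) * M_AVG)
    return ((1.0 + c) / (1.0 + c * x)) ** (1.0 / (Q - 1.0))

# On a log-log plot the large-x slope approaches the Zipf-Mandelbrot exponent b = 1/(q-1)
slope = (log(goals_curve(100.0)) - log(goals_curve(1000.0))) / (log(1000.0) - log(100.0))
```

With $`q=1.33`$ the tail slope is close to 3, consistent with the exponent quoted in the text.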
Other kinds of regularities in goal distributions can be identified, but with different behaviours. This can be verified in the distribution of goals per game. Proceeding as in Fig. 1, we consider normalized distributions of games and goals. In this case, the abscissa is the number $`x`$ of goals divided by $`M`$, the mean number of goals per game in a championship (the total number of goals divided by the corresponding number of games). In addition, the ordinate is the number of games with $`x`$ goals divided by the total number of games in the corresponding championship. Fig. 3 illustrates this regular behaviour, considering again the main league football championships from Italy, England, Spain and Brazil. As one can see, this figure strongly suggests a regularity in the distribution of goals across distinct championships.
Despite the numberless factors present in a football championship, including fluctuations due to the relatively small numbers of teams and games, regularities arise in the goal distributions. In particular, the goal distribution for players that score is well fitted by a Zipf-Mandelbrot law, suggesting a connection with ubiquitous phenomena such as anomalous diffusion.
###### Acknowledgements.
One of us (R. S. Mendes) acknowledges partial financial support from CNPq (Brazilian agency).
# The ALADIN Interactive Sky Atlas
## 1 Introduction
### 1.1 The CDS
The Centre de Données astronomiques de Strasbourg (CDS) defines, develops, and maintains services to help astronomers find the information they need from the very rapidly increasing wealth of astronomical information, particularly on-line information.
In modern astronomy, cross-matching data acquired at different wavelengths is often the key to the understanding of astronomical phenomena, which means that astronomers have to use data and information produced in fields in which they are not specialists. The development of tools for cross-identification of objects is of particular importance in this context of multi-wavelength astronomy.
A detailed description of the CDS on-line services can be found, e.g., in Egret et al. (1995) and in Genova et al. (1996, 1998, 2000), or at the CDS web site (http://cdsweb.u-strasbg.fr/).
### 1.2 The ALADIN Project
Several sites currently provide on-line access to digitized sky surveys at different wavelengths: this is, for instance, the case of the Digitized Sky Survey (DSS) at STScI (Morrison 1995), and of similar implementations at other sites, providing quick access to cutouts of the compressed DSS images. SkyView at HEASARC (McGlynn et al. 1997) can generate images of any portion of the sky at wavelengths in all regimes from radio to gamma-ray. Some of these services provide simultaneous access to images and to catalogue data. The SkyCat tool, recently developed at ESO (Albrecht et al. 1997), addressed this concern in the context of the European Southern Observatory scientific environment (in view of supporting future users of the Very Large Telescope); SkyCat uses a standardized syntax to access heterogeneous astronomical data sources on the network.
Aladin has been developed independently by the CDS since 1993 as a dedicated tool for identification of astronomical sources – a tool that can fully benefit from the whole environment of CDS databases and services, and that is designed in view of being released as a multi-purpose service to the general astronomical community.
Aladin is an interactive sky atlas, allowing the user to visualize a part of the sky, extracted from a database of images from digitized surveys and observational archives, and to overlay objects from the CDS catalogues and tables, and from reference databases (Simbad and NED), upon the digitized image of the sky.
It is intended to become a major cross-identification tool, since it allows recognition of astronomical sources on the images at optical wavelength, and at other wavelengths through the catalogue data. Expected usage scenarios include multi-spectral approaches such as searching for counterparts of sources detected at various wavelengths, and applications related to careful identification of astronomical objects. Aladin is also heavily used for the CDS needs of catalogue and database quality control.
In the case of extensive undertakings (such as checking the astrometric quality for a whole catalogue), it is expected that Aladin will be useful for understanding the characteristics of the catalogue or survey, and for setting up the parameters to be adjusted while fine tuning the cross-matching or classification algorithms, by studying a sample section of objects or fields.
A discussion of the usage of such a tool for cross-identification can be found in Bartlett & Egret (1997), where it is shown how *training sets* are used to build likelihood ratio tests.
The Aladin interactive atlas is available in three modes: a simple previewer, a Java interface, and an X-Window interface. We describe here mostly the Java interface which is publicly accessible on the World-Wide Web.
## 2 Access modes
After a long phase of development (see e.g. Paillou et al. 1994), Aladin was first distributed to a limited number of astronomy laboratories in 1997, as an X-Window client program, to be installed on a Unix machine on the user side. The client program interacts with the servers running on Unix workstations at CDS (image server, catalogue server, Simbad server) and manages image handling and plane overlays.
The strategy of having a client program on the user side is difficult to maintain in the long run. The World-Wide Web offers, with the development of Java applications (or *applets*), a way to solve this difficulty. Actually, there is still a *client* program: this is the Java applet itself, which the user receives from the WWW server. Most current Internet browsers are able to run it properly, so that the user does not have to install anything special other than an Internet browser.
As a consequence, Aladin is currently available in the three following modes:
* *Aladin previewer*: a pre-formatted image server provides a compressed image of fixed size ($`14.1^{\mathrm{\prime }}\times 14.1^{\mathrm{\prime }}`$ for the DSS-I) around a given object or position. When an object name is given, its position is resolved through the Simbad name resolver. Anchors pointing to the previewer are an integral part of the World-Wide Web interfaces to the Simbad database (http://simbad.u-strasbg.fr/Simbad) and to the CDS bibliographic service (http://simbad.u-strasbg.fr/biblio.html). The result page also gives access to the full resolution FITS image for download.
* *Aladin Java interface*: this is the primary public interface, supporting queries to the image database and overlays from any catalogue or table available at CDS, as well as from Simbad and NED databases. Access to personal files is not possible (due to security restrictions of the Java language). These restrictions do not apply to the *stand-alone* version, which can be installed and run on a local *Java virtual machine*.
* *Aladin X-Window interface*: the X-Window Aladin client provides most of the functionalities of the Aladin Java interface, plus more advanced functions, as described below (section 6).
## 3 The image database
### 3.1 Database summary
The Aladin image dataset consists of:
* The whole sky image database from the first Digitized Sky Survey (DSS-I) digitized from photographic plates and distributed by the Space Telescope Science Institute (STScI) as a set of slightly compressed FITS images (with a resolution of $`1.8^{\prime \prime }`$); DSS-II is also currently being integrated into the database (see below);
* Images of *crowded* fields (Galactic Plane, Magellanic Clouds) at the full resolution of $`0.67^{\prime \prime }`$, scanned at the Centre d’Analyse des Images (MAMA machine) in Paris;
* Global plate views ($`5\mathrm{°}\times 5\mathrm{°}`$ or $`6\mathrm{°}\times 6\mathrm{°}`$ according to the survey) are also available for all the plates contributing to the image dataset: these are built at CDS by averaging blocks of pixels from the original scans;
* Other image sets, or user-provided images, in FITS format, having suitable World Coordinate System information in the header (see e.g. Greisen & Calabretta 1995); this functionality is currently available only for the Java stand-alone version.
### 3.2 Building the database contents
The Aladin project has set up collaborations with the major groups providing digitizations of sky surveys. The original surveys are made of photographic Schmidt plates obtained at Palomar in the North, and ESO/SERC in the South, and covering the whole sky at different epochs and colours (see e.g. MacGillivray 1994).
The database currently includes the first Digitized Sky Survey (DSS-I) produced by the Space Telescope Science Institute (Lasker 1992), for the needs of the Hubble Space Telescope. To create these images, the STScI team scanned the first epoch (1950/1955) Palomar $`E`$ Red and United Kingdom Schmidt $`J`$ Blue plates (including the SERC J Equatorial Extension and some short V-band plates at low galactic latitude) with a pixel size of $`1.7^{\prime \prime }`$ ($`25\mu m`$). The low resolution and a light data compression (factor of 10) permit storage of images covering the full sky on a set of 102 CD-ROMs.
DSS-II images in the R-band (from Palomar POSS-II F and UK Schmidt SES, AAO-R, and SERC-ER), scanned with a $`1^{\prime \prime }`$ ($`15\mu m`$) sampling interval (see Lasker 1994), are gradually being included into the system, and will soon be followed by DSS-II images in the B-band (POSS-II J).
In addition, high resolution digitizations of POSS-II, SERC-J, SERC-SR, SERC-I, or ESO-R plates featuring crowded regions of the sky (Galactic Plane and Magellanic Clouds) have been provided by the MAMA facility at the Centre d’Analyse des Images (CAI), Observatoire de Paris (Guibert 1992). Sampling is $`0.67^{\prime \prime }`$ per pixel ($`10\mu m`$). Currently, these high resolution images cover about 15% of the sky, and are stored in a juke-box of optical disks, with a capacity of 500 Gigabytes.
### 3.3 The image server
The image server for Aladin had to be able to deal with various survey data, in heterogeneous formats (uncompressed FITS, compressed JPEG or PMT – see Section 5, etc.). For that, an object-oriented design was chosen, allowing an easy manipulation of image calibrations and headers, through the use of object classes. Image compression or decompression, image reconstruction, and in a near future, part of the recalibration, are seen as class methods.
Images are currently divided into subimages of $`500\times 500`$ pixels (DSS-I), $`768\times 768`$ pixels (DSS-II), or $`1024\times 1024`$ pixels (MAMA).
The 1.5 million subimages are described by records stored in a relational database, encapsulated by several classes of the image management software. When an image of the sky is requested, the original subimages containing the corresponding sky area are retrieved through SQL commands, and the resulting image is built on the fly.
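The tile bookkeeping behind this on-the-fly reconstruction can be sketched in a few lines. The sketch below is purely illustrative (the function and constant names are hypothetical; the real server resolves subimages per plate through SQL queries and proper astrometric solutions). It uses the DSS-I tile size and pixel scale quoted above and assumes a flat, unprojected tiling aligned with RA/Dec:

```python
# Hypothetical sketch: which subimage tiles cover a requested sky field?
# Real Aladin works plate by plate with full astrometry; this toy model
# only illustrates the mapping from a field to a set of tile indices.

TILE_PIX = 500          # DSS-I subimage size in pixels
PIX_SCALE = 1.7 / 3600  # DSS-I pixel size in degrees

def tiles_for_field(ra, dec, size_deg):
    """Return (i, j) indices of the tiles overlapping a square field
    centered on (ra, dec), both in degrees."""
    tile_deg = TILE_PIX * PIX_SCALE     # ~0.236 deg per DSS-I tile
    half = size_deg / 2.0
    i0 = int((ra - half) // tile_deg)
    i1 = int((ra + half) // tile_deg)
    j0 = int((dec - half) // tile_deg)
    j1 = int((dec + half) // tile_deg)
    return [(i, j) for i in range(i0, i1 + 1) for j in range(j0, j1 + 1)]
```

For a default-sized field this yields the handful of tile indices whose pixel data must be fetched and mosaicked into the delivered image.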
## 4 Usage scenarios
In this section we will focus on describing the usage of the Aladin Java interface, as it is available now (November 1999).
### 4.1 Access
The Aladin home page is available through the CDS Web server at the following address: http://aladin.u-strasbg.fr/
This site provides access to Aladin documentation, including scientific reports, recent publications, etc.
### 4.2 Query modes
The typical usage scenario starts with a request of the digitized image for an area of the sky defined by its central position or name of central object (to be resolved by Simbad). The size of the sky field is determined by the photographic survey used: it is $`14.1^{\prime }`$ in the case of the DSS-I.
Astrometric information comes from the FITS header of the DSS image, and is generally accurate to the arcsecond (with deviations of up to several arcseconds in exceptional cases, at plate edges).
In a subsequent step, the interface, illustrated by Figs. 1 and 2, allows the user to stack several information planes related to the same sky field, to superimpose the corresponding data from catalogues and databases, and to obtain interactive access to the original data.
The possible information planes are the following:
* Image pixels from the Aladin database of digitized photographic plates (DSS-I, MAMA, DSS-II); functionalities include zooming capabilities, inverse video, modification of the color table;
* Information from the Simbad database (Wenger et al. simbad (2000)); objects referenced in Simbad are visualized by color symbols overlaid on top of the black and white image; the shape and color of the symbols can be modified on request, and written labels can be added for explicit identification of the objects; these features are also available for all the other information planes;
* Records from the CDS library of catalogues or tables (VizieR, *Internet address:* http://vizier.u-strasbg.fr/; Ochsenbein et al. vizier (2000)); the user can select the desired catalogue from a preselected list including the major reference catalogues such as the Tycho Reference Catalogue (ESA tyc (1997); Høg et al. trc (1998)), GSC (Lasker et al. gsc (1990)), IRAS Point Source Catalog, or USNO A2.0 (Monet usno (1998)); the user can alternatively select the catalogues for which entries may be available in the corresponding sky field, using the VizieR query mechanism by position (see 4.3), catalogue name or keyword;
* Information from the NED database: objects referenced in the NASA/IPAC Extragalactic Database (*Internet address:* http://nedwww.ipac.caltech.edu/; Helou et al. ned (2000)) can also be visualized through queries submitted to the NED server at IPAC;
* Archive images will gradually become available through the corresponding mission logs: Hubble Space Telescope images are currently available (see Fig. 3 for an example), and more archives will follow.
* Local, user data files can also be overlaid, but, because of current limitations of the Java applications, this feature is only available in the stand-alone version, or in Aladin X.
The stack of images and graphics is made visible to the user (under the eye icon, on the right of Fig. 2) so that each plane can be turned on and off. The status of queries is also easily visualized.
For all information planes (Simbad, VizieR, NED) links are provided to the original data. This is done in the following way: when selecting an object on the image, with mouse and cursor, it is possible to call for the corresponding information which will appear in a separate window on the Internet browser. It is also possible to select with the mouse and cursor all objects in a rectangular area: the corresponding objects are listed in a panel on the bottom of the Aladin window; this list includes basic information (name, position and, when applicable, number of bibliographical references) and anchors pointing to the original catalogue or database.
At any moment the position of the cursor is translated into right ascension and declination on the sky and displayed in the top panel of the Aladin window. Additional features are available, such as a tool for computing the angular distance between marked objects.
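The angular-distance tool amounts to a great-circle separation. A minimal stand-alone sketch (not the actual applet code) using the Vincenty formula, which remains numerically stable for both very small and near-antipodal separations:

```python
import math

def angular_distance(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two (RA, Dec)
    positions given in degrees (Vincenty formula)."""
    a1, d1, a2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = a2 - a1
    num = math.hypot(math.cos(d2) * math.sin(dra),
                     math.cos(d1) * math.sin(d2)
                     - math.sin(d1) * math.cos(d2) * math.cos(dra))
    den = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(dra)
    return math.degrees(math.atan2(num, den))
```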
The *standalone* version gives access to additional facilities, not available through the Web, such as printing and saving the images and data.
### 4.3 The catalogue server
The ability to access all VizieR catalogues and tables directly from Aladin is a unique feature which makes it an extremely powerful tool for any cross-identification or classification work.
The “*Select around target*” request relies on a special feature – the genie of the lamp: this is the ability to decide which catalogues, among the database of (currently) over 2,600 catalogues or tables, contain data records for astronomical objects lying in the selected sky area. In order to do that, an index map of VizieR catalogues is produced (and kept up-to-date), on the basis of about ten pixels per square degree: for each such ‘pixel’ the index gives the list of all catalogues and tables which have entries in the field.
When a user hits the button “*Select around target*”, this index is queried and the list of useful catalogues is returned. It is possible, at this stage, either to list all catalogues, or to produce a subset selected on the basis of keywords. Note that, as the index “pixels” generally match an area larger than the current sky field, there is simply a good chance, but not 100%, to actually obtain entries in the field when querying one of the selected tables.
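The index-map idea can be illustrated with a toy implementation (the class and method names are hypothetical; the real index is internal to VizieR and persisted server-side). Each coarse cell, sized here to give roughly ten cells per square degree, records which catalogues have at least one entry in it; a query returns a superset of the catalogues actually present in the smaller requested field, exactly as described above:

```python
# Illustrative sketch of a coarse sky index mapping cells to catalogues.
from collections import defaultdict

CELL_DEG = 0.316  # cell side in degrees: ~0.1 deg^2, i.e. ~10 cells/deg^2

def cell_of(ra, dec):
    """Integer cell coordinates for a position (degrees)."""
    return (int(ra // CELL_DEG), int(dec // CELL_DEG))

class SkyIndex:
    def __init__(self):
        self._cells = defaultdict(set)

    def add_entry(self, catalogue, ra, dec):
        self._cells[cell_of(ra, dec)].add(catalogue)

    def catalogues_near(self, ra, dec):
        """Catalogues that *may* have entries near (ra, dec): a superset,
        since a cell is generally larger than the requested field."""
        return sorted(self._cells.get(cell_of(ra, dec), ()))
```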
### 4.4 Cache
The images of the 30,000 most cited objects in Simbad are pre-computed and available on a cache on magnetic disk. For these objects, the image is served much faster than for other objects where the image has to be extracted from the Digitized Sky Survey.
### 4.5 Usage statistics
As the newest service developed by CDS, Aladin has not yet been widely publicized, and its usage is in a steeply growing phase. Currently about 10,000 queries are processed monthly, generating the extraction of more than 5,000 images.
## 5 Image compression
Astronomical image compression in the context of Aladin has been discussed in detail by Louys et al. (louys (1999)).
For the Aladin Java interface and for the Aladin previewer, the current choice has been to deliver to the user an image in JPEG 8-bit format, constructed from the original FITS images. JPEG is a general purpose standard which is supported by all current Internet browsers. The size of such an image does not exceed 30 kBytes, and thus the corresponding network load is very small.
In the near future, the Pyramidal Median Transform (PMT) algorithm, implemented in the MR-1 package (Starck et al. pmt (1996)), will be used within Aladin for storing or transferring new image datasets, such as additional high resolution images (see again Louys et al. louys (1999) for details). The corresponding decompression package is being written in Java code, and could be downloaded on request for use within the Java interface.
## 6 Aladin X
The Aladin X-Window interface is the testbed for further developments. It is currently only distributed for the Unix Solaris operating system. Interested potential users should contact CDS for details.
### 6.1 Source extraction
Aladin X includes a procedure for source extraction. The current mechanism will soon be replaced by SExtractor (Bertin & Arnouts 1996A&AS..117..393B (1996)).
### 6.2 Plate calibrations
While the first level astrometric calibrations are given by the digitizing machines, a second level is being developed that will allow the user to *recalibrate* the image with a new set of standards taken, for example, from the Tycho Reference Catalogue. The photometric calibrations (surface and stellar) will eventually also be performed within Aladin, by using the Guide Star Photometric Catalogs (GSPC I and II; Ferrari et al. gspc2 (1994); Lasker et al. gspc1 (1988)).
Users will thus be able to work on the details of local astrometric and photometric plate calibrations in order to extract the full information from the digitized plates.
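As an illustration of what a minimal astrometric recalibration involves, the sketch below fits the six linear "plate constants" mapping measured pixel coordinates to the standard coordinates of reference stars by least squares (the helper names are hypothetical; real plate solutions add higher-order and radial terms, and the photometric side is a separate fit):

```python
# Illustrative least-squares fit of a linear plate solution
# xi = a*x + b*y + c, per axis, from reference-star pairs.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def plate_constants(stars):
    """stars: list of (x, y, xi) pixel/standard-coordinate triples for one
    axis. Returns (a, b, c) minimizing sum (a*x + b*y + c - xi)^2 via the
    normal equations."""
    Sxx = sum(x * x for x, y, t in stars); Sxy = sum(x * y for x, y, t in stars)
    Syy = sum(y * y for x, y, t in stars); Sx = sum(x for x, y, t in stars)
    Sy = sum(y for x, y, t in stars); Sxt = sum(x * t for x, y, t in stars)
    Syt = sum(y * t for x, y, t in stars); St = sum(t for x, y, t in stars)
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(len(stars))]]
    return solve3(A, [Sxt, Syt, St])
```

With a handful of Tycho-like reference stars per plate region, the same machinery recovers the local linear solution exactly when the data are noise-free.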
## 7 Integration of distributed services
While the CDS databases have followed different development paths, the need to build a transparent access to the *whole set* of CDS services has become more and more obvious with the easy navigation permitted by hypertext tools. Aladin has become the prototype of such a development, by giving comprehensive simultaneous access to Simbad, the VizieR Catalogue service, and to external databases such as NED, using a client/server approach and, when possible, standardized query syntax and formats.
In order to be able to go further, the CDS has built a general data exchange model, taking into account all types of information available at the Data Center, known under the acronym of GLU for Générateur de Liens Uniformes – Uniform Link Generator (Fernique et al. glu (1998)).
More generally, with the development of the Internet, and with an increasing number of on-line astronomical services giving access to data or information, it has become critical to develop new tools providing access to distributed services. This is, for instance, the concern expressed by NASA through the AstroBrowse project (Heikkila et al. astrobrowse (1999)). A local implementation of this concept is available at CDS (AstroGlu: Egret et al. astroglu (1998)).
## 8 Future developments
An important direction of development in the near future is the possibility of providing access to images from other sky surveys or deep field observations: obvious candidates are the DENIS (Epchtein denis (1998)) and 2MASS (Skrutskie 2mass (1998)) near-infrared surveys. The first public point source catalogues resulting from these surveys are already available through Aladin, since they are included in the VizieR service. This has already proved useful for validating survey data in preliminary versions of the DENIS catalogue (Epchtein et al. denis-psc (1999)).
The CDS team will also continue to enrich the system functionality. The users play an important role in that respect, by giving feedback on the desired features and user-friendliness of the interfaces.
New developments are currently considered as additional modules which will be incorporated to the general release only when needed, possibly as optional downloads, in order to keep the default version simple and efficient enough for most of the Web applications.
On a longer term, the CDS is studying the possibility of designing *data mining* tools that will help to make a fruitful use of forthcoming very large surveys, and will be used for cross-matching several surveys obtained, for instance, at different wavelengths. A first prototype, resulting from a collaboration between ESO and CDS, in the framework of the VLT scientific environment is currently being implemented (Ortiz et al. ortiz (1999)).
###### Acknowledgements.
CDS acknowledges the support of INSU-CNRS, the Centre National d’Etudes Spatiales (CNES), and Université Louis Pasteur. We are indebted to Michel Crézé who initiated the project while being Director of the CDS, and to all the early contributors to the Aladin project: Philippe Paillou, Joseph Florsch, Houri Ziaeepour, Eric Divetain, Vincent Raclot. Collaboration with STScI, and especially with the late Barry Lasker, and with Brian McLean, is gratefully acknowledged. We thank Jean Guibert and René Chesnel from CAI/MAMA for their continuous support to the project. The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. Java is a registered trademark of Sun Microsystems.
# Measuring the Nonlinear Biasing Function from a Galaxy Redshift Survey
## 1 INTRODUCTION
The fact that galaxies of different types cluster differently (e.g., Dressler 1980; Lahav, Nemiroff & Piran 1990; Santiago & Strauss 1992; Loveday et al. 1995; Hermit et al. 1996; Guzzo et al. 1997) indicates that the galaxy distribution is in general biased compared to the underlying mass distribution. Cosmological simulations confirm that halos and galaxies must be biased (e.g., Cen & Ostriker 1992; Kauffmann, Nusser & Steinmetz 1997; Blanton et al. 1999; Somerville et al. 2000). The biasing becomes even more pronounced at high redshift, as predicted by theory (e.g., Kaiser 1986; Davis et al. 1985; Bardeen et al. 1986; Dekel & Rees 1987; Mo & White 1996; Bagla 1998; Jing & Suto 1998; Wechsler et al. 1998), and confirmed by the strong clustering of galaxies observed at $`z\sim 3`$ (Steidel et al. 1996; 1998). Knowing the biasing scheme is crucial for extracting dynamical information and cosmological parameters from the observed galaxy distribution, and may also be very useful for understanding the process and history of galaxy formation.
The simplest possible biasing model relating the density fluctuation fields of matter and galaxies, $`\delta `$ and $`\delta _\mathrm{g}`$, is the deterministic and linear relation, $`\delta _\mathrm{g}(\mathbf{x})=b\delta (\mathbf{x})`$, where $`b`$ is a constant linear biasing parameter. However, this is at best a crude approximation, because it is not self-consistent (e.g., it does not prevent $`\delta _\mathrm{g}`$ from becoming smaller than $`-1`$ when $`b>1`$) and is not preserved in time. At any given time, scale and galaxy type, the biasing is expected in general to be nonlinear, i.e., $`b`$ should vary as a function of $`\delta `$. The nonlinearity of dark-matter halo biasing (as well as its dependence on scale, mass and time) is approximated fairly well by the model of Mo & White (1996), based on the extended Press-Schechter formalism (Bond et al. 1991). Improved approximations have been proposed by Jing (1998), Catelan et al. (1998), Sheth & Tormen (1999) and Porciani et al. (1999). It is quantified further for halos and galaxies using cosmological $`N`$-body simulations with semi-analytic galaxy formation (e.g., Somerville et al. 2000). The biasing is also expected, in general, to be stochastic, in the sense that a range of values of $`\delta _\mathrm{g}`$ is possible for any given value of $`\delta `$. For example, if the biasing is nonlinear on one scale, it should be different and non-deterministic on any other scale. The origin of the scatter is shot noise as well as the influence of physical quantities other than mass density (e.g., velocity dispersion, the dimensionality of the local deformation tensor which affects the shape of the collapsing object, etc.) on the efficiency of galaxy formation.
Dekel & Lahav (1999) have proposed a general formalism for galaxy biasing that separates nonlinearity and stochasticity in a natural way. The density fields are treated as random fields, and the biasing is fully characterized by the conditional probability distribution function $`P(\delta _\mathrm{g}|\delta )`$. The constant linear biasing factor $`b`$ is replaced by a mean biasing function,
$$\langle \delta _\mathrm{g}|\delta \rangle \equiv b(\delta )\delta ,$$
(1)
which can in principle take a wide range of functional forms, restricted by definition to have $`\delta _\mathrm{g}=0`$ and $`\delta _\mathrm{g}|\delta 1`$ for any $`\delta `$. The stochasticity is expressed by the higher moments about this mean, such as the conditional variance
$$\sigma _\mathrm{b}^2(\delta )\equiv \langle ϵ^2|\delta \rangle /\sigma ^2,\qquad ϵ\equiv \delta _\mathrm{g}-\langle \delta _\mathrm{g}|\delta \rangle ,$$
(2)
scaled for convenience by the variance of mass fluctuations, $`\sigma ^2\equiv \langle \delta ^2\rangle `$. To second order, the biasing function $`b(\delta )`$ can be characterized by two parameters: the moments $`\widehat{b}`$ and $`\stackrel{~}{b}`$,
$$\widehat{b}\equiv \langle b(\delta )\delta ^2\rangle /\sigma ^2\quad \mathrm{and}\quad \stackrel{~}{b}^2\equiv \langle b^2(\delta )\delta ^2\rangle /\sigma ^2.$$
(3)
The parameter $`\widehat{b}`$ is the natural extension of the linear biasing parameter, measuring the slope of the linear regression of $`\delta _\mathrm{g}`$ on $`\delta `$, and $`\stackrel{~}{b}/\widehat{b}`$ is a useful measure of non-linearity. The stochasticity is characterized independently by a third parameter, $`\sigma _\mathrm{b}^2\equiv \langle ϵ^2\rangle /\sigma ^2`$. As has been partly explored by Dekel & Lahav (1999), these parameters should enter any nonlinear analysis aimed at extracting the cosmological density parameter $`\mathrm{\Omega }`$ from a galaxy redshift survey, and are therefore important to measure.
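As a concrete reading of equation (3), the sketch below estimates $`\widehat{b}`$ and $`\stackrel{~}{b}`$ from paired density samples. It approximates $`\langle \delta _\mathrm{g}|\delta \rangle `$ by the conditional mean of $`\delta _\mathrm{g}`$ in equal-width bins of $`\delta `$; this is a simplified stand-in for the per-bin linear regression used later in the paper, and the function name is ours:

```python
# Sketch: estimate b-hat and b-tilde of eq. (3) from (delta, delta_g) pairs.
from statistics import mean

def biasing_moments(delta, delta_g, nbins=10):
    lo, hi = min(delta), max(delta)
    width = (hi - lo) / nbins or 1.0
    def bin_of(d):
        return min(int((d - lo) / width), nbins - 1)
    bins = [[] for _ in range(nbins)]
    for d, dg in zip(delta, delta_g):
        bins[bin_of(d)].append(dg)
    cond = [mean(b) if b else 0.0 for b in bins]   # <delta_g|delta> per bin
    sigma2 = mean(d * d for d in delta)            # sigma^2 = <delta^2>
    bd = [cond[bin_of(d)] for d in delta]          # b(delta)*delta per sample
    b_hat = mean(v * d for v, d in zip(bd, delta)) / sigma2
    b_tilde = (mean(v * v for v in bd) / sigma2) ** 0.5
    return b_hat, b_tilde
```

For exactly linear biasing, $`\delta _\mathrm{g}=b\delta `$, both moments reduce to $`b`$ up to a small binning error.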
In this paper we propose a simple method to measure the biasing function $`b(\delta )`$ and the associated parameters $`\widehat{b}`$ and $`\stackrel{~}{b}`$ from observed data that are either already available, such as the PSC$`z`$ redshift survey (Saunders et al. 2000), or that will soon become available, such as the redshift surveys of 2dF (Colless 1999) and SDSS (e.g., Loveday et al. 1998) and high-redshift surveys such as DEEP (Davis & Faber 1999). Alternative methods have been proposed to measure the biasing function, using the cumulant correlators of the observed distribution of galaxies in redshift surveys (Szapudi 1998) or their bispectrum (Matarrese, Verde, Heavens 1997, Verde et al. 1998).
We first show in §2, using halos and galaxies in $`N`$-body simulations, that the difference between the cumulative distribution functions (CDFs) of galaxies and mass can be straightforwardly translated into $`\delta _\mathrm{g}|\delta `$ despite the scatter in the biasing scheme. Then, in §3, we demonstrate that for our purpose, $`C(\delta )`$ is insensitive to the cosmological model and can be approximated robustly by a cumulative log-normal distribution. This means that we do not need to observe $`C(\delta )`$, which is hard to do; we only need to measure $`C_\mathrm{g}(\delta _\mathrm{g})`$ and, independently, the rms value $`\sigma `$ of the mass fluctuations on the same scale. In §4, we slightly modify the method to account for redshift-space distortions, and use mock galaxy catalogs from N-body simulations to evaluate the associated errors. Finally, in §5, we estimate the errors due to the sparse sampling and finite volume. The method and its applications to existing and future data are discussed in §6.
## 2 BIASING FUNCTION FROM DISTRIBUTION FUNCTIONS
Let $`C_\mathrm{g}(\delta _\mathrm{g})`$ and $`C(\delta )`$ be the cumulative distribution functions of the density fluctuations of galaxies and mass respectively (at a given smoothing window). Had the biasing relation been deterministic and monotonic, it could have been determined straightforwardly from the difference between these CDFs at given percentiles,
$$\delta _\mathrm{g}(\delta )=C_\mathrm{g}^{-1}[C(\delta )],$$
(4)
where $`C_\mathrm{g}^{-1}`$ is the inverse function of $`C_\mathrm{g}`$ (<sup>1</sup>a similar relation has been used by Narayanan & Weinberg (1998) for “debiasing” the galaxy density field for the purpose of dynamical reconstruction). In the presence of scatter in the biasing scheme, strict monotonicity is violated, but it is possible that $`C_\mathrm{g}^{-1}[C(\delta )]`$ is still a good approximation for $`\langle \delta _\mathrm{g}|\delta \rangle `$, as long as the latter is monotonic (<sup>2</sup>the absence of spiral galaxies in the centers of rich clusters may result in a non-monotonic biasing function for this type of galaxies at small smoothing scales, as hinted in Blanton et al. (1999); however, using the simulations described in this section, Somerville et al. (2000) do not find non-monotonicity for late-type galaxies at $`8h^{-1}\mathrm{Mpc}`$ smoothing, as used in Figure 3 below). The validity of this approximation is addressed in the present section.
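Equation (4) is, in practice, an empirical quantile-matching map between two samples. A minimal sketch (illustrative only; mid-rank percentiles are used to reduce edge bias, and the samples need not be paired):

```python
# Sketch of eq. (4): map delta to the galaxy-density value at the same
# cumulative percentile, using empirical CDFs built from two samples.
from bisect import bisect_right

def cdf_match(delta_mass, delta_gal):
    """Return a function d -> C_g^{-1}[C(d)] built from the two samples."""
    m = sorted(delta_mass)
    g = sorted(delta_gal)
    n_m, n_g = len(m), len(g)
    def dg_of(d):
        r = bisect_right(m, d)                 # mass samples <= d
        p = (r - 0.5) / n_m if r else 0.0      # mid-rank percentile C(d)
        return g[min(int(p * n_g), n_g - 1)]   # empirical C_g^{-1}(p)
    return dg_of
```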
We use two cosmological $`N`$-body simulations in which both halos and galaxies were identified (Kauffmann et al. 1999). The cosmological models are $`\tau `$CDM (with $`\mathrm{\Omega }_\mathrm{m}=1`$ and $`h=0.5`$) and $`\mathrm{\Lambda }`$CDM (with $`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ and $`h=0.7`$). $`N=256^3`$ particles were simulated in a periodic box of comoving size $`85`$ and $`141h^{-1}\mathrm{Mpc}`$ respectively (corresponding to a mass resolution of $`1.0\times 10^{10}h^{-1}M_{\odot }`$ and $`1.4\times 10^{10}h^{-1}M_{\odot }`$). The simulations were run using a parallel adaptive P<sup>3</sup>M code kindly made available by the Virgo Consortium (see Jenkins et al. 1998) as part of the “GIF” collaboration between the HU Jerusalem and the MPA Munich. The present epoch is defined by a linear rms density fluctuation in a top-hat sphere of radius $`8h^{-1}\mathrm{Mpc}`$ of $`\sigma _8=0.6`$ in the $`\tau `$CDM simulation and $`\sigma _8=0.9`$ in the $`\mathrm{\Lambda }`$CDM simulation. Dark-matter halos were identified at densely sampled time steps using a friends-of-friends algorithm. Galaxies were identified inside these halos by applying in retrospect semi-analytic models (SAMs) of galaxy formation (Kauffmann et al. 1999). The SAMs simulate the important physical processes of galaxy formation such as gas cooling, star formation and supernovae feedback. At different times in the redshift range 0 to 3, we select halos by mass and galaxies by luminosity or type. We then compute density fields by applying top-hat smoothing with radii in the range $`5`$–$`15h^{-1}\mathrm{Mpc}`$. We report detailed results for the case of $`8h^{-1}\mathrm{Mpc}`$ smoothing, and refer to the scale dependence in several places.
The figures of this section illustrate the success of the approximation, equation (4), in several different cases based on the $`\tau `$CDM simulation, with top-hat smoothing of radius $`8h^{-1}\mathrm{Mpc}`$ (hereafter TH8, or TH$`X`$ for radius $`Xh^{-1}\mathrm{Mpc}`$), and at different redshifts. Figure 1 refers to halos of mass $`>10^{12}h^{-1}M_{\odot }`$ ($`>100`$ particles). On the top we show the cumulative distributions of halos and underlying mass fluctuations, $`C_\mathrm{g}(\delta _\mathrm{g})`$ and $`C(\delta )`$ (our notation does not distinguish between halos and galaxies). The errors in $`C_\mathrm{g}`$ are computed from 20 bootstrap simulations of the halo field. The errors in $`C`$, estimated in the same way, are smaller by an order of magnitude and are therefore not shown. The bottom panels show a point-by-point comparison of the TH8 fields of $`\delta _\mathrm{g}(\mathbf{x})`$ and $`\delta (\mathbf{x})`$ at points randomly chosen (1:8) from a uniform grid of spacing $`2.64h^{-1}\mathrm{Mpc}`$ within the simulation box. The true mean biasing function $`\langle \delta _\mathrm{g}|\delta \rangle `$ is marked by the filled circles with attached error bars. It is computed by a local linear regression of $`\delta _\mathrm{g}`$ on $`\delta `$ within each bin of $`\delta `$, adopting the value of the fitted line at the center of the bin (only every other bin is shown). Shown in comparison (solid line) is the approximation for $`\langle \delta _\mathrm{g}|\delta \rangle `$ obtained by equation (4) from the CDFs, and the corresponding $`1\sigma `$ error range based on the bootstrap realizations (dotted lines).
As can be seen in Figure 1, the approximation is excellent over most of the $`\delta `$ range — the deviation at $`z`$=0 is within the $`1\sigma `$ errors up to $`\delta \simeq 1.4`$ (corresponding to $`97\%`$ of the volume). Systematic deviations show up at higher $`\delta `$ values, where the scatter becomes larger and the mean biasing function flatter, making the deviations from monotonicity larger. In order to quantify the quality of the approximation, we average the residuals (scaled by $`\sigma _\mathrm{g}`$):
$$\mathrm{\Delta }=\frac{1}{N_{\mathrm{bins}}\sigma _\mathrm{g}^2}\underset{\delta \mathrm{bins}}{\overset{N_{\mathrm{bins}}}{\sum }}[\delta _\mathrm{g}(\delta )-\langle \delta _\mathrm{g}|\delta \rangle ]^2,$$
(5)
where $`\delta _\mathrm{g}(\delta )`$ is obtained via equation (4). We exclude the poorly recovered high-density tail by limiting the summation to those $`N_{\mathrm{bins}}`$ bins of $`\delta `$ for which $`C(\delta )<0.99`$ and $`C_\mathrm{g}(\delta _\mathrm{g})<0.99`$. The values of $`\mathrm{\Delta }`$ in the various cases studied, including halos and galaxies in $`\tau `$CDM and $`\mathrm{\Lambda }`$CDM at different redshifts, are listed in Table 1. For example, for the halos shown in Figure 1 at $`z=0`$ we obtain $`\mathrm{\Delta }=0.08`$, indicating that the typical error in the approximation $`\delta _\mathrm{g}(\delta )`$ is small compared to the actual scatter $`\sigma _\mathrm{g}`$ in the halo density field.
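Equation (5) itself reduces to a few lines once the binned approximation and the binned true conditional mean are in hand; a minimal sketch (our function name):

```python
def delta_metric(dg_approx, dg_true, sigma_g2):
    """Eq. (5): mean squared residual between the CDF-based approximation
    and the true <delta_g|delta>, over the kept delta bins, scaled by
    the galaxy-field variance sigma_g^2."""
    n = len(dg_approx)
    return sum((a - t) ** 2 for a, t in zip(dg_approx, dg_true)) / (n * sigma_g2)
```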
A complementary approach for quantifying the quality of the approximation is by testing how well it recovers the values of the moments of the biasing function, $`\widehat{b}`$ and $`\stackrel{~}{b}`$. In Table 1 we present the values of these moments for the different cases, as computed directly from the simulation and as approximated by $`\delta _\mathrm{g}(\delta )`$ (denoted by a subscript “a”). These biasing parameters are computed based on 99.9% of the volume, excluding the very highest density peaks, where the error is large (The only exception is at $`z`$=3, where we use only 99% of the volume because the errors are even larger). For the halos shown in Figure 1 at $`z=0`$, we see that $`\widehat{b}`$ and $`\stackrel{~}{b}`$ are recovered with errors of 1% and 3% respectively.
The middle panels of Figure 1 refer to $`z=1`$, where $`\widehat{b}\simeq 2.2`$. The approximation of equation (4) holds well in this case up to $`\delta \simeq 0.7`$, which corresponds to $`98\%`$ of the volume. The approximation remains good despite the large scatter (compared to the $`z=0`$ case) because the steepness of the biasing function helps maintain reasonable monotonicity. The goodness of the recovery of the biasing function, with $`\mathrm{\Delta }=0.07`$, is similar to the $`z=0`$ case. The parameters $`\widehat{b}`$ and $`\stackrel{~}{b}`$ are recovered with an accuracy of $`5\%`$ (Table 1). The right panels of Figure 1 demonstrate that the approximation is valid even at $`z`$=3, where the biasing is extremely strong, $`\widehat{b}\simeq 6.6`$. The recovery of the biasing function is still good, $`\mathrm{\Delta }=0.20`$, and its moments are approximated to within $`2\%`$.
The halo biasing function in the $`\mathrm{\Lambda }`$CDM cosmology is recovered, in general, with similar success, as can be seen in the top part of Table 1. Note that in this case the recovery actually improves at higher redshift. This reflects the fact that in $`\mathrm{\Lambda }`$CDM the halo biasing scatter becomes smaller at higher redshift (see Somerville et al. 2000, Fig. 17). It results from the smaller shot noise due to the higher abundance of high-redshift halos in $`\mathrm{\Lambda }`$CDM compared to $`\tau `$CDM.
Figure 2 is analogous to Figure 1, but now for bright galaxies of $`M_\mathrm{B}-5\mathrm{log}h<-19.5`$. The recovery is again quantified in Table 1; it is quite similar to the case of halos. The typical error is $`\mathrm{\Delta }\simeq 0.08`$, and the biasing parameters are recovered with an error of a couple to a few percent.
The performance of our method has been tested for smoothing scales in the range $`5`$–$`15h^{-1}\mathrm{Mpc}`$. For the $`\tau `$CDM model, we find that the quality of the approximation is practically independent of scale throughout this range; the relative error in the biasing parameters is at the level of a few percent, and $`\mathrm{\Delta }`$ is in the range 0.1 to 0.2, rather similar to the values quoted in Table 1 for TH8 smoothing. On the other hand, for $`\mathrm{\Lambda }`$CDM we do find that the performance improves with increasing smoothing scale. With TH15 at $`z=0`$, for halos (or galaxies), the errors in the biasing parameters reduce to below 3% (1%), and $`\mathrm{\Delta }=0.07`$ (0.04), while for TH5 smoothing these errors are about 4 times larger. This difference between the two models can be attributed to a difference in the scale dependence of the biasing scatter (Somerville et al. 2000, Figure 16), which translates to an error in our method via increased deviations from monotonicity.
Before we proceed with the biasing relative to the underlying mass, we note that the relative biasing function of two different galaxy types, $`\langle \delta _{\mathrm{g}_2}|\delta _{\mathrm{g}_1}\rangle `$, can be directly observed from a redshift survey. Again, for a deterministic and monotonic biasing process one has
$$\delta _{\mathrm{g}_2}(\delta _{\mathrm{g}_1})=C_{\mathrm{g}_2}^{-1}[C_{\mathrm{g}_1}(\delta _{\mathrm{g}_1})],$$
(6)
and when biasing scatter is present, the question is to what extent equation (6) provides a valid approximation for the true relative biasing function.
Figure 3 shows the relative biasing function of “early” and “late” type galaxies in the two cosmological models, at $`z=0`$ and with TH8 smoothing. These galaxy types are distinguished in the SAM $`N`$-body simulations according to the ratio of bulge to total luminosity in the V band being larger or smaller than 0.4 respectively. The large scatter in the relative biasing, due to errors in the two density fields, is reduced by including all the galaxies, without applying a luminosity cut.
As can be seen in the last three columns of Table 1, the quality of the recovery of the relative biasing function is not as good as in the case of the absolute biasing of galaxies or halos. The values of $`\mathrm{\Delta }`$ range from 0.2 to 0.56, compared to 0.08 to 0.16 in the former cases. This is expected, because in the case of relative biasing the two density fields contribute to the stochasticity or deviation from monotonicity (see also the important role of sampling errors in the recovery of the biasing function, §4.2). The moments of the relative biasing function are recovered with better than 15% accuracy at $`z\le 1`$, and to $`\sim `$25% accuracy at $`z=3`$, in both cosmologies. In calculating the moments, unlike in Figure 3, a luminosity cut has been applied: $`M_\mathrm{B}-5\mathrm{log}h<-19.5`$, and 99% of the volume was used. The fact that the $`\mathrm{\Delta }`$ values are still significantly smaller than unity and the errors in the biasing parameters are not larger than 25% indicate that our method is capable of yielding meaningful estimates of the relative biasing function. In both cosmologies, the relative biasing is almost scale independent in the range 5–15 $`h^{-1}\mathrm{Mpc}`$, as is the quality of the reconstruction.
## 3 THE MASS CDF: ROBUST AND LOGNORMAL
Large redshift surveys provide a rich body of data for mapping the galaxy density field in extended regions of space and computing its CDF with adequate accuracy. However, direct mapping of the mass density field is much harder. For example, POTENT reconstruction from peculiar velocities (Dekel, Bertschinger & Faber 1990; Dekel et al. 1999; Dekel 2000) yields the mass distribution in our local cosmological neighborhood (even out to $`100h^{-1}\mathrm{Mpc}`$), which in principle enables direct mapping of the local biasing field. However, the sparse and noisy data limit the mass reconstruction to low resolution ($`10h^{-1}\mathrm{Mpc}`$) compared to the volume sampled, which introduces large cosmic scatter in the mass CDF. New accurate data nearby, based on SBF distances (Tonry et al. 1997) do enable a promising resolution of a few Mpc (see Dekel 2000), but limited to inside the local sphere of radius $`30h^{-1}\mathrm{Mpc}`$.
What makes the method proposed here feasible is the fact that the mass CDF is only weakly sensitive to variations in the cosmological scenario within the range of models that are currently considered as viable models for the formation of large-scale structure (e.g., Primack 1998, Bahcall et al. 1999). It has been proposed that the mass PDF can be well approximated by a log-normal distribution in $`\rho /\overline{\rho }=1+\delta `$ (e.g., Coles & Jones 1991; Kofman et al. 1994), and it has since been argued that this approximation becomes poor for certain power spectra and at the tails of the distribution (Bernardeau 1994; Bernardeau & Kofman 1995). In this section, we investigate the robustness of $`C(\delta )`$ for our purpose here, namely, in comparison with the typical difference between the CDFs of galaxies and mass (i.e., the mean biasing function) which we are trying to approximate.
We use for this purpose a suite of $`N`$-body simulations of six different cosmological models. In addition to the two high-resolution simulations of $`\tau `$CDM and $`\mathrm{\Lambda }`$CDM used in the previous section, we have simulated three random realizations of each of the three following models (all using a Hubble constant of $`h=0.5`$): standard CDM (SCDM; $`\mathrm{\Omega }_\mathrm{m}=1`$ with spectral index $`n=1`$), an extreme open CDM (OCDM; $`\mathrm{\Omega }_\mathrm{m}=0.2`$, $`n=1`$), and an extreme tilted CDM (TCDM; $`\mathrm{\Omega }_\mathrm{m}=1`$, $`n=0.6`$). These simulations were run by Ganon et al. (2000, in preparation) using a PM code (by Bertschinger & Gelb 1991), with $`128^3`$ particles in a $`256h^{-1}\mathrm{Mpc}`$ box. The present epoch is defined in these simulations by a linear fluctuation amplitude of $`\sigma _8=1.0`$. A similar simulation was run using a constrained realization (CR) of the local universe based on the galaxy density in the IRAS 1.2Jy redshift survey under the assumption of no biasing (Kolatt et al. 1996), with $`\mathrm{\Omega }_\mathrm{m}=1`$ and the present defined in this case by $`\sigma _8=0.7`$.
Figure 4 (left) shows for the different models the deviations $`\mathrm{\Delta }C(\delta )`$ of the mass CDFs, smoothed TH8, from a cumulative log-normal distribution with the same $`\sigma `$. The log-normal probability density is
$$P_{\mathrm{ln}}(\delta )=\frac{1}{\stackrel{~}{\rho }}\frac{1}{\sqrt{2\pi }s}\mathrm{exp}\left[-\frac{(\mathrm{ln}\stackrel{~}{\rho }-m)^2}{2s^2}\right],$$
(7)
where
$$\stackrel{~}{\rho }=1+\delta ,m=-\frac{1}{2}\mathrm{ln}(1+\sigma ^2),s^2=\mathrm{ln}(1+\sigma ^2)\mathrm{and}\sigma ^2=\langle \delta ^2\rangle .$$
(8)
The cumulative log-normal distribution is obtained by integration,
$$C_{\mathrm{ln}}(\delta )=\mathrm{erf}\left[\frac{\mathrm{ln}\stackrel{~}{\rho }-m}{s}\right],$$
(9)
where
$$\mathrm{erf}(x)\equiv \frac{1}{\sqrt{2\pi }}\int _{-\mathrm{\infty }}^xe^{-t^2/2}𝑑t.$$
(10)
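For concreteness, equations (7)–(10) are straightforward to evaluate numerically. The sketch below (Python; the function name is ours, and we interpret the paper's "erf" as the standard normal CDF per eq. 10) is illustrative rather than the code used in the paper:

```python
import math

def lognormal_cdf(delta, sigma):
    """Cumulative log-normal mass distribution C_ln(delta), eqs. (7)-(10).

    sigma is the rms of delta.  m and s follow from requiring that
    rho~ = 1 + delta has unit mean and variance sigma^2.
    """
    s2 = math.log(1.0 + sigma ** 2)      # s^2 = ln(1 + sigma^2)
    m = -0.5 * s2                        # m = -(1/2) ln(1 + sigma^2)
    x = (math.log(1.0 + delta) - m) / math.sqrt(s2)
    # eq. (10): the paper's "erf" is the standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

By construction the median of $`\stackrel{~}{\rho }`$ is $`e^m`$, so the CDF equals 1/2 there.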
For the cases of OCDM, TCDM and SCDM, the CDF is obtained from the three simulations of each model put together. The errors are similar in the different cases; we therefore plot representative error bars only for the $`\tau `$CDM case.
In all the realizations that had random Gaussian initial conditions, the deviation from lognormality is less than 2%. The constrained realization shows somewhat larger deviations, but even in this case they never exceed 5%. These deviations are indeed smaller than the typical differences between $`C_\mathrm{g}(\delta )`$ and $`C(\delta )`$, which are on the order of 10% (see Figure 1).
In order to evaluate how important the contribution of $`\mathrm{\Delta }C`$ is to the error in the recovery of $`\langle \delta _\mathrm{g}|\delta \rangle `$, we compare in the right panel of Figure 4 the true $`\langle \delta _\mathrm{g}|\delta \rangle `$ in the $`\tau `$CDM simulation with two approximations $`\delta _\mathrm{g}(\delta )`$ based on equation (4), one using the true matter CDF and the other replacing it with a cumulative log-normal distribution of the same $`\sigma `$. The results of the two approximations are very similar; the differences between them seem to be much smaller than the differences between each of them and the true biasing function $`\langle \delta _\mathrm{g}|\delta \rangle `$. We can conclude that for the purpose of recovering the biasing function, for the range of Gaussian cosmological models considered, $`C_{\mathrm{ln}}`$ is a good approximation for $`C`$.
The proximity of $`C`$ and $`C_{\mathrm{ln}}`$ could have been alternatively evaluated by the Kolmogorov-Smirnov (KS) statistic, $`D\equiv \mathrm{max}\{|\mathrm{\Delta }C|\}`$. For computing the KS significance $`q(D)`$, we estimate the effective number of “independent” points by $`N_{\mathrm{eff}}=V_{\mathrm{box}}/V_{\mathrm{win}}`$, where $`V_{\mathrm{box}}`$ is the volume of the simulation box and $`V_{\mathrm{win}}`$ is the effective volume of the smoothing window. A value of $`q\simeq 1`$ ($`D\ll 1`$) corresponds to a good match, and $`q\ll 1`$ ($`D\sim 1`$) to a poor match. For our $`\tau `$CDM simulation, with TH8 smoothing at $`z=0`$ and 1, we obtain $`D\simeq 0.01`$ and $`q>0.9999`$, confirming that $`C_{\mathrm{ln}}`$ is a good fit. However, for the larger SCDM and OCDM simulations, although $`D`$ is still only $`\sim 0.015`$, the corresponding $`q`$ values are at the level of only a few percent. For TCDM and CR, where $`D`$ is 0.016 and 0.052 respectively, the values of $`q`$ drop to the level of a fraction of a percent, and the discrepancy seems large. This KS test indicates that the log-normal approximation is not always perfect for general purpose, as has been argued in the literature. However, our direct tests reported above demonstrate that the use of the log-normal approximation is adequate for the recovery of the mean biasing function in all these cases.
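The KS significance quoted above can be reproduced with the standard asymptotic series for $`Q_{\mathrm{KS}}`$. A hedged sketch (Python; the series form and the small-sample correction follow common practice and are not necessarily the exact routine used here):

```python
import math

def ks_significance(d, n_eff):
    """Approximate KS significance q(D) for n_eff independent points.

    Uses the asymptotic series Q_KS(lam) = 2 * sum_j (-1)^(j-1) exp(-2 j^2 lam^2)
    with lam = (sqrt(N) + 0.12 + 0.11/sqrt(N)) * D.
    q ~ 1 means a good match; q << 1 a poor one.
    """
    sqrt_n = math.sqrt(n_eff)
    lam = (sqrt_n + 0.12 + 0.11 / sqrt_n) * d
    total = 0.0
    for j in range(1, 101):
        term = 2.0 * (-1.0) ** (j - 1) * math.exp(-2.0 * j * j * lam * lam)
        total += term
        if abs(term) < 1e-12:
            break
    return min(max(total, 0.0), 1.0)
```

With $`N_{\mathrm{eff}}=V_{\mathrm{box}}/V_{\mathrm{win}}`$ this maps a measured $`D`$ to a significance level.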
We comment in passing that while the mass CDF is well approximated for our purpose by a log-normal distribution, the shape of the halo (or galaxy) CDF is usually far from a log-normal shape. This is implied by equation (4), from which it follows that $`C_\mathrm{g}(\delta _\mathrm{g})=C[\delta _\mathrm{g}^{-1}(\delta _\mathrm{g})]`$. One does not expect to recover a log-normal distribution from a general functional form for $`\delta _\mathrm{g}^{-1}`$. In particular, the linear biasing model, which seems to be an acceptable approximation in some cases with large smoothing (e.g., IRAS 1.2Jy galaxies at 12$`h^{-1}\mathrm{Mpc}`$ Gaussian smoothing; Sigad et al. 1998), leads to a $`C_\mathrm{g}(\delta _\mathrm{g})`$ that is far from log-normal. Trying to evaluate the difference between $`C_\mathrm{g}`$ and a log-normal distribution using the KS statistic, we obtain for the halos in the $`\tau `$CDM simulation, with TH8 smoothing, both at $`z=0`$ and 1, $`D\simeq 0.08`$ and $`q\simeq 0.05`$, namely a poor fit compared to the $`q\simeq 1`$ of $`C`$ vs $`C_{\mathrm{ln}}`$. Similar conclusions are valid for galaxies.
Our method for measuring the nonlinear biasing function requires an assumed value of $`\sigma `$. Since $`\sigma `$ is known only to a limited accuracy (§6), we should check the robustness of our results to errors in $`\sigma `$. We repeated the reconstruction described in §2, both for halos and for galaxies, with perturbed values of $`\sigma `$ in a range $`\pm 20\%`$ about the true value of the simulation. Not surprisingly, we find that the analog of the linear biasing parameter, $`\widehat{b}`$, varies roughly in proportion to $`\sigma ^{-1}`$. We also find that $`\stackrel{~}{b}`$ varies in a similar way, such that the ratio $`\stackrel{~}{b}/\widehat{b}`$, which is the natural measure of nonlinear biasing (Dekel & Lahav 1999), is a very weak function of $`\sigma `$, roughly $`\stackrel{~}{b}/\widehat{b}\propto \sigma ^{0.15}`$. This test indicates that our method provides a robust measure of the nonlinearity in the biasing scheme, that is to a large extent decoupled from the uncertainty in the linear biasing parameter.
## 4 REDSHIFT DISTORTIONS
The densities as measured in redshift space (z-space) are in general different from the real-space (r-space) densities addressed so far, because the radial peculiar velocities distort the volume elements along the lines of sight. One approach to deal with redshift distortions is to start by recovering the full galaxy density field in r-space, using the linear or a mildly-nonlinear approximation to gravitational instability (e.g., Yahil et al. 1991; Strauss et al. 1992; Fisher et al. 1995; Sigad et al. 1998), and then compute the biasing function in r-space as outlined above. The accuracy of such a procedure would be limited by the approximation used for nonlinear gravity. Another difficulty with this approach is that it requires one to assume a priori a specific biasing scheme, already in the force calculation that enters the transformation from z-space to r-space, while this biasing scheme is the very unknown we are after; this would require a nontrivial iterative procedure.
The alternative we propose here is to actually use the z-space CDF, $`C_{\mathrm{g},\mathrm{z}}(\delta _{\mathrm{g},\mathrm{z}})`$, as provided directly from counts in cells of galaxies in a redshift survey. If the redshift distortions affect the densities of galaxies and mass in a similar way, then one may expect the biasing function in z-space to be similar to the one in real space,
$$\langle \delta _{\mathrm{g},\mathrm{z}}|\delta _\mathrm{z}=\delta \rangle =\langle \delta _\mathrm{g}|\delta \rangle .$$
(11)
If we only had a robust functional form for the mass CDF in z-space, $`C_\mathrm{z}(\delta _\mathrm{z})`$, then we could compute the desired biasing function all in z-space, using equation (4) but with the analogous z-space quantities. We thus need to test the validity of equation (11), and come up with a useful approximation for $`C_\mathrm{z}(\delta _\mathrm{z})`$.
Figure 5 illustrates the accuracy of equation (11). It compares the biasing functions in z-space and r-space, as derived via equation (4) and its z-space analog from the corresponding CDFs of halos and mass in the $`\tau `$CDM simulation with TH8 smoothing. The two curves are remarkably similar for $`\delta \lesssim 0.6-0.8`$, roughly out to the 1-sigma rms fluctuation value. This is roughly the range where the biasing scatter is reasonably small and our basic method is applicable (§2, Figure 1). The curves deviate gradually as $`\delta `$ increases, partly due to stronger “fingers of god” effects at high densities. The deviation is somewhat weaker for larger-mass halos (perhaps due to a lower velocity dispersion for more massive objects as a result of dynamical friction).
The direction of the deviation from equation (11), as seen in Figure 5, can be obtained by applying linear theory of redshift distortions to the case of linear biasing in r-space, $`\delta _\mathrm{g}=b\delta `$. In linear theory, the density fluctuations in r-space and z-space are related via $`\delta _\mathrm{z}=\delta [1+f(\mathrm{\Omega }_\mathrm{m})\mu ^2]`$, where $`f(\mathrm{\Omega }_\mathrm{m})\simeq \mathrm{\Omega }_\mathrm{m}^{0.6}`$ (with a negligible dependence on $`\mathrm{\Omega }_\mathrm{\Lambda }`$, see Lahav et al. 1991) and $`\mu `$ is the cosine of the angle between the galaxy velocity vector and the line of sight. If the galaxies obey the continuity equation, then $`\delta _{\mathrm{g},\mathrm{z}}-\delta _\mathrm{g}=\delta _\mathrm{z}-\delta `$, which implies the following biasing relation in z-space:
$$\delta _{\mathrm{g},\mathrm{z}}=\frac{b+f(\mathrm{\Omega }_\mathrm{m})\mu ^2}{1+f(\mathrm{\Omega }_\mathrm{m})\mu ^2}\delta _\mathrm{z}.$$
(12)
Averaging over all possible directions and assuming $`\mathrm{\Omega }_\mathrm{m}=1`$, we find that the linear biasing parameter in z-space is predicted to be $`b_z=(3b+2)/5`$ for the case shown in Figure 5. This indicates that the linear biasing parameter tends to be closer to unity in z-space than in r-space. Based on our empirical tests of equation (11), we learn that the nonlinear effects (of biasing and gravity) conspire to make equation (11) a better approximation than implied by the linear approximation.
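Equation (12) is simple to encode for quick order-of-magnitude checks (Python sketch; the function name is ours):

```python
def b_z_linear(b, omega_m, mu):
    """Linear-theory z-space biasing factor of eq. (12) for a given
    r-space linear bias b and line-of-sight cosine mu."""
    f = omega_m ** 0.6          # f(Omega_m) ~ Omega_m^0.6
    return (b + f * mu * mu) / (1.0 + f * mu * mu)
```

Note that unbiased galaxies ($`b=1`$) stay unbiased in z-space, and $`b>1`$ is pulled toward unity, as stated above.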
Note that while the results of Figure 5 based on our high-resolution $`\tau `$CDM simulation are quite accurate in the way they treat halos, they may suffer from significant cosmic variance due to the relatively small volume sampled, where the presence (or absence) of a few “fingers of god” could strongly affect the biasing function in the high-$`\delta `$ regime. To test the validity of equation (11) with reduced cosmic variance, we appeal to yet another set of N-body simulations (by Cole et al. 1997) which cover a much larger volume but with lower resolution. These simulations followed the evolution of $`N=192^3`$ particles in a periodic box of comoving side $`L=345.6h^{-1}\mathrm{Mpc}`$ using an Adaptive P<sup>3</sup>M code. The cosmological models are $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`h=0.65`$, cluster-normalized to $`\sigma _8=1.05`$) and $`\tau `$CDM ($`\mathrm{\Omega }=1`$, $`h=0.25`$, cluster-normalized to $`\sigma _8=0.55`$). Nine mock catalogs were extracted from each of the parent simulations, each containing $`5\times 10^5`$ particles in a box of $`L=200h^{-1}\mathrm{Mpc}`$. The partial overlap between the catalog volumes is thus about $`50\%`$. The central “observer” was chosen to mimic certain properties of the Local Group environment (see Branchini et al. 1999). Since the resolution of these large simulations is inadequate for a detailed halo identification based on many simulated particles in each halo, we identify individual particles as galaxies using a Monte-Carlo procedure in which the galaxies are chosen to make a random realization of an assumed nonlinear biasing function. Here we adopt the biasing function proposed by Dekel & Lahav (1999) to fit the simulated results of Somerville et al. (2000):
$$\delta _\mathrm{g}(\delta )=\left\{\begin{array}{cc}(1+b_0)(1+\delta )^{b_{\mathrm{neg}}}-1\hfill & \delta \le 0\hfill \\ b_{\mathrm{pos}}\delta +b_0\hfill & \delta >0\hfill \end{array}\right\},$$
(13)
with $`b_{\mathrm{neg}}=2`$ and $`b_{\mathrm{pos}}=1`$. The mass density field is obtained with a Gaussian smoothing of radius $`5h^{-1}\mathrm{Mpc}`$ at the points of a $`128^3`$ cubic grid inside a box of size $`200h^{-1}\mathrm{Mpc}`$. Galaxy densities are obtained at the grid points based on equation (13), and then interpolated to the galaxy positions as defined by the selected particles. Given the appropriate probability distributions $`P(\delta )`$, the value of $`b_0`$ is determined for each choice of the parameters $`b_{\mathrm{neg}}`$ and $`b_{\mathrm{pos}}`$ such that $`\langle \delta _\mathrm{g}\rangle =0`$ as required by definition. We obtain $`b_0=0.26`$ and $`b_0=0.19`$ for the models of $`\mathrm{\Lambda }`$CDM and $`\tau `$CDM respectively.
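The normalization condition $`\langle \delta _\mathrm{g}\rangle =0`$ that fixes $`b_0`$ can be sketched numerically. In the Python fragment below (illustrative only) a log-normal $`P(\delta )`$ stands in for the measured mass PDF of the simulation, and the quadrature and bisection-bracket choices are ours:

```python
import math

def delta_g(delta, b_neg=2.0, b_pos=1.0, b0=0.26):
    """Nonlinear biasing function of eq. (13)."""
    if delta <= 0.0:
        return (1.0 + b0) * (1.0 + delta) ** b_neg - 1.0
    return b_pos * delta + b0

def solve_b0(pdf, b_neg=2.0, b_pos=1.0, n=4000, dmax=50.0):
    """Find b0 such that <delta_g> = 0 under pdf(delta), by bisection on a
    trapezoidal quadrature over (-1, dmax); <delta_g> grows monotonically
    with b0, so a simple fixed bracket suffices (assumed here)."""
    def mean_dg(b0):
        h = (dmax + 1.0) / n
        total = 0.0
        for i in range(n + 1):
            d = -1.0 + i * h
            w = 0.5 if i in (0, n) else 1.0
            total += w * delta_g(d, b_neg, b_pos, b0) * pdf(d) * h
        return total
    lo, hi = -0.9, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mean_dg(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```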
Figure 6 compares the CDFs and associated biasing functions in r-space and z-space, averaged over nine mock catalogs from the large-box $`\mathrm{\Lambda }`$CDM simulation. The z-space biasing function is indeed almost indistinguishable from the r-space one (bottom panels); the differences are typically on the order of a couple of percent. The results for $`\tau `$CDM are similar.
In order to quantify this difference further, we define a statistic analogous to equation (5):
$$\mathrm{\Delta }=\frac{1}{N_{\mathrm{bins}}\sigma _\mathrm{g}^2}\sum _{\delta \mathrm{bins}}[\delta _{\mathrm{g},\mathrm{z}}(\delta _\mathrm{z}=\delta )-\delta _\mathrm{g}(\delta )]^2,$$
(14)
in which the first and second terms are the biasing functions as derived from the CDFs in z-space and r-space respectively. The summation is over bins with $`\delta <\delta _{\mathrm{max}}`$, such that $`99\%`$ of the volume is accounted for. We also compute the two moments of the observed biasing function $`\widehat{b}_{\mathrm{obs}}`$ and $`\stackrel{~}{b}_{\mathrm{obs}}`$. These three quantities, averaged over the mock catalogs, are listed in Table 2 (second column). Their deviation from the “true” values (Table 2, first column) is the systematic error. The quoted errors refer to the 1$`\sigma `$ scatter about the mean; they represent the random errors. The results are listed for the two models, $`\mathrm{\Lambda }`$CDM and $`\tau `$CDM. We conclude that the biasing function and its moments, as computed from the z-space CDFs, resemble those computed from the r-space CDFs to within 2%. Note that the Monte Carlo procedure we use to generate mock catalogs artificially reduces the amount of clustering and over-smoothes the density fields for dark and luminous particles. The net effect is to decrease the biasing moments by $`7\%`$, relative to the values implied by the biasing scheme, equation (13). However, this bias does not affect the present analysis for which “true” values are obtained from the mock catalogs themselves and not from equation (13).
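The statistic of equation (14), like its r-space analog in equation (5), is just a normalized mean-square difference between two biasing curves sampled on a common grid of $`\delta `$ bins; a minimal sketch (illustrative names):

```python
def biasing_delta(curve_a, curve_b, sigma_g):
    """Normalized mean-square difference between two biasing curves
    (eqs. 5 and 14); curve_a and curve_b are values on the same delta bins."""
    n = len(curve_a)
    return sum((a - b) ** 2 for a, b in zip(curve_a, curve_b)) / (n * sigma_g ** 2)
```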
Our next task is to come up with a robust CDF for the mass in z-space. We try the same log-normal distribution that was found robust for our purpose in r-space (§3), but with a proper rms in z-space, $`\sigma _\mathrm{z}`$. Based on the linear approximation for Gaussian fields in the small-angle limit (Kaiser 1987), we express $`\sigma _\mathrm{z}`$ in terms of $`\sigma `$ and $`\mathrm{\Omega }_\mathrm{m}`$ of the cosmological model by:
$$\sigma _\mathrm{z}=\left[1+\frac{2}{3}f(\mathrm{\Omega }_\mathrm{m})+\frac{1}{5}f^2(\mathrm{\Omega }_\mathrm{m})\right]^{{\scriptscriptstyle \frac{1}{2}}}\sigma .$$
(15)
We thus approximate the z-space biasing function by $`\delta _{\mathrm{ln},\mathrm{z}}(\delta _\mathrm{z})`$, as derived from the z-space CDFs but where the mass CDF is replaced by a cumulative log-normal distribution function $`C_{\mathrm{ln},\mathrm{z}}`$ (eq. 9) with standard deviation $`\sigma _\mathrm{z}`$ (eq. 15). The resultant biasing function, averaged over the mock catalogs, is displayed in the bottom panels of Figure 6. We see that for $`\mathrm{\Lambda }`$CDM the differences between $`\delta _{\mathrm{ln},\mathrm{z}}(\delta _\mathrm{z}=\delta )`$ and $`\delta _\mathrm{g}(\delta )`$ are at the level of a few percent. For $`\tau `$CDM they are only a bit larger; they exceed 10% but only near $`\delta \simeq 2`$, at the tail of the distribution. The error in the biasing function $`\mathrm{\Delta }`$ defined in analogy to equation (14), and the biasing moments, are listed in Table 2 (third column, marked “z-space ln”). The systematic error $`\mathrm{\Delta }`$ is still well below 2%, but the biasing parameters are systematically underestimated by 4% and 7% in $`\mathrm{\Lambda }`$CDM and $`\tau `$CDM respectively.
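Equation (15) is straightforward to encode (Python; $`f(\mathrm{\Omega }_\mathrm{m})\simeq \mathrm{\Omega }_\mathrm{m}^{0.6}`$ assumed):

```python
import math

def sigma_z(sigma, omega_m):
    """Linear-theory rms of the z-space mass overdensity, eq. (15)."""
    f = omega_m ** 0.6
    return math.sqrt(1.0 + (2.0 / 3.0) * f + 0.2 * f * f) * sigma
```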
Overall, it seems that our straightforward method deals with redshift distortions fairly well, without any a priori assumption about the biasing scheme.
## 5 SAMPLING ERRORS
The accuracy of the derivation of the galaxy PDF is limited by two observational factors: the finite volume sampled and the mean density of galaxies in the sample. (The additional edge effects can be greatly minimized by using a volume-limited sample and a proper choice of cell coverage; see Szapudi & Colombi 1996.)
In principle, the limited volume is responsible for cosmic variance due to the fact that the long-wavelength fluctuations in the real universe are not fairly represented in the sampled volume. This is not of major concern for us here because (a) it is expected to introduce only a random error, and (b) as long as the biasing is local, the effects of long waves on the PDFs of galaxies and mass are expected to be correlated, making the local biasing function representative of the universal function despite the relatively small sampling volume.
More important is the shot noise introduced by the combination of volume and sampling density effects. For a given cell size (or smoothing length), the error can be divided into the error in the count within each cell and the error due to the finite number of cells in the sample volume. These shot-noise sources may introduce both random and systematic errors. We evaluate them by computing the mean and standard deviation over a suite of mock catalogs in which we vary either the volume or the sampling density for a fixed smoothing scale.
With TH8 smoothing, our mock catalogs from the large $`\mathrm{\Lambda }`$CDM simulation contain $`N_{\mathrm{eff}}\simeq 3700`$ independent cells. However, the currently available redshift surveys allow an analysis in a much smaller volume. For example, a volume-limited subsample from the PSC$`z`$ catalog (Saunders et al. 2000), that is cut at a distance where the average galaxy separation is $`l=8h^{-1}\mathrm{Mpc}`$ (i.e., on the order of our smoothing scale), contains only $`\sim 600`$ independent cells. We therefore estimate the error associated with reducing the sampled volume such that $`N_{\mathrm{eff}}\simeq 600`$ in each mock catalog. We select from the simulation 9 such non-overlapping sub-volumes, while keeping the sampling density and smoothing length fixed. The results for $`\mathrm{\Lambda }`$CDM, averaged over the mock catalogs, are shown in the upper panels of Figure 7, and the results for the two cosmological models are summarized in Table 2 (column 4). We find no significant systematic errors due to the volume effect in a sample like PSC$`z`$ and with $`8h^{-1}\mathrm{Mpc}`$ smoothing (except in the very high-$`\delta `$ tail for $`\tau `$CDM). The corresponding random errors in the biasing parameters are $`5\%`$ and $`6\%`$ for $`\mathrm{\Lambda }`$CDM and $`\tau `$CDM respectively.
The sampling density can be parameterized by the mean galaxy separation, $`l`$. In our large simulation $`l=2.5h^{-1}\mathrm{Mpc}`$, much smaller than the smoothing length of $`8h^{-1}\mathrm{Mpc}`$, but in real samples $`l`$ could be on the order of the smoothing length. To test the effect of sampling density, we produce 9 mock catalogs in which galaxies are sub-sampled at random from the original catalog such that the mean separation is $`l=`$ 6, 8, or $`10h^{-1}\mathrm{Mpc}`$, while the smoothing length and large volume are kept fixed with $`N_{\mathrm{eff}}\simeq 3700`$. The results for $`\mathrm{\Lambda }`$CDM are shown in the bottom panels of Figure 7, and for the two models in Table 2 (columns 5-7). We see that the sparse sampling artificially enhances both positive and negative density fluctuations, which enlarges the width of the galaxy PDF. This results in a steeper biasing function. For $`\mathrm{\Lambda }`$CDM, the effect becomes noticeable only when $`l\gtrsim 8h^{-1}\mathrm{Mpc}`$, where the systematic error in the biasing parameters is of order 10% and larger, and $`\mathrm{\Delta }`$ is of order a few percent. For $`\tau `$CDM the sampling-density effect is noticeable already for $`l\simeq 6h^{-1}\mathrm{Mpc}`$, with the error reaching $`30-50\%`$ at $`l\simeq 10h^{-1}\mathrm{Mpc}`$. A plausible explanation for why the sparse sampling is more damaging in the $`\tau `$CDM model is that the clustering in this model is weaker ($`\sigma _8`$ is smaller to match the cluster abundance which constrains $`\sigma _8\mathrm{\Omega }^{0.5}`$), and therefore the high-density regions are poorly sampled by galaxies.
In summary: the main source of error in our analysis is the sparse sampling. For recovering the biasing function with TH8 smoothing, the mean separation should be $`l\lesssim 8h^{-1}\mathrm{Mpc}`$.
## 6 CONCLUSION
We propose a simple prescription for recovering the mean nonlinear biasing function from a large redshift survey. The biasing function is defined by $`b(\delta )\delta =\langle \delta _\mathrm{g}|\delta \rangle `$, and is characterized to second order by two parameters, $`\widehat{b}`$ and $`\stackrel{~}{b}`$, measuring the mean biasing and its nonlinearity respectively. The method is applied at a given cosmology, time, object type and smoothing scale, and involves one parameter that should be assumed a priori — the rms mass density fluctuation $`\sigma `$ on the relevant scale.
The main steps of the algorithm are as follows:
1. Obtain the observed cumulative distribution function in redshift space $`C_{\mathrm{g},\mathrm{z}}(\delta _{\mathrm{g},\mathrm{z}})`$, by counts in cells or with window smoothing at a certain smoothing length.
2. Assume a value for $`\sigma `$ on that scale and for the cosmological density parameter $`\mathrm{\Omega }_\mathrm{m}`$, and approximate the mass CDF in redshift space by $`C_{\mathrm{ln},\mathrm{z}}(\delta _\mathrm{z};\sigma _\mathrm{z})`$, the cumulative log-normal distribution (eq. 9), with the width $`\sigma _\mathrm{z}`$ derived from $`\sigma `$ and $`\mathrm{\Omega }_\mathrm{m}`$ by equation (15).
3. Derive the mean biasing function by
$$\delta _\mathrm{g}(\delta =\delta _\mathrm{z})\simeq \delta _{\mathrm{g},\mathrm{z}}(\delta _\mathrm{z})=C_{\mathrm{g},\mathrm{z}}^{-1}[C_{\mathrm{ln},\mathrm{z}}(\delta _\mathrm{z};\sigma _\mathrm{z})].$$
(16)
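Steps 1 to 3 can be combined into a short end-to-end sketch (Python; the empirical galaxy CDF is inverted by rank, and all names are illustrative rather than taken from a published code):

```python
import math

def recover_biasing(delta_g_samples, sigma, omega_m, grid=None):
    """Recover delta_g(delta) via eq. (16):
    delta_g(delta_z) ~ C_gz^{-1}[C_ln,z(delta_z; sigma_z)].

    delta_g_samples: smoothed z-space galaxy overdensities, one per cell.
    Returns a list of (delta, delta_g) pairs on the requested grid.
    """
    data = sorted(delta_g_samples)
    n = len(data)
    f = omega_m ** 0.6
    s_z = math.sqrt(1.0 + 2.0 * f / 3.0 + 0.2 * f * f) * sigma  # eq. (15)
    s2 = math.log(1.0 + s_z * s_z)
    m = -0.5 * s2
    if grid is None:
        grid = [-0.9 + 0.05 * i for i in range(60)]
    out = []
    for dz in grid:
        # step 2: cumulative log-normal mass CDF at delta_z (eqs. 7-10)
        x = (math.log(1.0 + dz) - m) / math.sqrt(s2)
        c = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        # step 3: invert the empirical galaxy CDF at the same percentile
        rank = min(n - 1, max(0, int(c * n)))
        out.append((dz, data[rank]))
    return out
```

As a sanity check, if the "galaxies" are an unbiased log-normal realization of the mass itself, the recovered function should be close to the identity.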
We first showed that the mean biasing function, at TH8 smoothing, can be derived with reasonable accuracy from the r-space CDFs of galaxies (or halos) and mass, despite the biasing scatter. We then demonstrated that for a wide range of CDM cosmologies the mass CDF can be properly approximated for this purpose by a log-normal distribution of the same width $`\sigma `$. Next we showed that the biasing functions in z-space and r-space are very similar, and that the z-space mass CDF can also be approximated by a log-normal distribution, with a width derived from $`\sigma `$ via equation (15). This allows us to apply the method directly to the observed CDF in a redshift survey. The errors in the recovered biasing function and its moments, in an ideal case of dense sampling in a large volume, are at the level of a few percent.
In any realistic galaxy survey the limited volume and discrete sampling introduce further random and systematic errors. For a survey like the PSC$`z`$ survey, the main source of error is the sampling density; the error does not exceed $`10\%`$ as long as the mean observed galaxy separation is kept smaller than the smoothing radius. We are currently in the process of applying this method to the PSC$`z`$ survey (E. Branchini, et al. 2000, in preparation), where a more specific error analysis will be carried out. The sampling errors are expected to be significantly smaller for the upcoming 2dF and SDSS redshift surveys.
In §2 we showed that our method works well both for halos and for galaxies, on scales 5 to 15$`h^{-1}\mathrm{Mpc}`$, and in the redshift range $`0\le z\le 3`$ over which the biasing is expected to change drastically. We obtain a similar accuracy when we vary the cosmological model, the mass of the halos in the comparison, or galaxy properties such as morphological type and luminosity. The approximation $`\delta _\mathrm{g}(\delta )`$ is consistent (the deviation is less than 1-$`\sigma `$) with the true average biasing function $`\langle \delta _\mathrm{g}|\delta \rangle `$ over a wide range of $`\delta `$ values, which covers 98 – 99% of the volume, depending on redshift and the type of biased objects. This allows us to estimate the moments of the biasing function to within a few percent (see Table 1). The moments of the biasing function are derived from 99.9% of the volume (99% at $`z`$=3 and for relative biasing).
The method requires as external parameters the rms mass-density fluctuation $`\sigma `$ and the cosmological parameter $`\mathrm{\Omega }_\mathrm{m}`$. These can be obtained by joint analyses of constraints from several observational data sets, such as the cluster abundance (e.g., Eke et al. 1998), peculiar velocities (e.g., Dekel & Rees 1994; Zaroubi et al. 1997; Freudling et al. 1999), CMB anisotropies (e.g., de Bernardis et al. 1999), and type Ia supernovae (Riess et al. 1998; Perlmutter et al. 1999). Examples for such joint analyses are Bahcall et al. (1999) and Bridle et al. (1999).
The method is clearly applicable at $`z\simeq 0`$ with available redshift surveys and especially with those that will become available in the near future, 2dF and SDSS. In the future, this method may become applicable at higher redshifts as well, where the biasing plays an even more important role. With the accumulation of Lyman-break galaxies at $`z\simeq 3`$, it may soon become feasible to reconstruct their PDF by counts in cells, and our method will allow a recovery of the biasing function at this early epoch, with consequences on galaxy formation and on the evolution of structure.
We have concentrated here on smoothing scales relevant to galaxy biasing, but the method may also be applicable for the biasing of galaxy clusters, on scales of a few tens of Mpc. The biasing scatter may be larger for clusters because of their sparse sampling, but the larger mean biasing parameter for clusters may help in regaining the required monotonicity for equation (4) to provide a valid approximation to the mean biasing function. The mass PDF has been checked to be properly approximated by a log-normal distribution at smoothing scales in the range 20 to $`40h^{-1}\mathrm{Mpc}`$, using simulations of the standard CDM and Cold+Hot DM models (Borgani et al. 1995). The errors due to sparse sampling would require a smoothing scale at the high end of this range.
In a large redshift survey which distinguishes between object types, one can measure the relative biasing function between two object types by applying equation (6) in redshift space, using the observed CDFs for the two types without appealing to the underlying mass distribution at all. The upcoming large redshift surveys 2dF and SDSS, and the DEEP survey at $`z1`$, are indeed expected to provide adequate samples of different galaxy types. Compared with the predictions of simulations and semi-analytic modeling of galaxy formation (e.g., Kauffmann et al. 1999; Benson et al. 1998; Baugh et al. 1999; Somerville & Primack 1999), the measured relative biasing function can provide valuable constraints on the formation of galaxies and the evolution of structure.
While implementing the method outlined above for measuring the mean nonlinear biasing function using current and future redshift surveys, the next challenge is to devise a practical method for measuring the biasing scatter about the mean.
###### Acknowledgements.
We thank S. Cole, A. Eldar, G. Ganon, T. Kolatt, R. Somerville and our collaborators in the GIF team, J.M. Colberg, A. Diaferio, G. Kauffmann, and S.D.M. White, for providing simulations and mock catalogs. We thank A. Maller, I. Szapudi and D. Weinberg for stimulating discussions, and V. Narayanan and M. Strauss for a helpful referee report. EB thanks the Hebrew University for its hospitality. This work was supported by the Israel Science Foundation grant 546/98.
Table 1: Recovery of the biasing function from the CDFs

$`\mathrm{\Lambda }`$CDM:

| | halos vs. mass, $`z`$=0 | $`z`$=1 | $`z`$=3 | galaxies vs. mass, $`z`$=0 | $`z`$=1 | $`z`$=3 | early vs. late type, $`z`$=0 | $`z`$=1 | $`z`$=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`\widehat{b}`$ | 0.67 | 1.21 | 2.98 | 0.89 | 1.31 | 2.38 | 1.11 | 1.32 | 1.28 |
| $`\widehat{b}_\mathrm{a}`$ | 0.58 | 1.25 | 2.86 | 0.80 | 1.32 | 2.25 | 1.20 | 1.38 | 1.49 |
| $`\stackrel{~}{b}`$ | 0.74 | 1.24 | 3.04 | 0.91 | 1.31 | 2.40 | 1.13 | 1.34 | 1.30 |
| $`\stackrel{~}{b}_\mathrm{a}`$ | 0.75 | 1.31 | 3.08 | 0.90 | 1.36 | 2.38 | 1.35 | 1.52 | 1.64 |
| $`\mathrm{\Delta }`$ | 0.16 | 0.14 | 0.11 | 0.08 | 0.08 | 0.08 | 0.55 | 0.38 | 0.56 |

$`\tau `$CDM:

| | halos vs. mass, $`z`$=0 | $`z`$=1 | $`z`$=3 | galaxies vs. mass, $`z`$=0 | $`z`$=1 | $`z`$=3 | early vs. late type, $`z`$=0 | $`z`$=1 | $`z`$=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`\widehat{b}`$ | 0.90 | 2.18 | 6.62 | 0.93 | 1.71 | 4.44 | 1.17 | 1.34 | 1.27 |
| $`\widehat{b}_\mathrm{a}`$ | 0.89 | 2.28 | 6.75 | 0.93 | 1.75 | 4.32 | 1.18 | 1.39 | 1.50 |
| $`\stackrel{~}{b}`$ | 0.93 | 2.20 | 7.85 | 0.95 | 1.71 | 4.62 | 1.18 | 1.35 | 1.31 |
| $`\stackrel{~}{b}_\mathrm{a}`$ | 0.96 | 2.30 | 8.00 | 0.98 | 1.76 | 4.63 | 1.26 | 1.46 | 1.65 |
| $`\mathrm{\Delta }`$ | 0.08 | 0.07 | 0.20 | 0.08 | 0.04 | 0.08 | 0.22 | 0.20 | 0.54 |
Table 2: Redshift distortions and sampling errors in the biasing function

$`\mathrm{\Lambda }`$CDM:

| | True | z-space | z-space ln | Volume | $`l=6`$<sup>a</sup> | $`l=8`$<sup>a</sup> | $`l=10`$<sup>a</sup> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`\widehat{b}`$ | 1.13 | $`1.12\pm 0.006`$ | $`1.09\pm 0.02`$ | $`1.12\pm 0.05`$ | $`1.17\pm 0.05`$ | $`1.23\pm 0.04`$ | $`1.31\pm 0.06`$ |
| $`\stackrel{~}{b}`$ | 1.14 | $`1.12\pm 0.006`$ | $`1.10\pm 0.02`$ | $`1.13\pm 0.05`$ | $`1.17\pm 0.05`$ | $`1.24\pm 0.04`$ | $`1.32\pm 0.06`$ |
| $`\mathrm{\Delta }`$ | | $`0.001\pm 0.001`$ | $`0.002\pm 0.001`$ | $`0.005\pm 0.006`$ | $`0.006\pm 0.006`$ | $`0.016\pm 0.010`$ | $`0.049\pm 0.028`$ |

$`\tau `$CDM:

| | True | z-space | z-space ln | Volume | $`l=6`$<sup>a</sup> | $`l=8`$<sup>a</sup> | $`l=10`$<sup>a</sup> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`\widehat{b}`$ | 1.188 | $`1.18\pm 0.002`$ | $`1.11\pm 0.02`$ | $`1.21\pm 0.06`$ | $`1.35\pm 0.07`$ | $`1.55\pm 0.07`$ | $`1.80\pm 0.07`$ |
| $`\stackrel{~}{b}`$ | 1.192 | $`1.18\pm 0.002`$ | $`1.11\pm 0.02`$ | $`1.21\pm 0.06`$ | $`1.36\pm 0.07`$ | $`1.55\pm 0.07`$ | $`1.81\pm 0.07`$ |
| $`\mathrm{\Delta }`$ | | $`0.002\pm 0.0003`$ | $`0.016\pm 0.011`$ | $`0.072\pm 0.063`$ | $`0.177\pm 0.178`$ | $`0.563\pm 0.368`$ | $`1.505\pm 0.564`$ |

<sup>a</sup> in units of $`h^{-1}\mathrm{Mpc}`$
# The NASA Astrophysics Data System: Data Holdings
## 1 Introduction
Astronomers today are more prolific than ever before. Studies in publication trends in astronomy (Abt (1994), Abt (1995), Schulman et al. (1997)) have hypothesized that the current explosion in published papers in astronomy is due to a combination of factors: growth in professional society membership, an increase in papers by multiple authors, the launch of new spacecraft, and increased competition for jobs and PIs in the field (since candidate evaluation is partially based on publication history). As the number of papers in the field grows, so does the need for tools which astronomers can use to locate that fraction of papers which pertain to their specific interests.
The ADS Abstract Service is one of several bibliographic services which provide this function for astronomy, but due to the broad scope of our coverage and the simplicity of access to our data, astronomers now rely extensively on the ADS, and other bibliographic services not only link to us, but some have built their bibliographic search capabilities on top of the ADS system. The International Society for Optical Engineering (SPIE) and the NASA Technical Report Service (NTRS) are two such services.
The evolution of the Astrophysics Data System (ADS) has been largely data-driven. Our search tools and indexing routines have been modified to maximize speed and efficiency based on the content of our dataset. As new types of data (such as electronic versions of articles) became available, the Abstract Service quickly incorporated that new feature. The organization and standardization of the database content is the very core upon which the Abstract Service has been built.
This paper contains a description of the ADS Abstract Service from a “data” point of view, specifically descriptions of our holdings and of the processes by which we ingest new data into the system. Details are provided on the organization of the databases (section 2), the description of the data in the databases (section 3), the creation of bibliographic records (section 4), the procedures for updating the database (section 5), and on the scanned articles in the Astronomy database (section 6). We discuss the interaction between the ADS and the journal publishers (section 7) and analyze some of the numbers corresponding to the datasets (section 8). In conjunction with three other ADS papers in this volume, this paper is intended to offer details on the entire Abstract Service with the hopes that astronomers will have a better understanding of the reference data upon which they rely for their research. In addition, we hope that researchers in other disciplines may be able to benefit from some of the details described herein.
As is often the case with descriptions of active Internet resources, what follows is a description of the present situation with the ADS Abstract Service. New features are always being added, some of which necessitate changes in our current procedures. Furthermore, with the growth of electronic publishing, some of our core ideas about bibliographic tools and requirements must be reconsidered in order to be able to take full advantage of new publishing technologies for a new millennium.
## 2 The Databases
The ADS Abstract Service was originally conceived in the mid-1980s as a way to provide on-line access to bibliographies of astronomers which were previously available only through expensive librarian search services or through the A&A Abstracts series (Schmadel (1979), Schmadel (1982), Schmadel (1989)), published by the Astronomisches Rechen-Institut in Heidelberg. While the ideas behind the Abstract Service search engine were being developed (see Kurtz et al. (2000), hereafter OVERVIEW), concurrent efforts were underway to acquire a reliable data source on which to build the server. In order to best develop the logistics of the search engine it was necessary to have access to real literature data from the past and present, and to set up a mechanism for acquiring data in the future.
An electronic publishing meeting in the spring of 1991 brought together a number of organizations whose ultimate cooperation would be necessary to make the system a reality (see OVERVIEW for details). NASA’s Scientific and Technical Information Program (STI) offered to provide abstracts to the ADS. STI’s abstracts were a rewritten version of the original abstracts, categorized and keyworded by professional editors. They abstracted not only the astronomical literature, but many other scientific disciplines as well. With STI agreeing to provide the past and present literature, and the journals committed to providing the future literature, the data behind the system fell into place. The termination of the journal abstracting by the STI project several years later was unfortunate, but did not cause the collapse of the ADS Abstract Service because of the commitment of the journal publishers to distribute their information freely.
The STI abstracting covered approximately the period from 1975 to 1995. With the STI data alone, we estimated the completeness of the Astronomy database to be better than 90% for the core astronomical journals. Fortunately, with the additional data supplied by the journals, by SIMBAD (Set of Identifications, Measurements, and Bibliographies for Astronomical Data, Egret & Wenger (1988)) at the CDS (Centre de Données Astronomiques de Strasbourg), and by performing Optical Character Recognition (OCR) on the scanned tables of contents (see section 6 below), we are now closer to 99% complete for that period. In the period since then we are 100% complete for those journals which provide us with data, and significantly less complete for those which do not (e.g. many observatory publications and non-U.S. journals). The data prior to 1975 are also significantly incomplete, although we are currently working to improve the completeness of the early data, primarily through scanning the tables of contents for journal volumes as they are placed on-line. We are 100% complete for any journal volume which we have scanned and put on-line, since we verify that we have all bibliographic entries during the procedure of putting scans on-line.
Since the STI data were divided into categories, it was easy to create additional databases with non-astronomical data which were still of interest to astronomers. The creation of an Instrumentation database has enabled us to provide a database for literature related to astronomical instrumentation, of particular interest to those scientists building astronomical telescopes and satellite instruments. We were fortunate to get the cooperation of the SPIE very quickly after releasing the Instrumentation database. SPIE has become our major source of abstracts for the Instrumentation database now that STI no longer supplies us with data.
Our Physics and Geophysics database, the third database to go on-line, is intended for scientists working in physics-related fields. We add authors and titles from all of the physics journals of the American Institute of Physics (AIP), the Institute of Physics (IOP), and the American Physical Society (APS), as well as many physics journals from publishers such as Elsevier and Academic Press (AP).
The fourth database in the system, the Preprint database, contains a subset of the Los Alamos National Laboratory’s (LANL) Preprint Archive (Los Alamos National Laboratory (1991)). Our database includes the LANL astro-ph preprints which are retrieved from LANL and indexed nightly through an automated procedure. That dataset includes preprints from astronomical journals submitted directly by authors.
## 3 Description of the Data
The original set of data from STI contained several basic fields of data (author, title, keywords, and abstracts) to be indexed and made available for searching. All records were keyed on STI’s accession number, a nine-character code consisting of a letter prefix (A or N), a two-digit publication year, and a five-digit identifier (e.g. A95-12345). Data were stored in files named by accession number.
With the inclusion of data from other sources, primarily the journal publishers and SIMBAD, we extended STI’s concept of the accession number to handle other abstracts as well. Since the ADS may receive the same abstract from multiple sources, we originally adopted a system of using a different prefix letter with the remainder of the accession number being the same to describe abstracts received from different sources. Thus, the same abstract for the above accession number from STI would be listed as J95-12345 from the journal publisher and S95-12345 from SIMBAD. This allowed the indexing routines to consider only one instance of the record when indexing. Recently, limitations in the format of accession numbers and the desire to index data from multiple sources (rather than just STI’s version) have prompted us to move to a data storage system based entirely on the bibliographic code.
### 3.1 Bibliographic Codes
The concept of a unique bibliographic code used to identify an article was originally conceived of by SIMBAD and NED (NASA’s Extragalactic Database, Helou & Madore (1988)). The original specification is detailed in Schmitz et al. (1995). In the years since, the ADS has adopted and expanded their definition to be able to describe references outside of the scope of those projects.
The bibliographic code is a 19-character string comprised of several fields which usually enables a user to identify the full reference from that string. It is defined as follows:
YYYYJJJJJVVVVMPPPPA
where the fields are defined in Table 1.
The journal field is left-justified and the volume and page fields are right-justified. Blank spaces and leading zeroes are replaced by periods. For articles with page numbers greater than 9999, the M field contains the first digit of the page number. The A field contains a colon (“:”) if there is no author listed.
Creating bibliographic codes for the astronomical journals is uncontroversial. Each journal typically has a commonly-used abbreviation, and the volume and page are easily assigned (e.g. 1999PASP..111..438F). Each volume tends to have individual page numbering, and in those cases where more than one article appears on a page (such as errata), a “Q”,“R”,“S”, etc. is used as the qualifier for publication to make bibliographic codes unique. When page numbering is not continuous across issue numbers (such as Sky & Telescope), the issue number is represented by a lower case letter as the qualifier for publication (e.g. “a” for issue 1). This is because there may be multiple articles in a volume starting on the same page number.
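The layout rules above can be sketched in code. The function below is an illustrative reconstruction, not the ADS software; it implements only the simple cases stated in the text (left-justified journal, right-justified volume and page, periods for blanks, page overflow into the M field, and a colon for a missing author), and does not handle qualifier letters such as “L” or “Q”:

```python
def make_bibcode(year, journal, volume, page, author_initial):
    """Assemble a 19-character bibliographic code: YYYYJJJJJVVVVMPPPPA.

    Journal is left-justified in 5 characters; volume and page are
    right-justified in 4 characters each; blanks become periods.
    Pages longer than 4 digits spill their first digit into the M field.
    """
    jjjjj = journal.ljust(5, ".")[:5]
    vvvv = str(volume).rjust(4, ".")
    page_str = str(page)
    if len(page_str) > 4:                # e.g. page 10123 -> M='1', PPPP='0123'
        m, pppp = page_str[0], page_str[1:].rjust(4, ".")
    else:
        m, pppp = ".", page_str.rjust(4, ".")
    a = author_initial if author_initial else ":"  # colon when no author listed
    return f"{year}{jjjjj}{vvvv}{m}{pppp}{a}"
```

For example, `make_bibcode(1999, "PASP", 111, 438, "F")` yields the code quoted in the text, `1999PASP..111..438F`.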
Creating bibliographic codes for the “grey” literature such as conference proceedings and technical reports is a more difficult task. Including these additional types of data in the ADS required us to modify the original prototype bibliographic code definition in order to present identifiers which are easily recognizable to the user. The prototype definition of the bibliographic code suggested using a single letter in the second place of the volume field to identify non-standard references (catalogs, PhD theses, reports, preprints, etc.) and using the third and fourth place of that field to unduplicate and report volume numbers (e.g. 1981CRJS..R.3...14W). Since we felt that this created codes unidentifiable to the typical user and since NED and SIMBAD did not feel that users needed to be able to identify books directly from their bibliographic codes, the ADS adopted different rules for creating codes to identify the grey literature.
It is straightforward to create bibliographic codes for conference proceedings which are part of a series. For example, the IAU Symposia Series (IAUS) contains volume numbers and therefore fits the journal model for bibliographic codes. Other conference proceedings, books, colloquia, and reports in the ADS typically contain a four letter word in the volume field such as “conf”, “proc”, “book”, “coll”, or “rept”. When this is the case with a bibliographic code, the journal field typically consists of the first letter from important words in the title. This can give the user the ability to identify a conference proceeding at a glance (e.g. “ioda.book” for “Information and On-Line Data in Astronomy”). We will often leave the fifth place of the journal field as a dot for “readability” (e.g. 1995ioda.book..175M). For most proceedings which are also published as part of a series (e.g. ASP Conference Series, IAU Colloquia, AIP Conference Series), we include in the system two bibliographic codes, one as described above and one which contains the series name and the volume (see section 5.1). We do this so that users can see, for example, that a paper published in one of the “Astronomical Data Analysis Software and Systems” series is clearly labelled as “adass” whereas a typical user might not remember which volume of ASPC contained those ADASS papers. This increases the user’s readability of bibliographic codes.
With the STI data, the details were often unclear as to whether an article was from a conference proceeding, a meeting, a colloquium, etc. We assigned those codes as best we could, making no significant distinction between them. For conference abstracts submitted by the editors of a proceedings prior to publication, we often do not have page numbers. In this case, we use a counter in lieu of a page number and use an “E” (for “Electronic”) in the fourteenth column, the qualifier for publication. If these conference abstracts are then published, their bibliographic codes are replaced by a bibliographic code complete with page number. If the conference abstracts are published only on-line, they retain their electronic bibliographic code with its E and counter number.
There are several other instances of datasets where the bibliographic codes are non-standard. PhD theses in the system use “PhDT” as the journal abbreviation, contain no volume number, and contain a counter in lieu of a page number. Since PhD theses, like all bibliographic codes, are unique across all of the databases, the counter makes the bibliographic code an identifier for only one thesis. IAU Circulars also use a counter instead of a page number. Current Circulars are electronic in form, and although not technically a new page, the second item of an IAU Circular is the electronic equivalent of a second page. Using the page number as a counter enables us to minimize use of the M identifier in the fourteenth place of a bibliographic code for unduplicating. This is desirable since codes containing those identifiers are essentially impossible to create a priori, either by the journals or by users.
The last set of data currently included in the ADS which contains non-standard bibliographic codes is the “QB” book entries from the Library of Congress. QB is the Library of Congress code for astronomy-related books, and we have put approximately 17,000 of these references in the system. Because the QB numbers are identifiers by themselves, we have made an exception to the bibliographic code format to use the QB number (complete with any series or part numbers), prepended with the publication year, as the bibliographic code. Such an entry is easily identifiable as a book, and these codes enable users to locate the books in most libraries.
It is worth noting that while the bibliographic code makes identification simple for the vast majority of references in the system, we are aware of two instances where the bibliographic code definition breaks down. The use of the fourteenth column for a qualifier such as “L” for ApJ Letters makes it impossible to use that column for unduplicating. Therefore, if there are two errata on the same page with the same author initial, there is no way to create unique bibliographic codes for them. We are aware of only one such instance in the 33 years of publication of ApJ Letters. Second, with the electronic publishing of an increasing number of journals, the requirement of page numbers to locate articles becomes unnecessary. The journal Physical Review D is currently using 6-digit article identifiers as page numbers. Since the bibliographic code allows for page numbers not longer than 5 digits, we are currently converting these 6-digit identifiers to their 5-digit hexadecimal equivalent. Both of these anomalies indicate that over the next few years we will likely need to alter the current bibliographic code definition in order to allow consistent identification of articles for all journals.
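The arithmetic behind this workaround is easy to verify: base 16 is compact enough that any 6-digit decimal identifier fits in 5 characters (999999 becomes F423F). The sketch below illustrates such a round-trip conversion; it assumes a plain hexadecimal encoding and is not the actual ADS routine:

```python
def encode_eid(article_id):
    """Pack a 6-digit (or smaller) decimal article identifier into 5 hex characters."""
    if not 0 <= article_id <= 999999:
        raise ValueError("expected a 6-digit (or smaller) identifier")
    return format(article_id, "X").rjust(5, "0")

def decode_eid(code):
    """Recover the decimal identifier from its 5-character hex form."""
    return int(code, 16)
```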
### 3.2 Data Fields
The databases are set up such that some data fields are searchable and others are not. The searchable fields (title, author, and text) are the bulk of the important data, and these fields are indexed so that a query to the database returns the maximum number of meaningful results. (see Accomazzi et al. (2000), hereafter ARCHITECTURE). The text field is the union of the abstract, title, keywords, and comments. Thus, if a user requests a particular word in the text field, all papers are returned which contain that word in the abstract OR in the title OR in the keywords OR in the comments. Appendix A shows version 1.0 of the Extensible Markup Language (XML, see 3.4) Document Type Definition (DTD) for text files in the ADS Abstract Service. The DTD lists fields currently used or expected to be used in text files in the ADS (see section 5.2 for details on the text files). We intend to reprocess the current journal and affiliation fields in order to extract some of these fields.
Since STI ceased abstracting the journal literature, we decided to make the keywords themselves no longer a searchable entity for the time being – they are searchable only through the abstract text field. STI used a different standard set of keywords from the AAS journals, who use a different set of keywords from AIP journals (e.g. AJ prior to 1998). In addition, keywords from a single journal such as the Astrophysical Journal (ApJ) have evolved over the years so that early ApJ volume keywords are not consistent with later volumes. In order to build one coherent set of keywords, an equivalence or synonym table for these different keyword sets must be created. We are investigating different schemes for doing this, and currently plan to have a searchable keyword field again, which encompasses all keywords in the system and equates those from different keyword systems which are similar (Lee et al. (1999)).
The current non-searchable fields in the ADS databases include the journal field, author affiliation, category, abstract copyright, and abstract origin. Although we may decide to create an index and search interface for some of these entities (such as category), others will continue to remain unsearchable since searching them is not useful to the typical user. In particular, author affiliations would be useful to search, however this information is inconsistently formatted so it is virtually impossible to collect all variations of a given institution for indexing coherently. Furthermore, we have the author affiliations for only about half of the entries in the Astronomy database so we have decided to keep this field non-searchable. For researchers wishing to analyze affiliations on a large scale, we can provide this information on a collaborative basis.
### 3.3 Data Sources
The ADS currently receives abstracts or table of contents (ToC) references from almost two hundred journal sources. Tables 2, 3, and 4 list these journals, along with their bibliographic code abbreviation, source, frequency with which we receive the data, what data are received, and any links we can create to the data. ToC references typically contain only author and title, although sometimes keywords are included as well. The data are contributed via email, ftp, or retrieved from web sites around the world at a frequency ranging from once a week to approximately once a year. The term “often” used in the frequency column means that we get them more frequently than once a month, but not necessarily on a regular basis. The term “occasionally” is used for those journals which submit data to us infrequently.
Updates to the Astronomy and Instrumentation databases occur approximately every two weeks, or more often if logistically possible, in order to keep the database current. Recent enhancements to the indexing software have enabled us to perform instantaneous updates, triggered by an email containing new data (see ARCHITECTURE). Updates to the Physics database occur approximately once every two months. As stated earlier, the Preprint database is updated nightly.
### 3.4 Data Formats
The ADS is able to benefit from certain standards which are adhered to in the writing and submission practices of astronomical literature. The journals share common abbreviations and text formatting routines which are used by the astronomers as well. The use of TeX (Knuth (1984)) and LaTeX (Lamport (1986)), and their extension to BibTeX (Lamport (1986)) and AASTeX (American Astronomical Society (1999)) results in common formats among some of our data sources. This enables the reuse of parsing routines to convert these formats to our standard format. Other variations of TeX used by journal publishers also allow us to use common parsing routines, which greatly facilitates data loading.
TeX is a public domain typesetting program designed especially for math and science. It is a markup system, which means that formatting commands are interspersed with the text in the TeX input file. In addition to commands for formatting ordinary text, TeX includes many special symbols and commands with which you can format mathematical formulae with both ease and precision. Because of its extraordinary capabilities, TeX has become the leading typesetting system for science, mathematics, and engineering. It was developed by Donald Knuth at Stanford University.
LaTeX is a simplified document preparation system built on TeX. Because LaTeX is available for just about any type of computer and because LaTeX files are ASCII, scientists are able to send their papers electronically to colleagues around the world in the form of LaTeX input. This is also true for other variants of TeX, although the astronomical publishing community has largely centered their publishing standards on LaTeX or one of the software packages based on LaTeX, such as BibTeX or AASTeX. BibTeX is a program and file format designed by Oren Patashnik and Leslie Lamport in 1985 for the LaTeX document preparation system, and AASTeX is a LaTeX-based package that can be used to mark up manuscripts specifically for American Astronomical Society (AAS) journals.
Similar to the widespread acceptance of TeX and its variants, the extensive use of SGML (Standard Generalized Markup Language, Goldfarb & Rubinsky (1991)) by the members of the publishing community has given us the ability to standardize many of our parsing routines. All data gleaned off the World Wide Web share features due to the use of HTML (HyperText Markup Language, Powell & Whitworth (1998)), an example of SGML. Furthermore, the trend towards using XML (Extensible Markup Language, Harold (1999)) to describe text documents will enable us to share standard document attributes with other members of the astronomical community. XML is a subset of SGML which is intended to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. The ADS parsing routines benefit from these standards in several ways: we can reuse routines designed around these systems; we are able to preserve original text representations of entities such as embedded accents so these entities are displayed correctly in the user’s browser; and we are able to capture value-added features such as electronic URLs and email addresses for use elsewhere in our system.
In order to facilitate data exchange between different parts of the ADS, we make use of a tagged format similar to the “Refer” format (Jacobsen (1996)). Refer is a preprocessor for the word processors nroff and troff which finds and formats references. While our tagged format shares some common fields with Refer (%A, %T, %J, %D), the Refer format is not specific enough to be used for our purposes. Items such as objects, URLs and copyright notices are beyond the scope of the Refer syntax. Details on our tagged format are provided in Table 5. Reading and writing routines for this format are shared by loading and indexing routines, and a number of our data sources submit abstracts to us in this format.
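As an illustration of how Refer-style tagged records can be consumed, the minimal parser below handles the four tags named in the text (%A, %T, %J, %D), repeated tags, and continuation lines. It is a sketch rather than the ADS reader, and the field values in the tests are invented examples:

```python
def parse_tagged(record):
    """Parse a Refer-style tagged record into a field dictionary.

    Repeated tags (e.g. multiple %A author lines) accumulate into lists;
    lines without a leading tag are treated as continuations of the
    previous field.
    """
    fields = {}
    last = None
    for line in record.splitlines():
        if line.startswith("%") and len(line) > 2:
            tag, value = line[:2], line[3:].strip()
            fields.setdefault(tag, []).append(value)
            last = tag
        elif line.strip() and last:
            fields[last][-1] += " " + line.strip()
    return fields
```

For a record containing two `%A` lines, `parse_tagged` returns both authors, in order, under the `%A` key.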
## 4 Creating the Bibliographic Records
One of the basic principles in the parsing and formatting of the bibliographic data incorporated into the ADS database over the years has been to preserve as much of the original information as possible and delay any syntactic or semantic interpretation of the data until a later stage. From the implementation point of view, this means that bibliographic records provided to the ADS by publishers or other data sources typically are saved as files which are tagged with their origin, entry date, and any other ancillary information relevant to their contents (e.g. if the fields in the record contain data which was transliterated or converted to ASCII).
For instance, the records provided to the ADS by the University of Chicago Press (the publisher of several major U.S. astronomical journals) are SGML documents which contain a unique manuscript identifier assigned to the paper during the electronic publishing process. This identifier is saved in the file created by the ADS system for this bibliographic entry.
Because data about a particular bibliographic entry may be provided to the ADS by different sources and at different times, we adopted a multi-step procedure in the creation and management of bibliographic records:
1) Tokenization: Parsing input data into a memory-resident data structure using procedures which are format- and source-specific.
2) Identification: Computing the unique bibliographic record identifier used by the ADS to refer to this record.
3) Instantiation: Creating a new record for each bibliography formatted according to the ADS “standard” format.
4) Extraction: Selecting the best information from the different records available for the same bibliography and merging them into a single entry, avoiding duplication of redundant information.
### 4.1 Tokenization
The activity of parsing a (possibly) loosely-structured bibliographic record is typically more of an art than a science, given the wide range of possible formats used by people for the representation and display of these records. The ADS uses the PERL language (Practical Extraction and Report Language, Wall & Schwartz (1991)) for implementing most of the routines associated with handling the data. PERL is an interpreted programming language optimized for scanning and processing textual data. It was chosen over other programming languages because of its speed and flexibility in handling text strings. Features such as pattern matching and regular expression substitution greatly facilitate manipulating the data fields. To maximize flexibility in the parsing and formatting operations of different fields, we have written a set of PERL library modules and scripts capable of performing a few common tasks. Some that we consider worth mentioning from the methodological point of view are listed below.
* Character set conversion: electronic data are often delivered to us in different character set encodings, requiring translation of the data stream in one of the standard character sets expected by our input scripts. The default character set that has been used by the ADS until recently is “Latin-1” encoding (ISO-8859-1, International Organization for Standardization (1987)). We are now in the process of converting to the use of Unicode characters (Unicode Consortium (1996)) encoded in UTF-8 (UCS Transformation Format, 8–bit form). The advantage of using Unicode is its universality (all character sets can be mapped to Unicode without loss of information). The advantage of adopting UTF-8 over other encodings is mainly the software support currently available (most of the modern software packages can already handle UTF-8 internally). The adoption of Unicode and UTF-8 also works well with our adoption of XML as the standard format for bibliographic data.
* Macro and entity expansion: Several of the highly structured document formats in use today rely on the strengths of the formatting language for the specification of some common formatting tasks or data tokens. Typically this means that LaTeX documents that are supplied to us make use of one or more macro packages to perform some of the formatting tasks. Similarly, SGML documents will conform to some Document Type Definition (DTD) provided to us by the publisher, and will make use of some standard set of SGML entities to encode the document at the required level of abstraction. What this means for us is that even if most of the input data comes to us in one of two basic formats (TeX/LaTeX/BibTeX or SGML/HTML/XML), we must be able to parse a large number of document classes, each one defined by a different and ever increasing set of specifications, be it a macro package or a DTD.
* Author name formatting: Special care has been taken in parsing and formatting author names from a variety of possible input formats to the standard one used by the ADS. The proper handling of author names is crucial to the integrity of the data in the ADS. Without proper author handling, users would be unable to get complete listings on searches by author names which comprise approximately two-thirds of all searches (see Eichhorn et al. (2000), hereafter SEARCH).
Since the majority of our data sources do not provide author names in our standard format (last name, first name or initial), our loading routines need to be able to invert author names accurately, handling cases such as multiple word last names (Da Costa, van der Bout, Little Marenin) and suffixes (Jr., Sr., III). Any titles in an author’s name (Dr., Rev.) were previously omitted, but are now being retained in the new XML formatting of text files.
The assessment of what constitutes a multiple word last name as opposed to a middle name is non-trivial since some names, such as Davis, can be a first name (Davis Hartman), a middle name (A. G. Davis Philip), a last name (Robert Davis), or some combination (Davis S. Davis). Another example is how to determine when the name “Van” is a first name (Van Nguyen), a middle name (W. Van Dyke Dixon), or part of a last name (J. van Allen). Handling all of these cases correctly requires not only familiarity with naming conventions worldwide, but an intimate familiarity with the names of astronomers who publish in the field. We are continually amassing the latter as we incorporate increasing amounts of data into the system, and as we get feedback from our users.
* Spell checking: Since many of the historical records entered in the ADS have been generated by typesetting tables of contents, typographical errors can often be flagged in an automated way using spell-checking software. We have developed a PERL software driver for the international ispell program, a UNIX utility, which can be used as a spell-checking filter on all input to be considered textual information. A custom dictionary containing terms specific to astronomy and space sciences is used to increase the recognition capabilities of the software module. Any corrections suggested by the spell-checker module are reviewed by a human before the data are actually updated.
* Language recognition: Extending the capability of the spell-checker, we have implemented a software module which attempts to guess the language of an input text buffer based on the percentage of words that it can recognize in one of several languages: English, German, French, Spanish, or Italian. This module is used to flag records to be entered in our database in a language other than English. Knowledge of the language of an abstract allows us to create accurate synonyms for those words (see ARCHITECTURE).
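The name-inversion task described above can be sketched as follows. The particle and suffix lists are small illustrative assumptions (real data require much larger tables), and as the text notes, genuinely ambiguous names such as “Van Nguyen” cannot be resolved by rules of this kind:

```python
SUFFIXES = {"Jr.", "Sr.", "II", "III", "IV"}         # illustrative, not exhaustive
PARTICLES = {"van", "von", "de", "der", "da", "la"}  # illustrative, not exhaustive

def invert_name(name):
    """Convert 'First M. [particles] Last [Suffix]' to 'Last, First M.'.

    A naive sketch: a trailing suffix stays with the last name, and
    particle words are absorbed into the last name, so that
    'J. van Allen' becomes 'van Allen, J.'.
    """
    words = name.replace(",", "").split()
    suffix = words.pop() if words and words[-1] in SUFFIXES else ""
    i = len(words) - 1
    while i > 0 and words[i - 1].lower() in PARTICLES:
        i -= 1                           # pull particles into the surname
    last = " ".join(words[i:] + ([suffix] if suffix else []))
    first = " ".join(words[:i])
    return f"{last}, {first}" if first else last
```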
### 4.2 Identification
We call identification the activity of mapping the tokens extracted from the parsing of a bibliographic record into a unique identifier. The ADS adopted the use of bibliographic codes as the identifier for bibliographic entries shortly after its inception, in order to facilitate communication between the ADS and SIMBAD. The advantage of using bibliographic codes as unique identifiers is that they can most often be created in a straightforward way from the information given in the list of references published in the astronomical literature, namely the publication year, journal name, volume, and page numbers, and first author’s name (see section 3.1 for details).
### 4.3 Instantiation
“Instantiation” of a bibliographic entry consists of the creation of a record for it in the ADS database. The ADS must handle receipt of the same data from multiple sources. We have created a hierarchy of data sources so that we always know the preferred data source. A reference for which we have received records from STI, the journal publisher, SIMBAD, and NED, for example, must be in the system only once with the best information from each source preserved. When we load a reference into the system, we check whether a text file already exists for that reference. If there is no text file, it is a new reference and a text file is created. If there already is a text file, we append the new information to the current text file, creating a “merged” text file. This merged text file lists every instance of every field that we have received.
### 4.4 Extraction
By “extraction” of a bibliographic entry we mean the procedure used to create a unique representation of the bibliography from the available records. This is essentially an activity of data fusion and unification, which removes redundancies in the bibliographic records obtained by the ADS and properly labels fields by their characteristics. The extraction algorithm has been designed with our prior experience as to the quality of the data to select the best fields from each data source, to cross-correlate the fields as necessary, and to create a “canonical” text file which contains a unique instance of each field. Since the latter is created through software, only one version of the text file must be maintained; when the merged text file is appended, the canonical text file is automatically recreated.
The extraction routine selects the best pieces of information from each source and combines them into one reference which is more complete than the individual references. For example, author lists received from STI were often truncated after five or ten authors. Whenever we have a longer author list from another source, that author list is used instead. This not only recaptures missing authors, it also provides full author names instead of author initials whenever possible. In addition, our journal sources sometimes omit the last page number of the reference, but SIMBAD usually includes it, so we are able to preserve this information in our canonical text file.
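The field-selection logic can be sketched roughly as follows; the source ranking and field names here are illustrative, not the actual ADS configuration:

```python
# Sketch of the field-selection step of "extraction": given all
# instances of each field from a merged record, keep one canonical
# value per field.  The source ranking is illustrative only.
SOURCE_RANK = {"journal": 0, "simbad": 1, "ned": 2, "sti": 3}

def extract_canonical(merged):
    """merged maps field name -> list of (source, value) pairs.
    Most fields take the value from the highest-ranked source; the
    author list takes the longest list seen, recovering authors
    truncated by some sources."""
    canonical = {}
    for field, instances in merged.items():
        if field == "authors":
            canonical[field] = max(instances, key=lambda sv: len(sv[1]))[1]
        else:
            canonical[field] = min(
                instances, key=lambda sv: SOURCE_RANK.get(sv[0], 99)
            )[1]
    return canonical
```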
Some fields need to be labelled by their characteristics so that they are properly indexed and displayed. The keywords, for example, need to be attributed to a specific keyword system. The system designation allows for multiple keyword sets to be displayed (e.g. NASA/STI Keywords and AAS Keywords) and will be used in the keyword synonym table currently under development (Lee et al. (1999)).
We also attempt to cross-correlate authors with their affiliations wherever possible. This is necessary for records where the preferred author field is from one source and the affiliations are from another source. We attempt to assign the proper affiliation based on the last name and do not assume that the author order is accurate since we are aware of ordering discrepancies in some of the STI records.
Through these four steps in the procedure of creating and managing bibliographic records, we are able to take advantage of receiving the same reference from multiple sources. We standardize the various records and present to the user a combination of the most reliable fields from each data source in one succinct text file.
## 5 Updating the Database
The software to update bibliographic records in the database consists of a series of PERL scripts, typically one per data source, which reads in the data, performs any special processing particular to that data source, and writes out the data to text files. The loading routines perform three fundamental tasks: 1) they add new bibliographic codes to the current master list of bibliographic codes in the system; 2) they create and organize the text files containing the reference data; and 3) they maintain the lists of bibliographic codes used to indicate what items are available for a given reference.
### 5.1 The Master List
The master list is a table containing bibliographic codes together with their publication dates (YYYYMM) and entry dates into the system (YYYYMMDD). There is one master list per database with one line per reference. The most important aspect of the master list is that it retains information about “alternative” bibliographic codes and matches them to their corresponding preferred bibliographic code. An alternative bibliographic code is usually a reference which we receive from another source (primarily SIMBAD or NED) which has been assigned a different bibliographic code from the one used by the ADS. Sometimes this is due to the different rules used to build bibliographic codes for non-standard publications (see section 3.1), but often it is just an incorrect year, volume, page, or author initial in one of the databases (SIMBAD or NED or the ADS). In either case, the ADS must keep the alternative bibliographic code in the system so that it can be found when referenced by the other source (e.g. when SIMBAD sends back a list of their codes related to an object). The ADS matches the alternative bibliographic code to our corresponding one and replaces any instances of the alternative code when referenced by the other data source. Alternative bibliographic codes in the master list are prepended with an identification letter (S for SIMBAD, N for NED, J for Journal) so that their origin is retained.
While we make every effort to propagate corrections back to our data sources, sometimes there is simply a valid discrepancy. For example, alternative bibliographic codes are often different from the ADS bibliographic code due to ambiguous differences such as which name is the surname of a Chinese author. Since Americans tend to invert Chinese names one way (Zheng, Wei) and Europeans another (Wei, Zheng), this results in two different, but equally valid codes. Similarly, discrepancies in journal names such as BAAS (for the published abstracts in the Bulletin of the American Astronomical Society) and AAS (for the equivalent abstract with meeting and session number, but no volume or page number) need different codes to refer to the same paper. Russian and Chinese translation journals (Astronomicheskii Zhurnal vs. Soviet Astronomy and Acta Astronomica Sinica vs. Chinese Astronomy and Astrophysics) share the same problem. These papers appear once in the foreign journal and once in the translation journal (usually with different page numbers), but are actually the same paper which should be in the system only once. The ADS must therefore maintain multiple bibliographic codes for the same article since each journal has its own abbreviation, and queries for either one must be able to be recognized. The master list is the source of this correlation and enables the indexing procedures and search engine to recognize alternative bibliographic codes.
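The resolution of alternative codes against the master list can be sketched as follows (the table entry shown is a hypothetical page-number discrepancy, not a real record):

```python
# Sketch of alternative-code resolution against the master list.
# Alternative codes carry an origin prefix (S=SIMBAD, N=NED, J=Journal).
ALTERNATES = {
    "S1998MNRAS.295...76E": "1998MNRAS.295...75E",  # hypothetical page typo
}

def resolve(code, origin=None):
    """Map an externally supplied bibliographic code to the preferred
    ADS code, consulting the alternates table keyed by origin prefix."""
    prefix = {"simbad": "S", "ned": "N", "journal": "J"}.get(origin, "")
    return ALTERNATES.get(prefix + code, code)
```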
### 5.2 The Text Files
Text files in the ADS are stored in a directory tree by bibliographic code. The top level of directories is divided into directories with four-digit names by publication year (characters 1 through 4 of the bibliographic code). The next level contains directories with five-character names according to journal (characters 5 through 9), and the text files are named by full bibliographic code under these journal directories. Thus, a sample pathname is 1998/MNRAS/1998MNRAS.295...75E. Alternative bibliographic codes do not have a text file named by that code, since the translation to the equivalent preferred bibliographic code is done prior to accessing the text file.
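Since the path is derived entirely from the code itself, the lookup can be sketched in a few lines:

```python
def bibcode_to_path(bibcode):
    """Derive the text-file path for a bibliographic code: a year
    directory (characters 1-4), a journal directory (characters 5-9,
    with padding dots stripped), then the full code as the filename."""
    year = bibcode[0:4]
    journal = bibcode[4:9].strip(".")
    return f"{year}/{journal}/{bibcode}"
```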
A sample text file is given in the appendices. Appendix B shows the full bibliographic entry, including all records as received from STI, MNRAS, and SIMBAD. It contains XML-tagged fields from each source, showing all instances of every field. Appendix C shows the extracted canonical version of the bibliographic entry which contains only selected information from the merged text file. This latter version is displayed to the user through the user interface (see SEARCH).
### 5.3 The Codes Files
The third basic function of the loading procedures is to modify and maintain the listings for available items. The ADS displays the availability of resources or information related to bibliographic entries as letter codes in the results list of queries and as more descriptive hyperlinks in the page displaying the full information available for a bibliographic entry. A full listing of the available item codes and their meaning is given in SEARCH.
The loading routines maintain lists of bibliographic codes for each letter code in the system which are converted to URLs by the indexing routines (see ARCHITECTURE). Bibliographic codes are appended to the lists either during the loading process or as post-processing work depending on the availability of the resource. When electronic availability of data coincides with our receipt of the data, the bibliographic codes can be appended to the lists by the loading procedures. When we receive the data prior to electronic availability, post-processing routines must be run to update the bibliographic code lists after we are notified that we may activate the links.
## 6 The Articles
The ADS is able to scan and provide free access to past issues of the astronomical journals because of the willing collaboration of the journal publishers. The primary reason that the journal publishers have agreed to allow the scanning of their old volumes is that the loss of individual subscriptions does not pose a threat to their livelihood. Unlike many disciplines, most astronomy journals are able to pay for their publications through the cost of page charges to astronomers who write the articles and through library subscriptions which are unlikely to be cancelled in spite of free access to older volumes through the ADS. The journal publishers continue to charge for access to the current volumes, which is paid for by most institutional libraries. This arrangement places astronomers in a fortunate position for electronic accessibility of astronomy articles.
The original electronic publishing plans for the astronomical community called for STELAR (STudy of Electronic Literature for Astronomical Research, van Steenberg (1992), van Steenberg et al. (1992), Warnock et al. (1992), Warnock et al. (1993)) to handle the scanning and dissemination of the full journal articles. However, when the STELAR project was terminated in 1993, the ADS assumed responsibility for providing scanned full journal articles to the astronomical community. The first test journal, the ApJ Letters, was scanned in January 1995 at 300 dots per inch (dpi). Those scans were intended to be at 600 dpi, and we will soon rescan them at that higher resolution. Complications in the journal publishing format (plates at the end of some volumes and in the middle of others) were noted and detailed instructions provided to the scanning company so that the resulting scans would be named properly by page or plate number.
All of the scans since the original test batch have been scanned at 600 dpi using a high speed scanner and generating a 1 bit/pixel monochrome image for each page. The files created are then automatically processed in order to de-skew and center the text in each page, resize images to a standard U.S. Letter size (8.5 x 11 inches), and add a copyright notice at the bottom of each page. For each original scanned page, two separate image files of different resolutions are generated and stored on disk. The availability of different resolutions allows users the flexibility of downloading either high or medium quality documents, depending on the speed of their internet connection. The image formats and compression used were chosen based on the available compression algorithms and browser capabilities. The high resolution files currently used are 600 dpi, 1 bit/pixel TIFF (Tagged Image File Format) files, compressed using the CCITT Group 4 facsimile encoding algorithm. The medium resolution files are 200 dpi, 1 bit/pixel TIFF files, also with CCITT Group 4 facsimile compression.
Conversion to printing formats (PDF, PCL, and Postscript) is done on demand, as requested by the user. Similarly, conversion from the TIFF files to a low resolution GIF (Graphic Interchange Format) file (75, 100, or 150 dpi, depending on user preferences) for viewing on the computer screen is done on demand, then cached so that the most frequently accessed pages do not need to be created every time. A procedure run nightly deletes the GIF files with the oldest access time stamp so that the total size of the disk cache is kept under a pre-defined limit. The current 10 GBytes of cache size in use at the SAO Article Server causes only files which have not been accessed for about a month to be deleted. Like the full-screen GIF images, the ADS also caches thumbnail images of the article pages which provide users with the capability of viewing the entire article at a glance.
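The nightly cache cleanup described above can be sketched as follows (an illustrative sketch, not the actual ADS procedure):

```python
import os

def prune_cache(cache_dir, max_bytes):
    """Nightly cache-cleanup sketch: delete the least recently
    accessed files until the cache fits under max_bytes.  Returns
    the total size of the files remaining."""
    entries = []
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
    total = sum(size for _, size, _ in entries)
    for _, size, path in sorted(entries):  # oldest access time first
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size
    return total
```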
The ADS uses Optical Character Recognition (OCR) software to gain additional data from TIFF files of article scans. The OCR software is not yet adequate for accurate reproduction of the scanned pages. Greek symbols, equations, charts, and tables do not translate accurately enough to remain true to the original printed page. For this reason, we have chosen not to display to the user anything rendered by the OCR software in an unsupervised fashion. However, we are still able to take advantage of the OCR software for several purposes.
First, we are able to identify and extract the abstract paragraph(s) for use when we do not have the abstract from another source. In these cases, the OCR’d text is indexed so that it is searchable and the extracted image of the abstract paragraph is displayed in lieu of an ASCII version of the abstract. Extracting the abstract from the scanned pages is somewhat tedious, as it requires establishing different sets of parameters for each journal, as well as for different fonts used over the years by the same journal. The OCR software can be taught how to determine where the abstract ends, but it does not work for every article due to oddities such as author lists which extend beyond the first page of an article, and articles which are in a different format from others in the same volume (e.g. no keywords or multiple columns). The ADS currently contains approximately 25,000 of these abstract images and more will be added as we continue to scan the historical literature.
We are also currently using the OCR software to render electronic versions of the entire scanned articles for indexing purposes. We will not use this for display to the users, but hope to be able to index it to provide the possibility of full text searching at some future date. We estimate that the indexing of our almost one million scanned pages with our current hardware and software will take approximately two years of dedicated CPU time.
The last benefit that we gain from the OCR software is the conversion of the reference list at the end of articles. We use parsed reference lists from the scanned articles to build citation and reference lists for display through the C and R links of the available items. Since reference lists are typically in one of several standard formats, we parse each reference for author, journal, volume and page number for most journal articles, and conference name, author, and page number for many conference proceedings. This enables us to build bibliographic code lists for references contained in that article (R links) and invert these lists to build bibliographic code lists of articles which cite this paper (C links). We are able to use this process to identify and therefore add commonly-cited articles which are currently missing from the ADS. This is usually data prior to 1975 or astronomy-related articles published in non-astronomy journals.
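The parsing and inversion steps can be sketched as follows; the single regular expression shown covers just one common reference style, whereas the actual parser handles several per-journal formats:

```python
import re
from collections import defaultdict

# One common style: "Lastname, I. YYYY, Journal, Volume, Page".
REF_RE = re.compile(
    r"(?P<author>[A-Z][\w'-]+),\s+(?:[A-Z]\.\s*)+(?P<year>\d{4}),\s+"
    r"(?P<journal>[A-Za-z&]+),\s+(?P<volume>\d+),\s+(?P<page>L?\d+)"
)

def parse_reference(line):
    """Extract author, year, journal, volume, and page from one
    reference line, or return None if the line does not match."""
    m = REF_RE.search(line)
    return m.groupdict() if m else None

def invert_references(references):
    """references maps a paper's bibcode to the bibcodes it cites
    (the R links); the inverse maps each cited paper to the papers
    citing it (the C links)."""
    citations = defaultdict(list)
    for paper, cited in references.items():
        for c in cited:
            citations[c].append(paper)
    return dict(citations)
```

Inverting the per-paper reference lists in this way is what turns R links into C links.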
The Article Service currently contains 250 GBytes of scans, which consists of 1,128,955 article pages comprising 138,789 articles. These numbers increase on a regular basis, both as we add more articles from the older literature and as we scan new journals.
## 7 ADS/Journal Interaction
A description of the data in the ADS would be incomplete without a discussion of the interaction between the ADS and the electronic journals. The data available on-line from the journal publishers is an extension of the data in the ADS and vice versa. This interaction is greatly facilitated by the acceptance of the bibliographic code by many journal publishers as a means for accessing their on-line articles.
Access to articles currently on-line at the journal sites through the ADS comprises a significant percentage of on-line journal access (see OVERVIEW). The best model for interaction between the ADS and a journal publisher is the University of Chicago Press (hereafter UCP), publisher of ApJ, ApJL, ApJS, AJ, and PASP. When a new volume appears on-line at UCP, the ADS is notified by email and an SGML header file for each of those articles is simultaneously transferred to our site. The data are parsed and loaded into the system and appropriate links are created. Even before this point, however, UCP has already made use of the ADS in building their electronic version, through our bibliographic code reference resolver.
Our bibliographic code reference resolver (Accomazzi et al. (1999)) was developed to provide the capability to automatically parse, identify, and verify citations appearing in astronomical literature. By verifying the existence of a reference through the ADS, journals and conference proceedings editors are able to publish documents containing hyperlinks pointing to stable, unique URLs. Increasingly more journals are linking to the ADS in their reference sections, providing users with the ability to read referenced articles with the click of a mouse button.
During the copy editing phase, UCP editors query the ADS reference resolver and determine if each reference exactly matches a bibliographic code in the ADS. If there is a match, a link to the ADS is established for this entry in their reference section. If there is not a match, one of several scenarios takes place. First, if it is a valid reference not yet included in the ADS (most often the case for “fringe” articles, those peripherally associated with astronomy), our reference resolver captures the information necessary to add it to our database during the next update. Second, if it is a valid reference unable to be parsed by the resolver (sometimes the case for conference proceedings or PhD theses), no action is taken and no link is listed in the reference section. Third, if there is an error in the reference as determined by the reference resolver, the UCP editors may ask for a correction or clarification from the authors.
The last option demonstrates the power of the reference resolver, which has been taught on a journal-by-journal basis how complete the coverage of that journal is in the ADS. Before the implementation of the reference resolver, UCP was able to match 72% of references in ApJ articles (E. Owens, private communication). Early results from the use of the reference resolver show that we are now able to match conference proceedings, so this number should become somewhat larger. It is unlikely that we will ever match more than 90% of references in an article due to references such as “private communication”, “in press”, and preprints, as well as author errors (see section 8). Our own reference resolving of OCR’d reference lists shows that we can match approximately 86% of references.
The ADS provides multiple ways for authors and journal publishers to link to the ADS (see SEARCH). We make every effort to facilitate individuals and organizations linking to us. This is easily done for simple searches such as the verification of a bibliographic code or an author search for a single spelling. However, given the complexity of the system, these automated searches can quickly become complicated. Details for conference proceedings editors or journal publishers who are interested in establishing or improving links to the ADS are available upon request. In particular, those who have individual TeX macros incorporated in their references can use our bibliographic code resolver to facilitate linking to the ADS.
## 8 Discussion and Summary
As of this writing (12/1999), there are 524,304 references in the Astronomy database, 523,498 references in the Instrumentation database, 443,858 references in the Physics database, and 3467 references in the Preprint database, for a total of almost 1.5 million references in the system. Astronomers currently write approximately 18,000 journal articles annually, and possibly that many additional conference proceedings papers per year. More than half of the journal papers appear in peer-reviewed journals. These numbers are more than double what they were in 1975, in spite of an increase in the number of words per page in most of the major journals (Abt (1995)), and an increase in number of pages per article (Schulman et al. (1997)). At the current rate of publication, astronomers could be writing 25,000 journal papers per year by 2001 and an additional 20,000 conference proceedings papers. Figure 1 shows the total number of papers for each year in the Astronomy database since 1975, divided into refereed journal papers, non-refereed journal papers, and conferences (including reports and theses). There are three features worth noting. First, the increase in total references in 1980 is due to the inclusion of Helen Knudsen’s Monthly Astronomy and Astrophysics Index, a rich source of data for both journals and conference proceedings which began coverage in late 1979 and continued until 1995. Second, the recent increase in conferences included in the Astronomy database (starting around 1996) is due to the inclusion of conference proceedings table of contents provided by collaborating librarians and typed in by our contractors. Last, the decrease in numbers for 1999 is due to coverage for that year not yet being complete in the ADS.
The growth rate of the Instrumentation and Physics databases is difficult to estimate, primarily because we do not have datasets which are as complete as those for astronomy. In any case, the need for the organization and maintenance of this large volume of data is clearly important to every research astronomer. Fortunately, the ADS was designed to be able to handle this large quantity of data and to be able to grow with new kinds of data. New available item links have been added for new types of data as they became available (e.g. the links to complete book entries at the Library of Congress) and future datasets (e.g. from future space missions) should be able to be added in the same fashion.
As with any dataset of this magnitude, there is some fraction of references in the system which are incorrect. This is unavoidable given the large number of data sources, errors in indices and tables of contents as originally published, and human error. In addition, many authors do not give full attention to verifying all references in a paper, resulting in the introduction of errors in many places. In a systematic study of more than 1000 references contained in a single issue of the Astrophysical Journal, Abt (1992) found that more than 12% of those contained errors. This number should be significantly reduced with the integration of the ADS reference resolver in the electronic publishing process. However, any mistakes in the ADS can and will get propagated, so steps are being taken by us to maximize accuracy of our entries.
Locating and identifying correlations between multiple bibliographic codes which describe the same article is a time-consuming and sometimes subjective task as many pairs of bibliographic codes need to be verified by manually looking up papers in the library. We use the Abstract Service itself for gross matching of bibliographic codes, submitting a search with author and title, and considering any resulting matches with a score of 1.0 as a potential match. These matches are only potential matches which require verification since authors can submit the same paper to more than one publication source (e.g. BAAS and a refereed journal), and since errata published with the same title and author list will perfectly match the original paper.
When a volume or year is mismatched, it is usually obvious which of a pair of matched bibliographic codes is correct, but if a page number is off, the decision as to which code is correct cannot always be automated. We also need to consider matches with very high scores less than 1.0 since these are the matches where an author name may be incorrect. The correction of errors of this sort is ongoing work which is carried out as often as time and resources permit.
The evolution of the Internet and the World Wide Web, along with the explosion of astronomical services on the Web has enabled the ADS to provide access to our databases in an open and uniform environment. We have been able to hyperlink both to our own resources and to other on-line resources such as the journal bibliographies (Boyce & Biemesderfer (1996)). As part of the international collaboration Urania (Universal Research Archive of Networked Information in Astronomy, Boyce (1998)), the ADS enables a fully functioning distributed digital library of astronomical information which provides power and utility previously unavailable to the researcher.
Perhaps the largest factor which has contributed to the success of the ADS is the willing cooperation of the AAS, CDS, and all the journal publishers. The ADS has largely become the means for linking together smaller pieces of a bigger picture, making an elaborate digital library for astronomers a reality. We currently collaborate with over fifty groups in creating and maintaining cross-links among data centers. These additional collaborations with individuals and institutions worldwide allow us to provide many value-added features to the system such as object information, author email addresses, mail order forms for articles, citations, article scans, and more. A listing of these collaborations is provided in Table 6. Any omissions from this table are purely unintentional; the ADS values all of its collaborators, and users benefit not only from the major collaborations but also from the minor ones, which are often harder for users to learn about independently. Most of the abbreviations are listed in Tables 2, 3, and 4.
The successful coordination of data exchanges with each of our collaborators and the efforts which went into establishing them in the first place have been key to the success of the ADS. Establishing links to and from the journal publishers, changing these links due to revisions at publisher websites, and tracking and fixing broken links is all considered routine data maintenance for the system. Since it is necessary for us to maintain connectivity to external sites, routine checks of sample links are performed on a regular basis to verify that the links are still active.
Usage statistics for the Abstract Service (see OVERVIEW) indicate that astronomers and librarians at scientific institutions are eager to take advantage of the information that the ADS provides. The widespread acceptance of the ADS by the astronomical community is changing how astronomers do research, placing extensive bibliographic information at their fingertips. This enables researchers to increase their productivity and to improve the quality of their work.
A number of improvements to the data in the ADS are planned for the near future. As always, we will continue our efforts to increase the completeness of coverage, particularly for the data prior to 1975. We have collected most of the major journals back to the first issue for scanning and adding to the Astronomy database. In addition, we are scanning and OCR’ing table of contents for conference proceedings to improve our coverage in that area. We are currently OCR’ing full journal articles to provide full text searching and to improve the completeness of our reference and citation coverage. Finally, as the ADS becomes commonplace for all astronomers, valuable feedback from our users to inform us about missing papers, errors in the database, and suggested improvements to the system serve to guide the future of the ADS and to ensure that the ADS continues to evolve into a more valuable research tool for the scientific community.
## 9 Acknowledgments
The other ADS Team members: Markus Demleitner, Elizabeth Bohlen, and Donna Thompson contribute much on a daily basis. Funding for this project has been provided by NASA under NASA Grant NCC5-189.
## Appendix A
Version 1.0 of the XML DTD describing text files in the ADS Abstract Service.
```
<!--
Document Type Definition for the ADS
bibliographic records

Syntax policy
=============
- The element names are in uppercase in order
to help the reading.
- The attribute names are preferably in
lowercase
- The attribute values are allowed to be of
type CDATA to allow more flexibility for
additional values; however, attributes
typically may only assume one of a well-
defined set of values
- Cross-referencing among elements such as
AU, AF, and EM is accomplished through the
use of attributes of type IDREFS (for AU)
and ID (for AF and EM)
-->
<!-- BIBRECORD is the root element of the XML
document. Attributes are:
origin mnemonic indicating individual(s)
or institution(s) who submitted
the record to ADS
lang language in which the contents of
this record are expressed the
possible values are language tags
as defined in RFC 1766.
Examples: lang="fr", lang="en"
-->
<!ELEMENT BIBRECORD ( METADATA?,
TITLE?,
AUTHORS?,
AFFILIATIONS?,
EMAILS?,
FOOTNOTES?,
BIBCODE,
MSTRING,
MONOGRAPH?,
SERIES?,
PAGE?,
LPAGE?,
COPYRIGHT?,
PUBDATE,
CATEGORIES*,
COMMENTS*,
ANOTE?,
BIBTYPE?,
IDENTIFIERS?,
ORIGINS,
OBJECTS*,
KEYWORDS*,
ABSTRACT* ) >
<!ATTLIST BIBRECORD origin CDATA #REQUIRED
lang CDATA #IMPLIED >
<!-- Generic metadata about the ADS record
(rather than the publication) -->
<!ELEMENT METADATA ( VERSION,
CREATOR,
CDATE,
EDATE ) >
<!-- Versioning is introduced to allow parsers
to detect and reject any documents not
complying with the supported DTD -->
<!ELEMENT VERSION ( #PCDATA ) >
<!-- CREATOR is purely informative -->
<!ELEMENT CREATOR ( #PCDATA ) >
<!-- Creation date for the record -->
<!ELEMENT CDATE ( YYYY-MM-DD ) >
<!-- Last modified date -->
<!ELEMENT EDATE ( YYYY-MM-DD ) >
<!-- Title of the publication -->
<!ELEMENT TITLE ( #PCDATA ) >
<!ATTLIST TITLE lang CDATA #IMPLIED >
<!-- AUTHORS contains only AU subelements, each
one of them corresponding to a single author
name -->
<!ELEMENT AUTHORS ( AU+ ) >
<!-- AU contains at least the person’s last name
(LNAME), and possibly the first and middle name(s)
(or just the initials) which would be stored in
element FNAME. PREF and SUFF represent the
salutation and suffix for the name. SUFF
typically is one of: Jr., Sr., II, III, IV.
PREF is rarely used but is here for completeness.
Typically we would store salutations such as
"Rev." (for "Reverend"), or "Prof." (for
"Professor") in this element.
-->
<!ELEMENT AU ( PREF?,
FNAME?,
LNAME,
SUFF? ) >
<!-- The attributes AF and EM are used to cross-
     reference author affiliations and email
addresses with the individual author records.
This is the only exception of attributes in
upper case. The typical use of this is:
<AU AF="AF_1 AF_2" EM="EM_3">...</AU>
-->
<!ATTLIST AU AF IDREFS #IMPLIED
EM IDREFS #IMPLIED
FN IDREFS #IMPLIED >
<!-- AU subelements -->
<!ELEMENT PREF ( #PCDATA ) >
<!ELEMENT FNAME ( #PCDATA ) >
<!ELEMENT LNAME ( #PCDATA ) >
<!ELEMENT SUFF ( #PCDATA ) >
<!-- AFFILIATIONS is the wrapper element for
the individual affiliation records, each
represented as an AF element -->
<!ELEMENT AFFILIATIONS ( AF+ ) >
<!ELEMENT AF ( #PCDATA ) >
<!-- the value of the ident attribute should
match one of the values assumed by the AF
attribute in an AU element -->
<!ATTLIST AF ident ID #REQUIRED >
<!ELEMENT EMAILS ( EM+ ) >
<!ELEMENT EM ( #PCDATA ) >
<!-- the value of the ident attribute should
match one of the values assumed by the EM
attribute in an AU element -->
<!ATTLIST EM ident ID #REQUIRED >
<!-- FOOTNOTES and FN subelements are here for
future use -->
<!ELEMENT FOOTNOTES ( FN+ ) >
<!ELEMENT FN ( #PCDATA ) >
<!ATTLIST FN ident ID #REQUIRED >
<!-- BIBCODE; for a definition, see:
http://adsdoc.harvard.edu/abs_doc/bib_help.html
http://adsabs.harvard.edu/cgi-bin/
nph-bib_query?1995ioda.book..259S
http://adsabs.harvard.edu/cgi-bin/
nph-bib_query?1995VA.....39R.272S
This identifier logically belongs to the
IDENTS element, but since it is the
identifier used internally in the system,
it is important to have it in a prominent
and easy to reach place.
-->
<!ELEMENT BIBCODE ( #PCDATA ) >
<!-- MSTRING is the unformatted string for the
monograph (article, book, whatever). Example:
<MSTRING>The Astrophysical Journal, Vol. 526,
n. 2, pp. L89-L92</MSTRING>
-->
<!ELEMENT MSTRING ( #PCDATA ) >
<!-- MONOGRAPH is a structured record containing
the fielded information about the monograph
where the bibliographic entry appeared.
Typically this is created by parsing the
text in the MSTRING element. Example:
<MTITLE>The Astrophysical Journal</MTITLE>
<VOLUME>526</VOLUME>
<ISSUE>2</ISSUE>
<PUBLISHER>University of Chicago Press
</PUBLISHER>
-->
<!ELEMENT MONOGRAPH ( MTITLE,
VOLUME?,
ISSUE?,
MNOTE?,
EDITORS?,
EDITION?,
PUBLISHER?,
LOCATION?,
MID* ) >
<!-- Monograph title (e.g. "Astrophysical Journal") -->
<!ELEMENT MTITLE ( #PCDATA ) >
<!ELEMENT VOLUME ( #PCDATA ) >
<!ATTLIST VOLUME type NMTOKEN #IMPLIED >
<!ELEMENT ISSUE ( #PCDATA ) >
<!-- A note about the monograph as supplied by the
publisher or editor -->
<!ELEMENT MNOTE ( #PCDATA ) >
<!-- List of editor names as extracted from MSTRING.
Formatting is as for AUTHORS and AU elements -->
<!ELEMENT EDITORS ( ED+ ) >
<!ELEMENT ED ( PREF?,
FNAME?,
LNAME,
SUFF? ) >
<!-- Edition of publication -->
<!ELEMENT EDITION ( #PCDATA ) >
<!-- Name of publisher -->
<!ELEMENT PUBLISHER ( #PCDATA ) >
<!-- Place of publication -->
<!ELEMENT LOCATION ( #PCDATA ) >
<!-- MID represents the monograph identification as
supplied by the publisher. This may be useful in
correlating our record with the publisher’s online
offerings. The "system" attribute characterizes
the system used to express the identifier -->
<!ELEMENT MID ( #PCDATA ) >
<!ATTLIST MID type NMTOKEN #IMPLIED >
<!-- If the bibliographic entry appeared in a series,
then the element SERIES contains information
about the series itself. Typically this consists
of data about a conference series (e.g. ASP
Conference Series). Note that there may be
several SERIES elements, since some
publications belong to "subseries" within
a series.
-->
<!ELEMENT SERIES ( SERTITLE,
SERVOL?,
SEREDITORS?,
SERBIBCODE? ) >
<!-- Title, volume, and editors of conference
series -->
<!ELEMENT SERTITLE ( #PCDATA ) >
<!ELEMENT SERVOL ( #PCDATA ) >
<!ELEMENT SEREDITORS ( ED+ ) >
<!-- Serial bibcode for publication (may coincide
with main bibcode) -->
<!ELEMENT SERBIBCODE ( #PCDATA ) >
<!-- PAGE may have the attribute type set to
     "s" (for "sequential") to indicate that
     the value associated with it does not
     represent a printed page number -->
<!ELEMENT PAGE ( #PCDATA ) >
<!ATTLIST PAGE type NMTOKEN #IMPLIED >
<!-- LPAGE gives the last page number (if known).
Does not make sense if PAGE is type="s" -->
<!ELEMENT LPAGE ( #PCDATA ) >
<!-- COPYRIGHT is just an unformatted string
containing copyright information from
publisher -->
<!ELEMENT COPYRIGHT ( #PCDATA ) >
<!ELEMENT PUBDATE ( YEAR, MONTH? ) >
<!ELEMENT MONTH ( #PCDATA ) >
<!ELEMENT YEAR ( #PCDATA ) >
<!-- CATEGORIES contain subelements indicating in
which subject categories the publication was
assigned. STI/RECON has always assigned a
category for each entry in their system, but
otherwise there is little else in our
database. The attributes origin and system
are used to keep track of the different
classifications used.
-->
<!ELEMENT CATEGORIES ( CA+ ) >
<!ATTLIST CATEGORIES origin NMTOKEN #IMPLIED
system NMTOKEN #IMPLIED >
<!ELEMENT CA ( #PCDATA ) >
<!-- Typically private fields supplied by the
data source. For instance, SIMBAD and LOC
provide comments about a bibliographic
entries -->
<!ELEMENT COMMENTS ( CO+ ) >
<!ATTLIST COMMENTS lang CDATA #IMPLIED
origin NMTOKEN #IMPLIED >
<!ELEMENT CO ( #PCDATA ) >
<!-- Author note -->
<!ELEMENT ANOTE ( #PCDATA ) >
<!-- BIBTYPE describes what type of publication
this entry corresponds to. This is
currently limited to the following tokens
(taken straight from the BibTeX
classification):
article
book
booklet
inbook
incollection
inproceedings
manual
masterthesis
misc
phdthesis
proceedings
techreport
unpublished
-->
<!ELEMENT BIBTYPE ( #PCDATA ) >
<!-- List of all known identifiers for this
publication -->
<!ELEMENT IDENTIFIERS ( ID+ ) >
<!-- Contents of an ID element is the identifier
used by a particular publisher or institution.
Examples:
<ID origin="UCP" type="PUBID">38426</ID>
<ID origin="STI" type="ACCNO">A90-12345</ID>
-->
<!ELEMENT ID ( #PCDATA ) >
<!ATTLIST ID origin NMTOKEN #IMPLIED
type NMTOKEN #REQUIRED >
<!-- the collective list of institutions that have given
us a record about this entry. -->
<!ELEMENT ORIGINS ( OR+ ) >
<!ELEMENT OR ( #PCDATA ) >
<!-- The list of objects associated with the
publication -->
<!ELEMENT OBJECTS ( OB+ ) >
<!ELEMENT OB ( #PCDATA ) >
<!-- Keywords assigned to the publication -->
<!ELEMENT KEYWORDS ( KW+ ) >
<!ATTLIST KEYWORDS lang CDATA #IMPLIED
origin NMTOKEN #IMPLIED
system NMTOKEN #REQUIRED >
<!ELEMENT KW ( #PCDATA ) >
<!-- An abstract of the publication. This is
typically provided to us by the publisher,
but may in some cases come from other
sources (E.g. STI, which keyed abstracts
in most cases). Therefore we allow several
ABSTRACT elements within each record, each
with a separate origin or language.
The attribute type is used to keep track
of how the abstract data was generated.
For instance, abstract text generated by
our OCR software will have:
origin="ADS" type="OCR" lang="en"
-->
<!ELEMENT ABSTRACT ( P+ ) >
<!ATTLIST ABSTRACT origin NMTOKEN #IMPLIED
                   type NMTOKEN #IMPLIED
                   lang CDATA #IMPLIED >
<!-- Abstracts are composed of separate
paragraphs which have mixed contents as
listed below. All the subelements listed
below have the familiar HTML meaning and
are used to render the abstract text in a
decent way -->
<!ELEMENT P (#PCDATA |A| BR | PRE | SUP | SUB)* >
<!-- Line breaks (BR) and preformatted text (PRE)
make it possible to display tables and other
preformatted text. -->
<!ELEMENT BR EMPTY >
<!ELEMENT PRE (#PCDATA | A | BR | SUP | SUB )* >
<!-- A is the familiar anchor element. -->
<!ELEMENT A ( #PCDATA | BR | SUP | SUB )* >
<!ATTLIST A HREF CDATA #REQUIRED >
<!-- SUP and SUB are superscripts and subscripts.
In our content model, they are allowed to
contain additional SUP and SUB elements,
although we may decide to restrict them to
PCDATA at some point -->
<!ELEMENT SUP ( #PCDATA | A | BR | SUP | SUB )* >
<!ELEMENT SUB ( #PCDATA | A | BR | SUP | SUB )* >
```
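As a concrete illustration of the AF/EM cross-referencing convention described in the DTD comments above, the following sketch resolves the IDREFS with Python's standard library. The fragment and the helper name are invented for demonstration and are not part of the ADS software:

```python
import xml.etree.ElementTree as ET

# A tiny fragment following the AU/AF conventions of the DTD:
# each AU carries whitespace-separated IDREFS in its AF attribute,
# each pointing at an AF element with a matching ident attribute.
fragment = """<REC>
<AUTHORS>
<AU AF="AF_1 AF_2"><FNAME>Jean-Paul</FNAME><LNAME>Kneib</LNAME></AU>
<AU AF="AF_2"><FNAME>Roser</FNAME><LNAME>Pello</LNAME></AU>
</AUTHORS>
<AFFILIATIONS>
<AF ident="AF_1">Cambridge, Univ.</AF>
<AF ident="AF_2">Observatoire Midi-Pyrenees</AF>
</AFFILIATIONS>
</REC>"""

def resolve_affiliations(xml_text):
    """Map each author's last name to its list of affiliation strings."""
    root = ET.fromstring(xml_text)
    # Index AF elements by their ident attribute.
    by_ident = {af.get("ident"): af.text for af in root.iter("AF")}
    # Split each AU's IDREFS attribute and look each token up.
    return {au.findtext("LNAME"):
            [by_ident[ref] for ref in (au.get("AF") or "").split()]
            for au in root.iter("AU")}

print(resolve_affiliations(fragment))
```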
## Appendix B
A sample text file from the ADS Abstract Service showing XML markup for the full bibliographic entry, including records from STI, MNRAS, and SIMBAD. Items in bold are those selected to create the canonical text file shown in Appendix C.
```xml
<?xml version="1.0"?>
<!DOCTYPE ADS_BIBALL SYSTEM "ads.dtd">
<ADS_BIBALL>
<BIBRECORD origin="STI">
<TITLE>Spectroscopic confirmation of redshifts predicted by gravitational lensing</TITLE>
<AUTHORS>
<AU AF="AF_1">
<FNAME>Tim</FNAME>
<LNAME>Ebbels</LNAME>
</AU>
<AU AF="AF_1">
<FNAME>Richard</FNAME>
<LNAME>Ellis</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-Paul</FNAME>
<LNAME>Kneib</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-Francois</FNAME>
<LNAME>LeBorgne</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Roser</FNAME>
<LNAME>Pello</LNAME>
</AU>
<AU AF="AF_3">
<FNAME>Ian</FNAME>
<LNAME>Smail</LNAME>
</AU>
<AU AF="AF_4">
<FNAME>Blai</FNAME>
<LNAME>Sanahuja</LNAME>
</AU>
</AUTHORS>
<AFFILIATIONS>
<AF ident="AF_1">Cambridge, Univ.</AF>
<AF ident="AF_2">Observatoire Midi-Pyrenees</AF>
<AF ident="AF_3">Durham, Univ.</AF>
<AF ident="AF_4">Barcelona, Univ.</AF>
</AFFILIATIONS>
<MSTRING>Royal Astronomical Society, Monthly Notices, vol. 295, p. 75</MSTRING>
<MONOGRAPH>
<MTITLE>Royal Astronomical Society, Monthly Notices</MTITLE>
<VOLUME>295</VOLUME>
</MONOGRAPH>
<PAGE>75</PAGE>
<PUBDATE>
<YEAR>1998</YEAR>
<MONTH>03</MONTH>
</PUBDATE>
<CATEGORIES>
<CA>Astrophysics</CA>
</CATEGORIES>
<BIBCODE>1998MNRAS.295...75E</BIBCODE>
<BIBTYPE>article</BIBTYPE>
<IDENTIFIERS>
<ID type="ACCNO">A98-51106</ID>
</IDENTIFIERS>
<KEYWORDS system="STI">
<KW>GRAVITATIONAL LENSES</KW>
<KW>RED SHIFT</KW>
<KW>HUBBLE SPACE TELESCOPE</KW>
<KW>GALACTIC CLUSTERS</KW>
<KW>ASTRONOMICAL SPECTROSCOPY</KW>
<KW>MASS DISTRIBUTION</KW>
<KW>SPECTROGRAPHS</KW>
<KW>PREDICTION ANALYSIS TECHNIQUES</KW>
<KW>ASTRONOMICAL PHOTOMETRY</KW>
</KEYWORDS>
<ABSTRACT>
We present deep spectroscopic measurements of 18 distant field galaxies identified as gravitationally lensed arcs in a Hubble Space Telescope image of the cluster Abell 2218. Redshifts of these objects were predicted by Kneib et al. using a lensing analysis constrained by the properties of two bright arcs of known redshift and other multiply imaged sources. The new spectroscopic identifications were obtained using long exposures with the LDSS-2 spectrograph on the William Herschel Telescope, and demonstrate the capability of that instrument to reach new limits, R = 24; the lensing magnification implies true source magnitudes as faint as R = 25. Statistically, our measured redshifts are in excellent agreement with those predicted from Kneib et al.'s lensing analysis, and this gives considerable support to the redshift distribution derived by the lensing inversion method for the more numerous and fainter arclets extending to R = 25.5. We explore the remaining uncertainties arising from both the mass distribution in the central regions of Abell 2218 and the inversion method itself, and conclude that the mean redshift of the faint field population at R = 25.5 (B = 26-27) is low, (z = 0.8-1). We discuss this result in the context of redshift distributions estimated from multicolor photometry.
</ABSTRACT>
</BIBRECORD>
<BIBRECORD origin="MNRAS">
<TITLE>Spectroscopic confirmation of redshifts predicted by gravitational lensing</TITLE>
<AUTHORS>
<AU AF="AF_1">
<FNAME>Tim</FNAME>
<LNAME>Ebbels</LNAME>
</AU>
<AU AF="AF_1" EM="EM_1">
<FNAME>Richard</FNAME>
<LNAME>Ellis</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-Paul</FNAME>
<LNAME>Kneib</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-François</FNAME>
<LNAME>LeBorgne</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Roser</FNAME>
<LNAME>Pelló</LNAME>
</AU>
<AU AF="AF_3">
<FNAME>Ian</FNAME>
<LNAME>Smail</LNAME>
</AU>
<AU AF="AF_4">
<FNAME>Blai</FNAME>
<LNAME>Sanahuja</LNAME>
</AU>
</AUTHORS>
<AFFILIATIONS>
<AF ident="AF_1">Institute of Astronomy, Madingley Road, Cambridge CB3 0HA</AF>
<AF ident="AF_2">Observatoire Midi-Pyrénées, 14 Avenue E. Belin</AF>
<AF ident="AF_3">Department of Physics, University of Durham, South Road, Durham DH1 3LE</AF>
<AF ident="AF_4">Departament d'Astronomia i Meteorologia, Universitat de Barcelona, Diagonal 648, 08028 Barcelona, Spain</AF>
</AFFILIATIONS>
<EMAILS>
<EM ident="EM_1">rse@ast.cam.ac.uk</EM>
</EMAILS>
<MSTRING>Monthly Notices of the Royal Astronomical Society, Volume 295, Issue 1, pp. 75-91.</MSTRING>
<MONOGRAPH>
<MTITLE>Monthly Notices of the Royal Astronomical Society</MTITLE>
<VOLUME>295</VOLUME>
<ISSUE>1</ISSUE>
</MONOGRAPH>
<PAGE>75</PAGE>
<LPAGE>91</LPAGE>
<PUBDATE>
<YEAR>1998</YEAR>
<MONTH>03</MONTH>
</PUBDATE>
<COPYRIGHT>1998: The Royal Astronomical Society</COPYRIGHT>
<BIBCODE>1998MNRAS.295...75E</BIBCODE>
<KEYWORDS system="AAS">
<KW>GALAXIES: CLUSTERS: INDIVIDUAL: ABELL 2218</KW>
<KW>GALAXIES: EVOLUTION</KW>
<KW>COSMOLOGY: OBSERVATIONS</KW>
<KW>GRAVITATIONAL LENSING</KW>
</KEYWORDS>
<ABSTRACT>
We present deep spectroscopic measurements of 18 distant field galaxies identified as gravitationally lensed arcs in a Hubble Space Telescope image of the cluster Abell 2218. Redshifts of these objects were predicted by Kneib et al. using a lensing analysis constrained by the properties of two bright arcs of known redshift and other multiply imaged sources. The new spectroscopic identifications were obtained using long exposures with the LDSS-2 spectrograph on the William Herschel Telescope, and demonstrate the capability of that instrument to reach new limits, R≃24; the lensing magnification implies true source magnitudes as faint as R≃25. Statistically, our measured redshifts are in excellent agreement with those predicted from Kneib et al.'s lensing analysis, and this gives considerable support to the redshift distribution derived by the lensing inversion method for the more numerous and fainter arclets extending to R≃25.5. We explore the remaining uncertainties arising from both the mass distribution in the central regions of Abell 2218 and the inversion method itself, and conclude that the mean redshift of the faint field population at R≃25.5 (B∼26–27) is low, ⟨z⟩=0.8–1. We discuss this result in the context of redshift distributions estimated from multicolour photometry. Although such comparisons are not straightforward, we suggest that photometric techniques may achieve a reasonable level of agreement, particularly when they include near-infrared photometry with discriminatory capabilities in the 1&lt;z&lt;2 range.
</ABSTRACT>
</BIBRECORD>
<BIBRECORD origin="SIMBAD">
<TITLE>Spectroscopic confirmation of redshifts predicted by gravitational lensing.</TITLE>
<AUTHORS>
<AU>
<FNAME>T.</FNAME>
<LNAME>Ebbels</LNAME>
</AU>
<AU>
<FNAME>R.</FNAME>
<LNAME>Ellis</LNAME>
</AU>
<AU>
<FNAME>J.-P.</FNAME>
<LNAME>Kneib</LNAME>
</AU>
<AU>
<FNAME>J.-F.</FNAME>
<LNAME>LeBorgne</LNAME>
</AU>
<AU>
<FNAME>R.</FNAME>
<LNAME>Pelló</LNAME>
</AU>
<AU>
<FNAME>I.</FNAME>
<LNAME>Smail</LNAME>
</AU>
<AU>
<FNAME>B.</FNAME>
<LNAME>Sanahuja</LNAME>
</AU>
</AUTHORS>
<MSTRING>Mon. Not. R. Astron. Soc., 295, 75-91 (1998)</MSTRING>
<MONOGRAPH>
<MTITLE>Mon. Not. R. Astron. Soc.</MTITLE>
<VOLUME>295</VOLUME>
</MONOGRAPH>
<PAGE>75</PAGE>
<LPAGE>91</LPAGE>
<PUBDATE>
<YEAR>1998</YEAR>
<MONTH>03</MONTH>
</PUBDATE>
<BIBCODE>1998MNRAS.295...75E</BIBCODE>
</BIBRECORD>
</ADS_BIBALL>
```
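Records such as the ones above may carry inline markup (A, BR, SUP, SUB) inside their abstract paragraphs, as allowed by the P content model in the DTD. Flattening that mixed content to plain text is straightforward with Python's standard library; the sample paragraph below is invented:

```python
import xml.etree.ElementTree as ET

# An invented P element mixing text with the inline markup the DTD allows.
para = ('<P>The flux scales as r<SUP>-2</SUP>;<BR />'
        'see <A HREF="http://example.org/">the preprint</A>.</P>')

def paragraph_text(xml_text):
    """Flatten a P element to plain text, keeping element contents and tails."""
    return "".join(ET.fromstring(xml_text).itertext())

print(paragraph_text(para))
```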
## Appendix C
An example of an extracted text file from the ADS Abstract Service showing only the preferred instances of each field in XML markup for the same bibliographic entry listed in Appendix B.
```xml
<?xml version="1.0"?>
<!DOCTYPE ADS_ABSTRACT SYSTEM "ads.dtd">
<ADS_ABSTRACT>
<TITLE>Spectroscopic confirmation of redshifts predicted by gravitational lensing</TITLE>
<AUTHORS>
<AU AF="AF_1">
<FNAME>Tim</FNAME>
<LNAME>Ebbels</LNAME>
</AU>
<AU AF="AF_1" EM="EM_1">
<FNAME>Richard</FNAME>
<LNAME>Ellis</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-Paul</FNAME>
<LNAME>Kneib</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Jean-François</FNAME>
<LNAME>LeBorgne</LNAME>
</AU>
<AU AF="AF_2">
<FNAME>Roser</FNAME>
<LNAME>Pelló</LNAME>
</AU>
<AU AF="AF_3">
<FNAME>Ian</FNAME>
<LNAME>Smail</LNAME>
</AU>
<AU AF="AF_4">
<FNAME>Blai</FNAME>
<LNAME>Sanahuja</LNAME>
</AU>
</AUTHORS>
<AFFILIATIONS>
<AF ident="AF_1">Institute of Astronomy, Madingley Road, Cambridge CB3 0HA</AF>
<AF ident="AF_2">Observatoire Midi-Pyrénées, 14 Avenue E. Belin</AF>
<AF ident="AF_3">Department of Physics, University of Durham, South Road, Durham DH1 3LE</AF>
<AF ident="AF_4">Departament d'Astronomia i Meteorologia, Universitat de Barcelona, Diagonal 648, 08028 Barcelona, Spain</AF>
</AFFILIATIONS>
<EMAILS>
<EM ident="EM_1">rse@ast.cam.ac.uk</EM>
</EMAILS>
<MSTRING>Monthly Notices of the Royal Astronomical Society, Volume 295, Issue 1, pp. 75-91.</MSTRING>
<MONOGRAPH>
<MTITLE>Monthly Notices of the Royal Astronomical Society</MTITLE>
<VOLUME>295</VOLUME>
<ISSUE>1</ISSUE>
</MONOGRAPH>
<PAGE>75</PAGE>
<LPAGE>91</LPAGE>
<PUBDATE>
<YEAR>1998</YEAR>
<MONTH>03</MONTH>
</PUBDATE>
<CATEGORIES>
<CA>Astrophysics</CA>
</CATEGORIES>
<COPYRIGHT>1998: The Royal Astronomical Society</COPYRIGHT>
<IDENTIFIERS>
<ID type="ACCNO">A98-51106</ID>
</IDENTIFIERS>
<ORIGINS>
<OR>STI</OR>
<OR>MNRAS</OR>
<OR>SIMBAD</OR>
</ORIGINS>
<BIBCODE>1998MNRAS.295...75E</BIBCODE>
<BIBTYPE>article</BIBTYPE>
<KEYWORDS system="STI">
<KW>GRAVITATIONAL LENSES</KW>
<KW>RED SHIFT</KW>
<KW>HUBBLE SPACE TELESCOPE</KW>
<KW>GALACTIC CLUSTERS</KW>
<KW>ASTRONOMICAL SPECTROSCOPY</KW>
<KW>MASS DISTRIBUTION</KW>
<KW>SPECTROGRAPHS</KW>
<KW>PREDICTION ANALYSIS TECHNIQUES</KW>
<KW>ASTRONOMICAL PHOTOMETRY</KW>
</KEYWORDS>
<KEYWORDS system="AAS">
<KW>GALAXIES: CLUSTERS: INDIVIDUAL: ABELL 2218</KW>
<KW>GALAXIES: EVOLUTION</KW>
<KW>COSMOLOGY: OBSERVATIONS</KW>
<KW>GRAVITATIONAL LENSING</KW>
</KEYWORDS>
<ABSTRACT>
We present deep spectroscopic measurements of 18 distant field galaxies identified as gravitationally lensed arcs in a Hubble Space Telescope image of the cluster Abell 2218. Redshifts of these objects were predicted by Kneib et al. using a lensing analysis constrained by the properties of two bright arcs of known redshift and other multiply imaged sources. The new spectroscopic identifications were obtained using long exposures with the LDSS-2 spectrograph on the William Herschel Telescope, and demonstrate the capability of that instrument to reach new limits, R≃24; the lensing magnification implies true source magnitudes as faint as R≃25. Statistically, our measured redshifts are in excellent agreement with those predicted from Kneib et al.'s lensing analysis, and this gives considerable support to the redshift distribution derived by the lensing inversion method for the more numerous and fainter arclets extending to R≃25.5. We explore the remaining uncertainties arising from both the mass distribution in the central regions of Abell 2218 and the inversion method itself, and conclude that the mean redshift of the faint field population at R≃25.5 (B∼26–27) is low, ⟨z⟩=0.8–1. We discuss this result in the context of redshift distributions estimated from multicolour photometry. Although such comparisons are not straightforward, we suggest that photometric techniques may achieve a reasonable level of agreement, particularly when they include near-infrared photometry with discriminatory capabilities in the 1&lt;z&lt;2 range.
</ABSTRACT>
</ADS_ABSTRACT>
```
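A canonical record of this shape can be consumed with Python's standard library alone. The sketch below (the trimmed record is for illustration; element names are exactly those of the DTD) extracts a few of the preferred fields:

```python
import xml.etree.ElementTree as ET

# A trimmed-down canonical record using the DTD's element names.
record = """<ADS_ABSTRACT>
<TITLE>Spectroscopic confirmation of redshifts predicted by gravitational lensing</TITLE>
<AUTHORS>
<AU AF="AF_1"><FNAME>Tim</FNAME><LNAME>Ebbels</LNAME></AU>
<AU AF="AF_1"><FNAME>Richard</FNAME><LNAME>Ellis</LNAME></AU>
</AUTHORS>
<PUBDATE><YEAR>1998</YEAR><MONTH>03</MONTH></PUBDATE>
<BIBCODE>1998MNRAS.295...75E</BIBCODE>
</ADS_ABSTRACT>"""

root = ET.fromstring(record)
bibcode = root.findtext("BIBCODE")
year = root.findtext("PUBDATE/YEAR")
# Assemble "First Last" author strings from the AU subelements.
authors = ["{} {}".format(au.findtext("FNAME"), au.findtext("LNAME"))
           for au in root.find("AUTHORS")]
print(bibcode, year, "; ".join(authors))
```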
hep-ph/0002168 IASSNS–HEP–00–08
Pseudo-Dirac Solar Neutrinos
Yosef Nir
School of Natural Science, Institute for Advanced Study
Princeton, NJ 08540, USA<sup>1</sup> Address for academic year 1999-2000
nir@ias.edu
Department of Particle Physics
Weizmann Institute of Science, Rehovot 76100, Israel
ftnir@wicc.weizmann.ac.il
Three of the viable solutions of the solar neutrino problem are consistent with close to maximal leptonic mixing: $`\mathrm{sin}^2\theta _{12}=\frac{1}{2}\left(1-ϵ_{12}\right)`$ with $`\left|ϵ_{12}\right|\ll 1`$. Flavor models can naturally explain close to maximal mixing if approximate horizontal symmetries force a pseudo-Dirac structure on the neutrino mass matrix. An experimental determination of $`\left|ϵ_{12}\right|`$ and sign($`ϵ_{12}`$) can constrain the structure of the lepton mass matrices and consequently provide stringent tests of such flavor models. If both $`\left|ϵ_{12}\right|`$ and $`\mathrm{\Delta }m_{21}^2`$ are known, it may be possible to estimate the mass scale of the pseudo-Dirac neutrinos. Radiative corrections to close to maximal mixing are negligible. Subtleties related to the kinetic terms in Froggatt-Nielsen models are clarified.
2/00
1. Introduction
Three of the solutions of the solar neutrino problem require a large mixing angle \[1--4\]:
$$\begin{array}{cc}\hfill \mathrm{LMA}:& \mathrm{sin}^22\theta _{12}\sim 0.7-1,\mathrm{\Delta }m_{21}^2\sim (1-20)\times 10^{-5}\mathrm{eV}^2,\hfill \\ \hfill \mathrm{LOW}:& \mathrm{sin}^22\theta _{12}\sim 0.8-1,\mathrm{\Delta }m_{21}^2\sim (3-30)\times 10^{-8}\mathrm{eV}^2,\hfill \\ \hfill \mathrm{VAC}_\mathrm{L}:& \mathrm{sin}^22\theta _{12}\sim 0.7-1,\mathrm{\Delta }m_{21}^2\sim (4-10)\times 10^{-10}\mathrm{eV}^2.\hfill \end{array}$$
Here LMA and LOW refer to matter-enhanced oscillations with a large mixing angle in the high and low $`\mathrm{\Delta }m^2`$ ranges, respectively, while VAC<sub>L</sub> refers to vacuum oscillations with relatively large $`\mathrm{\Delta }m^2`$. The range for the mixing angle in (1.1) is close to maximal mixing, $`\mathrm{sin}^22\theta _{12}=1`$. This case is particularly interesting from the theoretical point of view. It follows from a simple structure of the relevant $`2\times 2`$ block in the neutrino mass matrix in the basis where the charged lepton mass matrix is diagonal:
$$M_\nu ^{(2)}=m\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right).$$
Such a structure is easily obtained in models of horizontal symmetries \[5--9\] that try to explain the observed smallness and hierarchy in the charged fermion parameters (mass ratios and mixing angles). For example, if the lepton doublets of the first two generations carry an opposite charge under a U(1) symmetry (and the relevant scalar field is neutral), then $`M_\nu ^{(2)}`$ has the structure (1.1) in the symmetry limit.
Any horizontal symmetry must be broken in Nature. An unbroken horizontal symmetry leads to either degeneracy between fermions of different generations or vanishing mixing angles (see e.g. and references therein). In particular, the mass degeneracy implied by (1.1) must be broken to satisfy (1.1). The horizontal symmetry still has observable consequences if the breaking parameters are small. Then the low energy effective theory is subject to selection rules that are manifested in the smallness and hierarchy of the flavor parameters. In the case of close-to-maximal mixing, the small breaking leads to a small splitting between the masses of the two neutrinos and to a small deviation from maximal mixing, that is, the two Majorana neutrinos form a pseudo-Dirac neutrino:
$$\frac{\mathrm{\Delta }m_{21}^2}{m^2}\ll 1,\qquad 1-\mathrm{sin}^22\theta _{12}\ll 1.$$
Here $`m`$ denotes the average of $`m_1`$ and $`m_2`$. A measurement of these small effects will provide further information about the pattern of symmetry breaking and guide us in the process of selecting among the many presently viable models of horizontal symmetries. (For interesting studies of the implications of solar neutrino measurements for small entries in the neutrino mass matrix, see refs. \[11,12\].)
There are three light active neutrinos in Nature. (In this work we assume that these are the only light neutrinos and do not consider the possibility of light sterile neutrinos. Note that the large angle solutions of the solar neutrino problem are inconsistent with a pseudo-Dirac $`\nu _e`$–$`\nu _s`$ combination, such as in the model of ref. .) In the two generation framework, where there is a single mixing angle $`\theta `$, maximal mixing is defined by maximal oscillation depth in vacuum and corresponds to $`\mathrm{sin}^22\theta =1`$. In the three generation framework, what we mean by maximal mixing is that the disappearance probability is equivalent to that for maximal two neutrino mixing at the relevant mass scale . The disappearance probability for $`\nu _e`$ in vacuum, $`P_{e\not e}`$, is given by
$$P_{e\not e}=4|V_{e1}|^2|V_{e2}|^2\mathrm{sin}^2\mathrm{\Delta }_{\mathrm{sun}}+4|V_{e3}|^2\mathrm{sin}^2\mathrm{\Delta }_{\mathrm{atm}}.$$
Here $`V_{ij}`$ are the elements of the Maki-Nakagawa-Sakata (MNS) matrix and we use the definition $`\mathrm{\Delta }_{jk}\equiv \frac{\mathrm{\Delta }m_{jk}^2L}{4E}`$, and the following input from solar and atmospheric neutrino experiments:
$$|\mathrm{\Delta }_{\mathrm{sun}}|=|\mathrm{\Delta }_{21}|\ll |\mathrm{\Delta }_{31}|\simeq |\mathrm{\Delta }_{32}|=|\mathrm{\Delta }_{\mathrm{atm}}|.$$
At the $`L/E`$-scale that is relevant to solar neutrinos, the second term in (1.1) averages out to $`2|V_{e3}|^2`$. The only oscillatory term is the first one, and our definition of maximal mixing corresponds to
$$4|V_{e1}|^2|V_{e2}|^2=1,$$
which leads to
$$|V_{e1}|^2=|V_{e2}|^2=1/2,|V_{e3}|^2=0.$$
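The averaging argument above is easy to check numerically: sampled over many oscillation lengths, $`\mathrm{sin}^2\mathrm{\Delta }`$ averages to $`1/2`$, so a term $`4|V_{e3}|^2\mathrm{sin}^2\mathrm{\Delta }_{\mathrm{atm}}`$ contributes $`2|V_{e3}|^2`$ on average. A short numerical illustration (the value of $`|V_{e3}|^2`$ is arbitrary):

```python
import math

# Average sin^2 over exactly 1000 periods (period pi), sampled uniformly.
N = 200000
xmax = 1000.0 * math.pi
avg = sum(math.sin(k * xmax / N) ** 2 for k in range(N)) / N

Ve3_sq = 0.02  # an arbitrary small |V_e3|^2, for illustration
print(avg, 4.0 * Ve3_sq * avg)  # ~0.5, and ~2*|V_e3|^2
```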
In the standard parametrization of the $`V_{\mathrm{MNS}}`$ matrix,
$$V_{\mathrm{MNS}}=\left(\begin{array}{ccc}c_{12}c_{13}& s_{12}c_{13}& s_{13}e^{-i\delta }\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta }& c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta }& s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta }& -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta }& c_{23}c_{13}\end{array}\right),$$
where $`s_{ij}\equiv \mathrm{sin}\theta _{ij}`$ and $`c_{ij}\equiv \mathrm{cos}\theta _{ij}`$, the conditions (1.1) translate into
$$s_{12}^2=1/2,s_{13}=0.$$
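As a sanity check on this parametrization (an illustration, not part of the paper's analysis), one can verify numerically that the matrix built from any angles and phase is unitary:

```python
import cmath
import math

def mns(t12, t13, t23, delta):
    """MNS matrix in the standard parametrization, as a list of rows."""
    c12, s12 = math.cos(t12), math.sin(t12)
    c13, s13 = math.cos(t13), math.sin(t13)
    c23, s23 = math.cos(t23), math.sin(t23)
    e = cmath.exp(1j * delta)
    ec = e.conjugate()
    return [
        [c12 * c13, s12 * c13, s13 * ec],
        [-s12 * c23 - c12 * s23 * s13 * e,
         c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,
         -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ]

def max_unitarity_defect(V):
    """Largest entry of |V V^dagger - 1|."""
    return max(abs(sum(V[i][k] * V[j][k].conjugate() for k in range(3))
                   - (1.0 if i == j else 0.0))
               for i in range(3) for j in range(3))

V = mns(0.7, 0.1, 0.8, 1.2)  # arbitrary angles and phase
print(max_unitarity_defect(V))
```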
By close-to-maximal-mixing we refer to a situation close to (1.1) or, equivalently, to (1.1):
$$ϵ_{12}\equiv 1-2s_{12}^2\ll 1,\qquad s_{13}\ll 1.$$
Our convention here is that $`\mathrm{\Delta }m_{21}^2\equiv m_2^2-m_1^2>0`$, so that $`ϵ_{12}>0`$ ($`ϵ_{12}<0`$) corresponds to a situation where the lighter (heavier) state has a larger component of $`\nu _e`$.
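A useful identity follows directly from this definition: writing $`s_{12}^2=\frac{1}{2}(1-ϵ_{12})`$ and $`c_{12}^2=\frac{1}{2}(1+ϵ_{12})`$,

$$\mathrm{sin}^22\theta _{12}=4s_{12}^2c_{12}^2=\left(1-ϵ_{12}\right)\left(1+ϵ_{12}\right)=1-ϵ_{12}^2,$$

so the deviation of $`\mathrm{sin}^22\theta _{12}`$ from unity is only second order in $`ϵ_{12}`$, while $`\mathrm{sin}^2\theta _{12}`$ deviates from $`\frac{1}{2}`$ at first order. This is why observables that depend on $`\mathrm{sin}^2\theta _{12}`$, such as matter oscillations, can determine the sign of $`ϵ_{12}`$.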
Solar neutrino experiments (and, more generally, any oscillation experiments) are sensitive to the mass-squared difference $`\mathrm{\Delta }m_{12}^2`$ but not to the masses themselves. On the other hand, they can be sensitive to small deviations from maximal mixing \[16--18\]. Moreover, matter oscillations (but not vacuum oscillations) are affected differently by $`ϵ_{12}>0`$ and by $`ϵ_{12}<0`$, that is, they are sensitive not only to $`\mathrm{sin}^22\theta _{12}`$ but also to $`\mathrm{sin}^2\theta _{12}`$. In other words, if the solar neutrino problem is solved by one of the large angle solutions, then experiments may provide us with a measurement of the sign and the size of the small parameter $`ϵ_{12}`$. The purpose of this work is to understand the potential lessons for flavor model building from solar neutrino measurements of $`ϵ_{12}`$.
Our interest lies in models where $`ϵ_{12}`$, $`s_{13}`$ and $`\mathrm{\Delta }m_{21}^2/m^2`$ are naturally small. We focus on models where there are no exact relations between entries of the lepton mass matrices (beyond the symmetric structure of the neutrino Majorana mass matrix). The smallness of physical parameters must then be related to the smallness of various entries in the mass matrices and not to fine-tuned cancellations between various contributions. As a concrete example of such a framework we think of models of approximate Abelian horizontal symmetries, but most of our results have more general applicability.
Horizontal symmetries constrain the structure of the mass matrices $`M_\nu `$ and $`M_{\ell }`$. In section 2 we derive the dependence of the mixing angles and of $`\mathrm{\Delta }m_{12}^2/m^2`$ on the entries of the lepton mass matrices. Usually, the constraints of the horizontal symmetries apply at a high energy scale. The effects of renormalization group evolution (RGE) are analyzed in section 3. For specific high energy theories of flavor, such as the Froggatt-Nielsen mechanism , the kinetic terms are corrected in a flavor dependent way when heavy degrees of freedom are integrated out. The effects of non-canonical kinetic terms are studied in section 4. The analysis in sections 2–4 is carried out under the simplifying assumption of CP symmetry. Effects of phases are studied within a two generation model in section 5. We apply our results to various models of Abelian flavor symmetries in section 6. We summarize our conclusions in section 7.
2. From Interaction Basis Parameters to Physical Parameters
Flavor models and, in particular, models with horizontal symmetries, constrain the entries of the lepton mass matrices in the interaction basis. To understand the implications of experimental constraints, one needs to express the physical observables (masses and mixing angles) in terms of the interaction basis parameters.
Given the charged lepton mass matrix $`M_{\ell }`$ and the neutrino mass matrix $`M_\nu `$ in some interaction basis,
$$-\mathcal{L}_M=\left(\begin{array}{ccc}\overline{e_L}& \overline{\mu _L}& \overline{\tau _L}\end{array}\right)M_{\ell }\left(\begin{array}{c}e_R\\ \mu _R\\ \tau _R\end{array}\right)+\left(\begin{array}{ccc}\nu _e^T& \nu _\mu ^T& \nu _\tau ^T\end{array}\right)M_\nu \left(\begin{array}{c}\nu _e\\ \nu _\mu \\ \nu _\tau \end{array}\right)+\mathrm{h}.\mathrm{c}.,$$
$`V_{\mathrm{MNS}}`$ can be found from the diagonalizing matrices $`V_{\ell }`$ and $`V_\nu `$:
$$V_{\mathrm{MNS}}=P_{\ell }V_{\ell }V_\nu ^{\dagger },$$
where $`P_{\ell }`$ is a diagonal phase matrix. The unitary matrices $`V_{\ell L}`$ and $`V_\nu `$ are found from
$$V_{\mathrm{}L}M_{\mathrm{}}M_{\mathrm{}}^{\dagger }V_{\mathrm{}L}^{\dagger }=\mathrm{diag}(m_e^2,m_\mu ^2,m_\tau ^2),V_\nu M_\nu ^{\dagger }M_\nu V_\nu ^{\dagger }=\mathrm{diag}(m_1^2,m_2^2,m_3^2).$$
Our first step is to express the physical mixing angles in terms of the parameters of the diagonalizing matrices. For simplicity, we ignore CP violation, so that the mass matrices and, consequently, the diagonalizing matrices are real. (We comment on the effects of CP violating phases in section 5.) Let us define the three unitary matrices
$$\begin{array}{cc}\hfill R_{12}(\theta _{12})\equiv & \left(\begin{array}{ccc}c_{12}& s_{12}& 0\\ -s_{12}& c_{12}& 0\\ 0& 0& 1\end{array}\right),\hfill \\ \hfill R_{13}(\theta _{13})\equiv & \left(\begin{array}{ccc}c_{13}& 0& s_{13}\\ 0& 1& 0\\ -s_{13}& 0& c_{13}\end{array}\right),\hfill \\ \hfill R_{23}(\theta _{23})\equiv & \left(\begin{array}{ccc}1& 0& 0\\ 0& c_{23}& s_{23}\\ 0& -s_{23}& c_{23}\end{array}\right).\hfill \end{array}$$
Then, eq. (1.1) (with $`\delta `$ set to zero) can be rewritten as
$$V_{\mathrm{MNS}}=R_{23}(\theta _{23})R_{13}(\theta _{13})R_{12}(\theta _{12}).$$
We further parametrize the diagonalizing matrices as follows:
$$\begin{array}{cc}\hfill V_\nu ^{\dagger }=& R_{23}(\theta _{23}^\nu )R_{13}(\theta _{13}^\nu )R_{12}(\theta _{12}^\nu ),\hfill \\ \hfill V_{\mathrm{}}=& R_{23}(\theta _{23}^{\mathrm{}})R_{13}(\theta _{13}^{\mathrm{}})R_{12}(\theta _{12}^{\mathrm{}}).\hfill \end{array}$$
We limit ourselves to the large class of models where there are no exact relations between the entries in $`M_\nu `$ (up to the fact that it is symmetric, that is, $`(M_\nu )_{ij}=(M_\nu )_{ji}`$) and in $`M_{\mathrm{}}`$. Then the smallness of $`ϵ_{12}`$ and $`s_{13}`$ requires that the following parameters are small:
$$s_{12}^{\mathrm{}},s_{13}^{\mathrm{}},ϵ_{12}^\nu ,s_{13}^\nu \ll 1,$$
where
$$ϵ_{12}^\nu \equiv 1-2(s_{12}^\nu )^2.$$
Evaluating to first order in the small parameters of (2.1), we obtain:
$$\begin{array}{cc}\hfill ϵ_{12}=& ϵ_{12}^\nu +2c_{23}^\nu s_{12}^{\mathrm{}}-2s_{23}^\nu s_{13}^{\mathrm{}},\hfill \\ \hfill s_{13}=& s_{13}^\nu -s_{23}^\nu s_{12}^{\mathrm{}}-c_{23}^\nu s_{13}^{\mathrm{}}.\hfill \end{array}$$
We caution the reader that the sign of the terms that depend on $`s_{12}^{\mathrm{}}`$ and $`s_{13}^{\mathrm{}}`$ is ambiguous. In particular, we approximated $`\mathrm{sin}2\theta _{12}^\nu =1`$, but with the parametrization (2.1) it could equal $`-1`$. A full treatment of the sign and phase dependence of the $`s_{12}^{\mathrm{}}`$ contribution to $`ϵ_{12}`$ is given in section 5.
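As a numerical sanity check of these first-order relations, the sketch below (all angles are illustrative, not values from the paper; since the signs are convention dependent, only magnitudes are compared) builds $`V=V_{\mathrm{}}V_\nu ^{\dagger }`$ with maximal 12 mixing in the neutrino sector and only a small $`s_{12}^{\mathrm{}}`$ switched on:

```python
import numpy as np

def R12(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def R23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

# Illustrative inputs (not from the paper): only s12^l is switched on.
s12_l, th23_nu = 0.02, 0.6
V_l = R12(np.arcsin(s12_l))                 # small charged-lepton 12 rotation
V_nu_dag = R23(th23_nu) @ R12(np.pi / 4)    # maximal 12 mixing, s13^nu = 0
V = V_l @ V_nu_dag                          # V_MNS in the real case (P_l = 1)

# epsilon_12 = 1 - 2 sin^2(theta_12), extracted from the e-row of V
eps12 = (V[0, 0] ** 2 - V[0, 1] ** 2) / (V[0, 0] ** 2 + V[0, 1] ** 2)
s13 = abs(V[0, 2])
```

With this input one finds $`|ϵ_{12}|\approx 2c_{23}^\nu s_{12}^{\mathrm{}}`$ and $`s_{13}\approx s_{23}^\nu s_{12}^{\mathrm{}}`$, in line with the expansion above.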
Our next step is to express the parameters of the diagonalizing matrices in terms of the mass matrices. For the charged lepton sector, the expressions can be found in refs. \[20, 21\]. Typically, one finds $`s_{12}^{\mathrm{}}\sim (M_{\mathrm{}})_{12}/(M_{\mathrm{}})_{22}`$ and $`s_{13}^{\mathrm{}}\sim (M_{\mathrm{}})_{13}/(M_{\mathrm{}})_{33}`$. Here, we focus on the neutrino mass matrix with a pseudo-Dirac structure. If there are no exact relations between different entries in $`M_\nu `$, then the most general structure that is consistent with $`ϵ_{12}^\nu 1`$ and $`s_{13}^\nu 1`$ is
$$M_\nu =m\left(\begin{array}{ccc}y_{11}& Y_{12}& Y_{13}\\ Y_{12}& y_{22}& y_{23}\\ Y_{13}& y_{23}& Y_{33}\end{array}\right),$$
where
$$Y_{12}\sim 1,y_{ij}\ll 1.$$
As concerns $`Y_{13}`$ and $`Y_{33}`$, there are three different options:
$$\begin{array}{cc}\hfill (i)& Y_{13}\lesssim 1,Y_{33}\gg 1,\hfill \\ \hfill (ii)& Y_{13}\lesssim 1,Y_{33}\equiv y_{33}\ll 1,\hfill \\ \hfill (iii)& Y_{13}\equiv y_{13}\ll 1,Y_{33}\sim 1.\hfill \end{array}$$
(Explicit examples of models in the literature that realize these options are presented in section 6.) It is also convenient to define the matrix
$$\widehat{M}_\nu =R_{13}^T(\theta _{13}^\nu )R_{23}^T(\theta _{23}^\nu )M_\nu R_{23}(\theta _{23}^\nu )R_{13}(\theta _{13}^\nu ).$$
By definition, it is block diagonal. The requirement that $`ϵ_{12}^\nu 1`$ restricts the form of the (12) block:
$$\widehat{M}_\nu =m\left(\begin{array}{ccc}\delta _1& 1& 0\\ 1& \delta _2& 0\\ 0& 0& Y_3\end{array}\right);|\delta _1|,|\delta _2|\ll 1.$$
Both $`\mathrm{\Delta }m_{21}^2/m^2`$ and $`ϵ_{12}^\nu `$ depend only on $`\delta _1`$ and $`\delta _2`$:
$$\begin{array}{cc}\hfill \frac{\mathrm{\Delta }m_{21}^2}{m^2}=& 2|\delta _1^{*}+\delta _2|,\hfill \\ \hfill ϵ_{12}^\nu =& \frac{|\delta _2|^2-|\delta _1|^2}{2|\delta _1^{*}+\delta _2|}.\hfill \end{array}$$
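These pseudo-Dirac relations can be checked directly on a real $`2\times 2`$ toy matrix (illustrative values of $`\delta _i`$, not taken from the paper):

```python
import numpy as np

# Illustrative values with |delta_i| << 1.
d1, d2, m = 0.01, 0.03, 1.0
M = m * np.array([[d1, 1.0], [1.0, d2]])

w, v = np.linalg.eigh(M.T @ M)          # w ascending: (m1^2, m2^2)
dm2_over_m2 = (w[1] - w[0]) / m ** 2
# epsilon_12 = |U_e1|^2 - |U_e2|^2 for the nearly degenerate pair
eps12_nu = v[0, 0] ** 2 - v[0, 1] ** 2

# First-order predictions (real case):
pred_dm2 = 2 * abs(d1 + d2)
pred_eps = (d2 ** 2 - d1 ** 2) / (2 * abs(d1 + d2))
```

Both the mass-squared splitting and the deviation from maximal mixing agree with the first-order expressions to $`𝒪(\delta ^2)`$ accuracy.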
We now present, for the three cases of eq. (2.1), expressions for $`s_{23}^\nu `$, $`s_{13}^\nu `$, $`\delta _1`$ and $`\delta _2`$ to first order in the small parameters $`y_{ij}`$. For case (i), we take $`Y_{12}=1`$ and obtain:
$$\begin{array}{cc}\hfill s_{13}^\nu =& Y_{13}/Y_{33},s_{23}^\nu =(s_{13}^\nu Y_{12}+y_{23})/Y_{33},\hfill \\ \hfill \delta _1=& y_{11}-Y_{13}^2/Y_{33},\delta _2=y_{22}.\hfill \end{array}$$
For case (ii), we take $`c_{23}^\nu Y_{12}-s_{23}^\nu Y_{13}=1`$ and obtain:
$$\begin{array}{cc}\hfill \mathrm{tan}\theta _{23}^\nu =& Y_{13}/Y_{12},s_{13}^\nu =c_{23}^\nu s_{23}^\nu (y_{33}-y_{22})-((c_{23}^\nu )^2-(s_{23}^\nu )^2)y_{23},\hfill \\ \hfill \delta _1=& y_{11},\delta _2=(c_{23}^\nu )^2y_{22}+(s_{23}^\nu )^2y_{33}-2s_{23}^\nu c_{23}^\nu y_{23}.\hfill \end{array}$$
For case (iii), we take $`Y_{12}=1`$ and obtain:
$$\begin{array}{cc}\hfill s_{23}^\nu =& \frac{y_{23}Y_{33}+y_{13}Y_{12}}{Y_{33}^2-Y_{12}^2},s_{13}^\nu =\frac{y_{13}Y_{33}+y_{23}Y_{12}}{Y_{33}^2-Y_{12}^2},\hfill \\ \hfill \delta _1=& y_{11},\delta _2=y_{22}.\hfill \end{array}$$
We would like to emphasize several points related to the results derived above (some of the statements below were previously made in ref. in the context of a specific class of textures for the Dirac and Majorana mass matrices in the seesaw model):
(a) Eq. (2.1) implies that flavor models where $`ϵ_{12}^\nu `$ gives the dominant contribution to $`ϵ_{12}`$ can be strongly constrained by a measurement of $`ϵ_{12}`$. Since $`\delta _1`$ and $`\delta _2`$ depend on different entries of $`M_\nu `$, we expect no exact cancellations in their contribution to $`ϵ_{12}^\nu `$. Consequently, one will be able to use the measured size of $`ϵ_{12}`$ to estimate the size of the larger between $`|\delta _1|`$ and $`|\delta _2|`$, and the sign of $`ϵ_{12}`$ to tell which is larger.
(b) Eq. (2.1) implies that observable deviations from maximal mixing in vacuum oscillations, $`1-\mathrm{sin}^22\theta _{12}=ϵ_{12}^2\ne 0`$, can strongly constrain flavor models. For models with a small $`s_{23}^\nu `$, we have
$$ϵ_{12}^2\sim \mathrm{max}(\frac{\delta _2^2}{4},\frac{\delta _1^2}{4},4(s_{12}^{\mathrm{}})^2).$$
If vacuum oscillations show an observable deviation from maximal mixing, say, $`ϵ_{12}^2\sim 0.1`$, it would be difficult to explain it with a parametrically suppressed $`\delta _2`$, $`\delta _1`$ and $`s_{12}^{\mathrm{}}`$. The accidental factor of sixteen, however, between the $`s_{12}^{\mathrm{}}`$ and the $`\delta _i`$ contributions in eq. (2.1) favors $`s_{12}^{\mathrm{}}`$ as the major source for such a large effect.
(c) Eq. (2.1) reveals interesting relations between the mass hierarchy and the mixing. The parameters $`ϵ_{12}^\nu `$ and $`\mathrm{\Delta }m_{21}^2/m^2`$ are of the same order of magnitude. Therefore, within models where $`ϵ_{12}^\nu `$ gives the dominant contribution to $`ϵ_{12}`$, one will be able to use the measured values of $`ϵ_{12}`$ and $`\mathrm{\Delta }m_{21}^2`$ to estimate the mass scale $`m`$ of the pseudo-Dirac neutrino pair. If the contributions to $`ϵ_{12}`$ related to $`s_{12}^{\mathrm{}}`$ and/or to $`s_{13}^{\mathrm{}}`$ are larger than the contribution related to $`ϵ_{12}^\nu `$, then the relation between $`ϵ_{12}`$ and $`\mathrm{\Delta }m_{21}^2/m^2`$ is lost and, in particular, $`ϵ_{12}\gg \mathrm{\Delta }m_{21}^2/m^2`$ is possible. In any case, if there are no exact relations between entries of the lepton mass matrices, we expect
$$|ϵ_{12}|\gtrsim \mathrm{\Delta }m_{21}^2/m^2.$$
This relation can be used in two ways. First, measurements of $`|ϵ_{12}|`$ and of $`\mathrm{\Delta }m_{21}^2`$ would give a lower bound on $`m`$. Second, in our framework we have $`m^2<\mathrm{\Delta }m_{\mathrm{atm}}^2`$ and therefore we expect
$$|ϵ_{12}|\gtrsim \frac{\mathrm{\Delta }m_{\mathrm{sun}}^2}{\mathrm{\Delta }m_{\mathrm{atm}}^2}.$$
This constraint is particularly powerful if the LMA solution (see eq. (1.1)) is realized in Nature since then $`\mathrm{\Delta }m_{\mathrm{sun}}^2/\mathrm{\Delta }m_{\mathrm{atm}}^2\gtrsim 0.01`$.
3. Radiative Corrections
We consider the effect of radiative corrections on neutrino mass matrices which at a high energy scale $`\mathrm{\Lambda }`$ have the pseudo-Dirac structure (2.1). In particular, we ask whether at some low energy scale $`\mu `$ that is relevant to the solar neutrinos, a significant deviation from maximal mixing could be induced by renormalization group evolution (RGE). We take as our framework the minimal supersymmetric Standard Model. (Our results apply also to the Standard Model, but there the smallness of the charged lepton Yukawa couplings guarantees that the radiative corrections are negligible for our purposes.)
The important parameter for our purposes is related to the Yukawa coupling of the tau lepton:
$$ϵ_\tau \equiv \frac{g_\tau ^2}{(4\pi )^2}(1+\mathrm{tan}^2\beta )\mathrm{ln}\frac{\mathrm{\Lambda }}{\mu }.$$
Here $`g_\tau (1+\mathrm{tan}^2\beta )^{1/2}=m_\tau /\varphi _d`$ is the tau Yukawa coupling in the supersymmetric standard model. The $`ϵ_\tau `$ parameter could be of $`𝒪(0.01)`$ for large $`\mathrm{tan}\beta `$. (Within the SM, one has to replace $`(1+\mathrm{tan}^2\beta )`$ with $`1/2`$, which gives $`ϵ_\tau 10^6`$.) Define a matrix
$$I_\tau =\mathrm{diag}(1,1,1+ϵ_\tau ).$$
We denote the neutrino mass scale at the high scale $`\mathrm{\Lambda }`$ by $`M_\nu ^{\mathrm{HE}}`$. Then, up to universal corrections and negligibly small effects of the muon and electron Yukawa couplings, the renormalized neutrino mass matrix at a scale $`\mu `$ below $`\mathrm{\Lambda }`$ is given in logarithmic approximation by \[23--31\]
$$M_\nu =I_\tau M_\nu ^{\mathrm{HE}}I_\tau .$$
In this section, the parameters that relate to $`M_\nu ^{\mathrm{HE}}`$ and to its diagonalization are denoted, as before, by $`s_{ij}^\nu `$ and $`\delta _i`$. They can be expressed in terms of the entries of $`M_\nu ^{\mathrm{HE}}`$ according to equations (2.1), (2.1) and (2.1). In other words, we have
$$M_\nu ^{\mathrm{HE}}=mR_{23}(\theta _{23})R_{13}(\theta _{13})\left(\begin{array}{ccc}\delta _1& 1& 0\\ 1& \delta _2& 0\\ 0& 0& Y_3\end{array}\right)R_{13}^T(\theta _{13})R_{23}^T(\theta _{23}).$$
The parameters that relate to $`M_\nu `$ and to its diagonalization will be denoted by $`\widehat{s}_{ij}`$ and $`\widehat{\delta }_i`$. The difference between them and the corresponding $`s_{ij}`$ and $`\delta _i`$ parameters vanishes in the limit $`ϵ_\tau \to 0`$. The main question that we would like to investigate is whether the differences $`\widehat{\delta }_{1,2}-\delta _{1,2}`$ are of $`𝒪(ϵ_\tau )`$ or much smaller. In the latter case the radiative corrections can be safely neglected.
After a cumbersome but straightforward calculation, we find the following leading corrections:
$$\begin{array}{cc}\hfill \widehat{s}_{23}^\nu -s_{23}^\nu =& \frac{1+Y_3^2}{1-Y_3^2}(c_{23}^\nu )^2s_{23}^\nu ϵ_\tau +𝒪(s_{13}^\nu ϵ_\tau ),\hfill \\ \hfill \widehat{s}_{13}^\nu -s_{13}^\nu =& \frac{2Y_3}{1-Y_3^2}c_{23}^\nu s_{23}^\nu ϵ_\tau +𝒪(s_{13}^\nu ϵ_\tau ),\hfill \\ \hfill \widehat{\delta }_1-\delta _1=& \frac{4Y_3^2}{Y_3^2-1}s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau +𝒪(ϵ_\tau ^2),\hfill \\ \hfill \widehat{\delta }_2-\delta _2=& -\frac{4}{Y_3^2-1}s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau +𝒪(ϵ_\tau ^2).\hfill \end{array}$$
From the expressions for $`\widehat{\delta }_1`$ and $`\widehat{\delta }_2`$, we obtain:
$$\begin{array}{cc}\hfill \frac{\mathrm{\Delta }m_{21}^2}{m^2}=& 2|\delta _1+\delta _2+4s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau |,\hfill \\ \hfill \widehat{ϵ}_{12}^\nu =& \frac{\delta _2-\delta _1}{2}\frac{\delta _2+\delta _1+4s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau }{|\delta _1+\delta _2+4s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau |}+2\frac{1+Y_3^2}{1-Y_3^2}\frac{(\delta _2+\delta _1)s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau }{|\delta _1+\delta _2+4s_{13}^\nu c_{23}^\nu s_{23}^\nu ϵ_\tau |}.\hfill \end{array}$$
Eq. (3.1) should be compared to eq. (2.1).
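The claimed suppression of the RGE-induced shift can be illustrated numerically. In the sketch below all input values ($`\delta _i`$, $`Y_3`$, angles, $`ϵ_\tau `$) are illustrative choices, not numbers from the paper:

```python
import numpy as np

def R13(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

# Illustrative inputs (not from the paper).
d1, d2, Y3 = 0.02, 0.05, 0.5
th23, th13, eps_tau = 0.6, 0.05, 0.01

core = np.array([[d1, 1.0, 0.0], [1.0, d2, 0.0], [0.0, 0.0, Y3]])
M_he = R23(th23) @ R13(th13) @ core @ R13(th13).T @ R23(th23).T
I_tau = np.diag([1.0, 1.0, 1.0 + eps_tau])
M_low = I_tau @ M_he @ I_tau                 # renormalized mass matrix

def eps12_of(M):
    # 1 - 2 sin^2(theta_12) for the nearly degenerate pair
    # (with Y3 < 1 the pair are the two heaviest states).
    w, v = np.linalg.eigh(M.T @ M)           # ascending eigenvalues
    a, b = v[0, 1] ** 2, v[0, 2] ** 2        # e-row overlaps with the pair
    return (a - b) / (a + b)

shift = eps12_of(M_low) - eps12_of(M_he)     # RGE-induced change
```

The shift comes out parametrically smaller than $`ϵ_\tau `$ itself, consistent with the $`ϵ_\tau s_{13}`$ suppression discussed below.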
We would like to emphasize the following points concerning equations (3.1) and (3.1):
(a) The change in $`s_{23}^\nu `$ is small, $`(\widehat{s}_{23}^\nu -s_{23}^\nu )/s_{23}^\nu =𝒪(ϵ_\tau )`$.
(b) The difference $`\widehat{s}_{13}^\nu -s_{13}^\nu `$ is suppressed beyond the naive estimate of $`ϵ_\tau `$: for large (small) $`Y_3`$ it is further suppressed by $`1/Y_3`$ ($`Y_3`$) while for $`Y_31`$ it is suppressed by the small $`s_{23}^\nu `$. Effectively then we have $`(\widehat{s}_{13}^\nu -s_{13}^\nu )/s_{13}^\nu =𝒪(ϵ_\tau )`$.
(c) Our main result is that the RGE-induced deviation from maximal mixing and mass splitting are suppressed by $`ϵ_\tau s_{13}`$. A combination of the CHOOZ result and the SuperKamiokande results on atmospheric neutrinos implies that $`s_{13}`$ is small \[34, 35\]. Consequently, the $`ϵ_\tau s_{13}`$ suppression factor is constrained to be below $`𝒪(10^3)`$. In the limit $`s_{13}=0`$, the leading effects are of order $`ϵ_\tau ^2`$.
To summarize the results of this section: We find that the contribution from radiative corrections to the deviation from maximal mixing is suppressed beyond the smallness of $`ϵ_\tau `$. The leading corrections to $`ϵ_{12}^\nu `$, $`s_{13}^\nu `$ and $`\mathrm{\Delta }m_{21}^2/m^2`$ are $`𝒪[ϵ_\tau \times \mathrm{max}(s_{13},ϵ_\tau )]`$. Model independently, the size of the effect is not larger than $`𝒪(10^3)`$. (This correction could be important for the mass splitting in the VAC<sub>L</sub> solution.) For the deviation from maximal mixing, this correction is too small to be observed.
4. Non-Canonical Kinetic Terms
Models with horizontal symmetries predict the structure of the mass matrices in the basis where the horizontal charges are well defined. This preferred interaction basis can, in general, be different from the basis where the kinetic terms are canonically normalized \[21, 36, 37\]. In particular, when heavy degrees of freedom related to flavor physics are integrated out, the kinetic terms for the left-handed lepton doublets $`L_i`$ ($`i=1,2,3`$) can be modified to
$$R_{ij}L_i^{}\gamma ^\mu _\mu L_j,$$
where $`R`$ is a hermitian matrix. By rescaling of the $`L_i`$ fields we can bring the diagonal entries of $`R`$ to equal unity,
$$R_{ii}=1,|R_{ij}|\ll 1(i\ne j).$$
One can always find a hermitian matrix $`K`$ that brings the kinetic terms back to canonical normalization :
$$K^{}RK=\mathrm{diag}(1,1,1).$$
If the mass matrix in the basis where the kinetic terms are of the non-canonical form (4.1) is $`M_\nu ^{\mathrm{NC}}`$, then the true mass matrix, that is the matrix in the basis with canonical kinetic terms, is given by
$$M_\nu =K^TM_\nu ^{\mathrm{NC}}K.$$
$`K`$ has the form
$$K=\left(\begin{array}{ccc}1& k_{12}& k_{13}\\ k_{12}^{*}& 1& k_{23}\\ k_{13}^{*}& k_{23}^{*}& 1\end{array}\right).$$
For simplicity, we again neglect CP violation and take $`R`$ and, consequently $`K`$, to be real.
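For a real symmetric $`R`$, one explicit hermitian choice is $`K=R^{-1/2}`$ (this particular choice is an assumption of the sketch below; any other admissible $`K`$ differs by an orthogonal rotation). A short check with an illustrative $`2\times 2`$ kinetic matrix:

```python
import numpy as np

# Illustrative kinetic-term matrix with a small off-diagonal entry.
R = np.array([[1.0, 0.10], [0.10, 1.0]])

# For real symmetric R, K = R^(-1/2) is symmetric and satisfies K^T R K = 1.
w, v = np.linalg.eigh(R)
K = v @ np.diag(w ** -0.5) @ v.T

canonical = K.T @ R @ K        # should be the 2x2 identity
k12 = K[0, 1]                  # ~ -R[0,1]/2 at first order
```

The off-diagonal entry of $`K`$ is of the same size as that of $`R`$, which is why the $`k_{ij}`$ corrections below track the non-canonical kinetic terms linearly.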
We are interested in finding the effects of $`k_{ij}0`$ on the deviation from maximal mixing and on the mass splitting. Our analysis follows similar lines to our study of radiative corrections in the previous section. We take
$$M_\nu ^{\mathrm{NC}}=mR_{23}(\theta _{23}^\nu )R_{13}(\theta _{13}^\nu )\left(\begin{array}{ccc}\delta _1& 1& 0\\ 1& \delta _2& 0\\ 0& 0& Y_3\end{array}\right)R_{13}^T(\theta _{13}^\nu )R_{23}^T(\theta _{23}^\nu ).$$
The parameters that relate to the matrix $`M_\nu `$ of eq. (4.1) are denoted by $`\widehat{s}_{ij}^\nu `$ and $`\widehat{\delta }_i`$.
For the differences between $`\widehat{s}_{ij}^\nu ,\widehat{\delta }_i`$ and the corresponding $`s_{ij}^\nu ,\delta _i`$, we find:
$$\begin{array}{cc}\hfill \widehat{s}_{23}^\nu -s_{23}^\nu =& \frac{c_{23}^\nu }{Y_3^2-1}\left[(Y_3^2+1)k_{23}((c_{23}^\nu )^2-(s_{23}^\nu )^2)+2Y_3(k_{12}s_{23}^\nu +k_{13}c_{23}^\nu )\right],\hfill \\ \hfill \widehat{s}_{13}^\nu -s_{13}^\nu =& \frac{1}{Y_3^2-1}\left[(Y_3^2+1)(k_{12}s_{23}^\nu +k_{13}c_{23}^\nu )+2Y_3k_{23}((c_{23}^\nu )^2-(s_{23}^\nu )^2)\right],\hfill \\ \hfill \widehat{\delta }_{2,1}-\delta _{2,1}=& 2(k_{12}c_{23}^\nu -k_{13}s_{23}^\nu ).\hfill \end{array}$$
For the mass difference and deviation from maximal mixing, we obtain
$$\begin{array}{cc}\hfill \frac{\mathrm{\Delta }m_{21}^2}{m^2}=& 2|\delta _1+\delta _2+4(k_{12}c_{23}^\nu -k_{13}s_{23}^\nu )|,\hfill \\ \hfill \widehat{ϵ}_{12}^\nu =& \frac{\delta _2-\delta _1}{2}\frac{\delta _1+\delta _2+4(k_{12}c_{23}^\nu -k_{13}s_{23}^\nu )}{|\delta _1+\delta _2+4(k_{12}c_{23}^\nu -k_{13}s_{23}^\nu )|}.\hfill \end{array}$$
Terms of order $`s_{13}^\nu k_{ij}`$ contribute with different signs to $`\widehat{\delta }_1`$ and $`\widehat{\delta }_2`$ and modify $`\widehat{ϵ}_{12}^\nu `$ in a qualitatively different way. Quantitatively, however, these effects are negligible.
Before we analyze the consequences of eqs. (4.1) and (4.1), we would like to make two comments regarding the size of $`k_{ij}`$ in Froggatt-Nielsen type models:
1. In most models of horizontal symmetries, we have
$$k_{ij}<s_{ij}^{\mathrm{}}.$$
2. If two fields carry the same horizontal quantum numbers, $`H(L_i)=H(L_j)`$, we can always define these fields in such a way that $`k_{ij}=0`$.
We would like to emphasize the following points:
(a) The changes in $`s_{23}^\nu `$ and $`s_{13}^\nu `$ are of $`𝒪(k_{ij})`$. The effect can be significant for a small mixing angle. In particular, in the supersymmetric framework, if $`s_{ij}^\nu `$ vanishes because of holomorphy , we expect such zeros to be lifted by these corrections. The mixing angle is, however, still parametrically suppressed.
(b) The leading effect on $`ϵ_{12}^\nu `$ does not change its size but, if $`4(k_{12}c_{23}^\nu -k_{13}s_{23}^\nu )`$ is not much smaller than max($`\delta _1,\delta _2`$), can affect its sign. (With CP violating phases, also the size of $`ϵ_{12}^\nu `$ is affected, but in the Froggatt-Nielsen class of models the parametric suppression remains the same.)
(c) In principle, the $`k_{ij}`$-related corrections could enhance $`\mathrm{\Delta }m_{21}^2/m^2`$ compared to $`ϵ_{12}^\nu `$ and therefore avoid (2.1). However, in models where the constraint (4.1) holds, (2.1) is valid.
We conclude that, in general, in models where the kinetic terms are normalized according to (4.1), sign($`ϵ_{12}`$) does not give a useful constraint.
5. An Effective Two Generation Framework
In previous sections we took all the parameters in the Lagrangian to be real. To understand some of the effects of phases, we analyze a two generation model allowing for the most general phase structure.
We parametrize the two generation mixing matrix by
$$V=\left(\begin{array}{cc}c& se^{i\beta }\\ -s& ce^{i\beta }\end{array}\right),$$
where $`c\equiv \mathrm{cos}\theta _{12}`$, $`s\equiv \mathrm{sin}\theta _{12}`$ and the phase $`\beta `$ is physical but does not play a role in oscillation experiments. We parametrize the diagonalizing matrices $`V_{\mathrm{}}`$ and $`V_\nu `$ in the following way:
$$V_{\mathrm{}}=\left(\begin{array}{cc}c_{\mathrm{}}& s_{\mathrm{}}e^{i\beta _{\mathrm{}}}\\ -s_{\mathrm{}}& c_{\mathrm{}}e^{i\beta _{\mathrm{}}}\end{array}\right),V_\nu =\left(\begin{array}{cc}c_\nu & s_\nu e^{i\beta _\nu }\\ -s_\nu & c_\nu e^{i\beta _\nu }\end{array}\right).$$
Using (2.1), we can express the size of the mixing angle in terms of the four parameters $`s_\nu `$, $`s_{\mathrm{}}`$, $`\beta _\nu `$ and $`\beta _{\mathrm{}}`$:
$$s^2=c_{\mathrm{}}^2s_\nu ^2+s_{\mathrm{}}^2c_\nu ^2-2\mathrm{Re}(c_{\mathrm{}}s_{\mathrm{}}c_\nu s_\nu e^{i(\beta _{\mathrm{}}-\beta _\nu )}).$$
The charged lepton mass matrix can be written as
$$M_{\mathrm{}}=\left(\begin{array}{cc}m_{11}& m_{12}\\ m_{21}& m_{22}\end{array}\right).$$
Our assumption that $`s_{\mathrm{}}`$ is small requires that a certain combination of entries is small:
$$|s_{\mathrm{}}|\simeq |\delta _{\mathrm{}}|\ll 1,$$
where
$$\delta _{\mathrm{}}\equiv \frac{m_{11}m_{21}^{*}+m_{12}m_{22}^{*}}{|m_{21}|^2+|m_{22}|^2-|m_{12}|^2-|m_{11}|^2}.$$
The neutrino mass matrix is given by
$$M_\nu =m\left(\begin{array}{cc}\delta _1& 1\\ 1& \delta _2\end{array}\right),|\delta _i|\ll 1.$$
We include the effects of radiative corrections, parametrized by
$$I_\mu =\mathrm{diag}(1,1+ϵ_\mu );ϵ_\mu \equiv \frac{g_\mu ^2}{(4\pi )^2}(1+\mathrm{tan}^2\beta )\mathrm{ln}\frac{\mathrm{\Lambda }}{\mu },$$
and of non-canonical kinetic terms, parametrized by
$$K=\left(\begin{array}{cc}1& k\\ k^{*}& 1\end{array}\right).$$
We find:
$$\begin{array}{cc}\hfill \widehat{\delta }_1=& \delta _1+2k^{*},\hfill \\ \hfill \widehat{\delta }_2=& \delta _2+2k.\hfill \end{array}$$
We can now express the mass splitting $`\mathrm{\Delta }m^2/m^2`$ and the deviation from maximal mixing $`ϵ_{12}`$ in terms of the three parameters $`\delta _{\mathrm{}}`$ of eq. (5.1) and $`\widehat{\delta }_1`$ and $`\widehat{\delta }_2`$ of eq. (5.1):
$$\begin{array}{cc}\hfill \frac{\mathrm{\Delta }m^2}{m^2}=& 2|\widehat{\delta }_1^{*}+\widehat{\delta }_2|,\hfill \\ \hfill ϵ_{12}=& \frac{|\widehat{\delta }_2|^2-|\widehat{\delta }_1|^2}{2|\widehat{\delta }_1^{*}+\widehat{\delta }_2|}+2\mathrm{Re}\left[\delta _{\mathrm{}}\left(\frac{|\widehat{\delta }_1^{*}+\widehat{\delta }_2|}{\widehat{\delta }_1^{*}+\widehat{\delta }_2}\right)\right].\hfill \end{array}$$
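The charged-lepton-independent pieces of these expressions can be checked numerically with fully complex $`\delta _i`$ (illustrative values below, not from the paper; the charged lepton sector is switched off):

```python
import numpy as np

# Illustrative complex deltas with |delta_i| << 1.
d1 = 0.02 * np.exp(0.7j)
d2 = 0.05 * np.exp(-0.3j)
M = np.array([[d1, 1.0], [1.0, d2]])     # Majorana mass matrix, m = 1

H = M.conj().T @ M                       # hermitian; eigenvalues are m_i^2
w, v = np.linalg.eigh(H)                 # ascending
dm2 = w[1] - w[0]                        # Delta m^2 / m^2
eps12 = abs(v[0, 0]) ** 2 - abs(v[0, 1]) ** 2

pred_dm2 = 2 * abs(np.conj(d1) + d2)
pred_eps = (abs(d2) ** 2 - abs(d1) ** 2) / (2 * abs(np.conj(d1) + d2))
```

Both the splitting and the deviation from maximal mixing reproduce the first-order formulas to $`𝒪(\delta ^2)`$, including the relative phase of $`\delta _1`$ and $`\delta _2`$ entering through $`|\delta _1^{*}+\delta _2|`$.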
Eq. (5.1) allows us to make (or to re-emphasize) the following points:
1. We again observe the accidental factor of four between the $`\delta _{\mathrm{}}`$ contribution and the $`\widehat{\delta }_i`$ contribution to $`ϵ_{12}`$. A large measured value of $`ϵ_{12}`$ might be a hint then to the size of $`\delta _{\mathrm{}}`$.
2. The usefulness of an experimental determination of sign($`ϵ_{12}`$) depends on the relative size of the small parameters. If $`|\delta _1|,|\delta _2|\gg |\delta _{\mathrm{}}|,|k|`$, then sign($`ϵ_{12}`$) depends on the relative size of $`|\delta _1|`$ and $`|\delta _2|`$, which is predicted by the models, and a useful constraint can be derived. On the other hand, if $`|\delta _{\mathrm{}}|`$ and/or $`|k|`$ are not smaller than both $`|\delta _1|`$ and $`|\delta _2|`$, then sign($`ϵ_{12}`$) depends on the relative phases between $`\delta _{\mathrm{}}`$ or $`k`$ and $`(\delta _1^{*}+\delta _2)`$. Since generic models of approximate horizontal symmetries do not predict the phases, we cannot derive any useful constraint.
6. Abelian horizontal symmetries
The most natural application of our results is in the framework of approximate Abelian horizontal symmetries. To understand the principles of this framework, let us take the simplest example of a horizontal symmetry, $`H=U(1)`$, that is broken by a single small parameter. We denote the breaking parameter by $`\lambda `$ and assign to it a horizontal charge $`-1`$. Wherever numerical values are relevant, we take $`\lambda =0.2`$ (so that it is of the order of the Cabibbo angle). Within a supersymmetric framework, the following selection rules apply:
a. Terms in the superpotential that carry an integer $`H`$-charge $`n\ge 0`$ are suppressed by $`\lambda ^n`$. Terms with $`n<0`$ vanish by holomorphy.
b. Terms in the Kähler potential that carry an integer $`H`$-charge $`n`$ are suppressed by $`\lambda ^{|n|}`$.
We are particularly interested in the leptonic Yukawa terms:
$$\mathcal{L}_Y=Y_{ij}^{\mathrm{}}L_i\overline{\mathrm{}}_j\varphi _d+\frac{Y_{ij}^\nu }{M}L_iL_j\varphi _u\varphi _u+\mathrm{h}.\mathrm{c}.,$$
where $`i=1,2,3`$ is a generation index, $`L_i`$ are lepton doublet fields, $`\overline{\mathrm{}}_j`$ are lepton charged singlet fields, and $`\varphi _u`$ and $`\varphi _d`$ are the two Higgs fields. The couplings $`Y_{ij}`$ are dimensionless Yukawa couplings and $`M`$ is a high energy scale. The Yukawa terms come from the superpotential. If the sum of the horizontal charges in a particular term is a positive integer, then the resulting mass term is suppressed as follows:
$$\begin{array}{cc}\hfill (M_{\mathrm{}})_{ij}\sim & \varphi _d\lambda ^{H(L_i)+H(\overline{\mathrm{}}_j)+H(\varphi _d)},\hfill \\ \hfill (M_\nu )_{ij}\sim & \frac{\varphi _u^2}{M}\lambda ^{H(L_i)+H(L_j)+2H(\varphi _u)}.\hfill \end{array}$$
Otherwise, i.e. if the sum of charges is negative or non-integer, the Yukawa coupling vanishes. We use the $`\sim `$ sign to emphasize that there is an unknown, independent, order one coefficient for each term (except for the relation $`(M_\nu )_{ij}=(M_\nu )_{ji}`$).
To understand the possible implications of close-to-maximal mixing on theoretical model building, we imagine that future measurements will give
$$ϵ_{12}\sim \lambda .$$
We examine the consequences of such a constraint on three classes of models in the literature. We find that two classes of models will be excluded, while in the other a unique model is singled out that is consistent with all the requirements.
6.1. Holomorphic zeros
Option (i) of eq. (2.1) has been realized in the framework of supersymmetric Abelian horizontal symmetries, where holomorphic zeros can induce a large 23 mixing together with large 23 mass hierarchy . The horizontal symmetry is $`U(1)_1\times U(1)_2`$ with breaking parameters
$$\lambda _1(-1,0),\lambda _2(0,-1);\lambda _1\sim \lambda _2\sim \lambda =0.2.$$
We impose four requirements on the model: Large 23 mixing, $`s_{23}\sim 1`$; Large hierarchy, $`m_2/m_3\ll 1`$; $`\nu _1`$ and $`\nu _2`$ form a pseudo-Dirac neutrino, $`\mathrm{\Delta }m_{12}^2\ll m^2`$; A deviation from maximal mixing given by $`ϵ_{12}\sim \lambda `$ (this is the hypothetical constraint from solar neutrino measurements). We find that there is a single set of horizontal charge assignments to the Higgs and lepton doublets that is consistent with all four requirements:
$$\varphi _u(0,0),\varphi _d(0,0),L_1(1,0),L_2(-1,1),L_3(0,0).$$
(The choice is unique up to trivial shifts by hypercharge, which is an exact symmetry of the model, by a Peccei-Quinn symmetry that is an accidental symmetry of the Yukawa sector, and by lepton number, which only changes the overall neutrino mass scale and can be absorbed in the parameter $`M`$, and up to the trivial exchange $`U(1)_1\leftrightarrow U(1)_2`$.) We find then a unique structure for $`M_\nu `$:
$$M_\nu \sim \frac{\varphi _u^2}{M}\left(\begin{array}{ccc}\lambda ^2& \lambda & \lambda \\ \lambda & 0& 0\\ \lambda & 0& 1\end{array}\right).$$
This matrix is of the form (2.1) with option (i) of eq. (2.1). Therefore, eqs. (2.1) can be applied. To have $`s_{23}\sim 1`$ and large enough $`ϵ_{12}`$, together with acceptable charged lepton mass hierarchy, we can choose, for example,
$$\overline{\mathrm{}}_1(3,4),\overline{\mathrm{}}_2(3,2),\overline{\mathrm{}}_3(3,0),$$
which gives
$$M_{\mathrm{}}\sim \varphi _d\left(\begin{array}{ccc}\lambda ^8& \lambda ^6& \lambda ^4\\ \lambda ^7& \lambda ^5& \lambda ^3\\ \lambda ^7& \lambda ^5& \lambda ^3\end{array}\right).$$
The parametric suppression of the physical parameters is then as follows:
$$m_\tau /\varphi _d\sim \lambda ^3,m_\mu /m_\tau \sim \lambda ^2,m_e/m_\mu \sim \lambda ^3,$$
$$\mathrm{\Delta }m_{21}^2/\mathrm{\Delta }m_{23}^2\sim \lambda ^3,\mathrm{\Delta }m_{12}^2/m^2\sim \lambda ,$$
$$s_{23}\sim 1,s_{13}\sim \lambda ,ϵ_{12}\sim \lambda .$$
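The charge bookkeeping behind these textures can be verified mechanically. In the sketch below the $`L_2`$ charge is taken as $`(-1,1)`$, the sign choice that reproduces the displayed holomorphic zeros; a negative total charge under either $`U(1)`$ factor is recorded as a vanishing entry:

```python
# Suppression powers of lambda from the horizontal charges of the model above.
L = [(1, 0), (-1, 1), (0, 0)]          # lepton doublets L_1, L_2, L_3
lbar = [(3, 4), (3, 2), (3, 0)]        # singlets lbar_1, lbar_2, lbar_3
phi = (0, 0)                           # both Higgs charges vanish

def power(*charges):
    n1 = sum(c[0] for c in charges)
    n2 = sum(c[1] for c in charges)
    # lambda_1 ~ lambda_2 ~ lambda, so the suppression is lambda^(n1 + n2);
    # a negative charge under either U(1) gives a holomorphic zero (None).
    return None if n1 < 0 or n2 < 0 else n1 + n2

M_nu = [[power(L[i], L[j], phi, phi) for j in range(3)] for i in range(3)]
M_l = [[power(L[i], lbar[j], phi) for j in range(3)] for i in range(3)]
```

Running this reproduces the $`\lambda `$ powers of both displayed matrices, including the two holomorphic zeros in the neutrino sector.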
The corrections due to a non-canonical kinetic terms,
$$k_{23}\sim \lambda ^2,k_{12}\sim \lambda ^3,k_{13}\sim \lambda ,$$
leave eqs. (6.1) and (6.1) unchanged.
Within the framework of Abelian horizontal symmetries, it is particularly interesting to find predictions for relations among the physical parameters that are independent of a specific choice of horizontal charges. In the quark sector, there is a single such relation, $`|V_{us}|\sim |V_{ub}/V_{cb}|`$. In the lepton sector, when singlet neutrinos play no role, there are three such relations. For the class of models where holomorphic zeros give a pseudo-Dirac structure in the 12 sector but do not affect the parameters that are related to the third generation (the model presented in this subsection belongs to this class), we have the following relations:
$$\begin{array}{cc}\hfill ϵ_{12}\sim & s_{13}/s_{23},\hfill \\ \hfill m/m_3\sim & s_{13}s_{23},\hfill \\ \hfill \mathrm{\Delta }m_{12}^2/m^2\sim & s_{13}/s_{23}.\hfill \end{array}$$
The first of these relations, which involves only mixing angles, can be tested if oscillation experiments measure $`ϵ_{12}`$ and $`s_{13}`$. The last two relations can be combined to give another testable relation:
$$\mathrm{\Delta }m_{\mathrm{sun}}^2/\mathrm{\Delta }m_{\mathrm{atm}}^2\sim s_{13}^3s_{23}.$$
6.2. $`L_e-L_\mu -L_\tau `$ symmetry
Option (ii) of eq. (2.1) can be realized in a particularly interesting framework of approximate $`L_e-L_\mu -L_\tau `$ symmetry \[40--45\]. The symmetry is broken by small parameters, $`\epsilon _+`$ and $`\epsilon _-`$ of charges $`+2`$ and $`-2`$, respectively. The neutrino mass matrix has the following form:
$$M_\nu \sim \frac{\varphi _u^2}{M}\left(\begin{array}{ccc}\epsilon _-& 1& 1\\ 1& \epsilon _+& \epsilon _+\\ 1& \epsilon _+& \epsilon _+\end{array}\right).$$
This matrix is of the form (2.1) with option (ii) of eq. (2.1). Therefore, eqs. (2.1) can be applied. We find:
$$m_{1,2}=m\left(1\pm 𝒪[\mathrm{max}(\epsilon _+,\epsilon _-)]\right),m_3=m𝒪(\epsilon _+),$$
$$s_{23}^\nu =𝒪(1),s_{13}^\nu =𝒪(\epsilon _+),ϵ_{12}^\nu =𝒪[\mathrm{max}(\epsilon _+,\epsilon _-)].$$
The charged lepton mass matrix has the form :
$$M_{\mathrm{}}\sim \varphi _d\left(\begin{array}{ccc}\lambda _e& \lambda _\mu \epsilon _-& \lambda _\tau \epsilon _-\\ \lambda _e\epsilon _+& \lambda _\mu & \lambda _\tau \\ \lambda _e\epsilon _+& \lambda _\mu & \lambda _\tau \end{array}\right),$$
where the $`\lambda _i`$ allow for a generic approximate symmetry that acts on the SU(2)-singlet charged leptons. Such a symmetry, however, does not affect the relevant diagonalizing angles:
$$s_{23}^{\mathrm{}}=𝒪(1),s_{13}^{\mathrm{}}=𝒪(\epsilon _-),s_{12}^{\mathrm{}}=𝒪(\epsilon _-).$$
Eqs. (6.1) and (6.1) lead to the following estimates of the physical mixing angles:
$$s_{23}=𝒪(1),s_{13}=𝒪[\mathrm{max}(\epsilon _+,\epsilon _-)],ϵ_{12}=𝒪[\mathrm{max}(\epsilon _+,\epsilon _-)].$$
We can also estimate the corrections due to non-canonical kinetic terms:
$$k_{23}=0,k_{12},k_{13}=𝒪[\mathrm{max}(\epsilon _+,\epsilon _-)].$$
This leaves the parametric suppression of the physical parameters unchanged.
From eqs. (6.1) and (6.1) we obtain:
$$ϵ_{12}=𝒪(\mathrm{\Delta }m_{\mathrm{sun}}^2/\mathrm{\Delta }m_{\mathrm{atm}}^2).$$
Measurements of $`\mathrm{\Delta }m_{ij}^2`$ and of $`ϵ_{12}`$ can then lead to the exclusion of this model. For example, if $`\mathrm{\Delta }m_{\mathrm{sun}}^2/\mathrm{\Delta }m_{\mathrm{atm}}^2\sim 10^{-2}`$ and $`ϵ_{12}\sim 0.1`$ are established, the model will be excluded.
6.3. Models with two breaking parameters
Option (iii) of eq. (2.1), that is hierarchy of mass splittings without hierarchy of masses, has been realized in the framework of non-anomalous horizontal $`U(1)_H`$ symmetry . The symmetry is broken by two small parameters of opposite charges and equal magnitudes:
$$H(\lambda )=+1,H(\overline{\lambda })=-1;\lambda =\overline{\lambda }=0.2.$$
Then, the following selection rule applies: terms in the superpotential or in the Kähler potential that carry an (integer) $`H`$-charge $`n`$ are suppressed by $`\lambda ^{|n|}`$. The three neutrino masses are of the same order of magnitude, but the mass splitting between $`\nu _1`$ and $`\nu _2`$ is small if we have
$$\begin{array}{cc}\hfill |H(L_1)+H(L_2)|=& 2|H(L_3)|,\hfill \\ \hfill |H(L_1)+H(L_2)|<& 2|H(L_1)|,2|H(L_2)|.\hfill \end{array}$$
From eq. (2.1) we learn that
$$ϵ_{12}^\nu \sim \mathrm{max}(\lambda ^{2|H(L_1)|-|H(L_1)+H(L_2)|},\lambda ^{2|H(L_2)|-|H(L_1)+H(L_2)|}).$$
A typical contribution to $`s_{12}^{\mathrm{}}`$ is given by
$$s_{12}^{\mathrm{}}\sim \lambda ^{|H(L_1)+H(\overline{\mathrm{}}_2)|-|H(L_2)+H(\overline{\mathrm{}}_2)|}.$$
The important point here is that the first condition in eq. (6.1) requires that $`H(L_1)`$ and $`H(L_2)`$ are either both even or both odd. Eqs. (6.1) and (6.1) give then an upper bound on $`ϵ_{12}`$,
$$ϵ_{12}\lesssim \lambda ^2.$$
We conclude that if experiments find $`ϵ_{12}\sim \lambda `$, this type of model will be strongly disfavored.
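The parity argument behind this bound can be checked by a brute-force scan over a small, illustrative window of integer charges:

```python
# Scan integer charges obeying the two conditions above and verify that every
# lambda power feeding eps12 is even, hence eps12 <~ lambda^2.
from itertools import product

count, even_only = 0, True
for h1, h2, h3 in product(range(-4, 5), repeat=3):
    s = abs(h1 + h2)
    if s != 2 * abs(h3) or s >= 2 * abs(h1) or s >= 2 * abs(h2):
        continue                               # conditions of eq. (6.1) fail
    count += 1
    p1 = 2 * abs(h1) - s                       # lambda power in delta_1
    p2 = 2 * abs(h2) - s                       # lambda power in delta_2
    # charged-lepton 12 rotation, for any singlet charge hl2:
    pl = [abs(abs(h1 + hl2) - abs(h2 + hl2)) for hl2 in range(-4, 5)]
    if p1 % 2 or p2 % 2 or any(p % 2 for p in pl):
        even_only = False
```

Since the first condition forces $`H(L_1)`$ and $`H(L_2)`$ to have equal parity, all these exponents come out even for every admissible charge assignment in the window.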
6.4. Alignment
We would like to make a comment on a particular class of supersymmetric models, where there is no degeneracy among the sleptons and the only mechanism to suppress the supersymmetric contributions to lepton flavor changing decays is alignment \[47,21,39\], that is, small mixing angles in the neutralino-lepton-slepton couplings. In such models, there is a strong constraint on $`s_{12}^{\ell }`$ (see e.g. ):
$$\frac{B(\mu \to e\gamma )}{1.2\times 10^{-11}}\left(\frac{s_{12}^{\ell }}{2\times 10^{-3}}\right)^2\left(\frac{100\,GeV}{m(\stackrel{~}{\ell })}\right)^4<1,$$
where $`m(\stackrel{~}{\ell })`$ is the average slepton mass. In these models it is then particularly difficult to explain a large deviation from maximal mixing. If the dominant source of deviation from maximal mixing is $`s_{12}^{\ell }`$, we have
$$ϵ_{12}\simeq 2s_{12}^{\ell }<4\times 10^{-3}\left(\frac{m(\stackrel{~}{\ell })}{100\,GeV}\right)^2.$$
7. Conclusions
If the solar neutrino problem is solved by a large mixing angle solution, and if the mixing is established to be close to maximal but not precisely maximal, then interesting constraints for theoretical model building would arise. Specifically, experiments may measure the size and the sign of the small parameter $`ϵ_{12}`$ defined by
$$\mathrm{sin}^2\theta _{12}\equiv \frac{1}{2}(1-ϵ_{12}).$$
Flavor models can account for a small $`ϵ_{12}`$ by forcing a pseudo-Dirac structure on the neutrino mass matrix through an approximate horizontal symmetry,
$$M_\nu ^{(2)}\sim m\left(\begin{array}{cc}\delta _1& 1\\ 1& \delta _2\end{array}\right),\qquad |\delta _1|,|\delta _2|\ll 1.$$
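As a quick numerical illustration of how this pseudo-Dirac structure generates a small $`ϵ_{12}`$, one can diagonalize the matrix for sample perturbations. In the sketch below the values of $`\delta _1`$, $`\delta _2`$ are arbitrary illustrations, and $`\theta `$ is taken as the rotation angle that diagonalizes $`M_\nu ^{(2)}`$; in this convention $`ϵ_{12}\approx (\delta _1-\delta _2)/2`$ while $`\mathrm{\Delta }m^2\approx 2m^2(\delta _1+\delta _2)`$, so the sign of $`ϵ_{12}`$ indeed tracks which $`\delta `$ is larger:

```python
import numpy as np

# Hypothetical values for the small pseudo-Dirac perturbations (illustration only).
d1, d2, m = 0.05, 0.01, 1.0
M = m * np.array([[d1, 1.0], [1.0, d2]])

# Rotation angle that diagonalizes the symmetric 2x2 matrix.
theta = 0.5 * np.arctan2(2.0 * M[0, 1], M[0, 0] - M[1, 1])
eps12 = 1.0 - 2.0 * np.sin(theta) ** 2   # from sin^2(theta_12) = (1 - eps12)/2

# Mass-squared splitting of the two nearly degenerate eigenvalues.
lam = np.linalg.eigvalsh(M)
dm2 = abs(lam[1] ** 2 - lam[0] ** 2)     # ~ 2 m^2 (d1 + d2)
```

For these inputs the deviation is $`ϵ_{12}\approx 0.02`$ and the splitting $`\mathrm{\Delta }m^2\approx 0.12\,m^2`$, of order $`m^2|ϵ_{12}|`$ only in the order-of-magnitude sense discussed below.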
We focus on models where there are no exact relations between different entries of the lepton mass matrices (except for $`(M_\nu )_{ij}=(M_\nu )_{ji}`$). Our main points are the following:
1. The most powerful constraints would arise if $`\delta _1`$ and/or $`\delta _2`$ are the dominant sources of $`ϵ_{12}`$. Then the size of $`|ϵ_{12}|`$ gives the size of the larger between $`|\delta _1|`$ and $`|\delta _2|`$ while the sign of $`ϵ_{12}`$ determines which of the two is larger. Moreover, the mass scale of the solar neutrinos (and not only their mass-squared splitting) can be estimated, $`m^2\sim \mathrm{\Delta }m_{21}^2/|ϵ_{12}|`$.
2. If the dominant source of $`ϵ_{12}`$ is a small angle in the diagonalizing matrix for the charged lepton mass matrix, $`s_{12}^{\ell }`$, then $`|ϵ_{12}|`$ constrains the size of $`s_{12}^{\ell }`$ but sign($`ϵ_{12}`$) is unlikely to test the theoretical models. The order of magnitude relation between $`|ϵ_{12}|`$ and $`\mathrm{\Delta }m_{12}^2/m^2`$ is lost, but there is still a useful inequality, $`|ϵ_{12}|>\mathrm{\Delta }m_{21}^2/m^2`$.
3. Radiative corrections do not play a significant role in $`ϵ_{12}`$ and in $`s_{13}`$. They are suppressed by the tau Yukawa coupling, by a loop factor and by $`s_{13}`$. Consequently, their effect is below the level of $`10^{-3}`$.
4. In models of horizontal symmetries where the kinetic terms are not canonically normalized, sign($`ϵ_{12}`$) depends on the kinetic terms as well and is unlikely to test the models.
It remains to be seen whether future developments in solar neutrino experiments would make a convincing case for the intriguing scenario of pseudo-Dirac neutrinos .
Acknowledgments
I thank John Bahcall, Plamen Krastev and Alexei Smirnov for useful discussions and correspondence. Partial support to this work was provided by the Department of Energy under contract No. DE–FG02–90ER40542, by the Ambrose Monell Foundation, by AMIAS (Association of Members of the Institute for Advanced Study), by the Israel Science Foundation founded by the Israel Academy of Sciences and Humanities, and by the Minerva Foundation (Munich).
References
J.N. Bahcall, P.I. Krastev and A.Yu. Smirnov, Phys. Rev. D60 (1999) 093001, hep-ph/9905220; Phys. Lett. B477 (2000) 401, hep-ph/9911248.
M.C. Gonzalez-Garcia, P.C. de Holanda, C. Pena-Garay and J.W.F. Valle, Nucl. Phys. B573 (2000) 3, hep-ph/9906469.
G.L. Fogli, E. Lisi, D. Montanino and A. Palazzo, Phys. Rev. D62 (2000) 013002, hep-ph/9912231.
C. Giunti, M.C. Gonzalez-Garcia and C. Pena-Garay, Phys. Rev. D62 (2000) 013005, hep-ph/0001101.
S.T. Petcov, Phys. Lett. B110 (1982) 245.
C.N. Leung and S.T. Petcov, Phys. Lett. B125 (1983) 461.
G.C. Branco, W. Grimus and L. Lavoura, Nucl. Phys. B312 (1989) 492.
S.T. Petcov and A.Yu. Smirnov, Phys. Lett. B322 (1994) 109, hep-ph/9311204.
P. Binetruy, S. Lavignac, S. Petcov and P. Ramond, Nucl. Phys. B496 (1997) 3, hep-ph/9610481.
M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B398 (1993) 319, hep-ph/9212278.
E.Kh. Akhmedov, Phys. Lett. B467 (1999) 95, hep-ph/9909217.
E.Kh. Akhmedov, G.C. Branco and M.N. Rebelo, Phys. Rev. Lett. 84 (2000) 3535, hep-ph/9912205.
J.W. Valle and M. Singer, Phys. Rev. D28 (1983) 540.
V. Barger, S. Pakvasa, T.J. Weiler and K. Whisnant, Phys. Lett. B437 (1998) 107, hep-ph/9806387.
Z. Maki, M. Nakagawa and S. Sakata, Prog. Theo. Phys. 28 (1962) 870.
A.H. Guth, L. Randall and M. Serna, JHEP 9908 (1999) 018, hep-ph/9903464.
A. de Gouvea, A. Friedland and H. Murayama, hep-ph/9910286; hep-ph/0002064.
A. Friedland, hep-ph/0002063.
C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B147 (1979) 277.
L.J. Hall and A. Rasin, Phys. Lett. B315 (1993) 164.
M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B420 (1994) 468, hep-ph/9410320.
G. Dutta and A.S. Joshipura, Phys. Rev. D51 (1994) 3838, hep-ph/9405291.
P.H. Chankowski and Z. Pluciennik, Phys. Lett. B316 (1993) 312, hep-ph/9306333.
K.S. Babu, C.N. Leung and J. Pantaleone, Phys. Lett. B319 (1993) 191, hep-ph/9309223.
J. Ellis and S. Lola, Phys. Lett. B458 (1999) 310, hep-ph/9904279.
J.A. Casas, J.R. Espinosa, A. Ibarra and I. Navarro, Nucl. Phys. B556 (1999) 3, hep-ph/9904395; Nucl. Phys. B569 (2000) 82, hep-ph/9905381; JHEP 9909 (1999) 015, hep-ph/9906281.
R. Barbieri, G.G. Ross and A. Strumia, JHEP 9910 (1999) 020, hep-ph/9906470.
E. Ma, J. Phys. G25 (1999) L97, hep-ph/9907400.
N. Haba, Y. Matsui, N. Okamura and M. Sugiura, Prog. Theor. Phys. 103 (2000) 145, hep-ph/9908429.
P.H. Chankowski, W. Krolikowski and S. Pokorski, Phys. Lett. B473 (2000) 109, hep-ph/9910231.
K.R.S. Balaji, A.S. Dighe, R.N. Mohapatra and M.K. Parida, Phys. Rev. Lett. 84 (2000) 5034, hep-ph/0001310.
CHOOZ collaboration, M. Apollonio et al., Phys. Lett. B420 (1998) 397, hep-ex/9711002; Phys. Lett. B466 (1999) 415, hep-ex/9907037.
Super-Kamiokande Collaboration, Y. Fukuda et al., Phys. Rev. Lett. 81 (1998) 1562, hep-ex/9807003.
V. Barger, T.J. Weiler and K. Whishnant, Phys. Lett. B440 (1998) 1, hep-ph/9807319.
G.L. Fogli, E. Lisi, A. Maronne and G. Scioscia, Phys. Rev. D59 (1999) 033001, hep-ph/9808205.
E. Dudas, S. Pokorski and C.A. Savoy, Phys. Lett. B356 (1995) 45, hep-ph/9504292.
G. Eyal and Y. Nir, Nucl. Phys. B528 (1998) 21, hep-ph/9801411.
Y. Grossman, Y. Nir and Y. Shadmi, JHEP 9810 (1998) 007, hep-ph/9808355.
Y. Grossman and Y. Nir, Nucl. Phys. B448 (1994) 30, hep-ph/9502418.
R. Barbieri, L.J. Hall, D. Smith, A. Strumia and N. Weiner, JHEP 9812 (1998) 017, hep-ph/9807235.
A.S. Joshipura and S.D. Rindani, hep-ph/9811252.
P.H. Frampton and S.L. Glashow, Phys. Lett. B461 (1999) 95, hep-ph/9906375.
R.N. Mohapatra, A. Perez-Lorenzana and C.A. de S. Pires, Phys. Lett. B474 (2000) 355, hep-ph/9911395.
K. Cheung and O.C.W. Kong, Phys. Rev. D61 (2000) 113012, hep-ph/9912238.
Q. Shafi and Z. Tavartkiladze, Phys. Lett. B482 (2000) 145, hep-ph/0002150.
Y. Nir and Y. Shadmi, JHEP 9905 (1999) 023, hep-ph/9902293.
Y. Nir and N. Seiberg, Phys. Lett. B309 (1993) 337, hep-ph/9304307.
J. Feng, Y. Nir and Y. Shadmi, Phys. Rev. D61 (2000) 113005, hep-ph/9911370.
M.C. Gonzalez-Garcia, Y. Nir, C. Pena-Garay and A. Smirnov, to appear.
|
no-problem/0002/cond-mat0002042.html
|
ar5iv
|
text
|
# Stability analysis of the $`D`$-dimensional nonlinear Schrödinger equation with trap and two- and three-body interactions
## Abstract
Considering the static solutions of the $`D`$-dimensional nonlinear Schrödinger equation with trap and attractive two-body interactions, the existence of stable solutions is limited to a maximum critical number of particles when $`D\ge 2`$. In the case $`D=2`$, we compare the variational approach with exact numerical calculations. We show that the addition of a positive three-body interaction allows stable solutions beyond the critical number. In this case, we also introduce a dynamical analysis of the conditions for the collapse.
PACS: 03.75.Fi; 47.20.Ky; 02.30.Jr; 31.75.Pf
Keywords: Nonlinear Schrödinger Equation; trapped two and three-body atomic systems; multidimensional systems
Recent experiments on Bose-Einstein Condensation (BEC) have brought great attention to its theoretical formulation. Atomic traps are effectively described by the Ginzburg-Pitaevskii-Gross (GPG) formulation of the nonlinear Schrödinger equation (NLSE) , which includes the two-body interaction. When the atoms have negative two-body scattering lengths, a formula for the critical maximum number of atoms was presented in ref. . In ref. , the formulation was extended in order to include the effective potential originated from the three-body interaction. In this case, in three dimensions, it was shown that a kind of first-order phase transition occurs. In this connection, as also considered in the motivations given in , it is relevant to observe that the possibility of altering the two-body scattering length continuously, from positive to negative values, by means of an external magnetic field was recently reported . Within such a perspective, the two-body binding energy can be close to zero, and one can approach the so-called Efimov limit, which corresponds to an increasing number of three-body bound states . Near this limit, nontrivial consequences can occur in the dynamics of the condensate, such that one should also consider three-body effects in the effective nonlinear potential.
In the present work, we study the critical number of atoms in arbitrary $`D`$ dimensions using a variational procedure, and also by an exact numerical approach in the case of dimension $`D=2`$. The $`D`$-dimensional NLSE, with attractive two-body interactions, was previously analyzed in models of plasma and light waves in nonlinear media . The collapse conditions, in this case, were investigated without and with the harmonic potential term. In the case $`D=3`$, it was shown that a repulsive nonlinear three-body interaction term can extend considerably the critical limit for the existence of stable solutions .
Motivated by the high interest in stable solutions for arbitrary $`D`$, we look for variational solutions in a few significant cases ($`D=`$1, 2, 4 and 5) not previously considered, when a three-body interaction term, parametrized by $`\lambda _3`$, is added to the effective non-linear interaction that contains a two-body attractive term. Our analysis also shows that, as in the case $`D=3`$, a kind of first-order phase transition can occur when $`D\ge 4`$, for certain cases of $`\lambda _3\ne 0`$. In the present paper, we have also considered the approach given in , in order to study the stability conditions for arbitrary $`D`$, when the non-linear interaction contains two-body (attractive) and three-body terms.
In order to obtain an analytical approach and verify the validity of the variational Ritz method, we consider in detail the case of $`D=2`$, with and without the three-body term, comparing the variational results with exact numerical calculations for some relevant physical observables. In this case, we also discuss how the method given in can be extended in order to approach analytically the exact value for the total energy.
By extending the GPG formalism from three to $`D`$ dimensions, including two and three-body interactions in the effective non-linear potential , we obtain
$`i\hbar {\displaystyle \frac{d\psi }{dt}}=\left[-{\displaystyle \frac{\hbar ^2}{2m}}\nabla ^2+{\displaystyle \frac{m\omega ^2r^2}{2}}+\lambda _2|\psi |^2+\lambda _3|\psi |^4\right]\psi ,`$ (1)
where $`\psi \equiv \psi (\stackrel{}{r},t)`$ is the wave-function normalized to the number of atoms $`N`$, $`\omega `$ is the frequency of the trap harmonic potential and $`m`$ is the mass of the atom. $`\lambda _2`$ and $`\lambda _3`$ are, respectively, the strengths of the two- and three-body effective interactions, given in a $`D`$-dimensional space. $`r\equiv |\stackrel{}{r}|`$ is the hyperradius, such that $`\stackrel{}{r}\equiv \sum _{i=1}^Dr_i\widehat{e}_i`$ and $`\nabla \equiv \sum _{i=1}^D\widehat{e}_i\frac{\partial }{\partial r_i}`$ ($`\widehat{e}_i`$ is the unit vector, with $`i=1,2,\dots ,D`$).
The stationary solutions for the chemical potential $`\mu `$ are given by
$$i\mathrm{}\frac{d\psi }{dt}=\mu \psi .$$
(2)
Considering that eq. (1) can be written in the variational form $`i\hbar {\displaystyle \frac{d\psi }{dt}}={\displaystyle \frac{\delta E}{\delta \psi ^{*}}}`$, one can obtain the total energy $`E`$:
$`E`$ $`=`$ $`{\displaystyle \int d^D\stackrel{}{r}\,\mathcal{H}},\mathrm{with}`$ (3)
$`\mathcal{H}`$ $`\equiv `$ $`{\displaystyle \frac{\hbar ^2}{2m}}\left|\nabla \psi \right|^2+{\displaystyle \frac{m\omega ^2r^2}{2}}\left|\psi \right|^2+{\displaystyle \frac{\lambda _2}{2}}|\psi |^4+{\displaystyle \frac{\lambda _3}{3}}|\psi |^6.`$ (4)
Here we consider only attractive two-body interactions, which are more interesting in the case of trapped atoms. For $`D=3`$, $`\lambda _2=-4\pi \hbar ^2|a|/m`$, where $`a`$ is the two-body scattering length and $`m`$ is the mass of the atom. In the case of arbitrary $`D`$, $`\lambda _2`$ has dimension of energy times $`L^D`$, where $`L`$ is a length scale in such a space. However, a convenient redefinition of the wave-function in terms of dimensionless variables will absorb this constant, as will be shown.
Our study will be concentrated on the ground state for a spherically symmetric potential. We first consider the case of $`\lambda _3=0`$, using a variational procedure, with a trial Gaussian wave-function for $`\psi (\stackrel{}{r})`$, normalized to $`N`$, given by
$$\psi _{var}(\stackrel{}{r})=\sqrt{N}\left(\frac{1}{\pi \alpha ^2}\frac{m\omega }{\hbar }\right)^{D/4}\mathrm{exp}\left[-\frac{r^2}{2\alpha ^2}\left(\frac{m\omega }{\hbar }\right)\right],$$
(5)
where $`\alpha `$ is a dimensionless variational parameter. From eq. (3), the corresponding expression for the total variational energy can be expressed as
$`E_{var}`$ $`=`$ $`\hbar \omega {\displaystyle \frac{N}{\nu }}\mathcal{E}_{var},`$ (6)
$`\mathcal{E}_{var}`$ $`\equiv `$ $`\nu \left({\displaystyle \frac{D}{4\alpha ^2}}+{\displaystyle \frac{D\alpha ^2}{4}}\right)-{\displaystyle \frac{\nu ^2\mathrm{\Omega }_D}{4(2\pi )^{D/2}\alpha ^D}}+{\displaystyle \frac{G_3}{6\pi ^D}}{\displaystyle \frac{\nu ^3\mathrm{\Omega }_D^2}{3^{D/2}\alpha ^{2D}}},`$ (7)
where $`\mathrm{\Omega }_D`$ is the solid angle in $`D`$ dimensions,
$`\mathrm{\Omega }_D\equiv {\displaystyle \frac{2\pi ^{D/2}}{\mathrm{\Gamma }(D/2)}},\qquad G_3\equiv {\displaystyle \frac{\lambda _3}{2(\lambda _2)^2}}\hbar \omega ,`$ (9)
$`\mathrm{and}\qquad \nu \equiv {\displaystyle \frac{N}{\mathrm{\Omega }_D}}{\displaystyle \frac{2|\lambda _2|}{\hbar \omega }}\left({\displaystyle \frac{m\omega }{\hbar }}\right)^{D/2}.`$ (10)
By using dimensionless variables, $`\stackrel{}{x}\equiv \sqrt{m\omega /\hbar }\,\stackrel{}{r}`$, we redefine the wave-function $`\psi `$ as
$$\varphi (\stackrel{}{x})\equiv \sqrt{\frac{2|\lambda _2|}{\hbar \omega }}\psi (\stackrel{}{r}),$$
(11)
such that
$$\int |\varphi (\stackrel{}{x})|^2d^D\stackrel{}{x}=N\left[\frac{2|\lambda _2|}{\hbar \omega }\right]\left(\frac{m\omega }{\hbar }\right)^{D/2}=\nu \mathrm{\Omega }_D.$$
(12)
The dimensionless equation corresponding to eq. (1), can be rewritten as
$$\left[\underset{i=1}{\overset{D}{\sum }}\left(-\frac{d^2}{dx_i^2}+x_i^2\right)-|\varphi |^2+G_3|\varphi |^4-2\beta \right]\varphi =0,$$
(13)
where $`\beta \equiv \mu /(\hbar \omega )`$ is the dimensionless chemical potential. From eqs. (11) and (5), the trial wave-function can be written as
$$\varphi _{var}(x)=\sqrt{\nu \mathrm{\Omega }_D}\left(\frac{1}{\pi \alpha ^2}\right)^{D/4}\mathrm{exp}\left(-\frac{x^2}{2\alpha ^2}\right).$$
(14)
The variational results, obtained by using the above expressions, can be extended analytically to non-integer values of the dimension $`D`$. Minimization of the energy \[eq. (6)\] with respect to $`\alpha ^2`$ is done numerically by sweeping over $`\alpha ^2`$ values. The results for the energy and the chemical potential are shown in Fig. 1. For each value of $`D`$, one can observe a critical number of atoms, $`N_c`$, related to the critical parameter $`\nu _c`$, only when $`D\ge 2`$. This critical limit corresponds to the cusps in the upper plot of Fig. 1 and is also observed using exact numerical calculation for $`D=3`$. It is also interesting to note that for $`D>2`$ there are two branches of solutions for $`\mathcal{E}_{var}`$ and $`\beta `$, one stable and the other unstable. In the energy, the lower branch corresponds to stable solutions (minima), while the upper one gives unstable solutions (maxima).
The case $`D=2`$ is particularly interesting, as no unstable solutions exist and there are stable solutions only for $`\nu <2`$, such that $`\nu _c=2`$. For $`D=2`$, the minimization of eq. (6) with respect to $`\alpha ^2`$ leads to
$$\mathcal{E}_{var}=\nu \sqrt{1-\frac{\nu }{2}}.$$
(15)
The behavior of $`\nu `$, and the corresponding critical limits, as one alters the dimension $`D`$, has other curious particular results. For example, the critical limit $`\nu _c`$ has a minimum for $`D=3`$ ($`\nu _c^{(D)}\ge \nu _c^{(3)}`$ for all $`D`$).
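The critical values $`\nu _c`$ discussed here are easy to reproduce numerically. The sketch below (Python; all names are our own) implements the dimensionless variational energy of eq. (7) with $`G_3=0`$, checks whether an interior local minimum in $`\alpha ^2`$ survives, and bisects in $`\nu `$ for the largest stable value; grid bounds and tolerances are arbitrary numerical choices:

```python
import numpy as np
from math import gamma, pi

def e_var(a2, nu, D, G3=0.0):
    """Dimensionless variational energy of eq. (7), as a function of alpha^2.
    The two-body term enters with a minus sign (attractive interaction)."""
    omega_d = 2.0 * pi ** (D / 2.0) / gamma(D / 2.0)
    kin_trap = nu * (D / (4.0 * a2) + D * a2 / 4.0)
    two_body = nu**2 * omega_d / (4.0 * (2.0 * pi) ** (D / 2.0) * a2 ** (D / 2.0))
    three_body = (G3 / (6.0 * pi**D)) * nu**3 * omega_d**2 / (3.0 ** (D / 2.0) * a2**D)
    return kin_trap - two_body + three_body

def has_stable_minimum(nu, D, G3=0.0):
    """True if E_var(alpha^2) has an interior local minimum on a fixed grid."""
    a2 = np.linspace(0.01, 5.0, 2000)
    e = e_var(a2, nu, D, G3)
    interior = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])
    return bool(interior.any())

def critical_nu(D, G3=0.0, lo=0.1, hi=20.0, tol=1e-4):
    """Bisect for the largest nu that still admits a (meta)stable minimum."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_stable_minimum(mid, D, G3):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

On this grid `critical_nu(2)` comes out close to 2 and `critical_nu(3)` close to 1.34 in this variational scheme, reproducing the cusp values and the minimum of $`\nu _c`$ at $`D=3`$.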
Concluding this part of our work, considering arbitrary $`D`$ with $`\lambda _3=0`$, there are no stable solutions for eq. (1) if the wave-function $`\varphi (x)`$, given by eq. (12), is normalized to $`\nu >\nu _c`$. Fig. 1 shows that this restriction is strongest for $`D=3`$: $`\nu _c`$ is a minimum when compared with $`\nu _c`$ for $`D\ne 3`$. This is a relevant result, considering that $`\nu `$ is directly proportional to the number of atoms. Also, it is observed that $`\nu _c`$ increases very fast for $`D>3`$.
Next, we also solve equation (13) exactly, employing the shooting and Runge-Kutta methods, and compare the results with the corresponding variational ones. In this case, we consider only the particularly interesting case $`D=2`$, with $`\lambda _3=0`$. The results are shown in Fig. 2, for the chemical potential $`\beta `$, the total energy $`\mathcal{E}`$, the mean-square radius $`\langle x^2\rangle `$, and the central density $`|\varphi (0)|^2`$. In order to numerically solve eq. (13) in the $`s`$-wave, we first write it in terms of the single variable $`x\equiv \sqrt{x_1^2+x_2^2}`$ and consider the following boundary conditions: $`\varphi ^{\prime }(0)=0`$ (where $`{}^{\prime }`$ stands for the derivative with respect to $`x`$) and $`\varphi (x)\sim C\mathrm{exp}(-x^2/2+[\beta -1]\mathrm{ln}(x))`$ when $`x\to \infty `$, where $`C`$ is a constant to be determined. As observed in Fig. 2, the critical limit $`\nu _c=2`$ obtained analytically using the variational approach should be compared with $`\nu _c=1.862`$, obtained by exact numerical calculation. This critical limit was first obtained by Weinstein , in a non-linear approach with the two-body term, without the trapping potential. The coincidence of the value with our exact calculation is due to the fact that at the critical limit the mean square radius goes to zero.
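The trap-free exact critical value quoted above can be recovered with a short shooting calculation. Without the harmonic term and with $`\lambda _3=0`$, the rescaled $`D=2`$ ground-state profile $`R(x)`$ (the Townes soliton) satisfies $`R^{\prime \prime }+R^{\prime }/x-R+R^3=0`$ with $`R^{\prime }(0)=0`$ and $`R(\infty )=0`$, and $`\nu _c=\int _0^{\mathrm{\infty }}R^2x\,dx`$. The sketch below (Python; step sizes, brackets and cutoffs are arbitrary numerical choices) bisects on the central value $`R(0)`$:

```python
def shoot(r0, dx=2e-3, x_max=12.0):
    """Integrate R'' + R'/x - R + R^3 = 0 outward with RK4.
    Returns (flag, norm): flag = -1 if R crosses zero (r0 too large),
    +1 if R re-grows past r0 (r0 too small), 0 if neither happens;
    norm accumulates the running trapezoid integral of R^2 x dx."""
    def f(x, r, p):
        return p, r - r**3 - p / x
    x = 1e-6
    r, p = r0, (r0 - r0**3) * x / 2.0   # small-x series for the regular solution
    norm = 0.0
    for _ in range(int(round(x_max / dx))):
        a1, b1 = f(x, r, p)
        a2, b2 = f(x + dx/2, r + dx/2*a1, p + dx/2*b1)
        a3, b3 = f(x + dx/2, r + dx/2*a2, p + dx/2*b2)
        a4, b4 = f(x + dx, r + dx*a3, p + dx*b3)
        r_new = r + dx/6*(a1 + 2*a2 + 2*a3 + a4)
        p_new = p + dx/6*(b1 + 2*b2 + 2*b3 + b4)
        norm += 0.5 * dx * (r*r*x + r_new*r_new*(x + dx))
        x, r, p = x + dx, r_new, p_new
        if r < 0.0:
            return -1, norm
        if r > r0 + 0.1:
            return +1, norm
    return 0, norm

lo, hi = 2.0, 2.5                 # bracket for the central amplitude R(0)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(mid)[0] < 0:
        hi = mid                  # overshoot: profile crossed zero
    else:
        lo = mid                  # undershoot (or converged within x_max)
r0_c = 0.5 * (lo + hi)            # Townes central amplitude, about 2.206
nu_c = shoot(r0_c)[1]             # Weinstein's critical norm, about 1.862
```

The exponential tail beyond the cutoff contributes negligibly to the norm, so the bisected profile reproduces $`\nu _c=1.862`$ to the integration accuracy.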
We have also compared the results obtained by the variational approach with the exact numerical ones, in the case $`D=2`$, for several values of the three-body interaction term (positive and negative), as shown in Figs. 3 and 4. In Fig. 3 we have the exact numerical approach and in Fig. 4 we have the corresponding variational results. By comparing the results we have for $`D=2`$ (shown in Figs. 3 and 4) with the ones obtained in ref. for $`D=3`$, we should observe that no first-order phase transition exists in two dimensions. As observed in refs. , for $`D=3`$, a first-order phase transition can occur in trapped condensed states with negative two-body scattering length, when a repulsive three-body (quintic) term is added to the Hamiltonian. As shown in Figs. 3 and 4, with $`G_3`$ positive the range of stability for the number of atoms $`N`$ can be increased indefinitely; with $`G_3`$ negative this range is reduced.
We can analyze the collapse conditions using “the virial theorem” approach . The mean square radius, $`\langle r^2\rangle `$, of a $`D`$-dimensional condensate satisfies
$`{\displaystyle \frac{d^2\langle r^2\rangle }{dt^2}}+4\omega ^2\langle r^2\rangle =`$ (16)
$`{\displaystyle \frac{1}{m}}\left[4\langle H\rangle +\lambda _2(D-2)\langle |\psi |^2\rangle +{\displaystyle \frac{4\lambda _3}{3}}(D-1)\langle |\psi |^4\rangle \right],`$ (17)
where
$$\langle 𝒪\rangle \equiv \frac{1}{N}\int d^D\stackrel{}{r}\,\psi ^{*}(\stackrel{}{r},t)𝒪\psi (\stackrel{}{r},t)$$
(18)
and $`\langle H\rangle =E/N`$. When $`\lambda _3=0`$ we obtain the equation derived in .
We can also write eq. (16) in dimensionless units, as was done in eqs. (11)-(13):
$`{\displaystyle \frac{d^2\langle x^2\rangle }{d\tau ^2}}+4\langle x^2\rangle ={\displaystyle \frac{4\mathcal{E}}{\nu }}+2f(\tau ),`$ (19)
where
$`f(\tau )\equiv {\displaystyle \frac{\lambda _2}{|\lambda _2|}}{\displaystyle \frac{D-2}{4}}\langle |\varphi |^2\rangle +G_3{\displaystyle \frac{D-1}{3}}\langle |\varphi |^4\rangle .`$ (20)
Using the initial conditions for $`\langle x^2\rangle `$ and $`d\langle x^2\rangle /d\tau `$, where, for simplicity, we assume $`d\langle x^2\rangle /d\tau =0`$, the solution of eq. (19) is given by
$`\langle x^2\rangle `$ $`=`$ $`{\displaystyle \frac{\mathcal{E}}{\nu }}+\left[\langle x^2\rangle |_0-{\displaystyle \frac{\mathcal{E}}{\nu }}\right]\mathrm{cos}(2\tau )`$ (21)
$`+`$ $`{\displaystyle \int _0^\tau }f(\tau ^{\prime })\mathrm{sin}(2(\tau -\tau ^{\prime }))\,d\tau ^{\prime }.`$ (22)
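The closed-form solution above can be verified directly against a numerical integration of eq. (19). In the sketch below (Python), the driving term $`f(\tau )=-0.3\mathrm{sin}\tau `$ and the parameters $`\mathcal{E}/\nu =0.5`$, $`\langle x^2\rangle |_0=1.2`$ are arbitrary placeholders, not physical values:

```python
import numpy as np

e_over_nu, y0 = 0.5, 1.2                  # illustrative placeholders only
f = lambda tau: -0.3 * np.sin(tau)        # stand-in for the driving term f(tau)

def closed_form(tau, n=4001):
    """Eqs. (21)-(22): constant + cosine + Duhamel convolution integral."""
    s = np.linspace(0.0, tau, n)
    vals = f(s) * np.sin(2.0 * (tau - s))
    h = tau / (n - 1)
    kernel = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule
    return e_over_nu + (y0 - e_over_nu) * np.cos(2.0 * tau) + kernel

def rk4_solution(tau_end, dt=1e-3):
    """Direct RK4 integration of y'' + 4y = 4E/nu + 2f(tau), y(0)=y0, y'(0)=0."""
    def deriv(t, y, v):
        return v, -4.0 * y + 4.0 * e_over_nu + 2.0 * f(t)
    t, y, v = 0.0, y0, 0.0
    for _ in range(int(round(tau_end / dt))):
        a1, b1 = deriv(t, y, v)
        a2, b2 = deriv(t + dt/2, y + dt/2*a1, v + dt/2*b1)
        a3, b3 = deriv(t + dt/2, y + dt/2*a2, v + dt/2*b2)
        a4, b4 = deriv(t + dt, y + dt*a3, v + dt*b3)
        y += dt/6 * (a1 + 2*a2 + 2*a3 + a4)
        v += dt/6 * (b1 + 2*b2 + 2*b3 + b4)
        t += dt
    return y
```

Both routes agree to the integration accuracy, which is just the statement that eqs. (21)-(22) give the Duhamel solution of eq. (19) with vanishing initial slope.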
The stability regions and the estimates for the collapse time can be obtained from the analysis of this solution, as performed for the case $`\lambda _3=0`$ in . Let us analyze the dynamics when $`D=2`$. In this case, $`\lambda _2`$ does not appear explicitly in $`\langle x^2\rangle `$ and $`f(\tau )`$ also does not depend on this parameter:
1. For a positive $`G_3`$, negative $`\lambda _2`$ and $`\mathcal{E}>0`$, we observe that $`\langle x^2\rangle `$ cannot be zero and the condensate is stable. The mean square radius of the condensate oscillates in time around a finite value. This is confirmed by the numerical simulations (see Figs. 3 and 4).
2. For a negative $`G_3`$ and positive $`\lambda _2`$, an analysis of stability like the one performed in ref. shows that
a) When the total energy $`\mathcal{E}<0`$, the condensate is unstable and the wavefields collapse in a finite time for any initial conditions;
b) When $`\mathcal{E}>0`$, as the function $`f(\tau )`$ is negative, the contribution of the integral term for $`\tau <\pi `$ is negative. Then, we find the collapse condition as
$$\langle x^2\rangle |_0\ge 2\frac{\mathcal{E}}{\nu }.$$
(23)
The same kind of analysis for $`D>2`$ is much more involved in the present approach, since the sign of the function $`f(\tau )`$ is not fixed when the parameters $`\lambda _2`$ and $`\lambda _3`$ have opposite signs.
Some information about the dynamics of the collapse can also be obtained by using the techniques based on integral inequalities . For instance, when $`D=2`$, we can estimate the three-body term contribution in $`E`$, following the procedure given in
$$\int d^2\stackrel{}{r}\,|\psi |^6\le C_2\left(\int d^2\stackrel{}{r}\,\frac{|\nabla \psi |^2}{2m}\right)^2\left(\int d^2\stackrel{}{r}\,|\psi |^2\right)=C_2K^2N,$$
(24)
where $`K`$ is the kinetic energy and $`C_2`$ is defined from the minimization of the functional
$$𝒥=\frac{\left(\int d^2\stackrel{}{r}\,|\nabla \psi |^2\right)^2\left(\int d^2\stackrel{}{r}\,|\psi |^2\right)}{\int d^2\stackrel{}{r}\,|\psi |^6}.$$
(25)
Combining with the corresponding estimate for $`\int d^2\stackrel{}{r}\,|\psi |^4`$, we obtain $`E>E(K)`$, where
$$E(K)=K+\frac{\omega ^2N^2}{4K}+\frac{\lambda _2}{2}C_1NK+\frac{\lambda _3}{3}C_2K^2N.$$
(26)
When $`\lambda _3=0`$ we get the equation derived in . Equation (26) should be compared with the corresponding variational expression (6), where the kinetic energy is given by $`K=N\hbar \omega /(2\alpha ^2)`$ and $`\alpha `$ is the width of the cloud. As we see, the expression for the energy (26) is very similar to that obtained by the variational approach. However, (26) is valid for arbitrary time and describes the nonstationary dynamics. By using the variational expression (upper limit) for the ground state, and the right-hand side of eq. (26) (lower limit), we can bracket analytically the exact value of the total energy:
$$E(K)<E<E_{var}.$$
(27)
For a deeper insight into the problem of stability, we need to obtain the values of the constants $`C_1`$ and $`C_2`$. This problem requires a generalization of the method suggested by Weinstein in , to be considered in a future work.
We should observe that exact numerical results, when $`G_3=0`$, have already been considered in refs. (for $`D=1`$ and $`D=3`$), in (for $`D=3`$), and in (for $`D=2`$). In , for $`D=3`$, the case with $`G_3\ne 0`$ was also considered, and a kind of first-order phase transition in the condensate was shown. In the present work, we have extended the variational formalism, in the case $`G_3\ne 0`$, to an arbitrary dimension $`D`$. In the following Figs. 5, 6 and 7, we present our results for the chemical potential as a function of $`\nu `$, for a set of given values of $`G_3`$, in the cases $`D=`$ 1, 4 and 5. As one can observe in Fig. 5, even in the case $`D=1`$ one can reach a critical maximum limit for $`\nu `$ when $`G_3`$ is sufficiently negative. For $`D=`$ 4 and 5 (Figs. 6 and 7), we observe a similar picture of a first-order phase transition occurring for some specific values of $`G_3`$.
In conclusion, in the present work we first studied the stability and the critical number of atoms in arbitrary $`D`$ dimensions using a variational procedure, for the case in which we have two-body (attractive) and three-body contributions. This part extends a previous analysis done in refs. . Next, we considered in more detail the case $`D=2`$. We compared the variational results with exact numerical calculations for the chemical potential, total energy, mean-square radius and density. Finally, we extended numerically the approach for $`D=2`$, including an effective three-body interaction term. We studied the sensitivity of the critical numbers with respect to corrections in the non-linear interaction. The effective interaction considered in the equation contains a trapped harmonic interaction and two nonlinear terms, proportional to the density $`|\psi |^2`$ (due to first-order two-body interaction) and to $`|\psi |^4`$ (due to first-order three-body interaction). We also verified, by a variational procedure, that a critical number of particles exists only for $`D\ge 2`$ when the nonlinear term of the NLSE contains just the cubic term. In the case $`D=1`$, a critical maximum number of atoms can exist with the addition of a negative quintic (three-body) term in the NLSE. In all cases where the number of atoms is limited, we observed that the addition of a positive $`|\psi |^4`$ term allows stable solutions beyond the critical number. We also introduced an analysis of the collapse conditions, using “the virial theorem” approach given in . The dynamics of the collapse was discussed in terms of the techniques developed in . In particular, we showed how the exact energy can be bounded in the case $`D=2`$ with two- and three-body term contributions.
Acknowledgments We are grateful to Jordan M. Gerton for the suggestions and careful reading of the manuscript. This work was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo and Conselho Nacional de Desenvolvimento Científico e Tecnológico.
|
no-problem/0002/astro-ph0002391.html
|
ar5iv
|
text
|
# 1 Dark Matter
## 1 Dark Matter
The problem of missing or Dark Matter, namely that there is insufficient material in the form of stars to hold galaxies and clusters together, has been known since the pioneering work of Bessel, Zwicky and most recently Rubin .
The existence of non-luminous Dark Matter was first inferred in 1844 by Friedrich Bessel from gravitational effects on positional measurements of Sirius and Procyon. In 1933, Zwicky concluded that the velocity dispersion in Rich Clusters of galaxies required 10 to 100 times more mass to keep them bound than could be accounted for by the luminous galaxies themselves.
Finally, Trimble noted that the majority of galactic rotation curves, at large radii, remain flat or even rise well outside the radius of the luminous astronomical object.
The missing Dark Matter has been traditionally explained in terms of Dark Matter Halos , although none of the Dark Matter Halo models have been very successful in explaining the experimental data .
This paper will describe the missing matter (Dark Energy) in terms of a Cosmological Constant which leads to a constant energy density.
The experimental determination of galactic velocity rotation curves (VRCs) has been one of the most important approaches used to estimate the ”local” mass (energy) density of the Universe. Several sets of data from VRCs will be analysed and the contribution due to the Cosmological Constant determined.
## 2 Constraints on the value of the Cosmological Constant
It is interesting to estimate the allowed range of values for the Cosmological Constant within the constraints of General Relativity and observational astronomy (for a comprehensive review, see Bahcall et al. ).
Starting from a General Relativity point of view, the Friedman energy equation is given by:
$$1=\frac{8\pi G_N}{3}\frac{\rho _{matter}}{H^2}-\frac{kc^2}{R^2H^2}+\frac{c^2\mathrm{\Lambda }}{3H^2},$$
(1)
where the Hubble Constant is denoted by $`H`$, the curvature term by $`k`$ and $`G_N`$ denotes the Newton gravitational constant. Eq.(1) can be rewritten as
$$1=\mathrm{\Omega }_m+\mathrm{\Omega }_k+\mathrm{\Omega }_\mathrm{\Lambda }$$
(2)
Here the relative contributions to the energy density of the universe are given by the matter, curvature and Cosmological Constant terms.
If we assume that the curvature contribution is small:
$$1=\mathrm{\Omega }_{Matter}+\mathrm{\Omega }_\mathrm{\Lambda }$$
(3)
In order to satisfy equation (3), it was surprising to discover that only a narrow range of values for the observed Cosmological Parameters was allowed. A “reasonable” set of parameters consistent with observation is:
$$H_0=100\,km\,s^{-1}\,Mpc^{-1},\qquad \rho _{matter}=5\times 10^{-30}\,g\,cm^{-3},\qquad \frac{\mathrm{\Omega }_\mathrm{\Lambda }}{\mathrm{\Omega }_{matter}}=4.3$$
(4)
and $`\mathrm{\Omega }_{Matter}+\mathrm{\Omega }_\mathrm{\Lambda }=1.4`$ (here we assume a small curvature contribution, $`\mathrm{\Omega }_k\approx -0.4`$). (For an authoritative review of the matter/energy sources of the universe, see Turner .)
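As a quick consistency check of these numbers, the density parameters and the implied $`\mathrm{\Lambda }`$ can be evaluated directly from eqs. (1)-(2). The sketch below (Python, CGS units) takes the parameter set of eq. (4) at face value:

```python
import math

G  = 6.674e-8                      # Newton's constant, cm^3 g^-1 s^-2
c  = 2.998e10                      # speed of light, cm s^-1
H0 = 100.0 * 1.0e5 / 3.086e24      # 100 km/s/Mpc converted to s^-1
rho_m = 5.0e-30                    # matter density of eq. (4), g cm^-3

omega_m = 8.0 * math.pi * G * rho_m / (3.0 * H0**2)
omega_l = 4.3 * omega_m            # quoted ratio Omega_Lambda / Omega_matter
lam = 3.0 * H0**2 * omega_l / c**2 # cm^-2, from Omega_Lambda = c^2 Lambda/(3H^2)
```

With these inputs $`\mathrm{\Omega }_{matter}\approx 0.27`$, $`\mathrm{\Omega }_{matter}+\mathrm{\Omega }_\mathrm{\Lambda }\approx 1.41`$, and $`|\mathrm{\Lambda }|\approx 4\times 10^{-56}\,cm^{-2}`$, inside the window quoted in eq. (5).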
It was found that observational constraints placed upon the range of values for the cosmological parameters lead to a surprisingly narrow range of possible values for the Cosmological Constant, the range being given by:
$$10^{-56}<|\mathrm{\Lambda }|<5\times 10^{-55}\,cm^{-2}.$$
(5)
## 3 Experimental Results
It was shown , within the Weak Field Approximation, that the Cosmological Constant at large radii could be determined from galactic velocity rotation curves. This contribution is given by:
$$v_\mathrm{\Lambda }^2(r)=v_{obs}^2(r)-v_{mass}^2(r),\mathrm{leading}\mathrm{to},$$
(6)
$$v_\mathrm{\Lambda }^2/r=-\frac{c^2\mathrm{\Lambda }r}{3},\mathrm{at}\mathrm{large}\mathrm{r}$$
(7)
The results obtained by this analysis are shown in Table 1.
The experimental values obtained for the Cosmological Constant fall within the range determined from General Relativity and observational constraints. While the initial results are promising, a thorough and systematic analysis of galactic rotation curves needs to be undertaken in order to confirm the trend.
Previous results reported for the value of the Cosmological Constant were 100 to 1000 times the ”allowed value”. This systematic error arose for two main reasons: the first by not taking the gradient of the curves at sufficiently large radii and the second by the lack of access to ”real” experimental data leading to crude data analysis.
The results presented in this paper suffer from the second problem, i.e. all the gradients were obtained from the data in the published literature and not from raw experimental data: M33 from Corbelli & Salucci, and NGC 3198 and all the others from the literature.
However, experience has taught us that a cursory look at rotation curves will determine which galaxies are candidates for explanation by a Cosmological Constant and which are not. Galaxies whose velocity rotation curves remain flat or rise at large radii are immediate candidates. NGC 3198 is a good example, whereas others, such as M33, have clearly not relaxed to the Cosmological background, even at many times the galactic radius. A full explanation for M33 has to be sought in a different direction.
Finally, a simple calculation of the effective mass density due to the Cosmological Constant in NGC 3198,
$$\rho _{eff}=-\frac{c^2\mathrm{\Lambda }}{4\pi G_N}$$
(8)
leads to a value of $`5.4\times 10^{-29}\,\mathrm{g}\,\mathrm{cm}^{-3}`$, which is comparable to the HI mass density at the outer disk of the NGC 3198 galaxy. This is further confirmation that the Cosmological Constant effect can be seen at galactic scale lengths.
## 4 Accelerating or Decelerating Universe?
Recently there has been great interest in the Type Ia Supernovae results of Perlmutter et al. , which suggest that the universe is accelerating.
In this section we will show that the Weak Field Approximation coupled with galactic velocity rotation curve data inevitably lead to a negative Cosmological Constant.
The equation for the VRC is given by <sup>3</sup><sup>3</sup>3In Ref. eq.(15), there was a typographical sign error for one of the terms and also the negative pressure effect associated with $`\mathrm{\Lambda }`$ was not fully appreciated.
$$\frac{v^2}{r}=\frac{Gm}{r^2}-\frac{c^2\mathrm{\Lambda }}{3}r$$
(9)
We note that Eq.(9) is only strictly true for small and large radii, however it will serve to illustrate our arguments.
Using the Newtonian limit of Einstein field equations we derived equation (9). It is important to realize that the Cosmological Constant obeys the equation of state given by,
$$P_\mathrm{\Lambda }=-c^2\rho _\mathrm{\Lambda },$$
(10)
where $`P_\mathrm{\Lambda }`$ is the pressure term due to $`\mathrm{\Lambda }`$. Taking the Newtonian limit in the absence of matter, $`T_{\mu \nu }=0`$, the differential equation for the static Newtonian potential becomes
$$\nabla ^2\mathrm{\Phi }=-c^2\mathrm{\Lambda }$$
(11)
leading to,
$$\rho _{eff}=\rho _\mathrm{\Lambda }+\frac{3P_\mathrm{\Lambda }}{c^2}=-2\rho _\mathrm{\Lambda }$$
(12)
If we arbitrarily set $`\mathrm{\Phi }=0`$ at the origin, then in spherical coordinates (11) has the solution $`\mathrm{\Phi }=-\frac{c^2\mathrm{\Lambda }}{6}r^2`$. Thus, the Cosmological Constant leads to the following correction to the Newtonian potential
$$\mathrm{\Phi }=-\frac{Gm}{r}-\frac{c^2\mathrm{\Lambda }}{6}r^2$$
(13)
At small galactic radii the velocity versus radius contribution is well known and follows Newtonian physics. For large radii a negative Cosmological Constant gives a positive contribution to the VRC which is what is actually observed. On the other hand the effect of a positive Cosmological Constant would be to lower the rotation curve below that due to matter alone.
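The sign argument can be checked numerically. A minimal sketch (with a hypothetical galaxy mass and radius) evaluates $`v^2(r)=Gm/r-c^2\mathrm{\Lambda }r^2/3`$, which follows from eq. (13), for negative, zero and positive $`\mathrm{\Lambda }`$:

```python
G = 6.674e-11          # Newton's constant, SI
C = 2.99792458e8       # speed of light, m/s
KPC = 3.0857e19        # one kiloparsec in metres

def v_rot(r, m, lam):
    """v^2(r) = r * dPhi/dr with Phi = -G*m/r - (c^2*lam/6)*r^2."""
    v_sq = G * m / r - (C**2 * lam / 3.0) * r**2
    return v_sq ** 0.5

m_gal = 2e41                          # ~1e11 solar masses (hypothetical)
r_out = 30.0 * KPC                    # outer-disk radius (hypothetical)
v_neg = v_rot(r_out, m_gal, -1e-51)   # Lambda = -1e-55 cm^-2
v_zero = v_rot(r_out, m_gal, 0.0)     # Newtonian term only
v_pos = v_rot(r_out, m_gal, +1e-51)   # Lambda = +1e-55 cm^-2
# only the negative Lambda lifts the curve above the Newtonian one at large r
```

This reproduces the qualitative point of the text: a rising or flat outer rotation curve selects the negative sign.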
The above simple argument, based on observational astronomy, allows only a negative Cosmological Constant as a possible explanation for the galactic velocity rotation curve data. This is in clear disagreement with the Type Ia supernovae results . However, given the uncertainties in the determination of the deceleration parameter, $`q_0`$, derived from supernovae data the approach outlined above has certain merits worth consideration.
In summary these are: the Cosmological Constant is determined from $`direct`$ measurement, unlike the Supernovae results; the experimentally determined value is of the correct order of magnitude required by cosmological constraints; and a negative Cosmological Constant is consistent with, and indeed a natural physical explanation for, the observed galactic velocity rotation curve data.
Finally, observations of globular clusters of stars constrain the age of the universe and consequently place an observational limit on a negative Cosmological Constant of
$$|\mathrm{\Lambda }|\lesssim 2.2\times 10^{-56}\,\mathrm{cm}^{-2}.$$
(14)
Note, the Cosmological Constant derived from globular cluster constraints is in agreement with the experimentally determined value derived from galactic velocity rotation curve data.
### 4.1 Experimental Tests: Dark Matter Halo vs Cosmological Constant
It would be of some interest if it were possible to distinguish experimentally between the contributions of Dark Matter haloes and Dark Energy (the Cosmological Constant) to galactic rotation curves.
We know that Dark Matter predicts a variation of mass at large radii given by
$$M_{DM}(r)\propto r$$
(15)
while for Dark Energy due to a Cosmological Constant,
$$M_\mathrm{\Lambda }(r)\propto r^3\left[\rho _\mathrm{\Lambda }+(3P_\mathrm{\Lambda }/c^2)\right].$$
(16)
With these different types of predictive variations it should be possible to design experimental tests to distinguish between the two phenomena.
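One simple discriminator follows directly from eqs. (15)-(16): fit the log-log slope of the enclosed-mass profile, which is 1 for a Dark Matter halo and 3 for the Cosmological Constant term. The profiles below are idealised pure power laws with invented normalisations, not real data:

```python
import math

def loglog_slope(radii, masses):
    # least-squares slope of log M versus log r
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

radii = [10.0, 20.0, 30.0, 40.0, 50.0]      # kpc, hypothetical sampling
m_dm  = [3e9 * r for r in radii]            # halo-like profile, M ~ r
m_lam = [5e6 * r**3 for r in radii]         # Lambda-like profile, M ~ r^3
slope_dm = loglog_slope(radii, m_dm)        # -> 1
slope_lam = loglog_slope(radii, m_lam)      # -> 3
```

In practice the baryonic contribution and measurement noise would have to be subtracted before such a fit, but the exponents are far enough apart that the test remains clean.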
## 5 Quark Hadron Phase transition
In this section, which is of a more speculative nature, working within the Extended Large Number Hypothesis and using the experimentally determined Cosmological Constant, we will demonstrate how the energy density of the Quark-Hadron phase transition can be estimated.
However, it is useful to put into context the significance of the Cosmological Constant for many seemingly disparate branches of Physics. Figure 1 below shows the Cosmological Constant at the epicentre of Physics.
The diagram demonstrates a dichotomy whereby several branches of Physics need a non-zero Cosmological Constant in order to explain key physical phenomena, whilst in others a non-zero value presents a fundamental problem.
It is also noted here that while fundamental theories of Particle Physics such as the Standard Model, Quantum Field Theory and String Theory have many major predictive successes, they all have a problem with a high vacuum energy density. On the other hand, while the Extended LNH is formulated from a naive theory, it appears to predict the correct vacuum energy density and other cosmological parameters. The Extended LNH relates the value of the Cosmological Constant to the effective mass as:
$$|\mathrm{\Lambda }|=\frac{G_N^2m_{eff}^6}{h^4}=\frac{c^6L_s^4}{h^6}m_{eff}^6$$
(17)
Matthews pointed out that, when using the Extended LNH to determine today’s cosmological parameters, the mass of the proton originally suggested by Dirac should be replaced by the energy density of the last phase transition of the Universe: the Quark-Hadron transition.
Note that in equation (17) there are no free parameters, $`L_s`$ is normalised to the gravitational constant and corresponds to the fundamental length of String Theory.
Using equation (17) and the Cosmological Constant determined from NGC 5033, the effective mass is given by
$$m_{Effective}=332\,\mathrm{MeV}$$
(18)
We will associate this value with the Quark-Hadron phase transition energy. (Other experimentally determined Cosmological Constant data give $`m_{QH}`$ in the range 295-410 MeV.) The experimentally determined value within the LNH predicts the correct order of magnitude for the phase transition.
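Inverting eq. (17) gives $`m_{eff}=(|\mathrm{\Lambda }|h^4/G_N^2)^{1/6}`$. The sketch below evaluates this with SI constants and a hypothetical $`|\mathrm{\Lambda }|=10^{-55}\,\mathrm{cm}^{-2}`$ inside the measured band of eq. (5); it is not a re-derivation from the actual NGC 5033 data.

```python
H = 6.62607015e-34      # Planck constant, J s
G = 6.674e-11           # Newton constant, SI
C = 2.99792458e8        # speed of light, m/s
MEV = 1.602176634e-13   # 1 MeV in joules

def m_eff_mev(lam_cm2):
    """Invert eq. (17): m_eff = (|Lambda| * h^4 / G_N^2)^(1/6),
    returned as a rest energy in MeV."""
    lam_m2 = abs(lam_cm2) * 1e4                  # cm^-2 -> m^-2
    m_kg = (lam_m2 * H**4 / G**2) ** (1.0 / 6.0)
    return m_kg * C**2 / MEV

m = m_eff_mev(1e-55)    # hypothetical |Lambda| inside the measured band
# m comes out close to the 332 MeV quoted in eq. (18)
```

The sixth-root dependence means the three-order-of-magnitude spread in eq. (5) maps onto only a modest spread in $`m_{eff}`$, which is why the 295-410 MeV window is so narrow.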
The above result poses the question of whether it might be possible to gain insights into the quantum mechanical origin of the Universe, as Dirac suggested, from direct observation of the present-day Universe.
Finally, does the Cosmological Constant provide the key to the integration of the various Physics disciplines as Figure 1 suggests?
## 6 Discussion
Analysis of the galactic rotation curves shows that the missing Galactic Dark Matter can be explained in terms of a Cosmological Constant.
This contribution can be considered a prime candidate for the "Dark Energy" which is smoothly distributed throughout space and contributes approximately $`70\%`$ of the mass/energy of the Universe.
However, in order to support this thesis for the Cosmological Constant, thorough and systematic analysis of galactic velocity rotation data needs to take place.
It was shown that, within the Weak Field Approximation, VRC data inevitably lead to a negative value for the Cosmological Constant, in direct disagreement with the Type Ia supernovae data. Nevertheless, given the uncertainties in determining the deceleration parameter $`q_0`$ from the redshift-magnitude Hubble diagram using Type Ia supernovae as standard candles, we believe our approach is worth further consideration.
The experimental values determined for the Cosmological Constant are shown to lie within an acceptable range. These values, used within the Extended Large Number Hypothesis, predict values for the Quark-Hadron phase transition energy in the range 295-410 MeV.
It would be remarkable, if proved correct, that the Cosmological Constant could be directly determined from the analysis of galactic velocity rotation curves.
Equally remarkable, if proved correct, is the idea that astronomical observations can shed light on the last quantum mechanical phase transition of the Universe, namely the Quark-Hadron.
## 7 Acknowledgements
We would like to thank Paolo Salucci for invaluable discussions on Cosmological and Astronomical aspects related to this work and Alexander Love for useful comments on the manuscript. We also thank J. Hargreaves and D. Bailin for suggestions and useful discussions.
We also would like to thank Deja Whitehouse for proof reading this document.
George Kraniotis was supported for this work by PPARC.
# Evidence of first-order transition between vortex glass and Bragg glass phases in high-$`T_\mathrm{c}`$ superconductors with point pins: Monte Carlo simulations
## Abstract
The phase transition between the vortex glass and the Bragg glass phases in high-$`T_\mathrm{c}`$ superconductors in $`\stackrel{}{B}\parallel \stackrel{}{c}`$ is studied by Monte Carlo simulations in the presence of point pins. A finite latent heat and a $`\delta `$-function peak of the specific heat are observed, which clearly indicates that this is a thermodynamic first-order phase transition. Values of the entropy jump and the Lindemann number are consistent with those of melting transitions. A large jump of the inter-layer phase difference is consistent with the recent Josephson plasma resonance experiment of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+y</sub> by Gaifullin et al.
Vortex states in high-$`T_\mathrm{c}`$ superconductors have been intensively studied experimentally and theoretically . Because of large fluctuations owing to high transition temperature and strong anisotropy, the flux-line lattice (FLL) melts at much lower temperatures than those predicted by Abrikosov’s mean-field theory. The FLL melting is a thermodynamic first-order phase transition. In pure systems, the melting line stretches up to a high magnetic field as large as $`H_{\mathrm{c}_2}`$. However, all experiments show that first-order melting lines terminate at much lower magnetic fields . Complicated phase diagrams are obtained experimentally in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+y</sub> (BSCCO) and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) , and it is believed that effects of impurities are essential in real materials. For example, vacancies of oxygen atoms, which play the role of point pins to flux lines, cannot be excluded completely even in crystals of highest quality.
Fisher et al. studied the Ginzburg-Landau model in a random potential, and proposed the so-called vortex glass (VG) phase before experimental studies. Giamarchi and Le Doussal pointed out that the Bragg glass (BG) phase can exist in weak fields. In this phase, correlation functions decay in power laws , and the structure factor shows a triangular Bragg pattern. Accordingly, a phase transition between these two glass phases may be observed when the magnetic field is swept across the phase boundary (see Fig. 1). The existence of the VG–BG phase transition in vortex systems with point pins shows a sharp contrast to the phase diagram of pure systems. Presuming a first-order phase transition, the VG–BG phase boundary was evaluated phenomenologically . However, physical properties around this phase boundary have not been clarified in experiments and numerical calculations until recently. The shape of the phase boundary seems to depend on observed quantities, and some experiments even suggest a crossover rather than a phase transition. Although some simulations gave phase diagrams similar to that of Giamarchi and Le Doussal, numerical accuracy of these studies was not good enough to distinguish phase transitions and crossovers. The stability of the VG phase was studied by Kawamura including the screening effect.
Quite recently, Gaifullin et al. observed a large jump of the inter-layer phase difference on the VG–BG phase boundary of BSCCO by the Josephson plasma resonance experiment. They claimed that their observation is the evidence of a first-order phase transition. In the present Letter, we show more direct evidence of the first-order phase transition on the VG–BG boundary by large-scale Monte Carlo simulations. That is, a finite latent heat and a $`\delta `$-function peak of the specific heat are observed. Sharp jumps of the inter-layer phase difference and the averaged fluctuations of flux lines are also obtained.
In order to clarify vortex states and phase transitions of high-$`T_\mathrm{c}`$ superconductors in the presence of point pins, we start from the three-dimensional anisotropic, frustrated XY model on a simple cubic lattice . Effects of point pins are introduced into the model by randomly-distributed weakly-coupled plaquettes in the $`ab`$ plane. Since a vortex sitting on a plaquette costs an energy proportional to the couplings surrounding it, flux lines tend to penetrate plaquettes with weaker couplings in order to reduce such loss of energies. The Hamiltonian of our model is given by
$$H=-\underset{i,j\in ab\text{ plane}}{\sum }J_{ij}\mathrm{cos}\left(\varphi _i-\varphi _j-A_{ij}\right)-\frac{J}{\mathrm{\Gamma }^2}\underset{m,n\in c\text{ axis}}{\sum }\mathrm{cos}\left(\varphi _m-\varphi _n\right),$$
(2)
$$A_{ij}=\frac{2\pi }{\mathrm{\Phi }_0}\int _i^j\stackrel{}{A}\cdot d\stackrel{}{r},$$
(3)
with the periodic boundary condition along all the directions. Couplings in the $`ab`$ plane are given by $`J_{ij}=bJ`$ ($`0<b<1`$) on the weakly-coupled plaquettes, and $`J_{ij}=J`$ otherwise. The density and the strength of point pins are controlled by the probability of weakly-coupled plaquettes, $`p`$, and the parameter $`b`$, respectively (see Fig. 2). The pinning energy is of order of $`(1b)J`$. A uniform magnetic field is applied along the $`c`$ axis, and its strength is proportional to the averaged number of flux lines per plaquette, $`f`$. Here we concentrate on the model with $`L_x=L_y=50`$ and $`L_c=40`$. This system size is large enough to describe the melting transition in the pure system ($`b=1`$) .
In our model, we have four adjustable parameters: the anisotropy constant $`\mathrm{\Gamma }`$, the density of flux lines $`f`$, the density of point pins $`p`$, and the strength of point pinning $`b`$. In order to investigate the VG–BG transition, we vary $`b`$, while we fix the temperature at $`T=0.06J/k_\mathrm{B}`$ and other parameters at $`\mathrm{\Gamma }=20`$, $`f=1/25`$ and $`p=0.003`$. In other words, material parameters of the bulk system and the number and positions of point pins are not changed during the simulations. As will be shown later, this temperature is low enough for the study of the VG–BG phase boundary. Typical Monte Carlo steps (MCS) with the Metropolis algorithm are $`3`$–$`4\times 10^7`$ MCS for equilibration, and $`0.5`$–$`1\times 10^7`$ MCS for measurement.
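A toy implementation of the model of eqs. (2)-(3) may help fix ideas: a small anisotropic frustrated XY lattice with a fraction $`p`$ of weak ($`bJ`$) in-plane plaquettes, updated by local Metropolis moves. The lattice size, sweep count, parameter values and Landau-gauge choice below are illustrative only, far smaller than the production runs quoted above.

```python
import math, random

L, LC = 5, 4           # in-plane size and number of ab planes (f*L must be an integer)
F = 1.0 / 5.0          # flux quanta per plaquette (toy value; the text uses 1/25)
GAMMA, B, P, T = 20.0, 0.90, 0.1, 0.06   # anisotropy, pin strength, pin density, k_B*T/J

rng = random.Random(1)
theta = [[[rng.uniform(0, 2 * math.pi) for _ in range(LC)]
          for _ in range(L)] for _ in range(L)]
jx = [[1.0] * L for _ in range(L)]   # bond (x,y)->(x+1,y), in units of J
jy = [[1.0] * L for _ in range(L)]   # bond (x,y)->(x,y+1)
for x in range(L):                   # a weak plaquette weakens its four bonds
    for y in range(L):
        if rng.random() < P:
            jx[x][y] = jx[x][(y + 1) % L] = B
            jy[x][y] = jy[(x + 1) % L][y] = B

def site_energy(x, y, z):
    """Energy of all bonds touching site (x,y,z), in units of J."""
    e = 0.0
    for dx, dy, jbond in ((1, 0, jx[x][y]), (-1, 0, jx[(x - 1) % L][y]),
                          (0, 1, jy[x][y]), (0, -1, jy[x][(y - 1) % L])):
        xn, yn = (x + dx) % L, (y + dy) % L
        a = 2 * math.pi * F * x * dy          # Landau gauge: A lives on y-bonds
        e -= jbond * math.cos(theta[x][y][z] - theta[xn][yn][z] - a)
    for dz in (1, -1):                        # Josephson coupling along the c axis
        e -= math.cos(theta[x][y][z] - theta[x][y][(z + dz) % LC]) / GAMMA**2
    return e

def sweep():
    for x in range(L):
        for y in range(L):
            for z in range(LC):
                old, e_old = theta[x][y][z], site_energy(x, y, z)
                theta[x][y][z] = old + rng.uniform(-1.0, 1.0)
                d_e = site_energy(x, y, z) - e_old
                if rng.random() >= math.exp(min(0.0, -d_e / T)):
                    theta[x][y][z] = old      # Metropolis rejection

def total_energy():
    # each bond is counted from both of its endpoints, hence the factor 1/2
    return 0.5 * sum(site_energy(x, y, z)
                     for x in range(L) for y in range(L) for z in range(LC))

e0 = total_energy()
for _ in range(50):
    sweep()
e1 = total_energy()   # the random start relaxes at this low temperature
```

Measuring the internal energy, the specific heat and the inter-layer cosine as functions of $`b`$, as in the text, would sit on top of exactly this kind of update loop, with far larger lattices and sweep counts.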
First, the $`b`$ dependence of the internal energy $`e`$ and the specific heat $`C`$ per flux line per $`ab`$ plane is displayed in Fig. 3. Clearly, the internal energy shows a sharp jump at the transition point $`b^{}=0.895\pm 0.005`$ with a latent heat per flux line per $`ab`$ plane $`Q\simeq 2.3\times 10^{-2}J`$, and a $`\delta `$-function peak of the specific heat also occurs at the same parameter. These two facts indicate that the VG–BG transition is a thermodynamic first-order phase transition. From this latent heat, the entropy jump at $`b^{}`$ is estimated as
$$\mathrm{\Delta }S=Q/T\simeq 0.38k_\mathrm{B},$$
(4)
which is comparable to the experimental value in the melting transition of YBCO, $`\mathrm{\Delta }S\simeq 0.5k_\mathrm{B}`$ .
Second, the $`b`$ dependence of the inter-layer phase difference, $`\langle \mathrm{cos}(\varphi _n-\varphi _{n+1})\rangle `$, is plotted in Fig. 4. This quantity is related to the Josephson energy per phase variable $`e_\mathrm{J}`$ and the anisotropy constant $`\mathrm{\Gamma }`$ as
$$\langle \mathrm{cos}(\varphi _n-\varphi _{n+1})\rangle =e_\mathrm{J}\mathrm{\Gamma }^2/J,$$
(5)
and a small change of $`e_\mathrm{J}`$ is magnified in this quantity in extremely anisotropic systems. This quantity also jumps sharply at $`b^{}=0.895\pm 0.005`$, and the value of the jump at $`b^{}`$, $`\mathrm{\Delta }_{\mathrm{PD}}\simeq 0.12`$, is as large as the experimental value, $`\mathrm{\Delta }_{\mathrm{PD}}\simeq 0.2`$ . Moreover, the ratio of the jump of the Josephson energy to the latent heat is given by $`\mathrm{\Delta }e_\mathrm{J}/(Qf)\simeq 0.34`$, which means that the latent heat is equally distributed to all the directions in the VG–BG phase transition in extremely anisotropic systems.
Third, the Lindemann number is evaluated directly . The deviation $`u`$ of a flux line is measured in each $`ab`$ plane from the projection of the mass center of the flux line, and averaged over all the flux lines and the $`ab`$ planes. Then, the Lindemann number $`c_\mathrm{L}`$ is given by
$$c_\mathrm{L}=\underset{b\to b^{}+0}{\mathrm{lim}}\langle u^2\rangle ^{1/2}/a_0,$$
(6)
with the lattice constant of the triangular FLL, $`a_0=(2/\sqrt{3})^{1/2}/f^{1/2}`$. The $`b`$ dependence of $`\langle u^2\rangle ^{1/2}/a_0`$ is shown in Fig. 5, and we have $`c_\mathrm{L}\simeq 0.28`$. This value is almost equal to the one obtained in the FLL melting of pure systems .
Finally, we go into some details of the present simulations. The system with $`b=0.90`$ is calculated at first. Simulations are started from a random configuration at a very high temperature, and the system is gradually cooled down to $`T=0.06J/k_\mathrm{B}`$. During the cooling process, the first-order melting transition characterized by a discontinuous appearance of the helicity modulus along the $`c`$ axis, $`\mathrm{{\rm Y}}_c`$ , takes place at $`T_\mathrm{m}\simeq 0.079J/k_\mathrm{B}`$, which corresponds to the vortex liquid (VL)–BG phase transition. Then, the strength of point pinning $`b`$ is varied. Since the quantity $`\mathrm{{\rm Y}}_c`$ is proportional to the superfluid density, the region with finite $`\mathrm{{\rm Y}}_c`$ is superconducting. This quantity is nonvanishing for all the values of $`b`$ shown in Figs. 3–5 at $`T=0.06J/k_\mathrm{B}`$, and therefore the phase transition investigated in the present Letter is not the VL–BG one, but the VG–BG one. Equilibration in systems with point pins is much slower than that in pure systems, and only one sample can be taken for calculations at present. Nevertheless, the results obtained in the present Letter are quite clear-cut and consistent with experiments. Thus, the small number of random sampling does not seem serious. Since positions of point pins are independent in each $`ab`$ plane, the number of $`ab`$ planes, $`L_c=40`$, would be large enough for averaging effects of point pins.
Although we have concentrated on the VG–BG transition for a single density of point pins $`p`$ in the present Letter, we have also investigated the VL–BG and VL–VG transitions for various $`p`$, and obtained the overall phase diagram in the $`p`$–$`T`$ plane. The structure of the $`p`$–$`T`$ phase diagram is similar to that of the $`B`$–$`T`$ phase diagram. Experimentally, the increase of $`p`$ corresponds to the repeated irradiation of electrons or protons, and our $`p`$–$`T`$ phase diagram is consistent with recent experiments . Details of this study will be reported elsewhere .
In conclusion, the first thermodynamic evidence of the first-order transition between the vortex glass (VG) and the Bragg glass (BG) phases has been obtained in high-$`T_\mathrm{c}`$ superconductors in the presence of point pins. A finite latent heat and a $`\delta `$-function peak of the specific heat are observed by large-scale Monte Carlo simulations of the three-dimensional anisotropic, frustrated XY model with randomly-distributed weakly-coupled plaquettes. The entropy jump derived from the latent heat is nearly equal to those in the melting transition of YBCO. The Lindemann number evaluated from fluctuations of flux lines, $`c_\mathrm{L}\simeq 0.28`$, is reasonable for the first-order phase transition. The inter-layer phase difference also shows a sharp jump on the VG–BG phase boundary. This property is consistent with the Josephson plasma resonance experiment of BSCCO by Gaifullin et al.
The present authors would like to thank Prof. Y. Matsuda for communications. Numerical calculations were performed on Numerical Materials Simulator (NEC SX-4) at National Research Institute for Metals, Japan.
## Abstract
We develop a set of equations to describe the population dynamics of many interacting species in food webs. Predator-prey interactions are non-linear, and are based on ratio-dependent functional responses. The equations account for competition for resources between members of the same species, and between members of different species. Predators divide their total hunting/foraging effort between the available prey species according to an evolutionarily stable strategy (ESS). The ESS foraging behaviour does not correspond to the predictions of optimal foraging theory. We use the population dynamics equations in simulations of the Webworld model of evolving ecosystems. New species are added to an existing food web due to speciation events, whilst species become extinct due to coevolution and competition. We study the dynamics of species-diversity in Webworld on a macro-evolutionary timescale. Coevolutionary interactions are strong enough to cause continuous overturn of species, in contrast to our previous Webworld simulations with simpler population dynamics. Although there are significant fluctuations in species diversity because of speciation and extinction, very large scale extinction avalanches appear to be absent from the dynamics, and we find no evidence for self-organised criticality.
## 1 Introduction
Understanding coevolution within communities of interacting species is one of the greatest challenges in the study of ecological systems. Two different sets of issues are involved in modelling such communities. Questions regarding food web structure, the nature of predator-prey interactions, competition for resources, and population dynamics apply on an ecological time scale comparable to the lifetime of individual organisms. Questions regarding evolutionary change of species, introduction of new species to the food web by speciation processes, and removal of species due to extinction apply on an evolutionary time scale orders of magnitude longer than the lifetime of an organism. We argue that these two sets of questions are nevertheless related and that they need to be considered within the same framework. In order to understand food web structures we need to understand the way in which the diversity of organisms in the ecosystem evolved. In order to understand the way the set of species in a food web will coevolve we need to understand the nature of the competitive interactions and predator-prey relationships between them.
The Webworld model, introduced by Caldarelli, Higgs & McKane (1998), and studied further here, is an attempt to model the two timescales simultaneously. The model considers a set of species, each of which has a set of morphological and behavioural features that determine the way it interacts with all the other species, and hence the positions of links in the food web. Population dynamics equations are used to determine the way the population sizes of all the species change over ecological time. In one evolutionary time step of the model one new species is added to the food web, and the populations of all the species then change in response. The new species sometimes adds stably to the ecosystem, sometimes dies out due to competition with existing species, and sometimes causes the extinction of other species. The diversity of species within the ecosystem thus changes on the evolutionary time scale due to speciation and extinction.
In our previous paper (Caldarelli et al (1998), henceforward Paper I), we considered the properties of the food webs generated by Webworld, including the number of basal, intermediate and top species in the web, the number of links per species, and the number of trophic levels. These properties have been measured in real food webs (e.g. Cohen et al., 1990; Hall & Raffaelli, 1991; Goldwasser & Roughgarden, 1993; Martinez & Lawton, 1995), and thus an extensive amount of ecological data exists with which we were able to compare the results of the model. The Webworld model generates food webs with properties that are in reasonable agreement with those of real webs, given the large uncertainties inherent in measurements on real webs. The model also makes predictions about the way food web properties will change as a function of ecological parameters such as the rate of input of external resources to the ecosystem, the efficiency of transfer of resources from prey to consumer at each level of the food chain, and the strength of competition between species for the same resources. As such, we feel that the model goes considerably further than other theoretical models of food web structure, such as the cascade model (Cohen, 1990; Cohen et al, 1990), which are based on constructing random graphs.
The Webworld model also makes predictions about the dynamics of species diversity that can be compared with the evidence from the fossil record. There has been considerable interest in macro-evolutionary models recently, generated by the claim that extinction dynamics are related to the concept of self-organised criticality (Bak & Sneppen, 1993; Solé et al. 1996; Solé et al, 1997; Amaral & Meyer, 1999). The idea is that the avalanches of extinctions visible in the fossil record can be expected to arise from the internal coevolutionary dynamics of the system, and thus one does not need to postulate catastrophic external events such as meteorite strikes or climate changes in order to explain the extinctions. In simulations of the Webworld model in Paper I we found that large extinction avalanches could occur in situations when the ecosystem was poorly adapted to the external conditions, but that as time went by extinction events became smaller and rarer. The ecosystem tended towards a ‘frozen’ food web of mutually well-adapted species that could not be invaded by new species. These results therefore did not support the idea of criticality.
Whilst models such as that of Bak & Sneppen (1993) have the merit of being (deliberately) very simple, we have found that the dynamics of ecosystem models depends substantially on the way that such models are set up, and we feel that it is important to attempt to include some degree of realism in the models if one wishes to draw conclusions about the real world. One of the aims of this Paper is to develop a general set of equations for population dynamics that deals with competition between species and predator-prey interactions in a food web which can have any arbitrary structure of links generated by the evolutionary process. The equations used here are based on ratio-dependent functional responses (Arditi & Ginsburg, 1989; Arditi & Akçakaya, 1990; Arditi & Michalski, 1995) and represent a considerable improvement on those used in Paper I in the way they treat competition between predators for a given prey. Another important change is that increased adaptation of predators leads to a decrease in prey population size in the current model, whereas this was not so in the model used in Paper I. This leads to a continuous overturn of new species replacing old ones, in contrast to the frozen state found in Paper I. Although the stationary state is now a dynamical one, we still find no evidence for self-organised criticality.
The outline of the paper is as follows. In section 2 we define the Webworld model as studied both here and in Paper I. In section 3 we present the new equations for population dynamics. We show that these equations satisfy several logical requirements of general food web models. We also give an interpretation of the choice of diet of predators in terms on evolutionary stable strategies. Section 4 gives details of the simulations of the Webworld model using the new equations. In sections 5 and 6 we investigate the long term evolutionary dynamics, firstly in the absence of predators and then in webs with predation. We conclude in section 7 with a discussion of the implications of these changes and with a comparison with other evolutionary models.
## 2 The Webworld model
This section describes the basis of the Webworld model introduced in Paper I. Details that differ from Paper I will be mentioned specifically. Our model is a stochastic one, since the characteristics of the species and the speciation events are chosen randomly; however, we use deterministic dynamical equations for the population sizes of each species. A species is defined by the set of its morphological or behavioural characteristics. We construct a species in the model by picking $`L`$ features out of a pool of $`K`$ possible features. These features represent morphological and behavioural characteristics, which might be, for example, “nocturnal”, “having sharp teeth” and “ability to run fast”; however, we do not assign particular biological attributes to each feature: they are just integers which run from 1 to $`K`$. In our simulations we take $`L=10`$ and $`K=500`$ for illustrative purposes.
The matrix of scores, $`m_{\alpha \beta }`$, describes how useful one feature, $`\alpha `$, is against any other feature, $`\beta `$. The $`K\times K`$ matrix $`m_{\alpha \beta }`$ is antisymmetric (i.e., $`m_{\alpha \beta }=-m_{\beta \alpha }`$) and is taken to consist of random Gaussian variables with mean zero and unit variance. These are chosen at the beginning of a simulation run and do not change during that particular run. The score $`S_{ij}`$ of one species $`i`$ against another species $`j`$ is then defined as
$$S_{ij}=\mathrm{max}\left\{0,\frac{1}{L}\underset{\alpha \in i}{\sum }\underset{\beta \in j}{\sum }m_{\alpha \beta }\right\},$$
(1)
where $`\alpha `$ runs over all the features of species $`i`$ and $`\beta `$ runs over all the features of species $`j`$. Thus the score of one species against another is essentially just the sum of the scores of the relevant features against each other. A positive score $`S_{ij}`$ indicates that species $`i`$ is adapted for predation on species $`j`$, whilst a zero score means that there is no predator-prey relationship between the species. The scores will be used in the equations for population dynamics described in the next section. The external environment is represented as an additional species 0 which is assigned a set of $`L`$ features randomly at the beginning of a run and which does not change. Species having a positive score against the external environment represent primary producers of the ecosystem.
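The scoring rule of eq. (1) is straightforward to prototype. The sketch below uses a tiny $`K`$ and $`L`$ (the text uses $`K=500`$, $`L=10`$) and a seeded random antisymmetric matrix; it is an illustration of the rule, not code from Paper I:

```python
import random

K, L = 20, 4            # feature pool and features per species (toy values)
rng = random.Random(7)

# antisymmetric Gaussian score matrix: m[a][b] = -m[b][a], zero diagonal
m = [[0.0] * K for _ in range(K)]
for a in range(K):
    for b in range(a + 1, K):
        m[a][b] = rng.gauss(0.0, 1.0)
        m[b][a] = -m[a][b]

def score(features_i, features_j):
    """Eq. (1): S_ij = max(0, (1/L) * sum over a in i, b in j of m[a][b])."""
    s = sum(m[a][b] for a in features_i for b in features_j) / L
    return max(0.0, s)

sp_i = rng.sample(range(K), L)   # two random species as feature sets
sp_j = rng.sample(range(K), L)
s_ij = score(sp_i, sp_j)
s_ji = score(sp_j, sp_i)
# antisymmetry of m means predation can run in at most one direction
```

Note how the antisymmetry of $`m_{\alpha \beta }`$ guarantees that $`S_{ij}`$ and $`S_{ji}`$ cannot both be positive, so each species pair has at most one predator-prey direction.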
The model consists of a changing set of species that may feed on each other and on external resources. External resources are input at a constant rate $`R`$ and are distributed amongst species as a function of their scores, in a way that is discussed later. These resources are then tied up in the ecosystem as potential “food” in the form of prey for predator species. For simplicity, we measure resources and population sizes in the same units, so that $`N_i(t)`$ denotes the number of individuals for species $`i`$ at time $`t`$ or alternatively the amount of resources invested in species $`i`$ at this time.
The short time dynamics is described by a set of equations giving the change of population size of any one species in terms of the population sizes of the other species in the ecosystem. The form of these equations is to be discussed in the next section. A new species is created by a speciation event from one of the existing species. This is carried out by choosing a parent species at random, and introducing a new daughter species into the ecosystem that differs from the parent species by one randomly chosen feature. The new species begins with a population size of 1, and 1 is subtracted from the population of the parent species. The populations of all the species are determined by iterating the population dynamics equations. If the population size of any species falls below one, that species is removed from the system, and so rendered extinct. The population dynamics simulation is continued until all surviving species reach an equilibrium population size, or until a defined large time period is reached without the populations having reached equilibrium. This completes one evolutionary time step of the model, and the program proceeds to add another new species. In order to prevent multiple copies of identical species from arising, each time a new species is added, a check is carried out to ensure that the set of features of the new species is not already represented in the ecosystem.
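The speciation rule just described can be sketched as follows. `speciate` is our own helper name; the rejection of duplicate feature sets and the one-feature change follow the rule in the text.

```python
import random

def speciate(parent_features, K, existing_sets):
    """Create a daughter species differing from the parent in exactly one
    of its distinct features; reject candidates whose feature set already
    occurs in the ecosystem (a sketch of the rule in the text)."""
    while True:
        child = list(parent_features)
        child[random.randrange(len(child))] = random.randrange(K)
        if len(set(child)) == len(child) and frozenset(child) not in existing_sets:
            return child
```

The retry loop also discards candidates in which the new feature duplicates one the daughter already carries, so the daughter always has the full complement of distinct features.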
A minor change between the simulations here and those in Paper I is the way that species are chosen to undergo speciation: in the original model they were chosen with a probability proportional to their population size, here they are chosen randomly. As we will discuss below, this does not lead to qualitative changes in the behaviour of the model. The major change in the model is the form chosen for the population dynamics. This is discussed in detail in the next section.
## 3 Population dynamics
We wish to develop a set of population dynamics equations which is general enough to deal with any food web structure. There have been many models of population dynamics that discuss only two or three species — e.g. plant plus herbivore plus carnivore, or two consumer species competing for the same resource. However many of these models are not easy to generalise to the multiple species case. Most species in an ecosystem are both predators and prey and are in competition with several other species. We require equations which include all these effects at the same time.
Let the rate at which one individual of species $`i`$ consumes individuals of species $`j`$ be denoted by $`g_{ij}(t)`$. This is usually called the ‘functional response’, and it depends in general on the population sizes. We suppose that the population size of each species satisfies an equation of the form:
$$\frac{dN_i(t)}{dt}=-N_i(t)+\lambda \sum _jN_ig_{ij}(t)-\sum _jN_jg_{ji}(t).$$
(2)
The first term on the right represents a constant rate of death of individuals in absence of interaction with other species. The final term is the sum of the rates of predation on species $`i`$ by all other species, and the middle term is the rate of increase of species $`i`$ due to predation on other species. Where there is no predator-prey relationship between the species the corresponding rate $`g_{ij}`$ is zero. For primary producers the middle term includes a non-zero rate $`g_{i0}`$ of feeding on the external resources. The factor $`\lambda `$ is less than 1, and is known as the ecological efficiency. It represents the fraction of the resources of the prey that are converted into resources of the predator at each stage of the food chain. Throughout this paper, we have taken $`\lambda =0.1`$, a value accepted by many ecologists (Pimm, 1982). We have deliberately chosen the form of Eq. (2) to be the same for all species. We do not want to define different equations for primary producers, herbivores, and carnivores etc, because species can change their position in the ecosystem as it evolves, and most species are both predators and prey. For simplicity, we have set the death rate to be equal for all species and the value of $`\lambda `$ to be equal for all species. In a more complex model we could have allowed these quantities to be functions of the sets of features of each species and then these parameters would have been subject to evolution in the same way that the interactions scores between species are subject to evolution. The choice of the death rate to be unity in (2) essentially sets the time scale for the population dynamics: the time appearing in this equation has been scaled so that the coefficient of $`N_i(t)`$ is one.
Equation (2) is different from Eq. (5) in Paper I, which had been designed to be as simple as possible and to have only one stationary state. First, Eq. (5) in Paper I is discrete in time, while Eq. (2) is continuous. This difference is of minor importance, however, as Eq. (2) has to be discretized anyway for computer simulation, giving Eq. (16) below, which has a similar form to Eq. (5) of Paper I in the case of large time steps, $`\mathrm{\Delta }t=1`$. The second and main difference is that in Paper I the term describing the decrease in population size due to predation was chosen to be independent of the rate at which the species is consumed by its predators. Also, the form of the functional response in the consumption term had not been chosen according to ecological considerations.
The main question to be addressed in this section is how to choose a reasonable function $`g_{ij}`$ that is applicable for all the situations that arise in the ecosystem. Since the final form we use is relatively complex, we will describe several particular cases first and build up to the general case.
For a single predator $`i`$ feeding on a single prey $`j`$ we suppose
$$g_{ij}(t)=\frac{S_{ij}N_j(t)}{bN_j(t)+S_{ij}N_i(t)}.$$
(3)
This is known as a ratio-dependent functional response (Arditi & Ginzburg, 1989; Arditi & Michalski, 1995), because $`g_{ij}`$ can be written as a function of the ratio of prey to predators if both top and bottom are divided by the predator population $`N_i`$. When the prey is very abundant, $`g_{ij}=S_{ij}/b`$, i.e. each predator feeds at a constant maximum rate. When predators are numerous compared to the available prey, there is competition between predators, and the rate at which each individual predator can feed on the prey becomes limited by the amount of prey. In this limit the combined rate of consumption of all predators is $`N_ig_{ij}=N_j`$. This situation is known as donor control. Arditi & Akçakaya (1990) have shown that interference between predators is significant and that the ratio-dependent functional response can be applied to a wide range of real species.
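The two limits just mentioned are easy to confirm numerically. The parameter values below are illustrative, not those of the paper's runs.

```python
b, S = 0.005, 0.3  # illustrative values of the model parameters

def g(Nj, Ni):
    """Ratio-dependent functional response of Eq. (3) for one predator-prey pair."""
    return S * Nj / (b * Nj + S * Ni)

per_capita_limit = g(1e9, 10.0)     # abundant prey: g approaches S / b
donor_control = 1e6 * g(50.0, 1e6)  # abundant predators: N_i * g approaches N_j
```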
In our model, the same equation is used to treat consumption of external resources by a primary producer. In this case $`i`$ is the primary producer, and the external resources are $`j=0`$, with a value of $`N_0`$ that is kept fixed and equal to $`R/\lambda `$. By writing down the differential equation (2) for the case of a single primary producer species 1, we find that the population size $`N_1`$ reaches a stationary value when $`\lambda g_{10}=1`$. Hence, the equilibrium population size is
$$N_1=(\lambda S_{10}-b)N_0/S_{10},$$
(4)
provided this is positive, i.e. species 1 can only survive if $`S_{10}>b/\lambda `$. Thus $`b`$ is an important parameter of the model that determines the minimum score necessary for a consumer to survive on a given resource. With the choice of $`N_0`$ given above, $`N_1`$ tends to $`R`$ if it is very well adapted ($`S_{10}\gg 1`$). The parameter $`R`$ represents the fixed rate of supply of non-biological resources, principally sunlight. These resources are renewed constantly and hence are never depleted. Also they cannot accumulate if not used, hence there is no differential equation for the rate of change of $`N_0`$.
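As a check, the fixed point of Eq. (2) for a single primary producer can be found by direct iteration and compared with Eq. (4). All numbers here are illustrative except $`\lambda =0.1`$, the value used throughout the paper.

```python
lam, b, R = 0.1, 0.005, 1.0e5  # b and R illustrative; lam = 0.1 as in the text
S10 = 0.5                      # well above the survival threshold b / lam
N0 = R / lam                   # external resources, held fixed
N1 = 1.0
for _ in range(20000):
    g10 = S10 * N0 / (b * N0 + S10 * N1)  # Eq. (3) applied to the resources
    N1 += 0.2 * (-N1 + lam * N1 * g10)    # Euler step of Eq. (2), dt = 0.2
analytic = (lam * S10 - b) * N0 / S10     # Eq. (4)
```

The iterated population converges to the analytic value, 90000 for these parameters, confirming that the discretization shares the fixed point of the differential equation.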
If there are additional species competing with $`i`$ for predation on species $`j`$, equation (3) can be generalised as follows:
$$g_{ij}(t)=\frac{S_{ij}N_j(t)}{bN_j(t)+\sum _k\alpha _{ki}S_{kj}N_k(t)}.$$
(5)
The sum in the denominator is over all species $`k`$ which prey on $`j`$, including $`i`$ itself, i.e. it is over all species for which $`S_{kj}>0`$. The additional predator populations are present in the denominator because each individual of species $`i`$ is in competition with the other species as well as with other members of its own species. The factor $`\alpha _{ki}`$ is introduced to represent the fact that competition between members of the same species for a resource is usually stronger than competition between members of different species. Thus $`\alpha _{ki}<1`$ when $`k`$ and $`i`$ are not equal and $`\alpha _{kk}=1`$ for all $`k`$. We will in addition suppose that $`\alpha _{ki}=\alpha _{ik}`$. Although addition of this extra factor complicates the equations, it is actually essential in order to permit coexistence of competing species. As an example consider two species 1 and 2 competing for external resources. In this case:
$$g_{10}(t)=\frac{S_{10}N_0(t)}{bN_0(t)+S_{10}N_1(t)+\alpha _{12}S_{20}N_2(t)},$$
(6)
$$g_{20}(t)=\frac{S_{20}N_0(t)}{bN_0(t)+S_{20}N_2(t)+\alpha _{12}S_{10}N_1(t)}.$$
(7)
In the stationary state $`\lambda g_{10}=1`$ and $`\lambda g_{20}=1`$, hence
$$N_1=\frac{N_0(\lambda (S_{10}-\alpha _{12}S_{20})-b(1-\alpha _{12}))}{S_{10}(1-\alpha _{12}^2)},$$
(8)
$$N_2=\frac{N_0(\lambda (S_{20}-\alpha _{12}S_{10})-b(1-\alpha _{12}))}{S_{20}(1-\alpha _{12}^2)}.$$
(9)
For coexistence of the two species both the above must be positive, thus the species can only coexist if
$$-(1-\alpha _{12})(S_{10}-b/\lambda )<S_{20}-S_{10}<(1-\alpha _{12})(S_{20}-b/\lambda ).$$
(10)
Therefore the range of the difference between scores for which coexistence is possible is proportional to $`1-\alpha _{12}`$. If the competition between species is reduced ($`\alpha _{12}`$ is reduced) it becomes easier for species to coexist on the same resources. If $`\alpha _{12}=1`$ only species with identical scores can coexist. Since in general there is more than one predator per prey in real food webs, it is necessary to introduce the $`\alpha `$ parameter into the model. The result that species can only coexist if between-species competition is weaker than within-species competition is also found in other models (e.g. Renshaw (1991) pp 137-139).
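A quick numerical sanity check, with illustrative parameter values: for random score pairs, the analytic populations of Eqs. (8) and (9) are both positive exactly when the coexistence condition of Eq. (10) holds.

```python
import random

random.seed(1)
lam, b, N0, a12 = 0.1, 0.005, 1.0e6, 0.5  # illustrative values
for _ in range(1000):
    S10 = random.uniform(b / lam, 1.0)
    S20 = random.uniform(b / lam, 1.0)
    # Eqs. (8) and (9)
    N1 = N0 * (lam * (S10 - a12 * S20) - b * (1 - a12)) / (S10 * (1 - a12 ** 2))
    N2 = N0 * (lam * (S20 - a12 * S10) - b * (1 - a12)) / (S20 * (1 - a12 ** 2))
    # Eq. (10)
    coexist = -(1 - a12) * (S10 - b / lam) < S20 - S10 < (1 - a12) * (S20 - b / lam)
    assert coexist == (N1 > 0 and N2 > 0)
```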
For the purpose of our simulations we will suppose that the strength of competition depends only on the degree of similarity between the species:
$$\alpha _{ij}=c+(1-c)q_{ij},$$
(11)
where $`c`$ is a constant such that $`0\le c<1`$, and with $`q_{ij}`$ being the fraction of features of species $`i`$ that are also possessed by species $`j`$.
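Eq. (11) translates directly into code; the feature sets are lists of distinct feature indices as in Section 2, and `competition_strength` is our own helper name.

```python
def competition_strength(features_i, features_j, c, L=10):
    """Eq. (11): alpha_ij = c + (1 - c) q_ij, with q_ij the fraction of
    species i's features also possessed by species j."""
    q = len(set(features_i) & set(features_j)) / L
    return c + (1.0 - c) * q
```

Since the size of the feature intersection is symmetric, this definition automatically satisfies $`\alpha _{ij}=\alpha _{ji}`$, as assumed in the text.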
We now consider the case of a single predator with more than one prey. It might seem that we could use equation (3) for each prey $`j`$. However this is unsatisfactory, as explained below. In fact we use:
$$g_{ij}(t)=\frac{S_{ij}f_{ij}(t)N_j(t)}{bN_j(t)+S_{ij}f_{ij}(t)N_i(t)}.$$
(12)
where we have introduced the factor $`f_{ij}`$, which is the fraction of its effort (or available searching time) that species $`i`$ puts into preying on species $`j`$. These efforts must satisfy $`\sum _jf_{ij}=1`$ for all $`i`$. The importance of introducing the efforts can be understood by considering a single predator $`i=3`$ with two prey $`j=1`$ and $`j=2`$ of population sizes $`N_1`$ and $`N_2`$. In the particular case where the prey are equivalent from the predator’s point of view (i.e. $`S_{31}=S_{32}`$), the dynamics of the predator population should be identical to the case where there is just one prey population of size $`N_1+N_2`$. This is a ‘common sense’ condition that has been emphasised by Arditi & Michalski (1995) and Berryman et al (1995), who have shown that many dynamical equations used previously did not satisfy the condition. In our case, since the prey are equivalent, the predator sets its efforts to be proportional to the population sizes: $`f_{31}=N_1/(N_1+N_2)`$ and $`f_{32}=N_2/(N_1+N_2)`$. Calculating the predation rates from equation (12) the total input to the predator can be shown to be
$$g_{31}+g_{32}=\frac{S_{31}(N_1+N_2)}{b(N_1+N_2)+S_{31}N_3},$$
(13)
which is the same as for a combined species of population $`N_1+N_2`$. If the efforts had not been introduced, i.e. equation (3) had been used instead of equation (12), this condition would not be satisfied.
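The aggregation property can be verified directly: with efforts proportional to the prey populations, Eq. (12) for two equivalent prey reproduces Eq. (13) exactly. The numbers below are illustrative.

```python
b, S = 0.005, 0.3
N1, N2, N3 = 400.0, 600.0, 50.0  # two equivalent prey and one predator
f31 = N1 / (N1 + N2)             # efforts proportional to prey populations
f32 = N2 / (N1 + N2)
g31 = S * f31 * N1 / (b * N1 + S * f31 * N3)       # Eq. (12)
g32 = S * f32 * N2 / (b * N2 + S * f32 * N3)
merged = S * (N1 + N2) / (b * (N1 + N2) + S * N3)  # Eq. (13)
```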
There is an additional reason why it is necessary to introduce the $`f_{ij}`$. The rate of decrease of a prey population $`j`$ caused by a predator $`i`$ is $`N_ig_{ij}`$. If the simple equation (3) is used, then the effect of the predators on the prey is very large when the predator population is large compared to the prey. When a new species evolves, it always begins from small population numbers. Usually, there is an existing species which has some ability to act as a predator on the new species. The existing predator has a relatively large population because it must already be successfully feeding on an established prey species in the ecosystem. Thus the new prey species suffers from an enormous level of predation and almost always becomes extinct as soon as it is introduced, even if it is substantially better adapted than the established prey. This prevents a diverse ecosystem from evolving. This problem is solved by introducing the efforts, since initially the predator puts very little effort into feeding on the new prey because there are so few of them. The effect of the predator on the new prey is thus in proportion to the prey’s population size. This permits newly-evolved species to enter the ecosystem in a reasonable way.
We now require a rule by which predators assign their efforts to different prey when the prey are not equivalent. We suppose that the efforts of any species $`i`$ are chosen so that the gain per unit effort $`g_{ij}/f_{ij}`$ is equal for all prey $`j`$. If this were not true, the predator could increase its energy intake by putting more effort into a prey with higher gain per unit effort. This choice of efforts leads to the condition
$$f_{ij}(t)=\frac{g_{ij}(t)}{\sum _kg_{ik}(t)}.$$
(14)
It is shown in the appendix that this choice is an evolutionarily stable strategy (ESS) (Parker & Maynard Smith, 1990). If the population has efforts chosen in this way, there is no other choice of efforts that can do better, i.e. no other strategy that can become more common if it is rare. When prey are equivalent, the ESS solution reduces to setting the efforts in proportion to the prey population sizes, as above.
Combining all the above considerations we arrive at the following general form for the functional response that is used in the Webworld simulations in this paper:
$$g_{ij}(t)=\frac{S_{ij}f_{ij}(t)N_j(t)}{bN_j(t)+\sum _k\alpha _{ki}S_{kj}f_{kj}(t)N_k(t)},$$
(15)
with the efforts given by Eq. (14).
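At fixed population sizes, Eqs. (14) and (15) define the efforts only implicitly, but they can be solved by fixed-point iteration. The following is a minimal NumPy sketch of that iteration (our own vectorisation; the stopping tolerance here is illustrative, and the paper's implementation instead uses a 10% relative-change criterion together with a floor $`f_{\mathrm{min}}`$, both described in the next section).

```python
import numpy as np

def iterate_efforts(N, S, alpha, b, tol=1e-12, f_min=1e-6, max_iter=10000):
    """Fixed-point iteration of Eqs. (14)-(15) at fixed population sizes N.
    S[i, j] and alpha[i, j] are n x n arrays; each row of f holds the
    efforts of one species and sums to (approximately) one."""
    f = np.where(S > 0.0, 1.0, 0.0)
    f /= np.maximum(f.sum(axis=1, keepdims=True), 1.0)   # uniform start
    g = np.zeros_like(f)
    for _ in range(max_iter):
        W = S * f * N[:, None]                  # W[k, j] = S_kj f_kj N_k
        denom = b * N[None, :] + alpha.T @ W    # denominator of Eq. (15)
        g = np.where(S > 0.0,
                     S * f * N[None, :] / np.where(denom > 0.0, denom, 1.0),
                     0.0)
        new_f = g / np.maximum(g.sum(axis=1, keepdims=True), 1e-300)
        new_f = np.where((S > 0.0) & (new_f < f_min), f_min, new_f)
        if np.max(np.abs(new_f - f)) < tol:
            return new_f, g
        f = new_f
    return f, g
```

For a single predator with two equivalent prey this converges to efforts proportional to the prey population sizes, and the gain per unit effort $`g_{ij}/f_{ij}`$ comes out equal across prey, as the ESS condition requires.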
We have previously shown that a restricted form of Eq. (15) was invariant under aggregation of identical prey. It is straightforward to demonstrate that this holds for the general form (15) too. The invariance under aggregation of identical predators can also be shown. If predator $`i`$ and predator $`l`$ are identical, we have $`S_{ij}=S_{lj}`$ and $`\alpha _{ij}=\alpha _{lj}`$ for all $`j`$, and $`\alpha _{il}=1`$. The combined effect of the two species on prey $`j`$ is therefore
$$N_ig_{ij}+N_lg_{lj}=\frac{S_{ij}N_j(N_if_{ij}+N_lf_{lj})}{bN_j+S_{ij}(N_if_{ij}+N_lf_{lj})+\sum _{k\ne l,i}\alpha _{ki}f_{kj}S_{kj}N_k},$$
which is obviously identical to the effect of one predator species of population size $`N_i+N_l`$ and effort
$$(N_if_{ij}+N_lf_{lj})/(N_i+N_l).$$
Therefore our equations Eq. (2) and Eq. (15) satisfy the logical requirements of invariance under aggregation of identical species. We now go on to consider the practicalities of implementing these equations in simulations.
## 4 Implementation of the Webworld program
Solution of time-dependent differential equations involves a numerical algorithm, such as the Runge-Kutta method, which integrates forward by small time steps. We need to simulate the dynamics of large numbers of species over long times, hence the efficiency of the algorithm is important. To speed up the computer simulations, we used a discrete version of the dynamics,
$$N_i(t+\mathrm{\Delta }t)=N_i(t)(1-\mathrm{\Delta }t)+\mathrm{\Delta }t\left[\lambda \sum _jN_ig_{ij}(t)-\sum _jN_jg_{ji}(t)\right],$$
(16)
with a time-step $`\mathrm{\Delta }t=0.2`$, which is quite large. The discrete version of the dynamics would only be identical to the differential equation if $`\mathrm{\Delta }t`$ were very small. However, for our purposes the continuous and discrete time versions provide an equally good description of an ecosystem and we do not wish to distinguish between them. A key point about the equations is that the stationary values of the population sizes from equation (16) are identical to those from equation (2).
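The discrete update of Eq. (16) vectorises naturally. `population_step` is our own helper name; the extinction test ($`N_i<1`$) is left to the surrounding evolutionary loop.

```python
import numpy as np

def population_step(N, g, lam=0.1, dt=0.2):
    """One step of Eq. (16). g[i, j] is the rate at which one individual
    of species i consumes species j."""
    gain = lam * N * g.sum(axis=1)        # lambda * sum_j N_i g_ij
    loss = (N[:, None] * g).sum(axis=0)   # sum_j N_j g_ji (predation on i)
    return N * (1.0 - dt) + dt * (gain - loss)
```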
In the program it is necessary to continuously update the efforts for each species so that they remain close to the ESS values. We assume that efforts can change on a time scale of days, which is much quicker than the change in population sizes, which occurs on the time scale of the generation time of the organism. If the efforts satisfy Eq. (14), then they also satisfy the ESS condition that the gain per unit effort is equal for all prey. If we begin with some choice of efforts that is not the ESS and substitute these into the $`g_{ij}`$ functions on the right of equation (14) we obtain a new set of efforts that is closer to the ESS. Repeated iteration of this equation therefore causes the efforts to converge on the ESS. In our simulations, after each update of the population sizes using equation (16), we updated the efforts by iterating equations (15) and (14) many times whilst keeping the population sizes fixed, until the maximum relative change in effort was smaller than a threshold. In most of our simulations, this threshold was $`10\%`$. Only then did we proceed with the next update of the population sizes.
In principle, a species $`i`$ can assign part of its effort to any species $`j`$ for which $`S_{ij}>0`$, however, the ESS condition means that not all such species end up with a non-zero fraction of the effort. From (15), the gain per unit effort has a limit as $`f_{ij}`$ tends to 0, and this is the maximum achievable value of $`g_{ij}/f_{ij}`$. If this maximum value is less than the gain per unit effort that can be achieved from some other combination of prey excluding $`j`$, then the ESS solution has $`f_{ij}=0`$, i.e. $`i`$ does not include $`j`$ in its diet. We believe from numerical studies that there is a unique ESS choice of efforts for any fixed set of population sizes. We have proved this in some special cases, but have not yet found a general proof.
The efforts of each species change continuously during the simulation. If $`f_{ij}=0`$ at some point in time, it does not necessarily remain so. If the population size of species $`j`$ increases it may pay $`i`$ to switch some of its effort to $`j`$. Also if a third species goes extinct which was a well-adapted predator of $`j`$, it may be possible for $`i`$ to feed on $`j`$, whereas previously it could not do so because it was outcompeted by the third species. In cases where the ESS solution for a particular effort is zero, iteration of equation (14), from a small starting value, causes it to become ever closer to zero, and eventually the computer sets it to zero when it falls below the smallest positive number representable in floating-point arithmetic. This creates a problem, since if an effort is exactly zero, it can never increase again by iteration of equation (14). Therefore we introduced a minimum effort $`f_{\mathrm{min}}`$ in the simulation program, such that whenever the value of an effort $`f_{ij}`$ became smaller than $`f_{\mathrm{min}}`$, but the score $`S_{ij}`$ was positive, we set $`f_{ij}=f_{\mathrm{min}}`$. This allows efforts that were previously effectively zero to recover again to large values if conditions change. In cases where the score $`S_{ij}`$ is zero, then the corresponding effort is also exactly zero. In these simulations we chose $`f_{\mathrm{min}}=10^{-6}`$, and found that the results do not depend on the precise value of $`f_{\mathrm{min}}`$, as long as it is small enough.
Beginning from the same ratio-dependent functional response for one predator and one prey (equation 3), Michalski & Arditi (1995) and Arditi & Michalski (1995) generalised the equation to a food web in a way that is different from our equation (15). These authors introduced quantities $`X_i^{r(j)}`$, the part of species $`i`$ that is being accessed as a resource by species $`j`$, and $`X_j^{c(i)}`$, the part of species $`j`$ that is acting as a consumer of species $`i`$. Similar quantities were used by Berryman et al. (1995) with a different form of the functional response. The values of these quantities must be determined self-consistently at each moment in time. This is qualitatively similar to the way in which we determine the solution for the efforts $`f_{ij}`$ at each moment in time. Our equations are simpler because they only require one set of auxiliary variables per species rather than two. We are also able to give an interpretation of the $`f_{ij}`$ in terms of the ESS, which was not done with the alternative formulation using the $`X`$ parameters. The two formulations nevertheless predict similar effects. Michalski & Arditi (1995) show that as the $`X`$ parameters change, links can appear and disappear from the food web, and hence that the structure of the web is not the same at equilibrium as away from equilibrium. The same behaviour is seen in our approach, since the $`f_{ij}`$ can change from zero to non-zero and vice-versa.
## 5 Competition for external resources in the absence of predation
We begin by considering the competition between primary producer species for the external resources $`R`$ in the absence of consumer species at higher levels of the food web. We wish to determine how great a degree of diversity can be generated by evolution in this case, and to study the way the competition strength affects this diversity. All species feed on the external resources (“species 0”) only, therefore $`S_{i0}>0`$. All the scores $`S_{ij}`$ for interaction between species are set to zero in this case. Since each species has only one food source, all the efforts are identical to 1. We initialize the model by assigning the values of the feature score matrix $`m_{\alpha \beta }`$, and by choosing 10 random features to be the features of the environment. We then introduce the first species with 10 randomly chosen features and a population size $`N_1(0)=1`$. After iterating the population equations until all population sizes converge, a new species is created by speciation, as described in section 2, and this process is repeated for many evolutionary time steps.
For survival as a primary producer, a species must have a score $`S_{i0}>b/\lambda `$, as shown by equation (4) above. In addition species can coexist only if their scores are sufficiently close. The conditions for coexistence in the case of just two species were given in equation (10). Species with scores which are too low are out-competed, and become extinct. The strength of competition $`\alpha _{ij}`$ between species depends on their degree of similarity $`q_{ij}`$ as defined in equation (11). Thus a species with a relatively low score that is phenotypically distant from its competitors (shares few features) experiences reduced competition and may survive, whereas another species with the same score that is similar to a well-adapted high-score species may be out-competed. The rationale behind this is that different species can use resources in different ways. If plants diversify by being of different sizes, by adapting to different temperatures and moisture levels, and by adopting different means of dispersal etc., they can make more effective use of the fixed amount of sunlight and ground space that is available. When $`c`$ is close to 1 there is strong competition even between distantly related species. When $`c`$ is small there is much weaker competition between distant species, hence we would expect greater diversification of the ecosystem when $`c`$ is small.
The results of one simulation run are shown in Figure 1. One can see that the species configuration becomes fixed after approximately 13000 time steps. We continued the simulation to 100000 time steps and did not see any change. Obviously, the species configuration is such that no new species can be generated that can survive in the presence of all the existing species. However, the new species generated are all similar to existing species (they differ by only one feature out of 10 from their parent species). Thus the set of species that arises is not necessarily stable against all possible species, just against those which can arise by small changes of the existing set. If we use the same score matrix $`m_{\alpha \beta }`$ and the same set of features for the external environment, but we start with a different initial species, the stationary configuration is different. We also found that the surviving species with the highest score is usually not the one with the largest population. This is because of the dependence of competition on similarity. The species with the highest possible score against the environment is usually not part of the stationary configuration. For example, when $`c=0`$, we found that the mean score of the species in the stationary configuration is below 6.0, while the best possible score is close to 8.0 for our choice of the environment and the score matrix. The similarity between the majority of pairs of species was $`q_{ij}=0`$. When the simulation was started with the species with the highest score, it died out quickly, but the mean stationary score was higher than with a random initial species. This situation was different when $`c`$ was larger. For $`c\ge 0.8`$, the advantage of being different was smaller, and the stationary population contained the species with the highest possible score, together with other species with a high degree of similarity to it.
These results in the absence of predation confirm the importance of the parameter $`c`$, and show that the reduction in competition as similarity between species decreases is an important factor in promoting the evolution of diversity. They also show that in the absence of predation the ecosystem evolves towards a frozen state that cannot be invaded by new species.
## 6 Webs with predation
In this section we simulate the full Webworld model in the presence of predation. We start the simulations again with a single primary producer species, but as the species diversify, webs with several trophic layers are created. Even though the network of interactions between species can be complex, we find that the iteration of the population dynamics equations usually converges rapidly to a fixed point. This property was also shared by the simpler set of equations we used in Paper I. This means that we can wait for convergence of all the population sizes for each set of species before the next species is added. More complex dynamics is found in other food web models, such as the one discussed by Blasius et al (1999). Our main interest here is to determine which species survive in the long term. Thus the important features of the dynamics are whether the newly evolved species can increase in number initially, and whether any other species drop below the extinction threshold, $`N=1`$.
The following figures and data show a selection of simulation results. Figure 2 shows the number of species in the system as a function of time, measured again in speciation events. The two curves differ in the set of random numbers, but not in the parameter values. This means that they differ in their score matrix, in the features of the external environment and of the first species.
In contrast to the case without predation, the web has a continuous overturn of species even after a long time. This result is different from Paper I, where the species configuration became so well adapted that it could not be invaded by new species. From Fig. 2, one can also see that simulations with different random numbers give rise to webs with similar species numbers and fluctuation strength. It appears that the differences between runs with the same parameter values are smaller than they were with the previous equations (cf. figure 3 in Paper I). We also looked at other quantities besides the species numbers, such as those shown in Table 1 below, and they are similar for the two simulation runs.
Next, we studied the influence of the parameter $`c`$ in Eq. (11) on the properties of the web. As discussed earlier in this paper, a smaller value of $`c`$ leads to less competition between species, and it promotes diversity. Figure 3 shows the number of species as a function of time for four different values of the parameter $`c`$, all other parameters being equal. One can see that the species number increases with decreasing $`c`$, due to the decrease in competition. We have argued before that as $`c`$ decreases, the efficiency in exploiting a food source depends more on the overlap of a species with its competitors, and less on its score. This is demonstrated in Figure 4, which shows selected scores as a function of time. Initially, the basal species with the highest score $`S_{i0}`$ is chosen and its score is plotted as long as the species exists. When the monitored basal species becomes extinct, the basal species with the best score at that moment is chosen and monitored, and so on. We have done the same for the predator-prey pair with the highest score $`S_{ij}`$. Each step in the curves means that the monitored species has become extinct. The figure also shows that species overturn is higher on higher trophic levels. This is no surprise, since basal species have the largest population sizes, and it is therefore more difficult to drive them to extinction. We find that the scores are higher for larger $`c`$, and that, in particular, the basal species are replaced less often for larger $`c`$. For a value of $`c=0.8`$ there was an indication that the monitored basal species had become fixed.
In Paper I we looked extensively at the structure and statistical properties of the food webs generated by Webworld, and compared these to real food webs. We now wish to look at these properties in the new simulations to see how the changes in the form of the dynamical equations influence the web structure. It turns out that, as in Paper I, very reasonable agreement with quantities measured in real food webs can be achieved for certain values of the parameters of the model. For the purpose of defining food web structure, we consider a link between species $`i`$ and species $`j`$ to be present if species $`i`$ consumes at least one individual of species $`j`$ per unit time, i.e. if $`g_{ij}>1`$. As in Paper I, we define the trophic level of a species to be the number of links on the shortest path from the external resources to that species. In the results tables the Average level is the mean trophic level averaged over all species in the web and averaged over time. The Average maximum level is the mean value of the maximum trophic level in the web averaged over time. In the analysis of food webs, species are often classified according to whether they are basal, intermediate, or top. Basal species live exclusively on the external resources (i.e. they have no prey). Top species have no predators. Intermediate species have both predators and prey. The following table summarizes the results for different values of $`c`$, and for the same parameters as in Fig. 3, averaged over several thousand time steps:
| $`c`$ | $`0.8`$ | $`0.6`$ | $`0.4`$ | $`0.2`$ |
| --- | --- | --- | --- | --- |
| No. of species | 27 | 55 | 79 | 196 |
| Links per species | 1.68 | 1.70 | 2.33 | 5.33 |
| Av. level | 2.15 | 2.28 | 2.38 | 2.45 |
| Av. max. level | 4.0 | 3.91 | 3.69 | 3.03 |
| Basal species (%) | 12 | 9 | 8 | 2 |
| Intermediate species (%) | 86 | 90 | 90 | 90 |
| Top species (%) | 2 | 1 | 2 | 8 |
| Mean overlap level 1 | 0.71 | 0.37 | 0.22 | 0.06 |
| Mean overlap level 2 | 0.30 | 0.13 | 0.08 | 0.04 |
| Mean overlap level 3 | 0.17 | 0.09 | 0.07 | 0.04 |
Table 1. Results of simulations of the model with $`R=10^5`$ and $`b=5\times 10^3`$
for four values of the competition parameter $`c`$.
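The trophic-level convention used in these tables — the number of links on the shortest path from the external resources to a species — can be sketched as a breadth-first search over the directed web (the small web below is illustrative, not taken from the simulations):

```python
from collections import deque

def trophic_levels(prey):
    """Shortest-path trophic levels; node 0 is the external resources."""
    eaters = {}                               # invert the prey lists: who eats node n
    for sp, ps in prey.items():
        for p in ps:
            eaters.setdefault(p, []).append(sp)
    level, queue = {0: 0}, deque([0])
    while queue:
        n = queue.popleft()
        for sp in eaters.get(n, []):
            if sp not in level:               # first visit = shortest path from resources
                level[sp] = level[n] + 1
                queue.append(sp)
    return level

web = {1: [0], 2: [0], 3: [1, 2], 4: [3], 5: [2, 3]}   # species -> list of its prey
levels = trophic_levels(web)
# species 1 and 2 are basal (level 1), 3 and 5 sit at level 2, 4 at level 3
```

Species at level 1 here correspond to the basal species of the tables.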
As can be seen from Fig. 3, the two simulations for $`c=0.4`$ and $`c=0.2`$ have not yet reached their stationary state. Nevertheless, Table 1 shows several trends: with decreasing $`c`$, the fraction of intermediate species, the number of links per species, and the average trophic level of a species increase, whilst the fraction of basal species decreases. The mean overlaps on levels 1, 2 and 3 are the mean values of the quantity $`q_{ij}`$ (fraction of shared features) for all pairs of species on the same level. We observe that the overlap is higher on the lower levels (i.e. lower level species are less diverse). The mean overlap on each level decreases as $`c`$ decreases, because the competition between overlapping species is then stronger. The same effect was discussed in the previous section for the case with no predation.
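With each species represented as a set of features, as in Paper I, the pairwise overlap $`q_{ij}`$ (fraction of shared features) behind the last three rows of the table can be computed directly (the feature sets below are made up for illustration):

```python
from itertools import combinations

L = 10                               # number of features per species (illustrative)
features = {                         # made-up feature sets for three species
    "sp1": {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
    "sp2": {0, 1, 2, 3, 4, 10, 11, 12, 13, 14},
    "sp3": {0, 1, 15, 16, 17, 18, 19, 20, 21, 22},
}

def q(i, j):
    """Fraction of features shared by species i and j."""
    return len(features[i] & features[j]) / L

pairs = list(combinations(features, 2))
mean_overlap = sum(q(i, j) for i, j in pairs) / len(pairs)
# q(sp1,sp2) = 0.5, q(sp1,sp3) = 0.2, q(sp2,sp3) = 0.2, so the mean is 0.3
```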
The effect of the size of the resources on the number of species is shown in Figure 5. As would be expected, a larger set of resources can sustain a larger web of species. Table 2 shows the mean values of selected properties of the web for these simulations:
| $`R`$ | $`1.0\times 10^4`$ | $`1.0\times 10^5`$ | $`3.5\times 10^5`$ | $`1.0\times 10^6`$ |
| --- | --- | --- | --- | --- |
| No. of species | 33 | 57 | 82 | 270 |
| Links per species | 1.76 | 1.91 | 1.91 | 2.96 |
| Av. level | 1.95 | 2.35 | 2.65 | 3.07 |
| Av. max. level | 3.0 | 3.9 | 4.0 | 4.4 |
| Basal species (%) | 18 | 9 | 5 | 11 |
| Intermediate species (%) | 80 | 89 | 89 | 89 |
| Top species (%) | 2 | 2 | 6 | 1 |
| Mean overlap level 1 | 0.32 | 0.34 | 0.31 | 0.27 |
| Mean overlap level 2 | 0.17 | 0.12 | 0.11 | 0.15 |
| Mean overlap level 3 | 0.19 | 0.09 | 0.09 | 0.12 |
Table 2. Results of simulations of the model with $`c=0.5`$ and $`b=5\times 10^3`$
for four values of the resource $`R`$.
With increasing size of resources, the number of species, the number of levels, and the number of links per species increase. A larger fraction of species are intermediate species. An exception is the last simulation (with $`10^6`$ resources), which has not yet reached its stationary state.
We also studied the dependence of the model properties on the other parameters. As $`b`$ is increased, it is more difficult for a species to become established, and in particular during the early stages of a simulation run, the species numbers are smaller.
When the expression for the $`\alpha _{ij}`$ in Eq. (11) is modified, the simulation results are qualitatively similar. We tested explicitly the choice where $`\alpha _{ij}`$ is 1 for $`q_{ij}=1`$, and a constant smaller than 1 for all other $`q_{ij}`$.
The figures given in the tables represent averages over several runs. The standard deviation is moderate for links per species and the average level and average maximum level (about 5% of the mean), but larger for the number of species and the overlaps (about 10%). The fluctuations in the fraction of basal and intermediate species are 10% and 5% respectively, similar to what was found in Paper I. Not surprisingly, given the small numbers involved, the top species have a large standard deviation (about 50% of the mean). For the simulation with the largest value of $`R`$, which has not yet fully reached the stationary state, and where the fluctuations in the number of species are rather large, the figures given in the table are very rough. In Paper I we made extensive comparisons with the statistics of real food webs, hence we will not do that here. The equations of the present paper generate food webs similar to those of Paper I.
As mentioned in Section 2, the rule for speciation can be chosen in different ways. Instead of choosing each species with the same probability to be the parent of a new species, we also did a simulation, where a species was chosen with a probability proportional to its population size. The mean number of species is smaller if species undergo speciation in proportion to their population size. The reason is that more change is happening on the lower trophic levels, making it more difficult for species in the higher trophic levels to become established.
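The two parent-selection rules compared in this paragraph differ only in their sampling weights; a minimal sketch (names and numbers illustrative):

```python
import random

species = ["s1", "s2", "s3"]
pops = [1000, 50, 5]                 # population sizes (illustrative)

random.seed(1)
uniform_parent = random.choice(species)                       # each species equally likely
weighted_parent = random.choices(species, weights=pops)[0]    # population-proportional variant
```

Under the weighted rule most speciation events originate from the large populations on the lower trophic levels, which is the effect described above.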
Finally, we studied the size distribution of extinction events. Figure 6 shows the number $`N(s)`$ of events for which $`s`$ species went extinct during one time step for one long simulation run. There is a sharp maximum at $`s=1`$, which is due to the fact that more than 90% of the species created by a mutation cannot survive in the presence of all the other species. The curve has an exponential decay for larger $`s`$, indicating that large extinction events are unlikely. This is very different from the “self-organised critical” behaviour found by Bak and Sneppen (1993) in computer simulations of a much simpler model for large-scale evolution, where the size distribution of extinction events follows a power law $`N(s)\sim s^{-\tau }`$ with $`\tau \approx 2`$.
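A curve like Figure 6 is just a tally of the per-time-step extinction counts; with made-up data:

```python
from collections import Counter

extinctions_per_step = [1, 0, 2, 1, 1, 3, 1, 0, 1, 2, 1, 1]   # illustrative counts
N = Counter(s for s in extinctions_per_step if s > 0)          # N(s), dropping quiet steps
# here N(1) = 7 dominates, mirroring the sharp maximum at s = 1 described above
```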
## 7 Conclusion
In this paper, we have studied a model for evolving food webs. We have established a set of coupled ecological equations for the population sizes of the different species in a web which satisfies the logical requirements put forward by Arditi & Michalski (1995), and in which the distribution of foraging effort for each predator follows an evolutionarily stable strategy. We have shown that the model generates food web structures that are comparable to those of real webs, and have considered the trends in the web statistics with changing parameter values.
In the absence of predation, the model gives rise to a stable set of species that cannot be invaded by any close variant species. In contrast, in the presence of predation, a web is built that has a continuous overturn of species. This result is different from the stable species configurations found in Paper I. As the size distribution of extinction events falls off exponentially, our results are also different from those of several simpler models for large-scale evolution, which usually have a size distribution of extinction events that falls off like a power law with an exponent close to 2 (Bak & Sneppen, 1993; Amaral & Meyer, 1999).
A central point in the theory of self-organised critical systems is that small perturbations can sometimes lead to large responses. In the original sandpile model (Bak et al, 1988) the addition of one sand grain to the top of a pile can sometimes lead to avalanches of falling grains. In Webworld, the equivalent effect is the addition of a new species which can occasionally lead to several other species becoming extinct. It is important that extinctions occur as a result of the changes in population sizes caused by adding the new species. There are no random extinctions in Webworld: we do not remove a species unless its population falls below 1. There is also no random replacement of species; when a new species is added, the parent species is not removed. It may happen that the new species out-competes the parent species and thus replaces it, but this only happens if the new species is better adapted than the old one. In contrast, most other macroevolution models (e.g. Bak & Sneppen, 1993; Solé et al. 1996; Amaral & Meyer, 1999) either include random extinction or random replacement of species. If this is done then sooner or later a very well adapted species with a high population will be removed by chance, and this is likely to have a large effect on the structure of the ecosystem and maybe lead to further extinctions. In the sandpile analogy this is like removing a grain from the bottom of the pile. It would not be surprising if changes of this type caused large avalanches. It could be argued that chance extinctions might occur due to stochastic fluctuations in population sizes. Our dynamics is deterministic and thus does not allow for this possibility, however stochastic fluctuations are unlikely to affect species with high population sizes sufficiently to drive them extinct. Therefore we feel that simply removing species whose populations fall below the threshold value of 1 is an adequate way of dealing with extinctions.
One of the major questions that one would wish to address with models such as ours is whether the large scale extinction events observed in the fossil record could arise as a result of the internal dynamics of the ecosystem, or whether external causes are required. We have found in preliminary simulations with our model (results not shown) that even relatively minor changes to the external environment (species 0 in our model) are capable of causing large scale extinction events, which in turn lead to the potential for the growth of new species. Thus it seems clear to us that external perturbations can cause extinction avalanches. The more interesting question is therefore whether mass extinctions occur with a static external environment - the case considered in this paper. Although the long-term behaviour of our model shows a continual overturn of species, no large scale extinctions are seen and we therefore deduce that environmental changes are required to produce these. As we have stressed above, we believe that what is referred to as “internal dynamics” in some models is effectively external, since in these models perfectly well adapted species are removed by random changes. So in our view these effects are indistinguishable from the elimination of species due to some random external perturbation due to environmental change. We conclude that great care has to be taken to distinguish between truly internal dynamics and external influences. For this a realistic model of evolutionary dynamics is required. It is likely that both internal and external effects exist in the spectrum of extinction events seen in the fossil record.
Another question of interest concerns the robustness of the simulation results to modifications of the model. As mentioned at several places in this paper, we found that our qualitative results are insensitive to a variety of changes that we made. However, we should also mention that our findings depend sensitively on a good implementation of the rule for updating the efforts. If the efforts are not given enough time to equilibrate with respect to the population sizes, small values of the efforts cannot recover quickly when a prey becomes more abundant, and we found that this occasionally led to large extinction avalanches which destroyed almost the entire web. Another type of undesirable behaviour was also observed in some simulations where we did not allow the efforts to equilibrate properly. If the efforts of new species are initialized such that they are not close to their equilibrium value, the species configuration becomes frozen after some time, because no new species can become established.
There are many other questions to ask: experimental ones which relate to the comparison with real systems and theoretical ones which have to do with model structure. We hope to investigate such questions in the future. However, we believe that the model introduced in this paper is an important step in our understanding of the evolution of food web structure, being both simple enough to give an understanding of the basic mechanisms at work and realistic enough to allow comparison with data collected by ecologists.
## Acknowledgements
This work was supported in part by EPSRC grant K/79307 (BD and AJM) and by the Minerva Foundation (BD).
## Appendix. Evolutionarily Stable Strategies
Here we consider a predator species $`i`$ and we determine the ESS choice of efforts. Let the total population be $`N_i`$, and suppose that the majority of the population, $`N_i-n_i`$, have a foraging strategy defined by the efforts $`f_{ij}`$, whilst a small minority, $`n_i`$, have a different strategy $`h_{ij}`$. Following the usual argument of evolutionary game theory, we require to calculate the payoff to the minority and majority strategies, and hence to determine conditions under which the minority can invade. In this case the payoff is the total rate of gain of resources from all prey. The payoff for the strategy $`f_{ij}`$ in absence of the minority strategy is
$$G=\sum _j\frac{S_{ij}f_{ij}N_j}{S_{ij}f_{ij}N_i+K_{ij}},$$
(17)
where, for convenience, we use $`K_{ij}`$ to denote all the terms in the denominator of the $`g_{ij}`$ function in equation (15) that do not depend on the efforts of species $`i`$:
$$K_{ij}=bN_j+\sum _{k\ne i}\alpha _{ki}S_{kj}f_{kj}N_k.$$
(18)
The payoff for the minority species in the presence of the majority is:
$$G_{min}=\sum _j\frac{S_{ij}h_{ij}N_j}{S_{ij}f_{ij}(N_i-n_i)+S_{ij}h_{ij}n_i+K_{ij}}.$$
(19)
In the above equation, since the two strategies are played by different individuals which are members of the same species, the $`\alpha `$ factor for competition between individuals with different strategies is 1, as it is for individuals using the same strategy. We require the payoff to the invading strategy when it is rare (i.e. when $`n_i\ll N_i`$), which is just obtained by setting $`n_i`$ equal to zero in (19). In a similar way the payoff to the majority strategy in the presence of the minority can be written down, but this reduces to (17) when $`n_i\ll N_i`$. This gives
$$G_{min}-G=\sum _j\frac{S_{ij}N_j(h_{ij}-f_{ij})}{S_{ij}f_{ij}N_i+K_{ij}}=\sum _j(h_{ij}-f_{ij})\frac{g_{ij}}{f_{ij}},$$
(20)
where $`g_{ij}/f_{ij}`$ in the equation above is the gain per unit effort from prey $`j`$ for the majority strategy. The minority strategy can invade if $`G_{min}-G>0`$.
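Equation (20) can be checked numerically in the $`n_i\ll N_i`$ limit; all numbers below are illustrative stand-ins for $`S_{ij}`$, $`N_j`$, $`N_i`$ and $`K_{ij}`$:

```python
S = [2.0, 1.0, 4.0]          # scores S_ij against three prey (illustrative)
N = [500.0, 800.0, 200.0]    # prey populations N_j
Ni = 50.0                    # predator population N_i
K = [3.0, 4.0, 2.0]          # the K_ij of Eq. (18), taken as given
f = [0.5, 0.3, 0.2]          # resident efforts f_ij
h = [0.4, 0.4, 0.2]          # rare variant efforts h_ij

denom = [S[j]*f[j]*Ni + K[j] for j in range(3)]
G     = sum(S[j]*f[j]*N[j] / denom[j] for j in range(3))    # Eq. (17)
G_min = sum(S[j]*h[j]*N[j] / denom[j] for j in range(3))    # Eq. (19) with n_i -> 0
rhs   = sum((h[j] - f[j]) * S[j]*N[j] / denom[j] for j in range(3))
# G_min - G agrees with the right-hand side of Eq. (20)
```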
Now suppose that the invading strategy differs from the majority strategy for two prey species $`k`$ and $`l`$, so that $`h_{ik}=f_{ik}+\mathrm{\Delta }f`$, $`h_{il}=f_{il}-\mathrm{\Delta }f`$, and $`h_{ij}=f_{ij}`$ for all the other prey $`j`$. In this case
$$G_{min}-G=\mathrm{\Delta }f\left(\frac{g_{ik}}{f_{ik}}-\frac{g_{il}}{f_{il}}\right).$$
(21)
If the gain per unit effort from prey $`k`$ is higher than that from prey $`l`$ then any strategy with positive $`\mathrm{\Delta }f`$ can invade, whilst if the reverse is true then any strategy with negative $`\mathrm{\Delta }f`$ can invade. However, if the gain per unit effort is equal for the two prey then variant strategies are neutral. The same can be said for any pair of prey species $`k`$ and $`l`$. It therefore follows that the ESS is the strategy with the gain per unit effort being equal for all prey. If neutral variant strategies accumulate to a non-negligible fraction, then selection will again operate to drive the population back to the ESS.
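Numerically, one way to reach this equal-gains strategy is a replicator-style fixed-point iteration that reweights each effort by its current gain per unit effort (a sketch with made-up parameters, not necessarily the update rule used in the simulations):

```python
def ess_efforts(S, N, Ni, K, iters=5000):
    """Iterate toward the ESS efforts of one predator.

    At the ESS the gain per unit effort g_j/f_j = S_j N_j / (S_j f_j Ni + K_j)
    is the same for every prey j.  All parameter values are illustrative.
    """
    n = len(S)
    f = [1.0 / n] * n                 # start from uniform efforts, sum f_j = 1
    for _ in range(iters):
        gpe = [S[j]*N[j] / (S[j]*f[j]*Ni + K[j]) for j in range(n)]
        f = [f[j]*gpe[j] for j in range(n)]
        total = sum(f)
        f = [x / total for x in f]    # renormalise so the efforts sum to 1
    return f, gpe

f, gpe = ess_efforts([2.0, 1.0, 4.0], [500.0, 800.0, 200.0], 50.0, [3.0, 4.0, 2.0])
# the gains per unit effort equalise across the three prey at the fixed point
```

The reweighting step leaves a strategy unchanged exactly when all gains per unit effort are equal, which is the ESS condition derived above.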
It is interesting to note that the ESS does not correspond to the solution predicted by optimal foraging theory (OFT) (Stephens & Krebs, 1986). The OFT solution would be to maximise $`G`$ in equation (17) with the constraint that the efforts sum to 1. This can be calculated, and the result is (by definition) greater than the total gain to a predator when all adopt the ESS. However, a single ESS predator in a population of OFT predators actually has a higher total gain than the OFT population. Thus the ESS can invade the OFT solution, but the reverse is not true. Hence we argue that the ESS is the appropriate choice of efforts for our model.
In most of the models considered by Stephens & Krebs (1986) the payoff to the predator is not affected by what other predators do, therefore the straightforward OFT solution of optimising the total rate of energy intake is appropriate (see their comments on p 211 regarding game theory). However, competition between predators of the same species and between different species is an essential part of the way our population dynamics equations are set up, and we also believe it is an important factor in real ecosystems. Therefore it is important to treat the foraging problem from a game theory point of view. The need for game theory has also been defended recently by Reeve & Dugatkin (1998). Various game theory models dealing with aspects of foraging behaviour have been proposed (Matsuda et al., 1996; Shaw et al., 1995; Leonardsson & Johansson, 1997; Visser, 1991; Giraldeau & Livoreil, 1998; Sih, 1998).
## References
Amaral, L.A.N. & Meyer, M. (1999). Environmental changes, co-extinction, and patterns in the fossil record. Phys. Rev. Lett. 82, 652-655.
Arditi, R. & Ginzburg, L.R. (1989). Coupling in predator-prey dynamics: Ratio-dependence. J. Theor. Biol. 139, 311-326.
Arditi, R. & Akçakaya, H.R. (1990). Underestimation of mutual interference of predators. Oecologia 83, 358-361.
Arditi, R. & Michalski, J. (1995). Nonlinear food web models and their responses to increased basal productivity. In Food webs: integration of patterns and dynamics (ed. G.A. Polis & K.O. Winemiller), pp. 122-133, Chapman & Hall, London.
Bak, P., Tang, C. and Wiesenfeld, K. (1988). Self-organized criticality, Phys. Rev. A38, 364-374.
Bak, P. & Sneppen, K. (1993). Punctuated equilibrium and criticality in a simple model of evolution. Phys. Rev. Lett. 71, 4083-4086.
Berryman, A.A., Michalski, J., Gutierrez, A.P., Arditi, R. (1995). Logistic theory of food web dynamics. Ecology 76, 336-343.
Blasius, B., Huppert, A. & Stone, L. (1999). Complex dynamics and phase synchronization in spatially extended ecological systems. Nature 399, 354-359.
Caldarelli, G., Higgs, P.G., McKane, A.J. (1998). Modelling coevolution in multispecies communities. J. Theor. Biol. 193, 345-358.
Cohen, J.E. (1990). A stochastic theory of community food webs VI - Heterogeneous alternatives to the Cascade model. Theor. Pop. Biol. 37, 55-90.
Cohen, J.E., Briand, F. & Newman, C.M. (1990). Biomathematics Vol. 20. Community Food Webs, Data and Theory. Springer Verlag, Berlin.
Giraldeau, L.A. & Livoreil, B. (1998). Game theory and social foraging. Game Theory and Animal Behaviour 16-37. Eds. Dugatkin, L.A & Reeve, H.K. Oxford University Press.
Goldwasser, L. & Roughgarden, J. (1993). Construction and analysis of a large Caribbean food web. Ecology 74, 1216-1233.
Hall, S.J. & Raffaelli, D. (1991). Food web patterns: lessons from a species-rich web. J. Anim. Ecol. 60, 823-842.
Leonardsson, K. & Johansson, F. (1997). Optimum search speed and activity: a dynamic game in a three-link trophic system. J. Evol. Biol. 10, 703-729.
Martinez, N.D. & Lawton, J.H. (1995). Scale and food web structure - from local to global. Oikos 73, 148-154.
Matsuda, H., Hori, M. & Abrams, P.A. (1996). Effects of predator-specific defence on biodiversity and community complexity in two-trophic-level communities. Evolutionary Ecology 10, 13-28.
Parker, G.A. & Maynard Smith, J. (1990). Optimality theory in evolutionary biology. Nature. 348, 27-33.
Michalski, J. & Arditi, R. (1995). Food web structure at equilibrium and far from it: is it the same? Proc. R. Soc. Lond. B 259, 217-222.
Pimm, S. L. (1982). Food Webs. Chapman & Hall, London.
Reeve, H.K & Dugatkin, L.A. (1998). Why we need evolutionary game theory. Game Theory and Animal Behaviour 304-311. Eds. Dugatkin, L.A & Reeve, H.K. Oxford University Press.
Renshaw, E. (1991). Modelling Biological Populations in Space and Time. Cambridge studies in mathematical biology, 11. Cambridge University Press.
Shaw, J.J., Tregenza, T., Parker, G.A. & Harvey, I.F. (1995). Evolutionarily stable foraging speeds in feeding scrambles: a model and an experimental test. Proc. Roy. Soc. Lond. B 260, 273-277.
Sih, A. (1998). Game theory and predator-prey response races. Game Theory and Animal Behaviour 221-238. Eds. Dugatkin, L.A & Reeve, H.K. Oxford University Press.
Solé, R.V., Bascompte, J. & Manrubia, S.C. (1996). Extinction: good genes or weak chaos? Proc. Roy. Soc. Lond. B 263, 1407-1413.
Solé, R.V., Manrubia, S.C., Benton, M. & Bak, P. (1997). Self-similarity of extinction statistics in the fossil record. Nature 388, 764-767.
Stephens, D.W. & Krebs, J.R. (1986). Foraging Theory. Princeton University Press, New Jersey.
Visser, M.E. (1991). Prey selection by predators depleting a patch: an ESS model. Netherlands J. Zool. 41, 63-80.
## Figures
# Colliding Plane Impulsive Gravitational Waves
## 1 Introduction
This paper is a study of the space–time describing the vacuum gravitational field left behind after the head–on collision of two plane impulsive gravitational waves. The known exact solutions are the classical solution of Khan and Penrose and its generalisation by Nutku and Halil . No details of the derivation were given by Khan and Penrose. The Nutku and Halil solution was obtained using a harmonic mapping technique. The latter solution was subsequently rederived by Chandrasekhar and Ferrari (to quote from : ”This paper is addressed, principally, to a more standard derivation of the Nutku-Halil solution than the one sketched by the authors.”) who developed an Ernst–type formulation for vacuum space–times admitting two space–like Killing vectors. They demonstrated that “in some sense, the Nutku–Halil solution occupies the same place in space–times with two space–like Killing vectors as the Kerr solution does in space–times with one time–like and one space–like Killing vector”. Thus the origin of the solution is still quite mysterious. Of course the task of solving Einstein’s vacuum field equations with appropriate boundary conditions for the space–time after the waves collide is mathematically very complex. On the other hand the physical picture, at least up to the appearance of a curvature singularity, is quite simple: two non–interacting plane impulsive waves undergo a head–on collision and the interaction region afterwards contains backscattered radiation from both waves, neither of which remain plane. Our aim in this paper is to introduce a simple assumption based on this picture which provides a key to the origin of the exact solution describing the collision of two completely arbitrary plane impulsive gravitational waves.
The backscattered radiation in the interaction region of space–time between the histories of the waves after the collision determines two intersecting congruences of null geodesics. These are the ‘rays’ associated with the two systems of backscattered radiation. Both congruences have expansion and shear. The ratio of the expansions of each congruence is immediately determined from Einstein’s vacuum field equations and the boundary conditions. The shear of each congruence (the modulus of a complex variable in each case) depends in general in a simple way on the choice of parameter along the null geodesics (in the sense that a change of parameter induces a rescaling of the shear). We assume that a parameter exists along each of the two families of null geodesics such that the shear (i.e. the modulus of the ‘complex shear’) of each congruence is equal. This is the only assumption made apart from the usual assumption of analyticity of the solution of Einstein’s field equations . We show that it implies, together with the field equations, an equation that could be interpreted physically as saying that the energy density of the backscattered radiation from each wave after collision is the same. In addition we demonstrate how this assumption leads to the complete integration of the vacuum field equations.
The outline of the paper is as follows: In section 2 the collision problem is formulated as a boundary–value problem. At this stage, to make the paper as self–contained as possible, reference is made to an Appendix A giving a brief summary of the construction of a plane impulsive gravitational wave in the manner of Penrose . The backscattered radiation fields, existing after the collision, are introduced in section 3. The assumption central to this study is also introduced in this section and a physical implication is explored. Although the key assumption is simple the full integration of Einstein’s vacuum field equations emerging from it is still quite complicated and this is described in section 4. The complications arise because the incoming waves are not in general linearly polarised. For readers who do not want to work through section 4 the considerably simpler case of linearly polarised incoming waves is treated in Appendix B. It is shown there that our basic assumption leads to the Khan and Penrose solution. Finally in section 5 the results of section 4 are summarised and contact is made with the known exact solutions , .
With the use of different boundary conditions to those employed here the approach of this paper can lead to new collision solutions of the field equations. The present authors have published a new collision solution of the Einstein–Maxwell field equations, derived using the ideas described in the present paper.
## 2 The Boundary–Value Problem
The line–element of the space–time describing the vacuum gravitational field of a single plane impulsive gravitational wave having the maximum two degrees of freedom of polarisation may be written in the form (see Appendix A)
$$ds^2=-2\left|d\zeta +v_+(a-ib)d\overline{\zeta }\right|^2+2dudv,$$
(2.1)
where $`a,b`$ are real constants. Here and throughout this paper a bar will denote complex conjugation. The history of the wave is the null hyperplane $`v=0`$ and $`v_+=v\theta (v)`$ where $`\theta (v)`$ is the Heaviside step function. $`u`$ is a second null coordinate. The space–time to the future ($`v>0`$) of the history of the wave is Minkowskian and so is the space–time to the past ($`v<0`$) of $`v=0`$. Writing $`\sqrt{2}\zeta =x+iy`$ we see that (2.1) is in the Rosen–Szekeres form
$$ds^2=-\mathrm{e}^{-U}\left(\mathrm{e}^{-V}\mathrm{cosh}Wdx^2-2\mathrm{sinh}Wdxdy+\mathrm{e}^V\mathrm{cosh}Wdy^2\right)+2\mathrm{e}^{-M}dudv,$$
(2.2)
with
$`\mathrm{e}^{-U}`$ $`=`$ $`1-(a^2+b^2)v_+^2,`$ (2.3)
$`\mathrm{e}^V`$ $`=`$ $`\left[{\displaystyle \frac{1+(a^2+b^2)v_+^2-2av_+}{1+(a^2+b^2)v_+^2+2av_+}}\right]^{\frac{1}{2}},`$ (2.4)
$`\mathrm{sinh}W`$ $`=`$ $`{\displaystyle \frac{2bv_+}{1-(a^2+b^2)v_+^2}},`$ (2.5)
$`M`$ $`=`$ $`0.`$ (2.6)
We consider the head–on collision of this wave with a wave of similar type. This latter wave is described by a space–time with line–element (2.2) but with
$`\mathrm{e}^{-U}`$ $`=`$ $`1-(\alpha ^2+\beta ^2)u_+^2,`$ (2.7)
$`\mathrm{e}^V`$ $`=`$ $`\left[{\displaystyle \frac{1+(\alpha ^2+\beta ^2)u_+^2-2\alpha u_+}{1+(\alpha ^2+\beta ^2)u_+^2+2\alpha u_+}}\right]^{\frac{1}{2}},`$ (2.8)
$`\mathrm{sinh}W`$ $`=`$ $`{\displaystyle \frac{2\beta u_+}{1-(\alpha ^2+\beta ^2)u_+^2}},`$ (2.9)
$`M`$ $`=`$ $`0,`$ (2.10)
with $`\alpha ,\beta `$ real constants and $`u_+=u\theta (u)`$. The history of the wave front is the null hyperplane $`u=0`$ in this case. For the collision we consider the space–time to have line–element (2.2) with $`U,V,W,M`$ given by (2.3)– (2.6) in the region $`u<0,v>0`$ and given by (2.7)– (2.10) in the region $`v<0,u>0`$. The region $`u<0,v<0`$ has line–element (2.2) with $`U=V=W=M=0`$ (which agrees with (2.3)–(2.10) when both $`v<0`$ and $`u<0`$). The line–element in the region $`u>0,v>0`$ (after the collision) has the form (2.2) with $`U,V,W,M`$ functions of $`(u,v)`$ satisfying the O’Brien–Synge junction conditions: If $`u=0,v>0`$ then $`U,V,W,M`$ are given by (2.3)–(2.6) with $`v_+=v`$ and if $`v=0,u>0`$ then $`U,V,W,M`$ are given by (2.7)–(2.10) with $`u_+=u`$. Einstein’s vacuum field equations have to be solved for $`U,V,W,M`$ in the interaction region ($`u>0,v>0`$) after the collision subject to these boundary (junction) conditions. These equations are (with subscripts denoting partial derivatives):
$`U_{uv}`$ $`=`$ $`U_uU_v,`$ (2.11)
$`2V_{uv}`$ $`=`$ $`U_uV_v+U_vV_u-2\left(V_uW_v+V_vW_u\right)\mathrm{tanh}W,`$ (2.12)
$`2W_{uv}`$ $`=`$ $`U_uW_v+U_vW_u+2V_uV_v\mathrm{sinh}W\mathrm{cosh}W,`$ (2.13)
$`2U_uM_u`$ $`=`$ $`-2U_{uu}+U_u^2+W_u^2+V_u^2\mathrm{cosh}^2W,`$ (2.14)
$`2U_vM_v`$ $`=`$ $`-2U_{vv}+U_v^2+W_v^2+V_v^2\mathrm{cosh}^2W,`$ (2.15)
$`2M_{uv}`$ $`=`$ $`-U_{uv}+W_uW_v+V_uV_v\mathrm{cosh}^2W.`$ (2.16)
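As a consistency check, the single-wave boundary data (2.3)–(2.5) with $`M=0`$ satisfy the constraint (2.15), i.e. $`-2U_{vv}+U_v^2+W_v^2+V_v^2\mathrm{cosh}^2W=0`$; this can be verified by finite differences (the values of $`a`$ and $`b`$ below are illustrative):

```python
import math

a, b = 0.3, 0.2                        # illustrative wave parameters
k2 = a*a + b*b
U = lambda v: -math.log(1 - k2*v*v)                                     # from (2.3)
V = lambda v: 0.5*math.log((1 + k2*v*v - 2*a*v)/(1 + k2*v*v + 2*a*v))  # from (2.4)
W = lambda v: math.asinh(2*b*v / (1 - k2*v*v))                          # from (2.5)

v0, hstep = 0.5, 1e-4
d  = lambda F: (F(v0 + hstep) - F(v0 - hstep)) / (2*hstep)              # F_v
d2 = lambda F: (F(v0 + hstep) - 2*F(v0) + F(v0 - hstep)) / hstep**2     # F_vv
residual = -2*d2(U) + d(U)**2 + d(W)**2 + d(V)**2 * math.cosh(W(v0))**2
# residual vanishes to finite-difference accuracy
```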
The first of these equations, which is equivalent to $`(\mathrm{e}^{-U})_{uv}=0`$, can immediately be solved in conjunction with the boundary conditions to be satisfied by $`U`$ on $`u=0`$ and on $`v=0`$ to yield, in $`u>0,v>0`$,
$$\mathrm{e}^{-U}=1-(a^2+b^2)v^2-(\alpha ^2+\beta ^2)u^2.$$
(2.17)
The problem is to solve (2.12)–(2.13) for $`V,W`$ subject to the boundary conditions and then to solve (2.14) and (2.15) for $`M`$. Equation (2.16) is the integrability condition for (2.14) and (2.15).
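That (2.17) indeed satisfies (2.11) in the interaction region can be checked by finite differences (the values of $`a^2+b^2`$ and $`\alpha ^2+\beta ^2`$ below are illustrative):

```python
import math

k2, K2 = 0.13, 0.2                    # a^2 + b^2 and alpha^2 + beta^2 (illustrative)
U = lambda u, v: -math.log(1 - k2*v*v - K2*u*u)     # from (2.17)

hstep, u0, v0 = 1e-4, 0.4, 0.5
U_u  = (U(u0 + hstep, v0) - U(u0 - hstep, v0)) / (2*hstep)
U_v  = (U(u0, v0 + hstep) - U(u0, v0 - hstep)) / (2*hstep)
U_uv = (U(u0 + hstep, v0 + hstep) - U(u0 + hstep, v0 - hstep)
        - U(u0 - hstep, v0 + hstep) + U(u0 - hstep, v0 - hstep)) / (4*hstep**2)
# U_uv - U_u * U_v vanishes to finite-difference accuracy
```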
## 3 The Backscattered Radiation Fields
We shall for the moment focus attention on the two field equations (2.12) and (2.13). All of our considerations from now on will apply to the interaction region of space–time $`u>0,v>0`$ after the collision. Introducing the complex variables
$$A=V_u\mathrm{cosh}W+iW_u,B=V_v\mathrm{cosh}W+iW_v,$$
(3.1)
we can rewrite the two real equations (2.12) and (2.13) as the single complex equation
$$2A_v=U_uB+U_vA-2iAV_v\mathrm{sinh}W,$$
(3.2)
or equivalently as the single complex equation
$$2B_u=U_uB+U_vA-2iBV_u\mathrm{sinh}W.$$
(3.3)
Given the form of the line–element (2.2) it is convenient to introduce a null tetrad $`\{m,\overline{m},l,n\}`$ in the region $`u>0,v>0`$ defined by
$$m=\frac{\mathrm{e}^{U/2}}{\sqrt{2}}\left[\mathrm{e}^{V/2}\left(\mathrm{cosh}\frac{W}{2}-i\mathrm{sinh}\frac{W}{2}\right)\frac{\partial }{\partial x}+\mathrm{e}^{-V/2}\left(\mathrm{sinh}\frac{W}{2}-i\mathrm{cosh}\frac{W}{2}\right)\frac{\partial }{\partial y}\right],$$
(3.4)
$$l=\mathrm{e}^{M/2}\frac{\partial }{\partial v},$$
(3.5)
$$n=\mathrm{e}^{M/2}\frac{\partial }{\partial u},$$
(3.6)
with $`\overline{m}`$ the complex conjugate of $`m`$. The integral curves of the vector fields $`l`$ and $`n`$ are twist–free, null geodesics. The coordinate $`v`$ is not an affine parameter along the integral curves of $`l`$ and these curves have complex shear $`\sigma _l`$ and real expansion $`\rho _l`$ (we use the standard definitions for these quantities given in $`\mathrm{\S }`$4.5 of for example) given by
$$\sigma _l=\frac{1}{2}\mathrm{e}^{M/2}B,\rho _l=\frac{1}{2}\mathrm{e}^{M/2}U_v,$$
(3.7)
with $`B`$ as in (3.1). Likewise the coordinate $`u`$ is not an affine parameter along the integral curves of $`n`$ and these curves have complex shear $`\sigma _n`$ and real expansion $`\rho _n`$ given by
$$\sigma _n=\frac{1}{2}\mathrm{e}^{M/2}A,\rho _n=\frac{1}{2}\mathrm{e}^{M/2}U_u,$$
(3.8)
with $`A`$ as in (3.1). We thus see from (2.17), (3.7) and (3.8) that the ratio $`\rho _l/\rho _n`$ is now known in the region $`u>0,v>0`$. In terms of the variables introduced above the non–identically vanishing scale–invariant components of the Riemann tensor in Newman–Penrose notation are $`\mathrm{\Psi }_0,\mathrm{\Psi }_2,\mathrm{\Psi }_4`$ given by
$`2\mathrm{\Psi }_0`$ $`=`$ $`B_v+\left(M_v-U_v\right)B+iBV_v\mathrm{sinh}W,`$ (3.9)
$`2\mathrm{\Psi }_2`$ $`=`$ $`M_{uv}-{\displaystyle \frac{1}{4}}\left(A\overline{B}-\overline{A}B\right),`$ (3.10)
$`2\overline{\mathrm{\Psi }}_4`$ $`=`$ $`A_u+\left(M_u-U_u\right)A+iAV_u\mathrm{sinh}W.`$ (3.11)
When these are non–zero we interpret $`\mathrm{\Psi }_0`$ as describing radiation, having propagation direction $`n`$ in space–time, backscattered from the wave with history $`u=0,v>0`$ and we interpret $`\mathrm{\Psi }_4`$ as describing radiation, having propagation direction $`l`$ in space–time, backscattered from the wave with history $`v=0,u>0`$. Thus the integral curves of the null vector fields $`n`$ and $`l`$ are the ‘rays’ associated with the backscattered radiation from the two separating waves after collision.
We now look for an interesting assumption to make regarding the rays associated with the backscattered radiation fields. Since $`\rho _l/\rho _n`$ is known from (3.7), (3.8) and (2.17) we focus attention on the complex shears $`\sigma _l`$ and $`\sigma _n`$ in (3.7) and (3.8). Let us write
$$A=\left|A\right|\mathrm{e}^{i\theta },B=\left|B\right|\mathrm{e}^{i\varphi },f=\theta -\varphi ,$$
(3.12)
with $`\theta `$ and $`\varphi `$ real. From (3.2) and (3.3) we can obtain the equations
$`\theta _v`$ $`=`$ $`{\displaystyle \frac{\left|B\right|}{2\left|A\right|}}U_u\mathrm{sin}f-V_v\mathrm{sinh}W,`$ (3.13)
$`\varphi _u`$ $`=`$ $`{\displaystyle \frac{\left|A\right|}{2\left|B\right|}}U_v\mathrm{sin}f-V_u\mathrm{sinh}W,`$ (3.14)
and
$$2f_{uv}+i\left(A\overline{B}-\overline{A}B\right)=\left(U_u\frac{\left|B\right|}{\left|A\right|}\mathrm{sin}f\right)_u-\left(U_v\frac{\left|A\right|}{\left|B\right|}\mathrm{sin}f\right)_v.$$
(3.15)
The arguments $`\theta ,\varphi `$ of $`A,B`$ respectively are tetrad dependent. If we transform the tetrad $`\{m,\overline{m},l,n\}`$ by the rotation
$$m\rightarrow \widehat{m}=\mathrm{e}^{i\psi }m,$$
(3.16)
with $`\psi `$ a real–valued function, then
$$\theta \rightarrow \widehat{\theta }=\theta -2\psi ,\varphi \rightarrow \widehat{\varphi }=\varphi -2\psi ,f\rightarrow f.$$
(3.17)
On account of the field equation (3.15) we can, without loss of generality, arrange to have $`\theta +\varphi =\mathrm{constant}`$. To see this we deduce from (3.13) and (3.14) that
$`\left(\widehat{\theta }+\widehat{\varphi }\right)_u`$ $`=`$ $`{\displaystyle \frac{\left|A\right|}{\left|B\right|}}U_v\mathrm{sin}f+f_u-2V_u\mathrm{sinh}W-4\psi _u,`$ (3.18)
$`\left(\widehat{\theta }+\widehat{\varphi }\right)_v`$ $`=`$ $`{\displaystyle \frac{\left|B\right|}{\left|A\right|}}U_u\mathrm{sin}f-f_v-2V_v\mathrm{sinh}W-4\psi _v.`$ (3.19)
We are free to choose $`\psi `$ to make the right hand sides of (3.18) and (3.19) vanish because the integrability condition for the resulting pair of first order partial differential equations for $`\psi `$ is the field equation (3.15). Hence it is always possible to choose a tetrad so that $`\theta `$ and $`\varphi `$ in (3.12) have the property that $`\theta +\varphi =\mathrm{constant}`$. This result suggests that to discover an interesting assumption to make about the rays associated with the backscattered radiation fields we should consider the ratio (because it depends upon $`\theta -\varphi `$ and not $`\theta +\varphi `$)
$$\frac{A}{B}=\frac{\left|A\right|}{\left|B\right|}\mathrm{e}^{if}=\frac{\sigma _n}{\sigma _l},$$
(3.20)
with the last equality following from (3.7) and (3.8). We note that $`f`$ satisfies the second order equation (3.15) to which we shall return later. It is clear from (3.1) that a change of parameters $`u\rightarrow \overline{u}=\overline{u}(u)`$ and $`v\rightarrow \overline{v}=\overline{v}(v)`$ along the integral curves of $`n`$ and $`l`$ rescales $`A`$ and $`B`$ by a function of $`u`$ and a function of $`v`$ respectively. This change of parameter obviously leaves the form of the line–element (2.2) invariant. Also from the field equation (3.2) we deduce that
$$\left(\left|A\right|^2\right)_v-U_v\left|A\right|^2=\frac{1}{2}U_u\left(A\overline{B}+\overline{A}B\right),$$
(3.21)
and from the equivalent equation (3.3) we find
$$\left(\left|B\right|^2\right)_u-U_u\left|B\right|^2=\frac{1}{2}U_v\left(A\overline{B}+\overline{A}B\right),$$
(3.22)
From these two equations we obtain
$$2\left[\mathrm{log}\left(\frac{\left|A\right|^2}{\left|B\right|^2}\right)\right]_{uv}=\left(U_u\frac{\left(A\overline{B}+\overline{A}B\right)}{\left|A\right|^2}\right)_u-\left(U_v\frac{\left(A\overline{B}+\overline{A}B\right)}{\left|B\right|^2}\right)_v,$$
(3.23)
which is a partner for the equation (3.15) for $`f`$. This suggests that it might be interesting to explore the following assumption concerning the rays associated with the backscattered radiation: there exist parameters $`\overline{u},\overline{v}`$ along the integral curves of $`n`$ and $`l`$ respectively such that $`\left|A\right|^2=\left|B\right|^2`$ (or equivalently $`\left|\sigma _n\right|^2=\left|\sigma _l\right|^2`$). This is equivalent to the assumption that there exist functions $`C(u),D(v)`$ such that
$$\frac{\left|A\right|^2}{\left|B\right|^2}=C(u)D(v).$$
(3.24)
When $`u=0`$ it follows from (2.4) and (2.5) that
$$\left|B\right|^2=\frac{4(a^2+b^2)}{\left(1-(a^2+b^2)v^2\right)^2},$$
(3.25)
and when $`v=0`$ it follows from (2.8) and (2.9) that
$$\left|A\right|^2=\frac{4(\alpha ^2+\beta ^2)}{\left(1-(\alpha ^2+\beta ^2)u^2\right)^2}.$$
(3.26)
Also when $`u=0`$ we see from (2.17) that the right hand side of (3.21) vanishes and thus solving (3.21) for $`\left|A\right|^2`$ when $`u=0`$ we obtain
$$\left|A\right|^2=\frac{4(\alpha ^2+\beta ^2)}{1-(a^2+b^2)v^2},$$
(3.27)
with the constant numerator here (the constant of integration) chosen so that the two expressions (3.26) and (3.27) for $`\left|A\right|^2`$ agree when $`u=0`$ and $`v=0`$. Similarly when $`v=0`$ the right hand side of (3.22) vanishes and we readily obtain, when $`v=0`$,
$$\left|B\right|^2=\frac{4(a^2+b^2)}{1-(\alpha ^2+\beta ^2)u^2}.$$
(3.28)
Thus (3.24) together with the boundary conditions at $`u=0`$ and at $`v=0`$ results in
$$\frac{\left|A\right|^2}{\left|B\right|^2}=\left(\frac{\alpha ^2+\beta ^2}{a^2+b^2}\right)\left[\frac{1-(a^2+b^2)v^2}{1-(\alpha ^2+\beta ^2)u^2}\right].$$
(3.29)
Hence there exist parameters $`(\overline{u},\overline{v})`$ given by
$$\overline{u}=\mathrm{sin}^{-1}\left(u\sqrt{\alpha ^2+\beta ^2}\right),\overline{v}=\mathrm{sin}^{-1}\left(v\sqrt{a^2+b^2}\right),$$
(3.30)
such that
$$\frac{V_{\overline{u}}^2\mathrm{cosh}^2W+W_{\overline{u}}^2}{V_{\overline{v}}^2\mathrm{cosh}^2W+W_{\overline{v}}^2}=1.$$
(3.31)
We note that when $`\overline{u}=0`$ we have $`u=0`$ and when $`\overline{v}=0`$ we have $`v=0`$. We shall express (3.31) by saying that, in the coordinates $`(\overline{u},\overline{v})`$, (3.29) reads $`\left|A\right|^2=\left|B\right|^2`$. In the coordinates $`(\overline{u},\overline{v})`$ the form of the field equations and the expressions for the Riemann tensor remain invariant, with the derivatives with respect to $`(u,v)`$ being replaced by derivatives with respect to $`(\overline{u},\overline{v})`$ and with $`M`$ replaced by $`\overline{M}`$ according to
$$\mathrm{e}^{\overline{M}}=\frac{\mathrm{cos}\overline{u}\mathrm{cos}\overline{v}}{\sqrt{\alpha ^2+\beta ^2}\sqrt{a^2+b^2}}\mathrm{e}^M.$$
(3.32)
We note from (2.17) and (3.30) that in the barred coordinates
$$\mathrm{e}^U=\mathrm{cos}(\overline{u}-\overline{v})\mathrm{cos}(\overline{u}+\overline{v}).$$
(3.33)
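As an editorial sanity check (not part of the original derivation), (3.30) and (3.33) can be confirmed numerically under the assumption, consistent with (B.4) and with the singular boundary quoted in section 5, that (2.17) reads $`\mathrm{e}^U=1-(\alpha ^2+\beta ^2)u^2-(a^2+b^2)v^2`$. The parameter values below are arbitrary sample choices.

```python
import math

# Arbitrary sample wave parameters and a point inside the interaction
# region (the arguments of asin must lie in (0, 1)).
alpha, beta, a, b = 0.6, 0.3, 0.5, 0.4
u, v = 0.4, 0.3

# The reparametrisation (3.30): u-bar = asin(u*sqrt(alpha^2+beta^2)), etc.
ubar = math.asin(u * math.hypot(alpha, beta))
vbar = math.asin(v * math.hypot(a, b))

# (3.33) versus the assumed unbarred form of (2.17).
eU_barred = math.cos(ubar - vbar) * math.cos(ubar + vbar)
eU_unbarred = 1 - (alpha**2 + beta**2) * u**2 - (a**2 + b**2) * v**2

assert abs(eU_barred - eU_unbarred) < 1e-12
```

The agreement is exact for any admissible parameters, since $`\mathrm{cos}(\overline{u}-\overline{v})\mathrm{cos}(\overline{u}+\overline{v})=1-\mathrm{sin}^2\overline{u}-\mathrm{sin}^2\overline{v}`$.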
Also in these coordinates (3.23) becomes
$$\left(U_{\overline{u}}\mathrm{cos}f\right)_{\overline{u}}=\left(U_{\overline{v}}\mathrm{cos}f\right)_{\overline{v}}\iff U_{\overline{u}}f_{\overline{u}}=U_{\overline{v}}f_{\overline{v}},$$
(3.34)
(the equivalence here following from (3.33) since now $`U_{\overline{u}\overline{u}}=U_{\overline{v}\overline{v}}`$) from which we conclude that
$$f=f(\lambda )\quad \mathrm{with}\quad \lambda =\frac{\mathrm{cos}(\overline{u}-\overline{v})}{\mathrm{cos}(\overline{u}+\overline{v})}.$$
(3.35)
We see that $`\lambda =1`$ when $`\overline{u}=0`$ and/or when $`\overline{v}=0`$. Also (3.21) and (3.22) become
$`\left(\left|A\right|^2\right)_{\overline{v}}-U_{\overline{v}}\left|A\right|^2`$ $`=`$ $`U_{\overline{u}}\left|A\right|^2\mathrm{cos}f,`$ (3.36)
$`\left(\left|A\right|^2\right)_{\overline{u}}-U_{\overline{u}}\left|A\right|^2`$ $`=`$ $`U_{\overline{v}}\left|A\right|^2\mathrm{cos}f.`$ (3.37)
It thus follows that, again in the barred coordinates,
$$\left|A\right|^2=\left|B\right|^2=\mathrm{e}^{-U}g(\lambda ),$$
(3.38)
for some function $`g(\lambda )`$ satisfying
$$\lambda g^{}=g\mathrm{cos}f,$$
(3.39)
with the prime here and henceforth denoting differentiation with respect to $`\lambda `$.
In order to interpret physically the implications of our assumption that there exists $`(\overline{u},\overline{v})`$ such that $`\left|A\right|^2=\left|B\right|^2`$ we proceed as follows: using (3.20) and the field equations (2.14) and (2.15) in the expressions (3.9) and (3.11) for $`\mathrm{\Psi }_0`$ and $`\mathrm{\Psi }_4`$ we find that we can write
$$U_v\frac{\mathrm{\Psi }_0}{B}=\frac{1}{2}\left\{iU_vf_v-U_{vv}+\frac{1}{2}\left|B\right|^2+\frac{B}{2A}U_uU_v+U_v\left(\mathrm{log}\frac{\left|B\right|}{\left|A\right|}\right)_v\right\},$$
(3.40)
and
$$U_u\frac{\overline{\mathrm{\Psi }}_4}{A}=\frac{1}{2}\left\{iU_uf_u-U_{uu}+\frac{1}{2}\left|A\right|^2+\frac{A}{2B}U_uU_v+U_u\left(\mathrm{log}\frac{\left|A\right|}{\left|B\right|}\right)_u\right\}.$$
(3.41)
When these are expressed in the coordinates $`(\overline{u},\overline{v})`$ we can put $`\left|A\right|^2=\left|B\right|^2`$ and as above $`U_{\overline{u}\overline{u}}=U_{\overline{v}\overline{v}}`$ and $`U_{\overline{u}}f_{\overline{u}}=U_{\overline{v}}f_{\overline{v}}`$ and thus
$$U_{\overline{u}}\frac{\mathrm{\Psi }_4}{\overline{A}}=U_{\overline{v}}\frac{\mathrm{\Psi }_0}{B}.$$
(3.42)
From (3.42) it follows, using the second of (3.7) and of (3.8) that
$$\rho _l^{-2}\left|\mathrm{\Psi }_4\right|^2=\rho _n^{-2}\left|\mathrm{\Psi }_0\right|^2.$$
(3.43)
This equation, which is a consequence of our basic assumption, deserves some physical interpretation. $`\left|\mathrm{\Psi }_0\right|^2`$ and $`\left|\mathrm{\Psi }_4\right|^2`$ are analogous to the energy densities of electromagnetic waves propagating in the $`n`$ and $`l`$ directions respectively in space–time, in the same sense that the Bel–Robinson tensor is analogous to the electromagnetic energy–momentum tensor. However with the coordinates carrying the dimensions of length these quantities have the dimensions of $`(\mathrm{length})^{-4}`$. The quantities $`\rho _l^{-2}\left|\mathrm{\Psi }_4\right|^2`$ and $`\rho _n^{-2}\left|\mathrm{\Psi }_0\right|^2`$ both have dimensions $`(\mathrm{length})^{-2}`$ of energy density and are positive definite expressions in terms of the backscattered radiation fields \[we note that the backscattered radiation has non–vanishing shear and expansion and thus does not consist of systems of plane waves\]. Hence it seems reasonable to suggest the following interpretation for the equation (3.43): in the coordinate system (the barred system) in which $`\left|A\right|^2=\left|B\right|^2`$ the energy density of the backscattered radiation from each of the separating waves after the collision is the same. We note that (3.43) also holds for our solution which describes the collision of an impulsive gravitational wave with an impulsive gravitational wave sharing its wave front with an electromagnetic shock wave. This latter solution contains the Khan and Penrose solution and a solution of Griffiths as special cases. Also (3.43) holds (trivially) for the Bell and Szekeres solution describing the collision of two electromagnetic shock waves. These examples demonstrate that (3.43) holds for a class of collision problems involving gravitational impulse waves and/or electromagnetic shock waves.
## 4 Integration of the Field Equations
We begin by writing (3.15) in the barred system $`(\overline{u},\overline{v})`$. Using (3.20) with $`\left|A\right|=\left|B\right|`$, (3.35) and (3.38) we find that
$$(1-\lambda ^2)f^{\prime \prime }-2\lambda f^{}+\frac{g}{\lambda }\mathrm{sin}f=\frac{(1+\lambda ^2)}{\lambda ^2}\mathrm{sin}f-\frac{(1-\lambda ^2)}{\lambda }f^{}\mathrm{cos}f,$$
(4.1)
with $`g`$ given in terms of $`f`$ by (3.39). We can simplify (4.1) to read
$$g=-\frac{\lambda }{\mathrm{sin}f}\frac{d}{d\lambda }\left[(1-\lambda ^2)\left(f^{}+\frac{\mathrm{sin}f}{\lambda }\right)\right].$$
(4.2)
We get a single third order equation for $`f`$ by eliminating $`g`$ (taken to be non–zero) between (3.39) and (4.2). Since we are working in the barred coordinate system (3.20) gives
$$\frac{A}{B}=\frac{V_{\overline{u}}\mathrm{cosh}W+iW_{\overline{u}}}{V_{\overline{v}}\mathrm{cosh}W+iW_{\overline{v}}}=\mathrm{e}^{if}=\frac{1-ih}{1+ih},$$
(4.3)
where, for convenience, we have introduced $`h(\lambda )`$ by the final equality. After eliminating $`g`$ from (3.39) and (4.2) it is useful to write the resulting equation as a differential equation for $`h(\lambda )`$. Then defining
$$G=\frac{2}{1+h^2}(1-\lambda ^2)\left(h^{}-\frac{h}{\lambda }\right),$$
(4.4)
the equation for $`h(\lambda )`$ can be put in the form
$$\lambda G^{\prime \prime }+G^{}-\frac{4}{\lambda }G=\frac{GQ}{1-\lambda ^2},$$
(4.5)
where
$$Q=\frac{\lambda }{2h}(1-h^2)G^{}+\frac{1}{h}(1+h^2)G.$$
(4.6)
We remark that if we define
$$P=\frac{\lambda }{2h}(1+h^2)G^{}+\frac{1}{h}(1-h^2)G,$$
(4.7)
then
$$\lambda P^{}=Q\quad \mathrm{and}\quad P^2-Q^2=\lambda ^2\left(G^{}\right)^2-4G^2.$$
(4.8)
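As an editorial sanity check, the second relation in (4.8) is a purely algebraic consequence of the definitions (4.6) and (4.7), reading the operators lost in extraction as minus signs, i.e. $`Q=\frac{\lambda }{2h}(1-h^2)G^{}+\frac{1}{h}(1+h^2)G`$ and $`P=\frac{\lambda }{2h}(1+h^2)G^{}+\frac{1}{h}(1-h^2)G`$. The values of $`\lambda ,h,G,G^{}`$ below are arbitrary and independent.

```python
lam, h, G, Gp = 1.7, 0.8, 2.3, -1.1   # arbitrary independent test values

# Definitions (4.6) and (4.7).
Q = (lam / (2 * h)) * (1 - h**2) * Gp + (1 / h) * (1 + h**2) * G
P = (lam / (2 * h)) * (1 + h**2) * Gp + (1 / h) * (1 - h**2) * G

# Second relation of (4.8): P^2 - Q^2 = lambda^2 (G')^2 - 4 G^2.
assert abs((P**2 - Q**2) - (lam**2 * Gp**2 - 4 * G**2)) < 1e-9
```

Since the identity involves no field equation, it holds for every choice of the four numbers.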
In studying the differential equation (4.5) for $`h`$ we found it helpful to write (4.5) and the second of (4.8) in the form
$`\lambda G^{\prime \prime }+G^{}-{\displaystyle \frac{4}{\lambda }}G`$ $`=`$ $`{\displaystyle \frac{\lambda GP^{}}{1-\lambda ^2}},`$ (4.9)
$`P^2-\lambda ^2\left(P^{}\right)^2`$ $`=`$ $`\lambda ^2\left(G^{}\right)^2-4G^2,`$ (4.10)
and to work with these equations. Before proceeding further however we need to know $`h`$ and $`h^{}`$ when $`\overline{u}=0`$ and/or when $`\overline{v}=0`$, i.e. we require $`h(1)`$ and $`h^{}(1)`$. To find $`h(1)`$ start by writing (4.3), using (3.30), as
$$\frac{1-ih(\lambda )}{1+ih(\lambda )}=\frac{\sqrt{a^2+b^2}\sqrt{1-(\alpha ^2+\beta ^2)u^2}}{\sqrt{\alpha ^2+\beta ^2}\sqrt{1-(a^2+b^2)v^2}}\left(\frac{V_u\mathrm{cosh}W+iW_u}{V_v\mathrm{cosh}W+iW_v}\right),$$
(4.11)
and evaluate this equation when $`u=0`$ and $`v=0`$. From the boundary conditions on $`V`$ and $`W`$ given by (2.4), (2.5), (2.8) and (2.9) we have that when $`u=0`$ and $`v=0`$:
$$V_u=2\alpha ,V_v=2a,W_u=2\beta ,W_v=2b,$$
(4.12)
and thus from (4.11),
$$\frac{1-ih(1)}{1+ih(1)}=\frac{\sqrt{a^2+b^2}}{\sqrt{\alpha ^2+\beta ^2}}\left(\frac{\alpha -i\beta }{a-ib}\right)=\mathrm{e}^{i(\widehat{\alpha }-\widehat{\beta })},$$
(4.13)
where
$$\mathrm{e}^{i\widehat{\alpha }}=\frac{\alpha -i\beta }{\sqrt{\alpha ^2+\beta ^2}},\mathrm{e}^{i\widehat{\beta }}=\frac{a-ib}{\sqrt{a^2+b^2}}.$$
(4.14)
It thus follows from (4.13) that
$$h(1)=\mathrm{tan}\left(\frac{\widehat{\alpha }-\widehat{\beta }}{2}\right)=k\quad (\mathrm{say}).$$
(4.15)
Next to find $`h^{}(1)`$ we begin with (4.11) and by two differentiations obtain from it
$$\frac{4i(\alpha ^2+\beta ^2)h^{}(1)}{\left(1+ih(1)\right)^2}=\left[\frac{^2}{uv}\left(\frac{V_u\mathrm{cosh}W+iW_u}{V_v\mathrm{cosh}W+iW_v}\right)\right]_{(u=0,v=0)}.$$
(4.16)
To evaluate the right hand side here we first note from (2.17) that when $`u=0`$ and $`v=0`$, $`U_u=0`$ and $`U_v=0`$. Also from the boundary conditions on $`W`$ we have $`W=0`$ when $`u=0`$ and $`v=0`$. Now evaluating the field equations (2.12) and (2.13) when $`u=0`$ and $`v=0`$ we easily see that in this case
$$V_{uv}=0,W_{uv}=0.$$
(4.17)
From the boundary conditions satisfied by $`V`$ and $`W`$ we have, when $`u=0`$ and $`v=0`$:
$$V_{vv}=V_{uu}=W_{vv}=W_{uu}=0.$$
(4.18)
Next differentiating (2.12) and (2.13) with respect to $`u`$ we find that when $`u=0`$ and $`v=0`$:
$$V_{uvu}=2\alpha ^2a-6\beta ^2a-8b\alpha \beta ,W_{uvu}=2b(\alpha ^2+\beta ^2)+8a\alpha \beta .$$
(4.19)
Finally differentiating (2.12) and (2.13) with respect to $`v`$ we find that when $`u=0`$ and $`v=0`$:
$$V_{uvv}=2\alpha a^2-6\alpha b^2-8ab\beta ,W_{uvv}=2\beta (a^2+b^2)+8\alpha ab.$$
(4.20)
Now substituting all of these results into the right hand side of (4.16) we obtain
$$h^{}(1)=k,$$
(4.21)
with $`k`$ given by (4.15). Using (4.15) and (4.21) in (4.4) we see that
$$G(1)=0=G^{}(1).$$
(4.22)
We can now set about solving (4.9) and (4.10) for $`G`$ and then obtain $`h(\lambda )`$ from (4.4).
Differentiating (4.10) with respect to $`\lambda `$ and using (4.9) we find that either (a) $`P^{}=0`$ or (b) if $`P^{}\ne 0`$ then
$$\lambda P^{\prime \prime }+P^{}-\frac{1}{\lambda }P=\frac{\lambda GG^{}}{1-\lambda ^2}.$$
(4.23)
We can quickly dispose of case (a). If $`P=\mathrm{constant}\ne 0`$ then (4.10) can be integrated to yield
$$G=\frac{P}{4}\left(c_0^2\lambda ^{\pm 2}-\frac{1}{c_0^2\lambda ^{\pm 2}}\right),$$
(4.24)
where $`c_0`$ is a constant of integration. It is easy to see that this constant cannot be chosen to satisfy both boundary conditions (4.22). Also if $`P=0`$ then (4.10) integrates to
$$G=c_1\lambda ^{\pm 2},$$
(4.25)
with $`c_1`$ a constant of integration. Clearly we must have $`c_1=0`$ to satisfy (4.22). Thus the only acceptable solution in case (a) is $`G=0`$. Turning now to case (b) with (4.23) holding we find that we can integrate this equation once (using (4.22)) to read
$$\lambda P^{}+\left(\frac{1+\lambda ^2}{1-\lambda ^2}\right)P=\frac{\lambda G^2}{2(1-\lambda ^2)}.$$
(4.26)
From the first of (4.8) this can be written
$$\left(P+Q\right)+\lambda ^2\left(P-Q\right)=\frac{\lambda }{2}G^2,$$
(4.27)
and from (4.7) and (4.8) this reads
$$\lambda G^{}=\frac{\lambda h}{2(1+\lambda ^2h^2)}G^2-2\frac{(1-\lambda ^2h^2)}{(1+\lambda ^2h^2)}G.$$
(4.28)
A glance at (4.3) and (4.4) shows how $`h`$ and thence $`G`$ are constructed from the functions $`V`$ and $`W`$ appearing in the line–element (2.2) for $`u>0,v>0`$. On the boundaries of this region $`\lambda =1`$ and within this region $`\lambda >1`$ with $`\lambda `$ becoming infinite when the right hand side of (2.17) vanishes. The $`\lambda =\mathrm{constant}>1`$ curves densely fill the interior of the region $`𝒮`$ (say) with boundaries ($`b_1`$) $`u=0,v>0`$, ($`b_2`$) $`v=0,u>0`$ and ($`b_3`$) the right hand side of (2.17) vanishing with $`u>0,v>0`$. Within $`𝒮`$ there is one curve $`\lambda =\mathrm{constant}>1`$ passing through each point. When the field equations are completely integrated for $`(u,v)\in 𝒮`$ the boundary ($`b_3`$) turns out to be a curvature singularity. For $`G`$ analytic in $`𝒮`$ we conclude from (4.22) and (4.28) that $`G\equiv 0`$ in $`𝒮`$. It thus follows from (4.4) with the boundary condition (4.15) that
$$h(\lambda )=k\lambda ,$$
(4.29)
for $`(u,v)ϵ𝒮`$.
We are now at the following stage in the integration of the field equations: the function $`U`$ in the line–element (2.2) is given by (2.17) in coordinates $`(u,v)`$ or by (3.33) in coordinates $`(\overline{u},\overline{v})`$. Also on account of (4.3) and (4.29) the functions $`V`$ and $`W`$ in (2.2) satisfy the differential equation
$$\frac{A}{B}=\frac{V_{\overline{u}}\mathrm{cosh}W+iW_{\overline{u}}}{V_{\overline{v}}\mathrm{cosh}W+iW_{\overline{v}}}=\frac{1-ik\lambda }{1+ik\lambda },$$
(4.30)
with $`\lambda `$ given by (3.35) and $`k`$ by (4.15). We shall now solve this complex equation for $`V`$ and $`W`$ in terms of the barred coordinates. First we need to note the boundary values of $`V`$ and $`W`$ in terms of the barred coordinates. By (2.4) and (3.30) we have when $`\overline{u}=0`$,
$$\mathrm{e}^V=\left[\frac{\left(\sqrt{a^2+b^2}-a\mathrm{sin}\overline{v}\right)^2+b^2\mathrm{sin}^2\overline{v}}{\left(\sqrt{a^2+b^2}+a\mathrm{sin}\overline{v}\right)^2+b^2\mathrm{sin}^2\overline{v}}\right]^{\frac{1}{2}}=\frac{\left|1-\mathrm{e}^{i\widehat{\beta }}\mathrm{sin}\overline{v}\right|}{\left|1+\mathrm{e}^{i\widehat{\beta }}\mathrm{sin}\overline{v}\right|},$$
(4.31)
with the second equality following from (4.14). By (2.5) and (3.30) we have when $`\overline{u}=0`$
$$\mathrm{sinh}W=\frac{2b\mathrm{sin}\overline{v}}{\sqrt{a^2+b^2}\mathrm{cos}^2\overline{v}}=i\frac{\left(\mathrm{e}^{i\widehat{\beta }}\mathrm{sin}\overline{v}-\mathrm{e}^{-i\widehat{\beta }}\mathrm{sin}\overline{v}\right)}{1-\left|\mathrm{e}^{i\widehat{\beta }}\mathrm{sin}\overline{v}\right|^2}.$$
(4.32)
The corresponding boundary values on $`\overline{v}=0`$ are obtained by replacing $`\overline{v}`$ by $`\overline{u}`$ and $`\widehat{\beta }`$ by $`\widehat{\alpha }`$ in the final expressions in (4.31) and (4.32). It is convenient to use a complex function $`E`$ (the Ernst function) in place of the two real functions $`V`$ and $`W`$ defined (in a way that is suggested by the final expressions in (4.31) and (4.32)) by
$$\mathrm{e}^V=\left[\frac{\left(1-E\right)\left(1-\overline{E}\right)}{\left(1+E\right)\left(1+\overline{E}\right)}\right]^{\frac{1}{2}},\mathrm{sinh}W=i\frac{\left(E-\overline{E}\right)}{1-\left|E\right|^2},$$
(4.33)
or equivalently by
$$E=\frac{\mathrm{sinh}V\mathrm{cosh}W+i\mathrm{sinh}W}{1+\mathrm{cosh}V\mathrm{cosh}W}.$$
(4.34)
Now (4.31) and (4.32) can be written neatly as:
$$\mathrm{when}\overline{u}=0,E=\mathrm{e}^{i\widehat{\beta }}\mathrm{sin}\overline{v},$$
(4.35)
and correspondingly
$$\mathrm{when}\overline{v}=0,E=\mathrm{e}^{i\widehat{\alpha }}\mathrm{sin}\overline{u}.$$
(4.36)
In terms of $`E`$, the complex functions $`A,B`$ can be written
$$A=\frac{2\mathrm{cosh}W}{1-\overline{E}^2}\overline{E}_{\overline{u}},B=\frac{2\mathrm{cosh}W}{1-\overline{E}^2}\overline{E}_{\overline{v}}.$$
(4.37)
Substitution into (4.30) simplifies this equation to
$$E_{\overline{u}}-E_{\overline{v}}=ik\lambda \left(E_{\overline{u}}+E_{\overline{v}}\right).$$
(4.38)
With $`\lambda `$ given by (3.35) this equation establishes that
$$E=E(w)\quad \mathrm{with}\quad w=\mathrm{sin}(\overline{u}+\overline{v})+ik\mathrm{sin}(\overline{u}-\overline{v}).$$
(4.39)
We can now determine $`E`$ using the boundary conditions (4.35) and (4.36). To see this easily we first write $`k`$ in (4.15) in the form
$$k=-i\frac{\left(\mathrm{e}^{i\widehat{\alpha }}-\mathrm{e}^{i\widehat{\beta }}\right)}{\mathrm{e}^{i\widehat{\alpha }}+\mathrm{e}^{i\widehat{\beta }}}.$$
(4.40)
Using this in (4.39) we see that we can consider $`E`$ to have the functional dependence:
$$E=E\left(\mathrm{e}^{i\widehat{\alpha }}\mathrm{sin}\overline{u}\mathrm{cos}\overline{v}+\mathrm{e}^{i\widehat{\beta }}\mathrm{cos}\overline{u}\mathrm{sin}\overline{v}\right).$$
(4.41)
Now the boundary conditions (4.35) and (4.36) establish that
$$E=\mathrm{e}^{i\widehat{\alpha }}\mathrm{sin}\overline{u}\mathrm{cos}\overline{v}+\mathrm{e}^{i\widehat{\beta }}\mathrm{cos}\overline{u}\mathrm{sin}\overline{v},$$
(4.42)
and thus the functions $`V,W`$ appearing in the line–element (2.2) are determined by (4.33) in the coordinates $`(\overline{u},\overline{v})`$. They are then converted into the coordinates $`(u,v)`$ using the transformations (3.30) (see section 5 below).
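As an editorial sanity check, the solution (4.42) can be verified directly against the characteristic equation (4.38) and the ratio (4.30); the check below assumes (4.38) reads $`E_{\overline{u}}-E_{\overline{v}}=ik\lambda (E_{\overline{u}}+E_{\overline{v}})`$ with $`k=\mathrm{tan}((\widehat{\alpha }-\widehat{\beta })/2)`$ as in (4.15), and the phase angles and coordinates are arbitrary sample values.

```python
import cmath, math

ah, bh = 0.9, 0.2            # sample values of alpha-hat and beta-hat
ub, vb = 0.4, 0.25           # sample barred coordinates

k = math.tan((ah - bh) / 2)
lam = math.cos(ub - vb) / math.cos(ub + vb)

# Partial derivatives of E in (4.42) with respect to u-bar and v-bar.
Eu = (cmath.exp(1j * ah) * math.cos(ub) * math.cos(vb)
      - cmath.exp(1j * bh) * math.sin(ub) * math.sin(vb))
Ev = (-cmath.exp(1j * ah) * math.sin(ub) * math.sin(vb)
      + cmath.exp(1j * bh) * math.cos(ub) * math.cos(vb))

# (4.38): E_u - E_v = ik*lambda*(E_u + E_v).
assert abs((Eu - Ev) - 1j * k * lam * (Eu + Ev)) < 1e-12

# (4.30) via (4.37): A/B = conj(E)_u / conj(E)_v = (1-ik*lambda)/(1+ik*lambda).
ratio = Eu.conjugate() / Ev.conjugate()
assert abs(ratio - (1 - 1j * k * lam) / (1 + 1j * k * lam)) < 1e-12
```

Both assertions hold identically in the sample point, confirming that (4.42) satisfies the boundary-value problem posed by (4.30).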
Finally in the barred system the field equations (2.14) and (2.15) for $`\overline{M}`$ read (using (4.33) and (4.37))
$`\overline{M}_{\overline{u}}`$ $`=`$ $`{\displaystyle \frac{U_{\overline{u}\overline{u}}}{U_{\overline{u}}}}+{\displaystyle \frac{1}{2}}U_{\overline{u}}+{\displaystyle \frac{2E_{\overline{u}}\overline{E}_{\overline{u}}}{U_{\overline{u}}\left(1-\left|E\right|^2\right)^2}},`$ (4.43)
$`\overline{M}_{\overline{v}}`$ $`=`$ $`{\displaystyle \frac{U_{\overline{v}\overline{v}}}{U_{\overline{v}}}}+{\displaystyle \frac{1}{2}}U_{\overline{v}}+{\displaystyle \frac{2E_{\overline{v}}\overline{E}_{\overline{v}}}{U_{\overline{v}}\left(1-\left|E\right|^2\right)^2}},`$ (4.44)
with $`\overline{M}`$ related to $`M`$ by (3.32). Since we must have $`M=0`$ when $`u=0`$ and when $`v=0`$ we see from (3.32) that
$$\mathrm{when}\overline{u}=0,\mathrm{e}^{\overline{M}}=\frac{\mathrm{cos}\overline{v}}{\sqrt{\alpha ^2+\beta ^2}\sqrt{a^2+b^2}},$$
(4.45)
and
$$\mathrm{when}\overline{v}=0,\mathrm{e}^{\overline{M}}=\frac{\mathrm{cos}\overline{u}}{\sqrt{\alpha ^2+\beta ^2}\sqrt{a^2+b^2}}.$$
(4.46)
In (4.43) and (4.44), $`U`$ is given by (3.33) and $`E`$ by (4.42). Using (4.42) we find
$$E_{\overline{u}}\overline{E}_{\overline{u}}=1-\left|E\right|^2=E_{\overline{v}}\overline{E}_{\overline{v}},$$
(4.47)
with
$$1-\left|E\right|^2=\mathrm{cos}^2\left(\frac{\widehat{\alpha }-\widehat{\beta }}{2}\right)\mathrm{cos}^2(\overline{u}+\overline{v})+\mathrm{sin}^2\left(\frac{\widehat{\alpha }-\widehat{\beta }}{2}\right)\mathrm{cos}^2(\overline{u}-\overline{v}).$$
(4.48)
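As an editorial sanity check, both (4.47) and (4.48) can be confirmed numerically from (4.42), reading the operators lost in extraction as minus signs (so that, e.g., $`E_{\overline{u}}\overline{E}_{\overline{u}}=1-\left|E\right|^2`$). The phase angles and coordinates below are arbitrary sample values.

```python
import cmath, math

ah, bh = 0.7, -0.4           # sample alpha-hat, beta-hat
ub, vb = 0.5, 0.3            # sample barred coordinates

# E of (4.42) and its partial derivatives.
E = (cmath.exp(1j * ah) * math.sin(ub) * math.cos(vb)
     + cmath.exp(1j * bh) * math.cos(ub) * math.sin(vb))
Eu = (cmath.exp(1j * ah) * math.cos(ub) * math.cos(vb)
      - cmath.exp(1j * bh) * math.sin(ub) * math.sin(vb))
Ev = (-cmath.exp(1j * ah) * math.sin(ub) * math.sin(vb)
      + cmath.exp(1j * bh) * math.cos(ub) * math.cos(vb))

lhs = 1 - abs(E)**2
rhs = (math.cos((ah - bh) / 2)**2 * math.cos(ub + vb)**2
       + math.sin((ah - bh) / 2)**2 * math.cos(ub - vb)**2)

assert abs(abs(Eu)**2 - lhs) < 1e-12   # (4.47), u-bar part
assert abs(abs(Ev)**2 - lhs) < 1e-12   # (4.47), v-bar part
assert abs(lhs - rhs) < 1e-12          # (4.48)
```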
The only complication in solving (4.43) and (4.44) is in dealing with the final term in each. In the case of (4.43) this now involves evaluating the integral
$$\frac{2d\overline{u}}{U_{\overline{u}}\left(1-\left|E\right|^2\right)}=-\frac{2\lambda d\lambda }{(\lambda ^2-1)\left\{\mathrm{cos}^2\left(\frac{\widehat{\alpha }-\widehat{\beta }}{2}\right)+\lambda ^2\mathrm{sin}^2\left(\frac{\widehat{\alpha }-\widehat{\beta }}{2}\right)\right\}},$$
(4.49)
where we have changed the variable of integration from $`\overline{u}`$ to $`\lambda `$, given in (3.35), with $`\overline{v}`$ held fixed. This integral is easy to evaluate and using (4.48) again we obtain from (4.43)
$$\mathrm{e}^{\overline{M}}=\frac{1-\left|E\right|^2}{F(\overline{v})\sqrt{\mathrm{cos}(\overline{u}-\overline{v})\mathrm{cos}(\overline{u}+\overline{v})}},$$
(4.50)
with $`F(\overline{v})`$ a function of integration. By (4.45) we find that in fact $`F`$ is a constant given by
$$F=\sqrt{\alpha ^2+\beta ^2}\sqrt{a^2+b^2}.$$
(4.51)
It is straightforward to see that (4.50) with (4.51) also satisfies (4.44). The integration of Einstein’s vacuum field equations is now complete.
## 5 Discussion
The purpose of this paper has been to propose a simple key to open up the boundary–value problem which is involved in deriving a model in General Relativity of the vacuum gravitational field left behind after the head–on collision of two plane impulsive gravitational waves. This ‘key’ has been provided, with some motivation, in section 3 (following equation(3.23)) and a physical interpretation has been suggested there for the interesting equation (3.43), which is a consequence of the key assumption and some of the vacuum field equations. Our approach has been to focus attention on properties of the backscattered gravitational radiation present after the collision. This appears to be a non–linear phenomenon whose presence therefore ought to be expected to play a central role in the development of a scattering theory for gravitational radiation.
Notwithstanding the simplicity of our key assumption, the derivation of the line–element of the vacuum space–time in the region $`𝒮`$ in section 4 \[the region $`𝒮`$ is defined following equation (4.28)\] is complicated. It is therefore useful to summarise the result: the vacuum space–time in the region $`𝒮`$ has line– element of the form (2.2) with $`U`$ given by (2.17), $`V`$ and $`W`$ given by (4.33) with the complex function $`E`$ in (4.42) expressed in coordinates $`(u,v)`$ as
$$E=(\alpha +i\beta )u\sqrt{1-(a^2+b^2)v^2}+(a+ib)v\sqrt{1-(\alpha ^2+\beta ^2)u^2},$$
(5.1)
while $`M`$ is reconstructed in coordinates $`(u,v)`$ using (3.30), (3.32), (4.50) and (4.51). The result is
$$\mathrm{e}^M=\frac{\left(1-\left|E\right|^2\right)\mathrm{e}^{-U/2}}{\sqrt{1-(\alpha ^2+\beta ^2)u^2}\sqrt{1-(a^2+b^2)v^2}},$$
(5.2)
with $`\mathrm{e}^U`$ given in (2.17). If in (2.17), (5.1) and (5.2) we put $`a^2+b^2=1=\alpha ^2+\beta ^2`$ we recover the original form of the Nutku and Halil solution. If in addition $`b=\beta =0`$ (and thus $`E=\overline{E}`$ and so $`W=0`$) we recover the original form of the Khan and Penrose solution. We note that in the region $`𝒮`$ a curvature singularity is encountered on the boundary where $`(a^2+b^2)v^2+(\alpha ^2+\beta ^2)u^2=1`$ and the solution above is valid only up to this space–like subspace.
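As an editorial sanity check, the reduction of the summary formulae to the Khan and Penrose case $`b=\beta =0`$ can be confirmed numerically: reading the exponent in (5.2) as $`\mathrm{e}^{-U/2}`$ (so that (4.50) and (3.32) combine consistently), the expression for $`\mathrm{e}^M`$ agrees exactly with the Appendix B form (B.8). Parameter values are arbitrary samples.

```python
import math

alpha, a = 0.7, 0.5          # b = beta = 0 (linearly polarised case)
u, v = 0.6, 0.5              # sample point inside the interaction region

Au = math.sqrt(1 - alpha**2 * u**2)
Bv = math.sqrt(1 - a**2 * v**2)
eU = 1 - alpha**2 * u**2 - a**2 * v**2

E = alpha * u * Bv + a * v * Au                    # (5.1) with b = beta = 0
eM_main = (1 - E**2) / (math.sqrt(eU) * Au * Bv)   # (5.2), exponent read as -U/2

eM_appB = eU**1.5 / (Au * Bv * (Au * Bv + alpha * a * u * v)**2)  # (B.8)

assert abs(eM_main - eM_appB) < 1e-12
```

The agreement follows from the identity $`1-E^2=(\sqrt{1-\alpha ^2u^2}\sqrt{1-a^2v^2}-\alpha auv)^2`$ in this special case.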
Finally we wish to emphasise that the approach to solving collision problems in General Relativity developed in this paper is not confined to the examples worked through above. With different boundary conditions to those described by equations (2.3)–(2.10) and the paragraph following (2.10), the technique is capable of solving new collision problems (see, for example ).
This collaboration was funded by the Department of Education and Science and by the Ministère des Affaires Etrangères.
## Appendix A Plane Impulsive Wave
The line–element of the vacuum space–time describing the gravitational field of an impulsive pp–wave is given by
$$ds^2=2dZd\overline{Z}+2dV\left(dU+\delta \left(V\right)f(Z,\overline{Z})dV\right).$$
(A.1)
Here $`f`$ is a real–valued function which is harmonic in $`Z,\overline{Z}`$:
$$\frac{\partial ^2f}{\partial Z\partial \overline{Z}}=0.$$
(A.2)
The only non–vanishing component of the Riemann tensor in Newman–Penrose notation is
$$\mathrm{\Psi }_0=\frac{\partial ^2f}{\partial Z^2}\delta \left(V\right),$$
(A.3)
which is therefore Petrov type N with $`\partial /\partial U`$ as degenerate principal null direction. Integral curves of $`\partial /\partial U`$, which have vanishing expansion and shear, generate the history of the wave–front $`V=0`$. Hence $`V=0`$ is a null hyperplane. Following the example of Penrose a discontinuous coordinate transformation removes the $`\delta `$–function from the line–element (A.1) and introduces a coordinate system $`(v,u,\zeta ,\overline{\zeta })`$ in which the metric tensor is continuous $`\left(C^0\right)`$ across $`V=0`$. This transformation is given by
$$V=v,U=u-\theta (v)f(\zeta ,\overline{\zeta })+|g|^2v_+,Z=\zeta +v_+\overline{g}(\overline{\zeta }),$$
(A.4)
where $`\theta (v)`$ is the Heaviside step function, $`v_+=v\theta (v)`$ and $`g(\zeta )=\partial f/\partial \zeta `$ is an analytic function of $`\zeta `$. The line–element (A.1) is transformed under (A.4) into the Rosen form
$$ds^2=2\left|d\zeta +v_+\overline{K}(\overline{\zeta })d\overline{\zeta }\right|^2+2dudv,$$
(A.5)
with $`K(\zeta )=dg/d\zeta =\partial ^2f/\partial \zeta ^2`$ and (A.3) becomes
$$\mathrm{\Psi }_0=\frac{\partial ^2f}{\partial \zeta ^2}\delta (v).$$
(A.6)
For a plane impulsive wave with two degrees of freedom of polarisation $`\partial ^2f/\partial \zeta ^2`$ is a complex constant. We take $`f=\mathrm{Re}\left\{(a+ib)\zeta ^2\right\}`$ with $`a,b`$ real constants for this case. For a linearly polarised plane impulsive gravitational wave either $`a=0`$ or $`b=0`$. We note that (A.4) incorporates Penrose’s geometrical construction of the impulsive pp–wave whereby the history of the wave is formed in Minkowskian space–time by first subdividing the space–time into two halves $`v>0`$ and $`v<0`$ each with boundary $`v=0`$ and then reattaching the halves on $`v=0`$ identifying the points $`(v=0,u,\zeta )`$ and $`(v=0,u-f(\zeta ,\overline{\zeta }),\zeta )`$. This is a mapping of the two copies of $`v=0`$ where points on the same generators $`\zeta =\mathrm{constant}`$ of $`v=0`$ are mapped one to the other by the translation $`u\rightarrow u-f(\zeta ,\overline{\zeta })`$ and the mapping preserves the intrinsic degenerate metric on $`v=0`$.
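As an editorial sanity check, the quadratic profile $`f=\mathrm{Re}\left\{(a+ib)\zeta ^2\right\}`$ is harmonic, so (A.2) holds identically; a small finite-difference check (with arbitrary constants) confirms this.

```python
# With zeta = x + iy, f = Re{(a+ib) zeta^2} = a(x^2 - y^2) - 2bxy,
# whose Laplacian vanishes identically, as required by (A.2).
a, b = 1.3, -0.7             # arbitrary real constants
f = lambda x, y: a * (x**2 - y**2) - 2 * b * x * y

x0, y0, h = 0.4, -0.9, 1e-4
lap = (f(x0 + h, y0) + f(x0 - h, y0) + f(x0, y0 + h) + f(x0, y0 - h)
       - 4 * f(x0, y0)) / h**2

assert abs(lap) < 1e-6       # zero up to floating-point rounding
```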
## Appendix B Incoming Waves Linearly Polarised
If the incoming waves are linearly polarised $`b=0`$ in (2.3)–(2.5) and $`\beta =0`$ in (2.7)–(2.9). In the interaction region ($`u>0,v>0`$) after the collision the function $`W=0`$. This is the problem solved by Khan and Penrose . Our basic assumption following (3.23) in this case reads: there exist parameters ($`\overline{u},\overline{v}`$) along the integral curves of $`n`$ and $`l`$ respectively such that $`V_{\overline{u}}=V_{\overline{v}}`$. These parameters are given by (3.30) with $`b=\beta =0`$. Now $`V=V(\overline{u}+\overline{v})`$ in $`u>0,v>0`$. In the barred coordinates the boundary value of $`V`$ when $`\overline{u}=0`$ is obtained from (2.4) with $`b=0`$ to be:
$$\mathrm{e}^V=\frac{1+\mathrm{sin}\overline{v}}{1-\mathrm{sin}\overline{v}}.$$
(B.1)
Thus for $`u>0,v>0`$ we have
$$\mathrm{e}^V=\frac{1+\mathrm{sin}(\overline{u}+\overline{v})}{1-\mathrm{sin}(\overline{u}+\overline{v})}=\frac{(\mathrm{cos}\overline{u}+\mathrm{sin}\overline{v})(\mathrm{cos}\overline{v}+\mathrm{sin}\overline{u})}{(\mathrm{cos}\overline{u}-\mathrm{sin}\overline{v})(\mathrm{cos}\overline{v}-\mathrm{sin}\overline{u})}.$$
(B.2)
Using (3.30) with $`b=\beta =0`$ we can write (B.2) as
$$\mathrm{e}^V=\frac{(\sqrt{1-\alpha ^2u^2}+av)(\sqrt{1-a^2v^2}+\alpha u)}{(\sqrt{1-\alpha ^2u^2}-av)(\sqrt{1-a^2v^2}-\alpha u)},$$
(B.3)
which is the Khan and Penrose expression for $`V`$ in the interaction region. From (3.30) and (3.33) we have $`U`$ in the coordinates $`(u,v)`$ given by
$$\mathrm{e}^U=1-\alpha ^2u^2-a^2v^2.$$
(B.4)
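As an editorial sanity check, the equality of the barred form (B.2) and the Khan and Penrose form (B.3) can be confirmed numerically; the wave parameters and the point $`(u,v)`$ below are arbitrary, subject to $`\alpha u<1`$ and $`av<1`$.

```python
import math

a, alpha = 0.8, 0.6          # sample wave parameters, b = beta = 0
u, v = 0.5, 0.4              # sample point with alpha*u < 1 and a*v < 1

ub, vb = math.asin(alpha * u), math.asin(a * v)   # (3.30) with b = beta = 0

# (B.2): barred form of e^V.
eV_barred = (1 + math.sin(ub + vb)) / (1 - math.sin(ub + vb))

# (B.3): Khan-Penrose form in the original coordinates.
Au = math.sqrt(1 - alpha**2 * u**2)
Bv = math.sqrt(1 - a**2 * v**2)
eV_kp = ((Au + a * v) * (Bv + alpha * u)) / ((Au - a * v) * (Bv - alpha * u))

assert abs(eV_barred - eV_kp) < 1e-12
```

The factorisation in (B.2) makes the two forms identical term by term once $`\mathrm{sin}\overline{u}=\alpha u`$ and $`\mathrm{sin}\overline{v}=av`$ are substituted.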
The only remaining function to be determined is $`M`$. In the coordinates $`(\overline{u},\overline{v})`$ this is replaced by $`\overline{M}`$ with the latter related to $`M`$ via (3.32) with $`b=\beta =0`$. If we let $`\overline{Q}=\overline{M}+\frac{1}{2}U`$ then in the barred coordinates the field equations (2.14) and (2.15) reduce to the following equations for $`\overline{Q}`$:
$$\overline{Q}_{\overline{u}}=\overline{Q}_{\overline{v}}=-2\mathrm{tan}(\overline{u}+\overline{v}).$$
(B.5)
Since $`M=0`$ when $`u=0`$ or $`v=0`$ it is easy to see that when $`\overline{u}=0`$, $`\overline{Q}=\mathrm{log}\left(\mathrm{cos}^2\overline{v}/\alpha a\right)`$ and when $`\overline{v}=0`$, $`\overline{Q}=\mathrm{log}\left(\mathrm{cos}^2\overline{u}/\alpha a\right)`$. It thus follows from (B.5) that
$$\overline{Q}=\mathrm{log}\left(\frac{\mathrm{cos}^2(\overline{u}+\overline{v})}{\alpha a}\right).$$
(B.6)
Hence by (3.32) with $`b=\beta =0`$:
$$\mathrm{e}^{-M}=\frac{\alpha a}{\mathrm{cos}\overline{u}\mathrm{cos}\overline{v}}\mathrm{e}^{\overline{Q}+\frac{1}{2}U}=\frac{[\mathrm{cos}(\overline{u}+\overline{v})\mathrm{cos}(\overline{u}-\overline{v})]^{\frac{3}{2}}}{\mathrm{cos}\overline{u}\mathrm{cos}\overline{v}\mathrm{cos}^2(\overline{u}-\overline{v})}.$$
(B.7)
In the $`(u,v)`$ coordinates this reads
$$\mathrm{e}^{-M}=\frac{(1-\alpha ^2u^2-a^2v^2)^{\frac{3}{2}}}{\sqrt{1-\alpha ^2u^2}\sqrt{1-a^2v^2}[\sqrt{1-\alpha ^2u^2}\sqrt{1-a^2v^2}+\alpha auv]^2},$$
(B.8)
which is the Khan and Penrose expression.
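The chain of equalities in (B.2) and the change of variables leading to (B.3) are easy to mischeck by hand. The following sketch (ours, not part of the original derivation; the values of $`\alpha `$, $`a`$, $`u`$, $`v`$ are arbitrary) spot-checks all three forms numerically, using the substitution $`\mathrm{sin}\overline{u}=\alpha u`$, $`\mathrm{sin}\overline{v}=av`$ implied by (3.30):

```python
import math

def v_barred(ub, vb):
    # e^V as a function of (ub + vb) alone, left-hand form of (B.2)
    s = math.sin(ub + vb)
    return (1 + s) / (1 - s)

def v_factored(ub, vb):
    # factorised right-hand form of (B.2)
    cu, cv, su, sv = math.cos(ub), math.cos(vb), math.sin(ub), math.sin(vb)
    return (cu + sv) * (cv + su) / ((cu - sv) * (cv - su))

def v_khan_penrose(u, v, alpha, a):
    # Khan-Penrose form (B.3) in the unbarred coordinates
    ru = math.sqrt(1 - alpha**2 * u**2)
    rv = math.sqrt(1 - a**2 * v**2)
    return (ru + a * v) * (rv + alpha * u) / ((ru - a * v) * (rv - alpha * u))

alpha, a = 0.7, 0.9
u, v = 0.4, 0.3                                  # a point inside the interaction region
ub, vb = math.asin(alpha * u), math.asin(a * v)  # sin(ub) = alpha*u, sin(vb) = a*v
print(v_barred(ub, vb), v_factored(ub, vb), v_khan_penrose(u, v, alpha, a))
```

All three printed numbers agree to machine precision.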
| no-problem/0002/cond-mat0002235.html | ar5iv | text |
# Disturbance spreading in incommensurate and quasiperiodic systems
## I Introduction
The fast development of nanotechnology makes one-dimensional (1D) systems like quantum wires and nanotubes available in laboratories nowadays. The study of physical properties such as localization, thermal conductivity, and electron conductance in these systems becomes more and more important. In the past years, much work has been done on the propagation and localization of (classical and quantum) waves in 1D disordered media. Particular interest is paid to the propagation of classical waves in random media (see e.g. ), and to electron transport in 1D disordered solids . In comparison with disordered systems, much less is known about the transport and diffusion properties of incommensurate and quasiperiodic systems, even though incommensurate structures appear in many physical systems such as quasicrystals, two-dimensional electron systems, magnetic superlattices, charge-density waves, organic conductors, and various atomic monolayers adsorbed on crystalline substrates.
In this paper, we study the classical transport of an initially localized excitation in 1D incommensurate, quasiperiodic, and random systems so as to better understand transport processes and relaxation properties in these systems. Recently, quantum diffusion in two families of 1D models has attracted much attention. Typical examples are the kicked rotator and kicked Harper models from the field of quantum chaos, and the Harper model and the tight-binding model associated with quasiperiodic sequences. Many interesting dynamical behaviors, such as quantum localization and anomalous diffusion, and their relationships with energy spectra have been investigated in these systems. However, classical transport in incommensurate and quasiperiodic systems, its relation with the phonon frequency distribution, and the dependence of the diffusive behavior on the initial condition have not yet been fully investigated.
The way a disturbance spreads in such systems reflects the interior structure of the underlying system. As we shall see later, the spreading properties are determined largely by the density of states, in particular by the phonon modes near zero frequency.
## II Models and numerical results
### A Incommensurate chain
The Frenkel-Kontorova (FK) model is invoked as a prototype of an incommensurate chain in this paper. This model is a 1D atom chain with an elastic nearest neighbor interaction and subjected to an external periodic potential. Most works on this model in the past two decades have been concentrated on ground state properties and phonon spectra, etc. The 1D FK model is described by a dimensionless Hamiltonian
$$H=\sum_n\left[\frac{p_n^2}{2}+\frac{1}{2}(x_{n+1}-x_n-a)^2-V\mathrm{cos}(x_n)\right],$$
(1)
where $`p_n`$ and $`x_n`$ are the momentum and position of the $`n`$th atom, respectively. $`V`$ is the coupling constant, and $`a`$ is the distance between consecutive atoms without the external potential. Aubry and Le Daëron showed that the ground state configuration is commensurate when $`a/2\pi `$ is rational and incommensurate when $`a/2\pi `$ is irrational. For an incommensurate ground state, there are two different configurations separated by the so-called transition by breaking of analyticity predicted by Aubry. This transition survives quantum fluctuations. Moreover, in contrast to other 1D nonintegrable systems such as the Fermi-Pasta-Ulam chain, the FK chain shows a normal thermal conductivity . For each irrational number $`a/2\pi `$ there exists a critical value $`V_c`$ separating the sliding state ($`V<V_c`$) from the pinned state ($`V>V_c`$). The value $`V_c=0.9716354\mathrm{\dots }`$ corresponds to the most irrational number, the golden mean value $`a/2\pi =(\sqrt{5}-1)/2`$. Without loss of generality, we restrict ourselves to this particular value of $`a`$ in the numerical calculations throughout the paper, and it is approximated by a converging series of truncated fractions $`F_n/F_{n+1}`$, where $`\{F_n\}`$ is the Fibonacci sequence.
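The Fibonacci approximants to the golden mean converge very quickly, which is what makes the truncated fractions $`F_n/F_{n+1}`$ practical as system sizes. A small sketch (our indexing convention for $`F_n`$ may differ from the paper's):

```python
import math

def fibonacci(n):
    """Return the first n Fibonacci numbers 1, 1, 2, 3, 5, ..."""
    fs = [1, 1]
    while len(fs) < n:
        fs.append(fs[-1] + fs[-2])
    return fs

golden = (math.sqrt(5) - 1) / 2     # the winding number a/(2*pi)
fs = fibonacci(25)
# the chain length N = 10946 quoted in the text is itself a Fibonacci number
assert 10946 in fs
for i in (5, 10, 24):
    ratio = fs[i - 1] / fs[i]
    print(fs[i - 1], "/", fs[i], "=", ratio, " error:", abs(ratio - golden))
```

The error decreases geometrically (roughly by a factor $`1/\varphi ^2`$ per step), so even modest truncations approximate the incommensurate winding number very well.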
The equation of motion for the $`n`$th atom in the FK model around its equilibrium position is
$$\frac{d^2\psi _n}{dt^2}=\psi _{n+1}+\psi _{n-1}-[2+V\mathrm{cos}(x_n^0)]\psi _n,$$
(2)
where $`x_n^0`$ is the equilibrium position of the $`n`$th atom in the ground state, and $`\psi _n`$ is the normalized displacement from the equilibrium position. In fact, to obtain Eq.(2), we have written the whole displacement of the particle as $`x_n=x_n^0+ϵ\psi _n`$, where $`ϵ(\ll 1)`$ is a small parameter. To quantify the disturbance spreading, the variance of displacements
$$\sigma ^2(t)=\frac{1}{N}\sum_{n=1}^{N}|\psi _n(t)-\psi _n(0)|^2$$
(3)
is calculated by two numerical methods. The first one is the Runge-Kutta method of the fourth order to integrate Eq. (2) for a given initial condition with free boundary. The second one is to find eigenfrequencies $`\omega _j`$ and eigenvectors $`\alpha _n(j)`$ of equation
$$-\omega ^2\psi _n=\psi _{n+1}+\psi _{n-1}-[2+V\mathrm{cos}(x_n^0)]\psi _n.$$
(4)
The solution of Eq. (2) can then be expressed in the following form:
$$\psi _n(t)=\sum_{j=1}^{N}\left[A_j\mathrm{cos}(\omega _jt)+B_j\mathrm{sin}(\omega _jt)\right]\alpha _n(j)$$
(5)
where the coefficients $`A_j`$ and $`B_j`$ are determined by the initial conditions. In contrast to quantum diffusion, the classical evolution equation (2) is of second order in the time derivative. Thus initial conditions for both $`\psi _n`$ and $`d\psi _n/dt`$ are needed. One of our main findings is that the spreading behavior depends on the initial condition. For the initial condition
$$\psi _n=0\quad \text{and}\quad d\psi _n/dt=\delta _{n,n_0}$$
which is called type I, we have $`\sigma ^2\propto t^\alpha `$, where $`\alpha `$ is equal to $`1`$ and $`0`$ for $`V<V_c`$ and $`V>V_c`$, respectively. For the initial condition
$$\psi _n=\delta _{n,n_0}\quad \text{and}\quad d\psi _n/dt=0$$
which is called type II, $`\sigma ^2\propto t^0`$ for any $`V`$. (Of course, there is another type of initial condition, i.e., $`\psi _n=\delta _{n,n_0}`$ and $`d\psi _n/dt=\delta _{n,n_0}`$. Our numerical calculations show that the spreading behavior in this case is the same as that for the type-I initial condition.) Figure 1 shows the typical time evolution of $`\sigma ^2(t)`$ for the FK model. In the numerical calculations, we first obtain the ground state positions of the $`N`$ atoms in the FK chain by the gradient method for free boundary conditions, i.e., $`x_0=0`$ and $`x_N=Na`$. The results of Fig. 1 are obtained by the integration method for $`N=F_{19}=10946`$. $`\sigma ^2(t)`$ is also calculated by the second numerical method for FK chains of small size, which gives rise to the same results.
It is worth pointing out that the above-mentioned results are valid only for evolution times less than a critical value $`t^{*}`$, where $`t^{*}\approx N/2v`$ and $`v`$ is the velocity of sound. For our FK model, $`v\approx 1`$. After this critical time, i.e., for $`t>t^{*}`$, the power-law relation $`\sigma ^2(t)\propto t^\alpha `$ is destroyed due to the finite size effect.
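As a rough illustration of the second numerical method (Eqs. (4)–(5)), the sketch below diagonalizes a uniform chain with fixed ends (a stand-in for Eq. (4) at $`V=0`$, so that no FK ground state has to be computed; $`N`$ and the sampling times are illustrative choices of ours) and evolves both types of initial condition exactly in the eigenmode basis:

```python
import numpy as np

N, n0 = 1024, 512
# dynamical matrix of a uniform chain with fixed ends (Eq. (4) with V = 0)
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
w2, A = np.linalg.eigh(K)        # w2[j] = omega_j**2, column A[:, j] = alpha(j)
w = np.sqrt(w2)                  # fixed ends -> no zero-frequency mode

def sigma2(t, kind):
    a0 = A[n0]                   # alpha_{n0}(j) for every mode j
    if kind == "I":              # psi = 0, dpsi/dt = delta  ->  Eq. (7)
        psi, psi0 = A @ (np.sin(w * t) / w * a0), np.zeros(N)
    else:                        # psi = delta, dpsi/dt = 0   ->  Eq. (8)
        psi, psi0 = A @ (np.cos(w * t) * a0), np.eye(N)[n0]
    return float(np.mean((psi - psi0) ** 2))   # Eq. (3)

for t in (50.0, 100.0, 200.0):   # all well below t* ~ N/2 (sound velocity ~ 1)
    print(t, sigma2(t, "I"), sigma2(t, "II"))
```

The type-I variance grows roughly linearly in $`t`$ while the type-II variance stays bounded, in line with the behavior reported in Fig. 1.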
To get a clear picture of the spreading of a disturbance in an incommensurate chain under the two different initial conditions, we plot $`\psi _n(t)`$ in Fig. 2 for the FK chain at $`V=0.4`$. The intensity of the gray scale represents the amplitude of the displacement of the particle. Because of the huge amount of data, we record $`\psi _n(t)`$ at intervals of 20 time steps, which leads to some discontinuity. Figure 2(a) demonstrates the evolution with initial condition $`\psi _n=0`$ and $`d\psi _n/dt=\delta _{n,n_0}`$, and Fig. 2(b) shows that with initial condition $`\psi _n=\delta _{n,n_0}`$ and $`d\psi _n/dt=0`$. The difference is clear. In the latter case the disturbance spreads out in both directions, and each particle remains almost at rest after the disturbance passes it. However, in the first case, wherever the disturbance spreads, the particles are excited and keep moving. In the cantorus regime ($`V>V_c`$), the disturbance spreading in the FK chain is similar to the case in Fig. 2(b) regardless of the initial condition.
### B Quasiperiodic and random chains
We turn now to the study of disturbance spreading in uniform, quasiperiodic, and random chains. The equation of motion can be written as
$$\frac{d^2\psi _n(t)}{dt^2}=k_n\psi _{n+1}+k_{n-1}\psi _{n-1}-(k_n+k_{n-1})\psi _n.$$
(6)
If $`k_n=k`$ for all $`n`$, it corresponds to a uniform chain. For quasiperiodic chains, $`k_n`$ takes two values $`k_1`$ and $`k_2`$ which are arranged according to some deterministic quasiperiodic substitution rules. Here we discuss four types of quasiperiodic chains. They are the Fibonacci, Thue-Morse, Rudin-Shapiro, and period-doubling chains, respectively. The substitution rules for them are: $`k_1\to k_1k_2`$, $`k_2\to k_1`$ (Fibonacci); $`k_1\to k_1k_2`$, $`k_2\to k_2k_1`$ (Thue-Morse); $`k_1k_1\to k_1k_1k_1k_2`$, $`k_1k_2\to k_1k_1k_2k_1`$, $`k_2k_1\to k_2k_2k_1k_2`$, $`k_2k_2\to k_2k_2k_2k_1`$ (Rudin-Shapiro); $`k_1\to k_1k_2`$, $`k_2\to k_1k_1`$ (period-doubling). According to the classification based on the eigenvalues of the generating matrix defined by Luck, they are bounded (Fibonacci, Thue-Morse), unbounded (Rudin-Shapiro), and marginal (period-doubling). For comparison, $`\sigma ^2(t)`$ for a random chain is also studied. In this case, $`k_n`$ takes the values $`k_1`$ and $`k_2`$ with equal probability. Figure 3 shows typical time evolutions of the variance for the four quasiperiodic chains and the random chain. The disturbance spreading behaviors in these chains are the same as that of the incommensurate FK model at $`V<V_c`$, namely, $`\sigma ^2(t)\propto t`$ for all these chains with an initial condition of nonzero momentum \[$`d\psi _n(0)/dt\ne 0`$\], and $`\sigma ^2(t)\propto t^0`$ for all these chains with an initial condition of zero momentum \[$`d\psi _n(0)/dt=0`$\].
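The substitution rules are straightforward to implement; the sketch below generates the one-letter-rule chains (Rudin-Shapiro, whose rule acts on pairs of letters, is omitted for brevity):

```python
# substitution rules from the text, on the alphabet {"1", "2"} (for k_1, k_2)
RULES = {
    "fibonacci":       {"1": "12", "2": "1"},
    "thue_morse":      {"1": "12", "2": "21"},
    "period_doubling": {"1": "12", "2": "11"},
}

def substitute(rule, seed="1", steps=6):
    """Apply a substitution rule `steps` times, starting from `seed`."""
    s = seed
    for _ in range(steps):
        s = "".join(rule[c] for c in s)
    return s

for name, rule in RULES.items():
    seq = substitute(rule)
    print(name, len(seq), seq[:16])
```

Since each rule maps the seed letter to a word beginning with that letter, every iterate is a prefix of the next, so a spring sequence $`\{k_n\}`$ of any desired length can be read off one sufficiently deep iterate.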
### C Relationship with phonon spectrum
Figures 1-3 present our main results. They demonstrate that the disturbance spreading depends crucially on the initial condition. In the following, we would like to understand this peculiar behavior in terms of the phonon spectra.
The coefficients $`A_j`$ and $`B_j`$ in Eq. (5) are: $`A_j=0`$ and $`B_j=\alpha _{n_0}(j)/\omega _j`$ for the type-I initial condition; $`A_j=\alpha _{n_0}(j)`$ and $`B_j=0`$ for type II. Therefore the solutions of Eq. (2) are
$`\psi _n`$ $`=`$ $`{\displaystyle \sum _{j=1}^{N}}\mathrm{sin}(\omega _jt)\alpha _n(j)\alpha _{n_0}(j)/\omega _j,\text{type I},`$ (7)
$`\psi _n`$ $`=`$ $`{\displaystyle \sum _{j=1}^{N}}\mathrm{cos}(\omega _jt)\alpha _n(j)\alpha _{n_0}(j),\text{type II},`$ (8)
respectively. As $`N\to \infty `$, we have
$`{\displaystyle \frac{1}{N}}{\displaystyle \sum _{n=1}^{N}}\psi _n^2`$ $`\propto `$ $`{\displaystyle \int _{\omega _{min}}^{\omega _{max}}}\mathrm{sin}^2(\omega t)\alpha _{n_0}^2\rho (\omega )𝑑\omega /\omega ^2,\text{type I,}`$ (9)
$`{\displaystyle \frac{1}{N}}{\displaystyle \sum _{n=1}^{N}}\psi _n^2`$ $`\propto `$ $`{\displaystyle \int _{\omega _{min}}^{\omega _{max}}}\mathrm{cos}^2(\omega t)\alpha _{n_0}^2\rho (\omega )𝑑\omega ,\text{type II},`$ (10)
respectively, where $`\omega _{max}`$/$`\omega _{min}`$ is the maximum/minimum frequency of the phonon spectrum and $`\rho (\omega )`$ is the density of states of the phonon spectrum.
The difference between the integrands in Eqs. (9) and (10) lies in the factor $`1/\omega ^2`$. As time increases, the dominant contribution to the integral in Eq. (9) comes from the integrand around $`\omega =0`$. The integrand in Eq. (10) is an oscillating function of time $`t`$, and so is the integral. Therefore the reason for the different behaviors of these chains under different initial conditions lies in the coefficients $`B_j`$ in Eq. (5). If $`B_j`$ is equal to zero, i.e., for the initial condition with zero momentum, $`\sigma ^2(t)`$ is an oscillating function of time. If $`B_j`$ is nonzero, i.e., for the initial condition with nonzero momentum, $`\sigma ^2(t)`$ is proportional to $`t`$.
In fact, the integral in Eq. (9) for the type-I initial condition can be written as
$$\int _{\omega _{min}t}^{\omega _{max}t}t\mathrm{sin}^2(\stackrel{~}{\omega })\alpha _{n_0}^2\rho (\frac{\stackrel{~}{\omega }}{t})𝑑\stackrel{~}{\omega }/\stackrel{~}{\omega }^2.$$
(11)
If the distribution function of frequency has the scaling behavior $`\rho (\omega )\propto \omega ^\beta `$ at low frequency ($`\omega \to 0`$), then one has $`\alpha =1-\beta `$. For the uniform chain, it is well known that $`\rho (\omega )=2/(\pi \sqrt{\omega _m^2-\omega ^2})`$, thus $`\beta =0`$. We also discover that for the Fibonacci chain and the random chain, the distribution function of frequency at low frequency is the same as that of the uniform chain. To demonstrate this, we calculate the integrated distribution function of frequency (IDFF) for these quasiperiodic chains by directly diagonalizing chains of finite length, and plot them in Fig. 4 as functions of $`\omega `$. The results suggest that for all these quasiperiodic chains the IDFF is proportional to $`\omega `$ at low frequency, thus $`\beta =0`$, which is the same as for the uniform and random chains, so that the relationship $`\alpha =1-\beta `$ is satisfied for all these systems.
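The scaling relation $`\alpha =1-\beta `$ can be checked directly on the integral (11): with $`\rho (\omega )\propto \omega ^\beta `$ near zero and $`\omega _{min}=0`$, doubling $`t`$ should multiply the integral by $`2^{1-\beta }`$. A rough quadrature sketch (the cutoff $`\omega _{max}=2`$ and the times are our illustrative choices):

```python
import numpy as np

def spread_integral(t, beta, w_max=2.0):
    # trapezoidal quadrature of Eq. (11) with rho(w) ~ w**beta and w_min = 0
    x = np.linspace(1e-6, w_max * t, 400_000)
    f = t * np.sin(x) ** 2 * (x / t) ** beta / x ** 2
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

for beta in (0.0, 0.5):
    ratio = spread_integral(100.0, beta) / spread_integral(50.0, beta)
    print("beta =", beta, " I(2t)/I(t) =", ratio, " expected:", 2 ** (1 - beta))
```

The measured ratios approach $`2^{1-\beta }`$, i.e. $`\sigma ^2\propto t^{1-\beta }`$, reproducing $`\alpha =1-\beta `$.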
For the incommensurate FK model, it is well known that there is a zero-frequency phonon mode for $`V<V_c`$, whereas there is a phonon gap for $`V>V_c`$. From the above discussion, we know that the zero-frequency phonon mode plays a key role in the time behavior of $`\sigma ^2(t)`$. The time behavior of $`\sigma ^2(t)`$ in the incommensurate FK chain at $`V<V_c`$ suggests that the low-frequency behavior of the distribution function of frequency is the same as that of the chains discussed above. The curves shown in Fig. 4 indeed demonstrate this. But for the incommensurate FK chain at $`V>V_c`$, $`\omega _{min}>0`$, thus the integral in Eq. (9) is an oscillating function, and the time behavior of $`\sigma ^2(t)`$ is also an oscillating function of time. The case $`V=V_c`$ is critical. The phonon spectrum of the FK chain at $`V_c`$ is different from that at $`V<V_c`$. It has a self-similar structure and is a point spectrum \[see Fig. 5(a)\]. Therefore there is no inverse power relation between $`\rho (\omega )`$ and $`\omega `$ at low frequency. This implies that the results depend on the length of the chain in numerical calculations. This is illustrated in Fig. 5(b), where we plot $`\sigma ^2(t)`$ as a function of $`t`$ for FK chains of different lengths at $`V=V_c`$.
## III Conclusion and discussions
We have studied disturbance spreading in incommensurate, uniform, quasiperiodic, and random chains. We have found that the time evolution of the variance $`\sigma ^2(t)`$ depends on the initial conditions. Its behavior is determined by the density of phonon frequencies around zero frequency. For the initial condition of zero momentum, $`\sigma ^2(t)\propto t^0`$ for all kinds of chains studied in this letter. For the initial condition of nonzero momentum, $`\sigma ^2(t)\propto t^\alpha `$ with $`\alpha =1`$ for uniform, quasiperiodic, and random chains, and for the incommensurate FK chain at $`V<V_c`$. Although other physical properties differ from system to system, the time behavior of $`\sigma ^2(t)`$ is the same for all these systems. For the incommensurate FK chain at $`V>V_c`$, $`\sigma ^2(t)`$ is an oscillating function of time. This different behavior of the incommensurate FK chain in different $`V`$ regimes might provide us with a different approach to detecting the transition by breaking of analyticity experimentally.
This work was supported in part by grants from the Hong Kong Research Grants Council (RGC) and the Hong Kong Baptist University Faculty Research Grant (FRG). Tong’s work was also supported by Natural Science Foundation of Jiangsu Province and Natural Science Foundation of Jiangsu Education Committee, PR China.
| no-problem/0002/hep-ph0002255.html | ar5iv | text |
# TeV scale gravity, mirror universe, and …dinosaurs
## 1 Introduction
The history of science in particular, and Human history in general, teach us that it is not easy to answer the simple question "What is truth?". Maybe this is because the truth usually has infinitely many aspects or projections to be grasped. I like very much the following example from . Consider the two figures below.
Are they different? Certainly they are. This seems to be an indisputable truth. But if our wonderful divine gift – our imagination – helps us to escape the bonds of stiff two-dimensional logic, we can see the following three-dimensional picture:
Now it is clear (an indisputable truth again) that these two figures, which originally appeared to be two different objects, are actually just two different projections of the same thing – the cone. From the original two-dimensional picture alone it is impossible to establish with certainty whether these two figures are substantially different or not. With the help of imagination we can accept as a viable option that these figures may have a common origin and represent in fact one essence, but in order to prove the case one needs further information (some experiments?)
What follows is an attempt to answer Bludman's question "Muß es sein?" with regard to the Mirror World. Until experiments firmly prove or disprove its existence, any answer will of necessity include a great deal of imagination. So I will describe things that at first sight are very different and not related to the Mirror World. I appeal to your imagination to accept the possibility that these different tales are in fact fragments of the same story.
To demonstrate the importance of imagination, I will now perform a little hocus-pocus and find the Mirror World even in a simple arithmetical expression.
## 2 Arithmetics of the Mirror World
Let us begin with the (correct) expression
$$5+10+1=16.$$
Is it possible to find the Mirror World in this expression? Do not be hasty. At least a right-handed neutrino and SO(10) GUT can be found in this innocent expression, as Buccella reminded us recently . But after we catch sight of SO(10) in this expression, we may come across a more advanced SO(10) arithmetics:
$$210+560=770.$$
The remarkable fact about the fancy numbers above is that all of them are dimensions of some SO(10) irreducible representations (irreps). Now here is a general problem for you: find all SO(10) irreps such that the sum of the dimensions of the first two irreps exactly matches the dimension of the third irrep.
Maybe after some time you will find this problem a bit tricky and will decide firstly to try the analogous SO(9)-problem encouraged by the SO(9)-arithmetical observation
$$44+84=128.$$
(1)
If you are lucky enough, you will find a solution, or will discover the one given in the literature by Ramond et al. , and will understand that, surprisingly, equation (1) has its roots in the following (simple) triality structure of the $`F_4`$ exceptional Lie algebra:
Meanwhile you will learn a lot of beautiful mathematics like octonions, triality, Dynkin diagrams, Freudenthal-Tits magic square, Weyl chambers etc.
And after you have become so clever, it will strike you that $`SO\left(9\right)`$ is nothing but the Wigner little group associated with the massless degrees of freedom of eleven-dimensional supergravity. The irreps from (1) just form the $`N=1`$ supergravity supermultiplet in eleven dimensions, $`\underset{¯}{44}`$ representing gravitons, $`\underset{¯}{84}`$ – another massless bosonic field, and $`\underset{¯}{128}`$ – the Rarita-Schwinger spinor. So the very equation (1) ensures supersymmetry, that is, the equality between the bosonic and fermionic degrees of freedom.
But 11-dimensional $`N=1`$ supergravity is just a low-energy limit of a much bigger theory, called $`M`$-theory . And you will find for sure that this $`M`$(arvellous)-theory also gives in various limits all known string theories in ten dimensions, among them a heterotic string theory which leads in the low-energy limit to the $`E_8\times E_8`$ effective gauge theory, this second $`E_8`$ being nothing but the "shadow" world of mirror particles !
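For the impatient reader, the counting games of this section are easy to automate. The sketch below re-checks the sums; the list of low-lying SO(10) irrep dimensions is our own (partial) input, assumed here purely for illustration:

```python
# a partial, assumed list of low-lying SO(10) irrep dimensions
SO10_DIMS = [1, 10, 16, 45, 54, 120, 126, 144, 210, 320, 560, 770]

# search for the SO(10)-arithmetic triples: dim(a) + dim(b) = dim(c)
triples = [(a, b, c)
           for a in SO10_DIMS for b in SO10_DIMS for c in SO10_DIMS
           if a <= b and a + b == c]
print(triples)

# SO(9) / 11d supergravity: the bosonic degrees of freedom (44 + 84)
# balance the 128 fermionic ones of the Rarita-Schwinger field
assert 44 + 84 == 128
```

Within this (incomplete) list, the only solution of the general problem is precisely the triple $`210+560=770`$ quoted above.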
Now that I have found the Mirror World even in a simple arithmetical expression, you will not be surprised, I hope, to hear that my next topic is about creatures very closely related to massive neutrinos. And these creatures are dinosaurs.
## 3 The dinosaur mystery
The first dinosaurs appeared on the Earth about 250 Myr (million years) ago, at the beginning of the Mesozoic Era, in a period of time geologists call the "Triassic". Shortly after their appearance, they grew in size as well as in numbers and types, and dominated the food chain for nearly 200 Myr. Some dinosaurs were very powerful creatures. Indeed very powerful and very big. But this did not help them very much when their doomsday came at the end of the Cretaceous Period, the last period of the era they dominated. Something very mysterious happened on the Earth about 65 Myr ago and the dinosaurs suddenly (on a geological time scale) disappeared: their fossils are found throughout the Mesozoic Era but not in the rock layers of the Cenozoic Era. The first period of this new Era is called the "Tertiary" by geologists, so the dinosaur extinction is known as the Cretaceous–Tertiary or K–T extinction. In fact dinosaurs were not the only victims of this extinction – about 85% of all species inhabiting the Earth at that time went extinct, among them many marine species.
Such mass extinctions happened several times in the Earth’s history. Let us mention some major extinctions :
* The Precambrian extinction 650 Myr ago – maybe the first great extinction. About seventy percent of the dominant Precambrian flora and fauna perished.
* The Cambrian extinction 500 Myr ago – about 50% of all animal families went extinct.
* The Devonian extinction 360 Myr ago – The crisis primarily affected the marine community, having little impact on the terrestrial flora.
* The Permian extinction 248 Myr ago – the greatest mass extinction ever recorded in the Earth’s history. About 50% of all animal families perished, as well as about 95% of all marine species and many trees.
One can imagine at least two reasons why it is interesting to answer the question "what killed the dinosaurs?" First of all, without extinctions we would not be here. Extinction of species is a common companion of evolution. The fossil record documents some $`2\times 10^5`$ such extinctions. Only $`5\%`$ of all animal and plant species ever originated on the Earth are alive today. But this natural extinction process is local and gradual and does not much affect the evolution. On the contrary, mass extinctions are events of global magnitude which nearly destroy the life on the Earth, but after them the evolution is boosted ahead: new varieties of species appear which flourish and promptly occupy the vacant ecological niches. The evolution seems to be a process of punctuated equilibrium. And this process was certainly punctuated by some global event 65 Myr ago, and as a result the dinosaur era was replaced by the mammal era – an event clearly of great importance for humankind. But this is a "positive" aspect of mass extinctions. There is a negative one too. If this unfortunate thing happened to the dinosaurs (and many other less prominent species), there is no guarantee that the same will not happen to us (humankind), and so it is not excluded that we could also be found as fossils someday – a perspective you certainly do not like. But "evolution loves death more than it loves you or me. This is easy to write, easy to read, and hard to believe. The words are simple, the concept clear– but you don't believe it, do you? Nor do I. How could I, when we're both so lovable? Are my values then so diametrically opposed to those that nature preserves? …we are moral creatures in an amoral world. The universe that suckled us is a monster that does not care if we live or die– does not care if it itself grinds to a halt. It is fixed and blind, a robot programmed to kill. We are free and seeing; we can only try to outwit it at every turn to save our skins" . 
And we can hope to save our skins only if we understand where the danger comes from.
Many theories have been suggested to explain the dinosaur mystery. They can be divided into two general groups . The first kind of theory invokes extinction causes which are intrinsic (that is, Earth-based) and gradual (lasting several million years), like volcanism and plate tectonics. These are the favorite theories of paleontologists and of roughly half of the geologists attracted by the problem of dinosaur extinction. The other half of geologists and most astronomers and physicists prefer extinction causes which are extrinsic (of cosmic nature) and sudden, like an asteroid or comet impact.
The asteroid impact as a cause of the K–T extinction was suggested by Alvarez et al. and is the most popular hypothesis today. According to this scenario, the impact of a large object (an asteroid or a comet with a $`>10km`$ diameter) 65 Myr ago threw up a huge dust cloud which remained for weeks and blocked sunlight worldwide. The impact(s) may also have triggered rounds of volcanic eruptions. As a result, global and less lasting climate changes, impact-induced global wildfires, acid rains, etc. affected Earth's ecology of that time enough to force the dinosaurs to their end .
The popularity of this hypothesis is based not only on the pagan nature of contemporary science – I mean its passion for creating various idols, and Luis Alvarez was one of such idols in 1980 because of his Nobel prize. There is simply grave objective evidence that the impact really happened at the Cretaceous–Tertiary boundary. The most important piece of evidence is the Iridium anomaly discovered by Alvarez et al. .
It seems there is a thin band of clay deposits at the Cretaceous–Tertiary boundary around the world, highly enriched in Iridium. This rare element is quite sparse in Earth's crust but common in meteorites. So this Iridium anomaly, which was found by Alvarez et al. initially in marine sediments in Italy and was afterwards confirmed in both continental and marine sediments at more than 100 sites world-wide, can be considered the first physical evidence that some cosmic intruder hit the Earth 65 Myr ago.
In fact, Iridium can be extruded by volcanoes from Earth's core, where it is more abundant. And it is known that just about 65 Myr ago India, which was at that time an isolated island drifting towards its collision with Asia, met the head of a mantle plume – molten rock masses extending from Earth's core-mantle interface upward to the base of Earth's crust. This mantle plume found its way through India's crust, producing the Deccan Traps volcanism, one of the greatest volcanic episodes ever known in the Earth's history. The hotspot volcano which had produced the Deccan Traps still exists today on Reunion Island and is releasing Iridium even now !
Therefore one needs some extra evidence to discriminate between the impact and volcanic origins of the Iridium. These extra pieces of evidence are the world-wide strewn fields of microtektites (very small glass spheres) and the presence of quartz grains with multiple sets of shock lamellae (shocked quartz) in the very same clay layer between the Cretaceous and Tertiary sediments. Both are common products of the violent explosions that follow hypervelocity impacts and therefore testify in favour of an impact, not a volcanic, origin of the Iridium. The last nail in the coffin for the competing theories was the discovery that the Chicxulub crater located in the Yucatan Peninsula (Mexico) was in fact the long-sought K–T crater .
To summarize, there is little doubt today (especially among astronomers and physicists) that a large asteroid or comet collided with the Earth 65 Myr ago. It cannot be inferred with certainty that this was the only cause of the K–T extinction, or even that it was the major cause. Other factors, like the Deccan Traps volcanism, could also have played a significant role. Note that the competing ideas suggested to resolve the dinosaur mystery do not necessarily exclude each other. It may happen that they all contain just different projections of the same truth. An interesting example of how the extraterrestrial and volcanic ideas can be unified is given by Dar . Inspired by the Hubble Space Telescope discovery that the central star of the Helix Nebula is surrounded by a ring of about 3500 giant comet-like objects, he speculates that similar massive objects can be present in the outer solar system. Gravitational perturbations (for example, by passing field stars) can change their orbits and bring them into the inner solar system. A near encounter of the Earth with such a "visiting planet" could generate gigantic water tidal waves of $`1km`$ height and crustal tidal waves of $`100m`$ height. Flexing the Earth by $`100m`$ would release $`10^{34}ergs`$ of heat in Earth's interior in a short time and might trigger gigantic volcanic eruptions. Note that the Jupiter's moon Io owes its volcanic activity (the strongest in the solar system) to frictional heating due to tidal forces.
But now that’s enough about dinosaurs. To proceed and show how dinosaur extinction is related to massive neutrinos, the main topic of our conference, we need another mystery story.
## 4 The parity mystery
It is well known that the weak interactions do not respect P-invariance. To imagine how strange this situation is, let us state this P-noninvariance in another way. The image of our world in a P-mirror does not look like the original. For example, if we take the 15 degrees of freedom of the first quark-lepton generation, after reflection in the P-mirror we will have (color degrees of freedom are not indicated for the quarks):
Therefore we are lacking a right-handed neutrino state for the world to be left-right symmetric! Does this fact mean that Nature distinguishes left from right? Not necessarily. In quantum theory, space inversion is represented by some quantum-mechanical operator $`𝐏`$. But different observers can choose not only different conventions about what is a left or right reference frame, but also different bases in the internal symmetry space of the system. Therefore the operator $`𝐏`$ is determined only up to an internal symmetry operator $`𝐒`$. In other words, all the operators $`\mathrm{𝐏𝐒}_\mathrm{𝟏},\mathrm{𝐏𝐒}_\mathrm{𝟐},\mathrm{𝐏𝐒}_\mathrm{𝟑},\mathrm{}`$ are equivalent, and any of them may be selected to represent space inversion in the Hilbert space of the quantum system. Now if we find some good enough internal symmetry $`𝐒`$, such that $`\mathrm{𝐏𝐒}`$ is conserved, the world will still be invariant with respect to the $`\mathrm{𝐏𝐒}`$-mirror (and this mirror is as good as the $`𝐏`$-mirror itself for representing space inversion quantum mechanically). This subtlety in the quantum-mechanical realization of the space inversion transformation was recognized shortly after the experimental discovery of parity non-conservation, and it was suggested that the charge conjugation $`𝐂`$ could be the very internal symmetry needed. Indeed, the world looks symmetric when reflected in the $`\mathrm{𝐂𝐏}`$-mirror:
Therefore no absolute definitions of left and right are possible in a world where $`\mathrm{𝐂𝐏}`$ is an unbroken symmetry.
But we know that in our world $`\mathrm{𝐂𝐏}`$ is not an unbroken symmetry. So we are left with the strange possibility that left and right have absolute meanings in our world, unless we manage to find some other good internal symmetry which restores the space inversion invariance of the world. But there is no obvious candidate for such an internal symmetry. Therefore the scientific community simply became reconciled to the parity non-invariance of Nature. Moreover, the belief that the only good symmetries are the proper Poincaré symmetries became a kind of dogma, as strong as the opposite belief, held before Lee and Yang's seminal paper, that space inversion and time reversal should also be exact symmetries of Nature. This prompt rejection of the improper Poincaré symmetries looks especially strange if we remember that an internal symmetry which can restore the invariance with respect to the full Poincaré group was in fact suggested in the very paper of Lee and Yang. Maybe their proposal did not gain popularity because at first sight it was no less strange than the suggestion that the left and right reference frames are not equivalent. You can restore the equivalence and hence save the space inversion invariance, but you have to pay a price, and the price seems too high: a duplication of the world. For every ordinary particle, the existence of a corresponding "mirror" particle is postulated. These mirror particles are sterile with respect to the ordinary gauge interactions but interact with their own mirror gauge particles. Vice versa, ordinary particles are singlets with respect to the mirror gauge group. This mirror gauge group is an exact copy of the Standard Model $`G_{WS}=SU\left(3\right)_C\otimes SU\left(2\right)_L\otimes U\left(1\right)_Y`$ group, with the only difference that left and right are interchanged when we go from the ordinary to the mirror particles. 
Therefore the mirror weak interactions exhibit the opposite $`𝐏`$-asymmetry, and hence in such an extended universe $`\mathrm{𝐌𝐏}`$ is an exact symmetry, where $`𝐌`$ interchanges ordinary and mirror particles; consequently there is no absolute difference between left and right. This universe looks symmetric when reflected in the $`\mathrm{𝐌𝐏}`$-mirror.
After a decade, Kobzarev, Okun and Pomeranchuk returned to this idea . It was shown that mirror particles should interact only extremely weakly with the ordinary particles to evade conflict with experiment. In fact, only gravity provides a bridge between the two worlds. But gravitational interactions are very weak, so it is not easy to check the mirror world hypothesis. That’s why the idea remained unpopular and essentially unknown until recently, as illustrated by the fact that it was rediscovered by Foot, Lew and Volkas after another 25 years!
In fact there are also other ways, besides gravity, to connect these two worlds. For example, gauge invariant and renormalizable ordinary-mirror mixing is allowed for neutral particles like Higgs, $`\gamma `$ and $`Z`$ gauge bosons, and neutrinos.
Higgs – mirror Higgs mixing can significantly modify the interactions of the Higgs boson . But we have to wait for the discovery of the Higgs scalar to test this possibility.
Photon – mirror photon kinetic mixing can originate if there exists a mixed form of matter (a connector) carrying both ordinary and mirror electric charges . Even for a very heavy connector the induced mixing is expected to be significant, and as a result charged particles from the mirror world acquire a small ($`10^{-3}e`$) ordinary electric charge. Such millicharged particles have never been found . But the most stringent bound on the mixing comes from the possibility for positronium to oscillate into mirror positronium and disappear .
The neutrino case is the most interesting. Although a possible connection between neutrino properties and the mirror world was noticed earlier , the real understanding that the mirror world provides a way to reconcile the observed neutrino anomalies (the solar neutrino deficit, the atmospheric neutrino problem, the Los Alamos evidence for neutrino oscillations) arose after two recent papers by Foot and Volkas , and by Berezhiani and Mohapatra . The latter work considers an asymmetric mirror world with spontaneously broken $`\mathrm{𝐌𝐏}`$. At present this variant of the mirror world scenario, further developed in several subsequent publications, is not excluded by observations. But I will be very much surprised if eventually just this asymmetric mirror world proves to be correct. Why, just imagine, would God have invented the mirror world if parity remains broken?
In the minimal mirror extension of the Standard Model, we have just two neutrino Weyl states $`\nu _L`$ and $`\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}`$ (mirror particles are denoted by a prime throughout the paper) per generation. If Majorana masses are allowed, the most general neutrino mass matrix consistent with $`MP`$-parity conservation is
$`[\overline{\nu _L},\overline{\left(\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C}]\left(\begin{array}{cc}M& m\\ m& M\end{array}\right)\left(\begin{array}{c}\left(\nu _L\right)^C\\ \textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\end{array}\right)+H.c.,`$ (6)
where the Dirac mass $`m`$ is real. The mass eigenstates are the maximal mixtures of ordinary and mirror neutrinos no matter how small the initial mixing parameter $`m`$ is:
$$\nu _L^+=\frac{1}{\sqrt{2}}\left(\nu _L+\left(\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C\right),\nu _L^{-}=\frac{1}{\sqrt{2}}\left(\nu _L-\left(\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C\right).$$
In fact this maximality of mixing is a quite general and very important consequence of the space inversion symmetry restoration through mirror world and provides a clear experimental signature of this scenario .
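It is easy to convince yourself of this numerically. A minimal sketch in plain Python (the values of $`M`$ and $`m`$ below are arbitrary illustrations, not physical inputs): for equal diagonal Majorana entries, as required by $`\mathrm{𝐌𝐏}`$, the vectors $`(1,\pm 1)/\sqrt{2}`$ are eigenvectors regardless of how small the Dirac mass $`m`$ is.

```python
import math

# MP-symmetric mass matrix [[M, m], [m, M]]: equal diagonal entries.
# Its eigenvectors are (1, 1)/sqrt(2) and (1, -1)/sqrt(2) for ANY m != 0,
# so the ordinary-mirror mixing is maximal however small m may be.
M, m = 1.0, 1e-3          # illustrative values, arbitrary units

tr, det = 2*M, M*M - m*m  # trace and determinant of the 2x2 matrix
disc = math.sqrt(tr*tr - 4*det)
lam_plus, lam_minus = (tr + disc)/2, (tr - disc)/2   # eigenvalues M +- m

s = 1/math.sqrt(2)
v_plus = (s, s)           # candidate maximal-mixing eigenvector
Av = (M*v_plus[0] + m*v_plus[1], m*v_plus[0] + M*v_plus[1])
residual = max(abs(Av[0] - lam_plus*v_plus[0]),
               abs(Av[1] - lam_plus*v_plus[1]))
```

The mixing angle is 45° independently of $`m`$, which is the clear experimental signature mentioned in the text.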
The mirror world can also naturally accommodate very small neutrino masses via an $`MP`$-symmetric variant of the standard seesaw model , or it can even provide an alternative explanation of why neutrino masses are so small . Let us consider the latter case. In order that the neutrino not be discriminated against as compared to the corresponding charged lepton, let us assume that in addition to the $`\nu _L`$ and $`\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}`$ states there exists a right-handed neutrino $`\nu _R`$ and its left-handed mirror partner $`\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}`$, which are $`G_{WS}\otimes G_{WS}`$ singlets. Such states naturally arise if, for example, the $`G_{WS}\otimes G_{WS}`$ gauge group of the mirror-extended world is a low energy remnant of $`SO\left(10\right)\otimes SO\left(10\right)`$ grand unification. In such a grand unified mirror world, some early stages of symmetry breaking (for example $`SO\left(10\right)\otimes SO\left(10\right)\to SU\left(5\right)\otimes SU\left(5\right)`$) can generate a large $`\nu _R`$ – $`\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}`$ mixing. Besides, the ordinary electroweak Higgs mechanism and its mirror partner will lead to neutrino and mirror neutrino masses. Therefore we expect the following neutrino mass terms
$`\mathcal{L}_{mass}=M\left(\overline{\nu _R}\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}+\overline{\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}}\nu _R\right)+m\left(\overline{\nu _L}\nu _R+\overline{\nu _R}\nu _L+\overline{\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}}\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}+\overline{\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}}}\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right),`$ (7)
where $`m`$ is expected to be of the order of the charged lepton mass of the same generation, while the expected value of $`M`$ is $`10^{14}-10^{15}GeV`$. Among the mass eigenstates of (7) (physical neutrinos are denoted by a tilde) we have the following Weyl states
$$\stackrel{~}{\nu }_L=\mathrm{cos}\theta \nu _L-\mathrm{sin}\theta \textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}},\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}=\mathrm{cos}\theta \textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}-\mathrm{sin}\theta \nu _R=\mathrm{𝐌𝐏}\left(\stackrel{~}{\nu }_L\right),$$
where $`\theta \approx m/M`$ is very small. These Weyl states constitute a very light Dirac neutrino $`(\stackrel{~}{\nu }_L,\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}})`$ with the mass $`m^2/M`$. This neutrino is a rather bizarre object – its left-handed component inhabits mostly our ordinary world, while its right-handed component prefers the mirror world, intriguing mirror physicists. Alternatively, you can notice that, because $`\overline{\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}^{\textcolor[rgb]{1,0,1}{}}}_R\stackrel{~}{\nu }_L=\overline{\left(\stackrel{~}{\nu }_L\right)^C}\left(\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C`$, this ultralight-neutrino mass term
$$m\frac{m}{M}\left(\overline{\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}}\stackrel{~}{\nu }_L+\overline{\stackrel{~}{\nu }_L}\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)$$
can be considered as a degenerate limit of (6) with zero Majorana masses and you can work, if you prefer, in terms of (degenerate) maximally mixed $`\mathrm{𝐂𝐌𝐏}`$ and mass eigenstates
$$\nu _L^+=\frac{1}{\sqrt{2}}\left(\stackrel{~}{\nu }_L+\left(\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C\right),\nu _L^{-}=\frac{1}{\sqrt{2}}\left(\stackrel{~}{\nu }_L-\left(\stackrel{\textcolor[rgb]{1,0,1}{~}}{\textcolor[rgb]{1,0,1}{\nu }}_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}}\right)^C\right).$$
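The $`m^2/M`$ suppression in this scheme is also easy to check numerically. A sketch in plain Python (with arbitrary illustrative mass values): the Dirac mass matrix of (7) in the $`(\nu _L,\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{L}^{\textcolor[rgb]{1,0,1}{}})\times (\nu _R,\textcolor[rgb]{1,0,1}{\nu }_\textcolor[rgb]{1,0,1}{R}^{\textcolor[rgb]{1,0,1}{}})`$ bases is $`[[m,0],[M,m]]`$, and for $`M`$ much larger than $`m`$ its singular values come out close to $`M`$ and $`m^2/M`$.

```python
import math

m, M = 1.0, 1.0e6   # illustrative Dirac and heavy masses, arbitrary units

# Singular values of A = [[m, 0], [M, m]] from the eigenvalues of A^T A.
a11 = m*m + M*M
a12 = M*m
a22 = m*m
tr, det = a11 + a22, a11*a22 - a12*a12      # det = (det A)^2 = m**4
disc = math.sqrt(tr*tr - 4*det)
sigma_heavy = math.sqrt((tr + disc)/2)      # ~ M
# The small eigenvalue suffers catastrophic cancellation in (tr - disc)/2,
# so take it from the (exact) product of singular values instead:
sigma_light = math.sqrt(det)/sigma_heavy    # ~ m**2/M
```

The same diagonalization gives the small mixing angle of order $`m/M`$ quoted in the text.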
Besides neutrino oscillations, there are some other observed phenomena which can also be interpreted as supporting the mirror world hypothesis. It is well known that there is a lot of dark matter in our universe, and mirror matter can constitute a considerable fraction of this dark universe . It is even possible that mirror stars have already been observed as gravitational microlensing events . Recent Hubble Space Telescope star counts revealed the deficit of local luminous matter predicted by Blinnikov and Khlopov many years ago as a consequence of the existence of mirror stars. Note however that Hipparcos satellite data have not confirmed the deficit of visible matter. Mirror matter was also evoked to explain some mysterious properties of gamma-ray bursts . Just during our conference a paper by Mohapatra, Nussinov and Teplitz appeared on the latter subject . This paper provokes the thought that maybe the straightest road from the mirror world to the ordinary one lies through extra dimensions. So we turn our narrative now towards extra dimensions.
## 5 The hierarchy mystery
The energy scale where gravity becomes strong and quantum gravity effects are essential is given by the Planck mass. This mass can be estimated as follows. Suppose two particles of equal mass $`m`$ are separated by a distance equal to the corresponding Compton wavelength $`\lambda =1/m`$. If the gravitational interaction energy of the system, $`G_Nm^2/\lambda =G_Nm^3`$, is of the same order as the particle rest mass $`m`$, the former cannot be neglected. This gives for the Planck mass
$$M_{Pl}=\frac{1}{\sqrt{G_N}}\approx 10^{19}GeV.$$
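This estimate is a one-liner to reproduce numerically — a sketch in plain Python with rounded constants (purely illustrative):

```python
import math

G    = 6.674e-11     # Newton constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
c    = 2.998e8       # m/s
GeV  = 1.602e-10     # J per GeV

# M_Pl = sqrt(hbar*c/G), converted to a rest energy in GeV
m_planck_kg = math.sqrt(hbar*c/G)       # ~2.18e-8 kg
M_Pl_GeV    = m_planck_kg*c*c/GeV       # ~1.2e19 GeV
```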
The huge difference between this quantum gravity energy scale and the electroweak scale $`E_{EW}\sim 10^2GeV`$ is astonishing and constitutes the so-called hierarchy problem. There is also a gauge hierarchy problem: the Grand Unification scale $`E_{GUT}\sim 10^{16}GeV`$ is very big compared to $`E_{EW}`$. Any successful theory should not only explain these hierarchies, but also provide some mechanism to protect them against radiative corrections. Recently an interesting idea of how to deal with the hierarchy problem was suggested by Arkani-Hamed, Dimopoulos and Dvali . Certainly, there will be no problem if there is no hierarchy. But how can we lower the quantum gravity scale so that the hierarchy disappears? It turns out that this is possible if extra spatial dimensions exist with a big enough compactification radius.
Suppose that besides the usual $`x,y,z`$ coordinates there exist some additional spatial coordinates $`\textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{1}\textcolor[rgb]{1,0,0}{,}\textcolor[rgb]{1,0,0}{\mathrm{}}\textcolor[rgb]{1,0,0}{,}\textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{n}`$, which are compactified on circles with a common (for simplicity) compactification radius $`R`$. In such a world with toroidal compactification, the gravitational potential created by an object of mass $`m`$ should be periodic in the extra $`n`$ dimensions. That is, it should be invariant under the replacements $`\textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{i}\to \textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{i}\pm 2\pi R`$. Besides, it should vanish at spatial infinity and obey the $`\left(n+3\right)`$-dimensional Laplace equation. These requirements are satisfied by the following function
$`V={\displaystyle \underset{n_1,\mathrm{},n_n}{\sum }}{\displaystyle \frac{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}m}{\left[r^2+\underset{i=1}{\overset{n}{\sum }}\left(\textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{i}-2\pi Rn_i\right)^2\right]^{\left(n+1\right)/2}}}},`$
where $`\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}`$ is the Newton constant for $`n+4`$ space-time dimensions and $`r^2=x^2+y^2+z^2`$ is the usual three-dimensional radial distance. If the compactification radius $`R`$ is very large compared to the distances involved, only the term with $`n_1=0,\mathrm{},n_n=0`$ survives in the sum and we get the Newton law in $`n+4`$ dimensions:
$$V\approx \frac{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_Nm}{\stackrel{~}{r}^{n+1}},$$
(8)
where $`\stackrel{~}{r}=\sqrt{r^2+\underset{i=1}{\overset{n}{\sum }}\textcolor[rgb]{1,0,0}{x}_\textcolor[rgb]{1,0,0}{i}^2}`$. But if $`R\ll r`$, the sum can be approximated by an integral
$$V\approx \frac{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}m}{\left(2\pi R\right)^n}\int d^{\left(n\right)}\stackrel{}{x}\frac{1}{\left(r^2+\stackrel{}{x}^2\right)^{\left(n+1\right)/2}}\approx \frac{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}}{R^n}\frac{m}{r}.$$
Therefore for the conventional 4-dimensional Newton constant we have
$$G_N\approx \frac{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}}{R^n}.$$
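The crossover between the multidimensional and the ordinary Newton law can be checked numerically. A minimal sketch in plain Python for a single extra dimension (n = 1, in units where the numerator and R equal one; the image sum is simply truncated at a large k): at r much smaller than R the potential behaves as 1/r², at r much larger than R as 1/(2Rr).

```python
import math

# n = 1 extra dimension, probe at x1 = 0:
#   V(r) = sum over image charges k of 1 / (r^2 + (2*pi*R*k)^2)
R = 1.0

def V(r, kmax=200000):
    s = 1.0/(r*r)                          # k = 0 image
    for k in range(1, kmax + 1):           # k and -k images together
        s += 2.0/(r*r + (2.0*math.pi*R*k)**2)
    return s

short = V(0.01)    # r << R: 5-dimensional regime, V ~ 1/r^2
long_ = V(100.0)   # r >> R: ordinary 4-dimensional regime, V ~ 1/(2*R*r)
```

The test below checks both asymptotic regimes against the truncated sum.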
On the other hand, the fundamental multidimensional quantum gravity scale $`\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}`$ is now determined from
$$\left|\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}V\left(\frac{1}{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}}\right)\right|\sim \stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}},$$
where the potential $`V`$ is given by the equation (8), and we have
$$\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}=\left[\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{G}}_\textcolor[rgb]{1,0,0}{N}\right]^{-\frac{1}{n+2}}.$$
The last two relations indicate
$`{\displaystyle \frac{M_{Pl}}{\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}}}\approx \left({\displaystyle \frac{R}{\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}}}\right)^{\frac{n}{2}},`$ (9)
where $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}=1/\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}`$ and $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}\approx 10^{-19}\mathrm{m}`$ (m – one meter), if the fundamental quantum gravity scale $`\stackrel{\textcolor[rgb]{1,0,0}{~}}{\textcolor[rgb]{1,0,0}{M}}_{\textcolor[rgb]{1,0,0}{P}\textcolor[rgb]{1,0,0}{l}}`$ is in the few TeV range. Therefore the initial $`M_{Pl}/E_{EW}`$ hierarchy problem can be traded for another hierarchy: the largeness of the compactification radius compared to $`\textcolor[rgb]{1,0,0}{R}_\textcolor[rgb]{1,0,0}{0}`$. Namely, we get from (9) the corresponding compactification radius
$$R\approx 10^{\frac{32}{n}-19}\mathrm{m}.$$
For one extra dimension this means a modification of Newton's gravity at scales $`R\approx 10^{13}\mathrm{m}`$ and is certainly excluded. But already for $`n=2`$, $`R\approx 1\mathrm{m}m`$ – just the scale where our present-day experimental knowledge of gravity ends.
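The whole trade-off is easy to tabulate (a sketch in plain Python; the numbers follow directly from the formula above):

```python
# Compactification radius R ~ 10**(32/n - 19) m needed to bring the
# fundamental gravity scale down to ~1 TeV, for n extra dimensions.
radii = {n: 10**(32.0/n - 19.0) for n in range(1, 7)}
# n = 1 -> ~1e13 m  (solar-system scales: excluded)
# n = 2 -> ~1e-3 m  (about a millimeter)
```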
Although gravity was not checked in the sub-millimeter range, Standard Model interactions were fairly well investigated far below this scale. Therefore if the large extra dimensions really exist, one needs some mechanism to prevent the Standard Model particles from feeling these extra dimensions. Remarkably, there are several possibilities to ensure their confinement to a 3-dimensional wall in the multidimensional space . Just to illustrate one of them, let us consider a toy model in (3+1)-dimensional space-time with the Lagrangian
$`\mathcal{L}=\overline{\psi }i\widehat{\partial }\psi -h\varphi \overline{\psi }\psi +{\displaystyle \frac{1}{2}}\left(\partial _\mu \varphi \right)^2-\lambda \left(\varphi ^2-v^2\right)^2.`$ (10)
This Lagrangian possesses $`Z_2`$ symmetry
$$\psi \to i\gamma _5\psi ,\varphi \to -\varphi ,$$
which is spontaneously broken in the true vacuum state, where $`<\varphi >=v`$ or $`<\varphi >=-v`$. We assume that the spinor-scalar interaction term $`h\varphi \overline{\psi }\psi `$ is small, so to a good approximation the equation of motion for the field $`\varphi `$ looks like
$`\partial _\mu \partial ^\mu \varphi =-4\lambda \varphi \left(\varphi ^2-v^2\right).`$ (11)
It is easy to check that (11) has a kink-like solution which depends only on the $`z`$-coordinate
$$\stackrel{~}{\varphi }\left(z\right)=v\mathrm{tanh}\left(\sqrt{2\lambda }vz\right).$$
This solution is a domain wall interpolating between the two different vacua $`<\varphi >=-v`$ and $`<\varphi >=v`$. Its thickness in the $`z`$ direction is of order $`m^{-1}`$, where $`m=\sqrt{2\lambda }v`$.
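That the kink indeed solves the static form of the equation of motion, $`\varphi ^{\prime \prime }=4\lambda \varphi \left(\varphi ^2-v^2\right)`$, is easy to verify by finite differences — a minimal sketch in plain Python with arbitrary illustrative values of $`\lambda `$ and $`v`$:

```python
import math

lam, v = 0.3, 2.0                  # illustrative couplings, arbitrary units
m = math.sqrt(2.0*lam)*v           # inverse wall thickness

def phi(z):                        # the kink profile v*tanh(m*z)
    return v*math.tanh(m*z)

# Maximal residual of phi'' - 4*lam*phi*(phi^2 - v^2) over a grid across the wall
h = 1e-4
residual = 0.0
for i in range(-200, 201):
    z = 0.02*i
    d2 = (phi(z + h) - 2*phi(z) + phi(z - h))/(h*h)   # second derivative
    residual = max(residual, abs(d2 - 4.0*lam*phi(z)*(phi(z)**2 - v**2)))
```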
Let us now consider the fermion in this kink-like background. The equation of motion which follows from (10) is
$`i\widehat{\partial }\psi =h\stackrel{~}{\varphi }\left(z\right)\psi .`$ (12)
This last equation has a factorized solution
$$\psi =\nu (x,y)f\left(z\right),$$
where $`f\left(z\right)`$ is a scalar function and the $`\nu (x,y)`$ spinor satisfies (note that $`\gamma _3`$ is anti-hermitian)
$$i\widehat{\partial }\nu (x,y)=0,\gamma _3\nu (x,y)=i\nu (x,y).$$
For $`f\left(z\right)`$, equation (12) then gives
$$\frac{df\left(z\right)}{dz}=-h\stackrel{~}{\varphi }\left(z\right)f\left(z\right),$$
its solution with $`f\left(0\right)=1`$ being
$$f\left(z\right)=\mathrm{exp}\left\{-h\underset{0}{\overset{z}{\int }}\stackrel{~}{\varphi }\left(z\right)𝑑z\right\}=\mathrm{exp}\left\{-\frac{h}{\sqrt{2\lambda }}\mathrm{ln}\left(\mathrm{cosh}mz\right)\right\}.$$
We see that
$$\psi =\nu (x,y)\mathrm{exp}\left\{-\frac{h}{\sqrt{2\lambda }}\mathrm{ln}\left(\mathrm{cosh}mz\right)\right\}$$
describes a massless “flat” fermion $`\nu (x,y)`$ localized on the domain wall, with the localization scale determined by the fermion-scalar interaction strength $`h`$.
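The localization is equally easy to check numerically (plain Python, again with illustrative parameter values): the profile $`f\left(z\right)=\left(\mathrm{cosh}mz\right)^{-h/\sqrt{2\lambda }}`$ satisfies $`df/dz=-h\stackrel{~}{\varphi }\left(z\right)f\left(z\right)`$ and decays away from the wall.

```python
import math

lam, v, hc = 0.3, 2.0, 0.5           # illustrative couplings (hc is the Yukawa h)
m = math.sqrt(2.0*lam)*v

def phi(z):                          # kink background
    return v*math.tanh(m*z)

def f(z):                            # zero-mode profile cosh(m*z)**(-hc/sqrt(2*lam))
    return math.exp(-(hc/math.sqrt(2.0*lam))*math.log(math.cosh(m*z)))

# Check df/dz = -hc*phi(z)*f(z) on a grid, and the decay away from the wall
eps, residual = 1e-5, 0.0
for i in range(-100, 101):
    z = 0.05*i
    dfdz = (f(z + eps) - f(z - eps))/(2*eps)          # central difference
    residual = max(residual, abs(dfdz + hc*phi(z)*f(z)))
decays = f(5.0) < f(1.0) < f(0.0)
```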
To summarize, the hierarchy mystery may indicate the following fascinating structure of our world: the Standard Model particles (and hence human observers) are stuck on a wall (“3-brane”) in the higher, ($`4+n`$)-dimensional space-time. On the contrary, gravity propagates freely in the remaining space (the bulk) and feels the large ($`\sim 1\mathrm{m}m`$) compact extra dimensions. In the string theory framework, this picture is naturally achieved if the ordinary particles correspond to endpoints of open strings attached to the brane, while gravity, represented by closed strings, can propagate in the bulk. The most surprising thing about this crazy idea is that it doesn't come into immediate conflict with known experimental facts .
I’d like to end this chapter with some personal experience with extra dimensions. Some time ago I sent an e-mail letter to my friend in Chicago. Soon I received an answer saying “I have received a message from you but I don’t know to which Sasha it is addressed (I don’t know about any Sasha now in Milano)”. I was surprised, not so much by the fact that my letter went to Milan instead of Chicago, but by the fact that the answer was from Andrea Gamba; while preparing my diploma thesis in my university years I had read a very interesting paper by A. Gamba about peculiarities of the eight-dimensional space. I was intrigued and asked him if he was the very eight-dimensional Gamba. The answer was “It’s really a mystery how I received your letter; unfortunately I don’t know about 8-dimensional space, in 1967 I was 5 years old… But certainly your message passed through some extra dimension!”
So personally I’m quite convinced about the existence of extra dimensions. I was so astonished by the coincidence described above that I even wrote a scientific paper about peculiarities of the eight-dimensional space and its possible connection to the generation problem – this paper can be considered material evidence of communication through extra dimensions. But now it’s time to stop making fun and ask what profit large extra dimensions can bring to the mirror world.
## 6 Extra dimensions and the mirror universe
Gravity is the main connector between our world and the mirror one. Therefore, if it becomes strong at energies of about a few TeV, an immediate consequence will be the possibility to produce mirror particles at future high energy colliders via virtual graviton exchange. The typical total cross-sections are
$$\sigma \sim \frac{s^3}{\mathrm{\Lambda }^8}\approx \left(\mathrm{few}\ \mathrm{pb}\right)\left(\frac{s}{\mathrm{T}eV^2}\right)^3\left(\frac{\mathrm{T}eV}{\mathrm{\Lambda }}\right)^8,$$
where $`\mathrm{\Lambda }\sim 1\mathrm{T}eV`$ is an ultraviolet cutoff for the effective low-energy theory, presumably of the order of the bulk Planck mass . These cross-sections are quite sizeable, but unfortunately there is no clear experimental signature for such events. Reactions accompanied by initial-state radiation may have a more useful signature , but we expect severe background problems here, in particular from real graviton emission. Therefore TeV-scale quantum gravity can allow quite effective mirror matter production at future TeV-range colliders, but it will be very difficult to convince skeptics that mirror particles have really been produced.
Another interesting effect is quarkonium – mirror quarkonium oscillations. As a result, heavy C-even quarkonia can oscillate into their mirror counterparts and hence disappear from our world. Unfortunately, the expected probabilities are very small . For example, the probability for the $`\chi _{b2}`$ state to oscillate into its mirror partner is about $`3\times 10^{-14}`$.
The most promising effect is connected to mirror supernovas, because some part of a mirror supernova's energy will be released in our world too. In , the reactions $`\textcolor[rgb]{1,0,1}{e}^{\textcolor[rgb]{1,0,1}{+}}\textcolor[rgb]{1,0,1}{e}^{\textcolor[rgb]{1,0,1}{-}}\to e^+e^{-},\gamma \gamma `$ were considered as a tool to transfer energy from the mirror to the ordinary sector. The resulting ordinary-energy emissivity, per unit volume per unit time, of a mirror supernova core with temperature $`T`$ is given by the thermal average over the Fermi-Dirac distribution and was found to be
$`\dot{q}={\displaystyle \frac{6T^{13}}{25\pi ^3\mathrm{\Lambda }^8}}\left[I_5\left(\nu \right)I_6\left(-\nu \right)+I_5\left(-\nu \right)I_6\left(\nu \right)\right],`$ (13)
where
$$\nu =\frac{\mu _e}{T}\quad \mathrm{and}\quad I_n\left(\nu \right)=\underset{0}{\overset{\mathrm{\infty }}{\int }}𝑑x\frac{x^n}{\mathrm{exp}\left(x+\nu \right)+1},$$
$`\mu _e`$ being the chemical potential for mirror electrons in the mirror-supernova core.
Let us compare (13) to the neutrino emissivity of a supernova (only the leading term is shown)
$$\dot{q}_{\nu \overline{\nu }}=\frac{2G_F^2T^9}{9\pi ^5}\left(C_V^2+C_A^2\right)\left[I_3\left(\nu \right)I_4\left(-\nu \right)+I_3\left(-\nu \right)I_4\left(\nu \right)\right],$$
(14)
where $`C_A=\frac{1}{2},C_V=\frac{1}{2}+2\mathrm{sin}^2\mathrm{\Theta }_W`$ and $`G_F`$ is the Fermi coupling constant. For the core temperature $`T=30\mathrm{M}eV`$, chemical potential $`\mu _e\approx 345\mathrm{M}eV`$ and $`\mathrm{\Lambda }\sim 1\mathrm{T}eV`$, equations (13) and (14) give
$$\frac{\dot{q}}{\dot{q}_{\nu \overline{\nu }}}\approx 1.4\times 10^{-16}.$$
As expected, we get a very small number. But during the first $`10`$ seconds the neutrino luminosity of a supernova is enormous : $`L_{\nu \overline{\nu }}\approx 3\times 10^{45}W`$ for each species of neutrino. And even a $`1.4\times 10^{-16}`$-th part of $`L_{\nu \overline{\nu }}`$ is a thousand times larger than the solar luminosity!
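The quoted ratio can be reproduced directly from (13) and (14) by numerical integration — a sketch in plain Python with a simple trapezoid rule (standard inputs $`\mathrm{sin}^2\mathrm{\Theta }_W\approx 0.231`$ and $`G_F`$; everything in GeV units):

```python
import math

def I(n, nu, xmax=60.0, steps=60000):
    """I_n(nu) = integral_0^inf x^n / (exp(x + nu) + 1) dx, trapezoid rule."""
    dx = xmax/steps
    total = 0.0
    for k in range(steps + 1):
        x = k*dx
        w = 0.5 if k in (0, steps) else 1.0
        total += w * x**n / (math.exp(x + nu) + 1.0)
    return total*dx

T, mu_e, Lam = 0.030, 0.345, 1000.0     # core temperature, chem. potential, cutoff (GeV)
G_F = 1.166e-5                          # Fermi constant, GeV^-2
C_A, C_V = 0.5, 0.5 + 2*0.231
nu = mu_e/T

q_mirror = 6*T**13/(25*math.pi**3*Lam**8) \
    * (I(5, nu)*I(6, -nu) + I(5, -nu)*I(6, nu))
q_neutrino = 2*G_F**2*T**9/(9*math.pi**5)*(C_V**2 + C_A**2) \
    * (I(3, nu)*I(4, -nu) + I(3, -nu)*I(4, nu))
ratio = q_mirror/q_neutrino             # comes out close to the quoted 1.4e-16
```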
Therefore mirror supernovas can be seen by ordinary observers, at least for some seconds after their birth. Note that according to , we are already observing light from mirror supernovas as gamma-ray bursts!
We tacitly assumed above that the ordinary and mirror matter are located on the same 3-brane. For space-times with extra dimensions this is not necessarily the only possibility. In fact, you can imagine a situation when different worlds are located on different 3-branes (or even on branes with dimensionality other than 3). But I would be careful about using the nickname “mirror” for particles living on a different brane. Maybe “shadow world” or “parallel world” is more appropriate in this case. I prefer to reserve the name “mirror world” for situations which imply the exact parity symmetry. But how can the exact parity invariance be reconciled with parallel worlds? A priori one can't expect any symmetry between parallel worlds located on different branes. For me the only natural possibility is to ensure the parity symmetry for separate brane worlds. I think this may be achieved if particles can't cross the brane (in the low energy approximation) and are trapped on its different surfaces. Then the parity transformation will involve a transition from one brane surface to another. Therefore the mirror particles are just particles located on the other surface of our brane, and so are not separated from the ordinary world very much in the extra dimension, if the brane is thin. In this case one should expect the same low scale quantum gravity effects as discussed at the beginning of this chapter for the situation when the ordinary and mirror particles inhabit the same brane.
This idea is not as wild as it seems at first sight. Let me recall an interesting condensed-matter analogy: vierbein domain walls in a superfluid $`{}_{}{}^{3}He`$-$`A`$ film . Such a domain wall divides the bulk into two classically separated “worlds”: no quasiparticle can cross the wall in the classical limit. But “Planck scale physics” allows these worlds to communicate, and quasiparticles with high enough energy can cross the wall. Moreover, a left-handed chiral quasiparticle becomes right-handed when the wall is crossed!
If you want a really cool crazy idea – here it is: the mirror world without mirror particles . To illustrate this idea, imagine you are the king of ants living in a two-dimensional flatland. One day your main court astrologist brings you a piece of exciting news: there is a deep sense in the notions of left and right, because Nature does not respect parity symmetry, and so the absolute meaning of left, as the side preferred by stars, can be established. You immediately decide to notify your subject ants which side is left – the lucky side. So you send couriers with this mission throughout your kingdom. It may happen, however, that your world has a non-trivial global structure in the higher dimensional space and constitutes, for example, a Möbius strip. Then after some time one of your couriers can be found in a land your main astrologist calls the land of shadows. You cannot see him but can communicate with him using gravity. Gravitationally you feel as if he were somewhere very close. And really he is just beneath you on the Möbius strip – see Fig.1 below .
But you are flat, as are all of your subjects, and so have no idea about extra dimensions. You can't say that your courier ant is turned upside-down, because he is two-dimensional. And his two-dimensional appearance, checked by gravity, looks the same as for all other ants. Simply, in his zeal to fulfill your order he traveled too far away. And everybody in your kingdom knows that if you travel a long enough way you will return to the same place, but will return as an invisible shadow. Your main astrologist says that one can reach the land of shadows after a very long journey. But anyway this land of shadows is a part of your kingdom – nobody, even your main astrologist, can tell you where the ordinary land ends and the land of shadows begins. So naturally you want your shadow subjects also to have the correct notion of which side is left. And here a great surprise is awaiting you. To your main astrologist's horror, your shadow courier indicates a completely different side as the left one – the side which was originally marked as right by the very same courier before he left the court.
Hence in such a Möbius world the absolute difference between left and right has meaning only locally. No such difference can be established globally – the world as a whole is parity invariant!
If you do not like worlds with edges, you can consider, for example, a Klein bottle universe instead. In this case you need at least four space dimensions to realize such a (two-dimensional) world without self-intersections.
## 7 Nemesis – the dark (matter) sun?
But, for goodness' sake, what do all these mirror worlds and extra dimensions have in common with dinosaurs? – you may ask. To explain this, we need one more (in fact my favorite) dinosaur extinction theory :
“There is another Sun in the sky, a Demon Sun we cannot see. Long ago, even before great grandmother’s time, the Demon Sun attacked our Sun. Comets fell, and a terrible winter overtook the Earth. Almost all life was destroyed. The Demon Sun has attacked many times before. It will attack again.”
It is a very nice theory, having almost mythical power, isn't it? But such an explanation would be enough in some primitive society, not spoiled by science and civilization. You need a more scientific story, I suspect. And the scientific story begins with the question: are mass extinctions periodic?
“Most discoveries in physics are made because the time is ripe” . And not only in physics. Although Fischer and Arthur had already suggested a 32-Myr periodicity in marine mass extinctions , it took about seven years for the subject to become popular. And this happened when Raup and Sepkoski's seminal paper appeared. They used extensive extinction data about 3500 families of marine animals which Sepkoski had collected for years. After scrutinizing the data, only 567 families were selected for which the data were considered the most reliable. The extinction rates of these families plotted versus geological time exhibited a puzzling periodicity. Fig.2 shows Raup and Sepkoski's original data as presented by Muller .
The geological time scale accuracy is a rather subtle point and not everybody agrees that the periodicity is statistically significant. But we think that Raup and Sepkoski’s analysis should be considered as at least a strong indication of 26-30 Myr periodicity in the extinction data. Especially if you take into account that the same periodicity was confirmed in Sepkoski’s later studies of fossil genera . A similar periodicity has been observed in the cratering rate on the Earth , in magnetic reversals and in orogenic tectonism .
But if this mysterious periodicity is indeed real, some extraordinary explanation is needed for it. Several such explanations were suggested shortly after Raup and Sepkoski's findings. All of them use extraterrestrial causes to explain terrestrial mass extinctions. This is not surprising, because only in astronomy can one find clocks with such a long period.
Rampino and Stothers suggested that the Sun's motion perpendicular to the galactic plane can modulate comet fluxes streaming towards the inner solar system, because when the Sun crosses the galactic plane, twice per its $`\sim `$60 Myr oscillation period, the probability to meet molecular clouds increases. Of course, it is an interesting fact that the half-period of solar oscillations perpendicular to the galactic plane practically coincides with the mass extinction period. But at least two obvious drawbacks of this hypothesis can be indicated. First of all, the present amplitude of the solar oscillations perpendicular to the galactic plane is comparable with the scale height of molecular clouds, so these oscillations are unlikely to produce any detectable periodicity in encounters with molecular clouds . Besides, the Sun's oscillations in and out of the galactic plane are out of phase with mass extinctions: the Sun is presently just near the galactic plane, whilst we are about half-way between extinctions .
Another mechanism, which can lead to periodic comet showers, postulates the existence of a yet undiscovered tenth planet (planet X) in the solar system . It is assumed that this planet has swept out a gap in the comet disk beyond the orbit of Neptune during its lifetime. If the orbit of planet X has modest eccentricity and inclination to the ecliptic, it will pass close to the inner and outer edges of the gap twice in its perihelion precession period. And this precession period is expected to be about 56 Myr – nearly twice the extinction period – if the semi-major axis of the orbit is $`\sim 100`$ AU, big enough to ensure that it is not a simple matter to discover such a planet. This is an interesting hypothesis, but the question is whether the needed gap in the comet distribution around the tenth planet could be maintained .
Most solar-type stars have companion(s). Partially based on this observation, Davis et al. and independently Whitmire and Jackson suggested that the Sun may be no exception and also has a distant companion star. How can this putative solar companion cause periodic comet showers? If its orbital period is $`\sim 26`$ Myr, it will have a large semi-major axis $`a\approx 8.8\times 10^4\,\mathrm{AU}\approx 1.4`$ light years according to Kepler’s third law. But even in this case its perihelion $`r_{min}=a\left(1-e\right)`$, where $`e`$ stands for the orbital eccentricity, can be of the order of $`3\times 10^4`$ AU if $`e\approx 0.7`$ – sufficiently low to disturb the inner Oort cloud, a comet reservoir containing about $`10^{13}`$ comets. Then every perihelion passage of the companion star will induce a cometary shower which after some tens of thousands of years will enter the inner solar system, and some of these comets will hit the Earth with high probability. Schematically this is shown in Fig.3 .
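The orbital numbers quoted here follow directly from Kepler’s third law. A quick numerical check (not part of the original talk; it assumes a total system mass of about one solar mass, so $`a[\mathrm{AU}]^3=P[\mathrm{yr}]^2`$):

```python
# Back-of-the-envelope check of the Nemesis orbit quoted in the text.
# Assumes total mass ~ 1 solar mass, so Kepler's third law reads
# a[AU]^3 = P[yr]^2.
P_yr = 26e6                     # orbital period matching the extinction cycle
a_AU = P_yr ** (2.0 / 3.0)      # semi-major axis, ~8.8e4 AU
AU_PER_LY = 63241.1             # astronomical units in one light year
e = 0.7                         # assumed orbital eccentricity
r_min = a_AU * (1.0 - e)        # perihelion distance, ~2.6e4 AU

print(f"a = {a_AU:.2e} AU = {a_AU / AU_PER_LY:.2f} ly, perihelion = {r_min:.2e} AU")
```

The perihelion comes out near $`2.6\times 10^4`$ AU, consistent with the “of the order of $`3\times 10^4`$ AU” quoted above.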
The hypothetical solar companion star was named Nemesis, “after the Greek Goddess who relentlessly persecutes the excessively rich, proud and powerful” . This name became the most popular, although the Hindu God of destruction Shiva and his Mother Goddess Kali were argued to be alternatives more suitable to convey dual aspects of mass extinctions .
Let us take a bit closer look at the Nemesis theory and estimate how many comets are expected to hit the Earth because of the Oort cloud perturbation caused by Nemesis. To do this, we need some model for the distribution of comet orbits in the inner Oort cloud, and we take the simplest model : all comets have the same semi-major axis $`a=10^4\,\mathrm{AU}`$ and their positions and velocities are uniformly distributed in the phase space. Only comets with the perihelion distance $`a\left(1-e\right)<1\,\mathrm{AU}`$ cross the Earth’s orbit and for each crossing have some chance to hit the Earth. These comets should have orbital eccentricities $`e>1-1\,\mathrm{AU}/a=1-10^{-4}`$. So the fraction $`\nu `$ of the inner Oort cloud comets which will cross the Earth’s orbit twice within 1 Myr, the cometary orbital period for our choice of their semi-major axis, is given by
$$\nu =\int _{0.9999}^{1}f\left(e\right)\,de.$$ (15)
Here $`f\left(e\right)`$ is a distribution function for the eccentricity $`e`$. Because, for fixed semi-major axis, $`1-e^2\propto L^2`$, $`L`$ being the orbital angular momentum, the distribution function for $`e^2`$ is the same as the distribution function for $`L^2`$. The latter can be derived from our supposition about the uniform distribution of the comets in the phase space. But it is possible to guess this distribution function more easily by using the analogy with a highly excited quantum-mechanical hydrogen atom . For highly excited states $`L^2\propto l^2`$, where $`l\gg 1`$ is the total angular momentum quantum number. Let us ask: if one excites a hydrogen atom, what is the probability that the quantum number $`l`$ will lie within the range from $`l`$ to $`l+\mathrm{\Delta }l`$? Each hydrogen atom level is $`\left(2l+1\right)`$-fold degenerate. So the desired probability will be proportional to
$$\sum _{l}^{l+\mathrm{\Delta }l}\left(2l+1\right)\approx \sum _{l}^{l+\mathrm{\Delta }l}2l\approx 2l\,\mathrm{\Delta }l,$$
where we have assumed $`l\gg 1`$. Therefore, the distribution function for $`l`$ is $`g\left(l\right)=2l`$ in the classical limit $`l\gg 1`$. This means that $`l^2`$ is distributed uniformly, and so is $`L^2`$ and hence $`e^2`$. But if $`e^2`$ is distributed uniformly, the distribution function for the eccentricity will be $`f\left(e\right)=2e`$ and (15) gives
$$\nu =\int _{1-10^{-4}}^{1}2e\,de\approx 2\times 10^{-4}.$$
The total number of comets in the inner Oort cloud is estimated to be $`N=10^{13}`$. Therefore $`\nu N\approx 2\times 10^9`$ comets will rush towards the Earth every 1 Myr. The geometrical cross section of the Earth constitutes a $`1.8\times 10^{-9}`$ part of its orbital area. And this number should be even slightly enhanced because of the gravitational focusing (about $`1.1`$-times ). Therefore the expected number of comet hits on the Earth’s surface is about $`2\times 10^9\times 1.8\times 10^{-9}\times 1.1\times 2\approx 8`$. Here the last factor $`2`$ accounts for the fact that a comet will cross the Earth’s orbit twice during its perihelion passage and, therefore, will have two chances to hit the Earth.
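The whole estimate fits in a few lines of arithmetic. The sketch below simply reuses the numbers assumed in the text; it is a check of the estimate, not an independent calculation:

```python
# Sketch of the impact-rate estimate above. All inputs are the values
# assumed in the text; the result reproduces the quoted ~8 hits per Myr.
N_comets = 1e13             # comets in the inner Oort cloud
e_min = 1.0 - 1e-4          # minimum eccentricity for an Earth-crossing orbit
nu = 1.0 ** 2 - e_min ** 2  # integral of f(e) = 2e from e_min to 1
crossers = nu * N_comets    # Earth-orbit-crossing comets per Myr

p_hit = 1.8e-9              # Earth's cross section as a fraction of its orbital area
focusing = 1.1              # gravitational-focusing enhancement
crossings = 2               # each comet crosses the Earth's orbit twice

hits = crossers * p_hit * focusing * crossings
print(f"nu = {nu:.1e}, expected impacts per Myr ~ {hits:.1f}")
```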
This estimate indicates that the Earth would be a very hazardous place, hardly capable of developing any complex forms of life, unless it has some protection against these comet storms. And it is really protected by its faithful safeguards Jupiter and Saturn. Most of the comets crossing Saturn’s orbit will be ejected from the solar system after a few orbital periods due to gravitational perturbations by Jupiter and Saturn. Because of this effect, the distribution of the Oort cloud comets in the phase space is in fact not uniform: the region corresponding to orbits that enter the inner solar system, the so-called “loss cone”, is normally empty. Therefore the Earth usually sits secure in the quiet “eye” of the comet storm .
Do you realize that we owe our opportunity to attend this conference to Jupiter? I was quite amazed when this thought crossed my mind while preparing these notes. Complex life might be quite rare in the universe . It is not sufficient to find a star like the Sun which has a planet like the Earth. You also need to supply the respective safeguards.
When Nemesis comes close, it disturbs Oort cloud comets and, as a result, fills the loss cone. In other words, this means that about two billion comets are sent towards the Earth each time Nemesis passes its perihelion. The total number of impacts expected on Earth will be higher than eight – our above estimate. Paradoxically, this is due to effects of Jupiter and Saturn. A small number of comets from the Nemesis-induced shower will not be immediately expelled from the solar system by these safeguards but instead perturbed into smaller, frequently returning orbits. These comets will visit the planetary system several times until their final ejection on hyperbolic orbits or disintegration due to a close approach to the Sun. Hence the probability to hit the Earth increases several times, up to an order of magnitude .
As we see, if the Nemesis is heavy enough to fill the loss cone, its close approaches to the Sun will be catastrophic for creatures like dinosaurs. Smaller creatures, like cockroaches, can possibly survive and enjoy the night sky filled with comets, with several new comets appearing every day. It was shown that if the mass of the Nemesis is not much smaller than $`0.1\,M_{\odot }`$, the loss cone will be indeed filled by a single perihelion passage of the perilous solar companion for assumed eccentricity $`e=0.7`$.
The next obvious question to be answered before acceptance of the Nemesis theory is the stability of such a wide binary system. While orbiting the Sun, Nemesis experiences both slowly changing and rapidly fluctuating perturbations. The former are due to galactic tides and the Coriolis forces (remember that the solar rest frame rotates around the galactic center). The latter are caused by passing field stars and interstellar clouds.
For the assumed semi-major axis, the Nemesis is in the region where the Sun’s gravity still dominates over the Galaxy field. But due to galactic tides, an orbit oriented parallel to the galactic plane is more stable than orbits at higher galactic latitudes . Moreover, retrograde orbits are more stable because for such orbits Coriolis forces increase stability . Therefore it may be more probable for the Nemesis to be located at low inclinations with respect to the galactic plane. But it is not excluded that the present-day Nemesis has high inclination, because its orbit is not rigid but subject to various perturbations. So one can imagine that Nemesis started with low inclination and a much less wide orbit, and random perturbations have led to its present wide and high galactic latitude orbit, where it can still have a several hundred Myr lifetime (according to , the lifetime for an orbit perpendicular to the galactic plane is $`\sim 500`$ Myr).
The perturbing effects of passing field stars were studied by extensive numerical calculations . It was shown that the period of the “double star clock” fluctuates randomly due to this effect. But the expected drift in orbital period over the last 250 Myr (the geological period of interest in light of Raup and Sepkoski’s data) is within 10 to 20% – low enough not to spoil periodicities in observable mass extinction data.
The lifetime of $`10^3`$ Myr for the Sun-Nemesis system found in these calculations suggests that it is not possible for the Nemesis to have been on such a wide and eccentric orbit all the time during the solar system’s existence. So either Nemesis was captured by the Sun relatively recently – an event considered extremely unlikely because it requires three-body encounters or very close encounters to allow a tidal dissipation of the excessive energy – or its orbit was much tighter in the early years of the solar system and random-walked to its present position. In the latter case one can expect a higher bombardment rate in the past. And it is known that at least in the period between 4.5 and 3 Gyr the bombardment rate was indeed very high. It is believed that one such collision of a planetary size object with the Earth led to the formation of the Moon. Intriguingly, a moon of the right size and at the right position appears to be one more ingredient for complex life to develop on the Earth’s surface , because it minimizes changes in the Earth’s tilt, ensuring climate stability.
One more important question was successfully settled by these numerical calculations. In principle, some perturbation can force Nemesis to enter into the planetary system and cause “a catastrophe of truly cosmogonical proportions” . Fortunately, this fatal event turned out to have a very low probability and hence the planetary system can survive the presence of distant solar companion .
The effects of interstellar clouds are the most uncertain. Opinions about the fate of the Sun-Nemesis system here range from the extremely pessimistic to the extremely optimistic . The truth should lie somewhere in the middle between these two extremes. Unlike a field star, a single close encounter with a giant molecular cloud can instantly disrupt a wide binary. But in contrast to the stellar neighborhood of the Sun, both the distribution and internal structure of the interstellar clouds are poorly known near the Sun . Disruptive effects of interstellar clouds were investigated by Hut and Tremaine . Their analysis indicates that the effects of interstellar clouds most probably lead to a lifetime of $`10^3`$ Myr for a distant solar companion, comparable to the lifetime caused by stellar perturbations . Therefore interstellar clouds also seem to be harmless for the Nemesis hypothesis if Nemesis began its career on a much tighter orbit than the postulated present one.
To summarize, there are some indications of 26-30 Myr periodicity in mass extinction data and in some other geological phenomena. This periodicity can be naturally explained if we assume the existence of a distant solar companion star – Nemesis. Its present orbit is not stable enough to have remained so wide and eccentric all the time since the solar system formation. But if the Nemesis was on a much tighter orbit in the past and random-walked to its present position due to various perturbations, nothing seems to invalidate the hypothesis. The only drawback of this theory is that Nemesis was never found. And this is the point where the mirror world enters the game: you can’t expect to discover Nemesis through conventional observations if it is made from some mirror stuff, can you?
But why a mirror Nemesis? Is there any more serious reason for the God, except to hide Nemesis from us, to choose the mirror option? Maybe there is. While looking at the solar system, an obsessive impression appears that every detail of it was designed to make an emergence of complex life possible . And it took billions of years of evolution for creatures as intelligent as we are to appear. Nemesis, it is believed, has played an important role in this process, periodically punctuating evolution. Therefore you need the Nemesis to orbit for ages. As we have mentioned earlier, the best way to do this is to place the Nemesis orbit at lower galactic latitudes, to minimize the disruptive effects of galactic tides and hence increase the orbit stability. But if Nemesis is made from ordinary matter and was formed from the same nebula as the rest of the solar system, you expect the Nemesis orbit to be in the ecliptic plane – at high galactic latitudes. On the contrary, if the solar system was formed from a nebula of mixed mirrority (the possibility of such a nebula was considered in ), a priori there is no reason for the mirror part of the nebula to have the same angular momentum direction as the ordinary part. So for a mirror Nemesis it is natural simply to be formed in a plane different from the ecliptic.
Of course, the above given arguments are not completely rigorous. But who knows, maybe the answer on the question “what killed the dinosaurs?” really sounds like this: Nemesis, the mirror matter sun.
## 8 Conclusions
The mirror Nemesis hypothesis emerged almost as a joke during our e-mail discussions with Robert Foot. After some thought we found no reason why this hypothesis, although somewhat extravagant, might not be true . As a result, the dinosaur theme, which I originally intended to introduce just to make the presentation more vivid, quickly became one of the central motives of this talk, and you have the story presented above. I hope you enjoyed it regardless of whether dinosaurs were really eyewitnesses of the mirror world or not.
I consider the possibility to restore the equivalence between left and right through the mirror world as very attractive. Theories with extra spatial dimensions, and $`M`$-theory in particular, can easily produce various “shadow worlds” which are, however, not necessarily parity invariant (this also refers to the $`E_8\times E_8`$ model mentioned earlier), but some of them might be, so realizing the mirror world scenario. Maybe it is even possible to have a mirror world without mirror particles. $`M`$-theory nonorientable compactifications, suggested so far , do not lead to a realistic model, as far as I can judge. But it will be very interesting to find a realistic example and show that the parity noninvariance of our world points to its nonorientable topology and is only a local phenomenon.
“In the Soviet scientific society the scientists had one freedom that scientists in the West lacked and still lack (perhaps the only real freedom that Eastern scientists had), and that was to spend time also on esoteric questions. They did not have to be scrutinized by funding agencies every now and then” . The Soviet Union disappeared and so did this freedom. You can consider this paper, if you like, as a nostalgia for this kind of freedom, enabling to escape bonds of the stiff pragmatic logic.
## Acknowledgements
Although, as I became aware while preparing these notes, historically I owe my chance to attend this beautiful place and conference to Jupiter, the Moon and Nemesis, all their efforts would be in vain without professor Zurab Berezhiani. I thank him very much for giving me the possibility to attend the Gran Sasso Summer Institute, and for his kind hospitality during the conference. I also thank Denis Comelli, Francesco Villante and Anna Rossi for their help at Ferrara and Gran Sasso.
I’m indebted to Sergei Blinnikov for encouragement and for pointing out the Hubble-Hipparcos controversy.
The content of this talk would be very different without the fruitful discussions with Robert Foot, which I acknowledge with gratitude.
I thank Piet Hut for sending me reprints of his articles, which were heavily used in these notes.
# X-Ray Spectral and Timing Evolution During the Decay of the 1998 Outburst from the Recurrent X-Ray Transient 4U 1630–47
## 1. Introduction
Although the recurrent X-ray transient and black hole candidate (BHC) 4U 1630–47 has been studied extensively since its first detected outburst in 1969 (Priedhorsky (1986)), interest in this source has intensified due to observations made during its 1998 outburst. During the 1998 outburst, radio emission was detected for the first time (Hjellming et al. (1999)). Although the source was not resolved in the radio, the optically thin radio emission suggests the presence of a radio jet. Also, low frequency quasi-periodic oscillations (QPOs) were discovered during the 1998 outburst (Dieters et al. 1998a ) using the Rossi X-ray Timing Explorer (RXTE).
Here, we report on X-ray observations of 4U 1630–47 made with RXTE (Bradt, Rothschild & Swank (1993)) during the decay of its 1998 outburst. We compare the X-ray light curve to those of other BHC X-ray transients and study the evolution of the spectral and timing properties during the decay. Like many other X-ray transients, the light curve of 4U 1630–47 shows an exponential decay and a secondary maximum (Chen, Shrader & Livio (1997)). During the early part of the decay, when the X-ray flux was high, 4U 1630–47 showed canonical soft state characteristics (van der Klis (1995)Nowak (1995)Chen & Taam (1996)), including an energy spectrum with a strong soft component and a steep power-law and relatively low timing variability with a fractional RMS (Root-Mean-Square) amplitude of a few percent. Later in the decay, we observe a transition to a spectrally harder and more variable state, which has similarities to transitions observed for GS 1124–68 (Ebisawa et al. (1994)Miyamoto et al. (1994)) and GRO J1655–40 (Mendez et al. (1998)) near the ends of their outbursts.
In this paper, we describe the 4U 1630–47 X-ray light curve for the 1998 outburst and the RXTE observations (§2). In §3 and §4, we present results of modeling the power and energy spectra, respectively. In §5, we examine the transition in more detail, and §6 contains a discussion of the results. Finally, §7 contains a summary of our findings.
## 2. Observations and Light Curve
We analyzed PCA (Proportional Counter Array) and HEXTE (High Energy X-ray Timing Experiment) data from 51 RXTE pointings of 4U 1630–47 during the decay of its 1998 outburst. The observation times, integration times and background subtracted 2.5-20 keV PCA count rates are given in Table 1. In Figure 1, we show the 1.5-12 keV PCA fluxes with the ASM (All-Sky Monitor) flux measurements in the same energy band. The ASM light curve was produced from data provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF at NASA’s GSFC. The 1998 outburst was first detected by BATSE on Modified Julian Date 50841 (MJD = JD–2400000.5), and 4U 1630–47 was not detected by the ASM until about MJD 50847 (Hjellming et al. (1999); Kuulkers et al. 1998a ). Figure 1 shows that the ASM flux increased rapidly after MJD 50850, peaking at $`1.10\times 10^{-8}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (1.5-12 keV) on MJD 50867. The flux dropped to about $`6\times 10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup> soon after the peak, and our RXTE observations began during this time. Our observations fill a gap in the ASM light curve near MJD 50880, showing that a flare occurred during this time. The flux decayed exponentially between MJD 50883 and MJD 50902 with an e-folding time of 14.4 d. After the exponential decay, the flux increased by about 50% over a time period of about 20 d, and a secondary maximum occurred near MJD 50936. After the secondary maximum, the flux decay is consistent with an exponential with an e-folding time of 12 d to 13 d. In Figure 1, the vertical dashed line at MJD 50951 marks an abrupt change in the timing properties of the source, which is described in detail below. The source flux at the transition was between 6 and 7 $`\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
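A minimal illustration of how such e-folding times are extracted — fitting an exponential to the flux points in a chosen window. The data here are synthetic stand-ins for the actual PCA fluxes, with the decay constant put in by hand:

```python
# Minimal sketch of extracting an e-folding decay time by fitting
# F(t) = F0 * exp(-t / tau) to flux measurements in a chosen window.
# The data below are synthetic (tau = 14.4 d plus 2% noise), standing in
# for the actual PCA fluxes; flux units are 1e-9 erg cm^-2 s^-1.
import numpy as np
from scipy.optimize import curve_fit

def expdecay(t, F0, tau):
    return F0 * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 19.0, 20)                       # days since MJD 50883
flux = expdecay(t, 6.0, 14.4) * (1.0 + 0.02 * rng.standard_normal(t.size))

popt, pcov = curve_fit(expdecay, t, flux, p0=(5.0, 10.0))
print(f"e-folding time = {popt[1]:.1f} d")
```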
Soft $`\gamma `$-ray bursts were detected from a position near 4U 1630–47 on MJD 50979 (Kouveliotou et al. (1998)), 7 d after our last RXTE observation, and the $`\gamma `$-ray source has been named SGR 1627–41. Although the position of SGR 1627–41 is not consistent with the position of 4U 1630–47 (Hurley et al. (1999)), the two sources are close enough so that they were both in the RXTE field of view during our observations, allowing for the possibility of source confusion. As described in detail in the appendix, RXTE scans and BeppoSAX observations provide information about possible source confusion. Based on the evidence, we conclude that it is very unlikely that SGR 1627–41 contributed significantly to the flux detected during our observations of 4U 1630–47.
## 3. X-Ray Timing
For each observation, we produced 0.0156-128 Hz power spectra to study the timing properties of the system. For each 64 s interval, we made an RMS normalized power spectrum using data in the 2-21 keV energy band. To convert from the Leahy normalization (Leahy et al. (1983)) to RMS, we determined the Poisson noise level using the method described in Zhang et al. (1995) with a deadtime of 10 microseconds per event. For each observation, the individual 64 s power spectra were averaged, and the average spectrum was fitted using a least-squares technique and several different analytic models. For individual 64 s power spectra, we calculated the error bars using equation A11 from Leahy et al. (1983). When combining the power spectra for an entire observation, we used two different methods to calculate the errors. In one method, we calculated the errors by propagating the error bars for individual power spectra. This method does not account for any intrinsic (i.e., non-random) changes in the power spectrum over the duration of the observation. We also estimated the error by calculating $`\sigma /\sqrt{N}`$, where $`\sigma `$ is the standard deviation of the power measurements from the individual spectra, and $`N`$ is the number of 64 s power spectra being combined. For all observations, the error estimates are approximately the same above $``$2 Hz, indicating that the shape of the power spectrum at higher frequencies does not change significantly during an observation. However, below $``$2 Hz, the calculated errors are significantly larger using the second method, indicating that intrinsic changes in this region of the power spectrum are significant. In the following, we have used the second method to calculate the errors.
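A compact sketch of this construction (not the authors’ actual pipeline): Leahy-normalize each 64-s segment, convert to fractional-RMS units by dividing by the mean count rate, subtract the Poisson level, average, and take $`\sigma /\sqrt{N}`$ over the segment ensemble as the error. For simplicity the deadtime-free Poisson level of 2 is used here in place of the Zhang et al. (1995) correction:

```python
# Sketch of the power-spectrum construction described above (not the
# authors' pipeline): Leahy periodograms of 64-s segments, converted to
# fractional-RMS units, averaged, with sigma/sqrt(N) errors.
import numpy as np

def leahy_power(counts):
    """Leahy-normalized periodogram of one evenly binned light curve."""
    n_ph = counts.sum()
    ft = np.fft.rfft(counts)
    return 2.0 * np.abs(ft[1:]) ** 2 / n_ph          # drop the DC bin

rng = np.random.default_rng(0)
seg_len, n_seg, dt = 4096, 64, 1.0 / 64.0            # 64-s segments, 1/64-s bins
segments = rng.poisson(100.0, size=(n_seg, seg_len)) # fake Poisson light curves

rate = segments.sum(axis=1).mean() / (seg_len * dt)  # mean count rate (counts/s)
# RMS normalization: divide Leahy power by the rate; the Poisson noise
# level (2 in Leahy units, deadtime ignored here) is subtracted.
powers = np.array([(leahy_power(s) - 2.0) / rate for s in segments])
avg = powers.mean(axis=0)
err = powers.std(axis=0) / np.sqrt(n_seg)            # sigma / sqrt(N)
print(f"mean residual power = {avg.mean():.1e} (consistent with zero)")
```

For pure Poisson noise the noise-subtracted average is consistent with zero, as it should be; real source variability would leave positive power at low frequencies.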
To determine the analytic model to use for the continuum noise, we began by fitting the power spectrum for each observation with a power-law model. For some observations, the power-law fits are acceptable ($`\chi _\nu ^2\approx 1.0`$); however, in most cases, the reduced $`\chi ^2`$ is significantly greater than 1.0 and systematic features appear in the residuals. Strong QPOs dominate the residuals for several observations, and these are discussed in detail below. For the observations without obvious QPOs, the power-law residuals are similar and show a broad excess peaking between 0.5 and 1.0 Hz. To model this broad excess, we focus on the observation 8 power spectrum since the statistics are good for this observation and there are no strong QPOs. Fitting the observation 8 power spectrum with a power-law alone gives a poor fit ($`\chi ^2/\nu =680/444`$). Previous studies of the power spectra of BHCs show that the continuum noise can be described by a model consisting of two components: A power-law and a band-limited noise component (e.g., Cui et al. 1997; Miyamoto et al. 1994). In applying this model to 4U 1630–47, we used a broken power-law with the lower power-law index fixed to zero for the band-limited component, and hereafter this model is referred to as the flat-top model. Applying this two-component model to the observation 8 power spectrum gives a significantly improved fit ($`\chi ^2/\nu =486/441`$). Figure 2a shows the observation 8 power spectrum fitted with the two-component model.
For each observation, we fitted the power spectrum using the power-law model alone, the flat-top model alone and the combination of the two components. For several of the observations, the statistics are not good enough to uniquely determine the best continuum model. In these cases, we combined consecutive observations, as indicated in Table 2, to improve the statistics and refitted the power spectra with the same models. For observations 1 to 10, the fit using the two-component model is significantly better than using either of the individual components, indicating that these power spectra require both components. For observations 11 to 51, the flat-top model alone provides a significantly better fit than the power-law model alone, and the two-component model does not provide a significantly better fit than the flat-top component alone. We conclude that only the flat-top component is necessary to fit these power spectra. The continuum parameters for all observations are given in Table 2. In cases where the power-law component is not significantly detected, the 90% confidence upper limit on the contribution from a power-law with an index of $`1.0`$ is given. Figure 2b shows the power spectrum for observations 11 to 40 combined, illustrating that the power-law component is not significant at low frequencies. We note that there is some evidence for excess noise near 45 Hz, but this excess is not statistically significant. For observations 41 to 51, the RMS amplitude for the continuum noise is 10% to 17%, which is considerably higher than for observations 1 to 40. In determining the continuum parameters, we included Lorentzians to model the QPOs as marked in Table 2.
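For concreteness, the two continuum components can be written out as follows (a sketch with our own parameter names; the “flat-top” is a broken power-law whose lower index is fixed to zero, i.e. constant below the break frequency):

```python
# Sketch of the two continuum components discussed above: a power-law
# plus a "flat-top" component (a broken power-law that is constant below
# the break frequency and a power-law above it).
import numpy as np

def power_law(f, norm, index):
    return norm * np.asarray(f, dtype=float) ** (-index)

def flat_top(f, norm, f_break, index):
    f = np.asarray(f, dtype=float)
    return np.where(f < f_break, norm, norm * (f / f_break) ** (-index))

def continuum(f, pl_norm, pl_index, ft_norm, f_break, ft_index):
    return power_law(f, pl_norm, pl_index) + flat_top(f, ft_norm, f_break, ft_index)
```

Fitting either component alone, or their sum, to each averaged power spectrum then reduces to an ordinary least-squares problem.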
To determine where QPOs are present, we examined the residuals for fits with the continuum model only. For observations 1-2, 3, 6, 7, 8, 41, 42, 43, 44, 45, 46-48 and 49, systematic features in the residuals suggest the presence of QPOs. To determine if these features are statistically significant, we compared the $`\chi ^2`$ for a fit with the continuum model only to a fit with a Lorentzian added to the continuum model. F-tests indicate that QPOs significant at greater than 96% confidence occurred for observations 1-2, 3, 41, 42, 43 and 46-48. For observation 1-2, the continuum model provides a relatively poor fit to the data ($`\chi ^2/\nu =561/441`$), and the largest residuals occur near 11 Hz. The fit is significantly improved ($`\chi ^2/\nu =471/438`$) when a Lorentzian is added to the continuum model. The QPO centroid, FWHM and RMS amplitude are $`10.8\pm 0.2`$ Hz, $`2.9\pm 0.6`$ Hz and $`2.01\%\pm 0.16`$%, respectively. Although the features for observations 6, 7 and 8 are not as statistically significant, they also have centroids between 10 and 13 Hz and may be related to the observation 1-2 QPO.
For observation 3, the continuum model provides an extremely poor fit ($`\chi ^2/\nu =987/441`$), and the largest residuals occur near 6 Hz. Although the fit is significantly improved by the addition of a Lorentzian at 5.7 Hz, the fit is still relatively poor ($`\chi ^2/\nu =685/438`$), and systematic features are present in the residuals, which indicate that the 5.7 Hz QPO is not well-described by a Lorentzian. As for some other BHCs (Belloni et al. (1997)Revnivtsev, Trudolyubov & Borozdin (1999)), the QPO has a high frequency shoulder that can be modeled using a second Lorentzian. Modeling the QPO with Lorentzians at 5.4 Hz and 6.2 Hz improves the fit to $`\chi ^2/\nu =608/435`$. The fit can be further improved to $`\chi ^2/\nu =552/432`$ by the addition of a QPO near 11 Hz. It is possible that the 11 Hz QPO is a harmonic of the lower frequency QPO, but it may also be related to the QPO that occurred during observation 1-2. Table 3 summarizes the QPO parameters for observation 3, and it should be noted that three Lorentzians were included in the model in determining the continuum parameters given in Table 2. Figure 3 shows the observation 3 power spectrum fitted with a model consisting of the continuum plus three Lorentzians to model the QPOs. The Lorentzians at $`5.43\pm 0.02`$ Hz, $`6.19\pm 0.04`$ Hz and $`10.79\pm 0.14`$ Hz have RMS amplitudes of $`2.89\%\pm 0.18`$%, $`2.85\%\pm 0.21`$% and $`1.85\%\pm 0.20`$%, respectively. To determine if the QPO properties changed during the observation, we divided observation 3 into two time segments with durations of 576 s and 512 s, made power spectra for each segment and fitted the power spectra with a model consisting of the continuum (flat-top plus power-law) plus three Lorentzians. The results for these fits are given in Table 3. There is no evidence for large changes in the QPO properties between the two time segments.
The increase in the continuum noise level that occurred between observations 40 and 41 was accompanied by the appearance of a QPO at $`3.390\pm 0.008`$ Hz with an RMS amplitude of $`7.30\%\pm 0.33`$%. In subsequent observations, the QPO frequency gradually shifted to lower frequency. Figure 4 shows the power spectra for observations 41, 42, 43 and 46-48. After the 3.4 Hz QPO appeared for observation 41, QPOs occurred at $`2.613\pm 0.012`$ Hz, $`1.351\pm 0.012`$ Hz and $`0.228\pm 0.003`$ Hz for observations 42, 43 and 46-48, respectively. We note that the observation 43 QPO shows some evidence for a high frequency shoulder. QPOs with lower statistical significance occurred for observations 44, 45 and 49 with frequencies of $`0.430\pm 0.006`$ Hz, $`0.365\pm 0.011`$ Hz and $`0.182\pm 0.005`$ Hz. It should be noted that these QPOs are consistent with the gradual shift to lower frequencies. The QPO parameters for observations 41 to 49 are given in Table 4.
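In fits like these, each QPO is a Lorentzian whose integrated power equals the squared fractional RMS amplitude. A sketch using the observation 41 centroid and RMS quoted above (the FWHM value is our illustrative assumption, since it is not quoted here):

```python
# Sketch of the Lorentzian QPO model used in fits like those above.
# The normalization is chosen so that the Lorentzian's total integrated
# power equals (fractional RMS)^2. Centroid and RMS are the observation 41
# values from the text; the FWHM is an illustrative assumption.
import numpy as np
from scipy.integrate import quad

def qpo_lorentzian(f, f0, fwhm, total_power):
    return total_power * (fwhm / (2.0 * np.pi)) / ((f - f0) ** 2 + (fwhm / 2.0) ** 2)

f0, fwhm, rms = 3.39, 0.5, 0.073     # Hz, Hz, fractional RMS
total_power = rms ** 2
power, _ = quad(qpo_lorentzian, 0.0156, 128.0,
                args=(f0, fwhm, total_power), points=[f0])
print(f"RMS recovered from integral: {np.sqrt(power) * 100:.1f}%")
```

Integrating over the 0.0156-128 Hz analysis band recovers the input RMS to within the small fraction of the Lorentzian’s wings that falls outside the band.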
## 4. Energy Spectra
We produced PCA and HEXTE energy spectra for each observation using the processing methods described in Tomsick et al. (1999). We used the PCA in the 2.5-20 keV energy band and HEXTE in the 20-200 keV energy band. For the PCA, we used standard mode data, consisting of 129-bin spectra with 16 s time resolution, included only the photons from the top anode layers and estimated the background using the sky-VLE model (see M.J. Stark et al. 1999, PCABACKEST, available at http://lheawww.gsfc.nasa.gov/docs/xray/xte/pca). We used the version 2.2.1 response matrices with a resolution parameter of 0.8 and added 1% systematic errors to account for uncertainties in the PCA response. As described in Tomsick et al. (1999), we used Crab spectra to test the response matrices and found that the response matrix calibration is better for PCUs 1 and 4 than for the other three Proportional Counter Units (PCUs); thus, we only used these two PCUs for spectral analysis and allowed for free normalizations between PCUs. PCU 4 was off during three observations (34, 39 and 48), and, to avoid instrumental differences, we did not use these observations in our spectral analysis. Previously, we found that the PCA over-estimates the source flux by a factor of 1.18 (Tomsick et al. (1999)), and, in this paper, we reduced the fluxes and spectral component normalizations by a factor of 1.18 so that the PCA flux scale is in agreement with previous instruments.
HEXTE energy spectra were produced using standard mode data, consisting of 64-bin spectra with 16 s time resolution. We used the March 20, 1997 HEXTE response matrices and applied the necessary deadtime correction (Rothschild et al. (1998)). For the spectral fits, the normalizations were left free between cluster A and cluster B. It is well-known that the HEXTE and PCA normalizations do not agree, so the normalizations were left free between HEXTE and the PCA. The HEXTE background subtraction is performed by rocking on and off source. Each cluster has two background fields, and we checked the HEXTE background subtraction by comparing the count rates for the two fields. In cases where contamination of one of the fields occurred, we only used the data from the non-contaminated background field.
We first fitted the energy spectra using a power-law with interstellar absorption, but this model does not provide acceptable fits to any of the spectra. For most of the observations, the residuals suggest the presence of a soft component, which is typical for 4U 1630–47 (Tomsick, Lapshov & Kaaret (1998)Parmar et al. (1997)). A soft component was also detected during BeppoSAX observations of 4U 1630–47, which overlap with our RXTE observations (Oosterbroek et al. (1998)). Since Oosterbroek et al. (1998) found that a disk-blackbody model (Makishima et al. (1986)) provides a good description of the soft component observed by BeppoSAX, we added a disk-blackbody model to the power-law component and refitted the RXTE spectra. Although the addition of a soft component improves the fits significantly in most cases, the fits are only formally acceptable for a small fraction of the observations, and, in the worst case, the reduced $`\chi ^2`$ is 3.1 for 106 degrees of freedom.
A broad iron absorption edge, associated with the Compton reflection component (Lightman & White (1988)), is commonly observed in the energy spectra of BHCs (Ebisawa et al. (1994) and references therein; Sobczak et al. (1999)). We refitted the 4U 1630–47 spectra with the model given in equation 3 of Ebisawa et al. (1994), which includes a broad absorption edge in addition to the disk-blackbody and power-law components. Following Ebisawa et al. (1994), we fixed the width of the absorption edge to 10 keV and left the edge energy free. For all of the 4U 1630–47 observations, the fits are significantly better with the absorption edge. As an example, for observation 8, the fit improved from $`\chi ^2/\nu =179/106`$ using the disk-blackbody plus power-law model without the edge to $`\chi ^2/\nu =110/104`$, indicating that the edge is required at the 99.1% confidence level. In addition to the absorption edge, an iron emission line is expected due to fluorescence of the X-ray illuminated accretion disk material (Matt et al. (1992)); thus, we have added an emission line to our model to determine whether the line is present in the spectra. We used a narrow emission line since the width of the emission line could not be constrained, and the energy of the emission line was a free parameter.
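The multiplicative edge component in such models suppresses the spectrum above the edge energy with an opacity that declines roughly as $`E^{-3}`$. The sketch below applies a simplified sharp edge to a power law for illustration only; the fits described here use a smeared edge with a fixed 10 keV width following Ebisawa et al. (1994), and the normalization and photon index below are arbitrary assumptions:

```python
import numpy as np

def powerlaw(E, K, gamma):
    """Photon power law: K * E**(-gamma) (photons / keV, arbitrary norm)."""
    return K * E ** (-gamma)

def sharp_edge(E, E_edge, tau):
    """Simplified sharp absorption edge: transmission is 1 below E_edge and
    exp(-tau * (E/E_edge)**-3) above it, so the opacity declines as E**-3."""
    return np.where(E >= E_edge, np.exp(-tau * (E / E_edge) ** -3), 1.0)

E = np.linspace(2.5, 20.0, 200)          # PCA band, keV
model = powerlaw(E, 1.0, 1.8) * sharp_edge(E, 7.1, 0.3)
# Transmission drops to exp(-tau) just above the edge and recovers at high E
print(f"suppression just above a 7.1 keV edge: {np.exp(-0.3):.3f}")
```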
We fitted the spectra with the column density free and also with the column density fixed to the mean value for the 51 observations, $`9.45\times 10^{22}`$ cm<sup>-2</sup>. For all observations, the quality of the fit is not significantly worse with the column density fixed. Table 5 shows the results for the spectral fits with the column density fixed using a model consisting of a power-law, a disk-blackbody component, a narrow emission line and a broad absorption edge. The free parameters for the power-law component are the photon index ($`\mathrm{\Gamma }`$) and the normalization. For the disk-blackbody component, the temperature at the inner edge of the disk ($`kT_{in}`$) and the normalization are free parameters. Rather than the power-law and disk-blackbody normalizations, the component fluxes are given in Table 5. The emission line energy ($`E_{line}`$) and normalization ($`N_{line}`$) and the edge energy ($`E_{edge}`$) and optical depth ($`\tau _{\mathrm{Fe}}`$) are free parameters. However, in cases where the best fit value for $`E_{edge}`$ is less than 7.1 keV (the value for neutral iron), we fixed $`E_{edge}`$ to 7.1 keV. In Table 5, we do not give error estimates for $`kT_{in}`$ since the uncertainty for this parameter is dominated by systematic error due to uncertainty in the correct value for the column density. By comparing the values found for $`kT_{in}`$ with the column density fixed to those with the column density free, we estimate that the systematic error is 0.05 keV. For the 51 observations, the largest $`\chi _\nu ^2`$ is 1.32 for 102 degrees of freedom and $`\chi _\nu ^2<1.0`$ for 44 of the observations, indicating that the spectra are well-described by the model. Figure 5 shows the observation 8 energy spectrum and residuals. The residuals shown in Figure 5 typify the quality obtained for the observations.
For each observation, we determined the significance of the emission line by refitting the spectra without the line and using an F-test. In cases where the significance of the emission line is less than 90%, we fixed $`E_{line}`$ to the best fit value and determined the 90% confidence upper limit on $`N_{line}`$. Although most of the spectra do not require the emission line at a high confidence level, the line is required at greater than 90% confidence for 16 of the 51 observations, and at greater than 95% confidence for 9 observations. In the cases where the iron line is detected at greater than 90% confidence, the equivalent width of the iron line is between 45 eV (for observation 7) and 110 eV (for observation 47).
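The significance of an added spectral component can be estimated with the standard additional-term F-test, which compares the fit statistics with and without the component (see, e.g., Bevington & Robinson). A sketch with scipy; the χ² values below are hypothetical inputs chosen for illustration, not fit statistics from the paper:

```python
from scipy.stats import f as f_dist

def ftest_added_component(chi2_simple, dof_simple, chi2_full, dof_full):
    """Additional-term F-test: returns the F statistic and the chance
    probability that the chi-squared improvement from the extra free
    parameters is spurious."""
    d_chi2 = chi2_simple - chi2_full
    d_dof = dof_simple - dof_full
    F = (d_chi2 / d_dof) / (chi2_full / dof_full)
    return F, f_dist.sf(F, d_dof, dof_full)

# Illustrative (hypothetical) fit statistics for a two-parameter component:
F, p = ftest_added_component(130.0, 106, 110.0, 104)
print(f"F = {F:.2f}, chance probability = {p:.2g}")
```

The confidence level quoted in the text corresponds to one minus this chance probability.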
We also determined the significance of the disk-blackbody component using the same method described above for the emission line. With the column density fixed, the disk-blackbody component is required at greater than 97% confidence for every observation; however, with the column density free, the disk-blackbody component is not required for several observations. With the column density free, the disk-blackbody components are significant at only 50% and 65% confidence for observations 3 and 4, respectively, and at between 46% and 70% confidence for observations 41 to 51. In Table 5, the disk-blackbody fluxes for these observations are marked as upper limits since the component is not detected. For observations 41 to 51, the best fit values of $`kT_{in}`$ are also marked as upper limits since the peak of the disk-blackbody flux falls below the PCA band pass and we cannot constrain $`kT_{in}`$ and the column density independently.
The flux levels and line parameters are similar for observations 44 to 51 so we refitted the combined spectrum for these observations. As shown in Table 5, an emission line at $`6.46\pm 0.04`$ keV is detected at 99.93% confidence. The line energy is consistent with emission from neutral or mildly ionized iron and the line equivalent width is 91 eV. We also fitted the combined spectrum with a model consisting of a disk-blackbody and a power-law, and Figure 6 shows the data-to-model ratio, clearly indicating the presence of the iron line. Since 4U 1630–47 lies along the Galactic ridge ($`l=336.91^{\circ }`$, $`b=0.25^{\circ }`$), we have considered the possibility that the 4U 1630–47 spectra are contaminated by Galactic ridge emission. It is unlikely that the ridge emission is the source of the iron line detected in our spectra because the line energy we observe is considerably lower than the values measured by $`ASCA`$, $`Ginga`$ and $`Tenma`$ for the Galactic ridge, which are all near 6.7 keV (Kaneda et al. (1997) and references therein). Also, based on the spectrum of the Galactic ridge emission measured by RXTE (Valina & Marshall (1998)), the spatially averaged Galactic ridge 2.5-20 keV flux is only 6% of the flux for the combination of observations 44 to 51, indicating that the level of contamination by the Galactic ridge emission should be low.
## 5. State Transition
Figure 7 shows the evolution of the timing and spectral parameters for observations 33 to 51. Significant changes in the 4U 1630–47 emission properties occurred between observations 40 and 41, and we interpret this as evidence that a state transition occurred. In Figure 7, the transition is marked with a vertical dashed line at MJD 50951. At the transition, an increase in source variability occurred with the 0.01-10 Hz RMS amplitude of the flat-top component increasing from between 2.1% and 3.9% for observations 33 to 40 to $`10.2\%\pm 0.6`$% for observation 41. As shown in panel b<sub>1</sub> of Figure 7, the RMS amplitude continued to increase after the transition, reaching a maximum value of $`17.3\%\pm 0.8`$% for observation 46-48. In addition to the increase in the continuum noise level, a QPO appeared for observation 41, and the centroid QPO frequency and RMS amplitude are shown in panels b<sub>2</sub> and b<sub>3</sub>, respectively. The timing changes occurred in less than 2 d and with only a small change in the 1.5-12 keV flux (shown in panel a of Figure 7).
To determine if a QPO was present before the transition, we made a combined power spectrum for observations 33 to 40. When the 2-21 keV power spectrum is fitted with a flat-top model, the residuals show no clear evidence for a QPO. The 90% confidence upper limit on the RMS amplitude for a QPO in a frequency range from 0.1 Hz to 10 Hz is 2.4%. We performed an additional test by determining the energy range where the observation 41 QPO is strongest. For observation 41, the RMS amplitudes are $`6.1\%\pm 0.4`$% and $`8.6\%\pm 0.4`$% for the 2-6 keV and 6-21 keV energy bands, respectively, indicating that the strength of the QPO increases with energy. Since the QPO is stronger in the 6-21 keV energy band for observation 41, we produced a 6-21 keV power spectrum for observations 33 to 40. As before, when a flat-top model is used to fit the power spectrum, the residuals do not show evidence for QPOs, and the 90% confidence upper limit on the RMS amplitude for a QPO in a frequency range from 0.1 Hz to 10 Hz is 2.9%.
Although the difference between the observation 40 and 41 energy spectra is not as distinct as for the power spectra, changes occurred. In Figure 7, the spectral parameters $`\mathrm{\Gamma }`$ and $`kT_{in}`$ are shown in panels c<sub>1</sub> and c<sub>2</sub>, respectively. The power-law index hardened slightly between observations 40 and 41; however, this change appears to be part of a larger trend, which occurred over a span of 8 d between observations 38 and 43. The inner disk temperature began to decrease near observation 37, and the soft component is not confidently detected after observation 40, which probably indicates that $`kT_{in}`$ continued to drop after observation 40. The spectral changes are also illustrated in Figures 8a and 8b, which show the energy spectra for observations 40 and 41, respectively. Figure 8c shows the energy spectrum for observations 44 to 51, indicating that the spectrum continued to harden after the transition.
In summary, during the transition, the noise level increased, the power-law spectral index hardened and the soft component flux in the $`RXTE`$ band pass decreased. Similar changes are typically observed in BHC systems when soft-to-hard state transitions occur (van der Klis (1995)Nowak (1995)Chen & Taam (1996)), and we conclude that such a transition occurred for 4U 1630–47. We also show that QPOs were not present during the observations leading up to the transition, indicating that their appearance during observation 41 is related to the state transition.
## 6. Discussion
### 6.1. Comparisons to Previous 4U 1630–47 Outbursts
Since 4U 1630–47 was discovered in 1969, quasi-periodic outbursts have been observed from this source every 600 to 690 d (Kuulkers et al. (1997)), although the 1999 outburst significantly deviates from this periodicity (McCollough et al. (1999)). The light curve for the 1998 4U 1630–47 outburst is the best example of a “fast-rise exponential-decay” (or FRED) light curve (Chen et al. 1997) that has been observed for 4U 1630–47. A FRED light curve may have been observed for 4U 1630–47 by the $`Vela5B`$ X-ray monitor in 1974 (Priedhorsky (1986); Chen et al. 1997), but the temporal coverage was sparse compared to the coverage obtained for the 1998 outburst. Good temporal coverage was obtained for the 1996 outburst by the $`RXTE`$/ASM, and a FRED light curve was not observed. After the start of the 1996 outburst, the flux stayed at a high level for about 100 d before decaying exponentially with an e-folding time of about 14.9 d (Kuulkers et al. (1997)). Although the overall light curve shapes are different for the two outbursts, it is interesting that the e-folding time for the 1998 outburst, 14.4 d, is close to the 14.9 d e-folding time for the 1996 outburst. This may suggest that the e-folding time is related to a physical property of the system that does not change between outbursts. For example, the e-folding time may be related to the mass of the compact object (Cannizzo, Chen & Livio (1995)) or the radius of the accretion disk (King & Ritter (1998)).
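An e-folding time like those quoted above comes from fitting an exponential to the decaying flux, which reduces to a linear least-squares fit in log-flux. A minimal sketch on synthetic data; the 14.4 d time constant and the flux normalization are used here only to generate the fake light curve:

```python
import numpy as np

def efold_time(t, flux):
    """Estimate the e-folding decay time tau from a log-linear fit,
    assuming flux is proportional to exp(-t / tau)."""
    slope, _ = np.polyfit(t, np.log(flux), 1)
    return -1.0 / slope

# Synthetic exponential decay with a 14.4 d e-folding time
t = np.arange(0.0, 60.0, 2.0)                 # days since outburst peak
flux = 1.2e-8 * np.exp(-t / 14.4)             # erg cm^-2 s^-1 (illustrative)
print(f"fitted e-folding time: {efold_time(t, flux):.1f} d")
```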
A state transition with similarities to the soft-to-hard transition we report in this paper was observed by $`EXOSAT`$ during the decay of the 1984 outburst from 4U 1630–47. Four $`EXOSAT`$ observations of 4U 1630–47 were made during outburst decay (Parmar, Stella & White (1986)). During the first two observations in 1984 April and 1984 May, a strong soft component was observed in the energy spectrum. The power-law was harder in May than in April and became even harder for two observations made in 1984 July. During the July observations, the soft component was not clearly detected. Assuming a soft-to-hard transition occurred between May and July, the transition took place at a luminosity between $`10^{36}`$ erg s<sup>-1</sup> and $`10^{38}`$ erg s<sup>-1</sup> (1-50 keV), which is consistent with the luminosity where the 1998 soft-to-hard transition occurred, $`7\times 10^{36}`$ erg s<sup>-1</sup> (2.5-20 keV). The luminosities given here are for an assumed distance of 10 kpc; however, the distance to 4U 1630–47 is not well-determined.
### 6.2. Comparisons to Other Black Hole Candidate X-Ray Transients
Here, we compare the properties 4U 1630–47 displayed during the decay of its 1998 outburst to those observed for other X-ray transients. We have compiled a list of comparison sources using Tanaka & Shibazaki (1996) and Chen, Shrader & Livio (1997). The comparison group contains the BHC X-ray transients that had strong soft components during outburst and FRED light curves. The comparison sources from the above references are GS 1124–68, GS 2000+251, A 0620–00, EXO 1846–031, Cen X-2, 4U 1543–47 and A 1524–617. We also include a recent X-ray transient, XTE J1748–288, that has similar properties to this group. For the eight comparison sources, the exponentially decaying portions of their X-ray light curves have e-folding times ranging from 15 d to 80 d (Chen et al. 1997; Revnivtsev et al. 1999), and the mean decay time is 39 d. Thus, the 14.4 d e-folding time for 4U 1630–47 is shorter than average, but not unprecedented.
Like 4U 1630–47, secondary maxima occurred in the X-ray light curves of 4U 1543-47, A 0620–00, GS 2000+251 and GS 1124–68, and a tertiary maximum occurred for A 0620–00 (Kaluzienski et al. (1977)). It is likely that the secondary and tertiary maxima are the result of X-ray irradiation of the outer accretion disk or the optical companion (King & Ritter (1998)Chen, Livio & Gehrels (1993)Augusteijn, Kuulkers & Shaham (1993)). In this picture, the time between the start of the outburst and subsequent maxima depends on the viscous time scale of the disk. For A 0620–00, GS 2000+251 and GS 1124–68, secondary maxima are observed 55 to 75 d after the start of the outburst. These maxima, often referred to as “glitches”, consist of a sudden upward shift in X-ray flux, interrupting the exponential decay. The tertiary maximum observed for A 0620–00 about 200 d after the start of the outburst is significantly different, and can be described as a broad (35 to 40 d) bump in the X-ray light curve near the end of the outburst. The 4U 1630–47 secondary maximum is similar to the A 0620–00 tertiary maximum since it is a broad (about 25 d) increase in flux near the end of the outburst. However, the secondary maximum peaked about 89 d after the start of the outburst, which is considerably less than for A 0620–00.
Four sources in our comparison group exhibited soft-to-hard state transitions during outburst decay: A 0620–00 (Kuulkers et al. 1998b ), GS 2000+251 (Tanaka & Shibazaki (1996)), GS 1124–68 (Kitamoto et al. (1992)) and XTE J1748–288 (Revnivtsev et al. 1999). The 4U 1630–47 transition occurred 104 d after the start of the outburst, while transitions for the other four sources occurred 100 to 150 d, 230 to 240 d, 131 to 157 d and about 40 d after the starts of the outbursts for A 0620–00, GS 2000+251, GS 1124–68 and XTE J1748–288, respectively. Detailed X-ray spectral and timing information is available after the transition to the hard state for GS 1124–68. Like 4U 1630–47, the GS 1124–68 transition was marked by an increase in the RMS noise amplitude; however, in contrast to 4U 1630–47, QPOs were not observed for GS 1124–68 in the hard state (Miyamoto et al. (1994)). Also, during the GS 1124–68 transition, the X-ray spectrum hardened with a drop in the inner disk temperature ($`kT_{in}`$) and a change in the power-law photon index ($`\mathrm{\Gamma }`$) from 2.2 to 1.6 (Ebisawa et al. (1994)). During the 4U 1630–47 transition, the change in the soft component was consistent with a drop in $`kT_{in}`$, and $`\mathrm{\Gamma }`$ changed from 2.3 to 1.8. While the $`Ginga`$ observations of GS 1124–68 were relatively sparse near the transition, our observations of 4U 1630–47 show that soft-to-hard transitions can occur on a time scale of days.
### 6.3. Hard State QPOs
Although QPOs were not detected after the GS 1124–68 state transition, QPOs were observed after a similar transition for the microquasar GRO J1655–40 during outburst decay (Mendez et al. (1998)). RXTE observations of GRO J1655–40 show that a state transition occurred between 1997 August 3 and 1997 August 14. The transition was marked by an increase in the continuum variability from less than 2% RMS to 15.6% RMS, a decrease in the characteristic temperature of the soft spectral component ($`kT_{in}`$) from 0.79 keV to 0.46 keV and the appearance of a QPO at 6.46 Hz with an RMS amplitude of 9.8%. A QPO was also detected at 0.77 Hz during an August 18 RXTE observation of GRO J1655–40 when the 2-10 keV flux was about a factor of four lower than on August 14; thus, the shift to lower frequencies with decreasing flux is common to GRO J1655–40 and 4U 1630–47. The correlations between spectral and timing properties for the microquasar GRS 1915+105 are similar to those observed for GRO J1655–40 and 4U 1630–47. Markwardt, Swank & Taam (1999) and Muno, Morgan & Remillard (1999) found that 1-15 Hz QPOs are observed for GRS 1915+105 more often when the source spectrum is hard. Markwardt et al. (1999) report a correlation between QPO frequency and disk flux, and Muno et al. (1999) find that the QPO frequency is correlated with $`kT_{in}`$. Although these results suggest that the QPO is related to the soft component, the fact that the QPO strength increases with energy for 4U 1630–47, GRO J1655–40 and GRS 1915+105 indicates that the QPO mechanism modulates the hard component flux.
A physical model that has been used to explain the energy spectra of BHC systems involves the presence of an advection-dominated accretion flow or ADAF (Narayan, Garcia & McClintock (1997)). The model assumes the accretion flow consists of two zones: An optically thin ADAF region between the black hole event horizon and a transition radius, $`r_t`$, and a geometrically thin, optically thick accretion disk outside $`r_t`$. Esin, McClintock & Narayan (1997) developed and used this model to explain the spectral changes observed for GS 1124–68 during outburst decay, which are similar to the spectral changes observed for 4U 1630–47. The different emission states observed during the decay can be reproduced by decreasing the mass accretion rate and increasing $`r_t`$. This model suggests that the gradual decrease in the QPO frequencies observed for GRO J1655–40 and 4U 1630–47 may be related to a gradual increase in $`r_t`$ or a gradual drop in the mass accretion rate (or both).
In studies of the X-ray power spectra of BHC and neutron star X-ray binaries, Wijnands & van der Klis (1999) find a correlation between the frequency of QPOs between 0.2 and 67 Hz and the break frequency of the continuum component (described as a flat-top component in this paper). Such a correlation is interesting since it suggests that there is a physical property of the system that sets both time scales and that the physical property does not depend on the different properties of BHCs and neutron stars. While 4U 1630–47 was in its hard state, the break frequency gradually decreased from $`3.33\pm 0.36`$ Hz to $`0.48\pm 0.03`$ Hz between observations 41 and 46-48 as the QPO frequency dropped from 3.4 Hz to 0.23 Hz (see Tables 2 and 4). As for the other sources included in the Wijnands & van der Klis (1999) sample, 4U 1630–47 exhibits a correlation between QPO frequency and break frequency. However, for 4U 1630–47, the QPO frequency is below or consistent with the break frequency, while in other sources the QPO frequency is above the break frequency.
### 6.4. Emission Properties During the Flare
Figure 9 shows the 2-60 keV PCA light curves for the two observations made during the flare which occurred around MJD 50880 (observations 3 and 4). For observation 3, short (about 4 s) X-ray dips are observed. We have examined the light curves for all 51 observations and find that X-ray dips are only observed for observation 3. However, 4U 1630–47 observations made by another group show that short X-ray dips were observed earlier in the outburst (Dieters et al. (1999)). In addition to the dips, Figure 9 shows that the level of variability is much higher for observation 3 than for observation 4. Table 2 details the differences between the power spectra for these two observations. For observation 3, the flat-top and power-law RMS amplitudes are 3.55% and 4.36%, respectively, while, for observation 4, the flat-top and power-law RMS amplitudes are 1.83% and 1.10%, which are even lower than most of the nearby non-flare observations. Also, QPOs are observed for observation 3 but not for observation 4. The timing differences between these two observations are especially remarkable because the energy spectra for observations 3 and 4 are nearly identical (cf. Table 5).
The asymmetry of the low frequency QPO peak for observation 3 is similar to QPOs observed for GS 1124–68 (Belloni et al. (1997)) and XTE J1748–288 (Revnivtsev et al. 1999). For these two sources and for 4U 1630–47, the asymmetric shape of the QPO can be modeled using two Lorentzians, suggesting that the asymmetry may be due to a shift in the QPO centroid during the observation. Revnivtsev et al. (1999) find that some properties of the XTE J1748–288 power spectra are consistent with this picture. The 4U 1630–47 timing properties during observation 3 are not consistent with a gradual shift in the QPO centroid during the observation since the 5.4 Hz and 6.2 Hz Lorentzians are present in both segments of the observation (cf. Table 3). The stability of the QPO shape may indicate that the asymmetric peak is caused by an intrinsic property of the QPO mechanism. However, for observations of 4U 1630–47 containing dips, Dieters et al. (1999) find that the frequencies of some QPOs are lower within the dips than outside the dips. For our observation 3, it is possible that frequency changes during the dips (cf. Figure 9) cause the QPO profile to be asymmetric.
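The asymmetric profile produced by two overlapping Lorentzians can be illustrated directly. The 5.4 Hz and 6.2 Hz centroids follow the text, while the widths and relative normalizations below are illustrative assumptions rather than fitted values:

```python
import numpy as np

def lor(f, f0, hwhm, norm):
    """Lorentzian with integrated area equal to norm."""
    return (norm / np.pi) * hwhm / ((f - f0) ** 2 + hwhm ** 2)

f = np.linspace(2.0, 10.0, 2001)
# Two components at the frequencies quoted for observation 3; widths and
# normalizations here are illustrative only
profile = lor(f, 5.4, 0.4, 1.0) + lor(f, 6.2, 0.4, 0.5)

peak = f[np.argmax(profile)]
print(f"summed profile peaks near {peak:.2f} Hz")
# The weaker 6.2 Hz component lifts the high-frequency wing, so the summed
# peak is asymmetric even though each component is symmetric
```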
## 7. Summary and Conclusions
We have analyzed data from 51 RXTE observations of 4U 1630–47 during the decay of its 1998 outburst to study the evolution of its spectral and timing properties. During the decay, the X-ray flux dropped exponentially with an e-folding time of about 14.4 d, which is short compared to most other BHC X-ray transients. The e-folding time was nearly the same (14.9 d) for the decay of the 1996 outburst, which may indicate that this time scale is set by some property of the system that does not change between outbursts. For the 1998 outburst, the decay was interrupted by a secondary maximum, which is commonly observed for BHC X-ray transients.
Our analysis of the 4U 1630–47 power spectra indicates that 0.2 Hz to 11 Hz QPOs with RMS amplitudes between 2% and 9% occurred during the observations. During one of our early observations, when the source was relatively bright, a QPO occurred near 6 Hz with a profile that cannot be described by a single Lorentzian. Similar asymmetric QPO peaks have been observed previously for GS 1124–68 (Belloni et al. (1997)) and XTE J1748–288 (Revnivtsev et al. 1999). For all three sources (4U 1630–47, GS 1124–68 and XTE J1748–288), the QPO is well-described by a combination of two Lorentzians.
Near the end of the outburst, an abrupt change in the 4U 1630–47 spectral and timing properties occurred, and we interpret this change as evidence for a soft-to-hard state transition. Our observations indicate that most of the changes in the emission properties, associated with the transition, occurred over a time period less than 2 d. The timing properties changed after the transition with an increase in the continuum noise level and the appearance of a QPO. A 3.4 Hz QPO appeared immediately after the transition, and, in subsequent observations, the QPO frequency decreased gradually to about 0.2 Hz. At the transition, the energy spectrum also changed with an abrupt drop in the soft component flux in the RXTE band pass, which was probably due to a drop in the inner disk temperature. A change in the power-law photon index from 2.3 to 1.8, also associated with the transition, occurred over a time period of 8 d. Although many of these changes are typical of soft-to-hard state transitions, the QPO behavior and the short time scale for the transition are not part of the canonical picture for state transitions (van der Klis (1995)Nowak (1995)Chen & Taam (1996)). Finally, we note that 4U 1630–47 exhibits interesting behavior (e.g., state changes and QPOs) below a flux level of $`10^9`$ erg cm<sup>-2</sup> s<sup>-1</sup>, indicating that observing programs for X-ray transients should be designed to follow these sources to low flux levels.
The authors would like to thank J.H. Swank for approving observations of 4U 1630–47 at low flux levels, S. Dieters for providing results from BeppoSAX observations prior to publication and an anonymous referee whose comments led to an improved paper. We acknowledge partial support from NASA grants NAG5-4633, NAG5-4416 and NAG5-7347.
## Appendix A SGR 1627–41
Soft $`\gamma `$-ray bursts were detected from a position near 4U 1630–47 on MJD 50979 (Kouveliotou et al. (1998)), 7 d after our last RXTE observation. The soft $`\gamma `$-ray repeater, SGR 1627–41, was observed with RXTE on MJD 50990, and a 0.15 Hz QPO was detected during the observation (Dieters et al. 1998b ). Although the position of SGR 1627–41 is not consistent with the position of 4U 1630–47 (Hurley et al. (1999)), the two sources are close enough so that they were both in the RXTE field of view during the observation made on MJD 50990 and also during our observations, allowing for the possibility of source confusion. We inspected the RXTE 0.125 s light curves for our 4U 1630–47 observations, and there is no evidence for activity (e.g., bursts) from SGR 1627–41. An RXTE scanning observation made on 1998 June 21 (MJD 50985) and BeppoSAX observations made on 1998 August 7 (MJD 51032) and 1998 September 16 (MJD 51072) provide information about possible source confusion. The scanning observation indicates that 4U 1630–47 was much brighter than SGR 1627–41 on June 21. Below, we present an analysis of the data from the scanning observation. 4U 1630–47 was also much brighter than SGR 1627–41 during the BeppoSAX observations. On August 7 and September 16, the 2-10 keV unabsorbed flux for 4U 1630–47 was 30 to 40 times higher than for SGR 1627–41 (Woods et al. (1999)Dieters et al. (1999)). It is likely that 4U 1630–47 also dominated the flux detected during the June 26 RXTE observation and that it is responsible for the 0.15 Hz QPO. Given the low persistent flux detected for SGR 1627–41 by BeppoSAX, $`6.7\times 10^{12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> unabsorbed in the 2-10 keV band (Woods et al. (1999)), it seems very unlikely that this source could be bright enough to produce the QPOs observed during our observations.
After soft $`\gamma `$-ray bursts were detected from SGR 1627–41 by BATSE (Burst and Transient Source Experiment) on 1998 June 15 (Kouveliotou et al. (1998)), RXTE scanning observations were made to locate a source of persistent X-ray emission related to soft $`\gamma `$-ray repeater (SGR). When the scans were made, the position of SGR 1627–41 was restricted to the IPN (3rd Interplanetary Network) annulus reported in Hurley et al. (1998a), which is consistent with the position of the supernova remnant G337.0-0.1. RXTE scans were made along the IPN annulus on 1998 June 19 and nearly perpendicular to the IPN annulus on 1998 June 21. Since other SGRs are associated with supernova remnants, the perpendicular scan was centered on G337.0-0.1. In the following months, the IPN position was improved (Hurley et al. 1998b ) and a source of persistent X-ray emission related to the SGR was discovered using BeppoSAX (Woods et al. (1999)). These observations restrict the SGR 1627–41 position to a 2 by 16<sup>′′</sup> region that is consistent with the position of G337.0-0.1, making an association between the two likely (Hurley et al. (1999)).
We analyzed the RXTE data from the June 21 scan to determine if the persistent X-ray emission from SGR 1627–41 could have been bright enough to contaminate our RXTE observations of 4U 1630–47. The linear scan passed through the positions of both G337.0-0.1 and 4U 1630–47 for this purpose. Figure 1 shows the background subtracted 2-60 keV PCA count rate versus scan angle. We fitted the light curve using a model consisting of a single point source and a constant count rate offset to account for small uncertainties in the background subtraction. We used the 1996 June 5 PCA collimator response to model the scan light curve produced by a point source. A good fit is achieved ($`\chi ^2/\nu =91/161`$), indicating that the light curve is consistent with the presence of one source. Figure 1 shows that the source position is consistent with 4U 1630–47 and not G337.0-0.1. Also, the source amplitude is about 187 s<sup>-1</sup> (2-60 keV, 5 PCUs), which is close to the count rates reported for observations 44 to 51 in Table 1. The RXTE scan indicates that it is very unlikely that our 4U 1630–47 observations are significantly contaminated by emission from SGR 1627–41.
# Some Remarks on the Neutrino Oscillation Phase in a Gravitational Field
## Acknowledgments
One of the authors (JGP) would like to thank CNPq–Brazil for partial financial support. The other (CMZ) would like to thank FAPESP-Brazil for financial support. They would also like to thank M. Nowakowski for helpful discussions, and G. F. Rubilar for valuable comments.
# On Erdős’s elementary method in the asymptotic theory of partitions

1991 Mathematics Subject Classification. Primary 11P72; Secondary 11P81, 11P82.

Key words and phrases. Partitions, Hardy–Ramanujan asymptotic formula, additive number theory.
## 1 Asymptotic formulas for partition functions
Let $`p(n)`$ denote the number of partitions of $`n`$. Hardy and Ramanujan and, independently, Uspensky discovered the asymptotic formula
$$p(n)\sim \frac{1}{4n\sqrt{3}}\exp \left(\pi \sqrt{\frac{2n}{3}}\right).$$
(1)
It follows that
$$\log p(n)\sim \pi \sqrt{\frac{2n}{3}}.$$
(2)
The proof of (1) uses complex analysis and modular functions. Erdős later discovered an elementary proof of this asymptotic formula; his argument is complicated, but uses only estimates for the exponential function and induction from the identity
$$np(n)=\sum_{ka\le n}a\,p(n-ka).$$
The purpose of this paper is to show that Erdős’s method, which is rarely used and almost completely forgotten, is powerful enough to produce asymptotic estimates for many partition functions.
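For concreteness, the recursion above can be checked numerically. The following sketch (not part of the original argument; the cutoff $`n=100`$ is an arbitrary choice) computes $`p(n)`$ from the identity and compares $`\log p(n)`$ with the Hardy–Ramanujan main term:

```python
# Compute p(n) from the identity n*p(n) = sum_{ka<=n} a*p(n-ka)
# and compare log p(n) with the main term pi*sqrt(2n/3).
from math import log, pi, sqrt

def partition_numbers(N):
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total = 0
        for a in range(1, n + 1):
            k = 1
            while k * a <= n:
                total += a * p[n - k * a]
                k += 1
        p[n] = total // n  # the identity guarantees exact divisibility
    return p

p = partition_numbers(100)
print(p[10], p[100])  # 42 190569292
# The ratio tends to 1 only slowly, because of the lower-order terms
# in the Hardy-Ramanujan formula (about 0.74 at n = 100).
print(log(p[100]) / (pi * sqrt(2 * 100 / 3)))
```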
Let $`A`$ be a nonempty set of positive integers, and let $`p_A(n)`$ denote the number of partitions of $`n`$ into parts belonging to the set $`A`$. In this paper we consider sets $`A`$ that are unions of congruence classes. Let $`m\ge 1`$, and let $`r_1,\ldots ,r_{\ell}`$ be integers such that
$$1\le r_1<r_2<\cdots <r_{\ell}\le m$$
and
$$(r_1,\ldots ,r_{\ell},m)=1.$$
(3)
Let $`A`$ be the set of all positive integers $`a`$ such that $`a\equiv r_i\pmod{m}`$ for some $`i`$. The divisibility condition (3) implies that $`p_A(n)\ge 1`$ for all sufficiently large integers $`n`$. We shall prove that
$$\log p_A(n)\sim \pi \sqrt{\frac{2\ell n}{3m}}.$$
This result is not new; it is contained, for example, in a paper of Meinardus that is heavily analytic. We shall prove this result using only Erdős’s elementary method.
Andrews \[1, Chapter 6\] provides references to asymptotic formulas for various partition functions. Among the few papers that use Erdős’s ideas are Freitag , Grosswald , and Kerawala . Expositions of Erdős’s original work can be found in the books of Grosswald , Hua , and Nathanson .
## 2 Estimates for sums of exponential functions
###### Lemma 1
If $`0\le t\le n`$, then
$$\sqrt{n}-\frac{t}{2\sqrt{n}}-\frac{t^2}{2n^{3/2}}\le \sqrt{n-t}\le \sqrt{n}-\frac{t}{2\sqrt{n}}.$$
Proof. If $`0\le x\le 1`$, then
$$1-\frac{x}{2}-\frac{x^2}{2}\le (1-x)^{1/2}\le 1-\frac{x}{2}.$$
The result follows by letting $`x=t/n`$.
###### Lemma 2
If $`x>0`$, then
$$\frac{e^{-x}}{\left(1-e^{-x}\right)^2}<\frac{1}{x^2}.$$
If $`0<x\le 1`$, then
$$\frac{e^{-x}}{\left(1-e^{-x}\right)^2}>\frac{1}{x^2}-2.$$
Proof. The power series expansion for $`e^x`$ gives
$$e^{x/2}-e^{-x/2}=2\sum_{k=0}^{\infty }\frac{1}{(2k+1)!}\left(\frac{x}{2}\right)^{2k+1}=x+x^3\sum_{k=1}^{\infty }\frac{x^{2k-2}}{(2k+1)!\,2^{2k}}.$$
If $`x>0`$, then
$$e^{x/2}-e^{-x/2}>x$$
and so
$$\frac{e^{-x}}{\left(1-e^{-x}\right)^2}=\frac{1}{\left(e^{x/2}-e^{-x/2}\right)^2}<\frac{1}{x^2}.$$
If $`0<x\le 1`$, then
$$e^{x/2}-e^{-x/2}<x+x^3\sum_{k=1}^{\infty }\frac{1}{2^{2k}}<x+x^3<\frac{x}{1-x^2}$$
and so
$$\frac{e^{-x}}{\left(1-e^{-x}\right)^2}=\frac{1}{\left(e^{x/2}-e^{-x/2}\right)^2}>\left(\frac{1}{x}-x\right)^2>\frac{1}{x^2}-2.$$
###### Lemma 3
If $`0\le q<1`$, then
$$\sum_{v=1}^{\infty }v^3q^v<\frac{6q}{(1-q)^4}.$$
Proof. Differentiating the power series
$$\frac{1}{1-q}=\sum_{v=0}^{\infty }q^v,$$
we obtain
$$\frac{1}{(1-q)^2}=\sum_{v=0}^{\infty }vq^{v-1},\qquad \frac{2}{(1-q)^3}=\sum_{v=0}^{\infty }v(v-1)q^{v-2},$$
$$\frac{6}{(1-q)^4}=\sum_{v=0}^{\infty }v(v-1)(v-2)q^{v-3}=\sum_{v=0}^{\infty }\left(v^3-3v(v-1)-v\right)q^{v-3}.$$
Therefore,
$$\sum_{v=0}^{\infty }v^3q^v=\frac{6q^3}{(1-q)^4}+3q^2\sum_{v=0}^{\infty }v(v-1)q^{v-2}+q\sum_{v=0}^{\infty }vq^{v-1}=\frac{6q^3}{(1-q)^4}+\frac{6q^2}{(1-q)^3}+\frac{q}{(1-q)^2}=\frac{q^3+4q^2+q}{(1-q)^4}<\frac{6q}{(1-q)^4}.$$
###### Lemma 4
Let $`n`$ be a positive integer and let $`c_1`$ and $`\epsilon `$ be positive real numbers. Then
$$\sum_{k=1}^{\infty }\frac{e^{-\frac{c_1k}{2\sqrt{n}}}}{1-e^{-\frac{c_1k}{2\sqrt{n}}}}=O\left(n^{\frac{1}{2}+\epsilon }\right).$$
Proof. We apply the Lambert series identity (Hardy and Wright \[8, Theorem 310\])
$$\sum_{k=1}^{\infty }\frac{q^k}{1-q^k}=\sum_{k=1}^{\infty }d(k)q^k,$$
where $`0<q<1`$ and $`d(k)`$ is the divisor function. Let
$$q=e^{-\frac{c_1}{2\sqrt{n}}}.$$
Since
$$d(k)\ll k^{\epsilon }$$
(Hardy and Wright \[8, Theorem 315\]), and since $`e^{-x}\ll x^{-(1+2\epsilon )}`$ for $`x\ge c_1/(2\sqrt{n})`$, we have
$$e^{-\frac{c_1k}{2\sqrt{n}}}\ll \left(\frac{2\sqrt{n}}{c_1k}\right)^{1+2\epsilon }$$
and
$$\sum_{k=1}^{\infty }\frac{e^{-\frac{c_1k}{2\sqrt{n}}}}{1-e^{-\frac{c_1k}{2\sqrt{n}}}}=\sum_{k=1}^{\infty }d(k)e^{-\frac{c_1k}{2\sqrt{n}}}\ll \sum_{k=1}^{\infty }k^{\epsilon }\left(\frac{2\sqrt{n}}{c_1k}\right)^{1+2\epsilon }=n^{\frac{1}{2}+\epsilon }\left(\frac{2}{c_1}\right)^{1+2\epsilon }\sum_{k=1}^{\infty }\frac{1}{k^{1+\epsilon }}\ll n^{\frac{1}{2}+\epsilon }.$$
This completes the proof.
###### Lemma 5
Let $`m,\ell ,r_1,\ldots ,r_{\ell}`$ be positive integers such that
$$1\le r_1<r_2<\cdots <r_{\ell}\le m.$$
Let $`A`$ be the set of all positive integers $`a`$ such that
$$a\equiv r_i\pmod{m}\text{ for some }i=1,\ldots ,\ell .$$
Let $`\vartheta `$ be a real number with
$$\vartheta >-1.$$
Let
$$c_0=\pi \sqrt{\frac{2\ell }{3m}},$$
and
$$c_1=\left(\sqrt{1+\vartheta }\right)c_0=\pi \sqrt{\frac{2(1+\vartheta )\ell }{3m}}.$$
Then
$$\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}=\frac{n}{1+\vartheta }+O\left(n^{\frac{1}{2}+\epsilon }\right)$$
for every $`\epsilon >0`$.
Proof. Let
$$q=e^{-\frac{c_1k}{2\sqrt{n}}}.$$
Then $`0<q<1`$. For $`1\le r\le m`$, we have
$$\sum_{v=0}^{\infty }(r+mv)e^{-\frac{c_1k(r+mv)}{2\sqrt{n}}}=\sum_{v=0}^{\infty }(r+mv)q^{r+mv}=mq^r\sum_{v=0}^{\infty }vq^{mv}+rq^r\sum_{v=0}^{\infty }q^{mv}=\frac{mq^{r+m}}{(1-q^m)^2}+\frac{rq^r}{1-q^m}=\frac{me^{-\frac{c_1k(r+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}+\frac{re^{-\frac{c_1kr}{2\sqrt{n}}}}{1-e^{-\frac{c_1km}{2\sqrt{n}}}}.$$
Therefore,
$$\sum_{k=1}^{\infty }\sum_{v=0}^{\infty }(r+mv)e^{-\frac{c_1k(r+mv)}{2\sqrt{n}}}=\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1k(r+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}+O\left(n^{\frac{1}{2}+\epsilon }\right),$$
since
$$0<\sum_{k=1}^{\infty }\frac{re^{-\frac{c_1kr}{2\sqrt{n}}}}{1-e^{-\frac{c_1km}{2\sqrt{n}}}}\le m\sum_{k=1}^{\infty }\frac{e^{-\frac{c_1k}{2\sqrt{n}}}}{1-e^{-\frac{c_1k}{2\sqrt{n}}}}\ll n^{\frac{1}{2}+\epsilon }$$
by Lemma 4.
From the definitions of the constants $`c_0`$ and $`c_1`$, and by Lemma 2, we obtain the upper bound
$$\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1k(r+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}<\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1km}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}<\sum_{k=1}^{\infty }\frac{4n}{c_1^2k^2m}=\frac{4n}{(1+\vartheta )c_0^2m}\sum_{k=1}^{\infty }\frac{1}{k^2}=\frac{4n\pi ^2}{6(1+\vartheta )c_0^2m}=\frac{n}{(1+\vartheta )\ell }.$$
Therefore,
$$\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}=\sum_{i=1}^{\ell }\sum_{k=1}^{\infty }\sum_{v=0}^{\infty }(r_i+mv)e^{-\frac{c_1k(r_i+mv)}{2\sqrt{n}}}<\sum_{i=1}^{\ell }\left(\frac{n}{(1+\vartheta )\ell }+O\left(n^{\frac{1}{2}+\epsilon }\right)\right)=\frac{n}{1+\vartheta }+O\left(n^{\frac{1}{2}+\epsilon }\right).$$
We compute a lower bound as follows, using Lemma 2 in the second inequality:
$$\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1k(r+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}>m\sum_{k\le \frac{2\sqrt{n}}{c_1m}}e^{-\frac{c_1kr}{2\sqrt{n}}}\left(\frac{e^{-\frac{c_1km}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}\right)>m\sum_{k\le \frac{2\sqrt{n}}{c_1m}}e^{-\frac{c_1kr}{2\sqrt{n}}}\left(\frac{4n}{c_1^2k^2m^2}-2\right)=\frac{4n}{c_1^2m}\sum_{k\le \frac{2\sqrt{n}}{c_1m}}\frac{e^{-\frac{c_1kr}{2\sqrt{n}}}}{k^2}+O\left(\sqrt{n}\right).$$
Since $`e^{-x}\ge 1-x`$, we have
$$\sum_{k\le \frac{2\sqrt{n}}{c_1m}}\frac{e^{-\frac{c_1kr}{2\sqrt{n}}}}{k^2}\ge \sum_{k\le \frac{2\sqrt{n}}{c_1m}}\frac{1}{k^2}\left(1-\frac{c_1kr}{2\sqrt{n}}\right)=\sum_{k=1}^{\infty }\frac{1}{k^2}-\sum_{k>\frac{2\sqrt{n}}{c_1m}}\frac{1}{k^2}-\frac{c_1r}{2\sqrt{n}}\sum_{k\le \frac{2\sqrt{n}}{c_1m}}\frac{1}{k}=\frac{\pi ^2}{6}+O\left(\frac{1}{\sqrt{n}}\right)+O\left(\frac{\log n}{\sqrt{n}}\right).$$
Therefore,
$$\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1k(r+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}>\frac{4n}{c_1^2m}\left(\frac{\pi ^2}{6}+O\left(\frac{1}{\sqrt{n}}\right)+O\left(\frac{\log n}{\sqrt{n}}\right)\right)+O\left(\sqrt{n}\right)=\frac{n}{(1+\vartheta )\ell }+O\left(\sqrt{n}\log n\right),$$
and so
$$\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}=\sum_{i=1}^{\ell }\sum_{k=1}^{\infty }\frac{me^{-\frac{c_1k(r_i+m)}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1km}{2\sqrt{n}}}\right)^2}+O\left(n^{\frac{1}{2}+\epsilon }\right)>\sum_{i=1}^{\ell }\left(\frac{n}{(1+\vartheta )\ell }+O\left(\sqrt{n}\log n\right)\right)+O\left(n^{\frac{1}{2}+\epsilon }\right)=\frac{n}{1+\vartheta }+O\left(n^{\frac{1}{2}+\epsilon }\right).$$
This completes the proof.
The notation $`\sum_{ka>n}`$ (resp. $`\sum_{ka\le n}`$) means the sum over all positive integers $`k`$ and all integers $`a\in A`$ such that $`ka>n`$ (resp. $`ka\le n`$).
###### Lemma 6
Let $`A`$ be a set of positive integers, and let $`c_1`$ and $`N_0`$ be positive numbers. For every $`n\ge 2N_0`$,
$$0<\sum_{ka>n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}\ll \frac{1}{\sqrt{n}}.$$
Proof. If $`n\ge 2N_0`$, then $`n-N_0\ge n/2`$. Since
$$e^{-x}\ll \frac{1}{x^6}\text{ for }x>0\text{,}$$
we have
$$\sum_{ka>n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}\ll \sum_{ka>n-N_0}a\left(\frac{2\sqrt{n}}{c_1ka}\right)^6\ll n^3\sum_{ka>n-N_0}\frac{1}{k^6a^5}=n^3\sum_{ka>n-N_0}\frac{1}{(ka)^{7/2}k^{5/2}a^{3/2}}\le n^3\sum_{ka>n-N_0}\frac{1}{(n/2)^{7/2}k^{5/2}a^{3/2}}\ll \frac{1}{\sqrt{n}}\sum_{k=1}^{\infty }\frac{1}{k^{5/2}}\sum_{a\in A}\frac{1}{a^{3/2}}\ll \frac{1}{\sqrt{n}}.$$
###### Lemma 7
Let $`A`$ be a set of positive integers, and let $`c_1`$ be a positive number. Then
$$0<\sum_{ka\le n}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}\ll n^2.$$
Proof. This is a straightforward computation. By Lemma 3,
$$\sum_{ka\le n}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}\le \sum_{k=1}^{n}k^2\sum_{a\in A}a^3e^{-\frac{c_1ka}{2\sqrt{n}}}\le \sum_{k=1}^{n}k^2\sum_{v=1}^{\infty }v^3e^{-\frac{c_1kv}{2\sqrt{n}}}\le 6\sum_{k=1}^{n}\frac{k^2e^{-\frac{c_1k}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^4},$$
and, by Lemma 2,
$$6\sum_{k=1}^{n}\frac{e^{-\frac{c_1k}{2\sqrt{n}}}}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\cdot \frac{k^2}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}<6\sum_{k=1}^{n}\left(\frac{4n}{c_1^2k^2}\right)\frac{k^2}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\ll n\sum_{k=1}^{n}\frac{1}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}.$$
Let
$$x=\frac{c_1k}{2\sqrt{n}}.$$
If $`1\le k\le \sqrt{n}`$, then $`0<x\le c_1/2`$ and
$$1-e^{-x}=\int_0^xe^{-t}\,dt\ge xe^{-x}\ge xe^{-c_1/2}.$$
It follows that
$$\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2=\left(1-e^{-x}\right)^2\ge e^{-c_1}x^2=\frac{e^{-c_1}c_1^2k^2}{4n},$$
and so
$$\sum_{1\le k\le \sqrt{n}}\frac{1}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\le \frac{4e^{c_1}n}{c_1^2}\sum_{1\le k\le \sqrt{n}}\frac{1}{k^2}\ll n.$$
If $`k>\sqrt{n}`$, then
$$\sum_{\sqrt{n}<k\le n}\frac{1}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\le \sum_{\sqrt{n}<k\le n}\frac{1}{\left(1-e^{-\frac{c_1}{2}}\right)^2}\ll n.$$
Therefore,
$$\sum_{k=1}^{n}\frac{1}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\ll n$$
and
$$\sum_{ka\le n}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}\ll n\sum_{k=1}^{n}\frac{1}{\left(1-e^{-\frac{c_1k}{2\sqrt{n}}}\right)^2}\ll n^2.$$
This completes the proof.
## 3 Upper and lower bounds for $`\mathrm{log}p_A(n)`$
We define $`p_A(0)=1`$ and $`p_A(n)=0`$ for all $`n\le -1`$. We use $`k`$ to denote a positive integer, $`v`$ a nonnegative integer, and $`a`$ an element of the set $`A`$ of congruence classes modulo $`m`$. The asymptotic formula for $`\log p_A(n)`$ will be proved by induction from the following classical recursion formula.
###### Lemma 8
Let $`A`$ be a nonempty set of positive integers, and let $`p_A(n)`$ be the number of partitions of $`n`$ into parts belonging to $`A`$. Then
$$np_A(n)=\sum_{ka\le n}a\,p_A(n-ka).$$
Proof. We enumerate the partitions of $`n`$ into parts belonging to $`A`$ as follows:
$$n=a_{i,1}+a_{i,2}+\cdots +a_{i,s_i}\text{ for }i=1,\ldots ,p_A(n).$$
Then
$$np_A(n)=\sum_{i=1}^{p_A(n)}\sum_{j=1}^{s_i}a_{i,j}=\sum_{a\in A}aN(a,n),$$
where $`N(a,n)`$ is the total number of times that the integer $`a`$ occurs in the $`p_A(n)`$ partitions of $`n`$. The number of partitions in which the integer $`a`$ occurs at least $`k`$ times is $`p_A(n-ka)`$, and so the number of partitions in which the integer $`a`$ occurs exactly $`k`$ times is
$$p_A(n-ka)-p_A(n-(k+1)a).$$
Therefore,
$$N(a,n)=\sum_{k=1}^{\infty }k\left(p_A(n-ka)-p_A(n-(k+1)a)\right)=\sum_{k=1}^{\infty }p_A(n-ka),$$
and so
$$np_A(n)=\sum_{a\in A}aN(a,n)=\sum_{a\in A}\sum_{k=1}^{\infty }ap_A(n-ka)=\sum_{ka\le n}ap_A(n-ka),$$
since $`p_A(n-ka)=0`$ if $`ka>n`$. This completes the proof.
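The recursion of Lemma 8 can be verified numerically against a direct count. The following sketch (illustrative only; the set $`A`$ and the cutoff are arbitrary choices) compares the two computations for $`A=\{a\ge 1:a\equiv 1\text{ or }2\pmod{3}\}`$, i.e. $`m=3`$ and $`\ell =2`$:

```python
# Check n*p_A(n) = sum_{ka<=n} a*p_A(n-ka) against a direct
# dynamic-programming count of partitions with parts in A.
N = 60
A = [a for a in range(1, N + 1) if a % 3 in (1, 2)]

# Direct count: classic "coin" DP over the allowed parts.
direct = [0] * (N + 1)
direct[0] = 1
for a in A:
    for n in range(a, N + 1):
        direct[n] += direct[n - a]

# Count via the recursion of Lemma 8.
rec = [0] * (N + 1)
rec[0] = 1
for n in range(1, N + 1):
    total = sum(a * rec[n - k * a] for a in A for k in range(1, n // a + 1))
    rec[n] = total // n  # Lemma 8 guarantees exact divisibility

print(direct == rec)  # True
```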
###### Theorem 1
Let $`m,\ell ,r_1,\ldots ,r_{\ell}`$ be positive integers such that
$$1\le r_1<r_2<\cdots <r_{\ell}\le m.$$
Let $`A`$ be the set of all positive integers $`a`$ such that
$$a\equiv r_i\pmod{m}\text{ for some }i=1,\ldots ,\ell .$$
Then
$$\limsup_{n\to \infty }\frac{\log p_A(n)}{\pi \sqrt{\frac{2\ell n}{3m}}}\le 1.$$
Proof. Let $`0<\epsilon <1/2`$,
$$c_0=\pi \sqrt{\frac{2\ell }{3m}},$$
and
$$c_1=\left(\sqrt{1+\epsilon }\right)c_0=\pi \sqrt{\frac{2(1+\epsilon )\ell }{3m}}.$$
We shall prove that there exists a constant $`K=K(\epsilon )`$ such that
$$p_A(n)\le Ke^{c_1\sqrt{n}}$$
(4)
for all nonnegative integers $`n`$. This implies that
$$\log p_A(n)\le \log K+\left(\sqrt{1+\epsilon }\right)c_0\sqrt{n},$$
and so
$$\frac{\log p_A(n)}{c_0\sqrt{n}}\le \sqrt{1+\epsilon }+\frac{\log K}{c_0\sqrt{n}}$$
and, since $`\epsilon `$ can be chosen arbitrarily small,
$$\limsup_{n\to \infty }\frac{\log p_A(n)}{c_0\sqrt{n}}\le 1.$$
Therefore, it suffices to prove (4).
Applying Lemma 5 with $`\vartheta =\epsilon `$, we have
$$\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}=\frac{n}{1+\epsilon }+O\left(n^{\frac{1}{2}+\epsilon }\right).$$
There exists a positive integer $`N=N(\epsilon )`$ such that
$$\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}<n$$
(5)
for all $`n\ge N`$. We can choose a number $`K=K(\epsilon )`$ so that the upper bound (4) holds for all positive integers $`n\le N`$. Let $`n>N`$ and assume that inequality (4) holds for all integers less than $`n`$. Then
$$np_A(n)=\sum_{k=1}^{\infty }\sum_{a\in A}ap_A(n-ka)\le \sum_{k=1}^{\infty }\sum_{a\in A}aKe^{c_1\sqrt{n-ka}}\le Ke^{c_1\sqrt{n}}\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}<nKe^{c_1\sqrt{n}},$$
where the first equality is Lemma 8, the first inequality is the induction hypothesis (4), the second inequality follows from the upper bound of Lemma 1, and the last inequality is (5). Dividing by $`n`$, we obtain (4). This completes the proof.
Note that in Theorem 1 we do not assume that $`(r_1,\ldots ,r_{\ell},m)=1`$.
###### Theorem 2
Let $`m,\ell ,r_1,\ldots ,r_{\ell}`$ be positive integers such that
$$1\le r_1<r_2<\cdots <r_{\ell}\le m,$$
and
$$(r_1,\ldots ,r_{\ell},m)=1.$$
Let $`A`$ be the set of all positive integers $`a`$ such that
$$a\equiv r_i\pmod{m}\text{ for some }i=1,\ldots ,\ell .$$
Then
$$\liminf_{n\to \infty }\frac{\log p_A(n)}{\pi \sqrt{\frac{2\ell n}{3m}}}\ge 1.$$
Proof. Let $`0<\epsilon <1/2`$,
$$c_0=\pi \sqrt{\frac{2\ell }{3m}},$$
and
$$c_1=\left(\sqrt{1-\epsilon }\right)c_0=\pi \sqrt{\frac{2(1-\epsilon )\ell }{3m}}.$$
The divisibility condition $`(r_1,\ldots ,r_{\ell},m)=1`$ implies that there exists a number $`N_0`$ such that $`p_A(n)\ge 1`$ for all integers $`n\ge N_0`$. We shall prove that there exists a positive number $`K`$ such that
$$p_A(n)\ge Ke^{c_1\sqrt{n}}$$
(6)
for all integers $`n\ge N_0`$. Since $`\epsilon `$ can be chosen arbitrarily small, this implies that
$$\liminf_{n\to \infty }\frac{\log p_A(n)}{\pi \sqrt{\frac{2\ell n}{3m}}}\ge 1.$$
By Lemma 5 (with $`\vartheta =-\epsilon `$), Lemma 6, and Lemma 7, there exists a number $`N_1=N_1(\epsilon )\ge 2N_0`$ such that, for all integers $`n\ge N_1`$,
$$\sum_{ka\le n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}-\frac{c_1}{2n^{3/2}}\sum_{ka\le n-N_0}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}=\sum_{k=1}^{\infty }\sum_{a\in A}ae^{-\frac{c_1ka}{2\sqrt{n}}}-\sum_{ka>n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}-\frac{c_1}{2n^{3/2}}\sum_{ka\le n-N_0}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}=\frac{n}{1-\epsilon }+O\left(n^{\frac{1}{2}+\epsilon }\right)+O\left(n^{-\frac{1}{2}}\right)+O\left(n^{\frac{1}{2}}\right)>n.$$
We can choose a positive number $`K=K(\epsilon )`$ such that $`p_A(n)`$ satisfies inequality (6) for $`N_0\le n\le N_1`$.
Let $`n>N_1`$, and suppose that inequality (6) holds for all integers in the interval $`[N_0,n-1]`$. We shall prove by induction that this inequality also holds for $`n`$. Note that $`n-ka\ge N_0`$ if $`ka\le n-N_0`$. By Lemma 8 and the induction hypothesis (6), and then by the lower bound of Lemma 1, we have
$$np_A(n)=\sum_{ka\le n}ap_A(n-ka)\ge \sum_{ka\le n-N_0}ap_A(n-ka)\ge K\sum_{ka\le n-N_0}ae^{c_1\sqrt{n-ka}}\ge K\sum_{ka\le n-N_0}ae^{c_1\left(\sqrt{n}-\frac{ka}{2\sqrt{n}}-\frac{k^2a^2}{2n^{3/2}}\right)}=Ke^{c_1\sqrt{n}}\sum_{ka\le n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}e^{-\frac{c_1k^2a^2}{2n^{3/2}}}.$$
Since $`e^{-x}\ge 1-x`$, the last expression is at least
$$Ke^{c_1\sqrt{n}}\left(\sum_{ka\le n-N_0}ae^{-\frac{c_1ka}{2\sqrt{n}}}-\frac{c_1}{2n^{3/2}}\sum_{ka\le n-N_0}k^2a^3e^{-\frac{c_1ka}{2\sqrt{n}}}\right)>nKe^{c_1\sqrt{n}}.$$
Dividing by $`n`$, we obtain the lower bound (6). This completes the induction.
###### Theorem 3
Let $`m,\ell ,r_1,\ldots ,r_{\ell}`$ be positive integers such that
$$1\le r_1<r_2<\cdots <r_{\ell}\le m,$$
and
$$(r_1,\ldots ,r_{\ell},m)=1.$$
Let $`A`$ be the set of all positive integers $`a`$ such that
$$a\equiv r_i\pmod{m}\text{ for some }i=1,\ldots ,\ell .$$
Then
$$\log p_A(n)\sim \pi \sqrt{\frac{2\ell n}{3m}}.$$
Proof. This follows immediately from Theorem 1 and Theorem 2.
The following well–known result is an immediate consequence of Theorem 3.
###### Theorem 4
Let $`q(n)`$ denote the number of partitions of $`n`$ into distinct parts. Then
$$\log q(n)\sim \pi \sqrt{\frac{n}{3}}.$$
Proof. Let $`A`$ be the set of positive odd numbers, that is, the set of positive integers congruent to 1 modulo $`2`$. Euler proved that $`q(n)=p_A(n)`$, so the result follows immediately from Theorem 3 with $`m=2`$ and $`\ell =1`$.
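Euler’s identity invoked in the proof is easy to confirm numerically. The following sketch (illustrative only; the cutoff $`N=40`$ is arbitrary) counts partitions into odd parts and partitions into distinct parts by two standard dynamic programs:

```python
# Check Euler's identity q(n) = p_A(n) for A = odd numbers:
# partitions into distinct parts equal partitions into odd parts.
N = 40

# Partitions into odd parts (coin DP over odd part sizes).
odd = [0] * (N + 1)
odd[0] = 1
for a in range(1, N + 1, 2):
    for n in range(a, N + 1):
        odd[n] += odd[n - a]

# Partitions into distinct parts: each part used at most once,
# so iterate n downward (0/1-knapsack counting).
dist = [0] * (N + 1)
dist[0] = 1
for a in range(1, N + 1):
    for n in range(N, a - 1, -1):
        dist[n] += dist[n - a]

print(odd == dist)  # True
```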
# Inflation and quintessence with nonminimal coupling
## 1 Introduction
The idea of cosmological inflation is legitimately regarded as a breakthrough of modern cosmology: it solves the horizon, flatness and monopole problem, and it provides a mechanism for the generation of density perturbations needed to seed the formation of structures in the universe . The essential qualitative feature of inflation, the acceleration of the universe, is also required (albeit at a different rate) at the present epoch of the universe in order to explain the data from high redshift supernovae . If confirmed, the latter imply that a form of matter with negative pressure (“quintessence”) is beginning to dominate the dynamics of the universe. Scalar fields have been proposed as natural models of quintessence .
Inflation is believed to be driven by a scalar field, apart possibly from the $`R^2`$ inflationary scenario in higher derivative theories of gravity or in supergravity (e.g. ).
The inflaton field $`\varphi `$ obeys the Klein-Gordon equation
$$\Box \varphi -\xi R\varphi -\frac{dV}{d\varphi }=0,$$
(1.1)
where $`V(\varphi )`$ is the scalar field potential, $`R`$ denotes the Ricci curvature of spacetime, and the term $`\xi R\varphi `$ in Eq. (1.1) describes the explicit nonminimal coupling (NMC) of the field $`\varphi `$ to the Ricci curvature . A possible mass term $`m^2\varphi ^2/2`$ for the field $`\varphi `$ and the cosmological constant $`\mathrm{\Lambda }`$ are embodied in the expression of $`V(\varphi )`$.
Eq. (1.1) is derived from the Lagrangian density
$$\mathcal{L}\sqrt{-g}=\left[\frac{R}{16\pi G}-\frac{1}{2}\nabla ^c\varphi \nabla _c\varphi -V(\varphi )-\frac{\xi }{2}R\varphi ^2\right]\sqrt{-g},$$
(1.2)
where $`g`$ is the determinant of the metric tensor $`g_{ab}`$, and $`\nabla _c`$ is the covariant derivative operator. In inflationary theories it is assumed that the scalar field dominates the evolution of the universe and that no forms of matter other than $`\varphi `$ are included in the Lagrangian density (1.2).
Two values of the coupling constant $`\xi `$ are most often encountered in the literature: $`\xi =0`$ (minimal coupling) and $`\xi =1/6`$ (conformal coupling), while the possibility $`\left|\xi \right|\gg 1`$ (strong coupling) has also been considered many times, for both signs of $`\xi `$.
Contrary to common belief, the introduction of NMC is not a matter of taste; NMC is instead forced upon us in many situations of physical and cosmological interest. There are many compelling reasons to include an explicit nonminimal (i.e. $`\xi \ne 0`$) coupling in the action: NMC arises at the quantum level when quantum corrections to the scalar field theory are considered, even if $`\xi =0`$ for the classical, unperturbed, theory ; NMC is necessary for the renormalizability of the scalar field theory in curved space . But what is the value of $`\xi `$ ? This problem has been addressed at both the classical and quantum level (, and references therein). The answer depends on the theory of gravity and of the scalar field adopted; in most theories used to describe inflationary scenarios, it turns out that a value of the coupling constant $`\xi \ne 0`$ cannot be avoided.
In general relativity, and in all other metric theories of gravity in which the scalar field $`\varphi `$ is not part of the gravitational sector, the coupling constant necessarily assumes the value $`\xi =1/6`$ . The study of asymptotically free theories in an external gravitational field, described by the Lagrangian density
$$\mathcal{L}_{AF}\sqrt{-g}=\sqrt{-g}\left(aR^2+bG_{GB}+cC_{abcd}C^{abcd}-\xi R\varphi ^2+\mathcal{L}_{matter}\right)$$
(1.3)
(where $`G_{GB}`$ is the Gauss-Bonnet invariant and $`C_{abcd}`$ is the Weyl tensor) shows a scale-dependent coupling parameter $`\xi (\tau )`$. In Refs. it was shown that asymptotically free grand unified theories (GUTs) have a $`\xi `$ depending on a renormalization group parameter $`\tau `$, and that $`\xi (\tau )`$ converges to $`1/6`$, $`\infty `$, or to any initial condition $`\xi _0`$ as $`\tau \to \infty `$ (this limit corresponds to strong curvature conditions and to the early universe), depending on the gauge group and on the matter content of the theory. In Ref. it was also obtained that $`\left|\xi (\tau )\right|\to +\infty `$ in $`SU(5)`$ GUTs. Similar results were derived in finite GUTs without running of the gauge coupling, with the convergence of $`\xi `$ to its asymptotic value being much faster . An exact renormalization group study of the $`\lambda \varphi ^4`$ theory shows that $`\xi =1/6`$ is a stable infrared fixed point .
In the large $`N`$ limit of the Nambu-Jona-Lasinio model, $`\xi =1/6`$ ; in the $`O(N)`$-symmetric model with $`V=\lambda \varphi ^4`$, $`\xi `$ is generally nonzero and depends on the coupling constants $`\xi _i`$ of the individual bosonic components . Higgs fields in the standard model have $`\xi \le 0`$ or $`\xi \ge 1/6`$ . Only a few investigations produce $`\xi =0`$: the minimal coupling is predicted if $`\varphi `$ is a Goldstone boson with a spontaneously broken global symmetry , for a semiclassical scalar field with backreaction and cubic self-interaction , and for theories formulated in the Einstein conformal frame . In view of the above results, it is wise to incorporate an explicit NMC between $`\varphi `$ and $`R`$ in the inflationary paradigm and in quintessence models.
A conservative approach to inflation and quintessence employs general relativity as the underlying gravity theory (exceptions are $`R^2`$, extended, hyperextended and stringy inflation and the extended quintessence model of Ref. ), and conformal coupling is unavoidable in general relativity, as well as in any metric theory of gravity in which the scalar field is part of the non-gravitational sector (e.g. when $`\varphi `$ is a Higgs field) .
The viability of an inflationary scenario and the constraints on the inflationary model are profoundly affected by the presence of NMC and by the value of the coupling constant $`\xi `$ ( and references therein; ). The analysis of the various inflationary scenarios considered in the literature usually leads to the result that NMC makes it harder to achieve inflation with a given potential that is known to be inflationary for $`\xi =0`$ . There are two main reasons for this difficulty:
1) The common attitude in the literature on nonminimally coupled scalar fields in inflation is that the coupling constant $`\xi `$ is a free parameter to fine-tune at one’s own will in order to solve problems of the inflationary scenario under consideration. The fine-tuning of certain parameters of inflation is reduced by fine-tuning the extra parameter $`\xi `$ instead. For example, the self-coupling constant $`\lambda `$ of the scalar field in the chaotic inflation potential $`V=\lambda \varphi ^4`$ is subject to the constraint $`\lambda <10^{-12}`$ coming from the observational limits on the amplitude of fluctuations in the cosmic microwave background. This constraint makes the scenario uninteresting because the energy scale predicted by particle physics is much higher. The constraint on $`\lambda `$ is reduced by fine-tuning $`\xi `$ instead ; while the fine-tuning of $`\xi `$ is less drastic than that of the self-coupling constant $`\lambda `$ by several orders of magnitude , one cannot be satisfied with the fact that NMC is introduced ad hoc to improve the fine tuning problems (and still does not completely cure them). A more rigorous approach consists in studying the prescriptions for the value of $`\xi `$ given in the literature (which are summarized in Ref. ) and the consequences of NMC for the known inflationary scenarios. The philosophy of this approach is that NMC is often unavoidable and the value of $`\xi `$ is not arbitrary but is determined by the underlying physics. Once the value of the coupling constant $`\xi `$ is predicted, one does not have anymore the freedom to adjust its value and the fine-tuning problems that may plague the inflationary scenario reappear. Several inflationary scenarios turn out to be theoretically inconsistent when one takes into account the appropriate values of the coupling constant .
2) Most of the inflationary scenarios are built upon the slow-roll approximation , in which the Einstein-Friedmann dynamical equations are solved. It is more difficult to achieve the slow rolling of the scalar field when $`\xi 0`$. In fact, an almost flat section of the potential $`V(\varphi )`$ gives slow rollover of $`\varphi `$ when $`\xi =0`$, but its shape is distorted by the NMC term $`\xi R\varphi ^2/2`$ in the Lagrangian density (1.2). The extra term plays the role of an effective mass term for the inflaton. The phenomenon was described by Abbott in the new inflationary scenario with the Ginzburg-Landau potential, by Futamase and Maeda in chaotic inflation, and by Fakir and Unruh ; and the generalization to any slow roll inflationary potential is straightforward . This mechanism is quantitatively discussed in Sec. 6.
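The effective-mass mechanism just described can be made quantitative with a small numerical sketch. With Eq. (1.1), the term $`\xi R\varphi `$ acts as an effective mass term with $`m_{eff}^2=\xi R`$; in a de Sitter background $`R=12H^2`$, so slow roll (which requires $`m_{eff}^2\ll H^2`$) fails for $`|\xi |`$ of order unity. The units with $`H=1`$ are an illustrative assumption.

```python
# Sketch: effective mass induced by the NMC term xi*R*phi^2/2 in a
# de Sitter background, where the Ricci scalar is R = 12 H^2.
# Slow roll requires m_eff^2 << H^2; for xi = 1/6 the ratio is 2,
# already violating the condition.
H = 1.0
R = 12.0 * H ** 2          # Ricci scalar of de Sitter space

for xi in (1.0 / 6.0, 1.0e-2, 1.0e-3):
    m_eff_sq = xi * R       # effective mass squared from the xi*R*phi term
    print(xi, m_eff_sq / H ** 2)  # must be << 1 for slow roll
```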
How general are the previous conclusions? They hold for particular inflationary scenarios, and the conclusion that it is always more difficult to achieve a sufficient amount of inflation in the presence of NMC is premature. In principle, it is possible that a suitable scalar field potential $`V(\varphi )`$ be balanced by the NMC term $`\xi R\varphi ^2/2`$ in the Lagrangian density (1.2), thus producing an “effective potential” which is inflationary and even gives a slow-roll regime. In this situation, NMC would make it easier to achieve inflation, thus opening the possibility for a wider class of scalar field potentials to be considered. This possibility is studied in this paper; the discussion is kept as general as possible, without specifying a particular inflationary scenario until it is necessary.
In a previous paper , the theoretical consistency of the known inflationary scenarios was studied from the point of view of the theoretical prescriptions for the value of $`\xi `$ and of the fine-tuning of the parameters. Calculations of density perturbations with nonminimally coupled scalar fields have been performed in Refs. , while observational constraints on $`\xi `$ were derived in Refs. . Here instead we study the effect of NMC by analyzing the dynamical equations for the scale factor of the universe and the scalar field, without specifying the value of the coupling constant $`\xi `$. Aspects of the physics of NMC which give rise to ambiguities in the literature are also clarified.
Throughout this paper it is assumed that gravity is described by Einstein’s theory with a scalar field as the only source of matter, as described by the Lagrangian density (1.2). Only in Secs. 2 and 4.3 is the presence of a different kind of matter in addition to the scalar field allowed.
The plan of the paper is as follows: in Sec. 2 the possible ways of writing the Einstein equations in the presence of NMC are discussed and compared, together with the corresponding conservation laws and with the issue of the effective gravitational constant. In Sec. 3 the positivity of the energy density of a nonminimally coupled scalar field is discussed in the context of cosmology. In Sec. 4 a necessary condition for the acceleration of a universe driven by a nonminimally coupled scalar field is derived; this is relevant for both inflation and quintessence models based on scalar fields. The question of whether the acceleration can occur due to pure NMC without a potential $`V(\varphi )`$ is answered. In Sec. 5, scalar field potentials that are known to be inflationary for $`\xi =0`$ are studied and it is shown that NMC spoils inflation rather than helping it. In Sec. 6 the slow-roll approximation to inflation with NMC and the attractor behavior of de Sitter solutions are studied. This is relevant for the calculation of density and gravitational wave perturbations and, ultimately, for the comparison with observations of the cosmic microwave background. Sec. 7 presents a discussion of conformal transformation techniques used in cosmology with NMC, while Sec. 8 contains a discussion and the conclusions.
## 2 Field equations and conservation laws
When discussing nonminimally coupled scalar fields, many authors choose to reason in terms of an effective gravitational constant instead of keeping a $`\varphi `$-dependent term in the left hand side of the Einstein equations. In this section this approach is discussed and compared with the more conservative approaches using a $`\varphi `$-independent gravitational constant, and the corresponding conservation equations are studied. The following discussion has early parallels for special cases in Refs. , and a more recent but incomplete one in Ref. .
One begins from the action
$`S=S_g[g_{cd}]+S_{int}[g_{cd},\varphi ]+S_\varphi [g_{cd},\varphi ]+S_m[g_{cd},\psi _m]=`$
$`={\displaystyle \int d^4x\sqrt{-g}\left[\left(\frac{1}{2\kappa }-\frac{\xi \varphi ^2}{2}\right)R-\frac{1}{2}g^{ab}\nabla _a\varphi \nabla _b\varphi -V(\varphi )\right]}+S_m[g_{cd},\psi _m],`$ (2.1)
where $`\kappa \equiv 8\pi G`$, $`S_g=(2\kappa )^{-1}\int d^4x\sqrt{-g}\,R`$ is the purely gravitational part of the action, $`S_{int}=-(\xi /2)\int d^4x\sqrt{-g}\,R\varphi ^2`$ is an explicit interaction term between the gravitational and the $`\varphi `$ fields, $`S_\varphi `$ describes the purely material part of the action associated with the scalar field, and the remainder $`S_m`$ describes matter fields other than $`\varphi `$, collectively denoted by $`\psi _m`$.
The variation of the action (2.1) with respect to $`\varphi `$ leads to the Klein-Gordon equation (1.1). By varying Eq. (2.1) with respect to $`g_{ab}`$ and using the well known formulas
$$\delta \left(\sqrt{-g}\right)=-\frac{1}{2}\sqrt{-g}\,g_{ab}\,\delta g^{ab},$$
(2.2)
$$\delta \left(\sqrt{-g}\,R\right)=\sqrt{-g}\left(R_{ab}-\frac{1}{2}g_{ab}R\right)\delta g^{ab}\equiv \sqrt{-g}\,G_{ab}\,\delta g^{ab}$$
(2.3)
(where $`G_{ab}`$ is the Einstein tensor), one obtains the Einstein equations in the form
$$\left(1-\kappa \xi \varphi ^2\right)G_{ab}=\kappa \left(\stackrel{~}{T}_{ab}[\varphi ]+\stackrel{~}{T}_{ab}[\psi _m]\right)\equiv \kappa \stackrel{~}{T}_{ab}^{(total)},$$
(2.4)
where
$$\stackrel{~}{T}_{ab}[\varphi ]=\nabla _a\varphi \nabla _b\varphi -\frac{1}{2}g_{ab}\nabla ^c\varphi \nabla _c\varphi -Vg_{ab}+\xi \left[g_{ab}\Box (\varphi ^2)-\nabla _a\nabla _b(\varphi ^2)\right].$$
(2.5)
and
$$\stackrel{~}{T}_{ab}[\psi _m]=-\frac{2}{\sqrt{-g}}\frac{\delta S_m[\psi _m,g_{cd}]}{\delta g^{ab}}.$$
(2.6)
One can also rewrite Eq. (2.4) by taking the factor $`\kappa \xi \varphi ^2G_{ab}`$ to the right hand side,
$$G_{ab}=\kappa \stackrel{~}{\stackrel{~}{T}}_{ab},$$
(2.7)
where
$$\stackrel{~}{\stackrel{~}{T}}_{ab}=\stackrel{~}{T}_{ab}^{(total)}+\xi \varphi ^2G_{ab}.$$
(2.8)
By taking a different approach, the coefficient of the Ricci scalar in the action (2.1) can be written as $`(16\pi G_{eff})^1`$, where
$$G_{eff}\equiv \frac{G}{1-8\pi G\xi \varphi ^2}$$
(2.9)
is an effective, $`\varphi `$-dependent, gravitational coupling. This way of proceeding is analogous to the familiar identification of the Brans-Dicke scalar field $`\varphi _{BD}`$ with the inverse of an effective gravitational constant ($`G_\varphi =\varphi _{BD}^1`$) in the gravitational sector of the Brans-Dicke action
$$S_{BD}=\int d^4x\sqrt{-g}\left(\varphi _{BD}R-\frac{\omega }{\varphi _{BD}}\nabla ^a\varphi _{BD}\nabla _a\varphi _{BD}\right).$$
(2.10)
By adopting this point of view in the case of a nonminimally coupled scalar field, one divides Eq. (2.4) by the factor $`1-\kappa \xi \varphi ^2`$ to obtain the Einstein equations in the form
$$G_{ab}=\kappa _{eff}\left(\stackrel{~}{T}_{ab}[\varphi ]+\stackrel{~}{T}_{ab}[\psi _m]\right)=\kappa _{eff}\stackrel{~}{T}_{ab}^{(total)}$$
(2.11)
(where $`\kappa _{eff}8\pi G_{eff}`$), which looks more familiar to the relativist’s eye.
The approach using the effective gravitational coupling (2.9) has been used to investigate the situation in which $`G_{eff}`$ diverges and the “antigravity” regime corresponding to $`G_{eff}<0`$ .
A third possibility is to use the form of the Einstein equations
$$G_{ab}=\kappa \left(T_{ab}[\varphi ]+T_{ab}[\varphi ,\psi _m]\right)\equiv \kappa T_{ab}^{(total)},$$
(2.12)
where
$$T_{ab}[\varphi ]\equiv \frac{1}{1-\kappa \xi \varphi ^2}\stackrel{~}{T}_{ab}[\varphi ],$$
(2.13)
$$T_{ab}[\varphi ,\psi _m]\equiv \frac{1}{1-\kappa \xi \varphi ^2}\stackrel{~}{T}_{ab}[\psi _m],$$
(2.14)
and the gravitational coupling is given by the true constant $`G`$.
If $`\xi \le 0`$ the forms (2.4), (2.7), (2.11) and (2.12) of the Einstein equations are all equivalent (apart from the conservation of the corresponding stress-energy tensors, which is discussed later). If instead $`\xi >0`$, caution must be exercised to ensure that the factor $`1-\kappa \xi \varphi ^2`$ by which Eq. (2.4) is divided does not vanish. The division by $`1-\kappa \xi \varphi ^2`$ used to write Eqs. (2.11) and (2.12) unavoidably introduces the two critical values of the scalar field
$$\pm \varphi _c=\pm \frac{m_{pl}}{\sqrt{8\pi \xi }}(\xi >0),$$
(2.15)
which are barriers that the scalar field cannot cross. At $`\varphi =\pm \varphi _c`$ the effective gravitational coupling (2.9), its gradient, and the stress-energy tensor $`T_{ab}^{(total)}`$ in Eq. (2.12) diverge. Therefore, solutions of the field equations can only be obtained for which $`\left|\varphi \right|<\varphi _c`$ or $`\left|\varphi \right|>\varphi _c`$ at all times. For $`\xi >0`$, one has obtained a restricted form of the field equations and a restricted class of solutions; any solution $`\varphi `$ of the original theory described by Eq. (2.4) which crosses the barriers $`\pm \varphi _c`$ is lost in passing to the picture of Eq. (2.11) or of Eq. (2.12).
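As an illustrative aside (not part of the original discussion), the divergence of $`G_{eff}`$ at the barriers $`\pm \varphi _c`$ and the sign change beyond them can be made concrete numerically; the choice of units $`G=1`$ (so that $`m_{pl}=1`$) is an assumption of this sketch.

```python
import math

G = 1.0                      # gravitational constant (units G = m_pl^(-2) = 1; assumption of this sketch)
kappa = 8 * math.pi * G

def G_eff(phi, xi):
    """Effective gravitational coupling of Eq. (2.9): G / (1 - 8*pi*G*xi*phi^2)."""
    return G / (1 - kappa * xi * phi ** 2)

def phi_c(xi):
    """Critical field value of Eq. (2.15) for xi > 0: m_pl / sqrt(8*pi*xi)."""
    return 1.0 / math.sqrt(8 * math.pi * xi)

xi = 1 / 6                   # conformal coupling, as an example
barrier = phi_c(xi)

# G_eff grows without bound as phi approaches the barrier from below ...
print(G_eff(0.99 * barrier, xi))   # large and positive
# ... and is negative ("antigravity" regime) just beyond it
print(G_eff(1.01 * barrier, xi))   # negative
```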
Although the caveat on the division by the factor ($`1-\kappa \xi \varphi ^2`$) looks trivial, surprisingly it is missed in the literature on scalar field cosmology with NMC, and the restricted range of validity of the solutions goes unnoticed. In particular, investigations of the coupled Einstein-Klein-Gordon equations using dynamical systems methods and aiming at determining generic solutions and attractors, are put in jeopardy by the previous considerations if they employ the form (2.11) or (2.12) of the Einstein equations. For example, the approach of Eq. (2.4) is used in Ref. , which makes correct statements on the general class of solutions of the field equations with NMC, while the parallel treatment of Ref. using Eq. (2.11) cannot claim to study general solutions.
We proceed by discussing the conservation equations for $`T_{ab}`$ and $`\stackrel{~}{T}_{ab}`$. The approach of Eq. (2.4) for $`\xi >0`$ uses a truly constant gravitational coupling $`G`$, but the field equations (2.4) do not guarantee covariant conservation of $`\stackrel{~}{T}_{ab}^{(total)}`$: in fact the contracted Bianchi identities $`\nabla ^bG_{ab}=0`$ yield
$$\nabla ^b\stackrel{~}{T}_{ab}^{(total)}=-\frac{2\kappa \xi \varphi }{1-\kappa \xi \varphi ^2}\stackrel{~}{T}_{ab}^{(total)}\nabla ^b\varphi $$
(2.16)
when the denominator is nonvanishing. The covariant divergence $`\nabla ^b\stackrel{~}{T}_{ab}^{(total)}`$ vanishes only for the trivial case $`\varphi =`$const. and approximately vanishes in regions of spacetime where $`\varphi `$ is nearly constant. When the scalar $`\varphi `$ is the only source of gravity, $`\stackrel{~}{T}_{ab}^{(total)}=\stackrel{~}{T}_{ab}[\varphi ]`$, the constancy of $`\varphi `$ corresponds to de Sitter solutions (if $`\xi \neq 0`$), with the energy-momentum tensor of quantum vacuum $`\stackrel{~}{T}_{ab}=-V(\varphi )g_{ab}`$ and equation of state $`P=-\rho `$.
On the contrary, in the approach based on Eq. (2.12) the relevant stress-energy tensor $`T_{ab}^{(total)}`$ is covariantly conserved,
$$\nabla ^bT_{ab}^{(total)}=0,$$
(2.17)
as a consequence of the contracted Bianchi identities. This is probably the reason why the approach based on Eqs. (2.12) has been preferred over alternative formulations. However, the loss of generality in the solutions for $`\xi >0`$ must be kept in mind.
To give an idea of how the conservation equation for ordinary matter is modified by NMC, we consider the case of a dust fluid acting as the source of gravity together with the nonminimally coupled scalar field. When the scalar identically vanishes, the equation $`\nabla ^bT_{ab}=0`$ for the stress-energy tensor $`T_{ab}[\psi _m]=\rho u_au_b`$ (where $`u^a`$ is the dust four-velocity) implies the geodesic equation for fluid particles
$$u^b\nabla _bu^a=0,$$
(2.18)
and the conservation equation for the energy density $`\rho `$
$$\frac{d\rho }{d\lambda }+\rho \nabla ^bu_b=0,$$
(2.19)
where $`\lambda `$ is an affine parameter along the geodesics. When the nonminimally coupled scalar appears together with the dust, Eq. (2.16) yields
$$\left(\frac{d\rho }{d\lambda }+\rho \nabla ^bu_b+\frac{2\kappa \xi \rho \varphi }{1-\kappa \xi \varphi ^2}\frac{d\varphi }{d\lambda }\right)u_a+\rho \frac{Du_a}{D\lambda }=0,$$
(2.20)
from which one derives again the geodesic equation $`Du^a/D\lambda \equiv u^b\nabla _bu^a=0`$ and the modified conservation equation
$$\frac{d\rho }{d\lambda }+\rho \nabla ^bu_b+\frac{2\kappa \xi \varphi \rho }{1-\kappa \xi \varphi ^2}\frac{d\varphi }{d\lambda }=0.$$
(2.21)
Hence the geodesic hypothesis is satisfied: test particles still move on geodesics in the presence of the nonminimally coupled scalar. In the weak field limit, the modified conservation equation (2.21) reduces to
$$\frac{\partial \rho }{\partial t}+\vec{\nabla }\cdot \left(\rho \vec{v}\right)+\frac{2\kappa \xi \varphi }{1-\kappa \xi \varphi ^2}\left(\frac{\partial \varphi }{\partial t}+\vec{\nabla }\varphi \cdot \vec{v}\right)\rho =0.$$
(2.22)
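As a sanity check (an illustrative aside, not part of the original derivation): for dust at rest ($`\vec{v}=0`$), Eq. (2.21) integrates exactly to $`\rho \propto 1-\kappa \xi \varphi ^2`$ along the worldline, since $`d\rho /\rho =d\mathrm{ln}(1-\kappa \xi \varphi ^2)`$. The sketch below verifies this numerically; the units $`\kappa =1`$ and the prescribed field history $`\varphi (t)=0.1t`$ are assumptions of the example.

```python
# Euler integration of d(rho)/dt = -[2*k*xi*phi/(1 - k*xi*phi^2)] * (dphi/dt) * rho,
# i.e. Eq. (2.21) for comoving dust at rest, against the exact first integral
# rho(t)/rho(0) = (1 - k*xi*phi(t)^2) / (1 - k*xi*phi(0)^2).
k, xi = 1.0, 0.15            # kappa = 1 units and a sample coupling (assumptions)
rho, t, dt = 1.0, 0.0, 1e-5

def phi(t):
    return 0.1 * t           # arbitrarily prescribed scalar field history

while t < 1.0:
    p, dphi = phi(t), 0.1    # dphi = d(phi)/dt for the chosen history
    rho += -2 * k * xi * p * dphi / (1 - k * xi * p ** 2) * rho * dt
    t += dt

exact = (1 - k * xi * phi(1.0) ** 2) / (1 - k * xi * phi(0.0) ** 2)
print(rho, exact)            # the two agree to high accuracy
```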
Finally, we consider the approach using Eq. (2.7); it employs the truly constant gravitational coupling $`G`$ and it guarantees that the stress-energy tensor $`\stackrel{~}{\stackrel{~}{T}}_{ab}`$ is covariantly conserved,
$$\nabla ^b\stackrel{~}{\stackrel{~}{T}}_{ab}=0,$$
(2.23)
as can be deduced by using the contracted Bianchi identities and Eq. (2.7).
## 3 Positivity of the energy density
It is acknowledged that the energy density of a nonminimally coupled scalar field has a sign that depends on the particular solution $`\varphi `$ and on the spacetime metric $`g_{ab}`$. This statement is easy to understand upon inspection of the rather complicated expression (4.4) for the energy density of a nonminimally coupled scalar in a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. Since it is difficult or impossible to establish a priori the sign of $`\rho `$, the minimal physical requirement $`\rho 0`$ has to be checked a posteriori for the known solutions of the field equations.
In this section we limit the discussion to homogeneous and isotropic cosmologies; scalar fields are extremely important in this context, due to their role as inflaton, dark matter, and quintessence. In this context, it is possible to improve substantially on the subject of the positivity of the energy density.
In a spatially flat or closed (curvature index $`K\ge 0`$) FLRW universe dominated by a scalar field, the solution $`(a(t),\varphi (t))`$ of the field equations satisfies the Hamiltonian constraint
$$\left(\frac{\dot{a}}{a}\right)^2=\frac{\kappa }{3}\rho -\frac{K}{a^2},$$
(3.1)
which follows from the field equations in the form (2.7) and (2.8), and from which it is immediate to deduce that the energy density $`\rho `$ is always non-negative for a solution of the Einstein equations, in spite of the complication of the expression of $`\rho `$ in terms of $`\varphi `$, $`\dot{\varphi }`$, $`a`$ and $`H=\dot{a}/a`$ (see for example Eq. (6.4)).
Note that the different forms of the field equations considered in the previous section lead to different stress-energy tensors, and therefore to different definitions of energy density of the scalar field. Hence, by using field equations different from (2.7) and (2.8) one is not able to draw conclusions on the sign of $`\rho `$.
## 4 Necessary conditions for the acceleration of the universe
In the rest of this paper we specialize our considerations to cosmology. In this section, we study a necessary condition for the universe to accelerate when a nonminimally coupled scalar field is the dominant source of gravity. This is relevant for both inflation and quintessence models based on scalar fields with NMC. While such inflationary models are well known , the use of scalar fields with NMC as dark matter and, more recently, as quintessence models is less known. The necessary condition for cosmic acceleration derived here is applied in the following sections.
The study of necessary conditions for the acceleration of the universe enables one to determine whether NMC helps, or makes it difficult to achieve inflation for a given scalar field potential, in comparison with the corresponding situation for minimal coupling.
We begin by considering the spatially flat Einstein-de Sitter universe with line element
$$ds^2=dt^2+a^2(t)\left(dx^2+dy^2+dz^2\right)$$
(4.1)
in comoving coordinates $`(t,x,y,z)`$. In this section we adopt the form (2.12) of the Einstein field equations keeping in mind the caveat of Sec. 2; the Einstein-Friedmann equations are
$$\frac{\ddot{a}}{a}=-\frac{\kappa }{6}(\rho +3P),$$
(4.2)
$$H^2\equiv \left(\frac{\dot{a}}{a}\right)^2=\frac{\kappa }{3}\rho ,$$
(4.3)
where an overdot denotes differentiation with respect to the comoving time $`t`$. The energy density $`\rho `$ and pressure $`P`$ of the scalar field are given by the diagonal components of the stress-energy tensor $`T_{ab}[\varphi ]`$ of Eq. (2.13):
$$\rho =\left(1-\kappa \xi \varphi ^2\right)^{-1}\left[\frac{\dot{\varphi }^2}{2}+V(\varphi )+6\xi H\varphi \dot{\varphi }\right],$$
(4.4)
$$P=\left(1-\kappa \xi \varphi ^2\right)^{-1}\left[\left(\frac{1}{2}-2\xi \right)\dot{\varphi }^2-V(\varphi )-2\xi \varphi \ddot{\varphi }-4\xi H\varphi \dot{\varphi }\right].$$
(4.5)
Equations (4.2), (4.4) and (4.5) yield, in the case of minimal coupling ($`\xi =0`$),
$$\frac{\ddot{a}}{a}=-\frac{\kappa }{3}\left(\dot{\varphi }^2-V\right).$$
(4.6)
An inflationary era in the evolution of the universe includes as an essential feature an accelerated expansion, $`\ddot{a}>0`$ (of course, other ingredients are required for a successful inflationary scenario: a natural mechanism of entry into inflation, a sufficient amount of expansion, a graceful exit mechanism, acceptable scalar and tensor perturbations, etc.). It is clear from Eq. (4.6) that when $`\xi =0`$ a necessary (but not sufficient) condition for acceleration is given by $`V>0`$. It is useful to keep in mind that the slow-roll approximation used to solve the equations of inflation corresponds to the dominance of the scalar field potential energy density $`V(\varphi )`$ over its kinetic energy density, $`V(\varphi )>>\dot{\varphi }^2/2`$. In the slow-roll approximation for $`\xi =0`$, $`\rho \simeq V`$ and hence the necessary condition for acceleration $`V>0`$ reduces to the minimal requirement of the positivity of the scalar field energy density, and of the existence of real solutions of Eq. (4.3).
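As a quick symbolic consistency check (an illustrative aside, not from the original text), Eq. (4.6) follows from Eqs. (4.2), (4.4) and (4.5) at $`\xi =0`$:

```python
# Verify Eq. (4.6): at xi = 0, a''/a = -(kappa/3)(phidot^2 - V).
import sympy as sp

kappa, phi, phidot, V, H = sp.symbols('kappa phi phidot V H')
xi = 0  # minimal coupling

# Eqs. (4.4) and (4.5) at xi = 0 (the phi-ddot term of (4.5) carries a factor xi and drops out)
rho = (sp.Rational(1, 2)*phidot**2 + V + 6*xi*H*phi*phidot) / (1 - kappa*xi*phi**2)
P = ((sp.Rational(1, 2) - 2*xi)*phidot**2 - V - 4*xi*H*phi*phidot) / (1 - kappa*xi*phi**2)

accel = -kappa/6 * (rho + 3*P)  # Eq. (4.2)
assert sp.simplify(accel - (-kappa/3*(phidot**2 - V))) == 0
```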
What is the analogous necessary condition for acceleration when $`\xi \neq 0`$? The first part of this section is devoted to answering this question.
By definition, acceleration corresponds to the condition $`\rho +3P<0`$; upon use of Eqs. (4.4) and (4.5), this inequality is equivalent to
$$(1-3\xi )\dot{\varphi }^2-V-3\xi \varphi \left(\ddot{\varphi }+H\dot{\varphi }\right)<0.$$
(4.7)
The Klein-Gordon equation (1.1), which takes the form
$$\ddot{\varphi }+3H\dot{\varphi }+\xi R\varphi +\frac{dV}{d\varphi }=0,$$
(4.8)
is then used to substitute for $`\ddot{\varphi }`$, obtaining
$$(1-3\xi )\dot{\varphi }^2-V+3\xi ^2R\varphi ^2+6\xi H\varphi \dot{\varphi }+3\xi \varphi \frac{dV}{d\varphi }<0,$$
(4.9)
and Eq. (4.4) can be used to rewrite the definition of cosmic acceleration (4.9) as
$$x\equiv \left(1-\kappa \xi \varphi ^2\right)\rho -2V+\dot{\varphi }^2\left(\frac{1}{2}-3\xi \right)+3\xi ^2R\varphi ^2+3\xi \varphi \frac{dV}{d\varphi }<0.$$
(4.10)
To proceed, one assumes the weak energy condition $`\rho \ge 0`$. As a result of the difficulty of handling the dynamical equations analytically when $`\xi \neq 0`$, in the rest of this section we restrict ourselves to values of the coupling constant $`\xi \le 1/6`$. Albeit limited, this semi-infinite range covers many of the prescriptions for the value of $`\xi `$ given in the literature . One then has $`-2V+3\xi \varphi \,dV/d\varphi \le x<0`$ and a necessary condition for cosmic acceleration to occur when $`\xi \le 1/6`$ is
$$V-\frac{3\xi }{2}\varphi \frac{dV}{d\varphi }>0.$$
(4.11)
Eq. (4.11) reduces to the well known necessary condition for acceleration $`V>0`$ for minimal coupling.
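The condition (4.11) is easy to evaluate for concrete potentials; the following sketch (an illustrative aside, assuming $`\xi \le 1/6`$ as in the text) works out two of the examples discussed below:

```python
# Left-hand side of the necessary condition (4.11), V - (3 xi / 2) phi dV/dphi,
# for a mass term and for a quartic potential.
import sympy as sp

phi, xi = sp.symbols('phi xi', positive=True)
m, lam = sp.symbols('m lam', positive=True)

def condition(V):
    """Left-hand side of Eq. (4.11) for a given potential V(phi)."""
    return sp.simplify(V - sp.Rational(3, 2) * xi * phi * sp.diff(V, phi))

# Mass term: (m^2 phi^2 / 2)(1 - 3 xi) -- harder to satisfy as xi grows
c_mass = condition(m**2 * phi**2 / 2)
assert sp.expand(c_mass - m**2*phi**2/2*(1 - 3*xi)) == 0

# Quartic: lam phi^4 (1 - 6 xi) -- the condition fails exactly at xi = 1/6
c_quartic = condition(lam * phi**4)
assert sp.expand(c_quartic - lam*phi**4*(1 - 6*xi)) == 0
```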
Unfortunately, the necessary and sufficient condition for acceleration (4.9) is not very useful in general because different terms, which depend on the solution $`(a(t),\varphi (t))`$ and have opposite signs can balance one another and hamper a general analysis of the problem. In practice, one is compelled to adopt one of the specific forms of $`V(\varphi )`$ considered in the literature and solve the equations for $`a(t)`$ and $`\varphi (t)`$ for specific examples. However, a few considerations of general character can still be given.
First, we take the point of view that a potential $`V(\varphi )`$ is given, for example by a particle physics theory, and we study the effect of introducing NMC in the field equations. The discussion is kept as general as possible, without specifying the value of $`\xi `$.
Keeping in mind the necessary condition (4.11) for acceleration of the universe in the $`\xi \neq 0`$ case, consider an even potential $`V(\varphi )=V(-\varphi )`$ which is increasing for $`\varphi >0`$. This is the case, e.g., of a pure mass term $`m^2\varphi ^2/2`$, or of the quartic potential $`V=\lambda \varphi ^4`$, or of their combination $`V(\varphi )=m^2\varphi ^2/2+\lambda \varphi ^4+V_0`$, where $`V_0`$ is constant. For $`0<\xi \le 1/6`$, one has $`\xi \varphi dV/d\varphi >0`$ and it is harder to satisfy the necessary condition (4.11) for acceleration than in the minimal coupling case. Hence one can say that, for this class of potentials, it is harder to achieve acceleration of the universe, and hence inflation. If instead $`\xi <0`$, the necessary condition for cosmic acceleration is more easily satisfied than in the $`\xi =0`$ case, but one is not entitled to say that with NMC it is easier to achieve inflation (because a necessary, and not a sufficient, condition for acceleration is considered).
Let us consider now an even potential $`V(\varphi )=V(-\varphi )`$ such that $`dV/d\varphi <0`$ for $`\varphi >0`$. This is the case, e.g., of the Ginzburg-Landau potential $`V(\varphi )=\lambda \left(\varphi ^2-v^2\right)^2`$ for $`0<\varphi <v`$, or of an inverted harmonic oscillator potential , which approximates the potential for natural inflation
$$V_{ni}(\varphi )=\lambda ^4\left[1+\mathrm{cos}\left(\frac{\varphi }{f}\right)\right]$$
(4.12)
around its maximum at $`\varphi =0`$. For $`0<\xi 1/6`$, it is easier to satisfy the necessary condition (4.11) for acceleration when $`\xi 0`$ than when $`\xi =0`$ but, again, this does not allow one to conclude that the universe actually accelerates its expansion. If $`\xi <0`$ instead, it is harder to achieve acceleration than in the $`\xi =0`$ case.
The inequality (4.11) can be read in a different way: assume, for simplicity, that $`V>0`$ and $`\varphi >0`$ (it is straightforward to generalize to the case in which $`V`$ or $`\varphi `$, or both, are negative). Then, if $`0<\xi \le 1/6`$, (4.11) is equivalent to
$$\frac{d}{d\varphi }\left\{\mathrm{ln}\left[\frac{V}{V_0}\left(\frac{\varphi _0}{\varphi }\right)^{\frac{2}{3\xi }}\right]\right\}<0,$$
(4.13)
where $`V_0`$ and $`\varphi _0`$ are arbitrary (positive) constants. As a result of the fact that the logarithm is a monotonically increasing function of its argument, the necessary condition for cosmic acceleration (4.11) amounts to require that the potential $`V(\varphi )`$ grows with $`\varphi `$ slower than the power-law potential $`V_{crit}(\varphi )V_0\left(\varphi /\varphi _0\right)^{\frac{2}{3\xi }}`$. If instead $`\xi <0`$, the necessary condition for cosmic acceleration amounts to require that $`V`$ grows faster than $`V_{crit}(\varphi )`$ as $`\varphi `$ increases. This criterion is further developed in Sec. 4.3.
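The equivalence between (4.11) and (4.13) reduces to a one-line algebraic identity; as an illustrative aside (assuming, as in the text, $`V,\varphi >0`$ and $`0<\xi \le 1/6`$), it can be verified symbolically:

```python
# Check that d/dphi ln[ V * phi^(-2/(3 xi)) ] times the negative factor
# -(3/2) xi phi V equals V - (3 xi / 2) phi V', the left-hand side of (4.11);
# hence the two inequalities (4.11) and (4.13) are equivalent in sign.
import sympy as sp

phi, xi = sp.symbols('phi xi', positive=True)
V = sp.Function('V')(phi)   # generic potential

log_derivative = sp.diff(sp.log(V), phi) - sp.Rational(2, 3)/(xi*phi)
lhs_411 = V - sp.Rational(3, 2)*xi*phi*sp.diff(V, phi)

assert sp.expand(log_derivative*(-sp.Rational(3, 2)*xi*phi*V) - lhs_411) == 0
```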
### 4.1 No acceleration without a scalar field potential
Taking to the extreme the possibility of a balance between the potential $`V(\varphi )`$ and the term $`\xi R\varphi ^2/2`$ in (1.2), the question arises of whether it is possible to obtain acceleration of the universe with a free, massless scalar field and no cosmological constant (i.e. $`V=0`$) for suitable values of the coupling constant $`\xi `$. In particular, we are interested in the case of strong coupling $`|\xi |>>1`$, which has been considered many times in the literature .
Inflation driven by a pure NMC term turns out to be impossible for negative values of $`\xi `$. In fact, by assuming that the expansion of the universe is accelerated and that $`V=0`$, Eq. (4.9) and the expression of the Ricci curvature in an Einstein-de Sitter space
$$R=6\left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}\right)>6H^2$$
(4.14)
yield, for $`\xi <0`$,
$$(1-3\xi )\dot{\varphi }^2+3\xi ^2R\varphi ^2+6\xi H\varphi \dot{\varphi }\ge \left(\dot{\varphi }+3\xi H\varphi \right)^2\ge 0,$$
(4.15)
thus contradicting (4.9). Therefore, the combined assumptions $`\ddot{a}>0`$ and $`V=0`$ lead to an absurdity. The previous analysis fails to yield conclusions when $`\xi >0`$ because terms of different signs can balance in the left hand side of (4.9). The previous reasoning does not make use of the weak energy condition.
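The inequality (4.15) rests on a simple completion of the square; as an illustrative check (not part of the original argument), the difference between its left-hand side and $`(\dot{\varphi }+3\xi H\varphi )^2`$ is $`-3\xi \dot{\varphi }^2+3\xi ^2\varphi ^2(R-3H^2)`$, manifestly non-negative for $`\xi <0`$ and $`R>6H^2`$:

```python
# Symbolic verification of the algebra behind the inequality (4.15).
import sympy as sp

xi, phidot, H, phi, R = sp.symbols('xi phidot H phi R')

lhs = (1 - 3*xi)*phidot**2 + 3*xi**2*R*phi**2 + 6*xi*H*phi*phidot
diff = sp.expand(lhs - (phidot + 3*xi*H*phi)**2)

# diff = -3 xi phidot^2 + 3 xi^2 phi^2 (R - 3 H^2): each term is >= 0
# for xi < 0 and R > 6 H^2 > 3 H^2, so lhs >= (phidot + 3 xi H phi)^2 >= 0.
assert diff == sp.expand(-3*xi*phidot**2 + 3*xi**2*phi**2*(R - 3*H**2))
```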
The discussion can easily be extended to values of $`\xi `$ in the range $`0<\xi \le 1/6`$ by using an independent argument: by rewriting (4.9) as
$$\dot{\varphi }^2\left(\frac{1}{2}-3\xi \right)<-\left[3\xi ^2R\varphi ^2+\left(1-\kappa \xi \varphi ^2\right)\rho \right],$$
(4.16)
and using $`\rho \ge 0`$, one concludes that the right hand side is negative when the cosmic expansion accelerates and that (4.16) can be satisfied only if the term on the left hand side is negative, i.e. if $`\xi >1/6`$, which contradicts the assumptions. Therefore,
for $`\xi \le 1/6`$, the NMC term alone cannot act as an effective potential to provide acceleration of the universe.
A further argument (again for $`\xi \le 1/6`$) consists in noting that the necessary condition for acceleration (4.11) is not satisfied if $`V(\varphi )`$ vanishes identically. Unfortunately, no conclusion can be obtained analytically when $`\xi >1/6`$.
### 4.2 Negative potentials
In the usual studies of inflation and quintessence with $`\xi =0`$, only positive scalar field potentials are considered. The reason is easy to understand: in the slow-rollover approximation to inflation $`V>>\dot{\varphi }^2/2`$, $`\rho \simeq V`$, and $`V>0`$ corresponds to $`\rho >0`$, a minimal requirement. However, this is no longer true when $`\xi \neq 0`$ and $`\rho `$ is given by the more complicated expression (4.4). Indeed, negative scalar field potentials have been considered in the literature on NMC, in inflation as well as in other contexts .
The question of whether the positive term $`\xi R\varphi ^2/2`$ can balance a negative $`V(\varphi )`$ arises. In the toy model of Ref. a negative potential $`V`$ is balanced by the coupling term $`\xi R\varphi ^2/2`$ in such a way that inflation is achieved: there, a closed FLRW universe dominated by a conformally coupled scalar field is investigated in Einstein’s gravity. By assuming the equation of state $`P=(\gamma 1)\rho `$, the potential deemed necessary for inflation is derived numerically for small values of the constant $`\gamma `$; the resulting $`V(\varphi )`$ is significantly different from the corresponding potential derived analytically in Ref. for the case $`\xi =0`$ and for the same values of $`\gamma `$.
Mathematically, the possibility of a negative $`V(\varphi )`$ which is inflationary in the presence of NMC extends the range of potentials explored so far, but the meaning of a negative scalar field potential $`V(\varphi )`$ remains unclear and the latter is probably unpalatable to most particle physicists.
### 4.3 Quintessence models
In quintessence models based on a nonminimally coupled scalar field, the energy density of the latter is beginning to dominate the dynamics of the universe, but there is also ordinary matter with energy density $`\rho _m\propto a^{-3}`$ and vanishing pressure $`P_m=0`$. Eq. (4.4) is modified according to
$$\rho =\left(1-\kappa \xi \varphi ^2\right)^{-1}\left[\rho _m+\frac{\dot{\varphi }^2}{2}+V(\varphi )+6\xi H\varphi \dot{\varphi }\right]=\frac{\rho _m}{1-\kappa \xi \varphi ^2}+\rho _\varphi ,$$
(4.17)
where $`V(\varphi )`$ is an appropriate quintessential potential. The necessary and sufficient condition for the acceleration of the universe $`\rho +3P<0`$ is written as
$$y\equiv \frac{\rho _m}{2}+\left(1-3\xi \right)\dot{\varphi }^2-V+6\xi H\varphi \dot{\varphi }+3\xi ^2R\varphi ^2+3\xi \varphi \frac{dV}{d\varphi }<0,$$
(4.18)
where the Klein-Gordon equation (4.8) has been used to substitute for $`\ddot{\varphi }`$. As before, one obtains a necessary condition for the accelerated expansion of the universe by rewriting the quantity $`y`$ as
$$y=\frac{\rho _m}{2}+\left(1-\kappa \xi \varphi ^2\right)\rho _\varphi -2V+\dot{\varphi }^2\left(\frac{1}{2}-3\xi \right)+3\xi ^2R\varphi ^2+3\xi \varphi \frac{dV}{d\varphi }<0,$$
(4.19)
and by assuming that $`\rho _m`$ and $`\rho _\varphi `$ are non-negative; one obtains again Eq. (4.11) as a necessary condition for the acceleration of a universe in which quintessence is modelled by a nonminimally coupled scalar field, in the additional presence of ordinary matter.
The analysis can be refined by noting that, when quintessence dominates, $`\rho _\varphi \simeq V`$. By introducing the matter and scalar field energy densities measured in units of the critical density $`\rho _c`$ (respectively, $`\mathrm{\Omega }_m=\rho _m/\rho _c`$ and $`\mathrm{\Omega }_\varphi =\rho _\varphi /\rho _c`$), one has $`\rho _m\simeq V\mathrm{\Omega }_m/\mathrm{\Omega }_\varphi `$ and
$$y=\left(-1+\frac{\mathrm{\Omega }_m}{2\mathrm{\Omega }_\varphi }-\kappa \xi \varphi ^2\right)V+\left(\frac{1}{2}-3\xi \right)\dot{\varphi }^2+3\xi ^2R\varphi ^2+3\xi \varphi \frac{dV}{d\varphi }<0.$$
(4.20)
For $`\xi \le 1/6`$ one has
$$\left(-1+\frac{\mathrm{\Omega }_m}{2\mathrm{\Omega }_\varphi }|_0-\kappa \xi \varphi ^2\right)V+3\xi \varphi \frac{dV}{d\varphi }\le y<0,$$
(4.21)
where the ratio $`\mathrm{\Omega }_m/\mathrm{\Omega }_\varphi `$ has been approximated by its present value (which is correct at least around the present epoch). By assuming again, for simplicity, that $`V`$ and $`\varphi `$ are positive, the necessary condition for quintessential inflation for $`0<\xi \le 1/6`$ is
$$\frac{d}{d\varphi }\left\{\mathrm{ln}\left[\frac{V}{V_0}\left(\frac{\varphi _0}{\varphi }\right)^\alpha \mathrm{exp}\left(-\frac{\kappa }{6}\varphi ^2\right)\right]\right\}<0,$$
(4.22)
where $`V_0`$ and $`\varphi _0`$ are constants and
$$\alpha =\left(1-\frac{\mathrm{\Omega }_m}{2\mathrm{\Omega }_\varphi }|_0\right)\frac{1}{3\xi }.$$
(4.23)
Then, to have quintessential expansion with nonminimal coupling and $`0<\xi \le 1/6`$, one needs a potential $`V(\varphi )`$ that does not grow with $`\varphi `$ faster than the function
$$C(\varphi )=V_0\left(\frac{\varphi }{\varphi _0}\right)^\alpha \mathrm{exp}\left(\frac{\kappa }{6}\varphi ^2\right).$$
(4.24)
If instead $`\xi <0`$, $`V(\varphi )`$ must grow faster than $`C(\varphi )`$. The necessary conditions for cosmic acceleration in a quintessence-dominated universe are useful for future reference in studies of quintessence models with NMC.
## 5 Fixing the scalar field potential
As a result of the complication of the coupled Einstein-Klein-Gordon equations when $`\xi 0`$, general analytical considerations on the occurrence of inflation with nonminimally coupled scalar fields are necessarily quite limited, as seen in Sec. 2. However, one can (at least partially) answer the following meaningful question:
is it harder or easier to achieve acceleration of the universe with NMC for the potentials that are known to be inflationary in the minimal coupling case ?
Since in many situations these potentials are motivated by a high energy physics theory, they are of special interest. In order to appreciate the effect of the inclusion of a NMC term in a given inflationary scenario, we study some exact solutions for popular inflationary potentials, and the necessary condition (4.11) for the occurrence of inflation.
### 5.1 $`V=0`$
In order to illustrate the qualitative difference between minimal and nonminimal coupling, it is sufficient to compare the solution for $`V=0`$, $`\xi =0`$ with the corresponding solution for the special value of the NMC coupling constant $`\xi =1/6`$. For minimal coupling, one has the stiff equation of state $`P=\rho `$, and the scale factor $`a(t)=a_0t^{1/3}`$, as can be deduced by the inspection of Eqs. (4.4) and (4.5) (we exclude the trivial case of Minkowski space). In the $`V=0`$, $`\xi =1/6`$ case, the Klein-Gordon equation is conformally invariant, corresponding to the vanishing of the trace of $`T_{ab}[\varphi ]`$, to the radiation equation of state $`P=\rho /3`$, and to the scale factor $`a(t)=a_0t^{1/2}`$. This is in agreement with the fact that there are no accelerated universe solutions for $`V=0`$ and any value of $`\xi `$, because the necessary condition (4.11) cannot be satisfied in this case.
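The two scale factors quoted above are the standard power-law solutions for barotropic fluids; as an illustrative check (not part of the original text), $`a(t)\propto t^{2/3(1+w)}`$ solves the Einstein-Friedmann equations for $`P=w\rho `$, giving $`a\propto t^{1/3}`$ for the stiff fluid ($`w=1`$) and $`a\propto t^{1/2}`$ for radiation ($`w=1/3`$):

```python
# Verify that a(t) = t^(2/(3(1+w))) satisfies the acceleration equation (4.2)
# when rho is read off the Friedmann equation (4.3) and P = w rho.
import sympy as sp

t, kappa = sp.symbols('t kappa', positive=True)

def check(w):
    n = sp.Rational(2, 1) / (3 * (1 + w))
    a = t**n
    H2 = (sp.diff(a, t) / a)**2
    rho = 3 * H2 / kappa                 # Eq. (4.3)
    accel = sp.diff(a, t, 2) / a         # should equal -(kappa/6)(rho + 3 w rho)
    return sp.simplify(accel + kappa/6 * (1 + 3*w) * rho)

assert check(sp.Integer(1)) == 0         # stiff fluid:  a ∝ t^(1/3)
assert check(sp.Rational(1, 3)) == 0     # radiation:    a ∝ t^(1/2)
```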
### 5.2 $`V=V_0=`$constant
For $`\xi =0`$ a constant potential can be regarded as a cosmological constant in the Einstein equations. Vice versa, a $`\mathrm{\Lambda }`$-term in the Einstein equations,
$$G_{ab}=-\mathrm{\Lambda }g_{ab}+\kappa T_{ab},$$
(5.1)
can be incorporated into the scalar field potential by means of the substitution
$$V(\varphi )\to V(\varphi )+\frac{\mathrm{\Lambda }}{\kappa };$$
(5.2)
this is true subject to the condition that the scalar field be constant.
The equivalence between cosmological constant and constant scalar field potential no longer holds when $`\xi \ne 0`$ and the form (2.12) of the Einstein equations is used. In fact, in this case the addition of a cosmological constant term to the left hand side of the Einstein equations (2.12),
$$\kappa T_{ab}\to \kappa T_{ab}-\mathrm{\Lambda }g_{ab}$$
(5.3)
is equivalent to the substitution
$$V(\varphi )\to V(\varphi )+\frac{\mathrm{\Lambda }}{\kappa }\left(1-\kappa \xi \varphi ^2\right)=V(\varphi )+V_1(\varphi ).$$
(5.4)
The extra piece $`V_1(\varphi )=\frac{\mathrm{\Lambda }}{\kappa }\left(1-\kappa \xi \varphi ^2\right)`$ in the potential does not correspond to a mere shift in the potential energy density (usually identified with the vacuum energy), but it also adds a self-interaction with the shape of an inverted harmonic oscillator. This is an example of how different things are when $`\xi `$ is allowed to be different from zero, and testifies to the difference between the physical interpretations associated with the different ways of writing the field equations discussed in Sec. 2. When $`\xi \ne 0`$, a constant potential cannot be interpreted as the vacuum energy density coming from the left hand side of the Einstein equations.
In the case $`V=V_0`$, the necessary condition (4.11) for cosmic acceleration when $`\xi \le 1/6`$ coincides with the corresponding condition for minimal coupling, $`V=\mathrm{\Lambda }/\kappa >0`$. A negative $`V_0`$ does not give rise to acceleration of the universe and hence to inflation. While for $`\xi =0`$ a negative $`\mathrm{\Lambda }`$ violates the weak energy condition, this may no longer be true for $`\xi \ne 0`$. The necessary and sufficient condition for acceleration when $`\xi =0`$ is $`\mathrm{\Lambda }>\kappa \dot{\varphi }^2`$, which recalls the slow-roll condition.
The $`\xi =0`$, $`V=`$const. case itself deserves comment. In this case, the Einstein-Friedmann equations with $`V=\mathrm{\Lambda }/\kappa `$ have the familiar de Sitter solution (historically, the prototype of inflation)
$$a(t)=a_0\mathrm{exp}(Ht),\dot{H}=0,\dot{\varphi }=0,$$
(5.5)
corresponding to the vacuum equation of state $`P=-\rho `$. In addition one has, for $`\mathrm{\Lambda }>0`$, the exact solution
$$a(t)=a_1\left[\mathrm{sinh}\left(\sqrt{3\mathrm{\Lambda }}t\right)\right]^{1/3},$$
(5.6)
corresponding to the non-constant scalar field
$$\varphi (t)=\pm \sqrt{\frac{2}{3\kappa }}\mathrm{ln}\left[\mathrm{tanh}\left(\frac{\sqrt{3\mathrm{\Lambda }}}{2}t\right)\right]+\varphi _0,$$
(5.7)
where $`a_1`$ and $`\varphi _0`$ are integration constants. The latter solution is asymptotic to (5.5) at late times $`t\to +\infty `$, in agreement with the cosmic no-hair theorems , but it exhibits a big-bang singularity at $`t=0`$ (where $`a(t)\propto t^{1/3}`$), while the FLRW universe described by the de Sitter solution (5.5) has been expanding forever. In addition, the solution (5.6) and (5.7) corresponds to an effective equation of state that changes with time, and interpolates between the two extremes $`P=\rho `$ (“stiff” equation of state) at early times $`t\to 0`$ and the vacuum equation of state $`P=-\rho `$ at late times. The exact solution (5.6) and (5.7) tells us two things (note that we are not even talking about the more complicated NMC in this example):
1) Contrary to naive statements found in the literature, fixing the scalar field potential $`V(\varphi )`$ does not fix the equation of state and, therefore, the scale factor $`a(t)`$.
2) A given potential $`V(\varphi )`$ may correspond to very different equations of state, depending on the solution $`(g_{ab},\varphi )`$ of the field equations.
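The solution (5.6) and (5.7) can be verified symbolically; the following sketch (sympy, with $`a_1=1`$ and the upper sign) checks the Friedmann constraint $`3H^2=\kappa (\dot{\varphi }^2/2+V)`$ and the Klein-Gordon equation for $`V=\mathrm{\Lambda }/\kappa `$:

```python
import sympy as sp

t, Lam, kap = sp.symbols('t Lambda kappa', positive=True)
a = sp.sinh(sp.sqrt(3*Lam)*t)**sp.Rational(1, 3)              # Eq. (5.6), a_1 = 1
phi = sp.sqrt(2/(3*kap))*sp.log(sp.tanh(sp.sqrt(3*Lam)*t/2))  # Eq. (5.7), upper sign
H = sp.diff(a, t)/a

friedmann = 3*H**2 - kap*sp.diff(phi, t)**2/2 - Lam   # 3H^2 = kappa*(phidot^2/2 + V)
kg = sp.diff(phi, t, 2) + 3*H*sp.diff(phi, t)         # phi'' + 3H*phi' = 0 (V constant)

residuals = [sp.simplify(e.rewrite(sp.exp)) for e in (friedmann, kg)]
print(residuals)
```

Both residuals should reduce to zero, confirming the exact solution.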
To conclude this subsection, we note that one can impose that the solution (5.5) hold (in the spirit, e.g., of Ref. ); in this case, if $`\xi =0`$, the Klein-Gordon equation implies that $`dV/d\varphi |_{\varphi _0}=0`$. If instead $`\xi \ne 0`$, by imposing that the solution (5.5) hold, the Klein-Gordon equation implies that $`dV/d\varphi |_{\varphi _0}=-12\xi H^2\varphi _0`$. A positive linear potential $`V=\lambda \varphi `$ achieves de Sitter expansion with constant $`\varphi `$ if $`\xi <0`$ and $`H^2=\lambda /(12|\xi |)`$.
### 5.3 $`V=m^2\varphi ^2/2`$
A pure mass term is perhaps the most natural “potential” for a scalar field, and an example of the class of even potentials for which $`\varphi dV/d\varphi >0`$ considered in Sec. 4. For $`\xi =0`$, it corresponds to chaotic inflation , while for $`\xi <0`$, it can still generate inflation. For example, the exponentially expanding solution
$$H=H_{*}=\frac{m}{(12|\xi |)^{1/2}},\varphi =\varphi _{*}=\frac{1}{(\kappa |\xi |)^{1/2}}$$
(5.8)
has been studied, not in relation to the early universe, but as the description of short periods of unstable exponential expansion of the universe which occur after the star formation epoch, well into the matter dominated era . The fact that the Ricci curvature $`R`$ is constant for this particular solution makes this case particularly suitable for the interpretation of the $`\xi R\varphi ^2/2`$ term in the Lagrangian density as a negative mass term, which balances the intrinsic mass term $`m^2\varphi ^2/2`$ in the potential, thus conspiring to give a vanishing effective mass $`m_{eff}=\left(m^2-|\xi |R\right)^{1/2}`$. However, the so called late time mild inflationary scenario based on the relation $`m_{eff}=0`$ is unphysical, as is best seen by studying the scalar wave tails in the corresponding spacetime . In fact, the scenario corresponds to a spacetime in which a massive scalar field propagates sharply on the light cone at every point; this occurs because the usual tail due to the intrinsic mass $`m`$ is cancelled by a second tail term describing the backscattering of the $`\varphi `$ waves off the background curvature of spacetime .
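The fixed point (5.8) can be checked directly: with $`\varphi `$ constant the Klein-Gordon equation reduces to $`12\xi H^2\varphi +m^2\varphi =0`$ (using $`R=12H^2`$ in de Sitter space, consistent with the linearized equation (6.13)), while Eqs. (6.3) and (6.4) with $`\dot{\varphi }=0`$ give $`3H^2=\kappa (3\xi H^2\varphi ^2+V)`$. A sympy sketch, writing $`\xi =-s`$ with $`s>0`$:

```python
import sympy as sp

m, kap, s = sp.symbols('m kappa s', positive=True)   # s = |xi|, xi = -s < 0
xi = -s
H = m/sp.sqrt(12*s)                                  # Eq. (5.8)
phi = 1/sp.sqrt(kap*s)

kg = sp.simplify(12*xi*H**2*phi + m**2*phi)          # Klein-Gordon, constant phi
constraint = sp.simplify(3*H**2 - kap*(3*xi*H**2*phi**2 + m**2*phi**2/2))
print(kg, constraint)  # 0 0
```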
For $`\xi >0`$, the nonminimally coupled scalar field has been studied by Morikawa , who found no inflation. The $`\xi >0`$ case was studied in order to explain the reported periodicity in the redshift of galaxies . If the model was correct, the parameter $`\xi `$ could be determined directly from astronomical observations, and it would provide information on whether general relativity is the correct theory of gravity . However, the prevailing opinion among astronomers is that the reported periodicity in galactic redshifts is not genuine, but is an artifact of the statistics used to analyze the astronomical data. The nonminimally coupled, massive, scalar field model may however be resurrected in the future in conjunction with the more recent claims of redshift periodicities for large scale structures .
From the point of view of this paper, the introduction of NMC destroys the acceleration of the cosmic expansion for large positive values of $`\xi `$ when $`V(\varphi )=m^2\varphi ^2/2`$. This is relevant since we were not able to draw conclusions for $`\xi >1/6`$ in Sec. 4.
### 5.4 Quartic potential
The potential $`V=\lambda \varphi ^4`$ corresponds to chaotic inflation for $`\xi =0`$. When $`\xi \ne 0`$ we limit ourselves to consider the case of conformal coupling. For $`\xi =1/6`$, the Klein-Gordon equation (4.8) is conformally invariant, corresponding to the vanishing of the trace $`T=\rho -3P`$ of the scalar field stress-energy tensor, to the radiation equation of state $`P=\rho /3`$ and to the non-inflationary expansion law $`a(t)\propto t^{1/2}`$. The introduction of conformal coupling destroys the acceleration occurring in the minimal coupling case for the same potential; however, accelerated solutions can be recovered by breaking the conformal symmetry with the introduction of a mass for the scalar or of a cosmological constant (which, in this respect, behaves in the same manner ). Exact accelerating and non-accelerating solutions corresponding to integrability of the dynamical system for the potential $`V=\mathrm{\Lambda }+m^2\varphi ^2/2+\lambda \varphi ^4`$ are presented in Ref. for special sets of the parameters $`(\mathrm{\Lambda },m,\lambda )`$.
### 5.5 $`V=\lambda \varphi ^n`$
In general, the necessary condition for cosmic acceleration (4.11) depends on the particular solution of the Klein-Gordon equation, which is not known a priori. However, this dependence disappears for power-law potentials. This case contains those of the previous subsections and also the potential $`V\propto \varphi ^{-|\beta |}`$, which approximates the potential for intermediate inflation and has been used in quintessence models . The previous examples can be extended to the case of a potential proportional to an even power of the scalar field, which is associated to chaotic inflation for $`\xi =0`$. The necessary condition (4.11) for the occurrence of accelerated cosmic expansion then becomes
$$\lambda \left(1-\frac{3n\xi }{2}\right)>0$$
(5.9)
when $`\xi \le 1/6`$. Under the assumption $`\lambda >0`$ corresponding to a positive scalar field potential, the necessary condition (4.11) for acceleration of the cosmic expansion fails to be satisfied when $`\xi \ge 2/(3n)`$, independently of the solution $`\varphi `$ and of the initial conditions ($`\varphi _0,\dot{\varphi }_0`$). This is interesting for $`n\ge 4`$. Hence, also in this case, NMC destroys acceleration in the range of values of $`\xi `$ $`\left(2/(3n),1/6\right]`$.
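The threshold $`\xi =2/(3n)`$ implied by (5.9) can be tabulated with exact fractions (an illustrative check, not from the paper):

```python
from fractions import Fraction

def xi_threshold(n):
    # condition (5.9) with lambda > 0 fails once xi >= 2/(3n)
    return Fraction(2, 3*n)

for n in (2, 4, 6, 8):
    print(n, xi_threshold(n), xi_threshold(n) <= Fraction(1, 6))
# the threshold drops to (or below) the conformal value 1/6 exactly when n >= 4
```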
The potential $`V=\lambda \varphi ^n`$ with $`n>6`$ gives rise to power-law inflation $`a=a_0t^p`$, where
$$p=\frac{1+(n-10)\xi }{(n-4)(n-6)|\xi |}$$
(5.10)
(; see also Ref. for the case $`\xi =1/6`$). The universe is accelerating if $`p>1`$. The range of values $`6<n\le 10`$ is interesting for superstring theories ; however, the scenario is fine-tuned for $`\xi >0`$ . For $`\xi <0`$ the solution is accelerating only if $`6<n<4+2\sqrt{3}\simeq 7.464`$.
### 5.6 Exponential potential
The potential $`V=V_0\mathrm{exp}\left(-\sqrt{2\kappa /p}\varphi \right)`$ is associated to power-law inflation $`a=a_0t^p`$ when $`\xi =0`$ and $`\varphi >0`$. An exponential potential is the fingerprint of theories which are reformulated in the Einstein conformal frame by means of a suitable conformal transformation of a theory previously set in the Jordan conformal frame (see Ref. for an explanation of this terminology and for the relevant formulas). This class of theories includes Kaluza-Klein and higher-dimensional theories with compactified extra dimensions, scalar-tensor theories of gravity, higher derivative and string theories . In this case, the low energy prediction for the coupling constant yields the value $`\xi =0`$ . Nevertheless, one can consider a positive exponential potential also for $`\xi \ne 0`$, and in this case the necessary condition for cosmic acceleration is
$$\frac{\varphi }{m_{pl}}>-\frac{1}{6\xi }\sqrt{\frac{p}{\pi }}\left(0<\xi \le \frac{1}{6}\right),$$
(5.11)
$$\frac{\varphi }{m_{pl}}<\frac{1}{6|\xi |}\sqrt{\frac{p}{\pi }}\left(\xi <0\right).$$
(5.12)
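Although Sec. 4 is not reproduced here, Eq. (5.9) is consistent with condition (4.11) taking the form $`V-(3\xi /2)\varphi \,dV/d\varphi >0`$; under that assumption (a reading of the text, not a quotation), the same expression also reproduces the boundary value in (5.11). A sympy sketch with $`\kappa =8\pi /m_{pl}^2`$:

```python
import sympy as sp

phi = sp.Symbol('phi', real=True)
xi, lam, n, p, mpl, V0 = sp.symbols('xi lambda n p m_pl V_0', positive=True)
kap = 8*sp.pi/mpl**2

def lhs_411(V):
    # assumed form of the necessary condition (4.11): V - (3*xi/2)*phi*dV/dphi > 0
    return V - sp.Rational(3, 2)*xi*phi*sp.diff(V, phi)

# power-law potential: recovers the factor in Eq. (5.9)
ratio = sp.simplify(lhs_411(lam*phi**n)/(lam*phi**n))

# exponential potential: the condition changes sign at the bound of Eq. (5.11)
V = V0*sp.exp(-sp.sqrt(2*kap/p)*phi)
factor = sp.simplify(lhs_411(V)/V)
critical_phi = sp.solve(sp.Eq(factor, 0), phi)[0]
print(ratio, sp.simplify(critical_phi/mpl))
```

The second printed quantity is the boundary $`-\sqrt{p/\pi }/(6\xi )`$ of Eq. (5.11).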
## 6 NMC and the slow-roll approximation to inflation
For minimal coupling the equations of inflation are solved in the slow-roll approximation , which amounts to assuming that the solution is approximately the de Sitter space
$$(H,\varphi )=(\sqrt{\frac{\mathrm{\Lambda }}{3}},0);$$
(6.1)
the slow-roll approximation works because the solution (6.1) is an attractor of the dynamical equations for $`\xi =0`$, and slow-roll inflation is a quasi-de Sitter expansion .
The slow-roll approximation is often used to solve also the equations for $`\xi \ne 0`$; however, it is unknown whether the de Sitter solution (6.1) is still an attractor in this case, and the use of the slow-roll approximation is unjustified unless this question is answered affirmatively.
We adopt the form (2.7) and (2.8) of the field equations, which guarantees the generality of the solution, the covariant conservation of the stress-energy tensor, the weak energy condition, and the constancy of the gravitational coupling. The equations of motion can be written as the Klein-Gordon equation (1.1), the trace of the Einstein equations, and the Hamiltonian constraint, respectively:
$$R=6\left(\dot{H}+2H^2\right)=\kappa \left(\rho -3P\right),$$
(6.2)
$$3H^2=\kappa \rho ,$$
(6.3)
where the energy density and pressure are given by
$$\rho =\frac{\dot{\varphi }^2}{2}+3\xi H^2\varphi ^2+6\xi H\varphi \dot{\varphi }+V$$
(6.4)
$$P=\frac{\dot{\varphi }^2}{2}-V-\xi \left(4H\varphi \dot{\varphi }+2\dot{\varphi }^2+2\varphi \ddot{\varphi }\right)-\xi \left(2\dot{H}+3H^2\right)\varphi ^2.$$
(6.5)
The equations of motion can be reduced to a two-dimensional system of first order equations for the variables $`H`$ and $`\varphi `$ ,
$$-6\dot{H}\left[1+\xi \left(6\xi -1\right)\kappa \varphi ^2\right]+\kappa \left(6\xi -1\right)\dot{\varphi }^2-12H^2+12\xi \left(1-6\xi \right)\kappa H^2\varphi ^2+4\kappa V-6\kappa \xi \varphi \frac{dV}{d\varphi }=0,$$
(6.6)
$$-\frac{\kappa }{2}\dot{\varphi }^2-6\xi \kappa H\varphi \dot{\varphi }+3H^2-3\kappa \xi H^2\varphi ^2-\kappa V=0;$$
(6.7)
then it is clearly convenient to formulate the problem in terms of the variables $`H`$ and $`\varphi `$. One can rewrite the system (6.6) and (6.7) as two equations that explicitly give the vector field $`(\dot{H},\dot{\varphi })`$ of the system. In the language of dynamical systems, the fixed points of this system are the de Sitter solutions corresponding to constant Hubble function and scalar field. It is straightforward to check that, for $`V=\mathrm{\Lambda }/\kappa \ge 0`$, the solutions
$$(H,\varphi )=(\pm \sqrt{\frac{\mathrm{\Lambda }}{3}},0)$$
(6.8)
satisfy Eqs. (1.1), (6.2), and (6.3) for arbitrary values of $`\xi `$. The slow-roll formalism is only meaningful when applied around a stable de Sitter solution (6.8); otherwise, small perturbations of the background run away from it and from inflation. Hence one asks whether the solutions (6.8) are stable or unstable fixed points; the answer is given by a local stability analysis. When the potential $`V=\mathrm{\Lambda }/\kappa `$ is left unchanged the equations for the perturbations $`\delta H`$ and $`\delta \varphi `$, defined by
$$H=H_0+\delta H,\varphi =\delta \varphi ,$$
(6.9)
yield perturbations that decrease exponentially with time for the expanding solution (6.8) and therefore stability for any $`\xi \ge 0`$; there is instability for $`\xi <0`$. The contracting solution (6.8) is unstable for any value of $`\xi `$. It is significant that the sign of the coupling constant $`\xi `$ affects the stability of the solution.
However, it is more interesting to consider perturbations of the equations of motion corresponding to a perturbed potential
$$V(\varphi )=\frac{\mathrm{\Lambda }}{\kappa }+V_0^{\prime }\delta \varphi +\frac{V_0^{\prime \prime }}{2}\delta \varphi ^2+\frac{V_0^{\prime \prime \prime }}{6}\delta \varphi ^3+\frac{V_0^{(IV)}}{24}\delta \varphi ^4+\cdots .$$
(6.10)
The cosmological constant is then seen as the zeroth order approximation of the potential. The density and pressure perturbations are given by
$$\delta \rho =\frac{\delta \dot{\varphi }^2}{2}+3\xi H_0^2\delta \varphi ^2+6\xi H_0\delta \varphi \delta \dot{\varphi }+\frac{V_0^{\prime \prime }}{2}\delta \varphi ^2+\cdots ,$$
(6.11)
$$\delta P=\frac{\delta \dot{\varphi }^2}{2}-4\xi H_0\delta \varphi \delta \dot{\varphi }-2\xi \delta \dot{\varphi }^2-2\xi \delta \varphi \delta \ddot{\varphi }-3\xi H_0^2\delta \varphi ^2-\frac{V_0^{\prime \prime }}{2}\delta \varphi ^2+\cdots ,$$
(6.12)
where the ellipses denote higher order contributions and the Klein-Gordon equation implies $`V_0^{\prime }\equiv dV/d\varphi |_0=0`$. The perturbations satisfy the equations of motion
$$\delta \ddot{\varphi }+3H_0\delta \dot{\varphi }+\left(12\xi H_0^2+V_0^{\prime \prime }\right)\delta \varphi +\cdots =0,$$
(6.13)
$$\delta H=\frac{\kappa }{6H_0}\left(\frac{\delta \dot{\varphi }^2}{2}+3\xi H_0^2\delta \varphi ^2+6\xi H_0\delta \varphi \delta \dot{\varphi }+\frac{V_0^{\prime \prime }}{2}\delta \varphi ^2\right)+\cdots .$$
(6.14)
By assuming fundamental solutions of the form
$$\delta \varphi =ϵ\text{e}^{\alpha t}$$
(6.15)
one finds the algebraic equation for $`\alpha `$
$$\alpha ^2+3H_0\alpha +12\xi H_0^2+V_0^{\prime \prime }=0.$$
(6.16)
Let us first analyze the stability of the expanding de Sitter solution (6.8); the fundamental solutions $`\delta \varphi _{1,2}`$ corresponding to
$$\alpha _{1,2}=\frac{3H_0}{2}\left(-1\pm \sqrt{1-\frac{16\xi }{3}-\frac{4V_0^{\prime \prime }}{3\mathrm{\Lambda }}}\right)$$
(6.17)
are exponentially decreasing (or constant) when $`1-16\xi /3-4V_0^{\prime \prime }/(3\mathrm{\Lambda })`$ is not greater than unity, which corresponds to stability and is achieved for $`\xi \ge -V_0^{\prime \prime }/(4\mathrm{\Lambda })`$. There is instability when $`\xi <-V_0^{\prime \prime }/(4\mathrm{\Lambda })`$.
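The stability boundary can be confirmed numerically from Eq. (6.16) alone, scanning $`\xi `$ around $`-V_0^{\prime \prime }/(4\mathrm{\Lambda })`$ (illustrative numbers, not from the paper):

```python
import numpy as np

def max_growth_rate(xi, Vpp, Lam=1.0):
    # largest real part among the roots of Eq. (6.16), with H0 = sqrt(Lam/3)
    H0 = np.sqrt(Lam/3.0)
    return np.roots([1.0, 3.0*H0, 12.0*xi*H0**2 + Vpp]).real.max()

Vpp, Lam = 0.5, 1.0
xi_c = -Vpp/(4.0*Lam)                       # predicted boundary
print(max_growth_rate(xi_c + 0.01, Vpp))    # negative: perturbations decay
print(max_growth_rate(xi_c - 0.01, Vpp))    # positive: runaway mode
```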
Note that, for $`\xi =0`$, there is stability for $`V_0^{\prime \prime }>0`$ which happens, e.g., when the potential has a minimum $`\mathrm{\Lambda }/\kappa `$ at $`\varphi =0`$; a solution starting at any value of $`\varphi `$ is attracted towards the minimum (in slow-roll if the potential is sufficiently flat). If instead $`V_0^{\prime \prime }<0`$ and the potential has a maximum, the solution starting at $`\varphi =0`$ runs away from it.
When $`\xi \ne 0`$, the potential and the NMC term $`\xi R\varphi ^2/2`$ balance; if $`V(\varphi )`$ has a minimum $`V_0=\mathrm{\Lambda }/\kappa `$ at $`\varphi =0`$ the solution is unstable for large negative values of $`\xi `$. If instead the potential has a maximum $`\mathrm{\Lambda }/\kappa `$ at $`\varphi =0`$, then the expanding de Sitter solution is unstable for any negative $`\xi `$ and stable only for $`-4\mathrm{\Lambda }\xi \le V_0^{\prime \prime }<0`$. Again, the stability character of the de Sitter solution is determined not only by the shape of the potential, but also by the value of the coupling constant $`\xi `$. This analysis makes exact the previous qualitative considerations of Refs. on the balance between $`\xi R\varphi ^2/2`$ and $`V(\varphi )`$, and is not limited to the case in which $`V(\varphi )`$ has an extremum at $`\varphi =0`$.
The contracting de Sitter solution (6.8) is unstable for any value of $`\xi `$, as is deduced by repeating the analysis above.
## 7 Conformal transformation techniques
Conformal transformation techniques are often used to reduce the study of a cosmological scenario with a nonminimally coupled scalar field to the problem of a minimally coupled field, with considerable mathematical simplification (see Ref. for a review). The “Jordan conformal frame” in which the scalar field couples nonminimally to the Ricci curvature is mapped into the “Einstein frame” in which the (transformed) scalar is minimally coupled. The two frames are not physically equivalent, and care must be taken in applying conformal techniques .
The conformal transformation is given by
$$g_{ab}\stackrel{~}{g}_{ab}=\mathrm{\Omega }^2g_{ab},$$
(7.1)
where
$$\mathrm{\Omega }=\sqrt{1-\kappa \xi \varphi ^2},$$
(7.2)
and the scalar field is redefined according to
$$d\stackrel{~}{\varphi }=\frac{\sqrt{1-\kappa \xi (1-6\xi )\varphi ^2}}{1-\kappa \xi \varphi ^2}d\varphi .$$
(7.3)
The “new” scalar $`\stackrel{~}{\varphi }`$ in the Einstein frame $`(\stackrel{~}{g}_{ab},\stackrel{~}{\varphi })`$ is minimally coupled,
$$\stackrel{~}{\mathrm{\Box }}\stackrel{~}{\varphi }-\frac{d\stackrel{~}{V}}{d\stackrel{~}{\varphi }}=0,$$
(7.4)
where
$$\stackrel{~}{V}\left(\stackrel{~}{\varphi }\right)=\frac{V\left[\varphi \left(\stackrel{~}{\varphi }\right)\right]}{\left(1-\kappa \xi \varphi ^2\right)^2}$$
(7.5)
and $`\varphi =\varphi \left(\stackrel{~}{\varphi }\right)`$ is obtained by integrating and inverting Eq. (7.3). The conformal transformation technique is useful to solve the equations of cosmology in the Einstein frame and then map the solutions $`(\stackrel{~}{g}_{ab},\stackrel{~}{\varphi })`$ back into the physical solutions $`(g_{ab},\varphi )`$ of the Jordan frame with NMC. Although from the mathematical point of view it is convenient to obtain exact solutions with NMC in this way (see e.g. Ref. ), in general the procedure is not very interesting from the physical point of view. In fact, one starts from a known solution for a potential $`\stackrel{~}{V}\left(\stackrel{~}{\varphi }\right)`$ motivated by particle physics in the unphysical Einstein frame, and one obtains a solution in the physical Jordan frame which corresponds to a potential $`V(\varphi )`$ with no physical justification and, therefore, not very interesting. Furthermore, if a solution is inflationary in one frame, its conformally transformed counterpart in the other frame is not necessarily inflationary. To give an example, we consider a conformally coupled scalar field. Starting with the potential $`\stackrel{~}{V}\left(\stackrel{~}{\varphi }\right)=\lambda \stackrel{~}{\varphi }^4`$ in the Einstein frame, one integrates Eq. (7.3) and uses Eq. (7.5) to obtain
$$V(\varphi )=\left(\frac{3}{2\kappa }\right)^2\lambda \left(1-\frac{\kappa }{6}\varphi ^2\right)^2\mathrm{ln}^4\left[\frac{\sqrt{\kappa /6}\varphi +1}{\sqrt{\kappa /6}\varphi -1}\right].$$
(7.6)
While the quartic potential in the unphysical Einstein frame is everyday’s routine, one would be hard put to justify the potential (7.6).
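For $`\xi =1/6`$ the map can be verified symbolically: Eq. (7.3) reduces to $`d\stackrel{~}{\varphi }=d\varphi /(1-\kappa \varphi ^2/6)`$, whose integral is the logarithm appearing in (7.6), and Eq. (7.5) then reassembles the quartic. A sympy sketch for $`0<\varphi <\varphi _c`$, where the log argument is taken in its positive form:

```python
import sympy as sp

phi, kap, lam = sp.symbols('phi kappa lambda', positive=True)
x = sp.sqrt(kap/6)*phi                      # x < 1 inside the critical barrier

# integral of Eq. (7.3) at xi = 1/6
phi_tilde = sp.sqrt(sp.Rational(3, 2)/kap)*sp.log((1 + x)/(1 - x))
check = sp.simplify(sp.diff(phi_tilde, phi) - 1/(1 - kap*phi**2/6))
print(check)

# Eq. (7.5): V = lambda*phi_tilde**4 * (1 - kap*phi**2/6)**2, cf. Eq. (7.6)
V = lam*phi_tilde**4*(1 - kap*phi**2/6)**2
print(sp.factor(V))
```

The first printed residual should vanish, confirming the integration step behind Eq. (7.6).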
There is, however, a meaningful situation in which an inflationary solution is mapped into another inflationary solution by the conformal transformation: the slow-roll approximation described in the previous section. To prove this statement, one begins by noting that an exact de Sitter solution (6.8) is invariant under the conformal transformation (7.1), (7.2), and (7.3). In fact, when $`\varphi `$ is constant, Eq. (7.1) reduces to a rescaling of the metric by a constant factor (which can be absorbed into a coordinate rescaling), and the scalar $`\varphi `$ is mapped into another constant $`\stackrel{~}{\varphi }`$. Moreover, it is proven in Sec. 6 that a de Sitter solution is an attractor point in the phase space for suitable values of the coupling constant $`\xi `$, with nonminimal coupling as well as with minimal coupling; hence, for these suitable values of $`\xi `$, the conformal transformation maps an attractor of the Jordan frame into an attractor of the Einstein frame. It is therefore meaningful to consider the slow-roll approximation to inflation in both frames.
In the Jordan frame the Hubble parameter is given by
$$a=a_0\mathrm{exp}\left[H(t)t\right],$$
(7.7)
$$H(t)=H_0+\delta H(t),$$
(7.8)
where $`H_0`$ is constant and $`\left|\delta H\right|<<\left|H_0\right|`$. In the Einstein frame one has the line element
$$d\stackrel{~}{s}^2=\mathrm{\Omega }^2ds^2=-d\stackrel{~}{t}^2+\stackrel{~}{a}^2\left(dx^2+dy^2+dz^2\right),$$
(7.9)
where $`d\stackrel{~}{t}=\mathrm{\Omega }dt`$ and $`\stackrel{~}{a}=\mathrm{\Omega }a`$. The Hubble parameter in the Einstein frame is
$$\stackrel{~}{H}\frac{1}{\stackrel{~}{a}}\frac{d\stackrel{~}{a}}{d\stackrel{~}{t}}=\frac{1}{\mathrm{\Omega }}\left(H+\frac{\dot{\mathrm{\Omega }}}{\mathrm{\Omega }}\right),$$
(7.10)
where an overdot denotes differentiation with respect to the Jordan frame comoving time $`t`$. For an exact de Sitter solution $`H=`$const. implies $`\stackrel{~}{H}=`$const. and vice-versa. A slow-roll inflationary solution in the Jordan frame satisfies Eq. (7.8) and
$$\varphi (t)=\varphi _0+\delta \varphi (t),$$
(7.11)
where $`\varphi _0`$ is constant and $`\left|\delta H\right|<<\left|H_0\right|`$, $`\left|\delta \varphi \right|<<\left|\varphi _0\right|`$; the corresponding Einstein frame quantities are
$$\stackrel{~}{H}=\frac{1}{\sqrt{1-\kappa \xi \varphi _0^2}}\left(H_0+\delta H+\frac{\kappa \xi \varphi _0H_0}{1-\kappa \xi \varphi _0^2}\delta \varphi -\frac{\kappa \xi \varphi _0}{1-\kappa \xi \varphi _0^2}\delta \dot{\varphi }\right)=\stackrel{~}{H}_0+\delta \stackrel{~}{H}$$
(7.12)
and
$$\stackrel{~}{\varphi }=\stackrel{~}{\varphi }_0+\delta \stackrel{~}{\varphi }$$
(7.13)
where, to first order,
$$\stackrel{~}{H}_0=\frac{H_0}{\sqrt{1-\kappa \xi \varphi _0^2}},$$
(7.14)
$$\frac{\delta \stackrel{~}{H}}{\stackrel{~}{H}}=\frac{\delta H}{H_0}+\frac{\kappa \xi \varphi _0^2}{1-\kappa \xi \varphi _0^2}\left(\frac{\delta \varphi }{\varphi _0}-\frac{\delta \dot{\varphi }}{H_0\varphi _0}\right),$$
(7.15)
$$\delta \stackrel{~}{\varphi }=\frac{\sqrt{1-\kappa \xi \left(1-6\xi \right)\varphi _0^2}}{1-\kappa \xi \varphi _0^2}\delta \varphi .$$
(7.16)
The smallness of the Jordan frame quantities in Eq. (7.15) guarantees the smallness of the deviation from a de Sitter solution $`\delta \stackrel{~}{H}/\stackrel{~}{H}`$ in the Einstein frame; slow-roll inflation in the Jordan frame implies slow-roll inflation in the Einstein frame. The converse is not true, as shown in Ref. in the special case $`\xi =1/6`$, and therefore some caution must be taken when mapping back solutions from the Einstein to the Jordan frame. These considerations are relevant for the calculation of density and gravitational wave perturbations with NMC aimed at testing NMC inflation with present and future satellite observations . One must take special care when computing quantum fluctuations and applying the conformal transformation (7.1), (7.2) and (7.3) since, in general, the vacuum state of one conformal frame is changed into a different state in the other frame .
The conformal transformation is only defined for $`\xi <0`$ and, if $`\xi >0`$, for values of $`\varphi `$ such that $`\varphi \ne \pm \varphi _c=\pm \left(\kappa \xi \right)^{-1/2}`$. For large values of $`\xi `$, this is a serious limitation on the usefulness of conformal transformation techniques. When $`\xi >0`$ and the nonminimally coupled scalar field approaches the critical values $`\pm \varphi _c`$, $`\stackrel{~}{g}_{ab}`$ degenerates, $`\stackrel{~}{\varphi }`$ diverges and the conformal transformation technique cannot be applied. This happens when $`\varphi \simeq 0.199\xi ^{-1/2}m_{pl}`$, which induces the unreasonable constraint $`\left|\varphi \right|<0.2m_{pl}`$ if $`\xi `$ is of order unity (for example, chaotic inflation requires $`\varphi `$ larger than about $`5m_{pl}`$ ). In particular, for strong positive coupling $`\xi >>1`$, the critical value $`\left|\varphi _c\right|`$ corresponds to very low energies.
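The numerical coefficient quoted here is just $`(8\pi )^{-1/2}`$, from $`\varphi _c=(\kappa \xi )^{-1/2}`$ with $`\kappa =8\pi /m_{pl}^2`$:

```python
import math

coeff = 1.0/math.sqrt(8.0*math.pi)   # phi_c = coeff * xi**(-1/2) * m_pl
print(round(coeff, 3))               # 0.199
```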
The conformal transformation technique cannot provide solutions with $`\varphi `$ crossing the barriers $`\pm \varphi _c`$, even when such solutions are physically admissible. In this sense, the conformal technique has the same limitations of the form of the field equations (2.11) and (2.12) discussed in Sec. 2. The transformed scalar $`\stackrel{~}{\varphi }`$ in the Einstein frame can be explicitly expressed in terms of $`\varphi `$ by integrating Eq. (7.3),
$$\stackrel{~}{\varphi }=\sqrt{\frac{3}{2\kappa }}\mathrm{ln}\left[\frac{\xi \sqrt{6\kappa \varphi ^2}+\sqrt{1-\xi \left(1-6\xi \right)\kappa \varphi ^2}}{\xi \sqrt{6\kappa \varphi ^2}-\sqrt{1-\xi \left(1-6\xi \right)\kappa \varphi ^2}}\right]+f\left(\varphi \right),$$
(7.17)
where
$$f\left(\varphi \right)=\left(\frac{1-6\xi }{\kappa \xi }\right)^{1/2}\mathrm{arcsin}\left(\sqrt{\xi \left(1-6\xi \right)\kappa \varphi ^2}\right)$$
(7.18)
for $`0<\xi \le 1/6`$ and
$$f\left(\varphi \right)=-\left(\frac{6\xi -1}{\kappa \xi }\right)^{1/2}\mathrm{arcsinh}\left(\sqrt{\xi \left(6\xi -1\right)\kappa \varphi ^2}\right)$$
(7.19)
for $`\xi \ge 1/6`$. Eqs. (7.17)-(7.19) show that $`\stackrel{~}{\varphi }\to \pm \infty `$ in the Einstein frame as $`\varphi \to \pm \varphi _c`$ in the Jordan frame. Any nonminimally coupled solution $`\varphi `$ crossing the barriers $`\pm \varphi _c`$ cannot be found by applying the conformal transformation technique.
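Eqs. (7.17)-(7.19) can be checked against (7.3) by differentiation; the following numerical sketch evaluates the residual at a sample point, one value of $`\xi `$ per branch (sign conventions as assumed in this sketch):

```python
import sympy as sp

phi, kap = sp.symbols('phi kappa', positive=True)

def dphi_tilde(xi):
    # derivative of Eqs. (7.17)-(7.19) for a numeric xi > 0
    A = xi*sp.sqrt(6*kap*phi**2)
    B = sp.sqrt(1 - xi*(1 - 6*xi)*kap*phi**2)
    expr = sp.sqrt(sp.Rational(3, 2)/kap)*sp.log((A + B)/(A - B))
    if xi <= sp.Rational(1, 6):
        expr += sp.sqrt((1 - 6*xi)/(kap*xi))*sp.asin(sp.sqrt(xi*(1 - 6*xi)*kap*phi**2))
    else:
        expr += -sp.sqrt((6*xi - 1)/(kap*xi))*sp.asinh(sp.sqrt(xi*(6*xi - 1)*kap*phi**2))
    return sp.diff(expr, phi)

point = {kap: 1, phi: sp.Rational(1, 2)}
residuals = []
for xi in (sp.Rational(1, 10), sp.Rational(1, 2)):
    target = sp.sqrt(1 - kap*xi*(1 - 6*xi)*phi**2)/(1 - kap*xi*phi**2)  # Eq. (7.3)
    residuals.append(abs(sp.N((dphi_tilde(xi) - target).subs(point))))
print(residuals)   # both negligibly small
```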
An explicit example of such a solution is the one corresponding to a nonminimally coupled scalar field which is constant and equal to one of the critical values. In this case the field equations (1.1), (6.2), and (6.3) yield $`R=6\left(\dot{H}+2H^2\right)=0`$ and $`V=0`$. The solution
$$a=a_0\sqrt{t-t_0},\varphi =\pm \frac{1}{\sqrt{\kappa \xi }}\left(\xi >0\right),$$
(7.20)
corresponds to the vanishing of the trace of the energy-momentum tensor $`\stackrel{~}{T}_{ab}\left[\varphi \right]=\stackrel{~}{T}_{ab}^{(total)}`$ in Eqs. (2.7) and (2.8), and to the radiation equation of state $`P=\rho /3`$.
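That (7.20) indeed has $`R=0`$ is immediate: $`a\propto \sqrt{t-t_0}`$ gives $`H=1/[2(t-t_0)]`$, so $`\dot{H}+2H^2=0`$. A one-line sympy confirmation:

```python
import sympy as sp

t, t0, a0 = sp.symbols('t t_0 a_0', positive=True)
a = a0*sp.sqrt(t - t0)                         # Eq. (7.20)
H = sp.diff(a, t)/a
R = sp.simplify(6*(sp.diff(H, t) + 2*H**2))    # Ricci scalar of spatially flat FLRW
print(R)   # 0
```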
## 8 Discussion and conclusions
Scalar fields are a basic ingredient of particle physics and cosmology and many arguments strongly suggest that a scalar field must couple nonminimally to the Ricci curvature of spacetime in the theories of gravity and of the scalar field used to build most scenarios of inflation and quintessence. Therefore, one cannot ignore NMC in these models. In this paper we approach several topics in the physics of NMC, from a general (i.e. not limited to a specific potential $`V(\varphi )`$) point of view.
First, it is shown that the possible forms of writing the field equations are not equivalent, and it is pointed out that some of them lead to loss of generality and to a restricted class of solutions. This is not a problem when one focuses on a specific solution $`(g_{ab},\varphi )`$, but it compromises studies that aim at generality like, e.g., the dynamical system analysis of the equations of cosmology in the phase space. Further, a shadow is cast on the reality of the time-variability of the effective gravitational constant $`G_{eff}(t)`$ in many cosmological scenarios using NMC. In fact, the time-variability may be removed by passing to a different form of the field equations, and this interpretation problem deserves attention in the future.
The conservation equations for the different forms of the field equations are discussed, and it emerges that different formulations lead to different definitions of the energy density and pressure of the scalar field. As a result of this fact, the problem of whether a nonminimally coupled scalar field satisfies the weak energy condition becomes fuzzy. In this paper, the form of the field equations (2.7) and (2.8) is preferred because i) it does not lead to loss of generality, ii) the stress-energy tensor (2.8) is covariantly conserved and satisfies the weak energy condition in spatially flat and closed FLRW universes, and iii) the gravitational coupling is constant, and there are no interpretation problems with a time-varying $`G_{eff}(t)`$.
The crucial feature of inflationary and quintessence models, i.e. the acceleration of the universe, is studied when the universe is dominated by a nonminimally coupled scalar field. The inclusion of the NMC term $`\xi R\varphi ^2/2`$ in the Lagrangian density seems to make it harder to achieve cosmic acceleration for most potentials that are known to be inflationary when $`\xi =0`$. This conclusion derives from the dynamical equations for the scalar field and the scale factor and does not rely upon the slow-roll approximation, nor does it arise from independent consistency requirements of the kind discussed in Ref. . In addition to the dynamical arguments, one must keep in mind that a given inflationary scenario must be consistent with the theoretical prescriptions for the value of $`\xi `$, which further constrain the known scenarios . Fine-tuning arguments or, in other words, the genericity of inflation, are also an issue .
The NMC term $`\xi R\varphi ^2/2`$ can balance a suitable scalar field potential $`V(\varphi )`$ and induce cosmic acceleration with a wider class of potentials than is normally considered. However, the NMC term in the Lagrangian cannot completely substitute for a potential and induce an acceleration epoch when $`V=0`$. We have proved this statement in Sec. 4 for values of the coupling constant $`\xi \le 1/6`$, but we could not reach a conclusion for $`\xi >1/6`$. A new solution for $`\xi =0`$ expanding from a big-bang singularity and quickly approaching a de Sitter space is also presented.
Since almost all the inflationary scenarios proposed to date are based on the slow-roll approximation, the role of de Sitter solutions (which are fixed points) as attractor points in the phase space is crucial. We study this issue in the presence of NMC and we find that the stability of the expanding de Sitter solution (6.8) is determined not only by the shape of the potential $`V(\varphi )`$ (as is the case of minimal coupling), but also by the value of the coupling constant $`\xi `$. A more general analysis including perturbations which are space-dependent or anisotropic is needed to confirm the stability; however, our local perturbation analysis is sufficient to establish instability for
$$\xi <\xi _0\equiv \frac{V_0^{\prime \prime }}{4\mathrm{\Lambda }},$$
(8.1)
where $`\mathrm{\Lambda }=\kappa V_0`$. Note that $`\xi _0=\eta _0/4`$, where $`\eta =V^{\prime \prime }/(\kappa V)`$ is one of the slow-roll parameters used in the slow-roll approximation for minimal coupling .
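The relation $`\xi _0=\eta _0/4`$ follows directly from the definitions above; as a quick numerical spot-check (the parameter values below are arbitrary placeholders, not physical inputs):

```python
# Spot-check of xi_0 = eta_0/4, using Lambda = kappa*V_0 and
# eta = V''/(kappa*V) as defined in the text.
# The numbers below are arbitrary placeholders.
kappa, V0, V0pp = 1.3, 2.0, 0.6

Lam = kappa * V0
xi0 = V0pp / (4 * Lam)       # Eq. (8.1): xi_0 = V_0'' / (4 Lambda)
eta0 = V0pp / (kappa * V0)   # slow-roll parameter at the fixed point

assert abs(xi0 - eta0 / 4) < 1e-12
print("xi_0 = eta_0/4 confirmed:", xi0)
```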
Contracting de Sitter solutions are always unstable, as in the $`\xi =0`$ case. These considerations set precise limits on the domain in which the slow-roll approximation is meaningful in the presence of NMC, and are fundamental for the computation of scalar and tensor perturbations. Ultimately, the amplitudes and spectral indices of these perturbations are the predictions of the theory to be compared with observations of the cosmic microwave background.
Conformal transformation techniques are widely used in scalar field cosmology and it is useful to clarify their link with the slow-roll approximation. We prove that slow-roll inflation in the physical Jordan frame (in which the scalar field is nonminimally coupled) implies slow-roll inflation in the unphysical Einstein frame (but not vice-versa), and make explicit the limitations intrinsic to the use of conformal transformations. Analytic examples are given in which the conformal transformation method cannot be applied.
Recently, there has been a great deal of work on NMC in both inflationary and quintessence models; this paper justifies certain assumptions and methods used, solves some of the problems posed, and provides caveats on difficulties that were overlooked. Our considerations will be applied in the future to specific models; other areas in which NMC is relevant include quantum cosmology, classical and quantum wormholes, and the stability of boson stars.
## Acknowledgments
It is a pleasure to thank S. Odintsov and E. Gunzig for useful discussions, and a referee for valuable contributions to Sec. 4.3. This work was supported by the EEC grant PSS\* 0992 and by OLAM, Fondation pour la Recherche Fondamentale, Brussels.
# Theoretical Perspectives on Spintronics and Spin-Polarized Transport
## I Introduction
Spintronics is a new branch of electronics where electron spin (rather than, or in addition to, electron charge) is the active element for information storage and transport . Spintronic devices have the potential to replace and complement various conventional electronic devices with improved performance. In a broader sense, spintronics also includes new fields such as spin-based quantum computation and communication . To determine the feasibility of spintronic devices and, more generally, of various applications of spin-polarized transport (such as solid-state quantum computing), it is essential to answer questions like how to create and detect spin-polarized carriers, how to maintain their spin polarization and spin coherence, or conversely how the spin polarization and spin coherence are destroyed.
In this paper we will explore the question of how conduction electrons (and holes) lose their spin coherence in view of our recent theoretical investigation of spin decoherence in metals . The conduction-electron spin relaxation time $`T_1`$ (which is the same as the transverse decoherence time $`T_2`$ for electronic systems with nearly spherical Fermi surfaces) also determines the quality of spintronic devices. The longer the conduction electrons remain in a certain spin state (up or down), the longer and more reliably they can store and carry information.
One important realization of spintronic devices is based on hybrid semiconductor structures . In spite of the initial proposal over three decades ago and numerous experimental efforts, one of the key ingredients, direct electrical spin injection into a non-magnetic semiconductor, has only recently been realized . Nevertheless, by fabricating a novel class of ferromagnetic semiconductors based on Mn-doped GaAs, and by employing the extensive experience with semiconductor technology that dominates traditional electronics, significant progress is expected. One of the important limitations, however, is the influence of interfaces between the different materials in such hybrid structures. In this context, we will discuss here spin-polarized transport in hybrid semiconductor structures, including our proposal to study semiconductor/superconductor structures , which provide a means to measure the degree of spin polarization and to investigate the interfacial scattering.
Quantum computation has been one of the most actively studied areas in general physics in the past few years . The dream of using quantum objects such as electrons as the basic unit of a computer, which is the ultimate of circuit miniaturization, together with the promise of exponential speed-up due to quantum mechanics, has drawn intense interest of scientists from a wide range of specialties. Electron and nuclear spins are the quantum bits (qubits) of some promising proposals for quantum computers (QC) . For these proposals to work, one needs to be able to precisely manipulate the dynamics of these spins, in particular, to rotate single spins and entangle two spins. Here we will focus on several proposals to produce and to detect spin entanglement.
From the early days of nuclear magnetic resonance we know how to manipulate nuclear spins by radio frequency (rf) fields. In particular, rf fields can induce transitions between Zeeman states and even saturate spin population. A recent discovery claims to do the same with optical fields, to which nuclear spins are normally transparent. Electrons, however, can be spin-polarized by light and transfer the polarization to nuclei through the hyperfine coupling. By a periodic modulation of the electronic spin population, one can resonantly flip nuclear spins: the resulting oscillating hyperfine coupling acts as an effective “rf field.” Efforts to polarize nuclear spins through the hyperfine coupling are not new. In the past it was proposed (and in some cases verified) that a nuclear polarization can arise from saturating spins of conduction electrons by an rf field (Overhauser effect ), or purely electronically by generating hot carriers (Feher effect ). Here we will review some of these proposals and discuss their merits in the current context of spintronics and coherent control of nuclear spin dynamics.
## II Spin decoherence in electronic materials
Conduction electrons lose memory of their spin orientation through collisions with phonons, other electrons, and impurities. The crucial interaction which provides the necessary spin-dependent potential is the spin-orbit interaction. The spin-orbit interaction is a relativistic effect which can have various sources in electronic materials; the two most important sources being the interactions between electrons and impurities, and electrons and (lattice host) ions. The impurity-induced spin-orbit interaction (Overhauser ) is a random-site potential and, as such, it can induce momentum scattering accompanied by spin flip. The ion-induced spin-orbit interaction is a different story. This interaction is nicely periodic and by itself would not lead to any spin relaxation at all. However, the ion-induced spin-orbit interaction becomes a viable source of spin relaxation when combined with a momentum scattering mechanism (impurities or phonons). This was first realized by Elliott : because the periodic lattice potential (that includes a spin-orbit interaction) yields Bloch states which are, in general, not spin eigenstates, even a spin-independent scattering (by impurities or phonons) can induce spin flip . This mechanism of spin relaxation in metals and semiconductors is now called Elliott-Yafet mechanism (Yafet made a significant contribution to the theory by studying spin-flip electron-phonon interactions). We note that in materials without a center of inversion (like GaAs and many other interesting semiconductors), there are other relevant mechanisms of spin-relaxation. These mechanisms along with recent attempts of modulating spin dynamics in semiconductors are reviewed in .
Spin relaxation times $`T_1`$ in metals are typically nanoseconds (the record, $`T_1\approx 1\mu `$s, is held by a very pure Na sample at low temperatures ). Spin relaxation is an incredibly long process when compared with momentum relaxation; momentum relaxation times $`\tau `$ are just tens of femtoseconds at room temperature. That electron spins are a promising medium for information storage follows from the large value of the factor $`T_1/\tau `$. A crude estimate of $`T_1`$ is $`T_1\sim \tau /b^2`$, where $`b\sim V_{SO}/E_F`$, with $`V_{SO}`$ denoting an effective strength of the spin-orbit interaction, and $`E_F`$ the Fermi energy. Since $`V_{SO}\ll E_F`$, it follows that $`T_1/\tau \gg 1`$. The temperature ($`T`$) dependence of $`1/T_1`$ is similar to the temperature dependence of the resistivity $`\rho `$: at low $`T`$ (below 20 K) the spin relaxation is dominated by impurity scattering and is temperature independent. At higher temperatures electrons lose spin coherence by colliding with phonons. Above the Debye temperature, where the whole spectrum of phonons is excited and the number of phonons increases linearly with increasing temperature, $`1/T_1\propto T`$, similar to the resistivity. In an intrinsic sample (with a negligible amount of impurities), $`1/T_1`$ follows the Yafet law , $`1/T_1\propto T^5`$ (again similar to $`\rho `$), which is yet to be seen in experiment. The case of semiconductors is less clear-cut. Typical magnitudes of $`T_1`$ in semiconductors are nanoseconds too, but $`T_1`$ varies strongly with magnetic field, temperature, doping, and strain. The task of sorting out the different mechanisms in different regimes is very difficult and remains to be completed .
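As an illustration only, the qualitative temperature dependence described above can be put into a toy formula: a temperature-independent impurity rate plus a phonon rate that follows the Yafet $`T^5`$ law well below the Debye temperature and crosses over to a linear dependence above it. The prefactors below are invented for illustration; they are not fitted material values.

```python
def spin_relax_rate(T, theta_D=400.0, r_imp=1e6, r_ph=1e9):
    """Toy model of 1/T_1(T) in s^-1: a T-independent impurity term
    plus a phonon term ~T^5 (Yafet law) below the Debye temperature
    theta_D, crossing over to ~T above it.  All prefactors are
    illustrative placeholders, not material parameters."""
    if T < theta_D:
        phonon = r_ph * (T / theta_D) ** 5
    else:
        phonon = r_ph * (T / theta_D)
    return r_imp + phonon

# Below ~20 K the rate is impurity-dominated and nearly T-independent;
# well above theta_D it grows roughly linearly with T.
print([f"{spin_relax_rate(T):.3e}" for T in (5, 10, 20, 400, 800)])
```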
For how long can an electron travel in a solid-state environment without flipping its spin? Is there a limit on $`T_1`$? In an ideal impurity-free sample, $`T_1`$ would approach infinity as the temperature approaches absolute zero. Thus a recipe to increase $`T_1`$ at low temperatures is to produce very pure samples. But the most interesting region is at room temperature. Here phonons are the limiting factor, not impurities. Since we cannot get rid of phonons, increasing $`T_1`$ means reducing the spin-orbit coupling ($`b^2`$). Typically, the heavier the atom, the stronger the spin-orbit coupling. Therefore lighter metals like Na, Cu, or Li have longer $`T_1`$ than heavy metals like Hg or Pb. We do not know how large $`T_1`$ can be at room temperature, but an educated guess would be a microsecond for the materials of current technological interest.
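The crude estimate $`T_1\sim \tau /b^2`$ quoted above can be put into numbers. With illustrative values (a room-temperature $`\tau `$ of a few tens of femtoseconds and a spin-mixing amplitude $`b\sim 10^{-3}`$, both placeholders rather than measured inputs), one recovers nanosecond spin relaxation times:

```python
# Order-of-magnitude estimate T_1 ~ tau / b^2 from the text.
tau = 2e-14   # s, momentum relaxation time (illustrative placeholder)
b = 1e-3      # dimensionless spin-mixing amplitude b ~ V_SO/E_F (placeholder)

T1 = tau / b**2
ratio = T1 / tau   # = 1/b^2: how much longer spins survive than momentum
print(f"T_1 ~ {T1:.1e} s  (T_1/tau ~ {ratio:.0e})")
```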
Is there a way to control the spin relaxation rate, at least within a few orders of magnitude? To answer this question we need to understand better where the strength of the spin-orbit interaction $`b`$ comes from. We already pointed out that in general $`b\sim V_{SO}/E_F`$. This is indeed what a typical electron on the Fermi surface recognizes as the spin-orbit scattering: an electron with spin up in the absence of the spin-orbit interaction acquires a spin-down amplitude of magnitude $`b`$ when the interaction is turned on. Perturbation theory gives $`b\sim V_{SO}/\mathrm{\Delta }E`$, with $`\mathrm{\Delta }E`$ being the typical (vertical) energy difference between neighboring bands. For a general Fermi surface point $`\mathrm{\Delta }E\sim E_F`$, and one recovers $`b\sim V_{SO}/E_F`$. But there can be points on the Fermi surface with $`\mathrm{\Delta }E\ll E_F`$! Such points occur near Brillouin zone boundaries or accidental degeneracy lines. In the former case $`\mathrm{\Delta }E\sim V_G\ll E_F`$, where $`V_G`$ is the $`G`$th Fourier component of the electron-ion interaction ($`G`$ is the reciprocal lattice vector associated with the Brillouin zone boundary); in the latter case $`\mathrm{\Delta }E`$ approaches zero and degenerate perturbation theory gives $`b\sim 1`$. We call the points on the Fermi surface where $`b\gg V_{SO}/E_F`$ spin hot spots . The area of the Fermi surface covered by spin hot spots is not large, so it may seem that on average these points will not contribute much to spin relaxation. It turns out , however, that despite their small weight, spin hot spots dominate the average $`b^2`$, which is then significantly enhanced (typically by 1 to 4 orders of magnitude). The spin relaxation time $`T_1`$ is correspondingly reduced.
Spin hot spots are ubiquitous in polyvalent metals. Our theory then predicts that spin relaxation in polyvalent metals proceeds faster than expected (in fact, the significance of the points of accidental degeneracy for spin relaxation in Al was first pointed out by Silsbee and Beuneu ). This is indeed what is observed. Long before the theory was developed, Monod and Beuneu collected $`T_1(T)`$ for different metals with the expectation to confirm the formula $`1/T_1(T)\sim b^2/\tau (T)`$, with the simple estimate of $`b\sim V_{SO}/E_F`$. This indeed worked for several metals (monovalent alkali and noble metals like Na or Cu), but not for polyvalent Al, Pd, Be, and Mg (these remain the only polyvalent metals measured thus far). Spin relaxation times for the measured polyvalent metals were 2-4 orders of magnitude smaller than expected. The explanation of this unexpected behavior came with the spin-hot-spot model (see the comparison between the measured and calculated $`T_1`$ of Al in ).
In addition to providing a theoretical explanation for the longstanding problem of why electron spins in metals like Al or Mg decay unexpectedly fast, our theory also shows a way of tailoring spin dynamics of conduction electrons. Spin hot spots arise from band structure anomalies which can be shrunk or swollen by band-structure engineering. Strain, for example, can make a Fermi surface cross through Brillouin zone boundaries, thus increasing the hot-spot area and correspondingly $`1/T_1`$. Other possibilities include alloying, applying pressure, changing dimensionality of the system, or doping (if dealing with semiconductors). Any effect that changes the topology of the Fermi surface will have a severe effect on spin relaxation. This prediction remains to be verified experimentally. The important result of Kikkawa and Awschalom which shows that $`T_1`$ of some III-V and II-VI semiconductors can be significantly (by two orders of magnitude) enhanced by doping, is not a manifestation of spin hot spots, but it is still most probably (directly or not) a band structure effect.
## III Spin-polarized transport in hybrid semiconductor structures
We consider next some aspects of spin-polarized transport in semiconductors and how studies of semiconductor/superconductor (Sm/S) hybrid structures can be used to investigate the feasibility of novel spintronic devices. With the prospect of making spintronic devices which consist of hybrid structures, it is necessary to understand the influence of interfaces between different materials. In the effort to fabricate increasingly smaller devices, it is feasible to attain a ballistic regime, where the carrier mean free path exceeds the relevant system size. Consequently, scattering from the interfaces plays a dominant role. In a wide variety of semiconductors the main sources of interfacial scattering at the interface with a normal metal are the formation of a native Schottky barrier and the large difference in carrier densities, i.e., the Fermi velocity mismatch between the two materials. In the absence of spin-polarized carriers this leads to reduced interfacial transparency, and different techniques are employed to suppress the Schottky barrier, which can be examined using low temperature transport measurements in Sm/S structures . For spin-polarized transport in a non-magnetic semiconductor, where the polarized carriers are electrically injected from a ferromagnet or a ferromagnetic semiconductor, the situation is more complicated. A magnetically active interface can introduce both potential and spin-flip scattering, leading to spin-dependent transmission (spin filtering) across the interface and a change in the degree of carrier spin polarization. The latter possibility has a profound effect on spintronic devices, as they rely on controlled and preferably large carrier spin polarization.
While there are alternative ways available to create spin-polarized carriers and spin-polarized transport in a semiconductor, an important obstacle to developing semiconductor-based spintronic devices was achieving direct spin injection from a ferromagnet . Previous experiments demonstrating spin injection into a non-magnetic metal and into a superconductor have created a strong impetus to advance studies of spin-polarized transport in the corresponding materials. In the experiment by Hammar et al. , permalloy (Ni<sub>0.8</sub>Fe<sub>0.2</sub>, Py) was used as a ferromagnet for spin injection into a two-dimensional electron gas. It was theoretically suggested that the limitations on achieving a higher degree of spin polarization are consequences of working in a diffusive regime and of the current conversion near the ferromagnet/semiconductor interface. A different approach, which would circumvent such difficulties, was proposed by Tang et al. , who considered spin injection and detection in a ballistic regime. Subsequent experiments on spin injection in semiconductors have also employed diluted magnetic semiconductors and ferromagnetic semiconductors as sources of spin-polarized carriers . In these cases the reduced interfacial barrier and Fermi velocity mismatch (as compared to the interface of a semiconductor with a metallic ferromagnet) should facilitate injection of carriers across the interface with a substantial degree of spin polarization. Investigating this point is another reason to perform experiments and theoretical studies focusing on the role of interfacial scattering.
We have proposed employing spin-polarized transport in Sm/S hybrid structures to address the role of interfacial scattering and to detect the degree of spin polarization. Introducing the S region in the semiconductor structures has a dual purpose. Choosing S as a conventional, spin-singlet metallic superconductor (Al, Sn, ...) implies the formation of a Schottky barrier at the Sm/S interface, which we want to investigate, and by cooling these materials below the superconducting transition temperature, $`T_c`$, scattering processes present exclusively in the superconducting state can serve as a diagnostic tool. At temperatures much lower than $`T_c`$, and at low applied bias voltage, the transport is governed by the process of Andreev reflection . Prior to the work in , spin-polarized Andreev reflection had been investigated theoretically and experimentally only in the context of ferromagnets. In this two-particle process, an incident electron, together with a second electron of the opposite spin (with their total energy $`2E_F`$, their individual energies lying slightly above and below $`E_F`$, respectively), is transferred across the interface into the superconductor, where they form a Cooper pair. Alternatively, this process can be viewed as an incident electron which at a Sm/S interface is reflected as a hole belonging to the opposite spin subband, back into the Sm region, while a Cooper pair is transferred to the superconductor. The probability of Andreev reflection at low bias voltage is thus related to the square of the normal-state transmission coefficient and can have a stronger dependence on the junction transparency than ordinary single-particle tunneling. For spin-polarized carriers, with different populations in the two spin subbands, only a fraction of the incident electrons from the majority subband will have a minority-subband partner with which to be Andreev reflected.
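In the simplest ballistic picture (no interfacial barrier, matched Fermi velocities), this suppression of Andreev reflection by spin polarization takes the well-known form $`G_S/G_N=2(1-P)`$ for the zero-bias subgap conductance. The sketch below assumes that idealized limit and is not a substitute for a full transport calculation:

```python
def andreev_conductance_ratio(P):
    """Zero-bias subgap conductance of an ideal, fully transparent
    point contact with a superconductor, normalized to the normal-state
    value: G_S/G_N = 2*(1 - P), where P is the carrier spin
    polarization.  Assumes the simplest ballistic, barrier-free limit."""
    if not 0.0 <= P <= 1.0:
        raise ValueError("polarization P must lie in [0, 1]")
    return 2.0 * (1.0 - P)

for P in (0.0, 0.35, 1.0):
    print(f"P = {P:.2f}: G_S/G_N = {andreev_conductance_ratio(P):.2f}")
# P = 0: conductance doubling from Andreev reflection;
# P = 1: no minority-spin partners, so the subgap conductance vanishes.
```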
In the superconducting state, for an applied voltage smaller than the superconducting gap, single-particle tunneling is not allowed in the S region, and the modification of the Andreev reflection amplitude by spin polarization or junction transparency will be manifested in transport measurements. To our knowledge, no experiments on spin-polarized transport in Sm/S structures have yet been performed. The high sensitivity to the degree of spin polarization displayed in the experiments on ferromagnets (including the first measurements of the spin polarization in some materials) should serve as a strong incentive to examine semiconductors in a similar way. Performing such experiments in semiconductors would enable the use of advanced fabrication techniques, tunable electronic properties (such as the carrier density and the Fermi velocity), and the well studied band structure needed in the theoretical interpretation.
Introducing superconducting regions in the S/Sm structures is not limited to diagnostic purposes. They can give rise to new physical phenomena relevant to device operation. For example, a different application of spin-polarized transport in Sm/S structures has been suggested by Kulić and Endres . They consider properties of thin films in the ferromagnetic insulator/superconductor/ferromagnetic insulator (FI/S/FI) configuration, which display qualitatively different behavior from the previously studied structures where the FI is replaced by a metallic ferromagnet (F). In such F/S/F systems it is known that there are important proximity effects, with the superconducting order parameter extending into the non-superconducting material. Consequently, it has been shown that they give rise to oscillations in $`T_c`$ as a function of the thickness of the superconducting region. In contrast, for FI/S/FI structures it was shown that $`T_c`$ is independent of the thickness of the superconducting thin film and can be tuned by changing the angle between the magnetization directions lying in the planes of the two FI regions. It was proposed that these features and the simpler physical properties compared to the F/S/F systems can be used to implement switches and logic circuits. For example, switching between the normal and superconducting states could be performed by changing the magnetization directions (for a spin-singlet superconductor $`T_c`$ depends only on the relative angle between the two magnetization vectors). Here we note that the novel ferromagnetic semiconductors may be suitable candidates for the FI regions discussed above. With appropriate Mn doping they would display insulating behavior and effectively suppress proximity effects.
## IV Spin-based solid state quantum computation
For spins in solids to be useful in quantum computing, it is important to have ways to move information regarding these spins. This transfer can be achieved through nearest (fixed) neighbor interactions, such as among nuclear spins; or one can use mobile objects like conduction electrons in semiconductors. While the latter approach gives us more freedom in manipulating the system, it is also more susceptible to relaxation caused by transport.
One of the first proposals to use electron spins in solids for the purpose of quantum computing suggests confining electrons in quantum dots. The spins of the trapped electrons serve as qubits, while the quantum dots in which they reside serve as tags for each qubit. There is one electron in each quantum dot, so that each qubit can be readily identified. The individual electron spins can be easily manipulated by a pulsed local magnetic field. It is conceivable that such a field can be produced by local magnetic moments such as a magnetic quantum dot or an STM tip. Furthermore, if the electrons can be moved in the structure to an area away from the rest of the qubits without losing their identity, the requirement on the magnetic field can be loosened. Such transport of electrons might be achieved through, for example, channels or STM tips. A controlled exchange interaction between electrons in nearest-neighbor quantum dots can produce the desired entanglement between electron spins , while a finite magnetic field can be applied to reduce the error rate during this process . It has also been proposed that optically mediated entanglement can be achieved if the quantum dots are placed in a micro-cavity .
To produce a practical electron-spin-based quantum dot QC is going to be an extremely challenging experimental problem. For example, because electrons are identical particles, exchange errors are always looming whenever two electrons have wavefunction overlap . Stray electrons (trapped on surfaces or impurities) can easily cause information loss through this channel. If electronic qubits are not moved around, single-qubit operations would require precisely controlled local magnetic fields which should not affect unintended electrons. Similarly, two-qubit operations would require well controlled tuning of the gate voltage between neighboring quantum dots. Electron spins relax much faster (on the order of ns to $`\mu `$s ) than nuclear spins (minutes to hours ), which would invariably decrease the signal-to-noise ratio and require error correction. The above-mentioned problems can be dealt with one by one. For example, the swap gate (or square root of swap) is an essential ingredient for two-qubit operations. Thus it would be a big step forward if one could demonstrate the swap action in a double dot, even if the swap efficiency is far less than 100%. In the spirit of converting spin information into transport properties, one approach might be to inject two streams of electrons into the two coupled quantum dots, with one stream fully polarized. By adjusting the speed of injection, one can control the time that electrons remain in the dots, so that at the output the originally unpolarized electron stream may acquire some degree of average spin polarization, which can then be measured.
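The "square root of swap" mentioned above has a simple explicit matrix form. The numpy sketch below (basis ordering |00>, |01>, |10>, |11>, a standard convention rather than anything specified in the text) checks that two applications reproduce a full SWAP and that a single application turns the product state |01> into an entangled superposition:

```python
import numpy as np

# sqrt(SWAP) on two qubits, basis order |00>, |01>, |10>, |11>.
a, b = (1 + 1j) / 2, (1 - 1j) / 2
sqrt_swap = np.array([[1, 0, 0, 0],
                      [0, a, b, 0],
                      [0, b, a, 0],
                      [0, 0, 0, 1]], dtype=complex)

swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Two applications give the full SWAP gate.
assert np.allclose(sqrt_swap @ sqrt_swap, swap)

# One application entangles the product state |01>.
ket01 = np.array([0, 1, 0, 0], dtype=complex)
out = sqrt_swap @ ket01
print(out)   # equal-weight superposition of |01> and |10>
```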
Electron spins are not the only possible building blocks for proposed spin-based solid state QCs. One proposal that has attracted a lot of attention attempts to combine the extremely long coherence time of nuclear spins and the immense industrial experience with silicon processing to produce a scalable QC. Donor nuclear spins are employed as qubits in this scheme. Donor electrons also play important roles here. Controlled by two types of gates, electrons are used to adjust the nuclear resonance frequency for one-qubit operations and to transfer information between donor nuclear spins through electron exchange and the hyperfine interaction, crucial for two-qubit operations. The fabrication of a regular array of donors may be a daunting task. The additional “layer” of the QC structure (the electrons, as the intermediary) may provide a major decoherence channel. However, despite all the problems, the exceedingly long lifetime of the qubits means that the proposal is one of the more promising QC models in the long run.
## V Spin entanglement in solids
Spin entanglement is an essential ingredient for spin-based quantum computing, quantum communication, quantum cryptography, and other applications. It has been theoretically proposed that two-electron spin entanglement can be measured using an ordinary electron beam splitter or through a loop consisting of a double dot in which electrons undergo cotunneling . Another proposal distinguishes singlet and triplet states by detecting their energy difference . The common theme here is to measure transport properties of electrons and infer spin information from transport. Direct spin measurement is not impossible with current technology (using a SQUID), but it is slow and not quite sensitive enough for the purpose of quantum computing.
Due to the many obstacles mentioned before in the pursuit of creating and detecting controlled spin entanglement in solids, it is useful to separate the two tasks and treat them individually. For example, to test a detection scheme, it would be ideal to have a well-established source of entangled electrons, so that the sensitivity and other properties of the detection scheme can be tested. Here we propose to use Cooper pairs as such a source. In many ordinary superconductors, the Cooper pairs are in a singlet state . Our goal is to transfer a Cooper pair from the superconducting region into a non-superconducting region as two spin-entangled electrons (a Cooper pair injection process analogous to the inverse of the previously discussed Andreev reflection). One conceivable scenario is to use heterostructures with discrete energy levels to satisfy energy conservation and to enhance the cross section of the process. If these entangled electrons can be successfully led out of the superconductor and into a normal region through the above procedure, they can then be separated using means such as Stern-Gerlach-type techniques, so that the opposite-spin electrons are separated and propagate in two separate channels. We thus obtain a source of two streams of entangled electrons. By controlling the size of the point contact between the superconducting and the normal regions, the arrival-time correlation between two entangled electrons can be enhanced so that they can produce signatures of a spin singlet state. Indeed, if the Cooper pairs in the superconductor source are triplets (as suspected for quasi-one-dimensional organic superconductors , and for Sr<sub>2</sub>RuO<sub>4</sub> ), signatures of a spin triplet state would be present. To have such a controlled source of entangled electrons would be important both for testing the entanglement detection schemes and for applications in areas such as quantum communication.
## VI Optical and electronic control of nuclear spin polarization
The recent discovery by Kikkawa and Awschalom of optically induced nuclear spin polarization in GaAs gives an impetus to the search for new ways of controlling the coherent dynamics of nuclear spins. In the experiment a sample of GaAs held at 5 K was placed in a magnetic field of about 5 T. Short laser pulses (100 fs) of circularly polarized light with the frequency tuned to the GaAs band gap (1.5 eV) were then shot at the sample (perpendicularly to the applied field) to create a nonequilibrium population of spin-polarized conduction electrons (as a result of the circular light polarization). The electron spins then rotated about the total magnetic field, which now consisted of the applied field and whatever field was generated by the polarized nuclei. By measuring the rotational frequency, the field induced by the polarized nuclei was measured as a function of time. The experiment found that by pumping the electron population with 76 MHz repetition rate laser pulses, the nuclei became polarized. After about 250 seconds of pumping the polarization field was about 0.1 T. It is not clear what exactly the mechanism behind the nuclear polarization is (a simple picture based on the Overhauser effect disagrees with the experiment), but there is little doubt that the polarization is nuclear (because of the large relaxation times, of order minutes) and that it is induced optically.
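A quick numeric check of the frequencies involved: with the Larmor relation $`f=|g|\mu _BB/h`$ and the conduction-electron g-factor of GaAs ($`|g|\approx 0.44`$, an assumed value not quoted in the text), the applied 5 T field gives a precession of order 30 GHz, and the 0.1 T nuclear field shifts it by a readily measurable fraction:

```python
# Larmor precession frequency f = |g| * mu_B * B / h.
mu_B_over_h = 13.996e9  # Hz per tesla (Bohr magneton over Planck constant)
g = 0.44                # |g| for GaAs conduction electrons (assumed value)

B_applied = 5.0   # T, applied field (from the experiment)
B_nuclear = 0.1   # T, field built up by the polarized nuclei

f0 = g * mu_B_over_h * B_applied
df = g * mu_B_over_h * B_nuclear
print(f"precession ~{f0 / 1e9:.1f} GHz; nuclear field shifts it by "
      f"~{df / 1e9:.2f} GHz ({100 * df / f0:.0f}%)")
```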
An even more fascinating possibility, also studied by Kikkawa and Awschalom , is a dynamical control of the nuclear spins. In the standard nuclear magnetic resonance experiments nuclear spins which rotate about an applied field can be flipped by applying a microwave radiation of the frequency of the spin rotation. This happens because the microwave field has a component of the oscillating magnetic field perpendicular to the applied field. But such a perpendicular oscillating field can be created purely electronically! One just has to create a nonequilibrium population of spin-polarized electrons with the spin orientation perpendicular to the applied field, and repeat this process periodically with the period of the nuclear spin rotation. The hyperfine interaction is then the required oscillating interaction and should be able to resonantly tip nuclear spins. This was indeed observed .
That nuclear spins can be controlled electronically was first suggested by Feher as early as the late 1950s. Feher pointed out that nuclear polarization can be induced by the hyperfine interaction if the effective temperature $`T_R`$ characterizing the electronic velocity distribution differs from the electronic spin temperature $`T_S`$ which determines the occupation of electronic Zeeman states. Feher proposed several mechanisms that would lead to $`T_R\ne T_S`$: hot-electron transport, electron drift in an electric field gradient or in a perpendicular magnetic field, and the injection of electrons whose g-factor differs from that of the electrons inside the sample. All these methods rely on the fact that spin equilibration proceeds slower than momentum equilibration (see Section II). One practical use of this idea would be a dc-driven maser, in which paramagnetic impurities are polarized electronically to an effective negative temperature. We believe that the Feher effect will be revived by new experiments, since it shows how to manipulate nuclear spins purely electronically (without the need of either rf or optical fields), which may be of great interest in efforts to integrate standard electronics with quantum information processing.
no-problem/0002/astro-ph0002137.html | ar5iv | text
# Electron Acceleration and Time Variability of High Energy Emission from Blazars
## 1 INTRODUCTION
High energy emission from blazars is usually thought to be produced by relativistically moving jets or blobs from the nuclei of galaxies (e.g., Blandford & Rees, 1978; Blandford & Königl, 1979; Maraschi, Ghisellini, & Celotti, 1992; Sikora, Begelman, & Rees, 1994; Inoue & Takahara, 1996). The physical properties of such jets have been probed mostly with steady state models of synchrotron radiation and inverse Compton scattering by a nonthermal electron population (the SSC model). However, blazars are also characterized by rapid and strong time variability. Recent observations have revealed that the emission exhibits short time variations in the X- and gamma-ray bands on timescales from weeks down to half an hour (e.g., Mukherjee et al., 1997; Ulrich et al., 1997, for a review); for example, fast time variations of Mrk 421 were observed in X-rays and TeV gamma-rays (Gaidos et al., 1996; Takahashi et al., 1996), and similar time variations of Mrk 501 were found by multiwavelength observations (Kataoka et al., 1999). These observations should provide important clues on physical processes in relativistic jets, in particular, on electron acceleration.
To make theoretical inferences, we need to calculate time-dependent emission spectra from a time-dependent electron population. An example of such theoretical models was recently presented by Mastichiadis & Kirk (1997). They solved the kinetic equations of electrons and photons simultaneously, injecting power-law electrons with an exponential cutoff. They showed various possibilities to explain the time variations observed from Mrk 421, such as changes in the magnetic field or in the maximum Lorentz factor of nonthermal electrons. Although the correlation between X-rays and TeV gamma-rays is important for discussing the SSC model, their simplified treatment of Compton scattering in the Klein-Nishina regime does not necessarily account correctly for the energy change in scatterings \[see Mastichiadis & Kirk (1995) for the details of their calculation method\]. Kirk et al. (1998), on the other hand, extended the above model, assuming that an acceleration region and a cooling region are spatially separated; i.e., electrons accelerated in a shock region are transferred to a cooling region where they emit synchrotron photons (they did not include Compton scattering). Their model was intended to explain the time variability of X-rays by changing the acceleration timescale.
Besides the time variations of flare activities explained by Mastichiadis & Kirk (1997) and Kirk et al. (1998), the early stages of acceleration are of great importance. By examining the properties of the time evolution of photon spectra during acceleration, we may obtain diagnostics of the acceleration mechanism. The recent development of observations from X-rays (ASCA and Beppo-SAX) to TeV gamma-rays (e.g., Whipple and HEGRA) and future experiments might be used to confirm these diagnostics.
In this paper, we use a formulation similar to Mastichiadis & Kirk (1997), but with the full Klein-Nishina cross section in the Compton scattering kernel, so that the emission in the TeV range is calculated correctly. We also include a particle acceleration process by considering spatially separated acceleration and cooling regions as in Kirk et al. (1998), although we do not consider the spatial transfer of electrons. We particularly emphasize the detailed study of the properties of electron and photon spectra in the early stage of acceleration.
We describe our model in §2 and present numerical results in §3. A summary of our results is given in §4.
## 2 MODEL
### 2.1 Acceleration and Cooling Regions
We assume that observed photons are emitted from a blob moving relativistically towards us with Doppler factor $`𝒟=[\mathrm{\Gamma }(1-\beta _\mathrm{\Gamma }\mu )]^{-1}`$, where $`\mathrm{\Gamma }`$ is the Lorentz factor of the blob, $`\beta _\mathrm{\Gamma }`$ is the speed of the blob in units of light speed $`c`$, and $`\mu `$ is the cosine of the angle between the line of sight and the direction of motion of the blob. The blob is a spherical and uniform cloud with radius $`R`$, except that the blob includes an acceleration region which is presumably a shock front. It is assumed that the spatial volume of the acceleration region is small, and that the acceleration region is a slab with thickness $`R_{\mathrm{acc}}`$ defined below. The spectra of electrons and photons in the blob are calculated for the acceleration and cooling regions separately by solving equations described in §2.2.
We assume that the acceleration region (hereafter AR) and the cooling region (hereafter CR) are spatially separated; shocks in the blob are expected to be the site of electron acceleration, and electrons cool mainly outside the shock regions. In the AR, electrons are mainly accelerated and cooling is unimportant except for the highest values of $`\gamma `$, while, in the CR, electrons with a nonthermal spectrum are injected from the AR; the escape rate of electrons from the AR is equal to the injection rate of electrons into the CR because of number conservation. We further assume that the acceleration time, $`t_{\mathrm{acc}}`$, and the escape time, $`t_{e,\mathrm{esc}}`$, in the AR are energy independent, as given in equation (4) below. With these assumptions, the number spectrum of electrons in the AR is a power law with index $`2`$, i.e., $`N(\gamma )\propto \gamma ^{-2}`$, which is confirmed analytically (e.g., Kirk et al., 1998). Thus the maximum energy of electrons is determined by the balance of cooling and acceleration. Since we consider $`t_{\mathrm{acc}}\ll R/c`$ (see §3.1), the size of the AR is much smaller than the size of the blob itself.
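The analytic confirmation cited here amounts to the following steady-state balance between the acceleration and escape terms (losses neglected, $`\gamma >\gamma _0`$):

```latex
0=-\frac{d}{d\gamma }\left(\frac{\gamma }{t_{\mathrm{acc}}}N\right)-\frac{N}{t_{e,\mathrm{esc}}}
\qquad \Longrightarrow \qquad
N(\gamma )\propto \gamma ^{-\left(1+t_{\mathrm{acc}}/t_{e,\mathrm{esc}}\right)},
```

so the index $`2`$ follows directly from the assumption $`t_{\mathrm{acc}}=t_{e,\mathrm{esc}}`$, and would steepen (flatten) if escape were faster (slower) than acceleration.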
We use this formulation because $`N(\gamma )\propto \gamma ^{-2}`$ is expected from the theory of shock acceleration (e.g., Drury, 1983; Blandford & Eichler, 1987). We, however, do not solve the spatial transfer of electrons as was done by Kirk et al. (1998). Instead, we simply calculate the electrons escaping from the AR and put them into the CR. Although this may be an oversimplified model for realistic situations, the actual geometry of the shocks is not well known either. Strictly speaking, our formulation is valid when ARs and CRs are more or less uniformly distributed in a cloud, but it is expected to be a fair approximation to the case where a single shock propagates in a jet, as was studied by Kirk et al. (1998). As for the calculation of photon spectra, photons originating from one region penetrate into the other region, but most of the photons originate from the CR since the size of the AR is small. Thus, the electron cooling in the blob is governed either by its own magnetic field or by synchrotron photons stemming from the CR. We treat this situation appropriately in the numerical calculations.
### 2.2 Kinetic Equations
The equation describing the time-evolution of the electron number spectrum in the AR is given by
$$\frac{\partial N(\gamma )}{\partial t}=-\frac{\partial }{\partial \gamma }\left\{\left[\left(\frac{d\gamma }{dt}\right)_{\mathrm{acc}}-\left(\frac{d\gamma }{dt}\right)_{\mathrm{loss}}\right]N(\gamma )\right\}-\frac{N(\gamma )}{t_{e,\mathrm{esc}}}+Q(\gamma ),$$
(1)
where $`\gamma `$ is the Lorentz factor of electrons and $`N(\gamma )`$ is the number density of electrons per unit $`\gamma `$. We assume that monochromatic electrons with Lorentz factor $`\gamma _0`$ are injected in the AR, i.e., $`Q(\gamma )=Q_0\delta (\gamma -\gamma _0)`$. Electrons are then accelerated and lose energy by synchrotron radiation and Compton scattering; the energy loss rate is denoted by $`(d\gamma /dt)_{\mathrm{loss}}`$. The acceleration term is approximated by
$$\left(\frac{d\gamma }{dt}\right)_{\mathrm{acc}}=\frac{\gamma }{t_{\mathrm{acc}}}.$$
(2)
In the framework of diffusive shock acceleration (e.g., Drury, 1983; Blandford & Eichler, 1987), $`t_{\mathrm{acc}}`$ can be approximated as
$$t_{\mathrm{acc}}=\frac{20\lambda (\gamma )c}{3v_s^2}\simeq 3.79\times 10^{-6}\left(\frac{0.1\mathrm{G}}{B}\right)\xi \gamma \mathrm{sec},$$
(3)
where $`v_s\simeq c`$ is the shock speed, $`B`$ is the magnetic field, and $`\lambda (\gamma )=\gamma m_ec^2\xi /(eB)`$ is the mean free path, assumed to be proportional to the electron Larmor radius, with $`\xi `$ being a parameter, $`m_e`$ the electron mass, and $`e`$ the electron charge. Although this expression is valid only in the test-particle approximation for non-relativistic shocks, we rely on it since the basic dependences are not much changed in more general cases.
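As a numerical sanity check (taking $`v_s=c`$), the coefficient in the above estimate can be evaluated directly in cgs units; it comes out to $`3.79\times 10^{-6}`$ sec per unit $`\xi \gamma `$ at $`B=0.1`$ G, consistent with equation (4) below and with $`t_{\mathrm{acc}}\simeq 1.9\times 10^4`$ sec for $`\xi =5\times 10^2`$ quoted in §3.1.

```python
import math

# t_acc = 20*lambda*c/(3*v_s^2) with v_s = c and lambda = gamma*m_e*c^2*xi/(e*B),
# evaluated in cgs units for B = 0.1 G.
m_e = 9.10938e-28   # electron mass, g
c = 2.99792e10      # speed of light, cm/s
e_ch = 4.80321e-10  # electron charge, esu
B = 0.1             # magnetic field, G

larmor_per_gamma_xi = m_e * c**2 / (e_ch * B)   # lambda/(gamma*xi), cm
coeff = 20 * larmor_per_gamma_xi / (3 * c)      # t_acc/(gamma*xi), seconds
t_acc = coeff * 1e7 * 5e2                       # gamma_f = 1e7, xi = 5e2
```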
For the convenience of numerical calculations, we assume $`t_{\mathrm{acc}}`$ does not depend on $`\gamma `$:
$$t_{\mathrm{acc}}=3.79\times 10\left(\frac{0.1\mathrm{G}}{B}\right)\left(\frac{\gamma _f}{10^7}\right)\xi \mathrm{sec},$$
(4)
where $`\gamma _f`$ is assumed to be a characteristic Lorentz factor of relativistic electrons and is used as a parameter; we set $`\gamma _f=10^7`$ throughout this paper. Although the realistic acceleration time for smaller values of $`\gamma `$ should be correspondingly shorter, we make this choice because we are mainly concerned with electrons of large $`\gamma `$. One worry about this choice is its effect on the spectrum of accelerated electrons. We make sure that the resultant spectrum is that expected in diffusive shock acceleration by choosing $`t_{e,\mathrm{esc}}=t_{\mathrm{acc}}`$ in the AR; this assumption of $`t_{e,\mathrm{esc}}=t_{\mathrm{acc}}`$ is the same as that used by Mastichiadis & Kirk (1995) in their proton acceleration model.
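A minimal numerical sketch of the AR balance (acceleration, escape, and monochromatic injection only; losses and all physical normalizations omitted) shows the $`N(\gamma )\propto \gamma ^{-2}`$ spectrum emerging within a few tens of $`t_{\mathrm{acc}}`$. Working in $`x=\mathrm{ln}\gamma `$ with $`n(x)=\gamma N(\gamma )`$, acceleration is advection at speed $`1/t_{\mathrm{acc}}`$; injection is placed at $`\gamma _0=1`$ for simplicity (the text uses $`\gamma _0=2`$; the index above the injection energy is unchanged).

```python
import numpy as np

t_acc = 1.0                      # time unit; t_esc = t_acc in the AR
dx = 0.05
x = np.arange(0.0, 14.0, dx)     # x = ln(gamma), i.e. gamma up to ~1.2e6
n = np.zeros_like(x)             # n(x) = gamma * N(gamma)
dt = 0.5 * t_acc * dx            # CFL-stable step for upwind advection
for _ in range(2400):            # run for 60 t_acc, well past steady state
    upwind = np.empty_like(n)
    upwind[0] = n[0]             # no inflow from below the injection cell
    upwind[1:] = n[1:] - n[:-1]
    n += dt * (-upwind / (t_acc * dx) - n / t_acc)  # advection + escape
    n[0] += dt * 1.0 / dx                           # monochromatic injection
# steady state: n ∝ exp(-x), i.e. N(gamma) ∝ gamma^-2
slope = np.polyfit(x[20:200], np.log(n[20:200]), 1)[0]
index = slope - 1.0              # d ln N / d ln gamma, since N = n/gamma
```

The measured index approaches $`-2`$ up to an $`O(\mathrm{\Delta }x)`$ discretization correction.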
The electron spectrum in the CR is calculated from equation (1) with $`(d\gamma /dt)_{\mathrm{acc}}`$ dropped. Also, $`Q(\gamma )`$ is replaced by the electrons escaping from the AR, and $`t_{e,\mathrm{esc}}`$ is set to $`2R/c`$. The choice of $`2R/c`$ for $`t_{e,\mathrm{esc}}`$ is based merely on the expectation that electrons take longer to escape from the blob than photons do, and this point needs further work.
The relevant equation for the time evolution of photons is given by
$$\frac{\partial n_{\mathrm{ph}}(ϵ)}{\partial t}=\dot{n}_\mathrm{C}(ϵ)+\dot{n}_{\mathrm{em}}(ϵ)-\dot{n}_{\mathrm{abs}}(ϵ)-\frac{n_{\mathrm{ph}}(ϵ)}{t_{\gamma ,\mathrm{esc}}},$$
(5)
where $`n_{\mathrm{ph}}(ϵ)`$ is the photon number density per unit energy $`ϵ`$. Compton scattering is calculated as
$$\dot{n}_\mathrm{C}(ϵ)=-n_{\mathrm{ph}}(ϵ)\int d\gamma \,N(\gamma )R_\mathrm{C}(ϵ,\gamma )+\int dϵ^{\prime }\int d\gamma \,P(ϵ;ϵ^{\prime },\gamma )R_\mathrm{C}(ϵ^{\prime },\gamma )n_{\mathrm{ph}}(ϵ^{\prime })N(\gamma ),$$
(6)
using the exact Klein-Nishina cross section. The first term of equation (6) denotes the rate at which photons with energy $`ϵ`$ are scattered by electrons with the number spectrum $`N(\gamma )`$; $`R_\mathrm{C}`$ is the angle-averaged scattering rate. The second term of equation (6) denotes the spectrum of scattered photons: $`P(ϵ;ϵ^{\prime },\gamma )`$ is the probability that a photon with energy $`ϵ^{\prime }`$ is scattered by an electron with energy $`\gamma `$ to have energy $`ϵ`$. The probability $`P`$ is normalized such that $`\int P(ϵ;ϵ^{\prime },\gamma )dϵ=1`$. The details of $`R_\mathrm{C}`$ and $`P`$ are given in Jones (1968) and Blandford & Coppi (1990).
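As a check on the scattering normalization, the total Klein-Nishina cross section has a standard closed form; the sketch below (not the angle-averaged kernel of Jones 1968) verifies the Thomson limit at low energy and the high-energy suppression that produces the curvature discussed later.

```python
import math

SIGMA_T = 6.65246e-25  # Thomson cross section, cm^2

def sigma_kn(x):
    """Total Klein-Nishina cross section; x = photon energy / (m_e c^2)
    in the electron rest frame (standard closed form)."""
    t = math.log1p(2 * x)  # log1p keeps the x << 1 limit numerically accurate
    return SIGMA_T * 0.75 * (
        (1 + x) / x**3 * (2 * x * (1 + x) / (1 + 2 * x) - t)
        + t / (2 * x)
        - (1 + 3 * x) / (1 + 2 * x) ** 2
    )
```

For $`x\ll 1`$ this reduces to $`\sigma _T(1-2x+\mathrm{})`$, and the cross section falls to about $`0.43\sigma _T`$ at $`x=1`$, which is why TeV-producing scatterings sample only part of the synchrotron spectrum.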
Photon production and self-absorption by synchrotron radiation are included in $`\dot{n}_{\mathrm{em}}(ϵ)`$ and $`\dot{n}_{\mathrm{abs}}(ϵ)`$, respectively. The synchrotron emissivity and absorption coefficient are calculated based on the approximations given in Robinson & Melrose (1984) for mildly relativistic electrons and Crusius & Schlickeiser (1986) for relativistic electrons. External photon sources are not included. The rate of photon escape is estimated as $`n_{\mathrm{ph}}(ϵ)/t_{\gamma ,\mathrm{esc}}`$. We set $`t_{\gamma ,\mathrm{esc}}=R_{\mathrm{acc}}/c`$ and $`R/c`$ in the AR and CR, respectively, because the scattering depth of the blob is much smaller than unity.
The comoving quantities are transformed into the observer’s frame using the Doppler factor and the redshift $`z`$: $`ϵ_{\mathrm{obs}}=ϵ𝒟/(1+z)`$ and $`dt_{\mathrm{obs}}=dt(1+z)/𝒟`$.
## 3 RESULTS
We first examine the case where the cloud is initially empty and the injection of electrons starts at $`t=0`$. The distribution function of the injected electrons is mono-energetic; here $`\gamma _0=2`$ is assumed. The magnetic field strength is assumed to have the same value in the ARs and CRs, namely 0.1 G except in §3.4. Other parameters are redshift $`z=0.05`$, Hubble constant $`H_0=75`$ km sec<sup>-1</sup> Mpc<sup>-1</sup>, and Doppler factor $`𝒟=10`$. We also assume that the size of the cloud is measured by the timescale of variability, which is assumed to be $`R/(c𝒟)=5\times 10^4`$ sec in the observer’s frame.
### 3.1 Time Evolution in Early Phase
First we simulate the time evolution from $`t=0`$ to $`R/c`$ to study the early stage. We assume $`\xi =5\times 10^2`$ (i.e., $`t_{\mathrm{acc}}\simeq 1.9\times 10^4`$ sec in the blob frame), and injection duration $`t=0`$–$`R/c`$. In the CR, the escape time of electrons is assumed to be $`2R/c`$. The size of the AR is assumed to be $`R_{\mathrm{acc}}=ct_{\mathrm{acc}}/2`$. (Note that, in the sections below, when we change the value of $`t_{\mathrm{acc}}`$, $`R_{\mathrm{acc}}`$ is also changed accordingly.) The injection rate of electrons in the AR is $`0.1`$ electrons cm<sup>-3</sup> sec<sup>-1</sup>. The volume of the AR is $`2.1\times 10^{47}`$ cm<sup>3</sup> and the total injection rate is $`2.1\times 10^{46}`$ electrons sec<sup>-1</sup>, assuming that the AR is approximated by a disk with radius $`R`$ and thickness $`R_{\mathrm{acc}}`$. The total power of the electrons amounts to $`5\times 10^{41}`$ ergs sec<sup>-1</sup> by acceleration, if the power-law spectrum with an index of 2 is realized between $`\gamma _{\mathrm{min}}`$ and $`\gamma _{\mathrm{max}}`$; the minimum and maximum Lorentz factors $`\gamma _{\mathrm{min}}`$ and $`\gamma _{\mathrm{max}}`$ are tentatively taken to be $`2`$ and $`3\times 10^6`$, respectively.
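The quoted geometry and injection power can be reproduced directly (cgs units; for an $`N\propto \gamma ^{-2}`$ spectrum the mean Lorentz factor is $`\mathrm{ln}(\gamma _{\mathrm{max}}/\gamma _{\mathrm{min}})/(1/\gamma _{\mathrm{min}}-1/\gamma _{\mathrm{max}})`$).

```python
import math

c = 2.99792e10               # cm/s
m_e_c2 = 8.18711e-7          # electron rest energy, erg
D = 10.0
R = c * D * 5e4              # blob radius from the variability timescale
t_acc = 3.79e-6 * 1e7 * 5e2  # eq. (4): gamma_f = 1e7, xi = 5e2  -> ~1.9e4 s
R_acc = c * t_acc / 2
V_ar = math.pi * R**2 * R_acc        # AR volume (disk of radius R)
rate = 0.1 * V_ar                    # electrons injected per second
gmin, gmax = 2.0, 3e6
mean_gamma = math.log(gmax / gmin) / (1 / gmin - 1 / gmax)
power = rate * mean_gamma * m_e_c2   # erg/s
```

This reproduces $`R\simeq 1.5\times 10^{16}`$ cm, the AR volume of $`2.1\times 10^{47}`$ cm<sup>3</sup>, the injection rate of $`2.1\times 10^{46}`$ electrons sec<sup>-1</sup>, and the $`5\times 10^{41}`$ ergs sec<sup>-1</sup> power to within about 10%.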
In Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars, the evolution of the electron number spectrum is shown both for the AR and the CR. It is seen that electrons injected with Lorentz factor 2 are gradually accelerated and the value of $`\gamma _{\mathrm{max}}`$ increases with time, where we take the value of $`\gamma _{\mathrm{max}}`$ such that $`N(\gamma )=0`$ for $`\gamma >\gamma _{\mathrm{max}}`$. The value of $`\gamma _{\mathrm{max}}`$ in a steady state is determined by the balance among $`t_{\mathrm{acc}}`$, $`t_{e,\mathrm{esc}}`$, and the cooling time $`t_{\mathrm{cool}}`$ in the AR. Because we assume $`t_{\mathrm{acc}}=t_{e,\mathrm{esc}}`$ in the AR, $`\gamma _{\mathrm{max}}`$ is simply determined by $`t_{\mathrm{acc}}`$ and $`t_{\mathrm{cool}}`$. The value of $`\gamma _{\mathrm{max}}`$ in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars is $`4\times 10^6`$. The spectrum reaches almost a steady state within $`R/c`$, and it is a power law, $`N(\gamma )\propto \gamma ^{-2}`$; note that $`t_{\mathrm{acc}}\simeq 2\times 10^4`$ sec and $`R/c=5\times 10^5`$ sec in the comoving frame of the blob for the present model.
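The quoted $`\gamma _{\mathrm{max}}`$ follows from the balance $`t_{\mathrm{acc}}=t_{\mathrm{cool}}`$ with synchrotron cooling alone (Compton cooling neglected, so this is only an estimate), using $`t_{\mathrm{cool}}=3m_ec/(4\sigma _TU_B\gamma )`$ with $`U_B=B^2/8\pi `$.

```python
import math

m_e, c, sigma_T = 9.10938e-28, 2.99792e10, 6.65246e-25  # cgs
B = 0.1
U_B = B**2 / (8 * math.pi)

def gamma_max(t_acc):
    # synchrotron cooling time 3*m_e*c/(4*sigma_T*U_B*gamma) set equal to t_acc
    return 3 * m_e * c / (4 * sigma_T * U_B * t_acc)

g_this_model = gamma_max(1.9e4)  # xi = 5e2:  ~4e6, as quoted here
g_bohm = gamma_max(37.9)         # xi = 1 (Bohm limit): ~2e9, as quoted in Sec. 3.2
```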
In the CR, the effect of electron escape is negligible in the time interval shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars, because the simulation is terminated at $`t=R/c`$ while $`t_{e,\mathrm{esc}}=2R/c`$. Because of radiative cooling, a break $`\gamma _{\mathrm{br}}`$ appears at around $`3\times 10^5`$. This break moves to lower energy when the evolution is continued until a steady state is attained; $`\gamma _{\mathrm{br}}\simeq 10^4`$ at $`t=10R/c`$. There is also a slight deceleration of electrons by cooling, which is shown by the curves below $`\gamma =2`$.
The spectral energy distribution (SED) of the emission from the CR is shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars; the flux and the photon energy are plotted in the observer’s frame. The solid curves show the time evolution at equally spaced time intervals for $`t=0`$–$`R/c`$, evolving from lower to upper curves. In this stage, synchrotron radiation dominates, because Compton scattering needs a timescale $`R/c`$ to become effective. SEDs at $`t=2R/c`$ (dotted curve) and $`10R/c`$ (dashed curve) are also shown in the figure; here electrons are continuously injected until $`t=10R/c`$. As shown by those curves, when the evolution is continued after $`R/c`$, the Compton component continues to increase before reaching a steady state. The peak energy of the synchrotron emission initially increases but begins to decrease after about $`0.8R/c`$, because electrons with $`\gamma <\gamma _{\mathrm{br}}`$ continue to accumulate and the value of $`\gamma _{\mathrm{br}}`$ decreases, while those with $`\gamma >\gamma _{\mathrm{br}}`$ are saturated because of radiative cooling. After $`t=2R/c`$ the effects of electron escape begin to further modify the synchrotron spectrum; the intensity of the high energy part decreases while that at low energy still continues to increase slightly.
For $`t=0`$–$`R/c`$, light curves in the X-ray range are shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars. Hard X-rays become dominant after $`t\simeq 15t_{\mathrm{acc}}`$, where $`t_{\mathrm{acc}}/𝒟\simeq 2\times 10^3`$ sec in the observer’s frame.
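The rise time of roughly $`15t_{\mathrm{acc}}`$ can be understood from exponential acceleration, $`\gamma (t)=\gamma _0e^{t/t_{\mathrm{acc}}}`$: the number of e-foldings needed to reach the Lorentz factor whose synchrotron characteristic frequency $`\nu _c=3\gamma ^2eB/(4\pi m_ec)`$ lands at about 10 keV in the observer’s frame is $`\mathrm{ln}(\gamma /\gamma _0)\simeq 13`$, with radiative cooling near $`\gamma _{\mathrm{max}}`$ plausibly stretching the last few e-folds toward the quoted value.

```python
import math

m_e, c, e_ch, h = 9.10938e-28, 2.99792e10, 4.80321e-10, 6.62607e-27  # cgs
B, D, z, gamma_0 = 0.1, 10.0, 0.05, 2.0
E_obs = 10e3 * 1.60218e-12                # 10 keV in erg
nu = E_obs * (1 + z) / (D * h)            # comoving synchrotron frequency
# invert nu_c = 3*gamma^2*e*B/(4*pi*m_e*c) for gamma
gamma_x = math.sqrt(nu * 4 * math.pi * m_e * c / (3 * e_ch * B))
n_efolds = math.log(gamma_x / gamma_0)    # rise time in units of t_acc
```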
The time evolution of the energy densities of electrons and photons in the CR is shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars; the energy densities in the AR are comparable with those in the CR for the parameters we used. In the CR, $`t_{e,\mathrm{esc}}=2R/c`$ and $`t_{\gamma ,\mathrm{esc}}=R/c`$ are assumed, so that the energy density of electrons is larger than that of photons. As mentioned above, first the synchrotron photon energy density rapidly increases, and later the Compton photon energy density (indicated by SSC in the figure) increases. It should be noted that the ratio of the energy density of the Compton component to that of the synchrotron component is about 0.7 in the final stage, while the ratio of the energy densities of synchrotron photons to magnetic fields is about 9. This is because the energy range of the target photons for Compton scattering is only a part of the synchrotron spectrum due to the Klein-Nishina limit. This result implies that we should be cautious about estimating the magnetic field strength from observations; if we simply estimate the magnetic field by multiplying the energy density of synchrotron photons by the ratio of the synchrotron luminosity to the Compton luminosity, the result is a large overestimate of the magnetic field.
The energy injected through the electron acceleration is finally carried away by electrons and photons from the blob. The ratio of the amounts of the energies carried by electrons and photons is about $`1.8:1`$ in a steady state (i.e., $`t10R/c`$). That is, electrons carry more jet power than radiation in this specific model.
The trajectories in the energy-flux vs. photon-index plane are shown for $`t=0`$–$`10R/c`$ in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars for various energy bands. Because the value of $`\gamma _{\mathrm{max}}`$ decreases due to radiative cooling, the flux of X-rays decreases when $`t`$ exceeds a few $`R/c`$. The flux of gamma-rays, on the other hand, continues to increase because of Compton scattering (see Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars).
### 3.2 Dependence on Acceleration Timescale
By changing the value of $`\xi `$, we compare the electron spectrum for different values of the acceleration time, keeping $`t_{\mathrm{acc}}=t_{e,\mathrm{esc}}`$ in the AR. In Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars, steady-state distributions of electrons for different values of $`\xi `$ are compared. When the acceleration timescale is longer, the value of $`\gamma _{\mathrm{max}}`$ is reduced because of radiative cooling in the AR. Consequently, the emission spectrum becomes softer (Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars). It should be noted that because a smaller value of $`\xi `$ leads to a smaller value of $`R_{\mathrm{acc}}`$ in our model, the luminosity from the blob becomes smaller when $`\xi `$ is smaller. The extreme limit $`\xi =1`$, corresponding to the Böhm limit, results in the most efficient Compton luminosity and the highest gamma-ray energy. In this limit, $`\gamma _{\mathrm{max}}`$ is about $`2\times 10^9`$ and the inverse Compton SED shows a steep cutoff at $`10^4`$ TeV for $`𝒟=10`$ if electron-positron pair production is neglected. Note that $`t_{\mathrm{acc}}`$ in reality depends on $`\gamma `$, while we assume that $`t_{\mathrm{acc}}`$ does not depend on $`\gamma `$; the above values were calculated assuming $`\gamma _f=10^7`$ in equation (4).
The shape of the SED has a significant curvature in the TeV region in our calculations. This curvature is in contrast to the observations of TeV gamma-rays from Mrk 421, which are fitted by a power law (Krennrich et al., 1999). Mrk 501, on the other hand, shows a curvature in the TeV emission (Catanese et al., 1997), and there are models which explain the curvature by intergalactic absorption (e.g., Konopelko et al., 1999; Krennrich et al., 1999). We, however, do not address these issues in this paper, since we are mainly interested in the temporal behavior of electrons and photons due to electron acceleration in the source.
### 3.3 Dependence on the Injection Rate
The spectral energy distributions of electrons and photons depend on the value of the injection rate $`Q(\gamma )`$ in the AR as well. If the value of $`Q(\gamma )`$ is larger, with fixed values of $`\gamma _0`$, $`t_{e,\mathrm{esc}}`$, and $`t_{\mathrm{acc}}`$, the accumulation of electrons in the CR increases, resulting in the dominance of the Compton component. An example of the SED is shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars, where the electron injection rate in the AR is smaller by a factor 10 than in the model shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars, i.e., electrons are injected at the rate of $`0.01`$ electrons cm<sup>-3</sup> sec<sup>-1</sup>. The peak of the synchrotron component decreases by a factor 10 and that of the Compton component decreases by a factor 100.
### 3.4 Dependence on Magnetic Field
When the size of the cloud and the number density of electrons are fixed, the value of $`\gamma _{\mathrm{max}}`$ is larger for smaller values of $`B`$, because the synchrotron cooling rate is proportional to $`B^2`$. However, this is not the case in our model, because not only the cooling rate but also $`t_{\mathrm{acc}}`$ depends on $`B`$. When $`B`$ is smaller, $`t_{\mathrm{acc}}`$ is larger, which results in a larger AR. Because we fix the particle injection rate per unit volume in the AR, the total number of electrons injected into the CR per unit time is then larger, by electron number conservation. As a result, Compton cooling in the CR becomes stronger and the value of $`\gamma _{\mathrm{max}}`$ becomes smaller. The actual increase or decrease of $`\gamma _{\mathrm{max}}`$ thus depends on the combination of synchrotron cooling and Compton cooling. Such dependence on $`B`$ in the CR is shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars; SEDs at $`t=10R/c`$ are compared for $`B=0.05`$, $`0.1`$, and $`0.5`$ G with the same values of the other parameters as in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars. In the CR, the values of $`\gamma _{\mathrm{max}}`$ are $`8\times 10^6`$, $`4\times 10^6`$, and $`8\times 10^5`$ for $`B=0.05`$, $`0.1`$, and $`0.5`$ G, respectively.
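Since $`t_{\mathrm{acc}}`$ in equation (4) scales as $`1/B`$ while the synchrotron cooling time scales as $`1/(B^2\gamma )`$, a synchrotron-only balance already gives $`\gamma _{\mathrm{max}}\propto 1/B`$ and reproduces the three quoted values to within a few percent, suggesting that Compton cooling does not change them much for these parameters.

```python
import math

m_e, c, sigma_T = 9.10938e-28, 2.99792e10, 6.65246e-25  # cgs

def gamma_max(B, xi=5e2, gamma_f=1e7):
    t_acc = 3.79e-6 * (0.1 / B) * gamma_f * xi   # eq. (4), t_acc ∝ 1/B
    U_B = B**2 / (8 * math.pi)
    return 3 * m_e * c / (4 * sigma_T * U_B * t_acc)

estimates = {B: gamma_max(B) for B in (0.05, 0.1, 0.5)}
```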
### 3.5 Termination of Acceleration
It is conceivable that acceleration is terminated by the end of electron injection into the AR due to a change of the shock structure, etc., so that the plasma ceases to emit hard photons. To exemplify such a situation, we continue the injection and acceleration up to $`t=4R/c`$ with the parameters used in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars and terminate the injection and the acceleration abruptly at $`t=4R/c`$, while the simulation is continued until $`t=7R/c`$. A break in the power-law spectrum of electrons in the AR appears after acceleration is terminated, and the break moves to lower energy with time. The response of the emission spectrum to the termination of acceleration is almost simultaneous in the different energy bands, as shown by the light curves in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars. It is observed that the decay in the 0.5 – 2 keV band lags that in the 2 – 40 keV band, which is characteristic of models that assume the injection of power-law electrons and a sudden termination of injection. The decay in the keV range and in the 1 – 10 TeV band is exponential, because the supply of the electrons producing those photons is turned off. On the other hand, electrons producing GeV photons are still supplied for a while by the cooling of the highest energy electrons which produced 1 – 10 TeV photons.
### 3.6 Flare
Up to now, we have assumed that at the initial stage the cloud is empty and there are no high energy electrons or photons. This is certainly an oversimplification. Many flare events have been observed in the X- and gamma-ray ranges by ASCA, Whipple, etc., overlaid on a steady emission component. As an example of applications of our code, a flare is simulated: we simply change the value of $`t_{\mathrm{acc}}`$ for a period of time. More specifically, at $`t=0`$ the distributions of electrons and photons are in the steady state obtained for the parameters used in §3.1; see the dashed curve in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars for the steady photon energy distribution. The steady state is maintained for a further $`R/c`$. We then replace $`t_{\mathrm{acc}}`$ by $`t_{\mathrm{acc}}/1.2`$ for $`t=R/c`$–$`2R/c`$ (about 14 hours in the observer’s frame); after $`t=2R/c`$, the original value of $`t_{\mathrm{acc}}`$ is used. The electron escape time in the AR is also changed, keeping $`t_{e,\mathrm{esc}}=t_{\mathrm{acc}}`$.
The trajectories in the energy flux and photon index are shown in Figure Electron Acceleration and Time Variability of High Energy Emission from Blazars for $`t=0`$$`10R/c`$. This behavior is qualitatively similar to observed one for Mrk 421 by ASCA (Takahashi et al., 1996). Though the amplitudes of the change in the photon index of 2 – 10 keV and its energy flux are different from those of the observation, these values are dependent on parameters such as $`t_{\mathrm{acc}}`$ and the duration of the flare, etc.
## 4 SUMMARY
Simulations of the time evolution of electron and photon energy distributions were presented as a model of the time variations observed in X- and gamma-rays from blazars. By assuming that the acceleration and cooling regions in a blob are spatially separated, we calculated the energy spectra of electrons in each region. Electrons in the acceleration region are accelerated with a characteristic timescale $`t_{\mathrm{acc}}`$ and escape on a timescale $`t_{e,\mathrm{esc}}`$; here we assumed $`t_{\mathrm{acc}}=t_{e,\mathrm{esc}}`$, so that the electron spectrum in a steady state obeys a power law, $`N(\gamma )\propto \gamma ^{-2}`$, as realized in the standard model of shock acceleration (e.g., Drury, 1983; Blandford & Eichler, 1987). Electrons escaping from the acceleration region are injected into the cooling region, where they lose energy by radiation and finally escape from the blob on a timescale assumed to be $`2R/c`$. With these assumptions, we performed simulations of the time evolution of electrons and photons for various values of the parameters. Although we did not include a specific acceleration mechanism, we took into account the salient features of diffusive shock acceleration, so that we could study the properties of the time variations accompanying shock acceleration.
We first presented the time evolution of the spectral energy distribution of radiation associated with the evolution of the electron number spectrum. In the early stage of the evolution, i.e., $`t=0`$–$`R/c`$, the synchrotron component dominates the spectrum. The energy flux of soft X-rays starts to rise earlier than that of hard X-rays. Later ($`t>R/c`$), the Compton luminosity gradually increases. At the same time, the peak energy of the synchrotron component decreases because of radiative cooling. It was found that in a steady state escaping electrons carry more energy than radiation; this result, of course, depends on the values of the parameters used. We also showed the dependence of the time evolution on the acceleration timescale, the electron injection rate, and the strength of the magnetic field. The value of $`\gamma _{\mathrm{max}}`$ and the ratio of the synchrotron luminosity to the Compton luminosity depend on these parameters.
We next simulated a flare by simply changing the value of $`t_{\mathrm{acc}}`$ for a certain time span. With a shorter acceleration timescale, more energetic electrons are produced and consequently more hard photons are produced. The relation between the energy flux and the photon index during a flare was obtained, which is similar to the one observed from Mrk 421 (Takahashi et al., 1996).
Our formulation provides a method to treat high energy flares including particle acceleration processes, going beyond the usual analyses in which nonthermal electron spectra are arbitrarily assumed and only cooling processes are included. Although we have not applied our model to any specific flares, it is straightforward to do so using our code. The examples presented here seem to cover a wide range of observed flares. These applications are deferred to future work. On the theoretical side, as proposed by Kirk et al. (1998), electrons accelerated at a shock are transferred outside the shock and cool radiatively. To include such spatial transfer of electrons, we will, in the future, need to solve for the structure around the acceleration regions.
Recently Chiaberge & Ghisellini (1999) showed observational consequences associated with time variations on timescales shorter than $`R/c`$. When such short timescale variations occur, the observed emission is a superposition from various parts of a cloud. Then the time profile of each time variation is not necessarily observed clearly. The model presented in this paper contains an acceleration timescale shorter than $`R/c`$. Thus our model may not directly reflect observed spectra. However, to understand the relation between electron acceleration and time variation of emission, such a study should be useful.
M.K. and F.T. have been partially supported by Scientific Research Grants (M.K.: Nos. 09223219 and 10117215; F.T.: Nos. 09640323, 10117210, and 11640236) from the Ministry of Education, Science, Sports and Culture of Japan.
# Wealth condensation in a simple model of economy
(<sup>1</sup> Service de Physique de l’État Condensé, Centre d’études de Saclay,
Orme des Merisiers, 91191 Gif-sur-Yvette Cedex, France
<sup>2</sup> Science & Finance, 109-111 rue Victor Hugo, 92532 Levallois cedex, France;
http://www.science-finance.fr
<sup>3</sup> Laboratoire de Physique Théorique de l’Ecole Normale Supérieure <sup>1</sup><sup>1</sup>1UMR 8548: Unité Mixte du Centre National de la Recherche Scientifique, et de l’École Normale Supérieure. ,
24 rue Lhomond, 75231 Paris Cedex 05, France )
## Abstract
We introduce a simple model of economy, where the time evolution is described by an equation capturing both exchange between individuals and random speculative trading, in such a way that the fundamental symmetry of the economy under an arbitrary change of monetary units is ensured. We investigate a mean-field limit of this equation and show that the distribution of wealth is of the Pareto (power-law) type. The Pareto behaviour of the tails of this distribution appears to be robust for finite range models, as shown using both a mapping to the random ‘directed polymer’ problem and numerical simulations. In this context, a transition between an economy dominated by a few individuals and one where the wealth is more evenly spread out is found. An interesting outcome is that wealth tends to be very broadly distributed when exchanges are limited, either in amplitude or topologically. Favoring exchanges (and, less surprisingly, increasing taxes) seems to be an efficient way to reduce inequalities.
LPTENS preprint 00/06
Electronic addresses : bouchaud@spec.saclay.cea.fr mezard@physique.ens.fr
It is a well-known fact that individual wealth is very broadly distributed among the population. Even in developed countries, it is common that $`90\%`$ of the total wealth is owned by only $`5\%`$ of the population. The distribution of wealth is often described by ‘Pareto’ tails, which decay as a power-law for large wealths:
$$𝒫_>(W)\simeq \left(\frac{W_0}{W}\right)^\mu ,$$
(1)
where $`𝒫_>(W)`$ is the probability to find an agent with wealth greater than $`W`$, and $`\mu `$ is a certain exponent, of order $`1`$ for both individual wealth and company sizes (see however ).
Here, we want to discuss the appearance of such Pareto tails on the basis of a very general model for the growth and redistribution of wealth, that we discuss in some simple limits. We relate this model to the so-called ‘directed polymer’ problem in the physics literature , for which a large number of results are known, that we translate into the present economical framework. We discuss the influence of simple parameters, such as the connectivity of the exchange network, the role of income or capital taxes and of state redistribution of wealth, on the value of the exponent $`\mu `$. One of the most interesting outputs of such a model is the generic existence of a phase transition, separating a phase where the total wealth of a very large population is concentrated in the hands of a finite number of individuals (corresponding, as will be discussed below, to the case $`\mu <1`$) from a phase where it is shared by a finite fraction of the population.
The basic idea of our model is to write a stochastic dynamical equation for the wealth $`W_i(t)`$ of the $`i^{th}`$ agent at time $`t`$, that takes into account the exchange of wealth between individuals through trading, and is consistent with the basic symmetry of the problem under a change of monetary units. Since the unit of money is arbitrary, one indeed expects that the equation governing the evolution of wealth should be invariant when all $`W_i`$’s are multiplied by a common (arbitrary) factor. The evolution equation that we consider is therefore the following:
$$\frac{dW_i}{dt}=\eta _i(t)W_i+\sum _{j(\ne i)}J_{ij}W_j-\sum _{j(\ne i)}J_{ji}W_i,$$
(2)
where $`\eta _i(t)`$ is a gaussian random variable of mean $`m`$ and variance $`2\sigma ^2`$, which describes the spontaneous growth or decrease of wealth due to investment in stock markets, housing, etc., while the terms involving the (assymmetric) matrix $`J_{ij}`$ describe the amount of wealth that agent $`j`$ spends buying the production of agent $`i`$ (and vice-versa). It is indeed reasonable to think that the amount of money earned or spent by each economical agent is proportional to its wealth. This makes equation (2) invariant under the scale transformation $`W_i\lambda W_i`$. Technically the above stochastic differential equation is interpreted in the Stratonovich sense .
The simplest model one can think of is the case where all agents exchange with all others at the same rate, i.e. $`J_{ij}\equiv J/N`$ for all $`i\ne j`$. Here, $`N`$ is the total number of agents, and the scaling $`J/N`$ is needed to make the limit $`N\to \mathrm{\infty }`$ well defined. In this case, the equation for $`W_i(t)`$ becomes:
$$\frac{dW_i}{dt}=\eta _i(t)W_i+J(\overline{W}-W_i),$$
(3)
where $`\overline{W}=N^{-1}\sum _iW_i`$ is the average overall wealth. This is a ‘mean-field’ model since all agents feel the very same influence of their environment. By formally integrating this linear equation and summing over $`i`$, one finds that the average wealth becomes deterministic in the limit $`N\to \mathrm{\infty }`$:
$$\overline{W}(t)=\overline{W}(0)\mathrm{exp}((m+\sigma ^2)t).$$
(4)
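As a quick numerical sanity check (our illustration, not part of the paper), one can iterate the discrete-time version of this dynamics, Eq. (8) below, and verify that the average wealth indeed grows at the rate $`m+\sigma ^2`$, independently of $`J`$; all parameter values here are illustrative.

```python
import math
import random

def mean_wealth_growth_rate(N=10_000, steps=100, J=0.05, m=0.01,
                            sigma2=0.01, tau=1.0, seed=1):
    """Iterate a discrete-time version of the mean-field dynamics
    (Eq. (8)) and measure the realized growth rate of the average
    wealth W-bar, to be compared with m + sigma^2 from Eq. (4)."""
    rng = random.Random(seed)
    W = [1.0] * N
    s = math.sqrt(2.0 * sigma2 * tau)   # std of the random kick V
    for _ in range(steps):
        Wbar = sum(W) / N
        W = [(J * tau * Wbar + (1.0 - J * tau) * Wi)
             * math.exp(rng.gauss(m * tau, s)) for Wi in W]
    return math.log(sum(W) / N) / (steps * tau)

rate = mean_wealth_growth_rate()
print(rate)   # close to m + sigma2 = 0.02
```

The measured rate fluctuates around $`m+\sigma ^2`$ with a statistical error of order $`10^{-4}`$ for these population sizes.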
It is useful to rewrite eq. (3) in terms of the normalised wealths $`w_i\equiv W_i/\overline{W}`$. This leads to:
$$\frac{dw_i}{dt}=(\eta _i(t)-m-\sigma ^2)w_i+J(1-w_i),$$
(5)
to which one can associate the following Fokker-Planck equation for the evolution of the density of wealth $`P(w,t)`$:
$$\frac{\partial P}{\partial t}=\frac{\partial \left[J(w-1)+\sigma ^2w\right]P}{\partial w}+\sigma ^2\frac{\partial }{\partial w}\left[w\frac{\partial (wP)}{\partial w}\right].$$
(6)
The equilibrium, long time solution of this equation is easily shown to be:
$$P_{eq}(w)=𝒵\frac{\mathrm{exp}\left(-\frac{\mu -1}{w}\right)}{w^{1+\mu }},\qquad \mu \equiv 1+\frac{J}{\sigma ^2},$$
(7)
where $`𝒵=(\mu -1)^\mu /\mathrm{\Gamma }[\mu ]`$ is the normalisation factor. One can check that $`\langle w\rangle =1`$, as it should.
Therefore, one finds in this model that the distribution of wealth exhibits a Pareto power-law tail for large $`w`$’s. In agreement with intuition, the exponent $`\mu `$ grows (corresponding to a narrower distribution), when exchange between agents is more active (i.e. when $`J`$ increases), and also when the success in individual investment strategies is more narrowly distributed (i.e. when $`\sigma ^2`$ decreases).
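As an illustrative check of Eq. (7) (ours, not the paper's), the substitution $`u=1/w`$ turns the normalization and mean integrals into Gamma functions, which can be evaluated numerically; the values $`J=\sigma ^2=0.01`$ below are purely illustrative.

```python
import math

def equilibrium_moments(J, sigma2, u_max=80.0, n=200_000):
    """Check Eq. (7): with mu = 1 + J/sigma2, the distribution
    P_eq(w) = Z exp(-(mu-1)/w) / w^(1+mu) has norm 1 and mean 1.
    After u = 1/w the two integrals become Gamma-function integrals,
    evaluated here with a simple midpoint rule."""
    mu = 1.0 + J / sigma2
    a = mu - 1.0
    Z = a**mu / math.gamma(mu)
    du = u_max / n
    norm = mean = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        base = math.exp(-a * u) * u**(mu - 2.0) * du
        mean += base          # mean: Z * int e^{-a u} u^{mu-2} du
        norm += base * u      # norm: Z * int e^{-a u} u^{mu-1} du
    return Z * norm, Z * mean

norm, mean = equilibrium_moments(J=0.01, sigma2=0.01)   # mu = 2
print(norm, mean)                                       # both ~ 1
```

Both numbers come out equal to 1 to within the quadrature error.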
One can actually also define the above model in discrete time, by writing:
$$W_i(t+\tau )=\left[J\tau \overline{W}+(1-J\tau )W_i\right]e^{V(i,t)}$$
(8)
where $`V`$ is an arbitrary random variable of mean $`m\tau `$ and variance $`2\sigma ^2\tau `$, and $`J\tau <1`$. In this setting, this amounts to study the so-called Kesten variable for which the asymptotic distribution again has a power-law tail, with an exponent $`\mu `$ found to be the solution of:
$$(1-J\tau )^\mu \langle e^{\mu V}\rangle =\langle e^V\rangle ^\mu .$$
(9)
Therefore, this model leads to power-law tails for a very large class of distributions of $`V`$, such that the solution of the above equation is non trivial (that is, if the distribution of $`V`$ decays at least as fast as an exponential). It is easy to check that $`\mu `$ is always greater than one and tends to $`\mu =1+J/\sigma ^2`$ in the limit $`\tau \to 0`$. Let us notice that a somewhat similar discrete model was studied in the context of a generalized Lotka-Volterra equation. However, that model has an additional term (the origin of which is unclear in an economic context) which breaks the symmetry under wealth rescaling, and as a consequence the Pareto tail is truncated for large wealths.
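For gaussian $`V`$ the averages in Eq. (9) are explicit, $`\langle e^{sV}\rangle =\mathrm{exp}(sm\tau +s^2\sigma ^2\tau )`$, so the nontrivial root has a closed form; the following sketch (our illustration) shows that $`\mu >1`$ for any $`\tau `$ and recovers $`\mu =1+J/\sigma ^2`$ as $`\tau \to 0`$.

```python
import math

def kesten_exponent(J, sigma2, tau):
    """Nontrivial root of Eq. (9) for gaussian V with mean m*tau and
    variance 2*sigma2*tau; using <e^{sV}> = exp(s*m*tau + s^2*sigma2*tau),
    the condition reduces to mu*ln(1 - J*tau) + mu^2*sigma2*tau
    = mu*sigma2*tau (the drift m drops out)."""
    return 1.0 - math.log(1.0 - J * tau) / (sigma2 * tau)

for tau in (1.0, 0.1, 1e-6):
    print(tau, kesten_exponent(J=0.01, sigma2=0.01, tau=tau))
# mu > 1 always, and mu -> 1 + J/sigma2 = 2 as tau -> 0
```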
In this model, the exponent $`\mu `$ is always found to be larger than one. In such a regime, if one plots the partial wealth $`S_n=\sum _{i=1}^nw_i`$ as a function of $`n`$, one finds an approximate straight line of slope $`1/N`$, with rather small fluctuations (see Fig. 1). This means that the wealth is not too unevenly distributed within the population. On the other hand, the situation when $`\mu <1`$, which we shall encounter below in some more realistic models, corresponds to a radically different situation (see Fig. 2). In this case, the partial wealth $`S_n`$ has, for large $`N`$, a devil's staircase structure, with a few individuals getting hold of a finite fraction of the total wealth. A quantitative way to measure this ‘wealth condensation’ is to consider the so-called inverse participation ratio $`Y_2`$ defined as:
$$Y_2=\sum _{i=1}^{N}w_i^2.$$
(10)
If all the $`w_i`$’s are of order $`1/N`$ then $`Y_2\sim 1/N`$ and tends to zero for large $`N`$. On the other hand, if at least one $`w_i`$ remains finite when $`N\to \mathrm{\infty }`$, then $`Y_2`$ will also be finite. The average value of $`Y_2`$ can easily be computed and is given by: $`\langle Y_2\rangle =1-\mu `$ for $`\mu <1`$ and zero for all $`\mu >1`$. $`Y_2`$ is therefore a convenient order parameter which quantifies the degree of wealth condensation.
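The contrast between the two phases can be illustrated with a simple sketch (ours; it uses i.i.d. Pareto-distributed wealths rather than the full exchange dynamics): for $`\mu <1`$ the participation ratio stays of order one, while for a light-tailed case it vanishes as $`1/N`$.

```python
import random

def participation_ratio(mu, N, seed=0):
    """Draw N i.i.d. Pareto(mu) wealths by inverse-CDF sampling
    (P(W > x) = x^{-mu} for x >= 1), normalize them, and return
    Y2 = sum_i w_i^2 as in Eq. (10)."""
    rng = random.Random(seed)
    W = [(1.0 - rng.random()) ** (-1.0 / mu) for _ in range(N)]
    total = sum(W)
    return sum((Wi / total) ** 2 for Wi in W)

print(participation_ratio(mu=0.5, N=10_000))  # O(1): condensation
print(participation_ratio(mu=3.0, N=10_000))  # ~1/N: evenly spread
```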
It is interesting to discuss several extensions of the above model. First, one can easily include, within this framework, the effect of taxes. Income tax means that a certain fraction $`\varphi _I`$ of the income $`dW_i/dt`$ is taken away from agent $`i`$. Therefore, a term $`-\varphi _IdW_i/dt`$ appears in the right-hand side of Eq. (2). Capital tax means that a fraction $`\varphi _C`$ of the wealth is subtracted per unit time from the wealth balance, Eq. (2). If a fraction $`f_I`$ of the income tax and $`f_C`$ of the capital tax are evenly redistributed to all, then this translates into a term $`+f_I\varphi _Id\overline{W}/dt+f_C\varphi _C\overline{W}`$ in the right-hand side of the wealth balance, which now reads:
$$\frac{dW_i}{dt}=\eta _i(t)W_i+J(\overline{W}-W_i)-\varphi _I\frac{dW_i}{dt}-\varphi _CW_i+f_I\varphi _I\frac{d\overline{W}}{dt}+f_C\varphi _C\overline{W}$$
(11)
All these terms can be treated exactly within the above mean-field model allowing for a detailed discussion of their respective roles. The rate of exponential growth of the average wealth $`\overline{W}(t)`$ becomes equal to:
$$\gamma \equiv \frac{m+\sigma ^2/(1+\varphi _I)-\varphi _C(1-f_C)}{1+\varphi _I(1-f_I)}.$$
(12)
The Pareto tail exponent $`\mu `$ is now given by:
$$\mu -1=\frac{J(1+\varphi _I)}{\sigma ^2}+\frac{1+\varphi _I}{\sigma ^2(1+\varphi _I(1-f_I))}\left[\varphi _If_I\left(m+\frac{\sigma ^2}{1+\varphi _I}\right)+\varphi _C(f_C+\varphi _I(f_C-f_I))\right].$$
(13)
This equation is quite interesting. It shows that income taxes tend to reduce the inequalities of wealth (i.e., lead to an increase of $`\mu `$), even more so if part of this tax is redistributed. On the other hand, quite surprisingly, capital tax, if used simultaneously with income tax and not redistributed, leads to a decrease of $`\mu `$, i.e. to a wider distribution of wealth. Only if a fraction $`f_C>f_I\varphi _I/(1+\varphi _I)`$ is redistributed will the capital tax be a truly social tax. Note that in the above equation we have implicitly assumed that the growth rate $`\gamma `$ is positive. In this case, one can check that $`\mu `$ is always greater than $`1+(J+\varphi _Cf_C)(1+\varphi _I)/\sigma ^2`$, which is larger than one.
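A small numerical reading of Eq. (13) confirms the statements above (the tax rates and redistribution fractions below are purely illustrative): income tax raises $`\mu `$, an un-redistributed capital tax lowers it, and a fully redistributed one raises it.

```python
def mu_with_taxes(J, sigma2, m, phi_I, phi_C, f_I, f_C):
    """Pareto exponent of Eq. (13): mean-field model with income tax
    phi_I, capital tax phi_C and redistributed fractions f_I, f_C."""
    bracket = (phi_I * f_I * (m + sigma2 / (1.0 + phi_I))
               + phi_C * (f_C + phi_I * (f_C - f_I)))
    return (1.0 + J * (1.0 + phi_I) / sigma2
            + (1.0 + phi_I) / (sigma2 * (1.0 + phi_I * (1.0 - f_I))) * bracket)

args = dict(J=0.01, sigma2=0.01, m=0.02)
mu_no_tax   = mu_with_taxes(**args, phi_I=0.0, phi_C=0.0,  f_I=0.0, f_C=0.0)
mu_income   = mu_with_taxes(**args, phi_I=0.3, phi_C=0.0,  f_I=0.5, f_C=0.0)
mu_cap_kept = mu_with_taxes(**args, phi_I=0.3, phi_C=0.05, f_I=0.5, f_C=0.0)
mu_cap_red  = mu_with_taxes(**args, phi_I=0.3, phi_C=0.05, f_I=0.5, f_C=1.0)
print(mu_no_tax, mu_income, mu_cap_kept, mu_cap_red)
```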
Another point worth discussing is the relaxation time associated with the Fokker-Planck equation (6). By changing variables as $`w=\xi ^2`$ and $`P(w)=\xi ^3Q(\xi )`$, one can map the above Fokker-Planck equation onto the one studied in , which one can solve exactly. For large time differences $`T`$, one finds that the correlation function of the $`w`$’s behaves as:
$$\langle w(t+T)w(t)\rangle -\langle w\rangle ^2\propto \mathrm{exp}\left(-(\mu -1)\sigma ^2T\right)\qquad \mu >2$$
(14)
and
$$\langle w(t+T)w(t)\rangle -\langle w\rangle ^2\propto \frac{1}{(\sigma ^2T)^{3/2}}\mathrm{exp}\left(-\mu ^2\sigma ^2T/4\right)\qquad \mu <2$$
(15)
This shows that the relaxation time is, for $`\mu <2`$, given by $`4/\mu ^2\sigma ^2`$. Therefore, rich people become poor (and vice versa) on a finite time scale in this model. A reasonable order of magnitude for $`\sigma `$ is $`10\%`$ per $`\sqrt{\text{year}}`$. In order to get $`\mu -1\simeq 1`$, one therefore has to choose $`J\simeq 0.01`$ per year, i.e. $`1\%`$ of the total wealth of an individual is used in exchanges. \[This value of $`J`$ looks rather small, but in fact we shall see below that a more realistic (non mean-field) model allows one to increase $`J`$ while keeping $`\mu `$ fixed.\] In this case, the relaxation time in this model is of the order of $`100`$ years.
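The order-of-magnitude estimate can be reproduced directly (our arithmetic sketch of the decay rates in Eqs. (14)-(15); note that at $`\mu =2`$ the two expressions coincide):

```python
def relaxation_time(mu, sigma2):
    """Longest decay time of the wealth autocorrelation, read off
    from the exponentials of Eqs. (14) and (15)."""
    if mu < 2.0:
        return 4.0 / (mu**2 * sigma2)      # Eq. (15) regime
    return 1.0 / ((mu - 1.0) * sigma2)     # Eq. (14) regime

sigma2 = 0.01          # per year: sigma = 10% per sqrt(year)
J = 0.01               # per year, so that mu - 1 = J / sigma2 = 1
print(relaxation_time(1.0 + J / sigma2, sigma2))   # 100 years
```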
Let us now escape from the mean-field model considered above and describe more realistic situations, where the number of economic neighbours to a given individual is finite. We will first assume that the matrix $`J_{ij}`$ is still symmetrical, and is either equal to $`J`$ (if $`i`$ and $`j`$ trade), or equal to $`0`$. A reasonable first assumption is that the graph describing the connectivity of the population is completely random, i.e. that two points are neighbours with probability $`c/N`$ and disconnected with probability $`1-c/N`$. In such a graph, the average number of neighbours is equal to $`c`$. We thus scale $`\widehat{J}=J/c`$ in order to compare results with various connectivities (and insure a smooth large connectivity limit). We have performed some numerical simulations of Eq. (2) for $`c=4`$ and have found that the wealth distribution still has a power-law tail, with an exponent $`\mu `$ which only depends on the ratio $`J/\sigma ^2`$. This is expected since a rescaling of time by a factor $`\alpha `$ can be absorbed by changing $`J`$ into $`\alpha J`$ and $`\sigma `$ into $`\sqrt{\alpha }\sigma `$; therefore, long time (equilibrium) properties can only depend on the ratio $`J/\sigma ^2`$. As shown in Fig. 3, the exponent $`\mu `$ can now be smaller than one for sufficiently small values of $`J/\sigma ^2`$. In this model, one therefore expects wealth condensation when the exchange rate is too small. Note that we have also computed numerically the quantity $`Y_2`$ and found very good agreement with the theoretical value $`1-\mu `$ determined from the slope of the histogram of the $`w_i`$’s.
From the physical point of view, the class of models which we consider here belongs to the general family of directed polymers in random media. The two cases we have considered so far correspond respectively to a polymer on a fully connected lattice, and a polymer on a random lattice. A variant of this model can be solved exactly using the method of Derrida and Spohn for the so-called directed polymer problem on a tree. In this variant one assumes that at each time step $`\tau `$ the connectivity matrix is completely changed and chosen anew using the same probabilities as above. Each agent $`i`$ chooses at random exactly $`c`$ new neighbours $`\ell (i,t)`$, and the wealth evolution equation becomes
$$W_i(t+\tau )=\left[\frac{J\tau }{c}\sum _{\ell =1}^{c}W_{\ell (i,t)}+(1-J\tau )W_i(t)\right]e^{V(i,t)}$$
(16)
where $`V`$ is a gaussian random variable of mean zero and variance $`2\sigma ^2\tau `$. One can then write a closed equation for the evolution of the wealth distribution . In this case, the wealth condensation phenomenon takes place whenever $`\sigma ^2\tau +J\tau \mathrm{ln}(J\tau /c)+(1-J\tau )\mathrm{ln}(1-J\tau )>0`$. For $`J\tau \ll 1`$ the transition occurs for $`\sigma ^2=\sigma _c^2=J(1+\mathrm{ln}(c/J\tau ))`$.
For $`\sigma >\sigma _c`$, one finds that $`\mu `$ is given by:
$$\mu \simeq \frac{\mathrm{ln}\left(\frac{c}{\sigma ^2\tau }\right)}{\mathrm{ln}(c/J\tau )}$$
(17)
and is less than one, signalling the onset of a phase where wealth is condensed on a finite number of individuals. This precisely corresponds to the glassy phase in the directed polymer language. The above formula shows that $`\mu `$ depends only weakly on $`\sigma `$ or $`J`$, in qualitative agreement with our numerical result for the continuous time model (see Fig. 3). Note that in the limit $`c\to \mathrm{\infty }`$, $`\sigma _c\to \mathrm{\infty }`$ and the glassy phase disappears, in agreement with the results above, obtained directly on the mean-field model. Note also that in the limit $`\tau \to 0`$, where the reshuffling of the neighbours becomes very fast, wealth diffusion within the population becomes extremely efficient and, as expected, the transition again disappears. Finally, in the simple case where $`J\tau =1`$ (each agent trading all of his wealth at each time step), the critical value is $`\sigma _c^2\tau =\mathrm{ln}c`$ and the exponent $`\mu `$ in the condensed phase is simply $`\mu =\sigma _c/\sigma `$, and $`\mu =\sigma ^2/\sigma _c^2`$ for $`\mu >1`$ (see ).
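The condensation threshold and Eq. (17) are easy to evaluate (our sketch; the numbers are illustrative and the $`J\tau \ll 1`$ expression for $`\sigma _c^2`$ is used):

```python
import math

def sigma_c2(J, c, tau):
    """Critical variance for wealth condensation on the tree,
    valid for J*tau << 1: sigma_c^2 = J * (1 + ln(c / (J*tau)))."""
    return J * (1.0 + math.log(c / (J * tau)))

def mu_condensed(sigma2, J, c, tau):
    """Pareto exponent in the condensed phase, Eq. (17)."""
    return math.log(c / (sigma2 * tau)) / math.log(c / (J * tau))

J, c, tau = 0.01, 4.0, 1.0
sc2 = sigma_c2(J, c, tau)
print(sc2)                                  # ~ 0.07
for s2 in (2.0 * sc2, 4.0 * sc2):
    print(s2, mu_condensed(s2, J, c, tau))  # mu < 1, decreasing with sigma
```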
Let us note, en passant, that the model considered by Derrida and Spohn has another interesting interpretation if the $`W_i`$’s describe the wealth of companies. The growth of a company takes place either from internal growth (leading to a term $`\eta _i(t)W_i`$ much as above), but also from merging with another company. If the merging process between two companies is completely random and takes place at a rate $`\lambda `$ per unit time, then the model is exactly the same as the one considered in Section 3 of (see in particular their Eq. (3.2)).
Although not very realistic, one could also think that the individuals are located on the nodes of a d-dimensional hypercubic lattice, trading with their neighbours up to a finite distance. In this case, one knows that for $`d>2`$ there exists again a phase transition between a ‘social’ economy where $`\mu >1`$ and a rich-dominated phase $`\mu <1`$. On the other hand, for $`d\le 2`$, and for large populations, one is always in the extreme case where $`\mu \to 0`$ at large times. In the case $`d=1`$, i.e. operators organized along a chain-like structure, one can actually compute exactly the distribution of wealth by transposing the results of . One finds for example that the ratio of the maximum wealth to the typical (e.g. median) wealth behaves as $`\mathrm{exp}\sqrt{N}`$, where $`N`$ is the size of the population, instead of $`N^{1/\mu }`$ in the case of a Pareto distribution with $`\mu >0`$. The conclusion of the above results is that the distribution of wealth tends to be very broadly distributed when exchanges are limited, either in amplitude (i.e. $`J`$ too small compared to $`\sigma ^2`$) or topologically (as in the above chain structure). Favoring exchanges (in particular with distant neighbours) seems to be an efficient way to reduce inequalities.
Let us now discuss in a cursory way the extension of this model to the case where the matrix $`J_{ij}`$ has a non trivial structure. One can always write:
$$J_{ij}=D_{ij}\mathrm{exp}\left(-\frac{F_{ij}}{2}\right)\qquad J_{ji}=D_{ij}\mathrm{exp}\left(+\frac{F_{ij}}{2}\right),$$
(18)
where $`D_{ij}`$ is a symmetric matrix describing the frequency of trading between $`i`$ and $`j`$. $`F_{ij}`$ is a local bias: it describes by how much the amount of trading from $`i`$ to $`j`$ exceeds that from $`j`$ to $`i`$. In the absence of the speculative term $`\eta _iW_i`$, Eq. (2) is actually a Master equation describing the random motion of a particle subject to local forces $`F_{ij}`$, where $`J_{ij}`$ is the hopping rate between site $`j`$ and site $`i`$. This problem has also been much studied . One can in general decompose the force $`F_{ij}`$ into a potential part $`U_i-U_j`$ and a non potential part. For a purely potential problem, the stationary solution of Eq. (2) with $`\eta _i\equiv 0`$ is the well-known Boltzmann weight:
$$W_{i,eq}=\frac{1}{Z}\mathrm{exp}(-U_i)\qquad Z=\sum _{i=1}^{N}\mathrm{exp}(-U_i).$$
(19)
The statistics of the $`W_i`$ therefore reflects that of the potential $`U_i`$; in particular, large wealths correspond to deep potential wells. Pareto tails correspond to the case where the extreme values of the potential obey the Gumbel distribution, which decays exponentially for large (negative) potentials .
The general case where $`\eta _i`$ is non zero and/or $`F_{ij}`$ contains a non potential part is largely unknown, and worth investigating. A classification of the cases where the Pareto tails survive the introduction of a non trivial bias field $`F_{ij}`$ would be very interesting. Partial results in the context of population dynamics have been obtained recently in . The case where the $`i`$’s are on the nodes of a $`d`$-dimensional lattice should be amenable to a renormalisation group analysis along the lines of , with interesting results for $`d\le 2`$. Work in this direction is underway .
In conclusion, we have discussed a very simple model of economy, where the time evolution is described by an equation capturing, at the simplest level, exchange between individuals and random speculative trading in such a way that the fundamental symmetry of the economy under an arbitrary change of monetary units is obeyed. Although our model is not intended to be fully realistic, the family of equations given by Eq. (2) is extremely rich, and leads to interesting generic predictions. We have investigated in detail a mean-field limit of this equation and showed that the distribution of wealth is of the Pareto type. The Pareto behaviour of the tails of this distribution appears to be robust for more general connectivity matrices, as a mapping to the directed polymer problem shows. In this context, a transition between an economy governed by a few individuals and one where the wealth is more evenly spread out is found. The important conclusion of the above model is that the distribution of wealth tends to be very broadly distributed when exchanges are limited. Favoring exchanges (and, less surprisingly, increasing taxes) seems to be an efficient way to reduce inequalities.
Acknowledgments: We want to thank D.S. Fisher, I. Giardina and D. Nelson for interesting discussions. MM thanks the SPhT (CEA-Saclay) for its hospitality.
# From mesoscopic magnetism to the anomalous 0.7 conductance plateau
## Abstract
We present a simple phenomenological model which offers a unifying interpretation of the experimental observations on the 0.7 conductance anomaly of quantum point contacts. The model utilizes the Landauer-Büttiker formalism and involves enhanced spin correlations and thermal depopulation of spin subbands. In particular our model can account for the plateau value 0.7 and the unusual temperature and magnetic field dependence. Furthermore it predicts an anomalous suppression of shot noise at the 0.7 plateau.
It has been known and well understood since 1988 that the dc-conductance $`G`$ of narrow quantum point contacts and quantum wires (both referred to as QPCs below) is quantized in units of $`G_2=2e^2/h`$. During the past five years an increasing part of the experimental and theoretical work on QPCs has been devoted to studies of deviations from this integer quantization. In particular the discovery of the 0.7 conductance anomaly in 1996 posed one of the most intriguing and challenging puzzles in the field. This anomaly is a narrow plateau, or in some cases just a plateau-like feature, appearing in scans of $`G`$ versus gate voltage $`V_g`$ at a value of $`G`$ which is reduced by a factor 0.7 relative to the ideal value $`G_2`$. The 0.7 conductance anomaly has been recorded in numerous QPC transport experiments (even before it was noted in 1996, see e.g. Ref. ) involving many different materials, geometries and measurement techniques. It can therefore be regarded as a universal effect.
Due to its universal character and the absence of a theoretical understanding, the 0.7 anomaly has been subject to intensive experimental studies. In this paper we show that many of the experimental findings can in fact be consistently interpreted by invoking a model of enhanced spin correlations, both spatially and temporally, of the charge carriers in the QPC. Due to the low density, the exchange interaction between electrons in the QPC is strong, and hence there is a tendency to align the electron spins there. Based on this physical picture, we formulate a simple phenomenological model of the tendency to form partially polarized states, which together with the Landauer-Büttiker (LB) formalism naturally explains many experimental features of the 0.7 anomaly.
At this point it is important to mention that under very general conditions truly 1d systems cannot exhibit ferromagnetic ordering at all. Therefore we emphasize that the model we are presenting does not rely on having a static magnetic moment, but only on having a dynamical mesoscopic polarization, where the correlation length is longer than the size of the QPC, and the correlation time is longer than the passage time through the constriction.
Summary of experimental facts. Although the 0.7 anomaly has been observed in many other experiments, we refer mainly to the work of the Cambridge group and the Copenhagen group, which present detailed studies of the magnetic field and temperature dependence of the anomaly. We emphasize that we are not dealing with the overall suppression of the conductance plateaus which has been seen in some samples, and which has been attributed to effects exterior to the contact region.
The main experimental features of the 0.7 anomaly are:
(e1) The anomalous plateau is observed in a large variety of QPCs at a value $`G=\gamma G_2`$, where the suppression factor $`\gamma `$ is close to 0.7 . A typical semiconductor QPC has a width less than 0.1 $`\mu `$m and a length in the range 0.1 to 10 $`\mu `$m.
(e2) The temperature dependence is qualitatively the same for all samples: the anomalous plateau is fully developed in some (device dependent) temperature range typically above 2 K. With increasing temperature both the anomalous and the integer plateaus vanish by thermal smearing, while with decreasing temperature the width of the anomalous plateau shrinks and the value of the suppression factor $`\gamma `$ approaches 1 .
(e3) A detailed study of the temperature dependence of $`\gamma `$ in QPCs with a particularly large subband separation shows that in the low temperature regime the conductance suppression has an activated behavior: $`1-\gamma (T)\propto \mathrm{exp}(-T_a/T)`$ .
(e4) The activation temperature $`T_a`$ is a function of $`V_g`$ vanishing at some critical gate voltage $`V_g^0`$. Close to $`V_g^0`$ the dependence of $`T_a`$ on $`V_g`$ is well approximated by a power law, $`T_a\propto (V_g-V_g^0)^\alpha `$, with $`\alpha \simeq 2`$.
(e5) At a fixed temperature corresponding to a well developed 0.7 plateau, $`\gamma `$ shows a strong dependence on an in-plane magnetic field. With increasing magnetic field $`\gamma `$ smoothly decreases from 0.7 at $`B=0`$ T to 0.5 at $`B=13`$ T. The latter value corresponds to the expected LB conductance of one spin split subband.
(e6) Under the same temperature conditions as in (e5) the 0.7 anomaly depends on the source-drain bias. The suppression factor $`\gamma `$ increases smoothly from $`0.7`$ at zero bias to $`0.9`$ at large bias ($`2`$ mV) .
Alternative explanations for the 0.7 anomaly. First we can rule out impurity backscattering for two reasons: (1) it would lead to a non-universal suppression of conductance with a strongly sample-dependent dependence on $`V_g`$, and (2) the temperature dependence expected from thermal smearing of the LB conductance is found to be much weaker than the observed dependence of the 0.7 anomaly . Thus a single-particle picture cannot explain the effect. With the inclusion of electron-electron interactions, a strong temperature dependence of the conductance suppression has been shown to arise due to an interaction-induced renormalization of backscattering, but the temperature dependence is opposite to the observed one. We also note that in the framework of Luttinger liquid theory interaction effects alone have no effect on the dc-conductance. A mechanism based on activated backscattering has also been suggested, but as for the impurity backscattering suggestion, such a model can offer an explanation neither for the existence of the plateau nor for its value.
Already in the first paper it was pointed out that due to its magnetic field dependence the 0.7 anomaly is related to spin polarization. This idea has been elaborated on in theoretical papers; however, none of these approaches have explained all of the experimental facts, and most strikingly they predict plateaus at $`G=G_2`$ or $`0.5G_2`$ instead of at the observed $`0.7G_2`$.
The phenomenological model. In our model we assume that the transmission coefficient of electrons can be calculated in a “frozen” configuration of spins in the mesoscopic constriction. The dynamics of the collective degrees of freedom describing fluctuations of spin are assumed to happen on larger time scales. Also, the distribution of spin is assumed to be smooth, such that an adiabatic approximation is valid. In the “frozen spin configuration” the transmission coefficient $`𝒯_\sigma ^{\mathrm{tot}}`$ for a spin-$`\sigma `$ electron going through the QPC can thus be calculated as $`𝒯_\sigma ^{\mathrm{tot}}=𝒯_\sigma (E)P_\sigma +𝒯_{\overline{\sigma }}(E)P_{\overline{\sigma }}`$. Here $`P_\sigma `$ ($`P_{\overline{\sigma }}`$) is the probability of finding the incoming spin parallel (antiparallel) to the instantaneous polarization. In the isotropic case with $`P_\sigma =P_{\overline{\sigma }}`$ this leads to the same results as a static situation where two spin subbands are formed as shown in Fig. 1a, and therefore for simplicity we adopt this picture in the following modeling. Let the energy dispersion laws be given as
$$\epsilon _\sigma (k)=\epsilon _\sigma ^0(k)+\epsilon _\sigma ^s,\qquad \sigma =\uparrow ,\downarrow ,$$
(1)
where $`\epsilon _\sigma ^0(k)\to 0`$ for $`k\to 0`$ and $`\epsilon _\sigma ^s`$ is the subband edge. The system is partially polarized if the chemical potential $`\mu `$ and the subband edges satisfy $`\epsilon _{\uparrow }^s(\mu )<\epsilon _{\downarrow }^s(\mu )<\mu `$, where we have explicitly indicated the $`\mu `$-dependence of the subband edges. Given this model, at finite temperature $`T`$, using an idealized step-function transmission coefficient, the LB conductance $`G(T)`$ of this system is
$$G(T)=\frac{1}{2}G_2\sum _{\sigma =\uparrow ,\downarrow }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}d\epsilon \,\mathrm{\Theta }(\epsilon -\epsilon _\sigma ^s)\left(-f^{}[\epsilon -\mu ]\right),$$
(2)
where $`f^{}`$ is the derivative of the Fermi-Dirac distribution $`f[x]=[\mathrm{exp}(x/k_BT)+1]^{-1}`$ and $`\mathrm{\Theta }(x)`$ is the step function. By integration we obtain
$$G(T)=\frac{1}{2}G_2\left(f[\epsilon _{\uparrow }^s(\mu )-\mu ]+f[\epsilon _{\downarrow }^s(\mu )-\mu ]\right).$$
(3)
The important parameter is the spin-down Fermi energy $`\mathrm{\Delta }(\mu )`$ given by the energy difference between $`\mu `$ and the minority-spin subband edge (see Fig. 1):
$$\mathrm{\Delta }(\mu )=\mu -\epsilon _{\downarrow }^s(\mu ).$$
(4)
Consider now the situation where the spin polarization is nearly complete, i.e. $`\mathrm{\Delta }(\mu )\ll \epsilon _{\downarrow }^s(\mu )-\epsilon _{\uparrow }^s(\mu )`$. In this case three distinct temperature regimes exist. In the high temperature regime, $`k_BT\gg \epsilon _{\downarrow }^s(\mu )-\epsilon _{\uparrow }^s(\mu )`$, both terms in Eq. (3) are 0.5 so that $`G=0.5G_2`$. At low temperatures, $`k_BT\ll \mathrm{\Delta }(\mu )`$, both terms in Eq. (3) are 1 and the conductance is the usual $`G_2`$. Remarkably, in the entire temperature range
$$\mathrm{\Delta }(\mu )\ll k_BT\ll \epsilon _{\downarrow }^s(\mu )-\epsilon _{\uparrow }^s(\mu ),$$
(5)
the contribution of the first term is 0.5 while the second term remains 1, yielding $`G=0.75G_2`$, and the magic number $`0.7`$ emerges. Thus a 0.7 quasi-plateau appears if the condition (5) is fulfilled for a sufficiently broad range of $`\mu `$ (in experiments $`\mu \propto V_g`$). In fact, below we argue that it follows from general considerations that the functional form of $`\mathrm{\Delta }(\mu )`$, also shown in Fig. 1(b), is
$$\mathrm{\Delta }(\mu )=\{\begin{array}{cc}C(\mu -\mu _c)^2,\hfill & \mathrm{for}\ \mu >\mu _c\hfill \\ D(\mu -\mu _c),\hfill & \mathrm{for}\ \mu <\mu _c\hfill \end{array}$$
(6)
which exactly expresses the tendency for $`\epsilon _{\downarrow }^s(\mu )`$ to lock onto the value of $`\mu `$ by keeping $`\mathrm{\Delta }(\mu )`$ small. We derive Eq. (6) starting from a local spin density functional: $`F=E[n_{\uparrow },n_{\downarrow }]-\mu (n_{\uparrow }+n_{\downarrow })`$. (We neglect non-conservation of spin due to surface terms.) In the spirit of the Landau theory of critical phenomena we minimize this functional in the vicinity of the “critical” point $`\mu _c`$, where the cross-over from full to partial polarization occurs. Near this point we have $`n_{\downarrow }\ll n_{\uparrow }`$ and the condition for the minimum of the free energy becomes
$$\begin{array}{ccccc}\frac{\partial F}{\partial n_{\uparrow }}\hfill & =& \alpha +\alpha ^{\prime }\delta n_{\uparrow }+\gamma n_{\downarrow }-\mu \hfill & =& 0\hfill \\ \frac{\partial F}{\partial n_{\downarrow }}\hfill & =& \beta +\beta ^{\prime }n_{\downarrow }+\gamma \delta n_{\uparrow }-\mu \hfill & =& 0,\hfill \end{array}$$
(7)
where we have made the linearization $`n_{\uparrow }=n_{\uparrow }^0+\delta n_{\uparrow }`$ for the majority spins and assumed that the leading terms are linear in $`\delta n_{\uparrow }`$ and $`n_{\downarrow }`$. The solution for the minority spin density in the case of $`\mu >\mu _c`$ is $`n_{\downarrow }\propto (\mu -\mu _c)`$ which combined with the 1d property that $`n_{\downarrow }^2\propto \epsilon _F^{\downarrow }=\mathrm{\Delta }`$ leads to Eq. (6). In the other case, $`\mu <\mu _c`$, $`\mathrm{\Delta }`$ is the energy gap for adding a minority spin and it is caused by the interaction energy. Thus again within the same simplified approach, we expect $`\mathrm{\Delta }`$ to be proportional to the density of majority spins, and hence $`\mathrm{\Delta }=D(\mu -\mu _c)`$.
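Combining Eqs. (3), (4) and (6), the three temperature regimes and the resulting quasi-plateau are easy to reproduce numerically. In the sketch below every parameter value (the coefficient C, the subband splitting, and the chosen chemical potential) is invented for illustration and not fitted to any experiment:

```python
import numpy as np

def fermi(x, kT):
    """Fermi-Dirac distribution f[x] = [exp(x/kT) + 1]**(-1)."""
    return 1.0 / (np.exp(np.clip(x / kT, -500.0, 500.0)) + 1.0)

# Illustrative parameters in arbitrary energy units (not fitted values):
mu_c = 0.0     # critical chemical potential of Eq. (6)
C = 40.0       # curvature of Delta(mu) above mu_c
gap = 1.0      # subband splitting eps_down^s - eps_up^s, taken constant

def conductance(mu, kT):
    """G/G_2 from Eq. (3), for mu > mu_c where Delta(mu) = C*(mu - mu_c)**2."""
    delta = C * (mu - mu_c) ** 2        # minority-spin Fermi energy, Eq. (4)
    eps_down = mu - delta               # minority subband edge
    eps_up = eps_down - gap             # majority subband edge
    return 0.5 * (fermi(eps_down - mu, kT) + fermi(eps_up - mu, kT))

mu = mu_c + 0.01                        # Delta = 0.004, deep inside regime (5)
print(round(conductance(mu, 1e-4), 2))  # kT << Delta:        1.0
print(round(conductance(mu, 0.1), 2))   # Delta << kT << gap: 0.75
print(round(conductance(mu, 50.0), 2))  # kT >> gap:          0.5
```

The middle value is the 0.75 quasi-plateau of the idealized model; a non-ideal transmission coefficient pushes it down toward the observed 0.7.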
An in-plane magnetic field B is readily taken into account by adding Zeeman energy terms and substituting
$$\epsilon _{\uparrow }^s\to \epsilon _{\uparrow }^s-g\mu _B|𝐁|,\qquad \epsilon _{\downarrow }^s\to \epsilon _{\downarrow }^s+g\mu _B|𝐁|.$$
(8)
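A quick numerical check of the field dependence: shifting the two subband edges by the Zeeman energy $`E_Z=g\mu _B|𝐁|`$ as in Eq. (8) and re-evaluating Eq. (3) moves the quasi-plateau smoothly from about 0.75 down to the Zeeman value 0.5. All numbers below are illustrative:

```python
import numpy as np

def fermi(x, kT):
    """Fermi-Dirac distribution."""
    return 1.0 / (np.exp(np.clip(x / kT, -500.0, 500.0)) + 1.0)

kT = 0.1        # fixed temperature inside regime (5); arbitrary energy units
delta = 0.004   # minority-spin Fermi energy Delta(mu) of Eq. (4), assumed small
gap = 1.0       # zero-field splitting eps_down^s - eps_up^s

def conductance(E_Z):
    """G/G_2 from Eq. (3) with the Zeeman-shifted edges of Eq. (8)."""
    eps_down = -delta + E_Z             # minority edge, measured from mu
    eps_up = -delta - gap - E_Z         # majority edge, measured from mu
    return 0.5 * (fermi(eps_down, kT) + fermi(eps_up, kT))

for E_Z in (0.0, 0.05, 0.2, 1.0):       # increasing in-plane field
    print(round(conductance(E_Z), 3))   # drifts smoothly from ~0.75 to 0.5
```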
Experimental implications of the model. In the following we discuss how the model can explain the experimental observations (e1)-(e6) summarized above. To facilitate comparison with experiment we have added a spin-degenerate subband with $`\epsilon _2^s=\epsilon _{\downarrow }^s+E`$, where $`E`$ is a constant transverse-mode subband-spacing. In Fig. 2 observations (e1) and (e2) are clearly seen in the model calculation. The plateau-like feature in the figure is due to the specific functional form of $`\mathrm{\Delta }(\mu )`$ in Eq. (6), which as mentioned before ensures the fulfilment of condition Eq. (5). In this idealized case with a step-function transmission coefficient the plateau appears at $`0.75`$ as discussed above.
Observation (e3) follows trivially from Eq. (2) with the activation temperature $`k_BT_a=\mathrm{\Delta }(\mu )`$. Assuming that in the vicinity of $`\mu _c`$ the chemical potential depends linearly on the gate voltage $`V_g`$, Eq. (6) immediately predicts (e4) with the exponent $`\alpha =2`$. We now turn to the characteristic magnetic field dependence (e5) of the 0.7 plateau at a fixed temperature. The result of the model calculation using Eqs. (3), (6) and (8) is shown in Fig. 3. In accordance with observation the 0.7 anomaly develops smoothly into an ordinary Zeeman split 0.5 plateau. The last experimental observation (e6) concerns finite bias. This brings us into a strong non-equilibrium situation which is outside the scope of the present work. However, considering a small finite bias not too far from the equilibrium case, we do find that the 0.75 plateau rises, which gives additional support for the picture presented here.
Non-ideal transmission. Our idealized model with a step-function transmission coefficient predicts an anomaly around 0.75 rather than around 0.6 - 0.7 as usually observed in the experiments (see Figs. 2 and 3). When we include more realistic transmission coefficients, $`𝒯_\sigma (\epsilon )`$, allowing for resonances to occur this discrepancy in fact finds a natural explanation. In accordance with the LB formalism we replace Eq. (2) by
$$G(T)=\frac{1}{2}G_2\sum _{\sigma =\uparrow ,\downarrow }\int _{-\infty }^{\infty }d\epsilon \,𝒯_\sigma (\epsilon )\left(-f^{\prime }[\epsilon -\mu ]\right).$$
(9)
In contrast to the idealized model this expression is not universal, but two general features can be expected. First of all, due to the conditions Eqs. (5) and (6) the quasi-plateau persists. Secondly, mainly the transmission coefficient of the minority spin subband will be affected, which results in a suppression of the anomalous plateau while the integer plateau remains close to 1. In Fig. 4 this is illustrated by using the transmission coefficient for a rectangular potential barrier. This choice of $`𝒯_\sigma (\epsilon )`$ might be particularly relevant to the recent experiments on long quantum wires.
Suppression of shot noise. Deeper insight into the nature of the 0.7 anomaly may be obtained from shot noise measurements. Below we contrast the standard LB treatment with our model. In the standard spin-degenerate case the conductance is interpreted in terms of an overall reduction of the transmission coefficient $`𝒯_0`$, and the noise spectrum at the 0.7 anomaly is
$$\langle I_\omega I_{-\omega }\rangle _{\omega \to 0}=e(1-𝒯_0)I=G_2𝒯_0(1-𝒯_0)\mathrm{\Delta }\mu $$
(10)
with $`𝒯_0=0.7`$. In our model the 0.7 anomaly comes from thermal depopulation of spin subbands and not from a reduced transmission, and the noise spectrum is
$$\langle I_\omega I_{-\omega }\rangle _{\omega \to 0}=\frac{1}{2}G_2[(1-𝒯_{\uparrow })𝒯_{\uparrow }+(1-𝒯_{\downarrow })𝒯_{\downarrow }]\mathrm{\Delta }\mu ,$$
(11)
which in the simple version with $`𝒯_\sigma (\epsilon )=\mathrm{\Theta }(\epsilon -\epsilon _\sigma ^s)`$ leads to a vanishing shot noise. When non-ideal transmission is included, our model does not predict a universal noise contribution for the minority spins. We can, however, see that while the 0.7 quasi-plateau may be strongly reduced by additional backscattering ($`𝒯_{\downarrow }<1`$), the transmission in the majority spin subband remains large ($`𝒯_{\uparrow }\approx 1`$), and Eq. (11) yields
$$\langle I_\omega I_{-\omega }\rangle _{\omega \to 0}\ll G_2𝒯_0(1-𝒯_0)\mathrm{\Delta }\mu .$$
(12)
Thus in general our model predicts a strong suppression of shot noise as compared to the standard result Eq. (10). This effect may already have been observed (see Fig. 3 in Ref. ).
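The contrast between Eq. (10) and Eq. (11) can be made concrete with a two-line estimate; the transmission values are invented for the example, and only the qualitative suppression matters:

```python
# Standard LB picture, Eq. (10): the whole reduction sits in one coefficient T0.
T0 = 0.7
noise_standard = T0 * (1 - T0)      # noise in units of G_2 * Delta_mu

# Spin-subband picture, Eq. (11): open majority channel, reduced minority one.
T_up, T_down = 1.0, 0.8             # assumed illustrative transmissions
noise_spin = 0.5 * ((1 - T_up) * T_up + (1 - T_down) * T_down)

print(round(noise_standard, 2), round(noise_spin, 2))   # 0.21 versus 0.08
```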
In summary, we have presented a phenomenological model which can account for the experimental observations of the anomalous 0.7 conductance plateau in mesoscopic QPCs. The model is built on the assumption of an effective instantaneous partial polarization seen by the traversing electrons, while the ground state itself need not have a finite magnetic moment. We hope that the present picture can inspire future work on microscopic theories of enhanced spin correlations in open mesoscopic systems.
Acknowledgements. We are grateful for the experimental data provided by Anders Kristensen and James Nicholls. H.B. and V.V.C. both acknowledge support from the Danish Natural Science Research Council through Ole Rømer Grant No. 9600548.
# Optically dim counterparts of hard X-ray selected AGNs

*Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Centro Galileo Galilei of the CNAA (Consorzio Nazionale per l’Astronomia e l’Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.*
## 1 Introduction
X-ray background (XRB) synthesis models ascribe most of the high energy flux to radio quiet, absorbed Active Galactic Nuclei (AGNs) at intermediate and high redshifts (e.g. Comastri et al. 1995, Gilli et al. 1999, Pompilio et al. 2000). Observationally the available information on AGNs at these redshifts refers mostly to unabsorbed nuclei, since the current samples of radio quiet AGNs have been selected mostly with color techniques, or in the soft X-rays. Only recently have selection criteria less sensitive to absorption been used. Examples are the radio quiet red QSOs (Kim & Elvis 1999), analogous to the ones already found in radio loud samples (e.g. Webster et al. 1995), and the spectroscopic identifications in the ELAIS field (Rowan–Robinson et al. 1999). Yet, most of our knowledge about absorbed, radio quiet AGNs is limited to low redshifts and low luminosities, where spectroscopic surveys of bright galaxies have been performed.
The High Energy Large Area Survey \[HELLAS, Comastri et al. 2000, Fiore et al. 2000 (paper II)\] aims at providing a useful sample of hard X-ray selected (5–10 keV), optically identified AGNs while waiting for the Chandra and XMM results. The survey instrument is the BeppoSAX MECS. The sky coverage is 1–50 square degrees at $`F_{5-10\mathrm{keV}}=`$ 5–30 $`\times 10^{-14}\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$, respectively, and is 84 deg<sup>2</sup> at fluxes higher than $`9\times 10^{-13}\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. The cataloged sources amount to 147, and at the fainter limit the source density is $`16.9\pm 6.4`$ deg<sup>-2</sup>, implying that about 20–30% of the XRB at these energies has been resolved. A program of optical identification is underway, which includes optical/near-IR broad band photometry and near-IR imaging, besides optical spectroscopy of all candidates down to R = 20. So far, 63 optical counterparts have been identified, in about two thirds of the examined errorboxes. About half of the spectra are typical of QSOs, with a blue continuum and broad lines, about half are of intermediate type (1.8–1.9), generally with red continua, and a few of them contain only narrow lines \[Fiore et al. 1999 (paper I), La Franca et al. 2000 (paper III)\].
In this paper we present and discuss preliminary results of the near-IR photometry and imaging observations of the spectroscopically identified counterparts. Combined with the optical information presented in papers I and III these data give a broad-band view of the properties of the HELLAS sources and allow a preliminary census of the hard XRB contributors.
## 2 The observations
Most of the near-IR images were obtained with the ARNICA camera (Lisi et al. 1996) at the Italian National Telescope Galileo (TNG). Only one (#2) out of a total of 10 objects was observed with NIRC (Matthews & Soifer 1994) at Keck I. We observed these 10 objects at K-short ($`2.16\mu `$m) and 8 of them at J ($`1.25\mu `$m). Tab. 1 gives the observation log along with the limiting magnitudes reached in the exposures. The items are sorted according to the B–R color, which is a measure of the AGN dominance, as will be discussed in the following. The first column gives an identification number that, for the sake of clarity, will be used in place of the full SAX name reported in column 2. Objects #1 and #4 are actually two QSOs that have been identified within the same errorbox of one of the HELLAS sources (paper I).
The observations were performed by mosaicing the field every minute, with offsets of 10-20<sup>′′</sup> around the source, both to sample the background and to minimize the effects of artifacts in the array. The data reduction pipeline was similar to that described in Hunt et al. (1994). Each image was divided by a differential flat field made out of sky twilight images. Images of each mosaic were aligned by means of field stars and, then, coadded with a sigma clipping rejection to exclude hot and dead temporary pixels, not accounted for by the bad pixel mask.
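The sigma-clipping coaddition described above can be sketched as follows; the median-based rejection threshold and the toy frames are illustrative and do not reproduce the actual ARNICA pipeline parameters:

```python
import numpy as np

def sigma_clip_stack(frames, nsigma=3.0):
    """Mean-combine aligned frames, rejecting per-pixel outliers
    (e.g. transient hot/dead pixels) relative to a robust sigma."""
    cube = np.asarray(frames, dtype=float)       # shape (n_frames, ny, nx)
    med = np.median(cube, axis=0)
    mad = np.median(np.abs(cube - med), axis=0)  # median absolute deviation
    sigma = 1.4826 * mad                         # robust std estimate
    keep = np.abs(cube - med) <= nsigma * sigma + 1e-9
    clipped = np.where(keep, cube, np.nan)
    return np.nanmean(clipped, axis=0)

# Toy example: five aligned 2x2 frames with one cosmic-ray-like spike.
frames = np.ones((5, 2, 2))
frames[2, 0, 0] = 50.0                           # transient outlier
stacked = sigma_clip_stack(frames)
print(stacked)                                   # -> all pixels equal to 1.0
```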
In this paper we also take advantage of the optical spectra used to identify the sources (papers I and III). In particular we will use the information on the continuum shape and the equivalent width of the broad lines. Optical photometry of these objects was taken from various sources: the Palomar sky survey, the UKST sky survey and some new observations obtained at various telescopes in connection with the spectroscopic program. Each of these optical data sets was obtained with slightly different filters; however, we homogenized the data to the Johnson system by normalizing the observed spectra to the observed photometric points and then resampling in the Johnson bands.
## 3 Results and discussion
Tab. 2 gives the main results of our near-IR observations. Columns 2 to 4 give the X-ray properties, while columns 5 and 6 give properties inferred from the optical spectra and, in particular, the spectral classification and redshift. Our subsample of HELLAS sources was not selected with a specific criterion; it consists of all the early HELLAS identifications known and accessible at the time of the observations. This subsample includes intermediate type (1.9) AGNs, “classical” broad line QSOs with blue continuum, “red” broad line QSOs, characterized by a red underlying continuum, and one LINER, i.e. all the HELLAS identified types. Our sample also covers the whole range of redshift of the parent sample. Thus, even if it is not statistically solid because of the small size, it still contains information on the general properties of the HELLAS sources. Column 10 gives the full-width at half-maximum of the near-IR image measured along the major axis. The source is labelled as resolved/extended in column 9 if it is at least two times wider than the seeing.
Narrow line AGNs, intermediate type AGNs and red QSOs all show extended emission indicative of a significant (near-IR) contribution of the host galaxy. Furthermore, the B–K colors are not clustered around the value of $`2`$, typical of color–selected QSOs at high redshift, but extend to very red values up to B–K$`\sim `$6, in analogy with the radio selected AGNs (Webster et al. 1995). Therefore, our red AGNs can be considered the radio quiet counterpart of those found in the radio surveys. Instead, all and only blue QSOs are unresolved; as we shall see, this is a consequence of the dominance of the AGN component and, possibly, of their higher redshift. These findings are summarized in Fig. 1, where we plot R–K versus B–R and where point-like and extended sources are marked with circles and squares respectively. The solid oblique line is the locus of a single powerlaw, and the big cross marks the point where a powerlaw would give B–K = 2.1. The dotted line is the reddening curve for a QSO spectrum<sup>1</sup><sup>1</sup>1We used the standard Galactic extinction curve and a QSO template derived from a combination of the average spectra given in Elvis et al. (1994) and Francis et al. (1991). at $`z=0.26`$ (the average of the three reddest objects, #8, #9, and #10), starting from A<sub>V</sub>=0.5 for the sake of clarity. The star gives the colors of an old stellar population<sup>2</sup><sup>2</sup>2The galaxy templates were taken from Bruzual & Charlot (1993), with solar abundances and a Salpeter IMF., again at $`z=0.26`$. The dashed line gives mixed QSO-galaxy contribution models in steps of 10% relative contribution (black dots) to the rest-frame V band, down to a minimum galactic contribution of 50%. We see that even the blue QSOs are redder than usual, and the sources with bent, non–powerlaw spectra are all and only those with extended images.
Instead, the reddest objects have colors similar to those expected from an evolved stellar population.
As an additional check, we compared the observed optical to near-IR photometry and optical spectra of each single object with those expected from (reddened) QSOs and from evolved stellar populations. An example of this method for a specific object is given in Vignali et al. (2000). The combination of the (reddened) AGN and of the galactic component should give an acceptable fit of the observed photometric points. We also require that the model match the shape of the observed optical spectrum (paper III) with a maximum tolerance of about 30% (to allow for uncertainties introduced by the non-parallactic angle of the slit). Finally, we also tried to fit the equivalent width of the observed broad hydrogen lines, or to meet their upper limits, within a factor of two (given the EW spread observed in optical samples of QSOs). The QSO and stellar population templates are the same as for Fig. 1 (notes 1 and 2). The free parameters of the model are the reddening of the QSO, the age of the stellar population and the relative contribution of these two components.
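The linear part of the two-component decomposition described above can be sketched numerically. Everything below is hypothetical: the band fluxes of the two templates and the 20/80 mixture are invented for illustration (the actual modeling also reddens the QSO template and matches the line equivalent widths):

```python
import numpy as np

# Hypothetical band fluxes (B, R, J, K; arbitrary units) for the two templates.
qso = np.array([1.00, 0.80, 0.55, 0.45])   # blue, power-law-like QSO
gal = np.array([0.10, 0.45, 0.90, 1.00])   # evolved stellar population

# Synthetic "observed" photometry: a 20% QSO + 80% galaxy mixture.
obs = 0.2 * qso + 0.8 * gal

# Least-squares decomposition into the two components.
A = np.column_stack([qso, gal])
coeff, *_ = np.linalg.lstsq(A, obs, rcond=None)
frac_gal = float(coeff[1] / coeff.sum())
print(round(frac_gal, 2))                   # -> 0.8
```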
Fig. 2 shows examples of our spectral fits. They refer to three representative sources, i.e. the bluest QSO (#1), a “transition” source (#6), located around the bend in Fig. 1, and the reddest object of our sample (#10). The thin solid line in the upper panels is the best fit QSO+galaxy model, the two thick lines indicate the shape (and spread) of the line-free continuum observed in the optical spectra, while points with errorbars are the photometric fluxes normalized to R. One sees clearly the 4000 Å break in the red, spatially resolved sources, with a discernible progression from the “transition” sources to the very red ones. Along the progression, the B–R color increases from $`\sim `$1 to $`\sim `$2.8 and the preferred age for the model population increases from 10<sup>9</sup> to 10<sup>10</sup> years. The fractional contribution of the reddened AGN (to the rest frame V-band) decreases from 100% in the bluest objects to a few percent in the red ones. Although a small contribution from a reddened AGN is required even in the reddest objects, this cannot dominate their red colors. This is clearly shown by the dotted lines in the lower panels of Fig. 2, which give the best fit to the photometric points using only a (reddened) QSO template: either the continuum shape or the photometric points, or both, are poorly fitted. The best model fitting to the photometry and spectral shape is in perfect agreement with the imaging results, in the sense that the contribution of the host galaxy is dominant in those sources which appear extended. Finally, it is interesting to note that the best fitting stellar populations are generally old/evolved. However, one should bear in mind that in certain cases different models such as a reddened 10<sup>9</sup> year old population or a reddened continuous burst also provide an acceptable fit to the data, indicating some degree of degeneracy.
Red, absorbed AGNs are about half of the identified sources in the HELLAS sample, which in turn are about two thirds of the examined errorboxes. The fraction of these obscured AGNs is expected to increase significantly at fainter X-ray fluxes, where the remaining 70–80% of the hard X-ray background is produced (e.g. Gilli et al. 1999). As a consequence, our result suggests that a large fraction of the hard XRB contributors have optical/near-IR counterparts which appear as “normal” galaxies (possibly with narrow AGN–like emission lines). A population of red AGNs similar to that analyzed in this paper but at z$`>`$1 would probably remain undetected at R=20. These might be the counterparts of the HELLAS sources for which no optical identification was found.
Ours are among the first red QSOs selected in hard X-rays, while previous samples have been selected in the radio (e.g. Webster et al. 1995) or in the soft X-rays (Kim & Elvis 1999). The prevalent interpretation in the latter cases is that the continuum of the red QSOs comes from the QSOs themselves, seen through an appropriate amount of reddening material (e.g. Masci et al. 1998, Kim & Elvis 1999). In the case of our objects most of the continuum is instead due to the host galaxy, and absorption is needed only to make the galaxy’s contribution the dominant one. Both Figs. 1 and 2 show that the reddened QSO interpretation is untenable, since reddened QSO models fail to fit colors and spectral shapes. Also, all of them have extended IR images. The discrepancy with Masci et al. (1998) and Kim & Elvis’ (1999) results is probably to be ascribed to the tendency of their selection criteria to find QSOs that are on average less absorbed than ours (hence the QSO, although reddened, still dominates over the galaxy): Kim & Elvis select their red QSOs among bright soft X-ray sources, while Masci et al. select flat-spectrum radio sources that, according to the unified model, should be preferentially seen pole-on. Other studies, which use selection criteria less sensitive to absorption, as in our case, find red QSOs whose continuum is dominated by their host galaxies, in agreement with our findings. Among these studies, Benn et al. (1998) find host galaxy-dominated red QSOs in radio sources which have steep radio spectra (hence preferentially edge-on according to the unified model). Hasinger et al. (1999) and Lehmann et al. (2000) find several red-AGNs among faint ROSAT sources whose red colors are ascribed to the contribution from their hosts; in some of these sources the redshift moves the rest frame hard-X band into the soft band, while in low-z objects the depth of the X-ray observation could detect the soft excess of obscured systems.
Finally, Kruper & Canizares (1989) also found a large fraction of red-AGNs that are probably dominated by their host galaxies by selecting sources at X-ray energies (0.5–4.5 keV) higher than those of ROSAT. Our finding on red QSOs is in line with the results of Benn et al. (1998), Lehmann et al. (2000) and Kruper & Canizares (1989).
## 4 Conclusions
We presented new near-IR (J and Ks band) observations of a sample of 10 objects selected in the hard X-rays (5–10 keV). These sources were discovered in a large survey (HELLAS) performed by the BeppoSAX satellite, which resolves 20–30% of the hard X-ray background. The sample includes 4 blue broad line QSOs and 6 AGNs with redder continua whose optical emission line spectra range from broad line objects (red QSOs), to intermediate type 1.9 AGNs, to a LINER.
The B–K color ranges from the standard value of $`2`$ (typical of U–B color selected QSOs) up to $`\sim `$6, similar to the color of red QSOs found in radio surveys.
The red AGNs show extended near-IR images. Model fitting of the photometry and spectral data shows that all and only the red AGNs are dominated by the emission of the host galaxy (with an age of 10<sup>9</sup>–10<sup>10</sup> yr). Red AGNs amount to about a third of the total HELLAS sources, and their fraction is expected to increase significantly at fainter X-ray fluxes, where most of the hard X-ray background is produced. Therefore, our result suggests that a significant fraction of the counterparts of the sources making the hard X-ray background appear as “normal” galaxies at optical and near-IR wavelengths. Chandra and XMM are expected to discover a large number of this class of objects.
###### Acknowledgements.
We thank the TNG staff and C. Baffa for technical assistance during the observations. We are grateful to P. Giommi, G. Matt, S. Molendi and G.C. Perola, who are involved in the HELLAS project. This work was partially supported by the Italian Space Agency (ASI) through the grant ARS-99-75 and by the Italian Ministry for University and Research (MURST) through the grant Cofin-98-02-32.
# Acoustics of early universe. — Flat versus open universe models
## 1 Introduction
Discovering the wave nature of scalar perturbations in the early universe has a long history. A watchful reader of Harrison’s classical paper can guess the wave equations from the formulae given there (Section 5.5). Trigonometric or Bessel solutions, together with the $`\omega \eta `$-dependence characteristic of flat perturbed universes, appear in both classical and gauge-invariant theories . In the case of the flat universe the gauge-specific wave equations are explicitly given by Sachs and Wolfe (see theorem, pp. 76–77). The comprehensive phonon description of perturbations in the flat radiation-filled universe, together with the attempt to quantize them, has been formulated by Lukash and continued in its quantum aspect by others . The wave character is confirmed in the original Lifshitz-Khalatnikov formalism. Acoustic motions of the baryon-electron system after recombination have been noticed by Yamamoto et al. . Some parallels between scalar perturbation dynamics and gravitational waves can be found in .
Controversies, however, arise over the gravitational instability criteria, the gauge problems and the role of the space curvature.
(1) In the $`\eta \to 0`$ limit one can formally construct the growing and decaying solutions. Since these solutions are typically considered as the large scale approximation ($`\omega \to 0`$), structure formation is expected on scales greater than the sound horizon. Consequently the Jeans criterion is understood as the dispersion relation dividing perturbations into two classes: acoustic waves and gravitationally bound structures. No dispersion relations like that can be inferred from the exact solutions .
(2) As long as the results depend on the coordinate system (the gauge-specific solutions differ from one another), their physical meaning is a subject of dispute. The acoustic field deserves a complete gauge-invariant treatment.
(3) The problem of the acoustic field does not seem to be solved properly in open universes, where most authors traditionally employ flat-space Fourier analysis instead of Fourier expansions in the Lobachevski space.
In an attempt to clarify these points, we propose a simple perturbation description, which is the same for all signs of curvature, and based on the gauge-invariant perturbation formalisms (Sakai , Bardeen , Kodama and Sasaki , Lyth and Mukherjee , Padmanabhan , Brandenberger, Kahn and Press , Ellis, Bruni and Hwang , Olson , see also ).
Section 2 contains a brief recipe for reducing the equations obtained in these theories to a single, second order partial differential equation (3). Differences between the formalisms turn out to be of no importance here, and we obtain exactly the same propagation equation for all of them. We show how to transform this equation to the wave equation in its normal form.
We obtain a general, “profile-independent” solution for the flat universe (Section 3), without appealing to the Fourier transform. We demonstrate that the gauge-invariant density perturbations propagate in the radiation-dominated universe in the same way as electromagnetic or gravitational waves propagate in the epoch of matter domination. Finally, we expand perturbations into plane waves, in order to discuss some basic features of the spectrum and the spectrum transfer function.
In Section 4 we describe the sound propagation in open universes. We analyse the dispersive role of the curvature. The space curvature prevents perturbations of frequencies smaller than some critical $`\omega _\mathrm{c}`$ from propagating in space, and systematically reduces the group velocity for the others as $`\omega `$ approaches $`\omega _\mathrm{c}`$.
Section 5 is devoted to Gaussian acoustic fields. We derive the spectrum transfer function in a form suitable for estimating the role of the space curvature in the microwave background.
## 2 Scalar perturbations in the early universe
In the universe filled with highly relativistic matter the energy momentum tensor is trace-free. The dynamics of the scale factor $`a(\eta )`$ expressed as a function of the conformal time $`\eta `$ is governed by
$$T_\mu ^\mu =\frac{6}{a^3(\eta )}\left(a^{\prime \prime }(\eta )+Ka(\eta )\right)=0$$
(1)
and yields
$$a(\eta )=\sqrt{\frac{\mathcal{E}}{3}}\frac{\mathrm{sin}\left(\sqrt{K}\eta \right)}{\sqrt{K}}.$$
(2)
We treat the curvature index $`K`$ as a continuous quantity and keep $`K`$ explicitly in both the equations and the solution, as far as possible. Traditional formulae can be recovered by setting $`K=\pm 1`$ or by the limit procedure $`K\to 0`$. The normalization $`\sqrt{\mathcal{E}/3}`$ recalls the constant of motion $`\mathcal{E}=\rho (\eta )a^4(\eta )`$.
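It is straightforward to verify symbolically that the scale factor (2) solves Eq. (1) for generic $`K`$ and reduces to the familiar flat form in the $`K\to 0`$ limit. A sketch with sympy, where the symbol E0 stands for the constant of motion $`\mathcal{E}`$:

```python
import sympy as sp

eta, K, E0 = sp.symbols('eta K E0', positive=True)

# Scale factor of Eq. (2); for K < 0 the same expression turns into
# sinh(sqrt(-K)*eta)/sqrt(-K) by analytic continuation.
a = sp.sqrt(E0 / 3) * sp.sin(sp.sqrt(K) * eta) / sp.sqrt(K)

# Eq. (1) demands a''(eta) + K*a(eta) = 0 for a traceless energy-momentum tensor.
residual = sp.simplify(sp.diff(a, eta, 2) + K * a)

# The flat limit K -> 0 recovers a(eta) = sqrt(E0/3)*eta.
flat = sp.simplify(sp.limit(a, K, 0) - sp.sqrt(E0 / 3) * eta)

print(residual, flat)   # -> 0 0
```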
The perturbation equation expressed in the orthogonal gauge<sup>1</sup><sup>1</sup>1or the gauge-invariant differential measures of inhomogeneity and parameterized by the conformal time<sup>2</sup><sup>2</sup>2In the orthogonal gauge the conformal time is defined as the integral $`\eta =\int \frac{dt}{a(t)}`$, where time $`t`$ means the orthogonal time — the time parameter constant on orthogonal hypersurfaces. $`\eta `$ takes the canonical form (free of first derivatives)
$$\frac{\partial ^2}{\partial \eta ^2}X(\eta ,𝐱)-\frac{2\mathcal{E}}{3a^2(\eta )}X(\eta ,𝐱)-\frac{1}{3}{}_{}{}^{(3)}\mathrm{\Delta }X(\eta ,𝐱)=0$$
(3)
where $`{}_{}{}^{(3)}\mathrm{\Delta }`$ denotes the Laplace-Beltrami operator acting on orthogonal hypersurfaces.
Equation (3) can be easily derived from the Raychaudhuri and the continuity equations (see the procedure in or ). It also can be recovered from the Sakai equation (formula (5.1), $`K\to X`$), the equation for density perturbations in orthogonal gauge (Bardeen’s formula (4.9), $`\rho _\mathrm{m}\to X`$, Kodama and Sasaki chap. IV, formula (1.5), $`\mathrm{\Delta }\to X`$, Lyth and Mukherjee formulae (16–17), $`\delta \to X`$, Padmanabhan Eq. (4.88), $`\delta \to X`$), the equation for gauge invariant metric potentials (Brandenberger, Kahn and Press formula (3.35), $`\mathrm{\Phi }_H/\rho a^2\to X`$), the equation for gauge invariant density gradients (Ellis, Bruni and Hwang formula (38), $`𝒟\to X`$) or Laplacians (Olson formulae (8–9), as well as its extension to open universes formula (22)) after transforming these equations to conformal time (if parameterized differently) and employing the Helmholtz equation to restore the partial form of the perturbation equation. Suitable changes of the variable names as indicated above ($`\text{original}\to X`$) are necessary.
We introduce a new perturbation variable $`\widehat{X}`$
$$\widehat{X}(\eta ,𝐱)=\frac{1}{a(\eta )}\frac{\partial }{\partial \eta }(a(\eta )X(\eta ,𝐱)).$$
(4)
While $`X(\eta ,𝐱)`$ satisfies (3) and $`a(\eta )`$ is given by (2), the perturbation variable $`\widehat{X}`$ obeys the wave equation in its normal form
$$\frac{\partial ^2}{\partial \eta ^2}\widehat{X}(\eta ,𝐱)-\frac{1}{3}{}_{}{}^{(3)}\mathrm{\Delta }\widehat{X}(\eta ,𝐱)=0.$$
(5)
The time derivatives of the gauge-invariant inhomogeneity measures also form gauge-invariant variables. In this sense the variable $`\widehat{X}`$ is as good as $`X`$. However, time derivatives may be difficult to observe at the last scattering surface, and hardly represent physically meaningful aspects of the cosmic structure. The equation (5) plays only an auxiliary role, but is nevertheless very useful. Formally, it describes a wave (a massless field) propagating in the static space-time of constant space-curvature. The specific case of positive curvature (Einstein static universe) has been considered in the context of quantum field theory on curved background. In our case spaces of zero or negative curvature are of particular importance. We will discuss both cases individually.
## 3 Sound waves on the flat background
When the space curvature vanishes the equation (3) reads as
$$\frac{\partial ^2}{\partial \eta ^2}X(\eta ,𝐱)-\frac{2}{\eta ^2}X(\eta ,𝐱)-\frac{1}{3}{}_{}{}^{(3)}\mathrm{\Delta }X(\eta ,𝐱)=0$$
(6)
and is essentially the same as the propagation equation for gravitational or electromagnetic<sup>3</sup><sup>3</sup>3See formula (5.2.6) in after substituting $`g=\eta ^6`$. waves in the dust-filled universe
$$\frac{\partial ^2}{\partial \eta ^2}X(\eta ,𝐱)-\frac{2}{\eta ^2}X(\eta ,𝐱)-{}_{}{}^{(3)}\mathrm{\Delta }X(\eta ,𝐱)=0.$$
(7)
The only differences are that gravitational and electromagnetic waves are expressed by the tensor $`h_{\mu \nu }`$ and the vector $`A_\mu `$ respectively, and they propagate with the speed of light ($`c=1`$), while the solutions to equation (6) represent scalar waves travelling with the velocity $`v=1/\sqrt{3}`$.
Now, the Laplacian $`{}_{}{}^{(3)}\mathrm{\Delta }`$ operates in Euclidean space. Equation (5) when expressed in Cartesian coordinates $`\{𝐱\}`$ is solved by an arbitrary function $`\widehat{X}=\widehat{X}(𝐧𝐱-v\eta )`$ with $`v=1/\sqrt{3}`$ and $`𝐧𝐧=1`$. However, to keep the linear approximation valid we require $`\widehat{X}(𝐧𝐱-v\eta )`$ to be limited throughout the space<sup>4</sup><sup>4</sup>4$`\forall \eta \ \exists ϵ\in \text{IR}:\forall 𝐱\ -ϵ<\widehat{X}(𝐧𝐱-v\eta )<ϵ`$.
Knowing the general solutions of the equation (5) we can return to observables $`X`$. We look for the general form of $`X(\eta ,𝐱)`$ among the solutions of the equation (4)
$$X(\eta ,𝐱)=\frac{1}{\eta }(\int _0^\eta \eta ^{}\widehat{X}(𝐧𝐱-v\eta ^{})d\eta ^{}+F(𝐱)).$$
(8)
On the strength of (6), $`F(𝐱)`$ is harmonic, $`{}_{}{}^{(3)}\mathrm{\Delta }F(𝐱)=0`$, and must be constant if bounded throughout the space of constant curvature . With no loss of generality<sup>5</sup><sup>5</sup>5The freedom to choose this constant is not different from the ambiguity in the indefinite integral in (8). Appropriate integration constants are traditionally tuned to give the perturbation a spatial average equal to zero., we put $`F(𝐱)=0`$. Eventually, the general, spatially bounded solution to the equation (6) is expressed by the integral
$$X(\eta ,𝐱)=\frac{1}{\eta }\int _0^\eta \eta ^{}\widehat{X}(𝐧𝐱-v\eta ^{})d\eta ^{}$$
(9)
of an arbitrary, but also spatially bounded, function $`\widehat{X}(𝐧𝐱-v\eta )`$. The solution describes a wave with a time-dependent profile, travelling with the constant velocity $`v=1/\sqrt{3}`$.
This can be easily confirmed by Fourier expansion analysis. Indeed, to any real function $`\widehat{X}(𝐧𝐱-v\eta )`$ expressed as
$$\widehat{X}(\eta ,𝐱)=\int (𝙰_k𝚞_k(\eta ,𝐱)+𝙰_k^{}𝚞_k^{}(\eta ,𝐱))d𝐤$$
(10)
with
$$𝚞_k(\eta ,𝐱)=\frac{1}{\sqrt{2\omega }}\mathrm{e}^{i(𝐤𝐱-\omega \eta )}$$
(11)
there corresponds $`X(\eta ,𝐱)`$
$$X(\eta ,𝐱)=\int (𝒜_ku_k(\eta ,𝐱)+𝒜_k^{}u_k^{}(\eta ,𝐱))d𝐤$$
(12)
expanded into modes $`u_k(\eta ,𝐱)`$
$$u_k(\eta ,𝐱)=\mu _\omega (\eta )\mathrm{e}^{i𝐤𝐱}=\frac{1}{\sqrt{2\omega }}\left(1+\frac{1}{i\omega \eta }\right)\mathrm{e}^{i(𝐤𝐱-\omega \eta )}$$
(13)
The frequency $`\omega `$ obeys the dispersion relation $`\omega ^2=k^2/3`$, the Fourier coefficient $`𝒜_k=\frac{1}{i\omega }𝙰_k`$ is an arbitrary complex function of the wave number $`k`$, and $`u_k`$ is obtained from (9) after substituting $`\widehat{X}=𝚞_k`$. The modes $`u_k`$, like $`𝚞_k`$, form an orthonormal basis in the function space with the Klein-Gordon scalar product . Both $`𝚞_k`$ and $`u_k`$ form travelling waves, but only $`𝚞_k`$ have an absolute value constant in time. Therefore, the generic perturbation $`X(\eta ,𝐱)`$ is composed of plane waves $`u_k`$ of decaying amplitude, which perfectly agrees with the Sachs and Wolfe results (pp. 76–77) obtained in an alternative perturbation approach. Waves move with the constant velocity $`v=1/\sqrt{3}`$ independently of their length-scale. Short-scale and long-scale perturbations do not form different classes of solutions.
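A quick numerical sanity check (plain Python, standard library only) confirms that the amplitude $`\mu _\omega (\eta )`$ of the mode (13) satisfies the time part of equation (6): after dividing out the spatial factor $`\mathrm{e}^{i𝐤𝐱}`$, the equation reduces to $`\mu ^{\prime \prime }-(2/\eta ^2)\mu +\omega ^2\mu =0`$ with $`\omega ^2=k^2/3`$.

```python
import cmath

# mu(eta) = (1/sqrt(2w)) * (1 + 1/(i*w*eta)) * exp(-i*w*eta) from eq. (13);
# it should obey  mu'' - (2/eta**2)*mu + w**2*mu = 0  (eq. (6) for one mode).

def mu(eta, w):
    return 1 / cmath.sqrt(2 * w) * (1 + 1 / (1j * w * eta)) * cmath.exp(-1j * w * eta)

def residual(eta, w, h=1e-4):
    # central finite difference for the second time derivative
    d2 = (mu(eta - h, w) - 2 * mu(eta, w) + mu(eta + h, w)) / h ** 2
    return d2 - 2 / eta ** 2 * mu(eta, w) + w ** 2 * mu(eta, w)

for eta in (0.5, 1.0, 7.0):
    for w in (0.5, 2.0):
        assert abs(residual(eta, w)) < 1e-4
```

The residual is only finite-difference noise for any tested frequency and epoch, for superhorizon and subhorizon values of $`\omega \eta `$ alike.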
In the theory appealing to stochastic processes the initial perturbation is given at random at the end of the quantum epoch $`\eta _i>0`$, and develops gravitationally according to (6) in the interval $`\eta >\eta _i`$. Therefore, the solution’s singularity at $`\eta =0`$ is a purely mathematical fact with no physical consequences.
## 4 Sound waves in the curved space
While decomposing perturbations into Fourier series in flat or open universes we should respect some specific effects caused by the curvature. This particularly refers to open universes (the Lobachevski space), where orthogonal expansions exist only for a class of perturbations with sufficiently short-scale autocorrelation . To expand the others one needs a supplementary series (supercurvature modes ) of non-orthogonal<sup>6</sup><sup>6</sup>6In the sense of the Klein-Gordon scalar product. solutions to the Helmholtz equation, which are numbered by imaginary wave numbers $`k\in [-i,i]`$.
Let us adopt spherical coordinates $`\{r,\theta ,\varphi \}`$ as more appropriate for curved maximally symmetric spaces. In the same manner as in scalar field theories, the density perturbation expands as
$$X(\eta ,r,\theta ,\varphi )=\underset{lm}{\sum }\int (A_{klm}u_{klm}(\eta ,r,\theta ,\varphi )+A_{klm}^{}u_{klm}^{}(\eta ,r,\theta ,\varphi ))dk$$
(14)
where modes $`u_{klm}(\eta ,r,\theta ,\varphi )=\mu _{\omega ,K}(\eta )Y_{klm}(r,\theta ,\varphi )`$ are expressed by hyperspherical harmonics $`Y_{klm}(r,\theta ,\varphi )`$ and time-dependent amplitude $`\mu _{\omega ,K}(\eta )`$ fulfilling the time-equation (obtained by separation from (3)):
$$\frac{\mathrm{d}^2}{\mathrm{d}\eta ^2}\mu _{\omega ,K}(\eta )+\left(\frac{k^2-K}{3}-\frac{2K}{\mathrm{sin}^2(\sqrt{K}\eta )}\right)\mu _{\omega ,K}(\eta )=0.$$
(15)
We find the solutions to (15) in the exact form
$$\mu _{\omega ,K}(\eta )=\frac{1}{\sqrt{2}}\sqrt{\frac{\omega }{\omega ^2-K}}\left(1+\sqrt{K}\frac{\mathrm{cot}(\sqrt{K}\eta )}{i\omega }\right)\mathrm{e}^{-i\omega \eta }.$$
(16)
and their complex conjugates. Solutions $`\mu _{\omega ,K}(\eta )`$ approach $`\mu _\omega (\eta )`$ (eq. 13) in the $`K\rightarrow 0`$ limit. The frequency $`\omega `$ and the wave number are related to each other by the dispersion relation
$$\omega (k)=\frac{\sqrt{k^2-K}}{\sqrt{3}}.$$
(17)
which can be obtained by simple substitution of (16) into (15) and perfectly agrees with the dispersion relation obtained for the variable $`Y`$ on the strength of equation (5) (compare chapter 5.2).
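The same substitution can be checked numerically. In curvature units $`K=-1`$ one has $`\sqrt{K}\mathrm{cot}(\sqrt{K}\eta )=\mathrm{coth}\eta `$, so the amplitude (16) should obey $`\mu ^{\prime \prime }+((k^2+1)/3-2/\mathrm{sinh}^2\eta )\mu =0`$ with $`\omega `$ given by (17). A short Python sketch:

```python
import cmath
import math

# Open-universe amplitude (16) at K = -1:
#   mu(eta) = (1/sqrt(2)) * sqrt(w/(w**2+1)) * (1 + coth(eta)/(i*w)) * exp(-i*w*eta)
# with w**2 = (k**2 + 1)/3 from the dispersion relation (17).

def mu(eta, w):
    coth = math.cosh(eta) / math.sinh(eta)
    return (math.sqrt(w / (w ** 2 + 1)) / math.sqrt(2)
            * (1 + coth / (1j * w)) * cmath.exp(-1j * w * eta))

def residual(eta, k, h=1e-4):
    w = math.sqrt((k ** 2 + 1) / 3)
    d2 = (mu(eta - h, w) - 2 * mu(eta, w) + mu(eta + h, w)) / h ** 2
    return d2 + ((k ** 2 + 1) / 3 - 2 / math.sinh(eta) ** 2) * mu(eta, w)

for k in (0.5, 2.0, 10.0):
    for eta in (0.5, 1.0, 3.0):
        assert abs(residual(eta, k)) < 1e-4
```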
Functions $`Y_{klm}(r,\theta ,\varphi )`$ solve the Helmholtz equation
$${}_{}{}^{(3)}\mathrm{\Delta }Y_{klm}(r,\theta ,\varphi )=-(k^2-K)Y_{klm}(r,\vartheta ,\varphi )$$
(18)
and can be split into the radial part $`\mathrm{\Pi }_{kl}`$ and the two dimensional spherical functions $`Y_{lm}(\vartheta ,\phi )`$
$$Y_{klm}(\chi ,\vartheta ,\phi )=\mathrm{\Pi }_{kl}(\chi )Y_{lm}(\vartheta ,\phi )$$
(19)
Solutions to the radial equation (written in curvature units, $`K=-1`$)
$$\frac{\partial ^2}{\partial \chi ^2}\mathrm{\Pi }_{kl}(\chi )+2\mathrm{coth}\chi \frac{\partial }{\partial \chi }\mathrm{\Pi }_{kl}(\chi )+\left(k^2+1-\frac{l(l+1)}{\mathrm{sinh}^2\chi }\right)\mathrm{\Pi }_{kl}(\chi )=0$$
(20)
are given by
$`\mathrm{\Pi }_{kl}`$ $`=`$ $`N_{kl}\stackrel{~}{\mathrm{\Pi }}_{kl}`$
$`N_{kl}`$ $`=`$ $`\sqrt{{\displaystyle \frac{2}{\pi }}}k^2\left[{\displaystyle \underset{n=0}{\overset{l}{\prod }}}(n^2+k^2)\right]^{-1/2}`$
$`\stackrel{~}{\mathrm{\Pi }}_{kl}`$ $`=`$ $`(k^2\mathrm{sinh}\chi )^l\left(-{\displaystyle \frac{1}{k\mathrm{sinh}\chi }}{\displaystyle \frac{\mathrm{d}}{\mathrm{d}(k\chi )}}\right)^{l+1}\mathrm{cos}(k\chi )`$
The lowest multipole solutions
$`\stackrel{~}{\mathrm{\Pi }}_{k0}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{sinh}\chi }}\left({\displaystyle \frac{\mathrm{sin}k\chi }{k}}\right)`$
$`\stackrel{~}{\mathrm{\Pi }}_{k1}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{sinh}\chi }}\left(-\mathrm{cos}k\chi +\mathrm{coth}\chi {\displaystyle \frac{\mathrm{sin}k\chi }{k}}\right)`$
$`\stackrel{~}{\mathrm{\Pi }}_{k2}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{sinh}\chi }}\left(-3\mathrm{coth}\chi \mathrm{cos}k\chi +(3\mathrm{coth}^2\chi -k^2-1){\displaystyle \frac{\mathrm{sin}k\chi }{k}}\right)`$
are enough to demonstrate the properties of both series of hyperspherical harmonics. For real wave numbers ($`k^2>0`$) the $`\mathrm{\Pi }_{kl}`$ (consequently also the $`Y_{klm}(r,\theta ,\varphi )`$) functions oscillate in space. They form an orthonormal basis in the sense of the scalar product $`(f_1|f_2)=\int f_1f_2^{}\sqrt{g}\mathrm{d}^3x.`$ As proved by Gelfand and Naimark, they are complete for expanding square integrable functions in the Lobachevski space . For imaginary wave numbers contained in the interval $`-1\le k^2<0`$, the $`\mathrm{\Pi }_{kl}`$ (and $`Y_{klm}(r,\theta ,\varphi )`$) functions build the supplementary series. These functions are regular and bounded but strictly positive throughout space, so they are not orthogonal<sup>7</sup><sup>7</sup>7Modes with $`k=\pm i`$ are constant throughout space. One can subtract them by suitable changes in the background metric. Other modes with $`k\in (-i,i)`$, although positive everywhere, decrease with distance strongly enough to keep a zero mean value. For instance, for the spherically symmetric perturbation with the density excess given by $`\rho (\chi )=\mathrm{\Pi }_{\frac{i}{2}0}(\chi )=2csch(\chi )\mathrm{sinh}\left(\frac{\chi }{2}\right)`$ the mass-to-volume ratio $`\overline{\rho }(r)=\frac{4\pi \int _0^r\rho (\chi )\mathrm{sinh}^2(\chi )d\chi }{4\pi \int _0^r\mathrm{sinh}^2(\chi )d\chi }`$ tends to zero as the volume tends to infinity: $`\overline{\rho }(r)\approx \frac{\rho (r)\mathrm{sinh}^2(r)}{\mathrm{sinh}^2(r)}=2csch(r)\mathrm{sinh}\left(\frac{r}{2}\right)\rightarrow 0.`$ No redefinition of the background can absorb perturbations like that.. The supplementary series is redundant for the expansion of square integrable functions. Nevertheless, this series is necessary to expand weakly homogeneous stochastic processes in the Lobachevski space .
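The footnote's example is easy to verify directly. The supercurvature profile $`\rho (\chi )=2csch(\chi )\mathrm{sinh}(\chi /2)`$, which equals $`1/\mathrm{cosh}(\chi /2)`$, is positive everywhere, yet its mass-to-volume ratio over a ball of radius $`r`$ decays to zero. A small numerical sketch using midpoint-rule integration:

```python
import math

# rho(chi) = 2*csch(chi)*sinh(chi/2) = 1/cosh(chi/2) > 0 everywhere, but
# rho_bar(r) = Int_0^r rho sinh^2 dchi / Int_0^r sinh^2 dchi  ->  0  as r grows.

def mean_density(r, n=4000):
    h = r / n
    num = den = 0.0
    for i in range(n):
        chi = (i + 0.5) * h              # midpoint rule
        w = math.sinh(chi) ** 2          # radial volume element (angles cancel)
        num += w / math.cosh(chi / 2) * h
        den += w * h
    return num / den

vals = [mean_density(r) for r in (2.0, 5.0, 10.0, 20.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # monotonically decreasing
assert vals[-1] < 1e-2                              # vanishing mean density
```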
In this way, in the open universe one obtains two types of hyperspherical harmonics $`Y_{klm}(r,\theta ,\varphi )`$ and consequently two types of modes $`u_{klm}(\eta ,r,\theta ,\varphi )`$. Modes $`u_{klm}(\eta ,r,\theta ,\varphi )`$ with real $`k`$ are orthogonal by means of the Klein-Gordon scalar product and expand waves of square integrable profile. Modes $`u_{klm}(\eta ,r,\theta ,\varphi )`$ with $`-1\le k^2<0`$ form “waves of infinite length-scale”. Both types of modes may contribute to the spectrum of randomly (or quantum) originated inhomogeneities .
The density perturbations propagate in the open universe in a different manner than the scalar fields or gravitational waves do. Acoustic waves of different length-scales propagate with different velocities. Indeed, from relation (17) (in curvature units, $`K=-1`$) we can infer both the phase and the group velocity of sound in the form
$$v_\mathrm{f}(k)=\frac{\omega (k)}{k}=\frac{\sqrt{1+k^2}}{\sqrt{3}k}$$
(21)
and
$$v_\mathrm{g}(k)=\frac{\mathrm{d}}{\mathrm{d}k}\omega (k)=\frac{k}{\sqrt{3}\sqrt{1+k^2}}.$$
(22)
The group velocity decreases with decreasing wave number $`k`$, and vanishes completely in the $`k\rightarrow 0`$ limit. The condition $`k=0`$ determines the critical frequency $`\omega (0)=1/\sqrt{3}`$, below which wave propagation is forbidden. Therefore, the acoustic travelling waves are composed of the principal series modes. The supplementary series builds ‘global’ standing waves of supercurvature scale.
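These statements follow directly from (17), (21) and (22) and are easy to check numerically (curvature units, $`K=-1`$): the product of the phase and group velocities equals the flat-space sound speed squared, $`1/3`$; the group velocity vanishes at the critical frequency $`\omega (0)=1/\sqrt{3}`$; and short waves recover $`v=1/\sqrt{3}`$.

```python
import math

SQRT3 = math.sqrt(3.0)

def omega(k):                     # dispersion relation (17) with K = -1
    return math.sqrt(k ** 2 + 1) / SQRT3

def v_phase(k):                   # eq. (21)
    return omega(k) / k

def v_group(k):                   # eq. (22)
    return k / (SQRT3 * math.sqrt(1 + k ** 2))

assert abs(omega(0.0) - 1 / SQRT3) < 1e-12           # critical frequency
for k in (0.1, 1.0, 10.0, 100.0):
    assert abs(v_phase(k) * v_group(k) - 1 / 3) < 1e-12
    assert v_group(k) < 1 / SQRT3 < v_phase(k)       # sub/super-sonic split
assert v_group(0.01) < 0.01                          # long waves barely move
assert abs(v_group(1e4) - 1 / SQRT3) < 1e-8          # short-wave limit
```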
## 5 Gaussian acoustic field
The generic acoustic field described by (5) is composed of waves travelling in different directions. Clearly, this property also applies to solutions of equation (3). The mechanism which creates the initial small perturbations is expected to be of a probabilistic nature (thermodynamic or quantum fluctuations); therefore the evolution of linear structure is usually expressed in the language of stochastic processes. The homogeneity of this stochastic process reflects the homogeneity of the universe. Weakly homogeneous processes have the Fourier expansions
$$\widehat{X}(\eta ,𝐱)=\int (𝙰_k𝚞_k(\eta ,𝐱)+𝙰_k^{}𝚞_k^{}(\eta ,𝐱))d𝐤$$
(23)
and consequently
$$X(\eta ,𝐱)=\int (𝒜_ku_k(\eta ,𝐱)+𝒜_k^{}u_k^{}(\eta ,𝐱))d𝐤$$
(24)
where the coefficients $`𝒜_k`$ are random variables of $`k`$ and the integral has a stochastic sense . In a generic Gaussian field<sup>8</sup><sup>8</sup>8Allen and collaborators distinguish between stationary random fields fulfilling (25-26) and squeezed random fields, which violate (26). Stationary processes have their precise meaning in the framework of the stochastic theory, not equivalent to (25-26). We prefer to name these random fields generic to stress the analogy to generic classical fields. the expectation values of $`𝒜_k`$ fulfill
$`E[𝒜_k𝒜_k^{}^{}]`$ $`=`$ $`𝒫_k\delta _{kk^{}},`$ (25)
$`E[𝒜_k𝒜_k^{}]`$ $`=`$ $`0.`$ (26)
$`𝒫_k`$ is defined as the field spectrum. The first relation guarantees the statistical independence of waves with different wave numbers; the second says that no particular phase is preferred. Waves moving in different directions are statistically independent.
The temperature fluctuations at the last scattering surface draw our attention to spatial correlations of $`\delta \rho /\rho `$ measured at the instant $`\eta =\eta _\mathrm{r}`$. In the flat universe the two-point spatial autocorrelation $`R(h)`$ of the field $`X`$ given by (24) can be expressed as:
$`R(h)`$ $`=`$ $`{\displaystyle \frac{1}{4\pi }}{\displaystyle \int E[X(𝐱,\eta )X(𝐱+𝐡,\eta )]\delta (𝐡𝐡-1)d𝐡}`$ (27)
$`=`$ $`{\displaystyle \frac{1}{4\pi }}{\displaystyle \int 2u_ku_k^{}𝒫_k\mathrm{exp}(i𝐤𝐡)\delta (𝐡𝐡-1)d𝐤d𝐡}`$ (28)
$`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}4\pi k^2j_0(hk)2\mu _\omega \mu _\omega ^{}𝒫_kdk`$ (29)
$`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}4\pi k^2{\displaystyle \frac{\mathrm{sin}(hk)}{hk}}p_k(\eta )dk`$ (30)
where $`j_0`$ is the spherical Bessel function, and $`p_k`$ stands for the space spectrum of the density perturbation at a given moment $`\eta `$. Following Peebles we define the transfer function $`T_\omega (\eta )=2\mu _\omega \mu _\omega ^{}`$, which converts the time-invariant field spectrum $`𝒫_k`$ into the space spectrum $`p_k(\eta )`$.
$$p_k(\eta )=T_\omega (\eta )𝒫_k=2\mu _\omega \mu _\omega ^{}𝒫_k.$$
(31)
The formula for the space spectrum $`p_k(\eta )`$ splits into two factors: 1) $`T(\eta )`$ — describing the role of gravity, and 2) $`𝒫_k`$ — coming from other interactions and rendering their probabilistic nature prior to or during the radiational era. We do not discuss any specific form of $`𝒜_k`$ or $`𝒫_k`$ in this paper. We assume, however, that $`𝒜_k`$ enables one to construct small perturbations, and in particular does not cause divergences in the Fourier integrals. Employing (13) one easily finds
$$T_\omega (\eta )=\frac{1}{\omega }(1+\frac{1}{(\omega \eta )^2}).$$
(32)
As seen from (32), the evolution of each mode depends on the product $`(\omega \eta )^2`$. The contribution from modes much larger than the horizon scale ($`\omega \eta \ll 1`$) strongly decreases with time, while perturbations well inside the horizon ($`\omega \eta \gg 1`$) keep a constant amplitude. This property confirms the stability of the Robertson-Walker symmetry against generic (both classical and stochastic) large-scale density perturbations.
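The two regimes can be illustrated with a few lines of Python applied to (32): a superhorizon mode's transfer function falls off as $`\eta ^{-2}`$, while a subhorizon mode is essentially frozen.

```python
def T(w, eta):
    # transfer function (32): T_w(eta) = (1/w) * (1 + 1/(w*eta)**2)
    return (1 / w) * (1 + 1 / (w * eta) ** 2)

w = 1.0
# superhorizon (w*eta << 1): T scales as eta**-2, so doubling eta divides it by ~4
assert 3.99 < T(w, 1e-3) / T(w, 2e-3) < 4.0
# subhorizon (w*eta >> 1): the amplitude is essentially frozen
assert abs(T(w, 1e3) / T(w, 2e3) - 1) < 1e-5
```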
In the same manner one can express random acoustic fields in the open universe
$$X(\eta ,r,\theta ,\varphi )=\underset{lm}{\sum }\int (A_{klm}u_{klm}(\eta ,r,\theta ,\varphi )+A_{klm}^{}u_{klm}^{}(\eta ,r,\theta ,\varphi ))dk.$$
(33)
According to Yaglom’s theorem, for weakly homogeneous processes the integration runs over both series, principal and supplementary. The autocorrelation function now reads :
$$R(h)=\underset{R_+\cup [0,i]}{\int }4\pi k^2\frac{\mathrm{sin}(hk)}{k\mathrm{sinh}(h)}p_k(\eta )dk$$
(34)
and the transfer function determined from formula (16) and (31) takes the form
$$T_\omega (\eta )=\frac{1}{\omega }\left(1-\frac{1}{\omega ^2-K}\frac{K}{\mathrm{sinh}^2(\sqrt{-K}\eta )}\right)$$
(35)
We rewrite $`T_\omega (\eta )`$ as a function of the energy density
$$T_\omega (\rho )=\frac{1}{\omega }\left(1+\frac{\sqrt{\rho }}{3(\omega ^2-K)}\right)$$
(36)
to demonstrate that the curvature modifies the space spectrum $`p_k`$ substantially only in the low frequency limit (supercurvature modes). Therefore, one may expect to find the curvature signature mostly in the low multipoles (dipole, quadrupole) . To extract this geometrical effect the knowledge of the field spectrum $`𝒫_k`$ is indispensable.
Expansions (12) are typically employed in the gravitational waves theory , and in the scalar field theory , while the density perturbation theories traditionally solve, basically the same, propagation equation in terms of the Bessel $`J_{3/2}`$ and Neumann $`N_{3/2}`$ functions. The $`J_{3/2}`$ and $`N_{3/2}`$ are identified with “growing” and “decaying” modes, respectively, according to their limiting behaviour at $`\eta =0`$. Since the transition from the $`\{u,u^{}\}`$ basis to $`\{J_{3/2},N_{3/2}\}`$ is a unitary transformation, both representations are equivalent. When the “decaying” mode is rejected (a “standard practice” in cosmology — see comments in ) the unitarity is broken and the solution space is truncated to the space of standing waves. Then the acoustic field is in a highly “squeezed state” and consequently characteristic peaks in the transfer function appear .
## 6 Remarks on scales and observables
There is a substantial difference between the dispersion on curvature we described above and “the curvature imprint” in the CMBR spectrum anticipated by the acoustic peaks hypothesis. The latter hypothesis claims that the early perturbed universe was dominated by stationary waves . One may justify this assumption by appealing to squeezing phenomena at the transition from the de Sitter phase to the radiation dominated epoch. (In transitions like that, large-scale stationary gravitational waves are generated (see ). The same refers to the massless scalar field.) The dominance of standing waves with specifically correlated phases should exhibit a series of peaks in the CMBR spectrum. Positions of these peaks are sensitive to details of the universe dynamics ($`\mathrm{\Omega },\mathrm{\Lambda }`$, etc.).
On the contrary, the dispersion effect described in section 4 has a strictly geometrical character. It comes directly from the wave equation in the radiational epoch and does not depend on the universe’s past (evolution prior to the radiational era is irrelevant here). The radiation-filled universe becomes “opaque” to sound waves greater than the curvature radius, whatever the sound origin. In particular, no additional mechanism preferring standing waves at the beginning of the radiational era is needed. On that account the dispersion might form a reliable curvature tracer, provided it is observable at all, i.e.
1) the space scales of supercurvature perturbations must be “small enough” to fit well in the observable part of our universe, and
2) observational data should be complete enough to distinguish between standing and travelling waves at the last scattering.
The answer to the first question is relatively easy, and it was already formulated in the literature in terms of multipole decomposition . We repeat the same result below by use of a simple geometrical consideration. Let us assume we live in an open ($`\mathrm{\Omega }=0.2`$) universe which is presently dominated by matter ($`p=0`$) . We see the last scattering surface at some $`\eta _r`$ with the redshift $`z_r=1000`$. For the sake of simplicity (following ) we assume an instant transition from the radiational epoch ($`p=\rho /3`$) to the galactic era ($`p=0`$), which occurs just at the last scattering moment $`\eta =\eta _r`$. In such a universe model the scale factor evolves as
$$a(\eta )=\sqrt{\frac{}{3}}\frac{\mathrm{sinh}^2\left(\frac{\eta +\eta _r}{2}\right)}{\mathrm{sinh}(\eta _r)}$$
and the radius of the visible universe $`\chi _r`$ can be easily expressed as a function of redshift $`z_r`$ and the cosmological parameter $`\mathrm{\Omega }`$
$$\chi (\mathrm{\Omega },z_r)=2arccoth\left(\frac{1}{\sqrt{1-\mathrm{\Omega }}}\right)-2arcsinh\left(\sqrt{\frac{1-\mathrm{\Omega }}{\mathrm{\Omega }(1+z_r)}}\right)$$
Setting typical values $`\mathrm{\Omega }=0.2`$ and $`z_r=1000`$ one obtains $`\chi _r=2.76`$. The equator plane “draws” on the last scattering surface a circle of perimeter $`l=2\pi \mathrm{sinh}(\chi _r)=49.5`$; consequently, the curvature radius (in these units equal to one) takes $`\alpha =360/l=7.28`$ degrees on the sky. Let us now consider a spherically symmetric density perturbation (on the last scattering surface) described by the $`k=0`$ hyperspherical function $`Y_{000}=\chi csch(\chi )`$. The $`Y_{0lm}`$ functions, with arbitrary $`l`$ and $`m`$, form a boundary between the principal and supplementary series, and can be understood as the “shortest” supercurvature modes. Since these functions are positive everywhere, we express the perturbation length-scale as the half-magnitude width $`l_{1/2}`$. For the spherically symmetric mode $`Y_{000}`$ it is roughly $`l_{1/2}=2.2`$. This corresponds to $`32^{\circ }`$ on the sky. The equator intersects about 10 patches of that size. The supercurvature perturbations contribute mainly to the lowest multipoles in the CMBR spectrum (roughly $`l<5`$), but the visible part of the universe is large enough to produce the curvature effects. Another problem is how to determine from the CMBR data which waves are stationary and which are travelling ones. The oscillation time scale for waves close to the critical frequency $`\omega _\mathrm{c}=\sqrt{1/3}`$ is of the order of the universe age, thus no direct observation of their temporal behaviour can be made. The same refers to squeezed gravitational waves (see ). On the other hand, the information we need is “hidden” on the last scattering surface, and the “only task” is to read it properly. For standing perturbations the density and the velocity fields are strongly correlated .
Density extrema coincide with the expansion extrema throughout the entire space<sup>9</sup><sup>9</sup>9Considerably more complex coincidences may be expected when multifluid models are taken into account. These cases need independent investigations.. This means that even in random acoustic fields the density and the velocity perturbations lose their statistical independence on scales comparable with the space curvature, and consequently they would gradually correlate on the sky when the angular scales outgrow ten degrees. The key to solving this problem is to find a second independent observable, which would help us to separate the potential and Doppler contributions to the CMBR temperature fluctuations. We can hardly propose a definite candidate at the present stage of observations, but both the polarization measurements and the large scale flows analysis seem to be steps in the right direction.
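As a sanity check, the angular-scale estimates of this section are easy to reproduce numerically (using the identity $`arccoth(x)=artanh(1/x)`$):

```python
import math

def chi(Omega, z):
    # comoving radius to redshift z in the open model, eq. for chi(Omega, z_r)
    return (2 * math.atanh(math.sqrt(1 - Omega))
            - 2 * math.asinh(math.sqrt((1 - Omega) / (Omega * (1 + z)))))

chi_r = chi(0.2, 1000)                      # radius of the visible universe
l = 2 * math.pi * math.sinh(chi_r)          # equatorial perimeter on the LSS
alpha = 360.0 / l                           # degrees per curvature radius

assert abs(chi_r - 2.76) < 0.01
assert abs(l - 49.5) < 0.2
assert abs(alpha - 7.28) < 0.05
```

The three asserted values match the numbers quoted in the text for $`\mathrm{\Omega }=0.2`$ and $`z_r=1000`$.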
## 7 Summary
The gauge-invariant analysis confirms that density perturbations in the radiation dominated universe form a field of acoustic waves. In the flat universe the density perturbations of all length-scales move with the same sound speed $`v=1/\sqrt{3}`$. Short and long perturbations do not form different classes of solutions. The perturbations’ velocity is independent of the wave number, and in particular is the same for subhorizon and superhorizon inhomogeneities. Although the gauge-invariant theory confirms the fundamental properties of the acoustic field, which have been known from gauge-specific descriptions , the propagation equations are not the same. An outstanding property of the gauge-invariant description is that the propagation equation for sound in the radiational era is identical with the propagation equations for gravitational or electromagnetic waves in the matter dominated universe.
In the open universe perturbations evolve in a more complex manner. The negative space curvature causes dispersion of acoustic waves. The universe geometry determines the minimal frequency for travelling acoustic waves in a similar way as the geometry of a waveguide determines the minimal frequency for the waves propagating inside. The critical frequency is related solely to the space curvature (not to the Jeans length-scale). Below this frequency perturbations form standing waves of supercurvature scale. In the radiation dominated universe the distinction between travelling and standing acoustic waves strictly coincides with the division into subcurvature and supercurvature inhomogeneities. Supercurvature standing waves are generic solutions.
As commonly expected, the spectrum transfer function depends on the universe geometry, but the differences are essential only in the large scale limit. In the subcurvature regime a generic Gaussian acoustic field evolves like the acoustic field in the flat universe. Significant curvature effects may appear on supercurvature scales — the lowest multipoles in the MBR temperature map.
## Acknowledgements
We would like to thank Prof. Andrzej Staruszkiewicz for helpful remarks concerning the harmonic analysis in the Lobachevski space. This work was partially supported by State Committee for Scientific Research, project No 2 P03D 014 17.
# Hyperon polarization in Kaon Photoproduction from the deuteron.
## 1 Introduction
Since hyperon-nucleon scattering experiments are difficult to perform, hyperon production processes such as $`\gamma +d\rightarrow K^++Y+N`$ and $`e+d\rightarrow e^{}+K^++Y+N`$ appear as natural candidates for exploring the $`YN`$ interaction. One can obtain information on the $`YN`$ interaction by analyzing the correlated $`YN`$ final states. An inclusive $`d(e,e^{}K^+)YN`$ experiment has already been performed in Hall C at TJNAF, while the data for $`d(\gamma ,K^+Y)N`$ are being analyzed in Hall B.
Recently, we found that various meson-theoretical $`YN`$ interactions generate $`S`$-matrix poles around the $`\mathrm{\Lambda }N`$ and $`\mathrm{\Sigma }N`$ thresholds. The pole near the $`\mathrm{\Sigma }N`$ threshold is related to the strength and the property of the $`\mathrm{\Lambda }N-\mathrm{\Sigma }N`$ coupling and causes enhancements in the $`\mathrm{\Lambda }N`$ elastic total cross sections. The hope is that the pole structure of the $`YN`$ $`t`$ matrix will have visible effects in production processes such as those mentioned above.
In this paper, we study the inclusive $`d(\gamma ,K^+)YN`$ and exclusive $`d(\gamma ,K^+Y)N`$ processes for $`\theta _K=0^{\circ }`$ and predict various observables, including polarization observables.
## 2 Formalism
The reaction processes $`\gamma +d\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma })+N`$ are expressed by the operator $`T_i`$ as
$$T_i|\mathrm{\Psi }_d>=\underset{j}{\sum }U_{ij}t_{\gamma K}^{(j)}|\mathrm{\Psi }_d>\hspace{1em}i,j=\mathrm{\Lambda }N,\mathrm{\Sigma }N,$$
(1)
where the operator $`t_{\gamma K}^{(i)}`$ describes the elementary processes $`\gamma +N\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma })`$, and $`|\mathrm{\Psi }_d>`$ represents the deuteron state, which is generated by the Nijmegen 93 $`NN`$ interaction . The operator $`U_{ij}`$ corresponds to the $`YN`$ final state interaction processes, and is represented as
$`U_{ij}`$ $`=`$ $`\delta _{ij}+V_{ij}G_0^{(j)}+{\displaystyle \underset{j^{}}{\sum }}V_{ij^{}}G_0^{(j^{})}V_{j^{}j}G_0^{(j)}+\mathrm{\dots }`$ (2)
$`=`$ $`\delta _{ij}+{\displaystyle \underset{j^{}}{\sum }}V_{ij^{}}G_0^{(j^{})}U_{j^{}j},`$
where $`V_{ij}`$ is the $`YN`$ interaction including the $`\mathrm{\Lambda }N-\mathrm{\Sigma }N`$ coupling. We ignore the $`K^+`$ meson interaction with the nucleon and hyperon in the final states. From Eqs.(1) and (2), one can deduce the coupled set of integral equations for $`T_i`$,
$$T_i|\mathrm{\Psi }_d>=t_{\gamma K}^{(i)}|\mathrm{\Psi }_d>+\underset{j^{}}{\sum }V_{ij^{}}G_0^{(j^{})}T_{j^{}}|\mathrm{\Psi }_d>.$$
(3)
We solve this set (3) after partial-wave decomposition in momentum space. The three elementary processes $`\gamma +p\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma }^0)`$ and $`\gamma +n\rightarrow K^++\mathrm{\Sigma }^{}`$ are properly incorporated in the driving term in Eq.(3). Equation (3) is solved on the isospin bases $`\mathrm{\Lambda }N`$ and $`\mathrm{\Sigma }N`$, but the resulting amplitudes are transformed into those on the particle bases $`\mathrm{\Lambda }n`$, $`\mathrm{\Sigma }^0n`$ and $`\mathrm{\Sigma }^{}p`$, from which the inclusive $`d(\gamma ,K^+)`$ and exclusive $`d(\gamma ,K^+Y)`$ cross sections and hyperon polarizations are calculated. For details, we refer the reader to ref..
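The structure of the coupled equations (2)-(3) — a multiple-scattering series resummed into a linear system — can be illustrated with a deliberately schematic two-channel toy model. The matrices below are invented numbers, not a realistic $`\mathrm{\Lambda }N`$-$`\mathrm{\Sigma }N`$ interaction:

```python
# Schematic illustration of Eq. (2): the multiple-scattering series
# U = 1 + V*G + V*G*V*G + ...  resums to  U = (1 - V*G)^(-1).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.30, 0.10],                 # diagonal: elastic; off-diagonal: channel coupling
     [0.10, 0.20]]
G = [[0.50, 0.00], [0.00, 0.40]]   # diagonal free propagators G_0^(j)
VG = matmul(V, G)

# sum the Born (Neumann) series
U = [row[:] for row in I2]
term = [row[:] for row in I2]
for _ in range(60):
    term = matmul(VG, term)
    U = [[U[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# closed form: invert the 2x2 matrix (1 - V*G)
a, b = 1 - VG[0][0], -VG[0][1]
c, d = -VG[1][0], 1 - VG[1][1]
det = a * d - b * c
U_exact = [[d / det, -b / det], [-c / det, a / det]]

assert all(abs(U[i][j] - U_exact[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

In the actual calculation the channels are further resolved in partial waves and momentum grid points, but the resummation logic is the same.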
## 3 Results
At present, we calculate the observables only for the $`K^+`$ meson scattered at 0 degrees $`(\theta _{K^+}=0^{\circ })`$. The Nijmegen soft-core $`YN`$ interactions NSC97f and NSC89 and a recently updated production operator for the $`\gamma +N\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma })`$ processes are used.
Figure 1(a) shows the inclusive cross sections which sum up the contributions of the $`K^+\mathrm{\Lambda }n`$, $`K^+\mathrm{\Sigma }^0n`$ and $`K^+\mathrm{\Sigma }^{}p`$ final states. The solid and dashed lines are the predictions of the NSC97f and NSC89 $`YN`$ interactions, respectively. The dotted line shows the results of the plane wave impulse approximation (PWIA). The arrows indicate the two thresholds $`K^+\mathrm{\Lambda }N`$ ($`p_K=977.30`$ MeV/c) and $`K^+\mathrm{\Sigma }N`$ ($`p_K=869.14`$ MeV/c).
The two pronounced peaks around $`p_K=`$945 and 809 MeV/c are due to the quasi-free production of $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }`$, where one of the nucleons in the deuteron is a spectator and has zero momentum in the laboratory system. Significant FSI effects are found around the $`K^+\mathrm{\Lambda }N`$ and $`K^+\mathrm{\Sigma }N`$ thresholds. The cross section is increased by up to 86% by FSI near the $`K^+\mathrm{\Lambda }N`$ threshold. Around the $`K^+\mathrm{\Sigma }N`$ threshold, as shown in Fig.1(b), the strengths and shapes of the enhancements given by NSC97f and NSC89 are quite different.
Figure 2(a) illustrates the exclusive $`d(\gamma ,K^+\mathrm{\Lambda })`$ cross sections just below the $`K^+\mathrm{\Sigma }N`$ threshold ($`p_K=870`$ MeV/c). The FSI effects are seen both at very forward and at large angles. The PWIA cross sections are basically zero at backward angles, while the FSI calculations still show some strength. Figure 2(b) demonstrates the $`\mathrm{\Lambda }`$ recoil polarizations with an incoming polarized photon. The $`\mathrm{\Lambda }`$ recoil polarizations in PWIA are almost one. This is because the incoming photon is polarized along the $`z`$ axis, but the target deuteron is unpolarized and the outgoing $`K^+`$ meson carries no spin and angular momentum in this case ($`\theta _K=0^{\circ }`$). However, the final state interactions cause large deviations from one, and the prediction by NSC97f is quite different from that of NSC89.
The exclusive results to the $`K^+\mathrm{\Sigma }^{}p`$ final states just above this threshold ($`p_K=`$865 MeV/c) are shown in Fig.3. The prominent FSI effects are seen both in the cross sections and in the double polarization observable.
Finally, we briefly discuss the production operator for the $`\gamma +N\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma })`$ processes. In Fig.4, the inclusive cross sections in PWIA obtained with an old version of the operator are compared to those with the present version. The latter has been improved in the fitting to the data of $`\gamma +N\rightarrow K^++\mathrm{\Lambda }(\mathrm{\Sigma })`$, including the new SAPHIR data. The difference between the predictions of the two versions is quite large, as seen in Fig.4, which suggests that the reaction $`d(\gamma ,K^+)YN`$ is another promising candidate for investigating the operator.
# Strange nonchaotic attractor in a dynamical system under periodic forcing
## I INTRODUCTION
The strange nonchaotic attractor (SNA) is an object that has chaotic-attractor features like fractal dimension and nondifferentiability (strangeness) but no exponential sensitivity to initial conditions, i.e., its largest Lyapunov exponent is nonpositive . It has some analogies with trajectories that have been found in studies of the Frenkel-Kontorova model and of the Chirikov-Taylor map . Aubry has shown that the incommensurate ground-states of the Frenkel-Kontorova model can undergo a breaking-of-analyticity transition between a smooth and a fractal set. Shenker and Kadanoff calculated the fractal power spectrum of the fractal trajectory that appears in the Chirikov-Taylor map after the breakup of a KAM-like surface . This map can be related to an incommensurate driving of a nonlinear oscillator. Another example is the accumulation point of the period-doubling cascade of the logistic map . However, this attractor occurs on a zero-measure set in the parameter space. The SNA as a typical behavior, i.e., on a finite-measure set in the parameter space of a model, has been found in the context of nonlinear quasiperiodic external forcing (i.e., the forcing by a signal with two incommensurate frequencies) . The study of the SNA has also recently been connected with the localization problem .
A lot of work has been done in order to characterize the features of an SNA: its route of formation, autocorrelation function and power spectrum. Besides a period-doubling cascade, many different routes to the formation of an SNA have been proposed: (1) the collision between a period-doubled torus and its unstable parent torus ; (2) the progressive fractalization of a two-dimensional ergodic torus ; and (3) for systems with quasiperiodic tori in symmetric invariant subspaces, the loss of the transverse stability of a torus . Another particular feature of the SNA is that its autocorrelation function does not decay with the time delay like that of a chaotic attractor. It can either be fractal or similar to the quasiperiodic case . The power spectrum of an SNA can be singular continuous, and in this case it has fractal features, as discussed in many papers .
The aim of this paper is to answer the question: are SNAs restricted to quasiperiodically driven nonlinear systems? We believe that the answer is no. We studied a map (hereafter called the YOS map) that was proposed in the magnetic context to describe the behavior of an analog of the ANNNI model on the Bethe lattice, and which shows, under external periodic forcing, the occurrence of a SNA . The same map exhibits typical features of a neuron (like activation threshold, nerve blocking and rebound behavior) and has been dubbed a second-order dynamical perceptron (the order referring to the dimension two of the map), because it corresponds to a two-layer recurrent neural network . Independently, Kanter et al. proposed a recurrence-relation scheme known as a sequence generator, which is essentially the same map generalized to any number of dimensions. This map can also be viewed as a discrete-time nonlinear oscillator for some values of the parameters.
Anishchenko et al. have claimed to have found a SNA through periodic driving of a map, but Pikovsky et al. have shown that it was a chaotic attractor with a tiny Lyapunov exponent. We show that the attractor of Anishchenko et al. is indeed strange chaotic, whereas ours is strange nonchaotic and related to quasiperiodic attractors.
This paper is organized as follows. Section II is dedicated to describing the map and its attractors, mainly the strange nonchaotic one. In Section III we characterize this attractor through its Lyapunov exponents, fractal dimension, autocorrelation function and power spectrum. The conclusions are presented in Section IV.
## II THE MAP AND ITS ATTRACTORS
The two-dimensional YOS map with which this paper is concerned is given by
$`x_{n+1}`$ $`=`$ $`\mathrm{tanh}\left[{\displaystyle \frac{x_n-\kappa y_n+H(n)}{T}}\right]`$ (1)
$`y_{n+1}`$ $`=`$ $`x_n`$ (2)
The YOS map was initially proposed to model the mean magnetizations $`(x_n,y_n)`$ of the $`(n^{th};(n-1)^{th})`$-shells of a Bethe lattice, for an analog of the ANNNI model in a constant magnetic field (H was independent of n). Here, we extend it to a nonuniform field. T is the temperature and $`\kappa =J_2/J_1`$, where $`J_1`$ ($`J_2`$) is the exchange coupling between nearest (next-nearest) neighbor spins on the Bethe lattice. Yokoi et al. obtained the phase diagram of this model at zero field . Tragtenberg and Yokoi studied the effect of a finite uniform field . A sinusoidal wave $`H(n)=H_0\mathrm{cos}(2\pi \omega n)`$ is the particular form of $`H(n)`$ we will adopt throughout this paper, representing a shell-dependent external field/input.
Kinouchi and Tragtenberg studied the properties of the map as a neuron model, where $`x_n`$ is the action potential of the neuron at time $`n`$. $`1/T`$ and $`\kappa /T`$ are the weight factors for the two previous states of the neuron. $`H(n)/T`$ is naturally defined as the external current as a function of the discrete time $`n`$. They showed that the map exhibits many neural features.
Kanter et al. proposed the following real number sequence generator:
$$s_l=\mathrm{tanh}\left[\beta \sum _{n=1}^{N}W_ns_{l-n}\right]$$
(3)
and studied this map in the context of time series and neural networks. $`H(n)`$ could be introduced to represent a time-dependent input signal.
From a purely dynamical-systems point of view, the YOS map represents a nonlinear oscillator for the range of parameters considered in this paper, and $`H(n)`$ represents a time-dependent external input. The attractors of the YOS map can be fixed points, Q-cycles (cycles of period Q), quasiperiodic, chaotic or strange nonchaotic . For the parameters $`\kappa =1,T=0.5,H_0=0`$ the system oscillates with period 6 (see Fig. 1), but for different values of $`H_0`$ and $`\omega `$, keeping the same $`\kappa `$ and $`T`$, the attractor becomes richer (see Fig. 2).
This kind of attractor can also be obtained for other values of $`\omega `$, and is therefore characteristic of this periodically driven nonlinear map. For other examples of SNAs of this map see .
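The period-6 behavior quoted above is easy to reproduce numerically. The sketch below iterates Eqs. (1)-(2) for $`\kappa =1,T=0.5,H_0=0`$, assuming (as the competing-interaction, ANNNI-like picture suggests) a minus sign between $`x_n`$ and $`\kappa y_n`$, and checks that the orbit settles onto a 6-cycle:

```python
import numpy as np

def yos_orbit(x0, y0, n_iter, kappa=1.0, T=0.5, H0=0.0, omega=0.0):
    """Iterate Eqs. (1)-(2); the minus sign in x - kappa*y is an assumption
    consistent with the competing-interaction (ANNNI-like) picture."""
    xs = np.empty(n_iter)
    x, y = x0, y0
    for n in range(n_iter):
        H = H0 * np.cos(2.0 * np.pi * omega * n)
        x, y = np.tanh((x - kappa * y + H) / T), x
        xs[n] = x
    return xs

xs = yos_orbit(0.3, -0.1, 6000)          # kappa = 1, T = 0.5, H0 = 0
tail = xs[-600:]
# Deviation from exact period-6 recurrence over the last 600 iterates.
period6_error = np.max(np.abs(tail[:-6] - tail[6:]))
```

Setting `H0` and `omega` to the (unquoted) values used for Fig. 2 would instead produce the richer attractor discussed in the text.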
## III CHARACTERIZATION OF THE SNA
A SNA is a fractal object with no exponential sensitivity to initial conditions (the SNA at the accumulation point of the period-doubling bifurcations of the logistic map has a null Lyapunov exponent but polynomial sensitivity to initial conditions ). In order to characterize this attractor, we investigated the largest Lyapunov exponent, the fractal dimension, the autocorrelation function and the power spectrum.
### A Lyapunov exponents: naïve and more accurate calculation
Before calculating the Lyapunov exponent of the attractor of Fig. 2, let us briefly discuss the sensitivity to initial conditions of one of the attractors studied by Anishchenko et al. .
They proposed a four-dimensional map made up of two asymmetrically coupled circle maps, with two coupling parameters (A and $`\gamma _2`$). They argued that for $`\gamma _2=0`$ their system can be a circle map with quasiperiodic forcing, while a small value of $`\gamma _2`$ makes it an autonomous four-dimensional map that could show a SNA.
This four-dimensional map is given by
$`x_{n+1}`$ $`=`$ $`x_n+\mathrm{\Omega }_1-{\displaystyle \frac{K_1}{2\pi }}\mathrm{sin}(2\pi x_n)+\gamma _1y_n`$ (5)
$`+A\mathrm{cos}(2\pi u_n)mod\mathrm{\hspace{0.33em}1},`$
$`y_{n+1}`$ $`=`$ $`\gamma _1y_n-{\displaystyle \frac{K_1}{2\pi }}\mathrm{sin}(2\pi x_n),`$ (6)
$`u_{n+1}`$ $`=`$ $`u_n+\mathrm{\Omega }_2-{\displaystyle \frac{K_2}{2\pi }}\mathrm{sin}(2\pi u_n)+\gamma _2(y_n+v_n)mod\mathrm{\hspace{0.33em}1},`$ (7)
$`v_{n+1}`$ $`=`$ $`\gamma _2(y_n+v_n)-{\displaystyle \frac{K_2}{2\pi }}\mathrm{sin}(2\pi u_n).`$ (8)
For the set of parameters $`\mathrm{\Omega }_1=0.5,\mathrm{\Omega }_2=(\sqrt{5}1)/2,K_2=0.03,A=0.4,\gamma _1=\gamma _2=0.01`$ and $`K_1=0.8784`$, Anishchenko et al. claimed the attractor is strange nonchaotic. They found a null largest Lyapunov exponent within the numerical accuracy of the method they used.
A positive largest Lyapunov exponent corresponds to an exponential expansion of a hypercube of initial conditions along at least one direction of the phase space. In other words, we can determine the sign of the largest Lyapunov exponent of a map by studying the stretching and contraction of a hypercube of initial conditions. Here we present a simpler version of this procedure, studying only the evolution of the distance between the trajectories generated by just two initial conditions.
We take two different sets of initial values of (x,y,u,v) and calculate the distance $`d(n)`$ between the trajectories generated by each set as the number of iterations is increased. That is perhaps the most naïve way to investigate the largest Lyapunov exponent of a map. The first set we take as $`x_0=y_0=u_0=v_0=0.7`$ and the second as $`x_0^{}=y_0^{}=u_0^{}=v_0^{}=0.7-10^{-12}`$. Figure 3 represents the first 30 000 iterations of the evolution of d(n).
We can see at first sight that the system is chaotic, since the distance between the trajectories with different initial conditions grows exponentially. A simple estimation of the slope of the rugged curve leads to $`(0.8\pm 0.3)\times 10^{-3}`$ for the largest Lyapunov exponent $`\lambda _+`$, where we have assumed that the behavior of the distance is governed by this exponent and given by
$$d(n)d(0)\mathrm{exp}(\lambda _+n).$$
(9)
This result agrees with surprising accuracy with that obtained by Pikovsky and Feudel , using the Wolf-Swift-Swinney-Vastano algorithm .
Thus, this naïve method of checking the sensitivity to initial conditions seems to be powerful, and we will use it, as well as the more accurate method due to Eckmann, Kamphorst, Ruelle and Ciliberto (EKRC) , to calculate the largest Lyapunov exponent of the attractor of Fig. 2.
Fig. 4a exhibits some self-similarity in the behavior of $`d(n)`$ as a function of $`n`$. We took many pairs of initial conditions such that the distances between the initial conditions of each pair were $`10^{-2},10^{-6},10^{-10}`$ and $`10^{-14}`$. We then calculated how $`d(n)`$ varies for each pair as a function of the iterations, for the SNA of Fig. 2. The result is represented in Fig. 4b. The various d(n) x n curves have a scale-invariant-like behavior, i.e., for the various values of the difference in initial conditions the evolution with the iterations is rather similar, preserving its form across different distance scales. Moreover, none of the curves shows exponential divergence, although they may vary within a few orders of magnitude. This suggests a null largest Lyapunov exponent, which we confirmed using calculations based on the EKRC method shown below. This behavior (scale invariance in distance and a zero largest Lyapunov exponent) is similar to that of a typical quasiperiodic attractor.
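The naïve two-trajectory diagnostic is straightforward to sketch. Since the parameter values of the Fig. 2 SNA are not quoted in the text, the nonchaotic case below uses the $`H_0=0`$ YOS map (where nearby trajectories collapse onto the same cycle, so d(n) shrinks) and, for contrast, the chaotic logistic map at r = 4 (where d(n) grows until it saturates at the attractor size):

```python
import numpy as np

def yos_x_traj(x0, y0, n_iter, kappa=1.0, T=0.5):
    """x-trajectory of the H0 = 0 YOS map (minus sign in x - kappa*y assumed)."""
    x, y = x0, y0
    out = np.empty(n_iter)
    for n in range(n_iter):
        x, y = np.tanh((x - kappa * y) / T), x
        out[n] = x
    return out

def logistic_traj(x0, n_iter, r=4.0):
    """x-trajectory of the logistic map, used here as a chaotic contrast."""
    out = np.empty(n_iter)
    x = x0
    for n in range(n_iter):
        x = r * x * (1.0 - x)
        out[n] = x
    return out

# d(n) for two trajectories started 1e-9 apart in each system.
d_yos = np.abs(yos_x_traj(0.3, -0.1, 400) - yos_x_traj(0.3 + 1e-9, -0.1, 400))
d_log = np.abs(logistic_traj(0.123, 400) - logistic_traj(0.123 + 1e-9, 400))
```

The qualitative signature the naïve method relies on is exactly this contrast: the nonchaotic distance collapses toward zero while the chaotic one grows to the size of the attractor.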
Fig. 5 shows the behavior of the absolute value of the largest-Lyapunov-exponent approximants as a function of the number of iterations (neglecting the first 10,000) for the attractor of Fig. 2. This attractor has the same shape for many initial conditions: we took ($`x_0,y_0`$) = ($`\pm 1,\pm 0.5,0;\pm 1,\pm 0.5,0`$). The calculations were performed using the Eckmann-Kamphorst-Ruelle-Ciliberto method. They do indicate that the largest Lyapunov exponent is zero, since the absolute values of its approximants scale with the number of iterations $`n`$ as $`|\lambda _+|n^{-1}`$. Using the same method, we found $`\lambda _{}=-0.709971\pm 0.000001`$ for the smallest Lyapunov exponent.
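When the map and its Jacobian are known explicitly, the full Lyapunov spectrum can be obtained by evolving tangent vectors with repeated QR re-orthonormalization. This is a sketch of that standard procedure (not the data-based EKRC algorithm itself), applied to the $`H_0=0`$ YOS map, where both exponents come out negative, as expected for a stable cycle:

```python
import numpy as np

def lyapunov_spectrum(n_iter=5000, kappa=1.0, T=0.5, x0=0.3, y0=-0.1):
    """Both Lyapunov exponents of the H0 = 0 YOS map from the exact Jacobian,
    with QR re-orthonormalization at every step."""
    x, y = x0, y0
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        u = (x - kappa * y) / T          # minus sign assumed, as before
        sech2 = 1.0 / np.cosh(u) ** 2
        J = np.array([[sech2 / T, -kappa * sech2 / T],   # d(x_{n+1})/d(x_n, y_n)
                      [1.0, 0.0]])                        # d(y_{n+1})/d(x_n, y_n)
        x, y = np.tanh(u), x
        Q, R = np.linalg.qr(J @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_iter

lam1, lam2 = sorted(lyapunov_spectrum(), reverse=True)
```

For the SNA parameters the same routine would instead return a largest exponent drifting toward zero as $`n^{-1}`$, as the text describes.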
### B Fractal dimension
The SNA of Fig. 2 is a complex geometrical object with a fractal Hausdorff dimension ($`D_F`$). In order to find this dimension we have used the box-counting method . The diagram of the number of boxes $`N(a)`$ visited by the SNA of Fig. 2 as a function of the box edge length $`a`$ is shown in Fig. 6. The initial condition is $`(x_0,y_0)=(1,1)`$, and the first $`10^4`$ iterations were discarded. We considered the next $`10^9`$ iterations, and box edges between $`a=10^{-1}`$ and $`a=10^{-3}`$. Even with this number of iterations, we can see that box edges smaller than $`10^{-2.6}`$ lead to artificially small values of the fractal dimension. However, a larger number of iterations was computationally prohibitive. The same behavior was observed in .
Fig. 7 has been constructed by taking ordered sets of four consecutive points of Fig. 6 and calculating the linear coefficient of the best straight line determined by them. Error bars follow from least-squares fitting. The leftmost point of this figure represents the fitting of the four smallest values of log(1/a). The point of order 2 represents the fitting of the second to fifth points of Fig. 6, counting from left to right, and so on. We thus conclude that the fractal dimension is $`D_F=1.80\pm 0.09`$. This is the same value found in Ref. for the attractor of Grebogi et al. , but they considered this value quite uncertain (since it was obtained from just three points). In the same reference, Ding et al. used heuristic arguments to conjecture that $`D_F=2`$ and found no contradiction with the result they numerically obtained. But this value is definitely far from the value we obtained for the attractor of Fig. 2. The evidence points to the fractal character of this object.
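A minimal box-counting estimator of the kind used for Figs. 6-7 can be written in a few lines; here it is sanity-checked on two sets of known dimension (a smooth curve, D = 1, and an area-filling random set, D = 2) rather than on the SNA itself:

```python
import numpy as np

def box_count_dimension(points, edges):
    """Box-counting dimension estimate: count occupied boxes N(a) for each
    edge length a, then fit the slope of log N(a) versus log(1/a)."""
    counts = [len(np.unique(np.floor(points / a), axis=0)) for a in edges]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(edges)), np.log(counts), 1)
    return slope

edges = [1 / 8, 1 / 16, 1 / 32, 1 / 64, 1 / 128]
t = np.linspace(0.0, 1.0, 200000)
line = np.column_stack([t, t])                           # smooth curve: D = 1
square = np.random.default_rng(0).random((200000, 2))    # area-filling set: D = 2
d_line = box_count_dimension(line, edges)
d_square = box_count_dimension(square, edges)
```

As the text notes, the smallest usable box edge is set by the number of orbit points available: once boxes become so small that the finite sample no longer fills them, the fitted slope is biased low.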
### C Autocorrelation function
The normalized autocorrelation function of an attractor { $`x_n`$} can be defined as
$$C(\tau )=\frac{\sum _{n=1}^{N}x(n)x(n+\tau )}{\sum _{n=1}^{N}x^2(n)}.$$
(10)
The calculation of the autocorrelation function can give clues about the nature of the attractor in question: a fractal autocorrelation function can indicate the fractality of the attractor. However, in the presence of a SNA we can observe at least two kinds of autocorrelation function: fractal or quasiperiodic-like .
Fig. 8 represents the autocorrelation function of the attractor of Fig. 2, and is very similar to those of quasiperiodic attractors, like the one found in Ref. for the strange nonchaotic attractor of the model C defined therein. We neglected the first $`10^4`$ iterations and considered averages over the next $`10^5`$ iterations.
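Eq. (10) translates directly into code. As a check, for a purely periodic sequence the normalized autocorrelation returns to 1 at the period and to -1 at half the period:

```python
import numpy as np

def autocorrelation(x, tau, N):
    """Normalized autocorrelation of Eq. (10); x must hold N + tau samples."""
    x = np.asarray(x, dtype=float)
    return np.dot(x[:N], x[tau:N + tau]) / np.dot(x[:N], x[:N])

n = np.arange(2000)
x = np.cos(2.0 * np.pi * n / 8.0)       # a period-8 test sequence
c_period = autocorrelation(x, 8, 1000)  # full period: C = +1
c_half = autocorrelation(x, 4, 1000)    # half period: C = -1
```

Applied to an orbit of the map, the same routine would produce curves like Fig. 8, whose non-decaying oscillation is the quasiperiodic-like signature discussed above.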
### D Power spectrum
The first step in investigating the power spectrum of an attractor given by a sequence {$`x_n`$} is to define its discrete Fourier transform
$$s(w,N)=N^{-1/2}\sum _{n=1}^{N}x_ne^{i2\pi wn}.$$
(11)
Then,we can define the power spectrum of the attractor as:
$$P(w)=\underset{N\rightarrow \mathrm{\infty }}{\mathrm{lim}}<|s(w,N)|^2>.$$
(12)
The power spectrum of periodic attractors consists of $`\delta `$-peaks at the harmonics of the fundamental frequency, whilst in the chaotic case the spectrum is continuous. For a quasiperiodic case characterized by two incommensurate frequencies $`\omega _1`$ and $`\omega _2`$, the spectrum contains all the frequencies of the form $`n\omega _1+m\omega _2`$.
Many works report power spectra of SNAs with a singular continuous character, like those found in some models of quasiperiodic lattices and quasiperiodically forced quantum systems . Such a spectrum has a fractal appearance, with peaks weaker than $`\delta `$-functions distributed along a self-similar landscape.
Fig. 9 shows the power spectrum of the SNA of Fig. 2. It has many scales of peaks, exhibiting a fractal appearance. We neglected the transient of the first $`10^4`$ iterations and took the next $`10^4`$. The detailed study of the fractal character of this power spectrum, as well as a renormalization-group approach to it, is the subject of a forthcoming publication.
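Eqs. (11)-(12) can be checked at finite N: for a single cosine sampled at a harmonic frequency $`w_0=k/N`$, the definition gives $`|s(w_0,N)|^2=N/4`$, with only small leakage elsewhere. A sketch:

```python
import numpy as np

def power_spectrum(x, freqs):
    """Finite-N version of |s(w,N)|^2 from Eq. (11)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(1, N + 1)
    return np.array([abs(np.sum(x * np.exp(2j * np.pi * w * n)) / np.sqrt(N)) ** 2
                     for w in freqs])

N = 1024
n = np.arange(1, N + 1)
w0 = 5.0 / N                           # a harmonic of the sampling length
x = np.cos(2.0 * np.pi * w0 * n)
P = power_spectrum(x, [w0, 2.5 / N])   # on-peak and off-peak frequencies
```

Fed with an orbit of the SNA instead of a cosine, the same routine produces the many-scaled, self-similar landscape of Fig. 9.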
## IV Conclusions
We have shown that a strange nonchaotic attractor can result from the dynamics of a periodically driven nonlinear oscillator, the YOS map: a fractal object with zero largest Lyapunov exponent emerges from this dynamics in a finite range of the parameter space. The Hausdorff dimension of this object is $`D_F=1.80\pm 0.09`$. Its correlation function oscillates like those of quasiperiodic attractors and its power spectrum has a fractal (or multifractal) appearance.
## V acknowledgements
MT thanks O. Kinouchi for suggesting that this attractor could be strange nonchaotic, is grateful to N. N. Oiwa for valuable discussions, mainly for his criticism regarding the character of the attractor reported here, thanks C. Denniston for bringing Ref. to our attention, and acknowledges FINEP/Brazil for partial financial support. We thank J. M. Yeomans for a careful reading of the manuscript. A. S. Cassol and F. L. S. Veiga acknowledge CAPES/CNPq and CNPq, respectively, for partial financial support.
# A Broad 22 𝜇m Emission Feature in the Carina Nebula H II Region Based on observations with ISO, an ESA project with instruments funded by ESA members states (especially the PI countries France, Germany, the Netherlands, and the United Kingdom) and with the participation of ISAS and NASA.
## 1 Introduction
Supernovae have been suggested, besides evolved stars, as one of the major sources of interstellar dust (see Gehrz 1989, Jones and Tielens 1994, Dwek 1998 for reviews). Supporting evidence includes observations of dust condensation in the ejecta of SN 1987A (Moseley et al. 1989, Whitelock et al. 1989, Dwek et al. 1992, Wooden et al. 1993), and those of the newly synthesized dust in the Cassiopeia A (Cas A) supernova remnant (Arendt, Dwek, & Moseley 1999). The dust formation mechanism and the amount of dust that is formed in supernovae are still poorly known. Observations of SN 1987A and Cas A showed that the mass of the newly formed dust is much less than expected, and the discrepancy may be due to the fact that most of the dust is cold and cannot be detected in the far-infrared (Dwek 1998, Arendt et al. 1999). Finding an abundant dust component in the interstellar medium (ISM) which is formed only in supernovae would support the hypothesis that supernovae are a major source of interstellar dust. Furthermore, since the amount of this specific grain is proportional to the number of supernovae, its total mass in the ISM can be used as a tracer of the supernova rate or star formation rate in external galaxies. In this Letter we report the detection of a broad 22 $`\mu `$m emission dust feature in the Carina nebula H II region from the ISO guaranteed time observations. We found that the shape of the present 22 $`\mu `$m emission dust feature is similar to that of the 22 $`\mu `$m emission feature observed in Cas A. We also found a similar emission feature in two starburst galaxies from the ISO archival data.
## 2 Observations
The observations were made as part of the ISO guaranteed time program (TONAKA.WDISM1) using the Short Wavelength Spectrometer (SWS; de Graauw et al. 1996). All the observations were made with the SWS AOT01 mode with scan speed of 1, which provided full grating spectra of 2.38 to 45.2 $`\mu `$m with a resolution of $`\lambda `$/$`\mathrm{\Delta }`$$`\lambda `$ = 300. The data have been processed through the Off-Line Processing (OLP) 8.4, and reduced with the SWS Interactive Analysis (IA) package developed by the SWS Instrument Dedicated Team. We observed the Car I H II region in the Carina nebula and regions away from it toward the nearby molecular clouds (see de Graauw et al. 1981 for discussions of molecular clouds in the Carina nebula). The Car I H II region is excited by Trumpler 14, an open cluster containing numerous O-type stars. In total, four positions were observed. Pos 1 is at the Car I H II region with $`l`$ = 287.399 and $`b`$ = –0.633. Pos 2 ($`l`$ = 287.349 and $`b`$ = –0.633), Pos 3 ($`l`$ = 287.299 and $`b`$ = –0.633), and Pos 4 ($`l`$ = 287.249 and $`b`$ = –0.633) are at distances of 2.4, 4.7, and 7.1 pc from Pos 1, respectively. Throughout this Letter, we adopt a Sun-to-Carina nebula distance of 2.7 kpc (Grabelsky et al. 1988). Since the SWS aperture size varies across the wavelength range, we adjusted the difference in fluxes at the SWS band boundaries by scaling the spectra to the shortest band. This adjustment does not affect the results presented here.
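The quoted projected separations of 2.4, 4.7 and 7.1 pc follow from the 0.05-degree steps in Galactic longitude at the adopted 2.7 kpc distance, as a quick check shows (the cos b factor, about 0.99994 at b = -0.633, is negligible and omitted here):

```python
import numpy as np

# Projected separations of Pos 2-4 from Pos 1: only l changes, in 0.05 deg
# steps, at the adopted Sun-to-Carina distance of 2.7 kpc.
D_PC = 2700.0                             # 2.7 kpc in pc
dl_deg = np.array([0.05, 0.10, 0.15])     # l(Pos 1) - l(Pos n)
sep_pc = D_PC * np.deg2rad(dl_deg)        # small-angle projected separation
```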
## 3 Results
Figure 1a shows the observed SWS spectrum of the Carina nebula at Pos 1. A broad feature from $``$ 18 to 28 $`\mu `$m is clearly seen in the spectrum. The adjustment of the observed fluxes due to the different aperture sizes of SWS has no effect on this feature, since the SWS has the same aperture size from 12 – 27.5 $`\mu `$m. It is difficult, however, to derive the spectral shape of this feature correctly, since the underlying continuum emission is very strong. We derived the feature shape by assuming the feature starts at 18 $`\mu `$m and ends at 28 $`\mu `$m. Then the assumed underlying continuum emission, as shown in Figure 1a by the dashed line, is subtracted from the observed spectrum. The continuum emission comprises grains of graphite with a temperature of 157 K and silicate with a temperature of 40 K. Dust optical constants are adopted from Draine (1985). The resultant feature shape is shown in Figure 1b, in which a peak around 22 $`\mu `$m is clearly seen. This new 22 $`\mu `$m feature is distinctly different from the 21 $`\mu `$m feature that was discovered by Kwok, Volk, & Hrivnak (1989). The 21 $`\mu `$m feature, which was only observed in carbon-rich post asymptotic giant branch stars, has a much narrower width of $``$ 4 $`\mu `$m (Volk, Kwok, & Hrivnak 1999) compared to that of the present 22 $`\mu `$m feature (with a width of $``$ 10 $`\mu `$m). This suggests that the 21 and 22 $`\mu `$m emission features arise from different kinds of dust grains. Figure 2a shows another Carina nebula spectrum at Pos 2. The broad feature from 18 to 28 $`\mu `$m is also seen in this spectrum. The continuum emission comprising graphite with a temperature of 135 K and silicate with a temperature of 42 K is assumed (the dashed line in Fig. 2a) and subtracted from the observed spectrum. The excess emission is shown in Figure 2b.
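The continuum-subtraction step can be illustrated with a toy model. The sketch below uses plain Planck functions at the quoted temperatures instead of the actual graphite/silicate emissivities from the Draine (1985) optical constants; the relative weight of the cold component and the Gaussian bump standing in for the 22 $`\mu `$m feature are both arbitrary, illustrative choices:

```python
import numpy as np

def planck_lambda(wl_um, T):
    """Planck function B_lambda (arbitrary units) at wavelength wl_um in microns."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    wl = wl_um * 1e-6
    return (2.0 * h * c ** 2 / wl ** 5) / np.expm1(h * c / (wl * k * T))

wl = np.arange(5.0, 60.0, 0.01)
# Two-temperature toy continuum; the 1e3 weight of the cold term is arbitrary.
continuum = planck_lambda(wl, 157.0) + 1.0e3 * planck_lambda(wl, 40.0)
# Hypothetical Gaussian bump at 22 microns standing in for the observed feature.
observed = continuum + 0.3 * continuum.max() * np.exp(-0.5 * ((wl - 22.0) / 4.0) ** 2)
feature = observed - continuum        # the continuum-subtraction step
peak_157 = wl[np.argmax(planck_lambda(wl, 157.0))]   # Wien peak of the warm term
feature_peak = wl[np.argmax(feature)]
```

Note that the 157 K blackbody alone peaks near 18.5 $`\mu `$m (Wien's law), which is why a warm component is needed under the short-wavelength end of the 18 – 28 $`\mu `$m excess.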
The unidentified infrared (UIR) emission features become stronger at Pos 2, a position farther away from the Car I H II region than Pos 1. The slight difference in feature shape between Figures 1b and 2b is probably due to a dust temperature effect. The 22 $`\mu `$m emission feature is also seen at Pos 3 and Pos 4 (not shown), but with weaker intensity.
The same broad feature has been reported in the SWS spectra of M17–SW H II region by Jones et al. (1999). They found in their spectra that the intensity of this emission feature decreases with distance from the exciting stars, the same phenomenon we see in the present four observed spectra. The decrease of feature intensity may be due to: (1) dilution by the cool dust emission from the nearby molecular clouds; (2) emission of the feature requiring very high UV radiation intensity to be excited; and/or (3) decrease in the abundance of this specific grain with distance from the exciting stars. Identification of the carrier of the feature will help us to understand the observed decrease in the feature intensity.
Very recently, a broad emission dust feature peaking at 22 $`\mu `$m was reported in Cas A (Arendt et al. 1999). We compare this 22 $`\mu `$m feature and the present feature to see whether there is a similarity in feature shape. The comparison is shown in Figure 3. The Cas A spectrum was observed in the optical knot called N3 (see Arendt et al. 1999 for details), and is obtained from the ISO archival data. In order to obtain a better fit at wavelengths longer than 28 $`\mu `$m, we chose a new continuum emission, as shown in Figure 1a by the dotted line, to give the 22 $`\mu `$m feature more long-wavelength emission. The new continuum emission comprises graphite with a temperature of 160 K and silicate with a temperature of 45 K. In Figure 3 we can see that the feature present in the Carina nebula shows a good agreement with that observed in Cas A. The origin of the excess emission around 13 $`\mu `$m is unknown. It should be noted, however, that the emission in Cas A at wavelengths between 20 and 50 $`\mu `$m may arise mostly from a warm ($``$ 90 K) silicate component that originates from the diffuse shell (see Tuffs et al. 1999 for discussions of the spectral energy distribution of Cas A). If this warm silicate component is subtracted from the Cas A N3 spectrum, the resultant 22 $`\mu `$m feature (without emission at wavelengths longer than 30 $`\mu `$m) will give a good fit to our observed 22 $`\mu `$m feature shown in Figure 1b.
## 4 Discussion
Evolved stars and supernovae have been suggested as the major production sources of interstellar dust. Past observations of evolved stars have found a number of dust features in the near to far-infrared ranges (see Waters et al. 1999 for a recent review). However, the broad 22 $`\mu `$m emission feature that we found in Carina nebula H II region has never been reported in evolved stars. On the other hand, the present broad 22 $`\mu `$m emission feature is quite similar to the emission feature of newly synthesized dust observed in Cas A, suggesting that both of these features arise from the same dust grain, and that supernovae are probably the major production source of this new interstellar grain. The non-detection of the 22 $`\mu `$m feature in SN 1987A (Moseley et al. 1989) does not make the latter suggestion less convincing, since the infrared emission in SN 1987A probably arises from optically thick clumps. Lucy et al. (1991) and Wooden et al. (1993) suggest that the infrared emission in SN 1987A is dominated by the dust in the optically thick clumps, and the low density small grains in the interclump medium contribute to the visual extinction. With this model, the infrared emission in SN 1987A is a graybody emission, but the visual extinction is not.
We would expect to find the 22 $`\mu `$m dust feature in astronomical sources with high supernova rates if supernovae are the major production source of this new interstellar grain. Starburst galaxies are an ideal place to search. From the ISO archival data we found that two starburst galaxies, M82 and NGC7582, show a similar 22 $`\mu `$m emission feature. Figure 4 shows the SWS spectrum of the nuclear region of NGC7582, a narrow-line X-ray galaxy with strong starburst activity in the central kpc (Radovich et al. 1999, and references therein). The 20 to 30 $`\mu `$m emission arises mostly or entirely from the broad 22 $`\mu `$m emission feature. The spectrum of NGC7582 was taken with the SWS AOT01 mode at speed 2. We processed the data through the OLP 8.4 and reduced them with the SWS IA package in a way similar to the Carina nebula spectra. The feature intensity in M82 (not shown) is much weaker, about 10$`\%`$ of the 18 – 28 $`\mu `$m emission if the continuum is assumed to pass through the 18 and 28 $`\mu `$m data points. Two other starburst galaxies, NGC253 and Circinus, may also have a 22 $`\mu `$m feature, but they are even weaker in intensity and more observations are needed to confirm it.
The findings of the 22 $`\mu `$m dust feature in H II regions and starburst galaxies suggest that this new grain could be an abundant component of interstellar dust. If the amount of this interstellar grain in the ISM is proportional to the number of supernovae, its total mass in the ISM can be used as a tracer of the supernova rate or star formation rate in external galaxies. Studies of a large sample of starburst galaxies are required to confirm the above relationship. Only a limited number of galaxies have been observed with the SWS full grating scan mode, and a statistically useful sample of starburst galaxies for this study is not available at present. Future space missions like the Space InfraRed Telescope Facility (SIRTF) and the Infrared Imaging Surveyor (IRIS), and the Stratospheric Observatory for Infrared Astronomy (SOFIA), are expected to provide the necessary data base.
The existence of this broad 22 $`\mu `$m emission feature complicates the dust model used in the study of the spectral energy distribution of starburst galaxies. Dust grains like graphite, amorphous carbon, silicates, and polycyclic aromatic hydrocarbons may not be representative of all the dust properties in starburst galaxies. In particular, this broad 22 $`\mu `$m emission feature could have significant effects on the derivation of the dust color temperature based on 20 – 30 $`\mu `$m photometric fluxes (e.g., the Infrared Astronomical Satellite 25 $`\mu `$m data), as well as on the number counts of deep surveys in the infrared spectral range to be carried out by SIRTF and IRIS observations, and must be taken into account appropriately.
Arendt et al. (1999) suggested that the carrier of the 22 $`\mu `$m feature observed in Cas A is Mg protosilicate, based on the good agreement between the observed feature shape and the laboratory spectrum of Mg protosilicate taken by Dorschner et al. (1980). They found that FeO can also give a good fit to their observed 22 $`\mu `$m feature, but the required dust temperature, higher than expected, and the deficit of emission at wavelengths longer than 30 $`\mu `$m led them to rule it out as a promising candidate. If the identification of Mg protosilicate is correct, it is the second silicate grain, besides the astronomical silicates, found in the ISM. More observations are needed to confirm (or test) the suggested identification. Observing the 22 $`\mu `$m feature in a variety of astronomical environments will provide useful information for studies of the chemical composition and emission mechanism of the carrier.
The major results of this Letter are: (1) a broad 22 $`\mu `$m emission dust feature is detected in H II regions and starburst galaxies; (2) the 22 $`\mu `$m emission feature is similar in shape with the emission feature of newly synthesized dust observed in the ejecta of Cas A, and both of these features arise from the same carrier; and (3) supernovae are probably the major production source of this new interstellar dust.
We would like to thank the SWS IDT for providing the SWS IA software, and ISO project members for their efforts and help. We thank Robert Gehrz for useful comments. We also thank Issei Yamamura for useful discussions on the data reduction, and K. Kawara, Y. Satoh, H. Okuda, and the Japanese ISO team for their continuous help and encouragement. K. W. C. is supported by the JSPS Postdoctoral Fellowship for Foreign Researchers. This work was supported in part by Grant-in-Aids for Scientific Research from JSPS.
# Thick domain walls and singular spaces
## I Introduction
Domain walls have recently attracted renewed attention, after it was pointed out in that four-dimensional gravity can be realized on a thin wall connecting two slices of $`AdS`$ space. From the point of view of a four-dimensional observer on the domain wall, the spectrum of gravity consists of a massless graviton and a tower of Kaluza-Klein modes with continuous masses. It was shown in that the KK modes give a subleading correction to the gravitational interaction between two test masses on the domain wall. In the thin wall setup there is only gravity in the bulk, and the only five-dimensional space that can appear is $`AdS`$ (of course one can also consider slices of $`dS`$ or Minkowski space, but they do not yield a four-dimensional graviton). On the other hand, in supergravity or string theory one expects to have other bulk fields, including scalars.
The original proposal of has been generalized in several directions. One generalization involves turning on a cosmological constant on the domain wall , which results in time-dependent cosmological scenarios. Other extensions include higher dimensional embeddings , models with a mass gap for the continuum modes , and realizations of domain walls in gravity coupled to scalars . There is also an extensive literature on supergravity domain walls , but so far the construction of has not been realized in supergravity. In fact it was shown to be impossible in any of the known five-dimensional supergravities .
The thin wall construction of has the disadvantage that the curvature is singular at the location of the wall. This problem can be avoided if gravity is coupled to a scalar field. By choosing a suitable potential for the scalar, we can readily generate smooth domain wall solutions that interpolate between two $`AdS`$ spaces. However, once we have a scalar in the bulk, other space-times besides $`AdS`$, $`dS`$, and Minkowski space can appear. In this note we will study some examples of such spaces. Specifically, we consider a class of thick domain walls in gravity coupled to scalars, that interpolate between spaces with naked singularities instead of regular $`AdS`$ horizons. Normally such spaces would be discarded as unphysical, but in this context there are reasons to believe that considering these spaces may be meaningful.
One reason for thinking so comes from the recent proposal (see also ) that five-dimensional bulk gravity in the thin domain wall case has an equivalent description in terms of a cut-off four-dimensional CFT on the domain wall, very much in the spirit of the $`AdS`$/CFT correspondence . The details of this correspondence are rather unclear at present. For instance, it is not clear how to identify the CFT in the non-supersymmetric purely five-dimensional setup of , or how to match operators and KK modes. It is also unclear how to impose a sharp cutoff on the CFT that preserves four-dimensional Poincare invariance. However, leaving these considerations aside, we can freely borrow results from the $`AdS`$/CFT literature on RG flows in five-dimensional supergravity . In RG flows to non-conformal theories (see e.g. ) the $`AdS`$ horizon gets replaced with a naked singularity. This singularity is physical in the sense that the singular behavior corresponds to strong coupling effects like confinement or screening in the boundary theory. Since the non-conformal boundary theory makes sense in the infrared, the singular behavior of the metric must be resolved, either by lifting to ten dimensions or via string theory. Unfortunately we are not aware of any criterion that tells us exactly which type of naked singularity has a physical interpretation. We will simply assume that the singularities in the space we consider can appear in RG flows to non-conformal theories. In the last section we will discuss the validity of this assumption. If our singularities are physical, we can think of our five-dimensional space-times as four-dimensional gravity coupled to a non-conformal field theory. Such theories are well defined, which provides a justification for considering this type of singular space. We will give a more detailed discussion of these ideas in the last section.
A second argument for considering spaces that end in singularities comes from analyzing the spectrum of gravity from a four-dimensional point of view. In the original setup there was a single massless graviton and a continuous tower of KK states. The KK states couple to matter on the thin domain wall and cause small violations of four-dimensional energy and momentum conservation. This violation of conservation laws also occurs in the presence of a naked singularity. The traditional point of view posits that spaces with naked singularities are physically acceptable only if one imposes boundary conditions that guarantee four-dimensional energy and momentum conservation. These boundary conditions are usually referred to as unitary boundary conditions. Note that this point of view is rather different than the $`AdS`$/CFT inspired approach described above. In the latter case we want energy and momentum to leak out into either the $`AdS`$ horizon in the setup of , or into the naked singularities we discuss here. This leakage corresponds to four-dimensional gravitation exciting the degrees of freedom of the non-conformal field theory. Nonetheless, we can impose unitary boundary conditions and analyze the spectrum of the KK modes in that case. It turns out that these boundary conditions remove the continuum part of the KK spectrum for the models discussed here and in . The theory of the discrete part of the spectrum is unitary, because these modes die off rapidly enough as we approach the singularity.
The discussion so far involved only flat domain walls with four-dimensional Poincare invariance. Some of the solutions we study in this note can accommodate a constant curvature on the four-dimensional slice, turning it into four-dimensional de Sitter or anti-de Sitter space. Such bent domain walls have appeared previously in cosmological thin wall solutions and similar thick domain walls in four dimensions were discussed in . Our solution can be viewed as a non-singular analog of the cosmological thin wall solutions. The ambient space of bent domain walls generically has horizons or singularities. We analyze a domain wall interpolating between two singular spaces in some detail and also give an example of a thick domain wall which interpolates between spaces with regular horizons. The purpose of the second example is merely to show that such solutions exist. Unfortunately it is too complicated for an analytical treatment.
Apart from their relevance for cosmology, bent domain walls are interesting because they are generic solutions of five-dimensional gravity coupled to scalars. By generic we mean that the supersymmetry inspired first order formalism of cannot be used to generate a solution. In it was shown that even for flat domain walls where this “superpotential” formalism is applicable, there is no simple supersymmetric extension of the $`AdS`$ domain wall solutions. Since the bent domain wall solutions cannot be obtained from any known first order formalism, they are probably non-supersymmetric and therefore, in a sense, generic.
This paper is organized as follows. In section II we briefly review gravity coupled to a scalar, and discuss a class of solutions for both flat and bent domain walls. These domain walls interpolate between spaces with naked singularities. Our solutions are simple enough that we can solve the quantum mechanics problem exactly for flat domain walls. In section III we analyze the spectrum of metric fluctuations by studying the equivalent quantum mechanics problem with and without imposing unitary boundary conditions. In section IV we discuss possible implications of our results, including various speculations on the role of thick domain walls and singular spaces in the light of the $`AdS`$/CFT correspondence.
## II Gravity coupled to scalars
The action for five-dimensional gravity coupled to a single real scalar reads
$$S=\int d^4xdr\sqrt{g}\left(\frac{1}{4}R+\frac{1}{2}(\partial \varphi )^2-V(\varphi )\right),$$
(1)
We will consider metrics of the form
$$ds^2=e^{2A(r)}\left(dx_0^2-e^{2\sqrt{\overline{\mathrm{\Lambda }}}x_0}\sum _{i=1}^{3}dx_i^2\right)-dr^2$$
(2)
or
$$ds^2=e^{2A(r)}\left(e^{2\sqrt{\overline{\mathrm{\Lambda }}}x_3}(dx_0^2-dx_1^2-dx_2^2)-dx_3^2\right)-dr^2,$$
(3)
where the four-dimensional slices are de Sitter and anti-de Sitter respectively. The equations of motion following from the action and the ansatz for the metric are
$`\varphi ^{\prime \prime }+4A^{\prime }\varphi ^{\prime }`$ $`=`$ $`{\displaystyle \frac{\partial V(\varphi )}{\partial \varphi }}`$ (4)
$`A^{\prime \prime }+\overline{\mathrm{\Lambda }}e^{-2A}`$ $`=`$ $`-{\displaystyle \frac{2}{3}}\varphi ^{\prime 2}`$ (5)
$`A^{\prime 2}-\overline{\mathrm{\Lambda }}e^{-2A}`$ $`=`$ $`-{\displaystyle \frac{1}{3}}V(\varphi )+{\displaystyle \frac{1}{6}}\varphi ^{\prime 2}.`$ (6)
The prime denotes differentiation with respect to $`r`$, and we have assumed that both $`\varphi `$ and $`A`$ are functions of $`r`$ only. The equations of motion above were obtained using the metric Eq. (2). Reversing the sign of $`\overline{\mathrm{\Lambda }}`$ yields the corresponding equations of motion for Eq. (3).
If $`\overline{\mathrm{\Lambda }}=0`$ the four-dimensional slices in Eqs. (2,3) are Minkowski, and we can use the first order formalism of to generate solutions. The fields and the potential can be parametrized by a single function $`W(\varphi )`$ as
$$\varphi ^{\prime }=\frac{1}{2}\frac{\partial W(\varphi )}{\partial \varphi },\qquad A^{\prime }=-\frac{1}{3}W(\varphi ),\qquad V(\varphi )=\frac{1}{8}\left(\frac{\partial W(\varphi )}{\partial \varphi }\right)^2-\frac{1}{3}W(\varphi )^2.$$
(7)
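As a cross-check on these relations, the first order system can be fed back into the second order equations symbolically. The short sympy sketch below is our own verification (it assumes the sign convention $`A^{\prime }=-W/3`$) that the parametrization of Eq. (7) solves Eqs. (4)-(6) with $`\overline{\mathrm{\Lambda }}=0`$ for an arbitrary superpotential $`W(\varphi )`$:

```python
import sympy as sp

p = sp.symbols('phi')            # value of the scalar field
W = sp.Function('W')(p)          # arbitrary superpotential W(phi)

# First order relations, Eq. (7):
phip = sp.Rational(1, 2) * W.diff(p)        # phi'
Ap = -sp.Rational(1, 3) * W                 # A'
V = sp.Rational(1, 8) * W.diff(p)**2 - sp.Rational(1, 3) * W**2

# chain rule: d/dr f(phi(r)) = (df/dphi) * phi'
def dr(f):
    return f.diff(p) * phip

# Residues of the second order equations of motion (Lambda-bar = 0):
eq_scalar = dr(phip) + 4 * Ap * phip - V.diff(p)                    # Eq. (4)
eq_warp = dr(Ap) + sp.Rational(2, 3) * phip**2                      # Eq. (5)
eq_constraint = Ap**2 + sp.Rational(1, 3) * V \
    - sp.Rational(1, 6) * phip**2                                   # Eq. (6)
```

All three residues vanish identically, independently of the choice of $`W`$.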
For $`\overline{\mathrm{\Lambda }}\ne 0`$ there is no known first order formalism, so we need to solve the equations of motion directly. In general it is difficult to find tractable solutions, because highly non-linear combinations of $`A(r)`$ and its derivatives appear in the equations of motion.
For domain walls with four-dimensional de Sitter slices, the equations of motion provide some model independent information. Assuming we have a reflection symmetric domain wall we can choose $`A(0)=1`$ and $`A^{\prime }(0)=0`$. Eq. (5) then implies that $`A^{\prime \prime }`$ is negative and $`A(r)`$ diverges to $`-\infty `$ faster than for $`\overline{\mathrm{\Lambda }}=0`$. For $`\overline{\mathrm{\Lambda }}=0`$ such symmetric domain walls interpolate between $`AdS`$ spaces which have regular horizons infinitely far away from $`r=0`$. For $`\overline{\mathrm{\Lambda }}\ne 0`$ we expect a horizon or a singularity at a finite distance $`r=r^{*}`$. For domain walls with $`AdS_4`$ slices there does not seem to be a similar argument. We discuss an example with a naked singularity at $`r=r^{*}`$ first.
A class of solutions with naked singularities is given by $`A(r)=n\mathrm{log}(d\mathrm{cos}(cr))`$. Unfortunately the expressions for the scalar field and the potential are simple only if $`n=1`$, so we will focus on that case. Other choices for $`n`$ are equally valid, but there is no closed-form expression for $`\varphi `$ and the potential. Nevertheless these cases can be analyzed numerically. By picking suitable units for $`\overline{\mathrm{\Lambda }}`$ we can set $`d=1`$. The complete solution to the equations of motion is then given by
$`A(r)`$ $`=`$ $`\mathrm{log}(\mathrm{cos}(cr))`$ (8)
$`\varphi (r)`$ $`=`$ $`{\displaystyle \frac{1}{c}}\sqrt{{\displaystyle \frac{3}{2}}(c^2-\overline{\mathrm{\Lambda }})}\mathrm{log}\left({\displaystyle \frac{1+\mathrm{tan}\left(\frac{cr}{2}\right)}{1-\mathrm{tan}\left(\frac{cr}{2}\right)}}\right)`$ (9)
$`V(\varphi )`$ $`=`$ $`{\displaystyle \frac{3}{4}}\mathrm{cosh}^2\left({\displaystyle \frac{c\varphi }{\sqrt{\frac{3}{2}(c^2-\overline{\mathrm{\Lambda }})}}}\right)\left(3\overline{\mathrm{\Lambda }}+c^2-4c^2\mathrm{tanh}^2\left({\displaystyle \frac{c\varphi }{\sqrt{\frac{3}{2}(c^2-\overline{\mathrm{\Lambda }})}}}\right)\right).`$ (10)
This solution has only two adjustable parameters, $`c,\overline{\mathrm{\Lambda }}`$, which determine the location of the singularities, the curvature of the four-dimensional slice, and the thickness of the wall. The metric with this choice of $`A(r)`$ has a naked singularity at $`r^{*}=\pm \pi /(2c)`$. However, the singularity is very similar to the one encountered in the $`AdS`$ flow to $`N=1`$ SYM . This may indicate that it can be resolved either by lifting the five-dimensional geometry to ten dimensions, or by string theory. The scalar diverges at the singularity. If we think of it as a modulus from some compactification manifold, this divergence can indicate that the compactification manifold shrinks to zero size or becomes infinitely large, so that the five dimensional truncation becomes invalid. There are some examples where singularities in five dimensions actually correspond to non-singular ten dimensional geometries .
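Since the solution is fully explicit, it can be spot-checked against the equations of motion. The sympy sketch below is our own verification; the values $`c=1`$, $`\overline{\mathrm{\Lambda }}=1/2`$ and the sample point $`r=0.3`$ are arbitrary choices satisfying $`c^2>\overline{\mathrm{\Lambda }}`$ and $`|r|<\pi /(2c)`$:

```python
import sympy as sp

r, x = sp.symbols('r x', real=True)
c, Lam = sp.Integer(1), sp.Rational(1, 2)   # sample values with c^2 > Lambda-bar

s = sp.sqrt(sp.Rational(3, 2) * (c**2 - Lam))
A = sp.log(sp.cos(c * r))                                        # Eq. (8)
phi = (s / c) * sp.log((1 + sp.tan(c * r / 2))
                       / (1 - sp.tan(c * r / 2)))                # Eq. (9)
V = sp.Rational(3, 4) * sp.cosh(c * x / s)**2 \
    * (3 * Lam + c**2 - 4 * c**2 * sp.tanh(c * x / s)**2)        # Eq. (10)

# Residues of Eqs. (4)-(6) evaluated on the solution; all should vanish
eqs = [
    phi.diff(r, 2) + 4 * A.diff(r) * phi.diff(r) - V.diff(x).subs(x, phi),
    A.diff(r, 2) + Lam * sp.exp(-2 * A) + sp.Rational(2, 3) * phi.diff(r)**2,
    A.diff(r)**2 - Lam * sp.exp(-2 * A) + sp.Rational(1, 3) * V.subs(x, phi)
        - sp.Rational(1, 6) * phi.diff(r)**2,
]
residues = [abs(float(e.subs(r, sp.Rational(3, 10)).evalf(30))) for e in eqs]
```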
In the limit $`c^2=\overline{\mathrm{\Lambda }}`$ our solution simplifies dramatically. The scalar vanishes and the potential becomes constant. In fact, this limit of our solution is $`dS_5`$ written in unusual coordinates. Note that our solution is valid only for $`c^2\ge \overline{\mathrm{\Lambda }}`$. The curvature of the four dimensional slice imposes the constraint $`r^{*}\le \pi /(2\sqrt{\overline{\mathrm{\Lambda }}})`$ on the location of the horizon.
By changing the sign of $`\overline{\mathrm{\Lambda }}`$ we obtain a solution for a domain wall with $`AdS_4`$ slices. In that case there is no constraint on the location of the horizons, or conversely the value of $`\overline{\mathrm{\Lambda }}`$.
Finally we should point out that the first order formalism of does not apply here. For instance, using Eq. (7) we can compute $`W(\varphi )`$ from the expression for $`\varphi `$. The potential computed from $`W`$ has the same form as the potential above, but the coefficients do not agree. It would be very interesting to either find a first order formalism for $`\overline{\mathrm{\Lambda }}\ne 0`$ or show that it does not exist.
The solution above has the virtue that we can solve the equations of motion analytically, but as we pointed out, it has naked singularities at a finite distance from the center of the domain wall. This behavior is not generic. It is easy to pick $`A(r)`$ such that we get regular horizons instead of singularities. One such example with three free parameters is
$$e^{A(r)}=(r^{*2}-r^2)\left(\frac{\sqrt{\overline{\mathrm{\Lambda }}}}{2r^{*}}\left(1+\frac{1}{4r^{*2}}(r^{*2}-r^2)\right)+c(r^{*2}-r^2)^2\right),$$
(11)
but the solutions for $`\varphi (r)`$ and the potential have to be obtained numerically. We simply mention this example to show that such solutions exist, but we will not investigate it further in this paper.
In the limit $`\overline{\mathrm{\Lambda }}=0`$ four-dimensional Poincare invariance is restored and we can write down the solution for all $`n`$.
$`A(r)`$ $`=`$ $`n\mathrm{log}(\mathrm{cos}(cr))`$ (12)
$`\varphi (r)`$ $`=`$ $`\sqrt{{\displaystyle \frac{3n}{2}}}\mathrm{log}\left({\displaystyle \frac{1+\mathrm{tan}\left(\frac{cr}{2}\right)}{1-\mathrm{tan}\left(\frac{cr}{2}\right)}}\right)`$ (13)
$`V(\varphi )`$ $`=`$ $`{\displaystyle \frac{3nc^2}{4}}\left(\mathrm{cosh}^2\left(\sqrt{{\displaystyle \frac{2}{3n}}}\varphi \right)-4n\mathrm{sinh}^2\left(\sqrt{{\displaystyle \frac{2}{3n}}}\varphi \right)\right)`$ (14)
Note that in this case the solution can be parametrized by $`W(\varphi )`$, so we could have found it using the first order formalism of . Since the form of $`A(r)`$ is the same as in the previous example, this solution also has naked singularities at $`r^{*}=\pm \pi /(2c)`$, but unlike in the previous example there is no limit in which they disappear. If $`n=1/4`$ the potential is constant and near the singularity we have $`g_{00}=e^{2A}\sim \sqrt{r^{*}-r}`$. This is the behavior found in for general flows assuming that the potential can be neglected near the singularity.
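Both statements about this family — the constancy of the potential at $`n=1/4`$ and the validity of the solution for other $`n`$ — can be spot-checked symbolically. In the sketch below (our own check) $`n=2`$, $`c=1`$ and the sample point $`r=0.4`$ are arbitrary choices:

```python
import sympy as sp

r, x = sp.symbols('r x', real=True)
c = sp.symbols('c', positive=True)

def family(n):
    # The Lambda-bar = 0 family, Eqs. (12)-(14)
    A = n * sp.log(sp.cos(c * r))
    phi = sp.sqrt(sp.Rational(3, 2) * n) * sp.log((1 + sp.tan(c * r / 2))
                                                  / (1 - sp.tan(c * r / 2)))
    b = sp.sqrt(sp.Rational(2, 3) / n)
    V = sp.Rational(3, 4) * n * c**2 * (sp.cosh(b * x)**2
                                        - 4 * n * sp.sinh(b * x)**2)
    return A, phi, V

# n = 1/4: the potential is independent of phi
_, _, V4 = family(sp.Rational(1, 4))
dV4 = sp.simplify(V4.diff(x))

# spot check of the Lambda-bar = 0 equations of motion for n = 2
A, phi, V = family(2)
eqs = [
    phi.diff(r, 2) + 4 * A.diff(r) * phi.diff(r) - V.diff(x).subs(x, phi),
    A.diff(r, 2) + sp.Rational(2, 3) * phi.diff(r)**2,
    A.diff(r)**2 + sp.Rational(1, 3) * V.subs(x, phi)
        - sp.Rational(1, 6) * phi.diff(r)**2,
]
residues = [abs(float(e.subs({c: 1, r: sp.Rational(2, 5)}).evalf(30)))
            for e in eqs]
```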
Since this solution has four-dimensional Poincare invariance, we expect to find a massless graviton and a tower of KK excitations. This solution is simple enough that we can give a complete solution of the equivalent quantum mechanics problem for $`n=1`$. The $`n=2`$ case is also tractable at the level of the example in . We will comment on the differences between these two examples in Section IV.
## III Graviton Fluctuations
The solutions in the previous section provide backgrounds in which the fluctuations of the metric exhibit interesting behavior. It is difficult to analyze the metric fluctuations in general, since they couple to fluctuations of the scalar field. However, it was shown in that there is a sector of the metric fluctuations that decouples from the scalar and satisfies a simple wave equation. Strictly speaking this is true only if the four-dimensional slice is Minkowski space. If the four-dimensional space is curved, there can be an extra curvature term in the equations of motion for the metric fluctuations. We will ignore these subtleties in this section and study solutions to the scalar wave equations. If the four-dimensional space is flat, the arguments of indicate that we are computing the mass spectrum of metric fluctuations. For curved four-dimensional slices we assume that the solutions of the scalar wave equation have qualitatively the same features as the metric fluctuations. (We thank C. Kennedy for pointing out an incorrect statement in the previous version of this section.)
A general metric fluctuation takes the form
$$ds^2=e^{2A(r)}(g_{ij}+h_{ij})dx^idx^j-dr^2,$$
(15)
where we have made a gauge choice, and $`g_{ij}`$ is the four-dimensional $`dS`$, $`AdS`$ or Minkowski metric. The fluctuation $`h_{ij}`$ is taken to be small, so the linearized Einstein equation provides the equation of motion for it. As shown in the transverse traceless part of the metric fluctuation, $`\overline{h}_{ij}`$, satisfies the same equation of motion as a five-dimensional scalar. It turns out that transforming to conformally flat coordinates simplifies this wave equation considerably. In terms of $`z=\int dre^{-A(r)}`$ the metric takes the form
$$ds^2=e^{2A(z)}\left(g_{ij}dx^idx^j-dz^2\right),$$
(16)
and if the four-dimensional slices are $`dS_4`$, the transverse traceless parts of the metric fluctuation satisfy
$$\left(\partial _z^2+3A^{\prime }(z)\partial _z-\partial _{x_0}^2-3\sqrt{\overline{\mathrm{\Lambda }}}\partial _{x_0}+e^{-2\sqrt{\overline{\mathrm{\Lambda }}}x_0}\sum _{a=1}^{3}\partial _{x_a}^2\right)\overline{h}_{ij}=0.$$
(17)
This equation can be simplified further by rewriting the metric fluctuation as $`\overline{h}_{ij}=e^{-3(A+\sqrt{\overline{\mathrm{\Lambda }}}x_0)/2}\rho _k(x)\psi _{ij}(z)`$, where $`\rho _k(x)`$ satisfies $`g^{ij}\partial _i\partial _j\rho _k(x)=-k^2\rho _k(x)`$. Dropping the indices on $`\psi `$ we finally find
$$\left(-\partial _z^2+V_{QM}-k^2\right)\psi =0$$
(18)
with
$$V_{QM}=-\frac{9\overline{\mathrm{\Lambda }}}{4}+\frac{9}{4}A^{\prime 2}(z)+\frac{3}{2}A^{\prime \prime }(z).$$
(19)
Note that for $`\overline{\mathrm{\Lambda }}\ne 0`$ there is an extra constant piece in the potential. This implies that the quantum mechanics problem does not factorize as in . The argument given there now constrains $`k^2+9\overline{\mathrm{\Lambda }}/4`$ to be positive. We will come back to this point when we analyze the solutions of this Schrödinger equation.
These equations were written assuming that the four-dimensional slice is $`dS_4`$. The analogous equation for $`AdS_4`$ slices can be obtained by the analytic continuation $`x_0\to ix_3`$, $`x_3\to ix_0`$, and $`\sqrt{\overline{\mathrm{\Lambda }}}\to i\sqrt{\overline{\mathrm{\Lambda }}}`$. We are now ready to turn to specific examples.
The metric for the solution in Eq. (8) can be transformed to the conformally flat form, Eq. (16), with $`A(z)=-\mathrm{log}(\mathrm{cosh}(cz))`$. Using Eq. (19), we find for the potential
$$V_{QM}=\frac{9(c^2-\overline{\mathrm{\Lambda }})}{4}-\frac{15c^2}{4}\frac{1}{\mathrm{cosh}^2(cz)}.$$
(20)
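The step from Eq. (19) to this potential is mechanical and can be confirmed symbolically. The sympy sketch below (our own check, assuming the conformal factor $`A(z)=-\mathrm{log}(\mathrm{cosh}(cz))`$ stated above) verifies that the two expressions agree:

```python
import sympy as sp

z, c, Lam = sp.symbols('z c Lambda', positive=True)
A = -sp.log(sp.cosh(c * z))      # conformal factor of the n = 1 solution

# Eq. (19)
VQM = -sp.Rational(9, 4) * Lam + sp.Rational(9, 4) * A.diff(z)**2 \
      + sp.Rational(3, 2) * A.diff(z, 2)
# Eq. (20)
target = sp.Rational(9, 4) * (c**2 - Lam) \
         - sp.Rational(15, 4) * c**2 / sp.cosh(c * z)**2

difference = sp.simplify(VQM - target)
```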
The shape of this potential is rather different than the one found in for a thick domain wall interpolating between $`AdS`$ spaces. The most important differences are that our potential asymptotes to a non-zero constant for $`z\pm \mathrm{}`$ and that there is no potential barrier separating the asymptotic region from the interior of the domain wall. Since the potential asymptotes to a constant we will find plane wave solutions for sufficiently large $`k^2`$, but these solutions are separated from the discrete modes by a mass gap. These general observations can be made precise. The Schrödinger equation with this potential has a general solution
$`\psi `$ $`=`$ $`a{}_{2}{}^{}F_{1}^{}(-ϵ-{\displaystyle \frac{3}{2}},1-ϵ+{\displaystyle \frac{3}{2}},1-ϵ,{\displaystyle \frac{1}{2}}(1-x))(1-x^2)^{-ϵ/2}`$ (22)
$`+b{}_{2}{}^{}F_{1}^{}(ϵ-{\displaystyle \frac{3}{2}},1+ϵ+{\displaystyle \frac{3}{2}},1+ϵ,{\displaystyle \frac{1}{2}}(1-x))(1-x^2)^{ϵ/2},`$
where $`x=\mathrm{tanh}(cz)`$ and $`ϵ^2=9(c^2-\overline{\mathrm{\Lambda }})/4c^2-k^2/c^2`$. To find the discrete part of the spectrum we set $`a=0`$ to ensure that $`\psi `$ is regular at $`z=\infty `$ ($`x=1`$). If $`ϵ-3/2=-n`$ with $`n\in 𝐙_0^+`$, the solution is also finite as $`z\to -\infty `$ ($`x=-1`$), so the discrete part of the spectrum is given by $`ϵ_n=3/2-n`$, $`n=0,1`$, or
$$k_n^2=\frac{9(c^2-\overline{\mathrm{\Lambda }})}{4}-c^2\left(\frac{3}{2}-n\right)^2,$$
(23)
and the corresponding wave functions are
$$\psi _0(z)\propto \frac{1}{\mathrm{cosh}^{3/2}(cz)},\qquad \psi _1(z)\propto \frac{\mathrm{sinh}(cz)}{\mathrm{cosh}^{3/2}(cz)}.$$
(24)
For $`\overline{\mathrm{\Lambda }}=0`$ we find the expected massless graviton with $`k^2=0`$ and an excited state with $`k^2=2c^2`$. For $`\overline{\mathrm{\Lambda }}>0`$ the four-dimensional slice is $`dS`$ and at least the lowest $`k^2`$ is negative. If the four-dimensional metric is $`AdS`$ ($`\overline{\mathrm{\Lambda }}<0`$), all $`k^2`$ are positive. These results may appear surprising at first sight. However, we should keep in mind that the notion of mass is somewhat murky in $`AdS`$ and $`dS`$ spaces. In both of these cases, $`k^2`$ is a constant that appears in the separation of variables. It should not be confused with a four-dimensional mass. If we put $`\psi _0`$ and $`\rho _k`$ for the lowest $`k^2`$ into the expression for $`\overline{h}_{ij}`$ we find that the metric fluctuation always satisfies the four-dimensional wave equation $`D_lD^lh_{ij}=0`$, $`l=0,1,2,3`$. There are several definitions of mass in $`dS`$ and $`AdS`$, so it is not clear if these fields are massless, but fields that satisfy this wave equation never signal an instability. It is worth mentioning that the negative values of $`k^2`$ are possible because the factorization argument of constrains the combination $`k^2+9\overline{\mathrm{\Lambda }}/4`$ to be positive.
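The discrete eigenvalues, including the $`\overline{\mathrm{\Lambda }}`$ dependence just discussed, can be verified by substituting the wave functions of Eq. (24) back into the Schrödinger problem. A sympy sketch (our own check, using the potential of Eq. (20)):

```python
import sympy as sp

z, c, Lam = sp.symbols('z c Lambda', positive=True)

# Potential of Eq. (20)
V = sp.Rational(9, 4) * (c**2 - Lam) \
    - sp.Rational(15, 4) * c**2 / sp.cosh(c * z)**2

# Discrete modes, Eq. (24), with eigenvalues k_n^2 from Eq. (23)
psi0 = sp.cosh(c * z)**sp.Rational(-3, 2)
psi1 = sp.sinh(c * z) * sp.cosh(c * z)**sp.Rational(-3, 2)
k0 = sp.Rational(9, 4) * (c**2 - Lam) - sp.Rational(9, 4) * c**2  # n = 0: -9 Lam/4
k1 = sp.Rational(9, 4) * (c**2 - Lam) - sp.Rational(1, 4) * c**2  # n = 1

res0 = sp.simplify((-psi0.diff(z, 2) + (V - k0) * psi0) / psi0)
res1 = sp.simplify((-psi1.diff(z, 2) + (V - k1) * psi1) / psi1)
```

Both residues vanish identically, and setting $`\overline{\mathrm{\Lambda }}=0`$ in $`k_0,k_1`$ reproduces the massless graviton and the $`k^2=2c^2`$ state.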
The solutions of the Schrödinger equation, Eq. (18), also include a continuous spectrum of eigenfunctions with $`ϵ^2\le 0`$ that asymptote to plane waves as $`z\to \pm \infty `$. Formally these solutions can be obtained from Eq. (22) by substituting $`ϵ\to i\kappa `$. For $`x\to 1`$ ($`z\to \infty `$) we find the asymptotic behavior $`\psi (z)\to ae^{ic\kappa z}+be^{-ic\kappa z}`$, plane waves as expected. It is easy to show that Eq. (22) also asymptotes to a plane wave for $`x\to -1`$. To summarize, the solutions to the Schrödinger equation, Eq. (18), consist of two normalizable states with discrete eigenvalues, and a continuum of states that asymptote to plane waves at infinity.
To proceed in our discussion, we will now specialize to the case $`\overline{\mathrm{\Lambda }}=0`$, so we can interpret $`k^2`$ as a four-dimensional mass. In this limit our solution describes a thick domain wall interpolating between two spaces with naked singularities. As mentioned in the introduction, we can appeal to the $`AdS`$/CFT correspondence and think of modes propagating in the fifth direction as excitations of some non-conformal field theory, which should render the four-dimensional theory unitary. We will comment on this relation in the last section.
This is in harmony with the approach of , which is to accept small violations of unitarity in the theory on the four-dimensional slice at $`z=0`$. In our case this theory contains a massless graviton, one massive state, and a continuum of modes with a mass gap of size $`m_{gap}=3c/2`$. At very low energies, none of these massive modes can be excited, and an observer at $`z=0`$ sees pure four-dimensional gravity. At higher energies the massive state can be excited, giving some corrections to Newton’s law, and finally at energies larger than the gap the whole continuum of modes can be excited. Since there is a mass gap in the theory, the corrections to Newton’s law will always be negligible at sufficiently long distances. Violations of unitarity occur only if modes that can travel out to the singularities can be excited, i.e. only at energies above the mass of the lightest continuum mode. Both the corrections to Newton’s law and the way unitarity is violated is rather different in the thin wall scenario of . There the contribution of the KK modes is suppressed because in the quantum mechanics description they need to tunnel through a potential barrier. The resulting suppression of these modes at $`z=0`$ turns out to be sufficient to make the violations of unitarity too small to detect in present day experiments.
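For $`\overline{\mathrm{\Lambda }}=0`$ the spectrum described above — a zero mode, one state at $`k^2=2c^2`$, and a continuum above the gap at $`9c^2/4`$ — can also be reproduced numerically. The sketch below (our own check; the box size, grid and $`c=1`$ are arbitrary choices) discretizes the Schrödinger problem on a finite interval, so the continuum appears as closely spaced box states above the gap:

```python
import numpy as np

c = 1.0
L = 30.0                       # half-width of the z box (an arbitrary cutoff)
N = 2001
z = np.linspace(-L, L, N)
h = z[1] - z[0]

# V_QM for n = 1, Lambda-bar = 0:  9c^2/4 - (15c^2/4)/cosh^2(cz)
V = 9 * c**2 / 4 - (15 * c**2 / 4) / np.cosh(c * z)**2

# Finite-difference Hamiltonian for -psi'' + V psi = k^2 psi (Dirichlet walls)
H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(N - 1) / h**2, k=1)
     + np.diag(-np.ones(N - 1) / h**2, k=-1))
k2 = np.linalg.eigvalsh(H)[:3]
# k2[0] ~ 0 (graviton), k2[1] ~ 2 c^2, k2[2] just above the gap 9 c^2/4
```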
Another way of dealing with the violations of various conservation laws in the four-dimensional theory is to impose unitary boundary conditions . These boundary conditions ensure that no supposedly conserved quantities disappear into the singularity. This approach was used in to render a specific naked singularity harmless.
We will simplify our discussion by introducing a new massless scalar field, $`\mathrm{\Phi }`$, that satisfies the same equation of motion as the metric fluctuation. This scalar field should not be confused with the scalar in the solutions in the previous section. We will discuss the unitary boundary conditions in terms of this scalar because that simplifies the argument somewhat. Since the metric fluctuations and the scalar satisfy the same equation of motion, the results should carry over to metric fluctuations.
For $`\overline{\mathrm{\Lambda }}=0`$ the metric Eq. (2) has a number of Killing vectors. It will be sufficient for our purposes to consider only the ones generating four-dimensional translations. These Killing vectors are given by $`\xi _i^\mu =\delta _i^\mu `$, where $`i`$ is a four-dimensional index. To construct currents from these Killing vectors we need the stress tensor for a massless scalar
$$T_{\mu \nu }=\frac{1}{2}\partial _\mu \mathrm{\Phi }\partial _\nu \mathrm{\Phi }-\frac{1}{2}g_{\mu \nu }\left(\frac{1}{2}\partial _\alpha \mathrm{\Phi }\partial ^\alpha \mathrm{\Phi }\right).$$
(25)
The currents $`J^\mu =T^{\mu \nu }\xi _\nu ^i`$ satisfy conservation laws of the form
$$\frac{1}{\sqrt{g}}\partial _\mu \left(\sqrt{g}J^\mu \right)=0,$$
(26)
which express the conservation of four-dimensional energy and momentum. To ensure that these quantities are conserved in the presence of a singularity, we demand that the flux into the singularity vanishes
$$\underset{z\to \infty }{\mathrm{lim}}\sqrt{g}J^z=\underset{z\to \infty }{\mathrm{lim}}\sqrt{g}g^{zz}\frac{1}{2}\partial _i\mathrm{\Phi }\partial _z\mathrm{\Phi }=0.$$
(27)
The solutions for $`\mathrm{\Phi }`$ take the same form as the solutions for $`\overline{h}_{ij}`$, i.e., $`\mathrm{\Phi }\propto e^{-3A(z)/2}\psi (z)`$ with $`\psi `$ given in Eq. (22). Using the asymptotic form $`\psi (z)\to a\mathrm{sin}(c\kappa z)+b\mathrm{cos}(c\kappa z)`$, we find for the flux
$$\underset{z\to \infty }{\mathrm{lim}}e^{3A(z)/2}\psi (z)\partial _z\left(e^{-3A(z)/2}\psi (z)\right)\propto \psi (z)\left((3b+2a\kappa )\mathrm{cos}(c\kappa z)+(3a-2b\kappa )\mathrm{sin}(c\kappa z)\right),$$
(28)
which does not vanish for any choice of $`a,b,\kappa `$ except $`a=b=0`$. This implies that the unitary boundary conditions eliminate all continuum modes from the spectrum. It is easy to check that the two discrete modes do not generate any flux into the singularity, so the unitary spectrum consists of these two modes. We expect similar results if we impose unitary boundary conditions in either the thin wall setup of or the thick wall versions in . In those cases only the four-dimensional massless graviton survives, and the continuum of KK states is projected out.
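The statement that the two discrete modes generate no flux into the singularity can be checked in closed form. The sympy sketch below (our own check; the flux is written up to overall normalization, with $`\mathrm{\Phi }=e^{-3A/2}\psi `$ as above) shows that the zero mode carries identically vanishing flux and that the flux of the excited bound state dies off at the singularity:

```python
import sympy as sp

z, c = sp.symbols('z c', positive=True)
A = -sp.log(sp.cosh(c * z))          # conformal factor of the n = 1 wall

def flux(psi):
    # sqrt(g) g^{zz} Phi d_z Phi up to normalization, with Phi = e^{-3A/2} psi
    Phi = sp.exp(-sp.Rational(3, 2) * A) * psi
    return sp.exp(3 * A) * Phi * Phi.diff(z)

psi0 = sp.cosh(c * z)**sp.Rational(-3, 2)                    # graviton zero mode
psi1 = sp.sinh(c * z) * sp.cosh(c * z)**sp.Rational(-3, 2)   # excited bound state

f0 = sp.simplify(flux(psi0))                       # vanishes identically
f1 = sp.limit(sp.simplify(flux(psi1)), z, sp.oo)   # falls off at the singularity
```

A plane-wave mode, by contrast, gives an oscillating flux of fixed amplitude, which is the statement made above.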
This situation should be contrasted with the example encountered in . In that case the potential in the quantum mechanics problem diverges near the singularity. This results in an infinite tower of eigenfunctions with discrete eigenvalues. The potential in our example asymptotes to a constant near the singularity, so we get a continuum of plane wave states. It turns out to be impossible to satisfy the no flux condition with these solutions, which implies that all of these states are projected out.
We close this section with a brief comment on the solutions with $`\overline{\mathrm{\Lambda }}=0`$ and $`n>1`$. For $`n=2`$ the transformation to conformally flat coordinates is given by $`z=\mathrm{tan}(cr)/c`$ and the conformal factor reads $`A(z)=-\mathrm{log}(1+c^2z^2)`$. The potential in the quantum mechanics problem is given by
$$V_{QM}=3c^2\frac{4c^2z^2-1}{(1+c^2z^2)^2}.$$
(29)
This potential appeared previously in and a similar potential was discussed in detail in . Unlike the $`n=1`$ potential, this potential asymptotes to zero for large $`z`$. There is one discrete bound state at threshold and a continuum of states that asymptote to plane waves as $`z\to \pm \infty `$. We can repeat the analysis above for this potential with essentially the same result. Imposing unitary boundary conditions eliminates the continuous spectrum, leaving only the four-dimensional graviton, while invoking the $`AdS`$/CFT correspondence allows us to retain the continuum. In that case the entire fifth dimension gets reinterpreted as a non-conformal field theory on the four-dimensional slice, and any bulk excitations should be viewed as excitations of this field theory.
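One can verify symbolically that $`\psi _0=e^{3A/2}=(1+c^2z^2)^{-3/2}`$ is an exact zero-energy solution in this potential, i.e. a bound state sitting exactly at threshold (a sympy sketch; the form of the potential is the one written above):

```python
import sympy as sp

z, c = sp.symbols('z c', positive=True)

# n = 2 quantum mechanics potential, Eq. (29)
V = 3 * c**2 * (4 * c**2 * z**2 - 1) / (1 + c**2 * z**2)**2

# Threshold bound state: psi_0 = e^{3A/2} with A(z) = -log(1 + c^2 z^2)
psi0 = (1 + c**2 * z**2)**sp.Rational(-3, 2)

residue = sp.simplify(-psi0.diff(z, 2) + V * psi0)
```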
## IV Discussion and speculations
In this note we worked out an example of a thick domain wall that interpolates between two spaces with naked singularities. Our example is simple enough that we can compute the spectrum of the graviton KK modes exactly. It is possible to extend this solution to domain walls with cosmological constants in the four-dimensional slices. These domain walls can be viewed as non-singular analogs of the bent thin domain walls that appeared in the literature as cosmological extensions of the setup in .
There are other reasons for considering this type of thick wall. Thick walls with four-dimensional Minkowski slices can be obtained from a first order “superpotential” formalism, but to find bent solutions one needs to solve the non-linear equations of motion directly. In this sense the bent walls are more generic than the flat examples.
The traditional approach to rendering naked singularities harmless consists of imposing unitary boundary conditions on modes propagating in the bulk. If we take this approach for our solution, or for solutions of the type studied in , we project out all continuum modes, leaving only the four-dimensional graviton and other discrete modes if any exist.
The $`AdS`$/CFT correspondence offers another point of view. Most of this section will be devoted to comments and speculations about this correspondence in domain wall settings of the type studied here and in . We would like to emphasize that unlike in the original $`AdS`$/CFT correspondence , there is at present no precise recipe for relating five-dimensional gravity to a boundary field theory in domain wall space-times of the type studied in . Without such a recipe, our comments are necessarily of a very speculative nature, but we hope that some of them may lead to a firmer understanding of this correspondence in time.
Before discussing the thick domain walls studied here and in , let us briefly review how the $`AdS`$/CFT correspondence is expected to work in the scenario of . The setup in consists of a thin domain wall separating the horizon parts of two $`AdS`$ spaces. Usually the $`𝐙_2`$ symmetry of this space-time is gauged, so that the two slices of $`AdS`$ are identified. The location of the domain wall cuts off the $`AdS`$ space at some finite radial distance. Gravity in the slice of $`AdS`$ is expected to have a dual description as a strongly coupled cutoff CFT on the domain wall.
We can adopt these arguments and apply them to thick domain walls. Let us first consider the domain wall discussed in . It can be viewed as a non-singular version of the setup in , since it interpolates between the horizon parts of two $`AdS`$ spaces. Unlike in the thin wall case, one usually does not mod out by the $`𝐙_2`$ symmetry of the geometry, so we have two independent physical $`AdS`$ spaces. Since the thick domain wall smoothly connects the two slices of $`AdS`$, there is no sharp cutoff as in the thin wall case. A possible interpretation of this is that a smooth domain wall corresponds to a soft (or softer) cutoff in the field theory. From the field theory perspective this is more desirable than the sharp cutoff imposed by a thin wall. While there is no known regularization scheme with a sharp cutoff that preserves four-dimensional Poincare invariance, we have a candidate for a soft cutoff. Dimensional Regularization preserves the desired invariances and corresponds to a soft cutoff in momentum space. Thick domain walls may be more appealing from the field theory point of view, but in gravity they pose additional challenges. For instance, it is not clear where the four-dimensional field theory is supposed to live, since the space does not have a boundary. This problem could potentially be cured by orbifolding the thick domain wall, which introduces a boundary at $`z=0`$. The geometry already has a $`𝐙_2`$ symmetry, so orbifolding it simply identifies the two $`AdS`$ spaces, but the derivative of the scalar does not vanish at $`z=0`$, so we need to put a source for it on the orbifold fixed point. Orbifolding the space should not affect our speculation that the thick domain wall corresponds to a soft cutoff in the CFT.
We now turn to the examples studied here. The main difference is that we are considering thick domain walls that interpolate between singular spaces. As mentioned before, such singular spaces appear in $`AdS`$ flows to non-conformal theories, so we speculate that we can replace our singular five-dimensional geometry by a non-conformal four-dimensional field theory with a soft cutoff. This speculation is even harder to make precise than the previous one, because five-dimensional gravity fails near the singularity and higher dimensional gravity or string theory has to come to the rescue. Nonetheless, we will forge ahead and offer some speculations on the field theory interpretations of the singularities we studied here.
We will discuss the $`\overline{\mathrm{\Lambda }}=0`$ solutions given in Eq. (12). For $`n=1`$ the spectrum consists of the massless four-dimensional graviton, an excited state with $`m=\sqrt{2}c`$, and a continuum of states with masses $`m\ge m_{gap}=3c/2`$. If we assume that our singular space corresponds to a non-conformal theory such as SYM or QCD, we can attempt to interpret this KK spectrum. A confining theory will have a strong coupling scale, $`\mathrm{\Lambda }_{QCD}`$, which sets the mass scale for the light states in the theory. We can interpret the mass gap found in the KK modes as the energy needed to make the lightest particle in the non-conformal field theory. The presence of the mass gap in our solution at least does not automatically rule it out. Unfortunately we do not have a good interpretation for the single massive resonance in the spectrum. This state should not have a field theory counterpart, since it is localized on the domain wall and does not propagate out to the singularity. Luckily we can eliminate this state by imposing the orbifold projection discussed above for the $`AdS`$ domain wall. This provides us with a boundary and eliminates this unwanted state. If a version of the $`AdS`$/CFT correspondence can be formulated at all, it is likely to be in the orbifolded case.
We also briefly discuss the solution for $`n=2`$. The equivalent quantum mechanics problem in that case cannot be solved completely, but we can obtain enough information to discuss this case in the light of the $`AdS`$/CFT correspondence. The spectrum in this case is very much like that found in . We find a single massless graviton and a continuum of plane wave states with masses starting at zero. Unlike the case studied in this space has naked singularities at a finite distance from the domain wall. These singularities imply that, if this space has a field theory interpretation, it should be in terms of a non-conformal theory. We have already discussed the confining case above. Since there is no mass gap in the $`n=2`$ case, we suggest that this space may correspond to a theory that is free in the infrared. Such a theory would have excitations with masses that are continuous from zero. From the original form of the $`AdS`$/CFT correspondence we expect the gravity description to break down completely if the field theory becomes weakly coupled. This makes it unlikely that the singularity for $`n=2`$ can be resolved by lifting to ten dimensions. The dual description should require string theory on some highly curved manifold.
Unfortunately all of our speculations follow from the assumption that we can use the $`AdS`$/CFT correspondence to gain some intuition about the domain wall space-times we studied here. To make any of our statements more precise we would need a formulation of the $`AdS`$/CFT correspondence along the lines of . This is clearly a necessary ingredient if we want to study domain walls in singular space-times in a more reliable and systematic way.
As this paper was nearing completion, and appeared. These papers have no direct overlap with the results here, but they provide another motivation for studying singular spaces. These papers discuss an intrinsically higher dimensional approach to solving the cosmological constant problem. In their analysis they naturally encounter spaces with singularities at finite distances. While it is not clear if the spaces we discussed here can be used in that context, their results provide another piece of evidence that singular spaces may play an important role in domain wall universe scenarios.
After this paper was submitted a first order formalism for bent thick domain walls appeared in the newest version of .
###### Acknowledgements.
It is a pleasure to thank Miguel Costa, Josh Erlich, Igor Klebanov, Lisa Randall, Yuri Shirman, and Kostas Skenderis for comments and helpful conversations. This work was supported in part by DOE grants #DF-FC02-94ER40818 and #DE-FC-02-91ER40671.
|
no-problem/0002/physics0002001.html
|
ar5iv
|
text
|
# 1 Predicted abundance pattern 𝑃(𝑛) (probability for a taxon to have 𝑛 subtaxa) of the branching model with different values of 𝑚. The curves have been individually rescaled.
## Abstract
For taxonomic levels higher than species, the abundance distributions of number of subtaxa per taxon tend to approximate power laws, but often show strong deviations from such a law. Previously, these deviations were attributed to finite-time effects in a continuous time branching process at the generic level. Instead, we describe here a simple discrete branching process which generates the observed distributions and find that the distribution’s deviation from power-law form is not caused by disequilibration, but rather that it is time-independent and determined by the evolutionary properties of the taxa of interest. Our model predicts—with no free parameters—the rank-frequency distribution of number of families in fossil marine animal orders obtained from the fossil record. We find that near power-law distributions are statistically almost inevitable for taxa higher than species. The branching model also sheds light on species abundance patterns, as well as on links between evolutionary processes, self-organized criticality and fractals.
Taxonomic abundance distributions have been studied since the pioneering work of Yule , who proposed a continuous time branching process model to explain the distributions at the generic level, and found that they were power laws in the limit of equilibrated populations. Deviations from the geometric law were attributed to a finite-time effect, namely, to the fact that the populations had not reached equilibrium. Much later, Burlando compiled data that appeared to corroborate the geometric nature of the distributions, even though clear violations of the law are visible in his data also. In this paper, we present a model which is based on a discrete branching process whose distributions are time-independent and where violations of the geometric form reflect specific environmental conditions and pressures that the assemblage under consideration was subject to during evolution. As such, it holds the promise that an analysis of taxonomic abundance distributions may reveal certain characteristics of ecological niches long after its inhabitants have disappeared.
The model described here is based on the simplest of branching processes, known in the mathematical literature as the Galton-Watson process. Consider an assemblage of taxa at one taxonomic level. This assemblage can be all the families under a particular order, all the subspecies of a particular species, or any other group of taxa at the same taxonomic level that can be assumed to have suffered the same evolutionary pressures. We are interested in the shape of the rank-frequency distribution of this assemblage and the factors that influence it.
We describe the model by explaining a specific example: the distribution of the number of families within orders for a particular phylum. The adaptation of this model to different levels in the taxonomic hierarchy is obvious. We can assume that the assemblage was founded by one order in the phylum and that this order consisted of one family which had one genus with one species. We further assume that new families in this order are created by means of mutation in individuals of extant families. This can be viewed as a process where existing families can “replicate” and create new families of the same order, which we term daughters of the initial family. Of course, relatively rarely, mutations may lead to the creation of a new order, a new class, etc. We define a probability $`p_i`$ for a family to have $`i`$ daughter families of the same order (true daughters). Thus, a family will have no true daughters with probability $`p_0`$, one true daughter with probability $`p_1`$, and so on. For the sake of simplicity, we initially assume that all families of this phylum share the same $`p_i`$. We show later that variance in $`p_i`$ among different families does not significantly affect the results, in particular the shape of the distribution. The branching process described above gives rise to an abundance distribution of families within orders, and its probability distribution can be obtained from the Lagrange expansion of a nonlinear differential equation . Using a simple iterative algorithm in place of this Lagrange expansion procedure, we can calculate rank-frequency curves for many different sets of $`p_i`$. It should be emphasized here that we are mostly concerned with the shape of this curve for $`n\lesssim 10^4`$, and not the asymptotic shape as $`n\to \infty `$, a limit that is not reached in nature.
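The branching process just described is straightforward to simulate directly. The sketch below is our own minimal Python illustration, not code from the paper; in particular the geometric offspring law (chosen so that its mean is $`m`$) is an assumption, since the paper leaves the individual $`p_i`$ unspecified. It draws many Galton-Watson trees and tallies the total number of families per order:

```python
import random
from collections import Counter

def offspring(m, rng):
    """Number of true daughters, drawn from a geometric law
    p_i = (1 - q) * q**i with q = m / (1 + m), whose mean is m."""
    q = m / (1.0 + m)
    k = 0
    while rng.random() < q:
        k += 1
    return k

def order_size(m, rng, cap=10**5):
    """Total number of families in one order: the founding family
    plus all of its descendants in the Galton-Watson tree."""
    total = frontier = 1
    while frontier and total < cap:
        births = sum(offspring(m, rng) for _ in range(frontier))
        total += births
        frontier = births
    return total

rng = random.Random(1)
abundance = Counter(order_size(0.9, rng) for _ in range(20_000))
# for subcritical m, the mean order size should approach 1/(1 - m) = 10 here
mean_n = sum(n * c for n, c in abundance.items()) / 20_000
```

Histogramming `abundance` with $`m=1`$ approaches the power-law regime of the critical process, while values of $`m`$ further from 1 bend the curve toward an exponential, as in Fig. 1.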
For different sets of $`p_i`$, the theoretical curve can be close to a power law, a power law with an exponential tail, or a purely exponential distribution (Fig. 1).
We show here that there is a global parameter that distinguishes among these cases. Indeed, the mean number of true daughters, i.e., the mean number of different families of the same order that each family gives rise to in the example above,
$`m={\displaystyle \sum _{i=0}^{\infty }}ip_i`$ (1)
is a good indicator of the overall shape of the curve. Universally, $`m=1`$ leads to a power law for the abundance distribution. The further $`m`$ is away from $`1`$, the further the curve diverges from a power-law and towards an exponential curve. The value of $`m`$ for a particular assemblage can be estimated from the fossil record also, allowing for a characterization of the evolutionary process with no free parameters. Indeed, if we assume that the number of families in this phylum existing at one time is roughly constant, or varies slowly compared to the average rate of family creation (an assumption the fossil record seems to vindicate ), we find that $`m`$ can be related to the ratio $`R_o/R_f`$ of the rates of creation of orders and families—by
$`m=(1+{\displaystyle \frac{R_o}{R_f}})^{-1}`$ (2)
to leading order .
In general, we cannot expect all the families within an order to share the same $`m`$. Interestingly, it turns out that even if the $`p_i`$ and $`m`$ differ widely between different families, the rank-frequency curve is identical to that obtained by assuming a fixed $`m`$ equal to the average of $`m`$ across the families (Fig. 2), i.e., the variance of the $`p_i`$ across families appears to be completely immaterial to the shape of the distribution—only the average $`\mu \equiv \langle m\rangle `$ counts.
In Fig. 3, we show the abundance distribution of families within orders for fossil marine animals , together with the prediction of our branching model. The theoretical curve was obtained by assuming that the ratio $`R_o/R_f`$ is approximated by the ratio of the total number of orders to the total number of families
$$\frac{R_o}{R_f}\stackrel{}{=}\frac{N_o}{N_f}$$
(3)
and that both are very small compared to the rate of mutations. The prediction $`\mu =0.9(16)`$ obtained from the branching process model by using (3) as the sole parameter fits the observed data remarkably well ($`P=0.12`$, Kolmogorov-Smirnov test, see inset in Fig. 3). Alternatively, we can use a best fit to determine the ratio $`R_o/R_f`$ without resorting to (3), yielding $`R_o/R_f=0.115(20)`$ ($`P=0.44`$). Fitting abundance distributions to the branching model thus allows us to determine a ratio of parameters which reflect dynamics intrinsic to the taxon under consideration, and the niche(s) it inhabits. Indeed, some taxa analyzed in Refs. are better fit with $`0.5<\mu <0.75`$, pointing to conditions in which the rate of taxon formation was much closer to the rate of subtaxon formation, indicating either a more “robust” genome or richer and more diverse niches.
In general, however, Burlando’s data suggest that a wide variety of taxonomic distributions are fit quite well by power laws ($`\mu =1`$). This seems to imply that actual taxonomic abundance patterns from the fossil record are characterized by a relatively narrow range of $`\mu `$ near $`1`$. This is likely within the model description advanced here. It is obvious that $`\mu `$ can not remain above $`1`$ for significant time scales as this would lead to an infinite number of subtaxa for each taxon. What about low $`\mu `$? We propose that low values of $`\mu `$ are not observed for large (and therefore statistically important) taxon assemblages for the following reasons. If $`\mu `$ is very small, this implies either a small number of total individuals for this assemblage, or a very low rate of beneficial taxon-forming (or niche-filling) mutations. The former might lead to this assemblage not being recognized at all in field observations. Either case will lead to an assemblage with too few taxons to be statistically tractable. Also, since such an assemblage either contains a small number of individuals or is less suited for further adaptation or both, it would seem to be susceptible to early extinction.
The branching model can—with appropriate care—also be applied to species-abundance distributions, even though these are more complicated than those for higher taxonomic orders for several reasons. Among these are the effects of sexual reproduction and the localized and variable effects of the environment and other species on specific populations. Still, as the arguments for using a branching process model essentially rely on mutations which may produce lines of individuals that displace others, species-abundance distributions may turn out not to be qualitatively as different from taxonomically higher-level rank-frequency distributions as is usually expected.
Historically, species abundance distributions have been characterized using frequency histograms of the number of species in logarithmic abundance classes. For many taxonomic assemblages, this was found to produce a humped distribution truncated on the left—a shape usually dubbed lognormal . In fact, this distribution is not incompatible with the power-law type distributions described above. Indeed, plotting the fossil data of Fig. 3 in logarithmic abundance classes produces a lognormal (Fig. 4).
For species, $`\mu `$ is the mean number of children each individual of the species has. (Of course, for sexual species, $`\mu `$ would be half the mean number of children per individual.) In the present case, $`\mu `$ less than $`1`$ implies that extant species’ populations decrease on average, while $`\mu `$ equal to $`1`$ implies that average populations do not change. An extant species’ population can decline due to the introduction of competitors and/or the decrease of the size of the species’ ecological niche. Let us examine the former more closely. If a competitor is introduced into a saturated niche, all species currently occupying that niche would temporarily see a decrease in their $`m`$ until a new equilibrium was obtained. If the new species is significantly fitter than the previously existing species, it may eliminate the others. If the new species is significantly less fit, then it may be the one eliminated. If the competitors are about as efficient as the species already present, then the outcome is less certain. Indeed, it is analogous to a non-biased random walk with a possibility of ruin. The effects of introducing a single competitor are transient. However, if new competitors are introduced more or less periodically, then this would act to push $`m`$ lower for all species in this niche and we would expect an abundance pattern closer to the exponential curve as opposed to the power-law than otherwise expected. We have examined this in simulations of populations where new competitors were introduced into the population by means of neutral mutations—mutations leading to new species of the same fitness as extant species—and found that these are fit very well by the branching model. A higher rate of neutral mutations and thus of new competitors leads to distributions closer to exponential. We have performed the same experiment in more sophisticated systems of digital organisms (artificial life) and found the same result .
If no new competitors are introduced but the size of the niche is gradually reduced, we expect the same effect on $`m`$ and on the abundance distributions. Whether it is possible to separate the effects of these two mechanisms in ecological abundance patterns obtained from field data is an open question. An analysis of such data to examine these trends would certainly be very interesting.
So far, we have sidestepped the difference between historical and ecological distributions. For the fossil record, the historical distribution we have modeled here should work well. For field observations where only currently living groups are considered, the nature of the death and extinction processes for each group will affect the abundance pattern. In our simulations and artificial-life experiments, we have universally observed a strong correlation between the shapes of historical and ecological distributions. We believe this correspondence will hold in natural distributions as well when death rates are affected mainly by competition for resources. The model’s validity for different scenarios is an interesting question, which could be answered by comparison with more taxonomical data.
Our branching process model allows us to reexamine the question of whether any type of special dynamics—such as self-organized criticality (SOC)—is at work in evolution . While it shows that the statistics of taxon rank-frequency patterns in evolution are closely related to the avalanche sizes in SOC sandpile models, the present model also makes clear that, instead of a subsidiary relationship where evolutionary processes may be self-organized critical, the power-law behaviour of both evolutionary and sandpile distributions can be understood in terms of the mechanics of a Galton-Watson branching process . The mechanics of this branching process are such that the branching trees are probabilistic fractal constructs. However, the underlying stochastic process responsible for the observed behaviour can be explained simply in terms of a random walk . For evolution, the propensity for near power-law behaviour is found to stem from a dynamical process in which $`\mu \approx 1`$ is selected for and far more likely to be observed than other values, while the “self-tuning” of the SOC models is seen to result from arbitrarily enforcing conditions which would correspond to the limit $`R_o/R_f\to 0`$ and therefore $`m\to 1`$ .
Acknowledgments. We would like to thank J. J. Sepkoski for kindly sending us his amended data set of fossil marine animal families. Access to the Intel Paragon XP/S was provided by the Center of Advanced Computing Research at the California Institute of Technology. This work was supported by a grant from the NSF.
Correspondence and requests for materials should be addressed to C.A. (e-mail: adami@krl.caltech.edu).
|
no-problem/0002/astro-ph0002285.html
|
ar5iv
|
text
|
# High Velocity Star Formation in the LMC
## 1. Introduction
The Large Magellanic Cloud (LMC) shows a clear contrast between regular kinematics and irregular structure, with its off-center bar and lack of any clear stellar spiral morphology. The velocities, as traced by carbon stars (Graff et al. 2000; Hardy et al. 2000; Kunkel et al. 1997) and by H$`\alpha `$ emission (Kim et al. 1999), are well fit by a rotating disk although there may be a non-disk component (Graff et al. 2000; Luks & Rohlfs, 1992). The overall velocity dispersion of the carbon stars, $`\sim 20\mathrm{km}\mathrm{s}^{-1}`$, is small compared to the rotational velocity of the LMC $`(60{-}70\mathrm{km}\mathrm{s}^{-1})`$, indicating that the stellar component of the LMC is relatively flat and rotationally supported. Moreover, Graff et al. (2000) showed that the younger, metal rich carbon stars in the inner $`4^{\circ }`$ of the LMC have a much lower velocity dispersion, only $`8\mathrm{km}\mathrm{s}^{-1}`$. This contrast suggests that the LMC lies in a nearly face-on plane, but is irregular within that plane.
The three-dimensional structure of LMC dust was measured using the “light-echo” technique on SN1987A by Xu, Crotts & Kunkel (1996). They identified 12 separate dust sheets. Most significantly, in this work and in a follow-up spectroscopic study (Xu & Crotts 1999), they identified three components with the spherical shell N157C enclosing the OB association LH 90. This shell was found to lie 490 pc in front of SN1987A. Including the LMC inclination of $`30^{\circ }`$, the component of this distance perpendicular to the LMC plane is 425 pc.
This distance is much greater than the virial thickness of the young stellar population of the LMC at the location of SN1987A, $`\sim 90`$ pc (given the local surface density of 100 $`M_{\odot }/\mathrm{pc}^2`$ which we determine below). Thus, it is difficult to imagine how these two young stellar populations came to be so separated. Xu et al. (1996) suggest that SN1987A is a “…runaway star behind the disk of the Large Magellanic Cloud”.
Classical runaway stars can be found high above the Milky Way plane (Conlon et al. 1990). The runaway O and B stars are thought to be ejected by one of two processes: supernova explosions in close binary systems (Blaauw 1961) and strong dynamical interactions in star clusters (Poveda, Ruiz, & Allen 1967; Gies & Bolton 1986). Indeed, Hipparcos measurements of O and B stars have found several runaways that can be identified as having been ejected from particular OB associations (de Zeeuw et al. 1999).
However, Efremov (1991) and Panagia et al. (2000) have identified SN1987A as belonging to KMK 80 (Kontizas, Metaxa & Kontizas 1988) “…a loose young cluster $`12\pm 2`$ Myr old….” (Panagia et al. 2000). Thus, it cannot be a classic runaway star; any of the violent ejection mechanisms discussed above would eject only the single star, and not its cluster.
In the next section, we solve for the initial kinematics of SN1987A and its associated cluster, KMK 80. We find that the cluster formed in the LMC plane, moving with a velocity of $`50\mathrm{km}\mathrm{s}^{-1}`$ perpendicular to the LMC plane.
## 2. Kinematics of SN1987A
We begin by examining the velocities of these two young clusters relative to the LMC. The disk solution of Hardy et al. (2000) at the projected position of SN1987A is $`271\pm 1\mathrm{km}\mathrm{s}^{-1}`$.
By comparison, SN1987A has a redshift of $`286\mathrm{km}\mathrm{s}^{-1}`$ (Meaburn, Bryce & Holloway 1995) while the N157C complex containing LH 90 has a redshift of $`270\mathrm{km}\mathrm{s}^{-1}`$ (Xu & Crotts 1999). Thus, the velocity of LH 90 is perfectly consistent with the LMC disk velocity at this point.
On the other hand, SN1987A is in two respects inconsistent with being a member of the cold population: first, it is moving at $`15\mathrm{km}\mathrm{s}^{-1}`$ relative to the disk, faster than the $`8\mathrm{km}\mathrm{s}^{-1}`$ typical of the cold population. Secondly, and more importantly, it lies far above the scale height of the cold population (and even above the scale height of the hot population).
To take account of both effects simultaneously, we define the “vertical energy” of a star to be $`E\equiv v_z^2/2+\mathrm{\Phi }(z)`$ and approximate the potential energy as $`\mathrm{\Phi }(z)\approx 2\pi G\mathrm{\Sigma }|z|`$ for stars of height $`z\gtrsim 150`$ pc. An examination of the isophotal map of the LMC of de Vaucouleurs (1957) shows that the surface brightness of the LMC in the neighborhood of SN1987A is about 21.7 mag./arcsec², or 56 $`L_{\odot }\mathrm{pc}^{-2}`$. Assigning a Population I mass-luminosity ratio of 1.7, we derive a mass surface density of roughly $`\mathrm{\Sigma }_{\mathrm{SN1987A}}\approx 100M_{\odot }\mathrm{pc}^{-2}`$. We then derive a total energy of $`1300(\mathrm{km}\mathrm{s}^{-1})^2`$ corresponding to a midplane velocity of $`50\mathrm{km}\mathrm{s}^{-1}`$. Thus, the total gravitational energy of the supernova is much too high for it to be a member of the cold population (and somewhat high even for the hot population).
We note that the age of the star cluster containing the supernova is about 12 Myr, which is consistent with estimates of the age of the precursor to the supernova. If the star cluster was formed at the LMC plane 12 Myr ago, with a velocity perpendicular to the plane of $`50\mathrm{km}\mathrm{s}^{-1}`$, and this velocity decreased with a gravitational acceleration of $`3\mathrm{km}\mathrm{s}^{-1}\mathrm{Myr}^{-1}`$, it would today be $`\sim 400`$ pc above the plane moving at $`14\mathrm{km}\mathrm{s}^{-1}`$, consistent with its measured distance of 425 pc above the plane and relative velocity of $`15\mathrm{km}\mathrm{s}^{-1}`$.
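As a consistency check, the arithmetic of this section can be reproduced in a few lines of Python (a sketch: the value of G in astronomical units and the km/s–Myr–pc conversion are standard constants, and the input numbers are those quoted above):

```python
import math

G = 4.300917e-3            # gravitational constant in pc (km/s)^2 / M_sun
KMS_MYR_TO_PC = 1.0227     # 1 km/s sustained for 1 Myr covers 1.0227 pc

sigma = 100.0              # local surface density, M_sun / pc^2
grad = 2 * math.pi * G * sigma     # |dPhi/dz| in (km/s)^2 per pc, ~2.70
a_myr = grad * KMS_MYR_TO_PC       # same gradient expressed in km/s per Myr, ~2.8

# vertical energy of SN1987A today: z = 425 pc, v_z = 15 km/s
E = 15.0**2 / 2 + grad * 425.0     # ~1260 (km/s)^2
v_mid = math.sqrt(2 * E)           # implied midplane launch speed, ~50 km/s

# ballistic flight from the plane at 50 km/s for t = 12 Myr
t = 12.0
z_now = (50.0 * t - 0.5 * a_myr * t**2) * KMS_MYR_TO_PC   # ~410 pc
v_now = 50.0 - a_myr * t                                  # ~17 km/s
```

The script recovers the ~2.8 km/s per Myr deceleration (the ~3 quoted above), the ~50 km/s midplane velocity, and the ~400 pc, ~17 km/s state after 12 Myr.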
## 3. Discussion
The match between these numbers is compelling, and we suggest that the entire KMK 80 star cluster was formed 12 Myr ago at the LMC plane, but with an extraordinarily high velocity of $`50\mathrm{km}\mathrm{s}^{-1}`$ perpendicular to the plane. The agreement between age and flight time is typical of most runaway O and B stars in the halo of the Milky Way (Keenan, Brown & Lennon 1986).
We do not know what mechanism could create a star cluster moving at such high velocities. As far as we know, there is no counterpart in the Milky Way. However, we can speculate on two possible mechanisms. First, the cluster might have formed as part of a galactic fountain pushed out of the LMC by supernovae or stellar winds. Such a mechanism was put forward by Xu & Crotts who suggested that SN1987A was formed on a shell of gas pushed out of the LMC by LH 90. These authors noted that SN1987A is on the outskirts of the extremely violent 30 Dor. region.
Second, a dense cloud of gas could have smashed through the LMC disk, triggering star formation in the process with the resulting stars carrying some of the initial momentum of the cloud. This cloud could have been fountain material raining back down onto the LMC disk, or it could have been a high velocity cloud orbiting either the LMC or the Milky Way.
There are a few systems in the Milky Way that might have been formed in processes similar to the KMK 80 cluster. In addition to runaway O stars, the Milky Way Halo also contains young, high velocity, high metallicity A stars (Perry 1969; Rodgers 1971). These stars are all roughly the same age, $`<650`$ Myr (Lance 1988), which suggests that they were created from the collision of a Magellanic Cloud sized galaxy with the Milky Way disk (Rodgers, Harding, & Sadler 1981; Lance 1988). A similar recent collision in the LMC might generate high velocity star formation without breaking up KMK 80.
Gould’s belt (Gould 1874; Pöppel 1997) contains several OB associations in a roughly planar region oriented $`18^{\circ }`$ from the plane of the Milky Way. Comerón & Torra (1994) suggested that Gould’s belt arose from the glancing collision of a high velocity cloud with the Milky Way disk. Perhaps KMK 80 is part of a similar structure oriented more nearly perpendicular to the LMC plane.
Logically, there are only two alternatives to our interpretation that KMK 80 formed at high vertical velocity. First, KMK 80 may actually lie in the LMC plane while the progenitor of SN1987A is simply seen projected against this cluster, having been earlier ejected from a binary. This appears to us to be a priori unlikely and can in any event be tested by spectroscopic observations of KMK 80 members. In addition to confirming SN1987A as a radial-velocity member of this cluster, such measurements would yield the metallicity of the cluster and so of the SN1987A progenitor.
Second, SN1987A could actually lie in the LMC plane while N157C lies 490 pc closer to us. Then, either LH 90 would still be at the center of N157C, or it would lie in the LMC plane and be seen by chance projected against the center of this cloud. In the first case, one would still have the same problem of an OB association lying far from the LMC plane. As for the second case, the probability of a chance projection of two such naturally associated structures seems incredibly low. In either case, KMK 80 would have to have been born with a vertical energy at least equal to its present kinetic energy of $`(15\mathrm{km}\mathrm{s}^{-1})^2`$, which is still quite high. Moreover, the N157C cloud would have to have exactly the same radial velocity as the LMC plane despite the fact that it lies $`\sim 400`$ pc from it. Hence, the various alternatives to our interpretation, while not actually ruled out, require extraordinary combinations of coincidences.
We thank Arlin Crotts, Yuri Efremov and David Weinberg for useful discussions. This work was supported in part by grant AST 97-27520 from the NSF.
|
no-problem/0002/math0002134.html
|
ar5iv
|
text
|
# Random triangle in square: geometrical approach
## I Intro
We call our approach geometrical because, instead of considering a 6-fold integral in an abstract space, we consider a random triangle (RT) inside a plane rectangle, where all possible cases are explicitly apparent.
The area of a triangle with vertices p1=(x1,y1), p2=(x2,y2), p3=(x3,y3) is equal to
$`s=\frac{1}{2}(x1(y2-y3)+x2(y3-y1)+x3(y1-y2)).`$ (1)
Let the points p1, p2, p3 be randomly (with a constant probability density) distributed over the rectangle with sides A, B. What is the mean area of the triangles with vertices p1, p2, p3?
The answer is evident: zero, as any given triangle corresponds to the 6 cases of full permutation of the three points over the vertices of the triangle. The mean area of these 6 triangles, as given by (1), is zero.
But if we take the triangle as a geometrical figure, and if we consider the area of such a figure as a positive value, then we must take the absolute value of s in formula (1) and… the calculation of the relevant integrals becomes impossible even for Mathematica. So Michael Trott in his recent brilliant paper in the Mathematica Journal found 496 different integrals, each over a subregion with the same sign of $`s`$, and then used Mathematica to solve such an enormously difficult task. Needless to say, M.Trott’s stunning skill in using Mathematica is far out of the reach of an ordinary reader (such as me), so I’ve spent some three weeks searching for a more simple solution. The result is most easily obtained by the explicit geometrical approach.
## II Geometrical approach
Here we consider a RT in a square (= right rectangle) with side length $`A`$. The first observation is that, due to the symmetry of the points, we may simplify the problem by considering a particular relation between the points. For example, as we will do here, we may consider only the case $`x1<x2<x3`$, with due account of the normalizing condition.
The first point (p1) may take any position inside the square, so the region of integration over x1, y1 is $`0<x1<A;0<y1<A`$ in all cases considered further.
Now we should distinguish two cases for the relation between the ordinates of the 1st and 2nd points: 1) $`y2>y1`$ and 2) $`y2<y1`$.
### A $`y2>y1`$
In this case, what is important is the relation between the two angular coefficients $`k1`$ and $`k2`$:
$`k1=(A-y1)/(A-x1);k2=(y2-y1)/(x2-x1)`$.
$`\mathrm{𝐤𝟐}>\mathrm{𝐤𝟏}`$. In this case the line $`(p1,p2)`$ crosses the upper side of the square at the point $`(x31m,A)`$ with $`x31m=(A-y2)/k2+x2`$, see Fig. 1, panel 1A.
The region of integration over $`x2,y2`$ is:
$`x1<x2<A;y21m<y2<A,y21m=k1(x2-x1)+y1.`$
The first region of integration over $`x3,y3`$ is: $`x2<x3<x31m;k2(x3-x2)+y2<y3<A`$, region 1+, panel 1A, Fig. 1.
In this region formula (1) gives a positive value, as the points $`(p1,p2,p3)`$ form a right-handed system: moving in the direction $`p1\to p2\to p3\to p1`$ we make a counter-clockwise rotation.
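This sign convention is easy to check with a tiny helper (a hypothetical Python function implementing the signed area of formula (1)):

```python
def signed_area(p1, p2, p3):
    """Signed area of the triangle p1 p2 p3, formula (1):
    positive for a counter-clockwise (right-handed) vertex order."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

print(signed_area((0, 0), (1, 0), (0, 1)))  # counter-clockwise: 0.5
print(signed_area((0, 0), (0, 1), (1, 0)))  # clockwise: -0.5
```

Swapping any two vertices flips the sign, which is exactly why the 6 permutations of a fixed triangle average to zero.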
Now we are ready to calculate the first integral:
$$I1=\int _0^A𝑑x1\int _0^A𝑑y1\int _{x1}^A𝑑x2\int _{y21m}^A𝑑y2\int _{x2}^{x31m}𝑑x3\int _{k2(x3-x2)+y2}^A(s)𝑑y3=\frac{A^8}{34560}.$$
The second integral comes from the region “under” the first integral’s region, region 2-, panel 1A, Fig. 1, and here formula (1) should be taken with a negative sign:
$$I2=\int _0^A𝑑x1\int _0^A𝑑y1\int _{x1}^A𝑑x2\int _{y21m}^A𝑑y2\int _{x2}^{x31m}𝑑x3\int _0^{k2(x3-x2)+y2}(-s)𝑑y3=\frac{23A^8}{34560}.$$
Note that the 2nd integral differs from the 1st one only by the integration boundaries over $`y3`$ (and by the sign of $`s`$).
Also, an interesting exact relation occurs between the numerical values of the two considered integrals: $`I2=23I1.`$
The last integral in the case $`\mathrm{𝐤𝟐}>\mathrm{𝐤𝟏}`$, corresponding to region 3-, panel 1A, Fig. 1, is:
$$I3=\int _0^A𝑑x1\int _0^A𝑑y1\int _{x1}^A𝑑x2\int _{y21m}^A𝑑y2\int _{x31m}^A𝑑x3\int _0^A(-s)𝑑y3=\frac{7A^8}{1728}=140I1.$$
Note that all three multiple integrals have the same first four ranges of integration, and we could write them down in a more compact form, but we will not perform this purely “decorative” operation.
$`\mathrm{𝐤𝟐}<\mathrm{𝐤𝟏}`$. In this case, and still at $`\mathrm{𝐲𝟏}<\mathrm{𝐲𝟐}`$, the line $`(p1,p2)`$ crosses the right side of the square; now the region of integration over $`y2`$ is $`y1<y2<y21m,`$ and we have two different regions of integration over $`p3`$:
$`x2<x3<A,`$ $`k2(x3-x2)+y2<y3<A,`$ with positive $`s`$, region 4+, panel 1B, Fig. 1, which gives $`I4`$, and
$`x2<x3<A,0<y3<`$ $`k2(x3-x2)+y2,`$ with negative $`s`$, region 5-, panel 1B, Fig. 1, which gives $`I5`$.
Therefore we have two additional integrals:
$$I4=\int _0^A𝑑x1\int _0^A𝑑y1\int _{x1}^A𝑑x2\int _{y1}^{y21m}𝑑y2\int _{x2}^A𝑑x3\int _{k2(x3-x2)+y2}^A(s)𝑑y3=\frac{19A^8}{34560}=19I1.$$
$$I5=\int _0^A𝑑x1\int _0^A𝑑y1\int _{x1}^A𝑑x2\int _{y1}^{y21m}𝑑y2\int _{x2}^A𝑑x3\int _0^{k2(x3-x2)+y2}(-s)𝑑y3=\frac{37A^8}{34560}=37I1.$$
### B $`y2<y1`$
Now what is important is the relation between the two coefficients $`k3`$ and $`k4`$:
$`k3=y1/(A-x1);k4=(y1-y2)/(x2-x1)`$.
$`\mathrm{𝐤𝟒}<\mathrm{𝐤𝟑}`$. If $`k4<k3,`$ then the line $`(p1,p2)`$ crosses the right side of the square, see panel 1C, Fig. 1, and we have a case completely analogous to the one considered in the previous section (see also panel 1B); the two integrals, I6 with positive $`s`$ and I7 with negative $`s`$, are equal to I4 and I5 respectively. Here our geometrical approach demonstrates its power particularly explicitly: it is sufficient to look at panels 1A - 1D of Fig. 1 to be convinced that we actually have only two different cases, one case when the line (p1,p2) crosses two opposite sides of the square, and a second case when the line (p1,p2) crosses two adjacent sides of the square.
$`\mathrm{𝐤𝟒}>\mathrm{𝐤𝟑}`$. Now the line (p1,p2) crosses the lower side of the square (panel 1D, Fig. 1), and we have essentially the case coinciding with the previous one (panel 1A, Fig. 1), so we get another three integrals with values actually found before: I8=I1, I9=I2, and I10=I3.
Now the sum of all 10 integrals is equal to $`II=11A^8/864.`$ The normalizing coefficient is found by calculating the sum of the abovementioned 10 integrals with $`(\pm s)=1`$ in all cases, which gives $`JJ=A^6/6,`$ that is, 1/6 of the volume of the hypercube with side A. So the mean area of a random triangle inside the square is $`II/JJ=11/144`$ of the host figure's area, $`A^2`$.
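The 11/144 result is easy to cross-check numerically. The following Python sketch (a Monte Carlo check of mine, not part of the derivation above) samples random triangles in the unit square:

```python
import random

def triangle_area(x1, y1, x2, y2, x3, y3):
    # absolute value of the signed area s used throughout the text
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

def mean_area_in_square(n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += triangle_area(rng.random(), rng.random(), rng.random(),
                               rng.random(), rng.random(), rng.random())
    return total / n

print(mean_area_in_square())  # close to 11/144 ≈ 0.0764
```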
## III RT in rectangle
Our geometrical approach also allows us to easily calculate the mean area of a random triangle when the host figure is a rectangle. With the experience gained from the square case, we consider here only two cases, leading to 5 integrals.
As a result, we present a simple and transparent Mathematica code for calculating the mean area of a random triangle in a rectangle with sides A and B.
```
(* s is the signed triangle area: s = (x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2))/2 *)
(* y2 > y1 *)
k1 = (B - y1)/(A - x1); k2 = (y2 - y1)/(x2 - x1);
Y2 = k1*(x2 - x1) + y1; Y3 = k2*(x3 - x1) + y1;
(* k2 > k1 *)
X = (B - y2)/k2 + x2;
I1 := Integrate[s, {x1, 0, A}, {y1, 0, B}, {x2, x1, A}, {y2, Y2, B}, {x3, x2, X}, {y3, Y3, B}];
(* I1 = A^4*B^4/34560 *)
I2 := Integrate[-s, {x1, 0, A}, {y1, 0, B}, {x2, x1, A}, {y2, Y2, B}, {x3, x2, X}, {y3, 0, Y3}];
(* I2 = 23*I1 *)
I3 := Integrate[-s, {x1, 0, A}, {y1, 0, B}, {x2, x1, A}, {y2, Y2, B}, {x3, X, A}, {y3, 0, B}];
(* I3 = 140*I1 *)
(* k2 < k1 *)
I4 := Integrate[s, {x1, 0, A}, {y1, 0, B}, {x2, x1, A}, {y2, y1, Y2}, {x3, x2, A}, {y3, Y3, B}];
(* I4 = 19*I1 *)
I5 := Integrate[-s, {x1, 0, A}, {y1, 0, B}, {x2, x1, A}, {y2, y1, Y2}, {x3, x2, A}, {y3, 0, Y3}];
(* I5 = 37*I1 *)
I15 = I1 + I2 + I3 + I4 + I5; (* I15 = 11*A^4*B^4/1728 *)
(* calculation of the normalizing coefficient *)
s = 1; J1 = I1; (* J1 = A^3*B^3/432 *) s = -1; J2 = I2; (* J2 = 5*J1 *)
s = -1; J3 = I3; (* J3 = 18*J1 *) s = 1; J4 = I4; (* J4 = 5*J1 *)
s = -1; J5 = I5; (* J5 = 7*J1 *) J15 = J1 + J2 + J3 + J4 + J5; (* J15 = A^3*B^3/12 *)
MeanSquareOfRandomTriangleInRectangle = I15/J15; (* = (11/144)*A*B *)
(* the mean area of a random triangle in the rectangle is 11/144 of the rectangle's area *)
```
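Since a rectangle is an affine image of the square, and affine maps rescale all areas by the same factor while keeping uniform distributions uniform, the answer had to be 11/144 of the host area. A quick Monte Carlo cross-check (my sketch, separate from the Mathematica session above):

```python
import random

def mean_triangle_area_in_rectangle(A, B, n=100_000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1, x2, x3 = (rng.random() * A for _ in range(3))
        y1, y2, y3 = (rng.random() * B for _ in range(3))
        total += abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
    return total / n

print(mean_triangle_area_in_rectangle(2.0, 3.0))  # close to (11/144)*6 ≈ 0.458
```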
## IV RT in square frame
Now we consider the related problem of a random triangle in a square frame. Let three points be randomly (with a constant differential probability function) distributed along the sides of the unit square (side length and area both equal to 1). What is the mean area of the triangles formed by these points as vertices?
The solution is elementary, but we consider it for purely pedagogical purposes. The first observation is that, due to the symmetry of the square and of the points, it is sufficient to assume that the 1st particle is on the bottom side of the square. Then four different cases should be considered.
The 2nd particle is also on the bottom side.
Let the 3rd particle move with constant linear velocity along all sides of the square. We are looking for the value of the integral over the coordinates of the 3rd particle, and then over the coordinates of the 2nd particle, which is allowed to move only along the bottom side of the square.
In Mathematica, we should calculate the following path integral:
```
s[x1_, y1_, x2_, y2_, x3_, y3_] := (1/2)*Abs[x1*(y2 - y3) +
    x2*(-y1 + y3) + x3*(y1 - y2)];
I1 := Integrate[(Integrate[s[x1, 0, x2, 0, x3, 0], {x3, 0, 1}] +
    Integrate[s[x1, 0, x2, 0, 1, y3], {y3, 0, 1}] +
    Integrate[s[x1, 0, x2, 0, x3, 1], {x3, 0, 1}] +
    Integrate[s[x1, 0, x2, 0, 0, y3], {y3, 0, 1}]), {x2, 0, 1}]
```
The result is $`I1=\frac{1}{2}-x1+x1^2.`$
The 2nd particle is on the right side of the square.
The relevant integral, which we do not write down, is equal to $`I2=\frac{11-8x1+3x1^2}{12}.`$
The 2nd particle is on the upper side of the square: $`I3=\frac{11-6x1+6x1^2}{12}.`$
The 2nd particle is on the left side of the square: $`I4=\frac{6+2x1+3x1^2}{12}.`$
The sum of these four integrals gives $`I14=I1+I2+I3+I4=\frac{17}{6}-2x1+2x1^2.`$ Now `Integrate[I14,{x1,0,1}]` gives 5/2. The normalizing coefficient is evidently 16, since we calculate 4 path integrals (over the 3rd particle), each of which has a path length equal to 4. The final result is: the mean area of a random triangle inscribed in the unit square is 5/32.
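Again, a Monte Carlo cross-check (mine, not part of the derivation): the three vertices are drawn uniformly on the perimeter, parametrized at unit speed.

```python
import random

def perimeter_point(t):
    # t in [0, 4) parametrizes the unit-square perimeter at unit speed
    side, u = int(t), t - int(t)
    return [(u, 0.0), (1.0, u), (1.0 - u, 1.0), (0.0, 1.0 - u)][side]

def mean_area_on_perimeter(n=100_000, seed=3):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        (x1, y1), (x2, y2), (x3, y3) = (perimeter_point(4.0 * rng.random())
                                        for _ in range(3))
        total += abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
    return total / n

print(mean_area_on_perimeter())  # close to 5/32 = 0.15625
```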
Let us look at this value (and check it!) from another point of view. We divide each side of the unit square into 10 equal parts and let each of the three particles run over all mid-points of the resulting 40 parts. Then we have 40×40×40 = 64,000 triangles with mean area (as calculated by Mathematica) equal to 249/1600 = 498/3200, which is very close to 5/32 = 500/3200.
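The discretized check just described can be reproduced exactly in integer arithmetic; the sketch below (my re-implementation, not the original Mathematica session) scales all coordinates by 20 so that the 40 mid-points have odd integer coordinates:

```python
from fractions import Fraction
from itertools import product

def midpoints():
    # mid-points of the 10 subintervals per side, scaled by 20: odd integers 1..19
    pts = []
    for k in range(1, 20, 2):
        pts += [(k, 0), (20, k), (k, 20), (0, k)]
    return pts  # 40 points on the scaled perimeter

def discretized_mean_area():
    pts = midpoints()
    total = 0  # sum of |cross products| = sum of (2 * area * 20^2), all integers
    for (x1, y1), (x2, y2), (x3, y3) in product(pts, repeat=3):
        total += abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    # divide by 2 (signed-area factor), 20^2 (coordinate scaling), 40^3 (triangles)
    return Fraction(total, 2 * 400 * 40**3)

print(discretized_mean_area())  # an exact rational very close to 5/32
```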
So we understand more vividly in what sense the mean area of random triangles inscribed in the square is 5/32.
It is very interesting to compare these two ”mean” values, 11/144 and 5/32. The first value is 22/45, that is, almost exactly 1/2, of the second one. In other words, the mean area of triangles with vertices randomly distributed over the whole square is almost exactly half of the mean area of triangles whose vertices are allowed to occur only on the sides of the square.
The reason for considering this last problem is originally related to my attempts to find a simple solution of the first problem. It seemed to me that by solving the problem of inscribed triangles I could somehow solve the first problem as well. Some hazy ideas about a differentiation/integration connection between the two problems unfortunately bore no fruit, and I solved the two problems separately.
What is left is the problem of the mean volume of a tetrahedron in the cube (M. Trott, personal communication). I hope that the geometrical approach will also help in this much more difficult problem. If the geometrical approach managed to reduce the number of integrals from the 496 cumbersome ones in the original solution by M. Trott to 5 very simple integrals, hopefully it will help in the tetrahedron-in-cube problem as well.
The numerical value obtained by Mathematica suggests 1/72, but this is not the exact value; it is merely the output of Mathematica's command `Rationalize[NumericalValue,10^-4]`.
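A Monte Carlo estimate readily confirms that the mean tetrahedron volume is close to, but apparently not exactly, 1/72 (this sketch is mine, not part of the original computation):

```python
import random

def tetra_volume(p0, p1, p2, p3):
    # V = |det(p1-p0, p2-p0, p3-p0)| / 6
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0

def mean_tetra_volume(n=100_000, seed=4):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pts = [[rng.random() for _ in range(3)] for _ in range(4)]
        total += tetra_volume(*pts)
    return total / n

print(mean_tetra_volume())  # about 0.0138, close to (but not exactly) 1/72
```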
## Acknowledgement
The useful correspondence with M. Trott is highly appreciated.
## References
M. Trott, The Mathematica Journal, vol. 7, no. 2, pp. 189-197, 1998.
|
no-problem/0002/astro-ph0002034.html
|
ar5iv
|
text
|
# The Detectability of Gamma-Ray Bursts and Their Afterglows at Very High Redshifts
## Introduction
We first show that the GRBs with well-established redshifts could have been detected out to very high redshifts (VHRs). Then, we show that their soft X-ray, optical, and infrared afterglows could also have been detected out to these redshifts.
## Detectability of GRBs
We first show that GRBs are detectable out to very high redshifts. The peak photon number luminosity is
$$L_P=\int_{\nu _l}^{\nu _u}\frac{dL_P}{d\nu }𝑑\nu ,$$
(1)
where $`\nu _l<\nu <\nu _u`$ is the band of observation. Typically, for BATSE, $`\nu _l=50`$ keV and $`\nu _u=300`$ keV. The corresponding peak photon number flux $`P`$ is
$$P=\int_{\nu _l}^{\nu _u}\frac{dP}{d\nu }𝑑\nu .$$
(2)
Assuming that GRBs have a photon number spectrum of the form $`dL_P/d\nu \propto \nu ^{-\alpha }`$ and that $`L_P`$ is independent of $`z`$, the observed peak photon number flux $`P`$ for a burst occurring at a redshift $`z`$ is given by
$$P=\frac{L_P}{4\pi D^2(z)(1+z)^\alpha },$$
(3)
where $`D(z)`$ is the comoving distance to the GRB. Taking $`\alpha =1`$, which is typical of GRBs , Equation (3) coincidentally reduces to the form that one gets when $`P`$ and $`L_P`$ are bolometric quantities.
Using these expressions, we have calculated the limiting redshifts detectable by BATSE and HETE-2, and by Swift, for the seven GRBs with well-established redshifts and published peak photon number fluxes. In doing so, we have used the peak photon number fluxes given in Table 1 of , taken a detection threshold of 0.2 ph s<sup>-1</sup> for BATSE and HETE-2 and 0.04 ph s<sup>-1</sup> for Swift , and set $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_m=0.3`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (other cosmologies give similar results).
Figure 1 displays the results. This figure shows that BATSE and HETE-2 would be able to detect four of these GRBs (GRBs 970228, 970508, 980613, and 980703) out to redshifts $`2\lesssim z\lesssim 4`$, and three (GRBs 971214, 990123, and 990510) out to redshifts of $`20\lesssim z\lesssim 30`$. Swift would be able to detect the former four out to redshifts of $`5\lesssim z\lesssim 15`$, and the latter three out to redshifts in excess of $`z\approx 70`$, although it is unlikely that GRBs occur at such extreme redshifts (see §3 below). Consequently, if GRBs occur at VHRs, BATSE has probably already detected them, and future missions should detect them as well.
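The limiting-redshift calculation based on Equation (3) can be sketched as follows (my illustration, not the authors' code; the example flux and redshift values are made up, and units cancel because the luminosity is calibrated from the observed flux):

```python
import math

C_KM_S = 2.998e5   # speed of light, km/s
H0 = 65.0          # km/s/Mpc, as in the text

def comoving_distance(z, n=2000):
    # D_C = (c/H0) * Integral_0^z dz'/E(z'), flat Lambda-CDM
    # (Om = 0.3, OL = 0.7), simple midpoint rule
    dz = z / n
    total = 0.0
    for i in range(n):
        zp = (i + 0.5) * dz
        total += dz / math.sqrt(0.3 * (1.0 + zp) ** 3 + 0.7)
    return (C_KM_S / H0) * total  # Mpc

def peak_flux(L_P, z, alpha=1.0):
    # Eq. (3): P = L_P / (4 pi D^2 (1+z)^alpha)
    D = comoving_distance(z)
    return L_P / (4.0 * math.pi * D ** 2 * (1.0 + z) ** alpha)

def limiting_redshift(P_obs, z_obs, threshold, z_max=100.0):
    # calibrate L_P from the observed flux, then invert Eq. (3) by bisection
    L_P = P_obs * 4.0 * math.pi * comoving_distance(z_obs) ** 2 * (1.0 + z_obs)
    lo, hi = z_obs, z_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if peak_flux(L_P, mid) > threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical burst: 10 ph/s observed at z = 1, Swift-like 0.04 ph/s threshold
print(limiting_redshift(10.0, 1.0, 0.04))
```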
## Detectability of GRB Afterglows
The soft X-ray, optical and infrared afterglows of GRBs are also detectable out to VHRs. The effects of distance and redshift tend to reduce the spectral flux in GRB afterglows in a given frequency band, but time dilation tends to increase it at a fixed time of observation after the GRB, since afterglow intensities tend to decrease with time. These effects combine to produce little or no decrease in the spectral energy flux $`F_\nu `$ of GRB afterglows in a given frequency band and at a fixed time of observation after the GRB with increasing redshift:
$$F_\nu (\nu ,t)=\frac{L_\nu (\nu ,t)}{4\pi D^2(z)(1+z)^{1-a+b}},$$
(4)
where $`L_\nu \propto \nu ^at^b`$ is the intrinsic spectral luminosity of the GRB afterglow, which we assume applies even at early times, and $`D(z)`$ is again the comoving distance to the burst. Many afterglows fade like $`b\approx -4/3`$, which implies that $`F_\nu (\nu ,t)\propto D(z)^{-2}(1+z)^{-5/9}`$ in the simplest afterglow model where $`a=2b/3`$ . In addition, $`D(z)`$ increases very slowly with redshift at redshifts greater than a few. Consequently, there is little or no decrease in the spectral flux of GRB afterglows with increasing redshift beyond $`z\approx 3`$.
For example, it was found in the case of GRB 980519 that $`a=-1.05\pm 0.10`$ and $`b=-2.05\pm 0.04`$, so that $`1-a+b=0.00\pm 0.11`$, which implies no decrease in the spectral flux with increasing redshift, except for the effect of $`D(z)`$. In the simplest afterglow model where $`a=2b/3`$, if the afterglow declines more rapidly than $`b\approx -1.7`$, the spectral flux actually increases as one moves the burst to higher redshifts!
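The near-cancellation can be made quantitative with a short numerical sketch (mine, not from the paper), evaluating the scaling of Eq. (4) for the representative decay with $`1-a+b=5/9`$ (i.e., $`b=-4/3`$, $`a=2b/3`$) in the flat cosmology quoted above:

```python
import math

def E(z):
    # dimensionless Hubble rate for Om = 0.3, OL = 0.7
    return math.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)

def comoving_distance(z, n=2000):
    # in units of the Hubble distance c/H0; midpoint rule
    dz = z / n
    return sum(dz / E((i + 0.5) * dz) for i in range(n))

def relative_flux(z, z_ref=3.0, exponent=5.0 / 9.0):
    # F ~ D^-2 (1+z)^-(1-a+b), normalized to the value at z_ref
    return ((comoving_distance(z_ref) / comoving_distance(z)) ** 2
            * ((1.0 + z_ref) / (1.0 + z)) ** exponent)

for z in (3, 5, 10, 20):
    print(z, round(relative_flux(z), 3))  # only a modest decline with z
```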
As another example, we calculate the best-fit spectral flux distribution of the early afterglow of GRB 970228 from , as observed one day after the burst, transformed to various redshifts. The transformation involves (1) dimming the afterglow (again, we have set $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$; other cosmologies yield similar results), (2) redshifting its spectrum, (3) time dilating its light curve, and (4) extinguishing the spectrum using a model of the Ly$`\alpha `$ forest. For the model of the Ly$`\alpha `$ forest, we have adopted the best-fit flux deficit distribution to Sample 4 of from . At redshifts in excess of $`z=4.4`$, this model is an extrapolation, but it is consistent with the results of theoretical calculations of the redshift evolution of Ly$`\alpha `$ absorbers . Finally, we have convolved the transformed spectra with a top hat smearing function of width $`\mathrm{\Delta }\nu =0.2\nu `$. This models these spectra as they would be sampled photometrically, as opposed to spectroscopically; i.e., this transforms the model spectra into model spectral flux distributions.
Figure 2 shows the resulting K-band light curves. For a fixed band and time of observation, steps (1) and (2) above dim the afterglow and step (3) brightens it, as discussed above. Figure 2 shows that in the case of the early afterglow of GRB 970228, as in the case of GRB 980519, at redshifts greater than a few the three effects nearly cancel one another out. Thus the afterglow of a GRB occurring at a redshift slightly in excess of $`z=10`$ would be detectable at K $`\approx 16.2`$ mag one hour after the burst, and at K $`\approx 21.6`$ mag one day after the burst, if its afterglow were similar to that of GRB 970228 (a relatively faint afterglow).
Figure 3 shows the resulting spectral flux distribution. The spectral flux distribution of the afterglow is cut off by the Ly$`\alpha `$ forest at progressively lower frequencies as one moves out in redshift. Thus high redshift ($`1\lesssim z\lesssim 5`$) afterglows are characterized by an optical “dropout” , and very high redshift ($`z\gtrsim 5`$) afterglows by an infrared “dropout.”
In conclusion, if GRBs occur at very high redshifts, both they and their afterglows would be detectable.
|
no-problem/0002/cond-mat0002376.html
|
ar5iv
|
text
|
# Search for Magnetic Field Induced Gap in a High-𝑇_𝑐 Superconductor
## Abstract
Break junctions made of the optimally doped high temperature superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> with $`T_c`$ of 90 K have been investigated in magnetic fields up to 12 T, at temperatures from 4.2 K to $`T_c`$. The junction resistance varied between 1 k$`\mathrm{\Omega }`$ and 300 k$`\mathrm{\Omega }`$. The differential conductance at low biases did not exhibit a significant magnetic field dependence, indicating that a magnetic-field-induced gap (Krishana et al., Science 277, 83 (1997)), if it exists, must be smaller than 0.25 meV.
Much of what we know about the electronic states in high-$`T_c`$ superconductors has been learned by using tunneling devices different from the metal-insulator-metal layer junctions so successful in exploring traditional superconductivity . Superconductor-insulator-superconductor (SIS) tunneling on break junctions provided one of the first clear indications for the failure of a fully gapped $`s`$-wave density of states (DOS) in these materials . Optimally doped samples were studied by tunneling in great detail . More recently, the oxygen doping dependence of the gap has been investigated on SIS junctions created by proper manipulation of a normal metal point contact . Superconductor-insulator-normal metal (SIN) junctions were used very successfully in scanning tunneling spectroscopy studies , and in point contact measurements . Although many of these junctions are much less controlled than the traditional metal oxide layer junctions, there is a reasonable level of consistency between the various techniques, indicating that the features observed are intrinsic to the materials.
The present study was motivated by a recent report of Krishana et al. on the magnetic field dependence of the thermal conductivity , and by theoretical arguments about the behavior of $`d`$-wave superconductors in magnetic field . The apparent non-analytical behavior reported by Krishana et al. raised the intriguing possibility of a magnetic field induced instability of the $`d`$-wave superconducting state with the appearance of a complete gap at temperatures below 20 K and magnetic fields of the order of 1 T. Theoretical foundations in terms of mixing a (complex) $`d_{xy}`$ component to the $`d_{x^2-y^2}`$ state were suggested by Balatsky and Laughlin .
We performed break junction tunneling measurements in magnetic fields, with the goal of testing these conjectures directly. The tunneling device used in this work is an advanced version of an earlier one , used most recently in the study of the superconducting gap of Rb<sub>3</sub>C<sub>60</sub> . In short, a very thin optimally doped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (BSCCO) single crystal with $`T_c=90`$ K was mounted on a flexible support, and contacted with gold wires. The sample was cooled in He atmosphere, and a break junction was created in situ by bending the support. (A similar method has been employed for Al and Nb point contacts by Scheer et al. and Muller et al. , respectively.) A piezoelectric rod was used for fine tuning of the junction. The tunneling current was parallel, and the magnetic field was perpendicular, to the copper oxide planes. Early reports of point contact spectroscopy and break-junction tunneling on the BSCCO superconductor in magnetic field by Vedeneev et al. did not address the same issue: they did not have sufficient resolution to investigate the effect of the magnetic field on the shape of the low-bias region of the conductance curves, and, furthermore, their data were interpreted within a thermally broadened $`s`$-wave symmetry BCS gap.
In zero field and at low temperatures, earlier results of ours and of others have been reproduced. At low temperatures the zero-bias conductance of the junctions was close to zero. At finite bias voltages the differential conductance followed an approximately quadratic bias dependence at low voltages and exhibited peaks around $`\pm `$60 mV. The corresponding peak position in the density of states of the $`d`$-wave superconductor (the $`d`$-wave gap) is at $`\mathrm{\Delta }=40`$ meV . This is in general agreement with the values reported for an optimally doped sample .
If the magnetic field induces a gap $`\mathrm{\Delta }^{}`$ in the density of states, then the differential conductance is expected to change: in the low temperature limit a fully gapped DOS results in vanishing tunneling conductivity for voltages up to $`2\mathrm{\Delta }^{}/e`$. Numerous junctions were measured in search of this effect. In Figs. 1 and 2 two examples are shown, representing high and low resistance junctions, respectively. In the upper panel of Fig. 1 the $`I`$-$`V`$ characteristics at $`B=0`$ and 12 T fields are shown in the $`\pm 200`$ mV range. The middle panel displays the corresponding conductance curves normalized to the value at 200 mV in zero magnetic field. The low-bias part of the conductance curve is blown up in the lower panel of Fig. 1. The curves at $`B=1`$, 2 and 12 T are shifted with respect to the zero-field value for the sake of a better view. The same parabola (corresponding to a linear density of states) is drawn as an eye guide over the experimental points for scans at different magnetic fields. It is evident from these curves that no change was found in the shape of the tunneling characteristic in the temperature and magnetic field range where the thermal conductivity anomaly was observed by Krishana et al. . The absence of magnetic field dependence of the tunneling conductance at 4.2 K is illustrated in Fig. 2 for the case of a low resistance junction. Similar conclusions were reached for temperatures up to 30 K on all the samples investigated.
It should be mentioned that the finite size of the junctions averages the physical properties over the involved area. For a $`d`$-wave superconductor, a full gap should be expected only for pure tunneling along $`a`$-$`a`$ or $`b`$-$`b`$ directions (on a microscopic scale). In these junctions the tunneling averages the density of states over many $`k`$-values, and we always see a quadratic voltage dependence of the conductance at low biases. Nevertheless, if the magnetic field suppressed the nodes in the gap, the conductance would be zero below the lowest gap value on the Fermi surface, no matter how the average is done over different $`k`$-directions.
Josephson current is observed in low resistance junctions. In the differential conductance, calculated from the measured currents and voltages, this feature shows up as a sharp peak, centered at zero voltage. Ideally, the peak should be very narrow. The width of the peak is a good measure of our experimental resolution - an example is shown in the inset of Fig. 2. From the half width of the curve we deduce a voltage resolution of about $`1`$ mV from these measurements. Combining the results illustrated in Figs. 1 and 2 with the resolution deduced from the Josephson current measurements, a magnetic field induced gap of $`\mathrm{\Delta }^{}>0.25`$ meV is excluded by the present study. (Note that a gap of $`\mathrm{\Delta }^{}`$ produces a conductivity change over the voltage range of $`4\mathrm{\Delta }^{}/e`$).
From a purely experimental perspective, the thermal conductivity ($`\kappa `$) data of the Princeton group place a lower limit on $`\mathrm{\Delta }^{}`$. According to the interpretation favored by the authors, the absence of all field dependence in $`\kappa `$ is due to an exponentially vanishing quasiparticle population - in other words, it is due to a gap that is significantly larger than the temperature. For example, at $`B=2`$ T and $`T=10`$ K the gap should be $`\mathrm{\Delta }^{}\gtrsim 1`$ meV, and it is expected to increase at higher magnetic fields and lower temperatures. This is not compatible with our observations, in particular with the 4.2 K data shown in the Figures.
The review of current theories reveals several possibilities for introducing a new energy scale into the problem. We will use a representative magnetic field of $`B=10`$ T for quantitative comparison. The cyclotron energy, of the order of $`\mathrm{}\omega _c\sim av_F\frac{eB}{c}`$, is about 1 meV at this field (using reasonable values of the lattice spacing $`a`$ and Fermi velocity $`v_F`$). A fine structure on this scale is close to the limit of the voltage resolution in the present experiment, and it cannot be entirely excluded. Janko describes the states in the Abrikosov vortices, obtaining characteristic energies of the order of magnitude of the geometric mean of the superconducting gap and the cyclotron energy, $`\mathrm{\Delta }^{}\sim \sqrt{\mathrm{\Delta }\mathrm{}\omega _c}`$. This energy is of the order of 15 meV, clearly excluded by our measurement. Finally, Laughlin describes a mechanism where the new gap is $`\mathrm{\Delta }^{}=\mathrm{}v\sqrt{2\frac{eB}{\mathrm{}c}}=5`$ meV (here $`v=4.5\times 10^6`$ cm sec<sup>-1</sup> was used for the root mean square velocity of the $`d`$-wave node). A fully gapped DOS of $`\mathrm{\Delta }^{}=5`$ meV should result in reduced conductivity over a 20 mV wide voltage range, which is clearly contradicted by the experimental results shown in Figs. 1 (lower panel) and 2.
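The 5 meV figure is easy to verify; the sketch below (my back-of-the-envelope check, evaluated in SI units via the magnetic length, so that Laughlin's Gaussian-units expression becomes sqrt(2)*hbar*v/l_B with l_B = sqrt(hbar/eB)):

```python
import math

HBAR = 1.0546e-34  # J s
E_CH = 1.6022e-19  # C
MEV = 1.6022e-22   # J per meV

def laughlin_gap_meV(v_m_per_s, B_tesla):
    # Delta' = sqrt(2) * hbar * v / l_B, with magnetic length l_B = sqrt(hbar/(e B))
    l_B = math.sqrt(HBAR / (E_CH * B_tesla))
    return math.sqrt(2.0) * HBAR * v_m_per_s / l_B / MEV

# v = 4.5e6 cm/s = 4.5e4 m/s and B = 10 T, as in the text
print(round(laughlin_gap_meV(4.5e4, 10.0), 2))  # ~5 meV
```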
In conclusion, our tunneling measurements on break junctions place an upper limit of $`\mathrm{\Delta }^{}\lesssim 0.25`$ meV on the magnetic-field-induced gap in BSCCO. The field-induced anomaly in the thermal conductivity must have some other explanation, possibly along the lines suggested by Aubin et al. .
We thank E. Tutiś and B. Janko for valuable discussions. LM is indebted to L. Zuppiroli for his hospitality. This project was supported by the Swiss National Science Foundation and by OTKA T026327.
$`\mathrm{\#}`$Permanent address: Technical University of Budapest, Hungary
$``$Permanent address: Department of Physics, SUNY @ Stony Brook, NY 11794, USA
|
no-problem/0002/cond-mat0002051.html
|
ar5iv
|
text
|
# Bunching of fluxons by the Cherenkov radiation in Josephson multilayers
## I Introduction
In recent years, a great deal of attention has been attracted to different kinds of solid-state multilayered systems, e.g., artificial Josephson and magnetic multilayers, high-temperature superconductors (HTS) and perovskites, to name just a few. Multilayers are attractive because it is often possible to multiply a physical effect achieved in one layer by $`N`$ (and sometimes by $`N^2`$), where $`N`$ is the number of layers. This can be exploited for the fabrication of novel solid-state devices. In addition, multilayered solid-state systems show a variety of new physical phenomena which result from the interaction between individual layers.
In this article we focus on Josephson multilayer, the simplest example of which is a stack consisting of just two long Josephson junctions (LJJs). The results of our consideration can be applied to intrinsically layered HTS materials , since the Josephson-stack model has proved to be appropriate for these structures .
In earlier papers it was shown that, in some cases, a fluxon (Josephson vortex) moving in one of the layers of the stack may emit electromagnetic (plasma) waves by means of the Cherenkov mechanism. The fluxon together with its Cherenkov radiation has the profile of a traveling wave, $`\varphi (x-ut)`$, with an oscillating, gradually decaying tail. Such a wave profile generates an effective potential for another fluxon which can be added to the system. If the second fluxon is trapped in one of the minima of this traveling potential, we get a bunched state of two fluxons. In such a state, two fluxons can stably move at a small constant distance from one another, which is not possible otherwise: fluxons of the same polarity usually repel each other, even when located in different layers.
Similar bunched states were already found in discrete Josephson transmission lines, as well as in long Josephson junctions with the so-called $`\beta `$-term due to the surface impedance of the superconductor . The dynamics of a conventional LJJ is described by the sine-Gordon equation, which does not allow the fluxon to move faster than the Swihart velocity; therefore, the Cherenkov radiation never appears. In both cases mentioned above (the discrete system and the system with the $`\beta `$-term), the perturbation of the sine-Gordon equation results in a modified dispersion relation for Josephson plasma waves and in the appearance of an oscillating tail. This tail, in turn, results in an attractive interaction between fluxons, i.e., bunching. Nevertheless, the mere presence of an oscillating tail is not a sufficient condition for bunching.
In this paper, we investigate the problem of fluxon bunching in a system of two or three inductively coupled junctions with the primary state $`[1|0]`$ (one fluxon in the top junction and no fluxon in the bottom one) or $`[0|1|0]`$ (a fluxon only in the middle junction of a 3-fold stack). We show that bunching is possible for some fluxon configurations and a specific range of parameters of the system. In addition, it is found that the bunched states radiate less than single-fluxon states and therefore can move with a higher velocity. Section II presents the results of numerical simulations; in Section III we discuss the obtained results and the feasibility of experimental observation of bunched states. We also derive a simple analytical expression which shows the possibility of the existence of bunched states. Section IV concludes the work.
## II Numerical Simulations
The system of equations which describes the dynamics of Josephson phases $`\varphi ^{A,B}`$ in two coupled LJJ<sup>A</sup> and LJJ<sup>B</sup> is :
$$\frac{\varphi _{xx}^A}{1-S^2}-\varphi _{tt}^A-\mathrm{sin}\varphi ^A-\frac{S}{1-S^2}\varphi _{xx}^B=\alpha \varphi _t^A-\gamma ;$$ (1)
$$\frac{\varphi _{xx}^B}{1-S^2}-\varphi _{tt}^B-\frac{\mathrm{sin}\varphi ^B}{J}-\frac{S}{1-S^2}\varphi _{xx}^A=\alpha \varphi _t^B-\gamma ,$$ (2)
where $`S`$ ($`-1<S<0`$) is a dimensionless coupling constant, $`J=j_c^A/j_c^B`$ is the ratio of the critical currents, while $`\alpha `$ and $`\gamma =j/j_c^A`$ are the damping coefficient and the normalized bias current, respectively, which are assumed to be the same in both LJJs. It is also assumed that other parameters of the junctions, such as the effective magnetic thicknesses and capacitances, are the same. As has been shown earlier, the Cherenkov radiation in a two-fold stack may take place only if the fluxon moves in the junction with the smaller $`j_c`$. We suppose in the following that the fluxon moves in LJJ<sup>A</sup>, which implies $`J<1`$.
In the case $`N=3`$, we impose the symmetry condition $`\varphi ^A\equiv \varphi ^C`$, which is natural when the fluxon moves in the middle layer, and, thus, we can rewrite equations from Ref. \[\] in the form
$$\frac{\varphi _{xx}^A}{1-2S^2}-\varphi _{tt}^A-\mathrm{sin}\varphi ^A-\frac{S\varphi _{xx}^B}{1-2S^2}=\alpha \varphi _t^A-\gamma ;$$ (3)
$$\frac{\varphi _{xx}^B}{1-2S^2}-\varphi _{tt}^B-\mathrm{sin}\varphi ^B-\frac{2S\varphi _{xx}^A}{1-2S^2}=\alpha \varphi _t^B-\gamma ,$$ (4)
Note the factor 2 in the last term on the l.h.s. of Eq. (4). In the case of three coupled LJJs, we assume $`J=1`$, since for more than two coupled junctions the Cherenkov radiation can be obtained even for a uniform stack with equal critical currents.
### A Numerical technique
The numerical procedure works as follows. For a given set of the LJJs parameters, we compute the current-voltage characteristic (IVC) of the system, i.e., $`\overline{V}^{A,B}(\gamma )`$. To calculate the voltages $`\overline{V}^{A,B}`$ for fixed values of $`\gamma `$, we simulate the dynamics of the phases $`\varphi ^{A,B}(x,t)`$ by solving Eqs. (1) and (2) for $`N=2`$ or Eqs. (3) and (4) for $`N=3`$, using the periodic boundary conditions:
$`\varphi ^{A,B}(x=L)`$ $`=`$ $`\varphi ^{A,B}(x=0)+2\pi N^{A,B};`$ (5)
$`\varphi _x^{A,B}(x=L)`$ $`=`$ $`\varphi _x^{A,B}(x=0),`$ (6)
where $`N^{A,B}`$ is the number of fluxons trapped in LJJ<sup>A,B</sup>. In order to simulate a quasi-infinite system, we have chosen annular geometry with the length (circumference) of the junction $`L=100`$.
To solve the differential equations, we use an explicit method \[expressing $`\varphi ^{A,B}(t+\mathrm{\Delta }t)`$ as a function of $`\varphi ^{A,B}(t)`$ and $`\varphi ^{A,B}(t-\mathrm{\Delta }t)`$\], treating $`\varphi _{xx}`$ with a five-point, $`\varphi _{tt}`$ with a three-point, and $`\varphi _t`$ with a two-point symmetric finite-difference scheme. The spatial and time steps used for the simulations were $`\delta x=0.025`$, $`\delta t=0.00625`$. After simulating the phase dynamics for $`T=10`$ time units, we calculate the average dc voltages $`\overline{V}^{A,B}`$ for this time interval as
$$\overline{V}^{A,B}=\frac{1}{T}\int_0^T\varphi _t^{A,B}(t)𝑑t=\frac{\varphi ^{A,B}(T)-\varphi ^{A,B}(0)}{T}.$$
(7)
The dc voltage at a point $`x`$ can be defined as the average number of fluxons (the flux) that have passed through the junction at this point. Since the average fluxon density is not singular at any point of the junction (otherwise the energy would grow infinitely), we conclude that the average dc voltage is the same at every point $`x`$. Therefore, for faster convergence of our averaging procedure, we can additionally average the phases $`\varphi ^{A,B}`$ in (7) over the length of the stack.
After the values of $`\overline{V}^{A,B}`$ are found from Eq. (7), the evolution of the phases $`\varphi ^{A,B}(x,t)`$ is simulated for a further $`1.1T`$ time units, the dc voltages $`\overline{V}^{A,B}`$ are calculated for this new time interval and compared with the previously calculated values. We repeat such iterations, increasing the time interval by a factor of 1.1, until the difference in dc voltages $`|\overline{V}(1.1^{n+1}T)-\overline{V}(1.1^nT)|`$ obtained in two subsequent iterations becomes less than the accuracy $`\delta V=10^{-4}`$. The particular factor 1.1 was found to be nearly optimal, providing fast convergence as well as efficient averaging of low harmonics at each subsequent step. A very small value of this factor, e.g., 1.01 (recall that only values greater than 1 are meaningful), may result in very slow convergence when $`\varphi (t)`$ contains harmonics with period $`T`$. Large values of the factor, e.g., 2, would consume a lot of CPU time already during the second or third iteration and, hence, are not good for practical use.
Once the voltage averaging for current $`\gamma `$ is complete, the current $`\gamma `$ is increased by a small amount $`\delta \gamma =0.005`$ to calculate the voltages at the next point of the IVC. We use a distribution of the phases (and their derivatives) achieved in the previous point of the IVC as the initial distribution for the following point.
Further description of the software used for simulation can be found in Ref. .
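As an illustration of the scheme just described, here is a minimal Python re-implementation (my sketch, not the authors' code: it uses a 3-point Laplacian instead of the 5-point one, a much shorter ring, and a simple kink initial condition for the $`[1|0]`$ state):

```python
import numpy as np

def simulate(L=20.0, dx=0.05, dt=0.0125, steps=3200,
             S=-0.5, J=0.5, alpha=0.04, gamma=0.3):
    # leapfrog in time, central differences in space, periodic ring of length L
    n = int(round(L / dx))
    x = np.arange(n) * dx
    wind = 2.0 * np.pi * x / L            # carries the 2*pi winding of phi^A
    psiA = 4.0 * np.arctan(np.exp(x - L / 2)) - wind  # periodic part, kink at L/2
    psiB = np.zeros(n)
    psiA_old, psiB_old = psiA.copy(), psiB.copy()
    lap = lambda f: (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
    v_sum, v_cnt = 0.0, 0

    for step in range(steps):
        dA = (psiA - psiA_old) / dt
        dB = (psiB - psiB_old) / dt
        # accelerations from Eqs. (1)-(2) solved for phi_tt
        accA = (lap(psiA) - S * lap(psiB)) / (1.0 - S**2) \
            - np.sin(psiA + wind) - alpha * dA + gamma
        accB = (lap(psiB) - S * lap(psiA)) / (1.0 - S**2) \
            - np.sin(psiB) / J - alpha * dB + gamma
        psiA, psiA_old = 2.0 * psiA - psiA_old + dt**2 * accA, psiA
        psiB, psiB_old = 2.0 * psiB - psiB_old + dt**2 * accB, psiB
        if step >= steps // 2:            # dc voltage, Eq. (7), over the tail
            v_sum += np.mean((psiA - psiA_old) / dt)
            v_cnt += 1

    return v_sum / v_cnt

V = simulate()
print(V)  # |V| of order 2*pi*u/L: the fluxon circulates under the bias
```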
### B Two coupled junctions
For the simulations we chose the following parameters of the system: $`S=-0.5`$, to be close to the limit of intrinsically layered HTS, and $`J=0.5`$, to let the fluxon accelerate above $`\overline{c}_{-}`$ and develop a Cherenkov radiation tail. The velocity $`\overline{c}_{-}`$ is the smallest of the Swihart velocities of the system. It characterizes the propagation of the out-of-phase mode of Josephson plasma waves. The value of $`\alpha =0.04`$ is chosen somewhat higher than, e.g., in (Nb-Al-AlO<sub>x</sub>)<sub>N</sub>-Nb stacks. This choice is dictated by the need to keep the quasi-infinite approximation valid and satisfy the condition $`\alpha L\gg 1`$. A smaller $`\alpha `$ requires a very large $`L`$ and, therefore, unaffordably long simulation times. So, we made a compromise and chose the above $`\alpha `$ value.
First, we simulated the IVC $`u(\gamma )`$ in the $`[1|0]`$ state by sweeping $`\gamma `$ from 0 up to 1 and making snapshots of the phase gradients at every point of the IVC. This IVC is shown in Fig. 1(a), and the snapshot of the phase gradient at $`\gamma =0.3`$ is presented in Fig. 1(b). As one can see, the Cherenkov radiation tail, which is present for $`u>\overline{c}_{-}`$, has a sequence of minima where the second fluxon may be trapped.
#### 1 $`[1+1|0]`$ state
In order to create a two-fluxon bunched state and check its stability, we used the following “solution-engineering” procedure. By taking a snapshot of the phase profiles $`\varphi _{A,B}(x)`$ at the bias value $`\gamma _0=0.3`$, we constructed an ansatz for the bunched solution in the form
$$\varphi _{A,B}^{\mathrm{new}}(x)=\varphi _{A,B}(x)+\varphi _{A,B}(x+\mathrm{\Delta }x),$$
(8)
where $`\mathrm{\Delta }x`$ is chosen so that the center of the trailing fluxon is placed at one of the minima of the Cherenkov tail. For example, to trap the trailing fluxon in the first, second and third well, we used $`\mathrm{\Delta }x=0.9`$, $`\mathrm{\Delta }x=2.4`$ and $`\mathrm{\Delta }x=3.9`$, respectively. The phase distributions (and their derivatives) constructed in this way were used as the initial conditions for solving Eqs. (1) and (2) numerically. As the system relaxed to the desired state $`[1+1|0]`$, we further traced the $`u(\gamma )`$ curve, varying $`\gamma _0`$ down to $`0`$ and up to $`1`$.
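The ansatz of Eq. (8) simply adds a shifted copy of the steady snapshot to itself. A minimal sketch, assuming the phases are stored on a uniform grid with periodic (annular) boundary conditions; function and variable names are ours:

```python
import numpy as np

def bunched_ansatz(phi_a, phi_b, dx_shift, dx_grid):
    """Two-fluxon ansatz of Eq. (8): the steady snapshot plus a copy of
    itself shifted by dx_shift along the (periodic, annular) junction."""
    shift = int(round(dx_shift / dx_grid))     # shift in grid points
    return (phi_a + np.roll(phi_a, -shift),
            phi_b + np.roll(phi_b, -shift))

# To place the trailing fluxon in the M-th well of the Cherenkov tail,
# dx_shift is taken at a minimum of the tail, e.g. 0.9, 2.4 or 3.9 above.
```

The cross-sum ansatz of Eq. (10) below is the same construction with the roles of `phi_a` and `phi_b` swapped in the second term.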
We accomplished this procedure for a set of $`\mathrm{\Delta }x`$ values, trying to trap the second fluxon in every well. Fig. 1(c) shows that a stable, tightly bunched state of two fluxons is indeed possible. Actually, all the $`[1+1|0]`$ states obtained this way have been found to be stable, and we were able to trace their IVCs up and down, starting from the initial value of the bias current $`\gamma =0.3`$. For the case when the trailing fluxon is trapped in the first, second and third minima, such IVCs are shown in Fig. 2.
The most interesting feature of these curves is that, at the same value of the bias current, the velocity of the bunched state is higher than that of the $`[1|0]`$ state. Comparing solutions shown in Figs. 1(b) and 1(c), we see that the amplitude of the trailing tail is smaller for the bunched state. This circumstance suggests the following explanation for the fact that the observed velocity is higher in the state $`[1+1|0]`$ than in the single-fluxon one. Because the driving forces acting on two fluxons in the bunched and unbunched states are the same, the difference in their velocities can be attributed only to the difference in the friction forces. The friction force acting on the fluxon in one junction is
$$F_\alpha =\alpha \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\varphi _x\varphi _t𝑑x,$$
(9)
and the same holds for the other junction. By just looking at Fig. 1(b) and (c) it is rather difficult to tell in which case the friction force is larger, but accurate calculations using Eq. (9) and profiles from Fig. 1(b) and (c) show that the friction force acting on two fluxons with the tails shown in Fig. 1(b) is somewhat higher than that for Fig. 1(c). This result is not surprising if one recalls that, to create the bunched state, we have shifted the $`[1|0]`$ state by about half of the tail oscillation period relative to the other single-fluxon state. Due to this, the tails of the two fluxons add up out of phase and partly cancel each other, making the tail’s amplitude behind the fluxon in the bunched state lower than that in the $`[1|0]`$ state.
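Eq. (9) is straightforward to evaluate on a stored profile. A small sketch, using the fact that for a traveling wave $`\varphi (x-ut)`$ one has $`\varphi _t=-u\varphi _x`$; it is checked against the unperturbed sine-Gordon kink, for which the integral of $`\varphi _x^2`$ is exactly 8:

```python
import numpy as np

def friction_force(x, phi_x, u, alpha):
    """Eq. (9) for a traveling wave phi(x - u*t), where phi_t = -u*phi_x."""
    dx = x[1] - x[0]
    return alpha * np.sum(phi_x * (-u * phi_x)) * dx

# Check against the unperturbed sine-Gordon kink phi = 4*arctan(exp(x)),
# whose gradient is phi_x = 2/cosh(x); the integral of phi_x^2 equals 8,
# so F = -8*alpha*u exactly.
x = np.linspace(-20.0, 20.0, 4001)
phi_x = 2.0 / np.cosh(x)
F = friction_force(x, phi_x, u=0.5, alpha=0.04)
```

Applying the same quadrature to the profiles of Fig. 1(b) and 1(c) is how the two friction forces can be compared quantitatively.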
From Fig. 2 it is seen that every bunched state exists in a certain range of bias current values. If the current is decreased below some threshold value, the fluxons dissociate and start moving apart, so that the interaction between them becomes exponentially small. When the trailing fluxon sits in a minimum of the Cherenkov tail sufficiently far from the leading fluxon, the IVC corresponding to this bunched state is almost indistinguishable from that of the $`[1|0]`$ state, as the two fluxons approach the limit in which they do not interact. We have found that the IVCs for $`M>3`$, where $`M`$ is the potential well’s number, are indeed almost identical to that of the $`[1|0]`$ state. In contrast to bunching of fluxons in a discrete LJJ, transitions from one bunched state to another with different $`M`$ do not take place in our system. Thus, we can say that the current range of a bunched state with smaller $`M`$ “eclipses” the bunched states with larger $`M`$.
The profiles of solutions found for various values of the bias current are shown in Fig. 3. We notice that at the bottom of the step corresponding to the bunched state the radiation tail is much weaker and the fluxons are bunched more tightly. This is a direct consequence of the fact that at lower velocities the radiation wavelength and the distance between minima become smaller, and so does the distance between the two fluxons. At low bias current, the radiation wavelength and, hence, the width of the potential wells become very small and incommensurate with the fluxon’s width. Therefore, the fluxon does not fit into the well and the bunched states virtually disappear.
#### 2 $`[1|1]`$ state
The initial condition for this state was constructed in a similar fashion to the $`[1+1|0]`$ one, but now using a cross-sum of the shifted and unshifted solutions:
$$\varphi _{A,B}^{\mathrm{new}}(x)=\varphi _{A,B}(x)+\varphi _{B,A}(x+\mathrm{\Delta }x).$$
(10)
If for the $`[1+1|0]`$ state $`\mathrm{\Delta }x`$ was close to $`\left(M-\frac{1}{2}\right)\lambda `$, $`M=1,2,\mathrm{\dots }`$, then in the $`[1|1]`$ state we have to take $`\mathrm{\Delta }x\approx M\lambda `$. We can also take $`M=0`$, i.e., $`\mathrm{\Delta }x=0`$, which corresponds to the degenerate case of the in-phase $`[1|1]`$ state. The stability of this state was investigated in detail analytically by Grønbech-Jensen and co-authors, and is outside the scope of this paper.
Our efforts to create a bound state $`[1|1]`$ using the phase in the form (10) with $`M=1,2,\mathrm{\dots }`$ have not led to any stable configuration of bunched fluxons with $`\mathrm{\Delta }x\ne 0`$.
#### 3 Higher-order states
Looking at the phase gradient profiles shown in Fig. 3, one notes that they are qualitatively very similar to the original profile of the soliton with a radiation tail behind it \[see Fig. 1(b)\], with the only difference that there are two bunched solitons with a tail. So, we can try to construct two pairs of bunched fluxons moving together, i.e., to get a $`[2+2|0]`$ bunched state. As before, the trapping of the trailing pair is possible in one of the minima of the tail generated by the leading pair. To construct such a double-bunched state we employ initial conditions built as in Eq. (8), starting from the steady phase distribution obtained for the $`[2|0]`$ state at the bias point $`\gamma _0=0.3`$. The shift $`\mathrm{\Delta }x`$ was chosen in such a way that a pair of fluxons fits into one of the minima of the tail. We note that in this case we needed to vary $`\mathrm{\Delta }x`$ a little before we achieved trapping of the trailing pair in the desired well.
Simulations show that the obtained $`[2+2|0]`$ states are stable and demonstrate an even higher velocity of the whole four-fluxon aggregate. The corresponding IVCs and profiles are shown in Fig. 4(a) and (b), respectively. Note that at $`\gamma <0.22`$ the bunched state $`[2+2|0]`$ splits first into the $`[1+1_2+1_3+1_3|0]`$ state (the subscripts denote the well’s number $`M`$, counting from the previous fluxon), and at still lower bias current, $`\gamma <0.2`$, it splits into two independent $`[1+1_2|0]`$ and $`[1+1_5|0]`$ states. These two states move with slightly different velocities and can collide with each other due to the periodic nature of the system. As a result of collisions, these states ultimately undergo a transformation into two independent $`[1+1_5|0]`$ states. As the bias decreases below $`0.1`$, the velocity $`u`$ becomes smaller than $`\overline{c}_{-}`$ and the Cherenkov radiation tails disappear. At this point, each of the $`[1+1_5|0]`$ states smoothly transforms into two independent $`[1|0]`$ states. The interaction between these states is exponentially small, with a characteristic length $`\sim 1`$ (or $`\lambda _J`$ in physical units). We note that the interaction between kinks in the region $`u>\overline{c}_{-}`$, where they have tails, also decreases exponentially, but with a larger characteristic length $`\alpha ^{-1}`$.
The procedure of constructing higher-order bunched states can be performed using different states as “building blocks”. In particular, we also tried to form the $`[2+1|0]`$ bunched state. Note that if two different states are taken as building blocks, we need to match their velocities and, hence, the wavelengths of the tails. Thus, we have to combine two states at the same velocity, rather than at the same bias current. Since different states have their own velocity ranges, this is not always possible. As an example, we have constructed a $`[2+1|0]`$ state out of the $`[2|0]`$ state at $`\gamma =0.15`$ and the $`[1|0]`$ state at $`\gamma =0.45`$, using an ansatz similar to (8). These states have approximately the same velocity $`u\approx 0.95`$ (see Fig. 2). The constructed state was simulated, starting from the points $`\gamma =0.3`$ and $`\gamma =0.35`$, tracing the IVC up and down as before. Depending on the bias current the system ends up in different states, namely in the state $`[1+1_1+1_2|0]`$ for $`\gamma _0=0.3`$, or in the state $`[1+1_1+1_1|0]=[3|0]`$ for $`\gamma _0=0.35`$. The IVCs of both states are shown in Fig. 4. The profiles of the phase gradients are shown in Fig. 4(c).
Our attempts to construct the states with a higher number of bunched fluxons, e.g., $`[4+4|0]`$, have failed since four fluxons do not fit into one well. We have concluded that such states immediately get converted into one of the lower-order states.
### C Three coupled junctions
We have performed numerical simulation of Eqs. (3) and (4), using the same technique as described in the previous section. Our intention here is to study the 3-junction case in which the fluxon is put in the middle junction (the $`[0|1|0]`$ state). All other parameters were the same as in the case of the two-junction system, except for the ratio of the critical currents $`J`$, which was taken equal to one. This simple choice is made because in a system of $`N>2`$ coupled identical junctions the Cherenkov radiation appears in a $`[0|\mathrm{\cdots }|0|1|0|\mathrm{\cdots }|0]`$ state for $`u>\overline{c}_{-}\approx 0.765`$ (this pertains to $`S=0.5`$).
Fig. 5 shows the IVCs of the original state $`[0|1|0]`$, as well as the IVCs of the bunched state $`[0|1+1|0]`$ for $`M=1,\mathrm{\hspace{0.25em}2},\mathrm{\hspace{0.25em}3}`$. The profiles of the phase gradients at points A through D are shown in Fig. 6. Qualitatively, bunching in the 3-fold system takes place in a similar fashion as in the 2-fold system. Nevertheless, we did not succeed in creating a stable fluxon configuration with $`M=3`$, although the stable states with other $`M`$ were obtained. We would like to mention that when the second fluxon was put in the second minimum of the potential to get the state with $`M=2`$, the state with $`M=1`$ was finally established as a result of relaxation. The same behavior was observed when we put the fluxon initially in the third minimum: the system ended up in the state $`[1+1_2|0]`$. For $`M\ge 4`$, the behavior was as usual. We tried to vary $`\mathrm{\Delta }x`$ smoothly, so that the center of the trailing fluxon would correspond to different positions between the second and fourth well, but in this case we did not succeed in obtaining the $`[1+1_3|0]`$ state.
Proceeding in the same way as for two coupled junctions, we tried to construct $`[0+1|1|0+1]`$ states. As in the case $`N=2`$, these states were found to be unstable for any $`M>0`$; e.g., they would split into $`[0|1+1_2|0]`$ and $`[1|1|1]`$. The state $`[0|2+2|0]`$ was not stable either, for $`M=1,\mathrm{\hspace{0.25em}2},\mathrm{\hspace{0.25em}3}`$ and the bias currents $`\gamma _0=0.20`$, 0.30, 0.35.
The state $`[0|2+1|0]=[0|3|0]`$, constructed by combining the solutions for the $`[0|1|0]`$ and $`[0|2|0]`$ states moving with equal velocities, was found to be stable when starting at $`\gamma =0.25`$ and sweeping the bias current up and down. The dependence $`u(\gamma )`$ is shown in Fig. 5. One may note that for the states $`[0|2|0]`$ and $`[0|3|0]`$ the dependence is not smooth. Indeed, for these states the Cherenkov radiation tail is so long ($`\sim L`$) that our annular system cannot simulate an infinitely long system, resulting in Cherenkov resonances which inevitably appear in a system with finite perimeter.
## III Analysis and Discussion
Because of the non-linear nature of the bunching problem, it is hardly tractable analytically. Therefore, we here present an approach in which we analyze the asymptotic behavior of the fluxon’s front and trailing tails in the linear approximation. This technique is similar to that employed in Ref. . We assume that, at distances which are large enough in comparison with the fluxon’s size, the fluxon’s profile is exponentially decaying,
$$\varphi (x,t)\sim \mathrm{exp}[p(x-ut)],$$
(11)
where $`p`$ is a complex number which can be found by substituting this expression into Eqs. (1) and (2). As a result we arrive at an equation
$$\left|\begin{array}{cc}\frac{p^2}{1-S^2}-p^2u^2-1+\alpha pu& \frac{Sp^2}{1-S^2}\\ \frac{Sp^2}{1-S^2}& \frac{p^2}{1-S^2}-p^2u^2-\frac{1}{J}+\alpha pu\end{array}\right|=0,$$
(12)
In general, this yields a fourth-order algebraic equation which always has 4 roots. If we want to describe a soliton moving from left to right with a radiation tail behind it, we have to identify, among the four roots, the values of $`p`$ which adequately describe the front and rear parts of the soliton. Since the front (right) part of the soliton is not oscillating, it is described by Eq. (11) with real $`p<0`$. The rear (left) part of the soliton is the oscillating tail; consequently, it is described by Eq. (11) with complex $`p`$ having $`\mathrm{Re}(p)>0`$, the period of oscillations being determined by the imaginary part of $`p`$. Analyzing the fourth-order equation, we conclude that the two necessary types of roots coexist only for $`u>\overline{c}_{-}`$, which is quite an obvious result.
To analyze the possibility of bunched state formation, we consider two fluxons situated at some distance from each other. We propose the following two conditions for the two fluxons to form a bunched state:
1. Since non-oscillating tails result only in repulsion between fluxons, while the oscillating tail leads to mutual trapping, the condition
$$\mathrm{Re}(p_l)<|p_r|,$$
(13)
can be imposed to secure bunching. Here $`p_l`$ is the root of Eq. (12) which describes the left (oscillating) tail of the leading (right) fluxon, and $`p_r`$ is the root of Eq. (12) which describes the right (non-oscillating) tail of the trailing (left) fluxon.
2. The relativistically contracted fluxon must fit into the minimum of the tail, i.e.,
$$\frac{\pi }{\mathrm{Im}(p)}>\sqrt{\frac{u^2}{\overline{c}_{-}^2}-1},$$
(14)
where $`\pi /\mathrm{Im}(p)`$ is half of the wavelength of the tail-forming radiation (the well’s width), and the expression on the r.h.s. of Eq. (14) approximately corresponds to the contraction of the fluxon at trans-Swihart velocities. Although our system is not Lorentz invariant, numerical simulations show that the fluxon indeed shrinks (though not to zero) when approaching the Swihart velocity $`\overline{c}_{-}`$ from both sides.
Following this approach, we have found that the second condition (14) is always satisfied. The first condition (13) gives the following result: bunching is possible at $`u>u_b>\overline{c}_{-}`$. The value of $`u_b`$ can be calculated numerically; for $`S=0.5`$, $`J=0.5`$, $`\alpha =0.04`$ it is $`u_b=0.837`$. Looking at Fig. 2, we see that this velocity corresponds to the bias point where the $`[1+1_M|0]`$ states cease to exist. Thus, our crude approximation reasonably predicts the velocity range where bunching is possible.
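This root analysis can be reproduced numerically. Assuming the determinant (12) expands to the quartic $`(ap^2+\alpha up-1)(ap^2+\alpha up-1/J)=(Sp^2/(1-S^2))^2`$ with $`a=1/(1-S^2)-u^2`$ (our reading of the sign conventions), the sketch below classifies the roots as in the text and scans $`u`$ for the smallest velocity at which condition (13) holds; names and tolerances are ours:

```python
import numpy as np

S, J, ALPHA = 0.5, 0.5, 0.04        # parameters used in the simulations

def tail_roots(u, s=S, j=J, alpha=ALPHA):
    """Roots p of the determinant (12), expanded into a quartic in p."""
    a = 1.0 / (1.0 - s * s) - u * u
    c = s / (1.0 - s * s)
    au = alpha * u
    coeffs = [a * a - c * c,
              2.0 * a * au,
              au * au - a * (1.0 + 1.0 / j),
              -au * (1.0 + 1.0 / j),
              1.0 / j]
    return np.roots(coeffs)

def bunching_possible(u):
    """Condition (13): Re(p_l) < |p_r|, with p_l the oscillating-tail root
    (complex, Re > 0) and p_r the front-decay root (real, Re < 0)."""
    p = tail_roots(u)
    osc = [z for z in p if abs(z.imag) > 1e-8 and z.real > 0]
    mono = [z for z in p if abs(z.imag) <= 1e-8 and z.real < 0]
    if not osc or not mono:
        return False   # no Cherenkov tail below the Swihart velocity
    return max(z.real for z in osc) < min(abs(z) for z in mono)

# Smallest velocity at which bunching is allowed (the text quotes 0.837):
u_b = next(u for u in np.arange(0.82, 1.0, 0.0005) if bunching_possible(u))
```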
## IV Conclusion
In this work we have shown by means of numerical simulations that:
* The emission of the Cherenkov plasma waves by a fluxon moving with high velocity creates an effective potential with many wells, where other fluxons can be trapped. This mechanism leads to bunching between fluxons of the same polarity.
* We have shown numerically that in systems of two and three coupled junctions the bunched states of fluxons in the same junction, such as $`[1+1|0]`$, $`[1+2|0]`$, $`[2+2|0]`$, $`[0|1+1|0]`$, are stable. The states with fluxons in different junctions, like $`[1|0+1]`$ and $`[0+1|1|0+1]`$, are numerically found to be unstable (except for the degenerate case $`M=0`$, when $`[1|1]`$ is a simple in-phase state).
* Bunched fluxons propagate at a substantially higher velocity than the corresponding free ones at the same bias current, because of lower losses per fluxon.
* When decreasing the bias current, transitions between the bunched states with different separations between fluxons were not found. This behavior differs from what is known for the bunched states in a discrete system. In addition, a splitting of multi-fluxon states into the states with smaller numbers of bunched fluxons is observed.
###### Acknowledgements.
This work was supported by a grant no. G0464-247.07/95 from the German-Israeli Foundation.
# Beyond Octonions
(February 2000)
## Abstract
We investigate the structure of Clifford algebras over non-ring-division algebras. We show how projection over the real field reproduces the standard Atiyah–Bott–Shapiro classification.
Quaternions and octonions may be presented as a linear algebra over the field of real numbers $`ℝ`$, with a general element of the form
$$Y=y_0e_0+y_ie_i,\mathrm{\hspace{1em}}y_0,y_i\in ℝ$$
(1)
where $`i=1,2,3`$ for quaternions $``$ and $`i=\mathrm{1..7}`$ for octonions $`𝕆`$. We always use Einstein’s summation convention. The $`e_i`$ are imaginary units, for quaternions
$`e_ie_j`$ $`=`$ $`-\delta _{ij}+ϵ_{ijk}e_k,`$ (2)
$`e_ie_0`$ $`=`$ $`e_0e_i=e_i,`$ (3)
$`e_0e_0`$ $`=`$ $`e_0,`$ (4)
where $`\delta _{ij}`$ is the Kronecker delta and $`ϵ_{ijk}`$ is the three-dimensional Levi-Civita tensor; since $`e_0=1`$, we omit it when there is no risk of confusion. Octonions have the same structure, only we must replace $`ϵ_{ijk}`$ by the octonionic structure constant $`f_{ijk}`$, which is completely antisymmetric and equal to one for any of the following three-cycles
$$123,\mathrm{\hspace{0.33em}145},\mathrm{\hspace{0.33em}176},\mathrm{\hspace{0.33em}246},\mathrm{\hspace{0.33em}257},\mathrm{\hspace{0.33em}347},\mathrm{\hspace{0.33em}\hspace{0.33em}365}.$$
(5)
The important feature of the reals, complex numbers, quaternions and octonions is the existence of an inverse for any non-zero element. For the generic quaternionic or octonionic element given in (1), we define the conjugate $`Y^{\ast }`$ as an involution, $`\left(Y^{\ast }\right)^{\ast }=Y`$, such that
$$Y^{\ast }=y_0e_0-y_ie_i,$$
(6)
introducing the norm as $`N\left(Y\right)=YY^{\ast }=Y^{\ast }Y`$, the inverse is
$$Y^{-1}=\frac{Y^{\ast }}{N\left(Y\right)}.$$
(7)
The norm is nondegenerate and positive definite. We have the decomposition property
$$N\left(XY\right)=N\left(X\right)N\left(Y\right)$$
(8)
$`N\left(Y\right)`$, being nondegenerate and positive definite, obeys the axioms of a scalar product.
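For quaternions these properties are easy to verify numerically. A small sketch (component conventions $`e_1e_2=e_3`$ as in Eq. (2); function names are ours):

```python
import numpy as np

def qmul(x, y):
    """Quaternion product, x = (x0, x1, x2, x3), with e1e2 = e3 etc."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return np.array([
        x0*y0 - x1*y1 - x2*y2 - x3*y3,
        x0*y1 + x1*y0 + x2*y3 - x3*y2,
        x0*y2 + x2*y0 + x3*y1 - x1*y3,
        x0*y3 + x3*y0 + x1*y2 - x2*y1,
    ])

def norm(x):
    # N(Y) = Y Y* reduces to the sum of squared components
    return float(np.dot(x, x))

def inverse(x):
    conj = x * np.array([1.0, -1.0, -1.0, -1.0])
    return conj / norm(x)
```

The decomposition property $`N(XY)=N(X)N(Y)`$ then holds to machine precision for arbitrary inputs.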
Going to higher dimensions, we define “hexagonions” ($`𝕏`$) by introducing a new element $`e_8`$ such that
$$\begin{array}{cccc}𝕏\hfill & =\hfill & 𝕆_1+𝕆_2e_8\hfill & \\ & =\hfill & x_0e_0+\mathrm{\cdots }+x_{15}e_{15}.\hfill & x_\mu \in ℝ\hfill \end{array}$$
(9)
and
$$e_ie_j=-\delta _{ij}+C_{ijk}e_k.$$
(10)
Now, we have to find a suitable form of the completely antisymmetric tensor $`C_{ijk}`$. Recalling how the structure constant is written for octonions
$`𝕆`$ $`=`$ $`ℍ_1+ℍ_2e_4`$ (11)
$`=`$ $`x_0e_0+\mathrm{\cdots }+x_7e_7,`$
where $`ℍ`$ are quaternions; we have already chosen the convention $`e_1e_2=e_3`$, which is extendable to (11). We set $`e_1e_4=e_5`$, $`e_2e_4=e_6`$ and $`e_3e_4=e_7`$, but we still lack the relationships between the remaining possible triplets, $`\{e_1,e_6,e_7\};`$ $`\{e_2,e_5,e_7\};`$ $`\{e_3,e_5,e_6\}`$, which can be fixed by using
$$\begin{array}{c}e_1e_6=e_1(e_2e_4)=-(e_1e_2)e_4=-e_3e_4=-e_7,\\ e_2e_5=e_2(e_1e_4)=-(e_2e_1)e_4=+e_3e_4=+e_7,\\ e_3e_5=e_3(e_1e_4)=-(e_3e_1)e_4=-e_2e_4=-e_6.\end{array}$$
These cycles define all the structure constants for octonions. Returning to $`𝕏`$, we have the seven octonionic conditions, and the decomposition (9). We set $`e_1e_8=e_9,e_2e_8=e_A,e_3e_8=e_B,e_4e_8=e_C,e_5e_8=e_D,e_6e_8=e_E,e_7e_8=e_F`$ where $`A=10,B=11,C=12,D=13,E=14`$ and $`F=15`$. The other elements of the multiplication table may be chosen in analogy with (11). Explicitly, the 35 hexagonionic triplets are
$$\begin{array}{ccccccc}(123),& (145),& (246),& (347),& (257),& (176),& (365),\\ (189),& (28A),& (38B),& (48C),& (58D),& (68E),& (78F),\\ (1BA),& (1DC),& (1EF),& (29B),& (2EC),& (2FD),& (3A9),\\ (49D),& (4AE),& (4BF),& (3FC),& (3DE),& (5C9),& (5AF),\\ (5EB),& (6F9),& (6CA),& (6BD),& (79E),& (7DA),& (7CB).\end{array}$$
This can be extended for any generic higher dimensional $`𝔽^n`$.
It can be shown by using some combinatorics that the number of such triplets $`N`$ for a general $`𝔽^n`$ algebra is ($`n>1`$)
$$N=\frac{\left(2^n-1\right)!}{\left(2^n-3\right)!3!},$$
(12)
giving
$$\begin{array}{cccc}𝔽^n& n& dim& N\\ ℍ& 2& 4& 1\\ 𝕆& 3& 8& 7\\ 𝕏& 4& 16& 35\\ & & \text{and so on.}& \end{array}$$
One may notice that for any non-ring division algebra $`\left(𝔽^n,n>3\right)`$, $`N>dim(𝔽^n)`$, except when $`dim=\mathrm{\infty }`$, i.e., a functional Hilbert space with a Cliff$`(0,\mathrm{\infty })`$ structure.
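Both the multiplication table and the counting formula (12) can be generated mechanically. The sketch below uses the Cayley–Dickson doubling rule $`(a,b)(c,d)=(ac-\overline{d}b,da+b\overline{c})`$, which reproduces the octonion cycles of (5) and the 35 hexagonionic triplets listed above; `unit_mul` and `triplets` are our names:

```python
from math import factorial

def unit_mul(i, j, m):
    """e_i * e_j in the Cayley-Dickson algebra of dimension m (m = 2^n),
    using (a,b)(c,d) = (ac - conj(d) b, d a + b conj(c)).
    Returns (sign, k) with e_i e_j = sign * e_k."""
    if m == 1:
        return 1, 0
    h = m // 2
    if i < h and j < h:
        return unit_mul(i, j, h)
    if i < h:                        # (a,0)(0,d) = (0, d a)
        s, k = unit_mul(j - h, i, h)
        return s, k + h
    if j < h:                        # (0,b)(c,0) = (0, b conj(c))
        s, k = unit_mul(i - h, j, h)
        return (s if j == 0 else -s), k + h
    s, k = unit_mul(j - h, i - h, h)   # (0,b)(0,d) = (-conj(d) b, 0)
    return (-s if j - h == 0 else s), k

def triplets(n):
    """All triplets {i,j,k} with e_i e_j = +/- e_k in the 2^n-dim algebra."""
    d = 2 ** n
    return {tuple(sorted((i, j, unit_mul(i, j, d)[1])))
            for i in range(1, d) for j in range(i + 1, d)}

def n_formula(n):
    d = 2 ** n
    return factorial(d - 1) // (factorial(d - 3) * 6)
```

Since each pair of imaginary units lies in exactly one triplet, the count is $`\binom{2^n-1}{2}/3`$, which is exactly formula (12).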
It is clear that for any ring or non-ring division algebra, $`e_i,e_j\in 𝔽^n`$, we have
$$\{e_i,e_j\}=-2\delta _{ij}.$$
(13)
As we explained in and , treating quaternions and octonions as elements of $`R^4`$ and $`R^8`$ respectively, we can find the full set of matrices $`R\left(4\right)`$ and $`R\left(8\right)`$ that corresponds to any elements $`e_i`$ explicitly
$`\begin{array}{cccc}\text{for quaternions}\hfill & e_i\hfill & \rightarrow \hfill & (𝔼_i)_{\alpha \beta }=\delta _{i\alpha }\delta _{\beta 0}-\delta _{i\beta }\delta _{\alpha 0}+ϵ_{i\alpha \beta },\hfill \\ \{𝔼_i,𝔼_j\}=-2\delta _{ij}\hfill & & i,j=\mathrm{1..3},\hfill & \alpha ,\beta =\mathrm{0..3},\hfill \\ \text{for octonions}\hfill & e_i\hfill & \rightarrow \hfill & (𝔼_i)_{\alpha \beta }=\delta _{i\alpha }\delta _{\beta 0}-\delta _{i\beta }\delta _{\alpha 0}+f_{i\alpha \beta },\hfill \\ \{𝔼_i,𝔼_j\}=-2\delta _{ij}\hfill & & i,j=\mathrm{1..7},\hfill & \alpha ,\beta =\mathrm{0..7}\hfill \end{array}`$ (18)
(19)
Following the same translation idea, projecting our algebra $`𝕏`$ over $`R^{16}`$, any $`𝔼_i`$ is given by a relation similar to that given in (19),
$$(𝔼_i)_{\alpha \beta }=\delta _{i\alpha }\delta _{\beta 0}-\delta _{i\beta }\delta _{\alpha 0}+C_{i\alpha \beta }.$$
(20)
But contrary to quaternions and octonions, the Clifford algebra (over the real field $`R^{16}`$) closes only for a subset of these $`𝔼_i`$’s, namely
$$\{𝔼_i,𝔼_j\}=-2\delta _{ij}\text{ for }i,j=1\mathrm{\dots }8,\text{ not }1\mathrm{\dots }15.$$
(21)
This is because we have lost the ring division structure. One can easily find that a ninth $`𝔼_i`$ can be constructed, in agreement with the Clifford algebra classification . There is no standard<sup>3</sup><sup>3</sup>3Look to for a non standard representation. 16-dimensional representation for $`Cliff\left(0,15\right)`$. Following this procedure, we can give a simple way to write real Clifford algebras in any Euclidean dimension.
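The closure claim can be checked by direct matrix computation. The sketch below builds the 16×16 matrices of Eq. (20) from the listed triplets. One detail: the table entry coupling indices 6 and F is taken here as (6F9), since the requirement that every index pair lie in exactly one triplet forces it (the pair {6,D} cannot occur twice, and {6,9}, {9,F} must occur somewhere). The sketch also exhibits one explicit ninth mutually anticommuting matrix, the product $`𝔼_1\mathrm{\cdots }𝔼_8`$, which squares to $`+1`$:

```python
import numpy as np

# The 35 triplets in hex notation (A=10 ... F=15); "ijk" encodes e_i e_j = e_k.
TRIPLETS = ["123", "145", "246", "347", "257", "176", "365",
            "189", "28A", "38B", "48C", "58D", "68E", "78F",
            "1BA", "1DC", "1EF", "29B", "2EC", "2FD", "3A9",
            "49D", "4AE", "4BF", "3FC", "3DE", "5C9", "5AF",
            "5EB", "6F9", "6CA", "6BD", "79E", "7DA", "7CB"]

DIM = 16
C = np.zeros((DIM, DIM, DIM))
for t in TRIPLETS:
    i, j, k = (int(ch, 16) for ch in t)
    for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
        C[a, b, c], C[b, a, c] = 1.0, -1.0   # totally antisymmetric

def E(i):
    """(E_i)_{ab} = d_{ia} d_{b0} - d_{ib} d_{a0} + C_{iab},  Eq. (20)."""
    m = C[i].copy()
    m[i, 0] += 1.0
    m[0, i] -= 1.0
    return m

Es = [None] + [E(i) for i in range(1, DIM)]
I16 = np.eye(DIM)

def acomm(a, b):
    return a @ b + b @ a

# Clifford closure holds for i,j = 1..8 but fails for the full set 1..15;
# the product E_1...E_8 supplies a ninth anticommuting matrix (squaring to +1).
W = Es[1]
for i in range(2, 9):
    W = W @ Es[i]
```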
Sometimes a specific multiplication table may be favored. For example, in soliton theory the existence of a symplectic structure related to the bihamiltonian formulation of integrable models is welcome. It is known from the Darboux theorem that locally a symplectic structure is given, up to a minus sign, by
$$𝒥_{dim\times dim}=\left(\begin{array}{cc}0& \mathrm{𝟏}_{\frac{dim}{2}}\\ -\mathrm{𝟏}_{\frac{dim}{2}}& 0\end{array}\right),$$
(22)
this fixes the following structure constants
$`C_{\left(\frac{dim}{2}\right)1\left(\frac{dim}{2}+1\right)}=-1,`$ (23)
$`C_{\left(\frac{dim}{2}\right)2\left(\frac{dim}{2}+2\right)}=-1,`$ (24)
$`\mathrm{\vdots }`$ (25)
$`C_{\left(\frac{dim}{2}\right)\left(\frac{dim}{2}-1\right)\left(dim-1\right)}=-1,`$ (26)
which is the decomposition that we have chosen in (11) for octonions
$$C_{415}=C_{426}=C_{437}=-1.$$
(27)
Generally our symplectic structure is
$$\left(-𝔼_{\left(\frac{dim}{2}\right)}\right)_{\alpha \beta }=\delta _{0\alpha }\delta _{\beta \left(\frac{dim}{2}\right)}-\delta _{0\beta }\delta _{\alpha \left(\frac{dim}{2}\right)}-ϵ_{\alpha \beta \left(\frac{dim}{2}\right)}.$$
(28)
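For the octonion case ($`dim=8`$) one can verify directly that $`-𝔼_4`$, built from the cycles of (5), coincides with the Darboux matrix (22). A short numerical check (our construction):

```python
import numpy as np

# Octonion cycles of (5): e1e2=e3, e1e4=e5, e2e4=e6, e3e4=e7, e2e5=e7, ...
OCT = [(1, 2, 3), (1, 4, 5), (2, 4, 6), (3, 4, 7),
       (2, 5, 7), (1, 7, 6), (3, 6, 5)]

f = np.zeros((8, 8, 8))
for i, j, k in OCT:
    for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
        f[a, b, c], f[b, a, c] = 1.0, -1.0

# (E_4)_{ab} = d_{4a} d_{b0} - d_{4b} d_{a0} + f_{4ab}
E4 = f[4].copy()
E4[4, 0] += 1.0
E4[0, 4] -= 1.0

# Darboux form (22): +1 block upper right, -1 block lower left
Jd = np.block([[np.zeros((4, 4)), np.eye(4)],
               [-np.eye(4), np.zeros((4, 4))]])
```

With the cycle choice of (11), the three constants $`f_{415}=f_{426}=f_{437}=-1`$ are exactly what makes $`-𝔼_4`$ reproduce $`𝒥`$.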
Moreover, some other choices may exhibit a relation with number theory and Galois fields . It is highly non-trivial how the Clifford algebraic language can be used to unify many distinct mathematical notions such as Grassmannian , complex, quaternionic and symplectic structures.
The main result of this section, the non-existence of a standard associative 16-dimensional representation of $`Cliff(0,15)`$, is in agreement with the Atiyah–Bott–Shapiro classification of real Clifford algebras . In this context, the importance of ring division algebras can also be deduced from the Bott periodicity .
I would like to thank P. Rotelli for some useful comments.
# Detection of very small neutrino masses in double-beta decay using laser tagging
## 1 Introduction
Recent results from a number of independent experiments can be interpreted as due to finite neutrino masses and, in particular, high statistics measurements of atmospheric neutrinos by the Super-Kamiokande experiment are regarded by most as firm evidence that neutrinos have non-zero masses.
While these measurements, based on oscillations, have set the stage for a systematic study of the intrinsic neutrino properties, only upper limits exist on the absolute magnitude of neutrino masses. Indeed, theoretical models span a large range of scenarios, from the degenerate case, where mass differences among flavors are small with respect to the absolute mass scale , to the hierarchical, where mass differences are of the same order as the masses themselves. However, while the neutrino mass scale is unknown, the present data on oscillations lead rather naturally to masses in the range $`0.01<m_\nu <1`$ eV, as shown recently, e.g. in .
It is unlikely that direct neutrino mass measurements, most notably with tritium , will be able to reach sensitivities substantially below 1 eV in the near future. In contrast, we will show that a large double-$`\beta `$ decay experiment using isotopically enriched $`{}_{}{}^{136}\mathrm{Xe}`$ can reach a sensitivity corresponding to neutrino masses as low as $`\sim `$0.01 eV. A xenon detector offers the unique possibility of identifying the final state, thus providing an essentially background-free measurement of unprecedented sensitivity for the neutrino mass.
It is well known that neutrinoless double beta decay, $`0\nu \beta \beta `$, can proceed only if neutrinos are massive Majorana particles. If the $`0\nu \beta \beta `$ decay occurs, the effective Majorana neutrino mass $`m_\nu `$ is related to the half-life $`T_{1/2}^{0\nu \beta \beta }`$ as:
$$m_\nu ^2=\left(T_{1/2}^{0\nu \beta \beta }G^{0\nu \beta \beta }(E_0,Z)|M_{GT}^{0\nu \beta \beta }-\frac{g_V^2}{g_A^2}M_F^{0\nu \beta \beta }|^2\right)^{-1}$$
(1)
where $`G^{0\nu \beta \beta }(E_0,Z)`$ is a known phase space factor depending on the end-point energy $`E_0`$ and the nuclear charge $`Z`$, and $`M_{GT}^{0\nu \beta \beta }`$ and $`M_F^{0\nu \beta \beta }`$ are the Gamow-Teller and Fermi nuclear matrix elements for the process. Here we have defined $`m_\nu =_im_iU_{ei}^2`$ with $`U`$ being the mixing matrix in the lepton sector, and $`m_i`$ the masses of the individual Majorana neutrinos. Hence, although difficulties in the nuclear models used to calculate the matrix elements give some uncertainty on the value of $`m_\nu `$ (see, e.g. ), $`0\nu \beta \beta `$decay is sensitive to the masses of all neutrino flavors, provided that the mixing angles are non-negligible.
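Once the nuclear factor $`G^{0\nu \beta \beta }|M|^2`$ is fixed, Eq. (1) implies $`m_\nu \propto (T_{1/2}^{0\nu \beta \beta })^{-1/2}`$. A small sketch of this scaling, calibrated with the half-life/mass pairs quoted in Sec. 2 (QRPA matrix elements) rather than by evaluating matrix elements directly; the $`10^{27}`$ yr figure in the comment is an illustrative assumption:

```python
import math

# Half-life limit (yr) and corresponding mass limit (eV) quoted in Sec. 2:
CAL = {"76Ge": (5.7e25, 0.2), "136Xe": (4.4e23, 2.2)}

def mass_limit(t_half, isotope="136Xe"):
    """Effective Majorana mass limit implied by a half-life limit t_half (yr),
    using the m ~ T^(-1/2) scaling of Eq. (1); the nuclear factor G|M|^2
    is fixed implicitly by the quoted (QRPA) calibration point."""
    t0, m0 = CAL[isotope]
    return m0 * math.sqrt(t0 / t_half)

# On this scale, a background-free 136Xe experiment reaching ~1e27 yr
# would probe effective masses of a few times 0.01 eV.
```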
## 2 Backgrounds and Experimental Limitations
At present the best sensitivities to $`0\nu \beta \beta `$ decay have been reached with $`{}_{}{}^{76}\mathrm{Ge}`$ diode ionization counters with an exposure of 24.16 kg yr and with a $`{}_{}{}^{136}\mathrm{Xe}`$ time projection chamber (TPC) with an exposure of 4.87 kg yr . The measured half-life limits of $`5.7\times 10^{25}`$ yr for germanium and $`4.4\times 10^{23}`$ yr for xenon can be interpreted as neutrino mass limits of, respectively, $`m_\nu <0.2(0.6)`$ eV and $`m_\nu <2.2(5.2)`$ eV, using the Quasi-Particle Random Phase Approximation (QRPA) for the nuclear matrix element calculation (values in parentheses: nuclear Shell Model, NSM ).
The germanium detector rejects background on the basis of its excellent energy resolution and of pulse-shape discrimination, and has recently reported a specific background as low as 0.3 kg<sup>-1</sup> yr<sup>-1</sup> FWHM<sup>-1</sup>. The relatively inferior energy resolution of TPCs has been complemented by superior tracking capabilities that allow the xenon experiment to partially reconstruct the two-electron topology of $`\beta \beta `$-decay, obtaining a specific background of 2.5 kg<sup>-1</sup> yr<sup>-1</sup> FWHM<sup>-1</sup> . None of these backgrounds is sufficient for a decisive experiment in the interesting region discussed above. For that, it is essential to find a reliable way to drastically increase the size of the detectors while, simultaneously, reducing the backgrounds. This dual requirement stems from the fact that in a background-free experiment the neutrino mass sensitivity scales as $`m_\nu \propto 1/\sqrt{T_{1/2}^{0\nu \beta \beta }}\propto 1/\sqrt{Nt}`$, where $`t`$ is the measurement time and $`N`$ the number of nuclei under study. In the opposite extreme, when the background scales with $`Nt`$, the neutrino mass sensitivity would scale only as $`m_\nu \propto 1/(Nt)^{1/4}`$. Obviously, the backgrounds observed in all current experiments will become the true limiting factor of any kind of future large experiment, hampering the full utilization of very large masses of $`\beta \beta `$ emitters. Unlike other isotopes, however, $`{}_{}{}^{136}\mathrm{Xe}`$ allows for direct tagging of the Ba-ion final state using optical spectroscopy, as pointed out for the first time in . While this technique cannot discriminate between $`0\nu \beta \beta `$ and $`2\nu \beta \beta `$ decays, the second process is not the dominant background at the sensitivities sought here and can be eliminated by kinematical reconstruction in a xenon TPC. Moreover, xenon is one of the easiest isotopes to enrich and, like argon, can be used as an active medium in ionization chambers.
Hence, an experiment with xenon can use an entirely new variable in order to drastically reduce the backgrounds and explore the range of interesting neutrino masses.
## 3 Barium detection in xenon
The $`\beta \beta `$-decay of $`{}_{}{}^{136}\mathrm{Xe}`$ produces a Ba<sup>++</sup> ion in the final state that can be readily neutralized to Ba<sup>+</sup> by an appropriate secondary gas in the TPC. The barium ion can then be individually detected through its laser-induced fluorescence. Single Ba<sup>+</sup> ions were first observed in 1978 using a radio-frequency quadrupole trap and laser cooling.
The level structure of the alkali-like Ba<sup>+</sup> ion is shown in Figure 1. Due to the strong 493 nm allowed transition, ground-state ions can be optically excited to the $`6^2\mathrm{P}_{1/2}`$ state, from where they have a substantial branching ratio (30%) to decay into the metastable $`5^2\mathrm{D}_{3/2}`$ state. Specific Ba<sup>+</sup> detection is then achieved by exciting the system back into the $`6^2\mathrm{P}_{1/2}`$ state with 650 nm radiation and observing the blue photon from the decay to the ground state (70% branching ratio). This transition has a spontaneous lifetime of 8 ns and, when saturated, will radiate $`6\times 10^7`$ photons/s. A pair of lasers tuned to the appropriate frequencies and simultaneously steered to the place where the $`\beta \beta `$-decay candidate event is found can provide a very selective validation, effectively providing a new independent constraint to be used in $`\beta \beta `$-decay background subtraction. The light from the P to S transition conveniently lies in the region of maximum quantum efficiency of bialkali photocathodes, so that an array of conventional large-area photomultipliers can be used for the detection. The very large saturation rate makes the experiment possible even with modest photocathode angular coverage. While it is possible in principle to steer the lasers anywhere inside the TPC, the very large volume and the need for light baffling may favor a scheme in which the Ba<sup>+</sup> drift in the large electric field is used to bring the ion into specific laser-detection regions.
The primary difference between the single-atom work done previously and this experiment is that here the $`\mathrm{Ba}^+`$ ion is not in vacuum but rather in a buffer gas (Xe) at a pressure of several atmospheres. The high-pressure xenon has two effects: first, it effectively traps the barium, since diffusion in the dense gas is sufficient to confine the atoms for a long enough time to obtain a signal; second, it pressure-broadens the optical transitions, increasing the laser power required. We calculate that at 5 atm the barium ion will diffuse only $`0.7`$ mm in 1 s, and during this time it can be cycled over $`10^7`$ times through the three-level scheme of Figure 1, emitting more than $`10^7`$ 493 nm photons. The ion drift in the electric field of the TPC can be accurately measured and corrected for in the process of steering the lasers. For the moment we notice that drift velocities of Tl<sup>+</sup> in Xe have been measured and are low enough to make the correction possible. We also find from preliminary calculations that at the same pressure the broadened line-width is $`20`$ GHz, 1000 times greater than the natural line-width. This results in a saturation intensity of $`5`$ W/cm<sup>2</sup>. The Ba<sup>+</sup> lifetime, the pressure broadening of the lines involved, and the lifetime of the $`5^2\mathrm{D}_{3/2}`$ state will have to be measured in xenon before the detection system can be fully optimized. However, measurements on the corresponding neutral Ba states at 0.5 atm in He and Ar show that operation at 5 to 10 atm should be possible.
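The fluorescence rate quoted above can be checked with a back-of-the-envelope calculation: treating the saturated 493 nm transition as a two-level system, the scattering rate approaches $`1/(2\tau )`$. A minimal sketch follows; the solid-angle coverage and quantum efficiency are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope check of the Ba+ fluorescence numbers quoted above.
# Assumption (not from the paper): a fully saturated two-level transition
# scatters photons at half the spontaneous rate, R_sat = 1/(2*tau).

tau = 8e-9                    # spontaneous lifetime of the P -> S decay, s
r_sat = 1.0 / (2.0 * tau)     # saturated scattering rate, photons/s
print(f"saturated rate: {r_sat:.2e} photons/s")   # ~6e7, as quoted in the text

# Photons collected in 1 s with assumed (hypothetical) detection parameters:
coverage = 0.01               # assumed photocathode solid-angle coverage
qe = 0.25                     # assumed bialkali quantum efficiency at 493 nm
detected = r_sat * 1.0 * coverage * qe
print(f"detected in 1 s: {detected:.1e}")
```

Even with only ~1% angular coverage, the photon yield per second is large, which is why the text notes that modest photocathode coverage suffices.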
## 4 A large xenon gas TPC with barium tagging
Among the different gas ionization detectors known, a TPC is the ideal detector for $`\beta \beta `$-decay, as it has no wires or other materials in the detection volume, hence reducing the background internal to the detector and simplifying the laser scanning. The requirements of size, efficiency for contained events, energy, and spatial resolution can be achieved with a large (40 m<sup>3</sup>) gas-phase TPC with xenon. TPCs of similar size at atmospheric pressure are successfully in operation, in one case with some 10 kg of $`{}_{}{}^{136}\mathrm{Xe}`$. A 5 to 10 atm, 40 m<sup>3</sup> chamber would contain 1 to 2 tons of $`{}_{}{}^{136}\mathrm{Xe}`$ and is within the reach of current state-of-the-art technology.
The ultimate sensitivity of the technique is probably limited by the practical availability of $`{}_{}{}^{136}\mathrm{Xe}`$, which represents 8.9% of natural xenon and has to be extracted by isotopic separation. While the isotopic separation is indeed one of the main challenges of the experiment, in the case of $`{}_{}{}^{136}\mathrm{Xe}`$ this operation is simplified by the fact that xenon is a gas at standard conditions, and essentially any known separation technology can be used. Relatively large amounts of $`{}_{}{}^{136}\mathrm{Xe}`$ have been obtained in the past by ultracentrifugation, with purities sufficient for the experiment described. $`{}_{}{}^{136}\mathrm{Xe}`$ handling is simplified by its inert character, which allows for the extraction of cosmogenically produced elements with distillation and chemical filtering that can be carried on in the underground laboratory prior to (and during) the operation of the experiment. An experiment with up to 10 tons of $`{}_{}{}^{136}\mathrm{Xe}`$ is consistent with the availability of xenon on the world market and can be set up by assembling a few TPC modules inside a single pressure vessel or, possibly, by increasing the pressure and/or volume of a single chamber.
The use of liquid-xenon (LXe) in a TPC would result in a very compact detector with considerable advantages. However, the range of 1.2 MeV electrons from the $`0\nu \beta \beta `$-decay in LXe is only 2.4 mm so that, given any conceivable spatial resolution, the topological information, essential for background rejection, would be lost. In contrast, at room temperature 5 atm of xenon gas correspond to a density of 30 g/l and an electron range of 21.6 cm at 1.2 MeV. In Figure 2 we show a possible scheme for the $`\beta \beta `$-decay detector.
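The quoted gas density and the 1-2 ton isotope content follow directly from the ideal gas law. A quick sanity check, assuming room temperature (293 K) and pure $`{}_{}{}^{136}\mathrm{Xe}`$:

```python
# Ideal-gas check of the quoted density (~30 g/l at 5 atm, room temperature)
# and the 1-2 ton 136Xe content of a 40 m^3 chamber.
R = 8.314          # J/(mol K), gas constant
M = 0.1359         # kg/mol, molar mass of 136Xe
T = 293.0          # K, assumed room temperature
V = 40.0           # m^3, chamber volume

for p_atm in (5, 10):
    p = p_atm * 101325.0          # Pa
    rho = p * M / (R * T)         # kg/m^3 (numerically equal to g/l)
    mass_t = rho * V / 1000.0     # metric tons
    print(f"{p_atm} atm: {rho:.0f} g/l, {mass_t:.1f} t")
```

This reproduces roughly 28 g/l and 1.1 t at 5 atm, and about double at 10 atm, consistent with the numbers in the text.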
It is known that it is quite difficult to obtain electron multiplication and stable operation in xenon chambers. This is due to the far-UV scintillation light ($`\lambda _{\mathrm{scint}}\simeq 178`$ nm, corresponding to 6.93 eV), copiously produced in the multiplication process, which in turn ionizes the gas and extracts electrons from the metallic surfaces. In our case this particular requirement is greatly relaxed by the use of gas micropattern detector planes instead of a more conventional anode wire array for the chamber readout, drastically limiting the solid angle available for the UV radiation. In addition, the chamber will also use a quencher gas, able to absorb the xenon scintillation UV and re-emit light in the blue or green, so that timing information can be recovered using the photomultipliers and the event can be localized also in the third (time) coordinate. A field of about 1 kV/cm will be needed in order to achieve a high drift velocity for electrons in xenon. While this field strength is not sufficient to neutralize Ba<sup>++</sup> to Ba<sup>+</sup>, an appropriate choice of ionization potential (IP) for the quencher gas can achieve this purpose. IPs between 5.2 eV (Ba) and 10.001 eV (Ba<sup>+</sup>) are found in several organic molecules that would provide a stable environment for Ba<sup>+</sup>. Molecules like TMA ($`(\mathrm{CH}_3)_3\mathrm{N}`$), TEA ($`(\mathrm{C}_2\mathrm{H}_5)_3\mathrm{N}`$), TMG ($`(\mathrm{CH}_3)_4\mathrm{Ge}`$) and TMS ($`(\mathrm{CH}_3)_4\mathrm{Si}`$) are candidates fulfilling the above requirements.
An underground site with an overburden of $`2000`$ mwe (meter water equivalent) will reduce the cosmic ray muon flux to $`<10^{-2}\mathrm{m}^{-2}\mathrm{s}^{-1}`$, corresponding to less than 0.1 Hz through the detector. Muons recorded in the TPC have a distinctive signature and indeed were observed and rejected by the Gotthard group based on track length, lack of scattering (high energy), and specific ionization. The last two features provide good discrimination even for tracks that clip a small corner of the chamber. The online trigger processor will be able to analyze and reject most muon tracks without activating the laser Ba-tagging system. Based on simple geometrical considerations we expect the laser system to be activated by muon tracks less than once per hour. Some background will be produced by neutrons from muon spallation in the structures (rock and other materials) outside the detector. These fast neutrons enter the detector with some efficiency and produce spallation reactions on the xenon, carbon and hydrogen nuclei contained in the TPC. While the original muon and the neutron trail go undetected, the spallation processes produce very distinct short, highly ionizing tracks and can be distinguished and rejected even at trigger level. Hence at the depth considered the detector does not need an active veto counter, as already found by the Gotthard group.
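The sub-0.1 Hz muon rate is just flux times projected area. A sketch with an assumed projected area of 10 m² (the actual chamber geometry is not specified in this estimate):

```python
# Assumed-geometry estimate of the muon rate quoted above.
flux = 1e-2          # muons / m^2 / s at ~2000 mwe (upper bound from the text)
area = 10.0          # m^2, assumed projected area of the 40 m^3 TPC
rate = flux * area   # Hz
print(f"muon rate through detector: {rate:.2f} Hz")

# Fraction of muon tracks the online trigger must reject so that the laser
# system is activated by muons less than once per hour, as stated above:
per_hour = rate * 3600.0
max_pass = 1.0 / per_hour
print(f"required trigger rejection: > {1 - max_pass:.3f}")
```

With these assumptions the trigger needs to reject better than about 99.7% of muon tracks, which is modest given the distinctive muon signature described above.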
More serious are the $`\gamma `$-ray backgrounds from natural radioactivity, either produced outside the TPC (mainly by the rock or concrete), or inside (mainly <sup>222</sup>Rn and, possibly, <sup>85</sup>Kr and <sup>42</sup>Ar). External $`\gamma `$-ray backgrounds from the rock and concrete can be attenuated by a roughly 25 cm thick lead or steel enclosure and by the 1 cm thick pressure vessel that will be built out of low-activity steel. Additionally, cleaner shielding could be provided, if needed. In the following we conservatively assume that the TPC itself (without the Ba-tagging system) will have the same specific rate of mis-identified background as the Gotthard experiment, and we rely on Ba-tagging for the final step of reduction. This hypothesis is conservative since the larger volume of our TPC provides better self-shielding against externally produced radiation.
## 5 Discussion
We estimate the position and energy resolution of our chamber from an extrapolation using the Gotthard TPC. Assuming a drift velocity similar to the one in Gotthard and a drift distance of 250 cm (3.6 times larger than the Gotthard case) we obtain transverse and longitudinal position resolutions ($`\sigma `$) of better than 5 mm. This figure should give us roughly the same background suppression factor as obtained at Gotthard, since the electron range is 21 cm. Furthermore, unlike in the case of the Gotthard experiment, the knowledge of the time of occurrence will provide longitudinal localization of each event, improving the understanding of external backgrounds and allowing for a longitudinal fiducial volume cut.
Energy resolution should be no worse than the $`\sigma _E/E=2.5\%`$ (at 1.592 MeV) obtained at Gotthard since, in addition to the total charge, the scintillation light will also be collected. The possibility of using the laser tagging system with a small spot-size in a “raster scan” mode around the event location will localize the decay vertex with mm precision and allow a full kinematical fit including energy, angular correlation and range for each of the electrons. It is expected that substantial improvement in energy resolution and background rejection will be possible in this way.
While the quantitative advantage of each of the techniques discussed will be better understood after extensive laboratory tests and a full Monte Carlo simulation, here we simply assume that the above methods, together with the Ba<sup>+</sup> identification, will reduce the sum of all radioactivity backgrounds by at least three orders of magnitude with respect to the Gotthard case, making a 10-ton $`{}_{}{}^{136}\mathrm{Xe}`$ experiment essentially background free. From the size and geometry of the chamber we obtain an efficiency for fully contained events of 70%. In Table 1 we compare the projected sensitivity of this experiment with the present limits on $`0\nu \beta \beta `$ decay.
It is interesting to note that for the very large isotopic masses contemplated here, the background on $`0\nu \beta \beta `$ decay from the well-known $`2\nu \beta \beta `$ mode has to be carefully estimated. As already remarked, this “background” is not directly suppressed by the laser tagging methods. However, the energy spectra of the electrons, which represent the only distinctive feature of this background, will be better measured thanks to the better knowledge of the event kinematics. In the following we will conservatively disregard this additional information and suppress the $`2\nu \beta \beta `$ mode using total energy information, with the resolution mentioned above. In order to understand the role of the resolution we select events in the two intervals $`I_{sym}=[Q_{\beta \beta }-2\sigma _E,Q_{\beta \beta }+2\sigma _E]`$ and $`I_+=[Q_{\beta \beta },Q_{\beta \beta }+2\sigma _E]`$. We then compute the number of $`2\nu \beta \beta `$ decay events left in each case. For $`\sigma _E/E=2.8\%`$ we have 23 $`\mathrm{events}\mathrm{yr}^{-1}\mathrm{ton}^{-1}`$ left in $`I_{sym}`$ and 0.36 $`\mathrm{events}\mathrm{yr}^{-1}\mathrm{ton}^{-1}`$ left in $`I_+`$. A better resolution of $`\sigma _E/E=2\%`$ is used in the 10 ton - 10 yr exposure in Table 1. The loss in efficiency due to the asymmetric cut (and to the tails beyond $`2\sigma `$) is taken into account in the table as appropriate. It is clear, however, that our estimate is quite conservative, as the $`2\nu \beta \beta `$ decay background will be further suppressed by the kinematic fitting.
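The efficiency loss from the asymmetric cut can be illustrated for the $`0\nu \beta \beta `$ signal alone, modeled as a Gaussian line of width $`\sigma _E`$ at $`Q_{\beta \beta }`$ (the $`2\nu \beta \beta `$ leakage itself requires the full spectrum shape and is not reproduced here):

```python
from math import erf, sqrt

# Signal acceptance of the two energy windows defined above, assuming the
# 0vbb line at Q_bb is smeared by a Gaussian of width sigma_E.
def gauss_frac(a_sig, b_sig):
    """Fraction of a unit Gaussian between a_sig and b_sig (in units of sigma)."""
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return cdf(b_sig) - cdf(a_sig)

acc_sym = gauss_frac(-2, 2)   # I_sym = [Q_bb - 2*sigma, Q_bb + 2*sigma]
acc_plus = gauss_frac(0, 2)   # I_+   = [Q_bb, Q_bb + 2*sigma]
print(f"I_sym acceptance: {acc_sym:.3f}")   # ~0.954
print(f"I_+  acceptance: {acc_plus:.3f}")   # ~0.477
```

The asymmetric window thus halves the signal acceptance while, as the numbers above show, cutting the $`2\nu \beta \beta `$ contamination by nearly two orders of magnitude.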
The projected neutrino mass sensitivity of 10 - 50 meV would make the discussed experiment competitive with other large-scale double beta searches which have been proposed. NEMO3 is scheduled to start data taking in 2001 with initially 7 kg of <sup>100</sup>Mo and 1 kg of <sup>82</sup>Se in the form of passive source foils in a gas tracking detector utilizing a magnetic field and a scintillator calorimeter. A neutrino mass sensitivity of 0.1 eV is expected after 5 years. The ultimate goal is to run with 20 kg of <sup>82</sup>Se, <sup>100</sup>Mo or <sup>150</sup>Nd in order to measure an effective neutrino mass below 0.1 eV. A large cryogenic detector (CUORE), able to operate 600 kg of TeO<sub>2</sub> crystals, could be used to search for the $`0\nu \beta \beta `$ decay of $`{}_{}{}^{130}\mathrm{Te}`$. A neutrino mass sensitivity of around 0.1 eV has been quoted for this device. The proponents of the GENIUS project propose to use up to one ton of isotopically enriched <sup>76</sup>Ge suspended in a large tank of liquid nitrogen. The goal is to reach a mass sensitivity of around 0.01 eV. The latter two are calorimetric approaches. However, it has to be pointed out that the experiment discussed in this paper is the only one among these third-generation projects which plans to utilize a novel technique for background suppression in the form of Ba tagging.
In summary, we have described an advanced $`\beta \beta `$ detector system that uses a hybrid of atomic and particle physics techniques to qualitatively improve background suppression. Such a detector opens new possibilities in using massive amounts of $`{}_{}{}^{136}\mathrm{Xe}`$ for an advanced $`\beta \beta `$-decay experiment that would explore neutrino masses in the range 10 - 50 meV, providing a unique opportunity for discoveries in particle physics and cosmology.
## 6 Acknowledgments
We would especially like to thank F. Boehm for the help and guidance received in formulating much of this paper. We also owe gratitude to U. Becker, S. Chu, H. Henrikson, L. Ropelewski, T. Thurston and R. Zare for many useful discussions. Finally we would like to thank R.G.H. Robertson for pointing out an inconsistency in the original version of Table 1.
Figure 1: Energy vs $`\tau `$ at packing fractions $`\varphi =0.05`$ (upper) and 0.4 (lower) at different inelasticities. The dotted lines represent the short time behavior (Haff’s law), and the dashed-dotted lines the theory of [1].
Replacement cond-mat/0002383
Asymptotic Energy Decay in Inelastic Fluids
J.A.G. Orza, R. Brito and M.H. Ernst
Dpto. de Física Aplicada I, Universidad Complutense
28040 Madrid, Spain.
The goal of the present publication is the comparison of the two existing theoretical predictions for the long time behavior of the total energy $`E(t)`$ in a large system of freely evolving inelastic hard spheres (IHS) with computer simulations of $`N=\mathrm{50\hspace{0.17em}000}`$ hard disks.
(i) The first theory, a mode coupling theory by Brito and Ernst, predicts that $`E\simeq A\tau ^{-d/2}`$ for $`d\ge 2`$, where $`\tau (t)`$ is the average number of collisions suffered by a particle in time $`t`$ ($`\tau `$ is a nonlinear function of $`t`$, whose analytic form is unknown at large $`t`$) and $`A`$ is a known coefficient, which depends on the coefficient of restitution, $`r`$, and on the density, $`\varphi `$.
Computer simulations of $`\mathrm{ln}E(\tau )/E(0)`$ are shown in Fig. 1, where we have chosen units such that $`E(0)=1`$. These plots confirm that the small fluctuation theory of [1] gives, at fixed $`\varphi `$, quantitative predictions inside an $`r`$-window of width $`\mathrm{\Delta }r\simeq 0.1`$, centered around $`r_0\simeq 0.6,\mathrm{\hspace{0.17em}0.75},\mathrm{\hspace{0.17em}0.8},\mathrm{\hspace{0.17em}0.85}`$ at densities $`\varphi \simeq 0.05,\mathrm{\hspace{0.17em}0.11},\mathrm{\hspace{0.17em}0.245},\mathrm{\hspace{0.17em}0.4}`$ respectively. At larger/smaller $`r`$-values, $`E(\tau )`$ decays faster/slower than the prediction of [1]. The location $`r_0`$ is determined by two balancing effects: sufficiently high density to suppress large relative density fluctuations, which increase the mean overall collision frequency $`d\tau /dt`$ (compression of the $`\tau `$-axis), competing with sufficiently high inelasticity, which favors local inelastic collapses (a finite number of collisions in an infinitesimal time). Such collisions merely increase the $`\tau `$-value (stretching of the $`\tau `$-axis) at fixed $`t`$ without advancing the $`N`$-particle dynamics. At small density the relative density fluctuations become very large, as the clusters keep growing and compactifying, which invalidates the linear theory of [1].
(ii) Ben-Naim et al. show that the behavior of a fluid of inelastic hard rods is described by the totally inelastic (sticky) gas. Consequently, the energy decays at long times as $`t^{-2/3}`$ in 1-D, and is independent of the coefficient of restitution. Moreover, they conjecture that the total energy $`E`$ for $`2\le d\le 4`$ decays as $`Bt^{-d/2}`$ with an unknown coefficient $`B`$, independent of inelasticity. These results are only valid at asymptotically large times.
Our goal is to test this conjecture against simulations of inelastic hard disks. The energy decay when plotted as $`\mathrm{log}E`$ versus $`\mathrm{log}t`$ (see Fig. 2) gives the misleading impression that our simulations have reached their asymptotic time dependence, and suggests that $`E\simeq Ct^{-a}`$ decays algebraically with a density-dependent exponent $`a`$, but with $`a`$ and $`C`$ independent of the dissipation, possibly corresponding to a sticky hard sphere fluid. At small densities (Fig. 2a), $`a\simeq 1`$, which offers partial support for the conjecture, while at higher densities (Fig. 2b) the analysis seems to show a smaller exponent.
However, a more sensitive test is to plot $`E(0)/E(t)`$ versus $`t`$, and test whether the curves for different values of the restitution coefficient $`r`$ become linear in $`t`$, and tend to coincide for large times, i.e. become independent of $`r`$. This is done in Fig. 3 for three different packing fractions $`\varphi =0.05,0.245,0.4`$, each for a range of $`r`$–values over a long time interval far beyond where Haff’s law is valid. The results at low density can hardly be considered as evidence for the conjecture. The curves at intermediate density, $`\varphi =0.245`$ show behavior that might be linear in $`t`$, but depends strongly on the degree of inelasticity. Simulations at density $`\varphi =0.11`$ show behavior similar to the ones for $`\varphi =0.245`$. The behavior at the highest density, at $`\varphi =0.4`$, in the time interval considered, looks roughly independent of the degree of inelasticity, but the curves show a tendency to diverge at later times.
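The two diagnostics — the log-log slope of Fig. 2 and the linearity of $`E(0)/E(t)`$ in Fig. 3 — can be sketched on synthetic data obeying an exact power law (the values of $`a`$ and $`C`$ below are illustrative, not the simulation data):

```python
import math

# Synthetic energy decay E(t) = C * t^(-a), sampled on a geometric time grid.
a_true, C = 1.0, 2.5
ts = [10.0 * 1.5**k for k in range(20)]
Es = [C * t**(-a_true) for t in ts]

# Diagnostic 1 (Fig. 2 style): least-squares slope of log E vs log t.
xs = [math.log(t) for t in ts]
ys = [math.log(E) for E in Es]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"fitted exponent: {-slope:.3f}")

# Diagnostic 2 (Fig. 3 style): 1/E normalized to the first point is linear
# in t only when a = 1 exactly, which makes this the more sensitive test.
inv = [Es[0] / E for E in Es]
ratios = [(inv[i + 1] - inv[i]) / (ts[i + 1] - ts[i]) for i in range(n - 1)]
print("constant slope of E(0)/E vs t:", max(ratios) - min(ratios) < 1e-9)
```

On real simulation data the second diagnostic also exposes any residual dependence on the restitution coefficient, since curves for different $`r`$ should collapse only if the asymptotic decay is truly inelasticity-independent.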
In general, the curves in Fig. 3 do not seem to have reached their asymptotic form, and are not conclusive enough to support or refute conjectures about the asymptotic $`t`$-dependence being independent of the degree of inelasticity.
Finally we observe that at asymptotically large times there is an important distinction between small and thermodynamically large systems, for the following reason. The growth of patterns, i.e. vortices and density clusters, is controlled by diffusive modes, and the typical diameters $`L_v(\tau )`$ of these patterns grow as $`\sqrt{\tau }`$. As soon as the system size $`L\sim L_v(\tau )`$, patterns start to interfere with their periodic images, and a crossover occurs to a steady state, which is fully determined by the (unphysical) periodic boundary conditions. This is the case for asymptotically large times in small systems. In thermodynamically large systems this crossover is never reached.
In small systems we have observed that the energy decays as $`E(t)\simeq C(N,r)/t^2`$ for $`t\to \infty `$, with a coefficient $`C`$ that depends on $`N`$ and $`r`$, and that differs in general from Haff’s law. The law $`E(t)\sim t^{-2}`$ seems to hold for small systems in any dimension larger than 1 (see also ).
IITB-TPH-0001
Baryogenesis through axion domain wall
S.N. Nayak and U.A.Yajnik
Physics Department, Indian Institute of Technology Bombay, Mumbai- 400076, India
Abstract
Generic axion models give rise to axion domain walls in the early Universe, and these walls have to disappear so as not to overclose the universe, thus limiting the nature of the discrete symmetry allowed in these types of models. Through QCD sphalerons, a net chiral charge can be created by these collapsing walls, which in turn can result in the observed baryon asymmetry.
nayaks@phy.iitb.ernet.in
yajnik@phy.iitb.ernet.in
The explanation of the observed baryon asymmetry of the universe remains an open problem in the realm of cosmology and particle physics, and has generated many diverse ideas and much activity. The earlier scenario producing the asymmetry via the decay of heavy gauge bosons and scalars in GUTs cannot survive the sphaleron washout at the electroweak scale, unless B-L is an exact symmetry. The scenario of electroweak baryogenesis through sphaleron transitions also runs into problems because of inadequate CP violation in the Higgs sector and, more importantly, because it is not clear that the electroweak phase transition is sufficiently strongly first order to realise the out-of-equilibrium condition through bubble dynamics. In this background it was suggested to generate the baryon asymmetry through topological defects (the remnants of some earlier symmetry breaking) at the electroweak scale. In this scenario baryogenesis takes place inside the core of the defects, where sphaleron transitions remain active. Here we discuss the issue in the context of axion domain walls and show that we can produce a sufficient amount of baryons at a scale much below the weak scale. A similar situation has been considered recently by Brandenberger et al.
Many axion models also have a discrete Z(N) symmetry which is spontaneously broken at $`T=\mathrm{\Lambda }_{QCD}`$. This is generic for any axion model where the Peccei-Quinn symmetry $`U_{PQ}(1)`$ is broken only by the QCD gluon anomaly. In the above, N is the number of quark flavours that rotate under $`U_{PQ}(1)`$. Because of this discrete symmetry, there exist N degenerate and distinct $`CP`$ conserving minima of the axion potential, which is of the form
$$V(a)=m_a^2(v_{PQ}/N)^2[1-f(aN/v_{PQ})],$$
(1)
where $`f`$ is a periodic function of period $`2\pi `$ and $`v_{PQ}`$ is the Peccei-Quinn scale. These disconnected and degenerate vacuum states give rise to axion domain walls at $`T=\mathrm{\Lambda }_{QCD}`$, when the discrete symmetry is spontaneously broken. The resulting domain walls have thickness $`\mathrm{\Delta }=m_a^{-1}`$ and surface energy density $`\eta =m_av_{PQ}^2`$, where $`m_a`$ is the axion mass.
These domain walls are cosmologically disastrous and have to disappear so that they do not overclose the universe, unless N=1. One way to achieve this is to introduce a soft breaking term of the form $`\mu ^3\mathrm{\Phi }`$. Here we are considering the DFSZ axion model, and $`\mathrm{\Phi }`$ is a singlet under the standard gauge group. This would produce an effective value of $`\theta _{QCD}`$ of order $`\mu ^3/(m_a^2v_{PQ})`$. For this to be consistent with the upper limit on the electric dipole moment of the neutron we get
$$\mu ^3\lesssim 10^{-9}\frac{f_\pi ^2m_\pi ^2}{v_{PQ}}.$$
(2)
This soft breaking term would produce a shift in energy density among the degenerate vacuum, hence a pressure towards the domain with highest vacuum energy leading to annihilation of walls. It is also possible that the domain walls created may not survive the QCD phase transition since $`Z(N)`$ symmetry may be dynamically broken.
In this letter we argue that even for the brief period that they exist, these walls can produce a sufficient amount of baryons. The important ingredient that goes into our argument is the existence of sphaleron-like configurations in QCD; the rate of this topological transition is given by
$$\mathrm{\Gamma }_S=\kappa \alpha _s^4T^4.$$
(3)
In the above, $`\alpha _s`$ is the strong coupling constant and the proportionality constant $`\kappa `$ can be of order a thousand. This is the transition rate over the potential energy barrier separating vacua of different Chern-Simons number. But unlike the electroweak case, where the sphaleron transition is the source of baryon number violation, the QCD sphaleron does not induce any baryon number violation, since it has only parity-conserving vector couplings. By itself, this scenario therefore provides only a chiral charge separation mechanism. Baryogenesis will be achieved if, additionally, we have a nonvanishing chemical potential induced by some other mechanism. For instance it could be a background field effect as in the spontaneous baryogenesis scenario and its variants.
The CP violating phase that is needed for baryogenesis is nothing but the strong CP violating parameter $`\theta _{QCD}`$, which need not be zero at high temperature. The value of $`\theta `$ has to be decided by some stochastic process in a given horizon volume. The model we are discussing, where the domain wall has to disappear due to the explicit soft breaking term, has an effective $`\theta `$ that is consistent with the above experimental constraint and also ensures that the walls do not overclose the universe. We take the CP violating phase to be of order $`10^{-10}`$.
The final ingredient for baryogenesis is the departure from thermal equilibrium. In our scenario this is automatically achieved when the walls annihilate due to the difference in vacuum energy. The situation is similar to the model-independent pictures of defect-mediated baryogenesis. Whereas mere translational motion or long-lived defects cannot induce any net asymmetry, collapse and mutual annihilation can lead to the creation of a net asymmetry. Let $`V_{BG}`$ be the effective three-dimensional volume in which the time-irreversible processes occur during the disappearance of the walls. Then the net baryon number density is given by
$$\mathrm{\Delta }n_B=\frac{1}{V}\frac{\mathrm{\Gamma }_S}{T}V_{BG}\mathrm{\Delta }\theta ,$$
(4)
with V as the total volume.
As discussed earlier, the above formula needs to be supplemented by the contribution from a mechanism that converts the net chiral charge into baryonic charge. For example, one can consider an extra factor $`m_f/T`$, with $`m_f`$ the fermion mass, in evaluating the baryon number density. In the baryogenesis scenario where electroweak sphaleron transitions take place in the core of topological defects, this factor turns out to be of order one. In our case this factor can enhance the rate of baryon production, since the temperature we are interested in is of order the QCD scale. But at present we are not considering this factor, as we are presenting our picture in a qualitative way, and one has to see whether it enters our calculation or not. Then the baryon to entropy ratio in volume V is
$$\frac{\mathrm{\Delta }n_B}{s}=g_{*}^{-1}\kappa \alpha _s^4\mathrm{\Delta }\theta \frac{V_{BG}}{V}.$$
(5)
To evaluate the volume suppression factor, let us take the average separation of the domain walls as $`\xi (t)`$, which from the Kibble mechanism is
$$\xi (t)=T_c^{-1},$$
(6)
where $`T_c`$ is the temperature where Z(N) symmetry is spontaneously broken and is equal to $`(m_av_{PQ})^{1/2}`$. Then the volume occupied by the domain walls in a horizon size $`d_H(t)`$ is
$$V_{BG}=\xi (t)^2m_a^{-1}\left(\frac{d_H(t)}{\xi (t)}\right)^3.$$
(7)
The last factor is the number of domains in the horizon volume. With this the volume suppression factor turns out to be
$$\frac{V_{BG}}{V}=(m_\pi f_\pi )^{1/2}/m_a.$$
(8)
In the above we have used $`T_c=(m_av_{PQ})^{1/2}=(f_\pi m_\pi )^{1/2}`$. Since at the QCD scale the pion mass can go to zero, the above volume suppression factor can be of order unity for a suitable value of the axion mass. This aspect of the problem requires a detailed calculation in the specific axion model. But qualitatively, the thickness of the axion wall is inversely proportional to the axion mass. The mass of the axion due to the instanton effect at the QCD scale is
$$m_a(T)=0.1m_a(T=0)(\mathrm{\Lambda }_{QCD}/T)^{3.7}.$$
(9)
So it is possible that at temperatures just around the QCD scale the thickness of the wall is only a few orders of magnitude smaller than the horizon size, and $`V_{BG}/V`$ need not be a serious suppression factor.
Another crucial criterion that our picture satisfies is the fitting of the QCD sphaleron inside the axion domain wall, so that no modification of the bulk value of $`\mathrm{\Gamma }_S`$ is required. The size of the QCD sphaleron will be of order $`\mathrm{\Lambda }_{QCD}^{-1}`$, which is smaller than the wall thickness $`m_a^{-1}`$ for allowed values of the axion mass. So with $`\kappa `$ of order a thousand, a CP violating phase of order $`10^{-10}`$ and $`g_{*}`$ of order 10, we can produce a sufficient amount of baryons at the QCD scale.
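Combining the sphaleron rate of Eq. (3), the produced baryon density of Eq. (4), and the entropy density $`sg_{*}T^3`$ gives a rough numerical estimate of the asymmetry with the values quoted above; $`\alpha _s\approx 0.5`$ near the QCD scale and $`V_{BG}/V\approx 1`$ are assumptions made here for illustration.

```python
# Rough numerical estimate of the baryon-to-entropy ratio, using the
# values quoted in the text; alpha_s and V_BG/V are assumed, not derived.
kappa = 1e3        # QCD sphaleron rate prefactor (of order a thousand)
alpha_s = 0.5      # assumed strong coupling at T ~ Lambda_QCD
dtheta = 1e-10     # CP-violating phase (effective theta)
g_star = 10.0      # relativistic degrees of freedom at the QCD scale
v_ratio = 1.0      # V_BG / V, taken to be of order unity as argued above

nB_over_s = kappa * alpha_s**4 * dtheta * v_ratio / g_star
print(f"n_B/s ~ {nB_over_s:.1e}")
```

With these inputs the estimate comes out at a few times $`10^{-10}`$, comparable to the observed baryon-to-entropy ratio, which is the quantitative content of the closing claim above.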
Acknowledgements
This work was carried out as a part of a Department of Science and Technology project SP/S2/K-08/97.
# Further Search for the Decay 𝐾⁺→𝜋⁺𝜈𝜈̄
## Abstract
A search for additional evidence for the rare kaon decay $`K^+\to \pi ^+\nu \overline{\nu }`$ has been made with a new data set comparable in sensitivity to the previous exposure that produced a single event. No new events were found in the pion momentum region examined, $`211<P<229`$ MeV/$`c`$. Including a reanalysis of the original data set, the backgrounds were estimated to contribute $`0.08\pm 0.02`$ events. Based on one observed event, the new branching ratio is $`B`$($`K^+\to \pi ^+\nu \overline{\nu }`$)$`=1.5_{-1.2}^{+3.4}\times 10^{-10}`$.
Evidence for the decay $`K^+\to \pi ^+\nu \overline{\nu }`$ at a branching ratio of $`B`$($`K^+\to \pi ^+\nu \overline{\nu }`$) $`=4.2_{-3.5}^{+9.7}\times 10^{-10}`$ based on the observation of a single event has been reported by our group. In the Standard Model (SM) calculation of $`B`$($`K^+\to \pi ^+\nu \overline{\nu }`$), the dominant effects of the top quark in second order weak loops make this flavor-changing neutral current decay very sensitive to $`V_{td}`$, the coupling of the top to down quarks in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix. A fit based on the current phenomenology gives a prediction of $`(0.82\pm 0.32)\times 10^{-10}`$ for this branching ratio. If constraints from $`|V_{ub}/V_{cb}|`$ and $`\epsilon _K`$ are not imposed, a limit $`B(K^+\to \pi ^+\nu \overline{\nu })<1.67\times 10^{-10}`$ can be extracted that is almost entirely free of theoretical uncertainties. Although our initial observation is consistent with the SM prediction, the possibility of a larger-than-expected branching ratio gives further impetus for additional measurements. In this paper, we present results of a combined analysis of the 1995 sample and a new data sample of comparable sensitivity. All data were taken with the E787 apparatus at the Alternating Gradient Synchrotron (AGS) of Brookhaven National Laboratory.
In the decay $`K^+\to \pi ^+\nu \overline{\nu }`$ at rest, the $`\pi ^+`$ momentum endpoint is $`227`$ MeV/$`c`$. Definitive recognition of this signal requires that no other observable activity is present in the detector and all backgrounds are suppressed below the sensitivity for the signal. Major background sources include the two-body decays $`K^+\to \mu ^+\nu _\mu `$ ($`K_{\mu 2}`$) and $`K^+\to \pi ^+\pi ^0`$ ($`K_{\pi 2}`$), scattered pions in the beam, and $`K^+`$ charge exchange (CEX) reactions resulting in decays $`K_L^0\to \pi ^+l^{-}\overline{\nu }_l`$, where $`l=e`$ or $`\mu `$. The E787 detector was designed to effectively distinguish these backgrounds from the signal.
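The 227 MeV/$`c`$ endpoint is a simple kinematic consequence: the $`\pi ^+`$ momentum is maximal when the $`\nu \overline{\nu }`$ pair carries zero invariant mass, so the decay reduces to an effective two-body decay at rest.

```python
# Kinematic check of the pi+ momentum endpoint in K+ -> pi+ nu nubar at rest:
# p_max = (m_K^2 - m_pi^2) / (2 * m_K), the two-body momentum for a
# massless recoiling nu-nubar system.
m_K = 493.677    # MeV/c^2, charged kaon mass (PDG value)
m_pi = 139.570   # MeV/c^2, charged pion mass (PDG value)

p_max = (m_K**2 - m_pi**2) / (2.0 * m_K)   # MeV/c
print(f"pi+ endpoint momentum: {p_max:.1f} MeV/c")   # ~227
```

The same formula applied to $`K^+\to \mu ^+\nu _\mu `$ and $`K^+\to \pi ^+\pi ^0`$ gives the 236 and 205 MeV/$`c`$ monochromatic peaks that bracket the signal search window described below.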
In the new data sets, taken during the 1996 and 1997 runs of the AGS, kaons of about 700 MeV/$`c`$ were incident on the apparatus at a rate of (4–7)$`\times 10^6`$ per 1.6-s spill. The kaons were detected and identified by Čerenkov, tracking, and energy loss ($`dE/dx`$) counters. About 25% of the incident kaons reached an active target, primarily consisting of 413 5-mm square scintillating fibers which were used for kaon and pion tracking. Measurements of the momentum ($`P`$), range ($`R`$) and kinetic energy ($`E`$) of charged decay products were made using the target, a central drift chamber, and a cylindrical range stack with 21 layers of plastic scintillator and two layers of straw chambers (RSSC’s), all confined within a 1-T solenoidal magnetic field. The $`\pi ^+\to \mu ^+\to e^+`$ decay sequence of the decay products in the range stack was observed using 500-MHz transient digitizers. Photons were detected in a $`4\pi `$-sr calorimeter consisting of a 14-radiation-length-thick barrel detector made of lead/scintillator sandwich and 13.5-radiation-length-thick endcaps of undoped CsI crystals. In comparison with Ref. , the newer data were taken at a lower $`K^+`$ momentum to reduce accidental hits, and improvements were made to the trigger and data acquisition systems to take data more efficiently.
The data were analyzed with the goal of reducing the total expected background to significantly less than one event in the combined data sample. A decay particle was positively identified as a $`\pi ^+`$ using $`P`$, $`R`$ and $`E`$, and by the $`\pi ^+\to \mu ^+\to e^+`$ decay sequence. Events associated with any other decay products including photons or with beam particles were efficiently eliminated by utilizing the detector’s full coverage of the $`4\pi `$ solid angle. The requirements of a clean hit pattern in the target and a delayed decay at least 2 ns after an identified $`K^+`$ suppressed background events due to CEX and scattered beam $`\pi ^+`$. In this work, the search for $`K^+\to \pi ^+\nu \overline{\nu }`$ events was restricted to the measured momentum region $`211<P_{\pi ^+}<229`$ MeV/$`c`$, between the $`K_{\mu 2}`$ and $`K_{\pi 2}`$ peaks, to further limit backgrounds.
Compared to the analysis of Ref. , improvements in the kinematic reconstruction routines were made to reduce the tails of the $`P`$, $`R`$ and $`E`$ resolution functions. In the new analysis, the position of the incident kaon at the last beam counter was used to aid in the determination of the correct stopping position of the kaon in the target, and thus to reduce the uncertainty in the pion range masked by the kaon track. Accidental hits that might have been included in the pion energy measurements were identified and removed more efficiently. The measurement of the $`z`$ component (the direction of the symmetry axis of the detector) of the pion track in the range stack was improved by using only the projection of the drift chamber fit and the end-to-end timing in the range stack scintillators, but excluding the $`z`$ information from the RSSC’s, which had a long resolution tail. These changes resulted in some shifts in the kinematic values found for individual events, but the average quantities stayed the same. In addition, improvements were made in the particle identification criteria, particularly in the measurements of the $`\pi ^+\to \mu ^+\to e^+`$ decay sequence.
Overall optimization of the signal selection and background rejection criteria resulted in roughly a factor of two reduction of the expected backgrounds per kaon decay and an increase of 25% in the acceptance for $`K^+\to \pi ^+\nu \overline{\nu }`$ in the 1995 sample. For the entire 1995–1997 exposure, the numbers of background events expected from the sources mentioned above were $`b_{K_{\mu 2}}=0.03\pm 0.01`$, $`b_{K_{\pi 2}}=0.02\pm 0.01`$, $`b_{Beam}=0.02\pm 0.02`$ and $`b_{CEX}=0.01\pm 0.01`$. In total, the background level anticipated with the final analysis cuts was $`b=0.08\pm 0.02`$ events. Tests of the background estimates near the signal region confirmed the expectations. The acceptance for $`K^+\to \pi ^+\nu \overline{\nu }`$, $`A=0.0021\pm 0.0001^{stat}\pm 0.0002^{syst}`$, was derived from the factors given in Table I . The estimated systematic uncertainty in the acceptance of about $`10\%`$ was due mostly to the uncertainty in pion-nucleus interactions.
Analysis of the full data set yielded only the single event previously reported. This result is shown in Fig. 1, the range (in equivalent cm of scintillator) vs. kinetic energy plot of events surviving all other cuts. The revised kinematic values of the observed event are $`P=218.2\pm 2.7`$ MeV/$`c`$, $`R=34.7\pm 1.2`$ cm and $`E=117.7\pm 3.5`$ MeV . Based on one observed event, the acceptance $`A`$ and the total exposure of $`N_{K^+}=3.2\times 10^{12}`$ kaons entering the target, the new value for the branching ratio is $`B(K^+\to \pi ^+\nu \overline{\nu })=1.5_{-1.2}^{+3.4}\times 10^{-10}`$.
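As a quick arithmetic cross-check of the quoted central value (a sketch with illustrative variable names, not code from the analysis):

```python
# Back-of-the-envelope check: one observed event, acceptance A = 0.0021
# and exposure N_K+ = 3.2e12 kaons give the quoted central value.
n_obs = 1
acceptance = 0.0021
n_kaons = 3.2e12

branching_ratio = n_obs / (acceptance * n_kaons)
print(f"B = {branching_ratio:.2e}")  # ~1.5e-10, as quoted
```

The (highly asymmetric) single-event Poisson errors quoted above are not reproduced by this one-line estimate.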
Using the relations given in Ref. and varying each of the input parameters within the limits given therein, the present result provides a constraint, $`0.002<|V_{td}|<0.04`$. The extraction of these limits requires knowledge of $`V_{cb}`$ and the assumption of CKM unitarity. Alternatively, one can extract corresponding limits on the quantity $`|\lambda _t|`$ ($`\lambda _t\equiv V_{ts}^{*}V_{td}`$): $`1.07\times 10^{-4}<|\lambda _t|<1.39\times 10^{-3}`$, without reference to the $`B`$-decay system. In addition, the limits $`-1.10\times 10^{-3}<`$ Re$`(\lambda _t)<1.39\times 10^{-3}`$ and Im$`(\lambda _t)<1.22\times 10^{-3}`$ can be obtained from our result. The latter is of particular interest because Im($`\lambda _t`$) is proportional to the Jarlskog invariant and thus to the area of the unitarity triangle. Our result bounds this quantity without reference to the $`B`$-decay system or to measurements of CP violation in $`K_L^0\to \pi \pi `$ decays.
The limit found in the search for decays of the form $`K^+\to \pi ^+X^0`$, where $`X^0`$ is a neutral weakly interacting massless particle , is $`B(K^+\to \pi ^+X^0)<1.1\times 10^{-10}`$ (90% CL), based on zero events observed in a $`\pm 2\sigma `$ region around the pion kinematic endpoint.
###### Acknowledgements.
We gratefully acknowledge the dedicated effort of the technical staff supporting this experiment and of the Brookhaven AGS Department. We thank A. J. Buras for useful discussions. This research was supported in part by the U.S. Department of Energy under Contracts No. DE-AC02-98CH10886, W-7405-ENG-36, and grant DE-FG02-91ER40671, by the Ministry of Education, Science, Sports and Culture of Japan through the Japan-U.S. Cooperative Research Program in High Energy Physics and under the Grant-in-Aids for Scientific Research, for Encouragement of Young Scientists and for JSPS Fellows, and by the Natural Sciences and Engineering Research Council and the National Research Council of Canada.
# The Physical Significance of Confidence Intervals
## I Introduction
The application of Frequentist statistics to problematic cases in frontier research received a fundamental contribution with the proposal of the Unified Approach by Feldman and Cousins . The Unified Approach uses Neyman’s method to construct a confidence belt that guarantees the derivation of confidence intervals with correct Frequentist coverage (see ). Furthermore, the Unified Approach yields an automatic transition from two-sided confidence intervals to upper limits in the case of negative results, preserving the property of correct Frequentist coverage. Several works have followed the Unified Approach paper, discussing alternative methods to construct the confidence belt. Hence, at present several Frequentist methods with interesting properties are available and under discussion. Their performance is also often compared with that of the Bayesian Theory (see Ref. ).
In this paper we define appropriate statistical quantities that can help decide which method is most suitable for obtaining confidence intervals with the desired level of physical significance (see also Ref. ). We also define the sensitivity of an experiment to a signal.
The physical significance of a confidence interval is its degree of reliability, which is very important, because scientists use confidence intervals provided by experiments or specialized compilations (for example, the Review of Particle Physics ) as inputs for their calculations. Unreliable confidence intervals may lead to wrong or hazardous conclusions.
In Section II we consider the expectation value of the upper limit in the absence of a signal (which we propose to call *“exclusion potential”*) and its standard deviation. In Sections III and IV we discuss the possibility of calculating the expectation value (*“upper and lower detection functions”*) and the standard deviation of the upper and lower limits of the confidence interval produced by an experiment in the presence of a signal, and we propose an appropriate definition of the *“sensitivity”* of an experiment to the signal searched for.
In this article we consider explicitly the case of a Poisson process with known background in the framework of Frequentist and Bayesian statistical theories, but similar considerations apply also to the case of a Gaussian distribution with boundary.
## II Exclusion potential
An important quantity introduced by Feldman and Cousins is the average upper limit for the signal $`\mu `$ that would be obtained by an ensemble of experiments if $`\mu =0`$ (in the case of a Poisson process with known background, “the average upper limit that would be obtained by an ensemble of experiments with the expected background and no true signal”). They called this quantity *“sensitivity”*, but we think that this name is quite misleading, because it gives the impression that this quantity represents the expected capability of the experiment to reveal a true signal <sup>*</sup><sup>*</sup>* For example, one can check on the on-line Webster Dictionary at http://www.m-w.com/dictionary that “sensitivity” is “the quality or state of being sensitive” and the adjective “sensitive” means “capable of being stimulated or excited by external agents (as light, gravity, or contact)”. In our case, since the background is known, it can be considered an “internal agent”, whereas the true signal is the “external agent” under investigation. From the physical point of view the sensitivity is related in many fields of application to the concept of minimum detectable signal, therefore it is a quantity defined for $`\mu >0`$, not when $`\mu =0`$. . Instead, the true signal is assumed to be absent. Hence, it is clear that the quantity under consideration does not represent the sensitivity of the experiment to the signal that is searched for, but it represents the expected upper limit for $`\mu `$ that will be obtained if there is no signal. Therefore, we propose to call this quantity *“exclusion potential”* The on-line Webster Dictionary at http://www.m-w.com/dictionary says that “potential” is “something that can develop or become actual”. , a name that we will use in the following. 
As a further justification of this name, we note that in the case of neutrino oscillation experiments the exclusion potential is associated with the so-called “exclusion curves” in the space of the neutrino mixing parameters.
In the case of a Poisson process with known background the exclusion potential is given by
$$\mu _{\mathrm{ep}}(b,\alpha )=\sum _{n=0}^{\mathrm{\infty }}\mu _{\mathrm{up}}(n,b,\alpha )P(n|\mu =0,b),$$
(1)
where $`n`$ is the number of counts, $`b`$ is the expected mean background, $`\mu `$ is the mean true signal, $`P(n|\mu ,b)`$ is the Poisson p.d.f. for the process and $`\mu _{\mathrm{up}}(n,b,\alpha )`$ is the upper limit of the $`100(1-\alpha )\%`$ confidence interval for $`\mu `$ corresponding to $`n`$ counts. The exclusion potential $`\mu _{\mathrm{ep}}`$ depends on the values of the upper limits $`\mu _{\mathrm{up}}(n,b,\alpha )`$, which are different in the different methods for calculating the confidence belt.
It is interesting to note that the above definition of exclusion potential can be extended to the results obtained with the Bayesian Theory (see, for example, ) if $`\mu _{\mathrm{up}}`$ is interpreted as the upper limit of the Bayesian credibility interval for $`\mu `$. In the following we will consider the Bayesian Theory assuming a flat prior and shortest credibility intervals for $`\mu `$. In this case the posterior p.d.f. for $`\mu `$ is
$$P(\mu |n,b)=(b+\mu )^ne^{-\mu }\left(n!\sum _{k=0}^{n}\frac{b^k}{k!}\right)^{-1},$$
(2)
and the probability (degree of belief) that the true value of $`\mu `$ lies in the range $`[\mu _1,\mu _2]`$ is given by
$$P(\mu \in [\mu _1,\mu _2]|n,b)=\left(e^{-\mu _1}\sum _{k=0}^{n}\frac{(b+\mu _1)^k}{k!}-e^{-\mu _2}\sum _{k=0}^{n}\frac{(b+\mu _2)^k}{k!}\right)\left(\sum _{k=0}^{n}\frac{b^k}{k!}\right)^{-1}.$$
(3)
The shortest $`100(1-\alpha )\%`$ credibility intervals $`[\mu _{\mathrm{low}},\mu _{\mathrm{up}}]`$ are obtained by choosing $`\mu _{\mathrm{low}}`$ and $`\mu _{\mathrm{up}}`$ such that $`P(\mu \in [\mu _{\mathrm{low}},\mu _{\mathrm{up}}]|n,b)=1-\alpha `$ and $`P(\mu _{\mathrm{low}}|n,b)=P(\mu _{\mathrm{up}}|n,b)`$ if possible (with $`\mu _{\mathrm{low}}\ge 0`$), or $`\mu _{\mathrm{low}}=0`$.
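For $`n<b`$ the posterior (2) is monotonically decreasing in $`\mu `$, so the shortest credibility interval has $`\mu _{\mathrm{low}}=0`$ and the upper limit follows from Eq. (3) alone. A minimal numerical sketch of that one-sided solve (our own illustration, valid only in this $`\mu _{\mathrm{low}}=0`$ regime; general shortest intervals require a two-sided search):

```python
import math

def partial_sum(mean, n):
    """Sum_{k=0}^{n} mean^k / k!  (the factor e^{-mean} is kept separate)."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= mean / k
        total += term
    return total

def bayes_upper_limit(n, b, cl=0.90):
    """Solve P(mu in [0, mu_up] | n, b) = cl with Eq. (3), assuming the
    shortest credibility interval has mu_low = 0 (true for n < b)."""
    target = (1.0 - cl) * partial_sum(b, n)  # e^{-mu} Sum (b+mu)^k/k! = target
    lo, hi = 0.0, 50.0
    for _ in range(100):                     # bisection; lhs decreases with mu
        mid = 0.5 * (lo + hi)
        if math.exp(-mid) * partial_sum(b + mid, n) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# n = 7 counts over an expected background b = 13, 90% credibility:
print(round(bayes_upper_limit(7, 13), 2))    # ~4.01, the value quoted later
```

For $`n=0`$ the solve reduces to $`e^{-\mu _{\mathrm{up}}}=\alpha `$, i.e. $`\mu _{\mathrm{up}}=\mathrm{ln}(1/\alpha )`$, a useful consistency check.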
The solid lines in Figs. 1 and 3 represent the exclusion potential as a function of the background in the interval $`0b20`$ for a confidence level $`90\%`$ ($`\alpha =0.10`$) obtained with the Unified Approach (Figs. 1A and 3A), with the Bayesian Ordering method The Bayesian Ordering method, as the Unified Approach, is a Frequentist method with correct coverage and automatic transition from two-sided confidence intervals to upper limits in the case of negative results. (Figs. 1B and 3B) and with the Bayesian Theory assuming a flat prior and shortest credibility intervals (Figs. 1C and 3C).
Feldman and Cousins suggested that in cases in which the measurement is less than the estimated mean background and a stringent upper bound on $`\mu `$ is inferred, the experimental collaboration should also report the exclusion potential (which they call “sensitivity”) of the experiment. This is also recommended by the Particle Data Group .
In practice, the comparison of the upper bound and the exclusion potential is used as an assessment of the reliability of the upper bound. However, one can notice that a simple comparison of the upper bound with the exclusion potential does not give any information unless a meaningful scale of comparison is given. We think that a meaningful scale is the possible fluctuation of the upper bound $`\mu _{\mathrm{up}}(n,b,\alpha )`$ in an ensemble of experiments with the expected background and no true signal. A quantification of this fluctuation is provided by the standard deviation $`\sigma _{\mathrm{ep}}(b,\alpha )`$ of the upper limit $`\mu _{\mathrm{up}}`$ calculated assuming $`\mu =0`$,
$$\sigma _{\mathrm{ep}}^2(b,\alpha )=\sum _{n=0}^{\mathrm{\infty }}\left[\mu _{\mathrm{up}}(n,b,\alpha )-\mu _{\mathrm{ep}}(b,\alpha )\right]^2P(n|\mu =0,b).$$
(4)
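Equations (1) and (4) are simply the mean and standard deviation of $`\mu _{\mathrm{up}}(n)`$ under the background-only Poisson distribution. A small sketch (ours; the placeholder upper-limit function stands in for whichever construction is adopted):

```python
import math

def poisson_pmf(n, mean):
    return math.exp(-mean) * mean ** n / math.factorial(n)

def exclusion_potential(mu_up, b, n_max=100):
    """Eqs. (1) and (4): Poisson-weighted mean and spread of any
    upper-limit function mu_up(n), with the true signal set to zero."""
    mean = sum(mu_up(n) * poisson_pmf(n, b) for n in range(n_max))
    var = sum((mu_up(n) - mean) ** 2 * poisson_pmf(n, b) for n in range(n_max))
    return mean, math.sqrt(var)

# Sanity check with the toy choice mu_up(n) = n: the weighted mean and
# spread must reduce to the Poisson moments b and sqrt(b).
mu_ep, sigma_ep = exclusion_potential(lambda n: float(n), b=13.0)
print(mu_ep, sigma_ep)   # ~13 and ~3.61
```

A real application would replace the toy function with the tabulated $`\mu _{\mathrm{up}}(n,b,\alpha )`$ of the chosen method.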
The shadowed regions delimited by the dashed lines in Fig. 1 represent the range $`[\mu _{\mathrm{ep}}-\sigma _{\mathrm{ep}},\mu _{\mathrm{ep}}+\sigma _{\mathrm{ep}}]`$ for $`\alpha =0.10`$ obtained with the Unified Approach (A), with the Bayesian Ordering method (B) and with the Bayesian Theory assuming a flat prior and shortest credibility intervals (C). The corresponding probability to obtain an upper bound in the interval $`[\mu _{\mathrm{ep}}-\sigma _{\mathrm{ep}},\mu _{\mathrm{ep}}+\sigma _{\mathrm{ep}}]`$ if $`\mu =0`$ is shown in the three upper figures 1a, 1b and 1c. One can see that, except for small values of the background $`b`$, the probability to obtain an upper bound in the interval $`[\mu _{\mathrm{ep}}-\sigma _{\mathrm{ep}},\mu _{\mathrm{ep}}+\sigma _{\mathrm{ep}}]`$ if $`\mu =0`$ is not far from 68% in the three considered methods. The probability curves in Fig. 1 have wild jumps because $`n`$ is an integer and $`\mu _{\mathrm{up}}`$ has discrete jumps as $`n`$ is varied.
As an illustration, in Fig. 2 we have plotted the probability of the possible values of the 90% CL upper bound $`\mu _{\mathrm{up}}`$ if $`\mu =0`$ and $`b=13`$. One can see that the upper bound $`\mu _{\mathrm{up}}`$ can assume only discrete values. The probability to obtain an upper bound in the interval $`[\mu _{\mathrm{ep}}-\sigma _{\mathrm{ep}},\mu _{\mathrm{ep}}+\sigma _{\mathrm{ep}}]`$, delimited by the dotted vertical lines in Fig. 2, has discrete jumps as $`b`$ is changed, depending on which possible values of the upper bound are included in the interval.
It is also possible to calculate the possible range of fluctuation of the upper bound $`\mu _{\mathrm{up}}`$ with a desired probability $`\gamma `$. The shadowed regions delimited by the dashed lines in Fig. 3 show the 90% width of the 90% CL upper limit, i.e. the possible range of fluctuation of the $`\mu _{\mathrm{up}}(n,b,\alpha =0.90)`$ with probability $`\gamma =0.90`$ as a function of $`b`$ if $`\mu =0`$. This range of fluctuation has been calculated in order to obtain a band as symmetric as possible around $`\mu _{\mathrm{ep}}`$. In practice this is done by a computer program that simultaneously decreases the lower limit $`\mu _{\mathrm{up}}^{(-)}(b,\alpha ,\gamma )`$ of the band (lower dashed lines in Fig. 3) and increases the upper limit $`\mu _{\mathrm{up}}^{(+)}(b,\alpha ,\gamma )`$ of the band (upper dashed lines in Fig. 3) until the condition
$$\sum _{n=n^{(-)}(b,\alpha ,\gamma )}^{n^{(+)}(b,\alpha ,\gamma )}P(n|\mu =0,b)\ge \gamma $$
(5)
is reached. Here $`n^{(-)}(b,\alpha ,\gamma )`$ and $`n^{(+)}(b,\alpha ,\gamma )`$ are the values of $`n`$ such that
$$\mu _{\mathrm{up}}(n^{(-)}(b,\alpha ,\gamma ),b,\alpha )=\mu _{\mathrm{up}}^{(-)}(b,\alpha ,\gamma ),\mu _{\mathrm{up}}(n^{(+)}(b,\alpha ,\gamma ),b,\alpha )=\mu _{\mathrm{up}}^{(+)}(b,\alpha ,\gamma ).$$
(6)
The inequality sign in Eq. (5) is needed because $`n`$ is an integer and in general it is not possible to obtain exactly the desired probability $`\gamma `$. This is also the reason for the fact that the dashed lines in Fig. 3 are not smooth. The dotted lines in Fig. 3 represent the lower limit for $`\mu _{\mathrm{up}}(n,b,\alpha =0.90)`$ as a function of $`b`$, that is obtained for $`n=0`$.
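One possible reading of this band construction, sketched with a toy upper-limit function ($`\mu _{\mathrm{up}}(n)=n`$, so that the machinery can be checked against plain Poisson probabilities rather than a real confidence belt). The symmetric-growth rule is our guess at "as symmetric as possible", not the authors' actual program:

```python
import math

def poisson_pmf(n, mean):
    return math.exp(-mean) * mean ** n / math.factorial(n)

def fluctuation_band(mu_up, b, gamma=0.90, n_max=100):
    """Grow [n_lo, n_hi] around the mean of mu_up(n) under Poisson(b)
    until Eq. (5) holds, expanding the side that keeps the band
    [mu_up(n_lo), mu_up(n_hi)] as symmetric as possible."""
    weights = [poisson_pmf(n, b) for n in range(n_max)]
    center = sum(mu_up(n) * w for n, w in enumerate(weights))
    n_lo = n_hi = min(range(n_max), key=lambda n: abs(mu_up(n) - center))
    prob = weights[n_lo]
    while prob < gamma:
        if n_lo > 0 and center - mu_up(n_lo - 1) <= mu_up(n_hi + 1) - center:
            n_lo -= 1
            prob += weights[n_lo]
        else:
            n_hi += 1
            prob += weights[n_hi]
    return mu_up(n_lo), mu_up(n_hi), prob

# Toy check: with mu_up(n) = n and b = 13 the 90% band is just a central
# Poisson interval, and the inequality in Eq. (5) makes prob overshoot gamma.
lo, hi, p = fluctuation_band(lambda n: float(n), b=13.0)
print(lo, hi, round(p, 3))   # 7.0 18.0 0.904
```

With a real $`\mu _{\mathrm{up}}(n)`$ the band edges become the discrete limit values rather than counts.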
Having analyzed the data using, for example, the Unified Approach and having obtained an upper bound $`\mu _{\mathrm{up}}`$ with an expected background $`b`$, one can judge from Fig. 3A whether the upper bound $`\mu _{\mathrm{up}}`$ is reasonable or the result of an unlikely statistical fluctuation. For example, the Heidelberg-Moscow double-beta decay experiment measured recently $`n=7`$ events with an expected background $`b=13`$. Using the Unified Approach they obtained $`\mu _{\mathrm{up}}=2.07`$ at 90% CL, with exclusion potential $`\mu _{\mathrm{ep}}=7.51`$. Looking at Fig. 3A one can see that the 90% lower limit for $`\mu _{\mathrm{up}}`$ assuming $`\mu =0`$ is $`\mu _{\mathrm{up}}^{(-)}=2.07`$, so the discrepancy between $`\mu _{\mathrm{up}}`$ and $`\mu _{\mathrm{ep}}`$ is just acceptable at the border of 10% probability. Using the Bayesian Ordering method they would have obtained $`\mu _{\mathrm{up}}(n=7,b=13,\alpha =0.10)=3.33`$, with exclusion potential $`\mu _{\mathrm{ep}}(b=13,\alpha =0.10)=8.11`$, and with the Bayesian Theory (with a flat prior and shortest credibility intervals) $`\mu _{\mathrm{up}}(n=7,b=13,\alpha =0.10)=4.01`$ and $`\mu _{\mathrm{ep}}(b=13,\alpha =0.10)=7.81`$.
Of course, one can obtain the same results using the measured value of $`n`$ and the Poisson p.d.f. $`P(n|\mu =0,b)`$ with the expected background $`b`$. The advantage of Figs. 1 and 3 is that one can easily transform $`\mu _{\mathrm{up}}`$ and $`\mu _{\mathrm{ep}}`$ in the physical quantity of interest, presenting the result only in terms of physical quantities. For example, in double-beta decay experiments $`\mu `$ is connected to the effective Majorana mass $`|m|`$ of the electron neutrino by the relation
$$|m|=\xi \frac{\sqrt{\mu }}{|\mathcal{M}|},$$
(7)
where $`\xi `$ is a constant with dimension of mass that depends on the decaying nucleus and $`\mathcal{M}`$ is the nuclear matrix element. In the case of the Heidelberg-Moscow double-beta decay experiment , the decaying nucleus is <sup>76</sup>Ge, with $`\xi =0.57\mathrm{eV}`$. Using the nuclear matrix element $`|\mathcal{M}|=2.80`$ calculated in , the 90% CL upper bound for the effective Majorana mass obtained with the Unified Approach is $`|m|_{\mathrm{up}}=0.29\mathrm{eV}`$, the corresponding exclusion potential is $`|m|_{\mathrm{ep}}=0.56\mathrm{eV}`$ and the 90% lower limit for the fluctuations of $`|m|_{\mathrm{up}}`$ (if $`|m|=0`$) is $`|m|_{\mathrm{up}}^{(-)}=0.29\mathrm{eV}`$. Using instead the Bayesian Ordering method, we obtain $`|m|_{\mathrm{up}}=0.37\mathrm{eV}`$, $`|m|_{\mathrm{ep}}=0.58\mathrm{eV}`$ and $`|m|_{\mathrm{up}}^{(-)}=0.37\mathrm{eV}`$, and with the Bayesian Theory (with a flat prior and shortest credibility intervals) we have $`|m|_{\mathrm{up}}=0.41\mathrm{eV}`$, $`|m|_{\mathrm{ep}}=0.57\mathrm{eV}`$ and $`|m|_{\mathrm{up}}^{(-)}=0.41\mathrm{eV}`$.
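As an illustrative one-line check of Eq. (7) with the constants quoted above for this nucleus:

```python
import math

def majorana_mass(mu, xi=0.57, matrix_element=2.80):
    """Effective Majorana mass in eV from a limit mu on the signal,
    via Eq. (7), with the values quoted in the text for this nucleus."""
    return xi * math.sqrt(mu) / matrix_element

# 90% CL upper limits on mu for n = 7, b = 13 in the three methods:
for method, mu_up in [("Unified Approach", 2.07),
                      ("Bayesian Ordering", 3.33),
                      ("Bayesian, flat prior", 4.01)]:
    print(f"{method}: |m|_up = {majorana_mass(mu_up):.2f} eV")
# prints 0.29, 0.37 and 0.41 eV, matching the quoted values
```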
Let us define the Pull of a null result as
$$\mathrm{Pull}(n,b,\alpha )=\frac{\mu _{\mathrm{up}}(n,b,\alpha )-\mu _{\mathrm{ep}}(b,\alpha )}{\sigma _{\mathrm{ep}}(b,\alpha )}.$$
(8)
If $`\mathrm{Pull}(n,b,\alpha )\gg 1`$, the experimental upper limit $`\mu _{\mathrm{up}}`$ is significantly weaker than the exclusion potential $`\mu _{\mathrm{ep}}`$ and may be considered as a weak indication that a signal may be present ($`\mu >0`$). On the other hand, if $`\mathrm{Pull}(n,b,\alpha )\ll -1`$ and there is no doubt on the value of the mean background $`b`$, it means that the experiment has experienced an unlikely low fluctuation of the background, and the resulting upper bound, which is significantly more stringent than the exclusion potential, is not reliable from a physical point of view and is likely to increase if the experiment is continued (a quite undesirable behavior), as shown by the example of the KARMEN experiment (see Fig. 4 of Ref. ). Moreover, a method that gives values of the Pull closer to zero produces upper bounds that are more reliable from a physical point of view.
For example, in the case of the Heidelberg-Moscow double-beta decay experiment , we have $`n=7`$, $`b=13`$ and $`\alpha =0.10`$, giving $`\sigma _{\mathrm{ep}}=3.91`$ and $`\mathrm{Pull}=-1.39`$ in the Unified Approach, $`\sigma _{\mathrm{ep}}=3.52`$ and $`\mathrm{Pull}=-1.36`$ with the Bayesian Ordering method, and $`\sigma _{\mathrm{ep}}=2.98`$ and $`\mathrm{Pull}=-1.27`$ in the Bayesian Theory with a flat prior and shortest credibility intervals. Hence, among the two Frequentist methods that we have considered, the upper bound obtained with Bayesian Ordering is slightly more reliable, from a physical point of view, than the one obtained with the Unified Approach. If one is willing to accept the Bayesian Theory, the corresponding upper bound is clearly the most reliable one from a physical point of view.
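These numbers follow directly from the definition (8); a trivial check (the input triplets are the values quoted above, already rounded, so the last digit of each Pull can shift by one unit):

```python
def pull(mu_up, mu_ep, sigma_ep):
    """Pull of a null result, Eq. (8)."""
    return (mu_up - mu_ep) / sigma_ep

# Heidelberg-Moscow case (n = 7, b = 13, alpha = 0.10):
print(pull(2.07, 7.51, 3.91))   # ~ -1.39  (Unified Approach)
print(pull(3.33, 8.11, 3.52))   # ~ -1.36  (Bayesian Ordering)
print(pull(4.01, 7.81, 2.98))   # ~ -1.27  (Bayesian Theory)
```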
Let us point out that the knowledge of the possible fluctuation of the upper bound $`\mu _{\mathrm{up}}`$ with respect to the exclusion potential $`\mu _{\mathrm{ep}}`$ can also help to *decide, before looking at the data, which is the most appropriate Frequentist method for the statistical analysis*.
This can be understood by comparing, for example, Figs. 3A and 3B, obtained with two Frequentist methods with correct coverage. One can see that the upper dashed lines in the two figures almost coincide and the exclusion potential is slightly lower in the Unified Approach, with a difference going from 0.30 for $`b=1`$ to 0.73 for $`b=19`$, a relative difference of 8–9%. On the other hand, the lower dashed lines and the dotted lines obtained with the Bayesian Ordering are significantly higher than those obtained with the Unified Approach. The difference between the lower dashed lines goes from 0.43 for $`b=1`$ to 1.40 for $`b=19`$, a relative difference going from 27% to 64%. The difference between the lowest possible values of $`\mu _{\mathrm{up}}`$ (dotted lines) is quite large: for $`b\gtrsim 4`$ the lowest possible value of $`\mu _{\mathrm{up}}`$ is about 0.8 in the Unified Approach and about 1.8 in the Bayesian Ordering method, more than twice as large!
Hence, if one does not want to risk having to present a very stringent limit, which would be statistically correct but physically misleading, in the case of observation of fewer events than the expected background, one can look at figures like Figs. 3A and 3B and the corresponding ones for other Frequentist methods and decide which method is most suitable. Let us emphasize that this choice *must be done before looking at the data*. If one chooses the statistical method on the basis of the data, the property of coverage is lost.
Furthermore, the exclusion potential of an experiment can be calculated and published before starting the experiment or before the data are known, in order to give an indication of the excluded region that will be obtained in the case of absence of a signal (or of a signal much smaller than the expected background). We think that it would be useful to publish, together with the exclusion potential, the standard deviation of the upper limit in the absence of a signal, in order to illustrate the possible fluctuations of the excluded region and to give, at the same time, a quantitative statement of the precision of the experiment.
## III Detection functions
The exclusion potential and the standard deviation of the upper limit in the absence of a signal are interesting quantities, but they give information only on the possible experimental result in the worst-case scenario, that in which the signal is absent or too small to be detectable. Usually researchers are more interested in finding positive signals. For example, they would like to know in advance the most likely outcome of the experiment if there is a true signal. In this case, we propose to calculate the *upper and lower detection functions*, $`\mu _+(\mu ,b,\alpha )`$ and $`\mu _{-}(\mu ,b,\alpha )`$, obtained by averaging the upper and lower limits $`\mu _{\mathrm{up}}(n,b,\alpha )`$ and $`\mu _{\mathrm{low}}(n,b,\alpha )`$ over $`n`$ with the Poisson p.d.f. $`P(n|\mu ,b)`$:
$$\mu _\pm (\mu ,b,\alpha )=\sum _{n=0}^{\mathrm{\infty }}\mu _{\genfrac{}{}{0pt}{}{\mathrm{up}}{\mathrm{low}}}(n,b,\alpha )P(n|\mu ,b).$$
(9)
The standard deviation of $`\mu _{\genfrac{}{}{0pt}{}{\mathrm{up}}{\mathrm{low}}}(n,b,\alpha )`$ is given by
$$\sigma _\pm ^2(\mu ,b,\alpha )=\sum _{n=0}^{\mathrm{\infty }}\left[\mu _{\genfrac{}{}{0pt}{}{\mathrm{up}}{\mathrm{low}}}(n,b,\alpha )-\mu _\pm (\mu ,b,\alpha )\right]^2P(n|\mu ,b).$$
(10)
In Fig. 4 we have plotted $`\mu _+`$ and $`\mu _{-}`$ (upper and lower solid lines, respectively) as functions of $`\mu `$ for $`\alpha =0.10`$ and $`b=13`$ in the Unified Approach (Fig. 4A), in the Bayesian Ordering method (Fig. 4B) and in the Bayesian Theory assuming a flat prior and shortest credibility intervals (Fig. 4C). The shadowed regions delimited by the dashed lines in Fig. 4 represent the bands $`\mu _+\pm \sigma _+`$ (upper band) and $`\mu _{-}\pm \sigma _{-}`$ (lower band). From Fig. 4 one can see that the average upper bound $`\mu _+`$ is almost identical in the three methods, but the range $`\mu _+\pm \sigma _+`$ for small values of $`\mu `$ is shortest in the Bayesian Theory and largest in the Unified Approach. The average lower bound $`\mu _{-}`$ and the range $`\mu _{-}\pm \sigma _{-}`$ are similar in the three methods, with the small difference that $`\mu _{-}-\sigma _{-}>0`$ for $`\mu \gtrsim 9`$ in the two Frequentist methods (Unified Approach and Bayesian Ordering) and $`\mu \gtrsim 10`$ in the Bayesian Theory. The three upper plots in Fig. 4 show the probability to find $`\mu _{\mathrm{low}}`$ in the interval $`\mu _{-}\pm \sigma _{-}`$ (solid lines) and the probability to find $`\mu _{\mathrm{up}}`$ in the interval $`\mu _+\pm \sigma _+`$ (dashed lines). The latter probability is high (larger than 80%) for small values of $`\mu `$, where the interval $`\mu _{-}\pm \sigma _{-}`$ includes zero, and stabilizes around 68% for higher values of $`\mu `$ (the fluctuations and discontinuities of the probability as a function of $`\mu `$ are due to the discreteness of $`n`$).
The detection functions and the standard deviations of the lower and upper bounds show the expected result and its possible fluctuations if the signal under measurement is not negligibly small. In the next section we present a definition of sensitivity of an experiment to a signal.
## IV Sensitivity to a signal
Often researchers would like to plan an experiment capable of revealing a signal whose value is indicated by previous measurements or predicted by theory. Hence, it is useful to define the *sensitivity of an experiment to a signal*.
Two probabilities must be involved in the definition of the sensitivity to a signal: the confidence level $`1-\alpha `$ of the confidence interval that represents the result of the experiment and the probability $`\lambda `$ to find a confidence interval with a positive lower bound.
We think that an appropriate definition of the $`100\lambda \%`$ sensitivity corresponding to a $`100(1-\alpha )\%`$ confidence level, $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$, of an experiment measuring a Poisson process with known background $`b`$ is *the value of $`\mu `$ for which there is a probability $`\lambda `$ to find a positive lower limit for $`\mu `$ with confidence level $`100(1-\alpha )\%`$*. Hence, we define $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$ through the equation<sup>§</sup><sup>§</sup>§ After the completion of this work, we have been informed that Hernandez, Nava and Rebecchi defined similar criteria in order to calculate “discovery limits” in prospective studies. Our Eq. (11) coincides with their Eq. (6) with $`\delta =1-\lambda `$, and our Eq. (12) corresponds to their Eq. (5) with $`ϵ=\alpha `$ in the case of Frequentist methods with correct coverage and automatic transition from two-sided confidence intervals to upper limits for a small number of counts.
$$\sum _{n\ge n_\mathrm{s}(b,\alpha )}P(n|\mu _\mathrm{s}(b,\alpha ,\lambda ),b)=\lambda ,$$
(11)
where $`n_\mathrm{s}(b,\alpha )`$ is the smallest integer such that
$$\mu _{\mathrm{low}}(n_\mathrm{s}(b,\alpha ),b,\alpha )>0.$$
(12)
Here $`\mu _{\mathrm{low}}(n,b,\alpha )`$ is the lower limit of the $`100(1-\alpha )\%`$ confidence interval (credibility interval in the Bayesian Theory) corresponding to the observation of $`n`$ events. In all Frequentist methods with correct coverage that guarantee an automatic transition from two-sided confidence intervals to upper limits for $`n\lesssim b`$ (as the Unified Approach and the Bayesian Ordering method ), the acceptance interval for $`\mu =0`$ starts at $`n_1(\mu =0,b,\alpha )=0`$ and ends at $`n_2(\mu =0,b,\alpha )`$, where $`n_2(\mu =0,b,\alpha )`$ is the smallest integer such that
$$\sum _{n=0}^{n_2(\mu =0,b,\alpha )}P(n|\mu =0,b)\ge 1-\alpha .$$
(13)
Then it is clear that
$$n_\mathrm{s}(b,\alpha )=n_2(\mu =0,b,\alpha )+1$$
(14)
is the smallest integer that satisfies Eq. (12).
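In this Frequentist case, Eqs. (11)–(14) reduce to a small root-finding problem: get $`n_\mathrm{s}`$ from the $`\mu =0`$ acceptance interval via Eqs. (13)–(14), then solve Eq. (11) for $`\mu _\mathrm{s}`$ by bisection. A sketch (ours, checked only for self-consistency; the values plotted in Fig. 5 come from the full constructions):

```python
import math

def poisson_cdf(n, mean):
    term = total = math.exp(-mean)
    for k in range(1, n + 1):
        term *= mean / k
        total += term
    return total

def n_signal(b, alpha):
    """n_s of Eqs. (13)-(14): one unit past the mu = 0 acceptance interval."""
    n = 0
    while poisson_cdf(n, b) < 1.0 - alpha:
        n += 1
    return n + 1

def sensitivity(b, alpha, lam):
    """mu_s of Eq. (11): solve P(n >= n_s | mu_s, b) = lam by bisection
    (the tail probability grows monotonically with the true signal mu)."""
    ns = n_signal(b, alpha)
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.0 - poisson_cdf(ns - 1, b + mid) < lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(n_signal(13.0, 0.10))           # n_s = 19 for b = 13, alpha = 0.10
print(sensitivity(13.0, 0.10, 0.50))
```

The jumps of $`n_\mathrm{s}(b,\alpha )`$ with $`b`$ are exactly what makes the curves in Fig. 5 non-smooth.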
Figure 5 shows the value of $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$ as a function of $`b`$ in Frequentist methods (solid lines) and in the Bayesian Theory with a flat prior and shortest credibility intervals (dashed lines) for $`1-\alpha =0.90,\mathrm{\hspace{0.17em}0.95},\mathrm{\hspace{0.17em}0.99}`$ and $`\lambda =0.50,\mathrm{\hspace{0.17em}0.90},\mathrm{\hspace{0.17em}0.99}`$. The lines are not smooth because of the discreteness of $`n_\mathrm{s}(b,\alpha )`$, which causes jumps of the solution of Eq. (11) as $`b`$ varies from one value to another with different $`n_\mathrm{s}(b,\alpha )`$.
The sensitivity $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$ provides useful information for the planning of an experiment with the purpose of exploring a range $`[\mu _{\mathrm{min}},\mu _{\mathrm{max}}]`$ of possible values of $`\mu `$ that could be inferred from the results of other experiments or from theory. In order to do this, the background in the experiment must be small enough that the sensitivity $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$ is smaller than $`\mu _{\mathrm{min}}`$. In this case the experiment will have probability $`1-\alpha `$ to obtain a correct result (i.e. a confidence interval that contains the true value of $`\mu `$) and a probability bigger than $`\lambda `$ to obtain a positive lower limit, i.e. to reveal a true signal, if $`\mu >\mu _\mathrm{s}(b,\alpha ,\lambda )`$. The probability that the experiment will reveal a true signal within a correct confidence interval is larger than the product $`(1-\alpha )\lambda `$ (if $`\mu >\mu _\mathrm{s}(b,\alpha ,\lambda )`$). Therefore, it is desirable to have both $`1-\alpha `$ and $`\lambda `$ large. In Fig. 5 we have chosen $`1-\alpha =0.90,\mathrm{\hspace{0.17em}0.95},\mathrm{\hspace{0.17em}0.99}`$, which are commonly used values for the confidence level, and $`\lambda =0.50,\mathrm{\hspace{0.17em}0.90},\mathrm{\hspace{0.17em}0.99}`$. We think that $`1-\alpha `$ should always be chosen large, preferably $`1-\alpha =0.99`$, because getting a correct result is the most important thing. As for $`\lambda `$, a large value is important in order to have good chances to reveal the signal (if it exists!). For example, having $`1-\alpha =0.99`$ and $`\lambda =0.99`$ gives a probability bigger than 98% to find a true signal within a correct confidence interval (if $`\mu >\mu _\mathrm{s}(b,\alpha ,\lambda )`$). On the other hand, for $`1-\alpha =0.90`$ and $`\lambda =0.50`$ ($`\lambda =0.90`$) the probability to find a true signal within a correct confidence interval can be as low as 45% (81%).
From Fig. 5 one can see that the sensitivity increases sub-linearly as the background increases. Since the background increases linearly with the time of data-taking, the sensitivity of the experiment increases sub-linearly as a function of data-taking time. Let us consider an experiment searching for a signal produced by a new process for which there is an indication from previous experiments or from theory. Since the signal, like the background, increases linearly with the time of data-taking, there is a time at which the signal becomes larger than the sensitivity, and this time provides an estimate of the data-taking time necessary to reveal the new process.
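To make the construction concrete, the sketch below computes a sensitivity of this kind for a Poisson process with known background, using only the standard library. The criterion for a positive lower limit (the observed count must reach the smallest $`n_\mathrm{s}`$ whose background-only tail probability is below $`\alpha `$) is our simplifying assumption for illustration, not necessarily the criterion defined by Eq. (11) of the text; $`\mu _\mathrm{s}`$ is then the smallest $`\mu `$ giving probability at least $`\lambda `$ of reaching that count.

```python
import math

def pois_sf(n, lam):
    """P(N >= n) for N ~ Poisson(lam)."""
    if n <= 0:
        return 1.0
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k)
              for k in range(n))
    return max(0.0, 1.0 - cdf)

def n_threshold(b, alpha):
    """Smallest count n whose background-only tail probability
    P(N >= n | b) falls below alpha (our positive-limit criterion)."""
    n = 0
    while pois_sf(n, b) >= alpha:
        n += 1
    return n

def sensitivity(b, alpha, lam, mu_hi=100.0, tol=1e-4):
    """Smallest mu such that P(N >= n_threshold | mu + b) >= lam,
    found by bisection (the tail probability grows with mu)."""
    ns = n_threshold(b, alpha)
    lo, hi = 0.0, mu_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pois_sf(ns, mid + b) >= lam:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

By construction the sensitivity grows with the background and with $`\lambda `$, reproducing the qualitative behaviour of Fig. 5.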
It is interesting to notice that the sensitivity for $`b=13`$, $`1-\alpha =0.90`$ is $`\mu _\mathrm{s}\simeq 7`$ for $`\lambda =0.50`$ and $`\mu _\mathrm{s}\simeq 13`$ for $`\lambda =0.90`$, in rough agreement with the lower bands in Fig. 4, which show that there is a good chance to find a lower limit for $`\mu `$ bigger than zero if $`\mu \gtrsim 10`$.
From this example it is clear that the sensitivity of an experiment is different from its exclusion potential. Proposals of new experiments searching for a signal should present the sensitivity as the most interesting characteristic of the experiment. The exclusion potential should also be presented, as an illustration of the potentiality of the experiment in the most unfortunate case of absence of a signal.
Let us remind the reader that the word “probability” has different definitions in the Frequentist and Bayesian theories. In the Bayesian theory “probability” is defined as “degree of belief”, whereas in the Frequentist theory it is defined as the ratio of the number of positive cases to the total number of trials in a large ensemble. The Frequentist definition avoids the need of subjective judgment, but it is not clear what its meaning is in the planning and realization of *one* experiment (or a few experiments). Whatever the meaning of Frequentist probability in this case, we think that it is comforting to see from Fig. 5 that the Frequentist and Bayesian values for $`\mu _\mathrm{s}(b,\alpha ,\lambda )`$ are quite close.
## V Conclusions
In conclusion, we have defined some quantities that may help to assess the reliability (physical significance) of the confidence intervals obtained with different methods. We have also appropriately defined the sensitivity of an experiment to a signal.
In Section II we have considered the quantity called “sensitivity” by Feldman and Cousins and we have argued that this name is not appropriate because this quantity does not represent the capability of an experiment to reveal a signal. We proposed to call this quantity *“exclusion potential”*.
Considering the case of a Poisson process with known background, we have shown how the exclusion potential and the standard deviation of the upper limit in the absence of a signal may help to choose the method that is more appropriate for obtaining reliable upper limits (Section II). We have also defined the Pull of a null result, which quantifies the reliability of an experimental upper limit. In Section III we have defined the upper and lower detection functions, which give the most likely outcome of an experiment if there is a signal. In Section IV we proposed an appropriate definition of the sensitivity of an experiment to a signal. These definitions apply to both Frequentist and Bayesian statistical theories and can be easily generalized to any process in which a quantity $`\mu `$ with known probability distribution is measured: the upper (lower) detection function is the average of the upper (lower) limit and the $`100\lambda \%`$ sensitivity is the lowest value of $`\mu `$ for which there is a probability $`\lambda `$ to find a positive lower limit.
We considered explicitly the case of a Poisson process with known background in the framework of Frequentist and Bayesian statistical theories, but similar considerations and conclusions apply also to the case of a Gaussian distribution with boundary.
# Use of DPOSS data to study globular cluster halos: an application to M~92
## 1 Introduction
The tidal radii of globular clusters (GCs) are important tools for understanding the complex interactions of GCs with the Galaxy. In fact, they have traditionally been used to study the mass distribution of the galactic halo (Innanen et al. Inn83 (1983)), or to deduce GC orbital parameters (Freeman & Norris Fre81 (1981); Djorgovski et al. DJ96 (1996)). Tidal radii have usually been estimated (only in a few cases directly measured) by fitting King models to cluster density profiles, which were rarely measured from the inner regions out to the tidal radius because of the nature of the photographic material, which prevented any measurement in the cluster center, and the small format of the first digital cameras. Only in the last few years has the advent of deep digitized sky surveys and wide-field digital detectors allowed us to deal with the overwhelming problem of contamination from field stars and to probe the outer regions of GCs directly (Grillmair et al. Gri95 (1995), hereafter G95; Zaggia et al. Zag95 (1995); Zaggia et al. Zag97 (1997); Lehman & Scholz LS97 (1997)). The study of tidal tails in galactic satellites is gaining interest for many applications related to the derivation of the galactic structure and potential, the formation and evolution of the galactic halo, as well as the dynamical evolution of the clusters themselves. Recent determinations of the proper motion of some globular clusters with HIPPARCOS have made it possible to estimate the orbital parameters of a good number of them (Dinescu et al. Din99 (1999)). This helps to clarify the nature and structure of tidal extensions in GCs.
In principle, the available tools to enhance cluster star counts against field stars rely on the color-magnitude diagram (CMD), proper motions, radial velocities, or a combination of the three techniques. The application of these techniques to GCs has led to the discovery that tidal or extra-tidal material is a common feature: Grillmair (Gri98 (1998)), for instance, reported the discovery of tidal tails in 16 out of 21 globular clusters. Interestingly, a signature of the presence of tidal tails has also been found in four GCs in M31 (Grillmair Gri96 (1996)). For galactic clusters, the discovery was made by using a selection in the CMD of cluster stars on catalogs extracted from digitized photographic datasets. The CMD selection technique is an economical and powerful method to detect GC tails, since it significantly decreases the number of background and/or foreground objects.
In order to test the feasibility of a survey of most GCs present in the Northern hemisphere, we applied the CMD technique to the galactic globular cluster M~92 (NGC 6341), with the aim of measuring the tidal radius and searching for the possible presence of extra-tidal material. We used plates from the Digitized Second Palomar Sky Survey (hereafter DPOSS), in the framework of the CRoNaRio (Caltech-Roma-Napoli-Rio de Janeiro) collaboration (Djorgovski et al. DJ97 (1997), Andreon et al. And97 (1997), Djorgovski et al. DJ99 (1999)). A previous account of this work was given in Zaggia et al. (Zag98 (1998)). This is the first of a series of papers dedicated to the subject –an ideal application for this kind of all-sky survey.
## 2 The Color$``$Magnitude Diagram
The material used in this work consists of the $`J`$ and $`F`$ DPOSS plates of field 278. For each band, we extracted from the whole digitized plate a sub-image (size: $`8032\times 8032`$ pixels), corresponding to an area of $`136^{\prime }\times 136^{\prime }`$, with a pixel size of $`1^{\prime \prime }`$, centered on M~92 at coordinates (Harris Har96 (1996)):
$$\alpha _{J2000}=17^h\mathrm{\hspace{0.33em}17}^m\mathrm{\hspace{0.33em}07.3}^s$$
$$\delta _{J2000}=+43^o\mathrm{\hspace{0.33em}08}\mathrm{}\mathrm{\hspace{0.33em}11.5}\mathrm{}$$
The two images were linearized by using a density-to-intensity (DtoI) calibration curve, provided by the sensitometric spots available on the DPOSS plates. The $`F`$ plate is contaminated by two very similar satellite tracks (alternatively, the two tracks may have been left by a high-altitude civil airplane) lying $`9^{\prime }`$ and $`13^{\prime }`$ from the cluster center and crossing the field in a South-East/North-West direction. The effect of these tracks can be seen as empty strips on the lower panel of Fig. 1. Other thin, fainter tracks and some galaxies are present on the same plate, but at larger distances from the cluster core region. We applied the CMD technique to datasets obtained with different astronomical packages, in order to test the reliability of object detection and photometry in crowded stellar fields. On the DPOSS plates containing M~92, we used both the SKICAT and DAOPHOT packages. SKICAT, written at Caltech (see Weir et al. 1995a , and refs. therein), is the standard software used by the CRoNaRio collaboration for the DPOSS plate processing and catalog construction. DAOPHOT is a well-tested program for stellar photometry, developed by Stetson (Ste87 (1987)), and widely used by stellar astronomers. In this work we have used DAOPHOT only to obtain aperture photometry, with APPHOT, of objects detected with the DAOFIND algorithm on the DPOSS plates.
### 2.1 The data set
The SKICAT output catalog contains only objects classified as stars in both filters. For each object, we used $`M_{\mathrm{Core}}`$ (the magnitude computed from the central nine pixels), because the other aperture magnitude is measured on an area far too large for crowded regions. The final SKICAT catalog consists of 108779 objects. Since SKICAT is optimized for the detection of faint galaxies, in the present case we needed to test its performance in crowded stellar fields, to ensure that it properly detected the stellar population around the cluster.
Thus, SKICAT has been compared to DAOPHOT, which is specifically designed for crowded-field stellar photometry and has been repeatedly tested in a variety of environments, including globular clusters. The DAOPHOT dataset was built using aperture photometry on the objects detected with DAOFIND. The threshold was set at $`3.5\sigma `$, similar to the one used by SKICAT. Aperture photometry was preferred to PSF fitting photometry, due to the large variability of the DPOSS point-spread function, which makes PSF photometry less accurate than aperture photometry. We used an aperture of $`1.69`$ pixels in radius, corresponding to an area of approximately 9 pixels, i.e. equivalent to the area used by FOCAS/SKICAT to compute $`M_{\mathrm{Core}}`$. Indeed, the advantages of using PSF fitting are more evident in the central and more crowded regions of the cluster, while we are mainly interested in the outskirts, where crowding is less dramatic. Thus, we adopted the results from the aperture photometry, and we refer to this dataset as the DAOFIND+PHOT dataset.
The total number of objects detected in the $`J`$ and $`F`$ plates is, respectively, 240138 and 253977. The larger number of objects detected by DAOFIND, compared to those from SKICAT, is mainly due to the better capacity of DAOFIND in detecting objects in the crowded regions of the core. In the case of DAOFIND, since the convolution kernel, which is set essentially by the pixel size and seeing value, is much smaller than in SKICAT, we also have objects measured near the satellite tracks.
The FOCAS/SKICAT and DAOFIND+PHOT aperture photometry were then compared, and the results are shown in Fig. 2, where the SKICAT aperture magnitude is plotted versus the difference between itself and $`M_{\mathrm{Core}}`$. The average difference is zero, with an error distribution typical of this kind of test, i.e. a fan-like shape with growing dispersion at fainter magnitudes. The distribution of F magnitudes in Fig. 2 clearly shows the effects of saturation at the bright end, and in both plots there are several outliers, owing to the field crowding. In fact, these outliers are much more concentrated in the inner $`12^{\prime }`$, where their density is 0.439 arcmin<sup>-2</sup>, than at larger distances from the center, where the density drops to 0.022 arcmin<sup>-2</sup>. These objects are mostly classified as non-stellar by SKICAT and DAOPHOT, since they are either foreground galaxies or, more often, unresolved multiple objects, and were rejected in the final catalogs. Their area is taken into account later on, when we compute the effective area of the annuli in the construction of the radial profile. The outliers show an asymmetric distribution, with SKICAT magnitudes being brighter at bright magnitudes and vice versa at fainter magnitudes. This is due to two reasons: in the case of large objects, SKICAT splits them into multiple entries, but keeps the $`M_{\mathrm{Core}}`$ value of the originally detected (big) object; at fainter magnitudes, where objects are small, $`M_{\mathrm{Core}}`$ is computed on a number of pixels less than 9, while the aperture photometry of the objects in the DAOFIND catalog is always computed on a circle of 1.69 pixels radius. However, the contribution of these outliers to the counts is far below 1 percent of the total, as can be seen from the histogram plotted on the right hand side of Fig. 2.
The above analysis shows that SKICAT catalogs are, after a suitable cleaning, usable “as they are” also for studies of moderately crowded stellar fields. We shall use the DPOSS-DAOPHOT dataset because it can better detect objects in highly crowded fields, which allows us to probe into the inner ($`200^{\prime \prime }<r<400^{\prime \prime }`$) regions of the cluster and merge our star-count profile with the published one of Trager et al. (Tra95 (1995), hereafter T95).
### 2.2 The color-magnitude diagram
In order to build the CMD of the cluster, individual catalogs were matched by adopting a searching radius of $`5^{\prime \prime }`$, and keeping the matched object with the smallest distance. The derived CMD is shown in Fig. 3 for two annular regions: the inner one, between $`5^{\prime }`$ and $`12^{\prime }`$ from the center (left-hand panel), and the outer one, referring to the background, between $`60^{\prime }`$ and $`67^{\prime }`$ (right-hand panel). The M~92 turn-off region, as well as part of the horizontal branch, are clearly visible. At bright magnitudes, the giant branch turns to the blue, due to plate saturation. At large angular distances from the center of M~92, most objects are galactic stars, with only a small contribution from the cluster.
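The catalog-matching step can be sketched as a brute-force nearest-neighbour search. The function below is an illustrative assumption (the actual pipeline code is not described in the text), with each catalog a list of (x, y, magnitude) tuples in common coordinates:

```python
import math

def match_catalogs(cat_j, cat_f, radius=5.0):
    # Coordinates and `radius` share the same units (arcsec in the text).
    matched = []
    for xj, yj, mj in cat_j:
        best, best_d = None, radius
        for xf, yf, mf in cat_f:
            d = math.hypot(xf - xj, yf - yj)
            if d <= best_d:          # keep the closest F counterpart
                best_d, best = d, mf
        if best is not None:
            matched.append((mj, best, mj - best))  # (J, F, J-F colour)
    return matched
```

For the catalog sizes quoted above, a production version would use a spatial index (e.g. a k-d tree) instead of the quadratic loop, but the matching rule is the same.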
For reducing the background/foreground field contribution, we used an approach similar to that of Grillmair et al. Gri95 (1995). First of all, we selected an annular region ($`200^{\prime \prime }<r<300^{\prime \prime }`$) around the cluster center to find the best fiducial CMD sequence of the cluster stars. Then, the CMD of this region was compared with the CMD of the field at a distance greater than $`1^{\circ }`$ from the cluster center. The two CMDs were normalized by their areas; we then binned the CMD and computed the $`S/N`$ ratio of each element, just as in G95 (their Eq. 2). Finally, we obtained the contour of the best CMD region by cutting at $`S/N\gtrsim 1`$. This contour is shown in Fig. 3, where a solid line marks the CMD region used to select the “bona fide” cluster stars as described above. We must say that this CMD selection is not aimed at finding all the stars in the cluster, but only at the best possible enhancement of the cluster stars with respect to the field stars. This is why the region of the sub-giant/giant branch is not included, since there the cluster stars are fewer than the field stars. By extracting objects at any distance from the center of the cluster, in the selected CMD region, the field contamination is reduced by a factor of $`7`$. In the absence of strong color gradients, the fraction of lost stars does not depend on the distance from the center.
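A minimal version of the bin-by-bin selection might look as follows. The $`S/N`$ estimator used here (excess of cluster-region counts over area-scaled field counts, divided by the combined Poisson noise) is our assumed form of Eq. 2 of G95, and the function name is hypothetical:

```python
import math

def cmd_mask(n_cluster, n_field, area_ratio, snr_cut=1.0):
    """Flag CMD bins where the cluster-region counts stand out
    against the area-normalized field counts.

    n_cluster, n_field: 2-D histograms (lists of lists) of counts
    in (colour, magnitude) bins; area_ratio = A_cluster / A_field.
    """
    mask = []
    for row_c, row_f in zip(n_cluster, n_field):
        mask_row = []
        for nc, nf in zip(row_c, row_f):
            signal = nc - area_ratio * nf
            noise = math.sqrt(nc + area_ratio ** 2 * nf)
            # empty bins (noise == 0) are never selected
            mask_row.append(noise > 0 and signal / noise >= snr_cut)
        mask.append(mask_row)
    return mask
```

“Bona fide” cluster stars are then those falling in bins where the mask is true, which is how the factor-of-7 reduction in field contamination quoted above would be realized.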
## 3 Extra-tidal excess in M~92
### 3.1 Radial density profile
As a first step, we built a 2-D star density map by binning the catalog in steps of $`1^{\prime }\times 1^{\prime }`$. Then, we fitted a polynomial surface to the background, selecting only the outermost regions of the studied area. The background correction is expressed in the same units as the 2-D surface density counts, and can be directly applied to the raw counts. A tilted plane was sufficient to interpolate the background star counts (SCs). Higher-order polynomials did not provide any substantial improvement over the adopted solution. We compared the fitted background with IRAS maps at 100 $`\mu `$m, but we did not find any direct sign of a correlation between the two. Rather, the direction of the tilt is consistent with the direction of the galactic center. Hence, the tilt of the background, which is however very small ($`0.01\mathrm{mag}/\mathrm{arcmin}`$), can be considered as due to the galactic gradient.
The cluster radial density profile was obtained from the background subtracted SC’s by counting stars in annuli of equal logarithmic steps. The uncorrected surface density profile (hereafter SDP) is expressed as:
$$SDP_i^{\mathrm{uncorr}}=-2.5log(N_{i,i+1}/A_{i,i+1})+const,$$
where $`N_{i,i+1}`$ indicates the number of objects in the annulus between $`r_i`$ and $`r_{i+1}`$, and $`A_{i,i+1}`$ the area of the annulus. The constant was determined by matching the profile with the published profile of T95 in the overlap range. The effective radius at each point of the profile is given by:
$$r_i^{\mathrm{eff}}=\sqrt{\frac{1}{2}\times (r_i^2+r_{i+1}^2)}.$$
The SDP must now be corrected for crowding. When dealing with photographic material, it is not possible to apply the widely used artificial-star technique employed with CCD data. Therefore, we used a procedure similar to the ones described in Lehman & Scholz (LS97 (1997)) and Garilli et al. (Gar99 (1999)): we estimated the area occupied by the objects in each radial annulus by selecting all the pixels brighter than the background noise level plus three sigma, and considered as virtually uncrowded the external annuli in which the (very small, $`0.5\%`$) percentage of filled area did not vary with the distance. The external region starts at $`1000^{\prime \prime }`$ from the cluster center, with a filling factor smaller than $`2\%`$. After correcting for the area covered by non-stellar objects, the ratio of unfilled to filled area gives the crowding correction. This correction was computed at the effective radii of the surface brightness profile (hereafter SBP) and smoothed using a spline function. The corrected surface brightness profile was then computed as:
$$SBP_i=SBP_i^{\mathrm{uncorr}}+2.5\times \mathrm{log}(1.0-frac)$$
where $`frac`$ is the crowding correction factor determined at the $`i`$-th point of the profile.
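The profile construction described above can be sketched as follows. This is a simplified illustration with our own variable names; we adopt the conventional negative sign in the magnitude conversion, and the crowding term follows the equation above:

```python
import math

def surface_brightness_profile(radii, r_min, r_max, n_annuli,
                               filled_frac, const=0.0):
    """Counts-based profile in equal logarithmic annuli.

    radii: distances of the selected stars from the cluster centre;
    filled_frac: crowding correction factor (fraction of each
    annulus covered by detected objects), one value per annulus.
    Returns (r_eff, sbp) lists following the equations in the text.
    """
    edges = [r_min * (r_max / r_min) ** (i / n_annuli)
             for i in range(n_annuli + 1)]
    r_eff, sbp = [], []
    for i in range(n_annuli):
        lo, hi = edges[i], edges[i + 1]
        n = sum(1 for r in radii if lo <= r < hi)
        if n == 0:
            continue                       # skip empty annuli
        area = math.pi * (hi ** 2 - lo ** 2)
        mu = -2.5 * math.log10(n / area) + const       # uncorrected
        mu += 2.5 * math.log10(1.0 - filled_frac[i])   # crowding term
        r_eff.append(math.sqrt(0.5 * (lo ** 2 + hi ** 2)))
        sbp.append(mu)
    return r_eff, sbp
```

A larger filled fraction makes the corrected point brighter (smaller magnitude), as expected when crowding hides part of the effective area.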
The crowding-corrected SBP of M~92 derived from DPOSS data is shown in Fig. 4 as filled dots, and the uncorrected counts as crosses. Table 1 lists the measured surface brightness profile. With a simple number-count normalization we joined our profile to the one (open circles) derived by T95, in order to extend the profile to the inner regions. We then fitted a single-mass King model to our profile. The fitted profile is drawn in Fig. 4 as a continuous line. Our value for the tidal radius, $`r_\mathrm{t}=740^{\prime \prime }`$, turned out to be similar to the value given in Brosche et al. (Bro99 (1999)), $`r_\mathrm{t}=802^{\prime \prime }`$, and slightly smaller than the one given in T95, $`r_\mathrm{t}=912^{\prime \prime }`$. As can be seen from the figure, the DPOSS data extend to larger radial distances than the T95 compilation and reveal the existence of a noticeable deviation from the isotropic King model derived from the direct fitting of the SBP. This deviation is a clear sign of the presence of extra-tidal material. We also tried fitting anisotropic King models to the SBP, but the fit was not as good as in the isotropic case.
At what level is this deviation significant? The determination of the tidal radius of a cluster is still a moot point. While fitting a King model to a cluster density profile, the determination of the tidal radius comes from a procedure where the overall profile is considered, and internal points weigh more than external ones. On the one hand, this is an advantage since the population near the limiting radius is a mix of bound stars and stars on the verge of being stripped from the cluster by the Galaxy tidal potential. On the other hand, the tidal radius obtained in this way can be a poor approximation of the real one. In the classical picture, and in the presence of negligible diffusion, the cluster is truncated at its tidal radius at perigalacticon (see Aguilar et al. Agu88 (1988)). Nevertheless, Lee & Ostriker (Lee87 (1987)) pointed out that mass loss is not instantaneous at the tidal radius, and, for a given tidal field, they expect a globular cluster to be more populated than the corresponding King model. Moreover, a globular cluster along its orbit also suffers from dynamical shocks, due to the crossing of the Galaxy disk and, in the case of eccentric orbits, to close passages near the bulge, giving rise to enhanced mass loss and, later on, to the destruction of the globular cluster itself. Gnedin & Ostriker (Gne97 (1997)) found that, after a gravitational shock, the cluster expands as a whole, as a consequence of internal heating. In this case, some stars move beyond the tidal radius but are not necessarily lost, and are still gravitationally bound to the cluster. This could explain observed tidal radii larger than expected for orbits with a small value of the perigalacticon. Brosche et al. (Bro99 (1999)) point out that the observed limiting radii are too large to be compatible with the perigalacticon $`r_\mathrm{t}`$, and suggest that the appropriate quantity to be considered is a proper average of the instantaneous tidal radii along the orbit. It can be seen from Fig. 4 that the cluster profile deviates from the superimposed King model before the estimated tidal radius, and has a break in the slope at $`r\sim 850^{\prime \prime }`$, after which the slope is constant. We shall come back to this point later.
In Fig. 5 we show the surface density profile, expressed in number of stars to allow a direct comparison with J99, and binned in order to smooth out the oscillations in the profile due to the small S/N ratio arising with small-sized annuli. J99 predict that stars stripped from a cluster, and forming a tidal stream, show a density profile described by a power law with exponent $`\gamma =1`$. We fitted a power law of the type $`\mathrm{\Sigma }(r)\propto r^{-\gamma }`$ to the extra-tidal profile. The best fit gives a value $`\gamma =0.85\pm 0.08`$ and is shown as a dashed line in Fig. 5. The errors on the profile points also include the background uncertainty, added in quadrature, so that the significance of the extra-tidal profile has been estimated in terms of the difference $`f_i-3\sigma _i`$, where $`f_i`$ is the number surface density profile at point $`i`$, and $`\sigma _i`$ its error, which includes the background and the signal Poissonian uncertainties. This quantity is positive for all the points except the outermost one. The fitted slope is consistent with the value proposed by J99 and in good agreement with literature values for other clusters (see G95 and Zaggia et al. Zag97 (1997)).
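The power-law index can be estimated by a straight-line fit in log–log space. The sketch below is an unweighted least-squares version; the actual fit would weight each point by its error:

```python
import math

def power_law_index(r, sigma):
    # Fit log10(sigma) = a + g * log10(r); for Sigma ∝ r^(-gamma)
    # the slope g equals -gamma.
    xs = [math.log10(v) for v in r]
    ys = [math.log10(v) for v in sigma]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return -sxy / sxx  # gamma
```

Applied to an exact power law the routine recovers the input exponent; on real binned counts the scatter of the points sets the quoted uncertainty.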
We then fitted an ellipse to the extra-tidal profile in order to derive a position angle of the tidal extension, and checked whether the profile in that direction differs from the one obtained along the minor axis of the fitted ellipse, to confirm that the extra-tidal material is a tail rather than a halo. The best-fitting ellipse, obtained from the “isophote” at the $`2\sigma `$ level above the background (approximately $`1.5r_\mathrm{t}`$ from the center of the cluster), turned out to have a very low ellipticity ($`e=0.05\pm 0.01`$ at P.A. $`=54^{\circ }\pm 15^{\circ }`$). We also measured the radial profiles along the major and minor axes, using an aperture angle of $`\pm 45^{\circ }`$, in order to enhance the S/N ratio of the counts. The two profiles turned out to be indistinguishable within our uncertainties. This result shows that the halo material has a significantly different shape from the internal part of the cluster, which shows an ellipticity of $`0.10\pm 0.01`$ at P.A. $`141^{\circ }\pm 1^{\circ }`$, as found by White & Shawl (Whi87 (1987)).
### 3.2 Surface density map
In the attempt to shed more light upon the presence and characteristics of the extra-tidal extension, we used the 2-D star-count map described at the beginning of the previous section. We applied a Gaussian smoothing algorithm to the map, in order to enhance the low spatial frequencies and cut out the high-frequency spatial variations, which contribute strongly to the noise. We smoothed the map using a Gaussian kernel of $`6^{\prime }`$. The resulting smoothed surface density map is shown in Fig. 6. Since the background absolute level is zero, the darkest gray levels indicate negative star counts. In this image, the probable tidal tail of M~92 (light-gray pixels around the cluster) is less prominent than in the radial density profile: this is because the data are not averaged in azimuth. On the map we have drawn three “isophotal” contours at 1, 2 and 3$`\sigma `$ over the background. The fitted tidal radius is marked as a thick circle, and the two arrows point toward the galactic center (long one) and in the direction of the measured proper motion (see Dinescu et al. Din99 (1999)). The tidal halo does not seem to have a preferred direction. A marginal sign of elongation is possibly visible along a direction almost orthogonal to that of the galactic center.
As pointed out in the previous section, if we build the profile along this direction and orthogonally to it, we do not derive clear signs of any difference in the star count profiles in one direction or the other, mainly because of the small number counts.
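The smoothing step can be sketched as a direct 2-D convolution with a truncated Gaussian kernel; the truncation at $`3\sigma `$ and the renormalization at the map edges are our own choices for illustration:

```python
import math

def gaussian_smooth(counts, sigma):
    """Smooth a 2-D star-count map (list of lists, one cell per
    spatial bin) with a normalized Gaussian kernel truncated at
    3*sigma; sigma is in bin units."""
    half = max(1, int(math.ceil(3 * sigma)))
    kernel = {}
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            kernel[(dy, dx)] = math.exp(-(dx * dx + dy * dy)
                                        / (2 * sigma * sigma))
    ny, nx = len(counts), len(counts[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            num = den = 0.0
            for (dy, dx), w in kernel.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx:
                    num += w * counts[yy][xx]
                    den += w   # renormalize near the map edges
            out[y][x] = num / den
    return out
```

A flat map is left unchanged by the renormalized kernel, while an isolated peak is spread out, which is exactly the low-frequency enhancement sought here.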
On the basis of these results, we can interpret the extra-tidal profile of M~92 as follows: at radii just beyond the tidal radius of the fitted King profile, the profile resembles a halo of stars –most likely still tied to the cluster or in the act of being stripped away. As the latter process is not instantaneous, these stars will still be orbiting near the cluster for some time. We cannot say whether this is due to heating caused by tidal shocks, or to ordinary evaporation: deep CCD photometry to study the mass function of the extra-tidal stars would give some indications on this phenomenon. At larger radii, the 1 $`\sigma `$ “isophote” shows a barely apparent elongation of the profile in the direction SW to NE, with some possible features extending approximately towards S and E. Although the significance is only at the 1 $`\sigma `$ level, these structures are visible and might be made up of stars escaping the cluster and forming a stream along the orbit. As pointed out in Meylan & Heggie (Mey97 (1997)), stars escape from the cluster through the Lagrangian points situated on the vector connecting the cluster with the center of the Galaxy, thus forming a two-sided lobe, which is then twisted by the Coriolis force. A clarifying picture of this effect is given in Fig. 3 of Johnston (Jon98 (1998)).
## 4 Summary and conclusions
We investigated the presence and significance of a tidal extension of the brightness profile of M~92. The main results of our study are:
1. The presence of an extra-tidal profile extending out to $`0.5^{\circ }`$ from the cluster center, at a significance level of $`3\sigma `$ out to $`r\sim 2000^{\prime \prime }`$. We found no strong evidence for a preferential direction of elongation of the profile. This may imply that we are detecting the extra-tidal halo of evaporating stars, which will later form a tidal stream. Moreover, the tidal tail might be compressed along the line of sight –see, for instance, Fig. 18 of G95. In fact, G95 point out that tidal tails extend over enormous distances ahead of and behind the cluster orbit, and the volume density is subject to the open-orbit analogue of Kepler’s third law: near apogalacticon, stars in the tidal tail undergo differential slowing-down, so that the tail converges upon the cluster. Actually, most models (e.g., Murali & Dubinsky Mur99 (1999)) predict that the extra-tidal material should continue to follow the cluster orbit and thus take the shape of an elongated tail, or a stream. Such a stream has already been revealed in dwarf spheroidal galaxies of the Local Group (Mateo et al. Mat98 (1998)), but whether a stream can also be visible in significantly smaller objects like globular clusters is currently a moot point.
2. By constructing the surface density map and performing a Gaussian smoothing, the low-frequency features are enhanced over the background. We find some marginal evidence for a possible elongation of the extra-tidal extension, based on a visual inspection of this map. This elongation may be aligned in a direction perpendicular to the Galactic center, although the significance of this result is low; additional observations will be required to settle the issue. A similar displacement is described in Fig. 3 of Johnston (Jon98 (1998)).
Finally, we want to stress the power of the DPOSS material for conducting this kind of program, either by using the standard output catalogs as they come out of the processing pipeline, or by a specific re-analysis of the digitized plate scans. In the future we will extend this study to most of the globular clusters present on the DPOSS plates.
###### Acknowledgements.
SGD acknowledges support from the Norris Foundation. We thank the whole POSS-II, DPOSS, and CRoNaRio teams for their efforts.
# Shape and Motion of Vortex Cores in Bi2Sr2CaCu2O8+δ
## I introduction
The study of the vortex phases in high temperature superconductors (HTS’s) has led both to theoretical predictions of several novel effects and to experiments accompanied by challenging interpretations. The reasons are multiple. First, the unconventional symmetry of the order parameter — most likely $`d_{x^2-y^2}`$ — leads to the presence of low-lying quasiparticle excitations near the gap nodes, which in turn has inspired the predictions of a nonlinear Meissner effect , of a $`\sqrt{H}`$ dependence of the density of states at the Fermi level near the vortex cores ($`N(0,𝐫)`$) , and of a four-fold symmetry of the vortices . However, the experimental evidence for the first two effects is still controversial , and scanning tunneling spectroscopy (STS) measurements on vortex cores have not shown any clear signature of a $`\sqrt{H}`$ dependence of $`N(0,𝐫)`$ . Concerning the four-fold symmetry, a tendency towards square vortices was found in previous measurements , but inhomogeneities make it difficult to be decisive about it. Interestingly, a four-fold symmetry has been observed around single zinc impurity atoms in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (BSCCO), which are of a smaller size than the vortex cores .
A second reason is related to the interaction of vortices with pinning centers, responsible for the rich vortex phase diagram of the HTS’s . The pinning of vortices is mainly due to local fluctuations of the oxygen concentration , and facilitated by their highly 2D “pancake” character. In BSCCO the areal density of oxygen vacancies per Cu-O double layer is in fact surprisingly large: $`10^{17}`$ m<sup>-2</sup> , corresponding to an average distance between the oxygen vacancies of the order of 10 Å. For BSCCO this vortex pinning results in the absence of any regular flux line lattice at high fields, as demonstrated by both neutron diffraction and STS experiments. Moreover, since the distance between the oxygen vacancies is of the order of the vortex core size, one may expect that not only the vortex distribution, but also the vortex core shape will be dominated by pinning effects, and not by intrinsic symmetries like that of the order parameter. A detailed understanding of the interaction of vortices with pinning centers will thus be important in explaining the lack of correspondence between theoretical predictions about the vortex shape and STS measurements.
Third, and again as a consequence of the anisotropy of the order parameter, low-energy quasiparticles are not truly localized in the vortex core (contrary to the situation in $`s`$-wave superconductors). For pure $`d`$-wave superconductors these quasiparticles should be able to escape along the nodes of the superconducting gap. Thus in the vortex core spectra one expects a broad zero-bias peak of spatially extended quasiparticle states. However, tunneling spectra of the vortex cores in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) showed two clearly separated quasiparticle energy levels, which were interpreted as a signature of localized states. In BSCCO two weak peaks have been observed in some vortex core spectra, suggesting a certain similarity to the behaviour in YBCO. Another important characteristic of HTS’s that follows from the STS studies mentioned above is the extremely small size of the vortex cores in these materials. The large energy separation between the localized quasiparticle states directly implies that the vortex cores in YBCO are of such a size that quantum effects dominate. This is even more true in BSCCO: not only are the in-plane dimensions of the vortex cores smaller than in YBCO, of the order of the interatomic distances, but due to the extreme anisotropy of the material their out-of-plane size is also strongly reduced. This highly quantized character of vortices in HTS’s is equally demonstrated by the non-vanishing magnetic relaxation rate in the limit of zero temperature, attributed to quantum tunneling of vortices through the energy barriers between subsequent pinning centers.
In this paper we present a detailed STS study of the shape of the vortices in BSCCO. We will show that this shape is influenced by inhomogeneities. The samples presented here, which we characterize as moderately homogeneous, are used to study the behaviour of vortex cores under these conditions. Apart from the vortex core shape, this also includes the evolution in time of the vortices. We will show that both effects can be related to tunneling of vortices between different pinning centers. This is another indication of the possible extreme quantum behaviour of vortex cores in HTS’s. A corollary of this paper is that only extremely homogeneous samples will show intrinsic shapes of vortex cores.
## II Experimental Details
The tunneling spectroscopy was carried out using a scanning tunneling microscope (STM) with an Ir tip mounted perpendicularly to the (001) surface of a BSCCO single crystal, grown by the floating zone method. The crystal was oxygen overdoped, with $`T_c=77`$ K, and had a superconducting transition width of 1 K (determined by an AC susceptibility measurement). We cleaved in situ, at a pressure $`<10^8`$ mbar, at room temperature, just before cooling down the STM with the sample. The sharpness of the STM tip was verified by making topographic images with atomic resolution. Tunneling current and sample bias voltage were typically 0.5 nA and 0.5 V, respectively. We performed the measurements at 4.2 K with a low temperature STM described in Ref. , and those at 2.5 K with a recently constructed <sup>3</sup>He STM . A magnetic field of 6 T parallel to the $`c`$-axis of the crystal was applied after having cooled down the sample. The measurements presented here were initiated 3 days after having switched on the field.
The $`dI/dV`$ spectra measured with the STM correspond to the quasiparticle local density of states (LDOS). In the superconducting state one observes two pronounced coherence peaks, centered around the Fermi level, at energies $`\pm \mathrm{\Delta }_p`$. The gap size $`\mathrm{\Delta }_p`$ varied from 30 to 50 meV. In the vortex cores the spectra are remarkably similar to those of the pseudogap in BSCCO measured above $`T_c`$, with a total disappearance of the coherence peak at negative bias, a slight increase of the zero bias conductivity, and a decrease and shift to higher energy of the coherence peak at positive bias. To map the vortex cores we define a gray scale using the quotient of the conductivity $`\sigma (V_p)=dI/dV(V_p)`$ at a negative sample voltage $`V_p=-\mathrm{\Delta }_p/e`$ and the zero bias conductivity $`\sigma (0)=dI/dV(0)`$. Thus we obtain spectroscopic images, where vortex cores appear as dark spots. Since we measure variations of the LDOS, which occur on a much smaller scale (the coherence length $`\xi `$) than the penetration depth $`\lambda `$, we can obtain vortex images at high fields. A tunneling spectrum is taken on a time scale of seconds; spectroscopic images typically take several hours (about 12 hours for the images of 100×100 nm<sup>2</sup> presented below). The images therefore necessarily reflect a time-averaged vortex density.
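The construction of these gray-scale maps amounts to taking, at each pixel, the ratio of two conductance channels. A minimal sketch with a hypothetical spectrum layout (the authors' actual data pipeline is not specified in this level of detail):

```python
def vortex_map(didv, voltages, v_p):
    """Build a spectroscopic vortex image from dI/dV spectra.

    didv     : nested list [row][col][channel] of conductance values
               (hypothetical stand-in for the measured spectra)
    voltages : list of sample bias values (V), one per channel
    v_p      : bias of the negative-bias coherence peak, V_p = -Delta_p/e

    Returns a map of sigma(V_p)/sigma(0). Vortex cores, where the
    negative-bias coherence peak vanishes and the zero-bias conductance
    rises slightly, appear as LOW values (dark spots in a gray scale).
    """
    # indices of the channels closest to V_p and to zero bias
    i_p = min(range(len(voltages)), key=lambda i: abs(voltages[i] - v_p))
    i_0 = min(range(len(voltages)), key=lambda i: abs(voltages[i]))
    return [[spec[i_p] / spec[i_0] for spec in row] for row in didv]
```

A superconducting pixel (strong peak at $`V_p`$) then maps to a high ratio, while a core pixel with a flat, slightly raised spectrum maps to a ratio near one.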
In all large-scale images we have suppressed short length-scale noise by averaging each point over a disk of radius $`20`$ Å. When zooming in to study the shape of individual vortices, we strictly used raw data. Further experimental details can be found in previous publications .
## III Results
### A Vortex Distribution
In Fig. 1 we show spectroscopic images of the surface of a BSCCO crystal, at different magnetic field strengths. The large dark structure, clearly visible at the right of Figs. 1(b) and (c), corresponds to a degraded region resulting from a large topographic structure, already observed in the topographic image of Fig. 1(a). The presence of this structure allows an exact position determination throughout the whole experimental run. As can be seen in Fig. 1, the number of vortices at 6 and at 2 T, in exactly the same region, scales very well with the total number of flux quanta ($`\mathrm{\Phi }_0`$) that one should expect at these field strengths. This clearly proves that the observed dark spots are directly related to vortex cores, and not to inhomogeneities, defects or any form of surface degradation. The large spot in the upper left corner of Fig. 1(c) forms an exception: it appeared after a sudden noise on the tunnel current while we were scanning at that position, showed semiconducting spectra (typical for degraded tunneling conditions) afterwards, and remained even after having set the external field to 0 T. One cannot exclude, however, that a vortex is pinned in this degraded zone. Finally, the size and density of the vortices are fully consistent with previous measurements.
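The flux-quantum bookkeeping behind this scaling check is simple arithmetic: the expected vortex count is $`N=BA/\mathrm{\Phi }_0`$. A short sketch, taking the 100×100 nm<sup>2</sup> image size quoted in the experimental section as the area (the exact counting windows used by the authors are not specified):

```python
PHI_0 = 2.067833848e-15  # magnetic flux quantum h/2e, in Wb

def expected_vortices(field_tesla, area_m2):
    """Number of flux quanta threading a given area at a given field."""
    return field_tesla * area_m2 / PHI_0

area = (100e-9) ** 2                 # 100 x 100 nm^2 spectroscopic image
n6 = expected_vortices(6.0, area)    # ~29 vortices expected at 6 T
n2 = expected_vortices(2.0, area)    # ~10 vortices expected at 2 T
```

The 6 T value is consistent with the $`26\pm 3`$ vortices counted later in the subsequent images of Fig. 3, and the counts at the two fields scale exactly as 3:1.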
Instead of a well ordered vortex lattice, one observes patches of various sizes and shapes scattered over the surface. This clearly indicates the disordered nature of the vortex phase in BSCCO at high fields, again consistent with previous STM studies and neutron scattering data, and stressing the importance of pinning for the vortex distribution.
### B Vortex Shapes
As a next step we increase the spatial resolution in order to investigate individual vortex cores. Some vortices appear with square shapes, but most vortices in this study have irregular shapes. Closer inspection of the tunneling spectra reveals small zones inside the vortex core that show superconducting behaviour. That is, when scanning through a vortex core one often observes (slightly suppressed) coherence peaks (Fig. 2(a)), typical for the superconducting state, at some spots inside the vortex core. The latter is generally characterized by the absence of these peaks. In some cases, the vortex cores are even truly split into several smaller elements (Fig. 2(b)), totally separated by small zones showing the rise of coherence peaks. This has been verified by measuring the full spectra along lines through the vortex core, as in Fig. 2(a).
The smaller elements of a split vortex core cannot be related to separate vortices: first, the vortex-vortex repulsion makes it highly improbable that several vortex cores are so close to each other; second, counting each of these elements as a flux quantum in Fig. 1(b) would give a total flux through the surface that is far too large compared to the applied field. One should note here that the magnetic size of a flux line is of the order of the penetration depth $`\lambda `$, two orders of magnitude larger than the vortex core splitting observed here.
### C Vortex Motion
With subsequent spectroscopic images like Fig. 1(b), one can also study the vortex distribution as a function of time. We expect the vortex motion to be practically negligible, since we allowed the vortices to stabilize for more than 3 days. However, in Fig. 3 one can see that many vortices still have not reached totally stable positions. Many of them stay at roughly the same positions over the time span of our measurement, but others move to neighboring positions. Five different cases of moving vortices are indicated by the ellipses and the rectangle in Fig. 3.
In the panels on the left side the precise intensity of each point is difficult to read out directly. In order to investigate more quantitatively the time evolution of the vortex distribution, from one frame to the next, we show, in the right part of Fig. 3, 3D representations of the area that is marked by the rectangle in the 2D spectroscopic images. They give an idea of the gray scale used in the 2D images, and provide a detailed picture of the movement of the vortex core in front, from the right in Fig. 3(a) to the left in Fig. 3(c). The vortex core at the back does not move, and serves as a reference for the intensity. We recall that the intensity, or height in the 3D images, is a measure of the LDOS, which in a vortex core is different from the superconducting DOS. It is most interesting to see what happens in Fig. 3(b): the (moving) vortex core is divided between two positions. Thus, the vortex core moves from one position to the other, passing through an intermediate state where the vortex splits up between the two positions. Note that these two positions do not correspond to two vortices. In fact, the split vortex is characterized by a lower intensity than the nearby (reference) vortex. This means that the coherence peak at negative voltage does not completely disappear, as it should if we had a complete and stable vortex at each of these positions. Note also that the density of vortices around the rectangular area on the left side in Fig. 3 would clearly be too high if we counted the mentioned positions and all positions in the ellipses as individual flux quanta. The split vortex discussed here is not a unique example. Similar behaviour can be found for several other vortex cores, as indicated by the ellipses in Fig. 3. This gradual change of position is in striking contrast to the STS observations of moving vortices in NbSe<sub>2</sub> and YBCO.
### D Temperature Dependence
We performed measurements both at 4.2 and at 2.5 K, on samples cut from the same batch of crystals. The data taken at 2.5 K (see also Fig. 2) are fully consistent with the work presented at 4.2 K. In Fig. 4 we provide a general view of the vortex cores at 2.5 K, including an analogue of the moving vortex core of Fig. 3. Though it is hard to obtain quantitative data, one can conclude that the vortex cores have roughly the same size and similar irregular shapes, and examples of split vortex cores can be easily found.
## IV Discussion
### A Experimental Considerations
The observation of such a highly irregular pattern of vortex cores, as presented above, requires a careful analysis of the experimental setup. However, the fact that, under exactly the same experimental conditions, the number of vortex cores scales with the magnetic field is direct proof of the absence of artificial or noise-related structures in the spectroscopic images. Furthermore, since topographic images showed atomic resolution, there is no doubt that the spatial resolution of the STM is largely sufficient for the analysis of vortex core shapes.
The stability of the magnetic field can be verified by counting the number of vortices in the subsequent images at 6 T (Fig. 3). Since, excluding the split vortices marked by the ellipses, this number is constant ($`26\pm 3`$), we can exclude any substantial long time-scale variation of the magnetic field. Some variation in the total black area from one image to the other can be related to the tunneling conditions: a little more noise on the tunnel current will give a relatively large increase of the small zero-bias conductance. Since we divide by the zero-bias conductance to obtain the spectroscopic images, this may lead to some small variations in the integrated black area of the images.
### B Delocalization
Keeping in mind the randomness of the vortex distribution at 6 T due to pinning of vortices, we now relate both the split vortex cores (Fig. 2) and the intermediate state between two positions (Fig. 3 and Fig. 4) to the same phenomenon: the vortex cores appear to be delocalized between different positions which correspond to pinning potential wells, and during the measurement hop back and forth with a frequency that is too high to be resolved in this experiment. According to this analysis not only the distribution, but also the observed shape of the vortex cores is strongly influenced by pinning.
The pinning sites most probably result from inhomogeneities in the oxygen doping, which are thought to be responsible for the variations of the gap size (see experimental details). The distance over which the vortices are split corresponds to the average spacing between oxygen vacancies ($`10`$–$`100`$ Å). We did not observe any sign of resonant states related to impurities, as in recent STM experiments on BSCCO. The driving forces causing vortex movements in Fig. 3 and Fig. 4 are most probably due to a slow variation of the pinning potential, resulting from the overall rearrangement of vortices.
The vortex delocalization and movement presented here can directly be connected to the vortex creep as measured in macroscopic experiments, like magnetic relaxation . The main difference, of course, is that we do not observe whole bundles of vortices moving over relatively large distances, but only single vortex cores that are displaced over distances much smaller than the penetration depth $`\lambda `$. That is, it will not be necessary to displace whole groups of vortices, many of which might be pinned much stronger than the delocalized vortices we observe. A second difference is the absence of a uniform direction of the movements in the STM images, most probably because the Lorentz driving forces have been reduced to an extremely small value (which also follows from the very gradual changes in Fig. 3 and Fig. 4).
### C Thermal Fluctuations versus Quantum Tunneling
Regarding now the mechanism responsible for the vortex delocalization, the main question is whether we are dealing with thermal fluctuations, or quantum tunneling between pinning potential wells. In fact magnetic relaxation measurements on BSCCO show a crossover temperature from thermal to quantum creep of $`2`$–$`5`$ K, which means that these STM measurements lie at the limit between the two regimes.
In the case of thermally induced motion, there is a finite probability for the vortex to jump over the energy barrier between the two potential wells. The vortex is continuously moving from one site to the other, with a frequency that is too high to be resolved by our measurements. In the case of quantum tunneling, the vortex is truly delocalized. That is, the vortex can tunnel through the barrier, and one observes a combination of two base states (i.e. positions), as in the textbook quantum example of the ammonia molecule. Thermal fluctuations will lead to a continuous dissipative motion between the two sites; quantum tunneling gives a dissipationless state in which the vortex is divided between two positions.
An instantaneous observation of several base states of a quantum object would be impossible, since each measurement implies a collapse of the quantum wave function into one state. However, the STM gives only time averaged images, and with the tunneling current in this experiment we typically detect one electron per nanosecond. If the vortex core relaxes back to its delocalized state on a time scale smaller than nanoseconds, the vortex can appear delocalized in the STM images. Moreover, it should be clear that the long time (about 12 hours) between the subsequent images in Fig. 3 and 4 has nothing to do with the vortex tunneling time; it is tunneling of the vortex that allows the intermediate state. The creep of vortices (either by quantum tunneling or by thermal fluctuations) is a slow phenomenon here. At a given region the pinning potential due to inhomogeneities and interactions with other vortices evolves on a time scale of hours, shifting the energetically most favorable position from one site to the other. However, the tunneling occurs much faster ($`<`$ns) than this slow potential evolution, creating the possibility of a superposition of both states (positions), each with a probability depending on the local value of the pinning potential. Following the analogy of the ammonia molecule: a moving vortex core corresponds to an ammonia molecule subjected to an external field (the "overall pinning potential") that is slowly changing. Initially, due to this external field the state with the hydrogen atom at the left is more favorable, then the field changes such that left and right are equally stable (the "split vortex"), and finally the right state is most probable: the hydrogen atom has moved from left to right on a long time scale, whereas the tunneling itself occurs on a much faster time scale.
Note that, since the change of the local pinning potential results from a rearrangement of all surrounding vortices (over a distance $`\lambda `$), the time scale of this change will still depend strongly on the (short) tunneling time itself.
In order to discuss the possibility of quantum tunneling, one should consider the tunneling time and the temperature dependence, which is negligible in our measurements. The importance of the tunneling time is two-fold: the tunneling rate is strongly dependent on the time needed to pass through the pinning barriers (an effect which will be extremely difficult to measure directly); and in our measurements the tunneling time must be faster than the probe response time (as explained in the previous paragraph). From collective creep theory one can get an order-of-magnitude estimate for the tunneling time $`t_c\sim (\hbar /U)(S_E/\hbar )\sim 10^{-11}`$ s, with the Euclidean action for tunneling $`S_E/\hbar \sim 10^2`$ and the effective pinning energy $`U\sim 10^2`$ K derived from magnetic relaxation measurements. This is clearly below the upper limit of 1 ns set by the probe response time.
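The order-of-magnitude arithmetic behind this estimate can be reproduced directly; only the quoted values, $`U\sim 10^2`$ K and $`S_E/\hbar \sim 10^2`$, enter:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J s
K_B = 1.380649e-23       # Boltzmann constant, J / K

def tunneling_time(U_kelvin, action_over_hbar):
    """Collective-creep estimate t_c ~ (hbar / U) * (S_E / hbar).

    U is given in kelvin and converted to joules via k_B.
    """
    return HBAR / (K_B * U_kelvin) * action_over_hbar

t_c = tunneling_time(1e2, 1e2)   # ~8e-12 s, i.e. of order 1e-11 s
```

With these inputs the estimate indeed falls about two orders of magnitude below the 1 ns probe response limit.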
For a discussion of the implications of the temperature independence of quantum tunneling, one should first consider the temperature dependence expected for vortex movements that result from thermal fluctuations. Thermally induced hopping between different pinning sites should be proportional to $`\mathrm{exp}(-U/k_BT)`$, where $`U`$ is again the effective pinning energy. From magnetic relaxation measurements on BSCCO one can derive a value of about $`10`$–$`10^3`$ K for this quantity. Assuming for the moment that this $`U`$ determines the hopping of individual vortices, it should then be compared to the Euclidean action for quantum tunneling, which is estimated from magnetic relaxation measurements to be $`S_E/\hbar \sim 10^2`$, and plays a role like $`U/k_BT`$ in the Boltzmann distribution. For the measurements presented here, it is important to note again that they were taken more than 3 days after having increased the field from 0 to 6 T. Since for $`B=6`$ T the induced current density $`j`$ relaxes back to less than $`0.01`$ of its initial value in about 10 seconds, we are clearly in the limit where $`j`$ and thus the Lorentz driving forces (which reduce the energy barrier for vortex creep) approach zero. This means that the effective pinning potential $`U`$ rises, if not to infinity as in isotropic materials, to a value which in principle is much higher than the one that determines vortex creep in magnetic relaxation measurements at comparable field strengths. With nearly zero Lorentz forces the tilt of the overall pinning potential will thus be small compared to the pinning barriers, making thermal hopping over the barriers highly improbable (at low temperatures). Quantum creep, in the limit of vanishing dissipation, is independent of the collective aspect of $`U`$, while the probability for thermal creep decreases as $`\mathrm{exp}(-U/k_BT)`$.
So one can expect quantum creep to become more important than thermal creep when more time has passed after having changed the field. In other words, in spite of the fact that the tilt of the overall pinning potential is small compared to the pinning barriers, the vortices can still move a little, i.e. disappearing and reappearing elsewhere.
However, the collective $`U`$ may be higher than the pinning barriers for the individual vortex movements observed in our experiments. Thus, in order to find a lower bound for the latter, we also estimate $`U`$ for the moving vortices from our microscopic measurement. First we calculate the magnetic energy of a vortex due to the interaction with its nearest neighbors, using
$$E_{int}=d\frac{\mathrm{\Phi }_0^2}{8\pi ^2\lambda ^2}\underset{i}{}\{ln(\frac{\lambda }{r_i})+0.12\},$$
(1)
where $`d`$ is the length of the vortex segment, $`\mathrm{\Phi }_0`$ is the flux quantum, $`\lambda `$ the in-plane penetration depth and $`r_i`$ the distance to its $`i`$th neighbor. Parameters are conservatively chosen so as to give a true minimum estimate for $`E_{int}`$ (and thus for $`U`$): we restrict the out-of-plane extent of the vortices to zero and thus only take $`d=15`$ Å, the size of one double Cu-O layer ("pancake vortices"), and for $`\lambda `$ take the upper bound of different measurements, 2500 Å. Taking the vortex in Fig. 3(b), and determining the positions between which it is divided as well as the positions of the neighboring vortices, one can find the difference between the magnetic interaction energies of the delocalized vortex at its two positions. We obtain $`E_{int}\approx 120`$ K. Now the absence of any vortex lattice indicates that the pinning potential wells are generally larger than the magnetic energy difference between the subsequent vortex positions, and Fig. 3(b) reflects a vortex state that is quite common in our measurements (Fig. 1). Following these arguments one can safely assume that the effective potential well pinning the vortex in Fig. 3 is larger than this difference: $`U>E_{int}=120`$ K, in agreement with the estimates given above. So we obtain $`U/k_BT>10`$–$`10^2`$ for temperatures around 4 K. In the limit of zero dissipation $`S_E/\hbar \sim (k_F\xi )^2`$. On the basis of STS experiments this can be estimated to be $`\sim 10`$. This value is smaller than the one quoted above, and suggests that quantum tunneling is dominant in our measurements.
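As a cross-check, Eq. (1) can be evaluated numerically in CGS units with the stated parameters ($`d=15`$ Å, $`\lambda =2500`$ Å). The vortex and neighbor coordinates below are hypothetical, chosen only to mimic the roughly 200 Å vortex spacing at 6 T; they are not the measured positions, but they illustrate that a displacement of order 100 Å changes $`E_{int}`$ by roughly $`10^2`$ K, the scale quoted in the text:

```python
import math

PHI_0 = 2.067834e-7     # flux quantum, G cm^2 (CGS)
K_B = 1.380649e-16      # Boltzmann constant, erg / K

def e_int_kelvin(pos, neighbors, d_cm=15e-8, lam_cm=2500e-8):
    """Magnetic interaction energy of a pancake-vortex segment (Eq. 1), in K.

    pos, neighbors: (x, y) coordinates in cm; valid for r_i << lambda.
    """
    pref = d_cm * PHI_0**2 / (8 * math.pi**2 * lam_cm**2)   # erg per unit sum
    s = sum(math.log(lam_cm / math.hypot(pos[0] - x, pos[1] - y)) + 0.12
            for x, y in neighbors)
    return pref * s / K_B

A = 1e-8  # 1 Angstrom in cm
# hypothetical neighbors at ~200 A, as for the vortex density at 6 T
nb = [(200 * A, 0), (-180 * A, 60 * A), (30 * A, 210 * A), (-60 * A, -190 * A)]
dE = e_int_kelvin((120 * A, 0), nb) - e_int_kelvin((0, 0), nb)
```

With these numbers the prefactor $`d\mathrm{\Phi }_0^2/8\pi ^2\lambda ^2`$ comes out near 940 K per unit of the logarithmic sum, and the energy difference `dE` between the two hypothetical sites is a few tens of kelvin, consistent in order of magnitude with the $`\approx 120`$ K obtained from the measured geometry.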
The most direct evidence for quantum creep can be obtained from measurements at different temperatures. The hopping rate for thermally induced movements is given by $`\omega _0\mathrm{exp}(-U/k_BT)`$, where $`U`$ is the pinning potential, and $`\omega _0`$ the characteristic frequency of thermal vortex vibration. Assuming $`U=100`$ K, and a conservatively large estimate of $`\omega _0\sim 10^{11}`$ s<sup>-1</sup>, the hopping rate should drop from 1 s<sup>-1</sup> to 10<sup>-7</sup> s<sup>-1</sup> on cooling from 4.2 to 2.5 K. This gives a huge difference between the respective measurements at these temperatures. However, spectroscopic images at 4.2 and 2.5 K show the same pattern of moving and delocalized vortices. Following the same kind of estimations as above, the delocalized vortex at 2.5 K (Fig. 4) gave $`U>210`$ K, which makes thermal creep even more unlikely here. Even if the frequency of the individual thermal vortex movements were too high to be resolved by our measurements both at 4.2 and at 2.5 K (this would mean a rather unrealistic characteristic frequency $`\omega _0>10^{15}`$ s<sup>-1</sup>), one would still expect to see a difference. As a matter of fact, the driving force for the vortex movements results from an overall rearrangement of vortices. This means that the displacements of vortices will always depend on the hopping frequency, and that even for very high hopping rates one should observe a reduction of the number of vortices that are displaced in our images, when the hopping rate is reduced by a factor 10<sup>7</sup>.
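The quoted drop of the thermal hopping rate follows from the Arrhenius expression itself; a sketch using the values assumed in the text ($`U=100`$ K, $`\omega _0=10^{11}`$ s<sup>-1</sup>):

```python
import math

def hop_rate(U_kelvin, T_kelvin, omega0=1e11):
    """Thermally activated hopping rate omega0 * exp(-U / k_B T).

    U is expressed directly in kelvin, so U / (k_B T) reduces to U / T.
    """
    return omega0 * math.exp(-U_kelvin / T_kelvin)

r42 = hop_rate(100.0, 4.2)   # a few hops per second
r25 = hop_rate(100.0, 2.5)   # ~4e-7 hops per second
```

Cooling from 4.2 to 2.5 K suppresses the rate by about seven orders of magnitude, which is why unchanged vortex behaviour between the two temperatures argues against a purely thermal mechanism.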
## V Conclusion
We observed vortex cores that were delocalized over several pinning potential wells. Regardless of the exact mechanism (thermal hopping or quantum tunneling) responsible for this delocalization, our measurements show that pinning effects dominate not only the distribution of the vortex cores, but also their shape. As a consequence intrinsic (four-fold?) symmetries of the vortex cores will be obscured in microscopic measurements. The delocalization of the vortex cores implies that the vortex cores in this study appear larger than their actual — unperturbed — size, indicating a coherence length that is even smaller than was expected on the basis of previous studies.
The analysis given above strongly favors an interpretation in terms of quantum tunneling of vortex cores. This would not only mean the first microscopic signature of the vortex quantum tunneling as derived from magnetic relaxation measurements, it is also a further indication that objects of larger size and complexity than one or several atoms can appear as a superposition of different quantum states.
###### Acknowledgements.
This work was supported by the Swiss National Science Foundation.
# EVIDENCE FOR PRESSURE DRIVEN FLOWS AND TURBULENT DISSIPATION IN THE SERPENS NW CLUSTER
## 1 Introduction
Stars form through the collapse of dense cores within molecular clouds. The detection and measurement of the motions associated with such star forming collapse appear to be secure (see reviews by Evans 2000; Myers, Evans, & Ohashi 2000). The focus has primarily been on isolated, individual star forming regions since these are the least complex cases to understand both observationally and theoretically. However, the majority of stars form in clusters (Zinnecker, McCaughrean, & Wilking 1993) so a broader understanding of star formation, including such fundamental issues as the origin of the IMF and the formation of massive stars, requires the study of how stars form in groups.
Relative to an individual star, the deeper potential well of a stellar cluster should imply faster, and possibly easier to measure, inward motions, but observations are complicated by the generally greater distance to clusters than to individual star forming regions studied so far, and by the overpowering luminosity of massive stars that limit the ability to image lower mass neighbors and thus to obtain a complete view of cluster birth. For example, observations of the massive cluster forming region, W49N, have produced evidence for a global collapse of material onto the cluster as a whole (Welch et al. 1987) and, at higher resolution, for infall onto bright individual protostars (Zhang & Ho 1997). However, it has not been possible to explore the formation of more moderate mass stars in these objects because of dynamic range limitations.
In this paper, we present millimeter line and continuum observations of a cluster forming region in the Serpens molecular cloud. This region is well suited for an exploration into the formation processes of stellar clusters because it is nearby (d=310 pc; de Lara, Chavarria-K, & Lopez-Molina 1991) and contains a moderately dense embedded cluster (stellar density $`450`$ pc<sup>-3</sup>; Eiroa & Casali 1992) with many highly embedded millimeter wavelength continuum sources (Testi & Sargent 1998) but no O or B stars. In addition, Williams & Myers (1997) and Mardones (1998) have found signatures of widespread infall motions in this region.
The combination of proximity and low mass makes it possible to identify individual star forming condensations and examine the structure and dynamics of the cluster on a core by core basis. With this goal, we mapped the cluster in the optically thick CS(2–1) and thin N<sub>2</sub>H<sup>+</sup>(1–0) lines with the FCRAO and BIMA telescopes. Their different optical depths allow us to probe cloud velocities from an outer envelope to the center (Leung & Brown 1977). The two species are also formed by different chemical pathways with abundances that depend on environment and time (Bergin et al. 1997) and thus their relative intensities offer information on the physical conditions and chemical age of the cores. In an earlier paper (Williams & Myers 1999a) we reported the discovery of a starless core that appears to be collapsing based on an analysis of a small part of the maps. Here, we report on the full dataset and investigate the dynamics of the dense gas across the whole cluster. We find a number of new cores, some starless, others with continuum sources, suggesting that new stars are continually being added to the group. There are spectroscopic signatures of outflow, infall, and localized dissipation of turbulence throughout the cloud which offer important clues about the physical processes involved in cluster formation. The observations are outlined in §2 and the data displayed in §3. An analysis of the data follows in §4, and we discuss our findings in §5, concluding in §6.
## 2 Observations
Singledish maps of N<sub>2</sub>H<sup>+</sup>(1–0) (93.1762650 GHz, F<sub>1</sub>F $`=01\rightarrow 12`$) and CS(2–1) (97.980968 GHz) were made at the Five College Radio Astronomy Observatory (FCRAO; supported in part by the National Science Foundation under grant AST9420159 and operated with permission of the Metropolitan District Commission, Commonwealth of Massachusetts) 14 m telescope in December 1996 using the QUARRY 15 beam array receiver and the FAAS backend consisting of 15 autocorrelation spectrometers with 1024 channels set to an effective resolution of 24 kHz (0.06 km/s). The observations were taken in frequency switching mode and, after folding, 3rd order baselines were subtracted. The pointing and focus were checked every 3 hours on nearby SiO maser sources. The FWHM of the telescope beam is $`50^{\prime \prime }`$, and a map covering $`6^{}\times 8^{}`$ was made at Nyquist ($`25^{\prime \prime }`$) spacing. QUARRY was replaced with the SEQUOIA array in late 1997. This 16 element array is built with low noise MMIC based amplifiers and has much improved sensitivity. This enabled us to map the weak C<sup>34</sup>S(2–1) (96.412982 GHz) line using the same backends and observing technique in March 1998.
Observations were made with the 10 antenna Berkeley-Illinois-Maryland array (BIMA; operated by the University of California at Berkeley, the University of Illinois, and the University of Maryland, with support from the National Science Foundation) for two 8 hour tracks in each line during April 1997 (CS; C array) and October/November 1997 (N<sub>2</sub>H<sup>+</sup>; B and C array). A two field mosaic was made with phase center, $`\alpha (2000)=18^\mathrm{h}29^\mathrm{m}47.^\mathrm{s}5,\delta (2000)=01^{}16^{}51\stackrel{}{\mathrm{.}}4`$, centered close to S68N, and a second slightly overlapping pointing at $`\mathrm{\Delta }\alpha =33\stackrel{}{\mathrm{.}}0,\mathrm{\Delta }\delta =91\stackrel{}{\mathrm{.}}0`$, centered close to SMM1. Amplitude and phase were calibrated using 4 minute observations of 1751+096 (4.4 Jy) interleaved with each 22 minute integration on source. The correlator was configured with two sets of 256 channels at a bandwidth of 12.5 MHz (0.16 $`\mathrm{km}\mathrm{s}^{-1}`$ per channel) in each sideband and a total continuum bandwidth of 800 MHz.
The data were calibrated and maps produced using standard procedures in the MIRIAD package. The two pointings were calibrated together but inverted from the $`uv`$ plane individually to avoid aliasing since their centers are separated by more than one primary beam FWHM. The FCRAO data (after first scaling to common flux units using a gain of 43.7 Jy K<sup>-1</sup>) were then combined with the BIMA data using maximum entropy deconvolution. The final maps were produced by linearly mosaicking the two pointings in the $`xy`$ plane. The combination of the single dish and interferometer data results in maps that are fully sampled from the map size down to the resolution limit (i.e., there is no “missing flux”).
In addition, we obtained a map of continuum emission by summing over the line-free channels. This map showed a number of point sources corresponding to warm dusty envelopes around embedded protostars. A similar map was obtained, over the entire Serpens complex, using the OVRO interferometer by Testi & Sargent (1998). Our map was of lower sensitivity and in order to make a better comparison, we were awarded additional time from February to April 1999 to map the continuum at 110 GHz in a mosaic consisting of 3 fields overlapping at their primary beam FWHM over the same region.
The resolution of the final (naturally weighted) maps was $`8\stackrel{}{\mathrm{.}}1\times 5\stackrel{}{\mathrm{.}}6`$ at p.a. $`+2^{}`$ for the continuum, $`10\stackrel{}{\mathrm{.}}0\times 7\stackrel{}{\mathrm{.}}8`$ at p.a. $`-72^{}`$ for CS, and $`8\stackrel{}{\mathrm{.}}5\times 4\stackrel{}{\mathrm{.}}6`$ at p.a. $`+2^{}`$ for N<sub>2</sub>H<sup>+</sup>. Additional spectral line datasets were created by restoring to a common $`10^{\prime \prime }`$ (3100 AU) beam for analysis and comparison. The velocity resolution of these maps was 0.16 $`\mathrm{km}\mathrm{s}^{-1}`$.
## 3 Analysis
### 3.1 Continuum and integrated line maps
Maps of the continuum emission and integrated CS and N<sub>2</sub>H<sup>+</sup> line intensities are presented in Figure 1. The continuum map was obtained from the 3 field mosaic at 110 GHz (see above). The rms noise level was 1.0 mJy beam<sup>-1</sup> at the center, increasing toward the map edges (all maps are corrected for primary beam attenuation). Seven sources are labeled; S68N, SMM1 (also known as S68 FIRS1), SMM5, and SMM10 were mapped by Casali, Eiroa, & Duncan (1993) and Davis et al. (1999), and we have labeled the three others S68Nb through S68Nd because of their proximity to S68N. These seven are also present in Testi & Sargent (1998), although we do not confirm several other sources in their map that our data have sufficient coverage, sensitivity, and resolution to detect. Singledish observations with the 1.3 mm facility bolometer on the IRAM 30 m telescope should clarify the issue.
Source positions and fluxes are listed in Table 1. S68Nc may be a double or multiple source since it is highly elongated, but we could not distinguish more than one significant peak, so it is listed in Table 1 as a single source. SMM1, with a peak flux of 165 mJy beam<sup>-1</sup>, is considerably brighter than the other sources, and it proved problematic to clean the map completely of its sidelobes. Wright et al. (1999) pointed out that systematic errors, including incomplete $`uv`$ coverage, calibration uncertainties, and pointing errors, limit the fidelity of a mosaicked image to the true source brightness distribution to 1–2% at best. Such errors would lie above the level of the noise and therefore may be misinterpreted as detections. Both clean and maximum entropy deconvolution methods (with varying parameters, and source modeling and replacement) produced maps that all contained the seven labeled sources but also created elongated features in the vicinity of SMM1 that varied in position and flux from map to map. We therefore have high confidence in the labeled sources in Figure 1, but we believe the unlabeled features to the southwest of SMM1 to be artifacts of the data acquisition and reduction process and we disregard them. The problems associated with this map illustrate the difficulties in obtaining a complete view of cluster formation in more massive and luminous star forming environments.
There are also some image artifacts present in the line maps. The near-zero declination of the source resulted in strong north-south sidelobes even with the 45 baselines of the BIMA interferometer. As with the continuum map, these proved difficult to clean away completely and thus there may be some small sidelobe contamination from the brighter sources in each map. Based on experimenting with different deconvolution procedures, we estimate this uncertainty in line strengths to be $`20\%`$ in addition to the thermal noise and flux calibration uncertainties.
The N<sub>2</sub>H<sup>+</sup> and CS maps each show several condensations but present a very different appearance. The N<sub>2</sub>H<sup>+</sup> map follows the distribution of the continuum sources much more closely than the CS map which features a prominent starless core to the west of S68N (Williams & Myers 1999a). These differences between the N<sub>2</sub>H<sup>+</sup> and CS maps are probably due to a combination of differences in optical depth, protostellar outflows, and depletion. As we show below, the N<sub>2</sub>H<sup>+</sup> emission is optically thin but most CS spectra are self-absorbed and have considerable optical depth. Moreover, outflow wings are prominent in the CS line profiles around SMM1 and S68N, but are almost entirely absent from the N<sub>2</sub>H<sup>+</sup> spectra at the same positions. Finally, the time dependent chemical models of Bergin & Langer (1997) suggest that CS should deplete onto grains prior to star forming collapse but N<sub>2</sub>H<sup>+</sup> should remain in the gas phase even at high densities during the collapse phase. For these reasons, in the following subsections we determine the physical properties of the cluster forming gas and individual star forming cores from the N<sub>2</sub>H<sup>+</sup> data and measure motions between core envelopes and centers using the CS data.
### 3.2 N<sub>2</sub>H<sup>+</sup>(1–0)
The hyperfine structure of the $`J=10`$ transition of N<sub>2</sub>H<sup>+</sup> spreads out the emission into seven components (Caselli, Myers, & Thaddeus 1995), each with considerably lower intensity than a single component line would have, but with the benefit of also spreading out the optical depth so that individual components can be optically thin even when the sum of optical depths over all components might exceed unity. By fitting the seven hyperfine components simultaneously, we maximize the information in the data while taking advantage of the individual low optical depths to determine the systemic velocity and linewidth of the gas.
We fit the spectra using a function of the form,
$$T_\mathrm{B}(v)=\left[J(T_{\mathrm{ex}})-J(T_{\mathrm{bg}})\right]\left\{1-\mathrm{exp}\left[-\underset{i}{\sum }g_i\tau (v;v_i)\right]\right\},$$
$`(1)`$
where the measured brightness temperature $`T_\mathrm{B}`$ is a function of velocity $`v`$, $`T_{\mathrm{ex}}`$ is the excitation temperature, $`T_{\mathrm{bg}}=2.73`$ K is the cosmic background temperature, and the sum is over hyperfine components $`i=1,2,\mathrm{},7`$. For each component, $`g_i`$ is the statistical weight (Womack, Ziurys, & Wyckoff 1992) normalized so $`\sum _ig_i=1`$, and $`\tau (v;v_i)`$ is the total optical depth parameterized by $`v_i`$, the relative centroid velocity of each component (Caselli et al. 1995),
$$\tau (v;v_i)=\tau _0\mathrm{exp}[-(v-v_i-v_0)^2/2\sigma ^2].$$
$`(2)`$
Here $`\tau _0`$ is the peak optical depth (summed over all components), $`v_0`$ is the systemic velocity of the gas, and $`\sigma `$ is the velocity dispersion.
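Equations (1) and (2) are straightforward to implement; the sketch below is a minimal Python illustration, assuming rounded SI constants and taking the hyperfine weights $`g_i`$ and velocity offsets $`v_i`$ as inputs (the actual values are tabulated in Caselli et al. 1995):

```python
import numpy as np

NU = 93.1762650e9            # N2H+(1-0) rest frequency from the text, Hz
H, KB = 6.626e-34, 1.381e-23  # Planck and Boltzmann constants, SI

def J(T):
    """Radiation temperature J(T) = (h nu / k) / (exp(h nu / kT) - 1)."""
    x = H * NU / KB
    return x / np.expm1(x / T)

def tau_hf(v, tau0, v0, sigma, v_i):
    """Eq. (2): Gaussian optical-depth profile of one hyperfine component."""
    return tau0 * np.exp(-(v - v_i - v0) ** 2 / (2.0 * sigma ** 2))

def t_b(v, Tex, tau0, v0, sigma, v_i, g_i, Tbg=2.73):
    """Eq. (1): brightness temperature of the blended hyperfine spectrum."""
    tau_total = sum(g * tau_hf(v, tau0, v0, sigma, vi)
                    for g, vi in zip(g_i, v_i))
    return (J(Tex) - J(Tbg)) * (1.0 - np.exp(-tau_total))
```

In a fit, the four parameters $`T_{\mathrm{ex}},\tau _0,v_0`$, and $`\sigma `$ would be adjusted by a least-squares routine; as discussed in the text, only the velocity and dispersion are tightly constrained.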
Spectra were analyzed across the map after first restoring to a circular $`10^{\prime \prime }`$ FWHM beam and sampling on a regular $`10^{\prime \prime }`$ square grid. The fits to the spectra require four parameters, $`T_{\mathrm{ex}},\tau _0,v_0`$, and $`\sigma `$, but only the velocity and dispersion were tightly constrained by the data. The fitted excitation temperature and optical depth are determined from the intensities of the 7 hyperfine components, but the two are inversely correlated, resulting in a wide range of pairs that fit any given peak with only small changes in line shape that are indistinguishable at the moderate signal-to-noise ratios of the data. Moreover, there appear to be significant excitation anomalies analogous to those in the very low noise spectrum of Caselli et al. (1995). Because of the large uncertainties in $`T_{\mathrm{ex}}`$ and $`\tau `$, we do not discuss them further.
We also made three component, optically thin, gaussian fits, $`T_\mathrm{B}(v)=T_0\sum _ig_i\mathrm{exp}[-(v-v_i-v_0)^2/2\sigma ^2]`$, to the data. In most cases, the residuals were not significantly greater than those of the four parameter fit, demonstrating the degeneracy in $`T_{\mathrm{ex}}`$ and $`\tau `$. We also checked our fits with the hyperfine structure fitting routine in the CLASS data reduction package. All 3 methods show good agreement in the velocity and dispersion.
By adding synthetic noise to very low noise N<sub>2</sub>H<sup>+</sup> spectra, we tested the effect of varying signal-to-noise ratios on the fits. The systemic velocity could be accurately determined even in very noisy spectra, but the fitted linewidth systematically increased as the signal-to-noise ratio decreased. The effect was noticeable for peak signal-to-noise ratios less than 10, and became severe for ratios less than 5. In the following analysis, only spectra with peak signal-to-noise ratios greater than 5 are fit. For these spectra, we derived $`v_0`$ and $`\sigma `$ using the method described by equations (1) and (2).
The systemic velocity varies from 7.9 to 9.3 $`\mathrm{km}\mathrm{s}^{-1}`$ and, away from the outflow around S68N (see §3.3), there are no strong gradients. The dispersion displays a more interesting variation. Figure 2 plots the non-thermal velocity dispersion, $`\sigma _{\mathrm{NT}}=[\sigma ^2-\sigma _\mathrm{T}^2]^{1/2}`$, where $`\sigma _\mathrm{T}=0.075`$ $`\mathrm{km}\mathrm{s}^{-1}`$ is the thermal velocity dispersion for N<sub>2</sub>H<sup>+</sup> at $`T_{\mathrm{kin}}=20`$ K (Wolf-Chase et al. 1998). The greatest values, $`\sigma _{\mathrm{NT}}>0.6`$ $`\mathrm{km}\mathrm{s}^{-1}`$, occur in the cores containing the S68N and SMM1 sources, which both power strong outflows. The minimum $`\sigma _{\mathrm{NT}}`$ is 0.16 $`\mathrm{km}\mathrm{s}^{-1}`$, which is more than twice $`\sigma _\mathrm{T}`$ and shows that internal motions in the cores are predominantly turbulent. However, there are a number of regions where the turbulent velocity field drops to a local, confined, minimum. We identify eight such regions and label them Q1 through Q8 (for quiescent).
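The quadrature decomposition above is a one-line computation; a small sketch, where the 29 amu molecular mass of N<sub>2</sub>H<sup>+</sup> and the rounded physical constants are the only assumed inputs:

```python
import math

KB = 1.381e-23    # Boltzmann constant, J/K
AMU = 1.661e-27   # atomic mass unit, kg

def sigma_thermal(t_kin, mass_amu):
    """1-D thermal velocity dispersion sqrt(k T / m), returned in km/s."""
    return math.sqrt(KB * t_kin / (mass_amu * AMU)) / 1e3

def sigma_nonthermal(sigma_obs, t_kin=20.0, mass_amu=29.0):
    """sigma_NT = sqrt(sigma^2 - sigma_T^2) for N2H+ (29 amu) at T_kin."""
    st = sigma_thermal(t_kin, mass_amu)
    return math.sqrt(sigma_obs ** 2 - st ** 2)
```

Here `sigma_thermal(20.0, 29.0)` recovers the 0.075 $`\mathrm{km}\mathrm{s}^{-1}`$ quoted above.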
Three of the quiescent regions, Q1, Q2, and Q8, are coincident with peaks of integrated N<sub>2</sub>H<sup>+</sup> intensity but the other five are not. Note that even in the low intensity regions, the signal-to-noise ratio is strong enough to determine the dispersion quite accurately and that any bias would tend to increase the dispersion. The quiescent regions are not prominent in the map of integrated intensity because their linewidth is small. Indeed, our analysis of the data suggests to us that these quiescent “cores” are of greater interest than the cores of high integrated intensity.
The eight quiescent cores tend to have higher peak temperatures than their immediate surroundings and are especially prominent in Figure 3 which plots the peak N<sub>2</sub>H<sup>+</sup> temperature divided by the total velocity dispersion. This is a measure of the optical depth (which was not well determined from the hyperfine fitting) and it rises toward the quiescent cores which possess both relatively low dispersion and high peak temperatures.
The quiescent core Q2 is almost coincident with the continuum source S68Nb and Q5 extends to encompass S68Nd but the other quiescent cores are apparently starless. Q6 lies $`13^{\prime \prime }`$ southeast of the strong CS core S68NW that was discussed in Williams & Myers (1999a). Table 2 lists the quiescent core locations, non-thermal velocity dispersion and the ratio of (thermal plus non-thermal) velocity dispersion as the data are smoothed from $`10^{\prime \prime }`$ to $`50^{\prime \prime }`$. This ratio is discussed later in §4 in the context of pressure gradients. Spectra toward the core centers at $`15^{\prime \prime }`$ and $`50^{\prime \prime }`$ resolution are shown in Figure 4. The $`15^{\prime \prime }`$ spectra are slightly smoothed from the highest available resolution so as to achieve a higher signal-to-noise ratio and more clearly show fine features in the spectral profiles. The lines are narrower and brighter at higher resolution and resemble the “kernels” in cluster forming cores described in Myers (1998). We discuss this point further in §4. The red shoulders in Q1, Q5, and Q8 are also apparent in the other hyperfine components and may simply be velocity structure or possibly self-absorption. None of the quiescent core spectra show blue shoulders from high velocity infall.
The concomitant decrease in velocity dispersion and increase in optical depth as measured by the peak temperature divided by the dispersion suggests that the quiescent cores have condensed out of the larger scale cluster forming cloud through a localized reduction in turbulent pressure support. The dissipation of turbulence as a means of core formation has been alluded to previously but has only recently been discussed explicitly by Nakano (1998) and Myers & Lazarian (1998). A decrease in pressure support should result in an inward flow of material. The search for such a flow is the subject of the following subsection.
### 3.3 CS(2–1)
A map of CS and C<sup>34</sup>S(2–1) spectra from FCRAO observations is shown in Figure 5. Since these two species share very similar chemical pathways, the differences in profiles are primarily due to their different optical depths. The ratio of peak CS to C<sup>34</sup>S temperatures ranges from 4 to 8, indicating moderate CS optical depths of $`\tau \sim 1`$–3 (Williams & Myers 1999b). Whereas the C<sup>34</sup>S spectra present a single peak and are approximately gaussian, the CS spectra have low-level line wing emission and are generally double-peaked. The line wings are due to outflows from several cluster members, as we show below, and the two peaks result from self-absorption since the C<sup>34</sup>S emission peaks at the dip of the CS spectra. Self-absorbed spectra from a static core would be symmetric, but here they are clearly asymmetric. We use the asymmetries in the self-absorption to probe the velocity differences between outer and inner regions of the cores, and thereby search for inward motions (e.g. Zhou 1995). A blue (low velocity) peak that is brighter than the red (high velocity) peak indicates that the (outer) absorbing layer is relatively red-shifted, i.e., infalling, whereas the opposite asymmetry implies outward motions. The greater the blue-red difference, the greater the relative motions between the inner and outer regions of the core (Myers et al. 1996).
There is a preponderance of spectra with infall-type asymmetry and only a few spectra around S68N with the opposite asymmetry in Figure 5. Moreover, the average spectrum over the cluster (not shown) is self-absorbed with a brighter blue than red peak. This suggests that there is large scale contraction of the gas around the cluster. The size of the contracting region is greater than 0.2 pc in diameter and extends well beyond the continuum sources. Extended asymmetrical self-absorption in this source in the $`2_{12}`$–$`1_{11}`$ line of H<sub>2</sub>CO is also discussed in Mardones (1998). We have observed a similarly sized infalling region in CS(2–1) in the Cepheus A cluster forming region (Williams & Myers 1999b). At $`50^{\prime \prime }`$, the resolution of these data is too coarse to isolate the dynamics of individual cores, but the addition of the interferometer maps allows us to follow infall and outflow motions down to the scale of the individual protostellar cores.
The Serpens cloud is known to contain a number of molecular outflows (White, Casali, & Eiroa 1995; Davis et al. 1999). Figure 6 maps the blue- and red-shifted emission for the BIMA data only. By analyzing the data prior to combining with the FCRAO data, the bulk of the mostly uniform cloud emission is resolved out and small scale features, such as outflows, stand out. The resulting map shows outflows around S68N, SMM1, SMM10, and possibly S68Nb. The existence of an outflow from S68Nb is uncertain because of confusion with the extensive red lobe around S68N.
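The blue- and red-shifted maps of Figure 6 are built by integrating each spectrum over line-wing velocity windows offset from the line core; a minimal sketch, where the window limits are hypothetical placeholders rather than the values used for the figure:

```python
import numpy as np

def wing_intensity(v, t_b, v_lo, v_hi):
    """Integrated intensity (K km/s) over the velocity window [v_lo, v_hi],
    e.g. a blue or red outflow wing offset from the systemic velocity.
    Trapezoidal integration is written out explicitly."""
    m = (v >= v_lo) & (v <= v_hi)
    vv, tt = v[m], t_b[m]
    return float(np.sum(0.5 * (tt[1:] + tt[:-1]) * np.diff(vv)))
```

Applying this at each map position, with one window on either side of the systemic velocity, yields the two outflow lobes.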
CS spectra toward the quiescent cores and continuum sources are plotted in Figure 7. Here, we use the combined FCRAO/BIMA dataset since it is essential that there be no missing flux if we are to interpret the spectra correctly. Spectra are centered on the position of minimum N<sub>2</sub>H<sup>+</sup> linewidth for the quiescent cores, or on the continuum source otherwise, and are at a slightly smoothed $`15^{\prime \prime }`$ resolution. There are approximately symmetric line wings from the S68N, SMM1, and SMM10 outflows, a blue line wing from the possible outflow around Q2/S68Nb, and weak one-sided wings from the red lobe of the S68N outflow around Q5/S68Nd and Q7. The N<sub>2</sub>H<sup>+</sup> velocity and FWHM linewidth are indicated by the solid vertical line and shading, respectively (spectra are shown in Figure 4). The N<sub>2</sub>H<sup>+</sup> velocity lies at the dip of the double-peaked CS spectra, as for the C<sup>34</sup>S spectra in Figure 5, except for Q8.
The Q8 core, with the narrowest N<sub>2</sub>H<sup>+</sup> linewidth in the map, lies at the edge of the cluster and is uncontaminated by CS outflow wings. The CS spectra around the core all have the classic infall profile, but at the position of the linewidth minimum the N<sub>2</sub>H<sup>+</sup> velocity lines up with the blue CS peak and not the dip as would be expected for self-absorption. Given that all the other double-peaked spectra in the map are self-absorbed, it seems unlikely that the double-peaked spectra in this region are not also self-absorbed. However, the central N<sub>2</sub>H<sup>+</sup> spectrum in Figure 4 shows a second, red, peak at $`15^{\prime \prime }`$ and a red shoulder at $`50^{\prime \prime }`$, and it may be that the N<sub>2</sub>H<sup>+</sup> is also self-absorbed here. The low resolution C<sup>34</sup>S data in Figure 5 demonstrate that the CS spectra are self-absorbed in this region, but to settle the issue in the Q8 core itself, sensitive higher resolution observations of C<sup>34</sup>S are necessary.
Aside from the uncertainty over the interpretation of the Q8 core, Figure 7 shows an overall trend for the CS infall asymmetry to be greatest in those cores with small N<sub>2</sub>H<sup>+</sup> linewidths. That is, the CS spectra toward the quiescent cores all have a greater blue than red peak indicating a positive infall velocity, but cores S68Nc, SMM10, and SMM1 present a very symmetric appearance indicating near-zero infall. The extended red CS outflow lobe from S68N in Figure 6 makes it difficult to isolate the Q5/S68Nd and Q7 cores and study their dynamics individually in this line, even though they are well separated in the N<sub>2</sub>H<sup>+</sup> map. It is particularly difficult to diagnose infall motions around S68N itself where the infall asymmetry abruptly reverses from one position to another $`10^{\prime \prime }`$ away. Nevertheless, Figure 7 suggests a connection between the level of core turbulence and the asymmetry of the CS profiles, in turn related to the infall speed. We explore this in more detail in the following section.
## 4 Pressure driven flows and turbulent dissipation
The singledish spectra in Figure 5 suggest a global infall onto the cluster, but the addition of the interferometer data allows a higher resolution view that reveals a wide range of spectral asymmetries and a patchwork of inward and outward flows. The size of the region over which infall motions are observed is large, $`\sim 0.2`$ pc in diameter, at least partly because the cluster contains a number of collapsing cores. The ($`\sim 3000`$ AU) resolution of the combined dataset separates out the individual protostars and protostellar cores from each other, and permits an analysis of how the small scale inward and outward motions are related to the local environment.
Detailed modeling of line profiles enables the infall speed to be determined (e.g., Zhou 1995; Williams et al. 1999) in isolated, low mass star forming cores, but there is a much greater range of spectral shapes in the CS data here. For example, many are contaminated by outflows, either internal or overlapping from neighboring sources. Thus the modeling requires several additional parameters, with the result that the uncertainty in the determination of the infall speed at each point is large. Therefore, rather than try to fit the spectra directly, we estimated the location of the main features in the CS spectra by eye using a simple cursor-based routine. This resulted in a catalog of positions and velocities where the CS spectra have local peaks and dips.
The simplest estimate of the CS spectral asymmetry is the difference in blue and red peak temperatures, $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}=T_{\mathrm{blue}}-T_{\mathrm{red}}`$. We consider the difference, rather than the ratio, of peak temperatures since it is determined with a smaller error. This is obviously only defined for double-peaked spectra and we therefore exclude spectra with possible infall “shoulders”. A second measure of asymmetry, which has the advantage of being defined for all spectral shapes, is the velocity difference between the peak CS and N<sub>2</sub>H<sup>+</sup> velocities. This is the unnormalized $`\delta v`$ parameter introduced by Mardones et al. (1997). Based on simulations with simple two-layer infall models (Myers et al. 1996), we find that the blue-red temperature difference correlates linearly with the infall speed for a given optical depth and excitation temperature, and that the scatter when a range of optical depths $`\tau =2`$–5 and excitation temperatures $`T_{\mathrm{ex}}=15`$–25 K is considered is relatively small. The velocity difference also correlates linearly with infall speed but is more sensitive to optical depth variations. Consequently we use $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}`$ as a measure of infall speed and analyze its distribution across the cluster.
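The blue-red temperature difference is simple to compute once the self-absorption dip has been located; a sketch, assuming the dip velocity is supplied externally (e.g. from the N<sub>2</sub>H<sup>+</sup> fit):

```python
import numpy as np

def delta_t_br(v, t_b, v_dip):
    """Blue-red peak temperature difference T_blue - T_red for a
    double-peaked, self-absorbed profile split at the dip velocity v_dip.
    Positive values correspond to the infall asymmetry."""
    blue = t_b[v < v_dip].max()
    red = t_b[v >= v_dip].max()
    return blue - red
```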
To test the hypothesis made in the previous section that the CS asymmetries are large where the N<sub>2</sub>H<sup>+</sup> velocity dispersion is small, we simply plot $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}`$ against $`\sigma _{\mathrm{NT}}`$(N<sub>2</sub>H<sup>+</sup>) in Figure 8. The points are very scattered and there is no significant correlation between the two quantities. However, when binned by $`\sigma _{\mathrm{NT}}`$, the average $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}`$ is greater than zero (implying a positive infall velocity). Moreover, $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}>0`$ K for all spectra with $`\sigma _{\mathrm{NT}}<0.23`$ $`\mathrm{km}\mathrm{s}^{-1}`$. The lack of direct correlation is because several different dynamical states have been grouped together. By selecting individual regions, we can isolate the different states.
Figure 9 plots the variation of CS asymmetry and N<sub>2</sub>H<sup>+</sup> velocity dispersion against the distance from the center of a core for two quiescent cores and two continuum sources. A similar behavior is found in the other cores: the velocity dispersion tends to increase and the blue-red temperature difference generally decreases with increasing radius from the center of the quiescent cores and vice versa for the continuum sources. Moreover, the temperature difference is positive at the centers of the quiescent cores but is small or negative (i.e., outflow) at the centers of the continuum cores.
The slopes of the least squares fits, or the radial gradients, of the CS blue-red temperature difference and N<sub>2</sub>H<sup>+</sup> non-thermal velocity dispersion are tabulated in Table 3 and plotted against each other in Figure 10. Generally, the quiescent cores lie toward the lower right section (increasing N<sub>2</sub>H<sup>+</sup> dispersion and decreasing CS asymmetry with radius) and the continuum sources lie in the upper right section (decreasing N<sub>2</sub>H<sup>+</sup> dispersion and increasing CS asymmetry with radius). For a constant density, a change in velocity dispersion implies a change in pressure which results in a flow toward the lower pressure (i.e., lower dispersion) regions. The conversion from CS asymmetry to infall speed depends not only on the blue-red temperature difference but also the excitation temperature and optical depth. If these do not vary greatly from core envelope to center then the observed increase in $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}`$ as $`\sigma _{\mathrm{NT}}`$ decreases toward the centers of the quiescent cores implies an increase in infall speed.
Since the pressure depends linearly on the density and quadratically on the velocity dispersion, and since the infall speed is more sensitive to the blue-red temperature difference than to the excitation temperature and optical depth, the inverse correlation of $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}`$ with $`\sigma _{\mathrm{NT}}`$ in Figure 10 is evidence for pressure-driven inward flows in the quiescent cores. The correlation also extends to negative dispersion gradients and outflow motions around the continuum sources, illustrating the disruptive effect of young protostars on their parent cloud. Finally, we note that a fit through the data points suggests a slightly negative CS asymmetry gradient at $`d\sigma _{\mathrm{NT}}/dr=0`$, which may indicate small, presumably gravitational, inward motions even when the pressure gradient is zero.
Using the two-layer model of Myers et al. (1996), we find that the infall speed corresponding to a typical value, $`\mathrm{\Delta }T_{\mathrm{b}\mathrm{r}}=1`$ K, for $`\sigma _{\mathrm{NT}}=0.3`$ $`\mathrm{km}\mathrm{s}^{-1}`$ is $`v_{\mathrm{in}}\sim 0.05`$ $`\mathrm{km}\mathrm{s}^{-1}`$. This is quite small and similar to that expected for the quasistatic contraction of a magnetically supported, isothermal core (Lizano & Shu 1989). However, the nature of the flow is very different from the predictions of such ambipolar diffusion models: all the cores, whether with or without continuum sources, have highly non-thermal linewidths, and the infall speed is greatest in those cores with the smallest linewidths. This inverse correlation is not expected in a purely gravitational collapse and is more suggestive of a pressure driven flow. Nakano (1998) and Myers & Lazarian (1998) describe how inward flows onto a core can occur through the dissipation of turbulence and consequent loss of pressure support. As a core grows and its center becomes more opaque to ionizing radiation, its coupling to the magnetic field, and the range of MHD waves that propagate through it, decrease. Without replenishment, the waves decay within a free-fall time and the turbulent pressure, $`\rho \sigma _{\mathrm{NT}}^2`$, rapidly decreases (where $`\rho `$ is the mass density). Since wave support is maintained in the lower opacity, more highly ionized, core envelope, its pressure remains the same, with the result that there is a pressure gradient, leading to a flow, from core envelope to center. The magnitude of the flow will be greatest where the pressure gradient is greatest, or equivalently where the central linewidths are smallest if the external pressure is approximately constant.
The magnitude of the inward speed depends on the pressure difference between core center and envelope. The observed inward motions are small, $`v_{\mathrm{in}}\ll \sigma _{\mathrm{NT}}`$, and therefore the pressure differences are small. Since the total (thermal plus non-thermal) velocity dispersion at $`10^{\prime \prime }`$ is a factor of 0.36–0.75 less than at $`50^{\prime \prime }`$ (Table 2), the density contrast between core centers and envelopes is inferred to lie in the range 2–8. This is comparable to the ratio of peak to average density for cores in the Ophiuchus cluster forming cloud (Motte, André, & Neri 1998).
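The inferred density contrast follows from assuming approximate turbulent pressure balance, with $`\rho \sigma ^2`$ roughly constant between core center and envelope (an assumption consistent with the argument above):

```python
def density_contrast(dispersion_ratio):
    """Center-to-envelope density contrast rho_c/rho_e implied by turbulent
    pressure balance rho*sigma^2 = const, where dispersion_ratio is
    sigma(center)/sigma(envelope), e.g. the 10"/50" ratio of Table 2."""
    return dispersion_ratio ** -2
```

The dispersion ratios 0.75 and 0.36 then give contrasts of roughly 2 and 8, matching the range inferred in the text.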
As the core grows, the opacity increases further until the ionization is dominated by cosmic rays ($`A_V\gtrsim 4`$; McKee 1989). In this case, Myers (1998) predicts the existence of “kernels”, $`\sim 6000`$ AU in size, that are completely cut off from MHD waves. If the external pressure is sufficiently large, as in massive star forming regions, these kernels can be stable, supported by thermal pressure against their self-gravity. The criterion for stability (Myers 1998, equation 3) can be rewritten as $`\sigma _{\mathrm{NT}}/\sigma _\mathrm{T}\gtrsim 1.5`$, which is satisfied in all the cores here, where we have found $`\sigma _{\mathrm{NT}}/\sigma _\mathrm{T}>2.1`$. The quiescent cores extend over $`10^{\prime \prime }`$–$`20^{\prime \prime }`$, which is close to the expected size of a kernel, and their velocity FWHM are $`\sim 0.5`$ $`\mathrm{km}\mathrm{s}^{-1}`$, similar to Myers’ Figure 2, but there is insufficient signal-to-noise to discern the predicted thermal “spike”. There are possible examples in the residuals to the hyperfine fits to the spectra, but they are not consistent across all the components and may be due to poorly cleaned sidelobes from other cores (the cleaning method is non-linear and varies in its effectiveness from channel to channel). High spatial and velocity resolution singledish observations of higher transition lines such as N<sub>2</sub>H<sup>+</sup>(3–2) offer an independent test for the presence of a thermal spike and may also be used to constrain the density contrast in the cores.
The maps in Figures 1 and 2 reveal a number of protostars and pre-protostellar collapsing cores. The cluster did not form in a single event, therefore, but continues to accrue members through an ongoing process of individual star formation. Hurt & Barsony (1996) analyzed the spectral energy distribution (SED) of several of the bright sources in this cluster and concluded that they were Class 0 protostars. The IRAS data do not have the resolution to separate the emission from the seven continuum sources that we have identified here (and therefore to define their SEDs in the far-infrared), but if, following Hurt & Barsony, we divide the IRAS fluxes evenly between all the objects, all seven would be classified as Class 0.
The discovery of the quiescent cores, and their association with high infall motions, suggests that they are the precursors to the Class 0 sources. Within the boundaries of the maps here, and at the sensitivity of the observations, we have found approximately equal numbers of continuum sources and quiescent cores (7 and 8 respectively, with 2 shared). If stars continue to form at a constant rate, then the lifetime of the quiescent cores must be approximately the same as the lifetime of the Class 0 phase of protostellar evolution, $`\sim 3\times 10^4`$ yr (André & Montmerle 1994). Such a short lifetime implies a dynamic evolution, since it is comparable to the free-fall timescale, $`t_{\mathrm{ff}}=(G\rho )^{-1/2}\approx 6\times 10^4`$ yr for $`n_{\mathrm{H}_2}=10^6`$ $`\mathrm{cm}^{-3}`$, approximately equal to the inferred volume density of the quiescent cores and a factor of two greater than the critical density of N<sub>2</sub>H<sup>+</sup>. This rapid evolution is consistent with core growth through the decay of turbulence since this should occur on a free-fall timescale (Nakano 1998).
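The free-fall estimate can be checked directly; a sketch assuming a mean molecular weight of 2.33 amu per particle (an assumed value, not stated in the text):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
AMU = 1.661e-27   # atomic mass unit, kg
YEAR = 3.156e7    # seconds per year

def t_freefall_yr(n_h2_cm3, mu=2.33):
    """t_ff = (G rho)^(-1/2) in years for an H2 number density in cm^-3;
    mu is the assumed mean molecular weight per particle in amu."""
    rho = n_h2_cm3 * 1e6 * mu * AMU   # mass density, kg m^-3
    return math.sqrt(1.0 / (G * rho)) / YEAR
```

For $`n_{\mathrm{H}_2}=10^6`$ $`\mathrm{cm}^{-3}`$ this returns about $`6\times 10^4`$ yr, the value quoted above.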
## 5 Summary
This paper presents millimeter wavelength continuum and spectral line observations of a young, embedded, low mass cluster forming region in the Serpens molecular cloud. Seven continuum sources are found at the resolution and sensitivity of these data. The distribution of these sources corresponds well with the N<sub>2</sub>H<sup>+</sup> emission, but the CS data present a different appearance, with high velocity emission from outflows around four continuum sources, and central dips in spectral profiles from self-absorption (as shown on the large scale from singledish C<sup>34</sup>S data and on the small scale from the N<sub>2</sub>H<sup>+</sup>). Away from the powerful outflow around S68N, the self-absorption is red-shifted, which we interpret as indicating inward motions.
The N<sub>2</sub>H<sup>+</sup> linewidth is dominated by non-thermal motions throughout the cluster. However, there are 8 regions where the non-thermal velocity dispersion reaches a local minimum. Six are starless and two contain continuum sources. They do not all coincide with peaks of the integrated intensity but they all stand out in maps of the peak temperature divided by the dispersion, a measure of the optical depth. The CS spectra toward these “quiescent” cores are particularly asymmetric, indicating relatively high infall speeds. Generally, the N<sub>2</sub>H<sup>+</sup> dispersion increases and the CS blue-red temperature difference decreases with increasing distance from the core centers, and vice versa for the continuum sources. The correlation of CS asymmetry, related to infall speed, with N<sub>2</sub>H<sup>+</sup> dispersion, related to the local turbulent pressure, suggests that the inward flows are at least partly pressure driven and that the cores formed through the localized dissipation of turbulence as envisioned by Nakano (1998) and Myers & Lazarian (1998). Such a scenario is consistent with the observed numbers of quiescent cores and Class 0 sources.
The single-dish data alone show a net inward motion onto the cluster. Although there is clearly considerable smearing of the detailed dynamics, this suggests that it may be fruitful to search for infall signatures in more distant clusters at $`0.1`$–$`0.2`$ pc resolution. It will, of course, be important to observe other nearby cluster forming regions at higher resolution, $`\sim 3000`$ AU, to augment this study. Ultimately, the comparison of conditions in many clusters will give a clearer picture of their formation, show the effects of different environments, and, through an inventory of continuum sources and pre-protostellar cores, can be hoped to elucidate the origins of the stellar IMF (Motte et al. 1998; Testi & Sargent 1998).
JPW is supported by a Jansky fellowship. Partial support has also been provided by the NASA Origins of Solar Systems Program, grant NAGW-3401. We thank Leo Blitz and Dick Plambeck for generously assigning additional BIMA tracks to remap the continuum emission and Marc Pound and Tamara Helfer for advice concerning the data reduction.
# Minimal Superstring Standard Models
## 1 Minimal Superstring Standard Models
The most realistic string models found to date have been constructed in the free fermionic formulation of the heterotic–string. A large number of three generation models, which differ in their detailed phenomenological characteristics, have been built. All these models share an underlying $`Z_2\times Z_2`$ orbifold structure, which naturally gives rise to three generations with the $`SO(10)`$ embedding of the Standard Model spectrum<sup>*</sup><sup>*</sup>*Among the three generation orbifold models, constructed to date, only the free fermionic models possess the $`SO(10)`$ embedding of the Standard Model spectrum. Recently, it was further demonstrated that free fermionic heterotic–string models can also produce models with solely the spectrum of the Minimal Supersymmetric Standard Model (MSSM) in the effective low energy field theory . This is achieved due to the decoupling of all non–MSSM string states, exotic and non–exotic alike, at or slightly below the string scale, by Standard Model singlet VEVs which cancel the anomalous $`U(1)`$ D–term. This provides, for the first time, an example of a Minimal Standard Heterotic–String Model (MSHSM).
The emergence of a MSHSM in the free fermionic formulation reinforces the motivation for an improved understanding of this class of string compactifications. One of the important advancements of the last few years has been the development of techniques for systematic analysis of the $`F`$– and $`D`$–flat directions of (string) models. Indeed, in demonstrating the existence of a free fermionic MSHSM we have utilized those improved techniques . However, one limitation of those systematic studies performed to date is that they have included only flat directions of non–Abelian singlet fields. That is, fields which are singlets of all the non–Abelian gauge groups of a given string model and which may carry only Abelian $`U(1)`$ charges, or are singlets of the entire four dimensional gauge group. On the other hand, it has been shown in the past that some of the phenomenological constraints, such as quark–mixing , may necessitate the use of non–Abelian VEVs. This was also suggested in our recent exploration of possible generational mass hierarchies and effective Higgs $`\mu `$ terms resulting from singlet VEVs in our MSHSM .
In this letter we therefore begin the task of extending the systematic analysis of flat directions for the cases which include non–Abelian VEVs. For our investigation we again start with the model we have denoted “FNY,” first introduced in . An important question that has been of some debate in previous studies, and is relevant for the question of supersymmetry breaking, is whether flat directions which include non–Abelian VEVs can be exact. Indeed, a particularly important result which we show here for the first time is the demonstration of a MSHSM solution which includes non–Abelian VEVs and is flat to all orders of nonrenormalizable terms. We further elaborate on the specific complications which arise in considering non–Abelian VEVs in the string models and briefly discuss some of the phenomenological implications of non–Abelian VEVs in the MSHSM. In a follow-up to this letter we will present a large collection of (systematically generated) non–Abelian MSHSM $`D`$–flat directions that retain $`F`$–flatness to at least seventh order, along with a study of the phenomenological features of these directions.
## 2 Non–Abelian Flat MSSM directions of the FNY model
As advertised above, the model that we choose to study in this paper is the FNY model , which produced the first example of a MSHSM. The boundary conditions and GSO projection coefficients which define the model are given in ref. together with the cubic level and higher order terms in the superpotential. Here, we proceed directly to the analysis of the non–Abelian flat directions.
### 2.1 Generic $`D`$– and $`F`$–Flatness Constraints
Spacetime supersymmetry is broken in a model when the expectation value of the scalar potential,
$`V(\phi )=\frac{1}{2}{\displaystyle \underset{\alpha }{\sum }}g_\alpha D_a^\alpha D_a^\alpha +{\displaystyle \underset{i}{\sum }}|F_{\phi _i}|^2,`$ (2.1)
becomes non–zero. The $`D`$–term contributions in (2.1) have the form,
$`D_a^\alpha `$ $`\equiv `$ $`{\displaystyle \underset{m}{\sum }}\phi _m^{\dagger }T_a^\alpha \phi _m,`$ (2.2)
with $`T_a^\alpha `$ a matrix generator of the gauge group $`g_\alpha `$ for the representation $`\phi _m`$, while the $`F`$–term contributions are,
$`F_{\mathrm{\Phi }_m}`$ $`\equiv `$ $`{\displaystyle \frac{\partial W}{\partial \mathrm{\Phi }_m}}.`$ (2.3)
The $`\phi _m`$ are the scalar field superpartners of the chiral spin–$`\frac{1}{2}`$ fermions $`\psi _m`$, which together form a superfield $`\mathrm{\Phi }_m`$. Since all of the $`D`$ and $`F`$ contributions to (2.1) are positive semidefinite, each must have a zero expectation value for supersymmetry to remain unbroken.
For an Abelian gauge group, the $`D`$–term (2.2) simplifies to
$`D^i`$ $`\equiv `$ $`{\displaystyle \underset{m}{\sum }}Q_m^{(i)}|\phi _m|^2`$ (2.4)
where $`Q_m^{(i)}`$ is the $`U(1)_i`$ charge of $`\phi _m`$. When an Abelian symmetry is anomalous, that is, the trace of its charge over the massless fields is non–zero,
$`\mathrm{Tr}Q^{(A)}\ne 0,`$ (2.5)
the associated $`D`$–term acquires a Fayet–Iliopoulos (FI) term, $`ϵ\equiv \frac{g_s^2M_P^2}{192\pi ^2}\mathrm{Tr}Q^{(A)}`$,
$`D^{(A)}`$ $`\equiv `$ $`{\displaystyle \underset{m}{\sum }}Q_m^{(A)}|\phi _m|^2+ϵ.`$ (2.6)
$`g_s`$ is the string coupling and $`M_P`$ is the reduced Planck mass, $`M_PM_{Planck}/\sqrt{8\pi }2.4\times 10^{18}`$ GeV.
The FI term breaks supersymmetry near the string scale,
$`V\sim g_s^2ϵ^2,`$ (2.7)
unless it can be cancelled by a set of scalar VEVs, $`\{\phi _m^{}\}`$, carrying anomalous charges $`Q_m^{}^{(A)}`$,
$$D^{(A)}=\underset{m^{}}{\sum }Q_m^{}^{(A)}|\phi _m^{}|^2+ϵ=0.$$
(2.8)
To maintain supersymmetry, a set of anomaly–cancelling VEVs must simultaneously be $`D`$–flat for all additional Abelian and the non–Abelian gauge groups,
$$D^{i,\alpha }=0.$$
(2.9)
A non–trivial superpotential $`W`$ also imposes numerous constraints on allowed sets of anomaly–cancelling VEVs, through the $`F`$–terms in (2.1). $`F`$–flatness (and thereby supersymmetry) can be broken through an $`n^{\mathrm{th}}`$–order $`W`$ term containing $`\mathrm{\Phi }_m`$ when all of the additional fields in the term acquire VEVs,
$`F_{\mathrm{\Phi }_m}`$ $`\equiv `$ $`{\displaystyle \frac{\partial W}{\partial \mathrm{\Phi }_m}}\sim \lambda _n\phi ^2({\displaystyle \frac{\phi }{M_{str}}})^{n-3},`$ (2.10)
where $`\phi `$ denotes a generic scalar VEV. If $`\mathrm{\Phi }_m`$ additionally has a VEV, then supersymmetry can be broken simply by $`W0`$. (The lower the order of an $`F`$–breaking term, the closer the supersymmetry breaking scale is to the string scale.)
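The rapid suppression of such $`F`$–terms with increasing order $`n`$ can be illustrated by evaluating eq. (2.10) numerically. This is a rough sketch only: the choices $`\lambda _n\sim 1`$, $`M_{str}\sim 2.4\times 10^{18}`$ GeV, and $`\phi \sim 10^{17}`$ GeV (the FI-scale VEV) are illustrative assumptions, not values fixed by the text.

```python
# Order-of-magnitude size of the F-term (2.10):
#   F ~ lambda_n * phi^2 * (phi / M_str)**(n - 3)
# Assumed for illustration: lambda_n ~ 1, M_str ~ 2.4e18 GeV,
# phi ~ 1e17 GeV (roughly the FI-scale VEV).
M_str = 2.4e18     # GeV
phi = 1.0e17       # GeV

for n in (4, 6, 8, 9):
    F = phi**2 * (phi / M_str) ** (n - 3)      # GeV^2
    print(f"n = {n}: sqrt(F) ~ {F**0.5:.1e} GeV")
```

Each additional order costs a factor $`\phi /M_{str}\approx 1/24`$, which is why low-order superpotential terms pose the most serious threat to $`F`$–flatness while high-order terms push the associated supersymmetry-breaking scale far below the string scale.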
### 2.2 Non–Abelian flat directions in the FNY Model
In we classified the MSSM producing flat directions of the FNY model that are composed solely of singlet fields. Following this, in we studied the phenomenological features of these singlet directions. We now consider here generalized MSSM–producing flat directions in the FNY model that contain non–Abelian VEVs. In our prior investigations we demanded stringent flatness. That is, we required $`F`$–flatness term by term in the superpotential, rather than allowing $`F`$–flatness to result from cancellation between terms. The absence of any non–zero terms from within $`F_{\mathrm{\Phi }_m}`$ and $`W`$ is clearly sufficient to guarantee $`F`$–flatness along a given $`D`$–flat direction. However, such stringent demands are not necessary for $`F`$–flatness. Total absence of all individual non–zero VEV terms can be relaxed: collections of such terms appear without breaking $`F`$–flatness, so long as the terms separately cancel among themselves in each $`F_{\mathrm{\Phi }_m}`$ and in $`W`$. However, even when supersymmetry is retained at a given order in the superpotential via cancellation between several terms in a specific $`F_{\mathrm{\Phi }_m}`$, supersymmetry could well be broken at a slightly higher order.
Non–Abelian VEVs offer one solution to the stringent $`F`$–flatness issue. Because non–Abelian fields contain more than one field component, self–cancellation of a dangerous $`F`$–term can sometimes occur along non–Abelian directions. That is, for some directions it may be possible to maintain “stringent” $`F`$–flatness even when dangerous $`F`$–breaking terms appear in the stringy superpotential. We will demonstrate self–cancellation of a non–Abelian direction using, as examples, the four non–Abelian flat directions, FDNA1 through FDNA4, presented in Table I. These four directions are the simplest MSSM $`D`$–flat non–Abelian directions that are also $`F`$–flat to at least seventh order. We will show that self–cancellation is not possible for FDNA1 and FDNA2, while it is for FDNA3 and FDNA4.
The singlet fields receiving VEVs, $`\{\mathrm{\Phi }_{12},\mathrm{\Phi }_{23},\overline{\mathrm{\Phi }}_{56},\mathrm{\Phi }_4,\mathrm{\Phi }_4^{^{}},\overline{\mathrm{\Phi }}_4,\overline{\mathrm{\Phi }}_4^{^{}},H_{31}^s,H_{38}^s\},`$ are the same for these four flat directions.For a list of the massless fields in the FNY model see . The distinguishing aspect of these four directions is their non–Abelian components. The non–Abelian set for each direction is formed from a subset of the $`SU(2)_H`$ doublet fields $`\{H_{23},H_{26},V_{40}\}`$ and/or the $`SU(2)_H^{}`$ doublet fields $`\{H_{25},H_{28},V_{37}\}`$. FDNA1 involves solely $`SU(2)_H`$ fields: $`H_{23}`$, $`H_{26}`$, and $`V_{40}`$, while FDNA2 is a $`SU(2)_H^{}`$ parallel involving the corresponding $`H_{25}`$, $`H_{28}`$, and $`V_{37}`$. In contrast, FDNA3 and FDNA4 contain both $`SU(2)_H`$ and $`SU(2)_H^{}`$ doublets: the sets $`\{H_{23},V_{40},H_{28},V_{37}\}`$, and $`\{H_{26},V_{40},H_{25},V_{37}\}`$, respectively.
Our four non–Abelian flat directions can be separated into two sets, $`\{`$FDNA1, FDNA2$`\}`$ and $`\{`$FDNA3, FDNA4$`\}`$ due to a global $`Z_2`$ symmetry under which all $`SU(2)_H`$ and $`SU(2)_H^{}`$ fields are exchanged: $`H_{23}H_{25}`$, $`H_{26}H_{28}`$, $`V_5H_9`$, $`V_7H_{10}`$, $`V_{15}H_{19}`$, $`V_{17}H_{20}`$, $`V_{25}H_{29}`$, $`V_{27}H_{30}`$, $`V_{39}H_{35}`$, and $`V_{40}H_{37}`$. This symmetry is maintained in the superpotential to very high (and probably all) order in the superpotential. This implies that our findings regarding FDNA1 (FNDA3) have parallels for FDNA2 (FDNA4). Therefore we will examine only FDNA1 and FDNA3, but our findings will similarly apply to FDNA2 and FDNA4 after the appropriate field exchanges.
Now let us focus on FDNA1. In this $`D`$–flat direction the ratio of norms of the VEVs is:
$`|\mathrm{\Phi }_{12}|^2=2|\mathrm{\Phi }_{23}|^2=|\overline{\mathrm{\Phi }}_{56}|^2=|H_{31}^s|^2=|H_{38}^s|^2`$ (2.11)
$`=2|H_{23}|^2=2|H_{26}|^2=|V_{40}|^2\equiv 2|\alpha |^2;\mathrm{and}`$ (2.12)
$`(|\mathrm{\Phi }_4|^2+|\mathrm{\Phi }_4^{^{}}|^2)-(|\overline{\mathrm{\Phi }}_4|^2+|\overline{\mathrm{\Phi }}_4^{^{}}|^2)=|\alpha |^2,`$ (2.13)
where $`\alpha `$ is the overall VEV scale determined by eq. (2.8),
$`\alpha =\sqrt{{\displaystyle \frac{g_s^2M_P^2\cdot 1344/112}{192\pi ^2}}}\approx 1\times 10^{17}\mathrm{GeV}.`$ (2.14)
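Numerically, eq. (2.14) is straightforward to reproduce. In the sketch below, the value $`g_s^2\approx 0.5`$, typical of weakly coupled heterotic strings, is an assumption, since the text does not fix the coupling.

```python
import math

# FI-scale VEV, eq. (2.14):
#   alpha = sqrt( g_s^2 * M_P^2 * (1344/112) / (192 * pi^2) )
M_P = 2.4e18                 # GeV, reduced Planck mass
g_s2 = 0.5                   # assumed string coupling squared
alpha = math.sqrt(g_s2 * M_P**2 * (1344 / 112) / (192 * math.pi**2))
print(f"alpha ~ {alpha:.1e} GeV")    # of order 1e17 GeV, as quoted
```

With this assumed coupling the scale comes out at roughly $`1.4\times 10^{17}`$ GeV, of the order quoted in the text.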
This VEV ratio is fixed simply by the the Abelian $`D`$–terms (2.4,2.6) and the Cartan subalgebra (i.e., the diagonal) part of the $`SU(2)_H`$ and $`SU(2)_H^{}`$ $`D`$–terms.
In generic non–Abelian flat directions, the signs of the VEV components of a non–Abelian field are fixed by non–diagonal mixing of the VEVs in the corresponding non–Abelian $`D`$-terms (2.2). Since FDNA1 contains $`SU(2)_H`$ doublets, we must require
$`D^{SU(2)_H}=H_{23}^{\dagger }T^{SU(2)}H_{23}+H_{26}^{\dagger }T^{SU(2)}H_{26}+V_{40}^{\dagger }T^{SU(2)}V_{40}=0,`$ (2.15)
where
$`T^{SU(2)}\equiv {\displaystyle \underset{a=1}{\overset{3}{\sum }}}T_a^{SU(2)}=\left(\begin{array}{cc}1& 1-i\\ 1+i& -1\end{array}\right).`$ (2.18)
The only solutions to (2.15) consistent with $`|H_{23}|^2=|H_{26}|^2=|\alpha |^2`$ are (up to an overall $`\alpha \rightarrow -\alpha `$ transformation)
$`H_{23}=\left(\begin{array}{c}\alpha \\ \alpha \end{array}\right),H_{26}=\left(\begin{array}{c}\alpha \\ \alpha \end{array}\right),V_{40}=\left(\begin{array}{c}\sqrt{2}\alpha \\ -\sqrt{2}\alpha \end{array}\right),`$ (2.25)
and
$`H_{23}=\left(\begin{array}{c}\alpha \\ -\alpha \end{array}\right),H_{26}=\left(\begin{array}{c}\alpha \\ -\alpha \end{array}\right),V_{40}=\left(\begin{array}{c}\sqrt{2}\alpha \\ \sqrt{2}\alpha \end{array}\right).`$ (2.32)
A ninth–order superpotential term, $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}\mathrm{\Phi }_4^{^{}}H_{31}^sH_{38}^sH_{23}H_{26}V_{40}V_{39}`$, jeopardizes the $`F`$–flatness of this non–Abelian $`D`$–flat direction via,
$`F_{V_{39}}`$ $`\equiv `$ $`{\displaystyle \frac{\partial W}{\partial V_{39}}}`$ (2.33)
$`\sim `$ $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}\mathrm{\Phi }_4^{^{}}H_{31}^sH_{38}^s\left[(H_{23}H_{26})V_{40}+H_{23}(H_{26}V_{40})+H_{26}(H_{23}V_{40})\right].`$ (2.34)
Self–cancellation of this $`F`$–term could occur if the non–Abelian VEVs resulted in
$`(H_{23}H_{26})V_{40}+H_{23}(H_{26}V_{40})+H_{26}(H_{23}V_{40})=0.`$ (2.35)
However, neither (2.25) nor (2.32) produce this zero value. Instead, they generate
$`F_{V_{39}}`$ $`=`$ $`\lambda _9{\displaystyle \frac{8\alpha ^8}{M_P^6}}\left(\begin{array}{c}\pm 1\\ +1\end{array}\right),`$ (2.38)
with $`+1`$ for (2.25) and $`-1`$ for (2.32).
In contrast to FDNA1, we now show that self–cancellation of a dangerous $`F`$–term can occur for FDNA3. That is, non–Abelian $`F`$–term self–cancellation is consistent with $`D`$–term flatness for FDNA3. Along the FDNA3 direction, the ratio of the singlet VEVs is the same as for FDNA1 except for the $`\mathrm{\Phi }_4`$ contribution. For FDNA3,
$`(|\mathrm{\Phi }_4|^2+|\mathrm{\Phi }_4^{^{}}|^2)-(|\overline{\mathrm{\Phi }}_4|^2+|\overline{\mathrm{\Phi }}_4^{^{}}|^2)=0.`$ (2.39)
The significant difference between FDNA3 and FDNA1 lies in FDNA3’s non–Abelian VEV ratio,
$`|H_{23}|^2=|V_{40}|^2=|H_{28}|^2=|V_{37}|^2=|\alpha |^2.`$ (2.40)
The $`SU(2)_H`$ $`D`$–term,
$`D^{SU(2)_H}=H_{23}^{\dagger }T^{SU(2)}H_{23}+V_{40}^{\dagger }T^{SU(2)}V_{40}=0`$ (2.41)
has the two solutions
$`H_{23}=\left(\begin{array}{c}\alpha \\ \alpha \end{array}\right),V_{40}=\left(\begin{array}{c}\alpha \\ -\alpha \end{array}\right),`$ (2.46)
and
$`H_{23}=\left(\begin{array}{c}\alpha \\ -\alpha \end{array}\right),V_{40}=\left(\begin{array}{c}\alpha \\ \alpha \end{array}\right).`$ (2.51)
The $`SU(2)_H^{}`$ $`D`$–term solutions for $`H_{28}`$ and $`V_{37}`$ have parallel form.
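A direct numerical check of how two doublet VEVs of equal norm cancel in a $`D`$–term such as (2.41) is easy to carry out. The sketch below uses $`T=\sigma _1+\sigma _2+\sigma _3`$, the generator sum of (2.18); the relative sign inside $`V_{40}`$, i.e. $`H_{23}=(\alpha ,\alpha )`$ and $`V_{40}=(\alpha ,-\alpha )`$, is an illustrative assumption consistent with the norms in (2.40).

```python
# D-term (2.41) for two SU(2) doublets of equal norm (pure Python).
# T = sigma_1 + sigma_2 + sigma_3 is the generator sum of eq. (2.18).
# The relative sign inside V40 is an assumed, illustrative choice
# consistent with |H23|^2 = |V40|^2 = alpha^2.
T = [[1 + 0j, 1 - 1j],
     [1 + 1j, -1 + 0j]]

def d_term(v):
    """v^dagger T v for a 2-component complex vector v."""
    return sum(v[a].conjugate() * T[a][b] * v[b]
               for a in range(2) for b in range(2))

alpha = 1.0
H23 = [alpha, alpha]
V40 = [alpha, -alpha]

D = d_term(H23) + d_term(V40)
print(abs(D))        # 0.0 -- the two contributions cancel exactly
```

The two doublets contribute $`+2\alpha ^2`$ and $`-2\alpha ^2`$ respectively, so the $`D`$–term vanishes identically, as required for flatness.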
FDNA3’s $`F`$–flatness is threatened by an eighth–order superpotential term,
$`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{23}V_{40}H_{28}V_{35},`$ (2.52)
through
$`F_{V_{35}}`$ $`\equiv `$ $`{\displaystyle \frac{\partial W}{\partial V_{35}}}`$ (2.53)
$`\sim `$ $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{23}V_{40}H_{28}.`$ (2.54)
Either set of $`SU(2)_H`$ VEVs (2.46) or (2.51) results in $`H_{23}V_{40}=0`$. Hence $`F_{V_{35}}=0`$ self–cancellation is consistent with $`D`$–flatness for FDNA3. Elimination of $`F_{V_{35}}`$ makes FDNA3 flat to all finite order in the superpotential.
One can show that generic self–cancellation of a dangerous $`F`$–term occurs when, for at least one non–Abelian gauge group under which some of the flat direction VEVs carry charge, the ratio of the powers of the corresponding fields in the $`F`$–term is equivalent to the ratio of the norms of the flat direction VEVs carrying the given non–Abelian charge.
## 3 Hidden Sector Condensates
Non–Abelian VEV directions such as FDNA3 or FDNA4 can yield a three generation MSSM model while maintaining supersymmetry at the string /FI scale. Supersymmetry must ultimately be broken slightly above the electroweak scale, though. Along either of these two directions, our FNY model shows qualitatively how supersymmetry may be broken dynamically by hidden sector field condensation after either of these two directions is invoked. Recall that each of the flat directions FDNA3 and FDNA4 break both of the hidden sector $`SU(2)_H`$ and $`SU(2)_H^{^{}}`$ gauge symmetries, but leave untouched the hidden sector $`SU(3)_H`$. Thus, condensates of $`SU(3)_H`$ fields can initiate supersymmetry breaking .
The set of nontrivial $`SU(3)_H`$ fields is composed of five triplets,
$`\{H_{42},V_4,V_{14},V_{24},V_{34}\},`$ (3.1)
and five corresponding anti–triplets,
$`\{H_{35},V_3,V_{13},V_{23},V_{33}\}.`$ (3.2)
In both FDNA3 and FDNA4, singlet VEVs give unsuppressed FI–scale mass to two triplets, $`V_{24}`$ and $`V_{34}`$, and two anti–triplets, $`V_{23}`$ and $`V_{33}`$, via trilinear superpotential terms,Note that these $`SU(3)_H`$ triplet mass terms also occur along the simplest MSSM singlet flat direction possible .
$`\mathrm{\Phi }_{12}V_{23}V_{24}+\mathrm{\Phi }_{23}V_{33}V_{34}`$ (3.3)
a slightly suppressed mass to one triplet/anti–triplet pair, $`H_{42}`$ and $`H_{35}`$, via a fifth order term,
$`\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{42}H_{35},`$ (3.4)
and a significantly suppressed mass to the pair, $`V_4`$ and $`V_3`$, via a tenth order term,
$`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{23}V_{40}H_{28}V_{37}V_4V_3.`$ (3.5)
Before supersymmetry breaking, the last triplet/anti–triplet pair, $`V_{14}`$ and $`V_{13}`$, remains massless to all finite order.
Consider a generic $`SU(N_c)`$ gauge group containing $`N_f`$ flavors of matter states in vector–like pairings $`T_i\overline{T}_i`$, $`i=1,\mathrm{}N_f`$. When $`N_f<N_c`$, the gauge coupling $`g_s`$, though weak at the string scale $`M_{str}`$, becomes strong at a condensation scale defined by
$`\mathrm{\Lambda }=M_P\mathrm{e}^{-8\pi ^2/\beta g_s^2},`$ (3.6)
where the $`\beta `$–function is given by,
$`\beta =3N_c-N_f.`$ (3.7)
The $`N_f`$ flavors counted are only those that ultimately receive masses $`m\mathrm{\Lambda }`$. Thus, for our model $`N_c=3`$ and $`N_f=1`$ (counting only the vector–pair, $`V_{14}`$ and $`V_{13}`$), which corresponds to $`\beta =8`$ and results in an $`SU(3)_H`$ condensation scale
$`\mathrm{\Lambda }=\mathrm{e}^{-19.7}M_P\approx 7\times 10^9\mathrm{GeV}.`$ (3.8)
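The exponent quoted in (3.8) can be reproduced numerically. In the sketch below, $`g_s^2\approx 0.5`$ is an assumed string-scale coupling, chosen because it makes the exponent match the $`\mathrm{e}^{-19.7}`$ of the text.

```python
import math

# Condensation scale, eq. (3.6): Lambda = M_P * exp(-8 pi^2 / (beta g_s^2)),
# with beta = 3*N_c - N_f = 8 for N_c = 3, N_f = 1 (eq. 3.7).
M_P = 2.4e18                 # GeV, reduced Planck mass
N_c, N_f = 3, 1
beta = 3 * N_c - N_f         # = 8
g_s2 = 0.5                   # assumed string coupling squared
exponent = 8 * math.pi**2 / (beta * g_s2)
Lam = M_P * math.exp(-exponent)
print(f"exponent ~ {exponent:.1f}, Lambda ~ {Lam:.1e} GeV")
```

This gives an exponent of about 19.7 and a condensation scale of order $`10^{10}`$ GeV, consistent with (3.8).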
At this condensation scale $`\mathrm{\Lambda }`$, the matter degrees of freedom are best described in terms of the composite “meson” fields, $`T_i\overline{T}_i`$. (Here the meson field is $`V_{14}V_{13}`$.) Minimizing the scalar potential of our meson field induces a VEV of magnitude,
$`\langle V_{14}V_{13}\rangle =\mathrm{\Lambda }^3\left({\displaystyle \frac{m}{\mathrm{\Lambda }}}\right)^{N_f/N_c}{\displaystyle \frac{1}{m}}.`$ (3.9)
This results in an expectation value of
$`\langle W\rangle =N_c\mathrm{\Lambda }^3\left({\displaystyle \frac{m}{\mathrm{\Lambda }}}\right)^{N_f/N_c}`$ (3.10)
for the non–perturbative superpotential.
Supergravity models are defined in terms of two functions, the Kähler function, $`G=K+\mathrm{ln}|W|^2`$, where $`K`$ is the Kähler potential and $`W`$ the superpotential, and the gauge kinetic function $`f`$. These functions determine the supergravity interactions and the soft–supersymmetry breaking parameters that arise after spontaneous breaking of supergravity, which is parameterized by the gravitino mass $`m_{3/2}`$. The gravitino mass appears as a function of $`K`$ and $`W`$,
$`m_{3/2}=\mathrm{e}^{K/2}W.`$ (3.11)
Thus,
$`m_{3/2}\approx \mathrm{e}^{K/2}\langle W\rangle \approx \mathrm{e}^{K/2}N_c\mathrm{\Lambda }^3\left({\displaystyle \frac{m}{\mathrm{\Lambda }}}\right)^{N_f/N_c}.`$ (3.12)
Restoring proper mass units explicitly gives,
$`m_{3/2}\approx \mathrm{e}^{K/2}N_c({\displaystyle \frac{\mathrm{\Lambda }}{M_P}})^3\left({\displaystyle \frac{m}{\mathrm{\Lambda }}}\right)^{N_f/N_c}M_P.`$ (3.13)
Our meson field $`V_{14}V_{13}`$ will acquire a mass of at least the supersymmetry breaking scale, so let us assume $`m_{V_{14}V_{13}}\sim 1`$ TeV. The resulting gravitino mass is
$`m_{3/2}`$ $`\approx `$ $`\mathrm{e}^{K/2}\left({\displaystyle \frac{7\times 10^9\mathrm{GeV}}{2.4\times 10^{18}\mathrm{GeV}}}\right)^3\left({\displaystyle \frac{1000\mathrm{GeV}}{7\times 10^9\mathrm{GeV}}}\right)^{1/3}\times 2.4\times 10^{18}\mathrm{GeV}`$ (3.14)
$`\approx `$ $`\mathrm{e}^{K/2}\mathrm{\hspace{0.17em}0.3}\mathrm{eV}.`$
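The numerical estimate above is easy to reproduce. In the sketch below, $`\mathrm{e}^{K/2}`$ is set to 1 and the overall factor $`N_c`$ of (3.13) is dropped, following the numbers as they appear in the text; the $`\sim 1`$ TeV meson mass is the text's assumption.

```python
# Gravitino mass, eq. (3.14), with e^{K/2} -> 1:
#   m_{3/2} ~ (Lambda/M_P)^3 * (m/Lambda)^{N_f/N_c} * M_P
M_P = 2.4e18          # GeV, reduced Planck mass
Lam = 7.0e9           # GeV, condensation scale from (3.8)
m = 1.0e3             # GeV, assumed meson mass ~ 1 TeV
m32_GeV = (Lam / M_P) ** 3 * (m / Lam) ** (1 / 3) * M_P
print(f"m_3/2 ~ {m32_GeV * 1e9:.2f} eV")   # ~0.3 eV, as in the text
```

The tiny result, of order a fraction of an eV, is what motivates the no-scale discussion that follows.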
In standard supergravity scenarios, one generally obtains soft–supergravity–breaking parameters, such as scalar and gaugino masses and scalar interaction, that are comparable to the gravitino mass: $`m_o,m_{1/2},A_o\sim m_{3/2}`$. A gravitino mass of the order of the supersymmetry breaking scale would require $`\mathrm{e}^{K/2}\sim 10^{12}`$ or $`K\sim 55`$. On the other hand, for a viable model, $`\mathrm{e}^{K/2}\sim 𝒪(1)`$ would necessitate a decoupling of local supersymmetry breaking (parametrized by $`m_{3/2}`$) from global supersymmetry breaking (parametrized by $`m_o`$, $`m_{1/2}`$). This is indeed possible in the context of no–scale supergravity , endemic to weakly coupled string models.
In specific types of no–scale supergravity, the scalar mass $`m_o`$ and the scalar coupling $`A_o`$ have null values thanks to the associated form of the Kähler potential. Furthermore, the gaugino mass can go as a power of the gravitino mass, $`m_{1/2}\sim \left(\frac{m_{3/2}}{M_P}\right)^{1-\frac{2}{3}q}M_P`$, for the standard no–scale form of $`G`$ and a non–minimal gauge kinetic function $`f\sim \mathrm{e}^{Az^q}`$, where $`z`$ is a hidden sector moduli field . A gravitino mass in the range $`10^{-5}`$ eV $`\stackrel{<}{\sim }m_{3/2}\stackrel{<}{\sim }10^{-3}`$ eV is consistent with the phenomenological requirement of $`m_{1/2}\sim 100`$ GeV for $`\frac{3}{4}\stackrel{>}{\sim }q\stackrel{>}{\sim }\frac{1}{2}`$. Note that decoupling between the local and global breaking of supersymmetry also appears to be realized in strongly coupled heterotic strings .
## 4 Discussion
In this letter we have presented the four simplest non–Abelian $`D`$–flat directions of the FNY model that (i) produce exactly the MSSM fields as the only MSSM–charged fields in the low energy effective field theory and (ii) are flat to at least seventh order. $`F`$–flatness for the first two directions is necessarily broken by ninth order superpotential terms. For the last two directions, eighth order terms also pose a threat to $`F`$–flatness. All of these eighth and ninth order terms contain non–Abelian fields. For each of the latter two directions, we showed that a set of non–Abelian VEVs exist that is consistent with $`D`$–flat constraints and by which “self–cancellation” of the respective eighth order term can occur. By this, we mean that for each specific set of non–Abelian VEVs imposed by $`D`$–flatness constraints, the expectation value of the dangerous $`F`$–term is zero. Hence, the “dangerous” superpotential terms pose no problem and our latter two directions become flat to all finite order.
In we discussed reasons why non–Abelian VEVs are likely required for a phenomenologically viable low energy effective MSSM, at least for the FNY string model. Evidence has also been presented in the past suggesting this might be true as well for all MSHSM $`Z_2\times Z_2`$ models. This implies that there is significant worth in exploring the generic properties of non–Abelian flat directions in $`Z_2\times Z_2`$ models that contain exactly the MSSM three generations and two Higgs doublets as the only MSSM–charged fields in the low energy effective field theory. For the next step in our study, we will present in a large set of systematically generated MSSM–producing non–Abelian flat directions for the FNY model. We will then analyze the phenomenological differences between our non–Abelian directions and our past singlet directions.
## 5 Acknowledgments
This work is supported in part by DOE Grants No. DE–FG–0294ER40823 (AF) and DE–FG–0395ER40917 (GC,DVN,JW).
## Appendix A Example Non-Abelian $`D`$– and $`F`$–flat MSSM Directions
| FD$`\mathrm{\#}`$ | Q’ | $`\mathrm{\Phi }_{12}`$ | $`\mathrm{\Phi }_{23}`$ | $`\overline{\mathrm{\Phi }}_{56}`$ | ($`\mathrm{\Phi }_4`$) | $`H_{31}^s`$ | $`H_{38}^s`$ | $`H_{23}`$ | $`H_{26}`$ | $`V_{40}`$ | $`H_{25}`$ | $`H_{28}`$ | $`V_{37}`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FDNA1 | -1 | 2 | 1 | 2 | 1 | 2 | 2 | 1 | 1 | 2 | | | |
| FDNA2 | -1 | 2 | 1 | 2 | 1 | 2 | 2 | | | | 1 | 1 | 2 |
| FDNA3 | -1 | 2 | 1 | 2 | 0 | 2 | 2 | 1 | | 1 | | 1 | 1 |
| FDNA4 | -1 | 2 | 1 | 2 | 0 | 2 | 2 | | 1 | 1 | 1 | | 1 |
Table I: Example FNY directions flat through at least seventh order that contain VEVs of Non-Abelian charged Hidden Sector Fields. All component VEVs in these directions are uncharged under the MSSM gauge group. Column one entries specify the class to which an example direction belongs. Column two entries give the anomalous charges $`Q^{}\equiv Q^{(A)}/112`$ of the flat directions. The next several column entries specify the ratios of the norms of the VEVs. The $`\mathrm{\Phi }_4`$–related component is the net value (in units of the square overall VEV scale) of $`|\mathrm{\Phi }_4|^2+|\mathrm{\Phi }_4^{^{}}|^2-|\overline{\mathrm{\Phi }}_4|^2-|\overline{\mathrm{\Phi }}_4^{^{}}|^2`$. E.g., a “1” in the $`\mathrm{\Phi }_4`$ column for FDNA1 specifies that $`|\mathrm{\Phi }_4|^2+|\mathrm{\Phi }_4^{^{}}|^2-|\overline{\mathrm{\Phi }}_4|^2-|\overline{\mathrm{\Phi }}_4^{^{}}|^2=1\times |\alpha |^2`$.
| FD$`\mathrm{\#}`$ | Dangerous $`W`$ Terms | Self–Cancellation $`F`$–Flatness Solution ? |
| --- | --- | --- |
| FDNA1 | $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}\overline{\mathrm{\Phi }}_4H_{31}^sH_{38}^sH_{23}H_{26}V_{40}V_{39}`$ | No. |
| FDNA2 | $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}\mathrm{\Phi }_4^{^{}}H_{31}^sH_{38}^sH_{25}H_{28}V_{37}V_{35}`$ | No. |
| FDNA3 | $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{23}V_{40}H_{28}V_{35}`$ | Yes, via $`\{H_{23},V_{40}\}.`$ |
| FDNA4 | $`\mathrm{\Phi }_{23}\overline{\mathrm{\Phi }}_{56}H_{31}^sH_{38}^sH_{26}V_{40}H_{25}V_{35}`$ | Yes, via $`\{H_{26},V_{40}\}.`$ |
Table II: Dangerous $`F`$-breaking superpotential terms for flat directions in Table I.
Column one entries specify the class of a flat direction. The entry in the next column specifies superpotential terms that can (possibly) break $`F`$–flatness and the last column entry indicates whether or not there is an allowed set of non–Abelian VEVs, consistent with $`D`$-flatness constraints, by which $`F`$–flatness may be maintained through self–cancellation.
# Nuclear Spin Qubit Dephasing Time in the Integer Quantum Hall Effect Regime

(revised August 2000)
Dima Mozyrsky, Vladimir Privman and Israel D. Vagner
Department of Physics, Clarkson University, Potsdam, New York 13699–5820, USA
also at Grenoble High Magnetic Field Lab, Max-Planck-Institut für Festkörperforschung, and Centre National de la Recherche Scientifique, BP 166, Grenoble 9, F-38042, France
ABSTRACT
We report the first theoretical estimate of the nuclear-spin dephasing time $`T_2`$ owing to the spin interaction with the two-dimensional electron gas, when the latter is in the integer quantum Hall state, in a two-dimensional heterojunction or quantum well at low temperature and in large applied magnetic field. We establish that the leading mechanism of dephasing is due to the impurity potentials that influence the dynamics of the spin via virtual magnetic spin-exciton scattering. Implications of our results for implementation of nuclear spins as quantum bits (qubits) for quantum computing are discussed.
PACS: 73.20.Dx, 71.70.Ej, 03.67.Lx, 76.60.-k
1. Introduction
Recent ideas \[1-3\] of utilizing nuclear spins in semiconductor quantum wells and heterojunctions as quantum bits (qubits) for quantum computation have generated new emphases in the studies of nuclear-spin relaxation and, especially, quantum decoherence, in such systems. In this work we consider the case of the integer $`\nu =1`$ quantum-Hall state . The two-dimensional electron gas is then in a nondissipative state. Since the electrons mediate the dominant interaction between nuclear spins, it is reasonable to expect that relaxation times of the latter, as well as decoherence/dephasing effects, will occur on large time scales.
Solid-state proposals for quantum computation \[1-3\] with nuclear spins are all presently theoretical. Related proposals to utilize quantum dots \[7-15\] are also at present all in the theory stage. Typically, atoms with non-zero nuclear spin would be placed, by modern “atomic engineering” techniques, in a host material made of a zero-nuclear-spin isotope. In order to allow positioning with respect to other features of the system, such as gate electrodes , and making replicas , etc., the nuclear-spin separation will be larger than the atomic size, typically, of order 20 to 100Å. At these separations, the direct magnetic dipole-dipole interaction of the nuclear spins is negligible.
The dynamics of the nuclear spins is governed by their interactions with each other and with their environment. In the regime of interest, these interactions are mediated by the two-dimensional electron gas. Various time scales are associated with this dynamics. The relaxation time $`T_1`$ is related to energy exchange and thermalization of the spins. Quantum mechanical decoherence and dephasing will occur on the time scale $`T_2`$. The latter processes correspond to demolition of quantum-mechanical superposition of states and erasure of phase information, due to interactions with the environment. Generally, there are many dynamical processes in the system, so the times $`T_1`$ and $`T_2`$ may not be uniquely, separately defined . Theoretically and experimentally, it has been established that processes of energy exchange are slow at low temperatures, so $`T_1`$ is very large, but there still might be some decoherence owing to quantum fluctuations. Generally, for various systems, there are extreme examples of theoretical prediction, ranging from no decoherence to finite decoherence at zero temperature, depending on the model assumptions.
In order to consider control (“programming”) of a quantum computer, we have to identify the time scale $`T_{\mathrm{ext}}`$ of the single-spin rotations owing to the interactions with an external NMR magnetic field. We also identify the time scale $`T_{\mathrm{int}}`$ associated with evolution owing to the pairwise spin-spin interactions. The preferred relation of the time scales is $`T_1,T_2\gg T_{\mathrm{ext}},T_{\mathrm{int}}`$, which is obviously required for coherent quantum-mechanical dynamics.
The aim of this work has been to advance theoretical understanding of the time-scales of interest for the quantum computer proposal based on nuclear spins in a two-dimensional electron gas, with the latter in the integer quantum Hall effect state obtained at low temperatures, of order 1K, and in high magnetic fields, of several Tesla, in two-dimensional semiconductor structures. This system is a promising candidate for quantum computing because the nuclear spin relaxation time $`T_1`$ can be as large as $`10^3`$sec. In the summarizing discussion, Section 5, we discuss and compare the values of all the relevant time scales.
Our main result, presented in Sections 2 through 4, is the first theoretical calculation of the nuclear-spin dephasing/decoherence time scale $`T_2`$ for such systems. We note that the recent study \[21-23\] of the nuclear-spin relaxation time $`T_1`$, has relied heavily on the accepted theoretical and experimental views of the properties and behavior of the electronic state of the two-dimensional electron gas in the quantum Hall regime. These electronic properties have been a subject of several studies \[4-6,21-29\]. We utilize these results in our calculation as well.
2. The Model
We consider a single nuclear spin coupled to a two-dimensional electron gas in a strong magnetic field, $`B`$, along the $`z`$ axis which is perpendicular to the two-dimensional structure. Assuming nuclear spin-$`\frac{1}{2}`$, for simplicity, we write the Hamiltonian as
$$H=H_n+H_e+H_{ne}+H_{\mathrm{imp}}$$
$`(1)`$
Here the first term is the nuclear spin interaction with the external magnetic field, $`H_n=-\frac{1}{2}\gamma _nB\sigma _z`$, where $`\gamma _n`$ includes $`\hbar `$ and the nuclear $`g`$-factor, and $`\sigma _z`$ is a Pauli matrix.
The second term is the electronic component of the total Hamiltonian (1). Within the free-electron nomenclature, the Fermi level lies in between the two Zeeman sub-levels of the lowest Landau level. The spin-up sub-level is then completely occupied, so the filling factor is $`\nu =1`$, while the spin-down sub-level is completely empty; note that the relevant effective electronic $`g`$-factor is typically negative. In fact, the calculation need not be limited to the lowest Landau level. Here, however, to avoid unilluminating mathematical complications, we restrict our attention to the lowest level, as has been uniformly done in the literature \[24-27\].
The last two terms in (1) correspond to the nuclear-spin electron interactions and to the effects of impurities. These will be addressed shortly. The magnetic sub-levels are actually broadened by impurities. At low temperatures, the $`\nu =1`$ system is in the quantum Hall state. The interactions of the two-dimensional electron gas with the underlying material are not shown in (1). They are accounted for phenomenologically, as described later.
The electron-electron interactions are treated within an approximate quasiparticle theory which only retains transition amplitudes between Zeeman sub-levels. The elementary excitations of the electron gas are then well described as magnetic spin excitons, or spin waves, \[24-27\]. The spin excitons are quasiparticles arising as a result of the interplay between the Coulomb repulsion of the electrons and their exchange interaction. A creation operator of a spin exciton with a two dimensional wave vector $`\mathbf{k}`$ can be written in terms of the electronic creation operators $`a^{\dagger}`$ in the spin-down Zeeman sub-level and annihilation operators $`b`$ in the spin-up sub-level as
$$A_{\mathbf{k}}^{\dagger}=\sqrt{\frac{2\pi \ell ^2}{L_xL_y}}\sum _pe^{i\ell ^2k_xp}\,a_{p+\frac{k_y}{2}}^{\dagger}\,b_{p-\frac{k_y}{2}}$$
$`(2)`$
Here $`\ell =\sqrt{c\hbar /eB}`$ is the magnetic length, and the $`p`$-summation is taken in such a way that the wave number subscripts are quantized in multiples of $`2\pi /L_y`$. Note that expression (2) assumes the Landau gauge, which is not symmetric under $`x\leftrightarrow y`$. For our purposes, the following expression for the dispersion relation of the excitons provides an adequate approximation,
$$E_{\mathbf{k}}=\mathrm{\Delta }+\left(\frac{\pi }{2}\right)^{1/2}\left(\frac{e^2}{ϵ\ell }\right)\frac{\ell ^2k^2}{2}$$
$`(3)`$
Here $`\mathrm{\Delta }=|g|\mu _BB`$, where $`\mu _B`$ is the Bohr magneton, and $`g`$ is the electronic $`g`$-factor, and $`ϵ`$ is the dielectric constant of the material. It has been pointed out that the gap $`\mathrm{\Delta }`$ in the excitonic spectrum suppresses nuclear spin relaxation at low temperatures. The electronic Hamiltonian can be written in terms of the spin exciton operators as
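The energy scales entering (3) are easy to evaluate numerically. A minimal sketch for a GaAs heterojunction at $`B=10`$T; the material constants used here ($`g\approx 0.44`$, $`ϵ\approx 12.9`$) are standard textbook GaAs values, not taken from this paper, and SI units are used, so $`e^2`$ is replaced by $`e^2/4\pi ϵ_0`$:

```python
import math

# Physical constants (SI)
hbar = 1.0545718e-34     # J s
e = 1.602176634e-19      # C
mu_B = 9.2740100783e-24  # J/T
eps0 = 8.8541878128e-12  # F/m

# Assumed GaAs parameters (textbook values, not from this paper)
g = 0.44    # magnitude of the effective electronic g-factor
eps = 12.9  # dielectric constant
B = 10.0    # magnetic field, T

ell = math.sqrt(hbar / (e * B))  # magnetic length
Delta = g * mu_B * B             # Zeeman gap in Eq. (3)
E_c = math.sqrt(math.pi / 2) * e**2 / (4 * math.pi * eps0 * eps * ell)

print(f"ell   = {ell:.2e} m")            # ~0.8e-8 m
print(f"Delta = {Delta:.2e} J")          # ~4e-23 J
print(f"E_c   = {E_c:.2e} J")            # ~3e-21 J
print(f"Delta/E_c = {Delta / E_c:.3f}")  # small gap-to-Coulomb ratio
```

The small ratio $`\mathrm{\Delta }/E_c`$ is the regime used in the integral estimates of Section 4.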
$$H_e=\mathcal{E}_0+\sum _{\mathbf{k}}E_{\mathbf{k}}A_{\mathbf{k}}^{\dagger}A_{\mathbf{k}}$$
$`(4)`$
where the $`c`$-number $`\mathcal{E}_0`$ is the spin-independent ground state energy of the electron gas. This description of the electron gas is appropriate only for low density of excitons, which is the case in our calculation, as will be seen later.
We now turn to the third term in (1), the interaction between the electrons and nuclear spins. It can be adequately approximated by the hyperfine Fermi contact term
$$H_{ne}=\frac{8\pi }{3}\gamma _ng\mu _B\,\mathbf{I}_n\cdot \sum _e\mathbf{S}_e\,\delta ^{(3)}\left(\mathbf{r}_e-\mathbf{R}_n\right)$$
$`(5)`$
Here $`\hbar \mathbf{I}_n`$ and $`\hbar \mathbf{S}_e`$ are nuclear and electronic spin operators, respectively, and $`\mathbf{r}_e`$ are the electron coordinates. The nuclear coordinate $`\mathbf{R}_n`$ can be put equal to zero. Such an interaction can be split into two parts
$$H_{ne}=H_{\mathrm{diag}}+H_{\mathrm{offdiag}}$$
$`(6)`$
where $`H_{\mathrm{diag}}`$ corresponds to the coupling of the electrons to the diagonal part of nuclear spin operator $`𝐈_n`$, and $`H_{\mathrm{offdiag}}`$ — to its off-diagonal part.
The diagonal and off-diagonal contributions can be rewritten in terms of electronic creation and annihilation operators as
$$H_{\mathrm{diag}}=\frac{(8\pi /3)\gamma _ng\mu _B|w_0(0)|^2}{\sqrt{\pi }L_y\ell d}\sum _{k,q}e^{-\frac{\ell ^2}{2}\left(k^2+q^2\right)}\,\sigma _z\left(a_k^{\dagger}a_q-b_k^{\dagger}b_q\right)$$
$`(7)`$
$$H_{\mathrm{offdiag}}=\frac{(8\pi /3)\gamma _ng\mu _B|w_0(0)|^2}{\sqrt{\pi }L_y\ell d}\sum _{k,q}e^{-\frac{\ell ^2}{2}\left(k^2+q^2\right)}\left(\sigma ^+b_k^{\dagger}a_q+\sigma ^{-}a_k^{\dagger}b_q\right)$$
$`(8)`$
Here $`\sigma ^\pm =\frac{1}{2}\left(\sigma _x\pm i\sigma _y\right)`$. The interactions of the electrons of the two-dimensional gas with the underlying material are incorporated phenomenologically through the dielectric constant and $`g`$-factor, see (3), et seq., and via $`|w_0(0)|^2`$ and $`d`$ in (8) above. The latter is the transverse dimension of the effectively two-dimensional region (heterojunction, quantum well) in which the electrons are confined. The quantity $`w_0(0)`$ represents phenomenologically the enhancement of the amplitude of the electron wave function at the nuclear position owing to the effective potential it experiences as it moves in the solid-state material. It is loosely related to the zero-momentum lattice Bloch wavefunction at the origin.
For the purposes of the calculations performed here, with the relevant states being the ground state and the single-exciton states of the electron gas, one can show that the terms in (7) that correspond to different $`k`$ and $`q`$ do not contribute, while the remaining sum over $`k`$ becomes a $`c`$-number, representing the Knight shift of the polarized electrons. Thus $`H_{\mathrm{diag}}`$ can be incorporated into the nuclear-spin energy splitting, redefining the Hamiltonian of the nuclear spin as $`H_n=-\frac{1}{2}\mathrm{\Gamma }\sigma _z`$, where $`\mathrm{\Gamma }=\gamma _n\left(B+B_{\mathrm{Knight}}\right)`$. Note that the Knight shift can be used to estimate the value of the phenomenological parameter $`|w_0(0)|`$ from experimental data. The off-diagonal coupling (8) can be expressed in terms of the excitonic operators (2) as follows,
$$H_{\mathrm{offdiag}}=\frac{C}{\sqrt{L_xL_y}}\sum _{\mathbf{k}}e^{-\ell ^2k^2/4}\left(A_{\mathbf{k}}^{\dagger}\sigma ^{-}+A_{\mathbf{k}}\sigma ^+\right)$$
$`(9)`$
where
$$C=\frac{(8\pi /3)\gamma _ng\mu _B|w_0(0)|^2}{\sqrt{2\pi }\,\ell d}$$
$`(10)`$
The summations over $`k_x`$ and $`k_y`$ are taken over all the integer multiples of $`2\pi /L_x`$ and $`2\pi /L_y`$, respectively.
The last term in (1) describes the interaction of the electrons with impurities and plays a crucial role in nuclear relaxation in the systems of interest. This interaction can be written in the spin-exciton representation as
$$H_{\mathrm{imp}}=\left(2i/L_xL_y\right)\sum _{\mathbf{k},\mathbf{q}}U\left(\mathbf{q}\right)\mathrm{sin}\left[\ell ^2\left(k_xq_y-k_yq_x\right)/2\right]A_{\mathbf{k}}^{\dagger}A_{\mathbf{k}+\mathbf{q}}$$
$`(11)`$
where $`U\left(\mathbf{q}\right)=\int U_{\mathrm{imp}}\left(\mathbf{r}\right)e^{-i\mathbf{q}\cdot \mathbf{r}}d^2\mathbf{r}`$ is the Fourier component of the impurity potential for electrons in the two-dimensional plane. We will assume that the impurity potential has zero average and can be modeled by Gaussian white noise completely described by its correlator, $`\left\langle U_{\mathrm{imp}}\left(\mathbf{r}\right)U_{\mathrm{imp}}\left(\mathbf{r}^{\prime}\right)\right\rangle =Q\delta ^{(2)}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)`$.
In summary, the relevant terms in the full Hamiltonian (1) can be expressed solely in terms of the nuclear-spin operators and spin-excitation operators as
$$H=-\frac{1}{2}\mathrm{\Gamma }\sigma _z+\sum _{\mathbf{k}}E_{\mathbf{k}}A_{\mathbf{k}}^{\dagger}A_{\mathbf{k}}+\sum _{\mathbf{k}}g_{\mathbf{k}}\left(A_{\mathbf{k}}^{\dagger}\sigma ^{-}+A_{\mathbf{k}}\sigma ^+\right)+\sum _{\mathbf{k},\mathbf{q}}\varphi _{\mathbf{k},\mathbf{q}}A_{\mathbf{k}}^{\dagger}A_{\mathbf{k}+\mathbf{q}}$$
$`(12)`$
where the explicit expressions for $`E_{\mathbf{k}}`$, $`g_{\mathbf{k}}`$ and $`\varphi _{\mathbf{k},\mathbf{q}}`$ can be read off (3), (9)-(10) and (11), respectively, and the quantity $`\mathrm{\Gamma }`$ was introduced in the text preceding Eq. (9).
3. Energy Relaxation
In order to set the stage for the calculation of $`T_2`$, let us first briefly summarize in this section aspects of the calculation of the nuclear-spin relaxation time $`T_1`$, along the lines of \[21-23\]. The dominant mechanism for both processes at low temperatures is the interactions with impurities. Thus, both calculations are effectively zero-temperature, single-spin; these assumptions will be further discussed in Section 5.
We assume that initially, at time $`t=0`$, the nuclear spin is polarized, while the excitons are in the ground state, $`|\mathrm{\Psi }\left(0\right)\rangle =|-\rangle |\mathbf{0}\rangle `$, where $`|-\rangle `$ is the polarized-down (excited) state of the nuclear spin and $`|\mathbf{0}\rangle `$ is the ground state of spin-excitons. Since the Hamiltonian (12) conserves the $`z`$-component of the total spin in the system, the most general wavefunction evolving from $`|\mathrm{\Psi }\left(0\right)\rangle `$ can be written as
$$|\mathrm{\Psi }\left(t\right)\rangle =\alpha \left(t\right)|-\rangle |\mathbf{0}\rangle +\sum _{\mathbf{k}}\beta _{\mathbf{k}}\left(t\right)|+\rangle |\mathbf{1}_{\mathbf{k}}\rangle $$
$`(13)`$
with $`|+\rangle `$ corresponding to the nuclear spin in the ground state and $`|\mathbf{1}_{\mathbf{k}}\rangle `$ describing the single-exciton state with the wave vector $`\mathbf{k}`$. Equations of motion for the coefficients $`\alpha `$ and $`\beta _{\mathbf{k}}`$ can be easily derived from the Schrödinger equation:
$$i\hbar \dot{\alpha }=\frac{1}{2}\mathrm{\Gamma }\alpha +\sum _{\mathbf{k}}g_{\mathbf{k}}\beta _{\mathbf{k}}$$
$`(14)`$
$$i\hbar \dot{\beta }_{\mathbf{k}}=-\frac{1}{2}\mathrm{\Gamma }\beta _{\mathbf{k}}+E_{\mathbf{k}}\beta _{\mathbf{k}}+\sum _{\mathbf{q}}\varphi _{\mathbf{k},\mathbf{q}}\beta _{\mathbf{q}}+g_{\mathbf{k}}\alpha $$
$`(15)`$
In order to solve the system of equations (14)-(15), we introduce Laplace transforms, $`\stackrel{~}{f}\left(S\right)=\int _0^{\mathrm{\infty }}f(t)e^{-St}dt`$, which satisfy
$$iS\hbar \stackrel{~}{\alpha }-i\hbar =\frac{1}{2}\mathrm{\Gamma }\stackrel{~}{\alpha }+\sum _{\mathbf{k}}g_{\mathbf{k}}\stackrel{~}{\beta }_{\mathbf{k}}$$
$`(16)`$
$$iS\hbar \stackrel{~}{\beta }_{\mathbf{k}}=-\frac{1}{2}\mathrm{\Gamma }\stackrel{~}{\beta }_{\mathbf{k}}+E_{\mathbf{k}}\stackrel{~}{\beta }_{\mathbf{k}}+\sum _{\mathbf{q}}\varphi _{\mathbf{k},\mathbf{q}}\stackrel{~}{\beta }_{\mathbf{q}}+g_{\mathbf{k}}\stackrel{~}{\alpha }$$
$`(17)`$
Let us first solve (16)-(17) for the case when the interaction of spin-excitons with impurities is switched off, i.e., $`\varphi _{𝐤,𝐪}=0`$. After some algebra we obtain
$$\frac{1}{\stackrel{~}{\alpha }(s)}=s+\frac{i}{\hbar }\sum _{\mathbf{k}}\frac{g_{\mathbf{k}}^2}{is\hbar +\mathrm{\Gamma }-E_{\mathbf{k}}}$$
$`(18)`$
where we have shifted the variable: $`s=S+i\mathrm{\Gamma }/(2\hbar )`$, which only introduces an uninteresting phase factor.
In the absence of the hyperfine interaction, i.e., for $`g_{\mathbf{k}}=0`$, $`\stackrel{~}{\alpha }(s)`$ in (18) has only the pole at $`s=0`$. When the interaction is switched on, the pole shifts from zero. This shift can be calculated in a standard way, within the leading order perturbative approach, by taking the limit $`s\to 0`$, so that $`\frac{1}{i\hbar s^++\mathrm{\Gamma }-E_{\mathbf{k}}}\to 𝒫\frac{1}{\mathrm{\Gamma }-E_{\mathbf{k}}}-i\pi \delta \left(\mathrm{\Gamma }-E_{\mathbf{k}}\right)`$, where $`𝒫`$ denotes the principal value. This type of approximation is encountered in quantum optics. The relaxation rate and the added phase shift of the nuclear-spin excited-state probability amplitude $`\alpha (t)`$ are given by the real and imaginary parts of the pole, $`\frac{1}{T_1}=\frac{2\pi }{\hbar }\sum _{\mathbf{k}}g_{\mathbf{k}}^2\delta \left(\mathrm{\Gamma }-E_{\mathbf{k}}\right)`$ and $`\omega =-\frac{1}{\hbar }𝒫\sum _{\mathbf{k}}\frac{g_{\mathbf{k}}^2}{\mathrm{\Gamma }-E_{\mathbf{k}}}`$ respectively, so that $`\alpha (t)\propto e^{-t/(2T_1)+i\omega t}`$. It is obvious that due to the large gap in the spin-exciton spectrum (3), $`\mathrm{\Gamma }\ll \mathrm{\Delta }`$, the energy conservation required by the delta function above can never be satisfied, and so in the absence of interaction with impurities, $`T_1=\mathrm{\infty }`$. It also transpires that $`T_2`$ is infinite, as will become apparent later.
Interactions with impurities, described by the last term in (12), will modify the solution of (16)-(17), and, as a consequence, the energy conservation condition. In particular, if the impurity potential is strong enough, it can provide additional energy to spin-excitons, so that their energy can fluctuate on the scale of order $`\mathrm{\Gamma }`$ thus making nuclear-spin relaxation possible. This mechanism corresponds to large fluctuations of the impurity potential $`U(𝐫)`$, which usually occur with a rather small probability, so $`T_1`$ is very large for such systems.
In order to carry out the above program quantitatively, one has to solve the system of equations (16)-(17) with nonzero $`\varphi _{\mathbf{k},\mathbf{q}}`$. Such a solution is only possible within an approximation. One can introduce the effective spin-exciton self-energy $`\mathrm{\Sigma }_{\mathbf{k}}`$ in (18), so that $`\frac{1}{i\hbar s+\mathrm{\Gamma }-E_{\mathbf{k}}}\to \frac{1}{i\hbar s+\mathrm{\Gamma }-E_{\mathbf{k}}+\mathrm{\Sigma }_{\mathbf{k}}}`$. An integral equation for $`\mathrm{\Sigma }_{\mathbf{k}}`$ can then be derived, taking the continuum limit in (16)-(17). Solving this equation would allow one to calculate the relaxation rate from (18). However, in order to satisfy the energy conservation, we require $`\mathrm{\Gamma }-E_{\mathbf{k}}+\mathrm{\Sigma }_{\mathbf{k}}=0`$, so the self-energy should be rather large, of order $`E_{\mathbf{k}}`$. Therefore, as a result of the spectral gap of the excitons, the perturbative approach is inadequate as it automatically assumes that $`|\mathrm{\Sigma }_{\mathbf{k}}|\ll |E_{\mathbf{k}}|`$. Instead, a certain variational approach has been adapted to evaluate $`T_1`$, consistent with the experimental values of order $`10^3`$ sec; for further discussion see Section 5.
4. Dephasing Mechanism
We argue that in order to calculate the phase shift due to the impurity potential, one can indeed use the perturbative solution of (16)-(17). This is because phase shifts result from virtual processes that do not require energy conservation and therefore are dominated by relatively small fluctuations of the impurity potential, simply because large fluctuations are very rare. Moreover, the terms in the sum in (18) that contribute to the relaxation rate do not contribute to the phase shift, see the discussion above. This consideration also applies when the self-energy is introduced.
One can show that the contribution to dephasing linear in $`\varphi _{\mathbf{k},\mathbf{p}}`$ vanishes due to symmetry. Thus, let us solve (16)-(17) perturbatively up to the second order in $`\varphi _{\mathbf{k},\mathbf{p}}`$ and perform the inverse Laplace transform of $`\stackrel{~}{\alpha }(s)`$. Within this approximation, the pole of $`\stackrel{~}{\alpha }(s)`$ in the complex-$`s`$ plane is imaginary, so that $`|\alpha (t)|=1`$. We conclude that $`\alpha (t)\propto e^{i\omega _Ut}`$ and $`\beta _{\mathbf{k}}(t)=0`$, where the part of the phase-shift responsible for dephasing is
$$\omega _U=\frac{1}{\hbar }\sum _{\mathbf{k}}\frac{g_{\mathbf{k}}}{E_{\mathbf{k}}}\sum _{\mathbf{q}}\frac{\varphi _{\mathbf{k},\mathbf{q}}}{E_{\mathbf{k}-\mathbf{q}}}\sum _{\mathbf{p}}\frac{\varphi _{\mathbf{k}-\mathbf{q},\mathbf{p}}\,g_{\mathbf{k}-\mathbf{q}-\mathbf{p}}}{E_{\mathbf{k}-\mathbf{q}-\mathbf{p}}}$$
$`(19)`$
The zeroth-order term in (19) was dropped as irrelevant for our calculation of the dephasing time. Since $`\mathrm{\Gamma }`$ is much smaller than $`E_𝐤`$, it was also omitted.
As expected, the perturbative solution does not describe the energy relaxation ($`T_1`$), but it does yield the additional phase shift due to the impurity potential. We will see shortly that this phase shift, when averaged over configurations of the impurity potential, produces a finite dephasing time, $`T_2`$.
Let us consider the reduced density matrix of the nuclear spin, given by
$$\rho _n(t)=\left[\mathrm{Tr}_e\,|\mathrm{\Psi }\left(t\right)\rangle \langle \mathrm{\Psi }\left(t\right)|\right]_U$$
$`(20)`$
recall (13). Here the trace is partial, taken over the states of the spin-excitons, while the outer brackets denote averaging over the impurity potential. The trace over the spin-excitons can be carried out straightforwardly because within the leading-order perturbative approximation used here they remain in the ground state; all excitations are virtual and contribute only to the phase shift. The diagonal elements of $`\rho _n(t)`$ are not influenced by virtual excitations and remain constant.
The off-diagonal elements of $`\rho _n(t)`$ contain the factors $`e^{\pm i\omega _Ut}`$. It is the averaging of these quantities over the white-noise impurity potential $`U(\mathbf{r})`$ that yields dephasing of the nuclear spin. In order to proceed, let us rewrite (19) more explicitly. From (9)-(11), after changing the summation index ($`\mathbf{k}\to \mathbf{k}-\frac{\mathbf{q}+\mathbf{p}}{2}`$) in the first sum in (19) we obtain
$$\omega _U=\frac{4C^2}{\hbar (L_xL_y)^3}\sum _{\mathbf{q},\mathbf{p}}U(\mathbf{q})U(\mathbf{p})\,e^{-\frac{\ell ^2}{8}\left(\mathbf{p}+\mathbf{q}\right)^2}\sum _{\mathbf{k}}\frac{e^{-\frac{\ell ^2}{2}\mathbf{k}^2}\,\mathrm{sin}\frac{\ell ^2}{2}[\mathbf{k}+\frac{\mathbf{p}}{2},\mathbf{q}]_z\,\mathrm{sin}\frac{\ell ^2}{2}[\mathbf{k}-\frac{\mathbf{q}}{2},\mathbf{p}]_z}{E_{\mathbf{k}+\frac{\mathbf{q}+\mathbf{p}}{2}}E_{\mathbf{k}+\frac{\mathbf{q}-\mathbf{p}}{2}}E_{\mathbf{k}-\frac{\mathbf{q}+\mathbf{p}}{2}}}$$
$`(21)`$
Here we use the following shorthand notation for the $`z`$ component of a vector product: $`[\mathbf{k},\mathbf{q}]_z=k_xq_y-k_yq_x`$.
It is appropriate to assume that impurity potentials are short-range, i.e., $`a\ll \ell `$, where $`a`$ is the scale of variation of $`U_{\mathrm{imp}}(\mathbf{r})`$. This assumption and the white-noise property of the impurity potentials are required to make the problem amenable to analytical calculation. Thus, the main contribution to the Fourier transform $`U(\mathbf{p})`$, dominating the summation in (21), comes from large wavevectors $`\mathbf{p}`$ (and $`\mathbf{q}`$), of order $`a^{-1}\gg \ell ^{-1}`$. Therefore one can replace the exponent $`e^{-\frac{\ell ^2}{8}\left(\mathbf{p}+\mathbf{q}\right)^2}`$ by the Kronecker symbol $`\delta _{\mathbf{q},-\mathbf{p}}`$, to obtain a simplified expression for $`\omega _U`$
$$\omega _U=\frac{4C^2}{\hbar (L_xL_y)^3}\sum _{\mathbf{p}}U(\mathbf{p})U(-\mathbf{p})\sum _{\mathbf{k}}\frac{e^{-\frac{\ell ^2}{2}\mathbf{k}^2}\,\mathrm{sin}^2\frac{\ell ^2}{2}[\mathbf{k},\mathbf{p}]_z}{E_{\mathbf{k}}^2E_{\mathbf{k}+\mathbf{p}}}$$
$`(22)`$
Now the sum over $`\mathbf{k}`$ can be carried out because for $`\mathbf{p}\gg \mathbf{k}`$, we can assume that $`E_{\mathbf{k}+\mathbf{p}}\approx E_{\mathbf{p}}\approx E_c(\ell ^2/2)\mathbf{p}^2`$, where $`E_c=\left(\pi /2\right)^{1/2}\left[e^2/(ϵ\ell )\right]`$.
Moreover, for large $`\mathbf{p}`$, the factor $`\mathrm{sin}^2\left\{(\ell ^2/2)[\mathbf{k},\mathbf{p}]_z\right\}`$ can be replaced by its average, $`1/2`$. Finally, we get
$$\frac{1}{L_xL_y}\sum _{\mathbf{k}}\frac{e^{-\frac{\ell ^2}{8}\mathbf{k}^2}\,\mathrm{sin}^2\frac{\ell ^2}{2}[\mathbf{k},\mathbf{p}]_z}{E_{\mathbf{k}}^2E_{\mathbf{k}+\mathbf{p}}}\approx \frac{1}{E_c\ell ^2\mathbf{p}^2}\frac{1}{2\pi }\int k\,dk\,\frac{e^{-\frac{\ell ^2}{8}k^2}}{\left(\mathrm{\Delta }+E_c\frac{\ell ^2}{2}k^2\right)^2}$$
$`(23)`$
The integral can be evaluated explicitly; specifically, for $`\frac{\mathrm{\Delta }}{E_c}\ll 1`$ we get
$$\int _0^{\mathrm{\infty }}k\,dk\,\frac{e^{-\frac{\ell ^2}{8}k^2}}{\left(\mathrm{\Delta }+E_c\frac{\ell ^2}{2}k^2\right)^2}\approx \frac{1}{\ell ^2E_c\mathrm{\Delta }}$$
$`(24)`$
so that
$$\omega _U=\frac{2C^2}{\hbar \pi E_c^2\ell ^4\mathrm{\Delta }}\frac{1}{(L_xL_y)^2}\sum _{\mathbf{p}}\frac{U(\mathbf{p})U(-\mathbf{p})}{\mathbf{p}^2}$$
$`(25)`$
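The approximation made in (24) is easy to check numerically. A minimal sketch in dimensionless units with $`\ell =E_c=1`$ and an illustrative gap ratio $`\mathrm{\Delta }/E_c=0.01`$ (our choice of ratio, not from the paper):

```python
import math

Delta, E_c, ell = 0.01, 1.0, 1.0  # dimensionless units; Delta/E_c << 1 assumed

def integrand(k):
    # left-hand side of Eq. (24)
    return k * math.exp(-ell**2 * k**2 / 8) / (Delta + E_c * ell**2 * k**2 / 2) ** 2

# midpoint rule on [0, 20]; the Gaussian factor kills the tail
N, kmax = 200_000, 20.0
h = kmax / N
val = h * sum(integrand((i + 0.5) * h) for i in range(N))

approx = 1.0 / (ell**2 * E_c * Delta)  # right-hand side of Eq. (24)
print(val, approx, val / approx)       # ratio approaches 1 as Delta/E_c -> 0
```

For this gap ratio the two sides already agree to better than two percent.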
Recall that we have assumed the white-noise distribution for the impurity potential, $`\left\langle U_{\mathrm{imp}}\left(\mathbf{r}\right)U_{\mathrm{imp}}\left(\mathbf{r}^{\prime}\right)\right\rangle =Q\delta ^{(2)}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)`$. This corresponds to the following probability distribution functional for the Fourier-transformed potential,
$$P\left[U(\mathbf{p})\right]=N\mathrm{exp}\left[-\frac{1}{2QL_xL_y}\sum _{\mathbf{p}}U(\mathbf{p})U(-\mathbf{p})\right]$$
$`(26)`$
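Under this Gaussian weight each Fourier mode obeys $`\langle U(\mathbf{p})U(-\mathbf{p})\rangle =QL_xL_y`$ (with $`U(-\mathbf{p})=\overline{U(\mathbf{p})}`$ for a real potential). A minimal Monte-Carlo sketch; the per-component normalization below and the values of $`Q`$ and the area are our illustrative assumptions:

```python
import random

random.seed(1)
Q, area = 0.7, 3.0     # illustrative values of Q and L_x*L_y
sigma2 = Q * area / 2  # assumed variance of each real/imaginary component

N = 100_000
acc = 0.0
for _ in range(N):
    x = random.gauss(0.0, sigma2 ** 0.5)  # Re U(p)
    y = random.gauss(0.0, sigma2 ** 0.5)  # Im U(p)
    acc += x * x + y * y                  # U(p) U(-p) = |U(p)|^2
mean = acc / N
print(mean, Q * area)  # should agree within sampling error
```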
The latter expression, and other approximations assumed earlier, allow us to reduce the averaging of $`e^{i\omega _Ut}`$ to a product of Gaussian integrations. The off-diagonal elements of the nuclear-spin density matrix are, thus,
$$\rho _{01}\propto \prod _{\mathbf{p}}\left(1-\frac{i\tau }{L_xL_y\mathbf{p}^2}\right)^{-\frac{1}{2}}=\mathrm{exp}\left[-\frac{1}{2}\sum _{\mathbf{p}}\mathrm{ln}\left(1-\frac{i\tau }{L_xL_y\mathbf{p}^2}\right)\right]$$
$`(27)`$
where $`\tau =4QC^2t/(\hbar \pi E_c^2\ell ^4\mathrm{\Delta })`$.
We are interested in the real part of the sum in (27), which represents decoherence/dephasing of the nuclear spin. The off-diagonal elements decay exponentially as
$$\rho _{01}\propto \mathrm{exp}\left[-\frac{1}{4}\sum _{\mathbf{p}}\mathrm{ln}\left(1+\frac{\tau ^2}{(L_xL_y)^2\mathbf{p}^4}\right)\right]$$
$`(28)`$
The summation over $`\mathbf{p}`$ in (28) can be converted into integration
$$\sum _{\mathbf{p}}\mathrm{ln}\left(1+\frac{\tau ^2}{(L_xL_y)^2\mathbf{p}^4}\right)=\frac{L_xL_y}{(2\pi )^2}\int d^2\mathbf{p}\,\mathrm{ln}\left(1+\frac{\tau ^2}{(L_xL_y)^2\mathbf{p}^4}\right)$$
$`(29)`$
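After rescaling $`\mathbf{p}`$, the angular integration is trivial and the radial part reduces to the elementary integral $`\int _0^{\mathrm{\infty }}\mathrm{ln}(1+x^{-2})dx=\pi `$, which is easy to verify numerically (folding the tail onto $`(0,1)`$ via $`x\to 1/x`$):

```python
import math

# integrate ln(1 + 1/x^2) over (0, inf); the substitution x -> 1/x maps the
# tail onto (0, 1), so total = integral over (0,1) of the combined integrand
def g(x):
    return math.log(1 + 1 / x**2) + math.log(1 + x**2) / x**2

N = 200_000
h = 1.0 / N
val = h * sum(g((i + 0.5) * h) for i in range(N))  # midpoint rule
print(val, math.pi)  # val is close to pi
```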
Explicit calculation then yields the result that $`\rho _{01}\propto e^{-\tau /16}`$ or $`\rho _{01}\propto e^{-t/T_2}`$, where
$$T_2=\frac{2\hbar \ell ^2\mathrm{\Delta }}{U_2C^2}$$
$`(30)`$
with $`U_2=Q/(2\pi \ell ^2E_c^2)`$.
5. Results and Discussion
The quantity $`U_2`$ characterizes the strength of the impurity potential with respect to the Coulomb interactions. Let us summarize typical parameter values for a GaAs heterojunction, which is the system best studied in the literature. For magnetic field value $`B=10`$T, we have the following values of parameters, $`\ell =0.8\times 10^{-8}`$m, $`E_c=3\times 10^{-21}`$J, $`C=2.5\times 10^{-36}`$J m, $`\mathrm{\Delta }=4.6\times 10^{-23}`$J. From experimental data for electronic mobility, one then estimates $`U_2\approx 0.0025`$, yielding $`T_2\approx 40\mathrm{sec}`$. We emphasize that this is an order of magnitude estimate only, because of the uncertainty in various parameter values assumed and the fact that the parameters, especially the strength of the disorder, may vary significantly from sample to sample. For instance, there is another estimate of the disorder strength $`Q`$ available in the literature, obtained by fitting the value of $`T_1`$ to the experimentally measured $`10^3`$sec, as cited earlier. This yields an estimate for $`T_2`$ that is smaller, $`T_2\approx 0.5`$sec. Generally, we expect that with typical-quality samples $`T_2`$ may be a fraction of a second or somewhat larger.
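The $`T_2\approx 40`$ sec figure follows from (30) by direct arithmetic with the parameter values just quoted:

```python
hbar = 1.0545718e-34  # J s

# Parameter values quoted above for a GaAs heterojunction at B = 10 T
ell = 0.8e-8     # m
Delta = 4.6e-23  # J
C = 2.5e-36      # J m
U2 = 0.0025      # dimensionless disorder strength

T2 = 2 * hbar * ell**2 * Delta / (U2 * C**2)  # Eq. (30)
print(f"T2 = {T2:.0f} sec")                   # ~40 sec
```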
Let us point out that to date there are no direct experimental probes of dephasing by the disorder-dominated mechanism identified here for dilute nuclear-spin positioning appropriate for quantum computing. Such systems were never engineered. For those materials whose atoms have nonzero nuclear-spin-isotope nuclei, specifically, for GaAs (spins $`3/2`$), we are aware only of one experiment where indirect information on dephasing can be obtained from the linewidth. However, in that case the dipolar interactions cannot be neglected and likely provide the dominant dephasing mechanism. For quantum computing, the host material will have to be isotope-engineered with zero nuclear spins, e.g., Si.
Let us now compare various time scales relevant for quantum computing applications. The relaxation time $`T_1`$ is of order $`10^3`$sec. For the spin-spin interaction time scale $`T_{\mathrm{int}}`$, values as short as $`10^{-11}`$sec have been proposed. These estimates are definitely overly optimistic and require further work. Since such calculations require considerations beyond the single-spin interactions, they are outside the scope of the present work. For $`T_{\mathrm{ext}}`$, modern experiments have used NMR field intensities corresponding to the spin-flip times of $`10^{-5}`$sec. This can be reduced to $`10^{-7}`$sec, and with substantial experimental effort, perhaps even shorter times, the main limitation being heating up of the sample by the radiation.
Thus, the present information on the relevant time scales does not show violation of the condition $`T_1,T_2\gg T_{\mathrm{ext}},T_{\mathrm{int}}`$, stated in the introduction, required for quantum computing. To firmly establish the feasibility of quantum computing, reliable theoretical evaluation of $`T_{\mathrm{int}}`$ is needed, as well as experimental realizations of few-qubit systems engineered with nuclear spins positioned at separations of order 30 to 100 Å.
We also note that typical lab samples, for which the parameter values used were estimated, have been prepared to observe the quantum-Hall-effect plateaus in the resistance. The latter requires a finite density of impurities. However, for the quantum-computer applications, a much cleaner sample would suffice. Indeed, as suggested by our calculations, $`T_2`$ is mostly due to dephasing owing to virtual spin-exciton scattering from impurities. Therefore, the value of $`T_2`$ can be increased by using cleaner samples.
We acknowledge helpful discussions with Drs. S. E. Barrett, M. L. Glasser and R. Mani. This research was supported by the National Security Agency (NSA) and Advanced Research and Development Activity (ARDA) under Army Research Office (ARO) contract number DAAD 19-99-1-0342.
REFERENCES
1. V. Privman, I.D. Vagner and G. Kventsel, Phys. Lett. A 239, 141 (1998).
2. B.E. Kane, Nature 393, 133 (1998).
3. C.M. Bowden and S.D. Pethel, Laser Phys. 10, 35 (2000).
4. The Quantum Hall Effect, R.E. Prange and S.M. Girvin, editors (Springer-Verlag, New York, 1987).
5. Yu.A. Bychkov, T. Maniv and I.D. Vagner, Solid State Commun. 94, 61 (1995).
6. I.D. Vagner and T. Maniv, Physica B 204, 141 (1995).
7. D. Loss and D.P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
8. M.S. Sherwin, A. Imamoglu and T. Montroy, Phys. Rev. A 60, 3508 (1999).
9. A. Imamoglu, D.D. Awschalom, G. Burkard, D.P. DiVincenzo, D. Loss, M. Sherwin and A. Small, Phys. Rev. Lett. 83, 4204 (1999).
10. R. Vrijen, E. Yablonovitch, K. Wang, H.W. Jiang, A. Balandin, V. Roychowdhury, T. Mor and D. DiVincenzo, Phys. Rev. A 62, 012306 (2000).
11. T. Tanamoto, Physica B 272, 45 (1999).
12. G.D. Sanders, K.W. Kim and W.C. Holton, Phys. Rev. A 60, 4146 (1999).
13. S. Bandyopadhyay, Phys. Rev. B 61, 13813 (2000).
14. X. Hu and S. Das Sarma, Phys. Rev. A 61, 062301 (2000).
15. N.-J. Wu, M. Kamada, A. Natori and H. Yasunaga, Quantum Computer Using Coupled Quantum Dot Molecules, preprint quant-ph/9912036.
16. K. Blum, Density Matrix Theory and Applications (Plenum, New York, 1981).
17. C.P. Slichter, Principles of Magnetic Resonance, Third Edition (Springer-Verlag, Berlin, 1990).
18. D. Mozyrsky and V. Privman, J. Statist. Phys. 91, 787 (1998).
19. Mesoscopic Phenomena in Solids, Modern Problems in Condensed Matter Sciences — Vol. 30, B.L. Altshuler, P.A. Lee and R.A. Webb, editors (Elsevier, Amsterdam, 1991).
20. A.J. Legget, S. Chakravarty, A.T. Dorsey, M.P.A. Fisher and W. Zwerger, Rev. Mod. Phys. 59, 1 (1987) \[Erratum ibid. 67, 725 (1995)\].
21. I.D. Vagner and T. Maniv, Phys. Rev. Lett. 61, 1400 (1988).
22. D. Antoniou and A.H. MacDonald, Phys. Rev. B 43, 11686 (1991).
23. S.V. Iordanskii, S.V. Meshkov and I.D. Vagner, Phys. Rev. B 44, 6554 (1991).
24. Yu.A. Bychkov, S.V. Iordanskii and G.M. Eliashberg, JETP Lett. 33, 143 (1981).
25. C. Kallin and B.I. Halperin, Phys. Rev. B 30, 5655 (1984).
26. C. Kallin and B.I. Halperin, Phys. Rev. B 31, 3635 (1985).
27. Quantum Hall Effect, A.H. MacDonald, Editor (Kluwer Academic Publ., Dordrecht, 1989).
28. Quantum Hall Effect, M. Stone, Editor (World Scientific, Singapore, 1992).
29. Perspectives in Quantum Hall Effects: Novel Quantum Liquids in Low-Dimensional Semiconductor Structures, S. Das Sarma and A. Pinczuk, Editors (Wiley, New York, 1996).
30. W.H. Louisell, Quantum Statistical Properties of Radiation (Wiley, New York, 1973).
31. B.I. Shklovskii and A.L. Efros, Electronic Properties of Doped Semiconductors (Springer-Verlag, Berlin, 1984).
32. R. Kubo, M. Toda and N. Hashitsume, Statistical Physics, Vol. II (Springer-Verlag, Berlin, 1985).
33. A. Berg, M. Dobers, R.R. Gerhardts and K. von Klitzing, Phys. Rev. Lett. 64, 2563 (1990).
34. M. Dobers, K. von Klitzing, G. Weiman and K. Ploog, Phys. Rev. Lett. 61, 1650 (1988).
35. S.E. Barrett, G. Dabbagh, L.N. Pfeiffer, K.W. West and R. Tycko, Phys. Rev. Lett. 74, 5112 (1995).
# Geometry, Statistics and Asymptotics of Quantum Pumps
## Abstract
We give a pedestrian derivation of a formula of Büttiker et al. (BPT) relating the adiabatically pumped current to the $`S`$ matrix and its (time) derivatives. We relate the charge in BPT to Berry's phase and the corresponding Brouwer pumping formula to curvature. As applications we derive explicit formulas for the joint probability density of pumping and conductance when the $`S`$ matrix is uniformly distributed; and derive a new formula that describes hard pumping when the $`S`$ matrix is periodic in the driving parameters.
Brouwer, and Aleiner et al., building on results of Büttiker, Pretre and Thomas (BPT), pointed out that adiabatic scattering theory leads to a geometric description of charge transport in mesoscopic quantum pumps. Some of these works, and certainly our own work, were motivated by experimental results of Switkes et al. on such pumps.
In this article we examine the formula of BPT, which relates adiabatic charge transport to the $`S`$ matrix and its (time) derivatives, in the special case of single-channel scattering. We show that the formula admits a simple interpretation in terms of three basic processes at the Fermi energy. Two of these are dissipative and non-quantized. The third integrates to zero for any cyclic variation in the system.
Next, we describe the geometric significance of BPT and relate it to Berry's phase. It follows that the pumping formula of Brouwer can be interpreted as curvature and is formally identical to the adiabatic curvature. In spite of the interesting geometry the topological aspects of pumping are trivial. In particular, we prove that all Chern numbers associated to the Brouwer formula are identically zero.
We proceed with two applications. First we give an elementary and explicit derivation of the joint probability density for pumping and conductance. This problem has been studied before. Brouwer's results go beyond ours, as he also calculates the tails of the distributions and we don't. On the other hand, parts of his results are numerical, and they are certainly not elementary. Finally, we calculate, for the first time, the asymptotics of hard pumping for $`S`$ matrices that depend periodically on two parameters. If the system traverses a circle of radius $`R`$ in parameter space, with $`R`$ large, then the amount of charge transported is of order $`\sqrt{R}`$, multiplied by a quasi-periodic (oscillatory) function of $`R`$ leading to ergodic behavior.
We shall use units where $`e=m=\hbar =1`$, so the electron charge is $`1`$ and the quantum of conductance is $`e^2/h=\frac{1}{2\pi }`$. The mutual Coulombic interaction of the electrons is disregarded.
The BPT formula: Consider a scatterer connected to leads that terminate at electron reservoirs. All the reservoirs are initially at the same chemical potential and at zero temperature. The scatterer is described by its (on-shell) $`S`$ matrix, which, in the case of $`n`$ channels is an $`n\times n`$ matrix parameterized by the energy and other parameters associated with the adiabatic driving of the system (e.g. gate voltages and magnetic fields).
The BPT formula says that the charge $`dq_\ell `$ entering the scatterer from the $`\ell `$-th lead due to an adiabatic variation of $`S`$ is
$$dq_\ell =-\frac{i}{2\pi }\text{Tr}\left(Q_\ell \,dS\,S^{\dagger }\right),$$
(1)
where $`Q_\ell `$ is a projection on the channels in the $`\ell `$-th lead, and the $`S`$ matrix is evaluated at the Fermi energy. In the special case of two leads, each lead carrying a single channel,
$$S=\left(\begin{array}{cc}r& t^{\prime }\\ t& r^{\prime }\end{array}\right),Q_\ell =\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)$$
(2)
where $`r,(r^{\prime })`$ and $`t,(t^{\prime })`$ are the reflection and transmission coefficients from the left (right) and $`Q_\ell `$ projects on the left lead. In this case Eq. (1), for the charge entering through the left lead, reduces to
$$2\pi dq_\ell =-i(\overline{r}\,dr+\overline{t}^{\prime }\,dt^{\prime }).$$
(3)
We shall present an elementary derivation of (3).
Derivation: Every unitary $`2\times 2`$ matrix can be expressed in the form:
$$S=e^{-i\gamma }\left(\begin{array}{cc}\mathrm{cos}(\theta )e^{i\alpha }& i\mathrm{sin}(\theta )e^{i\varphi }\\ i\mathrm{sin}(\theta )e^{-i\varphi }& \mathrm{cos}(\theta )e^{-i\alpha }\end{array}\right),$$
(4)
where $`0\le \alpha ,\varphi <2\pi ,0\le \gamma <\pi `$ and $`0\le \theta \le \pi /2`$. In terms of these parameters, Eq. (3) reads
$$2\pi dq_\ell =\mathrm{cos}^2(\theta )d\alpha +\mathrm{sin}^2(\theta )d\varphi -d\gamma .$$
(5)
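The chain from Eq. (1) to Eq. (5) can be checked numerically. The sketch below is our own, not from the paper; it uses the parametrization of Eq. (4) with the phase convention $`detS=e^{-2i\gamma }`$, fixes the overall sign of the trace formula so that it reproduces (5), and replaces $`dS`$ by a finite difference:

```python
import numpy as np

def S(alpha, phi, gamma, theta):
    # parametrization of Eq. (4); phase chosen so that det S = exp(-2i*gamma),
    # i.e. gamma = (i/2) log det S  (a convention assumed here)
    return np.exp(-1j*gamma)*np.array(
        [[np.cos(theta)*np.exp(1j*alpha),   1j*np.sin(theta)*np.exp(1j*phi)],
         [1j*np.sin(theta)*np.exp(-1j*phi), np.cos(theta)*np.exp(-1j*alpha)]])

Q = np.diag([1.0, 0.0])                     # projection on the left lead

def dq_trace(p, dp, eps=1e-6):
    # trace formula dq = -(i/2π) Tr(Q dS S†), with dS from a forward difference
    p, dp = np.asarray(p, float), np.asarray(dp, float)
    dS = (S(*(p + eps*dp)) - S(*p))/eps
    return float(np.real(-1j/(2*np.pi)*np.trace(Q @ dS @ S(*p).conj().T)))

def dq_explicit(p, dp):
    # Eq. (5): 2π dq = cos²θ dα + sin²θ dφ - dγ
    (_, _, _, theta), (da, dphi, dgamma, _) = p, dp
    return (np.cos(theta)**2*da + np.sin(theta)**2*dphi - dgamma)/(2*np.pi)

p0 = (0.3, 1.1, 0.7, 0.5)
for dp in [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0.2,-0.4,0.1,0.3)]:
    assert abs(dq_trace(p0, dp) - dq_explicit(p0, dp)) < 1e-4
```

Each coordinate direction reproduces the corresponding term of (5): translations pump $`\mathrm{cos}^2\theta `$, the EMF pumps $`\mathrm{sin}^2\theta `$, and a variation of $`\theta `$ alone pumps nothing.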
The basic strategy of our derivation of Eq. (5) is to find processes that vary each of the parameters in turn, and keep track of how much current is generated by each process. An underlying assumption is that current depends only on $`S(k_F)`$ and $`\dot{S}(k_F)`$, so that processes that give rise to the same change in the $`S`$ matrix also give rise to the same current. Because we do not prove this assertion, our derivation cannot be considered a complete proof.
We understand the four parameters as follows. (See figures.) The parameter $`\alpha `$ is associated with translations: Translating the scatterer a distance $`dL=d\alpha /2k_F`$ to the right multiplies $`r,(r^{\prime })`$ by $`e^{id\alpha }`$, $`(e^{-id\alpha })`$, and leaves $`t`$ and $`t^{\prime }`$ unchanged. The parameter $`\varphi `$ is associated with a vector potential $`A`$ near the scatterer. This induces a phase shift $`d\varphi =\int A`$ across the scatterer, and multiplies $`t,(t^{\prime })`$ by $`e^{-id\varphi }`$, $`(e^{id\varphi })`$, while leaving $`r`$ and $`r^{\prime }`$ unchanged. The parameter $`\theta `$ determines the conductance of the system: $`g=|t|^2/2\pi =\mathrm{sin}^2(\theta )/2\pi `$. Finally, $`\gamma =(i/2)\mathrm{log}detS`$ is related, by Krein’s spectral shift , to the number of electrons trapped in the scatterer. As a consequence, for any closed path in the space of Hamiltonians $`\oint d\gamma =0`$.
To determine the effect of changing $`\alpha `$ we imagine a process that changes $`\alpha `$ and leaves the other parameters fixed, namely translating the scatterer a distance $`dL=d\alpha /2k_F`$ to the right. The scatterer passes through a fraction $`|t|^2`$ of the $`k_FdL/\pi =d\alpha /2\pi `$ electrons that occupy the region of size $`dL`$, and pushes the remaining $`|r|^2d\alpha /2\pi `$ electrons forward. Thus
$$2\pi dq=\mathrm{cos}^2(\theta )d\alpha .$$
(6)
This result can be obtained less heuristically, by working in the reference frame of the moving scatterer and integrating the contribution of each wave number from 0 to $`k_F`$. From this one also sees that the rate of dissipation at the reservoirs, $`P`$, is quadratic in the current $`I`$, with a coefficient that depends on the dispersion relation. If the dispersion relation is quadratic, then
$$P=2\pi I^2/|r(k_F)|^2.$$
(7)
To change $`\varphi `$, we vary the vector potential. This induces an EMF of strength $`\dot{A}=\dot{\varphi }`$. The current is simply the voltage times the Landauer conductance $`|t|^2/2\pi `$ . Integrating over time gives
$$2\pi dq=\mathrm{sin}^2(\theta )d\varphi .$$
(8)
A current $`I`$ then dissipates energy at the reservoirs at a rate
$$P=2\pi I^2/|t(k_F)|^2.$$
(9)
To understand the effect of changing $`\theta `$ and $`\gamma `$, we first suppose our scatterer is right-left symmetric, so $`r=r^{\prime }`$ and $`t=t^{\prime }`$. Then changes in $`\theta `$ and $`\gamma `$ would draw equal amounts of charge to the scatterer from the left and right leads. The charge that accumulates on the scatterer is given by Krein’s spectral shift . The charge coming from the left is half this, namely:
$$2\pi dq=-\frac{2\pi i}{4\pi }d\mathrm{log}detS=-d\gamma .$$
(10)
Since every $`S`$ matrix can be obtained by translating and adding a vector potential to a right-left symmetric scatterer, formula (10) applies to all possible scatterers, symmetric or not.
Combining (6), (8) and (10), gives BPT, Eq. (5).
The effect of changing $`\gamma `$ integrates to zero on a closed loop. Changing $`\theta `$ does not give any transport at all. Thus, only changes in $`\alpha `$ and $`\varphi `$ contribute to the net transport of a quantum pump. These are dissipative processes, whose rate of energy dissipation $`P`$ (in the reservoirs) is bounded from below by $`2\pi I^2`$. This is contrary to the assertions of , who claimed that the charge transport is the sum of a quantized non-dissipative term and a dissipative term that is not quantized.
Geometrical interpretation: $`𝒜=2\pi dq`$ is the 1-form (vector potential) associated with Berry’s phase. If we define the unit spinor $`|\psi \rangle =\left(\begin{array}{c}r\\ t^{\prime }\end{array}\right)`$ then
$$𝒜=-i\langle \psi |d\psi \rangle .$$
(11)
The set of all spinors $`|\psi \rangle `$ is a 3-sphere (since $`|r|^2+|t^{\prime }|^2=1`$), while the set of all ratios $`r/t^{\prime }`$ is the projective space $`CP^1=\mathbb{C}\cup \{\mathrm{\infty }\}\cong S^2`$. The natural map between them, namely $`|\psi \rangle \mapsto r/t^{\prime }`$, is called the Hopf fibration, and $`𝒜`$ is called the “global angular form” of this fibration.
To compute the charge transported by a closed cycle $`C`$ in parameter space, we can either integrate the 1-form $`𝒜`$ around $`C`$, or (by Stokes’ theorem) integrate the exterior derivative (curl) $`\mathrm{\Omega }=d𝒜`$ over a disk $`D`$ whose boundary is $`C`$. $`\mathrm{\Omega }`$ is the curvature 2-form of Brouwer :
$`\mathrm{\Omega }`$ $`=`$ $`-i\langle d\psi |d\psi \rangle =-i{\displaystyle \frac{d\overline{z}\wedge dz}{(1+|z|^2)^2}},`$ (12)
where $`z=r/t^{\prime }`$. The expression $`-i\langle d\psi |d\psi \rangle `$ is formally identical to the adiabatic (Berry’s) curvature that appears also in the quantum Hall effect .
In the last expression one sees that the curvature sees only the ratio $`z=r/t^{\prime }`$, and not $`r`$ and $`t^{\prime }`$ separately. The curvature $`\mathrm{\Omega }`$ is the $`U(2)`$-invariant area form on $`CP^1`$, and its integral over all of $`CP^1`$ is $`2\pi `$. $`\mathrm{\Omega }`$ is also the curvature of the Hopf fibration.
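In the coordinate $`z=x+iy`$ the curvature is the area form $`2\,dx\,dy/(1+x^2+y^2)^2`$, and the statement that its integral over all of $`CP^1`$ equals $`2\pi `$ can be confirmed by a one-line radial integration (our sketch):

```python
import numpy as np

# Ω = 2 dx∧dy/(1 + x² + y²)² in the coordinate z = x + iy = r/t'.
# Integrating in polar coordinates out to a large cutoff should give ~2π.
h = 1e-3
r = np.arange(h/2, 200.0, h)                    # midpoint radial grid
total = np.sum(2.0*(2.0*np.pi*r)/(1.0 + r**2)**2)*h
assert abs(total - 2.0*np.pi) < 1e-3            # missing tail is ~2π/R² ≈ 1.6e-4
```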
In the study of non-dissipative quantum transport, Chern numbers play a role. These are topological invariants that equal the integral of the curvature over closed surfaces in parameter space. In the context of adiabatic scattering, however, all Chern numbers are zero. The vector bundle over parameter space is topologically trivial, and the vector $`(r,t^{})`$ gives a section of this bundle.
These geometrical constructions generalize to systems with $`n`$ incoming and $`m`$ outgoing channels . The first $`n`$ rows of $`S`$ span an $`n`$-dimensional complex subspace of $`\mathbb{C}^{n+m}`$. The space of all such subspaces, called a Grassmannian, has a naturally defined 2-form, called the Kähler form . Up to a constant factor, the Brouwer 2-form equals the Kähler form. In addition, there is a canonically defined line bundle over this Grassmannian, and $`𝒜`$ equals the global angular form for this bundle.
Statistics of weak pumping: Next we consider how a random scatterer transports charge when two parameters are varied gently and cyclically. More precisely, we consider the charge transported by moving along the circle $`X_1=ϵ\mathrm{cos}(\tau )`$, $`X_2=ϵ\mathrm{sin}(\tau )`$ in parameter space. If $`ϵ`$ is small, then the charge transport is close to $`\frac{\pi ϵ^2}{2\pi }\mathrm{\Omega }(\partial _1,\partial _2)`$, evaluated at the origin, where $`\partial _j`$ are the tangent vectors associated with the parameters $`X_j`$. The vectors $`\partial _j`$ map to random vectors on $`U(2)`$, which we assume to be Gaussian with covariance $`C`$. The problem is then to understand the possible values of the curvature $`\mathrm{\Omega }`$ applied to two random vectors.
To do this, we first need to understand the statistics of 2-forms applied to pairs of random vectors, and to understand the geometry of the group $`U(2)`$.
Take two random vectors in $`R^2`$, and see how much area they span. By random vectors we mean independent, identically distributed Gaussian random vectors whose components $`X_j`$ have the covariance $`\langle X_iX_j\rangle =C\delta _{ij}`$. The area $`A`$ is distributed as a 2-sided exponential:
$$dP(A)=\frac{1}{2C}e^{-|A|/C}dA.$$
(13)
This is seen as follows. If the two vectors are $`X`$ and $`Y`$, then the area is $`X_1Y_2-X_2Y_1`$. $`X_1Y_2`$ and $`X_2Y_1`$ are independent random variables, and a calculation shows that their characteristic function is $`1/\sqrt{k^2C^2+1}`$. Their sum is a random variable with characteristic function $`(k^2C^2+1)^{-1}`$, and so exponentially distributed.
We parameterize the group $`U(2)`$ by the angles $`(\alpha ,\gamma ,\varphi ,\theta )`$, as in (4). A standard, bi-invariant metric on $`U(2)`$ is
$$\frac{1}{2}Tr(dS^{\dagger }dS)=(d\gamma )^2+\mathrm{cos}^2\theta (d\alpha )^2+\mathrm{sin}^2\theta (d\varphi )^2+(d\theta )^2.$$
(14)
In this metric the vectors $`\partial _i`$ are orthogonal but not orthonormal. Unit tangent vectors are
$$e_\gamma =\partial _\gamma ,e_\alpha =\frac{1}{\mathrm{cos}\theta }\partial _\alpha ,e_\varphi =\frac{1}{\mathrm{sin}\theta }\partial _\varphi ,e_\theta =\partial _\theta .$$
(16)
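As a quick consistency check (ours, using the same parametrization as Eq. (4) with overall phase $`e^{-i\gamma }`$), one can verify the diagonal form of the metric by comparing $`\frac{1}{2}Tr(dS^{\dagger }dS)`$ for a small displacement against the stated quadratic form:

```python
import numpy as np

def S(alpha, phi, gamma, theta):
    # same parametrization as Eq. (4) (phase convention assumed here)
    return np.exp(-1j*gamma)*np.array(
        [[np.cos(theta)*np.exp(1j*alpha),   1j*np.sin(theta)*np.exp(1j*phi)],
         [1j*np.sin(theta)*np.exp(-1j*phi), np.cos(theta)*np.exp(-1j*alpha)]])

p  = np.array([0.4, 1.2, 0.8, 0.6])
dp = 1e-5*np.array([0.3, -0.7, 0.5, 0.9])       # small displacement
dS = S(*(p + dp)) - S(*p)
ds2 = 0.5*np.real(np.trace(dS.conj().T @ dS))   # (1/2) Tr(dS† dS)

alpha, phi, gamma, theta = p
da, dphi, dgamma, dtheta = dp
metric = dgamma**2 + np.cos(theta)**2*da**2 + np.sin(theta)**2*dphi**2 + dtheta**2
assert abs(ds2 - metric)/metric < 1e-3
```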
The volume form is $`\mathrm{sin}(\theta )\mathrm{cos}(\theta )d\alpha d\gamma d\varphi d\theta .`$ The curvature 2-form, from Eq. (12), is
$$\mathrm{\Omega }=2\mathrm{sin}(\theta )\mathrm{cos}(\theta )d\theta \wedge (-d\alpha +d\varphi ).$$
(17)
A scattering matrix is time reversal invariant if and only if $`t=t^{\prime }`$. The space of time-reversal matrices is parameterized exactly as before, only now with $`\varphi `$ identically zero. The volume form for the metric inherited from $`U(2)`$ is $`\mathrm{cos}(\theta )d\alpha d\gamma d\theta ,`$ and the curvature form is now $`\mathrm{\Omega }=-2\mathrm{sin}(\theta )\mathrm{cos}(\theta )d\theta \wedge d\alpha `$.
We are now prepared to compute the statistics of weak pumping, assuming that the $`S`$ matrix is uniformly distributed and that the tangent vectors to the space of $`S`$ matrices are Gaussian random variables. This problem was studied by Brouwer , in the framework of random matrix theory, by which we mean that Brouwer posits an a-priori measure on the space of Hamiltonians. Random matrix theory is more powerful, in that the distribution of tangent vectors is fixed by the theory. The price one pays is that the analysis is also far from elementary and the results are, in part, only numerical.
For systems without time reversal symmetry, random matrix theory posits that the $`S`$ matrix is distributed on $`U(2)`$ with a uniform measure. Since the conductance $`g`$ is $`g\propto |t|^2=\mathrm{sin}^2\theta `$ we have that $`dg\propto \mathrm{sin}\theta \mathrm{cos}\theta \,d\theta `$, proportional to the volume form: the conductance $`g`$ is therefore uniformly distributed.
A random tangent vector to $`U(2)`$ is $`X=X_\theta e_\theta +X_\alpha e_\alpha +X_\varphi e_\varphi +X_\gamma e_\gamma ,`$ where $`X_j`$ are Gaussians with $`\langle X_jX_k\rangle =C\delta _{jk}`$. The curvature associated with two random tangent vectors $`X,Y`$ is, by Eq. (17)
$$\mathrm{\Omega }(X,Y)=2\left(X_\theta W_\theta -Y_\theta Z_\theta \right),$$
(18)
where $`W_\theta =-\mathrm{sin}\theta Y_\alpha +\mathrm{cos}\theta Y_\varphi `$ and $`Z_\theta =-\mathrm{sin}\theta X_\alpha +\mathrm{cos}\theta X_\varphi .`$ The variables $`W_\theta `$ and $`Z_\theta `$ are independent, each with variance $`C`$. From Eq. (13), the distribution of the curvature is exponential and independent of $`|t|`$. The joint distribution of curvature, $`\omega `$, and conductance, $`g=\frac{1}{2\pi }|t|^2`$, is given by the probability density
$$\frac{\pi }{2C}e^{-|\omega |/2C}d\omega dg$$
(19)
with $`\omega `$ ranging from $`-\mathrm{\infty }`$ to $`\mathrm{\infty }`$ and $`g`$ from 0 to $`\frac{1}{2\pi }`$.
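The Laplace law (19) is easy to reproduce by Monte Carlo. The sketch below is ours; it uses the fact, noted above, that $`X_\theta ,W_\theta ,Y_\theta ,Z_\theta `$ are independent Gaussians of variance $`C`$:

```python
import numpy as np

rng = np.random.default_rng(0)
C, n = 1.0, 200_000
# in Eq. (18) the four variables X_θ, W_θ, Y_θ, Z_θ are independent N(0, C)
Xt, Yt = rng.normal(0, np.sqrt(C), n), rng.normal(0, np.sqrt(C), n)
W,  Z  = rng.normal(0, np.sqrt(C), n), rng.normal(0, np.sqrt(C), n)
omega = 2.0*(Xt*W - Yt*Z)

# two-sided exponential of Eq. (19): density e^{-|ω|/2C}/(4C),
# hence E|ω| = 2C and Var ω = 8C²
assert abs(np.mean(np.abs(omega)) - 2.0*C) < 0.05
assert abs(np.var(omega) - 8.0*C**2) < 0.3
```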
For systems with time reversal symmetry, the $`S`$ matrix is uniformly distributed on the $`t=t^{\prime }`$ submanifold, with the metric inherited from $`U(2)`$. The tangent vectors are now Gaussian random variables of the form $`X=X_\theta e_\theta +X_\alpha e_\alpha +X_\gamma e_\gamma ,`$ and the curvature is now
$$\mathrm{\Omega }(X,Y)=-2\mathrm{sin}\theta \left(X_\theta Y_\alpha -Y_\theta X_\alpha \right).$$
(20)
Since the curvature depends on $`\theta `$, the curvature and the conductance are correlated. The volume form indicates that $`\sqrt{g}`$, and not $`g`$, is uniformly distributed. This favors insulators. The joint distribution for curvature and conductance is
$$\frac{1}{4\sqrt{g}C}e^{-|\omega |/2C\sqrt{2\pi g}}d\omega d(\sqrt{g}).$$
(21)
This formula says that, statistically, good pumps tend to be good conductors; $`\omega /\sqrt{g}`$, rather than $`\omega `$ itself, is independent of $`g`$.
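A similar Monte Carlo sketch (ours) confirms the time-reversal case: at fixed $`\theta `$ the curvature is Laplace-distributed with scale $`2C\mathrm{sin}\theta =2C\sqrt{2\pi g}`$, so the mean pumped charge grows with the conductance:

```python
import numpy as np

rng = np.random.default_rng(1)
C, n = 1.0, 200_000
for theta in (0.3, 1.0):                         # two fixed points on the manifold
    Xt, Xa = rng.normal(0, 1.0, n), rng.normal(0, 1.0, n)
    Yt, Ya = rng.normal(0, 1.0, n), rng.normal(0, 1.0, n)
    omega = -2.0*np.sin(theta)*(Xt*Ya - Yt*Xa)   # Eq. (20), with C = 1
    # conditional Laplace scale 2C sinθ = 2C√(2πg): E|ω| = 2C sinθ
    assert abs(np.mean(np.abs(omega)) - 2.0*C*np.sin(theta)) < 0.05
```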
We have assumed, so far, that the variance $`C`$ is a constant. There is no reason for this and it is natural to let $`C`$ itself be a random variable. Given a probability distribution for the covariance, $`d\mu (C)`$, one integrates the formulas (19) and (21) over $`C`$. One sees, by inspection, that in the absence (presence) of time reversal symmetry, $`\omega `$ ($`\omega /\sqrt{g}`$) is independent of $`g`$. Furthermore, the distribution of $`\omega `$ after integrating over $`g`$ is smooth away from $`\omega =0`$, but has a discontinuity in derivative (log divergence) at $`\omega =0`$. In these qualitative features, our results agree with Brouwer’s. However, the tails of the distribution for large pumping may depend on the tail of $`d\mu (C)`$: while (19) and (21) have exponentially small tails, power-law tails in $`d\mu (C)`$ will lead to power-law tails in $`\omega `$. Since we do not determine $`d\mu (C)`$ we cannot determine the tails. Using random matrix theory, Brouwer determined the power decay in $`\omega `$ .
Hard Pumping: Finally, we consider what happens for hard pumping. Here one can no longer evaluate the curvature at a point and multiply by the area. One needs to honestly integrate the curvature. Hard pumping was addressed by , who studied it in the context of random matrix theory and showed, using rather involved diagrammatic techniques, that pumping scales like the root of the perimeter. Here we shall describe a complementary, elementary result that holds provided the $`S`$ matrix is a periodic function of the parameters. This is the case, for example, when the pumping is driven by two Aharonov-Bohm fluxes.
With $`S(x,y)`$ periodic in the driving parameters $`x`$ and $`y`$, so is the curvature $`\mathrm{\Omega }(x,y)=\sum _{m,n}\widehat{\mathrm{\Omega }}_{mn}e^{i(mx+ny)}.`$ Since the global angular form is also periodic, $`\widehat{\mathrm{\Omega }}_{00}=0`$.
The integral $`\int _{|x|<R}\mathrm{\Omega }`$ of the curvature on a large disc of radius $`R`$ is, to leading order,
$$\sqrt{8\pi R}\sum _{n,m}\frac{\widehat{\mathrm{\Omega }}_{nm}}{N(n,m)^{3/2}}\mathrm{sin}\left(N(n,m)R-\frac{\pi }{4}\right),$$
(22)
where $`N(n,m)=\sqrt{n^2+m^2}`$. The charge transported in a cycle is proportional to the square root of the perimeter (or the fourth root of the area) times a quasiperiodic function of $`R`$. This follows from the evaluation of the elementary integral $`\int _{|x|<R}dx\,dy\,e^{i(nx+my)}`$, which equals a Bessel function whose large-$`R`$ asymptotic behavior is $`\sqrt{\frac{8\pi R}{N^3}}\mathrm{sin}(NR-\frac{\pi }{4}).`$ From Eq. (22) one can determine the probability distribution for charge transport (viewed as a random variable with uniform distribution on the radius $`R`$). Eq. (22) turns out to be closely related to a celebrated problem in ergodic number theory: The Gauss circle problem .
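The Bessel-function step can be checked directly. In this sketch of ours, $`J_1`$ is evaluated from its standard integral representation, so only NumPy is needed; the disk integral equals $`2\pi RJ_1(NR)/N`$, which is compared with the quoted asymptote:

```python
import numpy as np

def J1(z, k=200_000):
    # Bessel J1 via the integral representation (1/π) ∫₀^π cos(τ - z sin τ) dτ
    tau = np.linspace(0.0, np.pi, k)
    f = np.cos(tau - z*np.sin(tau))
    h = tau[1] - tau[0]
    return (f.sum() - 0.5*(f[0] + f[-1]))*h/np.pi   # trapezoid rule

n, m = 1, 0                                  # one Fourier mode; N = sqrt(n²+m²)
N = float(np.hypot(n, m))
R = 200.0
exact = 2.0*np.pi*R*J1(N*R)/N                # ∫_{|x|<R} e^{i(nx+my)} dx dy
approx = np.sqrt(8.0*np.pi*R/N**3)*np.sin(N*R - np.pi/4)
assert abs(exact - approx) < 0.3             # next-order correction is O(1/NR)
```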
This result does not directly apply to the pump studied by Switkes , because the parameters they vary do not have built-in periodicity. Nevertheless, it illustrates two features of pumps that have been observed experimentally. The first is that hard driving transports a lot of charge, with scaling that is sublinear, as in a random process, and the second that the directionality of hard pumping is essentially unpredictable.
We thank A. Kamenev for extremely valuable insights and A. Auerbach, P. Brouwer and C. Marcus for useful correspondences, and U. Sivan for a careful reading of the manuscript, and valuable suggestions. This research was supported in part by the Israel Science Foundation, the Fund for Promotion of Research at the Technion, the DFG, the National Science Foundation and the Texas Advanced Research Program.
no-problem/0002/nlin0002013.html | ar5iv | text
# Fractal stabilization of Wannier-Stark resonances
## 1
In this letter we study the spectral and dynamical properties of a Bloch particle affected by static and time-periodic forces:
$$\widehat{H}=\widehat{p}^2/2+\mathrm{cos}x+Fx+F_\omega \mathrm{cos}(\omega t)x,\widehat{p}=-i\hbar ^{\prime }\mathrm{d}/\mathrm{d}x,$$
(1)
where $`\hbar ^{\prime }`$ is the scaled Planck constant (see below). Originally this problem was formulated for a Bloch electron in dc-ac electric fields $`\widehat{H}=\widehat{p}^2/2m+V(x)+e[E+E_\omega \mathrm{cos}(\omega t)]x`$, $`V(x+a)=V(x)`$ and attracted much attention because of the similarity with the Hofstadter problem . Indeed, the energy spectrum of a Bloch electron in a 2D lattice under the action of a constant magnetic field $`B`$ depends on the magnetic matching ratio $`\beta =h/eBa^2`$ ($`a`$ is the lattice constant) and has a fractal structure as function of this parameter. Analogously, the quasienergy spectrum of system (1) depends on the electric matching ratio
$$\alpha =\frac{\omega }{\omega _B},\omega _B=\frac{2\pi F}{\hbar ^{\prime }},$$
(2)
where $`\omega _B`$ is the Bloch frequency, which is $`\omega _B=eEa/\hbar `$ in the case of a crystal electron. For rational ratios $`\alpha =r/q`$ the quasienergy spectrum has a band structure; it is discrete, however, for irrational values of $`\alpha `$ . Numerically the quasienergy spectrum of the system (1) was studied in ref. by using the tight-binding approximation. It was shown that for $`\alpha =r/q`$ the quasienergy bands are arranged in a structure resembling the famous Hofstadter butterfly (see Fig. 1 in Ref. ). It should be pointed out, however, that the results of that paper only partially describe the spectrum of the system (1) because the tight-binding approximation neglects the decay of the quasienergy states. The actual quasienergy spectrum is complex, where the imaginary part of the spectrum defines the lifetimes of the metastable quasienergy states.
Because of the extremely small lattice period in crystals, the fractal structure of the quasienergy spectrum has never been observed in solid state systems. However, a signature of it was recently found in an experiment with cold atoms in an optical lattice . The latter system models the solid state Hamiltonian (1), where the neutral atoms moving in the optical potential $`V(x)=V_0\mathrm{cos}^2(k_Lx)`$ ($`k_L`$ is the laser wave vector) take over the role of the crystal electrons. The effect of the electric fields can be mimicked, for example, by the inertial force induced by accelerating the experimental setup as a whole. (In practice, however, the acceleration was obtained by an appropriate chirping of the laser frequency.) The system “atom in a standing wave” has an essentially larger lattice period than the solid state system and is, in addition, free of relaxation processes due to scattering by impurities and the Coulomb interaction. These features of the system were utilized earlier in ref. to observe experimentally the Wannier-Stark ladder of resonances. The main modification of the experiment in comparison with the experiment is that a strong periodic driving with frequency $`\omega =(r/q)\omega _B`$ was imposed. (Only the cases $`r/q=1/2`$ and $`r/q=1/3`$ were reported.) Then the atomic survival probability as a function of the frequency of the probe signal shows additional anti-peaks (see fig. 3 in ref. ), which were interpreted as an indication of the fractal structure of the spectrum. We note that the width of these anti-peaks is given by the width (i.e. the inverse lifetime) of the first excited Wannier-Stark resonances. This imposes a fundamental restriction on the resolution and it seems impossible to resolve matching ratios $`\alpha =\omega /\omega _B`$ for $`\alpha `$ different from the lowest rational numbers.
## 2
The discussed papers study the system in the deep quantum region. In the present paper we discuss the manifestation of the fractal structure of the quasienergy spectrum in the semiclassical region of the system parameters. We shall show that in this region the fractal nature of the spectrum can be observed without using a probe signal and the electric matching ratio can be resolved with any desired accuracy.
The characteristic measure of the system’s “classicality” is the scaled Planck constant $`\hbar ^{\prime }`$ entering the momentum operator in the Hamiltonian (1). Referring to the system “atom in a standing wave” the scaled Planck constant is given by
$$\hbar ^{\prime }=\left(\frac{8\hbar \omega _{rec}}{V_0}\right)^{1/2},$$
(3)
where $`\omega _{rec}=\hbar k^2/2m`$ is the atomic recoil frequency and $`V_0`$ is the depth of the optical potential $`V(x)=V_0\mathrm{cos}^2(k_Lx)`$. In the experiments and the value of the scaled Planck constant was $`\hbar ^{\prime }\approx 1.5`$ and $`1.6`$, respectively. In our numerical studies we use $`\hbar ^{\prime }=0.25`$. Since the amplitude $`V_0`$ of the optical potential is proportional to the square of the laser field amplitude, this implies a larger intensity of the laser.
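A small arithmetic aside (ours): inverting Eq. (3) gives the lattice depth needed for a given scaled Planck constant, $`V_0=8\hbar \omega _{rec}/\hbar ^{\prime 2}`$. In recoil units this rises from roughly 3.6 at $`\hbar ^{\prime }=1.5`$ to 128 at $`\hbar ^{\prime }=0.25`$, which is the larger laser intensity mentioned above:

```python
# Eq. (3) inverted: V0 = 8 ħω_rec / ħ'², here expressed in units of ħω_rec
def lattice_depth(hbar_eff):
    return 8.0/hbar_eff**2

assert lattice_depth(0.25) == 128.0              # the value used in this paper
assert abs(lattice_depth(1.5) - 3.5556) < 1e-3   # roughly the earlier experiments
```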
We simulate the wave packet dynamics of the system (1) by numerical solution of the time-dependent Schrödinger equation in the momentum representation using parameters $`\hbar ^{\prime }=0.25`$, $`\omega =10/6`$, $`F_\omega =4.16`$ with values of the static field $`F`$ in the interval $`0.030\le F\le 0.083`$. Then the survival probability
$$P(t)=\int _{|p|<p_0}|\psi (p,t)|^2dp,$$
(4)
is calculated. The initial state is the localized Wannier state, which for the cosine potential practically coincides with a minimal uncertainty wave packet centered at $`x=\pi `$. The motivation for the chosen value of $`p_0=6`$, which is much larger than the region of support of the initial wave packet, is given in sec. 3 below. This numerical simulation models the experimental situation of ref. with the modification that there is no probe signal but the amplitude $`F`$ of the static force is varied.
The result for the survival probability as a function of $`F`$ — or, equivalently, as a function of the matching ratio $`\alpha =\omega /\omega _B`$ — is shown in fig. 1 and 2(a), which is the central result of the paper. The dots in fig. 1 mark the survival probability $`P(t)`$ calculated for increasing values of the observation time $`t=40T_B`$,…$`t=160T_B`$. The values of static force $`F`$ are chosen for rational values of the electric matching ratios $`\omega /\omega _B=r/q`$ with $`q\le 420`$. (Explicitly, we considered for $`q`$ all divisors of $`2^2\cdot 3\cdot 5\cdot 7=420`$ and all coprime $`r`$-values in the interval $`6/7\le r/q\le 15/7`$.) To guide the eyes, the dots are connected by a solid line. The actual width of the peaks, which is inversely proportional to the observation time, is smaller in fig. 2(a), which shows a more fully developed distribution for a longer time $`t=200T_B`$.
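For concreteness, the grid of matching ratios described above can be enumerated with exact rational arithmetic. This is our reconstruction of the scan, not the authors' code:

```python
from fractions import Fraction

# reduced fractions r/q with q dividing 420 and 6/7 <= r/q <= 15/7
divisors = [q for q in range(1, 421) if 420 % q == 0]
lo, hi = Fraction(6, 7), Fraction(15, 7)
ratios = sorted({Fraction(r, q) for q in divisors
                 for r in range(1, 15*q//7 + 2)
                 if lo <= Fraction(r, q) <= hi})

assert min(ratios) == lo and max(ratios) == hi
# every reduced denominator still divides 420
assert all(420 % a.denominator == 0 for a in ratios)
assert Fraction(3, 2) in ratios and Fraction(1, 1) in ratios
```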
It is seen in fig. 2(a) that there are pronounced peaks above an almost constant background which appear at low order resonances between the driving frequency and the Bloch frequency. The seven largest peaks in the figure correspond (from left to right) to $`r/q=1`$, $`5/4`$, $`4/3`$, $`3/2`$, $`5/3`$, $`7/4`$, and $`r/q=2`$, and the observed peak-heights fall off rapidly with increasing denominator $`q`$. As illustrated by fig. 1, this peak structure develops gradually in time, starting from $`P(0)=1`$ at time zero and originates from the different long-time behavior of the survival probabilities. We find exponential decay in time for irrational values of $`\omega /\omega _B`$ and algebraic decay for rational ones . Thus the survival probability as a function of $`F`$ reflects the fractal nature of the spectrum in the long-time regime. Moreover, there is no fundamental resolution constraint and an arbitrary number of peaks can be resolved by increasing the observation time. We also note that instead of varying the static force one can vary the frequency $`\omega `$ of the driving force. In this case, having in mind a laboratory experiment, it looks reasonable to use the gravitational force as the static force .
## 3
The key point of the conducted numerical experiment is that the amplitude $`F_\omega `$ and the frequency $`\omega `$ of the driving force are chosen so as to ensure chaotic dynamics of the system in the classical limit. We furthermore choose $`p_0`$ in (4) to be larger than the boundary between the regular and the chaotic component of the classical phase space (see fig. 1(b) in ref. ). Then the classical survival probability decays exponentially
$$P_{cl}(t)=\mathrm{exp}(-\nu t),$$
(5)
where the decay coefficient $`\nu `$, the inverse classical lifetime, is determined by the classical Lyapunov exponent and the fractal dimension of the chaotic repellor. We computed the classical decay rate numerically and determined its $`F`$-dependence as $`\nu \approx 0.15F`$ in the parameter region considered here. It should be noted that the exponential decay of the classical probability cannot be considered as a universal law. In some systems it is a transient phenomenon and changes to an algebraic decay caused by long-lived trajectories sticking near stability islands . However, this is not the case for system (1) and no sign of an algebraic decay of the classical probability was detected (at least until time $`t=200T_B`$, which was the maximal time in our numerical simulation).
Because we measure the observation time $`t`$ in units of the Bloch period $`T_B=\hbar ^{\prime }/F`$, the $`F`$ dependence in Eq. (5) cancels and the classical survival probability is practically constant. The quantum results follow closely the classical exponential decay (5) for irrational values of the matching ratio $`\omega /\omega _B`$, providing the flat background of the quantum results shown in fig. 2. The peaks above the classical plateau for resonant driving, i.e. rational values of the matching ratio $`\alpha =\omega /\omega _B`$, are a quantum “stabilization” phenomenon, which can be understood as follows.
It was shown in ref. that the eigenvalue problem for the quasienergies of the system (1) for $`\alpha =r/q`$ can be mapped onto an effective scattering problem with $`q`$ open channels. When the matching ratio $`\alpha `$ is an irrational number, the number of channels is infinite and the system follows the classical dynamics provided the condition $`\hbar ^{\prime }\ll 1`$ is satisfied. When the matching ratio is a rational number, however, the number of decay channels is finite and the quantum system (independent of the value of $`\hbar ^{\prime }`$) is essentially more stable than the classical one . In this case the behavior of the survival probability $`P(t)`$ differs from the exponential decay (5) and is determined by the distribution of the imaginary parts of the quasienergies, i.e. the distribution of the resonance widths.
In the case of chaotic classical dynamics an analytic expression for the resonance statistics is supplied by (non-hermitian) random matrix theory (RMT) . The validity of RMT for system (1) was checked numerically in ref. and a satisfactory correspondence was noticed. Converting the result of RMT from energy to time domain shows that the decay of the probability follows asymptotically an inverse power law
$$P(t)\sim (\mathrm{\Gamma }_Wt/q\hbar ^{\prime })^{-q},t\gg t^{\ast }=q\hbar ^{\prime }/\mathrm{\Gamma }_W,$$
(6)
where $`\mathrm{\Gamma }_W`$ is the Weisskopf width, which is a free parameter in the abstract RMT. Identifying the parameter $`\mathrm{\Gamma }_W/\hbar ^{\prime }`$ with the classical decay coefficient $`\nu `$, eq. (5) and eq. (6) can be combined in the single equation
$$P(t)=\left(1+\frac{\nu t}{q}\right)^{-q},$$
(7)
which has the correct short- and long-time asymptotics and provides a first crude approximation to the more elaborate RMT result .
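The two limits of the interpolation formula (7) are easy to verify numerically. The sketch below is ours, with arbitrary illustrative values of $`\nu `$ and $`q`$:

```python
import numpy as np

def P(t, nu, q):
    # interpolation formula (7)
    return (1.0 + nu*t/q)**(-q)

nu, q = 0.2, 3
# short times: P(t) approaches the classical exponential decay exp(-ν t)
assert abs(P(0.01, nu, q) - np.exp(-nu*0.01)) < 2e-6
# long times: P(t) approaches the RMT power law (ν t/q)^(-q) of Eq. (6)
assert abs(P(1e4, nu, q)/(nu*1e4/q)**(-q) - 1.0) < 0.01
```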
Figure 2(b) shows the values of the function (7) for $`t=200T_B`$ and the same values of the matching ratio $`\alpha `$ as in fig. 2(a) where we use a slightly different graphic presentation of $`P(t)`$ to stress that the function (7) is a discontinuous function of $`\alpha `$ for any $`t`$. In contrast, the atomic survival probability shown in fig. 2(a) is a continuous function of $`F`$ where its fractal structure develops gradually as $`t\to \mathrm{\infty }`$. In fact, the probabilities (4) calculated for two close rational numbers $`\alpha _1`$ and $`\alpha _2`$ follow each other during a finite “correspondence” time. (For instance, for $`\alpha _1=1`$ and $`\alpha _2=999/1000`$ the correspondence time is found to be about $`50T_B`$.) Thus it takes some time to distinguish two close rationals, although they may have very different denominators and, therefore, very different asymptotics (6). With this reservation, a nice structural (and even semiquantitative) correspondence is noticed. In addition, it seems worthwhile to note that the pronounced quantum resonance peaks, i.e. the quantum algebraic decay $`P(t)\sim (\nu t/q)^{-q}`$ predicted by RMT, are also mainly determined by the purely classical $`\nu `$-coefficient due to classically chaotic scattering dynamics.
## 4
We have analyzed the system (1) in the context of recent experiments studying the dynamics of cold atoms in a standing laser wave . It is shown that in the semiclassical region of the system parameters the atomic survival probability as a function of the static force (or, alternatively, of the driving frequency) shows a fractal structure. This fractal structure is actually related to the fractal nature of the quasienergy spectrum determined by the degree of rationality of the electric matching ratio (2). In fact, when a rational sequence of $`\alpha =r/q`$ converges to some irrational value, the quasienergy bands progressively split into sub-bands. This process is accompanied by loss of stability of the quasienergy states and shows up, finally, in the complicated (fractal) structure of the atomic survival probabilities which can be measured in laboratory experiments.
Finally, we would like to distinguish the fractal structure of the survival probability discussed above from the fractal structure of the survival probability studied in papers . The latter appears as a quantum manifestation of the hierarchical island structure of classical phase space in a system with an algebraic decay of the classical probability. The origin of the former phenomenon, however, is the fluctuating number of the decay channels depending on the value of the electric matching ratio (2), which is the control parameter of the system (1).
\***
This work has been supported by the Deutsche Forschungsgemeinschaft (SPP 470 ‘Zeitabhängige Phänomene und Methoden in Quantensystemen der Physik und Chemie’).
no-problem/0002/astro-ph0002162.html | ar5iv | text
# Characterizing the Peak in the Cosmic Microwave Background Angular Power Spectrum
\[
## Abstract
A peak has been unambiguously detected in the cosmic microwave background (CMB) angular spectrum. Here we characterize its properties with fits to phenomenological models. We find that the TOCO and BOOM/NA data determine the peak location to be in the range 175–243 and 151–259 respectively (both ranges 95% confidence) and determine the peak amplitude to be between $`70`$ and $`90`$ $`\mu \mathrm{K}`$. By combining all the data, we constrain the full-width at half-maximum to be between 180 and 250 at 95% confidence. Such a peak shape is consistent with inflation-inspired flat, cold dark matter plus cosmological constant models of structure formation with adiabatic, nearly scale-invariant initial conditions. It is inconsistent with open and defect models.
\]
Introduction. If the adiabatic cold dark matter (CDM) models with scale-invariant initial conditions describe our cosmogony, then an analysis of the anisotropy in the CMB can reveal the cosmological parameters to unprecedented accuracy . A number of studies have aimed at determining, with various prior assumptions, a subset of the $`\sim 10`$ free parameters that affect the statistical properties of the CMB . The parameter most robustly determined from current data is $`\mathrm{\Omega }`$, the ratio of the mean matter/energy density to the critical density (that for which the mean spatial curvature is zero). These investigations show that $`\mathrm{\Omega }`$ is close to one. This result, combined with other cosmological data, implies the existence of some smoothly distributed energy component with negative pressure such as a cosmological constant.
A weakness of previous approaches is that the conclusions depend on the validity of the assumed model. In this Letter we take a different tack and ask what we know independent of the details of the cosmological model. We find the peak location, amplitude and width are consistent with those expected in adiabatic CDM models. Furthermore, as $`l_{\mathrm{peak}}\simeq 200\mathrm{\Omega }^{-1/2}`$ in these models, the observed peak location implies $`\mathrm{\Omega }\simeq 1`$. The determination of the peak location is robust; it does not depend on the parametrization of the spectrum, assumptions about the distribution of the power spectrum measurement errors, nor on the validity of any one data set. The model-dependent determinations of $`\mathrm{\Omega }`$ are further supported by the inconsistency of the data with competing models, such as topological defects, open models with $`\mathrm{\Omega }<0.4`$, or the simplest isocurvature models.
The Data. The last year of the 1990’s saw new results from MSAM, PythonV, MAT/TOCO , Viper, CAT, IAC and BOOM/NA, all of which have bearing on the properties of the peak. These results are plotted in Fig. 1. We have known for several years that there is a rise towards $`l=200`$ but it is now clear that the spectrum also falls significantly towards $`l=400`$.
For all the medium angular scale experiments, the largest systematic effect is the calibration error which is roughly 10% for each. Contamination from foreground emission is also important and not yet fully accounted for in some experiments (e.g. TOCO). A correction for this contribution, for which $`\delta T_l\propto l^{-1/2}`$, will affect the amplitude of the peak though will not strongly affect its position. Thorough analyses by the MSAM and PYTHON teams show that the level of contamination in those experiments was $`<3\%`$.
The three experiments that have taken data that span the peak are MSAM, TOCO, and BOOM/NA. All experiments exhibit a definite increase over the Sachs-Wolfe plateau though the significance of a feature based on the data alone, e.g. a peak, differs between experiments. We may assess the detection of a feature by examining the deviation from the best fit flat line, $`\overline{\delta T}`$. For the three MSAM points, we find $`\overline{\delta T}=46\pm 4.9\mu `$K with a reduced $`\chi ^2`$ of 0.43 (Probability to exceed, $`P_{>\chi ^2}=0.65`$. The calibration error is not included.). Thus, no feature is detected with these data alone though there is a clear increase over DMR. For the seven BOOM/NA points, we find $`\overline{\delta T}=55.3\pm 4.2\mu `$K with a reduced $`\chi ^2`$ of 1.94 ($`P_{>\chi ^2}=0.05`$, assuming the data are anti-correlated at the 0.1 level). For the ten TOCO points, $`\overline{\delta T}=69.3\pm 2.7\mu `$K with a reduced $`\chi ^2`$ of 4.86 ($`P_{>\chi ^2}<10^{-5}`$). Calibration errors will not change $`\chi ^2/\nu `$, however a correction for foreground emission will have a slight effect. Though we examine all data in the following, we focus particularly on BOOM/NA and TOCO because of their detections of a feature.
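The flat-line test used here is straightforward to reproduce: the best-fit constant is the inverse-variance-weighted mean of the band powers, and the detection statistic is the $`\chi ^2`$ of the residuals about it. The sketch below illustrates the procedure with hypothetical band powers (not the published points):

```python
import math

def flat_line_fit(dT, sigma):
    """Best-fit constant (inverse-variance weighted mean), its error,
    and the chi^2 of the residuals about that constant."""
    w = [1.0 / s**2 for s in sigma]
    mean = sum(wi * di for wi, di in zip(w, dT)) / sum(w)
    err = 1.0 / math.sqrt(sum(w))
    chi2 = sum(((di - mean) / si) ** 2 for di, si in zip(dT, sigma))
    return mean, err, chi2

# Hypothetical band powers in muK -- purely illustrative numbers.
dT = [50.0, 60.0, 70.0]
sig = [5.0, 5.0, 5.0]
mean, err, chi2 = flat_line_fit(dT, sig)
print(mean, err, chi2 / (len(dT) - 1))  # constant fit, its error, reduced chi^2
```

A large reduced $`\chi ^2`$ about the best-fit constant, as for the TOCO points, is what signals a feature rather than a flat spectrum.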
Fits to Phenomenological Models. To characterize the peak amplitude and location we fit the parameters of two different phenomenological models. For the first, we start with the best fit DK99 adiabatic CDM model, $`\delta T_l^{DK}`$, and form $`\delta T_l=(\delta T_l^{DK}-\delta T_{l=10}^{DK})\alpha +\delta T_{l=10}^{DK}`$ by varying $`\alpha `$, and then stretching in $`l`$. We characterize each stretching with the peak position and peak amplitude. This method has the virtue that the resulting spectra resemble adiabatic models and so if one assumes that these models describe Nature, then these results are the ones to which we should pay the most attention.
Our second model for $`\delta T_l^2`$ is a Gaussian: $`\delta T_l^2=A^2\mathrm{exp}\left(-\left(l-l_c\right)^2/(2\sigma _l^2)\right)`$. Depending on the width, this spectrum can look very much like, or unlike, the spectra of adiabatic models . We view this versatility as a virtue since we are interested in a characterization of the peak which is independent of physical models.
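Since this model recurs below, a small sketch may help; it evaluates the Gaussian shape and the implied full-width at half-maximum, $`\mathrm{FWHM}=2\sqrt{2\mathrm{ln}2}\sigma _l`$, at illustrative parameter values close to the fits quoted later:

```python
import math

def gaussian_band_power(l, A, l_c, sigma_l):
    """Phenomenological peak model: dT_l^2 = A^2 exp(-(l - l_c)^2 / (2 sigma_l^2))."""
    return A**2 * math.exp(-((l - l_c) ** 2) / (2.0 * sigma_l**2))

# Illustrative parameters near the fitted values quoted later in the text.
A, l_c, sigma_l = 78.0, 229.0, 90.0
peak = gaussian_band_power(l_c, A, l_c, sigma_l)          # A^2 at the peak
half_point = l_c + sigma_l * math.sqrt(2.0 * math.log(2.0))
half = gaussian_band_power(half_point, A, l_c, sigma_l)   # half maximum
fwhm = 2.0 * sigma_l * math.sqrt(2.0 * math.log(2.0))
print(math.sqrt(peak), half / peak, fwhm)
```

With $`\sigma _l`$ in the 75–105 range found below, this relation gives a FWHM between roughly 177 and 247, in line with the 180–250 constraint quoted in the abstract.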
We fit to these phenomenological models in two ways. For the stretch model, we examine the $`\chi ^2`$ of the residuals between the published data and each model. The widths of the window functions are ignored and we assume the data are normally distributed in $`\delta T_l`$ with a dispersion given by the average of the published error bars (GT in Table 1). This is an admittedly crude method but it works well because the likelihoods as a function of $`\delta T_l`$ are moderately well approximated by a normal distribution.
For both the Gaussian shape and the stretch model, we also perform the full fit as outlined in BJK (RAD in Table 1). For the Gaussian shape model, the constraints on the amplitude and location are given below after marginalization over the width $`\sigma _l`$. In all fitting, we ignore the experiments that are affected by $`l<30`$ (DMR, FIRS and Tenerife) because we want the parameters of our Gaussian to be determined by behavior in the peak region.
| Data | Model | Fit | $`N/\nu `$ | $`\chi ^2/\nu `$ | $`P_{>\chi ^2}`$ | $`l_{peak}`$ | $`\delta T_{peak}`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | | | | | | $`\mu `$K |
| All | G | Rad | 58/55 | 1.25 | 0.10 | $`229\pm 8.5`$ | 78 |
| T | G | Rad | 10/7 | 0.41 | 0.89 | $`206\pm 16`$ | 95 |
| T | S | GT | 10/8 | 0.94 | 0.48 | $`214\pm 14`$ | 88 |
| T | S | Rad | 10/8 | 0.84 | 0.57 | $`209\pm 17`$ | 92 |
| B | G | Rad | 7/4 | 0.19 | 0.94 | $`208\pm 21`$ | 69 |
| B | S | GT | 7/5 | 0.39 | 0.85 | $`215\pm 24`$ | 69 |
| B | S | Rad<sub>0</sub> | 7/5 | 0.23 | 0.95 | $`205\pm 27`$ | 72 |
| B | S | Rad | 7/5 | 0.39 | 0.85 | $`206\pm 26`$ | 68 |
| P | G | Rad | 33/30 | 1.13 | 0.28 | $`262\pm 24`$ | 68 |
<sup>*</sup>ALL stands for all publically available data sets (except for VIPER, which was not used because of unspecified point-to-point correlations), T is for the TOCO data, B for BOOM/NA and P for “Previous”, meaning all data prior to BOOM/NA and TOCO. <sup>†</sup>G and S are for the Gaussian shape and stretch methods respectively. <sup>‡</sup>$`N`$ is the number of data points and $`\nu `$ the degrees of freedom. <sup>§</sup>Rad<sub>0</sub> and Rad correspond to log normal and normal distributions for the likelihood respectively.
The main thing to notice in the Table is that the position of the peak is robustly determined by either TOCO or BOOM/NA to be in the range 185 to 235, regardless of the method. For the quoted errors, we have marginalized over all parameters except the position. The peak amplitudes are subject to change as there is some dependence on the model parametrization and the foreground contamination has not been thoroughly assessed.
We account for the calibration uncertainty through a convolution of the likelihood of the fits with a normal distribution of the fractional error . BOOM/NA, TOCO97 and TOCO98 have calibration uncertainties of 8%, 10.5% and 8% respectively. However, 5% of this is due to uncertainty in the temperature of Jupiter and therefore, assuming that these uncertainties add in quadrature, we get $`\sigma _{\mathrm{Jup}}=0.05`$, $`\sigma _{T97}=0.092`$, $`\sigma _{T98}=0.062`$ and $`\sigma _{B97}=0.062`$. We then find, for TOCO, that the full likelihood in $`\delta T_l`$ and $`l`$ is given by
$`L(l_c,\delta T_l)`$ $`=`$ $`{\displaystyle 𝑑\sigma _l𝑑u_{\mathrm{Jup}}𝑑u_{T97}𝑑u_{T98}L_{T97}(l_c,\delta T_lu_{\mathrm{Jup}}u_{T97},\sigma _l)}`$ (3)
$`\times L_{T98}(l_c,\delta T_lu_{\mathrm{Jup}}u_{T98},\sigma _l)P_G(u_{T97}-1;\sigma _{T97})`$
$`\times P_G(u_{T98}-1;\sigma _{T98})P_G(u_{\mathrm{Jup}}-1;\sigma _{\mathrm{Jup}})`$
where $`P_G(x;\sigma )=\mathrm{exp}\left(-x^2/(2\sigma ^2)\right)/\sqrt{2\pi \sigma ^2}`$, $`u`$ is integrated from 0 to $`\mathrm{\infty }`$ and, e.g., $`L_{T97}(l_c,\delta T_l,\sigma _l)=\mathrm{exp}(-\chi ^2/2)`$ where $`\chi ^2`$ is evaluated on a grid of $`\delta T_l^2`$, $`l_c`$ & $`\sigma _l`$ using RADPACK as discussed in BJK. We get similar results for TOCO when simply using a combined total calibration error of 8.5%.
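Numerically, the calibration marginalization amounts to broadening the likelihood in the band power. A stripped-down, one-measurement sketch (hypothetical numbers, plain Riemann sums rather than the full RADPACK machinery) shows the effect:

```python
import math

def marginalized_posterior(d, sigma_m, sigma_cal, s_grid, n_u=201):
    """Posterior over the true band power s given one measurement d with
    statistical error sigma_m, after integrating out a calibration factor u
    with a Gaussian prior of width sigma_cal about 1 (simple Riemann sum)."""
    us = [0.5 + i * (1.0 / (n_u - 1)) for i in range(n_u)]  # u in [0.5, 1.5]
    post = []
    for s in s_grid:
        total = 0.0
        for u in us:
            like = math.exp(-((d - u * s) ** 2) / (2.0 * sigma_m**2))
            prior = math.exp(-((u - 1.0) ** 2) / (2.0 * sigma_cal**2))
            total += like * prior
        post.append(total)
    norm = sum(post)
    return [p / norm for p in post]

s_grid = [50.0 + 0.5 * i for i in range(161)]              # 50..130 muK
post = marginalized_posterior(90.0, 5.0, 0.085, s_grid)    # 8.5% calibration
mean = sum(s * p for s, p in zip(s_grid, post))
std = math.sqrt(sum((s - mean) ** 2 * p for s, p in zip(s_grid, post)))
print(mean, std)
```

The recovered width is roughly $`\sqrt{\sigma _m^2+\sigma _{cal}^2d^2}`$, i.e. about 9 $`\mu `$K here rather than the 5 $`\mu `$K statistical error alone.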
For the Gaussian model we can also marginalize over $`A`$ and $`l_c`$ to place 95% confidence bounds on the width: $`75<\sigma _l<105`$ for ALL, $`50<\sigma _l<105`$ for TOCO and $`55<\sigma _l<145`$ for BOOM/NA.
Are the data in Fig 1 consistent? DK99 found that the best-fit model, given all the data at the time, had a $`\chi ^2`$ of 79 for 63 degrees of freedom, which is exceeded 8% of the time. Here we see that the $`\chi ^2`$ for the fit of the Gaussian model is 69 for 55 degrees of freedom, which is exceeded 10% of the time. We conclude that, although there may well be systematic error in some of these data sets, we have no compelling evidence of it. However, we take caution from the fact that we had to adjust the calibration parameters from their nominal values to their best-fit values in order to reduce the $`\chi ^2`$ to 69. Left at their nominal values with calibration uncertainty ignored, the data are not consistent with each other. Thus we believe that the compilation results are perhaps less reliable than those for either BOOM/NA or TOCO.
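The quoted probabilities-to-exceed can be checked without a statistics package. A minimal sketch using the Wilson–Hilferty normal approximation to the $`\chi ^2`$ tail, which is accurate to a few percent at these degrees of freedom:

```python
import math

def chi2_sf_wh(chi2, nu):
    """P(X > chi2) for a chi-square variable with nu degrees of freedom,
    via the Wilson-Hilferty normal approximation (adequate for nu ~ 50)."""
    z = ((chi2 / nu) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * nu))) * math.sqrt(9.0 * nu / 2.0)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(chi2_sf_wh(69.0, 55))  # ~0.10: "exceeded 10% of the time"
print(chi2_sf_wh(79.0, 63))  # ~0.08: the DK99 fit quoted in the text
```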
Implications for Physical Models. Flat, adiabatic, nearly scale-invariant models have similar peak properties to those of our best-fit phenomenological models. Most importantly the peak location, as determined by three independent data sets (“Previous”, TOCO, BOOM/NA), is near $`l\simeq 210`$, as expected. Depending on the data set chosen, the amplitude is higher than expected but can easily be accommodated, within the uncertainties, with a cosmological constant. Combining all the data, there is a preference for $`l_{\mathrm{peak}}>210`$ which suggests a cosmological constant (at $`h=0.65`$, $`l_{\mathrm{peak}}`$ goes from 200 at $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ to 220 at $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$). However, this result is not seen in any individual data set.
A good approximation to the first peak in the DK99 best-fit model is given by the Gaussian model with $`\sigma _l=95`$. From the $`\sigma _l`$ constraints quoted earlier we see that the data have no significant preference for peaks that are either narrower or broader than those in inflation-inspired CDM models.
A general perturbation is a combination of adiabatic and isocurvature perturbations. Adiabatic perturbations are such that at each point in space, the fractional fluctuations in the number density of each particle species is the same for all species. Isocurvature perturbations are initially arranged so that, despite fluctuations in individual species, the total energy density fluctuation is zero. Given multiple components, there are a number of different ways of maintaining the isocurvature condition. Below we assume the isocurvature condition is maintained by the dark matter compensating everything else.
Isocurvature initial conditions result in shifts to the CMB power spectrum peak locations. For a given wavenumber, the temporal phase of oscillations in the baryon-photon fluid depends on the initial relation between the dark matter and the fluid. Those waves with oscillation frequencies such that they hit an extremum at the time of last-scattering in the adiabatic case, will hit a null in the isocurvature case. The effect on the first peak is a shift from $`l\simeq 200\mathrm{\Omega }^{-1/2}`$ to $`l\simeq 350\mathrm{\Omega }^{-1/2}`$. Given the observation of $`l_{\mathrm{peak}}\simeq 210`$, simple isocurvature models require $`\mathrm{\Omega }>2`$, which is inconsistent with a number of observations.
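The arithmetic behind this bound is a one-liner. Assuming the standard scalings $`l_{\mathrm{peak}}\simeq 200\mathrm{\Omega }^{-1/2}`$ (adiabatic) and $`l_{\mathrm{peak}}\simeq 350\mathrm{\Omega }^{-1/2}`$ (simple isocurvature), inverting for $`\mathrm{\Omega }`$ at the observed peak location gives:

```python
def omega_from_peak(l_obs, l_flat):
    """Invert l_peak ~ l_flat * Omega**(-1/2) for Omega, given the observed peak."""
    return (l_flat / l_obs) ** 2

print(omega_from_peak(210.0, 200.0))  # adiabatic: Omega ~ 0.91
print(omega_from_peak(210.0, 350.0))  # isocurvature: Omega ~ 2.8 > 2
```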
Critical to the Doppler peak structure, in either adiabatic or isocurvature models, is the temporal phase coherence for Fourier modes of a given wavenumber. In topological defect models, the continual generation of new perturbations by the non-linear evolution of the defect network destroys this temporal phase coherence and the acoustic peaks blend into a broad hump which is wider and peaks at higher $`l`$ than the observed feature.
One can make defect model power spectra with less power at $`l=400`$ than at $`l=200`$ with ad-hoc modifications to the standard ionization history. But even for these models the drop is probably not fast enough. The contrast between the power at $`l=200`$ and $`l=400`$ is a great challenge for these models.
There are scenarios with initially isocurvature conditions that can produce CMB power spectra that look much like those in the adiabatic case. This can be done by adding to the adiabatic fluctuations (of photons, neutrinos, baryons and cold dark matter) another component, with a non-trivial stress history, which maintains the isocurvature condition.
Conclusions. Our phenomenological models have allowed for rapid, model-independent, investigation of the consistency of CMB datasets, and of the robustness of the properties of the peak in the CMB power spectrum. The peak has been observed by two different instruments, and can be inferred from an independent compilation of other data sets. The properties of this peak are consistent with those of the first peak in the inflation-inspired adiabatic CDM models, and inconsistent with a number of competing models, with the possible exception of the more complicated isocurvature models mentioned above. It is perhaps instructive that where the confrontation between theory and observation can be done with a minimum of theoretical uncertainty, the adiabatic CDM models have been highly successful.
###### Acknowledgements.
LK wishes to thank S. Meyer and M. Tegmark for useful conversations and is supported by the DOE, NASA grant NAG5-7986 and NSF grant OPP-8920223. LP wishes to thank MAT/TOCO team members Mark Devlin, Randy Dorwart, Rob Caldwell, Tom Herbig, Amber Miller, Michael Nolta, Jason Puchalla, Eric Torbet, & Huan Tran, for insights and encouragement, and Chuck Bennett for comments on an earlier version of this work. LP is supported by NSF grant PHY 96-00015 and NASA grant NAS5-96021.
no-problem/0002/math0002013.html | ar5iv | text
# Fixed points of analytic actions of supersoluble Lie groups on compact surfaces
## Introduction
Let $`M`$ denote a compact connected surface, with possibly empty boundary $`\partial M`$, endowed with a (real) analytic structure. $`T_pM`$ is the tangent space to $`M`$ at $`p\in M`$. The Euler characteristic of $`M`$ is denoted by $`\chi (M)`$.
Let $`G`$ be a Lie group with Lie algebra $`𝒢`$; all groups are assumed connected unless the contrary is indicated. An action of $`G`$ on $`M`$ is a homomorphism $`\alpha `$ from $`G`$ to the group $`𝖧(M)`$ of homeomorphisms of $`M`$ such that the evaluation map
$$\mathrm{𝖾𝗏}^\alpha =\mathrm{𝖾𝗏}:G\times M\to M,(g,x)\mapsto \alpha (g)(x)$$
is continuous. We usually suppress notation for $`\alpha `$, denoting $`\alpha (g)(x)`$ by $`g(x)`$. The action is called $`C^r`$, $`r\in \{1,2,\mathrm{\dots };\omega \}`$, if $`\mathrm{𝖾𝗏}`$ is a $`C^r`$ map, where $`C^\omega `$ means analytic.
The set $`𝒜(G,M)`$ of actions of $`G`$ on $`M`$ is embedded in the space of continuous maps $`G\times M\to M`$ by the correspondence $`\alpha \mapsto \mathrm{𝖾𝗏}^\alpha `$. We endow $`𝒜(G,M)`$ with the topology of uniform convergence on compact sets.
A point $`p\in M`$ is a fixed point for an action $`\alpha `$ of $`G`$ if $`\alpha (g)(p)=p`$ for all $`g\in G`$. The set of fixed points is denoted by $`\mathrm{𝖥𝗂𝗑}(G)`$ or $`\mathrm{𝖥𝗂𝗑}(\alpha (G))`$.
In this paper we consider the problem of finding conditions on solvable group actions that guarantee existence of a fixed point.
When $`\chi (M)\ne 0`$, every flow (action of the real line $`𝐑`$) on $`M`$ has a fixed point; this was known to Poincaré for flows generated by vector fields, and for continuous actions it is a well known consequence of Lefschetz’s fixed point theorem. E. Lima showed that every abelian group action on $`M`$ has a fixed point, and J. Plante extended this to nilpotent groups.
These results do not extend to solvable groups: Lima constructed a fixed point free action on the 2-sphere of the solvable group $`A`$ of homeomorphisms of $`𝐑`$ having the form $`x\mapsto ax+b`$, $`a>0`$, $`b\in 𝐑`$; and Plante constructed fixed point free actions of $`A`$ on all compact surfaces. These actions are not known to be analytic; but Example 3 below describes a fixed point free, analytic action of a 3-dimensional solvable group on $`S^2`$.
Recall that $`G`$ is supersoluble if every element of $`𝒢`$ belongs to a codimension one subalgebra (see Barnes ). Our main result is the following theorem:
###### Theorem 1
Let $`G`$ be a connected supersoluble Lie group and $`M`$ a compact surface such that $`\chi (M)\ne 0`$. Then every analytic action of $`G`$ on $`M`$ has a fixed point.
Since the group $`A`$ described above is supersoluble, Lima’s $`C^{\mathrm{}}`$ action cannot be improved to a fixed point free analytic action. The following result shows it cannot be approximated by analytic actions:
###### Corollary 2
Let $`G`$ and $`M`$ be as in Theorem 1. If $`\alpha 𝒜(G,M)`$ has no fixed point, then $`\alpha `$ has a neighborhood in $`𝒜(G,M)`$ containing no analytic action.
Proof By Theorem 1 and compactness of $`M`$, it suffices to prove the following: For all convergent sequences $`\beta _n\to \beta `$ in $`𝒜(G,M)`$ and $`p_n\to p`$ in $`M`$, with $`p_n\in \mathrm{𝖥𝗂𝗑}(\beta _n(G))`$, we have $`p\in \mathrm{𝖥𝗂𝗑}(\beta (G))`$. Being a connected locally compact group, $`G`$ is generated by a compact neighborhood $`K`$ of the identity. Then $`\beta _n(g)\to \beta (g)`$ uniformly for $`g\in K`$, so $`\beta (g)(p)=p`$ for all $`g\in K`$. Since $`K`$ generates $`G`$, this implies that $`p\in \mathrm{𝖥𝗂𝗑}(\beta (G))`$.
In Theorem 1, the hypothesis that $`G`$ is connected is essential: the abelian group of rotations of $`S^2`$ generated by reflections in the three coordinate axes is a well known counterexample. And every Lie group with a nontrivial homomorphism to the group of integers acts analytically without fixed point on every compact surface admitting a fixed point free homeomorphism, thus on every surface except the disk and the projective plane.
The following example shows that supersolubility is essential:
###### Example 3
Let $`Q`$ be the 3-dimensional Lie group obtained as the semidirect product of the real numbers $`𝐑`$ acting on the complex numbers $`𝐂`$ by $`tz=e^{it}z`$; this group is solvable but not supersoluble. Identify $`Q`$ with the space $`𝐑\times 𝐂\cong 𝐑^3`$ and note that left multiplication defines a linear action of $`Q`$ on $`𝐑^3`$. The induced action on the 2-sphere $`S`$ of oriented lines in $`𝐑^3`$ through the origin has no fixed point, and $`\chi (S)=2`$. Geometrically, one can see this as the universal cover of the proper euclidean motions of the plane, acting on two copies of the plane joined along a circle at infinity.
We thank F.-J. Turiel for pointing out a small error in an earlier version of our manuscript. He has also obtained some interesting results complementary to ours in .
## Proof of Theorem 1
We assume given an action $`\alpha :G\to 𝖧(M)`$. The orbit of $`p\in M`$ is $`G(p)=\{g(p):g\in G\}`$. The isotropy group of $`p\in M`$ is the closed subgroup $`I_p=\{g\in G:\alpha (g)(p)=p\}`$. The evaluation map $`\mathrm{𝖾𝗏}_p:G\to M`$ at $`p\in M`$ is defined by $`g\mapsto g(p)`$.
Suppose that the action is $`C^r`$, $`r\ge 1`$. Then $`\mathrm{𝖾𝗏}_p`$ induces a bijective $`C^r`$ immersion $`i_p:G/I(p)\to G(p)`$. The tangent space $`E(p)\subset T_pM`$ to this immersed manifold at $`p`$ is the image of $`T_eG`$ under the differential of $`\mathrm{𝖾𝗏}_p`$ at the identity $`e\in G`$.
For $`j=0,1,2`$, let $`V_j=V_j(G)\subset M`$ denote the union of the $`j`$-dimensional orbits. Then $`M=V_2\cup V_1\cup V_0`$. Each $`V_j`$ is invariant, $`V_2`$ is open, $`V_1\cup V_0`$ is compact, and $`V_0=\mathrm{𝖥𝗂𝗑}(G)`$.
###### Lemma 4 (Plante)
Assume that $`G`$ is solvable and that $`G(p)`$ is a compact 1-dimensional orbit. Then there is a closed normal subgroup $`H\subset G`$ of codimension 1 such that every point of $`G(p)`$ has isotropy group $`H`$.
Proof Choose a homeomorphism $`f:G(p)\to S^1`$ (the circle). Let $`\beta :G\to 𝖧(S^1)`$ be the action defined by $`\beta (g)=f\alpha (g)f^{-1}`$. Because $`G`$ is solvable, by a result of Plante (, Theorem 1.2) there exists a homeomorphism $`h`$ of $`S^1`$ conjugating $`\beta (G)`$ to the rotation group $`\mathrm{SO}(2)`$. Since $`\beta (G)`$ is abelian and acts transitively on $`S^1`$, all points of $`S^1`$ have the same isotropy group for $`\beta `$; this isotropy group is the required $`H`$.
Analyticity is used to establish the following useful property:
###### Lemma 5
Assume that $`G`$ acts analytically and that $`\mathrm{𝖥𝗂𝗑}(G)=\mathrm{\varnothing }`$. Then either $`V_1=M`$ and $`\chi (M)=0`$, or else $`V_1`$ is the (possibly empty) union of a finite family of orbits, each of which is a smooth Jordan curve contained in $`\partial M`$ or in $`M\setminus \partial M`$.
Proof Since there are no orbits of dimension $`0`$, $`V_1`$ is a compact set comprising the points $`p`$ such that $`\mathrm{dim}E_p\le 1`$. It is easy to see that $`V_1`$ is a local analytic variety.
If $`V_1=M`$ then the map $`p\mapsto E_p`$ is a continuous field of tangent lines to $`M`$, tangent to $`\partial M`$ at boundary points. The existence of such a field implies that $`\chi (M)=0`$.
Assume that $`V_1\ne M`$. Note that $`\mathrm{dim}_pV_1\le 1`$ at each $`p\in V_1`$. Since $`M`$ is connected and $`V_1`$ is a variety, $`V_1`$ must have dimension $`1`$ at each point. The set of points where $`V_1`$ is not smooth is a compact, invariant 0-dimensional subvariety, i.e., a finite set of fixed points, hence empty. Since $`V_1`$ consists of 1-dimensional orbits, $`V_1`$ must be a compact, smooth invariant 1-manifold without boundary, i.e. each component of $`V_1`$ is a Jordan curve. Since $`\partial M`$ is the union of invariant Jordan curves, any component of $`V_1`$ that meets $`\partial M`$ is a component of $`\partial M`$.
In view of Lemma 5, it suffices to prove the following more general result:
###### Proposition 6
Let $`G`$ be a connected supersoluble Lie group acting continuously on the compact connected surface $`M`$. Assume that
there are no fixed points
for each closed subgroup $`H,`$ $`V_1(H)`$ is the union (perhaps empty) of finitely many disjoint Jordan curves.
Then $`\chi (M)=0`$.
By passing to a universal covering group we assume that $`G`$ is simply connected. This implies that every closed subgroup is simply connected (see Hochschild , Theorem XII.2.2.)
We proceed by induction on $`\mathrm{dim}G`$, the case $`G=𝐑`$ having been covered in the introduction. Henceforth assume inductively that $`\mathrm{dim}G=n\ge 2`$ and that the proposition holds for all supersoluble groups of lower dimension. With this hypothesis in force, we first rule out the case that $`M`$ is a disk:
###### Proposition 7
If $`M`$ is as in Proposition 6, then $`\chi (M)\ne 1`$
Proof Suppose not; then $`M`$ is a closed 2-cell. Since there are no fixed points, $`\partial M`$ is an orbit, hence a component of $`V_1`$. Every component of $`V_1`$ bounds a unique 2-cell in $`M`$, and there are only finitely many such 2-cells. Let $`D`$ be one that contains no other. Then $`D`$ is invariant under $`G`$, and the action of $`G`$ on $`D`$ is fixed point free. Therefore we may assume that $`M=D`$, so that $`V_1=\partial M`$.
By Lemma 4 there exists a closed normal subgroup $`H`$ of codimension one with $`\partial M\subset \mathrm{𝖥𝗂𝗑}(H)`$. Let $`R\subset G`$ be a 1-parameter subgroup transverse to $`H`$ at the identity; then $`RH=G`$.
Because $`G`$ is supersoluble, there is a codimension one subalgebra $`𝒦\subset 𝒢`$ containing the Lie algebra of $`R`$. Because $`G`$ is simply connected and solvable, $`𝒦`$ is the Lie algebra of a closed subgroup $`K\subset G`$ of dimension $`n-1`$, and $`KH=G`$. By the induction hypothesis there exists $`p\in \mathrm{𝖥𝗂𝗑}(K)`$. Then $`\mathrm{dim}G(p)\le \mathrm{dim}G-\mathrm{dim}K=1`$. Therefore $`p\in V_1=\partial D`$. We now have $`p\in \mathrm{𝖥𝗂𝗑}(K)\cap \mathrm{𝖥𝗂𝗑}(H)=\mathrm{𝖥𝗂𝗑}(G)`$, a contradiction.
We return now to the case of general $`M`$.
Denote the connected components of $`M\setminus V_1`$ by $`U_1,\mathrm{\dots },U_r`$, $`r\ge 1`$. Each $`U_i`$ is an open orbit, whose set theoretic boundary $`\mathrm{𝖻𝖽}U_i`$ is a (possibly empty) union of components of $`V_1`$. The closure $`\overline{U}_i`$ is a compact surface invariant under $`G`$, whose boundary as a surface is $`\partial \overline{U}_i=\mathrm{𝖻𝖽}U_i`$.
We show that $`U_i`$ is an open annulus. Let $`H\subset G`$ be the isotropy subgroup of $`p\in U_i`$. Evaluation at $`p`$ is a surjective fibre bundle projection $`G\to U_i`$ with standard fibre $`H`$. Therefore there is an exact sequence of homotopy groups
$$\cdots \to \pi _j(G)\to \pi _j(U_i)\to \pi _{j-1}(H)\to \pi _{j-1}(G)\to \cdots \to \pi _0(G)=\{0\}$$
ending with the trivial group $`\pi _0(G)`$ of components of $`G`$. The component group $`\pi _0(H)`$ is solvable (see Raghunathan , Proposition III.3.10), so taking $`j=1`$ shows that $`\pi _1(U_i)`$ is solvable. Therefore $`U_i`$ is a sphere, torus, open 2-cell, or open annulus. If $`U_i`$ is a torus then $`U_i=M`$, contradicting $`\chi (M)0`$. The sphere is ruled out by the exact sequence $`\pi _2(G)\to \pi _2(U_i)\to \pi _1(H)`$, because $`\pi _2(G)=0`$ for every Lie group and $`\pi _1(H)=0`$. Proposition 7 rules out the 2-cell.
It follows that $`\overline{U}_i`$ is a closed annulus, so $`\chi (\overline{U}_i)=0`$. By the additivity property $`\chi (A\cup B)=\chi (A)+\chi (B)-\chi (A\cap B)`$ of the Euler characteristic, any space $`M`$ built by gluing annuli along their boundary circles must have $`\chi (M)=0`$.
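The closing inclusion–exclusion argument is elementary enough to restate numerically. The toy sketch below just iterates the additivity formula, using $`\chi (\text{annulus})=\chi (\text{circle})=0`$:

```python
def chi_union(chi_a, chi_b, chi_intersection):
    """Inclusion-exclusion: chi(A u B) = chi(A) + chi(B) - chi(A n B)."""
    return chi_a + chi_b - chi_intersection

CHI_ANNULUS = 0   # Euler characteristic of a closed annulus
CHI_CIRCLE = 0    # Euler characteristic of a circle (the gluing locus)

chi = CHI_ANNULUS
for _ in range(5):                    # glue on five more annuli, circle by circle
    chi = chi_union(chi, CHI_ANNULUS, CHI_CIRCLE)
print(chi)                            # 0: gluing annuli never changes chi
print(chi_union(1, 1, 0))             # 2: e.g. a sphere from two closed disks
```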
no-problem/0002/hep-ph0002015.html | ar5iv | text
# The proton and the photon, who is probing whom in electroproduction?
## 1 INTRODUCTION
The HERA collider, where 27.5 GeV electrons collide with 920 GeV protons, is considered a natural extension of Rutherford’s experiment and the process of deep inelastic $`ep`$ scattering (DIS) is interpreted as a reaction in which a virtual photon, radiated by the incoming electron, probes the structure of the proton. In this talk I would like to discuss this interpretation and ask the question of who is probing whom .
The structure of the talk will be the following: it will start with posing the problem, after which our knowledge about the structure of the proton as seen at HERA will be presented followed by a description of our present understanding of the structure of the photon as seen at HERA and at LEP . Next, an answer to the question posed in the title will be suggested and the talk will be concluded by some remarks about the nature of the interaction between the virtual photon and the proton .
## 2 THE QUESTION - WHO IS PROBING WHOM?
### 2.1 The process of DIS
The process of DIS is usually represented by the diagram shown in figure 1. If the lepton does not change its identity during the scattering process, the reaction is labeled neutral current (NC), as either a virtual photon or a $`Z^0`$ boson can be exchanged. When the identity of the lepton changes in the process, the reaction is called charged current (CC) and a charged $`W^\pm `$ boson is exchanged. During this talk we will discuss only NC processes.
Using the four vectors as indicated in the figure, one can define the usual DIS variables: $`Q^2=-q^2`$, the ’virtuality’ of the exchanged boson, $`x=Q^2/(2P\cdot q)`$, the fraction of the proton momentum carried by the interacting parton, $`y=(P\cdot q)/(P\cdot k)`$, the inelasticity, and $`W^2=(q+P)^2`$, the boson-proton center of mass energy squared.
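As a quick illustration of these definitions, the sketch below computes $`Q^2`$, $`x`$, $`y`$ and $`W^2`$ from four-vectors, using massless beams and a purely hypothetical scattered-electron configuration, and checks the exact massless-beam identity $`Q^2=sxy`$:

```python
import math

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-); vectors are (E, px, py, pz)."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

# Massless-beam approximation at HERA energies (GeV): e along -z, p along +z.
k = (27.5, 0.0, 0.0, -27.5)           # incoming electron
P = (920.0, 0.0, 0.0, 920.0)          # incoming proton

# A hypothetical scattered electron: E' = 20 GeV at 150 deg to the proton axis.
theta = math.radians(150.0)
kp = (20.0, 20.0 * math.sin(theta), 0.0, 20.0 * math.cos(theta))

q = tuple(a - b for a, b in zip(k, kp))           # exchanged-photon four-momentum
Q2 = -mdot(q, q)                                  # virtuality
x = Q2 / (2.0 * mdot(P, q))                       # Bjorken x
y = mdot(P, q) / mdot(P, k)                       # inelasticity
W2 = mdot(q, q) + 2.0 * mdot(q, P) + mdot(P, P)   # (q + P)^2
s = 2.0 * mdot(k, P)                              # (k + P)^2 for massless beams
print(Q2, x, y, W2)
```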
The interpretation of the diagram describing a NC event is the following. The electron beam is a source of photons with virtuality $`Q^2`$. These virtual photons ‘look’ at the proton. Any ‘observed’ structure belongs to the proton. How can we be sure that we are indeed measuring the structure of the proton? Virtual photons have no structure. Is that always true? We know that real photons have structure; we even measure the photon structure function $`F_2^\gamma `$ . Let us discuss this point further in the next subsections.
### 2.2 The fluctuating photon
How is it possible that the photon, which is the gauge particle mediating the electromagnetic interactions, has a hadronic structure? Ioffe’s argument : the photon can fluctuate into $`q\overline{q}`$ pairs just like it fluctuates into $`e^+e^{-}`$ pairs (see figure 3). If the fluctuation time, defined in the proton rest frame as $`t_f\simeq (2E_\gamma )/m_{q\overline{q}}^2`$, is much larger than the interaction time, $`t_{int}\simeq r_p`$, the photon builds up structure in the interaction. Here, $`E_\gamma `$ is the energy of the fluctuating photon, $`m_{q\overline{q}}`$ is the mass into which it fluctuates, and $`r_p`$ is the radius of the proton.
The hadronic structure of the photon, built during the interaction, can be studied by measuring the photon structure function $`F_2^\gamma `$ in a DIS type of experiment where a quasi-real photon is probed by a virtual photon, both of which are emitted in $`e^+e^{-}`$ collisions, as described in figure 3. This diagram is very similar to that in DIS on a proton target (figure 1).
### 2.3 Structure of virtual photons?
Does a virtual photon also fluctuate and acquire a hadronic structure? The fluctuation time of a photon with virtuality $`Q^2`$ is given by $`t_f\simeq (2E_\gamma )/(m_{q\overline{q}}^2+Q^2)`$, and thus at very high $`Q^2`$ one does not expect the condition $`t_f\gg t_{int}`$ to hold. However at very large photon energies, or at very low $`x`$, the fluctuation time is independent of $`Q^2`$: $`t_f\simeq 1/(2m_px)`$, where $`m_p`$ is the proton mass, and thus even highly virtual photons can acquire structure. For instance, at HERA presently $`W\simeq `$ 200 - 300 GeV, and since $`x\simeq Q^2/(Q^2+W^2)`$, $`x`$ can be as low as 0.01 even for $`Q^2`$ = 1000 GeV<sup>2</sup>. In this case, the fluctuation time will be very large compared to the interaction time and the highly virtual photon will acquire a hadronic structure. How do we interpret the DIS diagram of figure 1 in this case? Whose structure do we measure? Do we measure the structure of the proton, from the viewpoint of the proton infinite momentum frame, or do we measure the structure of the virtual photon, from the proton rest frame view? Who is probing whom?
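This order-of-magnitude claim can be checked directly. A sketch (the proton radius value is illustrative) that converts $`t_f\simeq 1/(2m_px)`$ to seconds and compares it with the proton light-crossing time:

```python
HBAR_GEV_S = 6.582e-25   # hbar in GeV*s, used only for unit conversion
M_PROTON = 0.938         # proton mass, GeV
R_PROTON_M = 0.84e-15    # illustrative proton radius, m
C_M_S = 2.998e8          # speed of light, m/s

def x_bjorken(Q2, W):
    """Low-x approximation x ~ Q^2 / (Q^2 + W^2), with Q^2 in GeV^2, W in GeV."""
    return Q2 / (Q2 + W**2)

def t_fluctuation(x):
    """Fluctuation time t_f ~ 1/(2 m_p x) in the proton rest frame, in seconds."""
    return HBAR_GEV_S / (2.0 * M_PROTON * x)

x = x_bjorken(1000.0, 300.0)      # Q^2 = 1000 GeV^2, W = 300 GeV, as in the text
t_f = t_fluctuation(x)
t_int = R_PROTON_M / C_M_S        # light-crossing time of the proton
print(x, t_f, t_int, t_f / t_int)
```

Even at $`Q^2=1000`$ GeV<sup>2</sup> the fluctuation time comes out roughly an order of magnitude longer than the interaction time, which is the point of the argument.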
When asked this question, Bjorken answered that physics can not be frame dependent and therefore it doesn’t matter: we can say that we measure the structure of the proton or we can say that we study the structure of the virtual photon. I will try to convince you at the end of my talk that this answer makes sense.
## 3 THE STRUCTURE OF THE PROTON
In this section we will refrain from discussing the question posed above and will accept the interpretation of measuring the structure of the proton via the DIS diagram in figure 1. We present below information about the structure of the proton as seen from the DIS studies at HERA.
### 3.1 HERA
With the advent of the HERA $`ep`$ collider the kinematic plane of $`x`$-$`Q^2`$ has been extended by 2 orders of magnitude in both variables from the existing fixed target DIS experiments, as depicted in figure 4.
The DIS cross section for $`ep\to eX`$ can be written (for $`Q^2\ll M_Z^2`$) as ,
$$\frac{d^2\sigma }{dxdQ^2}=\frac{4\pi \alpha ^2}{xQ^4}\left\{\frac{y^2}{2}2xF_1(x,Q^2)+(1-y)F_2(x,Q^2)\right\}.$$
(1)
In the quark-parton model (QPM), the proton structure function $`F_2`$ is only a function of $`x`$ and can be expressed as a sum of parton densities, and is related to $`F_1`$ through the Callan-Gross relation ,
$$F_2(x)=\sum _ie_i^2xq_i(x)=2xF_1,$$
(2)
where $`e_i`$ is the electric charge of quark $`i`$ and the index $`i`$ runs over all the quark flavours.
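The bookkeeping of Eq.(2) can be sketched in a few lines; the parton-density shapes below are invented purely for illustration, not a realistic fit:

```python
# Toy illustration of the QPM sum F2(x) = sum_i e_i^2 * x*q_i(x).
# The density shapes and normalizations are made up for illustration only.
def xq_toy(x, norm, a, b):
    """Generic valence-like shape: norm * x^a * (1-x)^b."""
    return norm * x**a * (1 - x)**b

def f2_qpm(x):
    xu = xq_toy(x, 2.0, 0.5, 3.0)   # toy u-quark density (charge +2/3)
    xd = xq_toy(x, 1.0, 0.5, 4.0)   # toy d-quark density (charge -1/3)
    return (2.0 / 3) ** 2 * xu + (1.0 / 3) ** 2 * xd

print(round(f2_qpm(0.3), 3))  # ~0.182; the u quark dominates via e_u^2 = 4/9
```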
Note that in Quantum Chromodynamics (QCD), the Callan-Gross relation is violated, and the structure function is a function of $`x`$ and $`Q^2`$,
$$F_2(x,Q^2)-2xF_1(x,Q^2)=F_L(x,Q^2)>0,$$
(3)
where the longitudinal structure function $`F_L`$ contributes in an important way only at large $`y`$.
The motivation for measuring $`F_2(x,Q^2)`$ can be summarized as follows: (a) test the validity of perturbative QCD (pQCD) calculations, (b) decompose the proton into quarks and gluons, and (c) search for proton substructure.
### 3.2 QCD evolution - scaling violation
Quarks radiate gluons; gluons split to produce more gluons and, at low $`x`$, also $`q\overline{q}`$ pairs. This QCD evolution chain is usually described in leading order by splitting functions $`P_{ij}`$, as shown in figure 5.
This procedure leads to scaling violation in the following way: $`F_2`$ increases with $`Q^2`$ at low $`x`$ and decreases with $`Q^2`$ at high $`x`$. Scaling holds at about $`x`$=0.1. The data follow this prediction of QCD, as can be seen in figure 6.
### 3.3 Overview of $`F_2`$
The fixed target experiments provided information at relatively high $`x`$ and thus enabled the study of the behaviour of valence quarks. The first HERA results showed a surprisingly strong rise of $`F_2`$ as $`x`$ decreases. An example of such a rise is given in figure 7, where $`F_2`$ increases with decreasing $`x`$ at a fixed value of $`Q^2`$ = 15 GeV<sup>2</sup>.
This increase is the result of the rising gluon density at low $`x`$. Note the good agreement between both HERA experiments, H1 and ZEUS, and also between HERA and the fixed target data.
### 3.4 Evolution of $`F_2`$
The measurements of $`F_2`$ as function of $`x`$ and $`Q^2`$ can be used to obtain information about the parton densities in the proton. This is done by using the pQCD DGLAP evolution equations. One cannot calculate everything from first principles but needs as experimental input the parton densities at a scale $`Q_0^2`$, usually taken as a few GeV<sup>2</sup>, above which pQCD is believed to be applicable.
There are several groups which perform QCD fits, the most notable are MRST and CTEQ . They parameterize the $`x`$ dependence of the parton densities at $`Q_0^2`$ in the form,
$$xq(x,Q^2)\propto x^{\eta _1}(1-x)^{\eta _2}f_{smooth}(x).$$
(4)
The free parameters, such as $`\eta _1`$ and $`\eta _2`$, are adjusted to fit the data for $`Q^2>Q_0^2`$.
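A minimal sketch of how the exponent $`\eta _1`$ controls the low-$`x`$ behaviour of the fitted form in Eq.(4); all parameter values below are invented:

```python
# Illustration of the fitted shape xq(x) ∝ x^eta1 (1-x)^eta2.
# A negative eta1 produces the low-x rise characteristic of sea quarks
# and gluons; a positive eta1 gives a valence-like shape. Values invented.
def xq(x, eta1, eta2, norm=1.0):
    return norm * x**eta1 * (1 - x)**eta2

sea = [xq(x, -0.2, 7.0) for x in (1e-4, 1e-3, 1e-2)]
val = [xq(x, 0.8, 3.0) for x in (1e-4, 1e-3, 1e-2)]
print(sea[0] > sea[1] > sea[2])   # True: sea rises toward low x
print(val[0] < val[1] < val[2])   # True: valence falls toward low x
```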
An example of such an evolution study can be seen in figure 8 where the $`F_2`$ data are presented as function of $`x`$ for fixed $`Q^2`$ values. The increase of $`F_2`$ with decreasing $`x`$ is seen over the whole range of measured $`Q^2`$ values. The pQCD fits give a good description of the data down to surprisingly low $`Q^2`$ values.
The resulting parton densities from the MRST parameterization at $`Q^2`$ = 20 GeV<sup>2</sup> are shown in figure 9. One sees the dominance of the $`u`$ valence quark at high $`x`$ and the sharp rise of the sea quarks at low $`x`$. In particular, the gluon density at low $`x`$ rises very sharply and reaches a value of more than 20 gluons per unit of rapidity at $`x\sim 10^{-4}`$. In figure 10 one sees the gluon density extracted by the H1 experiment at three different $`Q^2`$ values. The density of the gluons at a given low $`x`$ increases strongly with $`Q^2`$.
### 3.5 Rise of $`F_2`$ with decreasing $`x`$
The rate of the rise of $`F_2`$ with decreasing $`x`$ is $`Q^2`$ dependent. This can be clearly seen in figure 11, where $`F_2`$ is plotted as a function of $`x`$ for three $`Q^2`$ values. The rate of rise decreases as $`Q^2`$ gets smaller.
What can we say about the rate of rise? To what can one compare it? The proton structure function $`F_2`$ is related to the total $`\gamma ^{*}p`$ cross section $`\sigma _{tot}(\gamma ^{*}p)`$,
$$F_2=\frac{Q^2(1-x)}{4\pi ^2\alpha }\frac{Q^2}{Q^2+4m_p^2x^2}\sigma _{tot}(\gamma ^{*}p)\approx \frac{Q^2}{4\pi ^2\alpha }\sigma _{tot}(\gamma ^{*}p),$$
(5)
where the approximate sign holds for low $`x`$. Since we have a better feeling for the behaviour of the total cross section with energy, we plot in figure 12 the $`F_2`$ data converted to $`\sigma _{tot}(\gamma ^{*}p)`$ as a function of $`W^2`$ for fixed values of $`Q^2`$. For comparison we also plot the total $`\gamma p`$ cross section. One sees that the shallow $`W`$ behaviour of the total $`\gamma p`$ cross section changes to a steeper behaviour as $`Q^2`$ increases. The curves are the results of the ALLM97 parameterization (see below), which gives a good description of the transition seen in the data.
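A hedged sketch of the conversion in Eq.(5), with an invented $`F_2`$ value as input (this is bookkeeping, not a measurement):

```python
# Sketch of converting F2 into sigma_tot(gamma* p) via Eq.(5).
# The F2 value used below is an invented placeholder, not a data point.
ALPHA = 1 / 137.036              # fine-structure constant
GEV2_TO_MUB = 389.4              # 1 GeV^-2 = 389.4 microbarn
M_P = 0.938                      # proton mass, GeV
PI = 3.141592653589793

def sigma_tot_mub(f2, q2, x):
    """Full (non-approximate) inversion of Eq.(5); result in microbarn."""
    pref = 4 * PI**2 * ALPHA / q2
    corr = (q2 + 4 * M_P**2 * x**2) / (q2 * (1 - x))
    return pref * corr * f2 * GEV2_TO_MUB

print(sigma_tot_mub(f2=1.2, q2=15.0, x=1e-3))  # ~9 microbarn for these inputs
```

At such low $`x`$ the correction factor is essentially unity, so the approximate form $`\sigma \approx 4\pi ^2\alpha F_2/Q^2`$ gives nearly the same number.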
### 3.6 The transition region
The data presented above show a clear change of the $`W`$ dependence with $`Q^2`$. At $`Q^2`$=0 the processes are dominantly non-perturbative and the resulting reactions are usually named as ‘soft’ physics. This domain is well described in the Regge picture. As $`Q^2`$ increases, the exchanged photon is expected to shrink and one expects pQCD to take over. The reactions are said to be ‘hard’. Where does the transition from soft to hard physics take place? Is it a smooth or abrupt one? In the following we describe two parameterizations, one fully based on the Regge picture while the other combines the Regge approach with a QCD motivated one.
### 3.7 Example of two parameterizations
Donnachie and Landshoff (DL) succeeded in describing all existing hadron-proton total cross section data in a simple Regge picture by using a combination of a Pomeron and a Reggeon exchange, the former rising slowly and the latter decreasing with energy,
$$\sigma _{tot}=Xs^{0.08}+Ys^{-0.45},$$
(6)
where $`s`$ is the square of the total center of mass energy. The two numerical parameters, related to the intercepts $`\alpha (0)`$ of the Pomeron and Reggeon trajectories, respectively, are the result of fitting this simple expression to all available data, some of which are shown in the first two plots in figure 13. These parameters also give a good description of the total $`\gamma p`$ cross section data, which were not used in the fit and are shown on the right hand side of the figure.
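Eq.(6) is easy to evaluate; the coefficients below are the values commonly quoted for the DL fit to $`\sigma _{tot}(\gamma p)`$ (in mb, with $`s`$ in GeV<sup>2</sup>) and should be treated as approximate:

```python
# Sketch of Eq.(6) for the gamma-p case. Coefficients (in mb, s in GeV^2)
# are the ones usually quoted for the Donnachie-Landshoff fit; approximate.
def sigma_gamma_p_mb(s):
    return 0.0677 * s**0.0808 + 0.129 * s**(-0.4525)

w = 200.0  # photon-proton c.m. energy in GeV
print(sigma_gamma_p_mb(w**2) * 1000)  # ~160 microbarn, rising slowly with W
```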
Donnachie and Landshoff wanted to extend this picture also to virtual photons (for $`Q^2<`$10 GeV<sup>2</sup>), keeping the power of $`W^2`$, which is related to the Pomeron intercept, fixed as $`Q^2`$ varies. Their motivation was to see what contribution is expected from non-perturbative physics, or soft physics as we called it above, at higher $`Q^2`$.
The other example is that of Abramowicz, Levin, Levy, Maor (ALLM) , which was updated by Abramowicz and Levy (ALLM97) . This parameterization uses a Regge motivated approach at low $`x`$ together with a QCD motivated one at high $`x`$ to parameterize the whole ($`x,Q^2`$) phase space, fitting all existing $`F_2`$ data. It relies on a so-called interplay of soft and hard physics (see ).
The two parameterizations are compared to the low $`Q^2`$ HERA data, together with that of the fixed target E665 experiment, in figure 14. Here one sees again how the cross section changes from a $`(W^2)^{0.08}`$ behaviour at very low $`Q^2`$ to a $`(W^2)^{0.2-0.4}`$ behaviour as $`Q^2`$ increases. The simple DL parameterization as implemented by ZEUS (ZEUSREGGE in the figure) fails to describe the data above $`Q^2\approx `$ 1 GeV<sup>2</sup>. ALLM97 describes the data well in the whole region. DL98 , which adds a hard Pomeron to the soft one, can also describe the data, but loses the simplicity of the original DL parameterization.
One can quantify the change in the rate of increase by using the parameter $`\lambda `$. Since $`\sigma _{tot}\propto (W^2)^{\alpha (0)-1}`$, this implies that $`F_2\propto x^{-\lambda }`$. The fitted value of $`\lambda `$ as function of $`Q^2`$ is shown in figure 15 for the ZEUS (upper) and the H1 (lower) experiments. One sees a clear increase of $`\lambda `$ with $`Q^2`$ which cannot be reproduced by the simple Regge picture but needs an approach in which there is the interplay of soft and hard physics .
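The slope $`\lambda `$ can be extracted from any two $`F_2`$ points at fixed $`Q^2`$; the following sketch uses synthetic input (not HERA data) just to check the bookkeeping:

```python
# Extracting lambda from F2 ~ x^(-lambda) using two points at the same Q^2.
# Inputs below are synthetic, not experimental values.
import math

def lam(x1, f2_1, x2, f2_2):
    """Slope lambda = -d ln F2 / d ln x from two measurements."""
    return -(math.log(f2_2) - math.log(f2_1)) / (math.log(x2) - math.log(x1))

# synthetic check: if F2 = x^-0.3 exactly, the extracted slope is 0.3
print(lam(1e-4, (1e-4) ** -0.3, 1e-2, (1e-2) ** -0.3))  # -> 0.3 (up to rounding)
```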
### 3.8 What have we learned about the structure of the proton?
Let us summarize what we have learned so far about the structure of the proton.
* The density of partons in the proton increases with decreasing $`x`$.
* The rate of increase is $`Q^2`$ dependent; at high $`Q^2`$ the increase follows the expectations from the pQCD hard physics while at low $`Q^2`$ the rate is described by the soft physics behaviour expected by the Regge phenomenology.
* Though there seems to be a transition in the region of $`Q^2\approx `$ 1-2 GeV<sup>2</sup>, there is an interplay between the soft and hard physics in both regions.
## 4 THE STRUCTURE OF THE PHOTON
In this part we will describe what is presently known about the structure of the photon, both from $`e^+e^{-}`$ experiments as well as from HERA.
### 4.1 Photon structure from $`e^+e^{-}`$
The hadronic structure function of the photon, $`F_2^\gamma `$, was measured in $`e^+e^{-}`$ collisions which can be interpreted as depicted in figure 3. A highly virtual $`\gamma ^{*}`$ with large $`Q^2`$ probes a quasi-real $`\gamma `$ with $`P^2\approx `$ 0.
The measurements of $`F_2^\gamma `$ showed a different behaviour than that of the proton structure function. From the $`Q^2`$ dependence, shown in figure 16 , one sees positive scaling violation for all $`x`$.
This different behaviour can be understood as coming from a splitting that is additional to the ones present in the proton case (see figure 5). In the photon case, the photon can split into a $`q\overline{q}`$ pair, $`\gamma \to q\overline{q}`$. The contribution resulting from this splitting, called the ‘box diagram’, causes positive scaling violation for all $`x`$. In addition, and again contrary to the proton case, it also causes the photon structure function to be large at high $`x`$ values, as can be seen in figure 17 where $`F_2^\gamma `$ is plotted as function of $`x`$ for fixed $`Q^2`$ values. From this figure one can also see that very little data exist in the low $`x`$ region.
### 4.2 Photon structure from HERA
At HERA, the structure of the photon can be studied by selecting events in which the exchanged photon is quasi-real and the probe is provided by a large transverse momentum parton from the proton. The probed photon can participate in the process in two ways. In one, the interaction takes place before it fluctuates into a $`q\overline{q}`$ pair and thus the whole of the photon participates in the interaction. Such a process is called a ‘direct’ photon interaction. In the other case, the photon first fluctuates into partons and only one of these partons participates in the interaction while the rest continue as the photon remnant. This process is said to be a ‘resolved’ photon interaction. An example of leading order diagrams describing dijet photoproduction for the two processes is shown in figure 18.
If one defines a variable $`x_\gamma `$ as the fraction of the photon momentum taking part in a dijet process, we expect $`x_\gamma \approx `$ 1 in the direct case, while $`x_\gamma <`$ 1 in the resolved photon interaction. These two processes are clearly seen in figure 19 where the $`x_\gamma `$ distribution shows a two peak structure, one coming from the direct photon and the other from the resolved photon interactions .
One way of obtaining information about $`F_2^\gamma `$ from HERA is to measure the dijet photoproduction as function of $`x_\gamma `$ and to subtract the contribution coming from the direct photon reactions. This is shown in figure 20,
where the measurements are presented at fixed values of the hard scale, which is taken as the highest transverse energy jet . One can go one step further by assuming leading order QCD and Monte Carlo (MC) models to extract the effective parton densities in the photon. An example of the extracted gluon density in the photon is shown in figure 21. The gluon density increases with decreasing $`x`$, a behaviour similar to that of the gluon density in the proton. The data have the potential of differentiating between different parameterizations of the parton densities in the photon, as can be seen in the same figure.
### 4.3 Virtual photons at HERA
One can study the structure of virtual photons in a similar way as described above. In this case, the $`Q^2`$ of the virtual photon has to be much smaller than the transverse energy squared of the jet, $`E_t^2`$, which provides the hard scale of the probe. Such a study is presented in figure 22, where the dijet cross section is plotted as function of $`x_\gamma `$ for different regions in $`Q^2`$ and $`E_t^2`$. One sees a clear excess over the expectation of direct photon reactions, indicating that virtual photons also have a resolved part.
This fact can also be seen in figure 23 where the ratio of resolved to direct photon interactions is plotted as function of the virtuality $`Q^2`$ of the probed photon . One sees that although the ratio decreases with $`Q^2`$, it remains non-zero even at relatively high $`Q^2`$ values.
### 4.4 Virtual photons at LEP
The study of the structure of virtual photons in $`e^+e^{-}`$ reactions was dormant for more than 15 years following the measurement done by the PLUTO collaboration . Recently, however, the L3 collaboration at LEP measured the structure function of photons with a virtuality of 3.7 GeV<sup>2</sup>, using as probes photons with a virtuality of 120 GeV<sup>2</sup>.
In the same experiment, the structure function of real photons was also measured. Both results can be seen in figure 24 and, within errors, the structure function of the virtual photons is of the same order of magnitude as that of the real ones. The effective structure function is also presented as function of the virtuality of the probed photon $`P^2`$ in figure 25 and shows very little dependence on $`P^2`$ up to values of $`\sim `$ 6 GeV<sup>2</sup> .
### 4.5 What have we learned about the structure of the photon?
Let us summarize what we have learned so far about the structure of the photon.
* At HERA one can see clear signals of the 2-component structure of quasi-real photons, a direct and a resolved part.
* Virtual photons can also have a resolved part at low $`x`$ and fluctuate into $`q\overline{q}`$ pairs.
* The structure of virtual photons has also been seen at LEP.
## 5 THE ANSWER
Following the two sections on the structure of the proton and the photon, let us remind ourselves again what our original question was. At low $`x`$ we have seen that a $`\gamma ^{*}`$ can have structure. Does it still probe the proton in an $`ep`$ DIS experiment or does one of the partons of the proton probe the structure of the $`\gamma ^{*}`$?
The answer is just as Bjorken said: at low $`x`$ it does not matter. Both interpretations are correct. The emphasis is however ‘at low $`x`$’. At low $`x`$ the structure functions of the proton and of the photon can be related through Gribov factorization . By measuring one, the other can be obtained from it through a simple relation. This can be seen as follows.
Gribov showed that the $`\gamma \gamma `$, $`\gamma p`$ and $`pp`$ total cross sections can be related by Regge factorization as follows:
$$\sigma _{\gamma \gamma }(W^2)=\frac{\sigma _{\gamma p}^2(W^2)}{\sigma _{pp}(W^2)}.$$
(7)
This relation can be extended to the case where one photon is real and the other is virtual,
$$\sigma _{\gamma ^{*}\gamma }(W^2,Q^2)=\frac{\sigma _{\gamma ^{*}p}(W^2,Q^2)\sigma _{\gamma p}(W^2)}{\sigma _{pp}(W^2)},$$
(8)
or to the case where both photons are virtual,
$$\sigma _{\gamma ^{*}\gamma ^{*}}(W^2,Q^2,P^2)=\frac{\sigma _{\gamma ^{*}p}(W^2,Q^2)\sigma _{\gamma ^{*}p}(W^2,P^2)}{\sigma _{pp}(W^2)}.$$
(9)
Since at low $`x`$ one has $`\sigma \approx \frac{4\pi ^2\alpha }{Q^2}F_2`$, one gets the following relations between the proton structure function $`F_2^p`$, the structure function of a real photon, $`F_2^\gamma `$, and that of a virtual photon, $`F_2^{\gamma ^{*}}`$:
$$F_2^\gamma (W^2,Q^2)=F_2^p(W^2,Q^2)\frac{\sigma _{\gamma p}(W^2)}{\sigma _{pp}(W^2)},$$
(10)
and
$$F_2^{\gamma ^{*}}(W^2,Q^2,P^2)=\frac{4\pi ^2\alpha }{P^2}\frac{F_2^p(W^2,Q^2)F_2^p(W^2,P^2)}{\sigma _{pp}(W^2)}.$$
(11)
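A minimal sketch of the relation in Eq.(10), using invented placeholder numbers for the cross sections and for $`F_2^p`$:

```python
# Sketch of Eq.(10): F2_gamma = F2_p * sigma(gamma p) / sigma(pp), valid
# at low x. All numerical inputs below are illustrative placeholders.
def f2_gamma(f2_p, sig_gp_mub, sig_pp_mb):
    """Gribov-factorization estimate; cross sections put in a common unit (mb)."""
    return f2_p * (sig_gp_mub / 1000.0) / sig_pp_mb

val = f2_gamma(f2_p=1.0, sig_gp_mub=160.0, sig_pp_mb=40.0)
print(val)  # 0.004: F2_gamma is suppressed by the cross-section ratio
```

With these illustrative numbers the suppression factor $`\sigma _{\gamma p}/\sigma _{pp}`$ is of order $`\alpha `$, as one expects for a photon target.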
The relation given in equation (10) has been used to ‘produce’ $`F_2^\gamma `$ ‘data’ from well measured $`F_2^p`$ data in the region of $`x<`$0.01, where the Gribov factorization is expected to hold. The results are plotted in figure 26 together with direct measurements of $`F_2^\gamma `$. Since no direct measurements exist in the very low $`x`$ region for $`Q^2>4`$ GeV<sup>2</sup>, it is difficult to test the relation. However both data sets have been used for a global QCD leading order and higher order fits to obtain parton distributions in the photon. Clearly there is a need of more precise direct $`F_2^\gamma `$ data for such a study.
In any case, our answer to the question would be that at low $`x`$ the virtual photon and the proton probe the structure of each other. In fact, what one probes is the structure of the interaction. At high $`x`$, the virtual photon can be assumed to be structureless and it probes the structure of the proton.
## 6 DISCUSSION - THE STRUCTURE OF THE INTERACTION
We concluded in the last section that at low $`x`$ one studies the structure of the interaction. Let us discuss this point more clearly.
We saw that in case of the proton at low $`x`$, the density of the partons increases with decreasing $`x`$. Where are the partons located? In the proton rest frame, Bjorken $`x`$ is directly related to the space coordinate of the parton. The distance $`l`$ in the direction of the exchanged photon is given by ,
$$l=\frac{1}{2m_px}\approx \frac{0.1\mathrm{fm}}{x}.$$
(12)
Therefore partons with $`x>`$0.1 are in the interior of the proton, while all partons with $`x<`$0.1 have no direct relation to the structure of the proton. The low $`x`$ partons describe the properties of the $`\gamma ^{}p`$ interaction.
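Eq.(12) translated into code (a trivial sketch, shown only to make the $`x`$-to-distance correspondence concrete):

```python
# Eq.(12) in code: the rest-frame distance probed along the photon
# direction, l ~ 0.1 fm / x, compared to the proton radius of about 1 fm.
def parton_distance_fm(x):
    return 0.1 / x

print(parton_distance_fm(0.2))    # ~0.5 fm: inside the proton
print(parton_distance_fm(0.001))  # ~100 fm: far outside the proton
```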
How can we describe a $`\gamma ^{*}p`$ interaction at low $`x`$? It occurs in two steps: first the virtual photon fluctuates into a $`q\overline{q}`$ pair and then the configuration of this pair determines whether the interaction is ‘soft’ or ‘hard’ . The soft process is the result of a large spatial configuration in which the photon fluctuates into an asymmetric small $`k_T`$ $`q\overline{q}`$ pair. The hard nature of the interaction is obtained when the fluctuation is into a small configuration of a symmetric $`q\overline{q}`$ pair with large $`k_T`$. The two configurations are shown in figure 27.
At $`Q^2`$=0 the asymmetric configuration is dominant and the large color forces produce a hadronic component which interacts with the proton and leads to hadronic non-perturbative soft physics. The symmetric component contributes very little; the high $`k_T`$ configuration is screened by color transparency (CT). At higher $`Q^2`$ the contribution of the symmetric small configuration gets bigger. Each one still contributes little because of CT, but the phase space for such configurations increases. Nevertheless, the asymmetric large configuration is also still contributing and thus both soft and hard components are present. Another way to see this interplay is by looking at the diagram in figure 28. In a simple QPM picture of DIS, the fast quark from the asymmetric configuration becomes the current jet while the slow quark interacts with the proton in a soft process. Thus the DIS process looks in the $`\gamma ^{*}p`$ frame just like the $`Q^2`$=0 case. This brings the interplay of soft and hard processes.
## 7 CONCLUSION
* In DIS experiments at low $`x`$ one studies the ‘structure’ of the $`\gamma ^{*}p`$ interaction.
* In order to study the interior structure of the proton, one needs to measure the high $`x`$ high $`Q^2`$ region. This will be done at HERA after the high luminosity upgrade.
I would like to thank Professors K. Maruyama and H. Okuno for organizing a pleasant and lively Symposium. Special thanks are due to Professor K. Tokushuku and his group for being wonderful hosts during my visit in Japan. Finally I would like to thank Professor H. Abramowicz for helpful discussions.
Figure 1: The single target-spin asymmetry $`A_{UL}^{\mathrm{sin}\varphi _h}`$ for $`\pi ^+`$ production as a function of Bjorken $`x`$, evaluated using $`M_C=0.28`$ GeV in Eq.(). The solid line corresponds to $`h_1=g_1`$, the dashed one to $`h_1=(f_1+g_1)/2`$. Data are from Ref. .
DESY 00-016
hep-ph/0002091
Single-spin Azimuthal Asymmetries
in the “Reduced Twist-3 Approximation”
E. De Sanctis<sup>a</sup>, W.-D. Nowak<sup>b</sup>, K.A. Oganessyan<sup>a,b,c</sup> <sup>1</sup><sup>1</sup>1e-mail: kogan@hermes.desy.de
<sup>a</sup>INFN-Laboratori Nazionali di Frascati
I-00044 Frascati, via Enrico Fermi 40, Italy
<sup>b</sup>DESY Zeuthen
D-15738 Zeuthen, Platanenallee 6, Germany
<sup>c</sup>Yerevan Physics Institute
375036 Yerevan, Alikhanian Br.2, Armenia
## Abstract
We consider the single-spin azimuthal asymmetries recently measured at the HERMES experiment for charged pions produced in semi-inclusive deep inelastic scattering of leptons off longitudinally polarized protons. Guided by the experimental results and assuming a vanishing twist-2 transverse quark spin distribution in the longitudinally polarized nucleon, denoted as “reduced twist-3 approximation”, a self-consistent description of the observed single-spin asymmetries is obtained. In addition, predictions are given for the $`z`$ dependence of the single target-spin asymmetry.
Semi-inclusive deep inelastic scattering (SIDIS) of leptons off a polarized nucleon target is a rich source of information on the spin structure of the nucleon and on parton fragmentation. In particular, measurements of azimuthal asymmetries in SIDIS allow the further investigation of the quark and gluon structure of the polarized nucleon. The HERMES collaboration has recently reported on the measurement of single target-spin asymmetries in the distribution of the azimuthal angle $`\varphi `$ relative to the lepton scattering plane, in semi-inclusive charged pion production on a longitudinally polarized hydrogen target . The $`\mathrm{sin}\varphi `$ moment of this distribution was found to be significant for $`\pi ^+`$-production. For $`\pi ^{}`$ it was found to be consistent with zero within present experimental uncertainties, as it was the case for the $`\mathrm{sin}2\varphi `$ moments of both $`\pi ^+`$ and $`\pi ^{}`$. Single-spin asymmetries vanish in models in which hadrons consist of non-interacting collinear partons (quarks and gluons), i.e. they are forbidden in the simplest version of the parton model and perturbative QCD. Non-vanishing and non-identical intrinsic transverse momentum distributions for oppositely polarized partons play an important role in most explanations of such non-zero single-spin asymmetries; they are interpreted as the effects of “naive time-reversal-odd” (T-odd) fragmentation functions \[2-6\], arising from non-perturbative hadronic final-state interactions. In Refs. these asymmetries were evaluated and it was shown that a good agreement with the HERMES data can be achieved by using only twist-2 distribution and fragmentation functions.
In this letter the single target-spin sin$`\varphi _h`$ and sin$`\mathrm{\hspace{0.17em}2}\varphi _h`$ azimuthal asymmetries are investigated in the light of the recent HERMES results . It will be shown that these results may be interpreted as pointing to a vanishing twist-2 quark transverse spin distribution in the longitudinally polarized nucleon <sup>2</sup><sup>2</sup>2After this work has been completed we became aware of Refs. where this possibility has also been considered.. Under this assumption, which will be called hereafter “reduced twist-3 approximation”, the single target-spin sin$`\varphi _h`$ asymmetry, sub-leading in $`1/Q`$, reduces to the twist-2 level and is interpreted as the effect of the convolution of the transversity distribution and the T-odd fragmentation function. In this situation, also measurements with a longitudinally polarized target at HERMES may be used to extract the transversity distribution in a way similar to that proposed in Ref. for a transversely polarized target, once enough statistics have been collected.
The sin$`\varphi _h`$ and sin$`\mathrm{\hspace{0.17em}2}\varphi _h`$ moments of experimentally observable single target-spin asymmetries in the SIDIS cross-section can be related to the parton distribution and fragmentation functions involved in the parton level description of the underlying process . Their anticipated dependence on $`p_T`$ ($`k_T`$), the intrinsic transverse momentum of the initial (final) parton, reflects into the distribution of $`P_{hT}`$, the transverse momentum of the semi-inclusively measured hadron. The moments are defined as appropriately weighted integrals over this observable, of the cross section asymmetry:
$$\left\langle \frac{|P_{hT}|}{M_h}\mathrm{sin}\varphi _h\right\rangle \equiv \frac{\int d^2P_{hT}\frac{|P_{hT}|}{M_h}\mathrm{sin}\varphi _h\left(d\sigma ^+-d\sigma ^{-}\right)}{\int d^2P_{hT}\left(d\sigma ^++d\sigma ^{-}\right)},$$
(1)
$$\left\langle \frac{|P_{hT}|^2}{MM_h}\mathrm{sin}2\varphi _h\right\rangle \equiv \frac{\int d^2P_{hT}\frac{|P_{hT}|^2}{MM_h}\mathrm{sin}2\varphi _h\left(d\sigma ^+-d\sigma ^{-}\right)}{\int d^2P_{hT}\left(d\sigma ^++d\sigma ^{-}\right)}.$$
(2)
Here $`+(-)`$ denotes the antiparallel (parallel) longitudinal polarization of the target and $`M`$ ($`M_h`$) is the mass of the target (final hadron). For both polarized and unpolarized leptons these asymmetries are given by <sup>3</sup><sup>3</sup>3We omit the current quark mass dependent terms.
$$\left\langle \frac{|P_{hT}|}{M_h}\mathrm{sin}\varphi _h\right\rangle (x,y,z)=\frac{1}{I_0(x,y,z)}[I_{1L}(x,y,z)+I_{1T}(x,y,z)],$$
(3)
$$\left\langle \frac{|P_{hT}|^2}{MM_h}\mathrm{sin}2\varphi _h\right\rangle (x,y,z)=\frac{8}{I_0(x,y,z)}S_L(1-y)h_{1L}^{(1)}(x)z^2H_1^{(1)}(z),$$
(4)
where
$$I_0(x,y,z)=(1+(1-y)^2)f_1(x)D_1(z),$$
$$I_{1L}(x,y,z)=4S_L\frac{M}{Q}(2-y)\sqrt{1-y}[xh_L(x)zH_1^{(1)}(z)-h_{1L}^{(1)}(x)\stackrel{~}{H}(z)],$$
(5)
$$I_{1T}(x,y,z)=2S_{Tx}(1-y)h_1(x)zH_1^{(1)}(z).$$
(6)
With $`k_1`$ ($`k_2`$) being the 4-momentum of the incoming (outgoing) charged lepton, $`Q^2=-q^2`$, where $`q=k_1-k_2`$ is the 4-momentum of the virtual photon. $`P`$ ($`P_h`$) is the momentum of the target (final hadron), $`x=Q^2/(2Pq)`$, $`y=(Pq)/(Pk_1)`$, $`z=(PP_h)/(Pq)`$, $`k_{1T}`$ the incoming lepton transverse momentum with respect to the virtual photon momentum direction, and $`\varphi _h`$ is the azimuthal angle between $`P_{hT}`$ and $`k_{1T}`$ around the virtual photon direction. Note that the azimuthal angle of the transverse (with respect to the virtual photon) component of the target polarization, $`\varphi _S`$, is equal to 0 ($`\pi `$) for the target polarized parallel (anti-parallel) to the beam . The components of the longitudinal and transverse target polarization in the virtual photon frame are denoted by $`S_L`$ and $`S_{Tx}`$, respectively. Twist-2 distribution and fragmentation functions have a subscript ‘1’: $`f_1(x)`$ and $`D_1(z)`$ are the usual unpolarized distribution and fragmentation functions, while $`h_{1L}^{(1)}(x)`$ and $`h_1(x)`$ describe the quark transverse spin distribution in longitudinally and transversely polarized nucleons, respectively. The twist-3 distribution function in the longitudinally polarized nucleon is denoted by $`h_L(x)`$ . The spin dependent fragmentation function $`H_1^{(1)}(z)`$, describing transversely polarized quark fragmentation (Collins effect ), can be interpreted as the production probability of an unpolarized hadron from a transversely polarized quark . The fragmentation function $`\stackrel{~}{H}(z)`$ is the interaction-dependent part of the twist-3 fragmentation function: $`H(z)=2zH_1^{(1)}(z)+\stackrel{~}{H}(z)`$. The functions with superscript $`(1)`$ denote $`p_T^2`$- and $`k_T^2`$-moments, respectively:
$$h_{1L}^{(1)}(x)\equiv \int d^2p_T\left(\frac{p_T^2}{2M^2}\right)h_{1L}^{\perp }(x,p_T^2),$$
(7)
$$H_1^{(1)}(z)\equiv z^2\int d^2k_T\left(\frac{k_T^2}{2M_h^2}\right)H_1^{\perp }(z,z^2k_T^2).$$
(8)
The function $`h_L(x)`$ can be split into a twist-2 part, $`h_{1L}^{(1)}(x)`$, and an interaction-dependent part, $`\stackrel{~}{h}_L(x)`$ :
$$h_L(x)=-2\frac{h_{1L}^{(1)}(x)}{x}+\stackrel{~}{h}_L(x).$$
(9)
As it was shown in Refs. this relation can be rewritten as
$$h_L(x)=h_1(x)-\frac{d}{dx}h_{1L}^{(1)}(x).$$
(10)
The weighted single target-spin asymmetries defined above are related to the ones measured by HERMES through the following relations:
$$A_{UL}^{\mathrm{sin}\varphi _h}\approx \frac{2M_h}{\left\langle P_{hT}\right\rangle }\left\langle \frac{|P_{hT}|}{M_h}\mathrm{sin}\varphi _h\right\rangle ,$$
(11)
$$A_{UL}^{\mathrm{sin}2\varphi _h}\approx \frac{2MM_h}{\left\langle P_{hT}^2\right\rangle }\left\langle \frac{|P_{hT}|^2}{MM_h}\mathrm{sin}2\varphi _h\right\rangle ,$$
(12)
where the subscripts $`U`$ and $`L`$ indicate unpolarized beam and longitudinally polarized target, respectively.
When combining the HERMES experimental results of a significant target-spin sin$`\varphi _h`$ asymmetry for $`\pi ^+`$ and of a vanishing sin$`\mathrm{\hspace{0.17em}2}\varphi _h`$ asymmetry with the preliminary evidence from $`Z^0\to `$ 2-jet decays on a non-zero T-odd transversely polarized quark fragmentation function , it follows immediately from Eq.(4) that $`h_{1L}^{(1)}(x)`$, the twist-2 transverse quark spin distribution in a longitudinally polarized nucleon, should vanish. Consequently, from Eqs. (9, 10) it follows that
$$h_L(x)=\stackrel{~}{h}_L(x)=h_1(x).$$
(13)
In this situation the single target-spin sin$`\varphi _h`$ asymmetry given by Eq.(3) reduces to the twist-2 level (“reduced twist-3 approximation”). The fact that $`h_{1L}^{(1)}(x)`$ (see Eq.7) vanishes may be interpreted as follows: the distribution function $`h_{1L}^{\perp }(x,p_T^2)`$, which is non-zero itself, vanishes at any $`x`$ when it is averaged over the intrinsic transverse momentum of the initial parton, $`p_T`$. As a matter of fact, in a longitudinally polarized nucleon partons polarized transversely at large $`p_T`$ may indeed have a polarization opposite to that at smaller $`p_T`$, at any $`x`$.
It is important to mention that the “reduced twist-3 approximation” does not require $`\stackrel{~}{H}(z)=0`$, which otherwise would lead to the inconsistency that $`H_1^{(1)}(z)`$ would be required to vanish .
For the numerical calculations the non-relativistic approximation $`h_1(x)=g_1(x)`$ is taken as a lower limit <sup>4</sup><sup>4</sup>4For non-relativistic quarks $`h_1(x)=g_1(x)`$. Several models suggest that $`h_1(x)`$ has the same order of magnitude as $`g_1`$ . The evolution properties of $`h_1`$ and $`g_1`$, however, are very different . At the $`Q^2`$ values of the HERMES measurement the assumption $`h_1=g_1`$ is fulfilled at large, i.e. valence-like, $`x`$ values, while large differences occur at lower $`x`$ . , and $`h_1(x)=(f_1(x)+g_1(x))/2`$ as an upper limit . For the sake of simplicity, $`Q^2`$-independent parameterizations were chosen for the distribution functions $`f_1(x)`$ and $`g_1(x)`$ .
To calculate the T-odd fragmentation function $`H_1^{(1)}(z)`$ , the Collins parameterization for the analyzing power of transversely polarized quark fragmentation was adopted:
$$A_C(z,k_T)\equiv \frac{|k_T|}{M_h}\frac{H_1^{\perp }(z,k_T^2)}{D_1(z,k_T^2)}=\frac{M_C|k_T|}{M_C^2+k_T^2}$$
(14)
For the distribution of the final parton’s intrinsic transverse momentum, $`k_T`$, in the unpolarized fragmentation function $`D_1(z,k_T^2)`$ a Gaussian parameterization was used with $`z^2\langle k_T^2\rangle =b^2`$ (in the numerical calculations $`b=0.36`$ GeV was taken ). For $`D_1^{\pi ^+}(z)`$ the parameterization from Ref. was adopted. In Eq.(14) $`M_C`$ is a typical hadronic mass whose value may range from $`m_\pi `$ to $`M_p`$. Using $`M_C=2m_\pi `$ for the analyzing power of Eq.(14) results in
$$\frac{\int _{z_0=0.1}^1dzH_1^{\perp }(z)}{\int _{z_0=0.1}^1dzD_1(z)}=0.062,$$
(15)
which is in good agreement with the experimental result $`0.063\pm 0.017`$ given for this ratio in Ref. . Here $`H_1^{\perp }(z)`$ is the unweighted polarized fragmentation function, defined as:
$$H_1^{\perp }(z)\equiv z^2\int d^2k_TH_1^{\perp }(z,z^2k_T^2).$$
(16)
It is worth mentioning that the ratio in Eq.(15) is rather sensitive to the lower limit of integration, $`z_0`$ . By using $`z_0=0.01`$, the ratio reduces to 0.03; choosing a value of $`z_0`$ equal to 0.2 (0.3), the ratio increases to about $`0.1`$ ($`0.12`$). This behavior is mainly due to the fact that the fragmentation function $`D_1(z)`$ diverges at small values of $`z`$.
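To make the size of the Collins effect more tangible, the sketch below folds the analyzing power of Eq. (14) (with $`M_C=2m_\pi `$) with the Gaussian $`k_T`$ distribution of width $`b/z`$ assumed for $`D_1`$. The value $`z=0.4`$ and the simple Riemann grid are illustrative choices, not inputs quoted in the text.

```python
import math

M_PI = 0.1396        # GeV, charged pion mass
B_GAUSS = 0.36       # GeV, Gaussian width parameter b of D_1 from the text
M_C = 2.0 * M_PI     # GeV, Collins mass parameter used above

def mean_analyzing_power(z, n=4000):
    """Gaussian-weighted average of A_C(z,k_T) = M_C|k_T|/(M_C^2 + k_T^2)."""
    sig2 = (B_GAUSS / z) ** 2       # <k_T^2> = (b/z)^2 for the final parton
    kmax = 6.0 * math.sqrt(sig2)    # integration cutoff, ~6 sigma
    h = kmax / n
    total = 0.0
    for i in range(1, n):
        k = i * h
        # normalized 2D Gaussian, reduced to the |k_T| radial measure
        weight = (2.0 * k / sig2) * math.exp(-k * k / sig2)
        total += weight * M_C * k / (M_C ** 2 + k * k)
    return total * h

avg = mean_analyzing_power(0.4)
```

Since $`A_C`$ never exceeds 1/2 pointwise, the averaged analyzing power stays below 0.5 for any $`z`$.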
In Fig. 1, the asymmetry $`A_{UL}^{\mathrm{sin}\varphi _h}(x)`$ of Eq.(11) for $`\pi ^+`$ production on a proton target is presented as a function of $`x`$-Bjorken and compared to HERMES data , which correspond to $`1`$ GeV<sup>2</sup> $`Q^215`$ GeV<sup>2</sup>, $`4`$ GeV $`E_\pi 13.5`$ GeV, $`0.02x0.4`$, $`0.2z0.7`$, and $`0.2y0.8`$. The two theoretical curves are calculated by integrating over the same kinematic ranges taking $`\langle P_{hT}\rangle =0.365`$ GeV as input. The latter value is obtained in this kinematic region assuming a Gaussian parameterization of the distribution and fragmentation functions with $`\langle p_T^2\rangle =(0.44)^2`$ GeV<sup>2</sup> .
From Fig. 1 it can be concluded that there is good agreement between the calculation in this letter and the HERMES data. Note that the “kinematic” contribution to $`A_{UL}^{\mathrm{sin}\varphi _h}(x)`$, coming from the transverse component of the target polarization (with respect to the virtual photon direction) and given by $`I_{1T}`$ (Eq.(6)), amounts to only $`25\%`$.
The $`z`$ dependence of the asymmetry $`A_{UL}^{\mathrm{sin}\varphi _h}`$ for $`\pi ^+`$ production is shown in Fig. 2, where the two curves correspond to two limits for $`h_1(x)`$, as introduced above. No data are available yet to constrain the calculations.
In conclusion, the recently observed single-spin azimuthal asymmetries in semi-inclusive deep inelastic lepton scattering off a longitudinally polarized proton target at HERMES are interpreted on the basis of the so-called “reduced twist-3 approximation”, that is, assuming a vanishing twist-2 transverse quark spin distribution in the longitudinally polarized nucleon. This leads to a self-consistent description of the observed single-spin asymmetries. In this approach the target-spin sin$`\varphi _h`$ asymmetry is interpreted as the effect of the convolution of the transversity distribution, $`h_1(x)`$, and a T-odd fragmentation function, $`H_1^{(1)}(z)`$, and may allow one to probe transverse spin observables in a longitudinally polarized nucleon.
In addition, predictions are given for the $`z`$ dependence of the single target-spin sin$`\varphi _h`$ asymmetry, for which experimental data are not yet published.
We would like to thank P. Mulders for many useful discussions, V. Korotkov for very useful comments and R. Kaiser for the careful reading of the manuscript. The work of K.A.O. was in part supported by INTAS contributions (contract numbers 93-1827 and 96-287) from the European Union.
# Structure and Magnetism of well-defined cobalt nanoparticles embedded in a niobium matrix
## I Introduction
Structural and magnetic properties of clusters, i.e. particles containing from two to a few thousand atoms, are of great interest nowadays. From a technological point of view, such systems are part of the development of high density magnetic storage media and, from a fundamental point of view, the physics of magnetic clusters still needs to be investigated. Indeed, to perform stable magnetic storage with small clusters, one has to control the magnetization reversal process (nucleation and dynamics), and thus make a close connection between structure and magnetic behavior. To access the magnetic properties of small clusters, two approaches are available: ”macroscopic” measurements (using a Vibrating Sample Magnetometer (VSM) or a Superconducting Quantum Interference Device (SQUID)) on a cluster collection (10<sup>9</sup> particles), which require a statistical treatment of the data, and ”microscopic” measurements on a single particle. Until now, micro-magnetometers (MFM, Hall micro-probe, or classical micro-squid) have not been sensitive enough to perform magnetic measurements on a single cluster. The present paper constitutes a preliminary study toward magnetic measurements on a small single cluster using a new microsquid design. We focus on, and try to connect, the structural and magnetic properties of a cluster collection. To clear up the structural questions, we first study the structure of nanocrystalline Co-particles embedded in a niobium matrix by means of Transmission Electron Microscope (TEM) observations, X-ray diffraction and absorption techniques. Then magnetization measurements are performed on the same particles to deduce their magnetic size and their anisotropy terms.
## II experimental devices
We use the co-deposition technique recently developed in our laboratory to prepare the samples. It consists of two independent beams simultaneously reaching a silicon (100) substrate at room temperature: the pre-formed cluster beam and the atomic beam used for the matrix. The deposition is made in an Ultra High Vacuum (UHV) chamber (p=$`5\times 10^{-10}`$ Torr) to limit cluster and matrix oxidation. The cluster source used for this experiment is a classical laser vaporization source improved according to the Milani-de Heer design. It allows working in the Low Energy Cluster Beam Deposition (LECBD) regime: clusters do not fragment when arriving on the substrate or in the matrix. The vaporization Ti:Sapphire laser used provides output energies up to 300 mJ at 790 nm, with a pulse duration of 3 $`\mu `$s and a 20 Hz repetition rate. It presents many advantages described elsewhere, such as an adjustable high cluster flux. The matrix is evaporated using a UHV electron gun in communication with the deposition chamber. By monitoring and controlling both evaporation rates with quartz balances, we can continuously adjust the cluster concentration in the matrix. We previously showed that this technique allows the preparation of nanogranular films from any couple of materials, even two miscible ones forbidden by the phase diagram at equilibrium. We determine the crystalline structure and the morphology of cobalt clusters deposited onto copper grids and protected by a thin carbon layer (100 $`\AA `$). From earlier High Resolution Transmission Electron Microscopy (HRTEM) observations, we found that cobalt clusters form quasi-spherical nanocrystallites with a f.c.c structure and a sharp size distribution. In order to perform macroscopic measurements on a cluster collection using surface sensitive techniques, we need films having a 5-25 nm equivalent thickness of cobalt clusters embedded in 500 nm thick niobium films.
We chose a low cluster concentration (1-5 $`\%`$) to make structural and magnetic measurements on non-interacting particles. One has to mention that such a concentration is still far from the expected percolation threshold (about 20 $`\%`$). From both X-ray reflectometry and grazing X-ray scattering measurements, we measured the density of the Nb films: 92 $`\%`$ of the bulk one, and a b.c.c polycrystalline structure as in the common bulk. X-ray absorption spectroscopy (XAS) was performed on D42 at the LURE facility in Orsay using the X-ray beam delivered by the DCI storage ring at the Co K-edge (7709 eV) by electron detection at low temperature (T=80 K). The porosity of the matrix is low enough to avoid the oxidation of the reactive Co clusters, as shown in X-ray Absorption Near Edge Structure (XANES) spectra at the Co K-edge where no fingerprint of oxide on cobalt clusters embedded in niobium films is observed. The results of the Extended X-ray Absorption Fine Structure (EXAFS) simulations reveal the local distances between first Co-neighbors and their number for each component. Magnetization measurements on diluted samples were performed using a Vibrating Sample Magnetometer (VSM) at the Laboratoire Louis Néel in Grenoble. Other low temperature magnetization curves of the same samples were obtained from the X-ray Magnetic Circular Dichroism (XMCD) signal. The measurement was conducted at the European Synchrotron Radiation Facility in Grenoble at the ID12B beamline. The degree of circular polarization was almost 80 $`\%`$, and the hysteresis measurements were performed using a helium-cooled UHV electromagnet that provided magnetic fields up to 3 Tesla.
## III structure
The origin of the EXAFS signal is well established, as mentioned in various references. If multiple scattering effects are neglected for the first nearest neighbors, the EXAFS modulations are described in terms of interferences between the outgoing and the backscattered photoelectron wave functions. We use McKale tabulated phase and amplitude shifts for all types of considered Co-neighbors. The EXAFS analysis is restricted to simple diffusion paths only, using the standard fitting code developed by Michalowicz, where an amplitude reduction factor S$`{}_{}{}^{2}{}_{0}{}^{}`$ equal to 0.7 and an asymmetric distance distribution based on a hard sphere model are introduced. The first ingredient accounts for the possibility of multiple electron excitations contributing to the reduction of the total absorption coefficient. The second one is needed to take into account the difference between the core and the interface Co-atom distances in the cluster. So, in the fit, the R<sub>j</sub> and s<sub>j</sub> values, corresponding respectively to the shortest distance and to the asymmetry parameter of the j<sup>th</sup> atom from the excited one, replace the average distance of the standard EXAFS formulation. We also define N<sub>j</sub> the coordination number, $`\sigma _j`$ the Debye-Waller factor of the j<sup>th</sup> atom, k the photoelectron momentum and $`\mathrm{\Gamma }`$(k) its mean free path. Structural parameters ($`N_j,R_j,\sigma _j,s_j`$) were determined from the simulation of the EXAFS oscillations (Fig. 1). As for some systems with two components (for example in metallic superlattices previously studied), the first Fourier transform peak of the EXAFS spectrum presents a shoulder which can be understood unambiguously in terms of the phase-shift between Co and Nb backscatterers for k values around 5 $`\AA ^{-1}`$. This splitting in real space corresponds to a broadening of the second oscillation in momentum space (see Fig. 1).
Thus, in the simulations, we first consider two kinds of Co-neighbors: cobalt and niobium. However, a preliminary study of the cobalt and niobium core levels by X-ray photoelectron spectroscopy reveals a weak concentration of oxygen inside the sample, owing to the UHV environment. The core-level yields provide an oxygen concentration of about 5 $`\%`$. Such Co-O bonding is taken into account to improve the fit. Moreover, from HRTEM and X-ray diffraction patterns, we know the mean size of the clusters (3 nm), their inner f.c.c structure with a lattice parameter close to the bulk one, and their shape close to the Wulff equilibrium one (truncated octahedron). Finally, the Co/Nb system can be usefully seen as a cobalt core with the bulk parameters and a more or less sharp Co/Nb interface. From these assumptions, we use the simulation of the EXAFS oscillations to describe the Co/Nb interface and to verify that it is consistent with an alloy observed in the phase diagram (tetragonal Co<sub>6</sub>Nb<sub>7</sub>). The best fitted values of the EXAFS oscillations are the following:
\- 70 $`\%`$ of Co atoms are surrounded with cobalt neighbors in the f.c.c phase with the bulk-like distance (d<sub>Co-Co</sub>=2.50 $`\AA `$), corresponding to N<sub>1</sub>=8.4, R<sub>1</sub>=2.495 $`\AA `$, $`\sigma _1`$=0.1 $`\AA `$, s<sub>1</sub>=0.18 $`\AA `$ in EXAFS simulations.
\- 26 $`\%`$ of Co atoms are surrounded with niobium neighbors in the tetragonal Co<sub>6</sub>Nb<sub>7</sub> phase, corresponding to N<sub>2</sub>=3.1, R<sub>2</sub>=2.58 $`\AA `$, $`\sigma _2`$=0.16 $`\AA `$, s<sub>2</sub>=0.06 $`\AA `$ in EXAFS simulations.
\- 4 $`\%`$ of Co atoms are surrounded with oxygen neighbors with a distance equal to d<sub>Co-O</sub>=2.0 $`\AA `$ based on the typical oxygen atomic radii in chemisorption systems or transition metal oxides. This environment corresponds to N<sub>3</sub>=0.5, R<sub>3</sub>=1.9 $`\AA `$, $`\sigma _3`$=0.04 $`\AA `$, s<sub>3</sub>=0.1 $`\AA `$ in EXAFS simulations.
According to Ref., a 3 nm-diameter f.c.c truncated octahedron consists of 35.6 $`\%`$ core atoms (zone a), 27 $`\%`$ atoms in the first sublayer (zone b) and 37.6 $`\%`$ atoms in the surface layer (zone c). Let us propose the following compositions: a pure f.c.c Co phase in zone a, a Co<sub>4</sub>Nb phase in zone b, and a Co<sub>6</sub>Nb<sub>7</sub>O<sub>2</sub> phase in zone c (i.e. at the cluster-matrix interface). The corresponding coordination numbers, N<sub>1</sub>(Co-Co)=8.5, N<sub>2</sub>(Co-Nb)=3.1 and N<sub>3</sub>(Co-O)=0.4, are in good quantitative agreement with the coordination numbers N<sub>1</sub>, N<sub>2</sub> and N<sub>3</sub> we obtain from the EXAFS simulations. Concerning the other fitting parameters, one notes the high values of the mean free path of the photoelectron ($`\mathrm{\Gamma }=1.6`$) and of the Debye-Waller factor for the Co-metal environment ($`\sigma `$$`>`$ 0.1 $`\AA `$). Notice that, because we did not have experimental phases and amplitudes at our disposal but calculated ones, a large difference between sample and reference is expected, so their absolute values do not represent physical reality but are only needed to attenuate the amplitude of the oscillations. On the contrary, the total number of neighbors is fixed by the TEM experiments, which reveal a f.c.c phase for the Co-clusters (so N<sub>1</sub>+N<sub>2</sub>+N<sub>3</sub>=11$`\pm `$1). To follow the shape, position and relative amplitudes of the oscillations, N<sub>j</sub> is left as a free parameter for each component in the simulation; it is moreover related to the concentration of the j<sup>th</sup> type of neighbor around the Co-absorber in the sample. This study finally evidences a diffuse interface between cobalt and niobium, mostly located on the first monolayer.
In summary, we made a consistent treatment of all the experimental results obtained from different techniques. We notice that the EXAFS spectra show unambiguously a smooth interface between miscible elements such as cobalt and niobium. This information will be of importance and is the key to understanding the magnetic behavior discussed below.
## IV magnetism
Here, we present the magnetic properties of these nanometer sized clusters embedded in a metallic matrix. Furthermore, such a system will be used to fabricate microsquid devices in order to reach magnetization measurements on an isolated single domain cluster. The present study deals with macroscopic measurements performed on a particle assembly (typically 10<sup>14</sup>) of cobalt clusters in a niobium matrix to describe the magnetic properties of the Co/Nb system. Because of the goal mentioned above, we focus on very diluted samples (less than 2 $`\%`$ Co volume concentration). For these low cluster concentrations, direct magnetic couplings between particles are negligible, whereas dipolar and, in the case of a metallic matrix, RKKY interactions have to be considered. Nevertheless, both of these contributions, which vary as 1/d$`{}_{}{}^{3}{}_{ij}{}^{}`$ (where d<sub>ij</sub> is the mean distance between particles), are expected to be weak compared to the ferromagnetic order inside the cluster. In a first approximation, we neglect any kind of surface disorder, so that a single domain cluster can be seen as an isolated macrospin with uniform rotation of its magnetization. This means that the atomic spins in the cluster remain parallel during the cluster magnetization rotation. In an external applied field, the magnetic energy of a nanoparticle is the sum of a Zeeman interaction (between the cluster magnetization and the local field) and anisotropy terms (such as shape, magnetocrystalline, surface (interface in our case) or strain anisotropy). At high temperatures (T$`>`$100 K), the anisotropy contributions of nanometric clusters can be neglected compared to the thermal activation ($`K_{eff}V/k_B\approx 30`$ K) and the clusters act as superparamagnetic independent entities. A way to estimate the interparticle interactions is to plot 1/$`\chi `$ vs. T in the superparamagnetic regime. 1/$`\chi `$ follows a Curie-Weiss-like law:
$$\frac{1}{\chi }=C(T-\theta )$$
(1)
and $`\theta `$ gives an order of magnitude of the particle interactions. From the experimental data, we plot 1/$`\chi `$ vs. T in Fig. 2 and find $`\theta `$=1-2 K, which is negligible compared with the other energies of the clusters. In the superparamagnetic regime, we also estimate, in Section A, the magnetic size distribution of the clusters. At low temperatures, the clusters have a ferromagnetic behavior due to the anisotropy terms, and, in Section B, we experimentally estimate their mean anisotropy constant.
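The extraction of $`\theta `$ from Eq. (1) is a straight-line fit of 1/$`\chi `$ versus T. The snippet below sketches the procedure with a least-squares fit; the data points are hypothetical, generated from the law itself purely to illustrate the method (they are not the measured values of Fig. 2).

```python
def fit_curie_weiss(temps, inv_chi):
    """Least-squares fit of 1/chi = C*(T - theta); returns (C, theta)."""
    n = len(temps)
    st, sy = sum(temps), sum(inv_chi)
    stt = sum(t * t for t in temps)
    sty = sum(t * y for t, y in zip(temps, inv_chi))
    slope = (n * sty - st * sy) / (n * stt - st * st)  # slope = C
    intercept = (sy - slope * st) / n                  # intercept = -C*theta
    return slope, -intercept / slope

# hypothetical 1/chi data following Eq. (1) with C = 2.0 and theta = 1.5 K
T_data = [120.0, 150.0, 180.0, 210.0, 240.0, 270.0, 300.0]
y_data = [2.0 * (t - 1.5) for t in T_data]
C_fit, theta_fit = fit_curie_weiss(T_data, y_data)
```

On noise-free synthetic data the fit recovers the input C and $`\theta `$ exactly; with real data the scatter of the points sets the uncertainty on $`\theta `$.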
### A Magnetic size measurement
In the following, we make the approximation that the atomic magnetic moment is equal to 1.7 $`\mu _B`$ at any temperature (or 1430 emu/cm<sup>3</sup>, as in bulk h.c.p cobalt). Besides, our synthesized cobalt clusters have approximately a 3 nm diameter and contain at least 1000 atoms. According to references, a magnetic moment enhancement only appears for particles containing less than 500 atoms. So in our size range we can assume that the atomic cobalt moment is close to the bulk phase one (m<sub>Co</sub>=1.7$`\mu _B`$). We consider a log-normal size distribution:
$$f(D)=\frac{1}{D\sqrt{2\pi \sigma ^2}}exp\left(-\left(ln\left(\frac{D}{D_m}\right)\right)^2\frac{1}{2\sigma ^2}\right)$$
(2)
where D<sub>m</sub> is the mean cluster diameter and $`\sigma `$ the dispersion. In the superparamagnetic regime, we can use a classical Langevin function $`L(x)`$ and write:
$$\frac{m(H,T)}{m_{sat}}=\frac{\int _0^{\infty }D^3L(x)f(D)dD}{\int _0^{\infty }D^3f(D)dD},x=\frac{\mu _0H(\pi D^3/6)M_S}{k_BT}$$
(3)
where H is the applied field ($`\mu _0`$H in Tesla), T the temperature, and m<sub>sat</sub> the saturation magnetic moment of the sample, estimated from magnetization curves at low temperatures under a 2 Tesla field. First of all, in Fig. 3 one can see that for T$`>`$100 K the m(H/T) curves superimpose according to Eq. (3) for a magnetic field applied in the sample plane (we checked that the results are the same for a perpendicular applied field). Secondly, one can notice that for T=30 K the magnetization deviation from the high temperature curves comes from the fact that the anisotropy is no longer negligible, and one has to use a modified Langevin function in the simulation. In Eq. (3) we also assume that the particles feel the applied field; actually, they feel the local field, which is the sum of the external field and the mean field created by the surrounding particles in the sample. Furthermore, in the superparamagnetic regime, we fit the experimental m(H,T) curves obtained from VSM measurements to find D<sub>m</sub> and $`\sigma `$, the mean diameter and dispersion of the ”magnetic size” distribution, respectively (see Fig. 4). For those fits we still use the bulk M<sub>S</sub> value (the use of other values given in references leads to nearly the same results, with an error of less than 5 $`\%`$, the determining factors being D<sub>m</sub> and $`\sigma `$). Figure 5 displays D<sub>m</sub> and $`\sigma `$ for two niobium deposition rates (V<sub>Nb</sub>=3 $`\AA `$/s and V<sub>Nb</sub>=5 $`\AA `$/s, respectively). These results are compared with the real cluster sizes deduced from TEM observations. The magnetic domain is always smaller than the real diameter. Furthermore, the magnetic domain decreases as the deposition rate increases. This indicates that the kinetics of the deposition plays a crucial role in the nature of the interface. For example, we found a magnetic domain size of 2.3 nm (resp. 1.8 nm) for a 3 nm diameter cluster when V<sub>Nb</sub>=3 $`\AA `$/s (resp. V<sub>Nb</sub>=5 $`\AA `$/s), the dispersion $`\sigma `$=0.24 remaining the same.
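A minimal numerical sketch of the fitting function of Eqs. (2) and (3): a D<sup>3</sup>-weighted Langevin average over a log-normal size distribution. The parameters D<sub>m</sub>=2.3 nm, $`\sigma `$=0.24 and M<sub>S</sub>=1430 emu/cm<sup>3</sup> are those quoted in the text; the integration grid and cutoff are arbitrary implementation choices.

```python
import math

KB = 1.380649e-23   # J/K
MS = 1.43e6         # A/m, i.e. 1430 emu/cm^3

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit handled explicitly."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def lognormal(d, dm, sigma):
    """Log-normal size distribution of Eq. (2)."""
    return math.exp(-(math.log(d / dm)) ** 2 / (2 * sigma ** 2)) / (d * sigma * math.sqrt(2 * math.pi))

def m_over_msat(b_tesla, temp, dm=2.3e-9, sigma=0.24, n=2000):
    """Eq. (3): D^3-weighted Langevin average over the log-normal distribution."""
    num = den = 0.0
    for i in range(1, n):
        d = 8e-9 * i / n                  # integrate diameters up to 8 nm
        w = d ** 3 * lognormal(d, dm, sigma)
        x = b_tesla * (math.pi * d ** 3 / 6.0) * MS / (KB * temp)
        num += w * langevin(x)
        den += w
    return num / den
```

Because x depends only on the ratio H/T, the computed curves superimpose when plotted against H/T, exactly as observed for the high-temperature data of Fig. 3.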
### B Anisotropy
The bulk value of the f.c.c cobalt cubic magnetocrystalline anisotropy constant is K<sub>MA</sub>=2.7.10<sup>6</sup> erg/cm<sup>3</sup>, smaller than that of the bulk h.c.p phase (4.4.10<sup>6</sup> erg/cm<sup>3</sup>). The shape anisotropy constant K<sub>shape</sub> can be calculated from the demagnetizing factors and the saturation magnetization. In the case of weak distortions from sphericity, the shape anisotropy for a prolate spheroid can be expressed as follows:
$$E_{shape}=\frac{1}{2}\mu _0M_S^2(N_z-N_x)\mathrm{cos}^2(\theta )=K_{shape}\mathrm{cos}^2(\theta )$$
(4)
M<sub>S</sub> is the saturation magnetization of the particle, M<sub>S</sub>=1430 emu/cm<sup>3</sup>, $`\theta `$ the angle between the magnetization direction and the easy axis, and N<sub>x</sub>, N<sub>z</sub> the demagnetizing factors along the x-axis and z-axis, respectively. We plot in Fig. 6 the anisotropy constant K<sub>shape</sub> as a function of the prolate spheroid deformation c/a (with c and a representing the long and short ellipsoid axes, respectively). For a truncated octahedron, the ratio c/a has been evaluated to be lower than 1.2, which restricts the K<sub>shape</sub> value to the order of 10<sup>6</sup> erg/cm<sup>3</sup>. However, we have no information about the magnitude of the interface and strain anisotropies in our system.
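As a cross-check of the order of magnitude quoted above, the sketch below evaluates Eq. (4) using the exact demagnetizing factors of a prolate spheroid (the standard textbook formulas, which are not derived in this paper). For c/a=1.2 the magnitude of K<sub>shape</sub> indeed comes out of order 10<sup>6</sup> erg/cm<sup>3</sup>.

```python
import math

MS = 1430.0  # emu/cm^3, saturation magnetization used above

def demag_long(m):
    """SI demagnetizing factor along the long axis of a prolate spheroid, m = c/a > 1."""
    e = math.sqrt(1.0 - 1.0 / m ** 2)   # eccentricity
    return (1.0 - e ** 2) / e ** 3 * (0.5 * math.log((1 + e) / (1 - e)) - e)

def k_shape(m):
    """Magnitude of the shape anisotropy constant of Eq. (4), in erg/cm^3 (cgs)."""
    if m == 1.0:
        return 0.0                       # a sphere has no shape anisotropy
    nc = demag_long(m)                   # long (z) axis
    na = (1.0 - nc) / 2.0                # short (x, y) axes
    return 2.0 * math.pi * MS ** 2 * (na - nc)
```

k_shape(1.2) evaluates to roughly 10<sup>6</sup> erg/cm<sup>3</sup>, and the value shrinks rapidly as the shape approaches a sphere.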
Let us now experimentally evaluate the anisotropy constant K<sub>eff</sub> of the cobalt clusters from low temperature measurements. Hysteresis curves are obtained from VSM experiments, but at very low temperatures (i.e. T$`<`$8 K) superconducting fluctuations appear due to the niobium matrix and prevent any magnetization measurements on the whole sample. So, we also use X-ray Magnetic Circular Dichroism as a local magnetometer by recording the MCD signal at the cobalt L<sub>3</sub> white line as a function of the applied magnetic field (for details on the method see Ref.). The angle of the incident beam is fixed at 55° with respect to the surface normal and the magnetic field is parallel to the sample surface. The absorption signal is recorded by monitoring the soft X-ray fluorescence yield, chosen for its large probing depth (1000 $`\AA `$). Finally, from the hysteresis curves given by both the VSM and XMCD techniques, we deduce m<sub>r</sub>(T), the remanent magnetic moment vs. T down to 5.3 K, and we normalize it by imposing m<sub>r</sub>(8.1 K)<sub>VSM</sub>=m<sub>r</sub>(8.1 K)<sub>XMCD</sub>; the curve m<sub>r</sub>(T)/m<sub>r</sub>(5.3 K) is given in Fig. 7. To evaluate m<sub>r</sub>(T), one can write:
$$m_r(T)=\frac{m_{sat}}{Cte}\frac{\int _{D_B(T)}^{\infty }D^3f(D)dD}{\int _0^{\infty }D^3f(D)dD}$$
(5)
where D<sub>B</sub>(T) is the particle blocking diameter at temperature T, and Cte is a parameter independent of the particle size: Cte=$`2`$ if the clusters have a uniaxial magnetic behavior and $`3\sqrt{3}`$ if they have a cubic one. In order to eliminate this constant, we plot the ratio:
$$\frac{m_r(T)}{m_r(5.3K)}=\frac{\int _{D_B(T)}^{\infty }D^3f(D)dD}{\int _{D_B(5.3K)}^{\infty }D^3f(D)dD}$$
(6)
One finds D<sub>B</sub>(T) when the relaxation time of the particle is equal to the measuring time: $`\tau =\tau _0exp(K_{eff}V/k_BT)=\tau _{mes}`$.
$$D_B^3(T)=aT,a=\frac{6k_B}{\pi K_{eff}}ln\left(\frac{\tau _{mes}}{\tau _0}\right)$$
(7)
$`\tau _0`$ is the microscopic relaxation time of the particle, taken to be independent of the temperature. The fit result is presented in Fig. 7. We find a=3.5$`\pm `$0.1 nm<sup>3</sup>/K, and by taking $`\tau _{mes}`$=10 s and $`\tau _0`$=10<sup>-12</sup>-10<sup>-9</sup> s, we obtain K<sub>eff</sub>=(2.0$`\pm `$0.3).10<sup>6</sup> erg/cm<sup>3</sup>. By fitting Zero Field Cooled (ZFC) curves for different applied fields, we can also evaluate K<sub>eff</sub>. Besides, if we neglect the blocked particle susceptibility, we have:
$$\frac{m_{ZFC}(H,T)}{m_{sat}}=\frac{\int _0^{D_B(H,T)}D^3L(x)f(D)dD}{\int _0^{\infty }D^3f(D)dD}$$
(8)
Moreover, for low field values compared with the anisotropy field of cobalt clusters (estimated to be $`\mu _0`$H<sub>a</sub>=0.4 T), we can make the approximation:
$$D_B^3(H,T)=af\left(\frac{H}{H_a}\right)T\approx a\left(1+\alpha \frac{H}{H_a}\right)T$$
(9)
where a is the coefficient of Eq. (7) and $`\alpha `$ a numerical constant. The ZFC curve fits are presented in Fig. 8. A linear extrapolation to $`\mu _0`$H=0 T also gives a$``$3.5 nm<sup>3</sup>/K and an anisotropy constant of (2.0$`\pm `$0.3).10<sup>6</sup> erg/cm<sup>3</sup> for the same numerical values as above. We found a similar result for the second sample, with a niobium evaporation rate of 5 $`\AA `$/s. Finally, we experimentally find an anisotropy constant close to that expected for quasi-spherical f.c.c cobalt clusters.
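The consistency of the quoted K<sub>eff</sub> can be checked directly by inverting the relation D<sub>B</sub><sup>3</sup>=aT of Eq. (7). The sketch below uses the fitted a=3.5 nm<sup>3</sup>/K together with the $`\tau `$ values given in the text.

```python
import math

KB_CGS = 1.380649e-16  # erg/K, Boltzmann constant in cgs units

def k_eff_from_slope(a_nm3_per_k, tau_mes=10.0, tau0=1e-10):
    """Invert Eq. (7): a = 6 k_B ln(tau_mes/tau0) / (pi K_eff); K_eff in erg/cm^3."""
    a_cm3 = a_nm3_per_k * 1e-21          # nm^3/K -> cm^3/K
    return 6.0 * KB_CGS * math.log(tau_mes / tau0) / (math.pi * a_cm3)

# both extreme tau0 choices give K_eff in the (1.7-2.3)x10^6 erg/cm^3 range,
# consistent with the quoted value of (2.0 +/- 0.3)x10^6 erg/cm^3
k_fast = k_eff_from_slope(3.5, tau0=1e-12)
k_slow = k_eff_from_slope(3.5, tau0=1e-9)
```

The logarithmic dependence on $`\tau _0`$ is what makes the extracted K<sub>eff</sub> rather insensitive to the poorly known microscopic attempt time.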
## V discussion
The ”magnetic size” distribution is compared to the one obtained from TEM observations of pure Co-clusters prepared in the same experimental conditions (see Fig. 5(a)). For all the studied Co/Nb samples, we systematically find a global size reduction which might be related to the formation of a non-magnetic alloy at the interface, as suggested by the EXAFS simulations. The parameter that most affects the magnetically dead alloy thickness seems to be the deposition rate of the niobium matrix (V<sub>Nb</sub>). As an example, for V<sub>Nb</sub>=5 $`\AA `$/s the reduction is twice the one for V<sub>Nb</sub>=3 $`\AA `$/s (see Fig. 5(b)). This result suggests the model proposed in Figs. 9(a) and 9(b). As cobalt-niobium forms a miscible system, the higher V<sub>Nb</sub> is, the larger the quantity of Nb-atoms introduced at the cobalt cluster surface.
To study the magnetism of the perturbed monolayers at the interface, we prepared a cobalt-niobium alloy by induction-heating under an argon atmosphere, with 40 $`\%`$ Co and 60 $`\%`$ Nb atomic concentrations. From classical X-ray $`\theta `$/2$`\theta `$ diffraction ($`\lambda `$=1.5406 $`\AA `$), we identified the $`\beta `$-phase given by the binary phase diagram: Co<sub>6</sub>Nb<sub>7</sub>. From VSM measurements on this sample, we found a remaining paramagnetic susceptibility $`\chi `$=10<sup>-4</sup> (for 2$`<`$T$`<`$300 K) corresponding to the ”Pauli” paramagnetism of the sample. This feature could explain the ”dead” layer at the cluster surface. Obi et al. obtained two ”magnetically dead” cobalt monolayers in cobalt-niobium multilayers evaporated by a rf-dual type sputtering method (”magnetically dead” layers were also suggested by Mühge et al. for Fe/Nb multilayers). Finally, we can underline the fact that the cobalt clusters pre-formed by the LECBD technique are very compact nanocrystallites which conserve a magnetic core even when embedded in a miscible matrix. The existence of a ”magnetically dead” layer at the cluster-matrix interface may reduce surface effects compared with recent results obtained on smaller cobalt particles (150-300 atoms) stabilized in polymers. The estimated mean anisotropy constant might correspond to cubic magnetocrystalline or shape effects. To confirm this assumption, work is in progress to investigate the magnetic properties of a single cluster in a niobium matrix using a new microsquid technique.
One can also mention that, for XMCD signals detected with the total electron yield method, the extraction of quantitative local magnetic values through the individual orbital and spin sum rules is in progress. Nevertheless, one can already mention a small enhancement of the orbital/spin magnetic moment ratio. Such an increase might come from the orbital magnetic moment enhancement expected for small particles. Systematic XMCD studies on cluster-assembled Co/X films should be performed on Si-protected layers under synchrotron radiation to confirm these results.
## VI conclusion
We have shown that the magnetic properties of nanoparticles can be evaluated unambiguously if we know the size, the shape and the nature of the interface. The latter is given by EXAFS spectroscopy. We summarize the main results:
\- the mean bulk-like Co-Co distance (d<sub>Co-Co</sub>=2.50 $`\AA `$) concerns 3/4 of the atoms (namely the core atoms)
\- Co-Nb bonds are located on roughly one monolayer at the surface of the Co-clusters embedded in the Nb-matrix.
Even though this interface is rather sharp, it is of importance since its thickness is of the same order of magnitude as the cluster radius. In addition, some magnetic properties were probed by complementary techniques: VSM magnetometry (at temperatures above 8 K) and the XMCD signal detected by the fluorescence yield method (at temperatures from 5.3 K to 30 K) under a magnetic field. The results of both techniques, which probe the whole thickness of the sample, agree well over the overlapping range (8 K$`<`$T$`<`$30 K). The main result is the possibility of a ”magnetically dead” layer at the Co/Nb interface, to be related to the alloyed interface (from the EXAFS measurements) and to the moderate anisotropy value (found around 2.10<sup>6</sup> erg/cm<sup>3</sup>). To confirm this assumption and to understand the role of the interface in the anisotropy terms involved in such low-dimensional magnetic nanostructures, XMCD measurements at the Co-L<sub>2,3</sub> edge should be performed on a Co/Nb bilayer stacking (alternating 2 monolayers of Co and 2 monolayers of Nb) with the same Nb-deposition rates as in our systems.
## VII Acknowledgements
The authors would like to thank M. NEGRIER and J. TUAILLON for fruitful discussions, C. BINNS from the University of Leicester, United Kingdom, and J. VOGEL from the Laboratoire Louis Néel in Grenoble, France, for their help during the first XMCD tests on the ID12B line of N. BROOKES at the ESRF in Grenoble.
## 1 Introduction
One of the most compelling features of CP violation in the $`B`$ system is that all three interior angles of the unitarity triangle, $`\varphi _1(\beta )`$, $`\varphi _2(\alpha )`$ and $`\varphi _3(\gamma )`$, can be measured cleanly, i.e. without theoretical hadronic uncertainties. The $`B`$ system is thereby expected to provide a test of CP violation in the standard model (SM). Any inconsistency with the predictions of the SM will reveal the much sought-after signal of new physics (NP).
NP can affect CP violation in one of two possible ways: through contributions to $`B`$ decays or to $`B_d^0`$-$`\overline{B_d^0}`$ mixing. Most decay modes of the $`B`$-meson are dominated by $`W`$-mediated tree-level diagrams and will not be much affected by NP, since in most models of NP there are no contributions that can compete with the SM. Thus, with the exception of penguin-dominated decay modes, NP cannot significantly affect the decays. However, new contributions to $`B_d^0`$-$`\overline{B_d^0}`$ mixing can affect the CP asymmetries. Such NP contributions will affect the extraction of $`V_{td}`$ and $`V_{ts}`$, as well as possible measurements of $`\varphi _1,\varphi _2`$ and $`\varphi _3`$. Thus, NP enters principally through contributions to $`B_d^0`$-$`\overline{B_d^0}`$ mixing.
The angles $`\varphi _1`$, $`\varphi _2`$ and $`\varphi _3`$ are to be measured principally through the modes $`B_d^0(t)\rightarrow \mathrm{\Psi }K_S`$, $`B_d^0(t)\rightarrow \pi \pi `$ (or $`\rho \pi `$), and $`B^\pm \rightarrow DK^\pm `$ (or $`D^{\ast }K^\pm `$), respectively. NP in $`B_d^0`$-$`\overline{B_d^0}`$ mixing will then affect the measurements of $`\varphi _1`$ and $`\varphi _2`$, but in opposite directions. That is, in the presence of a new-physics phase $`\varphi _{NP}`$, the CP angles are changed as follows: $`\varphi _1\rightarrow \varphi _1-\varphi _{NP}`$ and $`\varphi _2\rightarrow \varphi _2+\varphi _{NP}`$. Hence the sum $`\varphi _1+\varphi _2+\varphi _3`$ is insensitive to the NP. However, if $`\varphi _3`$ is measured in the decay $`B_s^0(t)\rightarrow D_s^\pm K^{\mp }`$ , then $`\varphi _1+\varphi _2+\varphi _3\ne \pi `$ can be found if there is NP in $`B_s^0`$-$`\overline{B_s^0}`$ mixing.
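The cancellation described above can be made explicit in a few lines. The angle values below are hypothetical, chosen only so that the true angles sum to $`\pi `$; the NP phase shifts the two measured asymmetries in opposite directions and drops out of the sum.

```python
import math

def measured_angles(phi1, phi2, phi3, phi_np):
    """NP phase in B_d mixing: phi1 -> phi1 - phi_np, phi2 -> phi2 + phi_np."""
    return phi1 - phi_np, phi2 + phi_np, phi3

phi1, phi2, phi_np = 0.39, 1.62, 0.25    # hypothetical values (radians)
phi3 = math.pi - phi1 - phi2             # unitarity of the true angles
m1, m2, m3 = measured_angles(phi1, phi2, phi3, phi_np)
# m1 + m2 + m3 still equals pi, so the angle sum alone cannot reveal phi_np
```

Each individual measured angle is shifted away from its true value, which is why comparing the angles with independent measurements of the sides (or with each other) is needed to expose the NP phase.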
The most well known method for detecting NP is to compare the unitary triangle as constructed from measurements of the angles with that constructed from independent measurements of the sides. Any inconsistency will be evidence for new physics. However, since at present the allowed region of the unitarity triangle is rather large, the triangle as constructed from the angles could still lie within the allowed region even if NP is present. Furthermore, even if the $`\varphi _1`$-$`\varphi _2`$-$`\varphi _3`$ triangle lies outside the allowed region, one might still be skeptical about the presence of NP: perhaps the theoretical uncertainties which go into the constraints on the unitarity triangle have been underestimated.
Clearly we would like cleaner, more direct tests of the SM in order to probe for the presence of NP. More promising tests for NP are possible by comparing two distinct decay modes which, in the SM, probe the same CP angle. One can compare the rate asymmetries in $`B^\pm DK^\pm `$ and $`B_s^0(t)D_s^\pm K^{}`$, both of which measure $`\varphi _3`$. A discrepancy between the extracted values would point to NP in $`B_s^0`$-$`\overline{B_s^0}`$ mixing. Similarly, a discrepancy in $`\varphi _1`$, as measured via $`B_d^0(t)\mathrm{\Psi }K_S`$ and $`B_d^0(t)\varphi K_S`$, implies new physics in the $`bs`$ penguin. One can also measure the CP asymmetry in the decay $`B_s^0(t)\mathrm{\Psi }\varphi `$, which vanishes to a good approximation in the SM. Such an asymmetry would indicate the presence of new physics in $`B_s^0`$-$`\overline{B_s^0}`$ mixing. Note that all such tests probe NP in the $`bs`$ flavour-changing neutral current (FCNC).
One may then ask the question: are there any direct tests of NP in the $`bd`$ FCNC? For example, consider pure $`bd`$ penguin decays such as $`B_d^0K^0\overline{K^0}`$ or $`B_s^0\varphi K_S`$, with the assumption that the $`t`$-quark contribution dominates among the up-type quarks in the loop. In such a case the SM would predict that (i) the CP asymmetry in $`B_d^0(t)K^0\overline{K^0}`$ vanishes, and (ii) the CP asymmetry in $`B_s^0(t)\varphi K_S`$ measures $`\mathrm{sin}2\varphi _1`$. Any discrepancy between measurements of these CP asymmetries and their predictions would thus imply that there is NP in either $`B_d^0`$-$`\overline{B_d^0}`$ mixing or the $`bd`$ penguin, i.e. in the $`bd`$ FCNC. However, it is well known that $`bd`$ penguins are not dominated by the internal $`t`$-quark. The contributions of the $`u`$\- and $`c`$-quarks can be as large as 20–50% of that of the $`t`$-quark. As a consequence, one cannot probe NP in the $`bd`$ FCNC using such modes, and, unfortunately, the answer to the question asked is no.
## 2 The CKM Ambiguity
The full $`bd`$ penguin amplitude is a sum of contributions from the three internal up-type quarks in the loop:
$$P=P_uV_{ub}^{}V_{ud}+P_cV_{cb}^{}V_{cd}+P_tV_{tb}^{}V_{td},$$
(1)
with $`V_{ub}e^{i\varphi _3}`$ and $`V_{td}e^{i\varphi _1}`$. Using the unitarity relation, $`V_{ud}V_{ub}^{}+V_{cd}V_{cb}^{}+V_{td}V_{tb}^{}=0`$, the $`u`$-quark piece can be eliminated in Eq. (1), allowing us to write
$$P=𝒫_{cu}e^{i\delta _{cu}}+𝒫_{tu}e^{i\delta _{tu}}e^{i\varphi _1},$$
(2)
where $`\delta _{cu}`$ and $`\delta _{tu}`$ are strong phases. Now imagine that there were a method in which a series of measurements allowed us to cleanly extract $`\varphi _1`$ using the above expression. In this case, we would be able to express $`\varphi _1`$ as a function of the observables.
On the other hand, we can instead use the unitarity relation to eliminate the $`t`$-quark contribution in Eq. (1), yielding
$$P=𝒫_{ct}e^{i\delta _{ct}}+𝒫_{ut}e^{i\delta _{ut}}e^{i\varphi _3}.$$
(3)
Comparing Eqs. (2) and (3), we see that they have the same form. Thus, the same method used to extract $`\varphi _1`$ from Eq. (2) can be used on Eq. (3) to obtain $`\varphi _3`$. That is, we would be able to write $`\varphi _3`$ as the same function of the observables as was used for $`\varphi _1`$ above! But this implies that $`\varphi _1=\varphi _3`$, which clearly does not hold in general.
Due to the ambiguity in the parametrization of the $`bd`$ penguin — which we refer to as the CKM ambiguity — we conclude that one cannot cleanly extract the weak phase of any penguin contribution. Indeed, it is impossible to cleanly test for the presence of new physics in the $`bd`$ FCNC. Nevertheless, it is instructive to examine in detail a few candidate methods, to see exactly how they fail.
The measurement of the time-dependent rate for the decay $`B_d^0(t)K^0\overline{K^0}`$ can at best allow one to extract the magnitudes and relative phase of $`e^{i\varphi _1}A`$ and $`e^{i\varphi _1}\overline{A}`$, where $`A`$ is the amplitude for $`B_d^0K^0\overline{K^0}`$. With an independent measurement of $`\varphi _1`$, there are a total of 4 measurements. Using the form of the $`bd`$ penguin given in Eq. 2, we have $`e^{i\varphi _1}A=e^{i\varphi _1}(𝒫_{cu}e^{i\delta _{cu}}+𝒫_{tu}e^{i\delta _{tu}}e^{i\varphi _1^{}})`$, where we have written the phase $`\varphi _1^{}`$ to allow for the possibility of new physics. There are thus 5 theoretical (hadronic) parameters: $`𝒫_{cu}`$, $`𝒫_{tu}`$, $`\delta _{cu}\delta _{tu}`$, $`\varphi _1`$, and $`\theta _{NP}\varphi _1^{}\varphi _1`$. We see that there are not enough measurements to determine all the theoretical parameters. In fact, there is just one more theoretical unknown than there are measurements. A similar examination of the $`B\pi \pi `$ isospin analysis, Dalitz-plot analysis of $`B3\pi `$, angular analysis of $`B^0VV`$ (where $`V`$ is a vector meson), and a combined isospin $`+`$ angular analysis of $`B\rho \rho `$ leads to the same conclusion that there is one more unknown than there are measurements.
We thus conclude that, due to the CKM ambiguity, if one wishes to test for the presence of NP in the $`bd`$ FCNC by comparing the weak phase of the $`t`$-quark contribution to the $`bd`$ penguin with that of $`B_d^0`$-$`\overline{B_d^0}`$ mixing, it is necessary to make a single assumption about the hadronic parameters.
## Acknowledgments
R.S. would like to thank the organizers of this conference, Prof. H.Y. Cheng and Prof. A.I. Sanda, for financial assistance to attend the conference. The work of D.L. was financially supported by NSERC of Canada.
# Multifrequency VLBI observations of faint gigahertz peaked spectrum sources
## 1 Introduction
Gigahertz Peaked Spectrum (GPS) sources (e.g. O’Dea 1998) are a class of extragalactic radio source, characterised by a convex shaped radio spectrum peaking at about 1 GHz in frequency, and sub-galactic sizes. Their small sizes make observations using Very Long Baseline Interferometry (VLBI) necessary to reveal their radio morphologies. Early VLBI observations showed that some GPS sources identified with galaxies have Compact Double (CD) morphologies (Philips and Mutel, 1982), and it was suggested that these were the mini-lobes of very young or alternatively old, frustrated objects (Philips and Mutel, 1982; Wilkinson et al. 1984, van Breugel, Miley and Heckman, 1984). Later, when reliable VLBI observations at higher frequencies became possible, it was found that some of the CD-sources had a compact flat spectrum component in their centres (Conway et al. 1992, Wilkinson et al. 1994). These flat spectrum components were interpreted as the central cores, and many CD-sources were renamed compact triples or Compact Symmetric Objects (CSO, Conway et al. 1992, Wilkinson et al. 1994). High dynamic range VLBI observations by Dallacasa et al (1995) and Stanghellini et al. (1997) have shown that most GPS galaxies indeed have jets leading from the central compact core to the outer hotspots or lobes. This is in contrast to the GPS sources identified with quasars, which tend to have core-jet morphologies with no outer lobes (Stanghellini et al. 1997). Snellen et al. (1999) have shown that the redshift distributions of the GPS galaxies and quasars are very different, and that it is therefore unlikely that they form a single class of object unified by orientation. They suggest that they are separate classes of object, which just happen to have the same radio-spectral morphologies.
The separation velocities of the hotspots have now been measured for a small number of GPS galaxies to be $`0.2h^1`$c (Owsianik and Conway, 1998; Owsianik, Conway and Polatidis, 1998; Tschager et al. 1999). This makes it very likely that these are young objects of ages typically $`10^3`$ yr (assuming a constant separation velocity), rather than old objects constrained in their growth by a dense ISM. These are therefore the objects of choice to study the early evolution of extragalactic radio sources.
In the past, work has been concentrated on samples of the radio brightest GPS sources (eg. O’Dea et al 1991). In order to disentangle radio power and redshift effects on the properties of GPS sources, we constructed a sample of faint GPS sources from the Westerbork Northern Sky Survey (WENSS, Rengelink et al. 1997), which in combination with other samples allows, for the first time, the study of these objects over a large range of flux density and radio spectral peak frequency. The construction of the faint sample is described in Snellen et al. (1998a); the optical and near-infrared imaging is described in Snellen et al. (1998b); and the optical spectroscopy in Snellen et al. (1999a). This paper describes multi-frequency VLBI observations of the sample, and the radio-morphologies of the individual sources. What can be learned from the faint GPS sample about radio source evolution is discussed in the accompanying paper (Snellen et al. 2000).
## 2 The Sample
The selection of the sample has been described in detail in Snellen et al. (1998a), and is summarised here. Candidate GPS sources were selected from the Westerbork Northern Sky survey, by means of an inverted spectrum between 325 MHz and higher frequencies. The sources are located in two regions of the survey; one with $`15^h<\alpha <20^h`$ and $`58^{}<\delta <75^{}`$, which is called the mini-survey region (Rengelink et al. 1997), and the other with $`4^h00^m<\alpha <8^h30^m`$ and $`58^{}<\delta <75^{}`$. Additional observations at 1.4, 5, 8.4 and 15 GHz were carried out with the WSRT and the VLA, yielding a sample of 47 genuine GPS sources with peak frequencies ranging from 500 MHz to more than 15 GHz, and peak flux densities ranging from $`30`$ to $`900`$ mJy. This sample has been imaged in the optical and near-infrared, resulting in an identification fraction of $``$ 87 % (Snellen et al. 1998b). Redshifts have been obtained for 40% of the sample (Snellen et al., 1999).
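The inverted-spectrum selection described above amounts to a two-point spectral-index test. The sketch below illustrates the idea; the flux densities and the helper function are illustrative, not the actual WENSS selection pipeline:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined via S(nu) ~ nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Illustrative flux densities in mJy (not actual catalogue values):
# rising between 325 MHz (WENSS) and 1.4 GHz, falling above 5 GHz.
alpha_low = spectral_index(40.0, 0.325, 120.0, 1.4)   # inverted spectrum, > 0
alpha_high = spectral_index(150.0, 5.0, 90.0, 8.4)    # optically thin side, < 0

# A convex ("peaked") spectrum rises at low and falls at high frequency.
is_gps_candidate = alpha_low > 0 and alpha_high < 0
print(is_gps_candidate)  # True
```

A positive low-frequency index flags the candidate; the follow-up WSRT/VLA measurements at higher frequencies then confirm the turnover.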
## 3 Observations
Snapshot VLBI observations were made of the entire sample of faint GPS sources at 5 GHz, and of sub-samples at 15 GHz and 1.6 GHz frequency. In order to observe the large number of sources required in a reasonable amount of time, we observed in “snapshot” mode (eg. Polatidis et al. 1995, Henstock et al. 1995). This entails observing a source for short periods of time at several different hour angles. Using a VLBI array of typically more than 10 telescopes, this provides sufficient $`u,v`$ coverage for reliable mapping of complex sources (Polatidis et al. 1995). To maximize the $`u,v`$ coverage for each source we attempted to schedule three to four scans as widely spaced as possible within the visibility window during which the source could be seen by all antennas. Fortunately, the majority of the sources are located at a sufficiently high declination ($`>57^{}`$) that they are circumpolar for most EVN and VLBA antennas, and therefore could be scheduled for observation at optimal hour angles.
### 3.1 The 5 GHz Observations, Correlation and Reduction
The 5 GHz data were obtained during a 48 hour observing session on 15 and 16 May 1995. All telescopes of the VLBA, and six telescopes of the EVN were scheduled to participate in this global VLBI experiment (see table 1). The data were recorded using the Mark III recording system in mode B, with an effective bandwidth of 28 MHz centred at 4973 MHz. Left circular polarization was recorded. Since the motion of some of the antennas is limited in hour angle, we inevitably had to schedule a few scans when the source could not be observed at one or two telescopes. All sources were observed for three scans of 13 minutes (13<sup>m</sup> corresponds to a single pass on a tape).
The data were correlated using the VLBA correlator in Socorro, New Mexico, four months after the observations took place. The output of the correlator provides a measure of the complex fringe visibility sampled at intervals of 2 seconds on each baseline, at $`7\times 16`$ frequencies within the 28 MHz band, with the phase referenced to an a priori model of the source position, antenna locations, and atmosphere. The residual phase gradients in time and frequency due to delay and rate errors in the a priori model are estimated and removed, during the process of “fringe fitting”. Fringe fitting was performed using the AIPS task FRING, an implementation of the Schwab & Cotton (1983) algorithm. A solution interval of 3 minutes and a point source model were used, and Effelsberg was taken as the “reference telescope” whenever possible. No fringes were found for the Cambridge telescope. The amplitude calibration was performed with the AIPS tasks ANTAB and APCAL, using system temperature and antenna gain information. The visibility data were averaged across the observing band and then written in one single $`u,v`$-file per object. The typical $`u,v`$ coverage obtained for a source is shown in figure 1.
The final images were produced after several cycles of imaging and self-calibration using the AIPS tasks IMAGR and CALIB. Solution intervals were decreased in each step, starting with a few minutes, until no increase of the image quality (using noise-level and the presence of negative structure as criteria) was detected. If a source was sufficiently strong, antenna amplitude solutions were also determined for each scan. For each source a “natural” weighted image was produced. If the $`u,v`$-data were of sufficient quality, a “uniform” weighted image was also produced.
### 3.2 The 15 GHz Observations, Correlation and Reduction
The 15 GHz data were obtained during a 24 hour observing session on 29 June 1996, using the ten telescopes of the VLBA. The data were recorded in 128-8-1 mode (128 Mbits/sec, 8 IF channels, 1 bit/sample), with an effective bandwidth of 32 MHz centred at 15360 MHz. All 27 sources in the sample with peak frequencies higher than 5 GHz and/or peak flux densities greater than 125 mJy were observed. The expected maximum brightness in each of the images at 15 GHz was estimated from the overall radio spectra of the sources and their 5 GHz VLBI morphology. In order to use the conventional fringe-fitting methods of VLBI imaging, the signal to noise ratio on each baseline within the coherence time has to be sufficiently high. Sources with an expected maximum brightness at 15 GHz of $`>60`$ mJy/beam are sufficiently strong and were observed for 3 scans of 11 minutes each. However, sources with expected maximum brightnesses of $`<60`$ mJy/beam, were observed using a “phase-referencing” method to increase the signal to noise ratio. This involves observations of the target source interspersed with observations of a nearby ($`<2.5^{}`$) compact calibrator source. Measurements of residual delay and rate are made towards this bright source and transferred to the target source data. We used cycles of 3 minutes on the target source and 1.5 minutes on the calibrator source. The total integration time on a target was 45 minutes divided over three scans. The sources for which the phase referencing technique was required and the calibration sources used (in brackets) were B0400+6042 (B0354+599), B0436+6152 (B0444+634), B0513+7129 (B0518+705), B0531+6121 (B0539+6200), B0538+7131 (B0535+6743), B0755+6354 (B0752+6355), B1525+6801 (B1526+670), B1538+5920 (B1550+5815), B1600+7131 (B1531+722), B1819+6707 (B1842+681), and B1841+6715 (B1842+681). Data reduction of the phase referenced observations is similar to that for the 5 GHz data. The typical $`u,v`$ coverage obtained for a source at 15 GHz is shown in figure 1.
### 3.3 The 1.6 GHz Observations, Correlation and Reduction
The 1.6 GHz data were obtained during two observing sessions, both involving the ten telescopes of the VLBA and 4 antennas of the EVN (see table 1). The Westerbork data in the second session was lost due to technical failure. The data were recorded in 128-4-2 mode (128 Mbits/sec, 4 IF channels, 2 bit/sample), with an effective bandwidth of 32 MHz centred at 1663 MHz and 1655 MHz during the first and second session respectively. In the first session, a subsample of 23 objects was observed for $`2\times 12`$ hours on 14 and 16 September 1997. This subsample contained all sources with peak frequencies $`<5`$ GHz, which were found to be extended in the 5 GHz observations. In the second session, all 9 remaining sources with peak frequencies $`<3`$ GHz, which had not been imaged before at this frequency, were observed. The sources were typically observed for $`4\times 11`$ minutes each, and an example of a $`u,v`$ coverage is shown in figure 1. The data were correlated in Socorro. No fringes were found for B0513+7129, B0537+6444, and B0544+5847. Several sources in the second session were observed using phase referencing. These sources, with their calibrators in brackets, are B0537+6444 (B0535+677), B0830+5813 (B0806+573), B1557+6220 (B1558+595), B1639+6711 (B1700+685), and B1808+6813 (B1749+701). The data were reduced in a similar way as the data at 5 GHz.
## 4 Results
The parameters of the resulting 102 images (29 at 1.6 GHz, 47 at 5 GHz, and 26 at 15 GHz) are given in table 2. Figure 2 shows the rms noise as a function of the peak brightness in the images at the three observing frequencies. The dynamic ranges (defined as the ratio of the maximum brightness in the image to the rms noise in an area of blank sky) are between 125 and 2500 at 1.6 GHz, between 25 and 1700 at 5 GHz, and between 30 and 500 at 15 GHz. At 1.6 GHz, two of the bright sources have higher rms-noise levels than expected, which may indicate that the dynamic range is not limited by the thermal noise. To be able to compare the VLBI observations of this faint sample with those on bright GPS samples, it is important to determine whether components have been missed due to the limited dynamic range for this faint sample. We therefore plotted the distribution of dynamic range for the observations closest in frequency to the spectral peak (Fig. 3). Only 2 objects (B0755+6354 and B0544+5847) turn out not to have an image with a dynamic range $`>100`$.
In Figure 4 the ratio of total VLBI flux density in the images to the flux density in the NVSS at 1.6 GHz, to the MERLIN observations at 5 GHz, and to the VLA 15 GHz flux densities (from Snellen et al. 1998a), are plotted. This enables us to judge whether substantial structure has been resolved out in the VLBI observations. At 1.6 GHz, typically 90% of the NVSS flux density is recovered in the VLBI observations, while at 5 GHz the distribution peaks at 100%. Only at 15 GHz is the distribution much broader and peaks at about 80% of the flux density in the VLA observations, and hence provides some evidence that at this frequency some extended structure may be missed. The broadness of the peak is probably also influenced by variability.
Figures 5, 6, 7 give the maps of the individual sources with observations at three, two and one frequency respectively. For each source, the images have the same size at each frequency, and are centred in such a way that identical components at different frequencies match in relative position.
### 4.1 Model Fitting
Quantitative parameters of the source brightness distributions were estimated by fitting elliptical Gaussian functions to the maps, using the AIPS-task JMFIT. For some of the complex sources it was necessary to restrict the fit to a number of point sources (e.g. 0513+7129 at 5 GHz). In a few cases, the positions of some of the fitted components were kept fixed to correspond to their positions at higher frequency (e.g. 0752+6355 at 5 GHz). We checked whether the model was a good representation of the source structure, by comparing the total flux density in the image to that in the model, and by ensuring that the residual image did not show any significant negative structure. A spectral decomposition was performed by matching the components believed to correspond to each other at the different frequencies. Due to the increase in resolution with frequency, some components at the higher frequencies were combined to match a single component at the lower frequency. The decomposed spectra are shown along with the images in figures 5 and 6. The results of the fits are given in table 3.
Column 1 gives the source name, column 2 the figure in which the maps are shown, column 3 the classification (as discussed in the next section), and in column 4 the component name used for the spectral decomposition. Columns 5 to 9 give for each component observed at 1.6 GHz the flux density, relative position in R.A. and Dec., and the fitted angular size (major and minor axis, and position angle). Columns 10 to 14, and columns 15 to 19 give the same for the components observed at 5 GHz and 15 GHz respectively.
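The elliptical-Gaussian fitting step described above can be sketched with a generic least-squares fit on a synthetic map. This is a rough stand-in assuming numpy/scipy, not the AIPS JMFIT implementation; the map, component parameters, and initial guess are all invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta):
    """2-D elliptical Gaussian; theta is the position angle in radians."""
    x, y = coords
    a = np.cos(theta)**2 / (2*sx**2) + np.sin(theta)**2 / (2*sy**2)
    b = -np.sin(2*theta) / (4*sx**2) + np.sin(2*theta) / (4*sy**2)
    c = np.sin(theta)**2 / (2*sx**2) + np.cos(theta)**2 / (2*sy**2)
    return amp * np.exp(-(a*(x - x0)**2 + 2*b*(x - x0)*(y - y0) + c*(y - y0)**2))

# Synthetic "map": one component plus Gaussian noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
truth = (1.0, 30.0, 34.0, 4.0, 2.0, 0.5)  # amp, x0, y0, sigma_x, sigma_y, PA
image = elliptical_gaussian((x, y), *truth) + 0.01 * rng.standard_normal(x.shape)

p0 = (image.max(), 32.0, 32.0, 3.0, 3.0, 0.0)  # crude initial guess
popt, _ = curve_fit(elliptical_gaussian, (x.ravel(), y.ravel()),
                    image.ravel(), p0=p0)
print(popt[:3])  # fitted amplitude and centroid, close to (1.0, 30.0, 34.0)
```

For complex sources the same machinery can be restricted to point-source components by fixing `sx`, `sy`, and `theta`, analogous to the restricted fits mentioned above.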
### 4.2 Classification of the Radio Morphologies
We classify the radio morphologies in four ways:
* Compact Symmetric Objects (CSO). Sources with a compact flat spectrum component with extended components with steeper spectra on either side.
* Core-Jet sources (CJ). Sources with compact flat spectrum component with one or more components with steeper spectra on one side only.
* Compact Double (CD). Sources showing two dominant components with comparable spectra, but no evidence of a central flat spectrum component.
* Complex sources (CX). Sources with a complex morphology, not falling in one of the above categories.
From the 47 sources in the sample, 3 could be classified as CSO, 11 as CD, 7 as CJ, and 2 as CX. Of the 25 remaining sources, 2 were resolved at only one frequency, 2 were only observed at one frequency, and 20 show a single component at 2 frequencies, and therefore could not be classified. For one source (B1642+6701) it was not clear how to overlay the 1.6 and 5 GHz maps. The individual sources are briefly discussed below.
#### 4.2.1 Discussion of Individual Sources
B0400+6042: CD Several components are visible with the two outer components having comparable spectra and the central component having a marginally flatter spectrum. This source is tentatively classified as a CD, but it could also be a CSO.
B0436+6152: CSO The brightest component at 15 GHz in the centre is interpreted as the core, with extended steeper spectrum components to the north-east and south-west. This source is classified as a CSO.
B0513+7129: CX This object shows two dominant components with a jet-like feature pointed to the north-west. Although the component with the flattest spectrum is located in the center, this source is classified as a complex source due to the strangely bent structure.
B0539+6200: CJ The south-western component has a flatter spectral index than the north-eastern component. Only two components are visible, hence this object is tentatively classified as a CJ.
B0752+6355: CX The “C” shaped morphology of this compact source showing components with a large range of spectral indices, leads us to classify this object as one with a complex morphology.
B1620+6406: CD The steep spectrum of the northern component is similar to the spectrum of the southern component (using the upper-limit for the flux density at 5 GHz). We therefore tentatively classify this object as a CD.
B1642+6701: - It was not possible to reliably overlay the two maps at 1.6 and 5 GHz, but the most likely match is shown in figure 6. Due to this uncertainty, it was not possible to reliably classify this source.
B1647+6225: CD The two components have similar spectra, with a possible jet leading to the northern component. This object is tentatively classified as a CD.
B1655+6446: CD Only the southern component is detected at 5 GHz. However its steep spectral index, and the upper-limit to the spectral index of the northern component makes us tentatively classify this source as a CD.
B1657+5826: CD Only the western component is detected at 5 GHz. However its steep spectral index, and the upper-limit to the spectral index of the eastern component makes us classify this source as a CD.
B1819+6707: CSO Two dominant components are visible in this source at 1.6, 5, and 15 GHz with comparable spectra. A faint compact component is visible in between in the 5 GHz map. This object is therefore classified as a CSO.
B1942+7214: CJ This object shows faint extended structure to the south-west in its 1.6 and 5 GHz images. The bright northern component appears to have a flatter spectrum than the faint extended emission. We tentatively classify this object as a core-jet.
B1946+7048: CSO This source is the archetype compact symmetric object (CSO), and has been discussed in detail by Taylor and Vermeulen (1997). The core is only visible at 15 GHz.
B1954+6146: CJ Only the flat spectrum compact component in the south is detected at 15 GHz. The limit to the spectral index of the northern component makes us tentatively classify this source as a core-jet.
## 5 Discussion
In radio bright samples, GPS quasars are found to have core-jet or complex structure, while GPS galaxies are found to have larger sizes with jets and lobes on both sides of a putative center of activity (Stanghellini et al. 1997). Although observations at another frequency are needed to confirm their classification, almost all radio-bright GPS galaxies from Stanghellini et al (1997) can be classified as CSOs. The morphological dichotomy of GPS galaxies and quasars, and their very different redshift distributions, make it likely that GPS galaxies and quasars are not related to each other and just happen to have similar radio spectra. It has been speculated that GPS quasars are a subset of flat spectrum quasars in general (eg. Snellen et al. 1999a). In addition, even if galaxies and quasars were unified by orientation, a GPS galaxy observed at a small viewing angle would not be expected to appear as a GPS quasar, because of the changes in its observed radio spectrum (Snellen et al. 1998c).
Not all CSOs are GPS sources. The contribution of the (possibly variable) flat spectrum core can be significant and outshine the convex spectral shape produced by the mini-lobes. This can be due to a small viewing angle towards the object, causing the Doppler boosted core and fast moving jet, which feeds the approaching mini-lobe, to be important (Snellen et al. 1998c). An example of such a CSO, possibly observed at a small viewing angle, is 1413+135 (Perlman et al. 1994). In addition, the jets feeding the mini-lobes can be significantly curved, for example in 2352+495 by precession (Readhead et al. 1996). This can cause parts of the jet to move at an angle close to the line of sight, with significant Doppler boosting as a result. In both cases the large contrast between the approaching and receding parts of the radio source makes it also increasingly difficult to identify the object as a CSO.
Figure 8 shows the number of galaxies and quasars, in our faint GPS sample, classified as CJ, CSO, CX and those not possible to classify. All three objects classified as CSOs are optically identified with galaxies. Although this is in agreement with the findings of Stanghellini et al (1997) for the radio-bright sample, it should be noted that for only 4 quasars was it possible to make a classification. This is mainly due to the fact that the angular sizes of the quasars are significantly smaller than the angular sizes of the galaxies. Six out of 18 classifiable GPS galaxies are found to have CJ or CX structures, and 9 of the classifiable GPS galaxies are found to have CD structures. We conclude that the strong morphological dichotomy between GPS galaxies and quasars found by Stanghellini (1997) in the bright GPS sample, is not as strong in this faint sample. Note, however, that the classification for the majority of the CJ and CD sources is based on two components and their relative spectral indices only. This makes their classification rather tentative. Firstly, a CD source could be erroneously classified as a CJ source due to a difference in the observed age between the approaching and receding lobe, causing a difference in observed radio spectrum of the two lobes. For a separation velocity of 0.4c, as observed for radio bright GPS galaxies (Owsianik and Conway 1998; Owsianik, Conway and Polatidis 1998), such an age difference can be as large as 30%. Secondly, differences in the local environments of the two lobes can also influence the spectra of the two lobes, resulting in an erroneous classification as core-jet. For example, if only the two lobes had been visible, B1819+6707 (fig 6) could have been mistaken for a core-jet source, since the spectral index of the eastern lobe is flatter than that of the western lobe.
## 6 Conclusions
Multi-frequency VLBI observations have been presented of a faint sample of GPS sources. All 47 sources in the sample were successfully observed at 5 GHz, 26 sources were observed at 15 GHz, and 20 sources were observed at 1.6 GHz. In this way 94% of the sources have been mapped above and below their spectral peak. The spectral decomposition allowed us to classify 3 GPS galaxies as compact symmetric objects (CSO), 1 galaxy and 1 quasar as complex (CX) sources, 2 quasars and 5 galaxies as core-jet (CJ) sources, and 9 galaxies and 2 quasars as compact doubles (CD). Twenty-five of the sources could not be classified, 20 because they were too compact. The strong morphological dichotomy of GPS galaxies and quasars found by Stanghellini et al. (1997) in their radio bright GPS sample is not so clear in this sample. However, many of the sources classified as CD and CJ have a two-component structure, making their classification only tentative.
## Acknowledgements
The authors are grateful to the staff of the EVN and VLBA for support of the observing projects. The VLBA is an instrument of the National Radio Astronomy Observatory, which is operated by Associated Universities, Inc. under a Cooperative Agreement with the National Science Foundation. This research was supported by the European Commission, TMR Access to Large-scale Facilities programme under contract No. ERBFMGECT950012, and TMR Programme, Research Network Contract ERBFMRXCT96-0034 “CERES”.
# Localization of gravitational energy in ENU model and its consequences
Jozef Sima<sup>a</sup>, Miroslav Sukenik<sup>a</sup> and Julius Vanko<sup>b</sup>
<sup>a</sup>Slovak Technical University, Dep. Inorg. Chem., Radlinskeho 9, 812 37 Bratislava, Slovakia
<sup>b</sup>Comenius University, Dep. Nucl. Physics, Mlynska dolina F1, 842 48 Bratislava, Slovakia
e-mail: sima@chelin.chtf.stuba.sk; vanko@fmph.uniba.sk
Abstract. The contribution provides the starting points and background of the model of Expansive Nondecelerative Universe (ENU), manifests the advantage of exploitation of Vaidya metrics for the localization and quantization of gravitational energy, and offers four examples of application of the ENU model, namely energy of cosmic background radiation, energy of Z and W bosons acting in weak interactions, hyperfine splitting observed for hydrogen 1s orbital. Moreover, time evolution of vacuum permitivity and permeability is predicted.
I. Theoretical background
Due to the simultaneous creation of matter and gravitational energy (identical in absolute value but opposite in sign), the total energy of the Universe is equal to zero in the model of the Expansive Nondecelerative Universe (ENU), and thus one of the fundamental requirements of the Universe's evolution is fulfilled. It has been evidenced that such a Universe can expand at the velocity of light $`c`$, and it therefore holds
$`a=c.t_c=\frac{2G.M_U}{c^2}`$ (1)
where $`a`$ is the gauge factor (at present $`a`$ ≈ 1.3 x 10<sup>26</sup> m), $`t_c`$ is the cosmological time, $`M_U`$ is the mass of the Universe (it approaches at present 8.6 x 10<sup>52</sup> kg).
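As a quick numerical sanity check of eq. (1), the quoted values of the gauge factor and the mass of the Universe follow from the standard constants (a sketch; the present cosmological time $`t_c`$ ≈ 13.7 Gyr is an assumed input, not a value taken from the text):

```python
# Numerical check of eq. (1): a = c*t_c = 2*G*M_U/c^2.
# The present t_c ~ 13.7 Gyr is an assumption chosen for illustration.
c = 2.998e8              # m/s
G = 6.674e-11            # m^3 kg^-1 s^-2
t_c = 13.7e9 * 3.156e7   # cosmological time in seconds (~4.3e17 s)

a = c * t_c                # gauge factor
M_U = a * c**2 / (2 * G)   # mass of the Universe implied by eq. (1)

print(f"a   = {a:.2e} m")    # ~1.3e26 m, as quoted in the text
print(f"M_U = {M_U:.2e} kg") # ~8.7e52 kg, close to the quoted 8.6e52 kg
```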
In the ENU model, due to the matter creation, the Vaidya metrics must be used, which enables localization of the gravitational energy. For weak fields Tolman’s relation
$`\epsilon _g=\frac{R.c^4}{8\pi .G}=\frac{3m.c^2}{4\pi a.r^2}`$ (2)
can be applied, in which $`\epsilon _g`$ is the density of the gravitational energy induced by a body with the mass $`m`$ at the distance $`r`$; $`R`$ denotes the scalar curvature. It should be pointed out that, contrary to the more frequently used Schwarzschild metrics (in which $`\epsilon _g`$ = 0 outside a body, and $`R`$ = 0), in the Vaidya metrics $`R`$ ≠ 0 and $`\epsilon _g`$ may thus be quantified and localized also outside a body. It has been shown that at the same time it must hold
$`\epsilon _g=\frac{3E_g}{4\pi .\lambda ^3}`$ (3)
where $`E_g`$ is the quantum of the gravitational energy, the corresponding Compton wavelength can be expressed as
$`\lambda =\frac{\hbar .c}{E_g}`$ (4)
Substitution of (4) into (3) and comparison of (2) and (3) leads to
$`\left|E_g\right|=\left(\frac{m.\hbar ^3.c^5}{a.r^2}\right)^{1/4}`$ (5)
in which $`E_g`$ denotes the quantum of the gravitational energy induced by a body with the mass $`m`$ in the distance $`r`$.
The validity of (5) was tested both in the field of macrosystems and microworld. Application of equation (5) allowed us to derive in an independent way the Hawking’s relation for black hole evaporation and explain the presence of some peaks in low-temperature far-infrared and Raman spectra of several compounds.
Some of the further verifications and applications of relation (5) are given in the following parts.
II. Energy of cosmic background radiation
From the beginning to the end of radiation era, the Universe was in thermodynamic equilibrium. Based on the above postulate it can be supposed that the energy of a photon of the cosmic background radiation equaled to the energy of a gravitational quantum, i.e.
$`k.T=\left|E_g\right|=\left(\frac{m.\hbar ^3.c^5}{a.r^2}\right)^{1/4}`$ (6)
When taking $`m`$ in (6) as the mass of the Universe, $`M_U`$
$`M_U=\frac{a.c^2}{2G}`$ (7)
and $`r`$ as the gauge factor $`a`$
$`r=a`$ (8)
a well-known formula
$`k.T\approx \left(\frac{\hbar ^3.c^5}{2G.t_c^2}\right)^{1/4}`$ (9)
is obtained; compared with earlier derivations, however, the mode of its derivation here is independent. The present consistency might be evaluated as evidence of justification of the ENU model.
III. Weak interactions
In our previous paper the mass of the Z and W bosons was derived stemming from the energy density. As will be shown in the following, an identical relationship can be obtained using equation (5), i.e. stemming from gravitational energy quantization. Let us substitute for $`m`$ the limiting mass
$`m=\frac{a.\hbar ^2}{g_F}`$ (10)
where $`g_F`$ is the Fermi constant, and express $`r`$ as the Compton wavelength of the vector bosons Z and W possessing the mass $`m_{ZW}`$
$`r=\frac{\hbar }{m_{ZW}.c}`$ (11)
In such a case, from (5), (10) and (11) we obtain
$`\left|E_g\right|=m_{ZW}.c^2`$ (12)
if the known relation
$`m_{ZW}^2\approx \frac{\hbar ^3}{g_F.c}`$ (13)
was applied.
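The mass scale set by relation (13) can be evaluated numerically (a sketch; $`g_F`$ is here the Fermi constant in SI units, G<sub>F</sub> ≈ 1.436 x 10<sup>-62</sup> J m<sup>3</sup>, an input not quoted in the text; the relation is an order-of-magnitude one, giving ~290 GeV against the measured m<sub>W</sub>c² ≈ 80 GeV and m<sub>Z</sub>c² ≈ 91 GeV):

```python
# Sketch of the mass scale implied by eq. (13): m_ZW ~ (hbar^3/(g_F c))^(1/2).
# g_F ~ 1.436e-62 J m^3 (Fermi constant in SI units) is an assumed input.
hbar = 1.0546e-34      # J s
c = 2.998e8            # m/s
g_F = 1.436e-62        # J m^3

m_ZW = (hbar**3 / (g_F * c)) ** 0.5     # kg
E_GeV = m_ZW * c**2 / 1.602e-10         # rest energy in GeV
print(f"m_ZW c^2 ~ {E_GeV:.0f} GeV")    # ~290 GeV: same order as the W/Z masses
```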
IV. Hyperfine structure of the hydrogen atom K-level
Equation (5) can be exploited for an independent prediction of the value of the hyperfine structure $`E_{HF}`$ observed in the spectra of the hydrogen atom (the experimental value for the electron occupying the H1s orbital is $`E_{HF}`$ = 1420 MHz). Suppose the energy of the hyperfine splitting induced in the hydrogen atom K-level by the proton magnetic moment is identical to the energy given by equation (5). Such an identity may be taken as a condition of the stability of the hydrogen atom. When putting the electron mass $`m_e`$ (9.109 x 10<sup>-31</sup> kg) and the Bohr radius of the H1s orbital
$`r\approx 52.9\times 10^{-12}m`$ (14)
into (5), the energy value
$`E_{HF}\approx 2400MHz`$ (15)
is obtained. This value is 1.7 times higher than the experimental value, and thus closer to it than the value calculated using the commonly applied simplified equation (16)
$`E_{HF}\approx \frac{I_{H1s}.\alpha ^2.m_e}{m_p}`$ (16)
in which $`I_{H1s}`$ is the hydrogen atom ionization energy (13.6 eV) and $`\alpha `$ is the fine structure constant.
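The estimate (15) follows from a direct evaluation of eq. (5) (a sketch; the present gauge factor $`a`$ ≈ 1.3 x 10<sup>26</sup> m from Section I and the Bohr radius (14) are the inputs):

```python
# Numerical check of the hyperfine estimate (15): eq. (5) with the electron
# mass and the Bohr radius, converted to a frequency via E = h*nu.
hbar = 1.0546e-34    # J s
h = 6.626e-34        # J s
c = 2.998e8          # m/s
m_e = 9.109e-31      # kg
a = 1.3e26           # m, present gauge factor (Section I)
r = 52.9e-12         # m, Bohr radius, eq. (14)

E_g = (m_e * hbar**3 * c**5 / (a * r**2)) ** 0.25   # eq. (5)
nu_MHz = E_g / h / 1e6
print(f"E_g/h ~ {nu_MHz:.0f} MHz")   # ~2500 MHz, matching the ~2400 MHz of eq. (15)
```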
V. Time evolution of the vacuum permittivity
The fine structure constant $`\alpha `$ is defined as
$`\alpha =\frac{e^2}{4\pi .ϵ_o.\hbar .c}`$ (17)
At the beginning of separation of electromagnetic interactions the equation
$`\alpha =1`$ (18)
had to be valid. When substituting
$`r=\frac{\hbar }{m_e.c.\alpha }`$ (19)
into the left side of (21) and
$`I_{H1s}\approx m_e.c^2.\alpha ^2`$ (20)
into the right side of (21)
$`\left(\frac{m_e.\hbar ^3.c^5}{a.r^2}\right)^{1/4}\approx I_{H1s}.\alpha ^2.\frac{m_e}{m_p}`$ (21)
the dependence
$`\alpha \propto a^{-1/14}`$ (22)
appears. Two conclusions may be derived from the above relationships. The first one is that equation (18) relates to the time
$`t\approx 10^{-10}s`$ (23)
which is just the time in which the weak and electromagnetic interactions were separated. The second consequence relates to (22), i.e. the time evolution of the fine structure constant. Since the velocity of light, electronic charge and Planck constant are considered to be time independent quantities, time evolution of the Universe (e.g. changes in its mass and, in turn, also in charge and electrostatic field density) and the gradual increase of the gauge factor may be reflected in a very slow change in the vacuum permittivity $`\epsilon _o`$ (an electric property) and vacuum permeability $`\mu _o`$ (a magnetic property).
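If, as relation (22) suggests, $`\alpha `$ decreases as the 1/14-th power of the gauge factor with $`a=c.t_c`$, the implied present-day fractional drift rate is tiny (a sketch; the present age t<sub>c</sub> ≈ 13.7 Gyr is an assumed input):

```python
# Implied present drift of the fine structure constant if alpha ~ a^(-1/14)
# and a grows linearly with t_c, so |d(alpha)/alpha| = (1/14) dt/t.
t_c_years = 13.7e9                       # assumed present cosmological age
rate_per_year = (1.0 / 14.0) / t_c_years
print(f"|d(alpha)/alpha| ~ {rate_per_year:.1e} per year")  # ~5e-12 per year
```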
Conclusions
1. Increase in the gauge factor has several consequences which are to be unveiled and explained in the future. One of them is a gradual decrease of the fine structure constant, which can be related to a gradual increase of the vacuum permittivity and decrease of the vacuum permeability.
2. Capability of localization of the gravitational energy within the ENU is a challenge for answering questions such as the unification of all four fundamental physical interactions and the stability or invariability of some physical quantities and “constants”.
References
1. S. Hawking, Sci. Amer., 236 (1980) 34
2. V. Skalsky, M. Sukenik, Astrophys. Space Sci., 236 (1991) 169
3. P.C. Vaidya, Proc. Indian Acad. Sci., A33 (1951) 264
4. J. Sima, M. Sukenik, Preprint gr-qc/9903090
5. I. L. Rozentahl, Advances in Mathematics, Physics and Astronomy, 31 (1986) 241 (in Czech)
6. M. Sukenik, J. Sima, J. Vanko, Preprint gr-qc/0001059
|
no-problem/0002/astro-ph0002331.html
|
ar5iv
|
text
|
# CO on Titan: More Evidence for a Well-Mixed Vertical Profile
## 1. Introduction
The atmosphere of Titan exhibits a complex photochemistry, and many nitriles and hydrocarbons have been detected by Voyager spacecraft and from Earth. Until recently, however, only two oxygen-bearing species had been detected on Titan: $`CO_2`$ (observed by Voyager 1; Samuelson et al. (1983)) and CO (observed from Earth; Lutz et al. (1983)).
The presence of oxygenated molecules is interesting because the atmosphere of Titan is strongly reducing. The cold temperatures of the lower stratosphere and the troposphere imply that $`CO_2`$ condenses out of the lower atmosphere and is continuously deposited on the surface. To sustain the carbon dioxide abundance a source of oxygen is needed, and it is generally assumed to be supplied in water from bombardment of the upper atmosphere by icy grains. In this model vaporized water is quickly photolyzed to produce OH, and OH reacts with hydrocarbon radicals such as $`CH_3`$ to produce CO. CO in turn reacts with OH to produce $`CO_2`$ (Samuelson et al. (1983), Yung et al. (1984), Toublanc et al. (1995), Lara et al. (1996)). While $`CO_2`$ has a short lifetime (order 10<sup>3</sup>–10<sup>4</sup> years), the photochemical lifetime of $`CO`$ in the atmosphere of Titan is estimated to be very long ($`10^9`$ years; Yung et al. (1984), Chassefière and Cabane (1991)).
Observationally the missing piece of the oxygen chemistry has been the source, water. Recently, water vapor was detected in the upper atmosphere of Titan by the Short Wavelength Spectrometer (SWS) aboard the Infrared Satellite Observatory (Coustenis et al. (1998)). With observations of the three major components of oxygen chemistry, it is now possible to check the internal consistency of photochemical models, and to compare the oxygen chemistry and water infall rate of Titan with the other giant planets, particularly Saturn (Feuchtgruber et al. (1997), Coustenis et al. (1998)).
Understanding the oxygen chemistry relies on accurate knowledge of the abundance and distribution of each species. A longstanding discussion regarding the CO distribution in Titan’s atmosphere, spanning more than a decade, has been primarily directed toward determining if CO is well-mixed (Marten et al. (1988), Gurwell and Muhleman (1995), Hidayat et al. (1998)). Since the residence lifetime of CO is long compared to transport timescales, the molecular weight of $`CO`$ is the same as for the dominant $`N_2`$ gas, and the atmosphere is never cold enough for CO to condense, carbon monoxide should be uniformly mixed in the Titan atmosphere to high altitudes.
Observational data, however, give conflicting results. Table 1 provides data on the CO abundance as measured by ground-based observers over the past 17 years. These observations have been sensitive to either the troposphere (near- and mid-IR) or the stratosphere (millimeter). The data in Table 1 show that no clear consensus has emerged regarding the CO abundance, either in the troposphere or the stratosphere.
In this Note, we present an analysis of new interferometric observations of the $`CO(2-1)`$ line on Titan. The results of this study have important implications for our understanding of the oxygen budget and photochemistry of the stratosphere of Titan.
## 2. Observations and Data Reduction
Observations of the $`CO(2-1)`$ rotational transition (rest frequency $`\nu _0`$=230.5380 GHz) on Titan were made on November 11 and 12, 1999 with the Owens Valley Radio Observatory Millimeter Array, located near Big Pine, California. The Titan ephemeris data was generated using the Jet Propulsion Laboratory’s Horizons on-line system (Giorgini et al. (1996)). Titan was approaching eastern elongation with respect to Saturn, with a separation increasing from $`\sim `$145<sup>′′</sup> to more than 200<sup>′′</sup> over the two day period.
The interferometer was aligned in a fairly compact configuration, providing a synthesized beam of roughly 2″$`\times `$2.5″ at the observing frequency, while Titan’s apparent surface diameter was 0.86<sup>′′</sup> (at a distance of 8.2165 AU). Titan was observed on each night over a period of about 7 hours, when it was above 30° elevation. A single measurement on each baseline consisted of a three minute integration during which the complex visibility of the source was recorded. Amplitude and phase gain variations were monitored through observations of 0235+164 approximately every 20 minutes, and antenna pointing was checked about every two hours using 3C84. The total integration time spent on Titan equaled 238 minutes on each night.
The signal was detected for each antenna pair in two correlator systems: a wide-band analog cross correlator ($`\sim `$1 GHz bandwidth) and a digital spectrometer. The CO line is significantly wider than this system bandwidth, and we utilized two local oscillator tunings to provide better coverage of the line: on November 11 the digital spectrometer measured (in two secondary LO tunings) the line in the upper sideband from $`\mathrm{\Delta }\nu =-656`$ MHz to $`+32`$ MHz, and on November 12 from $`\mathrm{\Delta }\nu =-32`$ MHz to $`+656`$ MHz. Spectra in the image sideband, approximately 3 GHz lower in frequency, were also recorded. The sideband signals were isolated to better than 20 dB using a phase-switching cycle. The combination of two first and second LO tunings allowed us to ultimately measure $`\pm `$650 MHz of the center frequency of each sideband at 4 MHz resolution, and $`\pm `$16 MHz at 0.5 MHz resolution in the line core.
Calibration of the digital correlator passband was done through observations of 3C273 and an internal correlated noise source. The relative calibration of the sidebands was accurately measured from observations of Uranus, 3C273, 3C84, and 0235+164 (to $`\sim 1`$%), since nearly all weather and instrumental effects impact each sideband in a similar manner. We note that the ability to isolate the astronomical signal sidebands and record independent spectra in each sideband represents a significant advantage for interferometric relative to single-dish observations of CO on Titan because the emission line is significantly broader than the spectrometer bandwidth. In this case the most precise measure of the line-to-continuum (LTC) ratio is provided by separating the sidebands. This relative sideband calibration then allows for the production of an accurate LTC spectrum, with one sideband sensing the continuum, and the other the line.
The Titan signal strength was sufficient for the application of phase self-calibration (see Thompson, Moran, and Swenson (1986)) to remove atmospheric phase variations, which cause decorrelation of the signal, on timescales shorter than the standard calibration cycle. After calibration, a complete spatially unresolved (e.g. “zero-spacing”) spectrum for each day was obtained by fitting the observed complex visibilities for each channel with a model of the Titan visibility function, correcting for the spatial sampling of the interferometer. The absolute flux scale was provided by scaling the continuum sideband intensity to equal the radiative transfer model flux of Titan at 227.5 GHz, corrected for the date and time of the observations (see below). The same scaling factor was applied to the emission line sideband, preserving the relative calibration.
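The correction for the spatial sampling of the interferometer can be sketched with the standard uniform-disk visibility function (illustrative only — the baseline lengths below and the uniform-disk model are assumptions, not the actual OVRO fitting code):

```python
# Normalized visibility of a uniform disk of angular diameter theta on a
# baseline of b wavelengths: V = 2*J1(x)/x, x = pi*theta*b. Titan's 0.86"
# disk is only slightly resolved on compact-array baselines, so V stays
# near 1 and the fit mainly rescales for a small resolution loss.
import math

def J1(x, n=4000):
    """Bessel J1 via its integral representation (no external deps)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for k in range(1, n):
        t = k * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def disk_visibility(theta_rad, b_wavelengths):
    x = math.pi * theta_rad * b_wavelengths
    return 2.0 * J1(x) / x

theta = 0.86 / 206265.0          # 0.86 arcsec in radians
for b in (1e4, 5e4, 1e5):        # baseline lengths in wavelengths (illustrative)
    print(f"b = {b:.0e} wavelengths: V = {disk_visibility(theta, b):.3f}")
```

On the longest baseline shown (10<sup>5</sup> wavelengths, about 130 m at 1.3 mm) the visibility has dropped only to roughly 0.8, consistent with Titan being nearly unresolved by the compact configuration.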
The resulting combined spectrum of the $`CO(2-1)`$ line on Titan is shown in Figure 1. The data clearly shows the $`CO(2-1)`$ line is a strong emission feature in the spectrum of Titan. The image sideband spectrum is essentially flat except for a weak emission line due to the $`HC_3N(25-24)`$ rotational transition at 227.419 GHz. This is particularly important because it shows that the sideband isolation procedure was effective to well below the noise level of the spectrum.
## 3. Modeling & Analysis
The radiative transfer model used to analyze the new CO data is nearly identical to the one discussed in Gurwell and Muhleman (1995), and we only highlight important aspects.
The basic parameters of the Titan atmosphere were derived from revised Voyager 1 radio occultation results (Lindal et al. (1983), Lellouch (1990), Coustenis and Bézard (1995)), including an atmospheric base at 2575 km from the center of Titan, with a surface pressure and temperature of 1460 millibar and 96.7 K. For the thermal profile of the atmosphere we used an equatorial profile determined by Coustenis and Bézard (1995, their profile A) based upon the occultation results and Voyager 1/IRIS spectra (Fig. 2) combined with model J of Yelle (1991) for the upper atmosphere; this same model was used by Hidayat et al. (1998) to analyze their results. This model is appropriate since the observations reported here are unresolved (whole-disk) spectra, which are heavily weighted by emission from equatorial and low latitudes.
The millimeter continuum opacity on Titan is due to collision induced dipole absorption by $`N_2`$–$`N_2`$, $`N_2`$–Ar, and $`N_2`$–CH<sub>4</sub>, and was modeled according to the results of Courtin (1988). The spectroscopic parameters for the $`CO(2-1)`$ line were taken from the JPL catalog (Pickett et al. (1992); see also http://spec.jpl.nasa.gov). The full Voigt lineshape profile calculation using a fast computational method (Hui et al. (1978)), integrated in pressure over atmospheric layers of constant temperature, was done using a collisional line-broadening coefficient for $`CO(2-1)`$ in $`N_2`$ of $`\gamma =2.21(T/300K)^{-0.74}`$ MHz mbar<sup>-1</sup> (Semmoud-Monnanteuil and Colmont (1987)). Radiative transfer calculations at appropriate frequencies were performed for a variety of radial steps, including limb-sounding geometries, and integrated over the apparent disk to provide the model whole-disk spectrum.
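The relative importance of the two broadening mechanisms can be sketched numerically using the quoted broadening coefficient (the temperature/pressure pairings below are illustrative, not values from the model atmosphere):

```python
# Rough comparison of collisional and Doppler linewidths for the CO(2-1)
# line on Titan, using the N2-broadening coefficient quoted in the text.
import math

nu0 = 230.538e9    # Hz, CO(2-1) rest frequency
gamma300 = 2.21e6  # Hz/mbar at 300 K (Semmoud-Monnanteuil & Colmont 1987)

def gamma_collisional(T, p_mbar):
    """Collisional (pressure) half-width in Hz."""
    return gamma300 * (T / 300.0) ** -0.74 * p_mbar

def doppler_fwhm(T, mass_amu=28.0):
    """Doppler FWHM in Hz for a molecule of the given mass."""
    m = mass_amu * 1.6605e-27
    k = 1.3807e-23
    c = 2.998e8
    return nu0 * math.sqrt(8 * math.log(2) * k * T / (m * c * c))

print(gamma_collisional(130.0, 100.0))  # ~4e8 Hz: deep levels, pressure-dominated wings
print(doppler_fwhm(160.0))              # ~4e5 Hz: sets the width of the narrow core
```

This is why the broad line wings sense the deep, pressure-broadened atmosphere while only the inner few MHz of the core carries information about high altitudes.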
The contribution functions ($`W(z)=e^{-\tau }d\tau /dz`$) for several frequency offsets from the $`CO(2-1)`$ line center are shown in Fig. 2, for a single raypath at the disk center. This function describes the relative contribution of different regions of the atmosphere to the emitted radiation at each frequency. The plotted functions assume a CO abundance of 50 ppm, constant with altitude. The $`\mathrm{\Delta }\nu =-3000`$ MHz contribution function corresponds to the middle of the continuum (lower) sideband, and is dominated by the collision induced opacity of $`N_2`$. The other functions correspond to the emission (upper) sideband, and are dominated by $`CO(2-1)`$ opacity. The full line senses the atmosphere from 40 km (the tropopause) to 400 km. However, the range from 200 to 400 km is sounded mostly in the inner 4 MHz of the line core. At 4 MHz spectral resolution we are limited to sensing the CO abundance from the tropopause to $`\sim `$200 km. The 0.5 MHz spectrum of the line core pushes this upper bound to near 350 km in the absence of noise. Thermal noise on the spectral measurements in practice limits our sensitivity to $`\sim 300`$ km.
### 3.1. Best-fit Uniform CO Distribution
The radiative transfer model was run for a series of uniform CO distributions from $`q`$(CO)=10 to 90 ppm, in steps of 10 ppm. Resulting spectra are shown in Fig. 1 (in steps of 20 ppm for clarity). The model spectra have been convolved to the measurement spectral resolution of the data in each panel. The model calculations show that the continuum (lower) sideband emission is essentially unaffected by the CO distributions considered, and is an excellent continuum measurement. The model gives a flux at 227.5 GHz of 1.565 Jy for the geometry of the observations, equal to a disk-average Rayleigh-Jeans brightness temperature of 71.4 K.
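The quoted flux/brightness-temperature pair can be checked against the Rayleigh–Jeans relation (a sketch; the disk solid angle uses the 0.86″ apparent diameter from Section 2):

```python
# Check of the quoted continuum flux <-> disk-average brightness temperature
# via the Rayleigh-Jeans relation S = 2 k T_B Omega nu^2 / c^2.
import math

k = 1.381e-23          # J/K
c = 2.998e8            # m/s
nu = 227.5e9           # Hz
T_B = 71.4             # K, quoted disk-average brightness temperature
theta = 0.86 / 206265.0              # apparent diameter in radians
Omega = math.pi * (theta / 2) ** 2   # solid angle of the disk, sr

S = 2 * k * T_B * Omega * nu**2 / c**2   # W m^-2 Hz^-1
print(f"S ~ {S / 1e-26:.2f} Jy")         # ~1.55 Jy, vs the quoted 1.565 Jy
```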
Even by eye, the 50 ppm uniform model provides an exceptionally good fit to the data at both resolutions. The 50 ppm model gives an RMS residual of 86.8 mJy for the 4 MHz data, with models of 40 and 60 ppm giving RMS residuals that are factors of 1.4 and 1.1 times larger, respectively. Given that a large number of channels are involved (324), even an 10% increase in RMS residuals is quite significant. The 0.5 MHz spectrum is also consistent with this model. A rigorous least-squares analysis for the best-fit uniform profile gives a formal solution of 52$`\pm `$2 ppm from 40 to 300 km.
### 3.2. Best-fit Non-Uniform CO Distribution
An iterative least-squares inversion algorithm (following Gurwell and Muhleman (1995)) based on the radiative transfer model was utilized to solve for a best-fit non-uniform CO distribution. The logarithm of the CO distribution was constrained to be a linear function of altitude. This constrained solution tests whether a gradient in the CO distribution is consistent with the observed spectrum.
We find that the best-fit non-uniform profile, with formal error, is 48$`\pm `$4 ppm at 40 km, rising to 60$`\pm `$10 ppm at 300 km. The RMS residual is 86.1 mJy, representing less than 1% improvement in the residual over the best-fit uniform profile.
### 3.3. Error Estimates and the Best-fit CO Distribution
The formal errors quoted in the above sections are the direct results of the least-squares analyses, and therefore do not take into account errors in the radiative transfer modeling or the calibration of the spectrum.
To test whether the continuum emission model is a serious source of error (since the spectrum is calibrated by referencing to the continuum sideband data), we recomputed the continuum emission at 227.5 GHz, scaling the collision induced dipole absorption calculated from the data of Courtin (1988) by factors of 0.5 and 2. The calculated emission results were indistinguishable from the nominal model, which can be explained as the result of two factors. First, the collision induced continuum absorption scales as the square of pressure, and is therefore a very steep function of altitude. Therefore, increasing (or decreasing) the absorption coefficient even by factors of two will only increase the peak of the contribution function by a small fraction of a scale height. Second, the peak of the contribution function is right at the tropopause, where the temperature gradient is near zero. The result is that the emission change is very small, and we estimate that this error is about 1%.
The spectrum sidebands were calibrated assuming the QSO calibration sources had a spectral index of -0.5 (i.e., flux $`\propto \nu ^{-0.5}`$). However, the spectral index of these types of sources varies over the range of 0 to -1, and could lead to a calibration error of approximately 1% in the relative calibration over the 3 GHz difference in the sidebands.
Adding the calibration errors in quadrature, we find an error in the relatively calibrated spectrum of about 1.4%. Using the uniform distribution models discussed in section 3.1, we find that a 3% error in the relative calibration could lead to an error of roughly 10 ppm in a worst-case situation (we note that this does not include a refitting of the lineshape, which would tend to reduce this error; hence this is a worst-case estimate). The calibration error is then about 6 ppm using this scaling. For the uniform model solution, the formal error is significantly smaller than this calibration error estimate, and we believe that the error of our measurement is therefore about 6 ppm.
Noting that the non-uniform solution only improves the RMS residuals by 1% at best, and that the formal errors on the non-uniform solution encompass our uniform solution, we favor the uniform model for the CO distribution, which is in agreement with the current understanding of the chemistry of CO in the atmosphere of Titan. The high resolution data provides the information on altitudes above 200 km, and as can be seen in Fig. 1 this data has a higher RMS noise (by a factor of $`\sim `$2); this increases our error estimate by a factor of about two over this altitude range. We therefore find that the $`CO(2-1)`$ spectrum is best fit by a uniform profile of 52 ppm, with estimated errors of 6 ppm (40 to 200 km) and 12 ppm (200 to 300 km).
## 4. Discussion
The results presented here are nearly identical to our previous estimate of the CO distribution based on observations of the $`CO(1-0)`$ transition (Gurwell and Muhleman (1995)) and consistent with the original measurement of tropospheric CO (Lutz et al. (1983)). Taken together, these measurements suggest a vertical profile of CO that is constant with altitude, at about 52 ppm, from the surface to at least 300 km.
These results are at odds with the recent measurements of Noll et al. (1996), who found a tropospheric abundance of 10 ppm, and Hidayat et al. (1998), who found a stratospheric CO abundance of around 27 ppm (Table 1). Noll et al. (1996) explored the possibility that their simple reflecting layer was not the surface, but a higher altitude ’haze’ layer. If the reflecting layer was at 0.9 bar (14 km) the spectrum was best fit with a CO abundance of 60 ppm. However, based on other evidence they found this model less satisfactory than a surface reflecting layer. The results of Hidayat et al. come from an analysis of several lines of CO, including the $`CO(1-0)`$ and $`CO(2-1)`$ lines; the discrepancy between their results and ours does not appear to be due to differences in modeling the atmosphere of Titan, but derives from differences in the measurement techniques and the resulting calibrated spectra (A. Marten, personal communication). However, we do point out that the interferometric method does offer advantages over single-dish observations for measuring the very broad lines of CO from the atmosphere of Titan.
We find the model of a uniform distribution of CO in the atmosphere of Titan provides a good fit to our data, but we cannot rule out a difference between the tropospheric and stratospheric CO abundance, since our data is insensitive to the lower atmosphere. A final confirmation of the abundance of CO and its vertical distribution requires further near- and mid-IR measurements of CO in the troposphere.
Acknowledgements
This work was supported in part by NASA grant NAG5-7946.
TABLE 1. Observations of CO in Titan’s Atmosphere
| Altitude | Mixing ratio (ppm)<sup>a</sup> | Wavelength | Reference |
| --- | --- | --- | --- |
| Troposphere | 48$`{}_{-32}^{+100}`$ | 1.57$`\mu m`$ | Lutz et al. (1983) |
| Stratosphere | 60$`\pm `$40<sup>b</sup> | 2.6 mm | Muhleman et al. (1984) |
| Stratosphere | 2$`{}_{-1}^{+2}`$ | 2.6 mm | Marten et al. (1988) |
| Stratosphere | 50$`\pm `$10 | 2.6 mm | Gurwell and Muhleman (1995) |
| Troposphere | 10$`{}_{-5}^{+10}`$ | 4.8$`\mu m`$ | Noll et al. (1996) |
| Stratosphere | 27$`\pm `$5<sup>c</sup> | 2.6, 1.3, 0.9 mm | Hidayat et al. (1998) |
| Stratosphere | 52$`\pm `$6 | 1.3 mm | this work |
| <sup>a</sup>Mixing ratio defined as N(CO)/N(Total), i.e. not referenced to $`N_2`$. | | | |
| <sup>b</sup>Reanalyzed by Paubert et al. (1984): 75$`{}_{-45}^{+105}`$ ppm | | | |
| <sup>c</sup>Non-uniform model: 29$`\pm `$5 ppm (60 km), 24$`\pm `$5 ppm (175 km), 4.8$`\pm `$2 ppm (350 km) | | | |
|
no-problem/0002/math0002091.html
|
ar5iv
|
text
|
# Growth of sumsets in abelian semigroups<sup>1</sup>
<sup>1</sup>Supported in part by grants from the PSC–CUNY Research Award Program and the NSA Mathematical Sciences Program.
Let $`S`$ be an abelian semigroup, written additively, that contains the identity element 0. Let $`A`$ be a nonempty subset of $`S.`$ The cardinality of $`A`$ is denoted $`|A|.`$ For any positive integer $`h`$, the sumset $`hA`$ is the set of all sums of $`h`$ not necessarily distinct elements of $`A`$. We define $`hA=\{0\}`$ if $`h=0`$. Let $`A_1,\mathrm{},A_r,`$ and $`B`$ be nonempty subsets of $`S`$, and let $`h_1,\mathrm{},h_r`$ be nonnegative integers. We denote by
$$B+h_1A_1+\mathrm{}+h_rA_r$$
(1)
the set of all elements of $`S`$ that can be represented in the form $`b+u_1+\mathrm{}+u_r,`$ where $`b\in B`$ and $`u_i\in h_iA_i`$ for all $`i=1,\mathrm{},r.`$ If the sets $`A_1,\mathrm{},A_r,`$ and $`B`$ are finite, then the sumset (1) is finite for all $`h_1,\mathrm{},h_r.`$ The growth function of this sumset is
$$\gamma (h_1,\mathrm{},h_r)=|B+h_1A_1+\mathrm{}+h_rA_r|.$$
For example, let $`S`$ be the additive semigroup of nonnegative integers $`𝐍_0`$, and let $`A_1,\mathrm{},A_r,`$ and $`B`$ be nonempty, finite subsets of $`𝐍_0`$, normalized so that $`0\in B\cap A_1\cap \mathrm{}\cap A_r`$ and $`\mathrm{gcd}(A_1\cup \mathrm{}\cup A_r)=1`$. Let $`b^{\ast }=\mathrm{max}(B)`$ and $`a_i^{\ast }=\mathrm{max}A_i`$ for $`i=1,\mathrm{},r`$. Han, Kirfel, and Nathanson determined the asymptotic structure of the sumset $`B+h_1A_1+\mathrm{}+h_rA_r`$. They proved that there exist integers $`c`$ and $`d`$ and finite sets $`C[0,c2]`$ and $`D[0,d2]`$ such that
$$B+h_1A_1+\mathrm{}+h_rA_r=C\cup [c,b^{\ast }+\underset{i=1}{\overset{r}{\sum }}a_i^{\ast }h_i-d]\cup \left(b^{\ast }+\underset{i=1}{\overset{r}{\sum }}a_i^{\ast }h_i-D\right).$$
for $`\mathrm{min}(h_1,\mathrm{},h_r)`$ sufficiently large. This implies that the growth function is eventually a multilinear function of $`h_1,\mathrm{},h_r,`$ that is, there exists an integer $`\mathrm{\Delta }`$ such that
$$|B+h_1A_1+\mathrm{}+h_rA_r|=a_1^{\ast }h_1+\mathrm{}+a_r^{\ast }h_r+b^{\ast }+1-\mathrm{\Delta }$$
for $`\mathrm{min}(h_1,\mathrm{},h_r)`$ sufficiently large. The explicit determination of the sets $`C`$ and $`D`$ is a difficult unsolved problem in additive number theory. In the case $`r=1`$, it is called the linear diophantine problem of Frobenius. For a survey of finite sumsets in additive number theory, see Nathanson.
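The eventual multilinearity of the growth function can be illustrated numerically (a sketch; the sets $`A_1,A_2,B`$ below are arbitrary examples satisfying the normalization, not taken from the references):

```python
# Numeric illustration of eventual multilinearity of the growth function
# for sums of finite sets of nonnegative integers (r = 2).
from itertools import product

def h_fold(A, h):
    """The h-fold sumset hA, with 0A = {0}."""
    S = {0}
    for _ in range(h):
        S = {s + a for s in S for a in A}
    return S

A1, A2, B = {0, 2, 3}, {0, 1, 4}, {0, 5}

def gamma(h1, h2):
    return len({b + u + v for b in B
                for u in h_fold(A1, h1) for v in h_fold(A2, h2)})

# For large h1, h2 the growth function equals a1*h1 + a2*h2 + b* + 1 - Delta
a1, a2, bstar = max(A1), max(A2), max(B)
Delta = a1 * 10 + a2 * 10 + bstar + 1 - gamma(10, 10)
for h1, h2 in product(range(10, 13), repeat=2):
    assert gamma(h1, h2) == a1 * h1 + a2 * h2 + bstar + 1 - Delta
print("multilinear for large h1, h2; Delta =", Delta)
```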
The theorem about sums of finite sets of integers generalizes to sums in an arbitrary abelian semigroup $`S.`$ We shall prove that if $`A_1,\mathrm{},A_r,`$ and $`B`$ are finite, nonempty subsets of $`S`$, then the growth function $`\gamma (h_1,\mathrm{},h_r)`$ is eventually polynomial, that is, there exists a polynomial $`p(z_1,\mathrm{},z_r)`$ such that
$$\gamma (h_1,\mathrm{},h_r)=|B+h_1A_1+\mathrm{}+h_rA_r|=p(h_1,\mathrm{},h_r)$$
for $`\mathrm{min}(h_1,\mathrm{},h_r)`$ sufficiently large. The case $`r=1`$ is due to Khovanskii. We use his method to extend the result to the case $`r\ge 2.`$ The idea of the proof is to show that the growth function is the Hilbert function of a suitably constructed module graded by the additive semigroup $`𝐍_0^r`$ of $`r`$–tuples of nonnegative integers.
We need the following result about Hilbert functions. Let $`R`$ be a finitely generated $`𝐍_0^r`$–graded connected commutative algebra over a field $`E`$. Then $`R=\bigoplus _{h\in 𝐍_0^r}R_h`$. Suppose that $`R`$ is generated by $`s`$ homogeneous elements $`y_1,\mathrm{},y_s`$ with $`y_i\in R_{\delta _i}`$, that is, the degree of $`y_i`$ is $`\mathrm{deg}y_i=\delta _i\in 𝐍_0^r`$. Let $`M`$ be a finitely generated $`𝐍_0^r`$–graded $`R`$–module. For $`h=(h_1,\mathrm{},h_r)\in 𝐍_0^r`$, we define the Hilbert function
$$H(M,h)=dim_E\left(M_{(h_1,\mathrm{},h_r)}\right).$$
For $`z=(z_1,\mathrm{},z_r)`$, we define
$$z^h=z_1^{h_1}\mathrm{}z_r^{h_r}.$$
Consider the formal power series
$$F(M,z)=\underset{h\in 𝐍_0^r}{\sum }H(M,h)z^h.$$
Then there exists a vector $`\beta `$ with integer coordinates and a polynomial $`P(M,z)=P(M,z_1,\mathrm{},z_h)`$ with integer coefficients such that
$$F(M,z)=\frac{z^\beta P(M,z)}{\prod _{i=1}^s(1-z^{\delta _i})}.$$
(This is Theorem 2.3 in Stanley \[7, p. 33\]).
###### Theorem 1
Let $`A_1,\mathrm{},A_r,`$ and $`B`$ be finite, nonempty subsets of an abelian semigroup $`S`$. There exists a polynomial $`p(z_1,\mathrm{},z_r)`$ such that
$$|B+h_1A_1+\mathrm{}+h_rA_r|=p(h_1,\mathrm{},h_r)$$
for all sufficiently large integers $`h_1,\mathrm{},h_r.`$
Proof. For $`i=1,\mathrm{},r,`$ let
$$A_i=\{a_{i,1},\mathrm{},a_{i,k_i}\},$$
where
$$|A_i|=k_i\ge 1.$$
We introduce a variable $`x_{i,j}`$ for each $`i=1,\mathrm{},r`$ and $`j=1,\mathrm{},k_i`$. Fix a field $`E`$. We begin with the polynomial ring
$$R=E[x_{1,1},\mathrm{},x_{r,k_r}]$$
in the $`s=k_1+\mathrm{}+k_r`$ variables $`x_{i,j}`$. The algebra $`R`$ is connected since it is an integral domain (cf. Hartshorne \[2, Exercise 2.19, p. 82\]). For each $`r`$–tuple $`(h_1,\mathrm{},h_r)\in 𝐍_0^r`$ we let
$$R_{(h_1,\mathrm{},h_r)}$$
be the vector subspace of $`R`$ consisting of all polynomials that are homogeneous of degree $`h_i`$ in the variables $`x_{i,1},\mathrm{},x_{i,k_i}`$. In particular, $`E=R_{(0,\mathrm{},0)}`$. Then
$$R=\underset{(h_1,\mathrm{},h_r)\in 𝐍_0^r}{\bigoplus }R_{(h_1,\mathrm{},h_r)}.$$
The multiplication in the algebra $`R`$ is consistent with this direct sum decomposition in the sense that
$$R_{(h_1,\mathrm{},h_r)}R_{(h_1^{\prime },\mathrm{},h_r^{\prime })}\subseteq R_{(h_1+h_1^{\prime },\mathrm{},h_r+h_r^{\prime })},$$
and so $`R`$ is graded by the semigroup $`𝐍_0^r`$.
Next we construct an $`𝐍_0^r`$–graded $`R`$–module $`M.`$ To each $`r`$–tuple $`(h_1,\mathrm{},h_r)\in 𝐍_0^r`$ we associate a finite-dimensional vector space $`M_{(h_1,\mathrm{},h_r)}`$ over the field $`E`$ in the following way. To each element
$$u\in B+h_1A_1+\mathrm{}+h_rA_r$$
we assign the symbol
$$[u,h_1,\mathrm{},h_r].$$
Let $`M_{(h_1,\mathrm{},h_r)}`$ be the vector space consisting of all $`E`$–linear combinations of these symbols. Then
$$dim_EM_{(h_1,\mathrm{},h_r)}=|B+h_1A_1+\mathrm{}+h_rA_r|.$$
(2)
Let
$$M=\underset{(h_1,\ldots ,h_r)\in 𝐍_0^r}{\bigoplus }M_{(h_1,\ldots ,h_r)}.$$
This is an $`𝐍_0^r`$–graded vector space over $`E.`$
To make $`M`$ a module over the algebra $`R,`$ we must construct a bilinear multiplication $`R\times M\to M.`$ We define the product of the variable $`x_{i,j}\in R`$ and the basis element $`[u,h_1,\ldots ,h_r]\in M`$ as follows:
$$x_{i,j}[u,h_1,\ldots ,h_r]=[u+a_{i,j},h_1,\ldots ,h_{i-1},h_i+1,h_{i+1},\ldots ,h_r].$$
This makes sense since
$$u\in B+h_1A_1+\cdots +h_iA_i+\cdots +h_rA_r$$
and so
$$u+a_{i,j}\in B+h_1A_1+\cdots +(h_i+1)A_i+\cdots +h_rA_r.$$
This induces a well-defined multiplication of elements of $`M`$ by polynomials in $`R`$ since, if $`i<i^{\prime }`$,
$`x_{i^{\prime },j^{\prime }}\left(x_{i,j}[u,h_1,\ldots ,h_r]\right)`$
$`=`$ $`x_{i^{\prime },j^{\prime }}[u+a_{i,j},h_1,\ldots ,h_i+1,\ldots ,h_r]`$
$`=`$ $`[u+a_{i,j}+a_{i^{\prime },j^{\prime }},h_1,\ldots ,h_i+1,\ldots ,h_{i^{\prime }}+1,\ldots ,h_r]`$
$`=`$ $`[u+a_{i^{\prime },j^{\prime }}+a_{i,j},h_1,\ldots ,h_i+1,\ldots ,h_{i^{\prime }}+1,\ldots ,h_r]`$
$`=`$ $`x_{i,j}[u+a_{i^{\prime },j^{\prime }},h_1,\ldots ,h_{i^{\prime }}+1,\ldots ,h_r]`$
$`=`$ $`x_{i,j}\left(x_{i^{\prime },j^{\prime }}[u,h_1,\ldots ,h_r]\right).`$
The case $`i\ge i^{\prime }`$ is similar. Note that this is the only place where we use the commutativity of the semigroup $`S.`$ It follows that $`M`$ is an $`R`$–module. Moreover,
$$R_{(h_1,\ldots ,h_r)}M_{(h_1^{\prime },\ldots ,h_r^{\prime })}\subseteq M_{(h_1+h_1^{\prime },\ldots ,h_r+h_r^{\prime })},$$
and so $`M`$ is a graded $`R`$–module. Furthermore, the finite set
$$\{[b,0,\ldots ,0]:b\in B\}\subseteq M$$
generates $`M`$ as an $`R`$–module.
Since $`x_{i,j}\in R_{\delta _{i,j}}`$, where $`\mathrm{deg}(x_{i,j})=\delta _{i,j}`$ is the $`r`$–tuple whose $`i`$–th coordinate is 1 and whose other coordinates are 0, and since
$$\frac{1}{(1-z_i)^{k_i}}=\sum _{h_i=0}^{\infty }\binom{h_i+k_i-1}{k_i-1}z_i^{h_i},$$
we have
$`F(M,z)`$ $`=`$ $`{\displaystyle \sum _{h\in 𝐍_0^r}}H(M,h)z^h`$
$`=`$ $`{\displaystyle \frac{z^\beta P(M,z)}{\prod _{i=1}^r\prod _{j=1}^{k_i}(1-z^{\delta _{i,j}})}}`$
$`=`$ $`{\displaystyle \frac{z^\beta P(M,z)}{\prod _{i=1}^r(1-z_i)^{k_i}}}`$
$`=`$ $`z^\beta P(M,z){\displaystyle \prod _{i=1}^r}{\displaystyle \sum _{h_i=0}^{\infty }}\binom{h_i+k_i-1}{k_i-1}z_i^{h_i}`$
$`=`$ $`z^\beta P(M,z){\displaystyle \sum _{h=(h_1,\ldots ,h_r)\in 𝐍_0^r}}{\displaystyle \prod _{i=1}^r}\binom{h_i+k_i-1}{k_i-1}z^h.`$
This implies that the Hilbert function $`H(M,h)`$ is a polynomial in $`h_1,\ldots ,h_r`$ for $`\mathrm{min}(h_1,\ldots ,h_r)`$ sufficiently large. By (2), the growth function is the Hilbert function of $`M`$. This completes the proof.
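As a concrete illustration (not part of the proof), the growth function of Theorem 1 can be tabulated by brute force for small sets of integers. The sets below are arbitrary choices; for $`r=1`$ the eventual polynomial is linear, so the first differences of $`h|B+hA|`$ must become constant:

```python
def sumset(X, Y):
    """All pairwise sums of two finite subsets of the integers."""
    return {x + y for x in X for y in Y}

def growth(B, A, h):
    """|B + h*A|, computed by forming the h-fold sumset of A with B."""
    S = set(B)
    for _ in range(h):
        S = sumset(S, A)
    return len(S)

A = {0, 2, 5}   # arbitrary example sets
B = {0}
f = [growth(B, A, h) for h in range(20)]
diffs = [f[h + 1] - f[h] for h in range(len(f) - 1)]
print(f[:8])     # a few initial, still irregular values
print(diffs[5:]) # the first differences are eventually constant
```

For this choice one finds $`|B+hA|=5h-5`$ for all $`h\ge 3`$, consistent with the theorem.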
--- no-problem/0002/astro-ph0002053.html | ar5iv | text ---
# Black hole X-ray binaries: A new view on soft-hard spectral transitions
## 1 Introduction
For a decade it has been known that the spectra of X-ray novae show changes from a soft state at high luminosity to a hard state when the luminosity has declined during the outburst (Tanaka 1989). The persistent canonical black hole system Cyg X-1 also undergoes occasional transitions between its standard low luminosity (hard) state and a soft state (see Fig. 1). Such changes between the two spectral states have been observed for several systems, regardless of whether the compact object is a neutron star (Aql X-1, 1608-522) or a black hole (GS/GRS 1124-684, GX 339-4) (Tanaka & Shibazaki 1996). Here we concentrate on black hole sources. Observations show that the phenomenon always occurs at a luminosity around $`10^{37}\mathrm{erg}/\mathrm{s}`$, which corresponds to a mass accretion rate of about $`10^{17}\mathrm{g}/\mathrm{s}`$ (Tanaka 1999).
The two spectral states are thought to be related to different states of accretion: (1) the soft spectrum originates from a thin disk which extends down to the last stable orbit plus a corona above the disk, (2) the hard spectrum originates from a thin disk outside a transition radius $`r_{tr}`$ and a coronal flow/ ADAF inside. The spectral transitions of Nova Muscae 1991 and Cygnus X-1 were modelled based on this picture by Esin et al. (1997, 1998). The value of $`r_{tr}`$ was taken as the maximal distance $`r`$ for which an ADAF with that accretion rate can exist (“strong ADAF proposal”, Narayan & Yi 1995). We determine the location of the inner edge of the thin disk from the equilibrium between it and the corona above.
## 2 Generation of the coronal flow
### 2.1 Evaporation
The equilibrium between the cool accretion disk and the corona above (Meyer & Meyer-Hofmeister 1994) is established in the following way. Frictional heat released in the corona flows down into cooler and denser transition layers. There it is radiated away if the density is sufficiently high. If the density is too low, cool matter is heated up and evaporated into the corona until an equilibrium density is established (Meyer 1999).
Mass drained from the corona by an inward drift is replaced by mass evaporating from the thin disk as the system establishes a stationary state. When the evaporation rate exceeds the mass flow rate in the cool disk the disk terminates. Inside only a hot coronal flow exists.
### 2.2 Physics of the corona
Mass flow in the corona is similar to that in the thin disk. Differential (Kepler-like) rotation causes transfer of angular momentum outwards and mass flow inwards. The corona is geometrically much thicker than the disk underneath. Therefore sidewise energy transport is not negligible. Sidewise advection, heat conduction downward, radiation from the hot optically thin gas flow and wind loss are all important for the equilibrium between corona and thin disk. A detailed description would demand the solution of a set of partial differential equations in radial distance $`r`$ and vertical height $`z`$. In particular a sonic transition requires treatment of a free boundary condition on an extended surface.
From simplified modelling and analysis we find the following pattern of coronal flow. When a hole in the thin disk exists there are three regimes with increasing distance from the black hole. (1) Near the inner edge of the thin disk the gas flows towards the black hole. (2) At larger $`r`$ wind loss is important taking away about 20% of the total matter inflow. (3) At even larger distances some matter flows outward in the corona as a consequence of conservation of angular momentum. One might compare this with the flow in a “free” thin disk without the tidal forces acting in a binary. In such a disk matter flows inward in the inner region and outward in the outer region, with conservation of the total mass and angular momentum (Pringle 1981).
### 2.3 Model
We model the equilibrium between corona and thin disk in a simplified way. This is possible since the evaporation process is concentrated near the inner edge of the thin disk. Thus the corona above the innermost zone of the disk dominates the global structure. Further inward there is no thin disk anymore. The representative dominant region from $`r`$ to $`r`$+$`\mathrm{\Delta }r`$ has to be chosen such that evaporation further outward is not important. One incorporates the effects of frictional heat generation, conduction, radiation, sidewise loss of energy and wind loss at large height into this one zone (“one-zone model”). A set of ordinary differential equations for mass, motion, and energy with boundary conditions at the bottom (downward thermal flux - pressure relation) and at the top (sonic transition) uniquely determines the mass accretion rate, wind loss and temperature in the corona as functions of radius. We restrict the analysis to a stationary corona.
The evaporation process was first investigated for disks in dwarf nova systems (Meyer & Meyer-Hofmeister 1994, Liu et al 1995). The situation is similar for disks around black holes (Meyer 1999). The coronal gas flowing into the hole and replaced by evaporation from the disk is understood as the supply for an ADAF which was used successfully to model the spectra of several black hole sources. A recent review by Narayan et al. (1998) gives a detailed description of accretion in the vicinity of a black hole.
## 3 Computational results
### 3.1 The critical mass flow rate $`\dot{M}_{\mathrm{crit}}`$
We use the same equations as Liu et al. (1995). The efficiency of evaporation at given distance $`r`$ from the compact star determines the location of the inner edge of the thin disk $`r_{\mathrm{tr}}`$. The relation between the mass flow rate $`\dot{M}`$ in the disk and $`r_{\mathrm{tr}}`$ was now computed also for black hole systems. In Fig. 2 we show this relation for a 6 $`M_{\odot }`$ black hole (viscosity parameter $`\alpha =0.3`$).
Up to now only the decreasing branch was known and investigated. The interesting new feature is that the efficiency of evaporation reaches a maximum. This means that as the mass accretion rate in the disk is increased the inner edge moves inward, but if the rate exceeds a critical value $`\dot{M}_{\mathrm{crit}}`$ the thin disk can no longer be fully depleted by evaporation (for this accretion rate the inner disk edge is at about 340 Schwarzschild radii). The thin disk then extends inward towards the last stable orbit.
The temperature in the corona increases with decreasing radius, but reaches a saturation value where the coronal mass flow reaches its maximum. The value $`h`$ in Fig. 2 is the height at which the pressure has decreased by a factor of $`e`$.
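For orientation, the quoted transition radius can be converted to physical units. This is a back-of-the-envelope sketch using standard cgs constants; only the Schwarzschild radius $`R_s=2GM/c^2`$ enters:

```python
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10       # speed of light [cm/s]
M_sun = 1.989e33   # solar mass [g]

M = 6.0 * M_sun                # black hole mass used in Fig. 2
R_s = 2.0 * G * M / c**2       # Schwarzschild radius
r_edge = 340.0 * R_s           # inner disk edge at the critical accretion rate

print(f"R_s    = {R_s:.2e} cm")     # roughly 1.8e6 cm
print(f"r_edge = {r_edge:.2e} cm")  # roughly 6e8 cm
```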
### 3.2 What causes the maximum of the coronal mass flow rate?
A change in the physical process that removes the heat released by friction is the cause for the maximum of the coronal mass flow rate seen in Fig. 2. A dimensional analysis of the equations yields the following result. For large inner radii coronal heating is balanced by inward advection and wind loss. This fixes the coronal temperature at about 1/8 of the virial temperature $`T_\mathrm{v}`$ ($`\mathcal{R}T_\mathrm{v}/\mu =GM/r`$, $`\mathcal{R}`$ gas constant, $`\mu `$ molecular weight, $`G`$ gravitational constant) (see Fig. 2). Downward heat conduction and subsequent radiation in the denser lower region play a minor role for the energy loss though they always establish the equilibrium density in the corona above the disk.
With rising temperature, thermal heat conduction removes an increasing part of the energy released and finally becomes dominant. For optically thin bremsstrahlung the temperature saturates at a universal value defined by a combination of conductivity and radiation coefficients, the Boltzmann and the gas constant, and the non-dimensional $`\alpha `$-parameter of friction (see Fig. 2). Dimensional analysis of the equations yields the rate of mass accretion through the corona as a function of temperature divided by the Kepler frequency $`(GM/r^3)^{1/2}`$. For small radii this gives the $`r^{3/2}`$ law in Fig. 2.
The maximum accretion rate occurs where the sub-virial temperature for large radii reaches the saturation temperature for small radii. Since the virial temperature is proportional to $`M/r`$, this radius $`r_{\mathrm{crit}}`$ is proportional to $`M`$. Then the accretion rate, proportional to the inverse of the Kepler frequency, also becomes proportional to $`M`$.
### 3.3 Approximations used for our model
Synchrotron and Compton cooling have been left out. Synchrotron cooling is non-dominant as long as the magnetic energy density stays below roughly 1/3 of the pressure. Compton cooling and heating by photons from the disk surface and from the accretion centre are non-dominant at all distances larger than that of the peak of the coronal mass flow rate, $`r\ge r_{\mathrm{crit}}`$ ($`\approx 340R_s`$, Fig. 2). They become important for smaller radii.
The conductive flux remains small compared to the upper limit, the transport by free streaming electrons, so that classical thermal heat conduction is a good approximation. We have neglected lateral heat inflow by thermal conduction. This term is small compared to the dominant advective and wind losses at large radii, and vanishes when the temperature becomes constant at small radii.
Temperature equilibrium between electrons and ions requires that the collision times between them remain shorter than the heating timescale. This holds for $`r\ge r_{\mathrm{crit}}`$, but the condition fails for $`r<r_{\mathrm{crit}}`$ where a two-temperature corona can develop.
Tangled coronal magnetic fields could reduce electron thermal conductivity. We note however that reconnection and evaporation tend to establish a rather direct magnetic path between disk and corona.
## 4 Spectral transitions
### 4.1 Predictions from the evaporation model
At maximum luminosity of an X-ray nova outburst the mass accretion rate is high and the thin disk extends inward to the last stable orbit. A corona exists above the thin disk, but the mass flow in the thin disk is so high that no hole appears. In decline from outburst the mass accretion rate decreases. When $`\dot{M}_{\mathrm{crit}}`$ is reached a hole forms at $`\sim `$340 $`R_s`$ and the soft/hard transition occurs. If the mass accretion rate varies up and down, as in high-mass X-ray binaries, we expect both hard/soft and soft/hard transitions. In Fig. 3 we show the expected behaviour schematically.
The descending branch for smaller $`r`$ indicates the possibility that an interior disk could form. We note that a gap exists between the exterior standard thin disk and an interior disk. In this gap the flow assumes the character of an ADAF with different temperature of ions and electrons, due to its high temperature and poor collisional coupling. This provides the possibility that the interior disk fed by this flow has a two temperature corona on top, different from a standard thin disk plus corona in the high state. We will discuss this in a further investigation.
### 4.2 Comparison with observations
The three persistent (high-mass) black hole X-ray sources LMC X-1, LMC X-3 and Cyg X-1 show different behaviour. LMC X-1 is always in the soft state (Schmidtke et al. 1999). LMC X-3 is in the soft state most of the time, but recurrent hard states have recently been detected (Wilms et al. 1999). Cyg X-1 spends most of its time in the hard state with occasional transitions to the soft state (see e.g. Fig. 1). This can be interpreted as being caused by different long-term mean mass transfer rates: the highest rate (scaled to the Eddington luminosity) in LMC X-1, the lowest in Cyg X-1, and an intermediate rate in LMC X-3. Transient sources show a soft/hard transition during the decay from outburst. The best studied source is the X-ray Nova Muscae 1991 (Cui et al. 1997).
The transition always occurs around $`L_X\approx 10^{37}`$ erg/s (Tanaka 1999). Our value for the critical mass accretion rate for a 6 $`M_{\odot }`$ black hole, $`10^{17.2}`$ g/s, corresponds to a standard accretion disk luminosity of about $`10^{37.2}`$ erg/s. This is in very close agreement.
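The quoted correspondence between accretion rate and luminosity follows from the one-line estimate $`L=\eta \dot{M}c^2`$; the efficiency $`\eta =0.1`$ used below is an assumed standard-disk value, not a number taken from the text:

```python
import math

c = 2.998e10               # speed of light [cm/s]
eta = 0.1                  # assumed accretion efficiency of a standard disk
Mdot_crit = 10**17.2       # critical accretion rate [g/s] from this model

L = eta * Mdot_crit * c**2 # accretion luminosity [erg/s]
print(f"log10 L = {math.log10(L):.2f}")  # close to the quoted 37.2
```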
For accretion rates below $`\dot{M}_{\mathrm{crit}}`$ the location of the inner edge of the standard thin disk derived from the evaporation model also agrees with observations (Liu et al. 1999).
At the moment of spectral transition our model predicts the inner edge near 340 Schwarzschild radii. The observed timescale for the spectral transition of a few days (Zhang et al. 1997) agrees with the time one obtains for the formation of a disk at 340 $`R_s`$ with an accretion rate $`\dot{M}_{\mathrm{crit}}`$.
But even in the low state, X-ray observations of a reflecting component indicate the existence of a disk further inward, at 10 to 25 $`R_s`$ (Gilfanov et al. 1998; Zycki et al. 1999). This might point to a non-standard interior disk as discussed above and explain why the spectral transitions in Cygnus X-1 could be well fitted by Esin et al. (1998) with a disk reaching inward to $`\sim 100R_s`$.
## 5 Conclusions
We understand the spectral transition as related to a critical mass accretion rate. For rates $`\dot{M}\ge \dot{M}_{\mathrm{crit}}`$ (the peak coronal mass flow rate) the standard disk reaches inward to the last stable orbit and the spectrum is soft. Otherwise the ADAF in the inner accretion region provides a hard spectrum. At $`\dot{M}_{\mathrm{crit}}`$ the transition between dominant advective losses further out and dominant radiative losses further in occurs. Except for the difference between the sub-virial temperature of the corona and the closer-to-virial temperature of an ADAF of the same mass flow rate, this same critical radius is predicted by the “strong ADAF proposal” (Narayan & Yi 1995). In general, however, the strong ADAF proposal results in an ADAF region larger than that which the evaporation model yields.
The transition between the two spectral states has been observed for black hole and neutron star systems, in persistent and transient sources (Tanaka & Shibazaki 1996, Campana et al. 1998). This points to similar physical accretion processes. Menou et al. (1999) already discussed accretion via an ADAF in neutron star transient sources. Our results should also be applicable there.
The relations for a 6 $`M_{\odot }`$ black hole plotted in Fig. 2 can be scaled to other masses: in units of Schwarzschild radii and Eddington accretion rates the plot is universal. The application to disks around supermassive black holes implies interesting conclusions for AGN.
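The Eddington scaling mentioned here can be made explicit with a short sketch; the efficiency $`\eta =0.1`$ and the conversion $`L_{\mathrm{Edd}}\approx 1.26\times 10^{38}(M/M_{\odot })`$ erg/s are standard assumptions, not values from the text:

```python
c = 2.998e10              # speed of light [cm/s]
eta = 0.1                 # assumed accretion efficiency
L_edd_per_Msun = 1.26e38  # Eddington luminosity per solar mass [erg/s]

M_over_Msun = 6.0
L_edd = L_edd_per_Msun * M_over_Msun
Mdot_edd = L_edd / (eta * c**2)   # Eddington accretion rate [g/s]
Mdot_crit = 10**17.2              # critical rate from this model [g/s]

print(f"Mdot_edd = {Mdot_edd:.2e} g/s")
print(f"Mdot_crit/Mdot_edd = {Mdot_crit / Mdot_edd:.3f}")  # a few per cent
```

In these units the critical rate is a few per cent of Eddington, independent of the black hole mass.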
###### Acknowledgements.
We thank Marat Gilfanov, Eugene Churazov and Michael Revnivtsev for the spectral data of Cygnus X-1.
--- no-problem/0002/astro-ph0002339.html | ar5iv | text ---
# RUNAWAY OF LINE-DRIVEN WINDS TOWARDS CRITICAL AND OVERLOADED SOLUTIONS
## 1 Introduction
Atmospheres of hot luminous stars and accretion disks in active galactic nuclei and cataclysmic variables form extensive outflows due to super-Eddington radiation fluxes in UV resonance and subordinate lines. An understanding of these winds is hampered by the pathological dependence of the driving force on the flow velocity gradient. Castor, Abbott, & Klein (1975; CAK hereafter) found that line-driven winds (hereafter LDWs) from O stars should adopt a unique, critical state which corresponds to maximum mass loss rate. The equation of motion for a 1-D, spherically symmetric, polytropic outflow subject to a Sobolev line force allows for two infinite families of so-called shallow and steep solutions. However, none of these families can provide for a global solution alone. Shallow solutions do not reach infinity, while steep solutions do not extend into the subsonic regime including the photosphere. The critical wind starts then as the fastest shallow solution and switches at the critical point in a continuous and differentiable manner to the slowest steep solution. Hence the critical point and not the sonic point determines the bottleneck in the wind. This description in principle applies equally to winds from stars and accretion disks.
A physical interpretation of the CAK critical point was given by Abbott (1980), who derived a new type of radiative-acoustic waves (hereafter Abbott waves). These waves can propagate inward, in the stellar rest frame, only from below the CAK critical point. Above the critical point, they are advected outwards. Hence, the CAK critical point serves as an information barrier, much as the sonic or Alfvén points in thermal and hydromagnetic winds. Abbott’s analysis was challenged by Owocki & Rybicki (1986) who found for a pure absorption LDW the signal speed to be the sound speed and not the much faster Abbott speed. As noted already by these authors, this should be a consequence of assuming pure line absorption, which does not allow for any radiatively modified, inward wave mode. Meanwhile there is ample evidence for Abbott waves in time-dependent wind simulations (Owocki & Puls 1999).
Shallow solutions fail to reach infinity because they cannot perform the required spherical expansion work, implying that the flow starts to decelerate. Since this usually occurs very far out in the wind, the local wind speed is much larger than the local escape speed, and the wind escapes to infinity. Thus, a simple generalization of the CAK model allowing for flow deceleration renders shallow solutions globally admissible. This raises a fundamental question of why the wind would adopt the critical solution at all, and attain the critical mass loss rate and velocity law, as proposed by CAK.
In this Letter we analyze a physical mechanism which drives shallow solutions towards the critical one, and discuss under what conditions this evolution does not terminate at the CAK solution, but continues into the realm of overloaded solutions. We find that simulations so far were affected by numerical runaway towards the critical solution, by not accounting for Abbott waves in the Courant time step.
## 2 Abbott waves
Abbott waves are readily derived by bringing the wind equations into characteristic form. We consider a 1-D planar wind of velocity $`v(z,t)`$ and density $`\rho (z,t)`$, assuming zero sound speed. The continuity and Euler equations are,
$$\frac{\partial \rho }{\partial t}+v\frac{\partial \rho }{\partial z}+\rho \frac{\partial v}{\partial z}=0,$$
(1)
$$E\equiv \frac{\partial v}{\partial t}+v\frac{\partial v}{\partial z}+g(z)-CF(z)\left(\frac{\partial v/\partial z}{\rho }\right)^\alpha =0.$$
(2)
Here, $`g(z)`$ and $`F(z)`$ are gravity and radiative flux, respectively. The CAK line force is given by $`g_\mathrm{l}=CF(z)(v^{\prime }/\rho )^\alpha `$ (with $`v^{\prime }\equiv \partial v/\partial z`$), with constant $`C`$ and exponent $`0<\alpha <1`$. The unique, stationary CAK wind, $`v_\mathrm{c}(z),\rho _\mathrm{c}(z)`$, is found by requiring a critical point at some $`z_\mathrm{c}`$. The number of solutions for $`vv^{\prime }(z)`$ changes from 2 to 1 at $`z_\mathrm{c}`$ (which is a saddle point), hence $`\partial E/\partial (vv^{\prime })|_\mathrm{c}=0`$ holds. Writing $`C`$ in terms of critical point quantities, the Euler equation becomes,
$$\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial z}+g(z)$$
$$-\alpha ^{-\alpha }(1-\alpha )^{-(1-\alpha )}\frac{F(z)}{F(z_\mathrm{c})}g(z_\mathrm{c})^{1-\alpha }(\rho _\mathrm{c}v_\mathrm{c})^\alpha \left(\frac{\partial v/\partial z}{\rho }\right)^\alpha =0.$$
(3)
Note that for stationary planar winds, $`\rho v`$ is constant. If, in addition, $`g`$ and $`F`$ are taken constant with height, and $`\rho _\mathrm{c}v_\mathrm{c}v^{\prime }/\rho `$ is replaced by $`vv^{\prime }/\dot{m}`$, with normalized mass loss rate $`\dot{m}\equiv \rho v/\rho _\mathrm{c}v_\mathrm{c}`$, one finds that $`E`$ no longer depends explicitly on $`z`$ for stationary solutions. Hence, $`vv^{\prime }`$ is independent of $`z`$, too. This implies that $`z_\mathrm{c}`$ is ill-defined, and every point of the CAK solution is a critical point. CAK removed this degeneracy by introducing gas pressure terms. Here we take a different approach and assume $`g=z/(1+z^2)`$. A situation with roughly constant radiative flux and gravity showing a maximum at finite height could be encountered above isothermal disks around compact objects (cf. Feldmeier & Shlosman 1999). The critical point is determined by the regularity condition, $`dE/dz|_\mathrm{c}=0`$, hence $`z_\mathrm{c}=1`$ and the critical point coincides with the gravity maximum. For simplicity we also choose $`\alpha =1/2`$ from now on, which is reasonably close to realistic values $`\alpha \approx 2/3`$ (Puls et al. 1999). None of our results should depend qualitatively on the assumptions made so far. The Euler equation is
$$\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial z}+g(z)-2\sqrt{g_\mathrm{c}\rho _\mathrm{c}v_\mathrm{c}}\sqrt{\frac{\partial v/\partial z}{\rho }}=0,$$
(4)
where $`g_\mathrm{c}\equiv g(z_\mathrm{c})`$. The stationary solutions for the wind acceleration are given by
$$vv^{\prime }(z)=\frac{g_\mathrm{c}}{\dot{m}}\left(1\pm \sqrt{1-\frac{\dot{m}g(z)}{g_\mathrm{c}}}\right)^2,$$
(5)
where plus and minus signs refer to steep and shallow solutions, respectively. For $`\dot{m}\le 1`$, shallow and steep solutions are globally, i.e., everywhere, defined. For $`\dot{m}>1`$, solutions are called overloaded, and become imaginary in a neighborhood of the gravity maximum. These winds carry too large a mass loss rate and eventually stagnate.
Next we put the Euler equation into quasi-linear form, which does not mean linearizing it. Differentiating $`E`$ with respect to $`z`$ (Courant & Hilbert 1962; Abbott 1980) and introducing $`f\equiv \partial v/\partial z`$, eqs. (1, 4) become,
$`\left[{\displaystyle \frac{\partial }{\partial t}}+v{\displaystyle \frac{\partial }{\partial z}}\right]\rho +\rho f=0,`$ (6)
$`\left[{\displaystyle \frac{\partial }{\partial t}}+(v+v_\mathrm{A}){\displaystyle \frac{\partial }{\partial z}}\right]{\displaystyle \frac{f}{\rho }}+{\displaystyle \frac{1}{\rho }}{\displaystyle \frac{\partial g}{\partial z}}=0,`$ (7)
with inward Abbott speed in the rest frame, $`v_\mathrm{A}\equiv -\sqrt{g_\mathrm{c}v/\dot{m}v^{\prime }}`$. In the WKB approximation, individual spatial and temporal variations are much larger than the inhomogeneous term $`g^{\prime }/\rho `$ in eq. (7), and the latter can be neglected. Consequently, $`v^{\prime }/\rho `$ is a Riemann invariant propagating at characteristic speed $`v+v_\mathrm{A}`$. Perturbations of $`v^{\prime }/\rho `$ correspond to the amplitude of a wave propagating at phase speed $`v+v_\mathrm{A}`$. Note that $`v^{\prime }/\rho `$ is proportional to the Sobolev line optical depth, indicating that this wave is a true radiative mode.
The second characteristic is determined by the continuity equation (6). In the advection operator in square brackets, $`v`$ has to be read as $`v+0`$ in the zero-sound speed limit. This outwards propagating invariant corresponds to a sound wave, with amplitude $`\rho `$ scaling with gas pressure.
At the critical point, $`\dot{m}=1`$ and $`vv^{\prime }(z_\mathrm{c})=g(z_\mathrm{c})`$ after eq. (5), hence $`v_{\mathrm{Ac}}=-v_\mathrm{c}`$ (where we introduced $`v_{\mathrm{Ac}}\equiv v_\mathrm{A}(z_\mathrm{c})`$). Abbott waves stagnate at the critical point, in analogy with sound waves at the sonic point. For shallow solutions, $`\dot{m}<1`$ and $`vv^{\prime }<v_\mathrm{c}v_\mathrm{c}^{\prime }`$ from eq. (5), hence $`v+v_\mathrm{A}<0`$. Shallow LDW solutions are therefore the subcritical analog of solar wind breezes.
Because in the rest frame, the inward Abbott mode can propagate at larger absolute speeds than the outward sound mode, Abbott waves can determine the Courant time step in time-explicit hydrodynamic simulations. Violating the Courant step results in numerical instability. Despite this fact, Abbott waves along shallow solutions were never considered in the literature.
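The sign of the inward characteristic speed can be checked numerically for this model. This is a minimal sketch assuming, as in the text, $`g(z)=z/(1+z^2)`$ and $`\alpha =1/2`$, and taking the inward Abbott speed as $`v_\mathrm{A}=-(g_\mathrm{c}v/\dot{m}v^{\prime })^{1/2}`$, so that $`v+v_\mathrm{A}<0`$ is equivalent to $`\dot{m}vv^{\prime }<g_\mathrm{c}`$:

```python
import numpy as np

def g(z):                      # gravity law with a maximum at z = 1
    return z / (1.0 + z**2)

g_c = g(1.0)                   # critical-point gravity, 1/2

def vvprime(z, mdot, branch=-1):
    """v*v' from the stationary solutions; branch=-1 shallow, +1 steep."""
    s = np.sqrt(1.0 - mdot * g(z) / g_c)
    return (g_c / mdot) * (1.0 + branch * s)**2

z = np.linspace(0.2, 3.0, 300)
w = vvprime(z, mdot=0.8)               # a shallow (sub-critical) wind
inward = bool((0.8 * w < g_c).all())   # Abbott waves move inward everywhere
stagnate = bool(np.isclose(vvprime(1.0, 1.0), g_c))  # v + v_A = 0 at z_c
print(inward, stagnate)
```

The same check with the steep branch (`branch=+1`) gives $`\dot{m}vv^{\prime }>g_\mathrm{c}`$, i.e. outward propagation.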
## 3 Wind convergence towards the critical solution
We turn our attention to a physical mechanism which can drive LDWs away from shallow solutions, and towards the critical one. Starting from an arbitrary shallow solution as initial condition, we explicitly introduce perturbations at some fixed location in the wind and study their evolution. In order to keep unperturbed shallow solutions stable in numerical simulations, we fix one outer boundary condition, according to inward propagating Abbott waves. Either a constant mass loss rate at the outer boundary or non-reflecting boundary conditions (Hedstrom 1977) serve this aim. At the inner, subcritical boundary, we also fix one boundary condition, according to incoming sound waves. Non-reflecting boundary conditions and $`\rho =const`$ give similar results.
Wind convergence towards the critical solution is then triggered by negative flow velocity gradients. Allowing for $`v^{\prime }<0`$ turns the inward Abbott mode of phase speed $`v+v_\mathrm{A}<0`$ in the rest frame into an outwards propagating mode. This is readily seen for a line force which is zero for negative $`v^{\prime }`$, i.e., when all photons are absorbed at a resonance location between the photosphere and the wind point. The Euler equation simplifies to that for an ordinary gas, with characteristic speed $`v-0>0`$ in the zero sound speed limit. At the other extreme, for a purely local line force where the unattenuated stellar or disk radiation field reaches the wind point, $`g_\mathrm{l}\propto \sqrt{|v^{\prime }|}`$. Here the Abbott phase speed is found to be $`v+v_\mathrm{A}`$, with $`v_\mathrm{A}=+\sqrt{g_\mathrm{c}v/\dot{m}|v^{\prime }|}`$ for $`v^{\prime }<0`$.
Consider then a sawtooth-like velocity perturbation (sinusoidal perturbations lead to similar results). Slopes $`v^{\prime }>0`$ propagate inwards, slopes $`v^{\prime }<0`$ propagate outwards. Hence, as a kinematical consequence, a sawtooth which is initially symmetric with respect to the underlying stationary velocity law evolves towards larger velocities. This is demonstrated in Figure 1 where, in the course of time, a periodic sawtooth perturbation is introduced at $`z=2`$. The line force is assumed to be $`\propto \sqrt{|v^{\prime }|}`$, and the initial shallow solution has $`\dot{m}=0.8`$. The figure shows 2$`\frac{1}{2}`$ perturbation cycles. For upward pointing kinks the slopes propagate apart and a flat velocity law develops between them. At each time step $`dt`$, a new increment $`dv=4dt\delta v/T`$ ($`\delta v`$ and $`T`$ being the amplitude and period of the sawtooth) is added at $`z=2`$, hence the flattening velocity law does not show up in region A of Figure 1. Overall, the wind speed at the perturbation site evolves towards larger values during these phases. On the other hand, for downward pointing kinks of the sawtooth, $`-\delta v`$, the two approaching slopes merge, and the wind speed evolves back towards its unperturbed value after each decrement $`dv=4dt\delta v/T`$. The wind velocity hardly evolves during these phases, cf. region B of Figure 1. Over a full perturbation cycle, the wind speed clearly increases.
Essentially any perturbation which introduces negative $`v^{\prime }`$ will accelerate the wind. The amplitude of the perturbation is rather irrelevant since, with decreasing perturbation wavelength, negative $`v^{\prime }`$ occurs at ever smaller amplitudes. However, in more realistic winds, dissipative effects may smear out short-scale perturbations before they can grow. Details of the physical mechanism will be discussed elsewhere.
If the perturbation lies downstream from the critical point, the wind converges to the critical solution. Namely, as soon as the perturbation site comes to lie on the supercritical part of the CAK solution during its evolution, positive velocity slopes propagate outwards, and combine with negative slopes to a full wave train. No information is propagated upstream. This unconditional stability of the outer CAK solution is shown in the lower panel of Fig. 1.
## 4 Wind convergence towards overloaded solutions
Wind runaway towards larger speeds as caused by perturbations introduced upstream from the critical point does not terminate at the critical CAK solution. For low-lying perturbations, communication with the wind base is still possible once the subcritical branch of the CAK solution is reached. The wind gets further accelerated into the domain of mass-overloaded solutions (where $`vv^{\prime }>v_\mathrm{c}v_\mathrm{c}^{\prime }`$ and hence $`v>v_\mathrm{c}`$ for $`z<z_\mathrm{c}`$ according to eq. 5), until a generalized critical point develops, which prevents inward propagation of Abbott waves and adjustment of the mass loss rate. Such generalized critical points are given by ‘termination’ points, $`z_\mathrm{t}`$, of overloaded solutions, where the velocity becomes imaginary. At $`z_\mathrm{t}`$, the number of real solutions $`vv^{\prime }(z)`$ changes from 2 (shallow and steep) to 0. Hence, termination points are defined by the same condition as the CAK critical point (at which the number of solutions changes from 2 via 1 to 2), $`\partial E/\partial (vv^{\prime })|_\mathrm{t}=0`$. From the stationary version of eq. (4), $`v_{\mathrm{At}}=-v_\mathrm{t}`$, hence Abbott waves stagnate at termination points, and the latter are generalized critical points.
The fact that perturbations with negative $`v^{\prime }`$ accelerate the wind either to the critical or an overloaded state can be cast into a conjecture reminiscent of black holes (Penrose 1965): a LDW avoids a ‘naked’ base, and encloses it with a critical surface.
Since to each $`z_\mathrm{t}`$ there corresponds a unique, supercritical mass loss rate, the latter is determined by the perturbation location alone. Using $`v_{\mathrm{At}}=-v_\mathrm{t}`$, one finds $`\dot{m}_\mathrm{t}=g_\mathrm{c}/g_\mathrm{t}>1`$ for a planar wind with constant radiative flux.
At a termination point, $`vv^{}`$ jumps to the decelerating branch, $`vv^{}<0`$. Beyond a well-defined location above the gravity maximum, the super-CAK mass loss rate can again be lifted by the line force, and $`vv^{}`$ jumps back to the accelerating branch. Hence, two stationary kinks occur in the velocity law. Figure 2 shows a hydrodynamic simulation of the evolution towards an overloaded solution. Sawtooth-type velocity perturbations were introduced at $`z=0.8`$. Correspondingly, $`\dot{m}=1.025`$ for the overloaded solution, using $`g=z/(1+z^2)`$.
Future work has to clarify whether LDWs show deep-seated perturbations. It seems unlikely, however, that they would occur at a unique location. Hence, overloaded winds should be non-stationary and show a range of supercritical mass loss rates.
More fundamentally, time-dependent overloaded solutions occur already for single, unique perturbation sites, once the latter lie below a certain height. For the present wind model, this is at $`z\simeq 0.66`$. The overloading is then so severe and the decelerating region so broad that negative wind speeds result (cf. Poe, Owocki, & Castor 1990). The corresponding mass loss rates are still only a few percent larger than the CAK value. The gas which falls back towards the photosphere collides with outflowing gas, and a time-dependent situation develops. Within each perturbation period, a shock forms in the velocity law, supplemented by a dense shell. These shocks and shells propagate outwards (Feldmeier & Shlosman 2000).
Although strong perturbations introducing negative velocity gradients can appear already in O star winds, accretion disk winds are the prime suspects. The reason for this is that accretion processes and their radiation fields in cataclysmic variables and galactic nuclei are intrinsically variable on a range of timescales (Frank, King, & Raine 1992), and that disk LDWs are driven by a combination of uncorrelated, local and central radiation fluxes.
## 5 Summary
We find that shallow solutions to line-driven winds are subcritical with respect to Abbott waves (sub-abbottic). These waves cause shallow solutions to evolve towards larger speeds and mass loss rates because of the asymmetry of the line force with regard to positive and negative velocity gradients and because perturbations with opposite signs of $`dv/dz`$ propagate in opposite directions. Steep velocity slopes propagate towards the wind base, steepen the inner wind and lift it to higher mass loss rates. In the presence of enduring wind perturbations, this proceeds until a critical point forms and Abbott waves can no longer penetrate inwards.
The resulting solution does not necessarily correspond to the CAK wind. For perturbations which originate below the critical point, the developing Abbott wave barrier is found to be the termination point of a mass-overloaded solution. The velocity law acquires a kink at the termination point, where the wind starts to decelerate. Whether the wind converges to a critical or overloaded solution depends entirely on the location of perturbations, and not, e.g., on boundary conditions at the wind base.
If Abbott waves are not accounted for in the Courant time step of hydrodynamic simulations, we find that numerical runaway can drive the solution towards the critical CAK wind. A detailed discussion of this will be given elsewhere.
Future work has to clarify whether and where perturbations causing local flow deceleration, $`dv/dz<0`$, can occur in LDWs. Overloaded winds may be detected observationally. While their mass loss rates should still be close to CAK values, broad regions of decelerating flow could be identified in P Cygni line profiles. Furthermore, shocks occurring in overloaded solutions with infalling gas may contribute to the X-ray emission from LDWs, besides shocks from the line-driven instability (Lucy 1982; Owocki et al. 1988). Note that the present wind runaway occurs already in the lowest order Sobolev approximation, and is therefore unrelated to the line-driven instability, which depends on velocity curvature terms (Feldmeier 1998).
###### Acknowledgements.
We thank R. Buchler, J. Drew, R. Kudritzki, C. Norman, S. Owocki, and J. Puls for intense blackboard discussions, and the referee, Stan Owocki, for suggestions improving the manuscript. This work was supported in part by PPA/G/S/1997/00285, NAG 5-3841, WKU-522762-98-6 and HST GO-08123.01-97A.
# Estimation of initial conditions from a scalar time series
## Abstract
We introduce a method to estimate the initial conditions of a multivariable dynamical system from a scalar signal. The method is based on a modified multidimensional Newton-Raphson method which includes the time evolution of the system. The method can estimate initial conditions of periodic and chaotic systems, and the required length of the scalar signal is very small. Also, the method works even when the conditional Lyapunov exponent is positive. An important application of our method is that synchronization of two chaotic systems using a scalar signal becomes trivial and instantaneous.
A trajectory that a given dynamical system traverses in its state space depends on the particular set of initial conditions with which it starts. In particular, the state of a chaotic system at a later time is exponentially sensitive to changes in its initial state. This defining feature of a chaotic system leads to a complex behaviour in state space that appears random yet is deterministic, which means that an initial state uniquely fixes the future course of its evolution. Though there are several invariant measures of a chaotic system which are not sensitive to the initial conditions, the exact trajectory crucially depends on the initial state and hence is difficult to reproduce due to sensitivity to initial conditions.
In light of these facts, it is interesting and important to ask whether the complete set of initial conditions of a given multivariable dynamical system can be estimated from a given scalar time series for a single state space variable. We show that this question can be answered in the affirmative and present a novel and simple method to estimate the initial conditions. Our method is based on a modified multidimensional Newton-Raphson method where we include the time evolution of the system. The length of the time series required for the calculations is typically very small.
Our results raise some interesting issues regarding the information content of a time series. In standard embedding techniques, a vector space is constructed from successive iterates of a single variable and a trajectory is reconstructed in this space. When embedding, it is crucial to choose an appropriate time delay so that the successive iterates are well resolved and contain qualitatively different information. In our method we use a very small time series whose total duration is typically much less than the standard delay time in embedding techniques. It is interesting that we can recover the initial conditions and hence the trajectory from such a short stretch of time series.
An important application of our method is in the problem of synchronization of two chaotic systems. This problem itself has attracted wide attention in recent times due to its potential application to secure communication and parameter estimation. Our estimation of the initial conditions makes the problem of synchronization almost trivial. We also find that our method works for most of the cases where other methods fail.
Let us consider an autonomous dynamical system given by,
$$\dot{𝐱}=𝐅(𝐱),$$
(1)
where $`𝐱=(x_1,x_2,\mathrm{},x_d)`$ is a $`d`$-dimensional state vector whose evolution is governed by the function $`𝐅=(F_1,F_2,\mathrm{},F_d)`$. Given an initial state vector $`𝐱(0)`$ at time $`t=0`$, the time evolution $`𝐱(t)`$ is uniquely determined by Eq. (1). Now let us assume that only one component of the state vector is known to us and we take it to be $`x_1(t)`$ without loss of generality. The problem that we address is to obtain the initial state vector $`𝐱(0)`$ from the knowledge of the scalar signal $`x_1(t)`$.
Let $`𝐲(0)`$ denote a random initial state vector and $`𝐲(t)`$ its time evolution obtained from Eq. (1). Let $`𝐰(t)`$ denote the difference
$$𝐰(t)=𝐲(t)-𝐱(t).$$
(2)
We look for the solution of the equation
$$𝐰(t)=0.$$
(3)
Noting that the initial state vectors $`𝐲(0)`$ and $`𝐱(0)`$ uniquely determine the difference $`𝐰(t)`$, one of the solutions of Eq. (3) is $`𝐲(0)-𝐱(0)=0`$ and this is the solution that we are searching for.
We now introduce the notation $`𝐰^n=𝐰^n(𝐲^0,𝐱^0)=𝐰(n\mathrm{\Delta }t)`$, where $`\mathrm{\Delta }t`$ is a small time interval. Similarly, $`𝐲^n=𝐲(n\mathrm{\Delta }t)`$ and $`𝐱^n=𝐱(n\mathrm{\Delta }t)`$. With this notation condition (3) can be written as $`𝐰^n=0`$.
Our approach to the solution of Eq. (3) is a modified Newton-Raphson method which includes the time evolution of the system.
Let us first consider $`𝐰^1`$. We have
$`0`$ $`=`$ $`𝐰^1(𝐱^0,𝐱^0),`$ (4)
$`=`$ $`𝐰^1(𝐲^0+\delta 𝐲^0,𝐱^0),`$ (5)
$`=`$ $`𝐰^1(𝐲^0,𝐱^0)+(\delta 𝐲^0\cdot \mathbf{\nabla }_{𝐲^0})𝐰^1(𝐲^0,𝐱^0)+𝒪((\delta 𝐲^0)^2),`$ (6)
where $`\delta 𝐲^0=𝐱^0-𝐲^0=-𝐰^0`$ and the last step is a Taylor series expansion in $`\delta 𝐲^0`$. For small $`\mathrm{\Delta }t`$, we can write
$$𝐰^1(𝐲^0,𝐱^0)=𝐰^0+\mathrm{\Delta }t[𝐅(𝐲^0)-𝐅(𝐱^0)]+𝒪((\mathrm{\Delta }t)^2).$$
(7)
Substituting Eq. (7) in Eq. (6) and neglecting higher order terms, we get,
$$𝐰^1(𝐲^0,𝐱^0)=𝐰^0+\mathrm{\Delta }t(𝐰^0\cdot \mathbf{\nabla }_{𝐲^0})𝐅(𝐲^0).$$
(8)
It is convenient to write the above equation in a matrix form as
$`W^1`$ $`=`$ $`(I+\mathrm{\Delta }tJ^0)W^0,`$ (9)
$`=`$ $`A^0W^0,`$ (10)
where $`W^n`$ is the column matrix corresponding to the vector $`𝐰^n`$, $`I`$ is the identity matrix, $`A^n=I+\mathrm{\Delta }tJ^n`$, and the elements of the Jacobian matrix $`J^n`$ are given by
$$J_{ij}^n=\frac{\partial F_i(𝐲^n)}{\partial y_j^n}.$$
(11)
Next we consider $`𝐰^2`$ or $`W^2`$. Proceeding as above, we get (see Eq. (10)),
$`W^2`$ $`=`$ $`(I+\mathrm{\Delta }tJ^1)W^1`$ (12)
$`=`$ $`(I+\mathrm{\Delta }tJ^1)(I+\mathrm{\Delta }tJ^0)W^0`$ (13)
$`=`$ $`A^1A^0W^0.`$ (14)
Similarly, the equation for $`W^n`$ is
$`W^n`$ $`=`$ $`(I+\mathrm{\Delta }tJ^{n-1})(I+\mathrm{\Delta }tJ^{n-2})\mathrm{\cdots }(I+\mathrm{\Delta }tJ^0)W^0`$ (15)
$`=`$ $`A^{n-1}A^{n-2}\mathrm{\cdots }A^0W^0.`$ (16)
We now concentrate on the first component of the signal whose time series is assumed to be known. For a $`d`$-dimensional system we need $`d-1`$ equations to determine the initial state vector $`𝐱^0`$. Eqs. (10), (14) and (16) give us the required relations.
$`W_1^1`$ $`=`$ $`{\displaystyle \sum _{i=1}^{d}}A_{1i}^0W_i^0,`$ (17)
$`W_1^2`$ $`=`$ $`{\displaystyle \sum _{i,j=1}^{d}}A_{1i}^1A_{ij}^0W_j^0,`$ (18)
$`\mathrm{\vdots }`$ (19)
$`W_1^{d-1}`$ $`=`$ $`{\displaystyle \sum _{i,\mathrm{\cdots },l,m=1}^{d}}A_{1i}^{d-2}\mathrm{\cdots }A_{lm}^0W_m^0,`$ (20)
These are $`d-1`$ simultaneous equations for $`W^0`$.
The numerical procedure is as follows. We set the initial state of system (1) to a random initial guess vector $`\left(𝐲^0\right)_{old}`$ with $`\left(y_1^0\right)_{old}=x_1^0`$ and evolve it using Eq. (1). Using this vector $`𝐲(t)`$ we write down $`d-1`$ simultaneous equations (Eqs. (20)) which can be solved for the $`d-1`$ unknown components of $`𝐰^0=-\delta 𝐲^0`$. Also, $`\delta y_1^0=0`$. Thus the initial guess vector can be improved by
$$\left(𝐲^0\right)_{new}=\left(𝐲^0\right)_{old}+\delta 𝐲^0.$$
(21)
This sets up an iterative scheme giving us better and better estimates of the initial vector which converge to $`𝐱^0`$.
We note that, as in the Newton-Raphson method, the choice of the initial guess vector can be very important. In some cases, the iterative procedure of Eq. (21) may not converge or may converge to a wrong root. In such cases, a different choice of initial guess vector can be useful.
We further note the similarity of our method to the so-called method of variational equations in analytical dynamics. The method of variational equations can be applied to a known Hamiltonian system to determine an unknown trajectory neighbouring an already known one. There, the method requires a complete particular solution of a known set of Hamiltonian equations of motion. In contrast, we have used our method for dissipative chaotic systems, for which an analytical solution of the equations of motion cannot be known. Further, our method requires only one component of a complete trajectory to be sampled. This has important consequences in the problem of synchronization using a scalar signal.
We now demonstrate our method of estimating the initial state. As our first example we discuss the Rössler system given by
$`\dot{x}_1`$ $`=`$ $`-x_2-x_3,`$ (22)
$`\dot{x}_2`$ $`=`$ $`x_1+ax_2,`$ (23)
$`\dot{x}_3`$ $`=`$ $`b+x_3(x_1-c).`$ (24)
First, we consider a case when the time series for $`x_1`$ is given and we want to estimate $`(x_2^0,x_3^0)`$. We chose the parameters $`(a,b,c)`$ such that the system is in the chaotic regime and the initial state $`𝐱^0`$ is on the chaotic attractor. We start with an arbitrary initial state $`𝐲^0=(y_1^0,y_2^0,y_3^0)`$ with $`y_1^0=x_1^0`$. From Eqs. (20) we get a pair of simultaneous equations as,
$`w_1^1`$ $`=y_1^1-x_1^1=`$ $`\mathrm{\Delta }t\delta y_2^0+\mathrm{\Delta }t\delta y_3^0`$ (25)
$`w_1^2`$ $`=y_1^2-x_1^2=`$ $`\left(2\mathrm{\Delta }t+a(\mathrm{\Delta }t)^2\right)\delta y_2^0`$ (27)
$`+\left(2\mathrm{\Delta }t+(y_1^0-c)(\mathrm{\Delta }t)^2\right)\delta y_3^0`$
which can be solved for $`(\delta y_2^0,\delta y_3^0)`$. With $`\delta y_1^0=0`$ we use these in an iterative manner (Eq. (21)) to obtain the correct initial conditions.
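The iterative scheme can be sketched in a few lines of code. The sketch below, with invented parameter values, initial state and starting guess, replaces the continuous flow by a simple explicit Euler map with step $`\mathrm{\Delta }t`$ (consistent with the first-order expansions used in the derivation), builds the matrices $`A^n=I+\mathrm{\Delta }tJ^n`$, and solves the resulting pair of equations for the unknown components of $`𝐰^0`$; it is an illustration of the method, not the authors’ code.

```python
import numpy as np

# Roessler parameters (chaotic regime) and sampling step; illustrative
# values, not necessarily those used for Table 1.
A_PAR, B_PAR, C_PAR = 0.2, 0.2, 5.7
DT = 0.01

def F(x):
    # Vector field of Eqs. (22)-(24).
    return np.array([-x[1] - x[2],
                     x[0] + A_PAR * x[1],
                     B_PAR + x[2] * (x[0] - C_PAR)])

def jac(x):
    # Jacobian J_ij = dF_i/dx_j, Eq. (11).
    return np.array([[0.0, -1.0, -1.0],
                     [1.0, A_PAR, 0.0],
                     [x[2], 0.0, x[0] - C_PAR]])

def euler_step(x):
    # Discrete stand-in for the flow over one interval Delta t.
    return x + DT * F(x)

def estimate_initial_state(x1_series, guess23, n_iter=10):
    """Estimate (x2^0, x3^0) from three samples of x1, via Eqs. (17)-(21)."""
    y = np.array([x1_series[0], guess23[0], guess23[1]], dtype=float)
    for _ in range(n_iter):
        A0 = np.eye(3) + DT * jac(y)      # A^0 = I + Dt J^0
        y1 = euler_step(y)
        A1 = np.eye(3) + DT * jac(y1)     # A^1 = I + Dt J^1
        y2 = euler_step(y1)
        # Residuals w_1^n = y_1^n - x_1^n in the observed component.
        rhs = np.array([y1[0] - x1_series[1], y2[0] - x1_series[2]])
        B = A1 @ A0                       # W^2 = A^1 A^0 W^0, Eq. (14)
        M = np.array([[A0[0, 1], A0[0, 2]],
                      [B[0, 1], B[0, 2]]])
        w0 = np.linalg.solve(M, rhs)      # unknown components (w_2^0, w_3^0)
        y[1] -= w0[0]                     # delta y^0 = -w^0, Eq. (21);
        y[2] -= w0[1]                     # y_1^0 stays pinned to the data.
    return y[1], y[2]
```

Feeding the routine three consecutive samples of $`x_1`$ generated from a known state recovers the remaining two components of the initial state; in this simplified discrete setting the residual equations happen to be linear in the unknowns, so the correction converges almost immediately, while for other observed components or systems several iterations are needed, as in Table 1.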
Table 1 shows successively corrected $`(y_2^0,y_3^0)`$ obtained using the iterative process as discussed. These are the successive estimates for $`(x_2^0,x_3^0)`$. Let $`e_i=|y_i^0-x_i^0|`$ denote the absolute error in the estimation of $`x_i^0`$. In Fig. 1(a) we plot a graph corresponding to Table 1 showing errors $`e_2`$ and $`e_3`$ (on logarithmic scale) plotted against the number of iterations of our method (Eq. (21)). From Table 1 and Fig. 1(a) we see that the successive estimates converge to the correct values of $`(x_2^0,x_3^0)`$. Using only two data points in the given time series $`x_1(t)`$, we can thus readily estimate the full initial state $`𝐱^0`$. We also note that the rate of convergence is very good. In about 8 to 10 iterates we obtain the initial values $`(x_2^0,x_3^0)`$ to within computer accuracy. If we write the deviations of the successive iterates from the correct values in the form
$$(e_i)_n=\left|\left(y_i^0\right)_n-x_i^0\right|\propto e^{-\alpha n},$$
(28)
where $`n`$ is the number of iterations, then the value of the parameter $`\alpha `$ is found to be $`2.01`$ for $`e_2`$ and $`2.02`$ for $`e_3`$. This is consistent with the fact that the Newton-Raphson method has a quadratic convergence.
We note that the largest Lyapunov exponent for the subsystem $`(y_2^0,y_3^0)`$ (conditional or subsystem Lyapunov exponent) is positive. The success of our method does not depend on whether this Lyapunov exponent is positive or negative. This is important for synchronization of chaotic signals, as will be discussed afterwards.
We next present cases where time series for the variables $`x_2`$ and $`x_3`$ of the Rössler system are given. The procedure is similar to the case of time series for $`x_1`$ as discussed above. Fig. 1 (b) shows the errors $`e_1`$ and $`e_3`$, when time series for $`x_2`$ is given, plotted against the number of iterations. The parameter $`\alpha `$ (Eq. (28)) is $`1.97`$ for $`e_1`$ and $`1.95`$ for $`e_3`$. This again indicates a quadratic convergence. Similarly, Fig. 1 (c) shows the quantities $`e_1`$ and $`e_2`$ when time series for $`x_3`$ is given, as a function of the number of iterations. The parameter $`\alpha `$ (Eq. (28)) is $`1.28`$ for $`e_1`$ and 1.30 for $`e_2`$, which shows a convergence slower than quadratic. We note that the largest subsystem Lyapunov exponent is negative when time series for $`x_2`$ is given and is positive when time series for $`x_3`$ is given.
As our next example we consider Chua’s circuit, which in its dimensionless form is given by
$`\dot{x}_1`$ $`=`$ $`\alpha (x_2-x_1-f(x_1)),`$ (29)
$`\dot{x}_2`$ $`=`$ $`x_1-x_2+x_3,`$ (30)
$`\dot{x}_3`$ $`=`$ $`-\beta x_2,`$ (31)
$`f(x_1)`$ $`=`$ $`bx_1+{\displaystyle \frac{1}{2}}(a-b)\left[|x_1-1|-|x_1+1|\right].`$ (32)
We chose the parameters $`a,b,\alpha `$ and $`\beta `$ such that the attractor is a limit cycle. The initial state $`𝐱^0`$ is chosen in the basin of attraction of this limit cycle. Fig. 2 shows the errors $`e_2`$ and $`e_3`$ when a time series for the variable $`x_1`$ is given as a function of the number of iterations of our method. The parameter $`\alpha `$ (Eq. (28)) in this case takes values $`1.29`$ for $`e_2`$ and $`1.25`$ for $`e_3`$ showing a slower than quadratic convergence.
We have also applied our method for the cases when time series for the variables $`x_2`$ and $`x_3`$ are given and also for cases when parameters are such that the attractor is chaotic. In all the cases we are able to estimate the full initial state vector.
We have successfully applied our method to estimate the initial state vector using a given scalar time series for many other dynamical systems as well. These include the Lorenz system in its periodic, chaotic or intermittent regimes, the disk dynamo system modelling a periodic reversal of the earth’s magnetic field, a 3-d plasma system formed by three-wave resonant coupling equations, and a four-dimensional phase converter circuit.
Now we will discuss an important application of our estimation method in the problem of synchronization of two identical chaotic systems coupled unidirectionally by a scalar signal. Let us suppose that Eq. (1) describes a chaotic system and let us consider a replica of it given by $`\dot{𝐲}=𝐅(𝐲)`$. Without loss of generality, let us further assume that a scalar output signal $`x_1(t)`$ is given. The aim is to synchronize the vector $`𝐲(t)`$ with $`𝐱(t)`$ using this scalar signal. Our method of estimating initial conditions makes this procedure trivial. Using the scalar signal we estimate the initial vector $`𝐱^0`$ and set $`𝐲^0=𝐱^0`$. This clearly leads to an instantaneous synchronization of the two trajectories. As we have demonstrated in the case of the Rössler system, our method works even when the largest conditional Lyapunov exponent is positive, which is the case where other methods of synchronization are known to fail.
To summarise, we have introduced a novel yet simple method to estimate initial conditions of a multivariable dynamical system from a given scalar signal. Our method is based on a multidimensional Newton-Raphson method where we include the time evolution of the system. The method gives a reasonably fast convergence to the correct initial state. The required length of the time series is very small. The method works even when the largest conditional Lyapunov exponent is positive. An important consequence of the method is that the problem of synchronization of identical chaotic systems using scalar signal becomes trivial since evolution of two such systems can then be started from identical initial states.
# Dithering Strategies for Efficient Self-Calibration of Imaging Arrays
## 1 Introduction
In order to achieve their required performance, many observing systems must observe with sensitivities near their confusion limits. Many instruments are capable of reaching these limits in crowded stellar fields such as the Galactic center. Future instruments such as the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on the Space Infrared Telescope Facility (SIRTF) and those planned for the Next Generation Space Telescope (NGST) will be able to reach limits in which the confusion of extragalactic sources becomes significant. In general, the measurement noise is determined by both the statistical fluctuations of the photon flux and uncertainties in detector gain and offset. Any successful calibration procedure must determine these detector parameters sufficiently accurately so that their uncertainties make small contributions to the measurement errors compared to those of the background fluctuations. If the science done with the instrument requires substantial spatial or temporal modeling, calibration requirements become more demanding, ultimately requiring similar integration time for observation and calibration as in the case of the COBE FIRAS instrument (Mather et al. 1994; Fixsen et al. 1994). Additionally, in such cases robust error estimators are often needed. A common method to determine the instrument calibration is to look at known calibration scenes (e.g. a dark shutter, an illuminated screen, or a blank region of sky) of different brightnesses to deduce gain and offset of each detector pixel. This requires a well characterized calibration source and often a change in instrument mode to carry out the measurement. This procedure may introduce systematic errors relating to the extrapolations from the time and conditions of the calibration observations to the time and conditions of the sky observations and from the intensity (and assumed flatness) of the calibration source to the intensity of the observed sky.
A different approach is to use the measurements of the sky alone to extract the calibration data for the system. By using the sky observations for calibration, the systematic errors introduced by applying a calibration derived from a distinctly different data set are eliminated. Such methods require a set of dithered images, where a single sky location is imaged on many different detector pixels.
Typical CCD and IR array data reduction procedures for a set of dithered images make use of a known or measured dark frame ($`F^p`$) and derive the flat field ($`G^p`$) through taking the weighted average or median value of all data ($`D^i`$) observed by each detector pixel $`p`$ ($`i\in p`$) in a stack of dithered images (e.g. Tyson 1986, Tyson & Seitzer 1988, Joyce 1992, Gardner 1995). The least squares solution of
$$𝒟^i=G^pS^0$$
(1)
where $`𝒟^i=D^i-F^p`$ ($`i\in p`$) and $`S^0`$ is the perfectly flat sky intensity, for $`G^p`$, the flat field, is
$$G^p=\frac{{\displaystyle \sum _{i\in p}}𝒟^iW_i}{{\displaystyle \sum _{i\in p}}W_i}\frac{1}{S^0}$$
(2)
which is simply the weighted average of the data collected by each detector pixel normalized by the constant sky intensity (to be determined later through the absolute calibration of the data). The weights, $`W_i`$, are normally determined by the inverse variance of the data, but may also be set to zero to exclude sources above the background level. The use of the median, instead of the weighted average, also rejects the outliers arising from the observations of real sources instead of the flat background, $`S^0`$, and formally corresponds to a minimization of the mean absolute deviation rather than a least squares procedure. In either form, this method requires observations of relatively empty fields where variations in the background sky level are not larger than the faintest signal that is sought. Thus, throughout this paper we refer to such procedures as “flat sky” techniques. As instrumentation improves and telescope sensitivity increases, this condition is becoming harder to fulfill. In fields at low Galactic latitude, stellar and nebular confusion can be unavoidable, and at high latitude deep imaging (particularly in the infrared) is expected to reach the extragalactic confusion limit. In such cases, because of the complex background, and in other cases where external influences (e.g. moonlight, zodiacal light) create a sky background with a gradient, the flat sky approach does not work and a more comprehensive approach must be used.
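A minimal numerical sketch of this flat sky procedure (with invented gains, offsets, sky level, and source fluxes) is the following: a stack of dithered 1-D frames over a uniform background is built, a bright source contaminates a minority of the frames at each pixel, and the flat field is recovered via Eq. (2) with the median standing in for the weighted average.

```python
import numpy as np

S0 = 100.0                                             # uniform sky level
G_true = np.array([0.9, 0.95, 1.0, 1.05, 1.1, 1.15])   # flat field
F_true = np.array([2.0, 0.0, 1.0, 3.0, 2.0, 0.0])      # dark frame
P, M = len(G_true), 7                                  # pixels, frames

# Dithered stack over a flat background; in frame m a single bright
# source lands on detector pixel m % P, producing an outlier that the
# median must reject.
D = np.tile(G_true * S0 + F_true, (M, 1))
for m in range(M):
    p = m % P
    D[m, p] += 500.0 * G_true[p]

# Eq. (2) with the median in place of the weighted average: subtract
# the known dark frame, take the median over the stack, and normalize
# by the sky intensity S^0.
G_est = np.median(D - F_true, axis=0) / S0
```

Because each pixel is contaminated in fewer than half of the frames, the median returns the clean background value and the flat field is recovered exactly; with real data, noise and source crowding make this rejection only approximate.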
Such an approach has been presented by Fixsen et al. (2000) who describe the general least squares solution for deriving the sky intensity $`S^\alpha `$ at each pixel $`\alpha `$, in addition to the detector gain (or flat field) $`G^p`$ and offset (or dark current + bias) $`F^p`$ at each detector pixel $`p`$, where each measurement, $`D^i`$, is represented by
$$D^i=G^pS^\alpha +F^p.$$
(3)
(Throughout this paper we refer to the procedure described by Fixsen et al. (2000) as the “least squares” procedure.) They show how the problem of inverting large matrices can be circumvented, and how the formulation of the problem allows for explicit tracking of the uncertainties and correlations in the derived $`G^p`$, $`F^p`$, and $`S^\alpha `$. Fixsen et al. also show that although the formal size of the matrices used in the least squares solution increases as $`P^2`$, where $`P`$ is the number of pixels in the detector array, the number of non-zero elements in these matrices increases only as $`M\times P`$, where $`M`$ is the number of images in the data set. In practice, the portion of the least squares solution for the detector gains and offsets is calculated first, and then the data are corrected to produce images of the sky ($`S^\alpha `$) that are registered and mapped into a final single image. Because this approach explicitly assumes a different sky intensity at each pixel, the crowded or confused fields that can cause the flat sky technique to fail are an aid to finding the least squares solution. Thus, the need for chopping away from a complex source in order to observe a blank sky region is eliminated. The simultaneous solution for both the detector gain and offset also eliminates the need for dark frame measurements, although if dark frame measurements are available then they can be used with the other data to reduce the uncertainty of the procedure. We note that this general least squares approach may also be applied in non-astronomical situations (e.g. terrestrial observing) where complex images are the norm.
The flat sky technique works well in situations where all detector pixels spend most of the time observing the same celestial calibration source, namely the flat sky background. For this technique, dithering is required only to ensure that all pixels usually do see the background. Because all pixels have observed the same source, the relative calibrations of any two pixels in the detector are tightly constrained, regardless of the separation between the pixels, i.e.
$$\frac{G^1}{G^2}=\frac{G^1S^0}{G^2S^0}=\frac{𝒟^1}{𝒟^2}.$$
(4)
However, in the more general least squares solution of Fixsen et al. (2000), each sky pixel ($`S^\alpha `$) represents a different celestial calibration source. The only pixels for which the relative calibrations are tightly constrained are those that through dithering have observed common sky pixels. Pixels that do not observe a common sky pixel are still constrained, though less directly, by intermediate detector pixels that do observe common sky pixels. For example, the relative calibration of detector pixels 1 and 3 which observe sky pixels $`\alpha `$ and $`\beta `$ respectively, but no common sky pixels, may be established if an intermediate detector pixel 2 does observe both sky pixels $`\alpha `$ and $`\beta `$, i.e.
$$\frac{G^1}{G^3}=\frac{G^1S^\alpha }{G^2S^\alpha }\frac{G^2S^\beta }{G^3S^\beta }=\frac{𝒟^{1\alpha }}{𝒟^{2\alpha }}\frac{𝒟^{2\beta }}{𝒟^{3\beta }}.$$
(5)
Other detector pixels might require multiple intermediate pixels to establish a relative calibration. As the chain of intermediate pixels grows longer, the uncertainty of the relative calibration of the two pixels also grows. Therefore, when applying the least squares solution, the exact dither pattern becomes much more important than in the flat sky technique. For the least squares solution to produce the smallest uncertainty, the dither pattern should be one that establishes the tightest correlations between all pairs of detector pixels using a small number of dithered images. Even if one is only interested in small scale structure on the sky (e.g. point sources), it is still important to have the detector properly calibrated on all spatial scales to prevent large scale detector variations from biasing results derived for both sources and backgrounds imaged in different parts of the array.
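How overlapping dithers tie the gains of distant pixels together, as in Eqs. (4) and (5), can be illustrated with a toy calculation. In the simplified, offset-free case ($`F^p=0`$) the model $`D^i=G^pS^\alpha `$ becomes linear in the logarithms, so the entire network of chained constraints can be solved at once by linear least squares, up to the usual gauge freedom (a constant that can be traded between log-gains and log-sky). All numbers below are invented, and this is only an illustration of the chaining, not the full least squares procedure of Fixsen et al. (2000), which also solves for the offsets.

```python
import numpy as np

rng = np.random.default_rng(1)

P, M = 4, 5                          # detector pixels, dither positions
N = P + M - 1                        # sky pixels covered by the dithers
G_true = np.array([0.8, 0.95, 1.1, 1.25])
S_true = rng.uniform(5.0, 50.0, N)   # structured (non-flat) sky

# One equation per datum: log D^i = log G_p + log S_{m+p}.
rows, logd = [], []
for m in range(M):
    for p in range(P):
        row = np.zeros(P + N)
        row[p] = 1.0                 # log-gain unknown
        row[P + m + p] = 1.0         # log-sky unknown
        rows.append(row)
        logd.append(np.log(G_true[p] * S_true[m + p]))
A = np.array(rows)

# Minimum-norm least squares; the rank is P + N - 1 because a constant
# can be added to all log-sky values and subtracted from all log-gains.
sol, *_ = np.linalg.lstsq(A, np.array(logd), rcond=None)
logG, logS = sol[:P], sol[P:]

# Fix the gauge by matching the mean log-gain to the true one.
shift = np.mean(np.log(G_true)) - np.mean(logG)
G_est = np.exp(logG + shift)
S_est = np.exp(logS - shift)
```

Because the dithers make the detector-sky constraint graph connected, the only degeneracy is the single gauge mode, and after fixing it the gains and the sky are both recovered; pixels that never observe a common sky pixel are still tied together through the intermediate rows of the design matrix.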
Whether obtained by flat sky, least squares, or other techniques, the quality of the calibration is ultimately determined by its uncertainties. For the least squares solution of Fixsen et al. (2000), understanding the uncertainties is relatively straightforward, because it is a linear process, i.e. $`P^\alpha =L_i^\alpha D^i`$ where $`P^\alpha `$ is the set of fitted parameters, $`D^i`$ is the data, and $`L_i^\alpha `$ is a linear operator. Then, given a covariance matrix of the data, $`\mathrm{\Sigma }^{ij}`$, the solution covariance matrix is $`V^{\alpha \beta }=L_i^\alpha L_j^\beta \mathrm{\Sigma }^{ij}`$. For a nonlinear process such as a median filter the uncertainties are harder to calculate. The diagonal terms of the covariance matrix of the solution might be sufficiently well approximated by Monte Carlo methods, but the off-diagonal components are far more numerous and often more pernicious as the effects can be more subtle than the simple uncertainty implied by the diagonal components. For this reason the off-diagonal components are often ignored. Creating final images at subpixel resolution (e.g. “drizzle”, Fruchter & Hook 1998) may introduce additional correlations beyond those described by the covariance matrix, and disproportionately increase the effects of the off-diagonal elements of the correlation matrix. Accurate knowledge of all these uncertainties is especially important for studies that seek spatial correlations within large samples, such as deep galaxy surveys or studies of cosmic backgrounds, so that any detected correlations are certifiably real and not artifacts caused by the calibration errors and unrecognized because of incomplete or faulty knowledge of the uncertainties.
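The covariance bookkeeping for such a linear solution can be made concrete with a toy estimator (the matrices below are invented for illustration): two sky estimates that share one datum acquire an off-diagonal covariance, which $`V^{\alpha \beta }=L_i^\alpha L_j^\beta \mathrm{\Sigma }^{ij}`$ tracks explicitly.

```python
import numpy as np

# Toy linear estimator: sky estimate 1 averages data (d1, d2), sky
# estimate 2 averages data (d2, d3); the shared datum d2 correlates them.
L = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
Sigma = np.eye(3)                 # independent, unit-variance data

V = L @ Sigma @ L.T               # solution covariance, V = L Sigma L^T
corr = V[0, 1] / np.sqrt(V[0, 0] * V[1, 1])   # off-diagonal correlation
```

Even with perfectly uncorrelated data, the two estimates come out 50% correlated; it is exactly this kind of off-diagonal term that a nonlinear procedure such as a median filter leaves untracked.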
Table 1 itemizes some of the features of each data analysis technique. The remainder of this paper is concerned with characterizing what makes a dither pattern good for self-calibration purposes using the least squares solution. We present a “figure of merit” (FOM) which can be used as a quantitative means of ranking the suitability of different dither patterns (§2). We then present several examples of good, fair and poor dither patterns (§3), and investigate how changes to the patterns affect their FOM. In §4, we show how dithered data can be collected in the context of both deep and shallow surveys. We also investigate the combined effects of dithering and the survey grid geometry on the completeness of coverage provided by the survey. Section 5 discusses miscellaneous details of the application and implementation of dithering. Section 6 summarizes the results.
## 2 Evaluation of Dithering Strategies
### 2.1 Dithering
To be specific, we define the process of “dithering” as obtaining multiple mostly overlapping images of a single field. Normally, each of the dithered images has a different spatial offset from the center of the field, and none of the offsets of the dither pattern is larger than about half of the size of the detector array. Generally, the set of dithered images is averaged in some manner into a single high-quality image for scientific analysis. This is distinct from the processes of “surveying” or “mapping”, in which a field much larger than the size of the array is observed, using images that are only partially overlapping. If survey data is combined into a single image for analysis, then the process required is one of mosaicking more than averaging. A region may be surveyed or mapped using dithered images at each of the survey grid points.
There are several reasons why an observer might wish to collect dithered data. One is simply to make sure that no point in the field remains unobserved because it happened to be targeted by a defective pixel in the detector array. To meet this objective, two dither images would suffice, provided their offsets are selected to prevent two different bad pixels from targeting the same sky location. A second reason to dither is so that point sources sample many different subpixel locations or phases. Such a data set allows recovery of higher resolution in the event that the detector pixel scale undersamples the instrumental point spread function. Several procedures have been developed for this type of analysis, which is commonly applied to HST imaging data and 2MASS data (e.g. Fruchter & Hook 1998; Williams et al. 1996; Lauer 1999; Cutri et al. 1999). A third reason to dither is to obtain a data set which contains sufficient information to derive the detector calibration and the sky intensities from the dithered data alone. As discussed in the introduction, for the flat sky approach, the flatness of the background is a more important concern than the particular dither pattern. However, this is reversed when the least squares solution to the calibration is derived (Fixsen et al. 2000). The structure of the sky is less important than the dither pattern which needs to be chosen carefully so that the solution is well-constrained.
In an attempt to cover as wide a field as possible, the detector array often undersamples the instrument point spread function. This undersampling can lead to increased noise in the least squares calibration procedure. There are several ways this extra noise can be alleviated. One way is to use strictly integer-pixel offsets in the dither pattern. However, this requires very precise instrument control, and eliminates the possibility of reconstruction of the image at subpixel resolution (i.e. resolution closer to that of the point spread function). A second way to reduce noise is to assign lower weights to data where steep intensity gradients are present. A third way of dealing with the effects of undersampled data is to use subpixel interlacing of the sky pixels within the least squares solution procedure. This technique may require additional dithering over the region since the interlaced sky subpixels are covered less densely than full size pixels. A fourth option is to modify the least squares procedure of Fixsen et al. (2000) to account for each datum ($`D^i`$) arising from a combination of several pixel (or subpixel) sky intensities ($`S^\alpha `$). This is a significant complication of the procedure.
After the least squares method is used to derive the detector calibration, users can always apply the method of their choice (e.g. “drizzle”, described by Fruchter & Hook 1998) for mapping the set of calibrated images into a single subpixelized image. Such methods may or may not allow continued tracking of the uncertainties and their correlations that the least squares procedure provides.
Dithering involves repointing the telescope or instrument, and thus may require additional time compared to simply taking multiple exposures of the same field. Multiple exposures of the same field without dithering would allow rejection of data affected by transient effects (e.g. cosmic rays), and improved sensitivity through averaging exposures, but of course lack the benefits described above. Whether the time gained by not dithering outweighs the benefits lost will depend on the instrument and the observer’s scientific goals.
### 2.2 A Figure of Merit
The accuracy of the calibration of an array detector cannot be fully specified by a single number or even a single number per detector pixel. The full covariance matrix is necessary to provide a complete description of the uncertainties. The magnitude of the diagonal elements of the covariance matrix (i.e. $`\sigma _p^2`$) is determined primarily by the noise characteristics of the instrument and the sky, and is sensitive to the number of images collected in a set of dithered data, but not to the dither pattern. The off-diagonal elements of the covariance matrix are sensitive to the dither pattern, and through the correlations they represent, any measurements made from the calibrated data will contain some imprint of the dither pattern. (In general these correlations degrade the signal quality although they can improve the results of some types of measurements depending on whether the correlations are positive or negative and whether the two data elements are used with the same or opposite sign in the measurement.) In order to obtain the best calibration, one would like to use a dither pattern that minimizes the correlations it leaves in the calibrated data. Since comparison of the entire covariance matrices for different dither patterns is awkward, we adopt a single number, a “figure of merit”, that is intended to provide a generic measure of the relative size of the off diagonal terms of the covariance matrix. The figure of merit (FOM) is designed only to compare different dither patterns rather than investigating all of the details of a full observing system (i.e. particular telescope/instrument combinations). The instrumental details matter of course, and in practice they may place additional constraints in choosing the dither pattern.
Here we make several simplifying assumptions to ease the calculations and comparisons. First we assume that all of the detector pixels have approximately the same noise and gain. Next we assume that the noise is independent of sky position, either because the Poisson counting statistics are not important or the observed field is so uniform that the photon counting statistics do not vary appreciably across the field. With these assumptions we can simultaneously solve for the gain and/or offset of each detector pixel and the sky brightness of each sky pixel (Fixsen et al. 2000). The solution necessarily introduces correlations into the uncertainties.
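The simultaneous solution can be illustrated for the offset-only case as a single sparse least squares problem. The sketch below is our own minimal construction, not the actual procedure of Fixsen et al. (2000); the array shapes and the zero-sum constraint on the offsets are choices made here:

```python
import numpy as np

def self_calibrate(data, sky_idx, n_sky):
    """Solve D = S[sky_idx] + F jointly for sky intensities S and detector offsets F.

    data    : (n_frames, n_pix) array of measured values
    sky_idx : (n_frames, n_pix) integer sky pixel observed by each detector pixel
    """
    n_frames, n_pix = data.shape
    n_obs = data.size
    # Design matrix: each observation is one sky intensity plus one detector offset.
    A = np.zeros((n_obs + 1, n_sky + n_pix))
    rows = np.arange(n_obs)
    A[rows, sky_idx.ravel()] = 1.0
    A[rows, n_sky + np.tile(np.arange(n_pix), n_frames)] = 1.0
    A[-1, n_sky:] = 1.0                # break the constant degeneracy: sum(F) = 0
    b = np.append(data.ravel(), 0.0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_sky], x[n_sky:]        # sky intensities, detector offsets
```

With noiseless synthetic data and a well-connected dither pattern, the recovered sky and offsets are exact to machine precision, since the constraint row fixes the one degenerate constant.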
For the figure of merit we choose only a single pixel at the center of the array and look at its correlations. This is done to reduce the calculational burden which includes 4 billion correlations for a modest $`256\times 256`$ detector. Since all of the pixels are locked to the same dither pattern the correlations are similar for the other pixels (discussed below). We sum the absolute value of the correlations between the central pixel and all of the other pixels. This is compared with the variance of the central pixel, $`\sigma _{p_0}^2`$, as this is the irreducible uncertainty due to detector noise alone. Thus, we define the figure of merit ($`FOM`$) as:
$$FOM=\frac{\sigma _{p_0}^2}{_{i\mathrm{all}\mathrm{pixels}}|V_{ip_0}|}$$
(6)
where $`V`$ is the covariance matrix of the detector parameters. The absolute value is used here to ensure that the sum will be small only if all of the terms are small, not because some of the frequent negative correlations happen to cancel the positive correlations. In detail, the FOM is a function ($`f(x)\equiv 1/(1+x)`$) of the mean absolute value of the normalized off-diagonal elements of the covariance matrix. With this definition, the FOM is bounded on the range $`(0,1]`$, and can be thought of as an efficiency of encoding correlations in the dither pattern, i.e. a high FOM is desired in a dither pattern.
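As a sketch, Eq. 6 amounts to a one-line reduction over a single row (or column) of the covariance matrix; here `V` and the central-pixel index `p0` are assumed to come from the calibration solution:

```python
import numpy as np

def figure_of_merit(V, p0):
    """Eq. 6: variance of pixel p0 divided by the summed |covariances| with all pixels."""
    return V[p0, p0] / np.sum(np.abs(V[:, p0]))
```

For a diagonal covariance matrix the FOM is exactly 1; any off-diagonal power pulls it below 1.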
Equation 6 is not unique. A wide variety of possible quantitative figures of merit could be calculated. Ideally one would choose the FOM that gives the lowest uncertainties in the final answer. This can be done if the question, i.e. quantity to be measured or scientific goal, is well determined. In that case the question can be posed as a vector (or if there is a set of questions, a corresponding set of vectors in the form of a matrix). The vector (or matrix) can then be dotted on either side of the covariance matrix and the resulting uncertainty minimized. There are several problems in this approach. One is that the matrix is too large to practically fit in most computers. A second problem is that the question may not be known before the data are collected. A third problem is that the same data may be used to answer several questions. To deal with the first issue we use only a single row or column of the symmetric covariance matrix. As shown below, the rows of the matrix have a similar structure over most of the array. To deal with the other two issues, the FOM sums the absolute values of all of the terms. This may not be the ideal FOM for a specific measurement, but it should be a good FOM for a wide variety of measurements to be made from the data.
Throughout this paper, we calculate the FOM based on calibration which only seeks to determine the detector gains or offsets, but not both. When both gains and offsets are sought, the solution for the covariance matrix contains degeneracies that are only broken by the presence of a non-uniform sky brightness (Fixsen et al. 2000). The FOM when solving for one detector parameter is similar to that which would apply when solving for both gains and offsets.
### 2.3 Dither Patterns and Radio Interferometers
In order to compute relative gain and/or offset, two detector pixels must observe the same sky pixel or have a connection through other detector pixels that mutually observe one or more sky pixels. A shorter path of intermediate detectors implies a tighter connection and lower uncertainties. One goal of dithering is to tighten the connections between detectors and thus lower the uncertainties. This combinatorial problem happens to share geometrical similarities with another problem that has been dealt with previously, namely covering the $`uv`$ plane with a limited number of antennas in a radio interferometer.
Figure 1 shows the $`uv`$ coverage of the VLA for a snapshot of a source at the zenith. Each antenna pair leads to a single sample marked with a dot in the $`uv`$ plane. Also shown is the map of $`|V_{ip_0}|`$ generated by using a 27-position dither pattern with the same geometry as the VLA array (§3.2). The strongest correlations are found at locations of the direct dither steps corresponding to the VLA baselines. However, the non-zero correlations (and anti-correlations) found elsewhere in the map make a significant contribution to the total FOM.
Figure 2 shows maps of $`|V_{ip_0}|`$ generated using different choices of $`p_0`$. These maps illustrate that the correlations of all pixels are similar in structure to those of the central pixel, but the finite size of the detector limits the correlations available to pixels near the detector edges. The dither pattern used in this demonstration is the VLA pattern described in §3.2.
Despite the similar geometries of radio interferometer $`uv`$ coverage and dither pattern maps of $`|V_{ip_0}|`$, several important differences should be noted. First, with radio telescopes only direct pairs of antennas (although all pairs) can be used to generate interference patterns, whereas with dither patterns a path involving several intermediate detector pixels can be used to generate an indirect correlation. However, the greater the number of intermediate steps that must be used to establish a correlation, the noisier it will be. Second, the $`uv`$ coverage is derived instantly. Observing over a period of time fills in more of the $`uv`$ plane as the earth’s rotation changes the interferometer baselines relative to the target source. In contrast, the $`|V_{ip_0}|`$ coverage shown in Figs. 1 and 2 is only achieved after collecting dozens of dithered images. To fill in additional coverage, the dither pattern must be altered directly because there is no equivalent of the earth rotation that alters the geometry of the instrument with respect to the sky. Another important difference is that the short interferometer baselines (found near the center of the $`uv`$ plane) are sensitive to the large-scale emission. For dither patterns the inverse relation holds. Direct correlations between nearby detector pixels are sensitive to small-scale structure in the detector properties and sky intensities. Thus the outer edge of the interferometer’s $`uv`$ coverage represents a limit on the smallest-scale structure that can be resolved, while the outer edge of strong $`|V_{ip_0}|`$ correlations represents a limit on the largest-scale variations that can be reliably distinguished.
Overall, the geometrical similarities suggest that patterns used and proposed for radio interferometers may prove to be a useful basis set for constructing dither patterns. In the following section, we calculate the FOM for several patterns inspired by radio interferometers in addition to other designs.
## 3 Various Dither Patterns
Several general algorithms for generating dither patterns have been examined. In many cases, we have also explored variants of the basic algorithms by changing functional forms, adding random perturbations, or applying overall scale factors. We have also tested several specific examples of dither patterns from various sources. Examples of the patterns described below are shown in Figure 3. All tests reported here assumed detector dimensions of $`256\times 256`$ pixels unless otherwise noted.
### 3.1 Reuleaux Triangle
Take an equilateral triangle and draw three 60° arcs, each connecting a pair of vertices and centered on the opposite vertex. The resulting fat triangle is a Reuleaux triangle. This basic shape has been used to set the geometry of the Sub-Millimeter Array (SMA) on Mauna Kea (Keto 1997).
This shape can be used as a dither pattern by taking equally spaced steps along each side of the Reuleaux triangle. The length of the steps is set by the overall size of the triangle (a free parameter) and the number of frames to be used in the pattern. For an interferometer, Keto (1997) shows that the $`uv`$ coverage can be improved by displacing the antennas from their equally spaced positions around the triangle.
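A minimal generator for such a pattern might look like the following; the triangle orientation, the centering on the origin, and the half-step sampling along each arc (which avoids duplicating the vertices) are choices made here, not taken from the SMA design:

```python
import numpy as np

def reuleaux_dither(m, width=128.0):
    """Return ~m dither offsets spaced along the arcs of a Reuleaux triangle."""
    # Vertices of an equilateral triangle of side `width`, centered on the origin.
    ang = np.deg2rad([90.0, 210.0, 330.0])
    verts = (width / np.sqrt(3.0)) * np.column_stack([np.cos(ang), np.sin(ang)])
    pts = []
    per_arc = m // 3
    for k in range(3):
        c = verts[k]                        # each arc is centered on one vertex
        a, b = verts[(k + 1) % 3] - c, verts[(k + 2) % 3] - c
        t0, t1 = np.arctan2(a[1], a[0]), np.arctan2(b[1], b[0])
        sweep = np.angle(np.exp(1j * (t1 - t0)))        # signed 60 degree arc
        for t in t0 + sweep * (np.arange(per_arc) + 0.5) / per_arc:
            pts.append(c + width * np.array([np.cos(t), np.sin(t)]))
    return np.array(pts)
```

Because the Reuleaux triangle is a curve of constant width, every pair of offsets lies within `width` of each other, so the overall scale of the pattern is set directly by that one parameter.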
### 3.2 VLA
The “Y”-shaped array configurations of the Very Large Array (VLA) radio interferometer are designed such that the antenna positions from the center of the array are proportional to $`i^{1.716}`$ (Thompson, et al. 1980). The three arms of the array are separated from each other by $`120`$°. We have adopted this geometry to provide a dither pattern with positions chosen along each of the three arms at
$$dr=\sqrt{dx^2+dy^2}=i^p\quad \mathrm{where}\quad i=1,2,3,\ldots ,M/3.$$
(7)
and $`p`$ is an arbitrary power which can be used to scale the overall size of the pattern. The first step along each of the 3 arms is always at $`dr=1.0`$. The azimuths of the arms were chosen to match those of the VLA, at 355°, 115°, and 236°.
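Eq. 7 translates directly into code; the azimuth orientation convention below (measured from the +y axis) is an arbitrary choice, since only the relative geometry of the offsets matters:

```python
import numpy as np

def vla_dither(m, p=1.716):
    """Dither offsets along three arms at r = i**p, at the VLA arm azimuths."""
    az = np.deg2rad([355.0, 115.0, 236.0])
    i = np.arange(1, m // 3 + 1, dtype=float)
    r = i ** p                       # first step on each arm is at dr = 1.0
    # One (dx, dy) per radius per arm, azimuth measured from the +y axis.
    return np.concatenate([np.column_stack([r * np.sin(a), r * np.cos(a)])
                           for a in az])
```

For example, `vla_dither(27)` gives a 27-position pattern of this geometry.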
### 3.3 Random
Random dither patterns were tested using $`dx`$ and $`dy`$ steps generated independently from normal (Gaussian) or from uniform (flat) distributions. The widths of the normal distribution or the symmetric minimum and maximum of the uniform distribution are free parameters.
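Both variants are immediate with NumPy's random generators; the scale parameters (σ or the half-width) and the seeding are the only free choices here:

```python
import numpy as np

def random_dither(m, sigma=40.0, seed=None):
    """m dither offsets with dx, dy drawn independently from a normal distribution."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, size=(m, 2))

def random_dither_uniform(m, half_width=64.0, seed=None):
    """m dither offsets with dx, dy drawn uniformly from [-half_width, +half_width]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-half_width, half_width, size=(m, 2))
```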
### 3.4 Geometric Progression
We have generated a geometric progression pattern, stepping in $`x`$ in steps of $`(f)^n`$, where $`n=0,1,\ldots ,N-1`$ and $`f^N=256`$. The same steps are also used in the $`y`$ direction. This pattern separates the $`x`$ and $`y`$ dimensions. In each dimension the pattern is quite economical in generating correlations up to the point where $`f=2`$. Beyond this there is little to be gained in adding more dither steps in the $`x`$ or $`y`$ direction. However, there is some benefit expected in adding steps combining $`x`$ and $`y`$ offsets. Hence, for a $`256\times 256`$ array, we should expect the geometric pattern to be good for $`M\approx 2\log _2(256)=16`$ positions and not show much improvement by adding more positions.
The geometric progression patterns used here contain two additional steps chosen at $`(dx,dy)=(0,0)`$ and at a position such that $`dx=dy\ne 0.0`$. This is a cross-shaped pattern, with one diagonal pointing, from which any desired pixel-to-pixel correlation can be made with a small number of intermediate steps. The alternating sign of the steps builds up longer separations quickly.
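One possible realization is sketched below, treating each step length as a pointing offset from the pattern center; the exact placement of the extra $`(0,0)`$ and diagonal pointings is our choice. For $`N=8`$ (so $`f=2`$) this gives an $`M=18`$ pattern:

```python
import numpy as np

def geometric_dither(n_steps, span=256.0):
    """Cross-shaped pattern with step lengths f**n, where f**n_steps = span."""
    f = span ** (1.0 / n_steps)
    steps = f ** np.arange(n_steps)
    pts = [(0.0, 0.0)]
    sign = 1.0
    for s in steps:
        pts.append((sign * s, 0.0))      # step along x
        pts.append((0.0, sign * s))      # same step along y
        sign = -sign                     # alternate signs: long separations build fast
    pts.append((steps[-1] / 2.0, steps[-1] / 2.0))   # one diagonal pointing
    return np.array(pts)
```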
### 3.5 Other Patterns
Several other patterns were also tested with little or no modifications. The patterns that were planned for the WIRE moderate and deep surveys were examined with both the nominal dither steps, and with steps scaled by a factor of 2 to account for the difference between the $`128\times 128`$ pixel WIRE detectors and a larger $`256\times 256`$ pixel detector. The pattern used for NICMOS observations of the HDF-S was tested. The configuration of the 13 antennas of the Degree Angular Scale Interferometer (DASI; Halverson, et al. 1998) was used as a scalable pattern. The declination scanning employed by 2MASS yields a linear dither pattern.
### 3.6 Figures of Merit for the Patterns
In the simplest form, a specific pattern, $`M`$ images deep, would be used to collect data at a single target. The FOMs for all patterns tested, with various $`M`$ and other modifications, are listed in Table 2. For all patterns, the FOM increases (improves) as $`M`$ increases. For $`M<20`$ the change is quite rapid. The variations of FOM as a function of $`M`$ for the tabulated versions of each of the patterns are shown in Figure 4.
Table 2 also lists results for a Reuleaux triangle pattern applied to a $`32\times 32`$ detector, and for two large grid dither patterns applied to the same array. The grid dither patterns are square grids with 1 pixel spacings between dithers, such that for the $`M=1024`$ pattern a single sky pixel is observed with each detector pixel, and for the $`M=4096`$ pattern a $`32\times 32`$ pixel region of sky is observed with each detector pixel. These results demonstrate that in the extreme limit where all correlations are directly measured, $`FOM\rightarrow 1.0`$. The FOM does not reach 1.0 because of the finite detector and dither pattern sizes.
For the $`256\times 256`$ arrays, the Reuleaux and random (normal) patterns have the best FOM for $`M>20`$. The VLA pattern is only a little worse, but other patterns have distinctly smaller FOM than these patterns. For the scalable VLA, random, Reuleaux, and DASI patterns, the best FOM for a fixed $`M`$ usually occurs when the maximum $`|dx|`$ or $`|dy|\approx 128`$ pixels. For patterns with small $`M`$ the optimum scale factor is usually smaller, to avoid too many large spacings between widely scattered dither positions. For values of $`M<20`$ no pattern seems to produce a good FOM; however, the geometric pattern usually does best in this regime. Rotating the patterns with respect to the detector array generally produces only modest changes in the FOM. For $`M\lesssim 30`$, the FOM of a Reuleaux pattern is improved by adding small random perturbations to the dither positions. No optimization of the perturbations was performed (as in Keto 1997), but apparently any perturbation is better than none for small $`M`$ patterns. Deep Reuleaux triangle patterns are neither improved nor worsened by small perturbations.
The results presented in Fig. 4 and Table 2 indicate that a good FOM is dependent on patterns that sample a large number and wide range of spatial scales. A variety of patterns with different geometries can yield satisfactory results, as demonstrated by the rather different Reuleaux triangle and random patterns. Therefore, attempts to find the single “optimum” pattern may not be very useful, and selection of a dither pattern needs to carefully avoid patterns that contain obvious or hidden redundancies that lead to a poor FOM. An example of this sort of pitfall is the $`M=18`$ geometric pattern, for which all dither steps are integer powers of 2, leading to a FOM that is worse than geometric patterns with depths of $`M=14`$ or 16.
The coverage of the VLA, random, and Reuleaux triangle dither patterns when used for observation of a single target is shown as maps in Figure 5, and histograms in Figure 6. The Reuleaux triangle dither pattern provides the largest region covered at maximum depth, but if a depth less than the maximum is still useful then the VLA dither pattern may provide the largest area covered.
The importance of the largest dither steps in a pattern is demonstrated through analysis of simulated WIRE data. A synthetic sky was sampled using both geometric progression and random dither patterns. The maximum dither offset was 38 pixels for the geometric progression pattern and 17 pixels for the random pattern. The FOM for this geometric pattern is 0.127, and for this random pattern it is 0.099. WIRE’s detectors were $`128\times 128`$ pixel arrays. The gain response map used in the simulations contained large scale gradients with amplitudes of $`\sim 10\%`$. Figure 7 shows comparisons between the actual gains and the gains derived when the self-calibration procedure described by Fixsen, et al. (2000) is employed. The random dither pattern without the larger dither offsets was less effective at identifying the large scale gain gradient. The undetected structure in the gain winds up appearing as a sky gradient that affects the photometry of both the point sources and the background in the images.
## 4 Surveys
### 4.1 Deep Surveys
For obtaining a standard deep survey, we have assumed that the same dither pattern is repeated at each location of a grid. The survey grid is assumed to be aligned with the detector array and square, with a spacing no larger than the size of the array. The FOMs for surveys using several different dither patterns and grid spacings are listed in Table 3. The FOM derived for the entire survey as a single data set is basically determined by the FOM of the dither pattern used. The overlap between dithers from adjacent points in the survey grid effectively adds additional steps to the dither pattern, which slightly improves the FOM over that of the pattern when used for a single target. Smaller survey grid spacings lead to increased overlap and increased FOM, but also lead to a smaller area of sky covered in a fixed number of frames. The improvement in the FOM when used in surveys rather than singly is most significant for relatively shallow dither patterns; however, even in a survey, the FOM of a shallow pattern is still not very good. The FOM improves only slightly as the survey grid grows larger than the basic $`2\times 2`$ unit cell.
### 4.2 Shallow Surveys
For shallow surveys in which as few as 2 images per grid location are desired, using the same small $`M`$ dither pattern at each location yields a very poor FOM. An alternate method of performing a shallow survey is to choose a larger $`M`$ dither pattern and apply successive steps of the dither pattern at successive locations in the survey grid (Figure 8). If the survey is large enough, it can contain all the direct correlations of the large $`M`$ dither pattern, though spread out among many survey grid points rather than at a single location. The FOM of the shallow survey can thus approach the FOM of the single deeper dither pattern. The advantage of altering the dither pattern at each survey grid point is still present, though less significant, as the survey depth increases. The FOM derived from various surveys using this shallow survey strategy are shown in Table 4.
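The bookkeeping for this strategy is simple: successive steps of one deep dither pattern are consumed as the survey grid is traversed. A sketch (function and argument names are illustrative):

```python
import numpy as np

def shallow_survey_pointings(grid_xy, pattern, depth):
    """Assign successive steps of one deep dither pattern to successive grid points."""
    pointings = []
    k = 0
    for gx, gy in grid_xy:
        for _ in range(depth):
            dx, dy = pattern[k % len(pattern)]   # wrap when the pattern is exhausted
            pointings.append((gx + dx, gy + dy))
            k += 1
    return np.array(pointings)
```

With a `depth` of 2 or 3 per grid point, a pattern of length $`M_{pattern}`$ is spread over $`M_{pattern}/depth`$ grid points before wrapping.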
A random dither pattern is a natural choice for use in this shallow survey strategy. One can proceed by simply generating a new random set of dithers at each survey grid point. If a more structured dither pattern is used as the basis for the shallow survey (e.g. the Reuleaux triangle in Fig. 8), then one must address the combinatorial problem of selecting the appropriate subsets of the larger dither pattern at each survey grid point. The example shown in Fig. 8 is not an optimized solution to the combinatorial problem.
### 4.3 Survey Coverage & Grids
When a large area is to be observed, the most efficient way to cover the region is to use a square survey grid aligned with the detector array and with a grid spacing equal to the size of the array, or slightly less to guard against bad edges or pointing errors. In this mode a deep survey using the same $`M`$ position dither pattern at each survey grid point will cover the desired region at a depth of $`M`$ or greater. There will be no holes in the coverage, though the edges of the surveyed region will fade from coverage of $`M`$ to 0 with a profile determined by the dither pattern used (Fig. 6). A shallow survey, using a different dither pattern at each grid point, may or may not have coverage holes depending on the maximum size of the dither steps and the grid spacing of the survey. The constraint for avoiding coverage holes is that the overlap of the survey grid must be more than the maximum range of dither step offsets (independently in the $`x`$ and $`y`$ coordinates), e.g.
$$X\mathrm{\Delta }X>\mathrm{max}(dx_i)\mathrm{min}(dx_i)$$
(8)
where $`X`$ is the size of the array, $`\mathrm{\Delta }X`$ is the survey grid spacing, and $`dx_i`$ are the dither steps ($`i=1\mathrm{}M`$). This constraint places the survey grid points close enough together that coverage holes are avoided even if dithers at adjacent grid point are offset in the maximum possible opposite directions. If the shallow survey observing program can be arranged to avoid this worst case, then the grid spacing may be increased without developing coverage holes. Coverage holes may be undesirable when mapping an extended object, but may be irrelevant if one is simply seeking a random selection of point sources to count. Note that some minor coverage holes are inevitable, where data are lost to bad pixels or cosmic rays. Additionally, a coverage hole where a depth of $`M=1`$ is achieved instead of $`M=3`$ might be more serious than one where $`M=18`$ is achieved instead of $`M=20`$.
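Eq. 8 is easily checked for any candidate pattern; note that the constraint applies independently in each coordinate:

```python
import numpy as np

def survey_has_no_holes(dx, dy, array_size, grid_spacing):
    """True if Eq. 8 holds in both x and y: worst-case dithers still overlap."""
    overlap = array_size - grid_spacing
    # np.ptp gives max - min, i.e. the full range of dither offsets.
    return (overlap > np.ptp(dx)) and (overlap > np.ptp(dy))
```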
For this shallow survey strategy there is an inherent tradeoff between the area covered (without holes) and the FOM. Using a dither pattern containing large dither steps as the basis for the survey will lead to a good FOM, but require a relatively large overlap in the survey grid spacing and a consequent loss of area covered by the survey. Decreasing the scale of the dither pattern leads to a lower FOM, but permits an increase in the survey grid spacing and total area covered. The ideal balance between these will depend on the instrumental characteristics and the scientific objectives.
In many instances, an observer may want to survey or map a region of fixed celestial coordinates. In some cases, instrumental constraints (i.e. the ability to rotate the telescope or detector array relative to the optical boresight) may not allow alignment between the detector array and the desired survey grid. This will result in coverage holes in the surveyed region, unless the grid spacing is reduced enough to prevent holes regardless of the array orientation. If a square grid with a spacing of $`\mathrm{\Delta }X=X/\sqrt{2}`$ is used, then coverage holes are prevented for any possible orientation of the arrays. This is illustrated by plots in the first two rows of Figure 9, which shows the array positions for a $`4\times 4`$ $`M=1`$ survey (without dithering). With a deep survey strategy, avoidance of holes in the $`M=1`$ case will prevent holes at any depth $`M`$, but for the shallow survey strategy additional overlap may need to be built into the survey grid to prevent holes as discussed above. Decreasing the survey grid by a factor of $`\sqrt{2}`$ in each dimension results in a grid that covers only half the area that could be covered if the detectors and grid are aligned. This efficiency can be increased if the survey is set up on a triangular grid rather than a square grid. If alternate rows of the survey grid are staggered by $`X/2`$ (middle row of Fig. 9) and the vertical spacing of the grid is reduced by a factor of $`\sqrt{3}/2`$, then holes are prevented as long as the array orientation remains fixed throughout the survey (4th row of Fig. 9). The area covered by this triangular grid will be $`87\%`$ of the maximum possible area, rather than $`50\%`$ for the square grid required to prevent holes. If the array orientation is not fixed throughout the survey (last column of Fig. 9) then the triangular grid must be reduced by an additional factor of $`\sqrt{3}/2`$ in both dimensions.
This results in a $`65\%`$ efficiency for the triangular grid versus $`50\%`$ for the square grid, which requires no further reduction. The FOM of a survey on a triangular grid is similar to that of a survey on a square grid with an equivalent amount of overlap.
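These efficiencies follow from the grid spacings alone; the quoted figures can be checked directly:

```python
import math

X = 1.0  # array size in arbitrary units; efficiency = area per pointing / X**2

# Square grid safe for any orientation: spacing X/sqrt(2) in both dimensions.
square = (X / math.sqrt(2)) ** 2 / X ** 2            # 0.50

# Triangular grid, fixed orientation: spacing X along rows, X*sqrt(3)/2 between rows.
tri_fixed = X * (X * math.sqrt(3) / 2) / X ** 2      # ~0.87

# Triangular grid, arbitrary orientation: both spacings shrink by sqrt(3)/2.
tri_any = tri_fixed * (math.sqrt(3) / 2) ** 2        # ~0.65
```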
## 5 Other Miscellaneous Details
The most flexible implementation of the dithering strategies presented here would be to have the dither steps be determined algorithmically from a small set of user-supplied parameters. For example, an observer could select: a type of dither pattern (e.g. Reuleaux triangle or random), a pattern depth $`M_{pattern}`$, and a scaling factor to control the overall size of the pattern. From this information, the telescope control software could calculate and execute the desired dither pattern. For the shallow survey strategy presented above, the observer would also need to supply: the survey depth, $`M_{survey}<M_{pattern}`$, and perhaps an index to track which grid point of the survey is being considered (software might handle this automatically).
Sometimes design or operational constraints require that the dither patterns reside in a set of pre-calculated look-up tables. In this case (which has applied to both WIRE and IRAC) the observer’s ability to set the dither pattern is more limited. However, some of the limitations of using dither tables can be mitigated if the observer is not forced to use dither steps from the tables in a strictly sequential fashion. For example, one dither table might contain an $`M=72`$ Reuleaux triangle dither pattern calculated on a scale to produce the optimum FOM. If the observer is allowed to set the increment, $`\mathrm{\Delta }i`$, used in stepping through this dither table, then by selecting $`\mathrm{\Delta }i=3`$ or $`\mathrm{\Delta }i=4`$, dither patterns of $`M=24`$ or $`M=18`$ can be generated. Allowing non-integer increments (subsequently rounded) would enable the selection of a dither pattern of any depth $`M\le 72`$. This adjustment of the increment is most clearly useful for very symmetric dither patterns such as the Reuleaux triangle pattern. For a dither table containing a random pattern, non-sequential access to the table can have other uses. First, in applying the shallow survey strategy, a random dither table of length $`M_{pattern}`$ could be used to sequentially generate $`M_{survey}<M_{pattern}`$ dithers at each successive survey grid point. Selection of dither steps would wrap around to the beginning of the table once the end of the table is reached. For example a dither table of $`M_{pattern}=100`$ could be used sequentially to generate 20 different patterns for an $`M=5`$ shallow survey. Even better would be to have a table with $`M_{pattern}`$ a prime number, e.g. 101. Then, wrapping the table allows the sequential generation of $`M_{pattern}`$ different dither patterns for any $`M_{survey}`$, though some of these dither patterns will differ from others by only one step.
Additional random patterns can be generated by setting different increments for stepping through the table. Enabling specification of the starting point in the dither table would additionally allow the observer to pick up the random dither pattern sequence at various (or the same) positions as desired. These capabilities would enable an observer to exploit the large number of combinations of dither steps available in a finite length dither table, in efforts to maximize the FOM. Use of a fixed dither table can also be made less restrictive if a scaling factor can be applied to the dither pattern size. A free scaling factor provides an additional means of adjusting the pattern size as desired to meet coverage or FOM goals.
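All of these table-access tricks reduce to modular indexing. A sketch (the rounding of non-integer increments and the wrap-around are as described above; the function and argument names are illustrative):

```python
import numpy as np

def dithers_from_table(table, m, start=0, increment=1):
    """Select m dither steps from a fixed look-up table, wrapping at the table end."""
    # Non-integer increments are rounded; the modulus wraps back to the table start.
    idx = np.round(start + increment * np.arange(m)).astype(int) % len(table)
    return table[idx]
```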
For the cases presented in this paper, we have assumed that the orientation of the detector array remains fixed throughout the execution of the dither pattern and any larger survey (except for the last column of Fig. 9). However, rotation of the detector array relative to the dither pattern, either within a single pointing, or at different pointings in a deep survey, is an effective way of establishing combinations of direct pixel-to-pixel correlations that cannot be obtained using purely translational dither steps. Inclusion of rotation of the detector can lead to further improvements in the FOM of a given dither pattern or survey. In the extreme, a dither pattern could even be made entirely out of rotational rather than translational dither steps. However, without an orthogonal “radial” dither step, rotation alone is similar to dithering with steps in the $`x`$-direction but not the $`y`$-direction. The ability to implement rotations of the detector will be allowed or limited by the design and operating constraints of the telescope and instruments being used.
Bright sources can often saturate detectors and cause residual time-dependent variations in detector properties. For observations of a field containing a bright source, use of a random dither pattern may lead to streaking as the source is trailed back and forth across the detector array between dithers. In contrast the use of a basically hollow or circular dither pattern such as the Reuleaux triangle pattern, will only trail the source through a short well-defined pattern, which will lie toward the outer edge of the detector if the source position is centered in the dither pattern. If the pattern scale of the dither pattern is increased, the trail of the source can be pushed to or off the edges of the detector, though the $`FOM`$ will suffer if the pattern scale is greatly increased. In other words, a hollow dither pattern with a large scale could be used to obtain a series of images looking around but not at a bright source.
Dithering may be performed by repointing the telescope, or by repositioning the instrument in the focal plane, for example through the use of tilting optics, as in the 2MASS (Kleinmann 1992) or SIRTF MIPS (Heim, et al. 1998) instruments. Calculation of the $`FOM`$ of the dither pattern will be independent of the technique used. The self-calibration procedure, however, may be affected by effective instrumental changes if the instrument is repositioned in the focal plane. The alternative, repointing the telescope, can be much more time consuming and may limit the use of large $`M`$ dither patterns.
The combined use of two or more non-contiguous fields is transparent to the self-calibration procedure. If the same dither pattern is used on each of the separate fields, the resulting $`FOM`$ will be the same as that for a single field. The $`FOM`$ would be improved for the combined data set if the dither pattern is different for each of the subsets. The $`FOM`$ for a data set of non-contiguous regions is thus similar to that obtained using the same dither strategy in a contiguous survey, except there is a small loss in the $`FOM`$ because of the lack of overlap between adjacent regions.
Another means of minimizing coverage holes when using a shallow survey strategy is to oversample the depth of the survey. For example, performing the shallow survey at a depth of $`M=4`$ when $`M=3`$ is the intended goal will result in fewer holes at a depth of 3 for a fixed grid spacing, and in a better FOM for the overall survey. However, the cost in time of the additional exposures may be prohibitive.
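Although the paper's FOM (Eq. 6) is not reproduced here, a 1-D toy tally of coverage depth illustrates why oversampling the depth of a shallow survey fills coverage holes. The detector length, grid spacing, and dither offsets below are invented for illustration and are not the patterns discussed in this paper:

```python
# 1-D toy of survey coverage depth: a detector of DET pixels is stepped
# through a survey grid, with a few dither offsets at each grid point.
# All sizes and offsets here are invented for illustration.

DET = 100                    # detector length in pixels
GRID = [0, 110, 220]         # survey grid points (spacing > DET leaves gaps)
DOMAIN = range(-10, 340)     # strip over which coverage depth is tallied

def depth_map(dithers):
    """Number of exposures covering each position of the strip."""
    depth = {x: 0 for x in DOMAIN}
    for g in GRID:
        for d in dithers:
            for x in range(g + d, g + d + DET):
                depth[x] += 1
    return depth

def holes(depth, goal=3):
    """Positions in the nominal survey span covered fewer than `goal` times."""
    return sum(1 for x in range(0, 320) if depth[x] < goal)

d3 = depth_map([0, 4, 9])        # survey executed at the intended depth M = 3
d4 = depth_map([0, 4, 9, 15])    # same grid, oversampled with one extra dither

h3, h4 = holes(d3), holes(d4)
# extra exposures can only raise the depth, so h4 <= h3 (here strictly fewer)
```

The extra exposure at each grid point fills part of each inter-field gap at the cost of one additional integration per pointing, which is the time trade-off noted above.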
The FOM as calculated here only depends upon the offsets of the dither pattern rounded to the nearest whole pixel. This means that any desired combination of fractional pixel offsets to facilitate subpixel image reconstruction may be added to the dither patterns without affecting the various aspects discussed in this paper. If using dither tables, one could have separate tables for the large scale and the fractional pixel dithers, with the actual dithers made by adding selected entries from the two tables. This could allow simultaneous and independent implementation of large-scale and subpixel dithering strategies. Only subpixel image reconstruction that demands exclusively small ($`\lesssim 1`$ pixel) dithering would be incompatible with the dithering strategies presented here.
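The two-table bookkeeping described above can be sketched as follows; the offsets are invented for illustration (any real tables would come from the survey design). Since every fractional entry is smaller than half a pixel, rounding the combined offsets to whole pixels recovers the large-scale table, so the FOM is unchanged:

```python
# Separate tables for the large-scale and fractional-pixel dithers; each
# actual pointing is made by adding one entry from each table.  All offsets
# below are invented for illustration.

large_scale = [(0, 0), (17, 5), (9, 23), (26, 14)]   # whole-pixel offsets
subpixel = [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25), (0.25, 0.25)]

pointings = [(bx + sx, by + sy)
             for (bx, by) in large_scale
             for (sx, sy) in subpixel]

# The FOM depends only on offsets rounded to the nearest whole pixel, so
# sub-half-pixel shifts leave it unchanged: rounding recovers the
# large-scale table exactly.
rounded = sorted({(round(x), round(y)) for (x, y) in pointings})
```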
## 6 Conclusion
We have shown that proper selection of observing strategies can dramatically improve the quality of self-calibration of imaging detectors. We have established a figure of merit (Eq. 6) for quantitatively ranking different dither patterns, and have identified several patterns that enable good self-calibration of a detector on all spatial scales. The layouts of radio interferometers correspond to good dither patterns. Both the highly ordered Reuleaux triangle pattern and the unstructured random pattern provide good FOM with moderate or deep observations. This indicates that good patterns must sample a range of spatial scales without redundancy, and if this condition is met, then secondary characteristics of the patterns or instrument constraints may determine the actual choice of the dither pattern. Any dither pattern must contain steps as large as half the size of the detector array if large scale correlations are to be effectively encoded in the dithered data set. Deep surveys can take advantage of the use of a single good dither pattern. Shallow surveys can obtain good FOM by altering the dithers used at each of the survey grid points. Using a fixed pattern throughout a shallow survey makes it difficult or impossible to apply a self-calibration procedure to the resulting data sets. The use of triangular instead of square survey grids can be more efficient in executing complete-coverage surveys when the array orientation cannot be set to match the survey grid. Good dither patterns and survey strategies can be devised even in some seemingly restricted situations. The ultimate importance of dithering and a good FOM will depend on the nature of the instrument and the data and on the scientific goals. For many goals, obtaining a larger quantity of data may not be an adequate substitute for obtaining data with a good FOM.
We thank D. Shupe and the WIRE team for supplying simulated data using several different dither patterns. W. Reach and members of the SSC and IRAC instrument teams were helpful in providing useful ideas and criticism throughout the development of this work. J. Gardner, J. Mather, and the anonymous referee provided very helpful criticism of the manuscript.
|
no-problem/0002/nucl-th0002040.html
|
ar5iv
|
text
|
## 1 Introduction
Electromagnetic signals from relativistic heavy-ion collisions directly probe the properties of the dense matter created during the collision, since their interactions with the surrounding matter are negligible. However, the observed lepton pairs and photons do not originate only at one temperature and density, but the distribution is a complicated integral over the entire space-time history of the system. Therefore, to draw any conclusions from the observed yield, one has to understand the evolution of the system and how it affects dilepton emission.
The evolution of the system is a complicated many-body problem which cannot be solved from first principles but has to be described using phenomenological models instead. Various models based on hydrodynamics and transport theory have been successfully used to describe the hadron data measured in A+A collisions at the CERN SPS energies. However, when they are used to describe dilepton emission in the same collisions, the dilepton yields around invariant mass 500 MeV differ roughly by a factor of two . In this mass region the CERES collaboration at CERN has measured a significant excess of dileptons over the estimated background . It has been suggested that this enhancement might be an in-medium effect or possibly a precursor of chiral symmetry restoration , but before drawing any such conclusions one has to understand why different expansion dynamics can lead to equally large enhancements.
To investigate the effect of expansion dynamics on dilepton production we have compared the dilepton yields from three different models – a transport model, a hydrodynamical model with zero pion chemical potential, and a hydrodynamical model with conserved pion number.
## 2 The models
To simplify the study of expansion dynamics we have kept the particle content of the system as simple as possible. The only particles included are pions and rho mesons and the only production channel for electron pairs is $`\pi \pi `$ annihilation. No in-medium modifications of particle properties have been taken into account, but all cross sections, widths etc. are those of free particles.
The transport model we use is the relativistic BUU transport model described in ref. , and the hydrodynamical model is the 2+1 dimensional non-boost invariant model described in ref. . One of the important differences between these models is that pion number is conserved in the transport model but not in the hydrodynamic model. The pion number conservation leads to a non-zero pion chemical potential, which is one of the possible causes of the difference in the dilepton yields . To study the effect of non-zero pion chemical potential in the framework of a hydrodynamical expansion we made a new version of the hydrodynamic model where the conserved baryon number is replaced by a conserved pion number<sup>1</sup><sup>1</sup>1We define the conserved pion number as $`𝒩_\pi =n_\pi +2n_\rho `$, where $`n_\pi `$ and $`n_\rho `$ are the actual number densities of pions and rho-mesons respectively..
As mentioned, the only dilepton production channel we consider is $`\pi \pi `$ annihilation. The cross section for this process used in the transport description is given in ref. , whereas the thermal production rate used in the hydrodynamical description is the one calculated by Gale and Kapusta .
We have checked the consistency of our calculations by imposing periodic boundary conditions to our models, initializing the systems in thermal and chemical equilibrium and checking that the equilibrium is maintained. In this case the dilepton emission from all three models is identical and corresponds to the thermal rate at this temperature.
In the simulations of the actual heavy-ion collisions, the initial state of the evolution is chosen to reproduce the observed hadron spectra . However, in the present calculations we use the same initial state for all models: a spherical fireball with a radius of $`r=8`$ fm in thermal and chemical equilibrium with no initial flow. The density profile is assumed to be Woods-Saxon with the maximum energy density of $`ϵ=0.5`$ GeV/fm<sup>3</sup>, which corresponds to a maximum initial temperature of $`T=218`$ MeV, pion number density $`n_\pi =0.38`$ fm<sup>-3</sup> and rho number density $`n_\rho =0.20`$ fm<sup>-3</sup>. Initially the system contains 560 pions and 260 rhos. The edge of the system is defined by the radius where the temperature drops below the decoupling temperature of the hydrodynamic model. This temperature is set to be $`T_{dec}=120`$ MeV in both versions of the hydrodynamic model, whereas there is no need for a decoupling temperature in the transport model. In the pion number conserving hydro, decoupling at $`T_{dec}=120`$ MeV leads to an average pion chemical potential on the decoupling surface of $`\mu _\pi =75`$ MeV.
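The initial particle content quoted above can be sketched numerically. The conserved pion number follows directly from the quoted multiplicities and the footnote definition $`𝒩_\pi =n_\pi +2n_\rho `$, while the Woods-Saxon integration below uses an assumed half-density radius and surface thickness, since only the maximum density and the 8 fm extent are specified in the text:

```python
import math

# Conserved pion number of the initial fireball, from the quoted
# multiplicities (560 pions, 260 rhos) and N_pi = n_pi + 2 n_rho:
N_pi = 560 + 2 * 260    # conserved in the transport and pion-conserving hydro

# Woods-Saxon density profile: only the maximum pion density (0.38 fm^-3)
# and the 8 fm extent are quoted; the half-density radius R_ws and surface
# thickness a below are illustrative assumptions, not values from the paper.
n0 = 0.38               # fm^-3, maximum pion number density
R_ws, a = 6.5, 0.5      # fm (assumed)

def n_pi(r):
    return n0 / (1.0 + math.exp((r - R_ws) / a))

# pion number inside the 8 fm edge: integrate 4 pi r^2 n(r) dr
dr = 0.001
pions = sum(4.0 * math.pi * (i * dr) ** 2 * n_pi(i * dr) * dr
            for i in range(1, int(8.0 / dr) + 1))
```

With these assumed profile parameters the integral gives a few hundred pions, the same order as the quoted 560; the exact multiplicity depends on the profile parameters actually used in the simulations.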
## 3 Results
Since the simulations of the actual heavy-ion collisions are tuned to reproduce the observed hadron spectra, we calculate the pion spectra as well. The resulting $`p_t`$ spectrum of pions is shown in fig. 2. In the hydrodynamic model with zero chemical potential the pion number is not conserved and the number of final pions is smaller than in the other two models. Another difference is that the effective equation of state of the transport model is softer than that of the hydrodynamic model. This is manifested in the slope of the $`p_t`$ spectrum, which is steeper for the transport calculation than for the hydrodynamic calculation with zero chemical potential.
The pion number conserving hydro gives a $`p_t`$ slope very similar to that of the transport model. The steeper slope, compared to the zero chemical potential hydro, is easily understood. When the system dilutes and pion number is conserved, a larger fraction of the energy is stored in the mass of the pions than in the case of zero chemical potential. This leads to a faster decrease of temperature, and the decoupling temperature is reached at an earlier stage of the evolution when the flow is less developed.
Fig. 2 depicts the distribution of lepton pairs originating from $`\pi \pi `$ annihilations during the system evolution. The most striking feature is that the difference between the two hydrodynamical models is tiny. The effect of increasing chemical potential, and thus larger pion density, is counterbalanced by the shorter lifetime and faster cooling of the system, leading to practically indistinguishable dilepton yields. However, the pion spectra from these two models are different. If the models are required to produce similar pion spectra, the initial state of the model with zero chemical potential should be larger and have a lower initial temperature than the pion number conserving model. This difference in the initial state would also lead to different dilepton production.
Since the transport model and the hydrodynamic model lead to similar pion spectra, their dilepton yields can be compared without reservations. The difference between these models is similar to that seen in the attempts to reproduce the CERES data. This supports our hypothesis that details of expansion dynamics do have a significant effect on dilepton production. The shapes of the distributions suggest that the system in the transport description cools faster but lives longer than in the hydrodynamical one. Whether this is the case remains to be investigated in more detail.
We have demonstrated that the effect of the expansion dynamics on dilepton production is visible and that the non-zero pion chemical potential is not the main cause of this effect. At the present stage of the work there are still many open questions like the temperature evolution in the transport description and when and where the dileptons are emitted. It also has to be checked how the distributions change if all the models are required to produce similar pion spectra.
## Acknowledgements
This work was supported by the Director, Office of Science, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, and by the Office of Basic Energy Sciences, Division of Nuclear Sciences, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
|
no-problem/0002/hep-lat0002030.html
|
ar5iv
|
text
|
# Supersymmetric Yang-Mills Theories from Domain Wall Fermions<sup>1</sup><sup>1</sup>1Talk given by D.K. at CHIRAL ’99, Taipei, Sep. 13-18, 1999.
## 1 Chirality and accidental supersymmetry
There has been intense interest in supersymmetry (SUSY) in the past two decades. The past several years have witnessed many interesting and compelling speculations about strongly coupled SUSY theories. It would be interesting to test these conjectures on the lattice. However, since Poincaré symmetry does not exist on the lattice, supersymmetry does not either. In principle, one could tune lattice theories to the SUSY critical point. However, just as Poincaré symmetry is recovered without fine tuning, one might hope that SUSY could be similarly obtained in the continuum limit.
The secret to why the Poincaré symmetric point takes no work to find is that the imposition of hypercubic symmetry and gauge symmetry ensures that Poincaré symmetry is an accidental symmetry. All allowed operators that violate the symmetry are irrelevant. Therefore the continuum limit of the theory automatically exhibits more symmetry than it possesses at finite lattice spacing. If supersymmetry could arise as an accidental symmetry as well, then simulation of such theories would not entail fine-tuning of parameters, and would be relatively simple.
The outlook for this approach in SUSY theories with scalar fields is poor: scalar mass terms violate SUSY, are relevant, and cannot be forbidden by any symmetry, unless the scalars are Goldstone bosons. I will return to this issue later, when we talk about $`N=2`$ super Yang-Mills (SYM) theories.
However, there are some SUSY theories which do not entail scalars. Of particular interest is $`N=1`$ SYM theory in $`d=4`$ dimensions. In this theory, the only relevant SUSY violating parameter is the gaugino mass, which can be forbidden by a discrete chiral symmetry. Thus if one can realize the chiral symmetry on the lattice, $`N=1`$ SUSY can arise as an accidental symmetry in the continuum limit. This is where domain wall fermions come in, for which chiral symmetry violation (for weak coupling) tends to zero exponentially fast in the domain wall separation. In this talk we clarify how domain wall fermions may be used to study SYM theories <sup>2</sup><sup>2</sup>2It has long been recognized that SYM theory can arise accidentally as the low energy limit of a theory with gauge and chiral symmetry, and the correct fermion representation . Lattice implementation of $`N=1`$ SYM with Wilson fermions and fine-tuning is discussed in . Using domain wall fermions for simulation of $`N=1`$ SYM was suggested in , but here we follow a different approach. Our results parallel prior work on $`N=1`$ SYM theories in the overlap formulation , which is equivalent to domain walls with infinite separation.. Throughout this talk we will actually discuss only the continuum version, as it is simpler to formulate, if less rigorous, and there are no technical or conceptual obstacles to translating this work to the lattice. We address in turn $`N=1`$ in $`d=4`$, $`N=1`$ in $`d=3`$, and $`N=2`$ in $`d=4`$.
## 2 $`N=1`$ SUSY Yang-Mills theory in $`d=4`$ dimensions
$`N=1`$ SUSY Yang-Mills theory in $`d=4`$ Minkowski space consists of a gauge group with a massless adjoint Majorana fermion, the gaugino. It is expected to exhibit all sorts of fascinating features, such as confinement, discrete vacua, domain walls, and excitations on these domain walls which transform as fundamentals under the gauge group .
The gaugino should arise as an edge state in a 5-d theory of domain wall fermions. The only subtlety in using the machinery of domain wall fermions is how to obtain a single Majorana fermion in Minkowski space, since without modification, the theory gives rise to massless Dirac fermions in Euclidian space.
Let us first review how a massless Dirac fermion arises in the domain wall approach. Consider a Dirac fermion in a 5-dimensional Euclidian continuum, where the fifth dimension is compact: $`x_5=R\theta `$, $`\theta \in (-\pi ,\pi ]`$. The mass of the fermion is given by a periodic step function
$`m(x_5)=Mϵ(\theta )=\{\begin{array}{cc}+M\hfill & -\pi /2<\theta \le \pi /2\hfill \\ -M\hfill & \mathrm{otherwise}\hfill \end{array}`$ (1)
We introduce gauge fields independent of the coordinate $`x_5`$, so that the Euclidian action is given by
$`S_5={\displaystyle \int d^5x\,i\overline{\mathrm{\Psi }}D(x_5)\mathrm{\Psi }},D(x_5)=\left[\text{ /}D_4+\gamma _5\partial _5+m(x_5)\right],\gamma _\mu =\gamma _\mu ^{\dagger }.`$ (2)
Here $`\text{ /}D_4`$ is the usual $`d=4`$ gauge covariant derivative for a Dirac fermion in the adjoint representation. It is convenient to expand $`\mathrm{\Psi }`$ and $`\overline{\mathrm{\Psi }}`$ as
$`\begin{array}{ccc}\hfill \mathrm{\Psi }(x_\mu ,x_5)& =& \sum _n\left[b_n(x_5)P_++f_n(x_5)P_{-}\right]\psi _n(x_\mu ),\hfill \\ \hfill \overline{\mathrm{\Psi }}(x_\mu ,x_5)& =& \sum _n\overline{\psi }_n(x_\mu )\left[b_n(x_5)P_{-}+f_n(x_5)P_+\right].\hfill \end{array}`$ (4)
Here $`P_\pm =(1\pm \gamma _5)/2`$ are the chiral projection operators, $`\psi _n`$ and $`\overline{\psi }_n`$ are ordinary 4-d Dirac spinors, and $`b_n`$, $`f_n`$ form a complete basis of periodic functions satisfying the eigenvalue equations
$`[\partial _5+m(x_5)]b_n=\mu _nf_n,[-\partial _5+m(x_5)]f_n=\mu _nb_n.`$ (5)
With this expansion, the action $`S_5`$ may be rewritten as a theory of an infinite number of 4-d flavors with masses $`\mu _n`$,
$`S_5={\displaystyle \underset{n}{\sum }}{\displaystyle \int d^4x\,\overline{\psi }_n(x)\left[i\text{ /}D_4+i\mu _n\right]\psi _n(x)}.`$ (6)
It is straightforward to solve the above equations for $`\mu _n`$. First of all, one finds zero modes
$`\mu _0=0,b_0(x_5)=e^{-\int ^{x_5}m(y)𝑑y},f_0(x_5)=e^{+\int ^{x_5}m(y)𝑑y}.`$ (7)
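As a quick consistency check (a one-line verification, not spelled out in the text), substituting these profiles into the first-order equations above gives

```latex
\left[\partial_5 + m(x_5)\right] b_0
  = \bigl(-m(x_5) + m(x_5)\bigr)\, e^{-\int^{x_5} m(y)\,dy} = 0 ,
\qquad
\left[-\partial_5 + m(x_5)\right] f_0
  = \bigl(-m(x_5) + m(x_5)\bigr)\, e^{+\int^{x_5} m(y)\,dy} = 0 ,
```

and both solutions are single-valued on the circle because the integral of $`m(y)`$ over one full period vanishes: the $`+M`$ and $`-M`$ regions have equal length.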
Note that $`b_0`$ is localized at $`\theta =-\pi /2`$, while $`f_0`$ is localized at $`\theta =+\pi /2`$. Nonzero modes have wave functions which are linear combinations of sines and cosines appropriately matched at the locations of the domain walls. The corresponding eigenvalues are doubly degenerate
$`\mu _n=\sqrt{M^2+n^2/R^2},n=\pm 1,\pm 2,\mathrm{\dots }`$ (8)
If instead of having a kink-like mass profile for the $`\mathrm{\Psi }`$ fermions we had a constant mass $`M`$ (again with periodic boundary conditions), the corresponding eigenvalues $`\overline{\mu }`$ would be
$`\overline{\mu }_0=M,\overline{\mu }_n=\sqrt{M^2+n^2/R^2},n=\pm 1,\pm 2,\mathrm{\dots }`$ (9)
Note that for $`n\ne 0`$, the eigenvalues $`\mu _n`$ and $`\overline{\mu }_n`$ are equal. It follows that the ratio of fermion determinants for a kink and a constant mass is given by (assuming appropriate regularization)
$`{\displaystyle \frac{det\left[i(\text{ /}D_4+\gamma _5\partial _5+Mϵ(\theta ))\right]}{det\left[i(\text{ /}D_4+\gamma _5\partial _5+M)\right]}}={\displaystyle \frac{det\left[i\text{ /}D_4\right]}{det\left[i(\text{ /}D_4+M)\right]}}`$ (10)
Note that the right hand side of the above equation corresponds to a massless Dirac fermion and an uninteresting Pauli-Villars field. The left and right handed components of the massless Dirac fermion correspond to the edge states $`b_0`$ and $`f_0`$ (see Fig. 1.). This method for obtaining a single massless Dirac fermion is robust when transcribed on the lattice : the beauty of the method is that there is no chirality in 5-d, and if one shifts or renormalizes the fermion mass term in the 5-d theory by $`\delta m`$ (with $`|\delta m|<M`$), the effective 4-d theory still has a massless mode. That is because there is a gap in the bulk, so that $`b_0`$ and $`f_0`$ fall off exponentially, while a chiral symmetry breaking fermion mass must be proportional to the (exponentially small) overlap of $`b_0`$ and $`f_0`$.
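The zero mode and the gap structure of Eqs. (7)-(9) are easy to verify numerically. The sketch below discretizes $`A=\partial _5+m(x_5)`$ on a periodic grid with a one-sided difference (a deliberately doubler-free choice, in the spirit of a Wilson term; this illustrative discretization is not the one used in the lattice constructions cited in the text) and computes its singular values, which approximate $`|\mu _n|`$; the values of $`R`$, $`M`$, and the grid size are arbitrary:

```python
import numpy as np

# One-sided (doubler-free) discretization of A = d/dx5 + m(x5) on the
# circle x5 = R*theta, with the kink mass profile of Eq. (1).  The singular
# values of A approximate |mu_n|; R, M, and N are arbitrary choices.
R, M, N = 1.0, 2.0, 800
h = 2.0 * np.pi * R / N
theta = -np.pi + (2.0 * np.pi / N) * (np.arange(N) + 0.5)  # cell centers
m = np.where(np.abs(theta) <= np.pi / 2, M, -M)

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -1.0 / h + m[i]
    A[i, (i + 1) % N] = 1.0 / h       # forward difference, periodic wrap

mu = np.sort(np.linalg.svd(A, compute_uv=False))  # ascending |mu_n|
mu1_exact = np.sqrt(M**2 + 1.0 / R**2)            # first excited level, Eq. (8)
# mu[0] ~ 0 is the zero mode; mu[1], mu[2] form the doubly degenerate n = 1 pair
```

The single near-zero singular value is the chiral edge mode, and its separation from the bulk gap $`\sqrt{M^2+1/R^2}`$ is the robustness discussed above.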
In order to simulate $`N=1`$ SYM theory, we need to impose a Majorana condition on $`\psi _0`$. Note that in 4-d Minkowski space, the Majorana condition is $`\psi =C\overline{\psi }^T`$, where $`C`$ is the charge conjugation matrix, satisfying $`C^{-1}\gamma ^\mu C=-\gamma ^{\mu T}`$ and $`C^{-1}T_aC=-T_a^T`$ for generators $`T_a`$ of real or pseudo-real representations of the gauge group. In Minkowski space charge conjugation interchanges left- and right-handed particles. In our Euclidian domain wall theory, the left- and right-handed modes live on the two different kinks. This suggests that the correct “Majorana” condition for the 5-d Euclidian theory is to define a 5-d reflection $`\mathcal{R}_5:\theta \to -\theta `$, which interchanges the two chiral zeromodes, and to impose the constraint on the 5-d Dirac fermions
$`\mathrm{\Psi }=\mathcal{R}_5C\overline{\mathrm{\Psi }}^T`$ (11)
The 5-d path integral then results in a fermion pfaffian, rather than a fermion determinant:
$`Z_5=\mathrm{Pf}\left[i\mathcal{R}_5C\left(\text{ /}D_4+\gamma _5\partial _5+m(x_5)\right)\right].`$ (12)
It is straightforward to check that we are taking the pfaffian of an antisymmetric operator, as is required <sup>3</sup><sup>3</sup>3Actually, the operator is only antisymmetric if the fermion is in a real representation, such as an adjoint, instead of a pseudoreal representation. Thus our method is consistent with Witten’s result that a theory of a single Weyl pseudoreal fermion is sick.. In terms of the mode expansion in 4-d fields, note that $`\mathcal{R}_5`$ interchanges $`b_n(x_5)\leftrightarrow f_n(x_5)`$, so the constraint yields
$`\begin{array}{ccc}\hfill \mathrm{\Psi }& =& \sum _n\left[b_n(x_5)P_++f_n(x_5)P_{-}\right]\psi _n(x_\mu )\hfill \\ & =& \sum _n\left[b_n(x_5)P_++f_n(x_5)P_{-}\right]C\overline{\psi }_n^T(x_\mu )=\mathcal{R}_5C\overline{\mathrm{\Psi }}^T\hfill \end{array}`$ (15)
which implies the conventional (Euclidian) Majorana constraint on the 4-d fermion fields:
$`\psi _n(x_\mu )=C\overline{\psi }_n^T(x_\mu ).`$ (16)
Using the same technique as in the Dirac case to remove bulk modes, we arrive at a formula for the pfaffian of a massless Majorana fermion:
$`{\displaystyle \frac{\mathrm{Pf}\left[i\mathcal{R}_5C(\text{ /}D_4+\gamma _5\partial _5+Mϵ(\theta ))\right]}{\mathrm{Pf}\left[i\mathcal{R}_5C(\text{ /}D_4+\gamma _5\partial _5+M)\right]}}={\displaystyle \frac{\mathrm{Pf}\left[iC\text{ /}D_4\right]}{\mathrm{Pf}\left[iC(\text{ /}D_4+M)\right]}}`$ (17)
This formula is easily extended to the lattice by replacing the Dirac action by the Wilson action in all five dimensions . As mentioned before, this leads to an answer identical to that derived by Neuberger , although derived in a somewhat different way.
By using Neuberger’s closed expression for the domain wall determinant, it is possible to show that the lattice version of the above pfaffian is positive definite, and hence can be computed unambiguously as the square root of the Dirac determinant <sup>4</sup><sup>4</sup>4This observation was made to DK by Y. Kikukawa.. Thus the domain wall approach has an added advantage over the Wilson fermion strategy, which suffers from a pfaffian which is not positive definite . It is therefore feasible with present technology to begin exploring this interesting theory.
## 3 $`N=1`$ SUSY Yang-Mills theory in $`d=3`$ dimensions
$`N=1`$ SYM in $`d=3`$ is an interesting theory as well, especially in light of Witten’s recent discussion of dynamical SUSY breaking . Once again the spectrum consists of a gauge field and a Majorana fermion, the gaugino. There are two independent relevant operators that break SUSY: the gaugino mass and the (quantized) Chern-Simons term, with one linear combination of the two being supersymmetric. In what follows we will assume that for some gauge groups it is possible to formulate the lattice theory such that the coefficient of the Chern-Simons term in the effective 3-d continuum theory vanishes (work in progress here!). In that case, the only relevant SUSY breaking operator is once again the gaugino mass. If we can realize chiral symmetry and gauge symmetry with a Majorana fermion, SUSY will once again arise in the continuum as an accidental symmetry, modulo the unresolved issue of the Chern-Simons term.
We saw above that without constraints, a 5-d domain wall theory led to a massless Dirac fermion in 4-d; to end up with a Majorana fermion we had to impose a generalization of the Majorana constraint, which effectively took the square root of the 5-d domain wall determinant. However, following the same procedure in one fewer dimensions, a 4-d domain wall system with a Dirac fermion gives rise to two massless Dirac fermions in 3-d, four times as many degrees of freedom as we wish! In particular,
$`{\displaystyle \frac{det\left[i(D_i\gamma _i+\gamma _4\partial _4+Mϵ(\theta ))\right]}{det\left[i(D_i\gamma _i+\gamma _4\partial _4+M)\right]}}={\displaystyle \frac{\left[det\,i\text{ /}D_3\right]^2}{\left[det\,i(\text{ /}D_3+M)\right]^2}}`$ (18)
where on the left hand side, the index $`i`$ runs from 1 to 3 and the $`\gamma `$ matrices are $`4\times 4`$; on the right hand side, $`\text{ /}D_3`$ is the 3-d Dirac operator ($`2\times 2`$ dimensional in spinor space). Therefore it is clear we need to impose two binary constraints on the system.
First of all, instead of using 4-d Dirac domain wall fermions, we can impose the 4-d Euclidian Majorana constraint, $`\psi =C_4\overline{\psi }^T`$, where $`C_4`$ is a 4-d charge conjugation matrix. This naturally gives rise to a 3-d theory with two Majorana fermions localized at the two kinks. To reduce the spectrum to a single Majorana fermion in 3-d we use the trick of the previous section and constrain the field further to be Majorana under the 3-d charge conjugation matrix $`C_3`$, and a simultaneous reflection $`\mathcal{R}_4`$ in the compact fourth dimension. Thus the simultaneous constraints are:
1. $`\mathrm{\Psi }(x_i,x_4)=C_4\overline{\mathrm{\Psi }}^T(x_i,x_4)`$,
2. $`\mathrm{\Psi }(x_i,x_4)=\mathcal{R}_4C_3\overline{\mathrm{\Psi }}^T(x_i,x_4)`$
To be explicit, one can choose the $`\gamma `$ matrix basis:
$`\gamma _i=\sigma _1\otimes \sigma _i,\gamma _4=\sigma _3\otimes 1,C_3=1\otimes \sigma _2,C_4=\sigma _1\otimes \sigma _2.`$ (19)
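The algebra of this basis is easy to verify by direct matrix multiplication. The check below assumes the Euclidian conventions $`\{\gamma _\mu ,\gamma _\nu \}=2\delta _{\mu \nu }`$ and $`C^{-1}\gamma C=-\gamma ^T`$ (consistent with the 4-d relation quoted in Sec. 2, but an assumption here), and confirms that $`C_4`$ works for all four $`\gamma `$'s while $`C_3`$ works for the three 3-d $`\gamma `$'s:

```python
import numpy as np

# Direct check of the basis in Eq. (19); tensor products via np.kron.
s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

g = [np.kron(s1, si) for si in (s1, s2, s3)] + [np.kron(s3, s0)]  # gamma_1..4
C3 = np.kron(s0, s2)
C4 = np.kron(s1, s2)

# Euclidian Clifford algebra {gamma_mu, gamma_nu} = 2 delta_{mu nu}
for a in range(4):
    for b in range(4):
        acom = g[a] @ g[b] + g[b] @ g[a]
        assert np.allclose(acom, 2 * (a == b) * np.eye(4))

# C^{-1} gamma C = -gamma^T: C4 for all four gammas, C3 for the 3-d ones
for a in range(4):
    assert np.allclose(np.linalg.inv(C4) @ g[a] @ C4, -g[a].T)
for a in range(3):
    assert np.allclose(np.linalg.inv(C3) @ g[a] @ C3, -g[a].T)
```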
It isn’t obvious how to simultaneously impose these two constraints until one uses constraint (1) to replace constraint (2) by
* $`\mathrm{\Psi }(x_i,x_4)=\mathcal{R}_4C_3C_4^{-1}\mathrm{\Psi }(x_i,x_4)`$
This last constraint, relating $`\mathrm{\Psi }`$ to its reflection, tells us that we are living on an orbifold — only half the world we were considering represents independent degrees of freedom. So what we do is impose constraint (1) and compute the path integral over half our original space, namely for $`\theta \in (0,\pi ]`$ with suitable boundary conditions at the fixed points of $`\mathcal{R}_4`$:
$`\left[1-C_3C_4^{-1}\right]\mathrm{\Psi }(x_i,x_4)|_{x_4=0,\pi R}=0`$ (20)
Then one finds the desired result,
$`{\displaystyle \frac{\mathrm{Pf}\left[iC_4(D_i\gamma _i+\gamma _4\partial _4+Mϵ(\theta ))\right]}{\mathrm{Pf}\left[iC_4(D_i\gamma _i+\gamma _4\partial _4+M)\right]}}={\displaystyle \frac{\mathrm{Pf}\left[iC_3\text{ /}D_3\right]}{\mathrm{Pf}\left[iC_3(\text{ /}D_3+M)\right]}},\theta \in (0,\pi ].`$ (21)
We have not yet completed our analysis of the reality/positivity of the 4-d pfaffians on the lattice, and the related issue of the Chern-Simons term in the effective 3-d theory.
## 4 $`N=2`$ SUSY Yang-Mills theory in $`d=4`$ dimensions
$`N=2`$ SYM in $`d=4`$ would be fascinating to simulate on the lattice, since in the continuum it exhibits a vast array of interesting phenomena . One might think that it is impossible to do without fine tuning, however, because of the scalar fields in the $`N=2`$ gauge multiplet. However, a promising idea is to formulate the theory first as an $`N=1`$ SUSY theory in $`d=6`$ (starting from a domain wall theory in $`d=7`$) . The light spectrum of the $`d=6`$ theory, with UV cutoff $`\mathrm{\Lambda }_6`$, would consist of gauge fields and a Weyl fermion. Then at a scale $`\mathrm{\Lambda }_4\ll \mathrm{\Lambda }_6`$, one compactifies to $`d=4`$: the extra two gauge boson polarizations become the complex scalar of the $`d=4`$, $`N=2`$ gauge multiplet, while the Weyl fermion in $`d=6`$ becomes the required two Weyl fermions in $`d=4`$. Furthermore, all gauge, $`\varphi ^4`$ and Yukawa couplings in the $`d=4`$ effective theory are derived from the $`d=6`$ gauge coupling $`g_6`$.
This approach is made respectable by the fact that in the continuum, the $`N=1`$ SUSY algebra in $`d=6`$ reduces under compactification to the $`N=2`$ SUSY algebra in $`d=4`$ .
Of course, the idea is still to have the target $`N=2`$ theory arise as an accidental symmetry in the effective theory. What one must try to do then is to take $`\mathrm{\Lambda }_4`$ sufficiently smaller than $`\mathrm{\Lambda }_6`$ so that by the time one has scaled down to $`\mathrm{\Lambda }_4`$ and passed over to the $`d=4`$ effective theory, the theory is “supersymmetric enough” to ensure that the noxious scalar masses radiatively generated in the effective $`d=4`$ theory are “small enough”.
How small is “small enough”? To study the $`N=2`$ theory in the strongly coupled region, where it is interesting, we need the scalar mass $`m_s`$ to satisfy $`m_s\ll \mathrm{\Lambda }_{SQCD}`$, where $`\mathrm{\Lambda }_{SQCD}`$ is the scale where the $`N=2`$ gauge interactions get strong.
Unfortunately this is impossible to achieve. The $`N=1`$ supersymmetry in the $`d=6`$ theory is only a symmetry of the operators of leading dimension; SUSY is violated by higher dimension operators, suppressed by powers of $`\mathrm{\Lambda }_6`$. Thus the SUSY violating radiatively generated scalar masses in the $`d=4`$ effective theory will be suppressed by powers of $`\mathrm{\Lambda }_4/\mathrm{\Lambda }_6`$. We can suppress these terms as much as we want, by taking this ratio to be very small! However, the mass scale $`\mathrm{\Lambda }_{SQCD}`$ is always smaller as it is exponentially small in $`\mathrm{\Lambda }_4/\mathrm{\Lambda }_6`$.
To understand this, define the dimensionless gauge coupling $`\widehat{g}_6=g_6\mathrm{\Lambda }_6`$ in the $`d=6`$ theory. Since we begin with a weakly coupled domain wall fermion in $`d=7`$, $`\widehat{g}_6\lesssim 1`$. The coupling of the $`d=4`$ theory renormalized at the compactification scale $`\mathrm{\Lambda }_4`$ is then given by $`g_4=g_6\mathrm{\Lambda }_4=\widehat{g}_6\mathrm{\Lambda }_4/\mathrm{\Lambda }_6`$. Therefore
$`\mathrm{\Lambda }_{SQCD}\sim \mathrm{\Lambda }_4e^{-8\pi ^2/g_4^2}\sim \mathrm{\Lambda }_4e^{-8\pi ^2/\widehat{g}_6^2(\mathrm{\Lambda }_6/\mathrm{\Lambda }_4)^2}\sim \mathrm{\Lambda }_4e^{-(\mathrm{\Lambda }_6/\mathrm{\Lambda }_4)^2}.`$ (22)
We see that while we obtain scalar masses suppressed by powers of $`\mathrm{\Lambda }_4/\mathrm{\Lambda }_6`$, the strong interaction scale $`\mathrm{\Lambda }_{SQCD}`$ is exponentially suppressed in the same ratio. It follows that one cannot study the $`N=2`$ theory in the interesting strongly interacting regime starting from a weakly coupled domain wall in $`d=7`$, without fine tuning.
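Putting illustrative numbers into this estimate makes the mismatch stark; the ratio $`\mathrm{\Lambda }_6/\mathrm{\Lambda }_4=10`$ and the coupling $`\widehat{g}_6=1`$ below are invented for illustration:

```python
import math

# Illustrative numbers for the scaling above; the ratio and the 6-d
# coupling are invented for illustration, not values from the text.
ratio = 10.0          # Lambda_6 / Lambda_4
g6_hat = 1.0

# SUSY-violating scalar masses are suppressed only by powers of the ratio,
# e.g. (Lambda_4/Lambda_6)^4:
log10_power_suppression = -4 * math.log10(ratio)

# while Lambda_SQCD/Lambda_4 ~ exp(-8 pi^2 ratio^2 / g6_hat^2) is
# exponentially small in the same ratio:
log10_sqcd = -8 * math.pi**2 * ratio**2 / (g6_hat**2 * math.log(10))

# log10_sqcd is of order -3000, so m_s << Lambda_SQCD cannot be arranged:
# any power-suppressed scalar mass dwarfs the strong-interaction scale.
```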
The above argument does not rule out studying $`N=2`$ SYM in $`d=3`$ by compactifying a $`d=4`$ theory with approximate $`N=1`$ supersymmetry, since the gauge coupling in $`d=3`$ does not run logarithmically. However, this $`d=3`$ theory has no ground state in the continuum, and so it does not seem interesting to simulate.
## 5 Conclusions
Domain wall fermions offer a compelling advantage over Wilson fermions in simulating $`N=1`$ supersymmetric Yang-Mills theories on the lattice in $`d=4`$ and $`d=3`$. In each case, supersymmetry arises as an accidental symmetry, without fine-tuning. Both of these theories should be interesting to study in the near future.
As for SUSY theories with scalars: it is hard to imagine how one can evade fine-tuning — after all, if one did have such a method, it would provide an alternative to SUSY as a solution to the hierarchy problem!
It would be interesting to study perfect supersymmetric actions to try to extract the analogue of a Ginsparg-Wilson relation for supersymmetry, for then one might identify a clever approach to SUSY theories with scalars in the spectrum, one that minimizes the fine-tuning problems.
# What Damped Ly-alpha Systems Tell Us About the Radial Distribution of Cold Gas at High Redshift
## 1 Introduction
This paper is the first in a series of papers that examines the properties of Damped Lyman-$`\alpha `$ Systems (DLAS) in the context of Cold Dark Matter (CDM) based Semi-Analytic Models (SAMs). Traditionally, DLAS are believed to be the progenitors of present day spiral galaxies (Wolfe 1995) and thus any model of galaxy formation must also account for their properties. The current wealth of observational data on DLAS includes their number density, column density distribution, metallicities, and kinematic properties — see Lanzetta, Wolfe & Turnshek (1995); Storrie-Lombardi, Irwin, & McMahon (1996); Storrie-Lombardi & Wolfe (2000); Pettini et al. (1994); Lu et al. (1996); Pettini et al. (1997); Prochaska & Wolfe (1997b, 1998, 1999, 2000); Wolfe & Prochaska (2000). These data potentially provide important constraints on cosmology and theories of galaxy formation. Here we especially focus on the new kinematic data.
Previously, the number density of DLAS has been used to provide constraints on cosmological models (Mo & Miralda-Escude 1994; Kauffmann & Charlot 1994; Ma & Bertschinger 1994; Klypin et al. 1995). These studies assumed a simple correspondence between collapsed dark matter halos and cold gas to obtain upper limits on the amount of cold gas that could be present. Gas cooling, star formation, supernovae feedback, and ionization were neglected. A different approach was used by Lanzetta et al. (1995); Wolfe et al. (1995); Pei & Fall (1995); Pei, Fall & Hauser (1999), in which the observed metallicities and *observed* number densities of the DLAS were used to model global star formation and chemical enrichment in a self-consistent way. The latter approach was set in a classical “closed-box” style framework rather than a cosmological context.
Clearly, in order to model DLAS realistically one needs to include the astrophysical processes of gas dynamics and cooling, star formation, and chemical enrichment within a cosmological framework. However, this is a challenge with our current theoretical and numerical capability. Cosmological $`N`$-body simulations with hydrodynamics are hampered by the usual limitations of volume and resolution. This is apparent in, for example, the recent work by Gardner et al. (1999), in which it was found that even rather high-resolution hydrodynamical simulations could not account for most of the observed DLAS. Gardner et al. (1999) concluded that the majority of the damped Ly-$`\alpha `$ absorption must arise from structures below the resolution of their simulations. In addition, it is well known that such simulations fail to reproduce the sizes and angular momenta of present day observed spiral galaxies (Steinmetz 1999). One might therefore be suspicious of the accuracy of their representation of the spatial distribution of the cold gas that gives rise to DLAS at high redshift. Because observational samples of DLAS are cross-section weighted, these properties are likely to introduce crucial selection effects. Semi-analytic approaches can deal with nearly arbitrary resolution and volumes, but are limited in the sophistication and accuracy of their physical “recipes”. In particular, most previous SAMs have focussed on the bulk properties of galaxies, and have not attempted to model the spatial location of galaxies relative to one another or the spatial distribution of gas and stars within galaxies.
The only previous attempt to model the properties of DLAS in a CDM framework is the work of Kauffmann (1996) (hereafter K96). In K96 the radial distribution of cold gas in galactic discs was modelled by assuming that the initial angular momentum of the gas matched that of the halo, and that angular momentum was conserved during the collapse. Star formation was then modelled using the empirical law of Kennicutt (1989, 1998), in which the star formation is a function of the surface density of the gas, and cuts off below a critical threshold density. K96 then showed that the number density, column density distribution, and metallicities of observed DLAS could be reasonably well reproduced within the Standard Cold Dark Matter (SCDM) cosmology, and predicted the distribution of circular velocities of discs that would give rise to DLAS. Assuming that each observed DLAS corresponds to a single galactic disc, this can then be compared with the observed distribution of velocity widths derived from the kinematics of unsaturated, low-ionization metal lines (Prochaska & Wolfe 1997b, 1998).
Prochaska & Wolfe found the velocity distribution predicted by K96 to be strongly inconsistent with their data. Furthermore Jedamzik & Prochaska (1998) showed that the thick rotating disc model favored by Prochaska & Wolfe (1997b) could only be reconciled with a finely tuned CDM model. But CDM actually predicts that halos will have much substructure, and Haehnelt, Steinmetz & Rauch (1998) found that large $`\mathrm{\Delta }v`$ velocity profiles consistent with those observed by Prochaska & Wolfe are produced in their very high-resolution hydrodynamical simulations. These profiles arose not from the rotation of a single disc, but from lines of sight intersecting multiple proto-galactic “clumps”. Subsequently, McDonald & Miralda-Escudé (1999) also showed with a simple analytical model that DLAS produced by intersection with a few gas clouds could create kinematics consistent with the observations in a CDM universe.
These results were encouraging but remain somewhat inconclusive. The hydro simulations do not allow the construction of a statistical, cross-section selected sample of DLAS, so it is difficult to assess how typical are the systems that they identified. In addition, these simulations were restricted to a single cosmology (SCDM), and did not include star formation or supernovae feedback. The generic difficulty of hydro simulations in producing reasonable discs at low redshift has already been noted. Therefore a further investigation using detailed semi-analytic models is worthwhile.
In the standard CDM picture of galaxy formation (based on White & Rees 1978; Blumenthal et al. 1984) gas is heated to the virial temperature when a halo forms and then cools and falls into the centre of the halo where it subsequently forms stars. In SAMs, which include the hierarchical formation of structure, this process happens numerous times as halos continually merge and form larger structures. This naturally results in halos that may contain many gaseous discs, each one associated with a sub-halo that prior to merging had been an independent halo. In this paper, we explore the possibility that such a scenario can account for the observed kinematics of the DLAS in a manner analogous to the proto-galactic clumps of Haehnelt et al. (1998) and the gas clouds of McDonald & Miralda-Escudé (1999). Here, however, the number densities, gas contents, and metallicities of these proto-galaxies are determined by the full machinery of the SAMs, which have been tuned to produce good agreement with the optical properties of galaxies at low and high redshift. We introduce new ingredients to describe the kinematics of satellite galaxies within dark matter halos, and the spatial distribution of cold gas in discs. We also include a model that is not based on the machinery of the SAMs to demonstrate that our general conclusions are not overly dependent on the specifics of how these processes are handled in the SAMs.
We start with a review of the observational properties of DLAS (section 2). Next, section 3 gives a brief description of the ingredients of the SAMs, and describes how we simulate the observational selection process for DLAS and produce simulated velocity profiles. We demonstrate in section 4.1 that gaseous discs with sizes determined by conservation of angular momentum fail to match the kinematic data, and then in section 4.2 show that acceptable solutions can be found if the gaseous discs have a large radial extent. Section 5 examines the sensitivity of our results to a number of model parameters. In section 6 we discuss the properties of the gas discs in our model and compare them to HI observations of local spirals and to the results of hydro simulations. Lastly we close with some discussion and conclusions.
## 2 Observational Properties of DLAS
DLAS are defined as those absorption systems that have a column density of neutral hydrogen in excess of $`2\times 10^{20}`$ atoms per square centimeter (Wolfe et al. 1986). Prochaska & Wolfe (1996, 1997a) found that the velocity profiles of low ionization state metal lines (Si<sup>+</sup>, Fe<sup>+</sup>, Cr<sup>+</sup>, etc.) trace each other well and therefore presumably the kinematics of the cold gas.
They therefore undertook to obtain a large sample of the kinematic properties of DLAS as measured by the associated metal lines and compared them to the predictions from a number of models. All of the observations were obtained with HIRES (Vogt 1992) on the 10m Keck I telescope. None of the DLAS were chosen with a priori kinematic information and the metal line profiles were selected according to strict criteria including that they not be saturated, therefore it is believed that the sample is kinematically unbiased. We have taken care that our model profiles match the resolution and signal-to-noise of the observations and that they conform to the same profile selection criteria.
Prochaska & Wolfe (1997b, hereafter PW97) developed four statistics to characterize the velocity profiles of the gas, which we also use to compare our models to the data set of 36 velocity profiles in Prochaska & Wolfe (1998) and Wolfe & Prochaska (2000). The four statistics as defined in PW97 are:
* $`\mathrm{\Delta }v`$, the velocity interval statistic, defined as the width containing $`90\%`$ of the optical depth.
* $`f_{mm}`$, the mean-median statistic, defined as the distance between the mean and the median of the optical depth profile, divided by $`\mathrm{\Delta }v/2`$.
* $`f_{edg}`$, the edge-leading statistic, defined as the distance between the highest peak and the mean, divided by $`\mathrm{\Delta }v/2`$.
* $`f_{2pk}`$, the two-peak statistic, defined as the distance of the second peak to the mean. Positive if on the same side of the mean as the first peak and negative otherwise.
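These four statistics can be evaluated directly from a sampled optical-depth profile. The sketch below implements the definitions above; the simple local-maximum peak finder and the toy two-Gaussian profile are illustrative assumptions, not the exact PW97 procedure:

```python
import numpy as np

def pw97_statistics(v, tau):
    """Compute the four PW97 statistics from an optical-depth profile tau
    sampled on a monotonically increasing velocity grid v (km/s)."""
    cum = np.cumsum(tau) / np.sum(tau)
    # Delta v: interval containing the central 90% of the optical depth
    lo = v[np.searchsorted(cum, 0.05)]
    hi = v[np.searchsorted(cum, 0.95)]
    dv = hi - lo
    v_mean = np.sum(v * tau) / np.sum(tau)        # optical-depth-weighted mean
    v_median = v[np.searchsorted(cum, 0.5)]
    f_mm = abs(v_mean - v_median) / (dv / 2)      # mean-median statistic
    # local maxima, ranked by optical depth (simplistic peak finder)
    pk = [i for i in range(1, len(tau) - 1)
          if tau[i] >= tau[i - 1] and tau[i] > tau[i + 1]]
    pk.sort(key=lambda i: tau[i], reverse=True)
    v_pk1 = v[pk[0]]
    f_edg = abs(v_pk1 - v_mean) / (dv / 2)        # edge-leading statistic
    f_2pk = None
    if len(pk) > 1:                               # two-peak statistic
        v_pk2 = v[pk[1]]
        same_side = (v_pk2 - v_mean) * (v_pk1 - v_mean) > 0
        f_2pk = (1 if same_side else -1) * abs(v_pk2 - v_mean) / (dv / 2)
    return dv, f_mm, f_edg, f_2pk

# toy two-component profile: strong component at 0, weaker one at +80 km/s
v = np.linspace(-150.0, 150.0, 601)
tau = np.exp(-0.5 * (v / 10) ** 2) + 0.5 * np.exp(-0.5 * ((v - 80) / 10) ** 2)
dv, f_mm, f_edg, f_2pk = pw97_statistics(v, tau)
```

For this toy profile the two peaks straddle the optical-depth-weighted mean, so $`f_{2pk}`$ comes out negative, exactly the sign convention defined above.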
The other observational data that our modeling of DLAS must conform to are the differential density distribution $`f(N)`$ (the number of absorbers per unit column density per unit absorption distance) and the distribution of metal abundances. The most recent determination of $`f(N)`$ comes from Storrie-Lombardi & Wolfe (2000). The metal abundances in damped systems at high redshift ($`z>2`$) have most recently been compiled by Pettini et al. (1997) and Prochaska & Wolfe (1999, 2000).
## 3 Models
### 3.1 Semi-Analytic Models
We use the semi-analytic models developed by the Santa Cruz group (Somerville 1997; Somerville & Primack 1999; Somerville, Primack & Faber 2000), which are based on the general approach pioneered by White & Frenk (1991), Kauffmann, White & Guiderdoni (1993) and Cole et al. (1994). Our analysis is based on the fiducial $`\mathrm{\Lambda }`$CDM model presented in Somerville et al. (2000, hereafter SPF), which was shown there to produce good agreement with many properties of the observed population of Lyman-break galaxies at redshift 2.5–4, and the global evolution with redshift of the star formation density, metallicity, and cold gas density of the Universe. Below we describe the aspects of the SAMs most relevant to modeling the DLAS, and refer the reader to SPF and Somerville & Primack (1999, hereafter SP) for further details.
#### 3.1.1 halos and sub-halos
The number density of virialized dark matter halos as a function of mass and redshift is given by an improved Press-Schechter model (Sheth & Tormen 1999). The merging history of each dark matter halo at a desired output redshift is then determined according to the prescription of Somerville & Kolatt (1999). As in SP, we assume that halos with velocity dispersions less than $`30\mathrm{km}\mathrm{s}^{-1}`$ are photoionized and that the gas within them cannot cool or form stars. This sets the effective mass resolution of our merger trees. When halos merge, the central galaxy in the largest progenitor halo becomes the new central galaxy and all other halos become “sub-halos”. These sub-halos are placed at a distance $`f_{mrg}r_{\mathrm{vir}}`$ from the centre of the new halo, where $`r_{\mathrm{vir}}`$ is the virial radius of the new halo. We will take $`f_{mrg}`$ to be 0.5 as in SP, but will examine the importance of this parameter in section 5.
After each merger event, the satellite galaxies fall towards the centre of the halo due to dynamical friction. We calculate the radial position of each satellite within the halo using the differential formula
$`r_{fric}{\displaystyle \frac{dr_{fric}}{dt}}=-0.42ϵ^{0.78}{\displaystyle \frac{Gm_{sat}}{V_c}}\mathrm{ln}(1+{\displaystyle \frac{m_h}{m_{sat}}}).`$ (1)
Here $`m_h`$ and $`m_{sat}`$ are the masses of the halo and satellite respectively, and $`ϵ`$ is a “circularity” parameter which describes the orbit of the satellite and is drawn from a flat distribution between 0.02 and 1 as suggested by N-body simulations (Navarro, Frenk & White 1995). The halos are assumed to have a singular isothermal density profile and to be tidally truncated where the density of the sub-halo is equal to that of its host at its current radius. When a sub-halo reaches the centre of the host, it is destroyed and the galaxy contained within it is merged with the central galaxy.
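Because the right-hand side of eqn (1) is a constant $`A`$ in radius, it integrates directly to a sinking time $`t=r_0^2/(2A)`$, with the sign chosen so that the satellite spirals inward. A small numerical sketch (the masses, radius, and circularity below are illustrative, not taken from the text):

```python
import math

G = 4.302e-6  # gravitational constant in kpc (km/s)^2 / Msun

def friction_time_gyr(r0_kpc, m_sat, m_halo, v_c, eps):
    """Sinking time from eqn (1): r*dr/dt = -A with A constant,
    so t = r0^2 / (2A).  Circular orbits, as assumed in the text."""
    A = 0.42 * eps**0.78 * G * m_sat / v_c * math.log(1.0 + m_halo / m_sat)
    t = r0_kpc**2 / (2.0 * A)      # in units of kpc / (km/s)
    return t * 0.978               # 1 kpc/(km/s) ~ 0.978 Gyr

# illustrative numbers: a 1e10 Msun satellite, 1e12 Msun halo, half-circularity
t_sink = friction_time_gyr(50.0, 1e10, 1e12, 156.0, 0.5)
```

Since $`A\mathrm{}m_{sat}\mathrm{ln}(1+m_h/m_{sat})`$ grows with satellite mass, massive satellites sink quickly while light ones can survive for many Gyr, which is why halos retain a population of sub-halos at any given epoch.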
#### 3.1.2 gas and stars
In our models, gas can occupy one of two phases, cold or hot. Halos contain hot gas, which is assumed to be shock-heated to the virial temperature of the halo and distributed like the dark matter in a singular isothermal sphere (SIS). After a cooling time $`t=t_{\mathrm{cool}}`$ has elapsed, gas at a sufficiently high density (corresponding to the gas within the “cooling radius” $`r_{cool}`$) is assumed to cool and condense into a disc. This cold gas then becomes available for star formation.
Star formation takes place in both a quiescent and bursting mode. Quiescent star formation proceeds in all discs whenever gas is present, according to the expression
$`\dot{m}_{\ast }={\displaystyle \frac{m_{cold}}{\tau _{\ast }}},`$ (2)
where $`m_{cold}`$ is the mass in cold gas and $`\tau _{\ast }`$ is an efficiency factor that is fixed using nearby galaxy properties (see below). In the bursting mode, which takes place following galaxy-galaxy mergers, the efficiency of star formation is sharply increased for a short amount of time ($`\sim `$ 50–100 Myr). The efficiency and timescale of the starbursts have been calibrated using the results of hydrodynamical simulations as described in SPF. The merger rate is determined by the infall of satellites onto the central galaxy, as described above, and the collision of satellites with one another according to a modified mean-free path model (see SP and SPF).
In association with star formation, supernovae may reheat and expel the cold gas from the disc and/or the halo. We model this using the disc-halo model of SP, in which the efficiency of the feedback is larger for galaxies residing in smaller potential wells. These stars also produce metals, which are mixed with the cold inter-stellar gas, and may be subsequently ejected and mixed with the hot halo gas, or ejected into the diffuse extra-halo IGM. Our simple constant-yield, instantaneous recycling model for chemical enrichment produces reasonable agreement with observations of metallicities of nearby galaxies (SP), the redshift evolution of the metallicity of cold gas implied by observations of DLAS, and the metallicity of the Lyman-$`\alpha `$ forest (SPF).
The main free parameters of the model are the star formation efficiency, $`\tau _{\ast }`$, the supernovae feedback efficiency $`ϵ_{SN}^0`$ and the mass of metals produced per unit mass of stars, or effective yield, $`y`$. As described in SP, we set these parameters so that a “reference galaxy” with a rotation velocity of 220 $`\mathrm{km}\mathrm{s}^{-1}`$ at redshift zero has a luminosity, cold gas mass fraction and metallicity in agreement with local observations. Good agreement is then obtained with optical and HI properties of local galaxies (SP), and optical properties of high redshift galaxies (SPF).
#### 3.1.3 cosmology
In this paper our fiducial models are set within a $`\mathrm{\Lambda }`$CDM cosmology with $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7,\mathrm{\Omega }_0=0.3,h=0.7`$, corresponding to model $`\mathrm{\Lambda }`$CDM.3 in SP, and the fiducial model of SPF. We have presented similar results for a standard CDM ($`\mathrm{\Omega }_0=1`$) cosmology in Maller et al. (1999). As recent observational results seem to favor a cosmological constant (Perlmutter et al. 1999) and a flat universe (Melchiorri et al. 1999) we feel justified in focusing on only this cosmology. In section 5 we show that our results are not very sensitive to the assumed cosmology.
We focus our analysis on halos at an output redshift of $`z=3`$. We have also performed an identical analysis on halos at $`z=2`$ and find no significant differences, consistent with the kinematic data and column density distribution $`f(N)`$, which show little evolution over this range. We expect to see evolution both in low redshift ($`z<1.5`$) and very high redshift ($`z>4`$) systems, however we will defer discussion of this to a future paper.
### 3.2 The Spatial Distribution of Cold Gas
The standard SAMs do not provide us with information on the radial distribution of gas and stars in the model galaxies. It is reasonable to assume that the surface density of the cold gas is important in determining the star formation rate in the gaseous discs, and in this case the radial distribution of gas should be modelled self-consistently within the SAMs. This has been done in the models of K96. However, there are many uncertainties attached to modelling the structure of the gaseous disc in the initial collapse, and how it may be modified by mergers, supernovae feedback, and secular evolution. Therefore here we choose a different approach. The SAMs described above produce good agreement with the observed $`z3`$ luminosity function of Lyman-break galaxies (SPF). The total mass density of cold gas at this redshift is also in agreement with estimates derived from observations from DLAS (Storrie-Lombardi et al. 1996; Storrie-Lombardi & Wolfe 2000). We can therefore ask how this gas must be distributed relative to these galaxies in order to produce agreement with an independent set of observations, the kinematic data.
We assume that the vertical profile of the gas is exponential, and consider two functional forms for the radial profiles of the cold gas: exponential and $`1/R`$ (Mestel). The exponential radial profile is motivated by observations of local spiral galaxies, which indicate that the light distribution of the disc is well fit by an exponential (Freeman 1970). If one assumes that as cold gas is converted into stars its distribution doesn’t change (which many theories of disc sizes implicitly assume), then the profile of cold gas at high redshift should also be exponential. The column density of the gas may then be parameterized by two quantities, the scale length $`R_g`$ and the central column density $`N_0\equiv m_{\mathrm{gas}}/(2\pi \mu m_HR_g^2)`$ (where $`m_{\mathrm{gas}}`$ is the total mass of cold gas in the disc, $`m_H`$ is the mass of the hydrogen atom and $`\mu `$ is the mean molecular weight of the gas, which we take to be 1.3 assuming 25% of the gas is Helium). The column density as a function of radius is given by
$`N_{exp}(R)=N_0\mathrm{exp}\left[-{\displaystyle \frac{R}{R_g}}\right]`$ (3)
The $`1/R`$ profile, sometimes referred to as a Mestel distribution (Mestel 1963), is also motivated by observations. Radio observations (Bosma 1981) have shown that the surface density of HI gas is proportional to the projected surface density of the total mass, which for a perfectly flat rotation curve would imply a $`1/R`$ distribution. We parameterize the Mestel disc in terms of the truncation radius $`R_t`$ and the column density at that radius $`N_t\equiv m_{\mathrm{gas}}/(2\pi \mu m_HR_t^2)`$:
$`N_{mes}(R)=N_t{\displaystyle \frac{R_t}{R}}.`$ (4)
In the limit of infinitely thin discs, we can calculate the cross section for these distributions analytically and use them to check our numeric code. For an exponential disc, the inclination averaged cross section is
$`\sigma (N^{\prime }>N)={\displaystyle \frac{\pi R_g^2\gamma _m^2}{2}}(\mathrm{ln}^2{\displaystyle \frac{N_0}{N\gamma _m}}+\mathrm{ln}{\displaystyle \frac{N_0}{N\gamma _m}}+{\displaystyle \frac{1}{2}})`$ (5)
(Bartelmann & Loeb 1996). The variable $`\gamma _m=\mathrm{min}[\frac{N_0}{N},1]`$ is introduced because a column density of $`N`$ when $`N>N_0`$ can only be reached if the disc is inclined such that $`\mathrm{cos}\theta <\gamma _m`$. For Mestel discs the corresponding expression is
$`\sigma (N^{\prime }>N)=\pi R_t^2\left({\displaystyle \frac{N_t}{N}}\right)^2({\displaystyle \frac{1}{2}}-\mathrm{ln}{\displaystyle \frac{N_t}{N}})`$ (6)
for $`N>N_t`$.
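Eqn (5) can be verified numerically: for random orientations $`\mathrm{cos}\theta `$ is uniform on $`[0,1]`$, the observed column is the face-on column divided by $`\mathrm{cos}\theta `$, and the projected area of the damped region is $`\mathrm{cos}\theta `$ times its face-on area. A Monte Carlo sketch (the disc parameters below are arbitrary illustrative values):

```python
import math
import random

def mc_sigma_exp(N0, Rg, N, n=200_000, seed=1):
    """Monte Carlo estimate of the inclination-averaged cross section
    sigma(N' > N) for an infinitely thin exponential disc."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = rng.random()                 # cos(theta), uniform for random orientations
        x = N0 / (N * mu)                 # observed central column over threshold
        R = Rg * math.log(x) if x > 1.0 else 0.0
        total += mu * math.pi * R * R     # projected area of the damped region
    return total / n

def analytic_sigma_exp(N0, Rg, N):
    """Eqn (5)."""
    gm = min(N0 / N, 1.0)
    L = math.log(N0 / (N * gm))
    return 0.5 * math.pi * Rg**2 * gm**2 * (L * L + L + 0.5)

mc = mc_sigma_exp(5.0, 1.0, 2.0)
an = analytic_sigma_exp(5.0, 1.0, 2.0)
```

The estimate converges to the analytic value in both regimes ($`N<N_0`$ and $`N>N_0`$), which checks the convention that the averaging is over uniformly distributed $`\mathrm{cos}\theta `$ with the projection factor included.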
### 3.3 Selecting and Modeling DLAS
The fiducial SPF “standard SAMs” provide us with a list of galaxies contained within a halo of a given mass or circular velocity at a given redshift. For each of these galaxies, we are also provided with the internal circular velocity, radial distance from the halo centre, stellar exponential scale length, and the cold gas, stellar, and metal content of its disc. We distribute the galaxies randomly on circular orbits (we discuss the importance of this simplification in section 5) and assign them random inclinations.
We create twenty realizations of a grid of halos with circular velocities between 50 $`\mathrm{km}\mathrm{s}^{-1}`$ and 500 $`\mathrm{km}\mathrm{s}^{-1}`$. These correspond to different Monte Carlo realizations of the halos’ merging histories. We then choose a model for the radial distribution of the gas and calculate the surface density distribution, constrained by the total gas mass as determined by the SAMs. We create twenty random realizations of the satellite orbits and inclinations in each of the four hundred halos and calculate the column density along each line of sight. The number of lines of sight passed through each halo is determined by the cross-section weighted probability of intersecting a halo of that mass. The total number of lines of sight is chosen to produce about ten thousand DLAS. Each line of sight that passes through a total column density exceeding $`2\times 10^{20}\mathrm{cm}^{-2}`$ is then saved (along with all the properties of the halo that it is found in) and analyzed using the methods of PW97.
To create synthesized spectra we must include substructure in the gas discs, which we do by assuming that the gas is distributed in small clouds within the disc. The necessary parameters are: $`\sigma _{int}`$, the internal velocity dispersion of each cloud; $`N_c`$, the number of clouds; and $`\sigma _{cc}`$, their isotropic random motions. Following PW97, we take $`\sigma _{int}=4.3\mathrm{km}\mathrm{s}^{-1}`$ and $`N_c=5`$; both values were derived from Voigt profile fits to the observations, with $`N_c=5`$ being the minimum acceptable number of individual components. Increasing the cloud number $`N_c`$ to as high as 60 does not improve the goodness of fit (PW97) for a disc model like the one we are considering here because our model discs are relatively thin. Also we take $`\sigma _{cc}=10\mathrm{km}\mathrm{s}^{-1}`$, since we assume that the gas discs are cold. These internal velocities are in addition to the circular velocity of the disc and the motions between discs. For every line of sight the positions of the clouds are chosen by taking the continuous density distribution to be a probability distribution; i.e. the likelihood of a cloud being at a position in space is proportional to the gas density at that point. Synthetic metal-line profiles are produced taking into account the varying metallicity of the gas in the multiple discs along the sightline as given by the SAMs. The spectrum is smoothed to the resolution of the HIRES spectrograph (Vogt 1992), noise is added and then the four statistics of PW97 are applied. Finally, a Kolmogorov–Smirnov (KS) test is performed to ascertain the probability that the data of Prochaska & Wolfe (1998) and Wolfe & Prochaska (2000) could be a random subset of the model.
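A schematic version of the cloud step can be written down directly. Here $`\sigma _{int}=4.3`$, $`\sigma _{cc}=10\mathrm{km}\mathrm{s}^{-1}`$ and $`N_c=5`$ are the values quoted above, while the velocity grid, bulk velocities, and random seeds are illustrative assumptions (the real pipeline also adds instrumental smoothing, noise, and metallicity weighting):

```python
import math
import random

def cloud_profile(v_bulk, sigma_int=4.3, sigma_cc=10.0, n_clouds=5, seed=2):
    """Optical-depth sketch for one disc: n_clouds discrete clouds, each at
    the disc bulk velocity v_bulk plus a random motion of width sigma_cc,
    each contributing a Gaussian component of width sigma_int (km/s)."""
    rng = random.Random(seed)
    v_grid = [0.5 * i for i in range(-600, 601)]      # -300..300 km/s
    tau = [0.0] * len(v_grid)
    for _ in range(n_clouds):
        vc = v_bulk + rng.gauss(0.0, sigma_cc)
        for i, v in enumerate(v_grid):
            tau[i] += math.exp(-0.5 * ((v - vc) / sigma_int) ** 2)
    return v_grid, tau

# two discs along one sightline, separated by 120 km/s in bulk velocity
v, tau1 = cloud_profile(0.0, seed=2)
_, tau2 = cloud_profile(120.0, seed=3)
tau = [a + b for a, b in zip(tau1, tau2)]
```

The point of the exercise is visible immediately: a single cold disc yields a narrow profile, while two discs moving relative to one another along the sightline spread the optical depth over a velocity interval set by their bulk separation.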
It should be noted that while we try to include all of the relevant physics in the modeling, there are a number of simplifications. The kinematics of sub-halos within the host halos assumes that the sub-halos are on circular orbits and utilizes an approximate formula for the effects of dynamical friction. We assume that the gas discs have a simple radial profile and are axisymmetric. Also we assume that the distribution of the gas does not depend on galaxy environment or Hubble type. We expect that gas discs should be distorted by the presence of other galaxies in the same halo or by previous merger events (cf. McDonald & Miralda-Escudé 1999; Kolatt et al. 1999) yet we ignore these effects. In the spirit of SAMs we hope that these assumptions will capture the essential properties of the resulting DLAS to first order, and we investigate the sensitivity of our results to some of these assumptions. In section 6, we note the good agreement of some of the features of our model with the results of recent hydrodynamical simulations, and in the future we hope to refine our modelling by further comparisons with simulations.
## 4 Results
### 4.1 Unsuccessful Models: Classical Discs
In this section we investigate several models based on standard theories of the formation of galactic discs. These theories are generically based on the idea of Mestel (1963) that the specific angular momentum of the material that forms the galactic disc is conserved as it cools and condenses. Since this idea was first applied in the classic work of Fall & Efstathiou (1980), many authors have refined this theory by including the effects of the adiabatic contraction of the dark halo, the presence of a bulge, more realistic halo profiles, and disc stability criteria (Blumenthal et al. 1986; van der Kruit 1987; Flores et al. 1993; Dalcanton, Spergel & Summers 1997; Mo, Mao, & White 1998; van den Bosch 1999).
In the simplest of such models, we assume a singular isothermal profile for the dark matter halo, neglect the effects of the halo contraction on the assembly of the disc, and assume that the profile of the cold gas after collapse has the form of an exponential. The exponential scale length is then given by the simple expression:
$`R_s={\displaystyle \frac{1}{\sqrt{2}}}\lambda _Hr_i`$ (7)
where $`\lambda _H`$ is the dimensionless spin parameter of the halo, and $`r_i`$ is the initial radius of the gas before collapse. In $`N`$-body simulations, the spin parameter $`\lambda _H`$ for dark matter halos is found to have a log-normal distribution with a mean of about 0.05 (Warren et al. 1992). A generalization of this model, using an NFW profile for the dark matter halo and including the effect of halo contraction, has recently been presented by Mo et al. (1998).
In model EXP1, we assume $`\lambda _H=0.05`$ for all halos, take $`r_i=\mathrm{min}[r_{cool},r_{vir}]`$, and calculate the scale length for each disc from eqn 7. Note that when this approach is used to model *stellar* scale lengths, the values that we obtain are in good agreement with observations at redshift zero and redshift $`3`$ (SP; SPF).
However, local observations (Broeils & Rhee 1997) find that in gas-rich galaxies the HI disc always has a larger extent than the stellar disc. To explore this scenario we try a model in which the exponential scale length of the gas is a multiple of the stellar disc scale length. We find that multiplying the scale length calculated from eqn. 7 by a factor of six (model EXP6) produces the best agreement with the kinematic data, but even this model can be rejected at the $`>95\%`$ confidence level.
In model MMW, we use the fitting formulae of Mo et al. (1998) to obtain the scale radius. In this model we do not use the gas content predicted by the SAMs, but instead, following Mo et al. (1998) we assume that the disc mass is a fixed fraction (one tenth) of the total mass of each halo or sub-halo. This procedure produces roughly three times more cold gas per halo than the SAMs as there is no star-formation and no hot gas. Thus this model should be seen as an upper limit on the amount of cold gas that is available to form DLAS in the halo mass range we are considering. The spin parameter $`\lambda _H`$ is chosen randomly from a log-normal distribution, and the exponential scale length is found from eqn 7. The main difference between our MMW model and the actual model of Mo et al. is that we include sub-halos (multiple galaxies in each halo). Because Mo et al. do not simulate the merging history of their halos, they assume that only one galaxy inhabits each halo (which would correspond to our central galaxy).
The models of K96 used the assumption that the initial profile of the cold gas resulted from conservation of angular momentum, and modelled star formation according to the empirical law proposed by Kennicutt (1989). Kauffmann then found that the surface density of the gas discs tended to remain close to the critical surface density, as Kennicutt in fact observed. In our model KENN, based on these observations and the results of K96, we again take the total mass of cold gas from the SAMs, and distribute it at the critical density, which for a flat rotation curve is given by:
$`N_{cr}=1.5\times 10^{21}\mathrm{cm}^{-2}\left({\displaystyle \frac{V_c}{200\mathrm{km}\mathrm{s}^{-1}}}\right)\left({\displaystyle \frac{1\mathrm{kpc}}{R}}\right).`$ (8)
Thus for a given $`V_c`$ this is a Mestel distribution, with $`N_t`$ determined by the above equation and the total mass of cold gas.
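Because $`N(R)R`$ is constant in the KENN model, the total gas mass fixes the truncation radius linearly: $`m_{\mathrm{gas}}=2\pi \mu m_HCR_t`$ with $`C=N_{cr}R`$ from eqn (8). A sketch with illustrative numbers (the gas mass is not taken from the text):

```python
import math

M_H = 1.6726e-24      # hydrogen mass, g
MU = 1.3              # mean molecular weight (25% He)
KPC = 3.086e21        # cm per kpc
MSUN = 1.989e33       # g per solar mass

def kenn_truncation_radius(m_gas_msun, v_c):
    """KENN model: gas sits at the critical density of eqn (8), so
    N(R)*R = C is constant and m_gas = 2 pi mu m_H C R_t."""
    C = 1.5e21 * (v_c / 200.0) * KPC                       # cm^-1
    r_t_cm = m_gas_msun * MSUN / (2.0 * math.pi * MU * M_H * C)
    return r_t_cm / KPC                                    # kpc

# illustrative: 1e9 Msun of cold gas in the V_c = 156 km/s halo of Fig. 3
r_t = kenn_truncation_radius(1e9, 156.0)
```

With these numbers the damped region extends only to roughly 13 kpc, comparable to the EXP1 discs and again far smaller than the typical separation between discs in a halo.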
The KS test results for the four statistics of PW97 for these four models are shown in Table 1. The most important failing of the models is in the $`\mathrm{\Delta }v`$ statistic. Fig. 1 shows the distribution of $`\mathrm{\Delta }v`$ for the data and models. The $`\mathrm{\Delta }v`$ values produced by these models are peaked around 50 $`\mathrm{km}\mathrm{s}^{-1}`$ with very few systems having $`\mathrm{\Delta }v>100\mathrm{km}\mathrm{s}^{-1}`$, in sharp contrast to the data. This is the same result found in PW97 for a single-disc CDM model (e.g. the model of K96). It is not surprising, as it turns out that in these models most DLAS are in fact produced by a single disc, as shown in Fig. 2. Only for the EXP6 model are half of the DLAS the result of intersections with more than a single gas disc, and only this model has a $`\mathrm{\Delta }v`$ distribution that is not rejected at greater than $`99.9\%`$ confidence.
It is easy to understand why there are so few multiple intersections in these models by examining Fig. 3. This figure shows a projection of the gas discs residing within a halo of circular velocity 156 $`\mathrm{km}\mathrm{s}^{-1}`$. The sizes of gas discs in these models are much smaller than the separation between them, and thus multiple intersections are rare. The sizes of the gas discs in EXP1 and the KENN model are rather similar. The gas discs in the MMW model are generally bigger because there is more cold gas in each disc, and the log-normally distributed $`\lambda _H`$ adds scatter relative to the EXP1 and KENN models. In these three models almost all the gas is above the column density limit to be considered a damped system. In EXP6, with more extended, lower-density discs, we find some discs where a large fraction of their area lies below the damped level. More extended exponential discs than those considered here do not increase the number of DLAS coming from multiple intersections because the area dense enough to be above the damped limit rapidly shrinks.
In Fig. 4 we show the column density distribution $`f(N)`$ for these models in comparison with the data of Storrie-Lombardi & Wolfe (2000). Once again, of the four models only EXP6 comes close to fitting the data. Thus although the *total* mass of cold gas is in agreement with that derived from the observations, the total *cross-section* for damped absorption is too small if the gas and stars have a similar radial extent and distribution, as predicted by standard models of disc formation. We therefore conclude that we may need to consider a radically different picture of gaseous discs at high redshift.
### 4.2 Successful Models: Gas Discs with Large Radial Extent
In the previous section we found that models in which the sizes of gas discs at high redshift were calculated from angular momentum conservation fail to reproduce the kinematics and column density distribution of observed DLAS. A model based on the observations of Kennicutt (1989) for local gas discs and the results of the model of K96 also failed. We noted that a common feature of these models is that the majority of DLAS arise from a single galactic disc because of the small radial extent of these discs compared to their separation. If we wish to investigate a scenario like the one proposed by Haehnelt et al. (1998), in which the kinematics of DLAS arise from lines of sight intersecting multiple objects, it is clear that the gaseous discs must be much larger in radial extent.
Unfortunately there does not exist an alternative theoretical framework for the sizes of gas discs, especially at high redshift, so in this section we will simply develop a toy model for the distribution of cold gas. We hope that the insight gained from such a toy model will lead to a more physically motivated model in the future. In our toy models we assume a Mestel profile and assume that the HI discs are truncated at a fixed column density, perhaps by a cosmic ionizing background. We investigate a range of values for this critical column density $`N_t`$, which is the only additional free parameter of the model. We take the vertical scale height of the gas to be half the stellar disc scale length, as calculated from eqn. 7. Since the radial extent of the gas is now so large compared to the stars, this still results in gaseous discs that are quite “thin”. We find that our results are only modestly dependent on the assumed vertical scale height, as we show in section 5 (see also Maller et al. 2000b).
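For a Mestel profile, the truncation column density fixes the disc size directly: with $`\mathrm{\Sigma }(R)=\mathrm{\Sigma }_tR_t/R`$, the enclosed mass is $`M=2\pi \mathrm{\Sigma }_tR_t^2`$, so $`R_t=\sqrt{M/2\pi \mathrm{\Sigma }_t}`$. A sketch of this bookkeeping follows; the gas mass used is illustrative, and helium is ignored in converting $`N_t`$ to a mass surface density.

```python
import numpy as np

M_SUN_G = 1.989e33   # g
KPC_CM = 3.086e21    # cm
M_H = 1.673e-24      # g, hydrogen atom mass

def mestel_truncation_radius(m_gas_msun, n_t_cm2):
    """Radius (kpc) at which a Mestel (Sigma ~ 1/R) disc of total cold gas
    mass m_gas reaches the truncation column density N_t.

    For Sigma(R) = Sigma_t * R_t / R, integrating to R_t gives
    M = 2 * pi * Sigma_t * R_t**2, so R_t = sqrt(M / (2 pi Sigma_t)).
    """
    sigma_t = M_H * n_t_cm2                      # g cm^-2 (HI only, no helium)
    m_gas = m_gas_msun * M_SUN_G                 # g
    r_t_cm = np.sqrt(m_gas / (2.0 * np.pi * sigma_t))
    return r_t_cm / KPC_CM                       # kpc

# e.g. 5e9 Msun of cold gas truncated at N_t = 4e19 cm^-2 gives R_t ~ 50 kpc
print(f"R_t = {mestel_truncation_radius(5e9, 4e19):.1f} kpc")
```

Even a modest gas mass therefore spreads over tens of kpc in this parameterization, which is the origin of the large covering factors discussed below.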
The distribution of $`\mathrm{\Delta }v`$ for several values of $`N_t`$ between 2 and 5 $`\times 10^{19}\,\mathrm{cm}^{-2}`$ is shown in Fig. 5. We find that a value for the truncation column density $`N_t`$ of $`4\times 10^{19}\,\mathrm{cm}^{-2}`$ (i.e., $`\mathrm{log}N_t=19.6`$) gives the best fit to the kinematic data. The distribution now shows a significant tail to large values of $`\mathrm{\Delta }v`$, in much better agreement with the data. For values of $`N_t`$ less than $`4\times 10^{19}\,\mathrm{cm}^{-2}`$ the models produce more large values of $`\mathrm{\Delta }v`$ than are seen in the data, while higher values of $`N_t`$ produce fewer large values of $`\mathrm{\Delta }v`$. Fig. 6 shows the relationship between the number of gaseous discs that produce a DLAS and the truncation level $`N_t`$. We see that when $`40`$ percent of the DLAS come from a single gas disc we get the best fit to the kinematic data.
Table 2 shows the KS probabilities for the four statistics of PW97. The mean-median statistic ($`f_{mm}`$) shows a clear trend with $`N_t`$, such that the statistic improves for higher values of $`N_t`$. This is because multiple intersections can produce values of $`f_{mm}`$ between 0.8 and 1, which are not found in the data. These high values occur in velocity profiles with two narrow peaks separated by a large “valley”. An example of this type of profile is the fourth system in Fig. 7. While the statistics of PW97 show agreement between this model and the data, the profiles with $`\mathrm{\Delta }v>100\,\mathrm{km}\,\mathrm{s}^{-1}`$, whose kinematics are dominated by the motions of the multiple discs relative to one another, show large parts of velocity space with no absorption (Fig. 7). This is something which is not seen in the data. It is possible that such profiles arise because of the simplicity of our modeling and that in a more physical scenario this configuration would not occur.
The gas discs have such a large radial extent in this model (see Fig. 8) that they will clearly be perturbed by one another and not retain the simple circular symmetry that we are imposing. Perhaps a model in which most of the gas is in tidal streams would be more appropriate. Or perhaps the cold gas is not associated with the individual galaxies at all, but then we must understand what keeps it from being ionized by the extra-galactic UV background. Our toy model demonstrates that the cold gas must somehow be distributed with a very large covering factor in order to reproduce the observed kinematics of the DLAS; understanding how it attains this distribution will require further study and most likely hydro simulations.
We now investigate the column density distribution $`f(N)`$ and the metal abundances for these models. Fig. 9 shows $`f(N)`$ for the models and the data. The column density distribution is not very sensitive to the truncation density $`N_t`$, and all the models produce about the right number of absorbers except in the highest column density bin. This may be due to our simplistic assumption that all gas discs are truncated at the same column density. If a small fraction of them were much denser it might be possible to have enough high column density systems without significantly affecting the kinematic properties of the absorbers. It should also be noted that the data was tabulated assuming a $`q_0=0`$ cosmology, which may introduce an additional discrepancy in comparing to our $`q_0=0.55`$ $`\mathrm{\Lambda }`$CDM cosmology. We note that the shape of the distribution is fairly similar to that of the data.
We also show the average metal abundances of our absorbers versus HI column density in Fig. 10. The observational data points are \[Zn/H\] measurements of DLAS with $`z>2`$ (Pettini et al. 1997; Prochaska & Wolfe 1999). One can see our model gives DLAS with metallicities in agreement with the data, and also reproduces the observed trend with HI column density. One might expect that systems with higher HI column densities should have higher metallicities because they are more likely to be in more massive halos. However, because we truncate all gas discs at the same value of $`N_t`$ the distribution of column densities is the same for all halos masses. Thus our parameterization naturally explains the flat distribution of metal abundances with HI column densities.
## 5 Model Dependencies
We have presented a model that can produce the observed kinematic properties of the DLAS as well as the other known properties of these systems. In this section we examine the sensitivity of our model to some of the simplifying assumptions we have made. We examine the effect of changing the disc thickness, the orbits of the satellites, the cosmological model, and the assumption of rotationally supported discs. We find that none of these has a large effect on the kinematics of the DLAS in our models. Table 3 shows the KS probabilities when these various assumptions are changed. The effect of any of these changes is less than changing the truncation density from $`N_t=4\times 10^{19}\,\mathrm{cm}^{-2}`$ to $`5\times 10^{19}\,\mathrm{cm}^{-2}`$. We conclude that our general conclusions are not sensitive to any of these assumptions.
### 5.1 Disc Thickness
In section 4.2 we assumed the vertical scale length of the gas to be one half the stellar disc scale length. Because the gas is so much more radially extended than the stars, this still resulted in very thin discs. One might think that as long as $`h_z`$ is small compared to the radial size of the discs its exact value would not be important. However, as explained in Maller et al. (2000b) very thin discs have an increased cross section to being nearly edge-on, which changes their kinematic properties. Thus the KS probabilities change non-trivially when we consider thinner discs with $`h_z=0.1R_{}`$. We favour the model with $`h_z=0.5R_{}`$ because these large discs are very likely to be warped by interactions, and Prochaska & Wolfe (1998) have shown that using a larger scale height has an effect similar to including warps in the discs. Increasing the disc thickness to $`h_z=R_{}`$ also has a non-negligible effect on the kinematics because thicker discs create a larger $`\mathrm{\Delta }v`$ for a single disc encounter. Thus there is a trade-off, and we see that we can reproduce the kinematics either with thinner discs with larger radial extent, or thicker discs with smaller radial extent.
### 5.2 Circular Orbits
We have assumed that all the satellites are on circular orbits within the halo, which is clearly unrealistic. To test the importance of this assumption we explore the opposite extreme, which is to assume that all satellites are on radial orbits. The potential of a SIS is $`\mathrm{\Phi }(r)=V_c^2\mathrm{ln}(r)`$ so conservation of energy gives us
$`v(r)=\sqrt{2}\,V_c\sqrt{\mathrm{ln}(r_m/r)}`$ (9)
where $`r_m`$ is the maximum radius the satellite reaches. From this one can compute that the time it takes for a satellite to travel from $`r_m`$ to $`r`$ is given by
$`t(r)=\sqrt{{\displaystyle \frac{\pi }{2}}}{\displaystyle \frac{r_m}{V_c}}\mathrm{erf}\left(\sqrt{\mathrm{ln}(r_m/r)}\right).`$ (10)
and that the orbital period is $`P=2\sqrt{2\pi }r_m/V_c`$. We find that the expression $`r(t)=r_m(1-0.75(4t/P))`$ for $`0<t<P/4`$ is a reasonable fit to the true function (satellites spend less than 5% of their time in the inner fourth of the orbit). We therefore use it to determine the probability distribution of satellites along their radial orbits. From Table 3 we see that assuming all radial orbits slightly improves the statistics of our model and thus considering a true distribution of orbits will probably only increase the agreement between the data and our model, but not enough to rescue any of the unsuccessful models.
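These travel times can be checked by direct orbit integration: integrating $`dt=dr/v(r)`$ for the SIS potential gives $`t(r)=\sqrt{\pi /2}\,(r_m/V_c)\,\mathrm{erf}(\sqrt{\mathrm{ln}(r_m/r)})`$, whose $`r\to 0`$ limit reproduces the quoted period $`P=2\sqrt{2\pi }r_m/V_c`$. A sketch of such a check, with illustrative values of $`V_c`$ and $`r_m`$ (time is in units of kpc per km s<sup>-1</sup>):

```python
from math import erf, sqrt, log, pi

V_C = 150.0   # km/s, illustrative halo circular velocity
R_M = 100.0   # kpc, apocentre of the radial orbit (illustrative)

def fall_time(r, r_m=R_M, v_c=V_C):
    """Closed-form time to fall from rest at r_m to radius r in an SIS:
    t(r) = sqrt(pi/2) * (r_m / v_c) * erf(sqrt(ln(r_m / r)))."""
    return sqrt(pi / 2.0) * (r_m / v_c) * erf(sqrt(log(r_m / r)))

def fall_time_leapfrog(r_target, r_m=R_M, v_c=V_C, dt=1e-5):
    """Integrate d^2r/dt^2 = -v_c^2 / r (SIS force) from rest at r_m."""
    r, v, t = r_m, 0.0, 0.0
    v += 0.5 * dt * (-v_c**2 / r)      # initial half-kick
    while r > r_target:
        r += dt * v                    # drift
        v += dt * (-v_c**2 / r)        # kick
        t += dt
    return t

t_num = fall_time_leapfrog(R_M / 2.0)
t_ana = fall_time(R_M / 2.0)
period = 2.0 * sqrt(2.0 * pi) * R_M / V_C   # as quoted in the text
print(f"numerical {t_num:.4f}, closed form {t_ana:.4f}, P {period:.4f}")
```

The numerical fall time agrees with the erf expression, and four times the full fall time recovers the quoted period.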
### 5.3 The Initial Infall Radius of Satellites
As explained in section 3.1, when halos merge the satellite galaxies are placed at a distance $`f_{mrg}`$, in units of the virial radius, from the centre of the new halo. One might worry that this parameter, by influencing the position of satellite galaxies in the halo, may be crucial to our DLAS modeling. To test this we try a model with $`f_{mrg}`$ set to 1.0 (instead of the 0.5 as it has been up to now). This requires us to change the free parameters of the SAMs to maintain the normalization of the reference galaxy, as described in SP. The results of this model are shown in Table 3. Doubling this parameter results in only a modest change in the kinematic properties of DLAS. This is because the important factor is the number of galaxies in the inner part of the halo (which will give rise to multiple intersections); satellites that start their infall from farther out still spend a similar amount of time near the central object which explains the modest effect on the kinematic properties.
### 5.4 Cosmology
So far we have only considered models set within the currently favoured $`\mathrm{\Lambda }`$CDM.3 cosmology. However, we would like to know how sensitive our results are to the assumed cosmology. We consider two other cosmologies, a flat universe with $`\mathrm{\Omega }_0=0.5`$ ($`\mathrm{\Lambda }`$CDM.5) and an open universe with $`\mathrm{\Omega }_0=0.3`$ (OCDM.3; as in SP). The free parameters must be readjusted for each cosmology as described in SP. The KS probabilities are listed in Table 3. One sees that the effect of changing the cosmological model is not a drastic one. We do not show the results here, but we note that our conclusions concerning the dynamical tests do not change even in a cosmology with very low $`\mathrm{\Omega }_0=0.1`$, nor do they change if we assume $`\mathrm{\Omega }_0=1`$. The total mass of cold gas as a function of redshift is rather sensitive to cosmology, however the distribution of $`\mathrm{\Delta }v`$ is almost completely insensitive to cosmology. This is because the distribution of $`\mathrm{\Delta }v`$ from single hits depends only on the *shape* of the power spectrum, which is very similar on these scales for any CDM model. The contribution from multiple hits is determined by the dependence of the merger rate on halo mass, which again is a weak function of cosmology.
### 5.5 Non-Rotating Gas
The final assumption we explore is seemingly a key one: that the cold gas is rotationally supported in discs. We consider a simple alternative model where the cold gas has a bulk velocity with the same magnitude as the circular velocity of the halo, but in a random direction. This might represent gas that is dominated by streaming motion or infall rather than rotation. We are still able to reproduce the observed kinematics, because they are dominated by the motions of the various sub-halos not the motions within the discs. Thus the fundamental assumption that cold gas at high redshift is in rotationally supported discs may need to be reconsidered.
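Drawing such a random-direction bulk velocity and projecting it onto the line of sight is a one-line Monte Carlo step; a sketch (isotropic directions from a normalized Gaussian vector, with an illustrative $`V_c`$; for an isotropic direction the line-of-sight component is uniform on $`[V_c,V_c]`$... more precisely, uniform between $`-V_c`$ and $`+V_c`$):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_los_velocities(v_c, n, rng=rng):
    """Line-of-sight components of velocity vectors of fixed magnitude v_c
    pointing in isotropically random directions (z taken as the sightline)."""
    vec = rng.normal(size=(n, 3))
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)   # unit direction vectors
    return v_c * vec[:, 2]

v_los = random_los_velocities(150.0, 100_000)
print(f"mean {v_los.mean():.2f} km/s, max |v| {np.abs(v_los).max():.1f} km/s")
```

The spread of these projected velocities, not any internal rotation, is what sets the single-object contribution to $`\mathrm{\Delta }v`$ in this variant.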
## 6 Properties of Gaseous Discs in our Model
We have demonstrated that the standard theories of disc formation cannot reproduce the observed properties of DLAS, and have proposed a rather unorthodox alternative which succeeds in reproducing these observations. Here we compare our models with observations of local discs and with results from recent hydrodynamical simulations to assess whether the model is reasonable. These results are all for the fiducial model of section 4.2.
Because local gas discs do not show a common surface density profile, it is common practice to cite their properties out to some surface density contour, often taken to be 1 $`M_{\mathrm{\odot }}`$ pc<sup>-2</sup>, which is equal to $`1.25\times 10^{20}\,\mathrm{cm}^{-2}`$. We will denote the radius where the column density reaches this value as $`R_{\mathrm{HI}}`$. One observed local property of gas discs is that the average surface density $`<\sigma _{\mathrm{HI}}>`$ out to this level is approximately constant, with a value $`3.8\pm 1.1M_{\mathrm{\odot }}`$ pc<sup>-2</sup> (Broeils & Rhee 1997). Because in our model all galaxies are normalized by the same value of $`N_t`$, the average surface density is identically equal to $`2M_{\mathrm{\odot }}`$ pc<sup>-2</sup>. The galaxies in our model (at $`z\sim 3`$) thus have an average surface density half the local value and share the property that this value is independent of gas mass.
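The factor of two here is a generic property of the $`1/R`$ profile: with $`\mathrm{\Sigma }(R)=\mathrm{\Sigma }_{edge}R_{edge}/R`$, the mass inside any iso-density contour is $`2\pi \mathrm{\Sigma }_{edge}R_{edge}^2`$, so the mean surface density is exactly twice the contour value. A quick numerical check of that statement:

```python
import numpy as np

def mean_over_edge_ratio(n_rings=100_000):
    """<Sigma>/Sigma(R_edge) inside R_edge for a Mestel disc, Sigma ~ 1/R.

    Sums the mass in thin annuli, M = sum 2 pi R Sigma(R) dR, with
    Sigma(R) = Sigma_edge * R_edge / R, then divides by
    pi * R_edge**2 * Sigma_edge to form the mean surface density ratio.
    """
    r_edge = 1.0                              # units cancel in the ratio
    dr = r_edge / n_rings
    r = (np.arange(n_rings) + 0.5) * dr       # annulus midpoints, avoids R = 0
    sigma = r_edge / r                        # Sigma / Sigma_edge
    mass = np.sum(2.0 * np.pi * r * sigma * dr)
    return mass / (np.pi * r_edge**2)

print(f"<Sigma>/Sigma(R_edge) = {mean_over_edge_ratio():.6f}")
```

Applied to the 1 $`M_{\mathrm{\odot }}`$ pc<sup>-2</sup> contour, this reproduces the 2 $`M_{\mathrm{\odot }}`$ pc<sup>-2</sup> average quoted above, independent of disc mass.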
We can also compare the sizes of gaseous discs in our models to the observations of Broeils & Rhee (1997). Fig. 11 shows the distribution of $`R_{\mathrm{HI}}`$ for the discs that give rise to DLAS in our model (at $`z=3`$) and for the local data. The local discs are about twice as large as the high-redshift discs producing the DLAS in our model. The gas discs of the model however extend another factor of 3 before they are truncated, something which is not usually seen locally. Note that as these populations are selected in very different ways, in addition to being at very different redshifts, it is not clear that the distributions should agree closely. However, we see that the radial extent of the gas in our model is not that drastically different from that in local spiral galaxies.
Local HI surveys find no systems with average surface densities less than $`5\times 10^{19}\,\mathrm{cm}^{-2}`$ (Zwaan et al. 1997), which is attributed to photo-ionization of HI discs below a column density of a few $`\times 10^{19}\,\mathrm{cm}^{-2}`$ by the extra-galactic UV background (Corbelli & Salpeter 1993; Maloney 1993). Thus the value of $`N_t`$ that we obtain from considerations of the DLAS kinematics and number density is surprisingly close to local estimates.
If the gas discs in our toy model are truncated because of photo-ionization then we would expect the gas near the truncation edge to have a high ionization fraction. However this does not translate into DLAS with high ionization fractions, because this low column density gas will only be a fraction of the gas that composes a DLAS. Most of the column density of a DLAS will come from gas at higher densities, as the total needs to be in excess of $`2\times 10^{20}\,\mathrm{cm}^{-2}`$, and thus the average ionization state of the gas will be low, in agreement with the observations. This model would predict that the lower column density components of the velocity profile would be more likely to have higher ionization states, something that can be checked in the existing data.
It is also interesting to investigate the distribution of halo masses giving rise to DLAS. Fig. 12 shows the distribution of circular velocities of the halos containing discs that give rise to DLAS. Also shown is the average cross section for DLAS as a function of circular velocity, which agrees fairly well with the results of Gardner et al. (1997) (slope = 2.94) and Haehnelt, Steinmetz & Rauch (1999) (slope = 2.5), but not those of Gardner et al. (1999), who find a much shallower slope of 0.9. Haehnelt et al. determine their average cross section by fitting to the observed $`\mathrm{\Delta }v`$ distribution, so we expect that the relationship between the circular velocity of the halo and the $`\mathrm{\Delta }v`$ of the DLAS that arise in it must be the same in our modeling and in the simulations of Haehnelt et al. This is in fact the case, as can be seen by comparing Fig. 13 with Fig. 1 in Haehnelt et al. (1999). This seems to suggest that the very different approaches of hydro simulations and SAMs are converging on a common picture for the nature of the DLAS.
## 7 Discussion and Conclusions
We have explored the properties of DLAS in semi-analytic models of galaxy formation. These models produce good agreement with many optical properties of galaxies at low and high redshift, and the total mass of cold gas at redshift $`3`$ is also in reasonable agreement with observations. It is therefore interesting to ask whether the kinematic properties, metallicities, and column densities of DLAS in these models are in agreement with observations. We investigated the dependences of these properties on cosmology, the distribution of satellite orbits, and gaseous disc scale height, and found that our results were not sensitive to these assumptions. Our results are *extremely sensitive* to our assumptions about the radial distribution of cold gas within galactic discs. Given that one believes the other components of our model, one can then perhaps learn about the distribution of cold neutral gas at high redshift.
Currently popular theories of disc formation posit that the radial size of a galactic disc is determined by the initial specific angular momentum of the dark matter halo in which it forms, and that the cold gas traces the stellar component. Often, the profile of the disc is assumed to have an exponential form. We investigate several variants of such models, based on ideas in the literature such as Fall & Efstathiou (1980), Mo et al. (1998) and Kauffmann (1996). We find that the kinematics of DLAS arising in such models are in strong conflict with the observations of Prochaska & Wolfe (1997b, 1998). This is consistent with the previous work of Prochaska & Wolfe, in which it was shown that if the $`\mathrm{\Delta }v`$ of each DLAS arises from a single rotating disc, generic CDM models can be ruled out at high confidence level. Our work has shown that in theories of disc formation based on angular momentum conservation, the resulting gaseous discs are so small that most DLAS are produced by a single disc and thus the models suffer from the familiar difficulties with the observed kinematics. In addition, although the total mass of cold gas in the models is in agreement with the estimate of $`\mathrm{\Omega }_{\mathrm{gas}}`$ from Storrie-Lombardi et al. (1996), when we use realistic cross-section-weighted column-density criteria to select DLAS in our models, we find that the overall number density of damped systems is too small. This again seems to indicate that the covering factor of the gas is too small.
We therefore abandon the standard picture of discs and investigate a toy model in which the gas is distributed according to a Mestel distribution with a fixed truncation radius. We adjust the truncation radius as a free parameter to find the best fit with the observations, and find that the best-fit value is consistent with the expected ionization edge due to a cosmic ionizing background. This results in gas discs which are considerably more radially extended than the standard models discussed above. For this class of models, we find good agreement with the four diagnostic statistics of PW97 which describe the kinematics; in particular, the distribution of velocity widths $`\mathrm{\Delta }v`$ in the models now has a tail to large $`\mathrm{\Delta }v200300\mathrm{km}\mathrm{s}^1`$ as in the observations. In the models, the majority of these large $`\mathrm{\Delta }v`$ systems arise from “multiple hits”, lines of sight that pass through more than one rotating disc, as in the picture proposed by Haehnelt et al. (1998). The column density distribution and metallicities of the DLAS in the models are also in reasonable agreement with the observations.
This working model for DLAS has many additional implications that may be tested by observations in the near future. One interesting issue is the relationship between DLAS and the Lyman-break galaxies (Steidel et al. 1996; Lowenthal et al. 1997; Steidel et al. 1998), about which little is currently known observationally (Djorgovski 1997; Moller & Warren 1998). Previous theoretical predictions used simplified relations to estimate luminosities (Mo et al. 1998; Haehnelt et al. 1999). Because the SAMs include detailed modelling of star formation-related processes as well as full stellar population synthesis, we are in a position to make much more detailed and perhaps more reliable predictions. Our model suggests that $`20\%`$ of DLAS contain at least one galaxy with an $`R`$ magnitude brighter than 25.5 in the same dark matter halo. The median projected distance between the DLAS and the Lyman-break galaxy is about 30 kpc (Maller et al. 2000a).
Another interesting comparison is with the kinematics of the high ionization state elements (Wolfe & Prochaska 2000). In the simple picture of the SAMs, these profiles would naturally be associated with the gas that has been shock heated to the virial temperature of the halo. The hot gas is distributed spherically in the sub-halo, unlike the cold gas, which would explain why the velocity profiles of the high ions do not trace the low ions (Prochaska & Wolfe 1997a). However, the velocity widths of the two profiles, which are dominated by the motions between sub-halos within the same larger halo, would be related.
The kinematics and $`f(N)`$ distribution for absorbers below the damped limit but with column densities above the value where the disc is truncated (i.e. Lyman-limit systems) also provide an interesting test of our model. Our modeling suggests that the incidence of these absorbers arising from cold discs should increase with the same slope as in Fig. 9 and then turn over abruptly around a column density of $`5\times 10^{19}\mathrm{cm}^2`$. Below this column density these Lyman limit systems must be composed of more diffuse ionized gas. This is found in hydro simulations (Davé et al. 1999) and supported by some observational evidence (Prochaska 1999). Thus observations at these column densities can directly probe whether gas discs reach the values of $`N_t`$ we require to explain the DLAS kinematics and $`f(N)`$. Lastly it is possible to explore how the properties of the DLAS evolve with redshift. The merging rate is a strong function of redshift (Kolatt et al. 1999, 2000) so we would expect the number of “multiple hits” and therefore the kinematics of the DLAS to be significantly different at low redshift. All these issues will be explored in greater detail in subsequent papers.
If one is really to accept our conclusion that high redshift discs have an extended Mestel-type radial profile, clearly we must develop a theory for their origin. It is possible that the standard theory of disc formation is applicable at low redshift, but that some other process dominates at higher redshift. For example, mergers are far more common at high redshift, and the gas fractions of discs are higher (cf. SPF). This may result in efficient transfer of orbital angular momentum to the gas, producing tidal tails that distribute the gas out to large radii. Locally, some interacting galaxies show extremely extended rotating HI, presumably resulting from such a mechanism (Hibbard 1999). Another possibility is that starbursts triggered by mergers produce supernovae-driven outflows like those seen in local starbursts (Heckman 1999) that could also result in extended gas distributions. We find that a toy model in which the gas clouds have a bulk velocity equal to the rotation velocity of the disc, but in a random direction (i.e. not in a rotationally supported disc) still produces good agreement with the observed DLAS kinematics because the kinematics of our model are dominated by the motions of the multiple discs, not the kinematics within these discs.
It is worth noting that the surface densities of the gas discs in our model would be far below the critical value for star formation determined by observations (Kennicutt 1989, 1998), implying that there may be very little star formation taking place in a ‘quiescent’ mode. It is interesting that SPF found that a picture in which quiescent star formation at high redshift is very inefficient and most of the star formation occurs in merger-induced bursts provides the best explanation of the high redshift Lyman-break galaxies. Even in the extreme case in which quiescent star formation is completely switched off, we find that the starburst mode alone can easily produce the observed level of star formation at high redshift. Thus in the high redshift universe interactions between galaxies seem to play a rather prominent role in determining their gas properties and star formation histories.
## Acknowledgements
We thank George Blumenthal, James Bullock, Romeel Davé, Avishai Dekel, Martin Haehnelt, Tsafrir Kolatt, Risa Wechsler, and Art Wolfe for stimulating conversations. We thank Lisa Storrie-Lombardi and Art Wolfe for allowing us to use their data prior to publication. This work was supported by NASA and NSF grants at UCSC. AHM and RSS also acknowledge support from University Fellowships from the Hebrew University, Jerusalem and JXP acknowledges support from a Carnegie postdoctoral fellowship. The bibliography was produced with Jonathon Baker’s Astronat package.
# Can Science ‘explain’ Consciousness ?
## I Introduction
Among all human endeavours, science can be considered the most powerful, for the capacity it endows us with to manipulate nature through an understanding of our position in it. This understanding is gained when a set of careful observations based on tangible perceptions, acquired by sensory organs and/or their extensions, is submitted to the logical analysis of human intellect as well as to the intuitive power of imagination to yield the abstract fundamental laws of nature that are not self-evident at the gross level of phenomenal existence. There exists a unity in nature at the level of laws that corresponds to the manifest diversity at the level of phenomena.
Can consciousness be understood in this sense by an appropriate use of the methodology of science ? The most difficult problem related to consciousness is perhaps, ‘how to define it ?’. Consciousness has remained a unitary subjective experience, its various ‘components’ being reflective (the recognition by the thinking subject of its own actions and mental states), perceptual (the state or faculty of being mentally aware of external environment) and a free will (volition). But how these components are integrated to provide the unique experience called ‘consciousness’, familiar to all of us, remains a mystery. Does it lie at the level of ‘perceptions’ or at the level of ‘laws’ ? Can it be reduced to some basic ‘substance’ or ‘phenomenon’ ? Can it be manipulated in a controlled way ? Is there a need for a change of either the methodology or the paradigm of science to answer the above questions ? In this article, I make a modest attempt to answer these questions, albeit in a speculative manner.
## II Can Consciousness be reduced further ?
Most of the successes of science over the past five hundred years or so can be attributed to the great emphasis it lays on the ‘reductionist paradigm’. Following this approach, can consciousness be reduced either to ‘substance’ or ‘phenomena’ in the sense that by understanding which one can understand consciousness ?
### A Physical Substratum
The attempts to reduce consciousness to a physical basis have been made in the following ways by trying to understand the mechanism and functioning of the human brain in various different contexts.
* Physics
The basic substratum of physical reality is the ‘state’ of the system, and the whole job of physics can be put into a single question : ‘given the initial state, how to predict its evolution at a later time ?’. In the classical world, the state and its evolution can be reduced to events and their spatio-temporal correlations. Consciousness has no direct role to play in this process of reduction, although it is responsible for finding an ‘objective meaning’ in such a reduction.
But the situation is quite different in the quantum world as all relevant physical information about a system is contained in its wavefunction (or equivalently in its state vector), which is not physical in the sense of being directly measurable. Consciousness plays no role in the deterministic and unitary Schrödinger evolution (i.e. the U-process of Penrose) that the ‘unphysical’ wavefunction undergoes.
To extract any physical information from the wavefunction one has to use the Born-Dirac rule, and thus probability enters in a new way into the quantum mechanical description despite the strictly deterministic nature of the evolution of the wavefunction. The measurement process forces the system to choose an ‘actuality’ from all ‘possibilities’ and thus leads to a non-unitary collapse of the general wavefunction to an eigenstate (i.e. the R-process of Penrose) of the concerned observable. The dynamics of this R-process is not known, and it is here that some authors like Wigner have brought in the consciousness of the observer to cause the collapse of the wavefunction. But instead of explaining consciousness, this approach uses consciousness for the sake of Quantum Mechanics, which needs the R-process along with the U-process to yield all its spectacular successes.
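The Born-Dirac rule invoked here is easy to state concretely: expand the normalized state in the eigenbasis of the measured observable, and the probability of each outcome is the squared modulus of the corresponding amplitude. A minimal numerical illustration (the two-level state is arbitrary; this sketches the rule itself, not any model of consciousness):

```python
import numpy as np

# An (arbitrary) state vector, written in the eigenbasis of some
# observable, then normalized.
psi = np.array([3.0 + 4.0j, 0.0 + 5.0j])
psi = psi / np.linalg.norm(psi)

# Born-Dirac rule: outcome probabilities are |<e_i|psi>|^2.
probs = np.abs(psi) ** 2
print("outcome probabilities:", probs)

# The deterministic U-process: any unitary step preserves the norm, and
# hence the total probability; only the R-process singles out one outcome.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
print("norm after unitary step:", np.sum(np.abs(U @ psi) ** 2))
```

The contrast in the code mirrors the text: the unitary step is fully deterministic, while the probabilities only acquire meaning when an outcome is selected.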
The R-process is necessarily non-local and is governed by an irreducible element of chance, which means that the theory is not naturalistic: the dynamics is controlled in part by something that is not a part of the physical universe. Stapp has given a quantum mechanical model of the brain dynamics in which this quantum selection process is a causal process governed not by pure chance but rather by a mathematically specified non-local physical process identifiable as the conscious process. It was reported that attempts have been made to explain consciousness by relating it to the ‘quantum events’, but any such attempt is bound to be futile as the concept of ‘quantum event’ in itself is ill-defined !
Keeping in view the fundamental role that the quantum vacuum plays in formulating the quantum field theories of all four known basic interactions of nature, spreading over a period from the big-bang to the present, it has been suggested that if consciousness is to be reduced to anything ‘fundamental’ at all, it should be the ‘quantum vacuum’ itself. But in such an approach the following questions arise: 1) If consciousness has its origin in the quantum vacuum that gives rise to all fundamental particles as well as the force fields, then why is it that only living things possess consciousness ? 2) What is the relation between the quantum vacuum that gives rise to consciousness and the space-time continuum that confines all our perceptions through which consciousness manifests itself ? 3) Should one attribute consciousness only to systems consisting of ‘real’ particles or also to systems containing ‘virtual’ particles ? Despite these questions, the idea of tracing the origin of ‘consciousness’ to ‘substantial nothingness’ appears quite promising, because the properties of the ‘quantum vacuum’ may ultimately lead us to an understanding of the dynamics of the R-process and thus to a physical comprehension of consciousness.
One of the properties that distinguishes living systems from non-living systems is their capacity for self-organisation and complexity. Since life is a necessary condition for possessing consciousness, can one attribute consciousness to a ‘degree of complexity’ in the sense that various degrees of consciousness can be caused by different levels of complexity? Can one give a suitable quantitative definition of consciousness in terms of ‘entropy’ that describes the ‘degree of self-organisation or complexity’ of a system ? What is the role of non-linearity and non-equilibrium thermodynamics in such a definition of consciousness ? In this holistic view of consciousness what is the role played by the phenomenon of quantum non-locality, first envisaged in the EPR paper and subsequently confirmed experimentally by Aspect et al. ? What is the role of irreversibility and dissipation in this holistic view ?
* Neuro-biology
On the basis of the vast amount of information available on the structure and the modes of communication (neuro-transmitters, neuro-modulators, neuro-hormones) of the neuron, neuroscience has empirically found the neural basis of several attributes of consciousness. With the help of modern scanning techniques and by direct manipulations of the brain, neuro-biologists have found that various human activities (both physical and mental) and perceptions can be mapped into almost unique regions of the brain. Awareness, being intrinsic to neural activity, arises in higher level processing centers and requires integration of activity over time at the neuronal level. But there exists no particular region that can be said to have given rise to consciousness. Consciousness appears to be a collective phenomenon where the ‘whole’ is much more than the sum of the parts ! Does each neuron have the ‘whole of consciousness’ within it, although it works towards a particular attribute of consciousness at a time ?
Can this paradigm of finding neural correlates of the attributes of consciousness be fruitful in demystifying consciousness ? Certainly not ! As it was aptly concluded, the currently prevalent reductionist approaches are unlikely to reveal the basis of such a holistic phenomenon as consciousness. There have been holistic attempts to understand consciousness in terms of collective quantum effects arising in cytoskeletons and microtubules: minute substructures lying deep within the brain’s neurons. The effect of general anaesthetics like chloroform (CHCl<sub>3</sub>), isofluorane (CHF<sub>2</sub>OCHClCF<sub>3</sub>) etc. in switching off the consciousness, not only in higher animals such as mammals or birds but also in paramecium, amoeba, or even green slime mould, has been advocated as providing direct evidence that the phenomenon of consciousness is related to the action of the cytoskeleton and to microtubules. But all the implications of ‘quantum coherence’ regarding consciousness in such an approach can only be unfolded after we achieve a better understanding of ‘quantum reality’, which still lies ahead of present-day physics.
* Artificial Intelligence
Can machines be intelligent ? Within the restricted definition of ‘artificial intelligence’, the neural network approach has been the most promising one. But the possibility of realising a machine capable of artificial intelligence based on this approach is constrained at present by the limitations of ‘silicon technology’ for integrating the desired astronomical number of ‘neuron-equivalents’ into a reasonably compact space. Even if we achieve such a feat in the foreseeable future by using chemical memories, it is not quite clear whether such artificially intelligent machines would be capable of ‘artificial consciousness’, because one lacks at present a suitable working definition of ‘consciousness’ within the framework of studies involving artificial intelligence.
Invoking Gödel’s incompleteness theorem, Penrose has argued that the technology of electronic computer-controlled robots will not provide a way to the artificial construction of an actually intelligent machine–in the sense of a machine that ‘understands’ what it is doing and can act upon that understanding. He maintains that human understanding (hence consciousness) lies beyond formal arguments and beyond computability i.e. in the Turing-machine-accessible sense.
Assuming the inherent ability of quantum mechanics to incorporate consciousness, can one expect any improvement in the above situation by considering ‘computation’ to be a physical process that is governed by the rules of quantum mechanics rather than those of classical physics ? In ‘Quantum computation’ the classical notion of a Turing machine is extended to a corresponding quantum one that takes into account the quantum superposition principle. In ‘standard’ quantum computation, the usual rules of quantum theory are adopted, in which the system evolves according to the U-process for essentially the entire operation, but the R-process becomes relevant only at the end of the operation, when the system is ‘measured’ in order to ascertain either the termination or the result of the computation.
Although the superiority of quantum computation over classical computation in the sense of complexity theory has been shown, Penrose insists that it is still a ‘computational’ process, since the U-process is a computable operation and the R-process is a purely probabilistic procedure. What can be achieved in principle by a quantum computer could also be achieved, in principle, by a suitable Turing-machine-with-randomiser. Thus he concludes that even a quantum computer would not be able to perform the operations required for human conscious understanding. But we think that such a view is limited, because ‘computation’ as a process need not be confined to a Turing-machine-accessible sense, and in such situations one has to explore the power of quantum computation in understanding consciousness.
We conclude from the above discussions that the basic physical substrata to which consciousness may be reduced are ‘neuron’, ‘event’ and ‘bit’ at the classical level, whereas at the quantum level they are ‘microtubule’, ‘wavefunction’ and ‘qubit’; depending on whether the studies are done in neuro-biology, physics or computer science respectively. Can there be a common platform for this trio of substrata ?
We believe the answer to be in the affirmative, and the first hint regarding this comes from Wheeler’s remarkable idea: “ it from bit i.e. every it – every particle, every field of force, even the spacetime continuum itself – derives its function, its meaning, its very existence entirely – even if in some contexts indirectly – from the apparatus-elicited answers to yes or no questions, binary choices, bits”. This view of the world refers not to an object, but to a vision of a world derived from pure logic and mathematics in the sense that an immaterial source and explanation lies at the bottom of every item of the physical world. In a recent report the remarkable extent of the embodiment of this vision in modern physics has been discussed along with the possible difficulties faced by such a scheme. But can this scheme explain consciousness by reducing it to bits ? Perhaps not, unless it undergoes some modification. Why ?
Because consciousness involves an awareness of an endless mosaic of qualitatively different things – such as the colour of a rose, the fragrance of a perfume, the music of a piano, the tactile sense of objects, the power of abstraction, the intuitive feeling for time and space, emotional states like love and hate, the ability to put oneself in another’s position, the ability to wonder, the power to wonder at one’s wondering etc. It is almost impossible to reduce them all to the 0-or-1 sharpness of the definition of ‘bits’. A major part of human experience and consciousness is fuzzy and hence cannot be reduced to yes or no type situations. Hence we believe that the ‘bit’ has to be modified to incorporate this fuzziness of the world. Perhaps the quantum superposition inherent to a ‘qubit’ can help. Can one then reduce consciousness to a consistent theory of ‘quantum information’ based on qubits ? Quite unlikely, till our knowledge of ‘quantum reality’ and the ‘emergence of classicality from it’ becomes more clear.
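As a purely illustrative sketch (plain Python; this captures only the kinematics of a single qubit, not the measurement dynamics or the ‘communication’ the text calls for), the superposition of a qubit can hold graded weights that a sharp 0-or-1 bit cannot:

```python
import math

# A classical bit is sharply 0 or 1; a qubit is a normalized amplitude pair
# (a, b) over the basis states |0> and |1>, so graded, "fuzzy" weightings
# between the two answers are representable.
def qubit(a, b):
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def born_probabilities(q):
    # Born rule: the chance of reading out 0 or 1 on measurement.
    a, b = q
    return abs(a) ** 2, abs(b) ** 2

p0, p1 = born_probabilities(qubit(1.0, 1.0))  # an equal superposition
print(p0, p1)  # ~0.5 each: neither sharply 0 nor sharply 1
```

The state itself carries a continuum of weightings, even though any single readout still returns a sharp answer — which is exactly the tension the text points to.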
The major hurdles to be cleared are (1) Observer or Participator ? (In such an equipment-evoked, quantum-information-theoretic approach, the inseparability of the observer from the observed will bring in the quantum measurement problem either in the form of the dynamics of the R-process or in the emergence of classicality of the world from a quantum substratum. We first need the solutions to these long-standing problems before attempting to reduce the ‘fuzzy’ world of consciousness to ‘qubits’! ); (2) Communication ? (Even if we get the solutions to the above problems that enable us to reduce the ‘attributes of consciousness’ to ‘qubits’, even then the ‘dynamics of the process that gives rise to consciousness’ will be beyond ‘quantum information’ as it will require a suitable definition of ‘communication’ in the sense expressed by Føllesdal: “ Meaning is the joint product of all evidence that is available to those who communicate”. Consciousness helps us to find a ‘meaning’ or ‘understanding’ and will depend upon ‘communication’. Although all ‘evidence’ can be reduced to qubits, ‘communication’ as an exchange of qubits has to be well-defined. Why do we say that a stone or a tree is unconscious ? Is it because we do not know how to ‘communicate’ with them ? Can one define ‘communication’ in physical terms beyond any verbal or non-verbal language ? Where does one look for a suitable definition of ‘communication’ ? Maybe one has to define ‘communication’ at the ‘substantial nothingness’ level of the quantum vacuum.); (3) Time’s Arrow ? (How important is the role of memory in ‘possessing consciousness’ ? Would our consciousness be altered if the world we experience were reversible with respect to time ? Can our consciousness ever find out why it is not possible to influence the past ?).
Hence we conclude that although consciousness may be beyond ‘computability’, it is not beyond ‘quantum communicability’ once a suitable definition for ‘communication’ is found that exploits the quantum superposition principle to incorporate the fuzziness of our experience. A few questions arise: 1) how to modify the qubit ?, 2) can a suitable definition of ‘communication’, based on an immaterial entity like the ‘qubit’ or ‘modified qubit’, take care of non-physical experiences like dreams or thoughts ? We assume, being optimistic, that a suitable modification of the ‘qubit’ is possible that will surpass the hurdles of communicability, the dynamics of the R-process and irreversibility. For lack of a better word we will henceforth call such a modified qubit the ‘Basic Entity’ (BE).
### B Non-Physical Substratum
Unlike our sensory perceptions related to physical ‘substance’ and ‘phenomena’, there exists a plethora of human experiences, like dreams, thoughts and the lack of any experience during sleep, which are believed to be non-physical in the sense that they cannot be reduced to anything basic within the confinement of space-time and causality. For example, one cannot ascribe either spatiality or causality to human thoughts, dreams etc. Does one need a framework that transcends spatio-temporality to incorporate such non-physical ‘events’ ? Or can one explain them by using BE ? The following views can be taken depending on one’s belief:
* Modified BE \[ M(BE) \]
What could be the basic substratum of these non-physical entities ? Could they be understood in terms of any suitably modified physical substratum ? At the classical level one might think of reducing them to ‘events’ which, unlike the physical events, do not have any reference to spatiality. Attempts have been made to understand non-physical entities like thoughts and dreams in terms of temporal events and the correlations between them. Although such an approach may yield the kinematics of these non-physical entities, it is not clear how their dynamics, i.e. evolution etc., can be understood in terms of the temporal component alone without any external spatial input, when in the first place they have arisen from perceptions that are meaningful only in the context of spatio-temporality ?! Secondly, it is not clear why the ‘mental events’ constructed after dropping the spatiality should require a new set of laws that are different from the usual physical laws.
At the quantum level one might try a suitable modification of the wavefunction to incorporate these non-physical entities. One may make the wavefunction depend on extra parameters, either physical or non-physical, to give it the extra degrees of freedom to mathematically include more information. But such a wavefunction is bound to have severe problems at the level of interpretation. For example, if one includes an extra parameter called ‘meditation’ as a new degree of freedom apart from the usual ones, then how will one interpret the squared modulus of the wavefunction ? It will certainly be too crude to extend the Born rule to conclude that the squared modulus in this case will give the probability of finding a particle having a certain meditation value ! Hence this kind of modification will not be of much help except for the apparent satisfaction of being able to write an eigenvalue equation for dreams or emotions ! This approach is certainly not capable of telling how the wavefunction is related to consciousness, let alone giving a mathematical equation for the evolution of consciousness !
If one accepts consciousness as a phenomenon that arises out of execution of processes then any suggested new physical basis can be shown to be redundant. As we have concluded earlier, all such possible processes and their execution can be reduced to BE and spatio-temporal correlations among BE using a suitable definition of communication.
Hence to incorporate non-physical entities as some kind of information one has to modify the BE in a subtle way. Schematically M(BE) = BE $`\circ `$ X, where $`\circ `$ stands for a yet unknown operation and X stands for the fundamental substratum of non-physical information. X has to be different from BE; otherwise it could be reduced to BE and then there would be no spatio-temporal distinction between physical and non-physical information. But how does one find out what X is ? Is it evident that the laws for M(BE) will be different from those for BE ?
* Give up BE
One could believe that it is the ‘Qualia’ that constitute consciousness and hence that consciousness has to be understood at a phenomenological level without dissecting it into BE or M(BE). One would note that consciousness mainly consists of three phenomenological processes that can be roughly put as retentive, reflective and creative. But keeping in view the tremendous progress of our physical sciences and their utility to the neuro-sciences, it is not unreasonable to expect that all three of these phenomenological processes, involving both humans and animals, can be understood one day in terms of M(BE).
* Platonic BE
It has been suggested that consciousness could be like mathematics in the sense that although it is needed to comprehend the physical reality, in itself it is not ‘real’.
The ‘reality’ of mathematics is a controversial issue that brings in the old debate between the realists and the constructivists: whether a mathematical truth is ‘a discovery’ or ‘an invention’ of the human mind ? Should one consider the physical laws based on mathematical truth as real or not ?! The realist’s stand of attributing a Platonic existence to mathematical truth is a matter of pure faith unless one tries to get guidance from knowledge of the physical world. It is doubtful whether our knowledge of the physical sciences provides support for the realist’s view if one considers the challenge to ‘realism’ in the physical sciences posed by the quantum world-view, which has been substantiated in the recent past by experiments that violate Bell’s inequalities.
Even if one accepts the Platonic world of mathematical forms, this in no way makes consciousness non-existent or unreal. Rather, the very fact that the truth of such a Platonic world of mathematics yields to human understanding as much as that of the physical world makes consciousness all the more profound in its existence.
## III Can Consciousness be manipulated ?
Can consciousness be manipulated in a controlled manner ? Experience tells us how difficult it is to control the thoughts and how improbable it is to control the dreams. We discuss below few methods prescribed by western psycho-analysis and oriental philosophies regarding the manipulation of consciousness. Is there a lesson for modern science to learn from these methods ?
### A Self
The subject of ‘self’ is usually considered to belong to an ‘internal space’ in contrast to the external space where we deal with others. We will consider the following two cases here:
* Auto-suggestions
There is evidence that by auto-suggestions one can control one’s feelings of pain and pleasure. Can one cure oneself of diseases of physical origin by auto-suggestions ? This requires further investigation.
* Yoga and other oriental methods
The eight-fold (ashtanga) Yoga of Patanjali is perhaps the most ancient method prescribed to control one’s thought and to direct it in a controlled manner. But it requires a certain control over body and emotions before one aspires to gain control over the mind. In particular it lays great stress on ‘breath control’ (pranayama) as a means to relax the body and to still the mind. In its later stages it provides systematic methods to acquire concentration (dhyan) and to prolong concentration on an object or a thought (dharna).
After this attainment one can reach a stage where one’s awareness of the self and the surroundings is at its best. Then in its last stage, Yoga prescribes that one’s acute awareness be decontextualized from all perceptions limited by spatio-temporality, and thus that one reach a pinnacle (samadhi) where one attains an understanding of everything and has no doubts. In this sense the Yogic philosophy believes that pure consciousness transcends all perceptions and awareness. It is difficult to understand this on the basis of day-to-day experience. Why does one need to sharpen one’s awareness to its extreme if one is finally going to abandon its use ? How does abandoning one’s sharpened awareness help in attaining a realisation that transcends spatio-temporality ? Can anyone realise anything that is beyond space, time and causality ? What is the purpose of such a consciousness that lies beyond the confinement of space and time ?
### B Non-Self
The Non-Self belongs to an external world consisting of others, both living and non-living. In the following we discuss whether one can direct one’s consciousness towards others such that one can affect their behaviour.
* Hypnosis, ESP etc…
It is a well-known fact that it is possible to hypnotise a person and then to make contact with his/her sub-conscious mind. Where does this sub-conscious lie ? What is its relation to the conscious mind ? The efficacy of the method of hypnosis in curing people of deep-rooted psychological problems tells us that we are yet to understand the dynamics of the human brain fully.
The field of Para-Psychology deals with ‘phenomena’ like Extra Sensory Perception (ESP) and telepathy, where one can direct one’s consciousness to gain insight into the future or to influence others’ minds. It is not possible to explain these on the basis of the known laws of the world. It has been claimed that under hypnosis a subject could vividly recollect incidents from previous lives, including near-death and death experiences, which are independent of spatio-temporality. Then it is not clear why most of these experiences are related to the past ? If these phenomena are truly independent of space and time, then studies should be made to find out if anybody under hypnosis can predict his/her own death, an event that can easily be verified in due course of time, unlike the recollections of past lives !
* PK, FieldREG etc.
Can the mind influence matter outside of the body ? Studies dubbed Psycho-Kinesis (PK) have been conducted to investigate the ‘suspect’ interaction of the human mind with various material objects such as cards, dice, simple pendulums etc. An excellent historical overview of such studies leading up to the modern era is available as a review paper, titled “ The Persistent Paradox of Psychic Phenomena: An Engineering Perspective”, by Robert Jahn of Princeton University published in Proc. IEEE (Feb. 1982).
The Princeton Engineering Anomalies Research (PEAR) programme of the Department of Applied Sciences and Engineering, Princeton University, has recently developed and patented a ‘Field REG’ (Field Random Event Generator) device which is basically a portable notebook computer with a built-in truly random number generator (based on a microelectronic device such as a shot noise resistor or a solid-state diode) and requisite software for on-line data processing and display, specifically tailored for conducting ‘mind-machine interaction’ studies.
After performing a large number of systematic experiments over the last two decades, the PEAR group has reported the existence of such a consciousness related mind-machine interaction in the case of ‘truly random devices’. They attribute it to a ‘Consciousness Field Effect’. They have also reported that deterministic random number sequences such as those generated by mathematical algorithms or pseudo-random generators do not show any consciousness related anomalous behaviour. Another curious finding is that ‘intense emotional resonance’ generates the effect whereas ‘intense intellectual resonance’ does not ! It is also not clear what the strength of the ‘consciousness field’ is in comparison to the four known basic force fields of nature.
One should not reject outright any phenomenon that cannot be explained by the known basic laws of nature, because each such phenomenon holds the key to extending the boundary of our knowledge further. But before accepting these effects one should filter them through the rigours of scientific methodology. In particular, the following questions can be asked:
* Why are these events rare and not repeatable ?
* How does one make sure that these effects are not manifestations of yet unknown facets of the known forces ?
Why is it necessary to have truly random processes ? How does one make sure that these are not merely statistical artifacts ?
If the above effects survive the scrutiny of these questions (or similar ones) then they will open up the doors to a new world not yet known to science. In such a case how does one accommodate them within the existing framework of scientific methods ? If these effects are confirmed beyond doubt, then one has to explore the possibility that at the fundamental level of nature the laws are either different from the known physical laws or need to be complemented by a set of non-physical laws ! In such a situation, these ‘suspect’ phenomena might provide us with the valuable clue for modifying BE to get M(BE), which is the basis of everything, both physical and mental !
## IV Is there a need for a change of paradigm ?
Although the reductionist approach can provide us with valuable clues regarding the attributes of consciousness, only a holistic approach can explain consciousness. But the dualism of Descartes, which treats physical and mental processes in a mutually exclusive manner, will not suffice for understanding consciousness unless it makes an appropriate use of complementarity for mental and physical events, analogous to the complementarity evident in the quantum world.
## V Conclusion
Where does the brain end and the mind begin ? The brain is the physical means to acquire and to retain information for the mind to process in order to find a ‘meaning’ or a ‘structure’ which we call ‘understanding’, and which is attributed to consciousness. Whereas the attributes of consciousness can be reduced to BE \[or to M(BE)\], the holistic process of consciousness can only be understood in terms of ‘quantum communication’, where ‘communication’ has an appropriate meaning. Maybe one has to look for such a suitable definition of communication at the level of the ‘quantum vacuum’.
## VI Acknowledgements
It is a pleasure to thank the organisers, in particular to Prof. B. V. Sreekantan and Dr. Sangeetha Menon; for the hospitality and encouragement as well as for providing the conducive atmosphere that made this article possible.
## 1 Introduction
Statistical quantum chromodynamics predicts that at sufficiently high densities or temperatures the quarks and gluons confined inside hadrons undergo a deconfining phase transition to a plasma of quarks and gluons. The last two decades of high energy nuclear physics activity have been directed towards the production of this new state of matter through relativistic heavy ion collisions. This has led to experiments at the BNL AGS and the CERN SPS, to the building of the BNL Relativistic Heavy Ion Collider, and to the planning of the ALICE experiment at the CERN Large Hadron Collider. With the reported confirmations of the quark-hadron phase transition at the relativistic heavy ion collision experiments at the CERN SPS, the first step in the search for the quark-gluon plasma, which pervaded the early universe microseconds after the big bang and which may be present in the core of neutron stars, is complete.
The emphasis of the experiments at the BNL RHIC and the CERN LHC will now necessarily shift to an accurate determination of the properties of the quark matter. An important observable for this is the speed of sound in the plasma, defined through:
$$c_s^2=\frac{\partial p}{\partial ϵ}.$$
(1)
Often one writes,
$$p=c_s^2ϵ$$
(2)
for the equation of state of the quark matter, where $`ϵ`$ is the energy density and $`p`$ is the pressure. For the simplest bag-model equation of state (with $`\mu _B=0`$), we write
$`ϵ`$ $`=`$ $`3aT^4+B,`$ (3)
$`p`$ $`=`$ $`aT^4-B,`$ (4)
$`a`$ $`=`$ $`\left[2\times 8+{\displaystyle \frac{7}{8}}\times 2\times 2\times 3\times N_f\right]{\displaystyle \frac{\pi ^2}{90}},`$ (5)
with $`B`$ as the bag pressure and $`N_f`$ as the number of flavours, so that $`c_s^2=1/3`$. In general $`\mathrm{\Delta }=ϵ-3p`$ measures the deviation of the equation of state from the ideal gas of massless quarks and gluons (when it is identically zero) and depends sensitively on the interactions present in the plasma. The lattice QCD calculations show that $`\mathrm{\Delta }\ne 0`$ till the temperature is several times the critical temperature. This implies that in general $`c_s^2\ne 1/3`$. Any experimental information on this will be most welcome.
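As a quick numerical check of Eqs. (3)–(5), the following Python sketch evaluates the bag-model quantities for two flavours; the temperature and bag constant are arbitrary illustrative values, not fits:

```python
import math

def bag_model(T, B, n_f):
    """Energy density and pressure of the bag model, Eqs. (3)-(5)."""
    a = (2 * 8 + (7.0 / 8.0) * 2 * 2 * 3 * n_f) * math.pi ** 2 / 90.0
    return 3.0 * a * T ** 4 + B, a * T ** 4 - B  # (epsilon, p)

# Illustrative values only: T in GeV, B in GeV^4, two flavours.
T, B, h = 0.3, 0.2, 1e-6
eps, p = bag_model(T, B, 2)
eps_h, p_h = bag_model(T + h, B, 2)

delta = eps - 3.0 * p            # interaction measure: exactly 4B here
cs2 = (p_h - p) / (eps_h - eps)  # dp/d(eps) by finite difference -> 1/3
print(delta, cs2)
```

The bag constant cancels in the derivative, which is why the simplest bag model always gives $`c_s^2=1/3`$ even though $`\mathrm{\Delta }=4B`$ is non-zero.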
We show in the present work that the transverse momentum dependence of the survival probability of the $`J/\psi `$ and $`\mathrm{\Upsilon }`$ at RHIC and LHC energies is quite sensitive to the value of the speed of sound. The very long life-time of the plasma likely to be attained at the LHC makes it even more sensitive to the details of the equation of state of the quark matter through the transverse expansion of the plasma.
## 2 Formulation
The theory of quarkonium suppression in QGP is very well studied and several excellent reviews exist, which dwell both on the phenomenology and on the experimental situation. We recall the basic details relevant for the present demonstration.
The interquark potential for (non-relativistic) quarkonium states at zero temperature may be written as:
$$V(r,0)=\sigma r-\frac{\alpha }{r}$$
(6)
where $`r`$ is the separation between $`Q`$ and $`\overline{Q}`$. The bound-states of $`c\overline{c}`$ and $`b\overline{b}`$ are well described if the parameters $`\sigma =`$ 0.192 GeV<sup>2</sup>, $`\alpha =`$ 0.471, $`m_c=`$ 1.32 GeV, and $`m_b=`$ 4.746 GeV are used. At finite temperatures the potential is modified due to colour screening, and evolves to:
$$V(r,T)=\frac{\sigma }{\mu (T)}\left[1-e^{-\mu (T)r}\right]-\frac{\alpha }{r}e^{-\mu (T)r}.$$
(7)
The screening mass increases with temperature. When $`\mu (T)\to 0`$, the equation (6) is recovered. At finite temperature, when $`r\to 0`$ the $`1/r`$ behaviour is dominant, while as $`r\to \mathrm{\infty }`$ the range of the potential decreases with $`\mu (T)`$. This makes the binding less effective at finite temperature. Semiclassically, one can write for the energy of the pair,
$$E(r,T)=2m_Q+\frac{c}{m_Qr^2}+V(r,T)$$
(8)
where $`<p^2><r^2>=c=\mathcal{O}(1)`$. The radius of the bound state at any temperature is obtained by minimizing $`E(r,T)`$. Beyond some critical value $`\mu _D`$ for the screening mass $`\mu (T)`$, no minimum is found. The screening is now strong enough to make the binding impossible and the resonance cannot form in the plasma. The ground state properties of some of the quarkonia reported by the authors of Ref. are given in Table 1. We have also listed the formation time of these resonances, defined in Ref. as the time taken by the heavy quark to traverse a distance equal to the radius of the quarkonium in its rest frame, $`m_Qr_{Q\overline{Q}}/p_{Q\overline{Q}}`$, where $`p_{Q\overline{Q}}`$ is the momentum of either of the quarks of the resonance. It may be recalled that somewhat different values for the formation time are reported by Blaizot and Ollitrault, who solve the bound-state problem within the WKB approximation and define the formation time as the time spent by a quark in going between the two classical turning points.
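The semiclassical minimization of Eq. (8) can be sketched numerically with the screened potential of Eq. (7) and the parameters quoted above. Here $`c`$ is set to 1 (it is only fixed to be of order unity) and a simple grid search replaces a proper minimization, so the screening masses used are illustrative rather than the $`\mu _D`$ values of Table 1:

```python
import math

SIGMA, ALPHA, M_Q, C = 0.192, 0.471, 1.32, 1.0  # GeV units; C = O(1) set to 1

def pair_energy(r, mu):
    """E(r, T) of Eq. (8) with the screened potential of Eq. (7); r in GeV^-1."""
    v = SIGMA / mu * (1.0 - math.exp(-mu * r)) - ALPHA / r * math.exp(-mu * r)
    return 2.0 * M_Q + C / (M_Q * r * r) + v

def has_bound_state(mu, r_lo=0.2, r_hi=8.0, n=2000):
    """True if E(r) has an interior local minimum, i.e. binding is possible."""
    rs = [r_lo + (r_hi - r_lo) * i / n for i in range(n + 1)]
    es = [pair_energy(r, mu) for r in rs]
    return any(es[i] < es[i - 1] and es[i] < es[i + 1] for i in range(1, n))

# Weak screening still binds the c-cbar pair; strong screening melts it.
print(has_bound_state(0.2), has_bound_state(1.5))
```

At small $`\mu `$ the linear term confines at large $`r`$ and a minimum exists; at large $`\mu `$ the string term saturates at $`\sigma /\mu `$, the energy decreases monotonically, and no bound state forms.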
Now let us consider a central nucleus-nucleus collision, which results in the formation of quark gluon plasma at some time $`\tau _0`$. Let us concentrate at $`z=0`$ and on the region of energy density $`ϵ\ge ϵ_s`$, which encloses the plasma that is dense enough to cause the melting of a particular state of quarkonium. We assume the plasma to cool according to Bjorken’s boost invariant (longitudinal) hydrodynamics and then generalize our results to include the transverse expansion of the plasma. We assume that the $`Q\overline{Q}`$ pair is produced at the transverse position $`𝐫`$ at $`\tau =0`$ on the $`z=0`$ plane with momentum $`𝐩_T`$. In the collision frame, the pair would take a time equal to $`\tau _FE_T/M`$ for the quarkonium to form, where $`E_T=\sqrt{p_T^2+M^2}`$ and $`M`$ is the mass of the quarkonium. During this time, the pair would have moved to the location $`(𝐫+\tau _F𝐩_T/M)`$. If, at this instant, the plasma has cooled to an energy density less than $`ϵ_s`$, the pair would escape and the quarkonium would be formed. If, however, the energy density is still larger than $`ϵ_s`$, the resonance will not form and we shall have quarkonium suppression.
It is easy to see that the $`p_T`$ dependence of the survival probability will depend on how rapidly the plasma cools. If the initial energy density is sufficiently high, the plasma will take longer to cool and only the pairs with very high $`p_T`$ will escape. If however the plasma cools rapidly, then even pairs with moderate $`p_T`$ will escape. The transverse expansion of the plasma can further accelerate the rate of cooling giving us an additional handle to explore the equation of state, which as we know, will control the expansion of the plasma.
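The escape criterion of the preceding paragraphs can be written down directly; the sketch below combines it with the energy-density profile and cooling time introduced later in this section (Eqs. 13 and 17). All numbers — the nuclear radius, the densities, and the $`J/\psi `$-like mass and formation time — are illustrative assumptions, and the pair is taken to move radially outward for simplicity:

```python
import math

# Illustrative parameters (assumed, not fitted): lengths in fm, times in fm/c,
# energy densities in GeV/fm^3, masses in GeV.
R, EPS0, EPS_S, TAU0, BETA, CS2 = 7.0, 8.0, 2.0, 0.5, 0.5, 1.0 / 3.0
TAU_F, M = 0.89, 3.1  # J/psi-like formation time and mass (assumed values)

def tau_s(r):
    """Time for the local energy density (Eq. 13 profile) to fall to eps_s (Eq. 17)."""
    eps = EPS0 * max(0.0, 1.0 - (r / R) ** 2) ** BETA
    if eps <= EPS_S:
        return 0.0  # already below the melting density
    return TAU0 * (eps / EPS_S) ** (1.0 / (1.0 + CS2))

def survives(r, pT):
    """Pair created at radius r escapes if the plasma at its position after the
    dilated formation time tau_F * E_T / M has already cooled below eps_s."""
    eT = math.sqrt(pT ** 2 + M ** 2)
    t_form = TAU_F * eT / M     # formation time in the collision frame
    r_new = r + TAU_F * pT / M  # transverse drift (radially outward assumed)
    return t_form >= tau_s(r_new)

print(survives(6.5, 1.0), survives(0.0, 1.0))
```

With these placeholder numbers a pair born near the edge drifts into cool matter and survives, while a central pair at the same $`p_T`$ does not — the geometry behind the $`p_T`$ dependence described above.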
### 2.1 Longitudinal expansion of the plasma
As indicated, we first take Bjorken’s boost-invariant longitudinal hydrodynamics to describe the expansion of the plasma. Thus, the energy momentum tensor of the plasma is written as:
$$T^{\mu \nu }=(ϵ+p)u^\mu u^\nu +g^{\mu \nu }p,$$
(9)
where $`ϵ`$ is the energy-density, $`p`$ is the pressure, and $`u^\mu `$ is the four velocity of the fluid, in a standard notation. If the effects of viscosity are neglected, the energy-momentum conservation is given by
$$\partial _\mu T^{\mu \nu }=0.$$
(10)
The assumption of boost invariance ensures that the energy density, pressure, and temperature become functions of the proper time $`\tau `$ alone, and that Eq.(10) simplifies to
$$\frac{dϵ}{d\tau }=-\frac{ϵ+p}{\tau }.$$
(11)
The effect of the speed of sound is seen immediately. Using the Eq.(2), we can now write
$$ϵ(\tau )\tau ^{1+c_s^2}=ϵ(\tau _0)\tau _0^{1+c_s^2}=\text{const.}$$
(12)
so that if $`c_s^2`$ is small, the cooling is slower. Chu and Matsui explored the consequence of the extremes $`c_s^2=1/3`$ and $`c_s^2=0`$ on the $`p_T`$ dependence of the survival probability. We shall explore the sensitivity of the quarkonium suppression to the equation of state by (somewhat arbitrarily) choosing two values of the speed of sound, $`1/\sqrt{3}`$ and $`1/\sqrt{5}`$, in the following.
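The sensitivity of the cooling rate to the speed of sound in Eq.(12) can be illustrated with a short numerical sketch. The initial values below are illustrative (taken from the RHIC entry of Eq.(33) later in the text), and the target energy density stands in for a generic screening value:

```python
# Bjorken cooling law of Eq. (12): eps(tau) * tau**(1+cs2) = const.
# eps0, tau0 and the target density are assumed illustrative values.

def eps_of_tau(tau, eps0, tau0, cs2):
    """Energy density at proper time tau for a boost-invariant expansion."""
    return eps0 * (tau0 / tau) ** (1.0 + cs2)

def tau_at_eps(eps_target, eps0, tau0, cs2):
    """Proper time at which the energy density drops to eps_target."""
    return tau0 * (eps0 / eps_target) ** (1.0 / (1.0 + cs2))

eps0, tau0 = 60.0, 0.25                       # GeV/fm^3, fm/c
for cs2 in (1.0 / 3.0, 1.0 / 5.0):
    t = tau_at_eps(2.0, eps0, tau0, cs2)
    print(f"cs^2 = {cs2:.3f}: eps reaches 2 GeV/fm^3 at tau = {t:.2f} fm/c")
```

With these numbers the softer equation of state ($`c_s^2=1/5`$) keeps the system above 2 GeV/fm<sup>3</sup> for about 4.3 fm/$`c`$, versus about 3.2 fm/$`c`$ for $`c_s^2=1/3`$.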
We now have all the ingredients to write down the survival probability and we closely follow Chu and Matsui for this.
We take a simple parametrization for the energy-density profile:
$$ϵ(\tau _0,r)=ϵ_0\left[1-\frac{r^2}{R^2}\right]^\beta \theta (R-r)$$
(13)
where $`r`$ is the transverse co-ordinate and $`R`$ is the radius of the nucleus. One can define an average energy density $`<ϵ_0>`$ as
$$\pi R^2<ϵ_0>=2\pi \int r𝑑rϵ(r)$$
(14)
so that
$$ϵ_0=(1+\beta )<ϵ_0>.$$
(15)
We have taken $`\beta =1/2`$, which may be thought of as indicative of the deposited energy being proportional to the number of participants in the system. If one instead takes the deposited energy to be proportional to the number of nucleon-nucleon collisions, one can repeat the calculations with $`\beta =1`$, which reflects the proportionality of the deposited energy to the nuclear thickness. The average energy-density is obtained from the Bjorken formula:
$$<ϵ_0>=\frac{1}{\pi R_T^2\tau _0}\frac{dE_T}{dy}$$
(16)
where $`E_T`$ is the transverse energy deposited in the collision.
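Eq.(16) is a one-line estimate; a minimal sketch follows, where $`dE_T/dy`$, $`R`$ and $`\tau _0`$ are assumed example numbers of roughly the right magnitude for $`Pb+Pb`$ at the SPS, not inputs quoted in the text:

```python
import math

# Bjorken estimate of Eq. (16): average initial energy density from the
# transverse-energy rapidity density; peak density from Eq. (15), beta = 1/2.
# dET_dy, R, tau0 below are assumed illustrative numbers.

def bjorken_avg_eps(dET_dy, R, tau0):
    """<eps_0> in GeV/fm^3 for dET/dy in GeV, R and tau0 in fm."""
    return dET_dy / (math.pi * R**2 * tau0)

avg = bjorken_avg_eps(dET_dy=430.0, R=6.6, tau0=0.5)   # ~6.3 GeV/fm^3
eps_peak = (1.0 + 0.5) * avg                            # Eq. (15) with beta = 1/2
```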
The time $`\tau _s`$ when the energy density drops to $`ϵ_s`$ is easily estimated as
$`\tau _s(r)`$ $`=`$ $`\tau _0\left[{\displaystyle \frac{ϵ(\tau _0,r)}{ϵ_s}}\right]^{1/(1+c_s^2)}`$ (17)
$`=`$ $`\tau _0\left[{\displaystyle \frac{ϵ_0}{ϵ_s}}\right]^{1/(1+c_s^2)}\left[1-{\displaystyle \frac{r^2}{R^2}}\right]^{\beta /(1+c_s^2)}`$
As discussed earlier, we can equate the duration of screening $`\tau _s(r)`$ to the formation time $`t_F=\gamma \tau _F`$ for the quarkonium to get the critical radius, $`r_s`$:
$$r_s=R\left[1-\left(\frac{\gamma \tau _F}{\tau _{s0}}\right)^{(1+c_s^2)/\beta }\right]^{1/2}\theta \left[1-\frac{\gamma \tau _F}{\tau _{s0}}\right],$$
(18)
where $`\tau _{s0}=\tau _s(r=0)`$. This critical radius marks the boundary of the region where the quarkonium formation is suppressed. As discussed earlier, the quark-pair will escape the screening region (and form quarkonium) if its position and transverse momentum $`𝐩_T`$ are such that
$$\left|𝐫+\tau _F𝐩_T/M\right|\geq r_s.$$
(19)
Thus, if $`\varphi `$ is the angle between the vectors $`𝐫`$ and $`𝐩_T`$, then
$$\mathrm{cos}\varphi \geq \left[(r_s^2-r^2)M-\tau _F^2p_T^2/M\right]/\left[2r\tau _Fp_T\right],$$
(20)
which leads to a range of values of $`\varphi `$ for which the quarkonium would escape. We also see that if the right-hand side of the above inequality exceeds 1, there is no angle for which the quarkonium can escape. Now we can write for the survival probability of the quarkonium:
$$S(p_T)=\left[\int _0^Rr𝑑r\int _{-\varphi _{\text{max}}}^{+\varphi _{\text{max}}}𝑑\varphi P(𝐫,𝐩_T)\right]/\left[2\pi \int _0^Rr𝑑rP(𝐫,𝐩_T)\right],$$
(21)
where $`\varphi _{\text{max}}`$ is the maximum positive angle ($`0\varphi \pi `$) allowed by Eq.(20), and
$$\varphi _{\text{max}}=\{\begin{array}{cc}\pi \hfill & \text{if }y\leq -1\hfill \\ \mathrm{cos}^{-1}(y)\hfill & \text{if }-1<y<1\hfill \\ 0\hfill & \text{if }y\geq 1\hfill \end{array},$$
(22)
where
$$y=\left[(r_s^2-r^2)M-\tau _F^2p_T^2/M\right]/\left[2r\tau _Fp_T\right],$$
(23)
and $`P`$ is the probability for the quark-pair production at $`𝐫`$ with transverse momentum $`𝐩_T`$, in a hard collision. Assuming that the $`𝐩_T`$ and $`𝐫`$ dependences for hard collisions factorize, we approximate
$$P(𝐫,𝐩_T)=P(r,p_T)=f(r)g(p_T),$$
(24)
where we take
$$f(r)\propto \left[1-\frac{r^2}{R^2}\right]^\alpha \theta (R-r)$$
(25)
with $`\alpha =1/2`$. Eq.(21) can be solved analytically in some limiting cases of $`p_T`$; see Ref. .
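The chain from Eq.(13) through Eq.(22) can also be evaluated numerically. A minimal, self-contained sketch for the longitudinal case follows; all parameter values (mass, formation time, energy densities, nuclear radius) are illustrative assumptions, not the paper's fitted inputs:

```python
import math

# Eq. (17)/(18): screening time and critical radius; Eq. (22): escape angle;
# Eq. (21): survival probability with the production profile of Eq. (25).
# Units: energy densities in GeV/fm^3, times in fm/c, radii in fm.

def tau_s0(eps0, eps_s, tau0, cs2):
    """Screening time at r = 0, Eq. (17)."""
    return tau0 * (eps0 / eps_s) ** (1.0 / (1.0 + cs2))

def r_screen(pT, M, tau_F, eps0, eps_s, tau0, R, cs2, beta=0.5):
    """Critical radius of the suppression region, Eq. (18)."""
    gamma = math.sqrt(pT**2 + M**2) / M               # E_T / M
    ratio = gamma * tau_F / tau_s0(eps0, eps_s, tau0, cs2)
    if ratio >= 1.0:                                  # theta function in Eq. (18)
        return 0.0
    return R * math.sqrt(1.0 - ratio ** ((1.0 + cs2) / beta))

def phi_max(r, r_s, pT, M, tau_F):
    """Maximum escape angle, Eq. (22); escape requires cos(phi) >= y."""
    if r == 0.0:
        return math.pi if tau_F * pT / M >= r_s else 0.0
    y = ((r_s**2 - r**2) * M - tau_F**2 * pT**2 / M) / (2.0 * r * tau_F * pT)
    if y <= -1.0:
        return math.pi
    if y >= 1.0:
        return 0.0
    return math.acos(y)

def survival(pT, M, tau_F, eps0, eps_s, tau0, R, cs2, alpha=0.5, n=400):
    """Midpoint-rule evaluation of Eq. (21)."""
    r_s = r_screen(pT, M, tau_F, eps0, eps_s, tau0, R, cs2)
    num = den = 0.0
    for i in range(n):
        r = (i + 0.5) * R / n
        w = r * (1.0 - (r / R) ** 2) ** alpha         # r * f(r), Eq. (25)
        num += w * 2.0 * phi_max(r, r_s, pT, M, tau_F)
        den += w * 2.0 * math.pi
    return num / den

# J/psi-like example (M = 3.1 GeV, tau_F = 0.89 fm/c assumed):
args = dict(M=3.1, tau_F=0.89, eps0=9.45, eps_s=2.0, tau0=0.5, R=6.6, cs2=1 / 3)
S1, S10 = survival(pT=1.0, **args), survival(pT=10.0, **args)
```

Note that $`r_s`$ shrinks as $`p_T`$ grows (larger $`\gamma `$), so the survival probability rises with $`p_T`$ and reaches unity once $`\gamma \tau _F`$ exceeds $`\tau _{s0}`$; for a transversely expanding plasma, `tau_s0` would be replaced by the screening time extracted from a hydro solution.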
## 3 Transverse expansion of the plasma
It is generally accepted that the rarefaction wave from the surface of the plasma will reach the centre by $`\tau =R/c_s`$. For the case of lead nuclei, this comes to about 12 fm/$`c`$. If the lifetime of the QGP is comparable to this time, the transverse expansion of the plasma cannot be ignored. The transverse expansion of the plasma will lead to much more rapid cooling than suggested by a purely longitudinal expansion.
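The rarefaction-arrival estimate quoted above can be checked in two lines; the radius $`R\approx 6.6`$ fm for lead is an assumed value:

```python
import math

# Arrival time of the transverse rarefaction wave at the axis, tau ~ R/c_s.
# R is an assumed Pb radius; a softer EoS (smaller c_s) delays the arrival.

R = 6.6                                   # fm (assumed)
for cs in (1.0 / math.sqrt(3.0), 1.0 / math.sqrt(5.0)):
    print(f"c_s = {cs:.3f}: rarefaction reaches the centre at ~{R / cs:.1f} fm/c")
```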
For the four-velocity of the collective flow we write:
$$u^\mu =(\gamma ,\gamma 𝐯)$$
(26)
where $`𝐯`$ is the collective flow velocity and $`\gamma =1/\sqrt{1v^2}`$. We further assume that the longitudinal flow of the plasma has a scaling solution, so that the boost-invariance along the longitudinal direction remains valid. Assuming cylindrical symmetry, valid for central collisions, it can be shown that the four-velocity $`u^\mu `$ should have the form,
$$u^\mu =\gamma _r(\tau ,r)(t/\tau ,v_r\mathrm{cos}\varphi ,v_r\mathrm{sin}\varphi ,z/\tau ),$$
(27)
with
$`\gamma _r`$ $`=`$ $`\left[1-v_r^2(\tau ,r)\right]^{-1/2}`$
$`\tau `$ $`=`$ $`(t^2-z^2)^{1/2}`$
$`\eta `$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{ln}{\displaystyle \frac{t+z}{t-z}}.`$ (28)
Thus all the Lorentz scalars are now functions of $`\tau `$ and $`r`$, and independent of the space-time rapidity $`\eta `$. This reduces the $`(3+1)`$ dimensional expansion with cylindrical symmetry and boost-invariance along the longitudinal direction to:
$$\partial _\tau T^{00}+r^{-1}\partial _r(rT^{01})+\tau ^{-1}(T^{00}+p)=0$$
(29)
and
$$\partial _\tau T^{01}+r^{-1}\partial _r\left[r(T^{00}+p)v_r^2\right]+\tau ^{-1}T^{01}+\partial _rp=0$$
(30)
where
$$T^{00}=(ϵ+p)u^0u^0-p$$
(31)
and
$$T^{01}=(ϵ+p)u^0u^1.$$
(32)
We see that the speed of sound, which enters through the dependence of the pressure $`p`$ on the energy density, will affect the time evolution of all the quantities, especially through the gradient terms. This is, of course, extensively documented. We solve these equations using well established methods with initial energy density profiles as before and estimate the constant energy density contours appropriate for $`ϵ=ϵ_s`$ to get $`\tau _s(r)`$. The rest of the treatment follows as before. In these calculations we have assumed the initial transverse velocity to be identically zero.
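The contour-extraction step described above can be sketched as follows. Here, purely as a stand-in for real hydro-solver output, the $`ϵ(\tau ,r)`$ table is filled with the longitudinal cooling law of Eq.(12) applied to the profile of Eq.(13); the grid sizes and parameter values are illustrative assumptions:

```python
import numpy as np

# Given eps(tau, r) tabulated on a grid, interpolate the contour tau_s(r)
# where eps first drops to the screening value eps_s (as used for Figs. 1-3).

taus = np.linspace(0.25, 15.0, 600)               # fm/c
radii = np.linspace(0.0, 6.5, 66)                 # fm
eps0, tau0, cs2, R = 60.0, 0.25, 1.0 / 3.0, 6.6
profile = eps0 * (1.0 - (radii / R) ** 2) ** 0.5  # Eq. (13), beta = 1/2
eps = profile[None, :] * (tau0 / taus[:, None]) ** (1.0 + cs2)

def contour_tau_s(eps, taus, eps_s):
    """Per radius, interpolate (linearly in log eps) the crossing time."""
    out = np.full(eps.shape[1], np.nan)
    for j in range(eps.shape[1]):
        col = eps[:, j]
        below = np.nonzero(col <= eps_s)[0]
        if below.size == 0 or below[0] == 0:
            continue                               # never screened, or never above
        i = below[0]
        f = (np.log(col[i - 1]) - np.log(eps_s)) / (np.log(col[i - 1]) - np.log(col[i]))
        out[j] = taus[i - 1] + f * (taus[i] - taus[i - 1])
    return out

tau_s = contour_tau_s(eps, taus, eps_s=2.0)        # tau_s[0] ~ 3.2 fm/c here
```

The same routine applied to the output of a transverse-expansion solver would directly yield the contours of Figs. 1–3.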
We only need to identify the initial conditions. We consider $`Pb+Pb`$ collisions ($`Au+Au`$, for RHIC) with the initial average energy densities:
$$<ϵ_0>=\{\begin{array}{cc}6.3\text{ GeV/fm}^3\hfill & \text{SPS, }\tau _0=0.5\text{ fm}\hfill \\ & \\ 60\text{ GeV/fm}^3\hfill & \text{RHIC, }\tau _0=0.25\text{ fm}\hfill \\ & \\ 425\text{ GeV/fm}^3\hfill & \text{LHC, }\tau _0=0.25\text{ fm}\hfill \end{array}$$
(33)
The estimate for SPS is obtained from the assumption of QGP formation in the $`Pb+Pb`$ experiments, while those for RHIC and LHC are taken from the self-screened parton cascade calculation . If the formation time at SPS energies is assumed to be of the order of 1 fm/$`c`$, the estimate given above drops to about 3 GeV/fm<sup>3</sup>. A larger initial energy density than the one assumed here for RHIC and LHC could be obtained by invoking parton saturation . We are, however, interested only in a demonstration of the effect of the equation of state on quarkonium suppression, and the values used here suffice for that purpose.
## 4 Results
### 4.1 Speed of sound vs. transverse expansion
It is quite clear that a competition between the speed of sound and the onset of the transverse expansion during the lifetime of the deconfining matter can lead to interesting possibilities. In order to illustrate this and to explore the consequences, we show in Fig. 1 the times corresponding to the constant energy density contours which enclose the deconfining matter that can dissociate the directly produced $`J/\psi `$ (see Table 1), at RHIC energies. We see that, owing to the relatively short time that the QGP takes to cool down to $`ϵ_s^{J/\psi }`$, the effect of the transverse flow is marginal and, for $`c_s^2=1/3`$, limited to large radii. Large changes in the contour are seen when the speed of sound is varied. This is very important indeed. Note that the cooling to the value appropriate for $`\mathrm{{\rm Y}}`$ suppression is attained too quickly to be affected by the transverse expansion, and even the change due to the variation in the speed of sound is quite small. Of course, the duration of the deconfining medium is prolonged if the speed of sound is reduced.
The corresponding results for the LHC energies are shown in Fig. 2. Now we see that at $`r=0`$ the duration of the deconfining medium is reduced by a factor of $`\sim 2`$ when the speed of sound is $`1/\sqrt{3}`$ and the transverse expansion of the plasma is allowed for. This is a consequence of the longer time which the plasma takes to cool at LHC.
The scenario for the $`\mathrm{{\rm Y}}`$ dissociating matter at LHC is quite akin to the case of $`J/\psi `$ dissociating medium at RHIC; the results are affected by the speed of sound and not by the transverse flow (Fig. 3).
### 4.2 Consequences for survival of quarkonia
Now we return to the transverse momentum dependence of the survival probabilities. As a first step, we plot the survival of the directly produced $`J/\psi `$'s at SPS, RHIC and LHC energies (Fig. 4) when only longitudinal expansion is accounted for and the speed of sound is varied. We see that RHIC energies provide the most suitable environment to measure the speed of sound with the help of $`J/\psi `$ suppression. The variations in the $`p_T`$ dependence are too meagre at SPS (due to the very short duration of the deconfining medium) and at LHC (now, due to a very long duration!) when the speed of sound is varied.
This advantage of RHIC energies is maintained when the transverse expansion is accounted for (Fig. 5). As one could have expected from the contours (Fig. 1), the results are more sensitive to the variation of the speed of sound than to the transverse flow. The accuracy of the procedure is seen from the fact that the survival probability around $`p_T=15`$ GeV for $`c_s^2=1/3`$ is identical for the longitudinal and the transverse expansion of the plasma. This is a direct reflection of the identity of the corresponding contours near $`r=0`$ (Fig. 1).
The $`J/\psi `$ suppression at LHC energies, as indicated, becomes sensitive to the transverse flow, the shape of the survival probability changes and the largest $`p_T`$ for which the formation is definitely possible is enhanced by about 10 GeV (Fig. 6).
The $`\mathrm{{\rm Y}}`$ suppression, which in our prescription is possible only at the LHC energy, is seen to be clearly affected by the speed of sound but not by the transverse expansion of the plasma (Fig. 7).
### 4.3 Consequences of chain decays of quarkonia
The entire discussion so far has been in terms of the directly produced $`J/\psi `$’s and $`\mathrm{{\rm Y}}`$’s. However it is well established that only about 58% of the observed $`J/\psi `$ in $`pp`$ collisions originate directly, while 30% of them come from $`\chi _c`$ decay and 12% from the decay of $`\psi ^{}`$. Thus the survival probability of the $`J/\psi `$ in the QGP can be written as:
$$S=0.58S_\psi +0.3S_{\chi _c}+0.12S_{\psi ^{}},$$
(34)
in an obvious notation. We give the survival probabilities of these resonances for a transversely expanding plasma at RHIC with the speed of sound set to $`1/\sqrt{3}`$ in the left panel of Fig. 8. We see that the competition between the formation times and the duration of sufficient dissociation energy lends rich detail to the suppression pattern of the charmonia. We shall later argue that the $`\psi ^{}`$'s are easily destroyed by a moderately hot hadronic gas as well, as their binding energy is on the order of just 50 MeV. The right panel of the figure shows the survival probability as a function of the transverse momentum when the speed of sound in the plasma is varied from $`1/\sqrt{3}`$ to $`1/\sqrt{5}`$. We see that while the gross features of the shape of the survival probability remain similar to Fig. 5, as seen earlier, the survival of the $`J/\psi `$ at larger $`p_T`$ is now enhanced as the $`\chi _c`$'s, which decay to form $`J/\psi `$, start escaping. Thus the complete escape of the $`\chi _c`$ having $`p_T>`$ 10 GeV (for $`c_s^2=1/3`$) lends a distinct kink in $`S(p_T)`$ at $`p_T\sim `$ 10 GeV. Its location, which shifts to about 14 GeV when the speed of sound is decreased, can perhaps be more accurately determined by plotting the derivative $`dS/dp_T`$, which will have a discontinuity there. If the statistics is really good (which unfortunately is a somewhat difficult proposition), this discontinuity in the $`dS/dp_T`$ for the $`J/\psi `$ can be a unique signature of melting of the resonance in plasma, as the $`p_T`$ dependence of the survival probability due to absorption by hadrons should be weak and smooth.
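The feed-down combination of Eq.(34) is a simple weighted sum, sketched below; the analogous bottomonium fractions (0.54, 0.32, 0.14) appear later in the text. The example survival values are placeholders, not computed curves:

```python
# Observed J/psi survival as the production-fraction-weighted sum over the
# directly produced state and the feed-down states, Eq. (34).

CHARMONIUM_FRACTIONS = {"psi": 0.58, "chi_c": 0.30, "psi_prime": 0.12}

def combined_survival(fractions, S):
    """S maps a state name to its survival probability at a given p_T."""
    return sum(frac * S[state] for state, frac in fractions.items())

S_example = {"psi": 0.6, "chi_c": 0.2, "psi_prime": 0.1}   # placeholders
S_obs = combined_survival(CHARMONIUM_FRACTIONS, S_example)  # 0.42
```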
The corresponding results for the LHC energies are given in Fig. 9, in an analogous manner, and we see that the characteristic 'kink' in $`S(p_T)`$ has shifted to about $`p_T=16`$ GeV, announcing the complete escape of $`\chi _c`$ from the plasma.
The decay contribution of the resonances completely alters the shape of the survival probabilities for $`\mathrm{{\rm Y}}`$ from that seen earlier (Fig. 7).
We note (Fig. 10) that both $`\chi _b`$ and $`\mathrm{{\rm Y}}^{}`$ get suppressed at the RHIC energies, while the directly produced $`\mathrm{{\rm Y}}`$ is likely to escape. However, as only about 54% of the $`\mathrm{{\rm Y}}`$'s may be directly produced, while about 32% have their origin in the decay of $`\chi _b`$ and a further 14% in the decay of $`\mathrm{{\rm Y}}^{}`$, the resultant survival probability (right panel, Fig. 10) is just about 50% at the lowest $`p_T`$, signalling the suppression of the higher resonances.
The results for LHC energies become quite dramatic (Fig. 11), as now the 'kink' in the survival probability is very clearly seen at $`p_T\sim `$ 20 GeV for $`c_s^2=1/3`$ and at $`\sim `$ 26 GeV for the lower speed of sound. If one looks at $`dS/dp_T`$, it could be very useful indeed. We may add that fluctuations in the initial conditions etc. may make it difficult to notice this feature for the $`J/\psi `$'s, though for the $`\mathrm{{\rm Y}}`$'s it should survive, provided we have good statistics at these $`p_T`$.
The large difference in this behaviour seen between charmonium and bottomonium suppression has its origin in the large difference in the energy densities required to melt the different resonances of charmonia and bottomonia.
### 4.4 Absorption by nucleons and comovers
So far we have discussed the fate of quarkonia only in the presence of quark gluon plasma. It is very well established that several other aspects matter: initial state scattering of the partons, shadowing of partons, absorption of the pre-resonances ($`|Q\overline{Q}g>`$ states) by the nucleons before they evolve into physical quarkonia, and also dissociation of the resonances by the comoving hadrons. It has been argued that the absorption by co-moving hadrons will be important for $`\psi ^{}`$, due to its very small binding energy, while for more tightly bound resonances it may be weak.
Let us briefly comment on them one-by-one. Shadowing of partons should play an important role in the reduced production of quarkonia, especially at the LHC energies. It is clear that if shadowing is important, we shall witness a larger effect on $`J/\psi `$ than on $`\mathrm{{\rm Y}}`$, because of the smaller values of the $`x`$ for gluons. At the same time, the effect of shadowing should be similar for different resonances of the charmonium (or bottomonium), as similar $`x`$ values would be involved for them.
The absorption of the pre-resonances by the nucleons is another source of $`p_T`$ dependence. It is important to recall once again that, as the absorption operates on the pre-resonance, the effect should be identical for all the states of the quarkonium which are formed.
This is a very important consideration: if we look at the ratios of the rates for the different states of the $`J/\psi `$ or the $`\mathrm{{\rm Y}}`$ family as a function of $`p_T`$, then in the absence of QGP effects they would be identical to those expected in the absence of nuclear absorption and shadowing, providing a clear pedestal for the observation of the QGP.
There is another aspect of the $`p_T`$ dependence which needs to be commented upon before we conclude. The (initial state) scattering of partons, before the gluons of the projectile and the target nucleons fuse to produce the $`Q\overline{Q}`$ pair, leads to an increase of the $`<p_T^2>`$ of the resonance which emerges from the collision. The increase in $`<p_T^2>`$, compared to that for $`pp`$ collisions, is directly related to the number of collisions the nucleons are likely to undergo before the gluonic fusion takes place. This opens a rich possibility of relating the average transverse momentum of the quarkonium to the transverse energy deposited in the collision (which determines the number of participants and hence the number of collisions). Considering that collisions with large $`E_T`$ may involve formation of QGP in the dense part of the overlapping region, the quarkonia which are produced in the densest part (and hence contribute the largest increase in the transverse momentum) are also the most likely to melt and disappear. This may lead to a characteristic saturation, and even a turn-over, of $`<p_T^2>`$ when plotted against $`E_T`$ if QGP formation takes place. In the absence of QGP, this curve would continue to rise with $`E_T`$.
Obviously, all these (well explored and yet non-QGP) effects need to be accounted for, before we can begin to see the suppression of the quarkonium due to the formation of QGP. It seems that this has been achieved at least at the SPS energies .
The next step would obviously be the one discussed in the present work, that of looking at the $`p_T`$ dependence of the survival probability, to see if we can get more detailed information on the equation of state of the plasma. This would require high precision data, extending to larger $`p_T`$, at several $`E_T`$. This may prove to be difficult, though not impossible, at least in principle. It will, however, prove to be very valuable if it can be done.
## 5 Summary and Discussion
We have seen that the survival probability of $`J/\psi `$ at RHIC energies and that of $`\mathrm{{\rm Y}}`$ at LHC energies can provide valuable information about the equation of state of the quark matter, as the results are not affected by uncertainties of transverse expansion of the plasma. If the transverse expansion of the plasma takes place, it gives a distinct shape to the survival probability of the $`J/\psi `$ at LHC energies, whose detection will be a sure signature of the transverse flow of the plasma within the QGP phase.
Before concluding we would add that in these exploratory demonstrations we have chosen specific values of the energy density for the deconfining matter which can dissociate quarkonia. These could be different; in particular, $`ϵ_s^{J/\psi }`$ could be much larger than the value used here. This, however, will not change the basic results, as the time-scales involved in the $`\mathrm{{\rm Y}}`$ and the $`J/\psi `$ suppression are so very different.
The other uncertainty comes from the use of $`ϵ_s`$ as the criterion for deconfinement. One could equally well have used the Debye mass, as the temperature and the fugacity change in a chemically equilibrating plasma, to fix the deconfining zone. This can indeed be done, along with the other extreme of the Debye mass estimated from lattice QCD. This has been studied in great detail by the authors of Ref. and can be easily extended to the present case. We plan to do so in a future publication.
In brief, we have shown that the $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ suppression at RHIC and LHC energies can be successfully used to map the equation of state for the quark-matter. As the two processes will map different but over-lapping regions, taken together, these results will help us to explore a vast region of the equation of state.
We thank Bikash Sinha for useful discussions.
## 1 Introduction
Powerful radio galaxies are frequently associated with extended emission line nebulosities which extend on radial scales of 5 — 100 kpc from the nuclei of the host early-type galaxies (Hansen et al. 1987, Baum et al. 1988). The morphological and kinematical properties of these nebulae provide important clues to the origins of the gas, and the origins of the activity as a whole. The study of such gas is, for example, important for our understanding of the building of galaxy disks and bulges by infall since their epoch of formation. Therefore, it is crucial to determine the extent to which the observed emission line properties reflect the intrinsic distribution of the warm gas in the haloes of the host galaxies, and the extent to which they reflect the effects of the nuclear activity and interactions with the extended radio sources.
In most low redshift ($`z<0.2`$) radio galaxies the optical emission line regions are broadly distributed in angle around the nuclei of the host galaxies, the correlations between the optical and radio structural axes are weak, and the gas kinematics are often quiescent, with line widths and velocity shifts consistent in most cases with gravitational motions in the host early-type galaxies (Tadhunter, Fosbury & Quinn 1989; Baum, Heckman & van Breugel 1990). However, optical observations reveal a dramatic change in the properties of the nebulosities as the redshift and radio power increase: the emission line kinematics become more disturbed (compare Tadhunter et al. 1989 with McCarthy, Baum & Spinrad 1996) and the optical/UV structures become more closely aligned with the radio axes of the host galaxies (McCarthy et al. 1987, McCarthy & van Breugel 1989). The most recent high resolution HST images of $`z\sim 1`$ radio galaxies show that the structures are not only closely aligned with the radio axes, but they are also highly collimated, with a jet-like appearance (Best et al. 1996). The nature of the "alignment effect" is a key issue for our general understanding of powerful radio galaxies, of particular relevance to the use of radio sources as probes of the high redshift universe.
Of the many models which have been proposed to explain the alignment effect, the two which have received the most attention are the anisotropic illumination and the jet-cloud interaction<sup>1</sup> models. (<sup>1</sup>We use "jet-cloud interaction" as a generic term to describe interactions between the warm ISM and the radio-emitting components, which could include the radio lobes and hot-spots, as well as the radio jets.)
In the case of anisotropic illumination it is proposed that the gaseous haloes of the host galaxies are illuminated by the broad radiation cones of the quasars hidden in the cores of the galaxies (e.g. Barthel 1989), with the emission lines resulting from photoionization of the ambient ISM by the EUV radiation in the cones (e.g. Fosbury 1989), and the extended optical/UV continuum comprising a combination of the nebular continuum emitted by the warm emission line clouds (Dickson et al. 1995) and scattered quasar light (Tadhunter et al. 1988, Fabian 1989). The alignment of the obscuring tori perpendicular to the collimation axes of the plasma jets then leads to a natural alignment of the extended nebulosities with the radio axes. The best evidence to support the anisotropic illumination model is provided by polarimetric observations of powerful radio galaxies at all redshifts which show evidence for high UV polarization and scattered quasar features (e.g. Tadhunter et al. 1992, Young et al. 1996, Dey & Spinrad 1996, Cimatti et al. 1996, Cimatti et al. 1997, Ogle et al. 1997). It would be difficult to explain these polarimetric results in terms of any mechanism other than scattering of the anisotropic radiation field of an illuminating quasar or AGN. However, despite the success of the illumination model at explaining the polarization properties, a significant fraction of radio galaxies — comprising $`\sim `$30–50% of radio galaxies at $`z\sim 1`$, and a smaller proportion at lower redshifts — are dominated by jet-like UV emission line structures, which are more highly collimated than would be expected on the basis of the 45–60 degree opening half-angle illumination cones predicted by the unified schemes for powerful radio galaxies (Barthel 1989, Lawrence 1991).
Jet-cloud interactions have the potential to explain many of the features of powerful radio galaxies which cannot be explained in terms of anisotropic quasar illumination. Although the jet-cloud interactions are likely to be complex, at the very least the clouds will be compressed, ionized and accelerated as they enter the shocks driven through the ISM by the jets. Therefore, jet-cloud interactions provide a promising explanation for the high surface brightnesses and extreme emission line kinematics of the structures aligned along the radio axes of high-z sources. Indeed, recent spectroscopic observations of jet-cloud interactions in low redshift radio galaxies provide clear observational evidence for the acceleration and ionization of warm clouds by the jet-induced shocks (e.g. Clark et al. 1998, Villar-Martin et al. 1999). Moreover, theoretical modelling work has demonstrated that jet-induced shocks are a viable, if not unique, mechanism for producing the emission line spectra of radio galaxies (e.g. Dopita & Sutherland 1998).
It is clear that no single mechanism can explain the emission line properties of radio galaxies over the full range of redshift and radio power; some combination of AGN illumination and jet-cloud interactions is required, with the jet-cloud interactions becoming increasingly important as the redshift and/or radio power increases. However, a major problem with such a combined model is that, while the polarimetric results demonstrate that quasar illumination is important in many high redshift sources, the extended structures are often dominated by highly collimated jet-like structures, with no sign of the broad cone-like emission line distributions predicted by the unified schemes for powerful radio galaxies.
How do we explain this dearth of broad cones in the objects with the most highly collimated structures? Possibilities include the following.
* The gas structures are intrinsically aligned along the radio axes of the high redshift sources, so that the emission line nebulae reflect the true distribution of warm/cool gas, rather than the ionization patterns induced by the jets or illuminating AGN. For example, West (1994) has proposed that a general alignment of the gas structures along the radio axis may be a natural consequence of the formation of giant elliptical galaxies in a hierarchical galaxy formation scenario, although it is not clear that the structures formed in this way would be quite as highly collimated as those observed in the high-z radio galaxies.
* The nuclei of the host galaxies do not contain powerful quasars, and the ionization of the extended gas is dominated by the jets: either by direct jet/cloud interactions, or by illumination by the relativistically-beamed jet radiation. This scenario is supported by the discovery at low redshifts of a class of powerful radio galaxies with weak, low ionization nuclear emission line regions (see Laing 1994, Tadhunter et al. 1998). However, at least some high-z radio galaxies with highly collimated UV structures show direct evidence for powerful quasar nuclei in the form of scattered quasar features in their polarized spectra, so this explanation cannot hold in every case.
* The broad-cone radiation of the buried quasars does not escape from the nuclear regions of the host galaxies, because of the absorbing effects of circum-nuclear gas. In this case the ionization of the extended gas in the aligned structures is likely to be dominated by jet-cloud interactions, but quasar or beamed jet radiation may also contribute along the jet axis, if the jets punch a hole in the obscuring material. Direct evidence for requisite obscuring material in the central regions is provided by the relatively high occurrence rate of associated CIV and Ly$`\alpha `$ absorption line systems in the UV spectra of radio-loud quasars (e.g. Anderson et al. 1987, Wills et al. 1995), the relatively red SEDs of steep spectrum radio quasars (Baker 1997), and the detection of significant BLR reddening in a substantial fraction of nearby broad-line radio galaxies (e.g. Osterbrock, Koski & Miller 1995, Hill et al. 1996).
* The dearth of broad ionization cones in the high-z sources is a consequence of an observational selection effect: most of the existing emission line images of the high-z sources have been taken in the light of the low ionization \[OII\]$`\lambda `$3727 line which is emitted particularly strongly by the jet-induced shocks (e.g. Clark et al. 1997, 1998), but is relatively weak in the more highly ionized quasar illumination cones. Thus, given that the published ground-based images of the high-z objects are relatively shallow and have a low spatial resolution, while the published HST images have a higher resolution but are insensitive to low surface brightness structures, the existing images are likely to be biased in favour of the high-surface-brightness shocked structures along the radio axes. In this case, we would expect deep emission images to reveal gaseous structures outside the main high surface brightness structures aligned along the radio axes. If the gas away from the radio axis is predominantly photoionized by quasars hidden in the cores of the galaxies we would expect the extended low surface brightness structures to have a broad distribution, consistent with quasar illumination. Detection of such emission line morphologies in the objects with the most highly collimated structures would lead to a reconciliation between the anisotropic illumination and jet-cloud interaction models, thereby resolving the outstanding uncertainties surrounding the nature of the alignment effect.
In order to test the latter possibility it is important to obtain deep emission line imaging observations for the objects with closely aligned radio and optical structures. We report here pilot observations of two intermediate-redshift radio galaxies — 3C171 ($`z=0.2381`$) and 3C277.3($`z=0.08579`$) — which are nearby prototypes of the high-z radio galaxies, in the sense that they show high surface brightness emission line structures which are closely aligned along their radio axes. The results challenge some commonly-held assumptions about the ionization of the extended gaseous haloes around powerful radio galaxies.
## 2 Observations
Emission line and continuum observations of 3C171 and 3C277.3 (Coma A) were taken using the Taurus Tunable Filter (TTF) on the 4.2m WHT telescope at the La Palma Observatory on the night of the 27th January 1998. A log of the observations is presented in Table 1, while a full description of the TTF is given in Bland-Hawthorn & Jones (1998a,b). Use of the f/2 camera of TAURUS with the Tek5 CCD resulted in a pixel scale of 0.56 arcseconds per pixel, and the seeing conditions were subarcsecond for the observations reported here. The faintest structures visible in the images for both objects have an H$`\alpha `$ surface brightness of $`1\times 10^{-17}`$ erg cm<sup>-2</sup> s<sup>-1</sup> arcsec<sup>-2</sup>.
Because of ghosting effects in the flat field images, no flat fielding of the data was attempted. However, comparisons between images taken with different filters and/or with the objects placed in different positions on the detector, demonstrate that the ghost images of stars and galaxies in the field do not contaminate the images of the main target objects described below.
The reduction of the images consisted of bias subtraction, atmospheric extinction correction, flux calibration, sky subtraction and registration. From the comparison between the measurements of the flux calibration standard stars taken at various times during the run it is estimated that the absolute flux calibration is accurate to within $`\pm `$30%, and the H$`\alpha `$ emission line fluxes agree at this level with the long-slit spectroscopy measurements in Clark (1996). For the emission line images, the TTF was tuned to the wavelength of H$`\alpha `$ shifted to the redshift of the emission lines in the nuclear regions of the galaxies. However, velocity structure in the haloes of the host galaxies may result in the emission lines in the extended structures not being exactly centred in the TTF bandpass, which has a Lorentzian shape. We estimate that, at maximum, this will result in the fluxes being underestimated by a factor of two for components with extreme $`\pm `$600 km/s shifts, but this will not affect our main conclusions which are based largely on the emission line structures, rather than the emission line fluxes.
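The factor-of-two estimate above can be sanity-checked with a toy model of the Lorentzian bandpass. This is a sketch of our own, not taken from the paper: the bandpass FWHM (here expressed in velocity units) is an assumption, chosen so that a $`\pm `$600 km/s shift falls exactly at the half-power points.

```python
# Relative flux recovered for an emission line shifted within a Lorentzian
# bandpass. FWHM is an assumed bandpass width in velocity units, chosen so
# that +/-600 km/s sits at the half-power points of the profile.

def lorentzian_transmission(dv_kms, fwhm_kms):
    """Relative transmission of a Lorentzian bandpass at velocity offset dv."""
    return 1.0 / (1.0 + (2.0 * dv_kms / fwhm_kms) ** 2)

FWHM = 1200.0  # km/s, assumed

for dv in (0.0, 300.0, 600.0):
    t = lorentzian_transmission(dv, FWHM)
    print(f"offset {dv:6.0f} km/s -> transmission {t:.2f}, flux low by {1.0/t:.1f}x")
```

With this choice, a line centred in the bandpass is fully transmitted while an extreme $`\pm `$600 km/s component is attenuated by exactly a factor of two, matching the quoted worst case.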
In order to facilitate comparison between the radio and optical structures, radio images were obtained for both sources. The radio and optical images were registered by matching the positions of the core radio sources with the positions of the continuum centroids in the optical continuum images, with the pixel scale and rotation of the optical images calibrated using the known positions of stars in the CCD fields.
The radio image of Coma A was made using data taken with the VLA A-array configuration at 1.4 GHz (20cm). This gives a resolution for the final image of 1.14x1.13 arcseconds in p.a. -72. The data, which were extracted from the VLA archive, were originally presented and discussed in great detail by van Breugel et al. (1985). We therefore refer to that paper for all the radio information about Coma A.
The radio image of 3C171 was kindly provided by K. Blundell. The image was made with the VLA at 8 GHz with a resolution of 1.3 arcsec FWHM. More information about the radio characteristics of this source can be found in Blundell (1996).
## 3 Results
### 3.1 3C277.3 (Coma A)
Previous spectroscopic and imaging observations of 3C277.3 by van Breugel et al. (1985) and Clark (1996) show the presence of a series of high surface brightness structures along the radio axis. These include: a high ionization emission line region associated with knots in the radio jet some 6 arcseconds to the south east of the nucleus; an enhancement in the emission line flux close to the hotspot in the northern radio lobe; and an emission line arc which partially circumscribes the northern radio hotspot. Although the kinematic and ionization evidence for a jet-cloud interaction in this source is less clear than in some other radio galaxies (e.g. 3C171: see below) — the ionization state is relatively high and the emission lines relatively narrow — van Breugel et al. (1985) found evidence for a jump in the emission line radial velocities across the northern radio lobe, while Clark (1996) noted that the ionization has a minimum, and the electron temperature a peak, at the position of the northern radio hotspot. Note that there is no clear evidence for a powerful quasar nucleus in this source: the nuclear regions show no evidence for scattered quasar light, and the nuclear emission line region has a relatively low luminosity and ionization state compared with the brightest extended emission line regions along the radio axis.
Our deep H$`\alpha `$ images (Figure 1a) show that the emission line regions along the radio axis form part of a spectacular system of interlocking emission line arcs and filaments, which extend almost as far perpendicular as parallel to the radio axis. Of particular interest is the fact that the brightest arc structure wraps a full 180° around the nucleus, with enhancements in the emission line surface brightness where the arc intercepts the radio axis to the north and south of the nucleus. The spatially integrated H$`\alpha `$ fluxes of the bright knots along the radio axis (including the nucleus), the extended low surface brightness filaments, and the nebula as a whole are $`2.5\times 10^{14}`$, $`1.6\times 10^{14}`$ and $`4.1\times 10^{14}`$ erg s<sup>-1</sup> cm<sup>-2</sup> respectively. For our adopted cosmology<sup>2</sup><sup>2</sup>2$`H_0=50\text{ km s}^{-1}\text{ Mpc}^{-1}`$ and $`q_0=0.0`$ assumed throughout. the corresponding H$`\alpha `$ emission line luminosities are $`8.6\times 10^{41}`$, $`5.6\times 10^{41}`$ and $`1.42\times 10^{42}`$ erg s<sup>-1</sup> respectively.
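The quoted luminosities follow from the fluxes via the luminosity distance for the adopted cosmology; for $`q_0=0`$ the standard closed-form relation is $`d_L=(c/H_0)\,z\,(1+z/2)`$. A quick consistency check (our own arithmetic, not from the paper):

```python
import math

C_KMS = 299792.458   # speed of light, km/s
H0 = 50.0            # km/s/Mpc, as adopted in the text
MPC_CM = 3.0857e24   # cm per Mpc

def d_lum_mpc(z):
    # Mattig relation for q0 = 0: d_L = (c/H0) * z * (1 + z/2)
    return (C_KMS / H0) * z * (1.0 + z / 2.0)

def luminosity(flux_cgs, z):
    """L = 4 pi d_L^2 F, with flux in erg/s/cm^2."""
    d_cm = d_lum_mpc(z) * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# Coma A (z = 0.08579): total H-alpha flux 4.1e-14 erg/s/cm^2
L_tot = luminosity(4.1e-14, 0.08579)
print(f"L(Halpha, total) = {L_tot:.2e} erg/s")  # ~1.4e42, consistent with the text
```

The same conversion applied to the bright-knot flux of $`2.5\times 10^{14}`$ erg s<sup>-1</sup> cm<sup>-2</sup> recovers the quoted $`8.6\times 10^{41}`$ erg s<sup>-1</sup>.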
The fact that the main arc and filament structures are not visible, or are considerably fainter, in the intermediate-band continuum image (Figure 1b) — which is at least as sensitive to continuum structures as the narrow-band H$`\alpha `$ image — demonstrates that they are predominantly emission line structures. However, a number of faint galaxies and continuum structures are detected within 100 kpc of the nucleus of Coma A, and at least some of these continuum structures (highlighted by arrows in the figure) are intimately associated with the extended H$`\alpha `$ filamentary structures.
Overall, the Coma A system has the appearance of an interacting group of galaxies: the H$`\alpha `$ filaments bear a marked resemblance to the HI tails detected in 21cm radio observations of interacting groups (e.g. the M81 group: Yun et al. 1994); and it is plausible that the faint continuum structures represent the debris of interactions/mergers between the dominant giant elliptical galaxy and less massive galaxies in the same group. The X-ray luminosity of Coma A ($`L_{0.53kev}<8.1\times 10^{42}`$ erg s<sup>-1</sup>: Fabbiano et al. 1984) is also consistent with a group environment.
Figure 2 shows an overlay of the emission line image and the 6cm radio map. This reveals a striking match between the emission line and radio structures. As well as the high surface brightness features along the radio axis, the brightest arc to the north of the nucleus closely follows the outer edge of northern radio lobe. The emission line structures appear to bound the radio structures: the brighter emission line features have a similar radial and lateral extent to the radio features. It is notable, however, that fainter, more diffuse emission line structures are detected well outside the radio lobes on the northern and eastern sides of the galaxy.
The detection of arc structures circumscribing radio lobes is not without precedent: the intermediate redshift radio galaxies PKS2250-41 (Clark et al. 1998, Villar-Martin et al. 1999), 3C435A (Rocca-Volmerange et al. 1994) and PKS1932-46 (Villar-Martin et al. 1998), the high redshift radio galaxies 3C280 (McCarthy et al. 1995) and 3C368 (Best et al. 1996), and the central elliptical galaxy in the cooling flow cluster A2597 (Koekemoer et al. 1999), all show evidence for arcs associated with radio lobes. In many of these cases there is also spectroscopic evidence that the emission line gas extends beyond the radio lobes.
### 3.2 3C171
3C171 is another example of an object in which high surface brightness emission line structures are closely aligned along the axis of the radio jets (Heckman et al. 1984, Baum et al. 1988). The spectroscopic evidence for a jet-cloud interaction in this source is strong: the emission line kinematics along the radio axis are highly disturbed; and the general line ratios and ionization minima coincident with the radio hotspots to the east and west of the nucleus provide strong evidence that the emission line gas has been compressed and ionized by jet-induced shocks (Clark et al. 1998). A further possible consequence of the jet-cloud interactions is the highly disturbed radio structure, with the radio lobes showing a greater extent perpendicular to the jet axis than parallel to it, giving an overall H-shaped appearance (Heckman et al. 1984, Blundell 1996).
Our deep H$`\alpha `$ and continuum images of this source are shown in Figure 3, while an overlay of the optical emission line image and the radio map is presented in Figure 4. From the continuum-subtracted H$`\alpha `$ image we measure spatially integrated emission line fluxes of $`2.12\times 10^{14}`$, $`6.2\times 10^{16}`$ and $`2.63\times 10^{14}`$ erg s<sup>-1</sup> cm<sup>-2</sup> respectively for the high surface brightness structures aligned along the radio axis (including the nucleus), the faint filament to the north, and the nebula as a whole. The corresponding H$`\alpha `$ emission line luminosities are $`6.6\times 10^{42}`$, $`1.9\times 10^{41}`$ and $`8.1\times 10^{42}`$ erg s<sup>-1</sup> respectively.
Although the emission line structures in 3C171 are clearly different in detail from those detected in Coma A, there are important general similarities. Most notably, as in Coma A, the highest surface brightness emission line features are closely aligned along the radio axis, yet lower surface brightness structures are also detected in the direction perpendicular to the radio axis. The emission line structures have a similar radial extent in the directions perpendicular and parallel to the radio axis. Away from the radio axis, the most striking emission line feature is the filament which extends 9 arcseconds (45 kpc) north of the nucleus. This feature lies along the fringes of the western radio lobe, just as the arc to the north of the nucleus in Coma A skirts the outer edge of the northern radio lobe in that object. A further similarity with Coma A is that, in the radio axis direction, the radio structures are confined within the emission line structures, which have a similar radial extent. We also find evidence for emission line gas that is not clearly associated with radio structures: the faint, diffuse H$`\alpha `$ emission to the south east of the nucleus lies well to the south of the extended eastern radio lobe. However, 3C171 is different from Coma A in the sense that the radio lobes extend further than the emission line structures in the direction perpendicular to the radio axis on the northern side of the galaxy.
## 4 Discussion
The main aim of the deep emission line imaging observations was to attempt to detect the broad emission line cones outside the main aligned structures, and thereby reconcile the AGN illumination and jet-cloud interactions models. The unified schemes predict illumination cones with opening half-angles of 45-60°. Although the extended emission line nebulosities in low redshift radio galaxies rarely show the sharp-edged cone structures observed in some Seyfert galaxies (e.g. Pogge 1988, Tadhunter & Tsvetanov 1989), the emission line distributions are generally consistent with broad cone illumination of an inhomogeneous ISM (Hansen et al. 1987, Baum et al. 1989, Fosbury 1989). The detection of similar emission line distributions in 3C171 and Coma A would support the idea that the extended ionized haloes are photoionized by quasars hidden in the cores of the galaxies.
The deep imaging observations presented in this paper have confounded our expectations in the sense that, while they do show extended emission line gas well away from the radio axis, the emission line distribution cannot be reconciled with any plausible ionization cone model. Not only do some of the features wrap through a full 180° in position angle around the nucleus of Coma A, but there are no sharp boundaries in the surface brightness of the structures, corresponding to the edges of an ionization cone. It is possible for the emission line distributions to appear broader than the nominal 45-60° cones predicted by the unified schemes if the cone axes are tilted towards the observer. However, in order to explain the emission line distributions in 3C171 and Coma A in this way, the cones would have to be tilted to such an extent that the observer’s line of sight would lie within the cone and we would see the illuminating AGN directly. Clearly this is not the case, and it appears highly unlikely that the extended filaments away from the radio axis are photoionized by a central source of ionizing photons.
The most plausible alternative to quasar illumination is ionization by the shocks associated with the expanding radio jets and lobes. The emission lines could be produced as the warm clouds cool behind the shock fronts or, alternatively, as a consequence of photoionization of precursor clouds by the ionizing photons produced in the cooling, shocked gas. In either case we would expect a close morphological association between the radio and optical structures, just as we observe in 3C171 and Coma A. By adapting equation 4.4 of Dopita and Sutherland (1996), and assuming a shock speed through the warm clouds of 200 km s<sup>-1</sup>, we estimate that the rate of flow of warm ISM through the shocks would have to be at least 1.9$`\times `$10<sup>4</sup> M<sub>⊙</sub> yr<sup>-1</sup> for 3C171, and 3.2$`\times `$10<sup>3</sup> M<sub>⊙</sub> yr<sup>-1</sup> for Coma A, in order for the emission line luminosities of the nebulae as a whole to be produced entirely by shock ionization. Energetically, the shock ionization mechanism appears to be feasible in the sense that the total emission line luminosities of the sources are $`<`$10% of the bulk powers of the radio jets (Clark 1996, Clark et al. 1998)<sup>3</sup><sup>3</sup>3In order to derive this result we have scaled the results of Clark (1996), who considered only the emission line components along the radio axis, to the total emission line fluxes for the nebulae as a whole, as derived from our H$`\alpha `$ images..
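The quoted mass-flow rates can be inverted into an implied shock radiative efficiency in H$`\alpha `$, $`L_{H\alpha }/(\frac{1}{2}\dot{M}v_s^2)`$. This back-of-the-envelope check is our own arithmetic (the efficiency is not quoted in the text, and the solar-mass and year conversions are standard assumed constants):

```python
M_SUN_G = 1.989e33   # g
YR_S = 3.156e7       # s
V_SHOCK = 200.0e5    # cm/s, shock speed assumed in the text

def halpha_efficiency(L_halpha, mdot_msun_yr):
    """Fraction of the shock kinetic power emerging in H-alpha."""
    mdot = mdot_msun_yr * M_SUN_G / YR_S      # g/s flowing through the shocks
    kinetic_power = 0.5 * mdot * V_SHOCK**2   # erg/s
    return L_halpha / kinetic_power

# Total H-alpha luminosities and mass-flow rates quoted above:
print(f"3C171:  {halpha_efficiency(8.1e42, 1.9e4):.3f}")   # 0.034
print(f"Coma A: {halpha_efficiency(1.42e42, 3.2e3):.3f}")  # 0.035
```

Both objects imply an efficiency of a few per cent, i.e. the two estimates are mutually consistent for the same assumed 200 km s<sup>-1</sup> shock speed.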
However, it is not possible to rule out some contribution to the ionization of the extended structures by a central photoionizing source. As discussed in the introduction, some radio sources with relativistic jets may not have powerful quasar nuclei. If this is the case, the narrow beams of radiation emitted by the jets could contribute to the ionization of the structures along the radio axis, although the ionization of the more extended filamentary structures would continue to be dominated by interactions with the radio lobes.
One further possibility is that the structures are photoionized by young stars associated with the filaments. This is supported by the presence of faint continuum structures associated with the H$`\alpha `$ filaments (see Figure 1(c)), and the spectroscopic detection of excess UV continuum emission to the north and south of the nucleus along the radio axis above the level expected for the nebular continuum emitted by the warm gas (Clark 1996). Without further information on the nature and spectrum of the extended continuum structures it is difficult to test this model at the present time.
An open question for both 3C171 and Coma A is the extent to which the structures reflect the true distributions of ionized gas in the haloes of the host galaxies, and the extent to which the structures are distorted by their interaction with the radio components. It is possible for shock fronts to sweep up material into shell-like structures. However, given that the clouds are likely to be destroyed by hydrodynamical interactions with the fast, hot wind behind the shock fronts within a few shock crossing times (e.g. Klein, McKee & Colella 1994), and given also the presence of diffuse H$`\alpha `$ emission well away from the radio structures in both Coma A and 3C171, it seems more plausible that these represent pre-existing gas structures. Cloud destruction by shocks may also lead to a relative absence of warm gas in the lobes, further enhancing the shell-like appearance of the emission line structures. In the case of Coma A it is likely that we are seeing the results of interactions between the radio-emitting components and the gaseous remnants of mergers/interactions in a group of galaxies.
Clearly, detailed measurements of the kinematics, line ratios, and continuum spectra of the filamentary structures are required in order to resolve the outstanding issues concerning the physical state, ionization and origins of the warm gas.
## 5 Implications for high redshift radio galaxies
Our observations demonstrate the presence of extended gaseous structures well away from the high-surface-brightness structures aligned along the radio axes in two nearby radio galaxies. Given that Coma A and 3C171 are similar to the high redshift radio galaxies in the sense that they show high-surface-brightness emission line structures closely aligned along their radio axes, as well as evidence for disturbed emission line kinematics, it seems likely that similar extended gaseous structures also exist in the high-z sources. In this case, the highly collimated structures visible in the existing images of some $`z\sim 1`$ 3C radio sources may reflect the ionization pattern induced by the radio jets more than the true distribution of warm/cool gas in the host galaxies.
Note, however, that 3C171 and Coma A have radio and emission line luminosities that are an order of magnitude lower than 3C radio galaxies at $`z\sim 1`$. Furthermore, the radio lobes in 3C171 and Coma A extend further in the direction perpendicular to the radio jets than is typical in high redshift 3C radio sources. Therefore, it is difficult to predict the detectability of the extended low surface brightness structures in the high-z radio galaxies ($`z>1`$) based on a straightforward extrapolation of the properties of 3C171 and Coma A. Given the smaller lateral extents of the radio lobes in the high-z sources, the ionization effects associated with the lobes may be less effective at large distances from the radio axes in such objects. In addition, the structures in the high-z sources will be subject to $`(1+z)^4`$ cosmological surface brightness dimming which will make them more difficult to detect relative to nearby sources for similar intrinsic brightness levels. However, set against this is the fact that, in contrast to Coma A and 3C171, there exists good polarimetric evidence that many of the high-z radio galaxies contain powerful quasars hidden in their cores. Provided that the ionizing photons in the broad ionization cones can escape the nuclear regions (but see discussion in introduction), illumination by the quasar cones will enhance the surface brightnesses of the extended structures and render them more easily detectable.
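The size of the $`(1+z)^4`$ dimming penalty is easy to quantify. A short sketch of our own arithmetic, comparing a representative high-z source (3C368, $`z=1.135`$) with the two objects studied here:

```python
def dimming_ratio(z_high, z_low):
    """Relative (1+z)^4 surface-brightness penalty of the higher-z source."""
    return ((1.0 + z_high) / (1.0 + z_low)) ** 4

# 3C368 (z = 1.135) versus Coma A (z = 0.08579) and 3C171 (z = 0.2381):
print(f"vs Coma A: {dimming_ratio(1.135, 0.08579):.0f}x fainter")  # 15x
print(f"vs 3C171:  {dimming_ratio(1.135, 0.2381):.0f}x fainter")   # 9x
```

An order-of-magnitude surface-brightness handicap, before any difference in intrinsic luminosity is taken into account.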
The extended low surface brightness structures may already have been detected spectroscopically in at least one high-z source: deep, long-slit Keck spectra taken along the radio axis of 3C368 ($`z=1.135`$) by Stockton, Ridgway & Kellogg (1996) show the presence of a faint emission line region well outside the main high surface brightness emission line regions closer to the nucleus. The relatively narrow lines and high ionization state measured in this faint, low-surface-brightness region are consistent with quasar illumination of the undisturbed ambient medium of the host galaxy.
Some encouragement may also be drawn from the detection of large Ly$`\alpha `$ haloes around radio galaxies at $`z>2`$ (e.g. Adam et al. 1997). Although the Ly$`\alpha `$ in these haloes may not be formed by direct photoionization by an AGN, but rather by resonant scattering of Ly$`\alpha `$ photons produced in the extended regions around the nuclei (Villar-Martin et al. 1996), these observations at least demonstrate the presence of extensive haloes of cool ISM surrounding the host galaxies of some of the highest redshift radio galaxies.
Thus, we expect future deep emission line imaging of $`z\sim 1`$ radio galaxies to reveal the true distribution of the extended ionized gas in the host galaxies, and to provide clues to the origins of the gas and the evolution of the host galaxies.
## 6 Conclusions
Deep emission line imaging observations of two nearby examples of the radio-optical alignment effect have revealed extensive low-surface-brightness emission line structures well away from the radio axes, thus demonstrating that the intrinsic distribution of warm gas is more extensive than previously suspected.
The general distribution of the gaseous structures is incompatible with the standard quasar illumination picture, while their association with the extended radio structures provides clear evidence that they are interacting with the radio lobes, hotspots and jets. These may be objects in which the ionization of the extended emission line regions is entirely dominated by shocks induced by interactions between the radio plasma and the ISM.
It is often assumed that the broad distributions of ionized gas observed in low redshift radio galaxies without clear signs of jet-cloud interactions imply illumination by the broad ionization cones of quasars hidden in the cores of the galaxies. These new observations suggest that this may not always be the case, and that the lobes as well as the jets may have a significant ionizing effect.

Acknowledgments. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We thank Katherine Blundell for allowing us to use her radio image of 3C171. MVM acknowledges support from PPARC.

References
Adam, G., Rocca-Volmerange, B., Gerard, S., Ferruit, P., Bacon, R., 1997, A&A, 326, 501
Anderson, S.F., Weymann, R.J., Foltz, C.B., Chaffee, F.H., 1987, AJ, 94, 278
Baker, J.C., 1997, MNRAS, 286, 23
Barthel, P.D., 1989, ApJ, 336, 606
Baum, S.A., Heckman, T.M., Bridle, A.H., van Breugel, W., Miley, G.K., 1988, ApJS, 68, 833
Baum, S., Heckman, T., 1989, ApJ, 336, 702
Baum, S.A., Heckman, T.M., van Breugel, W., 1990, ApJS, 74, 389
Best, P.N., Longair, M.S., Rottgering, H.J.A., 1996, MNRAS, 280, L9.
Bland-Hawthorn, J. & Jones, D.H. 1998a, PASA, 15, 44
Bland-Hawthorn, J. & Jones, D.H. 1998b, In Optical Astronomical Instrumentation, SPIE vol. 3355, 855
Blundell K.M. 1996, MNRAS 283, 538
Cimatti, A., Dey, A, van Breugel, W., Antonucci, R., Spinrad, H., 1996, ApJ, 465, 145
Cimatti, A., Dey, A., van Breugel, W., Hurt, T., Antonucci, R., 1997, ApJ, 476, 677
Clark, N.E., 1996, PhD thesis, University of Sheffield.
Clark, N.E., Tadhunter, C.N., Morganti, R., Killeen, N.E.B., Fosbury, R.A.E., Hook, R.N., Shaw, M., 1997, MNRAS, 286, 558
Clark, N.E., Axon, D.J., Tadhunter, C.N., Robinson, A., O’Brien, P., 1998, ApJ, 494, 546
Dey, A., Spinrad, H., 1996, ApJ, 459, 133
Dickson, R.D., Tadhunter, C.N., Shaw, M.A., Clark, N.E., Morganti, R., 1995, MNRAS, 273, L29
Dopita, M.A., Sutherland, R.S., 1996, ApJS, 102, 161
Fabian, A.C., 1989, MNRAS, 238, 41p
Fabbiano, G., Miller, L., Trinchieri, G., Longair, M., Elvis, M., 1984, ApJ, 277, 115
Fosbury, R.A.E., 1989: In: ESO Workshop on Extranuclear Activity in Galaxies, Meurs & Fosbury (eds), p169
Hansen, L., Norgaard-Nielsen, H.U., Jorgensen, H.E., 1987, A&ASuppl., 71, 465
Heckman, T.M., van Breugel, W.J.M., Miley, G.K., 1984, ApJ, 286, 509
Hill, G.J., Goodrich, R.W., DePoy, D.L., 1996, ApJ, 462, 162
Klein, R., McKee, C., Colella, P., 1994, ApJ, 420, 213
Knopp, G.P., Chambers, K.C., 1997, ApJS, 109, 367
Koekemoer, A.M., O’Dea, C.P., Sarazin, C.L., McNamara, B.R., Donahue, M., Voit, G.M., Baum, S.A., Gallimore, J.F., 1999, AJ, in press
Lawrence, A., 1991, MNRAS, 252, 586
Longair, M.S., Best, P.N., Rottgering, H.J.A., 1995, MNRAS, 275, L47
McCarthy, P.J., van Breugel, W., Spinrad, H., Djorgovski, S., 1987, ApJ, 321, L29
McCarthy, P.J., van Breugel, W., 1989, in The Epoch of Galaxy Formation, ed. C. Frenk, Kluwer Academic Press, p57
McCarthy, P.J., Spinrad, H., van Breugel, W.J.M., 1995, ApJSupp., 99, 27
McCarthy, P.J., Baum, S., Spinrad, H., 1996, ApJS, 106, 281
Ogle, P.M., Cohen, M.H., Miller, J.S., Tran, H.S., Fosbury, R.A.E., Goodrich, R.W., 1997, ApJ, 482, 370
Osterbrock, D.E., Koski, A.T., Phillips, M.M., 1976, ApJ, 206, 898
Pogge, R.W., 1988, ApJ, 328, 519
Rocca-Volmerange, B., Adam, G., Ferruit, P., Bacon, R., 1994, A&A, 292, 20
Stockton, A., Ridgway, S.E., Kellogg, M., 1996, AJ, 112, 902
Tadhunter, C.N., Fosbury, R.A.E., di Serego Alighieri, S., 1988, in Maraschi, L., Maccacaro, T., Ulrich, M.H., eds, Proc. Como Conf. 1988, BL Lac Objects, Springer-Verlag, Berlin, p.79
Tadhunter, C.N., Fosbury, R.A.E., Quinn, P., 1989, MNRAS, 240, 255
Tadhunter, C., Scarrott, S., Draper, P., Rolph, C., 1992, MNRAS, 256, 53p
Tadhunter, C.N., Tsvetanov, Z., 1989, Nat, 341, 422
Tadhunter, C.N., Morganti, R., Robinson, A., Dickson, R., Villar-Martin, M., Fosbury, R.A.E., 1997, MNRAS, submitted.
Villar-Martin, M., Binette, L., Fosbury, R.A.E., 1996, A&A, 312, 751
Villar-Martin, M., Tadhunter, C.N., Morganti, R., Clark, N., Killeen, N., Axon, D., 1998, A&A, 332, 479
Villar-Martin, M., Tadhunter, C.N., Morganti, R., Axon, D., 1999, MNRAS, 307, 24
van Breugel, W., Miley, G., Heckman, T., Butcher, H., Bridle, A., 1985, ApJ, 290, 496
West, M.J., 1994, MNRAS, 268, 79
Wills, B.J., Thompson, K.L., Han, M., Netzer, H., Wills, D., Baldwin, J.A., Ferland, G.J., Browne, I.W.A., Brotherton, M.S., 1995, ApJ, 447, 139
Young, S., Hough, J.H., Efstathiou, A., Wills, B.J., Axon, D.J., Bailey, J.A., Ward, M.J., 1996, MNRAS, 279, L72
Yun, M.S., Ho, P.T.P., Lo, K.Y., 1994, Nat, 372, 530
# R-mode runaway and rapidly rotating neutron stars
## 1. Introduction
The launch of the Rossi X-ray Timing Explorer (RXTE) in 1995 heralded a new era in our understanding of neutron star physics. Detailed observations of quasiperiodic phenomena at kHz frequencies in more than a dozen Low-Mass X-ray Binaries (LMXB) strongly suggest that these systems contain rapidly spinning neutron stars (for a recent review, see van der Klis (2000)), providing support for the standard model for the formation of millisecond pulsars (MSP) via spin-up due to accretion.
Despite these advances several difficult questions remain to be answered by further observations and/or theoretical modeling. For example, we still do not know the reason for the apparent lack of radio pulsars at shorter periods than the 1.56 ms of PSR1937+21 (for a review of recent progress in the modelling of rotating neutron stars, see Stergioulas (1998)). The recent RXTE observations provide a further challenge for theorists. Various models suggest that the neutron stars in LMXB spin rapidly, perhaps in the narrow range 260-590 Hz (van der Klis, 2000). Three different models have been proposed to explain this surprising result. The first model (due to White & Zhang (1997)) is based on the standard magnetosphere model for accretion induced spin-up, while the remaining two models are rather different, both being based on the idea that gravitational radiation balances the accretion torque. In the first such model for the LMXB (proposed by Bildsten (1998) and recently refined by Ushomirsky, Cutler & Bildsten (2000)), the gravitational waves are due to a quadrupole deformation induced in the deep neutron star crust because of accretion generated temperature gradients. The second gravitational-wave model relies on the recently discovered r-mode instability (see Andersson & Kokkotas (2000) for a review) to dissipate the accreted angular momentum from the neutron star.
In this Letter we reexamine the idea that gravitational waves from unstable r-modes provide the agent that balances the accretion torque. This possibility was first analyzed in detail by Andersson, Kokkotas & Stergioulas (1999) (but see also Bildsten (1998)). Originally, it was thought that an accreting star in which the r-modes were excited to a significant level would reach a spin-equilibrium, very much in the vein of suggestions by Papaloizou & Pringle (1978) and Wagoner (1984). Should this happen, the neutron stars in LMXB would be prime sources for detectable gravitational waves. However, as was pointed out by Levin (1999) and Spruit (1998), the original idea is not viable since, in addition to generating gravitational waves that dissipate angular momentum from the system, the r-modes will heat the star up (via the shear viscosity that counteracts the r-mode at the relevant temperatures). Since the shear viscosity gets weaker as the temperature increases, the mode-heating triggers a thermal runaway and in a few months the r-mode would spin an accreting neutron star down to a rather low rotation rate. Essentially, this conclusion rules out the r-modes in galactic LMXB as a source of detectable gravitational waves, since they will only radiate for a tiny fraction of the systems lifetime.
Other recent results would (at first sight) seem to emphasize the conclusion that the r-modes are not relevant for the LMXB. Bildsten & Ushomirsky (2000) investigated the effect that the presence of a solid crust would have on the r-mode oscillations. They estimated that the dissipation associated with a viscous boundary layer that arises at the base of the solid crust in a relatively cold neutron star would greatly exceed that of the standard shear viscosity. Thus, Bildsten and Ushomirsky concluded that the r-mode instability would only be relevant for very high rotation rates, and could therefore not play a role in the LMXB.
We have reassessed the effect of the viscous boundary layer (correcting an erroneous factor in the estimates of Bildsten & Ushomirsky (2000)). Our new estimates show that the presence of the crust is important, but that the instability operates at significantly lower spin rates than suggested by Bildsten and Ushomirsky. Once we combine our estimates with the thermal runaway (now due to heating caused mainly by the presence of the viscous boundary layer), that results as the star is spun up to the point at which the instability sets in, we arrive at a model for the spin-evolution of accreting neutron stars. Remarkably, this simple model agrees well with existing observations of rapidly rotating neutron stars, covering both the LMXB and MSP populations.
## 2. Dissipation due to a viscous boundary layer
The r-mode instability follows after a tug of war between (mainly current multipole) gravitational radiation that drives the mode and various dissipation mechanisms that counteract the fluid motion. In the simplest model, the mode is dominated by shear viscosity at low temperatures while bulk viscosity may suppress the mode at high temperatures. At intermediate temperatures, the r-mode sets an upper limit on the neutron star spin rate. In an interesting recent paper, Bildsten & Ushomirsky (2000) estimate the strength of dissipation due to the solid crust of an old neutron star, and find that the presence of a boundary layer at the base of the crust leads to a very strong damping of the r-modes.
While we agree with the main idea and the various assumptions made by Bildsten and Ushomirsky, we would like to point out one important difference between their results and ones used previously in the literature. Their assumed timescale for gravitational radiation reaction differs significantly from, for example, the uniform density result derived by Kokkotas & Stergioulas (1999) (and subsequently used by several authors, see Andersson & Kokkotas (2000)). This is surprising since the uniform density result, which can be written
$$t_{gw}\approx 22\left(\frac{1.4M_{\odot }}{M}\right)\left(\frac{\text{10 km}}{R}\right)^4\left(\frac{P}{\text{1 ms}}\right)^6\text{ s},$$
(1)
(where the negative sign indicates that the mode is unstable) has been shown to be close (within a factor of two) to the results for $`n=1`$ polytropes. $`M`$, $`R`$, and $`P`$ represent the mass, radius and spin period of the star, respectively. In contrast, Bildsten & Ushomirsky (2000) use the $`n=1`$ polytrope result and argue that it corresponds to $`t_{gw}\approx 146`$ s for a canonical neutron star rotating with a period of 1 ms, i.e. assume that radiation reaction is almost one order of magnitude weaker than in (1). This difference occurs because Bildsten and Ushomirsky have only rescaled the fiducial rotation frequency $`\mathrm{\Omega }_0=\sqrt{\pi G\overline{\rho }}`$ (where $`\overline{\rho }`$ represents the average density) in terms of which the $`n=1`$ polytrope results of Owen et al. (1998) were expressed ($`t_{gw}\approx 3.26(\mathrm{\Omega }_0/\mathrm{\Omega })^6`$ s for a specific polytropic stellar model). Unfortunately, this procedure is not correct. From the fundamental relations, e.g. the formula for the gravitational-wave energy radiated via the current multipoles, one can see that the gravitational-wave timescale should scale with $`M`$, $`R`$ and $`P`$ in the way manifested in (1). Thus, we believe that Bildsten and Ushomirsky underestimate the strength of radiation reaction significantly, which motivates us to reassess the relevance of the viscous boundary layer.
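The size of the discrepancy can be reproduced numerically. A sketch of our own arithmetic (the values of $`G`$ and $`M_{\odot }`$, and the canonical 1.4 $`M_{\odot }`$, 10 km star, are assumed constants): rescaling only $`\mathrm{\Omega }_0`$ in the Owen et al. polytrope fit recovers the $``$146 s figure, while the full $`M`$, $`R`$, $`P`$ scaling of equation (1) gives 22 s.

```python
import math

G = 6.674e-8                 # cgs
M_SUN = 1.989e33             # g
M, R = 1.4 * M_SUN, 1.0e6    # canonical star: 1.4 Msun, 10 km
P = 1.0e-3                   # spin period, s

# Full scaling, equation (1): |t_gw| ~ 22 s for the canonical star at P = 1 ms
t_full = 22.0 * (1.4 * M_SUN / M) * (1.0e6 / R) ** 4 * (P / 1.0e-3) ** 6

# Omega_0-only rescaling of the Owen et al. fit, t_gw ~ 3.26 (Omega_0/Omega)^6 s
rho_bar = M / (4.0 / 3.0 * math.pi * R**3)     # mean density
omega0 = math.sqrt(math.pi * G * rho_bar)       # fiducial frequency
omega = 2.0 * math.pi / P                       # spin frequency
t_rescaled = 3.26 * (omega0 / omega) ** 6

print(f"|t_gw| full scaling:      {t_full:.0f} s")      # 22 s
print(f"|t_gw| Omega_0 rescaling: {t_rescaled:.0f} s")  # ~143 s (cf. the quoted 146 s)
```

The small offset from 146 s comes only from the assumed physical constants; the order-of-magnitude disagreement between the two procedures is the point.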
We should, of course, emphasize at this point that our current understanding of the r-mode instability is based on crude estimates of the various timescales. In order to understand the role of the instability in an astrophysical context we must improve our modelling of many aspects of neutron star physics such as the effect of general relativity on the r-modes, cooling rates, viscosity coefficients, magnetic fields, potential superfluidity, the formation of a solid crust etcetera (see Andersson & Kokkotas (2000) for a description of recent progress in these various directions).
In the following we will mainly consider uniform density stars, i.e. use the gravitational-wave timescale given by (1). In estimating the dissipation timescale $`t_{vbl}`$ due to the presence of the crust, we need to evaluate $`t_{vbl}\approx 2E/(dE/dt)`$ where $`E`$ is the mode-energy, and $`dE/dt`$ follows from an integral over the surface area at the crust-core boundary (assumed to be located at radius $`R_b`$), cf. equation (3) of Bildsten & Ushomirsky (2000). To evaluate this integral we use the standard result for the shear viscosity in a normal fluid. To incorporate our uniform density model, we make the reasonable assumption that the density of the star is constant ($`\rho \approx 3M/4\pi R^3`$) inside radius $`R`$. Then it falls off rapidly in such a way that the base of the crust is located at a radius only slightly larger than $`R`$. Hence, it makes sense to use $`R_b\approx R`$. If we neglect the small mass located outside radius $`R`$ we can then immediately compare the result for the viscous boundary layer to the timescales used by Andersson, Kokkotas & Stergioulas (1999). In the end, our estimate for the dissipation due to the presence of the viscous boundary layer is
$$t_{vbl}\approx 200\left(\frac{M}{1.4M_{\odot }}\right)\left(\frac{\text{10 km}}{R}\right)^2\left(\frac{T}{10^8\text{ K}}\right)\left(\frac{P}{\text{1 ms}}\right)^{1/2}\text{ s},$$
(2)
which is a factor of 2 larger than that of Bildsten and Ushomirsky. This difference arises simply because the mode-energy $`E`$ is this factor larger for uniform density models. The star is assumed to have a uniform temperature distribution, with core temperature $`T`$.
In Figure 3 we show the instability window obtained from our revised estimate. As is clear from this Figure, the presence of a viscous boundary layer in an old, relatively cold neutron star is, indeed, important. However, Bildsten and Ushomirsky’s conclusion that the r-mode instability is irrelevant for the LMXB cannot be drawn from Figure 3. On the contrary, the Figure suggests that the instability may well be limiting the rotation of these systems.
## 3. Thermal runaway in rapidly spinning neutron stars
The fact that our revised instability curve for r-modes damped by dissipation in a viscous boundary layer agrees well with the fastest observed neutron star spin frequencies, cf. Figure 3, motivates us to speculate further on the relevance of the instability. We want to model how the potential presence of an unstable r-mode affects the spin-evolution of rapidly spinning, accreting neutron stars. To do this we use the phenomenological two-parameter model devised by Owen et al. (1998), which is centered on evolution equations for the rotation frequency $`\mathrm{\Omega }`$ and the (dimensionless) r-mode amplitude $`\alpha `$. Complete details of our particular version of this model will be given elsewhere.
At the qualitative level, our results are not surprising. Accreting stars in the LMXB are expected to have core temperatures in the range $`(1\text{-}4)\times 10^8`$ K (Brown & Bildsten, 1998). For such temperatures the dissipation due to the viscous boundary layer gets weaker as the temperature increases. Consequently, the situation here is essentially identical to that considered by Levin (1999) (see also Spruit (1998) and Bildsten & Ushomirsky (2000)). After accreting and spinning up for something like $`10^7`$ years, the star reaches the period at which the r-mode instability sets in. For our particular estimates this corresponds to a period of 1.5 ms (at a core temperature of $`10^8`$ K). It is notable that this value is close to the 1.56 ms period of PSR1937+21. Once the r-mode becomes unstable (point A in Figure 3), viscous heating (now mainly due to the energy released in the viscous boundary layer) rapidly heats the star up to a few times $`10^9`$ K. The r-mode amplitude increases until it reaches a prescribed saturation level (amplitude $`\alpha _s`$) at which unspecified nonlinear effects halt further growth (point B in Figure 3). Once the mode has saturated, the neutron star rapidly spins down as excess angular momentum is radiated as gravitational waves. When the star has spun down to the point where the mode again becomes stable (point C in Figure 3), the amplitude starts to decay and the mode plays no further role in the spin evolution of the star (point D in Figure 3) unless the star is again spun up to the instability limit. Two examples of such r-mode cycles (corresponding to $`\alpha _s=0.1`$ and 1, respectively) are shown in Figure 3.
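The 1.5 ms onset quoted here follows directly from balancing Eqs. (1) and (2) in magnitude. A minimal sketch, with $`M=1.4M_{\odot }`$ and $`R=10`$ km held fixed:

```python
# Balancing |t_gw| from Eq. (1) against t_vbl from Eq. (2), with P in ms and
# T8 = T / 1e8 K, gives 22 P^6 = 200 T8 P^(1/2), i.e. P_crit = (200 T8 / 22)^(2/11).
def p_crit_ms(T8):
    return (200.0 * T8 / 22.0) ** (2.0 / 11.0)

print(p_crit_ms(1.0))   # ~1.5 ms at T = 1e8 K, the onset period quoted above
```

Note the weak temperature dependence: the critical period only grows as $`T^{2/11}`$.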
The real surprise here concerns the quantitative predictions of our model. As already mentioned, the model suggests that an accreting star will not spin up beyond 1.5 ms. This value obviously depends on the chosen stellar model, but it is independent of the r-mode saturation amplitude and only weakly dependent on the accretion rate (through a slight change in core temperature). In fact, the accretion rate only affects the time it takes the star to complete one full cycle. As soon as the mode becomes unstable the spin-evolution is dominated by gravitational radiation and viscous heating. Once the star has gone through the brief phase when the r-mode is active it has spun down to a period in the range 2.8-4.8 ms (corresponding to $`0.01\le \alpha _s\le 1`$). Based on these results we propose the following spin-evolution scenario: An accreting neutron star will never spin up beyond (say) 1.5 ms. Once it has reached this level the r-mode instability sets in and spins the star down to a several ms period. At this point the mode is again stable and continued accretion may resume to spin the star up. Since the star must accrete roughly $`0.1M_{\odot }`$ to reach the instability point, and the LMXB companions have masses in the range $`0.1\text{-}0.4M_{\odot }`$, it can pass through several “r-mode cycles” during its lifetime.
Let us confront this simple model with current observations. To do this we note that our model leads to one main prediction: Once an accreting neutron star has been spun up beyond (say) 5 ms it must remain in the rather narrow range of periods 1.5-5 ms until it has stopped accreting and magnetic dipole braking eventually slows it down. Since a given star can go through several r-mode cycles before accretion is halted one would expect most neutron stars in LMXB and the MSP to be found in the predicted range of rotation rates. As is clear from Figure 3, this prediction agrees well with the range of rotation periods inferred from observed kHz quasiperiodic oscillations in LMXB. The observed range shown in Figure 3 corresponds to rotation frequencies in the range 260-590 Hz (cf. van der Klis (2000)). Our model also agrees with the observed data for MSP, which are mainly found in the range 1.56-6 ms, see Figure 3. In other words, our proposed model is in agreement with current observed data for rapidly rotating neutron stars.
Finally, it is worthwhile discussing briefly the detectability of the gravitational waves that are radiated during the relatively short time when the r-mode is saturated and the star spins down. As was argued by Levin (1999), the fact that the r-mode is active only for a small fraction of the lifetime of the system (something like 1 month out of the $`10^7`$ years it takes to complete one full cycle) means that even though these sources would be supremely detectable from within our galaxy the event rate is far too low to make them relevant. However, it is interesting to note that the spin-evolution is rather similar to that of a hot young neutron star once the r-mode has reached its saturation amplitude. This means that we can analyze the detectability of the emerging gravitational waves using the framework of Owen et al. (1998). We then find that these events can be observed from rather distant galaxies. For a source in the Virgo cluster (assumed to be at a distance of 15 Mpc) these gravitational waves could be detected with a signal to noise ratio of a few using LIGO II. However, even at the distance of the Virgo cluster these events would be quite rare. By combining a birth rate for LMXB of $`7\times 10^{-6}`$ per year per galaxy with the fact that the volume of space out to the Virgo cluster contains $`10^3`$ galaxies, and the possibility that each LMXB passes through (say) four r-mode cycles during its lifetime we deduce that one can only hope to see a few events per century in Virgo. In order to see several events per year the detector must be sensitive enough to detect these gravitational waves from (say) 150 Mpc. This would require a more advanced detector configuration such as a narrow-banded LIGO II. We will discuss this issue in more detail elsewhere.
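The quoted event rates are simple arithmetic; an illustrative check (the birth rate, galaxy count and number of cycles are the estimates given above):

```python
birth_rate = 7e-6          # LMXB births per year per galaxy
n_galaxies = 1e3           # galaxies out to the Virgo cluster (~15 Mpc)
cycles = 4                 # assumed r-mode cycles per LMXB lifetime

rate_virgo = birth_rate * n_galaxies * cycles
print(rate_virgo * 100.0)                     # ~3 events per century out to Virgo

# A horizon of 150 Mpc instead of 15 Mpc multiplies the surveyed volume by 10^3:
rate_150Mpc = rate_virgo * (150.0 / 15.0) ** 3
print(rate_150Mpc)                            # ~30 events per year
```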
## 4. Additional remarks
Before concluding our discussion, we recall that the initial excitement over the r-mode instability was related to the fact that it provided an explanation for the relatively slow inferred spin rates for young pulsars. In view of this, it is natural to digress somewhat and discuss how the picture of the r-mode instability in hot, newly born neutron stars is affected by the possible formation of a solid crust. Hence, we consider the evolution of a neutron star just after its birth in a supernova explosion. At first glance, we might expect to model its r-mode amplitude in the standard way, cf. Owen et al. (1998), using the normal (crust-free) fluid viscous damping times for stellar temperatures above the melting temperature of the crust ($`T_m`$), and the viscous boundary layer damping time for temperatures below $`T_m`$. However, the situation is a little more complicated than this. Recall that the latent heat (i.e. the Coulomb binding energy) of a typical crust is $`E_{lat}\approx 10^{48}`$ ergs, while the r-mode energy is $`E_m\approx 2\alpha ^2(\text{1 ms}/P)^2\times 10^{51}`$ ergs. Provided that the time taken for the star to cool to $`T_m`$ is sufficiently long, the energy in the r-mode (which grows exponentially on a timescale $`t_{gw}\approx 20`$ s) will exceed $`E_{lat}`$, preventing the formation of the crust, even when $`T<T_m`$. Then the star will spin down in the manner described by, eg. Owen et al. (1998). This phase will end either because the star leaves the instability region of the $`\mathrm{\Omega }\text{-}T`$ plot (see, for example, Fig. 1 of Owen et al.), or because the mode energy in the outer layers of the star (where the crust is going to form) has fallen below the crustal binding energy. We can estimate that this would happen at a frequency $`\approx 70\text{ Hz}/\alpha _s`$, by equating $`E_{lat}`$ to roughly 10 % of $`E_m`$.
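The $`70\text{ Hz}/\alpha _s`$ estimate can be reproduced from the two energies quoted above; a short illustrative sketch:

```python
import math

E_lat = 1e48                          # crustal latent heat, erg

# r-mode energy quoted in the text: E_m ~ 2 alpha^2 (1 ms / P)^2 x 1e51 erg.
# Crust formation is no longer blocked once ~10% of E_m drops below E_lat;
# solving 0.1 * E_m = E_lat for the spin frequency nu = 1/P:
def nu_crust_hz(alpha_s):
    inv_P_ms = math.sqrt(E_lat / (0.1 * 2.0e51)) / alpha_s
    return 1000.0 * inv_P_ms          # convert 1/P from ms^-1 to Hz

print(nu_crust_hz(1.0))               # ~70 Hz, i.e. P ~ 14 ms for alpha_s = 1
```

For $`\alpha _s=1`$ this endpoint corresponds to a period of about 14 ms, consistent with the "around 15 ms" final period discussed below.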
A more accurate treatment would take into account the *local* kinetic energy of fluid elements and since the latter is smaller near the poles than near the equator, the crust might form earlier at the poles. Clearly the problem of crust formation in an oscillating star requires further investigation. The final spin period will be around 15 ms, if the r-mode grows to an amplitude of $`\alpha _s\simeq 1`$, consistent with the extrapolated initial spin rates of many young pulsars. On the other hand, if the mode is not given time to grow very large it will not prevent crust formation at $`T_m`$. Such a scenario was described by Bildsten & Ushomirsky (2000), who noted that the r-mode instability would then not spin the star down beyond a much higher frequency. Using our estimated timescales the resultant spin period would be 3-5 ms in this scenario. Which scenario applies depends sensitively on the early cooling of the star, the crustal formation temperature and perhaps most importantly the initial amplitude of the r-mode following the collapse. It is, in fact, possible that both routes are viable and that a bimodal distribution of initial spin periods results. A likely key parameter is whether the supernova collapse leads to a large initial r-mode amplitude $`\alpha _s`$ or not. An initial period of $`15`$ ms would fit the long established data for the Crab, while the recently discovered 16 ms pulsar PSR J0537-6910 (Marshall et al., 1998) requires a considerably shorter initial period of a few ms.
In conclusion, we have reexamined the effect that the dissipation due to a possible viscous boundary layer in a neutron star with a solid crust has on the stability of the r-modes. By combining our new estimates with the thermal runaway introduced by Levin (1999) and Spruit (1998), we arrive at a spin-evolution model that agrees with present observations for rapidly spinning neutron stars. In particular, our predictions agree well with observations of both LMXB and MSP. Furthermore, the model can potentially explain the extrapolated spin periods of the young pulsars. Since it brings out this unified picture, our simple model has many attractive features, and we are currently investigating it in more detail.
We thank L. Bildsten, W. Kluzniak, B. Sathyaprakash, H. Spruit and G. Ushomirsky for comments on a draft version of this paper. This work was supported by PPARC grant PPA/G/1998/00606 to NA.
no-problem/0002/hep-ph0002242.html
## I Introduction
The SuperKamiokande Collaboration has recently confirmed the oscillation of atmospheric neutrinos . This evidence, as well as the strong indications of oscillation of solar neutrinos too, which could explain the solar neutrino deficit , leads to nonzero neutrino masses. Although not zero, such masses have to be much smaller than the charged lepton and quark masses, less than a few eV . This feature can be explained by means of the seesaw mechanism , where the mass matrix $`M_L`$ of light (left-handed) Majorana neutrinos is given by
$$M_L=M_DM_R^{-1}M_D^T,$$
(1)
with the Dirac mass matrix $`M_D`$ of the same order of magnitude as the charged lepton or quark mass matrix, and the eigenvalues of $`M_R`$, the mass matrix of right-handed neutrinos, much bigger than the elements of $`M_D`$.
In the Minimal Standard Model plus three right-handed neutrinos, the mass matrix of heavy neutrinos is generated by a Majorana mass term $`(1/2)\overline{\nu }_RM_R(\nu _R)^c`$ and hence $`M_R`$ is not constrained. Instead, in Grand Unified Theories (GUTs) like $`SO(10)`$, $`M_R`$ is obtained from the Yukawa coupling of right-handed neutrinos with the Higgs field that breaks the unification or the intermediate group to the Standard Model . When such a field gets a VEV $`v_R`$, which is the unification or the intermediate scale, the right-handed neutrinos take a mass and $`M_R=Y_Rv_R`$, where $`Y_R`$ is the matrix of Yukawa coefficients. Actually, this happens because at the same stage $`B-L`$ is also broken, allowing for Majorana masses. In the supersymmetric case $`v_R`$ is the unification scale ($`v_R\sim 10^{16}`$ GeV), while in the nonsupersymmetric case it is the intermediate scale ($`v_R\sim 10^9`$-$`10^{13}`$ GeV) . On the other hand, GUTs generally predict $`M_D\simeq M_u`$, where $`M_u`$ is the mass matrix of up quarks, and $`M_l\simeq M_d`$, where $`M_l`$ is the mass matrix of charged leptons and $`M_d`$ the mass matrix of down quarks. This is called quark-lepton symmetry.
From the experimental data on neutrino masses and mixings, and the quark-lepton symmetry, it is possible to infer the heavy neutrino mass matrix $`M_R`$ by inverting formula (1),
$$M_R=M_D^TM_L^{-1}M_D.$$
(2)
In fact, $`M_L`$ can be obtained, at least approximately, from experimental data on neutrino oscillations, and quark-lepton symmetry suggests
$$M_D\simeq \frac{m_\tau }{m_b}\text{diag}(m_u,m_c,m_t).$$
(3)
The nearly diagonal form of $`M_D`$ is due to the fact that mixing in the Dirac sector is similar to the small mixing in the up quark sector , and the factor $`m_\tau /m_b\equiv k`$ is due to approximate running from the unification or intermediate scale, where $`m_b=m_\tau `$ should hold . As a matter of fact $`M_D`$ is almost scale independent. Then the Dirac masses of neutrinos are fixed by the values of the up quark masses at the unification scale in the supersymmetric model, and at the intermediate scale in the nonsupersymmetric model. However, in both cases their values are roughly similar , namely $`M_D\simeq \mathrm{diag}(0.001,0.3,100)`$ GeV. It is now important to check if the resulting scale of $`M_R`$ is in accordance with the physical scales of GUTs, and also the structure of $`M_R`$, which would give further insight towards a more complete theory. This program has been addressed in refs. and in the recent papers. In this paper we want to extend the analysis of ref., in order to include small but not zero $`U_{e3}`$, inverted hierarchy of light neutrino masses, approximate effect of Majorana phases, and a discussion on the structure of $`M_R`$.
In section II we summarize the experimental information on neutrino masses and mixings, coming mainly from solar and atmospheric oscillations. In sections III and IV the normal and inverted mass hierarchy cases are studied. In section V the effect of Majorana phases is briefly considered and finally we give some concluding remarks.
## II Neutrino masses and mixings
We denote by $`m_i`$ ($`i=1,2,3`$) the light neutrino masses. The mass eigenstates $`\nu _i`$ are related to the weak eigenstates $`\nu _\alpha `$ ($`\alpha =e,\mu ,\tau `$) by the unitary matrix $`U`$,
$$\nu _\alpha =U_{\alpha i}\nu _i.$$
(4)
The results on solar oscillations imply for the three solutions of the solar neutrino problem, namely small mixing MSW (SM), large mixing MSW (LM) and vacuum oscillations (VO), the following orders of magnitude for $`\mathrm{\Delta }m_{sol}^2`$ :
$$\mathrm{\Delta }m_{sol}^2\sim 10^{-6}\text{eV}^2\text{(SM)}$$
(5)
$$\mathrm{\Delta }m_{sol}^2\sim 10^{-5}\text{eV}^2\text{(LM)}$$
(6)
$$\mathrm{\Delta }m_{sol}^2\sim 10^{-10}\text{eV}^2\text{(VO)}$$
(7)
On the other hand, atmospheric oscillations give
$$\mathrm{\Delta }m_{atm}^2\sim 10^{-3}\text{eV}^2,$$
(8)
so that $`\mathrm{\Delta }m_{sol}^2\mathrm{\Delta }m_{atm}^2`$. We can set
$$\mathrm{\Delta }m_{sol}^2=m_2^2-m_1^2,\mathrm{\Delta }m_{atm}^2=m_3^2-m_{1,2}^2,$$
(9)
and, assuming without loss of generality $`m_3>0`$, there are three possible spectra for $`m_i`$ :
$$m_3\gg |m_2|,|m_1|\text{(hierarchical)}$$
(10)
$$|m_1|\simeq |m_2|\gg m_3\text{(inverted hierarchy)}$$
(11)
$$|m_1|\simeq |m_2|\simeq m_3\text{(nearly degenerate).}$$
(12)
Moreover, due to the near maximal mixing of atmospheric neutrinos and the smallness of $`U_{e3}`$ , the mixing matrix $`U`$ can be written as
$$U=\left(\begin{array}{ccc}c& s& ϵ\\ -\frac{1}{\sqrt{2}}(s+cϵ)& \frac{1}{\sqrt{2}}(c-sϵ)& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}(s-cϵ)& -\frac{1}{\sqrt{2}}(c+sϵ)& \frac{1}{\sqrt{2}}\end{array}\right),$$
(13)
where $`ϵ`$ is small and $`s=\mathrm{sin}\theta `$, $`c=\mathrm{cos}\theta `$, with $`\theta `$ the mixing angle of solar neutrinos. The SM solution corresponds to $`s0`$, while the LM and especially the VO solutions correspond to $`s1/\sqrt{2}`$ , that is bimaximal mixing .
We set $`D_L=`$diag$`(m_1,m_2,m_3)`$. Since the mixing in the charged lepton sector can be considered small and our experimental informations on neutrinos are approximate, for our analysis we can also set $`U^{}M_LU^{}=D_L`$ (exact in the basis where $`M_l`$ is diagonal), that is
$$M_L=UD_LU^T,$$
(14)
which gives the light neutrino mass matrix, valid up to small corrections of the order $`ϵ^2\sim 0.03`$,
$$M_L=\left(\begin{array}{ccc}\mu & \delta & \delta ^{}\\ \delta & \rho & \sigma \\ \delta ^{}& \sigma & \rho ^{}\end{array}\right),$$
(15)
with
$$\mu =m_1c^2+m_2s^2$$
$$\mu ^{}=m_1s^2+m_2c^2$$
$$\delta =\frac{1}{\sqrt{2}}[ϵ(m_3-\mu )+(m_2-m_1)cs]$$

$$\delta ^{}=\frac{1}{\sqrt{2}}[ϵ(m_3-\mu )-(m_2-m_1)cs]$$

$$\sigma =\frac{1}{2}(m_3-\mu ^{})$$

$$\rho =\frac{1}{2}[m_3+\mu ^{}-2(m_2-m_1)csϵ]$$

$$\rho ^{}=\frac{1}{2}[m_3+\mu ^{}+2(m_2-m_1)csϵ].$$
The inverse of $`M_L`$ is given by
$$M_L^{-1}=\left(\begin{array}{ccc}\rho \rho ^{}-\sigma ^2& \sigma \delta ^{}-\delta \rho ^{}& \delta \sigma -\rho \delta ^{}\\ \sigma \delta ^{}-\delta \rho ^{}& \mu \rho ^{}-\delta ^2& \delta \delta ^{}-\mu \sigma \\ \delta \sigma -\rho \delta ^{}& \delta \delta ^{}-\mu \sigma & \mu \rho -\delta ^2\end{array}\right)\frac{1}{D},$$
(16)
with $`D=m_1m_2m_3`$.
In the following sections we will study the matrix $`M_R`$ which is obtained from eqns.(14),(3),(2) by the first two possible neutrino spectra (10),(11) and $`s\simeq 0`$ (single maximal mixing) or $`s\simeq 1/\sqrt{2}`$ (double maximal mixing). We do not consider the nearly degenerate spectrum because it suffers from a number of instabilities . Notice that one of the advantages of such a spectrum was the possibility of providing a hot dark matter component (with $`m_i\simeq 2`$ eV), but now we believe that the amount of hot dark matter is probably much smaller, and one neutrino with mass about 0.07 eV, as in the hierarchical case, can be relevant . In any case, if one assumes a hierarchical $`M_D`$ it is quite difficult to make $`M_L`$ have degenerate eigenvalues. Nevertheless, we give here a rough evaluation for the scale of $`M_R`$ at the intermediate value $`\sim 10^{12}`$ GeV.
In this paper we do not consider the results of the LSND experiment , which have not yet been confirmed by other experiments. If confirmed, the LSND results would imply a third $`\mathrm{\Delta }m^2`$ scale, $`\mathrm{\Delta }m_{LSND}^2\sim 1`$ eV<sup>2</sup>, and thus a fourth (light and sterile) neutrino.
## III Hierarchical spectrum
In this case the light neutrino mass matrix can be written as
$$M_L=\left(\begin{array}{ccc}\mu & \delta & \delta ^{}\\ \delta & \frac{m_3}{2}& \frac{m_3}{2}\\ \delta ^{}& \frac{m_3}{2}& \frac{m_3}{2}\end{array}\right),$$
(17)
with
$$\mu =m_1c^2+m_2s^2$$
$$\delta =\frac{1}{\sqrt{2}}[ϵm_3+(m_2-m_1)cs]$$

$$\delta ^{}=\frac{1}{\sqrt{2}}[ϵm_3-(m_2-m_1)cs].$$
The leading form is
$$M_L\sim \left(\begin{array}{ccc}0& 0& 0\\ 0& 1& 1\\ 0& 1& 1\end{array}\right).$$
The inverse of $`M_L`$ is given by
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3\mu ^{}& \frac{m_3}{2}(\delta ^{}-\delta )& \frac{m_3}{2}(\delta -\delta ^{})\\ \frac{m_3}{2}(\delta ^{}-\delta )& \frac{m_3}{2}\mu -\delta ^2& \delta \delta ^{}-\frac{m_3}{2}\mu \\ \frac{m_3}{2}(\delta -\delta ^{})& \delta \delta ^{}-\frac{m_3}{2}\mu & \frac{m_3}{2}\mu -\delta ^2\end{array}\right)\frac{1}{D},$$
(18)
where for the entry 1-1 we have used a better degree of approximation from eqn.(16) than that obtained from eqn.(17). Due to the mass hierarchy (10) we also have $`m_3^2\simeq \mathrm{\Delta }m_{atm}^2`$, for example we can take $`m_3=6\times 10^{-2}`$ eV. It will be useful to match results obtained for the scale of $`M_R`$ with the one obtained when $`M_L=D_L`$, that is
$$M_{R33}\simeq \frac{k^2m_t^2}{m_3}.$$
Within the paper we assume that the largest Yukawa coefficient in $`Y_R`$ is of order 1, as indeed it happens for the up quark Yukawa coefficients.
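For orientation, the benchmark scale above is easy to evaluate with the illustrative numbers quoted so far ($`m_3=6\times 10^{-2}`$ eV, largest Dirac mass $`km_t\simeq 100`$ GeV):

```python
m3 = 6e-2                        # eV, so m3^2 = 3.6e-3 eV^2, of order Dm2_atm
k_mt = 100.0                     # GeV, largest Dirac mass k m_t
MR33 = k_mt**2 / (m3 * 1e-9)     # benchmark scale k^2 m_t^2 / m3, in GeV
print(m3**2, MR33)               # ~3.6e-3 eV^2 and ~1.7e14 GeV
```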
### A Single maximal mixing
If $`s0`$, then $`\delta =\delta ^{}=(1/\sqrt{2})ϵm_3`$, so that
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3m_2& 0& 0\\ 0& x& -x\\ 0& -x& x\end{array}\right)\frac{1}{D},$$
(19)
with $`x=m_3(m_1ϵ^2m_3)/2`$ and hence
$$M_{R33}\simeq \frac{1}{2}\frac{m_1-ϵ^2m_3}{m_1m_2}k^2m_t^2.$$
(20)
If $`ϵ^2m_3m_1`$, then
$$M_{R33}\simeq \frac{1}{2}\frac{k^2m_t^2}{m_2}.$$
(21)
Since $`s0`$ corresponds to the SM solution, one has $`m_210^3`$ eV and $`M_{R33}10^{15}`$ GeV. The scale can be lowered for $`m_1ϵ^2m_3`$. If this cancellation does not occur, the structure of $`M_R`$ is hierarchical, with the leading form
$$M_R\sim \text{diag}(0,0,1),$$
(22)
which is the same as that obtained for $`M_D`$ (see eqn.(3)).
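These scalings can be verified by carrying out the inversion (2) numerically. The sketch below uses illustrative parameter values (not fits), with the off-diagonal signs of $`U`$ fixed by unitarity, and reproduces both the $`\sim 10^{15}`$ GeV scale and the leading form (22):

```python
import numpy as np

# Mixing matrix of Eq. (13); signs follow from unitarity to O(eps^2).
def U_matrix(s, eps):
    c = np.sqrt(1.0 - s * s)
    r = 1.0 / np.sqrt(2.0)
    return np.array([[c, s, eps],
                     [-r * (s + c * eps),  r * (c - s * eps), r],
                     [ r * (s - c * eps), -r * (c + s * eps), r]])

# Hierarchical spectrum, single maximal mixing (SM solution); masses in eV.
eps = 0.01                             # chosen so that eps^2 m3 << m1
m = np.array([1e-4, 1e-3, 6e-2])
MD = np.diag([1e6, 3e8, 1e11])         # M_D ~ diag(0.001, 0.3, 100) GeV, in eV

U = U_matrix(0.0, eps)
ML = U @ np.diag(m) @ U.T              # Eq. (14)
MR = MD @ np.linalg.inv(ML) @ MD       # Eq. (2); M_D is diagonal

print(MR[2, 2] / 1e9)                  # ~5e15 GeV, matching (1/2) k^2 m_t^2 / m_2
```

The 3-3 entry dominates by several orders of magnitude, confirming the hierarchical texture.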
### B Double maximal mixing
For $`s1/\sqrt{2}`$ we have three subcases: $`|m_2||m_1|`$, $`m_2m_1`$ and $`m_2m_1`$.
1. We consider now the case with $`|m_2|\gg |m_1|`$, where we have
$$\delta =\frac{1}{\sqrt{2}}(ϵm_3+m_2/2)$$
$$\delta ^{}=\frac{1}{\sqrt{2}}(ϵm_3-m_2/2)$$
and $`\mu =m_2/2`$. If $`2ϵm_3|m_2|`$, then $`\delta =m_2/2\sqrt{2}=\delta ^{}`$ and
$$M_L^{-1}\simeq \left(\begin{array}{ccc}\frac{m_3m_2}{2}& -\frac{m_3m_2}{2\sqrt{2}}& \frac{m_3m_2}{2\sqrt{2}}\\ -\frac{m_3m_2}{2\sqrt{2}}& \frac{m_3m_2}{4}& -\frac{m_3m_2}{4}\\ \frac{m_3m_2}{2\sqrt{2}}& -\frac{m_3m_2}{4}& \frac{m_3m_2}{4}\end{array}\right)\frac{1}{D}.$$
(23)
The scale of $`M_R`$ is given by
$$M_{R33}\simeq \frac{1}{4}\frac{k^2m_t^2}{m_1},$$
(24)
and $`M_{R33}10^{16}`$ GeV (LM) or $`M_{R33}10^{18}`$ GeV (VO). If $`\delta 0`$ or $`\delta ^{}0`$ results are similar. We have a hierarchical structure for $`M_R`$, reflecting the hierarchy of Dirac masses. The leading form is again eqn.(22).
2. If $`m_2m_1`$, then $`\delta =\delta ^{}=(1/\sqrt{2})ϵm_3`$ and $`\mu =m_2`$ yielding
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3m_2& 0& 0\\ 0& y& -y\\ 0& -y& y\end{array}\right)\frac{1}{D},$$
(25)
with $`y=m_3(m_2ϵ^2m_3)/2`$ and
$$M_{R33}\simeq \frac{1}{2}\frac{m_2-ϵ^2m_3}{m_2^2}k^2m_t^2.$$
(26)
If $`ϵ^2m_3m_2`$, then
$$M_{R33}\simeq \frac{1}{2}\frac{k^2m_t^2}{m_2}$$
(27)
and $`M_{R33}10^{15}`$ GeV (LM and VO). The scale can be lowered if $`m_2ϵ^2m_3`$. If the cancellation does not occur, $`M_R`$ is hierarchical with the leading form (22).
3. For $`m_2m_1`$ we have
$$\delta =\frac{1}{\sqrt{2}}(ϵm_3+m_2)$$
$$\delta ^{}=\frac{1}{\sqrt{2}}(ϵm_3-m_2)$$
and $`\mu 0`$. Assuming $`ϵm_3|m_2|`$, one has $`\delta =m_2/\sqrt{2}=\delta ^{}`$ and
$$M_L^{-1}\simeq \left(\begin{array}{ccc}0& -\sqrt{2}m_3m_2& \sqrt{2}m_3m_2\\ -\sqrt{2}m_3m_2& -m_2^2/2& -m_2^2/2\\ \sqrt{2}m_3m_2& -m_2^2/2& -m_2^2/2\end{array}\right)\frac{1}{D},$$
(28)
$$M_{R33}\simeq \frac{1}{2}\frac{k^2m_t^2}{m_3}$$
(29)
$$M_{R13}\simeq \sqrt{2}\frac{k^2m_um_t}{m_2}.$$
(30)
For $`m_2/m_3m_u/m_t`$, $`M_{R33}`$ and $`M_{R13}`$ are similar and near the unification scale. Otherwise $`M_R`$ is hierarchical. An interesting case is $`\delta 0`$, which is possible if $`m_2<0`$, when $`\delta ^{}=\sqrt{2}m_2`$ and
$$M_L^{-1}\simeq \left(\begin{array}{ccc}0& -\frac{m_3m_2}{\sqrt{2}}& \frac{m_3m_2}{\sqrt{2}}\\ -\frac{m_3m_2}{\sqrt{2}}& 2ϵm_3m_2& 0\\ \frac{m_3m_2}{\sqrt{2}}& 0& 0\end{array}\right)\frac{1}{D},$$
(31)
so that the scale is given by
$$M_{R13}\simeq \frac{1}{\sqrt{2}}\frac{k^2m_um_t}{m_2}$$
(32)
that is intermediate. In fact $`m_2\sim 10^{-3}`$ eV gives $`M_{R13}\sim 10^{11}`$ GeV. In this special case the structure of $`M_R`$ is roughly off-diagonal with the leading form
$$M_R\sim \left(\begin{array}{ccc}0& 0& 1\\ 0& 0& 0\\ 1& 0& 0\end{array}\right),$$
(33)
which was obtained for example in refs.. If $`\delta ^{}\simeq 0`$, $`\delta =\sqrt{2}m_2`$ and
$$M_L^{-1}\simeq \left(\begin{array}{ccc}0& -\frac{m_3m_2}{\sqrt{2}}& \frac{m_3m_2}{\sqrt{2}}\\ -\frac{m_3m_2}{\sqrt{2}}& 0& 0\\ \frac{m_3m_2}{\sqrt{2}}& 0& -2ϵm_3m_2\end{array}\right)\frac{1}{D},$$
(34)
with $`M_{R33}ϵk^2m_t^2/m_2`$, $`M_{R13}k^2m_um_t/m_2`$, near the unification scale.
## IV Inverted hierarchy
In this case the light neutrino mass matrix is
$$M_L=\left(\begin{array}{ccc}\mu & \delta & \delta ^{}\\ \delta & \frac{\mu ^{}}{2}& -\frac{\mu ^{}}{2}\\ \delta ^{}& -\frac{\mu ^{}}{2}& \frac{\mu ^{}}{2}\end{array}\right),$$
(35)
with
$$\mu =m_1c^2+m_2s^2$$
$$\mu ^{}=m_1s^2+m_2c^2$$
$$\delta =-\frac{1}{\sqrt{2}}[ϵ\mu -(m_2-m_1)cs]$$

$$\delta ^{}=-\frac{1}{\sqrt{2}}[ϵ\mu +(m_2-m_1)cs]$$
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3\mu ^{}& -(\delta +\delta ^{})\frac{\mu ^{}}{2}& -(\delta +\delta ^{})\frac{\mu ^{}}{2}\\ -(\delta +\delta ^{})\frac{\mu ^{}}{2}& \frac{\mu \mu ^{}}{2}-\delta ^2& \frac{\mu \mu ^{}}{2}+\delta \delta ^{}\\ -(\delta +\delta ^{})\frac{\mu ^{}}{2}& \frac{\mu \mu ^{}}{2}+\delta \delta ^{}& \frac{\mu \mu ^{}}{2}-\delta ^2\end{array}\right)\frac{1}{D},$$
(36)
and $`m_{1,2}^2\mathrm{\Delta }m_{atm}^2`$. The lightest neutrino mass $`m_3`$ does not depend on the solar neutrino solution, and can be arbitrarily small.
### A Single maximal mixing
If $`s0`$, then $`\mu =m_1`$, $`\mu ^{}=m_2`$, $`\delta =(1/\sqrt{2})ϵm_1=\delta ^{}`$, the leading $`M_L`$ is given by
$$M_L\sim \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& -1\\ 0& -1& 1\end{array}\right)$$
and the inverse of $`M_L`$ by
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3m_2& ϵ\frac{m_1m_2}{\sqrt{2}}& ϵ\frac{m_1m_2}{\sqrt{2}}\\ ϵ\frac{m_1m_2}{\sqrt{2}}& \frac{m_1m_2}{2}& \frac{m_1m_2}{2}\\ ϵ\frac{m_1m_2}{\sqrt{2}}& \frac{m_1m_2}{2}& \frac{m_1m_2}{2}\end{array}\right)\frac{1}{D}$$
(37)
so that
$$M_{R33}\simeq \frac{1}{2}\frac{k^2m_t^2}{m_3},$$
(38)
which is at or above the unification scale. The structure of $`M_R`$ is hierarchical, with the leading form (22).
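This scale can also be checked by direct numerical inversion (illustrative masses with $`m_{1,2}^2\simeq \mathrm{\Delta }m_{atm}^2`$; the off-diagonal signs of $`U`$ are those required by unitarity):

```python
import numpy as np

def U_matrix(s, eps):
    c = np.sqrt(1.0 - s * s)
    r = 1.0 / np.sqrt(2.0)
    return np.array([[c, s, eps],
                     [-r * (s + c * eps),  r * (c - s * eps), r],
                     [ r * (s - c * eps), -r * (c + s * eps), r]])

# Inverted hierarchy with s ~ 0: |m1| ~ |m2| ~ sqrt(Dm2_atm), m3 small (eV).
m = np.array([5.0e-2, 5.1e-2, 1.0e-3])
MD = np.diag([1e6, 3e8, 1e11])            # diag(0.001, 0.3, 100) GeV, in eV

U = U_matrix(0.0, 0.01)
ML = U @ np.diag(m) @ U.T
MR = MD @ np.linalg.inv(ML) @ MD

print(MR[2, 2] / 1e9)                     # ~5e15 GeV, matching (1/2) k^2 m_t^2 / m_3
```

Since $`m_3`$ can be made arbitrarily small, the resulting scale is indeed at or above the unification scale.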
### B Double maximal mixing
For $`s1/\sqrt{2}`$ we have two cases, corresponding to $`m_2m_1`$ and $`m_2m_1`$.
If $`m_2m_1`$, then $`\mu =m_{1,2}=\mu ^{}`$, $`\delta =(1/\sqrt{2})ϵm_{1,2}=\delta ^{}`$, the leading $`M_L`$ is like for $`s0`$ and
$$M_L^{-1}\simeq \left(\begin{array}{ccc}m_3m_{1,2}& ϵ\frac{m_{1,2}^2}{\sqrt{2}}& ϵ\frac{m_{1,2}^2}{\sqrt{2}}\\ ϵ\frac{m_{1,2}^2}{\sqrt{2}}& \frac{m_{1,2}^2}{2}& \frac{m_{1,2}^2}{2}\\ ϵ\frac{m_{1,2}^2}{\sqrt{2}}& \frac{m_{1,2}^2}{2}& \frac{m_{1,2}^2}{2}\end{array}\right)\frac{1}{D}$$
(39)
with the same result as for $`s0`$.
If $`m_2m_1`$, then $`\mu =\mu ^{}=0`$, $`\delta =(1/\sqrt{2})m_{1,2}=\delta ^{}`$, the leading $`M_L`$ is
$$M_L\sim \left(\begin{array}{ccc}0& 1& -1\\ 1& 0& 0\\ -1& 0& 0\end{array}\right)$$
and the inverse is
$$M_L^{-1}\simeq \left(\begin{array}{ccc}0& 0& 0\\ 0& -\frac{m_{1,2}^2}{2}& -\frac{m_{1,2}^2}{2}\\ 0& -\frac{m_{1,2}^2}{2}& -\frac{m_{1,2}^2}{2}\end{array}\right)\frac{1}{D}$$
(40)
with
$$M_{R33}\simeq 2\frac{k^2m_t^2}{m_3},$$
(41)
similar to above, and with a hierarchical $`M_R`$, of the leading form (22).
## V Effect of phases
In the preceding sections we have considered only real matrices, which is a CP conserving framework. The signs of $`m_i`$ correspond to CP parities of neutrinos, while the physical masses are $`|m_i|`$ . Let us now write a more general form of $`M_L`$ , namely the same as eqn.(14) but with $`U`$ parametrized as the ordinary CKM matrix (with the CP violating phase $`\delta `$) and
$$D_L=\text{diag}(m_1e^{i\alpha },m_2e^{i\beta },m_3)$$
(42)
with $`m_i>0`$ . We see that the preceding formalism trasforms according to
$$m_1\to m_1e^{i\alpha },m_2\to m_2e^{i\beta },ϵ\to ϵe^{i\delta }.$$
Moreover, in the hierarchical case $`ϵ`$ (or $`ϵ^2`$) is often joined to $`m_3`$, while in the inverted hierarchy case it is joined to $`m_{1,2}`$. It is clear that if there is no fine tuning of the parameters $`m_i`$, $`ϵ`$, phases have a minor effect. However, we have found some important cases where cancellations occur, indicating also small (that is about 0) or large (that is about $`\pi `$) phase differences. For example eqn.(31) may be obtained for $`\alpha \simeq 0`$, $`\delta \simeq 0`$, $`\beta \simeq \pi `$. It is worth remembering that only the phase $`\delta `$ affects neutrino oscillations (see $`ϵ`$ in eqn.(13)), while all three phases appear in the neutrinoless double-beta decay parameter $`M_{ee}=|\sum _iU_{ei}^2m_i|`$. If $`ϵ\simeq 0`$ and $`|m_2|\simeq |m_1|`$, a large phase difference $`\alpha -\beta \simeq \pi `$ gives a much smaller $`M_{ee}`$ than a small phase difference $`\alpha -\beta \simeq 0`$.
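The effect of the relative phase on $`M_{ee}`$ can be illustrated numerically (equal mass magnitudes and bimaximal mixing assumed for simplicity):

```python
# M_ee = |sum_i U_ei^2 m_i| with eps ~ 0 and c^2 = s^2 = 1/2 (bimaximal mixing).
c2 = s2 = 0.5
m1 = m2 = 5e-2                          # eV, |m1| ~ |m2| (illustrative)

M_ee_aligned = abs(c2 * m1 + s2 * m2)   # alpha - beta ~ 0
M_ee_opposite = abs(c2 * m1 - s2 * m2)  # alpha - beta ~ pi
print(M_ee_aligned, M_ee_opposite)      # 0.05 vs 0.0
```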
## VI Concluding remarks
We have found two leading forms for $`M_R`$, namely eqn.(22) ($`M_R`$ diagonal) and eqn.(33) ($`M_R`$ off-diagonal). The diagonal form generally is around the unification scale (except for the case of VO with full hierarchy, where the scale goes well above the unification scale, towards the Planck scale ), while the off-diagonal form is at the intermediate scale and hence welcome in the nonsupersymmetric model. Moreover, the off-diagonal form is obtained for a particular pattern for the signs of the light neutrino masses, namely $`m_2`$ opposite to both $`m_1`$ and $`m_3`$, with nearly bimaximal mixing. Of course, this pattern gives a smaller $`M_{ee}`$ with respect to the pattern with all $`m_i`$ of the same sign.
From the point of view of effective parameters, the off-diagonal form seems related to some suitable cancellations, but all of them lead to the smallness of entry $`M_{R33}`$ and hence point towards a different underlying theory with respect to the diagonal form, where the largest element is just $`M_{R33}`$. With regard to this, we would like to refer, for example, to the model , where a suitable pattern of horizontal $`U(1)`$ charges gives
$$M_R\left(\begin{array}{ccc}0& \sigma ^2& 1\\ \sigma ^2& \sigma ^2& 0\\ 1& 0& 0\end{array}\right)M_0,$$
with $`\sigma =(m_c/m_t)^{1/2}`$ and $`M_010^{12}`$ GeV.
The author thanks F. Buccella for discussions.
## I Introduction
One approach to quantum Yang-Mills theory on a Riemann surface of genus $`g`$ requires rewriting the Yang-Mills action in terms of the energy of a $`2g`$-tuple of paths in the symmetry group $`G`$. (This assumes $`g\ge 1`$; for $`g=0`$, the energy is that of a based loop in $`G`$.) The energy of such paths appears more recently in Yang-Mills inequalities Sengupta has developed .
Sengupta considers the space of smooth connections, grouped into subspaces by certain requirements on holonomy. For each subspace, there is a loop in $`G`$ whose energy bounds from below the Yang-Mills action on that subspace. For appropriate choices of the requirements on holonomies, this lower bound can be saturated; Yang-Mills connections are precisely those which saturate this bound.
Uhlenbeck has shown that, in two dimensions, the space of connections whose Yang-Mills action is finite contains discontinuous connections. Theorem III.1 below provides a lower bound for the Yang-Mills action on this larger space. It is analogous to Sengupta’s, but in this space the bound can always be saturated. One might then suppose that Yang-Mills connections arise when these saturating connections are also smooth; this is the import of Proposition III.1.
These relations between the Yang-Mills action and the energy of paths may help answer a question raised in Atiyah and Bott’s seminal work on the topology of the moduli space of Yang-Mills connections; namely, does the Yang-Mills action, which they show to be equivariantly perfect, in fact define a Morse stratification? Theorem III.2 describes the correspondence between the critical sets of the Yang-Mills action and those of the energy on the relevant space of paths, for which there is reason to believe the analytic issues are more tractable.
## II The Geometry of $`𝒜/𝒢_m`$
Describing the required energy requires some background on the structure of the quotient $`𝒜/𝒢_m`$ of the space of connections modulo gauge transformations. Here $`𝒜`$ refers to connections with finite total curvature on a given $`G`$-bundle $`P`$ over a Riemann surface $`\mathrm{\Sigma }`$, and $`𝒢_m`$ refers to the space of gauge transformations which are the identity at a specified point $`m\mathrm{\Sigma }`$. What follows is an overview of the essential elements; details are in the references .
Let $`D`$, a regular $`4g`$-gon, be a fundamental domain for $`\mathrm{\Sigma }`$, chosen so that $`m`$ corresponds to the center of $`D`$. The edges making up $`D`$ represent the generators $`\{a_i,b_i\}_{i=1}^g`$ of $`\pi _1(\mathrm{\Sigma })`$, and are identified in pairs, with opposite orientations, as in Figure 1.
Theorem 3.1 of states that $`𝒜/𝒢_m`$ is itself a principal fiber bundle over $`Path^{2g}G`$ with an affine-linear fiber. Here $`Path^{2g}G`$ is the space of $`2g`$-tuples of paths in $`G`$ subject to a single relation on the $`4g`$ endpoint values of the paths. There is an obvious energy function (see Eq 2) on this base space $`Path^{2g}G`$; its critical points are precisely the images of Yang-Mills connections. To understand how this arises, it will suffice to examine the projection $`\xi :𝒜/𝒢_m\to Path^{2g}G`$.
Consider holonomies by a given connection $`A`$ about the following loops in $`\mathrm{\Sigma }`$: Pick polar coordinates $`(r,\theta )`$ on $`D`$ centered at $`m`$. For a given point $`p`$ of the edge $`a_1\subset D`$, the radial path from $`m`$ to that point followed by the radial path back to $`m`$ from the corresponding point $`p^{-1}`$ of $`a_1^{-1}`$ defines a loop in $`\mathrm{\Sigma }`$. See Figure 2.
Relative to a fixed choice of basepoint in the fiber over $`m`$, the holonomy by $`A`$ about this loop determines an element of $`G`$. Now, let the point $`p`$ vary within $`a_1`$. The corresponding holonomies trace out a path $`\alpha _1`$ in $`G`$. Holonomies about radial paths through the points of the other edges $`b_1,a_2,b_2,\mathrm{},a_g,b_g`$ similarly determine paths $`\beta _1,\alpha _2,\beta _2,\mathrm{},\alpha _g,\beta _g`$. Taken together, these define the $`2g`$-tuple $`\stackrel{}{\gamma }_A=(\alpha _1,\beta _1,\mathrm{},\alpha _g,\beta _g)`$. These $`2g`$ paths are not completely independent of each other, however, as the radii to the vertices of $`D`$ each lie on two distinct loops in $`\mathrm{\Sigma }`$ whose holonomies define the endpoint values of distinct paths in $`G`$. In fact, traversing, in the appropriate order, each such radius out to the vertex and back again to $`m`$ gives a certain product of the endpoint values of the paths in $`\stackrel{}{\gamma }_A`$. On the other hand, by construction, the holonomy about this path must be the identity in $`G`$. Equating these gives the relation defining $`Path^{2g}G`$:
$$\alpha _1(0)\beta _1(1)^{-1}\alpha _1(1)^{-1}\beta _1(0)\cdots \alpha _g(0)\beta _g(1)^{-1}\alpha _g(1)^{-1}\beta _g(0)=\mathrm{𝟏}.$$
Define $`\xi ([A])\stackrel{}{\gamma }_A`$. This is well-defined on $`𝒜/𝒢_m`$, since acting on $`A`$ by an element of $`𝒢_m`$ has no effect on $`\stackrel{}{\gamma }_A`$. Clearly, adding to $`A`$ a Lie-algebra-valued one-form $`\tau `$ which vanishes in the radial directions of $`D`$ also has no effect on $`\stackrel{}{\gamma }_A`$. In fact, in $`𝒜/𝒢_m`$, as a bundle over $`Path^{2g}G`$, the fiber over $`\stackrel{}{\gamma }_A`$ is the space $`\{[A+\tau ]:\tau |_{\text{radii}}=0\}`$. This, and the fact that $`\xi `$ is onto, is proven in Theorem 3.1 of .
If the bundle $`P`$ is not topologically trivial, then, as detailed in , its topology is determined by an element $`z`$ of the center of the universal cover $`\widehat{G}`$ of $`G`$. On lifting $`P`$ to a $`\widehat{G}`$-bundle, the space $`Path^{2g}G`$ is replaced by the corresponding space for $`\widehat{G}`$ with the relation $`\alpha _1(0)\beta _1(1)^{-1}\alpha _1(1)^{-1}\beta _1(0)\cdots \alpha _g(0)\beta _g(1)^{-1}\alpha _g(1)^{-1}\beta _g(0)=z`$. Henceforth, though we omit the hats, we assume we are on the lifted bundle with the corresponding relation.
## III The Yang-Mills action
Consider now the restriction of the Yang-Mills action on $`𝒜/𝒢_m`$ to the fiber through $`\stackrel{}{\gamma }_A`$.
$$S([A])=\langle F_A,F_A\rangle ,$$
where the inner product combines the invariant inner product on the Lie algebra, the metric-induced inner product on forms at each point and integration over $`\mathrm{\Sigma }`$. Along the fiber, $`F_{A+\tau }=F_A+D_A\tau `$, since the term quadratic in $`\tau `$ vanishes. Thus,
$$S([A+\tau ])=S([A])+2\langle F_A,D_A\tau \rangle +\langle D_A\tau ,D_A\tau \rangle .$$
Theorem 4.2 of ensures that the requirement $`\langle F_{\stackrel{~}{A}},D_A\tau \rangle =0`$, for every $`\tau `$ vanishing along radii, singles out a unique choice for a continuous connection $`\stackrel{~}{A}`$ to serve as an “origin” in the fiber. Note that $`[\stackrel{~}{A}]`$ defines a section of $`𝒜/𝒢_m`$ over $`Path^{2g}G`$. Relative to this choice of origin,
$$S([\stackrel{~}{A}+\tau ])=S([\stackrel{~}{A}])+\langle D_A\tau ,D_A\tau \rangle .$$
(1)
(For $`\tau `$ of the specified form, $`D_{\stackrel{~}{A}}\tau =D_A\tau `$.) The key point is that $`S([\stackrel{~}{A}])`$ pulls back to the energy of $`\stackrel{}{\gamma }_A`$. This follows from the condition on $`F_{\stackrel{~}{A}}`$, which implies directly that $`F_{\stackrel{~}{A}}`$ is covariantly constant along radii. Thus, in $`S([\stackrel{~}{A}])=\langle F_{\stackrel{~}{A}},F_{\stackrel{~}{A}}\rangle `$, $`F_{\stackrel{~}{A}}`$ may be replaced by its average along the radius. This average, however, by a non-Abelian analog of Stokes’ theorem, or by Polyakov’s formula, is $`\alpha _i^{-1}\dot{\alpha _i}`$ (or $`\beta _i^{-1}\dot{\beta _i}`$) for some $`i`$ depending on the value of $`\theta `$. In fact, for an appropriate choice of parametrization, determined by the area element on $`\mathrm{\Sigma }`$,
$$S([\stackrel{~}{A}])=\frac{1}{2}\underset{i=1}{\overset{2g}{\sum }}\|\dot{\gamma }_i\|^2\equiv E(\stackrel{}{\gamma }_A),$$
(2)
as detailed in Section 5.1 of . Here $`\gamma _i`$ denotes the $`i`$th component of $`\stackrel{}{\gamma }_A`$. For a generic connection, which must be gauge equivalent to $`\stackrel{~}{A}+\tau `$, Eq 1 thus becomes
$$S([\stackrel{~}{A}+\tau ])=E(\stackrel{}{\gamma }_A)+\langle D_A\tau ,D_A\tau \rangle $$
(3)
It leads immediately to a lower bound on the Yang-Mills action on a given fiber:
###### Theorem III.1
For any connection $`A`$ representing an element of the fiber through $`\stackrel{}{\gamma }_A\in Path^{2g}G`$,
$$S([A])\ge \frac{1}{2}\underset{i=1}{\overset{2g}{\sum }}\|\dot{\gamma }_i\|^2,$$
with equality holding iff $`A`$ agrees with the section $`\stackrel{~}{A}`$ (up to gauge transformation).
Proof: This is an immediate consequence of Eq 3, since the second term on the right-hand side is positive semi-definite, and zero iff $`\tau =0`$. ∎
Given this decomposition of the Yang-Mills action, it is easy to see how its critical points correspond directly to critical points of the energy $`E`$.
###### Theorem III.2
The connection $`\stackrel{~}{A}`$ represents a Yang-Mills critical point iff $`\stackrel{}{\gamma }_{\stackrel{~}{A}}`$ is a critical point of the energy $`E`$.
Proof: Suppose $`A=\stackrel{~}{A}+\tau `$ is a Yang-Mills critical point; that is, a point at which $`S([\stackrel{~}{A}+\tau ])`$ is stationary. (There is no loss of generality in omitting a possible gauge transformation on one side of this equation.) By considering just $`\tau `$ of the form $`\tau =t\tau _0`$, for $`t\in R`$, it is clear from Eq 3 that $`\tau =0`$ is a necessary condition for $`A`$ to be a critical point. It then follows that $`\stackrel{}{\gamma }_A`$ must be a critical point of the energy. The converse is immediate. ∎
To relate this picture, in which connections need not be smooth and the energy bound can always be saturated, to Sengupta’s, in which connections must be smooth and the energy bound can only be saturated on the fibers containing Yang-Mills connections, note that in the fibers over critical points of the energy the connection $`\stackrel{~}{A}`$ is smooth.
###### Proposition III.1
If $`\stackrel{}{\gamma }`$ is a critical point of $`E`$, then the corresponding $`\stackrel{~}{A}`$ is smooth.
Proof: A simple calculus of variations computation shows that $`\stackrel{}{\gamma }`$ extremizes $`E`$ iff
$$\frac{\partial }{\partial \theta }\left(\gamma _i^{-1}\dot{\gamma _i}\right)=0.$$
On the other hand, this condition also ensures that the covariantly constant curvatures $`F_{\stackrel{~}{A}}`$, related by the non-Abelian analog of Stokes’ theorem mentioned previously, are continuous at $`m`$. This was the only place $`\stackrel{~}{A}`$ might have failed to be smooth. ∎
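The extremal condition makes $`\gamma _i^{-1}\dot{\gamma _i}`$ constant along each path, i.e. the critical paths are constant-speed one-parameter subgroup segments. For $`G=U(1)`$, where $`\gamma (t)=e^{i\theta (t)}`$ and the energy reduces to $`\frac{1}{2}\dot{\theta }^2`$, this can be checked numerically; the discretization and the particular perturbation below are illustrative, not taken from the paper:

```python
import math

def energy(theta, dt):
    """Discrete path energy E = (1/2) * sum ((dtheta/dt)^2 * dt) for a U(1) path."""
    return 0.5 * sum(((b - a) / dt) ** 2 * dt
                     for a, b in zip(theta, theta[1:]))

n = 200
dt = 1.0 / n
t = [k * dt for k in range(n + 1)]

# Constant-speed path from theta = 0 to theta = pi (gamma^-1 gamma-dot constant):
straight = [math.pi * tk for tk in t]

# Same endpoints, plus a perturbation vanishing at both ends:
wiggly = [math.pi * tk + 0.3 * math.sin(2 * math.pi * tk) for tk in t]

E0, E1 = energy(straight, dt), energy(wiggly, dt)
print(E0, E1)  # the constant-speed path has strictly smaller energy
```

Any fixed-endpoint perturbation raises the energy, consistent with the constant-speed paths being the minimizers among paths with the given endpoints.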
## IV A possible application
Atiyah and Bott suggest equivariant Morse theory might apply to the cohomology of the Yang-Mills moduli space, and, more particularly, their stratification may correspond to the Morse stratification for the Yang-Mills action. With this in mind, they prove the Yang-Mills action is an equivariantly perfect Morse function. However, analytic concerns prevent them from developing the theory more fully, except in genus zero. There Bott and Samelson have shown that $`𝒜/𝒢`$ is topologically equivalent to based loops in $`G`$, and that Morse theory arguments go through for a wide variety of symmetric spaces including these based loops.
The geometric picture of $`𝒜/𝒢_m`$ as an affine-linear bundle shows it is topologically equivalent to its base space $`Path^{2g}G`$. Passing from $`𝒢_m`$ to $`𝒢`$, this becomes $`Path^{2g}G/G`$, where a given element $`g\in G`$ acts adjointly on each path: $`\gamma _i(t)\mapsto g^{-1}\gamma _i(t)g`$. Moreover, Eq 3 says the section $`[\stackrel{~}{A}]`$ pulls the Yang-Mills action back to the energy on $`Path^{2g}G/G`$. Clearly, the Morse theory for this base space, if such exists, would be the Morse theory for $`𝒜/𝒢`$. Furthermore, the generality of Bott and Samelson’s results is nearly sufficient to apply them directly to $`Path^{2g}G/G`$. The endpoint condition, however, requires careful treatment, which we defer to future work.
Acknowledgements: The author is grateful to Ambar Sengupta for discussions of his work on the energy inequality and to Stephen Sawin for discussions of this and many other aspects of two-dimensional Yang-Mills.
# Radial Velocity Studies of Close Binary Stars. III
Based on the data obtained at the David Dunlap Observatory, University of Toronto.
## 1 INTRODUCTION
This paper is a continuation of the radial velocity studies of close binary stars, Lu & Rucinski (1999) (Paper I) and Rucinski & Lu (1999) (Paper II). The main goals and motivations are described in these papers. In short, we attempt to obtain modern radial velocity data for close binary systems which are accessible to 1.8 meter class telescopes at a medium spectral resolution of about R = 10,000 – 15,000. Selection of the objects is quasi-random in the sense that we started with the shortest-period contact binaries. The intention is to publish the results in groups of ten systems, as soon as reasonable orbital elements can be obtained from measurements evenly distributed in orbital phases. We are currently observing a few dozen such systems. The rate of progress may slow down as we move into systems with progressively longer periods.
This paper is structured in the same way as Papers I and II in that it consists of two tables containing the radial velocity measurements (Table 1) and their sine-curve solutions (Table 2) and of brief summaries of previous studies for individual systems. The reader is referred to the previous papers for technical details of the program. In short, all observations described here were made with the 1.88 meter telescope of the David Dunlap Observatory (DDO) of the University of Toronto. The Cassegrain spectrograph, giving a scale of 0.2 Å/pixel, or about 12 km/s/pixel, was used; the pixel size of the CCD was 19 $`\mu `$m. A relatively wide spectrograph slit of 300 $`\mu `$m corresponded to an angular size on the sky of 1.8 arcsec and a projected width of 4 pixels. The spectra were centered at 5185 Å with a spectral coverage of 210 Å. The exposure times were typically 10 – 15 minutes long.
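The quoted velocity scale is just the Doppler relation $`\mathrm{\Delta }v=c\mathrm{\Delta }\lambda /\lambda `$ applied at the central wavelength; a quick check:

```python
C_KM_S = 299792.458  # speed of light in km/s

def pixel_velocity_scale(dispersion_A_per_px, wavelength_A):
    """Doppler velocity per pixel: dv = c * dlambda / lambda."""
    return C_KM_S * dispersion_A_per_px / wavelength_A

dv = pixel_velocity_scale(0.2, 5185.0)
print(round(dv, 2))  # ~11.56 km/s per pixel, i.e. "about 12 km/s/pixel"
```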
The data in Table 2 are organized in the same manner as in Paper II. The table provides information about the relation between the spectroscopically observed epoch of the primary eclipse T<sub>0</sub> and the recent photometric determinations, in the form of the (O–C) deviations for the number of elapsed periods E. It also contains, in the first column below the star name, our new spectral classifications of the program objects. The classification spectra were obtained with a grating giving a dispersion of 0.62 Å/pixel in the range 3850 – 4450 Å. The program-star spectra were “interpolated” between spectra of standard stars in terms of relative strengths of lines known as reliable classification criteria.
In the radial-velocity solutions of the orbits, the data have been assigned weights on the basis of our ability to resolve the components and to fit independent Gaussians to each of the broadening-function peaks. A weight equal to zero in Table 1 means that an observation was not used in our orbital solutions; however, these observations may be utilized in detailed modeling of broadening functions, if such are undertaken for the available material. The full-weight points are marked in the figures by filled symbols while half-weight points are marked by open symbols. Phases of the observations with zero weights are shown by short markers in the lower parts of the figures; they were usually obtained close to the phases of orbital conjunctions.
All systems discussed in this paper but one have been observed for radial velocities for the first time. The only exception is RZ Dra, for which an SB1 orbit solution was obtained by Struve (1946). The solutions presented in Table 2 for the four circular-orbit parameters, $`\gamma `$, K<sub>1</sub>, K<sub>2</sub> and T<sub>0</sub>, have been obtained iteratively, with fixed values of the orbital period. First, two independent least-squares solutions for each star were made using the same programs as described in Papers I and II. Then, one combined solution for both amplitudes and the common $`\gamma `$ was made with the fixed mean value of T<sub>0</sub>. Next, differential corrections for $`\gamma `$, K<sub>1</sub>, K<sub>2</sub>, and T<sub>0</sub> were determined, providing the best values of the four parameters. These values are given in Table 2. The corrections to $`\gamma `$, K<sub>1</sub>, K<sub>2</sub>, and T<sub>0</sub> were finally subjected to a “bootstrap” process (several thousand solutions with randomly drawn data with repetitions) to provide the median values and ranges of the parameters. We have adopted them as measures of the uncertainty of the parameters in Table 2.
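The bootstrap step can be sketched as follows. The model below is the simplest one-component circular-orbit sine curve, $`V(\varphi )=\gamma +K\mathrm{sin}2\pi \varphi `$, fitted to synthetic data; it is only an illustration of the resampling-with-repetitions idea, not a reproduction of the actual programs of Papers I and II (which solve for both amplitudes and T<sub>0</sub>):

```python
import math
import random

def fit_sine(phases, vels):
    """Least-squares fit of V = gamma + K*sin(2*pi*phase), linear in gamma and K."""
    n = len(phases)
    s = [math.sin(2 * math.pi * p) for p in phases]
    Sx, Sy = sum(s), sum(vels)
    Sxx = sum(x * x for x in s)
    Sxy = sum(x * y for x, y in zip(s, vels))
    K = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
    gamma = (Sy - K * Sx) / n
    return gamma, K

random.seed(1)
# Synthetic circular orbit: gamma = -10 km/s, K = 120 km/s, 3 km/s noise.
phases = [i / 40 for i in range(40)]
vels = [-10 + 120 * math.sin(2 * math.pi * p) + random.gauss(0, 3) for p in phases]

# Bootstrap: refit on data resampled with repetitions; collect the K estimates.
boot_K = []
for _ in range(2000):
    idx = [random.randrange(len(phases)) for _ in range(len(phases))]
    _, K = fit_sine([phases[i] for i in idx], [vels[i] for i in idx])
    boot_K.append(K)
boot_K.sort()
K_med = boot_K[len(boot_K) // 2]
print(K_med)  # median K, close to the input 120 km/s
```

The spread of `boot_K` (e.g. its central 68% range) plays the role of the quoted parameter uncertainties.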
Throughout the paper, unless written otherwise, we express standard mean errors in terms of the last quoted digits; e.g., the number 0.349(29) should be interpreted as $`0.349\pm 0.029`$.
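A small parser makes this convention explicit (a sketch; it handles only the plain decimal form used here):

```python
import re

def parse_compact_error(s):
    """Parse '0.349(29)' -> (0.349, 0.029): the parenthesized digits are the
    standard mean error in units of the last quoted decimal places."""
    m = re.fullmatch(r"(-?\d+)\.(\d+)\((\d+)\)", s)
    if not m:
        raise ValueError(s)
    whole, frac, err = m.groups()
    value = float(f"{whole}.{frac}")
    error = int(err) * 10.0 ** (-len(frac))
    return value, error

print(parse_compact_error("0.349(29)"))  # (0.349, 0.029)
```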
## 2 RESULTS FOR INDIVIDUAL SYSTEMS
### 2.1 CN And
The variability of CN And was discovered by Hoffmeister (1949). Modern light curves were presented by Kaluzny (1983), Refert et al. (1985), Keskin (1989) and Samec et al. (1998). The light curve is of the EB-type, with unequally deep eclipses. From the eclipse timing, there exists a clear indication of a continuous period decrease. We used the determination of the primary eclipse time $`T_0`$ by Samec et al. (1998), which is almost contemporary with our observations. The radial velocity curve of the secondary component (see Figure 1) shows some asymmetry in the first half of the orbital period, which may explain why our $`T_0`$ is shifted by $`0.014`$ day to earlier phases. From the radial-velocity point of view, the system looks like a typical A-type contact binary, but it can also be a very close semi-detached system.
Two groups of investigators attempted solutions of the light curves for orbital parameters, disregarding lack of any information on the spectroscopic mass-ratio. Our $`q_{sp}=0.39\pm 0.03`$ differs rather drastically from these photometric estimates. Kaluzny (1983) expected the mass-ratio to be within $`0.55<q_{ph}<0.85`$. Refert et al. (1985) found the most likely interval to be $`0.5<q_{ph}<0.8`$; they saw indications of a strong contact. The matter of the contact or semi-detached nature of the system should be re-visited in view of the discrepancy between our $`q_{sp}`$ and the previous estimates of $`q_{ph}`$.
The photometric data at maximum light were published by Kaluzny (1983): $`V=9.62`$, $`(B-V)=0.45`$. The system is apparently a moderately strong X-ray source (Shaw et al. (1996)).
### 2.2 HV Aqr
Variability of HV Aqr was discovered relatively recently by Hutton (1992). The type of variability was identified by Schirmer (1992) and a preliminary, but thorough study was presented by Robb (1992). The system shows total eclipses at the shallower of the two minima, so it is definitely an A-type one. This circumstance is important to our results because both available ephemerides of Schirmer and Robb do not predict the primary minimum correctly and we are not sure how our radial-velocity observations relate to the photometric observations: Our $`T_0`$ falls at phase 0.47 for the former and at 0.70 for the latter. Probably the values of the orbital period are slightly incorrect for both ephemerides. We used the period of 0.374460 day, following Schirmer (1992); if this is incorrect, then part of the scatter in our data may be due to the incorrect phasing over the span of 450 days of our observations.
The mass-ratio of HV Aqr is small, $`q_{sp}=0.145\pm 0.05`$. It agrees perfectly with the photometric solution of Robb (1992) of $`q_{ph}=0.146`$ which confirms the validity of the photometric approach for totally eclipsing systems, in contrast with the typical lack of agreement for partially eclipsing systems.
Our spectral type of F5V agrees with the observed $`(B-V)=0.63`$ – $`0.78`$ for a relatively large reddening of $`E_{B-V}=0.08`$ expected by Robb (1992). The system is bright with $`V=9.80`$, and is potentially one of the best for a combined spectroscopic – photometric solution.
### 2.3 AO Cam
Variability of AO Cam was discovered by Hoffmeister (1966). Milone et al. (1982) analyzed their photometric observations in a simplified way. A subsequent photometric solution by Evans et al. (1985) and even the sophisticated work of Barone et al. (1993) did not bring much progress, in view of the partial eclipses and the total lack of any information on the mass-ratio. Barone et al. (1993) stated that AO Cam is definitely a W-type system; they estimated the photometric mass-ratio at $`q_{ph}=1.71\pm 0.04`$ or $`1/q=0.585`$. This value is very different from our spectroscopic result, $`q_{sp}=0.413\pm 0.011`$, but we do confirm the W-type of the system.
AO Cam does not have any $`UBV`$ data. Our spectral classification is G0V. In view of its considerable brightness of $`V=9.50`$ at maxima, it appears to be a somewhat neglected system.
To predict the moment of the primary eclipse $`T_0`$, we used the ephemeris based on the period of Evans et al. (1985) and the observations of the secondary eclipses by Faulkner (1986). The (O–C) deviation is relatively small in spite of the many orbital periods elapsed since Faulkner’s observations.
### 2.4 YY CrB
YY CrB is one of the variable stars discovered by the Hipparcos satellite mission (ESA (1997)). The only available light curve comes from this satellite; the star has not been studied in any other way. There are no obvious indications of total eclipses, but the coverage of the eclipses is relatively poor, so it is possible that the minimum identified as the primary is not the deeper one. We used the primary minimum ephemeris of Hipparcos; this results in a contact system of type A, that is, with the more massive and hotter component eclipsed at this minimum.
The system is bright, $`V_{max}=8.64`$, and its Hipparcos parallax is relatively large and well-determined, $`p=11.36\pm 0.85`$ milli-arcsec (mas), giving a good estimate of the absolute magnitude of the system, $`M_V=3.92\pm 0.16`$. With $`(B-V)=0.62\pm 0.02`$ from the Hipparcos database, the $`M_V(\mathrm{log}P,B-V)`$ calibration (Rucinski & Duerbeck (1997)) gives $`M_V^{cal}=3.88`$, so the agreement is perfect. Our spectral classification of F8V agrees with the $`(B-V)`$ color index.
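The quoted absolute magnitude follows from the parallax via $`M_V=V-5\mathrm{log}(100/p)`$ with $`p`$ in mas, and its error from first-order propagation; a check with the YY CrB numbers (reddening neglected):

```python
import math

def abs_mag(V, p_mas, sigma_p_mas):
    """Absolute magnitude from apparent magnitude and parallax (mas):
    M = V - 5*log10(100/p), with sigma_M = (5/ln 10) * sigma_p / p."""
    M = V - 5.0 * math.log10(100.0 / p_mas)
    sigma_M = (5.0 / math.log(10.0)) * sigma_p_mas / p_mas
    return M, sigma_M

M, sig = abs_mag(8.64, 11.36, 0.85)
print(round(M, 2), round(sig, 2))  # 3.92 0.16, matching the quoted value
```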
With the very good parallax and the new radial velocity data, YY CrB is one of the systems which have a great potential of providing an excellent combined photometric and spectroscopic solution.
### 2.5 FU Dra
FU Dra is another Hipparcos discovery. Again, we assumed the primary eclipse identification and its ephemeris as in the Hipparcos publication (ESA (1997)). With these assumptions, the system appears to belong to the W-type systems. The mass-ratio is somewhat small for such systems, $`q_{sp}=0.25\pm 0.03`$.
The system is bright, $`V_{max}=10.55`$, but its Hipparcos parallax is only moderately well determined, $`p=6.25\pm 1.09`$ mas, resulting in the absolute magnitude $`M_V=4.53\pm 0.38`$. The system was measured by the Hipparcos project to have a relatively large tangential motion, $`\mu _{RA}=255.85\pm 1.18`$ mas and $`\mu _{dec}=16.61\pm 1.18`$ mas. The large proper motion had been noticed before (Lee 1984a , Lee 1984b ), but the spatial velocity was then estimated assuming an uncertain spectroscopic parallax. Using the Hipparcos data one obtains the two tangential components, $`V_{RA}=194`$ km s<sup>-1</sup> and $`V_{dec}=13`$ km s<sup>-1</sup>. The radial velocity $`\gamma =11`$ km s<sup>-1</sup> is moderate, so that only the RA component of the spatial tangential velocity is very large.
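The tangential components quoted above follow from the standard relation $`V_t=4.74\mu /p`$ (in km s<sup>-1</sup>, with $`\mu `$ in mas yr<sup>-1</sup> and $`p`$ in mas); a check with the FU Dra values:

```python
def tangential_velocity(mu_mas_per_yr, parallax_mas):
    """V_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc] = 4.74 * mu_mas / p_mas."""
    return 4.74 * mu_mas_per_yr / parallax_mas

v_ra = tangential_velocity(255.85, 6.25)
v_dec = tangential_velocity(16.61, 6.25)
print(round(v_ra), round(v_dec))  # 194 13, as quoted for FU Dra
```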
### 2.6 RZ Dra
RZ Dra has been frequently observed photometrically since its discovery by Ceraski (1907). The most recent extensive analysis of the system was by Kreiner et al. (1994). It utilized the only extant set of spectroscopic observations, by Struve (1946), which led to a detection of one, brighter component. Our data confirm the primary amplitude $`K_1=100`$ km s<sup>-1</sup>, but we have also been able to detect the secondary component. The system consists of components considerably differing in effective temperature, and thus is classified as an EB-type binary. Spectroscopically, one sees the more massive component eclipsed at the deeper minimum, so it can be an Algol semi-detached binary or an A-type contact system in poor thermal contact. The analysis of Kreiner et al. (1994) was made under the assumption of the semi-detached configuration. They found the photometric mass-ratio $`q_{ph}\simeq 0.45`$, but some solutions suggested $`q_{ph}\simeq 0.55`$. Our relatively well defined solution for both components gives $`q_{sp}=0.40\pm 0.04`$. The same investigation of Kreiner et al. provided the starting value of $`T_0`$. In spite of indications that the period may be variable and the many epochs that elapsed since the study by Kreiner et al., the observed shift in the primary eclipse time is relatively small.
The spectral type that we observed, A6V, most probably applies to the primary component, which is much hotter than its companion (we have not attempted to separate the spectra in terms of spectral types). The Hipparcos parallax of the system, $`p=1.81\pm 1.01`$ mas, is too poor for a more extensive analysis of the absolute magnitude of the system. RZ Dra appears to be a relatively short-period (0.55 day) semi-detached Algol with both components accessible to spectroscopic observations.
### 2.7 UX Eri
UX Eri is a contact binary which has been extensively photometrically observed since its discovery by Soloviev (1937). The first modern contact-model solution, which gave surprisingly good agreement with our spectroscopic mass-ratio, $`q_{sp}=0.37\pm 0.02`$, was presented by Mauder (1972) almost a quarter of a century ago; it utilized the light curve of Binnendijk (1967) and arrived at $`q_{ph}=0.42`$.
Our observations have been supplemented by four observations obtained at the same time by Dr. Hilmar Duerbeck at the European Southern Observatory with a 1.52m telescope and a Cassegrain spectrograph. As a starting point of our solution, we used the moment of the primary minimum $`T_0`$ as predicted on the basis of observations by Agerer & Huebscher 1998a and Agerer & Huebscher 1998b; both actually took place slightly after our spectroscopic observations.
UX Eri appears to be an A-type contact binary. The $`(B-V)`$ color index is not available for this star (it has also not been measured by Hipparcos), so we have not been able to relate it to our spectral classification of F9V. The Hipparcos parallax $`p=6.57\pm 2.84`$ mas provides a relatively poor estimate of the absolute magnitude for the maximum brightness of $`V_{max}=10.59`$: $`M_V=4.7\pm 0.9`$.
### 2.8 RT LMi
RT LMi was discovered as a variable star by Hoffmeister (1949). The most recent analysis, from which we took the time of the primary eclipse $`T_0`$, is by Niarchos et al. (1994). The system has been characterized in this study as a W-type contact binary of spectral type G0V with photospheric spots. Our spectral classification is based on poor spectra, but they indicate a slightly earlier spectral type, of approximately F7V. As Niarchos et al. (1994) pointed out, the minima were observed to be of almost equal depth. Our spectroscopic orbit gives an A-type, so the minimum selected by the authors as the primary corresponds to the eclipse of the more massive component (the temporal shift is very small in spite of the many elapsed epochs). The photometric solution of Niarchos et al. (1994) assuming the W-type appears therefore to be invalid. The system is otherwise quite an inconspicuous contact binary. It lacks even the most essential photometric data. The Simbad database gives $`V_{max}=11.4`$, but the source of this value is not cited.
### 2.9 V753 Mon
V753 Mon is a new discovery of the Hipparcos mission and is probably one of the most interesting new close binaries recently discovered. V753 Mon has not been studied before. The only published photometric data come from the $`uvby`$ survey of Olsen (1994), who found $`(b-y)=0.214\pm 0.008`$, $`m_1=0.160\pm 0.003`$ and $`c_1=0.693\pm 0.023`$. The $`(b-y)`$ color index corresponds to $`(B-V)\simeq 0.34`$ or the spectral type F2V. Our spectral classification is A8V, which does not agree with these estimates and with $`(B-V)=0.36`$ in the Hipparcos catalog, unless there is considerable reddening of about $`E_{B-V}\simeq 0.12`$. However, the early type would be in better accord with the large masses indicated by the radial velocity solution (see below). The brightness data in the Olsen measurements indicated quite appreciable variability, $`V=8.46\pm 0.34`$, but this indication has apparently been overlooked. The Hipparcos mission database treats it as a new discovery.
Two features distinguish V753 Mon as a particularly interesting system: the mass-ratio close to unity and the large amplitudes of radial velocity variations indicating a large total mass. The mass-ratio $`q_{sp}=0.970\pm 0.009`$ is the closest to unity of all known contact binaries. Note that the currently largest mass-ratios are $`q=0.80`$ for VZ Psc (Hrivnak et al. (1995)) and SW Lac (Zhai & Lu (1989)) and $`q=0.84`$ for OO Aql (Hrivnak (1989)). Since contact systems with $`q\simeq 1`$ are not observed, but are expected to experience strong favorable observational biases for their detection and ease of analysis, it is generally thought that contact configurations avoid this particular mass-ratio. The summed amplitudes of the radial velocity variations for V753 Mon give the total mass of the system, $`(M_1+M_2)sin^3i=2.93\pm 0.06`$ (in solar units). This is in perfect agreement with the expected masses of two main-sequence stars of the spectral type F2V seen on an orbit exactly perpendicular to the plane of the sky. The light variation is about 0.52 mag., in place of the expected about 1.0 mag. for a contact system with $`q\simeq 1`$; therefore, the total mass for $`i<90^{\circ }`$ may turn out to be substantially larger. For the spectral type estimated by us the individual masses should be close to $`1.7M_{\odot }`$.
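The total mass follows from the standard spectroscopic relation for a circular double-lined orbit, $`(M_1+M_2)\mathrm{sin}^3i=1.036\times 10^{-7}(K_1+K_2)^3P`$ in solar masses, with $`K_{1,2}`$ in km s<sup>-1</sup> and $`P`$ in days. A sketch with placeholder amplitudes and period — these are NOT the V753 Mon solution values, which belong to the orbital solution and are not reproduced here:

```python
MSUN_COEFF = 1.036e-7  # (M1+M2) sin^3 i in Msun, for K in km/s and P in days

def total_mass_sin3i(K1_kms, K2_kms, P_days, ecc=0.0):
    """(M1 + M2) sin^3 i for a double-lined spectroscopic binary, in Msun."""
    return MSUN_COEFF * (1.0 - ecc ** 2) ** 1.5 * (K1_kms + K2_kms) ** 3 * P_days

# Placeholder (hypothetical) amplitudes and period:
m_tot = total_mass_sin3i(150.0, 150.0, 1.0)
print(round(m_tot, 2))  # ~2.80 Msun for K1 = K2 = 150 km/s and P = 1 day
```

The cubic dependence on $`K_1+K_2`$ is why large, well-determined amplitudes translate into a precise total mass.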
Our radial velocity data show that, with the ephemeris based on the Hipparcos data, the system belongs to the W-type contact systems. Apparently, the eclipses are of almost the same depth, as expected for $`q\simeq 1`$. However, the light curve from Hipparcos is poorly covered around the secondary minimum, so the identification of the eclipses is uncertain. Obviously, the distinction between A-type and W-type systems becomes immaterial for $`q\simeq 1`$.
The system is begging for a new light curve and an extensive analysis, not only because of its unusual properties, but also because it is bright, $`V_{max}=8.34`$, and has a moderately well determined Hipparcos parallax: $`p=5.23\pm 1.04`$ mas, resulting in $`M_V=1.93\pm 0.43`$. This is in perfect agreement with the $`M_V(\mathrm{log}P,B-V)`$ calibration, which gives $`M_V^{cal}=1.90`$ for the assumed $`(B-V)=0.34`$. The agreement would not be that good if the color index were smaller: for, say, 0.22 one would obtain $`M_V^{cal}=1.54`$, which is still within the uncertainty of the parallax. Further investigations of V753 Mon will therefore contribute to the absolute-magnitude calibration, which is only moderately well defined for contact binaries with periods longer than about 0.5 day (Rucinski & Duerbeck (1997)).
### 2.10 OU Ser
OU Ser is the fourth Hipparcos mission discovery in this group of systems. The light curve shows almost equally deep minima. With the Hipparcos ephemeris, the system appears to be an A-type one with a small mass ratio $`q_{sp}=0.173\pm 0.017`$. Our spectral classification of the system indicates the spectral type F9/G0V.
The distinguishing properties of OU Ser in the Hipparcos database are its large proper motion and a well measured parallax. The tangential components of the proper motion are $`\mu _{RA}=387.5\pm 0.9`$ mas and $`\mu _{dec}=2.8\pm 0.8`$ mas. With the parallax of $`p=17.3\pm 1.0`$ mas this translates into the spatial components $`V_{RA}=106`$ km s<sup>-1</sup> and $`V_{dec}=1`$ km s<sup>-1</sup>. The mean radial velocity of the system is $`\gamma =64.08\pm 0.41`$ km s<sup>-1</sup>. Thus, the RA and radial velocity components indicate a high-velocity star.
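The conversion from proper motion and parallax to tangential velocity used here follows the standard relation $`v_t=4.74\mu /p`$ (km s<sup>-1</sup>, with $`\mu `$ in arcsec yr<sup>-1</sup> and $`p`$ in arcsec); a minimal sketch, assuming the quoted proper motion is per year:

```python
def v_tan(mu_mas_yr, parallax_mas):
    """Tangential velocity in km/s from proper motion and parallax.

    The standard conversion is v_t = 4.74 * mu["/yr] / p["]; the
    milliarcsecond units cancel in the ratio, so both arguments
    may be given in mas.
    """
    return 4.74 * mu_mas_yr / parallax_mas

# OU Ser numbers quoted above: mu_RA = 387.5 mas/yr, p = 17.3 mas
v_ra = v_tan(387.5, 17.3)   # ~106 km/s, matching the value in the text
```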
The large proper motion of the star had been the reason for its inclusion in the survey by Carney et al. (1994). They also noted the broad lines indicating short-period binarity and the possibility of light variations. Their photometric data $`V=8.27`$, $`(B-V)=0.62`$ and $`(U-B)=0.08`$ agree with our spectral classification, F9/G0V. The $`uvby`$ survey of Olsen (1994) suggests a slightly larger $`(B-V)\simeq 0.66`$ on the basis of $`(b-y)=0.411\pm 0.003`$, hence a spectral type around G1/2V. The difference in the classification may be due to the apparently low metallicity of the system, as judged by its low index $`m_1=0.168\pm 0.004`$ (provided this index is not confused by any chromospheric activity). The other data of Olsen are $`V=8.278\pm 0.005`$ and $`c_1=0.281\pm 0.006`$.
Assuming $`V_{max}=8.25`$ and the parallax $`p=17.3\pm 1.0`$ mas, one obtains $`M_V=4.44\pm 0.12`$. This again agrees very well with the absolute magnitude derived from the $`M_V(\mathrm{log}P,B-V)`$ calibration of Rucinski & Duerbeck (1997): for $`(B-V)=0.62`$, $`M_V^{cal}=4.32`$, while for $`(B-V)=0.66`$, $`M_V^{cal}=4.46`$.
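Both of the parallax-based absolute magnitudes quoted in this paper follow from the distance modulus, $`M_V=V+5\mathrm{log}p(\mathrm{arcsec})+5`$; a quick check (no extinction correction, since none is applied in the text):

```python
import math

def abs_mag(V, parallax_mas):
    """Absolute magnitude from apparent magnitude and trigonometric parallax.

    M_V = V + 5 log10(p["]) + 5, with the parallax given here in
    milliarcseconds.
    """
    return V + 5.0 * math.log10(parallax_mas / 1000.0) + 5.0

# V753 Mon: V_max = 8.34, p = 5.23 mas  ->  M_V ~ 1.93
# OU Ser:   V_max = 8.25, p = 17.3 mas  ->  M_V ~ 4.44
mv_v753 = abs_mag(8.34, 5.23)
mv_ouser = abs_mag(8.25, 17.3)
```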
With the excellent parallax data and its properties of a high velocity star, OU Ser deserves a combined photometric – spectroscopic solution.
## 3 SUMMARY
This paper presents radial velocity data for the third group of ten close binary systems that we observed at the David Dunlap Observatory. All but RZ Dra (which had SB1 radial velocity data) have never been observed spectroscopically; all ten are binaries with both components clearly detected, so that they can be called SB2. All systems but CN And and RZ Dra, which may be very close semi-detached systems, are contact binaries. We describe special features of the individual systems in Section 2. We note that again about half of the systems are A-type contact binaries; the likely reasons why we prefer them over the W-type systems in our randomly drawn sample are given in the Conclusions to Paper II. The two binaries observed as EB2 systems, CN And and RZ Dra, are most probably semi-detached binaries.
We do not give the calculated values of $`(M_1+M_2)\mathrm{sin}^3i=1.0385\times 10^{-7}(K_1+K_2)^3P(\mathrm{day})M_{\odot }`$ because in most cases the inclination angles are either unknown or not trustworthy. However, one case is very interesting here: the total mass of the components of the system V753 Mon is very large, $`M_1+M_2>2.93M_{\odot }`$. Since such large velocity amplitudes are a rare phenomenon in the world of contact systems, this system requires the special attention of observers. The binary is also unique in having its mass ratio exceptionally close to unity, $`q_{sp}=0.970\pm 0.009`$. Two other systems discovered by the Hipparcos mission, YY CrB and OU Ser, are also very important and promise excellent combined solutions. It is important that both are high-velocity stars and have excellent parallaxes. And, finally, the recently discovered system HV Aqr offers an excellent solution in view of the total, well-defined eclipses and very good radial velocity data.
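The Keplerian mass relation quoted above is simple to evaluate; a minimal sketch (the amplitudes in the example are hypothetical round numbers for illustration, not measured values from this paper):

```python
def total_mass_sin3i(K1, K2, P_days):
    """(M1 + M2) sin^3 i in solar masses from the SB2 velocity amplitudes
    (km/s) and the orbital period (days), using the constant quoted above."""
    return 1.0385e-7 * (K1 + K2) ** 3 * P_days

# hypothetical amplitudes: K1 = K2 = 100 km/s, P = 1 day
example = total_mass_sin3i(100.0, 100.0, 1.0)   # 1.0385e-7 * 200^3 ~ 0.83 M_sun
```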
The authors would like to thank Jim Thomson for help with the observations and Hilmar Duerbeck for permission to use his observations of UX Eri. This research has made use of the SIMBAD database, operated at the CDS, Strasbourg, France, and accessible through the Canadian Astronomy Data Centre, which is operated by the Herzberg Institute of Astrophysics, National Research Council of Canada.
# MPI-PhT/2000-09 Either neutralino dark matter or cuspy dark halos
## Abstract
We show that if the neutralino in the minimal supersymmetric standard model is the dark matter in our galaxy, there cannot be a dark matter cusp extending to the galactic center. Conversely, if a dark matter cusp extends to the galactic center, the neutralino cannot be the dark matter in our galaxy. We obtain these results considering the synchrotron emission from neutralino annihilations around the black hole at the galactic center.
The composition of dark matter is one of the major issues in cosmology. A popular candidate for non-baryonic cold dark matter is the lightest neutralino appearing in a large class of supersymmetric models . In a wide range of supersymmetric parameter space, relic neutralinos from the Big Bang are in principle abundant enough to account for the dark matter in our galactic halo .
A generic prediction of cold dark matter models is that dark matter halos should have steep central cusps, meaning that their density rises as $`r^{-\gamma }`$ towards the center. Semi-analytical calculations find a cusp slope $`\gamma `$ between $`1`$ and 2 . Simulations find a slope $`\gamma `$ ranging from 0.3 to 1 to 1.5 . It is unclear if dark matter profiles in real galaxies and galaxy clusters have a central cusp or a constant density core.
There is mounting evidence that the non-thermal radio source Sgr A at the galactic center is a black hole of mass $`M3\times 10^6M_{}`$. This inference is based on the large proper motion of nearby stars , the spectrum of Sgr A (e.g. ), and its low proper motion . It is difficult to explain these data without a black hole .
The black hole at the galactic center modifies the distribution of dark matter in its surroundings , creating a high density dark matter region called the spike – to distinguish it from the above mentioned cusp. Signals from particle dark matter annihilation in the spike may be used to discriminate between a central cusp and a central core. With a central cusp, the annihilation signals from the galactic center increase by many orders of magnitude. With a central core, the annihilation signals do not increase significantly.
Stellar winds are observed to pervade the inner parsec of the galaxy , and are supposed to feed the central black hole (e.g. ). These winds carry a magnetic field whose measured intensity is a few milligauss at a distance of $`5\mathrm{p}\mathrm{c}`$ from the galactic center . The magnetic field intensity can rise to a few kilogauss at the Schwarzschild radius of the black hole in some accretion models for Sgr A .
In this letter we examine the radio emission from neutralino dark matter annihilation in the central spike. (Previous studies of radio emission from neutralino annihilation at the galactic center have considered an $`r^{-1.8}`$ cusp but no spike.) Radio emission is due to synchrotron radiation from annihilation electrons and positrons in the magnetic field around Sgr A. Comparing the radio emission from the neutralino spike with the measured Sgr A spectrum, we find that neutralino dark matter in the minimal supersymmetric standard model is incompatible with a dark matter cusp extending to the galactic center.
There are two ways to interpret our results. If we believe that there is a dark matter cusp extending to the center of our galaxy, we can exclude the neutralino as a dark matter candidate. Conversely, if we believe that dark matter is the lightest neutralino, we can exclude that a dark matter cusp extends to the center of the galaxy.
Dark matter candidate. We examine the lightest neutralino in the minimal supersymmetric standard model. This model provides a well-defined calculational framework, but contains at least 106 yet-unmeasured parameters. Most of them control details of the squark and slepton sectors, and are usually disregarded in neutralino dark matter studies (cf. ). So, following Bergström and Gondolo , we restrict the number of parameters to 7. Out of the database of points in parameter space built in refs. , we use the 35121 points in which the neutralino is a good cold dark matter candidate, in the sense that its relic density satisfies $`0.025<\mathrm{\Omega }_\chi h^2<1`$. The upper limit comes from the age of the Universe, the lower one from requiring that neutralinos are a major fraction of galactic dark halos. Present understanding of the matter density in the universe (e.g. ) suggests a narrower range $`0.08<\mathrm{\Omega }_\chi h^2<0.18`$, but we conservatively use the broader range.
Spike profile. We summarize the results of ref. for the spike profile. We assume the cusp has density profile
$$\rho _{\mathrm{cusp}}=\rho _D\left(\frac{r}{D}\right)^{-\gamma },$$
(1)
with $`\rho _D=0.24\mathrm{GeV}/c^2/\mathrm{cm}^3`$ the density at the reference point $`D=8.5\mathrm{kpc}`$, the Sun location (this is a conservative value for $`\rho _D`$, see ). Then within a central region of radius $`R_{\mathrm{sp}}=\alpha _\gamma D\left(M/\rho _DD^3\right)^{1/(3-\gamma )},`$ where $`\alpha _\gamma `$ is given in ref. and $`M=(2.6\pm 0.2)\times 10^6M_{\odot }`$ is the mass of the central black hole, the dark matter density is modified to
$$\rho _{\mathrm{sp}}=\frac{\rho ^{}(r)\rho _\mathrm{c}}{\rho ^{}(r)+\rho _\mathrm{c}}.$$
(2)
Here $`\rho _\mathrm{c}=m_\chi /(\sigma vt_{\mathrm{bh}})`$, where $`t_{\mathrm{bh}}`$ is the age of the black hole (conservatively $`10^{10}`$ yr), $`m_\chi `$ is the mass of the neutralino, and $`\sigma v`$ is the neutralino–neutralino annihilation cross section times relative velocity (notice that for neutralinos at the galactic center $`\sigma v`$ is independent of $`v`$). Furthermore,
$$\rho ^{}(r)=\rho _Rg(r)\left(\frac{R_{\mathrm{sp}}}{r}\right)^{\gamma _{\mathrm{sp}}},$$
(3)
with $`g(r)=\left[1-(8GM)/(rc^2)\right]^3`$ accounting for dark matter capture into the black hole, $`\gamma _{\mathrm{sp}}=(9-2\gamma )/(4-\gamma )`$, and $`\rho _R=\rho _D\left(R_{\mathrm{sp}}/D\right)^{-\gamma }`$.
Annihilation rate. The total number of neutralino annihilations per second in the spike follows from the density profile as
$$\mathrm{\Gamma }=\frac{\sigma v}{m^2}\int \rho _{\mathrm{sp}}^24\pi r^2\,dr=\frac{4\pi \sigma v\rho _{\mathrm{in}}^2R_{\mathrm{in}}^3}{m^2},$$
(4)
with $`\rho _{\mathrm{in}}=\rho _{\mathrm{sp}}(R_{\mathrm{in}})`$ and $`R_{\mathrm{in}}=1.5\left[(20R_\mathrm{S})^2+R_\mathrm{c}^2\right]^{1/2}`$. The latter expression is a good approximation (6%) to the numerical integration of the annihilation profile.
Most of the annihilations occur either close to the black hole, at $`13R_\mathrm{S}\simeq 3\times 10^{-6}\,\mathrm{pc}`$ (where $`R_\mathrm{S}=2GM/c^2`$ is the Schwarzschild radius), or around the spike core radius $`R_\mathrm{c}=R_{\mathrm{sp}}\left(\rho _R/\rho _\mathrm{c}\right)^{1/\gamma _{\mathrm{sp}}}`$, whichever is larger.
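The spike profile of eqs. (1)–(3) and its saturation at $`\rho _\mathrm{c}`$ can be sketched numerically. In the sketch below the particle-physics inputs ($`m_\chi `$, $`\sigma v`$) and the coefficient $`\alpha _\gamma `$ (tabulated in the reference) are placeholder assumptions, not values taken from the text; only the structural behavior — the steepening to slope $`\gamma _{\mathrm{sp}}=7/3`$ for $`\gamma =1`$ and the annihilation-limited plateau — is the point:

```python
# Fiducial/assumed inputs (alpha_g, m_chi, sigma_v are placeholders,
# NOT values stated in the text):
PC_CM    = 3.0857e18          # cm per parsec
MSUN_GEV = 1.1157e57          # solar mass in GeV/c^2
gamma    = 1.0                # cusp slope
rho_D    = 0.24               # GeV/cm^3 at the solar circle
D        = 8500.0             # pc
M_bh     = 2.6e6              # black-hole mass, M_sun
alpha_g  = 0.1                # assumed alpha_gamma
m_chi    = 100.0              # GeV (assumed)
sigma_v  = 3.0e-26            # cm^3/s (assumed)
t_bh     = 1.0e10 * 3.156e7   # age of the black hole, s

R_S      = 9.57e-14 * M_bh                        # Schwarzschild radius, pc
gamma_sp = (9.0 - 2.0 * gamma) / (4.0 - gamma)    # = 7/3 for gamma = 1
R_sp     = alpha_g * D * (M_bh * MSUN_GEV /
                          (rho_D * (D * PC_CM) ** 3)) ** (1.0 / (3.0 - gamma))
rho_R    = rho_D * (R_sp / D) ** (-gamma)
rho_c    = m_chi / (sigma_v * t_bh)               # saturation density, GeV/cm^3

def rho_spike(r_pc):
    """Spike density (GeV/cm^3) of eqs. (2)-(3) at radius r_pc (parsecs)."""
    if r_pc <= 4.0 * R_S:
        return 0.0                                # inside the capture region
    g = (1.0 - 4.0 * R_S / r_pc) ** 3
    rho_prime = rho_R * g * (R_sp / r_pc) ** gamma_sp
    return rho_prime * rho_c / (rho_prime + rho_c)
```

The harmonic combination guarantees $`\rho _{\mathrm{sp}}\mathrm{min}(\rho ^{},\rho _\mathrm{c})`$, i.e. the density can never exceed the annihilation-limited value.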
Radio signals. The electrons and positrons produced by neutralino annihilation in the spike are expected to emit synchrotron radiation in the magnetic field around the galactic center.
The strength and structure of this magnetic field is known to some extent. A magnetic field of few milligauss has been detected few parsecs from the center. Models of Sgr A contain accretion flows, either spherical or moderately flattened , which carry a magnetic field towards the black hole. The strength of this magnetic field is assumed to increase inwards according to magnetic flux conservation or equipartition.
Including the gas and the radial dependence of the magnetic field in the synchrotron emission from neutralino annihilations is a complicated problem. Electrons and positrons in the regions where the magnetic field is strong may lose their energy almost in place, while those at the outskirts of the spike may have time to diffuse to very different radii. Moreover, the plasma may affect the shape of the synchrotron spectrum. We postpone this complicated analysis, and consider three simple but relevant models for the magnetic field and the electron/positron propagation.
In model A, we assume that the magnetic field is uniform across the spike, with strength $`B=1\mathrm{m}\mathrm{G}`$, and that the electrons and positrons lose all their energy into synchrotron radiation without moving significantly from their production point.
In model B, we also assume that the magnetic field is uniform across the spike with strength $`B=1\mathrm{m}\mathrm{G}`$, but that the electrons and positrons diffuse efficiently and are redistributed according to a gaussian encompassing the spike (we take the gaussian width $`\lambda =1`$ pc).
In model C, we assume that the magnetic field follows the equipartition value $`B=1\mu \mathrm{G}(r/\mathrm{pc})^{-5/4}`$ (from ref. ) and that the electrons and positrons lose all their energy into synchrotron radiation without moving significantly from their production point. In addition, in this model, we neglect synchrotron self-absorption.
Under these assumptions, the electron plus positron spectrum follows from the equation of energy loss $`dE/dt=-P(E)=-(2e^4B^2E^2)/(3m_e^4c^7)`$ as
$$\frac{dn_e}{dE}=\frac{Y_e(\mathrm{>}E)}{P(E)}\mathrm{\Gamma }f_e(r),$$
(5)
where
$$f_e(r)=\frac{\rho _{\mathrm{sp}}^2}{\int \rho _{\mathrm{sp}}^24\pi r^2\,dr}$$
(6)
in models A and C, and
$$f_e(r)=\frac{1}{(2\pi \lambda ^2)^{3/2}}e^{-r^2/2\lambda ^2}$$
(7)
in model B.
$`Y_e(\mathrm{>}E)`$ is the number of annihilation electrons and positrons with energy above $`E`$. We obtain $`Y_e(\mathrm{>}E)`$ with the DarkSUSY code , which includes a Pythia simulation of the $`e^\pm `$ continuum and the $`e^\pm `$ lines at the neutralino mass .
The synchrotron luminosity is given by
$$L_\nu =\frac{A_\nu \mathrm{\Gamma }}{\nu }\int dr\,4\pi r^2f_e(r)\int _{m_e}^{m}\frac{Y_e(\mathrm{>}E)}{\nu _c(E)}F\left(\frac{\nu }{\nu _c(E)}\right)dE,$$
(8)
where
$$\nu _c(E)=\frac{3eB}{4\pi m_ec}\left(\frac{E}{m_ec^2}\right)^2$$
(9)
and
$$F(x)=\frac{9\sqrt{3}}{8\pi }x\int _x^{\infty }K_{5/3}(y)\,dy.$$
(10)
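For orientation, here is a small numerical illustration of eq. (9) in CGS units (the field value $`B=1`$ mG is that assumed in models A and B; the electron energies are illustrative choices):

```python
import math

# CGS constants
e_esu    = 4.8032e-10     # electron charge (esu)
m_e      = 9.1094e-28     # electron mass (g)
c        = 2.9979e10      # speed of light (cm/s)
mec2_MeV = 0.511          # electron rest energy (MeV)

def nu_c(E_MeV, B_gauss):
    """Critical synchrotron frequency of eq. (9), in Hz."""
    return (3.0 * e_esu * B_gauss / (4.0 * math.pi * m_e * c)
            * (E_MeV / mec2_MeV) ** 2)

# In a 1 mG field, a 1 GeV electron radiates near ~16 GHz:
nu_1GeV = nu_c(1000.0, 1.0e-3)

# electron energy whose critical frequency is the 408 MHz constraint frequency
E_408 = mec2_MeV * math.sqrt(4.08e8 / nu_c(mec2_MeV, 1.0e-3))   # ~160 MeV
```

This shows that, in a milligauss field, the 408 MHz bound probes annihilation electrons and positrons of only ~150 MeV, far below typical neutralino masses.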
The factor $`A_\nu `$ accounts for synchrotron self-absorption. In models A and B, we write
$$A_\nu =\frac{1}{a_\nu }\int _0^{\infty }\left[1-e^{-\tau (b)}\right]\pi b\,db,$$
(11)
where $`(b,z)`$ are cylindrical coordinates,
$$\tau =a_\nu \int _{-\infty }^{+\infty }f_e(b,z)\,dz,$$
(12)
and
$$a_\nu =-\frac{e^3B\mathrm{\Gamma }}{9m_e\nu ^2}\int _{m_e}^{m}E^2\frac{d}{dE}\left(\frac{Y_e(\mathrm{>}E)}{E^2P(E)}\right)F\left(\frac{\nu }{\nu _c(E)}\right)dE.$$
(13)
In model C, we neglect self-absorption ($`A_\nu =1`$).
We have evaluated equation (8) numerically for each point in supersymmetric parameter space. In model C, we use the approximation $`F(x)\simeq \delta (x-0.29)`$, which selects the peak of the synchrotron emission from each electron or positron (profuse thanks to Pasquale Blasi for suggesting this approximation).
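The peak position used in this approximation can be checked directly from eq. (10); a brute-force sketch using the integral representation of the modified Bessel function (pure Python, coarse quadratures, so only the location of the maximum is meaningful):

```python
import math

def k53(y):
    """K_{5/3}(y) from the integral representation
    K_nu(y) = int_0^inf exp(-y cosh t) cosh(nu t) dt (trapezoidal rule)."""
    n, tmax = 400, 12.0
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-y * math.cosh(t)) * math.cosh(5.0 * t / 3.0)
    return s * h

def trap(f, a, b, n):
    """Simple trapezoidal quadrature of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def F(x):
    """Synchrotron kernel of eq. (10); the tail integral is split so the
    rapidly varying region just above y = x is sampled more finely."""
    tail = trap(k53, x, x + 2.0, 200) + trap(k53, x + 2.0, 30.0, 140)
    return 9.0 * math.sqrt(3.0) / (8.0 * math.pi) * x * tail

# F(x) has a broad maximum near x ~ 0.29, the value picked out by the
# delta-function approximation above
xs = [0.22 + 0.01 * i for i in range(15)]
x_peak = max(xs, key=F)
```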
Figure 1 shows a comparison of typical synchrotron spectra from neutralino annihilation in the spike with the measured spectrum of Sgr A (the latter is taken from the compilation in ref. ). Four spectra are plotted, corresponding to two points in supersymmetric parameter space (thick and thin lines) and two assumptions for the magnetic field (solid and dashed lines; for models A and C, respectively). The spectra are normalized to their maximal intensity, which is fixed by the upper bound at 408 MHz . This upper bound limits the synchrotron intensity for all points in supersymmetric parameter space.
Results. If a dark matter cusp extends to the galactic center, the neutralino cannot be the dark matter in our galaxy. For example, let us assume that the halo profile is of the Navarro-Frenk-White form , namely $`\rho \propto r^{-1}`$ in the central region. Figure 2 shows the expected radio fluxes $`S_\nu =L_\nu /4\pi D^2`$ at 408 MHz and the upper limit from . The upper panel is for model A, the lower panel for model C. Results of model B are similar to those of model A. Irrespective of the assumption on the magnetic field or the $`e^\pm `$ propagation, all points in supersymmetric parameter space where the neutralino would be a good dark matter candidate are excluded by several orders of magnitude.
Conversely, if the neutralino is the dark matter, there is no steep dark matter cusp extending to the galactic center. We see this by lowering the cusp slope $`\gamma `$ until the expected flux at 408 MHz decreases below the upper limit. We obtain a different maximum value $`\gamma _{\mathrm{max}}`$ for each point in supersymmetric parameter space. These values are plotted in figure 3 together with the range $`0.3\le \gamma \le 1.5`$ obtained in cold dark matter simulations. The upper bounds $`\gamma _{\mathrm{max}}`$ are generally orders of magnitude smaller than the simulation results.
We conclude that neutralino dark matter in the minimal supersymmetric standard model is incompatible with a dark matter cusp extending to the galactic center. If there is a dark matter cusp extending to the center, we can exclude the neutralino in the minimal supersymmetric standard model as a dark matter candidate. Conversely, if the dark matter of the galactic halo is the lightest neutralino in the minimal supersymmetric standard model, we can exclude that a dark matter cusp extends to the center of the galaxy.
Acknowledgements. Many thanks to the Fermilab Astrophysics group for the generous and warm hospitality. Thanks in particular to Pasquale Blasi for insistingly requesting a non-uniform magnetic field (model C).
# Modification of AGB wind in a binary system
## 1. Introduction
Shapes of planetary nebulae that deviate from spherical symmetry (in particular, axisymmetric ones) are often ascribed to binary interactions (e.g. Soker 1997). When an Asymptotic Giant Branch star loses its mass – which is to become a PN – the companion can affect the trajectories of the outflowing matter, concentrating it toward the orbital plane. A density gradient between the equatorial and polar regions is created, and when the star leaves the AGB and the hot fast wind starts to break its way through the remnants of the expelled giant’s envelope, the elliptical or bipolar symmetry forms naturally.
In attempts to calculate such effects it is widely assumed that the intrinsic AGB wind is spherically symmetric and that the asymmetry is introduced only by the companion’s influence. But this need not be the case for relatively close binaries, where the giant is noticeably distorted by tidal forces. Differences in local conditions across the stellar surface, mainly in temperature and gravity, may lead to different intensities of the outflow. Thus the wind would show an intrinsic directivity.
We investigate this possibility, using a simple model.
## 2. The model
We assume that:
* The orbit is circular and the giant corotates with orbital motion – hence the Roche model for the gravitational potential applies.
* Stellar surface is defined by the Roche equipotential surface.
* The local mass loss rate per unit area, $`\dot{m}`$, is a function of local stellar parameters such as effective temperature, $`T_{\mathrm{eff}}`$, and gravity, $`g`$.
* The luminosity is spread uniformly over the solid angle and therefore the effective temperature depends only on radius and inclination of a given surface element.
We have calculated sequences of models representing stars filling an increasing fraction of their Roche lobes for various mass ratios. For each point of the stellar surface we have computed gravity and effective temperature (relative to the spherical case). This allowed us to evaluate local mass loss rates using the prescription derived by Arndt, Fleischer, & Sedlmayr (1997). Integration over the whole surface then yields total mass loss rates.
## 3. Results
Figures 1–2 present our results. Figs. 1a–d show the local mass loss rates (according to the Arndt et al. prescription), represented by grayscale. In each figure black denotes the maximum and white the minimum of the mass loss rate. The mass loss rate at the dotted line is equal to that of a single, undistorted (i.e. spherical) star. It marks the border between the “polar” regions, where the mass loss rate is lower, and the “equatorial strip”, where it is higher than for a single star.
In Figs. 1a–c configurations with the mass ratio $`q`$ = $`0.5`$ and the ratio of giant volume radius to critical Roche surface volume radius $`R/R_{RL}`$ = $`1.0`$, $`0.5`$, $`0.1`$ are shown. The stars are viewed from the orbital plane, with the hemisphere facing the companion closer to the observer. Fig. 1d presents for comparison the $`q=0.5`$, $`R/R_{RL}=0.5`$ case viewed from the opposite hemisphere.
Fig. 2 plots the enhancement of the total mass loss rate caused by the presence of the binary companion, $`\mathrm{\Delta }\dot{M}/\dot{M}_{single}`$, against the $`R/R_{RL}`$ ratio. Note logarithmic scale on both axes. Points represent numerical results, lines – the derived analytical relation, which is
$$\mathrm{\Delta }\dot{M}/\dot{M}_{single}\propto (R/R_{RL})^3$$
(see Sect. 4).
## 4. Discussion
As one could expect, we find significant differences in the local mass loss rates across the surface of the giant distorted by the presence of a companion. The gradient between the equatorial and polar regions is evident, although the strongest enhancement occurs toward the companion. It would be interesting to use these results as an input for modelling shapes of the Planetary Nebulae formed in binary systems, replacing the assumption of intrinsic sphericity of the AGB wind.
For evolutionary calculations considering binary evolution it is important to know by how much the stellar mass loss rate is affected by the presence of the companion. Tout and Eggleton (1988) proposed a formula, according to which the tidal torque would enhance the mass loss by a factor of $`1+B\times (R/R_{RL})^6`$, where $`B`$ is a free parameter to be adjusted (ranging from $`5\times 10^2`$ to $`10^4`$).
In our model the mass loss rate enhancement depends mainly on the value of $`g`$ at the giant’s point closest to the binary companion. Therefore one may try a simple analytical approach to derive a similar relation for the wind enhancement. Let us denote the gravity of a single star by $`g_{single}`$. Expanding the ratio $`g^{-1}/g_{single}^{-1}`$ at the point closest to the companion into a series in small $`R/R_{RL}`$ gives $`const\times (R/R_{RL})^3`$ as the first non-zero term following unity. This leads to the following dependence for the mass loss rate:
$$\dot{M}=\dot{M}_{single}(1+const\times (R/R_{RL})^3).$$
Our numerical results confirm the above relation up to $`\mathrm{log}R/R_{RL}\simeq -0.2`$ (i.e. $`R/R_{RL}\simeq 2/3`$), which is shown in Fig. 2.
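The cubic scaling can also be verified directly from the Roche geometry. In the sketch below the expansion coefficient $`(1+3q)`$ at the sub-companion point is our own evaluation of the leading tidal-plus-centrifugal term (with $`q=M_2/M_1`$, $`G=M_1=1`$, separation $`d`$), standing in for the unspecified $`const`$:

```python
def g_eff(q, x, d=1.0):
    """Effective gravity (toward the primary) at the sub-companion point,
    in the corotating Roche model with G = M1 = 1 and separation d.
    The secondary has mass q; the rotation axis passes through the
    barycenter, so the centrifugal term vanishes at x = x_cm."""
    x_cm = q * d / (1.0 + q)                # barycenter position
    omega2 = (1.0 + q) / d ** 3             # Kepler: omega^2 = G(M1+M2)/d^3
    return 1.0 / x ** 2 - q / (d - x) ** 2 + omega2 * (x_cm - x)

# leading-order prediction: 1 - g/g_single = (1 + 3q) (x/d)^3 + O(x^4)
q, x = 0.5, 0.01
ratio = (1.0 - g_eff(q, x) * x ** 2) / ((1.0 + 3.0 * q) * x ** 3)
# ratio ~ 1 for x << d, confirming the cubic law
```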
### Acknowledgments.
This work has been supported from the grant No. 2.P03D.020.17 of the Polish State Commitee for Scientific Research.
## References
Arndt, T. U., Fleischer, A. J., & Sedlmayr, E. 1997, A&A, 327, 614
Soker, N. 1997, ApJS, 112, 487
Tout, Ch. A., & Eggleton, P. P. 1988, MNRAS, 231, 823
|
no-problem/0002/physics0002003.html
|
ar5iv
|
text
|
# IS IT POSSIBLE TO TRANSFER AN INFORMATION WITH THE VELOCITIES EXCEEDING SPEED OF LIGHT IN EMPTY SPACE?
## 1 Introduction
In the theory of special relativity (SR) the maximal velocity of any signal does not exceed the speed of light in empty space (the existence of optical tachyons does not break SR ). In the framework of the multifractal theory of time and space it is possible to construct a theory of almost inertial systems . In this theory arbitrary velocities of moving particles are possible if the approximate independence of the speed of light from the velocity of the light source and the approximate constancy of the speed of light in vacuum are valid (the violation of the law of constancy of the speed of light is smaller than the sensitivity of modern experiments and amounts to $`10^{-10}c`$, see ). Is the transfer of information possible with arbitrary velocities within the framework of the theory? The main difficulty in answering this question is not that of creating a signal carrier of information spreading with arbitrarily large (practically infinitely large) velocity. Such signals can be, for example, beams of charged particles (protons, ionized atoms) accelerated up to velocities greater than the speed of light (their energy must exceed $`10^3E_0`$, where $`E_0=m_0c^2`$) and then spontaneously accelerated to an almost infinitely large velocity. These beams may be the carriers of the transferred information. The difficulty consists in creating receivers (detectors) of the information recorded by beams of (or single) faster-than-light particles. According to the theory , a particle with velocity $`v>c`$ is spontaneously accelerated up to the velocity $`v=\mathrm{\infty }`$ and practically ceases to interact with the surrounding medium. The purpose of this paper is to analyze some possibilities of detecting such particles. If the problem of detectors for registering faster-than-light particles is solved, the problem of practically instantaneous transfer of information over any distance is solved positively.
## 2 What physical effects exist for detection of particles moving with velocity $`v>c`$?
Let us suppose the validity of the laws of electrodynamics for velocities $`v>c`$. After replacing $`\beta =\sqrt{1-v^2/c^2}`$ by $`\beta ^{}=\sqrt[4]{(1-v^2/c^2)^2+4a^2}`$ (see the designation of $`a`$ in -), the Lorentz transformation may also be used. In that case, for a moving electrically charged particle possessing velocity $`v\to \mathrm{\infty }`$ and energy $`E=\sqrt{2}E_0`$ passing near the device playing the role of the detector, the following effects can probably be used for detecting the fact of transit of the particle:
a) In the real physical world no physical quantity can be infinite, so we introduce the designation $`v_m`$ for the maximal velocity of a particle ($`v_m`$ is the velocity of a faster-than-light particle for which the energy loss accompanying the increase of velocity is compensated by the energy gained from the medium through which the beam of particles flows, i.e. the velocity of the particle becomes practically stationary; for example, thermodynamic equilibrium with the relic radiation gives the particle the velocity $`v_m\sim 500c`$ ). There are almost instantaneous pulses of electric and magnetic fields from the electric current formed by the transit of a faster-than-light particle through the medium. These pulses could be discovered by detectors capable of registering super-short pulses of electric or magnetic fields;
b) The kinetic energy of a faster-than-light particle at $`v>c`$ looks like $`E_k\simeq \sqrt{2}E_0c^2/v_m^2`$. The transfer of part of this energy can in principle be registered by high-precision detectors (counters of fast particles, for example based on use of the inner photoelectric effect) when a faster-than-light particle collides with a proton, nucleus, or electron;
c) In a lengthy detector filled by a substance with large density (small free path between collisions for particles), multiple collisions of faster-than-light particles with atoms will arise. This can give an energy transfer from the substance to a faster-than-light particle and thereby a decrease of its velocity. The power transfer from the medium to a particle will result in a decrease of the temperature of the medium and, besides, gives radiation of the Cherenkov-Vavilov type (in a region of frequencies defined by the number of collisions with atoms of the substance of the detector);
d) When a faster-than-light particle flies through a substance with many energy levels at negative temperatures, the result may be a loss of energy of the substance without radiation and a decrease of the negative temperature of the optically active substance. Physical laws do not forbid any of the enumerated methods of detecting ordinary particles with faster-than-light velocities, and their experimental realization (as well as that of many other methods based on the energy exchange of a faster-than-light particle with the medium) is possible. The realization of the enumerated methods depends on the value of the maximal velocity $`v_m`$.
## 3 Do particles with $`v>c`$ and real mass exist in nature?
Let us pose the question: do faster-than-light particles exist in our world? When and where can such particles be discovered? As one of the consequences of the theory of fractal time (see ), particles with velocities exceeding the velocity of light must have an energy exceeding their rest energy $`E_0`$ by a factor of $`10^3`$. Such particles may be born, for example, in explosions of stars (in that case it is possible to expect the appearance of a maximum in the spectrum of $`\gamma `$-quanta at energies $`10^3E_0`$) or at the first moments of the ”big bang”, when the temperature of the early Universe exceeded $`10^{16}`$ K. If neutrinos have a rest mass and their rest energy is small, of the order of $`1`$ eV or less, then neutrinos with faster-than-light velocities may be produced by stars, by nuclear explosions, and in reactions of controlled thermonuclear synthesis. Might some supercivilization, if it has the technology of producing beams of such particles, use faster-than-light particles for recording and transmitting information with faster-than-light velocities? In that case it is necessary to seek such particles by the methods mentioned above (or similar ones).
## 4 Conclusion
On the basis of the above treatment of the possibilities of detecting particles with faster-than-light speed, it is possible to draw a conclusion: there are no prohibitions on the transfer and reception of information with faster-than-light speed (if the theory - is valid). The question of whether ordinary particles (protons, electrons, neutrinos) with real mass and velocities faster than light exist in nature (a question that was presented, and decided, for the first time in the paper as one of the consequences of the theory of almost inertial systems, which lies beyond special relativity and coincides with SR in the case of ideal inertial systems) remains unsolved. The search for tachyons has continued for more than thirty years. I do not discuss optical tachyons here: their existence does not contradict SR, and apparently they have been discovered. I think that only a careful experimental search for ordinary particles with real mass and faster-than-light velocities, and experiments that may test the fractal theory of time, can throw light on this very interesting problem.
We suggest carrying out experiments in which protons are accelerated to energies equal to $`10^{12}`$ eV (which gives the protons a velocity equal to the speed of light if the theory - is valid), and then verifying the predictions of the theory presented in this paper and in papers -.
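A quick arithmetic check of the quoted beam energy (the proton rest energy is the standard value; the factor $`10^3`$ is the theory's threshold):

```python
m_p_c2 = 0.938272e9            # proton rest energy E_0 = m_0 c^2, in eV
E_threshold = 1.0e3 * m_p_c2   # the 10^3 E_0 threshold of the theory
# -> about 9.4e11 eV, i.e. of order 10^12 eV as quoted
```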
no-problem/0002/cond-mat0002142.html
# Finite-size calculations of spin-lattice relaxation rates in Heisenberg spin-ladders
## I Introduction
Spin-ladder systems, in particular the two-leg, $`S=1/2`$, antiferromagnetic variety, have been the subject of considerable theoretical and experimental investigation.dagotto96 Spin ladders are appealing because they are one-dimensional systems and thus can be effectively investigated using many powerful theoretical tools, while offering a wider parameter space of “simple,” and potentially experimentally realizable, Heisenberg Hamiltonians than spin chains. The simplest Heisenberg ladder Hamiltonian has the form
$$\mathcal{H}=\sum _nJ_{\parallel }(𝑺_{n,1}\cdot 𝑺_{n+1,1}+𝑺_{n,2}\cdot 𝑺_{n+1,2})+J_{\perp }𝑺_{n,1}\cdot 𝑺_{n,2}$$
(1)
which offers a dimensionless parameter $`J_{\perp }/J_{\parallel }`$ that is in principle tunable by chemistry or pressure. In addition, compounds containing weakly coupled Cu<sub>2</sub>O<sub>3</sub> ladders are appealing because of possible connections with cuprate superconductivity.
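As an illustration of how small such a calculation can be, the sketch below (our own minimal example, not the authors' code) builds the Hamiltonian of Eq. (1) for an open-boundary $`2\times 4`$ ladder by explicit tensor products and reads off the singlet–triplet gap:

```python
import numpy as np
from functools import reduce

def ladder_hamiltonian(L, J_leg=1.0, J_rung=1.0):
    """Dense Hamiltonian of Eq. (1) for a 2 x L S=1/2 ladder
    (open boundary conditions along the legs)."""
    n = 2 * L                                   # site index i = 2*x + leg
    sx = np.array([[0, 0.5], [0.5, 0]])
    sy = np.array([[0, -0.5j], [0.5j, 0]])
    sz = np.array([[0.5, 0], [0, -0.5]])

    def embed(mat, i):                          # operator acting on site i only
        ops = [np.eye(2)] * n
        ops[i] = mat
        return reduce(np.kron, ops)

    def heis(i, j):                             # S_i . S_j
        return sum(embed(m, i) @ embed(m, j) for m in (sx, sy, sz))

    H = np.zeros((2**n, 2**n), dtype=complex)
    for x in range(L):
        H += J_rung * heis(2*x, 2*x + 1)        # rung bonds (J_perp)
        if x + 1 < L:
            H += J_leg * (heis(2*x, 2*x + 2)    # leg bonds (J_parallel)
                          + heis(2*x + 1, 2*x + 3))
    return H

E = np.linalg.eigvalsh(ladder_hamiltonian(L=4, J_rung=1.0))
print(f"E0 = {E[0]:.4f}, spin gap = {E[1] - E[0]:.4f}")
```

The same construction handles $`2\times 6`$ lattices ($`2^{12}=4096`$ states) at the cost of a slower dense diagonalization; the lowest excited level comes out threefold degenerate (a triplet), as expected for a ladder with a gapped singlet ground state.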
The present work was motivated by the nuclear spin-lattice relaxation measurements in La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub>, an undoped ladder compound, by Imai et al.imai98 These measurements were carried out for all of the nuclear sites on the ladder, namely the copper, the “rung” oxygen, and the “ladder” (or “chain”) oxygen, over a wide temperature range, from low temperatures up to nearly $`900\mathrm{K}`$. Because the principal exchange interactions in cuprates are so large, on the order of $`1000\mathrm{K}`$, it is quite challenging to do experimental work at temperatures significantly greater than the spin gap ($`\mathrm{\Delta }\approx 500\mathrm{K}`$).
The experimental results (see Figure 1(c) of Imai et al.imai98 ) have the following noteworthy features. At temperatures below about $`425\mathrm{K}`$, the relaxation rates for all three sites follow a common (activated) temperature dependence up to a scale factor. However, on increasing $`T`$ the copper $`1/T_1`$ (which we will refer to as $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$) exhibits a rather sharp departure from that of the two oxygen sites ($`1/{}_{}{}^{\mathrm{O}(1)}T_{1}^{}`$ and $`1/{}_{}{}^{\mathrm{O}(2)}T_{1}^{}`$ for ladder and rung, respectively). There seems to be a nearly discontinuous decrease in the derivative of $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$; moreover, above $`425\mathrm{K}`$ the $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ data appear nearly linear with an almost vanishing intercept. The relaxation rates for the two oxygen sites, in contrast, exhibit no particular features in the vicinity of $`425\mathrm{K}`$.
Several aspects of the wavevector dependence of the low-frequency spin susceptibility can be gleaned directly from the data.
One can express the spin-lattice relaxation rate in terms of the dynamic structure factor for the Cu<sup>2+</sup> spins
$$\frac{1}{{}_{}{}^{n}T_{1}^{}}\propto \int 𝑑𝒒H_n(𝒒)S(𝒒,\omega _n)$$
(2)
where $`H_n`$ is the hyperfine form factor associated with nucleus $`n`$, $`\omega _n`$ is the NMR frequency (which we will take to be zero in everything that follows), and $`S`$ is the structure factor. The proportionality constants can be neglected for our present purposes. The spin correlations are isotropic, so there is no need to consider the various components, $`S^{xx}`$ and so forth, individually. The hyperfine interactions are not isotropic, so the orientation of the magnetic field in the NMR experiment does affect the results; however, all of the results of present interest can be obtained with a single field orientation, which then specifies $`H_n(𝒒)`$ uniquely. The largest hyperfine couplings are between a given nuclear site and the closest spins; at that level of approximation, and taking the intra- and inter-chain lattice constants to be of unit length, one has
$`H_{\mathrm{Cu}}=A^2,H_{\mathrm{O}(1)}=4C^2\mathrm{cos}^2(q_x/2),`$
$`H_{\mathrm{O}(2)}=4F^2\mathrm{cos}^2(q_y/2)+D^2`$ (3)
where $`C`$, $`F`$, and $`D`$ are the hyperfine couplings identified in Fig. 1(a) of Imai et al.,imai98 $`A`$ is the on-site hyperfine interaction for copper, and we have elided the orientation dependence of the hyperfine interactions (so, for example, $`A^2`$ should really be $`A_x^2+A_y^2`$ if the static field is along the $`z`$ axis).
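With the unit couplings adopted later in this paper ($`A=C=F=1`$, $`D=0`$), the filtering action of these form factors at the two special wavevectors can be checked in a few lines (a trivial sketch):

```python
import numpy as np

# Hyperfine form factors of Eq. (3), with A = C = F = 1 and D = 0
H_Cu = lambda qx, qy: 1.0
H_O1 = lambda qx, qy: 4 * np.cos(qx / 2)**2      # ladder ("chain") oxygen
H_O2 = lambda qx, qy: 4 * np.cos(qy / 2)**2      # rung oxygen

for q in [(0.0, 0.0), (np.pi, np.pi)]:
    vals = [round(f(*q), 6) for f in (H_Cu, H_O1, H_O2)]
    print(f"q = {q}: H_Cu, H_O1, H_O2 = {vals}")
# The copper nucleus sees all wavevectors equally, while both oxygen
# form factors vanish at (pi, pi): the oxygen rates filter out the
# fluctuations associated with the lowest-energy magnons.
```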
The essential difference between copper and oxygen sites is that in the latter the hyperfine interaction in the vicinity of $`𝒒=(\pi ,\pi )`$ is much smaller than in the vicinity of $`𝒒=(0,0)`$. If, at all temperatures of experimental relevance, $`S(𝒒,0)`$ had most of its weight in the vicinity of $`𝒒=(0,0)`$, then all three relaxation rates would have tracked one another. The marked decrease of $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ relative to the other two relaxation rates at $`425\mathrm{K}`$ indicates that this cannot be the case, and in fact suggests that at temperatures below $`425\mathrm{K}`$ the ratio of the spectral weight near $`(\pi ,\pi )`$ to that near $`(0,0)`$ is roughly constant and of order unity, while above $`425\mathrm{K}`$ the ratio falls markedly. (The decrease is crucial. If there were an increase in $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ relative to the oxygen rates with increasing $`T`$, one could ascribe that to a turn-on of $`S((\pi ,\pi ),0)`$ for $`T\gtrsim \mathrm{\Delta }`$, but $`S((\pi ,\pi ),0)`$ might have been negligible compared to $`S((0,0),0)`$ at lower temperatures.)
Why the emphasis on $`𝒒=(0,0)`$ and $`(\pi ,\pi )`$? In gapped systems such as spin ladders, the low-energy spin fluctuations are Raman processes, and at low temperatures one needs to consider only the lowest energy magnons, namely those near $`𝒒=(\pi ,\pi )`$. Spin fluctuations near $`(0,0)`$ are associated with two-magnon processes, and those near $`(\pi ,\pi )`$ with three-magnon processes, and on the face of it one would be justified in neglecting the three-magnon processes entirely at low temperatures: see Ref. ivanov99, and references cited therein. However, as we have just seen, this appears to be inconsistent with the experimental data for La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub>, and it is also inconsistent with the quantum Monte Carlo calculations of spin-lattice relaxation in a particular Heisenberg ladder ($`J_{\perp }/J_{\parallel }=1`$) by Sandvik, Dagotto, and Scalapino,sandvik96 at least at temperatures greater than half the magnon gap.
An extensive theoretical treatment of spin dynamics in gapped one-dimensional Heisenberg models, including spin ladders, has been presented by Damle and Sachdev.damle98 Their analysis of $`S(𝒒,\omega \to 0)`$ was restricted to $`𝒒`$ near $`(0,0)`$, but they did find the quite interesting result that the activation energy for $`1/T_1`$ is larger, by a factor of $`3/2`$, than the activation energy for the uniform static susceptibility (which is simply the spin gap). An analysis of $`S(𝒒,\omega \to 0)`$ for $`𝒒`$ near $`(\pi ,\pi )`$, for systems with $`J_{\perp }\ll J_{\parallel }`$ has been presented by Ivanov and Lee.ivanov99 Their results are suggestive of a fairly sharp crossover from low- to high-temperature regimes at $`T\approx \mathrm{\Delta }`$, and also indicate that the $`(\pi ,\pi )`$ contribution to $`1/T_1`$ “overshoots” its $`T=\mathrm{\infty }`$ value and thus decreases as $`T\to \mathrm{\infty }`$.
In the present work, we have applied exact diagonalization to evaluate spin-lattice relaxation rates, following the method of Sokol, Gagliano and Bacci.sokol93 We have considered three different ladder Hamiltonians, namely $`J_{\perp }/J_{\parallel }=0.5`$, 1.0, and 2.0, and have obtained $`1/T_1`$ for Cu, O(1), and O(2) sites taking the simplest conceivable hyperfine couplings, namely $`A=C=F=1`$, with all other interactions neglected. All of the calculations were for rather small systems, $`2\times 6`$, such that exact diagonalization could be carried out in an extremely straightforward manner.
It was noted above that calculations of spin-lattice relaxation rates for spin ladders have already been carried out by means of large scale quantum Monte Carlo,sandvik96 but those calculations were limited to the Cu sites. Our goal is somewhat different from that of Sandvik, Dagotto, and Scalapino. We are not trying to fit the data in detail; rather, we want to see what can be learned from modest numerical calculations. One reason not to fit the data is that to get the gap correct to 10% by exact diagonalization for $`J_{\perp }/J_{\parallel }=0.5`$ would require a system at least $`2\times 12`$. Another is that we do not treat the spin diffusion contribution to the relaxation rates correctly: our calculations effectively introduce an artificial cut-off so that we obtain a finite spin-lattice relaxation rate. Finally, the precise form of the spin Hamiltonian for the cuprate ladder compounds is still subject to argument. Although the Knight-shift results of Imai et al.imai98 appear to be consistent with the simple spin-ladder Hamiltonian of Eq. (1) for $`J_{\perp }/J_{\parallel }\approx 0.5`$, it has been suggested by Brehmer et al.brehmer99 that instead $`J_{\perp }/J_{\parallel }\approx 1`$ and in addition there is a modest amount of plaquette “ring exchange” in the Hamiltonian. A quantum-chemical analysis of the exchange interactions in various cupratesmizonu98 provides some support for the latter proposal, since it concludes that $`J_{\perp }/J_{\parallel }\approx 0.9`$ for Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> (which is a lightly-self-doped version of the undoped La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> compound).
To be precise, the goals of our calculation are as follows. First, we want to verify that $`S(𝒒,0)`$ has significant weight near $`𝒒=(\pi ,\pi )`$ as well as near $`(0,0)`$ and see if there are any noticeable trends with varying $`J_{\perp }/J_{\parallel }`$. Second, we want to explore the crossover from low to high temperature behavior in $`1/T_1`$: can we see anything like the experimental results, or like the theoretical results of Ivanov and Lee? Third, we want to keep our eyes open for any unanticipated patterns that might emerge in the numerical results.
## II Method of Calculation and Results
The finite-size calculations of spin-lattice relaxation rates are carried out following Sokol, Gagliano, and Bacci.sokol93 Rather than repeating their discussion of the method let us make a few remarks. We take $`J_{\parallel }`$ as the unit of energy.
The first step in the calculation is a complete diagonalization of the Hamiltonian and evaluation of matrix elements for certain local spin operators (depending on which nuclear site one is interested in). For the $`2\times 6`$ lattices all of the calculations could be done using the simplest possible representations of the states in terms of local $`S^z`$ values; it was not even necessary to use translational invariance to classify states by wave vector.
The second step is the construction of an auxiliary function which Sokol et al. refer to as $`I(\omega )`$. This is implicitly dependent on $`T`$ and the hyperfine couplings. We considered temperatures ranging from 0.3 to 50. Typically we constructed $`I(\omega )`$ at intervals of 0.02 in $`\omega `$ up to at least $`\omega =0.6`$.
Finally, one needs to estimate the zero-frequency derivative of $`I(\omega )`$, because $`1/T_1`$ is proportional to $`T(dI/d\omega )|_{\omega =0}`$. At high temperatures $`I(\omega )`$ is quite smooth, but at temperatures comparable to the gap significant structure develops (see Fig. 1). In order to avoid introducing spurious temperature dependences into $`1/T_1`$ it is important to use a consistent procedure for extracting the derivative from the data. What we did was to fit a zero-intercept line through all the data points up to a cutoff $`\omega _{\mathrm{max}}`$, weighting all points equally in the fit. We did all of the calculations using both $`\omega _{\mathrm{max}}=0.5`$ and 0.3. While there are noticeable differences in the results using these two cutoffs, as shown in Fig. 2, our conclusions turn out the same no matter which is chosen. The use of a much smaller cutoff, which might seem to be preferred on the grounds that one is really looking for a zero-frequency derivative, is not beneficial. The structure that develops in $`I(\omega )`$ as $`T`$ is lowered, making it look like a Devil’s staircase, is a finite-size artifact and must be averaged over, using a suitably large $`\omega _{\mathrm{max}}`$, to obtain results that are representative of the thermodynamic limit.
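The zero-intercept fit has a closed-form solution: with equal weights, the slope is $`\mathrm{\Sigma }_i\omega _iI_i/\mathrm{\Sigma }_i\omega _i^2`$. Here is a sketch of the estimator, applied to synthetic data standing in for the computed $`I(\omega )`$:

```python
import numpy as np

def slope_estimate(omega, I, omega_max=0.5):
    """Zero-intercept least-squares slope through all (omega, I(omega))
    points with 0 < omega <= omega_max, all points weighted equally;
    1/T1 is then T times this slope."""
    m = (omega > 0) & (omega <= omega_max)
    return np.sum(omega[m] * I[m]) / np.sum(omega[m] ** 2)

# Synthetic stand-in for I(omega): true slope 2 plus small noise
rng = np.random.default_rng(1)
omega = np.arange(0.02, 0.62, 0.02)
I = 2.0 * omega + 0.05 * rng.standard_normal(omega.size)

print(slope_estimate(omega, I, omega_max=0.5))
print(slope_estimate(omega, I, omega_max=0.3))
```

Averaging over a suitably large $`\omega _{\mathrm{max}}`$ in this way, rather than differencing the first few points, is what suppresses the finite-size staircase structure.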
We now turn to the results of the calculations for the three nuclear sites and the three values of $`J_{\perp }`$ considered (0.5, 1.0, and 2.0). In every case we take $`\omega _{\mathrm{max}}=0.5`$. In Fig. 3 we present results on a linear temperature scale, for $`T\le 2`$. The behavior of the spin-lattice relaxation rate at high temperatures is a bit surprising: comparing the plots in Fig. 3(a) through (c) it is apparent that while the Cu and O(1) rates decrease strongly as $`J_{\perp }`$ increases, the trend for the O(2) rate is different. This is made more explicit in Fig. 4, where we show $`1/T_1`$ for all three sites as a function of $`J_{\perp }`$ at $`T=50`$ (effectively infinite temperature). In contrast, at low temperatures $`1/T_1`$ decreases with increasing $`J_{\perp }`$ at all sites, as one would expect since the spin gap is an increasing function of $`J_{\perp }`$.
## III Discussion and Conclusions
It is evident that for $`J_{\perp }=0.5`$ and 1.0, $`1/T_1`$ for all three sites is nearly equal for temperatures below the spin gap. (Of course we do not claim that this holds to arbitrarily low temperatures, just that it seems correct for temperatures as low as we dare to estimate $`1/T_1`$.) Because of our choice of hyperfine interactions, this suggests that in such cases the weight in $`S(𝒒,0)`$ for $`𝒒\approx (\pi ,\pi )`$ is approximately three times that for $`𝒒\approx (0,0)`$. This is in quantitative agreement with the results of Sandvik et al.sandvik96 at $`J_{\perp }=1.0`$. However, the story is rather different at $`J_{\perp }=2.0`$, where the spin-lattice relaxation rates for all three sites, including the two oxygen sites, are significantly different even at $`T=\mathrm{\Delta }/2`$. In the strong-coupling limit, then, the simple picture for $`S(𝒒,0)`$ in which its weight is concentrated at $`(0,0)`$ and $`(\pi ,\pi )`$ does not work even for temperatures that are a modest fraction of $`\mathrm{\Delta }`$.
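The factor of three is simple bookkeeping. With $`A=C=F=1`$ and $`D=0`$, and assuming all the low-frequency weight sits at the two special wavevectors, equal rates at the three sites force the $`(\pi ,\pi )`$ weight to be three times the $`(0,0)`$ weight:

```python
# Form factor values of Eq. (3) at the two special points (A = C = F = 1, D = 0):
#   q = (0,0):    H_Cu = 1, H_O1 = 4, H_O2 = 4
#   q = (pi,pi):  H_Cu = 1, H_O1 = 0, H_O2 = 0
# With weight w0 at (0,0) and w_pi at (pi,pi):
#   1/T1(Cu) ~ w0 + w_pi,   1/T1(O1) = 1/T1(O2) ~ 4*w0,
# so equal rates at all three sites require w_pi = 3*w0.
w0 = 1.0
w_pi = 3.0 * w0
rate_Cu = 1 * (w0 + w_pi)
rate_O1 = 4 * w0
rate_O2 = 4 * w0
print(rate_Cu, rate_O1, rate_O2)   # all equal
```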
What can we say about the low-to-high temperature crossover in the spin-lattice relaxation rates? First of all, the sort of behavior seen experimentally, in which $`1/T_1`$ for the oxygen sites track each other closely while $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ splits off, appears to be a special feature of $`J_{\perp }\approx 1`$ in the present calculations; it is not at all generic and does not hold for the putative experimental value $`J_{\perp }\approx 0.5`$. Second, in no case does $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ exhibit any sort of sharp “break” as seen experimentally; nor does $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ exhibit linear-in-$`T`$ behavior (with zero intercept, or otherwise) in the high temperature regime, even over a restricted temperature range (say $`\mathrm{\Delta }`$ to $`2\mathrm{\Delta }`$). Finally, in no case does $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ exhibit an “overshoot” during the crossover: the spin-lattice relaxation rate associated with all sites monotonically increases with $`T`$.
Our calculations thus suggest that there are quite a few open problems in this field. Almost none of the prominent experimental facts concerning $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ in La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> are reproduced in our finite-size calculations. Furthermore, the work of Ivanov and Leeivanov99 does not seem to have much to say about our results, either. Their calculation is controlled only in the $`J_{\perp }\ll J_{\parallel }`$ regime, so we should only look at the $`J_{\perp }=0.5`$ data. Here we have no evidence of overshoot in $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$, and no reason to believe that one can just examine the spectral weight near $`(\pi ,\pi )`$ since $`1/{}_{}{}^{\mathrm{O}(2)}T_{1}^{}`$ “peels off” from $`1/{}_{}{}^{\mathrm{O}(1)}T_{1}^{}`$ in a manner not very different from $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$.
At this point we face several alternatives. It is possible that our results are simply unreliable, because we are considering systems that are too small (especially for $`J_{\perp }=0.5`$) and our procedure for estimating $`dI(\omega )/d\omega `$ is flawed. We cannot rule this out, but we strongly suspect that the trends in the results as a function of $`J_{\perp }`$ are robust. It is possible that the spin Hamiltonian for the ladders in La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> is more complicated than the model we have considered. Whether the Hamiltonian of Brehmer et al.brehmer99 can reproduce the spin-lattice relaxation data requires another calculation. Another possibility that must be considered, given the remarkably sharp feature in $`1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}`$ found in the experimental data, is that La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> undergoes, by coincidence, a subtle structural transition at $`425\mathrm{K}`$. This could introduce an anomalously strong $`T`$-dependence to the hyperfine interactions, though why the effect should be so much stronger in $`H_{\mathrm{Cu}}(𝒒)`$ than $`H_{\mathrm{O}(1)}(𝒒)`$ and $`H_{\mathrm{O}(2)}(𝒒)`$ is difficult to envision.
Let us now turn to the results of our calculations for spin-lattice relaxation at very high temperatures, shown in Fig. 4. The most natural way to think about these results is in terms of the Gaussian approximation.anderson53 ; moriya56a ; moriya56b The basic idea of this approach is to assume that $`\int 𝑑𝒒H_n(𝒒)S(𝒒,\omega )`$ is a Gaussian function of $`\omega `$, and then evaluate the frequency cumulants of this function by means of short-time expansions of time-dependent correlation functions. At $`T=\mathrm{\infty }`$ the calculations are especially simple, because the expectation values of correlators $`𝑺_i\cdot 𝑺_j`$ vanish for sites $`i\ne j`$. For the three sites of interest in Heisenberg ladders, the Gaussian approximation yields the following exchange dependences of the spin-lattice relaxation rates at $`T=\mathrm{\infty }`$:
$$1/{}_{}{}^{\mathrm{Cu}}T_{1}^{}\propto 1/\sqrt{1+\frac{1}{2}J_{\perp }^2},$$
(4)
$$1/{}_{}{}^{\mathrm{O}(1)}T_{1}^{}\propto 1/\sqrt{1+J_{\perp }^2},$$
(5)
and $`1/{}_{}{}^{\mathrm{O}(2)}T_{1}^{}`$ does not have any $`J_{\perp }`$ dependence at all. (Recall that $`J_{\parallel }=1`$; in all of these results there is an overall factor of $`1/J_{\parallel }`$.) If this last result seems peculiar, let us note that it can be derived in another way, by considering the strong-$`J_{\perp }`$ limit. Then one most naturally thinks about the states in terms of singlets and triplets on the rungs. The relevant energy scale for the dynamics of the total spin on a rung, which is relevant to $`1/{}_{}{}^{\mathrm{O}(2)}T_{1}^{}`$, would seem to be proportional to $`J_{\parallel }`$ (that is, the bandwidth in lowest-order perturbation theory for a triplet excitation in a singlet backgroundbarnes93 ), and with the hypothesis of a single energy scale in $`\int 𝑑𝒒H_{\mathrm{O}(2)}S(𝒒,\omega )`$ one reproduces the Gaussian approximation result.
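Tabulating Eqs. (4) and (5) at the three couplings studied here makes the predicted trends explicit (overall prefactors, including the $`1/J_{\parallel }`$ factor, dropped):

```python
import numpy as np

# Gaussian-approximation exchange dependences at infinite temperature,
# Eqs. (4)-(5); overall prefactors dropped.
inv_T1_Cu = lambda Jp: 1.0 / np.sqrt(1.0 + 0.5 * Jp**2)
inv_T1_O1 = lambda Jp: 1.0 / np.sqrt(1.0 + Jp**2)
inv_T1_O2 = lambda Jp: 1.0               # no J_perp dependence at all

for Jp in (0.5, 1.0, 2.0):
    print(f"J_perp = {Jp}: Cu {inv_T1_Cu(Jp):.3f}, "
          f"O(1) {inv_T1_O1(Jp):.3f}, O(2) {inv_T1_O2(Jp):.3f}")
# Cu and O(1) rates fall monotonically with J_perp while O(2) stays flat;
# compare the weaker-than-predicted falloff and the O(2) increase in Fig. 4.
```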
We see in Fig. 4 that $`1/T_1`$ for the copper and ladder oxygen sites decreases with increasing $`J_{\perp }`$, qualitatively in agreement with the Gaussian approximation, although the dependence on $`J_{\perp }`$ is not as strong as that approximation suggests. Furthermore, $`1/{}_{}{}^{\mathrm{O}(2)}T_{1}^{}`$ exhibits an increase with $`J_{\perp }`$. The rather poor performance of the Gaussian approximation is somewhat disappointing, considering how well it works for estimating spin-lattice relaxation rates in square-lattice Heisenberg antiferromagnets.gelfand93 ; sokol93 It is not too surprising, perhaps, given that the dynamic correlations in the $`S=1/2`$ Heisenberg chain are far from Gaussian at $`T=\mathrm{\infty }`$.roldan86 So, there is yet another open problem in the area of low-energy spin dynamics of Heisenberg ladders.
###### Acknowledgements.
This work was supported by the US National Science Foundation through grant DMR 94–57928. We thank T. Imai for several stimulating discussions and also for communicating the results of his group’s experiments prior to publication.
no-problem/0002/astro-ph0002179.html
## 1 Introduction
As yet, little is known about the structure of the inner regions of powerful radio galaxies and the impact of the activity on circumnuclear regions. The high resolution and sensitivity afforded by imaging observations using the Hubble Space Telescope (HST) has revealed a wealth of complex structures in such objects, with dust lanes, jets and large regions of scattered emission from hidden nuclei. One of the key targets in studies which aim to elucidate the interplay between active nuclei and their host galaxies, and between galactic activity of different types, is the archetypal powerful radio galaxy Cygnus A (see Carilli & Barthel 1996 for a review).
HST infrared imaging observations of Cygnus A by Tadhunter et al. (1999) revealed an edge-brightened bi-conical structure centred on the nuclear point source, strikingly similar to the structures observed around young stellar objects. The edge-brightening of this structure provides evidence that the bicone is defined as much by outflows in the nuclear regions as by the polar diagram of the illuminating quasar radiation field. The HST observations also show an unresolved nuclear source at 2.0 and 2.25$`\mu `$m. However, from the imaging observations alone it is unclear whether this unresolved source represents the highly extinguished quasar nucleus seen directly through the obscuring torus, or emission from a less-highly-extinguished extended region around the nucleus.
Near-IR polarization observations have the potential to remove the uncertainties surrounding the nature of the unresolved nuclear sources and, in addition, to provide further important information about the obscuration and anisotropy in the near-nuclear regions of powerful radio galaxies. Previous ground-based polarimetric observations of Cygnus A by Packham et al. (1998) demonstrate that the nuclear regions are highly polarized in the K-band, with a measured polarization of $`P_k\approx 4`$% for a 1 arcsecond diameter aperture centred on the compact IR nucleus. However, the resolution of the ground-based observations is insufficient to resolve the structures and determine the polarization mechanism unambiguously. In this letter we present new diffraction-limited infrared imaging polarimetry observations of Cygnus A made with the Near Infrared Imaging Camera and Multi-Object Spectrometer (NICMOS: MacKenty et al. 1997) on the HST. These observations resolve the polarized structures, and raise new questions about the nature of the anisotropy in the near-nuclear regions of this key source.
## 2 Observations and data reduction
NICMOS Camera 2 ‘long wavelength’ infrared imaging polarization observations were taken in December 1997 and August 1998, giving a pixel scale of 0.075 arcseconds and a total field of 19.4$`\times `$19.4 arcseconds. The NICMOS polarizers are self-contained spectral elements, with the three long wavelength polarizers effective from $`\lambda \approx 1.9`$ to $`2.1\mu `$m, resulting in an effective central wavelength of $`2.0\mu `$m (MacKenty et al. 1997). The polarizers are oriented at approximately 60 degree intervals and have characteristics as presented by Hines (1998). Each exposure consisted of a number of non-destructive reads of the detector which were optimally combined in the reduction software to remove cosmic rays. Regular chops were made to offset fields in order to facilitate accurate background subtraction. The total integration time was 2400 seconds per polarizer.
The reduction of the data used standard IRAF/STSDAS pipeline processing together with pedestal removal as given by van de Marel (1998). The three final, clean polarization images were combined following the prescription of Sparks & Axon (1999), using the pipeline produced variance data. The output data comprised a set of images containing each of the Stokes parameters $`I,Q,U`$, their variances and covariances, a debiassed estimate of the polarization intensity and polarization degree using the method of Serkowski (1958), and also position angle and uncertainty estimates on each of those images, as described in detail in Sparks & Axon (1999).
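For orientation, the Stokes reconstruction from three polarizer images can be sketched with idealized polarizers at exactly 0, 60, and 120 degrees; the real NICMOS elements have unequal orientations and efficiencies, and Sparks & Axon (1999) give the exact matrix treatment, including the noise debiasing omitted from this sketch:

```python
import numpy as np

# Idealized reduction of three polarizer images at 0, 60, 120 degrees:
# each measures m_k = (I + Q cos 2θ_k + U sin 2θ_k) / 2, so the three
# images determine (I, Q, U) through a fixed 3x3 matrix.
theta = np.deg2rad([0.0, 60.0, 120.0])
A = 0.5 * np.stack([np.ones(3), np.cos(2 * theta), np.sin(2 * theta)], axis=1)
Ainv = np.linalg.inv(A)                       # maps (m1, m2, m3) -> (I, Q, U)

def stokes(m1, m2, m3):
    I, Q, U = np.einsum("ij,j...->i...", Ainv, np.stack([m1, m2, m3]))
    p = np.hypot(Q, U) / I                    # degree of polarization
    chi = 0.5 * np.degrees(np.arctan2(U, Q))  # E-vector position angle
    return I, p, chi

# Check on a synthetic 20%-polarized source at PA = 30 degrees
I0, p0, chi0 = 1.0, 0.20, 30.0
Q0 = I0 * p0 * np.cos(np.deg2rad(2 * chi0))
U0 = I0 * p0 * np.sin(np.deg2rad(2 * chi0))
m = [0.5 * (I0 + Q0 * np.cos(2 * t) + U0 * np.sin(2 * t)) for t in theta]
print(stokes(*m))     # recovers ~ (1.0, 0.20, 30.0)
```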
The two epochs of observation were acquired with different instrumental orientation, and were analysed independently to provide a robust check on our estimated uncertainties and on the possibility of systematic errors. A variety of spatial resolutions were used, from full diffraction-limited NICMOS resolution down to $`\sim 1`$ arcsec by smoothing the input three images prior to polarization analysis. Both epochs were fully consistent within the estimated statistical errors, implying that there are no significant sources of systematic uncertainty. Measurements of several Galactic stars in the field of Cygnus A give 2.0$`\mu `$m polarizations of $`P_{2.0\mu m}<2.0`$% for 5 pixel diameter apertures. This can be regarded as an upper limit on the level of systematic error in estimates of $`P`$. The typical statistical uncertainties for the most highly polarized regions measured in our full resolution images are $`\pm `$1.5% for $`P`$, and $`\pm `$3 degrees for the polarization angle. In the following, we will present the data for the December 1997 observations only, since these have a higher S/N as a consequence of longer exposure times.
## 3 Results
Figure 1 shows the first epoch polarization results. As expected, the total intensity image (Stokes $`I`$) is very similar to the direct images published previously by Tadhunter et al. (1999). In particular, it shows an apparent edge-brightened, reasonably symmetric, bi-conical structure centred on the nucleus with an opening angle of 116 degrees and whose axis is closely aligned with the large scale radio jet.
The image of polarized intensity, however, reveals intriguing differences compared to total intensity. The only regions of strongly polarized emission (apart from the nucleus discussed below) are confined to a quasi-linear structure running along the NW-SE limb of the bi-conical structure. This feature shows approximate reflection symmetry about the nucleus, as opposed to the axial symmetry about the radio axis in the total intensity image. The brightness ratio of the two limbs of the cone to the east of the nucleus is approximately 2:1 in the total intensity image, while in the polarized intensity image it is $`>`$12:1. Note that the polarization structure visible in our 2.0$`\mu `$m image is strikingly different from that of the optical V- and B-band polarization images (Tadhunter et al. 1990, Ogle et al. 1997), in which the polarized emission appears uniformly distributed across the kpc-scale ionization cones and shows no clear preference for the NW-SE limb of the bi-cone.
Typical measured degrees of polarization are in excess of $`10`$% up to a maximum of $`\sim 25`$% in the polarized region to the south east of the nucleus. However, these measures underestimate the true degree of intrinsic polarization in the extended structures, because starlight from the host galaxy makes a substantial contribution to the total flux. For example, using the azimuthal intensity profile measured in an annulus with inner radius 4 pixels and outer radius 12 pixels, we estimate that the diffuse starlight from the host galaxy contributes 50 - 70% of the total flux in the south east arm of the bicone. Assuming that the starlight is unpolarized, the degree of intrinsic polarization in the south east arm is $`P_{2.0\mu m}^{intr}\approx `$50 – 70%. Such high degrees of measured and intrinsic IR polarization are unprecedented in observations of active galaxies in which the synchrotron emitting jets are not observed directly at infrared wavelengths.
### 3.1 The nucleus
The nuclear point source, discussed in detail by Tadhunter et al. (1999), is also highly polarized. In the polarized intensity image the main nuclear component appears unresolved ($`FWHM=2.24`$ pixels) and its position agrees with that of the nucleus in the total intensity image to within 0.5 pixels (0.04 arcseconds). Thus, it appears likely that the bulk of the polarization is associated with the compact nucleus rather than a more extended region around the nucleus. The core does, however, show a faint extension to the NW in the polarized intensity image. This extension is aligned with the larger scale polarization structures, and its polarization E-vector is close to perpendicular to the radius vector from the nucleus.
From our full resolution polarization images, the measured degree of polarization at the peak flux of the nuclear point source is $`P_{2.0\mu m}^m\approx 20`$%. However, spurious polarization can arise because of small mis-alignments between the polarization images, especially in the nuclear regions where there are sharp gradients in the light distribution. To guard against such effects we have smoothed the polarization data using a 5$`\times `$5 pixel boxcar filter (0.375$`\times `$0.375 arcseconds) and re-measured the polarization in the nuclear regions. As expected, the measured degree of polarization in the nucleus in the smoothed image is less ($`P_{2.0\mu m}^m\approx 10`$%) than in the full resolution image, because of the greater degree of contamination by unpolarized starlight and extended structures around the nucleus. In order to determine the intrinsic polarization of the point source it is necessary to first determine the proportion of flux contributed by the point source to the total flux in the nuclear regions. Experiments involving the subtraction of a Tiny Tim generated point spread function (Krist & Hook 1997) suggest that an upper limit on the fractional contribution of the nucleus to the total flux in a 5$`\times `$5 pixel box centred on the nucleus is $`f_{nuc}<35`$%. Thus, assuming that all the polarization in the near-nuclear regions is due to the unresolved compact core, and that the remainder of the light is unpolarized, the intrinsic polarization of the unresolved core source is $`P_{2.0\mu m}^{intr}=P_{2.0\mu m}^m/f_{nuc}>28`$%.
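Both of the corrections above are instances of one dilution formula: the intrinsic polarization is the measured value divided by the flux fraction $`f`$ contributed by the polarized component. A trivial sketch with the numbers quoted above:

```python
def intrinsic_polarization(p_measured, f_polarized):
    """Measured polarization divided by the flux fraction of the
    polarized component (the rest of the light assumed unpolarized)."""
    return p_measured / f_polarized

# South-east arm (Sec. 3): ~25% measured with ~50% starlight contamination
print(intrinsic_polarization(0.25, 1.0 - 0.50))   # -> 0.5
# Nucleus: <=10% measured in the smoothed map, point source < 35% of the flux
print(intrinsic_polarization(0.10, 0.35))         # -> ~0.29, i.e. >28%
```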
The position angle of the polarization E-vector of the core source measured in our full resolution polarization map ($`PA=201\pm 3`$) is close to perpendicular to the radio jet axis ($`PA=105\pm 5`$). This is similar to the situation seen in other AGN, and in particular Cen A, where, towards longer wavelengths and smaller apertures, the infrared polarization becomes more and more closely perpendicular to the radio jet (Bailey et al. 1986).
## 4 Discussion
### 4.1 The nature of the unresolved core source
A major motivation for the HST observations was to investigate the nature of the compact core source and the cause of the relatively large polarization measured in the core by Packham et al. (1998). The explanation favoured by Packham et al. is that the compact core source represents transmitted quasar light, while the high polarization is due to dichroic absorption by aligned dust grains in the central obscuring torus. Because the dichroic mechanism is relatively inefficient, a high polarization implies a large extinction: from observations of Galactic stars it is known that at least 55 magnitudes of visual extinction is required to produce a K-band polarization of 28% for optimum grain alignment (Jones 1989). More typically, the correlation between K-band polarization and extinction deduced for Galactic stars by Jones (1989) implies that an extinction of $`A_v350`$ magnitudes would be required for $`P_k=28`$%. For comparison, an upper limit on the K-band extinction in Cygnus A, estimated by comparing the 2.25$`\mu `$m core flux with mid-IR and X-ray fluxes, is $`A_v<94`$ magnitudes (see Tadhunter et al. 1999 for details). Thus, the dichroic mechanism is only feasible if the efficiency of the mechanism in Cygnus A is greater than it is along most lines of sight in our Galaxy. Such enhanced efficiency cannot be entirely ruled out, given that the Galactic dichroic polarization involves a randomly oriented magnetic field component, whereas the magnetic fields in the central obscuring regions of AGN may be more coherent. In this context it is notable that near- and mid-IR polarization measurements of the central regions of the nearby Seyfert galaxy NGC1068 provide evidence for a greater dichroic efficiency than predicted by the Jones (1989) correlation, with $`P_k=5`$% produced by $`A_v`$20 – 40 magnitudes (Lumsden et al. 1999). 
However, even the greater dichroic efficiency deduced for NGC1068 would not be sufficient to produce the high polarization measured in the core of Cygnus A if $`A_v<94`$ magnitudes.
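This conclusion can be checked numerically under the crude simplifying assumption that dichroic polarization scales linearly with extinction at fixed grain-alignment efficiency (the real Jones 1989 relation is empirical and non-linear, so the sketch below only illustrates the order of magnitude):

```python
# Crude check: scale the two anchor points quoted in the text linearly
# (an assumption; dichroic polarization vs extinction is not exactly linear).

def p_k_dichroic(a_v, a_v_ref, p_k_ref):
    """K-band polarization (%) implied by extinction a_v (mag),
    linearly scaled from a reference anchor point (a_v_ref, p_k_ref)."""
    return p_k_ref * a_v / a_v_ref

# Typical Galactic efficiency: A_v ~ 350 mag corresponds to P_K = 28%.
p_typical = p_k_dichroic(94.0, a_v_ref=350.0, p_k_ref=28.0)

# Enhanced efficiency as in NGC1068: P_K = 5% from A_v ~ 30 mag.
p_enhanced = p_k_dichroic(94.0, a_v_ref=30.0, p_k_ref=5.0)

print(round(p_typical, 1), round(p_enhanced, 1))  # 7.5 15.7
```

Neither value reaches the observed 28% core polarization if the K-band extinction really is below 94 magnitudes, which is the efficiency problem described above.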
The efficiency problem might be resolved if the extinction to the core source in the K-band is higher than the $`A_v<94`$ estimated on the basis of comparisons of the K-band flux with the mid-IR and X-ray fluxes. Indeed, substantially higher extinctions have been deduced for Cygnus A, both from modelling the X-ray spectrum of the core ($`A_v=170\pm 30`$: Ueno et al. 1994) and from comparisons between hard-X-ray continuum, \[OIII\] emission line and mid-infrared continuum fluxes ($`A_v=143\pm 35`$: Ward 1996, Simpson 1995). If such high extinctions also apply to the quasar nucleus in the K-band, the low efficiency of the dichroic mechanism would be less of a problem. However, for any reasonable quasar SED, the contribution of such a highly obscured quasar nucleus to the flux and the polarization of the detected 2.0$`\mu `$m core source would be negligible (i.e. we would not expect to detect the quasar nucleus directly in the K-band). Thus, it is more likely that the relatively low extinction deduced from the K-band flux measurements reflects contamination of the K-band core by emission from a less-highly-obscured region, which is close enough to the central AGN to remain unresolved at the resolution of our HST observations. Although it has been proposed that the contaminating radiation in the K-band may include hot dust emission and/or line emission from quasar-illuminated regions close to the nucleus (e.g. Stockton & Ridgway 1996), such emission would have a low intrinsic polarization, and a large dichroic efficiency would still be required in order to produce the polarization of this component by dichroism.
The most plausible alternative to dichroic extinction is that the K-band core source represents scattered- rather than transmitted quasar light. In this case, the polarization is a consequence of scattering in an unresolved region close to the illuminating quasar; we do not detect the quasar nucleus directly in the K-band; and previous extinction estimates based on the K-band fluxes substantially underestimate the true nuclear extinction. Note that the presence of such a scattered component would resolve the discrepancy between the extinction estimates based on K-band flux measurements, and those based on fluxes measured at other wavelengths.
Finally, we must also consider the possibility that the core polarization is due to synchrotron radiation associated with the pc-scale jet visible in VLBI radio images (Krichbaum et al. 1996). Although the integrated polarization of the radio core is small even at high radio frequencies ($`P_{22GHz}<5`$%: Dreher 1979), we cannot entirely rule out the possibility that we are observing a highly polarized sub-component of the jet which suffers a relatively low extinction, or alternatively that the radio core source as a whole suffers large Faraday depolarization at radio wavelengths, and would appear more highly polarized at infrared wavelengths. Polarization observations at sub-mm wavelengths will be required to investigate this latter possibility.
### 4.2 The extended polarization structures
An intriguing feature of our HST observations is the high degree of polarization measured along, and only along, the NW-SE limb of the bicone. The orientation of the polarization measured along the limb is consistent with the scattering of light from a compact illuminating source in the nucleus, while the high degree of polarization is consistent with the edge-brightened bi-cone geometry of Tadhunter et al. (1999), in the sense that the scattering angle for the edge-brightened region will be close to the optimal 90° required for maximal polarization. However, the fact that the polarization is measured along only one limb is difficult to reconcile with the simplest bicone model in which the illuminating IR radiation field is azimuthally isotropic, and the scattering medium is uniformly distributed around the walls of the funnels hollowed out by the circum-nuclear outflows. In this simplest model both limbs would be highly polarized in the direction perpendicular to the radius vector of the source, and this is clearly inconsistent with the observations.
Our observations require that one or more of the assumptions implicit in the simple model must be relaxed. In general terms this means invoking either specific matter distributions within the cone, an anisotropic illumination pattern of the central source itself, or both.
Perhaps the simplest way of reconciling the polarization characteristics with the bi-cone geometry is to adjust the relative importance of scattering and intrinsic emission with azimuth around the cone, so that one limb of the cone is dominated by scattered radiation, while the other is predominantly intrinsic radiation. Since the band-pass of the NICMOS polarizers contains the Paschen alpha line, this provides an obvious potential source of the diluting radiation for the unpolarized regions. Unfortunately there is no obvious reason why such an asymmetry should exist. Furthermore, direct images with the F222M filter show that there is no radical change in the relative brightness of the two limbs of the eastern cone between 2.0$`\mu `$m and 2.25$`\mu `$m. This is an argument against the Paschen alpha model for the intrinsic emission, since the F222M filter admits no emission lines as strong as Paschen alpha.
An alternative possibility is that the NW-SE limb is brighter because the near-IR radiation field of the illuminating AGN is more intense in that direction (i.e. the illuminating radiation field is azimuthally anisotropic within the cones). In this case, the clear difference in structure between the optical and near-IR reflection nebulae suggests that the sources of illumination at the two wavelengths are different: while the source of the shorter wavelength continuum must produce a radiation field which is azimuthally isotropic within cones defined by the obscuring torus, the source of illumination at the longer wavelengths is required to display considerable degree of azimuthal anisotropy. The near-IR continuum source must also have a relatively red spectrum, in order to avoid producing similar structures at optical and infrared wavelengths. The near-IR anisotropy might arise in the following ways.
1. Beamed radiation from the inner radio jet. The near-IR continuum is emitted by a component of the inner synchrotron jet which has a direction of bulk relativistic motion significantly displaced from the axis of the large-scale radio jet, such that the radiation is beamed towards the NW-SE limb of the bicone. However, given the remarkable degree of collimation, and the lack of bending, observed in the Cygnus A jet on scales between 1pc and 100kpc, a major difficulty with this model is that the jet would have to bend through a large angle on a scale smaller than $`\sim `$1pc (the resolution of the VLBI maps), whilst retaining the rotational symmetry in the jet structure about the nucleus. A further requirement of this model is that, if the inner jet is precessing, the precession timescale must be greater than the light travel time ($`\sim `$5000 years) across the bicone structure.
2. Anisotropic hot dust emission. The near-IR continuum is emitted by hot dust in the inner regions of the galaxy, with a larger projected area of the emitting region visible from one limb of the bicone than from the other. For example, if the near-IR radiation is emitted by dust in a warped disk close to the central AGN — perhaps an outer part of the accretion disk — the warp could be oriented such that the NW-SE limb has an almost face-on view of the emitting region, whereas the NE-SW limb has a more oblique view. There is already direct observational evidence for a warped outer accretion disk in at least one active galaxy (Miyoshi et al. 1995). Given that this mechanism would produce a relatively mild, broad-beam anisotropy, an optically thick torus on scales larger than the hot dust emitting region would still be required in order to produce the sharp-edges to the illuminated bicone structure.
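The light-travel-time constraint in option (1) is a simple crossing-time calculation. The physical extent of the bicone is not stated explicitly here; the ~1.5 kpc assumed below is illustrative, chosen to be consistent with the ~5000 yr figure quoted in the text:

```python
# Light-crossing time of the bicone for an assumed extent of ~1.5 kpc
# (this size is an assumption for illustration, not a value from the text).
KPC_CM = 3.086e21   # cm per kpc
C_CMS = 2.998e10    # speed of light, cm/s
YR_S = 3.156e7      # seconds per year

size_kpc = 1.5
t_cross_yr = size_kpc * KPC_CM / C_CMS / YR_S
# t_cross_yr is ~5e3 yr, so a precessing inner jet would need a precession
# period longer than this for the illumination pattern to appear static.
```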
Note that regardless of how any anisotropy in the IR radiation field might be produced, such anisotropy would not by itself explain the nature of the unpolarized emission along the SW-NE limb of the bicone, and the lack of variation in the brightness ratio of the two limbs of the eastern cone between 2.0$`\mu `$m and 2.25$`\mu `$m. It may also be difficult to reconcile the anisotropic illumination model with the polarization properties of the unresolved core source: if the core polarization is due to scattering, the orientation of the core polarization vector implies that a substantial flux of illuminating photons must escape at large angles to the NW-SE limb of the bicone.
We expect future spectropolarimetry observations to resolve the uncertainties concerning the origin of the near-IR polarization structures in Cygnus A. For example, in the case of the anisotropic illumination mechanisms considered above, the anisotropy is in the continuum flux rather than the broad lines associated with the AGN. Thus, if this model is correct, the broad lines will be relatively weak or absent in the polarized spectrum of the extended structures. In contrast, for a non-uniform distribution of scattering material but isotropic illumination within the cones, the broad lines and continuum will be scattered equally, and the equivalent widths of the broad lines in the polarized spectrum should fall within the range measured for steep spectrum radio quasars.
## 5 Conclusions and Future Work
Our NICMOS polarimetry observations of Cygnus A have demonstrated the existence of a compact reflection nebula around the hidden core, but one whose polarization properties are inconsistent with the simplest illumination model suggested by the imaging data. The predominantly axial symmetry of the total intensity imaging is replaced by axial asymmetry and reflection symmetry about the nucleus in polarized light.
We have discussed several mechanisms to explain the near-IR polarization structures. While none of these is entirely satisfactory, it is clear that the near-IR polarization properties have the potential to provide key information about the geometries of the central emitting regions in AGN, and the near-IR continuum emission mechanism(s). In this context, it will be interesting in future to make similar observations of a large sample of powerful radio galaxies in order to determine whether the extraordinary IR polarization properties of Cygnus A are a common feature of the general population of such objects.
Acknowledgments. Based on Observations made with the ESA/NASA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. We thank the referee — Stuart Lumsden — for useful comments. A. Robinson acknowledges support from the Royal Society.

References
Bailey, J.A., Sparks, W.B., Hough, J.H., Axon, D.J., 1986, Nat, 332, 150
Carilli C., Barthel, P.D., 1996, A&ARev, 7, 1
Dreher, J.W., 1979, ApJ, 230, 687
Hines, D.C., 1998, NICMOS & VLT, ESO Workshop and Conference Proceedings, 55, Wolfram Freudling and Richard Hook (eds), p63
Jones, T.J., 1989, ApJ, 346, 728
Krichbaum, T.P., Alef, W., Witzel, A., 1996, in Cygnus A — Study of a Radio Galaxy, ed. C.L. Carilli & D.E. Harris (Cambridge: Cambridge University Press), 93
Krist J.E., Hook, R., 1997, TinyTim User Guide, Version 4.4 (Baltimore:STScI)
Lumsden, S.L., Moore, T.J.T., Smith, C., Fujiyoshi, T., Bland-Hawthorn, J., Ward, M.J., 1999, MNRAS, 303, 209
MacKenty J.W., et al. 1997, NICMOS Instrument Handbook, Version 2.0 (Baltimore: STScI)
Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., Inoue, M., 1995, Nature, 373, 127
Ogle P.M., Cohen, M.H., Miller, J.S., Tran, H.D., Fosbury, R.A.E., Goodrich, R.W., 1997, ApJ, 482, L37
Packham, C., Hough, J.H., Young, S., Chrysostomou, A., Bailey, J.A., Axon, D.J., Ward, M.J., 1996, MNRAS, 278, 406
Packham, C., Young, S., Hough, J.H., Tadhunter, C.N., Axon., 1998, MNRAS, 297, 939
Serkowski, K., 1958, Acta Astron., 8, 135
Sparks, W.B., Axon, D.J., 1999, PASP, in press
Stockton A., Ridgway S.E., Lilly, S., 1994, AJ, 108, 414
Stockton, A., Ridgway, S.E., 1996, in Cygnus A — Study of a Radio Galaxy, ed. C.L. Carilli & D.E. Harris (Cambridge: Cambridge University Press), 1
Tadhunter C.N., Scarrott S.M., Rolph C.D., 1990, MNRAS, 246, 163
Tadhunter C.N., Metz, S., Robinson, A., 1994, MNRAS, 268, 989
Tadhunter C.N., Packham, C., Axon, D.J., Jackson, N.J., Hough, J.H., Robinson, A., Young, S., Sparks, W., 1999, ApJ, 512, L91
Ueno S., Katsuji K., Minoru N., Yamauchi S., Ward M.J., 1994, ApJ, 431, L1
Ward M.J., Blanco P.R., Wilson A.S., Nishida M., 1991, ApJ, 382, 115
Ward M.J., 1996, in Cygnus A — Study of a Radio Galaxy, ed. C.L. Carilli & D.E. Harris (Cambridge: Cambridge University Press), 43
van der Marel, R., 1998: http://sol.stsci.edu/~marel/software.html
Figure 1. Infrared (2.0$`\mu `$m) polarization images of Cygnus A. Top left – total intensity (Stokes I) at full resolution; top right – polarization degree at full resolution; bottom left – polarized intensity at full resolution; and bottom right – polarization vectors plotted on a contour map of the polarized intensity image derived from the data smoothed with a 5$`\times `$5 pixel box filter, with length of vectors proportional to the percentage polarization. The line segment in the intensity image shows the direction of the radio axis. At the redshift of Cygnus A, 1.0 arcsecond corresponds to 1.0 kpc for $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.0`$.
# The Detection of the Diffuse Interstellar Bands in Dusty Starburst Galaxies
## 1 Introduction
The Diffuse Interstellar Bands (“$`DIBs`$”) have been studied for over 60 years, since Merrill (1934) first established their origin in the interstellar medium. Despite decades of intensive investigation, the identity of the carrier or carriers of the $`DIBs`$ has not been established (see the comprehensive review by Herbig 1995). The most likely candidates are large carbon-rich molecules (e.g. Sonnentrucker et al 1997), perhaps Polycyclic Aromatic Hydrocarbons (PAH’s - Salama et al 1999). The strongest and best-studied $`DIBs`$ in the optical spectrum are known empirically to trace the $`HI`$ phase of the ISM, with strengths that correlate well with the line-of-sight color excess $`E(B-V)`$, the $`HI`$ column density, and the $`NaI`$ column density as probed with the $`NaI\lambda \lambda `$5890,5896 (“$`NaD`$”) doublet (see Herbig 1993).
To date, $`DIBs`$ have been observed almost exclusively in our own Galaxy, and to a limited extent in the Magellanic Clouds (Morgan 1987). Supernova 1986G in NGC 5128 (Cen A) allowed the detection of $`DIBs`$ produced within the famous dusty gas disk in this elliptical galaxy (di Serego Alighieri & Ponz 1987). Most recently, Gallagher & Smith (1999) have reported the possible discovery of the $`DIB`$ at $`\lambda `$6283.9 Å in the spectra of two “super starclusters” near the nucleus of the prototypical starburst galaxy M 82. This is intriguing, since it suggests that the $`DIB`$ carriers are present at a normal level even in the ISM of an intense starburst, in which the ambient radiation intensity and gas pressure are orders-of-magnitude higher than in the diffuse gas in the Milky Way disk (e.g. Colbert et al 1999).
We have recently analyzed the properties of the interstellar $`NaD`$ absorption line in the spectra of 18 high-luminosity, infrared-selected (dusty) starbursts (Heckman et al. 2000 - hereafter HLSA). In the course of this analysis, we examined the 7 starbursts with the highest quality spectra for the presence of $`DIBs`$ (section 2). As we report in section 3 below, we have detected one or both of the $`\lambda `$6283.9 Å and $`\lambda `$5780.5 Å $`DIB`$ features (normally the two strongest $`DIBs`$ in the optical spectral region) in all seven cases. We have also been able to map the spatial distribution of the $`DIBs`$. These data allow us to directly compare the properties of the $`DIBs`$ in the ISM of these extreme starbursts to sight-lines in the Galaxy having similar gas column density and reddening (section 4).
## 2 Observations & Data Analysis
Details concerning the following are given in HLSA, so we only summarize the most salient points here.
The starburst sample presented here is a subset of the 32 galaxies observed by HLSA. The HLSA sample itself was selected from two far-infrared-bright samples: the Armus, Heckman, & Miley (1989) sample of galaxies with very warm far-IR colors and the Lehnert & Heckman (1995) sample of far-IR-bright disk galaxies seen at high inclination. The combined sample is representative of the far-IR-galaxy phenomenon, but is not complete.
HLSA found that the $`NaD`$ line was of predominantly interstellar origin in 18 of the 32 galaxies, while cool stars contributed significantly to the line in the other 14 cases. The plethora of weak absorption features in the spectra of cool stars greatly complicates the detection of the $`DIBs`$, while the strength of $`DIBs`$ in our Galaxy correlates strongly with the ISM $`NaI`$ column density. Thus, the galaxies in the present paper were drawn exclusively from the 18 “interstellar-dominated” objects in HLSA. We then selected the objects in HLSA having the highest signal-to-noise spectra obtained with a resolution better than $`\sim `$ 100 km s<sup>-1</sup> (see below). This results in a sample of 7 galaxies, as listed in Table 1.
The observations were undertaken in 1993 and 1994 using two different facilities: the 4-meter Blanco Telescope with the Cassegrain Spectrograph at $`CTIO`$ and the 4-meter Mayall Telescope with the RC Spectrograph at $`KPNO`$. The spectral resolution ranged from 1.1 Å FWHM in the $`KPNO`$ data to 1.8 Å FWHM in the $`CTIO`$ data. Details regarding spectrograph configurations are listed in Table 2 of HLSA.
The spectra were all processed using the standard LONGSLIT package in $`IRAF`$ (bias-subtracted, flat-fielded using spectra of a quartz-lamp, geometrically-rectified and wavelength-calibrated using a $`HeNeAr`$ arc lamp, and then sky-subtracted). See HLSA for details. No explicit correction was made for the presence of weak telluric absorption-features, but these are not a problem for our analysis. The strongest feature of relevance is the O<sub>2</sub> band from $`\sim `$ 6276 to 6284 Å (e.g. Figure 2a in Benvenuti & Porceddu 1989). Fortunately, the redshifts of our galaxies are sufficient to move the $`\lambda `$6283.9 Å $`DIB`$ out from under this feature.
The spectra were analyzed using the interactive SPLOT spectral fitting package in $`IRAF`$. In all cases, a one-dimensional “nuclear” spectrum was extracted, covering a region with a size set by the slit width and summed over 5 pixels in the spatial direction (the resulting aperture is typically 2 by 4 arcsec). The corresponding linear size of the projected aperture is generally a few hundred parsecs to a few kpc in these galaxies (median diameter 600 pc). This is a reasonable match to the typical sizes of powerful starbursts like these (e.g. Meurer et al 1997; Lehnert & Heckman 1996). Prior to further analysis, each 1-D spectrum was normalized to unit intensity by fitting it with, and then dividing it by, a low-order polynomial. Similar one-dimensional spectra for off-nuclear regions were extracted over the spatial region with adequate signal-to-noise in the continuum for each galaxy.
It is essential to remove the myriad absorption features due to cool stars from the spectra before searching for the relatively weak $`DIB`$ features. We have therefore used the average spectrum of several Galactic K giant stars as a template. After redshifting the normalized stellar template to the galaxy rest-frame, we have iteratively scaled and subtracted the template from the normalized galaxy spectrum until the residuals in the difference spectrum were minimized in the spectral regions that exclude potentially detectable interstellar features. The scale factors found for the stellar template imply that cool stars typically contribute 20 to 30% of the continuum light at $``$ 6000Å. This is consistent with both the less rigorous estimates reported in HLSA for these galaxies, and with theoretical expectations for red supergiants in a mature metal-rich starburst (Bruzual & Charlot 1993; Leitherer et al 1999). To compensate for the effects of the continuum subtraction, we added back an equivalent amount of featureless continuum. Thus, the depths and equivalent widths of the $`DIBs`$ in the original data are preserved by our analysis. As an example, we show the spectrum of the nucleus of M82 before and after the subtraction of a suitably-scaled K-star spectrum in Figure 1. These final processed spectra are shown in Figures 2 and 3.
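Although the scaling was done interactively in SPLOT, the scale-and-subtract step has a simple closed-form analogue: a least-squares scale factor for a continuum-normalized template, followed by adding back an equal amount of flat continuum. A minimal sketch (the function and array names are our own, not part of the reduction pipeline):

```python
import numpy as np

def remove_cool_star_light(galaxy, template, mask):
    """Subtract a scaled K-giant template from a continuum-normalized
    galaxy spectrum, then add back an equal amount of featureless
    continuum so that DIB depths and equivalent widths are preserved.

    galaxy, template : spectra normalized to unit continuum, on the
                       same rest-frame wavelength grid
    mask             : True where the fit is evaluated, i.e. excluding
                       potentially detectable interstellar features
    """
    g = galaxy[mask] - 1.0    # depths below the unit continuum
    t = template[mask] - 1.0
    # Closed-form least-squares scale factor minimizing |g - a*t|^2
    # (replacing the iterative trial-and-error scaling).
    a = np.dot(g, t) / np.dot(t, t)
    # Subtract the scaled template and restore 'a' units of flat continuum:
    cleaned = galaxy - a * template + a
    return cleaned, a
```

Because the subtracted continuum is replaced by an equal flat pedestal, a DIB of depth d below the unit continuum keeps exactly that depth in the cleaned spectrum, which is the property the text relies on.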
We have estimated the uncertainties in our measurements in two ways. First, we compared the measurements for the four galaxies in the sample for which we have more than one independent spectrum (taken at a different position angle). Second, we have calculated the rms noise in the cool-star-subtracted spectra and used this to calculate the implied uncertainties (assuming standard error propagation for Poissonian noise). We report these uncertainties in Tables 1 through 3.
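For the second method, the propagation reduces to a one-line formula for a continuum-normalized spectrum; the pixel scale and noise level below are illustrative values, not numbers from the paper:

```python
import math

def ew_sigma(rms, dlam_pix, n_pix):
    """1-sigma equivalent-width uncertainty (same units as dlam_pix) for
    a feature measured over n_pix pixels of a continuum-normalized
    spectrum with uncorrelated per-pixel rms noise:
    W = sum_i (1 - F_i) * dlam, so sigma_W = dlam * sqrt(n_pix) * rms."""
    return dlam_pix * math.sqrt(n_pix) * rms

# e.g. a ~6 A wide DIB sampled at 0.6 A/pixel with 1% per-pixel noise:
sigma_w_mA = 1000.0 * ew_sigma(rms=0.01, dlam_pix=0.6, n_pix=10)  # ~19 mA
```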
## 3 Results
### 3.1 The Identified Features
The two most conspicuous $`DIBs`$ along typical sight-lines in the ISM of our Galaxy are the strong, relatively narrow features at $`\lambda `$6283.9 Å and $`\lambda `$5780.5 Å (Herbig 1995). In all but one case, both features are within the wavelength coverage of our spectra. The next strongest $`DIBs`$ in the Milky Way in the relevant spectral region are at 5797.0 Å, 6010.1 Å, 6203.1 Å, and 6613.6 Å. We have searched for all these features in our spectra.
We turn our attention first to the $`\lambda `$6283.9 Å feature, which is the strongest feature in the Milky Way, and is not seriously confused by stellar photospheric lines in the starburst spectra. As can be seen in Figure 2, the $`\lambda `$6283.9 $`DIB`$ is detected in 6 of the 7 starburst nuclei (Table 1). The only exception is NGC 6240, where the feature would lie within the blue shoulder of the very strong and broad \[OI\]$`\lambda `$6300 nebular emission-line, making it very difficult to detect. In the other six cases, the equivalent width of this $`DIB`$ ranges from $`\sim `$ 0.4 to 0.9 Å with a normalized residual intensity at line-center of 0.83 to 0.94. These values correspond to some of the strongest features seen along sight-lines in the ISM of the Milky Way (e.g. Chlewicki et al 1986; Benvenuti & Porceddu 1989).
Weaker absorption due to the $`\lambda `$5780.5 $`DIB`$ is definitely present in three of the seven members of our sample (NGC2146, M82, and NGC6240), and possibly present in three more (NGC1614, NGC1808, and NGC3256). No measurement can be made in IRAS10565+2448, since the feature lies just outside our spectral passband. In the five cases in which both features are detected, the $`\lambda `$5780.5 $`DIB`$ is typically about 25% as strong as the $`\lambda `$6283.9 feature, compared to a mean value of about 45% along comparably-reddened lines-of-sight in the Milky Way (Chlewicki et al 1986; Benvenuti & Porceddu 1989). We emphasize that the measurement of the $`\lambda `$5780.5 $`DIB`$ is difficult in our spectra owing to its proximity to the comparably strong stellar photospheric CrI+CuI$`\lambda `$5782 feature (with which it is badly blended). We estimate that this introduces an uncertainty of $`\pm `$ 50 mÅ in the quoted equivalent widths (which is generally larger than the formal measurement uncertainties estimated above).
The two starburst nuclei with the strongest $`\lambda `$6283.9 Å $`DIB`$ feature are NGC 2146 and M82. These two spectra also have the highest signal-to-noise and (along with IRAS10565+2448) have the best spectral resolution and broadest wavelength coverage in our sample. In these two spectra, several other weaker $`DIB`$ features can be identified, namely those at 5797.0 Å, 6010.1 Å, 6203.1 Å, and 6613.6 Å (Figure 3). The equivalent widths of these features are $`\sim `$100 mÅ or typically 10 to 15% as large as those of the $`\lambda `$6283.9 Å feature. These relative strengths agree reasonably well with Galactic $`DIBs`$ (Chlewicki et al 1986; Benvenuti & Porceddu 1989; Herbig 1995). We summarize this information in Table 2, and note that similarly weak features could be present in the noisier spectra of the other five members of our sample.
### 3.2 Kinematics
We have measured the width and centroid of the $`\lambda `$6283.9 Å $`DIB`$ feature in all cases but NGC 6240 (where we have instead used the $`\lambda `$5780.5 Å $`DIB`$). The measured line widths (Table 3) range from $`\sim `$ 5 to 9 Å. The intrinsic width of the $`\lambda `$6283.9 ($`\lambda `$5780.5) $`DIB`$ in the Milky Way is $`\sim `$ 4 (2) Å (Herbig 1995). Taking our instrumental resolution into account, the implied Doppler broadening of the $`DIBs`$ due to macroscopic motions in the starburst ISM ranges from FWHM $`\sim `$ 160 to 430 km s<sup>-1</sup>. In four of the seven cases, these Doppler widths are smaller than the widths of the $`NaD`$ doublet (by 25 to 60%). In NGC1808, NGC2146 and M82, the $`DIB`$ and $`NaD`$ lines have roughly the same Doppler widths. Interestingly, HLSA find that these are the three cases in the present sample in which the nuclear $`NaD`$ lines do not show significant blueshifts with respect to the galaxy systemic velocity ($`v_{sys}`$).
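The Doppler widths quoted above follow from removing the instrumental and intrinsic DIB widths in quadrature, a step that is exact only for Gaussian profiles (our simplifying assumption in this sketch; the example input width is illustrative):

```python
import math

C_KMS = 2.998e5  # speed of light, km/s

def doppler_fwhm_kms(fwhm_obs, fwhm_instr, fwhm_dib, lam0):
    """Doppler FWHM (km/s) after subtracting the instrumental resolution
    and the intrinsic DIB width in quadrature (Gaussian assumption).
    All widths and lam0 in Angstroms."""
    fwhm_A = math.sqrt(fwhm_obs**2 - fwhm_instr**2 - fwhm_dib**2)
    return C_KMS * fwhm_A / lam0

# e.g. an observed width of 5.5 A for the 6283.9 A DIB (intrinsic ~4 A)
# in the CTIO data (1.8 A FWHM resolution):
v = doppler_fwhm_kms(5.5, 1.8, 4.0, 6283.9)  # ~160 km/s
```

Observed widths between about 5.5 and 9 Å then reproduce the quoted ~160 to 430 km s<sup>-1</sup> range.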
The centroids of the $`DIBs`$ are within $`\sim `$ 100 km s<sup>-1</sup> of $`v_{sys}`$. However, in all four cases with strongly blueshifted $`NaD`$ lines (NGC1614, NGC3256, IRAS10565+2448, and NGC6240), the $`DIBs`$ are mildly blueshifted (by $`\sim `$ 50 to 110 km s<sup>-1</sup>) with velocities that are intermediate between $`v_{NaD}`$ and $`v_{sys}`$. The velocities of the $`DIB`$ and $`NaD`$ absorbers roughly agree with one another (and lie close to $`v_{sys}`$) in the other three cases. This kinematic information is summarized in Table 3.
Taken together, these results suggest that the $`DIBs`$ trace gas that is more quiescent on average than that probed by the $`NaD`$ line. That is, the $`NaD`$ absorption in the four “outflow” nuclei is probably produced by a combination of quiescent material ($`v\sim v_{sys}`$ with smaller Doppler width) and disturbed, outflowing material. The bulk of the $`DIB`$ absorption would be associated with the former, and this component would dominate both the $`NaD`$ and $`DIB`$ absorption in the three other cases in our sample.
### 3.3 Spatial Extent
We have used our long-slit data to map the extra-nuclear spatial extent of the $`DIBs`$ in these galaxies. As listed in Table 1, these sizes range from $`\sim `$ 1 to 6 kpc. The absorbing region is larger (3 to 6 kpc) in the more powerful starbursts (NGC1614, NGC3256, IRAS10565+2448, and NGC6240, with $`logL_{bol}`$ = 11.3 to 12.0 $`L_{\odot }`$), and smaller (0.9 to 1.8 kpc) in the less powerful cases (NGC1808, NGC2146, and M82, with $`logL_{bol}`$ = 10.5 to 10.7 $`L_{\odot }`$). In the nearby (less powerful) starbursts, these sizes reflect the extent of the absorbing material. In the more distant (more powerful) starbursts, these sizes are lower limits set by the region with adequate signal-to-noise in the stellar continuum.
## 4 Discussion
### 4.1 Comparison to Galactic DIBs
The strengths of the prominent Galactic $`DIBs`$ correlate well with the column densities of both $`HI`$ and $`NaI`$ and with the reddening along the line-of-sight (e.g. Chlewicki et al 1986; Herbig 1993). This implies that the $`DIB`$ carrier is most plausibly associated with the cool atomic phase of the ISM. We can use the data discussed in HLSA to estimate the values for $`N_{NaI}`$ and $`E(B-V)`$ in our sample of seven starbursts, to see if the $`DIBs`$ in our starburst sample obey the same empirical relations defined by the ISM of the Milky Way.
We follow HLSA and derive estimates for $`N_{NaI}`$ using the average of the values obtained from the classical “doublet ratio” method (Spitzer 1968) and the variant described by Hammann et al (1997). We estimate the line-of-sight reddening to the stellar continuum using the observed colors compared to theoretical models for a starburst stellar population (Leitherer et al 1999; see HLSA for details). We list the results in Table 1.
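In the optically thin limit the NaI column density follows directly from the D-line equivalent width; the doublet-ratio method then uses W(D2)/W(D1) to correct this for saturation, since the thin-limit value is only a lower limit when the line saturates. A sketch of the thin-limit step, using a standard oscillator strength that is not quoted in the text (f ≈ 0.63 for D2 is an assumed literature value):

```python
def column_density_thin(w_mA, lam_A, f_osc):
    """Optically thin column density (cm^-2) from an absorption-line
    equivalent width: N = 1.13e20 * W(A) / (lam(A)^2 * f)."""
    return 1.13e20 * (w_mA / 1000.0) / (lam_A**2 * f_osc)

# NaI D2 at 5889.95 A with f ~ 0.63 (assumed standard value);
# a 500 mA line then corresponds to:
n_na = column_density_thin(500.0, 5889.95, 0.63)  # ~2.6e12 cm^-2
```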
Our best-measured $`DIB`$ by far is the strong $`\lambda `$6283.9 feature. The data compiled by Chlewicki et al (1986) and Benvenuti & Porceddu (1989) show that the mean ratio of the equivalent width of this feature and the color excess is $`<W_{6284}/E(B-V)>`$ = 1.2 Å for heavily-reddened Galactic sight-lines. For our small starburst sample we find a similar result: $`<W_{6284}/E(B-V)>`$ = 0.8 Å. This is shown in Figure 4 where we have plotted $`W_{6284}`$ vs. $`E(B-V)`$ for a large sample of Galactic sight-lines using the extensive data compiled by Herbig (1993). To compare our starburst data directly to this Galactic data we have converted the values given by Herbig (1993) for the equivalent width of the $`\lambda `$5780.5 Å $`DIB`$ into estimated values for $`W_{6284}`$ assuming that the mean ratio measured by Chlewicki et al (1986) and Benvenuti & Porceddu (1989) applies ($`W_{6284}/W_{5780}`$ = 2.2). In Figure 5 we have likewise plotted $`W_{6284}`$ vs. $`N_{NaI}`$ for both the Galactic data and our starburst data. The starbursts lie at the high end of the relationship defined by $`DIBs`$ in the Milky Way.
### 4.2 Relationship to the $`\lambda `$2175 Å Dust Feature
Over the years, there has been considerable speculation as to a possible connection between the $`DIBs`$ and the strong and broad feature at $`\lambda `$2175 Å in the Galactic extinction curve (see Benvenuti & Porceddu 1989). In this context, the detection of strong $`DIBs`$ in starburst spectra is noteworthy. As shown by Calzetti et al (1994), the $`\lambda `$2175 feature is extremely (undetectably) weak in the UV spectra of starbursts. This implies that the carriers of the $`DIBs`$ and the $`\lambda `$2175 feature must be quite distinct (in agreement with the conclusions of Benvenuti & Porceddu (1989) for the Galactic ISM).
### 4.3 Speculations
On the face of it, the above results may seem surprising given the extreme differences between the physical conditions in the ISM of intense starbursts and our own Galactic disk. The strong starbursts in our sample have bolometric surface brightnesses of $`\mathrm{\Sigma }_{bol}\sim 10^{10}`$ to $`10^{11}`$ L<sub>⊙</sub> kpc<sup>-2</sup> (e.g. Meurer et al. 1997), typical star-formation rates per unit area of $`\mathrm{\Sigma }_{SFR}\sim `$ 10 M<sub>⊙</sub> year<sup>-1</sup> kpc<sup>-2</sup>, and surface mass densities in gas and stars of $`\mathrm{\Sigma }_{gas}\sim \mathrm{\Sigma }_{stars}\sim `$ 10<sup>9</sup> M<sub>⊙</sub> kpc<sup>-2</sup> (e.g. Kennicutt 1998). These are roughly 10<sup>3</sup> ($`\mathrm{\Sigma }_{SFR}`$), 10<sup>2</sup> ($`\mathrm{\Sigma }_{gas}`$) and 10<sup>1</sup> ($`\mathrm{\Sigma }_{stars}`$) times larger than the corresponding values in the disks of normal galaxies. These values for $`\mathrm{\Sigma }_{bol}`$ correspond to a radiant energy density inside the star-forming region that is roughly 10<sup>3</sup> times the value in the ISM of the Milky Way (and see Colbert et al 1999 for direct measurements of this quantity). The rate of mechanical energy deposition (supernova heating) per unit volume in these starbursts is of order 10<sup>3</sup> times higher than in the ISM of our Galaxy (e.g. Heckman, Armus, & Miley 1990), as is the cosmic ray heating rate (Suchkov, Allen, & Heckman 1993). Finally, simple considerations of hydrostatic equilibrium imply correspondingly high pressures in the ISM: $`P\sim G\mathrm{\Sigma }_g\mathrm{\Sigma }_{tot}\sim `$ few $`\times `$ 10<sup>-9</sup> dyne cm<sup>-2</sup> (P/k $`\sim `$ few $`\times `$ 10<sup>7</sup> K cm<sup>-3</sup>, or several thousand times the value in the local ISM in the Milky Way). These high pressures have been confirmed observationally (e.g. Heckman, Armus, & Miley 1990; Colbert et al 1999).
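The hydrostatic pressure estimate can be verified directly from the fiducial surface densities quoted above (cgs units throughout; taking the total surface density as gas plus stars):

```python
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
K_B = 1.381e-16    # Boltzmann constant, erg/K
M_SUN = 1.989e33   # g
KPC_CM = 3.086e21  # cm

# Fiducial values from the text: ~1e9 M_sun/kpc^2 in gas and in stars.
sigma_gas = 1.0e9 * M_SUN / KPC_CM**2   # ~0.21 g/cm^2
sigma_tot = 2.0 * sigma_gas             # gas + stars

p_ism = G * sigma_gas * sigma_tot       # ~6e-9 dyne/cm^2
p_over_k = p_ism / K_B                  # ~4e7 K cm^-3
```

Both numbers land in the "few × 10<sup>-9</sup> dyne cm<sup>-2</sup>" and "few × 10<sup>7</sup> K cm<sup>-3</sup>" ranges quoted in the text.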
The interesting result of the above is that despite the extreme conditions prevailing inside these starbursts, the dimensionless ratio of the ISM pressure to the energy density in UV photons (or cosmic rays) is quite similar in starbursts and the disk of the Milky Way. This would in turn imply that (for a given ISM temperature) the ratio of the number densities of the gas particles and UV photons (or cosmic rays) would also be similar to their values in the local ISM. Wang, Heckman, & Lehnert (1998) have discussed the evidence that this analysis is correct for the diffuse ionized medium in starbursts and the disks of normal late-type galaxies.
This “homologous” behavior of the ISM in regions spanning over three orders-of-magnitude in heating and cooling rates per particle may help to explain why the ratio of the column density of $`DIB`$ carriers to that of both $`Na`$ atoms (Figure 5) and dust grains (Figure 4) appears so similar in extreme starbursts and the ISM of our own Galaxy. In the absence of a well-understood origin for the $`DIBs`$, further speculation seems premature.
## 5 Summary
Despite over six decades of investigation, the nature and origin of the Diffuse Interstellar Bands remain a mystery (Herbig 1995). We have presented evidence that - far from being a possibly pathological property of the local ISM in our Galaxy - $`DIBs`$ are probably ubiquitous in the spectra of far-infrared-bright (dusty) starbursts.
In our own Galaxy, the two most conspicuous $`DIBs`$ are the features at $`\lambda `$6283.9 Å and $`\lambda `$5780.5 Å. We have detected one or both of these two $`DIBs`$ in all seven starbursts selected on the basis of strong interstellar $`NaI\lambda \lambda `$5890,5896 ($`NaD`$) absorption from the larger starburst sample studied by Heckman et al (2000 - HLSA). The equivalent widths of these features are $`\sim `$ 400 to 900 mÅ and $`\sim `$ 100 to 400 mÅ for the $`\lambda `$6283.9 and $`\lambda `$5780.5 features respectively. These roughly correspond to the greatest $`DIB`$ strengths observed in the Milky Way (Herbig 1993; Chlewicki et al 1986). In two members of our sample (M82 and NGC2146) the spectra are of high enough signal-to-noise to detect four other weaker $`DIBs`$ (at 5797.0 Å, 6010.1 Å, 6203.1 Å, and 6613.6 Å). These have typical equivalent widths of $`\sim `$ 100 mÅ. The relative strengths of these $`DIBs`$ are rather similar to those in the Milky Way (Herbig 1995; Chlewicki et al 1986; Benvenuti & Porceddu 1989).
The $`DIBs`$ can be mapped over an extensive region in and around the nuclear starbursts. In the moderately powerful starbursts ($`L_{bol}`$ = few $`\times `$ 10<sup>10</sup> L<sub>⊙</sub>), this region is $`\sim `$ 1 kpc in size vs. several kpc in the more powerful starbursts ($`L_{bol}`$ = few $`\times `$ 10<sup>11</sup> L<sub>⊙</sub>). The kinematics of the gas producing the $`DIBs`$ is evidently more quiescent than that producing the $`NaD`$ absorption studied by HLSA. In the four starbursts with broad and strongly blueshifted $`NaD`$ lines, the $`DIBs`$ are less Doppler-broadened and much less blueshifted ($`v_{DIB}-v_{sys}\sim -100`$ km s<sup>-1</sup>).
In the Milky Way, the $`DIBs`$ are known to trace a dusty atomic phase of the ISM, since their equivalent widths correlate strongly with the $`HI`$ column density, the $`NaI`$ column density, and the reddening parameter $`E(B-V)`$ (Herbig 1995 and references therein). We show that these starburst $`DIBs`$ obey the same trends with $`N_{NaI}`$ and $`E(B-V)`$ (e.g. $`W_{6284}\sim `$ 1.2 $`E(B-V)`$ Å at log $`N_{NaI}\sim `$ 14 cm<sup>-2</sup>). Thus, the abundance of the $`DIB`$ carrier(s) relative to $`Na`$ atoms and dust grains appears to be very similar in intense starbursts and the diffuse ISM of our own Galaxy.
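As an illustration (a sketch added in editing, not from the original paper), the Milky Way scaling quoted above can be used to predict DIB strengths across the reddening range typical of the starburst sample. The slope of 1.2 Å per magnitude is the empirical value quoted in the text; the E(B−V) inputs are illustrative.

```python
# Predicted lambda-6284 DIB equivalent widths from the empirical scaling
# W_6284 ~ 1.2 * E(B-V) Angstrom quoted in the text (valid near
# log N_NaI ~ 14).  The E(B-V) values are illustrative, spanning the
# range typical of the starburst sample.
SLOPE = 1.2  # Angstrom per magnitude of E(B-V), from the text

def predicted_w6284_mA(ebv):
    """Equivalent width of the 6283.9 A DIB in milli-Angstrom."""
    return SLOPE * ebv * 1000.0

for ebv in (0.3, 0.5, 0.8):
    print(f"E(B-V) = {ebv:.1f}  ->  W_6284 ~ {predicted_w6284_mA(ebv):.0f} mA")
```

The predicted range (a few hundred mÅ up to nearly 1 Å) is consistent with the ~400 to 900 mÅ equivalent widths measured for the λ6283.9 feature in the sample.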
This seems surprising, given the thousand-fold greater energy density in photons and cosmic rays in the ISM of an intense starburst (e.g. Colbert et al 1999; Suchkov, Allen, & Heckman 1993). However, the gas pressures and densities in the starburst ISM are correspondingly larger as well (e.g. Heckman, Armus, & Miley 1990). Thus, such key dimensionless ratios as gas/photon density and gas-pressure/radiant-energy-density are similar in the ISM of starbursts and the disks of normal spiral galaxies (Wang, Heckman, & Lehnert 1998). This apparent “homology” may help explain the strikingly similar $`DIB`$ properties.
Finally, we point out that starbursts apparently produce strong $`DIBs`$ without producing a detectable $`\lambda `$2175 Å dust feature in their UV spectra (Calzetti et al 1994). This underscores the quite distinct origin of the two types of features.
We thank David Neufeld, Ken Sembach, and Don York for useful conversations at various stages of this project. The partial support of this project by NASA grant NAGW-3138 is acknowledged.
Note. Col. (2) The equivalent width of the $`\lambda `$6283.9 $`DIB`$ in mÅ. Col. (3) The equivalent width of the $`\lambda `$5780.5 $`DIB`$ in mÅ. The uncertainty is due primarily to the accuracy with which contamination by the stellar photospheric CrI+CuI$`\lambda `$5782 can be removed. We estimate this leads to an uncertainty of $`\pm `$50mÅ. The detection of this $`DIB`$ is therefore only tentative in NGC1614, NGC1808, and NGC3256 (indicated by a colon). Col. (4) The angular size (in arcsec) over which the $`\lambda `$6283.9 $`DIB`$ is detectable (the $`\lambda `$5780.5 $`DIB`$ was used in NGC6240). Col. (5) The corresponding physical size (in kpc), for our adopted $`H_0`$ = 70 km s<sup>-1</sup> Mpc<sup>-1</sup>. Col. (6) The estimated color excess along the line-of-sight to the stellar continuum, based on the observed continuum color and a model starburst spectral energy distribution (Leitherer et al 1999; see HLSA for details). Col. (7) The logarithm of the estimated column density of $`NaI`$ atoms (cm<sup>-2</sup>). These were derived using the standard doublet ratio technique (Spitzer 1968) and its variant in Hammann et al (1997). See HLSA for details. Based on an intercomparison of the values obtained by different techniques, we estimate the uncertainty to be $`\pm `$0.2 dex.
Note. Col. (2) The Doppler broadening (full-width-at-half maximum) in km s<sup>-1</sup> for the $`\lambda `$6283.9 $`DIB`$ (the $`\lambda `$5780.5 $`DIB`$ was used in NGC6240). These widths have been corrected for the intrinsic width of the DIB feature (see text) and for the instrumental resolution of the spectrograph (see HLSA). The raw, measured line widths and their associated uncertainties (in Å) are given in parentheses. Col. (3) The full-width-at-half-maximum in km s<sup>-1</sup> of the members of the $`NaI\lambda \lambda `$5890,5896 doublet ($`NaD`$). Uncertainties are $`\pm `$20 km s<sup>-1</sup>. Taken from HLSA. Col. (4) The heliocentric galaxy systemic velocity. Approximate uncertainties range from $`\pm `$10 km s<sup>-1</sup> for NGC1808, NGC2146, and M82 to $`\pm `$50 km s<sup>-1</sup> for NGC1614 and NGC3256, to $`\pm `$100 km s<sup>-1</sup> for NGC6240 and IRAS10565+2448. See HLSA and references therein. Col. (5) The heliocentric velocity of the $`\lambda `$6283.9 $`DIB`$ (the $`\lambda `$5780.5 $`DIB`$ was used in NGC6240). The measurement uncertainties are $`\pm `$30 km s<sup>-1</sup> for NGC2146 and M82, $`\pm `$50 km s<sup>-1</sup> for NGC1614, NGC1808, and NGC3256, and $`\pm `$80 km s<sup>-1</sup> for IRAS10565+2448 and NGC6240. These do not include any uncertainties in the true value of the rest wavelength for the $`DIB`$. Col. (6) The heliocentric velocity of the $`NaD`$ doublet taken from HLSA. Uncertainties are $`\pm `$20 km s<sup>-1</sup>.
no-problem/0002/astro-ph0002326.html
# Study of Multi-muon Events from EAS with the L3 Detector at Shallow Depth Underground
## 1 THE L3+COSMICS EXPERIMENT
The muon component of extensive air showers (EAS), due to the long muon range in the Earth’s atmosphere, carries a wealth of information about the shower development. Study of multi-muon events gives an insight into the primary cosmic ray composition and the physics of high energy hadronic interactions. The L3 detector, situated 30 m underground, offers interesting possibilities to detect and study such events , which are complementary to the data collected in traditional cosmic ray experiments. The hadron component of EAS is absorbed, while the muon component is detected with low threshold (typically, if we exclude access shafts, 15 GeV) and high momentum and spatial resolution by the sophisticated tracking system of the L3 detector. The muon spectrum can be measured up to 2 TeV with high precision. The multi-muon event rate is high enough to make studies of the knee region possible with one year of data taking.
This year, 5 billion triggers were collected with the full L3+Cosmics setup. The independent readout and data acquisition system allows us to take data in parallel with L3. The acceptance of the setup is 200 $`\mathrm{m}^2\mathrm{sr}`$. The angular resolution is better than 3.5 mrad for muons above 100 GeV and zenith angles from 0 to $`50^{}`$. The momentum resolution is 5.0 % at 45 GeV. It is calibrated with $`\mathrm{Z}\mu ^+\mu ^{}`$ events from the LEP calibration runs, where the muon momentum is known exactly.
## 2 MONTE CARLO SIMULATIONS
The simulation program ARROW is used to calculate the hadron, muon and neutrino flux at the detector level. The method combines simulations for fixed energies and different primary nuclei with a parametrization of the energy dependence and allows fast calculations for different geometries and energy thresholds. Results for the L3+C setup with a sensitive area of 200 $`\mathrm{m}^2\mathrm{sr}`$ are shown in Figure 1.
The primary composition is divided into heavy (Fe) and light (p) components in two limiting hypotheses: Fe S for a constant heavy contribution of $`\sim 30`$% and Fe H for a heavy contribution rising from 30% below to 70% above the knee. The events with up to 6 muons are dominated by proton induced showers and above 10 muons the iron takes over. To distinguish between the two hypotheses with this method we need to detect events with $`\sim 50`$ muons and more.
## 3 DATA AND OUTLOOK
A first small subset of our data is shown in Figure 2. So far only part of the events, with up to 6 muons, has been reconstructed. The observed charge ratio $`\mu ^+/\mu ^-`$ from the raw data is flat in the region between 50 and 500 GeV.
In the year 2000 an EAS array will be mounted above L3+C in order to detect the primary energy and core position. The experimental program includes studies of muon families and the primary composition, sidereal anisotropies, high multiplicity events in coincidence with other experiments, the moon shadow, searches for point sources, gamma ray bursts and exotic events.
no-problem/0002/cond-mat0002003.html
# Rashba spin splitting in two-dimensional electron and hole systems
## Abstract
In two-dimensional (2D) hole systems the inversion asymmetry induced spin splitting differs remarkably from its familiar counterpart in the conduction band. While the so-called Rashba spin splitting of electron states increases linearly with in-plane wave vector $`k_{}`$ the spin splitting of heavy hole states can be of third order in $`k_{}`$ so that spin splitting becomes negligible in the limit of small 2D hole densities. We discuss consequences of this behavior in the context of recent arguments on the origin of the metal-insulator transition observed in 2D systems.
At zero magnetic field $`B`$ spin splitting in quasi two-dimensional (2D) semiconductor quantum wells (QW’s) can be a consequence of the bulk inversion asymmetry (BIA) of the underlying crystal (e.g. a zinc blende structure) and of the structure inversion asymmetry (SIA) of the confinement potential. This $`B=0`$ spin splitting is the subject of considerable interest because it concerns details of energy band structure that are important in both fundamental research and electronic device applications (Refs. and references therein).
Here we want to focus on the SIA spin splitting which is usually the dominant part of $`B=0`$ spin splitting in 2D systems. To lowest order in $`k_{\parallel }`$ SIA spin splitting in 2D electron systems is given by the so-called Rashba model, which predicts a spin splitting linear in $`k_{\parallel }`$. For small in-plane wave vector $`k_{\parallel }`$ this is in good agreement with more accurate numerical computations. For 2D hole systems, on the other hand, the situation is more complicated because of the fourfold degeneracy of the topmost valence band $`\mathrm{\Gamma }_8^v`$, and so far only numerical computations on hole spin splitting have been performed. In the present paper we will develop an analytical model for the SIA spin splitting of 2D hole systems. We will show that in contrast to the familiar Rashba model the spin splitting of heavy hole (HH) states is basically proportional to $`k_{\parallel }^3`$. This result was already implicitly contained in several numerical computations. But a clear analytical framework was missing. We will discuss consequences of this behavior in the context of recent arguments on the origin of the metal-insulator transition observed in 2D systems.
First we want to review the major properties of the Rashba model
$$H_{6c}^{\mathrm{SO}}=\alpha \left(𝐤\times 𝐄\right)\cdot 𝝈.$$
(1)
In this equation $`𝝈=(\sigma _x,\sigma _y,\sigma _z)`$ denotes the Pauli spin matrices, $`\alpha `$ is a material-specific prefactor, and $`𝐄`$ is an effective electric field that results from the built-in or external potential $`V`$ as well as from the position dependent valence band edge. For $`𝐄=(0,0,E_z)`$ Eq. (1) becomes (using explicit matrix notation)
$$H_{6c}^{\mathrm{SO}}=\alpha E_z\left(\begin{array}{cc}0& k_{-}\\ k_+& 0\end{array}\right)$$
(2)
with $`k_\pm =k_x\pm ik_y`$. By means of perturbation theory we obtain for the spin splitting of the energy dispersion
$$\mathcal{E}_{6c}^{\mathrm{SO}}(𝐤_{\parallel })=\pm \alpha E_zk_{\parallel }$$
(3)
where $`𝐤_{\parallel }=(k_x,k_y,0)`$. Using this simple formula several groups determined the prefactor $`\alpha E_z`$ by analyzing Shubnikov-de Haas (SdH) oscillations.
Equation (3) predicts an SIA spin splitting which is linear in $`k_{}`$. For small $`k_{}`$ Eq. (3) thus becomes the dominant term in the energy dispersion $`_\pm (𝐤_{})`$, i.e., SIA spin splitting of electron states is most important for small 2D densities. In particular, we get a divergent van Hove singularity of the density-of-states (DOS) at the bottom of the subband which is characteristic for a $`k`$ linear spin splitting. As an example, we show in Fig. 1 the self-consistently calculated subband dispersion $`_\pm (k_{})`$, DOS effective mass $`m^{}/m_0`$, and spin splitting $`_+(k_{})_{}(k_{})`$ for an MOS inversion layer on InSb. For small $`k_{}`$ the spin splitting increases linearly as a function of $`k_{}`$, in agreement with Eq. (3). Due to nonparabolicity the spin splitting for larger $`k_{}`$ converges toward a constant.
The spin splitting results in unequal populations $`N_\pm `$ of the two branches $`\mathcal{E}_\pm (k_{\parallel })`$. For a given total density $`N_s=N_++N_{-}`$ and a subband dispersion $`\mathcal{E}_\pm (k_{\parallel })=\mu k_{\parallel }^2\pm \alpha E_zk_{\parallel }`$ with $`\mu =\hbar ^2/2m^{\ast }`$ we obtain
$$N_\pm =\frac{1}{2}N_s\pm \frac{\alpha E_z}{8\pi \mu ^2}\sqrt{8\pi \mu ^2N_s-\alpha ^2E_z^2}.$$
(4)
This equation can be directly compared with, e.g., the results of SdH experiments.
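As a numerical sanity check of Eq. (4) (an editorial sketch, not part of the original paper), one can instead solve the Fermi-level condition for the two branches $`\mathcal{E}_\pm =\mu k^2\pm \alpha E_zk`$ by bisection and compare with the closed form. Units are arbitrary (μ = 1) and the values of β = αE_z and N_s are illustrative.

```python
# Compare the closed-form spin-split populations with a brute-force
# solution of the Fermi-level condition for E_+-(k) = mu k^2 +/- beta k.
import math

def populations_closed_form(mu, beta, n_s):
    # Eq. (4) with beta = alpha * E_z; returns (majority, minority) density.
    s = math.sqrt(8 * math.pi * mu**2 * n_s - beta**2)
    dn = beta * s / (8 * math.pi * mu**2)
    return n_s / 2 + dn, n_s / 2 - dn

def populations_numeric(mu, beta, n_s):
    # Brute force: find E_F such that both branches together hold n_s.
    def fermi_k(e_f):
        root = math.sqrt(beta**2 + 4 * mu * e_f)
        k_up = (-beta + root) / (2 * mu)   # branch mu k^2 + beta k
        k_dn = (beta + root) / (2 * mu)    # branch mu k^2 - beta k
        return k_up, k_dn
    lo, hi = 0.0, 10.0 * mu * n_s
    for _ in range(200):
        e_f = 0.5 * (lo + hi)
        k_up, k_dn = fermi_k(e_f)
        if (k_up**2 + k_dn**2) / (4 * math.pi) < n_s:
            lo = e_f
        else:
            hi = e_f
    k_up, k_dn = fermi_k(0.5 * (lo + hi))
    return k_dn**2 / (4 * math.pi), k_up**2 / (4 * math.pi)

mu, beta, n_s = 1.0, 0.3, 1.0   # arbitrary units, illustrative values
print(populations_closed_form(mu, beta, n_s))
print(populations_numeric(mu, beta, n_s))   # agrees with the closed form
```

The two routes give the same pair of densities, which is the quantity an SdH analysis would extract from the two oscillation frequencies.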
The Rashba model (1) can be derived by purely group-theoretical means. The electron states in the lowest conduction band are $`s`$ like (orbital angular momentum $`l=0`$). With spin-orbit (SO) interaction we have total angular momentum $`j=1/2`$. Both $`𝐤`$ and $`𝐄`$ are polar vectors and $`𝐤\times 𝐄`$ is an axial vector (transforming according to the irreducible representation $`\mathrm{\Gamma }_4`$ of $`T_d`$). Likewise, the spin matrices $`\sigma _x`$, $`\sigma _y`$, and $`\sigma _z`$ form an axial vector $`𝝈`$. The dot product (1) of $`𝐤\times 𝐄`$ and $`𝝈`$ therefore transforms according to the identity representation $`\mathrm{\Gamma }_1`$, in accordance with the theory of invariants of Bir and Pikus. In the $`\mathrm{\Gamma }_6^c`$ conduction band the scalar triple product (1) is the only term of first order in $`𝐤`$ and $`𝐄`$ that is compatible with the symmetry of the band.
Now we want to compare the Rashba model (1) with the SIA spin splitting of hole states. The topmost valence band is $`p`$ like ($`l=1`$). With SO interaction we have $`j=3/2`$ for the HH/LH states ($`\mathrm{\Gamma }_8^v`$) and $`j=1/2`$ for the SO states ($`\mathrm{\Gamma }_7^v`$). For the $`\mathrm{\Gamma }_8^v`$ valence band there are two sets of matrices which transform like an axial vector, namely $`𝐉=(J_x,J_y,J_z)`$ and $`𝓙=(J_x^3,J_y^3,J_z^3)`$ (Refs. ). Here $`J_x`$, $`J_y`$ and $`J_z`$ are the angular momentum matrices for $`j=3/2`$. Thus we get
$$H_{8v}^{\mathrm{SO}}=\beta _1\left(𝐤\times 𝐄\right)\cdot 𝐉+\beta _2\left(𝐤\times 𝐄\right)\cdot 𝓙.$$
(5)
Similar to the Rashba model the first term has axial symmetry with the symmetry axis being the direction of the electric field $`𝐄`$. The second term is anisotropic, i.e., it depends on both the crystallographic orientation of $`𝐄`$ and $`𝐤`$. Using $`𝐤\cdot 𝐩`$ theory we find that the prefactor $`\beta _2`$ is always much smaller than $`\beta _1`$, i.e., the dominant term in Eq. (5) is the first term. This can be easily understood by noting that the $`𝐤\cdot 𝐩`$ coupling between $`\mathrm{\Gamma }_8^v`$ and $`\mathrm{\Gamma }_6^c`$ is isotropic, so that it contributes to $`\beta _1`$ but not to $`\beta _2`$. The prefactor $`\beta _2`$ stems from $`𝐤\cdot 𝐩`$ coupling to more remote bands such as the $`p`$ antibonding conduction bands $`\mathrm{\Gamma }_8^c`$ and $`\mathrm{\Gamma }_7^c`$.
For $`𝐄=(0,0,E_z)`$ Eq. (5) becomes (using explicit matrix notation with $`j=3/2`$ eigenstates in the order $`j_z=+3/2,+1/2,-1/2,-3/2`$)
$$H_{8v}^{\mathrm{SO}}=\beta _1E_z\left(\begin{array}{cccc}0& \frac{1}{2}\sqrt{3}k_{-}& 0& 0\\ \frac{1}{2}\sqrt{3}k_+& 0& k_{-}& 0\\ 0& k_+& 0& \frac{1}{2}\sqrt{3}k_{-}\\ 0& 0& \frac{1}{2}\sqrt{3}k_+& 0\end{array}\right)+\beta _2E_z\left(\begin{array}{cccc}0& \frac{7}{8}\sqrt{3}k_{-}& 0& 3/4k_+\\ \frac{7}{8}\sqrt{3}k_+& 0& 5/2k_{-}& 0\\ 0& 5/2k_+& 0& \frac{7}{8}\sqrt{3}k_{-}\\ 3/4k_{-}& 0& \frac{7}{8}\sqrt{3}k_+& 0\end{array}\right).$$
(6)
Here the first term couples the two LH states ($`j_z=\pm 1/2`$), and it couples the HH states ($`j_z=\pm 3/2`$) with the LH states. But there is no $`k`$ linear splitting of the HH states proportional to $`\beta _1`$. The second matrix in Eq. (6) contains a $`k`$ linear coupling of the HH states.
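This structure can be verified numerically (an editorial sketch, not part of the original paper): building the $`j=3/2`$ matrices from the ladder operator, the $`𝐉`$ invariant has no direct element between $`j_z=+3/2`$ and $`j_z=-3/2`$, while the $`𝓙`$ invariant does, linear in the in-plane wave vector. Matrix-element phases depend on the basis convention, so only magnitudes are compared with Eq. (6).

```python
# Check the structure of the two Gamma_8v invariants: (k x E).J has no
# direct HH-HH element, while (k x E).(Jx^3, Jy^3, Jz^3) couples the two
# HH states linearly in k.  Basis order j_z = +3/2, +1/2, -1/2, -3/2.
import math

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s3 = math.sqrt(3.0)
# Ladder operator J_+ for j = 3/2 (superdiagonal sqrt(3), 2, sqrt(3)).
Jp = [[0, s3, 0, 0], [0, 0, 2, 0], [0, 0, 0, s3], [0, 0, 0, 0]]
Jm = [[Jp[j][i] for j in range(4)] for i in range(4)]  # transpose (real)
Jx = [[(Jp[i][j] + Jm[i][j]) / 2 for j in range(4)] for i in range(4)]
Jy = [[(Jp[i][j] - Jm[i][j]) / (2 * 1j) for j in range(4)] for i in range(4)]

kx, ky = 0.7, 0.4  # arbitrary in-plane wave vector (illustrative units)

def invariant(jx, jy):
    # (k x E).J / E_z = k_y Jx - k_x Jy  for E = (0, 0, E_z)
    return [[ky * jx[i][j] - kx * jy[i][j] for j in range(4)] for i in range(4)]

M1 = invariant(Jx, Jy)                                  # beta_1 term
M2 = invariant(matmul(Jx, matmul(Jx, Jx)),
               matmul(Jy, matmul(Jy, Jy)))              # beta_2 term

k_mag = math.hypot(kx, ky)
print(abs(M1[0][3]))          # 0.0: no direct HH-HH coupling from beta_1
print(abs(M2[0][3]) / k_mag)  # equals 3/4 up to rounding, cf. Eq. (6)
print(abs(M1[0][1]) / k_mag)  # equals sqrt(3)/2 up to rounding, cf. Eq. (6)
```

The magnitudes of the couplings reproduce the coefficients 3/4 and √3/2 appearing in Eq. (6).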
We want to emphasize that $`H_{6c}^{\mathrm{SO}}`$ and $`H_{8v}^{\mathrm{SO}}`$ are effective Hamiltonians for the spin splitting of electron and hole subbands, which are implicitly contained in the full multiband Hamiltonian for the subband problem
$$H=H_{𝐤\cdot 𝐩}(𝐤_{\parallel },k_z=-i\partial _z)+eE_zz𝟙.$$
(7)
Here $`H_{𝐤\cdot 𝐩}`$ is a $`𝐤\cdot 𝐩`$ Hamiltonian for the bulk band structure (i.e., $`H_{𝐤\cdot 𝐩}`$ does not contain $`H_{6c}^{\mathrm{SO}}`$ or $`H_{8v}^{\mathrm{SO}}`$) and we have restricted ourselves to the lowest order term in a Taylor expansion of the confining potential $`V(z)=V_0+eE_zz+𝒪(z^2)`$ which reflects the inversion asymmetry of $`V(z)`$. The effective Hamiltonians (2) and (6) stem from the combined effect of $`H_{𝐤\cdot 𝐩}`$ and the term $`eE_zz`$. For a systematic investigation of the importance of the different terms in $`H`$ we have developed a novel, analytical approach based on a perturbative diagonalization of $`H`$ using a suitable set of trial functions and using Löwdin partitioning. Though we cannot expect accurate numerical results from such an approach it is an instructive complement to numerical methods, as we can clearly identify in the subband dispersion $`\mathcal{E}(𝐤_{\parallel })`$ the terms proportional to $`E_z`$ which are breaking the spin degeneracy. Neglecting in $`H_{𝐤\cdot 𝐩}`$ remote bands like $`\mathrm{\Gamma }_8^c`$ and $`\mathrm{\Gamma }_7^c`$ we obtain for the SIA spin splitting of the HH states
$$\mathcal{E}_{\mathrm{HH}}^{\mathrm{SO}}(k_{\parallel })\approx \pm \beta _1E_zk_{\parallel }^3.$$
(9)
In particular, we have no $`k`$ linear splitting (and $`\beta _2=0`$) if we restrict ourselves to the Luttinger Hamiltonian which includes $`\mathrm{\Gamma }_8^c`$ and $`\mathrm{\Gamma }_7^c`$ by means of second order perturbation theory. Accurate numerical computations show that the dominant part of the $`k`$ linear splitting of the HH states is due to BIA. However, for typical densities this $`k`$ linear splitting is rather small. For the LH states we have
$$\mathcal{E}_{\mathrm{LH}}^{\mathrm{SO}}(k_{\parallel })\approx \pm \beta _1E_zk_{\parallel }.$$
(10)
Thus we have a qualitative difference between the spin splitting of electron and LH states which is proportional to $`k_{\parallel }`$ and the splitting of HH states which essentially is proportional to $`k_{\parallel }^3`$. The former is most important in the low-density regime whereas the latter becomes negligible for small densities. Note that for 2D hole systems the first subband is HH like so that for low densities the SIA spin splitting is given by Eq. (9). In Eqs. (9) and (10) the lengthy prefactors depend on the details of the geometry of the QW. Moreover, we have omitted a weak dependence on the direction of $`𝐤_{\parallel }`$. But the order of the terms with respect to $`k_{\parallel }`$ is independent of these details. It is crucial that, basically, we have
$$\alpha ,\beta _1,\beta _2\propto \mathrm{\Delta }_0$$
(11)
with $`\mathrm{\Delta }_0`$ the SO gap between the bulk valence bands $`\mathrm{\Gamma }_8^v`$ and $`\mathrm{\Gamma }_7^v`$, i.e., we have no SIA spin splitting for $`\mathrm{\Delta }_0=0`$. This can be most easily seen if we express $`H_{𝐤\cdot 𝐩}`$ in a basis of orbital angular momentum eigenstates.
A more detailed analysis of our analytical model shows that both $`H_{6c}^{\mathrm{SO}}`$ and $`H_{8v}^{\mathrm{SO}}`$ stem from a third order perturbation theory for $`k_\pm `$, $`k_z=-i\partial _z`$, and $`eE_zz`$. This seems to be a rather high order. Nevertheless, the resulting terms are fairly large. In agreement with Refs. this is a simple argument to resolve the old controversy based on an argument by Ando that spin splitting in 2D systems ought to be negligibly small because for bound states in first order we have $`\langle E_z\rangle =0`$. We note that the present ansatz for the prefactors $`\alpha `$ and $`\beta _1,\beta _2`$ is quite different from the ansatz in Ref. . We obtain $`H_{6c}^{\mathrm{SO}}`$ and $`H_{8v}^{\mathrm{SO}}`$ by means of Löwdin partitioning of the Hamiltonian (7) whereas in Ref. the authors explicitly introduced $`H_{6c}^{\mathrm{SO}}`$ into their model. Moreover, we evaluate the matrix elements of $`eE_zz`$ with respect to envelope functions for the bound states whereas in Ref. the authors considered matrix elements of $`eE_zz`$ with respect to bulk Bloch functions. The latter quantities are problematic because they depend on the origin of the coordinate frame.
As an example, we show in Fig. 2 the self-consistently calculated anisotropic subband dispersion $`\mathcal{E}_\pm (𝐤_{\parallel })`$, DOS effective mass $`m^{\ast }/m_0`$, and spin splitting $`\mathcal{E}_+(𝐤_{\parallel })-\mathcal{E}_-(𝐤_{\parallel })`$ for a [001]-grown GaAs/Al<sub>0.5</sub>Ga<sub>0.5</sub>As heterostructure. The calculation was based on a $`14\times 14`$ Hamiltonian ($`\mathrm{\Gamma }_8^c`$, $`\mathrm{\Gamma }_7^c`$, $`\mathrm{\Gamma }_6^c`$, $`\mathrm{\Gamma }_8^v`$, and $`\mathrm{\Gamma }_7^v`$). It fully took into account both SIA and BIA. The weakly divergent van Hove singularity of the DOS effective mass at the subband edge indicates that the $`k`$ linear splitting is rather small. (Its dominant part is due to BIA.) Basically, the spin splitting in Fig. 2 is proportional to $`k_{\parallel }^3`$.
Only for the crystallographic growth directions [001] and [111] are the hole subband states at $`k_{\parallel }=0`$ pure HH and LH states. For low-symmetry growth directions like [113] and [110] we have mixed HH-LH eigenstates even at $`k_{\parallel }=0`$, though often the eigenstates can be labeled by their dominant spinor components. The HH-LH mixing adds a $`k`$ linear term to the splitting (9) of the HH states, which often exceeds $`\beta _2E_zk_{\parallel }`$. However, this effect is still small when compared with the cubic splitting.
For a HH subband dispersion $`\mathcal{E}_\pm (k_{\parallel })=\mu k_{\parallel }^2\pm \beta _1E_zk_{\parallel }^3`$ we obtain for the densities $`N_\pm `$ in the spin-split subbands
$$N_\pm =\frac{1}{2}N_s\pm \frac{\beta _1E_zN_s}{\sqrt{2}\mu X}\sqrt{\pi N_s\left(6-\frac{4}{X}\right)}$$
(13)
with
$$X=1+\sqrt{1-4\pi N_s\left(\frac{\beta _1E_z}{\mu }\right)^2}.$$
(14)
The spin splitting according to Eq. (13) is substantially different from Eq. (4). For electrons and a fixed electric field $`E_z`$ but varying $`N_s`$ the difference $`\mathrm{\Delta }N=N_+-N_-`$ increases like $`N_s^{1/2}`$ whereas for HH subbands it increases like $`N_s^{3/2}`$. Using a fixed density $`N_s`$ but varying $`E_z`$ it is more difficult to detect the difference between Eqs. (4) and (13). In both cases a power expansion of $`\mathrm{\Delta }N`$ gives $`\mathrm{\Delta }N=a_1|E_z|+a_3|E_z|^3+𝒪(|E_z|^5)`$ with $`a_3<0`$ for electrons and $`a_3>0`$ for HH subbands.
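The heavy-hole closed form can also be checked numerically (an editorial sketch, not part of the original paper): taking $`\mathrm{\Delta }N=2\left(\beta _1E_zN_s/\sqrt{2}\mu X\right)\sqrt{\pi N_s(6-4/X)}`$ with $`X=1+\sqrt{1-4\pi N_s(\beta _1E_z/\mu )^2}`$, a brute-force solution of the Fermi-level condition for the two branches $`\mu k^2\pm \beta _1E_zk^3`$ reproduces the same value. Units and parameter values are arbitrary and illustrative.

```python
# Compare the closed-form population difference for the cubic HH
# dispersion with a brute-force Fermi-level solution.
import math

def delta_n_closed_form(mu, beta, n_s):
    # Twice the second term of Eq. (13), with X as in Eq. (14) and
    # beta = beta_1 * E_z (requires 4*pi*n_s*(beta/mu)**2 <= 1).
    x = 1 + math.sqrt(1 - 4 * math.pi * n_s * (beta / mu) ** 2)
    return 2 * beta * n_s / (math.sqrt(2) * mu * x) * math.sqrt(math.pi * n_s * (6 - 4 / x))

def delta_n_numeric(mu, beta, n_s):
    def branch_k(e_f, sign):
        # Fermi wave vector on the branch mu k^2 + sign*beta k^3 = e_f;
        # for the '-' branch stay below the turning point k = 2mu/(3beta).
        hi = 2 * mu / (3 * beta) if sign < 0 else 10.0 * math.sqrt(e_f / mu + 1.0)
        lo = 0.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if mu * mid**2 + sign * beta * mid**3 < e_f:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    lo, hi = 0.0, 100.0 * mu * n_s
    for _ in range(200):
        e_f = 0.5 * (lo + hi)
        dens = (branch_k(e_f, 1) ** 2 + branch_k(e_f, -1) ** 2) / (4 * math.pi)
        if dens < n_s:
            lo = e_f
        else:
            hi = e_f
    e_f = 0.5 * (lo + hi)
    return (branch_k(e_f, -1) ** 2 - branch_k(e_f, 1) ** 2) / (4 * math.pi)

mu, beta, n_s = 1.0, 0.05, 1.0   # arbitrary units, illustrative values
print(delta_n_closed_form(mu, beta, n_s))
print(delta_n_numeric(mu, beta, n_s))   # matches the closed form
```

For small β the same routine also reproduces the leading-order behavior ΔN ∝ βN_s^{3/2}, in contrast to the N_s^{1/2} growth of the linear (electron) case.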
The proportionality (11) is completely analogous to the effective $`g`$ factor in bulk semiconductors. Lassnig pointed out that the $`B=0`$ spin splitting of electrons can be expressed in terms of a position dependent effective $`g`$ factor $`g^{\ast }(z)`$. In the following we want to discuss the close relationship between Zeeman splitting and $`B=0`$ spin splitting from a more general point of view. Note that in the presence of an external magnetic field $`𝐁`$ we have $`𝐤\times 𝐤=(-ie/\hbar )𝐁`$ and the Zeeman splitting in the $`\mathrm{\Gamma }_6^c`$ conduction band can be expressed as
$$H_{6c}^Z=\frac{i\hbar }{e}\frac{g^{\ast }}{2}\mu _B\left(𝐤\times 𝐤\right)\cdot 𝝈=\frac{g^{\ast }}{2}\mu _B𝐁\cdot 𝝈$$
(15)
with $`\mu _B`$ the Bohr magneton. Thus apart from a prefactor we obtain the Rashba term (1) from Eq. (15) by replacing one of the $`𝐤`$’s with the electric field $`𝐄`$. In the $`\mathrm{\Gamma }_8^v`$ valence band we have two invariants for the Zeeman splitting
$$H_{8v}^Z=2\kappa \mu _B𝐁\cdot 𝐉+2q\mu _B𝐁\cdot 𝓙.$$
(16)
Here, the first term is the isotropic contribution, and the second term is the anisotropic part. It is well-known that in all common semiconductors for which Eq. (16) is applicable the dominant contribution to $`H_{8v}^Z`$ is given by the first term proportional to $`\kappa `$ whereas the second term is rather small. Analogous to $`\beta _1`$ and $`\beta _2`$ the isotropic $`𝐤\cdot 𝐩`$ coupling between $`\mathrm{\Gamma }_8^v`$ and $`\mathrm{\Gamma }_6^c`$ contributes to $`\kappa `$ but not to $`q`$. The latter stems from $`𝐤\cdot 𝐩`$ coupling to more remote bands such as $`\mathrm{\Gamma }_8^c`$ and $`\mathrm{\Gamma }_7^c`$.
Several authors used an apparently closely related intuitive picture for the $`B=0`$ spin splitting which was based on the idea that the velocity $`v_{\parallel }=\hbar k_{\parallel }/m^{\ast }`$ of the 2D electrons is perpendicular to the electric field $`E_z`$. In the electron’s rest frame $`E_z`$ is Lorentz transformed into a magnetic field $`B`$ so that the $`B=0`$ spin splitting becomes a Zeeman splitting in the electron’s rest frame. However, this magnetic field is given by $`B=(v_{\parallel }/c^2)E_z`$ (SI units) and for typical values of $`E_z`$ and $`v_{\parallel }`$ we have $`B\sim 2\dots 20\times 10^{-7}`$ T which would result in a spin splitting of the order of $`5\times 10^{-9}\dots 5\times 10^{-5}`$ meV. On the other hand, the experimentally observed spin splitting is of the order of $`0.1\dots 10`$ meV. The $`B=0`$ spin splitting requires the SO interaction caused by the atomic cores. In bulk semiconductors this interaction is responsible for the SO gap $`\mathrm{\Delta }_0`$ between the valence bands $`\mathrm{\Gamma }_8^v`$ and $`\mathrm{\Gamma }_7^v`$ which appears in Eq. (11). The SO interaction is larger for larger atomic number of the constituting atoms. In Si we have $`\mathrm{\Delta }_0=44`$ meV whereas in Ge we have $`\mathrm{\Delta }_0=296`$ meV. Therefore, SIA spin splitting in Si quantum structures is rather small.
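The order-of-magnitude argument can be made explicit (an editorial sketch with GaAs-like illustrative parameters, not from the original paper):

```python
# "Rest frame" estimate: the confinement field E_z Lorentz-transforms into
# B = (v/c^2) E_z, whose Zeeman energy is compared with the observed
# ~0.1-10 meV splittings.  k_par, m_eff, e_z and g are illustrative values.
import math

HBAR = 1.055e-34      # J s
M0 = 9.109e-31        # electron mass, kg
C = 2.998e8           # speed of light, m/s
MU_B = 9.274e-24      # Bohr magneton, J/T
MEV = 1.602e-22       # J per meV

k_par = 1e8           # m^-1, typical 2D Fermi wave vector
m_eff = 0.07 * M0     # GaAs-like effective mass
e_z = 1e6             # V/m, typical confinement field
g = 2.0

v = HBAR * k_par / m_eff          # in-plane velocity
b_rest = v * e_z / C**2           # transformed magnetic field, Tesla
zeeman = g * MU_B * b_rest / MEV  # splitting in meV

print(f"v      ~ {v:.1e} m/s")
print(f"B_rest ~ {b_rest:.1e} T")      # ~1e-6 T, in the quoted range
print(f"Zeeman ~ {zeeman:.1e} meV")    # many orders below 0.1-10 meV
```

Even with generous parameter choices the resulting Zeeman energy falls short of the observed splittings by several orders of magnitude, which is the point of the argument: atomic spin-orbit coupling, not a kinematic Lorentz transformation, sets the scale.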
Recently, spin splitting in 2D systems has gained renewed interest because of an argument by Pudalov that relates the metal-insulator transition (MIT) in low-density 2D systems to the SIA spin splitting. Based on the Rashba model it was argued that the SIA spin splitting “results in a drastic change of the internal properties of the system even without allowing for the Coulomb interaction.” However, as we have shown above, this argument is applicable only to electron and LH states. The MIT has also been observed in pure HH systems in, e.g., Si/SiGe QW’s. As noted above, SO interaction and spin splitting in these systems are rather small, so that it appears unlikely that here the broken inversion symmetry of the confining potential is responsible for the MIT. We note that in Si 2D electron systems the effective $`g`$ factor is enhanced due to many body effects. It can be expected that similar effects are also relevant for the $`B=0`$ spin splitting.
In conclusion, we have analyzed the SIA spin splitting in 2D electron and hole systems. In 2D hole systems the splitting differs remarkably from its familiar counterpart in the conduction band. For electron states it increases linearly with in-plane wave vector $`k_{}`$ whereas the spin splitting of heavy hole states can be of third order in $`k_{}`$. We have discussed consequences of this behavior in the context of recent arguments on the origin of the metal-insulator transition observed in 2D systems.
The author wants to thank O. Pankratov, S. J. Papadakis, and M. Shayegan for stimulating discussions and suggestions.
no-problem/0002/quant-ph0002071.html
# Nonextensive approach to decoherence in quantum mechanics
## Abstract
We propose a nonextensive generalization ($`q`$ parametrized) of the von Neumann equation for the density operator. Our model naturally leads to the phenomenon of decoherence, and unitary evolution is recovered in the limit of $`q1`$. The resulting evolution yields a nonexponential decay for quantum coherences, fact that might be attributed to nonextensivity. We discuss, as an example, the loss of coherence observed in trapped ions.
In the past decade, there have been substantial advances regarding a nonextensive, $`q`$-parametrized generalization of Gibbs-Boltzmann statistical mechanics . The $`q`$ parametrization is based on an approximate expression for the exponential, or the $`q`$-exponential
$$e_q(x)=\left[1+(1-q)x\right]^{1/(1-q)},$$
(1)
being the ordinary exponential recovered in the limit of $`q\rightarrow 1`$ ($`e_1(x)=e^x`$). Thus we have a nonextensive $`q`$-exponential, i.e., $`e_q(x)e_q(y)\ne e_q(x+y)`$, in general. Such a parametrization is the basis of Tsallis statistics, in which a $`q`$-parametrized natural logarithm is used for the definition of a generalized entropy. We recall that in Tsallis’ statistics the entropy follows the pseudo-additive rule $`S_q(A+B)=S_q(A)+S_q(B)+(1-q)S_q(A)S_q(B)`$, where $`A`$ and $`B`$ represent two independent systems, i.e., $`p_{ij}(A+B)=p_i(A)p_j(B)`$. Hence for $`q<1`$ we have the super-additive regime, and for $`q>1`$ the sub-additive regime. The parameter $`q`$ thus characterizes the degree of extensivity of the physical system considered. Tsallis’ entropy seems to be privileged, in the sense that it satisfies the concavity criterion, i.e., it has a well defined concavity. Moreover, it has also been shown that most of the formal structure of the standard statistical mechanics (and thermodynamics) is retained within the nonextensive statistics, e.g., the H-theorem and the Onsager reciprocity theorem. Nonextensive effects are usual in many systems exhibiting, for instance, long-range interactions, and the nonextensive formalism has been successfully applied not only to several interesting physical problems, but also outside physics. We may cite, for instance, applications to statistical mechanics, Lévy anomalous superdiffusion, quantum scattering, low dimensional dissipative systems, cosmology, and others. This means that departures from exponential (or logarithmic) behavior not so rarely occur in nature, and that the parametrization given by Tsallis’ statistics seems to be adequate for treating them. Discussions concerning the quantification of quantum entanglement as well as the implications on local realism may also be found within the nonextensive formalism.
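Both the nonadditivity of the $`q`$-exponential and the pseudo-additive rule for $`S_q`$ can be illustrated in a few lines (an editorial sketch; $`k_B=1`$ and the probability distributions are arbitrary examples):

```python
# The q-exponential of Eq. (1) is not additive for q != 1, while the
# Tsallis entropy of two independent systems obeys exactly
# S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B).
import math
from itertools import product

def exp_q(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)            # ordinary exponential for q -> 1
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

def tsallis_entropy(p, q):
    # S_q = (1 - sum p_i^q) / (q - 1), with k_B = 1
    return (1.0 - sum(pi**q for pi in p)) / (q - 1.0)

q = 0.8
x, y = 0.3, 0.5
print(exp_q(x, q) * exp_q(y, q), exp_q(x + y, q))  # differ: nonextensive

p_a = [0.2, 0.8]
p_b = [0.5, 0.3, 0.2]
p_ab = [pa * pb for pa, pb in product(p_a, p_b)]   # independent systems
s_a, s_b = tsallis_entropy(p_a, q), tsallis_entropy(p_b, q)
lhs = tsallis_entropy(p_ab, q)
rhs = s_a + s_b + (1 - q) * s_a * s_b
print(lhs, rhs)  # equal: pseudo-additivity holds exactly
```

The pseudo-additivity identity holds exactly for any pair of independent distributions, since the $`q`$-moments of a product distribution factorize.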
Quantum entanglement is itself nonlocal in nature, and this may somehow be connected with the general idea of nonextensivity. In fact, it has been concluded that entanglement may be enhanced in the super-additive regime (for $`q<1`$) . Thus, although nonextensive ideas open up new interesting possibilities, their application to a field such as the foundations of quantum mechanics has not been frequent. We would like, in this letter, to address fundamental aspects of quantum theory within a nonextensive approach.
In quantum mechanics, the issue of decoherence, or the transformation of pure states into statistical mixtures, has recently been attracting a lot of attention, especially due to the potential applications, such as quantum computing and quantum cryptography , that might arise in highly controlled, purely quantum mechanical systems, e.g., trapped ion systems . That extraordinary control is also allowing fundamental problems in quantum mechanics to be addressed, such as the Schrödinger cat problem , and the question of the origin of decoherence itself. Despite this progress, a proper theory handling the question of the loss of coherence in quantum mechanics does not yet exist, although there are several propositions, involving either some coupling with an environment , or spontaneous (intrinsic) mechanisms . Indeed, dissipative environments, in which energy is lost, tend to cause decoherence. In spite of the destructive action of dissipation, though, a scheme has been worked out to recover quantum coherences in cavity fields , even if the environment is at $`T\ne 0`$, and after the system’s energy and coherences have substantially decayed. Regarding intrinsic decoherence, several models have already been presented , and decoherence has been attributed, for instance, to stochastic jumps in the wave function , or to gravitational effects . These models may contain one or even two new parameters . What is normally proposed is some kind of modification of the von Neumann equation for the density operator $`\widehat{\rho }`$
$$\frac{d\widehat{\rho }}{dt}=-\frac{i}{\hbar }[\widehat{H},\widehat{\rho }].$$
(2)
A typical model yielding intrinsic decoherence, such as Milburn’s (see also reference ), gives the following modified equation for the evolution of $`\widehat{\rho }`$
$$\frac{d\widehat{\rho }}{dt}\approx -\frac{i}{\hbar }[\widehat{H},\widehat{\rho }]-\frac{\tau }{2\hbar ^2}[\widehat{H},[\widehat{H},\widehat{\rho }]],$$
(3)
where $`\tau `$ is a fundamental time step. The second term on the right-hand side (the double commutator) leads to an exponential decay of coherences without energy loss . Nevertheless, other models concerning decoherence , as well as quantum measurements , predict nonexponential decay of coherences, going as $`\mathrm{exp}(-\gamma t^2)`$ rather than as $`\mathrm{exp}(-\gamma t)`$. In , decoherence is attributed to a wormhole-matter coupling of nonlocal nature. This suggests that decoherence might not be appropriately described by a Markovian stochastic process . Moreover, experimental evidence of nonexponential decay of quantum coherences has been recently reported in experiments involving trapped ions , as well as in quantum tunneling of cold atoms in an accelerating optical potential .
Here we propose a novel model to treat decoherence, within a nonextensive realm. We are able to write down a $`q`$-parametrized (generalized) evolution for the density operator, which naturally leads to decoherence. As a first step we may express von Neumann’s equation \[Eq. (2)\] in terms of the superoperator $`\widehat{\mathcal{L}}`$, defined by $`i\hbar \widehat{\mathcal{L}}\widehat{\rho }=\widehat{H}\widehat{\rho }-\widehat{\rho }\widehat{H}`$, so that we may write its formal solution in the simple form
$$\widehat{\rho }(t)=\mathrm{exp}(\widehat{\mathcal{L}}t)\widehat{\rho }(0).$$
(4)
Now, we replace the exponential superoperator $`\mathrm{exp}(\widehat{\mathcal{L}}t)`$ with a $`q`$-parametrized exponential, as defined in Eq. (1)
$$\widehat{\rho }(t)=\left[1+(1-q)\widehat{\mathcal{L}}t\right]^{1/(1-q)}\widehat{\rho }(0).$$
(5)
The standard unitary evolution given by von Neumann’s equation is recovered in the limit $`q\rightarrow 1`$. Hence, departing from Eq. (5), we may write a generalized ($`q`$-parametrized) von Neumann equation, or
$$\frac{d\widehat{\rho }}{dt}=\frac{\widehat{\mathcal{L}}}{1+(1-q)\widehat{\mathcal{L}}t}\widehat{\rho }(t).$$
(6)
Here we consider the case in which the parameter $`q`$ is very close to one. This represents a small deviation from the unitary evolution given by (2), but yields loss of coherence. For $`|1-q|\mathrm{\Omega }t\ll 1`$, where $`\mathrm{\Omega }`$ is the characteristic frequency of the physical system considered, we may write
$$\frac{d\widehat{\rho }}{dt}\approx \left[\widehat{\mathcal{L}}-(1-q)t\widehat{\mathcal{L}}^2\right]\widehat{\rho }(t).$$
(7)
It may be shown, from Eq. (7), that there is decay of the nondiagonal elements in the basis formed by the eigenstates of $`\widehat{H}`$, characterizing decoherence, while the diagonal elements are not modified. Therefore the density operator remains normalized at all times, or $`Tr\widehat{\rho }(t)=Tr\widehat{\rho }(0)=1`$, property which must hold in any proper model for decoherence. We shall remark that in order to have a physically acceptable dynamics, the extensivity parameter should be greater than one ($`q>1`$), i.e., in the sub-additive regime. We may also rewrite Eq. (7) as
$$\frac{d\widehat{\rho }}{dt}\approx -\frac{i}{\hbar }[\widehat{H},\widehat{\rho }]+g(t)[\widehat{H},[\widehat{H},\widehat{\rho }]],$$
(8)
where $`g(t)=(1-q)t/\hbar ^2`$. Eq. (8) is similar to Eq. (3), apart from the time-dependent factor $`g(t)`$ multiplying the double commutator. Note that Eq. (8) is only valid for short times, $`|1-q|\mathrm{\Omega }t\ll 1`$. We have therefore constructed a novel model, based on a nonextensive generalization of von Neumann’s equation, which predicts loss of coherence, although with a time-dependence of the nondiagonal elements in the energy basis different from most of those found in the literature . This particular time-dependence leads to a nonexponential decay of quantum coherences, as we are going to discuss.
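The short-time behavior implied by Eq. (8) can be verified numerically. The sketch below is our own check, with $`\hbar =1`$, a hypothetical two-level system of frequency $`\omega `$, and $`q=1.05`$: it integrates Eq. (8) with a fourth-order Runge-Kutta scheme and compares the off-diagonal element with the Gaussian decay $`\mathrm{exp}[-(q-1)\omega ^2t^2/2]`$ that follows analytically, while the trace and the populations stay constant.

```python
import numpy as np

omega, q = 1.0, 1.05                     # hypothetical two-level system, hbar = 1
H = np.diag([omega / 2, -omega / 2])

def rhs(t, rho):
    """Right-hand side of Eq. (8): -i[H,rho] + g(t)[H,[H,rho]], g(t) = (1-q)t."""
    comm = H @ rho - rho @ H
    return -1j * comm + (1.0 - q) * t * (H @ comm - comm @ H)

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # pure state |+><+|
t, dt, T = 0.0, 1e-3, 2.0
while t < T - 1e-9:                      # standard RK4 step for the matrix ODE
    k1 = rhs(t, rho)
    k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
    k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
    k4 = rhs(t + dt, rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(np.trace(rho).real)                # 1.0: normalization preserved
print(rho[0, 0].real)                    # 0.5: populations untouched
print(abs(rho[0, 1]))                    # ~ 0.452: Gaussian decay of the coherence
print(0.5 * np.exp(-(q - 1) * omega**2 * T**2 / 2))   # ~ 0.452, same value
```

The chosen parameters satisfy the validity condition quoted above, $`|1-q|\omega T=0.1\ll 1`$.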
Now we would like to apply our model to a specific system in which experimental results are available. One of the most interesting systems investigated nowadays is a single trapped ion interacting with laser fields. Quantum state engineering of motional states of a massive oscillator has been achieved , and the loss of coherence in that system has already been observed . In the experiment described in reference , two Raman lasers induce transitions between two internal electronic states ($`|e\rangle `$ and $`|g\rangle `$) of a single <sup>9</sup>Be<sup>+</sup> trapped ion, as well as amongst center-of-mass vibrational states $`|n\rangle `$. In the interaction picture, and under the rotating wave approximation, the effective hamiltonian (in one dimension) may be written as
$$\widehat{H}_{int}^I=\hbar \mathrm{\Omega }\left(\sigma _+e^{i\eta (\widehat{a}_x^{\dagger }+\widehat{a}_x)-i\delta t}+\sigma _{-}e^{-i\eta (\widehat{a}_x^{\dagger }+\widehat{a}_x)+i\delta t}\right),$$
(9)
where $`\mathrm{\Omega }`$ is the coupling constant and $`\delta `$ the detuning of the frequency difference of the two laser beams with respect to $`\omega _0`$, corresponding to the energy difference between $`|e\rangle `$ and $`|g\rangle `$ ($`E_e-E_g=\hbar \omega _0`$). The Lamb-Dicke parameter is $`\eta =k\sqrt{\hbar /(2m\omega _x)}`$, where $`k`$ is the magnitude of the difference between the wavevectors of the two laser beams, $`\omega _x`$ is the vibrational frequency of the ion’s center of mass ($`m`$ being the ion mass), and $`\widehat{a}_x`$ ($`\widehat{a}_x^{\dagger }`$) are annihilation and creation operators of vibrational excitations. The ion is Raman cooled to the (vibrational) vacuum state $`|0\rangle `$, from which Fock (as well as coherent and squeezed) states may be prepared by applying a sequence of laser pulses . If the atom is initially in the $`|g,n\rangle `$ state, and the Raman lasers are tuned to the first blue sideband ($`\delta =\omega _x`$), the evolution according to the hamiltonian in Eq. (9) (for small $`\eta `$) will be such that the system performs Rabi oscillations between the states $`|g,n\rangle `$ and $`|e,n+1\rangle `$. If the excitation distribution of the initial vibrational state of the ion is $`P_n`$, the occupation probability of the ground state $`P_g(t)`$ will be given by
$$P_g(n,t)=\frac{1}{2}\left[1+\sum _nP_n\mathrm{cos}(2\mathrm{\Omega }_nt)e^{-\gamma _nt}\right].$$
(10)
Here the Rabi frequency $`\mathrm{\Omega }_n`$ is
$$\mathrm{\Omega }_n=\mathrm{\Omega }\frac{e^{-\eta ^2/2}}{\sqrt{n+1}}\eta L_n^1(\eta ^2),$$
(11)
where $`\mathrm{\Omega }/2\pi =500`$ kHz, $`L_n^1`$ are generalized Laguerre polynomials, and $`\gamma _n=\gamma _0(n+1)^{0.7}`$, with $`\gamma _0=11.9`$ kHz.
The experimental results are fitted using Eq. (10) . The Rabi oscillations are damped, and an empirical damping factor, $`\gamma _n`$, is introduced in order to fit the experimental data . There have been attempts to derive such an unusual dependence on $`n`$ for the damping factor $`\gamma _n`$, by taking into account the fluctuations in laser fields and the trap parameters . Those models are somehow connected to an evolution of the type given by Eq. (3) , which predicts loss of coherence without loss of energy. In fact, at a time-scale in which the ion’s energy remains almost constant, decoherence is already considerable . Moreover, the actual causes of loss of coherence in experiments with trapped ions are still not identified . Our approach differs from previous ones because we depart from a different dynamics which leads to a peculiar nonexponential time-dependence. By employing a similar methodology to the one discussed in reference , we may calculate, from Eq. (8), the probability for the atom to occupy the ground state, obtaining
$$P_g^q(n,t)=\frac{1}{2}\left[1+\sum _nP_n\mathrm{cos}(2\mathrm{\Omega }_nt)e^{-\gamma _{n,q}t^2}\right],$$
(12)
where $`\gamma _{n,q}=(q-1)\mathrm{\Omega }_n^2/2`$, i.e., the damping factor arising in our model depends on the Rabi frequency $`\mathrm{\Omega }_n`$ and on the parameter $`q`$ in a simple way. The decay is again nonexponential, going as $`\mathrm{exp}(-\gamma t^2)`$ rather than exponentially. Now we may proceed with a graphical comparison between the curves plotted from expressions (10) and (12). This is shown in Figure 1, for two distinct cases. In Figure 1a we have plotted both the probability $`P_g`$ in Eq. (12) (dashed line) and the one in Eq. (10) (full line), as a function of time, for an initial state $`|g,0\rangle `$ ($`P_n=\delta _{n,0}`$), a Lamb-Dicke parameter $`\eta =0.202`$, and $`q=1.001`$. We note a clear departure from exponential behavior in the curve arising from our model, although the two might coincide for some time-intervals. We remark that Eq. (12) is valid for times such that $`|1-q|\mathrm{\Omega }t\ll 1`$. Here $`|1-q|\mathrm{\Omega }t_{max}\simeq 0.17`$. Right below, in Figure 1b, there is a similar plot, but using a different initial condition for the distribution of excitations of the ion state, e.g., a coherent state $`|\alpha \rangle `$, with $`\overline{n}=\alpha ^2=3`$ ($`P_n=\mathrm{exp}(-\alpha ^2)\alpha ^{2n}/n!`$), instead of the vacuum state $`|0\rangle `$. In the coherent state case both curves are very close, although it seems that our curve (dashed line) represents a better fit to the experimental results .
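For readers who want to reproduce curves like those of Figure 1, the following sketch (our own illustration; the parameter values are the ones quoted in the text) evaluates the Rabi frequencies of Eq. (11) through the standard three-term recurrence for the generalized Laguerre polynomials, and then the damped signal of Eq. (12):

```python
import math

OMEGA = 2 * math.pi * 500e3          # Omega/2pi = 500 kHz, in rad/s
ETA, Q = 0.202, 1.001                # Lamb-Dicke parameter and q, as in the text

def laguerre1(n, x):
    """Generalized Laguerre polynomial L_n^1(x) via the three-term recurrence."""
    l_prev, l_cur = 1.0, 2.0 - x     # L_0^1 = 1, L_1^1 = 2 - x
    if n == 0:
        return l_prev
    for k in range(1, n):            # (k+1) L_{k+1} = (2k+2-x) L_k - (k+1) L_{k-1}
        l_prev, l_cur = l_cur, ((2 * k + 2 - x) * l_cur - (k + 1) * l_prev) / (k + 1)
    return l_cur

def rabi(n):
    """Eq. (11): Omega_n = Omega e^{-eta^2/2} eta L_n^1(eta^2) / sqrt(n+1)."""
    return OMEGA * math.exp(-ETA**2 / 2) * ETA * laguerre1(n, ETA**2) / math.sqrt(n + 1)

def pg_q(t, pn):
    """Eq. (12), with gamma_{n,q} = (q-1) Omega_n^2 / 2."""
    s = sum(p * math.cos(2 * rabi(n) * t) * math.exp(-(Q - 1) * rabi(n)**2 / 2 * t**2)
            for n, p in enumerate(pn))
    return 0.5 * (1.0 + s)

print(rabi(0) / (2 * math.pi) / 1e3)     # ~ 99 kHz for the vacuum state
print(pg_q(0.0, [1.0]))                  # 1.0: ion starts in the ground state

# Figure 1b initial condition: coherent state with <n> = alpha^2 = 3
pn = [math.exp(-3.0) * 3.0**n / math.factorial(n) for n in range(30)]
print(pg_q(20e-6, pn))
```

The Poissonian distribution is truncated at $`n=30`$, which is adequate since $`P_n`$ is negligible there for $`\overline{n}=3`$.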
In our model, the effects leading to decoherence are embodied in the nonextensive parameter $`q`$, rather than in a “fundamental time step” $`\tau `$, as occurs in Milburn’s model. Nonextensivity may be especially relevant when nonlocal effects, such as long-range interactions or memory effects, are involved . In the specific example treated here, the ion is not completely isolated from its surroundings; electric fields generated in the trap electrodes couple to the ion’s charge, and this is considered to be a genuine source of decoherence for the vibrational motion of the ion .
In conclusion, we have presented a novel approach for treating the loss of coherence in quantum mechanics, based on a nonextensive formalism. In our model, the decoherence depends on a single parameter, $`q`$, related to the nonextensive properties of the physical system considered. We obtain, with such a parametrization, an evolution equation which is a (nonextensive) generalization of von Neumann’s equation for the density operator, and which leads, in general, to a nonexponential decay of quantum coherences. We have applied our model to a concrete physical problem, that is, the decoherence occurring in the quantum state of a single trapped ion undergoing harmonic oscillations. This is a well-known case in which decoherence may be readily tracked down, yet its causes remain unexplained. We have found that for values of the parameter $`q`$ rather close to one (in the sub-additive regime), our model is in reasonable agreement with the available experimental results, without the need to introduce any supplementary assumptions.
###### Acknowledgements.
One of us, H.M.-C., thanks R. Bonifacio and P. Tombesi for useful comments. A.V.-B. would like to thank the hospitality at INAOE, México. This work was partially supported by CONACyT (Consejo Nacional de Ciencia y Tecnología, México), and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil).
|
no-problem/0002/astro-ph0002231.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
The formation of galactic disks is one of the most important unsolved problems in astrophysics today. In the currently favored hierarchical clustering framework, disks form in the potential wells of dark matter halos as the baryonic material cools and collapses dissipatively. Fall & Efstathiou (1980) have shown that disks formed in this way can be expected to possess the observed amount of angular momentum (and therefore the observed spatial extent for a given mass and profile shape), but only under the condition that the infalling gas retain most of its original angular momentum.
However, numerical simulations of this collapse scenario in the cold dark matter (CDM) cosmological context (e.g., Navarro & Benz 1991, Navarro & White 1994, Navarro, Frenk, & White 1995) have so far consistently indicated that when only cooling processes are included the infalling gas loses too much angular momentum (by over an order of magnitude) and the resulting disks are accordingly much smaller than required by the observations. This discrepancy is known as the angular momentum problem of disk galaxy formation. It arises from the combination of the following two facts: a) In the CDM scenario the magnitude of linear density fluctuations $`\sigma (M)=\langle (\delta M/M)^2\rangle ^{1/2}`$ increases steadily with decreasing mass scale $`M`$, leading to the formation of non-linear, virialized structures at increasingly early epochs with decreasing mass, i.e. the hierarchical “bottom-up” scenario. b) Gas cooling is very efficient at early times due to gas densities being generally higher at high redshift as well as the rate of inverse Compton cooling also increasing very rapidly with redshift. a) and b) together lead to rapid condensation of small, dense gas clouds, which subsequently lose energy and (orbital) angular momentum by dynamical friction against the surrounding dark matter halo before they eventually merge to form the central disk. A mechanism is therefore needed that prevents, or at least delays, the collapse of protogalactic gas clouds and allows the gas to preserve a larger fraction of its angular momentum as it settles into the disk. Two such possible solutions are discussed in section 2.
In section 3 we present some new results from our WDM disk galaxy formation simulations on the Tully-Fisher relation and in section 4 we discuss how the magnetic field strengths of a few $`\mu `$G observed in galactic disks can be obtained via disk galaxy formation, as an alternative to disk dynamo amplification.
## 2 Towards solving the angular momentum problem
Two ways of possibly solving the angular momentum problem have recently been discussed in the literature: a) by invoking the effects of stellar feedback processes from either single, more or less uniformly distributed stars or star-bursts and b) by assuming that the dark matter is “warm” rather than cold. Both options lead to the suppression of the formation of early, small and dense gas clouds, for a) because the small gas clouds may be disrupted due to the energetic feedback of primarily type II super-nova explosions and for b) simply because fewer of the small and dense gas clouds form in the first place for WDM free-streaming masses $`M_{f,\mathrm{WDM}}\sim 10^{10}`$-$`10^{11}M_{\odot }`$.
### 2.1 Stellar feedback processes
Sommer-Larsen et al. (1999) showed that the feedback caused by a putative, early epoch of more or less uniformly distributed population III star formation was not sufficient to solve the angular momentum problem. Based on test simulations they showed, however, that effects of feedback from star-bursts in small and dense protogalactic clouds might do that. Preliminary results of more sophisticated simulations incorporating stellar feedback processes in detail indicate that this is at least partly the case. Considerable fine-tuning seems to be required, however: About 2-3% of the gas in the proto-galactic region of a forming disk galaxy should be turned into stars. If less stars are formed the feedback is not strong enough to cure the angular momentum problem and, vice versa, if more stars are formed during this fairly early phase of star-formation, the energetic feedback causes the formation of the main disks and thereby the bulk of the stars to be delayed too much compared to the observed star-formation history of the Universe.
This requirement of fine-tuning is advantageous, however, in relation to the early chemical evolution of disk galaxies, since the early star-formation histories of the galaxies are then well constrained. Furthermore, as it is possible to track the elements produced and ejected by (primarily) type II supernovae in the star-bursts, one can determine the fraction of these elements which ultimately settle on the forming disk and hence determine the rate and metallicity of the gas falling onto the disk. In Figure 1 we show the time evolution of the oxygen abundance in a forming disk as a result of infall of a mixture of enriched and unenriched gas (neglecting the contribution of ejecta from stars formed subsequently in the disk). We have assumed a Salpeter IMF with $`M_{low}=0.1M_{\odot }`$ and $`M_{up}=60M_{\odot }`$ and that a typical type II supernova ejects $`2M_{\odot }`$ of oxygen. This abundance can be regarded as the initial abundance of the disk, its value depending on when star-formation subsequently commenced in the disk (note that such two-epoch star-formation models have been advocated by, e.g., Chiappini, Matteucci & Gratton 1997). As can be seen from the figure this initial disk abundance is of the order $`[O/H]\sim -2`$. This is similar to the lowest abundance of the low-metallicity tail of the Galactic thick disk – see Beers & Sommer-Larsen (1995).
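The order of magnitude of that initial abundance can be recovered with a back-of-the-envelope estimate (our own illustration, not the calculation behind Figure 1): count Type II supernovae per unit mass of stars formed for the Salpeter IMF quoted above, let each eject $`2M_{\odot }`$ of oxygen, and let a guessed fraction of the ejecta mix into the infalling gas. The retained-ejecta fraction and the solar oxygen mass fraction used below are assumptions, not values from the text.

```python
import math

def salpeter_moment(p, m1, m2, slope=2.35):
    """Integral of m^p * m^(-slope) dm between m1 and m2 (Salpeter IMF)."""
    e = p - slope + 1.0
    return (m2**e - m1**e) / e

M_LOW, M_UP, M_SN = 0.1, 60.0, 8.0       # M_sun; stars above ~8 M_sun explode as SN II
sn_per_msun = salpeter_moment(0, M_SN, M_UP) / salpeter_moment(1, M_LOW, M_UP)
print(sn_per_msun)                        # ~ 0.007 SNe per M_sun of stars formed

f_star = 0.025     # 2-3% of the protogalactic gas turned into stars (from the text)
f_mix = 0.2        # assumed fraction of the oxygen ejecta settling on the disk
m_oxygen = 2.0     # M_sun of oxygen per SN II (from the text)

x_o = f_star * sn_per_msun * m_oxygen * f_mix / (1.0 - f_star)
o_over_h = math.log10(x_o / 9.6e-3)       # vs. an assumed solar O mass fraction
print(o_over_h)                           # ~ -2
```

Varying the assumed retained fraction between 0.1 and 1 moves the estimate between roughly $`-2.4`$ and $`-1.4`$, bracketing the $`[O/H]\sim -2`$ quoted above.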
### 2.2 Warm dark matter
Another, more radical way of solving the angular momentum problem is to abandon CDM altogether and assume instead that dark matter is “warm”. Such a rather dramatic measure not only proves very helpful in this respect, as will be discussed below, but may also be additionally motivated: Recently, various possible shortcomings of the CDM cosmological scenario in relation to structure formation on galactic scales have been discussed in the literature: 1) CDM possibly leads to the formation of too many small galaxies relative to what is observed, i.e. the missing satellites problem (e.g., Klypin et al. 1999). 2) Even if galactic winds due to star-bursts can significantly reduce the number of visible dwarf galaxies formed, sufficiently many of the small and tightly bound dark matter systems left behind can still survive to the present day in the dark matter halos of larger galaxies like the Milky Way to possibly destroy the large, central disks via gravitational heating, as discussed by Moore et al. (1999a). 3) The dark matter halos produced in CDM cosmological simulations tend to have central cusps with $`\rho _{DM}(r)\propto r^{-N}`$, $`N\simeq 1`$-$`2`$ (Dubinski & Carlberg 1991, Navarro et al. 1996, Fukushige & Makino 1997, Moore et al. 1998, Kravtsov et al. 1998, Gelato & Sommer-Larsen 1999). This is in disagreement with the flat, central dark matter density profiles (cores) inferred from observations of the kinematics of dwarf and low surface brightness galaxies (e.g., Burkert 1995, de Blok & McGaugh 1997, Kravtsov et al. 1998, Moore et al. 1999b, but see also van den Bosch et al. 1999).
The first two problems may possibly be overcome by invoking warm dark matter (WDM) instead of CDM: On mass scales less than the free-streaming mass, $`M\lesssim M_{f,\mathrm{WDM}}`$, the growth of the initial density fluctuations in the Universe is suppressed relative to CDM due to relativistic free-streaming of the warm dark matter particles. In conventional WDM theory these become non-relativistic at redshifts $`z_{nr}\sim 10^6`$-$`10^7`$ for $`m_{\mathrm{WDM}}\sim 1`$ keV, which is the characteristic WDM particle mass required to give sub-galactic to galactic free-streaming masses. As a consequence of this suppression, fewer low mass galaxies (or “satellites”) are formed, cf., e.g., Moore et al. (1999a) and Sommer-Larsen & Dolgov (1999, SD99). The central cusps problem may be more generic (Huss et al. 1999 and Moore et al. 1999b), but WDM deserves further attention also on this point.
SD99 show that the angular momentum problem may be resolved by going from cold to warm dark matter, with characteristic free-streaming mass $`M_{f,\mathrm{WDM}}\sim 10^{10}`$-$`10^{11}M_{\odot }`$, and without having to invoke effects of stellar feedback processes at all. The reason why this kind of warm dark matter leads to a solution of the angular momentum problem is that because of the suppression of density fluctuations on sub-galactic scales relative to CDM the formation of a disk galaxy becomes a much more coherent and gentle process, enabling the infalling, disk-forming gas to retain much more of its original angular momentum. In fact SD99 find it likely that the angular momentum problem can be completely resolved by going to the WDM structure formation scenario, which is more than can be said for the CDM+feedback approach so far. In Figure 2 we show a face-on view of a disk galaxy with characteristic circular velocity (where the rotation curve is approximately constant) $`V_c\simeq 300`$ km/s formed in a WDM simulation (in this simulation gas was not converted into stars). Clearly it is no longer a problem to form extended, high angular momentum disks in fully cosmological simulations. In comparison the extent of typical disks formed in “passive” CDM simulations (i.e. simulations not incorporating the effects of stellar feedback processes) is less than 1 kpc – see, e.g., Sommer-Larsen et al. (1999).
Unlike the CDM+feedback solution, one does not get a constraint on the early star-formation histories of the proto-galaxies, so no statements about the abundance of the first generation of disk stars can be made without further assumptions.
SD99 discuss possible physical candidates for WDM particles and find that the most promising are neutrinos with weaker or stronger interactions than normal, majorons (light pseudogoldstone bosons), or mirror or shadow world neutrinos.
## 3 The Tully-Fisher relation
In Figure 3 we show the cooled-out disk mass $`M_{disk}`$ at redshift $`z`$=0 as a function of the characteristic circular velocity $`V_c`$ of model galaxies formed in our WDM simulations (assuming a Hubble parameter $`H_0`$=70 km/s/Mpc). Also shown is the $`I`$-band Tully-Fisher (TF) relation of Giovanelli et al. (1997) converted to mass assuming $`I`$-band mass-to-light ratios $`(M/L_I)`$=0.25, 0.5 and 1.0 in solar units and $`H_0`$=70 km/s/Mpc. Finally, the baryonic mass of the Milky Way, estimated in a completely independent way, is shown (see SD99 for details). As can be seen from the figure we can match the slope of the TF relation very well assuming a constant $`(M/L_I)`$. To get the normalization right a $`(M/L_I)\simeq 0.8`$ is required. SD99 argue that this is quite a reasonable value in comparison with various dynamical and spectrophotometric estimates. Moreover, it is clearly gratifying that the Milky Way data point falls right on top of the theoretical as well as observational $`M_{disk}`$-$`V_c`$ relations (for $`(M/L_I)\simeq 0.8`$, $`H_0`$=70 km/s/Mpc).
Steinmetz & Navarro (1999) and Navarro & Steinmetz (1999) find a discrepancy between the observed and “theoretical” TF on the basis of CDM simulations of disk galaxy formation. It is hence possible that WDM helps out also on this point, but this has to be checked with more detailed simulations.
## 4 Magnetic fields in galactic disks and disk galaxy formation
Rögnvaldsson (1999) showed how the typical magnetic field strengths observed in galactic disks can be explained as a result of disk galaxy formation, as an alternative to the usually assumed dynamo amplification of an initially very weak magnetic field in the disk: Hot, virialized gas ($`T\simeq 2\times 10^6`$ K) in a dark matter halo is assumed to initially follow the dark matter distribution and to be rotating slowly, corresponding to a spin-parameter $`\lambda \simeq 0.05`$, typical of galactic, dark matter halos. The hot gas is assumed to be threaded by a weak and random magnetic field. As the hot gas cools radiatively, gravity forces it to flow inwards and due to the spin it forms a growing, cold, galactic disk in the central parts of the dark matter halo. The magnetic field follows the cooling gas inwards and is strongly amplified by compression and shear in the forming disk. Rögnvaldsson (1999) carried out magnetohydrodynamical (MHD) simulations of this process using an Eulerian mesh MHD code. The simulations were run with various initial magnetic field strengths in the hot gas, parameterized by the initial ratio between the gas pressure and magnetic pressure $`\beta _0=\left(\frac{P_{gas}}{B^2/8\pi }\right)_0`$. The temporal evolution of the average magnetic field strength in the disk gas is shown in Figure 4. For weak initial fields ($`\beta _0`$=100-400 was taken as a starting point, since these are typical values in the hot intergalactic gas in clusters of galaxies) the average magnetic field strength grows gradually from about $`t`$=1 Gyr (after an initial relaxation phase). The average values of 1-2$`\mu `$G reached after about 5 Gyr are quite reasonable for typical disk galaxies, indicating a route to the explanation of the magnetic field strengths observed in galactic disks alternative to the usual dynamo one.
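To get a feeling for the numbers, the sketch below converts $`\beta _0`$ into a seed field strength for the hot halo gas. The halo gas density adopted here is an assumed fiducial value (it is not quoted in the text), so the result is indicative only:

```python
import math

K_B = 1.380649e-16            # Boltzmann constant, erg/K
N_HOT = 1e-3                  # assumed hot-halo gas density, cm^-3 (fiducial value)
T_HOT = 2e6                   # halo gas temperature, K (from the text)

def seed_field(beta0):
    """B such that beta0 = P_gas / (B^2 / 8 pi), with P_gas = n k_B T."""
    p_gas = N_HOT * K_B * T_HOT
    return math.sqrt(8.0 * math.pi * p_gas / beta0)

for beta0 in (100.0, 400.0):
    b = seed_field(beta0)
    # seed field in microgauss, and the amplification factor needed to
    # reach the ~1.5 microgauss typical of the final disks
    print(beta0, b / 1e-6, 1.5e-6 / b)
```

For these fiducial numbers the seed field is a few tenths of a $`\mu `$G, so the compression and shear in the forming disk only need to supply roughly one order of magnitude of amplification to reach the quoted 1-2 $`\mu `$G.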
Another aspect of the growth of the field strength is reflected in the radial average in the disk, shown in Figure 5 at various times for a simulation with $`\beta _0`$=400. The field strength is always highest in the outermost part of the growing disk, since the fieldlines brought in with the cooling flow are stacked on top of the already existing field there and the field is further amplified by the disk shear.
## 5 Acknowledgements
I have benefited from comments by Örnólfur Rögnvaldsson, Sasha Dolgov and Jens Schmalzing and thank the organizers for a magnificent conference. This work was supported by Danmarks Grundforskningsfond through its support for the establishment of the Theoretical Astrophysics Center.
|
no-problem/0002/gr-qc0002080.html
|
ar5iv
|
text
|
Figure 1: Four different classes of scattering for the initial string perturbative vacuum (solid line). The two spatial reflections (a) and (c) describe the transition from an expanding pre-big bang configuration to an expanding post-big bang and to a contracting pre-big bang configuration, respectively. The two Bogoliubov processes (b) and (d) represent the production of universe-antiuniverse pairs from the vacuum. In case (b) one universe is expanding, the other contracting, but both fall inside the pre-big bang singularity. In case (d) both universes are expanding, but only one falls inside the singularity, while the other survives in the post-big bang regime.
## Abstract
The decay of the string perturbative vacuum, if triggered by a suitable, duality-breaking dilaton potential, can efficiently proceed via the parametric amplification of the Wheeler-De Witt wave function in superspace, and can appropriately describe the birth of our Universe as a quantum process of pair production from the vacuum.
BA-TH/00-378
February 2000
gr-qc/0002080
Birth of the Universe as anti-tunnelling
from the string perturbative vacuum
M. Gasperini
Dipartimento di Fisica, Università di Bari,
Via G. Amendola 173, 70126 Bari, Italy
and
Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
———————————————
Essay written for the 2000 Awards of the Gravity Research Foundation,
and selected for Honorable Mention.
To appear in Int. J. Mod. Phys. D
A consistent and quantitative description of the birth of our Universe is one of the main goals of the quantum approach to cosmology. In the context of the standard scenario, in particular, quantum effects are expected to stimulate the birth of the Universe in a state approaching the de Sitter geometric configuration, appropriate to inflation . The initial cosmological state is unknown, however, and has to be fixed through some “ad-hoc” prescription. It follows that there are various possible choices for the initial boundary conditions , leading in general to different quantum pictures of the early cosmological evolution.
In the context of the pre-big bang scenario , typical of string cosmology, the initial state on the contrary is fixed, and has to approach the string perturbative vacuum. The quantum decay of this initial state necessarily crosses the high-curvature, Planckian regime, and can be appropriately described by a Wheeler-de Witt (WDW) wave function , evolving in superspace. The birth of the Universe may then be represented as a process of scattering and reflection , in an appropriate minisuperspace parametrized by the metric and by the dilaton. In that case the pre-big bang initial state – emerging from the string perturbative vacuum – simulates the boundary conditions prescribed for a process of “tunnelling from nothing”, in the context of the standard scenario . It seems thus appropriate to say that the above scattering process describes the birth of the Universe as a “tunnelling from the string perturbative vacuum” .
In a process of tunnelling, or quantum reflection, the WDW wave function corresponding to our present cosmological configuration turns out to be exponentially damped: the birth of the Universe from the string perturbative vacuum would thus appear to be a very unlikely (i.e., highly suppressed) quantum effect, according to the above representation. In the string cosmology minisuperspace, however, there are also other, more efficient “channels” of vacuum decay. The main purpose of this paper is to show that, with an appropriate model of dilaton potential, the transition from the pre-big bang to the post-big bang regime may correspond to a parametric amplification of the wave function, in such a way that the birth of the Universe can be represented as a process of “anti-tunnelling from the string perturbative vacuum”. The name “anti-tunnelling”, which is synonymous with parametric amplification (a well-known effect in the theory of cosmological perturbations), follows from the fact that the transition probability in that case is controlled by the inverse of the quantum-mechanical transmission coefficient in superspace.
In order to illustrate this possibility we shall use a quantum cosmology model based on the lowest order, gravi-dilaton string effective action, which in $`d+1`$ dimensions can be written as:
$$S=-\frac{1}{2\lambda _s^{d-1}}\int d^{d+1}x\sqrt{|g|}e^{-\varphi }\left[R+(\partial _\mu \varphi )^2+V(\varphi ,g_{\mu \nu })\right].$$
(1)
Here $`\lambda _s`$ is the fundamental string length parameter, and $`V`$ is the (possibly non-local and non-perturbative) dilaton potential. Considering an isotropic, spatially flat cosmological background,
$$\varphi =\varphi (t),g_{\mu \nu }=\mathrm{diag}\left(g_{00}(t),-a^2(t)\delta _{ij}\right),$$
(2)
with spatial sections of finite volume (a toroidal space, for instance), it is convenient to introduce the variables
$$\beta =\sqrt{d}\mathrm{ln}a,\overline{\varphi }=\varphi -\sqrt{d}\beta -\mathrm{ln}\int d^dx/\lambda _s^d,$$
(3)
and the corresponding canonical momenta, defined in the cosmic time gauge by:
$$\mathrm{\Pi }_\beta =\left(\frac{\delta S}{\delta \dot{\beta }}\right)_{g_{00}=1}=\lambda _s\dot{\beta }e^{-\overline{\varphi }},\mathrm{\Pi }_{\overline{\varphi }}=\left(\frac{\delta S}{\delta \dot{\overline{\varphi }}}\right)_{g_{00}=1}=-\lambda _s\dot{\overline{\varphi }}e^{-\overline{\varphi }}.$$
(4)
The WDW equation, which implements the Hamiltonian constraint $`H=\delta S/\delta g_{00}=0`$ in the two-dimensional minisuperspace spanned by $`\beta `$ and $`\overline{\varphi }`$, takes then the form :
$$\left[\partial _{\overline{\varphi }}^2-\partial _\beta ^2+\lambda _s^2V(\beta ,\overline{\varphi })e^{-2\overline{\varphi }}\right]\mathrm{\Psi }(\beta ,\overline{\varphi })=0$$
(5)
(we have used the differential representation $`\mathrm{\Pi }^2=-\partial ^2`$).
As is well known from low-energy, perturbative theorems, the dilaton potential is strongly suppressed (with an instantonic law) in the small coupling regime, so that the effective WDW potential appearing in eq.(5) goes to zero as we approach the flat, zero-coupling, string perturbative vacuum, $`\beta \to -\mathrm{}`$, $`\overline{\varphi }\to -\mathrm{}`$. In the opposite regime of arbitrarily large coupling the dilaton potential is unknown, but we shall assume in this paper that a possible growth of $`V`$ is not strong enough to prevent the effective WDW potential from going to zero also at large positive values of $`\beta `$ and $`\overline{\varphi }`$, so that $`V\mathrm{exp}(-2\overline{\varphi })\to 0`$ for $`\beta ,\overline{\varphi }\to \pm \mathrm{}`$. In this case, the asymptotic solutions of the WDW equation (5) can be factorized in the form of plane waves, representing free energy and momentum eigenstates:
$$\mathrm{\Psi }(\beta ,\overline{\varphi })=\psi _k^\pm (\beta )\psi _k^\pm (\overline{\varphi })\sim e^{\pm ik\beta \pm ik\overline{\varphi }},$$
(6)
where ($`k>0`$):
$$\mathrm{\Pi }_\beta \psi _k^\pm (\beta )=\pm k\psi _k^\pm (\beta ),\mathrm{\Pi }_{\overline{\varphi }}\psi _k^\pm (\overline{\varphi })=\pm k\psi _k^\pm (\overline{\varphi })$$
(7)
From a geometric point of view they represent, in minisuperspace, the four branches of the classical, low-energy string cosmology solutions , defined by the condition $`\mathrm{\Pi }_\beta =\pm \mathrm{\Pi }_{\overline{\varphi }}`$, and corresponding to :
* expansion, $`\mathrm{\Pi }_\beta >0`$, contraction, $`\mathrm{\Pi }_\beta <0`$,
* pre-big bang, $`\mathrm{\Pi }_{\overline{\varphi }}<0`$, post-big bang, $`\mathrm{\Pi }_{\overline{\varphi }}>0`$.
We now recall that, for an isotropic string cosmology solution , the dilaton is growing ($`\dot{\varphi }>0`$) only if the metric is expanding ($`\dot{\beta }>0`$), see eq. (3). If we impose, as our physical boundary condition, that the Universe emerges from the string perturbative vacuum (corresponding, asymptotically, to $`\beta \to -\mathrm{}`$, $`\varphi \to -\mathrm{}`$), then the initial state $`\mathrm{\Psi }_{in}`$ must represent a configuration which is expanding and with growing dilaton, i.e. $`\mathrm{\Psi }_{in}\sim \psi ^+(\beta )\psi ^{-}(\overline{\varphi })`$. The quantum evolution of the initial pre-big bang state is thus represented in this minisuperspace as the scattering, induced by the effective WDW potential, of an incoming wave travelling from $`-\mathrm{}`$ along the positive direction of the axes $`\beta `$ and $`\overline{\varphi }`$.
It follows that, in general, there are four different types of evolution, depending on whether the asymptotic outgoing state $`\mathrm{\Psi }_{out}`$ is a superposition of waves with the same $`\mathrm{\Pi }_\beta `$ and opposite $`\mathrm{\Pi }_{\overline{\varphi }}`$, or with the same $`\mathrm{\Pi }_{\overline{\varphi }}`$ and opposite $`\mathrm{\Pi }_\beta `$, and also depending on the identification of the time-like coordinate in minisuperspace . These four possibilities are illustrated in Fig. 1, where cases $`(a)`$ and $`(b)`$ correspond to $`\mathrm{\Psi }_{out}^\pm \sim \psi ^+(\beta )\psi ^\pm (\overline{\varphi })`$, while cases $`(c)`$ and $`(d)`$ correspond to $`\mathrm{\Psi }_{out}^\pm \sim \psi ^{-}(\overline{\varphi })\psi ^\pm (\beta )`$.
The two cases $`(a)`$ and $`(c)`$ represent scattering and reflection along the spacelike axes $`\overline{\varphi }`$ and $`\beta `$, respectively. In case $`(a)`$ the evolution along $`\beta `$ is monotonic, so that the Universe always keeps expanding. The incident wave is partially transmitted towards the pre-big bang singularity (unbounded growth of the curvature and of the dilaton, $`\beta \to +\mathrm{}`$, $`\overline{\varphi }\to +\mathrm{}`$), and partially reflected back towards the low-energy, expanding, post-big bang regime ($`\beta \to +\mathrm{}`$, $`\overline{\varphi }\to -\mathrm{}`$). In case $`(c)`$ the evolution is monotonic along the time axis $`\overline{\varphi }`$, but not along $`\beta `$. So, the incident wave is totally transmitted towards the singularity ($`\overline{\varphi }\to +\mathrm{}`$), but in part as an expanding and in part as a contracting configuration.
In the language of third quantization (i.e., second quantization of the WDW wave function in superspace) we can say that in case $`(a)`$ we have the production of expanding post-big bang states from the string perturbative vacuum; in case $`(c)`$, instead, we have the production of contracting pre-big bang states. In both cases, however, such a production is exponentially suppressed, and the suppression is proportional to the proper spatial volume of the portion of Universe emerging from the string perturbative vacuum .
The other two cases, $`(b)`$ and $`(d)`$, are qualitatively different, as the final state is a superposition of positive and negative energy eigenstates, i.e. of modes of positive and negative frequency with respect to time axes chosen in minisuperspace. In a third quantization context they represent a “Bogoliubov mixing”, describing the production of pairs of universes from the vacuum. The mode moving backwards in time has to be “re-interpreted”, like in quantum field theory, as an “antiuniverse” of positive energy and opposite momentum (in superspace). Since the inversion of momentum, in superspace, corresponds to a reflection of $`\dot{\beta }`$, the re-interpretation principle in this context changes expansion into contraction, and vice-versa.
Case $`(b)`$, in particular, describes the production of universe-antiuniverse pairs – one expanding, the other contracting – from the string perturbative vacuum. The pairs evolve towards the strong coupling regime $`\overline{\varphi }\to +\mathrm{}`$, so both the members of the pair fall inside the pre-big bang singularity. Case $`(d)`$ is more interesting, in our context, since in that case the universe-antiuniverse of the pair are both expanding: one falls inside the pre-big bang singularity, the other expands towards the low-energy, post-big bang regime, and may expand to infinity, representing the birth of a Universe like ours in a standard Friedman-like configuration.
Case $`(b)`$ was discussed in a previous paper : with a simple, duality-invariant model of potential, it was shown to represent an efficient conversion of expanding into contracting internal dimensions, associated to a parametric amplification of the wave function of the pre-big bang state. In this paper we shall concentrate on the process illustrated in case $`(d)`$, already conjectured to represent a promising candidate for an efficient transition from the pre- to the post-big bang regime, but never discussed in previous papers. To confirm this conjecture, we will provide here an explicit example in which the production of pairs of universes containing an expanding post-big bang configuration may be associated to a parametric amplification of the WDW wave function.
To this purpose we should note, first of all, that for a duality-invariant dilaton potential the string cosmology Hamiltonian associated to the action (1) is translationally invariant along the $`\beta `$ axis, $`[H,\mathrm{\Pi }_\beta ]=0`$: in this case, an initial expanding configuration keeps expanding, and the out state cannot be a mixture of states with positive and negative eigenvalues of $`\mathrm{\Pi }_\beta `$. In order to implement the process $`(d)`$ of Fig. 1 we thus need a non-local, duality-breaking potential, that contains both the metric and the dilaton, but not in the combination $`\overline{\varphi }`$ of eq. (3).
We shall use, in particular, a two-loop dilaton potential induced by an effective cosmological constant $`\mathrm{\Lambda }`$, i.e. $`V\propto \mathrm{\Lambda }\mathrm{exp}(2\varphi )`$ (two-loop potentials are known to favour the transition to the post-big bang regime already at the classical level , but only for appropriate repulsive self-interactions with $`\mathrm{\Lambda }<0`$). We shall assume, in addition, that such a potential is rapidly damped in the large radius limit $`\beta \to +\mathrm{}`$, and we shall approximate such a damping, for simplicity, by the Heaviside step function $`\theta (-\beta )`$. With this damping we represent the effective suppression of the cosmological constant, required for the transition to a realistic post-big bang configuration, and induced by some physical mechanism that does not need to be specified explicitly, for the purpose of this paper. Also, the choice of the cut-off function $`\theta (-\beta )`$ is not crucial, in our context, and other, smoother functions would be equally appropriate.
With these assumptions, the WDW equation (5) reduces to
$$\left[\partial _{\overline{\varphi }}^2-\partial _\beta ^2+\lambda _s^2\mathrm{\Lambda }\theta (-\beta )e^{2\sqrt{d}\beta }\right]\mathrm{\Psi }(\beta ,\overline{\varphi })=0,$$
(8)
and the general solution can be factorized in terms of the eigenstates of the momentum $`\mathrm{\Pi }_{\overline{\varphi }}`$, by setting
$$\mathrm{\Psi }(\beta ,\overline{\varphi })=\mathrm{\Psi }_k(\beta )e^{ik\overline{\varphi }},\left[\partial _\beta ^2+k^2-\lambda _s^2\mathrm{\Lambda }\theta (-\beta )e^{2\sqrt{d}\beta }\right]\mathrm{\Psi }_k(\beta )=0.$$
(9)
In the region $`\beta >0`$ the potential is vanishing, so that the general outgoing solution is a superposition of eigenstates of $`\mathrm{\Pi }_\beta `$ corresponding to the positive and negative frequency modes $`\psi _k^\pm `$, as in case $`(d)`$ of Fig. 1. In the opposite region $`\beta <0`$ the general solution is a combination of Bessel functions $`J_\nu (z)`$, of imaginary index $`\nu =\pm ik/\sqrt{d}`$ and argument $`z=i\lambda _s\sqrt{\mathrm{\Lambda }/d}e^{\sqrt{d}\beta }`$.
We now fix the boundary conditions by imposing that the Universe starts expanding from the string perturbative vacuum: for $`\beta \to -\mathrm{}`$, the solution must then reduce to a plane wave representing a classical, low-energy pre-big bang solution, with $`\mathrm{\Pi }_\beta =-\mathrm{\Pi }_{\overline{\varphi }}=k>0`$. In particular, if we use the differential representation $`\mathrm{\Pi }=-i\partial `$ for both $`\beta `$ and $`\overline{\varphi }`$:
$$\mathrm{\Psi }_{in}=\underset{\beta \to -\mathrm{}}{lim}\mathrm{\Psi }(\beta ,\overline{\varphi })\sim e^{-ik(\overline{\varphi }-\beta )}.$$
(10)
This choice uniquely determines the WDW wave function as:
$`\mathrm{\Psi }(\beta ,\overline{\varphi })=N_kJ_{\frac{ik}{\sqrt{d}}}\left(i\lambda _s\sqrt{{\displaystyle \frac{\mathrm{\Lambda }}{d}}}e^{\sqrt{d}\beta }\right)e^{-ik\overline{\varphi }},\beta `$ $`<`$ $`0,`$ (11)
$`=\left[A_+(k)e^{ik\beta }+A_{-}(k)e^{-ik\beta }\right]e^{-ik\overline{\varphi }},\beta `$ $`>`$ $`0.`$ (12)
With the matching conditions at $`\beta =0`$ we can then compute the Bogoliubov coefficients $`|c_\pm (k)|^2=|A_\pm (k)|^2/|N_k|^2`$ determining, in the third quantization formalism, the number $`n_k`$ of universes produced from the vacuum, for each mode $`k`$ (here $`k`$ represents a given configuration in the space of the initial parameters).
In contrast to the tunnelling process discussed in previous papers , this process may represent an efficient mechanism of vacuum decay since the wave function is parametrically amplified (i.e., $`n_k\gg 1`$) for all $`k<\lambda _s\sqrt{\mathrm{\Lambda }}`$. To illustrate this point we have numerically integrated eq. (9), and plotted in Fig. 2 the evolution in superspace of the real part of the wave function, for different configurations of initial momentum $`k`$ (the behaviour of the imaginary part is qualitatively similar).
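The amplification can also be reproduced with a few lines of code. The following sketch (not part of the original analysis; the barrier height $`\lambda _s\sqrt{\mathrm{\Lambda }}=5`$, the dimension $`d=3`$ and the unit incident normalization are illustrative assumptions) integrates the mode equation of eq. (9) for $`\beta <0`$, starting from an incident wave $`e^{ik\beta }`$, and decomposes the solution at $`\beta =0`$ into $`e^{\pm ik\beta }`$ components:

```python
# Illustrative numerical check (not from the paper) of the parametric
# amplification: for beta < 0 the mode equation (9) reads
# psi'' + [k^2 - g^2 exp(2 sqrt(d) beta)] psi = 0, with g = lambda_s sqrt(Lambda).
# A unit incident wave exp(ik beta) is integrated up to beta = 0 and decomposed
# there into exp(+/- ik beta) components.  g = 5, d = 3 are arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp

def bogoliubov(k, g=5.0, d=3, beta0=-6.0):
    sd = np.sqrt(d)

    def rhs(b, y):
        re, im, dre, dim = y
        w2 = k**2 - g**2 * np.exp(2.0 * sd * b)   # effective frequency^2
        return [dre, dim, -w2 * re, -w2 * im]

    # incident plane wave exp(ik beta), set where the barrier is negligible
    y0 = [np.cos(k * beta0), np.sin(k * beta0),
          -k * np.sin(k * beta0), k * np.cos(k * beta0)]
    s = solve_ivp(rhs, (beta0, 0.0), y0, rtol=1e-10, atol=1e-12)
    psi = s.y[0, -1] + 1j * s.y[1, -1]
    dpsi = s.y[2, -1] + 1j * s.y[3, -1]
    a_plus = 0.5 * (psi - 1j * dpsi / k)    # coefficient of exp(+ik beta)
    a_minus = 0.5 * (psi + 1j * dpsi / k)   # coefficient of exp(-ik beta)
    return abs(a_plus)**2, abs(a_minus)**2

for k in (1.0, 3.0):
    ap2, am2 = bogoliubov(k)
    print(k, ap2, am2, ap2 - am2)   # conserved current: ap2 - am2 stays 1
```

The conserved Wronskian current enforces $`|A_+|^2-|A_{-}|^2=1`$, so any $`|A_{-}|^2>0`$ signals mode mixing; for $`k`$ well below $`\lambda _s\sqrt{\mathrm{\Lambda }}`$ the mixing is large, and it decreases with growing $`k`$, in agreement with the decreasing spectrum discussed in the next paragraph.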
It may be interesting to note that the amplification is smaller at higher frequencies or – to use the language of cosmological perturbation theory – the pairs of universes are produced with a decreasing spectrum . This result has a quite reasonable interpretation, once we express the momentum $`k`$ in terms of the physical parameters of the final geometric configuration. Indeed, from the definitions (3) and (4) we find, for a realistic transition occurring at the string scale, $`\dot{\beta }\simeq \lambda _s^{-1}`$, that $`k\simeq (\mathrm{\Omega }_3/\lambda _s^3)g_s^{-2}`$, where $`\mathrm{\Omega }_3=\int a^3d^3x`$ is the proper spatial volume emerging from the transition in the post-big bang regime, and $`g_s=e^{\varphi /2}`$ is the string coupling, parametrized by the dilaton. The condition of parametric amplification,
$$k\simeq \left(\frac{\mathrm{\Omega }_3}{\lambda _s^3}\right)\frac{1}{g_s^2}<\lambda _s\sqrt{\mathrm{\Lambda }},$$
(13)
implies that the transition is strongly favoured for configurations of small enough spatial volume in string units, large enough coupling $`g_s`$, and/or large enough cosmological constant $`\mathrm{\Lambda }`$, in string units (in agreement with previous results ).
For $`k\gg \lambda _s\sqrt{\mathrm{\Lambda }}`$ the wave function does not “hit” the barrier, and there is no parametric amplification. The initial state runs almost undisturbed towards the singularity, and only a small, exponentially suppressed fraction is able to emerge in the post-big bang regime. In the context of third quantization this process can still be described as the production of pairs of universes, but the number of pairs is now exponentially damped, $`n_k\sim \mathrm{exp}(-k/\lambda _s\sqrt{\mathrm{\Lambda }})`$, with a Boltzmann factor corresponding to a “thermal bath” of universes, at the effective temperature $`T\sim \sqrt{\mathrm{\Lambda }}`$ in superspace.
In view of the above results, we may conclude that the decay of the string perturbative vacuum, if triggered by an appropriate, duality-breaking dilaton potential, can efficiently proceed via the parametric amplification of the WDW wave function in superspace, and can describe the birth of our Universe as a forced production of pairs from the vacuum fluctuations. One member of the pair disappears into the pre-big bang singularity, the other bounces back towards the low-energy region. The resulting effect is a net flux of universes that may escape to infinity in the post-big bang regime (see Fig. 3). This effect is similar to the quantum emission of radiation from a black hole , with the difference that the quanta produced in pairs from the vacuum are separated not by the black-hole horizon, but by the “Hubble” horizon associated to the “accelerated” variation of the dilaton in minisuperspace.
# NEW CONSTRAINTS ON WIMPS FROM THE CANFRANC IGEX DARK MATTER SEARCH
## 1 Introduction
Substantial evidence exists suggesting most matter in the universe is dark, and there are compelling reasons to believe it consists mainly of non-baryonic particles. Among these candidates, Weakly Interacting Massive and neutral Particles (WIMPs) are front runners. The lightest stable particles of supersymmetric theories, like the neutralino, describe a particular class of WIMPs.
Direct detection techniques rely on measurement of WIMP elastic scattering off target nuclei in a suitable detector. Slow-moving ($`\sim 300`$ km/s) and heavy ($`10`$–$`10^3`$ GeV) galactic halo WIMPs could make a Ge nucleus recoil with a few keV, at a rate which depends on the type of WIMP and interaction. Only about 1/4 of this energy is visible in the detector. Because of the low interaction rate and the small energy deposition, the direct search for particle dark matter through scattering by nuclear targets requires ultralow background detectors with very low energy thresholds.
To detect the possible presence of WIMPs, the predicted event rate is compared with the observed spectrum. If this predicted event rate is larger than the measured one, the particle under consideration can be ruled out as a dark matter component. Such absence of WIMPs can be expressed as a contour line, $`\sigma `$(m), on the WIMP-nucleus elastic scattering cross-section plane. This excludes, for each mass, those particles whose cross-section lies above the contour line, $`\sigma `$(m).
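The bin-by-bin comparison can be sketched in a few lines of code. The following is an illustrative implementation (not the collaboration's analysis code; the counts and signal shapes below are hypothetical) using the standard Poisson upper limit at 90% C.L. and the linearity of the signal in the cross-section:

```python
# Illustrative sketch (not the collaboration's code) of the exclusion recipe:
# in each energy bin the predicted WIMP signal may not exceed the 90% C.L.
# Poisson upper limit on the observed counts; since the signal is linear in
# the cross-section, the most constraining bin fixes the excluded value.
from scipy.stats import chi2

def poisson_upper_limit(n_obs, cl=0.90):
    # upper limit on a Poisson mean given n_obs observed counts
    return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

def excluded_sigma(observed, signal_per_unit_sigma):
    # signal_per_unit_sigma[i]: expected signal counts in bin i per unit sigma
    bounds = [poisson_upper_limit(n) / s
              for n, s in zip(observed, signal_per_unit_sigma) if s > 0]
    return min(bounds)

print(poisson_upper_limit(0))                         # ~2.30 = -ln(0.10)
print(excluded_sigma([10, 4, 1], [50.0, 30.0, 5.0]))  # strongest bin wins
```

Repeating `excluded_sigma` for every WIMP mass (each mass giving a different signal shape) traces out the exclusion contour $`\sigma `$(m) described above.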
This direct comparison of the expected signal with the observed background spectrum can only exclude or constrain the cross-section in terms of exclusion plots of $`\sigma `$(m). A convincing proof of the detection of dark matter would require finding unique signatures in the data, characteristic of the WIMP, which cannot be attributed to the background or instrumental artifacts. An example is the predicted summer-winter asymmetry in the WIMP signal rate due to the periodicity of the relative Earth-halo motion resulting from the Earth’s rotation around the Sun.
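The size of the expected modulation can be estimated with a back-of-the-envelope calculation (an illustration, not from the paper): for a simple Maxwellian halo with no escape cutoff, the rate above a threshold speed $`v_{min}`$ is proportional to the mean inverse speed, and the Earth velocity varies by roughly $`\pm 15`$ km/s around 230 km/s over the year. The most-probable speed $`v_0=220`$ km/s is an assumed round value.

```python
# Back-of-the-envelope illustration of the summer-winter asymmetry for a
# simple Maxwellian halo (no escape cutoff; v0, v_E values are assumptions).
import math

def eta(vmin, v_earth, v0=220.0):
    # mean inverse speed <1/v> above vmin, galaxy-frame Maxwellian,
    # Earth moving at v_earth (all speeds in km/s)
    return (math.erf((vmin + v_earth) / v0)
            - math.erf((vmin - v_earth) / v0)) / (2.0 * v_earth)

vmin = 400.0                                  # km/s, high-threshold bin
june, december = eta(vmin, 245.0), eta(vmin, 215.0)
print((june - december) / (june + december))  # percent-level asymmetry
```

At high thresholds the June rate exceeds the December rate by a few to ten percent; at low thresholds the asymmetry shrinks and can even reverse sign, which is why the effect is a distinctive but small signature.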
Germanium detectors used for double-beta decay searches have reached one of the lowest background levels of any type of detector and have a reasonable quenching factor ($`\sim 0.25`$). Thus, with sufficiently low energy thresholds, they are attractive devices for dark matter searches.
Germanium diodes dedicated to double-beta decay experiments \[4-11\] were applied to WIMP searches as early as 1987. The exclusion contour based on the best combination of data from these experiments is referred to in this paper as the “combined germanium contour”. Only recently has this exclusion plot been surpassed by a sodium iodide experiment (DAMA NaI-0), which uses a statistical pulse-shape discriminated background spectrum.
This paper presents a new germanium detector data limit for the direct detection of non-baryonic particle dark matter in the $`\sim `$50 GeV DAMA mass region.
## 2 Experiment
The IGEX experiment, optimized for detecting <sup>76</sup>Ge double-beta decay, has been described in detail elsewhere. The IGEX detectors are now also being used in the search for WIMPs interacting coherently with germanium nuclei. The COSME detector described below, is also operating in the same shield at Canfranc.
The IGEX detectors were fabricated at Oxford Instruments, Inc., in Oak Ridge, Tennessee. Russian GeO<sub>2</sub> powder, isotopically enriched to 86% <sup>76</sup>Ge, was purified, reduced to metal, and zone refined to $`10^{13}`$ p-type donor impurities per cubic centimeter by Eagle Picher, Inc., in Quapaw, Oklahoma. The metal was then transported to Oxford Instruments by surface in order to minimize activation by cosmic ray neutrons, where it was further zone refined, grown into crystals, and fabricated into detectors.
The COSME detector was fabricated at Princeton Gamma-Tech, Inc. in Princeton, New Jersey, using naturally abundant germanium. The refinement of newly-mined germanium ore to finished metal for this detector was expedited to minimize production of cosmogenic <sup>68</sup>Ge.
All of the cryostat parts were electroformed using a high purity OFHC copper/CuSO<sub>4</sub>/H<sub>2</sub>SO<sub>4</sub> plating system. The solution was continuously filtered to eliminate copper oxide, which causes porosity in the copper. A Ba(OH)<sub>2</sub> solution was added to precipitate BaSO<sub>4</sub>, which is also collected on the filter. Radium in the bath exchanges with the barium on the filter, thus minimizing radium contamination in the cryostat parts. The CuSO<sub>4</sub> crystals were purified of thorium by multiple recrystallization.
The IGEX detector used for dark matter searches, designated RG-II, has a mass of $`2.2`$ kg. The active mass of this detector, $`2.0`$ kg, was measured with a collimated source of <sup>152</sup>Eu in the Canfranc Laboratory and is in agreement with the Oxford Instruments efficiency measurements. The full-width at half-maximum (FWHM) energy resolution of RG-II was 2.37 keV at the 1333-keV line of <sup>60</sup>Co. The COSME detector has a mass of 254 g and an active mass of 234 g. The FWHM energy resolution of COSME is 0.43 keV at the 10.37 keV gallium X-ray. Energy calibration and resolution measurements were made every 7–10 days using the lines of <sup>22</sup>Na and <sup>60</sup>Co. Calibration for the low energy region was extrapolated using the X-ray lines of Pb.
For each detector, the first-stage field-effect transistor (FET) is mounted on a Teflon block a few centimeters from the center contact of the germanium crystal. The protective cover of the FET and the glass shell of the feedback resistor have been removed to reduce radioactive background. This first-stage assembly is mounted behind a 2.5-cm-thick cylinder of archaeological lead to further reduce background. Further stages of preamplification are located at the back of the cryostat cross arm, approximately 70 cm from the crystal. The IGEX detectors have preamplifiers modified for the pulse-shape analysis used in the double-beta decay searches.
The detector shielding is as follows, from inside to outside. The innermost shield consists of 2.5 tons of 2000-year-old archaeological lead forming a 60-cm cube and having $`<9`$ mBq/kg of <sup>210</sup>Pb(<sup>210</sup>Bi), $`<0.2`$ mBq/kg of <sup>238</sup>U, and $`<0.3`$ mBq/kg of <sup>232</sup>Th. The detectors fit into precision-machined holes in this central core, which minimizes the empty space around the detectors available to radon. Nitrogen gas, at a rate of 140 l/hour, evaporating from liquid nitrogen, is forced into the detector chambers to create a positive pressure and further minimize radon intrusion. The archaeological lead block is centered in a 1-m cube of 70-year-old low-activity lead ($`\sim 10`$ tons) having $`\sim 30`$ Bq/kg of <sup>210</sup>Pb. A minimum of 15 cm of archaeological lead separates the detectors from the outer lead shield. A 2-mm-thick cadmium sheet surrounds the main lead shield, and two layers of plastic seal this central assembly against radon intrusion. A cosmic muon veto covers the top and sides of the central core, except where the detector Dewars are located. The veto consists of BICRON BC-408 plastic scintillators 5.08 cm $`\times `$ 50.8 cm $`\times `$ 101.6 cm with surfaces finished by diamond mill to optimize internal reflection. BC-800 (UVT) light guides on the ends taper to 5.08 cm in diameter over a length of 50.8 cm and are coupled to Hamamatsu R329 photomultiplier tubes. The anticoincidence veto signal is obtained from the logical OR of all photomultiplier tube discriminator outputs. An external polyethylene neutron moderator 20 cm thick (1.5 tons) completes the shield. The entire shield is supported by an iron structure resting on noise-isolation blocks. The experiment is located in a room isolated from the rest of the laboratory and has an overburden of 2450 m.w.e., which reduces the measured muon flux to $`2\times 10^{-7}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$.
The data acquisition system for the low-energy region used in dark matter searches (referred to as IGEX-DM) is based on standard NIM electronics and is independent from that used for double-beta decay searches (IGEX-$`2\beta `$). It has been implemented by splitting the normal preamplifier output pulses of each detector and routing them through two Canberra 2020 amplifiers having different shaping times enabling noise rejection. These amplifier outputs are converted using 200 MHz Wilkinson-type Canberra analog-to-digital converters, controlled by a PC through parallel interfaces. For each event, the arrival time (with an accuracy of 100 $`\mu `$s), the elapsed time since the last veto event (with an accuracy of 20 $`\mu `$s), and the energy from each ADC are recorded.
## 3 Results
The IGEX-DM results obtained correspond to 30 days of analyzed data (Mt=60 kg-days) from IGEX detector RG-II. Also presented for comparison are earlier results from the COSME detector (COSME-1) , as well as recent results obtained in its current set-up (COSME-2).
The detector RG-II features an energy threshold of 4 keV and an energy resolution of 0.8 keV at the 75 keV Pb x-ray line. The background rate recorded was $`\sim 0.3`$ c/(keV-kg-day) between 4–10 keV, $`\sim 0.07`$ c/(keV-kg-day) between 10–20 keV, and $`\sim 0.05`$ c/(keV-kg-day) between 20–40 keV. Fig. 1 shows the RG-II 30-day spectrum; the numerical data are given in Table 1.
The exclusion plots are derived from the recorded spectrum in one-keV bins from 4 keV to 50 keV. As recommended by the Particle Data Group, the predicted signal in an energy bin is required to be less than or equal to the (90% C.L.) upper limit of the (Poisson) recorded counts. The derivation of the interaction rate signal supposes that the WIMPs form an isotropic, isothermal, non-rotating halo of density $`\rho =0.3`$ GeV/cm<sup>3</sup>, have a Maxwellian velocity distribution with $`\mathrm{v}_{\mathrm{rms}}=270`$ km/s (with an upper cut corresponding to an escape velocity of 650 km/s), and have a relative Earth-halo velocity of $`\mathrm{v}_\mathrm{r}=230`$ km/s. The cross sections are normalized to the nucleon, assuming a dominant scalar interaction. The Helm parameterization is used for the scalar nucleon form factor, and the quenching factor used is 0.25. The exclusion plots derived from the IGEX-DM (RG-II) and COSME data are shown in Fig. 2. In particular, IGEX results exclude WIMP-nucleon cross-sections above 1.3×10<sup>-8</sup> nb for masses corresponding to the 50 GeV DAMA region. Also shown is the combined germanium contour, including the last Heidelberg-Moscow data (recalculated from the original energy spectra with the same set of hypotheses and parameters), the DAMA experiment contour plot derived from Pulse Shape Discriminated spectra, and the DAMA region corresponding to their reported annual modulation effect. The IGEX-DM exclusion contour improves significantly on that of other germanium experiments for masses corresponding to that of the neutralino tentatively assigned to the DAMA modulation effect and results from using only unmanipulated data.
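The signal recipe just stated can be sketched numerically. The following illustration (not the IGEX analysis code; the overall normalization is arbitrary, and the Helm parameters $`c=1.23A^{1/3}-0.6`$ fm, $`s=0.9`$ fm, $`a=0.52`$ fm are the common choices, assumed here) computes the shape of the predicted rate versus visible energy for a 50 GeV WIMP on germanium:

```python
# Illustrative sketch of the predicted-rate shape (arbitrary normalization):
# truncated Maxwellian halo (v_rms = 270 km/s, i.e. most-probable speed
# v0 = sqrt(2/3)*270; escape speed 650 km/s; Earth velocity 230 km/s),
# Helm form factor, and quenching factor 0.25 (visible = 0.25 x recoil energy).
import numpy as np

HBARC = 197.327     # MeV fm
AMU = 931.494       # MeV
C_KMS = 2.998e5     # km/s

def helm_f2(e_rec_kev, a_mass=72.6):
    m_n = a_mass * AMU                                     # MeV
    q = np.sqrt(2.0 * m_n * e_rec_kev * 1e-3) / HBARC      # fm^-1
    s, a_th = 0.9, 0.52
    c = 1.23 * a_mass**(1.0 / 3.0) - 0.6
    rn = np.sqrt(c**2 + (7.0 / 3.0) * np.pi**2 * a_th**2 - 5.0 * s**2)
    x = max(q * rn, 1e-9)
    j1 = np.sin(x) / x**2 - np.cos(x) / x                  # spherical Bessel j1
    return (3.0 * j1 / x)**2 * np.exp(-(q * s)**2)

def eta(vmin, v0=270.0 * np.sqrt(2.0 / 3.0), ve=230.0, vesc=650.0):
    # mean inverse speed over the Earth-frame distribution, up to a constant
    v = np.linspace(1.0, vesc + ve, 2000)
    cth = np.linspace(-1.0, 1.0, 201)
    dv, dc = v[1] - v[0], cth[1] - cth[0]
    vv, cc = np.meshgrid(v, cth)
    vgal2 = vv**2 + ve**2 + 2.0 * vv * ve * cc             # galaxy-frame speed^2
    f = np.exp(-vgal2 / v0**2) * (vgal2 < vesc**2)
    return float(np.sum(vv * f * (vv >= vmin)) * dv * dc)  # integrand v^2*f/v

def rate_shape(e_vis_kev, m_wimp_gev=50.0, a_mass=72.6, quench=0.25):
    e_rec = e_vis_kev / quench                             # recoil energy, keV
    m_n = a_mass * AMU * 1e-3                              # GeV
    mu = m_wimp_gev * m_n / (m_wimp_gev + m_n)             # reduced mass, GeV
    vmin = np.sqrt(m_n * 1e6 * e_rec / 2.0) / (mu * 1e6) * C_KMS  # km/s
    return helm_f2(e_rec, a_mass) * eta(vmin)

print(rate_shape(4.0), rate_shape(20.0))   # spectrum falls with energy
```

Comparing this falling spectrum, bin by bin, with the measured counts and scaling the cross-section is what produces the $`\sigma `$(m) contour of Fig. 2.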
Data collection is currently in progress with improved background below 20 keV. Based on present IGEX-DM performance and reduction of the background to $`\sim 0.1`$ c/(keV-kg-day) between 4–10 keV, the complete DAMA region (m=$`52_{-8}^{+10}`$ GeV, $`\sigma `$<sup>p</sup>=($`7.2_{-0.9}^{+0.4}`$)×10<sup>-9</sup> nb) could be tested after an exposure of 1 kg-year, i.e. a few months of operation with two upgraded IGEX detectors.
## Acknowledgements
The Canfranc Astroparticle Underground Laboratory is operated by the University of Zaragoza under contract No. AEN99-1033. This research was partially funded by the Spanish Commission for Science and Technology (CICYT), the U.S. National Science Foundation, and the U.S. Department of Energy. The isotopically enriched <sup>76</sup>Ge was supplied by the Institute for Nuclear Research (INR), Moscow, and the Institute for Theoretical and Experimental Physics (ITEP), Moscow.
# 1. Introduction.
## 1. Introduction.
In this work we present a study of the Ermakov-Lewis invariants that are related to some linear differential equations of second order and one variable which are of much interest in many areas of physics. In particular we shall study in some detail the application of the Ermakov-Lewis formalism to several simple Hamiltonian models of “quantum” cosmology. There is also a formal application to the physical optics of waveguides.
In 1880, Ermakov published excerpts of his course on mathematical analysis where he described a relationship between linear differential equations of second order and a particular type of nonlinear equation. At the beginning of the thirties, Milne developed a method quite similar to the WKB technique where the same nonlinear equation found by Ermakov occurred, and applied it successfully to several model problems in quantum mechanics. Further, in 1950, the solution to this nonlinear differential equation was given by Pinney .
On the other hand, within the study of the adiabatic invariants at the end of the fifties, a number of powerful perturbative methods in the phase space have been developed . In particular, Kruskal introduced a certain type of canonical variables which had the merit of considerably simplifying the mathematical approach and of clarifying some quasi-invariant structures of the phase space. Kruskal’s results have been used by Lewis to prove that the adiabatic invariant found by Kruskal is in fact a true invariant. Lewis applied it to the well-known problem of the harmonic oscillator of time-dependent frequency. Moreover, Lewis and Riesenfeld proceeded to quantize the invariant, although the physical interpretation was still not clear even at the classical level. In other words, a constant of motion without meaning was available.
In a subsequent work of Eliezer and Gray , an elementary physical interpretation was achieved in terms of the angular momentum of an auxiliary two-dimensional motion. Even though this interpretation is not fully satisfactory in the general case, it is the clearest at the moment.
Presently, the Ermakov-Lewis dynamical invariants are more and more in use for many different time-dependent problems whose Hamiltonian is a quadratic form in the canonical coordinates.
## 2. The method of Ermakov.
The Ukrainian mathematician V. Ermakov was the first to notice that some nonlinear differential equations are related in a simple and definite way to second-order linear differential equations. Ermakov gave as an example the so-called Ermakov system for which he formulated the following theorem.
Theorem 1E. If an integral of the equation
$$\frac{d^2y}{dx^2}=My$$
(1)
is known, we can find an integral of the equation
$$\frac{d^2z}{dx^2}=Mz+\frac{\alpha }{z^3},$$
(2)
where $`\alpha `$ is some constant.
Eliminating $`M`$ from these equations one gets
$$\frac{d}{dx}\left(y\frac{dz}{dx}-z\frac{dy}{dx}\right)=\frac{\alpha y}{z^3}.$$
Multiplying both sides by
$$2\left(y\frac{dz}{dx}-z\frac{dy}{dx}\right),$$
the last equation turns into
$$\frac{d}{dx}\left(y\frac{dz}{dx}-z\frac{dy}{dx}\right)^2=-\frac{2\alpha y}{z}\frac{d}{dx}\left(\frac{y}{z}\right).$$
Multiplying now by $`dx`$ and integrating both sides we get
$$\left(y\frac{dz}{dx}-z\frac{dy}{dx}\right)^2=C-\frac{\alpha y^2}{z^2}.$$
(3)
If $`y_1`$ and $`y_2`$ are two particular solutions of the equation (1), substituting them by $`y`$ in the latter equation we get two integrals of equation (2)
$$\left(y_1\frac{dz}{dx}-z\frac{dy_1}{dx}\right)^2=C_1-\frac{\alpha y_1^2}{z^2},$$
$$\left(y_2\frac{dz}{dx}-z\frac{dy_2}{dx}\right)^2=C_2-\frac{\alpha y_2^2}{z^2}.$$
Eliminating $`dz/dx`$ from these equations we get a general first integral of (2). One should note that the Ermakov system coincides with the problem of the two-dimensional parametric oscillator (as we shall see in chapter 6). Moreover, the proof of the theorem gives an exact method to solve this important dynamical problem.
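The first integral (3) lends itself to a direct numerical illustration (not part of the original text): one integrates equations (1) and (2) side by side for an $`x`$-dependent coefficient $`M`$ and checks that the combination $`(yz^{}-zy^{})^2+\alpha y^2/z^2`$ stays constant. The function $`M(x)`$, the value $`\alpha =2`$ and the initial data below are arbitrary illustrative choices.

```python
# Numerical check of the Ermakov first integral (3): integrate y'' = M(x) y
# and z'' = M(x) z + alpha/z^3 together and verify that
# (y z' - z y')^2 + alpha y^2 / z^2 is conserved.  M, alpha, initial data
# are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 2.0
M = lambda x: -(1.0 + 0.5 * np.sin(x))      # time-dependent oscillator

def rhs(x, s):
    y, dy, z, dz = s
    return [dy, M(x) * y, dz, M(x) * z + ALPHA / z**3]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 1.0, 0.5],
                rtol=1e-10, atol=1e-12, dense_output=True)
x = np.linspace(0.0, 20.0, 500)
y, dy, z, dz = sol.sol(x)
invariant = (y * dz - z * dy)**2 + ALPHA * y**2 / z**2
print(invariant.min(), invariant.max())     # both ~ 2.25, the initial value
```

The conservation holds for any $`M(x)`$, which is exactly what makes the invariant useful for the time-dependent oscillator problems discussed later.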
The general first integral of equation (2) can be also obtained as follows. Getting $`dx`$ from equation (3):
$$dx=\frac{ydz-zdy}{\sqrt{C-\alpha y^2/z^2}}.$$
Dividing both sides by $`y^2`$ we get the form
$$\frac{dx}{y^2}=\frac{\frac{z}{y}d\left(\frac{z}{y}\right)}{\sqrt{C\frac{z^2}{y^2}-\alpha }}.$$
Multiplying by C and integrating both sides we get:
$$C\int \frac{dx}{y^2}+C_3=\sqrt{C\frac{z^2}{y^2}-\alpha }.$$
This is the general first integral of equation (2), where $`C_3`$ is the constant of the last integration. For $`y`$ is enough to take any particular integral of equation (1).
As a corollary of the previous theorem we can say that
Corollary 1Ec. If a particular solution of (2) is known, we can find the general solution of equation (1).
Since it is sufficient to find particular solutions of (1), we can take $`C=0`$ in equation (3). Thus we get:
$$y\frac{dz}{dx}-z\frac{dy}{dx}=\pm \frac{y}{z}\sqrt{-\alpha }.$$
and therefore
$$\frac{dy}{y}=\frac{dz}{z}\pm \frac{dx\sqrt{-\alpha }}{z^2}.$$
Integrating both sides
$$\mathrm{log}y=\mathrm{log}z\pm \sqrt{-\alpha }\int \frac{dx}{z^2},$$
which results in
$$y=z\mathrm{exp}\left(\pm \sqrt{-\alpha }\int \frac{dx}{z^2}\right).$$
Taking the plus sign first and the minus sign next we get two particular solutions of equation (1).
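Ermakov's theorem is easy to check numerically. The sketch below is only illustrative: the coefficient $`M(x)`$, the constant $`\alpha `$, and the initial conditions are arbitrary choices, not taken from the text. It integrates eqs. (1) and (2) side by side with a Runge-Kutta scheme and monitors the first integral of eq. (3), i.e. the combination $`(yz^{\prime }-zy^{\prime })^2+\alpha y^2/z^2`$, which must stay constant:

```python
import math

def rk4_step(f, t, s, h):
    """One classical Runge-Kutta-4 step for s' = f(t, s), s a tuple."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

ALPHA = 1.0                      # the constant alpha of eq. (2)

def M(x):                        # any smooth coefficient M(x) will do
    return -(1.0 + 0.5*math.sin(x))

def rhs(x, s):                   # state s = (y, y', z, z')
    y, yp, z, zp = s
    return (yp, M(x)*y, zp, M(x)*z + ALPHA/z**3)

def first_integral(s):           # (y z' - z y')^2 + alpha y^2/z^2 = C
    y, yp, z, zp = s
    return (y*zp - z*yp)**2 + ALPHA*y**2/z**2

s, x, h = (1.0, 0.0, 1.0, 0.0), 0.0, 1e-3
C0 = first_integral(s)           # equals 1 exactly for these initial data
drift = 0.0
for _ in range(10000):           # integrate up to x = 10
    s = rk4_step(rhs, x, s, h)
    x += h
    drift = max(drift, abs(first_integral(s) - C0))

print(C0, drift)                 # drift stays at integration-error level
```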
A generalization of the theorem has been given by the same Ermakov.
Theorem 2E. If $`p`$ is some known function of $`x`$ and $`f`$ is any other arbitrary given function, then the general solution of the equation
$$p\frac{d^2y}{dx^2}-y\frac{d^2p}{dx^2}=\frac{1}{p^2}f\left(\frac{y}{p}\right)$$
can be found by quadratures.
Multiplying the equation by
$$2\left(p\frac{dy}{dx}-y\frac{dp}{dx}\right)dx$$
one gets the following form
$$d\left(p\frac{dy}{dx}-y\frac{dp}{dx}\right)^2=2f\left(\frac{y}{p}\right)d\left(\frac{y}{p}\right).$$
Integrating both sides and defining for simplicity reasons
$$2\int f(z)dz=\phi (z),$$
we get
$$\left(p\frac{dy}{dx}-y\frac{dp}{dx}\right)^2=\phi \left(\frac{y}{p}\right)+C.$$
This is the expression for a first integral of the equation. Thus, for $`dx`$ we have:
$$dx=\frac{pdy-ydp}{\sqrt{\phi \left(\frac{y}{p}\right)+C}}.$$
Dividing by $`p^2`$ and integrating both sides we find:
$$\int \frac{dx}{p^2}+C_4=\int \frac{d\left(\frac{y}{p}\right)}{\sqrt{\phi \left(\frac{y}{p}\right)+C}}.$$
This is the general integral of the equation.
A particular case is when $`p=x`$. Then, the differential equation can be written as
$$x^3\frac{d^2y}{dx^2}=f\left(\frac{y}{x}\right).$$
## 3. The method of Milne.
In 1930, Milne introduced a method to solve the Schrödinger equation taking into account the basic oscillatory structure of the wave function. This method has been one of the first in the class of the so-called phase-amplitude procedures, which allow one to obtain sufficiently exact solutions of the one-dimensional Schrödinger equation at any energy and are used to locate resonances.
Let us consider the one-dimensional Schrödinger equation
$$\frac{d^2\psi }{dx^2}+k^2(x)\psi =0$$
(1)
where $`k^2(x)`$ is the local wave vector
$$k^2(x)=2\mu [E-V(x)].$$
(2)
Milne proposed to write the wave function as a variable amplitude multiplied by the sinus of a variable phase, i.e.,
$$\psi (x)=\left(\frac{2\mu }{\pi }\right)^{1/2}\beta (x)\mathrm{sin}(\varphi (x)+\gamma )$$
(3)
where $`\mu `$ is the mass parameter of the problem at hand, $`E`$ is the total energy of the system, $`\gamma `$ is a constant phase, and $`V(x)`$ is the potential energy. In the original method, $`\beta `$ and $`\varphi `$ are real and $`\beta >0`$. Substituting the previous expression for $`\psi `$ in the wave equation and solving for $`d\varphi /dx`$ one gets
$$\frac{d^2\beta }{dx^2}+k^2(x)\beta =\frac{1}{\beta ^3},$$
(4)
$$\frac{d\varphi }{dx}=\frac{1}{\beta ^2}.$$
(5)
As one can see, the equation for Milne’s amplitude coincides with the nonlinear equation found by Ermakov, now known as Pinney’s equation.
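As a numerical illustration of Milne’s phase-amplitude idea (a sketch: the wave vector $`k^2(x)`$ and the initial data are arbitrary choices; the constant factor $`(2\mu /\pi )^{1/2}`$ and the phase $`\gamma `$ are dropped), one can integrate eqs. (4) and (5) and compare $`\beta \mathrm{sin}\varphi `$ with a direct integration of eq. (1):

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def k2(x):                       # a positive, slowly varying k^2(x)
    return 1.0 + 0.3*math.cos(x)

def rhs(x, s):
    # s = (beta, beta', phi, psi, psi'); Milne: beta'' = -k2*beta + 1/beta^3,
    # phi' = 1/beta^2, compared against the linear equation psi'' = -k2*psi.
    b, bp, phi, u, up = s
    return (bp, -k2(x)*b + 1.0/b**3, 1.0/b**2, up, -k2(x)*u)

# beta(0)=1, beta'(0)=0, phi(0)=0  =>  psi(0)=0, psi'(0)=1/beta(0)=1
s, x, h = (1.0, 0.0, 0.0, 0.0, 1.0), 0.0, 1e-3
err = 0.0
for _ in range(10000):           # up to x = 10
    s = rk4_step(rhs, x, s, h)
    x += h
    b, bp, phi, u, up = s
    err = max(err, abs(b*math.sin(phi) - u))

print(err)   # beta*sin(phi) reproduces the linear solution
```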
## 4. Pinney’s result.
In a brief note, Pinney was the first to give, without proof (claimed to be trivial), the connection between the solutions of the equation for the parametric oscillator and those of the nonlinear equation (now known as Pinney’s equation or the Pinney-Milne equation)
$$y^{\prime \prime }(x)+p(x)y(x)+\frac{C}{y^3}(x)=0$$
(1)
for $`C=`$ constant and $`p(x)`$ given. The general solution for which $`y(x_0)=y_0`$, $`y^{}(x_0)=y_0^{}`$ is
$$y_P(x)=\left[U^2(x)-CW^{-2}V^2(x)\right]^{1/2},$$
(2)
where $`U`$ and $`V`$ are solutions of the linear equation
$$y^{\prime \prime }(x)+p(x)y(x)=0,$$
(3)
for which $`U(x_0)=y_0`$, $`U^{\prime }(x_0)=y_0^{\prime }`$; $`V(x_0)=0`$, $`V^{\prime }(x_0)\ne 0`$, where $`W`$ is the Wronskian $`W=UV^{\prime }-U^{\prime }V=`$ constant $`\ne 0`$, and one takes that branch of the square root in (2) which at $`x_0`$ has the value $`y_0`$.
The proof is very simple as follows. From eq. (2) we get $`\dot{y_P}=y_P^{-1}(U\dot{U}-CW^{-2}V\dot{V})`$ and $`\ddot{y_P}=-y_P^{-3}(U\dot{U}-CW^{-2}V\dot{V})^2+y_P^{-1}(\dot{U}^2-CW^{-2}\dot{V}^2)-p(x)y_P`$. From here, explicitly calculating $`\ddot{y_P}+p(x)y_P`$ one gets $`-Cy_P^{-3}`$ and therefore Pinney’s equation.
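Pinney’s closed form can also be confirmed numerically. In the sketch below the coefficient $`p(x)`$, the constant $`C`$, and the initial data are arbitrary illustrative choices: $`U`$ and $`V`$ are integrated with the stated initial conditions and $`\left[U^2-CW^{-2}V^2\right]^{1/2}`$ is compared with a direct integration of the nonlinear equation.

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def p(x):                        # arbitrary smooth coefficient p(x)
    return 1.0 + 0.4*math.sin(x)

C = -1.0                         # constant of Pinney's equation (C < 0 keeps y real here)
y0, y0p = 1.5, 0.2               # initial data y(0), y'(0)

def rhs(x, s):
    # s = (U, U', V, V', y, y'); U, V solve the linear eq. (3),
    # y solves Pinney's eq. (1): y'' = -p(x) y - C/y^3
    U, Up, V, Vp, y, yp = s
    return (Up, -p(x)*U, Vp, -p(x)*V, yp, -p(x)*y - C/y**3)

# U(0)=y0, U'(0)=y0'; V(0)=0, V'(0)=1, so W = U V' - U' V = y0 (constant)
s, x, h = (y0, y0p, 0.0, 1.0, y0, y0p), 0.0, 1e-3
W = y0
err = 0.0
for _ in range(10000):
    s = rk4_step(rhs, x, s, h)
    x += h
    U, Up, V, Vp, y, yp = s
    y_formula = math.sqrt(U*U - C*V*V/W**2)   # Pinney's closed form, eq. (2)
    err = max(err, abs(y_formula - y))

print(err)
```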
## 5. Lewis’ results.
In 1967 Lewis considered parametric Hamiltonians of the standard form:
$$H_L=(1/2ϵ)[p^2+\mathrm{\Omega }^2(t)q^2],$$
(1)
If $`\mathrm{\Omega }`$ is real, the motion of the classical system whose Hamiltonian is given by eq. (1) is oscillatory with an arbitrarily high frequency when $`ϵ`$ goes to zero. Corresponding to this, there are asymptotic series in positive powers of $`ϵ`$, whose partial sums are the adiabatic invariants of the system; the leading term of the series is $`ϵH/\mathrm{\Omega }`$. In the problem of the charged particle, the adiabatic invariant is the series of the magnetic moment. Lewis’s results came out of a direct application of the asymptotic theory of Kruskal (1962) to the classical system described by $`H_L`$ with real $`\mathrm{\Omega }`$. Lewis found that Kruskal’s theory could be applied in exact form. As a consequence, an exact invariant, which is precisely the Ermakov-Lewis invariant, has been found as a special case of Kruskal’s adiabatic invariant. Although $`\mathrm{\Omega }`$ was originally supposed to be real, the final results hold for complex $`\mathrm{\Omega }`$ as well. Moreover, the exact invariant is a constant of motion of the quantum system whose Hamiltonian is given by the quantum version of eq. (1).
The classical case.
Let us take a real $`\mathrm{\Omega }`$. In order to correctly apply Kruskal’s theory, it is necessary to write the equations of motion as for an autonomous system of first order so that all solutions be periodic in the independent variables for $`ϵ0`$. This can be achieved by means of a new independent variable $`s`$ defined as $`s=t/ϵ`$ and formally considering $`t`$ as a dependent variable. The resulting system of equations is
$`dq/ds`$ $`=`$ $`p,`$
$`dp/ds`$ $`=`$ $`-\mathrm{\Omega }^2(t)q,`$
$`dt/ds`$ $`=`$ $`ϵ`$ (2)
Since $`t`$ is now a dependent variable, this system is autonomous. In the limit $`ϵ0`$, the solution of the last equation is $`t=\mathrm{constant}`$, and therefore the other two equations correspond to a harmonic oscillator of constant frequency. Since $`\mathrm{\Omega }`$ is real, the dependent variables are periodic in $`s`$ of period $`2\pi /\mathrm{\Omega }(t)`$ in the limit $`ϵ0`$, and the system of equations has the form required by Kruskal’s asymptotic theory. A central characteristic of the latter theory is a transformation from the variables $`(q,p,t)`$ to the so-called “nice variables” $`(z_1,z_2,\phi )`$. The latter are chosen in such a way that a two-parameter family of closed curves in the space $`(q,p,t)`$ can be defined by the conditions $`z_1=`$ constant and $`z_2=`$ constant. These closed curves are called rings. The variable $`\phi `$ is a variable angle which is defined in such a way as to change by $`2\pi `$ if any of the rings is covered once. The rings have the important feature that all the family can be mapped to itself if on each ring $`s`$ is changed according to eqs. (2). In the general theory, the transformation from the variables $`(q,p,t)`$ to the variables $`(z_1,z_2,\phi )`$ is defined as an asymptotic series in positive powers of $`ϵ`$, and a general prescription is given to determine the transformation order by order. As a matter of fact, Lewis has shown one possible explicit form for this transformation in terms of the variables $`q`$, $`p`$ and Pinney’s function $`\rho (t)`$. Moreover, the inverse transformation can also be obtained in explicit form.
For the parametric oscillator problem, the rings are to be found in the $`t`$= constant planes. It is this property that allows the usage of the rings for defining the exact invariant $`I`$ as the action integral
$$I=\oint _{\mathrm{ring}}pdq.$$
(3)
Explicitly evaluating $`I`$ as an integral from $`0`$ to $`2\pi `$ over the variable $`\phi `$ (see the Appendix), one gets
$$I=\frac{1}{2}[(q^2/\rho ^2)+(\rho p-ϵ\dot{\rho }q)^2],$$
(4)
where $`\rho `$ satisfies Pinney’s equation
$$ϵ^2\ddot{\rho }+\mathrm{\Omega }^2(t)\rho -1/\rho ^3=0,$$
(5)
and the point denotes differentiation with respect to $`t`$. The function $`\rho `$ can be taken as any particular solution of eq. (5). Although $`\mathrm{\Omega }`$ was supposed to be real, $`I`$ is an invariant even for complex $`\mathrm{\Omega }`$. It is easy to check that $`dI/dt=0`$ for the general case of complex $`\mathrm{\Omega }`$ by differentiating eq. (4), using eqs. (2) to eliminate $`dq/dt`$ and $`dp/dt`$, and eq. (5) to eliminate $`\ddot{\rho }`$.
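The constancy of $`I`$ is straightforward to confirm numerically. In the sketch below (taking $`ϵ=1`$; the frequency $`\mathrm{\Omega }^2(t)`$ and the initial data are arbitrary choices) the equations of motion are integrated together with eq. (5), and $`I`$ of eq. (4) is monitored along the trajectory:

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def Omega2(t):                   # arbitrary time-dependent frequency
    return 1.0 + 0.5*math.cos(0.3*t)

def rhs(t, s):
    # epsilon = 1: q'' = -Omega^2 q together with Pinney's eq. (5) for rho
    q, pq, rho, rhop = s
    return (pq, -Omega2(t)*q, rhop, -Omega2(t)*rho + 1.0/rho**3)

def lewis_invariant(s):          # eq. (4) with epsilon = 1 (so p = qdot)
    q, pq, rho, rhop = s
    return 0.5*((q/rho)**2 + (rho*pq - rhop*q)**2)

s, t, h = (1.0, 0.0, 1.0, 0.0), 0.0, 1e-3
I0 = lewis_invariant(s)          # = 0.5 exactly for these initial data
drift = 0.0
for _ in range(10000):           # integrate up to t = 10
    s = rk4_step(rhs, t, s, h)
    t += h
    drift = max(drift, abs(lewis_invariant(s) - I0))

print(I0, drift)
```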
It might appear that the problem of solving the system of linear equations given by eqs. (2) has been merely replaced by the problem of solving the nonlinear eq. (5). This is however not so. First, any particular solution of eq. (5) can be used in the formula of $`I`$ with all the initial conditions for the eqs. (2). For the numerical work it is sufficient to find a particular solution for $`\rho `$. Second, the exact invariant has a simple and explicit dependence on the dynamical variables $`p`$ and $`q`$. Third, taking into account the fact that $`ϵ^2`$ is a factor of $`\ddot{\rho }`$ in eq. (5), one can obtain directly a particular solution for $`\rho `$ as a series of positive powers of $`ϵ^2`$. If $`\mathrm{\Omega }`$ is real and the leading term of the series is taken as $`\mathrm{\Omega }^{-1/2}`$, then the corresponding series solution is just the adiabatic invariant expressed as a series. It is interesting to speculate whether in practice it is more useful to calculate $`I`$ by means of the solution written as a truncated series for $`\rho `$ or by the corresponding series expression for $`I`$ truncated at the same power of $`ϵ`$. Fourth, one can also solve eq. (5) to get $`\rho `$ as a power series in $`1/ϵ^2`$ in terms of integrals. Finally, with the result of eqs. (4) and (5), it is possible to get a better understanding of the nature of Kruskal’s adiabatic invariant. Some progress in this regard can be found in the following general discussion on $`I`$ and $`\rho `$.
By adding a constant factor, the invariant $`I`$ of eq. (4) is the most general quadratic invariant of the system whose Hamiltonian given by eq. (1) is also a homogeneous quadratic form in $`p`$ and $`q`$. This can be seen by writing the invariant in terms of two linear independent solutions, $`f(t)`$ and $`g(t)`$ of the parametric equation. If we write the generalized form of $`I`$
$$I=\delta ^2[\rho ^{-2}q^2+(\rho p-ϵ\dot{\rho }q)^2],$$
(6)
where $`\delta `$ is an arbitrary constant, and compare this form with that in terms of $`f(t)`$ and $`g(t)`$, then we can infer that the two invariants are identical if $`\rho `$ is given by
$$\rho =\gamma _1(ϵw)^{-1}\left[\frac{A^2}{\delta ^2}g^2+\frac{B^2}{\delta ^2}f^2+2\gamma _2\left[\frac{A^2B^2}{\delta ^4}-(ϵw)^2\right]^{1/2}fg\right]^{1/2},$$
(7)
where $`A`$ and $`B`$ are arbitrary constants, while the constants $`w`$, $`\gamma _1`$ and $`\gamma _2`$ are defined by
$$w=fg^{\prime }-gf^{\prime },\gamma _1=\pm 1,\gamma _2=\pm 1.$$
(8)
Since there are two arbitrary constants, this formula for $`\rho `$ gives the general solution of eq. (5) expressed in terms of $`f`$ and $`g`$. Using this formula we can build $`\rho `$ explicitly for any $`\mathrm{\Omega }`$ for which the eqs. (2) can be solved in an exact manner. By constructing $`\rho `$ in this way for special cases, we can infer that the expansion of $`\rho `$ in a series of positive powers of $`ϵ^2`$ is at least sometimes convergent. For example, if $`\mathrm{\Omega }=bt^{2n/(2n+1)}`$, where $`b`$ is a constant and $`n`$ is any integer, the series expansion is a polynomial in $`ϵ^2`$, and consequently it is convergent with an infinite radius of convergence.
Once we have the explicit form of Kruskal’s invariant, it is possible to find a canonical transformation for which the new momentum is the invariant itself. If we denote the new coordinate by $`q_1`$, the conjugated momentum by $`p_1`$, and the generating function by $`F`$, then the results can be written as
$$q_1=\mathrm{tan}^{-1}[\rho ^2p/q-ϵ\rho \dot{\rho }],$$
$$p_1=\frac{1}{2}[\rho ^{-2}q^2+(\rho p-ϵ\dot{\rho }q)^2],$$
$$F=\frac{1}{2}ϵ\rho ^{-1}\dot{\rho }q^2\pm \frac{1}{2}\rho ^{-1}q(2p_1-\rho ^{-2}q^2)^{1/2}\pm p_1\mathrm{sin}^{-1}[\rho ^{-1}q/(2p_1)^{1/2}]+(n+\frac{1}{2})\pi p_1$$
$$\left(-\frac{\pi }{2}\le \mathrm{sin}^{-1}[\rho ^{-1}q/(2p_1)^{1/2}]\le \frac{\pi }{2},n=\mathrm{integer}\right),$$
$$p=\frac{\partial F}{\partial q},q_1=\frac{\partial F}{\partial p_1},$$
$$H_{\mathrm{new}}=H+\frac{\partial F}{\partial t}=\frac{1}{ϵ}\rho ^{-2}p_1.$$
(9)
In the expression for $`F`$ the upper or lower signs are taken according as $`p-ϵ\rho ^{-1}\dot{\rho }q`$ is greater or less than $`0`$. One can see that $`q_1`$ is a cyclic variable in the new Hamiltonian, as it must be if $`p_1=I`$ is to be an exact invariant.
Moreover, Lewis noticed that the second order differential equation for $`q`$, namely $`ϵ^2d^2q/dt^2+\mathrm{\Omega }^2(t)q=0`$, is of the same form as the 1D Schrödinger equation, if $`t`$ is considered as the spatial coordinate and $`q`$ is taken as the wave function. For bound states, $`\mathrm{\Omega }`$ is imaginary whereas for the continuous spectrum $`\mathrm{\Omega }`$ is real. Thus, the $`I`$ invariant is a relationship between the wave function and its first derivative.
The quantum case.
Let us consider the quantum system with the same Hamiltonian $`H_L`$, where $`\widehat{q}`$ and $`\widehat{p}`$ should fulfill now the commutation relations
$$[\widehat{q},\widehat{p}]=i\hbar .$$
(10)
We shall take $`\rho `$ as real, which is possible if $`\mathrm{\Omega }^2`$ is real. Using the commutation relationships and the equation for $`\rho `$ it is easy to show that $`\widehat{I}`$ is a quantum constant of motion, i.e., it can be an observable, since it satisfies
$$\frac{d\widehat{I}}{dt}=\frac{\partial \widehat{I}}{\partial t}+\frac{1}{i\hbar }[\widehat{I},\widehat{H}]=0.$$
(11)
It follows that $`\widehat{I}`$ has eigenfunctions whose eigenvalues are time-independent. The eigenfunctions and eigenvalues of $`\widehat{I}`$ can be found by a method which is similar to that used by Dirac to find the eigenfunctions and eigenvalues of the harmonic oscillator Hamiltonian. First, we introduce the raising and lowering operators, $`\widehat{a}^{\dagger }`$ and $`\widehat{a}`$, defined by
$$\widehat{a}^{\dagger }=(1/\sqrt{2})[\rho ^{-1}\widehat{q}-i(\rho \widehat{p}-ϵ\dot{\rho }\widehat{q})],$$
$$\widehat{a}=(1/\sqrt{2})[\rho ^{-1}\widehat{q}+i(\rho \widehat{p}-ϵ\dot{\rho }\widehat{q})].$$
(12)
These operators fulfill the relationships
$$[\widehat{a},\widehat{a}^{\dagger }]=\hbar ,$$
$$\widehat{a}\widehat{a}^{\dagger }=\widehat{I}+\frac{1}{2}\hbar .$$
(13)
The operator $`\widehat{a}`$ acting on an eigenfunction of $`\widehat{I}`$ gives rise to an eigenfunction of $`\widehat{I}`$ whose eigenvalue is less by $`\hbar `$ with respect to the initial eigenvalue. Similarly, $`\widehat{a}^{\dagger }`$ acting on an eigenfunction of $`\widehat{I}`$ raises the eigenvalue by $`\hbar `$. Once these properties are settled, the normalization of the eigenfunctions of $`\widehat{I}`$ can be used to prove that the eigenvalues of $`\widehat{I}`$ are $`(n+\frac{1}{2})\hbar `$, where $`n`$ is $`0`$ or a positive integer. If $`|n\rangle `$ denotes the normalized eigenfunction of $`\widehat{I}`$ whose eigenvalue is $`(n+\frac{1}{2})\hbar `$, we can express the relationship between $`|n+1\rangle `$ and $`|n\rangle `$ as follows
$$|n+1\rangle =[(n+1)\hbar ]^{-1/2}\widehat{a}^{\dagger }|n\rangle .$$
(14)
The condition that determines the eigenstate whose eigenvalue is $`\frac{1}{2}\hbar `$ is given by
$$\widehat{a}|0\rangle =0.$$
(15)
Using these results one can calculate the expectation value of the Hamiltonian in an eigenstate $`|n\rangle `$. The result is
$$\langle n|\widehat{H}|n\rangle =(1/2ϵ)(\rho ^{-2}+\mathrm{\Omega }^2\rho ^2+ϵ^2\dot{\rho }^2)(n+\frac{1}{2})\hbar .$$
(16)
It is interesting to note that the expectation values of $`\widehat{H}`$ are equally spaced at each moment and that the lowest value is obtained for $`n=0`$, i.e., we have an exact counterpart of the harmonic oscillator. As a matter of fact, we can obtain the harmonic oscillator results if $`\mathrm{\Omega }`$ is taken real and constant with $`\rho =\mathrm{\Omega }^{-1/2}`$, which gives $`I=ϵH/\mathrm{\Omega }`$.
## 6. The interpretation of Eliezer and Gray.
The harmonic linear motion corresponding to the 1D parametric oscillator equation can be seen as the projection of a 2D motion of a particle driven by the same law of force. Thus, the 2D auxiliary motion is described by the equation
$$\frac{d^2\stackrel{}{r}}{dt^2}+\mathrm{\Omega }^2\left(t\right)\stackrel{}{r}=0$$
(1)
where $`\stackrel{}{r}`$ is expressed in Cartesian coordinates $`(x,y)`$. In polar coordinates $`(\rho ,\theta )`$, where $`\rho =|r|`$, $`x=\rho \mathrm{cos}\theta `$, $`y=\rho \mathrm{sin}\theta `$, the radial and transverse motions are described by the equations
$$\ddot{\rho }-\rho \dot{\theta }^2+\mathrm{\Omega }^2\rho =0$$
(2)
$$\frac{1}{\rho }\frac{d}{dt}\left(\rho ^2\dot{\theta }\right)=0.$$
(3)
Integrating eq. (3)
$$\rho ^2\dot{\theta }=h$$
(4)
where $`h`$ is the angular momentum, which is constant. Substituting in eq. (2) one gets a Pinney equation of the form:
$$\ddot{\rho }+\mathrm{\Omega }^2\rho =\frac{h^2}{\rho ^3}$$
(5)
The invariant $`I`$ corresponding to the eq. (5) is:
$$I=\frac{1}{2}\left[\frac{h^2x^2}{\rho ^2}+\left(p\rho -x\dot{\rho }\right)^2\right]$$
(6)
and with the substitutions $`x=\rho \mathrm{cos}\theta `$ and $`p=\dot{x}`$ one gets:
$$I=\frac{1}{2}\left[h^2\mathrm{cos}^2\theta +h^2\mathrm{sin}^2\theta \right]=\frac{1}{2}h^2$$
(7)
Thus, the constancy of $`I`$ is equivalent to the constancy of the auxiliary angular momentum.
In the elementary classical mechanics, the study of the simple 1D harmonic oscillator is often made as the projection of the uniform circular motion on one of its diameters. The auxiliary motion introduced by Eliezer and Gray is just a generalization of this elementary procedure to more general laws of force.
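This picture is easy to verify numerically (a sketch; $`\mathrm{\Omega }^2(t)`$ and the initial data are arbitrary choices): integrating the auxiliary planar motion of eq. (1), the angular momentum $`h=x\dot{y}-y\dot{x}`$ is conserved, and the invariant built from the $`x`$-projection alone stays equal to $`\frac{1}{2}h^2`$:

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def Omega2(t):                   # arbitrary time-dependent frequency
    return 1.0 + 0.4*math.sin(t)

def rhs(t, s):                   # planar motion: r'' = -Omega^2(t) r
    x, vx, y, vy = s
    return (vx, -Omega2(t)*x, vy, -Omega2(t)*y)

s, t, h = (1.0, 0.0, 0.0, 1.0), 0.0, 1e-3
h0 = 1.0                          # initial angular momentum x*vy - y*vx
dh = dI = 0.0
for _ in range(10000):            # integrate up to t = 10
    s = rk4_step(rhs, t, s, h)
    t += h
    x, vx, y, vy = s
    ang = x*vy - y*vx                          # h, conserved (central force)
    rho = math.hypot(x, y)
    rhod = (x*vx + y*vy)/rho                   # rho-dot
    I = 0.5*((h0*x/rho)**2 + (vx*rho - x*rhod)**2)   # eq. (6) with p = xdot
    dh = max(dh, abs(ang - h0))
    dI = max(dI, abs(I - 0.5*h0**2))

print(dh, dI)                     # both stay at round-off level
```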
The connection between the solutions of the parametric oscillator linear equation and Pinney’s solution is given by the following theorem.
Theorem 1EG. If $`y_1`$ and $`y_2`$ are linearly independent solutions of the equation
$$\frac{d^2y}{dx^2}+Q\left(x\right)y=0$$
(8)
and $`W`$ is the Wronskian $`y_1y_2^{\prime }-y_2y_1^{\prime }`$ (which, according to Abel’s theorem, is constant), then the general solution of
$$\frac{d^2y}{dx^2}+Q\left(x\right)y=\frac{\lambda }{y^3}$$
(9)
where $`\lambda `$ is a constant, can be written as follows
$$y_P=\left(Ay_1^2+By_2^2+2Cy_1y_2\right)^{1/2}$$
(10)
where $`A`$, $`B`$ and $`C`$ are constants such that
$$AB-C^2=\frac{\lambda }{W^2}$$
(11)
However, it is necessary that these constants be consistent with the initial conditions of the motion. If $`x_1\left(t\right)`$ and $`x_2\left(t\right)`$ are linearly independent parametric solutions with initial conditions $`x_1\left(0\right)=1`$, $`\dot{x}_1\left(0\right)=0`$, $`x_2\left(0\right)=0`$, $`\dot{x}_2\left(0\right)=1`$, the general parametric solution can be written as
$$x\left(t\right)=\alpha x_1\left(t\right)+\beta x_2\left(t\right)$$
(12)
where $`\alpha `$ and $`\beta `$ are arbitrary constants that are related to the initial conditions of the motion by $`x\left(0\right)=\alpha `$ and $`\dot{x}\left(0\right)=\beta `$. The corresponding initial conditions for $`\rho `$ and $`\dot{\rho }`$ are obtained from $`x=\rho \mathrm{cos}\theta `$, $`\dot{x}=\dot{\rho }\mathrm{cos}\theta -\rho \dot{\theta }\mathrm{sin}\theta `$, where $`\theta \left(0\right)=0`$ gives $`\rho \left(0\right)=\alpha `$ and $`\dot{\rho }\left(0\right)=\beta `$. Using (10) we get
$$\rho \left(t\right)=\left[\left(\alpha x_1+\beta x_2\right)^2+\left(\frac{h^2}{\alpha ^2}x_2^2\right)\right]^{\frac{1}{2}}$$
(13)
as the solution of (5) corresponding to the general parametric solution (12). Moreover, we have
$$\rho \mathrm{cos}\theta =\alpha x_1+\beta x_2$$
(14)
$$\rho \mathrm{sin}\theta =\frac{hx_2}{\alpha }$$
(15)
The previous considerations can be extended to systems whose equations of motion are of the form
$$\frac{d^2x}{dt^2}+P\left(t\right)\frac{dx}{dt}+Q\left(t\right)x=0.$$
(16)
The $`I`$ invariant is now
$$I=\frac{h^2x^2}{\rho ^2}+\left(\dot{\rho }x-\rho p\right)^2\mathrm{exp}\left(2\int _0^tP\left(t\right)dt\right)$$
(17)
where $`\rho `$ is any solution of
$$\frac{d^2\rho }{dt^2}+P\left(t\right)\frac{d\rho }{dt}+Q\left(t\right)\rho =\frac{h^2}{\rho ^3}\mathrm{exp}\left(-2\int _0^tP\left(t\right)dt\right)$$
(18)
The theorem that connects the solutions of (16) with those of (18) (with a change of notation) can be formulated in the following way.
Theorem 2EG. If $`y_1\left(x\right)`$ and $`y_2\left(x\right)`$ are two linearly independent solutions of
$$\frac{d^2y}{dx^2}+P\left(x\right)\frac{dy}{dx}+Q\left(x\right)y=0$$
(19)
the general solution of
$$\frac{d^2y}{dx^2}+P\left(x\right)\frac{dy}{dx}+Q\left(x\right)y=\frac{\lambda }{y^3}\mathrm{exp}\left(-2\int P\left(x\right)dx\right)$$
(20)
can be written down as
$$y=\left(Ay_1^2+By_2^2+2Cy_1y_2\right)^{1/2}$$
(21)
where $`A`$ and $`B`$ are arbitrary constants, and
$$AB-C^2=\frac{\lambda }{W^2}\mathrm{exp}\left(-2\int P\left(x\right)dx\right)$$
(22)
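The damped case can also be checked numerically. The sketch below is only illustrative (constant damping $`P(t)=\gamma `$ and arbitrary choices of $`Q(t)`$, $`h`$, and initial data); it implements the invariant $`I=h^2x^2/\rho ^2+(\dot{\rho }x-\rho \dot{x})^2e^{2\gamma t}`$ with the auxiliary equation carrying the factor $`e^{-2\gamma t}`$ on its right-hand side:

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

GAMMA = 0.1                      # constant damping P(t) = GAMMA
H2 = 1.0                         # the constant h^2

def Q(t):                        # arbitrary time-dependent stiffness
    return 1.0 + 0.3*math.sin(t)

def rhs(t, s):
    # x'' + P x' + Q x = 0 and the auxiliary equation
    # rho'' + P rho' + Q rho = (h^2/rho^3) * exp(-2*GAMMA*t)
    x, vx, rho, vrho = s
    damp = math.exp(-2.0*GAMMA*t)
    return (vx, -GAMMA*vx - Q(t)*x,
            vrho, -GAMMA*vrho - Q(t)*rho + H2*damp/rho**3)

def invariant(t, s):             # the damped-case invariant, with p = xdot
    x, vx, rho, vrho = s
    return H2*x*x/rho**2 + (vrho*x - rho*vx)**2*math.exp(2.0*GAMMA*t)

s, t, h = (1.0, 0.0, 1.0, 0.0), 0.0, 1e-3
I0 = invariant(t, s)             # = 1 exactly for these initial data
drift = 0.0
for _ in range(10000):           # integrate up to t = 10
    s = rk4_step(rhs, t, s, h)
    t += h
    drift = max(drift, abs(invariant(t, s) - I0))

print(I0, drift)
```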
## 7. The connection between the Ermakov invariant and Nöther’s theorem.
In 1978, Leach found the Ermakov-Lewis invariant for the aforementioned parametric equation with first derivative
$$\ddot{x}+g(t)\dot{x}+\omega ^2(t)x=0,$$
(1)
by making use of a time-dependent canonical transformation leading to a constant new Hamiltonian. That transformation belonged to a symplectic group and was put forth without details. In the same year, 1978, Lutzky proved that the invariant could be obtained starting from a direct application of Noether’s theorem (1918). This famous theorem connects the conserved quantities of a Lagrangian system with the group of symmetries that preserves the action as an invariant functional. Moreover, Lutzky discussed the relationships between the solutions of the parametric equation of motion and Pinney’s solution and commented on the great potential of the method for solving nonlinear equations.
For the parametric equation without first derivative Lutzky used the following formulation of Noether’s theorem.
Theorem NL. Let $`G`$ be the one-parameter Lie group generated by
$$G=\xi (x,t)\frac{\partial }{\partial t}+n(x,t)\frac{\partial }{\partial x}$$
such that the action functional $`L(x,\dot{x},t)𝑑t`$ is left invariant under $`G`$. Then
$$\xi \frac{\partial L}{\partial t}+n\frac{\partial L}{\partial x}+\left(\dot{n}-\dot{x}\dot{\xi }\right)\frac{\partial L}{\partial \dot{x}}+\dot{\xi }L=\dot{f}.$$
(2)
where $`f=f(x,t)`$, and
$$\dot{\xi }=\frac{\partial \xi }{\partial t}+\dot{x}\frac{\partial \xi }{\partial x},\dot{n}=\frac{\partial n}{\partial t}+\dot{x}\frac{\partial n}{\partial x},\dot{f}=\frac{\partial f}{\partial t}+\dot{x}\frac{\partial f}{\partial x}.$$
Moreover, a constant of motion of the system is given by
$$\mathrm{\Phi }=(\xi \dot{x}-n)\frac{\partial L}{\partial \dot{x}}-\xi L+f.$$
(3)
The Lagrangian $`L=\frac{1}{2}(\dot{x}^2-\omega ^2x^2)`$ gives the equations of motion of the parametric oscillating type; substituting this Lagrangian in (2) and equating to zero the coefficients of the corresponding powers of $`\dot{x}`$, one gets a set of equations for $`\xi `$, $`n`$, $`f`$. Next, it is easy to prove that they imply that $`\xi `$ is a function only of $`t`$ and fulfills
$$\dddot{\xi }+4\xi \omega \dot{\omega }+4\omega ^2\dot{\xi }=0.$$
(4)
The following results are easy to get
$$n(x,t)=\frac{1}{2}\dot{\xi }x+\psi (t),$$
$$f(x,t)=\frac{1}{4}\ddot{\xi }x^2+\dot{\psi }x+C,\ddot{\psi }+\omega ^2\psi =0.$$
Choosing $`C=0`$, $`\psi =0`$, and substituting these values in (3), one can find that
$$\mathrm{\Phi }=\frac{1}{2}(\xi \dot{x}^2+[\xi \omega ^2+\frac{1}{2}\ddot{\xi }]x^2-\dot{\xi }x\dot{x})$$
(5)
is a conserved quantity for the parametric undamped oscillatory motion if $`\xi `$ satisfies (4). Notice that (4) has the first integral
$$\xi \ddot{\xi }-\frac{1}{2}\dot{\xi }^2+2\xi ^2\omega ^2=C_1.$$
(6)
If we choose $`\xi =\rho ^2`$ in (5) and (6), with $`C_1=2`$, we get that $`\mathrm{\Phi }`$ is the Ermakov-Lewis invariant. If the formula for the latter is considered as a differential equation for $`x`$, then it is easy to solve in the variable $`x/\rho `$; the result can be written in the form
$$x=\rho [A\mathrm{cos}\varphi +B\mathrm{sin}\varphi ],\varphi =\varphi (t),$$
(7)
where $`\dot{\varphi }=1/\rho ^2`$ and $`A`$ and $`B`$ are arbitrary constants. Thus, the general parametric solution can be found if a particular solution of Pinney’s equation is known.
Consider now the Ermakov-Lewis invariant as a conserved quantity for Pinney’s equation; this is possible if $`x`$ fulfills the parametric equation of motion. This standpoint is interesting because it provides an example of how to use Noether’s theorem to change a problem of solving nonlinear equations into an equivalent problem of solving linear equations. Thus, if we take as our initial task to solve Pinney’s equation, we can use Noether’s theorem with
$$L(\rho ,\dot{\rho },t)=\frac{1}{2}(\dot{\rho }^2-\omega ^2\rho ^2-\frac{1}{\rho ^2}),$$
to prove that
$$\mathrm{\Phi }=\frac{1}{2}\left[\frac{x^2}{\rho ^2}+C_2\frac{\rho ^2}{x^2}+(\rho \dot{x}-\dot{\rho }x)^2\right]$$
(8)
is a conserved quantity for Pinney’s equation provided that $`x`$ satisfies
$$\ddot{x}+\omega ^2x=C_2/x^3.$$
(9)
The quantity $`C_2`$ is an arbitrary constant; choosing $`C_2=0`$, we reduce Pinney’s solution to the parametric linear solution, while (8) turns into the Ermakov-Lewis invariant.
If we write the invariant for two different solutions of the linear parametric equation, $`x_1`$ and $`x_2`$, while keeping the same $`\rho `$, and eliminate $`\dot{\rho }`$ in the resulting equations we get
$$\rho =\frac{1}{W}\sqrt{I_1x_2^2+I_2x_1^2+2x_1x_2[I_1I_2-W^2]^{1/2}},$$
(10)
where $`W=\dot{x}_1x_2-x_1\dot{x}_2`$, and $`I_1`$ and $`I_2`$ are constants. Thus, a general solution of Pinney’s equation can be obtained if two solutions of the linear parametric equation can be found. (Since the Wronskian $`W`$ is constant for two independent linear solutions, we can take $`I_1=1`$, $`I_2=W^2`$, and therefore (10) turns into $`\rho =\sqrt{x_1^2+(1/W^2)x_2^2}`$, which is the result given by Pinney in 1950). Moreover, one can see from (7) that two independent parametric solutions are $`x_1=\stackrel{~}{\rho }\mathrm{cos}\varphi `$, $`x_2=\stackrel{~}{\rho }\mathrm{sin}\varphi `$, where $`\stackrel{~}{\rho }`$ is any solution of Pinney’s equation. Then $`W^2=1`$, and (10) turns into
$$\rho =\stackrel{~}{\rho }\sqrt{I_1\mathrm{sin}^2\varphi +I_2\mathrm{cos}^2\varphi +[I_1I_2-1]^{1/2}\mathrm{sin}2\varphi },\dot{\varphi }=1/\stackrel{~}{\rho }^2.$$
(11)
This beautiful result obtained by Lutzky by means of Noether’s theorem gives the general solution of Pinney’s equation in terms of an arbitrary particular solution of the same equation. Moreover, Lutzky suggested that this approach can be used to solve certain nonlinear dynamical systems once a conserved quantity containing an auxiliary function of a corresponding nonlinear differential equation can be found. Even if the auxiliary equation is nonlinear, sometimes it is simpler to solve than the original linear equation. In any case, one can establish useful relationships between the solutions of the two types of equations.
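Lutzky’s formula (11) can be tested numerically (a sketch; $`\omega ^2(t)`$, the constants $`I_1`$, $`I_2`$, and the initial data are arbitrary choices): one integrates a particular Pinney solution $`\stackrel{~}{\rho }`$ together with $`\varphi `$, builds $`\rho `$ from (11), and compares it with a direct integration of Pinney’s equation for the matching initial conditions.

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def omega2(t):                   # arbitrary time-dependent frequency
    return 1.0 + 0.5*math.cos(t)

I1, I2 = 2.0, 1.0
K = math.sqrt(I1*I2 - 1.0)       # the coefficient [I1*I2 - 1]^(1/2)

def rhs(t, s):
    # s = (rt, rtp, phi, r, rp): rt is the particular Pinney solution,
    # phi' = 1/rt^2, and r is an independent direct Pinney integration
    rt, rtp, phi, r, rp = s
    return (rtp, -omega2(t)*rt + 1.0/rt**3,
            1.0/rt**2,
            rp, -omega2(t)*r + 1.0/r**3)

# constructed solution starts with rho(0) = sqrt(I2) = 1, rho'(0) = K/sqrt(I2) = 1
s, t, h = (1.0, 0.0, 0.0, 1.0, 1.0), 0.0, 1e-3
err = 0.0
for _ in range(10000):           # integrate up to t = 10
    s = rk4_step(rhs, t, s, h)
    t += h
    rt, rtp, phi, r, rp = s
    rho_c = rt*math.sqrt(I1*math.sin(phi)**2 + I2*math.cos(phi)**2
                         + K*math.sin(2*phi))      # Lutzky's formula (11)
    err = max(err, abs(rho_c - r))

print(err)   # the constructed rho matches the direct integration
```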
In conclusion, we mention that Noether’s method can be applied to the equation of parametric motion with first derivative (1); in this way one can reproduce the results of Eliezer and Gray of the previous chapter. The effective Lagrangian for (1) is given by $`L=\frac{1}{2}e^{F(t)}[\dot{x}^2-\omega ^2(t)x^2]`$, where $`dF/dt=g(t)`$.
## 8. Possible generalizations of the Ermakov method.
We have seen that there is a simple relationship between the solutions of the parametric oscillator
$$\ddot{x}+\omega ^2(t)x=0,$$
(1)
and the solution of nonlinear differential equations of the Pinney type that differ from eq. (1) only in the nonlinear term. The equation of motion of a charged particle in some types of time-dependent magnetic fields can be written in the above form. Many time-dependent oscillating systems are governed by the same eq. (1). A conserved quantity for eq. (1) is
$$I_{EL}=\frac{1}{2}[(x^2/\rho ^2)+(\rho \dot{x}-\dot{\rho }x)^2],$$
(2)
where $`x(t)`$ satisfies eq. (1) and $`\rho (t)`$ satisfies the auxiliary equation
$$\ddot{\rho }+\omega ^2(t)\rho =1/\rho ^3.$$
(3)
Using eq. (1) to eliminate $`\omega ^2(t)`$ in eq. (3) we find
$$\ddot{\rho }-(\rho /x)\ddot{x}=1/\rho ^3,$$
(4)
or
$$x\ddot{\rho }-\rho \ddot{x}=(d/dt)(x\dot{\rho }-\rho \dot{x})=x/\rho ^3.$$
(5)
Now, multiplying this equation by $`x\dot{\rho }\rho \dot{x}`$ we can write
$$(x\dot{\rho }-\rho \dot{x})(d/dt)(x\dot{\rho }-\rho \dot{x})=(x\dot{\rho }-\rho \dot{x})x/\rho ^3,$$
(6)
or
$$\frac{1}{2}(d/dt)(x\dot{\rho }-\rho \dot{x})^2=-\frac{1}{2}(d/dt)(x^2/\rho ^2),$$
(7)
and therefore we have the invariant
$$I_{EL}=\frac{1}{2}[(x^2/\rho ^2)+(\rho \dot{x}-\dot{\rho }x)^2],$$
(8)
where $`x`$ is any solution of eq. (1) and $`\rho `$ is any solution of eq. (3).
A simple generalization of this result has been proposed by Ray and Reid in 1979 . Instead of (3) they considered the following equation
$$\ddot{\rho }+\omega ^2(t)\rho =(1/x\rho ^2)f(x/\rho ),$$
(9)
where $`x`$ is a solution of eq. (1) and $`f(x/\rho )`$ is an arbitrary function of $`x/\rho `$. If we again eliminate $`\omega ^2`$ and employ $`x\dot{\rho }-\rho \dot{x}`$ as a factor, we obtain
$$\frac{1}{2}(d/dt)(x\dot{\rho }-\rho \dot{x})^2=-\frac{1}{2}(d/dt)\varphi (x/\rho ),$$
(10)
where
$$\varphi (x/\rho )=2\int ^{x/\rho }f(u)du.$$
(11)
From eq. (10) we have the invariant
$$I_f=\frac{1}{2}[\varphi (x/\rho )+(\rho \dot{x}-\dot{\rho }x)^2],$$
(12)
where $`x`$ is a solution of eq. (1) and $`\rho `$ is a solution of eq. (9). For $`f=x/\rho `$ we reobtain the invariant $`I_{EL}`$. The result (12) provides a connection between the solutions of the linear equation (1) with the solutions of an infinite number of nonlinear equations (9) by means of the invariant $`I_f`$.
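The invariant $`I_f`$ is easy to verify numerically. In the sketch below the choice $`f(u)=u+u^3`$ (together with $`\omega ^2(t)`$ and the initial data) is an arbitrary illustration; it gives $`\varphi (u)=u^2+u^4/2`$ and the auxiliary equation $`\ddot{\rho }+\omega ^2\rho =1/\rho ^3+x^2/\rho ^5`$:

```python
import math

def rk4_step(f, t, s, h):
    """One Runge-Kutta-4 step for s' = f(t, s), s a tuple of floats."""
    k1 = f(t, s)
    k2 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
    k3 = f(t + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def omega2(t):                   # arbitrary time-dependent frequency
    return 1.0 + 0.4*math.cos(t)

def rhs(t, s):
    # f(u) = u + u^3, so eq. (9) reads rho'' + w^2 rho = 1/rho^3 + x^2/rho^5
    x, vx, rho, vrho = s
    return (vx, -omega2(t)*x,
            vrho, -omega2(t)*rho + 1.0/rho**3 + x**2/rho**5)

def I_f(s):
    # phi(u) = 2*integral(u + u^3) = u^2 + u^4/2, so eq. (12) gives
    # I_f = (1/2)[ (x/rho)^2 + (x/rho)^4/2 + (rho x' - rho' x)^2 ]
    x, vx, rho, vrho = s
    u = x/rho
    return 0.5*(u**2 + 0.5*u**4 + (rho*vx - vrho*x)**2)

s, t, h = (1.0, 0.0, 1.0, 0.0), 0.0, 1e-3
I0 = I_f(s)                      # = 0.75 exactly for these initial data
drift = 0.0
for _ in range(10000):           # integrate up to t = 10
    s = rk4_step(rhs, t, s, h)
    t += h
    drift = max(drift, abs(I_f(s) - I0))

print(I0, drift)
```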
As an additional generalization, one can consider the following two equations
$$\ddot{x}+\omega ^2(t)x=(1/\rho x^2)g(\rho /x),$$
(13)
$$\ddot{\rho }+\omega ^2(t)\rho =(1/x\rho ^2)f(x/\rho ),$$
(14)
where $`g`$ and $`f`$ are arbitrary functions of their arguments. Applying the same procedure to these equations one can find the invariant
$$I_{f,g}=\frac{1}{2}[\varphi (x/\rho )+\theta (\rho /x)+(x\dot{\rho }-\rho \dot{x})^2],$$
(15)
where
$$\varphi (x/\rho )=2\int ^{x/\rho }f(u)du,$$
(16)
$$\theta (\rho /x)=2\int ^{\rho /x}g(u)du.$$
(17)
The expression (15) is an invariant whenever $`x`$ is a solution of eq. (13) and $`\rho `$ is a solution of eq. (14). One should notice that the functions $`f`$ and $`g`$ are arbitrary, and therefore the invariant $`I_{f,g}`$ gives the connection between the solutions of many different differential equations. We can see that the Ermakov-Lewis invariant is merely a particular case of $`I_{f,g}`$ with $`g=0`$, $`f=x/\rho `$.
In the cases $`g=0`$, $`f=0`$; $`g=0`$, $`f=x/\rho `$; $`g=\rho /x`$, $`f=0`$; and $`f=x/\rho `$, $`g=\rho /x`$, equations (13) and (14), respectively, decouple. In general, if we have found a solution for $`x`$, then the invariant $`I_{f,g}`$ provides some information about the solution $`\rho `$.
On the other hand, it is not known if the simple mechanical interpretation of Eliezer and Gray is also available for different choices of $`f`$ and $`g`$. The simple proof of the existence of $`I_{f,g}`$ clarifies how such invariants can occur from pairs of differential equations.
## 9. Geometrical angles and phases in the Ermakov problem.
The quantum mechanical holonomic effect known as Berry’s phase (BP) (1984) has been of much interest in the last fifteen years. In the simplest cases, it shows up when the time-dependent parameters of a system change adiabatically in time along a closed trajectory in the parameter space. The wave function of the system acquires, in addition to the common dynamical phase $`\mathrm{exp}\left(-\frac{i}{\hbar }\int _0^TE_n(t)\,dt\right)`$, a geometrical phase factor given by
$$\gamma _n(c)=i\int _0^Tdt\,\langle \mathrm{\Psi }_n(X(t))|\frac{d}{dt}|\mathrm{\Psi }_n(X(t))\rangle ,$$
(1)
because the parameters are slowly changed along the closed path $`c`$ of the spatial parameter $`X(t)`$ during the period $`T`$. Here $`|\mathrm{\Psi }_n(X(t))\rangle `$ are the eigenfunctions of the instantaneous Hamiltonian $`H(X(t))`$. BP has a classical analogue as an angular shift accumulated by the system when its dynamical variables are expressed in angle-action variables. This angular shift is known in the literature as Hannay’s angle (Hannay 1985, Berry 1985). Various model systems have been employed to calculate the BP and its classical analog. One of these systems is the generalized harmonic oscillator, whose Hamiltonian is given by (Berry 1985, Hannay 1985)
$$H_{XYZ}(p,q,t)=\frac{1}{2}[X(t)q^2+2Y(t)qp+Z(t)p^2],$$
(2)
where the slowly varying parameters are $`X(t)`$, $`Y(t)`$ and $`Z(t)`$.
Since $`H_{XYZ}`$ can be transformed into the Hamiltonian of a parametric oscillator, there should be a connection between the BP of the system with $`H_{XYZ}`$ and the Lewis phase of the parametric oscillator . This problem was first studied by Morales . Interestingly, the results are exact, although the system does not evolve adiabatically in time, and they reduce to Berry’s result in the adiabatic limit.
Lewis and Riesenfeld have shown that for a quantum nonstationary system which is characterized by a Hamiltonian $`\widehat{H}(t)`$ and a Hermitian invariant $`\widehat{I}(t)`$, the general solution of the Schroedinger equation
$$i\hbar \frac{\partial \mathrm{\Psi }(q,t)}{\partial t}=\widehat{H}(t)\mathrm{\Psi }(q,t),$$
(3)
is given by
$$\mathrm{\Psi }(q,t)=\sum _nC_n\mathrm{exp}(i\alpha _n(t))\mathrm{\Psi }_n(q,t).$$
(4)
$`\mathrm{\Psi }_n(q,t)`$ are the eigenfunctions of the invariant
$$\widehat{I}\mathrm{\Psi }_n(q,t)=\lambda _n\mathrm{\Psi }_n(q,t),$$
(5)
where the eigenvalues $`\lambda _n`$ are time-independent, the coefficients $`C_n`$ are constants, and the phases $`\alpha _n(t)`$ are obtained from the equation
$$\hbar \,d\alpha _n(t)/dt=\langle \mathrm{\Psi }_n|i\hbar \partial /\partial t-\widehat{H}(t)|\mathrm{\Psi }_n\rangle .$$
(6)
Using this result, Lewis and Riesenfeld obtained solutions for a quantum harmonic oscillator of parametric frequency characterized by the classical Hamiltonian
$$H(t)=\frac{1}{2}p^2+\frac{1}{2}\mathrm{\Omega }^2(t)q^2$$
(7)
and the classical equation of motion
$$\ddot{q}+\mathrm{\Omega }^2(t)q=0,$$
(8)
where the dots denote differentiation with respect to time. The matrix elements required to evaluate the BP are given by
$$\langle \mathrm{\Psi }_n|\partial /\partial t|\mathrm{\Psi }_n\rangle =\frac{1}{2}i(\rho \ddot{\rho }-\dot{\rho }^2)(n+\frac{1}{2}),$$
(9)
$$\langle \mathrm{\Psi }_n|\widehat{H}(t)|\mathrm{\Psi }_n\rangle =\frac{1}{2}(\dot{\rho }^2+\mathrm{\Omega }^2(t)\rho ^2+1/\rho ^2)(n+\frac{1}{2}),$$
(10)
where $`\rho (t)`$ is a real function satisfying the equation
$$\ddot{\rho }+\mathrm{\Omega }^2(t)\rho =1/\rho ^3.$$
(11)
Substituting (9) and (10) in (6) and integrating one gets
$$\alpha _n(t)=-(n+\frac{1}{2})\int _0^t\frac{dt^{\prime }}{\rho ^2(t^{\prime })}.$$
(12)
Using either (9) or (12) one can get the BP and Hannay’s angle for the system with Hamiltonian $`H_{XYZ}`$. For this system, the frequency obtained from the Hamiltonian expressed in action-angle variables is given by
$$\omega =\partial H(I,X(t),Y(t),Z(t))/\partial I=(XZ-Y^2)^{1/2}.$$
(13)
From (2) one can get the equations of motion for $`q`$ and $`p`$ and by eliminating $`p`$ one can get the Newtonian equation of motion for $`q`$ as follows
$$\ddot{q}-(\dot{Z}/Z)\dot{q}+[XZ-Y^2+(\dot{Z}Y-\dot{Y}Z)/Z]q=0.$$
(14)
The term in $`\dot{q}`$ can be eliminated by introducing a new coordinate $`Q(t)`$ given by (Berry 1985)
$$q(t)=[Z(t)]^{1/2}Q(t).$$
(15)
Substituting (15) in (14) one gets
$$\ddot{Q}+\left[XZ-Y^2+(\dot{Z}Y-\dot{Y}Z)/Z+\frac{1}{2}(\ddot{Z}/Z-\dot{Z}^2/Z^2)-\frac{1}{4}(\dot{Z}/Z)^2\right]Q=0,$$
(16)
which corresponds to the equation of motion of an oscillator with parametrically forced frequency. Berry found Hannay’s angle $`\mathrm{\Delta }\theta `$ by the WKB method in quantum mechanics, but it can also be obtained by means of (9) or (12).
Comparing (8) with (16) we see that we can define $`\mathrm{\Omega }^2(t)`$ as
$$\mathrm{\Omega }^2(t)=XZ-Y^2+(\dot{Z}Y-\dot{Y}Z)/Z+\frac{1}{2}(\ddot{Z}/Z-\dot{Z}^2/Z^2)-\frac{1}{4}(\dot{Z}/Z)^2.$$
(17)
With this connection and employing (1) and (9) we get
$$\gamma _n(C)=-\frac{1}{2}(n+\frac{1}{2})\int _0^T(\rho \ddot{\rho }-\dot{\rho }^2)\,dt,$$
(18)
where $`\rho (t)`$ is the solution of (11) with $`\mathrm{\Omega }^2(t)`$ given by (17). It is important to notice that (18) is exact even when the system does not evolve slowly in time.
To compare with Berry’s result one should take the adiabatic limit. For that we define an adiabaticity parameter $`ϵ`$ and a ‘slow time’ variable $`\tau `$
$$\xi \rightarrow \xi (\tau ),\qquad \tau =ϵt,$$
(19)
in terms of which $`\mathrm{\Omega }^2(\tau )`$ turns into
$$\mathrm{\Omega }^2(\tau )=XZ-Y^2+ϵ(Z^{\prime }Y-Y^{\prime }Z)/Z+ϵ^2\left[\frac{1}{2}(Z^{\prime }/Z)^{\prime }-\frac{1}{4}(Z^{\prime }/Z)^2\right],$$
(20)
where the primes indicate differentiation with respect to $`\tau `$. It has been shown by Lewis that in the adiabatic limit the Pinney equation (11) with (20) can be solved as a power series in $`ϵ`$ with the leading term given by
$$\rho _0=\mathrm{\Omega }^{-1/2}(\tau ).$$
(21)
If we substituted this expression for $`\rho `$ and its time derivatives in (18) we could obtain the BP in the adiabatic limit. However, it is easier to calculate the Lewis phase first and then subtract from it the dynamical phase $`-\hbar ^{-1}\int _0^t\langle \mathrm{\Psi }_n|H(t^{\prime })|\mathrm{\Psi }_n\rangle dt^{\prime }`$. Substituting (20) and (21) in (12) we get
$$\alpha _n(\tau )=-(n+\frac{1}{2})\left(\frac{1}{ϵ}\int _0^\tau (XZ-Y^2)^{1/2}d\tau ^{\prime }+\frac{1}{2}\int _0^\tau \frac{(Z^{\prime }Y-Y^{\prime }Z)}{Z(XZ-Y^2)^{1/2}}d\tau ^{\prime }+O(ϵ)\right).$$
(22)
The first term on the right-hand side is the dynamical phase, and the second and higher-order terms are associated with Berry’s phase. Therefore we can write the BP as follows
$$\gamma _n(C)=-\frac{1}{2}(n+\frac{1}{2})\int _0^T\frac{\dot{Z}Y-\dot{Y}Z}{Z(XZ-Y^2)^{1/2}}\,dt.$$
(23)
Hannay’s angle is obtained by using the correspondence principle in the form (Berry 1985, Hannay 1985)
$$\mathrm{\Delta }\theta =-\partial \gamma _n/\partial n,$$
(24)
as
$$\mathrm{\Delta }\theta =\frac{1}{2}\int _0^T\frac{(\dot{Z}Y-\dot{Y}Z)}{Z(XZ-Y^2)^{1/2}}\,dt,$$
(25)
which is the same result as that obtained by Berry (1985).
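Formula (25) is a plain loop integral, so it can be evaluated by quadrature once a closed path in parameter space is chosen. The Python sketch below uses the hypothetical loop $`X=2`$, $`Y=\frac{1}{2}\mathrm{sin}t`$, $`Z=1+0.3\mathrm{cos}t`$ of period $`2\pi `$, picked only so that $`XZ-Y^2>0`$ everywhere; it is an illustration, not a case treated in the text.

```python
import math

def X(t): return 2.0
def Y(t): return 0.5 * math.sin(t)
def Z(t): return 1.0 + 0.3 * math.cos(t)
def Ydot(t): return 0.5 * math.cos(t)
def Zdot(t): return -0.3 * math.sin(t)

def integrand(t):
    # (Zdot*Y - Ydot*Z) / (Z * sqrt(X*Z - Y^2)), the integrand of the angle formula
    return (Zdot(t) * Y(t) - Ydot(t) * Z(t)) / (Z(t) * math.sqrt(X(t) * Z(t) - Y(t)**2))

def hannay(n):
    # composite trapezoid rule over one period, including the 1/2 prefactor
    T = 2.0 * math.pi
    h = T / n
    s = 0.5 * (integrand(0.0) + integrand(T))
    for k in range(1, n):
        s += integrand(k * h)
    return 0.5 * h * s

print(hannay(2000))
```

For a smooth periodic integrand the trapezoid rule converges extremely fast, so doubling the number of nodes changes the result only at roundoff level.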
In this way, it has been proved that if a time-dependent quadratic Hamiltonian can be transformed into the parametric form given by (7), then the Lewis phase can be used to calculate the BP and Hannay’s angle. Although we presented the particular case discussed by Morales, it is known that Lewis’ approach for time-dependent systems is general. As a matter of fact, one can find more general cases in the literature.
## 10. Application to minisuperspace Hamiltonian cosmology.
The formalism of Ermakov invariants can be a useful alternative for studying the evolutionary and chaoticity problems of “quantum” canonical universes, since these invariants are closely related to the Hamiltonian formulation. Moreover, as we have seen in the previous chapter, Ermakov’s method is intimately related to geometrical angles and phases. Therefore, it seems natural to speak of Hannay’s angle, as well as of various types of topological phases, at the cosmological level.
The Hamiltonian formulation of general relativity has been developed in the classical works of Dirac and Arnowitt, Deser and Misner (ADM) . When it was applied to the Bianchi homogeneous cosmological models it led to the so-called Hamiltonian cosmology . Its quantum counterpart, the canonical quantum cosmology , is based on the canonical quantization methods and/or path integral procedures. These cosmologies are often used in heuristic studies of the very early universe, close to the Planck scale epoch $`t_P\sim 10^{-43}`$ s.
The most general models for homogeneous cosmologies are the Bianchi ones. In particular, those of class A with diagonal metric are also the simplest to quantize.
Briefly, we can say that in the ADM formalism the metric of these models is of the form
$$\mathrm{ds}^2=-\mathrm{dt}^2+\mathrm{e}^{2\alpha (\mathrm{t})}(\mathrm{e}^{2\beta (\mathrm{t})})_{\mathrm{ij}}\omega ^\mathrm{i}\omega ^\mathrm{j},$$
(1)
where $`\alpha (t)`$ is a scalar function and $`\beta _{\mathrm{ij}}(\mathrm{t})`$ is a diagonal $`3\times 3`$ matrix, $`\beta _{\mathrm{ij}}=\mathrm{diag}(\mathrm{x}+\sqrt{3}\mathrm{y},\mathrm{x}-\sqrt{3}\mathrm{y},-2\mathrm{x})`$; $`\omega ^\mathrm{i}`$ are 1-forms characterizing each of the Bianchi models and fulfilling the algebra $`\mathrm{d}\omega ^\mathrm{i}=\frac{1}{2}\mathrm{C}_{\mathrm{jk}}^\mathrm{i}\omega ^\mathrm{j}\wedge \omega ^\mathrm{k}`$, where $`\mathrm{C}_{\mathrm{jk}}^\mathrm{i}`$ are the structure constants.
The ADM action has the form
$$\mathrm{I}_{\mathrm{ADM}}=\int (\mathrm{P}_\mathrm{x}\mathrm{dx}+\mathrm{P}_\mathrm{y}\mathrm{dy}+\mathrm{P}_\alpha \mathrm{d}\alpha -\mathrm{N}\mathcal{H}_{\perp })\mathrm{dt},$$
(2)
where the Ps are the canonical momenta, N is the lapse function, and
$$\mathcal{H}_{\perp }=\mathrm{e}^{-3\alpha }\left(-\mathrm{P}_\alpha ^2+\mathrm{P}_\mathrm{x}^2+\mathrm{P}_\mathrm{y}^2+\mathrm{e}^{4\alpha }\mathrm{V}(\mathrm{x},\mathrm{y})\right).$$
(3)
$`\mathrm{e}^{4\alpha }\mathrm{V}(\mathrm{x},\mathrm{y})=\mathrm{U}(\mathrm{q}^\mu )`$ is the potential of the cosmological model under consideration. The Wheeler-DeWitt (WDW) equation can be obtained by canonical quantization, i.e., substituting $`\mathrm{P}_{\mathrm{q}^\mu }`$ by $`\widehat{\mathrm{P}}_{\mathrm{q}^\mu }=-\mathrm{i}\partial _{\mathrm{q}^\mu }`$ in eq. (3), where $`\mathrm{q}^\mu =(\alpha ,\mathrm{x},\mathrm{y})`$. The factor ordering of $`\mathrm{e}^{-3\alpha }`$ with the operator $`\widehat{\mathrm{P}}_\alpha `$ is not unique. Hartle and Hawking suggested an almost general ordering of the following type
$$\mathrm{e}^{-(3-\mathrm{Q})\alpha }\partial _\alpha \mathrm{e}^{-\mathrm{Q}\alpha }\partial _\alpha =\mathrm{e}^{-3\alpha }\partial _\alpha ^2-\mathrm{Q}\mathrm{e}^{-3\alpha }\partial _\alpha ,$$
(4)
where $`Q`$ is any real constant. If $`Q=0`$ the WDW equation is
$$\Box \mathrm{\Psi }-\mathrm{U}(\mathrm{q}^\mu )\mathrm{\Psi }=0.$$
(5)
Using the ansatz $`\mathrm{\Psi }(\mathrm{q}^\mu )=\mathrm{Ae}^{\pm \mathrm{\Phi }}`$ one gets
$$\pm A\Box \mathrm{\Phi }+A[(\nabla \mathrm{\Phi })^2-U]=0,$$
(6)
where $`\Box =\mathrm{G}^{\mu \nu }\frac{\partial ^2}{\partial \mathrm{q}^\mu \partial \mathrm{q}^\nu }`$, $`(\nabla )^2=-(\frac{\partial }{\partial \alpha })^2+(\frac{\partial }{\partial \mathrm{x}})^2+(\frac{\partial }{\partial \mathrm{y}})^2`$, and $`\mathrm{G}^{\mu \nu }=\mathrm{diag}(-1,1,1)`$.
Employing the change of variable $`(\alpha ,\mathrm{x},\mathrm{y})(\beta _1,\beta _2,\beta _3)`$, where
$$\beta _1=\alpha +x+\sqrt{3}y,\beta _2=\alpha +x-\sqrt{3}y,\beta _3=\alpha -2x,$$
(7)
the 1D character of some of the Bianchi models can be studied in a more direct way.
Empty FRW (EFRW) universes for $`Q=0`$.
We now apply the Ermakov method to the simplest cosmological oscillators which are the empty quantum universes of Friedmann-Robertson-Walker (EFRW) type. The results included in this chapter have been published recently . When the Hartle-Hawking parameter is equal to zero ($`Q=0`$), the WDW equation for the EFRW universe is
$$\frac{d^2\mathrm{\Psi }}{d\mathrm{\Omega }^2}-\kappa e^{-4\mathrm{\Omega }}\mathrm{\Psi }(\mathrm{\Omega })=0,$$
(8)
where $`\mathrm{\Omega }`$ is the Misner time, which is related to the volume of the universe $`V`$ at a given cosmological epoch by $`\mathrm{\Omega }=-\mathrm{ln}(V^{1/3})`$ , $`\kappa `$ is the curvature parameter of the universe (1,0,-1 for closed, plane, open universes, respectively) and $`\mathrm{\Psi }`$ is the wave function of the universe. The general solution is obtained as a linear superposition of modified Bessel functions of zero order in the case $`\kappa =1`$, $`\mathrm{\Psi }(\mathrm{\Omega })=C_1I_0(\frac{1}{2}e^{-2\mathrm{\Omega }})+C_2K_0(\frac{1}{2}e^{-2\mathrm{\Omega }})`$. If $`\kappa =-1`$ the solution will be a superposition of ordinary Bessel functions of zero order, $`\mathrm{\Psi }(\mathrm{\Omega })=C_1J_0(\frac{1}{2}e^{-2\mathrm{\Omega }})+C_2Y_0(\frac{1}{2}e^{-2\mathrm{\Omega }})`$. $`C_1`$ and $`C_2`$ are arbitrary superposition constants that we take, for simplicity, equal: $`C_1=C_2=1`$.
Eq. (8) can be transformed into the canonical equations of motion for a classical point particle of mass $`M=1`$, generalized coordinate $`q=\mathrm{\Psi }`$ and momentum $`p=\mathrm{\Psi }^{\prime }`$, by considering Misner’s time as a Hamiltonian time, for which we shall keep the same notation. Thus, we can write
$`{\displaystyle \frac{dq}{d\mathrm{\Omega }}}`$ $`=`$ $`p`$ (9)
$`{\displaystyle \frac{dp}{d\mathrm{\Omega }}}`$ $`=`$ $`\kappa e^{-4\mathrm{\Omega }}q.`$ (10)
These equations describe the canonical motion of an inverted oscillator ($`\kappa =1`$) and of a normal one ($`\kappa =-1`$), respectively, of Hamiltonian
$$H(\mathrm{\Omega })=\frac{p^2}{2}-\kappa e^{-4\mathrm{\Omega }}\frac{q^2}{2}.$$
(11)
For the EFRW Hamiltonian the phase space functions $`T_1=\frac{p^2}{2}`$, $`T_2=pq`$, and $`T_3=\frac{q^2}{2}`$ form a dynamical Lie algebra, i.e.,
$$H=\sum _nh_n(\mathrm{\Omega })T_n(p,q),$$
(12)
which is closed with respect to the Poisson brackets: $`\{T_1,T_2\}=-2T_1`$, $`\{T_2,T_3\}=-2T_3`$, $`\{T_1,T_3\}=-T_2`$. The EFRW Hamiltonian can now be written as
$$H=T_1\kappa e^{4\mathrm{\Omega }}T_3.$$
(13)
The Ermakov invariant $`I`$ is a function in the dynamical algebra
$$I=\sum _rϵ_r(\mathrm{\Omega })T_r,$$
(14)
and through the time invariance condition
$$\frac{\partial I}{\partial \mathrm{\Omega }}=-\{I,H\},$$
(15)
one is led to the following equations for the unknown functions $`ϵ_r(\mathrm{\Omega })`$
$$\dot{ϵ}_r+\sum _n\left[\sum _mC_{nm}^rh_m(\mathrm{\Omega })\right]ϵ_n=0,$$
(16)
where $`C_{nm}^r`$ are the structure constants of the Lie algebra given above. Thus, we obtain
$`\dot{ϵ}_1`$ $`=`$ $`-2ϵ_2`$
$`\dot{ϵ}_2`$ $`=`$ $`-\kappa e^{-4\mathrm{\Omega }}ϵ_1-ϵ_3`$ (17)
$`\dot{ϵ}_3`$ $`=`$ $`-2\kappa e^{-4\mathrm{\Omega }}ϵ_2.`$
The solution of this system of equations can be easily obtained by choosing $`ϵ_1=\rho ^2`$, which gives $`ϵ_2=-\rho \dot{\rho }`$ and $`ϵ_3=\dot{\rho }^2+\frac{1}{\rho ^2}`$, where $`\rho `$ is the solution of Pinney’s equation
$$\ddot{\rho }-\kappa e^{-4\mathrm{\Omega }}\rho =\frac{1}{\rho ^3}.$$
(18)
In terms of $`\rho (\mathrm{\Omega })`$ and using (6), the Ermakov invariant can be written as follows
$$I=I_{\mathrm{kin}}+I_{\mathrm{pot}}=\frac{(\rho p-\dot{\rho }q)^2}{2}+\frac{q^2}{2\rho ^2}=\frac{\rho ^4}{2}\left[\frac{d}{d\mathrm{\Omega }}\left(\frac{\mathrm{\Psi }}{\rho }\right)\right]^2+\frac{1}{2}\left(\frac{\mathrm{\Psi }}{\rho }\right)^2.$$
(19)
We have followed the calculations of Pinney and of Eliezer and Gray for $`\rho (\mathrm{\Omega })`$ in terms of linear combinations of the aforementioned Bessel functions that satisfy the initial conditions given by these authors. We have worked with the values $`A=1`$, $`B=1/W^2`$ and $`C=0`$ for Pinney’s constants, where $`W`$ is the Wronskian of the Bessel functions. We have also chosen an auxiliary angular momentum of unit value ($`h=1`$). Since $`I=h^2/2`$, we have to obtain the constant value of one-half for the Ermakov invariant. We have checked this by plotting $`I(\mathrm{\Omega })`$ for $`\kappa =\pm 1`$ in fig. 1.
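The statement that $`I`$ stays at the value $`1/2`$ can also be reproduced numerically without invoking the Bessel functions explicitly: integrating the pair $`x_1,x_2`$ of eq. (8) with canonical initial conditions (so that $`W=1`$) and taking $`\rho =\sqrt{x_1^2+x_2^2}`$, the Eliezer-Gray prescription with $`A=B=1`$, $`C=0`$, one finds $`I=1/2`$ for the solution $`q=x_1`$. The Python sketch below is an illustration (step size and integration range are assumptions) and checks this for both $`\kappa =\pm 1`$.

```python
import math

def worst_deviation(kappa):
    def rhs(t, y):
        # y = (x1, v1, x2, v2);  x'' = kappa * exp(-4*Omega) * x, as in eqs. (9)-(10)
        x1, v1, x2, v2 = y
        w2 = kappa * math.exp(-4.0 * t)
        return [v1, w2 * x1, v2, w2 * x2]

    y = [1.0, 0.0, 0.0, 1.0]       # canonical initial conditions, Wronskian W = 1
    t, h, worst = 0.0, 1e-3, 0.0
    for _ in range(5000):          # Omega from 0 to 5
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, [y[i] + h/2 * k1[i] for i in range(4)])
        k3 = rhs(t + h/2, [y[i] + h/2 * k2[i] for i in range(4)])
        k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]
        t += h
        x1, v1, x2, v2 = y
        rho = math.hypot(x1, x2)             # Pinney solution for A=B=1, C=0
        rhodot = (x1 * v1 + x2 * v2) / rho
        q, p = x1, v1                        # one particular classical solution
        I = 0.5 * ((rho * p - rhodot * q)**2 + (q / rho)**2)
        worst = max(worst, abs(I - 0.5))
    return worst

print(worst_deviation(1.0), worst_deviation(-1.0))   # both deviations stay near zero
```

The Wronskian condition guarantees that $`x_1`$ and $`x_2`$ never vanish simultaneously, so $`\rho `$ stays strictly positive along the whole run.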
Now we pass to the calculation of the angular variables. We first calculate the time-dependent generating function that allows us to pass to the new canonical variables for which $`I`$ is chosen as the new “momentum”
$$S(q,I,\vec{ϵ}(\mathrm{\Omega }))=\int ^qdq^{\prime }\,p(q^{\prime },I,\vec{ϵ}(\mathrm{\Omega })),$$
(20)
leading to
$$S(q,I,\vec{ϵ}(\mathrm{\Omega }))=\frac{q^2}{2}\frac{\dot{\rho }}{\rho }+I\mathrm{arcsin}\left[\frac{q}{\sqrt{2I\rho ^2}}\right]+\frac{q\sqrt{2I\rho ^2-q^2}}{2\rho ^2},$$
(21)
where we have put to zero the constant of integration. Then
$$\theta =\frac{\partial S}{\partial I}=\mathrm{arcsin}\left(\frac{q}{\sqrt{2I\rho ^2}}\right).$$
(22)
Now, the canonical variables are
$$q_1=\rho \sqrt{2I}\mathrm{sin}\theta ,p_1=\frac{\sqrt{2I}}{\rho }\left(\mathrm{cos}\theta +\dot{\rho }\rho \mathrm{sin}\theta \right).$$
(23)
The dynamical angle is
$$\mathrm{\Delta }\theta ^d=\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\frac{\partial H_{\mathrm{new}}}{\partial I}d\mathrm{\Omega }^{\prime }=\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\left[\frac{1}{\rho ^2}-\frac{\rho ^2}{2}\frac{d}{d\mathrm{\Omega }^{\prime }}\left(\frac{\dot{\rho }}{\rho }\right)\right]d\mathrm{\Omega }^{\prime },$$
(24)
while the geometrical angle (generalized Hannay angle) is
$$\mathrm{\Delta }\theta ^g=\frac{1}{2}\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\left[\frac{d}{d\mathrm{\Omega }^{\prime }}(\dot{\rho }\rho )-2\dot{\rho }^2\right]d\mathrm{\Omega }^{\prime }.$$
(25)
The sum of $`\mathrm{\Delta }\theta ^d`$ and $`\mathrm{\Delta }\theta ^g`$ is the total change of angle (Lewis’ angle):
$$\mathrm{\Delta }\theta ^t=\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\frac{1}{\rho ^2}d\mathrm{\Omega }^{\prime }.$$
(26)
Plots of the angular quantities (24)-(26) for $`\kappa =1`$ are displayed in figs. 2, 3, and 4, respectively. For $`\kappa =-1`$ we obtained similar plots.
fig. 1: The Ermakov-Lewis invariant for $`Q=0`$, $`\kappa =\pm 1`$, $`h=1`$.
fig. 2: The dynamical angle as a function of $`\mathrm{\Omega }`$.
fig. 3: The geometrical angle as a function of $`\mathrm{\Omega }`$.
fig. 4: The total angle as a function of $`\mathrm{\Omega }`$.
EFRW universes for $`Q\ne 0`$.
We now apply the Ermakov procedure to the EFRW oscillators when $`Q\ne 0`$. These results have been reported at the Third Workshop of the Mexican Division of Gravitation and Mathematical Physics of the Mexican Physical Society .
It can be shown that the WDW equation for EFRW universes with $`Q`$ taken as a free parameter is
$$\frac{d^2\mathrm{\Psi }}{d\mathrm{\Omega }^2}+Q\frac{d\mathrm{\Psi }}{d\mathrm{\Omega }}-\kappa e^{-4\mathrm{\Omega }}\mathrm{\Psi }(\mathrm{\Omega })=0,$$
(27)
where, as previously, $`\mathrm{\Omega }`$ is Misner’s time and $`\kappa `$ is the curvature index of the FRW universe; $`\kappa =1,0,-1`$ for closed, flat, and open universes, respectively. For $`\kappa =\pm 1`$ the general solution can be expressed in terms of Bessel functions
$$\mathrm{\Psi }_\alpha ^+(\mathrm{\Omega })=e^{-2\alpha \mathrm{\Omega }}\left(C_1I_\alpha (\frac{1}{2}e^{-2\mathrm{\Omega }})+C_2K_\alpha (\frac{1}{2}e^{-2\mathrm{\Omega }})\right)$$
(28)
and
$$\mathrm{\Psi }_\alpha ^{}(\mathrm{\Omega })=e^{-2\alpha \mathrm{\Omega }}\left(C_1J_\alpha (\frac{1}{2}e^{-2\mathrm{\Omega }})+C_2Y_\alpha (\frac{1}{2}e^{-2\mathrm{\Omega }})\right),$$
(29)
respectively, where $`\alpha =Q/4`$. The case $`\kappa =0`$ does not correspond to a parametric oscillator and will not be considered here. Eq. (27) can be turned into the canonical equations for a classical point particle of mass $`M=e^{Q\mathrm{\Omega }}`$, generalized coordinate $`q=\mathrm{\Psi }`$ and momentum $`p=e^{Q\mathrm{\Omega }}\dot{\mathrm{\Psi }}`$ (i.e., of velocity $`v=\dot{\mathrm{\Psi }}`$). Again, identifying Misner’s time $`\mathrm{\Omega }`$ with the classical Hamiltonian time we obtain the equations of motion
$`\dot{q}\equiv {\displaystyle \frac{dq}{d\mathrm{\Omega }}}`$ $`=`$ $`e^{-Q\mathrm{\Omega }}p`$ (30)
$`\dot{p}\equiv {\displaystyle \frac{dp}{d\mathrm{\Omega }}}`$ $`=`$ $`\kappa e^{(Q-4)\mathrm{\Omega }}q.`$ (31)
as derived from the time-dependent Hamiltonian:
$$H_{\mathrm{cl}}(\mathrm{\Omega })=e^{-Q\mathrm{\Omega }}\frac{p^2}{2}-\kappa e^{(Q-4)\mathrm{\Omega }}\frac{q^2}{2}.$$
(32)
The Ermakov invariant $`I(\mathrm{\Omega })`$ can be built algebraically to be a constant of motion. The result is
$$I(\mathrm{\Omega })=(\rho ^2)\frac{p^2}{2}-(e^{Q\mathrm{\Omega }}\rho \dot{\rho })pq+(e^{2Q\mathrm{\Omega }}\dot{\rho }^2+\frac{1}{\rho ^2})\frac{q^2}{2},$$
(33)
where $`\rho `$ is the solution of Pinney’s equation $`\ddot{\rho }+Q\dot{\rho }-\kappa e^{-4\mathrm{\Omega }}\rho =\frac{e^{-2Q\mathrm{\Omega }}}{\rho ^3}`$. In terms of $`\rho _\pm (\mathrm{\Omega })`$ the Ermakov invariant for this class of EFRW universes reads
$$I_{\mathrm{EFRW}}^\pm =\frac{(\rho _\pm p-e^{Q\mathrm{\Omega }}\dot{\rho }_\pm q)^2}{2}+\frac{q^2}{2\rho _\pm ^2}=\frac{e^{2Q\mathrm{\Omega }}}{2}\left(\rho _\pm \dot{\mathrm{\Psi }}_\alpha ^\pm -\dot{\rho }_\pm \mathrm{\Psi }_\alpha ^\pm \right)^2+\frac{1}{2}\left(\frac{\mathrm{\Psi }_\alpha ^\pm }{\rho _\pm }\right)^2.$$
(34)
In the calculation of $`I_{\mathrm{EFRW}}^\pm `$ we have used linear combinations of Bessel functions that fulfill the initial conditions for $`\rho `$ as explained in chapter 6 and which will be presented in some detail in a separate section of the present chapter.
Calculating again the generating function $`S(q,I,\mathrm{\Omega })`$ of the canonical transformations leading to the new momentum $`I`$, we obtain
$$S(q,I,\mathrm{\Omega })=e^{Q\mathrm{\Omega }}\frac{q^2}{2}\frac{\dot{\rho }}{\rho }+I\mathrm{arcsin}\left[\frac{q}{\sqrt{2I\rho ^2}}\right]+\frac{q\sqrt{2I\rho ^2-q^2}}{2\rho ^2},$$
where the integration constant is again chosen to be zero. The new canonical variables are $`q_1=\rho \sqrt{2I}\mathrm{sin}\theta `$ and $`p_1=\frac{\sqrt{2I}}{\rho }\left(\mathrm{cos}\theta +e^{Q\mathrm{\Omega }}\dot{\rho }\rho \mathrm{sin}\theta \right)`$. The angular quantities are $`\mathrm{\Delta }\theta ^d=\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\frac{\partial H_{\mathrm{new}}}{\partial I}d\mathrm{\Omega }^{\prime }=\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }[\frac{e^{-Q\mathrm{\Omega }^{\prime }}}{\rho ^2}-\frac{1}{2}\frac{d}{d\mathrm{\Omega }^{\prime }}(e^{Q\mathrm{\Omega }^{\prime }}\dot{\rho }\rho )+e^{Q\mathrm{\Omega }^{\prime }}\dot{\rho }^2]d\mathrm{\Omega }^{\prime }`$ and $`\mathrm{\Delta }\theta ^g=\frac{1}{2}\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }[\frac{d}{d\mathrm{\Omega }^{\prime }}(e^{Q\mathrm{\Omega }^{\prime }}\dot{\rho }\rho )-2e^{Q\mathrm{\Omega }^{\prime }}\dot{\rho }^2]d\mathrm{\Omega }^{\prime }`$, for the dynamical and geometrical angles, respectively. Thus, the total angle will be $`\mathrm{\Delta }\theta =\int _{\mathrm{\Omega }_0}^\mathrm{\Omega }\frac{e^{-Q\mathrm{\Omega }^{\prime }}}{\rho ^2}d\mathrm{\Omega }^{\prime }`$. On the Misner time axis, going to $`+\mathrm{\infty }`$ means going to the origin of the universe, whereas $`\mathrm{\Omega }_0=0`$ means the present era. With these temporal limits for the cosmological evolution, one finds that the variation of the total angle $`\mathrm{\Delta }\theta `$ is just the Laplace transform of $`1/\rho ^2`$: $`\mathrm{\Delta }\theta =L_{1/\rho ^2}(Q)`$.
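For $`Q\ne 0`$ the conservation of the invariant can again be tested numerically, now with the Misner-time-dependent mass $`e^{Q\mathrm{\Omega }}`$ entering both the momentum and the Pinney equation. The Python sketch below (the values $`Q=1`$, $`\kappa =1`$ and the unit initial data are illustrative assumptions) monitors the drift of $`I_{\mathrm{EFRW}}`$.

```python
import math

Q, kappa = 1.0, 1.0   # illustrative parameter choices

def rhs(t, y):
    # y = (q, qdot, rho, rhodot) as functions of Omega
    q, vq, r, vr = y
    w2 = kappa * math.exp(-4.0 * t)
    return [vq, -Q * vq + w2 * q,                                  # classical equation
            vr, -Q * vr + w2 * r + math.exp(-2.0 * Q * t) / r**3]  # Pinney equation

def ermakov(t, y):
    q, vq, r, vr = y
    p = math.exp(Q * t) * vq       # canonical momentum p = e^{Q Omega} dq/dOmega
    return 0.5 * ((r * p - math.exp(Q * t) * vr * q)**2 + (q / r)**2)

y, t, h = [1.0, 0.0, 1.0, 0.0], 0.0, 1e-3
I0, worst = ermakov(t, y), 0.0
for _ in range(4000):              # Omega from 0 to 4
    k1 = rhs(t, y)
    k2 = rhs(t + h/2, [y[i] + h/2 * k1[i] for i in range(4)])
    k3 = rhs(t + h/2, [y[i] + h/2 * k2[i] for i in range(4)])
    k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(4)])
    y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]
    t += h
    worst = max(worst, abs(ermakov(t, y) - I0))
print(worst)                       # the invariant does not drift
```

For these particular initial data the invariant starts (and stays) at the value $`1/2`$, as in the $`Q=0`$ case.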
The plots of the invariant and of the variations of the angular quantities are shown next, both for the closed EFRW universes as for the open ones.
fig. 5: $`I_{\mathrm{EFRW}}^+(\mathrm{\Omega })`$ for $`Q=3`$ and an initial singularity of unit auxiliary angular momentum.
fig. 6: The dynamical angle as a function of $`\mathrm{\Omega }`$ for a closed EFRW universe and $`Q=1`$.
fig. 7: The geometrical angle for the same case.
fig. 8: The total angle as a function of $`\mathrm{\Omega }`$ for the same case.
fig. 9: $`I_{\mathrm{EFRW}}^{}(\mathrm{\Omega })`$ with $`Q=1`$ for an initial singularity of auxiliary angular momentum excitation $`h=2`$.
fig. 10: The dynamical angle as a function of $`\mathrm{\Omega }`$ for an open EFRW universe of $`Q=1`$.
fig. 11: The geometrical angle as a function of $`\mathrm{\Omega }`$ for the same case.
fig. 12: The total angle as a function of $`\mathrm{\Omega }`$ for the same open case.
Somewhat more complicated cosmological models
We sketch now the Taub pure gravity model whose WDW equation reads
$$\frac{\partial ^2\mathrm{\Psi }}{\partial \mathrm{\Omega }^2}-\frac{\partial ^2\mathrm{\Psi }}{\partial \beta ^2}+Q\frac{\partial \mathrm{\Psi }}{\partial \mathrm{\Omega }}+e^{-4\mathrm{\Omega }}V(\beta )\mathrm{\Psi }=0,$$
(1)
where $`V(\beta )=\frac{1}{3}(e^{-8\beta }-4e^{-2\beta })`$. This equation can be separated in the variables $`x_1=-4\mathrm{\Omega }-8\beta `$ and $`x_2=-4\mathrm{\Omega }-2\beta `$. Thus one is led to the following pair of 1D differential equations for which the Ermakov procedure is similar to the EFRW case
$$\frac{d^2\mathrm{\Psi }_{T1}}{dx_1^2}+\frac{Q}{12}\frac{d\mathrm{\Psi }_{T1}}{dx_1}+\left(\frac{\omega ^2}{4}-\frac{1}{144}e^{x_1}\right)\mathrm{\Psi }_{T1}=0$$
(2)
and
$$\frac{d^2\mathrm{\Psi }_{T2}}{dx_2^2}-\frac{Q}{3}\frac{d\mathrm{\Psi }_{T2}}{dx_2}+\left(\omega ^2-\frac{1}{9}e^{x_2}\right)\mathrm{\Psi }_{T2}=0.$$
(3)
where $`\omega /2`$ is a separation constant. The solutions are $`\mathrm{\Psi }_{T1}\equiv \mathrm{\Psi }_{T\alpha _1}=e^{-(Q/24)x_1}Z_{i\alpha _1}(ie^{x_1/2}/6)`$ and $`\mathrm{\Psi }_{T2}\equiv \mathrm{\Psi }_{T\alpha _2}=e^{(Q/6)x_2}Z_{i\alpha _2}(i2e^{x_2/2}/3)`$, respectively, where $`\alpha _1=\sqrt{\omega ^2-(Q/12)^2}`$ and $`\alpha _2=\sqrt{4\omega ^2-(Q/3)^2}`$.
A more realistic case is that in which a scalar field of minimal coupling to the FRW minisuperspace metric is included. The WDW equation is
$$[\partial _\mathrm{\Omega }^2+Q\partial _\mathrm{\Omega }-\partial _\varphi ^2-\kappa e^{-4\mathrm{\Omega }}+m^2e^{-6\mathrm{\Omega }}\varphi ^2]\mathrm{\Psi }(\mathrm{\Omega },\varphi )=0,$$
(4)
and can be written as a Schroedinger equation for a two-component wave function . This allows one to think of squeezed cosmological states in the Ermakov framework . For this, we shall use the following factorization of the invariant: $`I=\hbar (b^{\dagger }b+\frac{1}{2})`$, where $`b=(2\hbar )^{-1/2}[\frac{q}{\rho }+i(\rho p-e^{Q_c\mathrm{\Omega }}\dot{\rho }q)]`$ and $`b^{\dagger }=(2\hbar )^{-1/2}[\frac{q}{\rho }-i(\rho p-e^{Q_c\mathrm{\Omega }}\dot{\rho }q)]`$. $`Q_c`$ is a fixed ordering parameter. Consider now a Misner reference oscillator of frequency $`\omega _0`$ corresponding to a given cosmological epoch $`\mathrm{\Omega }_0`$, for which one can introduce the standard factorization operators $`a=(2\hbar \omega _0)^{-1/2}[\omega _0q+ip]`$, $`a^{\dagger }=(2\hbar \omega _0)^{-1/2}[\omega _0q-ip]`$. The connection between the two pairs $`a`$ and $`b`$ is $`b(\mathrm{\Omega })=\mu (\mathrm{\Omega })a+\nu (\mathrm{\Omega })a^{\dagger }`$ and $`b^{\dagger }(\mathrm{\Omega })=\mu ^{\ast }(\mathrm{\Omega })a^{\dagger }+\nu ^{\ast }(\mathrm{\Omega })a`$, where $`\mu (\mathrm{\Omega })=(4\omega _0)^{-1/2}[\rho ^{-1}-ie^{Q_c\mathrm{\Omega }}\dot{\rho }+\omega _0\rho ]`$ and $`\nu (\mathrm{\Omega })=(4\omega _0)^{-1/2}[\rho ^{-1}-ie^{Q_c\mathrm{\Omega }}\dot{\rho }-\omega _0\rho ]`$ satisfy the relationship $`|\mu (\mathrm{\Omega })|^2-|\nu (\mathrm{\Omega })|^2=1`$. The uncertainties can be calculated: $`(\mathrm{\Delta }q)^2=\frac{\hbar }{2\omega _0}|\mu -\nu |^2`$, $`(\mathrm{\Delta }p)^2=\frac{\hbar \omega _0}{2}|\mu +\nu |^2`$, and $`(\mathrm{\Delta }q)(\mathrm{\Delta }p)=\frac{\hbar }{2}|\mu +\nu ||\mu -\nu |`$, showing that in general these Ermakov states are not of minimum uncertainty .
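The Bogoliubov-type relation $`|\mu |^2-|\nu |^2=1`$ is an algebraic identity holding for any real $`\rho >0`$, $`\dot{\rho }`$ and $`\mathrm{\Omega }`$. A quick numerical sanity check in Python, with arbitrary sample values (an illustration only):

```python
import math

Qc, w0 = 1.0, 1.0     # ordering parameter and reference frequency (assumed values)

def mu_nu(rho, rhodot, Om):
    pref = 1.0 / math.sqrt(4.0 * w0)
    a = 1.0 / rho - 1j * math.exp(Qc * Om) * rhodot
    return pref * (a + w0 * rho), pref * (a - w0 * rho)

for rho, rhodot, Om in [(1.0, 0.0, 0.0), (2.3, -0.7, 1.5), (0.4, 3.1, -2.0)]:
    mu, nu = mu_nu(rho, rhodot, Om)
    # |mu|^2 - |nu|^2 = (1/4w0) * 4 * w0 * rho * Re(1/rho) = 1 for every sample
    print(abs(mu)**2 - abs(nu)**2)
```

Note also that $`|\mu +\nu ||\mu -\nu |=|\mu ^2-\nu ^2|\ge ||\mu |^2-|\nu |^2|=1`$, so the uncertainty product never falls below $`\hbar /2`$.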
The way one should do the linear combinations for the solutions
of the linear differential equations.
As has been shown, in order to solve Pinney’s equation one should first find the solutions of the corresponding linear equation of motion. Since that equation is linear, we have chosen those combinations which satisfy the initial conditions of the motion. According to the interpretation of Eliezer and Gray, the solution of Pinney’s equation is just the amplitude of the 2D auxiliary motion. Therefore, two of the three quadratic terms of the solution can be seen as the amplitudes along each of the axes, respectively. The third one is a mixed term (one can also eliminate it by diagonalizing the quadratic form under the square root).
Let $`q(0)=a`$ and $`\dot{q}(0)=b`$ be the initial conditions for the equation of motion. The solution can be written as $`x\left(t\right)=ax_1\left(t\right)+bx_2\left(t\right)`$, and therefore the functions $`x_1`$ and $`x_2`$ must satisfy the conditions $`x_1\left(0\right)=1`$, $`\dot{x}_1\left(0\right)=0`$, $`x_2\left(0\right)=0`$, $`\dot{x}_2\left(0\right)=1`$. If we take $`\psi _1`$ and $`\psi _2`$ as a pair of linearly independent solutions of the parametric equation, then we can build the functions $`x_\mathrm{i}`$ as linear combinations of $`\psi _\mathrm{i}`$: $`x_\mathrm{i}=a_\mathrm{i}\psi _1+b_\mathrm{i}\psi _2`$. It is clear that the linear superpositions that satisfy the initial conditions will be:
$$x_1=\frac{1}{W(0)}\left[\psi _2^{\prime }(0)\psi _1(t)-\psi _1^{\prime }(0)\psi _2(t)\right]$$
(1)
$$x_2=\frac{1}{W(0)}\left[-\psi _2(0)\psi _1(t)+\psi _1(0)\psi _2(t)\right]$$
(2)
where $`W(0)`$ is the Wronskian of the functions $`\psi _1`$ and $`\psi _2`$ evaluated at zero value of the time parameter. The functions $`x_\mathrm{i}`$ are the correct ones to enter the solution of Pinney’s equation written in the form given by Eliezer and Gray. In this way, for the cosmological models that have been discussed we have:
$$x_1=\frac{\left(2z\right)^{\frac{Q}{4}}}{2}\left[\psi _1^{\prime }(1/2)K_{\frac{Q}{4}}(z)-\psi _2^{\prime }(1/2)I_{\frac{Q}{4}}(z)\right]$$
(3)
$$x_2=\frac{\left(2z\right)^{\frac{Q}{4}}}{2}\left[K_{\frac{Q}{4}}(1/2)I_{\frac{Q}{4}}(z)-I_{\frac{Q}{4}}(1/2)K_{\frac{Q}{4}}(z)\right]$$
(4)
where
$$z=\frac{1}{2}e^{-2\mathrm{\Omega }},$$
$$\psi _1^{\prime }(1/2)=-\left[\frac{Q}{2}I_{\frac{Q}{4}}(1/2)+I_{\frac{Q}{4}}^{\prime }(1/2)\right],$$
$$\psi _2^{\prime }(1/2)=-\left[\frac{Q}{2}K_{\frac{Q}{4}}(1/2)+K_{\frac{Q}{4}}^{\prime }(1/2)\right],$$
for closed EFRW, and similarly for the open EFRW models.
The superposition coefficients we worked with are of the form $`a_+=N_K(1/2)/D_+(1/2)`$, $`b_+=N_I(1/2)/D_+(1/2)`$, $`c_+=K(1/2)/D_+(1/2)`$, $`d_+=I(1/2)/D_+(1/2)`$, where $`N_K(1/2)=K_{\frac{Q}{4}+1}(1/2)-QK_{\frac{Q}{4}}(1/2)`$, $`N_I(1/2)=I_{\frac{Q}{4}+1}(1/2)+QI_{\frac{Q}{4}}(1/2)`$, and $`D_+(1/2)=I_{\frac{Q}{4}+1}(1/2)K_{\frac{Q}{4}}(1/2)+K_{\frac{Q}{4}+1}(1/2)I_{\frac{Q}{4}}(1/2)`$ for the closed EFRW case; $`a_{}=N_Y(1/2)/D_{}(1/2)`$, $`b_{}=N_J(1/2)/D_{}(1/2)`$, $`c_{}=Y(1/2)/D_{}(1/2)`$, $`d_{}=J(1/2)/D_{}(1/2)`$, where $`N_Y(1/2)=Y_{\frac{Q}{4}+1}(1/2)-QY_{\frac{Q}{4}}(1/2)`$, $`N_J(1/2)=J_{\frac{Q}{4}+1}(1/2)+QJ_{\frac{Q}{4}}(1/2)`$, and $`D_{}(1/2)=J_{\frac{Q}{4}+1}(1/2)Y_{\frac{Q}{4}}(1/2)-Y_{\frac{Q}{4}+1}(1/2)J_{\frac{Q}{4}}(1/2)`$ for the open EFRW case.
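The Wronskian construction of $`x_1`$ and $`x_2`$ is easy to test on a case with a known answer. The Python sketch below uses $`x^{\prime \prime }+x=0`$ and a deliberately non-canonical pair $`\psi _1,\psi _2`$ (an illustrative choice, not the Bessel combinations of the text) and recovers $`\mathrm{cos}t`$ and $`\mathrm{sin}t`$ from formulas (1)-(2):

```python
import math

# two independent solutions of x'' + x = 0, deliberately not in canonical form
def psi1(t): return math.sin(t) + 0.5 * math.cos(t)
def psi2(t): return 2.0 * math.cos(t)
def dpsi1(t): return math.cos(t) - 0.5 * math.sin(t)
def dpsi2(t): return -2.0 * math.sin(t)

W0 = psi1(0.0) * dpsi2(0.0) - dpsi1(0.0) * psi2(0.0)   # Wronskian at t = 0 (= -2)

def x1(t):   # built so that x1(0) = 1, x1'(0) = 0
    return (dpsi2(0.0) * psi1(t) - dpsi1(0.0) * psi2(t)) / W0

def x2(t):   # built so that x2(0) = 0, x2'(0) = 1
    return (-psi2(0.0) * psi1(t) + psi1(0.0) * psi2(t)) / W0

for t in (0.0, 0.7, 2.1):
    print(x1(t) - math.cos(t), x2(t) - math.sin(t))   # all differences ~ 0
```

With these $`x_1,x_2`$ one can then assemble $`\rho =\sqrt{Ax_1^2+Bx_2^2+2Cx_1x_2}`$ for any admissible Pinney constants.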
## 11. Application to physical optics.
In order to study the Ermakov procedure within physical optics, our starting point will be the 1D Helmholtz equation in the form given by Goyal et al and Delgado et al
$$\frac{d^2\psi }{dx^2}+\lambda \varphi (x)\psi (x)=0,$$
(1)
that is, as a Sturm-Liouville equation for the set of eigenvalues $`\lambda \in R`$ defining the Helmholtz spectrum within a given closed interval \[a,b\] on the real line, where the nontrivial function $`\psi `$ vanishes at the end points (Dirichlet boundary conditions). Eq. (1) occurs, for example, in the case of the transverse electric (TE) modes propagating in waveguides that have a continuously varying refractive index in the $`x`$ direction but are independent of $`y`$ and $`z`$. Similar problems in acoustics can be treated along the same lines. The transformation of eq. (1) into the canonical equations of motion of a classical point particle is performed as follows. Let $`\psi (x)`$ be any real solution of eq. (1). Define $`x=t`$, $`\psi =q`$, and $`\psi ^{\prime }=p`$; then, eq. (1) turns into
$`{\displaystyle \frac{dq}{dt}}`$ $`=`$ $`p`$ (2)
$`{\displaystyle \frac{dp}{dt}}`$ $`=`$ $`-\lambda \varphi (t)q,`$ (3)
with the boundary conditions $`q(a)=q(b)=0`$. The corresponding classical Hamiltonian
$$H(t)=\frac{p^2}{2}+\lambda \varphi (t)\frac{q^2}{2}.$$
(4)
is similar to the previous cosmological case with $`Q=0`$, if one identifies $`\lambda =-\kappa `$ and $`\varphi =e^{-4\mathrm{\Omega }}`$. The procedure to find the Ermakov invariant follows the cosmological case step by step. In the phase space algebra we can write the invariant as
$$I=\sum _r\mu _r(t)T_r,$$
(5)
and applying
$$\frac{\partial I}{\partial t}=-\{I,H\},$$
(6)
we get the system of equations for the coefficients $`\mu _r(t)`$
$`\dot{\mu }_1`$ $`=`$ $`-2\mu _2`$
$`\dot{\mu }_2`$ $`=`$ $`\lambda \varphi (t)\mu _1-\mu _3`$ (7)
$`\dot{\mu }_3`$ $`=`$ $`2\lambda \varphi (t)\mu _2.`$
The solutions can be written in the conventional form by choosing $`\mu _1=\rho ^2`$, which gives $`\mu _2=-\rho \dot{\rho }`$ and $`\mu _3=\dot{\rho }^2+\frac{1}{\rho ^2}`$, where $`\rho `$ is a solution of Pinney’s equation $`\ddot{\rho }+\lambda \varphi (t)\rho =\frac{1}{\rho ^3}`$, with the Ermakov invariant of the well-known form $`I=\frac{(\rho p-\dot{\rho }q)^2}{2}+\frac{q^2}{2\rho ^2}`$. Next, we calculate the generating function of the canonical transformation for which $`I`$ is the new momentum
$$S(q,I,\stackrel{}{\mu }(t))=\int ^qdq^{\prime }\,p(q^{\prime },I,\stackrel{}{\mu }(t)).$$
(8)
Thus,
$$S(q,I,\stackrel{}{\mu }(t))=\frac{q^2}{2}\frac{\dot{\rho }}{\rho }+I\mathrm{arcsin}\left[\frac{q}{\sqrt{2I\rho ^2-q^2}}\right]+\frac{q\sqrt{2I\rho ^2-q^2}}{2\rho ^2},$$
(9)
where we have put to zero the integration constant. In this way we get
$$\theta =\frac{\partial S}{\partial I}=\mathrm{arcsin}\left(\frac{q}{\sqrt{2I\rho ^2-q^2}}\right).$$
(10)
The new canonical variables are $`q_1=\rho \sqrt{2I}\mathrm{sin}\theta `$ and $`p_1=\frac{\sqrt{2I}}{\rho }\left(\mathrm{cos}\theta +\dot{\rho }\rho \mathrm{sin}\theta \right)`$. The dynamical angle is given by
$$\mathrm{\Delta }\theta ^d=\int _{t_0}^t\left[\frac{1}{\rho ^2}-\frac{\rho ^2}{2}\frac{d}{dt^{\prime }}\left(\frac{\dot{\rho }}{\rho }\right)\right]dt^{\prime }$$
(11)
whereas the geometrical angle is
$$\mathrm{\Delta }\theta ^g=\frac{1}{2}\int _{t_0}^t\left[\ddot{\rho }\rho -\dot{\rho }^2\right]dt^{\prime }.$$
(12)
For periodic parameters $`\stackrel{}{\mu }(t)`$, with all the components of the same period $`T`$, the geometric angle is known as the nonadiabatic Hannay angle that can be written as a function of $`\rho `$:
$$\mathrm{\Delta }\theta _H^g=-\oint _C\dot{\rho }\,d\rho .$$
(13)
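Both the invariant and the angle formulas above lend themselves to a direct numerical check. The plain-Python sketch below (the oscillating frequency profile $`\varphi (t)`$ is an arbitrary assumption) integrates the parametric oscillator together with Pinney’s equation by RK4 and verifies that $`I`$ stays constant; it then checks the integration-by-parts identity $`\oint (\ddot{\rho }\rho -\dot{\rho }^2)\,dt=-2\oint \dot{\rho }\,d\rho `$ that connects the geometric-angle integrand to the cyclic (Hannay) form:

```python
import math

lam = 1.0
phi = lambda t: 1.0 + 0.3 * math.sin(0.5 * t)   # assumed sample frequency profile

def deriv(t, y):
    q, p, r, v = y                 # v = rho'
    return [p, -lam * phi(t) * q, v, -lam * phi(t) * r + 1.0 / r ** 3]

def rk4(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, [a + h / 2 * b for a, b in zip(y, k1)])
    k3 = deriv(t + h / 2, [a + h / 2 * b for a, b in zip(y, k2)])
    k4 = deriv(t + h, [a + h * b for a, b in zip(y, k3)])
    return [a + h / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def ermakov(y):
    q, p, r, v = y
    return 0.5 * ((r * p - v * q) ** 2 + (q / r) ** 2)

y, t, h = [1.0, 0.0, 1.0, 0.0], 0.0, 1e-3
I0 = ermakov(y)
for _ in range(20000):             # integrate out to t = 20
    y = rk4(t, y, h)
    t += h
drift = abs(ermakov(y) - I0)
print(I0, drift)                   # I is conserved to integrator accuracy

# Integration-by-parts identity for a periodic rho = 2 + 0.3 cos t:
n, T = 100000, 2 * math.pi
lhs = rhs = 0.0
for i in range(n):
    s = i * T / n
    r, rd, rdd = 2 + 0.3 * math.cos(s), -0.3 * math.sin(s), -0.3 * math.cos(s)
    lhs += (rdd * r - rd ** 2) * T / n
    rhs += rd ** 2 * T / n         # oint rho' drho = oint rho'^2 dt
print(lhs, -2 * rhs)               # the two sides agree
```

Any solution of Pinney’s equation can be substituted for the assumed profile; the conservation of $`I`$ does not depend on the choice.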
Now, in order to proceed with the quantization of the Ermakov problem, we turn $`q`$ and $`p`$ into operators, $`\widehat{q}`$ and $`\widehat{p}=-i\hbar \frac{\partial }{\partial q}`$, but keeping the auxiliary function $`\rho `$ as a real number. The Ermakov invariant is now a Hermitian constant operator
$$\frac{d\widehat{I}}{dt}=\frac{\partial \widehat{I}}{\partial t}+\frac{1}{i\hbar }[\widehat{I},\widehat{H}]=0$$
(14)
and the time-dependent Schrödinger equation for the Helmholtz Hamiltonian is
$$i\hbar \frac{\partial }{\partial t}|\psi (\widehat{q},t)\rangle =\frac{1}{2}(\widehat{p}^2+\lambda \varphi (t)\widehat{q}^2)|\psi (\widehat{q},t)\rangle .$$
(15)
The problem now is to find the eigenvalues of $`\widehat{I}`$
$$\widehat{I}|\psi _n(\widehat{q},t)\rangle =\kappa _n|\psi _n(\widehat{q},t)\rangle $$
(16)
and also to write the explicit form of the general solution of eq. (15)
$$\psi (\widehat{q},t)=\sum _nC_ne^{i\alpha _n(t)}\psi _n(\widehat{q},t)$$
(17)
where $`C_n`$ are superposition constants, $`\psi _n`$ are (orthonormalized) eigenfunctions of $`\widehat{I}`$, and the phases $`\alpha _n(t)`$ are the Lewis phases that can be found from the equation
$$\hbar \frac{d\alpha _n(t)}{dt}=\langle \psi _n|i\hbar \frac{\partial }{\partial t}-\widehat{H}|\psi _n\rangle .$$
(18)
The crucial point in the Ermakov quantum problem is to perform a unitary transformation in such a way as to get time-independent eigenvalues for the new Ermakov invariant $`\widehat{I}^{^{}}=\widehat{U}\widehat{I}\widehat{U}^{\dagger }`$. It is easy to obtain the required unitary transformation: $`\widehat{U}=\mathrm{exp}[\frac{i}{\hbar }\frac{\dot{\rho }}{\rho }\frac{\widehat{q}^2}{2}]`$. The new invariant will be $`\widehat{I}^{^{}}=\frac{\rho ^2\widehat{p}^2}{2}+\frac{\widehat{q}^2}{2\rho ^2}`$. The eigenfunctions are $`e^{-\theta ^2/2\hbar }H_n(\theta /\sqrt{\hbar })`$, where $`H_n`$ are the Hermite polynomials, $`\theta =\frac{q}{\rho }`$, and the eigenvalues are $`\kappa _n=\hbar (n+\frac{1}{2})`$. Thus, one can write the eigenfunctions $`\psi _n`$ as follows
$$\psi _n\propto \frac{1}{\rho ^{\frac{1}{2}}}\mathrm{exp}\left(\frac{1}{2}\frac{i}{\hbar }\frac{\dot{\rho }}{\rho }q^2\right)\mathrm{exp}\left(-\frac{q^2}{2\hbar \rho ^2}\right)H_n\left(\frac{1}{\sqrt{\hbar }}\frac{q}{\rho }\right).$$
(19)
The factor $`1/\rho ^{1/2}`$ has been introduced for normalization reasons. Using these functions and doing simple calculations one can find the geometrical phase
$$\alpha _n^g=-\frac{1}{2}(n+\frac{1}{2})\int _{t_0}^t\left[\ddot{\rho }\rho -\dot{\rho }^2\right]dt^{\prime }.$$
(20)
The cyclic (nonadiabatic) Berry’s phase is
$$\alpha _{B,n}^g=(n+\frac{1}{2})\oint _C\dot{\rho }\,d\rho .$$
(21)
The results obviously depend on the explicit form of $`\rho `$ which in turn depends on the explicit form of $`\varphi `$.
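The particle representation of Eqs. (2)-(3) also suggests a practical route to the Helmholtz spectrum itself: integrate $`\dot{q}=p`$, $`\dot{p}=-\lambda \varphi q`$ from $`q(a)=0`$ and bisect $`\lambda `$ on the sign of $`q(b)`$ (a shooting method). A minimal plain-Python sketch; the choices $`\varphi =1`$ and the interval $`[0,1]`$ are assumptions made so that the exact eigenvalues $`\lambda _n=(n\pi )^2`$ are known:

```python
import math

def psi_end(lam, phi, n=2000):
    # RK4 integration of q' = p, p' = -lam*phi(x)*q with q(0)=0, p(0)=1
    h, x, q, p = 1.0 / n, 0.0, 0.0, 1.0
    f = lambda x, q, p: (p, -lam * phi(x) * q)
    for _ in range(n):
        k1 = f(x, q, p)
        k2 = f(x + h / 2, q + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = f(x + h / 2, q + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = f(x + h, q + h * k3[0], p + h * k3[1])
        q += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return q              # psi(1; lam); an eigenvalue makes this vanish

def eigenvalue(phi, lo, hi, iters=60):
    # bisect on the sign change of psi(1; lam)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi_end(lo, phi) * psi_end(mid, phi) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam1 = eigenvalue(lambda x: 1.0, 5.0, 15.0)
print(lam1, math.pi ** 2)     # lowest Dirichlet eigenvalue for phi = 1
```

For a graded-index waveguide one simply replaces the assumed constant profile by the actual $`\varphi (x)`$ and brackets each eigenvalue in turn.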
One can find that a good adiabatic parameter is the inverse of the square root of the Helmholtz eigenvalues, $`\frac{1}{\sqrt{\lambda }}`$, with a slow “time” variable $`\tau =\frac{1}{\sqrt{\lambda }}t`$. The adiabatic approximation has been studied in detail by Lewis . If the Helmholtz Hamiltonian is written down as
$$H(t)=\frac{\sqrt{\lambda }}{2}[p^2+\varphi (t)q^2],$$
(22)
then Pinney’s equation is
$$\frac{1}{\lambda }\ddot{\rho }+\varphi (t)\rho =\frac{1}{\rho ^3},$$
(23)
while the Ermakov invariant becomes a $`1/\sqrt{\lambda }`$-dependent function
$$I(1/\sqrt{\lambda })=\frac{(\rho p-\dot{\rho }q/\sqrt{\lambda })^2}{2}+\frac{q^2}{2\rho ^2}.$$
(24)
In the adiabatic approximation, Lewis obtained the general Pinney solution in terms of the linear independent solutions $`f`$ and $`g`$ of the equation of motion $`\frac{1}{\lambda }\ddot{q}+\mathrm{\Omega }^2(t)q=0`$ for the classical oscillator (see eq. (45) in ). Among the examples given by Lewis, the case $`\mathrm{\Omega }(t)=bt^{m/2}`$, $`m\ne -2`$, $`b=\mathrm{constant}`$, is directly related to a realistic dielectric of a waveguide since it corresponds to a power-law index profile ($`n(x)\propto x^{m/2}`$). For this case, Lewis obtained a simple formula for $`\rho `$ of $`O(1)`$ order in $`1/\sqrt{\lambda }`$
$$\rho _m=\gamma _1\left[\frac{\gamma _2\pi \sqrt{\lambda }}{(m+2)}\right]^{\frac{1}{2}}t^{\frac{1}{2}}[H_\beta ^{(1)}(y)H_\beta ^{(2)}(y)]^{\frac{1}{2}},$$
(25)
where $`H_\beta ^{(1)}`$ and $`H_\beta ^{(2)}`$ are Hankel functions of order $`\beta =1/(m+2)`$, $`y=\frac{2b\sqrt{\lambda }}{(m+2)}t^{\frac{m}{2}+1}`$, and $`\gamma _1=\pm 1`$, $`\gamma _2=\pm 1`$. An even more useful technological application might be the following proposal of Lewis: $`m=-\frac{4n}{2n+1}`$, $`n=\pm 1,\pm 2,\mathrm{\dots }`$, leading to
$$\rho _n=\gamma _1\gamma _2^{\frac{1}{2}}b^{-\frac{1}{2}}t^{\frac{n}{2n+1}}|G(t,1/\sqrt{\lambda })|^2,$$
(26)
where
$$G(t,1/\sqrt{\lambda })=\left[\underset{k=0}{\overset{n}{}}(1)^k\frac{(n+k)!}{k!(nk)!}\left(\frac{1/\sqrt{\lambda }}{2ib(2n+1)}\right)^kt^{\frac{k}{(2n+1)}}\right]^{\frac{1}{2}}.$$
(27)
One gets $`\rho `$ as a polynomial in the square of the adiabatic parameter, i.e., $`\lambda ^{-1}`$, of infinite radius of convergence. The topological quantities (angles and phases) can be calculated by substituting the explicit form of Pinney’s function in the corresponding formulas. Lewis found a recursive formula in $`1/\lambda `$ up to order $`1/\lambda ^3`$ that can be used for any type of index profile. The recurrence relationship is
$$\rho =\rho _0+\rho _1/\lambda +\rho _2/\lambda ^2+\rho _3/\lambda ^3+\mathrm{},$$
(28)
where $`\rho _0=\mathrm{\Omega }^{-1/2}=\varphi ^{-1/4}(x)`$; for the other coefficients $`\rho _i`$ see the appendix in . The main contribution to the topological quantities is given by $`\rho _0`$. In the case of a power-law index profile, the geometric angle is
$$\mathrm{\Delta }\theta ^g=-\frac{m}{4b(m+2)}\left[t^{-(\frac{m}{2}+1)}-t_0^{-(\frac{m}{2}+1)}\right],$$
(29)
and a similar formula can be written for the geometric quantum phase. For periodic indices, one can write the Hannay angle and Berry’s phase according to their cyclic integral expressions. Finally, we notice that the choice $`\varphi (x)=\mathrm{\Phi }(x)+\frac{\mathrm{Const}}{\psi ^3(x)}`$, which corresponds to nonlinear waveguides, leads to more general time-dependent Hamiltonians that have been discussed in the Ermakov perspective by Maamache .
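The power-law closed form can be cross-checked against a direct quadrature of the geometric-angle integrand, using the leading-order Pinney solution $`\rho _0=\mathrm{\Omega }^{-1/2}=b^{-1/2}t^{-m/4}`$, for which $`\frac{1}{2}\int _{t_0}^t(\ddot{\rho }\rho -\dot{\rho }^2)dt^{\prime }=-\frac{m}{4b(m+2)}[t^{-(m/2+1)}-t_0^{-(m/2+1)}]`$. A plain-Python sketch (all parameter values are arbitrary assumptions):

```python
import math

b, m, t0, t1 = 2.0, 1.0, 1.0, 3.0           # assumed sample parameters

a = -m / 4.0                                 # rho0 = b**-0.5 * t**a
def integrand(t):
    rho = b ** -0.5 * t ** a
    rhod = a * rho / t                       # rho0'
    rhodd = a * (a - 1) * rho / t ** 2       # rho0''
    return 0.5 * (rhodd * rho - rhod ** 2)

n = 20000                                    # trapezoid quadrature
h = (t1 - t0) / n
num = h * (0.5 * integrand(t0) + 0.5 * integrand(t1)
           + sum(integrand(t0 + i * h) for i in range(1, n)))
closed = -m / (4 * b * (m + 2)) * (t1 ** -(m / 2 + 1) - t0 ** -(m / 2 + 1))
print(num, closed)                           # quadrature matches the closed form
```

Changing $`m`$ and $`b`$ leaves the agreement intact, since the check is pure calculus on $`\rho _0`$.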
We have presented in a formal way the application of the Ermakov approach to 1D Helmholtz problems. For more details one can look at a recent work by Rosu and Romero .
## 12. Conclusions.
As one could see from the examples we discussed in this work, the Ermakov-Lewis quadratic invariants are an important method of research for parametric oscillator problems. They are helpful for better understanding this widespread class of phenomena with applications in many areas of physics. One can also say that the Ermakov approach gives a connection between the linear physics of parametric oscillators and the corresponding nonlinear physics.
The cosmological applications of the classical Ermakov procedure we presented herein are based on a classical particle representation of the WDW equation for the EFRW models. We also notice that the Ermakov invariant is equivalent to the Courant-Snyder invariant of use in the accelerator physics , allowing an analogy between the physics of beams and the cosmological evolution as suggested by Rosu and Socorro .
We end up with a possible interpretation of the Ermakov invariant within the empty minisuperspace cosmology. If one performs an expansion of the invariant in a power series in the adiabatic parameter, the principal term which defines the adiabatic regime gives the number of adiabatic “quanta” and there were authors who gave classical descriptions of the cosmological particle production in such terms . On the other hand, the Eliezer-Gray interpretation as an angular momentum of the 2D auxiliary motion allows one to say that for EFRW minisuperspace models, the Ermakov invariant gives the number of adiabatic excitations of the auxiliary angular momentum with which the universe is created at the initial singularity.
## Appendix A: Calculation of the integral of $`I`$.
The phase space integral of $`I`$ in chapter 5 can be calculated from the formula (15a) in the paper of Lewis
$$I=-\frac{1}{2\pi }\int _0^{2\pi }X_2\frac{\partial X_1}{\partial \phi }d\phi $$
(1)
where $`X_1`$ and $`X_2`$ represent the functional dependences of $`q`$ and $`p`$, respectively, in terms of the nice variables $`z_1`$ and $`\phi `$, which have been given by Lewis in the formulas (38) of the same paper as follows
$$X_1=\pm \frac{z_1}{F_1\mathrm{\Omega }[1+\mathrm{tan}^2(\phi -F_2)]^{1/2}}$$
(2)
and
$$X_2=\pm \frac{z_1[ϵ\frac{d\mathrm{ln}\rho }{dt}+\frac{1}{\rho ^2}\mathrm{tan}(\phi -F_2)]}{F_1\mathrm{\Omega }[1+\mathrm{tan}^2(\phi -F_2)]^{1/2}},$$
(3)
where $`F_1`$ and $`F_2`$ are two arbitrary functions of time. Thus,
$$\frac{\partial X_1}{\partial \phi }=\pm \frac{z_1}{F_1\mathrm{\Omega }}[1+\mathrm{tan}^2(\phi -F_2)]^{-3/2}\left(-\frac{1}{2}\right)2\mathrm{tan}(\phi -F_2)\mathrm{sec}^2(\phi -F_2).$$
(4)
We have the following integral
$$I=\frac{z_1^2}{2\pi F_1^2\mathrm{\Omega }^2}\int _0^{2\pi }\frac{[ϵ\frac{d\mathrm{ln}\rho }{dt}+\frac{1}{\rho ^2}\mathrm{tan}(\phi -F_2)]}{[1+\mathrm{tan}^2(\phi -F_2)]^2}\mathrm{tan}(\phi -F_2)\mathrm{sec}^2(\phi -F_2)d\phi .$$
(5)
Now, employing
$$s=\mathrm{tan}^2(\phi -F_2),\qquad ds=2\mathrm{tan}(\phi -F_2)\mathrm{sec}^2(\phi -F_2)d\phi $$
(6)
one gets
$$I\propto \int \frac{ϵ\frac{d\mathrm{ln}\rho }{dt}\,ds}{(1+s)^2}+\frac{1}{\rho ^2}\int \frac{s^{1/2}\,ds}{(1+s)^2}.$$
(7)
Therefore
$$I=\frac{z_1^2}{2\pi F_1^2\mathrm{\Omega }^2}\left[-ϵ\frac{d\mathrm{ln}\rho }{dt}\frac{1}{(1+s)}+\frac{1}{\rho ^2}\left(-\frac{s^{1/2}}{1+s}+\mathrm{tan}^{-1}\sqrt{s}\right)\right].$$
(8)
Going back to the $`\phi `$ variable and taking into account the corresponding $`0`$ and $`2\pi `$ limits one gets
$$I=\frac{z_1^2}{2F_1^2\mathrm{\Omega }^2\rho ^2},$$
(9)
which is the result obtained by Lewis. The common form of $`I`$ can be obtained by going back to the $`(q,p)`$ variables.
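The two $`s`$-antiderivatives used in the evaluation above, $`-1/(1+s)`$ for $`1/(1+s)^2`$ and $`\mathrm{tan}^{-1}\sqrt{s}-\sqrt{s}/(1+s)`$ for $`s^{1/2}/(1+s)^2`$, can be spot-checked by finite differences in plain Python:

```python
import math

F1 = lambda s: -1.0 / (1.0 + s)                                    # for 1/(1+s)^2
F2 = lambda s: math.atan(math.sqrt(s)) - math.sqrt(s) / (1.0 + s)  # for s^(1/2)/(1+s)^2

def dds(F, s, h=1e-6):
    # central finite difference
    return (F(s + h) - F(s - h)) / (2 * h)

checks = []
for s in (0.3, 1.0, 4.7):
    checks.append(abs(dds(F1, s) - 1.0 / (1 + s) ** 2))
    checks.append(abs(dds(F2, s) - math.sqrt(s) / (1 + s) ** 2))
print(max(checks))      # tiny: both antiderivatives are correct
```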
## Appendix B: Calculation of the expectation value of $`\widehat{H}`$ in eigenstates of $`\widehat{I}`$.
From the formulas (12) in chapter 5 for the raising and lowering operators one gets
$$\widehat{q}=\frac{\rho }{\sqrt{2}}(\widehat{a}^++\widehat{a}),$$
(1)
$$\widehat{p}=\frac{1}{\sqrt{2}}\left[(\dot{\rho }+i/\rho )\widehat{a}^++(\dot{\rho }-i/\rho )\widehat{a}\right].$$
(2)
Performing simple calculations, one gets
$$\widehat{H}=f(\rho )\widehat{a}^{+2}+f^{\ast }(\rho )\widehat{a}^2+\frac{1}{4}\left[\dot{\rho }^2+\frac{1}{\rho ^2}+\omega ^2\rho ^2\right](2\widehat{I}),$$
(3)
where $`f(\rho )=\frac{1}{4}\left(\dot{\rho }^2+2i\dot{\rho }/\rho -1/\rho ^2+\omega ^2\rho ^2\right)`$. Thus,
$$\langle n|\widehat{H}|n\rangle =\langle n|\frac{1}{2}\left[\dot{\rho }^2+\frac{1}{\rho ^2}+\omega ^2\rho ^2\right]\widehat{I}|n\rangle $$
(4)
from which eq. (16) in chapter 5 is obvious.
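The diagonal matrix element can also be checked with truncated ladder-operator matrices built directly from Eqs. (1)-(2) of this appendix, giving $`\langle n|\widehat{H}|n\rangle =\frac{1}{2}(\dot{\rho }^2+1/\rho ^2+\omega ^2\rho ^2)(n+\frac{1}{2})`$. A plain-Python sketch (the instantaneous values of $`\rho `$, $`\dot{\rho }`$, $`\omega `$ are arbitrary assumptions):

```python
import math

N = 10                                 # truncation of the number basis
rho, rhod, w = 1.3, 0.4, 0.9           # assumed sample values of rho, rho', omega

def zeros():
    return [[0j] * N for _ in range(N)]

a, ad = zeros(), zeros()
for n in range(N - 1):
    a[n][n + 1] = math.sqrt(n + 1)     # <n| a |n+1> = sqrt(n+1)
    ad[n + 1][n] = math.sqrt(n + 1)    # <n+1| a^+ |n> = sqrt(n+1)

def lin(A, B, ca, cb):                 # ca*A + cb*B
    return [[ca * A[i][j] + cb * B[i][j] for j in range(N)] for i in range(N)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

q = lin(ad, a, rho / math.sqrt(2), rho / math.sqrt(2))
p = lin(ad, a, (rhod + 1j / rho) / math.sqrt(2), (rhod - 1j / rho) / math.sqrt(2))
H = lin(mul(p, p), mul(q, q), 0.5, 0.5 * w ** 2)      # H = p^2/2 + w^2 q^2/2

for n in range(4):                     # stay well below the truncation edge
    expected = 0.5 * (rhod ** 2 + 1 / rho ** 2 + w ** 2 * rho ** 2) * (n + 0.5)
    print(H[n][n].real, expected)
```

The off-diagonal $`\widehat{a}^{+2}`$, $`\widehat{a}^2`$ pieces do not touch the diagonal, which is why only the $`\widehat{I}`$ term survives in the expectation value.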
# Photometric Properties of Low Redshift Galaxy Clusters
## 1. Preliminary Results
A recent comprehensive photometric survey of 45 low-z X-ray selected Abell clusters (López-Cruz 1997) has measured significant variations in the faint end slope of the luminosity function (LF). This result has indicated that dwarf galaxies (dGs) have different mixtures in relation with the cluster environment. Clusters having a central “cD-like” galaxy have a flatter faint end slope than non-cD clusters. Also, cD clusters were found to have a dwarf-to-giant ratio (D/G) which was smaller than non-cD clusters. López-Cruz et al. (1997) have suggested that the light contained in cD envelopes can be accounted for by assuming that it is produced from stars that originally formed dGs. In this simple model, the D/G would be expected to increase with radial distance from the cluster centre due to the decrease in the disruptive forces.
In order to test the dG disruption model, B and R band images of a sample of 27 low-z ($`0.02\le z\le 0.04`$) Abell clusters have been obtained with the 8k CCD mosaic camera on the KPNO 0.9m telescope. This telescope/detector combination provides a $`1^o\times 1^o`$ field of view, giving an areal coverage of $`12h^{-2}\mathrm{Mpc}^2`$. These observations will allow us to probe several magnitudes deeper than the López-Cruz (1997) survey and provide a definitive measure of the dG LF. Preliminary LFs and D/G ratios have been calculated for five clusters (A1185, A1656, A2151, A2152, and A2197). A significant increase in the faint end slope between the inner (0.0-0.75 Mpc) and outer (0.75-1.50 Mpc) LF can be seen for A2151 ($`H_o=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). This indicates that the number of dGs, defined as the ratio of the number of galaxies with $`-19\le M_R\le -15`$ to those with $`M_R<-19.5`$, has increased in the outer radial bin as compared to the inner cluster region. All five clusters also show a significant dip in the LF at $`M_R\approx -19`$. This dip suggests that the LF can be modelled by 2 components: a log-normal bright component, and a Schechter function faint component.
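The two-component reading of the LF can be made concrete with a toy model: a log-normal bright part plus a Schechter faint part, $`\varphi (M)\propto 10^{0.4(\alpha +1)(M^{\ast }-M)}\mathrm{exp}(-10^{0.4(M^{\ast }-M)})`$, from which a D/G ratio follows by integrating over the two magnitude ranges. In this plain-Python sketch every parameter value ($`M^{\ast }`$, the log-normal width, the relative normalisation) is an illustrative assumption, not a fit to the survey data:

```python
import math

def schechter(M, Mstar, alpha):
    # Schechter LF in magnitudes
    x = 10 ** (0.4 * (Mstar - M))
    return x ** (alpha + 1) * math.exp(-x)

def lognormal_bright(M, M0=-21.0, sigma=1.0):
    return math.exp(-0.5 * ((M - M0) / sigma) ** 2)

def counts(f, Mlo, Mhi, n=2000):
    # midpoint-rule integral of the LF over a magnitude range
    h = (Mhi - Mlo) / n
    return h * sum(f(Mlo + (i + 0.5) * h) for i in range(n))

def dwarf_to_giant(alpha, Mstar=-21.0, A_bright=5.0):
    lf = lambda M: A_bright * lognormal_bright(M) + schechter(M, Mstar, alpha)
    dwarfs = counts(lf, -19.0, -15.0)     # -19 <= M_R <= -15
    giants = counts(lf, -24.0, -19.5)     # M_R <= -19.5 (bright cut at -24)
    return dwarfs / giants

print(dwarf_to_giant(-1.0), dwarf_to_giant(-1.4))
```

A steeper (more negative) faint-end slope raises the dwarf counts far more than the giant counts, so D/G grows, which is the sense of the environmental trend discussed above.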
## References
López-Cruz, O., Yee, H.K.C., Brown, J.P., Jones, C. & Forman, W. 1997, ApJ, 476, L97
López-Cruz, O. 1997, Ph.D thesis, University of Toronto
|
no-problem/0002/cond-mat0002053.html
|
ar5iv
|
text
|
# Quantum-Critical Dynamics of the Skyrmion Lattice.
## Abstract
Near to filling fraction $`\nu =1`$, the quantum Hall ferromagnet contains multiple Skyrmion spin excitations. This multi-Skyrmion system has a tremendously rich quantum-critical structure. This is simplified when Skyrmions are pinned by disorder. We calculate the nuclear relaxation rate in this case and compare the result with experiment. We discuss how such measurements may be used to further probe the quantum-critical structure of the multi-Skyrmion system.
At exact filling of a single Landau level the quantized Hall state forms an almost perfect ferromagnet. This quantum Hall ferromagnet (QHF) has some novel features due to the phenomenology of the underlying quantized Hall state. Magnetic vortices, or Skyrmions, in the QHF carry quantized electrical charge. These Skyrmions are stabilised by a chemical potential so that the ground state slightly away from filling fraction $`\nu =1`$ contains a finite density of them. This original proposal of Sondhi et al. has been confirmed in a number of experiments.
The $`T=0`$ phase diagram of the multi-Skyrmion system has been thoroughly investigated. In the absence of disorder, crystalline arrangements are expected. For filling fractions very close to $`\nu =1`$ a triangular lattice is formed with a transition to a square lattice as the deviation from $`\nu =1`$ is increased. The statistical mechanics of the possible melting transitions has been considered. At the highest Skyrmion densities, zero point fluctuations are expected to give rise to a quantum-melted state.
Despite this wealth of study, a complete account of the experimental observables has not been achieved. For example, nuclear magnetic resonance provides one of the clearest probes of the spin polarisation in the QHF. Although it is understood in general terms how low-energy spin-fluctuations of the Skyrmion system may enhance the relaxation of nuclear spins, attempts to calculate relaxation rates have been flawed. The fundamental physics, missed in other considerations, is the quantum-critical nature of fluctuations of the Skyrmion lattice. One immediate consequence of this quantum-criticality is that the limits of temperature/frequency $`\to 0`$ and frequency/temperature $`\to 0`$ are very different. Typically, experimental probes are at frequencies much less than temperature and the latter limit is appropriate. This means that zero-temperature calculations cannot model experiments correctly.
Let us consider these points a little further. In our analysis below, we will find an underlying gapless XY-model governing orientational fluctuations of the multi-Skyrmion system. If a spinwave expansion is attempted for a gapless magnet at or below its critical dimension, the occupation of low-frequency modes is found to diverge. The constraint, fixing the magnitude of the local spin, restricts this divergence. Interplay between divergence and constraint gives rise to a finite (temperature dependent) correlation length, $`\xi (T)`$, beyond which correlations of the magnet decay exponentially. The dynamics of the critical magnet are very different on length scales greater or less than $`\xi `$. On lengthscales less than $`\xi `$, the groundstate is ordered (albeit in a quantum superposition of all possible orientations due to long wavelength spin fluctuations). Fluctuations with wavelength less than $`\xi (T)`$ may, therefore, be described by a modified spinwave expansion. On lengthscales greater than $`\xi `$, the groundstate is disordered and fluctuations are overdamped. This quantum relaxational dynamics is a striking feature of quantum-critical systems and leads to interesting universalities. Regimes of renormalized classical and quantum activated behaviour at low-temperature, cross over to universal behaviour in the high-temperature quantum-critical regime.
The Skyrmion spin-configuration consists of a vortex-like arrangement of in-plane components of spin with the z-component reversed in the centre of the Skyrmion and gradually increasing to match the ferromagnetic background at infinity. At large distances, the spin distribution decays exponentially to the ferromagnetic background on a length scale determined by the ratio of spin stiffness to Zeeman energy. An individual Skyrmion may be characterized completely by its position (i.e. the point at which the spin points in the opposite direction to the ferromagnetic background) its size (i.e. the number of flipped spins) and the orientation of the in-plane components of spin. The equilibrium size of the Skyrmion is determined by a balance between its coulomb and Zeeman energies (In the presence of a disorder potential, the potential energy of the Skyrmion also enters this balancing act).
Consider a ferromagnet with a dilute distribution of Skyrmions. The normal modes of this system are relatively easy to identify. Firstly, ferromagnetic spinwaves propagate in-between the Skyrmions. The spectrum of these is gapped by the Zeeman energy and will be ignored from now on. Positional fluctuations, or phonon modes, of the Skyrmions are gapless in a pure system, but gapped when the lattice is pinned by disorder. Finally, fluctuations in the in-plane orientation and size must be considered. These two types of fluctuation are intimately connected; rotating a Skyrmion changes its size. This follows from the commutation relations of quantum angular momentum operators.
The orientation, $`\theta (𝐱_i,t)`$, of a Skyrmion centred at a point $`𝐱_i`$ is described by the following effective action:
$$S=\frac{1}{2}\int dt\left[-\sum _iI_i\theta (𝐱_i,t)\partial _t^2\theta (𝐱_i,t)-\sum _{<i,j>}J_{ij}\mathrm{cos}(\theta (𝐱_i,t)-\theta (𝐱_j,t))\right].$$
(1)
$`I_i`$ is the moment of inertia of the $`i^{th}`$ Skyrmion and $`J_{ij}`$ is the stiffness to relative rotations of neighbouring Skyrmions. The first term in Eq.(1) arises due to the change in energy of a Skyrmion when its size fluctuates; $`\mathrm{\Delta }E=I^{-1}\delta s^2/8`$. Since the z-component of spin and orientation are conjugate coordinates, a cross-term $`i\delta s(𝐱_i,t)\partial _t\theta (𝐱_i,t)/2`$ appears in their joint effective action. Integrating out $`\delta s(𝐱_i,t)`$ gives the first term in Eq.(1). Clearly, $`1/4I`$ is the second derivative of the Skyrmion energy with respect to its spin. $`I`$ is related to the Skyrmion size (in fact $`I=24\mu _BgB/s`$) and, in the absence of disorder, is the same for all Skyrmions. Correlation functions involving $`\delta s`$ may be calculated using Eq.(1) by making the replacement $`\delta s(𝐱_i,t)\to 2iI\partial _t\theta (𝐱_i,t)`$, the result of a simple Gaussian integration over $`\delta s`$. The second term in Eq.(1) is an effective dipole-interaction of Skyrmions due to the energetics of overlapping Skyrmion tails. In a square Skyrmion lattice, $`J_{ij}`$ is independent of lattice site. A continuum limit may be taken where $`\theta _i`$ is replaced by a staggered field, $`\theta _i\to \theta _i+\eta _i\pi `$, with $`\eta _i=0,1`$ on adjacent sites, and $`\theta _i-\theta _j`$ is replaced by a derivative;
$$S=\frac{1}{2}\int \frac{d\omega \,d^2k}{(2\pi )^3}\theta (𝐤,\omega )\left(\delta \nu \overline{\rho }I\omega ^2+J𝐤^2\right)\theta (-𝐤,-\omega ).$$
(2)
The frequency integral in this expression is shorthand for a Matsubara summation at finite temperature and the momentum integral is over the Brillouin zone. A factor of the Skyrmion density $`\delta \nu \overline{\rho }`$ has been introduced, where $`\delta \nu `$ is the deviation from filling fraction $`\nu =1`$ and $`\overline{\rho }`$ is the electron density. There are a few caveats to the use of Eq.(2). We defer discusion of these until later. This model is perhaps most familiar as an effective theory of the Josephson junction array. In this case, $`\theta `$ is the phase of the superconducting order parameter and its conjugate coordinate is the charge of the superconducting junction.
In order to calculate properties of the Skyrmion lattice, we must relate fluctuations in the Skyrmion orientation to fluctuations in the orientation of local spin. We use a coherent-state representation of the polarization of the local spin, via an O(3)-vector field $`𝐧(𝐱,t)`$. The static spin distribution at a point $`𝐱`$ relative to the centre of a single Skyrmion is denoted by $`𝐧(𝐱)`$ and its in-plane components by $`n_x+in_y=n_re^{i\varphi _0}`$. The in-plane components of local spin at a point $`𝐱`$, in response to rotational fluctuations of a Skyrmion centred at $`𝐱_i`$, are given by
$$n_re^{i\theta }(𝐱,t)=(-)^{\eta _i}\left[n_r(𝐱-𝐱_i)+2iI\frac{\partial n_r(𝐱-𝐱_i)}{\partial s}\partial _t\theta (𝐱_i,t)\right]e^{i\theta +i\varphi _0}(𝐱_i,t).$$
(3)
We have used the conjugate relationship between Skyrmion spin and orientation in writing down this expression. In a distribution of many Skyrmions, one must in principle sum the contributions of all Skyrmions to the fluctuation in spin at the point $`𝐱`$. However, in the dilute limit in which we are performing our explicit calculation, Skyrmions are exponentially localized. The dominant spin fluctuations occur near to the centre of Skyrmions and so, to logarithmic accuracy, the local fluctuations at a point $`𝐱`$ are due only to the nearest Skyrmion.
We will calculate the nuclear relaxation rate due to low energy quantum fluctuations of the Skyrmion lattice;
$$\frac{1}{T_1}=T\gamma \underset{\omega \to 0}{\mathrm{lim}}\int \frac{d^2k}{(2\pi )^2}\frac{\mathrm{Im}\,S_+(𝐤,\omega )S_{-}(𝐤,\omega )}{\omega },$$
(4)
where $`\gamma `$ is the hyperfine coupling constant. Other physical observables, such as the temperature dependence of magnetization, $`M=\int dt\,d^2x\,n_z(𝐱,t),`$ may be calculated similarly. Our first task is to replace the expectation of the spin raising and lowering operators in Eq.(4) by correlators of the Skyrmion orientation. Substituting from Eq.(3) into Eq.(4) and ignoring terms higher order in frequency, the nuclear relaxation rate at a point $`𝐱`$ is
$$\frac{1}{T_1}(𝐱)=T\gamma \underset{\omega \to 0}{\mathrm{lim}}n_r^2(𝐱)\int \frac{d^2k}{(2\pi )^2}\frac{\mathrm{Im}\,e^{i\theta }(𝐤,\omega )e^{-i\theta }(𝐤,\omega )}{\omega }.$$
(5)
This takes the form of a correlation function of the Skyrmion orientation, multiplied by a profile function characteristic of the Skyrmion groundstate. The average rate is given by integrating this over an area containing a single Skyrmion and multiplying by the Skyrmion density $`\delta \nu \overline{\rho }`$. The result is identical to Eq.(5) with the replacement $`n_r^2(𝐱)\to \delta \nu \overline{n_r^2}`$. $`\overline{n_r^2}=\overline{\rho }\int d^2x\,n_r^2(𝐱)`$ is a number characteristic of a single Skyrmion. For a pure Skyrmion spin distribution, $`\overline{n_r^2}=2s`$ to logarithmic accuracy in the Skyrmion spin, $`s=\overline{\rho }\int d^2x\,(1-n_z(𝐱))`$. Notice that radial fluctuations of local spin contribute only to higher order in frequency and have been neglected in writing down Eq.(5).
The problem of finding the nuclear relaxation rate has now been reduced to evaluating the correlation function in Eq.(5) using the effective action Eq.(2). This is rather tricky. The O(2)-quantum rotor, described by Eq.(2), is quantum-critical. It is necessary to employ a non-perturbative scheme, such as 1/N or epsilon expansions, to calculate in the quantum-critical regime of this model. Here we use the result of the 1/N expansion of Chubukov et al.
An important feature of the effective action, Eq.(2), is that it displays a zero-temperature phase transition. For $`IJ<1`$, the Skyrmion moment of inertia is sufficiently small that quantum fluctuations destroy long range order even at zero temperatures. For $`IJ>1`$, the $`T=0`$ groundstate has an infinite correlation length and is ordered. Notice that arbitrarily small temperatures destroy this long-range order even when $`IJ>1`$. This arises due to the interplay between fluctuations and constraint and can be seen in a simple mean-field calculation. In the O(2) representation of Eq.(2), $`S=\int dtd^2x\left[𝐧(-I\partial _t^2-J\partial _𝐱^2)𝐧+\lambda (𝐱,t)(𝐧^2-1)\right]`$, where $`𝐧(𝐱,t)`$ is an O(2)-vector field and $`\lambda (𝐱,t)`$ is an auxiliary field that imposes the constraint $`𝐧^2=1`$. Imposing the constraint at mean-field level, $`\langle 𝐧^2\rangle =1`$, determines a temperature dependent gap, $`\lambda (T)`$. The spin correlations decay exponentially on a length scale $`\xi (T)=\sqrt{J/\lambda (T)}`$. The results of such a calculation are sketched in Fig.1.
Above a temperature of about $`T_{QC}=|2\pi J-E_{\text{max}}|`$ (The cut-off, $`E_{\text{max}}=2\pi \sqrt{J/I}`$ corresponds to fluctuations with momentum at the Brillouin zone boundary, $`k_{\text{max}}=\pi \sqrt{\overline{\rho }\delta \nu }`$), the gap/correlation length develops a universal temperature dependence. In this region, thermal and quantum fluctuations are of similar importance and are very difficult to disentangle. This crossover to universal high-temperature behaviour from distinct low-temperature behaviours is a feature of all correlation functions of Eq.(2). It is usual to summarise this behaviour by the phase diagram sketched in Fig.2. In this figure, $`J(T)`$ is the renormalized stiffness in the ordered phase and $`\lambda (T)`$ is the gap in the paramagnetic phase.
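The finite-temperature gap can be made explicit in a schematic mean-field sketch: impose $`\langle 𝐧^2\rangle =1`$ with a trial gap $`\lambda `$, do the Matsubara sum, and solve the resulting self-consistency condition by bisection. In the plain-Python sketch below the overall coupling $`g`$ and all parameter values are illustrative assumptions; the point is only the qualitative outcome stated above, that $`\lambda (T)`$ is nonzero for any $`T>0`$ and grows with $`T`$ (so $`\xi =\sqrt{J/\lambda }`$ shrinks):

```python
import math

I, J, g, kmax = 1.0, 1.0, 8.0, math.pi     # all values are illustrative assumptions

def constraint(lam, T, n=4000):
    # g * int dk k/(2pi) coth(E_k/2T)/(2 I E_k), with E_k = sqrt((J k^2 + lam)/I);
    # the coth factor is the Matsubara sum T*sum_n 1/(I w_n^2 + J k^2 + lam)
    h, tot = kmax / n, 0.0
    for i in range(n):
        k = (i + 0.5) * h
        E = math.sqrt((J * k * k + lam) / I)
        tot += h * k / (2 * math.pi) / (math.tanh(E / (2 * T)) * 2 * I * E)
    return g * tot

def gap(T):
    lo, hi = 1e-12, 1e4        # constraint is monotonic in lam: bisect in log
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if constraint(mid, T) > 1.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

lams = [gap(T) for T in (0.2, 0.5, 1.0)]
print(lams)     # lam(T) grows with T, so xi = sqrt(J/lam) shrinks
```

The thermal occupation makes the momentum integral log-divergent as $`\lambda \to 0`$ in two dimensions, which is why a solution with $`\lambda >0`$ exists at every finite temperature, mirroring the destruction of long-range order described in the text.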
The correlation function required in Eq.(5) has been calculated in Ref. by means of a 1/N expansion, with the result
$`\underset{\omega \to 0}{\mathrm{lim}}{\displaystyle \frac{1}{\omega }}{\displaystyle \int \frac{d^2k}{(2\pi )^2}\mathrm{Im}\,e^{i\theta }(𝐤,\omega )e^{-i\theta }(𝐤,\omega )}`$
$`\begin{array}{cccc}=\hfill & \frac{1}{T}\frac{0.015}{\sqrt{IJ}}\left(\frac{kT}{\pi J}\right)^\eta ,\hfill & T\gg T_{QC}\hfill & \\ =\hfill & \frac{1}{T}\frac{0.085}{\sqrt{IJ}}\frac{IT^2}{\lambda (0)}e^{-2\sqrt{\lambda (0)/IT^2}},\hfill & T\ll T_{QC}\text{,}\hfill & IJ<4,\hfill \\ =\hfill & \frac{0.18}{T}\left(\frac{kT}{2\pi \lambda (T)}\right)^{1/2},\hfill & T\ll T_{QC}\text{,}\hfill & IJ>4,\hfill \end{array}`$
where $`\eta `$ is a number close to zero. Substituting these results into Eq.(5) we obtain
$$\frac{1}{T_1}=\gamma \frac{0.03s}{\sqrt{IJ}}\left(\frac{kT}{\pi J}\right)^\eta $$
(7)
at high temperature. The full behaviour is sketched in Fig.3. The kink at $`T_{KT}`$ is due to the discontinuous change in spin stiffness seen at the Kosterlitz-Thouless transition. This effect is not seen in the 1/N expansion of Ref. and must be calculated by some other means.
The nuclear relaxation rate obtained here is very different from that obtained in Ref.. As pointed out in the introduction, this is due to the unphysical limit $`T/\omega \to 0`$ used in Ref.. Nevertheless, it is instructive to see how the results of Ref. relate to the present formalism. Since $`T\ll \omega `$ in their work, Côté et al consider fluctuations on length scales much less than the correlation length. The groundstate is ordered and a spinwave expansion may be used. Long wavelength fluctuations lead to a superposition of orientations, i.e. rotational averaging, the immediate consequence of which is that $`e^{i\theta }(𝐤,\omega )e^{-i\theta }(𝐤,\omega )=0`$. Returning to the substitution of Eq.(3) into Eq.(4), we must retain terms to next order in frequency. This is a cross-term between radial and transverse fluctuations and involves a correlator $`\partial _t\theta e^{i\theta }(𝐤,\omega )e^{-i\theta }(𝐤,\omega )`$. Evaluating this correlation function via a $`T=0`$ spinwave expansion of the effective action, Eq.(2), reproduces the result of Ref. (up to a numerical factor due to our estimate of the Skyrmion profile function, $`n_r\to \partial n_r/\partial s`$). The zero-temperature phonon contribution may be calculated similarly.
Up to now, we have assumed that the Skyrmion lattice is pinned by disorder and that phonons may be ignored as a consequence. The situation is rather subtle and a fuller discussion is appropriate at this juncture. A quadratic effective action for phonons of the Skyrmion lattice is known. It is identical to that of an electronic Wigner crystal in a magnetic field with a vanishing effective mass. Under this effective action, Skyrmions move in small ellipses with a frequency $`\omega _𝐤\propto |𝐤|^{3/2}/B`$ and major axes orientated transverse to the phonon momentum.
At finite temperature, the occupation of transverse phonons is infra-red divergent. This divergence is restricted by interactions between phonons arising from the non-harmonicity of the Skyrmion interaction. Unlike fluctuations in orientation, where the spinwave interaction is due to a topological constraint, these phonon interactions are non-universal (at best the universality is hidden in the details of the groundstate spin distribution and effective Skyrmion interaction potential). The resulting physics is very similar to that discussed for the rotation mode above; the phonon system is quantum-critical and has low-temperature ordered and quantum-melted phases and a high-temperature quantum-critical regime.
Even this is not the full story. Although the low energy dispersions of phonons and Skyrmion rotations are independent, non-linear interactions exist between these modes. The lattice stiffness is, in part, due to the dipole interaction of Skyrmions and is affected by fluctuations in orientation. Similarly, the dipole interaction between Skyrmions is strongly dependent upon the separation of Skyrmions and is affected by phonons. These non-linearities occur on the same footing as the phonon-phonon interactions and rotation-rotation interactions. The full quantum-critical structure of the multi-Skyrmion system is tremendously complicated.
The position taken here in neglecting this wealth of structure is that the Skyrmion lattice is pinned by disorder and the phonon spectrum gapped. Phononic fluctuations are suppressed at low temperatures and the associated critical structure occurs at higher temperature. The residual effect of phonons is a slight thermal renormalization of the rotational stiffness. The slight distortion in static positions of Skyrmions, in response to the disorder potential, gives a small random contribution to the stiffness $`J`$. This randomness produces a small region of Bose-glass phase at low temperatures, intervening between the paramagnet and renormalized classical regimes. For weak disorder, this phase only affects the physics very close to the critical point and does not affect our conclusions.
We now turn to a discussion of the experimental implications of the above calculations. Detailed measurements of $`1/T_1`$ have been carried out by Bayot et al. Above $`40mK`$, $`T_1`$ is independent of temperature. This is consistent with the rotational degrees of freedom being in their quantum-critical regime. Values of $`I`$ and $`J`$ for this system extracted from TDHFA calculations put $`IJ`$ very close to $`1`$. The system is close to criticality and the crossover to the quantum-critical regime occurs at correspondingly low temperature. At $`40mK`$ there is an abrupt step in $`1/T_1`$ (and attendant peak in heat capacity). This is consistent with a Kosterlitz-Thouless transition in the orientational order (notice that the crossover temperature $`|2\pi JE_{\text{max}}|`$ may be much less than $`T_{KT}=2\pi J`$ and so the behaviour may be quantum-critical either side of the transition). There are a number of other candidate transitions, however, and it is not easy to discriminate between them. Considerations along the lines of those presented here allow some elaboration, but this is necessarily rather speculative and we refrain from its discussion at present. We may make some firm predictions for nuclear relaxation measurements below $`40mK`$. Changing the deviation in filling fraction or using tilted field measurements both change the parameter $`IJ`$ and allow exploration of the phase diagram shown in Fig.1. The divergence or otherwise of the nuclear relaxation rate as temperature is reduced to zero should give a clear indication of the quantum-critical structure.
I would like to thank N. R. Cooper, J. R. Chalker, S. M. Girvin and N. Read for enlightening discussions, comments and suggestions. This work was supported by Trinity College Cambridge.
# Tidal Disruption of a Solar Type Star by a Super-Massive Black Hole
## 1 Introduction
It has long been suggested that supermassive black holes (with a black hole mass $`M_{\mathrm{bh}}\sim 10^6`$ M<sub>☉</sub>) in relatively low-luminosity active galactic nuclei (AGNs) can be fed by tidal disruptions of stars that are on nearly radial orbits (e.g. Frank, 1978; Lacy et al., 1982; Carter & Luminet, 1985; Rees, 1988, 1990). The frequency of such events is expected to be of the order of $`10^{-4}`$ yr<sup>-1</sup> in a galaxy like M31. In AGNs with more massive central black holes ($`M_{\mathrm{bh}}\gtrsim 10^8`$ M<sub>☉</sub>), stars are “swallowed” whole, since the ratio of the event horizon radius to the tidal disruption radius increases with the black hole mass ($`M_{\mathrm{bh}}^{2/3}`$).
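The crossover between disruption and direct capture can be sketched with a back-of-the-envelope estimate. The script below is a rough order-of-magnitude sketch (not part of the original analysis; the constants are standard SI values) comparing the Schwarzschild scale $`R_g=2GM_{\mathrm{bh}}/c^2`$ with the tidal radius $`R_tR_{}(M_{\mathrm{bh}}/M_{})^{1/3}`$ for a solar-type star:

```python
import math

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8         # speed of light [m s^-1]
M_sun = 1.989e30    # solar mass [kg]
R_sun = 6.957e8     # solar radius [m]

def schwarzschild_radius(M_bh):
    """Event-horizon scale R_g = 2 G M_bh / c^2."""
    return 2.0 * G * M_bh / c**2

def tidal_radius(M_bh, M_star=M_sun, R_star=R_sun):
    """Order-of-magnitude tidal radius R_t = R_star (M_bh / M_star)^(1/3)."""
    return R_star * (M_bh / M_star) ** (1.0 / 3.0)

# The ratio R_g / R_t grows as M_bh^(2/3); once it reaches unity the star
# crosses the horizon before being disrupted and is swallowed whole.
for M6 in (1.0, 10.0, 100.0):                # M_bh in units of 10^6 solar masses
    M_bh = M6 * 1e6 * M_sun
    ratio = schwarzschild_radius(M_bh) / tidal_radius(M_bh)
    print(f"M_bh = {M6:5.0f} x 10^6 M_sun : R_g / R_t = {ratio:.3f}")
```

For a $`10^6`$ M<sub>☉</sub> hole the ratio is a few per cent, while by $`10^8`$ M<sub>☉</sub> it approaches unity, consistent with the swallowed-whole threshold quoted above.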
Although direct observational evidence for the disruption process is lacking, a few observations have been tentatively identified as being associated with disruption events. For example, an ultraviolet flare at the center of the elliptical galaxy NGC 4552, although rather weak, has been suggested to result from such an event (Renzini et al., 1995). Other sudden eruptions that have been interpreted as potentially resulting from tidal disruptions include outbursts in IC 3599, NGC 5905, RXJ 1242.6-1119, RXJ 1624.9+7554, and RXJ 1331.9-3243 (e.g. Komossa & Greiner, 1999; Komossa & Bade, 1999; Grupe et al., 1999, and references therein). It has also been suggested that the sudden appearance and variability of the double-peaked Balmer lines in the active nucleus NGC 1097 is at least broadly consistent with being produced in a ring resulting from tidal disruption (Storchi-Bergmann et al., 1995). Similarly, it has been proposed that an outburst observed in the Seyfert galaxy NGC 5548 was due to a single star falling into a $`10^7`$ M black hole (Peterson & Ferland 1986; although other interpretations exist, e.g. Terlevich & Melnick 1988; Kallman & Elitzur 1988). More recent work on the broad emission line region of this galaxy, although interesting in its own right, did not shed any new light on the cause of the variability (Goad & Koratkar, 1998).
Several aspects of the problem of stellar encounters with supermassive black holes and of tidal disruption have been examined both analytically and numerically (e.g. Nolthenius & Katz, 1982; Bicknell & Gingold, 1983; Carter & Luminet, 1983; Hills, 1988; Kochanek, 1992, 1994; Syer & Clarke, 1992, 1993; Laguna et al., 1993; Marck et al., 1996). In addition, the interaction of a white dwarf with a massive black hole has been studied both generally (e.g. Frolov et al., 1994) and in the context of gamma-ray bursts (e.g. Fryer et al., 1999).
In the present work, we follow the evolution of a star that is tidally disrupted. In particular, we calculate the properties of the disruption debris to longer times than in some of the previous studies, using a Post-Newtonian, Smooth-Particle-Hydrodynamics (SPH) code.
The numerical method is described in $`\mathrm{\S }2`$, the results are presented in $`\mathrm{\S }3`$, and a discussion and conclusions follow.
## 2 The Numerical Method
We first introduce some notation that will be used throughout the paper. The mass of the black hole is denoted by $`M_{\mathrm{bh}}`$. The gravitational radius of the black hole is $`R_g=2GM_{\mathrm{bh}}/c^2`$. To measure the strength of the tidal encounter we use the dimensionless parameter (e.g. Press & Teukolsky, 1977)
$$\eta _t=\left(\frac{M_{}}{M_{\mathrm{bh}}}\frac{R_p^3}{R_{}^3}\right)^{1/2},$$
(1)
and to measure the magnitude of the relativistic effects we use
$$\eta _r=\frac{R_p}{R_g},$$
(2)
where $`M_{}`$ and $`R_{}`$ are the mass and radius of the star respectively and $`R_p`$ is the radius at the pericenter.
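These definitions can be evaluated for a solar-type star around a $`10^6`$ M<sub>☉</sub> black hole. The sketch below uses standard SI constants; taking the pericenter at $`18R_g`$ (the effective value reached in the run of Sec. 3) is an assumption made here for illustration:

```python
import math

G = 6.674e-11       # [m^3 kg^-1 s^-2]
c = 2.998e8         # [m s^-1]
M_sun = 1.989e30    # [kg]
R_sun = 6.957e8     # [m]

M_bh = 1e6 * M_sun                 # black-hole mass of the run
R_g = 2.0 * G * M_bh / c**2       # gravitational radius, ~3e9 m
R_p = 18.0 * R_g                   # effective pericenter distance (assumed)

eta_r = R_p / R_g                                          # Eq. (2)
eta_t = math.sqrt((M_sun / M_bh) * (R_p / R_sun) ** 3)     # Eq. (1)
print(f"eta_r = {eta_r:.0f}, eta_t = {eta_t:.2f}")
```

This gives $`\eta _r=18`$ and $`\eta _t0.67`$, close to the values quoted for the simulation in Sec. 3.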
Simulating the full evolution of a star that is tidally disrupted by a black hole is a challenging numerical task. Once passing the pericenter, the star is tidally disrupted into a very long and dilute gas stream. As noted already by Kochanek (1994), each fluid element in this stream follows an almost test particle orbit. The fraction of the gas that is bound to the black hole eventually returns to pericenter and moves on to start another orbit. Relativistic orbital precession can cause this outgoing gas stream to collide with the incoming stream. Clearly, this problem consists of highly varying spatial configurations of matter, with much “empty space.” These attributes led to our selection of SPH as the numerical scheme used to tackle this problem. Indeed, previous works that used grid based codes (e.g. Khokhlov et al. (1993b, a); Frolov et al. (1994); Diener et al. (1997)) only followed the star close to the pericenter. This is also true of previous works by Evans & Kochanek (1989), and Laguna et al. (1993) that used SPH.
We use the (0+1+2.5) Post-Newtonian (PN) SPH code described in Ayal et al. (2000). This code implements the formalism of Blanchet, Damour & Schäfer (1990; hereafter BDS) and features full Newtonian gravity and hydrodynamics (0PN), the first order effects of general relativity on the gravity and hydrodynamics \[known as the first post-Newtonian, (1PN) approximation\], and gravitational wave damping (2.5PN). The independent matter variables used in this formalism consist of the following set: $`\rho _{}`$ the coordinate rest mass density, $`\epsilon _{}`$ the coordinate specific internal energy and $`𝐰`$ the specific linear momentum. In fully relativistic terms these are defined as:
$`\rho _{}`$ $`=`$ $`\sqrt{g}u^0\rho ,`$ (3)
$`\epsilon _{}`$ $`=`$ $`\epsilon (\rho _{}),`$ (4)
$`w_i`$ $`=`$ $`\left(c^2+\epsilon +p/\rho \right){\displaystyle \frac{u_i}{c}},`$ (5)
where $`\rho `$ is the rest mass density, $`\epsilon (\rho )`$ is the specific energy, $`p(\epsilon ,\rho )`$ is the pressure and $`u^\mu `$ is the four-velocity (Greek indices run from 0 to 3, Latin indices from 1 to 3). The corresponding BDS variables are the above quantities neglecting all terms except 0PN, 1PN and 2.5PN. Using these variables the formalism yields an evolution system which consists of 9 Poisson equations and 4 hyperbolic equations which we solve as explained in Ayal et al. (2000).
We model the black hole using a massive point particle that has no hydrodynamical interactions. In order to be consistent with the 1PN approximation we must ensure that all the relativistic effects are small (of the order of 10%). This excludes simulating the strong field regions near the black hole’s horizon. We enforce this limit using a fully absorbing boundary condition at $`10R_g`$. Every particle crossing into this region is assumed to be accreted and taken out of the simulation. Since the mass of the star is negligible compared to the mass of the black hole, we do not need to increase the mass of the black hole for every particle crossing the $`10R_g`$ boundary.
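As a quick, purely illustrative sanity check on this choice of boundary: the leading 1PN corrections scale as roughly $`R_g/r`$ (equivalently $`v^2/c^2`$ for a marginally bound orbit), so the expansion parameter never exceeds the ~10% level outside the absorbing sphere:

```python
# The leading relativistic corrections scale as ~ R_g / r, so the 1PN
# expansion parameter is largest at the absorbing boundary at 10 R_g.
def pn_parameter(r_in_rg):
    """Rough size of 1PN corrections at radius r (in units of R_g)."""
    return 1.0 / r_in_rg

for r in (10.0, 20.0, 100.0):   # boundary, nominal pericenter, starting radius
    print(f"r = {r:5.0f} R_g : R_g/r = {pn_parameter(r):.2f}")
```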
The statistical nature of SPH does not handle well single, separate particles. Indeed, the entire formalism is built on the assumption that each particle interacts hydrodynamically with about 60 (in 3D) other particles at all times. This number of interactions is needed to form a good sample of the fluid properties for an accurate calculation of gradients. In order to maintain this number of interactions under varying conditions, most SPH codes employ adaptive smoothing lengths (e.g. Benz, 1990; Monaghan, 1992), in effect changing the resolution at each point. In our problem, we have widely varying length scales when the gas stream expands, where varying the smoothing length is advantageous. On the other hand, the difference in the particle eccentricities causes them to separate when approaching pericenter, and they arrive there almost one by one. At this stage, maintaining hydrodynamic interaction with 60 other particles would require huge smoothing lengths, where a small change in the smoothing length would lead to a large change in the number of interactions. This, coupled with an algorithm that tries to maintain a fixed number of interactions, can cause large oscillations in the smoothing length, which in turn introduce a highly varying number of hydrodynamic interactions. These huge smoothing lengths for particles approaching pericenter also mean that we have a very coarse resolution at a crucial stage.
In order to overcome this numerical difficulty we introduce a “particle splitting” (PS) scheme. Whenever a particle satisfies some splitting criterion we split it into new particles each having a smaller mass and smoothing length so that the overall mass is conserved. Thus the PS algorithm consists of two parts—the splitting criterion and the splitting method. In the method we use, maximal splitting, we split each particle into 13 particles, giving the original particle 12 new neighbors (12 is the maximum number of spheres that can “touch” a sphere with the same radius). Another possible method is minimal splitting (while still maintaining a quasi spherical symmetry)—we split each particle into 5, adding 4 particles at the edges of a tetrahedron centered on the original particle. We found that the minimal splitting method tends to produce lumpier particle distributions, so we used maximal splitting in both the PS runs presented in this work. We split each particle into 13 particles, each having half the smoothing length of the original particle, spaced by one original smoothing length. This splitting method conserves the particle’s interaction radius which is twice its smoothing length. As a splitting criterion we used the ratio between each particle’s smoothing length and the average smoothing length. Whenever the smoothing length of a bound particle exceeds twice the average smoothing length we split this particle. We split only bound particles so as not to waste computational time on the unbound debris, in which we are not interested in the present work. In order for the unbound debris smoothing length not to dominate the average smoothing length which we use for the splitting criterion, we enforce a maximum smoothing length of twice the average smoothing length for unbound particles. This causes unbound particles of gas to have fewer and fewer hydrodynamical interactions, and their motion to be dominated by gravity, as is expected.
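The maximal-splitting step can be sketched as follows. The text above fixes the child count (13), the spacing (one original smoothing length) and the halved smoothing length; the 12 neighbor directions below are one illustrative choice (the fcc “kissing” arrangement), not necessarily the one used in the actual code:

```python
import math

def fcc_directions():
    """The 12 unit vectors of the fcc kissing configuration:
    the normalized vectors (+-1, +-1, 0) and permutations thereof."""
    dirs = []
    for i, j in ((0, 1), (0, 2), (1, 2)):   # the two nonzero components
        for si in (1.0, -1.0):
            for sj in (1.0, -1.0):
                v = [0.0, 0.0, 0.0]
                v[i] = si / math.sqrt(2.0)
                v[j] = sj / math.sqrt(2.0)
                dirs.append(tuple(v))
    return dirs                              # 12 unit vectors, summing to zero

def maximal_split(pos, mass, h):
    """Split one SPH particle into 13 children: the parent position plus 12
    neighbours one (old) smoothing length h away, each child carrying mass
    m/13 and smoothing length h/2, so total mass is conserved."""
    children = [(tuple(pos), mass / 13.0, h / 2.0)]
    for d in fcc_directions():
        child_pos = tuple(p + h * di for p, di in zip(pos, d))
        children.append((child_pos, mass / 13.0, h / 2.0))
    return children

children = maximal_split((0.0, 0.0, 0.0), mass=1.0, h=0.5)
print(len(children), sum(m for _, m, _ in children))
```

Because the 12 directions come in opposite pairs, the split also preserves the parent particle’s center of mass.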
Another computational time reducing technique that we use is to delete any particle that is both unbound and is farther away than $`2500R_g`$. The latter criterion ensures that the deletion of these particles does not affect the dynamics near pericenter.
In order to estimate the errors in using the particle splitting method, we compared the results of three runs. The first run was a conventional SPH run with a fixed number of 4295 particles, denoted by F1. The second and third runs were PS runs denoted by PS1 and PS2 and they differ only in the number of particles at the initial time. These parameters are summarized in Table 1.
## 3 Results
As initial conditions we took a $`\mathrm{\Gamma }=5/3`$ polytrope with a solar radius and mass. This star was then put at a distance of $`100R_g`$ and given the velocity of an appropriate parabolic Keplerian orbit with a pericenter at $`20R_g`$. Giving the initial conditions at such a distance ensures that any relativistic effects (at that position) are negligible so that using Newtonian expressions for various quantities (such as energy) is justified. As can be seen in Fig. 1, the star’s center of mass (CM) follows an almost relativistic orbit, which validates the use of the 1PN approximation for this problem. For this run we obtained the values of $`\eta _r\approx 18`$ and $`\eta _t\approx 0.65`$ at the pericenter. These values ensure that on one hand the 1PN approximation is valid, and on the other, that the tidal interaction is sufficiently strong to lead to disruption.
We can roughly divide the disruption process into 2 qualitatively different regimes. In the first, the tidal forces dominate and the star is destroyed. This is followed, at about $`t=8`$ hours, by the post disruption regime. This latter stage consists of the gas stream phase and the accretion phase. During the gas stream phase the pressure is negligible and the particles move on almost Keplerian orbits. The accretion phase starts at $`t=15`$ days when the first particles return to pericenter.
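The $`t=15`$ day return time can be checked against a simple Keplerian estimate. The sketch below (standard constants; it assumes the most bound debris has specific binding energy of order the tidal energy spread $`\mathrm{\Delta }ϵ=GM_{\mathrm{bh}}R_{}/R_p^2`$ of Lacy et al. 1982, and an effective pericenter of $`18R_g`$) is illustrative only:

```python
import math

G = 6.674e-11       # [m^3 kg^-1 s^-2]
c = 2.998e8         # [m s^-1]
M_sun = 1.989e30    # [kg]
R_sun = 6.957e8     # [m]

M_bh = 1e6 * M_sun
R_g = 2.0 * G * M_bh / c**2
R_p = 18.0 * R_g                         # assumed effective pericenter

# Spread in specific orbital energy across the star at pericenter.
d_eps = G * M_bh * R_sun / R_p**2

# Most bound debris: energy ~ -d_eps gives a semi-major axis and Kepler period.
a = G * M_bh / (2.0 * d_eps)
P = 2.0 * math.pi * math.sqrt(a**3 / (G * M_bh))
print(f"first pericenter return after ~{P / 86400.0:.0f} days")
```

This gives of order two to three weeks, the same order as the $`t=15`$ day return found in the simulation.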
### 3.1 Disruption
The disruption phase for a solar type star was studied, using Newtonian physics, by Evans & Kochanek (1989). Later it was studied in greater detail by Khokhlov et al. (1993b, a); and by Diener et al. (1997) who included a general relativistic treatment of the tidal potential of a Kerr black hole and Newtonian hydrodynamics for the star. This phase was also studied by Laguna et al. (1993) using Newtonian hydrodynamics on a fixed Kerr background. We compare our results for this phase with these previous studies.
The results shown are taken from run F1 which has a higher resolution at this stage. The rest-mass density contours presented in Fig. 2 show the effect of the tidal forces on the star. In Fig. 3 we show the central coordinate rest-mass density $`\rho _{}`$, the angular momentum $`|𝐉|`$, and total energy $`E_{}`$ of the star, during the disruption. The value of $`\rho _{}`$ does not increase beyond the initial value, and it falls rapidly after the disruption. Comparing to Laguna et al. (1993), we find that our encounter is somewhere between a non-relativistic encounter and a single-compression relativistic encounter, as expected. The values of $`\rho _{}`$ are also close to those in Diener et al. (1997) and Khokhlov et al. (1993a). The total angular momentum relative to the CM of the star increases by about 5 orders of magnitude at the disruption, and a more gentle increase occurs afterwards, caused by the gravitational torques acting on the debris. The total energy of the star also has a sharp increase at the disruption as the star becomes unbound, followed by another rise as the star moves away from the BH.
The differential mass distribution in specific energies $`dM/d\epsilon _{}`$ is shown in Fig. 4. We use $`\mathrm{\Delta }ϵ=GM_{\mathrm{bh}}R_{}/R_p^2`$, the change in the BH potential across the star (Lacy et al., 1982) as our energy scale. The mass distribution is almost constant as predicted by Rees (1988). A comparison with Evans & Kochanek (1989), who calculated this quantity for the Newtonian case, shows that in our calculation the width of the distribution is smaller by approximately $`0.5\mathrm{\Delta }ϵ`$ (FWHM).
In general, we find that the 1PN energy $`E_{\mathrm{pn}}`$ is conserved to better than 2%. The energy radiated by gravitational radiation during the disruption is negligible (compared to the total), amounting to $`1.6\times 10^{46}`$ erg.
### 3.2 Post Disruption
The post disruption phase begins at about $`t=8`$ hours. At this stage the gas is sufficiently dilute and cold to make the pressure very small. Consequently, the gas elements in the stream follow almost exact geodesics. A previous study on the gas stream phase, by Kochanek (1994), used the thinness of the stream to decouple the transverse properties of the stream from the variations along its length. This same thinness introduces difficulties into the numerical approach. The spherical nature of the SPH particles causes the code to overestimate the stream width. This effect can be overcome by using more particles but the increase in computational time renders this approach impractical with current hardware. As the SPH particles approach pericenter for the second time, the PS method therefore becomes essential. Without PS, the SPH particles approach pericenter almost one by one and the strong compressional effects are manifested through two- and three-particle interactions near the pericenter. This small number of interactions could reduce the reliability of the results. By using PS however, we increase the number of particles approaching pericenter, thus increasing the number of interactions and thereby ensuring that the hydrodynamical interactions remain adequate. The effects of PS can be seen in Fig. 5 where the conventional run has a very non-uniform particle density even when compared to the PS1 run (with only 1350 particles at this stage).
In Fig. 6 we show the distribution of eccentricities for the SPH particles in the gas stream phase. The mean eccentricity is about 0.994, as the star was initially marginally bound. The distribution does not change by much until the end of the stream phase, when the first particle approaches pericenter. The differential mass distribution in specific internal energy as a function of time is shown in Fig. 7. The earliest time shown is close to that of Fig. 4. As can be seen, at the gas stream phase the internal energy distribution shifts towards lower energies, while the second passage through pericenter heats the particles up. This heating is caused by the strong compression characterizing the second passage of the gas through pericenter. The compression is caused by the gas orbits converging to the space occupied by the star at the initial pericenter (e.g. Kochanek, 1994). This compression can be seen in Fig. 8 where we show the pressure at some fixed time after the first return to pericenter in the PS2 run. The pressure is shown along a path as a function of $`l`$, the length of the path. The path was chosen so as to pass through the gas stream and pericenter. As can be seen, the pressure rises sharply at negative $`l`$, corresponding to the passage through pericenter, and there is a large pressure gradient in the $`z`$ direction. We find that the bounce that follows this compression is sufficiently strong to impart a significant fraction of the gas with the escape velocity. In Fig. 9 we show the amount of the star’s mass that is bound, unbound, and accreted (we note again that accreted here means that it has come closer than $`10R_g`$ to the BH). As the star leaves the vicinity of the BH for the first time, 65% of its mass is bound. This stays constant during the gas stream phase since the pressures are small.
Following the second passage through pericenter however, mass gets unbound, until, at the time we stop our simulation, 50% of the mass is unbound, 40% is bound and the remaining 10% is accreted. The relative difference between the two runs is about 10% in all of these quantities.
In Fig. 10 we show the differential mass distribution in orbital periods in the stream. This has been used in previous works (e.g. Rees, 1988; Evans & Kochanek, 1989; Laguna et al., 1993) to estimate the mass infall rate onto the BH, under the assumption that all the mass returning to pericenter is accreted. In Fig. 11 we show the estimated accretion rate according to Fig. 10 together with the actual accretion rate we get. Our calculations show that since some of the mass becomes unbound when reaching pericenter, the actual accretion rate is a factor of 3 lower than that inferred previously. The total accreted mass up to 60 days (Fig. 9) was overestimated by a factor of 4 when assuming that all returning mass is accreted. Another important consequence of the strong compression near pericenter is that the expected self-intersection of the gas stream in fact does not occur, since the debris is given a high velocity perpendicular to the orbital plane.
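The classic point of comparison here (Rees 1988) is that a flat $`dM/dϵ`$ translates, through Kepler’s third law, into a mass return rate falling as $`t^{5/3}`$; the deviation of the measured accretion rate from this law is what the simulation quantifies. A minimal sketch of that scaling, in arbitrary units:

```python
import math

def return_rate(t, GM=1.0, dM_deps=1.0):
    """Mass return rate for a flat dM/d|eps| distribution.
    Kepler: P = 2*pi*G*M / (2|eps|)^(3/2)  =>  |eps| = 0.5*(2*pi*G*M/P)^(2/3),
    so dM/dt = dM/d|eps| * |d eps / dP|  ~  t^(-5/3)."""
    eps = 0.5 * (2.0 * math.pi * GM / t) ** (2.0 / 3.0)
    return dM_deps * (2.0 / 3.0) * eps / t

# The log-log slope over a decade in time recovers the -5/3 power law.
t1, t2 = 10.0, 100.0
slope = math.log(return_rate(t2) / return_rate(t1)) / math.log(t2 / t1)
print(f"slope = {slope:.4f}")
```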
The inner $`10^4`$ R<sub>☉</sub> around the BH is composed of the infalling debris and a low density cloud with a density of $`10^{-11}\mathrm{g}\mathrm{cm}^{-3}`$ and with temperatures higher than $`10^6`$ K. In Figs. 12 and 13 we show the density, specific internal energy and velocity in the inner 2000 R<sub>☉</sub> around the BH. Even using PS it is clear that we cannot resolve structures well within this radius (e.g. an accretion disk, as proposed by Cannizzo et al. 1990). Nevertheless, there is some evidence of circularization of the gas flow in the velocity plots.
## 4 Discussion
We have performed a 1PN simulation of a solar mass star being disrupted by a super massive ($`10^6`$ M<sub>☉</sub>) black hole. The disruption process itself causes about 1/2 of the star’s matter to become unbound. We follow this matter up to and beyond the time when it returns to pericenter and starts accreting onto the BH. This in turn enables us to determine the amount of returning mass that is actually accreted. Contrary to previous, more heuristic estimates, we find that only about 25% of the returning mass actually gets accreted. The rest becomes unbound after being heated by the strong compression accompanying the approach to pericenter.
The main consequence of this process is that the maximum accretion rate is a factor of 3 lower than expected from a simple examination of the rate of mass return to pericenter. The accretion rate into the volume resolved by our simulation is also quite constant, at about 1 M<sub>☉</sub> yr<sup>-1</sup>, for the last 20 days of the simulation as opposed to a power law decay which is expected from the rate of return. When integrated over time, these results show that only about 10% of the original star’s mass actually gets accreted as opposed to the 65% expected if all the bound mass were eventually accreted. This means that the mass involved in the accretion flow, be it in the form of an accretion disk (e.g. Cannizzo et al., 1990), or spherical accretion (e.g. Loeb & Ulmer, 1997), is considerably smaller than previously estimated. Consequently, the duration of the expected ‘flare’ can be shorter by a factor 4–5 compared to these early estimates, which makes the detectability of these events much harder (the rather high temperature of some of the infalling debris also makes the bolometric correction high). Indeed, supernova searches in distant galaxies have failed so far to identify any such event unambiguously (Filippenko 2000, private communication).
The mass of the debris that becomes unbound because of the pericenter compression is comparable to the mass of the unbound debris resulting from the original disruption event. The main difference is that this new debris component has a different orbital distribution. Most notably, the velocity has a much larger component in the direction perpendicular to the orbital plane. This additional component of unbound high velocity debris could produce interesting consequences (e.g. components like Sgr A East in the Galactic center) when colliding with the interstellar medium surrounding the BH (e.g. Khokhlov & Melia, 1996). In particular, the fact that the mass ejection is more spherically symmetric makes the event more similar to a normal supernova. Thus, tidal disruption events may produce a quite unique signature (in terms of their impact on surrounding gas), in which two supernova-type events are separated by a few weeks to a few months, with the first one being very anisotropic (mass being ejected within a solid angle $`\mathrm{\Omega }\gtrsim 16(R_{}/R_p)^{1/2}(M_{}/M_{\mathrm{bh}})^{1/2}`$ rad<sup>2</sup>) and the second more spherically symmetric.
ML acknowledges support from NASA Grant NAG5-6857.
# Modelling galaxy clustering at high redshift
## 1. Introduction
There has been significant recent progress in the study of galaxy formation within a cosmological context, mainly due to a phenomenological approach to this problem. The idea is to start with a structure formation model that describes where and when galactic dark haloes form. A simple description of gas dynamics and star formation provides a means to calculate the amount of stars forming in these haloes. Stellar population synthesis models then provide the spectral evolution, i.e. luminosities and colours, of these galaxies.
Many physical processes are modelled as simple functions of the circular velocity of the galaxy halo. Therefore, the Tully-Fisher relation is the most obvious observational relation to try and predict, as it relates the total luminosity of a galaxy to its halo circular velocity. However, most phenomenological galaxy formation models do not simultaneously fit the I-band Tully-Fisher relation and the B or K band luminosity function. When one sets the model parameters such that the Tully-Fisher relation has the right normalization, the luminosity functions generally overshoot (e.g. Kauffmann, White & Guiderdoni 1993; Kauffmann, Colberg, Diaferio & White 1999), certainly for the $`\mathrm{\Omega }=1`$, $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> standard CDM cosmology (in the form given by Davis et al. 1985) that we consider in this paper. Alternatively, when making sure that the luminosity functions matches by changing some of the model parameters, the Tully-Fisher relation ends up significantly shifted with respect to the observed relation (e.g. Cole et al. 1994; Heyl et al. 1995).
In order to keep the modelling as analytical as possible, an extension of the Press & Schechter (1974) prescription for the evolution of galaxy haloes, known as the extended Press-Schechter (EPS) formalism (e.g. Bond et al. 1991; Bower 1991; Lacey & Cole 1993; Kauffmann & White 1993), has been a popular ingredient for implementations of a phenomenological theory of galaxy formation. However, the EPS formalism is designed to identify collapsed systems, irrespective of whether these contain surviving subsystems. This ‘overmerging’ of subhaloes into larger embedding haloes is relevant to the problem of matching both the galaxy luminosity function and the Tully-Fisher relation, as the central galaxy in an overmerged halo is the focus of a much larger cooling gas reservoir than the reservoir that same galaxy is the focus of when its parent subhalo survives. Traditional N-body simulations suffer from a similar overmerging problem (e.g. White 1976), which is of a purely numerical nature, caused by two-body heating in dense environments when the mass resolution is too low (Carlberg 1994; van Kampen 1995).
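As a minimal illustration of the Press-Schechter building block behind the EPS formalism (a standard textbook sketch, not the implementation used in this work): the fraction of mass locked up in collapsed haloes above mass $`M`$ is $`F(>M)=\mathrm{erfc}(\nu /\sqrt{2})`$ with $`\nu =\delta _c/\sigma (M)`$, so essentially all mass is assigned to haloes on small scales where $`\sigma (M)`$ is large:

```python
import math

DELTA_C = 1.686   # linear-theory collapse threshold (Einstein-de Sitter value)

def collapsed_fraction(sigma_M, delta_c=DELTA_C):
    """Press-Schechter (1974) fraction of mass in haloes above mass M,
    where sigma_M is the rms linear density fluctuation on mass scale M:
    F(>M) = erfc(nu / sqrt(2)),  nu = delta_c / sigma_M."""
    nu = delta_c / sigma_M
    return math.erfc(nu / math.sqrt(2.0))

# Small sigma (massive haloes): exponentially rare.
# Large sigma (small haloes): essentially all mass is in collapsed objects.
for s in (0.3, 1.0, 3.0, 30.0):
    print(f"sigma = {s:5.1f} : collapsed fraction = {collapsed_fraction(s):.4f}")
```

Nothing in this expression knows about surviving substructure, which is precisely the overmerging limitation discussed above.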
In order to circumvent these problems, we use an N-body simulation technique that includes a built-in recipe for galaxy halo formation, designed to prevent overmerging (van Kampen 1995, 1997), to generate the halo population and its formation and merger history. This resolves most of the discrepancy sketched above, and allows us to make the modelling more realistic by adding chemical evolution and a merger-driven bursting mode of star formation to the modelling. Once stars are formed, we apply the stellar population synthesis models of Jimenez et al. (1998) to follow their evolution. We have enhanced these models with a model for the evolution of the average metallicity of the population, which depends on the starting metallicity. Feedback to the surrounding material means that cooling properties of that material will change with time, affecting the star formation rate, and thus various other properties of the parent galaxy.
## 2. Overview of the phenomenological model
The key ingredients of the model are described below. We refer to van Kampen et al. (1999) for a much more detailed description and discussion of the model, and a list of the choices for the various parameters involved.
### The merging history of dark-matter haloes.
This is often treated by Monte-Carlo realizations of the analytic ‘extended Press-Schechter’ formalism, which ignores substructure. We use a special N-body technique to prevent galaxy-scale haloes undergoing ‘overmerging’ owing to inadequate numerical resolution.
### The merging of galaxies within dark-matter haloes.
Each halo contains a single galaxy at formation. When haloes merge, a criterion based on dynamical friction is used to decide how many galaxies exist in the newly merged halo. The most massive of those galaxies becomes the single central galaxy to which gas can cool, while the others become its satellites.
### The history of gas within dark-matter haloes.
When a halo first forms, it is assumed to have an isothermal-sphere density profile. A fraction $`\mathrm{\Omega }_b/\mathrm{\Omega }`$ of this is in the form of gas at the virial temperature, which can cool to form stars within a single galaxy at the centre of the halo. Application of the standard radiative cooling curve shows the rate at which this hot gas cools below $`10^4`$ K, and is able to form stars. Energy output from supernovae reheats some of the cooled gas back to the hot phase. When haloes merge, all hot gas is stripped and ends up in the new halo.
### Quiescent star formation.
The star formation rate is equal to the ratio of the amount of cold gas available and the star-formation timescale. The amount of cold gas available depends on the merger history of the halo, the star formation history, and how much cold gas has been reheated by feedback processes.
### Starbursts.
We also model starbursts, i.e. the star-formation rate may suffer a sharp spike following a major merger event.
### Feedback from star-formation.
The energy released from young stars heats cold gas in proportion to the amount of star-formation, returning it to the reservoir of hot gas.
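The cooling / star-formation / feedback loop described in the last few ingredients can be sketched as a closed-box toy model. All timescales and the feedback efficiency below are invented purely for illustration; in the actual model they are tied to the halo circular velocity and cooling curve:

```python
def evolve_reservoirs(m_hot, m_cold, m_star, t_end, dt=1e-3,
                      tau_cool=1.0, tau_star=2.0, beta=0.5):
    """Forward-Euler sketch: hot gas cools on tau_cool, cold gas forms stars
    on tau_star, and supernova feedback reheats beta * SFR of cold gas back
    to the hot phase.  Units are arbitrary; total mass is conserved."""
    t = 0.0
    while t < t_end:
        cooling = m_hot / tau_cool          # hot -> cold
        sfr = m_cold / tau_star             # cold -> stars
        reheat = beta * sfr                 # cold -> hot (feedback)
        m_hot += dt * (reheat - cooling)
        m_cold += dt * (cooling - sfr - reheat)
        m_star += dt * sfr
        t += dt
    return m_hot, m_cold, m_star

hot, cold, stars = evolve_reservoirs(1.0, 0.0, 0.0, t_end=10.0)
print(f"hot = {hot:.3f}, cold = {cold:.3f}, stars = {stars:.3f}")
```

A larger feedback efficiency keeps more gas in the hot phase and slows the build-up of stellar mass, which is qualitatively how feedback regulates the bright end of the luminosity function in the full model.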
### Stellar evolution and populations.
Our work assumes the spectral models of Jimenez et al. (1998); for solar metallicity, the results are not greatly different from those of other workers. The IMF is generally taken to be Salpeter, but any choice is possible. Unlike other workers, we take it as established that the population of brown dwarfs makes a negligible contribution to the total stellar mass density, and we do not allow an adjustable $`M/L`$ ratio, $`\mathrm{\Upsilon }`$, for the stellar population.
### Chemical evolution.
The evolution of the metals must be followed, for two reasons: (i) the cooling of the hot gas depends on metal content; (ii) for a given age, a stellar population of high metallicity will be much redder. The models of Jimenez et al. (1998) allow synthetic stellar populations of any metallicity to be constructed.
## 3. Low-redshift results
With the set-up described above we match both the B- and K-band luminosity functions and the I-band Tully-Fisher relation, for an $`\mathrm{\Omega }=1`$ standard CDM structure formation scenario. Resolving the overmerging problem is the major contributor to this result, but the inclusion of chemical evolution and starbursts is also an important ingredient.
The new ingredients we have added to the modelling of galaxy formation are needed in order to make the models more realistic, and are not introduced simply in order to give yet more free parameters. Nevertheless, our resolution to the Tully-Fisher / luminosity function discrepancy may well not be unique, and various other changes to the ingredients of the phenomenological galaxy formation recipe might produce similar results. For example, we have not studied the influence cosmological parameters have on the model galaxy populations, where $`\mathrm{\Omega }`$, $`\mathrm{\Lambda }`$, and $`\sigma _8`$ are likely to be the important parameters. Other types of ingredients are possible as well: Somerville & Primack (1998) resolve some of the discrepancy using a dust extinction model plus a halo-disk approach to feedback.
## 4. High-redshift clustering
One way of resolving the worries about degeneracies in the cosmological/physical parameter space will be to include data at intermediate and high redshifts, which are being gathered with increasing speed and ease, and at increasingly high redshifts. In this contribution we show a preliminary comparison of the correlation properties of galaxies at redshift $`z=3`$. Recently, Giavalisco et al. (1998) gave an estimate for the galaxy-galaxy correlation function $`\xi (r)=(r_0/r)^\gamma `$ for a sample of Lyman-break galaxies at this redshift. They found $`r_0=2.1h^{-1}`$ Mpc and $`\gamma =2.0`$.
We selected our model galaxies in exactly the same way as Giavalisco et al. (1998) did, and compared two of the models produced by van Kampen et al. (1999), models $`n`$ and $`b`$, to the observational data. The first model ($`n`$), which is as close as possible to the model by Cole et al. (1994), but with the mass-to-light parameter $`\mathrm{{\rm Y}}=1`$, gives $`r_0=3.5h^{-1}`$ Mpc and $`\gamma =1.72`$. The second model ($`b`$), which includes starbursts and chemical evolution, gives $`r_0=4.4h^{-1}`$ Mpc and $`\gamma =2.1`$. Both models fit the correlation function at $`z=0`$ very well and cannot be distinguished from each other.
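Evaluating the quoted power-law fits makes the comparison concrete; for example, at a separation of 1 h⁻¹ Mpc (this is just an evaluation of the fits, nothing more):

```python
def xi(r, r0, gamma):
    """Power-law two-point correlation function xi(r) = (r0 / r)**gamma."""
    return (r0 / r) ** gamma

r = 1.0                      # separation in h^-1 Mpc
xi_obs = xi(r, 2.1, 2.0)     # Giavalisco et al. (1998) fit -> 4.41
xi_n = xi(r, 3.5, 1.72)      # model n                      -> ~8.6
xi_b = xi(r, 4.4, 2.1)       # model b                      -> ~22
```

At this separation both models are more strongly clustered than the observed fit, model b more so than model n.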
As the observed correlation data are still relatively uncertain at this moment in time, it is premature to rule out models on the basis of this data. The two models discussed above have similar predictions for low redshifts, but predict different clustering properties at high-redshift. However, the differences are not large, so one needs either really good data, or a much larger variety of observational characteristics of the high-redshift galaxy population.
#### Acknowledgments.
Many thanks for the loud vocal support from outside the conference building during my presentation. I like to think that the people of Marseille just wanted to show how much they supported everything I said …
## References
Bond J.R., Cole S., Efstathiou G., Kaiser N., 1991, ApJ, 379, 440
Bower R.G., 1991, MNRAS, 248, 332
Carlberg R.G., 1994, ApJ, 433, 468
Cole S., Aragón-Salamanca A., Frenk C.S., Navarro J.F., Zepf S.E., 1994, MNRAS, 271, 781
Davis M., Efstathiou G., Frenk C.S., White S.D.M., 1985, ApJ, 292, 371
Heyl J.S., Cole S., Frenk C.S., Navarro J.F., 1995, MNRAS, 274, 755
Giavalisco, M., Steidel C.C., Adelberger, K.L., Dickinson, M.E., Pettini, M., Kellogg, M., 1998, ApJ, 503, 543
Jimenez R., Padoan P., Matteucci F., Heavens A., 1998, MNRAS, 299, 123
Kauffmann G., White S.D.M., 1993, MNRAS, 261, 921
Kauffmann G., Colberg J.M., Diaferio A., White S.D.M., 1999, MNRAS, 303, 188
Kauffmann G., White S.D.M., Guiderdoni B., 1993, MNRAS, 264, 201
Lacey C.G., Cole S., 1993, MNRAS, 262, 627
Press W.H., Schechter P., 1974, ApJ, 187, 425
Somerville R.S., Primack J.R., 1998, astro-ph/9802268
van Kampen E., 1995, MNRAS, 273, 295
van Kampen E., 1997, in Clarke D.A., West M.J., eds., Proc. 12th ‘Kingston meeting’ on Theoretical Astrophysics: Computational Astrophysics, ASP Conf. Ser. Vol. 123. Astron. Soc. Pac., San Francisco, p. 231, astro-ph/9904270
van Kampen E., Jimenez R., Peacock J.A., 1999, MNRAS, in press, astro-ph/9904274
White S.D.M., 1976, MNRAS, 177, 717
---

Source: no-problem/0002/astro-ph0002309.html (ar5iv, text)
# Formation and evolution of disk galaxies within cold dark matter halos
## 1. Introduction
The hierarchical cosmic structure formation picture based on the inflationary cold dark matter (CDM) provides a solid framework for models of galaxy formation and evolution. On the other hand, the unprecedented observations of galaxies at different redshifts make it possible to probe and constrain these models. Here we discuss some of the results obtained with a self-consistent scenario of disk galaxy formation and evolution within the context of the hierarchical picture (the extended collapse scenario).
## 2. Dark matter halos
Using the extended Press-Schechter approximation, we generate the mass aggregation histories (MAHs) of the dark matter (DM) halos. Collapse and virialization of these halos are then calculated assuming spherical symmetry and adiabatic invariance, using a method based on a generalization of the secondary infall model (Avila-Reese, Firmani, & Hernández 1998). These halos will mainly correspond to isolated systems. The diversity of MAHs results in diversity of density profiles which, in our model, mainly depend on the MAH. The density profile corresponding to the average MAH is well described by the Navarro et al. (1996) profile. In Avila-Reese et al. (1999) we have compared the outer density profiles, concentrations and structural relations of thousands of halos identified as isolated in a cosmological ($`\mathrm{\Lambda }`$CDM) N-body simulation with those obtained with our seminumerical method. We found a good agreement between the model and simulation results.
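For reference, the Navarro et al. profile that describes the halo formed from the average MAH can be written down directly (rho_s and r_s are the usual characteristic density and scale radius; the helper below just checks the limiting slopes numerically):

```python
import math

def rho_nfw(r, rho_s, r_s):
    """Navarro-Frenk-White profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)**2],
    with logarithmic slope -1 for r << r_s and -3 for r >> r_s."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def log_slope(r, rho_s=1.0, r_s=1.0, eps=1e-6):
    """Numerical logarithmic slope d ln(rho) / d ln(r) at radius r."""
    return (math.log(rho_nfw(r * (1.0 + eps), rho_s, r_s)) -
            math.log(rho_nfw(r, rho_s, r_s))) / math.log(1.0 + eps)
```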
We have found that $`13\%`$ of the halos in the numerical simulation at $`z=0`$ are contained within larger halos and $`17\%`$ have significant companions within three virial radii. The remaining 70% of the halos are isolated objects. The slope $`\beta `$ of the outer density profile ($`\rho \propto r^{-\beta }`$) and the halo concentration, defined as $`c_{1/5}=r_h/r(M_h/5)`$, where $`r_h`$ and $`M_h`$ are the virial radius and mass, depend on the halo environment. For a given $`M_h`$, halos in clusters typically have steeper outer profiles and are more concentrated than the isolated halos (for the latter, $`\beta \approx 2.9`$ on average, with $`\beta `$ between 2.5 and 3.8 for 68% of the halos). Contrary to naive expectations, halos in galaxy and group systems, as well as halos with significant companions, systematically have flatter and less concentrated density profiles than isolated halos. A tight correlation between $`M_h`$ and the maximum circular velocity $`V_m`$ is observed: $`M_h\propto V_m^n`$, $`n\approx 3.2`$. This is roughly the slope of the infrared Tully-Fisher relations (TFR). Thus, it seems that there is no room for a mass dependence of the infrared $`M_h/L`$ ratio.
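The concentration $`c_{1/5}=r_h/r(M_h/5)`$ can be made concrete for an idealized NFW halo; a sketch using bisection for the radius enclosing one fifth of the virial mass (the halos of the simulation are, of course, not exactly NFW):

```python
import math

def m_nfw(x):
    """NFW mass inside x = r/r_s, in units of 4*pi*rho_s*r_s**3."""
    return math.log(1.0 + x) - x / (1.0 + x)

def c_one_fifth(c):
    """c_{1/5} = r_h / r(M_h/5) for an NFW halo with c = r_h/r_s."""
    target = m_nfw(c) / 5.0
    lo, hi = 1e-9, float(c)
    for _ in range(200):          # bisection for the radius enclosing M_h/5
        mid = 0.5 * (lo + hi)
        if m_nfw(mid) < target:
            lo = mid
        else:
            hi = mid
    return c / (0.5 * (lo + hi))
```

More concentrated halos (larger c = r_h/r_s) also have larger c_{1/5}, so the two measures rank halos consistently.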
## 3. Galaxy evolutionary models
We model the formation and evolution of baryon disks in centrifugal equilibrium within the growing CDM halos formed as described in §2. We assume that halos acquire angular momentum from large-scale torques with the spin parameter $`\lambda `$ distributed log-normally and constant in time. The disks are built inside-out with the gas infall rate (no mergers) proportional to the cosmological mass aggregation rate and assuming detailed angular momentum conservation. The gravitational drag of the disk on the DM halo is calculated. The local SF is assumed to be induced by disk instabilities and regulated by energy balance within the disk turbulent ISM (no SF feedback and self-regulation at the level of the interhalo medium is allowed). We also calculate the secular formation of a bulge. This way, at each epoch and at each radius, the growing disk is characterized by the infall rate of fresh gas, the gas and stellar surface density profiles, the total rotation curve (including the DM component), the local SF rate, and the size of the inner region transformed into bulge component.
## 4. Highlights of the model results
Results on the structure and dynamics of our model disk galaxies were discussed in Firmani & Avila-Reese (2000); the luminosity properties and topics related to the disk Hubble sequence were treated in Avila-Reese & Firmani (2000), while some evolutionary aspects of the galaxies were presented in Firmani & Avila-Reese (1999). In the following, we highlight some of the results.
Local properties. The (stellar) surface density and brightness profiles are exponential, the sequence of high to low surface brightness (SB) being mainly determined by $`\lambda `$. The gas profiles at $`z=0`$ are also exponential, although much lower in density and with a scale radius 2-4 times larger than that of the stellar profiles. There is a negative radial gradient of the color index: stars in the outer regions of the disk form later than stars in the inner regions. We find that the local SF rate per unit area correlates with the gas surface density as $`\mathrm{\Sigma }_{\mathrm{SFR}}(r)\propto \mathrm{\Sigma }_g^n(r)`$ with $`n\approx 2`$ for most of the models and over a major portion of the disks. The shape of the rotation curves correlates with the SB ($`\lambda `$) and in most cases is approximately flat. The dark halo dominates in the rotation curve decomposition down to very central regions.
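A Schmidt-type local law with n = 2 applied to an exponential gas disk immediately gives an exponential SFR profile with half the gas scale length, i.e. star formation more centrally concentrated than the gas; a small sketch (normalisation and numbers are arbitrary):

```python
import math

def sigma_gas(r, sigma0=10.0, h=3.0):
    """Exponential gas surface density profile (illustrative numbers)."""
    return sigma0 * math.exp(-r / h)

def sigma_sfr(r, a=1.0e-3, n=2.0):
    """Local Schmidt-type law Sigma_SFR = a * Sigma_gas**n with n ~ 2;
    the normalisation a is arbitrary here."""
    return a * sigma_gas(r) ** n
```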
The infrared Tully-Fisher relations (TFR). The slope of the $`M_h`$–$`V_m`$ relation of the CDM halos remains imprinted in the TFR and agrees with observations. This slope is almost independent of the assumed disk mass fraction $`f_d`$ when the disk component in the rotation curve decomposition is gravitationally important ($`f_d\gtrsim 0.03`$ for the $`\mathrm{\Lambda }`$CDM model used here). The zero point of the model TFR is only slightly larger than the observed zero point. The rms scatter in our TFR slightly decreases with mass; from $`V_m=70`$ to 300 km/s the scatter is between 0.38 and 0.31 mag. We have found that a major contribution to this scatter is from the scatter in the DM halo structures due to the dispersion of the MAHs; a minor contribution to the scatter is due to the dispersion of $`\lambda `$. The TFR for high and low SB models is approximately the same. The slope of the correlation among the residuals of the TF and luminosity-radius relations is small and non-monotonic, although the shape of the rotation curves of our models correlates with the SB. For a given total (star+gas) disk mass, $`V_m`$ decreases with decreasing SB. However, owing to the dependence of the SF efficiency on the disk surface density, the stellar mass $`M_s`$ (luminosity) also decreases. This combined influence of the SB ($`\lambda `$) on $`V_m`$ and $`M_s`$ puts models of different SB on the same $`M_s`$–$`V_m`$ relation. As a result, high and low SB models follow similar TFRs.
The Hubble sequence. The main properties of the high and low SB disk galaxies and their correlations are determined by the combination of three fundamental physical factors and their dispersions: the halo virial mass, the MAH and the angular momentum given through $`\lambda `$. The MAH determines mainly the halo structure, the integral color index, and the gas fraction $`f_g`$ while $`\lambda `$ determines mainly the disk SB, the bulge-to-disk (b/d) ratio and the shape of the rotation curve. Our models show that the redder and more concentrated (higher SB) is the disk, the smaller is $`f_g`$ and the larger is the b/d ratio (disk Hubble sequence). The values of all these magnitudes are in good agreement with observations.
Evolutionary features. In the inside-out hierarchical disk formation scenario galaxies undergo not only luminosity but also structural (size, SB, b/d ratio) evolution. For an Einstein-de Sitter universe we find that the scale radius for normal disk galaxies decreases roughly as $`(1+z)^{-0.5}`$ up to $`z\sim 1.5`$, while the central $`B`$-band SB from $`z=0`$ to $`z=1`$ increases by $`1.2`$ mag.
The SF history in the models is driven both by the MAH and by the disk gas surface density. For the average MAH and $`\lambda =0.05`$, the SF rate reaches a maximum between $`z\sim 1.5`$ and 2.5 which is a factor of 2.5-4.0 higher than the rate at $`z=0`$. In the same way, $`L_B`$ increases towards the past by factors slightly smaller than the SF rate. The less massive galaxies present a slightly more active luminosity evolution than the massive galaxies. The model galaxies are somewhat bluer in the past; from $`z=0`$ to $`z=1`$ the $`B-V`$ color decreases on average by 0.25-0.30 magnitudes. The total mass-to-$`L_B`$ ratio also decreases towards higher redshifts: from $`z=0`$ to $`z=1`$ it decreases on average by a factor of $`3.3`$, i.e. a galaxy at $`z=1`$ is more luminous in the $`B`$ band and less massive than at $`z=0`$. Again, this is a result related to the hierarchical MAHs of the protogalaxies.
Owing to the mass (size) evolution, for a fixed $`V_m`$, the $`H`$-band luminosity is a factor $`\sim `$2.2 smaller at $`z=1`$ than at $`z=0`$; however, owing to the luminosity evolution, $`L_B`$ is a factor $`\sim `$2.1 larger. Therefore, while the zero-point of the $`H`$-band TFR increases towards the past, in the case of the $`B`$-band TFR, compensation due to the $`L_B`$ evolution results in the zero-point remaining approximately constant with time. The slopes in both cases also remain constant.
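The quoted luminosity factors translate into zero-point shifts through the usual magnitude relation, Δm = -2.5 log10(L_new/L_old); a quick check of the arithmetic (the conversion only, not the models themselves):

```python
import math

def delta_mag(lum_ratio):
    """Magnitude change for a luminosity ratio L_new / L_old."""
    return -2.5 * math.log10(lum_ratio)

dm_H = delta_mag(1.0 / 2.2)  # ~2.2x fainter in H: dimmer by ~0.86 mag
dm_B = delta_mag(2.1)        # ~2.1x brighter in B: brighter by ~0.81 mag
```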
## 5. Potential difficulties of the hierarchical scenario
Although several main properties, correlations, and evolutionary features of normal disk galaxies have been successfully predicted by our models, it is important to remark on their problems. We find the following potential conflicts with the observations: 1) the size and SB evolution of the disks is too pronounced, 2) the radial color index gradients are too steep and the $`f_g`$ is slightly over-abundant, 3) the DM component dominates in the rotation curve decompositions almost down to the center and the halos are too cuspy.
Regarding item 1), if selection effects in the deep field are not as significant as Simard et al. (1999) have claimed, then it is probably not serious. In fact, several physical ingredients not considered in our models (e.g., merging, angular momentum transfer, and non-stationary SF) all work in the direction of improving the models with respect to problems 1) and 2). Problem 3) can probably be solved if the inner density profile of the CDM halos is shallower than predicted (several solutions, such as self-interacting CDM, warm DM, and non-Gaussian fluctuations, have been proposed). Nevertheless, it is possible that all these problems, together with the dearth of satellites and the high frequency of disk-disruptive mergers, point to serious difficulties for the Gaussian CDM-based hierarchical picture of structure formation. More observational tests of the problems mentioned above and more theoretical effort in modeling galaxy formation and evolution are urgently required.
## References
Avila-Reese, V., Firmani, C., & Hernández, X. 1998, ApJ, 505, 37

Avila-Reese, V., Firmani, C., Klypin, A., & Kravtsov, A. 1999, MNRAS, 309, 527

Avila-Reese, V., & Firmani, C. 2000, RevMexA&A, 36, in press

Firmani, C., & Avila-Reese, V. 1999, ASP Conf. Series 176, 406

Firmani, C., & Avila-Reese, V. 2000, MNRAS, in press

Navarro, J., Frenk, C.S., & White, S.D.M. 1996, ApJ, 462, 563
Simard, L. et al. 1999, ApJ, 519, 563
---

Source: no-problem/0003/cond-mat0003325.html (ar5iv, text)
# OPENING AN ENERGY GAP IN AN ELECTRON DOUBLE LAYER SYSTEM AT INTEGER FILLING FACTOR IN A TILTED MAGNETIC FIELD
## Abstract
We employ magnetocapacitance measurements to study the spectrum of a double layer system with gate-voltage-tuned electron density distributions in tilted magnetic fields. For the dissipative state in normal magnetic fields at filling factor $`\nu =3`$ and 4, a parallel magnetic field component is found to give rise to opening a gap at the Fermi level. We account for the effect in terms of parallel-field-caused orthogonality breaking of the Landau wave functions with different quantum numbers for two subbands.
Much interest in electron double layers is attracted by their many-body properties in a quantizing magnetic field. These include the fractional quantum Hall effect at filling factor $`\nu =1/2`$, the many-body quantum Hall plateau at $`\nu =1`$, broken-symmetry states at fractional fillings, the canted antiferromagnetic state at $`\nu =2`$, etc. Still, the single-electron properties of double layer systems that can be interpreted without appealing to exchange and correlation effects are no less intriguing. A standard double layer with an interlayer distance of about the Bohr radius is a soft two-subband system if brought into the imbalance regime, in which the electron density distribution has two asymmetric maxima corresponding to two electron layers. In such a system a small interlayer charge transfer significantly shifts the positions of the Landau level sets; in particular, the transfer of all electrons in a single quantum level would lead to a shift as large as the cyclotron energy. In a double layer system with gate-bias-controllable electron density distributions at normal magnetic fields, peculiarities were observed in the Landau level fan chart: at fixed integer filling factor $`\nu >2`$ the Landau levels for the two electron subbands pin to the Fermi level over wide regions of magnetic field, giving rise to a zero activation energy for the conductivity. The pinning effect is obviously possible due to the orthogonality of the Landau level wave functions with different index and, therefore, it might disappear if the orthogonality were lost for some reason. In contrast, at $`\nu =1`$ and 2 the gap was found to be similar to the symmetric-antisymmetric splitting at balance and to have a finite value for any field. This was explained as being caused by a subband wave function reconstruction in the growth direction.
Here, we study the electron spectrum of a gate-voltage-tunable double layer in tilted magnetic fields. We find that for the dissipative state at filling factor $`\nu =3`$ and 4 in a normal magnetic field, the addition of a parallel field component leads to the appearance of a gap at the Fermi level as indicated by activated conductivity. These findings are explained in terms of a wave-function orthogonality-breaking effect caused by the parallel magnetic field component.
The samples are grown by molecular beam epitaxy on a semi-insulating GaAs substrate. The active layers form a 760 Å wide parabolic well. In the center of the well a 3-monolayer-thick Al<sub>x</sub>Ga<sub>1-x</sub>As ($`x=0.3`$) sheet is grown, which serves as a tunnel barrier between the two parts of the well. The symmetrically doped well is capped by 600 Å AlGaAs and 40 Å GaAs layers. The symmetric-antisymmetric splitting in the bilayer electron system, as determined from far-infrared measurements and model calculations, is equal to $`\mathrm{\Delta }_{SAS}=1.3`$ meV. The sample has ohmic contacts (each of them connected to both electron systems in the two parts of the well) and two gates on the crystal surface with areas $`120\times 120`$ and $`220\times 120`$ $`\mu `$m<sup>2</sup>. The gate electrode enables us to tune the carrier density in the well, which is equal to $`4.2\times 10^{11}`$ cm<sup>-2</sup> at zero gate bias, and simultaneously to measure the capacitance between the gate and the well. For the capacitance measurements we additionally apply a small ac voltage $`V_{ac}=2.4`$ mV at frequencies in the range 3 – 600 Hz between the well and the gate and measure both current components as a function of gate bias $`V_g`$ in normal and tilted magnetic fields in the temperature interval between 30 mK and 1.2 K. An example of the imaginary current component is depicted in Fig. 1; also shown in the inset is the calculated behaviour of the conduction band bottom for our sample.
The employed experimental technique is similar to magnetotransport measurements in the Corbino geometry: in the low-frequency limit, the active component of the current is inversely proportional to the dissipative conductivity $`\sigma _{xx}`$, while the imaginary current component reflects the thermodynamic density of states of the double layer system. The activation energy at the minima of $`\sigma _{xx}`$ for integer $`\nu `$ is determined from the temperature dependence of the corresponding peaks in the active current component.
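Extracting the activation energy from the temperature dependence amounts to an Arrhenius fit of ln σ_xx versus 1/T; a minimal sketch on synthetic data (note that the convention for the exponent, E_a versus E_a/2, varies between analyses):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(T, sigma):
    """Least-squares slope of ln(sigma) vs 1/T, assuming
    sigma_xx ~ exp(-E_a / k_B T); returns E_a in eV."""
    x = [1.0 / t for t in T]
    y = [math.log(s) for s in sigma]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((a - xm) * (b - ym) for a, b in zip(x, y)) /
             sum((a - xm) ** 2 for a in x))
    return -slope * K_B

# synthetic check: data generated with E_a = 1 meV over 0.1-1 K
Ts = [0.1, 0.2, 0.5, 1.0]
sig = [math.exp(-1.0e-3 / (K_B * t)) for t in Ts]
```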
The positions of the $`\sigma _{xx}`$ minimum for $`\nu =2`$, 3, and 4 in the ($`B_{\perp },V_g`$) plane are shown in Fig. 2 for both normal and tilted magnetic fields. At gate voltages $`V_{th1}<V_g<V_{th2}`$, at which one subband $`E_1`$ of the substrate-side part of the well is filled with electrons, the experimental points fall onto straight lines with slopes defined by the capacitance between the gate and the bottom electron layer. Above $`V_{th2}`$, where a second subband $`E_2`$ collects electrons in the front part of the well, a minimum in $`\sigma _{xx}`$ at integer $`\nu `$ corresponds to a gap in the spectrum of the bilayer electron system. In this case the slope is inversely proportional to the capacitance between the gate and the top electron layer. Additional minima of the imaginary current component that are related solely to the thermodynamic density of states in the second subband are shown in Fig. 2 by dashed lines. Hence, each of the two different kinds of minima forms its own Landau level fan chart. In the perpendicular magnetic field, wide disruptions of the fan line at $`\nu =4`$ and a termination of the line at $`\nu =3`$ indicate the absence of a minimum in $`\sigma _{xx}`$ (Fig. 2a). As mentioned above, this results from a Fermi level pinning of the Landau levels for the two subbands.
Remarkably, switching on a parallel magnetic field is found to promote the formation of a $`\sigma _{xx}`$ minimum at integer $`\nu >2`$, particularly at $`\nu =3`$ and 4, see Fig. 2b. This implies that the parallel magnetic field suppresses the pinning effect, giving rise to opening a gap at the Fermi level in the double layer system.
Figure 3 represents the behaviour of the activation energy $`E_a`$ along the $`\nu =3`$ and 4 fan lines in Fig. 2 for different tilt angles $`\mathrm{\Theta }`$ of the magnetic field. As seen from Fig. 3a, for filling factor $`\nu =4`$ in the normal field, the value of $`E_a`$ is largest both at the bilayer onset $`V_{th2}`$ and at balance. In between these it vanishes, in agreement with the disappearance of the minimum of $`\sigma _{xx}`$ in the magnetic field range between 2.6 and 3.4 T; in the close vicinity of $`B=3`$ T, $`E_a`$ is unmeasurably small but likely finite, as can be reconciled with the observed $`\sigma _{xx}`$ minimum at the fan crossing point of $`\nu =4`$ and $`\nu _2=1`$ (Fig. 2a). In contrast, for tilted magnetic fields, the activation energy at $`\nu =4`$ never tends to zero, forming a plateau instead (Fig. 3a).
For $`\nu =3`$ the parallel field effects are basically similar to the case of $`\nu =4`$, with one noteworthy distinction. Near the balance point, the activation energy in a tilted magnetic field exhibits a minimum that deepens with increasing tilt angle, see Fig. 3b. This minimum is likely to be of many-body origin: at sufficiently large $`\mathrm{\Theta }`$ it is accompanied by a splitting of the $`\nu =3`$ fan line, which is very similar to the behaviour of the double layer at $`\nu =2`$ discussed as a manifestation of the canted antiferromagnetic phase. This effect will be considered in detail elsewhere.
We relate the appearance of a gap at integer $`\nu >2`$ in the unbalanced double layer at tilted magnetic fields to orthogonality breaking of the Landau wave functions with different quantum numbers for the two subbands. Indeed, interlayer tunneling should occur with in-plane momentum conservation, so that in a tilted magnetic field it is accompanied by an in-plane shift of the center of the Landau wave function by an amount $`d_0\mathrm{tan}\mathrm{\Theta }`$, where $`d_0`$ is the distance between the centers of mass of the electron density distributions in the two lowest subbands. The so-shifted Landau wave functions with different quantum numbers for the two subbands thus overlap. In this case the above-mentioned pinning effect at integer $`\nu >2`$ can no longer occur. Instead, as will be discussed below, the wave functions get reconstructed, which is accompanied by level splitting.
We calculate the single-particle spectrum in a tilted magnetic field in the self-consistent Hartree approximation, taking into account neither the spin splitting (supposing a small $`g`$ factor) nor the exchange and correlation energy. The intersubband charge transfer upon switching on the magnetic field acts as a perturbation potential that mixes the wave functions of the two subbands. Account is taken of the shift of the subband bottoms due to the parallel component of the magnetic field, and the value of the gap at the Fermi level is determined in first-order perturbation theory, in a similar way to the $`\nu =1`$ and 2 case at normal magnetic fields.
The magnetic field dependence of the calculated gap $`\mathrm{\Delta }`$ for filling factor $`\nu =4`$ is displayed in Fig. 4. At fixed tilt angle the calculation reproduces well the observed behaviour of the gap along the $`\nu =4`$ fan line (cf. Figs. 3a and 4a). The quantitative difference between the gap values can be attributed to the finite width of the Landau levels which is disregarded in calculation.
The gap $`\mathrm{\Delta }`$ as a function of the parallel magnetic field component $`B_{\parallel }`$ at a fixed value of $`B_{\perp }=2.6`$ T is depicted in Fig. 4b. It reaches a maximum at $`B_{\parallel }=3.5`$ T and then drops with further increasing field $`B_{\parallel }`$. It is clear that $`\mathrm{\Delta }(B_{\parallel })`$ reflects the dependence of the overlap of the Landau wave functions with different quantum numbers on their in-plane shift $`d_0\mathrm{tan}\mathrm{\Theta }`$: while at sufficiently small shifts the overlap rises with the shift, at large shifts the overlap is sure to vanish, restoring the wave-function orthogonality.
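The rise and eventual decay of the overlap with in-plane shift can be illustrated with the simplest analogue: the two lowest harmonic-oscillator (Landau-like) states displaced relative to each other. This is a one-dimensional toy model, not the self-consistent calculation described above:

```python
import math

def ho_psi(n, x):
    """Harmonic-oscillator eigenfunctions for n = 0, 1 (length l = 1)."""
    g = math.exp(-0.5 * x * x) / math.pi ** 0.25
    return g if n == 0 else math.sqrt(2.0) * x * g

def overlap_01(x0, xmax=12.0, steps=24000):
    """<psi_1 | psi_0 displaced by x0>, by trapezoidal integration."""
    dx = 2.0 * xmax / steps
    s = 0.0
    for i in range(steps + 1):
        x = -xmax + i * dx
        w = 0.5 if i in (0, steps) else 1.0
        s += w * ho_psi(1, x) * ho_psi(0, x - x0)
    return s * dx

def overlap_01_exact(x0):
    """Closed form: (x0 / sqrt(2)) * exp(-x0**2 / 4)."""
    return (x0 / math.sqrt(2.0)) * math.exp(-x0 * x0 / 4.0)
```

The overlap vanishes at zero shift (orthogonality), peaks at a shift of order the oscillator length, and decays again for large shifts, mirroring the non-monotonic behaviour of the gap with the in-plane displacement.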
The above explanation holds for filling factor $`\nu =3`$ as well. We note that, in a normal magnetic field, the $`\nu =3`$ gap in our case is of spin origin, since the expected spin splitting is smaller than $`\mathrm{\Delta }_{SAS}`$. Therefore, it can increase with $`B_{\parallel }`$ for trivial reasons. The point of importance is that the Landau wave-function orthogonality has to be lost for the gap to open.
In summary, we have performed magnetocapacitance measurements on a double layer system with gate-voltage-controlled electron density distributions in tilted magnetic fields. It has been found that, for the dissipative state in normal magnetic fields at filling factor $`\nu =3`$ and 4, a parallel magnetic field component leads to opening a gap at the Fermi level. We attribute the origin of the effect to orthogonality breaking of the Landau wave functions with different quantum numbers for two subbands as caused by parallel magnetic field. The calculated behaviour of the gap is consistent with the experimental data.
We are thankful to S.V. Iordanskii for valuable discussions. This work was supported in part by the Deutsche Forschungsgemeinschaft DFG, the AFOSR under Grant No. F49620-94-1-0158, the Russian Foundation for Basic Research under Grants No. 00-02-17294 and No. 98-02-16632, the Programme ”Nanostructures” from the Russian Ministry of Sciences under Grant No. 97-1024, and INTAS under Grant No. 97-31980. The Munich - Santa Barbara collaboration has also been supported by a joint NSF-European Grant and the Max-Planck research award.