# Response to the Document “Origin of the Extended EUV Emission from the Abell 2199 and Abell 1795 Clusters of Galaxies” by Lieu, Mittaz, Bonamente, Durret and Kaastra
Lieu, Mittaz, Bonamente, Durret and Kaastra (hereafter, Lieu et al.) have provided a document which claims to rebut the finding by Bowyer, Berghöfer and Korpela (hereafter, BBK), presented at the Ringberg Workshop (April 1999), that the excess EUV emission detected in some clusters of galaxies is an artifact of the background subtraction employed.
We invite interested observers to carry out this analysis for themselves, but we realize this may take a substantial effort, and not everyone will have the necessary tools readily available. Hence we here provide some relevant information and discuss the points raised by Lieu et al.
The central point is that the EUVE Deep Survey Telescope response is not flat. This is not unique to EUVE: all space-borne and ground-based telescopes show this feature to some extent, and data from these telescopes are routinely corrected with a vignetting function, or flat-fielded. In Fig. 1 below, we provide a sensitivity plot of the EUVE Deep Survey Telescope obtained from long observations of blank sky. Each contour is a 10% change in sensitivity. A detailed version of this response, useful for considering small-scale variations in the detector that might be mistaken for variations in the detailed cluster emission, is available at http://sag-www.ssl.berkeley.edu/~korpela/euve_eff.
Given the obvious variation in sensitivity over the field of view of this telescope, we do not understand how anyone can claim that a flat background as employed by Lieu et al. is correct and appropriate for analyses of extended features.
Nonetheless, in their Figure 3, replicated below, Lieu et al. claim that this background is flat. However, it is visually obvious that the data in this figure are not compatible with a flat background. A simple chi-squared test is something anyone can easily carry out with the data in this figure; we find a reduced chi-square of 1.7 for the best-fit flat line, and a reduced chi-square of 0.93 for the best-fit sloped line. The decrease in the background with increasing radius seen in this figure is precisely the effect that we have brought to everyone's attention.
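For readers who wish to repeat this check, a minimal sketch follows; the radii, binned rates and errors below are placeholders to be replaced by values read off Figure 3 of Lieu et al.:

```python
import numpy as np

radius = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])    # arcmin (placeholder)
rate = np.array([1.05, 1.03, 1.00, 0.99, 0.97, 0.96, 0.94, 0.93])  # background rate (placeholder)
sigma = np.full_like(rate, 0.02)                                   # 1-sigma errors (placeholder)

# Best-fit flat line: the weighted mean minimises chi-square for a constant model.
flat = np.sum(rate / sigma**2) / np.sum(1.0 / sigma**2)
chi2_flat = np.sum(((rate - flat) / sigma) ** 2) / (len(rate) - 1)

# Best-fit sloped line: weighted least squares for rate = a + b * radius.
b, a = np.polyfit(radius, rate, 1, w=1.0 / sigma)
chi2_slope = np.sum(((rate - (a + b * radius)) / sigma) ** 2) / (len(rate) - 2)

print(f"reduced chi-square: flat {chi2_flat:.2f}, sloped {chi2_slope:.2f}")
```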
In their Figure 3, Lieu et al. place their "flat" background level at a rough average of their data points and claim this to be the average "flat" value. In their analyses of cluster emission they place their background level near the lowest of their outlying data points. It is not surprising that with this method of determining the background level they find significant positive flux.
Lieu et al. make a number of other points which they claim buttress their view.
1. They state, "The CSE effect was confirmed by the LECS instrument aboard BeppoSax." The validity of this "confirmation" is not clear. The preliminary BeppoSax analysis now available uses theoretical functions for several of the key data reductions, rather than in-flight derived values. A more thorough analysis is currently underway (Kaastra, private communication).
2. "A multi-scale wavelet analysis of the EUVE/DS data of A 1795 shows clear signatures of cluster emission out to a radius of at least 8 arc minutes." We find that this "emission" is entirely a fine-grained detector sensitivity effect. This can be verified by examining the fine-scale structure in the background sensitivity map provided at the site listed above. The "emission" appears different in different clusters because the observations were taken at different places on the detector, and/or were taken with different thresholding, and/or were taken at a combination of places on the detector. When the detector's small-scale sensitivity is properly accounted for, this effect disappears.
3. Lieu et al. point out that the additional background subtracted by us did not lead to the removal of CSE from the Virgo and Coma cluster data. They then state, "A natural puzzle is why these two clusters exhibit CSE…" It is indeed a puzzle why these two clusters exhibit CSE and others don't; the reason should be determined by future research. A clue may be provided by the fact that these two clusters exhibit substantial activity, either in the form of merging or the presence of a high energy jet, while the other clusters are quiescent.
4. Lieu et al. state, "In fact, the rest (of the clusters) suffer from the opposite effect; they are strongly intrinsically absorbed." It is hard for us to understand why this is a problem. A similar effect has already been noted in X-ray observations of these clusters, and its underlying cause in terms of "cooling flow" gas was discussed in the presentation by Bowyer at the Ringberg Workshop.
5. The "clincher test". We have difficulty in understanding all of the subtleties provided by Lieu et al. in this section, but their claim that the two raw data sets are essentially identical is correct; we stated this at the Workshop. Lieu et al. make a number of incorrect statements regarding the background levels observed. The particle background cannot be removed by pulse-height thresholding as they claim; it can only be reduced. They state that different observations can vary by a factor of two "mainly due to an increase in the photon background" and assert that this is a crucial point. Unfortunately this is incorrect. The photon background is constant, and the particle background is what changes in the EUVE Deep Survey Telescope, as was pointed out in an extensive analysis by Lieu and co-workers ("EUVE First Light Observations of the Diffuse Sky Background", Lieu, Bowyer, Lampton, Jelinsky, & Edelstein 1993, ApJ, 417, L41).
6. It is not true, as Lieu et al. claim, that "within the context of the BBK scenario, the photon background must assume two templates, suitably correlated with each other as to produce the same absolute brightness profile." The template is the same; it is shown in Figure 1. The only difference required is a normalization factor to account for the different (flat) particle background levels at the time of each observation.
7. The differences between the two data sets shown in Figure 6 of Lieu et al. are simply explained. These observations were taken at different locations on the detector (as Lieu et al. state). The vignetting is different at each of these locations, as can be seen in Figure 1, and hence the profiles will be different.
8. In "Another Cosmic Conspiracy", Lieu et al. state "one will be forced to conclude that such a profile must apply to every cluster observed by EUV, i.e., all clusters must appear in the EUV like A 2199." We disagree with this statement on several grounds. First, if the data were taken at different places on the detector, a cursory examination of Fig. 1 shows that the sensitivity deviations will be different and the cluster will look different. Second, there is EUV emission from the low-energy continuation of the X-ray gas in clusters. Since the X-ray gas distribution is different in different clusters, the related EUV emission will differ from cluster to cluster.
9. Lieu et al. incorrectly state that we find no emission in A2199 at radii larger than five arcminutes. Our analysis shows that the emission in A2199 extends to at least 9 arcminutes, but is entirely accounted for by the EUV tail of the X-ray emitting gas.
We challenge Lieu et al. to do the following: At each individual position where an observation of a cluster is made, derive an azimuthally averaged radial profile. Derive an azimuthally averaged radial profile of the background taken at this same location. Subtract from each a particle background as determined by count rates in highly vignetted regions near the edge of the filter. Fit the background profile at large radii to the source observations at large radii, and plot them on the same graph. Then share the results with all of us.
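A minimal sketch of the first two steps of this procedure, assuming a 2-D counts image with a known centre (the function name and the example binning are ours):

```python
import numpy as np

def radial_profile(image, x0, y0, bin_width=1.0):
    """Azimuthally average `image` in annuli of `bin_width` pixels about (x0, y0)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    idx = (r / bin_width).astype(int).ravel()
    sums = np.bincount(idx, weights=image.ravel())
    counts = np.bincount(idx)
    centres = (np.arange(len(sums)) + 0.5) * bin_width
    return centres, sums / np.maximum(counts, 1)

# Apply to both the source and the blank-sky observation at the same detector
# location, subtract the particle level estimated from highly vignetted regions,
# then fit the background profile to the source profile at large radii.
```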
# On the conformational structure of a stiff homopolymer
## I Introduction
Conformational transitions of semi-flexible polymers have recently been of considerable interest for analytical studies and computer simulations. Polymers can possess different degrees of flexibility, and their persistent length $`\lambda`$ is one of the key parameters that determine the conformation . For example, for polystyrene $`\lambda \approx 1.4`$ nm, which corresponds to about 5 chain bonds, whereas for the double-helix DNA $`\lambda \approx 50`$ nm, i.e. about 150 base pairs.
The equilibrium and kinetics of folding of a flexible homopolymer ($`\lambda =0`$) are relatively well understood at present. It is believed that in a wide range of interaction parameters the equilibrium collapse transition is continuous. As the persistent length increases, the extended coil becomes larger, with the swelling exponent changing smoothly from that of the Flory coil, $`\nu =3/5`$, to that of a rigid rod, $`\nu =1`$. The collapse transition of a semi-flexible homopolymer on going from the good to the poor solvent regime remains continuous at first but, starting from some critical value of $`\lambda`$, becomes discontinuous . Such a change in the nature of the transition actually reflects a profound restructuring of the collapsed globule: instead of forming a nearly spherical liquid-like globule, a sufficiently stiff chain has no point of easy bending, so it wraps around itself forming a torus. Such structures have been observed a number of times experimentally for DNA molecules and have been studied theoretically from different points of view, starting from a pioneering work .
In Refs. it has been noted that for a given degree of polymerisation the toroidal conformation exists only above some value of the stiffness $`\lambda`$ and in a limited interval of the solvent quality. In Ref. , based on the Gaussian self-consistent (GSC) method, we studied the equilibrium phase diagram and the kinetics of conformational transitions after various quenches. Importantly, both the coil-to-torus and torus-to-globule transitions are discontinuous, and thus there are associated regions of metastability. This results in a rather complex kinetic picture of expansion or folding, depending essentially on the quench depth. In that work we also mentioned that there seemed to be some additional minima of the free energy, the study of which was deferred until the current paper.
In Ref. we have shown that at equilibrium the GSC treatment reduces precisely to the Gibbs–Bogoliubov variational method with a generic quadratic trial Hamiltonian. However, in the extended variational space care should be taken when finding the true free energy minima, as these seem to be sensitive to the limitations of the underlying polymer model. The current model, with a virial-type expansion, is believed to have problems at high densities. Thus we shall re-examine the phase diagram of a semi-flexible chain here in a more systematic and accurate manner.
Despite the relative ease with which analytical theories obtain the toroidal conformation, computer simulations have been less successful so far. In Ref. an attempt was made to include the bending energy in a Monte Carlo simulation of the lattice model of Ref. . Although a number of metastable states with hairpin and crystalline conformations were observed, it was not possible to obtain the true equilibrium state corresponding to the torus. These difficulties are due to a number of circumstances. First, a toroidal conformation appears only for sufficiently long and stiff chains, for which the relaxation times become enormous, so the equilibrium state is hard to reach during a limited simulation time. Second, the lattice itself introduces a number of unfortunate artefacts. Because the rotational symmetry is broken, the chain segments can lie only along a number of allowed directions, and weak bendings are simply impossible. The chain therefore forms conformations with long straight sections followed by a U-turn, resembling a hairpin. Such abrupt turns, however, carry a considerable energetic penalty, so that the corresponding states have a higher free energy than the true minimum. Depending on the particular type of lattice model used, the true minimum may still be a misshapen torus, or, if the smallest and only bendings are 90 degrees, one would expect instead a kind of solid-like crystalline ordering . Even though a kind of torus may be possible in a model with links along 2-D and 3-D diagonals , such a state would be extremely hard to reach due to the lack of collective moves and the huge number of metastable minima in which the system tends to be trapped. Interestingly, in a recent paper Ref. a simulation in the bond fluctuation model, which is intermediate between lattice and continuous-space models, exhibited a toroidal, though imperfect, structure (see e.g. Fig. 7).
One attractive possibility is to use Langevin or Monte Carlo simulations in continuous space instead, although this would require much longer computations. Here we shall use a fairly standard Monte Carlo off–lattice technique for a ring homopolymer chain.
Thus, the main objective of this work is, by using both the Gaussian variational and Monte Carlo techniques, to obtain a more accurate phase diagram of the homopolymer in terms of the stiffness and the solvent quality, and to elucidate the conformational structure of the corresponding thermodynamically stable, as well as possible metastable, states.
## II Techniques
### A Gaussian variational method
The Gaussian variational method is based on minimising the Gibbs–Bogoliubov trial free energy, $`\mathcal{A}=\mathcal{E}-T\mathcal{S}`$, with respect to the full set of variational parameters. Here these are the mean-squared distances, $`D_{mm^{\prime}}\equiv (1/3)\langle (\mathbf{X}_m-\mathbf{X}_{m^{\prime}})^2\rangle`$, between monomers number $`m`$ and $`m^{\prime}`$ ($`m,m^{\prime}=0,\ldots ,N-1`$, where $`N`$ is the degree of polymerisation). For a ring homopolymer the matrix $`D_{mm^{\prime}}`$ is translationally invariant along the chain: $`D_{mm^{\prime}}\equiv D_k`$, where $`k=|m-m^{\prime}|`$.
Using the de Gennes–des Cloizeaux–Edwards bead-and-spring model of the homopolymer, with the volume interactions represented by a virial-type expansion, one can obtain the following entropic and energetic contributions,
$$\mathcal{S}=\frac{3k_B}{2}\sum_{q=1}^{N-1}\log \mathcal{F}_q,\qquad \mathcal{F}_q=-\frac{1}{2N}\sum_{k=1}^{N-1}\cos\left(\frac{2\pi kq}{N}\right)D_k,\qquad R_g^2\equiv \sum_{q=1}^{N-1}\mathcal{F}_q,$$

(1)

$$\mathcal{E}=\frac{3Nk_BT}{2l^2}D_1+\frac{3Nk_BT\lambda}{2l^3}\left(4D_1-D_2\right)+\frac{u^{(2)}N}{(2\pi)^{3/2}}\sum_{k=1}^{N-1}D_k^{-3/2}+\frac{3u^{(3)}N}{(2\pi)^3}\sum_{k=1}^{N-1}D_k^{-3}+\frac{u^{(3)}N}{(2\pi)^3}\sum_{k_1\neq k_2=1}^{N-1}\left(\frac{1}{2}\left(D_{k_1}D_{k_2}+D_{k_1}D_{k_1-k_2}+D_{k_2}D_{k_1-k_2}\right)-\frac{1}{4}\left(D_{k_1}^2+D_{k_2}^2+D_{k_1-k_2}^2\right)\right)^{-3/2},$$

(3)
where we have also introduced the normal modes $`\mathcal{F}_q`$ and the mean squared radius of gyration of the chain, $`R_g^2`$.
Analogously to Eqs. (8-11) of Ref. , the first term in Eq. (3) is the elastic energy of the springs and the second term is the bending energy, with $`l`$ and $`\lambda`$ being the statistical and persistent lengths of the chain respectively. Note that these formulas correspond precisely to the Kratky–Porod way of taking the stiffness into account by adding the integral along the chain of the squared curvature .
In Refs. we have discussed that the application of a virial-type expansion is flawed for the dense globular state. Namely, the model with the two- and three-body terms in the variational treatment is found to possess a number of pathological, infinitely deep free energy minima. This implies that at sufficiently high density the three-body term is unable to cope with the increasingly strong two-body attraction. Introduction of a thermodynamically subdominant term, such as the 4th term in Eq. (3), $`\mathcal{E}_{si}`$ (see the Appendix for more details), or the so-called 'thickness' term in Ref. , fixes the problem, and is also shown to produce a negligibly weak correction in the repulsive and ideal coil regimes. Although attempts to derive such a term from perturbation and renormalisation theory have been made (as e.g. in the latter work), these partial fixes are fundamentally inconsistent. Having convinced ourselves that for the purpose of the current work the results from such a theory are in satisfactory agreement with the numerical experiment, we shall accept this procedure here and bear in mind its limitations. Hopefully, a new non-Gaussian theory under development by one of the authors may address this problem. It fundamentally deals with true intermolecular interaction potentials instead of ill-defined virial expansions, something which the Gaussian variational theory cannot avoid, since the energy averaged over a Gaussian trial distribution diverges for any singular potential involving a hard-core part.
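To make Eq. (1) concrete, a minimal numerical sketch follows; it evaluates the normal modes, the entropy and $`R_g^2`$ from a given set of $`D_k`$, using the ideal Gaussian ring as a placeholder input (the variable names are ours, and the quoted closed-form check holds for that input only):

```python
import numpy as np

N = 100                                   # degree of polymerisation
k = np.arange(1, N)                       # chain index k = 1 .. N-1
D = k * (N - k) / N                       # placeholder D_k: ideal Gaussian ring, l = 1

# Normal modes F_q of Eq. (1); q runs over 1 .. N-1
q = np.arange(1, N)[:, None]
F = -np.sum(np.cos(2 * np.pi * q * k[None, :] / N) * D[None, :], axis=1) / (2 * N)

S = 1.5 * np.sum(np.log(F))               # entropy in units of k_B, Eq. (1)
Rg2 = np.sum(F)                           # mean squared radius of gyration, Eq. (1)

print(f"R_g^2 = {Rg2:.4f}, ideal-ring value (N^2-1)/(12N) = {(N**2 - 1) / (12 * N):.4f}")
```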
### B Off–Lattice Monte Carlo Simulation
Since the lattice Monte Carlo model is not suitable for studying the effects of chain stiffness, for several reasons, we have carried out simulations in continuous space instead. The disadvantages of the lattice model include the rotational anisotropy and the tendency of condensed phases to form crystalline structures on the lattice . Producing toroidal states on a lattice for comparatively short polymer chains is therefore rather difficult. Another disadvantage of the lattice model is that the persistence of the polymer chain dramatically reduces the acceptance ratio of local monomer moves, so that other, more sophisticated types of moves, such as shifts and rotations of chain segments as a whole, are needed.
The model is implemented for a single homopolymer consisting of $`N`$ monomers connected by springs in a ring, which additionally interact with each other via a pair–wise short ranged spherically symmetric potential,
$$H=\frac{k_BT}{2l^2}\sum_{m}(\mathbf{X}_m-\mathbf{X}_{m-1})^2+\frac{k_BT\lambda}{2l^3}\sum_{m}(\mathbf{X}_{m+1}+\mathbf{X}_{m-1}-2\mathbf{X}_m)^2+\frac{1}{2}\sum_{m\neq m^{\prime}}V(|\mathbf{X}_m-\mathbf{X}_{m^{\prime}}|).$$

(4)
Unlike the GSC theory, where one has to introduce a virial–type expansion representing the pair–wise potential, here we use the two–body interaction potential explicitly,
$$V(r)=\begin{cases}+\infty & \text{for } r<d\\ V_0\left(\left(\frac{d}{r}\right)^{12}-\left(\frac{d}{r}\right)^{6}\right) & \text{for } r>d\end{cases}$$

(5)
Thus, monomers are represented by hard spheres of diameter $`d`$, with a weak, short-ranged Lennard–Jones attraction of characteristic strength $`V_0`$. During the simulation we change the strength of the two-body attraction, $`V_0`$, which can be viewed essentially as the "inverse temperature", rather than changing the temperature $`T`$ itself.
The Monte Carlo update scheme is based on the Metropolis algorithm with local monomer moves. The new coordinate of a monomer is sought as $`q^{new}=q^{old}+r_{\Delta}`$, where $`q`$ stands for the $`x`$, $`y`$ and $`z`$ spatial projections and $`r_{\Delta}`$ is a random number uniformly distributed in the interval $`[-\Delta ,\Delta ]`$. Here $`\Delta`$ is an additional parameter of the Monte Carlo scheme which, in a sense, characterises the timescale involved in the Monte Carlo sweep (MCS), the latter being defined as $`N`$ attempted Monte Carlo steps.
In both models we work in the system of units such that $`l=1`$ and $`k_BT=1`$. Additionally we fix the third virial coefficient in the variational method, $`u^{(3)}=10`$, and the hard–core diameter, $`d=1`$, in Eq. (5).
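As an illustration of the scheme just described, here is a minimal sketch of one Monte Carlo sweep under the Hamiltonian of Eqs. (4) and (5) in these units; it is not the production code of this work, and the function and parameter names (`pair_energy`, `sweep`, `lam` for $`\lambda`$, `delta` for $`\Delta`$) are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(X, m, lam, V0, d=1.0):
    """All terms of Eqs. (4)-(5) involving monomer m of the ring chain X (l = k_B T = 1)."""
    N = len(X)
    E = 0.0
    for n in (m - 1, (m + 1) % N):                 # harmonic springs to the ring neighbours
        E += 0.5 * np.sum((X[m] - X[n]) ** 2)
    for c in (m - 1, m, (m + 1) % N):              # the three curvature terms containing X[m]
        curv = X[(c + 1) % N] + X[c - 1] - 2 * X[c]
        E += 0.5 * lam * np.sum(curv ** 2)
    r = np.linalg.norm(X - X[m], axis=1)           # hard core plus LJ attraction, Eq. (5)
    r[m] = np.inf
    if np.any(r < d):
        return np.inf
    return E + V0 * np.sum((d / r) ** 12 - (d / r) ** 6)

def sweep(X, lam, V0, delta):
    """One MCS: N attempted local Metropolis moves, q_new = q_old + r_Delta."""
    N = len(X)
    for _ in range(N):
        m = rng.integers(N)
        old = X[m].copy()
        e_old = pair_energy(X, m, lam, V0)
        X[m] = old + rng.uniform(-delta, delta, 3)
        if rng.random() >= np.exp(min(0.0, e_old - pair_energy(X, m, lam, V0))):
            X[m] = old                             # reject the move
    return X
```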
## III Equilibrium Phase Diagram from the variational method
First, let us consider the system's behaviour upon a quasistatic change of the interaction parameters. In Fig. 1 we present the plot of the mean squared radius of gyration, $`R_g^2`$, versus the second virial coefficient, $`u^{(2)}`$, at fixed stiffness parameter $`\lambda`$. The regime of repulsion and comparatively weak attraction on the right-hand side of the figure corresponds to the extended coil conformation of the polymer, with a large radius of gyration scaling as $`R_g\sim N^{\nu_{coil}}`$, where the exponent $`\nu_{coil}`$ is close to the Flory value $`\nu_F=3/5`$ for a flexible chain, $`\lambda =0`$, and approaches the rigid-rod exponent $`\nu_{rod}=1`$ for a very stiff chain, with a cross-over in between. Since increasing the stiffness leads to a stronger effective repulsion between monomers, the extended phase expands into the region of negative second virial coefficient for higher values of the stiffness parameter.
For comparatively small values of $`\lambda`$ the plot of the radius of gyration (solid line and diamonds in Fig. 1) is quite similar to that of the flexible homopolymer at the equilibrium coil-to-globule transition (see e.g. Fig. 1 in Ref. ), which is second order. However, at higher values of the stiffness parameter the collapse transition becomes first order . In this case, after the system has been quasistatically quenched to the region of higher monomer attraction (line denoted by pluses in Fig. 1), the local minimum corresponding to the coil suddenly disappears, becoming an inflexion point somewhere in the interval $`-23<u^{(2)}<-22`$, and the system passes to another free energy minimum with a much smaller value of the radius of gyration. Similarly, upon changing $`u^{(2)}`$ quasistatically towards monomer repulsion (line denoted by quadrangles in Fig. 1), the free energy minimum disappears in the interval $`-12<u^{(2)}<-11`$, and the system transforms into the coil state. If at least two minima of the free energy coexist in some interval of the interaction parameters, the transition point is defined by the condition that the current minimum of the free energy $`\mathcal{A}`$ becomes the deepest one. Observables such as the mean energy, $`\mathcal{E}`$, the mean squared radius of gyration, $`R_g^2`$, and the mean squared distances between monomers, $`D_{mm^{\prime}}`$, experience a discontinuous jump at such a transition.
It is important to note that upon a quasistatic change of the second virial coefficient towards repulsion (line denoted by quadrangles in Fig. 1), the mean squared radius of gyration increases in a three-step-like fashion before the homopolymer expands to the coil. This is a manifestation of additional condensed phases, which we have denoted by the labels $`(T5)`$, $`(T6)`$ and $`(T7)`$. To understand the distinction between these phases and the conventional globule, let us compare the monomer-monomer mean squared distances, $`D_k`$, in these phases. These are exhibited in Figs. 2a, b. As we have discussed earlier , for the extended coil this function increases monotonically over the half-period of the chain. The situation remains similar for the coil of a stiff homopolymer (line denoted by diamonds in Fig. 2a).
However, the function $`D_k`$ is more sensitive to the stiffness in the globular state, especially at small values of the chain index $`k`$ (compare the line denoted by pluses with the solid line in Fig. 2a). At small values of $`k`$ the function is nearly parabolic, $`D_k\propto |k|^2`$, i.e. the chain is locally almost a rigid rod, reaching a maximum at some value of the chain index $`k^{*}`$. In an intermediate range of the chain index, $`0<k\lesssim 6k^{*}`$, one can see about 2-3 oscillations in the function $`D_k`$, with the amplitude decreasing quickly to stabilise at some level. At higher values of the chain index, towards half of the chain, the function remains constant. Thus, we can conclude that at small chain distances the structure of the globule of a fairly stiff polymer is quite different from that of the flexible chain. This is easy to understand. For a semi-flexible polymer, chain segments in the globule are locally straightened on a characteristic scale related to $`\lambda`$. As long as this scale is considerably smaller than the globule size, the shape of $`D_k`$ remains flat, as for the flexible chain. When this scale becomes comparable to the globule size, a few oscillations appear in the mean squared distances.
Transition from the conventional globule to the phase labelled $`(T7)`$ is accompanied by a spectacular change of the function $`D_k`$ (see the line denoted by quadrangles in Fig. 2b). In this phase the function oscillates strongly, with the amplitude decreasing rather slowly towards half of the chain. The ratio of the value of $`D_k`$ at a maximum to that at a minimum is about 5-6 near the middle of the chain. The phases are designated according to the number of oscillations: in phase $`(T7)`$ there are 7 oscillations, in phase $`(T6)`$ there are 6 oscillations (line denoted by pluses in Fig. 2b), and in phase $`(T5)`$ there are 5 oscillations (line denoted by diamonds). We also note that the smaller the number of oscillations, the higher the ratio of the value of $`D_k`$ at a maximum to that at a minimum.
We claim that the phases $`(Tn)`$ correspond to the toroidal conformation with the number of windings $`\mathcal{N}_w=n`$. The chain index $`k^{*}`$, where the function $`D_k`$ reaches its first maximum, is equal to the number of monomers forming the half-period of the first winding, starting from the zeroth monomer. Therefore, $`D_{max}=D_{k^{*}}`$ may be interpreted roughly as the mean squared external diameter of the torus. By moving from monomer number $`k^{*}`$ to $`2k^{*}`$ the first winding is completed. However, because of the excluded volume interaction the chain cannot return to the same coordinate, giving rise on average to the value $`D_{min}=D_{2k^{*}}`$, which may be considered the mean squared internal diameter of the torus. The winding number $`\mathcal{N}_w`$ is thus precisely the number of oscillations in $`D_k`$. The physical reason for a torus is clear: a persistent chain has no desire to bend, so it tends to have as large a radius of curvature as possible, while the two-body attraction tends to keep the chain quite closely packed.
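In practice, $`\mathcal{N}_w`$ can be read directly off a computed $`D_k`$ array by counting its maxima over the half-chain; a sketch (scipy's generic peak finder stands in here for whatever counting one prefers):

```python
from scipy.signal import find_peaks

def winding_number(D):
    """Count the maxima of D_k over the half-chain; D[i] is assumed to hold D_k for k = i + 1."""
    half = D[: len(D) // 2 + 1]
    peaks, _ = find_peaks(half)
    return len(peaks)

# The first maximum gives k*, so D[k* - 1] and D[2*k* - 1] then estimate the
# mean squared external and internal diameters of the torus, as described above.
```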
A quasistatic increase of the repulsion (see line denoted by quadrangles in Fig. 1) results in transformation of the conventional globule into the toroidal globule with $`\mathcal{N}_w=7`$ windings in the interval $`-27<u^{(2)}<-26`$. This is a rather weak discontinuous transition. It occurs when the characteristic scale of straightened segments reaches the size of the globule, and a hole is formed in the centre of the globule. Transitions between the various toroidal states are also first order, but much stronger, since they are accompanied by a global restructuring of the polymer conformation.
In Fig. 3 we present plots of the mean squared radius of gyration, $`R_g^2`$, upon a quasistatic change of the stiffness parameter $`\lambda`$ for different fixed negative values of the second virial coefficient, $`u^{(2)}`$, corresponding to the globule at $`\lambda =0`$. The radius of gyration increases monotonically during this change. The most significant changes occur in the region of rather small stiffness parameter, $`0<\lambda <1/2`$, and in the region where the conventional globule transforms into the toroidal state. The second change is associated with a weak first order transition. Note that the final toroidal state depends on the value of the second virial coefficient: at weaker attraction the globule is transformed into a toroidal state with a smaller winding number. However, the transition from the coil to the toroidal phases upon a quasistatic change of $`u^{(2)}`$ (line denoted by pluses in Fig. 1) is quite difficult due to a large potential barrier, which makes the coil a metastable state in a large region of the phase diagram.
In Fig. 4 we present the resulting phase diagram of the stiff homopolymer in terms of the second virial coefficient, $`u^{(2)}`$, and the stiffness parameter, $`\lambda`$. It contains the coil phase, where monomer attraction is insufficient to form compact states; the globule, in the region of either low stiffness or strong monomer attraction; and a number of toroidal phases characterised by distinct winding numbers $`\mathcal{N}_w`$. As we have already discussed, the collapse transition changes from continuous to discontinuous starting from some value of the stiffness. The globule of a semi-flexible polymer differs in its local structure from that of the flexible homopolymer, although the global scaling characteristics are the same in both cases. The toroidal phases lie in an intermediate region of $`u^{(2)}`$, starting from some critical value of the stiffness parameter. Some such states, with $`\mathcal{N}_w=2,3,4`$, are always metastable, while others, with $`\mathcal{N}_w=5,6,7`$, can become thermodynamically stable.
For a large fixed value of the stiffness parameter the number of toroidal phases increases approximately linearly with the degree of polymerisation, $`N`$. For example, the maximal winding numbers at $`\lambda =25`$ for polymers with degrees of polymerisation $`N=50`$, $`100`$, $`150`$ and $`200`$ are $`\mathcal{N}_w=4`$, $`7`$, $`10`$ and $`13`$, respectively.
## IV Results from Monte Carlo simulation
In the series of pictures in Fig. 5 we exhibit typical conformations in the various phases from the off-lattice Monte Carlo simulation. Fig. 5a corresponds to the extended conformation, which for large values of the stiffness parameter takes a form close to a ring. In Fig. 5b we draw the backbone of a typical globular conformation for values of the stiffness parameter not large enough to form a toroidal state. The globule structure here is quite different from that of a flexible homopolymer in Fig. 5c. One can see that it consists of entangled loops of a radius close to the size of the globule. The function of the mean squared distances between monomers here has the typical form presented by pluses in Fig. 2a, with significant oscillations at small chain distances, which quickly saturate due to the varying number of monomers in each loop. We should also note that the globule of a stiff homopolymer is not quite spherical, but rather resembles an ellipsoid, either flattened or elongated. Thus, we avoid calling it a "spherical globule" as we did in Ref. . Increasing the stiffness parameter transforms such a globule into the toroidal conformation exhibited in Fig. 5d.
It is important to note that to produce the globular state as in Fig. 5b, or the toroidal state as in Fig. 5d, in a Monte Carlo simulation it is much easier to bring the system first to the globule of the flexible homopolymer (Fig. 5c), and then to increase the stiffness parameter quasistatically. If instead we change $`V_0`$ quasistatically at a fixed, sufficiently large $`\lambda`$, reaching equilibrium would be difficult. First, as we have mentioned earlier, the region of the metastable coil is rather wide for large values of the stiffness. Second, the system may become trapped in a metastable state during such a process. Polymer conformations in these states have the typical form of a hairpin (see Fig. 6c). Here the chain folds a few times along a nearly straight line, forming abrupt U-turns near the ends. These ends, however, contribute more significantly to the bending energy than a uniform slow bending. Thus, such conformations possess a higher energy and hence are metastable.
Let us consider in more detail the kinetic process of hairpin formation after an instantaneous change of the two-body attraction, $`V_0`$, starting from the ring-like conformation of Fig. 5a. Due to the stiffness, the initial ring conformation remains stable for some time. However, due to thermal fluctuations some distant parts of the chain occasionally meet each other, so that the chain acquires the shape of the digit '8', as in Fig. 6a. If the monomers are attractive enough, parts of the chain align along each other, starting from the centre towards the ends (see Fig. 6b). The process can repeat itself if the persistent length is shorter than half of the chain. Without quite sophisticated collective movements of the chain segments it is virtually impossible to proceed further towards the fully collapsed state. The snapshot in Fig. 6c corresponds to $`10^7`$ MCS, and the system is still trapped in the same hairpin state.
In the series of pictures in Fig. 7 we exhibit polymer conformations during the kinetics of folding for a smaller value of the stiffness parameter, $`\lambda =5`$. After a long evolution during which the ring remains practically unchanged, some loops of a size comparable to the persistent length are formed (see Fig. 7a); these continue to grow by picking up the slack of the chain, thus forcing a few more loops to form. Then a kind of star-like structure including a few hairpins is produced, as in Fig. 7b. These hairpins fold onto each other, producing a sausage-like object (see Fig. 7c). The further, rather slow kinetic process involves refolding of the sausage, accompanied by its broadening and shortening (see Fig. 7d). Thus, the kinetics of folding leads to elongated sausage-like conformations, whereas a similar quasistatic change of the stiffness would tend to produce a flattened rather than elongated globule (see Fig. 5b).
Finally, let us note that the hairpin conformations have not been obtained by the GSC method, either at equilibrium or even as intermediate kinetic states, although they are present as local free energy minima. This is probably related to the fact that in the Gaussian method the monomer-monomer correlation functions are represented by only one parameter characterising the mean squared distances. On the other hand, the lack of collective moves in the Monte Carlo scheme quite likely overestimates the stability of hairpin conformations as metastable or kinetically arrested states.
## V Conclusion and Discussion
In this paper we have completed the study of the phase diagram of a stiff homopolymer based on the Gaussian variational method, and have also performed equilibrium and kinetic Monte Carlo simulations in continuous space. Compared to the previous work, Ref. , we have shown here that the region corresponding to the toroidal globule actually consists of a number of strips corresponding to tori with different winding numbers. The transition curves separating such states from each other, as well as from the coil and the spherical globule, all correspond to first order transitions and terminate in critical points. For a given, sufficiently large degree of polymerisation $`N`$ there exists a certain number of distinct toroidal states, which grows approximately linearly with $`N`$. The distinction between such toroidal states is clearly visible in the function of the mean squared distances between monomers, which shows a number of oscillations precisely equal to the winding number. The existence of the toroidal states has also been confirmed by off-lattice Monte Carlo simulation for a similar model of local stiffness. In addition, hairpin conformations with abrupt U-turns, corresponding to metastable states, have been observed. These also appear as nonequilibrium intermediates during the kinetics of folding.
We should emphasise that the results of the previous work, Ref. , at sufficiently strong monomer attractions do suffer somewhat from the artefacts of the de Gennes–des Cloizeaux–Edwards bead-and-spring model based on the virial expansion. Here, by including the $`\mathcal{E}_{si}`$ term introduced in Ref. , we have managed to fix the problem at a practical level, although its fundamental resolution remains a matter of future work . The changes in the results are as follows. First, the region designated 'Torus' in Fig. 1 of Ref. expands and covers all of the region designated 'Spherical Globule' there (i.e. curves III, III' and III'' in Fig. 1 of Ref. shift to the left of the axis $`\lambda =0`$). Indeed, a slightly oscillating behaviour of $`D_k`$ is related only to the existence of locally straightened sections of the chain, and does not necessarily correspond to a torus. Second, the metastable states which we called T', T'' and so on in Fig. 6 of Ref. now become thermodynamically stable in some regions of the phase diagram in Fig. 4. Oscillations of the function $`D_k`$ for the true toroidal states are much stronger (see Fig. 2b), and this function never reaches a steady level. Most importantly, there are a few distinct toroidal states of a ring polymer, separated by first order transitions and distinguished by the winding number. Nevertheless, the main conclusion of Ref. , that the toroidal conformation exists in a triangularly shaped region of the phase diagram, remains valid. Most of the general conclusions of Ref. about the stability of the toroidal conformation and the kinetics of conformational changes remain unchanged too.
We would like to emphasise that the existence of the toroidal states is closely related to the choice of the bending energy as the square of the local curvature of the chain. Such a choice is natural for representing the persistent flexibility of polymers, which is due to rather small harmonic fluctuations in the bending of chain sections. This mechanism of flexibility is dominant for many helical or rather stiff chains such as DNA, for which the toroidal states have been observed experimentally .
The rotational-isomeric flexibility is also very important for many polymers. Such polymer molecules exhibit flexibility due to rotation around carbon-carbon and other bonds, as the minima of the torsional potential corresponding to the gauche and trans configurations differ in depth by about $`k_BT`$. For representing this mechanism of flexibility a model with discrete bending angles is more appropriate. Such models, however, possess crystalline, solid-like states instead of toroidal ones. Such a difference in conformational states is genuine in our view, and both types of structures are observable experimentally, depending on the particular polymer system.
Another important prerequisite for obtaining a toroidal state even in a model with persistent flexibility mechanism is that the processes of inter–chain aggregation do not occur, otherwise more complicated self–assembled structures may be formed. For instance, aggregation of triple helix collagen molecules leads to self–assembly of various fibrils.
###### Acknowledgements.
The authors are grateful for most interesting discussions to Professor A.Yu. Grosberg and Professor K.A. Dawson, and also to Professor K. Binder and Dr A.V. Gorelov. One of us (E.G.T.) acknowledges the support of the Enterprise Ireland grants IC/1999/001 and SC/99/186.
## The self–interaction energy term
Let us discuss in more detail the appearance of the self-interaction energy term, $`\mathcal{E}_{si}`$, the 4th term in Eq. (3). Such a term has been introduced in Ref. for heteropolymers, for which the two-body interaction matrix, $`u_{mm^{\prime}}^{(2)}`$, is site dependent. Generally, one has to discard the singular terms with coinciding indices in the virial expansion, as we have done in Eq. (3). It turns out, however, that the resulting free energy possesses some pathological minima with singular free energy if at least one element of the matrix $`u_{mm^{\prime}}^{(2)}`$ becomes negative. Indeed, let us consider the interactions of just three monomers under the condition that the mean squared distances from monomers '0' and '1' to '2' are equal to each other, $`D_{0,2}=D_{1,2}=D`$. These interactions produce the mean energy contribution,
$$\mathcal{E}_3=\frac{u_{0,2}^{(2)}+u_{1,2}^{(2)}}{(2\pi D)^{3/2}}+\frac{1}{(2\pi D_{0,1})^{3/2}}\left(u_{0,1}^{(2)}+\frac{6u^{(3)}(2\pi)^{-3/2}}{(D-D_{0,1}/4)^{3/2}}\right).$$

(6)
If $`u_{0,1}^{(2)}<0`$ and monomer '2' is placed far enough away from monomers '0' and '1', $`D>D_{0,1}/4+\left(6u^{(3)}(2\pi)^{-3/2}/|u_{0,1}^{(2)}|\right)^{2/3}`$, then obviously in the limit $`D_{0,1}\to 0`$ the mean energy possesses a singular minimum, $`\mathcal{E}_3\to -\infty`$. As for the free energy, the logarithmic divergence of the entropy cannot change the situation, so $`\mathcal{A}\to -\infty`$ as well. One can also show that the inclusion of more monomers in the chain, or of interactions higher than three-body, does not improve the situation, but produces more and more such pathological solutions.
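This divergence is easy to verify numerically; the sketch below evaluates Eq. (6) for illustrative (hypothetical) interaction parameters while shrinking $`D_{0,1}`$ at fixed $`D`$ above the threshold:

```python
import numpy as np

u2_01, u2_02, u2_12, u3 = -5.0, -5.0, -5.0, 10.0      # illustrative values only

def E3(D, D01):
    """Mean three-monomer energy of Eq. (6)."""
    bracket = u2_01 + 6.0 * u3 * (2 * np.pi) ** -1.5 / (D - D01 / 4.0) ** 1.5
    return (u2_02 + u2_12) / (2 * np.pi * D) ** 1.5 + bracket / (2 * np.pi * D01) ** 1.5

# keep D safely above the threshold D01/4 + (6 u3 (2 pi)^(-3/2) / |u2_01|)^(2/3)
D = 2 * (0.25 + (6 * u3 * (2 * np.pi) ** -1.5 / abs(u2_01)) ** (2 / 3))
for D01 in (1.0, 0.1, 0.01, 0.001):
    print(f"D01 = {D01:g}: E3 = {E3(D, D01):.4g}")    # E3 -> -infinity as D01 -> 0
```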
The problem can be quite easily remedied by using another prescription: replacing the terms with coinciding indices by self-interaction terms . Namely, we should add the following term,
$$\mathcal{E}_{si}=c_3u^{(3)}\sum_{m\neq m^{\prime}}\left\langle\delta(\mathbf{X}_m-\mathbf{X}_{m^{\prime}})\right\rangle^2=c_3\frac{u^{(3)}}{(2\pi)^3}N\sum_k D_k^{-3},$$

(7)
where $`c_3=3`$ is a combinatorial factor related to the three possible ways of having coinciding pairs of indices in a triple summation. Obviously, the higher negative power of $`D_k`$ in Eq. (7) compared to the two–body term in Eq. (3) prevents one monomer from falling onto another.
Interestingly enough, this problem is hidden for a ring homopolymer. Indeed, due to the inverse symmetry , we have the property that for any indices $`m`$, $`m^{\prime}`$ the following mean squared distances are equal: $`D_{m,m^{\prime}}=D_{m,2m-m^{\prime}}=D_{2m^{\prime}-m,m^{\prime}}`$. This provides sufficient repulsion from the three-body term to preclude any pathological solutions. Nevertheless, without the self-interaction energy term the theory exhibits unphysical behaviour: forcing two monomers very close to each other produces only a rather weak repulsion of this pair from all very distant monomers. This repulsion comes from the three-body interaction, and it remains dominant no matter how strong the two-body attraction between monomers is.
This effect comes into play only for sufficiently large negative $`u^{(2)}`$. In particular, it leads to a somewhat more convex than expected shape of the $`D_k`$ function for the globule of a flexible homopolymer. Nevertheless, the kinetics of folding remains little affected by this deficiency. Adding the self-interaction energy term improves the situation and results in a flat shape of $`D_k`$ (see Ref. for more detail). For a stiff homopolymer the problems associated with the pathological three-body repulsion in the absence of $`\mathcal{E}_{si}`$ are even more pronounced. Thus, from Fig. 4 of Ref. one can see that there the size of the spherical globule was larger than that of the toroidal globule in the vicinity of the transition. Including the $`\mathcal{E}_{si}`$ term reverses this situation, as we have seen above. It also lowers the depths of the additional local free energy minima corresponding to phases such as T' and T'' in Fig. 6 of Ref. . Finally, the inclusion of the $`\mathcal{E}_{si}`$ term leads to better agreement with various results from Monte Carlo simulations for flexible and stiff homopolymers.
## 1 Introduction
For over 20 years since the pioneering surveys of cluster galaxies by Butcher and Oemler and field galaxies by Peterson et al. , observations of distant normal galaxies have continually been intensified to probe their formation and evolution. Such explorations started with only photometric data in the form of number counts and colors, as well as clustering statistics and photometric redshifts. Spectroscopic redshifts and other low spectral resolution information, such as star formation rates from \[OII\] line strengths or age indices from the HK 4000 Å break, soon followed. Even more recently, studies have progressed to include morphologies, sizes, and structures from HST images. The reader is referred to the contribution in this volume by Hammer for an up-to-date overview of the global properties of galaxy evolution.
In the last several years, a new and very powerful dimension has been added to the suite of tools for exploring distant galaxies, namely internal kinematics and dynamics (masses). Although the potential was discussed over a decade earlier by Kron , only recently have advances in instrumentation and access to larger telescopes allowed such programs to become practical.
### 1.1 Varieties of Mass
Mass is a term whose exact meaning is often unspecified and dependent upon the user and context. Confusion arises because mass, especially in its baryonic forms, comes in many varieties. Dark matter, e.g., can be hot or cold, while gas, stars, dust, planets, and black holes can be further divided depending on location (bulge, disk, halo), time (young or old) or redshift (high or low), color (blue or red) or temperature (hot or cold), different elements or states (e.g., HI, HII, H₂, CO, gas, hot or warm dust), etc.
The importance of securing mass measurements for the study of distant galaxy evolution appears obvious. After all, total or dynamical mass is a fundamental physical property of galaxies. And, given that star formation remains a poorly understood physical process, mass possesses much closer and more direct ties to our current theories and simulations of galaxy formation than luminosity does. Moreover, dynamical mass provides a new, very rich dimension for exploring galaxy formation and evolution: one that is in principle independent of luminosity variations; that is conserved in isolated volumes; and that is likely to be related to the bias in the clustering behavior of galaxies. Mass can be measured for individual galaxies to yield, e.g., M/L; to assess whether an object is a bursting dwarf; to estimate the fraction of dark matter; or to determine the relative proportions of stars and gas. Due to the diversity of galaxy types and properties, however, masses for statistically complete samples are more likely to reveal convincing, unique, and well characterized evidence for evolution. Such samples are essential to yield, e.g., the evolution in the volume density of galaxies with different velocity widths or M/L, or the clustering amplitude of galaxies as a function of mass or M/L.
Yet, despite their importance and potential, mass measurements for distant galaxies have been sparse. In part this lack of data is due to the greater difficulty in making such observations as compared to redshifts alone. Also, many astronomers have the common perception that mass and light are so tightly correlated for most local galaxies that additional efforts to obtain mass measurements of faint distant galaxies would be redundant and expensive.
### 1.2 Techniques to Measure Mass
For certain kinds of mass, the radiation itself is a direct measure. Examples include HI from 21cm flux, old stars from rest-frame near-infrared luminosities, or dust mass from submm if the temperature is well constrained. Indirect measures include HI from CO, total stellar mass from light assuming some average M/L, total stellar mass from star formation rates and assumed lifetimes, or mass of gas from column densities and sky coverage as probed by QSO absorption lines. Instead of the masses of various subcomponents, the total mass (dark or luminous) is almost always also desired. A relatively new technique to measure total mass exploits gravitational lensing, which can be applied to individual galaxies or clusters via strong lensing events; to statistical samples of galaxies via galaxy-galaxy weak lensing; or to the large scale mass power spectrum via global weak lensing patterns.
For the vast bulk of galaxies, total masses are estimated from $`v^2r`$, which assumes virialization and that the observed internal kinematic velocities ($`v`$) and sizes ($`r`$) are reliable tracers of the gravitational potential structure and depth. The kinematics of each galaxy can be extracted via 1-D, 2-D, or 3-D information, as described below.
In the 1-D case of flux versus wavelength, the integrated spectrum of a galaxy is used to extract the velocity widths of emission lines (as studied for HI gas via 21cm observations or for ionized gas in star forming galaxies via optical spectra) or the velocity dispersions of stellar absorption lines (e.g., as used in the Faber-Jackson or Fundamental Plane relations for elliptical galaxies). In general, 1-D optical data are obtained with a single aperture or individual optical fiber for each galaxy.
In 2-D, the addition of spatial resolution along one axis provides rotation curves. The extra dimension yields qualitatively new types of information, such as the radial change in M/L that can reveal the amount of dark matter; the velocity distortions that might be signatures of mergers; or gradients in the velocity dispersion of disks that may reflect the rate of heating by satellites. Most of these measurements are based on long-slit apertures, usually along the major axis of a galaxy.
In 3-D, the remaining angular spatial dimension is added and provides information on the position angle, inclination, possible asymmetries, etc. in the kinematics of a galaxy . The most common optical instruments to extract such 3-D data for distant galaxies include integral field units with bundles of optical fibers, Fabry-Perot systems, and ramped narrow-band filter systems.
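As a rough reference point for the $`v^2r`$ estimate quoted above, here is a minimal worked example (with the gravitational constant restored, and with illustrative values for a large spiral):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
kpc = 3.086e19       # m

v = 200e3            # rotation speed, m/s (illustrative)
r = 10 * kpc         # radius probed by the rotation curve (illustrative)

M = v ** 2 * r / G
print(f"M ~ {M / M_sun:.1e} M_sun")   # ~ 9e10 M_sun for these values
```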
## 2 Overview of Kinematic Studies of Distant Galaxies
Given the difficulty of securing just the redshift, much less any line profile information, the distant galaxy spectral surveys of the 1980's through the early 1990's relied on relatively low spectral resolution spectrographs, typically with velocity resolutions of $`\sim`$ 500 to 1000 $`\mathrm{km\,s^{-1}}`$, which is inadequate for internal kinematic studies of most galaxies. Although 4-m to 6-m class telescopes are able to yield kinematics for brighter galaxies at intermediate redshifts, the vast bulk of distant galaxy kinematic observations has come from the Keck 10-m Telescopes, which started operating in 1994. The following subsections highlight the major scientific results from these pioneering efforts.
### 2.1 Emission Line Kinematics
Unless distant galaxies produce significant gas motions unrelated to their mass, or their emission line spatial distribution is unrepresentative of the underlying structure, the widths of emission lines from their integrated spectra should provide a good diagnostic of their gravitational potential. Motivated by three factors: 1) the relatively good resolution of the Low Resolution Imaging Spectrograph (LRIS ) of less than 100 $`\mathrm{km\,s^{-1}}`$; 2) the availability of high spatial resolution images from the Hubble Space Telescope to derive inclination angles and sizes of galaxies; and 3) the desire to test a number of claims of dramatic evolution of moderate redshift field galaxies, the Deep Extragalactic Evolutionary Probe (DEEP: see URL http://www.ucolick.org/deep/home.html) team initiated a number of pilot surveys of internal kinematics . One early LRIS survey included 18 apparent spirals brighter than $`I\sim 22`$. They were found to have redshifts $`z\sim 0.2`$ to 0.8 and integrated velocity widths that yielded M/L ratios only about one magnitude brighter than those of local spirals . Due to LRIS being down at the last moment of a scheduled run, another DEEP survey used the only available instrument, the High Resolution Spectrograph (HIRES: ) with a velocity resolution of less than 10 $`\mathrm{km\,s^{-1}}`$ on Keck, to observe a sample of 17 very blue, very luminous ($`L^{*}`$) compact galaxies with $`B\sim 20`$ to 23 and redshifts $`z\sim 0.1`$ to 0.7 . The surprisingly small velocity dispersions of 30 to 60 $`\mathrm{km\,s^{-1}}`$ for most of these galaxies, along with their small sizes, suggested that they were similar to HII galaxies and perhaps the progenitors of luminous spheroidal galaxies seen locally . Similar results were found for a sample of 6 very strong emission-line galaxies in the cluster Cl 0024+1654 at $`z\sim 0.4`$ . Further LRIS observations of even fainter ($`I<23.5`$) compact galaxies found in the flanking fields of the Hubble Deep Field showed that such galaxies are likely to be significant contributors to the star formation rate at high redshifts $`z\sim 0.8`$, and that the galaxies with high specific star formation rates (per unit mass) may be shifting to higher masses at higher redshifts .
In the meantime, two other groups pushed the limits of 4-m class telescopes with instruments that yielded velocity resolutions of about 50 $`\mathrm{km\,s^{-1}}`$. One group used the AUTOFIB fiber-optics spectrograph on the AAT to explore the kinematics of 24 blue galaxies with $`B\sim 21.5`$ at redshifts $`z\sim 0.25`$ . The other group used the Subarcsec Imaging Spectrograph (SIS: ) on the CFHT to examine 24 others with $`I<22`$ and $`z\sim 0.6`$ . Both groups claimed detection of 1 mag to 2 mag of luminosity brightening compared to local counterparts. As previously found , the evidence for evolution was particularly strong among compact galaxies . These surveys reveal that the masses of otherwise similarly luminous galaxies (typically near $`L^{*}`$) span from genuine dwarfs with $`\sim 10^9\,\mathrm{M}_{\odot}`$ to bonafide giants with well over $`10^{11}\,\mathrm{M}_{\odot}`$. The poor correlation of mass with luminosity found for some, albeit very blue, galaxies justifies the clear need for internal kinematics measurements in the study of distant galaxy evolution.
Finally, we note that even the internal kinematics of the important, but very faint ($`R\sim 24`$ to 26 mag), population of Lyman-drop galaxies has been studied via the velocity widths of their emission lines. (Lyman-drop is preferred over the more popular term Lyman-break galaxies, since the Ly$`\alpha`$ forest also contributes to the shape of the spectrum.) A major issue is whether the masses of these galaxies are large, as inferred from theory and their observed strong clustering properties , or are low if they are instead star-bursting pre-merger components, starbursting subcomponents within larger-mass halos, or progenitors of genuine dwarf galaxies . The spectral evidence is meager but favors small masses, with emission line widths of generally less than 100 $`\mathrm{km\,s^{-1}}`$ from Ly$`\alpha`$ as measured in the optical, or from the more reliable H$`\beta`$ and \[OIII\] lines obtained in the near-infrared .
### 2.2 Absorption Line Velocity Dispersions
These measures are considerably more difficult than the widths of strong emission lines, due to the need for high S/N in the continuum. Thus far, absorption line widths have mainly been measured in distant, luminous early-type galaxies. This challenging work typically requires the use of 8-10m class telescopes, moderately good spectral resolution, and a corresponding set of stellar templates for cross-correlations. The handful of studies compare the M/L ratio and Fundamental Plane of distant versus local E/S0 galaxies in rich clusters and in the field . The results are reassuringly consistent, with any changes being a good match to just mild passive luminosity evolution, which predicts $`\sim 1`$ mag of brightening by redshift $`z\sim 0.85`$ (see Fig. 1).
### 2.3 Spatially Resolved Kinematics
These observations require both good spectral resolution and moderately high spatial resolution in the spectra, as well as information on inclination and position angle of the major axis for each galaxy. Simulations are generally needed to derive such parameters as the terminal velocity, by accounting for seeing, slit width, optical PSF, inclination, etc.
At the lower redshift regime, 4-m to 5-m class programs have already yielded interesting, but apparently inconsistent, results. In one case, the claim is that 40 high-quality rotation curves of spirals with redshifts $`z\sim 0.2`$ to 0.4 show no evidence for evolution in the Tully–Fisher relationship between luminosity and velocity after corrections are made for the colors of the galaxies . Another group studied a sample of 22 strong \[OII\]-emitting galaxies with $`z\sim 0.35`$ and $`R\sim 21`$ with the SIS and the Multi Object Spectrograph (MOS: ) on CFHT . They find nearly 2 mag of luminosity brightening when compared to local samples. At yet higher redshifts, to $`z\sim 1`$, the Keck Telescope with LRIS has yielded over 30 decent rotation curves for galaxies that appear to be relatively large spirals in HST images (see Fig. 2 for examples). As seen in Fig. 3, there is little, if any, offset ($`<0.5`$ mag) between this high-redshift sample and local Tully–Fisher samples. This result is particularly interesting for two reasons. First, at least some theories predict quite rapid disk evolution , while these observations, along with the evidence for little change in the volume density of large disk galaxies , appear to suggest otherwise. Second, the stark contrast of the spiral results with the evidence for much stronger evolution among compact galaxies (subsection 2.1) is suggestive of an important missing component in our understanding of galaxy evolution.
Although Lyman-drop galaxies are generally too small ($`\sim 0.2^{\prime \prime }`$) to yield rotation curves with present ground-based systems, some appear to have multiple or extended components, which then allow spatially resolved kinematics to be gathered. So far, results exist for only two galaxies: one is consistent with a very low mass of less than $`10^{10}\mathrm{M}_{\odot }`$ (see Fig. 4), while the other has a mass of $`3\times 10^{11}\mathrm{M}_{\odot }`$, more typical of luminous galaxies . Given the unknown inclination of the plane of their relative motions and their true sizes or separations, more data are clearly needed to assess whether Lyman-drop galaxies are generally large or small mass systems.
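For a sense of the scale involved, the rough estimate $`M\sim v^2r/G`$ with a velocity of $`100\mathrm{km}\mathrm{s}^{-1}`$ over a few kpc indeed lands below $`10^{10}\mathrm{M}_{\odot }`$; the arithmetic is sketched below with illustrative input values (the radius is an assumption, not a measured quantity).

```python
# Order-of-magnitude dynamical mass from a velocity width:
# M ~ v^2 r / G, with G in convenient units of kpc (km/s)^2 / M_sun.
G = 4.30e-6          # kpc (km/s)^2 / M_sun
v = 100.0            # km/s, upper end of the observed line widths
r = 2.0              # kpc, an assumed emitting-region radius

M = v**2 * r / G
print(f"M ~ {M:.1e} M_sun")   # roughly 5e9 M_sun, i.e. below 10^10 M_sun
```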
## 3 Future Prospects
The aforementioned pioneering surveys clearly demonstrate the feasibility of acquiring kinematics and dynamics of high-redshift galaxies. The resultant scientific conclusions, even if currently tentative and based on limited samples, highlight the potential to extract critical and unique clues to their nature and evolution. The whole field of observing the dynamics of distant galaxies is still in its infancy and will soon experience explosive growth.
Over the next decade, we will see the completion of several major redshift surveys with high enough spectral resolution to yield internal kinematics. These include the Sloan Digital Sky Survey (SDSS), which will firmly set the foundation for the local internal kinematic properties of diverse galaxy types; the next phase of DEEP, which will aim for about 50,000 galaxies at $`z\sim 0.9`$ ; and the VIRMOS survey on the VLT , which will access not only the optical but also the near-IR, which reaches the rest-frame optical for redshifts beyond $`z\sim 1`$. In principle, such kinematic surveys can be used to estimate the masses of galaxies on larger scales than actually observed in the luminous portions of galaxies, by measuring the distribution of relative velocities of galaxy pairs at different separations. More direct measures of dynamical mass on larger scales are likely to come from weak lensing surveys, several of which are already underway; some will take advantage of photometric redshifts to vastly improve the S/N by discriminating between foreground and background sources. Moreover, adaptive optics (see Fig. 5) along with NGST will yield detailed kinematics of very high redshift and even the most compact galaxies. Finally, planned enhancements to submm, mm, and radio telescopes should provide direct measures of the amount and motions of various forms of gas in distant galaxies (a recent example at $`z\sim 3.4`$ uses 21 cm in absorption ).
The science from these large samples of mass measurements for distant galaxies will be revolutionary. Galaxy formation, e.g., will be observed as a physical process involving the hierarchical buildup of mass, rather than the current popular trend of concentrating on galaxy formation as reflected by the star formation rate (SFR), i.e., just the conversion of gas into stars. One diagnostic for exploring this merging process is the evolution of the comoving volume density function (EOCVDF) of mass or velocity; another is to combine the information with luminosity to measure the EOCVDF(M/L). With the advent of high spatial resolution spectroscopy that enables separation of the structure and kinematics of galaxy subcomponents, the EOCVDF(disk mass, M/L, scale-length) or EOCVDF(bulge mass, M/L, size) will be dramatic enhancements to current studies of the Tully-Fisher and Fundamental Plane relations. Adding other dimensions of information such as SFR, chemical abundances, and stellar ages will only add richness and depth to the forthcoming revolution in distant galaxy studies. As emphasized in the introduction, mass comes in diverse forms, so the previous discussion applies also to dark matter, halos, HI, etc., as new telescopes and tools allow discrimination among them.
This revolution will extend even to cosmology. Large-scale structure and bias, e.g., will be characterized and discriminated by mass (the mass power spectrum) instead of light. Independent tests of the global geometry can be undertaken by using the Fundamental Plane of ellipticals or via the classical volume tests . The latter is feasible through the EOCVDF of quantities that are expected to be conserved, such as the total baryonic mass (obtained by combining results from X-ray to radio) or the total mass (from gravitational lensing).
Acknowledgements. DEEP was initiated by the Berkeley Center for Particle Astrophysics (CfPA) and has been supported by various NSF, NASA, and STScI grants over the years, including NSF AST-9529098 and STScI AR-07532.01-96. K. Gebhardt, N. Vogt, A. C. Phillips, and J. Lowenthal are especially thanked for providing the figures. I also thank V. Rubin and R. Kron for their encouragement to me in the early 1980’s to explore the dynamics of distant galaxies.
# Dynamics of Nonequilibrium Deposition
Vladimir Privman
Department of Physics, Clarkson University, Potsdam, New York 13699–5820, USA
ABSTRACT
In this work we survey selected theoretical developments for models of deposition of extended particles, with and without surface diffusion, on linear and planar substrates, of interest in colloid, polymer, and certain biological systems.
1. Introduction
Dynamics of important physical, chemical, and biological processes, e.g., \[1-2\], provides examples of strongly fluctuating systems in low dimensions, $`D=1`$ or 2. These processes include surface adsorption, for instance of colloid particles or proteins, possibly accompanied by diffusional or other relaxation (such as detachment), for which the experimentally relevant dimension is that of planar substrates, $`D=2`$, or that of large collectors. The surface of the latter is also effectively two-dimensional, owing to the collectors' large size as compared to the size of the deposited particles.
For reaction-diffusion kinetics, the classical chemical studies were for $`D=3`$. However, recent emphasis on heterogeneous catalysis generated interest in $`D=2`$. Actually, for both deposition and reactions, some experimental results exist even in $`D=1`$ (literature citations will be given later). Finally, kinetics of ordering and phase separation, largely amenable to experimental probe in $`D=3`$ and $`2`$, attracted much recent theoretical effort in $`D=1,2`$.
Theoretical emphasis on low-dimensional models has been driven by the following interesting combination of properties. Firstly, models in $`D=1`$, and sometimes in $`D=2`$, allow derivation of analytical results. Secondly, it turns out that all three types of model (deposition with relaxation, reaction-diffusion, and phase separation) are interrelated in many, but not all, of their properties. This observation is by no means obvious; in fact, it is model-dependent and can be firmly established and explored only in low dimensions, especially in $`D=1`$, see, e.g., \[1-2\].
It turns out that for systems with stochastic dynamics and no equilibrium state, important regimes, such as the large-time asymptotic behavior, are frequently governed by strong fluctuations, manifested in power-law rather than exponential time dependence, etc. However, the upper critical dimension, above which the fluctuation behavior is described by the mean-field (rate-equation) approximation, is typically lower than in the more familiar and better studied equilibrium models. As a result, attention has been drawn to low dimensions, where the strongly fluctuating non-mean-field behavior can be studied.
Low-dimensional nonequilibrium dynamical models pose several interesting challenges, both theoretically and numerically. While many exact, asymptotic, and numerical results are already available in the literature, as reviewed in \[1-2\], this field presently provides examples of properties (such as power-law exponents) which lack theoretical explanation even in $`1D`$. Numerical simulations are challenging and require large scale computational effort already for $`1D`$ models. For the more experimentally relevant $`2D`$ cases, where analytical results are scarce, difficulty in numerical simulations has been the “bottleneck” for understanding many open problems.
The purpose of this work is to provide an introduction to the field of nonequilibrium surface deposition models of extended particles. No comprehensive survey of the literature is attempted. Relations of deposition to the other low-dimensional models mentioned earlier will be referred to in detail only in a few cases. The specific models and examples selected for a more detailed exposition, i.e., models of deposition with diffusional relaxation, were biased by the author's own work.
The outline of the review is as follows. The rest of this introductory section is devoted to defining the specific topics of surface deposition to be surveyed. Section 2 describes the simplest models of random sequential adsorption. Section 3 is devoted to deposition with relaxation, with general remarks followed by definition of the simplest, $`1D`$ models of diffusional relaxation for which we present a more detailed description of various theoretical results. Multilayer deposition is also addressed in Section 3. More numerically-based $`2D`$ results for deposition with diffusional relaxation are surveyed in Section 4, along with concluding remarks.
Surface deposition is a vast field of study. Indeed, dynamics of the deposition process is governed by substrate structure, substrate-particle interactions, particle-particle interactions, and the transport mechanism of particles to the surface. Furthermore, deposition processes may be accompanied by particle motion on the surface and by detachment. Our emphasis here will be on those deposition processes where the particles are “large” as compared to the underlying atomic and morphological structure of the substrate and as compared to the range of the particle-particle and particle-substrate interactions. Thus, colloids, for instance, involve particles of submicron to several micron size. We note that 1 $`\mu `$m $`=10000`$ Å, whereas atomic dimensions are of order 1 Å, while the range over which particle-surface and particle-particle interactions are significant as compared to $`kT`$ is typically of order 100 Å or less.
Extensive theoretical study of such systems is relatively recent, and it has been motivated by experiments where submicron-size colloid, polymer, and protein “particles” were the deposited objects; see \[3-18\] for a partial literature list, as well as other articles in this issue. It is usually assumed that the main mechanism by which particles “talk” to each other is the exclusion effect due to their size. In contrast, deposition processes associated, for instance, with crystal growth involve atomic-scale interactions, and while the particle-particle exclusion is always an important factor, its interplay with other processes which affect the growth dynamics is quite different.
Perhaps the simplest and the most studied model with particle exclusion is Random Sequential Adsorption (RSA). The RSA model, to be described in detail in Section 2, assumes that particle transport (incoming flux) onto the surface results in a uniform deposition attempt rate $`R`$ per unit time and area. In the simplest formulation, one assumes that only monolayer deposition is allowed. This could correspond, for instance, to repulsive particle-particle and attractive particle-substrate forces. Within this monolayer deposit, each newly arriving particle must either “fit in” to an empty area allowed by the hard-core exclusion interaction with the particles deposited earlier, or the deposition attempt is rejected.
As mentioned, the basic RSA model will be described shortly, in Section 2. More recent work has been focused on its extensions to allow for particle relaxation by diffusion, Sections 3 and 4, to include detachment processes, and to allow multilayer formation. The latter two extensions will be briefly surveyed in Section 3. Many other extensions, such as “softening” the hard-core interactions or modifying the particle transport mechanism, will not be discussed \[21-22\].
2. Random Sequential Adsorption
The irreversible Random Sequential Adsorption (RSA) process \[21-22\] models experiments on colloid and other, typically submicron, particle deposition \[4-16\] by assuming a planar $`2D`$ substrate and, in the simplest case, continuum (off-lattice) deposition of spherical particles. However, other RSA models have received attention. In $`2D`$, noncircular cross-section shapes as well as various lattice-deposition models were considered \[21-22\]. Several experiments on polymers and on attachment of fluorescent units to DNA molecules (the latter, in fact, usually accompanied by motion of these units on the DNA “substrate” and by detachment) suggest consideration of lattice-substrate RSA processes in $`1D`$. RSA processes have also found applications in traffic problems and certain other fields, and they were reviewed extensively in the literature \[21-22\]. Our presentation in this section aims at defining some RSA models and outlining characteristic features of their dynamics.
Figure 1 illustrates the simplest possible monolayer lattice RSA model: irreversible deposition of dimers on the linear lattice. An arriving dimer will be deposited if the underlying pair of lattice sites are both empty. Otherwise, it is discarded. Thus, the deposition attempt of $`a`$ will succeed. However, if the arriving particle is at $`b`$, then the deposition attempt will be rejected unless there is some relaxation mechanism such as detachment of dimers or monomers, or diffusional hopping. For instance, if $`c`$ first hops to the left, then later deposition of $`b`$ can succeed. For $`d`$, the deposition is, again, not possible unless detachment and/or motion of monomers or whole dimers clear the appropriate landing sites marked by $`e`$.
Let us consider the irreversible RSA without detachment or diffusion. Note that once $`a`$ attaches, in Figure 1, the configuration is fully jammed in the interval shown. The substrate is usually assumed to be empty initially, at $`t=0`$. In the course of time $`t`$, the coverage, $`\rho (t)`$, increases and builds up to order 1 on time scales of order $`(RV)^{-1}`$, where $`R`$ was defined earlier as the deposition attempt rate per unit time and “area” of the $`D`$-dimensional surface, while $`V`$ is the particle volume. The latter is $`D`$-dimensional; for deposition of spheres on a planar surface, $`V`$ is actually the cross-sectional area.
At large times the coverage approaches the jammed-state value, where only gaps smaller than the particle size are left in the monolayer. The resulting state is less dense than the fully ordered “crystalline” (close-packed) coverage. For the $`D=1`$ deposition shown in Figure 1 the fully ordered state would have $`\rho =1`$. The variation of the RSA coverage is illustrated by the lower curve in Figure 2.
At early times the monolayer deposit is not dense and the deposition process is largely uncorrelated. In this regime, mean-field like low-density approximation schemes are useful \[23-26\]. Deposition of $`k`$-mer particles on the linear lattice in $`1D`$ was in fact solved exactly for all times \[3,27-28\]. In $`D=2`$, extensive numerical studies were reported \[26,29-40\] of the variation of coverage with time and of the large-time asymptotic behavior, which will be discussed shortly. Some exact results for correlation properties are also available in $`1D`$.
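For dimers ($`k=2`$) the exact jamming coverage of the lattice model is $`1-e^{-2}\simeq 0.8647`$, and the minimal Monte Carlo sketch below reproduces it. Note that attempting each position once, in uniformly random order, yields the same jammed state as repeated uniform attempts with rejection, because a position rejected once can never become feasible again.

```python
import math, random

random.seed(1)

def jammed_coverage(n):
    """Irreversible RSA of dimers on an n-site line, run to jamming."""
    occupied = [False] * n
    order = list(range(n - 1))     # candidate left-end positions
    random.shuffle(order)
    covered = 0
    for i in order:
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
            covered += 2
    return covered / n

est = sum(jammed_coverage(100_000) for _ in range(10)) / 10
print(f"simulated: {est:.4f}   exact 1 - exp(-2) = {1 - math.exp(-2):.4f}")
```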
The large-time deposit has several characteristic properties that have attracted much theoretical interest. For lattice models, the approach to the jammed-state coverage is exponential \[40-42\]. This was shown to follow from the property that the final stages of deposition occur at a few sparse, well-separated surviving “landing sites.” Estimates of the decrease in their density at late stages suggest that
$$\rho (\infty )-\rho (t)\sim \mathrm{exp}\left(-R\ell ^Dt\right),$$
$`(2.1)`$
where $`\ell `$ is the lattice spacing. The coefficient in (2.1) is of order $`\ell ^D/V`$ if the coverage is defined as the fraction of lattice units covered, i.e., the dimensionless fraction of area covered, also termed the coverage fraction, so that the coverage as a density of particles per unit volume would be $`V^{-1}\rho `$. The detailed behavior depends on the size and shape of the depositing particles as compared to the underlying lattice unit cells.
However, for continuum off-lattice deposition, formally obtained as the limit $`\ell \to 0`$, the approach to the jamming coverage is power-law. This interesting behavior \[41-42\] is due to the fact that at large times the remaining voids accessible to particle deposition can be of sizes arbitrarily close to those of the depositing particles. Such voids are thus reached with very low probability by the depositing particles, the flux of which is uniformly distributed. The resulting power-law behavior depends on the dimensionality and particle shape. For instance, for $`D`$-dimensional cubes of volume $`V`$,
$$\rho (\infty )-\rho (t)\sim \frac{\left[\mathrm{ln}(RVt)\right]^{D-1}}{RVt},$$
$`(2.2)`$
while for spherical particles,
$$\rho (\infty )-\rho (t)\sim (RVt)^{-1/D}.$$
$`(2.3)`$
For the linear surface, the $`D=1`$ cubes and spheres both reduce to the deposition process of segments of length $`V`$. As mentioned earlier, this $`1D`$ process is exactly solvable.
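The jammed state of this continuum “car-parking” process can be generated directly, because once a segment lands, the gaps to its left and right fill independently. The sketch below uses this recursion (with an explicit stack, so large intervals pose no recursion-depth problem) and should reproduce the known jamming coverage, Rényi's constant of approximately 0.7476.

```python
import random

random.seed(2)

def jammed_length(L):
    """Total covered length at jamming for unit segments on [0, L].

    Recursive construction: place the first segment uniformly, then
    fill the left and right gaps independently (exact for RSA).
    """
    covered, stack = 0.0, [L]
    while stack:
        g = stack.pop()
        if g < 1.0:
            continue                       # gap too small for a segment
        x = random.uniform(0.0, g - 1.0)   # left end of the new segment
        covered += 1.0
        stack.append(x)                    # remaining gap to the left
        stack.append(g - 1.0 - x)          # remaining gap to the right
    return covered

L = 1_000_000.0
print(f"coverage = {jammed_length(L) / L:.4f}  (Renyi constant ~ 0.7476)")
```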
The $`D>1`$ expressions (2.2)-(2.3), and similar relations for other particle shapes, etc., are actually empirical asymptotic laws which have been verified, mostly for $`D=2`$, by extensive numerical simulations \[29-40\]. The most studied $`2D`$ geometries are circles (corresponding to the deposition of spheres on a plane) and squares. The jamming coverages are \[29-31,39-40\]
$$\rho _{\mathrm{squares}}(\infty )\simeq 0.5620\qquad \mathrm{and}\qquad \rho _{\mathrm{circles}}(\infty )\simeq 0.544\ \mathrm{to}\ 0.550.$$
$`(2.4)`$
For square particles, the crossover to continuum in the limit $`k\to \infty `$ and $`\ell \to 0`$, with fixed $`V^{1/D}=k\ell `$, in deposition of $`k\times k\times \cdots \times k`$ lattice squares, has been investigated in some detail , both analytically (in any $`D`$) and numerically (in $`2D`$).
The correlations in the large-time “jammed” state are different from those of the equilibrium random “gas” of particles with density near $`\rho (\infty )`$. In fact, the two-particle correlations in continuum deposition develop a weak singularity at contact, and correlations generally reflect the infinite memory (full irreversibility) of the RSA process.
3. Deposition with Relaxation
Monolayer deposits may “relax” (i.e., explore more configurations) by particle motion on the surface, by detachment, etc. In fact, detachment has been experimentally observed in deposition of colloid particles which were otherwise quite immobile on the surface . Theoretical interpretation of colloid particle detachment data has proved difficult, however, because the binding to the substrate, once a particle is deposited, can be different for different particles, whereas the transport to the substrate, i.e., the flux of the arriving particles in the deposition part of the process, typically by convective diffusion, is more uniform. Detachment also plays a role in deposition on DNA molecules . Theoretical interpretation of the latter data, which also involve hopping motion on DNA, was achieved by mean-field type modeling .
Recently, more theoretically motivated studies of detachment relaxation processes, in some instances with surface diffusion allowed as well, have led to interesting model studies \[44-50\]. These investigations did not always assume detachment of the original units. For instance, in the $`1D`$ dimer deposition shown in Figure 1, each dimer on the surface could detach and open up a “landing site” for future deposition. However, in order to allow deposition in the location represented schematically by the dimer particle $`d`$, two monomers belonging to different dimers could detach (marked by $`e`$). Such models of “recombination” prior to detachment, of $`k`$-mers in $`D=1`$, were mapped onto certain spin models, and symmetry relations were identified which allowed derivation of several exact and asymptotic results on correlations and other properties \[44-50\]. We note that deposition and detachment combine to drive the dynamics into a steady state, rather than a jammed state as in ordinary RSA. These studies have been largely limited thus far to $`1D`$ models.
We now turn to particle motion on the surface, in a monolayer deposit, which was experimentally observed in deposition of proteins and also in deposition on DNA molecules. From now on, we focus on diffusional relaxation (random hopping in the lattice case). Consider the dimer deposition in $`1D`$; see Figure 1. The configuration in Figure 1, after particle $`a`$ is actually deposited, is jammed in the interval shown. Hopping of particle $`c`$ one site to the left would open up a two-site gap to allow deposition of $`b`$. Thus, diffusional relaxation allows the deposition process to reach denser, in fact, ordered (close-packed) configurations. For short times, when the empty area is plentiful, the effect of the in-surface particle motion will be small. However, for large times, the density will exceed that of the RSA process, as illustrated by the upper curve in Figure 2.
Further investigation of this effect is much simpler in $`1D`$ than in $`2D`$. Let us therefore consider the $`1D`$ case first, postponing the discussion of $`2D`$ models to the next section. Specifically, consider deposition of $`k`$-mers of fixed length $`V`$. In order to allow the limit $`k\to \infty `$, which corresponds to continuum deposition, we take the underlying lattice spacing $`\ell =V/k`$. Since the deposition attempt rate $`R`$ was defined per unit area (unit length here), it has no significant $`k`$-dependence. However, the added diffusional hopping of $`k`$-mers on the $`1D`$ lattice, with attempt rate $`H`$ and hard-core or similar particle interaction, must be $`k`$-dependent. Indeed, we consider each deposited $`k`$-mer particle as randomly and independently attempting to move one lattice spacing to the left or to the right with rate $`H/2`$ per unit time. Of course, particles cannot run over each other, so some sort of hard-core interaction must be assumed, i.e., in a dense state most hopping attempts will fail. However, if left alone, each particle would move diffusively on large time scales. In order to have the resulting diffusion constant $`𝒟`$ finite in the continuum limit $`k\to \infty `$, we put
$$H=𝒟/\ell ^2=𝒟k^2/V^2,$$
$`(3.1)`$
which is only valid in $`1D`$.
Each successful hop of a particle results in the motion of one empty lattice site (see particle $`c`$ in Figure 1). It is useful to reconsider the dynamics of particle hopping in terms of the dynamics of this rearrangement of empty-area fragments \[51-53\]. Indeed, if several of these empty sites are combined to form large enough voids, deposition attempts can succeed in regions of particle density which would be “frozen” or “jammed” in ordinary RSA. In terms of these new “particles,” which are the empty lattice sites of the deposition problem, the process is in fact one of reaction-diffusion. Indeed, $`k`$ reactants (empty sites) must be brought together by diffusional hopping in order to have a finite probability of their annihilation, i.e., the disappearance of a group of consecutive nearest-neighbor empty sites due to successful deposition. Of course, the $`k`$-group can also be broken apart by diffusion. Therefore, the $`k`$-reactant annihilation is not instantaneous in the reaction nomenclature. Such $`k`$-particle reactions are of interest on their own \[54-59\].
The simplest mean-field rate equation for annihilation of $`k`$ reactants describes the time dependence of the coverage, $`\rho (t)`$, in terms of the reactant density $`1-\rho `$,
$$\frac{d\rho }{dt}=\mathrm{\Gamma }(1-\rho )^k,$$
$`(3.2)`$
where $`\mathrm{\Gamma }`$ is the effective rate constant. Note that we assume that the close-packed dimensionless coverage is 1 in $`1D`$. There are two problems with this approximation. Firstly, it turns out that for $`k=2`$ the mean-field approach breaks down. Diffusive-fluctuation arguments for non-mean-field behavior have been advanced for reactions \[54,56,60-61\]. In $`1D`$, several exact calculations support this conclusion \[62-68\]. The asymptotic large-time behavior turns out to be
$$1-\rho \sim 1/\sqrt{t}\qquad (k=2,\ D=1),$$
$`(3.3)`$
rather than the mean-field prediction $`1/t`$. The coefficient in (3.3) is expected to be universal when expressed in an appropriate dimensionless form by introducing the single-reactant diffusion constant.
The power law (3.3) was confirmed by extensive numerical simulations of dimer deposition and by exact solution for one particular value of $`H`$ for a model with dimer dissociation. The latter work also yielded some exact results for correlations. Specifically, while the connected particle-particle correlations spread diffusively in space, their decay in time is nondiffusive; see the original work for details. Series expansion studies of models of dimer deposition with diffusional hopping of the whole dimers, or their “dissociation” into hopping monomers, have confirmed the expected asymptotic behavior and also provided estimates of the coverage as a function of time.
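A minimal sketch of such a simulation is given below: continuous-time Monte Carlo of dimer deposition with whole-dimer hopping on a periodic $`1D`$ lattice. The lattice size, rates, and run length are illustrative only (a pure-Python run of a few million events takes a little while); the printed uncovered fraction should fall toward the $`t^{-1/2}`$ behavior of (3.3) at late times.

```python
import random

random.seed(3)
N, R, H = 2000, 1.0, 10.0       # sites (periodic), deposition & hop rates
occ = [False] * N
dimers = []                      # left-end site of each deposited dimer

t, t_next = 0.0, 1.0
while t < 300.0:
    rate = N * R + len(dimers) * H           # total attempt rate
    t += random.expovariate(rate)
    if t >= t_next:
        print(f"t = {t_next:6.0f}   1 - rho = {occ.count(False) / N:.4f}")
        t_next *= 2.0
    if random.random() < N * R / rate:       # deposition attempt
        i = random.randrange(N)
        if not occ[i] and not occ[(i + 1) % N]:
            occ[i] = occ[(i + 1) % N] = True
            dimers.append(i)
    else:                                    # hop attempt, random dimer
        k = random.randrange(len(dimers))
        i = dimers[k]                        # dimer occupies i, i+1
        if random.random() < 0.5:            # try to hop right
            j = (i + 2) % N
            if not occ[j]:
                occ[i], occ[j] = False, True
                dimers[k] = (i + 1) % N
        else:                                # try to hop left
            j = (i - 1) % N
            if not occ[j]:
                occ[(i + 1) % N], occ[j] = False, True
                dimers[k] = j
```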
The case $`k=3`$ is marginal, with the mean-field power law modified by logarithmic terms. The latter were not observed in Monte Carlo studies of deposition. However, extensive results are available directly for three-body reactions \[56-59\], including verification of the logarithmic corrections to the mean-field behavior \[57-59\].
The second problem with the mean-field rate equation was identified in the continuum limit of off-lattice deposition, i.e., for $`k\to \infty `$. Indeed, the mean-field approach is essentially a fast-diffusion approximation, assuming that diffusional relaxation is efficient enough to equilibrate nonuniform density-profile fluctuations on time scales fast compared to those of the deposition events. Thus, the mean-field results are formulated in terms of uniform properties, such as the density. It turns out, however, that the simplest, $`k^{\mathrm{th}}`$-power form (3.2) in the reactant density is only appropriate for times $`t\gg e^{k-1}/(RV)`$.
This conclusion was reached by assuming the fast-diffusion, randomized (equilibrium) hard-core-reactant form of the inter-reactant distribution function in $`1D`$ (essentially, an assumption on the form of certain correlations). This approach, not detailed here, allows a Ginzburg-criterion-like estimation of the limits of validity of the mean-field results, and it correctly suggests mean-field validity for $`k=4,5,\ldots `$, with logarithmic corrections for $`k=3`$ and complete breakdown of the mean-field assumptions for $`k=2`$. However, this detailed analysis yields the modified mean-field relation
$$\frac{d\rho }{dt}=\frac{\gamma RV(1-\rho )^k}{\left(1-\rho +k^{-1}\rho \right)^{k-1}}\qquad (D=1),$$
$`(3.4)`$
where $`\gamma `$ is some effective dimensionless rate constant. This new expression applies uniformly as $`k\to \infty `$. Thus, the continuum deposition is also asymptotically mean-field, with the essentially-singular “rate equation”
$$\frac{d\rho }{dt}=\gamma (1-\rho )\mathrm{exp}[-\rho /(1-\rho )]\qquad (k=\infty ,\ D=1).$$
$`(3.5)`$
The approach to the full, saturation coverage for large times is extremely slow,
$$1-\rho (t)\sim \frac{1}{\mathrm{ln}\left(t\mathrm{ln}t\right)}\qquad (k=\infty ,\ D=1).$$
$`(3.6)`$
Similar predictions for $`k`$-particle reactions can be found in the literature.
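The essentially singular approach (3.6) can be checked by integrating (3.5) numerically; stepping in $`\mathrm{ln}t`$ keeps the integration manageable over many decades. In the sketch below (with $`\gamma =1`$, an arbitrary choice of time units), the product $`(1-\rho )\mathrm{ln}(t\mathrm{ln}t)`$ should drift slowly toward a constant of order unity.

```python
import math

# Euler integration of Eq. (3.5) with gamma = 1, stepped in s = ln t,
# so that d(rho)/ds = t (1 - rho) exp(-rho / (1 - rho)).
rho = 0.0
ds = 1e-4
s = math.log(1e-3)
next_decade = 1                    # report at t = 10, 100, 1000, ...
while s < math.log(1e12):
    t = math.exp(s)
    rho += t * (1.0 - rho) * math.exp(-rho / (1.0 - rho)) * ds
    s += ds
    if s >= next_decade * math.log(10.0):
        u = 1.0 - rho
        print(f"t = 1e{next_decade:<2d}  1-rho = {u:.4f}  "
              f"(1-rho)*ln(t ln t) = {u * math.log(t * math.log(t)):.3f}")
        next_decade += 1
```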
When particles are allowed to attach also on top of each other, with possibly some rearrangement processes allowed as well, multilayer deposits will be formed. It is important to note that the large-layer structure of the deposit and fluctuation properties of the growing surface will be determined by the transport mechanism of particles to the surface and by the allowed relaxations (rearrangements). Indeed, these two characteristics determine the screening properties of the multilayer formation process which in turn shape the deposit morphology, which can range from fractal to dense, and the roughening of the growing deposit surface. There is a large body of research studying such growth, with recent emphasis on the growing surface fluctuation properties.
However, the feature characteristic of the RSA process, i.e., the exclusion due to particle size, plays no role in determining the universal, large-scale properties of “thick” deposits and their surfaces. Indeed, RSA-like jamming will only be important for the detailed morphology of the first few layers in a multilayer deposit. Nevertheless, it turns out that RSA-like approaches (with relaxation) can be useful in modeling granular compaction.
In view of the above remarks, multilayer deposition models involving jamming effects have been relatively less studied. They can be divided into two groups. Firstly, the structure of the deposit in the first few layers is of interest \[73-75\], since these layers retain “memory” of the surface. The variation of density and other correlation properties away from the wall has structure on the length scale of the particle size. These typically oscillatory features decay away with the distance from the wall. Numerical Monte Carlo simulation aspects of continuum multilayer deposition (ballistic deposition of $`3D`$ balls) have been reviewed in the literature. Secondly, few-layer deposition processes have been of interest in some experimental systems. Mean-field theories of multilayer deposition, with particle size and interactions accounted for, were formulated and used to fit such data \[12,14-16\].
4. Two-Dimensional Deposition with Diffusional Relaxation
We now turn to the $`2D`$ case of deposition of extended objects on planar substrates, accompanied by diffusional relaxation (assuming monolayer deposits). We note that the available theoretical results are limited to a few studies \[38,77-79\]. They indicate a rich pattern of new effects as compared to $`1D`$. In fact, there exists extensive literature, e.g., on deposition with diffusional relaxation in other models, in particular those where the jamming effect is not present or plays no significant role. These include, e.g., deposition of “monomer” particles which align with the underlying lattice without jamming, as well as models where many layers are formed (mentioned in the preceding section).
The $`2D`$ deposition with relaxation of extended objects is of interest in certain experimental systems where the depositing objects are proteins. Here we focus on the combined effect of jamming and diffusion, and we emphasize dynamics at large times. For early stages of the deposition process, low-density approximation schemes can be used. One such application was reported for continuum deposition of circles on a plane.
In order to identify features new to $`2D`$, let us consider deposition of $`2\times 2`$ squares on the square lattice. The particles are exactly aligned with the $`2\times 2`$ lattice sites as shown in Figure 3. Furthermore, we assume that the diffusional hopping is along the lattice directions $`\pm x`$ and $`\pm y`$, one lattice spacing at a time. In this model dense configurations involve domains of four phases as shown in Figure 3. As a result, immobile fragments of empty area can exist. Each such single-site vacancy (Figure 3) serves as a meeting point of four domain walls. By “immobile” we mean that the vacancy cannot move due to local motion of the surrounding particles. For it to move, a larger empty-area fragment must first arrive, along one of the domain walls. One such larger empty void is shown in Figure 3. Note that it serves as a kink in the domain wall.
Existence of immobile vacancies suggests possible “frozen,” glassy behavior with extremely slow relaxation, at least locally. In fact, the full characterization of the dynamics of this model requires further study. The first numerical results do provide some answers, which will be reviewed shortly. We first consider a simpler model, depicted in Figure 4. In this model \[78-79\] the extended particles are squares of size $`\sqrt{2}\times \sqrt{2}`$. They are rotated 45° with respect to the underlying square lattice. Their diffusion, however, is along the vertical and horizontal lattice axes, by hopping one lattice spacing at a time. The equilibrium variant of this model (without deposition, at fixed particle density) is the well-studied hard-square model which, at large densities, phase separates into two distinct phases. These two phases also play a role in the late stages of RSA with diffusion. Indeed, at large densities the empty area is stored in domain walls separating ordered regions. One such domain wall is shown in Figure 4. Snapshots of actual Monte Carlo simulation results can be found in \[78-79\].
Figure 4 illustrates the process of ordering which essentially amounts to shortening of domain walls. In Figure 4, the domain wall gets shorter after the shaded particles diffusively rearrange to open up a deposition slot which can be covered by an arriving particle. Numerical simulations \[78-79\] find behavior reminiscent of the low-temperature equilibrium ordering processes \[83-85\] driven by diffusive evolution of the domain-wall structure. For instance, the remaining uncovered area vanishes according to
$$1-\rho (t)\sim \frac{1}{\sqrt{t}}.$$
$`(4.1)`$
This quantity, however, also measures the length of domain walls in the system (at large times). Thus, disregarding finite-size effects and assuming that the domain walls are not too convoluted (as confirmed by numerical simulations), we conclude that the power law (4.1) corresponds to typical domain sizes growing as $`\sqrt{t}`$, reminiscent of the equilibrium ordering processes of systems with nonconserved order parameter dynamics \[83-85\].
We now turn again to the $`2\times 2`$ model of Figure 3. The equilibrium variant of this model corresponds to hard squares with both nearest- and next-nearest-neighbor exclusion \[82,86-87\]. It has been studied in lesser detail than the two-phase hard-square model described in the preceding paragraphs. In fact, the equilibrium phase transition has not been fully classified (while it was Ising for the simpler model). The ordering at low temperatures and high densities has been studied. However, many features noted, for instance the large entropy of the ordered arrangements, require further study. The dynamical variant (RSA with diffusion) of this model has also been studied numerically. The structure of the single-site frozen vacancies and the associated network of domain walls turns out to be boundary-condition sensitive. For periodic boundary conditions the density “freezes” at values $`1-\rho \sim L^{-1}`$, where $`L`$ is the linear system size.
Preliminary indications were found that the domain size and shape distributions in such a frozen state are nontrivial. Extrapolation to $`L\to \infty `$ indicates that the power-law behavior similar to (4.1) is nondiffusive: the exponent $`1/2`$ is replaced by $`0.57`$. However, the density of the smallest mobile vacancies, i.e., dimer kinks in domain walls, one of which is illustrated in Figure 3, does decrease diffusively. Further studies are needed to fully clarify the ordering process associated with the approach to the full coverage as $`t\to \infty `$ and $`L\to \infty `$ in this model.
Even more complicated behaviors are possible when the depositing objects are not symmetric and can have several orientations as they reach the substrate. In addition to translational diffusion (hopping), one has to consider possible rotational motion. The square-lattice deposition of dimers, with hopping processes including one-lattice-spacing motion along the dimer axis and 90° rotations about a constituent monomer, has also been studied. The dimers were allowed to deposit vertically and horizontally. In this case the full close-packed coverage is not achieved at all, because the frozen vacancy sites can be embedded in, and move by diffusion within, extended structures of different “topologies.” These structures are probably less efficiently “demolished” by the motion of mobile vacancies than are the localized frozen vacancies in the model of Figure 3.
In summary, we reviewed the deposition processes involving extended objects, with jamming and its interplay with diffusional relaxation yielding interesting new dynamics of the approach to the large-time state. While significant progress has been achieved in $`1D`$, the $`2D`$ systems require further investigation. Mean-field and low-density approximations can be used in many instances: for large enough dimensions, for short times, and for particle sizes larger than a few lattice units. Added diffusion allows formation of denser deposits and leads to power-law large-time tails which, in $`1D`$, were related to diffusion-limited reactions, while in $`2D`$ they are associated with the evolution of the domain-wall network and defects, reminiscent of equilibrium ordering processes.
REFERENCES
1. V. Privman, Trends in Statistical Physics 1, 89 (1994).
2. Nonequilibrium Statistical Mechanics in One Dimension, V. Privman, ed. (Cambridge University Press, 1997).
3. E.R. Cohen and H. Reiss, J. Chem. Phys. 38, 680 (1963).
4. J. Feder and I. Giaever, J. Colloid Interface Sci. 78, 144 (1980).
5. A. Schmitt, R. Varoqui, S. Uniyal, J.L. Brash and C. Pusiner, J. Colloid Interface Sci. 92, 25 (1983).
6. G.Y. Onoda and E.G. Liniger, Phys. Rev. A33, 715 (1986).
7. N. Kallay, B. Biškup, M. Tomić and E. Matijević, J. Colloid Interface Sci. 114, 357 (1986).
8. N. Kallay, M. Tomić, B. Biškup, I. Kunjašić and E. Matijević, Colloids Surfaces 28, 185 (1987).
9. J.D. Aptel, J.C. Voegel and A. Schmitt, Colloids Surfaces 29, 359 (1988).
10. Z. Adamczyk, Colloids and Surfaces 35, 283 (1989).
11. Z. Adamczyk, Colloids and Surfaces 39, 1 (1989).
12. C.R. O’Melia, in Aquatic Chemical Kinetics, p. 447, W. Stumm, ed. (Wiley, New York, 1990).
13. Z. Adamczyk, M. Zembala, B. Siwek and P. Warszyński, J. Colloid Interface Sci. 140, 123 (1990).
14. N. Ryde, N. Kallay and E. Matijević, J. Chem. Soc. Farad. Tran. 87, 1377 (1991).
15. N. Ryde, H. Kihira and E. Matijević, J. Colloid Interface Sci. 151, 421 (1992).
16. L. Song and M. Elimelech, Colloids and Surfaces A73, 49 (1993).
17. J.J. Ramsden, J. Statist. Phys. 73, 853 (1993).
18. C.J. Murphy, M.R. Arkin, Y. Jenkins, N.D. Ghatlia, S.H. Bossmann, N.J. Turro and J.K. Barton, Science 262, 1025 (1993).
19. Liquid Semiconductors, V.M. Glazov, S.N. Chizhevskaya and N.N. Glagoleva, (Plenum, New York, 1969).
20. P. Schaaf, A. Johner and J. Talbot, Phys. Rev. Lett. 66, 1603 (1991).
21. Review: M.C. Bartelt and V. Privman, Internat. J. Mod. Phys. B5, 2883 (1991).
22. Review: J.W. Evans, Rev. Mod. Phys. 65, 1281 (1993).
23. B. Widom, J. Chem. Phys. 44, 3888 (1966).
24. B. Widom, J. Chem. Phys. 58, 4043 (1973).
25. P. Schaaf and J. Talbot, Phys. Rev. Lett. 62, 175 (1989).
26. R. Dickman, J.-S. Wang and I. Jensen, J. Chem. Phys. 94, 8252 (1991).
27. J.J. Gonzalez, P.C. Hemmer and J.S. Høye, Chem. Phys. 3, 228 (1974).
28. J.W. Evans, J. Phys. A23, 2227 (1990).
29. J. Feder, J. Theor. Biology 87, 237 (1980).
30. E.M. Tory, W.S. Jodrey and D.K. Pickard, J. Theor. Biology 102, 439 (1983).
31. E.L. Hinrichsen, J. Feder and T. Jøssang, J. Statist. Phys. 44, 793 (1986).
32. E. Burgos and H. Bonadeo, J. Phys. A20, 1193 (1987).
33. G.C. Barker and M.J. Grimson, J. Phys. A20, 2225 (1987).
34. R.D. Vigil and R.M. Ziff, J. Chem. Phys. 91, 2599 (1989).
35. J. Talbot, G. Tarjus and P. Schaaf, Phys. Rev. A40, 4808 (1989).
36. R.D. Vigil and R.M. Ziff, J. Chem. Phys. 93, 8270 (1990).
37. J.D. Sherwood, J. Phys. A23, 2827 (1990).
38. G. Tarjus, P. Schaaf and J. Talbot, J. Chem. Phys. 93, 8352 (1990).
39. B.J. Brosilow, R.M. Ziff and R.D. Vigil, Phys. Rev. A43, 631 (1991).
40. V. Privman, J.-S. Wang and P. Nielaba, Phys. Rev. B43, 3366 (1991).
41. Y. Pomeau, J. Phys. A13, L193 (1980).
42. R.H. Swendsen, Phys. Rev. A24, 504 (1981).
43. S.H. Bossmann and L.S. Schulman, p. 443 in Ref. 2.
44. M. Barma, M.D. Grynberg and R.B. Stinchcombe, Phys. Rev. Lett. 70, 1033 (1993).
45. R.B. Stinchcombe, M.D. Grynberg and M. Barma, Phys. Rev. E47, 4018 (1993).
46. M.D. Grynberg, T.J. Newman and R.B. Stinchcombe, Phys. Rev. E50, 957 (1994).
47. M.D. Grynberg and R.B. Stinchcombe, Phys. Rev. E49, R23 (1994).
48. G.M. Schütz, J. Statist. Phys. 79, 243 (1995).
49. P.L. Krapivsky and E. Ben-Naim, J. Chem. Phys. 100, 6778 (1994).
50. M. Barma and D. Dhar, Phys. Rev. Lett. 73, 2135 (1994).
51. V. Privman and M. Barma, J. Chem. Phys. 97, 6714 (1992).
52. P. Nielaba and V. Privman, Mod. Phys. Lett. B 6, 533 (1992).
53. B. Bonnier and J. McCabe, Europhys. Lett. 25, 399 (1994).
54. K. Kang, P. Meakin, J.H. Oh and S. Redner, J. Phys. A 17, L665 (1984).
55. S. Cornell, M. Droz and B. Chopard, Phys. Rev. A 44, 4826 (1991).
56. V. Privman and M.D. Grynberg, J. Phys. A 25, 6575 (1992).
57. D. ben-Avraham, Phys. Rev. Lett. 71, 3733 (1993).
58. P.L. Krapivsky, Phys. Rev. E 49, 3223 (1994).
59. B.P. Lee, J. Phys. A 27, 2533 (1994).
60. K. Kang and S. Redner, Phys. Rev. Lett. 52, 955 (1984).
61. K. Kang and S. Redner, Phys. Rev. A32, 435 (1985).
62. Z. Racz, Phys. Rev. Lett. 55, 1707 (1985).
63. M. Bramson and J.L. Lebowitz, Phys. Rev. Lett. 61, 2397 (1988).
64. D.J. Balding and N.J.B. Green, Phys. Rev. A 40, 4585 (1989).
65. J.G. Amar and F. Family, Phys. Rev. A 41, 3258 (1990).
66. D. ben-Avraham, M.A. Burschka and C.R. Doering, J. Statist. Phys. 60, 695 (1990).
67. M. Bramson and J.L. Lebowitz, J. Statist. Phys. 62, 297 (1991).
68. V. Privman, J. Statist. Phys. 69, 629 (1992).
69. V. Privman and P. Nielaba, Europhys. Lett. 18, 673 (1992).
70. M.D. Grynberg and R.B. Stinchcombe, Phys. Rev. Lett. 74, 1242 (1995).
71. C.K. Gan and J.-S. Wang, Phys. Rev. E55, 107 (1997).
72. M.J. de Oliveira and A. Petri, J. Phys. A31, L425 (1998).
73. R.-F. Xiao, J.I.D. Alexander and F. Rosenberger, Phys. Rev. A45, R571 (1992).
74. B.D. Lubachevsky, V. Privman and S.C. Roy, Phys. Rev. E47, 48 (1993).
75. B.D. Lubachevsky, V. Privman and S.C. Roy, J. Comp. Phys. 126, 152 (1996).
76. V. Privman, H.L. Frisch, N. Ryde and E. Matijević, J. Chem. Soc. Farad. Tran. 87, 1371 (1991).
77. J.-S. Wang, P. Nielaba and V. Privman, Physica A199, 527 (1993).
78. J.-S. Wang, P. Nielaba and V. Privman, Mod. Phys. Lett. B7, 189 (1993).
79. E.W. James, D.-J. Liu and J.W. Evans, Relaxation Effects in Random Sequential Adsorption: Application to Chemisorption Systems, this volume.
80. S.A. Grigera, T.S. Grigera and J.R. Grigera, Phys. Lett A226, 124 (1997).
81. J.A. Venables, G.D.T. Spiller and M. Hanbücken, Rept. Prog. Phys. 47, 399 (1984).
82. L.K. Runnels, in Phase Transitions and Critical Phenomena, Vol. 2, p. 305, C. Domb and M.S. Green, eds. (Academic, London, 1972).
83. J.D. Gunton, M. San Miguel, P.S. Sahni, Phase Transitions and Critical Phenomena, Vol. 8, p. 267, C. Domb and J.L. Lebowitz, eds. (Academic, London, 1983).
84. O.G. Mouritsen, in Kinetics of Ordering and Growth at Surfaces, p. 1, M.G. Lagally, ed. (Plenum, NY, 1990).
85. A. Sadiq and K. Binder, J. Statist. Phys. 35, 517 (1984).
86. K. Binder and D.P. Landau, Phys. Rev. B21, 1941 (1980).
87. W. Kinzel and M. Schick, Phys. Rev. B24, 324 (1981).
Figure Captions
Figure 1: Deposition of dimers on the $`1D`$ lattice. Once the arriving dimer $`a`$ attaches, the configuration shown will be fully jammed in the interval displayed. Further deposition can only proceed if dimer or monomer diffusion (hopping) and/or detachment are allowed. Letter labels $`b`$, $`c`$, $`d`$ are referred to in the text.
Figure 2: Schematic variation of the coverage fraction $`\rho (t)`$ with time for lattice deposition without (lower curve) and with (upper curve) diffusional or other relaxation. The “ordered” density corresponds to close packing. Note that the short-time behavior deviates from linear at times of order $`1/(RV)`$. (Quantities $`R,V`$ are defined in the text.)
Figure 3: Fragment of a deposit configuration in the deposition of $`2\times 2`$ squares. Illustrated are one single-site frozen vacancy at which four domain walls converge (indicated by heavy lines), as well as one dimer vacancy which causes a kink in one of the domain walls.
Figure 4: Illustration of deposition of $`\sqrt{2}\times \sqrt{2}`$ particles on the square lattice. Diffusional motion during time interval from $`t_1`$ to $`t_2`$ can rearrange the empty area “stored” in the domain wall to open up a new landing site for deposition. This is illustrated by the shaded particles.
# The CP/T Experiment
## I Introduction
This talk is a description of an experiment that has been proposed to run at the Fermilab Main Injector to study CP violation, test CPT symmetry conservation, and search for rare decays of the $`K_S`$ meson. Here I will only discuss the CPT symmetry nonconservation search.
CPT symmetry conservation is a subject under theoretical attack. Studies of Hawking radiation and of string theory (the leading contender for a unified theory of all four forces of nature) have shown the CPT theorem to be invalid in real life (rather than in the three-force approximation we call the standard model). Many physicists are reluctant to accept the possibility that CPT symmetry violation may occur at the Planck scale. One reason for this reluctance is that we have, so far, only theoretical hints that this is the case. Another reason is the great success of the standard model. It may be quite a few years before a theory that unifies all four of the forces of nature becomes mature enough so that convincing theoretical statements can be made about the CPT structure of the world.
Fortunately we don’t have to wait. The $`K_LK_S`$ system provides a way of testing the validity of CPT symmetry conservation where it is possible to perform extremely accurate experiments. In this document we propose to do an experiment that will reach the Planck scale. Finding CPT symmetry nonconservation would be a major discovery that would change in a fundamental way how physicists view the world. If we don’t find it we will strongly constrain several quantum theories of gravity and provide a powerful benchmark against which future theories must be measured.
## II CPT Theory and Phenomenology
The CPT theorem is based on the assumptions of locality, Lorentz invariance, the spin-statistics theorem, and asymptotically free wave functions. All quantum field theories (including the standard model of the elementary particles) assume CPT symmetry invariance.
There is a theoretical hint of the level at which CPT symmetry might be violated. This comes from the fact that gravity can’t be consistently included in a quantum field theory, and the proof of the CPT theorem assumes Minkowski space. To include gravity in a unified theory of all four forces of nature, many physicists think that a more general theory is needed, which would have quantum field theory embedded in it. In this more general theory the CPT theorem will be invalid.
One expects to see quantum effects of gravity at what is called the Planck scale: at energies of $`M_{Planck}c^2=\sqrt{\hbar c^5/G}=1.2\times 10^{19}`$ GeV, or at distances of the order of $`10^{-33}`$ cm. The quantum effects of gravity are expected to be very small in ordinary processes. However, in a place where the standard model predicts a null result, like CPT violation, quantum effects of gravity would stand out. Therefore, it would be very interesting to test CPT symmetry conservation at the Planck scale.
One might think that string theory, as a candidate for the more general theory that has quantum field theory embedded in it, would give us guidance. However, CPT conservation is artificially built into string theories, beginning with G. Veneziano.
Kostelecky and Potting suggested that spontaneous CPT violation might occur in string theory; i.e., they put the CPT violation in the solutions rather than in the equations of motion. One of the problems with string theory in general is that it’s not known how to relate string effects at the Planck scale to effects seen at current accelerator energies, and Kostelecky and Potting have the same difficulty. They have tried to remedy this by writing the most general additions to the Standard Model Lagrangian that maintain the SU(3) × SU(2) × U(1) effective structure of the theory but violate CPT symmetry. This allows them to classify the various types of CPT violation that might be seen (in the lepton sector, quark sector, etc.) and to have a parameterization that includes all these effects. They find that the largest CPT violating effect is a change in quark propagators that has the opposite sign for antiquarks. This leads to a nonzero value of $`|M_{K^0}-M_{\overline{K^0}}|`$ coming from indirect CPT violation. This is much larger than any direct CPT violation effect. This is precisely the signature that this experiment would search for.
The $`K^0\overline{K}^0`$ system provides us with an incredibly finely balanced interferometer that magnifies small perturbations such as CPT violating effects. It is a natural place to search for CPT symmetry violation since it exhibits C, P, and CP symmetry violation (and is the only place to date where CP violation has been seen). In the final analysis, the conservation or violation of CPT symmetry is an experimental question, and the search for this effect is of the utmost interest.
In $`K^0`$ physics, one can observe CPT violating effects through mixing or decays (called indirect or direct CPT violation respectively). In mixing, one introduces a parameter $`\mathrm{\Delta }`$ which is both CP and CPT violating. One can also have direct CPT violation. Eqn. (3) shows the mixing of $`K_L`$ and $`K_S`$ in terms of the CP eigenstates $`K_1`$ and $`K_2`$.
$`\{\begin{array}{c}K_S=K_1+(ϵ+\mathrm{\Delta })K_2\\ K_L=K_2+(ϵ-\mathrm{\Delta })K_1\end{array}`$ (3)
There are several measurements that would signify CPT violation: a difference between the phase of $`ϵ`$ and the phase of $`\eta _{+-}`$, evidence for a non-zero $`\mathrm{\Delta }`$ in the Bell-Steinberger relation, a difference between the phases of $`\eta _{+-}`$ and $`\eta _{00}`$, or certain interference terms between $`K_L`$ and $`K_S`$ in semileptonic decays. In this report we will concentrate on the first two methods, measuring the phase of $`\eta _{+-}`$ and comparing it to the calculated value of the phase of $`ϵ`$, and evaluating the Bell-Steinberger relation, since from them we can make the most accurate measurements.
We now consider the CPT test based on measuring the phase of $`\eta _{+-}`$ and calculating the phase of $`ϵ`$. For what follows we adopt the Wu-Yang phase convention. Figure 1 shows the relationships between $`ϵ`$, $`ϵ^{\prime }`$, $`\mathrm{\Delta }`$, and $`\eta _{+-}`$. $`ϵ^{\prime }`$ and $`\mathrm{\Delta }`$ are shown greatly enlarged for clarity.
The size of $`|ϵ^{\prime }/ϵ|`$ is of order $`10^{-3}`$, and the phase of $`ϵ^{\prime }`$ is very close to that of $`ϵ`$, so the phase of the vector $`ϵ+ϵ^{\prime }`$ is equal, to good accuracy, to the phase of $`ϵ`$ ($`ϵ^{\prime }`$ is too small to have an effect on the calculation of the phase of $`ϵ`$ at the level in which we are interested). We can see from the figure that the component of $`\mathrm{\Delta }`$ perpendicular to $`ϵ`$, $`\mathrm{\Delta }_{\perp }`$, is
$`\mathrm{\Delta }_{\perp }=|\eta _{+-}|(\varphi _{+-}-\varphi _ϵ)`$ (4)
where $`\varphi _{+-}`$ ($`\varphi _ϵ`$) is the phase of $`\eta _{+-}`$ ($`ϵ`$). In general, in terms of the elements of the kaon decay matrix $`\mathrm{\Gamma }`$ and mass matrix $`M`$, $`\mathrm{\Delta }`$ is given by:
$`\mathrm{\Delta }={\displaystyle \frac{(\mathrm{\Gamma }_{11}-\mathrm{\Gamma }_{22})+i(M_{11}-M_{22})}{(\mathrm{\Gamma }_S-\mathrm{\Gamma }_L)-2i(M_L-M_S)}}`$ (5)
The mass term has a phase perpendicular to $`\varphi _{SW}`$, the superweak phase, which is defined by $`\mathrm{tan}\varphi _{SW}=2(M_L-M_S)/(\mathrm{\Gamma }_S-\mathrm{\Gamma }_L)`$. $`\varphi _{SW}`$ is approximately equal to $`\varphi _ϵ`$. The decay term is parallel to $`\varphi _{SW}`$. We can solve Eqns. (4) and (5) for $`M_{11}-M_{22}`$, which is the mass difference between the $`K^0`$ and $`\overline{K^0}`$ mesons, and get an equation which we can use to search for indirect CPT violation:
$`{\displaystyle \frac{|M_{K^0}-M_{\overline{K^0}}|}{M_{K^0}}}={\displaystyle \frac{2(M_L-M_S)}{M_{K^0}}}{\displaystyle \frac{|\eta _{+-}|}{\mathrm{sin}\varphi _{SW}}}|\varphi _{+-}-\varphi _ϵ|`$ (6)
In Eqn. (6), Nature has been kind: the constant factors multiplying $`|\varphi _{+-}-\varphi _ϵ|`$ are exceedingly small. $`(M_L-M_S)`$ is of order $`10^{-6}`$ eV, and when one divides by $`M_{K^0}`$ the ratio is of order $`10^{-15}`$. $`|\eta _{+-}|`$ is of order $`10^{-3}`$. The product of all the factors multiplying $`|\varphi _{+-}-\varphi _ϵ|`$ is $`4\times 10^{-17}`$. By the Planck scale we mean
$`{\displaystyle \frac{|M_{K^0}-M_{\overline{K^0}}|}{M_{K^0}}}={\displaystyle \frac{M_{K^0}}{M_{Planck}}}=4.1\times 10^{-20}`$ (7)
so a measurement of $`|\varphi _{+-}-\varphi _ϵ|`$ accurate to 1 milliradian would test a CPT violating effect at the accuracy of the Planck scale.
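The arithmetic behind (6) and (7) is easy to verify; the check below uses approximate, PDG-era kaon parameters (all input values are rounded, so the outputs agree with the numbers quoted above only to within that rounding).

```python
import math

# Approximate kaon and Planck parameters (rounded, PDG-era values):
dm       = 3.5e-6        # M_L - M_S in eV
m_K      = 497.7e6       # kaon mass in eV
eta      = 2.28e-3       # |eta_+-|
phi_sw   = math.radians(43.4)
m_planck = 1.22e28       # Planck mass in eV

# Product of the constant factors in Eq. (6):
prefactor = (2.0 * dm / m_K) * (eta / math.sin(phi_sw))
print(f"prefactor multiplying |phi_+- - phi_eps|: {prefactor:.1e}")

# Planck-scale target of Eq. (7) and the phase accuracy it implies:
ratio = m_K / m_planck
phi = ratio / prefactor
print(f"M_K / M_Planck = {ratio:.1e}")
print(f"phase accuracy for Planck-scale reach: {phi * 1e3:.1f} mrad"
      f" = {math.degrees(phi):.3f} deg")
```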
Some CP/T experiment collaborators were part of Fermilab experiment E773. In this experiment we placed the limit (at 90% confidence level),
$`{\displaystyle \frac{|M_{K^0}-M_{\overline{K^0}}|}{M_{K^0}}}<1.3\times 10^{-18}`$ (8)
so the result of Ref. stands at 31 times the Planck scale.
That publication actually compared the phase of $`\eta _{+-}`$ to the superweak phase (and stated clearly that in doing so the assumption was being made that CP violation would not be unexpectedly large in modes other than $`\pi \pi `$). In the calculation of the phase of $`ϵ`$, there are three corrections that should be made to the superweak phase: from $`Im(x)`$, the $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ rule violation parameter, from $`Im(\eta _{+-0})`$, and from $`Im(\eta _{000})`$. Together they have an uncertainty of 2.7 degrees, which should be added in quadrature with the approximately 1 degree accuracy of that measurement.
Several CP/T experiment collaborators are part of the KTeV experiment as well. There we expect to make an improvement of a factor of 3 to 5. In KTeV, interference is seen very clearly. But the interference term from which $`\varphi _{+-}`$ is measured, $`2|\eta _{+-}||\rho |\mathrm{cos}(\mathrm{\Delta }mt+\varphi _\rho -\varphi _{+-})\mathrm{exp}(-t/2\tau _S)`$, is reduced by the regeneration amplitude $`|\rho |\simeq 0.03`$, and $`\varphi _{+-}`$ and $`\varphi _\rho `$ are hard to disentangle. Using the regeneration method will be difficult beyond the KTeV level.
It should be understood clearly that measuring the phase of $`\eta _{+-}`$ and comparing it to the superweak phase does not constitute a complete test of CPT symmetry conservation: the corrections to the superweak phase have larger uncertainties than existing experimental measurements of $`\varphi _{+-}`$. For example, if a significant difference between $`\varphi _{+-}`$ and $`\varphi _{SW}`$ were found in an experiment it would NOT prove that CPT symmetry was violated. More accurate measurements of $`Im(x)`$, $`Im(\eta _{+-0})`$, and $`Im(\eta _{000})`$ must be made before this could be proved. An interference experiment located just downstream of the production target is needed for these measurements. In a regeneration experiment the interference in $`3\pi `$ decays is reduced in size by a factor of $`\rho `$, the regeneration amplitude, which is about 0.1 at most (at Main Injector energies), compared to an experiment near the production target, and it is extremely difficult for a regeneration experiment to measure $`Im(x)`$, $`Im(\eta _{+-0})`$, and $`Im(\eta _{000})`$ to the required accuracy.
## III Two Tests of CPT Symmetry Conservation
### A The Phase Difference between $`\eta _{+-}`$ and $`ϵ`$
After the KTeV experiment we expect to stand an order of magnitude above the Planck scale. To close that gap we will want to do an interference experiment near the kaon production target. The interference term is then $`2D|\eta _{+-}|\mathrm{cos}(\mathrm{\Delta }mt-\varphi _{+-})\mathrm{exp}(-t/2\tau _S)`$. Here $`\varphi _{+-}`$ appears alone, and $`|\rho |`$ is replaced with the dilution factor, $`D=(K^0-\overline{K}^0)/(K^0+\overline{K}^0)`$ at the target. To maximize $`D`$ and hence the interference, we choose to make our $`K^0`$ beam from a $`K^+`$ beam by charge exchange. Then at medium to high Feynman x, $`D\simeq 1`$. The charge exchange cross section is large, about 20% of the total cross section. To maximize the flux of $`K^+`$ made from the 120 GeV/c protons from the Fermilab Main Injector we choose a $`K^+`$ momentum of 25 GeV/c. We would use a hyperon magnet to define the $`K^0`$ beam, similar to the one in the Fermilab Proton Center beam line. In the calculations described below we assume the use of a vee spectrometer, a lead glass electromagnetic calorimeter, and a muon detector.
In Ref. $`\varphi _{+-}`$ was measured to $`1^o`$ accuracy. A CPT-violating mass difference exactly at the Planck scale would result in $`|\varphi _{+-}-\varphi _ϵ|=0.06^o`$. We set ourselves the goal of measuring $`\varphi _{+-}`$ and $`\varphi _ϵ`$ to sufficient accuracy to see such a CPT-violating effect.
We have calculated the statistical sensitivity of the CPT measurements assuming that we have a 1 year long run with $`3\times 10^{12}`$ protons per pulse at 52% efficiency.
Fig. 2 shows the proper time distribution of accepted events. The figure shows the actual proper time distribution and also what the distribution would look like if there were no interference. The second part of the figure shows the ratio of those two curves. Between 5 and 20 $`K_S`$ lifetimes the interference is first a 40% destructive effect and then a 65% constructive one.
We calculated the distribution of events in momentum and proper time for the resulting 20 billion events and fit this distribution using MINUIT, with fitting parameters $`|\eta _{+-}|,\varphi _{+-},D`$, three parameters describing the normalization and shape of the kaon momentum spectrum, $`\tau _S`$ and $`\mathrm{\Delta }m`$ (the $`K_S`$ lifetime and the $`K_L-K_S`$ mass difference). The uncertainty that results from this fit is $`0.040`$ degrees. This will meet our goal of testing CPT symmetry conservation at the Planck scale. This number $`\pm 0.040`$ degrees has another meaning: it is the statistical (including fitting) uncertainty of this measurement, and it sets the scale against which all other aspects of the $`|\varphi _{+-}-\varphi _ϵ|`$ measurement should be compared.
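For orientation, a toy version of such a fit can be sketched with scipy's `curve_fit` standing in for MINUIT. Everything below is an illustrative assumption: a single momentum bin, a flat acceptance, and approximate input parameter values; the real fit also floats the momentum spectrum, $`\tau _S`$ and $`\mathrm{\Delta }m`$.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
true = dict(eta=2.23e-3, phi=np.radians(43.4), D=1.0, norm=1e8)

def rate(t, eta, phi, D, norm):  # t in units of tau_S; tau_L/tau_S ~ 579
    return norm * (np.exp(-t) + eta**2 * np.exp(-t / 579.0)
                   + 2 * D * eta * np.cos(0.47 * t - phi) * np.exp(-t / 2))

t_bins = np.linspace(0.25, 20.0, 80)
counts = rng.poisson(rate(t_bins, **true))      # toy "data"

popt, pcov = curve_fit(rate, t_bins, counts, p0=[2e-3, 0.7, 0.9, 1.1e8],
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print("phi_+- = %.3f +- %.3f degrees"
      % (np.degrees(popt[1]), np.degrees(np.sqrt(pcov[1, 1]))))
```

The quoted $`\pm 0.040`$ degree sensitivity corresponds to the full momentum-binned fit to 20 billion events, not to this toy.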
In this experiment we measure $`\varphi _+`$, but we must also determine $`\varphi _ϵ`$. The leading contribution to $`\varphi _ϵ`$ is the superweak phase, $`\varphi _{SW}`$, given by $`\mathrm{tan}(\varphi _{SW})=2\mathrm{\Delta }m/\mathrm{\Delta }\mathrm{\Gamma }`$. The superweak phase will be measured by KTeV to accuracy sufficient for our purposes here. We next describe some corrections to this contribution.
For this experiment, $`ϵ^{\prime }`$ will have no meaningful effect. Assuming CPT invariance, the phase of $`ϵ^{\prime }`$ is known to be $`(48\pm 4)`$ degrees . Its magnitude is unknown, but if we assume it to be the central value from E832 we find that the maximum possible difference it can provide between $`\varphi _{+-}`$ and $`\varphi _ϵ`$ is 0.012 degrees, a factor of 5 smaller than the contribution of CPT violation at the Planck scale.
The full formula for $`\varphi _ϵ`$ is
$`\mathrm{tan}\varphi _ϵ=\frac{2\mathrm{\Delta }m}{\mathrm{\Gamma }_S-\mathrm{\Gamma }_L}\mathrm{cos}\xi +\frac{\mathrm{sin}\xi }{\delta }`$ (9)
where $`\xi =\mathrm{arg}(\mathrm{\Gamma }_{12}A_0\overline{A}_0^{\ast })`$ and $`\delta =2Re(ϵ)`$. Here $`A_0`$ is the isospin 0 part of the $`\pi ^+\pi ^-`$ decay amplitude. In the Wu-Yang phase convention, $`A_0`$ is real, and $`\mathrm{\Gamma }_{12}`$ gives contributions from two sources: semileptonic decays through $`Im(x)`$, the $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ violation parameter, and $`3\pi `$ decays through $`Im(\eta _{+-0})`$ and $`Im(\eta _{000})`$.
In the standard model we expect $`x\sim 10^{-7}`$, which is too small to affect this experiment, but $`Im(x)`$ is known experimentally only to an accuracy of $`\pm 0.026`$. This results in an uncertainty in $`\varphi _ϵ`$ of 1.7 degrees. To prove that an observed difference between $`\varphi _{+-}`$ and $`\varphi _ϵ`$ were due to CPT violation, one would have to measure $`Im(x)`$ about 40 times more accurately than today's level. The way we will do this is described below.
The contribution to $`\varphi _ϵ`$ from the $`3\pi `$ modes in the standard model is 0.017 degrees, which is smaller than the accuracy we are trying to obtain. But if one takes into account the current world’s knowledge, the uncertainty these decay modes contribute is 2.2 degrees. So they also have to be measured better.
The best experimental approach to measuring these three quantities, $`x,\eta _{+-0}`$, and $`\eta _{000}`$, is the same: choose an experiment with a high dilution factor and observe interference between $`K_L`$ and $`K_S`$ close to the target, i.e., the experiment described here. These measurements should be thought of as an integral part of this experiment. We have calculated the sensitivity of this experiment to these quantities, and we estimate that we can reach at least the required sensitivity. We conclude that we can determine $`\varphi _ϵ`$ to the required accuracy.
We used the same Monte Carlo and fitting programs to estimate the sensitivity of our experiment to the measurements necessary for the calculation of $`\varphi _ϵ`$, namely $`Im(x),Im(\eta _{+-0})`$, and $`Im(\eta _{000})`$, and conclude that we will have the required sensitivity. We find that the uncertainty in $`Im(x)`$ contributes much more than $`Im(\eta _{+-0})`$ and $`Im(\eta _{000})`$ to the uncertainty in $`\varphi _ϵ`$.
### B CPT Test via the Bell-Steinberger Relation
The next test of CPT symmetry conservation comes through an evaluation of the Bell-Steinberger relation. Our ability to measure CP violation parameters (and also $`Im(x)`$) very accurately will make it possible to reduce the uncertainties in the Bell-Steinberger relation by two orders of magnitude, which will make this CPT test sensitive at the Planck scale as well.
The Bell-Steinberger relation is a statement of the conservation of probability in $`K^0\overline{K}^0`$ decays, in which, through Eq. (3), $`\mathrm{\Delta }`$ appears. It is usually written as:
$`(1+i\mathrm{tan}\varphi _{SW})[Re(ϵ)-iIm(\mathrm{\Delta })]=\sum _f\alpha _f`$ (10)
where the sum runs over all decay channels f, and $`\alpha _f=\frac{1}{\mathrm{\Gamma }_S}A^{\ast }(\mathrm{K}_\mathrm{S}\to f)A(\mathrm{K}_\mathrm{L}\to f)`$. The most recent published evaluation of the Bell-Steinberger relation is ref. .
The biggest uncertainties in the Bell-Steinberger relation at this time come from $`\eta _{000}`$, $`Im(x)`$, and $`\delta _l`$ (the charge asymmetry in $`K_L`$ semileptonic decays). Although $`\delta _l`$ doesn’t explicitly appear in the Bell-Steinberger relation, it is the best way of evaluating $`Re(ϵ)`$. The proposed experiment will be able to make excellent measurements of the first two of these quantities, and KTeV will measure $`\delta _l`$ quite accurately. For the next level of accuracy in the Bell-Steinberger relation the uncertainties of the $`\alpha _{+-}`$ and $`\alpha _{00}`$ terms must be reduced. These uncertainties depend on those of $`|\eta _{+-}|,Re(ϵ^{\prime }/ϵ)`$, and $`\mathrm{\Delta }\varphi =\varphi _{00}-\varphi _{+-}`$. The latter two quantities will be measured by the KTeV experiment to sufficient accuracy for our purposes here.
We will have good sensitivity for the $`|\eta _{+-}|`$ measurement. In our fits to the proper time dependence of $`\pi ^+\pi ^-`$ events we have excellent statistical sensitivity for measuring $`|\eta _{+-}|`$. In the interference term, however, it is highly correlated with $`D`$, the dilution factor. We will measure $`D`$ using semileptonic decays. The semileptonic charge asymmetry at zero proper time equals $`D`$. We calculate that we will be able to measure $`D`$ to better than 0.1% for momenta above 13 GeV/c. We should be able to measure $`|\eta _{+-}|`$ to 0.1% accuracy, about 10 times better than it is currently known.
The most accurate way to determine $`|\eta _{00}|`$ will be by using the KTeV value of $`ϵ^{\prime }/ϵ`$ and our measurement of $`|\eta _{+-}|`$. The most accurate way of determining $`\varphi _{00}`$ will use the KTeV value of $`\mathrm{\Delta }\varphi `$ and our measurement of $`\varphi _{+-}`$.
We should be able to reduce the uncertainties in the Bell-Steinberger relation by about two orders of magnitude from their present values. The limit on $`Re(\mathrm{\Delta })`$ will be about $`5\times 10^{-6}`$, about twice the contribution of CPT violation at the Planck scale, and will be set by the uncertainty in $`\delta _l`$. For $`Im(\mathrm{\Delta })`$ the limit will be about $`1\times 10^{-6}`$, dominated by the uncertainty in $`Im(x)`$, which would allow us to place a $`2\sigma `$ limit at the Planck scale. Since the Bell-Steinberger measurement is sensitive to $`Re(\mathrm{\Delta })`$ and $`Im(\mathrm{\Delta })`$ independently, these limits would be valid even if $`\mathrm{\Delta }`$ is parallel to $`ϵ`$, in contrast to the CPT violation limits from $`|\varphi _{+-}-\varphi _ϵ|`$, which are sensitive only to the component of $`\mathrm{\Delta }`$ perpendicular to $`ϵ`$.
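The way Eq. (10) determines $`Re(ϵ)`$ and $`Im(\mathrm{\Delta })`$ once the right-hand side is known can be made explicit with a few lines of complex arithmetic. In the sketch below the sum over channels is replaced by an illustrative $`2\pi `$-dominated value, so the output is indicative only.

```python
import cmath, math

# Invert (1 + i tan(phi_SW)) [Re(eps) - i Im(Delta)] = sum_f alpha_f.
# Illustrative input: sum_f alpha_f ~ |eta_+-| e^{i phi_+-}, 2pi channels only.
phi_sw = math.radians(43.5)
alpha_sum = 2.23e-3 * cmath.exp(1j * math.radians(43.4))

z = alpha_sum / (1.0 + 1j * math.tan(phi_sw))   # z = Re(eps) - i Im(Delta)
print("Re(eps)   ~ %.2e" % z.real)
print("Im(Delta) ~ %.2e" % -z.imag)
```

Since the uncertainty of the right-hand side propagates almost directly into these two numbers, improving $`Im(x)`$ and the $`3\pi `$ parameters is what tightens the $`Im(\mathrm{\Delta })`$ limit.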
## IV Conclusion
We have described an experiment to carry out a systematic program of measurements in $`K_SK_L`$ interference physics. We will search for CPT symmetry violation in the decays of $`K^0`$ mesons with the sensitivity to reach the Planck scale, measure CP violation parameters to test the detailed predictions of the Standard Model, and study rare kaon decays.
Our design uses protons from the Fermilab Main Injector to make an RF separated $`K^+`$ beam. With this we make a tertiary neutral kaon beam created in just the way to maximize the interference between $`K_S`$ and $`K_L`$ while maintaining high flux. We use a “closed geometry” hyperon magnet for beam definition. A standard Vee spectrometer, with drift chambers, an electromagnetic calorimeter, and a muon detector, is used to make the measurement.
# The Best Copenhagen Tunneling Times
## I Introduction
A problem which does not have a clear-cut answer in quantum mechanics is the time that it takes for an electron to pass through a potential barrier. This problem is important from both a theoretical and a technological point of view.
In quantum mechanics, time enters as a parameter rather than an observable (to which an operator can be assigned). Thus, there is no direct way to calculate tunneling times. People have tried to introduce quantities which have the dimension of time and can somehow be associated with the passage of the particle through the barrier. These efforts have led to the introduction of several times, some of which are completely unrelated to the others \[5-17\]. Some people have used Larmor precession as a clock to measure the duration of tunneling for a steady state or for a wave packet. Others have used Feynman paths like real paths to calculate an average tunneling time with the weighting function $`\mathrm{exp}[iS(x(t))/\mathrm{\hbar }]`$, where $`S`$ is the action associated with the path $`x(t)`$, the $`x(t)`$’s being Feynman paths initiated from a point on the left of the barrier and ending at another point on the right of it. On the other hand, a group of people have used some features of an incident wave packet and the comparable features of the transmitted packet to introduce a delay as the tunneling time. There are many other approaches, some of which are mentioned in Refs. \[10-17\]. But there is no general consensus among physicists about their meaning, or about which of them, if any, is the proper tunneling time. In Bohmian mechanics, however, there is a unique way of identifying the time of passage through a barrier. This time has a reasonable behaviour with respect to the width of the barrier and the energy of the particle.
It is expected that, with the availability of reliable experimental results in the near future, an appropriate definition can be selected from the available ones, or that such results would prepare the ground for a more appropriate definition of the transmission time. Here, however, we want to use the definition of tunneling time in the framework of Bohmian mechanics to select one of the available definitions of quantum tunneling times (QTT) within the standard interpretation as the best definition.
Our paper is organized as follows: after introducing the Olkhovsky-Recami QTT by means of a heuristic argument in Section II, we introduce the Bohmian QTT in Section III. Then, in Section IV, we give a critical discussion of Cushing’s thought experiment and of what it really measures.
## II Tunneling’s characteristic times in the Copenhagen framework
To begin with, we consider the time at which a particle passes through a definite point in space. We describe the particle by a Gaussian wave packet which is incident from the left. The most natural way to estimate this time of passage is to find the time at which the peak of the wave packet passes through that point. But this is not the right criterion for finding the time of passage of the particle (even if the wave packet is symmetrical). To clarify the matter, we divide the packet, in the middle, into two parts. The probability of finding the particle in the front section is $`\frac{1}{2}`$ and the same is true for the back section. We represent the transit time of the centre of gravity of the front section by $`t_1`$ and that of the back section by $`t_2`$. The average time for the particle’s passage through that point is $`T=\frac{1}{2}(t_1+t_2)`$. If the transit time for the peak of the wave is denoted by $`t`$, we have:
$`t_1=t-{\displaystyle \frac{x_1}{v_g}}`$ (2)
$`t_2=t+{\displaystyle \frac{x_2}{v_g}}`$ (3)
where $`v_g`$ is the group velocity of the wave packet and $`x_1`$ and $`x_2`$ are, respectively, the distances of the centers of gravity of the front and the back sections of the packet from its peak position, when these centers pass the point under consideration. Thus, we have:
$`T={\displaystyle \frac{1}{2}}(t_1+t_2)=t+{\displaystyle \frac{x_2-x_1}{2v_g}}`$ (4)
If the wave packet did not spread, $`x_1`$ and $`x_2`$ would remain equal and $`T`$ would be equal to $`t`$. But, since the wave packet spreads, $`T\ne t`$. In fact, the average transit time for the particle is later than that of the wave’s peak, because the spreading of the packet decreases the transit time of the centre of gravity of the wave’s front section and increases that of the back section. The change is not symmetrical (i.e. $`x_1\ne x_2`$), as the back section of the wave experiences the spreading for a longer time.
Now, consider a wave packet $`\psi (x,t)`$, which is incident from the left and approaches a far point $`x`$. The best time that we can attribute to particle’s passage through $`x`$ is
$`\tau (x)=\frac{\int _0^{\infty }t\,|\psi (x,t)|^2\,v(x,t)\,dt}{\int _0^{\infty }|\psi (x,t)|^2\,v(x,t)\,dt}`$ (5)
where $`v(x,t)=\frac{j(x,t)}{|\psi (x,t)|^2}`$, $`j(x,t)`$ being the probability current density. In fact, we have divided the wave packet into infinitesimal elements. The transit time when the particle is in one of these elements is weighted by the probability of finding the particle there (i.e. $`|\psi (x,t)|^2dx=|\psi (x,t)|^2v(x,t)dt`$). Fig.(1) illustrates the difference between this time and the time at which the peak passes that point. For narrow wave packets, for which the rate of spreading is large, this difference is large. From (3), one can define a distribution for the transit time through $`x`$:
$`P(x,t)=\frac{|\psi (x,t)|^2\,v(x,t)}{\int _0^{\infty }|\psi (x,t)|^2\,v(x,t)\,dt}=\frac{j(x,t)}{|T|^2}`$ (6)
where $`|T|^2`$ is the transmission probability for passing through $`x_o`$. Dumont and Marchioro introduced this definition for the distribution of the time at which a particle passes through the far side of a potential barrier. They did not find it possible to define the time spent by the particle in the barrier. Leavens showed that this is also the distribution for the same time in Bohmian mechanics.
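As a concrete illustration of (5) and (6), the sketch below computes the mean arrival time of a free Gaussian packet at a distant point from the current $`j(x,t)`$ and compares it with the arrival time of the peak. The free-particle case is chosen because a single kinetic step in momentum space evolves it exactly; units $`\mathrm{\hbar }=m=1`$ and all parameter values are arbitrary choices.

```python
import numpy as np

N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
sigma0, k0, x_det = 1.0, 2.0, 100.0        # narrow packet -> strong spreading

psi0 = np.exp(-x**2 / (4 * sigma0**2) + 1j * k0 * x)
psi0_k = np.fft.fft(psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * L / N))

i_det = np.argmin(np.abs(x - x_det))
ts = np.linspace(0.0, 120.0, 2400)
j_det = np.empty_like(ts)
for n, t in enumerate(ts):                 # V = 0: one exact step per output time
    phase = np.exp(-1j * k**2 * t / 2)
    psi = np.fft.ifft(phase * psi0_k)
    dpsi = np.fft.ifft(1j * k * phase * psi0_k)
    j_det[n] = (np.conj(psi[i_det]) * dpsi[i_det]).imag   # j = Im(psi* psi_x)

tau = np.sum(ts * j_det) / np.sum(j_det)   # Eq. (5), since |psi|^2 v = j
print("mean arrival time  tau = %.1f" % tau)
print("peak arrival time x/k0 = %.1f" % (x_det / k0))
```

The mean arrival time comes out later than the peak arrival time, as argued above for a spreading packet.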
By looking at (3), one notices that $`\tau (x)`$ is in fact the average time for the passage of the probability density $`|\psi |^2`$ through $`x_o`$. Since the probability density represents the probability of the presence of the particle, it is natural to take the average time for the passage of the probability density through a point as a measure of the average time for the particle’s passage through that point. While part of the probability flux passes through the barrier, the particle itself might not be detected on the other side of the barrier; we do not, however, expect to get a definite prediction for an individual system, and in the laboratory we usually consider an ensemble of systems. From now on, we talk about the particle’s average time of transit. Consider a particle incident on a barrier from the left. Then, one can easily extend (3) to define average times for the particle’s entrance into the barrier ($`\tau _{in}`$), the particle’s exit from the right side of the barrier ($`\tau _{out}^T`$), and the particle’s exit from the left side of the barrier ($`\tau _{out}^R`$). To simplify the matter we use the following notations:
$`(\dots \mathrm{\Theta })_x^{\pm }=\int _0^{\infty }dt\,(\dots )\,(\pm )\,j(x,t)\,\mathrm{\Theta }[\pm j(x,t)]`$ (7)
where $`j(x,t)`$ represents the probability current density at the point $`x`$ at time $`t`$, and $`\mathrm{\Theta }`$ is the usual step function. Using this definition and (3), we define $`\tau _{in}`$, $`\tau _{out}^R`$ and $`\tau _{out}^T`$ as:
$`\tau _{in}=\frac{\int _0^{\infty }dt\,t\,j(a,t)\,\mathrm{\Theta }[+j(a,t)]}{\int _0^{\infty }dt\,j(a,t)\,\mathrm{\Theta }[+j(a,t)]}=\frac{(t\mathrm{\Theta })_a^+}{(\mathrm{\Theta })_a^+}`$ (9)
$`\tau _{out}^T=\frac{\int _0^{\infty }dt\,t\,j(b,t)\,\mathrm{\Theta }[+j(b,t)]}{\int _0^{\infty }dt\,j(b,t)\,\mathrm{\Theta }[+j(b,t)]}=\frac{(t\mathrm{\Theta })_b^+}{(\mathrm{\Theta })_b^+}`$ (10)
$`\tau _{out}^R=\frac{\int _0^{\infty }dt\,t\,(-)j(a,t)\,\mathrm{\Theta }[-j(a,t)]}{\int _0^{\infty }dt\,(-)j(a,t)\,\mathrm{\Theta }[-j(a,t)]}=\frac{(t\mathrm{\Theta })_a^-}{(\mathrm{\Theta })_a^-}`$ (11)
where $`a`$ and $`b`$ represent the coordinates of the left and right sides of the barrier, respectively. Using these times, one can write the times that the particle spends in the barrier before transmission ($`\tau _T^{OR}`$) or reflection ($`\tau _R^{OR}`$) as:
$`\tau _T^{OR}=\tau _{out}^T-\tau _{in}`$ (13)
$`\tau _R^{OR}=\tau _{out}^R-\tau _{in}`$ (14)
We shall call them OR times (referring to Olkhovsky and Recami). Note that, in their original definition, the temporal integrations run from $`-\infty `$ to $`+\infty `$. In Ref. , they discussed that the substitution of integrals of the type $`\int _0^{\infty }`$ for integrals $`\int _{-\infty }^{+\infty }`$ has physical significance. In any case, we shall use relations (6). The average time spent by the particle in the barrier, irrespective of being transmitted or reflected, the so-called dwelling time, is thus given by
$`\tau _d^{OR}=(\mathrm{\Theta })_b^+\,\tau _T^{OR}+(\mathrm{\Theta })_a^-\,\tau _R^{OR}`$ (15)
where $`(\mathrm{\Theta })_b^+`$ and $`(\mathrm{\Theta })_a^-`$ represent the probabilities of the particle’s exit from the right and left sides of the barrier, respectively. Now, the probability of the particle’s exit from the right, $`(\mathrm{\Theta })_b^+`$, is equal to the probability of the particle’s transmission through the barrier, $`|T|^2`$. But the probability of the particle’s exit from the left, $`(\mathrm{\Theta })_a^-`$, is not equal to the probability of reflection from the barrier, $`|R|^2`$, because the particle could be reflected without entering the barrier. Using (7), one can write (8) in the form:
$`\tau _d^{OR}=(\mathrm{\Theta })_b^+\,\tau _{out}^T+(\mathrm{\Theta })_a^-\,\tau _{out}^R-(\mathrm{\Theta })_a^+\,\tau _{in}`$ (16)
where we have made use of the fact that $`(\mathrm{\Theta })_b^++(\mathrm{\Theta })_a^-=(\mathrm{\Theta })_a^+`$, which follows from the conservation of probability. The first two terms in (9) represent the average of the particle’s exit time from the barrier, irrespective of the direction of exit. Using (6) we can write the right hand side of (9) in the form:
$`\tau _d^{OR}=\int _0^{\infty }dt\,t\,[j(b,t)-j(a,t)]`$ (17)
Using the continuity equation, one can easily show that (10) coincides with the standard dwelling time defined by
$`\tau _D=\int _0^{\infty }dt\int _a^b|\psi (x,t)|^2\,dx`$ (18)
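The following Crank-Nicolson sketch makes the above definitions concrete: it evolves a Gaussian packet across a square barrier, accumulates the moments of $`j(a,t)`$ and $`j(b,t)`$ that enter $`\tau _{in}`$, $`\tau _{out}^T`$ and $`\tau _{out}^R`$, and checks the OR dwelling time against the direct integral of $`|\psi |^2`$ over the barrier. The scheme, the grid and all physical parameters (units $`\mathrm{\hbar }=m=1`$) are illustrative assumptions, not the method used for the figures of this paper.

```python
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import splu

N, L, dt = 2000, 400.0, 0.02
x = np.linspace(-L / 2, L / 2, N); dx = x[1] - x[0]
a, b, V0 = 0.0, 2.0, 1.0                       # barrier edges and height
k0, sigma0, x0 = 1.0, 10.0, -60.0              # mean energy k0^2/2 < V0

V = np.where((x >= a) & (x <= b), V0, 0.0)
H = diags([np.full(N - 1, -0.5 / dx**2), 1.0 / dx**2 + V,
           np.full(N - 1, -0.5 / dx**2)], [-1, 0, 1], format="csc")
lu = splu((identity(N, format="csc") + 0.5j * dt * H).tocsc())
M = identity(N, format="csc") - 0.5j * dt * H  # psi_{n+1} = lu.solve(M psi_n)

psi = np.exp(-(x - x0)**2 / (4 * sigma0**2) + 1j * k0 * x)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

ia, ib = np.argmin(np.abs(x - a)), np.argmin(np.abs(x - b))
inside = (x > a) & (x < b)
def j_at(p, i):                                # j = Im(psi* dpsi/dx)
    return (np.conj(p[i]) * (p[i + 1] - p[i - 1]) / (2 * dx)).imag

acc = np.zeros(6); dwell = 0.0
for n in range(14000):
    t, ja, jb = n * dt, j_at(psi, ia), j_at(psi, ib)
    acc += dt * np.array([t * max(ja, 0), max(ja, 0),     # entrance at a
                          t * max(jb, 0), max(jb, 0),     # exit to the right
                          -t * min(ja, 0), -min(ja, 0)])  # exit to the left
    dwell += dt * np.sum(np.abs(psi[inside])**2) * dx
    psi = lu.solve(M @ psi)

t_in, t_outT, t_outR = acc[0] / acc[1], acc[2] / acc[3], acc[4] / acc[5]
tau_T, tau_R = t_outT - t_in, t_outR - t_in
tau_d = acc[3] * tau_T + acc[5] * tau_R
print("tau_T^OR = %.2f   tau_R^OR = %.2f" % (tau_T, tau_R))
print("dwelling time: OR form %.3f vs integral of |psi|^2: %.3f" % (tau_d, dwell))
```

The two dwelling-time numbers should agree up to discretization effects, which is the content of the equality of the last two relations above.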
## III Tunneling’s characteristic times in Bohmian framework
In the causal interpretation of quantum mechanics, proposed by David Bohm, a particle has a well defined position and velocity at each instant, where the latter is obtained from a field $`\psi (x,t)`$ satisfying the Schrödinger equation. If the particle is at $`x`$ at the time $`t`$, its velocity is given by
$`v(x,t)={\displaystyle \frac{j(x,t)}{|\psi (x,t)|^2}}`$ (19)
For a particle which is prepared in the state $`\psi (x,0)`$ at $`t=0`$, any uncertainty in its dynamical variables is a result of our ignorance about its initial position $`x_o`$. Our information about the particle’s initial position is given by the probability distribution $`|\psi (x_o,0)|^2`$. If we know the initial position $`x_o`$ of the particle, we can find its position at a later time, $`x(x_o;t)`$, from (12). Then, when a particle encounters a barrier, it is determined whether the particle passes through the barrier or not, and one can determine when the particle enters the barrier and when it leaves the barrier. Thus the time spent by the particle within the barrier is easily calculated. But, since we do not know the particle’s initial position, we consider an ensemble of initial positions, given by the distribution $`|\psi (x_o,0)|^2`$. Then, we calculate the average time spent by the particle within the barrier. To compare the time of reflection or transmission in this framework with the OR characteristic times, we first consider the time of arrival at $`x_1`$, for a particle that was at $`x_o`$ at $`t=0`$:
$`t(x_1;x_o)=\int _{C_{x_o}}dx\,t(x;x_o)\,\delta (x_1-x)`$ (20)
where the integral is defined along Bohmian path $`C_{x_o}`$ which starts at $`x_o`$. This relation can also be written in the form:
$`t(x_1;x_o)=\int _0^{\infty }dt\,|v(x(x_o;t),t)|\,t\,\delta (x_1-x(x_o;t))`$ (21)
where
$`\delta (x_1-x(x_o;t))=\frac{\delta (t(x_1)-t)}{|v(x(x_o;t),t)|}`$ (22)
Since it is possible for the particle to pass the point $`x_1`$ twice (due to reflection from the barrier), we define $`t^\pm (x_o;x_1)`$ in the following manner:
$`t^{\pm }(x_o;x_1)=\int _0^{\infty }dt\,|v(x(x_o;t),t)|\,t\,\delta (x_1-x(x_o;t))\,\mathrm{\Theta }[\pm v(x(x_o;t),t)]`$ (23)
where $`t^+`$ and $`t^-`$ correspond to the cases where the particle passes $`x_1`$ from left to right and from right to left, respectively. Since for long periods of time a particle either passes or is reflected (depending on its $`x_o`$), we define $`\mathrm{\Theta }_R`$ and $`\mathrm{\Theta }_T`$ in the following way:
$`\mathrm{\Theta }_T(x_o)=1,\mathrm{\Theta }_R(x_o)=0\text{ (for transmission)}`$ (25)
$`\mathrm{\Theta }_T(x_o)=0,\mathrm{\Theta }_R(x_o)=1\text{ (for reflection)}`$ (26)
Thus, we have $`\mathrm{\Theta }_T(x_o)+\mathrm{\Theta }_R(x_o)=1`$. Using these functions, the average times spent by the transmitted and the reflected particles, $`\tau _T^B`$ and $`\tau _R^B`$, respectively, are given by
$`\tau _T^B=\frac{\langle t^+(x_o;b)\mathrm{\Theta }_T(x_o)\rangle -\langle t^+(x_o;a)\mathrm{\Theta }_T(x_o)\rangle }{\langle \mathrm{\Theta }_T(x_o)\rangle }`$ (28)
$`\tau _R^B=\frac{\langle t^-(x_o;a)\mathrm{\Theta }_R(x_o)\rangle -\langle t^+(x_o;a)\mathrm{\Theta }_R(x_o)\rangle }{\langle \mathrm{\Theta }_R(x_o)\rangle }`$ (29)
where
$`\langle \dots \rangle =\int _{-\infty }^{+\infty }dx_o\,(\dots )\,|\psi (x_o,0)|^2`$ (30)
But $`\langle \mathrm{\Theta }_T(x_o)\rangle =|T|^2`$ and $`\langle \mathrm{\Theta }_R(x_o)\rangle =|R|^2`$. Thus, we have for the dwelling time:
$`\tau _d^B=|T|^2\tau _T^B+|R|^2\tau _R^B`$ (31)
$`=\langle t^+(x_o;b)\mathrm{\Theta }_T(x_o)\rangle +\langle t^-(x_o;a)\mathrm{\Theta }_R(x_o)\rangle -\langle t^+(x_o;a)\rangle `$ (32)
where we have made use of the fact that $`\mathrm{\Theta }_T+\mathrm{\Theta }_R=1`$. Using the fact that $`\int _{-\infty }^{+\infty }dx_o\,f(x(x_o,t),t)\,|\psi (x_o,0)|^2\,\delta (x-x(x_o,t))=f(x,t)\,|\psi (x,t)|^2`$, one can easily show that
$`\langle t^+(x_o;b)\mathrm{\Theta }_T(x_o)\rangle =(t\mathrm{\Theta })_b^+`$ (34)
$`\langle t^-(x_o;a)\mathrm{\Theta }_R(x_o)\rangle =(t\mathrm{\Theta })_a^-`$ (35)
$`\langle t^+(x_o;a)\rangle =(t\mathrm{\Theta })_a^+`$ (36)
Thus, $`\tau _d^B`$ is equal to $`\tau _d^{OR}`$ and therefore equal to $`\tau _D`$. Of course, the equality of $`\tau _d^B`$ and $`\tau _D`$ was shown earlier by Leavens. But the relations (21) are new, and they are important because they show the relation between the OR characteristic times and those defined in Bohmian mechanics. Notice that in the causal interpretation of Bohm, one defines two average entrance times, $`\tau _{in}^T`$ and $`\tau _{in}^R`$, depending on whether the particle is reflected or transmitted:
$`\tau _{in}^T=\frac{\langle t^+(x_o,a)\mathrm{\Theta }_T(x_o)\rangle }{\langle \mathrm{\Theta }_T(x_o)\rangle }`$ (38)
$`\tau _{in}^R=\frac{\langle t^+(x_o,a)\mathrm{\Theta }_R(x_o)\rangle }{\langle \mathrm{\Theta }_R(x_o)\rangle }`$ (39)
whereas OR have defined only one average time. This is because in the standard interpretation of quantum mechanics it is not definite whether a particle that has entered a barrier is transmitted or reflected. It is natural for the average time of the particle’s entrance, irrespective of whether it is reflected or transmitted, to be equal to $`|T|^2\tau _{in}^T+|R|^2\tau _{in}^R=\langle t^+(x_o;a)\rangle `$ in the Bohmian framework. Then, we must have $`(t\mathrm{\Theta })_a^+=|T|^2\tau _{in}^T+|R|^2\tau _{in}^R`$, which is easy to prove.
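The Bohmian construction above can be tried out in the simplest solvable setting, a free Gaussian packet, for which both the velocity field and the trajectories are known in closed form (units $`\mathrm{\hbar }=m=1`$; all values arbitrary). The sketch integrates $`dx/dt=v(x,t)`$ with RK4, checks the result against the exact trajectories, and confirms that trajectories ordered at $`t=0`$ stay ordered, so that each $`x_o`$ fixes a unique passage time.

```python
import numpy as np

k0, sigma0, x0 = 1.0, 1.0, 0.0

def v(xp, t):   # Bohmian velocity field of a free Gaussian packet
    return k0 + (xp - x0 - k0 * t) * t / (4 * sigma0**4 + t**2)

def rk4_path(xo, ts):
    xs = [xo]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, xp = t1 - t0, xs[-1]
        k1 = v(xp, t0); k2 = v(xp + h * k1 / 2, t0 + h / 2)
        k3 = v(xp + h * k2 / 2, t0 + h / 2); k4 = v(xp + h * k3, t1)
        xs.append(xp + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
    return np.array(xs)

ts = np.linspace(0.0, 30.0, 3000)
starts = [-2.0, -1.0, 0.5, 1.5]
paths = [rk4_path(xo, ts) for xo in starts]

sigma = sigma0 * np.sqrt(1 + (ts / (2 * sigma0**2))**2)
for xo, p in zip(starts, paths):
    exact = x0 + k0 * ts + (xo - x0) * sigma / sigma0   # known closed form
    print("x_o = %+.1f   max |RK4 - exact| = %.1e" % (xo, np.abs(p - exact).max()))
print("still ordered at t = 30:",
      all(paths[i][-1] < paths[i + 1][-1] for i in range(len(paths) - 1)))
```

With a barrier present the velocity field is no longer available in closed form, and $`\psi (x,t)`$ has to be computed numerically first, as in the sketch of the previous section.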
It is natural to expect the average time for the particle’s transmission through a potential barrier to be a function of the width of the barrier. This time should generally increase with the width of the barrier. However, due to quantum effects, one does not expect it to be a linear function of this width. Most of the times defined within the framework of the standard interpretation of quantum mechanics do not have this property, and some of them even yield negative times! On the other hand, we expect the transmission time to decrease with increasing energy of the incident particle. The transmission time in the OR approach has both of these properties . The diagrams in Fig.(2) and Fig.(3) represent the transmission time as a function of the width of the barrier and as a function of the particle’s energy, respectively, for both the Bohmian and OR times. The numerical method used to solve the time dependent Schrödinger equation was the fourth order (in time steps $`\delta t`$) symmetrized product formula method, developed by De Raedt . We chose $`\delta x=\pi /30k_o`$, where $`k_o=\sqrt{\frac{2mE_o}{\mathrm{\hbar }^2}}`$, and $`\delta t=\delta x^2/25`$ in all calculations ($`E_o`$ is the energy of the incident Gaussian wave packet). One notices that the OR transmission time coincides with that of the Bohmian case for large $`|T|^2`$ (i.e. $`d<2`$ in the diagrams of Fig.(2) and $`E_o>V_o`$ in the diagram of Fig.(3)). This is natural, because while the average time for the particle’s exit from the right side of the barrier is always the same in both approaches, in the limit $`|T|^2\to 1`$ the average entrance time for the transmitted particle is the same in the OR approach and in the Bohmian approach ($`|T|^2\to 1\mathrm{\Rightarrow }\tau _{in}^T\to \tau _{in}`$). Thus in this limit we have $`\tau _T^B=\tau _T^{OR}`$. On the other hand, as we said earlier, the average time for the particle’s exit from the left, in the OR approach, is generally different from that of the Bohmian case. But, if we choose $`a`$ to be a point far (relative to the width of the wave packet) from the left side of the barrier, then the time for the particle’s exit from the left side is the same in both approaches, since in this case, for $`|R|^2\to 1`$, the average time of entrance for reflected particles in the causal approach becomes equal to the average time of entrance in the OR approach ($`|R|^2\to 1\mathrm{\Rightarrow }\tau _{in}^R\to \tau _{in}`$). Thus we have $`\tau _R^B=\tau _R^{OR}`$. It appears that the OR approach gives the most natural definition for a positive definite transmission time within the framework of the standard interpretation of quantum mechanics<sup>§</sup><sup>§</sup>§Of course, in relation (6b), if we choose $`b`$ to be a point in the interior of the barrier, $`\tau _T^{OR}`$ may become negative, but small in absolute value.
## IV Experimental test
It is generally believed that the standard quantum mechanics and Bohmian mechanics have identical predictions for physical observables. On the other hand, there is no Hermitian operator associated with time. Is it possible to consider a phenomenon involving time, e.g. tunneling, to differentiate between these two theories? By considering a thought experiment, Cushing gave a positive response to this question. His argument was the following:
(1) There is presently no satisfactory account of a quantum tunneling time (QTT) in the standard quantum mechanics.
(2) There is a well defined account of QTT in Bohmian interpretation. If it can be measured, then such a measurement would constitute a test of the interpretation .
(3) It might be possible to measure the Bohmian QTT with an experiment of a certain type.
(4) Therefore, from (2) and (3), if an experiment of that type is possible, such an experiment could serve as a test of Bohm’s interpretation.
(5) Because of (1), the outcome of an experiment of that type would not support or refute the Copenhagen interpretation.
In a recent article, K. Bedard, by referring to (1), (2) and (5), questioned Cushing’s conclusion. Her argument was based on the fact that the two theories have different microontologies. Therefore, the QTT obtainable from Bohmian mechanics has no counterpart in the standard quantum mechanics. Thus, the measurement of such a time cannot be considered a test between the two theories. Here, we shall question (3), i.e. the claim that Cushing’s thought experiment can be used to measure Bohmian times.
Cushing’s experiment consists of a potential barrier between $`x_1`$ and $`x_2`$ with width $`d`$ ($`d=b-a`$). A detector $`D_T`$ is located at $`x_2`$ on the right of the barrier and a detector $`D_R`$ is located at $`x_1`$ on the left of the barrier ($`x_1<a<b<x_2`$). Electrons are incident from the left. $`D_T`$ records the arrival times of the transmitted electrons at $`x_2`$ and $`D_R`$ records the times of the reflected electrons at $`x_1`$. The distance from $`x_1`$ to the left side of the barrier ($`a`$) is much larger than the width of the wave packet. The same holds for $`x_2`$. The recording of the arrival time of the incident electron at $`x_1`$ would collapse the wave function. In that case, any subsequent tunneling time prediction on the basis of the known incident wave packet would be quite useless. To resolve this problem, Cushing considers the preparation of the state of the incident particle at $`x_1`$, rather than its detection. Thus, the time recorded at $`x_1`$ is the preparation time for the transit of the particle, if $`D_T`$ would detect it, and the preparation time for the reflected particle, if $`D_R`$ would detect it. To provide this condition, we prepare a source of electrons in front of which there is a shutter. The shutter starts to open a little before $`t_o=0`$ and closes a little after $`t_o`$. Thus $`t_o`$ is the most probable time for the passage of the electron through $`x_1`$. In other words, $`t_o`$ is the time when the peak of the wave packet passes $`x_1`$. By choosing a weak source, we can be sure to have at most one electron emerging from the shutter’s opening. The time of passage for the particle through $`x_1`$ is $`t_1=t_o\pm \frac{\mathrm{\Delta }x}{v_o}`$, where $`\mathrm{\Delta }x`$ is the width of the packet and $`v_o`$ is the speed of the particle. Cushing claims that ”in principle, this error could be made as small as we like (for large enough $`v_o`$)”. In our opinion, the error must be compared with $`\tau _T`$, not with $`t_1`$. In fact, we want to obtain $`\tau _T`$, which is the difference of two times ($`t_2-t_1`$, where $`t_2`$ is the time at which the electron is detected at $`x_2`$ by $`D_T`$). The error could be small if we compare it with $`t_1`$ and $`t_2`$, but not if we compare it with their difference. Thus, we must have:
$`\frac{\mathrm{\Delta }x}{v_o}\ll \tau _T`$ (40)
By referring to Fig.(3), one can see that $`\tau _T`$ decreases faster than $`\frac{1}{v_o}`$. Thus, an increase in $`v_o`$ decreases the right hand side of (23) more than its left hand side, and we are not able to decrease the relative error in this way. One may hope to obtain condition (23) by decreasing $`\mathrm{\Delta }x`$. But decreasing $`\mathrm{\Delta }x`$ is not useful, because, by referring to Fig.(2) (a, b, c), one can see that $`\tau _T`$ decreases almost linearly with $`\mathrm{\Delta }x`$.
Experimental limitations dictate that the arrival time of particles at the barrier be measured independently of whether they shall be reflected or transmitted (i.e. the state preparation time). Thus, although Bohm’s theory considers $`\tau _{in}^T`$ and $`\tau _{in}^R`$, it must pay attention only to measurements of $`\tau _{in}=|T|^2\tau _{in}^T+|R|^2\tau _{in}^R`$, to avoid these experimental limitations. In this way, the precise times that it attributes to the particle’s transmission through the barrier or reflection are $`\tau _T^{OR}`$ and $`\tau _R^{OR}`$, respectively. Thus, at the experimental level, even in the case of tunneling times we have the same predictions in the two theories. In fact, here we encounter a problem like the case of the celebrated two-slit experiment. In the framework of Bohmian mechanics, all particles observed on the lower (upper) half of the screen must come from the lower (upper) slit. But any effort to know which particle came from which slit destroys the interference pattern. Thus, in the two-slit experiment, the two theories come to the same result due to experimental limitations. It appears that, from the various definitions given for QTT in the framework of the standard quantum mechanics, our choice of OR’s is the best, because, in our opinion, it is the best time that can be related to the tunneling phenomenon in the framework of the standard interpretation of quantum mechanics. We can justify our claim in the following way:
(1) There is a unique and well defined account of QTT in Bohm’s interpretation.
(2) There are several accounts of QTT in standard interpretation.
(3) These two theories have the same prediction for observables.
(4) Bohmian prediction for QTT coincides with one of Copenhagen QTT (OR’s).
In fact, OR’s is the only definition that gives the same result, at the experimental level, as Bohmian mechanics, although it does not associate an operator with $`\tau _T`$ (at least up to now). In this way, we have used a theory with additional microontology (Bohmian mechanics) to give the best definition for a quantity in a theory with less microontology. Bohm’s theory may also shed light on other definitions of QTT in the standard quantum mechanics.
## Conclusion
Considering the fact that the microontology of the Copenhagen theory includes the wave function (probability amplitude), and not point-like particles, the best time one could attribute to the passage of a particle through a point of space is the average time of the passage of the probability flux (eq.(3)). Generalization of this time to QTT leads one to OR’s times. On the other hand, the microontology of Bohmian mechanics includes point-like particles in addition to the wave function, and it leads uniquely to the Bohmian QTT (eq.(18)). We have compared them for different widths and energies of the wave packet in Fig.(2) and Fig.(3), by means of numerical calculations.
Now, the Bohmian QTT cannot be measured, due to experimental limitations. The best times that could be obtained in Bohmian mechanics are the same as OR’s. The agreement of one of the several available definitions of QTT in Copenhagen quantum mechanics (in fact, only the systematic projector approach of Brouard, Sala and Muga leads to an infinite hierarchy of possible mean transmission and reflection times ) with the unique definition of Bohmian mechanics separates it from the others, because it is reasonable to expect the same predictions from the two theories even in the case of QTT.
## Acknowledgments
The authors would like to thank Dr. C. R. Leavens for drawing our attention to the prior work of Olkhovsky and Recami.
# Quantum-mechanical probability from the symmetries of two-state systems
## Abstract
In 1989, Deutsch gave a basic physical explanation of why quantum-mechanical probabilities are squares of amplitudes. Essentially, a general state vector is transformed into a highly symmetric equal-amplitude superposition. The argument was recently elaborated and publicised by DeWitt. It has remained incomplete, however, inasmuch as both authors anticipate the usual normalization (sum of amplitudes squared) of state vectors. In the present paper, a thought experiment is devised in which Deutsch’s idea is demonstrated independently of the normalization, exploiting further symmetries instead.
According to the standard Born statistical interpretation of a state vector
$$|\psi \rangle =\underset{i}{\sum }\psi _i|i\rangle \qquad \underset{k}{\sum }|\psi _k|^2=1$$
the $`i`$th eigenvalue of an observable is measured with probability $`|\psi _i|^2`$. While there is no generally accepted answer as to the origin of the stochasticity, the values of the probabilities can be deduced from a variety of assumptions. The simplest way is to define a quantum state as a linear expectation-value functional over the algebra of observables . Thus one starts out from
$$\langle A+B\rangle =\langle A\rangle +\langle B\rangle \text{(not used in this paper)}$$
(1)
Such an equation can be taken to define “correspondence” but its physical interpretation is not unproblematic . $`A`$ and $`B`$ are formal representations of apparatuses, and it is hard to tell what kind of apparatus is represented by $`A+B`$ if the summands are as different as a particle’s momentum and position.
A celebrated mathematical result on quantum-mechanical probabilities is due to Gleason . If normalized probabilities are to be assigned to the eigenvectors of each hermitian operator, which is what a quantum state is expected to do, then the only possibility is the amplitudes-squared prescription. Unfortunately, the existing proofs of Gleason’s theorem (cf. ) are not easily received by many physicists.
In Deutsch’s approach only basic mathematics is involved. Moreover, sums of operators as on the lhs of (1) can be avoided; only superpositions of state vectors are required. DeWitt conceals this latter advantage by presenting the argument in terms of sums of projection operators, while Deutsch uses state vectors only. However, both authors anticipate in a crucial way the amplitudes-squared normalization of physical state vectors. The problem with this is that the standard normalization is physically motivated by the amplitudes-squared form of the probabilities.
In the present paper, normalizations are implicit in the unitarity of time evolutions. The latter can be inferred from something weaker than unitarity: from the assumption that none of the state vectors “decay” to the null vector. This would also be consistent with probabilities equal to the absolute values (unsquared) of amplitudes, hence it is a weaker assumption. In conjunction with symmetries of certain two-state subsystems, however, full unitarity is recovered automatically.
Following Deutsch , let us consider a superposition state of the special form
$$|\psi \rangle =\sqrt{\frac{m}{m+n}}|A\rangle +\sqrt{\frac{n}{m+n}}|B\rangle $$
(2)
Let this be coupled to an auxiliary $`m+n+1`$-state system, and let $`|A\rangle `$ and $`|B\rangle `$ be substituted by normalized superpositions according to
$`|A\rangle \equiv |A\rangle |0\rangle `$ $`\stackrel{S}{\longrightarrow }`$ $`\sqrt{\frac{1}{m}}\underset{i=1}{\overset{m}{\sum }}|A\rangle |i\rangle `$ (3)
$`|B\rangle \equiv |B\rangle |0\rangle `$ $`\stackrel{S}{\longrightarrow }`$ $`\sqrt{\frac{1}{n}}\underset{i=m+1}{\overset{m+n}{\sum }}|B\rangle |i\rangle `$ (4)
Deutsch motivates $`S`$ by decision-theoretic substitutability; DeWitt devises an observable whose expectation value involves $`S`$. The substitution preserves the properties $`A`$ and $`B`$, but when inserted in (2) it results in an equal-amplitude superposition of the form
$$\frac{1}{\sqrt{m+n}}\underset{M=1}{\overset{m+n}{\sum }}|M\rangle $$
(5)
Due to the permutational symmetry (see also the discussion of (12) below) the probability for detecting an $`|M\rangle `$ state is $`(m+n)^{-1}`$. Thus the probability for property $`A`$ is $`(m+n)^{-1}m`$, and for $`B`$ it is $`(m+n)^{-1}n`$.
In order to avoid anticipating the normalization, $`S`$ is interpreted here as the time evolution of an apparatus capable of spatially separating internal states of an atom. The factors of $`1/\sqrt{m}`$ and $`1/\sqrt{n}`$ then arise automatically. It is reassuring to note that a variety of state-separating apparatuses can be realized experimentally .
Consider an arrangement of $`m+n+1`$ cavities, all of the same shape, connected by channels as indicated for $`m=3`$ and $`n=2`$ in Figure 1.
Let the states $`|A\rangle `$ and $`|B\rangle `$ correspond to internal states of an atom. If the atom is placed in cavity $`i`$, its total state is $`|A\rangle |i\rangle `$ or $`|B\rangle |i\rangle `$, etc. Let us assume that channels $`1,\mathrm{\dots },m`$ can be passed by the atom in state $`|A\rangle `$ only, channels $`m+1,\mathrm{\dots },m+n`$ in state $`|B\rangle `$ only, and that channels can be closed individually.
Why cavities? Their point is to enable an individual treatment of parts of a wavefunction. If summand $`|i\rangle `$ resides in a disconnected cavity, it is screened from the other summands $`|j\rangle `$, $`j\ne i`$. We shall use this (a) to put into storage a summand that already has a desired form, and (b) to change the complex phase or internal state of a summand.
A particularly symmetric situation arises if we close all channels but one, connecting $`|0\rangle `$ to some $`|i\rangle `$. The permutation of $`|0\rangle `$ and $`|i\rangle `$ is then a symmetry of the time evolution. An atom in the initial state $`|A\rangle |0\rangle `$ will evolve, after a time interval $`\tau `$, into a superposition
$$\alpha |A\rangle |0\rangle +\beta |A\rangle |i\rangle $$
(6)
where $`\alpha `$ and $`\beta `$ are complex numbers. By exchanging the roles of $`|0\rangle `$ and $`|i\rangle `$ we obtain from an initial state $`|A\rangle |i\rangle `$
$$\beta |A\rangle |0\rangle +\alpha |A\rangle |i\rangle $$
(7)
There are two stationary cavity states, $`|\pm \rangle =|0\rangle \pm |i\rangle `$, for which the time evolution takes the form (omitting the $`|A\rangle `$ factor for the moment)
$$|\pm \rangle \longrightarrow (\alpha \pm \beta )|\pm \rangle $$
(8)
We now postulate the system to be stable in the sense that none of the state vectors tend to zero for $`t\to \pm \mathrm{\infty }`$. Repeated application of (8) then implies both $`|\alpha +\beta |^2=1`$ and $`|\alpha -\beta |^2=1`$, which in turn implies
$$|\alpha |^2+|\beta |^2=1\qquad \alpha \beta ^{\ast }+\alpha ^{\ast }\beta =0$$
Thus the time evolution matrix
$$U(\tau )=\left(\begin{array}{cc}\alpha (\tau )& \beta (\tau )\\ \beta (\tau )& \alpha (\tau )\end{array}\right)$$
is unitary automatically for any $`\tau `$. Its explicit dependence on $`\tau `$ can be seen from the group property $`U(\tau +\tau ^{})=U(\tau )U(\tau ^{})`$ (by considering infinitesimal $`\tau ^{}`$ and integrating up, for example):
$$U(\tau )=e^{iϵ\tau }\left(\begin{array}{cc}\mathrm{cos}\omega \tau & i\mathrm{sin}\omega \tau \\ i\mathrm{sin}\omega \tau & \mathrm{cos}\omega \tau \end{array}\right)$$
(9)
Parameters $`ϵ`$ and $`\omega `$ are real numbers defined by $`e^{iϵ\tau }\mathrm{cos}\omega \tau =\alpha `$. Thus we recover the well-known time evolution of a symmetric two-state system without anticipating conservation of probability.
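Both the unitarity and the group property of this evolution are easy to confirm numerically; the parameter values below are arbitrary.

```python
import numpy as np

eps, omega = 0.3, 1.7   # arbitrary real parameters of Eq. (9)

def U(tau):
    a = np.exp(1j * eps * tau) * np.cos(omega * tau)
    b = 1j * np.exp(1j * eps * tau) * np.sin(omega * tau)
    return np.array([[a, b], [b, a]])

t1, t2 = 0.42, 1.13
print("unitary:       ", np.allclose(U(t1).conj().T @ U(t1), np.eye(2)))
print("group property:", np.allclose(U(t1) @ U(t2), U(t1 + t2)))
```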
We now use the time evolution through individual channels in order to produce an equal-amplitude superposition from (2). The iterative step is
$$\sqrt{k}|A\rangle |0\rangle \longrightarrow \sqrt{k-1}|A\rangle |0\rangle +|A\rangle |i\rangle $$
(10)
which is accomplished by opening channel $`i`$ exclusively, for a time interval $`\tau _k`$ determined by
$$\mathrm{cot}\omega \tau _k=\sqrt{k-1}$$
Thus we produce a unit-amplitude contribution in the $`i`$th cavity. In fact, we are allowed to consider simplified, non-normalized state vectors here because it will be the equality of amplitudes in the various cavities that matters.
Starting out from the simplified state vector
$$\sqrt{m}|A\rangle |0\rangle +\sqrt{n}|B\rangle |0\rangle $$
we open up channels $`1,\mathrm{\dots },m`$ for time intervals $`\tau _m`$, $`\tau _{m-1}`$, …, $`\tau _1`$, respectively. This brings down the amplitude of the central state $`|A\rangle |0\rangle `$ from $`\sqrt{m}`$ to $`\sqrt{m-1}`$, $`\sqrt{m-2}`$, …, $`0`$, while the amplitude of the central state $`|B\rangle |0\rangle `$ in the superposition is not affected. In exchange for $`\sqrt{m}|A\rangle |0\rangle `$ we successively obtain terms $`|A\rangle |1\rangle `$, $`|A\rangle |2\rangle `$, …, $`|A\rangle |m\rangle `$ in the superposition, all with a unit amplitude. Analogously, to deal with internal state $`|B\rangle `$ we open up channels $`m+1`$ to $`m+n`$ for time intervals $`\tau _n`$, $`\tau _{n-1}`$, …, $`\tau _1`$, respectively. In exchange for $`\sqrt{n}|B\rangle |0\rangle `$ we obtain terms $`|B\rangle |m+1\rangle `$, $`|B\rangle |m+2\rangle `$, …, $`|B\rangle |m+n\rangle `$ with a unit amplitude in the superposition. It should be stressed again that in all these steps we do not change properties $`A`$ and $`B`$; we only separate them spatially.
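The whole sequence of channel openings is easy to simulate. The sketch below applies the $`2\times 2`$ evolution of Eq. (9), with the global phase $`e^{iϵ\tau }`$ dropped, channel by channel with $`\mathrm{cot}\omega \tau _k=\sqrt{k-1}`$, and confirms that all $`m+n`$ cavity amplitudes come out with equal magnitude; the values of $`m`$ and $`n`$ are arbitrary.

```python
import numpy as np

m, n = 3, 2
ampA = np.zeros(m + n + 1, complex); ampA[0] = np.sqrt(m)   # internal state A
ampB = np.zeros(m + n + 1, complex); ampB[0] = np.sqrt(n)   # internal state B

def open_channel(amp, i, k):
    theta = np.arctan2(1.0, np.sqrt(k - 1.0))   # omega*tau_k: cot(theta)=sqrt(k-1)
    c, s = np.cos(theta), 1j * np.sin(theta)
    amp[0], amp[i] = c * amp[0] + s * amp[i], s * amp[0] + c * amp[i]

for step, i in enumerate(range(1, m + 1)):          # channels 1..m (state A)
    open_channel(ampA, i, m - step)
for step, i in enumerate(range(m + 1, m + n + 1)):  # channels m+1..m+n (state B)
    open_channel(ampB, i, n - step)

mags = np.abs(np.concatenate([ampA[1:m + 1], ampB[m + 1:]]))
print("cavity amplitude magnitudes:", np.round(mags, 12))   # all equal to 1
print("left on |0>: %.1e (A), %.1e (B)" % (abs(ampA[0]), abs(ampB[0])))
```

The residual factors of $`i`$ picked up by the transferred amplitudes are exactly the kind of cavity phases that are removed below by temporarily shifting the potential energy.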
According to the discussion so far, we have arrived at a state vector
$$\underset{i=1}{\overset{m}{\sum }}|A\rangle |i\rangle +\underset{i=m+1}{\overset{m+n}{\sum }}|B\rangle |i\rangle $$
(11)
However, we have neglected the phase factors that arise from the $`e^{iϵ\tau }`$ of equation (9). Moreover, there could be additional phase factors from the evolution within a cavity during times of disconnection. Hence we should rather discuss the more general expression
$$\underset{i=1}{\overset{m}{\sum }}e^{i\phi _i}|A\rangle |i\rangle +\underset{i=m+1}{\overset{m+n}{\sum }}e^{i\phi _i}|B\rangle |i\rangle $$
But phase factors for individual cavities do not pose a problem physicswise. The relation between energies and rotating phase factors, a non-statistical axiom of quantum mechanics, implies that we can get rid of the $`e^{i\phi _i}`$ by temporarily increasing the potential energy of the atom (gravitationally, e.g.) in a particular cavity. Hence it actually suffices to consider expression (11).
For the assignment of probability $`1/(m+n)`$ to each of the cavity states of (11), we must bring out the permutational symmetry among all $`m+n`$ states more clearly. Again it is helpful to have the internal atomic states $`A`$ and $`B`$ separated in space. From the “experimental” procedure we know that in cavities $`m+1,\mathrm{\dots },m+n`$ the internal state is necessarily $`|B\rangle `$. To these cavities we now apply a $`\pi `$ pulse so as to rotate $`|B\rangle `$ into $`|A\rangle `$. Thus we finally arrive at a state of the form
$$\underset{i=1}{\overset{m+n}{\sum }}|A\rangle |i\rangle $$
(12)
whose permutational symmetry with respect to all cavities is obvious. For example, we could swap the contents of cavities $`i`$ and $`j`$ without changing the state vector. Thus the probability for detecting the atom in a particular cavity is $`1/(m+n)`$. From the conduct of the experiment it follows what this means for the probabilities of $`A`$ and $`B`$.
The argument is easily extended to superpositions involving more than two internal states of an atom,
$$|\psi \rangle =\underset{i}{\sum }\sqrt{n_i}|A_i\rangle $$
Successively, each component $`\sqrt{n_i}|A_i\rangle `$ is transformed by $`n_i`$ applications of (10) into $`n_i`$ unit-amplitude terms of a grand superposition analogous to (11).
# Comment on “Density functional theory study of some structural and energetic properties of small lithium clusters” [J. Chem. Phys. 105, 9933 (1996)]
The experimentally determined size-evolutionary patterns (SEPs) of simple metal clusters pertaining to ionization potentials (IPs), electron affinities, and monomer separation energies have attracted much attention in the past ten years , since they may reflect the electronic shell structure of clusters (the major features of SEPs are associated with steps at magic numbers, but often there is in addition substantial fine structure, such as odd-even alternations).
Due to the required computational effort, first-principles (FP) theoretical studies (which incorporate the ionic geometry) of such SEPs are usually limited to cluster sizes of the order of ten atoms. Recently, however, systematic theoretical investigations of such SEPs have been performed for a broad range of cluster sizes using the jellium-related Shell-Correction-Method (SCM) approach.
In a recent article, Gardet et al. have studied the IPs of small lithium clusters Li<sub>N</sub> (with $`N12`$) using a FP - density functional theory (DFT) and found reasonable agreement between theory and experiment. Furthermore, through a comparison of their results with those obtained from Kohn-Sham local-density-approximation (KS-LDA) calculations on a spherical jellium background, they concluded that the jellium model is inadequate for a proper description of the IPs of such systems.
The purpose of this comment is to clarify that, while modelling the ions in a cluster via a uniform jellium background is certainly an approximation, the above conclusion of Gardet et al. pertains to limitations introduced through neglect of deviations from spherical symmetry, rather than to the jellium approximation itself. Indeed, in a series of papers, we have demonstrated that consideration of triaxial shape deformations drastically improves the agreement between the jellium approximation and experiment for all instances of the aforementioned SEPs and for sizes up to 100 atoms, as well as for a variety of metal species (namely, alkali metals, such as Na and K, and noble metals, such as Cu and Ag).
To further elucidate the importance of shape deformations, we display in Fig. 1 the IPs of small Li<sub>N</sub> clusters (in the same size-range as with Ref. ) calculated with our SCM for three different families of shapes of the jellium background \[namely, spherical, spheroidal (axially symmetric), and ellipsoidal (triaxial)\], and compare them to the experimental measurements.
Fig. 1 reveals that spherical shapes (top panel) exhibit a characteristic sawtoothed profile, well known from previous spherical KS-LDA studies and similar to the curve labeled jellium-LDA in Fig. 12 of Ref. . Apart from major-shell closures, this sawtoothed profile describes the data rather poorly (notice in particular the absence of fine structure between major shell closures at $`N=2`$, 8 and 20).
The spheroidal model (middle panel) exhibits substantial improvement in describing the experimental trend. Furthermore, the ellipsoidal case (bottom panel) improves the agreement between the SCM results and experiment even further, in particular in the size range $`11N14`$. The essential improvement introduced by the deformations over the spherical case concerns the very good description of the subshell closure at $`N=14`$ and of odd-even alternations between major shell closures.
In summary, we have illustrated once again that, within the jellium approximation, deformed cluster shapes provide an adequate description of the observed systematic size dependence of the properties of simple metal clusters and should necessarily be employed in comparisons with other theoretical approaches.
This research is supported by the US Department of Energy (Grant No. FG05-86ER-45234). Studies were performed at the Georgia Institute of Technology Center for Computational Materials Science.
# Oscillatory Behavior of Critical Amplitudes of the Gaussian Model on a Hierarchical Structure
## I Introduction
In the near past a considerable research activity has been devoted to the studies of recursion relations which have a singular structure near the pertinent fixed points \[1-6\]. It was found that, under certain conditions, these singularities can lead to an unusual critical behavior of relevant physical quantities. In particular, it has been shown that the mean end-to-end distance $`R_N`$ of a simple ideal polymer chain on some hierarchical structures can grow more slowly than any power of its length $`N`$. This effect has been termed localization, and it has been attributed to an entropic trapping of the polymer chain : in order to maximize the entropy, it is advantageous for a chain to visit the lattice sites of the highest coordination number preferentially. These sites act, therefore, as entropic traps preventing the swelling of the chain.
It is well known that the statistics of an ideal polymer chain on lattices can be captured by means of a suitable Gaussian model. Using this connection, a number of interesting results for the polymer model have been derived by studying the singular structure of the associated recursion relations for the Gaussian model. Let us recall here just one example: it has been argued, by using finite-size scaling arguments and an analytical study of a pertinent mapping, that the mean end-to-end distance of an ideal chain on the modified Sierpinski gasket (see Fig. 1) follows the logarithmic asymptotic law $`R_N\sim \mathrm{ln}^\mathrm{\Phi }N`$, with $`\mathrm{\Phi }=\mathrm{ln}3/\mathrm{ln}2`$. As it has been emphasized , however, it was very difficult to check this result numerically, due to the oscillations in the values of the Gaussian correlation length $`\xi `$.
This type of correction to the leading asymptotic behavior near criticality was studied quite early , within the framework of the renormalization group approach. Recently, such corrections were observed in many systems displaying power law singularities, and they were related to the concept of discrete scale invariance . This motivated us to reconsider here the Gaussian model on a modified SG lattice. In this case the hierarchical structure of the lattice leads to oscillations of the critical amplitudes of various quantities close to criticality. If one presents them on an appropriate scale, it turns out that they become regularly spaced.
We have found that the above mentioned oscillations are universal, and that they can be described in terms of some simple functions which are periodic in the variable $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$ (with $`\delta K=K_c-K`$ being the distance from the critical point). This is in contrast to the critical behavior of usual systems (displaying power law singularities), in which case the corresponding variable has the form $`\mathrm{ln}(\delta K)`$. What is perhaps even more interesting, we have found that our numerical values of the correlation length do not fit the form $`\xi _0\,\mathrm{ln}^\mathrm{\Phi }(\delta K)`$ with the above reported value $`\mathrm{\Phi }=\mathrm{ln}3/\mathrm{ln}2`$. We have shown instead, using an asymptotic matching, that one has to take $`\mathrm{\Phi }=\mathrm{ln}(3/2)/\mathrm{ln}2`$ in order to have proper agreement between analytical and numerical results. Consequently, the common finite-size scaling arguments, which were used in the previous studies of this model , are not applicable in this case.
In Sec. II we present our model, recall some previously obtained results, and examine the oscillatory behavior of some critical amplitudes. Our conclusions are given in Sec. III.
## II THE MODEL AND ITS ANALYSIS
We consider here the usual zero-field Gaussian model on a hierarchical structure which is presented by a modified Sierpinski gasket (SG) lattice of base $`b=3`$ (Fig. 1). Partition function of this model has a simple form,
$$\mathcal{Z}(K)=\int _{-\infty }^{+\infty }dS_1\mathrm{\cdots }dS_N\,\mathrm{exp}\left[-\frac{1}{2}\sum _iS_i^2+K\sum _{<ij>}S_iS_j\right],$$
(1)
where $`S_i`$ is the continuous spin variable at site $`i`$, $`K=J/k_BT`$, where $`T`$ is temperature and $`J`$ represents the interaction between each nearest-neighbor pair of Gaussian spins, while $`<ij>`$ denotes the summation over all such pairs. As we have shown in a previous paper , the $`r`$th order partition function can be expressed in terms of three parameters $`A^{(r)}`$, $`B^{(r)}`$, and $`D^{(r)}`$ which obey a set of recursion relations and suitable initial conditions. These relations are somewhat cumbersome (see Eqs. (34)-(35) of ), and we will not repeat them here. Let us note, nevertheless, that they have a singular structure near the relevant fixed point, which does not allow us to make a common fixed-point analysis. As detailed in , for the critical value $`K_c=0.227148\mathrm{\dots }`$ of the interaction strength $`K`$, all successive iterations of $`A^{(r)}`$ and $`B^{(r)}`$ lie on an invariant line, which starts at the point $`(A^{(0)}=0,B^{(0)}=K_c)`$ and ends at the fixed point $`(A^{\ast }=1/6,B^{\ast }=0)`$. An asymptotic equation of this line, which is valid near the fixed point, has been found perturbatively . One can show that along this line ($`\delta K=0`$) the parameter $`B`$ renormalizes according to the law
$$B^{\prime }=2\sqrt{3}B^2+(18+12\sqrt{3})B^3-\frac{21}{4}\sqrt{3}B^4+\mathrm{\cdots },$$
(2)
while away from it ($`\delta K>0`$) this parameter follows the law $`B^{\prime }\sim B^3`$. It is evident that the solution $`B_r\sim \mu ^{3^r}`$ (where $`\mu =\mu (\delta K)<1`$) fits in well with the latter condition. On the other hand, an asymptotic solution of (2) can be expressed as a power series in $`\kappa ^{2^r}`$,
$$B^{(r)}=\frac{\sqrt{3}}{6}\kappa ^{2^r}-\frac{2+\sqrt{3}}{8}\kappa ^{2\cdot 2^r}+\frac{312+193\sqrt{3}}{384}\kappa ^{3\cdot 2^r}+\mathrm{\ldots },$$
(3)
where $`0<\kappa <1`$ is a constant that can be determined numerically. Although this solution has been derived for $`\delta K=0`$, it also holds for a finite but very small value of $`\delta K`$ ($`0<\delta K\ll 1`$), provided $`r\lesssim r_0`$, where $`r_0\gg 1`$ is the number of iterations one can make along the invariant line before departing from it. These two regimes are separated by a rather narrow crossover region, which makes it possible to apply an asymptotic matching:
$$\mu ^{3^{r_0}}=\mathrm{exp}[\mathrm{ln}(\mu )\mathrm{\hspace{0.17em}3}^{r_0}]\approx \kappa ^{2^{r_0}}=\mathrm{exp}[\mathrm{ln}(\kappa )\mathrm{\hspace{0.17em}2}^{r_0}],$$
(4)
i.e.
$$\xi \sim \left(\frac{3}{2}\right)^{r_0},$$
(5)
where $`\xi (\delta K)=-1/\mathrm{ln}[\mu (\delta K)]`$ stands for the correlation length of the model. This finding is in contrast to the usual finite-size scaling expectation, according to which the correlation length of a finite system at criticality should be of the order of the system's size ($`\xi \sim 3^{r_0}`$).
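As a quick consistency check of the expansion (3), one can iterate the truncated flow (2) numerically and extract $`\kappa `$ from the leading term. The following minimal Python sketch is our own illustration; the starting value (which must lie inside the domain of convergence of the truncated series) is an arbitrary choice, and the resulting value of $`\kappa `$ depends on it, since $`\kappa `$ is a nonuniversal constant.

```python
from mpmath import mp, mpf, sqrt

# Iterate the critical flow (2), truncated at fourth order, and extract kappa
# from the leading term of Eq. (3):  B^(r) ~ (sqrt(3)/6) kappa^(2^r),
# i.e.  kappa ~ (6 B^(r) / sqrt(3))^(1 / 2^r).
mp.dps = 300             # B^(r) decays double-exponentially, so use high precision
B = mpf("0.05")          # assumed starting value near the invariant line
for r in range(1, 9):
    B = 2*sqrt(3)*B**2 + (18 + 12*sqrt(3))*B**3 - mpf(21)/4*sqrt(3)*B**4
    kappa = (6*B/sqrt(3))**(mpf(1)/2**r)
    print(r, kappa)
# kappa settles quickly to a constant in (0,1), confirming B^(r) ~ kappa^(2^r).
```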
The number of iterations $`r_0`$ along the invariant line depends on the value of $`\delta K`$ and can be estimated from the obvious relation $`B^{(r_0)}(\delta K)\approx B^{(r_0)}(0)+\frac{dB^{(r_0)}}{dK}|_{\delta K=0}\mathrm{\hspace{0.17em}}\delta K`$. Indeed, taking into account (3) and the relation $`dB^{(r_0)}/dK\sim 2^{r_0}`$ (see), we find:
$$\kappa ^{2^{r_0}}\sim 2^{r_0}\mathrm{\hspace{0.17em}}\delta K,\qquad \text{or}\qquad 2^{r_0}\mathrm{ln}(\kappa )\approx \mathrm{ln}(\delta K).$$
(6)
This, together with (5), leads to a logarithmic singular behavior of $`\xi `$,
$$\xi \sim |\mathrm{ln}(\delta K)|^\mathrm{\Phi },\qquad \text{with}\quad \mathrm{\Phi }=\frac{\mathrm{ln}(3/2)}{\mathrm{ln}2}.$$
(7)
This differs (in the value of $`\mathrm{\Phi }`$) from the earlier reported results, which were derived by using the finite-size scaling assumption $`\xi \sim 3^{r_0}`$. The difference is caused by the peculiar behavior of the parameter $`B`$ and of the correlation function: in our case they decrease exponentially to zero even at criticality (see (3) and note).
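For completeness, the step leading from (5) and (6) to (7) can be spelled out explicitly:

$$2^{r_0}\sim |\mathrm{ln}(\delta K)|\;\Rightarrow \;r_0\approx \frac{\mathrm{ln}|\mathrm{ln}(\delta K)|}{\mathrm{ln}2},\qquad \xi \sim \left(\frac{3}{2}\right)^{r_0}=e^{r_0\mathrm{ln}(3/2)}\approx |\mathrm{ln}(\delta K)|^{\mathrm{ln}(3/2)/\mathrm{ln}2}.$$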
In order to provide further insight into the critical behavior of the model, we also examine it numerically. Using very high-precision arithmetic, we have been able to come extremely close to the critical point ($`\delta K/K_c<10^{-3000}`$). This allows us to calculate the correlation function and the associated correlation length in a wide region (on a logarithmic scale) around the critical point. Our results are presented in Fig. 2(a), where the scaled correlation length (critical amplitude $`\xi _0`$), $`\xi _0=\xi \mathrm{\hspace{0.17em}}|\mathrm{ln}(\delta K)|^{-\mathrm{\Phi }}`$, is displayed as a function of $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$. The overall behavior of $`\xi _0`$ is highly sensitive to the precise value of $`\mathrm{\Phi }`$. For example, the average value $`\overline{\xi }_0=(\xi _{0,max}+\xi _{0,min})/2`$ of $`\xi _0`$ appears to be constant for sufficiently small values of $`\delta K`$ with the above quoted value of $`\mathrm{\Phi }`$ (see Fig. 2(a)), while $`\overline{\xi }_0`$ becomes unstable under a small change of $`\mathrm{\Phi }`$. This provides a good criterion for a numerical calculation of $`\mathrm{\Phi }`$. Indeed, in this way we have been able to determine $`\mathrm{\Phi }`$ to four correct digits, and further improvement depends on the possibility of approaching the fixed point still more closely. This should be contrasted with the straightforward procedure: a plot of $`\mathrm{ln}\xi `$ versus $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$ leads to numerical estimates of $`\mathrm{\Phi }`$ which oscillate with large amplitudes around the exact value, independently of the distance $`\delta K`$ from the critical point.
It appears that $`\xi _0`$ is a simple periodic function of $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$. The period of this function, estimated numerically, is found to be in excellent agreement with the theoretical value $`\tau =\mathrm{ln}2`$. Perhaps the simplest way to understand this is to adopt the following point of view: one can regard (7) as a 'pure' power law, with the scaling variable $`|\mathrm{ln}(\delta K)|`$ (rather than $`\delta K`$) and a 'critical exponent' $`\mathrm{\Phi }`$. In the same spirit, one can interpret relation (6) as $`\lambda ^{r_0}\sim |\mathrm{ln}(\delta K)|`$, with $`\lambda =2`$ playing the role of a 'thermal' eigenvalue, while relation (5) provides an 'effective' spatial scaling ratio $`3/2`$. It is then clear, from the standard theory of log-periodic corrections to power-law scaling, that the critical amplitude $`\xi _0`$ should be a periodic function of $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$, with period $`\tau =\mathrm{ln}\lambda =\mathrm{ln}2`$.
We have also analysed the critical behavior of the first derivative of the free energy density with respect to $`K`$ (the internal energy $`E`$). Using the approach described in, we have found that this quantity exhibits an interesting confluent singularity,
$$E\sim \frac{1}{\delta K}|\mathrm{ln}(\delta K)|^\mathrm{\Psi },\qquad \text{with}\quad \mathrm{\Psi }=\frac{\mathrm{ln}6}{\mathrm{ln}2},$$
(8)
which corresponds to a first-order phase transition. As in the case of the correlation length, we have studied the internal energy numerically. Our results are displayed in Fig. 2(b), where we present the scaled energy (i.e. the critical amplitude $`E_0`$), $`E_0=\delta K\mathrm{\hspace{0.17em}}|\mathrm{ln}(\delta K)|^{-\mathrm{\Psi }}E`$, as a function of $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$. It is evident that $`E_0`$ is a simple periodic function of $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$, and its period is in agreement with the above quoted theoretical value ($`\tau =\mathrm{ln}2`$). At the same time this analysis provides a good numerical check of the form (8) of the leading singularity of the energy.
## III CONCLUSION
In this paper we have studied the critical behavior of the Gaussian model on a modified SG. We have shown that both the correlation-length and the energy critical amplitudes exhibit very pronounced oscillations near the critical coupling $`K_c`$. These oscillations can be described in terms of simple functions which are periodic in $`\mathrm{ln}|\mathrm{ln}(\delta K)|`$. The period $`\tau `$ of these functions is found to be determined by a universal quantity which governs the critical behavior of the model. Knowledge of these functions is very useful because it provides a more precise description of the critical behavior of the quantities under consideration.
Having determined the basic properties of these functions, we have been able to make a precise numerical check of the exact form of the leading singular behavior of $`\xi `$ and $`E`$. In particular, this allowed us to notice an inaccuracy in a previously reported leading asymptotic form of $`\xi `$ near criticality. This discrepancy seems to stem from the inapplicability of the standard finite-size scaling assumption to this model. Indeed, using a simple technique that does not rely on this assumption (asymptotic matching), we derive a somewhat different singular behavior of $`\xi `$ (see (7)), which turns out to be in excellent agreement with the numerical findings (Fig. 2). This example shows that an ad hoc use of finite-size scaling assumptions can sometimes be questionable, and that in the general case they must be used with caution.
# Transverse Energy Production at RHIC
## 1 Introduction
The Relativistic Heavy Ion Collider (RHIC) will provide us with a unique opportunity to study matter under extreme conditions. A transition from hadronic degrees of freedom to quark and gluon degrees of freedom is expected in collisions at these energies. The transverse energy spectrum will be among the very first results from RHIC. One important question we would like to answer from the day-one physics at RHIC is the maximum energy density reached in central Au+Au collisions. Lattice QCD calculations give us the critical energy density for the transition to a quark-gluon plasma. In this talk, we explore the relationship between the transverse energy spectrum and the energy density, and discuss the possibility of experimentally determining the maximum energy density.
A general framework for computing and analysing the energy-momentum tensor is introduced. As the first part of this study, we construct the energy-momentum tensor for simulated events from the transport model RQMD. We also find the transverse energy, the equation of state, and the energy density for these events. Currently we are using our techniques to study the parton cascade model VNI as well as other event generators.
## 2 Energy-Momentum Tensor in Transport Models
In a transport model, the energy-momentum tensor is defined as
$$T^{\mu \nu }(\vec{x},t)\equiv \underset{i}{\sum }\int d^3p\frac{p^\mu p^\nu }{p^0}f_i(\vec{x},\vec{p},t),$$
(1)
and the particle number current as
$$j^\mu (\vec{x},t)\equiv \underset{i}{\sum }\int \frac{d^3p}{p^0}p^\mu f_i(\vec{x},\vec{p},t),$$
(2)
where $`f_i(\vec{x},\vec{p},t)`$ is the distribution function for particle type $`i`$.
In a relativistic cascade, each particle is represented by a point in both position and momentum space, and the distribution function is an ensemble average of the $`\delta `$-functions,
$$f_i(\vec{x},\vec{p},t)=\underset{k}{\sum }\delta ^3(\vec{x}-\vec{r}_k(t))\delta ^3(\vec{p}-\vec{p}_k(t)).$$
(3)
The energy-momentum tensor and the particle current become
$$T^{\mu \nu }(\vec{x},t)=\underset{V\to 0}{lim}\frac{1}{V}\underset{k:\,\vec{x}_k(t)\in V}{\sum }\frac{p_k^\mu p_k^\nu }{p_k^0},$$
(4)
and
$$j^\mu (\vec{x},t)=\underset{V\to 0}{lim}\frac{1}{V}\underset{k:\,\vec{x}_k(t)\in V}{\sum }\frac{p_k^\mu }{p_k^0}.$$
(5)
At any given space-time location, $`T^{\mu \nu }`$, being a symmetric tensor, has ten independent elements from which we can obtain ten physical quantities: the local energy density $`ϵ`$, the local pressures $`𝒫_1,𝒫_2,𝒫_3`$, the flow velocity $`\vec{v}_f`$, and the orientation of the principal momentum axes. Following the convention of Landau and Lifshitz, the local rest frame is defined as the frame in which
$$T^{\mu \nu }=\left(\begin{array}{cc}ϵ& 0\\ 0& T^{ij}\end{array}\right).$$
(6)
The Lorentz boost of $`T^{\mu \nu }`$ to the above form gives us the flow velocity. The momentum tensor $`T^{ij}`$ can be diagonalized by performing a rotation. After the boost and the rotation,
$$T^{\mu \nu }=\left(\begin{array}{cccc}ϵ& 0& 0& 0\\ 0& 𝒫_1& 0& 0\\ 0& 0& 𝒫_2& 0\\ 0& 0& 0& 𝒫_3\end{array}\right).$$
(7)
The local particle density
$$\rho \equiv j^\mu u_\mu ,$$
(8)
where $`u^\mu =(\gamma ,\gamma \vec{v})`$ is the velocity four-vector. We further define
$$𝒫\equiv (𝒫_1+𝒫_2+𝒫_3)/3\qquad \mathrm{and}\qquad T\equiv 𝒫/\rho .$$
(9)
If there is local thermal equilibrium, then $`𝒫`$ is the pressure and $`T`$ corresponds to the temperature.
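As an illustration of Eqs. (4)-(9), the following Python sketch (our own construction, not code from the RQMD analysis; the particle four-momenta are hypothetical) builds $`T^{\mu \nu }`$ from the particles inside a cell and extracts $`ϵ`$, the pressures, and $`\vec{v}_f`$ by diagonalizing the mixed tensor $`T^\mu {}_\nu `$, whose timelike eigenvector defines the local rest frame of Eq. (6).

```python
import numpy as np

# Metric with signature (+,-,-,-); the local rest frame satisfies T^mu_nu u^nu = eps u^mu.
g = np.diag([1.0, -1.0, -1.0, -1.0])

def cell_tensor(momenta, volume):
    """Discretized T^{mu nu} of Eq. (4): sum of p^mu p^nu / p^0 over one cell."""
    return sum(np.outer(p, p) / p[0] for p in momenta) / volume

# Hypothetical four-momenta (E, px, py, pz) in GeV of particles in a 1 fm^3 cell.
momenta = [np.array([1.2,  0.3,  0.1, 0.9]),
           np.array([0.8, -0.2,  0.1, 0.5]),
           np.array([1.5,  0.4, -0.3, 1.1])]
T = cell_tensor(momenta, volume=1.0)

# Eigenvectors of the mixed tensor T^mu_nu = T^{mu lam} g_{lam nu}: the timelike
# one is the flow velocity u^mu with eigenvalue eps (the local energy density,
# here in GeV/fm^3); the spacelike ones carry -P_1, -P_2, -P_3 of Eq. (7).
vals, vecs = np.linalg.eig(T @ g)
for lam, u in zip(vals.real, vecs.T.real):
    if u @ g @ u > 0:                       # timelike eigenvector
        u = u / np.sqrt(u @ g @ u)
        if u[0] < 0:
            u = -u                          # choose the future-pointing branch
        print("energy density eps =", lam)
        print("flow velocity  v_f =", u[1:] / u[0])
    else:
        print("pressure P =", -lam)
```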
## 3 Event Generator Studies
There are, in general, two stages in the evolution of $`E_t`$. Initially, $`E_t`$ increases due to transverse excitation through interactions. Afterwards, longitudinal expansion results in a decreasing $`E_t`$. What we measure in experiment is the final $`E_t`$ at the end of this expansion. The details of the $`E_t`$ evolution, like all other observables in relativistic heavy-ion collisions, are model dependent. In order to get a reasonable estimate of the energy density through measurement, we need a systematic study of all plausible models. The transverse energy $`E_t`$ and the energy density $`ϵ`$ from the transport model RQMD are shown in Figure 1.
A total of 144 central ($`b=0`$) Au+Au events, with full event histories, at RHIC energy (100 GeV/A + 100 GeV/A) were generated using RQMD version $`2.4`$. The energy-momentum tensor and the particle current are computed according to Eqs. (4) and (5), where the small volume $`V`$ is chosen to be a sphere of 1 fm radius. The local energy density $`ϵ`$, pressure $`𝒫`$, and flow velocity $`\vec{v}_f`$ are obtained from Eqs. (6)-(9). Figure 2 shows the variation of $`𝒫/ϵ`$ and the development of the transverse flow velocity.
In the RQMD events, during the early stage, the longitudinal pressure is higher than the transverse pressure. After about 7 fm/c, the system is approximately isotropic, but it is still not in local thermal equilibrium. Figure 2(b) displays one such nonthermal behaviour in the flow velocity distribution. Particles with lighter masses, e.g. pions and kaons, have greater collective velocities than heavier particles, i.e. there is no common flow velocity.
Clearly, the system in Figures 1 and 2 cannot be described by a simple Bjorken expansion picture. There are expansions along both the longitudinal and the transverse directions, and the system is not in local thermal equilibrium. The time evolutions of $`ϵ`$ and $`E_t`$ in Figure 1 are not exactly correlated, so the measurement of $`E_t`$ by itself is not sufficient to determine $`ϵ`$.
The initial state in RQMD is not partonic, so it may not be applicable at RHIC. But the model may still have a reasonable parametrisation for stopping and for initial transverse energy production, and it has a well-tested hadronic final-state after-burner. Models like VNI, HIJING, VENUS, and UrQMD have quite different descriptions of the initial stages at RHIC. By coupling these initial conditions with a common hadronic after-burner, we will have a better understanding of the relationship between the observables and the maximum energy density.
## 4 Remarks
An estimate of the maximum energy density might be possible if, in addition to the transverse energy, we also have good measurements of several collective observables such as the radial and elliptic flows, the system size from HBT, and other correlations. This kind of estimate would still be model dependent, so we need to make a survey of all models.
## Acknowledgements
We wish to thank H. Sorge for helping us with the intermediate states of RQMD events. We also wish to thank Dr. M. Gyulassy, Dr. S. Jeon, and Dr. S. Voloshin for many fruitful discussions, and to thank Dr. G. Baym for suggesting this study. This work is supported in part by the U.S. Department of Energy under contract No. DE-AC03-76SF00098, No. FG02-92ER40699 and the Alfred P. Sloan Foundation (Y.P).
# Kaon photoproduction on the nucleon: Contributions of kaon-hyperon final states to the magnetic moment of the nucleon
By using the Gerasimov-Drell-Hearn (GDH) sum rule and an isobaric model of kaon photoproduction, we calculate contributions of kaon-hyperon final states to the magnetic moment of the proton and the neutron. We find that the contributions are small. The approximation of $`\sigma _{TT^{}}`$ by $`\sigma _T`$ clearly overestimates the value of the GDH integral. We find a smaller upper bound for the contributions of kaon-hyperon final states to the proton’s anomalous magnetic moment in kaon photoproduction, and a positive contribution for the square of the neutron’s magnetic moment.
PACS number(s): 13.60.Le,11.55.Hx, 13.40.Em, 14.20.Dh
The internal structure of the nucleon is still an interesting topic of investigation today. The existence of this structure is responsible for the ground-state properties of the nucleon, such as the hadronic and electromagnetic form factors and the anomalous magnetic moment. At higher energies this finite internal structure yields a series of resonances in the mass region of 1–2 GeV. It was then found that the nucleon's ground-state properties and the nucleon's resonance spectra are not all independent phenomena; they are related by a number of sum rules.
One of these sum rules is the Gerasimov-Drell-Hearn (GDH) sum rule, which connects the nucleon's magnetic moments and the helicity structures in the resonance region. Although the GDH sum rule was proposed more than 30 years ago, no direct experiment had been performed to investigate whether or not the sum rule converges. However, with the advent of the new high-intensity, continuous-electron-beam accelerators, accurate measurements of the contributions to the GDH integral from individual final states have become possible.
Previously, Hammer, Drechsel, and Mart (HDM) suggested that by using the Gerasimov-Drell-Hearn sum rule it is possible to estimate strange contributions to the magnetic moment of the proton. They used experimental data and an isobaric model for the photoproduction of $`\eta `$, $`\varphi `$, and $`K`$ mesons in order to estimate the transversely unpolarized total cross section $`\sigma _T`$ and, therefore, to calculate upper bounds on the strange contributions to the anomalous magnetic moment of the proton. It is the purpose of this Brief Report to update the contributions of kaon-hyperon final states by means of the latest isobaric model, which fits all available experimental data, including the recent data from SAPHIR.
The GDH sum rule (for a review see Ref.) relates the anomalous magnetic moment of the nucleon $`\kappa _N`$ to the difference of its polarized total photoabsorption cross sections,
$$\frac{\kappa _N^2}{4}=-\frac{m_N^2}{8\pi ^2\alpha }\int _0^{\mathrm{\infty }}\frac{d\nu }{\nu }\left[\sigma _{1/2}(\nu )-\sigma _{3/2}(\nu )\right],$$
(1)
where $`\sigma _{3/2}`$ and $`\sigma _{1/2}`$ denote the cross sections for the possible combinations of spins of the nucleon and photon (i.e., $`\sigma _{3/2}`$ for total spin = $`\frac{3}{2}`$ and $`\sigma _{1/2}`$ for total spin = $`\frac{1}{2}`$), $`\alpha `$ is the fine-structure constant, $`\nu `$ is the photon energy in the laboratory frame, and $`m_N`$ the mass of the nucleon. The derivation of the GDH sum rule is based on general principles: Lorentz and gauge invariance, crossing symmetry, causality, and unitarity. The only assumption in deriving Eq. (1) is that the scattering amplitude goes to zero in the limit $`|\nu |\to \mathrm{\infty }`$; thus there is no subtraction hypothesis.
In photoproduction processes, however, the spin-dependent cross section is related to the total cross sections by
$$\sigma _T=\frac{\sigma _{3/2}+\sigma _{1/2}}{2},$$ (2)
$$\sigma _{TT^{\prime }}=\frac{\sigma _{3/2}-\sigma _{1/2}}{2}.$$ (3)
The first cross section can be measured using unpolarized real photons, while the second can be measured with longitudinally polarized electrons and polarized nucleon targets or hyperon recoils. Experimentally, the latter must be done using electroproduction, i.e. virtual photons. Nevertheless, the momentum transfer of the electrons ($`Q^2`$) can be minimized close to the photon point. Numerically, $`\sigma _{TT^{\prime }}`$ can be calculated using photoproduction, since Eq. (1) requires $`Q^2=0`$ and $`\sigma _{TT^{\prime }}=\sigma _{TT^{\prime }}(F_1,F_2,F_3,F_4)`$, where the $`F_i`$ are the CGLN amplitudes for real photons.
Unlike the calculation in the previous paper, here we use both
$$\kappa _N^2=\frac{m_N^2}{\pi ^2\alpha }\int _0^{\nu _{\mathrm{max}}}\frac{d\nu }{\nu }\sigma _{TT^{\prime }}$$ (4)
and
$$\kappa _N^2\le \frac{m_N^2}{\pi ^2\alpha }\int _0^{\nu _{\mathrm{max}}}\frac{d\nu }{\nu }\sigma _T,$$ (5)
where the GDH integral is already saturated at $`\nu _{\mathrm{max}}\approx 2`$ GeV, in order to measure the deviation of the approximation made in the previous work from the expected values. This was not done in the previous work since experimental data for kaon photoproduction were very scarce at that time, especially for the $`\gamma p\to K^0\mathrm{\Sigma }^+`$ channel; thus predictions of $`\sigma _{TT^{\prime }}`$ were somewhat unreliable.
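To make the numerical procedure concrete, Eq. (4) amounts to a one-line quadrature once $`\sigma _{TT^{\prime }}`$ is tabulated. The sketch below is our own illustration; the tabulated values are placeholders, not the predictions of the isobaric model.

```python
import numpy as np

# Placeholder table: photon lab energy nu (GeV) and sigma_TT' (microbarn)
# for a single isospin channel; real input would come from the isobaric model.
nu      = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
sig_TTp = np.array([0.0, -0.4, -0.9, -1.1, -1.0, -0.9])   # assumed values

m_N, alpha = 0.938, 1.0/137.036      # nucleon mass (GeV), fine structure constant
mub_to_GeV2 = 2.568e-3               # 1 microbarn = 2.568e-3 GeV^-2

# kappa_N^2 = (m_N^2 / pi^2 alpha) * int d(nu)/nu sigma_TT'      [Eq. (4)]
integral = np.trapz(sig_TTp * mub_to_GeV2 / nu, nu)
kappa2 = m_N**2 / (np.pi**2 * alpha) * integral
print("contribution of this channel to kappa_N^2:", kappa2)
```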
We use the latest elementary operator, which was guided by recent coupled-channels results and includes the newest data. The model consists of a tree-level amplitude that reproduces all available $`K^+\mathrm{\Lambda }`$, $`K^+\mathrm{\Sigma }^0`$ and $`K^0\mathrm{\Sigma }^+`$ photoproduction observables and thus provides an effective parametrization of these processes. The background terms contain the standard $`s`$-, $`u`$-, and $`t`$-channel contributions along with a contact term that was required to restore gauge invariance after hadronic form factors had been introduced. This model includes the three nucleon resonances that have been found in the coupled-channels approach to decay into the $`K\mathrm{\Lambda }`$ channel: the $`S_{11}`$(1650), $`P_{11}`$(1710), and $`P_{13}`$(1720). For $`K\mathrm{\Sigma }`$ production, further contributions from the $`S_{31}`$(1900) and $`P_{31}`$(1910) $`\mathrm{\Delta }`$ resonances were added.
In Fig. 1 we show the total cross sections $`\sigma _T`$ and $`\sigma _{TT^{\prime }}`$ as functions of the photon laboratory energy $`\nu `$ for the six isospin channels in kaon photoproduction. Since there are no experimental data for production on the neutron, we consider the three right panels of Fig. 1 as predictions. Obviously, the model reproduces the experimental data for production on the proton remarkably well. In the former calculation, the contribution from the $`\gamma p\to K^0\mathrm{\Sigma }^+`$ channel could not be calculated properly, since previous elementary models mostly overpredict the $`K^0\mathrm{\Sigma }^+`$ total cross section by a factor of up to 100. With the new SAPHIR data available in three isospin channels, the elementary model has become more reliable for explaining kaon photoproduction on the proton and for predicting production on the neutron.
The elementary model predicts a negative sign for $`\sigma _{TT^{\prime }}`$ (note that we have plotted $`-\sigma _{TT^{\prime }}`$), except for the $`K^0\mathrm{\Lambda }`$ channel, where it produces a negative sign for the GDH integral of the neutron, thus yielding positive values for $`\kappa ^2`$ of the neutron, although the $`\gamma n\to K^+\mathrm{\Sigma }^-`$ and $`\gamma n\to K^0\mathrm{\Sigma }^0`$ channels show a different behavior.
In Table I we list the numerical values obtained from both Eqs. (4) and (5), using a cutoff energy up to which we found the elementary model to be reliable. The result is not sensitive to the cutoff energy $`\nu _{\mathrm{max}}`$ around 2 GeV, i.e. there is no significant change in the integral in the energy interval 1.8–2.2 GeV, especially in the case of photoproduction on the proton, where the cross sections converge at higher energies. From Table I it is already obvious that replacing Eq. (4) by Eq. (5) overestimates the value of the GDH integral, especially since we know that $`\sigma _T`$ is positive definite, while $`\sigma _{TT^{\prime }}`$ is not. We find that our present calculation yields a slightly different result for the $`\gamma p\to K^+\mathrm{\Lambda }`$ channel, but not for the $`\gamma p\to K^+\mathrm{\Sigma }^0`$ channel, where the previous work seems to overestimate the present calculation.
Should the contributions add up coherently, our calculation would yield values of $`\kappa _p^2(K)=-0.063`$ and $`\kappa _n^2(K)=0.031`$, or $`|\kappa _p(K)|/\kappa _p\approx 0.14`$ and $`\kappa _n(K)/\kappa _n\approx 0.094`$. This puts an even smaller value on the upper bound of the magnitude of the kaon-hyperon final-state contributions to the proton's magnetic moment, compared to the previous result of HDM, $`\kappa _p^2(K)=0.07`$. An interesting feature is that our calculation yields a positive value for the contribution to $`\kappa _n^2(K)`$, and therefore increases the calculated value of the GDH integral for the neutron.
In conclusion, we have refined the calculation of the kaon-hyperon final-state contributions to the anomalous magnetic moment of the proton and predicted the contributions for the case of the neutron, based on the experimental data of kaon photoproduction and a modern isobaric model. Experimental data for $`\sigma _T`$ in the neutron channels and $`\sigma _{TT^{\prime }}`$ in all six isospin channels would strongly suppress the uncertainties in our calculation. Therefore, future experimental proposals at MAMI, ELSA, TJNAF, or GRAAL should address this topic as an important measurement in order to improve our understanding of the nucleon's structure.
It is a pleasure to acknowledge that this work was supported by the University Research for Graduate Education (URGE) grant.
# Quasi–Thermal Comptonization and GRBs
## 1 Introduction
In Celotti & Ghisellini (this volume) we argue that interpreting the burst emission as synchrotron radiation faces some severe problems. This justifies the search for alternatives. Here we argue that a valid alternative is quasi-thermal Comptonization. This has already been proposed to explain the burst emission by Liang (1997) and Liang et al. (1997), who required a relatively weakly magnetized (magnetic field $`B\sim 0.1`$ Gauss) and large ($`R\sim 10^{15}`$ cm) emitting region. These values of the physical parameters contrast with the ones advocated by the "standard internal shock" scenario as it has been developed to explain the structured GRB light curve and its fast variability, which requires a compact ($`R\sim 10^{13}`$ cm) and magnetized ($`B\sim 10^5`$ Gauss) emitting region (Rees & Mészáros 1992; Rees & Mészáros 1994; Sari & Piran 1997).
More recently, we (Ghisellini & Celotti 1999) have proposed again the quasi-thermal Comptonization scenario, but using the very same physical parameters as in the internal shock picture. The only (important) difference concerns the timescale of particle acceleration. Instead of considering it instantaneous, we consider that the particles can be re-accelerated for the entire duration of the shell–shell interaction, and therefore the acceleration can last for $`\mathrm{\Delta }R^{\prime }/c`$, where $`\mathrm{\Delta }R^{\prime }`$ is the shell width as measured in the comoving frame.
In this case the typical electron energy is dictated by the balance between the heating and the cooling rate: assuming that the bulk Lorentz factor of one shell in the comoving frame of the other is $`\mathrm{\Gamma }^{\prime }`$, we obtain:
$$\frac{(\mathrm{\Gamma }^{\prime }-1)n_p^{\prime }m_pc^2}{\mathrm{\Delta }R^{\prime }/c}=\frac{4}{3}n_e^{\prime }\sigma _Tc(\gamma ^2-1)U$$
(1)
where $`n_p^{\prime }`$ and $`n_e^{\prime }`$ are the comoving densities of protons and leptons, respectively, and $`U`$ is the total (radiative plus magnetic) energy density. The resulting particle distribution may well be different from a perfect Maxwellian, but it has in any case a narrow energy width, and a meaningful mean energy can be defined. It is also possible that a high-energy tail (possibly a steep power law) is present, producing a tail of high-frequency radiation (see e.g. Stern 1999).
With such low values of the typical $`\gamma `$, the produced cyclo-synchrotron radiation is self-absorbed and the corresponding power is orders of magnitude lower than the observed burst power, emerging at much lower typical frequencies. This self-absorbed radiation is nevertheless important, because it provides the seed photons to be scattered to high energies.
In this paper we will show why multiple Compton scattering can provide typical spectral slopes in agreement with observations, and present some ideas on how it is possible to have spectral cut-offs at the observed energies. Finally we present some considerations on the hypernova scenario, based on the fact that the pre-hypernova star necessarily has a strong wind, which makes the circum-burst surroundings very dense. The effects of this wind will be considered and discussed.
## 2 Quasi-thermal Comptonization
As mentioned above, the particle distribution may not be a perfect Maxwellian, but it can nevertheless have a well defined mean energy, which corresponds to an effective temperature. Let us then introduce a dimensionless effective temperature $`\mathrm{\Theta }^{\prime }\equiv kT^{\prime }/(m_ec^2)`$, measured in the comoving frame. Assume also that all particles in the shell, of optical depth $`\tau ^{\prime }\equiv \sigma _Tn_e^{\prime }\mathrm{\Delta }R^{\prime }`$, participate in the burst emission. The interaction between the shells occurs at a distance $`R_i=10^{13}R_{i,13}`$ cm from the center, and the shell width is $`\mathrm{\Delta }R^{\prime }\sim R/\mathrm{\Gamma }`$.
### 2.1 Seed photons
As long as the typical $`\gamma `$ factor of the emitting electrons is low, the cyclo-synchrotron radiation is self-absorbed, and the corresponding spectrum resembles a blackbody, peaking at the self-absorption frequency $`\nu _T^{\prime }`$, which is a strong function of the temperature. Interpolating numerical results, Ghisellini & Celotti (1999) obtain, for $`B\sim 10^5`$ Gauss and $`\tau _T\sim 1`$, $`\nu _T^{\prime }\simeq 2.75\times 10^{14}(\mathrm{\Theta }^{\prime })^{1.191}`$ Hz, which holds for $`0.1\lesssim \mathrm{\Theta }^{\prime }\lesssim 3`$.
The corresponding comoving self–absorbed luminosity is
$$L_s^{\prime }\simeq \frac{8\pi }{3}m_eR^2\mathrm{\Theta }^{\prime }(\nu _T^{\prime })^3\simeq 7.6\times 10^{41}\mathrm{\Theta }^{\prime }R_{13}^2(\nu _{T,14}^{\prime })^3\mathrm{erg}\mathrm{\hspace{0.17em}s}^{-1}$$
(2)
The same electrons will scatter these photons through multiple scatterings, in order to emit the burst luminosity. We can define a generalized Comptonization parameter as
$$y\equiv 4\tau \mathrm{\Theta }^{\prime }(1+\tau )(1+4\mathrm{\Theta }^{\prime })$$
(3)
For values of $`y`$ larger than unity the final spectrum amplifies the synchrotron power by the factor $`e^y`$: values of $`y`$ around 10–13 are needed to produce an intrinsic Compton power $`L_c^{\prime }\sim 10^{46}`$ erg s$`^{-1}`$ starting from a synchrotron power $`L_s^{\prime }\sim 10^{41}`$ erg s$`^{-1}`$.
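As a rough numerical illustration (our own sketch, using the representative parameter values quoted in the text), one can verify that the required amplification $`e^y\sim L_c^{\prime }/L_s^{\prime }\sim 10^5`$ indeed corresponds to $`y\approx 11`$:

```python
import math

def compton_y(theta, tau):
    """Generalized Comptonization parameter of Eq. (3)."""
    return 4.0 * tau * theta * (1.0 + tau) * (1.0 + 4.0 * theta)

theta, tau = 0.1, 4.0          # assumed effective temperature and optical depth
y = compton_y(theta, tau)
print("y                 =", y)                        # ~ 11.2
print("amplification e^y =", math.exp(y))              # ~ 7e4
print("y needed for 1e5  =", math.log(1e46 / 1e41))    # ~ 11.5
```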
### 2.2 A preferred slope: $`\nu ^0`$
Thermal Comptonization has been extensively studied in past years to explain the high-energy spectra of galactic black-hole candidates and radio-quiet AGNs (see e.g. Pozdnyakov, Sobol, & Sunyaev 1983). The fractional energy amplification of the scattered photons, at each scattering, is $`A\simeq 1+4\mathrm{\Theta }^{\prime }+16(\mathrm{\Theta }^{\prime })^2`$. When $`\tau `$ is significantly larger than unity, almost all photons undergo several scatterings, and in the Compton spectrum of each order we therefore have the same number of photons, of mean frequency $`\nu _i`$ and distributed in a range $`\mathrm{\Delta }\nu _i\sim \nu _i`$ of frequencies. The escaping photons are a fraction $`\sim 1/\tau `$ of the ones contained in each spectrum. We therefore have that, in a $`\nu `$–$`F_\nu `$ plot, the spectrum of the escaping photons is flat ($`F_\nu \propto \nu ^0`$) up to $`h\nu \sim \mathrm{\Theta }^{\prime }m_ec^2`$, where the photons have the same energy as the leptons. At these energies a Wien peak forms ($`F_\nu \propto \nu ^3\mathrm{exp}(-h\nu /kT^{\prime })`$), whose importance depends on the values of $`\tau `$ and $`\mathrm{\Theta }^{\prime }`$. In this case an increased (decreased) $`\tau `$ and/or $`\mathrm{\Theta }^{\prime }`$ makes the Wien peak more (less) dominant, and at the same time decreases (increases) the normalization of the power-law part of the spectrum, but does not change its slope.
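The counting argument above can be encoded in a few lines (a schematic sketch of the photon bookkeeping only, not a radiative-transfer calculation; all parameter values are assumed):

```python
theta, tau = 0.1, 4.0            # assumed temperature and optical depth
A = 1 + 4*theta + 16*theta**2    # energy amplification per scattering, A ~ 1.56
x, N = 1e-6, 1.0                 # seed photon energy (units of m_e c^2), photons per order

orders, F_nu = 0, []
while x < theta:                 # scattering orders up to the Wien peak at x ~ theta
    N_esc = N / tau              # the same number of photons escapes at each order
    n_x = N_esc / x              # photon density: each order spans a bandwidth ~ x
    F_nu.append(x * n_x)         # F_nu ~ x * n(x) = N_esc = const  ->  flat spectrum
    x *= A
    orders += 1
print(orders, "orders, F_nu values:", set(round(f, 3) for f in F_nu))
```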
### 2.3 Importance of pairs and feedbacks
The production of electron–positron pairs would surely be efficient for intrinsic compactnesses $`\ell ^{\prime }>1`$ (we define the compactness as $`\ell =\sigma _TL\mathrm{\Delta }R^{\prime }/(m_ec^3R^2)`$; see Celotti & Ghisellini, this volume, for more details), and would on the one hand increase the optical depth, and on the other act as a thermostat, maintaining the temperature in a narrow range. Detailed time-dependent studies of the optical-depth and temperature evolution for a rapidly varying source have not yet been pursued. Results concerning a steady source in pair equilibrium indicate that for $`\ell ^{\prime }`$ between 10 and $`10^3`$ the maximum equilibrium temperature is of the order of 30–300 keV (Svensson 1982, 1984), if the source is pair dominated (i.e. the density of pairs outnumbers the density of protons). Indeed we expect the present situation to be close to pair equilibrium, as this would be reached in about a dynamical timescale (i.e. in $`\mathrm{\Delta }R^{\prime }/c`$), but note that the quoted numbers refer to a perfect Maxwellian particle distribution. If a high-energy tail is present, more photons are created above the threshold for photon–photon pair production than in the case of a pure Maxwellian, and thus pairs become important for values of $`\mathrm{\Theta }^{\prime }`$ lower than in the completely thermal case (see Stern 1999; Coppi 1999; Stern, this volume).
An 'effective' temperature of $`kT^{\prime }\sim 50`$ keV ($`\mathrm{\Theta }^{\prime }\simeq 0.1`$) and $`\tau _T\simeq 4`$, dominated by pairs, can be a consistent solution giving $`y\simeq 11`$. See also below for an effect which could considerably enhance the compactness of the emitting region, and therefore its pair density.
## 3 The high energy cut–off
With $`\mathrm{\Theta }^{\prime }\sim 0.1`$ and $`\mathrm{\Gamma }\sim 100`$, the observed high-energy cutoff lies at $`E_c\sim 10\mathrm{\Theta }_{-1}^{\prime }\mathrm{\Gamma }_2/(1+z)`$ MeV. This value is somewhat larger than what is typically observed. However, there are a number of effects which are potentially important and which can lower this value. One is that the entire system is highly time-dependent, and the time evolution is in the sense of a cooling of the leptons, which will then produce a time-averaged spectral energy cut-off lower than a few MeV. On the other hand, if the observed value of $`\sim 300`$ keV is really typical, and not biased by selection effects introduced by triggering criteria and detector response energies (see e.g. Lloyd & Petrosian 1999, and Petrosian, this volume), we ought to look for a very robust explanation.
### 3.1 “Brainerd break”
Brainerd (1994) linked the typical high-energy cut-off of GRBs to the effect of down-scattering: photons with energies much larger than $`m_ec^2`$ pass undisturbed through a scattering medium because of the reduction of the Klein-Nishina cross section with energy, while photons with energies just below $`m_ec^2`$ interact, and their energy after the scattering is reduced. The net effect is to produce a "downscattering hole" in the spectrum, between $`m_ec^2/\tau ^2`$ and $`\tau m_ec^2`$. The attractive feature of this model is that the cut-off energy is associated with the rest mass-energy of the electron. The difficulty is that a significant part of the power originally radiated by the burst goes into heating (by the Compton process) of the scattering electrons.
### 3.2 Pair production break
If some scatterings take place between the burst photons and some external medium at rest, there may be a very efficient process which modifies the emergent burst radiation, namely pair production. Assume in fact that the external medium has an optical depth $`\tau _{ext}`$ in a region close to where the burst radiation originates (i.e. between $`R_i`$ and $`2R_i`$). This material will scatter back a fraction $`\tau _{ext}L`$ of the burst power, corresponding to a compactness
$$\ell _{ext}\simeq \frac{\sigma _T\tau _{ext}L}{R_im_ec^3}$$
(4)
If we require that the primary spectrum is not modified by photon–photon absorption, the optical depth of the scattering matter and its density must be
$$\tau _{ext}<3.7\times 10^{-9}\frac{R_{13}}{L_{50}},\qquad n_{ext}<\frac{5.5\times 10^2}{L_{50}}\mathrm{cm}^{-3}$$
(5)
As can be seen, the requirement on the density of the external matter is particularly severe, especially in the case of bursts originating in dense star-forming regions. On the other hand photon–photon opacity may be an important ingredient in shaping the spectrum, and the reason why GRB spectra peak at around 300 keV. As in the Brainerd model, the attractive feature is that it links the energy break to $`m_ec^2`$, while the difficulty is that all the primary radiation emitted above $`m_ec^2`$ gets absorbed. Contrary to the Brainerd model, in this case the spectrum does not retain its original slope above $`\tau m_ec^2`$, which implies that the GeV radiation observed by EGRET for some GRBs is produced in the afterglow. If the absorbed energy is not re-emitted, but remains in the form of lepton energy, this will significantly lower the efficiency of the burst in producing radiation. On the other hand it is conceivable that the created pairs will radiate their energy in a short time, and that they can even be re-accelerated by the incoming fireball. The net effect may be simply to increase the density of the radiating particles, introducing a feedback process: an increased density lowers the effective temperature $`\to `$ less energy is radiated above $`m_ec^2`$ $`\to `$ the number of pairs produced via the "mirror" process decreases $`\to `$ the new pair density decreases, and so on. Another feedback is introduced by the fact that the photons scattered back towards the emitting shell will increase the number of seed photons, softening the spectrum. All these effects deserve a more detailed investigation, even if their time-dependent nature will make these studies quite complex.
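A quick numerical check of the bounds in Eq. (5), in cgs units (our own sketch; the condition $`\ell _{ext}<1`$ is our reading of the requirement that the primary spectrum be unaffected):

```python
# Check of Eqs. (4)-(5) in cgs units, requiring ell_ext < 1.
sigma_T = 6.652e-25                 # Thomson cross section (cm^2)
m_e, c  = 9.109e-28, 2.998e10       # electron mass (g), speed of light (cm/s)
L, R_i  = 1e50, 1e13                # assumed burst luminosity (erg/s) and radius (cm)

tau_max = R_i * m_e * c**3 / (sigma_T * L)   # ell_ext = 1 in Eq. (4)
n_max   = tau_max / (sigma_T * R_i)          # from tau_ext = sigma_T n_ext R_i
print(f"tau_ext < {tau_max:.2e}")            # ~ 3.7e-9
print(f"n_ext   < {n_max:.2e} cm^-3")        # ~ 5.5e2
```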
## 4 Pre–hypernova wind
If the progenitors of GRBs are hypernovae (Paczyński 1998), then we must consider the effects of the strong wind necessarily present during the pre-hypernova phase. For illustration, assume a mass loss $`\dot{m}=10^{-4}\dot{m}_{-4}M_{\odot }\mathrm{yr}^{-1}`$ and a wind velocity $`v=10^8v_8`$ cm s$`^{-1}`$. The particle density $`n_{*}`$ near the surface of the pre-hypernova star of radius $`R_{*}`$ is
$$n_{*}=\frac{\dot{m}}{4\pi R_{*}^2vm_p}=3.15\times 10^{12}\frac{\dot{m}_{-4}}{v_8R_{*,12}^2}\mathrm{cm}^{-3}$$
(6)
and scales as $`(R/R_{*})^{-2}`$. The mass contained in this wind decelerates the fireball at the deceleration radius $`R_d`$, where the wind mass equals the fireball mass divided by $`\mathrm{\Gamma }`$:
$$\frac{\dot{m}(R_d-R_{*})}{v}=\frac{E}{\mathrm{\Gamma }^2c^2}\qquad \Longrightarrow \qquad R_d=R_{*}+\frac{Ev}{\mathrm{\Gamma }^2\dot{m}c^2}=R_{*}+1.75\times 10^{13}\frac{v_8E_{52}}{\mathrm{\Gamma }_2^2\dot{m}_{-4}}\mathrm{cm}$$
(7)
As can be seen, the deceleration radius is close to the transparency radius, i.e. the distance at which the fireball becomes transparent. The first immediate consequence is that internal shocks do not develop. The second immediate consequence is that the optical depth of the wind material between $`R_d`$ and infinity is quite large:
$$\tau _w(R_d,\mathrm{\infty })=\sigma _T\int _{R_d}^{\mathrm{\infty }}n(R)𝑑R\simeq 0.2\frac{\dot{m}_{-4}}{R_{d,13}v_8}$$
(8)
Due to the $`R^{-2}`$ dependence of the density, most of the contribution to this optical depth comes from material close to $`R_d`$. Therefore all the effects discussed above (downscattering and pair reprocessing) would take place.
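The numbers in Eqs. (6)-(8) can be reproduced directly (a sketch in cgs units, using the fiducial parameter values of the scalings above):

```python
import math

# Fiducial wind and fireball parameters (cgs), as in Eqs. (6)-(8).
mdot   = 1e-4 * 2.0e33 / 3.156e7     # 1e-4 solar masses per year, in g/s
v, m_p = 1e8, 1.673e-24              # wind speed (cm/s), proton mass (g)
R_star = 1e12                        # stellar radius (cm)
E, Gam = 1e52, 100.0                 # fireball energy (erg) and Lorentz factor
sigma_T, c = 6.652e-25, 2.998e10

n_star = mdot / (4 * math.pi * R_star**2 * v * m_p)       # Eq. (6): ~ 3e12 cm^-3
R_d    = R_star + E * v / (Gam**2 * mdot * c**2)          # Eq. (7): ~ 1.8e13 cm
tau_w  = sigma_T * mdot / (4 * math.pi * v * m_p * R_d)   # Eq. (8): ~ 0.1
print(f"n_star = {n_star:.2e} cm^-3, R_d = {R_d:.2e} cm, tau_w = {tau_w:.2f}")
```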
The conclusion is that the hypernova hypothesis implies a scenario for the production of the burst and the afterglow quite different from the internal/external shock scenario. In this case, in fact, we have only external shocks between the fireball and a very dense medium. The fireball decelerates while producing the burst emission: if a significant fraction of the bulk energy ends up in radiation, this implies that the burst emission should contain more energy than the time-integrated afterglow emission. It may also imply a difference between the early and late burst emission, due to the fact that the corresponding emitting zones may have different $`\mathrm{\Gamma }`$ factors. Since this is not observed (see e.g. Fenimore 1999, this volume), this may be a problem for the hypernova idea, but since processes different from collisionless shocks may be operating (instabilities, turbulence and so on), this issue is worth investigating.
From the point of view of the radiation processes, the pre-hypernova wind scenario offers an interesting possibility. At early times, the heating–cooling balance (Eq. 1) gives sub- or trans-relativistic lepton energies. As the energy density $`U`$ decreases, the cooling becomes less rapid and the lepton energies increase, becoming relativistic. Correspondingly, self-absorbed cyclo-synchrotron emission and multiple Compton scattering may originate the burst emission, while the afterglow may correspond to synchrotron and self-Compton emission from relativistic electrons. The transition between these two regimes may be smooth. The self-absorbed cyclo-synchrotron radiation would increase its relative importance as the Lorentz factor of the emitting leptons increases, and develop an optically thin part when $`\gamma `$ becomes large enough (e.g. $`\gamma >100`$), producing optical radiation. This may be an alternative explanation for the prompt optical flash of GRB 990123. At later times, the particle energy may reach its maximum possible value (e.g. $`\gamma =\mathrm{\Gamma }m_p/m_e`$), and then decrease following the usual prescriptions, with a power-law time decay of the flux density.
## 5 Conclusions
If there is a balance between heating and cooling, the emitting leptons reach typical energies which are at most mildly relativistic. The most efficient radiation process in this case is quasi-thermal Comptonization of self-absorbed cyclo-synchrotron photons. This process is characterized, in the quasi-saturated regime, by a spectrum which maintains its flat slope (in the power-law part) even if the optical depth or the temperature of the emitting region changes. What changes is the relative importance of the Wien peak. The emitting plasma may be dominated by pairs produced through photon–photon interactions in the high-energy part of the spectrum, and this may limit the effective temperature to a narrow range. Most important in this respect is the exact shape of the high-energy part of the particle distribution, which may differ from a pure Maxwellian.
The observed high-energy cut-off of the burst emission is well defined, and close to the rest-mass energy of the electron. This fact is difficult to explain both with "standard" synchrotron models and with Comptonization models.
It calls for a more robust interpretation, in which the energy $`m_ec^2`$ enters in a natural way. We have argued that photon–photon absorption may play a crucial role if there is, in front of the fireball, some material scattering back a fraction of the burst radiation. This material may be the interstellar matter in a dense star-forming region or the matter blown out by a pre-hypernova star. In the latter case the fireball is decelerated at typical distances $`R\sim 10^{13}`$ cm, i.e. where it has become transparent. There is no need for internal shocks. Other problems however arise, which remain to be investigated.
# Noncommutative Geometry for Pedestrians
Lecture given at the International School of Gravitation, Erice: 16th Course: 'Classical and Quantum Non-Locality'.
## 1 Introduction
To control the divergences which from the very beginning had plagued quantum electrodynamics, Heisenberg already in the 1930's proposed to replace the space-time continuum by a lattice structure. A lattice however breaks Lorentz invariance and can hardly be considered as fundamental. It was Snyder who first had the idea of using a noncommutative structure at small length scales to introduce an effective cut-off in field theory similar to a lattice, but at the same time maintaining Lorentz invariance. His suggestion came however just at the time when the renormalization program finally became successful as an effective, if rather ad hoc, prescription for predicting numbers from the theory of quantum electrodynamics, and it was for the most part ignored. Some time later von Neumann introduced the term 'noncommutative geometry' to refer in general to a geometry in which an algebra of functions is replaced by a noncommutative algebra. As in the quantization of classical phase-space, coordinates are replaced by generators of the algebra. Since these do not commute they cannot be simultaneously diagonalized and the space disappears. One can argue that, just as Bohr cells replace classical phase-space points, the appropriate intuitive notion to replace a 'point' is a Planck cell of dimension given by the Planck area. If a coherent description could be found for the structure of space-time which were pointless on small length scales, then the ultraviolet divergences of quantum field theory could be eliminated. In fact the elimination of these divergences is equivalent to coarse-graining the structure of space-time over small length scales; if an ultraviolet cut-off $`\mathrm{\Lambda }`$ is used then the theory does not see length scales smaller than $`\mathrm{\Lambda }^{-1}`$. When a physicist calculates a Feynman diagram he is forced to place a cut-off $`\mathrm{\Lambda }`$ on the momentum variables in the integrands. This means that he renounces any interest in regions of space-time of volume less than $`\mathrm{\Lambda }^{-4}`$. As $`\mathrm{\Lambda }`$ becomes larger and larger the forbidden region becomes smaller and smaller, but it can never be made to vanish. There is a fundamental length scale, much larger than the Planck length, below which the notion of a point is of no practical importance. The simplest and most elegant, if certainly not the only, way of introducing such a scale in a Lorentz-invariant way is through the introduction of noncommuting space-time 'coordinates'.
As a simple illustration of how a 'space' can be 'discrete' in some sense and still covariant under the action of a continuous symmetry group, one can consider the ordinary round 2-sphere, on which the rotation group $`SO_3`$ acts. As a simple example of a lattice structure one can consider two points on the sphere, for example the north and south poles. One immediately notices of course that by choosing the two points one has broken the rotational invariance. It can be restored at the expense of commutativity. The set of functions on the two points can be identified with the algebra of diagonal $`2\times 2`$ matrices, each of the two entries on the diagonal corresponding to a possible value of a function at one of the two points. Now an action of a group on the lattice is equivalent to an action of the group on the matrices, and there can obviously be no non-trivial action of the group $`SO_3`$ on the algebra of diagonal $`2\times 2`$ matrices. However, if one extends the algebra to the noncommutative algebra of all $`2\times 2`$ matrices one recovers the invariance. The two points, so to speak, have been smeared out over the surface of a sphere; they are replaced by two cells. An 'observable' is a hermitian $`2\times 2`$ matrix and therefore has two real eigenvalues, which are its values on the two cells. Although what we have just done has nothing to do with Planck's constant, it is similar to the procedure of replacing a classical spin which can take two values by a quantum spin of total spin 1/2. Only the latter is invariant under the rotation group. By replacing the spin 1/2 by an arbitrary spin $`s`$ one can describe a 'lattice structure' of $`n=2s+1`$ points in an $`SO_3`$-invariant manner. The algebra then becomes the algebra $`M_n`$ of $`n\times n`$ complex matrices, and there are $`n`$ cells of area $`2\pi \bar{k}`$ with
$$n\simeq \frac{\text{Vol}(S^2)}{2\pi \bar{k}}.$$
In general, a static, closed surface in a fuzzy space-time as we define it can only have a finite number of modes and will be described by some finite-dimensional algebra. Graded extensions of some of these algebras have also been constructed. Although we are interested in a matrix version of surfaces primarily as a model of an eventual noncommutative theory of gravity, they have a certain interest in other, closely related, domains of physics. We have seen, for example, that without the differential calculus the fuzzy sphere is basically just an approximation of a classical spin $`r`$ by a quantum spin $`r`$, with $`\hbar `$ in lieu of $`\bar{k}`$. It has been extended in various directions under various names and for various reasons. In order to explain the finite entropy of a black hole it has been conjectured, for example by 't Hooft, that the horizon has the structure of a fuzzy 2-sphere, since the latter has a finite number of 'points' and yet has an $`SO_3`$-invariant geometry. The horizon of a black hole might be a unique situation in which one can actually 'see' the cellular structure of space.
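As a concrete illustration (our own numerical sketch; the normalization $`x_i=\lambda J_i`$ with $`\lambda =r/\sqrt{s(s+1)}`$ is the standard fuzzy-sphere convention, not a formula from the text), one can build the $`n=2s+1`$ dimensional 'coordinates' and check that they lie on a sphere of radius $`r`$ while failing to commute:

```python
import numpy as np

def su2_generators(s):
    """Spin-s angular momentum matrices (Jx, Jy, Jz) in the standard basis."""
    m = np.arange(s, -s - 1, -1)                  # weights s, s-1, ..., -s
    Jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))               # raising operator J+
    for k in range(len(m) - 1):
        jp[k, k + 1] = np.sqrt(s*(s + 1) - m[k + 1]*(m[k + 1] + 1))
    return (jp + jp.T) / 2, (jp - jp.T) / 2j, Jz

s, r = 3, 1.0                                     # n = 2s+1 = 7 cells, radius r
lam = r / np.sqrt(s * (s + 1))
x = [lam * J for J in su2_generators(s)]

# Sphere constraint: x1^2 + x2^2 + x3^2 = r^2 * identity
print(np.allclose(sum(xi @ xi for xi in x), r**2 * np.eye(2*s + 1)))   # True
# Noncommutativity: [x1, x2] = i*lam*x3, which vanishes as n -> infinity
print(np.allclose(x[0] @ x[1] - x[1] @ x[0], 1j * lam * x[2]))         # True
```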
It is to be stressed that we shall here modify the structure of Minkowski space-time but maintain covariance under the action of the Poincaré group. A fuzzy space-time looks then like a solid which has a homogeneous distribution of dislocations but no disclinations. We can pursue this solid-state analogy and think of the ordinary Minkowski coordinates as macroscopic order parameters obtained by coarse-graining over scales less than the fundamental scale. They break down and must be replaced by elements of some noncommutative algebra when one considers phenomena on these scales. It might be argued that since we have made space-time 'noncommutative' we ought to do the same with the Poincaré group. This logic leads naturally to the notion of a $`q`$-deformed Poincaré (or Lorentz) group which acts on a very particular noncommutative version of Minkowski space called $`q`$-Minkowski space. The idea of a $`q`$-deformation goes back to Sylvester. It was taken up later by Weyl and Schwinger to produce a finite version of quantum mechanics.
It has also been argued, for conceptual as well as practical, numerical reasons, that a lattice version of space-time or of space is quite satisfactory if one uses a random lattice structure or graph. From this point of view the Lorentz group is a classical invariance group which is not valid at the microscopic level. Historically the first attempt to make a finite approximation to a curved manifold was due to Regge, and this developed into what is now known as the Regge calculus. The idea is based on the fact that the Euler number of a surface can be expressed as an integral of the gaussian curvature. If one applies this to a flat cone with a smooth vertex then one finds a relation between the defect angle and the mean curvature of the vertex: the latter is encoded in the former. In recent years there has been a burst of activity in this direction, inspired by numerical and theoretical calculations of critical exponents of phase transitions on random surfaces. One chooses a random triangulation of a surface with triangles of constant fixed length, the lattice parameter. If a given point is the vertex of exactly six triangles then the curvature at the point is flat; if there are fewer than six the curvature is positive; if there are more than six the curvature is negative. Non-integer values of the curvature appear through statistical fluctuations. Attempts have been made to generalize this idea to three dimensions using tetrahedra instead of triangles, and indeed also to four dimensions, with euclidean signature. The main problem, apart from considerations of the physical relevance of a theory of euclidean gravity, is that of a proper identification of the curvature invariants as a combination of defect angles. On the other hand some authors have investigated random lattices from the point of view of noncommutative geometry. For an introduction to the lattice theory of gravity from these two different points of view we refer to the books by Ambjørn & Jonsson and by Landi. Compare also the loop-space approach to quantum gravity.
One typically replaces the four Minkowski coordinates $`x^\mu `$ by four generators $`q^\mu `$ of a noncommutative algebra which satisfy commutation relations of the form
$$[q^\mu ,q^\nu ]=i\bar{k}q^{\mu \nu }.$$
(1.1)
The parameter $`\bar{k}`$ is a fundamental area scale which we shall suppose to be of the order of the Planck area:
$$\bar{k}\simeq \mu _P^2=G\mathrm{\hbar }.$$
There is however no need for this assumption; the experimental bounds would be much larger. Equation (1.1) contains little information about the algebra. If the right-hand side does not vanish it states that at least some of the $`q^\mu `$ do not commute. It states also that it is possible to identify the original coordinates with the generators $`q^\mu `$ in the limit $`\bar{k}\to 0`$:
$$\underset{\bar{k}\to 0}{lim}q^\mu =x^\mu .$$
(1.2)
For mathematical simplicity we shall suppose this to be the case although one could include a singular ‘renormalization constant’ $`Z`$ and replace (1.2) by an equation of the form
$$\underset{\bar{k}\to 0}{lim}q^\mu =Zx^\mu .$$
(1.3)
If, as we shall argue, gravity acts as a universal regulator for ultraviolet divergences, then one could reasonably expect the limit $`\bar{k}\to 0`$ to be a singular limit.
Let $`𝒜_{\bar{k}}`$ be the algebra generated in some sense by the elements $`q^\mu `$. We shall here be working on a formal level, so that one can think of $`𝒜_{\bar{k}}`$ as an algebra of polynomials in the $`q^\mu `$, although we shall implicitly suppose that there are enough elements to generate smooth functions on space-time in the commutative limit. Since we have identified the generators as hermitian operators on some Hilbert space we can identify $`𝒜_{\bar{k}}`$ as a subalgebra of the algebra of all operators on the Hilbert space. We have added the subscript $`\bar{k}`$ to underline the dependence on this parameter, but of course the commutation relations (1.1) do not determine the structure of $`𝒜_{\bar{k}}`$. We in fact conjecture that every possible gravitational field can be considered as the commutative limit of a noncommutative equivalent and that the latter is strongly restricted, if not determined, by the structure of the algebra $`𝒜_{\bar{k}}`$. We must have then a large number of algebras $`𝒜_{\bar{k}}`$ for each value of $`\bar{k}`$.
Interest in Snyder’s idea was revived much later when mathematicians, notably Connes and Woronowicz , succeeded in generalizing the notion of differential structure to noncommutative geometry. Just as it is possible to give many differential structures to a given topological space it is possible to define many differential calculi over a given algebra. We shall use the term ‘noncommutative geometry’ to mean ‘noncommutative differential geometry’ in the sense of Connes. Along with the introduction of a generalized integral this permits one in principle to define the action of a Yang-Mills field on a large class of noncommutative geometries.
One of the more obvious applications was to the study of a modified form of Kaluza-Klein theory in which the hidden dimensions were replaced by noncommutative structures. In simple models gravity could also be defined, although it was not until much later that the technical problems involved in the definition of this field were to a certain extent overcome. Soon even a formulation of the standard model of the electroweak forces could be given. A simultaneous development was a revival of the idea of Snyder that geometry at the Planck scale would not necessarily be described by a differential manifold.
One of the advantages of noncommutative geometry is that smooth, finite examples can be constructed which are invariant under the action of a continuous symmetry group. Such models necessarily have a minimal length associated to them and quantum field theory on them is necessarily finite . In general this minimal length is usually considered to be in some way or another associated with the gravitational field. The possibility which we shall consider here is that the mechanism by which this works is through the introduction of noncommuting ‘coordinates’. This idea has been developed by several authors from several points of view since the original work of Snyder. It is the left-hand arrow of the diagram
$$\begin{array}{ccc}𝒜_{\bar{k}}& \longrightarrow & \mathrm{\Omega }^{*}(𝒜_{\bar{k}})\\ \downarrow & & \downarrow \\ \text{Cut-off}& & \text{Gravity}\end{array}$$
(1.4)
The $`𝒜_{\bar{k}}`$ is a noncommutative algebra and the index $`\bar{k}`$ indicates the area scale below which the noncommutativity is relevant; this would normally be taken to be the Planck area.
The top arrow is a mathematical triviality; $`\mathrm{\Omega }^{*}(𝒜_{\bar{k}})`$ is a second algebra which contains $`𝒜_{\bar{k}}`$ and is what gives a differential structure to it, just as the algebra of de Rham differential forms gives a differential structure to a smooth manifold. There is an associated differential $`d`$, which satisfies the relation $`d^2=0`$. The couple $`(\mathrm{\Omega }^{*}(𝒜),d)`$ is known as a differential calculus over the algebra $`𝒜`$. The algebra $`𝒜`$ is what in ordinary geometry would determine the set of points one is considering, with possibly an additional topological or measure-theoretic structure. The differential calculus is what gives an additional differential structure or notion of smoothness. On a commutative algebra of functions on a lattice, for example, it would determine the number of nearest neighbours and therefore the dimension. The idea of extending the notion of a differential to noncommutative algebras is due to Connes, who proposed a definition based on a formal analogy with an identity in ordinary geometry involving the Dirac operator $`\text{/}D`$. Let $`\psi `$ be a Dirac spinor and $`f`$ a smooth function. Then one can write
$$i\gamma ^\alpha e_\alpha f\psi =\text{/}D(f\psi )-f\text{/}D\psi .$$
Here $`e_\alpha `$ is the Pfaffian derivative with respect to an orthonormal moving frame $`\theta ^\alpha `$. This equation can be written
$$\gamma ^\alpha e_\alpha f=-i[\text{/}D,f]$$
and it is clear that if one makes the replacement
$$\gamma ^\alpha \mapsto \theta ^\alpha $$
then on the right-hand side one has the de Rham differential. Inspired by this fact, one defines a differential in the noncommutative case by the formula
$$df=-i[F,f]$$
where now $`f`$ belongs to a noncommutative algebra $`𝒜`$ with a representation on a Hilbert space $`ℋ`$ and $`F`$ is an operator on $`ℋ`$ with spectral properties which make it look like a Dirac operator. The triple $`(𝒜,F,ℋ)`$ is called a spectral triple. It is inspired by the $`K`$-cycle introduced by Atiyah to define a dual to $`K`$-theory. The simplest example is obtained by choosing $`𝒜=ℂ\oplus ℂ`$ acting on $`ℂ^2`$ by left multiplication and
$$F=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right).$$
The 1-forms are then off-diagonal $`2\times 2`$ complex matrices. The differential is extended to them using the same formula as above but with a bracket which is an anticommutator instead of a commutator. Since $`F^2=1`$ it is immediate that $`d^2=0`$. The algebra $`𝒜`$ of this example can be considered as the algebra of functions on 2 points and the differential can be identified with the finite-difference operator.
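This two-point geometry is small enough to verify by direct matrix computation. The following sketch is our own illustration (the sample function values are arbitrary), using the convention $`df=-i[F,f]`$ above:

```python
import numpy as np

# Two-point space: the algebra consists of diagonal 2x2 matrices,
# F is the off-diagonal operator of the text.
F = np.array([[0., 1.],
              [1., 0.]])

f = np.diag([2., 5.])              # a "function" taking the values 2 and 5
df = -1j * (F @ f - f @ F)         # df = -i[F, f]: an off-diagonal 1-form
print(df)                          # entries proportional to f(2) - f(1)

# On 1-forms the differential uses the anticommutator; F^2 = 1 gives d^2 = 0:
ddf = -1j * (F @ df + df @ F)
assert np.allclose(ddf, 0)
```

The off-diagonal entries of $`df`$ are the finite difference of the two function values, as stated above.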
One can argue, not completely successfully, that each gravitational field is the unique ‘shadow’ in the limit $`\mathrm{¯}k\to 0`$ of some differential structure over some noncommutative algebra. This would define the right-hand arrow of the diagram. A hand-waving argument can be given which allows one to think of the noncommutative structure of space-time as being due to quantum fluctuations of the light-cone in ordinary 4-dimensional space-time. This relies on the existence of quantum gravitational fluctuations. A purely classical argument, based on the formation of black holes, has also been given. In both cases the classical gravitational field is to be considered as regularizing the ultraviolet divergences through the introduction of the noncommutative structure of space-time. This can be strengthened to the conjecture that the classical gravitational field and the noncommutative nature of space-time are two aspects of the same thing. If the gravitational field is quantized then presumably the light-cone will fluctuate, and any two points with a space-like separation would have a time-like separation on a time scale of the order of the Planck time, in which case the corresponding operators would no longer commute. So even in flat space-time quantum fluctuations of the gravitational field could be expected to introduce a non-locality in the theory. This is one possible source of noncommutative geometry at the order of the Planck scale. The composition of the three arrows in (1.4) is an expression of an old idea, due to Pauli, that perturbative ultraviolet divergences will somehow be regularized by the gravitational field. We refer to Garay for a recent review.
One example from which one can seek inspiration in looking for examples of noncommutative geometries is quantized phase space, which had already been studied from a noncommutative point of view by Dirac. The minimal length in this case is given by the Heisenberg uncertainty relations or by modifications thereof. In fact, in order to explain the supposed Zitterbewegung of the electron, Schrödinger had proposed to mix position space with momentum space in order to obtain a set of center-of-mass coordinates which did not commute. This idea has inspired many of the recent attempts to introduce minimal lengths. We refer to the literature for examples which are in one way or another connected to noncommutative geometry. Another concept from quantum mechanics which is useful in concrete applications is that of a coherent state. This was first used in a finite noncommutative geometry by Grosse & Prešnajder and later applied to the calculation of propagators on infinite noncommutative geometries, which now become regular 2-point functions and yield finite vacuum fluctuations. Although efforts have been made in this direction, these fluctuations have not been satisfactorily included as a source of the gravitational field, even in some ‘quasi-commutative’ approximation. If this were done then the missing arrow in (1.4) could be drawn. The difficulty is partly due to the lack of tractable noncommutative versions of curved spaces.
The fundamental open problem of the noncommutative theory of gravity concerns, of course, the relation it might have to a future quantum theory of gravity, either directly or via the theory of ‘strings’ and ‘membranes’. But there are more immediate technical problems which have not received a satisfactory answer. We shall mention the problem of the definition of the curvature. It is not certain that the ordinary definition of curvature taken directly from differential geometry is the quantity which is most useful in the noncommutative theory. Cyclic homology groups have been proposed by Connes as the appropriate generalization to noncommutative geometry of topological invariants; the definition of other, non-topological, invariants is not clear. It is not in fact even obvious that one should attempt to define curvature invariants.
There is an interesting theory of gravity, due to Sakharov and popularized by Wheeler, called induced gravity, in which the gravitational field is a phenomenological coarse-graining of more fundamental fields. Flat Minkowski space-time is to be considered as a sort of perfect crystal and curvature as a manifestation of elastic tension, or possibly of defects, in this structure. A deformation in the crystal produces a variation in the vacuum energy which we perceive as gravitational energy. ‘Gravitation is to particle physics as elasticity is to chemical physics: merely a statistical measure of residual energies.’ The description of the gravitational field which we are attempting to formulate using noncommutative geometry is not far from this. We have noticed that the use of noncommuting coordinates is a convenient way of making a discrete structure like a lattice invariant under the action of a continuous group. In this sense what we would like to propose is a Lorentz-invariant version of Sakharov’s crystal. Each coordinate can be separately measured and found to have a distribution of eigenvalues similar to the distribution of atoms in a crystal. The gravitational field is to be considered as a measure of the variation of this distribution just as elastic energy is a measure of the variation in the density of atoms in a crystal.
We shall here accept a noncommutative structure of space-time as a mathematical possibility. One can however attempt to associate the structure with other phenomena. A first step in this direction was undoubtedly taken by Bohr & Rosenfeld when they deduced an intrinsic uncertainty in the position of an event in space-time from the quantum-mechanical measurement process. This idea has since been pursued by other authors and even related to the formation of black holes and to the influence of quantum fluctuations in the gravitational field. An uncertainty relation in the measurements of an event is one of the most essential aspects of a noncommutative structure. The possible influence of quantum-mechanical fluctuations on differential forms was realized some time ago by Segal. A related idea is what one might refer to as ‘spontaneous lattization’. A quantum operator is a very singular object in general, and the correct definition of the space-time coordinates, considered as quantum operators, could give rise to a preferred set of events in space-time which has some of the aspects of a ‘lattice’ in the sense that each operator has a discrete spectrum. The work of Yukawa and Takano could be considered as somewhat similar to this, except that the fuzzy nature of space-time is emphasized and related to the presence of particles. Finkelstein has attempted a very philosophical derivation of the structure of space-time from the notion of ‘simplicity’ (in the group-theoretic sense of the word) which has led him to the possibility of the ‘superposition of points’, something very similar to noncommutativity. We shall mention below the attempts to derive a noncommutative structure of space-time from string theory.
When referring to the version of space-time which we describe here we use the adjective ‘fuzzy’ to underline the fact that points are ill-defined. Since the algebraic structure is described by commutation relations, the qualifier ‘quantum’ has also been used. This latter expression is unfortunate since the structure has no immediate relation to quantum mechanics, and it also leads to confusion with ‘spaces’ on which ‘quantum groups’ act. To add to the confusion, the word ‘quantum’ has also been used to designate equivalence classes of ordinary differential geometries which yield isomorphic string theories, and the word ‘lattice’ has been used to designate what we here qualify as ‘fuzzy’.
## 2 A simple example
The algebra $`𝒫(u,v)`$ of polynomials in $`u=e^{ix}`$, $`v=e^{iy}`$ is dense in any algebra of functions on the torus, defined by the relations $`0\le x\le 2\pi `$, $`0\le y\le 2\pi `$, where $`x`$ and $`y`$ are the ordinary cartesian coordinates of $`ℝ^2`$. If one considers a square lattice of $`n^2`$ points then $`u^n=1`$ and $`v^n=1`$ and the algebra is reduced to a subalgebra $`𝒫_n`$ of dimension $`n^2`$. Introduce a basis $`|j\rangle _1`$, $`0\le j\le n-1`$, of $`ℂ^n`$ with $`|n\rangle _1\equiv |0\rangle _1`$ and replace $`u`$ and $`v`$ by the operators
$$u|j\rangle _1=q^j|j\rangle _1,\qquad v|j\rangle _1=|j+1\rangle _1,\qquad q^n=1.$$
Then the new elements $`u`$ and $`v`$ satisfy the relations
$$uv=qvu,u^n=1,v^n=1$$
and the algebra they generate is the matrix algebra $`M_n`$ instead of the commutative algebra $`𝒫_n`$. There is also a basis $`|j\rangle _2`$ in which $`v`$ is diagonal and a ‘Fourier’ transformation between the two.
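Concretely, $`u`$ and $`v`$ can be realized as the familiar ‘clock’ and ‘shift’ matrices. A minimal numerical check (our sketch, taking $`n=5`$ and $`l=1`$ in the notation below):

```python
import numpy as np

n = 5
q = np.exp(2j * np.pi / n)                    # q = e^{2 pi i l / n} with l = 1

u = np.diag(q ** np.arange(n))                # "clock":  u|j> = q^j |j>
v = np.roll(np.eye(n), 1, axis=0)             # "shift":  v|j> = |j+1 mod n>

assert np.allclose(u @ v, q * (v @ u))        # uv = q vu
assert np.allclose(np.linalg.matrix_power(u, n), np.eye(n))   # u^n = 1
assert np.allclose(np.linalg.matrix_power(v, n), np.eye(n))   # v^n = 1

# The discrete Fourier matrix interchanges the two bases, diagonalizing v:
Fdft = q ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)
assert np.allclose(Fdft.conj().T @ v @ Fdft, np.diag(q ** -np.arange(n)))
```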
Introduce the forms
$`\theta ^1=i\left(1-{\displaystyle \frac{n}{n-1}}|0\rangle _2\langle 0|\right)u^{-1}du,`$
$`\theta ^2=i\left(1-{\displaystyle \frac{n}{n-1}}|n-1\rangle _1\langle n-1|\right)v^{-1}dv.`$
In this simple example the differential calculus can be defined by the relations
$$\theta ^af=f\theta ^a,\qquad \theta ^a\theta ^b=-\theta ^b\theta ^a$$
of ordinary differential geometry. It follows that
$$\mathrm{\Omega }^1(M_n)\simeq \underset{1}{\overset{2}{\bigoplus }}M_n,\qquad d\theta ^a=0.$$
The differential calculus has the form one might expect of a noncommutative version of the torus. Notice that the differentials $`du`$ and $`dv`$ do not commute with the elements of the algebra.
One can choose for $`q`$ the value
$$q=e^{2\pi il/n}$$
for some integer $`l`$ relatively prime with respect to $`n`$. The limit of the sequence of algebras as $`l/n\to \alpha `$ with $`\alpha `$ irrational is known as the rotation algebra or the noncommutative torus. This algebra has a very rich representation theory and it has played an important role as an example in the development of noncommutative geometry.
## 3 Noncommutative electromagnetic theory
The group of unitary elements of the algebra of functions on a manifold is the local gauge group of electromagnetism and the covariant derivative associated to the electromagnetic potential can be expressed as a map
$$ℋ\stackrel{D}{\longrightarrow }\mathrm{\Omega }^1(V)\otimes _𝒜ℋ$$
(3.1)
from a $`𝒞(V)`$-module $`ℋ`$ to the tensor product $`\mathrm{\Omega }^1(V)\otimes _{𝒞(V)}ℋ`$, which satisfies a Leibniz rule
$$D(f\psi )=df\otimes \psi +fD\psi ,\qquad f\in 𝒞(V),\quad \psi \in ℋ.$$
We shall often omit the tensor-product symbol in the following. As far as the electromagnetic potential is concerned we can identify $`ℋ`$ with $`𝒞(V)`$ itself; electromagnetism couples equally, for example, to all four components of a Dirac spinor. The covariant derivative is defined therefore by the Leibniz rule and the definition
$$D\mathrm{\hspace{0.17em}1}=A\otimes 1=A.$$
That is, one can rewrite (3.1) as
$$D\psi =(\partial _\mu +A_\mu )dx^\mu \psi .$$
One can study electromagnetism on a large class of noncommutative geometries, and there exist many recent reviews. Because of the noncommutativity, however, the result often looks more like nonabelian Yang-Mills theory.
## 4 Metrics
We shall define a metric as a bilinear map
$$\mathrm{\Omega }^1(𝒜)\otimes _𝒜\mathrm{\Omega }^1(𝒜)\stackrel{g}{\longrightarrow }𝒜.$$
(4.1)
This is a ‘conservative’ definition, a straightforward generalization of one of the possible definitions of a metric in ordinary differential geometry:
$$g(dx^\mu \otimes dx^\nu )=g^{\mu \nu }.$$
The usual definition of a metric in the commutative case is a bilinear map
$$𝒳\otimes _{𝒞(V)}𝒳\stackrel{g}{\longrightarrow }𝒞(V)$$
where $`𝒳`$ is the $`𝒞(V)`$-bimodule of vector fields on $`V`$:
$$g(\partial _\mu \otimes \partial _\nu )=g_{\mu \nu }.$$
This definition is not suitable in the noncommutative case since the set of derivations of the algebra, which is the generalization of $`𝒳`$, has no natural structure as an $`𝒜`$-module. The linearity condition is equivalent to a locality condition for the metric; the length of a vector at a given point depends only on the value of the metric and the vector field at that point. In the noncommutative case bilinearity is the natural (and only possible) expression of locality. It would exclude, for example, a metric in ordinary geometry defined by a map of the form
$$g(\alpha ,\beta )(x)=\int _Vg_x(\alpha _x,\beta _y)G(x,y)𝑑y.$$
Here $`\alpha ,\beta \in \mathrm{\Omega }^1(V)`$ and $`g_x`$ is a metric on the tangent space at the point $`x\in V`$. The function $`G(x,y)`$ is an arbitrary smooth function of $`x`$ and $`y`$ and $`dy`$ is the measure on $`V`$ induced by the metric.
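Spelled out, bilinearity requires $`g(f\alpha \otimes \beta h)=fg(\alpha \otimes \beta )h`$ for all $`f,h`$ in the algebra (here we write the tensor product explicitly). A one-line check, our own, that the non-local map above violates this:

$$g(\alpha \otimes f\beta )(x)=\int _Vg_x(\alpha _x,f(y)\beta _y)G(x,y)𝑑y\ne f(x)g(\alpha \otimes \beta )(x)\quad \text{unless}\quad G(x,y)\propto \delta (x-y).$$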
Introduce a bilinear flip $`\sigma `$:
$$\mathrm{\Omega }^1(𝒜)\otimes _𝒜\mathrm{\Omega }^1(𝒜)\stackrel{\sigma }{\longrightarrow }\mathrm{\Omega }^1(𝒜)\otimes _𝒜\mathrm{\Omega }^1(𝒜)$$
(4.2)
We shall say that the metric is symmetric if
$$g\sigma =g.$$
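In ordinary geometry $`\sigma `$ is simply the permutation of the two factors, and the condition reduces to the familiar symmetry of the metric components:

$$\sigma (dx^\mu \otimes dx^\nu )=dx^\nu \otimes dx^\mu ,\qquad g\sigma (dx^\mu \otimes dx^\nu )=g^{\nu \mu }=g^{\mu \nu }.$$

In the noncommutative case $`\sigma `$ need not be this simple flip.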
Many of the finite examples have unique metrics, as do some of the infinite ones. Other definitions of a metric have been given, some of which are similar to that given above but weaken the locality condition, and one of which defines a metric on the associated space of states.
## 5 Linear Connections
An important geometric problem is that of comparing vectors and forms defined at two different points of a manifold. The solution to this problem leads to the concepts of a connection and covariant derivative. We define a linear connection as a covariant derivative
$$\mathrm{\Omega }^1(𝒜)\stackrel{D}{\longrightarrow }\mathrm{\Omega }^1(𝒜)\otimes _𝒜\mathrm{\Omega }^1(𝒜)$$
on the $`𝒜`$-bimodule $`\mathrm{\Omega }^1(𝒜)`$ with an extra right Leibniz rule
$$D(\xi f)=\sigma (\xi df)+(D\xi )f$$
defined using the flip $`\sigma `$ introduced in (4.2). In ordinary geometry the map
$$D(dx^\lambda )=-\mathrm{\Gamma }_{\mu \nu }^\lambda dx^\mu dx^\nu $$
defines the Christoffel symbols.
We define the torsion map
$$\mathrm{\Theta }:\mathrm{\Omega }^1(𝒜)\to \mathrm{\Omega }^2(𝒜)$$
by $`\mathrm{\Theta }=d-\pi D`$. It is left-linear. A short calculation yields
$$\mathrm{\Theta }(\xi )f-\mathrm{\Theta }(\xi f)=\pi (1+\sigma )(\xi df).$$
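For completeness, the calculation (our sketch; the tensor product is omitted as above, and we use the graded Leibniz rule $`d(\xi f)=(d\xi )f-\pi (\xi df)`$):

$$\mathrm{\Theta }(\xi f)=d(\xi f)-\pi D(\xi f)=(d\xi )f-\pi (\xi df)-\pi \sigma (\xi df)-\pi (D\xi )f=\mathrm{\Theta }(\xi )f-\pi (1+\sigma )(\xi df).$$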
We shall impose the condition
$$\pi (\sigma +1)=0$$
(5.1)
on $`\sigma `$. It could also be considered as a condition on the product $`\pi `$. In fact, in ordinary geometry it is the definition of $`\pi `$; a 2-form can be considered as an antisymmetric tensor. Because of this condition the torsion is a bilinear map. Using $`\sigma `$, a reality condition on the metric and the linear connection can be introduced. In the commutative limit, when it exists, the commutator defines a Poisson structure, which normally would be expected to have an intimate relation with the linear connection. This relation has only been studied in very particular situations.
## 6 Gravity
The classical gravitational field is normally supposed to be described by a torsion-free, metric-compatible linear connection on a smooth manifold. One might suppose that it is possible to formulate a noncommutative theory of (classical/quantum) gravity by replacing the algebra of functions by a more general algebra and by choosing an appropriate differential calculus. It seems however difficult to introduce a satisfactory definition of local curvature and the corresponding curvature invariants. One way of circumventing this problem is to consider classical gravity as an effective theory and the Einstein-Hilbert action as an induced action. We recall that the classical gravitational action is given by
$$S[g]=\mu _P^4\int \mathrm{\Lambda }_c+\mu _P^2\int R.$$
In the noncommutative case there is a natural definition of the integral, but there does not seem to be a natural generalization of the Ricci scalar. One of the problems is the fact that the natural generalization of the curvature form is in general not right-linear in the noncommutative case. The Ricci scalar then will not be local. One way of circumventing these problems is to return to an old version of classical gravity known as induced gravity. The idea is to identify the gravitational action with the quantum corrections to a classical field in a curved background. If $`\mathrm{\Delta }[g]`$ is the operator which describes the propagation of a given mode in the presence of a metric $`g`$ then one finds that, with a cut-off $`\mathrm{\Lambda }`$, the effective action is given by
$$\mathrm{\Gamma }[g]\sim \text{Tr}\mathrm{log}\mathrm{\Delta }[g]\sim \mathrm{\Lambda }^4\text{Vol}(V)[g]+\mathrm{\Lambda }^2S_1[g]+(\mathrm{log}\mathrm{\Lambda })S_2[g]+\cdots .$$
If one identifies $`\mathrm{\Lambda }=\mu _P`$ then one finds that $`S_1[g]`$ is the Einstein-Hilbert action. A problem with this is that it can only be properly defined on a compact manifold with a metric of euclidean signature, and Wick rotation on a curved space-time is a rather delicate if not dubious procedure. Another problem with this theory, as indeed with the gravitational field in general, is that it predicts an extremely large cosmological constant. The expression $`\text{Tr}\mathrm{log}\mathrm{\Delta }[g]`$ has a natural generalization to the noncommutative case.
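A sketch of where this expansion comes from, under standard heat-kernel assumptions (signs and numerical factors depend on conventions): with a proper-time cut-off,

$$\text{Tr}\mathrm{log}\mathrm{\Delta }[g]=-\int _{\mathrm{\Lambda }^{-2}}^{\infty }\frac{dt}{t}\text{Tr}e^{-t\mathrm{\Delta }[g]},\qquad \text{Tr}e^{-t\mathrm{\Delta }[g]}\simeq \frac{1}{(4\pi t)^2}\int \sqrt{g}(a_0+a_1[g]t+a_2[g]t^2+\cdots ),$$

and the $`t^{-3}`$, $`t^{-2}`$ and $`t^{-1}`$ terms of the integrand produce respectively the $`\mathrm{\Lambda }^4`$, $`\mathrm{\Lambda }^2`$ and $`\mathrm{log}\mathrm{\Lambda }`$ pieces, with the heat-kernel coefficient $`a_1[g]\propto R`$ supplying the Einstein-Hilbert term.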
We have defined gravity using a linear connection, which required the full bimodule structure of the $`𝒜`$-module of 1-forms. One can argue that this was necessary to obtain a satisfactory definition of locality as well as a reality condition. It is possible to relax these requirements and define gravity as a Yang-Mills field or as a couple of left and right connections. If the algebra is commutative (but not an algebra of smooth functions) then to a certain extent all definitions coincide.
## 7 Regularization
Using the diagram (1.4) we have argued that gravity regularizes propagators in quantum field theory through the formation of a noncommutative structure. Several explicit examples of this have been given in the literature. In particular, an energy-momentum tensor constructed from regularized propagators has been used as a source of a cosmological solution. The propagators appear as if they were derived from non-local theories on ordinary space-time. We required that the metric that we use be local in the sense that the map (4.1) is bilinear with respect to the algebra. One could say that the theory is as local as the algebra will permit. However, since the algebra is not an algebra of points, this means that the theory appears to be non-local as an effective theory on a space-time manifold.
## 8 Kaluza-Klein theory
We mentioned in the Introduction that one of the first, obvious applications of noncommutative geometry is as an alternative hidden structure of Kaluza-Klein theory. This means that one leaves space-time as it is and one modifies only the extra dimensions; one replaces their algebra of functions by a noncommutative algebra, usually of finite dimension to avoid the infinite tower of massive states of traditional Kaluza-Klein theory. Because of this restriction, and because the extra dimensions are purely algebraic in nature, the length scale associated with them can be arbitrary, indeed as large as the Compton wavelength of a typical massive particle.
The algebra of Kaluza-Klein theory is therefore, for example, a product algebra of the form
$$𝒜=𝒞(V)\otimes M_n.$$
Normally $`V`$ would be chosen to be a manifold of dimension four; much of the formalism is, in fact, identical to that of the $`M`$(atrix)-theory of $`D`$-branes. For the simple models with a matrix extension one can use as gravitational action the Einstein-Hilbert action in ‘dimension’ $`4+d`$, including possibly Gauss-Bonnet terms. For a more detailed review we refer to a lecture at the 5th Hellenic school in Corfu.
## 9 Quantum groups and spaces
The set of smooth functions on a manifold is an algebra. This means that from any function of two variables one can construct a function of one by multiplication. If the manifold happens to be a Lie group then there is another operation which from any function of one variable constructs a function of two. This is called co-multiplication and is usually written $`\mathrm{\Delta }`$:
$$(\mathrm{\Delta }f)(g_1,g_2)=f(g_1g_2).$$
It satisfies a set of consistency conditions with the product. Since the expression ‘noncommutative group’ designates something else, the noncommutative version of an algebra of smooth functions on a Lie group has been called a ‘quantum group’. It is neither ‘quantum’ nor ‘group’. The first example was found by Kulish & Reshetikhin and by Sklyanin. A systematic description was first made by Woronowicz, by Jimbo, Manin and Drinfeld. The Lie group $`SO(n)`$ acts on the space $`ℝ^n`$; the Lie group $`SU(n)`$ acts on $`ℂ^n`$. The ‘quantum’ versions $`SO_q(n)`$ and $`SU_q(n)`$ of these groups act on the ‘quantum spaces’ $`ℝ_q^n`$ and $`ℂ_q^n`$. These latter are noncommutative algebras with special covariance properties. The first differential calculus on a quantum space was constructed by Wess & Zumino. There is an immense literature on quantum groups and spaces, from the algebraic as well as geometric point of view. We have included some of it in the bibliography. We mention in particular the collection of articles edited by Doebner & Hennig and Kulish and the introductory text by Kassel.
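As a simple commutative illustration (ours, not drawn from the works cited): for functions on the additive group $`ℝ`$ one has $`(\mathrm{\Delta }f)(x,y)=f(x+y)`$, so that, for example,

$$\mathrm{\Delta }(x^2)=x^2\otimes 1+2x\otimes x+1\otimes x^2,$$

and the co-associativity condition $`(\mathrm{\Delta }\otimes \mathrm{id})\mathrm{\Delta }=(\mathrm{id}\otimes \mathrm{\Delta })\mathrm{\Delta }`$ is just the associativity of the group law.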
## 10 Mathematics
At a more sophisticated level one would have to add a topology to the algebra. Since we have identified the generators as hermitian operators on a Hilbert space, the most obvious structure would be that of a von Neumann algebra. We refer to Connes for a description of these algebras within the context of noncommutative geometry. A large part of the interest of mathematicians in noncommutative geometry has been concerned with the generalization of topological invariants to the noncommutative case. It was indeed this which led Connes to develop cyclic cohomology. Connes has also developed and extended the notion of a Dixmier trace on certain types of algebras as a possible generalization of the notion of an integral. The representation theory of quantum groups is an active field of current interest since the pioneering work of Woronowicz. For a recent survey we refer to the book by Klimyk & Schmüdgen. Another interesting problem is the relation between differential calculi covariant under the (co-)action of quantum groups and those constructed using the spectral-triple formalism of Connes. Although it has been known for some time that many if not all of the covariant calculi have formal Dirac ‘operators’, it is only recently that mathematicians have considered to what extent these ‘operators’ can actually be represented as real operators on a Hilbert space and to what extent they satisfy the spectral-triple conditions.
## 11 String Theory
Last, but not least, is the possible relation of noncommutative geometry to string theory. We have mentioned that since noncommutative geometry is pointless, a field theory on it will be divergence-free. In particular, monopole configurations will have finite energy, provided of course that the geometry in which they are constructed can be approximated by a noncommutative geometry, since the point on which they are localized has been replaced by a volume of fuzz. This is one characteristic that it shares with string theory. Certain monopole solutions in string theory have finite energy since the point in space (a $`D`$-brane) on which they are localized has been replaced by a throat to another ‘adjacent’ $`D`$-brane.
In noncommutative geometry the string is replaced by a certain finite number of elementary volumes of ‘fuzz’, each of which can contain one quantum mode. Because of the nontrivial commutation relations, the ‘line’ $`\delta q^\mu =q^\mu -q^{\prime \mu }`$ joining two points $`q^\mu `$ and $`q^{\prime \mu }`$ is quantized and can be characterized by a certain number of creation operators $`a_j`$, each of which creates a longitudinal displacement. They would correspond to the rigid longitudinal vibrational modes of the string. Since it requires no energy to separate two points, the string tension would be zero. This has not much in common with traditional string theory.
We mentioned in the previous section that noncommutative Kaluza-Klein theory has much in common with the $`M`$(atrix) theory of $`D`$-branes. What is lacking is a satisfactory supersymmetric extension. Finally we mention that there have been speculations that string theory might give rise naturally to space-time uncertainty relations and that it might also give rise to a noncommutative theory of gravity. More specifically there have been attempts to relate a noncommutative structure of space-time to the quantization of the open string in the presence of a non-vanishing $`B`$-field.
## Acknowledgments
The author would like to thank the Max-Planck-Institut für Physik in München for financial support and J. Wess for his hospitality there.
## Bibliography
What follows constitutes in no way a complete bibliography of noncommutative geometry. It is strongly biased in favour of the author’s personal interests and the few subjects which were touched upon in the text.
# Multiscale Analysis of Blood Pressure Signals
## Abstract
We describe the multiresolution wavelet analysis of blood pressure waves in patients affected by vasovagal syncope, compared with that of healthy subjects, using Haar and Gaussian bases. A comparison between scale-dependent and scale-independent measures discriminating the two classes of subjects is made. What emerges is a sort of equivalence between these two methodological approaches; that is, both methods reach the same statistical significance of separation between the two classes.
PACS numbers: 87.80.+s, 87.90.+y, 07.05.k
preprint: BARI-TH/98-317
In recent years biological time series have been considered in the more general framework of fractal functions. Accordingly, the analysis tools commonly used for fractal functions have been applied to study physiological time series. A major approach to such problems is based on the wavelet transform, a technique which has proved to be well suited for characterizing the scaling properties of fractal objects even in the presence of low-frequency trends.
In particular, the regulation of the cardiac rhythm has recently been investigated in two very interesting papers aiming at providing means of diagnosis of heart disease. The interbeat interval records for healthy and sick subjects have been studied by wavelet analysis, which can appropriately treat the non-stationarity of these signals. In Ref. a scale-dependent measure, the root-mean square of the wavelet coefficients $`\sigma _w(s)`$ at a particular scale $`s`$, was shown to be able to sharply discriminate between healthy and sick subjects. In Ref. it was observed that scale-dependent measures may reflect characteristics specific to the subject or to the choice of the wavelet basis; a scale-independent measure extracting the exponents characterizing the scaling of the partition function of wavelet coefficients was then proposed, and its performance in detecting heart disease was excellent. The scaling exponents were already studied in Ref. for the second wavelet moments, while in Ref. they are calculated for arbitrary moments. It seems likely to us that the approaches based on scale-dependent measures and those based on scale-independent ones should be considered as qualitatively equivalent. Indeed some pathological conditions may alter the cardiac dynamics at a specific scale or range of scales, while the scaling behaviour of the dynamics of cardiac rhythm regulation should be universal for subjects belonging to the same class. An important problem is how the choice of the wavelet basis influences the results of the analysis. Moreover it is interesting to check whether the same kind of analysis can be used to study other physiological time series and pathologies.
In this work we study the temporal series of the systolic blood pressure wave maxima in nine healthy subjects and ten subjects showing a pathology known as vasovagal syncope. We perform a wavelet analysis of these time series and consider both the scale-dependent and the scale-independent measures described above. To our knowledge this is the first time wavelets are used to study blood pressure wave signals. We performed the analysis using two different wavelet bases, namely the Haar basis and the third derivative of the Gaussian (TDG). The main difference between these two bases is that the former is able to remove only zero-order trends, while the latter is insensitive to higher polynomial trends. We find that the Haar basis is, with respect to the data set at hand, well suited for scale-dependent measures, i.e. measures of $`\sigma _w(s)`$. Indeed, using these wavelets we find an evident separation between healthy and sick subjects at a particular scale $`s=32`$; this separation is missing when the TDG is used. On the other hand, using the TDG leads to a separation with respect to scaling-exponent measures which is much less significant when the Haar basis is used. Interestingly, we found that the statistical confidence of separation in the scale-dependent parameter (obtained using Haar wavelets) is very close to that obtained by the scale-independent parameter, i.e. the scaling exponent (using the TDG). Since both methods have reached the same degree of separation between the two classes, it remains to be understood whether this coincides with the intrinsic degree of separability of the data set here considered.
Vasovagal syncope is a sudden, rapid and reversible loss of consciousness, due to a reduction of cerebral blood flow attributable to a dysfunction of the cardiovascular control, induced by that part of the Autonomic Nervous System (ANS) that regulates the arterial pressure. In normal conditions the arterial pressure is maintained at a constant level by the existence of a negative feed-back mechanism localised in some nervous centres of the brainstem. As a consequence of a blood pressure variation, the ANS is able to restore the haemodynamic situation by acting on the heart and vessels by means of two efferent pathways, the vagal and the sympathetic one, the former acting in the sense of a reduction of the arterial pressure, the latter in the opposite sense. Vasovagal syncope consists of an abrupt fall of blood pressure corresponding to an acute haemodynamic reaction produced by a sudden change in the activity of the ANS (an excessive enhancement of vagal outflow or a sudden decrease of sympathetic activity).
Vasovagal syncope is a quite common clinical problem and in $`50\%`$ of patients it remains undiagnosed, being labelled as syncope of unknown origin, i.e. not necessarily connected to a dysfunction of the ANS action. A rough diagnosis of vasovagal syncope is nevertheless practicable with the help of the head-up tilt test (HUT). During this test the patient, positioned on a motor-driven tilt table, after an initial rest period in the horizontal position, is suddenly brought into the vertical position. In such a way the ANS registers a sudden stimulus of reduction of arterial pressure due to the shift of blood volume to the inferior limbs. A badly regulated response to this stimulus can induce syncope behaviour.
According to some authors, a positive HUT indicates an individual predisposition toward vasovagal syncope. This statement does not find general agreement because of the low reproducibility of the test in the same patient and the extreme variability of the sensitivity in most of the clinical studies. For this reason a long and careful clinical observation period is needed to establish with a certain reliability whether the patient is affected by this syndrome. What we want to stress here is that, from a clinical standpoint, there is not a neat way of discriminating between healthy and syncope-affected subjects, while for the heart diseases studied in the papers mentioned above there is always a very clear clinical picture. For this reason in recent years a large amount of work has been devoted to the investigation of signal patterns that could characterise syncope-affected patients. This has been done especially by means of Fourier analyses of arterial pressure and heart rate, which have not proved successful for this purpose.
The temporal behaviour of blood pressure is the most clinically relevant aspect for studying vasovagal syncope, since it is the result of the combined activity of the ANS on the heart and vessels. Therefore we extract blood pressure wave maxima from a recording period twenty minutes long (which is the best we can do for technical reasons). During this time the following biological signals of the subject are recorded: E.C.G. (lead D-II), E.E.G., the thoracic respiration, and the arterial blood pressure (by means of a Finapres system, Ohmeda 2300, Englewood, CO, USA, measuring from the second finger of the left hand).
We denote $`\{P_i\}`$ the time series of systolic pressure maxima. The coefficients of the discrete wavelet transform at scale $`s`$ are given by:
$$W_s(n)=s^{-1}\underset{i=1}{\overset{M}{\sum }}P_i\psi ((i-n)/s),$$
(1)
where $`\psi `$ is the generating wavelet, $`M`$ is the number of points in the time series (we have $`M=2^{10}`$), and $`n`$ is the point for which the coefficient is calculated. The scale-dependent measure proposed in Ref. corresponds to evaluating the root-mean square of the wavelet coefficients at fixed scales. The scale-independent measure deals with the sums of the moments of the wavelet coefficients
$$Z_q(s)=\underset{n}{\sum }|W_s(n)|^q,$$
(2)
where the sum is only over the maxima of $`|W_s|`$. One can show that $`Z_q`$ scales as:
$$Z_q(s)\sim s^{\tau (q)}.$$
(3)
The exponents $`\tau (q)`$, especially for $`q=2`$ and $`q=5`$, were found to provide a robust degree of separation in the case of heart disease diagnosis.
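For concreteness, a minimal numpy sketch of the two measures — Eq. (1) with a Haar mother wavelet, the r.m.s. measure $`\sigma _w(s)`$, and the partition function of Eqs. (2)-(3) restricted to local maxima. The random-walk series below is only a stand-in for the recorded pressure maxima, not real data:

```python
import numpy as np

def haar(t):
    """Mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def wavelet_coeffs(P, s, psi=haar):
    """W_s(n) = s^{-1} sum_i P_i psi((i - n)/s), as in Eq. (1)."""
    i = np.arange(len(P))
    return np.array([(P * psi((i - n) / s)).sum() / s for n in range(len(P))])

def sigma_w(P, s):
    """Scale-dependent measure: r.m.s. of the wavelet coefficients at scale s."""
    W = wavelet_coeffs(P, s)
    return np.sqrt((W ** 2).mean())

def Z_q(P, s, q):
    """Partition function, Eq. (2): sum of |W_s|^q over local maxima of |W_s|."""
    W = np.abs(wavelet_coeffs(P, s))
    peaks = (W[1:-1] > W[:-2]) & (W[1:-1] > W[2:])
    return (W[1:-1][peaks] ** q).sum()

# tau(q) from the slope of log Z_q(s) versus log s, Eq. (3):
P = np.cumsum(np.random.randn(1024))            # surrogate series, M = 2**10
scales = np.array([4, 8, 16, 32, 64])
tau1 = np.polyfit(np.log(scales), np.log([Z_q(P, s, 1) for s in scales]), 1)[0]
print(tau1)
```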
First we discuss the results obtained on the present data set by evaluating the r.m.s. of the wavelet coefficients. In Fig. 1(a) the r.m.s. of the Haar wavelet coefficients is plotted versus the scale, for both sick and healthy subjects, while in Fig. 1(b) the same quantities are plotted in the case of the TDG basis. One can see that in the Haar case an evident separation between healthy and sick subjects holds at the scale $`s=32`$: healthy subjects have greater fluctuations in the wavelet coefficients. We performed the WMW (Wilcoxon-Mann-Whitney) test to check the hypothesis that the two kinds of samples, positive and control subjects, have been drawn from the same continuous distribution function. The WMW test gives a $`3.5\times 10^{-3}`$ probability for this hypothesis, i.e. the hypothesis is rejectable at the $`1\%`$ level of significance. On the other hand, using the TDG as the wavelet basis does not lead to separation at any scale. Therefore a scale-dependent measure can highlight an evident separation at a particular scale, but the result depends on the wavelet basis one uses.
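The WMW comparison itself is a one-liner with standard tools. A sketch with synthetic groups of the same sizes as ours (9 controls versus 10 positives); the numbers are illustrative, not our measured $`\sigma _w(32)`$ values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.2, size=9)    # stand-in for control sigma_w(32)
syncope = rng.normal(0.6, 0.2, size=10)   # stand-in for positive sigma_w(32)

stat, p = mannwhitneyu(healthy, syncope, alternative='two-sided')
print(p)   # p < 0.01 rejects a common parent distribution at the 1% level
```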
Let us now turn to consider scaling-exponent measures. We have calculated the partition functions $`Z_q(s)`$ using both Haar wavelets and the TDG basis. A measure of the exponents $`\tau (q)`$ can then be obtained through log-log plots of $`Z_q`$ versus $`s`$. In the case of the TDG, the log-log plots of $`Z_q`$ versus $`s`$ showed a neat scaling behaviour: in Fig. 2 the $`q=1`$ case, the most significant with our data, is shown. In the case of Haar wavelets, the log-log plots show some curvature (see Fig. 2), but calculating linear correlation coefficients we discover that it still makes sense to evaluate the $`\tau (q=1)`$ exponents. For the moment let us refer to the TDG case. We found that the exponent $`\tau (q=1)`$ acts as a discriminating parameter between healthy and sick subjects, while the exponents for the other values of $`q`$ did not yield equally convincing results. Healthy subjects have lower $`\tau (q=1)`$ values than syncope-affected ones (see Fig. 3). By the WMW test, the probability that the values of the exponents found for the two classes of subjects, healthy and sick, were sampled from the same continuous distribution was estimated at $`4.5\times 10^{-3}`$, a level of significance very close to the one found in the case of the scale-dependent measure. On the other hand, considering the $`\tau (q=1)`$ as computed in the Haar basis, we find that the latter probability grows by about one order of magnitude, reaching a value of $`2.1\times 10^{-2}`$.
In Fig. 3 we have shown the points corresponding to the $`19`$ subjects under consideration in the $`\sigma _w`$-$`\tau `$ plane, where the coordinates correspond to the measured quantities $`\sigma _w(32)`$ (by Haar wavelets) and $`\tau (q=1)`$ (by the TDG basis). It is evident that the two measures separate the two classes to the same degree.
We observe that it is reasonable that the Gaussian basis is more effective in detecting the scaling behaviour of these time series with respect to the Haar basis. On the other hand, the same degree of separation is obtained by Haar wavelets at a given scale, while the TDG seems insensitive to single-scale features. It follows that these two kinds of measures go in the same direction rather than excluding each other. A very careful analysis in Ref. shows that in the case of diverse heart pathologies, scale-dependent measures, namely measures of $`\sigma _w(s)`$ at a particular scale $`s`$, outperform measures of scaling exponents. Due to the size of our data set, we may encounter problems in reproducing the kind of analysis performed in Ref. , but we look forward to investigating this aspect. In Ref. it is also stressed that baroreflex modulations of sympathetic and parasympathetic tone lie in a frequency range which corresponds to the scale $`s=32`$, which is, also in our case, the scale that best discriminates between controls and positives.
We are, at the moment, not able to provide a physiological explanation of the phenomena described here. However these results might be useful for a better understanding of the very complicated pathology of vasovagal syncope.
Some conclusions are in order. We used wavelets to analyse blood pressure signals from healthy subjects and subjects positive to the vasovagal syncope pathology. We evaluated two quantities, one depending on a fixed scale and the other a scaling exponent, which have recently been proposed as diagnostic tools for heart disease. We have shown that both measures succeed in separating the two classes with the same degree of significance. We are working to obtain longer records and an enlarged number of positives and controls so as to refine our analysis. At the moment we are aware of being far from able to propose an alternative diagnostic tool. This would be very useful in consideration of the particular difficulty that the clinical diagnosis of vasovagal syncope still presents.
The authors are grateful to Doc. M. Osei Bonsu for giving us access to the raw blood pressure data. A.D.P. thanks group-IV of INFN and in particular Prof. R. Gatto for supporting his stay at CERN, Geneva. The authors also thank an anonymous referee whose suggestions improved the presentation of this work.
# Electronic Structure of Cu1-xNixRh2S4 and CuRh2Se4: Band Structure Calculations, X-ray Photoemission and Fluorescence Measurements
## I Introduction
Spinel compounds exhibit an extensive variety of interesting physical properties and have potential technological applications. There are a variety of 3d ion-based oxide spinels, while the S and Se counterparts usually contain 4d or 5d atoms. Several of the compounds are superconductors (LiTi<sub>2</sub>O<sub>4</sub>, CuRh<sub>2</sub>S<sub>4</sub>, CuRh<sub>2</sub>Se<sub>4</sub>, etc.), there are unusual magnetic insulators (e.g. LiMn<sub>2</sub>O<sub>4</sub> and Fe<sub>3</sub>O<sub>4</sub>), and recently, the first d-electron-based heavy fermion metal has been discovered (LiV<sub>2</sub>O<sub>4</sub>). The surprisingly high value of the superconducting critical temperature (11 K) in LiTi<sub>2</sub>O<sub>4</sub> has never been understood. Another spinel compound, CuIr<sub>2</sub>S<sub>4</sub>, is neither magnetic nor superconducting but displays a rather unusual metal-insulator transition that is not yet understood. The ternary sulfo- and selenospinels CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub> have been found to be superconducting at T<sub>c</sub> = 4.70 and 3.48 K, respectively. They have the typical spinel structure \[$`Fd\overline{3}m`$\] where Cu ions occupy the $`A`$ tetrahedral sites and Rh ions occupy the $`B`$ octahedral sites.
This wide range of phenomena in the spinel-structure oxide compounds raises very general questions about the electronic structure of the sulfides and the selenides: are there indications of strong correlations effects, or can their properties be accounted for as Fermi liquids described by conventional band theory? Different models for the valence of Cu in these compounds have been discussed, but according to recent photoemission measurements given for CuV<sub>2</sub>S<sub>4</sub>, CuIr<sub>2</sub>S<sub>4</sub>, CuIr<sub>2</sub>Se<sub>4</sub> and Cu<sub>0.5</sub>Fe<sub>0.5</sub>Cr<sub>2</sub>S<sub>4</sub>, Cu is best characterized as monovalent in spinel compounds. Therefore, one expects that the Rh ion will have a formal mixed valence of +3.5 in CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub>, and indeed both are good metals. However, very little of the typical temperature-dependent behavior of “mixed valence compounds” is seen in these Rh-based spinels.
The electrical and magnetic properties of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> have been presented by Matsumoto et al. The superconducting transition temperature decreases (4.70 K $`\to `$ 3.7 K $`\to `$ 2.8 K $`\to `$ $`<`$ 2.0 K) as Cu is replaced by Ni (x = 0.00, 0.02, 0.05, and 0.10), but the reason for this behavior is unexplained. Hagino et al. have presented extensive data on CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub> (resistivity, susceptibility, magnetization, specific heat, NMR), but their differences do not yet have any microscopic interpretation. Only for CuRh<sub>2</sub>S<sub>4</sub> have general (full potential, all electron) band structure calculations been reported.
In this paper, we present X-ray spectroscopic studies of the valence band electronic structure of these materials. To provide a clear interpretation of these data, we also report first-principles band structure calculations (Linear-Augmented-Plane-Wave method \[LAPW\]) for CuRh<sub>2</sub>S<sub>4</sub>, CuRh<sub>2</sub>Se<sub>4</sub>, NiRh<sub>2</sub>S<sub>4</sub> and Cu<sub>0.5</sub>Ni<sub>0.5</sub>Rh<sub>2</sub>S<sub>4</sub> that enable us to address the properties of these spinels. Total and partial densities of states (DOS), plasma energies and transport-related quantities are calculated as well as X-ray emission spectra. The total and partial DOS and calculated X-ray emission spectra are found to compare favorably with the measured X-ray photoelectron spectra (XPS) and X-ray emission spectra (XES) (which probe total and partial DOS, respectively). All spectral measurements are performed using the same samples which were used to study the electrical and magnetic properties of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> in Ref. .
## II Experimental Details
Mixtures of high-purity fine powders of Cu, Ni, Rh, S and Se with nominal stoichiometry were heated in sealed quartz tubes at $`850^{\circ }`$ C for a period of 10 days. Subsequently, the specimens were reground and sintered in pressed parallelepiped form at $`850^{\circ }`$ C for 48 hours. X-ray diffraction data confirm the spinel phase in these powder specimens. The lattice constants of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> are 9.79, 9.79 and 9.71 Å for x = 0.0, 0.1 and 1.0, respectively, and 10.27 Å for CuRh<sub>2</sub>Se<sub>4</sub>.
The XPS measurements were performed with an ESCA spectrometer from Physical Electronics (PHI 5600 ci, with monochromatized Al-K<sub>α</sub> radiation of a 0.3 eV FWHM). The energy resolution of the analyzer was 1.5$`\%`$ of the pass energy. The estimated energy resolution was less than 0.35 eV for the XPS measurements on the copper and nickel sulfides. The pressure in the vacuum chamber during the measurements was below $`5\times 10^9`$ mbar. Prior to XPS measurements the samples were cleaved in ultra high vacuum. All the investigations have been performed at room temperature on the freshly cleaved surface. The XPS spectra were calibrated using a Au-foil to obtain photoelectrons from the Au-4f<sub>7/2</sub> subshell. The binding energy for Au-4f<sub>7/2</sub> electrons is 84.0 eV.
X-ray fluorescence spectra were measured at Beamline 8.0 of the Advanced Light Source (ALS) at Lawrence Berkeley Laboratory. The undulator beam line is equipped with a spherical grating monochromator, and an experimental resolving power of $`E/\mathrm{\Delta }E=300`$ was used. The fluorescence end station consists of a Rowland circle grating spectrometer. The Ni L<sub>3</sub> and Cu L<sub>3</sub> XES were measured with an experimental resolution of approximately 0.5–0.6 eV and S L<sub>2,3</sub> and Se M<sub>2,3</sub> with a resolution of 0.3–0.4 eV. The incident angle of the $`p`$-polarized monochromatic beam on the sample was about $`15^{\circ }`$. The Cu L<sub>3</sub> and Ni L<sub>3</sub> XES were measured just above the L<sub>3</sub> threshold but below the L<sub>2</sub> threshold, which prevented overlap of the metal L<sub>3</sub> and metal L<sub>2</sub> spectra.
## III Method of Calculation
The band structure calculations were done with the full potential LAPW code WIEN97. The sphere radii were chosen as 2.1 a.u., 2.2 a.u., and 2.0 a.u. for Cu/Ni, Rh, and S/Se, respectively. The plane wave cutoff was K<sub>max</sub> = 3.25 a.u., resulting in slightly more than 1400 basis functions per primitive cell ($`\sim `$100 basis functions/atom). The local density approximation (LDA) exchange-correlation potential of Perdew and Wang was used, except for the DOS calculations shown in Fig. 7, where the gradient correction to the LDA exchange-correlation potential of Perdew, Burke, and Ernzerhof was used. A mesh of 47 k-points in the irreducible zone (Blöchl’s modified tetrahedron method) was used in achieving self-consistency.
The XES spectra were calculated using Fermi’s golden rule and the matrix elements between the core and valence states (following the formalism of Neckel). The calculated spectra include broadening for the spectrometer and core and valence lifetimes. The DOS calculations used 47 k-points (again, Blöchl’s modified tetrahedron method was used). The experimental lattice constants (listed in the previous section) were used in the calculations and the values used for the internal parameter $`u`$ were taken to be 0.385 for all three stoichiometric compounds (CuRh<sub>2</sub>Se<sub>4</sub>, CuRh<sub>2</sub>S<sub>4</sub>, NiRh<sub>2</sub>S<sub>4</sub>) as well as for Cu<sub>0.5</sub>Ni<sub>0.5</sub>Rh<sub>2</sub>S<sub>4</sub>. Experimental data for the internal parameter was not available, so the values were taken to be 0.385 (rather than the “ideal” position of 3/8) by analogy to the related CuIr<sub>2</sub>S<sub>4</sub> and CuIr<sub>2</sub>Se<sub>4</sub> spinel compounds for which the $`u`$ parameter has been measured.
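Operationally, the lifetime and spectrometer broadenings amount to convolving the matrix-element-weighted spectra with a Lorentzian and a Gaussian. A schematic numpy version of this last step (the widths here are illustrative placeholders, not the values used in the actual calculations):

```python
import numpy as np

def broaden(E, spectrum, gamma_life=0.3, sigma_instr=0.25):
    """Apply Lorentzian (lifetime) then Gaussian (spectrometer) broadening
    to a spectrum sampled on a uniform energy grid E (eV)."""
    dE = E[1] - E[0]
    x = np.arange(-10.0, 10.0, dE)
    lor = (gamma_life / np.pi) / (x**2 + gamma_life**2)     # unit-area Lorentzian
    gau = np.exp(-x**2 / (2.0 * sigma_instr**2))
    gau /= gau.sum() * dE                                   # unit-area Gaussian
    out = np.convolve(spectrum, lor, mode='same') * dE
    return np.convolve(out, gau, mode='same') * dE
```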
## IV Discussion of Spectroscopic Data
### A CuRh<sub>2</sub>S<sub>4</sub> and NiRh<sub>2</sub>S<sub>4</sub>
The calculated total and partial DOS of CuRh<sub>2</sub>S<sub>4</sub> and NiRh<sub>2</sub>S<sub>4</sub>, shown in Figs. 1 and 2, reveal many common features. The valence bands extend from E<sub>F</sub> (taken as the zero of energy) to approximately -7 eV, and the Fermi level lies near the top of a Rh d-chalcogen p complex of bands that lie below a gap centered 0.5–1.0 eV above the Fermi level. The gap between the valence band and conduction band is found to be about 0.5–0.7 eV wide. The sulfur states in CuRh<sub>2</sub>S<sub>4</sub> and NiRh<sub>2</sub>S<sub>4</sub> show similar DOS: S 3s atomic-like states in the region from -12.7 to -14.7 eV and band-like S 3p states which are mixed with Rh 4d and Cu/Ni 3d states in a wide energy region. Cu/Ni 3d states are found to be much narrower than Rh 4d states, which are less localized and form two peaks in the DOS near the bottom and the top of the valence band. Our results for CuRh<sub>2</sub>S<sub>4</sub> are similar to those of Ref. 8 except for the distribution of the Cu 3d DOS. As seen in Fig. 1, Cu 3d states lie within the region of S 3p states but are weakly hybridized, forming a 1 eV wide peak centered around -2.5 eV. The S d character is quite small and probably reflects tails of the neighboring atoms more than atomic 3d character.
The total DOS at the Fermi level \[N(E<sub>F</sub>)\] increases from NiRh<sub>2</sub>S<sub>4</sub> (8.18 states/eV/cell) to CuRh<sub>2</sub>S<sub>4</sub> (9.89 states/eV/cell), which follows the same trend as the measured electronic specific heat coefficients. For the intermediate compound Cu<sub>0.5</sub>Ni<sub>0.5</sub>Rh<sub>2</sub>S<sub>4</sub>, N(E<sub>F</sub>) is 8.43 states/eV/cell, much nearer that of NiRh<sub>2</sub>S<sub>4</sub>. In CuRh<sub>2</sub>S<sub>4</sub> the Cu 3d partial DOS is very small at the Fermi level, whereas the Rh 4d and S 3p partial DOS are the main contributions to the total. Consequently, the Cooper pairs in the superconducting state of CuRh<sub>2</sub>S<sub>4</sub> are formed mainly by the electrons in the hybridized bands derived from Rh 4d and S 3p states.
In NiRh<sub>2</sub>S<sub>4</sub> the situation is quite different. Ni 3d states are broader and at lower binding energy than the Cu 3d states of CuRh<sub>2</sub>S<sub>4</sub>, and hybridization with S p leads to Ni 3d character over a 3 eV wide region that extends above the Fermi level. The result is that the main contribution to the DOS at the Fermi level is from Ni 3d states, unlike in CuRh<sub>2</sub>S<sub>4</sub> where the Cu 3d contribution at E<sub>F</sub> is very minor.
The experimental Cu L<sub>3</sub> ($`3d4s\to 2p`$ transition), Ni L<sub>3</sub> ($`3d4s\to 2p`$ transition) and S L<sub>2,3</sub> ($`3s3d\to 2p`$ transition) XES probe the Cu 3d4s, Ni 3d4s and S 3s3d partial DOS in the valence band and, to a first approximation, can be directly compared with calculated band structures. The comparison of the calculated and measured partial DOS is shown in Figs. 3 and 4, where the Cu L<sub>3</sub>, Ni L<sub>3</sub> and S L<sub>2,3</sub> XES are converted to the binding energy scale using our XPS measurements of the corresponding core levels \[E<sub>b.e.</sub>(Cu 2p) = 932.39 eV, E<sub>b.e.</sub>(Ni 2p) = 852.98 eV and E<sub>b.e.</sub>(S 2p) = 161.57 eV\]. We see that the measured Cu L<sub>3</sub>, Ni L<sub>3</sub> and S L<sub>2,3</sub> XES peaks are very close to the Cu 3d, Ni 3d and S 3s partial DOS in CuRh<sub>2</sub>S<sub>4</sub> (Fig. 3) and NiRh<sub>2</sub>S<sub>4</sub> (Fig. 4). In each case, the peaks in the calculated DOS lie at somewhat lower binding energy: 1 eV for S 3s and Cu 3d, but only a few tenths of an eV for Ni 3d. The difference reflects a self-energy correction that lies beyond our band theoretical methods. In addition, we calculated the emission intensities of the Cu/Ni L<sub>3</sub>, Rh N<sub>3</sub> ($`4d\to 4p`$ transition) and S L<sub>2,3</sub> XES in both compounds as described in section III. The calculated spectra are presented in the same figures (Figs. 3 and 4) and show close correspondence with the experimental spectra as well as with the corresponding partial DOS. From the close agreement, we conclude that the influence of core holes in the measured XES spectra is minor and the experimental spectra can be understood directly from the calculated spectra and partial DOS.
### B Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> (x = 0, 0.1, 0.3, 0.5, 1.0)
We measured XPS valence band (VB) spectra for the Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> (x = 0, 0.1, 0.3, 0.5, 1.0) system (see Fig. 5) and found a four peak structure: ($`a`$, $`c`$, $`d`$, $`e`$) for CuRh<sub>2</sub>S<sub>4</sub> and ($`a`$, $`b`$, $`d`$, $`e`$) for NiRh<sub>2</sub>S<sub>4</sub>, each of which is very close to the corresponding calculated total DOS (Figs. 1 and 2). Based on our calculations, we can conclude that the $`a`$ peak at 1 eV binding energy is formed by Rh 4d-S 3p states for CuRh<sub>2</sub>S<sub>4</sub> and Ni 3d-Rh 4d–S 3p states for NiRh<sub>2</sub>S<sub>4</sub>. The next peak ($`b`$ for NiRh<sub>2</sub>S<sub>4</sub> at 2 eV binding energy and $`c`$ for CuRh<sub>2</sub>S<sub>4</sub> at 3 eV binding energy) can be attributed mainly to Ni (respectively Cu) 3d states. The $`d`$ peak (5.5 eV) relates to Rh 4d-S 3p states and the $`e`$ peak is associated with atomic-like S 3s states. In the solid solution Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> the positions of the peaks do not change as the concentration varies, but only the ratio of intensities of $`b`$ (Ni 3d) and $`c`$ (Cu 3d) peaks vary according to the Cu/Ni concentration.
This behavior suggests that the electronic structure of the solid solution Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> can be deduced by analyzing the endpoints (x = 0.0 and 1.0), CuRh<sub>2</sub>S<sub>4</sub> and NiRh<sub>2</sub>S<sub>4</sub>. This conclusion results not from a rigid band picture (which does not hold) but from the opposite “split-band” behavior in which both Cu and Ni retain their own DOS peaks which then vary in strength roughly as the concentration. In Fig. 6 we have compared XPS VB measurements with Cu L<sub>3</sub>, Ni L<sub>3</sub> and S L<sub>2,3</sub> XES spectra for Cu<sub>0.5</sub>Ni<sub>0.5</sub>Rh<sub>2</sub>S<sub>4</sub>. We see that positions of the peaks in the Ni L<sub>3</sub>, Cu L<sub>3</sub> and S L<sub>2,3</sub> XES spectra correspond exactly to peaks $`b`$, $`c`$ and $`e`$ of the XPS VB measurements, which is consistent with our interpretation of the XPS data as indicating a solid solution of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> if the split-band behavior holds.
In Fig. 7 we have compared the calculated total DOS of CuRh<sub>2</sub>S<sub>4</sub>, NiRh<sub>2</sub>S<sub>4</sub>, and Cu<sub>0.5</sub>Ni<sub>0.5</sub>Rh<sub>2</sub>S<sub>4</sub>. With respect to the top of the highest occupied bands, the Fermi energy is highest in the bands of CuRh<sub>2</sub>S<sub>4</sub> to accommodate the two additional electrons from the Cu atoms. The behavior of the DOS for the three systems shown are quite different, particularly for Cu and Ni ions, in an energy range between the Fermi levels for NiRh<sub>2</sub>S<sub>4</sub> and for CuRh<sub>2</sub>S<sub>4</sub>, invalidating a rigid-band interpretation of the differences and similarities in these compounds. This is not surprising given the different character of the Ni- and Cu-derived states in this energy region. As mentioned above, whereas states at the Fermi level in NiRh<sub>2</sub>S<sub>4</sub> have a strong Ni 3d character, Cu 3d states lie entirely below the Fermi level in CuRh<sub>2</sub>S<sub>4</sub>. The character of states at the Fermi level in CuRh<sub>2</sub>S<sub>4</sub> are primarily Rh d-like states hybridized with S 3p states.
According to Ref. , the superconducting transition temperature of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> decreases with increasing Ni concentration from 4.7 K (x = 0.0) to 3.7 K (x = 0.02) and then to 2.8 K (x = 0.05). While we attribute this to a general decrease in DOS at the Fermi level as the Ni concentration is increased (see Sec. V), this trend does not require a simple rigid-band interpretation. In the alloy, the DOS within a few tenths of an eV of E<sub>F</sub> probably cannot be described by either the rigid band or split-band models.
### C CuRh<sub>2</sub>Se<sub>4</sub>
Figure 8 shows the calculated total and partial DOS for CuRh<sub>2</sub>Se<sub>4</sub>. While it is similar to that of CuRh<sub>2</sub>S<sub>4</sub> (Fig. 1), we can point out two differences: ($`i`$) the Se 4p DOS is redistributed somewhat compared to S 3p and has a higher contribution in the vicinity of the Fermi level, and ($`ii`$) the Se d-like character is even smaller than the S d-like character in CuRh<sub>2</sub>S<sub>4</sub>. The total DOS at the Fermi level is 12.05 states/eV-cell, which is higher than in CuRh<sub>2</sub>S<sub>4</sub>, in qualitative agreement with electronic specific-heat measurements.
In Fig. 9 the experimental Cu L<sub>3</sub> and Se M<sub>2,3</sub> ($`4s\to 3p`$ transition) XES measurements are compared to the Cu 3d and Se 4s partial DOS and calculated spectra. The agreement of the peak positions between experiment and theory is quite close. Again we note that the calculated XES spectra exactly follow the partial DOS, as in the case of CuRh<sub>2</sub>S<sub>4</sub> and NiRh<sub>2</sub>S<sub>4</sub> (Figs. 3 and 4). The XPS valence band data are compared with the Cu L<sub>3</sub> and Se M<sub>2,3</sub> XES spectra in Fig. 10. The location of the Cu 3d–Se 4s-derived bands is reproduced well (comparably to that in the sulfide) by the calculations. There are some differences in the ratios of the XPS peaks for CuRh<sub>2</sub>Se<sub>4</sub> and CuRh<sub>2</sub>S<sub>4</sub>: the relative intensity of the Cu 3d peak located at around 2.5 eV is smaller in CuRh<sub>2</sub>Se<sub>4</sub> than in CuRh<sub>2</sub>S<sub>4</sub>. This may be due to the 2.5 times larger photo-ionization cross-section of Se 4p states as compared to that of S 3p states.
## V Other Data
In a metal the Drude plasma energy tensor $`\hbar \mathrm{\Omega }_{p,ij}`$ contains a good deal of information about low temperature transport and low frequency optical properties. $`\mathrm{\Omega }_{p,ij}`$ is given by
$`\mathrm{\Omega }_{p,ij}^2`$ $`=`$ $`4\pi e^2{\displaystyle \frac{1}{V}}{\displaystyle \sum_k}v_{k,i}v_{k,j}\delta (\epsilon _k-\epsilon _F)`$ (1)
$`=`$ $`4\pi e^2\langle v_iv_j\rangle N(\epsilon _F)`$ (2)
where $`v_{k,i}`$ is the i-th cartesian coordinate of the electron velocity, $`V`$ is the normalization volume and $`\langle \cdots \rangle `$ indicates a Fermi surface average. The optical conductivity (specializing now to cubic metals) contains a $`\delta `$-function contribution at zero frequency proportional to $`\mathrm{\Omega }_p^2`$ (which is broadened by scattering processes), and the static conductivity in Bloch-Boltzmann theory becomes
$$\rho (T)=\rho _0+\frac{4\pi }{\mathrm{\Omega }_p^2\tau }$$
(3)
($`\rho _0`$ is the residual resistivity at T = 0) as long as the mean free path $`l=v_F\tau `$ is large enough that scattering processes are independent. When phonon scattering dominates, which is usually the case above 25$`\%`$ of the Debye temperature, the relaxation time $`\tau `$ becomes approximately
$$\frac{\hbar }{\tau _{ep}}=2\pi \lambda _{tr}k_BT$$
(4)
where $`\lambda _{tr}`$ is a “transport” electron-phonon (EP) coupling strength that is usually close to the EP coupling constant $`\lambda `$ that governs superconducting properties. Then in the high T regime we obtain the estimate
$$\lambda \approx \lambda _{tr}=\frac{\hbar \mathrm{\Omega }_p^2}{8\pi ^2k_B}\frac{d\rho }{dT}.$$
(5)
Hagino et al. have presented resistivity data on sintered samples of CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub>. Although both are clearly metallic (d$`\rho `$/dT $`>`$ 0), the magnitudes of $`\rho `$ differ by a factor of 20 over most of the range 50 K $`\le `$ T $`\le `$ 300 K. CuRh<sub>2</sub>Se<sub>4</sub> has $`\rho _0=2\mu \mathrm{\Omega }cm`$, indicating excellent metallic behavior in spite of the intergrain scattering that is present in the sintered samples. The CuRh<sub>2</sub>S<sub>4</sub> sample had $`\rho _0=500\mu \mathrm{\Omega }cm`$ (perhaps from intergrain scattering connected to differences in surface chemistry of the sulfide and the selenide), which makes Eq. (3) inapplicable. Moreover, both materials (especially CuRh<sub>2</sub>S<sub>4</sub>) show saturation behavior, which makes the Bloch-Boltzmann analysis less definitive. However, we can apply this formalism to CuRh<sub>2</sub>Se<sub>4</sub>: using $`d\rho /dT\approx 2\mu \mathrm{\Omega }cm/K`$ we obtain the estimate $`\lambda _{tr}=1.8`$. This value is almost a factor of three larger than $`\lambda =0.64`$ found by Hagino et al. to be sufficient to account for T<sub>c</sub> = 3.5 K. We expect that the magnitude of $`\rho `$ measured on the sintered sample of CuRh<sub>2</sub>Se<sub>4</sub>, although small, is still not representative of the bulk.
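As a cross-check, Eq. (5) can be evaluated in practical units. The sketch below is ours and not tied to any code used in this work; in particular, the value of $`\hbar \mathrm{\Omega }_p`$ is an assumption, since the calculated plasma energy of CuRh<sub>2</sub>Se<sub>4</sub> is not quoted in this section ($`\hbar \mathrm{\Omega }_p\approx 1.9`$ eV is simply the value that reproduces $`\lambda _{tr}=1.8`$ for $`d\rho /dT\approx 2\mu \mathrm{\Omega }cm/K`$).

```python
# Minimal sketch of Eq. (5): lambda_tr = (hbar * Omega_p^2 / (8 pi^2 k_B)) * drho/dT,
# evaluated with hbar*Omega_p in eV and drho/dT in micro-ohm cm / K.
import math

def lambda_tr(hbar_omega_p_eV, drho_dT_muOhmcm_per_K):
    eV = 1.602176634e-12       # erg
    hbar = 1.054571817e-27     # erg s
    k_B = 1.380649e-16         # erg / K
    muOhmcm = 1.112650056e-18  # 1 micro-ohm cm expressed in Gaussian units (s)
    prefactor = eV**2 / (8 * math.pi**2 * k_B * hbar)  # collects the unit conversions
    return prefactor * hbar_omega_p_eV**2 * drho_dT_muOhmcm_per_K * muOhmcm

# hbar*Omega_p = 1.9 eV is an ASSUMED input, not a value quoted in this section
print(lambda_tr(1.9, 2.0))  # ~1.8, matching the estimate in the text
```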
From their measurements, Hagino et al. inferred almost indistinguishable values of the linear specific heat coefficient $`\gamma `$, the density of states N(E<sub>F</sub>), and electron-phonon coupling strengths $`\lambda `$ for CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub>. Our calculations lead to a 20$`\%`$ higher value of N(E<sub>F</sub>) in the selenide which is at odds with their values. The 1.2 K lower value of T<sub>c</sub> in the selenide is not very definitive, since this difference could be related to softer phonon frequencies. The nearly factor of two increase in the susceptibility in the selenide (and not in the sulfide) below 300 K remains unexplained. Data on single crystal samples may be necessary to resolve these discrepancies.
## VI Conclusions
The main results of the present study of the electronic structure of Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub> can be summarized as follows. The electronic states near E<sub>F</sub> consist mainly of Rh 4d and S(Se) 3p(4p) orbitals for CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub>, and primarily of Ni 3d with some Rh 4d and S 3p orbitals in NiRh<sub>2</sub>S<sub>4</sub>. Thus, we find that the character of states at the Fermi level changes in a non-rigid-band way in Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub>, and while there is a general trend of a decreasing DOS at the Fermi level as a function of Ni concentration, we have found that the superconducting trends in Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> cannot be explained quantitatively by the calculated DOS of the Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> system. Moreover, such an interpretation would be at odds with the partial DOS, which shows the different character of states near E<sub>F</sub>. The measured X-ray data suggest interpreting Cu<sub>1-x</sub>Ni<sub>x</sub>Rh<sub>2</sub>S<sub>4</sub> as a solid solution more in line with a “split-band” interpretation.
Calculated X-ray emission spectra are found to be in excellent agreement with experimental data, with peak positions differing by only 0.3–1.0 eV. This agreement implies that core hole effects are negligible. In addition to the total DOS, plasma energies have been calculated and used to offer additional theoretical input for interpreting the differences between CuRh<sub>2</sub>S<sub>4</sub> and CuRh<sub>2</sub>Se<sub>4</sub>. Unfortunately, transport data appear to be too strongly affected by intergrain scattering to allow a quantitative analysis.
To summarize, the very good agreement between the measured and calculated electronic spectra indicates a lack of any strong correlation effects. The decrease in superconducting T<sub>c</sub> with Ni concentration is likely due to a decrease in N(E<sub>F</sub>). Beyond these general conclusions, however, several questions remain. The linear specific heat coefficients are not accounted for quantitatively; neither are the intermediate temperature resistivities, but these must be measured on single crystals to obtain a good experimental picture. Finally, the temperature dependence of the susceptibility of CuRh<sub>2</sub>Se<sub>4</sub> remains unexplained.
## VII Acknowledgments
This work was supported by the Russian Science Foundation for Fundamental Research (Projects 96-15-96598 and 98-02-04129), a NATO Linkage Grant (HTECH.LG 971222), INTAS-RFBR (95-0565), NSF Grants (DMR-9017997, DMR-9420425, and DMR-9802076), and the DOE EPSCOR and Louisiana Education Quality Special Fund (DOE-LEQSF (1993-95-03)). Work at the Advanced Light Source at Lawrence Berkeley National Laboratory was supported by the U.S. Department of Energy under contract (DE-AC03-76SF00098).
# Persistence in the Zero-Temperature Dynamics of the Diluted Ising Ferromagnet in Two Dimensions
S. Jain,
School of Mathematics and Computing,
University of Derby,
Kedleston Road,
Derby DE22 1GB,
U.K.
E-mail: S.Jain@derby.ac.uk
Classification Numbers:
05.20-y, 05.50+q, 05.70.Ln, 64.60.Cn, 75.10.Hk, 75.40.Mg
Published in Physical Review E60, R2445 (1999)
Cond-mat/9906272
ABSTRACT
The non-equilibrium dynamics of the strongly diluted random-bond Ising model in two-dimensions $`(2d)`$ is investigated numerically. The persistence probability, $`P(t)`$, of spins which do not flip by time $`t`$ is found to decay to a non-zero, dilution-dependent, value $`P(\infty )`$. We find that $`p(t)=P(t)-P(\infty )`$ decays exponentially to zero at large times. Furthermore, the fraction of spins which never flip is a monotonically increasing function over the range of bond-dilution considered. Our findings, which are consistent with a recent result of Newman and Stein, suggest that persistence in diluted and pure systems falls into different classes. Furthermore, its behaviour would also appear to depend crucially on the strength of the dilution present.
In the non-equilibrium dynamics of spin systems at zero temperature we are interested in the fraction of spins, $`P(t)`$, that persist in the same state up to some later time $`t`$. For homogeneous ferromagnetic Ising models in $`d`$ dimensions, $`P(t)`$ has been found to decay algebraically \[1-4\]
$$P(t)\sim t^{-\theta (d)},$$
$`(1)`$
for $`d<4`$, where $`\theta (d)`$ is the new non-trivial persistence exponent.
The presence of a non-vanishing $`P(t)`$ as $`t\to \infty `$ has been reported in computer simulations of both the Ising model in higher dimensions ($`d>4`$) and the $`q`$-state Potts model in $`2d`$ for $`q>4`$; this feature is sometimes referred to as ‘blocking’. Obviously, if $`P(\infty )>0`$, we can reformulate the problem by restricting our attention only to those spins that eventually flip. Hence, we can consider the behaviour of
$$p(t)=P(t)-P(\infty ).$$
$`(2)`$
Although the numerical simulations of the $`q`$-state Potts model mentioned above seem to indicate that $`p(t)`$ also decays algebraically, the evidence is by no means conclusive.
By considering the dynamics of the local order parameter the persistence problem can be generalised to non-zero temperatures \[6-9\].
It is only recently that attention has turned to the persistence problem in systems containing disorder. Numerical simulations of the zero-temperature dynamics of the weakly-diluted Ising model in $`2d`$ also reported that $`P(\infty )>0`$. In fact, the study in is consistent with the presence of three distinct regimes: an initial short time regime where the behaviour is pure-like; an intermediate regime where the persistence probability decays logarithmically; and a final long time regime where the system ‘freezes’ and $`P(t)`$ is effectively constant.
Very recently, Newman and Stein have argued that the ‘blocking’ of spins in systems with continuous disorder is associated with the fact that ‘every spin flips only finitely many times’ . As a consequence, in some simple $`1d`$ models $`p(t)`$ was found to decay exponentially rather than algebraically for large times, namely
$$p(t)\sim e^{-kt},$$
$`(3)`$
where $`k>0`$. In contrast, persistence in the weakly-diluted Ising model appears to decay logarithmically in the intermediate regime. (Note that examines the behaviour of $`P(t)`$ and not $`p(t)`$).
Clearly, it is of immense interest to establish whether the presence of ‘blocking’ in a system necessarily implies exponential decay of the persistence probability. Howard has found evidence for exponential decay in certain non-disordered models with ‘blocking’ ($`2d`$ hexagonal lattices and Bethe lattices with $`z=3`$ are discussed in ).
To clarify and further investigate the situation, in this Rapid Communication we present the results of computer simulations of an Ising model containing strong bond dilution. Here we restrict our attention to zero-temperature.
The Hamiltonian of the model we work with is given by
$$H=-\sum_{<ij>}J_{ij}S_iS_j$$
$`(4)`$
where $`S_i=\pm 1`$ are Ising spins situated on every site of a square $`L\times L(=N)`$ lattice with periodic boundary conditions and the summation runs over all nearest-neighbour pairs only. The quenched ferromagnetic exchange interactions are selected from a binary distribution given by
$$P(J_{ij})=(1-p)\delta (J_{ij})+p\delta (J_{ij}-1)$$
$`(5)`$
where $`p`$ is the concentration of bonds.
We obtained data for $`L=500`$ and $`750`$ at zero temperature for a broad range of bond-concentrations ($`0\le p\le 0.5`$) on a suite of Silicon Graphics workstations and for $`L=1000`$ on a SGI Origin 2000; as the data for the different lattice sizes studied are practically indistinguishable, here we simply present the results for the largest lattice simulated.
We begin each run with a random starting configuration of the spins and then update the lattice by first calculating the energy change that would result from flipping a spin. The rule we use is: always flip if the energy change is negative, never flip if the energy change is positive and flip at random if the energy change is zero.
The number, $`n(t)`$, of spins which have never flipped until time $`t`$ is then counted. As we are working with strongly diluted lattices, it is necessary to monitor the value of $`n(t)`$ after practically each Monte Carlo step.
The persistence probability is given by
$$P(t)=[\langle n(t)\rangle ]/N$$
$`(6)`$
where $`\langle \cdots \rangle `$ indicates an average over different initial conditions and $`[\cdots ]`$ denotes an average over samples, i.e., over the bond-dilution. For the simulations considered in this work we averaged over at least 100 different initial conditions and samples for each run.
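A minimal sketch of this procedure (illustrative only, not the production code used for the runs described above) is the following: binary bonds with concentration $`p`$ on a periodic $`L\times L`$ lattice, the zero-temperature flip rule, and the count $`n(t)`$ of spins that have never flipped. The lattice size, the time span, and the use of a single sample (no averaging over initial conditions or disorder) are simplifications.

```python
# Zero-temperature dynamics of the bond-diluted 2d Ising ferromagnet (sketch).
import numpy as np

rng = np.random.default_rng(1)
L, p, t_max = 64, 0.3, 200
spins = rng.choice([-1, 1], size=(L, L))
# bonds[0]: bond to the right neighbour; bonds[1]: bond to the lower neighbour
bonds = (rng.random((2, L, L)) < p).astype(int)
never_flipped = np.ones((L, L), dtype=bool)

def local_field(i, j):
    # sum of J_ij * S_j over the four neighbours (periodic boundaries)
    return (bonds[0, i, j] * spins[i, (j + 1) % L]
            + bonds[0, i, (j - 1) % L] * spins[i, (j - 1) % L]
            + bonds[1, i, j] * spins[(i + 1) % L, j]
            + bonds[1, (i - 1) % L, j] * spins[(i - 1) % L, j])

for t in range(1, t_max + 1):
    for _ in range(L * L):                # one Monte Carlo step = N attempted flips
        i, j = rng.integers(0, L, size=2)
        dE = 2 * spins[i, j] * local_field(i, j)
        # always flip if dE < 0, never if dE > 0, at random if dE == 0
        if dE < 0 or (dE == 0 and rng.random() < 0.5):
            spins[i, j] = -spins[i, j]
            never_flipped[i, j] = False
    if t % 50 == 0:
        print(t, never_flipped.mean())    # P(t) for this single sample
```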
We now discuss our results. To examine the decay of the persistence probability, in Fig. 1 we plot $`\mathrm{ln}P(t)`$ versus $`\mathrm{ln}t`$ for a wide range of bond concentrations, $`0.1\le p\le 0.5`$, for a lattice of size $`L=1000`$. The decay of $`P(t)`$ appears to be non-algebraic before ‘freezing’ occurs. We see that, effectively, $`P(t)=P(\infty )`$ for $`t>t^{*}(p)`$, where the value $`t^{*}(p)`$ depends on the strength of the dilution. Furthermore, the non-zero value of $`P(\infty )`$ also depends on $`p`$, with the fraction of non-flipping spins increasing monotonically with the bond concentration. The increase in $`P(\infty )`$ with $`p`$ can be seen more clearly in Fig. 2 where we have plotted some additional data at values of the exchange interaction not shown in Fig. 1. The numerical values of $`P(\infty )`$ for the different bond concentrations simulated are also displayed in Table 1.
Obviously, when $`p=0`$ all spins eventually flip as the energy change in flipping is always zero. For a value of $`p\ne 0`$, there will be regions of the lattice containing finite clusters where it will cost energy to flip spins. For example, an isolated bond connecting two up spins is just such a stable cluster. The occurrence of these clusters increases with the bond concentration, and hence so does the fraction of spins which never flip. This increases smoothly to $`p=0.5`$, the bond percolation threshold, where it appears to level off. That is, the maximum value of $`P(\infty )\approx 0.46`$. Clearly, $`P(\infty )`$ must decrease eventually for higher values of $`p`$ as we know that every spin flips infinitely many times for the pure model, $`p=1`$.
We now consider the non-algebraic decay of $`P(t)`$ to $`P(\infty )`$. As discussed earlier, it is more convenient to work with $`p(t)`$ from Eqn. (2). In Fig. 3 we replot the data displayed in Fig. 1 as $`\mathrm{ln}p(t)`$ against $`t`$. The straight lines are linear fits to Eqn. (3) after discarding data for short times. It is evident from Fig. 3 that $`p(t)`$ indeed decays exponentially at large times. Hence, we confirm that for the strongly diluted Ising model in $`2d`$ persistence decays exponentially as predicted by Newman and Stein. This is in marked contrast to the behaviour for the pure \[1-3\] and the weakly diluted models.
To conclude, we have presented new data for the zero-temperature dynamics of the strongly diluted random-bond $`2d`$ Ising ferromagnet. This system exhibits ‘blocking’ and we find evidence that $`p(t)`$ decreases exponentially for large times. The fraction of spins which never flip increases monotonically from zero with increasing bond concentration. Our results support the suggestion that the decay of the persistence probability can be non-algebraic for certain classes of models. Indeed, for the diluted $`2d`$ Ising model the behaviour of $`p(t)`$ would appear to depend crucially on the strength of the dilution.
Acknowledgement
I am grateful to C.M. Newman and D.L. Stein for commenting on the draft version of this paper. I would like to acknowledge Matthew Birkin for both technical assistance and maintaining the Silicon Graphics workstations. The CPU time on the SGI Origin 2000 at the University of Manchester was made available by the Engineering and Physical Sciences Research Council (EPSRC), Great Britain.
FIGURE CAPTIONS
Fig. 1
Log-log plot of $`P(t)`$ versus $`t`$ for the bond-diluted $`2d`$ Ising model for a range of bond concentrations, $`p`$; the size of the lattice is $`1000\times 1000`$.
Fig. 2
A plot of the fraction of spins which NEVER flip ($`P(\infty )`$) against the bond concentration $`p`$.
Fig. 3
Plot of $`\mathrm{ln}p(t)`$ against $`t`$ for different bond concentrations, $`p`$. The straight lines are linear fits to the data after discarding the initial short time behaviour.
TABLES
Table 1
| $`p`$ | $`P(\infty )`$ |
| --- | --- |
| 0 | 0 |
| 0.025 | 0.0708(1) |
| 0.050 | 0.1340(1) |
| 0.075 | 0.1900(1) |
| 0.100 | 0.2390(1) |
| 0.125 | 0.2813(1) |
| 0.150 | 0.3173(1) |
| 0.175 | 0.3479(2) |
| 0.200 | 0.3732(2) |
| 0.250 | 0.4097(1) |
| 0.300 | 0.4331(2) |
| 0.350 | 0.4453(3) |
| 0.400 | 0.4526(3) |
| 0.450 | 0.4559(2) |
| 0.500 | 0.4576(1) |
The fraction of spins, $`P(\infty )`$, which never flip at various values of the bond concentration, $`p`$.
REFERENCES
B. Derrida, A. J. Bray and C. Godreche, J. Phys. A 27, L357 (1994).
A.J. Bray, B. Derrida and C. Godreche, Europhys. Lett. 27, 177 (1994).
D. Stauffer, J. Phys. A 27, 5029 (1994).
B. Derrida, V. Hakim and V. Pasquier, Phys. Rev. Lett. 75, 751 (1995); J. Stat. Phys. 85, 763 (1996).
B. Derrida, P.M.C. de Oliveira and D. Stauffer, Physica 224A, 604 (1996).
S. N. Majumdar, A. J. Bray, S. J. Cornell, and C. Sire, Phys. Rev. Lett. 77, 3704 (1996).
K. Oerding, S. J. Cornell, and A. J. Bray, Phys. Rev. E56, R25 (1997).
B. Zheng, Int. J. Mod. Phys. B12, 1419 (1998).
J-M. Drouffe and C. Godreche, e-print cond-mat/9808153.
S. Jain, Phys. Rev. E59, R2496 (1999).
C.M. Newman and D.L. Stein, Phys. Rev. Lett. 82, 3944 (1999).
C.D. Howard, preprint (1999).
# Breakup Density in Spectator Fragmentation

Figure 1 caption: Correlation functions constructed for p-p, p-<sup>4</sup>He, d-<sup>4</sup>He, and t-<sup>4</sup>He coincidences (from top to bottom) for spectator decays following <sup>197</sup>Au + <sup>197</sup>Au collisions at 1000 MeV per nucleon. The data are sorted according to $`Z_{bound}`$ into four bins with limits given in the top panel of each column. The lines represent the results of the calculations used to extract source radii (see text). Note that the scales of the abscissa are different for the top row (p-p correlations) and for the remaining three rows of panels.
S. Fritz,<sup>(1)</sup> C. Schwarz,<sup>(1)</sup> R. Bassini,<sup>(2)</sup> M. Begemann-Blaich,<sup>(1)</sup> S.J. Gaff-Ejakov,<sup>(3)</sup> D. Gourio,<sup>(1)</sup> C. Groß,<sup>(1)</sup> G. Immé,<sup>(4)</sup> I. Iori,<sup>(2)</sup> U. Kleinevoß,<sup>(1)</sup>\[a\] G.J. Kunde,<sup>(3)</sup>\[b\] W.D. Kunze,<sup>(1)</sup> U. Lynen,<sup>(1)</sup> V. Maddalena,<sup>(4)</sup> M. Mahi,<sup>(1)</sup> T. Möhlenkamp,<sup>(5)</sup> A. Moroni,<sup>(2)</sup> W.F.J. Müller,<sup>(1)</sup> C. Nociforo,<sup>(4)</sup> B. Ocker,<sup>(6)</sup> T. Odeh,<sup>(1)</sup>\[a\] F. Petruzzelli,<sup>(2)</sup> J. Pochodzalla,<sup>(1)</sup>\[c\] G. Raciti,<sup>(4)</sup> G. Riccobene,<sup>(4)</sup> F.P. Romano,<sup>(4)</sup> A. Saija,<sup>(4)</sup> M. Schnittker,<sup>(1)</sup> A. Schüttauf,<sup>(6)</sup>\[d\] W. Seidel,<sup>(5)</sup> V. Serfling,<sup>(1)</sup> C. Sfienti,<sup>(4)</sup> W. Trautmann,<sup>(1)</sup> A. Trzcinski,<sup>(7)</sup> G. Verde,<sup>(4)</sup> A. Wörner,<sup>(1)</sup> Hongfei Xi,<sup>(1)</sup> and B. Zwieglinski<sup>(7)</sup>
<sup>(1)</sup>Gesellschaft für Schwerionenforschung, D-64291 Darmstadt, Germany
<sup>(2)</sup>Istituto di Scienze Fisiche, Università degli Studi di Milano and I.N.F.N., I-20133 Milano, Italy
<sup>(3)</sup>Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824, USA
<sup>(4)</sup>Dipartimento di Fisica dell’ Università and I.N.F.N., I-95129 Catania, Italy
<sup>(5)</sup>Forschungszentrum Rossendorf, D-01314 Dresden, Germany
<sup>(6)</sup>Institut für Kernphysik, Universität Frankfurt, D-60486 Frankfurt, Germany
<sup>(7)</sup>Soltan Institute for Nuclear Studies, 00-681 Warsaw, Hoza 69, Poland
ABSTRACT
Proton-proton correlations and correlations of protons, deuterons and tritons with $`\alpha `$ particles from spectator decays following <sup>197</sup>Au + <sup>197</sup>Au collisions at 1000 MeV per nucleon have been measured with two highly efficient detector hodoscopes. The constructed correlation functions, interpreted within the approximation of a simultaneous volume decay, indicate a moderate expansion and low breakup densities, similar to assumptions made in statistical multifragmentation models.
PACS numbers: 25.70.Pq, 21.65.+f, 25.70.Mn, 25.75.Gz
Expansion is a rather basic conceptual feature of the multifragmentation of heavy nuclei. A volume of about three to eight times that occupied at saturation density is assumed in the statistical models aiming at a phase space description of the multi-fragment breakups . Expansion also provides the link to the nuclear liquid-gas phase transition; only one third of the saturation value is expected for the critical density of nuclear matter . The experimental confirmation of expansion to low breakup densities is therefore of the highest significance for the understanding and interpretation of the multifragmentation phenomenon .
In central collisions of heavy nuclei, rapid expansion is evident from the observation of radial collective flow . For the fragment decay of spectators following collisions at relativistic energies, the case studied in this work, significant radial flow has not been observed . Here evidence for expansion has been obtained, indirectly, from model comparisons. Models that assume sequential emission from nuclear systems at saturation density underpredict the fragment multiplicities while those assuming expanded breakup volumes yield satisfactory descriptions of the populated partition space . The disappearance of the Coulomb peaks in the kinetic-energy spectra of emitted light particles and fragments, associated with increasing fragment production, provides additional evidence consistent with volume emission from expanded systems . There are also other dynamical and statistical observables that have been interpreted as evidence for expansion in recent papers .
Interferometry-type methods permit experimental determinations of the breakup volume or, more precisely, of the space-time locations of the last collisions of the emitted products . In the nuclear regime, correlation functions for light charged particles, predominantly proton-proton correlations, have been widely explored for that purpose . Depending on the assumed reaction scenario and energy regime, both time scales and breakup radii have been deduced. The time scales for the decay of highly excited spectator nuclei produced at relativistic bombarding energies should be rather short , and we may expect that the correlation functions are mainly sensitive to the spatial extension of the source. More importantly, if we assume a rapid volume breakup of the system, the quantity of interest will be the local density, i.e. the mutual proximity of the nascent fragments and light particles. These densities are obtained in the limit of assuming a zero-lifetime in the source analysis.
In this Letter, we present the results of correlation measurements for spectator decays following collisions of <sup>197</sup>Au + <sup>197</sup>Au at a bombarding energy of 1000 MeV per nucleon. Besides proton-proton coincidences, also coincidences of protons, deuterons, and tritons with $`\alpha `$ particles were measured and correlation functions were constructed. For their quantitative interpretation, it is assumed that they are dominated by the effect of final-state interactions. The results are found to be consistent with low breakup densities with values close to those assumed in the statistical multifragmentation models.
Beams of <sup>197</sup>Au with incident energy 1000 MeV per nucleon were provided by the heavy-ion synchrotron SIS and directed onto targets of 25-mg/cm<sup>2</sup> areal thickness. Two multi-detector hodoscopes, consisting of a total of 160 Si-CsI(Tl) telescopes in closely-packed geometry, were placed on opposite sides with respect to the beam axis. The angular range $`\theta _{lab}`$ from 122° to 156° was chosen with the aim of selectively detecting the products of the target-spectator decay. Each telescope consisted of a 300-$`\mu `$m Si detector with 30 x 30 mm<sup>2</sup> (96 detectors) or 25 x 25 mm<sup>2</sup> (64 detectors) active area, followed by a 6-cm long CsI(Tl) scintillator with photodiode readout. The distance to the target was about 60 cm.
The products of the projectile decay were measured with the time-of-flight wall of the ALADIN spectrometer and the quantity $`Z_{bound}`$ was determined event-by-event. $`Z_{bound}`$ is defined as the sum of the atomic numbers $`Z_i`$ of all projectile fragments with $`Z_i\ge 2`$. It reflects the variation of the size of the primary spectator nuclei and is inversely correlated with its excitation energy. Because of the symmetry of the collision system, the mean values of $`Z_{bound}`$ for the target and the projectile spectators within the same event class have been assumed to be identical.
Examples of correlation functions constructed for the four types of coincidences p-p, p-<sup>4</sup>He, d-<sup>4</sup>He, and t-<sup>4</sup>He are shown in Fig. 1, sorted into four bins of $`Z_{bound}`$ as indicated. The uncorrelated yields were obtained with the technique of event-mixing and normalized in the range of large relative momenta $`q\ge 70`$ MeV/c (p-p) and $`q\ge 150`$ MeV/c (p-$`\alpha `$, d-$`\alpha `$, and t-$`\alpha `$). In the off-line analysis, thresholds were set at $`E_{lab}=20`$ MeV for all particles p, d, t, and $`\alpha `$ and all hodoscope detectors. The shapes of the correlation functions are sensitive to the threshold. For p-p, e.g., the suppression at small $`q`$ tends to disappear if the threshold is set at $`E_{lab}=10`$ MeV or lower, presumably as a result of increasing contributions from evaporation and sequential decay. These long-lifetime components are suppressed if higher thresholds are chosen. The pairs of particles were also requested to be detected in the same hodoscope in order to avoid correlation effects that appear at large $`q`$ and are believed to be due to a collective (sidewards) motion of the proton-emitting source.
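The event-mixing construction can be sketched schematically as follows; here `events` is a hypothetical list of per-event lists of momentum 3-vectors for identical particles (the p-p case), and the relative momentum is taken simply as half the momentum difference, with the boost of each pair to its rest frame omitted.

```python
# Schematic event-mixing correlation function C(q) (sketch, not analysis code).
import itertools
import numpy as np

def rel_momentum(p1, p2):
    # q = |p1 - p2| / 2; the pair rest-frame boost is omitted in this sketch
    return 0.5 * np.linalg.norm(np.asarray(p1) - np.asarray(p2))

def correlation_function(events, bins, norm_range=(70.0, 90.0)):
    # correlated pairs: both particles from the same event
    same = [rel_momentum(a, b) for ev in events
            for a, b in itertools.combinations(ev, 2)]
    # uncorrelated yield: pairs built from different events ("event mixing")
    mixed = [rel_momentum(a, b) for ev1, ev2 in itertools.combinations(events, 2)
             for a in ev1 for b in ev2]
    y_same, _ = np.histogram(same, bins=bins)
    y_mixed, _ = np.histogram(mixed, bins=bins)
    q = 0.5 * (bins[1:] + bins[:-1])
    sel = (q >= norm_range[0]) & (q <= norm_range[1])
    scale = y_same[sel].sum() / max(y_mixed[sel].sum(), 1)  # normalize at large q
    with np.errstate(divide="ignore", invalid="ignore"):
        return q, y_same / (scale * y_mixed)
```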
The p-p correlation functions are characterized by a depression at small relative momentum and by a weakly pronounced peak near relative momentum $`q`$ = 20 MeV/c, caused by the S-wave nuclear interaction and used for the quantitative interpretation. The three correlation functions of the hydrogen isotopes with $`\alpha `$ particles are dominated by the resonances corresponding to the ground state of <sup>5</sup>Li and by the 2.19-MeV and 4.63-MeV excited states of <sup>6</sup>Li and <sup>7</sup>Li, respectively. The observed widths of the <sup>6</sup>Li and <sup>7</sup>Li peaks in the d-$`\alpha `$ and t-$`\alpha `$ correlation functions represent the experimental resolution which is mainly determined by the angular resolution following from the geometry of the detector hodoscopes. A striking feature of the data is the overall stability of the peak heights as a function of $`Z_{bound}`$ which indicates source extensions that do not change dramatically with impact parameter. The largest deviations of the peak height and of the overall shape of the correlation functions appear in the bin $`60\le Z_{bound}\le 79`$, corresponding to the largest impact parameters.
The analysis of the p-p correlation functions was performed with the Koonin-Pratt formalism in the zero-lifetime limit. In this form, the analysis includes the effects of quantum statistics and of the mutual nuclear and Coulomb final-state interactions but it ignores the long-range Coulomb repulsion of the two protons from the emitting source. In order to assess the magnitude of the latter effect, classical Coulomb trajectory calculations were performed. In addition, calculations with the three-body quantum model of Lednicky et al. were used to identify possible systematic uncertainties. A uniform sphere with radius $`R`$ and statistical momentum distributions corresponding to the measured kinetic-energy spectra were assumed for the proton source; the model results are slightly dependent on the particle momenta through the applied experimental filter.
The quality of the obtained results and the sensitivity to the radius parameter $`R`$ is illustrated in Fig. 2 for the case $`20\le Z_{bound}\le 40`$. With the filtered Koonin-Pratt calculations, the most satisfactory description of the data is obtained with $`R`$ = 8.2 fm (full line). The height and the width of the resonance peak are rather well reproduced. For comparison, the results of the same calculation before filtering (dotted line) and after correction for the Coulomb repulsion from the source (dashed line) are also shown. The filtering effect is equivalent to a change of the radius by $`\mathrm{\Delta }R\approx 0.3`$ fm whereas the modification caused by the charge of the emitting source is almost negligible. The latter is not unexpected in the present case, especially for the more central collisions which produce spectators of very moderate total charge (e.g. $`\approx `$ 45 for $`20\le Z_{bound}\le 40`$). In addition, with the assumption of a simultaneous volume break-up, only the charge inside the volume corresponding to the radial position of the particle was assumed to contribute to its acceleration. Correlation peaks of nearly the same height but with slightly smaller width were obtained with the three-body quantum model . In the two-body approximation, the difference between the results obtained with the Koonin-Pratt and the Lednicky et al. formalisms is practically negligible. Therefore, and because of the reduced importance of the three-body Coulomb effect, the quantitative analysis of the measured p-p correlation functions was performed with the filtered Koonin-Pratt simulations.
The best fits, generated by minimizing $`\chi ^2`$ within the range $`10\le q\le 35`$ MeV/c, are shown in Fig. 1. The corresponding radii, listed in Table 1, are close to about 8 fm, up to nearly 9 fm for the bin of largest $`Z_{bound}`$ which, however, seems to be afflicted with the largest uncertainties. These values are distinctly but not excessively larger than the radius $`R`$ = 6.7 fm of a gold nucleus at normal density 0.16 fm<sup>-3</sup>. The quoted errors are purely statistical and were determined from the radii for which $`\chi ^2`$ exceeds its minimum by an amount equal to the minimum value of $`\chi ^2`$ per degree of freedom (i.e. for $`\chi ^2=\chi _{min}^2\,n/(n-1)`$ where $`n`$ is the number of data points included). The additional systematic uncertainty, mainly resulting from the arbitrariness of choosing the normalization interval and estimated to be about 0.5 fm, is much larger.
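Generically, the radius extraction and the quoted error criterion amount to the following sketch; the model correlation functions (here the hypothetical array `model_curves`, tabulated on a grid of source radii, e.g. from the filtered Koonin-Pratt simulations) are assumed as external input.

```python
# chi^2 fit of the source radius R to a measured correlation function (sketch).
import numpy as np
from scipy.interpolate import interp1d

def fit_radius(q, C_exp, dC_exp, radii, model_curves, qmin=10.0, qmax=35.0):
    # model_curves[r, k]: model correlation at radius radii[r] and momentum q[k]
    sel = (q >= qmin) & (q <= qmax)
    curve_at = interp1d(radii, model_curves, axis=0)   # R -> C_R(q) on the q grid
    grid = np.linspace(radii[0], radii[-1], 400)
    chis = np.array([np.sum(((C_exp[sel] - curve_at(R)[sel]) / dC_exp[sel]) ** 2)
                     for R in grid])
    best = grid[np.argmin(chis)]
    # statistical error band: radii with chi^2 <= chi^2_min * n/(n-1), as in the text
    n = int(sel.sum())
    band = grid[chis <= chis.min() * n / (n - 1)]
    return best, (band.min(), band.max())
```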
Correlation functions for proton pairs belonging to four intervals of the laboratory pair momentum $`P_{sum}=|\vec{p}_1+\vec{p}_2|`$, integrated over $`Z_{bound}`$, are shown in Fig. 3. Only the data from the 96-element hodoscope were used. The combined effects of the energy threshold $`E_{lab}=20`$ MeV, of the binning in pair momentum, and of the finite solid-angle acceptance of the detector hodoscope cause the limitation of the populated $`q`$ range for the bins of smaller $`P_{sum}`$. A normalization interval $`70\le q\le 90`$ MeV/c was therefore chosen for these correlation functions. The same effects, perhaps including some collectivity due to a small but finite source motion, may cause the slight decrease towards larger $`q`$ that is not observed in the momentum-integrated correlation functions (cf. Fig. 1). We observe, however, that the peak height at $`q\approx 20`$ MeV/c relative to the uncorrelated background at $`q>`$ 40 MeV/c is virtually identical in all four cases. This is consistent with what is expected for the ideal situation of a purely statistical source, confirming that effects caused by collective radial motion or emission times related to the temporal evolution of an expanding source should be small for spectator decays.
The choice of a threshold of $`E_{lab}=20`$ MeV, on the other hand, raises the question of what reaction stage is mainly represented by the obtained correlation data. Light-particle spectra from the same experiment, measured with high-resolution telescopes at backward angles, give evidence for emission prior to the final breakup stage . This was concluded from the comparison of the slope temperatures and multiplicities to the predictions of the statistical multifragmentation model. Distinct components of equilibrium yields and of faster particles, termed pre-equilibrium or first stage, have also been identified by other groups in data for comparable reactions . In all these cases the main parts of the low-energy equilibrium components are below the present threshold. Long-lifetime components are thus excluded from the analysis so as to preserve the sensitivity of the correlation function to the initial spatial dimensions. On the other hand, pre-equilibrium or pre-breakup particles as they are scattered from the forming spectator matter may also contribute to an interferometric picture that reflects the extension of the latter.
Besides the p-p correlations also the p-$`\alpha `$ and d-$`\alpha `$ correlations were used to determine breakup radii by comparing them to the numerical results of Boal and Shillcock . The calculated correlation functions which in their work are given for a discrete set of source radii were interpolated, and a Monte Carlo procedure was used for an event-by-event simulation of the effects of multiple scattering in the target and of the spatial and energy resolution of the detection system . The Coulomb acceleration by the emitting source was also here neglected. This seems justified for d-$`\alpha `$ because the effects should not be much larger than in the p-p case. For p-$`\alpha `$, the different charge-to-mass ratios of the two particles may be expected to cause larger distortions but, at the same time, the finite lifetimes of these resonances may reduce them considerably. The resulting fits to the data are rather satisfactory (Fig. 1). The corresponding radii are about 1 to 2 fm larger than those deduced from the p-p correlations (Table 1). We also observe a slightly stronger variation with the pair momentum than in the p-p case. For p-$`\alpha `$, e.g., the peak of the resonance grows from about 1.3 to 1.6 as the pair momentum is varied from $`P_{sum}<800`$ MeV/c to $`P_{sum}>`$ 1000 MeV/c (result after integration over $`Z_{bound}`$).
Densities were calculated by dividing the number of spectator constituents, taken from the calorimetric analyses of Refs. and listed in Table 1, by the source volume. The densities vary considerably with centrality (Fig. 4) even though the radii are approximately constant. This is caused by the varying spectator masses which, in excellent agreement with the prediction of the geometric participant-spectator model , decrease with increasing centrality almost in proportion to $`Z_{bound}`$.
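In code, this density estimate is a one-liner; the constituent number used below is illustrative, not a value from the calorimetric analyses.

```python
# rho / rho_0 for A constituents in a uniform sphere of radius R (sketch).
from math import pi

def relative_density(A, R_fm, rho0=0.16):   # rho0 = saturation density in fm^-3
    return A / (4.0 / 3.0 * pi * R_fm**3) / rho0

print(relative_density(90, 8.2))  # ~0.24 for an illustrative A = 90 and R = 8.2 fm
```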
In the p-p case, the mean relative densities decrease from $`\rho /\rho _0`$ = 0.4 for the near-peripheral to below $`\rho /\rho _0`$ = 0.2 for the most central collisions. These values compare well with the densities assumed in the statistical multifragmentation model, including their variation with centrality. In the model, the mean density changes as a function of the multiplicity if a fixed so-called crack width is used as criterion for the placement of fragments inside the breakup volume . For the most peripheral bin, the interpretation of the shape of the measured p-p correlations is somewhat uncertain (cf. Fig. 1), and the corresponding density has not been plotted. For p-$`\alpha `$ and d-$`\alpha `$, the fits are satisfactory for the largest $`Z_{bound}`$ but correspond to rather large radii (Table 1). The densities are therefore low which is somewhat unexpected as here the production of highly excited heavy residues is the dominant reaction channel . Obviously, the assumptions of a homogeneous spherical source and of a rapid volume breakup are less well justified in this case which causes difficulties for the interpretation of the obtained density values.
The differences between the results for p-p and for the resonances involving $`\alpha `$ particles are statistically significant but their origin is not clear at present. Besides the systematic uncertainties of the employed formalisms, there is also the possibility of differences in the scattering cross sections; larger cross sections will cause larger apparent source sizes. An interesting connection exists between the large radii derived for the lithium states and the limitation of the temperatures deduced from their relative populations . A late emission of these resonances could explain both observations.
In summary, correlation functions constructed from proton-proton coincidences and from coincidences between protons, deuterons, or tritons with $`\alpha `$ particles consistently show that the breakup volume does not appreciably change with impact parameter even though the spectator mass varies considerably. A quantitative analysis, in the limit of zero lifetime of the source, yields results that are consistent with a very moderate radial expansion. The deduced breakup densities are rather low as assumed in the statistical model scenarios. The variation of the density with impact parameter is caused by the changing spectator mass. With regard to the method, it has become clear that the systematic uncertainties of deriving densities from correlation functions may be large, in particular for the resonances involving $`\alpha `$ particles for which the formalism is not yet as much advanced. It seems, however, that the spectator decay at relativistic energies may represent a particularly favorable case because the source charge is moderate, collective motion is nearly nonexistent, and with the choice of high energy thresholds long-lifetime components may have been efficiently excluded from the analysis.
The authors would like to thank R. Lednicky for making his code available to them and W.A. Friedman for fruitful discussions. M.B., J.P., and C.S. acknowledge the financial support of the Deutsche Forschungsgemeinschaft under the Contract No. Be1634/1-1, Po256/2-1, and Schw510/2-1, respectively. This work was supported by the European Community under contract ERBFMGECT950083.
# Simply-laced Coxeter groups and groups generated by symplectic transvections
## 1. Introduction
The point of departure for this paper is the following result obtained in . Let $`N_n^0`$ denote the semi-algebraic set of all unipotent upper-triangular $`n\times n`$ matrices $`x`$ with real entries such that, for every $`k=1,\ldots ,n-1`$, the minor of $`x`$ with rows $`1,\ldots ,k`$ and columns $`n-k+1,\ldots ,n`$ is non-zero. Then the number $`\mathrm{\#}_n`$ of connected components of $`N_n^0`$ is given as follows: $`\mathrm{\#}_2=2,\mathrm{\#}_3=6,\mathrm{\#}_4=20,\mathrm{\#}_5=52`$, and $`\mathrm{\#}_n=3\cdot 2^{n-1}`$ for $`n\ge 6`$.
An interesting feature of this answer is that every case which one can check by hand turns out to be exceptional. But the method of the proof seems to be even more interesting than the answer itself: it is shown that the connected components of $`N_n^0`$ are in a bijection with the orbits of a certain group $`\mathrm{\Gamma }_n`$ that acts in a vector space of dimension $`n(n-1)/2`$ over the two-element field $`𝔽_2`$, and is generated by symplectic transvections. Such groups appeared earlier in singularity theory, see e.g., and references therein.
The construction of $`\mathrm{\Gamma }_n`$ given in uses the combinatorial machinery (developed in ) of pseudo-line arrangements associated with reduced expressions in the symmetric group. In this paper we present the following far-reaching generalization of this construction. Let $`W`$ be an arbitrary Coxeter group of simply-laced type (possibly infinite but of finite rank). Let $`u`$ and $`v`$ be any two elements in $`W`$, and $`𝐢`$ be a reduced word (of length $`m=\ell (u)+\ell (v)`$) for the pair $`(u,v)`$ in the Coxeter group $`W\times W`$ (see Section 2 for more details). We associate to $`𝐢`$ a subgroup $`\mathrm{\Gamma }_𝐢`$ in $`GL_m(\mathbb{Z})`$ generated by symplectic transvections. We prove among other things that the subgroups corresponding to different reduced words for the same pair $`(u,v)`$ are conjugate to each other inside $`GL_m(\mathbb{Z})`$. To recover the group $`\mathrm{\Gamma }_n`$ from this general construction, one needs several specializations and reductions: take $`W`$ to be the symmetric group $`S_n`$; take $`(u,v)=(w_0,e)`$, where $`w_0`$ is the longest permutation in $`S_n`$, and $`e`$ is the identity permutation; take $`𝐢`$ to be the lexicographically minimal reduced word $`1,2,1,\ldots ,n-1,n-2,\ldots ,1`$ for $`w_0`$; and finally, take the group $`\mathrm{\Gamma }_𝐢(𝔽_2)`$ obtained from $`\mathrm{\Gamma }_𝐢`$ by reducing the linear transformations from $`\mathbb{Z}`$ to $`𝔽_2`$.
We also generalize the enumeration result of by showing that, under certain assumptions on $`u`$ and $`v`$, the number of $`\mathrm{\Gamma }_𝐢(𝔽_2)`$-orbits in $`𝔽_2^m`$ is equal to $`3\cdot 2^s`$, where $`s`$ is the number of simple reflections in $`W`$ that appear in a reduced decomposition for $`u`$ or $`v`$. We deduce this from a description of orbits in an even more general situation which sharpens the results in (see Section 6 below).
Although the results and methods of this paper are purely algebraic and combinatorial, our motivation for the study of the groups $`\mathrm{\Gamma }_𝐢`$ and their orbits comes from geometry. In the case when $`W`$ is the (finite) Weyl group of a simply-laced root system, we expect that the $`\mathrm{\Gamma }_𝐢(𝔽_2)`$-orbits in $`𝔽_2^m`$ enumerate connected components of the real part of the reduced double Bruhat cell corresponding to $`(u,v)`$. Double Bruhat cells were introduced and studied in as a natural framework for the study of total positivity in semisimple groups; as explained to us by N. Reshetikhin, they also appear naturally in the study of symplectic leaves in semisimple groups (see ). Let us briefly recall their definition.
Let $`G`$ be a split simply connected semisimple algebraic group defined over $`\mathbb{R}`$ with the Weyl group $`W`$; thus $`W=\mathrm{Norm}_G(H)/H`$, where $`H`$ is an $`\mathbb{R}`$-split maximal torus in $`G`$. Let $`B`$ and $`B_{-}`$ be two (opposite) Borel subgroups in $`G`$ such that $`B\cap B_{-}=H`$. The *double Bruhat cells* $`G^{u,v}`$ are defined as the intersections of ordinary Bruhat cells taken with respect to $`B`$ and $`B_{-}`$:
$$G^{u,v}=BuB\cap B_{-}vB_{-}.$$
In view of the well-known Bruhat decomposition, the group $`G`$ is the disjoint union of all $`G^{u,v}`$ for $`(u,v)\in W\times W`$.
The term “cell” might be misleading because the topology of $`G^{u,v}`$ can be quite complicated. The torus $`H`$ acts freely on $`G^{u,v}`$ by left (as well as right) translations, and there is a natural section $`L^{u,v}`$ for this action which we call the *reduced double Bruhat cell*. These sections are introduced and studied in a forthcoming paper (for the definition see Section 7 below).
We seem to be very close to a proof of the fact that the connected components of the real part of $`L^{u,v}`$ are in a natural bijection with the $`\mathrm{\Gamma }_𝐢(𝔽_2)`$-orbits in $`𝔽_2^m`$; but some details are still missing. This question will be treated in a separate publication.
The special case when $`(u,v)=(e,w)`$ for some element $`w\in W`$ is of particular geometric interest. In this case, $`L^{u,v}`$ is biregularly isomorphic to the so-called *opposite Schubert cell*
$$C_w^0:=C_w\cap w_0C_{w_0},$$
where $`w_0`$ is the longest element of $`W`$, and $`C_w=(BwB)/B\subset G/B`$ is the *Schubert cell* corresponding to $`w`$. These opposite cells appeared in the literature in various contexts, and were studied (in various degrees of generality) in . In particular, the variety $`N_n^0`$ which was the main object of study in is naturally identified with the real part of the opposite cell $`C_{w_0}^0`$ for $`G=SL_n`$.
By the informal “complexification principle” of Arnold, if the group $`\mathrm{\Gamma }_𝐢(𝔽_2)`$ enumerates connected components of the real part of $`L^{u,v}`$, the group $`\mathrm{\Gamma }_𝐢`$ itself (which acts in $`\mathbb{Z}^m`$ rather than in $`𝔽_2^m`$) should provide information about the topology of the complex variety $`L^{u,v}`$. So far we did not find a totally satisfactory “complexification” along these lines.
The paper is organized as follows. Main definitions, notations and conventions are collected in Section 2. Our main results are formulated in Section 3 and proved in the next three sections. We conclude by discussing in more detail the geometric connection outlined above.
## 2. Definitions
### 2.1. Simply-laced Coxeter groups
Let $`\mathrm{\Pi }`$ be an arbitrary finite graph without loops and multiple edges. Throughout the paper, we use the following notation: write $`i\in \mathrm{\Pi }`$ if $`i`$ is a vertex of $`\mathrm{\Pi }`$, and $`\{i,j\}\in \mathrm{\Pi }`$ if the vertices $`i`$ and $`j`$ are adjacent in $`\mathrm{\Pi }`$. The (simply-laced) Coxeter group $`W=W(\mathrm{\Pi })`$ associated with $`\mathrm{\Pi }`$ is generated by the elements $`s_i`$ for $`i\in \mathrm{\Pi }`$ subject to the relations
(2.1)
$$s_i^2=e;\qquad s_is_j=s_js_i\ (\{i,j\}\notin \mathrm{\Pi });\qquad s_is_js_i=s_js_is_j\ (\{i,j\}\in \mathrm{\Pi }).$$
A word $`𝐢=(i_1,\ldots ,i_m)`$ in the alphabet $`\mathrm{\Pi }`$ is a *reduced word* for $`w\in W`$ if $`w=s_{i_1}\cdots s_{i_m}`$, and $`m`$ is the smallest length of such a factorization. The length $`m`$ of any reduced word for $`w`$ is called the *length* of $`w`$ and denoted by $`m=\ell (w)`$. Let $`R(w)`$ denote the set of all reduced words for $`w`$.
The “double” group $`W\times W`$ is also a Coxeter group; it corresponds to the graph $`\stackrel{~}{\mathrm{\Pi }}`$ which is the union of two disconnected copies of $`\mathrm{\Pi }`$. We identify the vertex set of $`\stackrel{~}{\mathrm{\Pi }}`$ with $`\{+1,-1\}\times \mathrm{\Pi }`$, and write a vertex $`(\pm 1,i)\in \stackrel{~}{\mathrm{\Pi }}`$ simply as $`\pm i`$. For each $`\pm i\in \stackrel{~}{\mathrm{\Pi }}`$, we set $`\epsilon (\pm i)=\pm 1`$ and $`|\pm i|=i\in \mathrm{\Pi }`$. Thus two vertices $`i`$ and $`j`$ of $`\stackrel{~}{\mathrm{\Pi }}`$ are joined by an edge if and only if $`\epsilon (i)=\epsilon (j)`$ and $`\{|i|,|j|\}\in \mathrm{\Pi }`$. In this notation, a reduced word for a pair $`(u,v)\in W\times W`$ is an arbitrary shuffle of a reduced word for $`u`$ written in the alphabet $`-\mathrm{\Pi }`$ and a reduced word for $`v`$ written in the alphabet $`\mathrm{\Pi }`$.
In view of the defining relations (2.1), the set of reduced words $`R(u,v)`$ is equipped with the following operations:
* *2-move*. Interchange two consecutive entries $`i_{k-1},i_k`$ in a reduced word $`𝐢=(i_1,\ldots ,i_m)`$ provided $`\{i_{k-1},i_k\}\notin \stackrel{~}{\mathrm{\Pi }}`$.
* *3-move*. Replace three consecutive entries $`i_{k-2},i_{k-1},i_k`$ in $`𝐢`$ by $`i_{k-1},i_{k-2},i_{k-1}`$ if $`i_k=i_{k-2}`$ and $`\{i_{k-1},i_k\}\in \stackrel{~}{\mathrm{\Pi }}`$.
In each case, we will refer to the index $`k\in [1,m]`$ as the *position* of the corresponding move. Using these operations, we make $`R(u,v)`$ the set of vertices of a graph whose edges correspond to $`2`$\- and $`3`$-moves. It is a well known result due to Tits that this graph is *connected*, i.e., any two reduced words in $`R(u,v)`$ can be obtained from each other by a sequence of $`2`$\- and $`3`$-moves. We will say that a $`2`$-move interchanging the entries $`i_{k-1}`$ and $`i_k`$ is *trivial* if $`i_k\ne -i_{k-1}`$; the remaining $`2`$-moves and all $`3`$-moves will be referred to as *non-trivial*.
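For concreteness, the two operations admit the following direct transcription; the encoding (signed integers for the vertices of $`\stackrel{~}{\mathrm{\Pi }}`$ and a predicate `adjacent` for the edge relation of $`\mathrm{\Pi }`$) is our own convention for this sketch, with positions 1-based as in the text.

```python
# 2- and 3-moves on reduced words for (u,v) (sketch).
def adjacent_tilde(a, b, adjacent):
    # vertices of tilde-Pi are adjacent iff they carry the same sign
    # and their underlying vertices are adjacent in Pi
    return (a > 0) == (b > 0) and adjacent(abs(a), abs(b))

def two_move(word, k, adjacent):
    # interchange i_{k-1} and i_k, allowed iff they are not adjacent in tilde-Pi
    assert not adjacent_tilde(word[k - 2], word[k - 1], adjacent)
    w = list(word)
    w[k - 2], w[k - 1] = w[k - 1], w[k - 2]
    return tuple(w)

def three_move(word, k, adjacent):
    # replace (i_{k-2}, i_{k-1}, i_k) = (i, j, i) by (j, i, j)
    i, j = word[k - 3], word[k - 2]
    assert word[k - 1] == i and adjacent_tilde(i, j, adjacent)
    w = list(word)
    w[k - 3:k] = [j, i, j]
    return tuple(w)

def is_trivial_two_move(word, k):
    # a 2-move is trivial unless it interchanges i and -i
    return word[k - 1] != -word[k - 2]
```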
### 2.2. Groups generated by symplectic transvections
Let $`\mathrm{\Sigma }`$ be a finite directed graph. As before, we shall write $`k\in \mathrm{\Sigma }`$ if $`k`$ is a vertex of $`\mathrm{\Sigma }`$, and $`\{k,l\}\in \mathrm{\Sigma }`$ if the vertices $`k`$ and $`l`$ are adjacent in the underlying graph obtained from $`\mathrm{\Sigma }`$ by forgetting directions of edges. We also write $`(k\to l)\in \mathrm{\Sigma }`$ if $`k\to l`$ is a directed edge of $`\mathrm{\Sigma }`$.
Let $`V=\mathbb{Z}^\mathrm{\Sigma }`$ be the lattice with a fixed $`\mathbb{Z}`$-basis $`(e_k)_{k\in \mathrm{\Sigma }}`$ labeled by vertices of $`\mathrm{\Sigma }`$. Let $`\xi _k\in V^{*}`$ denote the corresponding coordinate functions, i.e., every vector $`v\in V`$ can be written as
$$v=\sum_{k\in \mathrm{\Sigma }}\xi _k(v)e_k.$$
We define a skew-symmetric bilinear form $`\mathrm{\Omega }`$ on $`V`$ by
(2.2)
$$\mathrm{\Omega }=\mathrm{\Omega }_\mathrm{\Sigma }=\sum_{(k\to l)\in \mathrm{\Sigma }}\xi _k\wedge \xi _l.$$
For each $`k\in \mathrm{\Sigma }`$, we define the symplectic transvection $`\tau _k=\tau _{k,\mathrm{\Sigma }}:V\to V`$ by
(2.3)
$$\tau _k(v)=v-\mathrm{\Omega }(v,e_k)e_k.$$
(The word “symplectic” might be misleading since $`\mathrm{\Omega }`$ is allowed to be degenerate; still we prefer to keep this terminology from .) In the coordinate form, we have $`\xi _l(\tau _k(v))=\xi _l(v)`$ for $`l\ne k`$, and
(2.4)
$$\xi _k(\tau _k(v))=\xi _k(v)-\sum_{(a\to k)\in \mathrm{\Sigma }}\xi _a(v)+\sum_{(k\to b)\in \mathrm{\Sigma }}\xi _b(v).$$
For any subset $`B`$ of vertices of $`\mathrm{\Sigma }`$, we denote by $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}`$ the group of linear transformations of $`V=\mathbb{Z}^\mathrm{\Sigma }`$ generated by the transvections $`\tau _k`$ for $`k\in B`$.
Note that all transformations from $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}`$ are represented by integer matrices in the standard basis $`e_k`$. Let $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}(𝔽_2)`$ denote the group of linear transformations of the $`𝔽_2`$-vector space $`V(𝔽_2)=𝔽_2^\mathrm{\Sigma }`$ obtained from $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}`$ by reduction modulo $`2`$ (recall that $`𝔽_2`$ is the $`2`$-element field).
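Over $`𝔽_2`$ the two signs in (2.4) coincide, so $`\tau _k`$ simply adds to $`\xi _k`$ the sum of the coordinates at all neighbors of $`k`$; since each $`\tau _k`$ is then an involution, the orbits of $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}(𝔽_2)`$ on $`𝔽_2^\mathrm{\Sigma }`$ can be enumerated by a plain breadth-first search, as in the brute-force sketch below (illustrative only, feasible for small $`\mathrm{\Sigma }`$).

```python
# Transvections mod 2 and orbit enumeration (sketch).
from itertools import product

def tau(k, v, edges):
    # xi_k(tau_k v) = xi_k(v) + sum of xi over all neighbours of k (mod 2)
    w = list(v)
    w[k] = (w[k] + sum(v[a] for a, b in edges if b == k)
                 + sum(v[b] for a, b in edges if a == k)) % 2
    return tuple(w)

def orbits(m, edges, B):
    # edges: list of pairs (k, l) meaning a directed edge k -> l, vertices 0..m-1
    todo, orbs = set(product((0, 1), repeat=m)), []
    while todo:
        seed = todo.pop()
        orbit, frontier = {seed}, [seed]
        while frontier:
            v = frontier.pop()
            for k in B:
                w = tau(k, v, edges)
                if w not in orbit:
                    orbit.add(w)
                    frontier.append(w)
        todo -= orbit
        orbs.append(orbit)
    return orbs
```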
## 3. Main results
### 3.1. The graph $`\mathrm{\Sigma }(𝐢)`$
We now present our main combinatorial construction that brings together simply-laced Coxeter groups and groups generated by symplectic transvections. Let $`W=W(\mathrm{\Pi })`$ be the simply-laced Coxeter group associated to a graph $`\mathrm{\Pi }`$ (see Section 2.1). Fix a pair $`(u,v)\in W\times W`$, and let $`m=\ell (u)+\ell (v)`$. Let $`𝐢=(i_1,\ldots ,i_m)\in R(u,v)`$ be any reduced word for $`(u,v)`$. We shall construct a directed graph $`\mathrm{\Sigma }(𝐢)`$ and a subset $`B(𝐢)`$ of its vertices, thus giving rise to a group $`\mathrm{\Gamma }_{\mathrm{\Sigma }(𝐢),B(𝐢)}`$ generated by symplectic transvections.
First of all, the set of vertices of $`\mathrm{\Sigma }(𝐢)`$ is just the set $`[1,m]=\{1,2,\ldots ,m\}`$. For $`l\in [1,m]`$, we denote by $`l^{-}=l_𝐢^{-}`$ the maximal index $`k`$ such that $`1\le k<l`$ and $`|i_k|=|i_l|`$; if $`|i_k|\ne |i_l|`$ for $`1\le k<l`$ then we set $`l^{-}=0`$. We define $`B(𝐢)\subset [1,m]`$ as the subset of indices $`l\in [2,m]`$ such that $`l^{-}>0`$. The indices $`l\in B(𝐢)`$ will be called *$`𝐢`$-bounded*.
It remains to define the edges of $`\mathrm{\Sigma }(𝐢)`$.
###### Definition 3.1.
A pair $`\{k,l\}\subset [1,m]`$ with $`k<l`$ is an edge of $`\mathrm{\Sigma }(𝐢)`$ if it satisfies one of the following three conditions:
(i) $`k=l^{-}`$;
(ii) $`k^{-}<l^{-}<k`$, $`\{|i_k|,|i_l|\}\in \mathrm{\Pi }`$, and $`\epsilon (i_{l^{-}})=\epsilon (i_k)`$;
(iii) $`l^{-}<k^{-}<k`$, $`\{|i_k|,|i_l|\}\in \mathrm{\Pi }`$, and $`\epsilon (i_{k^{-}})=\epsilon (i_k)`$.
The edges of type (i) are called *horizontal*, and those of types (ii) and (iii) *inclined*. A horizontal (resp. inclined) edge $`\{k,l\}`$ with $`k<l`$ is directed from $`k`$ to $`l`$ if and only if $`\epsilon (i_k)=+1`$ (resp. $`\epsilon (i_k)=-1`$).
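A literal transcription of this construction reads as follows; the encoding (a signed reduced word and the adjacency predicate of $`\mathrm{\Pi }`$) is our own, and the conditions of Definition 3.1 are implemented exactly as stated.

```python
# Build B(i) and the directed edges of Sigma(i) from a signed reduced word (sketch).
def sigma_graph(word, adjacent):
    m = len(word)
    eps = lambda a: 1 if a > 0 else -1
    def lminus(l):
        # l^- : the largest k < l with |i_k| = |i_l|, or 0 if there is none
        for k in range(l - 1, 0, -1):
            if abs(word[k - 1]) == abs(word[l - 1]):
                return k
        return 0
    B = [l for l in range(2, m + 1) if lminus(l) > 0]
    edges = []
    for l in range(1, m + 1):
        for k in range(1, l):
            horizontal = (k == lminus(l))                        # condition (i)
            inclined = (adjacent(abs(word[k - 1]), abs(word[l - 1])) and
                        ((lminus(k) < lminus(l) < k and           # condition (ii)
                          eps(word[lminus(l) - 1]) == eps(word[k - 1])) or
                         (lminus(l) < lminus(k) < k and           # condition (iii)
                          eps(word[lminus(k) - 1]) == eps(word[k - 1]))))
            if horizontal or inclined:
                # horizontal edges go k -> l iff eps(i_k) = +1, inclined iff -1
                k_to_l = (eps(word[k - 1]) == 1) if horizontal else (eps(word[k - 1]) == -1)
                edges.append((k, l) if k_to_l else (l, k))
    return B, edges
```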
We will give a few examples at the end of Section 3.2.
### 3.2. Properties of graphs $`\mathrm{\Sigma }(𝐢)`$
We start with the following property of $`\mathrm{\Sigma }(𝐢)`$ and $`B(𝐢)`$.
###### Proposition 3.2.
For any non-empty subset $`S\subset B(𝐢)`$, there exists a vertex $`a\in [1,m]\setminus S`$ such that $`\{a,b\}\in \mathrm{\Sigma }(𝐢)`$ for a unique $`b\in S`$.
For any edge $`\{i,j\}\in \mathrm{\Pi }`$, let $`\mathrm{\Sigma }_{i,j}(𝐢)`$ denote the induced directed subgraph of $`\mathrm{\Sigma }(𝐢)`$ with vertices $`k\in [1,m]`$ such that $`|i_k|=i`$ or $`|i_k|=j`$. We shall use the following planar realization of $`\mathrm{\Sigma }_{i,j}(𝐢)`$ which we call the $`(i,j)`$-*strip* of $`\mathrm{\Sigma }(𝐢)`$. Consider the infinite horizontal strip $`\mathbb{R}\times [-1,1]\subset \mathbb{R}^2`$, and identify each vertex $`k\in \mathrm{\Sigma }_{i,j}(𝐢)`$ with the point $`A=A_k=(k,y)`$, where $`y=1`$ for $`|i_k|=i`$, and $`y=-1`$ for $`|i_k|=j`$. We represent each (directed) edge $`(k\to l)`$ by a straight line segment from $`A_k`$ to $`A_l`$. (This justifies the terms “horizontal” and “inclined” edges in Definition 3.1.)
Note that every edge of $`\mathrm{\Sigma }(𝐢)`$ belongs to some $`(i,j)`$-strip, so we can think of $`\mathrm{\Sigma }(𝐢)`$ as the union of all its strips glued together along horizontal lines.
###### Theorem 3.3.
(a) The $`(i,j)`$-strip of $`\mathrm{\Sigma }(𝐢)`$ is a planar graph; equivalently, no two inclined edges cross each other inside the strip.
(b) The boundary of any triangle or trapezoid formed by two consecutive inclined edges and horizontal segments between them is a directed cycle in $`\mathrm{\Sigma }_{i,j}(𝐢)`$.
Our next goal is to compare the directed graphs $`\mathrm{\Sigma }(𝐢)`$ and $`\mathrm{\Sigma }(𝐢^{\prime })`$ when two reduced words $`𝐢`$ and $`𝐢^{\prime }`$ are related by a $`2`$\- or $`3`$-move. To do this, we associate to $`𝐢`$ and $`𝐢^{\prime }`$ a permutation $`\sigma _{𝐢^{\prime },𝐢}`$ of $`[1,m]`$ defined as follows. If $`𝐢`$ and $`𝐢^{\prime }`$ are related by a trivial $`2`$-move in position $`k`$ then $`\sigma _{𝐢^{\prime },𝐢}=(k-1,k)`$, the transposition of $`k-1`$ and $`k`$; if $`𝐢`$ and $`𝐢^{\prime }`$ are related by a non-trivial $`2`$-move then $`\sigma _{𝐢^{\prime },𝐢}=e`$, the identity permutation of $`[1,m]`$; finally, if $`𝐢`$ and $`𝐢^{\prime }`$ are related by a $`3`$-move in position $`k`$ then $`\sigma _{𝐢^{\prime },𝐢}=(k-2,k-1)`$. The following properties of $`\sigma _{𝐢^{\prime },𝐢}`$ are immediate from the definitions.
###### Proposition 3.4.
The permutation $`\sigma _{𝐢^{\prime },𝐢}`$ sends $`𝐢`$-bounded indices to $`𝐢^{\prime }`$-bounded ones. If the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial then its position $`k`$ is $`𝐢`$-bounded, and $`\sigma _{𝐢^{\prime },𝐢}(k)=k`$.
The relationship between the graphs $`\mathrm{\Sigma }(𝐢)`$ and $`\mathrm{\Sigma }(𝐢^{\prime })`$ is now given as follows.
###### Theorem 3.5.
Suppose two reduced words $`𝐢`$ and $`𝐢^{\prime }`$ are related by a $`2`$\- or $`3`$-move in position $`k`$, and $`\sigma =\sigma _{𝐢^{\prime },𝐢}`$ is the corresponding permutation of $`[1,m]`$. Let $`a`$ and $`b`$ be two distinct elements of $`[1,m]`$ such that at least one of them is $`𝐢`$-bounded. Then
(3.1)
$$(a\to b)\in \mathrm{\Sigma }(𝐢)\iff (\sigma (a)\to \sigma (b))\in \mathrm{\Sigma }(𝐢^{\prime }),$$
with the following two exceptions.
1. If the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial then $`(a\to k)\in \mathrm{\Sigma }(𝐢)\iff (k\to \sigma (a))\in \mathrm{\Sigma }(𝐢^{\prime })`$.
2. If the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial, and $`a\to k\to b`$ in $`\mathrm{\Sigma }(𝐢)`$, then $`\{a,b\}\in \mathrm{\Sigma }(𝐢)\iff \{\sigma (a),\sigma (b)\}\notin \mathrm{\Sigma }(𝐢^{\prime })`$; furthermore, the edge $`\{a,b\}\in \mathrm{\Sigma }(𝐢)`$ can only be directed as $`b\to a`$.
The following example illustrates the above results.
###### Example 3.6.
Let $`\mathrm{\Pi }`$ be the Dynkin graph $`A_4`$, i.e., the chain formed by vertices $`1,2,3`$, and $`4`$. Let $`u=s_4s_2s_1s_2s_3s_2s_4s_1`$ and $`v=s_2s_1s_3s_2s_4s_1s_3s_2s_1`$ (in the standard realization of $`W`$ as the symmetric group $`S_5`$, with the generators $`s_i=(i,i+1)`$ (adjacent transpositions), the permutations $`u`$ and $`v`$ can be written in the one-line notation as $`u=53241`$ and $`v=54312`$). The graph $`\mathrm{\Sigma }(𝐢)`$ corresponding to the reduced word $`𝐢=(2,1,4,2,1,3,2,2,3,2,4,1,4,1,3,2,1)`$ for $`(u,v)`$ is shown in Fig. 1. Here white (resp. black) vertices of each horizontal level $`i`$ correspond to entries of $`𝐢`$ that are equal to $`i`$ (resp. to $`-i`$). Horizontal edges are shown by solid lines, inclined edges of type (ii) in Definition 3.1 by dashed lines, and inclined edges of type (iii) by dotted lines.
Now let $`𝐢^{\prime }`$ be obtained from $`𝐢`$ by the (non-trivial) 2-move in position 8, i.e., by interchanging $`i_7=-2`$ with $`i_8=2`$. The corresponding graph $`\mathrm{\Sigma }(𝐢^{\prime })`$ is shown in Fig. 2.
Notice that the edges of $`\mathrm{\Sigma }(𝐢)`$ that fall into the first exceptional case in Theorem 3.5 are $`A\to B`$, $`C\to A`$, and $`A\to D`$; by reversing their orientation, one obtains the edges $`B^{\prime }\to A^{\prime }`$, $`A^{\prime }\to C^{\prime }`$, and $`D^{\prime }\to A^{\prime }`$ of $`\mathrm{\Sigma }(𝐢^{\prime })`$. The second exceptional case in Theorem 3.5 applies to two edges $`B\to E`$ and $`D\to E`$ of $`\mathrm{\Sigma }(𝐢)`$ and two “non-edges” $`\{C,B\}`$ and $`\{C,D\}`$; the corresponding edges and non-edges of $`\mathrm{\Sigma }(𝐢^{\prime })`$ are $`C^{\prime }\to B^{\prime }`$, $`C^{\prime }\to D^{\prime }`$, $`\{E^{\prime },B^{\prime }\}`$, and $`\{E^{\prime },D^{\prime }\}`$.
Finally, consider the reduced word $`𝐢^{\prime \prime }`$ obtained from $`𝐢^{\prime }`$ by the 3-move in position 10, i.e., by replacing $`(i_8^{\prime },i_9^{\prime },i_{10}^{\prime })=(-2,-3,-2)`$ with $`(-3,-2,-3)`$. The corresponding graph $`\mathrm{\Sigma }(𝐢^{\prime \prime })`$ is shown in Fig. 3.
Now the first exceptional case in Theorem 3.5 covers the edges $`D^{\prime }\to A^{\prime }`$, $`C^{\prime }\to D^{\prime }`$, $`D^{\prime }\to F^{\prime }`$, and $`G^{\prime }\to D^{\prime }`$ of $`\mathrm{\Sigma }(𝐢^{\prime })`$, and the corresponding edges $`A^{\prime \prime }\to D^{\prime \prime }`$, $`D^{\prime \prime }\to C^{\prime \prime }`$, $`F^{\prime \prime }\to D^{\prime \prime }`$, and $`D^{\prime \prime }\to G^{\prime \prime }`$ of $`\mathrm{\Sigma }(𝐢^{\prime \prime })`$. The second exceptional case covers the edges $`F^{\prime }\to C^{\prime }`$ and $`A^{\prime }\to C^{\prime }`$, and non-edges $`\{G^{\prime },F^{\prime }\}`$ and $`\{G^{\prime },A^{\prime }\}`$ of $`\mathrm{\Sigma }(𝐢^{\prime })`$; the corresponding edges and non-edges of $`\mathrm{\Sigma }(𝐢^{\prime \prime })`$ are $`G^{\prime \prime }\to F^{\prime \prime }`$, $`G^{\prime \prime }\to A^{\prime \prime }`$, $`\{C^{\prime \prime },F^{\prime \prime }\}`$, and $`\{A^{\prime \prime },C^{\prime \prime }\}`$.
### 3.3. The groups $`\mathrm{\Gamma }_𝐢`$ and conjugacy theorems
As before, let $`𝐢=(i_1,\mathrm{},i_m)`$ be a reduced word for a pair $`(u,v)`$ of elements in a simply-laced Coxeter group $`W`$. By the general construction in Section 2.2, the pair $`(\mathrm{\Sigma }(𝐢),B(𝐢))`$ gives rise to a skew-symmetric form $`\mathrm{\Omega }_{\mathrm{\Sigma }(𝐢)}`$ on $`\mathbb{Z}^m`$, and to a subgroup $`\mathrm{\Gamma }_{\mathrm{\Sigma }(𝐢),B(𝐢)}\subset GL_m(\mathbb{Z})`$ generated by symplectic transvections. We denote these symplectic transvections by $`\tau _{k,𝐢}`$, and also abbreviate $`\mathrm{\Omega }_𝐢=\mathrm{\Omega }_{\mathrm{\Sigma }(𝐢)}`$, and $`\mathrm{\Gamma }_𝐢=\mathrm{\Gamma }_{\mathrm{\Sigma }(𝐢),B(𝐢)}`$.
###### Theorem 3.7.
For any two reduced words $`𝐢`$ and $`𝐢^{\prime }`$ for the same pair $`(u,v)\in W\times W`$, the groups $`\mathrm{\Gamma }_𝐢`$ and $`\mathrm{\Gamma }_{𝐢^{\prime }}`$ are conjugate to each other inside $`GL_m(\mathbb{Z})`$.
Our proof of Theorem 3.7 is constructive. In view of the Tits result quoted in Section 2.1, it is enough to prove Theorem 3.7 in the case when $`𝐢`$ and $`𝐢^{\prime }`$ are related by a 2- or 3-move. We shall construct the corresponding conjugating linear transformations explicitly. To do this, let us define two linear maps $`\phi _{𝐢^{\prime },𝐢}^\pm :\mathbb{Z}^m\to \mathbb{Z}^m`$. For $`v\in \mathbb{Z}^m`$, the vectors $`\phi _{𝐢^{\prime },𝐢}^+(v)=v^+`$ and $`\phi _{𝐢^{\prime },𝐢}^{-}(v)=v^{-}`$ are defined as follows. If $`𝐢`$ and $`𝐢^{\prime }`$ are related by a trivial $`2`$-move and $`l`$ is arbitrary, or if $`𝐢`$ and $`𝐢^{\prime }`$ are related by a non-trivial move in position $`k`$ and $`l\ne k`$, then we set
(3.2)
$$\xi _l(v^+)=\xi _l(v^{-})=\xi _{\sigma _{𝐢^{\prime },𝐢}(l)}(v);$$
for $`l=k`$ in the case of a non-trivial move, we set
(3.3)
$$\xi _k(v^+)=\underset{(a\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a(v)-\xi _k(v);\qquad \xi _k(v^{-})=\underset{(k\to b)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _b(v)-\xi _k(v).$$
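As a concrete reading of (3.2)–(3.3), here is a small Python sketch; the representation of $`\mathrm{\Sigma }(𝐢)`$ as a set of ordered pairs $`(a,b)`$ for $`a\to b`$, and the convention that `k` is `None` for a trivial 2-move, are our own assumptions.

```python
# Sketch of the maps phi^{+/-} of (3.2)-(3.3).  Sigma is a set of ordered
# pairs (a, b) encoding directed edges a -> b (1-based), sigma_list is the
# permutation from the previous sketch, and k is the position of a
# non-trivial move (None for a trivial 2-move).

def phi(v, Sigma, sigma_list, k, sign=+1):
    """Return v^+ (sign=+1) or v^- (sign=-1) in coordinates xi_1..xi_m."""
    m = len(v)
    w = [0] * m
    for l in range(1, m + 1):
        if k is not None and l == k:
            if sign > 0:    # sum of xi_a over edges a -> k, minus xi_k
                w[l - 1] = sum(v[a - 1] for (a, b) in Sigma if b == k) - v[l - 1]
            else:           # sum of xi_b over edges k -> b, minus xi_k
                w[l - 1] = sum(v[b - 1] for (a, b) in Sigma if a == k) - v[l - 1]
        else:               # (3.2): xi_l(v^{+/-}) = xi_{sigma(l)}(v)
            w[l - 1] = v[sigma_list[l - 1] - 1]
    return w
```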
###### Theorem 3.8.
If two reduced words $`𝐢`$ and $`𝐢^{\prime }`$ for the same pair $`(u,v)\in W\times W`$ are related by a $`2`$\- or $`3`$-move then the corresponding linear maps $`\phi _{𝐢^{\prime },𝐢}^+`$ and $`\phi _{𝐢^{\prime },𝐢}^{-}`$ are invertible, and
(3.4)
$$\mathrm{\Gamma }_{𝐢^{\prime }}=\phi _{𝐢^{\prime },𝐢}^+\mathrm{\Gamma }_𝐢(\phi _{𝐢^{\prime },𝐢}^+)^{-1}=\phi _{𝐢^{\prime },𝐢}^{-}\mathrm{\Gamma }_𝐢(\phi _{𝐢^{\prime },𝐢}^{-})^{-1}.$$
Our proof of Theorem 3.8 is based on the following properties of the maps $`\phi _{𝐢^{},𝐢}^\pm `$, which might be of independent interest.
###### Theorem 3.9.
(a) The linear maps $`\phi _{𝐢^{},𝐢}^\pm `$ satisfy:
(3.5)
$$\phi _{𝐢,𝐢^{\prime }}^{-}\phi _{𝐢^{\prime },𝐢}^+=\phi _{𝐢,𝐢^{\prime }}^+\phi _{𝐢^{\prime },𝐢}^{-}=\mathrm{Id}.$$
(b) If the move that relates $`𝐢`$ and $`𝐢^{}`$ is non-trivial in position $`k`$ then
(3.6)
$$\phi _{𝐢,𝐢^{}}^+\phi _{𝐢^{},𝐢}^+=\tau _{k,𝐢}.$$
(c) For any $`𝐢`$-bounded index $`l\in [1,m]`$, we have
(3.7)
$$\phi _{𝐢^{},𝐢}^+\tau _{l,𝐢}=\tau _{\sigma _{𝐢^{},𝐢}(l),𝐢^{}}\phi _{𝐢^{},𝐢}^+$$
unless the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial in position $`k`$, and $`(l\to k)\in \mathrm{\Sigma }(𝐢)`$.
### 3.4. Enumerating $`\mathrm{\Gamma }_{\mathrm{\Sigma },B}(𝔽_2)`$-orbits in $`𝔽_2^\mathrm{\Sigma }`$
Let $`\mathrm{\Sigma }`$ and $`B`$ have the same meaning as in Section 2.2, and let $`\mathrm{\Gamma }=\mathrm{\Gamma }_{\mathrm{\Sigma },B}(𝔽_2)`$ be the corresponding group of linear transformations of the vector space $`𝔽_2^\mathrm{\Sigma }`$.
The following definition is motivated by the results in .
###### Definition 3.10.
A finite (non-directed) graph is $`E_6`$-compatible if it is connected, and it contains an induced subgraph with $`6`$ vertices isomorphic to the Dynkin graph $`E_6`$ (see Fig. 4).
###### Theorem 3.11.
Suppose that the induced subgraph of $`\mathrm{\Sigma }`$ with the set of vertices $`B`$ is $`E_6`$-compatible. Then the number of $`\mathrm{\Gamma }`$-orbits in $`𝔽_2^\mathrm{\Sigma }`$ is equal to
$$2^{\mathrm{\#}(\mathrm{\Sigma }\setminus B)}\left(2+2^{dim(𝔽_2^B\cap \mathrm{Ker}\overline{\mathrm{\Omega }})}\right),$$
where $`\overline{\mathrm{\Omega }}`$ denotes the $`𝔽_2`$-valued bilinear form on $`𝔽_2^\mathrm{\Sigma }`$ obtained by reduction modulo $`2`$ from the form $`\mathrm{\Omega }=\mathrm{\Omega }_\mathrm{\Sigma }`$ in (2.2).
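For small graphs this count can be checked by brute force. The sketch below is our own; over $`𝔽_2`$ only the underlying undirected graph matters, since $`\overline{\mathrm{\Omega }}(e_a,e_b)=1`$ exactly when $`\{a,b\}`$ is an edge, and the labeling of the $`E_6`$ graph in the usage example is one standard choice.

```python
# Brute-force count of Gamma-orbits in F_2^Sigma for a small undirected
# graph on vertices 0..n-1 with transvection vertices B.
from itertools import product

def count_orbits(n, edges, B):
    adj = [[0] * n for _ in range(n)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = 1
    def tau(b, x):                  # x -> x + Omega(x, e_b) e_b over F_2
        y = list(x)
        y[b] ^= sum(x[c] for c in range(n) if adj[b][c]) % 2
        return tuple(y)
    seen, orbits = set(), 0
    for x in product((0, 1), repeat=n):
        if x in seen:
            continue
        orbits += 1
        stack = [x]
        while stack:                # flood-fill the Gamma-orbit of x
            y = stack.pop()
            if y not in seen:
                seen.add(y)
                stack.extend(tau(b, y) for b in B)
    return orbits

# For Sigma = B = the E_6 graph itself (a chain 0-1-2-3-4 with 5 attached
# to 2), the form is nondegenerate on F_2^B, so the theorem predicts
# 2^0 * (2 + 2^0) = 3 orbits.
print(count_orbits(6, [(0,1),(1,2),(2,3),(3,4),(2,5)], range(6)))  # 3
```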
Theorem 3.11 has the following corollary which generalizes the main enumeration result in .
###### Corollary 3.12.
Let $`u`$ and $`v`$ be two elements of a simply-laced Coxeter group $`W`$, and suppose that for some reduced word $`𝐢\in R(u,v)`$, the induced subgraph of $`\mathrm{\Sigma }(𝐢)`$ with the set of vertices $`B(𝐢)`$ is $`E_6`$-compatible. Then the number of $`\mathrm{\Gamma }_𝐢(𝔽_2)`$-orbits in $`𝔽_2^m`$ is equal to $`3\cdot 2^s`$, where $`s`$ is the number of indices $`i\in \mathrm{\Pi }`$ such that some (equivalently, any) reduced word for $`(u,v)`$ has an entry $`\pm i`$.
## 4. Proofs of results in Section 3.2
### 4.1. Proof of Proposition 3.2
By the definition of $`𝐢`$-bounded indices, we have $`k^{-}\in [1,m]`$ for any $`k\in S`$. Now pick $`b\in S`$ with the smallest value of $`b^{-}`$, and set $`a=b^{-}`$. Clearly, $`a\notin S`$, and $`\{a,b\}`$ is a horizontal edge in $`\mathrm{\Sigma }(𝐢)`$. We claim that $`b`$ is the only vertex in $`S`$ such that $`\{a,b\}\in \mathrm{\Sigma }(𝐢)`$. Indeed, if $`\{a,c\}\in \mathrm{\Sigma }(𝐢)`$ for some $`c\ne b`$ then $`c^{-}<a`$, in view of Definition 3.1. Because of the way $`b`$ was chosen, we have $`c\notin S`$, as required.
### 4.2. Proof of Theorem 3.3
In the course of the proof, we fix a reduced word $`𝐢\in R(u,v)`$, and an edge $`\{i,j\}\in \mathrm{\Pi }`$; we shall refer to the $`(i,j)`$-strip of $`\mathrm{\Sigma }(𝐢)`$ as simply the strip. For any vertex $`A=A_k=(k,y)`$ in the strip, we set $`y(A)=y`$, and $`\epsilon (A)=\epsilon (i_k)`$; we call $`y(A)`$ the *level*, and $`\epsilon (A)`$ the *sign* of $`A`$. We also set
$$c(A)=y(A)\epsilon (A),$$
and call $`c(A)`$ the *charge* of a vertex $`A`$. Finally, we linearly order the vertices by setting $`A_k\prec A_l`$ if $`k<l`$, i.e., if the vertex $`A_k`$ is to the left of $`A_l`$. In these terms, one can describe inclined edges in the strip as follows.
###### Lemma 4.1.
A vertex $`B`$ is the left end of an inclined edge in the strip if and only if it satisfies the following two conditions:
(1) $`B`$ is not the leftmost vertex in the strip, and the preceding vertex $`A`$ has opposite charge: $`c(A)=-c(B)`$;
(2) there is a vertex $`C`$ of opposite level, $`y(C)=-y(B)`$, that lies to the right of $`B`$.
Under these conditions, an inclined edge with the left end $`B`$ is unique, and its right end is the leftmost vertex $`C`$ satisfying (2).
This is just a reformulation of conditions (ii) and (iii) in Definition 3.1.
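Lemma 4.1 is effectively an algorithm for listing the inclined edges of a strip. Here is a small Python sketch; the encoding of the strip as a left-to-right list of triples `(k, y, eps)` is our own convention.

```python
# Inclined edges of a strip per Lemma 4.1.  The strip is a list of vertices
# (k, y, eps) ordered left to right, with level y and sign eps in {+1, -1};
# the charge of a vertex is y * eps.

def inclined_edges(strip):
    edges = []
    for i in range(1, len(strip)):
        kB, yB, eB = strip[i]
        kA, yA, eA = strip[i - 1]
        if yA * eA != -yB * eB:            # condition (1) fails
            continue
        for kC, yC, _ in strip[i + 1:]:    # condition (2): the leftmost
            if yC == -yB:                  # opposite-level vertex on the right
                edges.append((kB, kC))
                break
    return edges
```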
###### Lemma 4.2.
Suppose $`A\prec C\prec C^{\prime }`$ are three vertices in the strip such that $`c(A)=-c(C)`$, and $`y(C^{\prime })=-y(C)`$. Then there exists a vertex $`B`$ such that $`A\prec B\preceq C`$, and $`B`$ is the left end of an inclined edge in the strip.
###### Proof.
Let $`B`$ be the leftmost vertex such that $`A\prec B\preceq C`$ and $`c(B)=-c(A)`$. Clearly, $`B`$ satisfies condition (1) in Lemma 4.1. It remains to show that $`B`$ also satisfies condition (2); that is, we need to find a vertex of opposite level to $`B`$ that lies to the right of $`B`$. Depending on the level of $`B`$, either $`C`$ or $`C^{\prime }`$ is such a vertex, and we are done. ∎
Now everything is ready for the proof of Theorem 3.3. To prove part (a), assume that $`\{B,C\}`$ and $`\{B^{\prime },C^{\prime }\}`$ are two inclined edges that cross each other inside the strip. Without loss of generality, assume that $`B\prec C`$, $`B^{\prime }\prec C^{\prime }`$, and $`C\preceq C^{\prime }`$. Then we must have $`B^{\prime }\prec C`$ (otherwise, our inclined edges would not cross). Since $`\{B^{\prime },C^{\prime }\}`$ is an inclined edge, and $`B^{\prime }\prec C\preceq C^{\prime }`$, Lemma 4.1 implies that $`y(C)=y(B^{\prime })`$. Therefore, $`y(B)=-y(C)=-y(B^{\prime })`$. Again applying Lemma 4.1 to the inclined edge $`\{B^{\prime },C^{\prime }\}`$, we conclude that $`B\prec B^{\prime }`$, i.e., we must have $`B\prec B^{\prime }\prec C\preceq C^{\prime }`$. But then, by the same lemma, $`\{B,C\}`$ cannot be an inclined edge, providing a desired contradiction.
To prove part (b), consider two consecutive inclined edges $`\{B,C\}`$ and $`\{B^{\prime },C^{\prime }\}`$. Again we can assume without loss of generality that $`B\prec C`$, $`B^{\prime }\prec C^{\prime }`$, and $`C\preceq C^{\prime }`$. Let $`P`$ be the boundary of the polygon with vertices $`B,C,B^{\prime }`$, and $`C^{\prime }`$. By Lemma 4.1, the leftmost vertex of $`P`$ is $`B`$, the rightmost vertex is $`C^{\prime }`$, and $`P`$ does not contain a vertex $`D`$ such that $`B^{\prime }\prec D\prec C^{\prime }`$; in particular, we have either $`C\preceq B^{\prime }`$ or $`C=C^{\prime }`$. Now we make the following crucial observation: all the vertices $`D`$ on $`P`$ such that $`B\preceq D\prec B^{\prime }`$ must have the same charge $`c(D)=c(B)`$. Indeed, assume that $`c(D)=-c(B)`$ for some $`D`$ with $`B\prec D\prec B^{\prime }`$. Then Lemma 4.2 implies that some $`B^{\prime \prime }`$ with $`B\prec B^{\prime \prime }\preceq D`$ is the left end of an inclined edge; but this contradicts our assumption that $`\{B,C\}`$ and $`\{B^{\prime },C^{\prime }\}`$ are two *consecutive* inclined edges. We see that $`c(D)=c(B)`$ for any vertex $`D\in P\setminus \{B^{\prime },C^{\prime }\}`$. Combining this fact with condition (1) in Lemma 4.1 applied to the inclined edge $`\{B^{\prime },C^{\prime }\}`$ with the left end $`B^{\prime }`$, we conclude that $`c(B^{\prime })=-c(B)`$. Remembering the definition of charge, the above statements can be reformulated as follows: $`B^{\prime }`$ has the same (resp. opposite) sign as all vertices of opposite (resp. the same) level in $`P\setminus \{C^{\prime }\}`$. Using the definition of directions of edges in Definition 3.1, we obtain:
1. Horizontal edges on opposite sides of $`P`$ are directed in opposite ways, since their left ends have opposite signs.
2. Suppose $`B^{}`$ is the right end of a horizontal edge $`\{A,B^{}\}`$ in $`P`$. Then exactly one of the edges $`\{A,B^{}\}`$ and $`\{B^{},C^{}\}`$ is directed towards $`B^{}`$ since their left ends $`A`$ and $`B^{}`$ have opposite signs.
3. The same argument shows that if $`C^{}`$ is the right end of a horizontal edge $`\{A,C^{}\}`$ in $`P`$ then exactly one of the edges $`\{A,C^{}\}`$ and $`\{B^{},C^{}\}`$ is directed towards $`C^{}`$.
4. Finally, if $`B`$ is the left end of a horizontal edge $`\{B,D\}`$ in $`P`$ then exactly one of the edges $`\{B,C\}`$ and $`\{B,D\}`$ is directed towards $`B`$.
These facts imply that $`P`$ is a directed cycle, which completes the proof of Theorem 3.3.
### 4.3. Proof of Theorem 3.5
Let us call a pair of indices $`\{a,b\}`$ *exceptional* (for $`𝐢`$ and $`𝐢^{}`$) if it violates (3.1). We need to show that exceptional pairs are precisely those in two exceptional cases in Theorem 3.5; to do this, we shall examine the relationship between the corresponding strips in $`\mathrm{\Sigma }(𝐢)`$ and $`\mathrm{\Sigma }(𝐢^{})`$. Let us consider the following three cases:
Case 1 (trivial $`2`$-move). Suppose $`i_k=i_{k-1}^{\prime }=i_0`$, $`i_{k-1}=i_k^{\prime }=j_0`$, and $`i_l=i_l^{\prime }`$ for $`l\notin \{k-1,k\}`$, where $`i_0,j_0\in \stackrel{~}{\mathrm{\Pi }}`$ are such that $`|i_0|\ne |j_0|`$ and $`\{i_0,j_0\}\notin \stackrel{~}{\mathrm{\Pi }}`$.
If both $`i`$ and $`j`$ are different from $`|i_0|`$ and $`|j_0|`$ then the strip $`\mathrm{\Sigma }_{i,j}(𝐢)`$ is identical to $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$, and so does not contain exceptional pairs.
If say $`i=|i_0|`$ but $`j\ne |j_0|`$ then the only vertex in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ but not in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$ is $`A_k`$ (in the notation of Section 4.2), while the only vertex in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$ but not in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ is $`A_{k-1}^{\prime }=A_{\sigma (k)}^{\prime }`$. The vertex $`A_k`$ has the same level and sign, and so the same charge, as the vertex $`A_{\sigma (k)}^{\prime }`$ in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$; by Lemma 4.1, there are no exceptional pairs in the strip $`\mathrm{\Sigma }_{i,j}(𝐢)`$.
Finally, suppose that $`\{i,j\}=\{|i_0|,|j_0|\}`$; in particular, in this case we have $`\{|i_0|,|j_0|\}\in \mathrm{\Pi }`$, hence $`\epsilon (i_0)=-\epsilon (j_0)`$. Now the only vertices in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ but not in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$ are $`A_k`$ and $`A_{k-1}`$, while the only vertices in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$ but not in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ are $`A_{k-1}^{\prime }=A_{\sigma (k)}^{\prime }`$ and $`A_k^{\prime }=A_{\sigma (k-1)}^{\prime }`$. Since $`A_k`$ and $`A_{k-1}`$ are of opposite level and opposite sign, they have the same charge, which is also equal to the charge of $`A_{\sigma (k-1)}^{\prime }`$ and $`A_{\sigma (k)}^{\prime }`$. Again using Lemma 4.1, we see that the strip in question also does not contain exceptional pairs.
Case 2 (non-trivial $`2`$-move). Suppose $`i_k=i_{k-1}^{\prime }=i_0\in \stackrel{~}{\mathrm{\Pi }}`$, $`i_{k-1}=i_k^{\prime }=-i_0`$, and $`i_l=i_l^{\prime }`$ for $`l\notin \{k-1,k\}`$. Interchanging if necessary $`𝐢`$ and $`𝐢^{\prime }`$, we can and will assume that $`i_0\in \mathrm{\Pi }`$. Clearly, an exceptional pair can only belong to an $`(i,j)`$-strip with $`i=i_0`$. In our case, the location of all vertices in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ and $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$ is the same; the only difference between the two strips is that the vertices $`A_{k-1}`$ and $`A_k`$ in $`\mathrm{\Sigma }_{i,j}(𝐢)`$ have opposite signs and hence opposite charges to their counterparts in $`\mathrm{\Sigma }_{i,j}(𝐢^{\prime })`$. It follows that exceptional pairs of vertices of the same level are precisely horizontal edges containing $`A_k`$, i.e., $`\{A_{k-1},A_k\}`$ and $`\{A_k,C\}`$, where $`C`$ is the right neighbor of $`A_k`$ of the same level (note that $`C`$ does not necessarily exist). Since $`\epsilon (i_k)=\epsilon (i_{k-1}^{\prime })=+1`$, and $`\epsilon (i_{k-1})=\epsilon (i_k^{\prime })=-1`$, we have
$$(A_k\to A_{k-1})\in \mathrm{\Sigma }(𝐢),\qquad (A_k\to C)\in \mathrm{\Sigma }(𝐢),$$
$$(A_{k-1}^{\prime }\to A_k^{\prime })\in \mathrm{\Sigma }(𝐢^{\prime }),\qquad (C^{\prime }\to A_k^{\prime })\in \mathrm{\Sigma }(𝐢^{\prime }),$$
so both pairs $`\{A_{k-1},A_k\}`$ and $`\{A_k,C\}`$ fall into the first exceptional case in Theorem 3.5.
Let us now describe exceptional pairs corresponding to inclined edges. Let $`B`$ be the vertex of the opposite level to $`A_k`$ and closest to $`A_k`$ from the right (as the vertex $`C`$ above, $`B`$ does not necessarily exist). By Lemma 4.1, the left end of an exceptional inclined pair can only be $`A_{k-1}`$, $`A_k`$, or the leftmost of $`B`$ and $`C`$; furthermore, the corresponding inclined edges can only be $`\{A_{k-1},B\}`$, $`\{A_k,B\}`$, or $`\{B,C\}`$. We claim that all these three pairs are indeed exceptional, and each of them falls into one of the exceptional cases in Theorem 3.5.
Let us start with $`\{B,C\}`$. Since $`A_k`$ is the preceding vertex to the leftmost member of $`\{B,C\}`$, and it has opposite charges in the two strips, Lemma 4.1 implies that $`\{B,C\}`$ is an edge in precisely one of the strips. By Theorem 3.3 (b), the triangle with vertices $`A_k`$, $`B`$, and $`C`$ is a directed cycle in the corresponding strip. Thus the pair $`\{B,C\}`$ falls into the second exceptional case in Theorem 3.5.
The same argument shows that $`\{A_{k-1},B\}`$ falls into the second exceptional case in Theorem 3.5 provided one of $`A_{k-1}`$ and $`B`$ is $`𝐢`$-bounded, i.e., $`A_{k-1}`$ is not the leftmost vertex in the strip. As for $`\{A_k,B\}`$, it is an edge in both strips, and it has opposite directions in them because its left end $`A_k`$ has opposite signs there. Thus $`\{A_k,B\}`$ falls into the first exceptional case in Theorem 3.5.
It remains to show that the exceptional pairs (horizontal and inclined) just discussed exhaust all possibilities for the two exceptional cases in Theorem 3.5. This is clear because, by the above analysis, the only possible edges through $`A_k`$ in $`\mathrm{\Sigma }(𝐢)`$ are $`(A_k\to A_{k-1})`$, $`(A_k\to C)`$, and $`(B\to A_k)`$, with $`B`$ of the kind described above.
Case 3 ($`3`$-move). Suppose $`i_k=i_{k-2}=i_{k-1}^{\prime }=i_0`$, $`i_{k-1}=i_k^{\prime }=i_{k-2}^{\prime }=j_0`$ for some $`\{i_0,j_0\}\in \mathrm{\Pi }`$, and $`i_l=i_l^{\prime }`$ for $`l\notin \{k-2,k-1,k\}`$ (the case when $`\{i_0,j_0\}\subset -\mathrm{\Pi }`$ is totally similar). As in the previous case, we need to describe all exceptional pairs.
First, an exceptional pair can only belong to an $`(i,j)`$-strip with at least one of $`i`$ and $`j`$ equal to $`i_0`$ or $`j_0`$. Next let us compare the $`(i_0,j_0)`$-strips in $`\mathrm{\Sigma }(𝐢)`$ and $`\mathrm{\Sigma }(𝐢^{\prime })`$. The location of all vertices in these two strips is the same, with the exception of $`A_{k-2},A_{k-1}`$, and $`A_k`$ in the former strip, and their counterparts $`A_{k-2}^{\prime }=A_{\sigma (k-1)}^{\prime },A_{k-1}^{\prime }=A_{\sigma (k-2)}^{\prime }`$, and $`A_k^{\prime }`$ in the latter strip. Each of the six exceptional vertices has sign $`+1`$; so its level is equal to its charge. These charges (or levels) are given as follows:
$$c(A_{k-2})=c(A_{\sigma (k-2)}^{\prime })=c(A_k)=1,\qquad c(A_{k-1})=c(A_{\sigma (k-1)}^{\prime })=c(A_k^{\prime })=-1.$$
Let $`B`$ (resp. $`B^{\prime }`$) denote the vertex in both strips which is the closest from the right to $`A_k`$ on the same (resp. opposite) level; note that $`B`$ or $`B^{\prime }`$ may not exist. Since the trapezoid $`T`$ with vertices $`A_{k-2},A_{k-1},B^{\prime }`$, and $`B`$ in $`\mathrm{\Sigma }_{i_0,j_0}(𝐢)`$ is in the same relative position to all outside vertices as the trapezoid $`T^{\prime }`$ with vertices $`A_{\sigma (k-2)}^{\prime },A_{\sigma (k-1)}^{\prime },B^{\prime }`$, and $`B`$ in $`\mathrm{\Sigma }_{i_0,j_0}(𝐢^{\prime })`$, it follows that every exceptional pair is contained in $`T`$. An inspection using Lemma 4.1 shows that $`T`$ contains the directed edges
$$A_{k-2}\to A_k\to A_{k-1}\to B^{\prime }\to A_k\to B$$
and does not contain any of the edges $`\{A_{k-2},B\}`$, $`\{A_{k-2},B^{\prime }\}`$, or $`\{A_{k-1},B\}`$. Similarly (or by interchanging $`𝐢`$ and $`𝐢^{\prime }`$), we conclude that $`T^{\prime }`$ contains the directed edges
$$A_{\sigma (k-1)}^{\prime }\to A_k^{\prime }\to A_{\sigma (k-2)}^{\prime }\to B\to A_k^{\prime }\to B^{\prime }$$
and does not contain any of the edges $`\{A_{\sigma (k-1)}^{\prime },B\}`$, $`\{A_{\sigma (k-1)}^{\prime },B^{\prime }\}`$, or $`\{A_{\sigma (k-2)}^{\prime },B^{\prime }\}`$. Furthermore, $`\{B,B^{\prime }\}`$ is an edge in precisely one of the strips (since the preceding vertices $`A_k`$ and $`A_k^{\prime }`$ have opposite charges); and precisely one of the pairs $`\{A_{k-2},A_{k-1}\}`$ and $`\{A_{\sigma (k-1)}^{\prime },A_{\sigma (k-2)}^{\prime }\}`$ is an edge in its strip, provided $`A_{k-2}`$ is not the leftmost vertex (since their left ends $`A_{k-2}`$ and $`A_{\sigma (k-1)}^{\prime }`$ have opposite charges).
Comparing this information for the two trapezoids, we see that the exceptional pairs in $`T`$ are all pairs of vertices in $`T`$ with the exception of the two diagonals $`\{A_{k-2},B^{\prime }\}`$ and $`\{A_{k-1},B\}`$ (and also of $`\{A_{k-2},A_{k-1}\}`$ if $`A_{k-2}`$ is the leftmost vertex in the strip). By inspection based on Theorem 3.3 (b), all these exceptional pairs fall into the two exceptional cases in Theorem 3.5.
A similar (but much simpler) analysis shows that any $`(i,j)`$-strip with precisely one of $`i`$ and $`j`$ belonging to $`\{i_0,j_0\}`$ does not contain extra exceptional pairs, and also has no inclined edges through $`A_k`$ or $`A_k^{\prime }`$. We conclude that all the exceptional pairs are contained in the above trapezoid $`T`$. The fact that these exceptional pairs exhaust all possibilities for the two exceptional cases in Theorem 3.5 is clear because, by the above analysis, the only edges through $`A_k`$ in $`\mathrm{\Sigma }(𝐢)`$ are those connecting $`A_k`$ with the vertices of $`T`$. Theorem 3.5 is proved.
## 5. Proofs of results in Section 3.3
We have already noticed that Theorem 3.7 follows from Theorem 3.8. Let us first prove Theorem 3.9 and then deduce Theorem 3.8 from it.
### 5.1. Proof of Theorem 3.9
We fix reduced words $`𝐢`$ and $`𝐢^{\prime }`$ related by a 2- or 3-move, and abbreviate $`\sigma =\sigma _{𝐢^{\prime },𝐢}=\sigma _{𝐢,𝐢^{\prime }}`$ and $`\phi ^+=\phi _{𝐢^{\prime },𝐢}^+`$. Let us first prove parts (a) and (b). We shall only prove the first equality in (3.5); the proof of the second one and of (3.6) is completely similar. Let $`v\in \mathbb{Z}^m`$, $`v^+=\phi ^+(v)`$, and $`v^{\prime }=\phi _{𝐢,𝐢^{\prime }}^{-}(v^+)`$; thus we need to show that $`v=v^{\prime }`$, i.e., that $`\xi _l(v)=\xi _l(v^{\prime })`$ for all $`l\in [1,m]`$. Note that the permutation $`\sigma `$ is an involution. In view of (3.2), this implies the desired equality $`\xi _l(v)=\xi _l(v^{\prime })`$ in all the cases except the following one: the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial in position $`k`$, and $`l=k`$. To deal with this case, we use the first exceptional case in Theorem 3.5, which we can write as
$$(k\to b)\in \mathrm{\Sigma }(𝐢^{\prime })\iff (\sigma (b)\to k)\in \mathrm{\Sigma }(𝐢).$$
Combining this with the definitions (3.2) and (3.3), we obtain
$$\xi _k(v^{\prime })=\underset{(k\to b)\in \mathrm{\Sigma }(𝐢^{\prime })}{\sum }\xi _b(v^+)-\xi _k(v^+)$$
$$=\underset{(\sigma (b)\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _{\sigma (b)}(v)-\left(\underset{(a\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a(v)-\xi _k(v)\right)=\xi _k(v),$$
as required.
We deduce part (c) from the following lemma, which says that the maps $`(\phi _{𝐢^{\prime },𝐢}^\pm )^{*}`$ induced by $`\phi _{𝐢^{\prime },𝐢}^\pm `$ “almost” transform the form $`\mathrm{\Omega }_{𝐢^{\prime }}`$ into $`\mathrm{\Omega }_𝐢`$.
###### Lemma 5.1.
If the move that relates $`𝐢`$ and $`𝐢^{}`$ is trivial then
$$(\phi _{𝐢^{\prime },𝐢}^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=(\phi _{𝐢^{\prime },𝐢}^{-})^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=\mathrm{\Omega }_𝐢.$$
If the move that relates $`𝐢`$ and $`𝐢^{}`$ is non-trivial in position $`k`$ then
(5.1)
$$(\phi _{𝐢^{\prime },𝐢}^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=(\phi _{𝐢^{\prime },𝐢}^{-})^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=\mathrm{\Omega }_𝐢-\underset{(a\to k\to b)\in \mathrm{\Sigma }(𝐢),\;a,b\notin B(𝐢)}{\sum }\xi _a\wedge \xi _b.$$
###### Proof.
We will only deal with $`(\phi ^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=(\phi _{𝐢^{\prime },𝐢}^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})`$; the form $`(\phi _{𝐢^{\prime },𝐢}^{-})^{*}(\mathrm{\Omega }_{𝐢^{\prime }})`$ can be treated in the same way. By the definition,
$$(\phi ^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=\underset{(a^{\prime }\to b^{\prime })\in \mathrm{\Sigma }(𝐢^{\prime })}{\sum }(\phi ^+)^{*}\xi _{a^{\prime }}\wedge (\phi ^+)^{*}\xi _{b^{\prime }}.$$
The forms $`(\phi ^+)^{*}\xi _{a^{\prime }}`$ are given by (3.2) and (3.3). In particular, if $`𝐢`$ and $`𝐢^{\prime }`$ are related by a trivial move then $`(\phi ^+)^{*}\xi _{a^{\prime }}=\xi _{\sigma (a^{\prime })}`$ for any $`a^{\prime }\in [1,m]`$; by Theorem 3.5, in this case we have
$$(\phi ^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=\underset{(a\to b)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a\wedge \xi _b$$
as claimed.
Now suppose that $`𝐢`$ and $`𝐢^{}`$ are related by a non-trivial move in position $`k`$. Then we have
$$(\phi ^+)^{*}(\mathrm{\Omega }_{𝐢^{\prime }})=\underset{(\sigma (a)\to \sigma (b))\in \mathrm{\Sigma }(𝐢^{\prime }),\;a,b\ne k}{\sum }\xi _a\wedge \xi _b+\underset{(k\to \sigma (a^{\prime }))\in \mathrm{\Sigma }(𝐢^{\prime })}{\sum }\left(\underset{(a\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a-\xi _k\right)\wedge \xi _{a^{\prime }}+\underset{(\sigma (b)\to k)\in \mathrm{\Sigma }(𝐢^{\prime })}{\sum }\xi _b\wedge \left(\underset{(a\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a-\xi _k\right).$$
Using the second exceptional case in Theorem 3.5, we can rewrite the first summand as
$$\underset{(a\to b)\in \mathrm{\Sigma }(𝐢),\;a,b\ne k}{\sum }\xi _a\wedge \xi _b+\underset{(a\to k\to b)\in \mathrm{\Sigma }(𝐢),\;\{a,b\}\cap B(𝐢)\ne \varnothing }{\sum }\xi _a\wedge \xi _b.$$
Similarly, using the first exceptional case in Theorem 3.5, we can rewrite the last two summands as
$$\underset{(a\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a\wedge \xi _k+\underset{(k\to b)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _k\wedge \xi _b-\underset{(a\to k\to b)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a\wedge \xi _b$$
(note that the missing term
$$\underset{(a\to k)\in \mathrm{\Sigma }(𝐢),\;(a^{\prime }\to k)\in \mathrm{\Sigma }(𝐢)}{\sum }\xi _a\wedge \xi _{a^{\prime }}$$
is equal to $`0`$). Adding up the last two sums, we obtain (5.1). ∎
Now everything is ready for the proof of Theorem 3.9 (c). Since $`l`$ is assumed to be $`𝐢`$-bounded, Lemma 5.1 implies that $`\mathrm{\Omega }_𝐢(v,e_l)=\mathrm{\Omega }_{𝐢^{\prime }}(\phi ^+(v),\phi ^+(e_l))`$ for any $`v\in \mathbb{Z}^m`$. On the other hand, since the case when the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial in position $`k`$, and $`(l\to k)\in \mathrm{\Sigma }(𝐢)`$, is excluded, we have $`\phi ^+(e_l)=\pm e_{\sigma (l)}`$ (with the minus sign for $`l=k`$ only). Therefore, our assumptions on $`l`$ imply that
$$\mathrm{\Omega }_𝐢(v,e_l)\phi ^+(e_l)=\mathrm{\Omega }_{𝐢^{\prime }}(\phi ^+(v),e_{\sigma (l)})e_{\sigma (l)}.$$
Remembering the definition (2.3) of symplectic transvections, we conclude that
$$(\tau _{\sigma (l),𝐢^{\prime }}\phi ^+)(v)=\phi ^+(v)-\mathrm{\Omega }_{𝐢^{\prime }}(\phi ^+(v),e_{\sigma (l)})e_{\sigma (l)}$$
$$=\phi ^+(v)-\mathrm{\Omega }_𝐢(v,e_l)\phi ^+(e_l)=(\phi ^+\tau _{l,𝐢})(v),$$
as required. This completes the proof of Theorem 3.9.
###### Remark 5.2.
It is possible to modify all skew-symmetric forms $`\mathrm{\Omega }_𝐢`$ without changing the corresponding groups $`\mathrm{\Gamma }_𝐢`$ in such a way that the modified forms will be preserved by the maps $`(\phi _{𝐢^{\prime },𝐢}^\pm )^{*}`$. There are several ways to do it. Here is one “canonical” solution: replace each $`\mathrm{\Omega }_𝐢`$ by the form
$$\stackrel{~}{\mathrm{\Omega }}_𝐢=\mathrm{\Omega }_𝐢-\frac{1}{2}\sum \epsilon (i_k)\xi _k\wedge \xi _l,$$
where the sum is over all pairs of $`𝐢`$-unbounded indices $`k<l`$ such that $`\{|i_k|,|i_l|\}\in \mathrm{\Pi }`$. It follows easily from Lemma 5.1 that $`(\phi _{𝐢^{\prime },𝐢}^\pm )^{*}(\stackrel{~}{\mathrm{\Omega }}_{𝐢^{\prime }})=\stackrel{~}{\mathrm{\Omega }}_𝐢`$. Unfortunately, the forms $`\stackrel{~}{\mathrm{\Omega }}_𝐢`$ are not defined over $`\mathbb{Z}`$; in particular, they cannot be reduced to bilinear forms over $`𝔽_2`$.
### 5.2. Proof of Theorem 3.8
The fact that $`\phi _{𝐢^{\prime },𝐢}^+`$ and $`\phi _{𝐢^{\prime },𝐢}^{-}`$ are invertible follows from (3.5). To prove (3.4), it remains to show that $`\phi _{𝐢^{\prime },𝐢}^+\tau _{l,𝐢}(\phi _{𝐢^{\prime },𝐢}^+)^{-1}\in \mathrm{\Gamma }_{𝐢^{\prime }}`$ for any $`𝐢`$-bounded index $`l\in [1,m]`$. This follows from (3.7) unless the move that relates $`𝐢`$ and $`𝐢^{\prime }`$ is non-trivial in position $`k`$, and $`(l\to k)\in \mathrm{\Sigma }(𝐢)`$. In this exceptional case, we conclude by interchanging $`𝐢`$ and $`𝐢^{\prime }`$ in (3.7) that
$$\phi _{𝐢,𝐢^{\prime }}^+\tau _{\sigma _{𝐢^{\prime },𝐢}(l),𝐢^{\prime }}=\tau _{l,𝐢}\phi _{𝐢,𝐢^{\prime }}^+.$$
Using (3.6), we obtain that
$$\phi _{𝐢^{\prime },𝐢}^+\tau _{l,𝐢}(\phi _{𝐢^{\prime },𝐢}^+)^{-1}=(\phi _{𝐢^{\prime },𝐢}^+\phi _{𝐢,𝐢^{\prime }}^+)\tau _{\sigma _{𝐢^{\prime },𝐢}(l),𝐢^{\prime }}(\phi _{𝐢^{\prime },𝐢}^+\phi _{𝐢,𝐢^{\prime }}^+)^{-1}=\tau _{k,𝐢^{\prime }}\tau _{\sigma _{𝐢^{\prime },𝐢}(l),𝐢^{\prime }}\tau _{k,𝐢^{\prime }}^{-1}\in \mathrm{\Gamma }_{𝐢^{\prime }},$$
as required. This completes the proofs of Theorems 3.8 and 3.7.
## 6. Proofs of results in Section 3.4
### 6.1. Description of $`\mathrm{\Gamma }`$-orbits
In this section we shall only work over the field $`𝔽_2`$. Therefore we find it convenient to change our notation a little bit. Let $`V`$ be a finite-dimensional vector space over $`𝔽_2`$ with a skew-symmetric $`𝔽_2`$-valued form $`\mathrm{\Omega }`$ (i.e., $`\mathrm{\Omega }(v,v)=0`$ for any $`v\in V`$). For any $`v\in V`$, let $`\tau _v:V\to V`$ denote the corresponding symplectic transvection acting by $`\tau _v(x)=x-\mathrm{\Omega }(x,v)v`$. Fix a linearly independent subset $`B\subset V`$, and let $`\mathrm{\Gamma }`$ be the subgroup of $`GL(V)`$ generated by the transvections $`\tau _b`$ for $`b\in B`$. We make $`B`$ the set of vertices of a graph, with $`\{b,b^{\prime }\}`$ an edge whenever $`\mathrm{\Omega }(b,b^{\prime })=1`$.
We shall deduce Theorem 3.11 from the following description of the $`\mathrm{\Gamma }`$-orbits in $`V`$ in the case when the graph $`B`$ is $`E_6`$-compatible (see Definition 3.10).
Let $`U\subset V`$ be the linear span of $`B`$. The group $`\mathrm{\Gamma }`$ preserves each parallel translate $`(v+U)\in V/U`$ of $`U`$ in $`V`$, so we only need to describe $`\mathrm{\Gamma }`$-orbits in each $`v+U`$.
Let us first describe one-element orbits, i.e., $`\mathrm{\Gamma }`$-fixed points in each “slice” $`v+U`$. Let $`V^\mathrm{\Gamma }\subset V`$ denote the subspace of $`\mathrm{\Gamma }`$-invariant vectors, and $`K\subset U`$ denote the kernel of the restriction $`\mathrm{\Omega }|_U`$.
###### Proposition 6.1.
If $`\mathrm{\Omega }(K,v+U)=0`$ then $`(v+U)\cap V^\mathrm{\Gamma }`$ is a parallel translate of $`K`$; otherwise, this intersection is empty.
###### Proof.
Suppose the intersection $`(v+U)\cap V^\mathrm{\Gamma }`$ is non-empty; without loss of generality, we can assume that $`v`$ is $`\mathrm{\Gamma }`$-invariant. By the definition, $`v\in V^\mathrm{\Gamma }`$ if and only if $`\mathrm{\Omega }(u,v)=0`$ for all $`u\in U`$. In particular, $`\mathrm{\Omega }(K,v)=0`$, hence $`\mathrm{\Omega }(K,v+U)=0`$. Furthermore, an element $`v+u`$ of $`v+U`$ is $`\mathrm{\Gamma }`$-invariant if and only if $`u\in K`$, and we are done. ∎
Following , we choose a function $`Q:V\to 𝔽_2`$ satisfying the following properties:
(6.1)
$$Q(u+v)=Q(u)+Q(v)+\mathrm{\Omega }(u,v)\quad (u,v\in V),\qquad Q(b)=1\quad (b\in B).$$
(Clearly, these properties uniquely determine the restriction of $`Q`$ to $`U`$.) An easy check shows that $`Q(\tau _v(x))=Q(x)`$ whenever $`Q(v)=1`$; in particular, the function $`Q`$ is $`\mathrm{\Gamma }`$-invariant.
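On $`U`$ the two properties in (6.1) determine $`Q`$ explicitly: writing $`u=\sum _{a\in S}b_a`$ for a subset $`S`$ of the basis $`B`$ and iterating (6.1), one gets $`Q(u)=|S|+\sum _{a<c\in S}\mathrm{\Omega }(b_a,b_c)`$ modulo 2. A small sketch, with our own conventions for representing vectors:

```python
# Q on the span of B, per (6.1): a vector is a 0/1 tuple of coordinates in
# the basis B, and omega(a, c) gives Omega on the basis vectors a, c.

def Q(coords, omega):
    support = [a for a in range(len(coords)) if coords[a]]
    val = len(support) % 2                 # Q(b) = 1 for each b in B
    for i, a in enumerate(support):
        for c in support[i + 1:]:          # cross terms Omega(b_a, b_c)
            val ^= omega(a, c)
    return val
```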
Now everything is ready for a description of $`\mathrm{\Gamma }`$-orbits in $`V`$.
###### Theorem 6.2.
If the graph $`B`$ is $`E_6`$-compatible then $`\mathrm{\Gamma }`$ has precisely two orbits in each set $`(v+U)\setminus V^\mathrm{\Gamma }`$. These two orbits are intersections of $`(v+U)\setminus V^\mathrm{\Gamma }`$ with the level sets $`Q^{-1}(0)`$ and $`Q^{-1}(1)`$ of $`Q`$.
The proof will be given in the next section. Let us show that this theorem implies Theorem 3.11 and Corollary 3.12.
###### Corollary 6.3.
If the graph $`B`$ is $`E_6`$-compatible then the number of $`\mathrm{\Gamma }`$-orbits in $`V`$ is equal to $`2^{dim(V/U)}(2+2^{dim(U\cap \mathrm{Ker}\mathrm{\Omega })})`$; in particular, if $`U\cap \mathrm{Ker}\mathrm{\Omega }=\{0\}`$ then this number is $`3\cdot 2^{dim(V/U)}`$.
###### Proof.
By Proposition 6.1 and Theorem 6.2, each slice $`v+U`$ with $`\mathrm{\Omega }(K,v+U)=0`$ splits into $`2^{dimK}+2`$ $`\mathrm{\Gamma }`$-orbits, while each of the remaining slices splits into $`2`$ orbits. There are $`2^{dim(V^\mathrm{\Gamma }/K)}`$ slices of the first kind and $`2^{dim(V/U)}-2^{dim(V^\mathrm{\Gamma }/K)}`$ slices of the second kind. Thus the number of $`\mathrm{\Gamma }`$-orbits in $`V`$ is equal to
$$2^{dim(V^\mathrm{\Gamma }/K)}(2^{dimK}+2)+\left(2^{dim(V/U)}-2^{dim(V^\mathrm{\Gamma }/K)}\right)\cdot 2.$$
Our statement follows by simplifying this answer. ∎
Now Theorem 3.11 is just a reformulation of this Corollary. As for Corollary 3.12, one only needs to show that its assumptions imply that $`U\cap \mathrm{Ker}\mathrm{\Omega }=\{0\}`$. But this follows at once from Proposition 3.2.
### 6.2. Proof of Theorem 6.2
We split the proof into several lemmas. Let $`E\subset U`$ be the linear span of $`6`$ vectors from $`B`$ that form an induced subgraph isomorphic to $`E_6`$. The restriction of $`\mathrm{\Omega }`$ to $`E`$ is nondegenerate; in particular, $`E\cap K=\{0\}`$.
###### Lemma 6.4.
(a) Every $`4`$-dimensional vector subspace of $`E`$ contains at least two non-zero vectors with $`Q=0`$.
(b) Every $`5`$-dimensional vector subspace of $`E`$ contains at least two vectors with $`Q=1`$.
###### Proof.
(a) It suffices to show that every $`3`$-dimensional subspace of $`E`$ contains a non-zero vector with $`Q=0`$. Let $`e_1,e_2`$, and $`e_3`$ be three linearly independent vectors. If we assume that $`Q=1`$ on each of the $`6`$ vectors $`e_1,e_2,e_3,e_1+e_2,e_1+e_3`$, and $`e_2+e_3`$ then, in view of (6.1), we must have $`\mathrm{\Omega }(e_1,e_2)=\mathrm{\Omega }(e_1,e_3)=\mathrm{\Omega }(e_2,e_3)=1`$. But then $`Q(e_1+e_2+e_3)=0`$, as required.
(b) It follows from the results in (or by direct counting) that $`E`$ consists of $`28`$ vectors with $`Q=0`$ and $`36`$ vectors with $`Q=1`$. Since the cardinality of every $`5`$-dimensional subspace of $`E`$ is $`32`$, our claim follows. ∎
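The 28/36 split quoted in part (b) is easy to verify directly, reusing the function `Q` from the sketch above with the same (assumed) labeling of the $`E_6`$ graph:

```python
# Direct count over E = F_2^6 for the E_6 graph (chain 0-1-2-3-4, vertex 5
# attached to 2): expect 28 vectors with Q = 0 and 36 with Q = 1.
from itertools import product

e6 = {(0,1),(1,2),(2,3),(3,4),(2,5)}
omega = lambda a, c: 1 if (a, c) in e6 or (c, a) in e6 else 0

counts = {0: 0, 1: 0}
for u in product((0, 1), repeat=6):
    counts[Q(u, omega)] += 1
print(counts)   # {0: 28, 1: 36}
```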
###### Lemma 6.5.
The function $`Q`$ is nonconstant on each set $`(v+U)\setminus V^\mathrm{\Gamma }`$.
###### Proof.
Suppose $`v\in V\setminus V^\mathrm{\Gamma }`$. By Lemma 6.4 (b), there exist two vectors $`e\ne e^{\prime }`$ in $`E`$ such that
$$\mathrm{\Omega }(v,e)=\mathrm{\Omega }(v,e^{\prime })=0,\qquad Q(e)=Q(e^{\prime })=1.$$
In view of (6.1), we have $`Q(v+e)=Q(v+e^{\prime })=Q(v)+1`$, and it is clear that at least one of the vectors $`v+e`$ and $`v+e^{\prime }`$ is not $`\mathrm{\Gamma }`$-invariant (otherwise we would have $`\mathrm{\Omega }(e-e^{\prime },u)=0`$ for all $`u\in U`$, which contradicts the fact that $`\mathrm{\Omega }|_E`$ is nondegenerate). ∎
To prove Theorem 6.2, it remains to show that $`\mathrm{\Gamma }`$ acts transitively on each level set of $`Q`$ in $`(v+U)\setminus V^\mathrm{\Gamma }`$. To do this, we shall need the following important result due to Janssen \[6, Theorem 3.5\].
###### Lemma 6.6.
If $`u`$ is a vector in $`U\setminus K`$ such that $`Q(u)=1`$ then the symplectic transvection $`\tau _u`$ belongs to $`\mathrm{\Gamma }`$.
We also need the following result from \[12, Lemma 4.3\].
###### Lemma 6.7.
If the graph $`B`$ is $`E_6`$-compatible then $`\mathrm{\Gamma }`$ acts transitively on each of the level sets of $`Q`$ in $`U\setminus K`$.
To continue the proof, let us introduce some terminology. For a linear form $`\xi U^{}`$, denote
$$T_\xi =\{u\in U\setminus K:Q(u)=\xi (u)=1\}.$$
We shall call a family of vectors $`(u_1,u_2,\mathrm{},u_s)`$ *weakly orthogonal* if $`\mathrm{\Omega }(u_1+\mathrm{}+u_{i-1},u_i)=0`$ for $`i=2,\mathrm{},s`$.
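In code the condition is a single running sum; the sketch below (with `omega` now taken bilinearly on coordinate tuples, our convention) checks whether a family is weakly orthogonal.

```python
# Check weak orthogonality: Omega(u_1 + ... + u_{i-1}, u_i) = 0 for i >= 2.

def is_weakly_orthogonal(vectors, omega):
    partial = [0] * len(vectors[0])
    for i, u in enumerate(vectors):
        if i > 0 and omega(partial, u) != 0:
            return False
        partial = [(p + c) % 2 for p, c in zip(partial, u)]
    return True
```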
###### Lemma 6.8.
Let $`\xi \in U^{*}`$ be a linear form on $`U`$ such that $`\xi |_K\ne 0`$. Then every nonzero vector $`u\in U`$ such that $`Q(u)=\xi (u)`$ can be expressed as the sum $`u=u_1+\mathrm{}+u_s`$ of some weakly orthogonal family of vectors $`(u_1,u_2,\mathrm{},u_s)`$ from $`T_\xi `$.
###### Proof.
We need to construct a required weakly orthogonal family $`(u_1,u_2,\mathrm{},u_s)`$ in each of the following three cases.
Case 1. Let $`0\ne u=k\in K`$ be such that $`Q(k)=\xi (k)=0`$. Since $`\xi \ne 0`$, we have $`\xi (b)=1`$ for some $`b\in B`$. By (6.1), we also have $`Q(b)=1`$. Since $`b\notin K`$, we can take $`(u_1,u_2)=(b,k-b)`$ as a desired weakly orthogonal family.
Case 2. Let $`u=k\in K`$ be such that $`Q(k)=\xi (k)=1`$. By Lemma 6.4 (a), there exist distinct nonzero vectors $`e`$ and $`e^{\prime }`$ in $`E`$ such that $`Q(e)=\xi (e)=Q(e^{\prime })=\xi (e^{\prime })=\mathrm{\Omega }(e,e^{\prime })=0`$. Then we can take $`(u_1,u_2,u_3)=(k-e,k-e^{\prime },e+e^{\prime }-k)`$ as a desired weakly orthogonal family.
Case 3. Let $`u\in U\setminus K`$ be such that $`Q(u)=\xi (u)=0`$. Since $`\xi |_K\ne 0`$, we can choose $`k\in K`$ so that $`\xi (k)=1`$. If $`Q(k)=1`$ then a desired weakly orthogonal family for $`u`$ can be chosen as $`(u_1,u_2,u_3,u_4)`$, where $`(u_1,u_2,u_3)`$ is a weakly orthogonal family for $`k`$ constructed in Case 2 above, and $`u_4=u-k`$. If $`Q(k)=0`$, choose $`e\in E`$ such that $`Q(e)=1,\mathrm{\Omega }(u,e)=0`$, and $`u-e\notin K`$ (the existence of such a vector $`e`$ follows from Lemma 6.4 (b)). If $`\xi (e)=1`$ then a desired weakly orthogonal family for $`u`$ can be chosen as $`(u_1,u_2)=(e,u-e)`$. Finally, if $`\xi (e)=0`$ then a desired weakly orthogonal family for $`u`$ can be chosen as $`(u_1,u_2)=(e+k,u-e-k)`$. ∎
Now everything is ready for completing the proof of Theorem 6.2. Take any slice $`v+U\in V/U`$; we need to show that $`\mathrm{\Gamma }`$ acts transitively on each of the level sets of $`Q`$ in $`(v+U)\setminus V^\mathrm{\Gamma }`$. First suppose that $`(v+U)\cap V^\mathrm{\Gamma }\ne \varnothing `$; by Proposition 6.1, this means that $`\mathrm{\Omega }(K,v+U)=0`$. Without loss of generality, we can assume that $`v`$ is $`\mathrm{\Gamma }`$-invariant. Then $`\mathrm{\Omega }(u,v)=0`$ for any $`u\in U`$, so we have $`Q(v+u)=Q(v)+Q(u)`$. On the other hand, we have $`g(v+u)=v+g(u)`$ for any $`g\in \mathrm{\Gamma }`$ and $`u\in U`$. Thus the correspondence $`u\mapsto v+u`$ is a $`\mathrm{\Gamma }`$-equivariant bijection between $`U`$ and $`v+U`$ preserving partitions into the level sets of $`Q`$. Therefore our statement follows from Lemma 6.7.
It remains to treat the case when $`\mathrm{\Omega }(K,v+U)\ne 0`$. In other words, if we choose any representative $`v`$ and define the linear form $`\xi \in U^{*}`$ by $`\xi (u)=\mathrm{\Omega }(u,v)`$ then $`\xi |_K\ne 0`$. Let $`u\in U`$ be such that $`Q(v)=Q(v+u)`$; we need to show that $`v+u`$ belongs to the $`\mathrm{\Gamma }`$-orbit $`\mathrm{\Gamma }(v)`$. In view of (6.1), we have $`Q(u)=\xi (u)`$. In view of Lemma 6.8, it suffices to show that $`\mathrm{\Gamma }(v)`$ contains $`v+u_1+\mathrm{}+u_s`$ for any weakly orthogonal family of vectors $`(u_1,u_2,\mathrm{},u_s)`$ from $`T_\xi `$. We proceed by induction on $`s`$. The statement is true for $`s=1`$ because $`v+u_1=\tau _{u_1}(v)`$, and $`\tau _{u_1}\in \mathrm{\Gamma }`$ by Lemma 6.6. Now let $`s\ge 2`$, and assume that $`v^{\prime }=v+u_1+\mathrm{}+u_{s-1}\in \mathrm{\Gamma }(v)`$. The definition of a weakly orthogonal family implies that
$$v+u_1+\mathrm{}+u_s=v^{\prime }+u_s=\tau _{u_s}(v^{\prime })\in \mathrm{\Gamma }(v),$$
and we are done. This completes the proof of Theorem 6.2.
## 7. Connected components of real double Bruhat cells
In this section we give a (conjectural) geometric application of the above constructions. We assume that $`\mathrm{\Pi }`$ is a Dynkin graph of simply-laced type, i.e., every connected component of $`\mathrm{\Pi }`$ is the Dynkin graph of type $`A_n,D_n,E_6,E_7`$, or $`E_8`$. Let $`G`$ be a simply connected semisimple algebraic group with the Dynkin graph $`\mathrm{\Pi }`$. We fix a pair of opposite Borel subgroups $`B_{-}`$ and $`B`$ in $`G`$; thus $`H=B_{-}\cap B`$ is a maximal torus in $`G`$. Let $`N`$ and $`N_{-}`$ be the unipotent radicals of $`B`$ and $`B_{-}`$, respectively. Let $`\{\alpha _i:i\in \mathrm{\Pi }\}`$ be the system of simple roots for which the corresponding root subgroups are contained in $`N`$. For every $`i\in \mathrm{\Pi }`$, let $`\phi _i:SL_2\to G`$ be the canonical embedding corresponding to $`\alpha _i`$. The (split) real part of $`G`$ is defined as the subgroup $`G(\mathbb{R})`$ of $`G`$ generated by all the subgroups $`\phi _i(SL_2(\mathbb{R}))`$. For any subset $`L\subset G`$ we define its real part by $`L(\mathbb{R})=L\cap G(\mathbb{R})`$.
The *Weyl group* $`W`$ of $`G`$ is defined by $`W=\mathrm{Norm}_G(H)/H`$. It is canonically identified with the Coxeter group $`W(\mathrm{\Pi })`$ (as defined in Section 2.1) via $`s_i=\overline{s_i}H`$, where
$$\overline{s_i}=\phi _i\left(\begin{array}{cc}0& -1\\ 1& 0\end{array}\right)\in \mathrm{Norm}_G(H).$$
The representatives $`\overline{s_i}\in G`$ satisfy the braid relations in $`W`$; thus the representative $`\overline{w}`$ can be unambiguously defined for any $`w\in W`$ by requiring that $`\overline{uv}=\overline{u}\overline{v}`$ whenever $`\ell (uv)=\ell (u)+\ell (v)`$.
The group $`G`$ has two *Bruhat decompositions*, with respect to $`B`$ and $`B_{-}`$:
$$G=\underset{u\in W}{\bigsqcup }BuB=\underset{v\in W}{\bigsqcup }B_{-}vB_{-}.$$
The *double Bruhat cells* $`G^{u,v}`$ are defined by $`G^{u,v}=BuB\cap B_{-}vB_{-}`$.
Following , we define the *reduced double Bruhat cell* $`L^{u,v}\subset G^{u,v}`$ as follows:
(7.1)
$$L^{u,v}=N\overline{u}N\cap B_{-}vB_{-}.$$
The maximal torus $`H`$ acts freely on $`G^{u,v}`$ by left (or right) translations, and $`L^{u,v}`$ is a section of this action. Thus $`G^{u,v}`$ is biregularly isomorphic to $`H\times L^{u,v}`$, and all properties of $`G^{u,v}`$ can be translated in a straightforward way into the corresponding properties of $`L^{u,v}`$ (and vice versa). In particular, Theorem 1.1 in implies that $`L^{u,v}`$ is biregularly isomorphic to a Zariski open subset of an affine space of dimension $`\ell (u)+\ell (v)`$.
###### Conjecture 7.1.
For every two elements $`u`$ and $`v`$ in $`W`$, and every reduced word $`𝐢\in R(u,v)`$, the connected components of $`L^{u,v}(\mathbb{R})`$ are in a natural bijection with the $`\mathrm{\Gamma }_𝐢(𝔽_2)`$-orbits in $`𝔽_2^{\ell (u)+\ell (v)}`$.
The precise form of this conjecture comes from the “calculus of generalized minors” developed in and in a forthcoming paper . If $`u`$ is the identity element $`e\in W`$ then $`L^{e,v}=N\cap B_{-}vB_{-}`$ is the variety $`N^v`$ studied in . When $`G=SL_n`$, and $`v=w_0`$, the longest element in $`W`$, the real part $`N^{w_0}(\mathbb{R})`$ is the semi-algebraic set $`N_n^0`$ discussed in the introduction; in this case, the conjecture was proved in (for a special reduced word $`𝐢=(1,2,1,\mathrm{},n-1,n-2,\mathrm{},2,1)\in R(w_0)`$).
# Iron line in the afterglow: a key to the progenitor
## 1 Introduction
Piro et al. (1999) and Yoshida et al. (1999) report the detection of an iron emission line feature in the X–ray afterglow spectra of GRB 970508 and GRB 970828, respectively. Both lines are characterized by a large flux and equivalent width (EW) compared with the theoretical predictions made in the framework of the hypernova and compact merger GRB progenitor models (Ghisellini et al. 1999, Böttcher et al. 1999). The line detected in GRB 970508 is consistent with an iron $`K_\alpha `$ line redshifted to the rest–frame of the candidate host galaxy ($`z=0.835`$, Metzger et al. 1997), while GRB 970828 has no measured redshift and the identification of the feature with the same line would imply a redshift $`z\simeq 0.33`$. The line fluxes (equivalent widths) are $`F_{Fe}=(2.8\pm 1.1)\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> ($`\mathrm{EW}\simeq 1`$ keV) and $`F_{Fe}=(1.5\pm 0.8)\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> ($`\mathrm{EW}\simeq 3`$ keV) for GRB 970508 and GRB 970828, respectively. As indicated by the flux uncertainties, these line features are revealed at a statistical level of $`3\sigma `$. A strong iron emission line unambiguously points towards the presence, in the vicinity of the burster, of a few per cent of a solar mass of iron concentrated in a compact region. Thus the presence of such a line in the X–ray afterglow spectrum would represent the “Rosetta Stone” for unveiling the burst progenitor.
To date, three main classes of models have been proposed for the origin of gamma–ray bursts (GRB): neutron star – neutron star (NS–NS) mergers (Paczyński 1986; Eichler et al. 1989), Hypernovae or failed type Ib supernovae (Woosley 1993; Paczyński 1998) and Supranovae (Vietri & Stella 1998). In the NS–NS model the burst is produced during the collapse of a binary system composed of two neutron stars or of a neutron star and a black hole. Interacting neutrinos or the spin–down of the remnant black hole should power a relativistic outflow and, eventually, the production of $`\gamma `$–ray photons. Hypernovae are the final evolutionary stage of very massive stars ($`M\gtrsim 100M_{\odot }`$), whose core implodes into a black hole without the explosion of a classical supernova. Again, the rotational energy of the remnant black hole is tapped by a huge magnetic field, powering the burst explosion. While NS–NS mergers should take place outside the original star formation site, due to the large peculiar velocities and long lifetime of the coalescing binary systems, hypernovae take place in the dense and possibly iron rich molecular cloud where the massive star was born.
The Supranova scenario (Vietri & Stella 1998) assumes that, following a supernova explosion, a fast spinning neutron star is formed, with a mass that would be supercritical in the absence of rotation. As radiative energy losses spin it down in a time–scale of months to years, it inevitably collapses to a Kerr black hole, whose rotational energy can then power the GRB. A supernova remnant (SNR) is naturally left over around the burst location.
## 2 Mass, size and shape of the surrounding material
Since the detected lines are both characterized by a flux of several $`10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, we adopt as reference a line with a flux<sup>1</sup><sup>1</sup>1Here and in the following we parametrise a quantity $`Q`$ as $`Q=10^xQ_x`$ and adopt cgs units. $`F_{Fe}=10^{-13}F_{Fe,-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
This in itself constrains both the amount of line–emitting matter and the size of the emitting region.
We assume that the emitting region is a homogeneous spherical shell centered on the GRB progenitor, with radius $`R`$ and width $`\mathrm{\Delta }R\ll R`$. A limit to the size of the emitting region can be set, independently of the flux variability, by imposing that the flux level times the minimum duration of the emission ($`R/c`$) be less than the total available energy supplied by the burst fluence. Allowing for an efficiency $`q`$ of the reprocessing of burst photons into the line ($`q<0.1`$, see Ghisellini et al. 1999) we have:
$$R<3\times 10^{17}q_{-1}\frac{\mathcal{F}_{-5}}{F_{Fe,-13}}\text{cm}$$
(1)
where $``$ is the total GRB fluence. Note that this result, obtained with independent arguments, agrees with the $`1`$ day variability time–scales of the detected features.
To obtain a lower limit to the total amount of mass (see Lazzati et al. 1999) we consider a parameter $`k`$ given by the ratio between the total number of iron line photons divided by the number of iron nuclei: $`k`$ is the number of photons produced by a single iron atom. This number can be constrained to be less than the total number of ionizations an ion can undergo when illuminated by the burst or X–ray afterglow photons. For a GRB located at $`z=1`$, we obtain<sup>2</sup><sup>2</sup>2The cosmological parameters will be set throughout this paper to $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0=0.5`$ and $`\mathrm{\Lambda }=0`$.:
$$k\lesssim \frac{qE}{4\pi ϵ_{ion}R^2}\sigma _K=6.5\times 10^6\frac{qE_{52}}{R_{16}^2}$$
(2)
where $`E`$ is the total energy of the burst photons, $`ϵ_{ion}`$ the energy of an ionizing photon and $`\sigma _K`$ the ionization cross–section of the iron K shell. The total mass is given by the total number of produced line photons, divided by $`k`$, times the mass of the iron atom. We have:
$$M\gtrsim 0.13\frac{F_{Fe,-13}t_5R_{16}^2}{q_{-1}A_{\odot }E_{52}}M_{\odot }$$
(3)
where $`t_5`$ is the time during which the line is observed in units of $`10^5`$ seconds and $`A_{\odot }`$ is the iron abundance in solar units.
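To make the scalings concrete, here is a short numerical sketch of our own; the prefactors are those of Eqs. (1)–(3), while the parameter values are fiducial examples only.

```python
# Fiducial evaluation of Eqs. (1)-(3): F_Fe = 1e-13 F13 cgs, fluence
# = 1e-5 Fl5 cgs, q = 0.1 q1, E = 1e52 E52 erg, t = 1e5 t5 s,
# R = 1e16 R16 cm, iron abundance A in solar units.
F13 = Fl5 = q1 = E52 = t5 = R16 = A = 1.0

R_max = 3e17 * q1 * Fl5 / F13                      # Eq. (1), cm
k_max = 6.5e6 * (0.1 * q1) * E52 / R16**2          # Eq. (2); uses q = 0.1 q1
M_min = 0.13 * F13 * t5 * R16**2 / (q1 * A * E52)  # Eq. (3), solar masses
print(f"R < {R_max:.1e} cm,  k < {k_max:.1e},  M > {M_min:.2f} Msun")
```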
If such a large amount of mass were uniformly spread around the burst location, it would completely stop the fireball. In fact (see e.g. Wijers, Rees & Mészáros 1997) the fireball is slowed down to sub–relativistic speeds when the swept up mass equals its initial rest mass. With a typical baryon load of $`10^{-(4÷6)}M_{\odot }`$, the mass predicted in Eq. 3 would stop the fireball after an observer time $`t\sim \mathrm{\Gamma }^{-2}R/c\simeq 3\times 10^4\mathrm{\Gamma }_1^{-2}`$ s, i.e. almost one day. Any surviving long wavelength emission should then decay exponentially in the absence of energy supply. The only way to reconcile a month-long power–law optical afterglow with iron line emission is through a particular geometry, in which the line of sight is devoid of the remnant matter. This implies (see also Fig. 1) that the burst emission is only moderately (if at all) beamed, since the burst photons must illuminate the surrounding matter and the line of sight simultaneously.
## 3 Emission mechanisms
The limits on the mass discussed in the above section are very general and independent of the mechanism producing the line photons. Lazzati et al. 1999 give a complete analysis of three mechanisms that are able to generate such an intense line feature.
### 3.0.1 Recombination
In this scenario, burst (or X–ray afterglow) photons keep all the iron in the vicinity of the burst fully ionized. If the density of iron and free electrons is large and the plasma is cool, recombination of free electrons with iron nuclei is efficient and iron line photons are produced. However, we require both that $`n_{Fe}\gtrsim 10^{10}`$ cm<sup>-3</sup> and that $`T\lesssim 10^4`$ K. During the burst, Inverse Compton interactions of free electrons with $`\gamma `$–ray photons heat the plasma to $`T\sim 10^8`$ K, and recombination is inefficient. This scenario may, however, be marginally consistent with GRB 970508 data if the ionization flux is provided by an afterglow with a high energy tail but with typical Compton temperature of $`T_C\sim 10^{4÷5}`$ K.
### 3.0.2 Thermal emission
A supernova remnant of several solar masses, some months after the explosion of the supernova, has an emission integral of the same order of magnitude as the intergalactic medium of a galaxy cluster. These systems emit $`6.7`$ keV iron lines due to collisional excitation of hydrogenic iron ions. In addition, the iron richness of a supernova remnant can be up to ten times larger than in a cluster of galaxies, allowing for the emission of iron lines with equivalent widths of several keV. In addition to the line, the heated SNR produces bremsstrahlung radiation at a level $`F_{ff}\simeq 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, compatible with a standard X–ray afterglow at $`z\simeq 1`$.
### 3.0.3 Reflection
If some dense matter (e.g. a SNR less than a month old) is present in close vicinity to the burster, part of the incident radiation is reflected toward the observer. This same physical process is efficient in the discs of Seyfert galaxies, and a strong iron line is produced, since the cross section of the iron atom is much larger than that of free electrons. For a $`z\simeq 1`$ burst, lines with up to $`F_{Fe}\simeq 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> can be produced. In steady state, lines with $`\simeq 150`$ eV EW are produced (if the material covers half of the sky), but in gamma–ray bursts the reflected component is seen when the burst emission has already faded, and the EW can be much higher (even a few keV). Given the fast variability of the detected lines, this model is favored with respect to recombination and thermal emission.
## 4 Progenitor
The presence of dense matter at rest in the reference frame of the burster (and not of the fireball) is a discriminating parameter for the main models of burst progenitor. Here we analyze the two main models, binary neutron star mergers and hypernovae, and a particular model, the Supranova, that naturally accounts for the surrounding iron rich matter.
The production of lines in the fireball has been discussed in Mészáros & Rees 1998a. They find that lines (particularly iron lines) can be produced inside the fireball, but we should observe them at a frequency $`\nu _{obs}=\mathrm{\Gamma }/(1+z)\nu _l`$, where $`\nu _l`$ is the frequency of the line in the laboratory. In the case of GRB 970508 the observed frequency of the line is consistent with an iron line at the same redshift as the burster. Incredible fine–tuning would be necessary to explain the feature as an optical–UV line blueshifted into the X–ray band by the fireball expansion.
The line must hence be produced by the interaction of the burst photons with cold matter that surrounds the burster.
### 4.0.1 Binary compact object mergers
The lifetime of a binary system composed of a couple of compact objects (NS–NS or BH–NS) ranges from 100 million to 10 billion years, mainly depending on the initial separation and on the mass of the binary system components (see e.g. Lipunov et al. 1995). Due to the kick velocity that a compact object inherits from the supernova explosion that generated it, $`\sim 200`$ km s<sup>-1</sup> on average (Kalogera et al. 1998), these binary systems merge well outside their formation site and, possibly, even outside the host galaxy. Assuming a hydrogen density $`n=1`$ cm<sup>-3</sup> and a solar iron abundance, a sphere of radius:
$$R\gtrsim 3.7\times 10^{23}n^{-1}\frac{F_{Fe,-13}t_5}{q_{-1}A_{\odot }E_{52}}\mathrm{cm}$$
(4)
is necessary to meet the requirements of Eq. 3. However, the radius of the above equation is five orders of magnitude larger than the maximum allowed value (cf. Eq. 1).
### 4.0.2 Hypernova
If we insert in Eq. 4 the density appropriate for a molecular cloud, in which a very massive star is thought to be located, we obtain $`R\simeq 3\times 10^{18}A_{\odot }^{-1}`$ cm for $`n=10^5`$ cm<sup>-3</sup>, which, for an iron rich cloud, is marginally consistent with the limit of Eq. 1. To firmly rule out the possibility of an intense iron line produced in the hypernova scenario, we analyze each emission mechanism discussed in Sect. 3. Reflection can be immediately excluded, since a cloud of radius $`R>10^{20}`$ cm would be necessary to have a Thomson thick mirror. The efficiency of thermal emission is proportional to the square of the density. Lazzati et al. 1999 estimate that both a density $`n\simeq 10^{10}`$ cm<sup>-3</sup> and a mass of several solar masses (see also Sect. 3) are necessary to produce such an intense iron line. In the hypernova scenario, the density is 5 orders of magnitude smaller, giving a flux 10 orders of magnitude fainter. The recombination time in a medium with the considered density is too long (see Böttcher et al. 1999); however, a fluorescence line may be emitted due to ionization of the inner shell of almost neutral iron atoms. The most optimistic estimate of the line flux produced by fluorescence in a molecular cloud is given in Ghisellini et al. 1999, with an upper limit $`F_{Fe}<5\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Even if we consider the presence of a pre–hypernova wind with $`\dot{m}\sim 10^{-4}M_{\odot }`$/yr, the total mass contained within a radius of $`10^{18}`$ cm would be a few per cent of a solar mass, leaving the above discussion unaffected. We hence conclude that in a standard hypernova scenario it is not possible to produce an intense iron line.
Mészáros & Rees 1998b discussed the possible production of an intense iron line if a compact companion of the hypernova has left a dense torus of material ($`M\sim M_{\odot }`$; $`n\sim 10^{10}`$ cm<sup>-3</sup>) at a distance $`R\sim 10^{15}`$ cm from the burster. Another progenitor model can account for this matter in a more natural way.
## 5 Supranova
In the model presented by Vietri & Stella 1998 the burst explosion is preceded by a supernova. The neutron star left over by the supernova explosion is supermassive, i.e. its mass exceeds the limiting mass for a neutron star, but the implosion into a black hole is prevented by the fast rotation. As the system spins down, due to radiative losses, the neutron star shrinks until a limiting state is reached and the star collapses into a black hole. The duration of this transient phase depends on the strength of the magnetic field embedded in the star and on the initial spin velocity, but should typically last between several months and a dozen years. If we consider a supernova that exploded a year before the burst, the density of its shell remnant at the burst onset is:
$$n5\times 10^9M_{}v_9^3\mathrm{cm}^3$$
(5)
where $`M_{\odot }`$ is the mass of the ejecta in solar units, $`v_9`$ is their speed in units of $`10^9`$ cm s<sup>-1</sup>, and the width of the shell has been assumed to be $`\mathrm{\Delta }R=R/100`$. The shell radius is $`R=3\times 10^{16}`$ cm, in good agreement with the constraints derived above. If we consider that a SNR can have an iron richness ten times the solar value (Woosley 1988), this scenario is the most natural one to explain the intense iron lines observed in the afterglows of GRB 970508 and GRB 970828.
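A one-line check of these fiducial numbers (our own sketch; Eq. 5 is evaluated as printed, and the expansion time is taken to be one year):

```python
# Shell radius and density one year after the supernova, per Eq. (5).
year = 3.15e7                                  # s

def snr_shell(M_ej=1.0, v9=1.0, t=year):
    R = 1e9 * v9 * t                           # free-expansion radius, cm
    n = 5e9 * M_ej * v9**-3                    # Eq. (5), cm^-3
    return R, n

R, n = snr_shell()
print(f"R = {R:.1e} cm, n = {n:.1e} cm^-3")    # ~3e16 cm, ~5e9 cm^-3
```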
## 6 GRB 990123
The recent explosion of GRB 990123 has attracted the attention of gamma–ray astronomers due to the huge fluence and large distance of this burst. The early X–ray afterglow flux was measured by the Beppo-SAX satellite at a level of $`F\simeq 1.1\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (Heise et al. 1999) about 6 hours after the burst trigger. The first results of the spectral analysis give a “featureless spectrum” (Heise et al. 1999). If an iron line feature with the same EW as those of GRB 970508 and GRB 970828 were present in the afterglow of GRB 990123, a firm detection would have been established: since the underlying continuum flux of the latter burst is $`\sim 100`$ times larger than in the other two cases, the signal to noise ratio of the line would have been 10 times larger. Moreover, due to the larger redshift of GRB 990123 (z=1.6004, Kulkarni et al. 1999) the redshifted line energy lies in a region where the detector efficiency is the largest. This could have led to a $`\sim 30\sigma `$ line feature.
There are, however, two reasons why an iron line is not expected in the afterglow of GRB 990123. First (Lazzati et al. 1999), the line intensity is limited by the recombination time and not by the ionization time. This means that, even in the case of GRB 970508 (a weak burst), the ionizing flux was more than enough to fully exploit the matter surrounding the progenitor. If an iron line with flux $`F_{Fe}\simeq 3\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> were present in the X–ray afterglow of the 23 January burst, its EW would be decreased to the very low value of $`\sim 10`$ eV, hence well below the detection threshold of Beppo-SAX. Finally, the break in the power–law decline of the optical light curve of the GRB 990123 afterglow has been interpreted as a sign of beaming in the burst itself (Kulkarni et al. 1999; Fruchter et al. 1999). If this interpretation is correct, even if a cloud were present in close vicinity to the burster, its illumination would have been minimal, preventing the formation of an intense line.
## 7 Discussion
We have analyzed how the detection of an iron line redshifted to the rest frame of the burster can be used to constrain the progenitor models. Indeed, due to the relativistic outflows involved in the fireball model, very little information about the progenitor is contained in the gamma–ray burst photons, and the only means of investigating the progenitor nature is through the interaction of the photons with the ambient medium.
In particular, we have analyzed three different models predicting very different conditions of the external medium: the merging of two compact objects in a binary system, the hypernova and the Supranova. We derived a (model–independent) lower limit to the mass required to produce the line, ruling out the NS–NS merger model and, with a careful analysis, the hypernova. The Supranova, which naturally accounts for a shell remnant of iron rich material surrounding the burster, seems the most promising model capable of producing an intense iron line.
Given the limited statistical significance of the line detections, more observations are needed to draw firm conclusions. We have shown that, despite its luminosity, GRB 990123 is not the ideal test for line detection; observations with more efficient satellites are needed.
###### Acknowledgements.
DL thanks the Cariplo foundation for financial support.
# Correlated adiabatic and isocurvature perturbations from double inflation
## I Introduction
In our present picture, cosmological fluctuations today are seen as the combination of an initial spectrum, which can be computed within the framework of high energy models (like inflation), with the subsequent processes occurring at lower energy where the physics is believed to be better understood (up to some unknowns such as the amount and nature of dark matter, the mass of neutrinos…). In the near future, we expect new and precise information on cosmological fluctuations with the planned measurements of the Cosmic Microwave Background Radiation (CMBR) anisotropies by the satellites MAP and PLANCK . It has been emphasized in recent years that the precision of these measurements could in principle allow us to determine the cosmological parameters with high accuracy . These studies however all assume very simple initial perturbations, typically Gaussian adiabatic perturbations with a power-law spectrum. However, reality could turn out to be more subtle. This would have the drawback of complicating the determination of the cosmological parameters, but could open the fascinating perspective of gaining precious information on the primordial universe. At present, at a time when data are still imprecise, it is essential to identify broad categories of early universe models and to determine their specificities as far as observable quantities are concerned, so as to be able to discriminate between these various classes of models when detailed data become available.
Ultimately, inflation must be related to a high energy physics model. Today there are many viable models, but a generic feature is that they generally contain many scalar fields. A property of inflation with several scalar fields is that it can generate, in addition to the ubiquitous adiabatic perturbations, isocurvature perturbations. In this respect, it is important to consider the possible role of primordial isocurvature perturbations. Isocurvature perturbations are perturbations in the relative density ratio between various species in the early universe, in contrast with the more standard adiabatic (or isentropic) perturbations which are perturbations in the total energy density with fixed particle number ratios. Primordial isocurvature perturbations are, most of the time, ignored in inflationary models. The main reason for this is that they are less universal than adiabatic perturbations because, on one hand, they can be produced only in multiple inflationary models , and, on the other hand, they can survive until the present epoch only if at least one of the inflaton fields remains decoupled from ordinary matter during the whole history of the universe. However, not only is the existence of isocurvature perturbations allowed in principle, but candidates for inflatons satisfying the above requirements even exist in many theoretical models (dilatons, axions).
What has already been established is that a pure isocurvature scale-invariant spectrum must be rejected because it predicts, on large scales, temperature anisotropies that are too large with respect to the density fluctuations . But other possibilities can be envisaged. Several have been investigated in the literature: tilted isocurvature perturbations , combinations of isocurvature and adiabatic perturbations . In the latter case, only combinations of independent isocurvature and adiabatic perturbations were considered. The aim of this paper is to investigate the possibility of correlated mixtures of isocurvature and adiabatic perturbations.
To illustrate this, the simplest model of multi-field inflation is considered here: double inflation , namely a model with two massive scalar fields without self-interaction or mutual interaction (other than gravitational). The production of fluctuations in this model has been studied in great detail by Polarski and Starobinsky and, in the present work, their notation and formalism will be followed closely. They were interested essentially in adiabatic perturbations (see also for a numerical analysis) but also considered isocurvature perturbations . However, they did not investigate the range of parameters where this model has the striking property of producing correlated isocurvature and adiabatic perturbations. By this, we mean the cases where both isocurvature and adiabatic perturbations receive significant contributions from at least one of the scalar fields, in contrast to the uncorrelated case where one of the scalar fields feeds essentially the adiabatic perturbations while the second one is at the origin of the isocurvature perturbations.
The plan of this paper is the following. In section 2, the model of double inflation will be presented. Section 3 will be devoted to the analysis of adiabatic and isocurvature perturbations: their definition, how they are obtained from the inflation perturbations, the conditions to obtain correlated mixtures. Section 4 considers formally the definition of spectra for the perturbations as well as the notion of correlation. In section 5, the predictions for the CMBR anisotropies and matter power spectrum are given for the models with correlated primordial perturbations.
## II Double inflation
As mentioned in the introduction, inflation needs at least two scalar fields to produce isocurvature perturbations. That is why we investigate the simplest model of inflation with two scalar fields: they are non-interacting, massive, minimally coupled scalar fields. The Lagrangian corresponding to this model is
$$\mathcal{L}=\frac{{}_{}{}^{(4)}R}{16\pi G}-\frac{1}{2}\partial _\mu \varphi _l\partial ^\mu \varphi _l-\frac{1}{2}m_l^2\varphi _l^2-\frac{1}{2}\partial _\mu \varphi _h\partial ^\mu \varphi _h-\frac{1}{2}m_h^2\varphi _h^2,$$
(1)
where the subscripts $`l`$ and $`h`$ designate respectively the light and heavy scalar fields (and thus $`m_h>m_l`$). $`{}_{}{}^{(4)}R`$ is the scalar spacetime curvature and $`G`$ is Newton’s constant.
### A The background equations
In a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, with metric $`ds^2=-dt^2+a^2(t)d\stackrel{}{x}^2`$, the equations of motion read
$`3H^2`$ $`=`$ $`4\pi G(\dot{\varphi }_l^2+\dot{\varphi }_h^2+m_l^2\varphi _l^2+m_h^2\varphi _h^2)`$ (2)
$`\ddot{\varphi }_l`$ $`+`$ $`3H\dot{\varphi }_l+m_l^2\varphi _l=0,`$ (3)
$`\ddot{\varphi }_h`$ $`+`$ $`3H\dot{\varphi }_h+m_h^2\varphi _h=0.`$ (4)
Following Polarski and Starobinsky, it is convenient, during the phase when both scalar fields are slow-rolling (i.e. when $`\dot{\varphi }_l^2`$ and $`\dot{\varphi }_h^2`$ can be neglected in (2), and $`\ddot{\varphi }_l`$ and $`\ddot{\varphi }_h`$ in (3) and (4)), to write the evolution of the two scalar fields in the following parametric form
$$\varphi _h=\sqrt{\frac{s}{2\pi G}}\mathrm{sin}\theta ,\varphi _l=\sqrt{\frac{s}{2\pi G}}\mathrm{cos}\theta $$
(5)
where
$$s=\mathrm{ln}(a_e/a)$$
(6)
is the number of e-folds between a given instant and the end of inflation. The form (5) is a consequence of the approximate relation $`d(\varphi _h^2+\varphi _l^2)/ds=-d(\varphi _h^2+\varphi _l^2)/(Hdt)\simeq (2\pi G)^{-1}`$ resulting from the (slow-roll) equations of motion. The angular variable $`\theta `$ can then be related to the parameter $`s`$ by the expression
$$s=s_0\frac{(\mathrm{sin}\theta )^{\frac{2}{R^2-1}}}{(\mathrm{cos}\theta )^{\frac{2R^2}{R^2-1}}}$$
(7)
where $`R`$ is the ratio of the masses of the two scalar fields
$$R\equiv \frac{m_h}{m_l}.$$
(8)
The equation (7) was obtained by integrating the relation giving $`d\theta /d(\mathrm{ln}s)`$ as a function of $`\theta `$, which can be established by use of the slow-roll approximation of the equations of motion (2)-(4) (see for the details of the calculations). As noticed in , equation (7) can be approximated, when $`\theta \gg R^{-1}`$, by the simple formula
$$s\simeq \frac{s_0}{\mathrm{cos}^2\theta }.$$
(9)
This behaviour corresponds to the period when inflation is dominated by the heavy scalar field (this approximation is valid as long as $`s>s_0`$ and $`s-s_0\gg s_0/R^2`$). This period ends when $`\theta \sim R^{-1}`$, $`s\sim s_0`$, and is followed (possibly after a dust-like transition period) by a phase of inflation dominated by the light scalar field.
It follows from (2)-(5) that the Hubble parameter can be expressed in the form
$$H^2(s)\frac{2}{3}m_l^2s[1+(R^21)\mathrm{sin}^2\theta (s)],$$
(10)
where the function $`\theta (s)`$ is obtained by inverting (7) ($`0<\theta <\pi /2`$). As inflation proceeds, $`s`$ decreases and $`\theta `$ goes to smaller and smaller values, which implies a decreasing Hubble parameter during inflation.
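For definiteness, the trajectory can be followed numerically. The sketch below (our own helper, not part of the original analysis) inverts equation (7) for $`\theta (s)`$ by a bracketed root-find, since $`s(\theta )`$ is monotonic on $`(0,\pi /2)`$, and evaluates the Hubble rate (10); for $`s\gg s_0`$ it reproduces the heavy-field approximation (9) to within a percent:

```python
import numpy as np
from scipy.optimize import brentq

def s_of_theta(theta, s0, R):
    # Eq. (7): s = s0 sin(theta)^{2/(R^2-1)} / cos(theta)^{2R^2/(R^2-1)}
    return s0 * np.sin(theta)**(2.0 / (R**2 - 1)) \
              / np.cos(theta)**(2.0 * R**2 / (R**2 - 1))

def theta_of_s(s, s0, R):
    # s(theta) increases monotonically, so a simple bracketed root-find works
    return brentq(lambda th: s_of_theta(th, s0, R) - s, 1e-8, np.pi/2 - 1e-8)

def hubble_sq(s, s0, R, m_l=1.0):
    # Eq. (10), in units where m_l = 1
    th = theta_of_s(s, s0, R)
    return (2.0/3.0) * m_l**2 * s * (1.0 + (R**2 - 1) * np.sin(th)**2)

# heavy-field regime: theta(s) should approach arccos(sqrt(s0/s)) from Eq. (9)
s0, R = 20.0, 5.0
print(theta_of_s(200.0, s0, R), np.arccos(np.sqrt(s0 / 200.0)))
```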
It will be convenient to define $`s_H`$ as the number of e-folds before the end of inflation when the scale corresponding to our Hubble radius today crossed out the Hubble radius during inflation. The value of $`s_H`$ depends on the temperature after the reheating (see e.g. ) but roughly $`s_H\simeq 60`$. To make definite calculations, we shall take throughout this work the value
$$s_H=60.$$
(11)
Note that the class of models considered here depends on three free parameters: the two masses $`m_l`$ and $`m_h`$, or alternatively $`m_l`$ and $`R`$, and the parameter $`s_0`$. In particular, the choice of this last parameter $`s_0`$ relative to $`s_H`$ will determine the specific phase of double inflation, ‘heavy’ dominated, intermediate or ‘light’ dominated, during which the perturbations on scales of cosmological relevance were produced.
### B Perturbations
After having determined the evolution of the background quantities, let us turn now to the evolution of the linear perturbations. We shall restrict our analysis to the so-called scalar perturbations (in the terminology of Bardeen ). We thus consider a spacetime linearly perturbed about the flat FLRW spacetime of the previous subsection, endowed with the metric
$$ds^2=-(1+2\mathrm{\Phi })dt^2+a^2(t)(1-2\mathrm{\Psi })\delta _{ij}dx^idx^j.$$
(12)
Although this metric is not the most general a priori, it turns out that any perturbed metric (of the scalar type) can be transformed into a metric of this form by a suitable coordinate transformation. This choice corresponds to the so-called longitudinal gauge. In addition to the geometrical perturbations $`\mathrm{\Phi }`$ and $`\mathrm{\Psi }`$, one must also consider the matter perturbations, which during inflation are simply the perturbations of the scalar fields, respectively $`\delta \varphi _h`$ and $`\delta \varphi _l`$, with respect to their homogeneous values.
Before writing down the equations of motion for the perturbations, it is convenient to use a Fourier decomposition and to define the Fourier modes of any perturbed quantity $`f`$ by the relation
$$f_𝐤=\int \frac{d^3𝐱}{(2\pi )^{3/2}}e^{i𝐤.𝐱}f(𝐱).$$
(13)
The equations of motion for the perturbations are derived from the perturbed Einstein equations and from the Klein-Gordon equations of the scalar fields. They lead to the following four equations (see e.g. )
$$\mathrm{\Phi }=\mathrm{\Psi },$$
(14)
$$\dot{\mathrm{\Phi }}+H\mathrm{\Phi }=4\pi G\left(\dot{\varphi }_h\delta \varphi _h+\dot{\varphi }_l\delta \varphi _l\right),$$
(15)
$$\ddot{\delta \varphi _h}+3H\dot{\delta \varphi _h}+\left(\frac{k^2}{a^2}+m_h^2\right)\delta \varphi _h=4\dot{\varphi }_h\dot{\mathrm{\Phi }}-2m_h^2\varphi _h\mathrm{\Phi },$$
(16)
$$\ddot{\delta \varphi _l}+3H\dot{\delta \varphi _l}+\left(\frac{k^2}{a^2}+m_l^2\right)\delta \varphi _l=4\dot{\varphi }_l\dot{\mathrm{\Phi }}-2m_l^2\varphi _l\mathrm{\Phi },$$
(17)
where the subscript $`𝐤`$ is here implicit, as it will be throughout this paper.
In the slow-rolling approximation and for superhorizon modes, i.e. $`k\ll aH`$, these equations can be solved (see ) and the dominant solutions read
$$\mathrm{\Phi }\simeq -\frac{C_1\dot{H}}{H^2}+2C_3\frac{(m_h^2-m_l^2)m_h^2\varphi _h^2m_l^2\varphi _l^2}{3(m_h^2\varphi _h^2+m_l^2\varphi _l^2)^2},$$
(18)
$$\frac{\delta \varphi _l}{\dot{\varphi }_l}\simeq \frac{C_1}{H}-2C_3\frac{Hm_h^2\varphi _h^2}{m_h^2\varphi _h^2+m_l^2\varphi _l^2},\frac{\delta \varphi _h}{\dot{\varphi }_h}\simeq \frac{C_1}{H}+2C_3\frac{Hm_l^2\varphi _l^2}{m_h^2\varphi _h^2+m_l^2\varphi _l^2},$$
(19)
where $`C_1(𝐤)`$ and $`C_3(𝐤)`$ are time-independent constants of integration and are fixed by the initial conditions. As usual in inflation, perturbations are assumed to be initially (i.e. before crossing out the Hubble radius) in their vacuum quantum state. Perturbations outside the Hubble radius are then obtained by amplification of the vacuum quantum fluctuations due to the gravitational interaction. The two scalar fields being independent, one simply duplicates the results of single scalar field inflation (see e.g. ). Consequently, $`\delta \varphi _h`$ and $`\delta \varphi _l`$ can be written, for wavelengths crossing out the Hubble radius, as
$$\delta \varphi _h=\frac{H_k}{\sqrt{2k^3}}e_h(𝐤),\delta \varphi _l=\frac{H_k}{\sqrt{2k^3}}e_l(𝐤),$$
(20)
where $`e_h`$ and $`e_l`$ are classical Gaussian random fields with $`\langle e_i(𝐤)\rangle =0`$, $`\langle e_i(𝐤)e_j^{}(𝐤^{})\rangle =\delta _{ij}\delta (𝐤-𝐤^{})`$, for $`i,j=l,h`$, and $`H_k`$ is the Hubble parameter when the mode crosses the Hubble radius, i.e., when $`k=2\pi aH`$. Neglecting the evolution of the Hubble parameter with respect to that of the scale factor, the number of e-folds $`s_k`$, corresponding to the instant when the mode of wavenumber $`k`$ crossed out the Hubble radius, is given simply by
$$k\simeq k_He^{s_H-s_k},$$
(21)
where $`k_H`$ is the wavenumber corresponding to the present Hubble scale. In the present work, our interest will focus on the scales of cosmological relevance, typically $`k/k_H\sim 0.1-2000`$. This means that the range of e-folds that will interest us is $`52\lesssim s_k\lesssim 63`$.
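Equation (21) is trivial to check numerically; a one-line sketch of our own:

```python
import numpy as np
# Eq. (21): s_k = s_H - ln(k/k_H), for the cosmologically relevant wavenumbers
s_H = 60.0
for ratio in (0.1, 1.0, 2000.0):
    print(f"k/k_H = {ratio:7.1f}  ->  s_k = {s_H - np.log(ratio):.1f}")
# -> 62.3, 60.0, 52.4 : the range 52 <~ s_k <~ 63 quoted above
```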
## III Primordial perturbations
The analysis of the solutions for the perturbations during inflation obtained in the previous section will now enable us to determine the “initial” (but post-inflationary) conditions for the perturbations in the radiation era taking place after inflation and reheating.
### A Initial conditions in the radiation era
At some past instant deep in the radiation era, we shall consider four species of particles. Two species will be relativistic: photons and neutrinos; two species will be non-relativistic: baryons and cold dark matter. Their respective energy density contrasts will be denoted $`\delta _\gamma `$, $`\delta _\nu `$, $`\delta _b`$ and $`\delta _c`$ ($`\delta _A\equiv \delta \rho _A/\rho _A`$).
At this point, it is useful to define precisely the notion of adiabatic and isocurvature perturbations. Isocurvature perturbations are defined by the condition that there is no perturbation of the energy density in the total comoving gauge (denoted by the subscript $`(c)`$), i.e.
$$\sum _A\delta ^{(c)}\rho _A=0,$$
(22)
but that there are perturbations in the ratios of species particle numbers, i.e.
$$\delta ^{(c)}(n_A/n_B)\ne 0$$
(23)
in general. By contrast, adiabatic (or isentropic) perturbations are defined by the prescription that the particle number ratios between various species is fixed, i.e.
$$\delta ^{(c)}(n_A/n_B)=0,$$
(24)
whereas the total energy density perturbation can fluctuate, i.e.
$$\sum _A\delta ^{(c)}\rho _A\ne 0.$$
(25)
It is clear from the above definitions that, if one considers N species, there will be in general one adiabatic mode and $`N-1`$ isocurvature modes. Here, it will be assumed that the light scalar field $`\varphi _l`$ decays into ordinary particles, i.e. gives birth to the photons, neutrinos and baryons, while the dark matter particles are associated exclusively with the heavy scalar field $`\varphi _h`$. Note that part of the dark matter could also be produced by the light scalar field, but we shall ignore this possibility here for simplicity. As a consequence, the particle number ratios between the three ‘ordinary’ species will be frozen, i.e.
$$\frac{\delta ^{(c)}n_\gamma }{n_\gamma }=\frac{\delta ^{(c)}n_\nu }{n_\nu }=\frac{\delta ^{(c)}n_b}{n_b},$$
(26)
and only one isocurvature mode will exist, which can be conveniently represented by the quantity
$$S\equiv \frac{\delta ^{(c)}n_c}{n_c}-\frac{\delta ^{(c)}n_\gamma }{n_\gamma }=\delta _c^{(c)}-\frac{3}{4}\delta _\gamma ^{(c)}.$$
(27)
Going back to the longitudinal gauge, and following Ma and Bertschinger , one can write the initial conditions deep in the radiation era for modes outside the Hubble radius in the form
$`\delta _\gamma =-2\mathrm{\Phi },`$ (28)
$`\delta _b={\displaystyle \frac{3}{4}}\delta _\nu ={\displaystyle \frac{3}{4}}\delta _\gamma ,`$ (29)
$`\delta _c=S+{\displaystyle \frac{3}{4}}\delta _\gamma ,`$ (30)
$`\theta _\gamma =\theta _\nu =\theta _b=\theta _c={\displaystyle \frac{1}{2}}(k^2\eta )\mathrm{\Phi },`$ (31)
$`\sigma _\nu ={\displaystyle \frac{1}{15}}(k\eta )^2\mathrm{\Phi },`$ (32)
$`\mathrm{\Psi }=\left(1+{\displaystyle \frac{2}{5}}R_\nu \right)\mathrm{\Phi },`$ (33)
with $`R_\nu =\rho _\nu /(\rho _\gamma +\rho _\nu )`$ and where $`\theta `$ stands for the divergence of the fluid three-velocity, $`\sigma _\nu `$ for the shear stress of neutrinos (in the rest of this paper, the contribution of neutrinos in all analytical calculations will be ignored for simplicity but it will be taken into account in the numerical calculations) and $`\eta `$ is the conformal time defined by $`d\eta =dt/a(t)`$. All the information about the initial conditions is thus contained in the two k-dependent quantities $`\mathrm{\Phi }`$ and $`S`$, which are time-independent during the radiation era (for $`k\ll aH`$).
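In a numerical integration these initial conditions are simply assignments; the following sketch (our own packaging, with hypothetical names, of equations (28)-(33)) returns the starting values given the two primordial quantities $`\mathrm{\Phi }`$ and $`S`$:

```python
def radiation_era_ics(Phi, S, k, eta, R_nu):
    """Super-horizon initial conditions deep in the radiation era,
    following Eqs. (28)-(33); all quantities in the longitudinal gauge."""
    delta_gamma = -2.0 * Phi
    return {
        "delta_gamma": delta_gamma,
        "delta_nu": delta_gamma,                # from Eq. (29)
        "delta_b": 0.75 * delta_gamma,
        "delta_c": S + 0.75 * delta_gamma,      # isocurvature offset, Eq. (30)
        "theta": 0.5 * k**2 * eta * Phi,        # common to all four species
        "sigma_nu": (k * eta)**2 * Phi / 15.0,
        "Psi": (1.0 + 0.4 * R_nu) * Phi,
    }
```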
### B Adiabatic initial perturbations
The evolution of $`\mathrm{\Phi }`$ is given by (18) only during the phase when both scalar fields are slow rolling. As soon as the heavy scalar field $`\varphi _h`$ ends its slow-rolling phase, the second term on the right hand side of (18) will die out, and $`\mathrm{\Phi }`$ can be given during all the subsequent evolution of the universe in the simple form
$$\mathrm{\Phi }=C_1\left(1-\frac{H}{a}\int _0^ta(t^{})𝑑t^{}\right).$$
(34)
It can be checked that, during inflation, $`1-\frac{H}{a}\int _0^ta(t^{})𝑑t^{}\simeq -\dot{H}/H^2`$, which ensures that the coefficient $`C_1`$ in the above formula is the same as in (18).
It is thus essential to express $`C_1`$ in terms of the perturbations of the two scalar fields, and therefore in terms of $`e_l`$ and $`e_h`$, in order to be able to determine the amplitude of the perturbations after the end of inflation. Combining the two equations in (19) and using (20), as well as the slow-roll approximation of the background equations of motion (2)-(4), the expression for the coefficient $`C_1`$ during inflation is found to be
$$C_1(𝐤)\simeq 4\pi G\frac{H_k}{\sqrt{2k^3}}\left[\varphi _le_l(𝐤)+\varphi _he_h(𝐤)\right].$$
(35)
As is clear from this formula, $`C_1(k)`$ is a stochastic variable, whose properties can be determined from the stochastic properties of $`e_l`$ and $`e_h`$.
During the radiation era, the relation between the coefficient $`C_1`$ and the gravitational potential is simply, using once more (34),
$$\mathrm{\Phi }=\frac{2}{3}C_1(𝐤).$$
(36)
Therefore, the gravitational potential during the radiation era for modes larger than the Hubble scale is given by the expression
$$\widehat{\mathrm{\Phi }}\simeq \frac{4\sqrt{\pi G}}{3}k^{-3/2}\sqrt{s_k}H_k\left[\mathrm{sin}\theta _ke_h(𝐤)+\mathrm{cos}\theta _ke_l(𝐤)\right],$$
(37)
where $`H_k`$ is given as a function of $`s_k`$ in (10) and $`s_k`$ is given as a function of $`k`$ in (21). The hat in the above equations (and in all subsequent equations) indicates that the value of the corresponding quantity is taken deep in the radiation era when the wavelength of the Fourier mode is larger than the Hubble radius.
### C Isocurvature initial perturbations
As explained in subsection A, the isocurvature perturbations in the present model are due to variations in the relative proportions of cold dark matter, generated by the heavy scalar field, with respect to the three other main species (photons, baryons, neutrinos), all generated by the light scalar field. Moreover, during the radiation era, the isocurvature perturbation $`S`$ rigorously defined by (27) is essentially the comoving cold dark matter density contrast, $`S\simeq \delta _c^{(c)}`$, so that what is needed to obtain the primordial isocurvature spectrum is simply to compute the cold dark matter density contrast in terms of the scalar field perturbations during inflation. This task was carried out in . Only the main points will be summarized here.
Let us first give the comoving energy density perturbation associated with the heavy scalar field:
$$\delta \rho _h^{(c)}=\dot{\varphi }_h\delta \dot{\varphi }_h+m_h^2\varphi _h\delta \varphi _h+3H\dot{\varphi }_h\delta \varphi _h-\dot{\varphi }_h^2\mathrm{\Phi }.$$
(38)
Matching the inflationary phase when $`\varphi _h`$ is slow-rolling to the inflationary phase when $`\varphi _h`$ is oscillating and then to the post-reheating radiation dominated phase, one finds
$$\delta _h^{(c)}\simeq \frac{4}{3}m_h^2C_3.$$
(39)
The coefficient $`C_3`$ can then be obtained, during inflation, by subtracting the two equations in (19) and then using the (slow-roll) background equations of motion and (20). Inserting the result in (39), the density contrast of the cold dark matter (associated with the heavy scalar field), for modes larger than the Hubble radius, is found to be given, during the radiation era, by the expression
$$\delta _h^{(c)}\simeq -\sqrt{\frac{2}{k^3}}H_k\left(\varphi _h^{-1}e_h(𝐤)-\frac{m_h^2}{m_l^2}\varphi _l^{-1}e_l(𝐤)\right),$$
(40)
where the value of the scalar fields is taken at Hubble radius crossing. This can be reexpressed, using (5) and (8), in the form
$$\widehat{S}\simeq \delta _h^{(c)}\simeq -2\sqrt{\pi G}k^{-3/2}s_k^{-1/2}H_k\left[\frac{e_h}{\mathrm{sin}\theta _k}-\frac{R^2}{\mathrm{cos}\theta _k}e_l\right].$$
(41)
Note that the isocurvature perturbations have the same power-law dependence as the adiabatic perturbations multiplied by a weakly k-dependent expression which is different from the analogous expression in (37).
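For the discussion of the parameter regions below, it is convenient to tabulate the four coefficients of $`e_h`$ and $`e_l`$ appearing in (37) and (41). The helper below is a sketch of our own (the overall sign of $`\widehat{S}`$ relative to $`\widehat{\mathrm{\Phi }}`$ follows the reconstruction of (41) above); it reproduces the crossovers at $`\mathrm{tan}\theta =1`$ for the adiabatic and $`\mathrm{tan}\theta =R^{-2}`$ for the isocurvature contributions:

```python
import numpy as np

def contributions(theta, s_k, R, H_k=1.0, G=1.0, k=1.0):
    """Coefficients of e_h and e_l in Eqs. (37) and (41)."""
    pref_ad = (4.0/3.0) * np.sqrt(np.pi * G) * k**-1.5 * np.sqrt(s_k) * H_k
    pref_iso = 2.0 * np.sqrt(np.pi * G) * k**-1.5 * H_k / np.sqrt(s_k)
    Phi_h = pref_ad * np.sin(theta)
    Phi_l = pref_ad * np.cos(theta)
    S_h = -pref_iso / np.sin(theta)
    S_l = pref_iso * R**2 / np.cos(theta)
    return Phi_h, Phi_l, S_h, S_l

# crossovers: |Phi_h| = |Phi_l| at tan(theta) = 1, |S_l| = |S_h| at tan = R^-2
R = 5.0
for th in (np.arctan(R**-2), np.pi / 4):
    print([f"{c:+.3f}" for c in contributions(th, 60.0, R)])
```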
### D Conditions for the existence of correlated adiabatic and isocurvature perturbations
As shown above, the quantities describing the primordial adiabatic and isocurvature perturbations are in general linear combinations of the independent stochastic quantities $`e_l`$ and $`e_h`$ and are thus expected to be correlated. It is now necessary to examine the actual values of the corresponding coefficients. For adiabatic perturbations, i.e. in equation (37), the light contribution is dominant for $`\mathrm{tan}\theta <1`$ whereas the heavy contribution is dominant for $`\mathrm{tan}\theta >1`$. For isocurvature perturbations, i.e. in equation (41), the light contribution dominates for $`\mathrm{tan}\theta >R^{-2}`$ whereas the heavy contribution is predominant in the opposite case. Assuming $`R^2\gg 1`$, one can thus divide the space of parameters for double inflation into three regions:
#### 1 Region $`\mathrm{tan}\theta >>1`$
The adiabatic perturbations are dominated by the heavy scalar field while the isocurvature perturbations are dominated by the light scalar field. The two types of perturbations will thus appear independent. Moreover, except for $`\theta `$ very close to $`\pi /2`$, the isocurvature amplitude will be suppressed with respect to the adiabatic amplitude by a factor $`s_k`$. In this parameter region, one recovers the standard results of a pure adiabatic spectrum due to a single scalar field, here the heavy scalar field.
#### 2 Region $`\mathrm{tan}\theta <<R^{-2}`$
In this region, essentially only the light scalar field contributes to the adiabatic perturbations while the isocurvature perturbations are dominated by the heavy scalar field. The two contributions are therefore independent and the isocurvature amplitude can be very high with respect to the adiabatic one if $`\theta `$ is sufficiently small.
#### 3 Region $`R^{-2}\lesssim \mathrm{tan}\theta \lesssim 1`$
This is the most interesting region. Here, both the adiabatic and isocurvature perturbations are essentially fed by the fluctuations of the light scalar field (even if their amplitude depends on the background value of the two scalar fields). This means that the adiabatic and isocurvature perturbations are strongly correlated in this region. If one considers the relative magnitude of these light scalar field contributions, one sees that the isocurvature contribution can compensate the $`s_k^{-1}`$ suppression (with respect to the adiabatic perturbations) by a suitable factor $`R^2`$. Note also that in the upper part of this parameter region, i.e. for $`\mathrm{tan}\theta \sim 1`$, the heavy and light contributions in the adiabatic perturbations will be of similar order while the heavy contribution in the isocurvature perturbations can be ignored. In contrast, in the lower part of the region, i.e. $`\mathrm{tan}\theta \sim R^{-2}`$, the heavy contribution in the adiabatic perturbations is negligible whereas the light and heavy contributions in the isocurvature perturbations are of similar weight. This is illustrated on Figure 1, which displays the relative behaviour of the four contributions as a function of the angle $`\theta `$.
The expressions for the adiabatic and isocurvature primordial perturbations, (37) and (41) can be simplified further when one assumes that these perturbations are produced during some specific phases of inflation. For instance, if all scales of cosmological relevance are produced during the period of inflation dominated by the heavy scalar field, then the approximate relation (9) relating the angle $`\theta `$ to the number of e-folds applies, which enables us to simplify the Hubble parameter expression, given in equation (10), into
$$H(s)\simeq \sqrt{\frac{2}{3}}m_l\sqrt{R^2-1}\sqrt{s-s_0}.$$
(42)
The various contributions to adiabatic and isocurvature perturbations then reduce to the form
$$k^{3/2}\widehat{\mathrm{\Phi }}_h\simeq \frac{4\sqrt{6\pi G}}{9}m_l\sqrt{R^2-1}\left(s_k-s_0\right),k^{3/2}\widehat{\mathrm{\Phi }}_l\simeq \frac{4\sqrt{6\pi G}}{9}m_l\sqrt{R^2-1}\sqrt{s_0\left(s_k-s_0\right)},$$
(43)
and
$$k^{3/2}\widehat{S}_h\simeq -\frac{2\sqrt{6\pi G}}{3}m_l\sqrt{R^2-1},k^{3/2}\widehat{S}_l\simeq \frac{2\sqrt{6\pi G}}{3}m_lR^2\sqrt{R^2-1}\sqrt{\frac{s_k-s_0}{s_0}},$$
(44)
where the indices $`h`$ and $`l`$ refer to the corresponding coefficients of $`e_h`$ and $`e_l`$ in (37) and (41). Let us briefly comment on these results when one varies the free parameters of the model, $`m_l`$, $`R`$ and $`s_0`$ (but remaining in the domain of validity of the above approximate expressions). Considering the variations with respect to the first two parameters, one can notice that all the contributions are proportional to the term $`m_l\sqrt{R^2-1}`$, except $`\widehat{S}_l`$ which contains an additional $`R^2`$ dependence. This means, ignoring for the moment the (weak) scale dependence, that the relative amplitudes of three of the contributions are fixed, the relative amplitude of $`\widehat{S}_l`$ being adjustable by the mass ratio $`R`$. Once $`R`$ is fixed, the overall amplitude of the perturbations can be fixed by the scale $`m_l`$. Concerning now the variation of the contributions with the cosmological scale, $`\widehat{S}_h`$ is scale-invariant, while the three others are weakly scale dependent: $`\widehat{\mathrm{\Phi }}_l`$ and $`\widehat{S}_l`$ have the same dependence, whereas $`\widehat{\mathrm{\Phi }}_h`$ has a stronger dependence.
Another limiting case corresponds to $`\theta \ll R^{-1}`$, which occurs during the period of inflation dominated by the light scalar field. In this case, one has $`s\simeq s_0\theta ^{2/R^2}`$ and the Hubble parameter is approximately given by
$$H(s)\simeq \sqrt{\frac{2}{3}}m_l\sqrt{s}.$$
(45)
As a consequence, the ’heavy’ and ’light’ contributions are approximated by
$$k^{3/2}\widehat{\mathrm{\Phi }}_h\simeq \frac{4\sqrt{6\pi G}}{9}m_ls_k\theta _k,k^{3/2}\widehat{\mathrm{\Phi }}_l\simeq \frac{4\sqrt{6\pi G}}{9}m_ls_k$$
(46)
for the adiabatic perturbations and
$$k^{3/2}\widehat{S}_h\simeq -\frac{2\sqrt{6\pi G}}{3}m_l\theta _k^{-1},k^{3/2}\widehat{S}_l\simeq \frac{2\sqrt{6\pi G}}{3}m_lR^2$$
(47)
for isocurvature perturbations. The ’heavy’ adiabatic contribution is thus negligible and the perturbations due to the heavy scalar field are therefore essentially isocurvature.
Note, to conclude this section, that in their work , Polarski and Starobinsky concentrated their attention on the intermediate case where the scales of cosmological relevance correspond precisely to the transition zone from the heavy scalar field driven inflation to the light scalar field driven inflation. As a consequence, their spectrum has a stronger variation in $`k`$ than in the limiting cases considered above. Here, the emphasis is put on the contributions to the isocurvature perturbations. With another choice of parameters, one can also produce a huge temperature anisotropy dipole due to isocurvature perturbations on scales larger than the present Hubble radius .
## IV Spectra and correlation of adiabatic and isocurvature perturbations
### A General definitions
It is usually assumed in cosmology that the perturbations can be described by (homogeneous and isotropic) gaussian random fields. In the specific model under consideration here, where the perturbations are created during an inflationary phase, this is true by construction. What is new here is that isocurvature and adiabatic perturbations are not assumed to be independent. Indeed, as shown in the previous section, in the case of double inflation, the two kinds of perturbations are correlated, at least for some region of the parameter space. It will thus be our purpose to define statistical quantities that can describe random fields which are, a priori, correlated. Let us first recall, for any homogeneous and isotropic random field $`f`$, the standard definition (up to a normalization factor) of its power spectrum by the expression
$$\langle f_𝐤f_{𝐤^{}}^{}\rangle =2\pi ^2k^{-3}𝒫_f(k)\delta (𝐤-𝐤^{}).$$
(48)
In addition to this definition, it will be useful to define a covariance spectrum between two random fields $`f`$ and $`g`$ by the following expression
$$\mathrm{Re}\langle f_𝐤g_{𝐤^{}}^{}\rangle =2\pi ^2k^{-3}𝒞_{f,g}(k)\delta (𝐤-𝐤^{}).$$
(49)
In order to estimate the degree of correlation between two quantities, it is convenient to also define the correlation spectrum $`\stackrel{~}{𝒞}_{f,g}(k)`$ by normalizing $`𝒞_{f,g}(k)`$:
$$\stackrel{~}{𝒞}_{f,g}(k)=\frac{𝒞_{f,g}(k)}{\sqrt{𝒫_f(k)}\sqrt{𝒫_g(k)}}.$$
(50)
The Schwartz inequality implies, as usual, that $`-1\le \stackrel{~}{𝒞}_{f,g}(k)\le 1`$. The correlation (anticorrelation) will be stronger as one gets closer to $`1`$ (or $`-1`$).
### B Double inflation generated perturbations
Let us now specialize the above formulas to the case of perturbations generated by double inflation. By substituting the explicit expressions for the perturbations obtained in the previous section, namely (37) and (41), one finds
$$𝒫_{\widehat{\mathrm{\Phi }}}=\frac{8G}{9\pi }H_k^2s_k$$
(51)
for the initial adiabatic spectrum,
$$𝒫_{\widehat{S}}=\frac{2G}{\pi }\frac{H_k^2}{s_k}\left[\frac{R^4}{\mathrm{cos}^2\theta }+\frac{1}{\mathrm{sin}^2\theta }\right],$$
(52)
for the initial isocurvature spectrum and
$$𝒞_{\widehat{\mathrm{\Phi }},\widehat{S}}=\frac{4G}{3\pi }H_k^2(R^2-1)$$
(53)
for the covariance spectrum. Combining the three above spectra according to (50), one finds finally for the correlation spectrum the expression
$$\stackrel{~}{𝒞}_{\widehat{\mathrm{\Phi }},\widehat{S}}=\frac{(R^2-1)\mathrm{sin}2\theta }{2(R^4\mathrm{sin}^2\theta +\mathrm{cos}^2\theta )^{1/2}}.$$
(54)
It is instructive to study the dependence of this correlation spectrum on the parameters of the model. If one takes $`\theta `$ fixed, one sees that the correlation vanishes for $`R=1`$ and then increases monotonically with increasing $`R`$, approaching the asymptotic value $`\mathrm{cos}\theta `$. If one now considers $`R`$ as fixed and studies the variations of the correlation with respect to $`\theta `$, one recovers the conclusions of section 3D: the correlation vanishes when $`\theta `$ approaches zero or $`\pi /2`$; in between, one can see that the correlation reaches a maximum for $`\mathrm{sin}^2\theta =(R^2+1)^{-1}`$, with the value
$$\stackrel{~}{𝒞}_{\widehat{\mathrm{\Phi }},\widehat{S}}^{max}=\frac{R^2-1}{R^2+1}.$$
(55)
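A quick numerical check of (54) and (55) (a short sketch of our own):

```python
import numpy as np

def corr(theta, R):
    # Eq. (54)
    return (R**2 - 1) * np.sin(2 * theta) / (
        2.0 * np.sqrt(R**4 * np.sin(theta)**2 + np.cos(theta)**2))

th = np.linspace(1e-4, np.pi/2 - 1e-4, 200000)
for R in (2.0, 5.0, 10.0):
    c = corr(th, R)
    i = np.argmax(c)
    print(R, c[i], (R**2 - 1) / (R**2 + 1),      # maximum vs. Eq. (55)
          np.sin(th[i])**2, 1.0 / (R**2 + 1))    # location of the maximum
```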
The correlation spectrum $`\stackrel{~}{𝒞}_{\widehat{\mathrm{\Phi }},\widehat{S}}`$ for various choices of parameters has been plotted on Fig. 2, as a function of $`s_k`$. One can, as before, distinguish between the two extreme cases. For models such that $`\theta \gg R^{-1}`$, corresponding to a ’heavy’ inflationary phase, the various contributions vary slowly with $`\theta `$, as can be seen from Fig. 1. This means that the correlation will be almost constant. For models such that $`\theta \ll R^{-1}`$, corresponding to a ’light’ inflationary phase, $`\widehat{S}_h`$ increases quickly with decreasing $`\theta `$, i.e. with decreasing scales, which implies that the correlation will decrease with decreasing scales, i.e. smaller $`s_k`$. The models with $`s_0<s_H`$ belong to the first category, while models with $`s_0>s_H`$ correspond to the second. Finally, the models with $`s_0`$ close to $`s_H`$ have an intermediate behaviour between the two extreme cases. They also have the strongest correlation.
## V Predictions for the CMBR and density contrast spectrum
### A Analytical predictions for long-wavelength perturbations
#### 1 Evolution of the perturbations
In the case of perturbations whose wavelength is larger than the Hubble radius, the time evolution is particularly simple. For an initial isocurvature perturbation characterized by the initial amplitude $`\widehat{S}`$, the entropy perturbation $`S`$ is unchanged as long as the perturbation is larger than the Hubble radius, whatever the evolution of the background equation of state, i.e.
$$S=\widehat{S}(k\ll aH).$$
(56)
However, the radiation-matter transition will generate a gravitational potential perturbation (see e.g. )
$$\mathrm{\Phi }^{iso}=-\frac{1}{5}\widehat{S}(k\ll aH).$$
(57)
Of course, the initial adiabatic perturbation will also contribute to the gravitational potential perturbation:
$$\mathrm{\Phi }^{ad}=T\widehat{\mathrm{\Phi }}(k\ll aH),$$
(58)
where $`T`$ is a coefficient, close to $`1`$, due to the evolution of the universe (if one ignores the anisotropic stress of the neutrinos, $`T=9/10`$.)
#### 2 Large angular scale CMBR anisotropies.
At large angular scales, the temperature anisotropies are essentially due to the sum of an intrinsic contribution and of a Sachs-Wolfe contribution. Except for the dipole for which the Doppler terms are important, the Sachs-Wolfe contribution can be written (for a spatially flat background)
$$\left(\frac{\mathrm{\Delta }T}{T}\right)_{SW}(𝐞)=\frac{1}{3}\mathrm{\Phi }(x_{ls});$$
(59)
where $`𝐞`$ on the left hand side is a unit vector corresponding to the direction of observation and $`x_{ls}`$ on the right hand side represents the intersection of the last scattering surface with the light-ray of direction $`𝐞`$. The intrinsic contribution is simply given, via the Stefan law, as the perturbation $`\delta _\gamma ^{(c)}/4`$ at the time of last scattering. Since last scattering occurred in the matter era, $`\delta _m^{(c)}\simeq \delta ^{(c)}`$, and therefore for an adiabatic perturbation ($`\delta _m^{(c)}=\frac{3}{4}\delta _\gamma ^{(c)}`$), $`\left(\frac{\mathrm{\Delta }T}{T}\right)_{\mathrm{int}}\simeq \frac{1}{3}\delta _m^{(c)}`$, which can be seen to be negligible (see below (62)) with respect to the Sachs-Wolfe contribution, whereas for an isocurvature perturbation ($`S\simeq -\frac{3}{4}\delta _\gamma ^{(c)}`$ during the matter era),
$$\left(\frac{\mathrm{\Delta }T}{T}\right)_{int}\simeq -\frac{1}{3}S.$$
(60)
To conclude, the temperature anisotropies will be given in general by
$$\frac{\mathrm{\Delta }T}{T}=\frac{1}{3}T\widehat{\mathrm{\Phi }}-\frac{2}{5}\widehat{S}$$
(61)
on angular scales larger than the angle (of the order of a degree) corresponding to the size of the Hubble radius at the time of last scattering. This equation enables us to estimate easily the normalization of the temperature anisotropies for the low multipoles (see the definitions (68)-(69)), essentially constrained by COBE measurements. Note that for mixed primordial perturbations with isocurvature and adiabatic contributions of the same order of magnitude, the low multipole anisotropies can be significantly reduced by a compensation effect between the isocurvature perturbation and the adiabatic one. It turns out that this is the case for the light scalar field contribution in double inflation models with $`R\simeq 5`$ (see Fig. 1 and the consequences on Figs. 3-5).
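The compensation is easy to quantify with the heavy-phase expressions (43)-(44). In the sketch below (our own illustration; the common prefactor $`\sqrt{6\pi G}m_l\sqrt{R^2-1}`$ drops out of the comparison), the two ‘light’ terms of (61) nearly cancel for $`R=5`$:

```python
import numpy as np
# 'light' contributions in Eq. (61) with T = 1, heavy-phase forms (43)-(44),
# common prefactor sqrt(6 pi G) m_l sqrt(R^2-1) divided out
R, s0, s_k = 5.0, 50.0, 60.0
Phi_l = (4.0/9.0) * np.sqrt(s0 * (s_k - s0))        # ~ k^{3/2} Phi_hat_l
S_l = (2.0/3.0) * R**2 * np.sqrt((s_k - s0) / s0)   # ~ k^{3/2} S_hat_l
print(Phi_l / 3.0, 0.4 * S_l, Phi_l / 3.0 - 0.4 * S_l)
# -> 3.31 vs 2.98: a ~90% cancellation of the low-multipole amplitude
```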
#### 3 Large scale structure
Large scale structure is governed by the density contrast, or equivalently the gravitational potential perturbation $`\mathrm{\Phi }`$, since the latter quantity can be related to the (total) density contrast in the comoving gauge by the (generalized) Poisson equation, which reads (see e.g. )
$$\left(\frac{k}{aH}\right)^2\mathrm{\Phi }=-\frac{3}{2}\delta ^{(c)}.$$
(62)
For modes inside the Hubble radius the evolution becomes quite complicated and depends on the specific ingredients of the model. But what is relevant for our purpose is that this subhorizon evolution does not depend on the nature of the primordial perturbations. What matters is the total gravitational potential perturbation $`\mathrm{\Phi }`$ which can be written, in the matter era, as
$$\mathrm{\Phi }=\mathrm{\Phi }_{ad}-\frac{1}{5}S.$$
(63)
Note that the influence of primordial isocurvature perturbations is smaller on the large scale density power spectrum (see (63)) than on large scale temperature anisotropies (see (61)).
#### 4 Spectra
Using the relation (61) (with $`T=1`$), the spectrum for the large scale temperature anisotropies can be expressed in terms of the primordial isocurvature and adiabatic spectra,
$$𝒫_{\frac{\mathrm{\Delta }T}{T}}=\frac{1}{9}𝒫_{\widehat{\mathrm{\Phi }}}+\frac{4}{25}𝒫_{\widehat{S}}-\frac{4}{15}𝒞_{\widehat{\mathrm{\Phi }},\widehat{S}}.$$
(64)
When only primordial adiabatic perturbations are present, the previous expression implies
$$𝒫_{\frac{\mathrm{\Delta }T}{T}}^{1/2}=\frac{1}{3}𝒫_\mathrm{\Phi }^{1/2},$$
(65)
whereas for pure isocurvature perturbations, one finds
$$𝒫_{\frac{\mathrm{\Delta }T}{T}}^{1/2}=2𝒫_\mathrm{\Phi }^{1/2}.$$
(66)
This is in agreement with the standard statement in the literature that isocurvature perturbations generate CMBR anisotropies six times bigger than equivalent adiabatic perturbations. This is the reason why isocurvature perturbations are in general rejected in cosmological models . However, when one takes into account both isocurvature and adiabatic perturbations, with the possibility of correlation, the additional term due to the correlation can change these conclusions significantly. Illustrations will be given in the next subsection.
Similarly, the spectrum for the gravitational potential is given by
$$𝒫_\mathrm{\Phi }=𝒫_{\widehat{\mathrm{\Phi }}}+\frac{1}{25}𝒫_{\widehat{S}}-\frac{2}{5}𝒞_{\widehat{\mathrm{\Phi }},\widehat{S}}.$$
(67)
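The two combinations (64) and (67) can be assembled directly from (51)-(53); the sketch below (our own helper, in units of $`GH_k^2`$) makes the effect of the cross term explicit:

```python
import numpy as np

def primordial_spectra(theta, s_k, R):
    # Eqs. (51)-(53), in units of G * H_k^2
    P_Phi = 8.0 / (9.0 * np.pi) * s_k
    P_S = 2.0 / (np.pi * s_k) * (R**4 / np.cos(theta)**2
                                 + 1.0 / np.sin(theta)**2)
    C = 4.0 / (3.0 * np.pi) * (R**2 - 1)
    return P_Phi, P_S, C

def observable_spectra(theta, s_k, R):
    P_Phi, P_S, C = primordial_spectra(theta, s_k, R)
    P_dTT = P_Phi / 9.0 + 4.0 * P_S / 25.0 - 4.0 * C / 15.0   # Eq. (64)
    P_pot = P_Phi + P_S / 25.0 - 2.0 * C / 5.0                # Eq. (67)
    return P_dTT, P_pot

# the correlation term lowers both spectra relative to the uncorrelated sum
print(observable_spectra(0.45, 60.0, 5.0))
```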
### B All-scale predictions
After having considered long wavelength perturbations, whose advantage is that one can estimate their observable amplitude analytically and thus normalize the models easily, let us now analyze smaller scales, which require the use of numerical computation.
#### 1 CMBR anisotropies
As it is customary, one decomposes the CMBR anisotropies on the basis of spherical harmonics:
$$\frac{\mathrm{\Delta }T}{T}(\theta ,\varphi )=\sum _{l=1}^{\mathrm{}}\sum _{m=-l}^{l}a_{lm}Y_{lm}(\theta ,\varphi ).$$
(68)
The predictions of a model are usually given in terms of the expectation values of the squared multipole coefficients
$$C_l\equiv \langle |a_{lm}|^2\rangle .$$
(69)
In the present model, the temperature anisotropies will be the superposition of a contribution due to the heavy scalar field and of a contribution due to the light scalar field. These two contributions are independent, because the stochastic quantities $`e_h`$ and $`e_l`$ are independent, and as such the coefficients $`C_l`$ can be decomposed
$$C_l=C_l^{(l)}+C_l^{(h)},$$
(70)
where the upper index refers to the ’light’ or ’heavy’ nature of the perturbations. It is important to emphasize that only a decomposition of this type is allowed here. For example, a decomposition of the $`C_l`$ as a sum of an isocurvature contribution and of an adiabatic contribution would be wrong here. In practice, the heavy and light contributions to the $`C_l`$ are computed independently, by using twice a Boltzmann code (developed in our group by A. Riazuelo, and used in ). The first run takes as initial conditions $`\widehat{\mathrm{\Phi }}_h`$ and $`\widehat{S}_h`$ and yields the coefficients $`C_l^{(h)}`$. Similarly, the second run computes the $`C_l^{(l)}`$ using as initial conditions the corresponding quantities $`\widehat{\mathrm{\Phi }}_l`$ and $`\widehat{S}_l`$. The results for $`C_l^{(l)}`$ and $`C_l^{(h)}`$, as well as their sum $`C_l`$, are plotted on Figs. 3-6 for four illustrative models (for all models the Hubble parameter and the baryon density correspond respectively to $`h_{100}=0.5`$, $`\mathrm{\Omega }_b=0.052`$). For the first three models, the value $`R=5`$ has been chosen because the isocurvature and adiabatic contributions of the light scalar field are then of similar amplitude, as is visible on Fig. 1, and the effects of mixing and correlation are particularly important. A consequence of the similar amplitude (with the same sign) of the two ‘light’ contributions is an important suppression of the light spectrum $`C_l^{(l)}`$ for small $`l`$, as noticed already in the previous subsection, and as is visible on Figs. 3-5. In contrast, one can check on Fig. 6 that this will not be the case for the $`R=10`$ model, for which $`\widehat{S}_l`$ is dominant.
The first two graphs have a roughly similar behaviour for the ’heavy’ contribution. What distinguishes them is the ’light’ contribution, which illustrates its high sensitivity to the relative amplitudes of $`\widehat{S}_l`$ and $`\widehat{\mathrm{\Phi }}_l`$. A systematic investigation of the effects of mixed correlated primordial spectra on the temperature anisotropies, independently of the early universe model that produces them, will be given elsewhere . For these two models, one notices an amplification (weak in the first case and strong in the second) of the main acoustic peak with respect to the standard (pure adiabatic and scale-invariant) model. In contrast, the third example shows a suppression of the main peak, which is due to a strong contribution $`\widehat{S}_h`$, which makes the ’heavy’ spectrum look “isocurvature” and thus damps the main peak in the global spectrum. Finally, the last example is characteristic of the domination of the ‘light’ spectrum, itself dominated by the isocurvature contribution ($`\widehat{S}_l`$), which thus makes the global spectrum look ”isocurvature”. It is rather remarkable that modest variations of the two relevant parameters of the model, $`R`$ and $`s_0`$ ($`m_l`$ serves simply for the overall normalization of the perturbations), can lead to a large variety of temperature anisotropy spectra.
### C Power Spectrum
Another quantity which is extremely important for the confrontation of models with observations is the (total) density power spectrum. In the literature, it is usually denoted $`P(k)`$ and its relation to the corresponding spectrum $`𝒫_{\delta ^{(c)}}`$ (for the comoving density contrast) defined generically in (48) is
$$P(k)=2\pi ^2k^{-3}𝒫_{\delta ^{(c)}}.$$
(71)
Using the Poisson equation (62), it can be reexpressed in terms of the gravitational potential spectrum
$$P(k)=\frac{8\pi ^2}{9(a_0H_0)^4}k𝒫_\mathrm{\Phi }(k).$$
(72)
As with the temperature anisotropies, the power spectrum for double inflation is obtainable by computing independently the power spectrum for the heavy scalar field contribution, then that for the light scalar field contribution, and finally by adding the two results,
$$P(k)=P_l(k)+P_h(k).$$
(73)
In contrast with the temperature anisotropies, the influence of the mixing and correlation of the primordial perturbations on the density spectrum is less spectacular, because the shapes of the pure isocurvature and pure adiabatic density spectra are not extremely different. There is however a noticeable difference: the pure isocurvature spectrum has, relative to the large scales, less power on small scales than the pure adiabatic spectrum. To illustrate what happens for mixed and correlated primordial perturbations, Fig. 7 displays, for the model specified by the parameters $`R=5`$ and $`s_0=50`$, the total power spectrum, together with the two independent ‘light’ and ‘heavy’ contributions, as well as the standard adiabatic CDM power spectrum (normalized as before) for comparison. Note that the resulting spectrum has, relative to large scales, less power than the standard adiabatic power spectrum. However, it has globally more power than the standard spectrum for the same temperature anisotropy amplitude at small $`l`$.
## VI Conclusions
The main conclusion of this work is that it is possible, in the simplest model of multiple inflation, to obtain correlated isocurvature and adiabatic primordial perturbations. These perturbations slightly deviate from scale-invariance but their correlation can entail significant modifications with respect to standard single scalar field models.
This class of models, both simple and rich, could provide an interesting testing ground for investigating the feasibility of determining the cosmological parameters and the primordial perturbations from the expected data. The question would be: assuming Nature has chosen this particular model, could we infer the cosmological parameters from the expected temperature anisotropy data, and with which precision? More importantly, would it be possible to discriminate between a single field inflation model and a multiple field model with correlated perturbations, and what would be the price to pay in the precision of the cosmological parameters?
It was not the purpose of the present work to exhibit a model intended to fit the observations better. However, one of these models, surprisingly, turns out to present two characteristics which are at present favoured by observations: a power spectrum with modest power at small scales (compared to the standard CDM model) and a high peak on intermediate scales. It may be worth seeing how well this model does when confronted with the current observations.
Finally, it would be interesting to investigate the possibility of correlated adiabatic and isocurvature perturbations within the framework of multiple inflation with interactions between the scalar fields, and to see how the main features presented here would be modified.
###### Acknowledgements.
I would like to thank Alain Riazuelo for his help with the Boltzmann code he has developed.
## 1 Introduction
The acceleration of high energy particles in astrophysical plasmas is a transport process in configuration and momentum space. In describing the acceleration of charged particles in a magnetised plasma, most analytical descriptions of this process are based on the assumption that the phase-space density $`f(𝒙,𝒑,t)`$ is to zeroth order isotropic and independent of the pitch angle $`\mu =\mathrm{cos}\alpha =𝒑\cdot 𝑩/(pB)`$ between the particle momentum $`𝒑`$ and magnetic field $`𝑩`$. Under this assumption, the process of acceleration at a plane shock wave moving in the $`x`$-direction can be described using the isotropic particle density $`n(x,p,t)=4\pi p^2f^{(0)}(x,p,t)`$, where $`p=|𝒑|`$. The transport equation in a plasma at velocity $`u(x,t)`$ is then given by (e.g. Parker 1965; Jones & Ellison 1991):
$`{\displaystyle \frac{\partial n}{\partial t}}+{\displaystyle \frac{\partial }{\partial x}}(un+F)={\displaystyle \frac{1}{3}}{\displaystyle \frac{\partial u}{\partial x}}{\displaystyle \frac{\partial }{\partial p}}(pn),`$ (1)
where $`F(x,p,t)`$ is the flux due to the stochastic propagation of particles in configuration space. If the spatial transport can be described by standard diffusion, then $`F`$ is proportional to the gradient of the density, and Eq. (1) is the well known diffusion-convection equation. In this case, the momentum dependence of the phase-space density of particles accelerated at a strong shock is given by a power law $`f(p)\propto p^{-s}`$ with spectral index $`s=3r/(r-1)`$, depending solely on the compression ratio<sup>1</sup><sup>1</sup>1$`r=\rho ^{}/\rho `$, where $`\rho ^{}`$ and $`\rho `$ are the downstream and upstream plasma densities respectively. $`r`$ of the shock. However, the presence of a braided magnetic field (Jokipii & Parker 1969) can introduce non-diffusive spatial transport. This is important especially in quasi-perpendicular shock fronts, where the mean magnetic field $`𝑩_0`$ lies in the plane of the shock, and a stochastic component with $`\delta b:=|\delta 𝑩|/|𝑩_0|\lesssim 1`$ parallel to the shock normal exists (in the $`x`$-direction). Particles which follow the field lines are subject to a combined diffusion process. One is along the field line due to pitch-angle scattering and the other is introduced by the stochastic spatial fluctuations of the magnetic field on a larger scale than those responsible for scattering. This together leads to an anomalous transport of particles while they gain energy through shock crossings, which is outlined in Sect. 2, followed by a brief description of our Monte-Carlo method in Sect. 3. This method is designed to investigate test-particle acceleration in magnetic fields with a stochastic component, without a priori assumptions about the pitch-angle distribution of the phase-space density. The results are presented in Sect. 4, showing especially the dependence of the spectral index $`s`$ on the compression ratio $`r`$ in two different transport regimes in comparison to analytical treatments.
## 2 Anomalous transport
The main aspect of particle transport in a braided field (stochastic field with $`\delta b\lesssim 1`$) is the introduction of memory into the particle propagation. The change of the density at time $`\stackrel{~}{t}`$ is no longer proportional to the second derivative of the density at this time alone (standard diffusion equation), but also depends on the second derivative at times $`t<\stackrel{~}{t}`$. This arises because any local variation of the particle density which is caused by the geometry of the magnetic field itself is not the source of a diffusive particle flux, and remains associated with the field line. This contribution has therefore to be subtracted from the standard diffusion term. A formulation of this kind of anomalous transport has been given by Balescu (1995). Effectively this introduces a memory, in the particle density at time $`\stackrel{~}{t}`$, of the spatial realisation of the magnetic field to which a particle was correlated during $`\stackrel{~}{t}-t_{\mathrm{corr}}<t<\stackrel{~}{t}`$. An important consequence of transport in braided fields is revealed by the time dependence of the mean quadratic deviation perpendicular to $`𝑩_0`$, in the $`x`$-direction, $`\langle (\mathrm{\Delta }x(t))^2\rangle \propto t^\alpha `$, which is given by $`\alpha =1/2`$ (Rechester & Rosenbluth 1978; Rax & White 1992). This kind of transport is called sub-diffusion. Note that here $`x`$ is the direction along the shock normal, and the relevant reference system is the magnetic field, which flows downstream with the background plasma. The time dependence shows that particles are even more effectively swept away from the shock in the downstream direction as compared to standard diffusion ($`\alpha =1`$). This increases the escape probability and leads to a steeper spectrum, as shown by the results in Sect. 4.
## 3 Monte-Carlo method
The simulation of particle acceleration in a stochastic field (static in the background plasma) has to account for the memory introduced by the magnetic field as described in the previous section. The spatial transport is a non-Markovian process. We generate a constant mean magnetic field and stochastic fluctuations at equidistant grid points, and assume the field to be linear in between. Using a random number generator for the stochastic fluctuations which allows all values to be recalled, we are able to ensure a complete memory of the field until the particle crosses an escape boundary far downstream (Gieseler et al. 1997). At the same time this method allows us to use a new random number for each field patch the particle crosses, which leads to standard diffusion. A combination of the recalled value and a new random component would simulate a finite correlation time of particle and magnetic field. This is, of course, the more realistic case. However, to investigate the principal effect of sub-diffusion, we present here only results for ‘pure’ sub-diffusion and standard diffusion. Particles move along the field lines under the influence of pitch-angle scattering. The length scale of the grid spacing of the field sampling is chosen to be of the same order as the scattering length. This ensures that, while particles are transported in configuration space due to the field line geometry, they diffuse along the field line itself. At the same time it avoids particles diffusing along the field while sampling only a linear patch of it. At a change of the magnetic field direction (in particular at the shock) we make use of the conservation of the magnetic moment $`p_{}^2/B`$, where $`p_{}`$ is the component of the momentum perpendicular to the magnetic field $`B`$. This approximation is valid especially for non-relativistic quasi-perpendicular shocks, which we consider here (Gieseler et al. 1999, see also for a description of pitch-angle scattering). The momentum remains constant in the corresponding upstream and downstream rest frames. On crossing the shock, the momentum and pitch-angle are transformed into the new system (Gieseler 1998). This method allows us to measure the particle propagator and the steady state density profile, which are in agreement with theoretical predictions from Kirk et al. (1996) for sub-diffusive transport (Gieseler et al. 1997). Furthermore, we are able to measure the pitch-angle distribution and the momentum spectrum, which are presented in the next section.
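The recallable-field idea is easiest to see in a stripped-down toy model. The sketch below (our own illustration with hypothetical names, not the production code described above) freezes the field-line slope in each grid cell by seeding a generator with the cell index, so re-visited cells return identical values; switching `recall` off redraws the field on every step. Measuring $`\langle (\mathrm{\Delta }x)^2\rangle `$ against time recovers $`\alpha \simeq 1/2`$ with memory and $`\alpha \simeq 1`$ without:

```python
import numpy as np

def slope(cell, line_id):
    """Frozen transverse slope of field line `line_id` in grid cell `cell`;
    seeding with the cell index makes the value recallable on every visit."""
    rng = np.random.default_rng(hash((int(cell), int(line_id))) & 0xFFFFFFFF)
    return rng.standard_normal()

def mean_square_x(n_part=200, n_steps=2000, recall=True, seed=1):
    rng = np.random.default_rng(seed)
    msd = np.zeros(n_steps)
    for p in range(n_part):
        s, x = 0.0, 0.0                       # arc length along line, x-offset
        for t in range(n_steps):
            ds = rng.choice((-1.0, 1.0))      # pitch-angle scattering step
            cell = s if ds > 0 else s - 1.0   # grid cell being traversed
            if recall:
                x += slope(cell, p) * ds      # frozen field line of particle p
            else:
                x += rng.standard_normal() * ds   # fresh field value each time
            s += ds
            msd[t] += x * x
    return msd / n_part

for recall in (True, False):
    m = mean_square_x(recall=recall)
    t = np.arange(1, m.size + 1)
    alpha = np.polyfit(np.log(t[100:]), np.log(m[100:]), 1)[0]
    print("recall" if recall else "fresh ", f"alpha ~ {alpha:.2f}")
```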
## 4 Particle acceleration at quasi-perpendicular shocks
In accelerating particles over between two orders of magnitude (for the steepest spectra) and six orders of magnitude in momentum, we always find a power law for the momentum distributions. We do not include loss mechanisms, and fit a power law function $`f\propto p^{-s}`$ between about one order of magnitude above the injection momentum and one order of magnitude below the (technical) cut-off. The results are plotted in Fig. 2 for relativistic particles ($`v=c`$) at non-relativistic shocks ($`u_\mathrm{s}\ll c`$) for various compression ratios $`r`$. Dots represent standard diffusive acceleration, where the value of the fluctuation of a patch of field line is always random, i.e. no memory effect is introduced. The stars show the spectral index for particles which always move along the same field line, so that sub-diffusive behaviour can take effect. The statistical error of the fit itself is well represented by the marker symbols. However, whereas the flatter diffusive spectra extend over many orders of magnitude, the steep sub-diffusive spectra are more difficult to measure. The maximal systematic error in finding the spectral index from the momentum distribution is indicated by error bars. Because the memory effect for sub-diffusion cannot set in immediately, the momentum distribution has a plateau below about ten times the injection momentum. This is indicated by the lower bound of the error bar. A fit to the region where the spectrum is cut off for technical reasons gives the upper bound of the error bar. For spectra flatter than about $`s=5`$, a cut-off is effectively absent, so that the upper bound almost coincides with the plotted index. It can be seen from Fig. 2 that the spectrum for sub-diffusive acceleration is significantly steeper than for standard diffusion. We now compare our results to analytical predictions, remembering that these are found under the assumption of an almost isotropic pitch-angle distribution. For standard diffusion the result was quoted in connection with Eq. (1): $`s=3r/(r-1)`$, and is plotted as a dashed line in Fig. 2. Although we found that the pitch-angle distribution is not really isotropic in this case, the spectral index found by the Monte-Carlo method agrees quite well with the analytical result. For sub-diffusive transport, an analytical solution was found by Kirk et al. (1996):
$`s=3\left(1+{\displaystyle \frac{n(\infty )}{n(0)}}{\displaystyle \frac{1}{r-1}}\right)={\displaystyle \frac{3r}{r-1}}\left(1+{\displaystyle \frac{1}{2r}}\right);\text{where}{\displaystyle \frac{n(0)}{n(\infty )}}={\displaystyle \frac{2}{3}}.`$ (2)
The second relation means that the density of continuously injected particles at the shock is less than the density far downstream. The resulting spectral index $`s(r)`$ is plotted as a solid line in Fig. 2. Again, this result was found under the assumption of an almost isotropic phase-space density. The Monte-Carlo method does not make any assumptions on this distribution; moreover, we are able to measure the pitch-angle distribution at any distance from the shock. Figure 1 shows the pitch-angle distribution immediately upstream of the shock, in the upstream rest frame, for the sub-diffusive and diffusive transport regimes at compression ratios $`r=3`$ and $`r=6`$. We found the highest anisotropy for sub-diffusive transport and high compression ratio. Here, the deviation of the Monte-Carlo results from the analytical result (2) is most prominent (see Fig. 2). We found that the density of accelerated particles is not only reduced at the shock by the amount predicted by Kirk et al. (1996); in addition, a jump arises at the shock, which is intimately related to an anisotropic phase-space distribution (Gieseler et al. 1999). This jump is such that the upstream density is reduced even more than indicated by Eq. (2) (Gieseler 1998). This leads to an increased escape probability, and therefore to a steeper spectrum, as compared to the analytical result.
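As a quick numerical illustration of Eq. (2) (our own check, not from the paper's code), the two analytical predictions can be tabulated as follows; the Monte-Carlo index quoted in the conclusions below lies above even the sub-diffusive curve.

```python
def s_diffusive(r):
    # standard diffusive shock acceleration: s = 3r/(r-1)
    return 3.0 * r / (r - 1.0)

def s_subdiffusive(r):
    # isotropic-distribution prediction of Kirk et al. (1996), Eq. (2)
    return s_diffusive(r) * (1.0 + 1.0 / (2.0 * r))

for r in (2, 3, 4, 6):
    print(f"r = {r}: diffusive s = {s_diffusive(r):.2f}, "
          f"sub-diffusive s = {s_subdiffusive(r):.2f}")
# r = 4 gives s = 4.00 and 4.50; the measured sub-diffusive index (s = 5.3)
# is steeper still, reflecting the anisotropy at the shock.
```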
## 5 Conclusions
We presented Monte-Carlo simulations of particle acceleration at non-relativistic quasi-perpendicular shock fronts. We found that a stochastic component in addition to the mean magnetic field introduces sub-diffusive particle transport. The transport aspects (like the propagator and density) were compared to analytical treatments earlier (Gieseler et al. 1997), and we found very good agreement. Moreover, we tested our Monte-Carlo code for oblique shocks against semi-analytical results, and again found precise agreement (Gieseler et al. 1999). Here we showed that particle acceleration in the sub-diffusive transport regime leads to a much steeper spectrum (e.g. $`s=5.3`$ for $`r=4`$) compared to standard diffusion ($`s=4.0`$ for $`r=4`$), even steeper than predicted by Kirk et al. (1996). The steepening of the spectrum depends strongly on whether or not particles are correlated with field lines, and not (to first order) on the shock velocity, the scattering operator, or the amplitude of the magnetic field fluctuations. However, if the mean field is not strictly perpendicular, i.e. is oblique with an angle $`\mathrm{\Theta }`$ with respect to the shock normal, then the transport properties depend on the amplitude of the fluctuations. The sub-diffusive transport will take effect as long as $`\delta b>1/\mathrm{tan}\mathrm{\Theta }`$. It is clear that for the sub-diffusive transport regime our result yields an upper limit on the spectral index (i.e. the steepest possible spectrum), because it was produced by an unlimited correlation of particle and field line. In reality, particles will, of course, decorrelate from a given initial field geometry. This is connected with the realisation of the magnetic field itself, and is the subject of further investigation.
## 6 Acknowledgments
This work was supported by the University of Minnesota Supercomputing Institute, by NSF grant AST-9619438 and by NASA grant NAG5-5055. U.G. acknowledges support from the Deutsche Forschungsgemeinschaft under SFB 328.
References
Balescu R., 1995, Phys. Rev. E 51, 4807
Gieseler U.D.J., 1998, Dissertation, Univ. Heidelberg, MPI für Kernphysik, preprint MPI H-V6-1998
Gieseler U.D.J., Duffy P., Kirk J.G., Gallant Y.A., 1997, Proc. 25th Int. Cosmic Ray Conf., Durban, 4, 437
Gieseler U.D.J., Kirk J.G., Gallant Y.A., Achterberg A., 1999, A&A 345, 298
Jokipii J.R., Parker E.N., 1969, ApJ 155, 777; 799
Jones F.C., Ellison D.C., 1991, Space Science Reviews 58, 259
Kirk J.G., Duffy P., Gallant Y.A., 1996, A&A 314, 1010
Parker E.N., 1965, Planet. Space Sci. 13, 9
Rax J.M., White R.B., 1992, Phys. Rev. Lett. 68, 1523
Rechester A.B., Rosenbluth M.N., 1978, Phys. Rev. Lett. 40, 38
# Plateaux Transitions from S-matrices based on 𝑆𝐿(2,𝑍) Invariant Field Theories
## I Introduction
A simple model was recently proposed in connection with Quantum Hall plateaux transitions. After bosonization, the model consists of a scalar field coupled to a pure gauge field with a boundary interaction incorporating a circular defect line of impurities. The $`c=1`$ conformal field theory possesses an $`SL(2,\text{Z Z })`$ symmetry. Though the precise connection with the complete Landau problem in the presence of disorder is not entirely clear, the model does have some promising features, and we continue to investigate it in this paper.
In the next section an exact S-matrix description of the theory is proposed. We cannot claim to give here an absolute derivation of this S-matrix since we are missing a more precise treatment of the zero-mode constraint (Eq. (3)). Rather we give a suggestive derivation based on the prescriptive treatment described in .
The proposed scattering description allows an exact computation of the boundary (impurity) contribution to the free energy, $`\mathrm{log}g`$. It reveals an infinite series of plateaux at integer values of $`g`$.
## II S-matrices
Let us first summarize some of the features of the model described in . The conformal field theory was defined by the euclidean action
$$S=\int dt\,d\sigma \left(\frac{1}{8\pi }\left(\partial _\mu \phi \right)^2-\frac{i}{2\pi \widehat{g}}ϵ_{\mu \nu }\partial _\nu \phi A_\mu -\frac{1}{2\pi \widehat{g}^2}A_\mu ^2\right)$$
(1)
where $`A_\mu `$ is the electro-magnetic gauge field. The above theory lives on a cylinder with $`0<\sigma <2\pi `$, and in the folded boundary version which we consider here, $`0<t<\infty `$. The lagrangian possesses the gauge symmetry
$$\mathcal{L}_{\mathrm{cft}}(\stackrel{~}{\phi }+\frac{2}{\widehat{g}}\lambda ,A_\mu +\partial _\mu \lambda )=\mathcal{L}_{\mathrm{cft}}(\stackrel{~}{\phi },A_\mu )$$
(2)
where $`\partial _\mu \stackrel{~}{\phi }=iϵ_{\mu \nu }\partial _\nu \phi `$. The gauge field was taken to be a pure, singular gauge $`A_\mu =\partial _\mu \chi `$, and the theory was supplemented by the zero mode constraint
$$\frac{\theta }{2\pi \widehat{g}}\oint dx_\mu \partial _\mu \phi =\oint dx_\mu \partial _\mu \chi $$
(3)
When $`\sigma _{xx}=0`$, the parameter $`\theta `$ was related to the Hall conductivity as $`\sigma _{xy}=1/\theta `$.
The pure gauge field can be gauged away using
$$\mathcal{L}_{\mathrm{cft}}(\stackrel{~}{\phi }+\frac{2}{\widehat{g}}\chi ,\partial _\mu \chi )=\mathcal{L}_{\mathrm{cft}}(\stackrel{~}{\phi },0)$$
(4)
Using the zero mode constraint Eq. (3), this leads us to consider the gauge transformation
$$\stackrel{~}{\phi }\to \stackrel{~}{\phi }+\frac{\theta }{\pi \widehat{g}^2}\phi $$
(5)
From the transformation (5) on the primary fields of the theory, one finds that the partition function has an $`SL(2,\text{Z Z })`$ invariance acting on the modular parameter $`\tau =\theta /2\pi +i\widehat{g}^2/2`$.
Upon adding a circular defect line of impurities, in the folded theory this corresponds to a boundary term in the action $`\int d\sigma \,\mathrm{cos}\left(\stackrel{~}{\phi }(0,\sigma )/\sqrt{2}\right)`$, where the boundary is at $`t=0`$. Performing the gauge transformation (5) led to the boundary field theory
$$S=S_{bcft}+\lambda \int d\sigma \,\mathrm{cos}\left(\frac{b}{2}\left(\stackrel{~}{\phi }+\frac{\theta }{\pi \widehat{g}^2}\phi \right)\right)$$
(6)
where $`b=\sqrt{2}`$. The boundary conformal field theory $`S_{bcft}`$ is that of a free scalar field, but with the unusual boundary condition:
$$\partial _t\phi -\frac{i\theta }{\pi \widehat{g}^2}\partial _\sigma \phi =0,(t=0)$$
(7)
The above theory is massless in the bulk, but with a mass/length scale at the boundary. To obtain an S-matrix description of this theory we follow the ideology described in . Namely, we imagine turning on an integrable bulk interaction such that the integrability is not destroyed by the boundary interactions. This selects a special basis of particles in the bulk that diagonalize the boundary interactions. We then take a massless limit in the bulk. A bulk interaction that is compatible with the boundary interaction, as far as integrability goes, is $`\delta S_{\mathrm{bulk}}=\mathrm{\Lambda }\int dt\,d\sigma \,\mathrm{cos}(b\stackrel{~}{\phi })`$, where again $`b=\sqrt{2}`$. The reason is that, since $`(\partial _\mu \phi )^2=(\partial _\mu \stackrel{~}{\phi })^2`$, the above theory without the gauge field, written in terms of $`\stackrel{~}{\phi }`$, is equivalent to the boundary sine-Gordon model, which is known to be integrable. Performing the same gauge transformation Eq. (5) leads us to consider $`\delta S_{\mathrm{bulk}}=\mathrm{\Lambda }\int dt\,d\sigma \,\mathrm{cos}b(\stackrel{~}{\phi }+\theta \phi /\pi \widehat{g}^2)`$.
Next consider the boundary condition Eq. (7). Using $`ϵ_{\sigma t}=-ϵ_{t\sigma }=1`$, one has $`i\partial _\sigma \phi =\partial _t\stackrel{~}{\phi }`$. Thus the boundary condition can be written as
$$\partial _t\left(\phi -\frac{\theta }{\pi \widehat{g}^2}\stackrel{~}{\phi }\right)=0,(t=0)$$
(8)
This is a Neumann boundary condition for the combination $`\phi -\theta \stackrel{~}{\phi }/\pi \widehat{g}^2`$. All of this leads us to consider the bulk theory
$$S_{\mathrm{bulk}}=\int dt\,d\sigma \left[\frac{1}{8\pi }\left(\partial _\mu \left(\phi -\frac{\widehat{\theta }}{2\pi }\stackrel{~}{\phi }\right)\right)^2+\mathrm{\Lambda }\mathrm{cos}\sqrt{2}\left(\stackrel{~}{\phi }+\frac{\widehat{\theta }}{2\pi }\phi \right)\right]$$
(9)
where we have defined $`\widehat{\theta }=2\theta /\widehat{g}^2`$. When $`\mathrm{\Lambda }=0`$, the boundary version of the above free theory leads to the boundary condition (8).
Let us now rewrite the theory in terms of $`\stackrel{~}{\phi }`$. As far as the bulk theory is concerned, we can drop the topological term $`\partial _\mu \phi \partial _\mu \stackrel{~}{\phi }`$, since it is identically zero. Using $`(\partial _\mu \stackrel{~}{\phi })^2=(\partial _\mu \phi )^2`$, and defining a rescaled field $`\stackrel{~}{\phi }\to i\stackrel{~}{\phi }/\sqrt{1-(\widehat{\theta }/2\pi )^2}`$, one finds
$$S_{\mathrm{bulk}}=\int dt\,d\sigma \left(\frac{1}{8\pi }(\partial _\mu \stackrel{~}{\phi })^2+\mathrm{\Lambda }\mathrm{cosh}\left(b_L\stackrel{~}{\phi }_L+b_R\stackrel{~}{\phi }_R\right)\right)$$
(10)
where we have used the left-right decompositions $`\phi =\phi _L+\phi _R`$, $`\stackrel{~}{\phi }=\phi _L-\phi _R`$, and
$$\frac{b_L}{\sqrt{2}}=\frac{1+\widehat{\theta }/2\pi }{\sqrt{1-(\widehat{\theta }/2\pi )^2}},\frac{b_R}{\sqrt{2}}=\frac{1-\widehat{\theta }/2\pi }{\sqrt{1-(\widehat{\theta }/2\pi )^2}},$$
(11)
When $`b_L=b_R`$, the above bulk theory is the well-known sinh-Gordon model. Consider now the massless limit $`\mathrm{\Lambda }\to 0`$. The theory has both left and right moving particles. For the right-movers we parameterize the energy and momentum by $`E=P=\mu e^\beta `$, and for the left movers $`E=-P=\mu e^{-\beta }`$, where $`\beta `$ is a rapidity and $`\mu `$ an arbitrary energy scale. One can describe the theory in terms of an S-matrix $`S_{LL}(\beta )`$ for the left-movers and $`S_{RR}(\beta )`$ for the right-movers. From the form of the bulk interaction in Eq. (10) it is clear that $`S_{LL}`$ is the sinh-Gordon S-matrix defined by the sinh-Gordon coupling $`b_L`$, whereas $`S_{RR}`$ is defined by $`b_R`$. One argument is the following. In the sine-Gordon version, one can characterize the S-matrices by the non-local quantum affine symmetry. This symmetry survives in the massless limit, and the left-moving (right-moving) quantum affine symmetry will have $`q`$-deformation parameter defined by $`b_L`$ $`(b_R)`$. Using the known S-matrix for the sinh-Gordon model, the result is thus
$$S_{LL}(\beta )=\frac{\mathrm{tanh}\frac{1}{2}(\beta -i\pi \gamma _L)}{\mathrm{tanh}\frac{1}{2}(\beta +i\pi \gamma _L)}$$
(12)
where
$$\gamma _L=\frac{b_L^2}{2+b_L^2}=\frac{1}{2}+\frac{\widehat{\theta }}{4\pi }$$
(13)
Similarly, $`S_{RR}`$ is given by Eq. (12) with
$$\gamma _R=\frac{b_R^2}{2+b_R^2}=\frac{1}{2}-\frac{\widehat{\theta }}{4\pi }=1-\gamma _L$$
(14)
Using the invariance of the S-matrix under $`\gamma \to 1-\gamma `$, one sees that $`S_{LL}=S_{RR}`$.
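The algebraic relations above are easy to verify numerically. The short check below (our own, with an arbitrary sample value of $`\widehat{\theta }`$) confirms $`b_Lb_R=2`$, $`\gamma _L+\gamma _R=1`$, the unitarity $`|S(\beta )|=1`$ for real rapidity, and the $`\gamma \to 1-\gamma `$ invariance underlying $`S_{LL}=S_{RR}`$.

```python
import numpy as np

theta_hat = 1.3                 # arbitrary sample value of theta_hat
x = theta_hat / (2 * np.pi)

bL = np.sqrt(2) * (1 + x) / np.sqrt(1 - x**2)   # Eq. (11)
bR = np.sqrt(2) * (1 - x) / np.sqrt(1 - x**2)
gL = bL**2 / (2 + bL**2)                         # Eq. (13)
gR = bR**2 / (2 + bR**2)                         # Eq. (14)

def S(beta, gamma):
    # sinh-Gordon S-matrix, Eq. (12)
    return np.tanh(0.5 * (beta - 1j * np.pi * gamma)) / \
           np.tanh(0.5 * (beta + 1j * np.pi * gamma))

beta = 0.7
print(bL * bR)                        # -> 2.0
print(gL + gR)                        # -> 1.0, i.e. gamma_R = 1 - gamma_L
print(abs(S(beta, gL)))               # -> 1.0 (unitarity)
print(S(beta, gL) - S(beta, 1 - gL))  # -> 0   (gamma -> 1-gamma invariance)
```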
We discuss now the $`SL(2,\text{Z Z })`$ properties of the above S-matrix. As described in , the bulk conformal field theory has an $`SL(2,\text{Z Z })`$ invariance acting on the modular parameter $`\tau =\theta /2\pi +i\widehat{g}^2/2`$. More specifically, the partition function on the torus is invariant under $`SL(2,\text{Z Z })`$. Since $`S_{LL},S_{RR}`$ provide a scattering description of this conformal field theory, we expect the S-matrix to be at least in part characterized by this symmetry. Under $`\tau \to -1/\tau `$, one has that $`(\widehat{g},\theta )\to (\widehat{g}^{\prime },\theta ^{\prime })`$ where
$$\widehat{g}^{\prime 2}=\frac{\widehat{g}^2}{\left(\widehat{g}^4/4+(\theta /2\pi )^2\right)},\theta ^{\prime }=\frac{-\theta }{\left(\widehat{g}^4/4+(\theta /2\pi )^2\right)}$$
(15)
Note that under this transformation, $`\theta /\widehat{g}^2\to -\theta /\widehat{g}^2`$, which implies $`\widehat{\theta }\to -\widehat{\theta }`$. Thus $`\tau \to -1/\tau `$ simply exchanges left and right movers:
$$\tau \to -1/\tau \mathrm{\Longleftrightarrow }b_L^2\leftrightarrow b_R^2$$
(16)
Since $`S_{LL}=S_{RR}`$, the scattering theory has this invariance.
The other independent generator of $`SL(2,\text{Z Z })`$ corresponds to $`\tau \to \tau +1`$, which corresponds to $`\theta \to \theta +2\pi `$. The S-matrices turn out to be invariant only under a multiple of this transformation. Namely, using the invariance of the S-matrices under $`\gamma \to \gamma +2`$, one verifies their invariance under $`\theta \to \theta +4\pi \widehat{g}^2`$. For the original fermion model considered in with $`\widehat{g}=\sqrt{2}`$ this corresponds to $`\theta \to \theta +8\pi `$. The significance of this is not entirely clear.
We now consider performing the analytic continuation $`\theta \to i\theta `$. There are at least two justifications for doing this. First, the Hall conductivity computed in is imaginary unless one performs this continuation. Second, in order to formulate the above scattering description one needs to have identified a “time” by continuing to Minkowski space. Let us analytically continue to Minkowski space by identifying $`\sigma `$ as the time (based on the boundary interaction) and letting $`\sigma \to i\sigma `$. The Minkowski action $`S_M`$ is obtained from the euclidean one by $`S_M=iS(\sigma \to i\sigma )`$. This leads to the analytic continuation $`\theta \to i\theta `$. To see this, let us rescale $`\chi \to \theta \chi `$. Then the euclidean topological term is
$$S^{\mathrm{top}}=i\theta \int dt\,d\sigma \,(\partial _t\phi \partial _\sigma \chi -\partial _\sigma \phi \partial _t\chi )$$
(17)
Letting $`\sigma \to i\sigma `$ and identifying $`S_M`$ as above, one finds that the Minkowski action is obtained by the analytic continuation $`\theta \to i\theta `$. Performing this continuation in Eq. (11) one finds that $`b_R=b_L^{*}`$, $`b_Lb_R=2`$, and that the S-matrices now have the parameters:
$$\gamma _L=\frac{1}{2}+i\frac{\widehat{\theta }}{4\pi },\gamma _R=\gamma _L^{*}$$
(18)
The resulting S-matrix has the same form as the bulk massive staircase model of Zamolodchikov , where $`\gamma `$ was taken as $`\gamma _\pm =1/2\pm i\theta _0/\pi `$. However, since we are here dealing with a massless bulk theory, the interpretation is different, and in fact our theory does not suffer from some of the interpretational problems of the bulk massive staircase model. Namely, since there is no bulk interaction in the massless limit, just a free scalar field, the theory does not have the reality problems of the massive case arising from complex values of the sinh-Gordon coupling $`b`$. The above scattering theory is a limiting case of the bulk massive theories studied in .
Finally, the interactions at the boundary are described by a reflection S-matrix $`R(\beta )`$ for reflection of the above particles off the boundary. In the massless sinh-Gordon model, this reflection S-matrix, which can be obtained by taking the explicit massless limit of the boundary sinh-Gordon model, is known to be independent of the sinh-Gordon coupling $`b_L`$. (See ). The result should be the same in our case and we thus take
$$R(\beta )=\mathrm{tanh}(\beta /2-i\pi /4)$$
(19)
The physical S-matrix for right movers is $`R(\beta -\beta _B)`$, and for left movers $`R(\beta +\beta _B)`$, where $`\mu e^{\beta _B}`$ is defined as a physical boundary energy scale.
## III Plateaux Transitions in the Boundary Entropy
We consider now the theory on a semi-infinite cylinder of circumference $`L`$, where $`\sigma `$ runs along the circumference and $`\sigma \equiv \sigma +L`$ and as before $`0<t<\infty `$. Viewing $`\sigma `$ as the time, the Hilbert space lives on the semi-infinite line $`0<t<\infty `$, and finite size $`L`$ effects can be computed from the “L-channel” thermodynamic Bethe ansatz. Of interest is the boundary entropy $`\mathrm{log}g`$, defined as the contribution to the free energy that is independent of the length $`R`$ of the cylinder, $`\mathrm{log}Z=\mathrm{log}g+\mathrm{log}Z_{\mathrm{bulk}}`$, where $`\mathrm{log}Z_{\mathrm{bulk}}`$ is proportional to $`R`$. Viewing the theory as a $`1+1`$ dimensional quantum system, $`g`$ represents the ground state degeneracy, i.e. it counts the number of states at the boundary.
Using the formulas in , one has
$$\mathrm{log}g=\frac{1}{2\pi i}\int _{-\infty }^{\infty }d\beta \left[\partial _\beta \mathrm{log}R(\beta )\right]\mathrm{log}\left(1+e^{-\epsilon (\beta )}\right)$$
(20)
where $`\epsilon (\beta )`$ is a solution of the integral equation
$$\epsilon (\beta )=\frac{L}{2\xi _B}e^\beta -\frac{1}{2\pi i}\int d\beta ^{\prime }\left[\partial _\beta \mathrm{log}S_{LL}(\beta -\beta ^{\prime })\right]\mathrm{log}\left(1+e^{-\epsilon (\beta ^{\prime })}\right)$$
(21)
In the above equation, $`\xi _B`$ is a boundary length scale $`1/\xi _B=\mu e^{\beta _B}`$; it defines an energy scale at the boundary $`E_B=1/\xi _B`$.
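A minimal iterative solution of the system (20)-(21) can be sketched as follows (our own illustration; the grid, the value of $`\widehat{\theta }`$ and the damping are assumptions). The closed-form kernels follow from Eqs. (12), (18) and (19) with the sign conventions restored above: writing $`S_{LL}=(\mathrm{sinh}\beta -ia)/(\mathrm{sinh}\beta +ia)`$ with $`a=\mathrm{sin}\pi \gamma _L=\mathrm{cosh}(\widehat{\theta }/4)`$, one finds $`(1/2\pi i)\partial _\beta \mathrm{log}S_{LL}=a\mathrm{cosh}\beta /[\pi (\mathrm{sinh}^2\beta +a^2)]`$ and $`(1/2\pi i)\partial _\beta \mathrm{log}R=1/(2\pi \mathrm{cosh}\beta )`$.

```python
import numpy as np

theta_hat = 100.0
a = np.cosh(theta_hat / 4.0)        # sin(pi*gamma_L) with gamma_L of Eq. (18)

beta = np.linspace(-30.0, 30.0, 1201)
db = beta[1] - beta[0]

def phi(b):
    # (1/2*pi*i) d_beta log S_LL(b); two bumps at |b| ~ theta_hat/4
    return a * np.cosh(b) / (np.pi * (np.sinh(b) ** 2 + a ** 2))

K = phi(beta[:, None] - beta[None, :]) * db     # convolution kernel matrix

def log_g(L_over_xiB, n_iter=400):
    drive = 0.5 * L_over_xiB * np.exp(beta)
    eps = drive.copy()
    for _ in range(n_iter):                     # damped fixed-point iteration
        eps = 0.5 * eps + 0.5 * (drive - K @ np.logaddexp(0.0, -eps))
    bdry = 1.0 / (2.0 * np.pi * np.cosh(beta))  # (1/2*pi*i) d_beta log R
    return np.sum(bdry * np.logaddexp(0.0, -eps)) * db

for L_over in (1e-8, 1e-4, 1.0):                # decreasing xi_B / L
    print(f"L/xi_B = {L_over:g}:  g = {np.exp(log_g(L_over)):.3f}")
```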
The numerical solution of $`g`$ as a function of $`\xi _B/L`$ is shown for the two values of $`\widehat{\theta }=200,100`$ in figures 1 and 2. There are two important features of these figures. The first is that on the plateaux $`g`$ takes on the series of integers $`g=1,2,3,\mathrm{\dots }`$, as anticipated in . The second is that the plateaux are more clearly defined as $`\widehat{\theta }`$ is increased.
## IV Discussion
We conclude with a discussion of the possible implications of our results for the proposal made in in connection with the Quantum Hall transitions. The integer valued plateaux that we found in the boundary state degeneracy is a promising feature. The meaning of the boundary entropy suggests that as the scale $`L`$ is decreased, the model goes through a series of transitions where at each transition one more state becomes localized at the impurities. In order to verify this picture one needs to relate the boundary entropy of our 2-dimensional model to the $`2+1`$ dimensional system.
The significance of $`\widehat{\theta }=\pi `$, as discussed in , cannot be seen from what we have done in this paper since we have not studied the conductivities $`\sigma _{xx},\sigma _{xy}`$. In the conformal field theory it was argued that $`\widehat{\theta }=\pi `$ implies $`\sigma _{xx}=0`$, $`\sigma _{xy}=1/\theta `$. One needs to study the conductivities in the presence of the boundary interaction. It should be possible to do this using form factors. Since the conductivities are reduced to boundary effects, it seems likely that the transitions in the boundary entropy will entail transitions in the conductivity, but only a detailed analysis will tell.
A related issue which needs clarification concerns whether the scattering theory described here implicitly treats the boundary condition in a way that is consistent with the perturbative treatment in , which, when $`\widehat{\theta }=\pi `$, led to the correlation length exponent $`20/9`$.
# Multiscale Random-Walk Algorithm for Simulating Interfacial Pattern Formation
## Abstract
We present a novel computational method to simulate accurately a wide range of interfacial patterns whose growth is limited by a large scale diffusion field. To illustrate the computational power of this method, we demonstrate that it can be used to simulate three-dimensional dendritic growth in a previously unreachable range of low undercoolings that is of direct experimental relevance.
Interfacial patterns form spontaneously in a wide range of physical and biological systems where the motion of an interface is limited by one (or several) diffusion fields, each one obeying the diffusion equation
$$\partial _tu=D\nabla ^2u$$
(1)
with specified boundary conditions on the interface. Classic examples include various cellular, dendritic, or eutectic solidification patterns (where $`u`$ represents the temperature or an impurity concentration) , dendrite-like branched patterns formed during electrochemical deposition (where $`u`$ is some ion concentration) and complex growth morphologies produced by bacterial colonies under stress (where $`u`$ can represent the concentration of some nutrient or a signaling agent) .
When attempting to accurately simulate the growth of such structures, one is generally faced with two major difficulties. The first one is front tracking, which requires one to accurately resolve the boundary conditions imposed on $`u`$ at the evolving interface. Dendritic solidification, where even a weak crystalline anisotropy crucially influences the morphological development, epitomizes this difficulty. The second problem is the large disparity of scale between the growing structure and the diffusion field surrounding it. The physical origin of this disparity is essentially dimensional. The diffusion field decays ahead of the growing structure on a length scale $`l\sim D/v`$, where $`v`$ is the velocity of the advancing interface, whereas the characteristic scale of the structure, e.g. the tip radius $`\rho `$ of a growing dendrite, is itself the geometric mean of $`l`$ and a short length scale cutoff proportional to the interface thickness. Thus, for small growth rate, $`l`$ can be several orders of magnitude larger than $`\rho `$, and simulations that resolve simultaneously the details of the interfacial pattern and the diffusion field become extremely difficult.
Whereas various methods have been successfully developed to handle front tracking, bridging the length scale gap between $`\rho `$ and $`l`$ has remained a major computational challenge. A natural idea to overcome this problem is to use multigrid or adaptive mesh refinement algorithms that make the grid progressively coarser away from the interface . However, such methods need to dynamically adapt their grids to follow the moving interface. This is a nontrivial task, and quantitative simulations of dendritic crystal growth at low undercooling have remained restricted to two dimensions (2-d) .
In this letter, we present a novel hybrid computational approach that efficiently bridges this length scale gap. Over most of the computational domain, the diffusion equation is simulated by an ensemble of off-lattice random walkers that take longer, and concomitantly rarer, steps with increasing distance away from the growing interface. This drastically reduces the computational cost for evolving the large-scale field. Moreover, a short distance away from the interface, this stochastic evolution is connected to a finite-difference deterministic solution of the interface evolution. This conversion, in turn, reduces the inherent noise of the stochastic method to a negligibly small level at the interface. This approach is relatively simple to implement in both 2-d and 3-d while being at the same time quantitatively accurate. Here we sketch the method and then report results that demonstrate its capability to yield new quantitative predictions testable by experiments in the context of dendritic crystal growth. For clarity, we expose the method in this context although it will become clear below that it is general.
Let us consider a solid-liquid interface whose motion is limited by heat diffusion and define a scaled temperature field $`u`$ that is zero in equilibrium and equal to $`-\mathrm{\Delta }`$ in the liquid far from the interface, where $`\mathrm{\Delta }`$ is the dimensionless undercooling. At the interface, $`u`$ satisfies the well-known boundary conditions
$`v_n`$ $`=`$ $`D\left(\partial _nu|_s-\partial _nu|_l\right),`$ (2)
$`u`$ $`=`$ $`-d_0{\displaystyle \sum _{i=1}^{2}}\left[a(\widehat{n})+\partial _{\theta _i}^2a(\widehat{n})\right]\kappa _i,`$ (3)
corresponding to heat conservation and local thermodynamic equilibrium at the interface, respectively, where $`v_n`$ is the normal velocity of the interface, $`d_0`$ is a microscopic capillary length, $`\theta _i`$ are the local angles between the normal $`\widehat{n}`$ to the interface and the two local principal directions on the interface, $`\kappa _i`$ are the principal curvatures, and the function $`a(\widehat{n})`$ describes the orientation dependence of the surface energy.
The basic idea of the method is to divide space into an ‘inner’ and an ‘outer’ domain as illustrated in Fig. 1. The inner domain consists of the growing structure and a thin ‘buffer layer’ of liquid surrounding the interface. The outer domain corresponds to the rest of the liquid, and is much larger than the inner domain at low undercooling. In the inner domain, we solve deterministically the diffusion equation on a fine uniform mesh. Moreover, for the present crystal growth application, we handle front tracking using a phase-field approach , and time-step explicitly both $`u`$ and the phase-field in the inner region using the same procedure as Karma and Rappel . The geometry of Fig. 1, however, implies that any other front tracking method that solves the diffusion equation on a uniform mesh could be used instead. Moreover, since the boundary conditions on $`u`$ need not necessarily be those defined by Eqs. 2 and 3, the method obviously extends to other diffusion-limited pattern forming systems.
In the outer domain, the diffusion equation is simulated stochastically by an ensemble of off-lattice random walkers. The idea of solving the diffusion or Laplace equation with an ensemble of random walkers is well-known and has been used previously to simulate diffusion-limited growth and Hele-Shaw flow . The main new feature of our method is that we have separated the solid-liquid interface from the boundary at which the conversion from the deterministic to the stochastic solution of the diffusion equation takes place. This separation yields two essential benefits. Firstly, it makes it possible to use the phase-field approach with its proven accuracy to simulate the interface evolution and to resolve even a weak crystalline anisotropy, without being affected by the details of the conversion process. Secondly, in the buffer layer between the solid-liquid interface and the conversion boundary, the temperature obeys the deterministic diffusion equation. Consequently, the noise created by the stochastic release and impingement of walkers is rapidly damped away from the conversion boundary. Hence, the amplitude of temperature fluctuations at the solid-liquid interface can be reduced to an insignificant level without much cost in computation time by increasing the thickness of the buffer layer.
To connect the inner (deterministic) and outer (stochastic) solutions, we have to supply a boundary condition for the integration of the inner region, and we must specify how walkers are created and absorbed at the boundary between inner and outer regions. Both processes are handled by using a coarse-grained grid that is superimposed on the fine grid of the inner region as shown in Fig. 1. Cells of the coarse grid on the border between inner and outer region, shaded in Fig. 1, are called conversion cells. The temperature in a conversion cell is related to the local density of walkers,
$$u_{cc}=-\mathrm{\Delta }\left(1-m_i(t)/M\right),$$
(4)
where $`m_i(t)`$ is the number of walkers in conversion cell number $`i`$ at time $`t`$, and $`M\gg 1`$ is a fixed integer. Hence, an empty cell corresponds to $`u=-\mathrm{\Delta }`$ (the initial state), whereas a box containing $`M`$ walkers corresponds to $`u=0`$. This formula is used in each timestep to obtain the boundary condition for the integration of the inner region. Next, we determine the quantity of heat that flows in or out of each conversion cell from the inner region, and add this amount to a reservoir variable $`H_i(t)`$ which describes the heat content of conversion cell number $`i`$. If this variable exceeds a critical value $`H_c`$, a walker is created and $`H_c`$ is subtracted from the reservoir. Conversely, if $`H_i(t)`$ falls below $`-H_c`$, a walker is absorbed and $`H_c`$ is added to the reservoir. This procedure ensures that walkers are created and absorbed at a rate which is proportional to the local heat flux, and each walker corresponds to the same discrete amount of heat.
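In pseudocode form, the bookkeeping for one conversion cell and one timestep might look as follows (a minimal sketch with hypothetical names; the helper methods and the symmetric absorption threshold $`-H_c`$ are our reading of the prescription above, not the authors' implementation).

```python
def update_conversion_cell(cell, heat_flux_in, dt, H_c, walkers, Delta, M):
    """One timestep of heat exchange between inner grid and walker gas."""
    cell.H += heat_flux_in * dt                  # reservoir variable H_i(t)
    while cell.H > H_c:                          # heat flowed in: emit walker
        walkers.append(cell.random_point())      # hypothetical helper
        cell.H -= H_c
    while cell.H < -H_c and cell.count(walkers) > 0:   # heat flowed out
        walkers.remove(cell.nearest_walker(walkers))   # hypothetical helper
        cell.H += H_c
    # boundary temperature for the deterministic inner integration, Eq. (4)
    cell.u = -Delta * (1.0 - cell.count(walkers) / M)
```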
Evidently, as the structure grows the geometry of the conversion boundary and the configuration of the coarse grid need to be periodically updated in order to maintain a constant thickness of the liquid buffer layer. This procedure, however, is straightforward since the structure of the grids does not change.
In the outer region, each walker is represented by a set of variables indicating its position and the time it has next to be updated. To update a walker, a new position is randomly selected with a probability distribution given by the diffusion kernel,
$$P(\vec{x}^{\prime },t^{\prime }|\vec{x},t)=\frac{1}{\left[4\pi D(t^{\prime }-t)\right]^{d/2}}\mathrm{exp}\left[-\frac{|\vec{x}^{\prime }-\vec{x}|^2}{4D(t^{\prime }-t)}\right],$$
(5)
where $`\vec{x}`$ and $`\vec{x}^{\prime }`$ are the old and new positions of the walker, respectively, and $`t^{\prime }-t`$ is the time increment between updates. This representation of the diffusion equation is widely used in quantum Monte Carlo methods . The key improvement that makes the algorithm efficient in the present context is the introduction of a variable step size: we allow walkers to take progressively larger steps with increasing distance away from the interface, and to be concomitantly updated more rarely, which does not affect the quality of the solution near the solid-liquid interface. Adaptive steps have been previously used to speed up simulations of diffusion-limited aggregation, albeit in a simpler Laplacian context where the stepping time is irrelevant and only one walker at a time is simulated . Typically, we vary the average step size between a value comparable to the spacing of the inner mesh and about $`100`$ times that value. Our test dendrite computations show that the far field can be evolved at essentially no extra cost: the program spends most of its time in the inner region and on the walkers near the conversion boundary, which have to make small steps. Thus this “adaptive step” implementation yields essentially the same benefits as an adaptive meshing algorithm, while avoiding the overhead of regridding. Finally, our method can be easily parallelized, as will be discussed in more detail elsewhere.
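The essential update of one off-lattice walker with the kernel (5) can be sketched as below (our own illustration; the linear step-size rule, its bounds and the constant `c` are placeholders, not the authors' exact prescription).

```python
import numpy as np

rng = np.random.default_rng(1)

def step_walker(x, d_interface, D, dx_min, c=0.1):
    """One update of a 3-d off-lattice walker with an adaptive step.

    The rms step grows with the distance d_interface from the interface,
    clipped between ~dx_min (inner mesh spacing) and ~100*dx_min; the time
    to the next update grows accordingly so that <|dx|^2> = 6*D*dt.
    """
    step = np.clip(c * d_interface, dx_min, 100.0 * dx_min)
    dt = step ** 2 / (6.0 * D)                 # time increment t' - t
    # sample the Gaussian kernel (5): variance 2*D*dt per dimension
    x_new = x + rng.normal(scale=np.sqrt(2.0 * D * dt), size=3)
    return x_new, dt
```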
To illustrate our method, we focus here on the initial stage of dendritic solidification, during which four (six) primary arms in 2-d (3-d) emerge from a structureless nucleus, as shown in the example 3-d run of Fig. 2. While this transient regime has recently been investigated numerically in 2-d and experimentally in 3-d , it has not yet been explored by simulations in 3-d.
The present simulations started from a small spherical solid nucleus with a uniformly undercooled temperature $`u=-\mathrm{\Delta }`$, and fully exploited a cubic symmetry (i.e. $`a(\widehat{n})\propto (1-3ϵ_4)[1+4ϵ_4/(1-3ϵ_4)(n_x^4+n_y^4+n_z^4)]`$ with the Cartesian axes defined parallel to the $`\langle 100\rangle `$ directions) to reduce simulation time. The value $`ϵ_4=0.025`$ that corresponds to the experimentally estimated anisotropy value for pivalic acid (PVA) was used in all the simulations reported here. Results for other anisotropies, as well as those pertaining to the 3-d morphology of the dendrite tip, will be discussed elsewhere. To obtain quantitative data on the growth transients, we recorded the arm length $`L(t)`$, the tip velocity $`v(t)=\dot{L}(t)`$, the tip radius of curvature $`\rho (t)`$, and the total volume of solid $`V_s(t)`$. In Fig. 2, we show $`v(t)`$ and $`\rho (t)`$ for a 3-d run at $`\mathrm{\Delta }=0.05`$ together with the time-dependent tip selection parameter $`\sigma ^{*}(t)=2d_0D/[\rho ^2(t)v(t)]`$ and the steady-state velocity $`v_{ss}`$ calculated by a boundary integral method. Two results are particularly noteworthy. Firstly, $`\sigma ^{*}(t)`$ becomes essentially constant as soon as the arms have emerged from the spherical seed, whereas both $`v(t)`$ and $`\rho (t)`$ are far from their steady-state values. Physically, this is a direct consequence of the fact that $`v(t)`$ and $`\rho (t)`$ evolve slowly on the tip diffusion time scale $`\rho ^2/D`$ where $`\sigma ^{*}`$ is established. Secondly, we find that, as the undercooling is lowered, the volume of the dendrite (or its area in 2-d) approaches that of a sphere (circle) growing at the same undercooling. The latter is readily obtained from Zener’s well-known similarity solution, which yields that the radius of a $`d`$-dimensional sphere grows as $`\sqrt{4p_dDt}`$, where the Peclet number $`p_d(\mathrm{\Delta })`$ is implicitly defined by $`\mathrm{\Delta }=p_d^{d/2}\mathrm{exp}(p_d)\int _{p_d}^{\infty }s^{-d/2}e^{-s}ds`$. Both in 2-d and 3-d, the volume of the dendrite grows slightly faster than that of the sphere, but for the lowest undercoolings we could attain, the final volume differed by only $`20\%`$ from this prediction, even though the arms were already very well developed.
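As an aside, the Peclet number can be obtained from the implicit definition above by simple root finding; a small check (our own, using scipy, with $`d=3`$ and $`\mathrm{\Delta }=0.05`$ as in the run of Fig. 2):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def undercooling(p, d=3):
    # Zener similarity: Delta = p^(d/2) e^p \int_p^inf s^(-d/2) e^(-s) ds
    integral, _ = quad(lambda s: s ** (-d / 2.0) * np.exp(-s), p, np.inf)
    return p ** (d / 2.0) * np.exp(p) * integral

delta = 0.05
p3 = brentq(lambda p: undercooling(p, d=3) - delta, 1e-4, 10.0)
print(p3, "-> sphere radius grows as sqrt(4 * p_d * D * t)")
```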
The above observations are in good agreement with theoretical expectations for 2-d growth transients. In particular, Almgren et al. have analyzed the related problem of anisotropic Hele-Shaw flow (i.e. Laplacian growth) at constant flux . Using an exact solution for the Laplacian field around a cross and exploiting the constancy of $`\sigma ^{*}`$, they constructed a self-affine scaling shape for the arms. The length and width of this shape grow as $`t^\alpha `$ and $`t^\beta `$, respectively, with $`\alpha =3/5`$ and $`\beta =2/5`$. As subsequently remarked by Brener , the diffusion equation can be replaced by Laplace’s equation on the scale of the dendrite as long as $`L/\sqrt{Dt}\ll 1`$, and hence 2-d growth transients should obey this scaling with a flux set by the diffusive far field of Zener’s similarity solution in the low undercooling limit. In Fig. 3a, we plot the functions $`\alpha (t)=d(\mathrm{ln}L)/d(\mathrm{ln}t)`$ and $`\nu (t)=d(\mathrm{ln}V_s)/d(\mathrm{ln}t)`$. For an exact Laplacian scaling in 2-d, $`\alpha (t)=3/5`$ and $`\nu (t)=1`$. With decreasing undercooling both curves become flatter and indeed approach the expected Laplacian scaling. The slow rise with time of both curves can be attributed to diffusive corrections to the Laplacian scaling due to the slow increase in time of $`L/\sqrt{Dt}`$. Recently, Provatas et al. have reported scaling exponents that differ from the Laplacian prediction and that appear to be independent of $`\mathrm{\Delta }`$ for small $`\mathrm{\Delta }`$ . We note, however, that they used the distance from the tip to the time-dependent base (where the dendrite shaft is narrowest) to scale their results instead of $`L(t)`$. Since $`L(t)`$ is the only relevant scaling length for both the shape and the diffusion field in the Laplacian limit, we believe that these exponents are spurious. When $`L(t)`$ is used as a scaling length, our results are consistent with an approach to Laplacian scaling in the limit of vanishing undercooling.
It is simple to heuristically generalize some of the above ideas to 3-d. If we assume, as a reasonable first approximation, that the arm shape is axisymmetric and has a scaling form $`r(x,t)=t^\beta \stackrel{~}{r}\left(x/t^\alpha \right)`$, where $`x`$ is the growth direction, the constancy of $`\sigma ^{*}`$ imposes that $`4\beta -\alpha -1=0`$. Furthermore, assuming that the volume of the dendrite grows approximately as that of the 3-d similarity solution, we have in addition $`\nu =\alpha +2\beta =3/2`$. These two conditions yield $`\alpha =2/3`$ and $`\beta =5/12`$. Fig. 3b shows the functions $`\alpha (t)`$ and $`\nu (t)`$, defined as before, in 3-d. As in 2-d, the curves approach the predicted exponents with decreasing undercooling, but the differences remain larger than in 2-d even for the lowest undercooling. This can be partly accounted for by the fact that the tip velocity, and hence also $`L/\sqrt{Dt}`$, is much larger in 3-d than in 2-d at equal undercoolings. We must emphasize that in the absence of an exact 3-d Laplacian solution, and in view of the above assumptions, no claim is made here that these 3-d exponents are exact or that a scaling regime exists asymptotically at small undercooling in 3-d. We content ourselves with the fact that they describe reasonably well our present simulations.
In conclusion, we have presented a novel computational approach that can resolve accurately the details of a complex branched structure and its large scale surrounding diffusion field. The method can be combined with many of the existing front tracking methods and should be applicable to a wide range of diffusion-limited pattern forming systems. Furthermore, we have demonstrated its feasibility in the non-trivial test case of dendritic growth in a range of parameters previously unreachable in 3-d.
This research is supported by U.S. DOE Grant No. DE-FG02-92ER45471 and benefited from computer time allocation at NERSC and NU-ASCC. We thank Flavio Fenton for help with 3-d visualization of our simulations and Vincent Hakim for fruitful conversations.
## 1. Is Minkowski space stable?
Although flat spacetime is a trivial solution of the vacuum Einstein equations, the stability of this solution was questioned for a long time, mainly because in General Relativity (unlike in electrodynamics and other field theories) it is impossible to give a positive semidefinite expression for the field energy density. The numerous approaches to this problem reported in the literature can be essentially classified in the context of classical field theory, thermal field theory and quantum field theory. Let us briefly recall some of the known facts.
(1) Classical field theory – The long-outstanding conjecture that the total energy (A.D.M. energy) of asymptotically flat manifolds is positive semidefinite and that only Minkowski space has zero energy was finally proven by Schoen and Yau in 1979 and is now known as “the positive energy theorem”. Since energy is conserved, this theorem seems to preclude the possibility of flat space decaying by any mechanism.
(2) Thermal field theory – The instabilities of Minkowski space at a finite temperature were studied by Gross, Perry and Yaffe in 1982 . They concluded that in hot flat space there are two distinct sources of instability: the large-wavelength density fluctuations of the thermal gravitons and the nucleation of black holes.
(3) Quantum field theory – One can consider, at least formally, the functional integral of the theory, which represents a sum over all possible field configurations weighted with the factor $`\mathrm{exp}[iS/\hbar ]=\mathrm{exp}\left[\frac{i}{16\pi G\hbar }\int d^4x\sqrt{g(x)}R(x)\right]`$ and possibly with a factor due to the integration measure. Minkowski space is a stationary point of the vacuum action and has maximum probability. “Off-shell” configurations, which are not solutions of the vacuum Einstein equations, are admitted in the functional integration but are strongly suppressed by the oscillations of $`e^{iS/\hbar }`$.
In this letter (based in part upon our work ) we shall be concerned with the quantum case. Even though any tunnelling to a ground state different from Minkowski space appears to be impossible due to the positive energy theorem, we shall see that the quantum fluctuations about flat space can be enhanced under certain conditions.
## 2. The quantum of curvature fluctuation
Due to the appearance of the dimensional constant $`G`$ in the Einstein action, the most probable quantum fluctuations of the gravitational field grow at very short distances, of the order of $`L_{Planck}=\sqrt{G\hbar /c^3}\sim 10^{-33}\,\mathrm{cm}`$. This led Hawking, Coleman and others to depict spacetime at the Planck scale as a “quantum foam” , with high curvature and variable topology.
Let us briefly reformulate this argument. Suppose we start from a flat configuration; a curvature fluctuation then appears in a region of size $`d`$. How much can the fluctuation grow before it is suppressed by the oscillating factor $`e^{iS}`$? (We set $`\hbar =1`$ and $`c=1`$ in the following.) The contribution of the fluctuation to the action is of order $`Rd^4/G`$; both for positive and for negative $`R`$, the fluctuation is suppressed when this contribution exceeds $`1`$ in absolute value, therefore $`|R|`$ cannot exceed $`G/d^4`$. This means that the fluctuations of $`R`$ are stronger at short distances—down to $`L_{Planck}`$, the minimum physical distance.
Clearly, if the curvature is large, then the metric is locally far from being flat and the factor $`\sqrt{g}`$ in the action is not trivially $`1`$, thus the estimate above is only approximate. In the following, however, we shall focus on the case of a weak field with small curvature, at distances much larger than $`L_{Planck}`$ (without any topology change). In this case we could say that $`R_0\sim G/d^4`$ represents the “quantum of curvature fluctuation”, very small at macroscopic scale. As shown above, the number of such quanta in a certain region does not exceed $`N\sim 1`$; therefore fluctuations are practically irrelevant and spacetime looks almost perfectly flat at distances much larger than $`L_{Planck}`$.
## 3. Virtual dipoles containing $`N`$ “+” quanta and $`N`$ “−” quanta
At this point one might raise the following objection. Suppose two quanta of curvature fluctuation with opposite signs pop up in flat space, in two adjacent regions having the same size $`d`$. This does not change the total action. Then the negative fluctuation can grow up to comprise 2, 3, … $`N`$ quanta, provided the same happens with the positive fluctuation, because the total action of a configuration containing $`N`$ “+” quanta and $`N`$ “−” quanta is the same as the flat space action. Can this represent a possible instability, a way for the gravitational field to “run away” from the flat configuration, causing strong fluctuations of the metric also at scales larger than the Planck scale?
This idea might seem naive and qualitative, and perhaps every beginner in General Relativity had it for a minute. However, it can be made precise and more rigorous. It is possible to construct explicitly field configurations with the property above, namely having scalar curvature which vanishes identically almost everywhere, except in two adjacent regions – one with positive $`R`$ and the other with negative $`R`$ – in such a way that the total integral of $`\sqrt{g}R`$ is zero.
One can regard each of these field configurations as the field generated by a virtual “mass dipole”. In fact, they can be defined as the solutions of the Einstein field equations with a dipolar source—a positive and a negative mass with certain sizes, chosen in such a way that the total integral of the scalar curvature is zero. Such sources are clearly unphysical and do not exist in the real world; however, here we are not interested into a solution of the classical field equations with physical sources but into any field configuration which can cause strong fluctuations in the functional integral.
Let us consider a solution $`g_{\mu \nu }(x)`$ of the Einstein equations
$$R_{\mu \nu }(x)-\frac{1}{2}g_{\mu \nu }(x)R(x)=8\pi GT_{\mu \nu }(x),$$
(1)
with a (covariantly conserved) source $`T_{\mu \nu }(x)`$ obeying the additional integral condition
$$d^4x\sqrt{g(x)}\mathrm{Tr}T(x)=0.$$
(2)
Taking into account the trace of eq. (1), namely $`R(x)=-8\pi G\mathrm{Tr}T(x)`$, we see that the action $`\int d^4x\sqrt{g}R`$ computed for this solution is zero.
As an example of an unphysical source which satisfies (2) one can consider a static dipole centred at the origin ($`m,m^{}>0`$):
$$T_{\mu \nu }(𝐱)=\delta _{\mu 0}\delta _{\nu 0}\left[mf(𝐱+𝐚)m^{}f(𝐱𝐚)\right].$$
(3)
Here $`f(𝐱)`$ is a smooth test function centred at $`𝐱=0`$, rapidly decreasing and normalised to 1, which represents the mass density. The range of $`f`$, say $`r_0`$, is such that $`a\gg r_0\gg r_{Schw}`$, where $`r_{Schw}`$ is the Schwarzschild radius corresponding to the mass $`m`$. The mass $`m^{\prime }`$ is in general different from $`m`$ and chosen in such a way to compensate the small difference, due to the $`\sqrt{g}`$ factor, between the integrals
$$I^+=\int d^3x\sqrt{g(𝐱)}f(𝐱+𝐚)\mathrm{and}I^{-}=\int d^3x\sqrt{g(𝐱)}f(𝐱-𝐚).$$
(4)
The procedure for the construction of the desired field configuration is the following. One first considers Einstein equations with the source (3). Then one solves them with a suitable method, for instance in the weak field approximation. Finally, knowing $`\sqrt{g(x)}`$ one computes the two integrals (4) and adjusts the parameter $`m^{\prime }`$ in such a way that
$$mI^+-m^{\prime }I^{-}=0$$
(5)
(see , where these configurations were called “zero modes of the Einstein action”). To first order in $`G`$, the relation between $`m`$ and $`m^{\prime }`$ turns out to be
$$m^{\prime }\simeq m(1+8\pi Gm).$$
(6)
Note that these field configurations are in no way singular, so we do not expect them to be suppressed by the functional integration measure.
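As a toy illustration of the three steps above (entirely our own construction: a Gaussian profile for $`f`$, $`\sqrt{g}`$ approximated by the Newtonian $`1-2\mathrm{\Phi }_N`$ with the potential softened inside $`r_0`$, and placeholder numbers in units $`G=1`$), one can tune $`m^{\prime }`$ numerically:

```python
import numpy as np

G, m, a, r0 = 1.0, 1e-3, 1.0, 0.2
grid = np.linspace(-4.0, 4.0, 121)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
dV = (grid[1] - grid[0]) ** 3

def f(x0):
    # normalised Gaussian mass-density profile centred at (x0, 0, 0)
    r2 = (X - x0) ** 2 + Y ** 2 + Z ** 2
    return np.exp(-r2 / (2 * r0 ** 2)) / (2 * np.pi * r0 ** 2) ** 1.5

def phi_N(x0, mass):
    # Newtonian potential of the smeared mass, softened inside r0
    r = np.sqrt((X - x0) ** 2 + Y ** 2 + Z ** 2)
    return -G * mass / np.maximum(r, r0)

sqrt_g = 1.0 - 2.0 * (phi_N(-a, m) + phi_N(a, -m))  # crude weak-field guess
I_plus = np.sum(sqrt_g * f(-a)) * dV
I_minus = np.sum(sqrt_g * f(a)) * dV
print("m'/m =", I_plus / I_minus)   # slightly above 1, cf. Eq. (6)
```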
## 4. Does the effective gravitational lagrangian contain a scale-dependent cosmological term?
What can stabilise the Einstein action with respect to the dipole fluctuations described above? Possibly an additional term in the lagrangian. $`R^2`$ terms are only relevant at very short distances, however, which is not the case here. Another typical addition is the cosmological term $`(\mathrm{\Lambda }/8\pi G)\int d^4x\sqrt{g}`$. It is immediate to check, using eqs. (4)–(6) above, that such a term receives non-vanishing contributions from dipole fluctuations, and will therefore suppress them.
It is known that $`\mathrm{\Lambda }`$, if not zero, is very small in our universe; its effective value, however, could depend on the scale, being very small at cosmological distances but somewhat larger at short distances.
This concept, namely that the gravitational lagrangian may comprise a scale-dependent cosmological term, originally emerged in the Euclidean theory of gravity on the Regge lattice. Recent non-perturbative numerical simulations allow one to study a “discretized spacetime” whose dynamics is governed by an action containing $`G`$ and $`\mathrm{\Lambda }`$ as bare parameters; it turns out that as the continuum limit is approached, the dimensionless product $`|\mathrm{\Lambda }_{eff}|G_{eff}`$ behaves like
$$|\mathrm{\Lambda }_{eff}|G_{eff}\sim (l_0/l)^\gamma $$
(7)
where $`l`$ is the scale, $`l_0`$ is the lattice spacing, $`\gamma `$ a critical exponent and the sign of $`\mathrm{\Lambda }_{eff}`$ is negative (for an earlier discussion see ). Furthermore, one can reasonably assume that $`l_0\sim L_{Planck}`$, and that the scale dependence of $`G_{eff}`$ is much weaker than that of $`\mathrm{\Lambda }_{eff}`$.
A scale dependence of $`\mathrm{\Lambda }_{eff}`$ like that in eq. (7) also implies that any bare value of $`\mathrm{\Lambda }`$, expressing the energy density associated to the vacuum fluctuations of the quantum fields including the gravitational field itself, is “relaxed to zero” at long distances just by virtue of the gravitational dynamics, without any need for fine tuning. One would have, in other words, a purely gravitational solution of the cosmological constant paradox.
We do not intend to discuss here whether (and for what values of $`\gamma `$) an ansatz like eq. (7), with $`\mathrm{\Lambda }_{eff}<0`$, is compatible with the most recent estimates of the Hubble parameter . In fact, admitted that $`\mathrm{\Lambda }_{eff}`$ depends on the scale $`l`$, it is clear that the determination of this dependence and a comparison with the observational constraints is a complex problem, given the enormous range spanned by $`l`$. One can consider at least four domains: a cosmological scale, a scale of the order of the solar system size (compare ), a laboratory scale (in a wide sense, i.e., down to the subnuclear and $`GeV`$ scale ) and the Planck scale. While in this work we are mostly concerned with the laboratory scale, we shall keep a general approach and just denote by $`\mathrm{\Lambda }_{eff}(l)`$ the general unknown function which gives the scale dependence of $`\mathrm{\Lambda }_{eff}`$.
## 5. Coupling to an external field
We have seen that a scale-dependent cosmological term in the effective gravitational action is able to suppress the virtual dipole fluctuations. The actual existence of this term can be regarded as a consistent hypothesis, suggested by some results of lattice theory. But how can we check that the whole idea makes sense, i.e. that on one hand “dangerous” dipole fluctuations are admitted by the Einstein action and on the other hand an intrinsic cosmological term is there to suppress them?
We can imagine a situation in which this latter term is canceled, in some region of spacetime, by coupling gravity to a suitable external field $`F(x,t)`$. Then, if virtual dipoles really exist, they will be free to grow in this region and cause abnormally large fluctuations. The amplitude of these fluctuations cannot be predicted at this stage, but eventually they could lead to a sort of partial “thermalization” of the gravitational field in this region.
The function $`F(x,t)`$ represents an assigned classical field, not a variable of the functional integral. It couples to gravity through its energy-momentum tensor
$$T_{\mu \nu }=\mathrm{\Pi }_\mu \partial _\nu F-g_{\mu \nu }\mathcal{L}$$
(8)
where $`\mathrm{\Pi }_\mu `$ is the canonically conjugated momentum $`\delta /\delta (^\mu F)`$. For instance, for a classical scalar field $`\varphi `$ (typically describing, in the context of quantum field theory, coherent matter of some kind) we have
$`\mathcal{L}={\displaystyle \frac{1}{2}}\left(\partial _\alpha \varphi \partial ^\alpha \varphi -m^2\varphi ^2\right),`$ (9)
$`\mathrm{\Pi }_\mu =\partial _\mu \varphi ,`$ (10)
$`T_{\mu \nu }=\partial _\mu \varphi \partial _\nu \varphi -g_{\mu \nu }\mathcal{L}`$ (11)
and the interaction term in the action takes the form
$$S^{\prime }=8\pi G\int d^4x\,h_{\mu \nu }T^{\mu \nu }=8\pi G\int d^4x\,(h_{\mu \nu }\partial ^\mu \varphi \partial ^\nu \varphi -\mathcal{L}\,\mathrm{Tr}h).$$
(12)
The term $`h_{\mu \nu }\partial ^\mu \varphi \partial ^\nu \varphi `$ can be regarded as a source term for $`h_{\mu \nu }`$, while the term $`\mathcal{L}\int d^4x\,\mathrm{Tr}h`$ interferes with the cosmological term: we recall that, to first order in $`h`$, $`\sqrt{g}\simeq 1+\frac{1}{2}\mathrm{Tr}h`$; therefore, if at some point $`x`$ the condition
$$\frac{\mathrm{\Lambda }_{eff}(a)}{8\pi G}+\mathcal{L}(\varphi (x))=0$$
(13)
is satisfied, then at that point large virtual dipole fluctuations of scale $`a`$ can appear.
Eq. (13) can be regarded as a parametric equation for $`a(x)`$: given the value of the coherent lagrangian density at $`x`$, one finds a corresponding value for $`\mathrm{\Lambda }_{eff}`$ and thus for the scale $`a`$ of the dipole fluctuations allowed at $`x`$. (This makes sense if $`\varphi `$ varies on a scale much larger than $`a`$. Also note that, since $`\varphi `$ is an assigned “off-shell” field, $`\mathcal{L}(\varphi (x))`$ is not necessarily an extremal value.)
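For orientation, Eq. (13) is trivial to invert once a form for $`\mathrm{\Lambda }_{eff}`$ is assumed; below we adopt the lattice-motivated ansatz of Eq. (7) with $`\mathrm{\Lambda }_{eff}<0`$ and constant $`G_{eff}`$ (our choice of $`\gamma `$ and units, $`\hbar =c=G=1`$, is purely illustrative).

```python
import math

def dipole_scale(L_density, G=1.0, l0=1.0, gamma=2.0):
    """Scale a(x) solving Eq. (13) under |Lambda_eff(l)| G = (l0/l)**gamma.

    Eq. (13) gives Lambda_eff(a) = -8*pi*G*L, hence
    (l0/a)**gamma = 8*pi*G**2*L  (requires L > 0, since Lambda_eff < 0).
    """
    return l0 * (8.0 * math.pi * G ** 2 * L_density) ** (-1.0 / gamma)

for L in (1e-2, 1e-6, 1e-10):
    print(L, dipole_scale(L))   # weaker coherent fields unlock larger scales
```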
## 6. Conclusions
This work aims at pointing out a peculiar quantum behaviour of the gravitational field which is usually disregarded, but cannot a priori be excluded. The possibility of anomalous growth of the “dipole fluctuations” described above appears to be an intrinsic property of the Einstein action.
It is therefore important to understand to what extent these fluctuations are affected by a cosmological term in the action or by the coupling to an external field—which breaks the translation symmetry of Minkowski spacetime and could thus trigger the fluctuations in certain regions.
Our analysis is limited by the poor present knowledge of the quantum dynamics of the gravitational field; in particular, it seems impossible to predict at this stage the exact amplitude of the dipole fluctuations and the scale dependence $`\mathrm{\Lambda }_{eff}(l)`$ (if any).
This work has been partially supported by the A.S.P. – Associazione per lo Sviluppo Scientifico e Tecnologico del Piemonte – Turin – Italy.
# Can Deflagration-Detonation-Transitions occur in Type Ia Supernovae?
## 1 Introduction
The delayed detonation scenario for Type Ia supernova (SN Ia) explosions asserts that a slowly accreting Chandrasekhar mass C+O white dwarf undergoes a thermonuclear explosion in two distinct modes: an initial turbulent deflagration (flame) phase that preexpands the star, allowing the abundant production of intermediate mass isotopes observed in SN Ia spectra, followed by a detonation that accounts for the high material velocities and the strength of the explosions (Khokhlov 1991; Woosley & Weaver 1994). Both modes are assumed to be linked by a deflagration-detonation-transition (DDT) that occurs either during the first expansion phase or after a partial recollapse of the star (Arnett & Livne 1994). The background density of the DDT, $`\rho _\mathrm{t}`$, is often referred to (Höflich et al. 1996; Nomoto et al. 1997) as the leading candidate for the physical parameter corresponding to the observed correlation of peak luminosity and light curve shape (Pskovskii 1977; Phillips 1993).
Despite the apparent success of one-dimensional delayed detonation models in reproducing many features of observed SN Ia spectra and light curves (Höflich et al. 1996), a quantitative investigation of DDTs in supernovae has begun only recently. Both dimensional analysis and numerical simulations indicate that a turbulent thermonuclear flame front driven on large scales by the Rayleigh-Taylor (RT) instability falls short of sonic propagation by at least one order of magnitude (Niemeyer & Hillebrandt 1995; Khokhlov 1995; Reinecke et al. 1998a), making early proposals for DDT by direct shock formation seem implausible. An alternative route to detonations in supernovae involving local flame quenching and microscopic turbulent mixing was recently proposed (Khokhlov et al. 1997b; Niemeyer & Woosley 1997). It is based on the induction time gradient mechanism (Zeldovich et al. 1970), first applied to detonations in supernovae by Blinnikov & Khokhlov (1986, 1987). This mechanism requires a sufficiently large region of unburned fluid to be preconditioned in a manner that establishes a uniform temperature gradient across it. For predominantly temperature dependent reaction rates, the temperature gradient can be mapped onto a gradient of induction times and hence onto a phase velocity for the spontaneous burning wave that sweeps across the region. When certain criteria regarding the fuel mixture fraction and the size of the region are met (Sec. 2), the pressure released by the burning wave may form a self-sustaining reaction-shock complex, i.e. a detonation. Note that neither microscopic transport nor high fluid velocities are needed for the runaway to detonation. Instead, the problems of those proposals for DDT that need supersonic turbulent flame speeds are now entirely passed on to the preconditioning of the gradient region. It is well known that the gradient mechanism is very sensitive to even small temperature non-uniformities (Blinnikov & Khokhlov 1986, 1987; Woosley 1990; Niemeyer & Woosley 1997). Owing to the absence of walls or obstacles, the preferred locations for DDT in confined systems, preconditioning in supernova explosions can only be attributed to mixing in an unconfined turbulent flow field. It will be shown in Sec. (3) that under these conditions, successful preconditioning entails a degree of synchronicity that is irreconcilable with subsonic turbulence.
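To make the phase-velocity statement concrete, here is a toy evaluation (entirely illustrative: a steep power-law induction time stands in for a realistic carbon-burning rate, and all numbers are placeholders) of the spontaneous wave speed $`v_{\mathrm{sp}}=(d\tau _i/dx)^{-1}`$ across a linear temperature profile; shallower gradients yield faster, potentially supersonic, spontaneous waves.

```python
import numpy as np

def v_spontaneous(T, x, n=20.0, tau0=1.0):
    # steep power-law stand-in for the induction time, tau_i ~ T**(-n)
    tau = tau0 * T ** (-n)
    return 1.0 / np.abs(np.gradient(tau, x))   # phase velocity (d tau/dx)^-1

x = np.linspace(0.0, 1.0, 201)
for slope in (0.1, 0.01):                      # shallower gradient ...
    T = 1.0 + slope * (1.0 - x)
    print(slope, v_spontaneous(T, x).min())    # ... faster spontaneous wave
```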
In this paper, it will be argued that turbulent mixing in large systems does not, in general, give rise to uniform gradients on large scales (Sec. (3)). Furthermore, even if locally isolated regions are considered, the robustness of thermonuclear flames with respect to turbulent quenching disfavors the emergence of sufficiently well-mixed regions for DDT. We conclude that unless we are missing an important piece of information, the physics of unconfined turbulent thermonuclear flames appears to allow transitions to a detonation only in the case of rare fluctuations instead of providing a robust framework for DDT. Some alternative explosion scenarios will be outlined in Sec. (4).
## 2 Prerequisites for deflagration-detonation-transitions in supernovae
Assuming that there are no natural sources of shocks in the turbulent flame brush of a supernova explosion (for a possible exception, see the description of active turbulent combustion (ATC) in Sec. (4)), such as corners or obstacles in terrestrial combustion experiments, the only way to create a pressure spike that turns into a detonation is by burning a certain critical volume, $`V_\mathrm{c}\sim l_\mathrm{c}^3`$, of fuel within a time comparable to or less than its sound-crossing time, $`t_\mathrm{s}(l_\mathrm{c})\sim l_\mathrm{c}/u_\mathrm{s}`$. This can be achieved in two very different ways. On the one hand, turbulent deformation of the flame surface can, in principle, create a sufficiently large flame surface area to burn a given volume in an arbitrarily short time. This idea is the motivation behind the fractal model (Woosley 1990). However, a simple argument shows that this can only occur in rare fluctuations as long as the steady-state turbulent flame velocity, $`S_\mathrm{T}`$, is subsonic, since the statement of burning $`V_\mathrm{c}`$ within $`t_\mathrm{s}`$ is equivalent to $`S_\mathrm{T}\gtrsim u_\mathrm{s}`$ if $`S_\mathrm{T}`$ is evaluated on the scale $`l_\mathrm{c}`$. Given that in the flamelet regime, the turbulent flame speed scales with the turbulent velocity fluctuations on each scale, $`S_\mathrm{T}(l)\sim v(l)`$, and that the latter is bound from above by the (subsonic) terminal rise velocity of buoyant RT bubbles, it is clear that this mechanism is an unlikely candidate for a robust DDT scenario (Niemeyer & Woosley 1997).
On the other hand, detonations might be created via the well-studied induction time gradient mechanism (Zeldovich et al. 1970; Lee et al. 1978), whereby a combustion wave moving along a preconditioned temperature gradient coherently builds up a pressure wave that – for sufficiently large preconditioned volumes – eventually turns into a detonation (for a recent discussion, see Khokhlov et al. 1997a,b). The minimum size $`l_\mathrm{c}`$ of the preconditioned region that gives rise to a detonation in white dwarf matter was derived numerically by Niemeyer & Woosley (1997) and Khokhlov et al. (1997b). It sensitively depends on the composition and density of the fluid; however, for the purpose of this paper it is sufficient to note that in all cases, $`l_\mathrm{c}`$ is larger than the laminar flame width $`\delta `$ by more than three orders of magnitude.
So far, the problem has merely been shifted from the fine-tuning of the flame surface area within the critical region to that required to precondition the temperature field. In both cases, the only tool naturally available is subsonic buoyancy-driven turbulence. However, as pointed out by Khokhlov et al. (1997a,b) and Niemeyer & Woosley (1997), it is possible in principle that for a given laminar flame speed and width there exists a critical turbulence intensity such that turbulent mixing can locally extinguish, or quench, the nuclear reactions within the flame. If this were the case, turbulence might be able to mix burned and unburned material and establish an appropriately smooth temperature field. The details of this mixing process, however, were not investigated in previous studies.
Niemeyer & Woosley (1997) and Khokhlov et al. (1997a,b) used the Gibson length $`l_\mathrm{g}`$, defined as the scale where the turbulent eddy velocity is equal to the laminar flame speed, $`v(l_\mathrm{g})\simeq S_\mathrm{L}`$, to postulate the necessary conditions for flame quenching: if $`l_\mathrm{g}\lesssim \delta `$, the burning regime changes from “flamelet” to “distributed” burning and turbulence begins to appreciably affect the diffusion-reaction structure of the flame. Only in the distributed burning regime can local flame quenching take place. The criterion above was later shown to be equivalent to a definition of the flamelet regime based on the relative strengths of turbulent and thermal diffusivities (Niemeyer & Kerstein 1997). Note that for Prandtl numbers $`Pr=\nu /\kappa `$, defined as the ratio of viscosity and thermal conductivity, below unity this criterion is in conflict with conventional flamelet theory (Peters 1984) which relies on a comparison of the Kolmogorov length and $`\delta `$; according to this definition, thermonuclear flames with $`Pr\ll 1`$ would never be anywhere near the flamelet regime. Recent numerical experiments favor the modified flamelet definition as opposed to the conventional one (Niemeyer et al. 1999).
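To illustrate how sharply the Gibson scale reacts to the flame speed, the sketch below evaluates $`l_\mathrm{g}`$ from Kolmogorov scaling of the eddy velocities; only $`L`$ is taken from the estimates of Sec. (3), while the values of $`v(L)`$ and $`S_\mathrm{L}`$ are illustrative placeholders, not numbers quoted in the text.

```python
# Gibson length l_g: the scale where the Kolmogorov eddy velocity
# v(l) = v_L * (l / L)**(1/3) drops to the laminar flame speed S_L,
# so that l_g = L * (S_L / v_L)**3.
L   = 1.0e7   # integral scale [cm], as in Sec. 3
v_L = 1.0e7   # large-scale turbulent velocity [cm/s] (assumed)
S_L = 1.0e5   # laminar flame speed [cm/s] (assumed)

l_g = L * (S_L / v_L)**3
print(f"Gibson length l_g = {l_g:.2e} cm")
# Flamelet burning requires l_g > delta; once S_L drops enough that
# l_g < delta, the distributed regime (and possible quenching) sets in.
```

Since $`l_\mathrm{g}`$ scales with the cube of $`S_\mathrm{L}`$, the Gibson scale collapses rapidly once the laminar flame slows down, which is why the flamelet-to-distributed transition singles out a rather well-defined density.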
Intriguingly, the transition from flamelet to distributed burning, and hence the first chance for turbulence to create large islands of preconditioned material, approximately takes place at the right transition density for DDT, $`\rho _\mathrm{t}\approx 10^7\text{ g cm}^{-3}`$, inferred from one-dimensional explosion models (Niemeyer & Woosley 1997). It could therefore provide the switch that triggers detonations in the late phase of supernova explosions, replacing a free model parameter with a physical one. To conclude this section, the combination of the gradient mechanism for DDT and the transition from flamelet to distributed burning at a density of $`10^7\text{ g cm}^{-3}`$ may explain the bulk of SN Ia observations, but it crucially hinges on the existence of a mechanism for turbulent preconditioning of a region much larger than the laminar flame width at that density.
## 3 Failure of turbulent flame quenching and macroscopic preconditioning
Consider first an infinite fluid dynamical system, containing a passive scalar field that changes from zero to a finite value across the domain of interest, and is subject to self-similar turbulent mixing in the center of the domain. Assuming for now that turbulence or expansion manage to fully extinguish nuclear burning, this is a reasonable description of the temperature field $`T`$ in the turbulent flame brush, since the length scales we are interested in, $`l\lesssim l_\mathrm{c}`$, are much smaller than the stellar radius and the turbulence on these scales had ample time to establish a self-similar cascade. Under these conditions, the temperature fluctuation amplitudes obey Kolmogorov scaling, $`T(l)\sim l^{1/3}`$. The temperature field becomes a smooth function at the heat diffusion scale given by $`l_\mathrm{d}\sim LRe^{-3/4}Pr^{-3/4}`$ for $`Pr<1`$, where $`Re`$ is the Reynolds number and $`L`$ is the integral scale of the turbulence. Under conditions typical for the onset of distributed burning in SN Ia models, $`L\sim 10^7`$ cm, $`Re\sim 10^{14}`$, and $`Pr\sim 10^{-4}`$, the heat diffusion scale is $`l_\mathrm{d}\sim 10^{-1/2}`$ cm $`\sim \delta \ll l_\mathrm{c}`$. Evidently, increasing the turbulence intensity (and thus $`Re`$) decreases the largest length scale where $`T`$ can be considered smooth, rather than increasing it. Regardless of the amplitude of large-scale turbulent velocity fluctuations, turbulent mixing is inherently unable to provide uniformly mixed regions on macroscopic scales $`\sim l_\mathrm{c}`$. Dropping the simplification of treating $`T`$ as a passive scalar further strengthens this statement, as burning strongly enhances temperature fluctuations on scales $`\delta \lesssim l\lesssim l_\mathrm{c}`$.
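The estimate can be checked with the numbers quoted above; a minimal sketch:

```python
# Heat-diffusion cutoff of temperature fluctuations for Pr < 1:
#   l_d ~ L * Re**(-3/4) * Pr**(-3/4)
L, Pr = 1.0e7, 1.0e-4      # integral scale [cm] and Prandtl number from the text

for Re in (1.0e12, 1.0e14, 1.0e16):
    l_d = L * Re**(-0.75) * Pr**(-0.75)
    print(f"Re = {Re:.0e}:  l_d = {l_d:.2e} cm")
# At Re = 1e14 this gives l_d ~ 3e-1 cm ~ 10**(-1/2) cm, as quoted;
# increasing Re (stronger turbulence) only shrinks l_d further.
```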
The question of DDT in the presence of temperature fluctuations was recently investigated numerically by Montgomery et al. (1998). It was found that perturbation amplitudes of 10-15% are sufficient to divide the gradient region into subregions, each of which would need to have the size of the unperturbed critical length $`l_\mathrm{c}`$ in order to give rise to a detonation. However, this study optimistically assumed that a constant temperature gradient of order $`l_\mathrm{c}^{-1}`$ exists initially and is subsequently perturbed by turbulent fluctuations on smaller scales. As argued above, these initial conditions are inconsistent with a self-similar turbulent mixing region.
One may also drop the assumption of self-similarity by looking at the special case of a locally isolated fluid element, recognizing that while these are not typical regions of a turbulent flow, a small number of them may be realized on statistical grounds. Consider, for instance, a single large eddy of size $`l_\mathrm{c}`$ with little or no entrainment of material from the outside. In this case, the passive scalar is mixed microscopically over the entire region after approximately one eddy turn-over time $`\tau _{\mathrm{eddy}}(l_\mathrm{c})`$. This situation would, in fact, give rise to suitable preconditioning for DDT if burning could be inhibited during the mixing process; otherwise, small scale fluctuations on the scale $`\delta `$ are continually resupplied. The remaining question is thus: can turbulence quench nuclear reactions in a region as large as $`l_\mathrm{c}\gg \delta `$? More specifically, can the burning products contained in $`V_\mathrm{c}`$ be cooled sufficiently such that the burning time scale $`\tau _\mathrm{b}\sim \dot{w}^{-1}`$, where $`\dot{w}`$ is the fuel consumption rate, is larger than $`\tau _{\mathrm{eddy}}(l_\mathrm{c})`$ everywhere within $`V_\mathrm{c}`$?
The answer is no, provided that heat loss to the environment is negligible and the flow is subsonic. For simplification, we shall concentrate on carbon burning alone, since it represents the fastest reaction and its extinction is a necessary (and sufficient) condition for flame quenching. Ignoring the small density change across the flame, the carbon burning rate $`\dot{w}_{\mathrm{C}+\mathrm{C}}`$ depends only on temperature and carbon mixture fraction. Note further that because of electron degeneracy, heat diffuses many orders of magnitude more rapidly than nuclei, so that we can safely assume that carbon is non-diffusive. Consequently, flame quenching can only occur by diffusive cooling ($`p`$d$`V`$-cooling is irrelevant because the flow is to a very good approximation incompressible). Turbulence affects the efficiency of diffusion by straining the flame and thus steepening the temperature gradients. For temperature gradients of order $`\delta ^{-1}`$, the diffusion time scale, $`\tau _\mathrm{d}(\delta )\sim \delta ^2/\kappa `$, is by definition of $`\delta `$ comparable to the burning time scale $`\tau _\mathrm{b}\sim \dot{w}_{\mathrm{C}+\mathrm{C}}^{-1}`$. For gradients larger than $`\delta ^{-1}`$, diffusion is faster than burning throughout most of the flame. However, it can lead to full extinction only if the entire region of burning products that it is connected with is also smaller than $`\delta `$, in which case heat can leak out to all sides and the products can be cooled sufficiently to satisfy $`\tau _\mathrm{b}\gg \tau _\mathrm{d}`$. Otherwise, if the flame is connected to a heat bath of burning products larger than $`\delta `$, the temperature at the interface of fuel and ash always remains fixed at the final product temperature, keeping $`\tau _\mathrm{b}`$ small in its immediate vicinity, regardless of the strain rate experienced by the flame. The total burning rate may drop with respect to the flamelet regime, but fast nuclear burning is never fully extinguished within the whole volume.
According to these arguments, the only conceivable way to quench the flame in a large volume $`V_\mathrm{c}`$ is to stretch it into a thin filament with thickness $`\delta `$ and curl it up such that it fills $`V_\mathrm{c}`$. In order to prevent unquenched burning in any part of $`V_\mathrm{c}`$ before the onset of the spontaneous runaway, this curling has to be completed in a time $`t\lesssim t_\mathrm{s}(l_\mathrm{c})`$. Interestingly, we now face the same problem as the fractal model described in the previous section: the eddy velocity has to be supersonic in order to prepare the runaway region before it is burned. Again, we are limited by the fact that a subsonic process cannot set up conditions that are later supposed to burn with a supersonic phase velocity.
The line of arguments above is supported by numerical (Poinsot et al. 1991) and experimental (Shy et al. 1996) evidence that premixed chemical flames can only be quenched in the presence of heat losses or complicated thermochemical effects, both of which are absent in thermonuclear combustion. Further confirmation was obtained with a one-dimensional calculation of a thermonuclear flame subject to discrete multiscale remappings representing turbulent eddies (Lisewski et al. 1999). The interaction of simple diffusion-reaction flames with turbulence on the scale of the flame width was studied by Niemeyer et al. (1999), demonstrating that local flame propagation is nearly unaffected by turbulence even if the turbulence intensity is comparable to the laminar flame speed.
## 4 Alternative scenarios
If the initial deflagration phase fails to release enough energy to unbind the star and no DDT takes place during the expansion, the star pulses and eventually recontracts, revitalizing the turbulence by compression (Arnett & Livne 1994; Khokhlov 1995). During the pulse, the cut-off scale for temperature fluctuations $`l_\mathrm{d}`$ can grow extremely large because turbulence is essentially frozen in. At very low densities the flame width $`\delta `$ is macroscopically large, allowing the formation of fluid regions which – if they survive the recontraction phase without disruption – may be suitably preconditioned for DDT later on. However, turbulent entrainment of hot and cold material during the collapse will again raise the amplitude and lower the cut-off scale of temperature fluctuations. It is impossible to say a priori whether the fluid is more likely to reignite in the deflagration or detonation mode. While the extensive mixing period during the pulse probably helps to create favorable conditions for DDT, its benefits may well be erased by the enhanced turbulence intensity during the recontraction. Moreover, the extremely fine-tuned time synchronization required for the gradient mechanism for DDT seems to be as unnatural in the pulsational mode as in the direct one.
An additional problem of the pulsational delayed detonation scenario was pointed out by Niemeyer & Woosley (1997): if a large pulse is needed to achieve the required degree of homogeneity, what are the observational counterparts of those events that barely unbind the star but do not detonate? One may evade this problem by assuming that turbulent deflagrations reliably fall short of releasing the binding energy of the white dwarf. This, however, is in conflict with the latest two-dimensional simulations that indicate a clear trend toward higher energy release with increased numerical resolution (Hillebrandt et al. 1999). These simulations employ a flame capturing algorithm based on the level set method (Reinecke et al. 1998b) that shows the emergence of more and more flame structure as the grid resolution is improved. For certain initial conditions, the star clearly becomes unbound, yet no convergence of total energy generation with respect to resolution has been achieved so far. Should this trend continue, and ultimately be confirmed in three-dimensional calculations, there is a realistic possibility that turbulent deflagrations alone are sufficient to power the explosions without the need for detonations.
The simulations by Niemeyer et al. (1996) and Reinecke et al. (1998a) further demonstrate that the role of the initial conditions for flame ignition has not yet been fully explored. If the explosion is sparked off at many disconnected points, the complexity of the flame surface later on may easily exceed the surface area derived from the non-linear growth of an initially smooth, RT unstable interface (“dandelion model”, Niemeyer & Woosley 1997). One-dimensional SN Ia models are unable to adequately represent such effects.
Finally, we can consider alternative routes to detonations that do not rely on large scale preconditioning. One such possibility is active turbulent combustion (ATC) (Kerstein 1996; Niemeyer & Woosley 1997), a runaway process of turbulent combustion that may occur as a consequence of flame-generated turbulence on multiple scales. Scaling arguments show that in the absence of an effective mechanism for stabilization, a runaway must ensue in any unconfined turbulent flame brush (Kerstein 1996). It is possible that the non-linear stabilization mechanism of the Landau-Darrieus (LD) instability by cusp formation (Zeldovich 1966) is unstable with respect to finite amplitude perturbations exerted by turbulent fluctuations, giving rise to an increasingly more violent acceleration of the flame front that ends only when compressibility effects become important. Cusp stabilization of the LD instability may also break down at a critical expansion ratio of burned and unburned material, as suggested by Blinnikov & Sasorov (1996). Practically, the consequences of ATC would involve either nearly sonic turbulent combustion or direct DDT by shock formation ahead of the combustion front. While undoubtedly speculative, ATC is a promising mechanism for powerful SN Ia explosions without the need for fine-tuning. Numerical experiments designed to measure the relevance of ATC for thermonuclear flames are underway.
## 5 Conclusions
This paper argues that the gradient mechanism for deflagration-detonation-transitions (DDT), previously believed to be the most realistic candidate to explain delayed detonations in Type Ia supernovae (SN Ia), is inconsistent with the phenomenology of turbulent mixing and combustion. Combining the inability of turbulence to provide microscopic mixing over macroscopic length scales with the robustness of thermonuclear flames with respect to quenching, the establishment of sufficiently large regions with a nearly constant temperature gradient is shown to be very unlikely. Both of these effects can (and must) be verified by means of direct numerical simulations on small scales. Work in this direction is in progress; first results of flame-turbulence interactions on small scales can be found in (Niemeyer et al. 1999). The argument above holds as well for pulsational explosions, although here the long intermediate period of diffusion dominated mixing may slightly facilitate the preconditioning needed for DDT.
Why, then, do one-dimensional explosion models with a slow deflagration phase followed by a delayed detonation so successfully fit the majority of SN Ia observations? Either we are missing an important effect that robustly leads to a DDT or at least to a very fast turbulent flame late during the explosion – a noteworthy, albeit speculative, possibility is active turbulent combustion (ATC) – or 1D models get the right answer for the wrong reasons, because they cannot accurately represent important multidimensional effects. An example for the latter is the impact of multipoint ignition on the development of the flame surface complexity, an effect that may well lead to a strongly enhanced burning rate in the deflagration mode as compared with the standard scenario. In any case, the success of both the direct and the pulsational modes for DDT hinges on a deflagration phase that is much slower than indicated by recent results of two-dimensional simulations.
On the other hand, there is a trend toward higher energy release by the turbulent flame if the numerical resolution is increased. So far, no convergence of the total energy generation has been attained. If this trend continues and is confirmed by three-dimensional calculations with realistic subgrid-scale modeling, the possibility that the bulk of Type Ia supernovae explodes without ever detonating must be taken more seriously.
To summarize, our analysis suggests that detonations may never take place in SN Ia explosions. If they do, they probably need to be preceded by a nearly sonic turbulent deflagration, in which case it may not be possible to clearly distinguish deflagrations from detonations observationally. ATC, multipoint ignition, higher than anticipated energy release in the turbulent flame brush, or any combination thereof may provide the required energy output to power the explosion.
This work is the result of many interesting discussions with Kendal Bushe, Alan Kerstein, Wolfgang Hillebrandt, Bob Rosner, and Stan Woosley. I also wish to acknowledge helpful information from Sergej Blinnikov, Greg Ruetsch, Martin Reinecke, and Martin Lisewski. This research was supported in part by the ASCI Center on Astrophysical Thermonuclear Flashes at the University of Chicago under DOE contract B341495.
# Large Neutrino Mixing with Universal Strength of Yukawa Couplings
## I. Introduction
The measurement of solar and atmospheric neutrino fluxes provides experimental evidence pointing towards neutrino oscillations, thus implying non-zero neutrino masses and leptonic mixing. These exciting results have motivated various attempts at understanding the structure of neutrino masses and mixing. Assuming three neutrinos, the required neutrino mass differences are such that in order for neutrinos to be of cosmological relevance, their masses have to be approximately degenerate. For Majorana neutrinos the case of quasi-degeneracy is especially interesting, since mixing and CP violation can occur even in the limit of exact mass degeneracy.
In this paper, we propose a simple ansatz within the framework of universal strength for Yukawa couplings (USY) which leads in a natural way to a set of highly degenerate neutrinos, while providing a large mixing solution for both the solar and atmospheric neutrino data. Within USY, all Yukawa couplings have equal moduli, but different complex phases, thus leading to complex unimodular mass matrices. We extend this idea to the leptonic sector, choosing ansätze where the charged lepton and neutrino mass matrices have this special form. In the quark sector the USY hypothesis already proved to be quite successful, leading to ansätze for the Yukawa couplings, where the parameters of the Cabibbo-Kobayashi-Maskawa matrix are predicted in terms of quark mass ratios, without any free parameters.
The most recent results of the SuperKamiokande (SK) collaboration strengthen the possibility of nearly maximal mixing angle for atmospheric neutrino oscillations with the experimental parameters within the range $`\mathrm{\Delta }m_{atm}^2=(1.5-8)\times 10^{-3}eV^2`$, $`\mathrm{sin}^2(2\theta _{atm})>0.8`$. In the absence of sterile neutrinos the dominant mode is $`\nu _\mu \rightarrow \nu _\tau `$ oscillations while the sub-leading mode $`\nu _\mu \rightarrow \nu _e`$ is severely restricted by the SK and CHOOZ data which require $`V_{13}\lesssim 0.2`$ for the range given above.
The interpretation of the present solar neutrino data leads to oscillations of the electron neutrino into some other neutrino species, with three different ranges of parameters still allowed. In the framework of the MSW mechanism there are two sets of solutions, the adiabatic branch (AMSW) requiring a large mixing ($`\mathrm{\Delta }m_{sol}^2=(2-20)\times 10^{-5}eV^2`$, $`\mathrm{sin}^2(2\theta _{sol})=0.65-0.95`$), and the non-adiabatic branch (NAMSW) requiring small mixing ($`\mathrm{\Delta }m_{sol}^2=5.4\times 10^{-6}eV^2`$, $`\mathrm{sin}^2(2\theta _{sol})\simeq 6.0\times 10^{-3}`$, for the best fit). In the framework of vacuum oscillations again large mixing is required ($`\mathrm{\Delta }m_{sol}^2=8.0\times 10^{-11}eV^2`$, $`\mathrm{sin}^2(2\theta _{sol})=0.75`$, for the best fit).
The result of the LSND collaboration, based on an accelerator experiment, has not yet been confirmed by other experiments, and in particular the KARMEN data already excludes a sizeable part of the allowed parameter space. In this paper, we only take into consideration the solar and atmospheric neutrino data and consider three neutrino families without additional sterile neutrinos.
The paper is organized as follows. In the next section, we analyse various possibilities for having a degenerate or quasi-degenerate neutrino mass spectrum in USY, within the framework of the see-saw mechanism. In section III, we present a specific USY ansatz for charged lepton and neutrino mass matrices. In section IV, we show through numerical examples, how the ansatz can accommodate all present data on atmospheric and solar neutrinos. Finally our conclusions are presented in section V.
## II. The See-saw mechanism and USY
The see-saw mechanism provides one of the most attractive scenarios for having naturally small masses for the left-handed neutrinos. Although the mechanism has been introduced within the framework of models with an extended gauge group such as $`SO(10)`$ or $`SU(2)_L\times SU(2)_R\times U(1)`$, it is clear that one may have the see-saw mechanism within the standard $`SU(3)_C\times SU(2)_L\times U(1)`$ theory, through the introduction of three right-handed neutrinos, with no other modification. The full $`(6\times 6)`$ neutrino mass matrix can be written as:
$$\mathcal{M}=\left[\begin{array}{cc}0& m_D\\ m_D^T& M_R\end{array}\right]$$
(1)
where $`m_D`$ denotes the neutrino Dirac mass matrix, while $`M_R`$ stands for the right-handed Majorana mass matrix. The Dirac mass matrix is proportional to a vacuum expectation value $`v`$ of the Higgs doublet responsible for the $`SU(2)\times U(1)`$ breaking, while the right-handed Majorana mass term, being invariant under $`SU(2)\times U(1)`$, has a scale $`V_o`$ which can be much larger than $`v`$. The masses and mixing of the left-handed neutrinos are determined by an effective mass matrix given by:
$$m_{eff}=m_DM_R^{-1}m_D^T$$
(2)
In this section, we analyse the various structures for $`m_D`$ and $`M_R`$, which can lead to $`m_{eff}`$ corresponding to quasi-degenerate neutrinos. We are especially interested in structures based on the USY principle. We will consider various examples, without attempting to be exhaustive. For simplicity, we will consider the exact degeneracy limit. The quasi-degenerate case can be viewed as a small perturbation around that limit.
Within the USY framework, exact mass degeneracy for a $`3\times 3`$ matrix is achieved for a mass matrix proportional to $`Y`$, where,
$$Y=\frac{1}{\sqrt{3}}\left[\begin{array}{ccc}\omega & 1& 1\\ 1& \omega & 1\\ 1& 1& \omega \end{array}\right]$$
(3)
with $`\omega =e^{2\pi i/3}`$. It can be readily verified that $`Y`$ can also be written as
$$Y=e^{\frac{5\pi i}{6}}F\text{diag}(1,1,\omega ^{*})F^T$$
(4)
where $`F`$ is given by
$$F=\left[\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{3}}\\ -\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{3}}\\ 0& -\frac{2}{\sqrt{6}}& \frac{1}{\sqrt{3}}\end{array}\right]$$
(5)
In the framework of the see-saw mechanism, there are various cases which can lead to mass degeneracy.
### Case I
Both the Dirac and the right-handed Majorana mass matrices are proportional to $`Y`$ so that one obtains for the full neutrino mass matrix:
$$\mathcal{M}=\left[\begin{array}{cc}0& \lambda Y\\ \lambda Y& \mu Y\end{array}\right]$$
(6)
where $`\lambda `$, $`\mu `$ are real constants with dimension of mass, satisfying the relation $`\lambda ^2/\mu \sim v^2/V_o`$. The effective $`3\times 3`$ mass matrix is then given by
$$m_{eff}=\frac{\lambda ^2}{\mu }YY^{-1}Y=\frac{\lambda ^2}{\mu }Y$$
(7)
One concludes that if $`m_D`$ and $`M_R`$ are proportional to $`Y`$, then $`m_{eff}`$ will also have a degenerate mass spectrum.
### Case II
Both $`M_R`$ and $`m_D`$ have again degenerate eigenvalues, but we assume that $`M_R`$ is proportional to $`Y`$ in the weak basis where $`m_D`$ is already diagonal and therefore proportional to the unit matrix. The neutrino mass matrix has then the form:
$$\mathcal{M}=\left[\begin{array}{cc}0& \lambda 𝟙\\ \lambda 𝟙& \mu Y\end{array}\right]$$
(8)
which leads to
$$m_{eff}=\frac{\lambda ^2}{\mu }Y^{-1}=\frac{\lambda ^2}{\mu }Y^{*}$$
(9)
It is clear from Eq.(4) that $`m_{eff}`$ has also a degenerate mass spectrum.
### Case III
Let us now consider a situation analogous to case II, but where the forms of $`M_R`$ and $`m_D`$ are interchanged, i.e.
$$\mathcal{M}=\left[\begin{array}{cc}0& \lambda Y\\ \lambda Y& \mu 𝟙\end{array}\right]$$
(10)
which implies
$$m_{eff}=\frac{\lambda ^2}{\mu }Y^2=\frac{\lambda ^2}{\mu }iY^{*}$$
(11)
so that one obtains again $`m_{eff}`$ with a degenerate mass spectrum.
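Cases I-III lend themselves to a direct numerical check: since $`Y`$ is unitary, $`Y^{-1}`$ and $`Y^2`$ are unitary as well, so all three effective mass matrices have three equal singular values. A minimal sketch (the mass scales $`\lambda `$ and $`\mu `$ are illustrative):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
Y = np.array([[w, 1, 1],
              [1, w, 1],
              [1, 1, w]]) / np.sqrt(3)
I3 = np.eye(3)
lam, mu = 1.0, 1.0e12                    # illustrative scales, lam**2/mu << lam

# (m_D, M_R) pairs for cases I, II and III
cases = {"I":   (lam * Y,  mu * Y),
         "II":  (lam * I3, mu * Y),
         "III": (lam * Y,  mu * I3)}

for name, (mD, MR) in cases.items():
    m_eff = mD @ np.linalg.inv(MR) @ mD.T           # see-saw, Eq. (2)
    # neutrino masses = singular values of m_eff, in units of lam**2/mu
    print(name, np.linalg.svd(m_eff, compute_uv=False) * mu / lam**2)
# Each case prints (1, 1, 1): an exactly degenerate spectrum.
```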
### Case IV
So far, we have only considered cases where both $`m_D`$ and $`M_R`$ have degenerate eigenvalues. We shall now assume that $`m_D`$ has a hierarchical spectrum and show that one may obtain an $`m_{eff}`$ with degenerate spectrum, using an $`M_R`$ which also has a hierarchical spectrum. For definiteness, let us assume that $`m_D`$ is given by
$$m_D=\lambda \left[\begin{array}{ccc}e^{iϵ_1}& 1& 1\\ 1& e^{iϵ_1}& 1\\ 1& 1& e^{iϵ_2}\end{array}\right]$$
(12)
where $`ϵ_i`$ are real parameters, satisfying the relations $`|ϵ_1|\ll |ϵ_2|\ll 1`$. The matrix $`m_D=\lambda m_{D_o}`$ can be written as a sum with
$$m_{D_o}=\mathrm{\Delta }+ϵ_1A+ϵ_2B$$
(13)
where
$$\mathrm{\Delta }=\left[\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right]$$
(14)
and
$$\begin{array}{cc}A=\frac{(e^{iϵ_1}-1)}{ϵ_1}\text{diag}(1,1,0),& B=\frac{(e^{iϵ_2}-1)}{ϵ_2}\text{diag}(0,0,1)\end{array}$$
(15)
Since $`A`$, $`B`$ are of order $`1`$, it is clear that $`m_D`$ has a hierarchical spectrum. If we choose now
$$M_R=\mu m_{D_o}Y^{*}m_{D_o}$$
(16)
one obtains
$$m_{eff}=\frac{\lambda ^2}{\mu }m_{D_o}\left[m_{D_o}Y^{*}m_{D_o}\right]^{-1}m_{D_o}=\frac{\lambda ^2}{\mu }Y$$
(17)
where we have used the fact that $`(Y^{*})^{-1}=Y`$. It is clear that $`m_{eff}`$ has a degenerate spectrum. The interesting point is that $`M_R`$ has a hierarchical spectrum, since from Eqs.(13, 16) one obtains
$$M_R=\mu \left[\mathrm{\Delta }+ϵ_1A+ϵ_2B\right]Y^{*}\left[\mathrm{\Delta }+ϵ_1A+ϵ_2B\right]=3e^{-\frac{\pi i}{6}}\mu \left[\mathrm{\Delta }+ϵ_1A^{\prime }+ϵ_2B^{\prime }\right]$$
(18)
where we have used the fact that for any matrix $`Z`$, one has $`\mathrm{\Delta }Z\mathrm{\Delta }=(\sum _{ij}Z_{ij})\mathrm{\Delta }`$. It is clear from Eq.(18) that $`M_R`$ has indeed a hierarchical spectrum since $`A^{\prime }`$ and $`B^{\prime }`$ are at most of order one.
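The same kind of numerical check works for Case IV; in the sketch below the values of $`ϵ_1`$ and $`ϵ_2`$ are illustrative, not taken from the text:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
Y = np.array([[w, 1, 1], [1, w, 1], [1, 1, w]]) / np.sqrt(3)

eps1, eps2 = 1.0e-4, 1.0e-2            # |eps1| << |eps2| << 1 (illustrative)
mD0 = np.array([[np.exp(1j*eps1), 1, 1],
                [1, np.exp(1j*eps1), 1],
                [1, 1, np.exp(1j*eps2)]])

lam, mu = 1.0, 1.0
MR    = mu * mD0 @ Y.conj() @ mD0                  # Eq. (16)
m_eff = lam**2 * mD0 @ np.linalg.inv(MR) @ mD0.T   # see-saw, Eq. (2)

sv = lambda M: np.linalg.svd(M, compute_uv=False)
print("m_D  :", sv(lam * mD0))    # hierarchical: ~ (3, O(eps2), O(eps1))
print("M_R  :", sv(MR))           # hierarchical as well
print("m_eff:", sv(m_eff))        # (1, 1, 1): exactly degenerate
```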
We have shown that starting from a hierarchical Dirac neutrino mass $`m_D`$, it is always possible to find a Majorana mass matrix $`M_R`$ which leads to an exactly degenerate mass matrix of the USY type. However, it should be stressed that in order to achieve this, a significant amount of fine-tuning between the Dirac and Majorana sectors is required, unless there is a symmetry principle constraining both sectors.
## III. A specific ansatz within the USY framework
In this section, we suggest the following specific ansatz for the charged lepton mass matrix $`M_{\ell }`$ and the effective $`3\times 3`$ neutrino mass matrix $`M_\nu `$:
$$M_{\ell }=\begin{array}{cc}c_{\ell }\left[\begin{array}{ccc}e^{ia}& 1& 1\\ 1& e^{ia}& 1\\ 1& 1& e^{-i(a+b)}\end{array}\right],& M_\nu =c_\nu \left[\begin{array}{ccc}e^{i\alpha }& 1& 1\\ 1& e^{i\beta }& 1\\ 1& 1& e^{-i(\alpha +\beta )}\end{array}\right]\end{array}$$
(19)
Both $`M_{\ell }`$ and $`M_\nu `$ are of the USY type, symmetric and with only three real free parameters each, thus leading to full calculability of the mixing angles in terms of the mass ratios. Note that $`M_\nu `$ is the relevant mass matrix for the neutrinos; it can either be an effective see-saw mass matrix, as discussed in the previous section, or simply a Majorana mass matrix for left-handed neutrinos in a model with no right-handed neutrinos.
The leptonic charged weak current interactions can be written as:
$$\mathcal{L}_W=\frac{g_W}{\sqrt{2}}(\overline{e},\overline{\mu },\overline{\tau })_L\gamma _\mu V\left(\begin{array}{c}\nu _1\\ \nu _2\\ \nu _3\end{array}\right)_LW^\mu +\text{h.c.}$$
(20)
where the leptonic mixing matrix $`V`$ is given by:
$$V=U_{\ell }^{\dagger }U_\nu $$
(21)
and where
$$\begin{array}{cc}\ell _{L_i}^{\text{weak}}=(U_{\ell })_{ij}\ell _{L_j}^{\text{phys}},& \nu _{L_\alpha }=(U_\nu )_{\alpha i}\nu _{L_i}\end{array}$$
(22)
with $`\ell _{L_i}^{\text{phys}}`$ denoting the physical charged leptons and $`\nu _{L_i}`$ the physical light neutrinos. The charged leptons have hierarchical masses, thus implying that the phases $`a`$ and $`b`$ in Eq.(19) have to be small. These phases can be expressed in terms of the charged lepton masses and to leading order one obtains:
$$\begin{array}{cc}|a|=3\frac{m_e}{m_\tau },& |b|=\frac{9}{2}\frac{m_\mu }{m_\tau }\end{array}$$
(23)
On the other hand, we want the matrix $`M_\nu `$ in Eq.(19) to lead to highly degenerate neutrino masses. It can be easily checked that the matrix
$$M=c\left[\begin{array}{ccc}e^{i\alpha }& 1& 1\\ 1& e^{i\alpha }& 1\\ 1& 1& e^{-i2\alpha }\end{array}\right]$$
(24)
has in general two degenerate eigenvalues and that in particular, for $`\alpha =2\pi /3`$ we recover the $`Y`$ matrix of Eq.(3), where all three eigenvalues are exactly degenerate. This suggests that we expand $`\alpha `$ and $`\beta `$ in Eq.(19) around the value $`2\pi /3`$, introducing two small parameters $`\delta `$ and $`\epsilon `$ defined by:
$$\begin{array}{cc}\alpha =\frac{2\pi }{3}-\delta -\epsilon ,& \beta =\frac{2\pi }{3}-\delta \end{array}$$
(25)
In the limit $`\epsilon =0`$, one still has a two-fold degeneracy of eigenvalues, as in Eq.(24). The eigenvalues $`\lambda _i`$ of the dimensionless hermitian matrix $`H_\nu \equiv (M_\nu M_\nu ^{\dagger })/(3c_\nu ^2)`$ are given in terms of $`\alpha `$ and $`\beta `$ by the expression
$$\lambda _i=1+2x\mathrm{cos}\varphi _i$$
(26)
with
$$\begin{array}{ccc}\varphi _1=\theta +\frac{\alpha -\beta }{3}-\frac{2\pi }{3},& \varphi _2=\theta +\frac{\alpha -\beta }{3}+\frac{2\pi }{3},& \varphi _3=\theta +\frac{\alpha -\beta }{3}\end{array}$$
(27)
and
$$\begin{array}{cc}\mathrm{tan}(\theta )=\frac{\mathrm{sin}\beta -\mathrm{sin}\alpha }{\mathrm{cos}\beta +\mathrm{cos}\alpha +1},& x=\frac{1}{3}\sqrt{3+2\mathrm{cos}(\beta +\alpha )+2\mathrm{cos}\beta +2\mathrm{cos}\alpha }\end{array}$$
(28)
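Expressions (26)-(28) can be tested against a direct diagonalization of $`H_\nu `$; the test values of $`\alpha `$ and $`\beta `$ in the sketch below are arbitrary:

```python
import numpy as np

def H_nu(alpha, beta):
    M = np.array([[np.exp(1j*alpha), 1, 1],
                  [1, np.exp(1j*beta), 1],
                  [1, 1, np.exp(-1j*(alpha + beta))]])
    return M @ M.conj().T / 3.0

def eigs_formula(alpha, beta):
    theta = np.arctan2(np.sin(beta) - np.sin(alpha),
                       np.cos(beta) + np.cos(alpha) + 1.0)
    x = np.sqrt(3 + 2*np.cos(alpha + beta)
                  + 2*np.cos(beta) + 2*np.cos(alpha)) / 3.0
    phi = theta + (alpha - beta)/3.0 + np.array([-1.0, 1.0, 0.0])*2*np.pi/3.0
    return np.sort(1 + 2*x*np.cos(phi))

alpha, beta = 0.3, 0.7                                  # arbitrary test values
print(np.sort(np.linalg.eigvalsh(H_nu(alpha, beta))))   # direct diagonalization
print(eigs_formula(alpha, beta))                        # Eqs. (26)-(28)
# both lines should print the same three eigenvalues
```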
The parameters $`\delta `$ and $`\epsilon `$ can be expressed in terms of neutrino masses, and in leading order one has:
$$\begin{array}{cc}|\delta |=\frac{1}{\sqrt{3}}\frac{\mathrm{\Delta }m_{32}^2}{m_3^2},& |\epsilon |=\sqrt{3}\frac{\mathrm{\Delta }m_{21}^2}{m_3^2}\end{array}$$
(29)
where $`\mathrm{\Delta }m_{ij}^2=|m_i^2-m_j^2|`$. The matrix $`H_{\ell }\equiv (M_{\ell }M_{\ell }^{\dagger })/(3c_{\ell }^2)`$ is approximately diagonalized by $`U_{\ell }=F`$ defined in Eq.(5), with additional small corrections expressible in terms of charged lepton mass ratios. The diagonalization of $`M_\nu `$ requires special care since to leading order $`M_\nu `$ is an exactly degenerate mass matrix. In Ref. we have studied the general form of Majorana neutrino mass matrices leading to exact degeneracy and we have pointed out that if a given unitary matrix $`U_0`$ diagonalizes the degenerate mass matrix, so does the matrix $`U_\nu =U_0O`$, with $`O`$ an arbitrary orthogonal matrix. The diagonalizing matrix $`U_\nu =U_0O`$ is only fixed when the mass degeneracy is lifted. For our specific case with $`M_\nu `$ given by Eq.(19), we obtain, to next-to-leading order:
$$U_\nu =\frac{e^{-\frac{\pi i}{4}}}{\sqrt{3}}\left[\begin{array}{ccc}\omega & 1& 1\\ 1& \omega & 1\\ 1& 1& \omega \end{array}\right]K$$
(30)
where $`K=`$ diag $`(1,1,1)`$, so that $`U_\nu ^TM_\nu U_\nu `$ is diagonal real and positive for a positive $`c_\nu `$. As a result the moduli of the mixing matrix are, to a very good approximation, given by:
$$|V|\left[\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& 0\\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& \frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}\end{array}\right]$$
(31)
## IV. Confronting the data
There is a stringent bound on the parameter $`c_\nu `$ of the neutrino mass matrix in Eq.(19) from neutrinoless double beta decay, which can be expressed by $`|\langle m\rangle |\equiv |\sum _iU_{ei}^2m_{\nu _i}|=|m_{ee}|<0.2eV`$, with $`m_{ee}`$ denoting the entry $`(11)`$ of $`M_\nu `$ in the weak basis where $`M_{\ell }`$ is diagonal. Taking into account Eq.(19), this immediately leads to
$$m\simeq \sqrt{3}c_\nu \lesssim 0.2eV$$
(32)
so that in the case of almost degenerate neutrinos coming from the ansatz of Eq.(19), we cannot have light neutrinos with masses higher than about $`0.2eV`$, where $`m`$ is the approximate neutrino mass.
In order to compare our ansatz with the experimental results from atmospheric and solar neutrino experiments, we must bear in mind that in the context of three left-handed neutrinos the probability for a neutrino $`\nu _\alpha `$ to oscillate into other neutrinos is given by
$$1-\text{P}(\nu _\alpha \rightarrow \nu _\alpha )=4\sum _{i<j}|V_{\alpha i}|^2|V_{\alpha j}|^2\mathrm{sin}^2\left[\frac{\mathrm{\Delta }m_{ij}^2}{4}\frac{L}{E}\right]$$
(33)
where $`E`$ is the neutrino energy, and $`L`$ denotes the distance travelled between the source and the detector. The translation of the experimental bounds, which are given in terms of two-flavour mixing, into the three-flavour framework is simple, since in this case we have $`V_{13}`$ close to zero and also $`\mathrm{\Delta }m_{32}^2\gg \mathrm{\Delta }m_{21}^2`$, and we may safely identify:
$$\mathrm{sin}^22\theta _{\text{atm}}=4\left(|V_{21}|^2|V_{23}|^2+|V_{22}|^2|V_{23}|^2\right)$$
(34)
$$\mathrm{sin}^22\theta _{\text{sol}}=4|V_{11}|^2|V_{12}|^2$$
(35)
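Evaluated on the leading-order mixing matrix of Eq.(31), these definitions already give values very close to those of the numerical examples below; a short sketch:

```python
import numpy as np

V = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0],
              [1/np.sqrt(6), 1/np.sqrt(6), 2/np.sqrt(6)],
              [1/np.sqrt(3), 1/np.sqrt(3), 1/np.sqrt(3)]])   # Eq. (31)
A = np.abs(V)**2

sin2_atm = 4*(A[1, 0]*A[1, 2] + A[1, 1]*A[1, 2])   # Eq. (34)
sin2_sol = 4*A[0, 0]*A[0, 1]                       # Eq. (35)
print(sin2_atm, sin2_sol)    # 8/9 ~ 0.889 and 1.0
```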
The following examples illustrate how our ansatz fits the experimental bounds for large solar and atmospheric mixing. The first example is in the context of vacuum oscillations and the second for large mixing AMSW.
### 1st Example
We choose as input the masses for the charged leptons
$$\begin{array}{ccc}m_e=0.511MeV,& m_\mu =105.7MeV,& m_\tau =1777MeV\end{array}$$
(36)
which correspond to phases $`|a|=8.61\times 10^{-4}`$ and $`|b|=0.267`$ of Eq.(19). For the neutrino sector we choose
$$\begin{array}{ccc}m_{\nu _3}=0.2eV,& \mathrm{\Delta }m_{32}^2=5.0\times 10^{-3}eV^2,& \mathrm{\Delta }m_{21}^2=1.0\times 10^{-10}eV^2\end{array}$$
(37)
thus fixing the parameters $`|\delta |=0.0772`$ and $`|\epsilon |=4.98\times 10^{-9}`$ of Eq.(25).
Performing an exact numerical diagonalization of the mass matrices we obtain for the leptonic mixing matrix
$$|V|=\left[\begin{array}{ccc}0.707& 0.707& 6.78\times 10^{-10}\\ 0.406& 0.406& 0.819\\ 0.579& 0.579& 0.574\end{array}\right]$$
(38)
which from Eq.(34) and Eq.(35) translates into
$$\begin{array}{cc}\mathrm{sin}^2(2\theta _{\text{atm}})=0.884,& \mathrm{sin}^2(2\theta _{\text{sol}})=1.0\end{array}$$
(39)
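The example can be reproduced in a few lines. The signs chosen for the phases below are one possible convention (only their moduli are fixed by the masses), so the printed $`|V|`$ should match Eq.(38) up to such choices:

```python
import numpy as np

def usy(c, p1, p2, p3):
    """USY matrix: all entries of modulus c, phases p1, p2, p3 on the diagonal."""
    M = np.ones((3, 3), dtype=complex)
    M[0, 0], M[1, 1], M[2, 2] = np.exp(1j*p1), np.exp(1j*p2), np.exp(1j*p3)
    return c * M

def left_unitary(M):
    """Unitary U with U^dag (M M^dag) U diagonal, eigenvalues ascending."""
    _, U = np.linalg.eigh(M @ M.conj().T)
    return U

a, b        = 8.61e-4, 0.267          # charged-lepton phases, Eq. (23)
delta, eps  = 0.0772, 4.98e-9         # neutrino parameters of this example
alpha, beta = 2*np.pi/3 - delta - eps, 2*np.pi/3 - delta

Ml  = usy(1.0, a, a, -(a + b))                  # Eq. (19), charged leptons
Mnu = usy(1.0, alpha, beta, -(alpha + beta))    # Eq. (19), neutrinos

V = left_unitary(Ml).conj().T @ left_unitary(Mnu)   # Eq. (21)
print(np.round(np.abs(V), 3))
```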
### 2nd Example
In this second numerical application, we choose
$$\begin{array}{ccc}m_{\nu _3}=0.2eV,& \mathrm{\Delta }m_{32}^2=5.0\times 10^{-3}eV^2,& \mathrm{\Delta }m_{21}^2=5.0\times 10^{-5}eV^2\end{array}$$
(40)
in agreement with the large mixing AMSW solution for the solar problem. This case corresponds to $`|\delta |=0.0764`$ and $`|\epsilon |=2.48\times 10^{-3}`$. The resulting leptonic mixing matrix coincides, to an excellent approximation, with that of Eq.(38), with the exception of $`|V_{13}|`$ which is given by $`|V_{13}|=3.38\times 10^{-4}`$. Of course this is to be expected from the discussion of section III where we have pointed out that this ansatz implies in leading order a leptonic mixing matrix given by Eq.(31). The resulting values for $`\mathrm{sin}^2(2\theta _{\text{atm}})`$ and $`\mathrm{sin}^2(2\theta _{\text{sol}})`$ do not deviate from those of Eqs.(39).
In these examples, we fixed the parameters of our ansatz in such a way that we reproduce the charged leptonic masses and obtain almost degenerate neutrino masses obeying the current experimental bounds on neutrino mass splittings. The ansatz then leads to large values for $`\mathrm{sin}^2(2\theta _{\text{atm}})`$ and $`\mathrm{sin}^2(2\theta _{\text{sol}})`$. Comparing our results with the experimental constraints, we conclude that our ansatz is in better agreement with the vacuum oscillation solution for solar neutrinos than with the AMSW solution, since the value $`\mathrm{sin}^2(2\theta _{\text{sol}})=1.0`$ is disfavoured in the framework of AMSW. However, it should be noted that it is possible, within USY, to obtain a $`\mathrm{sin}^2(2\theta _{\text{sol}})`$ compatible with the AMSW solution, with a slight modification of our ansatz, by replacing $`M_\nu `$ in Eq.(19) by $`M_\nu ^{\prime }=CM_\nu C`$, where $`C=`$diag$`(1,e^{i\alpha },1)`$. In this case, we obtain, for $`\alpha =0.2`$, $`\mathrm{sin}^2(2\theta _{\text{sol}})=0.974`$, $`\mathrm{sin}^2(2\theta _{\text{atm}})=0.934`$, with the same mass splittings as before. This result is inside the $`90\%`$ C.L. experimental range for $`\mathrm{sin}^2(2\theta _{\text{sol}})`$ in AMSW.
Concerning the stability of our ansatz under the renormalization group equations (RGE), we find that the VO solution is unstable, since the required $`\mathrm{\Delta }m_{21}^2`$ mass splitting, of the order of $`10^{-10}eV^2`$, is much smaller than the mass splitting $`\mathrm{\Delta }m_{RGE}^2`$ generated by the running of the RGE. Indeed, in the framework of the standard model, one finds: $`\mathrm{\Delta }m_{RGE}^2\sim m_\nu ^2ϵ`$, with $`ϵ`$ given by,
$$ϵ=\frac{Y_\tau ^2}{32\pi ^2}\mathrm{log}\left(\frac{\mathrm{\Lambda }}{M_Z}\right)$$
(41)
where $`Y_\tau `$ is the $`\tau `$ Yukawa coupling (at $`M_Z`$). It is clear that even for $`\mathrm{\Lambda }=o(10TeV)`$, $`ϵ\sim o(10^{-6})`$. Therefore, with our $`m_\nu =0.2eV`$, the mass splitting coming from the RGE is always larger than $`10^{-8}eV^2`$, and this far exceeds the required VO mass splitting of $`10^{-10}eV^2`$. This result is in agreement with previous analyses found in the literature, and it implies that in order to have a stable VO solution, one would need a mechanism, like e.g. an exact symmetry, which would protect $`\mathrm{\Delta }m_{21}^2`$ from becoming too large.
On the other hand, the MSW solution is quite stable. Even if we take $`\mathrm{\Lambda }=o(10^{19})GeV`$, we get $`ϵ\sim 10^{-5}`$, leading to a RGE mass splitting of $`\mathrm{\Delta }m_{RGE}^2\sim 10^{-7}eV^2`$, which is much smaller than the required AMSW splitting of $`\mathrm{\Delta }m_{21}^2=5\times 10^{-5}eV^2`$. The mixing angles are not significantly altered by the running of the RGE.
## V. Conclusions
Within the framework of the USY hypothesis, we have analysed various structures for the Dirac and Majorana neutrino mass matrices which can lead, through the see-saw mechanism, to an effective neutrino mass matrix for the left-handed neutrinos, with a degenerate mass spectrum. The physically relevant case of quasi-degenerate neutrinos can be viewed as a small perturbation of this limit. In one of the cases considered, the neutrino Dirac mass matrix has a hierarchical spectrum, but the resulting effective neutrino mass matrix has a degenerate spectrum. This case has the attractive feature of having all fermions, namely quarks, charged leptons and neutrinos, with hierarchical Dirac masses. We have then put forward an USY ansatz for the charged lepton and neutrino effective mass matrix, which leads to three quasi-degenerate neutrinos. The ansatz is highly predictive since the leptonic mixing matrix is given in leading order by a fixed matrix (independent of the lepton masses) with small corrections given in terms of lepton mass ratios, with no arbitrary parameters. A large mixing solution is obtained both for the solar and atmospheric neutrino data.
We have verified that the VO solution is unstable under the RGE, while the AMSW is stable. However, it should be pointed out that the problem of stability cannot be separated from the question of obtaining the USY structure from a symmetry principle. At present, this is still an open question, and therefore our ansatz should be viewed as an effective theory at low energies, resulting hopefully from an appropriate structure for the Dirac and right-handed Majorana neutrinos, imposed at a high energy scale.
One of the salient features of the ansatz of Eq.(19) is the fact that the mass matrices for both the charged leptons and the neutrinos have analogous structures, with all matrix elements of equal modulus and the non-vanishing phases appearing only along the diagonal. The drastic difference between the resulting spectra for the charged leptons and the neutrinos has to do with the fact that in the case of charged leptons the phases along the diagonal are small, while in the case of neutrinos the phases are close to $`2\pi /3`$, which corresponds to the exact degeneracy limit. These simple structures for the mass matrices and Yukawa couplings do suggest the existence of a symmetry principle leading to them.
###### Acknowledgements.
We thank E. Akhmedov for useful discussions. M. N. R. and J. I. S. are thankful for the hospitality of CERN Theory Division, where part of this work was done. The work of J. I. S. was partially supported by Fundação para a Ciência e Tecnologia of the Portuguese Ministry of Science and Technology.
# Inverse cascade in two-dimensional turbulence: deviations from Gaussianity
## Abstract
High resolution numerical simulations of stationary inverse energy cascade in two-dimensional turbulence are presented. Deviations from Gaussianity of velocity differences statistics are quantitatively investigated. The level of statistical convergence is pushed far enough to permit reliable measurement of the asymmetries in the probability distribution functions of longitudinal increments and odd-order moments, which bring the signature of the inverse energy flux. Their scaling laws do not present any measurable intermittency correction. The seventh order skewness is found to increase by almost two orders of magnitude with respect to the third, thus becoming of order unity.
The inverse energy cascade in two dimensional Navier-Stokes turbulence is one of the most important phenomena in fluid dynamics. In agreement with the remarkable prediction of R.H. Kraichnan in 1967, the coupled constraints of energy and enstrophy conservation make the energy injected into the system flow toward the large scales. This is a basic difference with respect to 3D turbulence, where energy flows toward small scales in a direct cascade. The dynamical process of structuring and organization of the large scales by the inverse cascade is also of great interest for geophysical fluid dynamics. First numerical and experimental observations of the inverse cascade and the ensuing Kolmogorov energy spectrum were obtained early on. The important point foreseen in those works is that the smallness of the skewness suggests that intermittency might be weak. This conjecture was later supported by numerical simulations and experiments: scaling laws are compatible with dimensional predictions and both transversal and longitudinal velocity probability distribution functions (pdf's) look not far from Gaussian. The evidence stemming from experiments and simulations is that the inverse transfer takes place via clustering of small-scale equal sign vortices. Strong deviations from Gaussianity appear if the system has a finite size and friction extracting the energy from the large scales is small (or absent). A pile-up of energy akin to the Bose-Einstein condensation takes then place in the gravest mode, large scale vortices are formed and energy spectra steeper than the Kolmogorov one are observed. Here we shall not consider the condensation phase, concentrating on the inverse cascade statistics. Theoretically, the inverse cascade enjoys a great advantage with respect to the direct one: the limit of molecular viscosity $`\nu \to 0`$ can be taken without any harm in the equations of motion for velocity structure functions. At variance with 3D turbulence, the energy dissipation $`\nu \left(\nabla 𝒗\right)^2`$ is indeed vanishing when $`\nu \to 0`$. The absence of dissipative anomalies is the key to the analytical solution of inverse cascades in passive scalar advection. Intermittency was found to be absent there, even though the statistics may differ strongly from Gaussian. For the 2D Navier-Stokes inverse cascade, dissipative terms can again be discarded but the situation is complicated by pressure gradients. They indeed couple the statistics of velocity differences $`\delta _r𝒗\equiv 𝒗(𝒓)-𝒗(\mathrm{𝟎})`$ at various $`𝒓`$'s in a non-local way. Closures on velocity increments-pressure gradients correlations have been proposed by invoking the quasi-Gaussianity of the statistics, and quantitative predictions have been derived in this way. The issue of quasi-Gaussianity is however moot, as deviations are intrinsically entangled with the dynamical process of inverse energy cascade. Standard calculations indeed permit one to derive the $`3/2`$ Kolmogorov law for 2D turbulence: $`S_L^{(3)}(r)=\langle \left[\delta _r𝒗\cdot \widehat{𝒓}\right]^3\rangle =\frac{3}{2}ϵr`$, where $`\widehat{𝒓}=𝒓/r`$. The energy flux is denoted by $`ϵ`$, and the fact that it goes upscale is reflected in the positive sign of the moment. Precise quantitative information on the deviations from Gaussianity is however difficult to obtain. Odd-order structure functions involve for example strong cancellations between negative and positive contributions, and the $`3/2`$ law itself could not be observed in previous studies, due to lack of resolution and/or statistical convergence.
It is our purpose here to present the results of high-resolution numerical simulations aimed at quantitatively analyzing deviations from Gaussianity in the inverse energy cascade.
Specifically, the 2D Navier-Stokes equation for the vorticity $`\omega (𝒓,t)=\mathrm{\Delta }\psi (𝒓,t)`$ is:
$$\partial _t\omega +J(\omega ,\psi )=\nu \mathrm{\Delta }\omega -\alpha \omega -\mathrm{\Delta }f,$$
(1)
where $`\psi `$ is the stream function, the velocity $`𝒗=\nabla ^{\perp }\psi =(-\partial _y\psi ,\partial _x\psi )`$ and $`J`$ denotes the Jacobian. The friction linear term $`\alpha \omega `$ extracts energy from the system at scales comparable to the friction scale $`\eta _{\mathrm{fr}}\sim ϵ^{1/2}\alpha ^{-3/2}`$, assuming a Kolmogorov scaling law for the velocity. To avoid Bose-Einstein condensation in the gravest mode we choose $`\alpha `$ to make $`\eta _{\mathrm{fr}}`$ sufficiently smaller than the box size. The other relevant length in the problem is the small-scale forcing correlation length $`l_f`$, bounding the inertial range for the inverse cascade as $`l_f\ll r\ll \eta _{\mathrm{fr}}`$. We use a Gaussian forcing with correlation function $`\langle f(𝒓,t)f(\mathrm{𝟎},t^{\prime })\rangle =\delta (t-t^{\prime })F(r/l_f)`$. The $`\delta `$-correlation in time ensures the exact control of the energy injection rate. The forcing space correlation should decay rapidly for $`r\gg l_f`$ and we choose $`F(x)=F_0l_f^2\mathrm{exp}(-x^2/2)`$, where $`F_0`$ is the energy input. The numerical integration of (1) is performed by a standard $`2/3`$-dealiased pseudospectral method on a doubly periodic square domain of $`N^2=2048^2`$ grid points. The viscous term in (1) has the role of removing enstrophy at scales smaller than $`l_f`$ and, as customary, it is numerically more convenient to substitute it by a hyperviscous term (of order eight in our simulations). Time evolution is obtained by a standard second-order Adams-Bashforth scheme. After the system has reached stationarity, analysis is performed over twenty snapshots of the velocity field equally spaced by one large-eddy turnover time.
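For orientation, a minimal sketch of a single right-hand-side evaluation of Eq.(1) in this scheme is given below. The resolution, friction and hyperviscosity coefficients are illustrative values rather than the production parameters, the stochastic forcing and the Adams-Bashforth stepping are omitted, and the nonlinearity is written as the advection term $`𝒗\cdot \nabla \omega `$ (conventions for the sign of $`J`$ differ in the literature).

```python
import numpy as np

N, alpha, nu8 = 256, 1.0e-2, 1.0e-34      # illustrative parameters

k = np.fft.fftfreq(N, d=1.0/N)            # integer wavenumbers on a 2*pi box
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                            # avoid division by zero at the mean mode
dealias = (np.abs(KX) < N/3.0) & (np.abs(KY) < N/3.0)   # 2/3 rule

def rhs(w_hat):
    """d(omega_hat)/dt for Eq. (1), forcing omitted."""
    psi_hat = -w_hat / K2                 # omega = Laplacian(psi)
    psi_hat[0, 0] = 0.0
    ux = np.fft.ifft2(-1j*KY*psi_hat).real    # v = grad^perp(psi)
    uy = np.fft.ifft2( 1j*KX*psi_hat).real
    wx = np.fft.ifft2( 1j*KX*w_hat).real
    wy = np.fft.ifft2( 1j*KY*w_hat).real
    adv_hat = np.fft.fft2(ux*wx + uy*wy) * dealias
    # friction plus order-eight hyperviscosity, -(alpha + nu8 k^8) omega_hat
    return -adv_hat - (alpha + nu8*K2**4) * w_hat

w_hat = np.fft.fft2(np.random.randn(N, N)) * dealias     # toy initial vorticity
print(np.abs(rhs(w_hat)).max())
```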
Let us now discuss the results. In Fig. 1 we present the third-order longitudinal structure function $`S_L^{(3)}(r)`$ compensated by the factor $`1/(ϵr)`$, showing a neat plateau at the value $`3/2`$ – in agreement with the Kolmogorov law – over a range of almost one decade of scales. Fig. 2 shows the energy spectrum $`E(k)`$, which displays a clear Kolmogorov scaling $`k^{-5/3}`$, and the energy flux $`\mathrm{\Pi }(k)`$.
The Kolmogorov constant in
$$E(k)=Cϵ^{2/3}k^{-5/3},$$
(2)
is found to be $`C=6.0\pm 0.4`$. Previous numerical simulations and experiments report values of the Kolmogorov constant $`C`$ ranging from $`5.8`$ to $`7.0`$. The structure function constants corresponding to (2) are $`C_L^{(2)}=3C_T^{(2)}/5=\frac{\sqrt{3}\pi }{2^{5/3}\mathrm{\Gamma }(4/3)^2}C=12.9\pm 0.8`$, where the first two equalities follow from isotropy and incompressibility and
$$S_L^{(n)}(r)=\langle \left[\delta _r𝒗\cdot \widehat{𝒓}\right]^n\rangle =C_L^{(n)}\left(ϵr\right)^{n/3}.$$
(3)
For transverse moments, $`\widehat{𝒓}`$ is substituted in (3) by $`\widehat{𝒓}_{\perp }`$, perpendicular to it. It is of interest to remark that longitudinal and transverse velocity increments are uncorrelated, i.e. $`S_{L,T}^{(2)}(r)=0`$. The relatively large value of $`C_L^{(2)}`$ implies a small skewness of the longitudinal velocity differences $`(3/2)/(C_L^{(2)})^{3/2}=0.03`$. Although the longitudinal pdf looks close to Gaussian and quite symmetric, on a more quantitative ground the asymmetries turn out to be quite strong, as shown by the two curves of $`S_L^{(5)}(r)`$ and $`S_L^{(7)}(r)`$ in Fig. 3.
First, we can observe that their scaling behavior is in agreement with Kolmogorov predictions, without significant anomalous corrections. Second, the constants are $`C_L^{(5)}\simeq 130`$ and $`C_L^{(7)}\simeq 14000`$, giving for the hyper-skewness $`C_L^{(5)}/(C_L^{(2)})^{5/2}\simeq 0.22`$ and $`C_L^{(7)}/(C_L^{(2)})^{7/2}\simeq 1.8`$. The error bars can be estimated from r.m.s. fluctuations of compensated plots and for the seventh order (which is of course the most delicate) they amount to $`20\%`$. The increase of the skewness by almost two orders of magnitude from the third to the seventh order is particularly informative. Hölder inequalities apply indeed to absolute moments, and odd-order moments (without absolute values as in (3)) might a priori even become smaller when their order increases. Our main motivation was precisely to find out whether odd-order moments were decreasing or increasing with the order. The answer to this question shows that hyperskewness is definitely not a “small parameter” to be used in perturbative schemes for the statistical properties of the inverse energy cascade. The closure predictions mentioned above, although derived by explicitly invoking small deviations from Gaussianity, turn out to be compatible with the numerical results. This indicates that the closure is likely to be more robust and “nonperturbative” than its derivation might suggest.
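The quoted numbers are mutually consistent, as a short computation shows (all inputs are taken from the text):

```python
from math import gamma, pi, sqrt

C   = 6.0                                           # Kolmogorov constant, Eq. (2)
CL2 = sqrt(3)*pi / (2**(5/3) * gamma(4/3)**2) * C   # longitudinal constant
print(CL2)                       # ~ 12.9
print(1.5 / CL2**1.5)            # 3rd-order skewness        ~ 0.03
print(130.0 / CL2**2.5)          # 5th-order hyper-skewness  ~ 0.22
print(14000.0 / CL2**3.5)        # 7th-order hyper-skewness  ~ 1.8
```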
Further striking evidence for the importance of the longitudinal pdf asymmetries is provided in Fig. 4.
We consider here the antisymmetric part of the pdf, $`𝒫\left(\delta v_L(r)\right)-𝒫\left(-\delta v_L(r)\right)`$ (shown in Fig. 5), and calculate “antisymmetric structure functions” such as $`S_{-}^{(4)}=\int _0^{\infty }u^4\left(𝒫(u)-𝒫(-u)\right)du`$. Both the fourth and the sixth moment have a clean Kolmogorov scaling $`S_{-}^{(n)}(r)=C_{-}^{(n)}\left(ϵr\right)^{n/3}`$, with $`C_{-}^{(4)}/(C_L^{(2)})^2\simeq 0.08`$ and $`C_{-}^{(6)}/(C_L^{(2)})^3\simeq 0.6`$. This indicates that the non-Gaussian antisymmetric part, although visually small, has imprinted all the relevant scaling information on the inverse cascade.
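In practice the antisymmetric moments can be estimated directly from samples of increments, since $`\int _0^{\infty }u^n\left(𝒫(u)-𝒫(-u)\right)du=\langle \mathrm{sign}(u)|u|^n\rangle `$; a minimal estimator, exercised here on a toy skewed sample rather than real data:

```python
import numpy as np

def S_minus(u, n):
    """Estimate S_-^(n) = <sign(u)|u|^n> from increments u at fixed separation r."""
    u = np.asarray(u)
    return np.mean(np.sign(u) * np.abs(u)**n)

rng = np.random.default_rng(0)
g = rng.standard_normal(10**6)
u = g + 0.02*(g**2 - 1.0)             # toy, weakly skewed sample (not real data)
print(S_minus(u, 4), S_minus(u, 6))   # both vanish for a symmetric pdf
```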
From the graph in the inset of Fig. 5 it can be appreciated that the antisymmetric part of the pdf has quite substantial tails, which are the cause of the large observed values of the hyperskewness of orders $`5`$ and $`7`$.
To investigate possible dependencies on the type of forcing we also performed (shorter) numerical simulations with an injection rate characterized by the spectral correlation function $`\langle f(𝒌,t)f(𝒌^{\prime },t^{\prime })\rangle =\delta (t-t^{\prime })\delta (𝒌+𝒌^{\prime })\delta (1-kl_f)`$. At variance with the former choice, this forcing is limited to a narrow bandwidth in Fourier space but its spatial correlations decay rather slowly. Odd-order structure functions and the antisymmetric part of the pdf do not show any visible dependence on the details of the energy input. Conversely, the symmetric part of the pdf of velocity differences and even order moments are more sensitive. For the forcing limited to a shell of wavenumbers, both the (symmetrized) longitudinal and the transverse pdfs are visually indistinguishable from Gaussian, as shown in Fig. 6.
Deviations of kurtosis and hyperkurtosis from their Gaussian values are small and compatible with those presented in earlier studies. For the forcing localized in physical space, the far tails of the pdf at scales $`O(l_f)`$ tend to be broader. This tendency is due to the formation of small vortices of size comparable to $`l_f`$, which generate large velocity differences – especially transverse ones – across a distance of the order of their size. The effect becomes of course negligible at scales larger than $`l_f`$ but it might affect the quality and the extent of the scaling region for even-order structure functions.
In conclusion, we have presented quantitative evidences for deviations from Gaussianity of the velocity increment statistics in the inverse energy cascade. Odd-order structure functions display a clean power law scaling compatible with classical Kolmogorov predictions. Numerical prefactors in adimensionalized structure functions are expected to be universal with respect to the forcing statistics and have been measured up to the seventh order. Despite the small value of the skewness, deviations have been shown to be quite strong and the hyperskewness of seventh order to be of order unity. Asymmetries in longitudinal velocity statistics should therefore be incorporated and treated systematically in theoretical models for the inverse energy cascade.
Acknowledgements. We are grateful to A. Babiano, G. Falkovich, K. Gawȩdzki, A. Mazzino, A. Pouquet, P. Tabeling and V. Yakhot for useful discussions. Support from the ESF-TAO programme (AC), from the network ”Intermittency in Turbulent systems” under contract FMRX-CT98-0175 and from INFM ”PRA TURBO” (AC and GB), is gratefully acknowledged. Numerical simulations were performed at IDRIS, under the contract No. 991226, and partially at CINECA within the project ”Lagrangian and Eulerian statistics in fully developed turbulence”.
# Electronic Spectra and Their Relation to the $`(\pi ,\pi )`$ Collective Mode in High-$`T_c`$ Superconductors
## Abstract
Photoemission spectra of $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ reveal that the high energy feature near $`(\pi ,0)`$, the “hump”, scales with the superconducting gap and persists above $`T_c`$ in the pseudogap phase. As the doping decreases, the dispersion of the hump increasingly reflects the wavevector $`(\pi ,\pi )`$ characteristic of the undoped insulator, despite the presence of a large Fermi surface. This can be understood from the interaction of the electrons with a collective mode, supported by our observation that the doping dependence of the resonance observed by neutron scattering is the same as that inferred from our data.
In the high temperature copper oxide superconductors, a small change in doping takes the material from an antiferromagnetic insulator to a d-wave superconductor. This raises the fundamental question of the relation of the electronic structure of the doped superconductor to that of the parent insulator . Here we examine this by using angle resolved photoemission spectroscopy (ARPES). We find that the spectral lineshape and its dispersion evolves as a function of doping from one which resembles a strong coupling effect of superconductivity in the overdoped limit to one which resembles the insulator in the underdoped limit. The connection between these two limits can be understood in terms of a collective mode which has the same $`(\pi ,\pi )`$ wavevector characteristic of the magnetic insulator, and whose energy decreases as the doping is reduced. This is supported by our observation that the mode energy inferred from ARPES as a function of doping correlates strongly with that obtained directly from neutron scattering data , and points to the intimate relation of magnetic correlations to high $`T_c`$ superconductivity.
The experiments were carried out using procedures and samples described previously , as well as films grown by RF magnetron sputtering . The doping level was controlled by varying oxygen stoichiometry, with samples labeled by their onset $`T_c`$. Spectra were obtained with a photon energy of 22 eV and a photon polarization directed along the CuO bond direction. Spectra had energy resolutions (FWHM) of 17, 26, or 34 meV with a momentum window of radius 0.045$`\pi `$/a. Energies are measured with respect to the chemical potential, determined using a polycrystalline Pt or Au reference in electrical contact with the sample.
We begin with the $`T`$ evolution of the spectra of $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ (Bi2212) near the $`(\pi ,0)`$ point of the Brillouin zone (inset of Fig. 1a). In the underdoped region of the phase diagram (that lies between the undoped insulator and optimal doping, corresponding to the highest $`T_c`$), one observes a pseudogap ($`30-50`$ meV) which is very likely associated with pairing above $`T_c`$, a precursor to superconductivity . For temperatures above the pseudogap temperature scale $`T^{*}`$ we see a broad peak which is chopped off by the Fermi function, as shown in Fig. 1a for an underdoped 83K sample at 200 K. In this respect, the one-particle spectral function of the underdoped compounds, which is completely incoherent, is similar to that observed in the overdoped compounds. While there is only weak dispersion from $`(\pi ,0)`$ to $`(\pi ,\pi )`$ for $`T>T^{*}`$, there is definite loss of integrated spectral weight and one can identify the $`(\pi ,0)-(\pi ,\pi )`$ “Fermi surface” crossing .
As the temperature is reduced below $`T^{*}`$, but still above $`T_c`$, we see that the spectral function remains completely incoherent, as shown in Fig. 1b for an underdoped 89K sample. The leading edge pseudogap which develops below $`T^{*}`$ is difficult to see on the energy scale of Fig. 1b (the midpoint shift at 135K is 3 meV). However, a higher energy feature (the “high energy pseudogap”) can easily be identified by a change in slope of the spectra as a function of energy (see Fig. 2). On further reduction of the temperature below $`T_c`$, a coherent quasiparticle peak begins to grow at the position of the leading edge gap, accompanied by a redistribution of the incoherent spectral weight leading to a dip and hump structure . The peak-dip-hump lineshape and the dispersion of these features will play a central role in our discussion.
The high energy pseudogap feature is closely related to the hump below $`T_c`$, as seen from a comparison of their dispersions. We show data along $`(\pi ,0)-(\pi ,\pi )`$ for an underdoped 75K sample in the superconducting state (Fig. 2a) and in the pseudogap regime (Fig. 2b). Below $`T_c`$, the sharp peak at low energy is essentially dispersionless, while the higher energy hump rapidly disperses from the $`(\pi ,0)`$ point towards the $`(\pi ,0)-(\pi ,\pi )`$ Fermi crossing seen above $`T^{*}`$. Beyond this, the intensity drops dramatically, but there is clear evidence that the hump disperses back to higher energy. In the pseudogap state, the high energy feature also shows strong dispersion , much like the hump below $`T_c`$, even though the leading edge is non-dispersive like the sharp peak in the superconducting state.
In Fig. 3 we show the dispersion of the sharp peak and hump (below $`T_c`$), for a variety of doping levels, in the vicinity of the $`(\pi ,0)`$ point along the two principal axes. The sharp peak at low energies is seen to be essentially non-dispersive along both directions for all doping levels, while the hump shows very interesting dispersion. Along $`(\pi ,0)-(0,0)`$ (Fig. 3a), the hump exhibits a maximum, with an eventual dispersion away from the Fermi energy, becoming rapidly equivalent to the binding energy of the broad peak in the normal state as one moves away from the region near $`(\pi ,0)`$. In the orthogonal direction (Fig. 3b), since the hump initially disperses towards the $`(\pi ,0)-(\pi ,\pi )`$ Fermi crossing, which is known to be a weak function of doping, one obtains the rather dramatic effect that the dispersion becomes stronger with underdoping. We also note that there is an energy separation between the peak and the hump due to the spectral dip. In essence, the hump disperses towards the spectral dip, but cannot cross it, with its weight dropping strongly as the dip energy is approached. Beyond this point, one sees evidence of the dispersion bending back to higher binding energy for more underdoped samples.
Fig. 4a shows the evolution of the low temperature spectra at the $`(\pi ,0)`$ point as a function of doping. The sharp quasiparticle peak moves to higher energy, indicating that the gap increases with underdoping (although this is difficult to see on the scale of Fig. 4a). We see that the hump moves rapidly to higher energy with underdoping . These trends can be seen very clearly in Fig. 4b, where the energy of the peak and hump are shown as a function of doping for a large number of samples. Finally, we observe that the quasiparticle peak loses spectral weight with increasing underdoping, as expected for a doped Mott insulator; in addition the hump also loses spectral weight though less rapidly.
The hump below $`T_c`$ is clearly related to the superconducting gap, given the weak doping dependence of the ratio between the hump and quasiparticle peak positions at $`(\pi ,0)`$, shown in Fig. 4c. Tunneling data find this same correlation on a wide variety of high-$`T_c`$ materials whose energy gaps vary by a factor of 30 . We have additional strong evidence that the peak and hump do not arise from two different “bands”.
Thus, the peak, dip and hump are features of a single spectral function, and imply a strong frequency dependence of the superconducting state self-energy (a “strong-coupling effect”). The hump represents the energy scale at which the spectral function below $`T_c`$ matches onto that in the normal state (as evident from the data in the bottom curve of Fig. 1b). However, the existence of the dip requires additional structure in the self-energy. We had suggested that this structure can be naturally understood in terms of electrons interacting with a sharp collective mode below $`T_c`$, which also leads to an explanation of the non-trivial dispersion, as discussed below. It was speculated that the mode was the same as that observed directly by neutron scattering in $`YBa_2Cu_3O_7`$ , and more recently in Bi2212 .
To motivate the analysis below that firmly establishes the mode interpretation of ARPES spectra and its connection with neutron data, we need to recall that the spectral dip represents a pairing induced gap in the incoherent part of the spectral function at $`(\pi ,0)`$ occurring at an energy $`\mathrm{\Delta }+\mathrm{\Omega }_0`$, where $`\mathrm{\Delta }`$ is the superconducting gap and $`\mathrm{\Omega }_0`$ is the mode energy. We can estimate the mode energy from ARPES data from the energy difference between the dip ($`\mathrm{\Delta }+\mathrm{\Omega }_0`$) and the quasiparticle peak ($`\mathrm{\Delta }`$).
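As a concrete illustration of this estimate, a minimal sketch is given below; the variable names and search window are assumptions for illustration only, and real EDCs would require smoothing and background treatment before such an extraction.

```python
import numpy as np

def mode_energy(E, I, window=50):
    """E: binding energies (meV, increasing); I: EDC intensity at (pi,0) below T_c.
    Returns Omega_0 = (dip energy) - (peak energy) = (Delta + Omega_0) - Delta."""
    i_peak = np.argmax(I)                                  # sharp quasiparticle peak
    i_dip = i_peak + np.argmin(I[i_peak:i_peak + window])  # dip between peak and hump
    return E[i_dip] - E[i_peak]
```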
In Fig. 5b we plot the mode energy as estimated from ARPES for various doping levels as a function of $`T_c`$ and compare it with neutron measurements. We find striking agreement both in terms of the energy scale and its doping dependence . We note that the mode energy inferred from ARPES decreases with underdoping, just like the neutron data, unlike the gap energy (Fig. 4b), which increases. This can be seen directly in the raw data, shown in Fig. 5a. Moreover, there is strong correlation between the temperature dependences in the ARPES and neutron data. While neutrons see a sharp mode only below $`T_c`$, a smeared out remnant persists up to $`T^{*}`$ . As the sharpness of the mode is responsible for the sharp spectral dip, one then sees the correlation with ARPES where the dip disappears above $`T_c`$, but with a remnant of the hump persisting to $`T^{*}`$.
An important feature of the neutron data is that the mode only exists in a narrow momentum range about $`(\pi ,\pi )`$, and is magnetic in origin. To see a further connection with ARPES, we return to the results of Fig. 3. Note the dispersions along the two orthogonal directions are similar (Fig. 3c), unlike the dispersion inferred in the normal state. As these two directions are related by a $`(\pi ,\pi )`$ translation ($`(x,0)\to (0,-x)`$; $`(0,-x)+(\pi ,\pi )=(\pi ,\pi -x)`$), we see that the hump dispersion is clearly reflecting the $`(\pi ,\pi )`$ nature of the collective mode. This dispersion is also consistent with a number of models in the literature which identify the high energy feature in the pseudogap regime as a remnant of the insulating magnet. We note, though, that the mode is due to quasiparticle pair creation and thus not just a continuation of the spin wave mode from the antiferromagnet .
This brings up a question that is at the heart of the high $`T_c`$ problem: how can a feature which can be understood as a strong coupling effect of superconductivity, as discussed above, turn out to have a dispersion that resembles that of a magnetic insulator? The reason is that the collective mode has the same wavevector, $`(\pi ,\pi )`$, which characterizes the magnetic order of the insulator. It is easy to demonstrate that in the limit that the mode energy goes to zero (long range order), one actually reproduces a symmetric dispersion similar to that in Fig. 3c, with the spectral gap determined by the strength of the mode. This is in accord with the increase in the hump energy with underdoping (Fig. 4b) tracking the rise in the neutron mode intensity. Since the hump scales with the superconducting gap, the obvious implication is that the mode is intimately connected with pairing, a conclusion which can also be made by relating the mode to the superconducting condensation energy . That is, high $`T_c`$ superconductivity is likely due to the same magnetic correlations which characterize the insulator and give rise to the mode.
This work was supported by the National Science Foundation DMR 9624048, and DMR 91-20000 through the Science and Technology Center for Superconductivity, the U. S. Dept. of Energy, Basic Energy Sciences, under contract W-31-109-ENG-38, the CREST of JST, and the Ministry of Education, Science, and Culture of Japan. The Synchrotron Radiation Center is supported by NSF DMR 9212658. JM is supported by the Swiss National Science Foundation, and MR by the Swarnajayanti fellowship of the Indian DST.
# Supersymmetric hadronic bound state detection at $`e^+e^{-}`$ colliders

PACS: 13.60.Le; 14.80.-j; 14.80.Ly
## 1 Introduction
Within the Standard Model, bound-state formation has been verified for every quark but the top (see for instance and references therein). The latter possibility is ruled out by the high value of the top quark mass, which is responsible for its short lifetime. The natural step forward is to consider the possibility of bound-state creation beyond the Standard Model. Here we focus our attention on the supersymmetric extensions of the Standard Model , in particular on the resonant production and detection of a bound state (supermeson) created from a stop and an anti–stop (“stoponium”) at $`e^+e^{-}`$ colliders.
## 2 Bound States
In this section we review bound-state creation. For the SUSY case, our assumption is that bound-state creation does not differ from the SM case, as the relevant interaction is again driven by QCD and is regulated by the mass of the constituent (s)quarks.
A formation criterion states that the formation of a hadron can occur only if the level splitting between the lowest-lying levels of the bound state, which depends upon the strength of the strong force between the (s)quarks and their relative distance , is larger than the natural width of the state. That is, if
$$\mathrm{\Delta }E_{2S-1S}>\mathrm{\Gamma }$$
(1)
where $`\mathrm{\Delta }E_{2S-1S}=E_{2S}-E_{1S}`$ and $`\mathrm{\Gamma }`$ is the width of the would–be bound state, then the bound state exists.
For the case of a scalar bound state $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ , without reference to a particular supersymmetric model, we consider the Coulombic two–body interaction
$$V(r)=-\frac{4}{3}\frac{\alpha _s}{r}$$
(2)
with the two–loop expression for $`\alpha _s`$
$$\alpha _s(Q^2)=\frac{4\pi }{\beta _0\mathrm{log}\left[Q^2/\mathrm{\Lambda }_{\overline{MS}}^2\right]}\left\{1-\frac{2\beta _1}{\beta _0^2}\frac{\mathrm{log}\left[\mathrm{log}\left[Q^2/\mathrm{\Lambda }_{\overline{MS}}^2\right]\right]}{\mathrm{log}\left[Q^2/\mathrm{\Lambda }_{\overline{MS}}^2\right]}\right\}$$
(3)
with $`\beta _0=11-\frac{2}{3}n_f,\beta _1=51-\frac{19}{3}n_f`$ . Due to the present limits on the stop mass we could either assume that the stop is lighter than the top quark, that is $`n_f=5`$, or heavier, i.e. $`n_f=6`$. The $`\alpha _s`$ expression (3) has to be evaluated at a fixed scale $`Q^2=1/r_B^2`$ , where $`r_B`$ is the Bohr radius
$$r_B=\frac{3}{4\mu \alpha _s}$$
(4)
and $`\mu `$ is the reduced mass of the system. It has been shown in that in the case of high quark mass values, the predictions of the Coulombic potential evaluated at this scale do not differ from the other potential model predictions.
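As a minimal numerical sketch of this prescription, the following iterates Eqs. (3) and (4) to self-consistency, since $`\alpha _s`$ is evaluated at $`Q^2=1/r_B^2`$ while $`r_B`$ itself depends on $`\alpha _s`$. The inputs ($`\mathrm{\Lambda }_{\overline{MS}}`$, the stop mass, the starting guess) are illustrative assumptions, not values taken from this work.

```python
import math

def alpha_s(Q2, n_f=5, Lambda=0.2):            # Lambda_MSbar in GeV (assumed)
    b0 = 11.0 - 2.0 * n_f / 3.0
    b1 = 51.0 - 19.0 * n_f / 3.0
    L = math.log(Q2 / Lambda**2)
    return 4.0 * math.pi / (b0 * L) * (1.0 - 2.0 * b1 / b0**2 * math.log(L) / L)

def stoponium_bohr_radius(m_stop, n_f=5):
    mu = m_stop / 2.0                          # reduced mass of the pair (GeV)
    a = 0.2                                    # starting guess for alpha_s
    for _ in range(50):                        # fixed-point iteration
        r_B = 3.0 / (4.0 * mu * a)             # Eq. (4), natural units (GeV^-1)
        a = alpha_s(1.0 / r_B**2, n_f)
    return r_B, a

print(stoponium_bohr_radius(150.0))            # e.g. a 150 GeV stop (assumed)
```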
In figures 1 and 2 we show the energy splitting of the first two levels of the stoponium bound state as a function of the stop mass, for the LHC and the NLC case respectively. Following (1), these values have to be compared to the width of the stoponium. The width of the stoponium, $`\mathrm{\Gamma }_{\stackrel{~}{t}\overline{\stackrel{~}{t}}}`$ , is twice the width of a single stop squark, since each squark decays independently of the other.
There are several ways a stop can decay , depending on the assumptions made for the other superpartners. For very low values of the stop quark mass, the width will not exceed a few $`keV`$, much smaller than the energy splitting of the first two levels of the stoponium. As the mass increases, more decay modes open up and the width increases. In particular, in the regime where $`m_W+m_{\stackrel{~}{\chi }^0}+m_b<m_{\stackrel{~}{t}}<m_{\stackrel{~}{\chi }^+}+m_b`$ the three body decay $`\stackrel{~}{t}\to bW\stackrel{~}{\chi }^0`$ is kinematically allowed and is comparable to the flavour changing two body decay $`\stackrel{~}{t}\to c\stackrel{~}{\chi }^0`$ . Here $`\stackrel{~}{\chi }^0`$ refers to the lightest supersymmetric particle (LSP); $`\stackrel{~}{\chi }^+`$ is the lightest chargino. Even in this case those widths do not exceed values in the $`keV`$ range. In this scenario we see, as before, that the energy splitting is much larger than the decay width of the bound state, so hadronization is possible.
For even higher stop masses the picture changes, as more two body decays like $`\stackrel{~}{t}\to b\stackrel{~}{\chi }^+`$ and $`\stackrel{~}{t}\to t\stackrel{~}{\chi }^0`$ become available. For these values of the stop mass there are regions of the parameter space where the decay widths, even if lowered by the one–loop corrections , could overtake the energy level splitting, thus jeopardizing the formation of the supersymmetric bound state. For instance, in the region where $`\mu \ll M_2`$ the decay width would be larger than $`\mathrm{\Delta }E_{2S-1S}`$ for stop masses of about 200 $`GeV`$, spoiling hadronization for $`m_{\stackrel{~}{t}}`$ beyond this range (here $`\mu `$ is the Higgs–higgsino mass parameter, while $`M_2`$ is the wino mass parameter). On the contrary, for parameter values where $`\mu \gg M_2`$ , the decay widths of those modes are substantially lower. This would allow stoponium formation for stop mass values in the energy range of the future NLC collider. The region where $`\mu \sim M_2`$ is in a situation intermediate between the two described above.
A quantitative description of stoponium formation can be seen in figures (3) and (4), where we report the regions of the $`\mu `$–$`M_2`$ plane, for two values of $`\mathrm{tan}\beta `$, in which stoponium cannot be formed, as a function of the stop mass.
Regarding the hadronization problem we see that there are many possibilities due to the vast parameter space. For stop mass values under about 100–200 $`GeV`$ and $`\mathrm{tan}\beta =1.5`$ there is a window of opportunity for stoponium formation regardless of the parameter values; beyond that range the stoponium formation would either be allowed or forbidden depending upon the choice of the parameters.
## 3 Cross Section and Decay Width
The next natural step is to see whether the stoponium could be detected at an $`e^+e^{-}`$ collider with LEP or future NLC characteristics. For this purpose we calculate its cross section and decay modes, basing our predictions on and updating those results.
We look for the production and decay of the $`P`$ wave state, since we are interested in the search for the bound state at an $`e^+e^{-}`$ collider, which selects this state by quantum number conservation.
We use the Breit–Wigner formula to evaluate the total cross section :
$$\sigma =\frac{3\pi }{M^2}\times \frac{\mathrm{\Gamma }_e\mathrm{\Gamma }_{tot}}{(E-M)^2+\mathrm{\Gamma }_{tot}^2/4}$$
(5)
where $`M`$ is the mass of the resonance, $`E`$ is the centre–of–mass energy, $`\mathrm{\Gamma }_{tot}`$ is the total width, and $`\mathrm{\Gamma }_e`$ is the decay width to electrons.
The first decay we will investigate is the leptonic one, which is given by the Van Royen–Weisskopf formula
$$\mathrm{\Gamma }(2P\to e^+e^{-})=24\alpha ^2Q^2\frac{|R^{\prime }(0)|^2}{M^4}$$
(6)
$`R^{\prime }(0)`$ is the derivative of the radial wavefunction calculated at the origin, $`M`$ the mass of the bound state, $`\alpha `$ the QED constant, $`Q`$ the (s)quark charge. In this case we are neglecting the stop coupling to the $`Z`$ boson, which allows all the dependence on the MSSM parameters to be hidden in the total width in the cross–section formula (5).
For this and the following cases, we shall make use of the radial wavefunctions of the Coulombic model presented above. These are, for the $`1S`$ state
$$R_{1S}(r)=\left(\frac{2}{r_B}\right)^{3/2}\mathrm{exp}\left(-\frac{r}{r_B}\right)$$
(7)
and for the $`2P`$
$$R_{2P}(r)=\frac{1}{\sqrt{3}}\left(\frac{1}{2r_B}\right)^{3/2}\frac{r}{r_B}\mathrm{exp}\left(-\frac{r}{2r_B}\right)$$
(8)
$`r_B`$ is the Bohr radius defined in (4) .
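A minimal sketch tying Eqs. (6) and (8) together is given below: the derivative of the $`2P`$ wavefunction at the origin feeds the leptonic width. The numerical inputs are again illustrative assumptions; $`r_B`$ would come from the self-consistent iteration sketched earlier, and the bound-state mass is taken as twice the stop mass.

```python
import math

ALPHA_QED = 1.0 / 137.0
Q_STOP = 2.0 / 3.0                       # electric charge of the stop

def R_prime_2P_at_origin(r_B):
    # derivative of Eq. (8) at r = 0: only the term linear in r survives
    return (1.0 / math.sqrt(3.0)) * (2.0 * r_B) ** (-1.5) / r_B

def width_2P_to_ee(M, r_B):
    # Eq. (6), natural units: width in GeV for r_B in GeV^-1, M in GeV
    return 24.0 * ALPHA_QED**2 * Q_STOP**2 * R_prime_2P_at_origin(r_B) ** 2 / M**4

m_stop = 150.0                           # GeV, illustrative
r_B = 0.07                               # GeV^-1, from the previous sketch
print(width_2P_to_ee(2.0 * m_stop, r_B))
```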
For the hadronic decay width we have the following expression
$$\mathrm{\Gamma }(2P\to 3g)=\frac{64}{9}\alpha _s^3\frac{|R^{\prime }(0)|^2}{M^4}\mathrm{log}(m_{\stackrel{~}{t}}r_B)$$
(9)
where the Bohr radius acts as an infrared cutoff .
The $`2P`$ state can also decay into a $`1S`$ state and emit a photon. The decay width in this case is given by
$$\mathrm{\Gamma }(2P\to 1S+\gamma )=\frac{4}{9}\alpha Q^2(\mathrm{\Delta }E_{2S-1S})^3|D_{2,1}|^2$$
(10)
where $`\mathrm{\Delta }E_{2S-1S}`$ is the energy of the emitted photon, and $`D_{2,1}=\langle 2P|r|1S\rangle `$ is the dipole moment . In figures 5 and 6 we present the decays of the $`2P`$ state into hadrons and into a $`1S`$ state plus a photon as a function of the stop mass, as predicted by the Coulombic model. In this case the behaviour of the hadronic decay width with respect to the stop mass is $`\mathrm{\Gamma }(2P\to 3g)\sim m\alpha _s^8`$ , while the radiative decay width goes like $`\mathrm{\Gamma }(2P\to 1S+\gamma )\sim m^2\alpha _s^5`$ . In the former case the linear growth with $`m`$ is suppressed by the high power of $`\alpha _s`$ , resulting in an essentially constant width for the stop mass range of our interest. The $`3g`$ width will eventually grow faster for stop mass values larger than about 1 $`TeV`$. The behaviour of the $`2P\to 1S+\gamma `$ decay is more straightforward, since it grows faster with $`m`$ and contains a lower power of $`\alpha _s`$ . It is also apparent that, of the two, the $`2P\to 1S+\gamma `$ decay width dominates for increasing values of the stop mass, as is clearly seen in figure 5 and particularly in 6. A small threshold effect due to the inclusion of the top flavour is also visible.
We must point out that this behaviour of decay widths of the stop bound state is given by the particular Coulombic potential model used in the computation. The results obtained however do not lose validity because, as it has been shown in , this Coulombic model does not differ significantly from other more popular potential models when the mass of the constituent (s)quarks gets larger. This fact could be intuitively understood by considering the Bohr radius of the bound state, which decreases like $`1/m`$: therefore the constituent (s)quarks “feel” more and more the Coulombic part of the potential which becomes dominant with respect to other components of the potential, like for instance the linear confining term that is added in the description of mesons containing lighter quarks.
For a light stop the annihilation modes analysed above are the dominant widths. As the stop mass increases, the single stop decay modes will dominate the total width because of the opening of the decay channels $`\stackrel{~}{t}\to t\stackrel{~}{\chi }^0`$ and $`\stackrel{~}{t}\to b\stackrel{~}{\chi }^+`$ .
Figure 7 shows the peak cross section obtained from (5) as a function of the stop mass for a 200 $`GeV`$ centre of mass energy (LEP2). The evaluation of the peak cross–section assumes that the annihilation modes are dominant. While the peak cross section is in the $`nb`$ range, the resonance is practically undetectable at the present collider because its width is much smaller than the typical beam energy spread (of the order of 200 $`MeV`$ at LEP2 ). The effect of a growth of the total width – due e.g. to the opening of other decay channels – does not change the result, as the net effect will be a decrease of the peak cross section. This is clearly illustrated in figure 8, where the Breit–Wigner formula (5) is folded with the typical beam energy spread of 200 $`MeV`$.
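The suppression by the beam spread can be made explicit with a minimal sketch: for $`\mathrm{\Gamma }_{tot}`$ far below the $`\sim 200`$ $`MeV`$ spread, the Breit–Wigner (5) acts as a delta function of area $`(3\pi /M^2)\times 2\pi \mathrm{\Gamma }_e`$, and the effective cross section is this area times the peak height of the Gaussian beam profile. The width values below are placeholders for illustration, not results of this paper.

```python
import numpy as np

def sigma_BW(E, M, Gamma_e, Gamma_tot):
    # Breit-Wigner cross section, Eq. (5), natural units
    return (3.0 * np.pi / M**2) * Gamma_e * Gamma_tot / ((E - M) ** 2 + Gamma_tot**2 / 4.0)

def sigma_folded_narrow(M, Gamma_e, spread=0.2):          # spread in GeV
    area = (3.0 * np.pi / M**2) * 2.0 * np.pi * Gamma_e   # integral of the BW over E
    return area / (np.sqrt(2.0 * np.pi) * spread)         # times Gaussian peak height

M, Gamma_e, Gamma_tot = 200.0, 1e-9, 1e-6                 # GeV, illustrative values
peak = sigma_BW(M, M, Gamma_e, Gamma_tot)                 # on-resonance peak value
print(peak, sigma_folded_narrow(M, Gamma_e) / peak)       # suppression ~ Gamma_tot/spread
```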
The possibility of stoponium production with radiative returns has also been considered; the resulting cross–sections are illustrated in figure (9). The cross–section is quite small, so there is no possibility of seeing any signal.
With the increase of the centre of mass energy (NLC case) the scenario changes: as more decay channels appear there are regions of the parameter space where the stoponium cannot be formed. The net result for the signal detection does not change, as can clearly be seen in figures (10) and (11), where we show the effective total cross–section and the radiative return cross–section for a centre of mass energy of 500 $`GeV`$.
## 4 Conclusions
We have shown that, because of the large binding energy and the narrow decay width, the formation of a $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ $`P`$ wave bound state is possible in certain regions of the parameter space, in particular for a light stop. However, our results show that this supersymmetric bound state cannot be detected at present or even future $`e^+e^{-}`$ colliders. The latter fact also proves that it gives a negligible contribution to the $`\stackrel{~}{t}\overline{\stackrel{~}{t}}`$ production cross section.
Acknowledgments
We thank G. Pancheri for useful discussions and suggestions. We would like to thank also G. Altarelli and V. Khoze for careful reading of the manuscript. One of us (N.F.) wishes to thank A. Masiero for useful discussions.
# Filler-Induced Composition Waves in Phase-Separating Polymer Blends
## I Introduction
The bulk properties of miscible fluid mixtures are characteristically insensitive to their microscopic fluid structure near the critical point for phase separation, where the properties are governed by large scale fluctuations in the local fluid composition. Composition fluctuations occur similarly for most near-critical fluid mixtures, so that the properties of these fluids are subject to a “universal” description. This accounts for the success of simple mathematical models of critical phenomena (e.g., Ising model, $`\varphi ^4`$-field theory) that contain the minimal physics of these fluctuation processes. Although mixtures near their critical point are susceptible to external perturbations, the influence of microscopic heterogeneities tends to become “washed out” in the large scale fluid properties, apart from changes in critical parameters describing the average properties of the fluid (e.g., critical temperature and composition, apparent critical exponents, etc.). This situation changes, however, when the fluid mixture enters the two-phase region. The fluid is then far from equilibrium and perturbations can grow to have a large-scale influence on the phase separation morphology. Perturbations in these unstable fluids can be amplified rather than washed out at larger length scales. Inevitably, the theoretical description of this kind of self-organization process is complicated by various non-universal phenomena associated with the details of the particular model or experiment under investigation . The beneficial aspect of this sensitivity of phase separation and other pattern formation processes to perturbations is that it offers substantial opportunities to control the morphology of the evolving patterns and leads to a great multiplicity of microstructures.
Many previous studies have considered the application of external influences (flow and gravitational fields, concentration and temperature gradients, chemical reactions , crosslinking , etc.) to perturb fluid phase separation, but the investigation of geometrical perturbations is more recent. There have been numerous studies on the perturbation of phase separation arising from the presence of a plane wall, which is one of the simplest examples of a geometrical perturbation of phase separation . Measurements and simulations both show the development of “surface-directed” composition waves away from plane boundaries under the condition where one component has an affinity for the surface. The scale of these coarsening surface waves grows much like those of bulk phase separation patterns . Recent simulations and experiments have shown that variation of the polymer-surface interaction within the plane of the film allows for the control of the local polymer composition in blends phase separating on these patterned substrates (“pattern-directed phase separation” ). Measurements have also indicated that the polymer-air boundary of phase-separating blend films on patterned substrates can be strongly perturbed by phase separation within the film and thermal fluctuations of the polymer-air boundary can also strongly influence the structure of thin polymer films .
In the present paper we focus on the consequences of having geometrical heterogeneities of finite extent in a phase separating blend. The Cahn-Hilliard-Cook (CHC) theory for phase separation is adapted to describe phase separation of a blend with spherical, cylindrical (fiber), and plate-like shaped filler particles. The extended dimensions of the fiber and platelet filler particles are taken to be much larger than the scale of the phase separation process. A variable polymer-surface interaction is incorporated into the filler model in a fashion similar to previous treatments of plane surfaces .
The paper is organized as follows. In section II we briefly summarize the CHC model to introduce notation, to define the relation between model parameters and those of polymer blends theory, and to explain modifications required for incorporating immobile filler particles into the CHC simulations of phase separation. Section III summarizes the results of simulations for representative situations. Key phenomena are identified: 1) Target composition patterns form in near critical composition blends. 2) Target patterns are a transient phenomenon. 3) The scale of the target patterns depends on quench depth and molecular weight. 4) Qualitative changes in the filler-induced composition patterns occur when the surface interaction is neutral and when the blend composition is off-critical. 5) Multiple filler particles induce composition waves exhibiting complex interference patterns. Section IV provides a simple analytic estimate of the scale of the target pattern based on the linearized CHC theory, and these results are tested against simulations for circular filler particles. In section V simulation results are compared to experiments on ultrathin polymer films having silica bead filler particles immobilized by the solid substrate on which the films were cast. Atomic force microscopy measurements on the filled blend films are compared to the analytical predictions. Simulations of phase separation in off-critical blend films are briefly compared to analogous experiments. Measurements on crosslinked blend films are also considered. The final section (VI) discusses generalizations of the present study to manipulate the structure of phase-separating blends.
## II The Model
### A CHC Equation
We present a brief discussion of the CHC model to introduce notation and to explain the modifications required for incorporating filler particles. The modeling of the phase separation dynamics is based on gradient flow of a conserved order parameter $`\varphi (𝐫,t)`$,
$$\frac{\partial }{\partial t}\varphi (𝐫,t)=M\nabla ^2\frac{\delta F[\varphi ]}{\delta \varphi (𝐫)}+\zeta (𝐫,t),$$
(1)
with $`\varphi (𝐫,t)`$ equal to the local volume fraction of one of the blend components. Incompressibility of the mixture is assumed so that the local volume fraction of the second component is $`1\varphi `$. The mobility $`M`$ is assumed to be spatially uniform and independent of concentration and the free energy functional $`F[\varphi ]`$ has the general form
$$F[\varphi (𝐫)]=\int \frac{d𝐫}{v}\left[\frac{1}{2}k_BT\kappa (\varphi )(\nabla \varphi )^2+f(\varphi )-\mu _{eq}\varphi \right],$$
(2)
where $`f(\varphi )`$ is the bulk Helmholtz free energy per lattice site, $`v`$ is the volume per lattice site, $`k_B`$ is Boltzmann’s constant, $`T`$ is temperature, and $`\kappa (\varphi )`$ is a measure of the energy required to create a gradient in concentration. Higher order gradient terms are neglected. The chemical potential is given by $`\mu _{eq}=\partial f/\partial \varphi |_{\varphi _{eq}}`$, which ensures that $`\varphi (𝐫)=\varphi _{eq}`$ is the solution of $`\delta F[\varphi ]/\delta \varphi (𝐫)=0`$. Finally, thermal fluctuations necessary to ensure a Boltzmann distribution of $`\varphi (𝐫)`$ in equilibrium are included via the Gaussian random variable $`\zeta `$. The average of $`\zeta `$ vanishes, and $`\zeta `$ obeys the relation
$$\langle \zeta (𝐫,t)\zeta (𝐫^{\prime },t^{\prime })\rangle =-2Mk_BT\nabla ^2\delta (𝐫-𝐫^{\prime })\delta (t-t^{\prime }).$$
(3)
For temperatures near the critical temperature $`T_c`$ the free energy can be expanded in powers of the composition fluctuation $`\psi (𝐫)=\varphi (𝐫)-\varphi _c`$, giving the Ginzburg-Landau (GL) functional
$$F[\psi (𝐫)]=k_BT\int \frac{d𝐫}{v}[\frac{1}{2}\kappa _c(\nabla \psi )^2+\frac{1}{2}c\psi ^2+\frac{1}{4}u\psi ^4+\mathrm{\cdots }]$$
(4)
where $`\kappa _c\equiv \kappa (\varphi _c)`$. Neglected terms are either higher order in $`\psi `$ or in $`\sqrt{c}\propto \sqrt{T-T_c}`$, which is small near the critical point. Eq. (1), in combination with (4), defines the well-known Model B ,
$$\frac{\partial }{\partial t}\psi (𝐫,t)=-M\left(\frac{k_BT}{v}\right)\nabla ^2(\kappa _c\nabla ^2\psi -c\psi -u\psi ^3)+\zeta (𝐫,t),$$
(5)
Eq. (5) is used to study the dynamics following a quench to the two-phase region $`T<T_c`$, where $`c<0`$. In that case, the CHC equation may be rescaled into the dimensionless form
$$\frac{\partial }{\partial t}\psi (𝐫,t)=-\nabla ^2(\nabla ^2\psi +\psi -\psi ^3)+ϵ^{1/2}\eta (𝐫,t)$$
(6)
by making the substitutions
$`𝐫`$ $`\to `$ $`(|c|/\kappa _c)^{1/2}𝐫`$ (7)
$`t`$ $`\to `$ $`(Mk_BTc^2/v\kappa _c)t`$ (8)
$`\psi `$ $`\to `$ $`(u/|c|)^{1/2}\psi `$ (9)
Note that this amounts to rescaling space by $`\sqrt{2}\xi _{-}`$, where $`\xi _{-}`$ is the thermal correlation length in the two-phase region, and time by $`\tau =2\xi _{-}^2/D_{coll}`$, with $`D_{coll}`$ the collective diffusion coefficient. Here the noise term $`\eta (𝐫,t)`$ satisfies $`\langle \eta (𝐫,t)\rangle =0`$ and
$$\langle \eta (𝐫,t)\eta (𝐫^{\prime },t^{\prime })\rangle =-\nabla ^2\delta (𝐫-𝐫^{\prime })\delta (t-t^{\prime }).$$
(10)
The only parameters left to specify the dynamics are the (conserved) average concentration $`\psi _0\equiv \langle \varphi \rangle -\varphi _c`$ and the dimensionless noise strength parameter $`ϵ`$,
$$ϵ=2u/(\kappa _c^{d/2}|c|^{(4-d)/2}).$$
(11)
Roughly speaking, the reciprocal of $`ϵ`$ is a measure of quench depth. The parameter $`ϵ`$ also arises in discussions of the width of the critical region, and the connection between thermal noise strength and the Ginzburg criterion was first noted by Binder .
### B Polymer Blends
For polymer blends we take the Flory-Huggins (FH) form of the Helmholtz free energy per lattice site,
$$\frac{f^{FH}(\varphi )}{k_BT}=\frac{\varphi }{N_A}\mathrm{ln}\left(\frac{\varphi }{N_A}\right)+\frac{1-\varphi }{N_B}\mathrm{ln}\left(\frac{1-\varphi }{N_B}\right)+\chi \varphi (1-\varphi ).$$
(12)
Here $`\chi `$ represents the monomer-monomer interaction energy, $`N_i`$ is the polymerization index of component $`i`$, and $`\varphi `$ is the volume fraction of component A. For the coefficient of the gradient term we use de Gennes’ random-phase approximation (RPA) result (neglecting the enthalpic contribution ),
$$\kappa (\varphi )=\frac{1}{18}\left[\frac{\sigma _A^2}{\varphi }+\frac{\sigma _B^2}{1-\varphi }\right],$$
(13)
where $`\sigma _A`$ and $`\sigma _B`$ are monomer sizes of the $`A`$ and $`B`$ blend components, given in terms of the radius of gyration of the $`i`$th component $`R_{g,i}`$ by $`\sigma _i^2=R_{g,i}^2/N_i`$.
FH theory exhibits a critical point at
$`\varphi _c`$ $`=`$ $`N_B^{1/2}/(N_A^{1/2}+N_B^{1/2})`$ (14)
$`\chi _c`$ $`\equiv `$ $`\chi (\varphi _c,T_c)=\left[N_A^{1/2}+N_B^{1/2}\right]^2/(2N_AN_B).`$ (15)
Consequently, the coefficients of the GL functional are defined as
$`c`$ $`=`$ $`2\chi _c(1-\chi /\chi _c)`$ (16)
$`u`$ $`=`$ $`{\displaystyle \frac{4}{3}}\chi _c^2\sqrt{N_AN_B}`$ (17)
$`\kappa _c`$ $`=`$ $`{\displaystyle \frac{1}{18}}\left[\sigma _A^2\left(1+\sqrt{N_A/N_B}\right)+\sigma _B^2\left(1+\sqrt{N_B/N_A}\right)\right].`$ (18)
The phase separation dynamics of polymer blends can then be described by the dimensionless CHC equation with $`ϵ`$ determined by molecular parameters,
$$ϵ=\frac{36\sqrt{2}f(x)}{\gamma ^3(\chi /\chi _c-1)^{1/2}N^{1/2}},$$
(19)
where $`N_A=N`$ and $`x=N_B/N`$, $`f(x)=(1+\sqrt{x})^3/8x`$, and $`\gamma `$ is the ratio of monomer size to lattice size,
$$\gamma =\frac{\left[\sigma _A^2(1+1/\sqrt{x})+\sigma _B^2(1+\sqrt{x})\right]^{1/2}}{2v^{1/3}}.$$
(20)
$`\gamma `$ simplifies to $`\gamma =\sigma _{A,B}/v^{1/3}`$ for symmetric blends. Thus we see that deep quenches ($`\chi \gg \chi _c`$) and high molecular weight polymers effectively reduce the thermal fluctuations in the rescaled dynamical equations.
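As a practical illustration, the following minimal sketch evaluates Eq. (19) for given molecular parameters, which is convenient for placing a particular blend on the $`ϵ`$ scale used in the simulations below; the example numbers are assumptions, not experimental values.

```python
import math

def epsilon_blend(N, x=1.0, chi_ratio=1.1, gamma=1.0):
    """N = N_A, x = N_B/N_A, chi_ratio = chi/chi_c (> 1 in the two-phase region),
    gamma = monomer-to-lattice size ratio of Eq. (20)."""
    f = (1.0 + math.sqrt(x)) ** 3 / (8.0 * x)      # f(x) as defined after Eq. (19)
    return 36.0 * math.sqrt(2.0) * f / (gamma**3 * math.sqrt(chi_ratio - 1.0) * math.sqrt(N))

# e.g. a symmetric blend, N = 1000, quenched 10% past the critical chi:
print(epsilon_blend(N=1000, x=1.0, chi_ratio=1.1, gamma=1.0))
```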
### C Surface Energetics
In the presence of a surface we add a local surface interaction energy to be integrated over the boundary,
$$F_s[\psi ]=\int _Sd^{d-1}x[-h\psi +\frac{1}{2}g\psi ^2+\mathrm{\cdots }].$$
(21)
The coupling constant $`h`$ in the leading term plays the role of a surface field which breaks the symmetry between the two phases, i.e., attracts one of the components to the filler surface. The coupling constant $`g`$ in the second term is neutral regarding the phases, and results from the modification of the interaction energy due to the missing neighbors near the surface and chain connectivity . For studies of surface critical phenomena and surface dynamics one typically keeps only these terms as a minimal model of phase separation with boundaries.
There has been considerable attention given to the subject of the appropriate dynamical equations for the surface boundary conditions . We follow most authors and impose zero flux at the boundary, $`\widehat{n}\cdot 𝐣_\psi =0`$, which gives
$$\widehat{n}\cdot \nabla (\nabla ^2\psi +\psi -\psi ^3)=0.$$
(22)
For the second condition we impose that of local equilibrium at the surface, namely,
$$\widehat{n}\cdot \nabla \psi =-h+g\psi .$$
(23)
While more sophisticated treatments are available , they lead to dynamics which rapidly relax to equilibrium and satisfy the above condition. The details of how we implement (22) and (23) in simulations with a curved interface are presented in Appendix A.
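For orientation, a minimal one-dimensional ghost-cell sketch of how conditions (22) and (23) can be imposed at a flat wall is given below; this is a generic discretization for illustration, not necessarily the curved-interface scheme of Appendix A.

```python
import numpy as np

def apply_wall_bcs(psi, mu, dx, h, g):
    """psi and mu include one ghost cell at index 0; the wall lies between
    cells 0 and 1, with the normal pointing into the fluid (increasing index)."""
    # local equilibrium, Eq. (23): n . grad(psi) = -h + g*psi at the surface
    psi[0] = psi[1] - dx * (-h + g * psi[1])
    # zero flux, Eq. (22): the normal derivative of the chemical potential
    # mu = -(laplacian(psi) + psi - psi**3) vanishes at the wall
    mu[0] = mu[1]
    return psi, mu
```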
### D Simulation Details
The equation of motion is solved using a standard central finite difference scheme for the spatial derivatives, and a first-order Euler integration of the time step . In all the simulations, the lattice spacing is taken between $`0.7`$ and $`1.0`$ in dimensionless units (sufficiently small compared to any relevant physical length scales) and the time step is taken sufficiently small to avoid numerical instability. In the present paper, all simulations are performed in $`d=2`$ on lattices up to size $`128^2`$, depending on the choice of mesh size. We note that the CHC equation is known to exhibit quantitatively similar pattern formation and coarsening kinetics in two and three dimensions. Further technical details about this type of simulation can be found in Ref. .
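A minimal sketch of the bulk update just described is given below: central differences plus a first-order Euler step of the dimensionless CHC equation (6), with the conserved noise written as the divergence of a Gaussian random flux. The parameter values are illustrative and no filler boundary is included here.

```python
import numpy as np

def laplacian(f, dx):
    # five-point stencil with periodic boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def chc_step(psi, dt, dx, eps, rng):
    # chemical potential mu = -(lap(psi) + psi - psi**3); Eq. (6) then reads
    # d(psi)/dt = lap(mu) + eps^(1/2) * eta, with conserved noise eta = div(theta)
    mu = -(laplacian(psi, dx) + psi - psi**3)
    amp = np.sqrt(eps / (dt * dx**2))          # discrete white-noise amplitude (2-d)
    tx = rng.normal(scale=amp, size=psi.shape)
    ty = rng.normal(scale=amp, size=psi.shape)
    eta = (tx - np.roll(tx, 1, 0) + ty - np.roll(ty, 1, 1)) / dx
    return psi + dt * (laplacian(mu, dx) + eta)

rng = np.random.default_rng(1)
psi = 0.01 * rng.normal(size=(128, 128))       # near-critical quench, psi_0 = 0
for _ in range(2000):
    psi = chc_step(psi, dt=0.01, dx=1.0, eps=1e-3, rng=rng)
```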
## III Illustrative Simulations
In Fig. 1 we show the influence of an isolated circular filler particle on the development of the phase separation pattern of a blend film having a critical composition. The thermal noise is small in these two-dimensional simulations ($`ϵ=10^{-5}`$), corresponding to high molecular weight and/or a deep quench (low and high temperatures relative to the critical temperature for upper and lower critical solution type phase diagrams, respectively). At an early stage of the phase separation process the filler particle creates a spherical composition wave disturbance that propagates a few “rings” into the phase separating medium in which the filler particle is embedded. The target rings initially have the size of the maximally unstable “spinodal wavelength” $`\lambda _0`$ obtained from the linearized theory . As the characteristic scale of the bulk phase separation pattern coarsens to the size of the filler particle, the outer rings of the “target” pattern become disconnected and increasingly become absorbed into the background spinodal pattern. The perturbing influence of the particle becomes weak at a late stage of the phase separation where the scale of the background phase separation pattern exceeds the filler particle size. The finite extent of the filler thus limits the development of the composition waves to a transient regime.
The formation of target patterns can be described as composition waves propagating into the bulk, unstable region until they are overwhelmed by the developing background spinodal decomposition pattern. The rate of onset of spinodal decomposition is controlled by the strength of thermal fluctuations, hence the noise parameter $`ϵ`$ plays a crucial role in determining the radial extent of the composition waves. Fig. 2 shows two systems with equal surface interaction, but with varying noise strengths: (a) $`ϵ=10^{-3}`$ and (b) $`ϵ=10^{-5}`$. We see that the spatial extent of the target pattern is larger for smaller $`ϵ`$. Consequently, we expect deeply quenched and/or high molecular weight polymer blends to be favorable systems for observing filler induced composition waves because of the relatively low thermal noise level typical of these systems, and the relatively high viscosity of these fluids which slows the dynamics and makes measurements of intermediate stage patterns possible (e.g. via atomic force microscopy as discussed in section V). In section IV, we use the linearized CHC equation to estimate the extent of the composition wave.
We observe that the composition waves disappear when the particle radius $`R`$ becomes vanishingly small, and the persistent waves developing from planar surfaces are recovered for very large spherical particles. We then examine particles of sizes intermediate between these extreme limits and during the intermediate phase separation period in which the target patterns are well developed. Fig. 3 shows the angular-averaged composition profile $`\psi (z)`$, with $`z\equiv r-R`$ the radial distance from the surface of the filler particle. We observe that the amplitude of the local composition fluctuations becomes more developed and more sharply defined with increasing filler size, $`R`$. In comparison to the planar surface ($`R\to \mathrm{\infty }`$), the composition wave profile for $`R=10`$ is only slightly reduced in amplitude, whereas for $`R=3`$ the amplitude is reduced to half that of the wall case. We also remark that the radial extent of the target pattern is similar for all particle sizes.
We next examine the influence of the surface interaction on the formation of filler-induced target patterns. The impact of the symmetry breaking perturbation of the filler particle on the phase separation may be tuned through the surface interaction parameters $`g`$ and $`h`$. We focus our attention on $`h`$ since it has a predominant effect on the resulting pattern formation.
Fig. 4 shows CHC simulations of blend phase separation for the case where one component strongly prefers the filler, weakly prefers the filler, and has no preference for the filler ($`h=0`$). The quench depth parameter and computation times are identical in these images. We see that the target patterns do not form in the case of filler particles with a (“neutral”) non-selective interaction, but rather there is a tendency for the patterns to align perpendicularly to the interface. This type of compositional alignment, which we find is even more pronounced in the case of a planar surface, also occurs in block copolymer fluids . The perturbing influence of the boundary interaction saturates with an increase of $`h`$, as can be seen by comparison of the $`h=2.0`$ and $`h=0.1`$ cases. Below we demonstrate that the extent of the target pattern depends logarithmically on $`h`$.
Target waves are a variety of spinodal pattern with a symmetry set by the shape of the filler particle boundary. The introduction of surface patterns on a solid substrate can similarly break the symmetry of the phase separation process and can be used to impart a particular “shape” to the spinodal pattern . A previous CHC simulation by us considered this “patterned-directed” phase separation in near critical composition blend films .
The blend composition also can have a large influence on the character of the filler-induced phase separation structures in blends, particularly when sufficiently far off critical to suppress the spinodal instability. In this case we find a layer of composition enrichment (“encapsulation layer”) forms about the filler particle, but there are no target patterns . This encapsulation layer grows in time, but appears to grow slower than $`t^{1/3}`$. A non-selective interaction ($`h=0`$) leads to the absence of encapsulation by the minority phase, and minority phase nucleation occurs largely unaffected by the presence of the filler. We thus find that the development of target patterns requires the conditions of ordinary spinodal pattern formation (i.e., “near” critical composition) and the existence of a heterogeneity to initiate the wave disturbance.
A representative example of the interference between filler-induced rings at a non-vanishing filler concentration is shown in Fig. 5. Filler particles can each have an affinity for the different blend components so that the enriching phase (“charge”) can vary near the surface of the filler particle at the core of the target waves. (Sackmann has noted that composition enrichment patterns occur about membrane proteins in lipid mixtures comprising living cell membranes, and these patterns mediate protein interactions, leading to attractive or repulsive interactions depending on the “charge”.) In the low noise limit it should be possible to obtain novel wave patterns like those found in reaction diffusion models with regularly spaced sources for wave propagation , but we do not pursue this here. We do mention that the use of filler particles responsive to external fields could allow the manipulation of the large scale phase separation pattern if the external fields are used to align the filler particles.
## IV Estimation of the Spatial Extent of the Target Patterns
In this section we derive an approximate analytical expression for the spatial extent of the composition wave pattern. Our method is based on the observation above that the composition wave propagates until it is overwhelmed by the growth of the bulk spinodal decomposition background pattern. In the context of surface-directed spinodal decomposition, a qualitative explanation of this type of phenomenon was proposed by Ball and Essery , who argued that the early time dynamics can be adequately described by the linearized CHC equation, in which the composition wave and the bulk spinodal decomposition add linearly. Both processes continue independently until a local non-linear threshold value of $`|\psi |\approx \psi _t`$ is reached, which then relaxes toward the equilibrium value $`\psi =\pm 1`$.
We test this conjecture numerically in the case of the filler inclusions. First, we simulate the CHC equation in bulk, with no filler particle, and determine the time $`t_0`$ at which the root-mean-square concentration $`\langle \psi ^2(t)\rangle ^{1/2}`$ reaches the threshold value $`\psi _t=0.15`$ (our reason for this choice is given below). Next, we simulate the filled blend in the absence of noise, solving for the pattern at time $`t_0`$, and then estimate the radius $`r_0`$ at which the composition wave (envelope) exceeds $`\psi _t`$. Finally, we perform the simulation with both the filler particle and the thermal noise, and find the pattern to be well-characterized by the size $`r_0`$ for a range of noise and surface interaction parameters.
Based on these ideas, we next develop an analytic estimate of the spatial extent of the pattern using the linearized CHC theory . At early times the order parameter does not deviate significantly from zero, and one linearizes (6) to obtain
$$\partial _t\psi =-\nabla ^2(k_0^2+\nabla ^2)\psi +ϵ^{1/2}\eta $$
(24)
where $`k_0^2=1-3\psi _0^2`$. We apply this equation first to the determination of the root-mean-square order parameter $`\psi _{\mathrm{rms}}(t)\equiv \langle \psi (t)^2\rangle ^{1/2}`$, which characterizes the rate of growth of the concentration fluctuations at very early times following a quench to the two-phase region. Fourier transformation gives
$$\partial _t\stackrel{~}{\psi }(𝐤,t)=k^2(k_0^2-k^2)\stackrel{~}{\psi }+ϵ^{1/2}\stackrel{~}{\eta }(𝐤,t).$$
(25)
The structure factor $`S(𝐤,t)=\langle \stackrel{~}{\psi }(𝐤,t)\stackrel{~}{\psi }(-𝐤,t)\rangle `$ is then,
$$S(𝐤,t)=\frac{ϵ(e^{2k^2(k_0^2-k^2)t}-1)}{2(k_0^2-k^2)}.$$
(26)
$`\psi _{\mathrm{rms}}`$ is found by integrating $`S(𝐤,t)`$ with respect to $`k`$,
$$\langle \psi ^2(t)\rangle =\int \frac{d^dk}{(2\pi )^d}S(𝐤,t).$$
(27)
Following , we observe that the integrand is sharply peaked about $`k=k_0/\sqrt{2}`$ and we approximate it by a Gaussian. The integral is then readily evaluated to find (to leading order in $`1/t`$)
$$\langle \psi ^2(t)\rangle =ϵe^{\frac{1}{2}k_0^4t}\left(\frac{\pi }{2t}\right)^{1/2}\frac{d}{k_0^{4-d}\mathrm{\Gamma }(1+d/2)(8\pi )^{d/2}}.$$
(28)
This result, when tested against simulations of the fully nonlinear CHC, agrees well up to $`\langle \psi ^2(t)\rangle ^{1/2}\approx 0.15`$, thus motivating our choice for $`\psi _t`$ indicated above.
Finally, we equate $`\langle \psi ^2(t_0)\rangle `$ to $`\psi _t^2`$ to determine the time $`t_0`$ at which the bulk phase separation process has reached the nonlinear threshold. The resulting transcendental equation for $`t_0`$ can be approximately solved by observing that $`\psi _t^2/ϵ\gg 1`$ in low noise conditions, and that this ratio must be compensated primarily by the $`e^{k_0^4t_0/2}`$ factor. Equating these two and then iteratively improving the estimate of $`t_0`$ yields
$$t_0\approx \frac{2}{k_0^4}\left[\mathrm{ln}\left(\frac{\psi _t^2}{ϵ}\right)+\mathrm{ln}\left(\frac{2\mathrm{\Gamma }(1+d/2)(8\pi )^{d/2}\sqrt{\mathrm{ln}(\psi _t^2/ϵ)}}{k_0^{d-2}d}\right)\right].$$
(29)
Next we consider the linearized theory for the filled blend. We solve (24) for the exterior of the filler particle or fiber, and in the absence of thermal noise (the noise can simply be averaged out of the composition wave within the linearized theory). We consider idealized filler particles that are symmetric and finite in some of their coordinates and infinite (i.e., very large on the phase separation pattern scale) in the remaining coordinates defining the particle dimensions. With such symmetry, the composition wave depends only on the coordinate perpendicular to the interface, which is a radial coordinate in $`d_{\perp }`$ dimensions. For example, a spherical particle in $`d=3`$ corresponds to $`d_{\perp }=3`$, a cylindrical fiber is prescribed by $`d_{\perp }=2`$, and a platelet filler reduces to the planar surface with $`d_{\perp }=1`$. We can treat the general $`d_{\perp }`$ case through the $`d_{\perp }`$-dimensional Laplacian,
$$\nabla ^2\psi (r)=\frac{\partial ^2\psi }{\partial r^2}+\frac{(d_{\perp }-1)}{r}\frac{\partial \psi }{\partial r},$$
(30)
yielding a fourth order partial differential equation for (24). The “source” for the composition wave comes from the boundary conditions obtained by linearization of (22) and (23), namely
$$\widehat{r}\cdot \nabla (k_0^2+\nabla ^2)\psi (R)=0$$
(31)
for conservation at the boundary, and
$$\widehat{r}\cdot \nabla \psi (R)=-h+g\psi (R).$$
(32)
This provides a pseudo one-dimensional system which can be readily integrated numerically.
The solutions for $`d_{\perp }=1,2,3`$ presented in Fig. 6 illustrate the influence of $`d_{\perp }`$ on the composition wave pattern. Increasing $`d_{\perp }`$ reduces the amplitude of the composition wave. This feature can be understood to arise from the increasing volume occupied by the outer rings. The opposite situation should hold for exterior boundaries having these symmetries, so that more coherent ring structures might be anticipated in phase separation confined to these geometries, especially for spherical cavities. It may prove interesting to examine phase separation in the presence of fractal filler particles (like fumed silica) to determine whether geometry stabilizes or destabilizes the phase separation pattern and how the evolving phase separation pattern accommodates the fractal boundary structure.
Returning to the analytical estimation of the target pattern size, we expand $`\psi (r)`$ in a basis which diagonalizes the Laplacian. This amounts to performing a cosine transform for $`d_{\perp }=1`$, a ($`J_0`$) Hankel transform in $`d=2`$, and a half-integer Hankel transform in $`d=3`$. The last can be re-expressed as a Fourier cosine transform of $`r\psi (r)`$ rather than $`\psi `$. Here we study the extremal cases of $`d_{\perp }=1`$ and $`3`$, and show that $`d_{\perp }`$ has only a minor effect on the pattern size.
First, we revisit the $`d_{\perp }=1`$ case already addressed by Ball and Essery . One can solve (24) with boundary conditions via Fourier cosine transformation with respect to $`r`$ and Laplace transform with respect to $`t`$, with the result,
$$\stackrel{~}{\psi }(k,s)=\frac{hk^2}{s[s-k^2(k_0^2-k^2)]}$$
(33)
for the case $`g=0`$ (to which we restrict our attention). Inverting the Laplace transform and using a Gaussian approximation again to invert the Fourier cosine transform gives the solution
$$\psi _{d_{\perp }=1}(z,t)\approx \psi (0,t)e^{-(z/4\lambda _0)^2}\mathrm{cos}(z/\lambda _0),$$
(34)
where $`\psi (0,t)\approx \frac{h}{k_0^3}\sqrt{\frac{8}{\pi t}}\mathrm{exp}\left(\frac{k_0^4t}{4}\right)`$, $`\lambda _0^{-1}=k_0/\sqrt{2}`$, and where subdominant terms in $`1/t`$ have been neglected. A non-zero value of $`g`$, while complicating (33), would appear in this approximate solution only via the substitution $`h\to h+g\psi _0`$. In this $`d_{\perp }=1`$ example, the $`R`$ dependence drops out of the linearized equation, and $`z=r`$ measures the distance from the wall.
For a given noise strength $`ϵ`$ we have a time $`t_0`$ at which the local composition of one phase reaches $`\psi _t`$. For surface interaction $`h`$ we solve for the distance $`z_0`$ out to which the envelope of $`\psi (r,t_0)`$ exceeds $`\psi _t`$. This gives the approximation,
$$z_0\approx 2k_0^3t_0\left[1-\frac{1}{k_0^4t_0}\mathrm{ln}\left(\frac{k_0^3\psi _t}{h}\sqrt{\frac{\pi t_0}{8}}\right)\right].$$
(35)
Thus, the propagation front grows with a velocity $`2k_0^3`$ at long times , and the terms in square brackets are the leading correction to this long time asymptotic behavior.
The $`d_{\perp }=3`$ composition profile may be obtained by observing that (24) and (30) yield the same equation for $`r\psi _{d_{\perp }=3}(z,t)`$ as for the composition profile $`\psi _{d_{\perp }=1}(z,t)`$. Thus, we impose the boundary conditions (to leading order in $`R/r`$) and follow the above derivation with the result
$$\psi _{d_{\perp }=3}(z,t)\approx \frac{R}{R+z}\psi _{d_{\perp }=1}(z,t).$$
(36)
Solving for the value $`z_0`$ at which the composition wave envelope equals $`\psi _t`$ leads to nearly the same expression as the $`d_{\perp }=1`$ case (35), with an additional $`-\mathrm{ln}(1+z_0/R)/(k_0^4t_0)`$ term in the square brackets. Typically $`z_0\sim 10R`$, $`k_0`$ is of order unity, and $`t_0`$ ranges from 10 to 30, making this term roughly a 10% correction.
To compare with our simulations, we consider $`d_{\perp }=d=2`$. For $`z_0`$ we simply take the arithmetic mean of the values obtained for $`d_{\perp }=1`$ and $`d_{\perp }=3`$ (motivated by numerical solutions). Equation (29) for $`t_0`$ is substituted into (35) to obtain a prediction for $`z_0`$ in terms of $`h`$, $`ϵ`$, and $`\psi _0`$. Here we consider critical quenches with $`\psi _0=0`$, or $`k_0=1`$. Finally, we approximate $`\mathrm{ln}t_0`$ and $`\mathrm{ln}z_0`$ with typical values, which introduces less than 10% error within the range of parameters considered here, and thus obtain
$$z_0\approx 2.5-8.5\mathrm{log}_{10}ϵ+4.6\mathrm{log}_{10}h.$$
(37)
A similar expression results for $`d=d_{\perp }=3`$, with the primary difference being a change of the $`\mathrm{log}_{10}ϵ`$ coefficient to $`-7.9`$.
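For convenience, a minimal sketch evaluating the analytic estimate is given below: Eq. (29) for $`t_0`$ and the $`d_{\perp }=1`$ result (35) for $`z_0`$, at a critical quench with $`k_0=1`$ and the threshold $`\psi _t=0.15`$ adopted above.

```python
import math

def t0_estimate(eps, d=2, psi_t=0.15, k0=1.0):
    # Eq. (29): onset time of the bulk nonlinear regime
    L = math.log(psi_t**2 / eps)
    c = 2.0 * math.gamma(1.0 + d / 2.0) * (8.0 * math.pi) ** (d / 2.0) \
        * math.sqrt(L) / (k0 ** (d - 2) * d)
    return (2.0 / k0**4) * (L + math.log(c))

def z0_estimate(eps, h, d=2, psi_t=0.15, k0=1.0):
    # Eq. (35): extent of the wall-induced composition wave at time t0
    t0 = t0_estimate(eps, d, psi_t, k0)
    corr = math.log((k0**3 * psi_t / h) * math.sqrt(math.pi * t0 / 8.0))
    return 2.0 * k0**3 * t0 * (1.0 - corr / (k0**4 * t0))

for eps in (1e-3, 1e-5):
    print(eps, z0_estimate(eps, h=0.1))
```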
Fig. 7 shows a comparison of this estimate with the simulations. We see that the analytic approximation provides a good rough estimate of the spatial extent of the phase separation pattern, although it predicts a size typically one oscillation larger than the outermost unbroken target.
We point out that the spherical composition waves are apparent in the average composition profiles even in the rather noisy looking ring patterns found in the late stage of target pattern formation. In Fig. 8 we show a target pattern at intermediate values of noise, as well as the radial average of the composition profile about the center of the target. Comparison shows that the ring composition pattern persists in the radial average even after the target pattern appears visually to have broken up. This provides a possible explanation of the apparent overestimate of the target size in Fig. 7.
## V Comparison with Experimental Results
While scattering measurements of the growth of composition waves are readily performed for a single plane boundary , these measurements become more difficult in filled blends where the filler particles are randomly distributed within the blend. This situation is unfortunate, given the predicted transient nature of the composition wave patterns when the particles are small. However, real space studies of blend phase separation are possible in films sufficiently thin to suppress the formation of surface-directed waves normal to the solid substrate . Under the favorable circumstances that one of the polymer components segregates to both the solid substrate and the polymer-air boundary of these nearly two-dimensional (“ultrathin”) blend films, phase separation is observed within the plane of the film . The variation of the surface tension in the film accompanying phase separation gives rise to film boundary undulations that can be measured by atomic force microscopy (AFM) and optical microscopy (OM) . The thickness of ultrathin blend films is typically restricted to small values ($`L\lesssim 200`$ nm) and the height contrast of the surface patterns tends to become larger in still thinner films . A film thickness in the range of 20 – 50 nm is often suited for observing well resolved phase separation surface patterns similar to those found in simulations of bulk blends. In the following we compare our results with those of a model blend utilized in ultrathin phase separation studies reported elsewhere with silica beads added as the model filler .
The spun-cast films are composed of a near-critical composition blend of polystyrene and poly(vinyl methyl) ether (PVME). The filler particles are silica beads having an average size of about 100 nm, as measured by direct imaging of the particles. This particular filler was chosen because of its tendency to be enriched by polystyrene, rather than PVME, which enriches both the solid and air surfaces. In this way, the filler particles are not competing with the solid or air surfaces for the enriching polymer. Phase separation was achieved by annealing the film approximately 15 °C within the two-phase region, corresponding to a fairly shallow quench. Film topography (height) was measured by AFM. Further details of the experiment are provided in Ref. .
Fig. 9(top) shows the topography of the blend film at an intermediate stage of phase separation where we expect circular filler-induced composition waves to be evident. The pattern resembles the simulated patterns under similar quench conditions. The symmetry of the film phase separation pattern is locally broken by the presence of the filler particles, leading to the formation of ring-like concentration wave patterns. Note that when observed on a larger scale \[Fig. 9(bottom)\], the phase separation pattern far from any filler particles resembles the typical spinodal decomposition pattern observed in control measurements on the same blends without filler.
It is apparent that the patterns in Fig. 9 are in a relatively late stage of phase separation, where the rings are beginning to break up along with the “background” phase separation pattern. The simulations above indicate that the target patterns are more persistently expressed in the radially averaged patterns and in Fig. 10 we show the radial average of the AFM height data centered about a representative filler particle. The target pattern in the radially averaged data extends far beyond the ring feature apparent in the image in Fig. 9. The data in Fig. 10 correspond to a shallow quench, and are comparable to the intermediate stage, shallow quench simulation data in Fig. 3.
Next we directly compare the prediction of the linearized theory to the AFM data. An exact solution of $`\psi _{d_{}=2}(z,t)`$ is difficult, but we can obtain a reasonable approximation to $`\psi _{d_{}=2}(z,t)`$ by generalizing the method described above for $`\psi _{d_{}=1}(z,t)`$. We estimate $`\psi _{d_{}=2}(z,t)`$ as a Gaussian decay function multiplied by the eigenfunction of the Laplacian in $`d=2`$ \[rather than in $`d=1`$ as in the case of (34)\]. In this approximation, $`\psi _{d_{}=2}(z,t)`$ becomes a product of a Gaussian as in Eq. (34) and a Bessel function $`J_0(2\pi z/\lambda _0)`$, and we show a fit of this function to the AFM data in Fig. 10. The fitted value of the particle radius $`R`$ is $`82`$ nm, which is comparable to the average particle radius obtained by optical microscopy ($`R\approx 100`$ nm). The scale parameters of the Gaussian and Bessel functions have been adjusted along with the prefactor which is set by the value of $`\psi _{d_{}=2}(z,t)`$ as $`z`$ tends to zero. It is clear that the oscillatory pattern scale is on the order of the background phase separation pattern, and that the linearized expression for $`\psi _{d_{}=2}(z,t)`$ has the qualitative shape of the measured profile. Such qualitative agreement is the best that can be expected from the linearized theory, which strictly speaking should hold only at very early times.
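A sketch of such a fit is given below. The model function, and in particular the way the particle radius $`R`$ enters (here as an offset of the radial coordinate), are illustrative assumptions rather than the fitting procedure actually used, and the arrays `r_nm` and `height_nm` merely stand in for the measured radially averaged profile.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

def target_profile(r, amp, width, lam0, R):
    # Gaussian envelope times a J0 oscillation, with the wave starting at
    # the particle surface r = R; this offset is one plausible way to let
    # the particle radius enter the fit.
    z = r - R
    return amp * np.exp(-(z / width) ** 2) * j0(2.0 * np.pi * z / lam0)

# Stand-ins for the radially averaged AFM height data (hypothetical):
rng = np.random.default_rng(1)
r_nm = np.linspace(100.0, 900.0, 81)
height_nm = target_profile(r_nm, 2.0, 350.0, 250.0, 90.0) \
            + 0.05 * rng.normal(size=r_nm.size)

popt, pcov = curve_fit(target_profile, r_nm, height_nm,
                       p0=(1.0, 300.0, 200.0, 80.0))
print(f"fitted particle radius R = {popt[3]:.0f} nm")
```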
At still longer times, the phase separation pattern eventually breaks up into droplets and little difference is observed between the films with and without filler. Thus, the target patterns induced by the filler particles are transient, as observed in the simulations. Of course, the version of the CHC model used here cannot reliably describe quantitative features of these late stage processes without the incorporation of hydrodynamic interactions.
Under far off-critical conditions and a selective interaction between the filler particles and one of the polymers ($`h>0`$) the filler particles are “encapsulated” by a layer of the favored polymer so that concentration waves do not develop. The formation of droplets by nucleation or far off-critical spinodal decomposition can also have the effect of breaking the symmetry of the phase separation process, but the pattern formation is not generally the same as for critical composition mixtures. Recent measurements have reported the occurrence of filler encapsulation in a blend of polypropylene and polyamide-6 with glass bead filler particles . Encapsulation occurs when the polypropylene-rich phase having the selective interaction for the filler is the minority phase, but no encapsulation occurs when polypropylene is the majority phase. This finding compares well with the simulation results discussed in section IV.
Radiation crosslinking provides another source of heterogeneity that can be introduced readily in phase separating films. Measurements of irradiated photoreactive blends of PVME and PS with a crosslinkable side group styrene-chloromethyl styrene random copolymer (PSCMS) show the formation of striking ring composition patterns and we reproduce one of these patterns in Fig. 11 (compare with Fig. 5). Furukawa has interpreted these observations in terms of a model by which irradiation first brings the blend into the nucleation regime where droplets phase separate, followed by the entrance into the spinodal regime where the droplets act like the filler particles discussed in the present paper. This is a plausible interpretation of the qualitative origin of these patterns, but it is difficult to interpret these measurements directly from CHC simulations since crosslinking imparts a non-trivial viscoelasticity to the polymer blend . The crosslinking, which also increases the molecular weight of PSCMS, and the increased elasticity, both lead us to expect a decrease in the thermal noise and thus an increased tendency to form target patterns. It also seems plausible that the crosslinks themselves provide the source of heterogeneity, inducing the development of composition waves.
## VI Discussion
The presence of filler particles in a phase separating fluid mixture is found to give rise to transient composition wave patterns in simulations based on the Cahn-Hilliard-Cook model and in measurements on ultrathin polystyrene/poly(vinylmethyl) ether blend films with silica filler particles. In both the simulation and the experiment, the composition wave patterns were found to be transient and the filler is found to have a diminishing effect as the scale of the phase separation pattern becomes larger than the filler particles. The propagation of composition waves is enhanced at lower thermal noise level so that the effect propagates to larger distances for deeper quenches and higher molecular weight blends. The finite size and the dimensionality of the filler particles are found in our simulations to have a similar effect in determining the stability of the composition wave pattern at intermediate times. The composition waves become more stable for particles large in comparison to the spinodal wavelength, and the concentration waves exhibited by these larger particles are similar to those near planar interfaces. The composition waves about the filler particles are more stable for particles that extend to large distances along more directions; i.e., surfaces are more stable than long cylinders, which are more stable than spherical filler particles. Our results compare favorably with experiments on phase-separating filled blend films which are nearly two-dimensional.
Filler particles are an example of a perturbation of phase separation by boundaries interior to the fluid. It would be interesting to investigate the influence of exterior boundaries of finite extent on phase separation. It seems likely that composition waves within confined geometries should be more stable because of the decreasing surface area of the rings farther from the surface. This should lead to well developed and more long lasting perturbations of the phase separation process. The relation between boundary shape and phase separation morphology should be very interesting for this class of measurements. Phase separation within arrays of filler particles, where the distinction between interior and exterior boundaries becomes blurred and where larger perturbations of the phase separation process may be anticipated, should also prove interesting. The distinction between large and small and fixed and mobile filler particles should lead to a range of new phase separation morphologies since the development of composition waves should lead to changes in the filler-filler interaction that can influence the subsequent development of the film structure. The utilization of geometrically and chemically patterned surfaces and additives offers many opportunities for the control of the phase separation morphology and resulting properties of blend films, and the study of these surface-induced phase separation processes raises many interesting problems of fundamental and practical interest.
###### Acknowledgements.
The present work has benefitted greatly from close collaboration with experimentalists in the Polymer Blends Group at NIST. The simulations and experimental work were conducted simultaneously, and we thank Alamgir Karim and Eric Amis for many suggestions which influenced the design and interpretation of our simulations. We thank Giovanni Nisato for many useful conversations and providing the correlation function data for the filled films. We have also benefitted from conversations with Qui Tran-Cong regarding the relation of his measurements to our simulations and for contributing Fig. 11. B.P. Lee acknowledges the support of the National Research Council/NIST postdoctoral research program.
## A Boundary Conditions on a Curved Surface
Curved boundaries complicate the implementation of boundary conditions in a spatially discretized simulation. In the present work, we use a square lattice and simulate filler particles with circular, cylindrical, and spherical symmetry. This requires a method of incorporating the boundary conditions that minimizes the effects of errors caused by approximating curved boundaries by lattices. In this appendix we present our approach to this problem.
Generally, (22) and (23) are imposed by inclusion of $`\psi `$ and $`\mu =-\nabla ^2\psi -\psi +\psi ^3`$ values at the lattice sites on the immediate interior of the boundary (within the wall or filler), which are determined from the boundary conditions before each time step. We superimpose the circular boundary over the square lattice so that no lattice vertices lie along the boundary. Consequently, every interior point corresponds to one of two possibilities, shown as the lower left corners of Fig. 12 (a) and (b). The boundary condition at the point $`(x_0,y_0)`$ (shown as a black dot) is not set at the interior lattice site but rather at the intersection of the boundary and the radius passing through the interior lattice site.
In both cases of Fig. 12 we use the three vertices shown as open circles for the discretized representation of $`\psi (x_0,y_0)`$ and its normal derivative $`\widehat{r}\cdot \nabla \psi (x_0,y_0)`$. To highest order these representations are unique \[to $`O(\mathrm{\Delta }x^2)`$ for $`\psi `$ and $`O(\mathrm{\Delta }x)`$ for the derivative, with $`\mathrm{\Delta }x`$ the lattice spacing\]. Hence, (23) may be used to determine $`\psi _{i,j}`$, the field at the interior site, from the appropriate exterior points. We find for case (a) the relation
$$\psi _{i,j}=\frac{(1-g\ell )\left[(\mathrm{sin}\theta -\mathrm{cos}\theta )\psi _{i,j+1}+\mathrm{cos}\theta \psi _{i+1,j+1}\right]-h\mathrm{\Delta }x}{(1-g\ell )\mathrm{sin}\theta +g\mathrm{\Delta }x},$$
(A1)
where $`\ell `$ is the distance from the interior lattice site to $`(x_0,y_0)`$ while $`\theta `$ is the angle between the radius and the horizontal axis. For case (b) the analogous expression is
$$\psi _{i,j}=\frac{(1-g\ell )\left[\mathrm{cos}\theta \psi _{i+1,j}+\mathrm{sin}\theta \psi _{i,j+1}\right]-h\mathrm{\Delta }x}{(1-g\ell )(\mathrm{cos}\theta +\mathrm{sin}\theta )+g\mathrm{\Delta }x}.$$
(A2)
The chemical potential $`\mu _{i,j}`$ may also be assigned at the interior point, in practice by assigning a value to $`(\nabla ^2\psi )_{i,j}`$ to supplement the Laplacian derived from $`\psi `$ outside the boundary. In this way we can impose the conservation requirement (22) for case (a) via
$$\mu _{i,j}=(1-\mathrm{cot}\theta )\mu _{i,j+1}+\mathrm{cot}\theta \mu _{i+1,j+1},$$
(A3)
while for case (b),
$$\mu _{i,j}=\frac{\mathrm{cos}\theta }{\mathrm{cos}\theta +\mathrm{sin}\theta }\mu _{i+1,j}+\frac{\mathrm{sin}\theta }{\mathrm{cos}\theta +\mathrm{sin}\theta }\mu _{i,j+1}.$$
(A4)
In simulations with thermal noise we assume a separation of time scales between thermal fluctuations and order parameter variations (as described in ), and simply supplement the above conditions with the conservation law for fluctuations at the boundary: $`\widehat{r}\cdot \nu =0`$, where $`\nu `$ is the noise current derived from $`\eta =\nabla \cdot \nu `$.
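A direct transcription of (A1)-(A4), with the sign conventions as written above, might look as follows; `psi_up`, `psi_diag` and `psi_right` denote the exterior values $`\psi _{i,j+1}`$, $`\psi _{i+1,j+1}`$ and $`\psi _{i+1,j}`$ (a sketch, not the production code used for the simulations).

```python
import math

def psi_interior_a(theta, ell, dx, g, h, psi_up, psi_diag):
    """Eq. (A1), case (a): interior value from psi_{i,j+1} and psi_{i+1,j+1}."""
    s, c = math.sin(theta), math.cos(theta)
    num = (1.0 - g * ell) * ((s - c) * psi_up + c * psi_diag) - h * dx
    return num / ((1.0 - g * ell) * s + g * dx)

def psi_interior_b(theta, ell, dx, g, h, psi_right, psi_up):
    """Eq. (A2), case (b): interior value from psi_{i+1,j} and psi_{i,j+1}."""
    s, c = math.sin(theta), math.cos(theta)
    num = (1.0 - g * ell) * (c * psi_right + s * psi_up) - h * dx
    return num / ((1.0 - g * ell) * (c + s) + g * dx)

def mu_interior_a(theta, mu_up, mu_diag):
    """Eq. (A3): interpolated chemical potential enforcing zero flux, case (a)."""
    cot = math.cos(theta) / math.sin(theta)
    return (1.0 - cot) * mu_up + cot * mu_diag

def mu_interior_b(theta, mu_right, mu_up):
    """Eq. (A4): interpolated chemical potential enforcing zero flux, case (b)."""
    s, c = math.sin(theta), math.cos(theta)
    return (c * mu_right + s * mu_up) / (c + s)
```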
# UR-1572 ER-40685/933 May 1999 Top and gluons at lepton colliders (presented by LHO at the 1999 Meeting of the Division of Particles and Fields of the APS, Los Angeles, CA, Jan. 5–9, 1999)
## I Introduction
Future high energy lepton colliders, both $`e^+e^{-}`$ and $`\mu ^+\mu ^{-}`$, will provide relatively clean environments in which to study top quark physics. Although top production cross sections are likely to be lower at these machines than at hadron colliders, the color-singlet initial states and the fact that the laboratory and hard process center of mass frames coincide give lepton machines some advantages. Strong interaction effects such as those due to gluon radiation must still be considered, of course. Jets from radiated gluons can masquerade as quark jets, which can complicate top event identification and mass reconstruction from its decay products, especially for the hadronic decay modes.
In this talk we consider gluon radiation in top quark production and decay (; see also ). We consider only collision energies well above the top pair production threshold, so that our results do not depend on whether the initial state consists of electrons or muons. We focus on distributions of radiated gluons and their effects on top mass reconstruction.
In top events at lepton colliders, there are no gluons radiated from the color-singlet initial state. Final-state gluon emission can occur in both the production and decay processes, with gluons emitted from the top or bottom quarks (or antiquarks), as shown in Figure 1. Emission from the top quark contributes to both production- and decay-stage radiation, depending on when the top quark goes on shell. Emission from the $`b`$ quarks contributes to decay-stage radiation only.
## II Monte Carlo Calculation
The results presented here are from a Monte Carlo calculation of real gluon emission in top quark production and decay:
$$e^+e^{-}\rightarrow \gamma ^{*},Z^{*}\rightarrow t\overline{t}(g)\rightarrow bW^+\overline{b}W^{-}g.$$
(1)
We compute the exact matrix elements for the diagrams shown in Figure 1 with all spin correlations and the bottom mass included. We keep the finite top width $`\mathrm{\Gamma }_t`$ in the top quark propagator and include all interferences between diagrams, and we use exact kinematics in all parts of the calculation. We do not include radiation from the decays of the $`W`$ boson; this amounts to assuming either that the $`W`$ decays are leptonic or that radiative hadronic $`W`$ decays can be identified and separated out, for example by invariant mass cuts.
We are particularly interested in the reconstruction of the top quark momentum (and hence its mass) from its decay products. In an experiment this allows us both to identify top events and to measure the top quark’s mass. A complication arises when gluon radiation is present, because the emitted gluon may or may not be a top decay product. If it is, then we should include it in top reconstruction, i.e. we have $`m_t^2=p_{Wbg}^2`$ for decay-stage radiation. But if the gluon is part of top production, then we have $`m_t^2=p_{Wb}^2`$. It is therefore desirable to be able to identify and distinguish production-stage gluons from those emitted in the decays.
Although this distinction cannot be made absolutely in an experiment, the various contributions can be separated in the calculation. As noted above, gluon emission from the top quark (or antiquark) contributes to both the production and decay stages. These can be separated in the calculation as follows. For definiteness, we consider gluon emission from the top quark, shown in the bottom left diagram in Figure 1. The matrix element contains propagators for the top quark both before and after it radiates the gluon. The matrix element therefore contains the factors
$$ME\propto \left(\frac{1}{p_{Wbg}^2-m_t^2+im_t\mathrm{\Gamma }_t}\right)\left(\frac{1}{p_{Wb}^2-m_t^2+im_t\mathrm{\Gamma }_t}\right).$$
(2)
The right-hand side can be rewritten as
$$\frac{1}{p_{Wbg}^2-p_{Wb}^2}\left(\frac{1}{p_{Wb}^2-m_t^2+im_t\mathrm{\Gamma }_t}-\frac{1}{p_{Wbg}^2-m_t^2+im_t\mathrm{\Gamma }_t}\right).$$
(3)
This separates the production and decay contributions to the matrix element because the two terms in parentheses peak respectively at $`p_{Wb}^2=m_t^2`$ (production emission) and $`p_{Wbg}^2=m_t^2`$ (decay emission). The cross section in turn contains separate production and decay contributions. It also contains interference terms, which in principle confound the separation but in practice are quite small.
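The equivalence of the product form (2) and the difference form (3) is a simple partial-fraction identity, and can be verified numerically with a few lines of complex arithmetic (the mass, width, and invariant masses below are arbitrary sample values in GeV).

```python
# Numerical check of the propagator separation: 1/(A*B) equals
# [1/(p_Wbg^2 - p_Wb^2)] * (1/A - 1/B).
mt, gamma_t = 175.0, 1.5
p_wb2, p_wbg2 = 174.2**2, 176.8**2

A = p_wb2 - mt**2 + 1j * mt * gamma_t    # top propagator after emission
B = p_wbg2 - mt**2 + 1j * mt * gamma_t   # top propagator before emission
lhs = 1.0 / (A * B)
rhs = (1.0 / (p_wbg2 - p_wb2)) * (1.0 / A - 1.0 / B)
print(abs(lhs - rhs) / abs(lhs))         # zero to machine precision
```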
In fact the interference terms are interesting in their own right, although not for top reconstruction. In particular, the interference between production- and decay-stage radiation can be sensitive to the top quark width $`\mathrm{\Gamma }_t`$, which is about 1.5 GeV in the Standard Model. The interference between the two propagators shown above can be thought of as giving rise to two overlapping Breit-Wigner resonances. The peaks are separated roughly by the gluon energy, and each curve has width $`\mathrm{\Gamma }_t`$. Therefore when the gluon energy becomes comparable to the top width, the two Breit-Wigners overlap and interference can be substantial. In contrast, if the gluon energy is much larger than $`\mathrm{\Gamma }_t`$, overlap and hence interference is negligible. Hence the amount of interference serves as a measure of the top width. We will explore this more below.
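The overlap argument can be illustrated with a toy model of two Breit-Wigner amplitudes. The sketch below ignores the couplings and the relative sign carried by the full matrix element; it only tracks how the overlap integral dies off once the peak separation (of order $`2m_tE_g`$ in the invariant-mass-squared variable) exceeds the width $`m_t\mathrm{\Gamma }_t`$.

```python
import numpy as np

mt, gamma_t = 175.0, 1.5                          # GeV (illustrative)
for e_g in (0.5, 1.5, 5.0, 15.0):                 # toy gluon energies in GeV
    delta = 2.0 * mt * e_g                        # separation of the two poles
    x = np.linspace(-50 * mt * gamma_t, delta + 50 * mt * gamma_t, 200001)
    bw1 = 1.0 / (x + 1j * mt * gamma_t)           # "production" pole at x = 0
    bw2 = 1.0 / (x - delta + 1j * mt * gamma_t)   # "decay" pole at x = delta
    cross = np.trapz(2.0 * (bw1 * bw2.conj()).real, x)
    direct = np.trapz(np.abs(bw1) ** 2 + np.abs(bw2) ** 2, x)
    print(f"E_g = {e_g:5.1f} GeV: |interference|/direct = {abs(cross)/direct:.3f}")
```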
## III Numerical Results
### A Overall Gluon Properties
We begin our numerical results with the relative contributions of production- and decay-stage radiation to the total cross section. Figure 2 shows the fraction of the total cross section due to production stage emission, in events with an extra gluon. The solid line is for center-of-mass energy 1 TeV, and the dashed line is for 500 GeV. Both curves fall off as the minimum gluon energy increases; this reflects the decrease in phase space for emitted gluons. The production fraction is higher at a 1 TeV collision energy than at 500 GeV — again this reflects phase space — but decay-stage radiation always dominates for both cases.
Figure 3 shows the total gluon energy spectrum for an intermediate collision energy of 750 GeV along with its decomposition into production (dashed histogram) and decay (dotted histogram) contributions. Again we see that decay-stage radiation dominates. Otherwise the spectra are not very different; both exhibit the rise at low energies due to the infrared singularity characteristic of gluon emission, and both fall off at high energies as phase space runs out.
### B Mass Reconstruction
We now turn to the question of mass reconstruction. Figure 4 shows top invariant mass distributions with and without the extra gluon included. In both cases there is a clear peak at the correct value of $`m_t`$. In the left-hand plot, where the gluon is not included in the reconstruction, we see a low-side tail due to events where the gluon was radiated in the decay. Similarly, in the right-hand plot we see a high-side tail due to events where the gluon was radiated in association with production, and was included when it should not have been.
The narrowness of the peaks and the length of the tails in Figure 4 suggest that an invariant mass cut would be useful to separate the two types of events. We can do even better by considering cuts on the angle between the gluon and the $`b`$ quarks. This works because although there is no collinear singularity for radiation from massive quarks, the distribution of gluons radiated from $`b`$ quarks peaks close to the $`b`$ direction. Such gluons are emitted in decays. The dotted histogram in Figure 5 shows the top mass distribution that results from using proximity of the gluon to the $`b`$ quarks to assign the gluon.
Of course an important reason the cuts are so effective is that we work at the parton level. The experimentalists do not have that luxury, and, as one would expect, hadronization and detector effects are likely to cloud the picture. The solid histogram in Figure 5 shows the mass distribution after including energy smearing; the solid curve is a Gaussian fit. The spread in the measured energies is parametrized by Gaussians with widths $`\sigma =0.4\sqrt{E}`$ for quarks and gluons, and $`\sigma =0.15\sqrt{E}`$ for the $`W`$’s. We see that the central value does not shift, but the distribution is significantly wider.
### C Interference and Sensitivity to $`\mathrm{\Gamma }_t`$
Finally, we return to the subject of interference. As mentioned above, the interference between the production- and decay-stage radiation is sensitive to the total width of the top quark $`\mathrm{\Gamma }_t`$. However because the interference is in general small, we need to find regions of phase space where it is enhanced. This question was considered in Ref. in the soft gluon approximation, where it was found that the interference was enhanced when there was a large angular separation between the $`t`$ quarks and their daughter $`b`$’s.
Here we examine whether this result survives the exact calculation. Figure 6 shows that it does. There we plot the distribution in the angle between the emitted gluon and the top quark for gluon energies between 5 and 10 GeV and with $`\mathrm{cos}\theta _{tb}<-0.9`$. The center-of-mass energy is 750 GeV. The histograms show the decomposition into the various contributions. The negative solid histogram is the production-decay interference, and we see that not only is it substantial, it is also destructive. That means that the interference serves to suppress the cross section. If the top width is increased, the interference is larger, further suppressing the cross section. This is illustrated in Figure 7, which shows the cross section for different values of the top width. Although the sensitivity does not suggest a precision measurement, it is worth noting that the top width is difficult to measure by any means, and it is the total width that appears here. At the very least such a measurement would serve as a consistency check.
## IV Conclusion
In summary, we have presented preliminary results from an exact parton-level calculation of real gluon radiation in top production and decay at lepton colliders, with the $`b`$ quark mass and finite top width, as well as all spin correlations and interferences included. We have indicated some of the issues associated with this gluon radiation in top mass reconstruction and top width sensitivity in the gluon distribution. Further work is in progress.
This work was supported in part by the U.S. Department of Energy and the National Science Foundation.
# References
Massless Goldstone bosons arise from components of global symmetries which are spontaneously broken. There is no extra symmetry for Goldstone bosons in supersymmetry. Instead the supersymmetry forces complexification of scalars. This leads to an increased number of massless excitations in general, with complete doubling of the original number in some cases. The special cases when the coset space manifold, $`G/H`$, of the original Goldstone bosons is Kahler might be expected to be an exception in view of the seminal work of Zumino . This does indeed confirm that a non-linear supersymmetry model can be established without any increase in the number of Goldstone bosons. However the theorem frequently attributed to Lerche and Shore, following early work by Ong , appears to prove that such models can never result from constraining linear supersymmetric ones. The formal proofs in Lerche , and in Kotcheff and Shore , reveal a striking similarity to the work of Witten , in which the impossibility of partially breaking extended global supersymmetries $`(N>1)`$ to lower values was proposed. Indeed it was this similarity which prompted the current work. Once Bagger and Wess , and subsequently Hughes, Liu and Polchinski had produced (non–linear) counter examples to the Witten analysis it seemed likely that the Lerche and Shore proofs would also fail. The key contribution could be argued to be that of Hughes and Polchinski , which revealed that the original anticommutator algebra for supersymmetric charges had to be generalised to include a central term at the underlying current density level. They attributed this revision to the more modern viewpoint that supermembranes are just as fundamental as elementary particles in string theory.
This reinterpretation is the current starting point. The generalisation of
$$\{Q_{A\alpha },\overline{Q}_{B\dot{\beta }}\}=2(\sigma ^\mu )_{\alpha \dot{\beta }}P_\mu \delta _{AB}$$
(1)
to local form is
$$\partial _\mu T\left(j_{A\alpha }^\mu (x)\overline{j}_{B\dot{\beta }}^\nu (y)\right)=2\left(\sigma ^\rho \right)_{\alpha \dot{\beta }}T_\rho ^\nu \delta ^4(x-y)\delta _{AB}+2\left(\sigma ^\nu \right)_{\alpha \dot{\beta }}C_{AB}\delta ^4(x-y),$$
(2)
where Schwinger terms which are irrelevant to this analysis are ignored . The key feature is provided by the central terms $`C_{AB}`$ which give infinities of unclear covariance on integration over fixed volume. Possibly this is why it was previously overlooked. One might wonder if equation (2) could be restricted by the fact that the Hughes and Polchinski treatment was in two dimensions. But they appear to be taking advantage of the fact that $`T^{\mu \nu }`$ is not the unique conserved symmetric tensor since $`T^{\mu \nu }+C\eta ^{\mu \nu }`$ is also conserved. This does not depend on being in two or fewer dimensions. It seems that this is one of those situations where the symmetry of the Hamiltonian is larger than the symmetry of the $`S`$-matrix. At any rate equation (2) is clearly finite and Lorentz invariant. From it follow the usual consequences of degenerate multiplets for unbroken supersymmetries and Goldstone fermions for those that are broken. In momentum space, with $`C_{AB}`$ diagonal and $`<T^{\mu \nu }>=\mathrm{\Lambda }\eta ^{\mu \nu }`$, we have
$$q_\mu <j_{A\alpha }^\mu (q)\overline{j}_{A\dot{\beta }}^\nu >=2\left(\sigma ^\nu \right)_{\alpha \dot{\beta }}(\mathrm{\Lambda }+C_{AA})+O(q)$$
(3)
where there is no sum over $`A`$. For those $`A`$ such that $`\mathrm{\Lambda }+C_{AA}\ne 0`$, equation (3) implies a $`1/\not{q}`$ singularity in the two current correlations: $`j_{A\alpha }^\mu `$ couples the vacuum to a massless fermion with coupling strength $`[2(\mathrm{\Lambda }+C_{AA})]^{1/2}`$. It also follows that $`\mathrm{\Lambda }+C_{AA}\ge 0`$. It should now be obvious how to evade the extra unwanted Goldstone bosons, in the case where the underlying coset manifold is indeed Kahler. The classical analyses of Coleman, Wess and Zumino , and Callan, Coleman, Wess and Zumino were extended in the case of non-linearly realised supersymmetry by Volkov and Akulov . This paper follows the elegant treatment of Itoh, Kugo and Kunitoma based upon the very complete generalisations of the classical analyses by Bando, Kuramoto, Maskawa and Uehara , and the same authors in and . Finally, we bring attention to the further clarifications made by Volkov and so elegantly presented by Ogievetsky .
The crucial point of extending the underlying algebra of supercharge current densities by central terms has to be combined not merely with a Kahler $`G/H`$, but that manifold has to be re-expressed as a quotient space of the complexified $`G`$ (usually called $`G^c`$ ) by a maximally extended complex extension of $`H`$ (usually called $`\widehat{H}`$). In this treatment this will appear as an explicit mapping manifesting the homeomorphism between $`G/H`$ and $`G^c/\widehat{H}`$.
A concrete example is offered in the form of the simplest possible case of $`G/H=SU_2/U_1`$, usually called the complex projective space $`CP2`$, although it is a straightforward task to extend to all similar (i.e. Kahler) but more complicated cases. The starting point is a recent, interesting but incomplete, attempt to generalise the ideas of chiral perturbation theory to the supersymmetric level by Barnes, Ross and Simmons . It is instructive to see how the ambiguities arise in this chiral $`SU_2\times SU_2`$ based model and we adapt the notation of the original only slightly. The original (unconstrained) supersymmetric action is constructed from four (complex) chiral superfields. In components, with
$$y^m=x^m+i\theta \sigma ^m\overline{\theta },$$
(4)
these have the form
$`\mathrm{\Phi }(x,\theta ,\overline{\theta })`$ $`=`$ $`\varphi (y)+\sqrt{2}\theta \lambda _\varphi (y)+\theta ^2F_\varphi (y),`$ (5)
$`\mathrm{\Sigma }_3(x,\theta ,\overline{\theta })`$ $`=`$ $`\sigma _3(y)+\sqrt{2}\theta \lambda _3(y)+\theta ^2F_\sigma (y),`$ (6)
$`\mathrm{\Pi }_A(x,\theta ,\overline{\theta })`$ $`=`$ $`\pi _A(y)+\sqrt{2}\theta \lambda _A(y)+\theta ^2F_A(y),`$ (7)
where $`\sigma ^m=(1,\tau ^a)`$, and the $`\tau ^a`$ are the Pauli matrices $`(a=1,2,3).`$ The chiral superfields $`\mathrm{\Pi }_A(A=1,2)`$, and $`\mathrm{\Sigma }_3`$ transform as a triplet under $`SU_2`$, where the third direction is that of the intended spontaneous breaking, and $`\mathrm{\Phi }`$ is a singlet. It has previously been noted by Barnes, Generowicz and Grimshare that the chiral $`SU_2`$ generated by the first two components of the axial generators together with the third component of the vector generators leads indeed to a Kahler manifold of the type $`SU_2/U_1`$. This is embedded in the chiral $`\frac{SU_2\times SU_2}{SU_2}`$ structure exactly so as to give the $`\pi _A`$ pseudoscalar nature in their real parts, and correspondingly $`\varphi `$ and $`\sigma _3`$ scalar nature. The most general supersymmetric action is then written as
$$I=\int d^8z(\overline{\mathrm{\Phi }}\mathrm{\Phi }+\overline{\mathrm{\Sigma }}_3\mathrm{\Sigma }_3+\overline{\mathrm{\Pi }}^A\mathrm{\Pi }_A)+\int d^6sW+\int d^6\overline{s}\overline{W},$$
(8)
where the superpotential $`W`$ is a functional of chiral superfields only. Combining the $`\mathrm{\Sigma }_3`$ and $`\mathrm{\Pi }_A`$ fields into the matrix
$$M=\mathrm{\Sigma }_3\tau ^3+\mathrm{\Pi }_A\tau ^A,$$
(9)
where the chiral $`\gamma _5`$ factors are now suppressed, reveals that, under chiral $`SU_2\times SU_2`$, M transforms as
$$M\rightarrow LMR^{\dagger },$$
(10)
and taking
$$W=k(detM+f_\pi ^2)\mathrm{\Phi },$$
(11)
where $`k`$ is a constant, ensures that the model reduces to the usual bosonic chiral model below the supersymmetry breaking scale provided that $`f_\pi `$ is required to be real. Notice that $`\sigma ^2`$ of reference now appears in the guise of $`(\sigma _3)^2`$. The advantage of this change of notation will become clear later. This starting action now yields the potential
$$\begin{array}{cc}V\hfill & =F_\sigma \overline{F}_\sigma +F_A\overline{F}_A+F_\varphi \overline{F}_\varphi \hfill \\ & =4k^2\varphi \overline{\varphi }(\sigma _3\overline{\sigma }_3+\pi _A\overline{\pi }_A)\hfill \\ & +k^2[f_\pi ^2-\sigma _3^2-\pi _A\pi _A][f_\pi ^2-\overline{\sigma }_3^2-\overline{\pi }_A\overline{\pi }_A].\hfill \end{array}$$
(12)
The minimum of this potential is clearly $`V=0`$ which may be achieved by giving the fields the following $`SU_2\times SU_2`$ breaking vacuum expectation values (VEVs)
$$<\sigma _3>=f_\pi ,\qquad <\pi _A>=0=<\varphi >.$$
(13)
Importantly no auxiliary field acquires a VEV with these assignments so supersymmetry is manifestly not broken in this model. The formal limit $`k\rightarrow \mathrm{\infty }`$ leaves the action
$$I=\int d^8z(\mathrm{\Sigma }_3\overline{\mathrm{\Sigma }}_3+\mathrm{\Pi }_A\overline{\mathrm{\Pi }}_A+\mathrm{\Phi }\overline{\mathrm{\Phi }}),$$
(14)
with the superfields subject to the constraint
$$\mathrm{\Sigma }_3^2+\mathrm{\Pi }_A\mathrm{\Pi }_A=f_\pi ^2,$$
(15)
with the consequence that the superfield $`\mathrm{\Phi }`$ takes no part in the interactions and can be ignored as spectator field. Eliminating $`\sigma _3`$, $`\lambda _3`$ and $`F_3`$ by substituting the constraints into the kinetic part of the Lagrangian to obtain the leading term in the low momentum expansion (each fermion is considered to have associated with it a factor of the square root of the momentum scale) gives exactly the non-linear (Zumino) Lagrangian as reported in reference . As stated there, the interaction terms involving pseudo-Goldstone bosons and, or, the fermionic superpartners are not uniquely specified. However, the structure of the non-linear Lagrangian describing the Goldston pions alone (when the scalar fields are taken to be real, and the fermions supressed), is quite independent of the structure in which it is is now embedded. This applies also to the Kahler subset of fields, but in the previous conventional wisdom this subsector alone was prohibited from arising in this manner by the theorem of Lerche and Shore. Finally, it is necessary to express the manifold in the $`G^c/\widehat{H}`$ complex form. The key is to introduce the projection operator $`\eta `$ with the properties $`\eta ^2=\eta `$ and $`\eta ^{}=\eta `$ which is made possible because of the change in the underlying algebra of supercharge densities with the central terms. This can be taken, in this notation, to be
$$\eta =\frac{1+\tau _3}{2},$$
(16)
and it is trivial to confirm that the property
$$\widehat{h}\eta =\eta \widehat{h}\eta ,$$
(17)
picks out $`\tau ^+`$ and $`\tau ^3`$ as the four members of the complex subgroup. (There is a two way alternative choice at this point, but this has become standard.) With this in mind rewriting $`M`$ as
$$M=\mathrm{\Sigma }_3\tau ^3+\frac{i\mathrm{\Delta }\tau ^{+}}{2}-\frac{i\mathrm{\Gamma }\tau ^{-}}{2},$$
(18)
so that, now leaving out the $`f_\pi ^2`$ terms,
$$I=\int d^8z(\mathrm{\Sigma }_3\overline{\mathrm{\Sigma }}_3+\frac{\mathrm{\Gamma }\overline{\mathrm{\Gamma }}}{4}+\frac{\mathrm{\Delta }\overline{\mathrm{\Delta }}}{4}+\mathrm{\Phi }\overline{\mathrm{\Phi }}),$$
(19)
and
$$\begin{array}{cc}V\hfill & =4k^2\varphi \overline{\varphi }\left(\sigma _3\overline{\sigma }_3+\gamma \overline{\gamma }+\delta \overline{\delta }\right)\hfill \\ & +k^2\left[\sigma _3^2+\frac{\gamma \delta }{4}\right]\left[\overline{\sigma }_3^2+\frac{\overline{\gamma }\overline{\delta }}{4}\right].\hfill \end{array}$$
(20)
In the formal limit as $`k\rightarrow \mathrm{\infty }`$, the action becomes
$$I=\int d^8z\frac{\mathrm{\Gamma }\overline{\mathrm{\Gamma }}}{4},$$
(21)
as the constraints are satisfied by the superfield conditions
$$\mathrm{\Sigma }_3=0\quad \mathrm{and}\quad \mathrm{\Delta }=0.$$
(22)
The superfield $`\mathrm{\Phi }`$ can again be ignored as a non-interacting spectator. Notice that the single complex superfield $`\mathrm{\Gamma }`$ is all that remains in the action, and it is not constrained.
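As a consistency check of the parametrization (18) against the potential (20), the determinant of $`M`$ can be evaluated directly; the sketch below assumes the standard $`2\times 2`$ representation $`\tau ^{\pm }=(\tau ^1\pm i\tau ^2)/2`$ (our choice of normalization).

```python
# Symbolic check: det M reproduces the combination sigma_3^2 + gamma*delta/4
# appearing in the potential (20).
import sympy as sp

s3, gam, dlt = sp.symbols('sigma3 gamma delta')
tau3 = sp.Matrix([[1, 0], [0, -1]])
tau_p = sp.Matrix([[0, 1], [0, 0]])   # tau^+
tau_m = sp.Matrix([[0, 0], [1, 0]])   # tau^-

M = s3 * tau3 + sp.I * dlt / 2 * tau_p - sp.I * gam / 2 * tau_m
print(sp.expand(M.det()))             # -> -sigma3**2 - delta*gamma/4
# det M + f_pi**2 = 0 is then exactly sigma3**2 + gamma*delta/4 = f_pi**2.
```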
To describe the coset space of the chiral sphere , the real part of $`\pi ^A`$ is written as $`M^A`$, and so
$$L=\mathrm{exp}\left\{\frac{i}{2}\theta (\varphi )\frac{M_A\tau ^A}{\varphi }\right\},$$
(23)
where $`\theta (\varphi )`$ is any arbitrary function of
$$\varphi =[M_AM^A]^{1/2},$$
(24)
divided by the pion decay constant $`f_\pi `$, and where the chiral $`\gamma _5`$ dependence is again suppressed. The arbitrariness may be viewed as the freedom to change coordinate systems on the surface of the sphere, or to redefine the field variables describing the pions. Now this can alternatively be written in the form
$$L=\mathrm{exp}\left(\frac{i\gamma \tau ^{-}}{2}\right)\mathrm{exp}\left(\frac{i\delta \tau ^+}{2}\right)\mathrm{exp}\left(\frac{V\tau ^3}{2}\right),$$
(25)
using the complex subgroup $`\widehat{H}`$. Moreover the expression
$$\mathrm{exp}\left(\frac{i\gamma \tau ^{-}}{2}\right)$$
gives the explicit mapping of the homeomorphism between $`G/H`$ and $`G^c/\widehat{H}`$. In the general coordinate system
$$\frac{\gamma \overline{\gamma }}{4}=\mathrm{tan}^2\left(\frac{\theta }{2}\right),$$
(26)
and it is known from reference that the Kahler potential is given by
$$K=\mathrm{ln}\underset{\eta }{det}\left\{\mathrm{exp}\left(-\frac{i\overline{\gamma }\tau ^+}{2}\right)\mathrm{exp}\left(\frac{i\gamma \tau ^{-}}{2}\right)\right\},$$
(27)
where the notation indicates that the determinant is to be taken in the top left hand corner of the matrix in this representation. This reveals at once that
$$K=\mathrm{ln}\left[1+\frac{\overline{\gamma }\gamma }{4}\right]=\mathrm{ln}\left[\mathrm{sec}^2\left(\frac{\theta }{2}\right)\right]=V$$
(28)
which is the desired result. Note that, although the general coordinate notation is most convenient in this context, reliance is placed on the results of reference . The Kahler nature of the potential is demonstrated by diagonalizing the metric in stereographic coordinates, $`z`$ and $`\overline{z}`$, and revealing the holomorphic nature of the transformations in the usual manner.
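The determinant in (27) can also be checked symbolically; since $`\tau ^{\pm }`$ are nilpotent the exponentials truncate after the linear term, and the $`\eta `$-projected determinant is just the top left entry of the product (again assuming the standard $`2\times 2`$ representation).

```python
import sympy as sp

gam, gbar = sp.symbols('gamma gammabar')   # gamma and its conjugate,
tau_p = sp.Matrix([[0, 1], [0, 0]])        # treated as independent symbols
tau_m = sp.Matrix([[0, 0], [1, 0]])
I2 = sp.eye(2)

xi = I2 + sp.I * gam / 2 * tau_m           # exp(+i gamma tau^-/2), truncated
xi_dag = I2 - sp.I * gbar / 2 * tau_p      # exp(-i gammabar tau^+/2)

print(sp.expand((xi_dag * xi)[0, 0]))      # -> 1 + gamma*gammabar/4
# With gamma*gammabar/4 = tan^2(theta/2) this is sec^2(theta/2), Eq. (28).
```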
Although this demonstration that Kahler potentials can arise from constrained linear supersymmetric schemes has used only $`CP2`$, there seems no reason whatsoever why this cannot be generalised directly to larger Kahler manifolds – in particular to $`CPN`$.
The author is grateful to Professor D A Ross for raising his interest in this type of work. This work is partly supported by PPARC grant number GR/L56329.
# Topological Defects in Nematic Droplets of Hard Spherocylinders
## I Introduction
Liquid crystals (LC) show behavior intermediate between liquid and solid. The coupling between orientational and positional degrees of freedom leads to a large variety of mesophases. The microscopic origin lies in anisotropic particle shapes and anisotropic interactions between the particles that constitute the material. The simplest, most liquid-like LC phase is the nematic phase where the particles are aligned along a preferred direction while their spatial positions are, like in an ordinary liquid, homogeneously distributed in space. The preferred direction, called the nematic director, can be macroscopically observed by illuminating a nematic sample between crossed polarizers.
There are many different systems that possess a nematic phase. Basically, one can distinguish between molecular LCs where the constituents are molecules and colloidal LCs containing mesoscopic particles, e.g., suspensions of tobacco mosaic viruses . Furthermore there is the possibility of self-assembling rodlike micelles , that can be studied with small-angle neutron scattering .
There are various theoretical approaches to deal with nematic liquid crystals. On a coarse-grained level one may use Ginzburg-Landau theories, including phenomenological elastic constants. The central idea is to minimize an appropriate Frank elastic energy with respect to the nematic director field . Second, there are spin models, like the Lebwohl-Lasher model, see, e.g., Refs. . There the basic degrees of freedom are rotators sitting on the sites of a lattice and interacting with their neighbors. The task is to sample appropriately the configuration space. The third class of models consists of particles with orientational and positional degrees of freedom. Usually, the interaction between particles is modelled by an anisotropic pair potential. Examples are Gay-Berne particles, e.g. and hard bodies, e.g., hard spherocylinders (HSC) . Beginning with the classical isotropic-nematic phase transition for the limit of thin, long needles due to Onsager , our knowledge has grown enormously for the system of hard spherocylinders. The bulk properties have recently been understood up to close packing. The phase diagram has been calculated by computer simulations , density-functional theory and cell theory . There are various stable crystal phases, like an elongated face-centered cubic lattice with ABC stacking sequence, a plastic crystal, smectic-A phase, nematic and isotropic fluid. Besides bulk properties, one has investigated various situations of external confinement, like nematics confined to a cylindrical cavity or between parallel plates . Also effects induced by a single wall have been studied, like depletion-driven adsorption , anchoring , wetting , and the influence of curvature. Furthermore, solid bodies immersed in nematic phases experience non-trivial forces , and point defects experience an interaction .
Topological defects within ordered media are deviations from ideal order, loosely speaking, that can be felt at an arbitrarily large separation distance from the defect position. Complicated examples are screw dislocations in crystalline lattices and inclusions in smectic films . To deal with topological defects the mathematical tools of homotopy theory may be employed to classify all possible structures. The basic ingredients are the topology of both the embedding physical space and the order parameter space. For the case of nematics, there are two kinds of stable topological defects in 3d, namely point defects and line defects, whereas in 2d there are only point defects. These defects arise when the system is quenched from the isotropic to the nematic state . Also the dynamics have been investigated experimentally. On the theoretical side, there is the important work within the framework of Landau theory by Schopohl and Sluckin on the defect core structure of half-integer wedge disclinations and on the hedgehog structure in nematic and magnetic systems. The latter predictions have been confirmed with computer simulations of lattice spin models . The topological theory of defects has been used to prove that a uniaxial nematic either melts or exhibits a complex biaxial structure . Sonnet, Kilian and Hess have considered droplet and capillary geometries using an alignment tensor description.
The investigation of equilibrium topological defects in nematics has received a boost through a striking possibility to stabilize defects by imprisoning the nematic phase within a spherical droplet. The droplet boundary induces a non-trivial effect on the global structure within the droplet. Moreover, it can be experimentally controlled in a variety of ways to yield different well-defined boundary conditions, namely homeotropic or tangential ones. One famous experimental system is polymer-dispersed LCs. Concerning nematic droplets, there are various studies using the Lebwohl-Lasher model . There are investigations of the droplet shape , the influence of an external field , and chiral nematic droplets , structure factor , and ray propagation . Also simulations of Gay-Berne droplets have been performed . Other systems that exhibit topological defects are nematic emulsions , and defect gels in cholesteric LCs . The formation of disclination lines near a free nematic interface was reported .
In this work we are concerned with the microscopic structure of topological defects in nematics. We use a model for rod-like particles with a pair-wise hard core interaction, namely hard spherocylinders. It accounts for both, the orientational degrees of freedom as well as the positional degrees of freedom of the particles constituting the nematic. Especially, it allows for mobility of the defect positions. This system is investigated with Monte Carlo computer simulations. There exist successful simulations of topological line defects using hard particles, namely integer and half-integer line defects .
Here, we undertake a detailed study of the microscopic structure of the defect cores focusing on the behavior of the local nematic order and on the density field, an important quantity that has not been studied in the literature yet. As a theoretical prediction, we find that the arising half-integer point defects are surrounded by an oscillating density inhomogeneity. This can be verified in experiments. We also investigate the statistical properties of two defects interacting with each other extracting the distribution functions of the positions of the defect cores and their orientations. These are not accessible in mean-field calculations. We emphasize that both properties, the free-standing density wave which is due to microscopic correlations and the defect position distribution which is due to fluctuations, cannot be accessed by a coarse-grained mean-field type calculation.
The paper is organized as follows: In section II our theoretical model is defined, namely hard spherocylinders within a planar spherical cavity and on the surface of a sphere. For comparison, we also propose a simplified toy model of aligned rods. Section III is devoted to the analytical tools employed, such as order parameter and density profiles. Section IV gives details about the computer simulation techniques used. The results of our investigation are given in section V and we finish with concluding remarks and a discussion of the experimental relevance of the present work in section VI.
## II The Model
### A Hard Spherocylinders
We consider $`N`$ identical particles with center-of-mass position coordinates $`𝐫_i=(r_{xi},r_{yi})`$ and orientations $`𝐧_i`$, where the index $`i=1,\mathrm{\dots },N`$ labels the particles. Each particle has a rod-like shape: It is composed of a cylinder of diameter $`\sigma `$ and length $`L-\sigma `$ and two hemispheres with the same diameter capping the cylinder on its flat sides. In three dimensions (3d) this geometric shape is called a spherocylinder, see Fig.1 . The 2d analog is sometimes called discorectangle as it is made of a rectangle and two half discs. We assume a hard core interaction between any two spherocylinders that forbids particle overlap. Formally, we may write
$`U(𝐫_i,𝐧_i;𝐫_j,𝐧_j)=\{\begin{array}{cc}\mathrm{\infty }\hfill & \mathrm{if\ particles\ }i\mathrm{\ and\ }j\mathrm{\ overlap}\hfill \\ 0\hfill & \mathrm{else}\hfill \end{array}`$ (3)
The geometric overlap criterion involves a sequence of elementary algebraic tests. They are composed of scalar and vector products between the distance vector of both particles and both orientation vectors. The explicit form can be found e.g. in Ref. . The bulk system is governed by two dimensionless parameters, namely the packing fraction $`\eta `$, which is the ratio of the space filled by the particle “material” and the system volume $`V`$. In two dimensions it is given by $`\eta =(N/V)(\sigma (L-\sigma )+\pi \sigma ^2/4)`$. The second parameter is the anisotropy $`p=L/\sigma `$ which sets the length-to-width ratio. The bulk phase diagram in 3d was recently mapped out by computer simulation and density-functional theory . There are various stable crystal phases, like an elongated face-centered cubic lattice with ABC stacking sequence, a plastic crystal, smectic-A phase, nematic and isotropic fluid. Beginning with the classical isotropic-nematic phase transition for the limit of thin, long needles due to Onsager , our knowledge has grown enormously for the system of hard spherocylinders. The nematic phase is found to be stable for anisotropies $`p>5`$. In 2d the phase diagram is not known completely but there is an isotropic to nematic phase transition for infinitely thin needles . The nematic phase is also present in a system of hard ellipses verified by computer simulations. In 2d the nematic-isotropic transition was investigated using density-functional theory and scaled-particle theory . There is work about equations of state , and direct correlation functions within a geometrical framework.
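A compact transcription of this test is sketched below (a simplified routine, not the optimized algorithm of the cited reference): two spherocylinders overlap exactly when the shortest distance between their core segments of length $`L-\sigma `$ falls below $`\sigma `$.

```python
import numpy as np

def min_segment_distance(r1, u1, r2, u2, half):
    """Shortest distance between two segments of half-length `half`,
    centered at r1, r2 with unit directions u1, u2."""
    d = r1 - r2
    a = float(np.dot(u1, u2))
    b = float(np.dot(d, u1))
    c = float(np.dot(d, u2))
    denom = 1.0 - a * a
    # unclamped optimum; (nearly) parallel segments handled approximately
    lam = (a * c - b) / denom if denom > 1e-12 else 0.0
    lam = min(max(lam, -half), half)
    mu = min(max(c + lam * a, -half), half)   # best mu for this lam
    lam = min(max(mu * a - b, -half), half)   # re-optimize lam after clamping
    return float(np.linalg.norm(d + lam * u1 - mu * u2))

def spherocylinders_overlap(r1, u1, r2, u2, L, sigma):
    # Overlap criterion: core segments of length L - sigma closer than sigma.
    return min_segment_distance(r1, u1, r2, u2, 0.5 * (L - sigma)) < sigma
```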
### B Planar model
To align the particles near the system boundary homeotropically we apply a suitably chosen external potential. The particles are confined within a spherical cavity representing the droplet shape. The interaction of each HSC with the droplet boundary is such that the center of mass of each particle is not allowed to leave the droplet, see Fig.2. The corresponding external potential is given by
$`U_{\mathrm{ext}}(𝐫_i)=\{\begin{array}{ccc}0\hfill & \mathrm{if}\hfill & |𝐫_i|<R-L/2\hfill \\ \mathrm{\infty }\hfill & \mathrm{else}\hfill & \end{array}`$ (6)
where $`R`$ is the radius of the droplet and we chose the origin of the coordinate system as the droplet center. The system volume is $`V=\pi R^2`$. This boundary condition is found to induce a nematic order perpendicular to the droplet boundary as the particles try to stick one of their ends to the outside. Hence the topological charge is one. In the limit, $`p=1`$, we recover the confined hard sphere system recently investigated in 2d and 3d .
### C Spherical model
A second possibility to induce an overall topological charge is to confine the particles to a non-planar, curved space, which we chose to be the surface of a sphere in three-dimensional space. The particles are forced to lie tangentially on the sphere with radius $`R`$, see Fig.3. Mathematically, this is expressed as
$`|𝐫_i|`$ $`=R,`$ (7)
$`𝐫_i\cdot 𝐧_i`$ $`=0.`$ (8)
The director field on the surface of a sphere has to have defects. This is known as the “impossibility of combing a hedgehog”. The total topological charge is two. The topological charge is a winding number that counts the number of times the nematic director turns along a closed path around the defect. It may have positive and negative, integer or half-integer values, namely $`0,\pm 1/2,\pm 1,\mathrm{\dots }`$.
### D Aligned Rods
To investigate pure positional effects we study a further simplified model where the orientation of each rod is uniquely determined by its position. Therefore we consider an arbitrary unit vector field $`𝐧(𝐫)`$ describing a given nematic order pattern. In reality, the particles fluctuate around this mean orientation. Here, however, we neglect these fluctuations by imposing $`𝐧_i=𝐧(𝐫_i)`$. In particular, we chose the director field to possess a singular defect with topological charge $`t`$, see Fig.4. The precise definition of this director field $`𝐧^{(t)}\left(𝐫\right)`$ is postponed to the next section (and given therein in Eq.9.)
## III Analytical Tools
### A Order parameters
In order to analyze the fluctuating particle positions and orientations, we probe against a director field possessing a topological defect with charge $`t`$. It is given by
$$𝐧^{(t)}(𝐪,𝐫)=\underset{¯}{\underset{¯}{𝐃}}^{(t)}(𝐫)𝐪,$$
(9)
where the rotation matrix is
$`\underset{¯}{\underset{¯}{𝐃}}^{(t)}(𝐚)`$ $`=`$ $`\left(\begin{array}{cc}\mathrm{cos}\left(t\varphi \right)& -\mathrm{sin}\left(t\varphi \right)\\ \mathrm{sin}\left(t\varphi \right)& \mathrm{cos}\left(t\varphi \right)\end{array}\right),`$ (12)
with $`\varphi =\mathrm{arctan}(a_y/a_x)`$, and $`𝐚=(a_x,a_y)`$ being a 2d vector. The vector $`𝐪`$ is the orientation of particles if one approaches the defect along the $`x`$-direction.
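A direct transcription of Eqs. (9) and (12) might look as follows (a sketch; `numpy.arctan2` resolves the quadrant of $`\varphi `$, and for half-integer $`t`$ the field is defined only up to the global sign $`𝐧\rightarrow -𝐧`$).

```python
import numpy as np

def defect_director(t, q, r):
    """n^(t)(q, r) of Eq. (9): rotate the reference orientation q by t
    times the polar angle of the 2d position vector r."""
    phi = np.arctan2(r[1], r[0])
    ct, st = np.cos(t * phi), np.sin(t * phi)
    return np.array([ct * q[0] - st * q[1], st * q[0] + ct * q[1]])
```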
As an order parameter, we probe the actual particle orientations $`𝐧_i`$ against the ideal ones
$$S^{(t)}(𝐜,𝐪;r)=2\left\langle \left[𝐧_i\cdot 𝐧^{(t)}(𝐪,𝐫_i-𝐜)\right]^2\right\rangle _r-1,$$
(13)
where the radial average is defined as $`\langle \mathrm{\cdots }\rangle _r=\langle \sum _{i=1}^N\delta (|𝐫_i^{\prime }|-r)\mathrm{\cdots }\rangle /\langle \sum _{i=1}^N\delta (|𝐫_i^{\prime }|-r)\rangle `$, with $`𝐫_i^{\prime }=𝐫_i-𝐜`$, and $`\langle \mathrm{\cdots }\rangle `$ denotes an ensemble average. Normalization in Eq.13 is such that usually $`0\le S^{(t)}\le 1`$, where unity corresponds to ideal alignment, and zero means complete dissimilarity with the defect of charge $`t`$ at position $`𝐜`$ and vector $`𝐪`$, Eq.9. (In general, $`-1\le S^{(t)}<1`$ is possible, where negative values indicate an anti-correlation.)
If $`𝐜`$ and $`𝐪`$ are not dictated by general symmetry considerations (e.g. $`𝐜=0`$ because of the spherical droplet shape), we need to determine both quantities. To that end we measure the similarity of an actual particle configuration compared to a defect, Eq.9. We probe this inside a spherical region around $`𝐜`$ with radius $`R^{}`$ using
$$I^{(t)}(𝐜,𝐪)=\frac{2}{(R^{\prime })^2}\int _0^{R^{\prime }}𝑑r\,r\,S^{(t)}(𝐜,𝐪;r).$$
(14)
where $`R^{\prime }`$ is a suitably chosen cutoff length. We maximize $`I^{(t)}(𝐜,𝐪)`$ with respect to $`𝐜`$ and $`𝐪`$. The value at the maximum is
$$\lambda ^{(t)}=\underset{𝐜,𝐪}{\mathrm{max}}\{I^{(t)}(𝐜,𝐪)\},$$
(15)
and the argument at the maximum is $`𝐪^{(t)}`$.
Before summarizing the quantities we compute during the simulation, let us note that $`𝐪^{(t)}`$ and $`\lambda ^{(t)}`$ are the eigenvector and the corresponding (largest) eigenvalue of a suitable tensor. To see this, we attribute to each particle the general tensor
$`\underset{¯}{\underset{¯}{𝐐_i}}^{(t)}`$ $`=`$ $`2\left(\underset{¯}{\underset{¯}{𝐃}}^{(t)}(𝐫_i-𝐜)𝐧_i\otimes \underset{¯}{\underset{¯}{𝐃}}^{(t)}(𝐫_i-𝐜)𝐧_i\right)-\underset{¯}{\underset{¯}{\mathrm{𝟏}}},`$ (16)
where $`\otimes `$ denotes the dyadic product, $`\underset{¯}{\underset{¯}{\mathrm{𝟏}}}`$ is the identity matrix.
$$\underset{¯}{\underset{¯}{𝐐}}^{(t)}=\sum _i\underset{¯}{\underset{¯}{𝐐_i}}^{(t)}.$$
(17)
Note that for $`t=0`$ the usual bulk nematic order parameter is recovered<sup>*</sup><sup>*</sup>*The constants in Eq.16 depend on the dimensionality of the system and are different from 3d, where, e.g. $`\underset{¯}{\underset{¯}{𝐐}}^{(0)}=(3/2)\sum _i𝐧_i\otimes 𝐧_i-\underset{¯}{\underset{¯}{\mathrm{𝟏}}}/2`$ holds.. The order parameter profile, Eq.13, is then obtained as
$$S^{(t)}(𝐜,𝐪,r)=\left\langle 𝐪\cdot \underset{¯}{\underset{¯}{𝐐}}^{(t)}\cdot 𝐪\right\rangle _r,$$
(18)
and then the relation $`\lambda ^{(t)}𝐪^{(t)}=\underset{¯}{\underset{¯}{𝐐}}^{(t)}𝐪^{(t)}`$ holds, if the sum over $`i`$ in Eq.17 is restricted to particles located inside a spherical region of radius $`R^{\prime }`$ around $`𝐜`$.
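In practice the maximization can thus be done with standard linear algebra; the sketch below accumulates $`\underset{¯}{\underset{¯}{𝐐}}^{(t)}`$ for particles within the cutoff and diagonalizes it. The particle orientation is rotated back by $`t\varphi `$, a sign convention chosen here so that $`𝐪\cdot 𝐐\cdot 𝐪`$ reproduces Eq.18, and $`𝐐`$ is normalized per particle; both are our implementation choices.

```python
import numpy as np

def order_tensor_eigen(t, positions, orientations, c, cutoff):
    """Largest eigenvalue/eigenvector of Q^(t), Eqs. (16)-(17), which play
    the roles of lambda^(t) and q^(t) in Eq. (15)."""
    Q = np.zeros((2, 2))
    count = 0
    for r, n in zip(positions, orientations):
        rp = r - c
        if np.hypot(rp[0], rp[1]) > cutoff:
            continue
        phi = np.arctan2(rp[1], rp[0])
        ct, st = np.cos(t * phi), np.sin(t * phi)
        # rotate n back by t*phi (inverse of D^(t)):
        m = np.array([ct * n[0] + st * n[1], -st * n[0] + ct * n[1]])
        Q += 2.0 * np.outer(m, m) - np.eye(2)
        count += 1
    w, v = np.linalg.eigh(Q / max(count, 1))   # ascending eigenvalues
    return w[-1], v[:, -1]
```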
Let us next give three combinations of $`t,𝐜,𝐪`$ that apply to the current model. First, we investigate the (bulk) nematic order, $`t=0`$. We resolve this as a function of the distance from the droplet center, hence $`𝐜=0`$. The nematic director $`𝐪^{(0)}`$ is obtained from Eq.15 with $`R^{\prime }=R`$. The order parameter, defined in Eq.13, then simplifies to
$$S^{(0)}(r)=2\left\langle \left(𝐧_i\cdot 𝐪^{(0)}\right)^2\right\rangle _r-1,$$
(19)
Second, we probe for star-like order, hence $`t=1`$, $`𝐜=0`$. As we do not expect spiral arms of the star pattern to occur, we can set $`𝐪=𝐞_x`$, where $`𝐞_x`$ is the unit-vector in $`x`$-direction. We can rewrite Eq.13 as
$$S^{(1)}(r)=2\left\langle \left(𝐧_i\cdot \widehat{𝐫}_i\right)^2\right\rangle _r-1,$$
(20)
where $`\widehat{𝐫}_i=𝐫_i/|𝐫_i|`$.
Third, we investigate $`t=1/2`$ defects. To that end, we need to search for $`𝐜`$ and $`𝐪`$, as these are not dictated by the symmetry of the droplet. Hence we numerically solve Eq.15 with $`R^{\prime }=2L`$ (see Sec.IV B.) We obtain
$$S^{(1/2)}(r)=2\left\langle \left(𝐧_i\cdot 𝐧^{(1/2)}(𝐪^{(1/2)},𝐫_i-𝐜^{(1/2)})\right)^2\right\rangle _r-1.$$
(21)
The distribution of the positions of the particles is analyzed conveniently using the density profile $`\rho (r)`$ around $`𝐜`$, which we define as
$`\rho (r)=(2\pi r)^{-1}\frac{1}{N}\left\langle \sum _{i=1}^N\delta (|𝐫_i-𝐜|-r)\right\rangle .`$ (22)
We consider two cases: The density profile around the center of the droplet, i.e. $`𝐜=0`$, and around the position of a half-integer defect, $`𝐜=𝐜_1,𝐜_2`$.
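A histogram estimator of Eq.22 for a single configuration might look as follows (the ensemble average is then taken over many configurations; bin number and range are arbitrary choices).

```python
import numpy as np

def radial_density(positions, c, r_max, nbins, n_total):
    """Single-configuration estimator of rho(r), Eq. (22): counts in
    annuli around c, divided by 2*pi*r*dr and by the particle number."""
    dist = np.linalg.norm(positions - c, axis=1)
    counts, edges = np.histogram(dist, bins=nbins, range=(0.0, r_max))
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    dr = edges[1] - edges[0]
    return r_mid, counts / (2.0 * np.pi * r_mid * dr * n_total)
```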
It is convenient to introduce a further direction of a $`t=1/2`$ defect by
$`𝐝=\underset{¯}{\underset{¯}{𝐃}}^{(\frac{1}{2})}\left(𝐪^{(\frac{1}{2})}\right)𝐪^{(\frac{1}{2})}.`$ (23)
The vector $`𝐝`$ is closely related to $`𝐪^{(\frac{1}{2})}`$ by a rotation operation, where the rotation angle is the angle between $`𝐪^{(\frac{1}{2})}`$ and the $`x`$-axis. The direction $`𝐝`$ is where the field lines are radial; see the arrow in Fig.4.
### B Defect distributions
For a given configuration of particles the planar nematic droplet has a preferred direction given by the global nematic director $`𝐪^{(0)}`$. Each of the two topological defects has a position $`𝐜_i`$ and an orientation $`𝐝_i,i=1,2`$. These quantities can be related to each other to extract information about the average defect behavior and its fluctuations. In particular, we investigate the following probability distributions, each depending on a single distance or angle.
Concerning single defect properties, we investigate the separation distance from the droplet center,
$`P(r)=(2\pi r)^{-1}{\displaystyle \frac{1}{2}}{\displaystyle \sum _{i=1,2}}\delta (|𝐜_i|-r),`$ (24)
and the orientation relative to the nematic director,
$`P(\theta )={\displaystyle \frac{1}{2}}{\displaystyle \sum _{i=1,2}}\delta \left(\mathrm{arccos}\left(𝐝_i\cdot 𝐪^{(0)}\right)-\theta \right).`$ (25)
Between both defects there is a distance distribution,
$`P(c_{12})=(2\pi c_{12})^{-1}\delta (|𝐜_1-𝐜_2|-c_{12}),`$ (26)
and an angular distribution between defect orientations,
$`P(\theta _{12})=\delta \left(\mathrm{arccos}\left(𝐝_1\cdot 𝐝_2\right)-\theta _{12}\right),`$ (27)
which can equivalently be defined with $`𝐪_1^{(\frac{1}{2})},𝐪_2^{(\frac{1}{2})}`$ by using the identity $`\mathrm{arccos}(𝐝_1\cdot 𝐝_2)=2\mathrm{arccos}(𝐪_1^{(\frac{1}{2})}\cdot 𝐪_2^{(\frac{1}{2})}).`$
## IV Computer Simulation
### A Monte Carlo
All our simulations were performed with the canonical Monte-Carlo technique, keeping particle number N, volume V and temperature T constant; for details we refer to Ref. . To simulate spherocylinders with only hard interactions, a Monte-Carlo trial is accepted only if it produces no overlap between any particles. One trial always consists of a small variation of the position and orientation of one HSC.
For the planar case the translation of particle $`i`$ is constructed by adding a small random displacement $`\mathrm{\Delta }𝐫_i`$ to the vector $`𝐫_i`$; similarly, the rotation consists of adding a small random vector $`\mathrm{\Delta }𝐧_i`$, with $`\mathrm{\Delta }𝐧_i\cdot 𝐧_i=0`$, to the direction $`𝐧_i`$.
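As a concrete illustration, here is a minimal sketch of one such planar trial move for hard spherocylinders, including the hard-core overlap test via the closest distance between the axis segments (Python/NumPy; rod parameters, step sizes and the segment-distance routine are our own illustrative choices, not taken from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma = 10.0, 1.0           # rod length and diameter (illustrative)
dr_max, dn_max = 0.1, 0.1      # maximal trial displacement / rotation

def seg_dist(p1, u1, p2, u2, half):
    """Closest distance between two segments p +/- s*u with |s| <= half."""
    d, b = p1 - p2, u1 @ u2
    e, f = u1 @ d, u2 @ d
    den = 1.0 - b * b
    s = np.clip((b * f - e) / den if den > 1e-12 else 0.0, -half, half)
    t = np.clip(b * s + f, -half, half)
    s = np.clip(b * t - e, -half, half)
    return np.linalg.norm(d + s * u1 - t * u2)

def trial_move(pos, ori, i):
    """One canonical MC trial for rod i; accepted only without overlap."""
    r_new = pos[i] + dr_max * rng.uniform(-1, 1, 2)
    perp = np.array([-ori[i, 1], ori[i, 0]])     # satisfies dn . n = 0
    n_new = ori[i] + dn_max * rng.uniform(-1, 1) * perp
    n_new /= np.linalg.norm(n_new)               # keep |n| = 1
    for j in range(len(pos)):
        if j != i and seg_dist(r_new, n_new, pos[j], ori[j], L / 2) < sigma:
            return False                         # reject: hard-core overlap
    pos[i], ori[i] = r_new, n_new
    return True
```

Two spherocylinders of diameter $`\sigma `$ overlap exactly when the distance between their axis segments is below $`\sigma `$, which is why the segment distance suffices as the acceptance criterion.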
To achieve an isotropic trial on the surface of the sphere, the rotation matrix $`\underset{¯}{\underset{¯}{𝐌}}`$ is applied simultaneously to the vectors $`𝐫_i`$ and $`𝐧_i`$. It is defined as
$`\underset{¯}{\underset{¯}{𝐌}}:=\left(\begin{array}{ccc}1-c+\alpha ^2c& -\gamma s+\alpha \beta c& \beta s+\alpha \gamma c\\ \gamma s+\beta \alpha c& 1-c+\beta ^2c& -\alpha s+\beta \gamma c\\ -\beta s+\gamma \alpha c& \alpha s+\gamma \beta c& 1-c+\gamma ^2c\end{array}\right)`$ (31)
with $`s=\mathrm{sin}\mathrm{\Delta }\theta `$ and $`c=1-\mathrm{cos}\mathrm{\Delta }\theta `$. $`\alpha ,\beta ,\gamma `$ are, for every trial, randomly chosen Cartesian coordinates of the unit vector specifying the rotation axis, and $`\mathrm{\Delta }\theta `$ is a small random angle. With this method a simultaneous translation and rotation is achieved while the vectors $`𝐫_i`$ and $`𝐧_i`$ are kept normalized and perpendicular to each other.
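A compact implementation of this combined move, using the rotation matrix above, could look as follows (Python/NumPy; the maximal angle is an illustrative choice). Since both vectors are rotated by the same orthogonal matrix, $`𝐫_i`$ and $`𝐧_i`$ automatically stay normalized and perpendicular:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation_matrix(axis, dtheta):
    """Rodrigues rotation matrix, Eq. (31), for unit axis (alpha,beta,gamma)."""
    a, b, g = axis
    s, c = np.sin(dtheta), 1.0 - np.cos(dtheta)
    return np.array([[1 - c + a*a*c, -g*s + a*b*c,  b*s + a*g*c],
                     [ g*s + b*a*c, 1 - c + b*b*c, -a*s + b*g*c],
                     [-b*s + g*a*c,  a*s + g*b*c, 1 - c + g*g*c]])

def sphere_trial(r_i, n_i, dtheta_max=0.05):
    """Rotate position and orientation together for an isotropic trial."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                 # random rotation axis
    M = rotation_matrix(axis, dtheta_max * rng.uniform(-1, 1))
    return M @ r_i, M @ n_i
```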
The maximal variation in all cases is adjusted such that the probability of accepting a move is about fifty percent. The overlap criterion was checked by comparing the second virial coefficient of two- and three-dimensional HSC, for which the excluded volume of two HSC was calculated, with simulation results. Each of the runs (I)-(VII) was performed with $`5\times 10^7`$ trials per particle. One tenth of each run was discarded for equilibration. In particular, the strongly fluctuating distance distribution between both defects, $`P(c_{12})`$, needs good statistics. All quantities were averaged over 25 partial runs, from which error bars were also calculated.
An overview of the simulated systems is given in Tab.I. The systems (I)-(VII) are planar. System (I) is the reference. To study finite-size effects, system (II) has half as many particles, and system (III) has twice as many particles as (I). To investigate the dependence on the thermodynamic parameters, system (IV) has a lower packing fraction $`\eta `$, and system (V) a higher one, compared to system (I). The other thermodynamic parameter is the anisotropy, which is smaller for system (VI) and larger for system (VII) than for system (I). To keep the nematic phase stable for the short rods of system (VI), the packing fraction $`\eta `$ had to be increased. The packing fraction of the dense system (V) is $`\eta =0.4143`$. The spherical system has the same number of particles $`N`$, packing fraction $`\eta `$ and anisotropy $`p`$ as the reference (I). The radius of the sphere is half the radius of the planar droplet. The aligned rod model has the same parameters as the reference system (I).
### B Technical issues
We discuss briefly a projection method for the spherical problem and a search algorithm to find defect positions.
In order to perform calculations for the spherical system all interesting vectors in three dimensions are projected to a two-dimensional plane. Imagine a given vector $`𝐜`$ from the middle of the sphere pointing to an arbitrary point of the surface. We convert a position $`𝐫_i`$ and orientation $`𝐧_i`$ to the vectors $`𝐫_i^\mathrm{p}`$ and $`𝐧_i^\mathrm{p}`$ in a plane perpendicular to $`𝐜`$ through
$`𝐫_i^\mathrm{p}=𝐫_i-(𝐜\cdot 𝐫_i)𝐜,`$ (32)
$`𝐧_i^\mathrm{p}=𝐧_i-(𝐜\cdot 𝐧_i)𝐜.`$ (33)
After obtaining a set $`\{𝐫_i^\mathrm{p},𝐧_i^\mathrm{p}\}`$ of three-dimensional vectors in this way, we transform them into a set of two-dimensional vectors by standard algebraic methods. As a reference, the projection of the $`𝐱`$ unit vector of the fixed three-dimensional coordinate system always defines the x-orientation of the “new” two-dimensional coordinate system. The results show that curvature effects are small.
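A minimal sketch of this projection (Python/NumPy; the choice of in-plane basis follows the convention just described, all names are ours):

```python
import numpy as np

def project_to_plane(c, vectors):
    """Project 3d vectors into the plane perpendicular to the unit vector c
    (Eqs. 32/33) and express them in 2d in-plane coordinates."""
    e1 = np.array([1.0, 0.0, 0.0]) - c[0] * c    # projected x unit vector
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(c, e1)                         # completes the 2d basis
    out = []
    for v in vectors:
        vp = v - (c @ v) * c                     # Eq. (32) / Eq. (33)
        out.append([e1 @ vp, e2 @ vp])
    return np.array(out)
```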
To investigate the radial structure and interactions of the disclinations it is necessary to localize the centers of the two point defects. As described in the last section, the $`\lambda ^{(\frac{1}{2})}`$-parameter measures the degree of order of a half-integer defect in a chosen area, so the task is to find the two maxima of $`\lambda ^{(\frac{1}{2})}`$ in the droplet. In the planar case, we do this search with the following algorithm: A circular test-probe samples the droplet on a grid with a grid spacing of $`5\sigma `$. At these points all the particles in the circle are taken to calculate $`\lambda ^{(\frac{1}{2})}`$ in the described way. After sampling the grid, both maxima are stored and for every maximum a refining Monte-Carlo search is performed: the surrounding area, of the size of the grid spacing, is randomly sampled and the probe is moved only when $`\lambda ^{(\frac{1}{2})}`$ increases. The search is stopped when the probe has not moved for 200 trials. In the spherical case the method is the same, but the grid is projected onto the sphere surface and the calculations of $`\lambda ^{(\frac{1}{2})}`$ are performed with projected two-dimensional vectors as described before.
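The two-stage search can be summarized as follows (Python/NumPy; `lam` stands for the routine evaluating $`\lambda ^{(\frac{1}{2})}`$ from the particles inside the probe around a trial center and is assumed to be given; grid spacing, uphill-move rule and the 200-trial stop criterion follow the description above):

```python
import numpy as np

rng = np.random.default_rng(2)

def refine(lam, start, step, n_stop=200):
    """Stochastic hill climb: move the probe only when lam increases."""
    c, best, stuck = np.asarray(start, float), lam(start), 0
    while stuck < n_stop:
        trial = c + step * rng.uniform(-1, 1, 2)
        val = lam(trial)
        if val > best:
            c, best, stuck = trial, val, 0
        else:
            stuck += 1
    return c, best

def find_two_defects(lam, radius, step):
    """Grid scan over the droplet, then local refinement of both maxima."""
    xs = np.arange(-radius, radius + step, step)
    grid = sorted(((lam((x, y)), (x, y)) for x in xs for y in xs
                   if x * x + y * y <= radius * radius), reverse=True)
    first = grid[0][1]
    second = next(p for _, p in grid
                  if np.hypot(p[0] - first[0], p[1] - first[1]) > 2 * step)
    return [refine(lam, s, step) for s in (first, second)]
```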
It is important to choose an adequate radius $`R^{}`$ for the probe. If $`R^{}`$ is too large, the probe overlaps both defects. As they have opposite orientations on average, the located point of the maximum deviates from the point we are interested in. If $`R^{}`$ is too small, an ill-defined position results, as fluctuations become more important. The simulation results show that a good choice is $`R^{}=2L`$. Although this definition contains some freedom, we find the defect position to be a robust quantity. A detailed discussion is given in the following section.
## V Results
### A Order within the droplet
Let us discuss the order parameters $`S^{(t)}`$ as a function of the radial distance from the center of the droplet; see Fig.5. $`S^{(0)}`$ is the usual bulk nematic order parameter, but radially resolved. It reaches values of 0.6-0.75 in the middle of the droplet, $`r<2L`$, indicating a nematic portion that breaks the global rotational symmetry of the system. For $`r>3L`$, $`S^{(0)}`$ decays to values slightly larger than the isotropic value of $`0`$. The decrease, however, is not due to a microscopically isotropic fluid state, as can be seen from the behavior of $`S^{(1)}`$. This quantity indicates globally star-like alignment of particles for $`r>3L`$. It vanishes in the nematic “street” in the center of the droplet. The distance where $`S^{(0)}`$ and $`S^{(1)}`$ intersect is an estimate for the defect positions. In Fig.5, the finite-size behavior of $`S^{(t)}`$ is plotted for particle numbers $`N=1004,2008,4016`$ corresponding to systems (II), (I), (III). There is a systematic shift of the intersection point of $`S^{(0)}`$ and $`S^{(1)}`$ to larger values as the system grows; the numerical values are $`r/L=2.54,2.91,3.87`$. However, if $`r`$ is scaled by the droplet radius $`R`$, a slight shift to smaller values is observed as the system size grows. Keeping the medium-sized system (I) as a reference, we have investigated the impact of changing the thermodynamic variables. For different packing fractions, $`\eta `$=0.2894 (IV), 0.3321 (I), 0.4143 (V), we found that the intersection distances are $`r/L`$=3.90, 2.91, 1.43. In the bulk, upon increasing the density the nematic order grows. Here, this happens for the star-order $`S^{(1)}`$. But this increase comes at the cost of the nematic street (see $`S^{(0)}`$) at small $`r`$-values. Increasing $`\eta `$ leads to a compression of the inhomogeneous, interesting region in the center of the droplet. A similar effect can be observed upon changing the other thermodynamic variable, namely the anisotropy $`p`$. The nematic street is compressed for longer rods, $`p=31`$ (VII), $`r/L`$=1.33. Shorter rods, $`p=16`$, need a higher density to form a nematic phase, so the values for systems (I), $`r/L`$=2.91, and (VI), $`r/L`$=3.16, are similar, as both effects cancel out.
The behavior of $`S^{(1)}`$ is similar to the findings for a three-dimensional droplet, where a quadratic behavior near $`r=0`$ was predicted within Landau theory . A simulation study using the Lebwohl-Lasher model confirmed this finding and revealed that a ring-like structure that breaks the spherical symmetry is present. A comparison to the results for a 3d capillary by Andrienko and Allen seems qualitatively possible as they find alignment of particles predominantly normal to the cylinder axis. Their findings are consistent with the behavior of $`S^{(1)}`$. Although our system is simpler as it only has two spatial dimensions, we could also establish the existence of a director field that breaks the spherical symmetry by considering the order parameter $`S^{(0)}`$.
Having demonstrated that the system exhibits a broken rotational symmetry, we have to ensure that no freezing into a smectic or even crystalline state occurs. Therefore we plot radial density profiles $`\rho (r)`$, where $`r`$ is the distance from the droplet center, in Fig.6. The density shows pronounced oscillations for large $`r`$ near the boundary of the system. They become damped upon increasing the separation distance from the droplet boundary and practically vanish after two rod lengths for intermediate density and four rod lengths for high density. Approaching the droplet center, $`r=0`$, the density reaches a constant value for the weakly nematic systems (I), (IV), and (VI). For the strongly nematic systems, (V) with high density and (VII) with large anisotropy, a density decay at the center of the droplet occurs. This effect is not directly caused by the boundary, as the density oscillations due to packing effects are damped. It is rather due to the topological defects present in the system. Quantitatively, the relative decrease is $`\left[\rho (3L)-\rho (0)\right]/\rho (3L)`$ = 0.11 (V), 0.09 (VII). The finite-size corrections for systems (II) and (III) are negligible.
From both the scissor-like behavior of the nematic order (Fig.5) and the homogeneity of the density profile away from the system wall (Fig.6), we conclude that the system is in a thermodynamically stable nematic phase, and seems to contain two topological defects with charge 1/2.
In a 2d bulk phase, two half-integer (1/2) defects are more stable than a single integer (1) defect, as the free energy is proportional to the square of the charge. However, in the finite system of the computer simulation, which is also affected by influence from the boundaries, it could also be possible that the defect pair merges into a single one .
Next we investigate the defect positions and their orientations. To illustrate both, a snapshot of a configuration of the planar system is shown in Fig.7 (I). One can see the coupling of the nematic order from the first layer of particles near the wall to the inside of the droplet. The particles near the center of the droplet are aligned along a nematic director (indicated by the bar outside the droplet). The two emerging defects are depicted by symbols. See Fig.8 for a snapshot of the spherical system. There the total topological charge is not induced by a system boundary but by the topology of the sphere itself.
### B Defect core
The positions of the defects are defined by maxima of the $`\lambda ^{(\frac{1}{2})}`$ order parameter, see Section III for its definition. In Fig.9, $`\lambda ^{(\frac{1}{2})}`$ is plotted as a function of the spatial coordinates $`r_x`$ and $`r_y`$ for one given configuration. There are two pronounced maxima, indicated by bright areas, which are identified as the positions of the defect cores $`𝐜_1`$ and $`𝐜_2`$. There are several more local maxima appearing as gray islands. These are identified as statistical fluctuations already present in the bulk nematic phase.
A drift of the positions of a defect core was also reported in . Here we follow this motion to investigate the surroundings of the defects. The order parameter $`S^{(\frac{1}{2})}`$ is radially resolved around the defect position in Fig.10. It has a pronounced maximum around $`r=1.2L`$. For smaller distances it decreases rapidly due to disorder in the core region. For larger distances the influence from the second defect partner decreases the half-integer order $`S^{(\frac{1}{2})}`$. Increasing the overall density and increasing the anisotropy lead to a more pronounced hump. The finite-size corrections, (II), (III), and the boundary effects (sphere) are negligible. However, the curves show two artifacts: a rise near $`r=0`$ and a jump at the boundary of the search probe, $`r=2L`$. In the inset the profile around a bulk defect is shown. It has a plateau value inside the probe, $`r<2L`$, and vanishes outside. If we subtract this contribution from the pure data (I), continuous behavior at $`r=2L`$ can be enforced.
However, the model does not account for 3d effects like the “biaxial escape”, namely the sequence planar uniaxial - biaxial - uniaxial with increasing distance from the core center, as the particles are only 2d rotators. Schopohl and Sluckin found an interface-like behavior between the inner and outer parts of a disclination line in 3d. In our system, we do not find a sign of an interface between the isotropic core and the surrounding nematic phase. This might be due to a small interface tension and a very weak bulk nematic-isotropic phase transition.
By radially resolving the probability of finding a particle around a defect center, we end up with the density profiles depicted in Fig.11. The defect is surrounded by density oscillations with a wavelength of the particle length. The finite-size dependence is small. To estimate the influence from the system wall, one may compare with the spherical system. It shows slightly weaker oscillations. This might be due to curvature effects, as the effective packing fraction is slightly smaller because the linear particles may escape the spherical surface. The toy model of aligned rods also exhibits a non-trivial density profile, showing a decrease towards small distances and oscillations when compared to rotating rods. In all cases the first peak has a separation distance of half a particle length from the defect center. The second peak appears at $`r=3/2L`$. Again the search probe induces an artificial structure near $`r=2L`$. From this analysis, we can conclude that the oscillations are due to packing effects. The density oscillations become more pronounced at higher density and for larger anisotropy; see Fig.12.
### C Defect position
In the planar system, each defect is characterized by its radial distance $`r`$ from the center, and the angle $`\theta `$ between its orientation and the global nematic director $`𝐪^{(0)}`$. We discuss the probability distributions of these quantities. In Fig.13 the distribution for finding the defect at a distance $`r`$ from the center is shown. Generally, the distributions are very broad. This indicates large mobility of the defects. Changing the thermodynamic variables has a large effect. For the stronger nematic systems (V) and (VII), the distribution becomes sharper with a pronounced maximum at $`r=1.5L`$. Decreasing the packing fraction weakens the nematic phase, so system (IV) has a very broad distribution. The inset shows that the distribution becomes broader upon increasing system size.
### D Interactions between two defects
A complete probability distribution of both positions of the defect cores can be regarded as arising from an effective interaction potential $`V_{\mathrm{eff}}(𝐜_1,𝐜_2)`$ between the defects. The latter play the role of quasi-particles. The effective interaction arises from averaging over the particle positions while keeping the defect positions constant. The effective interaction and the probability distribution are related via $`P(𝐜_1,𝐜_2)\mathrm{exp}(\beta V_{\mathrm{eff}}(𝐜_1,𝐜_2))`$.
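A direct way to extract this effective potential from the sampled defect separations is Boltzmann inversion; a minimal sketch (Python/NumPy; `c12_samples` is assumed to hold the separations $`c_{12}`$ collected during the run, and kT is set to the athermal simulation unit):

```python
import numpy as np

def effective_potential(c12_samples, bins=40, kT=1.0):
    """V_eff(c12) = -kT ln P(c12), up to an irrelevant additive constant.
    Dividing by 2*pi*c12 removes the trivial 2d phase-space factor,
    matching the normalization of Eq. (26)."""
    hist, edges = np.histogram(c12_samples, bins=bins, density=True)
    r = 0.5 * (edges[1:] + edges[:-1])
    P = hist / (2.0 * np.pi * r)
    mask = P > 0
    return r[mask], -kT * np.log(P[mask])
```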
Instead of the full probability distribution, we show its dependence on the separation distance between both defects and on their relative orientation. In Fig.14 the probability distribution of finding two defects at a distance $`c_{12}`$ is shown. It has small values for small as well as large $`c_{12}`$. Hence at small distances the defects repel each other. At large distances their effective interaction is attractive. Increasing the nematic order by increasing the density (V) or rod length (VII) causes the average defect separation distance to shrink. The rise near $`r/L=1`$ is an artifact: these are events where the search algorithm does not find two different defects, but merely finds the same defect twice. To avoid the problem a cutoff at $`r=L`$ was introduced. The finite-size behavior is strong; see the inset. The large system (III) allows the defects to move further away from each other, whereas in the smaller system (II) they are forced to be closer together. However, from the simulation data, it is hard to obtain the behavior in the limit $`R/L\to \mathrm{\infty }`$.
This is somewhat in contrast to the phase diagram of a 3d capillary containing isotropic, planar-radial and planar-polar structures, if one is willing to identify the temperature dependence there with the density dependence of our athermal system. There it was found that the transition from the planar-polar to the planar-radial structure happens upon increasing the temperature (and hence decreasing the nematic order).
The difference angle $`\theta _{12}`$ between both defect orientations in the planar system, see Fig.15, is most likely $`\pi `$, hence the defects point on average away from each other. However, the orientations are not very rigid. For the least ordered system (IV) there is still a finite probability of finding the defects with a relative orientation of 90 degrees! Even for the strongly nematic systems (V) and (VII) the angular fluctuations are quite large. The inset in Fig.15 shows the distribution of the angle $`\theta `$ between the defect orientation and the global nematic director. A clear maximum near $`\pi /2`$ occurs. Again, the distributions become sharper as density or anisotropy increase.
### E Outlook
Finally, it is worth mentioning that the spherical system still contains surprises. See Fig.16 for an unexpected configuration, namely an assembly of three positive 1/2-defects sitting at the corners of a triangle and a negative -1/2-defect in its center. This is remarkable, because the negative defect could annihilate with one of the outer positive defects.
In all cases, integer defects seem to dissociate into half-integer defects. The complete equilibrium defect distribution of hard spherocylinders lying tangentially on a sphere remains an open question.
## VI Conclusions
In conclusion, we have investigated the microscopic structure of topological defects of nematics in a spherical droplet with appropriate homeotropic boundary and for particles lying on the surface of a sphere. We have used hard spherocylinders as a model system for a lyotropic nematic liquid crystal. This system allows us to study the statistical behavior of the microscopic rotational and positional degrees of freedom. For this system we find half-integer topological point defects in two dimensions to be stable. The defect core has a radius of the order of one particle length. As an important observation, the defect generates a free-standing density oscillation. It possesses a wavelength of one particle length. Considering the defects as fluctuating quasi-particles we have presented results for their effective interaction.
The microscopic structure revealed by radially resolving density and order parameter profiles around the defect position is identical for the planar and the spherical system.
An experimental investigation using anisotropic colloidal particles like tobacco mosaic viruses or carbon nanotubes is highly desirable to test our theoretical predictions. Then larger accessible system sizes can be exploited. Also of interest is the long-time dynamical behavior of the motion of topological defects. The advantage of colloidal systems over molecular liquid crystals is the larger length scale that enables real-space techniques like digital video-microscopy to be used.
From a more theoretical point of view it would be interesting to describe the microstructure of topological defects within the framework of density functional theory. Using phenomenological Ginzburg-Landau models, one could take the elastic constants of the HSC model as an input, and could calculate the defect positions and check against our simulations.
Finally we note that we are currently investigating three-dimensional droplets filled with spherocylinders. In this case more involved questions arise, as both point and line defects may appear.
Acknowledgment. It is a pleasure to thank Jürgen Kalus, Karin Jacobs, Holger Stark, and Zsolt Németh for useful discussions, and Holger M. Harreis for a critical reading of the manuscript.
# A Possible Energy Mechanism for Cosmological Gamma-ray Bursts
## 1 Introduction
Gamma-ray bursts (GRBs) are clearly the “signal” of an extremely energetic event, which lasts typically a few seconds. The recent observations of afterglows of GRBs by BeppoSAX provide strong evidence for a cosmological origin (Metzger et al. 1997). For example, GRB971214 is the third GRB with a known optical counterpart. It was detected by the BeppoSAX Gamma-ray Burst Monitor (Frontera et al. 1997) on Dec 14.97 1997 as a 40 s long-structured GRB, and the fluence ($`>`$ 20 keV) is $`10^{-5}\ \mathrm{erg}\ \mathrm{cm}^{-2}`$ (Kippen et al. 1997). GRB971214 was also observed by the All Sky Monitor on board the X-ray satellite XTE (Doty 1998). The fluence in the 2-12 keV band is estimated to be $`(1.8\pm 0.03)\times 10^{-7}\ \mathrm{erg}\ \mathrm{cm}^{-2}`$ (Kulkarni et al. 1998). GRB971214 is one of the especially interesting events since the burst energy may exceed $`10^{53}`$ ergs if its redshift is indeed 3.42 and the radiation is isotropic (Kulkarni et al. 1998). The observation of GRB971214 puts serious constraints on the existing theoretical models, especially on the neutron star merger models (e.g. Narayan et al. 1992), in which the entire energy produced during the coalescence is required to be released in gamma-rays in order to explain the observed power of GRB971214. However, the process of the annihilation of neutrinos and anti-neutrinos used in this model may fail to produce the required energy of GRBs (Janka et al. 1996). Furthermore, this model may also suffer from the so-called baryon contamination, which one has to construct a special model to avoid (Mészáros & Rees 1994).
Before BeppoSAX, no association of GRBs with normal galaxies (Fenimore et al. 1993) or Abell clusters (Hurley et al. 1997) had been found. Recently, studies of GRB afterglows have provided the small offsets of GRBs with respect to their host galaxies and the association of GRBs with dusty regions and star-formation regions, which favors faint galaxies (like normal galaxies) as the hosts of GRBs (Bloom et al. 1999). In fact, at least five GRBs have been found to be associated with normal galaxies. Paczynski (1998) suggests that this provides strong evidence that at least some GRBs are associated with normal galaxies. GRBs could be associated with nuclei of normal galaxies harboring a massive black hole, whose mass is typically $`10^5M_{\odot }\le M\le 10^6M_{\odot }`$ (Roland et al. 1994). It has also been argued that GRBs might be associated with some types of quasars, such as radio-quiet quasars (Schartel et al. 1997) and metal-rich quasars (Cheng et al. 1997; Cheng & Wang 1999). There are two classes of quasars in nature, i.e. inactive quasars and active quasars (Rees 1990). The host galaxies of inactive (quiescent) quasars may appear as normal galaxies (Burderi, King & Szuszkiewicz 1998). With the quasar luminosity function one can obtain the mean density of the observed active quasars in the Universe, about $`n_{QSO}=10^2\ \mathrm{Gpc}^{-3}`$ (Woltjer 1990; Osterbrock 1991). The duration of the active phase of a quasar is $`t_{QSO}\sim 10^8`$ yr. A similar result can also be obtained independently from the quasar statistics for a wide range of redshifts (Phinney 1992). Some observations seem to support this simple estimation (Artymowicz et al. 1993). This typical lifetime is much shorter than the Hubble time $`\sim 10^{10}`$ yr. It implies that the inactive quasars, where the central engine became extinct due to starving of fuel (Rees 1990), should have a mean number density $`n_{IQSO}=10^4\ \mathrm{Gpc}^{-3}`$. Rees (1984) pointed out that a massive black hole is one of the favored sources for powering a quasar; an inactive quasar would then retain its central massive black hole. Lynden-Bell (1969) first suggested that inactive quasars are in the form of massive underfed black holes, which may be present in the dense nuclei of normal galaxies, and Cannizzo (1990) has generalized this to the statement that most “normal” galaxies may also harbor black holes in their centers. In fact, the recent detection of a $`\gamma `$-ray flux along the direction of the Galactic center by EGRET on board the Compton GRO suggested that these $`\gamma `$-rays may originate close to the massive black hole (Mastichiadis & Ozernoy 1994). By a more careful treatment of the physics of p-p scattering, Markoff, Melia & Sarcevic (1997) suggest that a black hole of mass $`10^6M_{\odot }`$ may exist in the Galactic center, contributing to this high-energy emission.
In this paper, we investigate the possibility that inactive quasars harboring rapidly spinning black holes embedded in dense star clusters are the hosts of $`\gamma `$-ray bursts. When a star is tidally captured and disrupted by the black hole, part of the stellar matter will be swallowed by the black hole. A transient accretion disk/torus can then be formed and the inactive quasar will be activated (Rees 1988; Cannizzo 1990; Sanders 1984). During the active phase of the quasar, the rotational energy of the massive black hole can be extracted and quickly converted into the bulk kinetic energy of a magnetically driven outflow by the Blandford-Znajek process (Blandford & Znajek 1977). A relativistic fireball wind is formed and results in a GRB (e.g. Rees & Mészáros 1994). This paper is organized as follows. In section 2, we describe how the black hole captures a main sequence star and forms a transient accretion disk. The capture rate is also estimated. In section 3, we discuss under what conditions a strong energy outburst via the Blandford-Znajek mechanism can occur, and the various time scales associated with GRBs in this model are estimated. The GRB burst rate of this mechanism and a brief discussion are presented in section 4.
## 2 Tidal disruption and formation of accretion disk
The rate at which a massive black hole in a dense star cluster tidally disrupts and swallows stars has been studied by many authors (e.g. Hills 1975; Bahcall et al. 1976; Lightman et al. 1977). An important step was the realization by Frank and Rees (1976) and Lightman et al. (1977) that the problem was essentially one of loss-cone diffusion, i.e. diffusion in angular momentum rather than in energy. The maximum swallowing rate may occur for stars not near the black hole but at some relatively large distance from it, at the so-called critical radius where the root-mean-square angular diffusion of stellar velocity vectors (due to two-body encounters) over one orbital period is comparable to the angular cross section of the black hole (the loss-cone angle). The ideas presented in this section closely parallel those discussed by Rees (1988). The only difference is that Rees (1988) has argued that the disrupted star forms a thick hot ring at the tidal radius with a quite short accretion time scale, whereas we shall discuss the case of a thin accretion disk formed by the disrupted star, as suggested by Sanders (1984), which is also possible.
When the trajectory of a star happens to pass sufficiently close to the massive black hole, the star will be captured and eventually tidally disrupted. After a dynamical (orbital) time scale, the debris of a tidally disrupted star will form a transient accretion disk around the massive black hole with a radius typically comparable to the tidal radius (Rees 1988). On a time scale determined by the rate at which angular momentum can be transferred by a presumed turbulent viscosity, the torus of debris will be swallowed by the massive black hole. The inactive quasar with the massive black hole is activated only over this swallowing time scale, which is $`\sim `$ 1 yr for a thick hot ring (Rees 1988) and $`\sim 10^2`$ yrs for a thin cool disk. The following physical quantities are relevant to this problem:
(a) The maximum tidal disruption distance for a captured star
If a Newtonian approximation for the gravitational field is used, the tidal disruption radius is (Sanders 1989)
$$R_T=\left(\frac{6M_{BH}}{\pi \overline{\rho }_{\ast }}\right)^{1/3}=1.4\times 10^{13}\left(\frac{M_{\ast }}{M_{\odot }}\right)^{-1/3}\left(\frac{M_{BH}}{10^6M_{\odot }}\right)^{1/3}\left(\frac{R_{\ast }}{R_{\odot }}\right)\text{cm},$$
(1)
where $`\overline{\rho }_{\ast }`$ is the mean mass density of matter in the star, and $`R_{\ast }`$ and $`M_{\ast }`$ are the radius and mass of the incoming star, respectively. Notice that the ratio of $`R_T`$ to the Schwarzschild radius ($`r_s=\frac{2GM_{BH}}{c^2}`$) is about $`50M_6^{-2/3}`$ ($`M_6=M_{BH}/10^6M_{\odot }`$). So the Newtonian approximation is indeed a reasonable one if $`M_6\lesssim 1`$. We choose the typical black hole mass $`10^6M_{\odot }`$ because the GRB afterglow observations suggest that the host galaxies are likely faint normal galaxies (Bloom et al. 1999; Menten et al. 1997; Markoff, Melia & Sarcevic 1997; Mastichiadis & Ozernoy 1994; Roland et al. 1994).
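A quick numerical check of Eq. (1) and of the quoted ratio (Python; standard cgs constants, a solar-type star and $`M_6=1`$ assumed):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10              # cgs units
Msun, Rsun = 1.989e33, 6.957e10

M_bh, M_star, R_star = 1e6 * Msun, Msun, Rsun
rho_star = 3.0 * M_star / (4.0 * np.pi * R_star**3)     # mean stellar density
R_T = (6.0 * M_bh / (np.pi * rho_star)) ** (1.0 / 3.0)  # Eq. (1)
r_s = 2.0 * G * M_bh / c**2                             # Schwarzschild radius

print(f"R_T = {R_T:.2e} cm, R_T/r_s = {R_T / r_s:.0f}")
# -> R_T ~ 1.4e13 cm and R_T/r_s ~ 50, as quoted above
```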
(b) The tidal capture and disruption rate
We note that the Schwarzschild radius increases more rapidly with black hole mass than the tidal radius does. Therefore, a star captured by a black hole with mass $`M_{BH}\gtrsim 3\times 10^8M_{\odot }`$ will be swallowed before being tidally disrupted (Hills 1975). We can reasonably estimate the capture and tidal disruption rate $`\dot{N}_c`$ in terms of the density and velocities of the surrounding stars. For the star cluster scenario of our model we assume that the constituent stars are main sequence stars of identical proper masses. Lacy & Townes (1980) discussed that the internal dispersion velocity $`v_s`$ of these stars is $`\sim 100\ \mathrm{km}\ \mathrm{s}^{-1}`$ and their density $`n\sim 10^2\ \mathrm{pc}^{-3}`$ in the inner one parsec core of the galactic nucleus. Based on the capture rate of Cohn et al. (1978), we obtain:
$$\dot{N}_c\simeq 10^{-7}\left(\frac{M_{BH}}{10^6M_{\odot }}\right)^{2.33}\left(\frac{n}{1\times 10^2\mathrm{pc}^{-3}}\right)^{1.60}\left(\frac{v_s}{100\mathrm{km}\mathrm{s}^{-1}}\right)^{5.76}\left(\frac{M_{\ast }}{M_{\odot }}\right)^{1.06}\left(\frac{R_{\ast }}{R_{\odot }}\right)^{0.4}\mathrm{yr}^{-1}.$$
(2)
This could be an underestimate, since the observed stellar densities rise more rapidly than that used in Cohn’s fully relaxed stellar cusps (Cannizzo 1990). However, no serious modification to the above simple estimate of $`\dot{N}_c`$ should appear (Rees 1988).
(c) The evolution of the transient accretion disk
Cannizzo (1990) pointed out that two processes will supply mass to the central black hole after the tidal disruption of a star occurs: (1) the stream of stellar matter strung out in far-ranging orbits, and (2) mass loss from the inner edge of the accretion disk. Rees (1988, 1990) discussed the former process extensively and showed that the infall rate declines as $`t^{-5/3}`$; the evolution of the swallowing rate (accretion rate) is given by
$$\dot{M}=25M_6^{1/2}(t/t_D)^{-5/3}M_{\odot }\ \mathrm{yr}^{-1},$$
(3)
The Eddington accretion rate allowed for a massive black hole of $`10^6M_{\odot }`$ is
$`\dot{M}_{Edd}=10^{-2}M_6\ M_{\odot }\ \mathrm{yr}^{-1};`$ (4)
where $`t_D`$ is the dynamical or orbital time scale. For a central black hole of $`10^6M_{\odot }`$, the orbital period is given by Sanders (1984)
$$t_D=\mathrm{\Omega }^{-1}=\left(\frac{GM_{BH}}{R_T^3}\right)^{-1/2}=4.54\times 10^2M_6^{1/2}\ \mathrm{s}.$$
(5)
From Eq. (3) and Eq. (4), one finds that the accretion rate of the disk drops to $`\dot{M}_{Edd}`$ at $`\sim 0.5`$ days, so the most effective radiation is concentrated near the dynamical time. Owing to the continued mass supply at late times, the effective falloff of $`\dot{M}`$ with t may be less steep than $`t^{-5/3}`$. Cannizzo (1990) has discussed the case of the late-time evolution of $`\dot{M}`$ in which the accretion disk supply rate varies as $`t^{-1.2}`$.
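The $`\sim 0.5`$ day estimate follows directly by equating Eqs. (3) and (4); a short check for $`M_6=1`$ (Python, our own script):

```python
M6 = 1.0
t_D = 4.54e2 * M6**0.5          # dynamical time scale [s], Eq. (5)
Mdot0 = 25.0 * M6**0.5          # coefficient of Eq. (3) [Msun/yr]
Mdot_edd = 1e-2 * M6            # Eddington rate [Msun/yr], Eq. (4)

t = t_D * (Mdot0 / Mdot_edd) ** (3.0 / 5.0)   # invert (t/t_D)^(-5/3)
print(f"t = {t:.2e} s = {t / 86400:.2f} days")
# -> about 5e4 s ~ 0.6 days, consistent with the estimate above
```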
## 3 Energy mechanism of GRB
Once a transient accretion disk surrounding the rapidly spinning black hole is formed as a result of the processes described in the previous section, there could be an ordered poloidal field threading the black hole, associated with a current ring in the disk. This ordered field can extract energy via the Blandford-Znajek process, creating magnetically driven outflows (jets) along the rotation axis (Blandford & Znajek 1977). Irrespective of the detailed field structure, any magnetically driven outflows along the rotation axis are less loaded with baryons than in other directions (Mészáros 1997). The role of the disk is mainly to anchor the magnetic field: the power comes from the rapidly spinning black hole itself, whose rotational energy can be $`\sim 10^{60}M_6`$ ergs. However, while an adequate energy source for GRBs is available, like those discussed in the previous section, two major problems arise. Namely, how the rotational energy of the black hole can be converted into an adequate photon flux in the right energy range, and what causes these photons to appear in bursts of 1-100 s duration. Two classes of fireball models, which provide different explanations for the duration and variability of GRBs, are generally used. The first class comprises external shock models (e.g. Mészáros & Rees 1993), in which the burst is caused by the interaction (collision) between the fireball ejecta and the surrounding medium. The typical duration of a GRB is then given by the Doppler-delayed arrival times of the emission from the two boundaries of the ejected shell, or by the delay between different surface elements within the light cone. The second class comprises internal shock models (e.g. Rees & Mészáros 1994; Paczynski & Xu 1994), which relate the shocks to inhomogeneities within the relativistic outflow, e.g. the catching up of faster portions with slower portions of the flow. The duration of these shocks is likely to be given by the intrinsic duration of the energy release. According to the internal shock model of Rees & Mészáros (1994), relative motions within the outflow material will be relativistic and lead to internal shocks, resulting in relativistic fireball winds as well as triggering a GRB on a time scale of a few seconds if the mean Lorentz factor $`\mathrm{\Gamma }`$ fluctuates by a factor of $`\sim 2`$ around its mean value.
### 3.1 The energy of GRBs
The star tidally disrupted by the massive black hole is assumed to form a thin disk. According to the standard accretion disk theory (Shakura & Sunyaev 1973; Novikov & Thorne 1973), the maximum pressure in the disk can be approximated by
$`p_{d.max}=\{\begin{array}{cc}2\times 10^{10}(\alpha M_6)^{-1}\mathcal{F}_1\ \text{dyne cm}^{-2}\hfill & \dot{m}>\dot{m}_c\hfill \\ 8.013\times 10^{12}(\alpha M_6)^{-9/10}\dot{m}^{4/5}\mathcal{F}_2\ \text{dyne cm}^{-2}\hfill & \dot{m}<\dot{m}_c,\hfill \end{array}`$ (8)
and
$$\dot{m}_c=5.98\times 10^{-3}(\alpha M_6)^{-1/8}\left(\frac{\stackrel{~}{R}_1}{\stackrel{~}{R}_2}\right)^{5/4}.$$
(9)
The density $`n`$ of the disk at $`\dot{m}>\dot{m}_c`$ is
$`n`$ $`=`$ $`2.79\times 10^{20}\alpha ^{-1}M_{6}^{-1}\ \text{cm}^{-3};`$ (10)
where $`\alpha `$ is a viscosity parameter ($`0<\alpha <1`$), $`\dot{m}=\dot{M}/\dot{M}_{Edd}`$, and $`\dot{M}_{Edd}=L_{Edd}/c^2`$ with $`L_{Edd}=1.3\times 10^{44}M_6\ \mathrm{erg}\ \mathrm{s}^{-1}`$. $`\mathcal{F}_1\propto \stackrel{~}{r}_{ms}^{-3/2}`$, $`\mathcal{F}_2\propto \stackrel{~}{r}_{ms}^{-51/20}`$, $`\stackrel{~}{r}_{ms}=\frac{2r_{ms}}{r_s}`$ and $`r_{ms}`$ is the innermost stable orbit. The upper formula in equation (6) corresponds to the case of the maximum pressure dominated by radiation, while the lower formula in equation (6) corresponds to the case of the maximum pressure dominated by gas. So, for high accretion rates, the maximum pressure does not depend on the accretion rate. This can be checked by noting that the disk pressure scales as $`\dot{m}/[\alpha M(h/r)]`$ and that $`h/r\propto \dot{m}`$ for a radiation pressure dominated disk, where $`h`$ is the height of the accretion disk. For $`\dot{m}>\dot{m}_c`$, the maximum pressure in the innermost parts of the geometrically thin accretion disk is dominated by the radiation pressure. If magnetic viscosity is dominant, then
$$p_{d.max}=B_{eq}^2/8\pi \simeq p_r\gg p_g,$$
(11)
where $`p_r`$ and $`p_g`$ are the radiation pressure and gas pressure, respectively. From Eq. (6) and Eq. (9), we obtain that $`B_{eq}\sim 10^5M_6^{-1/2}`$ Gauss for the case of high accretion rate.
With the evolution of the disk, when the accreted matter reaches the inner region of the disk (inside $`3r_s`$), a bursting growth of the magnetic field occurs. It is possible that flux-freezing in the differentially rotating disk can cause the seed and/or generated magnetic field to wrap up tightly, becoming highly sheared and predominantly azimuthal in orientation: a manifestation of the dynamo effect elucidated by Parker (1970). Such poloidal fields can extract angular momentum from the disk, enabling efficient accretion of disk plasma onto the black hole. Haswell et al. (1992) estimate that the growth time scale of the magnetic field is $`\mathrm{\Delta }t_g\simeq \left(\frac{2}{9}\right)^{1/2}\left(\frac{L_B}{V_A\sqrt{R_m}}\right)^{1/4}\left(\frac{r_s}{c}\right)^{3/4}`$. The amplified magnetic field is limited by the gravitational energy,
$$B_{max}\simeq \frac{c}{V_A}\left(\frac{L_B}{r_s}\right)^{1/2}B_{eq}\simeq 2.4\times 10^9M_6^{-1/2}\ \mathrm{Gauss};$$
(12)
where $`V_A=B_{eq}/\sqrt{4\pi \rho }`$ is the Alfven velocity, with $`\rho =nm_p`$ and $`m_p=1.67\times 10^{-24}`$ g, $`R_m=10^{10}`$ is the magnetic Reynolds number (Haswell 1992), and $`L_B`$ ($`\simeq r_{ms}`$) is the radial size of the magnetic structure. For simplicity, we have chosen $`\alpha =1`$. Assuming $`M_{BH}\sim 10^6M_{\odot }`$ and the black hole accreting at the Eddington rate, $`\mathrm{\Delta }t_g\simeq 31`$ s and $`B_{max}\simeq 2.4\times 10^9`$ Gauss. According to Lynden-Bell & Pringle (1974), when the magnetic field is so strong that it dominates over the material pressure, all the material within the inner boundary region will fall onto the central black hole rapidly. It is noted that no viscous energy losses can take place at $`r<r_{ms}=3r_s`$ (Shakura & Sunyaev 1973). Therefore, the debris with mass $`\mathrm{\Delta }M`$ in the disk,
$`\mathrm{\Delta }M\simeq 2\pi r_{ms}h^2m_pn\simeq 2\times 10^{29}M_6^2\left({\displaystyle \frac{h}{r_{ms}}}\right)_{-2}^2\ \mathrm{gm};`$ (13)
where $`\left(\frac{h}{r_{ms}}\right)_{-2}=\left(\frac{h}{r_{ms}}\right)/10^{-2}`$, will fall onto the black hole on a free-fall time scale $`t_{ff}`$ given by
$$t_{ff}\simeq r_{ms}/c\simeq 28M_6\ \mathrm{s}.$$
(14)
This time scale is very close to the growth time scale $`\mathrm{\Delta }t_g`$ of $`B_{max}`$. We want to point out that $`B_{max}\simeq 2.4\times 10^9`$ G can occur no more than once during the entire accretion process. This is because it takes a viscous time scale $`(r/h)^2t_D\simeq 4.5\times 10^6(r/h)_2^2M_6^{1/2}`$ s to replenish the mass loss; by then, however, the accretion rate (cf. Eq. (3)) has become several orders of magnitude lower than the Eddington rate. Since in such disks no viscous energy losses take place at $`r<r_{ms}`$, the process of the massive black hole swallowing the debris must be completed by dissipating its own rotational energy ($`\int Pdt`$) and angular momentum ($`\int Pdt/\mathrm{\Omega }_F`$) via the Blandford-Znajek process. The total Blandford-Znajek power is given by (cf. Lee, Wijers & Brown 1999)
$$P=1.7\times 10^{50}A^2f(A)M_6^2\left(\frac{B_n}{10^9\ \mathrm{Gauss}}\right)^2\ \mathrm{erg}\ \mathrm{s}^{-1};$$
(15)
where $`B_n`$ is the intensity of the magnetic field component normal to the black hole horizon, and $`f(A)`$ grows from 0 at $`A=0`$ to 1.14 at $`A=1`$ and is a constant for a given $`A`$. $`A`$ is the dimensionless angular momentum ($`A=\frac{2GM_{BH}\mathrm{\Omega }_h}{c^3}`$), and $`\mathrm{\Omega }_h`$ is the angular velocity of the black hole. So, the maximum power extracted by the Blandford-Znajek process is
$`P_{max}`$ $`\simeq `$ $`10^{51}f(A)A^2M_6\ \mathrm{erg}\ \mathrm{s}^{-1};`$ (16)
where $`B_n=B_{max}`$ is used and the maximum GRB energy is given by
$$(E_b)_{max}\simeq 3\times 10^{52}A^2f(A)M_6^2\ \mathrm{ergs}.$$
(17)
We should note that the magnetic field generated during this process will accelerate plasma toward both polar directions by the $`J\times B`$ force, and the accelerated plasma forms bipolar relativistic jets (magnetically driven outflows) collimated by the magnetic force. This magnetic mechanism is the same as that proposed for AGN jets (Lovelace 1976; Blandford & Payne 1982; Pelletier et al. 1996; Meier et al. 1997), and similar to that suggested by Dai & Lu (1998a,b; 1999), who considered a postburst shock renewed by the electromagnetic waves radiated from a pulsar. According to the simulations of relativistic jets driven by non-steady accretion of a magnetized disk by Koide et al. (1998), there exist two-layered shell-structure jets consisting of a fast jet in the inner part and a slow jet in the outer part, both of which are collimated by the global poloidal magnetic field penetrating the disk. When the faster portions catch up with the slower portions of the outflow, an internal shock is produced which eventually triggers a GRB (Rees & Mészáros 1994). The typical Lorentz factor of this relativistic jet can be estimated as $`\mathrm{\Gamma }_{max}\simeq (E_b)_{max}/(c^2f\mathrm{\Delta }M)`$, where $`\mathrm{\Delta }M`$ is the mass of the debris falling onto the black hole and $`f`$ is the fraction of the debris material ejected into the relativistic jet. Since the accretion flow near the black hole should be isotropic, $`f\simeq \mathrm{\Delta }\mathrm{\Omega }/4\pi `$, where $`\mathrm{\Delta }\mathrm{\Omega }`$ is the solid angle of the jet. According to the observations of Active Galactic Nuclei (AGN), which should also contain massive black holes, the opening angle of the jet is $`15^{\circ }-30^{\circ }`$. So $`f`$ should be approximately 0.1-0.01, and the typical Lorentz factor of the jet material should be
$`\mathrm{\Gamma }_{max}\simeq 10^3f_{0.1}^{-1}\left({\displaystyle \frac{h}{r_{ms}}}\right)_{-2}^{-2};`$ (18)
where $`f_{0.1}=f/0.1`$ and $`(h/r_{ms})_{-2}=(h/r_{ms})/10^{-2}`$.
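The full order-of-magnitude chain of Eqs. (15)-(18) can be reproduced in a few lines (Python; $`M_6=1`$, $`A=1`$, $`f(A)=1.14`$, $`f=0.1`$ and $`h/r_{ms}=10^{-2}`$ assumed, as in the text):

```python
c = 2.998e10                        # cm/s
M6, A, fA = 1.0, 1.0, 1.14
B_max = 2.4e9 * M6**-0.5            # amplified field [Gauss], Eq. (12)
t_ff = 28.0 * M6                    # free-fall time [s], Eq. (14)

P = 1.7e50 * A**2 * fA * M6**2 * (B_max / 1e9)**2   # BZ power, Eq. (15)
E_b = P * t_ff                                      # burst energy [erg]
f = 0.1                                             # jet mass fraction
dM = 2e29 * M6**2                                   # infalling debris [g]
Gamma = E_b / (c**2 * f * dM)                       # bulk Lorentz factor

print(f"P ~ {P:.1e} erg/s, E_b ~ {E_b:.1e} erg, Gamma ~ {Gamma:.0f}")
# -> P ~ 1e51 erg/s, E_b ~ 3e52 erg, Gamma ~ 1e3, as quoted above
```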
### 3.2 The Variabilities of GRBs
Since there is unlikely to be any relativistic motion between the host galaxy and the Earth, the duration of the burst must be
$`\mathrm{\Delta }t_{burst}\simeq \mathrm{Max}(\mathrm{\Delta }t_g,t_{ff})\simeq 30M_6\ \mathrm{s};`$ (19)
In addition, there are at least two time variabilities associated with this type of GRB model. First, the debris material is likely broken into clumps/blobs, with a characteristic dimension $`h`$, by the strong magnetic field $`B_{max}`$ before being dragged onto the black hole. These blobs are expected to be wrapped up by magnetic field lines, so they can maintain their size for a diffusion time scale $`t_{diff}\simeq h^2/r_lc`$, where $`r_l`$ is the Larmor radius. Since protons inside the blobs should not have energies larger than $`\mathrm{\Gamma }m_pc^2`$, we find $`t_{diff}>10^{10}(h/r_{ms})_{-2}^2\mathrm{\Gamma }_3^{-1}B_9`$ s. Therefore the burst should consist of several sub-bursts, and the maximum number of sub-bursts is estimated as $`r_{ms}/h`$. It has been shown (Abramowicz & Zurek 1981) that the accretion flow close to $`r=r_{ms}=3r_s`$ is transonic. Due to general relativistic effects close to the black hole, the relation of its thickness $`h`$ to the radius $`r`$ in the range $`r_{ms}\gtrsim r\gtrsim 2r_s`$ is (Abramowicz 1985)
$`{\displaystyle \frac{h}{r}}`$ $`\simeq `$ $`10^{-2}\sqrt{\beta }/x;`$ (20)
where $`x`$ is a parameter depending on the mass of the black hole and the accretion rate; $`x\simeq 1`$ for stellar black holes and $`x\simeq 0.1`$ for massive black holes. $`\beta `$ is the ratio of the gas pressure to the total pressure of the disk, with $`1\gtrsim \beta \gtrsim 10^{-4}`$. Then, the number of sub-bursts is about
$`N_{max}=r_{ms}/h=10\beta _{-4}^{1/2}x_{0.1}^{-1}.`$ (21)
where $`\beta _{-4}=\beta /10^{-4}`$ and $`x_{0.1}=x/0.1`$. The typical number of sub-bursts of a GRB is about 10-$`10^2`$ (Sari & Piran 1997). Another fine time scale in this model is the shortest rising time of the burst. Each of these blobs moves with a Lorentz factor $`\mathrm{\Gamma }_{max}`$ and has a dimension $`h`$ in the comoving frame. When these blobs convert their kinetic energy into radiation, the rising time scale of the radiation should be
$`\mathrm{\Delta }t_{rise}\simeq {\displaystyle \frac{h}{\mathrm{\Gamma }c}}\simeq 3\times 10^{-4}\mathrm{\Gamma }_3^{-1}(h/r_{ms})_{-2}M_6\ \mathrm{s}.`$ (22)
where $`\mathrm{\Gamma }_3=\mathrm{\Gamma }/10^3`$. Here, we have used the fact that the size of the blobs does not evolve on a time scale of $`\sim 10^{10}`$ s because they are wrapped up by the magnetic field lines (cf. previous section).
### 3.3 GRBs with possible host galaxies
Several strong GRBs detected by BATSE and BeppoSAX are suggested to be associated with distant normal galaxies with redshift $`z>1`$ (except GRB980425 and GRB970228), whose observed features are listed in Table 1. Five of these GRBs show complex temporal structure; in particular, their variability time scale is significantly shorter than the duration. In fact, most GRBs show sub-burst structure, the number of sub-bursts is 10-$`10^2`$, and the rising time of the burst/sub-burst can be shorter than a millisecond (Greiner 1998). We use the properties of GRB971214 to illustrate our model. The fluence of the recently observed GRB971214 (Kulkarni et al. 1998) corresponds to
$`E_\gamma =({\displaystyle \frac{\delta \mathrm{\Omega }_\gamma }{4\pi }})10^{53.5}\mathrm{ergs};`$ (23)
which is consistent with $`(E_b)_{max}`$ if the beaming factor $`\frac{\delta \mathrm{\Omega }_\gamma }{4\pi }`$ is $`\sim 0.1`$. Taking $`M=1.3\times 10^6M_{\odot }`$, we can then fit the duration of GRB971214, $`\sim 40`$ s (cf. Eq. (17)). The number of sub-bursts of GRB971214 is $`r_{ms}/h\simeq 10`$ if $`\beta \simeq 10^{-4}`$, which is consistent with the fact that the radiation pressure dominates the gas pressure at high accretion rate. The shortest rising timescale is $`t_{rise}\simeq 3\times 10^{-3}`$ s.
## 4 Discussion
We have proposed a new GRB model in which the burst is associated with an inactive quasar/normal galaxy harboring a spinning massive black hole with mass $`\sim 10^6M_{\odot }`$ embedded in a star cluster; a speculative case is GRB971214. We have discussed the rising time, sub-burst time, duration and energy mechanism of GRBs. From the capture rate by the tidal force of the black hole (see Eq. (2)) and the number density of inactive quasars with spinning black holes (see $`n_{IQSO}`$), we can also estimate the rate at which an inactive quasar is converted into an active quasar by a tidally disrupted star: $`\rho =\dot{N}_cn_{IQSO}=1.55\times 10^{-6}\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}`$. If GRBs are only associated with inactive quasars harboring rapidly spinning massive black holes, then the density rate of burst events is $`\rho _{GRB}=\rho =1.55\times 10^{-6}\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}`$. This estimate is far lower than the observed rate ($`\sim 22.3\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}`$, Fenimore et al. 1993).
According to recent observations, at least five GRBs are associated with normal galaxies, which may contain a black hole with mass $`\sim 10^6M_{\odot }`$ like that in our Galaxy. Assuming other normal galaxies have properties similar to those of our Galaxy, Eq. (2) gives the capture rate $`\dot{N}_c\simeq 10^{-7}\ \mathrm{yr}^{-1}`$. The density of normal galaxies is $`n_{galaxy}\simeq 10^9\ \mathrm{Gpc}^{-3}`$. Combining this result with the capture rate in normal galaxies, the GRB density rate is
$`\rho _{GRB}`$ $`=`$ $`n_{galaxy}\dot{N}_c`$ (24)
$`=`$ $`10^2\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}.`$
If the beaming factor $`f`$ is included, the GRB rate will increase by a factor $`f^{-1/2}`$, because we are able to observe more distant GRBs, assuming the minimum detectable energy flux and the energy release of GRBs are constants.
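Putting the numbers together (Python; $`f=0.1`$ assumed for the beaming correction):

```python
N_dot_c = 1e-7       # captures per normal galaxy per year, Eq. (2)
n_galaxy = 1e9       # normal galaxies per Gpc^3
f = 0.1              # beaming factor

rate = n_galaxy * N_dot_c          # ~1e2 Gpc^-3 yr^-1
rate_beamed = rate * f ** -0.5     # enhanced by f^(-1/2), ~3e2
print(f"rate = {rate:.0f}, beamed = {rate_beamed:.0f}  [Gpc^-3 yr^-1]")
```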
Although our model resembles other GRB models (e.g. the microquasar model of Paczynski 1998; the failed supernova model of MacFadyen & Woosley 1998; Popham, Woosley & Fryer 1998) in that these models also suggest that the burst energy comes from a Kerr black hole via the Blandford-Znajek mechanism, our model differs from them in the following aspects. (1) Our accretion disk is formed transiently after capturing a main sequence star; the disk in other models is formed by mass coming either from the progenitor or ejected during the collapse or merger process. (2) Since the origin of the burst is so different, the estimation of the burst rate is completely different: in other models the burst rate relates either to the merger rate or to the rate of failed supernovae, while ours relates to the capture rate. (3) The accretion rate required by our model is the Eddington rate, but in other models it is always super-Eddington (usually 5 to 7 orders of magnitude above the Eddington rate; this still requires more detailed study). (4) The main energy release in our model is the rotational energy of the black hole, NOT the accretion energy (this is exactly why we need the Blandford-Znajek mechanism, which can extract the rotational energy of the black hole in an extremely efficient way). (5) The GRB source is a normal galaxy (inactive quasar) which is activated by capturing a main sequence star; the accretion disk in our model is therefore a transient accretion disk and the gamma-ray burst is triggered by the disk instability. The identification of host galaxies with GRBs does not conflict with our model.
Although our mechanism is also similar to that of quasar/AGN models (e.g. Blandford & Znajek 1977), there are two major differences. First, the maximum outburst power (cf. Eq. (14)) is much larger than that of the flare state of a quasar because a transient magnetic field $`B_{max}`$ is expected to occur in GRB models. This exponential increase of the magnetic field from $`B_{eq}`$ to $`B_{max}`$ is due to an instability in the inner region of the disk, e.g. mass accumulation increasing the viscosity and the local accretion rate and eventually developing an instability. Second, the flare state of quasars/AGNs can be repeated on time scales from days to years, whereas our model does not predict repeated GRBs: after the GRB, the disk has lost a mass ring (cf. Eq. (11)), and it takes a viscous time scale $`\sim 4\times 10^6`$ s to replenish the mass. By then the accretion rate of the disk has decreased to $`\sim 10^{-3}`$ of the Eddington rate (cf. Eq. (3)), and the radiation pressure $`p_{max}`$ (cf. the lower part of Eq. (6)) is too low to produce enough magnetic field to trigger a GRB again.
Acknowledgments:
We thank an anonymous referee for important suggestions and Z.G.Dai, J.H.Fan, T.Lu, J.M.Wang, D.M.Wei and L.Zhang for very useful discussions and comments. This work is partially supported by a RGC grant of Hong Kong Government and the National Natural Science Foundation of China.
# Use of the double dispersion relation in QCD sum rules with external fields
## Abstract
In QCD sum rules with external fields, the double dispersion relation is often used to represent the correlation function. In this work, we point out that the double spectral density, when it is determined by successive applications of the Borel transformation, contains the spurious terms which should be kept in the subtraction terms in the double dispersion relation. They are zero under the Borel transformation but, if the dispersion integral is restricted with QCD duality, they contribute to the continuum. For the simple case with zero external momentum, it is shown that subtracting out the spurious terms is equivalent to the QCD sum rules represented by the single dispersion relation.
preprint: TIT/HEP-423/NP
The QCD sum rule is widely used in studying hadronic properties based on QCD . In this framework, a correlation function is introduced as a bridge between the hadronic and QCD representations. On the QCD side, the perturbative part and the power corrections are calculated in the deep space-like region ($`q^2\to -\mathrm{\infty }`$) using the operator product expansion (OPE), which is then used to extract the hadronic parameter of concern by matching with the corresponding hadronic representation.
In matching the two representations, it is crucial to represent the correlator using a dispersion relation. In the nucleon mass sum rule, for example, the single-variable dispersion relation is used. With this, the QCD correlator calculated in the deep space-like region can be related to its imaginary part defined in the time-like region, which is then compared with the corresponding hadronic spectral density to extract the hadronic parameter of concern. The hadronic spectral density contains contributions from higher resonances as well as the pole from the low-lying resonance of concern. To subtract out the continuum, QCD duality is invoked above a certain threshold, where the continuum contribution is equated to the perturbative part of QCD. This duality restricts the dispersion integral below the continuum threshold in the matching. Therefore, the predictive power of QCD sum rules relies heavily on the duality assumption. Indeed, in quantum mechanical examples, parton-hadron duality works well for two-point correlation functions .
Often, within the QCD sum rule framework, a correlation function with an external field is considered to calculate, for example, pion-nucleon couplings , or the nucleon magnetic moment . In such a case, as two baryonic lines propagate through the correlator at the tree level, the double-variable dispersion relation can be invoked to represent the correlation function. Namely,
$`\mathrm{\Pi }(p_1^2,p_2^2)={\displaystyle \int _0^{\mathrm{\infty }}}𝑑s_1{\displaystyle \int _0^{\mathrm{\infty }}}𝑑s_2{\displaystyle \frac{\rho (s_1,s_2)}{(s_1-p_1^2)(s_2-p_2^2)}}+\mathrm{subtractions}.`$ (1)
The subtraction terms serve to eliminate infinities coming from the integral. They are usually polynomials in $`p_1^2`$ or $`p_2^2`$, which vanish under the Borel transformations. Thus, the subtraction terms should not contribute to the sum rules. As before, QCD duality is imposed to the correlator, which restricts again the dispersion integral below a certain threshold for both integration variables, $`s_1`$ and $`s_2`$.
In general, the double spectral density $`\rho (s_1,s_2)`$ in the double dispersion relation of Eq. (1) is obtained formally by matching the correlation function with its corresponding OPE $`\mathrm{\Pi }^{\mathrm{ope}}`$ (in this work, we focus mainly on the OPE terms which contribute to the continuum) calculated in the deep Euclidean region using QCD degrees of freedom. That is,
$`\mathrm{\Pi }(p_1^2,p_2^2)=\mathrm{\Pi }^{\mathrm{ope}}(p_1^2,p_2^2)+\mathrm{subtractions}.`$ (2)
The LHS contains the spectral density in the integrals as given in Eq. (1). To solve for the spectral density $\rho (s_1,s_2)$ from this formal equation, successive Borel transformations must be applied to both sides. This eliminates the unnecessary subtraction terms. In doing so, the integrals disappear, reducing the equation to a simple relation between the spectral density and a given OPE. However, such a spectral density, when put back into the double dispersion integral Eq. (1), reproduces the original OPE only up to some subtraction terms. Of course, the additional subtraction terms do not matter if a sum rule is constructed using precisely Eq. (1). But in fact, in constructing the continuum contribution, the duality argument is imposed, which further restricts the dispersion integral. If this restricted form of the dispersion integral is used, the subtraction terms do not vanish even after the Borel transformations. In this work, we point out with some explicit examples that QCD sum rules using the double dispersion relation contain these spurious contributions.
To proceed, we first demonstrate how the spectral density in the double dispersion relation is usually determined . The Borel transformation $\widehat{B}(M^2,Q^2)$ is defined as
$\widehat{B}(M^2,Q^2)f(Q^2)=\lim _{Q^2,n\rightarrow \infty ,\;Q^2/n=M^2}\frac{(Q^2)^{n+1}}{n!}\left(-\frac{d}{dQ^2}\right)^nf(Q^2).$ (3)
With this definition, the Borel transformation converts the $`Q^2`$ dependence of the function $`f`$ into the Borel mass dependence, $`M^2`$. In doing so, polynomials in $`Q^2`$ vanish. By applying the double Borel transformations on Eq. (1), we obtain,
$\widehat{B}(M_2^2,p_2^2)\,\widehat{B}(M_1^2,p_1^2)\,\Pi (p_1^2,p_2^2)=\int _0^{\infty }ds_1\int _0^{\infty }ds_2\,\rho (s_1,s_2)\,e^{-s_1/M_1^2-s_2/M_2^2},$ (4)
where we have used the formula,
$\widehat{B}(M^2,Q^2)\left(\frac{1}{Q^2+\mu ^2}\right)^n=\frac{1}{\Gamma (n)\,(M^2)^{n-1}}\,e^{-\mu ^2/M^2}.$ (5)
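The limit definition in Eq. (3) can be checked symbolically. The following sympy sketch (our own illustration, not part of the original derivation) recovers Eq. (5) for the $n=1$ case:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
M2, mu2 = sp.symbols('M2 mu2', positive=True)

# For f(Q^2) = 1/(Q^2 + mu^2), the k-th derivative gives
# (-d/dQ^2)^k f = k!/(Q^2 + mu^2)^(k+1), so the pre-limit expression of
# Eq. (3) collapses to (Q^2/(Q^2 + mu^2))^(k+1).  Substitute Q^2 = n*M^2
# (the Borel limit keeps Q^2/n = M^2 fixed) and send n to infinity:
pre_limit = (n*M2 / (n*M2 + mu2))**(n + 1)
print(sp.limit(pre_limit, n, sp.oo))   # exp(-mu2/M2), i.e. Eq. (5) with n = 1
```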
To eliminate the integral, we further perform additional double Borel transformations and obtain,
$\widehat{B}\left(\tau _2^2,\frac{1}{M_2^2}\right)\widehat{B}\left(\tau _1^2,\frac{1}{M_1^2}\right)\widehat{B}(M_2^2,p_2^2)\,\widehat{B}(M_1^2,p_1^2)\,\Pi (p_1^2,p_2^2)=\rho \left(\frac{1}{\tau _1^2},\frac{1}{\tau _2^2}\right),$ (6)
where another formula for the Borel transformation,
$\widehat{B}(M^2,Q^2)\,e^{-a^2Q^2}=M^2\,\delta (a^2M^2-1),$ (7)
has been used. Note that in this derivation $s_1=1/\tau _1^2\geq 0$ and $s_2=1/\tau _2^2\geq 0$, thus restricting the spectral density to the region $s_1,s_2\geq 0$.
The OPE spectral density can be obtained by applying this operation to a given OPE. Note also that the integration interval should include the point provided by the delta functions. If the interval does not include the point $s_1=1/\tau _1^2$ or $s_2=1/\tau _2^2$, then Eq. (6) is not conclusive. We stress that the spectral density determined via Eq. (6) is correct within this context, i.e., obtained by successive applications of the Borel transformation on the dispersion integral running from zero to infinity. The subtraction terms, as they vanish under the Borel transformations, can be chosen freely.
In practice, however, QCD sum rules require a certain assumption for the high-energy part of the correlator, QCD duality. With this assumption, the dispersion integral is restricted below the continuum threshold $S_0$, and the Borel-transformed sum rule becomes,
$\int _0^{S_0}ds_1\int _0^{S_0}ds_2\,\rho ^{\mathrm{ope}}(s_1,s_2)\,e^{-s_1/M_1^2-s_2/M_2^2}=\int _0^{S_0}ds_1\int _0^{S_0}ds_2\,\rho ^{\mathrm{phen}}(s_1,s_2)\,e^{-s_1/M_1^2-s_2/M_2^2}.$ (8)
$\rho ^{\mathrm{phen}}(s_1,s_2)$ is obtained from the hadronic representation of the correlator, while $\rho ^{\mathrm{ope}}(s_1,s_2)$ is obtained via Eq. (6) for a given OPE. The LHS, restricted below the continuum threshold $S_0$, can be calculated directly using the spectral density obtained from Eq. (6). Another, equivalent method, which is more useful for our discussion, is to calculate the LHS via
$\widehat{B}(M_2^2,p_2^2)\,\widehat{B}(M_1^2,p_1^2)\left[\Pi ^{\mathrm{ope}}(p_1^2,p_2^2)-\int _{S_0}^{\infty }ds_1\int _{S_0}^{\infty }ds_2\,\frac{\rho ^{\mathrm{ope}}(s_1,s_2)}{(s_1-p_1^2)(s_2-p_2^2)}\right].$ (9)
Note that the continuum is subtracted from the OPE using the duality argument. Mixed integration regions like $\int _{S_0}^{\infty }ds_1\int _0^{S_0}ds_2$ or $\int _0^{S_0}ds_1\int _{S_0}^{\infty }ds_2$ do not contribute because, as we will see, the spectral density is proportional to $\delta (s_1-s_2)$, at least in the examples considered in this work. The integral in the second term is bounded below by $S_0$. As we have stressed above, in determining the spectral density via Eq. (6) it is important that the dispersion integral runs from zero to infinity. Since the second integral is bounded below by $S_0$ due to the duality argument, it is not guaranteed that the subtraction terms as written in Eq. (1) drop out of the sum rule. This is the main question to be addressed in this work.
Let us now see how this question is realized in QCD sum rules with external fields. To do so, we consider as an example the two-point correlation function with a pion,
$\Pi (q,p_\pi )=i\int d^4x\,e^{iq\cdot x}\,\langle 0|T[J_N(x)\overline{J}_N(0)]|\pi (p_\pi )\rangle ,$ (10)
where $`J_N`$ is the nucleon interpolating field proposed by Ioffe . To be specific, let us take a typical OPE from this correlation function,
$\Pi _1^{\mathrm{ope}}=\int _0^1du\,\phi _p(u)\,\mathrm{ln}\left[-(q-up_\pi )^2\right]=\int _0^1du\,\phi _p(u)\,\mathrm{ln}\left[-up_2^2-(1-u)p_1^2\right].$ (11)
Here $p_1^2=q^2$ and $p_2^2=(q-p_\pi )^2$. We have taken the limit $p_\pi ^2=m_\pi ^2=0$, as is usually done in light-cone QCD sum rules . Note that Eq. (11) also contains terms polynomial in $p_1^2$ or $p_2^2$, but we have not written these subtraction terms explicitly. For the twist-3 pion wave function, we take its asymptotic form, $\phi _p(u)=1$ . With higher conformal spin operators the wave function takes a more complicated form, but our claims in this work remain valid even with more general wave functions. We will discuss this point later.
To obtain the double spectral density, we apply the operation given in Eq. (6). For $\Pi _1^{\mathrm{ope}}$, we straightforwardly obtain
$\rho _1^{\mathrm{ope}}(s_1,s_2)=s_1\,\delta (s_1-s_2).$ (12)
Note that the spectral density is defined only in the region $s_1,s_2\geq 0$. Therefore, the spectral density should be understood as being multiplied by the step functions $\theta (s_1)\theta (s_2)$. To include the entire region where $\rho (s_1,s_2)\neq 0$, the lower boundary of the dispersive integral should be understood as $0^{-}$, an infinitesimal but negative value. This subtlety does not matter in this example but is important in later examples.
Normally, $\rho _1^{\mathrm{ope}}$ is simply used in the QCD sum rule Eq. (9) without careful justification. To see the problem with this spectral density, we put this expression into Eq. (1) and perform the integrations using the Feynman parametrization,
$\int _0^{\infty }ds_1\int _0^{\infty }ds_2\,\frac{\rho _1^{\mathrm{ope}}(s_1,s_2)}{(s_1-p_1^2)(s_2-p_2^2)}=\int _0^1du\int _0^{\infty }ds\,\frac{s}{[s-up_2^2-(1-u)p_1^2]^2}$ (13)
$=\int _0^1du\left[\frac{-up_2^2-(1-u)p_1^2}{s-up_2^2-(1-u)p_1^2}\Big |_0^{\infty }+\mathrm{ln}\left[s-up_2^2-(1-u)p_1^2\right]\Big |_0^{\infty }\right].$ (14)
The second term in the last line is the anticipated logarithmic term matching the OPE of Eq. (11). In other words, the second term alone reproduces the Borel-transformed OPE of Eq. (11). This means that the first term is spurious: it vanishes under Borel transformations with respect to the variables $p_1^2$ and $p_2^2$. Therefore, it is part of the subtraction terms and should not contribute to the QCD sum rule. That is, we have to subtract out this term, using our freedom to choose the subtraction terms. Of course, this subtraction by hand is not necessary if the sum rule is used in the context of Eq. (4). However, in practice, the sum rule is used in the context of Eq. (9), invoking QCD duality. The continuum part of this subtraction term,
$\int _0^1du\,\frac{-up_2^2-(1-u)p_1^2}{s-up_2^2-(1-u)p_1^2}\Big |_{S_0}^{\infty },$ (15)
becomes, under the double Borel transformation,
$S_0\,M^2\,e^{-S_0/M^2},\qquad \mathrm{where}\quad \frac{1}{M^2}=\frac{1}{M_1^2}+\frac{1}{M_2^2}.$ (16)
This nonzero continuum is spurious, as it originates from the subtraction term. The prescription for including the continuum presented in Ref. is recovered when this spurious continuum is kept, and it is often used in light-cone QCD sum rules . However, keeping this term while neglecting the OPE subtraction terms in Eq. (11) is inconsistent.
We now consider a slightly different OPE from Eq. (10),
$\Pi _2^{\mathrm{ope}}=\int _0^1du\,\phi _p(u)\,\frac{1}{-up_2^2-(1-u)p_1^2}.$ (17)
Again we take the asymptotic form of the pion wave function, $\phi _p(u)=1$, for simplicity. Note that when the external momentum is zero we have $\Pi _2^{\mathrm{ope}}=-1/p^2$ ($p_1^2=p_2^2=p^2$). It is clear that this OPE should not contribute to the continuum. Even if the sum rule is constructed with nonzero external momentum, this feature should be recovered whenever we take the external momentum to zero. Under successive applications of the Borel transformation, it is straightforward to obtain the corresponding spectral density,
$\rho _2^{\mathrm{ope}}(s_1,s_2)=\delta (s_1-s_2)\,\theta (s_1).$ (18)
Here we put the step function explicitly because the subtlety associated with the lower boundary affects the discussion. Using this spectral density, Eq. (1) becomes
$\int _0^1du\int _{0^{-}}^{\infty }ds\,\frac{\theta (s)}{[s-up_2^2-(1-u)p_1^2]^2}.$ (19)
Note that we take $0^{-}$ for the lower limit of the integral in order to ensure that the integration includes the entire region where $\rho _2(s_1,s_2)\neq 0$. Integration by parts leads to
$\int _0^1du\int _{0^{-}}^{\infty }ds\,\frac{\delta (s)}{s-up_2^2-(1-u)p_1^2}-\int _0^1du\,\frac{\theta (s)}{s-up_2^2-(1-u)p_1^2}\Big |_{0^{-}}^{\infty }.$ (20)
Here the first term yields the anticipated OPE of Eq. (17), and the second term is zero because its lower limit lies just below zero. Note also that this separation becomes possible because of the subtlety with the lower boundary. If there were no such subtlety, we would not have the first term containing the delta function; but then, in the limit of zero external momentum, Eq. (17) could not equal Eq. (20), which does not make sense.
It is the second term that should be regarded as part of the subtraction terms. Of course, this separation has no mathematical significance by itself. It is, however, important physically because it enables us to identify where the spurious contribution to the sum rule comes from. That is, the second term, when its lower limit is changed to $S_0$, survives the Borel transformations and contributes to the continuum. Once again, we have identified a spurious subtraction which contributes to the sum rule.
Up to now, with two simple examples, we have shown that QCD sum rules invoking the double dispersion relation contain spurious terms originating from the subtraction terms. They contribute to the sum rule when QCD duality is imposed. This is a general statement as long as QCD sum rules are constructed using Eqs. (8),(9) while the spectral density is determined via Eq. (6). In general, for the correlator of Eq. (10) as an example, the OPE contains complicated functions like the pion wave functions. The general twist-3 pion wave function can be written as
$`\phi _p(u)={\displaystyle \underset{k}{}}a_ku^k.`$ (21)
Using this general form, $`\mathrm{\Pi }_2^{\mathrm{ope}}`$ under the double Borel transformations becomes
$`\phi _p(u_0)M^2`$ (22)
where $u_0=M_1^2/(M_1^2+M_2^2)$ and $M^2=M_1^2M_2^2/(M_1^2+M_2^2)$. Under additional Borel transformations, the spectral density can be shown to be proportional to derivatives of $\delta (s_1-s_2)$. The dispersion integral of Eq. (1) can then be performed via integrations by parts but, in doing so, the boundary terms become part of the subtraction terms and produce spurious continuum contributions when the sum rule is combined with QCD duality. In general, it is difficult to eliminate these spurious terms systematically. This is a generic problem of sum rules using the double dispersion relation combined with QCD duality.
Now, let us consider the simple case of zero external momentum. As a specific example, we consider the two-point correlation function with a pion at vanishing pion momentum (the soft-pion limit), or the same correlation function with one power of the pion momentum taken out as an overall factor and $p_\pi ^\mu =0$ in the rest (beyond the soft-pion limit). Even in this case, the double dispersion relation has been proposed as a correct representation of the correlator . The double dispersion relation might be useful for treating the phenomenological side properly, but the spurious terms still persist.
The double spectral density in this case takes the form
$\rho (s_1,s_2)=\rho (s_1)\,\delta (s_1-s_2).$ (23)
Since the two correlator momenta are equal in this case, the delta function appears as a part of the spectral density. The double dispersion relation Eq. (1) reduces to
$\Pi (p^2)=\int _0^{\infty }ds\,\frac{\rho (s)}{(s-p^2)^2}+\mathrm{subtractions}.$ (24)
Unlike the single dispersion relation, the correlation function here contains the square of $s-p^2$ in the denominator. The spectral density $\rho (s)$ is then obtained from
$\widehat{B}\left(\tau ^2,\frac{1}{M^2}\right)\widehat{B}(M^2,p^2)\,\Pi (p^2)=\frac{\partial \rho (s)}{\partial s}\Big |_{s=1/\tau ^2}.$ (25)
Thus, we can determine the derivative of the spectral density for a given OPE. The OPE corresponding to Eq. (11) is $\mathrm{ln}(-p^2)$. We do not need to worry about the pion wave function $\phi _p(u)$, since only its overall normalization, which is fixed to unity, enters in this case.
Substituting $\mathrm{ln}(-p^2)$ into Eq. (25), we obtain
$\frac{\partial \rho (s)}{\partial s}\Big |_{s=1/\tau ^2}=1\;\;\Rightarrow \;\;\rho (s)=s+\mathrm{constant}.$ (26)
The constant term, when put into the dispersion integral, yields a term proportional to $1/p^2$. To be consistent with the logarithmic behavior of the OPE, the constant must be zero. The remaining spectral density leads to the dispersion integral,
$\int _0^{\infty }ds\,\frac{s}{(s-p^2)^2}=-\frac{p^2}{s-p^2}\Big |_0^{\infty }+\mathrm{ln}(s-p^2)\Big |_0^{\infty }.$ (27)
The second term on the RHS is what we anticipated; it is what one would have obtained had the single dispersion relation been used, and it alone reproduces the Borel-transformed OPE $\mathrm{ln}(-p^2)$. But, as before, the first term on the RHS is spurious. This term is zero under the Borel transformation, but its continuum piece gives a nonzero contribution to the sum rule. Note that the first term can be separated as
$-\frac{p^2}{s-p^2}\Big |_0^{\infty }=-\frac{p^2}{s-p^2}\Big |_0^{S_0}-\frac{p^2}{s-p^2}\Big |_{S_0}^{\infty }.$ (28)
What is interesting is that the contribution from the upper limit of the first term cancels the one from the lower limit of the second term. These two terms, associated with the continuum threshold, survive separately under the Borel transform even though their sum is zero. If the spectral density of Eq. (26) is simply used in the sum rule of Eq. (8), then the lower limit of the second term contributes to the continuum, while the upper limit of the first term does not enter the sum rule Eq. (8) at all. Once again, the spurious nature of this continuum is obvious.
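The same pre-limit construction used in the sketch after Eq. (5) makes this asymmetry explicit: the boundary term cut off at $S_0$ survives the Borel limit, while the full $0$ to $\infty $ boundary term is a constant and vanishes. A minimal sympy check (ours; the rewriting of the boundary term is spelled out in the comments):

```python
import sympy as sp

S0, M2, n = sp.symbols('S0 M2 n', positive=True)

# Boundary term of Eq. (28) between S0 and infinity, written with the
# Euclidean variable Q^2 = -p^2 > 0:
#   -p^2/(s - p^2)|_{S0}^{inf} = p^2/(S0 - p^2) = -1 + S0/(Q^2 + S0).
# The constant -1 is annihilated by the Borel operator.  For the rest,
# (-d/dQ^2)^k [S0/(Q^2 + S0)] = S0 * k!/(Q^2 + S0)^(k+1), so the pre-limit
# Borel expression of Eq. (3), with Q^2 = n*M^2, reads:
pre_limit = S0 * (n*M2 / (n*M2 + S0))**(n + 1)
print(sp.limit(pre_limit, n, sp.oo))   # S0*exp(-S0/M2), nonzero
# The full 0..infinity boundary term is the constant -1, whose Borel
# transform is zero -- this is exactly the spurious continuum.
```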
Another way to support our claim is to consider the correlator Eq. (10) in the soft-pion limit. According to the soft-pion theorem, the correlator becomes the commutator with the axial charge $`Q_5`$,
$\Pi (q,p_\pi =0)\sim i\int d^4x\,e^{iq\cdot x}\,\langle 0|[Q_5,T[J_N(x)\overline{J}_N(0)]]|0\rangle .$ (29)
The commutator, when evaluated using the commutation relations for the quark fields, becomes an anticommutator,
$\Pi (q,p_\pi =0)\sim \left\{\gamma _5\,,\;i\int d^4x\,e^{iq\cdot x}\,\langle 0|T[J_N(x)\overline{J}_N(0)]|0\rangle \right\}.$ (30)
This means that, in the soft-pion limit, Eq. (10) is equivalent to the chiral-odd nucleon sum rule, which should be represented by the single dispersion relation in the construction of its sum rule. Note that in deriving this we have used only the soft-pion theorem and the commutation relations for the quark fields. Therefore, this is an operator identity that must always be satisfied. But if one starts from the double dispersion relation and takes the soft-pion limit afterward, then Eq. (30) is not satisfied exactly, due to the presence of spurious terms like $-\frac{p^2}{s-p^2}\big |_0^{\infty }$ in Eq. (27).
Indeed, the spurious terms lead to the different continuum factors appearing in Ref. . A similar discussion can be found in Ref. , where we simply pointed out a pole at the continuum threshold without clarifying its spurious nature. It is currently under debate whether or not to keep the terms in question in QCD sum rules with external fields, but it is now clear from our discussion that they are spurious. In any case, once the spurious subtractions are eliminated, what is left is the same sum rule that we would have obtained via the single dispersion relation. Our argument applies generally to other OPE terms contributing to the continuum, and it can be shown that subtracting the spurious continuum leads to the sum rule invoking the single dispersion relation. Normally, the continuum contribution is expressed through the factors $E_n(x\equiv S_0/M^2)=1-(1+x+\mathrm{\cdots }+x^n/n!)\,e^{-x}$. Subtracting the spurious continuum replaces the continuum factor, $E_n(x)\rightarrow E_{n-1}(x)$ for $n\geq 1$ . The changes due to the spurious term are sometimes large, as discussed in Ref. . In Ref. , Figure 1 shows, for the sum rule with the $i\gamma _5p_\mu \gamma ^\mu $ structure, how this spurious continuum changes the Borel curve of the $\pi NN$ coupling. There, the Borel curve with the spurious continuum varies within the range 11.5 - 11.2, but without the spurious continuum the variation range becomes 15 - 30, clearly showing the large effect of the spurious terms. The $\pi NN$ coupling extracted from this Borel curve would be quite different from what we know experimentally. As discussed in Ref. , however, this only indicates that the Dirac structure $i\gamma _5p_\mu \gamma ^\mu $ is not adequate for calculating the coupling; the $i\gamma _5$ or $\gamma _5\sigma _{\mu \nu }q^\mu p^\nu $ Dirac structures are more useful for this purpose . Nevertheless, from this consideration we see that the spurious continuum can become substantial and should therefore be treated carefully.
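The numerical impact of the replacement $E_n\rightarrow E_{n-1}$ is easy to gauge. The short sketch below (ours, with illustrative values of $x=S_0/M^2$) evaluates the continuum factors defined above:

```python
import math

def E(n, x):
    """Continuum factor E_n(x) = 1 - (1 + x + ... + x^n/n!) * exp(-x)."""
    partial = sum(x**k / math.factorial(k) for k in range(n + 1))
    return 1.0 - partial * math.exp(-x)

for x in (0.5, 1.0, 2.0):              # x = S0/M^2, illustrative values
    e0, e1 = E(0, x), E(1, x)
    print(f"x = {x}:  E1 = {e1:.3f}   E0 = {e0:.3f}   E0/E1 = {e0/e1:.2f}")
# The factor changes by a factor of a few at moderate x, consistent with
# the large shift of the piNN Borel curve quoted above.
```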
In summary, we have pointed out that QCD sum rules with external fields employing a double dispersion relation can contain a spurious continuum originating from the subtraction terms. This spurious contribution appears because the spectral density obtained via successive applications of the Borel transformation is not compatible with QCD duality. The spurious terms should be subtracted out using the freedom in choosing the subtraction terms of QCD sum rules. In the case of zero external momentum, subtracting the spurious terms is equivalent to constructing the sum rule with the single dispersion relation. Of course, our finding affects only the continuum contributions, but in some cases this modification leads to significant corrections to QCD sum rule results, as presented in Ref. .
###### Acknowledgements.
This work is supported by Research Fellowships of the Japan Society for the Promotion of Science.
# On the Spin History of the X-ray Pulsar in Kes 73: Further Evidence For an Ultramagnetized Neutron Star
## 1. Introduction
The discovery of 12-s pulsed X-ray emission from the compact source within Kes 73 (Vasisht & Gotthelf 1997; herein VG97, Gotthelf & Vasisht 1997) came somewhat as a surprise, as this Einstein source (1E 1841$-$045) had been studied for some time (Kriss et al. 1985; Helfand et al. 1994). The pulsar was initially detected in an archived ASCA observation (1993) of Kes 73 (also SNR G27.4+0), and soon confirmed in an archived ROSAT dataset. The measured spindown from these detections indicated rapid braking on a timescale of $\tau _s\sim 4\times 10^3$ yr, consistent with the inferred age of the supernova remnant. The similarity in age, along with the location of the pulsar at the center of the symmetric and well defined remnant, strongly suggests that the two objects are related.
There is sufficient evidence to argue that the host supernova remnant Kes 73 is relatively young, with an age of $\sim 2\times 10^3$ yr (Helfand et al. 1994; Gotthelf & Vasisht 1997). Morphologically, it resembles a classic, limb-brightened shell-type radio supernova remnant, $4^{\prime }$ in diameter, located at an HI-derived distance of $6.0-7.5$ kpc (Sanbonmatsu & Helfand 1992). Its diffuse X-ray emission is distributed throughout the remnant and has a spectrum characteristic of a hot plasma, $kT\sim 0.8$ keV, along with fluorescence lines of several atomic species, including O-group elements, that indicate a young blast-wave of Type II or Ib origin, still rich in stellar ejecta. The relative abundance of ionized species of Si and S observed in the Kes 73 spectrum suggests a level of ionization equilibrium consistent with an age $\lesssim 2\times 10^3$ yr.
Based on the observed characteristics of Kes 73 and its X-ray pulsar, we suggested (VG97) that 1E 1841$-$045 was similar to the ‘anomalous’ X-ray pulsars (Mereghetti & Stella 1995; van Paradijs, Taam & van den Heuvel 1995). We argued that 1E 1841$-$045 could not be an accreting neutron star; this was based on evolutionary arguments, the relative youth of Kes 73, and the fact that we found no evidence for accretion in our datasets. Instead, it was proposed that the X-ray pulsar was powered by an ultramagnetized neutron star with a dipole field of $B_s\sim 7\times 10^{14}$ G, and was the first of its kind; magnetic braking was assumed to be the predominant spindown mechanism, with $B_s\propto (P\dot{P})^{1/2}$. Since then, large magnetic fields have also been inferred for the soft $\gamma $-ray repeaters 1806$-$20 and 1900$+$14 (see Kouveliotou et al. 1998, 1999) via measurement of their spins and spindown with RXTE and ASCA.
In this paper we present the long-term spin history of the X-ray pulsar. We supplement our original ASCA and ROSAT datasets with new ASCA, RXTE, & BeppoSAX pointings and ten-year-old Ginga archival datasets, bringing the total number of timing observations of Kes 73 to seven. We show that the spin evolution obeys a steady linear spindown at a rate consistent, within errors, with our original estimate, re-affirming our somewhat marginal ROSAT detection (VG97).
## 2. Observations
### 2.1. ASCA & BeppoSAX Datasets
Kes 73 was re-observed with ASCA (Tanaka et al. 1994) on March 27-28, 1998, using an observing plan identical to the original 1993 observations. A complete description of the observing modes can be found in VG97. Here we concentrate on the high temporal resolution data ($62\,\mu $s or 48.8 ms, depending on data mode) acquired with the two Gas Imaging Spectrometers (GIS2 & GIS3) on-board ASCA. The datasets were edited using the standard Rev 2 screening criteria, which resulted in an effective exposure time of 39 ks per GIS sensor. Photons from the two GISs were merged and arrival times corrected to the barycenter. A log of all observations presented herein is given in Table 1.
We also acquired a 1.5 day BeppoSAX observation of Kes 73 on March 8-9, 1999 using the three operational narrow field instruments, the Low Energy Concentrator (LECS) and two Medium Energy Concentrators (MECS2 & MECS3). These imaging gas scintillation counters are similar to the GIS detectors, providing arcminute imaging over a $40^{\prime }$ field-of-view in broad energy band-passes of $0.1-12$ keV (LECS) and $1-12$ keV (MECS), with similar energy resolution; the data consist of photon arrival times tagged with 16 $\mu $s precision. All data were pre-screened during the standard SAX pipeline processing to remove times of enhanced background, resulting in usable exposure times of 57.9 ks for each of the MECSs and 26.5 ks for the LECS. Here we concentrate exclusively on photons obtained with the two MECS detectors, which make up the bulk of the data. Data from the two MECS were merged and barycentered by the SAX team.
For all data sets we extracted barycenter-corrected arrival times from a $4^{\prime }$ radius aperture centered on the central object in Kes 73 and restricted the energy range to $2-10$ keV. We then searched these time series for coherent pulsations by folding the data about the expected periods derived from the ephemeris of VG97. In each case highly significant power is detected in the resulting periodograms, corresponding to the central pulsar’s pulse period at the specific epoch. Figure 1 compares periodograms of the ASCA data of 1993 and 1998 produced in the manner described above. We plot these on the same scale to emphasize both the significance of the detection and the unambiguous change in period between the two epochs.
### 2.2. Ginga & RXTE Datasets
With a period detection in hand, we re-analysed archival data from the Ginga (Makino 1987) and RXTE (Bradt et al. 1993) missions. The main instrument on-board Ginga is the non-imaging Large Area Counter (LAC), which covers an energy range of $1-37$ keV with an effective area of $\sim 4000$ cm<sup>2</sup> over its $2\times 2$ degree field-of-view. Ginga observed Kes 73 twice in data modes with sufficient temporal resolution ($2\,\mu $s or $16\,\mu $s, depending on data mode) and exposure to carry out the present analysis (see Table 1). These data were screened using the following criteria: i) Earth-limb elevation angle $>5$ degrees, ii) cut-off rigidity $>8$ GeV/c, and iii) South Atlantic Anomaly avoidance. The LAC light curves were restricted to the $1-17$ keV energy band-pass and corrected to the solar heliocenter using available software (the barycentric correction to these periods is small and has been added in, along with the statistical error in the periods).
Kes 73 was observed for 5 ks by RXTE during 1996 as part of the GO program. We analyzed archival data acquired with the Proportional Counter Array (PCA) in “Good Xenon” data mode at 0.9 $\mu $s time resolution. The PCA instrument is similar to the LAC, with a smaller field-of-view and a greater effective area of $6,500$ cm<sup>2</sup>. The two RXTE observation windows were scheduled so that no additional time filtering was required. After processing and barycentering the Good Xenon data according to standard methods, we selected events from layer 1 only and applied an energy cut of $\lesssim 10$ keV.
As found with the imaging data, epoch folding around the anticipated period produced a highly significant period detection. These periods are consistent with the extrapolated ASCA-derived ephemeris. In Figure 2 we display pulse profiles of Kes 73 at two epochs separated by over a decade, to look for possible long-term changes. To improve the signal-to-noise in the latter observation, we have co-added phase-aligned profiles from the 1998 ASCA and the 1999 BeppoSAX observations. No significant differences were found between the two pulse profiles, which are identical to the 1993 ASCA and 1996 RXTE profiles to within statistical uncertainties.
### 2.3. 1E 1841$-$045: Timing Characteristics
In order to accurately determine the detected period at each epoch, we oversampled the pulse signal by zero-padding the light curves (binned at 1 s resolution) to generate $2^{20}$-point FFTs. We then fit for the centroid of the peak signal to determine the best period. In none of the cases do we detect significant higher harmonics, a fact consistent with the roughly sinusoidal shape of the pulsar’s profile.
To estimate the uncertainty in the period measurements, we carried out extensive Monte-Carlo simulations. For each data set we generated a set of simulated time series whose periodicity, total count rate, noise properties, and observation gaps are consistent with the actual data set for each epoch. We used the normalized profile, folded into 10 bins, to compute the probability of a photon arriving in a given phase bin. Each realization of the simulated data was subjected to the same analysis as the actual data sets to obtain a period measurement. After 500 trials we accumulated a distribution of measured periods which was well represented by a normal distribution. The standard deviation of this distribution is taken as the 1-sigma uncertainty in the period and is presented in Table 1. The errors in the period are roughly consistent with the size of a period element divided by the signal-to-noise of each detection.
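A schematic version of this Monte-Carlo procedure is sketched below; the profile shape, photon count and exposure are placeholders of our own choosing rather than the actual Kes 73 values:

```python
import numpy as np

rng = np.random.default_rng(1)
P_true, T_obs, n_phot = 11.7657, 4.0e4, 20000     # s, s, photons (placeholders)
prof = 1.0 + 0.3*np.cos(2*np.pi*(np.arange(10) + 0.5)/10)   # 10-bin folded profile

def one_trial():
    # draw a phase bin per photon from the normalized profile, add a random cycle
    bins = rng.choice(10, size=n_phot, p=prof/prof.sum())
    phase = (bins + rng.uniform(0, 1, n_phot)) / 10.0
    cycle = rng.integers(0, int(T_obs/P_true), n_phot)
    t = (cycle + phase) * P_true
    lc, _ = np.histogram(t, bins=int(T_obs), range=(0.0, T_obs))  # 1-s light curve
    power = np.abs(np.fft.rfft(lc, n=2**20))**2                   # zero-padded FFT
    freq = np.fft.rfftfreq(2**20, d=1.0)
    win = np.abs(freq - 1.0/P_true) < 2e-4
    return 1.0/np.average(freq[win], weights=power[win])          # peak centroid

periods = np.array([one_trial() for _ in range(100)])
print(periods.std())   # 1-sigma period uncertainty, cf. Table 1
```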
Each period measurement was assigned an epoch defined as the mid-observation time (in MJD) for that data set. The period measurements and their uncertainties were then fit with simple first-order and second-order models to evaluate the spindown characteristics. The parameters were $P_s$ and $\dot{P}_s$, with the addition of the second derivative $\ddot{P}_s$ for the second-order fit, the spin history being written as a Taylor expansion. The best fit to the linear model gives the following period ephemeris (Epoch MJD 49000): $P=11.765732\pm 0.000024$ s; $\dot{P}=4.133\times 10^{-11}\pm 1.4\times 10^{-13}$ s s<sup>-1</sup>.
The linear spindown model is consistent with the data, with fit residuals at the $10^{-4}$ s (or 0.1 micro-Hz) level. We found that these residuals are not sensitive to a second derivative of the period. The upper limit on $\ddot{P}_s$ allowed by the available datasets and their associated errors is more than an order of magnitude larger than expected from classic vacuum dipole spindown alone; for a vacuum dipole rotator one expects $\ddot{P}_s=(2-n)\dot{P}_s^2/P_s$, where $n=3$ is the braking index in the vacuum dipole formalism.
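For orientation, the dipole expectation just quoted is numerically tiny for this source; a one-line evaluation using the ephemeris above (our own check):

```python
P, Pdot, n = 11.765732, 4.133e-11, 3    # s, s/s, vacuum-dipole braking index
Pddot = (2 - n) * Pdot**2 / P           # expected second period derivative
print(Pddot)                            # ~ -1.5e-22 s^-1, far below the residuals
```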
## 3. Discussion
In its observed characteristics, 1E 1841$-$045 most resembles the anomalous X-ray pulsars (AXPs), with its slow, $\sim 10$ s pulse period, steep X-ray spectral signature, inferred luminosity of $\sim 4\times 10^{35}$ erg s<sup>-1</sup>, and lack of a counterpart at other wavelengths. It is, however, unique among these objects in its apparent temporal and spectral stability. The two AXPs for which sufficient monitoring data are available, 2259+586 and 1E 1048$-$59, show large excursions in flux ($\gtrsim 3$) and significant irregularities in their spindown ($\mathrm{log}(|\delta P/P|)\sim -4$). Compared with these objects, the spindown of 1E 1841$-$045 suggests a lower level of torque fluctuations. For 1E 1841$-$045, the magnitude of timing irregularities, given by the timing residuals after subtracting the linear model, is $\mathrm{log}(|\sigma /P|)<-5$, where $\sigma \sim 10^{-4}$ s is the typical size of the measurement error (Table 1). The spindown of this object is apparently quieter than that observed in some middle-aged pulsars, which have red-noise fluctuations in the pulse times-of-arrival of order $-3$ to $-4$ (e.g. Arzoumanian et al. 1994).
There is mounting evidence that, as a class, AXPs are related to the soft $\gamma $-ray repeaters (SGRs), given their similar X-ray spectral and timing properties. Were the spindown to show systematic departures from linearity, such as glitches, these might be accompanied by bursting activity (as in the SGRs, and as may be expected for 1E 1841$-$045 if it is ultramagnetized); such activity, however, has so far not been detected by orbiting $\gamma $-ray observatories, nor is it reflected in the spin history. Note that the glitch observed from SGR 1900+14 resulted in a period change an order of magnitude larger, $\mathrm{log}(|\delta P/P|)\sim -4.3$ (Kouveliotou et al. 1999).
In the context of an evolutionary link between the SGRs and AXPs, the timing and spectral stability of 1E 1841$-$045 suggest a quiescent state either pre- or post-SGR activity. The relative age of 1E 1841$-$045 argues for an early state, possibly preceding $\gamma $-ray activity, as there is some evidence that 1E 1841$-$045 is the youngest among the currently recognized AXPs and SGRs. Half the AXPs are known to be associated with supernova remnants, while of the four known soft repeaters two have host remnants and another, SGR 1806$-$20, has associated plerion-like emission but no discernible supernova shell. All these objects are thought to be at least $\gtrsim 10^4$ yr old, with the oldest AXPs having been around for a few $\times 10^5$ years (spindown on timescales of a few hundred years observed in SGRs is no reflection of their true ages).
Rotational energy loss is insufficient to power the inferred luminosity of 1E 1841$-$045, unlike in ordinary radio pulsars. In the SGRs, the ultimate mechanism powering particle acceleration is naturally the release of magnetic free energy (both steady and episodic-seismic), rather than rotation. The episodic emission of $\gamma $-ray bursts is thought to be due to starquakes in the neutron star crust. This could suggest a future “turning-on” of 1E 1841$-$045 as a $\gamma $-ray repeater on a several-thousand-year timescale, presumably resulting from a slow buildup of stress between the core and surface of the neutron star due to an as yet unknown state transition in the stellar crust.
The blackbody spectrum suggests a radiating surface of size $R_{\infty }\sim 8\,d_7^2$ km, where $d_7$ is the distance to Kes 73 in units of 7 kpc, which is consistent with neutron-star dimensions (assuming an isotropic emitter and ignoring surface redshift and photospheric corrections to the observed spectral energy distribution). This rough estimate suggests low surface temperature anisotropies (as opposed to small hotspots), and is in agreement with the low-modulation, broad pulse originating from near the stellar surface. In an ultramagnetized neutron star, outward energy transport from a decaying magnetic field can keep the surface at elevated temperatures, with high thermal luminosities ($\sim 10^{35}$ erg s<sup>-1</sup>) not observed in normal neutron stars at age $\sim 10^3$ yr. In contrast, researchers have argued (Heyl & Hernquist 1998) that the X-ray luminosities of such stars may be driven by the cooling of the neutron star through a strongly magnetized, light-element envelope, without the need for appreciable field decay (see also Heyl & Kulkarni 1999). Along with these cooling emissions from the surface, the star may have a magnetically-driven, charged-particle outflow, as is suggested by VLA observations of SGR 1806$-$20 (Frail, Vasisht & Kulkarni 1997) via the energetics and small-scale structure of its plerion. Evidence for more episodic particle ejection, rather than a steady wind, is inferred from the recent radio flare observations of SGR 1900+14, taken during a period of high activity (Frail, Kulkarni & Bloom 1999).
For 1E 1841$-$045, we can only attempt to place limits on a steady pair-wind luminosity: upper limits to radio emission from a putative plerionic structure surrounding the pulsar suggest an averaged pair luminosity of less than $10^{36}$ erg s<sup>-1</sup>. Similarly, limits on a hard X-ray tail suggest a present-day wind luminosity of less than $5\times 10^{35}$ erg s<sup>-1</sup>; the latter condition assumes a tail with a photon index of 2, quite typical of plerions, and a soft X-ray radiation conversion efficiency of 10 percent. These bounds are well within the energy budget available from field decay,
$$L_B\sim (1/6)\,\dot{B}BR^3,$$
which is expected to power a bolometric luminosity of $\lesssim 10^{36}$ erg s<sup>-1</sup>; the fastest avenue for stellar field decay would be the modes of ambipolar diffusion, for which theoretical arguments suggest a decay timescale ($B/\dot{B}$) of $\sim 3B_{15}^{-2}$ kyr (Goldreich & Reisenegger 1992). Note that these upper limits are a factor $\sim 10^2$ larger than the star’s dipole luminosity of $4\pi ^2I\dot{P}/P^3\sim 10^{33}$ erg s<sup>-1</sup>. This suggests that spindown torques on the star could conceivably be dominated by wind torques, although such a wind would have to be remarkably steady not to produce timing residuals larger than those observed (Thompson & Blaes 1998). Alternatively, if classical dipole radiation is the primary spindown mechanism, then the stellar magnetic field is $B\sim 0.75\times 10^{15}$ G.
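Both numbers quoted in this paragraph follow directly from the measured ephemeris. A minimal check, assuming the standard neutron-star moment of inertia $I=10^{45}$ g cm<sup>2</sup> and the conventional dipole estimate $B\simeq 3.2\times 10^{19}\,(P\dot{P})^{1/2}$ G (both of these inputs are our assumptions, not stated in the text):

```python
import math

P, Pdot, I = 11.765732, 4.133e-11, 1.0e45       # s, s/s, g cm^2 (I assumed)
L_dipole = 4 * math.pi**2 * I * Pdot / P**3     # erg/s
B = 3.2e19 * math.sqrt(P * Pdot)                # G, conventional dipole estimate
print(f"L_dipole ~ {L_dipole:.1e} erg/s,  B ~ {B:.1e} G")
# ~1e33 erg/s and ~7e14 G, matching the values quoted in the text
```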
To conclude, X-ray timing monitoring of the spin period is underway by independent groups, including ours, which will provide an accurate measurement of the braking index of this pulsar. As previously mentioned, a large braking index may directly reflect active field decay or field re-alignment inside the star. The observed braking index of a pure dipole rotator with torque loss due to field decay may be written as
$$n_{\mathrm{obs}}=3-2\left(\frac{P}{\dot{P}}\right)\left(\frac{\dot{B}}{B}\right).$$
Given that the spindown timescale for this pulsar is about 9 kyr, a field decay time of $\lesssim 3B_{15}^{-2}$ kyr could impose a fairly large curvature on the spindown. In reality, the situation may be far more complicated, with different factors such as dipole radiation, winds driven by magnetic activity, and field decay all competing in the torque evolution. This, however, remains to be tested through accurate long-term timing. Measuring the pulsar spin via a series of closely spaced observations may also reveal a wealth of information on possible glitching and the subsequent recovery of the star.
Acknowledgments: EVG is indebted to Jules Halpern and Daniel Q. Wang for discussions and insights into timing noise and measurement error. We thank Angela Malizia for barycentering our BeppoSAX data. This research is supported by the NASA LTSA grant NAG5-22250.
# Apparent multifractality in financial time series
## Acknowledgements
We thank M. E. Brachet, P. Cizeau, L. Laloux and A. Matacz for enlightening discussions.
# FCNC and non-standard soft-breaking terms in weak-scale Supersymmetry
## Abstract
We study the inclusion of non-standard soft-breaking terms in the minimal SUSY extension of the SM, considering it as a model of weak-scale SUSY. These terms modify the Higgs-sfermion interactions and the sfermion mass matrices, which can induce new sources of flavour violation. Bounds on the new soft parameters can be obtained from current data. The results are then applied to evaluate the FCNC top quark decays $t\rightarrow c+h_i$ $(h_i=h^0,H^0,A^0)$. Implications of complex soft parameters for CP-violation are also addressed.
1.- Supersymmetric (SUSY) extensions of the Standard Model (SM) have been extensively studied, mainly because of the possibility to solve the hierarchy problem. The minimal SUSY SM (MSSM) has been used as a framework to search for signals of SUSY. The required breaking of SUSY is incorporated in the model through soft-breaking terms , which include gaugino and scalar masses, as well as trilinear interactions. General soft-breaking terms can produce large flavour changing neutral currents (FCNC) . Possible solutions to this problem have been proposed within the main theoretical frameworks of SUSY breaking.
The MSSM reproduces the SM agreement with data and predicts new signatures, associated with the superpartners, that are expected to appear at current or future colliders . However, this analysis usually involves some simplifications regarding the soft-breaking parameters. For instance, one could work within a particular GUT model and incorporate some specific mechanism of SUSY breaking, then use the structure of the soft terms to study the mass spectrum of superpartners, evaluate production cross-sections and decay rates, and search for their signatures at future colliders. Although this approach makes a certain amount of sense, one could question its generality, and whether future colliders will test weak-scale SUSY or only a particular model of SUSY breaking. In order to study, in a general setting, the possible presence of SUSY in nature, we shall define the MSSM at the weak scale by considering the most general structure of soft-breaking terms, whose values will be constrained by low-energy phenomenology.
Although it is widely stated that the soft terms included in the definition of the MSSM are the most general ones, there are extra terms, not usually considered in the literature , which should be included in a model-independent analysis of weak-scale SUSY. In this paper we study how the inclusion of non-standard terms in the MSSM modifies the Higgs-sfermion interactions and the sfermion mass matrices, which in turn can induce new sources of flavour violation. We then evaluate the contribution of the trilinear terms to the FCNC top quark decays $t\rightarrow c+h_i$, with $h_i$ denoting the neutral Higgs bosons of the MSSM. We also comment on the implications of complex trilinear terms for CP-violation phenomena.
2.- The usual trilinear terms included in the MSSM correspond to interactions of the sfermions with the Higgs doublets ($`H_{1,2}`$), of the form
$$\mathcal{L}_3=\epsilon _{ij}[A^d\stackrel{~}{Q}^iH_1^j\stackrel{~}{D}-A^u\stackrel{~}{Q}^iH_2^j\stackrel{~}{U}+A^l\stackrel{~}{L}^iH_1^j\stackrel{~}{E}],$$
(1)
where $\stackrel{~}{Q},\stackrel{~}{L}$ represent the squark and slepton doublets, whereas the squark and slepton singlets are denoted by $\stackrel{~}{U},\stackrel{~}{D},\stackrel{~}{E}$. Equation (1) resembles the Yukawa Lagrangian of the MSSM, provided that the fermion fields are replaced by their scalar superpartners. However, one could write extra soft-breaking terms that resemble the most general two-Higgs doublet model, known as model III , by allowing each sfermion flavour to couple to both Higgs doublets, namely,
$$\mathcal{L}_3^{\prime }=\epsilon _{ij}[C^d\stackrel{~}{Q}^iH_2^{cj}\stackrel{~}{D}-C^u\stackrel{~}{Q}^iH_1^{cj}\stackrel{~}{U}+C^l\stackrel{~}{L}^iH_2^{cj}\stackrel{~}{E}],$$
(2)
where $H_n^c=i\tau _2H_n^{*}$ ($n=1,2$); $A^{u,d,l}$ and $C^{u,d,l}$ denote $3\times 3$ matrices in flavour space. These terms are indeed soft, because each of the scalar fields carries $U(1)_Y$ charges that forbid their appearance in tadpole graphs, which are the only diagrams that could generate quadratic divergences from these cubic interactions . The resulting squared sfermion mass matrices ($6\times 6$) can be written in terms of $3\times 3$ blocks, as follows
$$M_{\stackrel{~}{f}}^2=\left(\begin{array}{cc}(M_{\stackrel{~}{f}}^2)_{LL}\hfill & (M_{\stackrel{~}{f}}^2)_{LR}\hfill \\ (M_{\stackrel{~}{f}}^2)_{LR}^{}\hfill & (M_{\stackrel{~}{f}}^2)_{RR}\hfill \end{array}\right)$$
(3)
The mass terms $(M_{\stackrel{~}{f}}^2)_{LL,RR}$ receive contributions from the F- and D-terms, after the Higgs fields acquire v.e.v.’s $<H_{1,2}^0>=v_{1,2}$, as well as from the chirality-conserving soft masses. On the other hand, the chirality-changing mass terms $(M_{\stackrel{~}{f}}^2)_{LR}$, which receive contributions from F-terms and from the $A$ and $C$ trilinear interactions, are given by
$`(M_{\stackrel{~}{u}}^2)_{LR}`$ $`=`$ $`\mu m_u^0\mathrm{cot}\beta +A^uv\mathrm{sin}\beta +C^uv\mathrm{cos}\beta ,`$ (4)
$`(M_{\stackrel{~}{d}}^2)_{LR}`$ $`=`$ $`\mu m_d^0\mathrm{tan}\beta +A^dv\mathrm{cos}\beta +C^dv\mathrm{sin}\beta ,`$ (5)
$(M_{\stackrel{~}{l}}^2)_{LR}$ $=$ $\mu m_l^0\mathrm{tan}\beta +A^lv\mathrm{cos}\beta +C^lv\mathrm{sin}\beta ,$ (6)
where $`m_{u,d,l}^0`$ denote the (non-diagonal) fermionic mass matrices and $`v^2=v_1^2+v_2^2`$, $`\mathrm{tan}\beta =v_2/v_1`$.
The fermion and sfermion mass matrices must be diagonalized in order to obtain the mass eigenstates. However, since the general fermion and sfermion mass matrices are not diagonalized by the same rotations, flavour-violating interactions appear in the MSSM . In our case, since the $C^f$ terms modify the chirality-changing (LR) sfermion mass matrices, they can represent a new source of flavour violation.
3.- To determine the phenomenological predictions of the model, we need to know the values of the parameters $A^f$ and $C^f$, which requires a complete understanding of the mechanism of SUSY breaking. In supergravity/superstrings , these terms are associated with non-holomorphic interactions, whereas in models with horizontal symmetries they appear as higher-dimensional operators. In gauge-mediated models , the non-standard soft terms appear at higher loop order, as the $A$-terms do (two-loop level). Thus, the $C^q$ parameters appear to be small in the minimal realizations of these SUSY-breaking schemes. However, their contribution to low-energy processes may not be negligible when compared with the $A$-terms, for instance when these are proportional to the light fermion masses. Thus, the corresponding $C^{q,l}$ parameters should be included in a model-independent analysis of FCNC phenomena.
To discuss FCNC bounds, it is convenient to work in the so-called super-KM basis, where the fermion mass matrices and the fermion-sfermion-gaugino vertices are diagonal; flavour violation then arises from the off-diagonal components of the sfermion mass matrices, which are treated as mass insertions in loop graphs . The FCNC bounds on $M_{LR}^2$ are expressed in terms of dimensionless parameters:
$$(\delta _{LR}^{\stackrel{~}{q}})_{ij}=\frac{1}{m_{\stackrel{~}{q}}^2}[V_L^q(M_{\stackrel{~}{q}}^2)_{LR}V_R^{q\dagger }]_{ij}$$
(7)
where $V_{L,R}^q$ denote the diagonalizing matrices of the fermion masses. Bounds on the off-diagonal elements of $\delta _{LR}^{\stackrel{~}{f}}$ can be obtained, for instance, by requiring that the SUSY contribution to the $K\overline{K},D\overline{D},B\overline{B}$ mass differences saturates the observed values. Similarly, the diagonal elements $(\delta _{LR}^{\stackrel{~}{f}})_{ii}$ can be bounded using the SUSY corrections to the fermion masses. For d-type squarks, the bounds corresponding to $m_{\stackrel{~}{q}}=m_{\stackrel{~}{g}}=500$ GeV are :
$$(\delta _{LR}^{\stackrel{~}{d}})\leq \left(\begin{array}{ccc}1.6\times 10^{-3}\hfill & 4.4\times 10^{-3}\hfill & 3.3\times 10^{-2}\hfill \\ 4.4\times 10^{-3}\hfill & 2.4\times 10^{-2}\hfill & 1.6\times 10^{-2}\hfill \\ 3.3\times 10^{-2}\hfill & 1.6\times 10^{-2}\hfill & 7.3\times 10^{-1}\hfill \end{array}\right)$$
(8)
The C-terms appear in the definition of the $`\delta _{LR}`$ parameter, namely:
$$(\delta _{LR}^{\stackrel{~}{q}})_{ij}=\frac{1}{m_{\stackrel{~}{q}}^2}(a_qv\overline{A^q}+b_q\mu m_q+c_qv\overline{C^q})$$
(9)
where $\overline{A^q}=V_L^qA^qV_R^{q\dagger }$, $\overline{C^q}=V_L^qC^qV_R^{q\dagger }$; $m_{\stackrel{~}{q}}^2$ denotes an average squark mass, and $m_q$ is the quark mass matrix; $a_q,c_q$ can be read off from Eqs. (4-6). However, FCNC data constrain the off-diagonal elements of the combinations $A^d\mathrm{cos}\beta +C^d\mathrm{sin}\beta $ and $A^u\mathrm{sin}\beta +C^u\mathrm{cos}\beta $, and the constraints are strong only for the $A^d$ and $C^d$ entries associated with the first and second families. Moreover, since the analysis of FCNC constraints is not complete for the stop/scharm parameters, one can only estimate $A^u$ and $C^u$ to be in the range $100-1000$ GeV, for which the $\delta _{LR}^u$ parameters would be one or two orders of magnitude larger than those of the third-family d-type sfermions, still in agreement with present FCNC bounds.
4.- To illustrate the effects of the non-standard soft-breaking terms, we shall consider the FCNC decays of the top quark, $t\rightarrow c+h_i$ , including only the contribution arising from the FCNC Higgs-sfermion interaction, with the gluino and squarks circulating in the loop. The resulting expression for the decay width is
$$\mathrm{\Gamma }(t\rightarrow c+h_i)=\frac{m_t}{16\pi }\left(1-\frac{m_h^2}{m_t^2}\right)(|F_L|^2+|F_R|^2),$$
(10)
where:
$`F_L`$ $`=`$ $`{\displaystyle \frac{\sqrt{2}\alpha _s}{3\pi }}M_{\stackrel{~}{g}}r_{h_i}C_0(m_{\stackrel{~}{t}L},m_{\stackrel{~}{g}},m_{\stackrel{~}{c}R},m_t^2,m_c^2,m_h^2),`$ (11)
$`F_R`$ $`=`$ $`{\displaystyle \frac{\sqrt{2}\alpha _s}{3\pi }}M_{\stackrel{~}{g}}r_{h_i}C_0(m_{\stackrel{~}{t}R},m_{\stackrel{~}{g}},m_{\stackrel{~}{c}L},m_t^2,m_c^2,m_h^2),`$ (12)
$C_0$ denotes the Passarino-Veltman scalar function; $m_{\stackrel{~}{t}}$, $m_{\stackrel{~}{c}}$, $m_{\stackrel{~}{g}}$ correspond to the stop, scharm and gluino masses, respectively, with
$$r_{h_i}=\left\{\begin{array}{ll}A^u\mathrm{cos}\alpha -C^u\mathrm{sin}\alpha ,\hfill & \mathrm{for}\;h^0,\hfill \\ A^u\mathrm{sin}\alpha +C^u\mathrm{cos}\alpha ,\hfill & \mathrm{for}\;H^0,\hfill \\ A^u\mathrm{cos}\beta +C^u\mathrm{sin}\beta ,\hfill & \mathrm{for}\;A^0.\hfill \end{array}\right.$$
(13)
Including only the $A$-term, the resulting branching ratio has values of order $10^{-5}-10^{-6}$. On the other hand, if we include $A_{tc}^u$ and $C_{tc}^u$ terms of similar strength ($\sim 500$ GeV), we find that the branching ratio reaches values of order $10^{-4}$. If we also include the contributions from off-diagonal terms in $M_{LL,RR}^2$, it is possible to obtain branching ratios of order $10^{-3}$, which could be tested at the LHC. Some representative values of the B.R. are shown in Table 1.
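To indicate how entries of this size follow from Eq. (10), the sketch below folds assumed loop form factors into the width and normalizes to the total top width. Here $\Gamma _t\approx 1.5$ GeV (dominated by $t\rightarrow bW$) and the magnitudes of $|F_{L,R}|$ are our illustrative assumptions, not values from the paper:

```python
import math

m_t, Gamma_t = 175.0, 1.5        # GeV; total width assumed ~ Gamma(t -> bW)

def br_tch(m_h, FL, FR):
    """Branching ratio of t -> c + h from the width formula of Eq. (10)."""
    width = m_t/(16*math.pi) * (1 - m_h**2/m_t**2) * (abs(FL)**2 + abs(FR)**2)
    return width / Gamma_t

# |F_L| ~ |F_R| ~ 6e-3 is the typical size needed to reach B.R. ~ 1e-4:
print(br_tch(m_h=100.0, FL=6e-3, FR=6e-3))   # ~1.1e-4
```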
5.- Another interesting application of the new soft-breaking terms is in CP-violation phenomena. In a recent paper , it has been proposed to use a non-minimal expression for the $A$-terms in order to explain the recently observed value of $\epsilon ^{\prime }/\epsilon $ as having a SUSY origin. Since the $C$-terms can also be complex, their contribution to the imaginary part of $(\delta _{LR}^d)_{12}$ could enhance the amount of CP-violation due to SUSY, and would help to explain the observed effect within the MSSM.
CP-violating Higgs interactions also receive a contribution from the $C^f$ terms. For instance, the parameter $\eta _{CP}^l$, which measures CP-violation in the coupling of Higgs bosons to leptons , receives a new contribution from the $C$-terms, with sleptons and gauginos circulating in the loop; it is given by
$$\eta _{CP}^l=\frac{6\alpha _{em}}{20\sqrt{2}\mathrm{cos}^2\theta _Wy_l}Im[C^lM_1f(M_1,m_{\stackrel{~}{l}})],$$
(14)
where $`y_l`$ denotes the Yukawa coupling of lepton $`l`$, $`m_{\stackrel{~}{l}},M_1`$ corresponds to the slepton and Bino masses, respectively; $`f`$ is a function that arises from the loop integration. For SUSY masses of order $`200`$ GeV, $`\mathrm{tan}\beta =10`$ and $`m_A=100`$ GeV, we find that $`\eta _{CP}^\mu `$ reaches values of order 0.1, which can be detected at a future muon collider .
6.- In conclusion, we have studied the effects of non-standard soft-breaking terms in the MSSM, and found that they modify the chirality-changing (LR) components of the squared sfermion mass matrices, which can induce new sources of flavour violation. Given present FCNC data, we can only estimate the $A$ and $C$ parameters. To probe their strength, we evaluate the decays $t\rightarrow c+h_i$ and find a B.R. that may be detectable at the LHC. The $C$-terms also offer the possibility to explain the newly observed CP-violation phenomena as a SUSY effect, and to measure a CP-violating Higgs-lepton coupling at a future muon collider.
Acknowledgment.- Discussions with G. Kane and M.A. Perez are acknowledged. This work was supported by CONACYT and SNI (México).
Table 1. B.R. of the top FCNC decays $t\rightarrow c+h_i$. Results are shown for $\mathrm{tan}\beta =2$, $m_{\stackrel{~}{q}}=300$ GeV, $m_{\stackrel{~}{g}}=A^u=C^u=500$ GeV; the numbers in parentheses correspond to $\mathrm{tan}\beta =10$.
| $m_A$ (GeV) | $B.R.(t\rightarrow c+h^0)$ | $B.R.(t\rightarrow c+H^0)$ | $B.R.(t\rightarrow c+A^0)$ |
| --- | --- | --- | --- |
| 100 | $7.1\times 10^{-4}$ ($4.8\times 10^{-4}$) | $1.9\times 10^{-5}$ ($1.1\times 10^{-5}$) | $5.8\times 10^{-4}$ ($3.8\times 10^{-4}$) |
| 130 | $7.0\times 10^{-4}$ ($5.1\times 10^{-4}$) | $1.2\times 10^{-6}$ ($1.7\times 10^{-7}$) | $3.9\times 10^{-4}$ ($2.6\times 10^{-4}$) |
| 160 | $6.8\times 10^{-4}$ ($3.8\times 10^{-4}$) | $0$ ($2.5\times 10^{-5}$) | $1.4\times 10^{-4}$ ($9.6\times 10^{-5}$) |
| 190 | $6.6\times 10^{-4}$ ($3.3\times 10^{-4}$) | 0 (0) | 0 (0) |
## 1 Introduction:
Precise measurement of the TeV gamma-ray energy spectra of nearby extragalactic objects is an important objective. We expect a cutoff in their energy spectra due to interactions with infrared photons in intergalactic space. The cutoff energy depends on the distance of the objects and on the density of infrared photons (Stecker and Salamon 1997). If we can measure the cutoff energy precisely as a function of redshift $z$, we can place experimental constraints on the Hubble constant and the infrared photon density. The maximum energy of electron acceleration in AGN jets is another interesting topic. It reflects the physical state of the jet and of the supermassive black hole that is supposed to exist at the center of the AGN.
The most promising way to measure gamma-ray energies in the TeV region is the stereoscopic observation of the Cherenkov light from air showers, event by event. With this method, the air shower axis, the arrival direction, the intersection of the axis with the ground (core location), and the amount of Cherenkov photons are determined more precisely. Making use of these advantages, we have calculated the differential energy spectrum of TeV gamma rays.
Using the Monte Carlo simulation, we derived the relation between the primary energy of gamma rays and several parameters: the ADC value (ADCsum), the zenith angle, and the core distance, which is the distance between the shower axis and the telescope. In this procedure, we compared the parameter distributions of the experimental data with those of the simulated data and found good agreement. Using this method, we have determined the differential energy spectrum of the gamma rays from Mrk501.
## 2 Experiment:
The Utah Seven Telescope Array has been in operation at Cedar Mountain (1,600 m a.s.l.), Dugway, Utah ($40.33^{\circ }$ N, $113.02^{\circ }$ W). The telescopes are arranged on the vertices and at the center of a regular hexagon, with a separation of 70 m. Operation started with three telescopes in March 1997.
Each telescope has a 3 m diameter dish with nineteen hexagonal segment mirrors and a total effective mirror area of 6 $m^2$. A 256-channel camera with 0.25 degree pixels is mounted on the focal plane of each telescope. This camera is made of multi-anode photomultipliers (MAPMTs) having 4 pixels each.
Details of this experiment are reported in OG.4.3.25.
## 3 Analysis:
In order to determine the primary energies, directions and core positions from the Cherenkov light images of air showers generated by TeV gamma rays and cosmic rays, we used a simulation based on the CORSIKA code.
From the shower images on the cameras of multiple telescopes, image parameters are calculated and the image axis is obtained. A detector-shower plane, which includes the position of the telescope and the shower axis, is determined for each telescope. We can then determine the intersection line of these planes, which corresponds to the three-dimensional shower axis. The core location of the shower is determined as the intersection point of the shower axis and the ground plane. Figure 1 illustrates this procedure and the resulting resolution. The core position resolution obtained by this method is 12 m, within which 50% of the total events are contained.
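The plane-intersection step reduces to a few lines of vector algebra. The following numpy sketch (our own illustration; the telescope positions and plane normals are toy values) reconstructs the shower axis and the core location from two detector-shower planes:

```python
import numpy as np

def reconstruct(tel1, n1, tel2, n2):
    """Shower axis = intersection of two detector-shower planes.
    tel_i: telescope position; n_i: unit normal of plane i (derived from
    the image axis on camera i).  Returns axis direction and core location."""
    axis = np.cross(n1, n2)
    axis /= np.linalg.norm(axis)              # 3D shower axis direction
    # a point on the intersection line: n1.x = n1.tel1, n2.x = n2.tel2, axis.x = 0
    A = np.vstack([n1, n2, axis])
    b = np.array([n1 @ tel1, n2 @ tel2, 0.0])
    point = np.linalg.solve(A, b)
    t = -point[2] / axis[2]                   # intersect with ground plane z = 0
    return axis, point + t * axis

tel1, tel2 = np.array([0., 0., 0.]), np.array([70., 0., 0.])  # 70 m spacing
n1 = np.array([0., 1., 0.])                                   # toy plane normals
n2 = np.array([-0.5, 0.5, np.sqrt(0.5)])
axis, core = reconstruct(tel1, n1, tel2, n2)
print(axis, core)
```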
The relation between the total ADC counts and the primary energy of gamma rays is estimated as a function of zenith angle and core location. Based on this relation, the primary energy of gamma rays is determined event by event. The energy resolution obtained from this analysis is estimated to be 23 % (Figure 2.a).
## 4 Energy spectrum:
For the analysis in this paper, we use all of the events that were detected by two telescopes. In the case of observations with three telescopes, there are three combinations of two telescopes, which means the efficiency of observation with three telescopes is three times higher than with two. The data used in this paper were acquired from the beginning of April to the end of July 1997. In April, two telescopes were operated, and one more telescope joined the observation in May. The total observation time was 137.6 hours.
Figure 3.a shows the primary energy distribution of the detected air showers under the assumption that the primary particles are gamma rays. We calculate the energy distributions in the on-source and off-source regions separately. Subtracting the off-source events from the on-source events, the energy distribution of the gamma rays from Mrk501 is obtained. Taking into account the effective area and the observation time, the differential energy spectrum of the gamma rays from Mrk501 is calculated (Figure 3.b).
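A compact sketch of this on/off subtraction and flux normalization (Python; all counts, the bin edges and the constant effective area below are invented for illustration, whereas in the real analysis the effective area depends on energy and zenith angle):

```python
import numpy as np

# Hypothetical counts per energy bin (bin edges in TeV)
edges = np.array([0.7, 1.0, 1.5, 2.2, 3.2, 4.7])
n_on  = np.array([410, 300, 180, 90, 35])
n_off = np.array([260, 190, 120, 65, 28])

t_obs = 137.6 * 3600.0                        # observation time [s]
a_eff = 4.0e8                                 # assumed effective area [cm^2]

widths   = np.diff(edges)
excess   = n_on - n_off                       # gamma-ray signal
err      = np.sqrt(n_on + n_off)              # Poisson error on the excess
flux     = excess / (a_eff * t_obs * widths)  # dN/dE [cm^-2 s^-1 TeV^-1]
flux_err = err / (a_eff * t_obs * widths)

for lo, hi, f, df in zip(edges[:-1], edges[1:], flux, flux_err):
    print(f"{lo:.1f}-{hi:.1f} TeV: ({f:.2e} +/- {df:.2e}) /cm^2/s/TeV")
```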
## 5 Conclusion:
We have developed an energy determination method for gamma rays using the stereoscopic technique. The accuracies of the core location and energy determinations are 12 m and 23%, respectively. Note that image parameters like WIDTH and LENGTH are not used in this analysis except to determine the image axis. We have obtained the differential energy spectrum of the 1997 gamma-ray flare of Mrk501. The flux of the gamma rays is well represented by a differential power law with an index of -2.5 between 700 GeV and 3 TeV, and a steepening seems to appear above several TeV.
Acknowledgments
This work is supported in part by Grants-in-Aid for Scientific Research (Grants #0724102 and #08041096) from the Ministry of Education, Science and Culture. The authors would like to thank the people at Dugway for their help with the observations.
References
Hayashida, H. et al. 1998, ApJ 504, L71.
Nishikawa, D. et al. 1999, Proc. 26th ICRC (Salt Lake City, 1999), OG.2.1.17.
Yamamoto, T. et al. 1999, Proc. 26th ICRC (Salt Lake City, 1999), OG.2.1.25.
# Direct determination of the gluon density in the proton from jet cross sections in deep-inelastic scattering
## 1 INTRODUCTION
The present knowledge of the gluon density in the proton comes mainly from deep-inelastic scattering (DIS) structure function data (i.e. from inclusive DIS cross sections). These are, however, only indirectly sensitive to the gluon density, which enters the cross section formulae only via the next-to-leading order (NLO) corrections in the boson-gluon fusion process.
A process that is directly sensitive to the gluon density in the proton is the production of jets at (moderately) high transverse energies ($`E_T`$) in the Breit frame. At leading order, high $`E_T`$ jet cross sections are described by QCD-Compton and by boson-gluon fusion processes, the latter being dominant over large regions of phase space. A QCD analysis of jet cross sections may therefore lead to a direct determination of the gluon density, independent of assumptions needed in the indirect determinations from structure function data.
## 2 EXPERIMENTAL RESULTS
The dijet and the inclusive jet cross sections presented here have been measured with the H1 detector at HERA, based on data taken in the years 1994-97, corresponding to an integrated luminosity $`L_{\mathrm{int}}\simeq 36\mathrm{pb}^{-1}`$. The kinematic range extends from moderate to large momentum transfers $`10<Q^2<5000\mathrm{GeV}^2`$ for $`0.2<y<0.6`$. Jets are defined by the inclusive $`k_{\mathrm{}}`$ algorithm, which is applied to the final state particles in the Breit frame. Only jets in the central region of the detector acceptance with pseudorapidities $`-1<\eta _{\mathrm{jet},\mathrm{lab}}<2.5`$ and transverse jet energies $`E_{T\mathrm{Breit}}>5\mathrm{GeV}`$ are considered.
The inclusive jet cross section $`\mathrm{d}^2\sigma _{\mathrm{jet}}/\mathrm{d}E_T\mathrm{d}Q^2`$ has been measured for $`7<E_{T\mathrm{jet},\mathrm{Breit}}<50\mathrm{GeV}`$. For the dijet cross section events with two or more jets have been selected where the two jets with the highest $`E_T`$ fulfill $`E_T>17\mathrm{GeV}`$. Double differential distributions have been obtained for a large set of variables . Here we show the dependence of the inclusive jet cross section on both hardness scales $`E_T`$, $`Q^2`$ (Fig. 1), and the dependence of the dijet cross section on the jet pseudorapidity $`\eta ^{}`$ and the reconstructed fractional parton momentum $`\xi `$ (Figs. 2,3). The basic observations are: towards larger $`Q^2`$ a harder $`E_T`$ spectrum is seen; large jet pseudorapidities $`\eta ^{}`$ are suppressed at higher $`E_T`$; the jet cross sections are sensitive to fractional parton momenta $`0.005<\xi <0.3`$.
The perturbative calculations give a very good description of these and other distributions , except at small values of $`Q^2<100\mathrm{GeV}^2`$ where NLO corrections are very large (k-factors are between 1.6 and 2) and sizeable contributions from higher orders are expected.
## 3 QCD ANALYSIS
We have seen that the jet data are nicely described by perturbative calculations at NLO when parton densities and $`\alpha _s(M_Z)`$ values are taken from global fits. We therefore conclude that perturbative QCD at next-to-leading order in $`\alpha _s`$ is able to describe jet production processes in DIS, at least in the kinematic region under investigation: at fairly large transverse jet energies $`E_T`$ in the Breit frame and momentum transfers $`Q^2`$ not too small. The influence of non-perturbative contributions has been investigated and is found to be small (below 7% for the dijet cross sections), with negligible model dependence.
It is now straightforward to make a QCD analysis of these data for a determination of the free parameters of the theory: $`\alpha _s`$, the gluon density and the quark densities in the proton. The present data are, however, not able to constrain all of these simultaneously. We therefore decided to take the value of $`\alpha _s(M_Z)`$ (as e.g. measured by the LEP experiments independently of the proton structure) as external input, and to perform a simultaneous fit of the gluon and quark densities in the proton. For this purpose we include H1 data on the inclusive DIS cross section in the QCD analysis, at $`Q^2`$ values which are of the order of the squared transverse jet energies $`E_T^2`$ in the dijet cross sections.
In this kinematic region ($`200<Q^2<650\mathrm{GeV}^2`$) the inclusive DIS cross sections give very strong constraints on the quark densities while they depend on the gluon density only weakly. In the combined fit of both datasets the inclusive DIS cross sections therefore determine the quarks while the dijet cross sections determine the gluon density.
The dijet data used in the fit are restricted to $`Q^2>200\mathrm{GeV}^2`$ where NLO corrections and hadronization corrections are small. The fit has been performed to the double differential cross sections $`\mathrm{d}^2\sigma _{\mathrm{dijet}}/\mathrm{d}Q^2\mathrm{d}\xi `$, $`\mathrm{d}^2\sigma _{\mathrm{dijet}}/\mathrm{d}Q^2\mathrm{d}x_{\mathrm{Bj}}`$ and $`\mathrm{d}^2\sigma _{\mathrm{incl}.}/\mathrm{d}Q^2\mathrm{d}x_{\mathrm{Bj}}`$ which are most sensitive to the $`x`$-dependence of the parton distributions. The gluon and the quark densities have been fitted at a factorization scale $`\mu _f^2=200\mathrm{GeV}^2`$ which is of the size of the hard scales for both the dijet ($`\mu _f^2E_T^2`$) and the inclusive DIS cross sections ($`\mu _f^2Q^2`$). More details on the fitting procedure can be found in .
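Schematically, the fit adjusts the parameters of the gluon parametrization until the predicted cross sections reproduce the data. The sketch below (Python) illustrates the idea with a least-squares fit of a hypothetical parametrization $`xg(x)=Ax^B(1-x)^C`$ to invented pseudo-data; the actual analysis evaluates the full NLO cross sections inside the fit and includes the inclusive DIS data as described above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parametrization of the gluon density at mu_f^2 = 200 GeV^2
def xg(x, A, B, C):
    return A * x**B * (1.0 - x)**C

# Invented pseudo-data standing in for the cross-section information
x_vals = np.array([0.01, 0.02, 0.04, 0.07, 0.1])
y_vals = np.array([9.5, 7.0, 4.8, 3.3, 2.5])
y_errs = np.array([0.9, 0.6, 0.5, 0.4, 0.4])

popt, pcov = curve_fit(xg, x_vals, y_vals, p0=[3.0, -0.2, 5.0],
                       sigma=y_errs, absolute_sigma=True)
print("A, B, C =", popt)
print("uncertainties:", np.sqrt(np.diag(pcov)))
```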
The resulting gluon density is displayed in Fig. 4 at the scale $`\mu _f^2=200\mathrm{GeV}^2`$ in the range $`0.01<x<0.1`$. The error band includes experimental and theoretical uncertainties as well as the uncertainty of $`\alpha _s(M_Z)`$. Within these uncertainties this direct determination is consistent with the results from global fits and with indirect determinations from HERA structure function data, and extends their range of sensitivity to larger $`x`$-values.
# $`\mathrm{\Gamma }(2)`$ MODULAR SYMMETRY, RENORMALIZATION GROUP FLOW AND THE QUANTUM HALL EFFECT
Yvon Georgelin<sup>a</sup>, Thierry Masson<sup>b</sup> and Jean-Christophe Wallet<sup>a</sup>
<sup>a</sup>Groupe de Physique Théorique, Institut de Physique Nucléaire
F-91406 ORSAY Cedex, France
<sup>b</sup>Laboratoire de Physique Théorique (U.M.R. 8627), Université de Paris-Sud
Bât. 211, F-91405 ORSAY Cedex, France
Abstract: We construct a family of holomorphic $`\beta `$-functions whose RG flow preserves the $`\mathrm{\Gamma }(2)`$ modular symmetry and reproduces the observed stability of the Hall plateaus. The semi-circle law relating the longitudinal and Hall conductivities that has been observed experimentally is obtained from the integration of the RG equations for any permitted transition, which can be identified from the selection rules encoded in the flow diagram. The generic scale dependence of the conductivities is found to agree qualitatively with the present experimental data. The existence of a crossing point occurring in the crossover of the permitted transitions is discussed.
(May 1999)
LPT-99/02
1. INTRODUCTION
The Quantum Hall Effect (QHE) is a remarkable phenomenon occurring in a two-dimensional electron gas in a strong magnetic field at low temperature . Since the discovery of the quantized integer and fractional Hall conductivity, the QHE has been an intensive field of theoretical and experimental investigation. The pioneering theoretical contributions analyzing the basic features of the hierarchy of the Hall plateaus have triggered numerous works aiming to provide a better understanding of the underlying properties governing the complicated phase diagram associated with the quantum Hall regime, together with the precise nature of the various observed transitions between plateaus, and/or focusing on a characterization of a suitable theory.
It has been realized for some time that modular symmetries may well be of interest for a deeper understanding of salient properties of the QHE. For instance, the superuniversality proposed in to explain the apparent similarity of the observed transitions is reminiscent of modular transformations. Besides, it has been shown that some properties of the phase diagram may well be explained in terms of modular group transformations in a two-parameter scaling theory. At the present time, a fully satisfactory microscopic or effective theory for the QHE, from which the relevant modular symmetry (if any) would come out, is still lacking. This has motivated studies focused on the derivation of general constraints on the phase diagram (and/or expressions for the conductivities) coming from the full modular group or some of its subgroups . Indeed, it is well known that the existence of a discrete symmetry group acting on the parameter space of a theory induces restrictions on the renormalization group (RG) flow. This has been pointed out , in the case of the full modular group, which in that context can be viewed as a rich extension of the old Kramers-Wannier $`Z_2`$ duality of the two-dimensional Ising model. This interesting aspect has been applied in various areas of physics, such as statistical systems , extended Sine-Gordon theories , as well as the non-perturbative analysis of $`N=2`$ supersymmetric Yang-Mills theory .
In this paper, we construct a family of holomorphic $`\beta `$-functions which reproduces the observed stability of the Hall plateaus and whose corresponding RG flow in the conductivity plane (i.e. the parameter space) preserves a $`\mathrm{\Gamma }(2)`$ symmetry acting on it. The paper is organized as follows. Section 2 is devoted to the construction. In section 3, we discuss the corresponding physical consequences. We show in particular that the recently observed semi-circle law relating the longitudinal and Hall conductivities can be recovered from the integration of the RG equations, and that the predicted crossover for the various transitions is in good qualitative agreement with the present experimental observations. We also compare our results to those obtained in a recent work dealing with the construction of a $`\beta `$-function based on another (larger) subgroup of the modular group \[7c\]. In section 4, we collect the main results of this paper and conclude.
2. CONSTRUCTION OF THE $`\beta `$-FUNCTIONS
2.1 Basic properties of $`\mathrm{\Gamma }(2)`$
The properties of the modular group $`\mathrm{\Gamma }(1)`$ ($`PSL(2,Z)`$) and its various subgroups can be found in . In this section we collect all the relevant ingredients that we will use in the subsequent analysis. Let $`𝒫`$ and $`z`$ denote respectively the open upper-half complex plane and a complex coordinate on $`𝒫`$ (Im $`z>0`$). One defines $`\overline{𝒫}\equiv 𝒫\cup Q`$, where $`Q`$ is the set of rational numbers. We first recall that the group $`\mathrm{\Gamma }(2)`$ is the set of transformations $`G`$ acting on $`\overline{𝒫}`$ defined by:
$$G(z)=\frac{az+b}{cz+d},\quad a,b,c,d\in Z,\quad (a,d)\text{ odd and }(b,c)\text{ even}$$
$`(2.1a)`$
$$ad-bc=1\quad (\text{unimodularity condition})$$
$`(2.1b).`$
$`\mathrm{\Gamma }(2)\subset \mathrm{\Gamma }(1)`$ is the free group generated by
$$T^2(z)=z+2,\quad \mathrm{\Sigma }(z)=ST^{-2}S(z)=\frac{z}{2z+1}$$
$`(2.2),`$
where $`T(z)=z+1`$ and $`S(z)=-\frac{1}{z}`$ are the two generators of the modular group $`\mathrm{\Gamma }(1)`$; $`\mathrm{\Gamma }(2)`$ is known in the mathematical literature as the principal congruence unimodular group at level 2. The corresponding principal fundamental domain $`𝒟_{\mathrm{\Gamma }(2)}`$, depicted on fig.1, has three cusps, denoted by \[$`0`$\], \[$`1`$\] and \[$`i\infty `$\], that are respectively identified with the three points 0, 1 and $`i\infty `$ of $`\overline{𝒫}`$, which are the only fixed points of $`\mathrm{\Gamma }(2)`$ on $`𝒟_{\mathrm{\Gamma }(2)}`$. The whole set of fixed points of $`\mathrm{\Gamma }(2)`$ on $`\overline{𝒫}`$ is obtained as usual by successive $`\mathrm{\Gamma }(2)`$ transformations of these three points (observe that $`\mathrm{\Gamma }(2)`$ has only real fixed points). Notice the identification of the frontiers of $`𝒟_{\mathrm{\Gamma }(2)}`$ as indicated on fig.1.
It has been pointed out recently in \[7b,13\] that $`\mathrm{\Gamma }(2)`$ can be used to derive a model for a classification of fractional (as well as integer) Hall states. This classification, which refines the Jain one and involves a kind of generalization of the ”law of the corresponding states” derived in , seems to reproduce successfully the observed hierarchical structure of the Hall states. The salient feature of the proposed construction is that each family of quantum fluid states indexed by fractions with odd denominators (plus the insulator state(s)) is generated from a metallic state labelled by an even denominator fraction through specific $`\mathrm{\Gamma }(2)`$ transformations. To be more precise, we first rewrite the transformations (2.1a,b) as
$$G(z)=\frac{(2s+1)z+2n}{2rz+(2k+1)},\quad (2s+1)(2k+1)-4rn=1$$
$`(2.3a,b)`$
where $`k,n,r,s\in Z`$. We identify for the moment $`z`$ with a filling factor $`\nu =p/q`$ and select a given Hall metallic state labelled by $`\lambda =\frac{(2s+1)}{2r}`$ ($`r\neq 0`$, $`s\geq 0`$). Then, as shown in , one obtains a hierarchy of Hall (liquid) states surrounding the metallic state $`\lambda `$ from the images $`G_{n,k}^\lambda (0)`$ and $`G_{n,k}^\lambda (1)`$ of 0 and 1 by the family of transformations $`G_{n,k}^\lambda \in \mathrm{\Gamma }(2)`$ ($`n`$ and $`k`$ satisfying (2.3b)) which send $`z=i\infty `$ onto $`\lambda `$. As an example, the double Jain family of states surrounding the metallic state $`\lambda =1/2`$
$$\frac{1}{3},\frac{2}{5},\frac{3}{7},\mathrm{\dots },\frac{N}{2N+1}$$
$`(2.4a),`$
$$\frac{2}{3},\frac{3}{5},\frac{4}{7},\mathrm{\dots },\frac{N}{2N-1}$$
$`(2.4b),`$
can be easily recovered in this scheme from the images $`G_{n,k}^{1/2}(0)`$ and $`G_{n,k}^{1/2}(1)`$ with respectively $`n\geq 0`$ for (2.4a) and $`n<0`$ for (2.4b). We recall that this construction separates the even numerator Hall fractions from the odd numerator ones so that it may be possible to take into account a possible particle-hole symmetry within the present scheme. Other families surrounding any state indexed by an even denominator (metallic state) can be constructed in the same way so that all the experimentally observed Hall states can be taken into account in the present construction. It has been further shown in \[7b\] that the corresponding predicted global organization of the various Hall conductivity states stemming from the action of $`\mathrm{\Gamma }(2)`$ fits quite well with (some of) the present experimental data.
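To make this generation of the hierarchy concrete, the following sketch (Python) lists images of 0 and 1 under the transformations $`G_{n,k}^{1/2}`$: choosing $`s=0`$ and $`r=1`$ (so that $`i\infty `$ is sent to $`1/2`$), the condition (2.3b) forces $`k=2n`$, and scanning $`n`$ reproduces members of the families (2.4a,b):

```python
from fractions import Fraction

def G(z, s, n, r, k):
    """A Gamma(2) transformation in the form (2.3a); the caller must
    ensure the unimodularity condition (2.3b)."""
    return ((2*s + 1)*z + 2*n) / (2*r*z + (2*k + 1))

# Transformations sending i*infinity to the metallic state lambda = 1/2:
# choose s = 0, r = 1; then (2.3b) forces k = 2n.
for n in range(-3, 4):
    k = 2*n
    assert (2*0 + 1)*(2*k + 1) - 4*1*n == 1       # condition (2.3b)
    print(n, G(Fraction(0), 0, n, 1, k), G(Fraction(1), 0, n, 1, k))
# n >= 0 images of 0 and 1 fall in family (2.4a): 0, 1/3, 2/5, 3/7, 4/9, ...
# n < 0 images fall in family (2.4b): 1, 2/3, 3/5, 4/7, 5/9, ...
```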
The possible important role played by modular symmetries in the QHE has been considered for some time. Some of the related works have emphasized that (most of) the main features of the experimentally observed phase structure of the QHE seem to be recovered from the action of a suitable subgroup of the modular group on the complex conductivity plane, hereafter identified with $`\overline{𝒫}`$ and parametrized by $`z=\frac{\hbar }{e^2}(\sigma _{xy}+i\sigma _{xx})`$ (in the following, $`e^2=\hbar =1`$, with Im $`z=\sigma _{xx}\geq 0`$). In particular the group $`\mathrm{\Gamma }_0(2)`$ has been mostly considered by some authors and used recently in \[7c\] to constrain the $`\beta `$-function governing the Renormalization Group (RG) flow of the conductivities for Quantum Hall systems. The corresponding studies have been performed under various physically acceptable sets of hypotheses. In the next subsection, we will consider similar hypotheses to study the restrictions which can be obtained from $`\mathrm{\Gamma }(2)`$ on the $`\beta `$-function of the RG flow for the conductivities. We therefore assume that the action of $`\mathrm{\Gamma }(2)`$ on real filling factors $`\nu =p/q`$ described above can be extended to an action on $`\overline{𝒫}`$.
2.2 Holomorphic $`\beta `$-functions from $`\mathrm{\Gamma }(2)`$ symmetry
Let $`t`$ be a scale parameter whose possible explicit form will be specified in section 3. We first recall that scale transformations on the complex conductivity plane $`\overline{𝒫}`$ generate a RG flow $`z\to R(t;z,\overline{z})\equiv z(t)`$, from which the $`\beta `$-function is defined to be the (contravariant) vector field tangent to this flow, namely:
$$\beta (z,\overline{z})=\frac{dR(t;z,\overline{z})}{dt}=\frac{dz(t)}{dt}$$
$`(2.5).`$
It is well known that the existence of a discrete symmetry group acting on the parameter space of a theory may induce restrictions on the RG flow and, in turn, provide some non-perturbative information on it (stemming basically from reasonable ansätze for the corresponding $`\beta `$-functions). As mentioned in the introduction, this aspect has already been investigated in various areas of physics. Most of the considerations involved in these works can be adapted to the present situation, for which we now outline the main steps of the analysis.
First of all, the crucial mathematical hypothesis is that the action of $`\mathrm{\Gamma }(2)`$ commutes with the RG flow, which basically means that if the $`\mathrm{\Gamma }(2)`$ symmetry of the parameter space (that is in the present case the conductivity plane) holds at a given scale, it will be preserved by the RG downwards to lower scales. This hypothesis in particular determines the $`\mathrm{\Gamma }(2)`$-transformations of the $`\beta `$-function, given by
$$\beta (G(z),\overline{G(z)})=(cz+d)^{-2}\beta (z,\overline{z})$$
$`(2.6),`$
for any $`G\in \mathrm{\Gamma }(2)`$, and may account for the apparently observed superuniversality in the Quantum Hall transitions (it is easy to show that distinct critical points of the RG flow related by a $`\mathrm{\Gamma }(2)`$ transformation will have the same scaling exponents).
Eqn. (2.6) indicates that $`\beta `$ transforms as a modular form of $`\mathrm{\Gamma }(2)`$ with weight $`-2`$ whenever $`\beta `$ is holomorphic in $`z`$ on $`𝒫`$. In this latter case, the application of general results stemming from complex analysis already permits one to strongly constrain the possible expression for an admissible $`\beta `$-function. In the rest of this section, we will therefore assume that $`\beta `$ is holomorphic in $`z`$. This hypothesis will be commented upon at the beginning of section 3.
Now, a general theorem on modular forms states that any modular form $`\omega (z)`$ of any subgroup $`\mathrm{\Gamma }`$ (of finite index) of the modular group with even weight $`k`$ can be represented on $`𝒟_\mathrm{\Gamma }`$, the fundamental domain of $`\mathrm{\Gamma }`$, as
$$\omega (z)=\left(\lambda ^{\prime }(z)\right)^{k/2}R(\lambda )$$
$`(2.7),`$
where $`\lambda (z)`$ is a modular function of $`\mathrm{\Gamma }`$ (that is, a function invariant under the action of $`\mathrm{\Gamma }`$ defined on $`𝒟_\mathrm{\Gamma }`$), $`\lambda ^{\prime }(z)=\frac{d\lambda (z)}{dz}`$ and $`R(\lambda )`$ is a rational function in $`\lambda `$. In the present case, $`k=-2`$ and $`\lambda `$, the modular function of $`\mathrm{\Gamma }(2)`$, can be chosen as
$$\lambda =\frac{\theta _2^4}{\theta _3^4}$$
$`(2.8),`$
and satisfies on $`𝒟_{\mathrm{\Gamma }(2)}`$
$$\lambda (i\infty )=0,\quad \lambda (0)=1,\quad \lambda (1)=\infty $$
$`(2.9).`$
In (2.8), the Jacobi $`\theta `$ functions $`\theta _2`$ and $`\theta _3`$ (together with $`\theta _4`$ given here for further convenience) are defined by
$$\theta _2=2\sum _{n=0}^{\infty }q^{(n+\frac{1}{2})^2}=2q^{\frac{1}{4}}\prod _{n=1}^{\infty }(1-q^{2n})(1+q^{2n})^2$$
$`(2.10a),`$
$$\theta _3=\sum _{n=-\infty }^{\infty }q^{n^2}=\prod _{n=1}^{\infty }(1-q^{2n})(1+q^{2n-1})^2$$
$`(2.10b),`$
$$\theta _4=\sum _{n=-\infty }^{\infty }(-1)^nq^{n^2}=\prod _{n=1}^{\infty }(1-q^{2n})(1-q^{2n-1})^2$$
$`(2.10c),`$
where $`q=\mathrm{exp}(i\pi z)`$.
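These series are easy to evaluate numerically, which is convenient for checking the statements below. A minimal sketch (Python; the truncation order is an arbitrary choice) computing $`\lambda `$, the special values (2.9) and the relation $`\lambda ^{\prime }(z)=i\pi \lambda (z)\theta _4^4(z)`$ used below:

```python
import numpy as np

def thetas(z, nmax=40):
    """Truncated q-series for theta_2, theta_3, theta_4 (eqs. 2.10a-c);
    converges for Im z > 0, the faster the larger Im z."""
    q = np.exp(1j * np.pi * z)
    n = np.arange(-nmax, nmax + 1)
    th2 = 2.0 * np.sum(q ** ((np.arange(0, nmax) + 0.5) ** 2))
    th3 = np.sum(q ** (n ** 2))
    th4 = np.sum((-1.0) ** n * q ** (n ** 2))
    return th2, th3, th4

def lam(z):
    th2, th3, th4 = thetas(z)
    return (th2 / th3) ** 4

print(lam(10j))                # ~ 0, i.e. lambda(i*infinity) = 0
print(lam(0.001 + 0.05j))      # ~ 1, i.e. lambda(0) = 1

# Finite-difference check of lambda' = i*pi*lambda*theta_4^4
z, h = 0.3 + 0.7j, 1e-6
num = (lam(z + h) - lam(z - h)) / (2 * h)
print(num, 1j * np.pi * lam(z) * thetas(z)[2] ** 4)
```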
Therefore, consistency with the previous two hypotheses requires the general expression for the $`\beta `$-function on $`𝒟_{\mathrm{\Gamma }(2)}`$ to be $`\beta (z)=\lambda ^{\prime }(z)^{-1}R(\lambda )`$ with $`\lambda `$ given in (2.8), which can then be straightforwardly extended to $`𝒫`$ by the action of $`\mathrm{\Gamma }(2)`$. We now have to confront this general expression with the experimental situation.
The strongest experimental constraint comes from the observed stability of the Hall plateaus labelled by integer as well as odd denominator fractional filling factors, which must presumably correspond to attractive stable fixed points of the $`\beta `$-function. To apply this constraint, we proceed as follows. First, observe that in the present framework the fixed points of $`\mathrm{\Gamma }(2)`$ must by construction be critical points of the $`\beta `$-function. On $`𝒟_{\mathrm{\Gamma }(2)}`$, the only fixed points of $`\mathrm{\Gamma }(2)`$ are 0, 1 and $`i\infty `$, which can be respectively identified with the (Hall) insulator state, the first Landau level and some (unobserved) superconducting state. Next, observe that in the classification of the Hall states based on the $`\mathrm{\Gamma }(2)`$ symmetry \[7b,13\], the Hall plateaus correspond to the images of 0 and 1 by $`\mathrm{\Gamma }(2)`$, as recalled in section 2.1 (the images of $`i\infty `$ correspond to even denominator (metallic) Hall states). From these observations, and under the further assumption that $`\beta (z)`$ has no critical points other than the fixed points of $`\mathrm{\Gamma }(2)`$, it is easy to realize that $`\beta (z)`$ can be conveniently parametrized on $`𝒟_{\mathrm{\Gamma }(2)}`$ as
$$\beta (z)=\frac{\alpha }{\lambda ^{\prime }(z)}\lambda ^p(z)(\lambda (z)-1)^q$$
$`(2.11),`$
where $`\alpha `$ is a complex constant, $`\lambda `$ is still given by (2.8), $`p,q\in Z`$ and use has been made of (2.9).
Now, according to the previous discussion, eqn.(2.11) must have zeros at $`z=0`$ and $`z=1`$. This is realized provided
$$q\geq 1$$
$`(2.12a),`$
for $`\beta (z=0)=0`$ and
$$p+q-1\leq 0$$
$`(2.12b),`$
for $`\beta (z=1)=0`$. These constraints can be easily derived by combining (2.11) and (2.9) with the explicit expression for $`\lambda ^{\prime }(z)`$, given by $`\lambda ^{\prime }(z)=i\pi \lambda (z)\theta _4^4(z)`$, obtained from (2.8) and the functional relation
$$\frac{\theta _2^{\prime }}{\theta _2}-\frac{\theta _3^{\prime }}{\theta _3}=\frac{i\pi }{4}\theta _4^4$$
$`(2.13),`$
and making use of the following asymptotic expansions for the Jacobi $`\theta `$ functions:
$$\theta _2(z)\sim \sqrt{\frac{i}{z}},\quad \theta _3(z)\sim \sqrt{\frac{i}{z}},\quad \theta _4(z)\sim \sqrt{\frac{i}{z}}\mathrm{exp}\left(-\frac{i\pi }{4z}\right)\quad \text{for }z\to 0$$
$`(2.14a),`$
$$\theta _2(z)\sim \sqrt{\frac{i}{z-1}},\quad \theta _3(z)\sim \sqrt{\frac{i}{z-1}}\mathrm{exp}\left(-\frac{i\pi }{4(z-1)}\right),\quad \theta _4(z)\sim \sqrt{\frac{i}{z-1}}\quad \text{for }z\to 1$$
$`(2.14b).`$
From the combination of (2.11) with (2.12a) and (2.12b), one easily realizes that $`\beta (z)`$ must be singular at $`z=i\infty `$, using for instance
$$\theta _2(z)\sim \mathrm{exp}\left(\frac{i\pi z}{4}\right),\quad \theta _3(z)\sim 1,\quad \theta _4(z)\sim 1\quad \text{for }z\to i\infty $$
$`(2.15),`$
a result that can be expected from general properties of holomorphic functions. Finally, we find that the complex constant $`\alpha `$ appearing in (2.11) must be chosen real in order to obtain from the $`\beta `$-function a flow with the desired properties, in particular involving stable fixed points at values corresponding to the Hall plateaus. This is illustrated on figs. 2 and 3, where the flow of (2.11) with $`q=1`$, $`p=0`$ and $`\alpha =-1`$ is represented.
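The corresponding flow can be traced numerically, as a rough illustration of figs. 2 and 3. A minimal sketch (Python, reusing the theta-function routine above; the step size and the initial point are arbitrary choices):

```python
import numpy as np
# reuses thetas() from the sketch in section 2.2

def beta(z):
    """Eq. (2.11) with p = 0, q = 1 and alpha = -1."""
    th2, th3, th4 = thetas(z)
    lam_z = (th2 / th3) ** 4
    lam_prime = 1j * np.pi * lam_z * th4 ** 4
    return -(lam_z - 1.0) / lam_prime

z = 0.3 + 0.6j                 # initial conductivities sigma_xy + i*sigma_xx
dt = 0.01
for _ in range(5000):          # crude Euler integration of dz/dt = beta(z)
    z = z + beta(z) * dt
print("flow endpoint:", z)
# as t grows, lambda(z) -> 1, i.e. z approaches an image of z = 0 under Gamma(2)
```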
3. DISCUSSION
Let us first summarize what has been derived up to now. Putting (2.8), (2.11) and (2.12) together with $`\lambda ^{\prime }(z)=i\pi \lambda (z)\theta _4^4(z)`$, we finally obtain
$$\beta =i^{2q-1}\frac{\alpha }{\pi }\frac{\theta _2^{4(p-1)}\theta _4^{4(q-1)}}{\theta _3^{4(p+q-1)}}$$
$$p,q\in Z,\quad q\geq 1,\quad p+q-1\leq 0,\quad \alpha <0$$
$`(3.1),`$
which is defined on $`𝒟_{\mathrm{\Gamma }(2)}`$ and can be straightforwardly extended to $`\overline{𝒫}`$. This is the main result of the previous section. Eqn (3.1) represents a physically admissible family of holomorphic $`\beta `$-functions reproducing in particular the experimentally observed stability of the Hall plateaus and whose corresponding RG flow in the complex conductivity plane (i.e. the parameter space) preserves a $`\mathrm{\Gamma }(2)`$ symmetry acting on it.
In this section we will show that the physical predictions that can be extracted from (3.1) are in good agreement with the present experimental observations. Namely, we will show that the ”semi-circle”law which has been recently observed can be recovered from the behaviour of $`\sigma _{xy}`$ and $`\sigma _{xx}`$ obtained from the integration of (3.1). We will also show that the crossover in the plateau-plateau and plateau-insulator (observed) transitions described by (3.1) agrees qualitatively with the present experimental observations.
Before starting the discussion, some important remarks concerning the holomorphy hypothesis as well as the previously proposed candidates for $`\beta `$-functions are in order. As is well known, the holomorphy constraint is a very strong one which severely restricts the possible expression for $`\beta `$. Relaxing this constraint gives much more freedom in the construction of physically admissible $`\beta `$-functions. For a recent comprehensive analysis of the non-holomorphic case, see e.g. and references therein. Here, we notice that the non-holomorphic $`\beta `$ that have been proposed in can obviously be related, through their asymptotic (large $`\sigma _{xx}`$) behaviour, to the $`\beta `$-functions stemming from the dilute-instanton gas calculations performed in the framework of non-linear sigma-models , whereas the family of holomorphic $`\beta `$-functions given in (3.1) cannot be. In particular, the corresponding asymptotic (large $`\sigma _{xx}`$) behaviour is different and indeed cannot be finite, as it is for those non-holomorphic $`\beta `$’s. Imposing finiteness at large $`\sigma _{xx}`$ for (3.1) would necessarily produce unstable Hall plateaus, in clear contradiction with the experimental observations. Therefore, one of the characteristic features of (3.1) is the existence of a singularity as $`\sigma _{xx}\to \infty `$ and consequently at any even denominator rational point on the real axis in the conductivity plane. Anticipating the discussion, the physical consequence for the corresponding flow diagram is the existence of unstable paths connecting even denominator (metallic) Hall states to odd denominator (or insulator) states, which might be associated with ”unfavored” (but nevertheless observable) transitions.
Let us now turn to previously proposed candidates for a $`\beta `$-function and RG flows. Recall that the early developments on this subject were essentially based on a field theory of the type of the non-linear sigma model (see also ) mentioned just above, with two (dimensionless) coupling constants identified with the longitudinal ($`\sigma _{xx}`$) and Hall ($`\sigma _{xy}`$) conductivities of the disordered electron gas. Starting from this framework, a RG flow diagram for the conductivities has been conjectured whose characteristic feature is the existence of fixed points occurring at some $`\sigma _{xx}`$ and $`\sigma _{xy}`$ equal to half-integer values. Although this proposal is appealing and seems to capture some experimental features of the (mainly integer) Quantum Hall Effect, it is plagued with a problem. Indeed, the postulated fixed points (if they exist as fixed points of the non-linear sigma model , a fact which is not clear at the present time) correspond to a small $`\sigma _{xx}`$ (strong coupling) regime, whereas the dilute-instanton gas calculation that gives rise to the non-holomorphic $`\beta `$-function underlying the conjectured RG flow is valid only in the large $`\sigma _{xx}`$ (weak coupling) regime. Whether this crude extrapolation from the weak to the strong coupling regime is finally correct or not is unclear at the present time. Keeping in mind the above remarks, and the fact that there are, as far as we know, no experimental facts favoring either holomorphic or non-holomorphic $`\beta `$, one can reasonably regard the family of holomorphic $`\beta `$’s (3.1) based on $`\mathrm{\Gamma }(2)`$ as possible candidates for a description of aspects of the physics of the QHE.
We close these remarks by noticing that similar constructions of holomorphic $`\beta `$-functions based on larger subgroups of the full modular group have been performed recently, most of them focusing on the subgroup $`\mathrm{\Gamma }_0(2)`$. Recall that $`\mathrm{\Gamma }_0(2)`$ is generated by $`T(z)=z+1`$ and $`\mathrm{\Sigma }(z)=z/(2z+1)`$ and that one has $`\mathrm{\Gamma }(2)\subset \mathrm{\Gamma }_0(2)`$, i.e. $`\mathrm{\Gamma }(2)`$ is a subgroup of $`\mathrm{\Gamma }_0(2)`$. It has been shown \[7c\] that a $`\mathrm{\Gamma }_0(2)`$-based construction gives rise to a qualitative behaviour of the crossover in the various transitions that is similar to the one obtained from our $`\mathrm{\Gamma }(2)`$-based construction. There are however specific differences appearing in the corresponding flows. In particular, one of the fixed points of $`\mathrm{\Gamma }_0(2)`$ in its fundamental domain has to be identified naturally with the crossing point appearing in the plateau-insulator transitions, whose position is therefore entirely (”rigidly”) determined in the conductivity plane. Recall that it corresponds to $`\sigma _{xy}=1/2`$ and $`\sigma _{xx}=1/2`$ ($`z=\frac{1+i}{2}`$) for the $`0\to 1`$ transition. Moreover, the proposed $`\beta `$-function has a pole at this point. In that case, the unique path in the conductivity plane connecting $`z=0`$ to $`z=1`$, which must obviously go through the pole $`z=\frac{1+i}{2}`$, appears to be unstable (and indeed can evolve either from $`0`$ toward $`\infty `$ or from $`0`$ toward $`1/2`$), a fact that can easily be verified numerically by plotting the flow generated by the corresponding $`\beta `$. Further investigations are needed to clarify the experimental and theoretical status of this predicted crossing point. In the $`\mathrm{\Gamma }(2)`$ case, the situation is quite different. Indeed, the point $`z=\frac{1+i}{2}`$ does not play a distinguished role, simply because it is not a fixed point of $`\mathrm{\Gamma }(2)`$, so that it can consequently be neither a zero nor a pole of the corresponding $`\beta `$ in the present framework.
Let us now examine critically the physical consequences encoded in (3.1). First, notice that $`\beta `$-functions defined by (3.1) and such that $`p+2q-2=0`$ holds generate a flow approaching $`z=0`$ and $`z=1`$ in the same manner. This can be easily seen by combining (3.1) with (2.14a,b) and studying the behaviour of $`\beta `$ in the vicinity of $`z=0`$ and $`z=1`$. In particular, it can be straightforwardly realized that the fastest approach to 0 and 1 is obtained when $`q=1`$ and $`p=0`$, which corresponds to asymptotic expressions for $`\beta `$ that do not involve exponential factors. From now on we will focus on this latter situation.
Now, we point out that the RG equation can be formally integrated along the single trajectories in the parameter space, leading to an algebraic relation between $`\sigma _{xy}`$ and $`\sigma _{xx}`$ which reproduces the ”semi-circle” law that has been experimentally observed, at least at low temperature, in the study of the plateau-insulator and plateau-plateau transitions . Indeed, combining (2.5) with (2.11) (in which now $`p=0`$, $`q=1`$ and we take $`\alpha =-1`$; any other real value of $`\alpha `$ corresponds to a simple rescaling of $`t`$, so that the ensuing analysis is not modified), one obtains
$$dt=-\frac{d\lambda }{\lambda -1}$$
$`(3.2),`$
where $`\lambda `$ is still given by (2.8). The integration of (3.2) gives
$$t=-\mathrm{log}(\lambda -1)+\chi $$
$`(3.3),`$
where $`\chi `$ is a complex constant and $`log`$ denotes a determination of the complex logarithm. It follows, for any determination of the logarithm, that
$$\lambda =1+\mathrm{exp}(\chi -t)$$
$`(3.4),`$
and one has $`\lambda \to 1`$ (resp. $`+\infty `$) for $`t\to +\infty `$ (resp. $`t\to -\infty `$).
Let us assume $`\chi `$ real, so that $`\lambda `$ is real for any $`t`$. Then, it is a general result that the inversion of the map $`z(t)\to \lambda (z(t))=1+\mathrm{exp}(\chi -t)`$, with $`\lambda `$ given by (2.8), gives rise to a curve $`t\to z(t)`$ which is a semi-circle linking two rational numbers as end-points on the real axis, together with a point $`z_0`$ located on this semi-circle and satisfying $`\lambda (z_0)=1+\mathrm{exp}\chi `$. The inversion of this map does not result in a unique semi-circle because if $`t\to z(t)`$ is a solution then, for any $`G\in \mathrm{\Gamma }(2)`$, $`t\to G(z(t))`$ is also a solution (whose end-points on the real axis are therefore the images by $`G`$ of the end-points of the initial solution). But it can be easily realized that the choice of such a solution among all the possible ones is completely equivalent to the choice of a $`z_0`$ such that $`\lambda (z_0)=1+\mathrm{exp}\chi `$, which can be regarded as an initial condition for the RG equation.
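This statement is easy to check numerically: on the semi-circle with end-points 0 and 1, $`\lambda `$ should come out real and larger than 1. A short sketch (Python, reusing the routine given in section 2):

```python
import numpy as np
# reuses lam() from the sketch in section 2

for th in np.linspace(0.15, np.pi - 0.15, 7):
    z = 0.5 + 0.5 * np.exp(1j * th)   # semi-circle with end-points 0 and 1
    lz = lam(z)
    print(f"Im z = {z.imag:.3f}: lambda = {lz.real:.4f} + {lz.imag:.1e}j")
# lambda is real (up to truncation error) and > 1, as required by (3.4)
```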
It is instructive to attempt to make closer contact with the experimental situation through the following (more physical) interpretation of $`z_0`$. Observe first that one should have $`t=0`$ at $`z=z_0`$. Assume then that $`t`$ can be cast into the form
$$t=f\left(\frac{1}{T^\gamma }\left(\frac{1}{B}-\frac{1}{B_c}\right)^\delta \right)$$
$`(3.5),`$
at least in the domain of interest, a form which is inspired by a two-parameter scaling framework, where $`f`$ is a monotonic function with $`f(0)=0`$, $`B_c`$ is a critical value of the external magnetic field $`B`$ and $`T`$ is the temperature. This therefore suggests that $`z_0`$ corresponds to the point on the semi-circle where $`B=B_c`$, independently of the temperature, so that $`z_0`$ might be naturally identified with the crossing point appearing in the crossover between the two Hall states whose filling factors correspond to the rational points on the real axis connected by the semi-circle.
To illustrate the previous discussion, a representative example of the resulting flow is depicted on figure 2, where only semi-circle flow lines are considered. The plateau-insulator transition $`0\to 1`$ corresponds to the (full line) large semi-circle linking $`0`$ and $`1`$. As should be clear by now, the action of successive $`\mathrm{\Gamma }(2)`$ transformations on this semi-circle (which may be viewed as a template for the transitions) gives rise to the other (full-line) semi-circles appearing on this figure. For instance, the transformation $`G(z)=\frac{3z-2}{2z-1}\in \mathrm{\Gamma }(2)`$ maps the semi-circle for the $`0\to 1`$ transition into the semi-circle connecting $`1`$ and $`2`$, which therefore corresponds to a direct transition between the Hall plateaus with integer filling factors $`\nu =1`$ and $`\nu =2`$, whereas $`G(z)=\frac{z}{2z+1}\in \mathrm{\Gamma }(2)`$ maps the $`0\to 1`$ semi-circle into the one associated with the $`0\to 1/3`$ transition. Therefore, if the present framework is correct, it is expected that such a semi-circle law should be experimentally observed for any other permitted transition. This is already the case for the observed $`0\to 1/3`$, $`0\to 1`$ and $`1\to 2`$ transitions that have been recently studied in e.g. some Si MOSFET devices in the Quantum Hall regime \[20,15b\], for which the data on $`\sigma _{xx}`$ and $`\sigma _{xy}`$ fit well with the semi-circle law expected from the present $`\mathrm{\Gamma }(2)`$-based framework. Notice by the way that selection rules for the permitted direct plateau-plateau and plateau-insulator transitions can be obviously extracted from the flow diagram depicted on fig.2, by simply observing that each permitted transition is rigidly linked with a semi-circle whose end-points on the real axis correspond to the filling factors labelling the transition.
The above analysis can be easily adapted to the case where $`\chi `$ is a complex number. In this latter situation, the trajectories $`t\to z(t)`$ (each of which still connects two rational numbers on the real axis) are no longer semi-circles, as depicted on fig.3.
As announced at the beginning of this section, there are a priori possible transitions of the type insulator $`\to `$ even denominator (metallic) state as well as odd denominator $`\to `$ metallic state, namely $`0\to 1/2`$, $`1\to 1/2`$ and the corresponding images by $`\mathrm{\Gamma }(2)`$. Some representative examples of these transitions are indicated on figs. 2 and 3 by dashed lines (notice that, as a consequence of the present construction, there also exist transitions of the type $`0\to \infty `$ or $`1\to \infty `$ and the corresponding $`\mathrm{\Gamma }(2)`$ images; the state $`\sigma _{xx}=\infty `$ would represent some yet unobserved superconducting state). The associated trajectories are in fact unstable, as any small deviation from the semi-circle connecting, say, $`0`$ to $`1/2`$ will give rise to a quite different transition. For instance, it can be easily realized from fig.3 that a small perturbation of the flow line associated with the $`0\to 1/2`$ transition will give rise to either the $`0\to 1`$ or the $`0\to 1/3`$ transition, whereas any plateau-plateau as well as plateau-insulator transition is stable against perturbations.
The explicit expressions for $`\sigma _{xx}`$ and $`\sigma _{xy}`$ as functions of $`t`$, which are expected to provide the qualitative behaviour of the crossover for a transition, can be easily extracted from (3.2)-(3.4). The method is standard and essentially similar to the one used in \[7c\], and the resulting expressions are similar to those given in \[7c\]. It is however interesting to plot them numerically. This is done on fig.4 (resp. fig.6) for the $`1\to 0`$ (resp. $`1\to 2`$) transition, whereas fig.5 and fig.7 represent the $`t`$-dependence of the corresponding resistivities. The predicted behaviour of the conductivities (as well as the resistivities) is in good qualitative agreement with the experimental one reported in particular in \[15a,b\]. We observe by the way that the ”almost linear” asymptotic shape of $`\rho _{xx}(t)`$ for $`t>0`$ that we obtain fits well with a recent experimental measurement of the corresponding quantity for the $`1\to 0`$ transition \[15b\] (keeping in mind the possible interpretation of $`t`$ parametrized by (3.5)). Notice that somewhat similar results for the crossovers have been obtained in \[7c\], where the symmetry $`\mathrm{\Gamma }_0(2)`$ is considered instead of $`\mathrm{\Gamma }(2)`$.
As a last remark, notice that a precise experimental determination of the location of the crossing point on the semi-circular trajectories in the conductivity plane would be helpful for the identification of the possibly relevant modular subgroup. Indeed, a crossing point found to be located exactly at the uppermost point of the semi-circle (e.g. corresponding to $`z=\frac{1+i}{2}`$ for the $`0\to 1`$ transition) would favor the group $`\mathrm{\Gamma }_0(2)`$. On the other hand, a crossing point found to deviate significantly from the uppermost location would favor $`\mathrm{\Gamma }(2)`$ (putting therefore $`\mathrm{\Gamma }_0(2)`$ into trouble). This latter case seems to have been observed in the experiment performed in the second of refs. \[15b\]. Note that, in this experiment, the corresponding candidates for the crossing points in the $`0\to 1`$ and $`0\to 1/3`$ transitions can be related to each other (within, say, $`10\%`$) by a $`\mathrm{\Gamma }(2)`$ transformation, as must be the case in the present framework.
4. CONCLUSION
Let us summarize in physical terms the main results of this paper. We have proposed a family of holomorphic $`\beta `$-functions whose RG flow preserves the $`\mathrm{\Gamma }(2)`$ modular symmetry and which is consistent with the observed stability of the Hall plateaus. The semi-circle law relating the longitudinal and Hall conductivities that has been observed experimentally for the $`0\to 1`$, $`0\to 1/3`$ and $`1\to 2`$ transitions is obtained from the integration of the RG equations for these transitions and, in fact, must hold in the present framework for any permitted transition, which can be easily identified from the selection rules encoded in the flow diagram. Moreover, it has been shown that there exists a unique point on each semi-circle where the generic scale parameter $`t`$ vanishes. This, combined with a two-parameter scaling hypothesis as an additional phenomenological input (yielding a plausible parametrization for $`t`$ given by (3.5)), suggests interpreting this point as the crossing point occurring in the crossover of the two Hall states involved in the transition. The generic scale dependence of the conductivities has been verified to agree qualitatively with the present experimental data. In the present framework, the trajectories in the conductivity plane involving an even denominator filling factor are found to be unstable. Although we do not have a clear interpretation (if any) of this, one might expect that the corresponding (observed) transitions are unfavored.
REFERENCES
1) The Quantum Hall Effect, 2nd ed., R.E. Prange and S.M. Girvin eds. (Springer-Verlag, New-York) 1990. See also Perspectives in Quantum Hall Effect, S.D. Sarma and A. Pinczuk eds. (Wiley, New York) 1997.
2) K.von Klitzing, G. Dorda and M. Pepper, Phys. Rev. Lett. 45 (1980) 494.
3) D.C. Tsui, H.L. Störmer and A.C. Gossard, Phys. Rev. Lett. 48 (1982) 1559.
4) R.B. Laughlin, Phys. Rev. Lett. 50 (1983) 1385; F.D. Haldane, Phys. Rev. Lett. 51 (1983) 605; B.I. Halperin, Phys. Rev. Lett. 52 (1984) 1583.
5) S. Kivelson, D.H. Lee and S.C. Zhang, Phys. Rev. B46 (1992) 2223.
6) C.A. Lütken and G.G. Ross, Phys.Rev. B45 (1992) 11837, Phys. Rev. B48 (1993) 2500; see also E. Fradkin and S. Kivelson, Nucl. Phys. B474 (1996) 543.
7a) C.A. Lütken, Nucl. Phys. B396 (1993) 670.
7b) Y. Georgelin, T. Masson and J.C. Wallet, J. Phys. A: Math. Gen. 30 (1997) 5065.
7c) B.P. Dolan, Modular invariance, Universality and crossover in the QHE, cond-mat/9809294; B.P. Dolan, Duality and the modular group in QHE, cond-mat/9805171.
8) J. Cardy, Nucl. Phys. B203 (1982) 17; J. Cardy and E. Rabinovici, Nucl. Phys. B203 (1982) 1.
9) D. Carpentier, J. Phys. A: Math. Gen. 32 (1999) 3865 and references therein.
10) N. Seiberg and E. Witten, Nucl. Phys. B426 (1994) 19; J.I. Latorre and C.A. Lütken, Phys. Lett. B421 (1998) 217; A. Ritz, Phys. Lett. B434 (1998) 54.
11) A.M. Dykhne and I.M. Ruzin, Phys. Rev. B50 (1994) 2369; I.M. Ruzin and S. Feng, Phys. Rev. Lett. 74 (1995) 154.
12) R.A. Rankin, Modular Forms and Functions, Cambridge University Press, 1977.
13) Y. Georgelin and J.C. Wallet, Phys. Lett. A224 (1997) 303.
14) J.K. Jain, Phys. Rev. B41 (1990) 7653.
15a) M. Hilke et al., Nature 395 (1998) 675.
15b) M. Hilke et al., Semi circle: An exact relation in the Integer or Fractional Quantum Hall Effect, cond-mat/9810217.
16) C.P. Burgess and C.A. Lütken, On the implication of discrete symmetries for the $`\beta `$-function of Quantum Hall System, cond-mat/9812396.
17) A.M.M. Pruisken quoted in ref. \[1a\], Phys. Rev. B32 (1985) 2636, Nucl. Phys. B285 (1987) 61.
18) D.E. Khmel’nitskii, Pis’ma Zh. Eksp. Teor. Fiz. 38 (1983) 454.
19) For a general discussion see B. Huckestein, Rev. Mod. Phys. 67 (1995) 357.
20) D. Shahar et al., Phys. Rev. Lett. 79 (1997) 479.
# Scale invariance and contingent claim pricing
## 1 Introduction
The essence of trading is the exchange of goods. Every transaction sets a ratio between the values of the two goods. This means that there is no such thing as the absolute value of an object; it can only be defined relative to the value of another object. If we only have one asset, we cannot assign a price to it. We need at least two assets. Then, after choosing one of these two assets, the other asset can be assigned a price relative to the first one. If we have $`n+1`$ tradable assets we can choose any of them to assign prices to the other ones. The asset that is chosen to set the prices of the other assets is often called a numeraire. In fact, we have even more freedom: we can choose any positive-definite function as a numeraire and express every asset price in terms of it, e.g. money.
Thus a price is always given in terms of some unit of measurement. It is a measure-stick which is used to relate different objects. As long as everything is expressed in terms of this one unit, prices can be compared. If we scale the unit, prices will scale accordingly. This scale-invariance is of great importance. Not only the prices of the tradables which are used to set up the basic economy should scale with a change of numeraire, but any derived tradable, like a contingent claim depending on other tradables, should behave in the same way. This leads in a natural way to the constraint that the price of a claim as a function of the underlying tradables should be homogeneous of degree $`1`$ (a function $`f(x_1,\mathrm{\dots },x_n)`$ is called homogeneous of degree $`\rho `$ if $`f(ax_1,\mathrm{\dots },ax_n)=a^\rho f(x_1,\mathrm{\dots },x_n)`$; homogeneous functions of degree $`\rho `$ satisfy Euler’s relation $`\sum _{i=1}^nx_i\frac{\partial }{\partial x_i}f(x_1,\mathrm{\dots },x_n)=\rho f(x_1,\mathrm{\dots },x_n)`$). Otherwise the economy is not well posed.
Although Merton \[Mer73\] already noticed the homogeneity property for the case of a simple European warrant, it was apparently not recognized that this should be an intrinsic property of any economy in which tradables and derivatives on these tradables have prices relative to some numeraire. More recently, Jamshidian \[Jam97\] discussed interest-rate models and showed that if a payoff is a homogeneous function of degree $`1`$ in the tradables, it leads naturally to self-financing trading strategies for interest-rate contingent claims. But again it was not appreciated that this homogeneity is a fundamental property which any economy should possess to be properly defined.
To compute the price of a contingent claim \[HP81\] one normally starts with the definition of the stochastic dynamics of the underlying tradables. The next step is to find a self-financing trading strategy which replicates the payoff of the claim at the maturity of the contract. If the economy does not allow for arbitrage and is complete, this self-financing trading strategy gives a unique price for the claim. To arrive at this result, one has to find a measure under which the tradables, discounted by a numeraire, are martingales. This requires a change of measure. When this change of measure exists, one has to show that the discounted payoff of the claim is a martingale under this new measure too. Then the martingale representation theorem is invoked to link the discounted payoff martingale to the underlying discounted tradables. This gives a self-financing trading strategy in the underlying tradables which replicates the claim at all times and thus yields a price for the claim. The invariance of the choice of numeraire is reflected in the fact that the price of the claim is indeed invariant under the changes of measure associated with different numeraires. Geman et al. \[HJ95\] used this invariance to show that, depending on the pricing problem at hand, it is useful to select the numeraire which most naturally fits the payoff of the claim.
In this paper we start our discussion with the scale-invariance of a frictionless economy of tradables with prices expressed in an arbitrary numeraire. We assume the economy to be complete. Our next step is to define the stochastic dynamics of the prices of tradables. Itô then leads to an SDE for the claim price. If the claim price solves a certain PDE, then together with the homogeneity property this leads automatically to a self-financing trading strategy replicating the claim. If no-arbitrage constraints are imposed on the drifts and volatilities of the stochastic prices, this price is unique. The invariance under changes of numeraire becomes very transparent due to the homogeneity property. We do not have to apply changes of measure, and this leads in our view to a conceptually more satisfying and transparent contingent-claim pricing argument. Finally, the scale-invariance property should also be satisfied in economies which do have friction. The symmetry invokes constraints which may be useful in model-building, e.g. for more general stochastic processes. We will discuss this in a forthcoming publication \[HN99\], where a more rigorous exposition of these results will also be presented. In the present paper, we want to focus on the main ideas and defer the mathematical details to a later time. To the best of our knowledge this is the first time that the consequences of the scale-invariant economy for contingent-claim pricing have been outlined and discussed.
The outline of the article is as follows. In section 2 we introduce some standard notions used to price contingent claims in an economy with stochastic tradables. In subsection 2.1 we show that for an economy to be properly defined it is required to be scale-invariant. The scaling-symmetry restricts the contingent claim price: it should be a homogeneous function of the underlying tradables of degree 1. In subsection 2.2 we introduce the dynamics of the prices of tradables and introduce the notion of deterministic constraints on the dynamics, which may follow from certain choices for the drifts and volatilities of the tradables. In subsection 2.3 we use the homogeneity together with Itô to derive a PDE for the contingent claim value. The homogeneity automatically insures the existence of a self-financing trading strategy for the contingent-claim. In subsection 2.4 we show that the claim price will be unique if the constraints on the dynamics can be written as self-financing portfolios. Finally in subsection 2.5 it is shown that the symmetry is inherited by the PDE for the claim value. This allows us to pick an appropriate numeraire (fix a gauge) and solve the PDE. Section 3 gives various applications of the PDE and the scale-invariance in pricing of contingent claims. In subsection 3.1 we give the explicit formula for a European claim with log-normal prices for the underlying tradables. In subsection 3.2 it is shown that the Black-Scholes PDE is contained in our approach. In subsection 3.3 the pricing of quantos is discussed. In our formulation the pricing becomes trivial. In subsection 3.4 we show that term-structure models fit naturally into our approach and give as an example the price of a log-normal stock in a gaussian HJM model. Another example of the simple formulae is given in subsection 3.5, where we consider a trigger-swap. Finally we give our conclusions and outlook in section 4.
## 2 Contingent claim pricing
In the following subsections we will discuss some general properties of contingent claim pricing using dimensional analysis.
First let us recall the basic principles. We consider a frictionless market with $`n+1`$ tradables (we will always use Greek symbols for indices running from $`0`$ to $`n`$ and Latin symbols for indices running from $`1`$ to $`n`$; furthermore, we use Einstein’s summation convention: repeated indices in products are summed over) with prices $`x_\mu `$, where $`\mu =0,\mathrm{\dots },n`$. The prices $`x\equiv \{x_\mu \}_{\mu =0}^n`$ follow stochastic processes, driven by Brownian motions (more general processes will be discussed in Ref. \[HN99\]). Time is continuous. Transaction costs are zero. Dividends are zero. Short positions in tradables are allowed. We want to value a European claim at time $`t`$ promising a payoff $`f(x)`$ at maturity $`T>t`$. To attach a rational price to the claim at time $`t`$ we have to find a dynamic portfolio or trading strategy $`\varphi \equiv \{\varphi _\mu (x,t)\}_{\mu =0}^n`$ of underlying tradables $`x`$ with value
$$V(x,t)=\varphi _\mu (x,t)x_\mu $$
which replicates the payoff of the claim at maturity, $`V(x,T)=f(x)`$. Let us apply Itô to the trading strategy:
$$dV=\varphi _\mu dx_\mu +x_\mu d\varphi _\mu +d[\varphi _\mu ,x_\mu ]$$
Here $`[\varphi _\mu ,x_\mu ]`$ stands for the quadratic variation (or covariance) of the two processes. We assume that the $`\varphi `$ are adapted to $`x`$ and predictable, i.e. given the values of $`x`$ up to time $`t`$ we know the $`\varphi `$. This implies
$$d[\varphi _\mu ,x_\mu ]=0$$
Furthermore the trading-strategy has to be self-financing, i.e. we set up a portfolio for a certain amount of money today such that no further external cash-flows are required during the life-time of the contract to finance the payoff of the claim at maturity. All changes in the positions $`\varphi _\mu (x,t)`$ at any given instant are financed by exchanging part of the tradables at current market prices for others such that the total cost is null:
$$x_\mu d\varphi _\mu =0$$
If we can find such a trading strategy, then the rational value of the claim today equals the value of the trading portfolio today. If the claim can only be replicated by a non-self-financing trading strategy, its value at time $`t`$ will not be unique, and hence arbitrage opportunities exist. Uniqueness of the claim value only follows in special cases, i.e. for specific choices of stochastic dynamics, drifts and volatilities. This will be discussed in more detail in Sec. 2.4. The self-financing property of the trading strategy is expressed as follows.
$$dV=\varphi _\mu dx_\mu $$
Finally, we also have to impose the following restriction for the allowed trading strategies $`\varphi `$ to be admissible: the value of a self-financing replicating portfolio is either deterministically zero at all times during the life of the contract or at no time. Otherwise arbitrage is possible. We come back to this point in Sec. 2.4.
### 2.1 Homogeneity
For a market to exist we need at least two tradables. Prices are always expressed in terms of a numeraire. The numeraire may be any positive-definite, possibly stochastic, function. The freedom to choose an arbitrary numeraire implies the existence of a scaling-symmetry for prices. The symmetry automatically implies the existence of a delta-hedging strategy for any tradable which depends on other underlying tradables.
Let us consider again a market with $`n+1`$ basic tradables with prices $`x`$ at time $`t`$. These prices are in units $`U`$ of the numeraire. We say that the $`x`$ have dimension $`U`$, or symbolically $`[x_\mu ]=U`$. For the moment we leave the dynamics unspecified. What can be said about the price of a claim today, again in units of $`U`$, when expressed in terms of the tradables $`x`$? Let us denote the price of the claim by $`V(x,t)`$. Just on the basis of dimensional analysis we can write down the following form for the price
$$V(x,t)=\varphi _\mu (x,t)x_\mu $$
(1)
Since $`[V]=U`$ and $`[x_\mu ]=U`$, the functions $`\varphi _\mu `$ are dimensionless, $`[\varphi _\mu ]=1`$. This implies that they can only be functions of ratios of different tradables, which are again dimensionless.
The same arguments apply to any payoff function, for otherwise it is ill-specified. For example, the payoff-function of a vanilla call with maturity $`T`$ does not seem to have this form at first sight
$$(S(T)-K)^+$$
But what is meant is the following function of a stock $`S(t)`$ and a discount bond $`P(t,T)`$, which pays $`1`$ unit of $`U`$ at time $`T`$
$$(S(T)-KP(T,T))^+$$
and this does have the right form.
Now suppose that we change our unit of measurement. If we scale the unit by $`a`$, such that $`U\to U/a`$, then the prices of the tradables will scale accordingly, $`x_\mu \to ax_\mu `$. Using the dimensional analysis result above we then find the following property for the price of the claim
$$V(ax,t)=\varphi _\mu (ax,t)ax_\mu =a\varphi _\mu (x,t)x_\mu =aV(x,t)$$
(2)
The price of the claim is a homogeneous function of degree $`1`$. Note that the scaling factor $`a`$ may be local, $`a=a(x,t)`$. Differentiating Eq. 2 with respect to $`a`$ immediately yields the following relation, valid for any homogeneous function (we allow generalized functions) of degree $`1`$,
$$V(x,t)=\frac{\partial V(x,t)}{\partial x_\mu }x_\mu \equiv V_{x_\mu }(x,t)x_\mu $$
(3)
This result is independent of the choice of dynamics. Even if we relax the frictionless market assumptions, this scaling-symmetry should not be broken.
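Both the scaling property (2) and Euler’s relation (3) are easy to verify numerically for any candidate price function. The sketch below does so by finite differences for a Margrabe-type exchange-option value (our own test harness; the function name and parameter values are ours):

```python
import numpy as np
from scipy.stats import norm

def exchange_option(x1, x2, s=0.3):
    """Value of the option to exchange x2 for x1; homogeneous of degree 1."""
    d1 = (np.log(x1 / x2) + 0.5 * s**2) / s
    return x1 * norm.cdf(d1) - x2 * norm.cdf(d1 - s)

x = np.array([100.0, 90.0])
a, h = 1.7, 1e-6
V = exchange_option(*x)
assert np.isclose(exchange_option(*(a * x)), a * V)       # Eq. 2: V(ax) = a V(x)

grad = [(exchange_option(*(x + h * e)) - V) / h for e in np.eye(2)]
assert np.isclose(x @ grad, V, rtol=1e-4)                 # Eq. 3: V = x_mu V_{x_mu}
print(V, x @ grad)
```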
As already mentioned, various authors \[Mer73, Jam97\] have touched upon the homogeneity property of certain claim prices, but they always inferred this property as a consequence of the no-arbitrage conditions they imposed on the drifts and volatilities of the tradables. Furthermore, their claim is that this property only holds in certain cases. In fact Jamshidian \[Jam97\] gives a theorem which is very similar to what we discuss in subsection 2.3, except that he does not recognize the fact that the required homogeneity should always be satisfied. This should be contrasted with our presentation above, where we show that this homogeneity property is one of the most fundamental properties any market model must possess to be well-posed. The homogeneity property just expresses the fact that one needs a proper coordinate-system. It could be termed: ‘the relativity principle of finance’.
### 2.2 Dynamics: the market model
The prices of tradables, relative to a numeraire, change over time. Let us assume that the dynamics of the tradables is given by the following stochastic differential equation:
$$dx_\mu (t)=\alpha _\mu (x,t)x_\mu (t)dt+\sigma _\mu (x,t)x_\mu (t)dW(t)$$
(4)
where we have $`k`$ independent Brownian motions driving the $`n+1`$ tradables, with initial conditions $`x_\mu (t)`$ (here $`\sigma _\mu `$ and $`dW`$ should be understood as $`k`$-dimensional vectors; we denote the inner product by a dot). The Brownian motion is defined under the measure with respect to the numeraire. This is often called the real-world measure in the literature. To determine a price for the claim we will always work under this measure. This should be contrasted with the usual approach, where one first applies a change of measure to make the tradables martingales under the new measure and then invokes the martingale representation theorem to determine the claim price. This change of measure is not required, as we will show later, for the determination of a rational price. In fact we do not even have to require the tradables to be strictly positive. If one of the tradables becomes zero, this is allowed as long as it hits zero in a non-deterministic way. Such a tradable should, of course, not be used as a numeraire.
For the properties of the drift and volatilities we refer to Appendix 5. It is convenient to extract a factor of $`x_\mu `$ from the drift and volatility terms, as in Eq. 4, so that the LHS $`dx_\mu /x_\mu `$ is dimensionless. Then the RHS should be a homogeneous function of the tradables of degree $`0`$ too (in the literature the $`\alpha _\mu `$ and $`\sigma _\mu `$ are often called relative drifts and volatilities). Thus the only allowed forms for the drift and volatility structure are functions of ratios of the tradables. This is a fundamental requirement for any viable and properly posed market model.
A priori it could well be that deterministic relations exist between the tradables. These relations should satisfy certain constraints in order to attach a unique rational price to a claim. If these constraints are satisfied, arbitrage is not possible. We will come back to this point in section 2.4.
### 2.3 Deriving the basic PDE
The results of the previous sections are precisely what is needed to obtain a PDE for the price of a contingent claim. It will be shown that the homogeneity-property, together with this PDE, is all that is necessary to obtain a unique self-financing trading-strategy in an arbitrage-free market. We do not have to make a detour using martingale techniques to prove this fact. This is a substantial conceptual simplification of the standard theory.
Let us consider the evolution of the contingent claim price $`V(x,t)`$ in time. Using Itô we arrive at the following SDE
$$dV=\left(V_t+\frac{1}{2}\sigma _\mu \cdot \sigma _\nu x_\mu x_\nu V_{x_\mu x_\nu }\right)dt+V_{x_\mu }dx_\mu $$
At this point the homogeneity property of $`V(x,t)`$ is used. Since
$$V=V_{x_\mu }x_\mu $$
we see that if the claim value solves the PDE
$$V_t+\frac{1}{2}\sigma _\mu \cdot \sigma _\nu x_\mu x_\nu V_{x_\mu x_\nu }\equiv \mathcal{L}V=0$$
(5)
a replicating portfolio, containing $`V_{x_\mu }`$ of tradable $`x_\mu `$, is indeed self-financing.
$$dV=V_{x_\mu }dx_\mu $$
As usual, the payoff of the claim is specified as the boundary condition of the PDE.
Note that the drift terms did not enter the derivation of the PDE at all. We did not have to apply a change of measure to obtain an equivalent martingale measure and use the martingale representation theorem. All that is needed is the homogeneity of the contingent claim price as a function of the underlying tradables.
The PDE in Eq. 5 provides, in our view, the most natural formulation of the valuation of claims on tradables in a Brownian motion setting. It allows us to easily derive the classical result of Black, Scholes, and Merton (subsection 3.2), but also the results of Heath-Jarrow-Morton (subsection 3.4). Although we considered European claims up till now, it is not too difficult to include path-dependent properties. This will be discussed in Ref. \[HN99\].
### 2.4 Uniqueness: No arbitrage revisited
In the previous section we showed that if the claim-value solves Eq. 5 then the replicating portfolio for the claim is self-financing. If deterministic relations between tradables exist, this is too strong a condition. In that case the constraints introduce a redundancy (gauge-freedom) in the space of tradables. This implies that we only have to solve $`\mathcal{L}V=0`$ modulo the constraints. The deterministic relations between tradables allow the construction of deterministic portfolios with zero value for all times. We will call them null-portfolios. Suppose that there exist $`m`$ deterministic relations
$$P_i(t)=\psi _{i,\mu }(x,t)x_\mu =0$$
with $`i=1,\ldots ,m`$. We will assume for the moment that these relations are independent such that they span the null-space $`𝒫`$. Otherwise we can find a smaller set of independent constraints to span the null-space. We also assume that the dimension of the null-space is constant over time. Thus we can write the null-space $`𝒫`$ as follows.
$$𝒫=\{f_i(x,t)P_i(t)\mid \text{arbitrary }f_i(x,t)\}$$
where the $`f_i`$ are predictable homogeneous functions of degree 0 w.r.t. the prices. Taking into account the constraints we require
$$\mathcal{L}V\simeq 0$$
Here we use the notation $`\simeq 0`$ to write $`\mathcal{L}V=0`$ modulo elements in the null-space $`𝒫`$.
The null-portfolios are either self-financing or not. In the first case, the price of the claim is unique up to arbitrary null-portfolios for all times. No external cash-flows are required to keep the null-portfolio null. In the second case we can find two portfolios which replicate the payoff at maturity but whose values diverge as one moves away from maturity. There will be no unique price and arbitrage is possible.
A market will have self-financing null-portfolios if the drift and volatilities satisfy certain constraints. A null-portfolio $`P=\psi _\mu x_\mu \in 𝒫`$ satisfies by definition
$$dP\simeq 0$$
(6)
Since the null-portfolio is by definition deterministic, this leads automatically to the following constraints
$$\frac{\partial P}{\partial x_\mu }\sigma _\mu x_\mu =\psi _\mu \sigma _\mu x_\mu +\frac{\partial \psi _\nu }{\partial x_\mu }\sigma _\mu x_\mu x_\nu \simeq 0$$
(7)
If a null-portfolio is self-financing, we have
$$dP=\psi _\mu dx_\mu $$
But Eq. 7 immediately gives
$$\psi _\mu dx_\mu \simeq 0$$
(8)
which implies
$`\psi _\mu \alpha _\mu x_\mu `$ $`\simeq `$ $`0`$
$`\psi _\mu \sigma _\mu x_\mu `$ $`\simeq `$ $`0`$
If these constraints are satisfied for all null-portfolios, then the null-portfolios will be self-financing and hence no arbitrage is possible.
As a simple example of such constraints, let us consider two tradables $`x_{1,2}`$ with one Brownian motion
$$\frac{dx_{1,2}}{x_{1,2}}=\alpha _{1,2}dt+\sigma _{1,2}dW(t)$$
with constant drifts $`\alpha _{1,2}`$, volatilities $`\sigma _{1,2}`$, and initial values $`x_{1,2}(0)=1`$. Note that this is the usual setting of Black-Scholes. The SDE for the ratio $`x_2/x_1`$ then becomes
$$\frac{dx_2/x_1}{x_2/x_1}=(\alpha _2-\alpha _1-\sigma _1(\sigma _2-\sigma _1))dt+(\sigma _2-\sigma _1)dW$$
If the tradables satisfy a deterministic relation, we see that this is only possible if the volatilities are equal, $`\sigma _1=\sigma _2\equiv \sigma `$. In that case the above SDE reduces to an ODE
$$\frac{dx_2/x_1}{x_2/x_1}=(\alpha _2-\alpha _1)dt$$
Solving the ODE, we find the following deterministic relation
$$x_2(t)=x_1(t)e^{(\alpha _2-\alpha _1)t}$$
(9)
The existence of this relation allows us to construct a null-portfolio with zero value and previsible coefficients for all times. Indeed
$$P(t)=x_2(t)-x_1(t)e^{(\alpha _2-\alpha _1)t}$$
is trivially zero. Two cases can be distinguished. The portfolio $`P`$ is self-financing or it is not. Consider the evolution of $`P`$
$$dP=dx_2-e^{(\alpha _2-\alpha _1)t}dx_1-(\alpha _2-\alpha _1)e^{(\alpha _2-\alpha _1)t}x_1dt$$
It should be clear that only if $`\alpha _1=\alpha _2`$ is the portfolio $`P`$ self-financing, so that $`x_1`$ can be hedged using $`x_2`$. Otherwise arbitrage is possible. Intuitively this should be obvious: two tradables with equal risk $`\sigma `$ should yield the same return $`\alpha `$.
Let us consider the consequences for the price $`V`$ of a claim if $`\alpha _1\alpha _2`$. We construct a portfolio $`P`$ with constant coefficients $`\psi _{1,2}`$
$$P(t)=\psi _1x_1(t)+\psi _2x_2(t)$$
If we set
$$\psi _2=-\psi _1e^{(\alpha _1-\alpha _2)T}$$
then the value of the portfolio at time $`T`$ is $`P(T)=0`$. However at $`t<T`$ we have
$$P(t)=\psi _1x_1(t)\left(1-e^{(\alpha _1-\alpha _2)(T-t)}\right)$$
Since $`\psi _1`$ can take any value, a contract which pays zero at time $`T`$ can be assigned any value at $`t<T`$. But this implies that we can ask any price $`V(t)+P(t)`$ for a claim paying $`V(T)`$ by adding an arbitrary portfolio with $`P(T)=0`$.
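Numerically the arbitrage is easy to exhibit: with $`\psi _2`$ chosen as above, the portfolio value vanishes at maturity but not before. A minimal sketch follows (all numbers are illustrative choices of ours):

```python
import numpy as np

alpha1, alpha2, sigma, T = 0.05, 0.08, 0.2, 1.0
psi1 = 1000.0
psi2 = -psi1 * np.exp((alpha1 - alpha2) * T)

def x1(t, w):  # both tradables are driven by the same Brownian path w
    return np.exp((alpha1 - 0.5 * sigma**2) * t + sigma * w)

def x2(t, w):  # equal volatility, different drift: x2/x1 is deterministic
    return np.exp((alpha2 - 0.5 * sigma**2) * t + sigma * w)

w = 0.13       # an arbitrary realization of W(t); the sign of P(t) does not depend on it
for t in (0.0, 0.5, 1.0):
    P = psi1 * x1(t, w) + psi2 * x2(t, w)
    print(t, P)  # nonzero for t < T, zero (up to rounding) at t = T
```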
### 2.5 Gauge invariance of the PDE
It was shown that a fundamental property of any viable market-model is the scale-invariance of the prices of tradables as expressed through the freedom of choice of the numeraire. It leads automatically to the requirement that the claim-price should be a homogeneous function of degree 1 in terms of prices of tradables. This invariance should be inherited by the dynamical equations governing the price-process for the claim. Indeed, by differentiating Eq. 3 again we obtain
$$x_\mu V_{x_\mu x_\nu }=0$$
(10)
Using this result it is a simple exercise to show that $`V`$ is invariant under the (simultaneous) substitutions
$$\sigma _\mu (x,t)\to \sigma _\mu (x,t)-\lambda (x,t)$$
This invariance-property represents the fact that volatility is a relative concept. It can only be measured with respect to some numeraire, and prices should not depend on this (this is called a gauge-invariance in physics’ parlance and a change of numeraire in finance parlance). We can exploit this freedom to reduce the dimension of the problem. For example, choosing $`x_0`$ as a numeraire corresponds to taking $`\lambda (x,t)=\sigma _0(x,t)`$. Then
$$V_t+\frac{1}{2}(\sigma _i(x,t)-\sigma _0(x,t))\cdot (\sigma _j(x,t)-\sigma _0(x,t))x_ix_jV_{x_ix_j}=0$$
(11)
Now one can introduce
$$V(x_0,\ldots ,x_n,t)=x_0E(\frac{x_1}{x_0},\ldots ,\frac{x_n}{x_0},t)$$
(12)
Then $`E(x_1,\ldots ,x_n,t)`$ again satisfies Eq. 11. Interesting things happen when $`V`$ is independent of $`x_0`$. In that case, $`E`$ is homogeneous again, the $`\sigma _0(x,t)`$ dependence drops out, and the game can be repeated. Furthermore it should be noted that the numeraire does not have to be a tradable. As stated earlier, it may be any positive-definite stochastic function. This freedom can be exploited to simplify calculations. Finally, recall Eqs. 3 and 10; they imply some interesting relations between the various greeks, which can be of use in numerical schemes to solve the PDE.
## 3 Applications
In this section we give several examples, which show the simplicity and clarity with which one derives results for contingent claim prices using the scale-invariance of the PDE.
### 3.1 General solution for the log-normal case
We compute the claim price for a path-independent European claim with an arbitrary number of underlying tradables, when the prices of the tradables are log-normally distributed,
$$\frac{dx_\mu (t)}{x_\mu (t)}=\alpha _\mu (t)dt+\sigma _\mu (t)dW(t)$$
It is easy to write the general solution for a path-independent European claim in this case. First we perform a change of variables
$$x_\mu =\mathrm{exp}(y_\mu )$$
such that the PDE becomes
$$V_t+\frac{1}{2}\sigma _\mu (t)\cdot \sigma _\nu (t)(V_{y_\mu y_\nu }-\delta _{\mu \nu }V_{y_\mu })=0$$
A Fourier transformation yields an ODE in $`t`$
$$\tilde{V}_t-\frac{1}{2}\sigma _\mu (t)\cdot \sigma _\nu (t)(\tilde{y}_\mu \tilde{y}_\nu -i\delta _{\mu \nu }\tilde{y}_\mu )\tilde{V}=0$$
where $`i`$ denotes the imaginary unit. The ODE has the solution
$$\tilde{V}(t)=\tilde{V}(T)\mathrm{exp}\left(-\frac{1}{2}\mathrm{\Sigma }_{\mu \nu }(\tilde{y}_\mu \tilde{y}_\nu -i\delta _{\mu \nu }\tilde{y}_\mu )\right)$$
with
$$\mathrm{\Sigma }_{\mu \nu }\equiv \int _t^T\sigma _\mu (u)\cdot \sigma _\nu (u)\,du$$
Since $`\mathrm{\Sigma }`$ is a non-negative symmetric matrix, it can be diagonalized as
$$\mathrm{\Sigma }_{\mu \nu }=A_{\mu \sigma }A_{\nu \rho }B_{\sigma \rho },\qquad B=\text{diag}(\lambda _0,\ldots ,\lambda _{m-1},0,\ldots )$$
where $`A`$ is an orthogonal matrix and $`m`$ equals the rank of $`\mathrm{\Sigma }`$ (so $`\lambda _i>0`$ for $`0\le i<m`$). It will turn out to be convenient to introduce the matrix
$$\mathrm{\Theta }_{\mu \nu }=\{\begin{array}{cc}A_{\mu \nu }\sqrt{\lambda _\nu }\hfill & \text{for }\nu <m\hfill \\ A_{\mu \nu }\hfill & \text{otherwise}\hfill \end{array}$$
Clearly, this matrix is invertible, $`\det \mathrm{\Theta }=\sqrt{\lambda _0\cdots \lambda _{m-1}}`$, and it satisfies
$$\mathrm{\Sigma }_{\mu \nu }=\mathrm{\Theta }_{\mu \sigma }\mathrm{\Theta }_{\nu \rho }\mathrm{\Lambda }_{\sigma \rho },\qquad \mathrm{\Lambda }=\text{diag}(\underbrace{1,\ldots ,1}_{m},0,\ldots )$$
We now perform an inverse Fourier transformation on the solution of the ODE, and find
$$V(x_0,\ldots ,x_n,t)=\frac{1}{(2\pi )^{n+1}}\int V(\mathrm{exp}(y_0),\ldots ,T)\,\mathrm{exp}\left(-\frac{1}{2}\mathrm{\Sigma }_{\mu \nu }(\tilde{y}_\mu \tilde{y}_\nu -i\delta _{\mu \nu }\tilde{y}_\mu )+i\tilde{y}_\mu (y_\mu -\mathrm{ln}x_\mu )\right)dy\,d\tilde{y}$$
$$=\frac{1}{(2\pi )^{n+1}}\int V(x_0\mathrm{exp}(y_0-\tfrac{1}{2}\mathrm{\Sigma }_{00}),\ldots ,T)\,\mathrm{exp}\left(-\frac{1}{2}\mathrm{\Sigma }_{\mu \nu }\tilde{y}_\mu \tilde{y}_\nu +i\tilde{y}_\mu y_\mu \right)dy\,d\tilde{y}$$
Next we introduce new variables as follows
$$y_\mu =\mathrm{\Theta }_{\mu \nu }z_\nu ,\qquad \mathrm{\Theta }_{\mu \nu }\tilde{y}_\mu =\tilde{z}_\nu $$
In terms of these variables, the integral becomes (note that the Jacobian of this transformation exactly equals one)
$$\frac{1}{(2\pi )^{n+1}}\int V(x_0\mathrm{exp}(\mathrm{\Theta }_{0\nu }z_\nu -\tfrac{1}{2}\mathrm{\Sigma }_{00}),\ldots ,T)\,\mathrm{exp}\left(-\frac{1}{2}\mathrm{\Lambda }_{\mu \nu }\tilde{z}_\mu \tilde{z}_\nu +i\tilde{z}_\mu z_\mu \right)dz\,d\tilde{z}$$
The integral over the $`\tilde{z}_\mu `$ can be calculated explicitly. It gives rise to an $`m`$-dimensional standard normal PDF, multiplied by some $`\delta `$-functions
$$\frac{1}{(2\pi )^{n+1}}\int \mathrm{exp}\left(-\frac{1}{2}\mathrm{\Lambda }_{\mu \nu }\tilde{z}_\mu \tilde{z}_\nu +i\tilde{z}_\mu z_\mu \right)d\tilde{z}=\varphi (z)\delta (z_m)\cdots \delta (z_n)$$
$$\varphi (z)=\frac{1}{\left(\sqrt{2\pi }\right)^m}\mathrm{exp}\left(-\frac{1}{2}\sum _{i=0}^{m-1}z_i^2\right)$$
The integrals over $`z_\mu `$ for $`\mu \ge m`$ are now trivial. To express the result in a compact form, it is useful to introduce a set of $`m`$-dimensional vectors
$$(\theta _\mu )_i=\mathrm{\Theta }_{\mu i},\qquad 0\le i<m$$
These vectors in fact define a Cholesky-decomposition of the covariance matrix. Indeed, they satisfy
$$\theta _\mu \cdot \theta _\nu =\mathrm{\Sigma }_{\mu \nu }$$
Here the inner product is understood to be $`m`$ dimensional. Combining all, the solution becomes
$$V(x_0,\ldots ,x_n,t)=\int \varphi (z)V(x_0\mathrm{exp}(\theta _0\cdot z-\tfrac{1}{2}\theta _0\cdot \theta _0),\ldots ,T)\,d^mz$$
(13)
For homogeneous $`V`$, the result can be expressed in an even more compact form
$$V(x_0,\ldots ,x_n,t)=\int V(x_0\varphi (z-\theta _0),\ldots ,x_n\varphi (z-\theta _n),T)\,d^mz$$
If the number of tradables is small we may be able to compute Eq. 13 analytically. Otherwise we have to use numerical techniques.
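For more than a few tradables a direct Monte Carlo evaluation of Eq. 13 is the natural route: factor $`\mathrm{\Sigma }`$ into vectors $`\theta _\mu `$ (any square root will do; below an eigendecomposition plays the role of the Cholesky factors), draw $`z`$ from an $`m`$-dimensional standard normal and average the payoff. The sketch below is our own illustration; the three-asset example payoff and all names are invented for it.

```python
import numpy as np

def lognormal_claim_price(x, Sigma, payoff, n_paths=200_000, seed=1):
    """Monte Carlo for Eq. 13: average payoff over x_mu*exp(theta_mu.z - |theta_mu|^2/2)."""
    lam, A = np.linalg.eigh(Sigma)            # Sigma = A diag(lam) A^T
    keep = lam > 1e-12                        # m = rank(Sigma)
    theta = A[:, keep] * np.sqrt(lam[keep])   # rows are the vectors theta_mu
    z = np.random.default_rng(seed).standard_normal((n_paths, theta.shape[1]))
    xT = x * np.exp(z @ theta.T - 0.5 * np.sum(theta**2, axis=1))
    return np.mean(payoff(xT))

# example: exchange option (S1 - S2)^+ with a zero-volatility bond as asset 0
Sigma = np.array([[0.00, 0.00, 0.00],
                  [0.00, 0.04, 0.01],
                  [0.00, 0.01, 0.09]])
x = np.array([1.0, 100.0, 95.0])
price = lognormal_claim_price(x, Sigma, lambda xT: np.maximum(xT[:, 1] - xT[:, 2], 0.0))
print(price)
```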
At this point let us remind the reader that it is easy to include stocks with known future dividend yields in the model. This can be done as follows. Suppose we want to price a European claim $`V`$ whose price depends on a dividend paying stock $`S`$. The dividend payments occur at times $`t_i`$, $`1\le i\le n`$, during the lifetime of the claim. These dividends are given as a fraction $`\delta _i`$ of the stock-price $`S(t_i)`$. The effect of the dividend payments on the price of the claim can be incorporated by making the substitution
$$S(t)\to S(t)\prod _{i=1}^n(1+\delta _i)^{-1}$$
in the price function of a similar claim, but depending on a non-dividend-paying stock. Indeed, a dividend payment at time $`t_i`$ has the effect of reducing the stock-price by a factor $`(1+\delta _i)^{-1}`$. For dividends paid at a continuous rate $`q`$, the substitution simply becomes
$$S(t)\to S(t)e^{-q(T-t)}$$
If dividend payments are known in terms of another tradable, e.g. a bond, the situation becomes more complicated. This is so because a dividend payment of $`\delta _i`$ units of a tradable $`P`$ at time $`t_i`$ has the effect of reducing the stock-price by a factor
$$\left(1+\delta _i\frac{P(t_i)}{S(t_i)}\right)^{-1}$$
This makes the correction factor on $`S`$ path-dependent in general. We will return to this problem in Ref. \[HN99\].
### 3.2 Recovering Black-Scholes
In subsection 2.3 we derived a very general PDE for the pricing of contingent claims, when the stochastic terms are driven by Brownian motion. In this section we show that it reduces to the standard Black-Scholes equation when the underlying tradables are log-normally distributed with constant drift and volatilities. In the Black-Scholes world, we have a number of stocks $`S_i`$ with SDE’s
$$\frac{dS_i}{S_i}=\alpha _idt+\sigma _idW(t)$$
Furthermore we have a deterministic bond $`P`$, satisfying
$$P(t,T)=\mathrm{exp}(r(t-T))$$
or in terms of its differential equation
$$\frac{dP(t,T)}{P(t,T)}=rdt$$
with $`P(T,T)=1`$. For simplicity we take the interest rate and volatilities to be time-independent. It is not too difficult to extend the present discussion to the time-dependent case. In fact the solution was already computed in the previous section. Our basic equation, Eq. 5, gives for the price of a claim
$$V_t+\frac{1}{2}\sigma _i\cdot \sigma _jS_iS_jV_{S_iS_j}=0$$
Note that $`V`$ is explicitly a function of $`P`$. In the Black-Scholes formulation it is usually defined implicitly. This can be done by defining
$$E(S,t)=V(P,S,t),\qquad V(1,S,t)=\frac{E(P(t)S,t)}{P(t)}$$
(14)
Thus we find, as promised,
$$E_t+rS_iE_{S_i}+\frac{1}{2}\sigma _i\cdot \sigma _jS_iS_jE_{S_iS_j}-rE=0$$
(15)
Let us now consider a simple one-dimensional example, a European call option. The solution can be easily found using the results of the previous section.
$`V`$ $`=`$ $`{\displaystyle \int (S(t)\varphi (z-\sigma \sqrt{T-t})-KP(t,T)\varphi (z))^+dz}`$
$`=`$ $`S(t)\mathrm{\Phi }(d_1)-KP(t,T)\mathrm{\Phi }(d_2)`$
with
$$d_{1,2}=\frac{\mathrm{log}\frac{S(t)}{KP(t,T)}\pm \frac{1}{2}\sigma ^2(T-t)}{\sigma \sqrt{T-t}}$$
This is Merton’s well-known formula \[HJ95\]. The homogeneity relation, Eq. 3, can be used to derive relations between the greeks. In the present case it is given by
$$V=SV_S+PV_P$$
Indeed, using $`V_S=\mathrm{\Phi }(d_1)`$ and $`V_P=-K\mathrm{\Phi }(d_2)`$, the equality follows. Since in the Black-Scholes universe $`P`$ is a deterministic function of $`r`$, we have for $`\rho \equiv V_r`$
$$\rho =V_PP_r=-(T-t)PV_P=(T-t)(SV_S-V)$$
These types of relations were already observed in a different context in Ref. \[Car93\]. Furthermore, Eq. 10 gives the following relations
$$SV_{SS}+PV_{PS}=SV_{SP}+PV_{PP}=0$$
Again this is easily checked by substitution of the solution $`V`$.
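These relations are also easy to confirm numerically by treating $`V`$ as an explicit function of the two tradables $`S`$ and $`P`$ and differencing. The sketch below (our own code; parameter values are arbitrary) checks both Eq. 3 and Eq. 10 for Merton’s formula:

```python
import numpy as np
from scipy.stats import norm

def call(S, P, K=100.0, sigma=0.2, tau=1.0):
    """Merton's formula with the bond price P = P(t,T) as an explicit tradable."""
    d1 = (np.log(S / (K * P)) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * P * norm.cdf(d2)

S, P, h = 100.0, 0.95, 1e-4
V = call(S, P)
V_S = (call(S + h, P) - call(S - h, P)) / (2 * h)
V_P = (call(S, P + h) - call(S, P - h)) / (2 * h)
assert np.isclose(S * V_S + P * V_P, V)                  # Eq. 3: V = S V_S + P V_P

V_SS = (call(S + h, P) - 2 * V + call(S - h, P)) / h**2
V_PS = (call(S + h, P + h) - call(S + h, P - h)
        - call(S - h, P + h) + call(S - h, P - h)) / (4 * h**2)
assert abs(S * V_SS + P * V_PS) < 1e-2                   # Eq. 10: S V_SS + P V_PS = 0
print(V, V_S, V_P)
```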
### 3.3 Quantos
Quantos are instruments which have a payoff specified in one currency and pay out in another currency. The pricing of these instruments becomes trivial, when we consider the problem using only tradables in one economy. This requires the introduction of an exchange-rate to relate the instruments denominated in one currency to ones denominated in another currency. The exchange-rate is assumed to follow some stochastic process. In the following we will use a Brownian motion setting. Let us denote the exchange-rate to convert currency 2 into currency 1 by $`C_{12}`$, satisfying
$$\frac{dC_{12}}{C_{12}}=\alpha _{12}dt+\sigma _{12}dW(t)$$
The exchange-rate $`C_{21}=C_{12}^{-1}`$ to convert currency 1 into currency 2 then satisfies
$$\frac{dC_{21}}{C_{21}}=(-\alpha _{12}+\sigma _{12}^2)dt-\sigma _{12}dW(t)$$
Let us consider two assets, one denominated in currency 1, the other in currency 2, with the following dynamics respectively, ($`i=1,2`$),
$$\frac{dx_i}{x_i}=\alpha _idt+\sigma _idW(t)$$
To be able to price the instrument we need two tradables denominated in one currency. Let us define the converted prices $`\stackrel{~}{x}_1=C_{21}x_1`$ and $`\stackrel{~}{x}_2=C_{12}x_2`$. The converted prices give us our pairs of tradables $`x_1,\stackrel{~}{x}_2`$ and $`\stackrel{~}{x}_1,x_2`$ needed to price the instrument. The price is identical whether we work in terms of currency 1 or 2. This is a direct consequence of the scale-invariance of the problem. For consider first the case where everything is denoted in terms of currency 1. Then we arrive at the following two SDE’s
$`{\displaystyle \frac{dx_1}{x_1}}`$ $`=`$ $`\alpha _1dt+\sigma _1dW(t)`$
$`{\displaystyle \frac{d\stackrel{~}{x}_2}{\stackrel{~}{x}_2}}`$ $`=`$ $`(\alpha _2+\alpha _{12}+\sigma _2\sigma _{12})dt+(\sigma _2+\sigma _{12})dW(t)`$
Thus the volatilities entering in the pricing problem are $`\sigma _1`$ and $`\stackrel{~}{\sigma }_2\equiv \sigma _2+\sigma _{12}`$. Next consider the case where we denominate everything in terms of currency 2. The SDE’s become
$`{\displaystyle \frac{d\stackrel{~}{x}_1}{\stackrel{~}{x}_1}}`$ $`=`$ $`(\alpha _1-\alpha _{12}+\sigma _{12}^2-\sigma _1\sigma _{12})dt+(\sigma _1-\sigma _{12})dW(t)`$
$`{\displaystyle \frac{dx_2}{x_2}}`$ $`=`$ $`\alpha _2dt+\sigma _2dW(t)`$
In this case, the volatilities which are relevant for the pricing problem are $`\sigma _2`$ and $`\stackrel{~}{\sigma }_1\equiv \sigma _1-\sigma _{12}`$. Therefore we see that the difference between calculations in the two currencies amounts to an overall shift in the volatilities by $`\sigma _{12}`$. But we have already seen that solutions of the PDE, Eq. 5, are invariant under such a translation. So we obtain a unique price function.
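The invariance is immediate to check numerically for, e.g., an option to exchange the two (converted) assets: with the single Brownian motion of this section, only the volatility of the ratio enters, and that difference is unchanged by the common shift $`\sigma _{12}`$. A small sketch (our own code; all numbers are illustrative):

```python
import numpy as np
from scipy.stats import norm

def margrabe(x1, x2, v1, v2, T=1.0):
    """Price of (x2 - x1)^+ for two lognormal tradables driven by one Brownian motion."""
    s = abs(v2 - v1) * np.sqrt(T)     # only the volatility of the ratio x2/x1 matters
    d1 = (np.log(x2 / x1) + 0.5 * s**2) / s
    return x2 * norm.cdf(d1) - x1 * norm.cdf(d1 - s)

s1, s2, s12 = 0.25, 0.20, 0.10        # asset volatilities and FX volatility (scalars)
x1, x2_conv = 100.0, 95.0             # x1 and C12*x2, both expressed in currency 1

p_cur1 = margrabe(x1, x2_conv, s1, s2 + s12)    # working in currency 1
p_cur2 = margrabe(x1, x2_conv, s1 - s12, s2)    # same claim, vols shifted by sigma_12
assert np.isclose(p_cur1, p_cur2)               # gauge invariance of the price
print(p_cur1, p_cur2)
```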
### 3.4 Heath-Jarrow-Morton
Let us consider the Heath-Jarrow-Morton framework \[DA92\]. The common approach is to postulate some forward rate dynamics and from there derive the prices of discount-bonds and other interest-rate instruments. But it is well-known that this model can also be formulated in terms of discount-bond prices \[Car95\]. Since discount bonds are tradables, this approach fits directly into our pricing formalism. Assume the following price process for the bonds (here $`d_t`$ denotes the stochastic differential w.r.t. $`t`$)
$$\frac{d_tP(t,T)}{P(t,T)}=\alpha (t,T,P)dt+\sigma (t,T,P)dW(t)$$
As was mentioned before, the drift and volatility functions should be homogeneous of degree zero in the bond prices in order to have a well-defined model. So they can only be functions of ratios of bond prices. In fact the precise form of the drift-terms is not of any importance in deriving the claim-price.
Let us consider as an example the price of an equity option with stochastic interest rates. We restrict our attention to Gaussian HJM models. In that case we have a bond satisfying
$$\frac{d_tP(t,T)}{P(t,T)}=\alpha (t,T)dt+\sigma (t,T)dW(t)$$
So the drift and volatility only depend on $`t`$ and $`T`$. Note that this form includes both the Vasicek and the Ho-Lee model. As usual, the stock satisfies
$$\frac{dS}{S}=\alpha dt+\sigma dW(t)$$
Now choosing $`P(t,T)`$ as a numeraire, we find the following PDE for the price of a claim (cf. Eq. 11)
$$V_t+\frac{1}{2}|\sigma -\sigma (t,T)|^2S^2V_{SS}=0$$
The $`|v|`$ denotes the length of the vector $`v`$. Using the standard techniques, this leads to the following price for a call option with maturity $`T`$ and strike $`K`$
$$V(S,P,t)=S(t)\mathrm{\Phi }(d_1)-KP(t,T)\mathrm{\Phi }(d_2)$$
with
$$d_{1,2}=\frac{\mathrm{log}\frac{S(t)}{KP(t,T)}\pm \frac{1}{2}\mathrm{\Sigma }}{\sqrt{\mathrm{\Sigma }}},\qquad \mathrm{\Sigma }=\int _t^T|\sigma -\sigma (u,T)|^2du$$
Remember that both $`\sigma `$ and $`\sigma (t,T)`$ are understood to be vectors. Note that in our model it is not necessary to use discount-bonds as fundamental tradables to model the interest rate market. One could equally well use other tradables such as coupon-bonds or swaps, being linear combinations of discount-bonds, or even caplets and swaptions. In our view, it seems to be less natural to model the LIBOR-rate directly, since this is not a traded object. In fact, $`\delta `$-LIBOR-rates are dimensionless quantities, defined as a quotient of discount bonds
$$L(t,T)=\frac{P(t,T)-P(t,T+\delta )}{\delta P(t,T+\delta )}$$
In this respect, the name ‘LIBOR market-model’\[Jam97\] seems a contradiction in terms.
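As a concrete sketch of the option formula above, $`\mathrm{\Sigma }`$ can be computed in closed form for a one-factor Ho-Lee-type volatility $`\sigma (t,T)=\sigma _r(T-t)`$ with a scalar stock volatility $`\sigma _S`$. This parametrization (and all names below) is our own illustrative assumption, not the general case treated in the text:

```python
import numpy as np
from scipy.stats import norm

def hjm_call(S, P, K, sigma_S, sigma_r, tau):
    """Equity call under a one-factor Ho-Lee model, sigma(t,T) = sigma_r*(T-t)."""
    # Sigma = int_t^T (sigma_S - sigma_r*(T-u))^2 du, evaluated in closed form
    Sig = sigma_S**2 * tau - sigma_S * sigma_r * tau**2 + sigma_r**2 * tau**3 / 3.0
    d1 = (np.log(S / (K * P)) + 0.5 * Sig) / np.sqrt(Sig)
    d2 = d1 - np.sqrt(Sig)
    return S * norm.cdf(d1) - K * P * norm.cdf(d2)

print(hjm_call(S=100.0, P=0.95, K=100.0, sigma_S=0.2, sigma_r=0.01, tau=1.0))
# sigma_r -> 0 recovers the deterministic-bond (Merton) price
print(hjm_call(100.0, 0.95, 100.0, 0.2, 0.0, 1.0))
```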
### 3.5 A trigger swap
Let us now consider a somewhat more complicated example, a trigger swap. This contract depends on four tradables $`S_i`$, and it is defined by its payoff function at maturity $`T`$
$$f(S)=(S_3-S_4)\mathbf{1}_{S_1>S_2}$$
Note that both exchange options and binary options are special cases of this trigger swap. The former is found by setting $`S_3=S_1`$ and $`S_4=S_2`$, the latter by setting $`S_3=P(t,T)`$ and $`S_4=0`$. Let us assume that the $`S_i`$ satisfy
$$\frac{dS_i}{S_i}=\alpha _i(t)dt+\sigma _i(t)dW(t)$$
For this log-normal model, we can immediately write down the following formula for the price of the claim
$$V=\int _{S_1\varphi (z-\theta _1)>S_2\varphi (z-\theta _2)}\left(S_3\varphi (z-\theta _3)-S_4\varphi (z-\theta _4)\right)dz$$
Here, the $`\theta _i`$ are given by a Cholesky decomposition of the integrated covariance matrix
$$\mathrm{\Sigma }_{ij}=\int _t^T\sigma _i(u)\cdot \sigma _j(u)\,du=\theta _i\cdot \theta _j$$
We will omit the details of the evaluation of this integral. It is a straightforward application of the procedure described in subsection 3.1. The result can be written as
$$V=S_3\mathrm{\Phi }(d_3)-S_4\mathrm{\Phi }(d_4)$$
where
$$d_i=\frac{\mathrm{log}\frac{S_1}{S_2}+\frac{1}{2}(\mathrm{\Sigma }_{22}-\mathrm{\Sigma }_{11})+\mathrm{\Sigma }_{1i}-\mathrm{\Sigma }_{2i}}{\sqrt{\mathrm{\Sigma }_{11}-2\mathrm{\Sigma }_{12}+\mathrm{\Sigma }_{22}}}$$
The reader can check that this result is again invariant under gauge-transformations $`\sigma _i\to \sigma _i-\lambda `$, as it should be. Note that $`V_{S_1}`$ and $`V_{S_2}`$ are not in general equal to zero. This means that one needs a portfolio consisting of all four underlyings to hedge this claim. Now let us consider the special case of an exchange option, setting $`S_3=S_1`$ and $`S_4=S_2`$. In this case, the formulae reduce to
$$V=S_1\mathrm{\Phi }(d_1)-S_2\mathrm{\Phi }(d_2)$$
where
$$d_{1,2}=\frac{\mathrm{log}\frac{S_1}{S_2}\pm \frac{1}{2}(\mathrm{\Sigma }_{11}-2\mathrm{\Sigma }_{12}+\mathrm{\Sigma }_{22})}{\sqrt{\mathrm{\Sigma }_{11}-2\mathrm{\Sigma }_{12}+\mathrm{\Sigma }_{22}}}$$
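The closed form is straightforward to implement, and the gauge invariance can be verified numerically by shifting all volatility vectors by a common $`\lambda `$: the $`\lambda `$-dependent terms in $`\mathrm{\Sigma }_{ij}`$ cancel in every $`d_i`$. A sketch (our own code, with invented constant volatility vectors):

```python
import numpy as np
from scipy.stats import norm

def trigger_swap(S, sigma, tau):
    """V = S3*Phi(d3) - S4*Phi(d4); sigma is a (4, k) array of constant vol vectors."""
    Sig = sigma @ sigma.T * tau                  # Sigma_ij = sigma_i . sigma_j * tau
    den = np.sqrt(Sig[0, 0] - 2 * Sig[0, 1] + Sig[1, 1])
    d = [(np.log(S[0] / S[1]) + 0.5 * (Sig[1, 1] - Sig[0, 0])
          + Sig[0, i] - Sig[1, i]) / den for i in (2, 3)]
    return S[2] * norm.cdf(d[0]) - S[3] * norm.cdf(d[1])

S = np.array([100.0, 95.0, 50.0, 40.0])
sigma = np.array([[0.20, 0.00],
                  [0.10, 0.10],
                  [0.15, 0.05],
                  [0.05, 0.20]])
v = trigger_swap(S, sigma, tau=1.0)
v_shift = trigger_swap(S, sigma - np.array([0.07, 0.03]), tau=1.0)  # sigma_i -> sigma_i - lambda
assert np.isclose(v, v_shift)                    # gauge invariance of the price
print(v, v_shift)
```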
In Ref. \[LW99\] it is claimed that the value of an option to exchange two stocks has a dependence on the interest-rate term structure, or in other words, a dependence on bond-prices. It should be clear from the discussion above that this is in fact impossible, because neither the payoff, nor the volatility functions make any reference to bonds. Therefore, the price of such an exchange option can be calculated in a market where bonds do not even exist.
## 4 Conclusions and outlook
In the preceding sections we have clearly shown the advantages of a model formulated in terms of tradables only. In this formulation, the relativity of prices manifests itself as a homogeneity condition on the price of any contingent claim, and this fact can be exploited to bypass the usual martingale construction for the replicating trading-strategy. The result is a transparent general framework for the pricing of derivatives.
In this article we have restricted our attention to the problem of pricing European path-independent claims. The generalization to path-dependent and American options is straightforward and will be dealt with in other publications.
Obviously, the applicability of the scaling laws is not restricted to models with Brownian driving factors. Currently we are considering alternative driving factors such as Poisson and Levy processes. We are also looking at implications for modeling incomplete markets. Finally the scaling-symmetry should also hold in markets with friction. This may serve as an extra guidance in the modeling of transaction-costs and restrictions on short-selling.
## 5 Stochastic differential equations
We use stochastic differential equations to model the dynamics of the prices $`x_\mu (t)`$ of tradables. The governing equation is given by
$$d_tx_\mu (t)=\alpha _\mu (x,t)x_\mu (t)dt+\sigma _\mu (x,t)x_\mu (t)dW(t)$$
with initial conditions $`x_\mu (t)`$; here $`dW(t)`$ denotes a $`k`$-dimensional Brownian motion with respect to some measure. The drifts $`\alpha _\mu (x,t)`$ and volatilities $`\sigma _\mu (x,t)`$ are assumed to be adapted to $`x`$ and predictable. For this equation to have a unique solution, we have to require some regularity-conditions on the drift $`\alpha _\mu (x,t)`$ and volatility $`\sigma _\mu (x,t)`$. These can be stated as follows \[Gar85, Arn74, BS96\].
* Lipschitz condition: there exists a $`K>0`$ such that for all $`x,y`$ and $`s\in [t,T]`$
$$|\alpha _\mu (x,s)-\alpha _\mu (y,s)|+|\sigma _\mu (x,s)-\sigma _\mu (y,s)|\le K|x-y|$$
* Growth condition: there exists a $`K`$ such that for all $`s\in [t,T]`$
$$|\alpha _\mu (x,s)|^2+|\sigma _\mu (x,s)|^2\le K^2(1+|x|^2)$$
The Lipschitz condition above is global; it can in fact be weakened to a local version. If the growth condition is not satisfied, the solution may still exist up to some time $`t^{*}`$, where the solution $`x_\mu (t)`$ has a singularity and thus ‘explodes’.
# Zeno effect preventing Rabi transitions onto an unstable energy level
## 1 Introduction
The quantum Zeno effect is one of the specific quantum effects which have gained much interest in particular in connection with the quantum theory of measurement. It goes back to successive or truly continuous measurements of the system and depends on the strength of the measurement. Generally speaking, it is the slowing down of induced quantum transitions as a result of repeated measurements of the system. In the limit of a continuous measurement, transitions of the system are inhibited. The system is frozen on a level.
Characteristic for the Zeno effect is that it is caused by measurement. A prevention of a transition need not necessarily be a manifestation of the Zeno effect, and it may also happen that a Zeno effect is accompanied by other influences. In the following we discuss a new demonstration of a pure Zeno effect.
A well known particular example of the Zeno effect is the slowing down of Rabi transitions between two energy levels $`|1`$ and $`|2`$ of a $`2`$-level system, which are induced by a resonant driving laser field, as the result of frequently repeated energy measurements. In the well-known experiment of Itano et al. the repeated energy measurement is realized for ions with the help of periodical short pulses of another laser field inducing optical transitions from level $`|1`$ onto a third level $`|3`$ with subsequent spontaneous decay back to the initial level $`|1`$ ($`V`$-configuration). The photon emitted by spontaneous emission is monitored. If the ion is on level $`|1`$, it will emit photons. This does not happen if it is on level $`|2`$. In this sense the fluorescence photons contain information about the state of the ion. The $`|1\to |3`$ transition together with the emission of the fluorescence photon represents a non-destructive projection measurement of level $`|1`$. As a result the transition $`|1\to |2`$ is hindered. This freezing on a level is a pure demonstration of the quantum Zeno effect caused by repeated almost instantaneous projection measurements (cf. , where a more complete list of the literature can also be found).
The aim of this letter is to show that the Zeno effect may be demonstrated in a much simpler way for a driven $`2`$-level system with one level showing spontaneous decay to an otherwise uncoupled third level. At the same time this system may serve as a good example for a truly continuous quantum measurement. In addition it visualizes the important role of null measurements.
We consider the very simple quantum system presented in Fig. 1a. It is a $`2`$-level system with levels (energy eigenstates) $`|1`$ and $`|2`$ which is subject to the influence of a resonant driving field $`V`$ generating Rabi oscillations between $`|1`$ and $`|2`$. Initially the system is on level $`|1`$ and the driving field induces a transition to level $`|2`$. However level $`|2`$ is assumed to be unstable. It decays rapidly by spontaneous decay with relaxation rate $`\gamma `$ to a third level $`|3`$ with emission of a photon. Accordingly, if the system reaches level $`|2`$ it very quickly transits further to level $`|3`$. Therefore, if no photon is emitted, this is a sign that the system stays perpetually on level $`|1`$. Consequently we may consider this setup as a realization of a continuous measurement of the energy level $`|1`$ in a passive way leading to null-results (in contrast to the non-continuous and active scheme realized by Itano et al. ).
What type of dynamical behavior is to be expected? Naively one could imagine that the system is transferred with the first Rabi pulse to level $`|2`$ from where it quickly decays to level $`|3`$. The contrary turns out to be the case. The transition $`|1\to |2`$ is slowed down, and the efficiency of this freezing on the initial level $`|1`$ actually increases with the effectiveness of the spontaneous decay of level $`|2`$, i.e. with increasing relaxation rate $`\gamma `$. This may seem to be counterintuitive or paradoxical. But in fact it is no surprise, because in a situation in which a continuous quantum measurement is realized due to the coupling to the environment, the quantum Zeno effect is to be expected. We will show that this behavior of the system can indeed very easily be understood as another experimentally accessible example of a pure manifestation of the Zeno effect. That this effect is in our case based only i) on null-measurements and ii) on genuinely continuous measurements makes it even more interesting from the conceptual point of view.
We start in Sect. 2 with a quantum optical description of the microphysical dynamics underlying this Zeno effect. We compare it in Sect. 3 with the phenomenological approach based on repeated measurements. In Sect. 4 we give a treatment within a phenomenological theory of continuous quantum measurements.
## 2 Quantum optical treatment of the system
We are dealing with a 3-level system which is under the influence of a driving field and in interaction with the electromagnetic vacuum causing the spontaneous emission of a photon. An adequate treatment of the complete underlying Schrödinger dynamics of the open system is rather involved. Let us therefore introduce already on the level of the microphysical quantum optical treatment some phenomenological elements. We study instead of the 3-level system an equivalent 2-level system consisting only out of the levels $`|1`$ and $`|2`$ with energies $`\mathrm{}\omega _1`$ and $`\mathrm{}\omega _2`$, and represent the instability of level $`|2`$ by an imaginary term in the energy of this level (Fig. 1 b).
We write the Hamiltonian of the system as $`H=H_\gamma +V`$ with
$$H_\gamma |1\rangle =\hbar \omega _1|1\rangle ,\qquad H_\gamma |2\rangle =\hbar (\omega _2-i\gamma )|2\rangle ,\qquad \langle 1|V|2\rangle =\langle 2|V|1\rangle ^{*}=\hbar \mathrm{\Omega }e^{i(\omega _2-\omega _1)t}.$$
(1)
Let us introduce amplitudes $`a_1(t)`$, $`a_2(t)`$ as follows:
$$|\psi \rangle =a_1(t)e^{-i\omega _1t}|1\rangle +a_2(t)e^{-i\omega _2t}|2\rangle .$$
(2)
The dynamics
$$|\dot{\psi }\rangle =-\frac{i}{\hbar }(H_\gamma +V)|\psi \rangle $$
(3)
then result in the following differential equations for the amplitudes
$$\dot{a}_1=-i\mathrm{\Omega }a_2,\qquad \dot{a}_2=-i\mathrm{\Omega }a_1-\gamma a_2.$$
(4)
The solution is
$`a_1(t)`$ $`=`$ $`e^{-\frac{\gamma }{2}t}\left[a_1(0)\mathrm{cosh}\mathrm{\Omega }_\gamma t+{\displaystyle \frac{1}{\mathrm{\Omega }_\gamma }}\left({\displaystyle \frac{\gamma }{2}}a_1(0)-i\mathrm{\Omega }a_2(0)\right)\mathrm{sinh}\mathrm{\Omega }_\gamma t\right],`$
$`a_2(t)`$ $`=`$ $`e^{-\frac{\gamma }{2}t}\left[a_2(0)\mathrm{cosh}\mathrm{\Omega }_\gamma t-{\displaystyle \frac{1}{\mathrm{\Omega }_\gamma }}\left(i\mathrm{\Omega }a_1(0)+{\displaystyle \frac{\gamma }{2}}a_2(0)\right)\mathrm{sinh}\mathrm{\Omega }_\gamma t\right],`$ (5)
where we have introduced
$$\mathrm{\Omega }_\gamma =\sqrt{(\gamma ^2/4)-\mathrm{\Omega }^2}.$$
(6)
Under the condition that the system is initially ($`t=0`$) on level $`|1`$, we find
$$a_1(t)=e^{-\frac{\gamma }{2}t}\left(\mathrm{cosh}\mathrm{\Omega }_\gamma t+\frac{\gamma }{2\mathrm{\Omega }_\gamma }\mathrm{sinh}\mathrm{\Omega }_\gamma t\right),\qquad a_2(t)=e^{-\frac{\gamma }{2}t}\left(-i\frac{\mathrm{\Omega }}{\mathrm{\Omega }_\gamma }\mathrm{sinh}\mathrm{\Omega }_\gamma t\right).$$
(7)
Depending on the relative strength of the two parameters $`\gamma `$ and $`\mathrm{\Omega }`$ one can discriminate different regimes of behavior. If the influence of the driving field is strong as compared to the strength of the spontaneous decay, i.e. for $`\mathrm{\Omega }\gg \gamma `$, we are in the Rabi regime, where the behavior of the system shows the usual Rabi oscillations modified by some damping:
$$a_1(t)=e^{-\frac{\gamma }{2}t}\mathrm{cos}\mathrm{\Omega }t,\qquad a_2(t)=-ie^{-\frac{\gamma }{2}t}\mathrm{sin}\mathrm{\Omega }t.$$
(8)
We see that the system, being initially on level 1, will be on level 2 after the Rabi period $`T_R=\pi /(2\mathrm{\Omega })`$.
For our purpose it is the opposite regime with strong coupling $`\gamma \gg \mathrm{\Omega }`$ to the environment which is of interest. In this case the probability of permanent survival $`P_{ps}(T)`$ of the system on level $`|1`$ during sufficiently long time $`T`$ turns out to be:
$$P_{ps}(T)=|a_1(T)|^2=e^{-2\mathrm{\Omega }^2T/\gamma }.$$
(9)
Because of $`\gamma \gg \mathrm{\Omega }`$ we find strong damping of the Rabi oscillations. In the limit $`\mathrm{\Omega }/\gamma \to 0`$ we have $`P_{\mathrm{ps}}(T)\to 1`$ for any fixed $`T`$, and the system is frozen on level $`|1`$. This is a manifestation of the Zeno effect.
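The crossover between the two regimes and the survival law (9) can be reproduced by directly integrating the amplitude equations (4). The following sketch (our own code; parameter values are arbitrary) does this with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def survival(Omega, gamma, T):
    """|a1(T)|^2 from the amplitude equations (4), starting on level |1>."""
    def rhs(t, a):
        a1, a2 = a[0] + 1j * a[1], a[2] + 1j * a[3]
        da1 = -1j * Omega * a2
        da2 = -1j * Omega * a1 - gamma * a2
        return [da1.real, da1.imag, da2.real, da2.imag]
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]**2 + sol.y[1, -1]**2

Omega, T = 1.0, 3.0
for gamma in (0.1, 10.0, 100.0):   # from the Rabi regime to the Zeno regime
    # the second column approaches the first as gamma/Omega grows, cf. Eq. (9)
    print(gamma, survival(Omega, gamma, T), np.exp(-2 * Omega**2 * T / gamma))
```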
We recall that the Zeno effect is the phenomenon that continuous measurement or many consecutive quantum measurements lead to damping and in a limit to freezing of the evolution of the measured system. In our case we have a continuous measurement because of the coupling to the environment via spontaneous decay. And we have just shown that it results in freezing of the evolution of the system.
It is a characteristic trait of our system that it is not closed, even when no photon is emitted. In fact this enduring failure to observe a photon in the interval $`[0,T]`$ represents the continuum version of a series of nondetection measurements or negative result measurements. We called them null measurements. A null measurement influences the system in just the same way as the usual positive result measurement. It would therefore be misleading to call it an “interaction free measurement” . Perhaps it would better be called an “energy exchange free measurement” . Our setup may serve as a simple example where the concept of null measurements leading to wave function collapses which result in level freezing can successfully be applied. Other types of null measurements which are performed on the neutron spin or on the photon polarization are discussed in the literature.
Finally we add that a full and detailed quantum optical treatment, which is not based on a complex energy as in Eq. (1), leads to the same result. It is therefore possible to obtain for our system the quantum Zeno effect within a fully microphysical approach. Note that it was not necessary to refer to a collapse of the wave function. We will now compare these results with the usual phenomenological approach to the Zeno effect, which is based on repeated measurements.
## 3 Comparison with repeated projection measurements
Let us first consider a situation for which the setup of Itano et. al. may be regarded as an experimental realization. We have a system with two levels $`|1`$ and $`|2`$. We perform a sequence of strong measurements resulting in projections of the state vector (the continuous case will be discussed below). The two corresponding measurement results are $`R_1`$ and $`R_2`$. If $`R_i`$ is measured, the system is transferred to level $`|i`$. Accordingly the measurement of $`R_i`$ is equivalent to the information that the system is immediately afterwards on level $`|i`$.
Let us assume that the first measurement of a sequence is performed at $`t=0`$ (thus fixing the initial state) and then repeated $`N`$ times with time interval $`\tau `$ between the consecutive measurements; $`\tau `$ is the relevant parameter. After the first repetition at time $`t=\tau `$ the system will still be found on the initial level with probability $`q`$ or on the other level with probability $`p=1-q`$. After the time $`T=N\tau `$ the probability that the system has made $`k`$ times a transition to the other level and has stayed $`N-k`$ times on the initial level is $`P(k)=p^kq^{N-k}`$. The probability that the system will be found finally at $`t=T`$ on the initial level, regardless of what the results of the measurements have been until then, is
$$P^Z(T)=\sum _{k\,\mathrm{even}}\binom{N}{k}p^kq^{N-k}.$$
(10)
For any fixed $`T`$ the probability $`P^Z(T)`$ tends to $`1`$ for $`\tau \to 0`$ (or $`N\to \mathrm{\infty }`$).
For the experiment of the previous section the Rabi oscillation during the time $`\tau `$ fixes $`p`$ and $`q`$ to be
$$p=\mathrm{sin}^2\mathrm{\Omega }\tau ,q=\mathrm{cos}^2\mathrm{\Omega }\tau .$$
(11)
To compare with the sequential approach sketched above, we have to take into account that our system is peculiar in the following sense: the first measurement result $`R_1`$ is “no emission of a photon”. This is a null result. We know that the system is on the initial level $`|1`$. There is no alternative measurement result $`R_2`$ because “emission of a photon” is not connected with a projection onto $`|i`$, but indicates the end of the measurement series. Detection and emission of a photon are identified (see below). We know that all the measurements before have had the null result $`R_1`$. The related probability of permanent survival at level $`|1`$ is obtained as the first term ($`k=0`$) of the sum (10):
$$P_{ps}^Z(T)=\mathrm{cos}^{2N}(\mathrm{\Omega }\tau ).$$
(12)
We turn now to the case of very rapid repetitions with $`\tau \ll \mathrm{\Omega }^{-1}`$. In this limit we may rewrite Eq. (12) without restriction of the time $`T`$ of permanent survival as
$$P_{ps}^Z(T)=\mathrm{exp}(-\mathrm{\Omega }^2T\tau ).$$
(13)
Accordingly the result is that we can reach complete agreement with $`P_{ps}(T)`$ of Eq. (9) by identifying
$$\tau =\frac{2}{\gamma }.$$
(14)
The repetition time $`\tau `$ must be chosen as the inverse of half of the relaxation rate. The quantum optical treatment of Sect. 2 thus uniquely fixes, in our comparison, the otherwise unspecified parameter $`\tau `$. The condition $`T\gg \gamma ^{-1}`$ of Sect. 2 corresponds to the evident demand $`T\gg \tau `$. Perfect level freezing is obtained in the limit $`\tau =2/\gamma \to 0`$.
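A few lines of code make the identification (14) quantitative, by comparing the repeated-projection survival probability (12) at $`\tau =2/\gamma `$ with the quantum optical result (9). The numbers below are our own illustrative choices:

```python
import numpy as np

Omega, gamma, T = 1.0, 50.0, 2.0
tau = 2.0 / gamma                                  # identification (14)
N = int(round(T / tau))
P_projections = np.cos(Omega * tau)**(2 * N)       # Eq. (12), repeated projections
P_continuous = np.exp(-2 * Omega**2 * T / gamma)   # Eq. (9), quantum optical result
print(P_projections, P_continuous)                 # agree for tau << 1/Omega
```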
It is interesting to see that because of Eq. (14) the repeated collapse to the ground state happens after time intervals $`\tau `$, which are completely fixed by the atomic lifetime. No reference to a photon detector is necessary. This demonstrates that the level freezing is independent of the detection or nondetection of the emitted photon in a real photon-detector. Instead it is the possibility of the irreversible decay into vacuum caused by the interaction with the environment, which is responsible for the level freezing .
The fact that we are dealing with null measurements establishes another difference compared to the Itano et al. experiment. There, the probability of finding the initial state at time $`T`$ regardless of the past history of the system in the interval $`[0,T]`$ has been investigated. This is to be described by $`P^Z(T)`$ given in Eq. (10). But, as has been pointed out in , this is not the genuine quantum Zeno effect according to Misra and Sudarshan, which is based on the idea of permanent survival as expressed by $`P_{\mathrm{ps}}^Z(T)`$ of Eq. (12). This requires continuous observations such as those realized in our proposal.
## 4 Treatment within a phenomenological theory of continuous measurements
The level freezing which we have observed above is a realization of a genuinely continuous (strictly not sequential) measurement of energy, which has led to one uninterrupted null-result of duration $`T`$. This represents a particular measurement readout $`[E]`$, namely $`E(t)=E_1=\mathrm{const}`$, with $`0tT`$. There are different approaches to the theory of continuous quantum measurements (for a review see ). We will refer to the presentation in (for the realization scheme see ). Note that the assumption of instantaneous wave function collapse is not at all necessary in a phenomenological theory of continuous quantum measurement. Our phenomenological approach for example goes back to restricted path integrals. We call it the method of the complex Hamiltonian (mcH). This fully elaborated scheme allows the treatment of all strengths of the influence of the measurement on the measured system from weak to strong.
According to the mcH, a system with the Hamiltonian $`H_0+V`$, subject to the continuous measurement of energy resulting in the readout $`[E]=\{E(t)\}`$, is described by the Schrödinger equation with the effective Hamiltonian containing an imaginary part:
$$H_{[E]}=H_0+V-i\kappa (H_0-E(t))^2,$$
(15)
where the constant $`\kappa `$ characterizes the strength of the measurement. The inverse of this constant is a measure of the measurement fuzziness.
For comparison with the mcH we introduce a 2-level Hamiltonian $`H_0`$ with eigenvalues $`E_i=\hbar \omega _i`$ and rewrite our Hamiltonian $`H_\gamma `$, which describes the system without the driving field, in the form
$$H_\gamma =H_0-i\hbar \gamma |2\rangle \langle 2|.$$
(16)
Then the complete Hamiltonian $`H_\gamma +V`$ used in Sect. 2 for the description of our system is identical to Eq. (15) provided that $`E(t)\equiv E_1`$ and the coefficient $`\kappa `$ is expressed by the level width $`\gamma `$ and the energy difference of the levels $`\mathrm{\Delta }E=\hbar (\omega _2-\omega _1)`$ as follows: $`\kappa =\gamma /\mathrm{\Delta }E^2`$. Notice that the measurement readout $`E(t)\equiv E_1`$ means in the mcH that the system, according to the measurement, is staying on the level $`|1`$ all the time. Thus, there is complete agreement between the results of the mcH and our dynamical consideration of Sect. 2.
The so-called level resolution time $`T_{\mathrm{lr}}=1/(\kappa \mathrm{\Delta }E^2)`$ is a measure of the weakness of the continuous measurement and of the resulting fuzziness of the readout. It is in our case $`T_{\mathrm{lr}}=\gamma ^{-1}`$. The condition $`T_{\mathrm{lr}}\ll \mathrm{\Omega }^{-1}`$ characterizes what is called the Zeno regime (strong influence of the measurement). This agrees with the condition used above in Sects. 2–3. The comparison with the phenomenological approach mcH has enabled us to introduce the concept of the weakness of our continuous null measurement and to fix it quantitatively. $`H_\gamma `$ refers to the particular readout $`[E]=E_1`$. According to the mcH, the probability to obtain this $`[E]`$ agrees with $`|a_1(T)|^2`$ of Eq. (9).
## 5 Similar systems
Some systems resembling the one considered above have been discussed in the literature.
In the papers continuous null measurements of transitions in a $`2`$-level “atom” ($`|1`$,$`|2`$) were considered. The atom occupies level $`|1`$ at the initial moment of time; its evolution without observation is described by the equation $`|\dot{\psi }\rangle =-i\beta \widehat{\sigma }_1|\psi \rangle `$, so the atom oscillates between the two levels with frequency $`\beta `$. The atom is exposed to a continuous observation by an apparatus indicating the transition from level $`|1`$ to level $`|2`$. The apparatus is coupled only to the state $`|2`$ of the atom by an interaction Hamiltonian of the form $`|2\rangle \langle 2|\otimes H`$, where $`H`$ is a selfadjoint operator in the Hilbert space of the apparatus. This coupling is assumed to switch on a process in the apparatus resulting in the quick indication of the transition. Neglecting the inner dynamical evolution of the apparatus, the authors showed that the evolution of the system is inhibited if the apparatus is sufficiently fast. Their conclusion is that the Zeno effect may take place and that it can be considered as a consequence of pure dynamical unitary evolution of the combined system object-apparatus. The scheme of observation considered in is however rather abstract and no experimental realization of this scheme has been discussed.
The 3-level scheme identical to the one considered by us has been mentioned in the work of Plenio et al. in connection with the Zeno effect, but with regard to a quite different aspect. Without any detailed consideration, it was noted that the oscillatory behavior of probability $`P_1`$ is inhibited and replaced by an exponential one if level $`|2`$ spontaneously decays to another state during a sufficiently short time. This feature of the system was related to a finite width of the level $`|2`$, but the inhibition of oscillations was not connected with the quantum Zeno effect. On the contrary, the authors suggested using the resulting behavior of level $`|1`$ as a model for a decaying system which shows a long enough non-exponential period. This “artificial” decaying system had then to be measured repeatedly to demonstrate the Zeno effect.
Rabi oscillations with radiative damping have been studied in different contexts. We give an example: The state $`|1`$ is considered to be the ground state. A higher state $`|2`$ is assumed to be unstable. The system decays back to the state $`|1`$ with emission of a photon. Investigation of such a system produces the well-known result that Rabi oscillations are damped by radiative decay. If the decay rate of level $`|2`$ is much greater than the Rabi frequency, the probability to find the system on the level $`|1`$ remains close to unity for all times. In this case (in contrast to our 3-level system) there are simultaneously two mechanisms of “freezing” the system on the level $`|1`$: the fast transition from level $`|2`$ back to level $`|1`$ and in addition Zeno-like inhibition of Rabi oscillations connected with broadening of the level $`|2`$. Stimulated and spontaneous emission happen together. It is difficult to separate the two mechanisms. We cannot interpret the resulting behavior as a Zeno effect, because the two possible observational results — that a photon is or is not emitted — are both related to the same information, namely that the system is transferred to level $`|1`$. This example shows that not any level “freezing” may be taken as an indication for the presence of a Zeno type measurement influence. In fact the analysis must be carried out the other way round: If a situation can be interpreted as a measurement of Zeno type (including null measurements), then level freezing is to be expected.
## 6 Conclusions
We have shown that the freezing on the initial level (inhibition of a transition) caused by the possibility of spontaneous decay of the other level can indeed be understood as a Zeno effect because it goes back to the measurement of the system. This is supported by a comparison with the phenomenological scheme based on repeated projection measurements and the phenomenologically described continuous energy measurement in the Zeno regime. After appropriate adjustment of parameters they all agree with the result of the quantum optical calculation.
The Zeno effect as described above is a particular quantum phenomenon related to quantum measurements which may occur in many different physical situations. To refer to it is of great predictive power: before any involved microphysical calculation has been performed, it must be expected that freezing on a level (or at least slowing down of a transition) will happen, if measurement influences the system.
Referring to our case, we know that quantum measurement may be explained as an interaction with the environment. The spontaneous emission is the mediator between the $`2`$-level system and the environment. It is natural to exploit this by conversely treating the influence of the environment as a measurement of the system. This is what we have done above. As shown, the results are in accord with the microphysical calculation. The simplicity and the usefulness of the approach became evident.
The advantage of the particular system presented above is, that it is easier to analyse than the standard example of Itano et al. and shows nevertheless to a certain extent a counterintuitive behavior. At the same time it is a very simple example of a null measurement, which is sometimes not quite correctly called interaction free measurement. Potentially the most interesting aspect of the system may be that it represents a truly continuous measurement, which can easily be handled and which we have discussed above only in its Zeno limit.
ACKNOWLEDGEMENT
This work was supported in part by the Deutsche Forschungsgemeinschaft and by the Russian Foundation for Basic Research, grant 98-01-00161. We thank Frank Burgbacher and Thomas Konrad for interesting discussions.
# Elliptic flow in heavy ion collisions near the balance energy
## Abstract
The proton elliptic flow in collisions of $`{}^{48}`$Ca on $`{}^{48}`$Ca at energies from 30 to 100 MeV/nucleon is studied in an isospin-dependent transport model. With increasing incident energy, the elliptic flow shows a transition from positive to negative values. Its magnitude depends on both the nuclear equation of state (EOS) and the nucleon-nucleon scattering cross section. Different elliptic flows are obtained for a stiff EOS with free nucleon-nucleon cross sections and a soft EOS with reduced nucleon-nucleon cross sections, although both lead to vanishing in-plane transverse flow at the same balance energy. The study of both in-plane and elliptic flows at intermediate energies thus provides a means to extract simultaneously information on the nuclear equation of state and the nucleon-nucleon scattering cross section in medium.
PACS number(s): 25.70.-z, 25.75.Ld, 24.10.Lx
Heavy ion collisions provide the possibility to study the properties of nuclear matter under conditions vastly different from those in normal nuclei, such as high density and excitation as well as a large difference in the proton and neutron numbers . Such knowledge is not only of interest in itself but also useful in understanding astrophysical phenomena such as the properties of the core of compact stars, the evolution of the early universe, and the formation of elements in stellar nucleosynthesis. One observable that has been extensively used for extracting such information from heavy ion collisions is the collective flow of various particles (for a recent review, see Refs. ). For example, the proton flow in heavy ion collisions at 200 MeV/nucleon to 1 GeV/nucleon has been found to be consistent with a soft nuclear equation of state . From the kaon flow in heavy ion collisions at 1 to 2 GeV/nucleon, the existence of a weak repulsive kaon potential has been deduced . In heavy ion collisions at energies higher than 2 GeV/nucleon, recent studies of proton flow seem to indicate that there is a softening of the nuclear equation of state as the nuclear density and excitation increase . There are also suggestions that particle collective flows in ultrarelativistic heavy ion collisions are sensitive to the initial parton dynamics and subsequent phase transitions .
In general, collective flow in heavy ion collisions is affected by both the nuclear mean-field potential and nucleon-nucleon cross sections. In heavy ion collisions at intermediate energies of a few tens of MeV/nucleon, the collision dynamics is dominated by the attractive nuclear mean-field potential, as nucleon-nucleon scatterings are largely blocked by the Pauli principle. As a result, the nucleon transverse flow in the reaction plane is negative, i.e., nucleons moving in the projectile direction are deflected to negative angles. With increasing incident energy, the repulsive nucleon-nucleon scattering becomes important and reduces the negative flow caused by the attractive nuclear mean-field potential. At a certain incident energy, called the balance energy, the in-plane transverse flow vanishes as a result of the cancellation between these two competing effects . The disappearance of transverse collective flow has been observed experimentally in heavy ion collisions . The measured balance energy depends strongly on the mass and isospin of the colliding nuclei as well as on the impact parameter of the collision . Studies based on transport models have shown that the same balance energy can be obtained with different nuclear equations of state and in-medium nucleon-nucleon scattering cross sections . Extracting this information from measured balance energies thus requires the measurement of other observables. One of the present authors has recently shown that different EOS and cross sections which give the same balance energy show different differential transverse flows, i.e., their transverse flows have different dependences on the total transverse momentum. In the present paper, we shall instead study the proton elliptic flow, which measures the anisotropy in the transverse momentum distribution. In particular, using the isospin-dependent Boltzmann-Uehling-Uhlenbeck (IBUU) model , we shall consider collisions of <sup>48</sup>Ca + <sup>48</sup>Ca at energies from 30 to 100 MeV/nucleon. As shown below, different EOS and cross sections that give the same balance energy lead to significantly different elliptic flows.
Taking the beam direction along the $`z`$ axis and the reaction plane as the $`xz`$ plane, the elliptic flow is determined from the average difference between the squares of the $`x`$ and $`y`$ components of the particle transverse momentum, i.e.,
$$v_2=\left\langle \frac{p_x^2-p_y^2}{p_x^2+p_y^2}\right\rangle .$$
(1)
It corresponds to the second Fourier coefficient in the transverse momentum distribution and describes the eccentricity of an ellipse-like distribution, i.e., $`v_2>0`$ indicates in-plane enhancement, $`v_2<0`$ characterizes the squeeze-out perpendicular to the reaction plane, and $`v_2=0`$ shows an isotropic distribution in the transverse plane.
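As a concrete illustration of Eq. (1), the following minimal Python sketch (our own illustrative addition; the function name and the use of NumPy are assumptions, not part of the original analysis) evaluates $`v_2`$ from lists of transverse momentum components collected from simulated events:

```python
import numpy as np

def elliptic_flow(px, py):
    """v2 = <(px^2 - py^2) / (px^2 + py^2)> over a particle sample.

    px, py: arrays of the in-plane and out-of-plane transverse momentum
    components (reaction plane taken as the xz plane), e.g. for
    midrapidity protons accumulated over many transport-model events.
    """
    px = np.asarray(px, dtype=float)
    py = np.asarray(py, dtype=float)
    pt2 = px**2 + py**2
    mask = pt2 > 0.0  # guard against particles with vanishing pt
    return np.mean((px[mask]**2 - py[mask]**2) / pt2[mask])

# v2 > 0: in-plane enhancement; v2 < 0: out-of-plane squeeze-out;
# v2 = 0: isotropic transverse distribution.
```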
The IBUU transport model used in the present study treats protons and neutrons explicitly. It also includes an asymmetry term in the nuclear mean-field potential and different scattering cross sections for protons and neutrons. The nuclear mean-field potential is parameterized as
$$U(\rho ,\tau _z)=U_0(\rho )+U_{\mathrm{asy}}(\rho ,\tau _z),$$
(2)
$$U_0(\rho )=a\left(\frac{\rho }{\rho _0}\right)+b\left(\frac{\rho }{\rho _0}\right)^\sigma ,$$
(3)
$$U_{\mathrm{asy}}(\rho ,\tau _z)=C\frac{\rho _p-\rho _n}{\rho _0}\tau _z.$$
(4)
In the above, $`\rho _0`$ is the normal nuclear matter density; $`\rho `$, $`\rho _n`$ and $`\rho _p`$ are the nucleon, neutron, and proton densities, respectively; and $`\tau _z`$ equals 1 for protons and -1 for neutrons. For the strength of the asymmetry potential, we take $`C=32`$ MeV. Two different EOS are used in our studies: a stiff EOS with a compressibility of 380 MeV ($`a=-124`$ MeV, $`b=70.5`$ MeV, $`\sigma =2`$) and a soft one with a compressibility of 200 MeV ($`a=-356`$ MeV, $`b=303`$ MeV, $`\sigma =7/6`$). We also include the Coulomb potential for protons. For nucleon-nucleon scatterings, both elastic and inelastic channels are included by using the experimentally measured cross sections with explicit isospin dependence. Details of the IBUU model can be found in Refs. .
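For illustration, the sketch below (our own addition) evaluates this parameterization of Eqs. (2)-(4); the numerical value of $`\rho _0`$ is an assumed standard one ($`\rho _0\approx 0.168`$ fm<sup>-3</sup>), since the text does not quote it:

```python
RHO0 = 0.168   # fm^-3; assumed normal nuclear matter density
C_ASY = 32.0   # MeV; strength of the asymmetry potential

# (a [MeV], b [MeV], sigma) for the two EOS quoted in the text
EOS = {
    "stiff": (-124.0, 70.5, 2.0),        # compressibility 380 MeV
    "soft":  (-356.0, 303.0, 7.0 / 6.0),  # compressibility 200 MeV
}

def mean_field(rho, rho_p, rho_n, tau_z, eos="soft"):
    """U(rho, tau_z) of Eqs. (2)-(4); tau_z = +1 for protons, -1 for neutrons."""
    a, b, sigma = EOS[eos]
    u0 = a * (rho / RHO0) + b * (rho / RHO0) ** sigma
    u_asy = C_ASY * (rho_p - rho_n) / RHO0 * tau_z
    return u0 + u_asy  # MeV

# Sanity check: in symmetric matter at normal density U0 = a + b,
# i.e. roughly -53 MeV for either parameter set.
```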
We first study the flow parameter at midrapidity, which is defined by
$$F=\frac{d\langle p_x\rangle }{dy}\Big|_{y=0}.$$
(5)
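A simple way to estimate $`F`$ of Eq. (5) from simulated events is a straight-line fit of $`p_x`$ versus rapidity near midrapidity; the sketch below is an illustrative assumption on our part, not necessarily the procedure used in the original analysis:

```python
import numpy as np

def flow_parameter(y, px, ycut=0.25):
    """Estimate F = d<p_x>/dy at y = 0 (Eq. 5).

    y, px: rapidity and in-plane transverse momentum of all protons
    accumulated over many events. A least-squares line through the
    points with |y| < ycut has slope d<p_x>/dy at midrapidity.
    """
    y = np.asarray(y, dtype=float)
    px = np.asarray(px, dtype=float)
    sel = np.abs(y) < ycut
    F, _ = np.polyfit(y[sel], px[sel], 1)
    return F  # same units as px, per unit rapidity
```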
In Fig. 1, we show the incident energy dependence of the proton flow parameter in <sup>48</sup>Ca + <sup>48</sup>Ca reactions at an impact parameter of 2 fm. Open circles are obtained from the IBUU model using the soft EOS. In this case, the proton in-plane transverse flow below about 45 MeV/nucleon is negative as a result of the dominant effect of attractive nuclear mean-field potential. Above this incident energy, nucleon-nucleon scatterings become more important, and their repulsive effects lead to a positive flow parameter. For the stiff EOS, shown by solid circles, the flow parameter is generally reduced because of a less attractive mean-field potential than that for the soft EOS. The exception to this general behavior occurs, however, at very low incident energies below about 40 MeV/nucleon, where the flow parameter increases instead with decreasing incident energy. This is due to the fact that scattering effects at low energies are not strong enough to reverse the effect due to the attractive mean-field potential. We have also shown in Fig. 1 by solid triangles the flow parameter obtained for the soft EOS but with the nucleon-nucleon scattering cross section reduced by 12%. Compared with the case of the soft EOS and free nucleon-nucleon cross section $`\sigma _{NN}`$, the flow parameter is reduced as expected. We note that the same balance energy, about 65.5 MeV/nucleon, is obtained for both the stiff EOS with $`\sigma _{NN}`$ and the soft EOS with $`0.88\sigma _{NN}`$.
The excitation function of the proton elliptic flow for the same reaction is shown in Fig. 2. For both the soft EOS (open circles) and the stiff EOS (solid circles) the proton elliptic flow changes from positive flow at low energies to negative flow at high energies, i.e., a transition from the dominance of in-plane transverse flow to that of out-of-plane squeeze out as the beam energy increases. However, the energy at which this transition occurs differs for the two EOS; it is smaller for the stiff EOS than for the soft EOS. To understand this difference, we also show in Fig. 2 the proton elliptic flow in the absence of mean-field potential (open squares), which is negative at all energies, i.e., out-of-plane squeeze out dominates over in-plane transverse flow. Since the soft EOS gives a larger in-plane transverse flow than that due to the stiff EOS, it leads to a higher energy at which the elliptic flow changes sign. The abnormal behavior at energies below 45 MeV/nucleon, where the elliptic flow for the stiff EOS decreases with decreasing energy, reflects its behavior in the flow parameter as shown in Fig. 1.
Also shown in Fig. 2 is the elliptic flow for the soft EOS with 0.88$`\sigma _{NN}`$ (solid triangles), which gives the same flow parameter as the stiff EOS with $`\sigma _{NN}`$. As seen, the two give very different elliptic flows; the flow is negative for the stiff EOS with $`\sigma _{NN}`$ but positive for the soft EOS with 0.88$`\sigma _{NN}`$.
In summary, the IBUU model has been used to study the proton elliptic flow in collisions of <sup>48</sup>Ca + <sup>48</sup>Ca at an impact parameter of 2 fm for beam energies from 30 to 100 MeV/nucleon. We find that it shows a transition from positive to negative flow as the incident energy increases. A strong dependence on both the nuclear EOS and the nucleon-nucleon cross section is seen in proton elliptic flow. Although both the stiff EOS with $`\sigma _{NN}`$ and the soft EOS with $`0.88\sigma _{NN}`$ have the same balance energy, they are found to give very different elliptic flows. The study of both in-plane and elliptic flows at intermediate energies thus allows one to extract simultaneously the information on the nuclear equation of state and the nucleon-nucleon scattering cross section in medium.
We thank Joe Natowitz for a critical reading of the manuscript. This work was supported in part by the National Science Foundation Grant PHY-9870038, the Department of Energy grant DE-FG03-93ER40773, the Welch Foundation Grant A-1358, and the Texas Advanced Research Program FY97-010366-068.
# Possible scenario of the melting of metals
## Abstract
A microscopic picture of the “preparation” of a crystal for the transition to the liquid state on approaching the melting temperature is proposed. Based on simple crystallogeometric considerations and the analysis of computational results for the corresponding anharmonic characteristics, an estimate is given for the magnitude of the atomic displacements leading to the appearance of close packed Bernal pseudonuclei in the solid phase. The physical meaning of the classical Lindemann criterion and the mechanisms of the formation of free volume in the crystal phase are discussed.
Despite a great amount of accumulated experimental and theoretical information (see, e.g., ), the classical question of what melting actually is remains unsolved. The melting temperature $`T_m`$ itself is determined from thermodynamic considerations and, e.g., for alkali metals it can be calculated in the framework of microscopic theory in good agreement with experimental data . However, this does not elucidate the nature of the processes resulting in the reconstruction of the structure at the melting point. There is direct evidence of this reconstruction, such as a nonlinear (in temperature $`T`$) increase of the heat capacity and a significant softening of the shear moduli at $`T\to T_m`$ (for most metals the latter amounts to a decrease of one of the moduli by as much as 50%, and for indium even by 90% ). To investigate the phenomena taking place in crystal lattices near $`T_m`$ a broad variety of techniques is used - from classical or quantum Monte-Carlo computer simulations to experiments with artificial “crystals” of charged drops . Despite this, we do not yet have a complete and reliable picture of the melting of crystals. Here we propose a qualitative scenario of melting for the specific case of metals.
We are interested in the dynamic picture of the processes in a crystal near the melting point. According to the well-known Lindemann criterion of melting one has $`\overline{u}/d\simeq 0.1`$ at $`T=T_m`$, where $`\overline{u}=\sqrt{\langle u^2\rangle },`$ $`\langle u^2\rangle `$ being the average square of the thermal atomic displacements and $`d`$ the minimal interatomic distance (see ). It is important that under such conditions, as a rule, phonons are well-defined collective excitations in the whole Brillouin zone and anharmonic effects are relatively small up to $`T_m.`$ Microscopic calculations for metals with BCC and FCC structures show that the value of the relative phonon damping $`\mathrm{\Gamma }_{𝐪\xi }/\omega _{𝐪\xi }`$ (where $`𝐪,\xi `$ are the wavevector and phonon branch number, correspondingly) at $`T=T_m`$ does not exceed 0.1-0.25 for most of them. It is natural to ask why such small anharmonic effects may nevertheless lead to the loss of stability of crystal lattices at melting. In our opinion, the answer is connected with the geometric properties of three-dimensional Euclidean space.
Microscopic calculations of structural and thermodynamic characteristics of liquid rare gases and liquid metals show that the accuracy of their description near $`T_m`$ depends mainly on the simple geometric parameter of atomic packing $`\eta =\pi d^3n/6`$, where $`n`$ is the atomic density and $`d`$ is the diameter of the non-overlapping spheres associated with each atom. This qualitatively confirms the adequacy of the geometric approach to the description of the structure of liquids proposed by Bernal , namely, the model of random packing of hard spheres. His conclusion about the existence of close packed “Bernal pseudonuclei” formed by the connection of several tetrahedra is the most important one for the scenario of melting proposed here (they are called pseudonuclei because it is impossible to build a regular structure from these formations). Locally they are characterized by the packing parameter $`\eta \simeq 0.78`$ (at the average packing parameter $`\eta \simeq 0.64`$), which is higher than the maximum possible packing in crystals, $`\eta \simeq 0.74`$ (see ). To prevent misunderstanding, note that we consider a purely geometric definition of the packing parameter corresponding to the hard-sphere model with the interatomic potential
$`\phi _{HS}\left(r\right)=\begin{cases}\infty , & r<d\\ 0, & r>d\end{cases}`$
Such a definition does not take into account the effects of the “softness” of interatomic interactions in metals, i.e. the role of the attractive part of the interactions, as well as nonpairwise forces (the dependence of the potential on the mean electron density), which are important for metals. Because of this, the effective packing parameter in liquid metals at $`T\simeq T_m`$ can really be estimated as $`\eta _{met}\simeq 0.45`$ . Despite this, molecular dynamics simulations give evidence of the qualitative adequacy of the Bernal model of the structure of liquids as a whole .
For metals as well as for liquid rare gases the closest packing is energetically favorable for small groups of atoms, which is confirmed by the data on the structure of small metallic clusters . It is well known that in three-dimensional Euclidean space the closest packing is the tetrahedral one, which, at the same time, cannot be extended through all of space. This is a drastic difference between three-dimensional space and the two-dimensional one. The latter can be filled completely by regular triangles, six of them meeting at each lattice site. In the three-dimensional case only five regular tetrahedra can share a common edge, and a void appears in this case with the angle deficit $`\delta =2\pi -5\mathrm{cos}^{-1}\left(1/3\right),\delta /2\pi \simeq 0.02.`$ This simple consideration is the basis of an elegant approach to the description of the structure of liquids and glasses . According to this approach the structure is defined not with respect to the usual Euclidean space but to a Riemannian one. The latter can, for a specific value of the curvature radius, be filled completely by regular “tetrahedra”. The transition to the real physical space is carried out by introducing corresponding “decurving” defects, namely, structural disclinations. Their appearance near $`T_m`$ has recently been demonstrated in a simple model of a crystal by molecular dynamics simulations . Earlier, a similar approach to the problem of melting based on the consideration of the statistics of linear defects was developed phenomenologically by Patashinskii and collaborators .
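As a quick numerical check of the geometry quoted above (an illustrative addition of ours), one can evaluate the angle deficit and, for comparison, the closest regular packing fraction:

```python
import math

# angle deficit when five regular tetrahedra share a common edge
delta = 2.0 * math.pi - 5.0 * math.acos(1.0 / 3.0)
print(delta / (2.0 * math.pi))  # ~0.0205, i.e. delta/2pi ~ 0.02

# closest regular (FCC/HCP) packing fraction for comparison
eta_fcc = math.pi / (3.0 * math.sqrt(2.0))
print(eta_fcc)  # ~0.7405
```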
Taking into account the elastic energy of the disclinations as well as the tendency toward the closest packing of atoms in the “chemical” energy of interatomic interactions, it may be shown that the formation of an inhomogeneous state can in principle be described by these considerations. This state is characterized by the existence of superdense packed regions and Bernal “voids” around them. The central regions have a tetrahedral-like packing structure and are similar in this sense to Bernal pseudonuclei. Our main hypothesis is that such regions can be created by thermal fluctuations near the melting point.
To form a close packed region as a result of the thermal motion of an atomic group it is necessary to “overcome” the angle deficit $`\delta .`$ This has to be done as a result not just of oscillations but of displacements of the centers of the oscillations, i.e. of the long-lived component of the atomic motion. The latter can be estimated from the following simple considerations. The irreversibility of atomic displacements results from the phonon damping. The weight of this irreversible component is of the order of $`\overline{\mathrm{\Gamma }}/\overline{\omega }`$, where $`\overline{\mathrm{\Gamma }},\overline{\omega }`$ are the average phonon damping and frequency, correspondingly. The close packed region will be sufficiently long-lived after its formation by thermal fluctuations since it corresponds to a local minimum of the free energy. Indeed, it is known that for a system of hard spheres with relatively weak attraction (which is a good approximation for most metals ) the density of the free energy of sufficiently small groups of atoms is minimal for the closest tetrahedral packing . As a result, the probability of formation of a close packed pseudonucleus can be not too small for atomic displacements satisfying the condition
$`\left(\overline{u}/d\right)\left(\overline{\mathrm{\Gamma }}/\overline{\omega }\right)\simeq \delta /2\pi \simeq 0.02.`$
We believe that this estimate is a realistic melting criterion. Empirically, it coincides approximately with the Lindemann criterion but, in contrast with the latter, it is directly connected with contemporary views on the structure of liquids and the scale of anharmonic effects, at least for metals.
Provided that the average atomic density is fixed, the formation of a close packed region inevitably results in the appearance of Bernal voids around it. This changes essentially the traditional views formulated by Frenkel about the leading role of monovacancies in the formation of free volume near the melting point. According to the scenario proposed here, the latter appears first, with increasing temperature, as the voids around close packed regions. Molecular dynamics simulation shows that for systems with sufficiently “soft” interatomic interactions (e.g., liquid metals) there are two kinds of Bernal voids: distorted tetrahedral and octahedral ones . Apparently these voids are the most energetically favorable “carriers” of the free volume in the solid phase as well. Monovacancies should have a higher free energy and appear only as a result of the diffusive decay of the voids in the close vicinity of $`T_m.`$
To check this assumption we have fitted the defect contributions to the heat capacity of sodium near $`T_m`$, which can be obtained from the experimental data by subtracting the electronic, phonon and anharmonic contributions . Trying to fit this contribution $`C^d\left(T\right)`$ by an Arrhenius law in a broad temperature range $`\left(0.6T_m<T<T_m\right)`$, we find that the slope of the curve has a jump at $`T\simeq 0.8T_m`$. The fit for the narrower temperature range $`0.8T_m<T<T_m`$ gives the following results for the entropy $`S_{1v}`$ and energy $`E_{1v}`$ of the formation of monovacancies as well as their concentration at the melting point $`n_{1v}`$: $`S_{1v}\simeq 4.3,E_{1v}\simeq 0.3`$ eV, $`n_{1v}\simeq 0.3\%.`$ For the other temperature range $`0.6T_m<T<0.8T_m`$ one has $`S_{1v}\simeq 0.5,E_{1v}\simeq 0.17`$ eV, $`n_{1v}\simeq 0.74\%`$ . The first set of results is in much better agreement with the data obtained by differential dilatometry, $`S_{1v}\simeq 5.2,E_{1v}\simeq 0.35`$ eV, $`n_{1v}\simeq 0.3\%`$ , as well as with the results of microscopic calculations of the energy of formation of a monovacancy, $`E_{1v}\simeq 0.3`$ eV .
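The exact fitting procedure is not spelled out in the text; a minimal sketch of such an Arrhenius analysis, assuming the simplest form $`C^d(T)\mathrm{exp}(-E_{1v}/k_BT)`$ up to a constant prefactor (this functional form is our assumption), is:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_energy(T, Cd):
    """Fit ln C^d = const - E/(k_B T) and return the activation energy E (eV).

    T, Cd: arrays of temperatures and defect heat-capacity contributions
    restricted to one temperature window (e.g. 0.8 T_m < T < T_m); a jump
    in the fitted slope between windows signals two different regimes.
    """
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(Cd)), 1)
    return -slope * KB
```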
This confirms that, in contrast with the traditional views and in agreement with the scenario of melting proposed here, the temperature increase leads first to the formation of clusters of vacancies (the voids), and the monovacancies appear only in the close vicinity of $`T_m.`$ Note also that the formation of monovacancies in a broad temperature range below $`T_m`$, corresponding to Frenkel’s views on the nature of melting, was never observed in computer simulations by molecular dynamics and Monte-Carlo methods. At the same time, according to these simulations, pentagonal faces of Voronoi polyhedra are frequently observed for instantaneous atomic configurations not only in liquids but also in crystals near the melting point . According to Bernal this is the main feature of the structure of liquids. It means that some structural elements which are characteristic of liquids arise via thermal fluctuations already in the crystal phase.
As a conclusion, consider a possible way to describe such fluctuations quantitatively. The formation of the closest tetrahedral packing near $`T_m`$ has to lead to the appearance of icosahedral short-range order described by the parameter
$`Q_{6m}\left(𝐫\right)=Y_{6m}[\theta \left(𝐫\right),\phi \left(𝐫\right)]`$
where $`Y_{6m}`$ are spherical harmonics, $`\theta \left(𝐫\right),\phi \left(𝐫\right)`$ are the polar angles of the direction to the nearest neighbors of the atom at point $`𝐫`$, and the corresponding correlation function
$`G\left(r\right)={\displaystyle \frac{4\pi }{13}}{\displaystyle \sum _m}Q_{6m}\left(𝐫\right)Q_{6m}^{*}\left(0\right).`$
Its calculation in the crystal phase by molecular dynamics or Monte-Carlo methods could give important information about the local structure of solids near melting.
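A minimal sketch of such a calculation (our own illustrative addition; it assumes the neighbor lists come from an MD or MC snapshot, and the use of SciPy's spherical harmonics is a tooling choice of ours) is:

```python
import numpy as np
from scipy.special import sph_harm

def q6m_local(bonds):
    """Q_{6m} for one atom: Y_{6m} averaged over its nearest-neighbor bonds.

    bonds: (N, 3) array of vectors from the atom to its nearest neighbors.
    Returns the 13 complex components m = -6 ... 6.
    """
    bonds = np.asarray(bonds, dtype=float)
    r = np.linalg.norm(bonds, axis=1)
    polar = np.arccos(bonds[:, 2] / r)           # theta(r) in the text
    azim = np.arctan2(bonds[:, 1], bonds[:, 0])  # phi(r) in the text
    # SciPy convention: sph_harm(m, l, azimuthal, polar)
    return np.array([sph_harm(m, 6, azim, polar).mean()
                     for m in range(-6, 7)])

def g6(q6m_r, q6m_0):
    """G(r) = (4 pi / 13) sum_m Q_{6m}(r) Q*_{6m}(0)."""
    return float((4.0 * np.pi / 13.0)
                 * np.sum(q6m_r * np.conj(q6m_0)).real)
```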
This work is supported by Russian Basic Research Foundation, grant 98-02-16219.
# Monte Carlo simulation with time step quantification in terms of Langevin dynamics
## Abstract
For the description of thermally activated dynamics in systems of classical magnetic moments, numerical methods are desirable. We consider a simple model for isolated magnetic particles in a uniform field at an oblique angle to the easy axis of the particles. For this model, a comparison of the Monte Carlo method with Langevin dynamics yields new insight into the interpretation of the Monte Carlo process, leading to the implementation of a new algorithm where the Monte Carlo step is time-quantified. The numerical results for the characteristic time of the magnetisation reversal are in excellent agreement with asymptotic solutions, which themselves are in agreement with the exact numerical results obtained from the Fokker-Planck equation for the Néel-Brown model.
Studies of spin dynamics in particulate systems are currently of significant interest, as model systems for understanding the thermodynamics of the reversal process. Brown developed a theoretical formalism for thermally activated magnetisation reversal based on the Fokker-Planck (FP) equation which led to a high energy barrier asymptotic formula in the axially symmetric case of a particle with easy (uniaxial) anisotropy axis collinear with the applied magnetic field. Since then extensive calculations have been carried out in which improved approximations were found for the axially symmetric case. Coffey and co-workers also derived formulae for the non-axially symmetric case, investigating also the different regimes imposed by the damping parameter $`\alpha `$ of the Landau-Lifshitz-Gilbert (LLG) equation. This work represents an important basis for the understanding of dynamic processes in single-domain particles. New experimental techniques which allow for an investigation of nanometer-sized, isolated, magnetic particles confirmed this theoretical approach to thermal activation .
Unfortunately, the extension of this work to the important case of strongly coupled spin systems such as are found in micromagnetic calculations of magnetisation reversal is non-trivial, and realistic calculations in systems with many degrees of freedom would appear to be impossible except by computational approaches. These are currently of two types: (i) calculations involving the direct simulation of the stochastic (Langevin) equation of the problem, in this case the LLG equation supplemented by a random force representing the thermal perturbations. This is referred to as the Langevin Dynamics (LD) formalism , and (ii) Monte Carlo (MC) simulations with a continuously variable (Heisenberg like) Hamiltonian . The LD approach, although having a firm physical basis is limited to timescales of the order of a few ns for strongly coupled systems. The MC approach is capable of studying longer timescales involving reversal over large energy barriers, but has the severe problem of having no physical time associated with each step, resulting in unquantified dynamic behavior.
Physically, the dynamic behavior of interacting spin systems is a topic of considerable current interest, much of this interest being driven by the need to understand spin electronic devices such as MRAM. The possibility of truly dynamic models of strongly coupled systems would seem to be an important factor in the development of a fundamental physical understanding. This requires dynamic studies over the whole time range, from sub-ns and ns to the so-called ’slow dynamic’ behavior arising from thermally excited decay of metastable states over timescales from 10-100 s and upwards. It is inconceivable that the LD technique can be used over the whole timescale, and therefore a truly time-quantified MC technique is necessary in order to allow calculations over the longer timescales of physical interest. Here we propose a technique for the quantification of the MC timestep and give a supporting argument developed from the fluctuation dissipation theorem. This argument results in a theoretical expression for the timestep in terms of the size of the MC move, and also gives the validity criterion that the MC timestep must be much longer than the precession time. Comparison with an analytical formula for relaxation in the intermediate to high damping limit is used to verify the theoretically predicted relationship between the timestep and the size of the MC move. This represents an important first step in the process of deriving a theoretical formalism for time quantified MC calculations of strongly interacting spin systems.
We consider an ensemble of isolated single-domain particles where each particle is represented by a magnetic moment with energy
$$E(\underset{¯}{S})=-dVS_z^2-\mu _s\underset{¯}{B}\cdot \underset{¯}{S},$$
(1)
where $`\underset{¯}{S}=\underset{¯}{\mu }/\mu _s`$ is the magnetic moment of unit length, $`\underset{¯}{B}=B_x\underset{¯}{\overset{^}{x}}+B_z\underset{¯}{\overset{^}{z}}`$ represents a magnetic field at an arbitrary angle $`\psi `$ to the easy axis of the system, $`d`$ is the uniaxial anisotropy energy density and $`V`$ the volume of the particle. Throughout the article we use the material parameters $`V=8\times 10^{-24}\text{m}^3`$, $`d=4.2\times 10^5\text{J/m}^3`$, and magnetic moment $`\mu _s=1.12\times 10^{-17}\text{J/T}`$.
The LLG equation of motion with LD is
$$\underset{¯}{\overset{\dot{}}{S}}=-\frac{\gamma }{(1+\alpha ^2)\mu _s}\underset{¯}{S}\times \left(\underset{¯}{H}(t)+\alpha \underset{¯}{S}\times \underset{¯}{H}(t)\right),$$
(2)
where $`\gamma =1.76\times 10^{11}\text{(Ts)}^{-1}`$ is the gyromagnetic ratio, $`\underset{¯}{H}(t)=\underset{¯}{\zeta }(t)-\frac{\partial E}{\partial \underset{¯}{S}}`$, and $`\zeta `$ is the thermal noise with $`\langle \zeta _i(t)\rangle =0`$ and $`\langle \zeta _i(t)\zeta _j(t^{})\rangle =\delta _{ij}\delta (t-t^{})2\alpha k_BT\mu _s/\gamma `$. $`i`$ and $`j`$ denote the cartesian components.
The equation above is solved numerically using the Heun method . It is also possible to obtain analytic asymptotic solutions for the escape rate, which have been extensively compared with the exact numerical solutions of the corresponding matrix form of the FP equation for a wide range of parameters and non-axially symmetric potentials .
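For concreteness, a minimal Python sketch of one Heun step for Eq. (2), with the energy of Eq. (1), is given below; this is our own illustrative implementation (variable names and the per-step renormalization of $`\underset{¯}{S}`$ are our choices), not the original code:

```python
import numpy as np

GAMMA = 1.76e11         # gyromagnetic ratio, 1/(T s)
MU_S  = 1.12e-17        # magnetic moment, J/T
DV    = 4.2e5 * 8e-24   # anisotropy energy d*V, J
ALPHA = 1.0             # damping constant
KB    = 1.381e-23       # J/K

def total_field(S, B, zeta):
    """H = zeta - dE/dS for E = -dV S_z^2 - mu_s B.S (units of energy)."""
    H = MU_S * B + zeta
    H[2] += 2.0 * DV * S[2]
    return H

def llg_rhs(S, H):
    """Deterministic right-hand side of the LLG equation."""
    pref = -GAMMA / ((1.0 + ALPHA**2) * MU_S)
    return pref * (np.cross(S, H) + ALPHA * np.cross(S, np.cross(S, H)))

def heun_step(S, B, T, dt, rng):
    """One Heun step; the thermal field is held fixed during the step."""
    sigma = np.sqrt(2.0 * ALPHA * KB * T * MU_S / (GAMMA * dt))
    zeta = sigma * rng.standard_normal(3)
    f0 = llg_rhs(S, total_field(S, B, zeta))
    Sp = S + dt * f0                      # Euler predictor
    f1 = llg_rhs(Sp, total_field(Sp, B, zeta))
    Sn = S + 0.5 * dt * (f0 + f1)         # trapezoidal corrector
    return Sn / np.linalg.norm(Sn)        # keep |S| = 1
```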
Both of our simulations, MC as well as LD, start with the magnetic moments in the $`z`$-direction. The magnetic field has a negative $`z`$-component, so that the magnetization will reverse after some time. The time that is needed for the $`z`$-component of the magnetization to change its sign, averaged over a large number of runs ($`N=1000`$), is the characteristic time $`\tau `$, which corresponds to the inverse of the escape rate following from exact numerical solutions of the corresponding FP equation.
For the MC simulations we use a heat-bath algorithm. The trial step of our MC algorithm is a random movement of the magnetic moment within a cone of a given size. In order to achieve this efficiently we construct a random vector with constant probability distribution within a sphere of radius $`R`$. This random vector is added to the initial moment, and subsequently the resulting vector is normalized, as sketched below.
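A minimal sketch of this trial move together with the heat-bath acceptance rule used below (our illustrative version, assuming an `energy(S)` function implementing Eq. 1) reads:

```python
import numpy as np

def mc_step(S, R, energy, T, rng, kB=1.381e-23):
    """One MC step: move S within a cone set by R, accept via heat bath."""
    # rejection-sample a vector with uniform density inside a sphere of radius R
    while True:
        v = rng.uniform(-R, R, size=3)
        if v @ v <= R * R:
            break
    S_trial = S + v
    S_trial /= np.linalg.norm(S_trial)    # back to unit length
    dE = energy(S_trial) - energy(S)
    if rng.random() < 1.0 / (1.0 + np.exp(dE / (kB * T))):
        return S_trial                    # accepted
    return S                              # rejected
```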
The size of the cone $`R`$ of our algorithm influences the time scale the method simulates. We investigate the influence of $`R`$ on our MC algorithm by varying $`R`$ and calculating $`\tau `$. As usual in a MC procedure, the time is measured in Monte Carlo steps (MCS). For our calculation we use a field of $`|\underset{¯}{B}|=0.2`$ T at an angle of $`\psi =27^{\circ }`$ to the easy axis. The resulting energy barrier is $`\mathrm{\Delta }E=8.2\times 10^{-19}`$ J; the temperature we chose for Fig. 1 is $`\mathrm{\Delta }E/k_BT=3.3`$. As Fig. 1 demonstrates, $`\tau \propto R^{-2}`$. This dependence can be understood by considering the moments as performing a random walk where $`R`$ is proportional to the mean step width. Having understood that the MC time scale can be set by choosing an appropriate step width, we now search for a relation for $`R`$ such that one MCS corresponds to a real-time interval, in the sense of LD.
MC methods calculate trajectories in phase space following a master equation which describes the coupling of a system to the heat bath. Hence, only the irreversible part of the dynamics of the system is considered: there is no precession of the moments, since no equation of motion is solved during the simulation. Nevertheless, in the following we will argue that exact knowledge of the movement of the single moments is not necessary in order to describe the effects of thermal activation in an ensemble of systems under the following conditions: (i) the relevant time scales are larger than the precession time $`t_p`$ of the moments; (ii) we consider the high damping limit of the LLG equation, where the energy dissipation during one cycle of the precession is so large that the system relaxes (to the local energy minimum) on the same time scale, $`t_r\simeq t_p`$.
In Fig. 2 we present the time evolution of our system in phase space, $`(S_x,S_y)`$, following from a simulation of the LLG equation for high damping, $`\alpha =1`$. We use $`\mathrm{\Delta }E/k_BT=8.2`$, a rather low temperature, so that the characteristic time $`\tau `$ for the escape from the local energy minimum is of the order of $`10^{-6}`$ s (see also Fig. 3). The spin-precession time is $`t_p=9\times 10^{-11}`$ s here. The simulation starts close to the local energy minimum with $`S_x=S_y=0,S_z=1`$, and the solid line shows the trajectory of one moment over a time interval of $`\mathrm{\Delta }t=t_p`$. The 20 points are the positions of an ensemble of 20 moments after the same time. As one can see, the moments show no significant precession (the precession of an undisturbed moment, i.e. without relaxation and fluctuations, is indicated by the circle around the energy minimum at $`S_y=0,S_x\approx 0.22`$). The small dots represent 1000 states of the ensemble for $`t<6\times t_p`$. Altogether, Fig. 2 demonstrates that in the high damping case the moments are uncorrelated and the ensemble reaches a local equilibrium configuration already after time periods of only a few $`t_p`$ (remember that the time scale for leaving this local equilibrium is much larger here, so that Fig. 2 shows only the local short-time equilibration, not the escape from the local energy minimum).
We will show that this high-damping scenario can also be simulated by a MC simulation, and we will now derive a relation for $`R`$ in order to quantify the MC time step. The idea is to compare the fluctuations which are established in the MC technique within one MCS with the fluctuations within a given time interval of the linearized LLG equation. Close to a local energy minimum one can write the energy, given that the first order terms vanish, as
$$E\simeq E_0+\frac{1}{2}\underset{i,j}{\sum }A_{ij}S_iS_j,$$
(3)
where the $`S_i`$ are the variables representing small deviations from equilibrium. In our system, for $`B_x=0`$ we find the equilibrium along the $`z`$ axis, leading to variables $`S_x`$ and $`S_y`$. The energy increase $`\mathrm{\Delta }E`$ associated with fluctuations in $`S_x`$ and $`S_y`$ is $`\mathrm{\Delta }E\simeq \frac{1}{2}(A_{xx}S_x^2+A_{yy}S_y^2)`$, with $`A_{xx}=A_{yy}=2dV+\mu _sB_z`$. Rewriting the LLG equation in the linearized form, $`\dot{S}_x=L_{xx}S_x+L_{xy}S_y`$, $`\dot{S}_y=L_{yx}S_x+L_{yy}S_y`$, it has been shown that the correlation function takes the form
$$\langle S_i(t)S_j(t^{})\rangle =\mu _{ij}\delta _{i,j}\delta (t-t^{}).$$
(4)
Dirac’s $`\delta `$ function is here an approximation for exponentially decaying correlations on time scales $`t-t^{}`$ that are much larger than the time scale $`t_r`$ of the exponential decay. The covariance matrix $`\mu _{ij}`$ can be calculated from the system matrices $`𝐀`$ and $`𝐋`$ as $`\mu _{ij}=k_BT(L_{ik}A_{kj}^{-1}+L_{jk}A_{ki}^{-1})`$. For our problem a short calculation yields $`\mu _{xx}=\mu _{yy}=2k_BT\frac{\alpha \gamma }{(1+\alpha ^2)\mu _s}`$. Integrating the fluctuating magnetisation $`S_x(t)`$ over a finite time interval $`\mathrm{\Delta }t`$, Eq. 4 takes the form
$$\overline{S}_x^2=\mu _{xx}\mathrm{\Delta }t=2k_BT\frac{\alpha \gamma }{(1+\alpha ^2)\mu _s}\mathrm{\Delta }t,$$
(5)
representing the fluctuations of $`S_x`$ averaged over a time interval $`\mathrm{\Delta }t`$.
Next, we calculate the fluctuations $`\langle S_x^2\rangle `$ during one MCS of a MC simulation. This is possible if we assume that all magnetic moments are initially in their equilibrium position. For our MC algorithm described above, the probability distribution for trial steps with step width $`r=\sqrt{S_x^2+S_y^2}`$ is $`p_\text{t}=3\sqrt{R^2-r^2}/(2\pi R^3)`$. The acceptance probability within a heat-bath algorithm is $`p_\text{a}(r)=1/(1+\mathrm{exp}(\mathrm{\Delta }E(r^2)/k_BT))`$, where $`\mathrm{\Delta }E(r^2)`$ can be taken from Eq. 3. Hence, for the fluctuations within one MC step we obtain:
$$\langle S_x^2\rangle =2\pi \int _0^Rr\text{d}r\frac{r^2}{2}p_\text{t}(r)p_\text{a}(r)=\frac{R^2}{10}+𝒪(R^4)$$
(6)
where the last line is an expansion for small $`R`$. By equalizing the fluctuations within corresponding time intervals we find the relation
$$R^2=\frac{20k_BT\alpha \gamma }{(1+\alpha ^2)\mu _s}\mathrm{\Delta }t.$$
(7)
Note, from our derivation above it follows that one time step $`\mathrm{\Delta }t`$ must be larger than the intrinsic time scale $`t_r`$ of the relaxation. This means - as already mentioned above - that the Monte Carlo method can only work on time scales that are much larger than any microscopic time scale of a precession or relaxation (to local equilibrium) of the moment.
In principle, Eq. 7 makes it possible to choose the trial step for a MC simulation in such a way that 1 MCS corresponds to a given real time interval, say $`\mathrm{\Delta }t=10^{-12}`$ s. However, there are of course restrictions on possible values of $`R`$, like $`R<1`$. Also, $`R`$ should not be too small, since then a Monte Carlo algorithm is inefficient. Therefore, either one has to choose a value for $`\mathrm{\Delta }t`$ such that $`R`$ takes on reasonable values (these $`\mathrm{\Delta }t`$ will usually be of the order of $`10^{-12}`$ s), or one uses a reasonable constant value for $`R`$, say 0.1, and uses Eq. 7 to calculate $`\mathrm{\Delta }t`$ as the real time interval corresponding to 1 MCS. In the following we use the first method, since it turns out to be very efficient to change $`R`$ with temperature. However, we have confirmed that the other method yields the same results.
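A time-quantified step width following Eq. (7) can then be computed as in the sketch below (our own illustration; parameter values as quoted above):

```python
import numpy as np

def trial_radius(dt, T, alpha=1.0, gamma=1.76e11, mu_s=1.12e-17,
                 kB=1.381e-23):
    """Trial-step radius R such that one MCS corresponds to the time dt."""
    R2 = 20.0 * kB * T * alpha * gamma * dt / ((1.0 + alpha**2) * mu_s)
    return np.sqrt(R2)

# With dt of the order of 1/gamma ~ 6e-12 s, R is recomputed for every
# temperature and should remain well below 1 for the mapping to hold.
```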
To test the validity of our considerations we performed MC simulations with an algorithm using a trial step according to Eq. 7 with $`\mathrm{\Delta }t\approx 6\times 10^{-12}`$ s (the inverse value of $`\gamma `$; in other words, the time in the LLG equation is rescaled by $`\gamma `$). For Fig. 3 we set $`\alpha =1`$ and compare the data for $`\tau (T)`$ following from our MC simulation with results from LD simulations and with the intermediate-to-high damping (IHD) asymptote , namely
$$\tau =\frac{2\pi \omega _0}{\mathrm{\Omega }_0\omega _2}e^{\beta (V_0-V_2)}=\frac{2\pi \omega _0}{\mathrm{\Omega }_0\omega _2}e^{\mathrm{\Delta }E/k_BT},$$
(8)
where $`\omega _0`$ and $`\mathrm{\Omega }_0`$ are the saddle and damped saddle angular frequencies, which have been defined explicitly in Eqs. (21) and (22) of Ref. . $`\omega _2`$ is the well angular frequency for the deeper of the two potential wells and is defined in Eq. (20) of Ref. . All have been defined in terms of the coefficients of the truncated Taylor series representation of the energy equation described in detail in section V of Ref. (particularly Eqs. 59-64). For the purpose of comparison with MC and LD simulations, we consider one escape path only, $`e^{\beta (V_0-V_2)}`$, where $`\beta =V/k_BT`$ and $`V_0-V_2`$ is the energy described by Eq. (62) of Ref. . For our purposes, $`\beta (V_0-V_2)`$ may be represented by $`\mathrm{\Delta }E/k_BT`$. The validity condition for the IHD formula is $`\alpha \mathrm{\Delta }E/k_BT\gg 1`$ with $`\mathrm{\Delta }E/k_BT>1`$, which has been satisfied in all cases presented here.
From Fig. 3 it is clear that the LD data agree with the above asymptote. For higher temperatures the asymptote is no longer appropriate. Here, the numerical data for $`\tau `$ tend to zero for $`T\to \infty `$, as one expects. The MC data deviate slightly and are roughly 10% larger. However, considering the fact that to the best of our knowledge this is the first comparison of a “real-time MC simulation” with LD simulations and asymptotic formulae, the agreement is remarkable, especially taking into account the simple form of Eq. 7 underlying our algorithm and also that there is no adjusted parameter in any of our calculations and formulae.
Since we expect that our MC procedure leads to a high damping limit we also tested the $`\alpha `$-dependence of $`\tau `$. Fig. 4 shows the corresponding data for the same parameter values as before and $`\mathrm{\Delta }E/T=3.3`$. The figure shows that the MC data converge to the IHD formula and to the data from LD simulation for large $`\alpha `$. Even the small 10% deviation of the MC data mentioned before (Fig. 3) vanishes in the limit of larger $`\alpha `$.
To summarize, we discussed the conditions under which a comparison of LD with a MC process appears to be possible. Considering a simple system of isolated single-domain particles, we derived an equation for the trial step width of the MC process so that one step of the MC algorithm can be related to a certain time interval. Testing this algorithm we found excellent agreement with data from LD simulations as well as with intermediate-to-high damping asymptotes for the characteristic times of the magnetisation reversal. Even though our algorithm was derived only for the special system considered here, we believe that the arguments we brought forward may be the foundation for the MC simulation of more complicated systems, especially systems consisting of interacting magnetic moments.
###### Acknowledgements.
We would like to thank W. T. Coffey and K. D. Usadel for helpful discussions. E. C. Kennedy thanks EPSRC for financial support (GR/L06225). R. W. Chantrell thanks EPSRC for financial support (ref GR/M24011). This work was done within the framework of the COST action P3 working group 4.
# High real-space resolution measurement of the local structure of Ga1-xInxAs using x-ray diffraction
## Abstract
High real-space resolution atomic pair distribution functions (PDFs) from the alloy series Ga<sub>1-x</sub>In<sub>x</sub>As have been obtained using high-energy x-ray diffraction. The first peak in the PDF is resolved as a doublet due to the presence of two nearest neighbor bond lengths, Ga-As and In-As, as previously observed using XAFS. The widths of nearest, and higher, neighbor pairs are analyzed by separating the strain broadening from the thermal motion. The strain broadening is five times larger for distant atomic neighbors than for nearest neighbors. The results are in agreement with model calculations.
The average atomic arrangement of crystalline semiconductor alloys is usually obtained from the positions and intensities of the Bragg peaks in a diffraction experiment , and the actual nearest neighbor and sometimes next nearest neighbor distances for various pairs of atoms by XAFS measurements . In this Letter we show how high energy x-ray diffraction and the resulting high-resolution atomic pair distribution functions (PDFs) can be used for studying the internal strain in Ga<sub>1-x</sub>In<sub>x</sub>As alloys. We show that the first peak in the PDFs can be resolved as a doublet and, hence, the mean positions and also the widths of the Ga-As and In-As bond length distributions determined. The detailed structure in the PDF can be followed out to very large distances and the widths of the various peaks obtained. We use the concentration dependence of the peak widths to separate the strain broadening from the thermal broadening. At large distances the strain broadening is shown to be about five times larger than for nearest neighbor pairs. Using a simple valence force field model, we obtain good agreement with the experimental results.
Ternary semiconductor alloys, in particular Ga<sub>1-x</sub>In<sub>x</sub>As, have technological significance because they allow important properties, such as band-gaps, to be tuned continuously between the two end-points by varying the composition $`x`$. Surprisingly, there is no complete experimental determination of the microscopically strained structure of these alloys. On average, both GaAs and InAs form in the zinc-blende structure where Ga or In and As atoms occupy two inter-penetrating face-centered-cubic lattices and are tetrahedrally coordinated to each other . However, both extended x-ray absorption fine structure (XAFS) experiments and theory have shown that Ga-As and In-As bonds do not take some average value but remain close to their natural lengths of $`L_{\mathrm{Ga}\mathrm{As}}^o=2.437`$ Å and $`L_{\mathrm{In}\mathrm{As}}^o=2.610`$ Å in the alloy. Due to the two considerably different bond lengths present, the zinc-blende structure of Ga<sub>1-x</sub>In<sub>x</sub>As alloys becomes locally distorted. A number of authors have proposed distorted local structures but there has been limited experimental data available to date. The fully distorted structure is a prerequisite as an input for accurate band structure and phonon dispersion calculations .
The technique of choice for studying the local structure of semiconductor alloys has been XAFS . However, XAFS provides information only about the immediate atomic ordering (first and sometimes second coordination shells) and all longer-ranged structural features remain hidden. To remedy this shortcoming we have taken the alternative experimental approach of obtaining high-resolution PDFs of these alloys from high energy x-ray diffraction data.
The PDF is the instantaneous atomic density-density correlation function which describes the local arrangement of atoms in a material . It is the sine Fourier transform of the experimentally observable total structure function obtained from powder diffraction measurements. PDF analysis yields the real local structure, whereas an analysis of the Bragg scattering alone yields the average crystal structure. Determining the PDF has long been the approach of choice for characterizing glasses, liquids and amorphous materials . However, its widespread application to study crystalline materials has been relatively recent . Very high real-space resolution is required to differentiate the distinct Ga-As and In-As bond lengths present in Ga<sub>1-x</sub>In<sub>x</sub>As. High real-space resolution is obtained by measuring the structure function, $`S(Q)`$ ($`Q`$ is the amplitude of the wave vector), to a very high value of $`Q`$ ($`Q_{\mathrm{max}}\gtrsim 40`$ Å<sup>-1</sup>). An indium neutron absorption resonance rules out neutron measurements in the Ga<sub>1-x</sub>In<sub>x</sub>As system. We therefore carried out x-ray powder diffraction measurements. To access $`Q`$ values in the vicinity of 40-50 Å<sup>-1</sup> it is necessary to use x-rays with energies $`\gtrsim 50`$ keV. The experiments were carried out at the A2 56 pole wiggler beamline at the Cornell High Energy Synchrotron Source (CHESS), which is capable of delivering intense x-rays of energy 60 keV. Six powder samples of Ga<sub>1-x</sub>In<sub>x</sub>As, with $`x=0.0`$, 0.17, 0.5, 0.67, 0.83 and 1.0, were measured. The samples were made by standard methods and the details of the sample preparation will be reported elsewhere . All measurements were done in symmetric transmission geometry at 10 K. Low temperature was used to minimize thermal vibrations in the samples, and hence to increase the sensitivity to static atomic displacement amplitudes. A double crystal Si(111) monochromator was used. Scattered radiation was collected with an intrinsic germanium detector connected to a multi-channel analyzer. The elastic component was separated from the incoherent Compton scattering before data analysis . Several diffraction runs were conducted with each sample and the resulting spectra averaged to improve the statistical accuracy. The data were normalized for flux, corrected for background scattering, detector deadtime and absorption, and divided by the average form factor to obtain the total structure factor, $`S(Q)`$ . Details of the data processing are described elsewhere . Correction procedures were done using the program RAD . Experimental reduced structure factors, $`F(Q)=Q[S(Q)-1]`$, are shown in Fig. 1.
The corresponding reduced PDFs, $`G(r)`$, obtained through a Fourier transform
$$G(r)=\frac{2}{\pi }\int _0^{Q_{max}}F(Q)\mathrm{sin}Qr\text{d}Q$$
(1)
are shown in Fig. 2.
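Numerically, the transform of Eq. (1) is a straightforward quadrature; the following sketch (our own illustrative addition, using a simple trapezoidal rule) shows the essential step:

```python
import numpy as np

def reduced_pdf(Q, FQ, r):
    """G(r) = (2/pi) * integral_0^{Qmax} F(Q) sin(Qr) dQ  (Eq. 1).

    Q  : wave-vector grid in 1/Angstrom (0 ... Qmax, here Qmax = 45)
    FQ : reduced structure factor F(Q) = Q[S(Q) - 1] on that grid
    r  : real-space grid in Angstrom
    """
    Q, FQ, r = map(np.asarray, (Q, FQ, r))
    integrand = FQ[None, :] * np.sin(Q[None, :] * r[:, None])
    return (2.0 / np.pi) * np.trapz(integrand, Q, axis=1)
```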
The data for the Fourier transform were terminated at $`Q_{max}=45`$ Å<sup>-1</sup>, beyond which the signal to noise ratio became unfavorable. This is a very high momentum transfer for x-ray diffraction measurements; for comparison, $`Q_{max}`$ for a Cu K<sub>α</sub> x-ray tube is less than 8 Å<sup>-1</sup>.
Significant Bragg scattering (well-defined peaks) is immediately evident in Fig. 1 up to $`Q\approx 40`$ Å<sup>-1</sup> in the end-members, GaAs and InAs. This implies that the samples have long range order and that there is little positional disorder (dynamic or static) on the atomic scale. The Bragg peaks disappear at much lower $`Q`$-values in the alloy data: the samples are still long-range ordered but they have significant local positional disorder. At high-$`Q`$ values, oscillating diffuse scattering is evident. This has a period of $`2\pi /2.5`$ Å<sup>-1</sup> and contains information about the shortest atomic distances in Ga<sub>1-x</sub>In<sub>x</sub>As alloys, seen as a sharp first peak in $`G(r)`$ at 2.5 Å (see Fig. 2). In the alloys, this peak is split into a doublet, as is clearly evident in Fig. 2, with a shorter Ga-As bond and a longer In-As bond. This peak is shown on an expanded scale in the inset to Fig. 3 for all the compositions measured. We determined the positions of the two subcomponents of the first PDF peak, i.e. the mean Ga-As and In-As bond lengths, and the results are shown in Fig. 3. Also shown is the room temperature result previously obtained in the XAFS study of Mikkelsen and Boyce . There is clearly good agreement. The PDF-based bond lengths are shifted to smaller values by about 0.012 Å since our data were measured at 10 K, whereas the XAFS experiments were at room temperature.
The nearest neighbor peak is the only peak which is sharp in the experimental PDFs, as can be seen in Fig. 2. From the second neighbor onwards, the significant strain in the alloy samples results in broad atom-pair distributions without any resolvable splitting. Model calculations show that this broadening is intrinsic and not due to any experimental limitations. The strain in Ga<sub>1-x</sub>In<sub>x</sub>As was quantified by fitting the individual peaks in the experimental PDFs. We used Gaussians convoluted with sinc functions, which account for the experimental resolution coming from the finite $`Q_{max}`$. The FWHM of the resolution function is 0.086 Å. This is significant for the near-neighbor peaks, as shown in Fig. 3, but is much smaller than the width of the high-$`r`$ peaks. The high-$`r`$ peaks are fit using the PDFFIT modeling program , assuming the virtual crystal zinc-blende structure and refining displacement parameters. The resulting mean-square Gaussian standard deviations are shown in Fig. 4. One can see that the Ga-As and In-As bond lengths are sharply peaked about their mean values, whereas the static strain on more distant neighbors is five times larger than on the near neighbors. The strain peaks at a composition $`x`$=0.5 and affects the common (As) sublattice more than the mixed (metal) sublattice.
In order to better understand these results, we have modeled the static and thermal disorder in the alloy by using a Kirkwood potential . The key element in this potential is the central force term that connects nearest neighbor atoms and tends to keep each bond at its natural (unstrained) length. The potential contains nearest neighbor bond stretching force constants $`\alpha `$ and force constants $`\beta `$ that couple to the change in the angle between adjacent nearest neighbor bonds. We choose these parameters to fit the end members, with $`\alpha _{\mathrm{Ga}\mathrm{As}}`$ = 96 N/m, $`\alpha _{\mathrm{In}\mathrm{As}}`$ = 97 N/m, $`\beta _{\mathrm{Ga}\mathrm{As}\mathrm{Ga}}`$ = $`\beta _{\mathrm{As}\mathrm{Ga}\mathrm{As}}`$ = 10 N/m and $`\beta _{\mathrm{In}\mathrm{As}\mathrm{In}}`$ = $`\beta _{\mathrm{As}\mathrm{In}\mathrm{As}}`$ = 6 N/m. The additional angular force constants required in the alloy are taken to be the geometric mean, so that $`\beta _{\mathrm{Ga}\mathrm{As}\mathrm{In}}=\sqrt{\beta _{\mathrm{Ga}\mathrm{As}\mathrm{Ga}}\beta _{\mathrm{In}\mathrm{As}\mathrm{In}}}`$. We have constructed a series of cubic 512 atom periodic supercells in which the Ga and In atoms are distributed randomly according to the composition $`x`$. The system is relaxed using the Kirkwood potential to find the displacements from the virtual crystal positions. The volume of the supercell is also adjusted to find the minimum energy. Using this strained static structure, a dynamical matrix has been constructed and the eigenvalues and eigenvectors found numerically. From this the Debye-Waller factors for all the individual atoms in the supercell can be found, and hence the PDF of the model, by including the Gaussian broadening of all the subpeaks. We have shown previously that this is the correct procedure within the harmonic approximation. The model-PDF is plotted with the data in the inset to Fig. 2 and in Fig. 5. The agreement at higher $`r`$ is comparable to that in the $`r`$-range shown. All the peaks shown in the figures consist of many Gaussian subpeaks. The overall fit to the experimental $`G(r)`$ is excellent, and the small discrepancies in Fig. 5 between theory and experiment are probably due to small residual experimental errors. Note that in comparing with experiment, the theoretical PDF has been convoluted with a sinc function to incorporate the truncation of the experimental data at $`Q_{max}=45`$ Å<sup>-1</sup>. The technique discussed above could be extended using a better force constant model with more parameters, but this does not seem necessary at this time.
The thermal and strain contributions to the widths of the individual peaks in the reduced PDF act independently, as expected and as confirmed by our supercell calculations described in the previous paragraph. We therefore expect the squared width $`\mathrm{\Delta }^2`$ to be a sum of the two parts. The thermal part $`\sigma `$ is almost independent of the concentration, and we fit $`\sigma ^2`$ by a linear function of the composition $`x`$ between the two end points in Fig. 4. To better understand the strain model it is convenient to assume that all the force constants are the same and independent of chemical species. Then it can be shown for any such model that
$$\mathrm{\Delta }_{ij}^2=\sigma _{ij}^2+A_{ij}x(1-x)(L_{\mathrm{In}\mathrm{As}}^o-L_{\mathrm{Ga}\mathrm{As}}^o)^2$$
(2)
where the subscripts $`ij`$ refer to the two atoms that lead to a given peak in the reduced PDF. For the Kirkwood model the $`A_{ij}`$ are functions of the ratio of force constants $`\beta /\alpha `$ only. It further turns out that the $`A_{ij}`$ are independent of whether a site on one sublattice is Ga or In, so we will just refer to it as the metal site. Taking mean values of the force constants used in the simulation we find that $`\beta /\alpha `$ = 0.083, and that for nearest neighbor pairs $`A_{ij}`$ = 0.0712. For more distant pairs the motion of the two atoms becomes incoherent so that $`A_{ij}=A_i+A_j`$, and we find that for the metal site $`A_i=0.375`$ and for the As site $`A_i=1.134`$. The validity of the approximation of using mean values for the force constants was confirmed by calculating the model-PDF for all compositions as described above and comparing to the prediction of Eq. (2) . Equation 2 shows good agreement with the data for near and far neighbor PDF peaks, and for the different sublattices, over the whole alloy range, as shown in Fig. 4, using only parameters taken from fits to the end-members. There is a considerably larger width associated with the As-As peak in Fig. 4 when compared to the Me-Me peak, because the As atom is surrounded by four metal cations, providing five distinct first-neighbor environments . The theoretical curve in the lower panel of Fig. 4 is predicted to be the same for the Ga-As and In-As bond length distributions, using the simplified approach. The Kirkwood model seems adequate to describe the experimental data at this time, although further refinement of the error bars may require the use of a better potential containing more parameters.
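With the coefficients quoted above, Eq. (2) is easy to evaluate; the sketch below (our illustrative addition) returns the predicted strain part of the squared peak width:

```python
L_GAAS, L_INAS = 2.437, 2.610            # natural bond lengths, Angstrom

A_NN = 0.0712                            # nearest-neighbor pairs
A_SITE = {"metal": 0.375, "As": 1.134}   # distant pairs: A_ij = A_i + A_j

def strain_width_sq(x, pair="nn"):
    """Strain term of Eq. (2): A_ij x (1 - x) (L_InAs - L_GaAs)^2.

    pair: "nn" for nearest neighbors, or a tuple such as ("As", "As")
    or ("metal", "As") for distant pairs. The thermal part sigma^2,
    interpolated linearly between the end members, is added separately.
    """
    if pair == "nn":
        A = A_NN
    else:
        A = A_SITE[pair[0]] + A_SITE[pair[1]]
    return A * x * (1.0 - x) * (L_INAS - L_GAAS) ** 2

# The strain broadening peaks at x = 0.5 and is largest for As-As pairs.
```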
In summary, we report for the first time a high-real-space-resolution measurement of the PDF of Ga<sub>1-x</sub>In<sub>x</sub>As ($`0<x<1`$) alloys. The PDF allows the local distortions away from the average structure to be determined over a wide range of $`r`$ in disordered crystals such as these. The nearest-neighbor Ga-As and In-As bond lengths in the alloys are clearly resolved. Significantly greater disorder exists in the more distant neighbor length distributions in the alloys. The experimental results are well fit over a wide range of $`r`$ using a Kirkwood model. Because the agreement between theory and experiment is good at both short and large distances, the Kirkwood model can be used with some confidence to generate strained alloy structures for use in the calculation of electronic band structures and phonon dispersion curves.
We would like to thank Rosa Barabash for discussions and help with the analysis of the static strains and Andrea Perez and the support staff at CHESS for help with data collection and analysis. This work was supported by DOE through grant DE FG02 97ER45651. CHESS is operated by NSF through grant DMR97-13424.
# Enhanced Pulse Propagation in Non-Linear Arrays of Oscillators
## I Introduction
In recent years there has been a great deal of interest in the interplay of nonlinearity and applied forcing (deterministic and/or stochastic) in the stationary and transport properties of discrete spatially extended systems . The ability of discrete anharmonic arrays to localize and propagate energy in a persistent fashion, and the fact that noise may act (sometimes against one’s intuition) to enhance these properties, has led to particularly intense activity . Interesting noise-induced phenomena include stochastic resonance , noise-induced phase transitions , noise-induced front propagation , and array-enhanced stochastic resonance .
Our interests in this area have been motivated by the relative dearth of information concerning the effects of a thermal environment on the sometimes exquisite balances that are required to achieve these interesting resonances and persistences . At the same time, we have also noted that most of the literature has concentrated on overdamped arrays (often motivated by mathematical or computational constraints rather than physical considerations), a restriction that leaves out important inertial effects and that is easily overcome.
Perhaps the simplest generic discrete arrays in which to analyze these issues are systems of oscillators consisting of masses that may be subject to local monostable potentials (harmonic or anharmonic) and nearest neighbor monostable interactions (harmonic or anharmonic) (other generic arrays of current interest are bistable units linearly or nonlinearly connected to one another). These are the systems of choice in our work, and we have separated our inquiries into three distinct groups of questions: 1) The study of such arrays in thermal equilibrium . The questions here concern the spatial and temporal “energy landscape” that determines the degree of spontaneous energy localization due to thermal fluctuations and the temporal persistence of high or low energy regions; 2) The study of the propagation of a persistent signal applied at one end of the array . The questions here concern the signal-to-noise ratio and distance of signal propagation; 3) The study of the propagation of an initial $`\delta `$-function energy pulse (this work). The questions here concern the velocity of propagation and the dispersion of such a pulse.
It is useful and relevant to provide a very brief summary of our conclusions on the first two sets of questions. Our work on equilibrium energy landscapes was based on chains of harmonically coupled oscillators subject to a local potential that may be anharmonic. Each oscillator is connected to a heat bath at temperature $`T`$. We analyzed the thermal fluctuations and their persistence as influenced by the local potential (we compared hard, harmonic, and soft potentials), the strength of the harmonic coupling between the oscillators, the strength of the dissipative force connecting each mass to the heat bath, and the temperature. Among our conclusions are the following: 1) An increase in temperature in weakly coupled soft chains leads not only to greater energy fluctuations but also to a slower decay of these fluctuations; 2) An increase in temperature in weakly dissipative hard chains leads not only to greater energy fluctuations but also to a slower decay of these fluctuations; 3) High-energy-fluctuation mobility in harmonically coupled nonlinear chains in thermal equilibrium does not occur beyond that which is observed in a completely harmonic chain.
However, we noted earlier that interest in energy localization in perfect arrays, as contrasted with localization induced by disorder, arises in part because localized energy in these systems may be mobile. Dispersionless or very slowly dispersive mobility would make it possible for localized energy to reach a predetermined location where it can participate in a physical or chemical event. Our results raised the possibility of observing such localized mobility if the anharmonicity lies in the interoscillator interactions rather than (or in addition to) the local potentials. We ascertained that a persistent sinusoidal force applied to one site of a chain of masses connected by anharmonic springs may indeed propagate along the chain . Furthermore, we demonstrated a set of resonance phenomena that we have called thermal resonances because they involve optimization via temperature control. In particular, these results establish the existence of optimal finite temperatures for the enhancement of the signal-to-noise ratio at any site along the chain, and of an optimal temperature for maximal distance of propagation along the chain. These resonances differ from the usual noise-enhanced propagation where the noise is external and/or the system is overdamped.
This work addresses the third set of questions posed above concerning the way in which a nonequilibrium initial condition in the form of an energy pulse propagates as the system relaxes toward equilibrium. More specifically, we investigate the motion and dispersion of such an energy pulse and the effects of finite temperatures on pulse propagation. In view of our earlier results on thermal resonances, perhaps the most interesting question to be asked at this point is this: Is it possible to enhance pulse propagation via temperature control?
In order to monitor the evolution of the nonequilibrium initial condition it is useful to partition the Hamiltonian as
$$H=\underset{n}{}E_n$$
(1)
where $`E_n`$ contains the kinetic energy of site $`n`$ and an appropriate portion of the potential energy of interaction with its nearest neighbors (1/2 in one dimension, 1/4 in two dimensions). In one dimension
$$E_n=\frac{p_n^2}{2}+\frac{1}{2}V(x_{n+1},x_n)+\frac{1}{2}V(x_n,x_{n1}).$$
(2)
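As a concrete illustration of this bookkeeping, the local energies of Eq. (2) are straightforward to evaluate for a periodic chain. The following Python sketch is our own illustration, not code from the original study; the function names and the NumPy vectorization are our choices, and `V` may be any pair potential of the relative displacement:

```python
import numpy as np

def local_energies(x, p, V):
    """E_n = p_n^2/2 + V(y_right)/2 + V(y_left)/2, as in Eq. (2),
    for a periodic chain; V acts on the relative displacement y."""
    y_right = np.roll(x, -1) - x   # x_{n+1} - x_n
    y_left = x - np.roll(x, 1)     # x_n - x_{n-1}
    return 0.5 * p**2 + 0.5 * V(y_right) + 0.5 * V(y_left)
```

Summing the returned array reproduces the total Hamiltonian of Eq. (1), which provides a useful conservation check in microcanonical runs.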
In Section II the potentials considered in this paper are briefly presented. Section III contains our analysis and results for one-dimensional oscillator chains. Here we discuss ways to characterize the mobility and dispersion of an initial localized impulse, and compare the behaviors of harmonic, hard anharmonic, and soft anharmonic chains. In Section IV we present some results for isolated two-dimensional arrays and note some interesting geometric features with perhaps unanticipated consequences. Section V is a summary of results.
## II Potentials
The particular potentials as a function of the relative displacement $`yx_nx_{n1}`$ used in our presentations are the harmonic,
$$V_0(y)=\frac{k}{2}y^2,$$
(3)
a hard anharmonic,
$$V_h(y)=\frac{k}{4}y^4,$$
(4)
and a soft anharmonic,
$$V_s(y)=k\left[|y|\mathrm{ln}(1+|y|)\right].$$
(5)
The anharmonic potentials have been chosen to be strictly hardening and strictly softening, respectively, with increasing amplitude. The potentials are shown in the first panel in Fig. 1. In almost all our simulations we take $`k=1`$.
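For reference, Eqs. (3)-(5) and the corresponding forces $`dV/dy`$ entering the equations of motion are trivially coded. This is a minimal sketch assuming $`k=1`$ as in the text (the vectorized NumPy form is our choice):

```python
import numpy as np

k = 1.0  # force constant used in almost all simulations

def V0(y):   # harmonic, Eq. (3)
    return 0.5 * k * y**2

def Vh(y):   # hard anharmonic, Eq. (4)
    return 0.25 * k * y**4

def Vs(y):   # soft anharmonic, Eq. (5)
    return k * (np.abs(y) - np.log1p(np.abs(y)))

# forces -dV/dy, needed for the equations of motion
def F0(y):
    return -k * y

def Fh(y):
    return -k * y**3

def Fs(y):   # dVs/dy = k*sign(y)*|y|/(1+|y|)
    return -k * np.sign(y) * np.abs(y) / (1.0 + np.abs(y))
```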
The displacement variable $`y`$ of a single oscillator of energy $`E`$ in a potential $`V(y)`$ satisfies the equation of motion
$$\frac{dy}{dt}=\pm \sqrt{2[EV(y)]}.$$
(6)
This equation can be integrated and, in particular, one can express the period of oscillation $`\tau (E)`$ and the frequency of oscillation $`\omega (E)`$ as
$$\tau (E)=\frac{2\pi }{\omega (E)}=4_0^{y_{max}}\frac{dy}{\sqrt{2[EV(y)]}}.$$
(7)
The amplitude $`y_{max}`$ is the positive solution of the equation $`V(y)=E`$. The resulting oscillation frequencies obtained from the integration of Eq. (7) for the three potentials with $`k=1`$ as well as that of the frequently used “quadratic plus quartic potential” are shown in the second panel of Fig. 1 .
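The frequency curves of Fig. 1 can be reproduced by carrying out the quadrature of Eq. (7) numerically. A minimal sketch (a hypothetical helper of ours, assuming SciPy is available; the square-root singularity at the turning point is integrable and handled adequately by `quad`, and `y_hi` must be large enough to bracket the turning point):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def omega(E, V, y_hi=1.0e3):
    """Oscillation frequency from Eq. (7); y_max solves V(y) = E."""
    y_max = brentq(lambda y: V(y) - E, 1.0e-12, y_hi)
    integral, _ = quad(lambda y: 1.0 / np.sqrt(2.0 * (E - V(y))),
                       0.0, y_max)
    return 2.0 * np.pi / (4.0 * integral)   # omega = 2*pi/tau, tau = 4*integral
```

For the harmonic potential with $`k=1`$ this returns $`\omega =1`$ independent of $`E`$, while for the hard (soft) potential it increases (decreases) with $`E`$, as in the second panel of Fig. 1.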
The frequency vs energy variations seen in Fig. 1 can be shown via rescaling and bounding arguments to represent general features of hardening and softening monostable potentials. The exercise is trivial if the potential is of the form
$$V(y)=\frac{k}{\alpha }y^\alpha $$
(8)
since then
$`\tau (E)`$ $`=`$ $`4{\displaystyle _0^{y_{max}}}{\displaystyle \frac{dy}{\sqrt{2[Eky^\alpha /\alpha ]}}}`$ (9)
$`=`$ $`4\left({\displaystyle \frac{\alpha }{k}}\right)^{1/\alpha }{\displaystyle _0^1}{\displaystyle \frac{dz}{\sqrt{2(1z^\alpha )}}}E^{\frac{1}{\alpha }\frac{1}{2}}B_\alpha E^{\frac{1}{\alpha }\frac{1}{2}}`$ (11)
whence
$$\omega (E)=\frac{2\pi }{B_\alpha }E^{\frac{1}{2}\frac{1}{\alpha }}.$$
(12)
The coefficient $`B_\alpha `$ can be expressed exactly in terms of the beta function and is equal to $`2\pi `$ for the harmonic potential.
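Explicitly, the $`z`$-integral in Eq. (11) equals $`(1/\alpha )B(1/\alpha ,1/2)`$, so that $`B_\alpha =4(\alpha /k)^{1/\alpha }B(1/\alpha ,1/2)/(\alpha \sqrt{2})`$. A quick numerical check (our sketch, using SciPy's beta function):

```python
import numpy as np
from scipy.special import beta

def B_alpha(alpha, k=1.0):
    """Coefficient B_alpha of Eq. (11) in closed form."""
    return (4.0 * (alpha / k)**(1.0 / alpha)
            * beta(1.0 / alpha, 0.5) / (alpha * np.sqrt(2.0)))

print(B_alpha(2.0))  # 6.2832 = 2*pi, the harmonic value
print(B_alpha(4.0))  # 5.2441, for the pure quartic potential of Eq. (4)
```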
If the potential is not of the simple single-power form it is still possible to bound the resulting energy dependence to establish the trend . For example, the soft potential (5) is bounded below by $`(k/2)|y|`$ and above by $`k|y|`$. These bounds immediately lead to the conclusion that the associated $`\omega (E)`$ must decrease as $`E^{1/2}`$. The argument for a mixed power potential such as $`V(y)=\frac{1}{2}y^2+\frac{1}{4}y^4`$ is a bit more cumbersome but otherwise similar: by making the change of variables from $`y`$ to $`\frac{1}{2}y^2+\frac{1}{4}y^4=Ez^4`$ one can show not only that $`\omega (E)`$ is an increasing function of $`E`$ but that it lies above the harmonic potential result for any positive $`E`$.
Figure 1 summarizes the well known frequency characteristics of oscillators: for a harmonic oscillator the frequency is independent of energy (and, with our parameters, equal to unity); for a hard oscillator the frequency increases with energy, while that of a soft oscillator decreases with energy. The hard oscillator frequency curve starts below the other two if a harmonic portion is not included. These frequency–energy trends are generalized to oscillator chains in Appendix A. The frequency vs energy behavior will figure prominently in our subsequent interpretations. In particular, the following broad view is supported throughout: the speed and dispersion of pulse propagation in discrete arrays of oscillators depend principally on the mean frequency associated with the energy in the pulse. Higher frequencies lead to faster propagation and slower dispersion.
## III One-Dimensional Arrays
We consider one-dimensional arrays of $`2N+1`$ sites numbered from $`-N`$ to $`N`$ with periodic boundary conditions. We distinguish isolated chains (that is, ones not connected to a heat bath), chains connected to a heat bath at zero temperature, and finite temperature chains. This provides an opportunity to organize the effects of different parameters on the behavior of the chains.
In all cases at time $`t=0`$ a kinetic energy $`\epsilon `$ is imparted to one particular oscillator (the oscillator at $`n=0`$) of the chain. If the chain is isolated or at zero temperature, this initial impulse is applied to an otherwise quiescent chain. At finite temperatures the chain is first allowed to equilibrate and then this impulse is imparted in addition to the thermal motions already present. We then observe how this initial $`\delta `$-function impulse propagates and spreads along the chain, and how these behaviors depend on system parameters.
### A Isolated Chains
The equations of motion for an isolated chain are
$$\ddot{x_n}=\frac{}{x_n}[V(x_nx_{n1})+V(x_{n+1}x_n)].$$
(13)
The initial conditions are
$`x_n(0)=0\mathrm{for}\mathrm{all}n,`$ (14)
$`\dot{x}_n(0)=0\mathrm{for}n0,`$ (15)
$`\dot{x}_0(0)p_0=\sqrt{2\epsilon }.`$ (16)
For a harmonic array this system can of course be solved exactly, and we do so in Appendix B. The analytic harmonic results are helpful and informative, although our discussion is primarily based on simulation results since the anharmonic chains cannot be solved analytically. The numerical integration of the equations of motion is performed using the second-order Heun method (equivalent to a second-order Runge-Kutta integration) with a time step $`\mathrm{\Delta }t=0.0001`$.
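A minimal sketch of one such integration step for Eq. (13) is given below (our own reconstruction of a generic Heun scheme, not the authors' code; `dV` is the derivative $`V^{}(y)`$ of any of the pair potentials, and the initial pulse energy used in the example is a hypothetical value):

```python
import numpy as np

def accel(x, dV):
    """Right-hand side of Eq. (13) on a periodic chain:
    x''_n = V'(x_{n+1}-x_n) - V'(x_n-x_{n-1})."""
    y_right = np.roll(x, -1) - x
    y_left = x - np.roll(x, 1)
    return dV(y_right) - dV(y_left)

def heun_step(x, p, dt, dV):
    """One second-order Heun (predictor-corrector) step."""
    a0 = accel(x, dV)
    xe, pe = x + dt * p, p + dt * a0          # Euler predictor
    a1 = accel(xe, dV)
    return x + 0.5 * dt * (p + pe), p + 0.5 * dt * (a0 + a1)

# initial conditions of Eqs. (14)-(16): impulse at the central site
N = 75                                        # 2N+1 = 151 sites
x, p = np.zeros(2 * N + 1), np.zeros(2 * N + 1)
p[N] = np.sqrt(2.0 * 25.0)                    # epsilon = 25, an illustrative value
for _ in range(10000):
    x, p = heun_step(x, p, 1.0e-4, lambda y: y**3)   # hard chain, k = 1
```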
One can think of the dynamics ensuing from the initial momentum impulse in two equivalent ways. One is to interpret the $`x_n`$ and $`\dot{x}_n`$ as displacements and momenta along the chain. Two symmetric pulses start from site zero and move to the left and to the right along the chain, and our discussion focuses on either of these two identical pulses. This symmetry occurs regardless of the sign of the initial momentum since the energy does not depend on the sign, i.e., the contraction of the spring between sites $`n=0`$ and $`n=1`$ that follows an initial positive impulse has exactly the same effect as the equal extension of the spring between sites $`n=0`$ and $`n=-1`$. Alternatively, one can think of $`x_n`$ and $`\dot{x}_n`$ as displacements and momenta perpendicular to the chain, the sign then simply representing motion “up” or “down”. The symmetry around the site $`n=0`$ is then even more obvious.
In any case, the energy $`\epsilon `$ excites the displacements as well as momenta of other oscillators as it moves and disperses. The evolution can be characterized in a number of ways. We have found the most useful to be the mean distance of the pulse from the initial site, defined as
$$x\frac{_n|n|E_n}{_nE_n}$$
(17)
and the dispersion
$$\sigma ^2x^2x^2=\frac{_nn^2E_n}{_nE_n}x^2.$$
(18)
(The sums over $`n`$ extend from $`-N`$ to $`N`$.) Here the $`E_n`$ are the local energies defined in Eq. (2) and, since these depend on time, so do the mean distance and the variance. The time dependence of the mean distance traveled is a measure of the velocity of the pulse, and that of the dispersion is a measure of how long the pulse survives before it degrades to a uniform distribution. An indication of the progression of a pulse is shown in Appendix B for a harmonic chain.
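Given the array of local energies, Eqs. (17) and (18) are essentially one-liners; the following sketch (our illustration, with sites numbered $`-N\mathrm{}N`$) returns both measures:

```python
import numpy as np

def pulse_moments(E):
    """Mean distance <x>, Eq. (17), and dispersion sigma^2, Eq. (18)."""
    N = (len(E) - 1) // 2
    n = np.arange(-N, N + 1)
    w = E / E.sum()                      # normalized energy weights
    mean = np.sum(np.abs(n) * w)
    return mean, np.sum(n**2 * w) - mean**2
```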
Results for the mean distance traveled by the pulse as a function of time for isolated chains of 151 sites are shown in the first panel in Fig. 2 for the hard, harmonic, and soft potentials and for various values of the initial pulse amplitude $`p_0`$. The mean distance varies essentially linearly with time in all cases (the linearity is only approximate – even the harmonic chain exhibits early deviations from linear behavior due to inertial effects). The important results apparent from Fig. 2 are summarized as follows and can be understood from the frequency vs energy trends in Fig. 1:
1. The pulse velocity in the harmonic chain is independent of the initial amplitude. This reflects the energy-independence of the mean frequency (and in fact of the entire frequency spectrum) for harmonic chains (also see Appendix B).
2. The pulse velocity in the hard chain increases with increasing initial amplitude. This is because the mean frequency for the hard chains increases with increasing energy.
3. The pulse velocity in the soft chain decreases with increasing initial amplitude. This is because the mean frequency for the soft chains decreases with increasing energy.
We note that with our choice of potentials the velocity in the hard chain for very weak initial amplitudes may actually lie below that of the harmonic chain or even the soft chain because we have omitted a harmonic contribution to the hard potential, but the hard chain velocity necessarily increases and surpasses that of the other chains with increasing initial pulse amplitude.
Not only is the pulse transmitted more rapidly in the hard isolated chains than in the others, but the pulse retains its integrity over longer distances in the hard chain. This is seen in the second panel in Fig. 2. The dispersion $`\sigma ^2`$ is shown for the three chains for a particular initial pulse amplitude. Rather than the dispersion as a function of time, the dispersion is shown as a function of position along the chain so that the pulse widths at a particular location along the chain can be compared directly. Clearly the hard chain pulse is the most compact at a given distance from the initially disturbed site (a plot of $`\sigma ^2`$ vs $`t`$ would show the opposite trend, that is, the pulse in the hard chain would have the greatest width, but it will have traveled a much greater distance than the pulses in the other chains). This combination of results leads to interesting geometrical consequences in higher dimensions (see Sec. IV).
### B Chains at Zero Temperature
If the chains are connected to a heat bath at zero temperature, the equations of motion Eq. (13) are modified by the inclusion of the dissipative contribution,
$$\ddot{x_n}=\frac{}{x_n}[V(x_nx_{n1})+V(x_{n+1}x_n)]\gamma \dot{x}_n,$$
(19)
where $`\gamma `$ is the dissipation parameter. The initial conditions are as set forth in Eqs. (16).
The mean distance traveled by the pulse is shown in Fig. 3 for each of the chains with and without friction so that the frictional effects can be clearly established. The salient results can again be understood from the frequency vs energy trends in Fig. 1:
1. The pulse velocity in the harmonic chain is independent of friction. This again reflects the energy-independence of the mean frequency for harmonic chains. The energy loss suffered through the frictional effects therefore does not affect the pulse velocity.
2. The pulse in the hard chain slows down with time in the presence of a frictional force. This is because the chain loses energy via friction, and the mean chain frequency decreases with decreasing energy.
3. The pulse in the soft chain speeds up with time in the presence of a frictional force. This is because the chain loses energy via friction, and the mean chain frequency increases with decreasing energy.
The dependence of the pulse width on friction (not shown explicitly) follows trends that are consistent with our other results. An increase in friction causes the pulse to narrow in the soft chain. This is consistent with the observation that higher frequencies are associated with narrower pulses. In a harmonic chain there is also some narrowing of the pulse, but not nearly as much as in the soft chain (a detailed explanation of this would require consideration of the spectrum beyond just the mean frequency). In the hard chain we cannot make an unequivocal claim from our numerical results because the dependence of pulse width on friction for our parameters is extremely weak, with perhaps a very small amount of narrowing.
### C Chains at Finite Temperature
If the chains are connected to a heat bath at temperature $`T`$, the equations of motion Eq. (19) are further modified by the inclusion of the fluctuating contribution,
$$\ddot{x_n}=\frac{}{x_n}[V(x_nx_{n1})+V(x_{n+1}x_n)]\gamma \dot{x}_n+\eta _n(t).$$
(20)
The $`\eta _n(t)`$ are mutually uncorrelated zero-centered Gaussian $`\delta `$-correlated fluctuations that satisfy the fluctuation-dissipation relation:
$$\eta _n(t)=0,\eta _n(t)\eta _j(t^{})=2\gamma k_BT\delta _{nj}\delta (tt^{}).$$
(21)
The initial conditions are now no longer given by Eqs. (16). Instead, the chain is allowed to equilibrate at temperature $`T`$ and at time $`t=0`$ an additional impulse of amplitude $`\sqrt{2\epsilon }`$ is added to the thermal velocity of site $`n=0`$. The integration of the equations of motion proceeds as before, but now we report averages over 100 realizations. The system is initially allowed to relax over enough iterations to ensure thermal equilibrium, after which we take our “measurements.”
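A stochastic generalization of the Heun step shown earlier then integrates Eq. (20). In the sketch below (our reconstruction, not the authors' code) the noise increment has variance $`2\gamma k_BT\mathrm{\Delta }t`$ per step, as dictated by Eq. (21), and the same noise realization is used in the predictor and the corrector, which is the standard stochastic Heun prescription for additive noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, p, dt, dV, gamma, kT):
    """One stochastic Heun step for Eq. (20)."""
    xi = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(len(x))

    def a(x_, p_):   # deterministic part of Eq. (20): forces plus damping
        y_r = np.roll(x_, -1) - x_
        y_l = x_ - np.roll(x_, 1)
        return dV(y_r) - dV(y_l) - gamma * p_

    a0 = a(x, p)
    xe, pe = x + dt * p, p + dt * a0 + xi
    a1 = a(xe, pe)
    return x + 0.5 * dt * (p + pe), p + 0.5 * dt * (a0 + a1) + xi
```

Setting `kT = 0` recovers the damped dynamics of Eq. (19), and `gamma = kT = 0` the isolated chain of Eq. (13).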
The pulse dynamics is no longer conveniently characterized by the mean pulse velocity (although this was the most useful and direct characterization in the absence of thermal fluctuations). This is because there is now a thermal background that causes fluctuations and distortions of the information in this mean (as well as in other simple moments and measures such as the pulse maximum). We find that the most suggestive presentation of the dynamics is that of the energy profile itself. An illustrative set of typical profiles for chains of 51 sites is presented in Fig. 4, showing the energy at the $`5^{th}`$ site on either side of $`n=0`$ as a function of time, for different temperatures. In all cases there is a delay time until the pulse reaches the $`5^{th}`$ site (reflecting a finite velocity). The local energy around this site then reaches a maximum, and the pulse moves on, leaving behind a series of later energy oscillations at ever decreasing amplitudes that eventually settle down to the appropriate thermal levels. The after-oscillations are derived analytically in Appendix B for the harmonic case. The discussion below concentrates exclusively on the first pulse, which we think of as characterizing the arrival of the disturbance at that site.
The important conclusions, some illustrated in the figure, can once again be understood from the trends in Fig. 1 and include the following:
1. The pulse velocity in the harmonic chain is independent of temperature. This is illustrated in the figure by the fact that the peak of the pulse reaches the particular site under observation at the same time for the two temperatures shown. The reason once again is that the characteristic frequencies of the chain are independent of energy and therefore the inclusion of thermal effects is immaterial to this measure.
2. The pulse velocity in the hard chain increases with increasing temperature. This is illustrated by the ever earlier arrival of the pulse at the site under observation with increasing temperature. The reason is that the mean frequency of the chain increases with energy, so that the hard chain at higher temperatures is associated with a higher frequency than at lower temperatures and hence with a faster pulse.
3. The pulse velocity in the soft chain decreases with increasing temperature. This is not explicitly illustrated in the figure, but is due to the decrease of the mean frequency with increasing temperature. Thus the soft chain at higher temperatures is associated with a lower frequency and hence a slower pulse.
4. The hard chain not only transmits pulses more rapidly than the other chains, increasingly so with increasing temperature, but it also transmits the most compact and persistent pulses at any temperature. This is seen not only by the obviously smaller width of the pulses in the hard chain, but by the fact that the energy trace “left behind” as the first pulse passes through is lower in the hard chain than in the other cases.
The pulses in all cases become more dispersive with increasing temperature. This behavior is clearly evident in Fig. 4 for the hard and harmonic chains, as is the fact that the temperature dependence of the pulse width is weakest for the hard chain (and strongest for the soft chain). These dependences complement those described earlier for the pulse width as a function of friction: increasing friction in all cases narrows the pulse (subject to our caveat concerning the hard chain mentioned earlier) while increasing the temperature broadens it, both of these dependences being weakest for the hard chain.
## IV Two-Dimensional Isolated Arrays
We showed in Sec. III A that a pulse travels more rapidly and less dispersively in an isolated hard chain than in a harmonic or soft chain. In higher dimensions these two tendencies, that of moving faster and that of maintaining the energy localized, lead to some interesting geometric effects and to very different pulse propagation properties depending on the spatial configuration of the initial condition.
In one dimension one could visualize the displacements and momenta $`x,\dot{x}`$ as describing motion along the chain or perpendicular to the chain. In two dimensions these are distinct cases: a generalization of the first requires introduction of two-dimensional coordinates $`(x,y)`$ and momenta $`(\dot{x},\dot{y})`$. The second requires only a single perpendicular coordinate $`z`$ and associated momentum $`\dot{z}`$ for each site, and this is the case we pursue. We thus consider a two-dimensional square array of dimension $`(2N+1)\times (2N+1)`$ wherein motion occurs in a direction perpendicular to the array. The Hamiltonian with $`\dot{z}_{n,j}p_{n,j}`$ is expressed as a sum of local energy contributions,
$$H=\underset{n,j}{}E_{n,j},$$
(22)
where
$`E_{n,j}={\displaystyle \frac{p_{n,j}^2}{2}}`$ $`+`$ $`{\displaystyle \frac{k}{4}}\left[(z_{n,j}z_{n+1,j})^2+(z_{n,j}z_{n1,j})^2+(z_{n,j}z_{n,j+1})^2+(z_{n,j}z_{n,j1})^2\right]`$ (23)
$`+`$ $`{\displaystyle \frac{k^{}}{8}}\left[(z_{n,j}z_{n+1,j})^4+(z_{n,j}z_{n1,j})^4+(z_{n,j}z_{n,j+1})^4+(z_{n,j}z_{n,j1})^4\right].`$ (25)
For the harmonic case we take $`k=0.5`$ and $`k^{}=0`$, and for the hard anharmonic array we set $`k=0`$ and $`k^{}=0.5`$. Our lattices are of size $`51\times 51`$ and our integration time step is $`\mathrm{\Delta }t=0.0005`$. The boundary conditions are immaterial (although we happen to use boundaries whose edge sites have only three connections and whose corner sites have only two) because the lattices are sufficiently large for the initial excitations not to reach the boundaries within the time of our computations.
We consider two initial excitation geometries. In one, a “front” is created by exciting all sites along the line $`(0,j)`$, $`-N\le j\le N`$, with the same initial momentum $`p_{0,j}=p_0`$. The front then moves symmetrically away from this line and its motion is measured by the mean distance and dispersion (in all double sums $`n`$ and $`j`$ each range from $`-N`$ to $`N`$)
$$x\frac{_{n,j}|n|E_{n,j}}{_{n,j}E_{n,j}},$$
(27)
$$\sigma ^2x^2x^2=\frac{_{n,j}n^2E_{n,j}}{_{n,j}E_{n,j}}x^2.$$
(28)
In the other, an initial pulse of kinetic energy is deposited at the central site of the array. We then measure the mean radial distance of the pulse from the origin,
$$r\frac{_{n,j}\sqrt{n^2+j^2}E_{n,j}}{_{n,j}E_{n,j}}$$
(29)
(the dispersion in this case is less informative but can also be monitored if desired). The motion and dispersion in this geometry are expected to be roughly spherically symmetric subject to the square connectivity of the lattice.
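The two-dimensional bookkeeping is a direct generalization of the one-dimensional case. The sketch below (our illustration, assuming periodic boundaries for brevity; as noted above, the boundary conditions are immaterial here) evaluates the local energies of Eq. (23) and the mean radius of Eq. (29):

```python
import numpy as np

def local_energies_2d(z, pz, k=0.5, k4=0.0):
    """Local energies E_{n,j} of Eq. (23): each of the four bonds
    contributes k/4 of its harmonic and k'/8 of its quartic energy."""
    E = 0.5 * pz**2
    for axis in (0, 1):
        for shift in (1, -1):
            d = z - np.roll(z, shift, axis=axis)
            E += 0.25 * k * d**2 + 0.125 * k4 * d**4
    return E

def mean_radius(E):
    """Mean radial distance of the energy from the center, Eq. (29)."""
    M = (E.shape[0] - 1) // 2
    n = np.arange(-M, M + 1)
    nn, jj = np.meshgrid(n, n, indexing="ij")
    return np.sum(np.sqrt(nn**2 + jj**2) * E) / np.sum(E)
```

With `k=0.5, k4=0` this is the harmonic array of the text; `k=0, k4=0.5` gives the hard anharmonic one.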
Typical gray-scale snapshots of the energy distribution are shown in Figs. 5 and 6, and the differences, while easily understood, are clearly dramatic. In the case of the front, the tendency of a hard lattice to propagate faster than the harmonic lattice while maintaining the energy more localized is clearly realized. The associated mean distance and dispersion that quantify the comparison are shown in Fig. 7. In the case of an initial point pulse, on the other hand, there is clearly a conflict between rapid motion and smaller dispersion – one can be realized only at the expense of the other. The latter “wins”: the pulse remains more localized in time in the hard lattice than in the harmonic. The associated mean radius is shown in Fig. 8. In the anharmonic lattice the pulse at first expands as fast as in the harmonic lattice but then essentially saturates, while the harmonic pulse continues to disperse.
## V Conclusions
In this paper we have considered pulse propagation in discrete arrays of masses connected by harmonic or anharmonic springs. We have focused on the pulse velocity and width, and have found a pattern of behavior that can be strongly correlated with the energy dependence of the mean array frequency.
First we investigated the propagation of pulses in isolated (microcanonical) arrays. We found that in a hard array an amplitude increase causes a pulse to travel more rapidly and less dispersively. In a harmonic array the pulse speed and width are independent of pulse amplitude, while in a soft array a more intense pulse travels more slowly and spreads out more rapidly. These trends are a result of the fact that in a hard array the mean frequency increases with energy, in a harmonic array it is independent of energy, and in a soft array the mean frequency decreases with increasing energy. In higher dimensions these trends lead to interesting initial condition dependences that in turn may lead to apparently “opposite” behavior in different cases. Thus, for example, a front in a two-dimensional isolated hard array propagates more rapidly and more sharply than in harmonic or soft arrays, and the effect is enhanced if the front is more intense. On the other hand, a point pulse in a hard array spreads more slowly than in the others: it is not possible in this geometry to both propagate quickly and yet retain a strong localization of energy, and the latter tendency dominates the dynamics.
We then investigated the effects on pulse propagation of connecting the nonlinear chains to a heat bath (we did this only for the 1D arrays). We found that dissipative forces tend to slow down the pulse in the hard array, leave its speed unchanged in the harmonic chain, and actually speed it up in the soft array. This somewhat counterintuitive behavior is, however, fully consistent with the observation that dissipation causes a decrease in energy and hence a decrease in mean frequency in the hard case and an increase in mean frequency in the soft chain (and no change in the mean frequency of the harmonic chain). Dissipation in all cases causes a narrowing of the pulse, the effect being greatest in the soft array.
An increase in temperature has the opposite (and again at first sight perhaps somewhat counterintuitive) effect: it speeds up the pulse in the hard array, leaves it unchanged in the harmonic array, and slows it down in the soft chain. Again this behavior is consistent with the frequency vs energy trends and the fact that an increase in temperature is associated with an increase in the energy of the chain. A temperature increase in all cases causes a broadening of the pulse, the effect again being greatest in the soft array.
## Acknowledgments
R. R. gratefully acknowledges the support of this research by the Ministerio de Educación y Cultura through Postdoctoral Grant No. PF-98-46573147. A. S. acknowledges sabbatical support from DGAPA-UNAM. This work was supported in part by the Engineering Research Program of the Office of Basic Energy Sciences at the U. S. Department of Energy under Grant No. DE-FG03-86ER13606.
## A Frequency vs Energy for Oscillator Chains
Consider a chain of oscillators, and let us focus on the displacement variable $`x`$ of a particular mass in the chain, say oscillator j, whose displacement satisfies the equation of motion
$$\frac{dx_j}{dt}=\pm \left(2\left[E\underset{n}{}V(x_nx_{n1})\right]\underset{nj}{}p_n^2\right)^{1/2}.$$
(A1)
The period of oscillation for oscillator j can be defined in analogy with Eq. (12):
$`\tau (E;𝐱^{},𝐩^{})`$ $`=`$ $`{\displaystyle \frac{2\pi }{\omega (E;𝐱^{},𝐩^{})}}`$ (A2)
$`=`$ $`4{\displaystyle _0^{x_{max}}}{\displaystyle \frac{dx_j}{\left(2\left[E_nV(x_nx_{n1})\right]_{nj}p_n^2\right)^{1/2}}},`$ (A4)
where $`𝐱^{}`$ stands for the set of all the $`x`$’s except $`x_j`$, and similarly for $`𝐩^{}`$. The upper limit of integration $`x_{max}`$ depends not only on $`E`$ but on all the other displacements and momenta, and is the positive value of $`x_j`$ at which the denominator of the integrand vanishes. The resulting $`\omega `$ with all the coordinate and momentum dependences is not very useful, but it would seem reasonable to simply average over all possible values of these coordinates and momenta and thus obtain an average period. We define the average period as
$`\tau (E)`$ $``$ $`\tau (E;𝐱^{},𝐩^{})`$ (A6)
$``$ $`4{\displaystyle \frac{\mathrm{}𝑑𝐱^{}\mathrm{}𝑑𝐩^{}{\displaystyle _0^{x_{max}}}{\displaystyle \frac{dx_j}{\left(2\left[E_nV(x_nx_{n1})\right]_{nj}p_n^2\right)^{1/2}}}}{\mathrm{}𝑑𝐱^{}\mathrm{}𝑑𝐩^{}}}.`$ (A8)
The limits of integration not explicitly indicated are appropriate nested relations among the variables and the energy such that the argument of the square root always remains positive. The multiple integral in the denominator covers the same integration regime and insures proper normalization for this average. Our interest lies in extracting the energy dependence - the remaining energy-independent coefficients are complicated and not important for our arguments. If the pair potentials are powers as in the single oscillator example, the scaling argument can be generalized by introducing scaled variables $`z_n(x_nx_{n1})E^{1/\alpha }`$ and $`u_np_nE^{1/2}`$ with appropriate constants of proportionality. The limits of integration then become independent of energy and the only energy dependence arises from factoring an $`E^{1/2}`$ from the square root in the denominator and an $`E^{1/\alpha }`$ from the numerator because it contains one $`z`$integration more than the denominator. The result, as before, is that
$$\tau (E)=_\alpha E^{\frac{1}{\alpha }\frac{1}{2}}$$
(A10)
with a complicated but energy-independent expression for the coefficient $`_\alpha `$, and therefore
$$\omega (E)\frac{2\pi }{\tau (E)}E^{\frac{1}{2}\frac{1}{\alpha }}.$$
(A11)
More complicated potentials require suitable generalization of this argument, but the result in any case is that the average frequencies for the hard, harmonic, and soft chains follow the same trends as those shown in Fig. 1.
## B Isolated Linear Oscillator Chain
Although linear oscillator chains are of course fully understood, it is nevertheless useful to present aspects of their behavior in the context of the present discussion.
The linear equations of motion (13) with the initial conditions (16) are easily solved:
$$x_n(t)=\frac{\sqrt{2\epsilon }}{2N+1}\underset{q=N}{\overset{N}{}}\frac{\mathrm{sin}(\omega _qt)}{\omega _q}e^{2\pi iqn/(2N+1)}=\sqrt{2\epsilon }_0^tJ_{2n}(2\sqrt{k}\tau )𝑑\tau $$
(B1)
where the frequencies $`\omega _q`$ obey the dispersion relation
$$\omega _q^2=4k\mathrm{sin}^2\left(\frac{\pi q}{2N+1}\right)$$
(B2)
and where $`J_n(z)`$ denotes the Bessel function of the first kind of integer order $`n`$. (The energy-independence of the frequencies for the harmonic chain seen here in the $`\epsilon `$-independence of the $`\omega _q`$ is prominent in our discussions throughout this paper.) The momenta are then
$$\dot{x_n}(t)=\sqrt{2\epsilon }J_{2n}(2\sqrt{k}t).$$
(B3)
Using a number of relations obeyed by the Bessel functions it is possible to combine these results and obtain for the local energy the simple expression
$$E_n(t)=\epsilon \left[J_{2n}^2(2\sqrt{k}t)+\frac{1}{2}J_{2n+1}^2(2\sqrt{k}t)+\frac{1}{2}J_{2n1}^2(2\sqrt{k}t)\right].$$
(B4)
The energy profiles for various sites are shown in Fig. 9. The $`n=5`$ profile (here obtained from the analytic expression (B4) also appears in Fig. 4 (there obtained by numerical integration). Note that the energy is not transported in a single absorption-emission process but rather in a series of oscillatory steps of decreasing amplitude. Our analysis in the body of the paper focuses on the first energy pulse.
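Equation (B4) is convenient for direct numerical evaluation; the following sketch (our illustration, using SciPy's Bessel functions) reproduces the profiles of Fig. 9:

```python
import numpy as np
from scipy.special import jv

def E_n_harmonic(n, t, eps=1.0, k=1.0):
    """Local energy of Eq. (B4) for the isolated harmonic chain."""
    z = 2.0 * np.sqrt(k) * t
    return eps * (jv(2 * n, z)**2
                  + 0.5 * jv(2 * n + 1, z)**2
                  + 0.5 * jv(2 * n - 1, z)**2)

t = np.linspace(0.0, 40.0, 2000)
E5 = E_n_harmonic(5, t)   # the n = 5 profile that also appears in Fig. 4
```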
In Section III A we rely on $`x(t)`$, the mean distance traveled by the pulse as a function of time, as one measure to characterize the transport properties of our arrays. An alternative measure that can be calculated analytically for the harmonic chain (but turns out to be somewhat less convenient for numerical computation) is the time-dependent site $`n^{}(t)`$ at which the energy is a maximum. Because the passing energy pulse in general leaves a track behind it, one expects $`n^{}(t)x_{max}(t)`$ to grow more rapidly than $`x(t)`$. That this is indeed the case is illustrated in the first panel of Fig. 10, where both quantities are shown for a harmonic chain with unit force constant. The steps in the $`x_{max}`$ curve are a consequence of the discreteness of the problem. The analytic result for $`n^{}(t)`$ is obtained by maximizing Eq. (B4) with respect to $`n`$ and is, after some manipulation, found to be the solution of the relation
$$\frac{1+\sqrt{\frac{2n1}{2n+1}}}{4n}=\frac{J_{2n}(2\sqrt{k}t)}{2\sqrt{k}tJ_{2n1}(2\sqrt{k}t)}.$$
(B5)
Except for a very short initial transient the solution is essentially linear in time and exceedingly simple:
$$n^{}(t)x_{max}(t)\sqrt{k}t.$$
(B6)
This dependence is confirmed in the second panel of Fig. 10 for three values of the force constant. The curves shown are obtained numerically, and differ from the analytic straight lines only at the very earliest times by an exceedingly small barely visible amount.
|
no-problem/9906/astro-ph9906158.html
|
ar5iv
|
text
|
# A VLBI and MERLIN Survey of faint, compact radio sources
## 1 Introduction
The new VLA surveys, FIRST (Becker et al., 1995) and NVSS (Condon et al., 1998), have opened up the mJy radio population on a vast scale, but even the higher resolution FIRST survey does not resolve the majority of sources which it detects. The tendency for the fainter sources to be considerably smaller has been known for some years (Oort, 1987) and is strikingly demonstrated by recently compiled statistics of the large MIT VLA surveys, where 47% of the sources brighter than 50 mJy imaged with 0.25 arcsec resolution at 8.4 GHz were found to be unresolved (Fletcher et al., 1998).
However, this mJy population has yet to be systematically investigated with VLBI; previous VLBI surveys have been limited to flux densities $`>`$ 100 mJy (Polatidis et al., 1995; Beasley et al., 1996) and often these samples have been restricted to relatively flat spectrum sources ($`\alpha <0.5`$). High resolution images will be of interest for population studies of weak AGN, such as the evolutionary schemes recently proposed by Wall and Jackson (1997), and will be a valuable input to statistical studies of gravitational lensing, since the compact mJy radio source population is the ‘parent population’ of the lensed sources being detected in the CLASS and MIT surveys (Falco et al., 1998).
VLBI observations of mJy sources are only possible by using the technique of phase referencing. The density of the new VLA surveys is such that we are now free to choose an ideal phase calibrator source and then simply select targets from the surrounding field.
The initial swathe of the FIRST survey included the area around 1156+295, an interesting source in its own right (Figure 1, see also Hong, these proceedings) but also an ideal phase calibrator. We therefore defined a $`3\times 4`$ degree field centred on this source and simply selected as potential targets all 127 sources from the FIRST catalogue in this area with $`S_{1.4}>10`$ mJy and LAS $`<`$ 5.0 arcsec.
## 2 MERLIN Observations
In order to select those sources with sufficient emission on milliarcsec scales for VLBI imaging, we used MERLIN at $`\lambda 6`$cm, observing each source for $`3\times 5`$ minutes, giving a detection threshold of about 2 mJy with 50 mas resolution (Garrett & Garrington, 1998). This is a savage cut for steep-spectrum sources, which would have to be very compact to be detected by MERLIN at this level, but we would expect to detect the majority of flat-spectrum sources, which should account for $`<30`$% of the sample. Perhaps surprisingly, just over half of the list, 70 sources, were detected. These appeared to be mostly point sources, but there were a few simple double sources, and several marginal detections of complex emission from the lobes of large double sources, which, given the very simple selection criteria, are not excluded from the target list.
VLA A-configuration observations at 1.4 and 8.4 GHz (1.0 and 0.25 arcsec resolutions) were made of the final target list in order to image structure on intermediate scales and to determine spectral indices. MERLIN + EVN observations at 18cm have also been made and the EVN data are now being correlated.
## 3 Global VLBI Observations at $`\lambda =6`$cm
Global $`\lambda 6`$cm VLBI observations (18 telescopes; VLBA + EVN) were made earlier this year. To meet scheduling constraints, 35 sources closest to the phase calibrator and detected by MERLIN at 6cm were selected. With $`6\times 2`$ minutes observation per source, we achieved 2 mas resolution and a detection threshold of 1 - 2 mJy. The source positions were known to within a few mas from the MERLIN observations, reducing the risk of false detections.
Only 8 of the 35 sources were not detected in these global VLBI observations. These were bright, steep-spectrum and clearly extended on intermediate scales: three were in fact lobes of large double sources, four were CSS sources with sizes $`>`$ 200 mas and one source was a marginal detection with MERLIN. Indeed, we would not have expected to have detected these sources, but they were not excluded by the ‘blind’ selection process.
Of the 27 sources which were detected, there were roughly equal numbers of flat and steep-spectrum sources.
The brighter flat-spectrum sources are associated with nearby red galaxies. Two are the nuclei of arcminute scale radio galaxies, but otherwise they have no extended emission on intermediate scales (0.1 - 10 arcsec), and two are quasars. On the milliarcsecond scale, these sources mostly have core-jet structures. The fainter sources (10 - 50 mJy) have no optical identifications on the POSS and appear to be unresolved from mas to arcsecond scales. These flat-spectrum sources have $`T_\mathrm{b}10^{10}10^{11}`$ K.
Of the steep-spectrum sources, six are barely resolved: these may be faint GPS sources with spectral peaks below 1 GHz and expected sizes of 1 - 10 mas (Snellen, these proceedings). If so, this would indicate quite a high fraction of GPS sources at this flux level. The sources with intermediate spectral indices have, as expected, arcsecond scale jets and mas cores. The largest example is a 5 arcsecond triple source, which just fell within the initial selection.
One source has a rather unusual 5 mas diameter ring-like structure. Optically, this is an empty field down to 21 mag. The nearest optical objects are 30 arcsec away and are faint. The source has a spectral index of 0.80 with no extended radio structure on any scale from 50 mas to several arcminutes, except perhaps a few mJy within 0.5 arcsec of the ‘core’, and its nearest radio neighbour is a 4 mJy source 2 arcminutes away. Ascribing this distorted structure to gravitational lensing would imply a lens mass of $`10^6`$ solar masses. The expected probability for lensing on this scale, even assuming all normal galaxies harbour massive black holes, is tiny ($`3\times 10^{-9}`$). Alternatively, this may be an extremely distorted jet, perhaps like 3C119 (Nan Ren-dong et al., 1991) or Mkn 501 (Conway & Wrobel, 1995) but 50 times fainter and several times smaller. More sensitive VLBI observations are required to investigate this source further.
## 4 Conclusions
We have produced VLBI images of an unbiased sample (no spectral index or optical selection) of faint, compact radio sources. With only 10 minutes observation per source, approximately 35% of all sources with $`S_{20}>10`$ mJy can be detected on global VLBI baselines at 6cm.
This project has used the simplest of selection criteria and we have seen that at the mJy level, the VLBI sky shows a broad range of flat and steep spectrum sources, apparently distant and nearby. In order to complete the statistical picture on this small sample, we are awaiting optical imaging and spectroscopic observations, as well as the EVN 18cm observations to complete the radio picture. Perhaps the most important lesson from this pilot VLBI survey of faint radio sources is that the mJy population is readily accessible with current VLBI techniques. The new VLA surveys provide a huge resource for this type of work and MERLIN can be used as a very efficient filter for selecting potential VLBI targets.
|
no-problem/9906/cond-mat9906119.html
|
ar5iv
|
text
|
# Coordination defects in a-Si and a-Si:H : a characterization from first principles calculations
## I Introduction
Amorphous silicon (a-Si) and hydrogenated amorphous silicon (a-Si:H) are prototypes of disordered covalent semiconductors. Extensive work, both experimental and theoretical, has been done to study their topological and electronic structure. Although most of Si atoms are tetrahedrally coordinated, anomalously coordinated configurations can locally occur in pure and hydrogenated amorphous samples, but –at variance with the case of crystals where coordination defects can be easily recognized as deviations from the perfect ordered structure– their identification is not trivial. Hence, one of the most challenging problems in the amorphous systems is to localize the defects, to classify them and to identify their peculiar electronic features.
Traditionally, three-fold ($`T_3`$) defects have been considered as the most likely intrinsic defects in a-Si. The non vanishing density of states (DOS) observed in the gap has been commonly ascribed for a long time to the “dangling bonds” corresponding to these defects, and its lowering upon hydrogenation has been explained with the saturation of dangling bonds by hydrogen (Ley 1984, Fedders and Carlsson 1988, 1989, Biswas et al. 1989, Holender and Morgan 1993, Lee and Chang 1994, Davis 1996, Tuttle and Adams 1996).
More recently, this picture has been debated and revised. In particular, the importance of five-fold coordinated sites ($`T_5`$ or “floating bonds”) in a-Si was clearly stated in the theoretical works by Pantelides (1986, 1987) and Kelires and Tersoff (1988) a dozen years ago, both in terms of their existence and their peculiar role in the electronic structure. The empirical simulation by Kelires and Tersoff (1988) showed that $`T_5`$ atoms have lower energy than $`T_3`$ atoms, and therefore should be favoured in general. Some ab-initio molecular dynamics simulations of a-Si structures also show a predominance of $`T_5`$ defects with respect to $`T_3`$ (Buda et al. 1989, $`\stackrel{ˇ}{\mathrm{S}}`$tich et al. 1991, Buda et al. 1991). Pantelides (1986, 1987) argued that $`T_3`$ and $`T_5`$ are conjugated defects and must be considered on the same footing, since a bond elongation can transform a $`T_5+T_4`$ structure into a $`T_4+T_3`$ one, or vice versa an inward relaxation can transform a $`T_4+T_3`$ structure into a $`T_5+T_4`$ one; furthermore, he proposed a mechanism for H diffusion based on floating-bond switching and annihilation/formation of $`T_5`$’s through interaction with H (Pantelides 1987), which —at variance with the commonly accepted picture of dangling-bond hydrogenation— is compatible with the rapid decrease in the number of defects, without any appreciable change in the density of Si–H bonds, experimentally observed at low temperature.
Some of these ideas have been widely used in discussing the geometrical characterization of defects; their soundness in terms of electronic properties has been investigated mainly by model calculations (Fedders and Carlsson 1987, 1988, 1989, Fedders et al. 1992) and more recently by some first-principles calculations (Fedders et al. 1992, Lee and Chang 1994, Tuttle and Adams 1996, 1998, Fornari et al. 1999).
There remains the need for a simple tool, going beyond purely geometrical criteria, for the localization and unambiguous characterization of defects. Recently the maximally-localized Wannier function approach has been applied to analyze the bonding properties in amorphous silicon (Marzari and Vanderbilt 1997, Silvestrelli et al. 1998). We focus in the present work on a real-space analysis of the bonding pattern of a-Si and a-Si:H using the simplest tools provided by first-principles electronic structure calculations: a comparative analysis of the electronic charge density and the “electron localization function” (ELF) (Savin et al. 1992). We refer the reader to another work (Fornari et al. 1999) for an accurate analysis using the local or projected density of states (DOS), which completes the characterization of the coordination defects in terms of electronic properties; here we recall only the main results.
## II Results and discussions
For studying the bonding properties in a-Si and a-Si:H we start from some selected samples generated by other authors (Buda et al. 1989, $`\stackrel{ˇ}{\mathrm{S}}`$tich et al. 1991, Buda et al. 1991) using Car-Parrinello first-principles molecular dynamics (CPMD). These structures reproduce quite well the experimental pair correlation function and bond angle distribution function using a reasonable number of atoms and hence they are suitable for accurate ab-initio studies. The configurations studied are cubic supercells of side $`a=2a_0`$, where $`a_0`$=10.17 a.u. is the theoretical equilibrium lattice parameter of c-Si, which also corresponds —in our calculations— to the optimized density of a-Si and a-Si:H. The supercells contain respectively 64 Si atoms to describe a-Si (Buda et al. 1989, $`\stackrel{ˇ}{\mathrm{S}}`$tich et al. 1991) and 64 Si atoms plus 8 H atoms for a-Si:H (Buda et al. 1989, 1991).
We use state-of-the-art electronic structure methods based on DFT, using norm-conserving pseudopotentials and a plane-wave basis set (Fornari et al. 1999). The CPMD configurations, aiming mainly at reproducing the structural properties, were obtained using a kinetic energy cutoff $`E_{cut}`$=12 Ry and the $`\mathrm{\Gamma }`$ point only for Brillouin Zone (BZ) sampling. In our calculations we improve the BZ sampling, using 4 inequivalent special k points for self-consistency and 75 k points for the DOS. These parameters have been chosen as a reasonable compromise between accuracy and computational cost. The optimization of the a-Si and a-Si:H structures with the new computational parameters is accompanied only by small structural rearrangements, so the mean structural properties are very similar to those reported by Buda et al. (1989, 1991) and $`\stackrel{ˇ}{\mathrm{S}}`$tich et al. (1991) for the original configurations, and we do not discuss them in detail here. We only report that in a-Si the mean bond length is $`d4.47`$ a.u., quite similar to the crystalline one, which is 4.40 a.u. The mean bond angle is $`\vartheta 109^{}`$, close to the characteristic value of the perfect tetrahedral network. The location of the first minimum of the radial distribution function geometrically defines the cutoff distance for the nearest neighbours (NN), which turns out to be $`R_{NN}=5.08`$ a.u., giving an average coordination number of about $`4.03`$. In a-Si:H the average Si-Si bond length is the same as in a-Si, but the first peak of the radial distribution function is more broadened and it is more appropriate to consider a larger NN cutoff distance, $`R_{NN}=5.49`$ a.u. Each H is bound to one Si atom with an average distance $`d_H=`$ 2.95 a.u., very close to the corresponding value in the SiH<sub>4</sub> molecule.
The standard geometrical analysis, based simply on counting the atoms lying inside a sphere of radius $`R_{NN}`$, indicates that the starting configurations have a predominance of $`T_5`$ defects and of distorted $`T_4`$ sites. Moreover, the a-Si samples do not contain well defined $`T_3`$ defects. This feature can be a consequence of the rapid quench from the liquid state performed in preparing the samples in the molecular dynamics process (since the liquid state is sixfold coordinated, a rapid quench typically favours overcoordination rather than undercoordination).
We thus start by analyzing in detail an overcoordinated environment. For the sake of clarity, we will consider the case of a-Si (in a-Si:H overcoordination can be due to five Si neighbours, or to four Si and one H, and so on).
In our a-Si sample there are two $`T_5`$ sites close one to each other (labelled A and B in the upper snapshot in figure 1), with a sort of interstitial (I) atom connecting them. A charge density analysis confirms for this configuration the bonding pattern predicted by the geometrical criteria, and helps in characterizing the different types of bonds ($`\stackrel{ˇ}{\mathrm{S}}`$tich et al. 1991). We observe that $`T_5`$ sites are accompanied by a valence charge density depletion. The charge density profiles reported in figure 1 show in particular that some $`T_5`$-$`T_4`$ “long” bonds and the bonds $`T_5`$-$`I`$ are characterized by a very small charge density; hence, they are “weak” and therefore those $`T_5`$ defects are the best candidates to transform into $`T_3`$ sites after a bond elongation. The asymmetry in the bond charge profiles indicates that, at variance with the perfect crystalline environment, the bonds are not perfectly homopolar but have a certain degree of ionicity.
It is useful to investigate the bonding pattern using a different kind of real-space analysis, i.e. the study of the “electron localization function”. The ELF was originally defined as a scalar function $`(𝐫)`$ measuring the conditional probability of finding an electron in the neighbourhood of another electron with the same spin. In the reformulation due to Savin et al. (1992) it is expressed as:
$$(𝐫)=\frac{1}{1+[D(𝐫)/D_h(𝐫)]^2},$$
where $`D(𝐫)`$ is the Pauli excess energy density, i.e. the difference between the kinetic energy density of the system and the kinetic energy of a non-interacting system of bosons at the same density. $`D_h(𝐫)`$ is the same quantity for the homogeneous electron gas at a density equal to the local density. With this definition, a value of $`(𝐫)`$ close to 0.5 in the bonding regions indicates a metallic character; a value close to one is characteristic of regions where the electrons are paired to form a covalent bond, but also of regions with an unpaired lone electron localized, thus corresponding to a dangling bond. The ELF has been originally proposed in the all-electron formalism, and only very recently it has been successfully applied in the framework of the density functional theory (DFT) within the pseudopotential method (De Santis and Resta 1999). Whereas charge density plots are a standard tool in the first-principles theoretical studies of real materials, ELF investigations are still lacking, and this is, to our knowledge, the first application to disordered solid state systems.
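For a spin-unpolarized system in Hartree atomic units, $`D`$ reduces to the positive orbital kinetic-energy density minus the von Weizsäcker term (the kinetic energy of the bosonic system), and $`D_h`$ is the Thomas–Fermi value $`(3/10)(3\pi ^2)^{2/3}\rho ^{5/3}`$. A minimal sketch of the grid evaluation follows (our illustration, assuming the density $`\rho `$, the squared density gradient, and the positive kinetic-energy density $`\tau `$ are already available as arrays from the self-consistent orbitals; the small regularizing constants are purely numerical guards):

```python
import numpy as np

def elf(rho, tau, grad_rho_sq):
    """ELF on a real-space grid; closed shell, Hartree atomic units.
    tau = (1/2) sum_i |grad psi_i|^2, grad_rho_sq = |grad rho|^2."""
    D = tau - grad_rho_sq / (8.0 * np.maximum(rho, 1.0e-12))
    Dh = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)
    chi = D / np.maximum(Dh, 1.0e-30)
    return 1.0 / (1.0 + chi**2)
```

Isosurfaces such as the ELF=0.85 ones discussed below are then extracted directly from the returned array.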
In the case of normal or floating bonds, the ELF does not add much information with respect to the standard charge density analysis. In the upper left panel of figure 2 we show the ELF=0.85 isosurfaces for the overcoordinated environment in a-Si described before. High-value charge density (not shown here) and ELF isosurfaces are quite similar in their extent and shape. The ELF isosurface corresponding to the A–I bond clearly visualizes its bowing (the isosurface is not perfectly centred on the geometrical bond) and its weakness (the isosurface is smaller than those on the other bonds).
Adding two hydrogen atoms in the neighbourhood of the $`T_5`$ sites and allowing the system to relax, two Si–Si bonds are broken so that the atoms A and B become normally tetrahedrally coordinated, and their fifth NN atoms connect with the additional hydrogens (see the snapshot in the lower panel of figure 2). In this configuration all the Si–Si bonds are rather strong (the ELF isosurface between A and I is more extended with respect to the previous case) and more bulk-like (all the isosurfaces are more regular in shape). The plots of the density of states (right panels in figure 2) show that, at variance with the starting configuration, which has a metallic character evidently due to defect induced states in the gap, the final one is clearly semiconducting.
The combined charge-density and ELF analysis is necessary to identify unambiguously the dangling bonds and to distinguish, for instance, a $`T_5+T_4`$ configuration from a $`T_4+T_3`$ one. Whereas the presence of a covalent bond is indicated by a region of local maxima of both ELF and charge density, a dangling bond is identified by a region with high values of ELF but low electronic charge density. This is evident in figure 3 (upper panels), where we show a snapshot from an a-Si:H sample with a $`T_4`$ (labelled A) and a $`T_3`$ (labelled B) atom (we have created a dangling bond by removing a hydrogen atom initially bonded to the silicon atom B). Panel (a) shows charge density isosurfaces, and panel (b) ELF isosurfaces. The absence of a high-value charge density isosurface, together with the presence of high-value ELF isosurfaces, in the region between atoms A and B clearly indicates the presence of a dangling bond originating from atom B. As expected, this configuration has a metallic character, with electronic states around the Fermi energy $`E_f`$ (panel (c) of figure 3).
When the system is allowed to relax, a new bond is formed between the silicon atoms A and B, as is clear from panels (d) (charge density) and (e) (ELF). The final system still has gap states, both because of the $`T_5`$ defect now formed at B and because other coordination defects are present in the rest of the a-Si:H sample. The evolution of this structure from a $`T_4+T_3`$ into a $`T_5+T_4`$ is consistent with the picture of Pantelides (1986) of the conjugated $`T_3`$ and $`T_5`$ sites.
## III Summary
In conclusion, we have presented the results of accurate ab initio self-consistent pseudopotential calculations of a-Si and a-Si:H samples with different coordination defects, starting from configurations generated via CPMD, and we have followed some possible processes of defect formation, annihilation by H, and transformation of one defect into the other. Techniques that identify the defects in real space are well suited to localizing them in disordered structures. In particular, we have shown that a combined analysis of the electronic charge density distribution and the ELF allows the different kinds of defects to be classified unambiguously. We have clearly identified $`T_3`$ and $`T_5`$ defects, and by comparing the DOS in the different configurations we have shown that both can induce states in the gap, whose density is reduced in both cases by interaction with H.
## IV Acknowledgments
This work has been done within the “Iniziativa Trasversale di Calcolo Parallelo” of INFM. We acknowledge useful discussions with N. Marzari. One of the authors (S. de G.) acknowledges support from the MURST within the initiative Progetti di ricerca di rilevante interesse nazionale.
# A New Approach to Nuclear Collisions at RHIC Energies
## 1 Introduction
The standard parton model approach to proton-proton or nucleus-nucleus scattering amounts to representing the partons of projectile and target by momentum distribution functions, $`f_A`$ and $`f_B`$, and calculating inclusive cross sections as \[sjo87, dur87\]
$$\sigma _{\mathrm{incl}}=\underset{ij}{}𝑑t𝑑x^+𝑑x^{}f_A^i(x^+,Q^2)f_B^j(x^{},Q^2)\frac{d\widehat{\sigma }_{ij}}{dt}(x^+x^{}s),$$
where $`d\widehat{\sigma }_{ij}/dt`$ is the elementary parton-parton cross section. This simple factorisation formula is the result of cancellations of complicated diagrams (AGK cancellations) and therefore hides the complicated multiple scattering structure of the reaction. The most obvious manifestation of such a structure is the fact that the inclusive cross section exceeds the total one, so the average number of elementary interactions must be bigger than one. The usual solution is so-called eikonalization, which amounts to re-introducing multiple scattering, however, based on the above formula for the inclusive cross section. The latter formula has the disadvantage of having lost many important details about the multiple scattering aspects. Constructing a model (in particular an event generator) for particle production in this way is completely arbitrary. For example, energy obviously must be conserved, but the QCD formulas will not provide the slightest hint how to realize energy conservation in a particular multiple scattering event.
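To make the structure of this factorisation formula concrete, here is a minimal Monte Carlo sketch of the double convolution over $`x^+`$ and $`x^{}`$. Everything in it is a toy assumption: the parton distribution, the parton-parton cross section (with the $`t`$ integration absorbed into it) and the units are chosen only to exhibit the structure of the integral, not to reproduce any real calculation.

```python
import math
import random


def f_toy(x, q2):
    """Toy parton distribution, NOT a fitted PDF: x*f(x) ~ (1-x)^3.
    The q2 dependence is suppressed in this toy."""
    return (1.0 - x) ** 3 / x


def dsigma_toy(shat):
    """Toy parton-parton cross section ~ 1/shat (dimensionless units)."""
    return 1.0 / shat


def sigma_incl(s, q2, n=200000, xmin=1e-3):
    """Monte Carlo estimate of the factorisation integral over x+ and x-."""
    total = 0.0
    for _ in range(n):
        # importance-sample x = xmin**u to tame the 1/x behaviour
        u1, u2 = random.random(), random.random()
        xp, xm = xmin ** u1, xmin ** u2
        jac = xp * xm * math.log(xmin) ** 2  # Jacobian of the mapping
        total += f_toy(xp, q2) * f_toy(xm, q2) * dsigma_toy(xp * xm * s) * jac
    return total / n


print(sigma_incl(s=200.0 ** 2, q2=4.0))
```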
This problem was first discussed in \[abr92\],\[bra90\]. The authors argue that, owing to the nonplanar structure of the corresponding diagrams, conserving energy and momentum in a consistent way is crucial, and therefore the incident energy has to be shared between the different elementary interactions, both real and virtual ones.
Following these ideas, we provide in this paper a rigorous treatment of the multiple scattering aspect, such that questions like energy conservation are clearly determined by the rules of field theory, removing the arbitrariness of the procedures applied so far. The general idea is as follows: the starting point is an expression for the inelastic cross section for nucleus-nucleus scattering at very high energies, expressed in terms of cut Feynman diagrams. Clearly, at this point certain assumptions concerning the dominance of certain classes of diagrams are employed. But these assumptions really define the model; what follows is just the application of the rules of field theory, calculating diagrams, making partial summations, and finally interpreting these partial sums as “partial cross sections” for certain physical processes. Production of particles (on the parton level) is thus completely determined \[wer97\].
## 2 The Elementary Cut Diagram: the Parton Ladder
We want to write down an expression for the inelastic cross section in nucleus-nucleus (including nucleon-nucleon) scattering in terms of cut Feynman diagrams. We will assume that the dominant contributions come from certain classes of diagrams, which are composed of so-called “elementary diagrams”, also referred to as “elementary interactions”. We will therefore first discuss the elementary diagrams, before introducing the multiple scattering theory. A complete diagram will certainly contain cut and uncut elementary diagrams. We will investigate explicitly only cut diagrams and use a relation between cut and uncut diagrams to calculate the latter.
We assume an elementary interaction to be represented by a parton ladder with “soft ends”, see fig. 1.
The central part is a parton ladder with ordered virtualities, such that the highest virtuality is at the center and the virtualities decrease towards the ends of the ladder. This part of the diagram can be calculated using perturbative techniques of QCD. Since the virtualities decrease towards the ends, one finally reaches values where perturbative calculations can no longer be employed, although the longitudinal momentum fraction of the corresponding parton may be much smaller than one. This means there is still a large-mass “object” between the first parton of the ladder and the nucleon, however, with small virtualities involved \[lan94\]. The most natural candidate for such an object is the soft Pomeron, which cannot be calculated from first principles, but for which reasonable parametrisations exist, based on general considerations of scattering matrices in the limit of very high energies. So, the mathematical expression corresponding to the cut diagram of fig. 1 is
$$D_{\mathrm{semi}}(s,x^+,x^{},b)=𝑑b^{}\underset{ij}{}\frac{dx_1^+}{x_1^+}\frac{dx_1^{}}{x_1^{}}E_{\mathrm{soft}}^\mathrm{i}(\frac{x_1^+}{x^+},b^{})E_{\mathrm{soft}}^\mathrm{j}(\frac{x_1^{}}{x^{}},bb^{})\sigma _{\mathrm{ladder}}^{ij}(x_1^+x_1^{}s),$$
(1)
where $`E_{\mathrm{soft}}`$ represents the soft Pomeron at each end of the ladder and $`\sigma _{\mathrm{ladder}}^{ij}`$ the parton ladder itself, the precise definition of both quantities being given in the following.
The hard part of the elementary interaction, $`\sigma _{\mathrm{ladder}}^{ij}`$, is given as
$`\sigma _{\mathrm{ladder}}^{ij}(\widehat{s})`$ $`=`$ $`{\displaystyle \underset{kl}{}}{\displaystyle 𝑑w^+𝑑w^{}𝑑Q_1^2}`$
$`E_{\mathrm{QCD}}^{ik}(Q_0^2,Q_1^2,w^+)E_{\mathrm{QCD}}^{jl}(Q_0^2,Q_1^2,w^{}){\displaystyle \frac{d\sigma _{\mathrm{Born}}^{kl}}{dQ^2}}(w^+w^{}\widehat{s},Q_1^2),`$
where $`d\sigma _{\mathrm{Born}}^{kl}/dQ^2`$represents the hardest scattering in the middle of the ladder (indicated symbolically by the somewhat thicker ladder rung in fig. 1),
and where $`E_{\mathrm{QCD}}`$ represents the evolution of parton cascade from scale $`Q_0^2`$ to $`Q_1^2`$, using the DGLAP approximation \[alt82, ell96\], given as (see fig. 2)
$$E_{\mathrm{QCD}}^{ij}(Q_0^2,Q_1^2,x)=\underset{n\mathrm{}}{lim}E_{\mathrm{QCD}}^{(n)ij}(Q_0^2,Q_1^2,x),$$
(3)
where $`E_{\mathrm{QCD}}^{(n)}`$ represents an ordered ladder with at most $`n`$ ladder rungs. This is calculated iteratively based on
$`E_{\mathrm{QCD}}^{(n)ij}(Q_0^2,Q_1^2,x)=\delta (1x)\delta _{ij}\mathrm{\Delta }^i(Q_0^2,Q_1^2)`$ (4)
$`+{\displaystyle \underset{k}{}}{\displaystyle _{Q_0^2}^{Q_1^2}}{\displaystyle \frac{dQ^2}{Q^2}}{\displaystyle _0^{1ϵ}}{\displaystyle \frac{d\xi }{\xi }}{\displaystyle \frac{\alpha _s}{2\pi }}E_{\mathrm{QCD}}^{(n1)ik}(Q_0^2,Q^2,\xi )\mathrm{\Delta }^j(Q^2,Q_1^2)P_k^j({\displaystyle \frac{x}{\xi }}),`$
where the indices $`i`$, $`j`$, $`k`$ represent parton flavors. $`P_k^j`$ are the Altarelli-Parisi splitting functions and $`\mathrm{\Delta }^j`$ is the so-called Sudakov form factor. The soft part of the elementary interaction, $`E_{\mathrm{soft}}`$, is the usual soft Pomeron expression.
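The iterative structure of eq. (4) is exactly what a Monte Carlo implementation can exploit: the Sudakov factor is the no-emission probability between two scales, so an ordered ladder can be generated rung by rung. The sketch below is a deliberately stripped-down version, with gluons only, a frozen coupling and a crude numerical integral of the splitting function; none of these simplifications are part of the scheme described above.

```python
import math
import random

ALPHA_S = 0.2  # frozen coupling, a simplification
EPS = 0.01     # infrared cutoff on the splitting variable z

# gluon-gluon splitting function (all other flavors omitted in this toy)
P = lambda z: 6.0 * ((1 - z) / z + z / (1 - z) + z * (1 - z))

# crude Riemann-sum integral of P over [EPS, 1-EPS], evaluated once
STEP = (1 - 2 * EPS) / 1000
PINT = sum(P(EPS + i * STEP) for i in range(1000)) * STEP


def evolve(x, q2, q2max):
    """One forward trajectory with ordered virtualities: the Sudakov
    factor Delta(q2, q2') = (q2/q2')**c is used as the no-emission
    probability between scales."""
    c = ALPHA_S / (2 * math.pi) * PINT
    while True:
        # solve Delta(q2, q2next) = r for the next emission scale
        q2next = q2 * (1.0 / random.random()) ** (1.0 / c)
        if q2next > q2max:
            return x  # no further resolvable emission below q2max
        while True:  # accept/reject sampling of z from P(z)
            z = EPS + random.random() * (1 - 2 * EPS)
            if random.random() < P(z) / P(EPS):  # P is maximal at the edges
                break
        x, q2 = x * z, q2next


# mean momentum fraction surviving evolution from Q0^2 = 1 to Q1^2 = 100
print(sum(evolve(1.0, 1.0, 100.0) for _ in range(10000)) / 10000)
```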
In addition to the semihard contribution $`D_{\mathrm{semi}}`$, one has to consider the expression representing the purely soft contribution:
$$D_{\mathrm{soft}}(s,x^+,x^{},b)=E_{\mathrm{soft}}(\frac{s_0}{x^+x^{}s},b),$$
(5)
with the scale parameter $`s_0=1`$ GeV. The complete contribution, representing an elementary inelastic interaction in an energy range of, say, $`10`$–$`10^4`$ GeV, is therefore given as
$$D=D_{\mathrm{soft}}+D_{\mathrm{semi}}.$$
(6)
We would like to stress that the “soft end” of the semihard Pomeron has exactly the same structure as the soft contribution itself; no new parameters enter.
There is still something missing: the outer legs of the elementary diagram are not the nucleons, but nucleon “constituents”, to be more precise quark-antiquark pairs. We also call these constituents “participants”, to indicate that they actively participate in the interaction, in contrast to the remnants, which represent the non-participating parts of the nucleons. So for each incoming leg, we have an additional factor $`F_{\mathrm{part}}`$, which we assume to be of the form $`F_{\mathrm{part}}(x)=x^{\alpha _{\mathrm{part}}}`$. Similarly, for each remnant, we add a factor $`F_{\mathrm{remn}}(x)=x^{\alpha _{\mathrm{remn}}}`$, where the arguments of $`F_{\mathrm{remn}}(x)`$ are the momentum fractions of the remnants. We define
$$2G(s,x^+,x^{},b)=F_{\mathrm{part}}(x^+)D(s,x^+,x^{},b)F_{\mathrm{part}}(x^{}),$$
where we introduced a factor 2 for later convenience.
## 3 Multiple Scattering
We assume that the dominant diagrams for nucleus-nucleus scattering are those which consist of elementary diagrams as discussed in the previous sections, see fig. 3.
One easily writes down the corresponding formula: each cut Pomeron contributes a factor $`2CG`$, each uncut one a factor $`(-2CG)`$ (the semihard Pomeron amplitude is assumed to be imaginary), and each remnant a factor $`F^+`$ or $`F^{}`$. So we get
$`\sigma _{\mathrm{inel}}(s)`$ $`=`$ $`{\displaystyle d^2b𝑑T_{AB}\underset{m_1l_1}{}\mathrm{}\underset{m_{AB}l_{AB}}{}\underset{k=1}{\overset{AB}{}}\left\{\underset{\mu =1}{\overset{m_k}{}}dx_{k,\mu }^+dx_{k,\mu }^{}\underset{\lambda =1}{\overset{l_k}{}}d\stackrel{~}{x}_{k,\lambda }^+d\stackrel{~}{x}_{k,\lambda }^{}\right\}}`$ (7)
$`{\displaystyle \underset{k=1}{\overset{AB}{}}}\left\{{\displaystyle \frac{1}{m_k!}}{\displaystyle \frac{1}{l_k!}}{\displaystyle \underset{\mu =1}{\overset{m_k}{}}}2CG(s,x_{k,\mu }^+,x_{k,\mu }^{},b){\displaystyle \underset{\lambda =1}{\overset{l_k}{}}}2CG(s,\stackrel{~}{x}_{k,\lambda }^+,\stackrel{~}{x}_{k,\lambda }^{},b)\right\}`$
$`{\displaystyle \underset{i=1}{\overset{A}{}}}F^+\left(x_i^{P+}{\displaystyle \underset{\pi (k)=i}{}}\stackrel{~}{x}_{k,\lambda }^+\right){\displaystyle \underset{j=1}{\overset{B}{}}}F^{}\left(x_j^T{\displaystyle \underset{\tau (k)=j}{}}\stackrel{~}{x}_{k,\lambda }^{}\right)`$
with
$$x_i^{P+}=1\underset{\pi (k)=i}{}x_{k,\mu }^+$$
$$x_i^T=1\underset{\tau (k)=j}{}x_{k,\mu }^{},$$
where $`𝑑T_{AB}`$ represents the integration over transverse coordinates of projectile and target nucleons with the appropriate weight given by the so-called thickness functions \[wer93\]. The factor $`D_{AB}`$ is given as
$$D_{AB}=\underset{i=1}{\overset{A}{}}\left\{\frac{1}{\sqrt{C}}+(1\frac{1}{\sqrt{C}}\delta _i^+)\right\}\underset{j=1}{\overset{B}{}}\left\{\frac{1}{\sqrt{C}}+(1\frac{1}{\sqrt{C}}\delta _j^{})\right\},$$
with
$$\delta _n^\pm =\{\begin{array}{cc}1\hfill & \mathrm{if}\mathrm{nucleon}\mathrm{is}\mathrm{passive}\hfill \\ 0\hfill & \mathrm{if}\mathrm{nucleon}\mathrm{is}\mathrm{active}\hfill \end{array},$$
where an active nucleon participates in at least one elementary interaction, whereas a passive one does not. The functions $`\pi (k)`$ and $`\tau (k)`$ refer to the projectile and the target nucleons participating in the $`k^{\mathrm{th}}`$ interaction. We fully account for energy-momentum conservation, which we consider extremely important due to the nonplanarity of the diagrams, which implies that the interactions occur in parallel.
The expansion of $`\sigma _{\mathrm{inel}}`$ in terms of cut diagrams as given in eq. 7 represents a sum of a large number of positive and negative terms, including all kinds of interferences, which excludes any probabilistic interpretation. Our strategy consists therefore of performing partial summations such that the remaining terms allow such an interpretation \[agk73\]. So we classify the diagrams according to the cut elementary diagrams (real emissions), and then sum over all diagrams of a given class, which amounts to summing over uncut elementary diagrams (sum over virtual emissions):
$`\sigma _{\mathrm{inel}}`$ $`=`$ $`{\displaystyle \underset{\mathrm{cut}\mathrm{diagrams}𝒟}{}}{\displaystyle 𝑑X𝑑\stackrel{~}{X}𝒟}`$
$`=`$ $`{\displaystyle \underset{\mathrm{cut}\mathrm{ladders}}{}}dX\left\{{\displaystyle \underset{\mathrm{uncut}\mathrm{ladders}}{}}{\displaystyle 𝑑\stackrel{~}{X}𝒟}\right\},`$
where $`𝒟`$ is the mathematical expression corresponding to a diagram as shown in fig. 3, appearing in formula 7, and where $`X`$ and $`\stackrel{~}{X}`$ represent all the light cone momenta of the cut and uncut elementary diagrams. The term in brackets $`\left\{\mathrm{}\right\}`$ may finally be interpreted as the probability of the corresponding “ladder configuration”.
Let us write the formulas explicitely. We have
$`\sigma _{\mathrm{inel}}(s)`$ $`=`$ $`{\displaystyle d^2b𝑑T_{AB}\underset{m_1}{}\mathrm{}\underset{m_{AB}}{}\underset{k=1}{\overset{AB}{}}\left\{\underset{\mu =1}{\overset{m_k}{}}dx_{k,\mu }^+dx_{k,\mu }^{}\right\}}`$ (8)
$`{\displaystyle \underset{k=1}{\overset{AB}{}}}\left\{{\displaystyle \frac{1}{m_k!}}{\displaystyle \underset{\mu =1}{\overset{m_k}{}}}2G(s,x_{k,\mu }^+,x_{k,\mu }^{},b)\right\}R(s,x^{P+},x^T,b),`$
with
$`R(s,x^{P+},x^T,b)`$ $`=`$ $`{\displaystyle \underset{l_1}{}}\mathrm{}{\displaystyle \underset{l_{AB}}{}}{\displaystyle \underset{k=1}{\overset{AB}{}}\left\{\underset{\lambda =1}{\overset{l_k}{}}d\stackrel{~}{x}_{k,\lambda }^+d\stackrel{~}{x}_{k,\lambda }^{}\right\}\underset{k=1}{\overset{AB}{}}\left\{\frac{1}{l_k!}\underset{\lambda =1}{\overset{l_k}{}}2G(s,\stackrel{~}{x}_{k,\lambda }^+,\stackrel{~}{x}_{k,\lambda }^{},b)\right\}}`$ (9)
$`\times `$ $`{\displaystyle \underset{i=1}{\overset{A}{}}}F^+\left(x_i^{P+}{\displaystyle \underset{\pi (k)=i}{}}\stackrel{~}{x}_{k,\lambda }^+\right){\displaystyle \underset{j=1}{\overset{B}{}}}F^{}\left(x_j^T{\displaystyle \underset{\tau (k)=j}{}}\stackrel{~}{x}_{k,\lambda }^{}\right).`$
The variables appearing in eq. (8) may be represented by two multivariables: the interaction-type variable $`M`$, which specifies for each of the $`AB`$ nucleon pairs the type of the interaction (how many cut Pomerons of which type occur), and the momentum variable $`X`$, already mentioned earlier, which specifies for each elementary interaction the momentum fractions. Eq. (8) may thus be written as
$$\sigma _{\mathrm{inel}\text{ }}=\underset{M}{}𝑑X\mathrm{\Omega }(M,X).$$
(10)
where $`\mathrm{\Omega }(M,X)`$ is the integrand of eq. (8). Together, the two variables $`K=\{M,X\}`$ represent a “ladder configuration”, and $`\mathrm{\Omega }(M,X)`$ is considered to be the corresponding probability density.
There are two fundamental problems to be solved:
* the sum over virtual emissions has to be performed
* tools have to be developed to deal with the multidimensional probability distribution $`\mathrm{\Omega }(M,X)`$.
Both are difficult tasks. There is no way to do the summation numerically in the case of two heavy nuclei $`A`$ and $`B`$: we have $`AB`$ summation indices, so if we assume 10 terms per index to reach convergence, we have to sum over $`10^{AB}`$ terms! It is also out of the question to use Monte Carlo methods, since positive and negative terms occur. However, we are able to provide a solution, as discussed later. Concerning the multidimensional probability distribution $`\mathrm{\Omega }(M,X)`$, we are going to employ methods well known in statistical physics (Markov chain techniques), which we will also discuss in detail later. So finally, we are able to calculate the probability distribution $`\mathrm{\Omega }(M,X)`$, and to generate (in a Monte Carlo fashion) “ladder configurations” $`(M,X)`$ according to this probability distribution. The next task amounts to generating partons explicitly, again based on our master formula eq. 8. This will be discussed in the next section.
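A minimal sketch of such a Markov chain sampler is shown below. Two loud caveats: the weight function is an invented toy stand-in for $`\mathrm{\Omega }(M,X)`$, and the chain keeps the number of Pomerons fixed, whereas the real problem also requires moves that change $`M`$ (and hence the dimension of $`X`$).

```python
import math
import random


def omega_toy(xs):
    """Toy stand-in for Omega: participant factors x**(-1/2) times a
    remnant factor in the leftover momentum; zero outside phase space."""
    if sum(xs) >= 1.0 or min(xs) <= 0.0:
        return 0.0
    remnant = (1.0 - sum(xs)) ** 1.5
    return remnant * math.prod(x ** -0.5 for x in xs)


def metropolis(n_pom=3, steps=20000, delta=0.05):
    xs = [0.1] * n_pom
    w = omega_toy(xs)
    samples = []
    for _ in range(steps):
        trial = [x + random.uniform(-delta, delta) for x in xs]
        wt = omega_toy(trial)
        if wt > 0 and random.random() < wt / w:  # Metropolis acceptance
            xs, w = trial, wt
        samples.append(sum(xs))
    return samples


s = metropolis()
print("mean momentum fraction taken by the cut Pomerons:", sum(s) / len(s))
```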
## 4 Parton Configurations
In this section, we consider the generation of parton configurations in nucleus-nucleus (including proton-proton) scattering for a given ladder configuration, which means that the number of elementary interactions per nucleon-nucleon pair is known, as well as the light cone momentum fractions $`x^+`$ and $`x^{}`$ of each elementary interaction. A parton configuration is specified by the number of partons, their types and their momenta. We showed earlier that the inelastic cross section may be written as
$$\sigma _{\mathrm{inel}}=\underset{K𝒦}{}\mathrm{\Omega }(K),$$
(11)
where the symbol $`\mathrm{\Sigma }`$ denotes summation over the discrete variables and integration over the continuous ones, and where $`K=\{M,X\}`$ represents a ladder configuration. The function $`\mathrm{\Omega }(K)`$ is known (see eq. (8)) and is interpreted as the probability distribution for a ladder configuration $`K`$. For each individual ladder a term $`2CG`$ appears in the formula for $`\mathrm{\Omega }(K)`$, where $`2G`$ itself can be expressed in terms of parton configurations; this provides probability distributions for parton configurations, and thus the basis for generating partons. We want to stress that the parton generation is also based on the master formula eq. (8); no new elements enter. In the following, we discuss in detail the generation of parton configurations for an elementary interaction with given light cone momentum fractions $`x^+`$ and $`x^{}`$ and given impact parameter difference $`b`$ between the corresponding pair of interacting nucleons.
First, we have to specify the type of elementary interaction (soft or semihard). The corresponding probabilities are
$$G_{\mathrm{semi}}(s,x^+,x^{},b)/G(s,x^+,x^{},b)$$
(12)
and
$$G_{\mathrm{soft}}(s,x^+,x^{},b)/G(s,x^+,x^{},b)$$
(13)
respectively.
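In a simulation this choice is a single random draw; a short sketch, assuming $`G_{\mathrm{soft}}`$ and $`G_{\mathrm{semi}}`$ have already been evaluated at the given $`(s,x^+,x^{},b)`$, could read:

```python
import random


def choose_type(g_soft, g_semi):
    """Pick the interaction type with the probabilities of eqs. (12)-(13);
    g_soft + g_semi plays the role of the full G."""
    return "semihard" if random.random() < g_semi / (g_soft + g_semi) else "soft"


print(choose_type(0.7, 0.3))  # toy numbers, for illustration only
```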
Let us now consider a semihard contribution. We obtain the desired probability distributions from the explicit expressions for $`2G_{\mathrm{semi}}`$. For given $`x^+`$, $`x^{}`$, we have
$$2G_{\mathrm{semi}}(s,x^+,x^{},b)𝑑x_1^+𝑑x_1^{}\left\{d^2b^{}\underset{ij}{}E_{\mathrm{soft}}^i(\frac{x_1^+}{x^+},b^{})E_{\mathrm{soft}}^j(\frac{x_1^{}}{x^{}},bb^{})\sigma _{\mathrm{ladder}}^{ij}(x_1^+x_1^{}s)\right\},$$
(14)
with
$`\sigma _{\mathrm{ladder}}^{ij}(\widehat{s})`$ $`=`$ $`{\displaystyle \underset{kl}{}}{\displaystyle 𝑑w^+𝑑w^{}𝑑Q^2}`$
$`E_{\mathrm{QCD}}^{ik}(Q_0^2,Q^2,w^+)E_{\mathrm{QCD}}^{jl}(Q_0^2,Q^2,w^{}){\displaystyle \frac{d\sigma _{\mathrm{Born}}^{kl}}{dQ^2}}(w^+w^{}\widehat{s},Q^2),`$
representing the perturbative parton-parton cross section, where both initial partons are taken at the virtuality $`Q_0^2`$. The integrand $`\left\{\mathrm{}\right\}`$ of eq. 14 serves as the probability distribution for generating $`x_1^+`$ and $`x_1^{}`$.
Knowing the momentum fractions $`x_1^+`$ and $`x_1^{}`$ of the “first partons” of the parton ladder, we can construct the complete ladder. To do so, we generalize the definition of the parton-parton cross section $`\sigma _{\mathrm{ladder}}`$ to arbitrary virtualities of the initial partons; we define
$`\sigma _{\mathrm{ladder}}^{ij}(Q_1^2,Q_2^2,\widehat{s})`$ $`=`$ $`{\displaystyle \underset{kl}{}}{\displaystyle 𝑑w^+𝑑w^{}𝑑Q^2}`$
$`E_{\mathrm{QCD}}^{ik}(Q_1^2,Q^2,w^+)E_{\mathrm{QCD}}^{jl}(Q_2^2,Q^2,w^{}){\displaystyle \frac{d\sigma _{\mathrm{Born}}^{kl}}{dQ^2}}(w^+w^{}\widehat{s},Q^2).`$
and
$`\sigma _{\text{ord}}^{ij}(Q_1^2,Q_2^2,\widehat{s})`$ $`=`$ $`{\displaystyle \underset{k}{}}{\displaystyle 𝑑w^{}𝑑Q^2}`$
$`E_{\mathrm{QCD}}^{jk}(Q_2^2,Q^2,w^{})\mathrm{\Delta }^i(Q_1^2,Q^2){\displaystyle \frac{d\sigma _{\mathrm{Born}}^{ki}}{dQ^2}}(w^{}\widehat{s},Q^2).`$
representing ladders with ordering of virtualities on both sides ($`\sigma _{\mathrm{ladder}}`$) or on one side only ($`\sigma _{\mathrm{ord}}`$). We calculate and tabulate $`\sigma _{\mathrm{ladder}}`$ and $`\sigma _{\mathrm{ord}}`$ initially, so that we can later use them via interpolation to generate partons. The generation of partons is done in an iterative fashion based on the following equations:
$`\sigma _{\mathrm{ladder}}^{ij}(Q_1^2,Q_2^2,\widehat{s})`$ $`=`$ $`{\displaystyle \underset{k}{}}{\displaystyle \frac{dQ^2}{Q^2}\frac{d\xi }{\xi }\mathrm{\Delta }^i(Q_1^2,Q^2)\frac{\alpha }{2\pi }P_i^k(\xi )\sigma _{\mathrm{ladder}}^{kj}(Q^2,Q_2^2,\xi \widehat{s})}`$
$`+\sigma _{\mathrm{ord}}^{ij}(Q_1^2,Q_2^2,\widehat{s}).`$
and
$`\sigma _{\mathrm{ord}}^{ij}(Q_1^2,Q_2^2,\widehat{s})`$ $`=`$ $`\sigma _{\mathrm{Born}}^{ij}(Q_1^2,Q_2^2,\widehat{s})`$
$`+{\displaystyle \underset{k}{}}{\displaystyle \frac{dQ^2}{Q^2}\frac{d\xi }{\xi }\mathrm{\Delta }^i(Q_1^2,Q^2)\frac{\alpha }{2\pi }P_i^k(\xi )\sigma _{\mathrm{ladder}}^{kj}(Q^2,Q_2^2,\xi \widehat{s})}.`$
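These two coupled equations suggest a simple iterative algorithm: at the current scale, the ratio $`\sigma _{\mathrm{ord}}/\sigma _{\mathrm{ladder}}`$ is the probability that no further resolvable emission occurs on the given side, and otherwise one emission is generated and the procedure is repeated at the new scale. The sketch below only shows this control flow; the tabulated cross sections and the sampling of the emission variables are passed in as placeholders (the toy stubs are invented), and the momentum bookkeeping is schematic.

```python
import random


def generate_side(flavor, q2, shat, sig_ladder, sig_ord, sample_emission):
    """Iteratively generate the resolvable emissions on one side of the
    ladder. sig_ladder(i, q2, shat) and sig_ord(i, q2, shat) stand for
    interpolations of the pre-tabulated cross sections; sample_emission
    draws (new flavor, new q2, xi) from the first term of the recursion."""
    partons = []
    while True:
        # probability that the ladder proceeds directly to the Born process
        if random.random() < sig_ord(flavor, q2, shat) / sig_ladder(flavor, q2, shat):
            return partons
        flavor, q2, xi = sample_emission(flavor, q2, shat)
        partons.append((flavor, q2, 1 - xi))  # emitted sibling, schematically
        shat *= xi  # the remaining ladder carries the reduced energy


# invented stubs, just to exercise the control flow:
sig_ord_toy = lambda i, q2, shat: 1.0
sig_ladder_toy = lambda i, q2, shat: 2.0
emit_toy = lambda i, q2, shat: ("g", q2 * 2, 0.5)
print(generate_side("g", 1.0, 100.0, sig_ladder_toy, sig_ord_toy, emit_toy))
```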
## 5 Outlook
So far we have presented a consistent and very transparent new approach to calculating parton production in nucleus-nucleus (including nucleon-nucleon) scattering. But, unfortunately, the real world consists of hadrons, so we still have to deal with the problem of hadronization, which is much less well understood. We provide a “minimal model” where we simply translate the partons from each individual elementary interaction into the language of relativistic strings, the latter being decayed using the machinery of relativistic string decay. An alternative would be to take our partons as the initial condition for a transport treatment of a partonic system. We do not explore these options any further in this paper.
## 6 Acknowledgements
This work has been funded in part by the IN2P3/CNRS (PICS 580) and the Russian Foundation of Basic Researches (RFBR-98-02-22024).
# PROGRESS IN ESTABLISHING A CONNECTION BETWEEN THE ELECTROMAGNETIC ZERO-POINT FIELD AND INERTIA

© 1999 The American Institute of Physics
Bernhard Haisch<sup>1</sup> and Alfonso Rueda<sup>2</sup>
<sup>1</sup>Solar & Astrophysics Laboratory, Lockheed Martin, H1-12, B252, 3251 Hanover St., Palo Alto, CA 94304
haisch@starspot.com
<sup>2</sup>Dept. of Electrical Eng. and Dept. of Physics & Astronomy, California State Univ., Long Beach, CA 90840
arueda@csulb.edu
Presented at Space Technology and Applications International Forum (STAIF-99)
January 31--February 4, 1999, Albuquerque, NM
Abstract. We report on the progress of a NASA-funded study being carried out at the Lockheed Martin Advanced Technology Center in Palo Alto and the California State University in Long Beach to investigate the proposed link between the zero-point field of the quantum vacuum and inertia. It is well known that an accelerating observer will experience a bath of radiation resulting from the quantum vacuum which mimics that of a heat bath, the so-called Davies-Unruh effect. We have further analyzed this problem of an accelerated object moving through the vacuum and have shown that the zero-point field will yield a non-zero Poynting vector to an accelerating observer. Scattering of this radiation by the quarks and electrons constituting matter would result in an acceleration-dependent reaction force that would appear to be the origin of inertia of matter (Rueda and Haisch 1998a, 1998b). In the subrelativistic case this inertia reaction force is exactly newtonian and in the relativistic case it exactly reproduces the well known relativistic extension of Newton’s Law. This analysis demonstrates then that both the ordinary, $`\stackrel{}{F}=m\stackrel{}{a}`$, and the relativistic forms of Newton’s equation of motion may be derived from Maxwell’s equations as applied to the electromagnetic zero-point field. We expect to be able to extend this analysis in the future to more general versions of the quantum vacuum than just the electromagnetic one discussed herein.
BACKGROUND
In July 1998 the Advanced Concepts Office at JPL and the NASA Office of Space Science sponsored a four-day workshop at Caltech on “Robotic Interstellar Exploration in the Next Century.” The objective was to bring together scientists and engineers to survey the landscape of possible ideas that could lead to unmanned missions beyond the Solar System beginning within a timeframe of 40 years. Missions to Kuiper Belt objects ($`>40`$ AU), the local interstellar medium beyond the heliopause ($`>150`$ AU), the Oort Cloud ($`10000`$–$`50000`$ AU), and ultimately the nearest star system ($`\alpha `$ Centauri at 270000 AU) were considered.
Present rocket technology falls short of the necessary propulsion requirements by orders of magnitude. Radically new capabilities are needed, and one approach is by extreme extrapolation of known technologies. Ideas presented at the workshop thus included: anti-matter initiated fusion; particle beams which could be accurately directed to a target vehicle at light-year distances owing to nanotechnology navigation capabilities built into the particles themselves; current-carrying tether arrays 1000 by 1000 km in size that could generate Lorentz forces by interaction with the interstellar magnetic field; laser-pushed lightsails that could be accurately pointed and maintain collimation at stellar distances, etc.
The extreme sizes of structures and the extreme tolerances required for such concepts are worrisome. Taking the laser-driven sail as an example, let us assume that a mission propelled by a 1 km diameter lightsail is halfway ($`2\times 10^{13}`$ km) to $`\alpha `$ Centauri when a beam problem reaches the vehicle. A one part in $`10^{13}`$ misalignment of the laser which occurred 2 years previously back on earth is now reaching the vehicle, causing the beam to miss the light sail. Owing to the speed-of-light limitation, it will be another 2 years before any news of this transmitted by the spacecraft can reach the earth. It will be yet another 2 years before the correction from earth will reach the spacecraft. But by then the vehicle may have drifted out of its trajectory sufficiently, owing to the secular effects of interaction with the interstellar medium (or other causes), that it is still out of the beam. Indeed, any drift of the vehicle from a line-of-sight trajectory will cause the same uncorrectable problem in the first place, since there is no way to know where the vehicle is “now” (in the sense of where the beam is supposed to hit). This illustrates the inherent problem of speed-of-light time delay in any feedback loop; it would be all too easy, in fact probably unavoidable, to have a mission “lost in space without a paddle” due to the slightest error using this propulsion mechanism.
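The numbers are easy to verify. Reading the “one part in $`10^{13}`$” as an angular error in radians (an interpretation made here for the sake of the estimate, not stated explicitly above), a few lines of arithmetic give a beam offset already larger than the 1 km sail, and a one-way light time matching the quoted 2 years:

```python
C_KM_S = 2.998e5  # speed of light, km/s
YEAR_S = 3.156e7  # seconds per year
dist = 2e13       # km, halfway to alpha Centauri
ang_err = 1e-13   # rad, "one part in 10^13" read as a pointing angle

print("beam offset at the sail: %.1f km" % (ang_err * dist))
print("one-way light time: %.1f yr" % (dist / C_KM_S / YEAR_S))
```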
An alternative approach to extreme and perhaps unrealistic technologies is to consider what kinds of new physics might leapfrog us beyond this: after all, no amount of money and energy expended on maximizing information transmission via 19th century pony express couriers could approach the amount and instantaneity of information transmission by television or the internet, for example. In this case, the capabilities of “new physics” in electrodynamics utterly superseded mechanics.
Obviously one would like new physics that will permit faster-than-light $`(v>c)`$ travel and provide access to unlimited energy. The $`v>c`$ hope cannot be encouraged, however, owing to the fundamental conflicts it causes with respect to relativity and causality. In a famous speech in 1900, Lord Kelvin extolled the near completeness of physics (owing in no small measure, naturally, to his own Herculean efforts) with only two dark clouds on the otherwise clear horizon: the blackbody problem and the failure to detect the ether. At the moment there is arguably a glimmering of two even smaller clouds on the horizon with respect to relativity and causality that may ultimately point the way to some conceivable $`v>c`$ physics. With respect to relativity, it would be surprising if the rest frame defined by the cosmic microwave background did not turn out to be somehow special after all, perhaps opening the door to non-Lorentzian space-time physics. With respect to causality, the cloud is even wispier. It is an enigma within an anomaly that there is some credible evidence for human extrasensory perception and that this perception of information appears not to be dependent upon time, there apparently being no greater barrier to accessing future information than present information (Jahn et al. 1997). It would, of course, be unwise to make any interstellar plans on this basis. However it does appear that there may be the equivalent of other new physics lurking within the electrodynamics of the quantum vacuum.
At this time there are four possibilities relevant to future propulsion technology that have a sufficiently well-developed basis in quantum vacuum physics so as to warrant further theoretical investigation: extraction of energy, generation of force, and manipulation of inertia and possibly even of gravitation. The realization of any one of these would leapfrog beyond all other concepts of interstellar travel presented at the workshop, and this was the topic of an invited presentation at the Caltech conference by Haisch.
THE ZERO-POINT FIELD OF THE QUANTUM VACUUM
A NASA-funded research effort has been underway since 1996 at the Lockheed Martin Advanced Technology Center in Palo Alto and at the California State University in Long Beach to explore the physics of the quantum vacuum and its possible long-term potential applications. That effort is a follow-on to previous work suggesting a relationship between the electromagnetic zero-point field of the quantum vacuum and inertia (Haisch, Rueda and Puthoff, 1994).
In the conventional interpretation of quantum theory, an electromagnetic zero-point field arises as a consequence of the Heisenberg uncertainty relation as applied to each mode of the electromagnetic field. No oscillator can ever be brought completely to rest due to quantum fluctuations. The minimum energy for a mechanical oscillator whose natural frequency is $`\nu `$ is $`E=h\nu /2`$. Each mode of the electromagnetic field also acts as an oscillator. Thus for any frequency, $`\nu `$, direction, $`\stackrel{}{k}`$, and polarization state, $`\sigma `$, there is a minimum energy of $`E=h\nu /2`$ in the electromagnetic field. Summing up all of these modes, each with its $`E=h\nu /2`$ of energy, results in an electromagnetic ground state of energy that should permeate the entire universe: the electromagnetic zero-point field, or ZPF. (The term ZPF refers to either the zero-point fields or equivalently zero-point fluctuations of the electromagnetic quantum vacuum; the term ZPE refers to the energy content of the electromagnetic quantum vacuum.) All other natural or artificial electromagnetic radiation would sit on top of this very energetic ground state.
The volumetric density of modes between frequencies $`\nu `$ and $`\nu +d\nu `$ is given by the density of states function $`N_\nu d\nu =(8\pi \nu ^2/c^3)d\nu `$. Using this density of states function and the minimum energy, $`h\nu /2`$, that we call the zero-point energy per state one can calculate the ZPF spectral energy density:
$$\rho (\nu )d\nu =\frac{8\pi \nu ^2}{c^3}\frac{h\nu }{2}d\nu .$$
$`(1)`$
It is instructive to write the expression for zero-point spectral energy density side by side with blackbody radiation:
$$\rho (\nu ,T)d\nu =\frac{8\pi \nu ^2}{c^3}\left(\frac{h\nu }{e^{h\nu /kT}1}+\frac{h\nu }{2}\right)d\nu .$$
$`(2)`$
The first term (outside the parentheses) represents the mode density, and the terms inside the parentheses are the average energy per mode of thermal radiation at temperature $`T`$ plus the zero-point energy, $`h\nu /2`$, which has no temperature dependence. Take away all thermal energy by formally letting $`T`$ go to zero, and one is still left with the zero-point term: the ground state of the electromagnetic quantum vacuum. The laws of quantum mechanics as applied to electromagnetic radiation force the existence of a background sea of zero-point-field (ZPF) radiation.
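A quick numerical illustration of eqs. (1) and (2) makes the comparison concrete. The sketch below (SI units; the sample frequencies are chosen arbitrarily) evaluates the temperature-independent zero-point term next to the thermal term at 300 K, showing the former overtaking the latter once $`h\nu `$ greatly exceeds $`kT`$:

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants


def rho_zp(nu):
    """Zero-point spectral energy density of eq. (1), in J s / m^3."""
    return (8 * math.pi * nu ** 2 / c ** 3) * (h * nu / 2)


def rho_thermal(nu, T):
    """Thermal (Planck) part of eq. (2) at temperature T."""
    return (8 * math.pi * nu ** 2 / c ** 3) * h * nu / (math.exp(h * nu / (k * T)) - 1)


for nu in (1e12, 1e13, 1e14):  # from the far infrared to the near infrared
    print(f"nu = {nu:.0e} Hz: zero-point {rho_zp(nu):.2e}, "
          f"thermal(300 K) {rho_thermal(nu, 300):.2e}")
```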
Zero-point radiation is taken to result from quantum laws. It is traditionally assumed in quantum theory, though, that the ZPF can for practical purposes be ignored or subtracted away. The foundation of the discipline in physics known as stochastic electrodynamics (SED) is the exact opposite (see e.g. de la Peña and Cetto 1996 for a thorough review of SED). It is assumed in SED that the ZPF is as real as any other electromagnetic field. As to its origin, the assumption is that zero-point radiation simply came with the Universe. The justification for this is that if one assumes that all of space is filled with ZPF radiation, a number of quantum phenomena may be explained purely on the basis of classical physics including the presence of background electromagnetic fluctuations provided by the ZPF. The Heisenberg uncertainty relation, in this view, becomes then not a result of the existence of quantum laws, but of the fact that there is a universal perturbing ZPF acting on everything. The original motivation for developing SED was to see whether the need for quantum laws separate from classical physics could thus be obviated entirely.
Philosophically, a universe filled — for reasons unknown — with a ZPF but with only one set of physical laws (classical physics consisting of mechanics and electrodynamics), would appear to be on an equal footing with a universe governed — for reasons unknown — by two distinct physical laws (classical and quantum). In terms of physics, though, SED and quantum electrodynamics, QED, are not on an equal footing, since SED has been successful in providing a satisfactory alternative to only some quantum phenomena (although this success does include a classical ZPF-based derivation of the all-important blackbody spectrum, cf. Boyer 1984). Some of this is simply due to lack of effort: The ratio of man-years devoted to development of QED is several orders of magnitude greater than the expenditure so far on SED.
There is disagreement about whether this zero-point field should be regarded as real or virtual. A number of well-established phenomena such as the Casimir force and the Lamb shift are equally well explained in terms of either the action of a real ZPF or simply the quantum fluctuations of particles. This paradox is discussed in some detail by Milonni (1988). It is clearly essential to determine how “real” the zero-point field is.
ENERGY EXTRACTION FROM THE QUANTUM VACUUM
In the early part of this century the discovery of radioactivity seemed to violate the law of energy conservation. Heat and radiation appeared to be continuously given off, as if by a source of “free energy”, in certain elements (e.g. radium, uranium). The resolution came with the understanding that mass was being converted into energy via the $`E=mc^2`$ relationship of special relativity in a process of spontaneous decay of unstable elements, the naturally occurring radioisotopes. The low-level energy emitted by decay of natural or artificially-created radioisotopes is of limited use. However, with the successful demonstration of fission in the 1940’s, nuclear engineering became possible, allowing us to tap a much more powerful mode of atomic energy release.
Chemical energy production, as in the burning of petroleum-based fuels or conventional rocket propulsion, taps the energy in the orbital electrons of atoms. Atomic energy generation taps the binding energy of the nuclear constituents of atoms. Is it possible to tap yet a deeper (and potentially much more powerful) source of energy: the ZPE of the electromagnetic quantum vacuum? There are two issues that bear on this.
It is often assumed that attempting to tap the energy of the vacuum must violate thermodynamics. One cannot extract thermal energy from a reservoir at temperature, $`T_1`$, if the environment is at temperature, $`T_2>T_1`$. While it is true that the ZPE is the energy remaining when all energy sources have been removed and the temperature reduced to $`T=0`$ K, the ZPE is not a thermal reservoir. It has very different characteristics than ordinary heat. The thermodynamics of energy extraction from the quantum vacuum have been analyzed by Cole and Puthoff (1993). They conclude as follows:
Relatively recent proposals have been made in the literature for extracting energy and heat from electromagnetic zero-point radiation via the use of the Casimir force. The basic thermodynamics involved in these proposals is analyzed and clarified here, with the conclusion that, yes, in principle, these proposals are correct. Technological considerations for the actual application and use are not examined here, however.
If the zero-point field is a real electromagnetic ground state, then there is no inconsistency with present-day physics in the possibility of tapping this energy. Indeed, it has been proposed that certain astrophysical processes are driven by the natural extraction of such energy (cf. Rueda, Haisch and Cole 1995 and references therein) and an ideal experiment has been proposed by Forward (1984) that clearly demonstrates the conceptual possibility of extracting vacuum energy.
A second point to consider is that the orbital energy of electrons and the $`E=mc^2`$ relationship itself may both ultimately be traceable to zero-point energy. While limited so far to the single, simple case of the ground-state of hydrogen, the work of Boyer (1975) and Puthoff (1987) suggests that electron energy levels may be stabilized against radiative collapse by interaction with the quantum vacuum. This would in principle link chemical energy to the energy of the ZPF.
In his preliminary development of the Sakharov conjecture on gravity as a ZPF-induced force, Puthoff (1989) suggests that the $`E=mc^2`$ relationship reflects the kinetic energy of zitterbewegung (Schrödinger’s term) which originates in the fluctuations induced by the ZPF on charged particles (quarks and electrons). In other words, instead of expressing a relationship between mass and energy, the $`E=mc^2`$ relationship tells us how much ZPF-driven energy is associated with a given particle. When this energy is liberated it is thus not really a transformation of mass into energy, rather a release of zero-point energy associated with this quantum motion known as zitterbewegung. This would in principle link atomic energy to the energy of the ZPF. (An attempt at a classical visualization of this motion from the SED viewpoint may be found in Rueda, 1993).
If the above interpretations prove to be valid and it is ultimately the energy of the ZPF that is being tapped in chemical and atomic energy production processes, then it is not inconceivable that other channels will be found to liberate energy from the quantum vacuum. Since the ZPF must be universal, a propulsion drive energized by the ZPF would have access to unlimited “fuel” anywhere…the ZPE thus providing the ultimate energy source.
IN-SITU GENERATION OF FORCES
The existence of the Casimir force — an attraction between uncharged conducting plates — is now well established. Measurements by Lamoreaux (1997) are in agreement with theoretical predictions to within a few percent. A particularly simple interpretation involving the ZPF was presented by Milonni, Cook and Goggin (1988):
We calculate the vacuum-field radiation pressure on two parallel, perfectly conducting plates. The modes outside the plates push the plates together, those confined between the plates push them apart, and the net effect is the well-known Casimir force.
In other words, the electromagnetic boundary conditions on the two plates exclude a certain amount of ZPF from the cavity in between. This results in an overpressure from the ZPF outside, which then acts as a pressure on the plates.
This is one interpretation: call it the ZPF radiation pressure model. It is also possible to interpret the force as “a macroscopic manifestation of the retarded van der Waals force between two neutral polarizable particles” (Milonni, Cook and Goggin 1988), i.e. a quantum effect involving the particles in the plates (see Milonni 1982).
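For orientation, the ideal parallel-plate Casimir pressure $`P=\pi ^2\hbar c/(240d^4)`$ can be evaluated over the separation range probed by Lamoreaux. The sketch below assumes the idealized parallel-plate formula (the actual experiment used a sphere-plate geometry, which modifies the force law):

```python
import math

HBAR, C = 1.055e-34, 2.998e8  # SI units


def casimir_pressure(d):
    """Attractive pressure between ideal conducting plates at separation d,
    P = pi^2 * hbar * c / (240 * d^4), in pascals."""
    return math.pi ** 2 * HBAR * C / (240 * d ** 4)


for d in (0.6e-6, 1.0e-6, 6.0e-6):  # the 0.6-6 micron range of Lamoreaux (1997)
    print(f"d = {d * 1e6:.1f} um: P = {casimir_pressure(d):.2e} Pa")
```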
The speculation concerning propulsion is that the ZPF radiation pressure model of the Casimir force is physically correct and that it may be possible to construct some wall or cavity that interacts with the ZPF differently on one side than on the other. If that were possible, one would have in effect a ZPF-sail that could provide a propulsive force anywhere in space.
MODIFICATION OF INERTIAL AND GRAVITATIONAL MASS
The ultimate capability enabling practical interstellar travel would be the ability to tap the energy of the vacuum while at the same time modifying the inertia of the spacecraft. Reducing the inertia of a spacecraft would allow higher velocity for the same expenditure of energy and more rapid acceleration without damage to the structure, owing to the reduction of inertial forces. The absolute limit would be acceleration to velocity $`cϵ`$ in time $`\delta `$, where both $`ϵ`$ and $`\delta `$ approach zero.
Until recently there was absolutely no basis in physics for even considering such a possibility. While such a possibility still appears to be remote, there now does exist a basis in physics to at least begin to explore this concept.
In 1994 Haisch, Rueda and Puthoff published a paper, “Inertia as a zero-point field Lorentz Force,” in which a substantive mathematical analysis indicated that the inertia of matter could be interpreted as an electromagnetic reaction force originating in the quantum vacuum. This concept has now been redeveloped by Rueda and Haisch (1998a, 1998b) in a way that is both mathematically simpler and at the same time yields a properly covariant relativistic result. This encourages us that we are on the right track.
In a frame at rest or in uniform motion, the ZPF is uniform and isotropic. This is due to the Lorentz invariance of the ZPF spectrum. (The spectral cutoff of the spectrum would not be Lorentz invariant, but if the inertia interaction takes place at a frequency or resonance far from the cutoff, this would not matter.) However, in an accelerated frame the ZPF becomes asymmetric. Rueda and Haisch have shown that the Poynting vector, which characterizes the radiative energy flux, becomes non-zero in an accelerating frame. If the quarks and electrons in matter undergoing acceleration scatter this ZPF flux, a reaction force will arise that is proportional to acceleration. This is proposed to be the origin of the inertia of matter. Inertia is not an innate property of matter; it is an acceleration-dependent electromagnetic reaction force.
The principle of equivalence mandates that gravitational and inertial mass must be the same. Therefore, if inertial mass is electromagnetic in origin, then gravitational mass must also be electromagnetic in some fashion. A preliminary development of a gravitational analysis based on the electrodynamics of the ZPF has been made by Puthoff (1989). Subsequent critiques have pointed out some deficiencies in this analysis. Nevertheless, it is encouraging that the ZPF-related parameters that determine “mass” turn out to be identical in the inertia and gravitation analyses (see Appendix A in Haisch, Rueda and Puthoff 1994). From this view, the ZPF acts as a mediator of a gravitational force, but cannot itself gravitate, hence would not result in an unacceptably large cosmological constant (Haisch and Rueda 1997).
We now have a theoretical basis to explore the possibility that electrodynamics may be used to modify the quantum vacuum in some way so as to alter inertia and/or gravitation. It would be prudent to continue such investigations.
ACKNOWLEDGMENTS
We acknowledge support of NASA contract NASW-5050 for this work. BH also acknowledges the hospitality of Prof. J. Trümper and the Max-Planck-Institut where some of these ideas originated during several extended stays as a Visiting Fellow. AR acknowledges stimulating discussions with Dr. D. C. Cole.
REFERENCES
Boyer, T. H., “Random electrodynamics: The theory of classical electrodynamics with classical electromagnetic zero-point radiation,” Phys. Rev. D, 11, 790 (1975).
Boyer, T. H., “Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation,” Phys. Rev. D, 29, 1096 (1984).
Cole, D.C. and Puthoff, H.E., “Extracting energy and heat from the vacuum,” Phys. Rev. E, 48, 1562 (1993).
de la Peña, L. and Cetto, A.M. The Quantum Dice: An Introduction to Stochastic Electrodynamics, Kluwer Acad. Publishers, Dordrecht, (1996).
Forward, R., “Extracting electrical energy from the vacuum by cohesion of charged foliated conductors,” Phys. Rev. B, 30, 1700 (1984).
Haisch, B. and Rueda, A., “Reply to Michel’s ‘Comment on Zero-Point Fluctuations and the Cosmological Constant’,” Astrophys. J., 488, 563 (1997).
Haisch, B., Rueda, A. and Puthoff, H.E. (HRP), “Inertia as a zero-point-field Lorentz force,” Phys. Rev. A, 49, 678 (1994).
Jahn, R.G, Dunne, B.J., Nelson, R.D., Dobyns, Y.H. and Bradish, G.J., “Correlations of Random Binary Sequences With Pre-stated Operator Intention: A Review of a 12-year Program,” J. Scientific Exploration, 11, 345 (1997).
Lamoreaux, S.K., “Demonstration of the Casimir Force in the 0.6 to 6 $`\mu `$m Range,” Phys. Rev. Letters, 78, 5 (1997).
Milonni, P.W., “Casimir forces without the vacuum radiation field,” Phys. Rev. A, 25, 1315 (1982).
Milonni, P.W., “Different Ways of Looking at the Electromagnetic Vacuum,” Physica Scripta, T21, 102 (1988).
Milonni, P.W., Cook, R.J. and Goggin, M.E., “Radiation pressure from the vacuum: Physical interpretation of the Casimir force,” Phys. Rev. A, 38, 1621 (1988).
Puthoff, H.E., “Ground state of hydrogen as a zero-point-fluctuation-determined state,” Phys. Rev. D, 35, 3266 (1987).
Puthoff, H.E., “Gravity as a zero-point-fluctuation force,” Phys. Rev. A, 39, 2333 (1989).
Rueda, A., “Stochastic Electrodynamics with Particle Structure: Part I – Zero-point induced Brownian Behaviour,” Found. Phys. Letters, 6, 75 (1993); and “Stochastic Electrodynamics with Particle Structure: Parts II – Towards a Zero-point induced Wave Behaviour,” 6, 193 (1993).
Rueda, A. and Haisch, B., “Inertia as reaction of the vacuum to accelerated motion,” Physics Letters A, 240, 115 (1998a).
Rueda, A. and Haisch, B., “Contribution to inertial mass by reaction of the vacuum to accelerated motion,” Foundations of Physics, 28, 1057 (1998b).
Rueda, A., Haisch, B. and Cole, D. C., “Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids,” Astrophys. J., 445, 7 (1995).
# Similarity of slow stripe fluctuations between Sr-doped cuprates and oxygen-doped nickelates.
## Abstract
Stripe fluctuations in La<sub>2</sub>NiO<sub>4.17</sub> have been studied by <sup>139</sup>La NMR using the field and temperature dependence of the linewidth and relaxation rates. In the formation process of the stripes the NMR line intensity is maximal below 230 K, starts to diminish around 140 K, disappears around 50 K and recovers at 4 K. These results are shown to be consistent with, but completely complementary to neutron measurements, and to be generic for oxygen doped nickelates and underdoped cuprates.
PACS numbers: 76.60.-k, 74.72.Dn, 75.30.Ds, 75.40.Gb
Evidence is accumulating that the electron systems in doped Mott-Hubbard insulators exhibit quite complex ordering phenomena. In two-dimensional (2D) systems this takes the form of stripe phases, where the excess charges bind to antiphase boundaries in the Néel state. It was recently demonstrated by Hunt et al. that in a large temperature regime where the stripe order appears to be complete according to diffraction experiments, the stripe system is still slowly fluctuating. This follows from NMR experiments, showing both motional narrowing at higher temperatures and a wipe-out of the NMR signal upon cooling down, caused by the characteristic fluctuation frequency becoming of the order of the NQR linewidth/splitting of a few MHz. Here we demonstrate that these fluctuations are not unique to cuprate stripes, and thereby unrelated to intricacies associated with the proximity of the superconducting state. We present an NMR study of the stripe phase in oxygen doped La<sub>2</sub>NiO<sub>4</sub>. It seems well established that the excess oxygen enters as an interstitial that shows a tendency to order three-dimensionally, creating a larger unit cell. It should therefore be regarded as a rather clean system compared to the Sr-doped nickelates. We find that this ‘clean’ nickelate system exhibits fluctuation behavior which closely parallels that of the cuprate stripes: although scattering experiments in La<sub>2</sub>NiO<sub>4.13</sub> show charge and spin freezing at $`T_{\mathrm{CO}}`$=220 K and $`T_{\mathrm{SO}}`$=110 K, our NMR experiments in La<sub>2</sub>NiO<sub>4.17</sub> indicate that the stripes only become static at a temperature of 2 K. Interestingly, these slow fluctuations seem absent in the ‘dirty’ Sr-doped nickelate La<sub>5/3</sub>Sr<sub>1/3</sub>NiO<sub>4</sub> (with the same 1/3 hole doping content as our oxygen doped nickelate), where $`\mu `$SR measurements reveal the onset of static spin order at the same temperature (200 K) as the (quasi)elastic peaks develop in the neutron scattering. These observations suggest that the slow stripe fluctuations, characteristic of the cuprates and oxygen doped nickelates, are in the first instance unrelated to quenched disorder, and at the same time that disorder is excessively effective in pinning these slow, intrinsic fluctuations in the insulator but not in the (super)conductor.
Below we analyze the field and temperature dependence of the <sup>139</sup>La linewidth and relaxation rates for La<sub>2</sub>NiO<sub>4+δ</sub> with $`\delta =0.17`$. <sup>139</sup>La has a nuclear spin $`I=7/2`$, which makes NMR sensitive to both charge and spin, and allows the study of charge and spin order, as well as of the dynamics at time scales longer than $`10^{-7}`$ s. The measurements were performed on two single crystals from different batches that were prepared under atmospheric conditions in a mirror oven at 1100 K. Slices from both samples were analyzed by microprobe techniques; on average the oxygen contents were found to be the same. Samples for the measurements were cut from those parts that had a homogeneous oxygen content and had a typical weight of 50 mg. Thermogravimetric analysis (TGA) of the oxygen concentration gave $`\delta `$ = 0.17. The same value follows from the volume of the unit cell as determined by X-ray diffraction. Samples of batch 1 show a peak in the susceptibility ($`\chi `$) around 110 K, which is also seen in the 2/15-doped compounds and is associated with oxygen order. The absence of this peak in the other sample (batch 2) points to local differences between the two samples. Apart from the peak, the susceptibility of both samples shows the same Curie-like dependence on $`T`$ down to 80 K. Below that temperature, down to 4 K, $`\chi `$ increases smoothly, resembling (but not identical to) Curie-Weiss behavior with an antiferromagnetic exchange interaction. The $`\chi `$ data above 100 K give a Ni moment of about one $`\mu _B`$. The NMR results of the two batches are qualitatively identical, with small differences in the freezing (see below) temperatures. The results presented relate to the samples of batch 2.
Line profiles as a function of temperature were determined by frequency sweeps at fixed field (9.4 T and 4.7 T) or field sweeps at fixed frequency. Both methods give the same results. As detuning is not needed, the latter data have a better accuracy and are analysed in the following. Below 250 K, the $`3/2\leftrightarrow 1/2`$ ($`m=3/2`$) transition becomes visible and is seen to be split into two lines, A and B, Fig.1. In the same figure we show the results for the $`-1/2\leftrightarrow +1/2`$ ($`m=1/2`$) transition, which develops a second component at about 230 K. As we will further clarify, these lines have the same origin and correspond to La sites which are inequivalent because of different electric field gradients. We face the ambiguity that these differences can originate either in the distribution of the excess oxygen, or in the inhomogeneous charge distribution associated with the stripes in the NiO<sub>2</sub> planes: in both cases a similar pattern is expected. We are currently investigating these matters further; what is important in the present context is that both lines show a very similar temperature dependence, suggesting that both sites communicate with the same electronic system, as we now demonstrate.
The field dependence of the two lines was measured in detail for both transitions. Only the position of B depends on field, as illustrated for the $`m=3/2`$ and $`m=1/2`$ transitions in Fig.2a. The $`T`$-dependences of the spin-lattice relaxation rates ($`T_1^{-1}`$) are shown in Fig.2b for the two $`m=3/2`$ lines, together with the spin dephasing rate ($`T_2^{-1}`$) for the A line. $`T_1^{-1}`$ is measured by a $`\pi `$–$`t`$–$`\pi /2`$–$`\tau `$–$`\pi `$ sequence and analyzed with the multi-exponential expression of Narath. The effective relaxation rates correspond to 41$`W`$, with $`W`$ the fundamental transition probability. The magnetic character of the relaxation mechanism below 200 K was checked by comparing the relaxation rates for the $`m=1/2`$ and $`m=3/2`$ transitions: when analyzed with the magnetic expression they gave the same $`W_M`$. $`T_1^{-1}`$(A) is about twice as fast as $`T_1^{-1}`$(B).
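For reference, such a recovery analysis can be sketched as follows. Two assumptions are made explicit: the coefficients used are the commonly quoted multi-exponential weights for magnetic relaxation of the central transition of a spin-7/2 nucleus after inversion, which may differ in detail from the exact expression used here, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Commonly quoted (weight, rate multiple of W) pairs for magnetic
# relaxation of the central transition of I = 7/2 -- an assumption about
# which transition and preparation applies.
COEF = [(0.0119, 2), (0.0682, 12), (0.2060, 30), (0.7139, 56)]


def recovery(t, m0, w):
    """M(t) after inversion; w is the fundamental transition probability W."""
    return m0 * (1 - 2 * sum(a * np.exp(-n * w * t) for a, n in COEF))


# synthetic data standing in for an echo-height recovery curve
t = np.logspace(-4, 0, 30)
m = recovery(t, 1.0, 50.0) + 0.01 * np.random.randn(t.size)
(m0_fit, w_fit), _ = curve_fit(recovery, t, m, p0=(1.0, 10.0))
print(f"fitted W = {w_fit:.1f} s^-1")
```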
The intensity ratio of B to A depends on the cooling history, being larger when cooling proceeds more slowly. In Figs. 1-3 we show the results obtained after slowly cooling down. With decreasing temperature the intensity of the resonance lines first increases, then saturates around 150 K, to disappear completely around 50 K. Fig.3 gives the $`T`$ dependence of the normalized intensity (log. scale) and of the wipe-out fraction $`F`$ (the intensities are integrated over the linewidth, corrected for the apparent $`T_2`$’s), defined as the ratio of the experimentally observed and the Curie extrapolated value.
At lower temperatures the A and B lines reappear; at 4.2 K the linewidths at 38.2 MHz (A about 6 MHz, B about 2 MHz) are about 3 times larger than at 75 K, see fig.4. Also in zero field, line B ($`m=7/2`$) is only broadened, not split, with a width (3 MHz) comparable to that found in field for the same transition. This width is an order of magnitude larger than the calculated effect of the dipolar field. There is only a faint $`T`$-dependence in the position of the two lines. Above 50 K the width of A is increased by 50%, while that of line B is even less $`T`$-dependent.
Let us now turn to the interpretation of the data. Consistent with the neutron and susceptibility measurements in La<sub>2</sub>NiO<sub>4.13</sub>, we find in La<sub>2</sub>NiO<sub>4.17</sub> the following temperature regimes:
(i) $`T>250`$ K. Only a narrow $`m=1/2`$ NMR line is visible, with strongly reduced intensity. All visible La sites experience the same (averaged) electric field gradient. Oxygen motion explains the activated relaxation rates of the $`m=1/2`$ lines (not shown here, see ref.), with an activation energy of $`3\times 10^3`$ K, and the increase of $`T_2^{-1}`$(A) shown in Fig.2.
(ii) 250 K$`>T>`$230 K. Due to oxygen/charge ordering, the $`m=3/2`$ transition develops two lines A and B with an intensity ratio close to 2. Due to the small intensity this splitting is not yet visible in the $`m=1/2`$ line.
(iii) 230 K$`>T>`$140 K. Below 230 K the same two inequivalent sites (A and B) that can be distinguished in the $`m=3/2`$ transition broaden the $`m=1/2`$ line, Fig. 1. The field dependence of the line pattern is well explained by a quadrupolar interaction (drawn lines in Fig. 2). The saturation of the intensity ratio of unshifted to shifted line at two to one around 230 K is expected for site-ordered stripes.
(iv) 140 K$`>T>`$50 K. Slow motion of spins or charges wipes out most of the NMR intensity. The nuclei that can still be observed show additional line broadening and a relaxation rate which is reminiscent of the $`T`$-dependence of $`T_1^{-1}`$ in La<sub>2</sub>Cu<sub>1-x</sub>Li<sub>x</sub>O<sub>4</sub> or Sr doped La<sub>2</sub>CuO<sub>4</sub> above the spin freezing temperature. $`T_1^{-1}(T)`$ can be fitted with an activated process $`T_1^{-1}\propto \mathrm{exp}(\mathrm{\Delta }E_a/T)`$ with $`\mathrm{\Delta }E_a=10^2`$ K (dotted line in Fig. 2) or with a power law dependence $`T_1^{-1}\propto [(T-T_f)/T_f]^{-\alpha }`$ with $`\alpha \approx 0.46`$ (drawn line in Fig. 2) and a spin freezing temperature $`T_f`$ of about 45 K; a sketch of the two fits is given after this enumeration. As the $`T`$-dependences of the rates of the A and B spins look identical, the ratio of a factor 2 is due to the strength of the hyperfine coupling to the electron spins. When analyzed in the renormalized classical limit, the activation energy equals the exchange constant $`J`$. The magnetic correlation length $`\xi /a=0.5\mathrm{exp}(J/T)`$ is of the order of the lattice constant $`a`$ above $`10^2`$ K, and an order of magnitude larger around 40 K. In any case the relaxation rates are not related to the wipe-out process, see Figs. 3 and 2. The same behavior is observed in the cuprates; it suggests that the bulk of the stripe system is subject to fluctuations on the MHz scale (the invisible fraction), while local events occur that are characterized by spin fluctuations on a much shorter time scale. Finally, it appears that the functional dependence of the wipe-out factor $`F`$ on $`T`$ is similar to that of the $`s`$-wave BCS order parameter (see Fig. 3).
(v) 50 K$`>T>`$15 K. Wipe-out of the NMR intensity is complete. The whole La nuclear system experiences the fluctuations on MHz time scales, and the rare sites dealt with in the previous paragraph have disappeared completely.
(vi) 15 K$`>T`$. After reappearing, the La NMR lines are appreciably broadened, with a width that is at least a few times larger for A than for B spins. This signals the onset of truly static order; note that even at the lowest temperature where we measured (2 K) only 10% of the intensity has recovered. The line width reflects static disorder (inhomogeneous broadening), and one would be tempted to ascribe this to spin-glass behavior. Interestingly, we find that the width of the $`m=3/2`$ line does not depend on the field strength, as expected for disorder originating in the charge sector.
When we compare our data with NMR data in underdoped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, the similarities in the wipe-out process are striking. In all samples stripe structures are clearly seen in the neutron data, but seem to be absent in the NMR data. This apparent contradiction is resolved if the wipe-out effect seen in the NMR data is associated with slow dynamics in the stripe system. Almost all cuprates studied in ref. are superconductors; the only non-superconducting sample, with 0.04 Sr content, behaves anomalously in that it shows no stripe fluctuations. In the cuprates at low doping, where the superconducting fraction seems to be very low, the wipe-out fraction starts to grow typically below 50 K and equals $`1`$ around 20 K. Near optimal doping (0.15 Sr content) $`F`$ is about $`0.3`$. In the underdoped cuprates static magnetic hyperfine fields change the NQR frequencies completely below 10 K, the same ordering temperature as seen by $`\mu `$SR. Here the comparison between cuprates and nickelates is instructive. The nickelates are very poor conductors and show no superconductivity. As mentioned above, in neutron scattering on the nickelates with oxygen doping of $`0.133`$ charge order is observed around 210 K while spins order around 110 K; we find that static order only occurs at temperatures below 10 K. The wipe-out fraction of one is in agreement with the trend in the cuprate data. The onset of the wipe-out process around 140 K makes the combined action of spin and charge motion in the already formed stripe structure a plausible explanation for its origin. More specifically, the general pattern seems susceptible to an explanation in terms of dislocation melting. As is well understood in the context of two-dimensional melting, dislocation unbinding leads to an overall fluid character of the system, and such a transition can occur even when the density of topological defects is low, i.e., in a regime where the time scales are long. However, we also found fast spin relaxations in the fraction of the system which is still visible in NMR when the wipe-out is nearly complete. We are tempted to ascribe those to the dislocations: we envisage that in the core region of the dislocations the spin system is strongly frustrated.
We end by mentioning a puzzling aspect related to the role of quenched disorder in the Sr-doped nickelates, compared to the oxygen-doped systems we have been studying. Especially for a hole doping of 1/3, the difference between oxygen-doped (O4.17) and Sr-doped (Sr0.33) nickelates is manifest: in neutron scattering, spin ordering in O4.15 is seen around 110 K (the wipe-out process in our samples starts around 140 K), while in Sr-doped samples this temperature is around 200 K (in $`\mu `$SR, NMR and NS alike). The differences go further. No strong wipe-out features are present for Sr0.33, and the intensity ratio of the two NMR lines differs from ours. We believe these effects are linked to the effect of the ”quenched” Sr dopant in this insulator, and we find an analogy with what happens in highly two-dimensional type-II superconductors. As is well known in that field, pinning centers can suppress the formation of a flux liquid, i.e., enhance the vortex melting temperature appreciably above the Kosterlitz-Thouless melting temperature.
In summary, our NMR study of stripes in oxygen-doped La<sub>2</sub>NiO<sub>4</sub> shows a striking similarity with the behavior found previously in cuprate superconductors. The stripe system is apparently a slowly fluctuating, strongly correlated fluid over an extended range of temperatures. Since this behavior also occurs in the insulating nickelate, these fluctuations are unrelated to the proximity of the superconducting state, and it also appears unlikely that they are driven entirely by quenched disorder. We suspect that our findings are related to the peculiar character of the stripe phase itself.
We gratefully acknowledge fruitful discussions with S. Mukhin about theoretical aspects, with P.C. Hammel about the NMR results in La<sub>5/3</sub>Sr<sub>1/3</sub>NiO<sub>4</sub>, and D.E. MacLaughlin about NMR in spin-glass systems. One batch of the single crystals was prepared by Y.M. Mukovskii at the Steel and Alloys Institute in Moscow.
# PROPER MOTION OF THE COMPACT, NONTHERMAL RADIO SOURCE IN THE GALACTIC CENTER, SAGITTARIUS A∗
## 1 INTRODUCTION
The compact, nonthermal radio source, Sgr A, was discovered by Balick & Brown (1974) while looking for compact HII regions in the center of the galaxy. The nature of Sgr A and its role in the center of our galaxy have been a matter of speculation over the past 25 years. Until recently, theoretical and observational arguments were advanced that the galactic center contains a million solar mass black hole that might be identified with Sgr A (Lynden-Bell & Rees 1971; Genzel, Hollenbach, & Townes 1994). However, the emission across the electromagnetic spectrum that is definitively, or even possibly, identified with Sgr A amounts to no more than $`10^{36}`$ solar luminosities (Beckert et al. 1996; Serabyn et al. 1997), which does not necessarily demand a supermassive object. Angular size measurements of Sgr A also have yet to reveal definitively the nature of this object, owing to the blurring effects of interstellar scattering in the dense, turbulent plasma near the galactic center (Lo et al. 1985; Backer 1988; Jauncey et al. 1989; Frail et al. 1994; Rogers et al. 1994; Bower & Backer 1998; Lo et al. 1998). From the highest frequency VLBI observations we infer an upper limit to the size of 1 AU at 86 GHz (Rogers et al. 1994). Recent summaries of the variability of the radio emission (Zhao et al. 1991; Gwinn et al. 1991; Backer 1994; Wright & Backer 1993; Tsuboi, Miyazaki, & Tsutsumi 1999) and limits on its linear and circular polarization (Zhao 1992, personal communication; Bower & Backer 1999) also do not give us a definitive handle on the intrinsic nature of this object – stellar mass object or supermassive black hole?
Over the past five years our understanding of both the presence of dark matter in the center and the nature of Sgr A has improved radically. Large proper motions of luminous infrared stars within 0.1 parsec of Sgr A have now been detected and lead to a good estimate of the central dark mass, $`2.5\times 10^6`$ M<sub>⊙</sub> (Eckart & Genzel 1997; Genzel et al. 1997; Ghez et al. 1998). However, models for the full Sgr A electromagnetic flux spectrum based on low radiative efficiency accretion of wind-driven matter from nearby stars onto a black hole are not yet consistent with a mass of a few million solar masses, given nominal estimates of the mass accretion rate (Melia 1992, 1994; Falcke et al. 1993; Falcke & Melia 1997; Narayan, Yi, & Mahadevan 1995; Narayan et al. 1998; Mahadevan 1998).
Shortly after the discovery of Sgr A we began an astrometry program to determine its proper motion relative to extragalactic reference sources, active galactic nuclei and quasars, with the NRAO <sup>1</sup><sup>1</sup>1The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Green Bank interferometer (Backer & Sramek 1982; BS82). If Sgr A were ‘just’ a stellar mass object, then it would be buzzing around in the central gravitational potential well in equipartition with the other stars and gas clouds in the center. Transverse velocity components of at least 100-200 km s<sup>-1</sup> would be expected (Sellgren et al. 1990). Alternatively, if Sgr A were indeed a supermassive black hole, then it might very well be at rest in the center. Different formation scenarios for such an object as well as considerations of galactic dynamics predict different residual motions of a black hole with respect to the galactic center.
Our observations from the solar system of an object in the galactic center relative to the extragalactic sky are sensitive, of course, to the secular parallax resulting mainly from the rotation of the galaxy with a small additional contribution from the solar motion with respect to the local standard of rest. The expected motion is approximately 6 mas y<sup>-1</sup> using current values of galactic constants (Kerr & Lynden-Bell 1986). If we remove this large secular parallax from the apparent motion, the residual, peculiar motion with respect to the galactic barycenter can be used to estimate the mass of Sgr A using an equipartition or other dynamical argument. The uncertainty in the secular parallax correction is largest in the longitude direction. Therefore, the peculiar motion of Sgr A in galactic latitude is most important for assessment of mass of the parent body of Sgr A. Alternatively, if we assume both that Sgr A is attached rigidly to a several million solar mass black hole and that this object defines the inertial reference frame for the galaxy, then the apparent motion can be used to define galactic constants.
The intensity distribution of Sgr A is broadened by multipath propagation (diffraction) in the intervening thermal plasma whose density is perturbed on small length scales. The detection of similar broadening of OH masers near Sgr A by Frail et al. (1994) suggests strongly that this plasma is located in the central 140 pc of the galaxy. The apparent diameter of Sgr A is 1.4 mas $`\lambda ^{+2.0}`$ where $`\lambda `$ is the wavelength in cm. Past and present proper motion measurements have an error much smaller than this size and therefore we have a particular concern about temporal stability of diffractive and refractive propagation effects.
Our Green Bank experiment (1976-1981) detected a proper motion in galactic longitude that was consistent with the expected secular parallax and therefore with negligible peculiar motion or refractive effects (BS82). However the errors were too large to place a meaningful limit on the mass of Sgr A. They did establish that Sgr A was galactic and not a chance superposition of an extragalactic background source.
In 1981 we began a new Sgr A proper motion experiment with the NRAO Very Large Array (VLA, see Napier, Thompson, & Ekers 1983). The number of antennas, the two dimensional distribution of antennas in a wye configuration, the excellent site for phase stability and the sensitive receivers provided considerable new capability. Reports of the progress of this experiment have been provided in a series of conference reports (Backer & Sramek 1987; Backer 1994, 1996). In this paper we provide a full report of 8 epochs of VLA observations between 1982 and 1998. The observations are described in §2, and our procedures for the determination of the apparent proper motion of Sgr A are explained in §3. In §4 we discuss the current best estimate for galactic constants that lead to the $`6\mathrm{mas}\mathrm{y}^1`$ secular parallax in galactic latitude that dominates the measured motion. The possibility of refractive wander of the position of Sgr A is then explored and limited by recent dual frequency data. The third topic in §4 concerns interpretation of the peculiar motion which remains after subtraction of the estimated secular parallax. A summary of the paper is given in §5.
## 2 OBSERVATIONS AND DATA REDUCTION
In 1981 we searched the literature for candidate reference sources closer to Sgr A than those used in the Green Bank experiment reported previously (BS82). The Westerbork planetary nebula searches (Wouterloot & Dekker 1979; Isaacman 1981) provided the most sensitive and highest angular resolution images for locating background quasars or other compact extragalactic sources. We further undertook a blind search at the VLA by making snapshot images at 5 GHz of about 50 fields whose solid angles were determined by a combination of primary and delay beams. The Westerbork candidates were also observed. These efforts led to the identification of 3 reference sources with sufficiently strong fluxes ($`>`$25 mJy) and compact structure ($`<1^{\prime \prime }`$). Sources W56 (B1737-294) and W109 (B1745-291) were from the Westerbork surveys, and source GC441 (B1737-294) was the first source cataloged in our 44th blind-search beam. Figure 1 provides a map of the sky surrounding Sgr A with the relative locations of the three reference sources. While these sources are not ‘identified’ as quasars or AGNs, their brightness temperature lower limits and spectra are such that we can confidently assume that they are extragalactic. The three sources yield an estimated source density of 3 per 4 square degrees at an average 5-GHz flux density of 75 mJy. This is consistent with counts of extragalactic sources (Condon 1984). A test of this primary ‘extragalactic’ assumption is presented in a later section. Table 1 provides the assumed source positions for our primary calibration sources, Sgr A, and the three reference sources. The initial positions and the Besselian 1950 reference frame were assumed for all measurements. In our analysis, the relative offsets of the three reference sources from their assumed positions were determined. These offsets and the improved positions for all three reference sources are included in Table 1.
There remains a bias in the reference source positions with respect to our primary phase calibrator, B1748-253, as will be evident when we introduce Figure 2. Furthermore, our assumed position for B1748-253 at the start of our observations is not accurate, as evidenced by hourly observations of the astrometric standard B1741-038. In Table 2 we present J2000 FK5 positions of all sources. The positions of Sgr A and the three reference sources were kindly provided by G. Bower (1999) during his polarization study. These were referenced to B1748-253 assuming our original coordinates. The four positions were then corrected for the errors in the original B1748-253 coordinates, as determined from the value listed in the current VLA manual and that determined in our data via the B1741-038 observations. Our estimate of the 1$`\sigma `$ absolute accuracy is 5 mas.
From 1982 to 1998 a sequence of eight observations at 4.885 GHz was conducted using the VLA in its 36-km (A) configuration. In more recent epochs a second band was recorded at 4.835 GHz, but these data were typically not analyzed owing to the dominance of atmospheric errors that are very strongly correlated between the two bands. In the last three epochs a portion of the standard observing schedule was devoted to observations at 8.435 GHz and 8.485 GHz. Typically three days of observations were obtained each epoch. In the text below we refer to the two bands by their center frequencies of 4.9 and 8.4 GHz.
Each day’s observations were divided into hour-long blocks. During each block we first observed the 3 nearby reference sources, then Sgr A, then 2 reference sources, then Sgr A, then 2 reference sources, etc, with a final observation of the 3 reference sources. Every hour our phase calibration source B1748-253 and a standard VLA calibration source B1741-038 were observed. Table 3 gives a detailed UT schedule for a block to show typical integration times and spacings. Identical LST stop times were used to schedule all observations. Sgr A was observed every 10 minutes in these blocks. During epochs 6 through 8 we allocated one LST block to 8.4-GHz observations. Given that our analysis had shown that the dominant errors in phase referencing were temporal rather than angular we revised the schedule for the 8.4-GHz observations by looking at only one reference source between Sgr A scans and therefore returning to Sgr A every 6 minutes.
Table 4 provides a journal of the observations with epoch number, sequential day index within the epoch, calendar date, Julian Date and a code to indicate which band was observed during which block (C for 4.9 GHz and X for 8.4 GHz). LST blocks at 15, 16, 17, 18 and 19 h were originally used with all observations at 4.9 GHz. In later epochs we observed at 8.4 GHz during 18h block and dropped the 15h block. For recovery of the files containing these observations we include in Table 4 information needed to access the archive tapes at NRAO in Socorro, NM.
Calibration proceeded along standard lines for the VLA. The flux densities for 3C 286 were established with the SETJY task. Then the CALIB task was run to determine the gains for 3C 286 using recommended UV restrictions. Next CALIB was run on secondary flux standards: B1748-253, B1741-038, 3C 48, and NRAO 530. The flux densities of these sources were determined using the GETJY task. The program source, Sgr A and the 3 reference sources, were then calibrated in flux and phase using two-point interpolation of the B1748-253 data via the CLCAL task. These calibration steps are described using the current AIPS program names while the earliest epochs of data were processed using predecessor versions of the software. The hourly calibration to B1748-253 removed instrumental phases and part of the atmospheric phase. While this helps in subsequent data analysis, the effects of this phase calibration are accurately removed by the reference source comparison described below. For epochs 1-7 this initial stage of data reduction was done at NRAO facilities in Datil or Socorro. For the last epoch reduction was done in Berkeley.
Our main approach to the analysis of each source observation uses the 351 complex visibility phases for each orthogonally polarized channel, along with the time of the observation and a table of antenna positions. These were created within the VLA DEC10 and AIPS analysis systems by directing a matrix of scan and vector averaged phases to a disk file using the LISTER task. This procedure was easy to replicate each epoch as the VLA developed, and allowed us freedom to develop algorithms for precise phase referencing. Recently we reanalyzed raw data from epochs 2 and 4 to ensure that there are no systematic errors in these matrix listings of phases, as well as in the associated times, positions and antenna locations. We conclude that there are no systematic effects at the milliarcsecond level in the second difference position offsets described below.
## 3 ANALYSIS
The phase calibration described in §2 using B1748-253 every hour leaves as much as one radian of residual phase on the longer baselines with time scales for variation as fast as 15 minutes. We reason that the bulk of this differential atmospheric phase can be modeled by a differential refraction angle, which is differential owing to the B1748-253 calibration. In other words, the phases for any scan can be modeled by a plane wave deviation from the assumed source direction. Furthermore we expect that the differential refraction angles for our three reference sources will differ from those of Sgr A according to a simple Taylor series expansion in angle on the sky and in time. The discussion focuses here on atmospheric perturbations (i.e., tropospheric and ionospheric), but any source of astrometric error (e.g., frequency, time, baseline, reference frame) will have similar effects. While previous experience and analysis supported this approach, we know that the differential phase fluctuations from the atmosphere will show higher order spatial variations than the plane-wave assumption in this model. We expect, however, that the effects of these higher order terms will be similarly encoded in the differential refraction angles for the set of sources.
Our first program reads a scan of phases and associated antenna locations, and then fits them with an iterative algorithm to a plane wave model to produce the instantaneous refraction angle, $`\mathrm{\Delta }\mathbf{s}_i(t_j)`$. Sidereal time is calculated from the recorded TAI values, and current coordinates of the sources were rotated from the B1950.0 positions (Table 1). On the first iteration only phase data from baselines with projected lengths between 150 and 200 k$`\lambda `$ are used. The minimum excludes baselines for which large scale structure will confuse the phase. The maximum prevents use of data that may have a $`2\pi `$ lobe ambiguity. Phases that exceed 90° are excluded owing to a possible lobe ambiguity. On the next iteration, the first estimate is used and the maximum baseline is extended to include the full array. One final pass is done to ensure that the maximum amount of data is used. Each $`\mathrm{\Delta }\mathbf{s}_i(t_j)`$ solution has its internal error estimated from the variations of the phases. Much of the phase variation is not independent from baseline to baseline, so this internal error estimate will underestimate the uncertainty. The error will however reflect the phase scatter, and so is useful as a relative weight in further analysis.
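Stripped of detail, the fit is iterated linear least squares on wrapped phase residuals. The following is a minimal sketch of such a scheme; the function name, the NumPy formulation and the exact rejection logic are our own illustrative choices, not the original DEC10-era implementation.

```python
import numpy as np

def fit_refraction_angle(u, v, phase, n_iter=3):
    """Fit a differential refraction angle (da, dd), in radians, to residual
    visibility phases (radians) on baselines with coordinates (u, v) in
    wavelengths.  Phase model for a small position offset: 2*pi*(u*da + v*dd)."""
    blen = np.hypot(u, v)
    da = dd = 0.0
    for it in range(n_iter):
        if it == 0:
            keep = (blen > 150e3) & (blen < 200e3)  # 150-200 klambda on first pass
        else:
            keep = np.ones_like(blen, dtype=bool)   # then extend to the full array
        resid = np.angle(np.exp(1j * (phase - 2 * np.pi * (u * da + v * dd))))
        keep &= np.abs(resid) < np.pi / 2           # drop residuals beyond 90 deg (lobe ambiguity)
        A = 2 * np.pi * np.column_stack([u[keep], v[keep]])
        sol, *_ = np.linalg.lstsq(A, resid[keep], rcond=None)
        da, dd = da + sol[0], dd + sol[1]
    return da, dd
```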
Figure 2 displays three sets of these differential refraction angles for one day each in epochs 2, 3 and 8. The data from epochs 2 and 8 show the best differential phase stability. In general, the four sources, Sgr A and the three reference objects, meander back and forth with an amplitude of 0.1<sup>′′</sup> on a time scale of one half hour. The data from epoch 3 (1983 September 2) display the worst differential phase stability in the entire experiment.
One can readily see in Figure 2 that the position of Sgr A drifts away from the cluster of reference sources over the sixteen year span of the data in both right ascension and declination which corresponds to an apparent motion toward negative galactic longitude. This drift is caused mainly by the inexorable rotation of the solar system around the center of the galaxy!
In our next analysis step we interpolate the differential refraction angles of the reference source observations to the times and position of the Sgr A observations. Figure 2 demonstrates that temporal variations dominate. If we were to start the program over, we would choose to switch sources even more rapidly. We separately analyze the data within each of the one-hour blocks of the schedule. Position offsets are removed from the reference sources, as we used constant source position models for all observations while determining improved positions in later years. All reference source data in a given block, typically 12 observations, are fit to an 8th-order polynomial in time. This polynomial is then evaluated at the times of all sources and removed. Then a simple two-point calibration is done between the data of all reference sources for each Sgr A observation. Examples of these polynomials are shown in Figure 2.
These two analysis steps can be represented as follows: First the $`\chi ^2`$ sums are defined that allow solution for the right ascension ($`a_k`$) and declination ($`d_k`$) polynomial coefficients.
$$\chi _a^2=\mathrm{\Sigma }_{i=1,3}[\mathrm{\Delta }\alpha _i(t_j)-\mathrm{\Sigma }_{k=0}^{k=n}a_k(t_j-\langle t\rangle )^k]^2.$$
$`(1a)`$
$$\chi _d^2=\mathrm{\Sigma }_{i=1,3}[\mathrm{\Delta }\delta _i(t_j)-\mathrm{\Sigma }_{k=0}^{k=n}d_k(t_j-\langle t\rangle )^k]^2.$$
$`(1b)`$
Then the polynomial coefficient estimates ($`\widehat{a}_k,\widehat{d}_k`$) are used to remove this effect from all sources.
$$\mathrm{\Delta }\alpha _i^{\prime }(t_j)=\mathrm{\Delta }\alpha _i-\mathrm{\Sigma }_{k=0}^{k=n}\widehat{a}_k(t_j-\langle t\rangle )^k.$$
$`(2a)`$
$$\mathrm{\Delta }\delta _i^{\prime }(t_j)=\mathrm{\Delta }\delta _i-\mathrm{\Sigma }_{k=0}^{k=n}\widehat{d}_k(t_j-\langle t\rangle )^k.$$
$`(2b)`$
Finally the primed reference source data are interpolated in time, combined with weights to effect an interpolation in angle, and subtracted from the primed Sgr A data.
$$\mathrm{\Delta }s_{\ast }^{\prime \prime }(t_j)=\mathrm{\Delta }s_{\ast }^{\prime }(t_j)-\mathrm{\Sigma }_{i=1}^{i=3}w_i\left[\mathrm{\Delta }s_i^{\prime }(t_{j+})\left(\frac{t_j-t_{j-}}{t_{j+}-t_{j-}}\right)+\mathrm{\Delta }s_i^{\prime }(t_{j-})\left(\frac{t_{j+}-t_j}{t_{j+}-t_{j-}}\right)\right].$$
$`(3)`$
The optimal weights were chosen such that the mean weighted reference position was equal to that of Sgr A: 0.288, 0.288, and 0.424 for GC441, W56, and W109, respectively. One can estimate these weights by inspection of Figure 1, which has the right ascension of W56 nearly equal to that of Sgr A and the declination of W109 nearly that of Sgr A, with the right ascension offset of GC441 nearly double and opposite that of W109 and the declination of GC441 nearly equal and opposite that of W56. The errors are propagated from the internal errors carried along with the various steps outlined above. In the best conditions these errors do indicate the agreement of the data. The errors also indicate when the data are less good, but, as stated earlier, their magnitude may underestimate the expected data agreement owing to correlation of phase errors between baselines.
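A compact implementation of Eqs. (1)-(3) for one block and one coordinate might look as follows; the data layout and function signature are illustrative assumptions, and np.interp supplies exactly the two-point linear interpolation of Eq. (3).

```python
import numpy as np

W = {"GC441": 0.288, "W56": 0.288, "W109": 0.424}  # angular weights from the text

def doubly_differenced_offsets(t_ref, off_ref, t_sgr, off_sgr, n=8):
    """Eqs. (1)-(3) for one coordinate (RA or Dec) within one hour block.
    t_ref, off_ref : dicts mapping source name -> arrays of scan times, offsets
    t_sgr, off_sgr : scan times (assumed sorted) and offsets of the Sgr A scans."""
    # Eq. (1): one n-th order polynomial fit to all reference-source data
    t_all = np.concatenate([t_ref[s] for s in W])
    d_all = np.concatenate([off_ref[s] for s in W])
    tmid = t_all.mean()
    coeff = np.polyfit(t_all - tmid, d_all, n)
    # Eq. (2): remove the polynomial from every source
    prime = {s: off_ref[s] - np.polyval(coeff, t_ref[s] - tmid) for s in W}
    sgr_prime = off_sgr - np.polyval(coeff, t_sgr - tmid)
    # Eq. (3): two-point time interpolation of each reference source to the
    # Sgr A scan times, combined with the angular weights, then subtracted
    correction = sum(W[s] * np.interp(t_sgr, t_ref[s], prime[s]) for s in W)
    return sgr_prime - correction
```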
Our next step is to combine the position offsets for Sgr A for each block on each day, using the internally propagated errors as weighting factors. The poor quality of the low elevation data in the 15h LST block leads us to ignore these data for all days. Only on a few occasions does their quality match that of the higher elevation data. The results of this block averaging for the 4.9-GHz data are displayed in Figure 3, along with a weighted least squares fit for a proper motion which is:
$$\mu _{\alpha ,\ast }=-2.70\pm 0.15\mathrm{mas}\mathrm{y}^{-1}.$$
$`(4a)`$
$$\mu _{\delta ,\ast }=-5.60\pm 0.20\mathrm{mas}\mathrm{y}^{-1}.$$
$`(4b)`$
This fit is presented in Figure 4 along with the results of six other fits to subsets of the data. In three subsets we selected one of the three days in each epoch. In the other three subsets we selected one of the three hour angle blocks which are available for all epochs. These provide a measure of the effects of the troposphere and other errors on our measurement and serve as our primary estimator of uncertainty in the measured proper motion. We also explored the chance possibility that the reference sources themselves might be galactic by setting the weight of each reference source to 0 in separate runs. The results are all contained in the error polygon shown in Figure 4 which is used to estimate the errors quoted above. We conclude then that the reference sources are indeed extragalactic and not chance compact objects in the center themselves.
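The epoch positions are reduced to a motion by a weighted straight-line fit; the same routine rerun on day or hour-angle subsets yields the spread that defines the error polygon of Figure 4. A minimal sketch with the standard weighted least-squares formulas:

```python
import numpy as np

def proper_motion(t, x, sigma):
    """Weighted least-squares slope and its formal error for epoch offsets x(t);
    t in years, x and sigma (1-sigma errors) in mas -> slope in mas/yr."""
    w = 1.0 / sigma**2
    tbar = np.sum(w * t) / np.sum(w)
    s_tt = np.sum(w * (t - tbar)**2)
    mu = np.sum(w * (t - tbar) * x) / s_tt
    return mu, np.sqrt(1.0 / s_tt)
```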
The 3-epoch, 8.4-GHz data also provided a proper motion fit which is presented in Figure 4. The result is consistent with the 4.9-GHz result quoted above although the errors are larger. Again we used the data on independent days to provide three independent fits to assess errors.
The phase analysis discussed here has been done primarily on computers at Berkeley with migration from VAX to $`\mu `$VAX to SUN.
Use of the B1950 coordinate frame is not ideal for precision astrometry owing to improved precession constants and its incorporation of E-terms of aberration into the calibrator source positions. However, our differential technique suppresses errors in the reference frame and in the calculation of apparent coordinates for use in observation time modeling of the fringe phase. We have inspected the effects of the B1950 system by precessing source positions at and near Sgr A from 1950 to various epochs in the range of 1981 to 1998 with old precession and nutation values, and then to 2000 using the new values as specified for FK4 to FK5 catalog conversions by Seidelmann (1992, §3.5). We find that a false proper motion of
$$\delta \mu _\alpha =0.0\mathrm{mas}\mathrm{y}^{-1}.$$
$`(5a)`$
$$\delta \mu _\delta =0.2\mathrm{mas}\mathrm{y}^{-1}.$$
$`(5b)`$
is induced. This small motion is similar for our three reference sources and therefore has negligible effect on our measurements. The size of the effect is significantly larger in other parts of the sky.
## 4 INTERPRETATION
### 4.1 Secular parallax for object at rest in the galactic center
The expected motion for an object at rest in the galactic barycenter, its secular parallax, is given in galactic coordinates by
$$[\mu _l,\mu _b]_\mathrm{\Pi }=[\mu _l,\mu _b]_{\mathrm{GR}}+[\mu _l,\mu _b]_{\odot }=[-(A-B),0]-[V_{\odot }/R_{\odot },W_{\odot }/R_{\odot }],$$
$`(6)`$
where $`A`$ and $`B`$ are Oort’s constants expressed in angular terms, $`V_{\odot }`$ and $`W_{\odot }`$ give the solar motion with respect to the local standard of rest in the directions of $`l=90^{\circ }`$ and $`b=90^{\circ }`$, respectively, and $`R_{\odot }`$ is the distance to the galactic center. The 1984 IAU adopted value of $`(A-B)`$ is $`26.4\pm 1.9`$ km s<sup>-1</sup> kpc<sup>-1</sup> (Kerr & Lynden-Bell 1986). More recent determinations are consistent with this value: Hanson (1987) uses the Lick northern sky proper motion data to obtain $`25.2\pm 1.9`$ km s<sup>-1</sup> kpc<sup>-1</sup>; Feast & Whitelock (1997) use a Hipparcos study of cepheid stars to obtain $`27.2\pm 1.0`$ km s<sup>-1</sup> kpc<sup>-1</sup>; Olling & Merrifield (1998) use a more complete model of the galactic mass field to determine $`25.2\pm 1.9`$ km s<sup>-1</sup> kpc<sup>-1</sup>; and Feast, Pont, & Whitelock (1998) analyze the cepheid period-luminosity zero point from radial velocities and Hipparcos proper motions and revise their previous result to $`27.23\pm 0.86`$ km s<sup>-1</sup> kpc<sup>-1</sup>. As the 1984 IAU value of $`(A-B)`$ remains in the midst of these new estimates, we will use it in further calculations.
$$[\mu _l,\mu _b]_{\mathrm{GR}}=[-5.57\pm 0.42,0.0]\mathrm{mas}\mathrm{y}^{-1}.$$
$`(7)`$
The solar motion has been determined by Dehnen & Binney (1998) using Hipparcos results: $`(U_{\odot },V_{\odot },W_{\odot })=(11.0\pm 0.4,5.3\pm 0.6,7.0\pm 0.4)`$ km s<sup>-1</sup>. The apparent proper motion owing to solar motion with respect to the local standard of rest, using $`R_{\odot }=8.5`$ kpc, is
$$[\mu _l,\mu _b]_{\odot }=[-0.13\pm 0.02,-0.17\pm 0.01]\mathrm{mas}\mathrm{y}^{-1}.$$
$`(8)`$
The total secular parallax is
$$[\mu _l,\mu _b]_\mathrm{\Pi }=[-5.70\pm 0.42,-0.17\pm 0.01]\mathrm{mas}\mathrm{y}^{-1}.$$
$`(9)`$
The solar motion contributes negligible additional uncertainty to the secular parallax.
### 4.2 Apparent peculiar motion of Sgr A
We project the observed proper motion, equation 4, from equatorial coordinates to galactic coordinates and remove the expected secular parallax for an object at rest in the galactic barycenter, equation 9, to obtain the peculiar motion. The north celestial pole (NCP), north galactic pole (NGP), and galactic center (GC) form a spherical triangle. The equatorial coordinates of the NGP and the GC are: 12<sup>h</sup> 49<sup>m</sup> and +27° 24′; and 17<sup>h</sup> 42<sup>m</sup> 24<sup>s</sup> and −28° 55′, respectively (B1950; Blaauw et al. 1960). The spherical angle NGP-NCP-GC is then 73.37° and the side of the triangle opposite NGP-GC-NCP has length 62.60°. NGP-GC-NCP is the negative of the position angle <sup>2</sup><sup>2</sup>2position angles are measured north toward east, counterclockwise on the sky of the positive galactic latitude axis ($`\widehat{b}`$), and by the law of sines is 58.29°. The position angle of the positive longitude axis ($`\widehat{l}`$) is then +31.71°. Errors in the determination of the galactic pole and center are of order 7′ (Blaauw et al. 1960) and hence of little consequence to these calculations. Redetermination of the principal plane of the galaxy via population II stars seen by IRAS (Habing 1988) would be an interesting stellar-mass check on the early HI gaseous disk determination. The resultant observed proper motion of Sgr A in galactic coordinates is
$$\mu _{l,\ast }=[-6.18\pm 0.19]\mathrm{mas}\mathrm{y}^{-1}$$
$`(11a)`$
$$\mu _{b,\ast }=[-0.65\pm 0.17]\mathrm{mas}\mathrm{y}^{-1}$$
$`(11b).`$
The observed peculiar motion of Sgr A is then obtained by subtracting the expected secular parallax, equation 9, from the measurements:
$$\mathrm{\Delta }\mu _{l,\ast }=[-0.48\pm 0.46]\mathrm{mas}\mathrm{y}^{-1}$$
$`(12a)`$
$$\mathrm{\Delta }\mu _{b,\ast }=[-0.48\pm 0.17]\mathrm{mas}\mathrm{y}^{-1}$$
$`(12b).`$
The errors have been combined in quadrature. At a distance of 8.5 kpc the peculiar velocity of Sgr A is
$$v_{l,\ast }=[-19\pm 19]\mathrm{km}\mathrm{s}^{-1}$$
$`(13a)`$
$$v_{b,\ast }=[-19\pm 7]\mathrm{km}\mathrm{s}^{-1}$$
$`(13b).`$
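The chain from equation 4 through equation 13 is simple arithmetic and can be checked directly; the snippet below reproduces the numbers using the standard conversion of 4.74 km s<sup>-1</sup> per mas y<sup>-1</sup> at 1 kpc. It is a consistency check only, not part of the original analysis code.

```python
import numpy as np

K = 4.74                                            # km/s per (mas/yr) at 1 kpc
pa_l, pa_b = np.radians(31.71), np.radians(-58.29)  # position angles of the l-, b-axes

mu_ra, mu_dec = -2.70, -5.60                        # Eq. (4), mas/yr
# project the (east, north) motion onto the galactic axes -> Eq. (11)
mu_l = mu_ra * np.sin(pa_l) + mu_dec * np.cos(pa_l)  # -6.18
mu_b = mu_ra * np.sin(pa_b) + mu_dec * np.cos(pa_b)  # -0.65
# subtract the secular parallax of Eq. (9) -> Eq. (12)
dmu_l, dmu_b = mu_l - (-5.70), mu_b - (-0.17)        # -0.48, -0.48
# peculiar velocity at R0 = 8.5 kpc -> Eq. (13)
print(dmu_l * K * 8.5, dmu_b * K * 8.5)              # ~ -19, -19 km/s
```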
In the subsequent sections we discuss this result further.
### 4.3 Radio wave propagation effects
VLBI observations show that the apparent angular diameter of Sgr A depends strongly on frequency, 1.4 mas $`\lambda ^{+2.0}`$, which is consistent with angular broadening by scattering in the intervening plasma (Lo et al. 1981, 1985; Backer 1988; Jauncey et al. 1989; Lo et al. 1993; Alberdi et al. 1993; Yusef-Zadeh et al. 1994; Backer 1994; Rogers et al. 1994; Bower & Backer 1998; Lo et al. 1998). The scattering interpretation is strengthened by the demonstration that OH masers within 0.5 degrees of Sgr A are similarly broadened (van Langevelde et al. 1992; Frail et al. 1994). A simple explanation is that the diffuse thermal plasma in the central 140 pc (diameter in longitude) is sufficiently turbulent to produce the observed scattering. This gas may be that seen in long wavelength thermal bremsstrahlung emission by Mezger & Pauls (1979) which has an emission measure of at least $`10^4`$ cm<sup>-6</sup> pc. Alternatively there may be scattering within material that is being accreted onto the black hole and serves as fuel for Sgr A (Backer & Sramek 1999); see also §5 of van Bueren (1978), a comprehensive ‘pre-ADAF’ explanation of Sgr A as an accreting black hole.
While most of these observations have been conducted in the northern hemisphere where VLBI baseline coverage is poor, several experiments have shown convincingly that the scatter-broadened image is elliptical with a ratio of axes of about 2:1 at position angle (PA) $`80^{\circ }`$. The strong ellipticity in scattering most likely indicates that the scattering gas is threaded by a relatively uniform magnetic field whose pressure dominates the thermal and turbulent pressures of the plasma. The thin ‘threads’ of synchrotron emission detected in the galactic center provide ample evidence for strong and uniform magnetic fields (Yusef-Zadeh, Morris, & Chance 1984). The field is not uniform over scales of 50 pc, as the OH maser elongations are not aligned.
Our concern here is not so much with the scattering itself, but rather with the stability of the scattering. Consider a scattering screen located a distance $`fD`$ from Sgr A, with the observer at $`D`$. The screen broadens a plane wave by an angle $`\mathrm{\Theta }_s`$. The observed source size is then $`\theta _{\mathrm{obs}}=f\mathrm{\Theta }_s`$, which leads to a decorrelation in the visibility domain on baselines of length $`b_d=1/(2\pi f\mathrm{\Theta }_s)`$. This decorrelation arises from phase differences through the screen on length scales of $`fb_d`$. For 4.9 (8.4) GHz this is $`240f`$ $`(400f)`$ km. The identification of the scattering with a 140-pc halo around the galactic center suggests $`f\sim 0.01`$ and therefore decorrelation on extremely small scales, 2.4 (4.0) km. These length scales are most likely smaller than the inner scale of the density fluctuation spectrum, which is set by plasma wave dissipation processes. When phase decorrelation occurs on scales much smaller than that of the density fluctuations, owing to many radians of phase wrapping, the expected dependence of the scattering diameter on wavelength is exactly $`\lambda ^{+2.0}`$, which is consistent with current observations. In this regime we also do not expect image wander from large scale refractive effects, and any changes in the angular broadening will occur on long time scales set by the ratio of the linear size of the scattered image to the transverse motion of the line of sight through the perturbing plasma. We proceed to inspect the evidence for stable propagation through the intervening medium.
The relevance of this discussion to our proper motion measurements is that our epoch accuracy is around 1-2 mas while the scatter-broadened image is 50 (18) mas (and the VLA synthesized beam is 500 (180) mas) at 4.9 (8.4) GHz, respectively. The scattered image size itself is very stable. Lo et al. (1981) determined a size at 8.4 GHz of $`17\pm 1`$ mas in 1974.4 with principal resolution in the East-West direction. Later measurements in 1983.4 had sufficient UV coverage to determine elliptical source parameters: $`15.5\pm 0.1`$ mas with an axial ratio of $`0.55\pm 0.25`$ and PA of $`98^{\circ }\pm 15^{\circ }`$ (Lo et al. 1985). Recent VLBA observations provide parameters of $`18.0\pm 1.5`$ mas with a ratio of $`0.55\pm 0.14`$ and PA of $`78^{\circ }\pm 6^{\circ }`$ (Lo et al. 1998). We conclude that the source size has not changed by more than 5-10% over 23 y, either with random or secular variations. Thus the apparent source image is not expanding or contracting at a rate any larger than 0.07 mas y<sup>-1</sup> at 8.4 GHz.
The time scale for the scattered image to sample an independent portion of the turbulent screen is given by the ratio of the linear size of the image to the velocity of the screen relative to the line of sight. If we take this transverse velocity to be 100 km s<sup>-1</sup>, which is characteristic of the rotating molecular disk, then independent samples of any refractive beam wander (or source size change) will occur on time intervals of 20 (7) years, respectively. Over the somewhat shorter 16-y interval of our 4.9-GHz measurements, we might see just a linear change of the position if our conclusion above, that refraction is unimportant, were wrong. The typical contribution to the proper motion from refractive wander is $`0.25g`$ mas y<sup>-1</sup>, where $`g(t)`$ is the fractional shift of the centroid of the scattering disk from its long term average. Statistically, the amplitude of this false motion would be frequency independent, as the time scale shortens with frequency just in proportion to the apparent size. During any short time interval the refractive motion will differ over an octave of frequency, and so we could expect that the effects at our two radio frequencies would differ.
A separate test of refractive effects is to look at the differential positions at the two frequencies at a single epoch. In our Green Bank experiment (BS82) we found that the differential positions at 2.7 and 8.1 GHz were identical to within $`0.02`$ of the scattering diameter at 2.7 GHz. Note that the reference sources in the Green Bank and VLA experiments differ. In Figure 5 we show the differential Sgr A positions from three days of observations in epoch 8 at 4.9 GHz and 8.4 GHz. There is a systematic offset of $`5`$ mas in right ascension, which is 0.1 times the scattering diameter at 5 GHz. Source structure can be one source of the difference, although 5 mas is a large value for this effect. Without further high resolution imaging and monitoring we cannot determine the origin or the stability of this offset. A similar offset is seen in the epoch 7 data, although the errors are somewhat larger. We conclude that even if refractive wander is present, $`g`$ is no more than 0.1, and the apparent motion it might contribute is less than our current errors.
### 4.4 Dynamical effects on the central black hole
A black hole in the center of the galaxy will have a statistical motion with respect to the galactic barycenter owing to the influence of the uneven momentum distribution of objects surrounding it. Consider the motion induced by the transit, or orbit, of a perturbing mass ($`m_2`$) such as a nearby star or a passing molecular cloud. The effect of $`m_2`$ on the mass enclosed ($`m_1(r)`$), and therefore Sgr A, is given by the acceleration $`Gm_2/r^2`$ acting for a time given by $`r`$ divided by the circular velocity at $`r`$, $`r/v_c(r)`$. The circular velocity at $`r`$ is given by $`\sqrt{Gm_1(r)/r}`$. The resulting motion of the barycenter (towards $`m_2`$) is then:
$$\mathrm{\Delta }v_{\mathrm{BC}}=\frac{\sqrt{G}m_2}{\sqrt{rm_1(r)}}.$$
$`(14)`$
Figure 6 shows the mass and radial distance of a number of asymmetric masses in the center of the galaxy. In general these appear to grow as $`r^{1.5}`$, as shown in Figure 6. The asymmetric masses range from the nearest solar mass star, whose orbital period is long with respect to our measurement interval, to the star formation complex Sgr B2. Inside of about 1 pc $`m_1`$ is constant, as shown by the IR stellar motions (Eckart & Genzel 1997; Ghez et al. 1998), and $`\mathrm{\Delta }v_{\mathrm{BC}}`$ will be proportional to $`r`$ as one considers various contributions to the barycentric motion. The resultant motions, however, are small, less than 1 km s<sup>-1</sup>. In the range of 1 pc to 100 pc the enclosed luminous mass grows as $`r^{1.2}`$, based on 2-$`\mathrm{\mu m}`$ measurements (see the review by Genzel et al. 1994). Mass asymmetries in this range then have an influence on the barycenter motion that grows more slowly, as $`r^{0.4}`$. For example, the molecular cloud M-0.02-0.07 shown in Figure 6 will give the enclosed mass at its radius a peculiar motion of about 1 km s<sup>-1</sup>; a worked example of this estimate is sketched below. As one goes to larger and larger radii, the peculiar motion from mass asymmetries will be increasingly dominated by a longitude motion and not a latitude motion, which is the central concern in this paper. We conclude that the influence of mass asymmetries in the galactic center can be ignored at the present level of accuracy.
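Equation (14) is easy to evaluate for numbers of the order quoted above; the specific $`m_2`$, $`r`$ and $`m_1(r)`$ values below are our illustrative assumptions, not measured properties of any particular cloud.

```python
import numpy as np

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dv_barycenter(m2_msun, r_kpc, m1_msun):
    # Eq. (14): velocity kick of the enclosed mass m1(r) from a perturber m2 at radius r
    return np.sqrt(G) * m2_msun / np.sqrt(r_kpc * m1_msun)

# a ~3e5 Msun cloud at ~10 pc against ~3e7 Msun of enclosed mass
print(dv_barycenter(3.0e5, 0.010, 3.0e7))  # ~1.1 km/s
```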
The perturbations of a few km s<sup>-1</sup> that are expected for a central black hole in our galaxy based on the discussion above can be compared to those expected in other galaxies based on observed asymmetries. The nature of the double nucleus in M31 remains uncertain. The nucleus has probably been identified by the large velocity dispersion at the location of the P2 nucleus (Statler et al. 1999). The other nucleus, P1, may be a concentration of stars in an eccentric disk (Tremaine 1995). Alternatively, P1 may be a star cluster which will shortly be ‘absorbed’ into the central nucleus by tidal disruption. In either case the observations indicate that the $`7\times 10^7`$ M<sub>⊙</sub> black hole and the surrounding stars will not be at rest in the mass center of M31 at the level of 10 km s<sup>-1</sup>, owing to the influence of the estimated $`3\times 10^6`$ M<sub>⊙</sub> of stars in P1. This mass asymmetry in M31 is considerably larger than that known for our galaxy at a comparable radius (Fig. 6).
One source of the excitation of an eccentric disk in M31 mentioned above is an unstable $`m=1`$ normal mode in an axisymmetric disk. Numerical N-body simulations by Miller & Smith (1992) have shown that the cores of galaxies will exhibit motions owing to an unstable $`m=1`$ normal mode of oscillation. For the parameters of our galactic center, the black hole and its associated cusp of stars could be moving as fast as 70 km s<sup>-1</sup> (Miller 1996). The instability is the result of an amplification of the small motions discussed above. The direction of this putative motion is arbitrary if the perturbations are the result of mass asymmetries on scales less than 100 pc. Over these scales there is as much evidence for order as disorder with respect to the well-defined galactic plane seen on kpc scales. At the level of 70 km s<sup>-1</sup> we definitely do not see the effect predicted by Miller. We could be unlucky, and the motion may be largely radial. If so, we would expect the black hole to be offset in angle from the centroid of stars at larger radii, which could be tested with analysis of the IR stellar distribution.
If there is a massive black hole at the center of the galaxy, Gould & Ramirez (1998) have shown that our limit on the observed acceleration implies that Sgr A is either coincident with or closely bound to that black hole. They point out that acceleration has the advantage of not being confused by uncertainty in Oort’s constants. If one expresses both the peculiar velocity and the acceleration of Sgr A in units of the Earth’s motion around the Sun, the normalized velocity and acceleration are equal at a distance of 140 AU for a gravitational mass of $`2.5\times 10^6`$ M<sub>⊙</sub>. Acceleration measurements, or limits, are therefore relatively more important for distances inside 140 AU if the measurements have comparable precision in Earth units. If Sgr A is a random object in the gravitational potential that one can establish firmly from the IR proper motion studies, then its acceleration is expected to be $`0.27a_{\oplus }`$, where $`a_{\oplus }`$ is the acceleration of the Earth in its orbit around the Sun. Our upper limit on the acceleration allowed by the full 1982 to 1998 data set is 0.3 mas y<sup>-2</sup>, or $`0.06a_{\oplus }`$. This result, although slightly higher than that used by Gould & Ramirez, is still small compared to that of a low mass object near the massive black hole. By comparison, the precision of our latitude peculiar motion is 7 km s<sup>-1</sup>, or $`0.21v_{\oplus }`$. We conclude that if the center harbors a massive black hole, then the radio source Sgr A must be attached to it. They also discuss the possibility that Sgr A is in a very close orbit around the black hole, with an excursion less than our single epoch precision and an orbital period less than our time base. The VLBA result of Reid (1999) and its comparison with the longer duration VLA result here will place further constraints on this extreme scenario.
Gould & Ramirez also use the limit on acceleration to state the low probability of Sgr A being a random object passing through a dense cluster of weakly interacting dark matter. In their conclusion they return to this scenario and describe a test using flux density variations caused by Doppler boosting. Such variations would be evident in the daily sampled data discussed by Backer (1994). They note in passing that the equipartition mass of Sgr A based on the acceleration limit is 250 M<sub>⊙</sub>, assuming a 10 M<sub>⊙</sub> characteristic mass. The ‘equipartition’ mass limit for Sgr A based on the limit on peculiar motion of $`<19`$ km s<sup>-1</sup> and 10 M<sub>⊙</sub> IR stars moving at 1000 km s<sup>-1</sup> is $`>2\times 10^4`$ M<sub>⊙</sub>.
Maoz (1998) discusses the dynamical constraints on alternatives to supermassive black holes in galactic nuclei. Critical to his discussion are estimates of the black hole mass and the surrounding density in the cusp of stars that forms around the black hole. Sgr A’s diameter upper limit from the 3-mm VLBI measurements of Rogers et al. (1994) is 1 AU. When combined with the mass limit, this leads to a lower limit for its density of $`10^{21}`$ M<sub>⊙</sub> pc<sup>-3</sup>. As noted by Maoz (1998, personal communication), one can argue this point. The radio emission may come from the central body of a cluster or a disk, and hence may not delimit the full size of the parent mass. In proceeding we assume that the radio emission encompasses the parent mass, as it would in the case of quasi-spherical accretion and core-jet models. The density estimate is such that any form of matter other than a black hole will have a dissipative lifetime less than $`10^8`$ y.
## 5 CONCLUSION
Measurements with the NRAO Very Large Array from 1982 to 1998 at 4.9 GHz have provided the first proper motion of the compact radio source in our galactic center, Sgr A. The peculiar motion of Sgr A in the mass center of the galaxy is obtained after removing an estimate of the secular parallax, which results from the orbital motion of the Sun about the galactic center. In latitude the estimated peculiar motion is $`-19\pm 7`$ km s<sup>-1</sup>. Our ongoing uncertainty about the nature of Sgr A leads us to use the limit on peculiar motion along with an equipartition argument to place a lower bound on its mass of $`2\times 10^4`$ M<sub>⊙</sub>. The inferred mass density of Sgr A is then $`10^{21}`$ M<sub>⊙</sub> pc<sup>-3</sup>, based on a previously estimated 1 AU source diameter at 86 GHz. This is the highest mass density inferred for any galactic black hole candidate. Mass density is currently the best argument for the existence of a black hole when consideration is given to the stability of configurations of dark matter other than a solitary black hole.
The simplest model is that Sgr A is radiation from the atmosphere of the $`2.5\times 10^6`$ M<sub>⊙</sub> black hole. Nearly steady infall and outflow models for the radiative properties of Sgr A exist. The possibility of a non-zero peculiar motion has led to consideration of the influence of known mass asymmetries in the central region of our galaxy. We conclude that these would account for no more than a few km s<sup>-1</sup> perturbation. Another source of motion may be an $`m=1`$ instability in the central potential. Our estimated peculiar motion is in fact smaller than the estimated size of this effect, although projection factors need to be considered to make a firm statistical statement.
A nonzero proper motion might be attributed to systematic errors in the measurements, time variable frequency dependent effects, or variations in the intrinsic structure. At 4.9 GHz Sgr A is scattered by angles significantly greater than our relative position measurement accuracy. While one can argue that variable refraction is probably not important, this remains a source of uncertainty for the VLA measurements. Models for the radio emission of Sgr A suggest an increasing intrinsic source size with decreasing radio frequency. This could lead to additional systematic effects for the VLA measurements. Further measurements at higher radio frequencies are planned to resolve these uncertainties.
We commend the National Radio Astronomy Observatory and its staff for developing and maintaining the superb Very Large Array instrument and for providing ample support during this extended observing campaign. The genesis of the VLA experiment started with the 1976-1981 Green Bank experiment, and that was inspired by a lunch time conversation with R. Fisher in Green Bank circa 1975. The authors’ support has been from UC Berkeley, NAIC, and NRAO, and we therefore thank the NSF and California taxpayers. We thank M. Reid for discussions about his and our measurements, and E. Maoz, A. Sternberg and I. King for comments on the manuscript. G. Bower provided a valuable independent analysis of absolute positions from the epoch 8 data set.
# Local equilibrium in heavy ion collisions. Microscopic model versus statistical model analysis.
## I Introduction
Since the beginning of the 1930s, when cosmic ray cascades of various particles were detected, physicists have been trying to describe the process of multiple production of particles in ultrarelativistic collisions of hadrons and nuclei. The idea of Fermi, namely that all secondary particles produced in a Lorentz-contracted volume are in statistical equilibrium , was further modified by Pomeranchuk . It was finally developed by Landau into the hydrodynamic theory of multiparticle processes. The most important quantitative predictions of the hydrodynamic theory, such as the dependence of the average particle multiplicity $`N`$ on the total energy of the system $`\sqrt{s}`$, the rapidity distributions $`dN/dy`$, the violation of Feynman scaling, and the mean value of the transverse momentum $`p_{\perp }`$ and its dependence on $`\sqrt{s}`$ and the mass of the secondaries, have been verified experimentally. On the other hand, the basic assumptions of the theory, such as the formation of the initial state, the relaxation rate of the system to local equilibrium (LE), the sharpness of the freeze-out (FO) and, finally, the equation of state (EOS) of hot and dense hadronic matter, are based on rough estimates which have not been rigorously proven yet.
To answer these questions, one can analyze the dynamics provided by microscopic Monte Carlo simulations, i.e., microscopic string, cascade, transport, etc. models. These models describe experimental data on hadronic and nuclear collisions in a wide energy range reasonably well, but do not postulate local equilibrium. Consequently, the EOS and macroscopic variables like temperature, entropy or chemical potential are not implied, but can be calculated if the system does actually reach LE. For instance, the hypothesis of sharp freeze-out of secondaries in heavy ion collisions was checked by means of three microscopic Monte Carlo models: QGSM , RQMD and UrQMD . It was found that these models exhibit neither a thin nor a thick freeze-out layer resembling the hyperbolic surface predicted by the Bjorken scaling model .
The present paper employs the recently developed ultrarelativistic quantum molecular dynamics (UrQMD) model to examine the approach to local equilibrium of hot and dense nuclear matter produced in central heavy ion collisions at energies from AGS to SPS. Local (and sometimes, in fireball models, even global) equilibrium is the basic ad hoc assumption of the macroscopic hydrodynamic models. It is usually assumed that the nonequilibrium initial stage of nuclear collisions, during which shock waves, partonic jets, etc., heat the system, is considerably shorter than the characteristic hadronization times. Evidently, there must be dissipative and irreversible processes leading to equilibration. One may adopt the scheme of binary collisions, in which the correlations of the many-particle distribution functions, describing highly nonequilibrium states, rapidly set in . The typical time scale for these processes is the collision time, $`\tau _{\mathrm{kin}}\simeq \tau _{\mathrm{coll}}`$. Then a kinetic stage is approached, where the $`N`$-body distribution functions are reduced to many one-particle distribution functions, one for each particle species. For time scales sufficiently larger than $`\tau _{\mathrm{coll}}`$, the evolution of the system can be described in terms of the local average particle number, velocity and energy. These local average values are the moments of the one-particle distribution functions, and the hydrodynamic stage arises. Other processes, which can cause even faster equilibration, are collective instabilities, convective turbulent transport or chaotic relaxation . They can yield a crude estimate of the relaxation times in the system. On the other hand, multiparticle processes can lead to a delay in reaching equilibrium, because the produced particles are not thermalized. The time scale of the equilibration may then appear too long compared to typical hadronization times.
As emphasized in , according to an UrQMD analysis, it is unlikely that global thermal equilibrium sets in for central Au+Au collisions at the AGS energy. This statement remains true at higher energies also. Figure 1 depicts the time evolution of the baryon density in a single central Pb+Pb collision at 160A GeV. Two disks of baryon rich matter, remnants of the colliding nuclei, consisting mostly of resonances and diquarks, are seen in the fragmentation regions. The volume between the disks becomes more and more baryon dilute, but never purely homogeneous. Apparently, global equilibrium is not reached even in central Pb+Pb collisions at SPS energies. However, the occurrence of LE in the central zone of the heavy ion reaction is still not ruled out.
To verify how close the hot hadronic matter in the cell is to equilibrated matter, one can make a comparison with the statistical model (SM) of a hadron gas . Three parameters, namely the energy density $`\epsilon `$, the baryon density $`\rho _\mathrm{B}`$, and the strangeness density $`\rho _\mathrm{S}`$, extracted from the analysis of the cell, are inserted into the equations for an equilibrated ideal gas of hadrons. Then all characteristics of the system in equilibrium, including the yields of the different hadronic species, their temperature $`T`$, and the chemical potentials $`\mu _\mathrm{B}`$ and $`\mu _\mathrm{S}`$, may be calculated unambiguously. If the yields and the energy spectra of the hadrons in the cell are sufficiently close to those of the SM, one can take this as an indication of the creation of equilibrated hadronic matter in the central reaction zone.
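The inversion from $`(\epsilon ,\rho _\mathrm{B},\rho _\mathrm{S})`$ to $`(T,\mu _\mathrm{B},\mu _\mathrm{S})`$ is a three-equation root find. The sketch below implements it for an ideal Boltzmann gas with a deliberately truncated particle table; the species list, the starting guesses and the input numbers are illustrative assumptions only, whereas the full SM uses the complete hadron spectrum and quantum statistics.

```python
import numpy as np
from scipy.special import kn
from scipy.optimize import fsolve

# truncated table: (mass GeV, degeneracy, baryon charge B, strangeness S)
SPECIES = [(0.138, 3, 0, 0),    # pi
           (0.496, 2, 0, +1),   # K
           (0.496, 2, 0, -1),   # Kbar
           (0.939, 4, +1, 0),   # N
           (0.939, 4, -1, 0),   # Nbar
           (1.116, 2, +1, -1)]  # Lambda

def number_density(T, mu, m, g):
    # Boltzmann-statistics density in natural units (GeV^3, hbar = c = 1)
    x = m / T
    return g / (2.0 * np.pi**2) * m * m * T * kn(2, x) * np.exp(mu / T)

def energy_per_particle(T, m):
    x = m / T
    return 3.0 * T + m * kn(1, x) / kn(2, x)

def gas(T, muB, muS):
    eps = rhoB = rhoS = 0.0
    for m, g, B, S in SPECIES:
        n = number_density(T, B * muB + S * muS, m, g)
        eps += n * energy_per_particle(T, m)
        rhoB += B * n
        rhoS += S * n
    return eps, rhoB, rhoS

def solve_sm(eps, rhoB, rhoS):
    # find (T, muB, muS) reproducing the cell's (eps, rhoB, rhoS), all in GeV units
    f = lambda p: np.array(gas(*p)) - np.array([eps, rhoB, rhoS])
    return fsolve(f, x0=(0.130, 0.550, 0.100))

hbarc = 0.19733  # GeV fm, converts fm^-3 and GeV/fm^3 to natural units
print(solve_sm(0.3 * hbarc**3, 0.15 * hbarc**3, 0.0))
```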
This method is applied to extract the equation of state (EOS), which connects the pressure $`P`$ and the energy density $`\epsilon `$ of the system (generally, either $`P`$, $`\epsilon `$, $`\rho _\mathrm{B}`$ or $`P`$, $`T`$ and $`\mu _\mathrm{B}`$). Note that, without the EOS, the system of relativistic hydrodynamic equations which describe the evolution of hadronic matter is incomplete. Therefore, the EOS extracted from the microscopic model has a direct impact on the parametrizations used in the macroscopic (hydrodynamic) models.
This paper is organized as follows: a brief description of the UrQMD model is given in Sec. II, where the necessary and sufficient criteria of local equilibrium are also formulated. It is shown that the hadron distributions in the UrQMD central cell become isotropic at $`t\approx 8`$ fm/$`c`$, irrespective of the initial energy of the reaction. This means that kinetic equilibrium is reached. Section III presents the basic equations of the statistical model of the ideal hadronic gas, which is applied to the analysis of the hadron distributions in the central cell. The relaxation of the hadronic matter in the central zone of central heavy ion collisions towards thermal and chemical equilibrium is studied in Sec. IV. The UrQMD cell and the SM results are compared for beam energies of 10.7A GeV (Au+Au, AGS), 40A GeV (Pb+Pb, SPS), and 160A GeV (Pb+Pb, SPS). Finally, the conclusions are drawn in Sec. V.
## II Criteria of local equilibrium and conditions in the UrQMD cell
### A The UrQMD model
UrQMD is a microscopic transport model designed for the description of hadron-hadron ($`hh`$), hadron-nucleus ($`hA`$), and nucleus-nucleus ($`AA`$) collisions for energies spanning a few hundred MeV up to hundreds of GeV per nucleon in the center-of-mass system (c.m. system). The model is presented in detail in . As discrete, quantized degrees of freedom, the model contains 55 baryon and 32 meson states, together with their antiparticles and explicit isospin-projected states, with masses up to 2.25 GeV/$`c^2`$. Tables of the experimentally available hadron cross sections, resonance widths, and decay modes are implemented as well.
At moderate energies the dynamics of $`hh`$ or $`AA`$ collisions is described in UrQMD in terms of interactions between the hadrons and their excited states (resonances). At higher values of the four-momentum transfer, $`hh`$ interactions cannot be reduced to the hadron-resonance picture anymore, and new excited objects, color strings, come into play. The excitation of the strings makes it energetically favorable to break the string into pieces by producing multiple $`q\overline{q}`$-pairs from the vacuum. Since the color string is assumed to be uniformly stretched, the hadrons produced as a result of the string fragmentation are uniformly distributed in rapidity between the endpoints of the string. The propagation of the hadrons is governed by Hamilton equations of motion, with a binary collision term of the form used in the relativistic Boltzmann-Uehling-Uhlenbeck (BUU) transport model. The Pauli principle is taken into account via the blocking of the final state, if the outgoing phase space is occupied. No Bose enhancement effects are implemented in the model yet.
In its present version UrQMD describes the main properties of both hadronic and nuclear interactions reasonably well. Very important for the further analysis is the fact that the UrQMD model reproduces the experimental transverse mass spectra of hadrons in different rapidity intervals, as shown in Fig. 2. The inverse slope parameter, extracted from a Boltzmann fit to the spectra, is directly linked to the temperature in the statistical model. Thus, if the equilibrium conditions (see below) are satisfied, one may estimate the temperature of the cell.
### B Preequilibrium stage
Let us first define our system. The size of the central zone of perfectly central (impact parameter $`b=0`$ fm) heavy ion collisions should be neither too small, to keep statistical fluctuations under control, nor too large, to avoid averaging over regions with substantially different conditions. The effects caused by the collective flow of particles can also wash out the equilibration picture. In order to diminish the number of distorting factors we choose a central cubic cell of volume $`V=5\times 5\times 5=125`$ fm<sup>3</sup> centered around the origin of the c.m. system of the colliding nuclei. Due to the absence of a preferred direction of collective motion, the collective velocity of the cell is essentially zero.
Then, to decide whether or not local equilibrium in the cell is reached, one has to introduce criteria of equilibrium. In statistical physics the equilibrium state is defined as the state of maximum entropy . However, the direct calculation of the entropy evolution in the cell is notoriously difficult. The cell is not an isolated system: particles leave it freely, and neither internal energy nor particle number is conserved. Therefore we should apply other criteria of equilibrium, more suitable for this problem, bearing in mind that they are consequences of the general principle of maximum entropy.
We start from the necessary conditions which characterize the preequilibrium stage in the central cell. The flow velocities there should have a minimum gradient, tending to zero. This means that local equilibrium in the cell cannot be reached earlier than a certain time $`t^{\mathrm{cross}}=2R/(\gamma _{\mathrm{c}.\mathrm{m}.}v_{\mathrm{c}.\mathrm{m}.})`$, during which the Lorentz contracted nuclei have passed through each other. Here $`R`$ is the radius of the nuclei, and the times $`t^{\mathrm{cross}}`$, typical for each reaction, are 5.46 fm/$`c`$ (10.7A GeV), 2.9 fm/$`c`$ (40A GeV), and 1.44 fm/$`c`$ (160A GeV). After $`t=t^{\mathrm{cross}}`$ the collective flow of freely streaming hadrons rapidly vanishes. A short sketch reproducing these crossing times is given below.
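For orientation, the quoted crossing times can be reproduced with a few lines of code. This is a minimal sketch under stated assumptions: the radius parametrisation $`R=1.12A^{1/3}`$ fm is our choice (it matches the quoted numbers), and the energies are the beam kinetic energies per nucleon in the fixed-target frame.

```python
import math

M_N = 0.938  # nucleon mass [GeV]

def t_cross(A, e_kin):
    R = 1.12 * A ** (1.0 / 3.0)                   # nuclear radius [fm] (assumed r0)
    s_nn = 2 * M_N**2 + 2 * M_N * (e_kin + M_N)   # Mandelstam s per NN pair
    gamma = math.sqrt(s_nn) / (2 * M_N)           # Lorentz factor of each nucleus in c.m.
    gamma_v = math.sqrt(gamma**2 - 1.0)           # gamma * v_cm
    return 2 * R / gamma_v                        # [fm/c]

for A, e_kin, label in [(197, 10.7, "Au+Au, AGS"),
                        (208, 40.0, "Pb+Pb, SPS"),
                        (208, 160.0, "Pb+Pb, SPS")]:
    print(f"{label:12s} E = {e_kin:6.1f}A GeV : t_cross = {t_cross(A, e_kin):.2f} fm/c")
# -> about 5.46, 2.87 and 1.44 fm/c, in agreement with the values above
```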
It seems likely that for a cell of small longitudinal size the preequilibrium stage sets in very quickly. For instance, for the central $`4\times 4\times 1`$ fm<sup>3</sup> cell in Pb+Pb collisions at SPS one may expect equilibration already at $`t^{\mathrm{eq}}=t^{\mathrm{cross}}+\mathrm{\Delta }z/2\beta \approx 2`$ fm/$`c`$. But the detailed analysis shows that the hadronic matter in the different central cells equilibrates at the same rate, which does not depend on the longitudinal size of the cell.
Figure 3 presents the average transverse and longitudinal flow of hadrons in 1/8-th of the central cell, with coordinates $`0\le \{x,y,z\}\le 2.5`$ fm, as a function of time $`t`$. The longitudinal flow reaches its maximum value at times from $`t=4`$ fm/$`c`$ (SPS) to $`t=6`$ fm/$`c`$ (AGS). Then it drops and converges to the transverse flow. At $`t=10`$ fm/$`c`$ the longitudinal component of the collective flow in the central cell is only about 0.1–0.15$`c`$. This corresponds to a temperature distortion $`T_{\mathrm{fl}}\approx m_Nv_{\mathrm{fl}}^2/3\approx 7`$ MeV. Disappearance of the flow implies (i) isotropy of the velocity distributions, which leads to (ii) isotropy of the diagonal elements of the pressure tensor, calculated from the virial theorem ,
$$P_{\{x,y,z\}}=\frac{1}{3V}\sum _i\frac{p_{i\{x,y,z\}}^2}{(m_i^2+p_i^2)^{1/2}},$$
(1)
containing the volume of the cell $`V`$ and the mass and momentum of the $`i`$th hadron, $`m_i`$ and $`p_i`$, respectively. A toy numerical evaluation of Eq. (1) is sketched below.
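As an illustration of Eq. (1), here is a minimal sketch computing the three diagonal pressure components from a (here synthetic) particle sample; in the actual analysis the sample would be the UrQMD hadrons found inside the central cell at a given time.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 125.0                                        # cell volume [fm^3]
m = rng.choice([0.138, 0.938], size=500)         # toy mix of pions/nucleons [GeV]
p = rng.normal(scale=0.3, size=(500, 3))         # toy momenta [GeV]

E = np.sqrt(m**2 + (p**2).sum(axis=1))           # relativistic energies [GeV]
P_comp = (p**2 / E[:, None]).sum(axis=0) / (3.0 * V)   # Eq. (1), [GeV/fm^3]
print("P_x, P_y, P_z =", P_comp)   # isotropy check: the three values should agree
```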
The method of moments of the distribution function is a useful tool to investigate irregularities in particle spectra in high energy physics. Indeed, one can draw conclusions about the isotropy of the velocity distributions in terms of the function
$$f_a^{(n)}=\sigma _z^{(n)}-\frac{1}{2}\left(\sigma _x^{(n)}+\sigma _y^{(n)}\right),$$
(2)
where $`\sigma _{x,y,z}^{(n)}`$ is the $`n`$th moment of the $`x`$, $`y`$, or $`z`$ distribution. The function $`f_a^{(n)}`$ is a measure of the anisotropy of the particle average velocities in the longitudinal and transverse directions. The results of the calculations of the velocity anisotropy of nucleons and pions produced in the central cell at 10.7A GeV, 40A GeV, and 160A GeV are given in Fig. 4 for the function $`f_a^{(2)}`$. It seems that the velocity distributions become isotropic already at $`t=5`$–$`7`$ fm/$`c`$. But in equilibrium, at the energies and temperatures in question, hadrons should have a Maxwellian, or normal, velocity distribution, which is the only one satisfying the principle of maximum entropy. The $`dN/dv`$ distributions of nucleons and pions for all three energies are shown in Fig. 5. One can see that, despite the isotropy of the collective velocities, the shapes of the distributions become close to the normal one only at $`t=8`$–$`10`$ fm/$`c`$, not earlier. Therefore, the second moments of the velocity distribution functions are insufficient to study the anisotropy of the system, and one should apply moments of higher order. A toy illustration of the moment-based measure is sketched below.
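This is an illustration of the anisotropy measure of Eq. (2); the synthetic velocity samples merely stand in for the cell hadrons.

```python
import numpy as np

def f_a(v, n):
    # Eq. (2): sigma_z^(n) - (sigma_x^(n) + sigma_y^(n))/2 for velocities v[:, 3]
    mom = ((v - v.mean(axis=0)) ** n).mean(axis=0)   # n-th central moment per axis
    return mom[2] - 0.5 * (mom[0] + mom[1])

rng = np.random.default_rng(1)
iso = rng.normal(scale=0.4, size=(10_000, 3))        # isotropic sample
anis = iso * np.array([1.0, 1.0, 1.5])               # stretched along z
for name, v in (("isotropic", iso), ("anisotropic", anis)):
    print(name, "f_a^(2) =", round(f_a(v, 2), 4))
# f_a^(2) is compatible with zero only for the isotropic sample
```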
The requirement of isotropy of the velocity distributions is closely related to the requirement of pressure isotropy. Here the momentum distributions of particles are summed over all hadron species, as given by Eq. (1). Figure 6 depicts the time evolution of the pressure in the longitudinal and transverse directions, calculated for all three energies in question. These components also become very close to each other (within a 5% accuracy limit) at $`t\approx 8`$ fm/$`c`$. The result practically does not depend on the initial energy of the colliding nuclei.
It is worth noting that the appearance of the preequilibrium stage does not inevitably imply that the matter in the cell stays in equilibrium forever. The preequilibrium stage in the cell holds for a period of about $`8`$–$`10`$ fm/$`c`$ (Fig. 6). Then the system again develops an anisotropy in the pressure and velocity sectors, due to the significant reduction of the number of collisions between particles.
### C Criteria of thermal and chemical equilibrium
Suppose conditions (i) and (ii) are fulfilled. Does this mean that the hadronic matter in the cell is in a state close to equilibrium? No, because both criteria concern the kinetic preequilibrium stage rather than the thermal equilibrium one. Kinetic equilibrium implies isotropy of the velocity distributions of particles (and, therefore, isotropy of the pressure), together with the requirement that the velocity distributions be Maxwellian. Thermal equilibrium indicates that the macroscopic characteristics of the system are nearly equal to their average values. In thermal equilibrium the system is characterized by a unique temperature $`T`$. Then, the principle of maximum entropy compels the particles of mass $`m_i`$ to obey the equilibrium distribution function
$$F(p,m_i)=\left[\mathrm{exp}\left((\sqrt{p^2+m_i^2}-\mu )/T\right)\pm 1\right]^{-1}.$$
(3)
Here $`p`$ is the momentum of the particle and $`\mu `$ is its chemical potential; the “+” sign stands for fermions and the “−” sign for bosons. In the case $`\mathrm{exp}[(E_i-\mu )/T]\gg 1`$, where $`E_i^2=p^2+m_i^2`$, the Maxwell velocity distribution follows automatically from Eq. (3). Therefore, if the number of particles is conserved, both definitions of kinetic and thermal equilibrium are equivalent . But in strong interactions at high energies (as well as in chemical reactions) the number of interacting particles and their composition change. Thus, the condition of thermal equilibrium should be supplemented by the requirement of chemical equilibrium. The criteria read: (iii) the distribution functions of hadrons are close to the equilibrium distribution functions given by Eq. (3) (thermal equilibrium), and (iv) the yields of hadrons become saturated after a certain period (chemical equilibrium). The latter condition assumes that any inverse reaction proceeds at the same rate as the direct reaction. This means, in particular, that the chemical potentials of nonconserved charges vanish, and the chemical potential $`\mu _j`$ assigned to a given particle $`j`$ is simply
$$\mu _j=\mu _\mathrm{B}B_j+\mu _\mathrm{S}S_j,$$
(4)
where $`B_j`$ and $`S_j`$ denote the baryon charge and strangeness of the particle, and $`\mu _\mathrm{B}`$ and $`\mu _\mathrm{S}`$ are the baryon and strangeness chemical potentials, respectively.
After the preequilibrium conditions are satisfied, one may address the question of thermal and chemical equilibrium in the cell. But how can we apply thermostatic criteria to the dynamic picture in the cell, where the internal parameters are constantly changing? The standard procedure is to compare a snapshot of the particle yields and spectra in the cell at a given time with those predicted by the statistical thermal model of a hadron gas . If these spectra are close to each other, the hadronic matter in the cell is assumed to have reached thermal and chemical equilibrium. The simplicity of the SM has led to a very abundant literature (see, e.g., and references therein). Therefore, we shall briefly recall some of its principal features.
## III Statistical model of ideal hadron gas
For the further analysis, the thermodynamic parameters of the system, $`T`$, $`\mu _\mathrm{B}`$, and $`\mu _\mathrm{S}`$, at each step of the time evolution of the colliding system, were extracted from the predictions of the statistical model of an ideal hadron gas with the same 55 baryon and 32 meson species, and their antistates, as in the UrQMD model. As input the SM uses the total energy density $`\epsilon `$, baryon density $`\rho _\mathrm{B}`$, and strangeness density $`\rho _\mathrm{S}`$, determined within the UrQMD model during the dynamical evolution of the central zone with volume $`V`$ of the A+A system:
$`\epsilon ^{\mathrm{mic}}`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \sum _i}E_i^{\mathrm{SM}}(T,\mu _\mathrm{B},\mu _\mathrm{S}),`$ (5)
$`\rho _\mathrm{B}^{\mathrm{mic}}`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \sum _i}B_iN_i^{\mathrm{SM}}(T,\mu _\mathrm{B},\mu _\mathrm{S}),`$ (6)
$`\rho _\mathrm{S}^{\mathrm{mic}}`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \sum _i}S_iN_i^{\mathrm{SM}}(T,\mu _\mathrm{B},\mu _\mathrm{S}).`$ (7)
Here $`B_i`$, $`S_i`$ are the baryon charge and strangeness of the hadron species $`i`$, whose particle yields, $`N_i^{\mathrm{SM}}`$, and total energy, $`E_i^{\mathrm{SM}}`$, are calculated within the SM as:
$`N_i^{\mathrm{SM}}`$ $`=`$ $`{\displaystyle \frac{Vg_i}{2\pi ^2\hbar ^3}}{\displaystyle \int _0^{\mathrm{\infty }}}p^2f(p,m_i)\,dp,`$ (8)
$`E_i^{\mathrm{SM}}`$ $`=`$ $`{\displaystyle \frac{Vg_i}{2\pi ^2\hbar ^3}}{\displaystyle \int _0^{\mathrm{\infty }}}p^2\sqrt{p^2+m_i^2}f(p,m_i)\,dp,`$ (9)
where $`p`$, $`m_i`$, and $`g_i`$ are the momentum, mass, and degeneracy factor of the hadron species $`i`$. The distribution function $`f(p,m_i)`$ is given by Eq. (3) with the chemical potential $`\mu =\mu _\mathrm{B}B_i+\mu _\mathrm{S}S_i`$. Instead of the Fermi–Dirac or Bose–Einstein distributions we use, for all hadronic species, the classical Boltzmann distribution function
$$f^{\mathrm{Boltz}}(p,m_i)=\mathrm{exp}\left(-\frac{\sqrt{p^2+m_i^2}-\mu _\mathrm{B}B_i-\mu _\mathrm{S}S_i}{T}\right).$$
(10)
At temperatures above 100 MeV the only visible difference (about 10%) between quantum and classical descriptions is in the yields of pions .
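To make the fitting procedure of Eqs. (5)–(7) concrete, here is a minimal sketch with Boltzmann statistics, Eq. (10). The species list is truncated to a handful of states and the input densities are assumed, illustrative numbers; the real fit runs over all 55+32 species and their antiparticles.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import kv

HBARC = 0.19733                                  # GeV fm
# (degeneracy g, mass m [GeV], baryon number B, strangeness S); truncated list
SPECIES = {"N":      (4, 0.938,  1,  0),
           "Nbar":   (4, 0.938, -1,  0),
           "Delta":  (16, 1.232, 1,  0),
           "pi":     (3, 0.138,  0,  0),
           "K":      (2, 0.494,  0,  1),
           "Kbar":   (2, 0.494,  0, -1),
           "Lambda": (2, 1.116,  1, -1)}

def n_and_eps(T, muB, muS, g, m, B, S):
    # Boltzmann ideal-gas number and energy density of one species, Eq. (10)
    fug = np.exp((muB * B + muS * S) / T)
    n = g / (2 * np.pi**2) * m**2 * T * kv(2, m / T) * fug / HBARC**3
    e = g / (2 * np.pi**2) * (3 * m**2 * T**2 * kv(2, m / T)
                              + m**3 * T * kv(1, m / T)) * fug / HBARC**3
    return n, e                                   # [fm^-3], [GeV fm^-3]

def residuals(x, tgt):
    T, muB, muS = x
    eps = rhoB = rhoS = 0.0
    for g, m, B, S in SPECIES.values():
        n, e = n_and_eps(T, muB, muS, g, m, B, S)
        eps += e; rhoB += B * n; rhoS += S * n
    return [eps - tgt[0], rhoB - tgt[1], rhoS - tgt[2]]   # Eqs. (5)-(7)

# illustrative cell input of the magnitude discussed in the text (assumed)
target = (0.3, 0.10, -0.005)    # epsilon [GeV/fm^3], rho_B, rho_S [fm^-3]
T, muB, muS = fsolve(residuals, x0=[0.120, 0.450, 0.050], args=(target,))
print(f"T = {1e3*T:.0f} MeV, mu_B = {1e3*muB:.0f} MeV, mu_S = {1e3*muS:.0f} MeV")
```

The starting point of the root search matters for convergence; this is only a sketch of the procedure, not the production fit.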
The hadron pressure given by the statistical model reads
$$P^{\mathrm{SM}}=\sum _i\frac{g_i}{2\pi ^2\hbar ^3}\int _0^{\mathrm{\infty }}p^2\frac{p^2}{3(p^2+m_i^2)^{1/2}}f(p,m_i)\,dp.$$
(11)
Finally, the entropy density $`s^{\mathrm{SM}}`$ can be calculated for the ideal gas model either by the Gibbs thermodynamical identity
$$\epsilon ^{\mathrm{mic}}=T^{\mathrm{SM}}s^{\mathrm{SM}}+\mu _\mathrm{B}^{\mathrm{SM}}\rho _\mathrm{B}^{\mathrm{mic}}+\mu _\mathrm{S}^{\mathrm{SM}}\rho _\mathrm{S}^{\mathrm{mic}}-P^{\mathrm{SM}},$$
(12)
or as the sum over all particles of the product $`f(p,m_i)[1-\mathrm{ln}f(p,m_i)]`$ integrated over all possible momentum states
$$s^{\mathrm{SM}}=-\sum _i\frac{g_i}{2\pi ^2\hbar ^3}\int _0^{\mathrm{\infty }}f(p,m_i)\left[\mathrm{ln}f(p,m_i)-1\right]p^2\,dp.$$
(13)
According to the principle of maximum entropy the value of $`s^{\mathrm{SM}}`$ calculated from Eq. (12) or from Eq. (13) represents the maximum value of the entropy density in the system for a given particle composition and given set of microscopic parameters, i.e., energy density, $`\epsilon ^{\mathrm{mic}}`$, baryon density, $`\rho _\mathrm{B}^{\mathrm{mic}}`$, and strangeness density, $`\rho _\mathrm{S}^{\mathrm{mic}}`$.
## IV UrQMD versus statistical model. Results and discussion
### A Baryon density and strangeness in the cell
As shown in Sec. II, kinetic equilibrium is attained in the central cell at $`t\approx 10`$ fm/$`c`$ for all three reactions. The fraction of non-formed particles at this time is less than 20% and rapidly vanishes. Therefore, $`t=10`$ fm/$`c`$ is chosen as the starting point of the direct comparison between UrQMD and the SM. Substitution of the values $`\{\epsilon ^{\mathrm{mic}},\rho _\mathrm{B}^{\mathrm{mic}},\rho _\mathrm{S}^{\mathrm{mic}}\}`$, calculated in the cell for all three reactions in question at $`10\le t\le 18`$ fm/$`c`$, into Eqs. (5)–(7) gives us the key parameters $`T`$, $`\mu _\mathrm{B}`$, and $`\mu _\mathrm{S}`$ needed to reproduce the particle spectra in the statistical model. The input and output parameters are listed in Tables I–III. Apparently, the conditions in the cell are different for the three reactions even at this late stage of the expansion. Because of the different expansion rates, the higher temperatures and lower values of the baryon chemical potential correspond to the highest collision energy, i.e., the SPS one. The final time of the calculations may be estimated from the usual hydrodynamic freeze-out conditions, e.g. $`\rho _{tot}\approx 0.5\rho _0`$ or $`\epsilon =0.1`$ GeV/fm<sup>3</sup>, i.e., 18–20 fm/$`c`$ for all three reactions. At these times the fraction of already frozen particles in the central cell is about 40–47%, irrespective of the initial energy of the colliding nuclei.
The baryon density in the central zone of the collision at the late stage is not larger than 0.15 fm<sup>-3</sup> for all three reactions. Note also that at AGS energies we deal with baryon-rich matter, where about 70% of the total energy is carried by baryons, while at SPS most of the energy is deposited in the mesonic sector (more than 70%). At 40A GeV the mesonic and baryonic parts of the energy are equal.
At all energies from 10.7A GeV to 160A GeV the total strangeness density of all particles in the central cell is small and negative. This result is independent of the size of the cell . The origin of this effect is quite simple. Strange particles are produced in pairs; for instance, kaons are produced mainly together with lambdas. The total strangeness of the reaction is essentially zero but, owing to their small interaction cross section with hadrons, $`K`$'s leave the central cell much earlier than $`\mathrm{\Lambda }`$'s or $`\overline{K}`$'s. The strangeness density has a minimum around the maximum overlap of the nuclei and then relaxes to zero. This evolution of the strangeness density $`\rho _\mathrm{S}=\sum _iS_in_i`$, where $`S_i`$ and $`n_i`$ are the strangeness and density of the hadron species $`i`$, cannot be explained by a simple combination of, e.g., $`K`$'s, $`\overline{K}`$'s, and $`\mathrm{\Lambda }`$'s, but is determined by the contributions of all species carrying strange charge. For the ratio $`f_s=\rho _\mathrm{S}/\rho _\mathrm{B}`$ (Fig. 7) the behavior is opposite: this ratio rises continuously with time, because the baryon density decreases much faster than the strangeness density.
At the early stages of the reaction the strange charge is carried mostly by resonances. At the AGS energy the positive strangeness of mesons, mostly $`K`$'s, is compensated by the contribution of both baryons (like $`\mathrm{\Lambda }`$'s) and mesons (like $`\overline{K}`$'s) carrying negative strange charge. This makes the net strangeness density negative, even though small. At the SPS energy the contribution of strange baryons to the total strangeness is relatively small, so the difference in strangeness is determined mostly by the meson contributions.
To decide whether the strangeness density in the cell is small or not, we have also performed calculations in the SM with zero strangeness density. As shown in Table IV, although the hadronic yields themselves are only slightly affected by this “symmetrization” of strangeness, their ratios change more distinctly. For instance, the ratio $`F_K=K/\overline{K}`$ in the central cell drops from 6.48 to 5.74 at the AGS energy, from 2.96 to 2.60 at 40A GeV, and from 1.82 to 1.58 at the SPS energy. The total effect is about 15% for all three reactions.
### B Isentropic expansion and EOS
Two other important facts may be gained from Tables I–III. The entropy per baryon in the cell is almost constant in time for each reaction, varying from $`S/A=s/\rho _\mathrm{B}\approx 12`$ at the AGS to $`s/\rho _\mathrm{B}\approx 38\pm 2`$ at the SPS energy. This result supports the Landau idea of isentropic expansion of a relativistic fluid. The isentropic-like expansion is demonstrated in Fig. 8, which presents the evolution of the central cell in the $`T`$–$`\epsilon `$ plane. Also, the fact that the microscopic pressure calculated according to Eq. (1) is nicely reproduced by Eq. (11) for the pressure of an ideal hadron gas favors the applicability of the hydrodynamic description of relativistic heavy ion collisions. The ratio $`P/\epsilon `$, shown in Fig. 9, is constant over the whole time interval for all three energies. Thus the equation of state, which connects pressure with energy density, has a rather simple form: $`P(\epsilon )/\epsilon =0.12`$ (AGS), 0.13 (40A GeV), and 0.15 (SPS), where $`c_s=\left(dP/d\epsilon \right)^{1/2}`$ corresponds to the speed of sound in the medium (see the sketch below). For the ideal ultra-relativistic gas $`c_s^2=1/3`$, while the presence of resonances diminishes the sonic velocity to $`c_s^2\approx 0.14`$ , which is in quantitative agreement with our calculations. It is important to stress that the EOS, extracted from the UrQMD analysis of the evolution of hot hadronic matter in the central cell of heavy ion collisions at energies from AGS to SPS, does not contain any kind of softening which might be associated with the phase transition between the quark-gluon plasma (QGP) and the hadronic phase.
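A small sketch of the speed-of-sound estimate for a single-species ideal Boltzmann gas at $`\mu =0`$, illustrating how heavier (resonance-like) states pull $`c_s^2`$ below the ultrarelativistic value $`1/3`$; the temperature and masses are illustrative choices.

```python
import numpy as np
from scipy.special import kv

def P_eps(T, m, g=1.0):
    # pressure and energy density of one Boltzmann species at mu = 0
    n = g / (2 * np.pi**2) * m**2 * T * kv(2, m / T)
    e = g / (2 * np.pi**2) * (3 * m**2 * T**2 * kv(2, m / T) + m**3 * T * kv(1, m / T))
    return n * T, e                       # Boltzmann ideal gas: P = n T

T, dT = 0.150, 1e-4                       # GeV
for m in (0.138, 0.770, 1.232):           # pion, rho, Delta masses [GeV]
    (P1, e1), (P2, e2) = P_eps(T - dT, m), P_eps(T + dT, m)
    print(f"m = {m:.3f} GeV: c_s^2 = dP/de = {(P2 - P1) / (e2 - e1):.3f}")
```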
The evolution of the central cell in the $`T`$–$`\mu _\mathrm{B}`$ plane is shown in Fig. 10. Here, in spite of the absence of any indication of the QGP–hadron phase transition in the $`P`$–$`\epsilon `$ plane, we see that the maximal temperature grows with the initial collision energy and, at the beginning of the kinetic equilibrium stage, reaches the zone of the phase transition predicted by the MIT bag model for an ideal QGP phase with $`m_\mathrm{S}=0`$. At earlier times the determination of the temperature in the cell by means of the SM fit is doubtful, since the necessary conditions of local equilibrium are not satisfied.
### C Hadron yields and energy spectra
To complete the analysis of local equilibrium in the central cell we should compare the hadron yields and spectra obtained in both models. If the numbers of particles in the cell and their energy spectra are very close to those predicted by the SM, one may conclude that thermal and chemical equilibrium is reached. The yields of different hadrons in the central $`V=125`$ fm<sup>3</sup> cell are shown in Figs. 11(a)–(c) (see also Table IV) for central (impact parameter $`b=0`$) heavy ion collisions at 10.7A GeV, 40A GeV, and 160A GeV, respectively. We see that for baryons at $`t\ge 10`$ fm/$`c`$ the agreement between the SM and UrQMD results is reasonably good. For pions and kaons the yields differ drastically, especially at 160A GeV. Compared to UrQMD, the statistical model significantly underestimates the number of pions and overestimates the kaon yield. This difference in the pion yield cannot be explained only by the many-body ($`N\ge 3`$) decays of resonances, whose number is lower in the UrQMD calculations, as seen in Fig. 12. In fact, the main discrepancy is observed for pions and for resonances with many-body decays, like $`\omega ,N^{},\mathrm{\Delta }^{},\mathrm{\Lambda }^{}`$, etc. The statistical model overestimates the production of such resonances and underestimates the yield of pions. The enhancement of the resonances, however, can account for only 20% of the difference in pion yields; the other 80% come from multiparticle processes, i.e., fragmentation of strings. Condition (iv) is not satisfied and, therefore, the hadronic matter in the UrQMD central cell is not chemically equilibrated.
To verify how well the SM reproduces the temperature of the system, we display in Figs. 13(a)–13(c) the energy spectra of different hadronic species obtained from the microscopic calculations. The predictions of the statistical model are plotted onto the particle spectra as well. Again, at the AGS energy the difference between the UrQMD and SM results for baryons lies within a 10% accuracy range. With the rise of the initial energy from AGS to SPS, the agreement between the models in the baryonic sector becomes worse. Pion energy spectra demonstrate the same tendency. Moreover, even at 10.7A GeV the deviations of the pion spectra in UrQMD from those of the SM are significant. A Boltzmann fit to the pion and nucleon energy spectra from the central cell has been performed at 160A GeV, where the deviations from the SM predictions are especially noticeable; a sketch of such a fit is given below. The results of the fit are listed in Table V. We see that the nucleon “temperature” is always 30–40 MeV below the temperature obtained in the statistical model. For pions the difference is more dramatic, 50–60 MeV, although the UrQMD energy spectra themselves agree well with the exponential form of the Boltzmann distribution.
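A minimal sketch of such a Boltzmann fit: for a Boltzmann gas the energy spectrum behaves as $`dN/dE\propto pE\,e^{-E/T}`$, so $`T`$ can be read off a log-linear fit. The data here are synthetic stand-ins for the cell spectra.

```python
import numpy as np

m, T_true = 0.938, 0.160                   # GeV; "temperature" used to generate data
rng = np.random.default_rng(2)
E = m + np.linspace(0.02, 1.00, 25)        # total energies [GeV]
p = np.sqrt(E**2 - m**2)
dN = p * E * np.exp(-E / T_true) * (1 + 0.02 * rng.normal(size=E.size))

# dN/dE ~ p E exp(-E/T)  =>  ln(dN/(pE)) is linear in E with slope -1/T
slope, _ = np.polyfit(E, np.log(dN / (p * E)), 1)
print(f"fitted T = {-1e3/slope:.0f} MeV (input {1e3*T_true:.0f} MeV)")
```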
But maybe all these nonequilibrium effects are caused solely by pions, which, due to their simultaneous production in inelastic collisions and in decays of resonances, are the only hadrons not thermally and chemically equilibrated? Indeed, from Fig. 14, which depicts the evolution of the average number of collisions per particle in the cell at the SPS energy, it follows that pions have undergone about 1.6–1.7 elastic collisions, while baryons have suffered more than 20 strong interactions. Thus, one would expect that without pions the SM would predict a much lower temperature, in agreement with that of UrQMD. To check this hypothesis we subtract the energy carried by pions in the cell from the total energy of hadrons. Then we substitute the new value of the energy density, together with the unchanged values of the baryon and strangeness densities, into the SM fit to the UrQMD data, and impose the requirement that pions be absent. The results of the fit are listed in Table VI for Pb+Pb at 160A GeV at $`t=10`$ fm/$`c`$. Although the number of pions in the cell is almost two times larger than that of the SM, it appears that, due to the lower temperature of pions in UrQMD, the total excess of pion energy density in the cell is 24 MeV/fm<sup>3</sup>, or about $`1/3`$ of the total pion energy density given by the statistical model. The exclusion of the pion fraction does not decrease the temperature in the SM. Instead, it leads to an increase of the chemical potential of strange particles.
Therefore, despite the occurrence of a state in which hadrons are in kinetic equilibrium and collective flows are very small, the hadronic matter is neither in thermal nor in chemical equilibrium. This state of hot hadronic matter is very peculiar, and the results of its investigation will be published elsewhere . Similar results have been obtained in , where the central region of ultra-relativistic Au+Au collisions at the RHIC energy was studied using the parton cascade model . It was found that, despite the system approaching kinetic equilibrium, the chemical composition of quarks and gluons was not in chemical equilibrium. Our analysis also shows that the extraction of a temperature by performing the SM fit to hadron yields and energy spectra is a very delicate procedure. If the whole system is out of equilibrium, then the “apparent” temperatures obtained from the fit may turn out to be high enough to reach the zone of the quark–hadron phase transition, or even the pure QGP phase (Fig. 10, open symbols).
## V Conclusions
The results of the present study may be summarized as follows. We used the microscopic transport model UrQMD to verify the appearance of local equilibrium in the central zone of heavy ion collisions at relativistic energies, spanning from AGS to SPS. To analyze the results of the dynamical calculations, the traditional methodology has been applied. First, the conditions of the preequilibrium kinetic stage have been checked by means of the isotropy of the pressure and of the velocity distributions. It is shown that kinetic equilibrium is reached by the hadronic matter in the central $`V=125`$ fm<sup>3</sup> cell at about $`t=10`$ fm/$`c`$, for a not very long period, $`\mathrm{\Delta }t\approx 8`$–$`10`$ fm/$`c`$. Secondly, the values of the energy density, baryon density, and strangeness density, calculated microscopically, were used as input to calculate the temperature as well as the baryon and strangeness chemical potentials within the statistical model of an ideal hadron gas.
The total strangeness of all hadronic species carrying strange charge in the central cell is shown to be negative, though small. This is because $`K`$'s escape from the interaction zone much more easily than $`\mathrm{\Lambda }`$'s or $`\overline{K}`$'s, due to their small interaction cross section with hadrons. The small negative strangeness of the central cell, however, cannot be neglected, because it affects the ratios of strange particles, like $`K/\overline{K}`$.
It is worth noting that, due to the rather complicated dynamics of heavy ion collisions, thermal models cannot fully describe the bulk of the experimental data . In contrast, in the symmetric central zone of the heavy ion collision almost all dynamical distortions are reduced. This gives us a chance to study the relaxation of the hot hadronic matter towards thermal and chemical equilibrium, provided it sets in within the hadronization time of the system.
We found that the entropy per baryon in the central cell remains constant at the late stage of the expansion for energies varying from 10.7A GeV to 160A GeV. This circumstance formally supports the application of the relativistic hydrodynamic model. But the further comparison between the predictions of the microscopic and macroscopic models reveals significant discrepancies in the yields and energy spectra of hadrons. Compared to UrQMD, the statistical model underestimates, for instance, the number of pions. This “meson problem” is not a feature solely of a particular microscopic model like UrQMD. Experimental data on pion yields at SPS energies show unambiguously an enhancement of pions compared to the SM calculations. Several possible solutions have been suggested recently. Admitting that the hot hadronic matter is not in a state of chemical equilibrium, one may implement an effective chemical potential for pions (and for other species, too) . In that case the state of maximum entropy is not yet reached.
Also, the temperatures of the different hadronic species are not the same. The differences between the UrQMD and SM results increase with the rise of the initial energy of the colliding nuclei. Pions seem to have the lowest temperature and nucleons the highest one among all hadron species. Both the pion and the nucleon temperatures in the cell in Pb+Pb collisions at the SPS energy are always far below the temperature predicted by the statistical thermal model.
The information at hand permits us to conclude that, in contrast to low energies, local thermal and chemical equilibrium (in the sense of the thermal model) is not attained even in the central zone of heavy ion collisions at energies above 10.7A GeV in the UrQMD simulations. The hadronic matter in the UrQMD model seems not to evolve towards the state of maximum entropy, and this fact deserves to be investigated in detail.
## Acknowledgments
We would like to thank L. Csernai, U. Heinz, J. Rafelski, L. Satarov, E. Shuryak, J. Stachel, and N. Xu for the helpful discussions and comments. L.B. and E.Z. are grateful to the Institute for Theoretical Physics, University of Frankfurt for the warm and kind hospitality. This work was supported by the Graduiertenkolleg für Theoretische und Experimentelle Schwerionenphysik, Frankfurt–Giessen, the Bundesministerium für Bildung und Forschung, the Gesellschaft für Schwerionenforschung, Darmstadt, Deutsche Forschungsgemeinschaft, and the Alexander von Humboldt–Stiftung, Bonn.
# Correspondence between Minkowski and de Sitter Quantum Field Theory
## Abstract
In this letter we show that the “preferred” Klein–Gordon Quantum Field Theories (QFT's) on a $`d`$-dimensional de Sitter spacetime can be obtained from a Klein–Gordon QFT on a ($`d+1`$)-dimensional “ambient” Minkowski spacetime satisfying the spectral condition and, conversely, that a Klein–Gordon QFT on a ($`d+1`$)-dimensional “ambient” Minkowski spacetime satisfying the spectral condition can be obtained as a superposition of $`d`$-dimensional de Sitter Klein–Gordon fields in the preferred vacuum. These results establish a correspondence between QFT's living on manifolds of different dimensions. The method exposed here can be applied to study other situations, notably QFT on Anti de Sitter spacetime.
<sup>a</sup> SISSA, v. Beirut 2–4, 34014 Trieste
<sup>b</sup> Dipartimento di Scienze Matematiche Fisiche e Chimiche,
Via Lucini 3, 22100 Como and INFN sez. di Milano, Italy
<sup>c</sup> Service de Physique Théorique, C.E. Saclay, 91191 Gif-sur-Yvette, France
The study of the relations between Quantum Field Theories (QFT's) in different dimensions has recently come to general attention. Some of the most interesting and intriguing developments of QFT and string theory, like Maldacena's AdS/CFT conjecture and 't Hooft's and Susskind's holographic principle, seem to indicate that relations of this kind are going to play a fundamental role in understanding QFT and string theory.
In this letter we point out a relation that exists between Minkowski QFT and de Sitter QFT in one dimension less. We show that the “preferred” de Sitter Klein–Gordon field of squared mass $`\lambda `$ arises by averaging in a well-defined sense an ordinary Klein–Gordon field of mass $`M`$ living in the Minkowski ambient spacetime and, vice versa, the Klein–Gordon field in the ambient Minkowski spacetime can be obtained by superposing fields in the lower dimensional de Sitter manifold.
The idea that QFT’s on the de Sitter manifold can be obtained by restriction from the ambient spacetime is of course not new, but it has been of little use in the standard coordinate approach to de Sitter field theories.
Recently, however, it has been shown that the well-known thermal properties of the de Sitter Klein–Gordon fields in the “preferred” vacuum, are linked to certain analyticity properties of the correlation functions; these properties are precisely obtained by restriction to the de Sitter manifold of the analyticity properties of the (general) correlation functions in the ambient spacetime, which hold when the corresponding QFT satisfies the energy–momentum spectral condition .
This idea has then been pushed further and it has become possible to show that the thermal interpretation can be established also for interacting de Sitter field theories .
In this letter we take one step more by showing how a Klein–Gordon Minkowski field in the Wightman vacuum gives rise to a Klein–Gordon field on the de Sitter spacetime in the “preferred” thermal vacuum, giving a further argument in favour of the adjective “preferred”. Indeed, as is well known, there are in general infinitely many inequivalent vacua corresponding to a given QFT, and one needs criteria to select the physically meaningful ones. This is true already for Minkowski QFT but, in that case, one has strong physical criteria to select among the vacua. The situation is more difficult when considering QFT's on a curved background, but several criteria have been established also in this case (many of them, however, work only for linear field theories). The “preferred” vacuum for de Sitter Klein–Gordon fields has been shown to satisfy many of such criteria, such as the Hadamard condition . Here we prove that, actually, the “preferred” Klein–Gordon de Sitter QFT's can be directly obtained from any massive or massless Wightman Klein–Gordon QFT in the ambient spacetime. There is however a restriction on the mass of the fields that can be obtained this way: when working with a $`d`$-dimensional de Sitter hyperboloid of unit radius, we can construct de Sitter Klein–Gordon fields whose mass is greater than or equal to $`(d-1)/2`$.
The existence of such a link also gives quantitative support to the idea that a thermal effect on a curved manifold can be looked at as an Unruh effect in a higher dimensional (flat) spacetime .
Let therefore $`𝒞=\{X\in 𝕄^{d+1}:\eta _{\mu \nu }X^\mu X^\nu <0\}`$ be the manifold of events which are spacelike w.r.t. a chosen event (taken as the origin of the frame) of a $`(d+1)`$-dimensional Minkowski spacetime $`𝕄^{d+1}`$; here $`𝒞`$ is our notation for this region. Coordinates of $`𝕄^{d+1}`$ are denoted by $`\{X^\mu \}`$, $`\mu =0,\mathrm{},d`$. $`𝒞`$ is foliated by a family of $`d`$-dimensional de Sitter spacetimes, identified with the hyperboloids
$$𝒴_R=\{\eta _{\mu \nu }X^\mu X^\nu =(X^0)^2-(\stackrel{}{X})^2=-R^2\}.$$
(1)
As a topological manifold, $`𝒞=ℝ^+\times 𝒴`$, where $`ℝ^+`$ is the positive real half-line with coordinate $`R`$ and $`𝒴=𝒴_1`$ is the $`d`$-dimensional de Sitter spacetime with radius $`R=1`$. Points of $`𝒴`$ are denoted by $`y`$ (i.e. $`y^2=-1`$).
We can therefore assign to an event of $`𝒞`$ coordinates $`(R,y)`$, so that $`X=Ry`$ and $`y^2=-1`$. The Minkowskian metric on $`𝒞`$ can consequently be rewritten as follows:
$$ds^2=-dR^2+R^2ds_𝒴^2,$$
where $`ds_𝒴^2`$ is the de Sitter metric of $`𝒴`$, obtained as the restriction of the Minkowski metric of the ambient space.
$`𝒞`$ is a globally hyperbolic manifold where quantum field theory can be formulated . Let us therefore consider a canonical quantum field $`\widehat{\mathrm{\Phi }}`$ on $`𝒞`$ satisfying the Klein–Gordon equation $`(\mathrm{\Box }+M^2)\widehat{\mathrm{\Phi }}(X)=0`$ (in the following we consider the massive case $`M>0`$; the massless case can be obtained by a limiting procedure, but can also be studied directly, with a considerable simplification of the formulae), and let us also consider the corresponding equation for the modes $`\mathrm{\Phi }(X)`$. By separating the variables as in the metric, we write $`\mathrm{\Phi }(X)=\theta _\lambda (R)\phi _\lambda (y)`$ and we are led to the following equations:
$`\left(\mathrm{\Box }_𝒴+\lambda \right)\phi (y)=0`$ (2)
$`-R^2\left(\partial _R^2+{\displaystyle \frac{d}{R}}\partial _R-M^2\right)\theta _\lambda (R)=\lambda \theta _\lambda (R).`$ (3)
Let us consider in particular the equation (3) for the radial modes $`\theta _\lambda `$. The operator appearing on the LHS is self-adjoint (on a suitable domain) w.r.t. the following Hilbert product:
$$(\theta ,\eta )=\int _{ℝ^+}\overline{\theta }(R)\eta (R)R^{d-2}\,dR.$$
(4)
By means of the transformation $`\theta (R)=R^{\frac{1-d}{2}}f(R)`$ and the rescaling $`\rho =MR`$, which together are a particular instance of the so-called Lommel transformation, Eq. (3) is turned into the modified Bessel equation. By further introducing the variable $`x=\mathrm{log}\rho =\mathrm{log}(MR)`$, we finally obtain the following equation (the prime means derivative w.r.t. $`x`$):
$$f_\lambda ^{\prime \prime }+(\nu ^2-e^{2x})f_\lambda =0\quad \text{with}\quad \nu =\nu (\lambda )=\sqrt{\lambda -\frac{(d-1)^2}{4}}.$$
(5)
We have thus obtained the Schrödinger problem for a particle in the one-dimensional potential $`e^{2x}`$ with eigenvalue $`\nu ^2`$; this problem is to be studied in the standard Hilbert space $`L^2(ℝ)`$. The spectrum is nondegenerate and coincides with the positive real line; this implies that $`4\lambda >(d-1)^2`$.
The solutions which have the correct asymptotic behaviour at $`x=+\mathrm{\infty }`$ are the modified Bessel functions $`K_{i\nu }(e^x)`$ . These modes are real, and their asymptotic behaviour near $`x=-\mathrm{\infty }`$ is the following:
$$K_{i\nu }(e^x)\simeq \left[\frac{\pi \nu }{\mathrm{sinh}(\pi \nu )}\right]^{\frac{1}{2}}\frac{\mathrm{sin}\left[\mathrm{arg}\mathrm{\Gamma }(1+i\nu )-(x-\mathrm{log}2)\nu \right]}{\nu };$$
(6)
comparison with the free Schrödinger waves gives
$$\int _ℝK_{i\nu }(e^x)K_{i\nu ^{\prime }}(e^x)\,dx=\frac{\pi ^2}{\mathrm{sinh}(\pi \nu )}\delta \left(\nu ^2-\nu ^{\prime 2}\right)=N_\lambda ^{-2}\delta (\lambda -\lambda ^{\prime }).$$
(7)
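As a quick consistency check (a minimal numerical sketch, not part of the original derivation) one can verify with mpmath that $`K_{i\nu }(e^x)`$ solves Eq. (5) and approaches the oscillatory form (6); the value $`\nu =2`$ and the sample points are arbitrary choices.

```python
import mpmath as mp

mp.mp.dps = 30
nu = mp.mpf(2)

def f(x):
    # K_{i nu} of real argument is real up to roundoff; keep the real part
    return mp.re(mp.besselk(1j * nu, mp.exp(x)))

# ODE residual of Eq. (5): should vanish up to numerical precision
for x0 in (-2, 0, 1):
    res = mp.diff(f, x0, 2) + (nu**2 - mp.exp(2 * x0)) * f(x0)
    print("x =", x0, " ODE residual =", mp.nstr(res, 3))

# oscillatory asymptotics (6) deep in the allowed region x -> -infinity
x0 = mp.mpf(-8)
amp = mp.sqrt(mp.pi / (nu * mp.sinh(mp.pi * nu)))
asym = amp * mp.sin(mp.arg(mp.gamma(1 + 1j * nu)) - nu * (x0 - mp.log(2)))
print("K_{i nu}(e^x) =", mp.nstr(f(x0), 6), "  asymptote =", mp.nstr(asym, 6))
```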
In terms of the original variable $`R`$ we therefore obtain the following normalized generalised eigenfunctions
$$\theta _\lambda (R)=N_\lambda R^{\frac{1-d}{2}}K_{i\nu }(MR).$$
(8)
The orthonormality and completeness conditions for these modes read
$`{\displaystyle \int _{ℝ^+}}\theta _\lambda (R)\theta _{\lambda ^{\prime }}(R)R^{d-2}\,dR=\delta (\lambda -\lambda ^{\prime })`$ (9)
$`{\displaystyle \int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}}d\lambda \,\theta _\lambda (R)\theta _\lambda (R^{\prime })=R^{-(d-2)}\delta (R-R^{\prime })`$ (10)
We now introduce the fields $`\widehat{\phi }_\lambda (y)`$ on the de Sitter manifold $`𝒴`$ by smearing the field $`\widehat{\mathrm{\Phi }}`$ with the complete set of radial modes (8):
$$\widehat{\phi }_\lambda (y)=\int _{ℝ^+}\widehat{\mathrm{\Phi }}(X)\overline{\theta }_\lambda (R)R^{d-2}\,dR.$$
(11)
Our main result is that the field $`\widehat{\phi }_\lambda (y)`$ is a Klein–Gordon field on the de Sitter manifold in the “preferred” (also called Euclidean or Bunch-Davies) vacuum state. In precise terms, the Minkowski vacuum expectation values (v.e.v.) of the fields $`\widehat{\phi }_\lambda (y)`$ are given by
$$W_{\lambda ,\lambda ^{\prime }}(y,y^{\prime })\equiv \langle \mathrm{\Omega }|\widehat{\phi }_\lambda (y)\widehat{\phi }_{\lambda ^{\prime }}(y^{\prime })|\mathrm{\Omega }\rangle =\delta (\lambda -\lambda ^{\prime })W_\lambda (y,y^{\prime }),$$
(12)
where $`W_\lambda `$ is the “preferred” two-point function of the de Sitter Klein–Gordon field in dimension $`d`$ . In particular, the fields $`\widehat{\phi }_\lambda `$ have zero correlation (and hence commute) for different values of the squared mass $`\lambda `$.
We will now give an argument to prove the result, by first deriving an explicit expression for $`W_{\lambda ,\lambda ^{\prime }}(y,y^{\prime })`$ (Eq. (17) below) . Let us rewrite the v.e.v. appearing on the LHS of Eq. (12) by using the momentum representation of the two-point function of the field $`\widehat{\mathrm{\Phi }}(X)`$:
$`W_{\lambda ,\lambda ^{\prime }}(y,y^{\prime })=`$ (13)
$`={\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{dR}{R}}R^{d-1}\theta _\lambda (R){\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{dR^{\prime }}{R^{\prime }}}R^{\prime \,d-1}\theta _{\lambda ^{\prime }}(R^{\prime }){\displaystyle \int \frac{\mathrm{d}^{d+1}P}{(2\pi )^d}\delta (P^2-M^2)\mathrm{\Theta }(P_0)e^{-iP(X-X^{\prime })}}.`$ (14)
In this expression we insert the parametrizations $`X=Ry`$ and $`X^{\prime }=R^{\prime }y^{\prime }`$ and introduce the vector $`\alpha `$ defined by the relation $`M\alpha =P`$, where $`P`$ is on the mass shell; $`\alpha `$ is therefore on the unit shell $`\alpha ^2=1`$, $`\alpha _0>0`$.
By exchanging the order of integration we are led to the following integrals (see , Vol. II, Eq. (7.8.5)):
$`\phi _\lambda (y,\alpha )=\phi _\lambda (y\alpha )=M^{\frac{d-1}{2}}{\displaystyle \int _0^{\mathrm{\infty }}}e^{-iy\alpha MR}\theta _\lambda (R)R^{d-1}{\displaystyle \frac{dR}{R}}=`$ (15)
$`=\sqrt{{\displaystyle \frac{\pi }{2}}}N_\lambda \mathrm{\Gamma }\left({\displaystyle \frac{d-1}{2}}-i\nu \right)\mathrm{\Gamma }\left({\displaystyle \frac{d-1}{2}}+i\nu \right)\left((iy\alpha )^2-1\right)^{\frac{2-d}{4}}P_{-\frac{1}{2}-i\nu }^{\frac{2-d}{2}}(iy\alpha ),`$ (16)
where the factor $`M^{\frac{d-1}{2}}`$ has been inserted for convenience, so that the function is dimensionless.
The functions $`\phi _\lambda (y,\alpha )`$ are a new set of plane waves on the de Sitter manifold, i.e., they are (global) modes satisfying the de Sitter Klein–Gordon equation (2) whose phase is constant on planes; $`\alpha `$ plays the role of the wave-vector, and $`P`$ is an associated Legendre function . The integral appearing in the definition of these waves is well defined at both extrema provided $`|\mathrm{Im}(\nu )|<\frac{d-1}{2}`$.
We are finally led to consider the following expression:
$$W_{\lambda ,\lambda ^{\prime }}(y,y^{\prime })=\int \frac{\mathrm{d}^{d+1}\alpha }{(2\pi )^d}\delta (\alpha ^2-1)\mathrm{\Theta }(\alpha _0)\overline{\phi }_\lambda (y,\alpha )\phi _{\lambda ^{\prime }}(y^{\prime },\alpha ).$$
(17)
This formula coincides with the “preferred” two-point function of the de Sitter Klein–Gordon field in dimension $`d`$ (see Eq. (12)), giving at the same time a new integral representation for it. The full proof of this claim is somewhat involved and will be given elsewhere .
We can however illustrate the result in the simplest case, $`d=1`$, where things are easier. In this case the de Sitter spacetime has only “time” and no “space”: it can be visualised as the two branches of a hyperbola in the two-dimensional Minkowski spacetime; these represent the world-lines of two uniformly accelerated observers.
The plane waves $`\phi _\lambda `$ reduce here to ordinary trigonometric functions in the rectifying parameter of the hyperbola.
To carry out the computation of the integral (17) we parametrise $`\alpha =\left(\begin{array}{c}\mathrm{cosh}(s)\\ \mathrm{sinh}(s)\end{array}\right)`$ and $`y(t)=\left(\begin{array}{c}\mathrm{sinh}(t)\\ \pm \mathrm{cosh}(t)\end{array}\right)`$; $`t`$ is the proper time of the “freely falling” observer in the one-dimensional de Sitter spacetime (the same observer is actually subject to a constant acceleration when regarded from the Minkowskian ambient space). By taking the two points on the same branch of the hyperbola (say, the right one) we promptly find $`\alpha y(t)=\mathrm{sinh}(s+t)`$. The measure $`\mathrm{d}^2\alpha \,\delta (\alpha ^2-1)\mathrm{\Theta }(\alpha _0)`$ becomes simply $`\frac{1}{2}\mathrm{d}s`$.
A second step exploits the analyticity properties of the Minkowskian Wightman function and of our plane waves, by shifting the two times $`t,t^{\prime }`$ into the complex plane, $`t\to t-i\pi /2`$ and $`t^{\prime }\to t^{\prime }+i\pi /2`$. It follows that
$`{\displaystyle \int _ℝ}{\displaystyle \frac{ds}{2(2\pi )}}{\displaystyle \int \frac{dR}{R}\frac{dR^{\prime }}{R^{\prime }}e^{-i\alpha (s)\left(y(t)-y(t^{\prime })\right)}\theta _\lambda (R)\theta _{\lambda ^{\prime }}(R^{\prime })}\stackrel{\begin{array}{c}t\to t-i\pi /2\\ t^{\prime }\to t^{\prime }+i\pi /2\end{array}}{}`$
$`{\displaystyle \int _ℝ}{\displaystyle \frac{ds}{2(2\pi )}}{\displaystyle \int \frac{dR}{R}\frac{dR^{\prime }}{R^{\prime }}e^{-M\left(R\mathrm{cosh}(s+t)+R^{\prime }\mathrm{cosh}(s+t^{\prime })\right)}\theta _\lambda (R)\theta _{\lambda ^{\prime }}(R^{\prime })}=`$
$`=N_\lambda N_{\lambda ^{\prime }}{\displaystyle \int _ℝ}{\displaystyle \frac{ds}{2(2\pi )}}\left|\mathrm{\Gamma }\left(i\nu \right)\right|^2\mathrm{cos}(\nu (s+t))\left|\mathrm{\Gamma }\left(i\nu ^{\prime }\right)\right|^2\mathrm{cos}(\nu ^{\prime }(s+t^{\prime }))=`$
$`={\displaystyle \frac{1}{4}}N_\lambda N_{\lambda ^{\prime }}\left|\mathrm{\Gamma }\left(i\nu \right)\right|^2\left|\mathrm{\Gamma }\left(i\nu ^{\prime }\right)\right|^2\delta (\nu -\nu ^{\prime })\mathrm{cos}(\nu (t-t^{\prime }))=`$
$`={\displaystyle \frac{1}{4\nu ^2\mathrm{sinh}(\pi \nu )}}\delta (\nu -\nu ^{\prime })\mathrm{cos}(\nu (t-t^{\prime }))=\delta (\lambda -\lambda ^{\prime }){\displaystyle \frac{\mathrm{cos}(\nu (t-t^{\prime }))}{2\nu \mathrm{sinh}(\pi \nu )}}`$
Returning to the Minkowski spacetime from the correct tubular domains ($`t\to t+i\pi /2`$ and $`t^{\prime }\to t^{\prime }-i\pi /2`$) we obtain the result
$$\langle \mathrm{\Omega }|\widehat{\phi }_\lambda (y(t))\widehat{\phi }_{\lambda ^{\prime }}(y(t^{\prime }))|\mathrm{\Omega }\rangle =\delta (\lambda -\lambda ^{\prime })\frac{\mathrm{cos}(\nu (t-t^{\prime }+i\pi ))}{2\nu \mathrm{sinh}(\pi \nu )}=\delta (\lambda -\lambda ^{\prime })W_\lambda (y(t),y(t^{\prime })).$$
This two-point function corresponds to the “preferred” one for the de Sitter Klein–Gordon quantum field, as given in , as well as to that of a quantum harmonic oscillator in a thermal state at inverse temperature $`2\pi `$. Indeed, the quantum Klein–Gordon field on a one-dimensional spacetime corresponds to a single quantum harmonic oscillator in the Heisenberg picture, where the mass plays the role of the spring constant. The thermal time-correlation function at inverse temperature $`\beta `$ of the position operator of such an oscillator is given by:
$$W(t,t^{\prime })=\frac{\mathrm{cos}(\omega (t-t^{\prime }+i\beta /2))}{2\omega \mathrm{sinh}(\omega \beta /2)}$$
(18)
which is precisely the expression derived above, with $`\beta =2\pi `$. This simple computation gives quantitative support to the relation between the Hawking effect in de Sitter spacetime and the Unruh effect in a flat spacetime, as in . A numerical check of the thermal (KMS) property of Eq. (18) is sketched below.
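This is a minimal numerical sketch of the KMS condition $`W(\tau -i\beta )=W(-\tau )`$, the defining property of a thermal correlator, for Eq. (18) at the de Sitter value $`\beta =2\pi `$ (unit radius); the values of $`\omega `$ and $`\tau `$ are arbitrary choices.

```python
import cmath

def W(tau, omega, beta):
    # Eq. (18) as a function of tau = t - t'
    return cmath.cos(omega * (tau + 1j * beta / 2)) / (2 * omega * cmath.sinh(omega * beta / 2))

omega, beta = 1.3, 2 * cmath.pi
for tau in (0.0, 0.7, 2.1):
    lhs, rhs = W(tau - 1j * beta, omega, beta), W(-tau, omega, beta)
    print(f"tau = {tau}: |W(tau - i*beta) - W(-tau)| = {abs(lhs - rhs):.2e}")
```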
We can also invert the transformation (11). Indeed, using the completeness of the radial modes, Eq. (10), we can express the ambient field $`\widehat{\mathrm{\Phi }}`$ as
$$\widehat{\mathrm{\Phi }}(R,y)=\int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}d\lambda \,\theta _\lambda (R)\widehat{\phi }_\lambda (y),$$
(19)
and consequently obtain the following decomposition of the Wightman function
$`\langle \mathrm{\Omega }|\widehat{\mathrm{\Phi }}(R,y)\widehat{\mathrm{\Phi }}(R^{\prime },y^{\prime })|\mathrm{\Omega }\rangle ={\displaystyle \int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}}d\lambda {\displaystyle \int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}}d\lambda ^{\prime }W_{\lambda ,\lambda ^{\prime }}(y,y^{\prime })\theta _\lambda (R)\theta _{\lambda ^{\prime }}(R^{\prime })=`$ (20)
$`={\displaystyle \int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}}d\lambda \,\theta _\lambda (R)\theta _\lambda (R^{\prime })W_\lambda (y,y^{\prime }).`$ (21)
This formula allows one to express the restriction of the ambient field $`\widehat{\mathrm{\Phi }}`$ to a fixed leaf $`R=R^{\prime }`$ as a superposition of Klein–Gordon fields in the respective Euclidean vacuum,
$$\langle \mathrm{\Omega }|\widehat{\mathrm{\Phi }}(R,y)\widehat{\mathrm{\Phi }}(R,y^{\prime })|\mathrm{\Omega }\rangle =\int _{\frac{(d-1)^2}{4}}^{\mathrm{\infty }}d\lambda \,|\theta _\lambda (R)|^2W_\lambda ^{(E)}(y,y^{\prime }).$$
The so-obtained Källén–Lehmann-type expansion has a weight in the squared-mass parameter $`\lambda `$ given by the density of states per unit spectrum per unit volume of the self-adjoint operator $`R^2\left(-\partial _R^2-\frac{d}{R}\partial _R+M^2\right)`$. A sketch of this spectral weight is given below.
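For illustration, a small sketch that evaluates the spectral weight $`|\theta _\lambda (R)|^2`$, assuming $`N_\lambda ^2=\mathrm{sinh}(\pi \nu )/\pi ^2`$ (as follows from Eq. (7)) and the illustrative values $`d=4`$, $`M=R=1`$:

```python
import mpmath as mp

mp.mp.dps = 25
d, M, R = 4, mp.mpf(1), mp.mpf(1)            # illustrative values (assumed)

def weight(lam):
    nu = mp.sqrt(lam - mp.mpf(d - 1)**2 / 4)
    N2 = mp.sinh(mp.pi * nu) / mp.pi**2      # N_lambda^2 from Eq. (7)
    K = mp.re(mp.besselk(1j * nu, M * R))    # radial mode up to the prefactor
    return N2 * R**(1 - d) * K**2            # |theta_lambda(R)|^2, Eq. (8)

for lam in (2.5, 4.0, 8.0, 16.0):            # allowed range: lambda > (d-1)^2/4
    print("lambda =", lam, " weight =", mp.nstr(weight(mp.mpf(lam)), 5))
```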
These results hold true in any dimension and can actually be generalised to other manifolds. For instance, by the same methods, one can foliate a ($`d+1`$)-dimensional Anti de Sitter (AdS) spacetime into $`d`$-dimensional Minkowski leaves parametrised by a radial coordinate $`R`$. In a sense this generalises the geometrical basis of the AdS/CFT correspondence conjecture, where one describes the Minkowski space as the boundary of an AdS manifold (that is, $`R`$ is sent to infinity). These and other results will be discussed elsewhere .
# Colour confinement and dual superconductivity of the vacuum - I
## I Introduction
Order-disorder duality plays an increasingly important rôle in our understanding of the dynamics of gauge theories, specifically of QCD and of its supersymmetric generalizations .
Duality is typical of systems which can have configurations with non-trivial spatial topology, carrying a conserved topological charge. The prototype example is the $`2d`$ Ising model. If viewed as a discretized version of a $`1+1`$ dimensional field theory, it presents one-dimensional configurations, kinks, whose topology is determined by the boundary conditions ($`\pm 1`$) at $`x_1=\pm \mathrm{\infty }`$.
In the usual description in terms of the local variable $`\sigma (x)=\pm 1`$, at low temperature (weak coupling) the system is in an ordered phase with nonzero magnetization $`\langle \sigma \rangle \ne 0`$. At the critical point $`\langle \sigma \rangle `$ vanishes and the system becomes disordered. $`\sigma `$ is called an order parameter. However, one can describe the system in terms of a dual variable $`\sigma ^*`$, on a dual lattice. A dual $`1d`$ configuration with one $`\sigma ^*`$ up is a kink, which is a highly non-local object in terms of $`\sigma `$. In ref. it was shown that the partition function in terms of $`\sigma ^*`$ has the same form as in terms of $`\sigma `$, i.e., that the system in the dual description again looks like an Ising model, except that the new Boltzmann factor $`\beta ^*`$ is related to the old one, $`\beta `$, by the relation
$`\mathrm{sinh}\left(2\beta ^{}\right)={\displaystyle \frac{1}{\mathrm{sinh}\left(2\beta \right)}}.`$ (1)
The disordered phase is an ordered phase for the dual, and vice versa. In the disordered phase $`\langle \sigma ^*\rangle \ne 0`$: kinks condense in the ground state. $`\sigma ^*`$ is called a disorder parameter; it is a dual variable to $`\sigma `$. In this specific case the system is self-dual, and the duality transformation maps the strong coupling regime onto the weak coupling regime and vice versa; a quick numerical illustration of the map (1) is sketched below.
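This is a sketch of the duality map (1), with arbitrary sample couplings, showing that the map is an involution and that the self-dual point satisfies $`\mathrm{sinh}(2\beta _c)=1`$, i.e. $`\beta _c=\frac{1}{2}\mathrm{ln}(1+\sqrt{2})`$:

```python
import math

def dual(beta):
    # Eq. (1): sinh(2 beta*) = 1 / sinh(2 beta)
    return 0.5 * math.asinh(1.0 / math.sinh(2.0 * beta))

beta_c = 0.5 * math.log(1.0 + math.sqrt(2.0))      # ~ 0.4407
print("beta_c =", round(beta_c, 4), " dual(beta_c) =", round(dual(beta_c), 4))
for beta in (0.2, 0.8):
    print(f"beta = {beta}: beta* = {dual(beta):.4f}, dual(dual(beta)) = {dual(dual(beta)):.4f}")
# weak coupling maps to strong coupling and the map squares to the identity
```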
Other systems showing duality properties are the $`3d`$ $`XY`$ model, whose dual is a Coulomb gas in $`3d`$, and the compact $`U(1)`$ gauge theory.
In the $`3d`$ $`XY`$ model topological excitations are the vortices of the $`2d`$ $`XY`$ model. These vortices condense in the disordered phase .
For the $`U(1)`$ theory topological excitations are monopoles. There the duality transformation can be performed for special choices of the action (e.g. the Villain action , which is dual to a $`ℤ`$ gauge theory, and the Wilson action ). For other choices it is not known how to perform the transformation to the dual explicitly.
An alternative approach consists in identifying the symmetry which is spontaneously broken in the disordered phase, i.e., the topological configurations which are supposed to condense, and in writing a disorder parameter in terms of the original local fields . The disorder parameter is then the vacuum expectation value (vev) of a non-local operator.
This approach has been translated to the lattice , tested by numerical simulations in the compact $`U(1)`$ gauge theory , in the $`3d`$ $`XY`$ model , and in the $`O(3)`$ sigma model , and first used to investigate colour confinement in QCD in ref. .
In the early literature on the subject, condensation was demonstrated as a sudden increase of the density of topological excitations. This is incorrect, since disorder can only be described by the vev of an operator which violates the dual symmetry, and the number of excitations does not.
Looking at the symmetry is especially important in QCD. For QCD there exists some general idea about the dual . The dual description should also be a gauge theory, possibly with an interchange of the roles of electric and magnetic quantities.
This idea could fit the mechanism for confinement of colour proposed in refs. as dual superconductivity of the ground state, if confinement were due to disorder and monopoles were the topological excitations which condense. However, a dual superconductor is a typically abelian system, while the disorder parameter is expected to break a non-abelian symmetry. An abelian conserved monopole charge can be associated to each operator in the adjoint representation by a procedure which is known as abelian projection . We recall that procedure in Sect. II. There exists a functional infinity of choices for the operator, and correspondingly an infinity of monopole species. A possibility is that the true disorder symmetry implies the condensation of all these species of monopoles . Some people believe instead that some abelian projection (specifically the Maximal Abelian one) identifies monopoles that are more relevant than others for confinement. Both attitudes reflect our ignorance of the dual description of the system.
In this paper we shall systematically explore condensation of monopoles defined by different abelian projections, in connection with confinement of colour.
We will do that for $`SU(2)`$ gauge theory. The treatment of $`SU(3)`$ will be given in a companion paper. Some of the results have been obtained during the last years and have been reported at conferences and workshops . This paper contains conclusive results, and is an organic report of the methods and of the results obtained after ref. .
Our strategy consists in constructing an operator with non zero magnetic charge, for each abelian projection (Sect. III). Its vev is a candidate disorder parameter for dual superconductivity of the ground state. We shall determine numerically that vev at finite temperature below and above the deconfining phase transition. If condensation of these monopoles is related to confinement, we expect the disorder parameter to be zero in the deconfined phase, and different from zero in the confined phase.
This is strictly speaking true only in the thermodynamic (infinite volume) limit. A finite size scaling analysis allows one to go to that limit and, as a by-product, gives a determination of the transition temperature and of the critical indices, if the transition is of higher order than first. This analysis is presented in Sect. IV.
A special treatment for the Maximal Abelian projection is presented in Sect. V.
We find that the gauge theory vacuum is indeed a dual superconductor in the confined phase, and becomes normal in the deconfined phase, for a number of abelian projections (actually for all projections that we have analyzed), in agreement with the guess of ref. .
The idea that confinement is produced by dual superconductivity is thus definitely confirmed. The guess that all the abelian projections are physically equivalent is also supported, and this is an important piece of information on the way to understand the true dual symmetry.
We find evidence that the $`SU(2)`$ deconfining transition is second order. In the companion paper we will show that for $`SU(3)`$ this transition is first order.
An analysis of full QCD, including quarks, is under way; if the mechanism proves to be the same, the idea that quarks are a kind of perturbation, and that the dynamics is determined by gluons, would be tested. This would also be a test of the ansatz that the theory already contains its essential dynamics at $`N_c=\infty `$, and that the presence of fermions and the extrapolation to $`N_c=3`$ can be viewed as perturbations.
The results are summarized in Sect. VI.
## II The abelian projection
What follows will refer to the case of gauge group $`SU(2)`$. Adaptation to $`SU(3)`$ will be described in the companion paper.
Let $`\widehat{\mathrm{\Phi }}(x)`$ be the direction in colour space of any local operator $`\vec{\mathrm{\Phi }}(x)`$, belonging to the adjoint representation of $`SU(2)`$. A gauge transformation $`g(x)`$ which rotates $`\widehat{\mathrm{\Phi }}(x)`$ to $`(0,0,1)`$, or which diagonalizes $`\widehat{\mathrm{\Phi }}(x)\cdot \vec{\sigma }`$, is called the abelian projection on $`\vec{\mathrm{\Phi }}(x)`$. $`g(x)`$ can be singular in a configuration at the points where $`\vec{\mathrm{\Phi }}(x)`$ has zeros, and $`\widehat{\mathrm{\Phi }}(x)`$ is not defined.
The field strength $`F_{\mu \nu }`$, defined as
$`F_{\mu \nu }=\widehat{\mathrm{\Phi }}(x)\cdot \vec{G}_{\mu \nu }-{\displaystyle \frac{1}{g}}\left(D_\mu \widehat{\mathrm{\Phi }}(x)\wedge D_\nu \widehat{\mathrm{\Phi }}(x)\right)\cdot \widehat{\mathrm{\Phi }}(x)`$ (2)
is a colour singlet, and is invariant under non singular gauge transformations.
In general eq. (2) can be written as
$`F_{\mu \nu }`$ $`=`$ $`\partial _\mu \stackrel{~}{A}_\nu -\partial _\nu \stackrel{~}{A}_\mu `$ (3)
$`-`$ $`{\displaystyle \frac{1}{g}}\left(\partial _\mu \widehat{\mathrm{\Phi }}(x)\wedge \partial _\nu \widehat{\mathrm{\Phi }}(x)\right)\cdot \widehat{\mathrm{\Phi }}(x).`$ (4)
with
$`\stackrel{~}{A}_\mu =\widehat{\mathrm{\Phi }}^aA_\mu ^a.`$ (5)
In the abelian projected gauge $`\widehat{\mathrm{\Phi }}(x)`$ is constant, the second term on the right hand side of eq. (3) vanishes and the field $`F_{\mu \nu }`$ becomes an abelian field.
Denoting by $`F_{\mu \nu }^{*}`$ the usual dual tensor,
$`F_{\mu \nu }^{*}={\displaystyle \frac{1}{2}}ϵ_{\mu \nu \rho \sigma }F^{\rho \sigma },`$ (6)
and defining the magnetic current as
$`j_\mu `$ $`=`$ $`\partial ^\nu F_{\mu \nu }^{*},`$ (7)
it follows from eq.’s (3), (6), (7) that
$`\partial ^\mu j_\mu =0.`$ (8)
The magnetic charge is conserved, and defines a magnetic $`U(1)`$ symmetry.
The abelian projection $`g(x)`$ can have singularities, and as a consequence an additional field strength adds to the usual covariant gauge transform of $`G_{\mu \nu }`$ .
After abelian projection
$`G_{\mu \nu }=g\,G_{\mu \nu }\,g^{-1}+G_{\mu \nu }^{sing},`$ (9)
with $`\vec{G}_{\mu \nu }^{sing}=\widehat{\mathrm{\Phi }}(x)\left(\partial _\mu \stackrel{~}{A}_\nu ^{sing}-\partial _\nu \stackrel{~}{A}_\mu ^{sing}\right)`$ parallel to the colour direction $`\widehat{\mathrm{\Phi }}(x)`$, and consisting of Dirac strings starting at the zeros of $`\vec{\mathrm{\Phi }}(x)`$. The field configurations contain monopoles at the zeros of $`\vec{\mathrm{\Phi }}(x)`$, as sinks or sources of the regular field, and the strings carry away the corresponding magnetic flux.
On a lattice (or in any other compact regularized description in terms of parallel transport) the Dirac string reduces to an additional flux of $`2\pi `$ across a sequence of plaquettes, which is invisible .
The mechanism relating confinement of colour to dual superconductivity of the vacuum advocates a spontaneous breaking à la Higgs of the magnetic $`U(1)`$ symmetry described by (8), which constrains the electric component of the field eq. (3) into flux tubes.
All particles which have non zero electric charge with respect to the residual $`U(1)`$ (3) will then be confined. There exist coloured states, e.g. gluons oriented parallel to $`\widehat{\mathrm{\Phi }}(x)`$, which are not confined. This is a strong argument that, if dual superconductivity is a mechanism of confinement at all, it must exist in many different abelian projections, as a manifestation of non abelian disorder.
On the lattice the abelian gauge field corresponding to any given projection, or $`\widehat{\mathrm{\Phi }}(x)`$, is extracted as follows .
Let $`\overline{U}_\mu (n)=g(n)U_\mu (n)g^{\dagger }(n)`$ be the generic link after abelian projection. We adopt the usual notation $`U_\mu (n)\equiv U_\mu (\vec{n},t)\simeq \mathrm{exp}(iA_\mu (n))`$, with $`A_\mu (n)=\vec{A}_\mu (n)\cdot \vec{\sigma }`$.
The representation in terms of Euler angles has the form
$`\overline{U}_\mu `$ $`=`$ $`e^{i\alpha _\mu \sigma _3}e^{i\gamma _\mu \sigma _2}e^{i\beta _\mu \sigma _3}`$ (10)
$`=`$ $`e^{i\alpha _\mu \sigma _3}e^{i\gamma _\mu \sigma _2}e^{-i\alpha _\mu \sigma _3}e^{i\left(\beta _\mu +\alpha _\mu \right)\sigma _3}`$ (11)
$`=`$ $`e^{i\vec{\gamma }_\mu ^T\cdot \vec{\sigma }}e^{i\theta _\mu \sigma _3},\theta _\mu =\alpha _\mu +\beta _\mu ,`$ (12)
and $`\vec{\gamma }^T`$ is a vector perpendicular to the $`3`$ axis. We assume the usual representation in which $`\sigma _3`$ is diagonal.
For a plaquette, a similar decomposition can be performed,
$`\mathrm{\Pi }_{\mu \nu }=e^{i\vec{\gamma }_{\mu \nu }^T\cdot \vec{\sigma }}e^{i\theta _{\mu \nu }\sigma _3},`$ (13)
$`\theta _{\mu \nu }=\mathrm{\Delta }_\mu \theta _\nu -\mathrm{\Delta }_\nu \theta _\mu `$ up to terms $`𝒪(a^2)`$. $`\theta _{\mu \nu }`$ is the lattice analog of $`F_{\mu \nu }`$. The abelian magnetic flux is conserved by construction. A monopole appears whenever the flux entering five faces of a spatial cube adds to more than $`2\pi `$: then the flux through the sixth face is larger than $`2\pi `$, but multiples of $`2\pi `$ are invisible in the exponent. Formally
$`\theta _{\mu \nu }=\overline{\theta }_{\mu \nu }+2\pi n_{\mu \nu },`$ (14)
with $`-\pi <\overline{\theta }_{\mu \nu }<\pi `$. A string through the sixth face takes care of the flux which has disappeared.
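For concreteness, the counting implied by eq. (14) can be sketched as follows (a minimal illustration in the spirit of the standard DeGrand-Toussaint construction, not the code used for the results below; the storage of the abelian plaquette angles in a numpy array is an assumption):

```python
import numpy as np

def reduce_angle(theta):
    # map an angle to (-pi, pi]; the discarded multiple of 2*pi
    # is the Dirac string contribution n_{mu nu} of eq. (14)
    return theta - 2.0 * np.pi * np.round(theta / (2.0 * np.pi))

def monopole_charge(theta, site, L):
    """Monopole charge in the spatial cube at `site` = (x, y, z, t).

    `theta[mu, nu, x, y, z, t]` is assumed to hold the abelian plaquette
    angles theta_{mu nu}(n) on a periodic lattice of spatial size L
    (mu, nu = 0, 1, 2 spatial).  The net reduced flux out of the cube
    is 2*pi times an integer: the monopole charge."""
    x, y, z, t = site
    flux = 0.0
    for i, j, k in [(1, 2, 0), (2, 0, 1), (0, 1, 2)]:  # face (i,j) normal to k
        fwd = [x, y, z]
        fwd[k] = (fwd[k] + 1) % L
        flux += reduce_angle(theta[i, j, fwd[0], fwd[1], fwd[2], t])
        flux -= reduce_angle(theta[i, j, x, y, z, t])
    return int(round(flux / (2.0 * np.pi)))
```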
We shall construct a disorder parameter for monopole condensation as the vev $`\langle \mu \rangle `$ of an operator $`\mu `$ carrying non zero magnetic charge. $`\langle \mu \rangle \ne 0`$ will signal dual superconductivity.
## III The disorder parameter
The disorder parameter will be constructed on the same lines as in ref.’s .
An improvement exists with respect to ref. , which consists in properly taking the compactness into account: in ref. the approximation was that the field was treated as non compact. The same improvement was done in ref. with respect to ref. .
All the results presented in ref.’s already contain such improvement.
We first analyze the case in which $`\vec{\mathrm{\Phi }}(x)`$ is determined by the Polyakov line, i.e. the closed parallel transport to $`+\infty `$ along the time axis and back from $`-\infty `$ to the initial point via the periodic boundary conditions. For this choice, after abelian projection all the links $`U_0(n)`$ along the temporal axis are diagonal, of the form $`\overline{U}_0(n)=\mathrm{exp}(iA_0^3(n)\sigma _3)`$.
Assuming for the sake of definiteness the Wilson action, we construct the operator $`\mu (\vec{y},t)`$ which creates a monopole at site $`\vec{y}`$ and time $`t`$ with the following recipe (a similar construction can be made for other actions).
Let $`\vec{A}^M(\vec{x},\vec{y})`$ be the vector potential describing the field value at site $`\vec{x}`$ of a static monopole sitting at $`\vec{y}`$. We shall write it as
$`\vec{A}^M(\vec{x},\vec{y})=\vec{A}_{\perp }^M(\vec{x},\vec{y})+\vec{\nabla }\mathrm{\Lambda }(\vec{x},\vec{y}),`$ (15)
with $`\vec{\nabla }\cdot \vec{A}_{\perp }^M(\vec{x},\vec{y})=0`$.
The first term describes the physical part of $`\stackrel{}{A}^M`$, the second term the classical gauge freedom.
Let $`\mathrm{\Pi }_{i0}`$ be the electric field plaquette at time $`t`$. Then we define
$`\mu =\mathrm{exp}\left[-\beta \mathrm{\Delta }S\right],`$ (16)
$`\mathrm{\Delta }S={\displaystyle \frac{1}{2}}{\displaystyle \sum _{\vec{n}}}\text{Tr}\left\{\mathrm{\Pi }_{i0}(\vec{n},t)-\mathrm{\Pi }_{i0}^{\prime }(\vec{n},t)\right\}.`$ (17)
Here
$`\mathrm{\Pi }_{i0}(\vec{n},t)`$ $`=`$ $`U_i(\vec{n},t)U_0(\vec{n}+\widehat{ı},t)`$ (19)
$`\left(U_i(\vec{n},t+1)\right)^{\dagger }\left(U_0(\vec{n},t)\right)^{\dagger }`$
is the electric field term of the action, and $`\mathrm{\Pi }_{i0}^{\prime }`$ is a modification of it, defined as
$`\mathrm{\Pi }_{i0}^{\prime }(\vec{n},t)`$ $`=`$ $`U_i(\vec{n},t)U_0(\vec{n}+\widehat{ı},t)`$ (21)
$`\left(U_i^{\prime }(\vec{n},t+1)\right)^{\dagger }\left(U_0(\vec{n},t)\right)^{\dagger },`$
$`U_i^{\prime }(\vec{n},t+1)`$ $`=`$ $`e^{i\mathrm{\Lambda }(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n},t)\cdot \vec{\sigma }}U_i(\vec{n},t+1)`$ (23)
$`e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}e^{-i\mathrm{\Lambda }(\vec{n}+\widehat{ı},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}.`$
The disorder parameter is defined as the vev $`\langle \mu \rangle `$, or
$`\langle \mu \rangle ={\displaystyle \frac{{\displaystyle \int }\left(𝒟U\right)e^{-\beta (S+\mathrm{\Delta }S)}}{{\displaystyle \int }\left(𝒟U\right)e^{-\beta S}}}.`$ (24)
It follows from the definition (17) that adding $`\mathrm{\Delta }S`$ to the action amounts to replacing the term $`\mathrm{\Pi }_{i0}`$ at time $`t`$ with $`\mathrm{\Pi }_{i0}^{\prime }`$.
The $`\mathrm{\Pi }_{i0}(\vec{n},t)`$ are the only terms in the action where the $`U_0(\vec{n},t)`$ appear. In the path integral (24) a change of variables $`U_0(\vec{n},t)\to U_0^{\prime }(\vec{n},t)=U_0(\vec{n},t)e^{i\mathrm{\Lambda }(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n},t)\cdot \vec{\sigma }}`$ leaves the Haar measure invariant and reabsorbs the unphysical gauge factor of eq. (23), so that $`\langle \mu \rangle `$ is independent, as it must be, of the choice of the classical gauge for the field produced by the monopole.
Also a change of variables can be made
$`U_i(\vec{n},t+1)\to U_i(\vec{n},t+1)e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}.`$ (25)
Again, this leaves the measure invariant, and brings $`\mathrm{\Pi }_{i0}(\vec{n},t)`$ back to its original form. However in the plaquette $`\mathrm{\Pi }_{ij}(\vec{n},t+1)`$ it produces the change $`U_i(\vec{n},t+1)\to U_i(\vec{n},t+1)e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$. By the construction of Sect. II this amounts to the change, in the abelian projected gauge,
$`\theta _{ij}(\vec{n},t+1)`$ $`\to `$ $`\theta _{ij}(\vec{n},t+1)+`$ (27)
$`\mathrm{\Delta }_iA_j^M(\vec{n},\vec{y})-\mathrm{\Delta }_jA_i^M(\vec{n},\vec{y}),`$
or to adding the magnetic field of a monopole.
The same redefinition of variables is reflected in the change
$`\mathrm{\Pi }_{i0}(\vec{n},t+1)\to \mathrm{\Pi }_{i0}^{\prime }(\vec{n},t+1),`$ (28)
analogous to equation (21). Again the gauge factors $`e^{i\mathrm{\Lambda }(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n},t)\cdot \vec{\sigma }}`$, $`e^{-i\mathrm{\Lambda }(\vec{n}+\widehat{ı},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$ are irrelevant, since they can be reabsorbed in a redefinition of $`U_0(\vec{n},t+1)`$ . $`e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$ commutes with $`U_0(\vec{n}+\widehat{ı},t+1)`$, which is diagonal together with it, by definition of the Polyakov line abelian projection.
In detail
$`\mathrm{\Pi }_{i0}^{\prime }(\vec{n},t+1)`$ $`=`$ $`U_i(\vec{n},t+1)e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$ (31)
$`U_0(\vec{n}+\widehat{ı},t+1)`$
$`\left(U_i(\vec{n},t+2)\right)^{\dagger }\left(U_0(\vec{n},t+1)\right)^{\dagger }`$
$`=`$ $`U_i(\vec{n},t+1)U_0(\vec{n}+\widehat{ı},t+1)`$ (33)
$`e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$
$`\left(U_i(\vec{n},t+2)\right)^{\dagger }\left(U_0(\vec{n},t+1)\right)^{\dagger }.`$ (34)
A new change of variables can be done analogous to (25), exposing now a monopole at $`t+2`$ and producing a change $`\mathrm{\Pi }_{i0}(\vec{n},t+2)\to \mathrm{\Pi }_{i0}^{\prime }(\vec{n},t+2)`$. The procedure can be iterated. If an antimonopole is created at $`t+T`$, by an operator analogous to that of (16), but with $`\vec{A}_{\perp }^M\to -\vec{A}_{\perp }^M`$, then at time $`t+T`$ the change cancels and the procedure stops.
This shows that the correlation function
$`𝒟(T)=\langle \overline{\mu }(\vec{y},t+T)\mu (\vec{y},t)\rangle `$ (35)
indeed describes the creation of a monopole at $`\vec{y}`$ at time $`t`$ and its propagation from $`t`$ to $`t+T`$. The argument in this gauge is perfectly analogous to the argument for compact $`U(1)`$ gauge theory . The construction is the compact version of that of ref. .
At large $`T`$, by the cluster property,
$`𝒟(T)\underset{T\to \infty }{\simeq }A\mathrm{exp}(-MT)+\langle \mu \rangle ^2,`$ (36)
where the equality $`\langle \mu \rangle =\langle \overline{\mu }\rangle `$ has been used, stemming from charge conjugation invariance.
$`\langle \mu \rangle \ne 0`$ indicates spontaneous breaking of the $`U(1)`$ magnetic symmetry defined in Sect. II eq. (8), and hence dual superconductivity . $`\langle \mu \rangle `$ is the corresponding disorder parameter. In the thermodynamic limit we expect $`\langle \mu \rangle \ne 0`$ below the deconfining transition, $`\langle \mu \rangle =0`$ above it. At finite volume $`\langle \mu \rangle `$ can not vanish for $`\beta >\beta _C`$ without vanishing identically, since it is an entire function of $`\beta `$. Only in the limit $`N_s\to \infty `$ do singularities develop , and $`\langle \mu \rangle `$ can vanish.
$`M`$ is the lowest mass with the quantum numbers of a monopole. In the Landau-Ginzburg model of superconductivity it corresponds to the Higgs mass. When compared to the inverse penetration depth of the field, it can give information on the type of superconductor. We will discuss the determination of $`\langle \mu \rangle `$ in the next section.
For numerical reasons it will prove convenient to determine
$`\rho ={\displaystyle \frac{\text{d}}{\text{d}\beta }}\mathrm{log}\langle \mu \rangle ,`$ (37)
which, by use of eq. (24), amounts to the difference of two actions
$`\rho =\langle S\rangle _S-\langle S+\mathrm{\Delta }S\rangle _{S+\mathrm{\Delta }S}.`$ (38)
$`\rho `$ contains all the information we need. At finite temperature the lattice is asymmetric ($`N_s^3\times N_t`$ with $`N_t\ll N_s`$), the quantities which can be computed are static, and the vev $`\langle \mu \rangle `$ of the single operator $`\mu `$ must be computed directly. Indeed there is no way of putting a monopole and an antimonopole at large distance along the $`t`$ axis as we do at $`T=0`$, since at $`T\simeq T_c`$, $`N_ta`$ is comparable to the correlation length. $`C^{*}`$-periodic boundary conditions in time ($`U_\mu (\vec{n},N_t)=U_\mu ^{*}(\vec{n},0)`$, where $`U^{*}`$ is the complex conjugate of $`U`$ ) are needed. The magnetic charge is conserved. If we create a monopole, say at $`t=1`$, and we propagate it to $`t=N_t`$ by the changes of variables described above, the magnetic charge at $`t=N_t`$ will differ by one unit from that at $`t=0`$, and this is inconsistent with periodic boundary conditions. With $`C^{*}`$-periodic boundary conditions the magnetic field at $`N_t`$ is opposite to the one at $`t=0`$, since under complex conjugation the term proportional to $`\sigma _3`$ in eq. (10) changes sign. By the change of variables eq. (25) a magnetic field is then added with opposite sign at $`N_t`$. This produces a dislocation with magnetic charge $`-1`$ at the boundary, which plays the role of the antimonopole in eq. (35).
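For concreteness, a link lookup implementing the $`C^{*}`$-periodic time boundary might look as follows (a sketch; the array layout and the helper name are assumptions, not the actual code used here):

```python
import numpy as np

def get_link(U, n, mu, Nt):
    """Fetch U_mu(n) with C*-periodic time boundary conditions.

    U is assumed to be an array U[mu, x, y, z, t] of SU(2) matrices with
    t = 0 .. Nt-1 (spatial coordinates already wrapped periodically).
    Each crossing of the time boundary complex-conjugates the link,
    U_mu(n, Nt) = U_mu(n, 0)^*, which flips the sign of the sigma_3
    (abelian) component as described above."""
    x, y, z, t = n
    wraps = t // Nt                 # number of time-boundary crossings
    link = U[mu, x, y, z, t % Nt]
    return np.conj(link) if wraps % 2 else link
```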
With a generic choice of the abelian projection different from the Polyakov line, we can define the operator $`\mu `$ in a similar way, by sending
$`\mathrm{\Pi }_{i0}(\vec{n},t)\to \mathrm{\Pi }_{i0}^{\prime }(\vec{n},t),`$ (39)
according to eq. (21).
Again to demonstrate that a monopole is created at $`t+1`$ we can perform the change of variables, eq. (25), and expose a change of the abelian magnetic field at $`t+1`$ given by eq. (27).
However now the resulting change of $`\mathrm{\Pi }_{i0}(\vec{n},t+1)`$ is
$`\mathrm{\Pi }_{i0}^{\prime }(\vec{n},t+1)`$ $`=`$ $`U_i(\vec{n},t+1)e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$ (41)
$`U_0(\vec{n}+\widehat{ı},t+1)`$
$`\left(U_i(\vec{n},t+2)\right)^{\dagger }\left(U_0(\vec{n},t+1)\right)^{\dagger }`$ (42)
$`=`$ $`U_i(\vec{n},t+1)U_0(\vec{n}+\widehat{ı},t+1)`$ (44)
$`\left(U_0(\vec{n}+\widehat{ı},t+1)\right)^{\dagger }e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$
$`U_0(\vec{n}+\widehat{ı},t+1)`$ (46)
$`\left(U_i(\vec{n},t+2)\right)^{\dagger }\left(U_0(\vec{n},t+1)\right)^{\dagger }.`$
The change of $`U_i(\vec{n},t+2)`$ is by a factor on the right
$`\left(U_0(\vec{n}+\widehat{ı},t+1)\right)^{\dagger }e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$ (47)
$`U_0(\vec{n}+\widehat{ı},t+1).`$ (48)
$`U_0`$ does not commute with $`e^{iA_i^M(\vec{n},\vec{y})\widehat{\mathrm{\Phi }}(\vec{n}+\widehat{ı},t)\cdot \vec{\sigma }}`$, as it did in the case in which $`\widehat{\mathrm{\Phi }}(\vec{n},t)`$ was in the direction of the Polyakov line.
This looks at first sight like a complication, but it is not. Indeed the abelian projected phase of a product of links is the sum of the abelian projected phases of the factors, to $`𝒪(a^2)`$. From eq. (13) it follows that, to $`𝒪(a^2)`$,
$`e^{i\vec{\gamma }^{T1}\cdot \vec{\sigma }}e^{i\gamma _z^1\sigma _3}e^{i\vec{\gamma }^{T2}\cdot \vec{\sigma }}e^{i\gamma _z^2\sigma _3}=e^{i\vec{\mathrm{\Gamma }}^T\cdot \vec{\sigma }}e^{i(\gamma _z^1+\gamma _z^2)\sigma _3}.`$ (49)
Hence the abelian phases of $`U_0,U_0^{\dagger }`$ in (47) cancel to $`𝒪(a^2)`$ and the abelian projected field of the modified plaquette at time $`t+2`$ is again changed according to eq. (27).
## IV Numerical results for $`\rho `$
We will determine the temperature dependence of $`\rho `$ on an asymmetric lattice $`N_s^3\times N_t`$ ($`N_s\gg N_t`$).
For reasons which will become clear in what follows, we will distinguish between abelian projections in which the operator $`\widehat{\mathrm{\Phi }}(\vec{n},t)`$ which defines the monopoles is explicitly known, and projections (like the so called Maximal Abelian) in which the projection is fixed by a maximizing procedure, and $`\widehat{\mathrm{\Phi }}(\vec{n},t)`$ is not explicitly known.
In the first category we studied the following projections. We define the operator $`\widehat{\mathrm{\Phi }}(x)=\vec{\mathrm{\Phi }}(x)/|\vec{\mathrm{\Phi }}(x)|`$ starting from an operator $`𝒪`$ which is an element of the group, by the formula
$`𝒪=𝒪_0+i\vec{\mathrm{\Phi }}(x)\cdot \vec{\sigma }.`$
* $`𝒪`$ is connected to the Polyakov line $`L(\vec{n},t)=\mathrm{\Pi }_{t^{\prime }=t}^{N_t-1}U_0(\vec{n},t^{\prime })\mathrm{\Pi }_{t^{\prime }=0}^{t-1}U_0(\vec{n},t^{\prime })`$ as follows<sup>*</sup><sup>*</sup>*by $`*`$ we indicate the complex conjugation operation.:
$`𝒪(\vec{n},t)=\mathrm{\Pi }_{t^{\prime }=t}^{N_t-1}U_0(\vec{n},t^{\prime })L^{*}(\vec{n},0)\mathrm{\Pi }_{t^{\prime }=0}^{t-1}U_0(\vec{n},t^{\prime });`$ (50)
* $`𝒪`$ is an open plaquette, i.e. a parallel transport on an elementary square of the lattice
$`𝒪(n)`$ $`=`$ $`\mathrm{\Pi }_{ij}(n)=U_i(n)U_j(n+\widehat{ı})`$ (52)
$`\left(U_i(n+\widehat{ȷ})\right)^{\dagger }\left(U_j(n)\right)^{\dagger };`$
* “butterfly” projection, where the projecting operator is
$`𝒪(n)=F(n)`$ $`=`$ $`U_x(n)U_y(n+\widehat{x})\left(U_x(n+\widehat{y})\right)^{\dagger }`$ (55)
$`\left(U_y(n)\right)^{\dagger }U_z(n)U_t(n+\widehat{z})`$
$`\left(U_z(n+\widehat{t})\right)^{\dagger }\left(U_t(n)\right)^{\dagger }.`$
The trace of $`F`$ is the density of topological charge. The projection defined in (50) is the Polyakov projection on a $`C^{*}`$-periodic lattice.
From eq. (37) and the condition $`\langle \mu \rangle (\beta =0)=1`$ we obtain
$`\langle \mu \rangle (\beta )=\mathrm{exp}\left({\displaystyle \int _0^\beta }\rho (\beta ^{\prime })\text{d}\beta ^{\prime }\right).`$ (56)
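In practice $`\mathrm{log}\langle \mu \rangle `$ is obtained by integrating the measured $`\rho (\beta )`$ numerically; a minimal sketch (trapezoidal rule, assuming the scan of couplings starts at $`\beta =0`$, where $`\langle \mu \rangle =1`$):

```python
import numpy as np

def log_mu_from_rho(betas, rhos):
    """Reconstruct log<mu>(beta) = int_0^beta rho(beta') dbeta' from a
    measured scan (betas, rhos), sorted in beta and starting at 0."""
    betas, rhos = np.asarray(betas), np.asarray(rhos)
    increments = 0.5 * (rhos[1:] + rhos[:-1]) * np.diff(betas)
    return np.concatenate(([0.0], np.cumsum(increments)))
```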
If $`\langle \mu \rangle `$ (defined in any abelian projection) is a disorder parameter for the deconfining phase transition, we expect that in the thermodynamic limit ($`N_s\to \infty `$, $`N_t`$ constant) $`\rho `$ goes to a finite bounded value in the strong coupling region, i.e. in the region below the deconfining transition. In the weak coupling region $`\langle \mu \rangle `$ should go to zero in the same limit, i.e. $`\rho `$ must go to $`-\infty `$. In the critical region we expect an abrupt decrease of $`\langle \mu \rangle `$, and hence a sharp negative peak in $`\rho `$.
A few details about the numerical computation follow. According to eq. (38), $`\rho `$ is the difference between two actions: the standard $`SU(2)`$ Wilson action and the “monopole” action $`S+\mathrm{\Delta }S`$.
For the Wilson term the simulation can be performed by using a heat-bath algorithm. This is not possible in the case of the “monopole” action. Consider for example the Polyakov projection and a single monopole operator $`\mu (\vec{y},0)`$. In the updating procedure, we can distinguish the following four cases:
1. update of a spatial link at $`t\ne 0,1`$. The plaquettes involved have Wilson's form and the variation of the “monopole” action is linear with respect to the link we are updating;
2. update of a spatial link at $`t=0,1`$. Although some plaquettes are modified by the monopole term, the variation of the modified action $`S+\mathrm{\Delta }S`$ is again linear with respect to the link, because the field $`\mathrm{\Phi }`$ does not depend on the link we are changing;
3. update of a temporal link at $`t\ne 0`$. The local variation of the action is linear, but the change also induces a change of the Polyakov loop, i.e. of $`\mathrm{\Phi }`$, according to eq. (50), so that there is an effect on the action which is non linear;
4. update of a temporal link at $`t=0`$. We can not define a force, because due to the change of the corresponding Polyakov loop the change of the modified action is non-linear.
In order to perform numerical simulations of the system described by the “monopole” action, an appropriate algorithm could be Metropolis; however, this method can have long correlation times. In view of improving decorrelation, we have performed simulations by using a heat-bath algorithm for the update of the spatial links and a Metropolis algorithm for the update of the temporal links.
Similar techniques can be used for the other projections we have investigated: in all cases, we have chosen to use the heat-bath updating when the contribution to the action is linear with respect to the link we are changing, and the Metropolis algorithm when it is not. As a test we verified that the mixed update works correctly for the Wilson action.
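In outline, the dispatch between the two updates reads as follows (a sketch; `heatbath_update`, `metropolis_update` and the `lattice` container are hypothetical stand-ins for the actual code):

```python
def mixed_sweep(lattice):
    """One sweep of the mixed strategy: heat-bath where the modified
    action is linear in the link being updated (cases 1 and 2 above),
    Metropolis where the Polyakov loop, and hence Phi, depends on the
    link non-linearly (cases 3 and 4)."""
    for n in lattice.sites():
        for mu in range(4):
            if mu == 3:                       # temporal link
                metropolis_update(lattice, n, mu)
            else:                             # spatial link
                heatbath_update(lattice, n, mu)
```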
The simulation was done on a 128-node APE Quadrics machine. We used an overrelaxed heat-bath algorithm to compute the Wilson term of eq. (38), and a mixed algorithm as described above for the other term. Far from the critical region, at each $`\beta `$ typically 4000 thermalized configurations were produced, each of them taken after 4 sweeps. The errors are computed with a jackknife analysis applied to the data binned in bunches of different lengths. As the error we took the maximum of the standard deviation as a function of the bin length at the plateau. In the critical region higher statistics are required. Typically the Wilson term is more noisy. Thermalization was checked by monitoring the action density and the probability distribution of the trace of the Polyakov loop.
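The binning-and-plateau error estimate described above is, in outline (a sketch):

```python
import numpy as np

def jackknife_error(data, bin_len):
    """Jackknife standard error of the mean for data grouped in bins of
    length bin_len; scanning bin_len and taking the value at the plateau
    gives the quoted error."""
    data = np.asarray(data)
    nbins = len(data) // bin_len
    bins = np.mean(np.reshape(data[:nbins * bin_len], (nbins, bin_len)), axis=1)
    jk = (np.sum(bins) - bins) / (nbins - 1)   # leave-one-bin-out means
    return np.sqrt((nbins - 1) * np.var(jk))
```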
The discussion of Sect. III implies that different choices for $`\vec{A}_{\perp }^M`$ are equivalent: eq. (27) shows that only the magnetic field of the monopole determines the value of $`\langle \mu \rangle `$. In our simulation we used the Wu-Yang form of $`\vec{A}_{\perp }^M`$. We checked that the Dirac form (with different positions of the string) gives compatible results.
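For reference, the two forms can be written explicitly in Cartesian components (a sketch; the normalization $`g`$ of the magnetic charge is a convention assumed here):

```python
import numpy as np

def A_dirac(x, y, z, g=0.5):
    """Dirac form, string along the negative z axis:
    A = g / (r (r + z)) * (-y, x, 0)."""
    r = np.sqrt(x * x + y * y + z * z)
    return g / (r * (r + z)) * np.array([-y, x, 0.0])

def A_wu_yang(x, y, z, g=0.5):
    """Wu-Yang form: in each hemisphere use the patch whose string
    lies in the other hemisphere, so no string is ever crossed."""
    r = np.sqrt(x * x + y * y + z * z)
    if z >= 0.0:    # north patch (string along -z)
        return g / (r * (r + z)) * np.array([-y, x, 0.0])
    return -g / (r * (r - z)) * np.array([-y, x, 0.0])   # south patch
```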
In simulations of $`S+\mathrm{\Delta }S`$ we found that correlation times are small and under control for $`N_t=4`$. For $`N_t=6`$ in the critical region thermalization problems arise and modes with long correlation time appear. For this reason, we have used mainly lattices with $`N_t=4`$.
Fig. 1 shows the typical behaviour of $`\rho `$ for different abelian projections, for a lattice $`12^3\times 4`$. The negative peak occurs at the expected transition point, $`\beta _C`$. Below $`\beta _C`$ the different projections are indistinguishable within errors, suggesting that different monopoles behave in the same way.
Fig. 2 shows the comparison with a $`18^3\times 6`$ lattice. The peak is displaced to the correct $`\beta _C`$, showing that it is not an artifact but is related to deconfinement of colour.
Since different projections give indistinguishable results, for the sake of simplicity we shall only display the plaquette projection in the following figures.
Fig. 3 shows the dependence of $`\rho `$ on $`N_s`$ at fixed $`N_t=4`$. The qualitative behaviour does not change when we increase the lattice size. We now try to understand the thermodynamic limit.
In the strong coupling region (cf. fig. 4) $`\rho `$ seems to converge to a finite value, which is consistent with $`0`$ at low $`\beta `$'s. Eq. (56) then implies that $`\langle \mu \rangle \ne 0`$ in the infinite volume limit in the confined phase.
The weak coupling region is perturbative. An estimate of $`\rho `$ is obtained by taking the minimum over the ensemble of the configurations $`U`$, given by the action of classical solutions of the system described by $`S+\mathrm{\Delta }S`$:
$`\rho \underset{\beta \to \infty }{\simeq }`$ $`\left[\underset{U}{min}\{S\}-\underset{U}{min}\{S+\mathrm{\Delta }S\}\right]`$ (57)
$`=`$ $`-\underset{U}{min}\{S+\mathrm{\Delta }S\},`$ (58)
since $`min_U\{S\}=0`$.
In other systems, where the same shifting procedure has been applied and studied, this asymptotic value has been calculated analytically in perturbation theory, with the result
$`\rho =-(cN_s+d),`$ (59)
where $`c`$ and $`d`$ are constants, i.e. $`\rho `$ diverges linearly with the spatial size of the lattice.
In $`SU(2)`$ we are unable to perform the same calculation, and we have evaluated the minimum $`min_U\{S+\mathrm{\Delta }S\}`$ numerically. Some technical remarks on the numerical procedure follow. An oversimplified strategy would be to start from a random configuration and then decrease the action by Metropolis-like steps in which the new configuration is accepted only if its action is lower. However this procedure does not work, because of the presence of local minima where the procedure often stops. A way to overcome this difficulty is to perform a usual Monte-Carlo simulation where $`\beta `$ is increased indefinitely during the simulation . This is equivalent to freezing the system.
We found it useful to combine the two strategies. First we freeze the system, increasing $`\beta `$ in the following way:
1. we thermalize the system at a reasonable $`\beta `$ (e.g. $`\beta =10`$);
2. we increase $`\beta `$ by a fraction $`1/200`$ and at the new value of $`\beta `$ we perform a number of sweeps (typically 200), looking for the corresponding minimum of the action;
3. we iterate step 2 until the minimum of the action looks stable over a larger number of sweeps (typically 5000).
When this procedure becomes inefficient (typically for $`\beta \simeq 10^6`$), we switch to a Metropolis-like minimization, which is stopped when the action stays constant within errors; a sketch of the freezing stage is given below.
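In outline (a sketch; `thermalize`, `sweep_at` and `action` are hypothetical hooks into the simulation code):

```python
def freeze(cfg, beta0=10.0, beta_max=1e6):
    """Freezing stage of the minimization: raise beta by ~1/200 per step,
    sweep, and stop when the running minimum of the action is stable."""
    beta = beta0
    thermalize(cfg, beta)                     # step 1
    history = [action(cfg)]
    while beta < beta_max:
        beta *= 1.0 + 1.0 / 200.0             # step 2
        for _ in range(200):
            sweep_at(cfg, beta)
        history = [action(cfg)]
        for _ in range(5000):                 # step 3: stability test
            sweep_at(cfg, beta)
            history.append(action(cfg))
        if max(history) - min(history) < 1e-6 * (1.0 + abs(history[0])):
            break                             # hand over to the
    return min(history)                       # Metropolis-like minimization
```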
The result is shown in fig. 5 for the plaquette projection. It is consistent with the linear dependence of eq. (59) with $`c\simeq 0.6`$ and $`d\simeq 12`$. Thus in the weak coupling region in the thermodynamic limit $`\rho `$ goes to $`-\infty `$ linearly with the spatial lattice size and
$`\langle \mu \rangle \underset{N_s\to \infty }{\simeq }Ae^{-(cN_s+d)\beta }\to 0,\beta >\beta _C.`$ (60)
The magnetic U(1) symmetry is restored in the deconfined phase.
To sum up, $`\langle \mu \rangle `$ is different from zero at least in a wide range of $`\beta `$ below $`\beta _C`$ and goes to zero exponentially with the lattice size for $`\beta >\beta _C`$. The strong coupling region and the weak coupling one must be connected by a decrease of $`\langle \mu \rangle `$; the sharp peak of $`\rho `$ signals that this decline is abrupt and takes place in the critical region.
To understand the behaviour of $`\rho `$ near the critical point we shall use finite size scaling analysis. By a dimensional argument
$`\langle \mu \rangle =N_s^{-\delta /\nu }\mathrm{\Phi }({\displaystyle \frac{\xi }{N_s}},{\displaystyle \frac{a}{\xi }},{\displaystyle \frac{N_t}{N_s}}),`$ (61)
where $`a`$ and $`\xi `$ are respectively the lattice spacing and the correlation length of the system.
Near the critical point, for $`\beta <\beta _C`$
$`\xi \propto \left(\beta _C-\beta \right)^{-\nu },`$ (62)
where $`\nu `$ is the corresponding critical exponent. In the limit $`N_s\gg N_t`$ and for $`a/\xi \ll 1`$, i.e. sufficiently close to the critical point, we obtain
$`\langle \mu \rangle =N_s^{-\delta /\nu }\mathrm{\Phi }(N_s^{1/\nu }\left(\beta _C-\beta \right),0,0)`$ (63)
or equivalently
$`{\displaystyle \frac{\rho }{N_s^{1/\nu }}}=f\left(N_s^{1/\nu }\left(\beta _C-\beta \right)\right).`$ (64)
The ratio $`\rho /N_s^{1/\nu }`$ is a universal function of the scaling variable
$`x=N_s^{1/\nu }\left(\beta _C-\beta \right).`$ (65)
Since critical values of $`\beta `$ and critical indices of $`SU(2)`$ pure gauge theory are well known , we can check how well scaling is obeyed by plotting $`\rho /N_s^{1/\nu }`$ as a function of $`x`$.
Fig. 6 shows the quality of the scaling in the plaquette projection for $`\beta _C=2.2986`$ and $`\nu =0.63`$. Similar qualitative results have been obtained for the Polyakov projection.
As a further check, we can vary $`\nu `$ and try to estimate “by eye” the sensitivity of our data to this exponent. We find that in both projections the scaling relation is satisfied within errors for $`0.57\lesssim \nu \lesssim 0.67`$.
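The collapse test itself is straightforward; a sketch:

```python
import numpy as np

def scaling_collapse(datasets, beta_c=2.2986, nu=0.63):
    """Rescale rho measured on several lattice sizes onto the universal
    curve rho / Ns^(1/nu) = f(x), x = Ns^(1/nu) (beta_c - beta);
    `datasets` is a list of (Ns, betas, rhos) tuples.  Good values of
    beta_c and nu make all rescaled curves fall on top of each other."""
    curves = []
    for Ns, betas, rhos in datasets:
        s = Ns ** (1.0 / nu)
        curves.append((s * (beta_c - np.asarray(betas)),
                       np.asarray(rhos) / s))
    return curves
```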
In the thermodynamic limit in some region of $`\beta `$ below the critical point we expect
$`\langle \mu \rangle \propto \left(\beta _C-\beta \right)^\delta ,`$ (66)
which implies
$`{\displaystyle \frac{\rho }{N_s^{1/\nu }}}=-{\displaystyle \frac{\delta }{x}}.`$ (67)
Using eq. (67) it should be possible in principle to determine $`\nu `$, $`\delta `$ and $`\beta _C`$. Our statistics are not accurate enough to perform such a fit. However, we can determine $`\delta `$ using as input $`\beta _C`$ and $`\nu `$, which are known, by parameterizing $`\rho `$ in a wide range by the form
$`{\displaystyle \frac{\rho }{N_s^{1/\nu }}}=-{\displaystyle \frac{\delta }{x}}-c,`$ (68)
where $`c`$ is a constant, as suggested by fig. 6.
Our best fit<sup>†</sup><sup>†</sup>†Fits have been performed by using the Minuit routines. gives $`\delta =0.24\pm 0.07`$ in the plaquette gauge and $`\delta =0.12\pm 0.04`$ in the Polyakov gauge. The reduced $`\chi ^2`$ is of order 1.
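A sketch of such a fit, using scipy's curve_fit in place of the Minuit routines mentioned in the footnote (signs as in eq. (68) above):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_delta(x, y, yerr, p0=(0.2, 1.0)):
    """Fit the scaled data rho / Ns^(1/nu) to -delta/x - c for x > 0,
    returning (delta, c) and their one-sigma errors."""
    model = lambda x, delta, c: -delta / x - c
    popt, pcov = curve_fit(model, x, y, sigma=yerr, absolute_sigma=True, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```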
This concludes our argument about the thermodynamic limit ($`N_s\to \infty `$). The deconfining phase transition can be seen from a dual point of view as the transition of the vacuum from the dual superconductivity phase to the dual ordinary phase. This feature seems to be independent of the abelian projection chosen.
## V The Maximal Abelian projection
There are abelian projections which are not explicitly defined by an operator $`\mathrm{\Phi }`$, but by some extremization procedure. The prototype is the Maximal Abelian projection, which is defined by maximizing numerically the quantity
$`S_U(\{\mathrm{\Phi }\})={\displaystyle \sum _{n,\mu }}\text{tr}\left[U_\mu (n)\sigma _3U_\mu ^{\dagger }(n)\sigma _3\right]`$ (69)
with respect to gauge transformations .
The Maximal Abelian projection is very popular since, in the projected gauge, all links are oriented in the abelian direction within $`15\%`$, and therefore all observables are dominated by their abelian part within $`85\%`$. This fact is known as abelian dominance , and could indicate that the abelian degrees of freedom in this projection are the relevant dynamical variables at large distances. Moreover, out of the abelian projected configurations, monopoles seem to dominate observable quantities (monopole dominance ).
With our approach we face a technical difficulty in determining $`\rho `$ via $`S+\mathrm{\Delta }S`$ (eq. (38)). At each updating the operator $`\mathrm{\Phi }`$ and $`S+\mathrm{\Delta }S`$ are only known after maximization. Accepting or rejecting an update therefore requires a maximization, and the procedure takes an extremely long computer time.
Therefore, in order to study this abelian projection, we have to explore the possibility of measuring $`\mu `$ directly, and to cope with the huge fluctuations coming from the fact that $`\mu `$ is the exponential of a sum over a space volume and typically fluctuates as $`e^{N_s^{3/2}}`$. We adopt the following strategy:
1. we study the probability distribution of the quantity
$`\mathrm{log}\mu =-\beta (\mathrm{\Delta }S);`$ (70)
2. we reconstruct $`\langle \mu \rangle `$ from the $`\mathrm{log}\mu `$ distribution by means of the cumulant expansion formula truncated at some order.
This procedure should be compared with that of ref. .
If we have a stochastic variable $`X`$ distributed with probability $`p(X)`$ we have
$`{\displaystyle \int }𝑑Xe^{\beta (X-\langle X\rangle )}p(X)=e^{{\displaystyle \sum _{n\ge 2}}\frac{\beta ^n}{n!}C_n}.`$ (71)
$`\langle X\rangle `$ is the mean value and $`C_n`$ is the $`n`$-th cluster. For example, if we call $`\mathrm{\Delta }=X-\langle X\rangle `$
$`\begin{array}{c}C_1=0;\hfill \\ C_2=\langle \mathrm{\Delta }^2\rangle ;\hfill \\ C_3=\langle \mathrm{\Delta }^3\rangle ;\hfill \\ C_4=\langle \mathrm{\Delta }^4\rangle -3\langle \mathrm{\Delta }^2\rangle ^2;\hfill \\ C_5=\langle \mathrm{\Delta }^5\rangle -10\langle \mathrm{\Delta }^3\rangle \langle \mathrm{\Delta }^2\rangle .\hfill \end{array}`$ (77)
If $`\mathrm{log}\mu `$ were gaussianly distributed with mean value $`m`$ and standard deviation $`\sigma `$, we would have
$`\langle \mu \rangle =\mathrm{exp}\left(m+{\displaystyle \frac{\sigma ^2}{2}}\right).`$ (78)
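A sketch of the reconstruction of $`\langle \mu \rangle `$ from samples of $`\mathrm{log}\mu `$, truncating the cluster expansion at a given order (here the mean enters as the first cumulant, so that the second-order truncation reproduces eq. (78)):

```python
import math
import numpy as np

def mu_from_cumulants(log_mu_samples, order=3):
    """Estimate <mu> = <exp(log mu)> via exp(sum_n kappa_n / n!),
    with kappa_n the cumulants of the sampled log mu distribution
    (supported here up to order 5)."""
    y = np.asarray(log_mu_samples)
    d = y - y.mean()
    kappa = {1: y.mean(),
             2: np.mean(d**2),
             3: np.mean(d**3),
             4: np.mean(d**4) - 3.0 * np.mean(d**2)**2,
             5: np.mean(d**5) - 10.0 * np.mean(d**3) * np.mean(d**2)}
    return np.exp(sum(kappa[n] / math.factorial(n)
                      for n in range(1, order + 1)))
```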
In order to check the method we have explored the cluster expansion for the $`\mathrm{log}\mu `$ distribution in a projection we have already studied by means of the quantity $`\rho `$. Fig. 7 shows a comparison between $`\mathrm{log}\langle \mu \rangle `$, taken from the integration of the $`\rho `$ data, and cluster expansions truncated at different orders. The first and the second cluster are insufficient to account for the right behavior of $`\mathrm{log}\langle \mu \rangle `$, whereas with the third cluster added the two determinations are consistent. Moreover the fourth cluster is zero within statistical errors. It seems that one can estimate $`\langle \mu \rangle `$ with a cluster expansion truncated at the third order. As a rule, the higher clusters are quite noisy and error bars grow with increasing order. Therefore this kind of estimate requires very high statistics. For this reason the numerical determination of $`\mathrm{log}\langle \mu \rangle `$ in the Maximal Abelian projection is possible, but very time consuming. Our data are displayed in fig. 8, showing that monopoles in the Maximal Abelian projection behave in the same way as monopoles in other projections.
For this kind of simulation, we have used a standard overrelaxed heat-bath algorithm. For each value of $`\beta `$ we performed about 50000 measurements, each of them taken after 8 sweeps. In order to improve the statistics, we have considered eight different symmetric positions of the monopole (namely we have inserted the monopole at the center of each octant of a cartesian coordinate system with the origin at the center of the lattice); the data corresponding to each position are analyzed separately with the method described above, and our best value is the weighted average of the eight measurements. Putting in more monopoles would not improve the statistics, since strong correlations appear whenever the distance is shorter than the correlation length. Also these simulations have been performed on a 128-node APE QUADRICS machine.
## VI Conclusions
We have constructed a disorder parameter $`\langle \mu \rangle `$ detecting the condensation of monopoles of non abelian gauge theories defined by different abelian projections. The parameter is the vev of an operator which creates a magnetic charge. $`\langle \mu \rangle \ne 0`$ signals dual superconductivity. The same construction has been tested in many known systems .
We measure by numerical simulations $`\langle \mu \rangle `$, or better $`\rho =\frac{\text{d}}{\text{d}\beta }\mathrm{log}\langle \mu \rangle `$, which contains all the relevant information and has less severe fluctuations.
An extrapolation to thermodynamic limit (infinite spatial volume) is possible.
The system behaves as a dual superconductor in the confined phase, and has a transition to the normal state at the deconfining phase transition, where $`\langle \mu \rangle \to 0`$.
The deconfining $`\beta _C`$ and the critical index $`\nu `$, as well as the critical index $`\delta `$ describing the way in which $`\langle \mu \rangle \to 0`$ when $`T\to T_C`$, can be determined. The first two quantities are known independently and our determination is consistent with others. As for $`\delta `$, defined by
$`\langle \mu \rangle \underset{T\to T_C}{\propto }\left(1-{\displaystyle \frac{T}{T_C}}\right)^\delta ,`$ (79)
it is $`0.20\pm 0.08`$. Different abelian projections (plaquette, Polyakov, “butterfly”) give results which agree with each other.
Our technique proves difficult for the Maximal Abelian projection, but a direct determination of $`\langle \mu \rangle `$ looks consistent with the other projections.
In conclusion:
1. Dual superconductivity is at work in the confined phase, and disappears at the deconfinement phase transition.
2. This statement is independent of the abelian projection defining the monopoles.
Further theoretical effort is needed to understand the real symmetry breaking in the deconfined phase, or in the dual description of QCD.
Similar results for $`SU(3)`$ will be presented in the companion paper.
Finally we stress that, whatever topological excitations are responsible for colour confinement, counting them is not the right criterion to detect disorder. Only the vev of an operator carrying the appropriate topological charge can be a legitimate disorder parameter.
## Acknowledgements
This work is partially supported by EC contract FMRX-CT97-0122 and by MURST.
# The Feynman effective classical potential in the Schrödinger formulation.
## Abstract
New physical insight into the correspondence between path integral concepts and the Schrödinger formulation is gained by the analysis of the effective classical potential, that is defined within the Feynman path integral formulation of statistical mechanics. This potential is related to the quasi-static response of the equilibrium system to an external force. These findings allow for a comprehensive formulation of dynamical approximations based on this potential.
The path integral formulation of statistical mechanics is a powerful method to study quantum many-body systems. An essential property of this approach is the mapping of a quantum system onto a classical model of harmonic ring polymers, whose equilibrium properties can be studied with high accuracy. However, dynamical properties at finite temperatures can not be derived with the same sort of rigor, as the solution of the real time path integrals involves complex valued functionals without a suitable sampling function for stochastic integration.
The effective classical potential (ECP), that was introduced by Feynman to study systems in thermodynamic equilibrium, has been a central quantity to derive two quantum dynamical approximations, the quantum transition state theory (QTST), that aims at calculating rate constants of activated processes, and the centroid molecular dynamics (CMD), that aims at calculating real time correlation functions for quantum particles. The definition of the ECP is based on the concept of path centroid, that is the center of gravity of each of the ring polymers that represent the quantum particle. However, in spite of the extensive use of the path centroid in condensed matter and chemical physics studies, its relation to a measurable physical observable and hence the physical meaning of the derived dynamical approximations remain largely unexplained.
In this work, we aim at a deeper understanding of the equivalence between the path integral and the Schrödinger formulation by showing that the centroid coordinate is related to the quasi-static response of the system to an external force. The Schrödinger formulation provides new physical insight into the dynamical approximations based on the ECP. At zero temperature, the ECP is the mean energy of minimum energy wave packets (MEWP’s, to be defined below) and the CMD is an approximate dynamics based on these wave packets. Technical details of the analysis are left for a later work.
We begin by reviewing the definition of the ECP in the path integral formulation. The simple case of a quantum particle of mass $`m`$ having bound states in a one-dimensional potential $`V(x)`$ is considered. The Hamiltonian of the particle is $`H_0`$. The extension to the many-particle case is, for distinguishable particles, straightforward. At a given temperature the equilibrium properties are derived from the partition function, $`Z_0`$, and from the particle's probability density, $`\rho (x)`$. Both are related by the expression $`Z_0={\displaystyle \int _{-\infty }^{\infty }}𝑑x\rho (x)`$. The path integral formulation of $`\rho (x)`$ is
$$\rho (x)=\int _{x=x(0)}^{x=x(\beta \hbar )}D[x(u)]\mathrm{exp}\left(-\frac{S[x(u)]}{\hbar }\right),$$
(1)
where $`S[x(u)]`$ is the functional of the Euclidean action of the path $`x(u)`$, $`u`$ is the imaginary time that varies between 0 and $`\beta \hbar `$, and $`\beta `$ is the inverse temperature $`(k_BT)^{-1}`$. The paths $`x(u)`$ can be considered as alternative ways for the propagation of the particle, and the sum over paths can be analyzed in many different ways depending on the different classes into which the alternatives can be divided. The way conducting to the ECP uses the centroid or average point, $`x_c`$, of the path $`x(u)`$
$$x_c=\frac{1}{\beta \hbar }\int _0^{\beta \hbar }𝑑ux(u).$$
(2)
A class of paths is the subset of paths that have the same centroid. A constrained path integral over the class of paths with centroid at $`X`$ is defined by introducing a delta function in the integrand of Eq. (1),
$$\rho _X(x)=\int _x^xD[x(u)]\delta (X-x_c)\mathrm{exp}\left(-\frac{S[x(u)]}{\hbar }\right).$$
(3)
$`\rho _X(x)`$ may be considered as a probability density around the centroid position $`X`$. As an illustration, the normalized function $`Z(X)^{-1}\rho _X(x)`$ obtained by a Monte Carlo path integral simulation of a particle of mass $`m=16`$ au in a double-well potential, $`V_{dp}(x)=\frac{1}{4}(x^2-1)^2`$, is represented in Fig. 1. The probability density is shown for four values of the centroid coordinate $`X`$. We also show the probability density of the ground state of the potential $`V_{dp}(x)-fx`$, for several values of the parameter $`f`$, which represents an external force acting on the particle. The identity between these ground state probability densities and $`Z(X)^{-1}\rho _X(x)`$ will be explained below. The normalization constant $`Z(X)`$ is an important quantity
$$Z(X)=\int _{-\infty }^{\infty }𝑑x\rho _X(x),$$
(4)
which has the physical meaning of a probability density for the class of paths with centroid at $`X`$, and is called the centroid density.
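A naive path-integral Monte Carlo estimate of the centroid density can be sketched as follows (with $`\hbar =1`$; slow and purely illustrative, not the production code behind Fig. 1, and the parameter defaults are assumptions):

```python
import numpy as np

def centroid_density(V, beta=10.0, m=16.0, P=64, n_sweeps=20000,
                     step=0.3, nbins=60, xmax=2.0):
    """Sample closed discretized paths x_0 .. x_{P-1} with the Euclidean
    action and histogram the path centroid of Eq. (2); the histogram is
    proportional to Z(X).  V is the potential, e.g. the double well
    lambda x: 0.25 * (x**2 - 1)**2."""
    eps = beta / P                            # imaginary-time slice
    x = np.zeros(P)
    rng = np.random.default_rng(0)
    hist = np.zeros(nbins)
    for _ in range(n_sweeps):
        for k in range(P):                    # single-bead Metropolis
            xn = x[k] + step * rng.uniform(-1.0, 1.0)
            xl, xr = x[(k - 1) % P], x[(k + 1) % P]
            dS = (m / (2.0 * eps)) * ((xr - xn)**2 + (xn - xl)**2
                 - (xr - x[k])**2 - (x[k] - xl)**2) + eps * (V(xn) - V(x[k]))
            if dS < 0.0 or rng.random() < np.exp(-dS):
                x[k] = xn
        b = int((x.mean() + xmax) / (2.0 * xmax) * nbins)
        if 0 <= b < nbins:
            hist[b] += 1.0
    return hist / hist.sum()
```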
$$Z_0=\int _{-\infty }^{\infty }𝑑XZ(X).$$
(5)
This integral, after substitution of $`Z(X)`$ by the following definition,
$$Z(X)=\left(\frac{m}{2\pi \beta \hbar ^2}\right)^{1/2}\mathrm{exp}\left[-\beta F_{ef}(X)\right],$$
(6)
has the same form as the partition function of a classical particle moving in the potential, $`F_{ef}(X)`$, which is the ECP. We summarize some properties of the ECP: i) all the thermodynamic properties that depend on $`Z_0`$ can be derived from it; ii) it is temperature dependent; iii) its calculation is analytical only for quadratic potentials, but variational approximations are available; iv) its high temperature limit is the actual potential $`V(x)`$.
The centroid density $`Z(X)`$ (or the ECP) and the probability densities $`Z(X)^{-1}\rho _X(x)`$ are important quantities in the theory of path integrals, whose correspondence to the Schrödinger formulation has not been clearly stated. In the following, we show how the centroid density is related to a physical observable. We first consider a Hamiltonian depending on an external force $`f`$ acting on the particle, $`H_f=H_0-fx`$. The path integral representation of the partition function $`Z_f`$, corresponding to the Hamiltonian $`H_f`$, can be expressed as
$$Z_f=\int _{-\infty }^{\infty }𝑑XZ(X)e^{\beta fX}=Z_0\langle e^{\beta fX}\rangle ,$$
(7)
where $`Z(X)`$ is the centroid density for the particle with Hamiltonian $`H_0`$. The angle brackets denote an average over the normalized centroid density. If $`Z(X)`$ is known, the partition function $`Z_f`$ can be derived by Eq. (7) for any arbitrary value of the external force $`f`$. Thus, the analysis of classes of paths with fixed centroid allows one to derive the thermodynamic properties of a whole family of systems whose Hamiltonian depends on a parameter $`f`$. The moments of the centroid density are defined as
$$\langle X^n\rangle =Z_0^{-1}\int _{-\infty }^{\infty }𝑑XZ(X)X^n.$$
(8)
The physical meaning of Eq. (7) is that the ratio of partition functions $`Z_f/Z_0`$ is the function generating the moments of the centroid density. The moments are derived by differentiation of $`Z_f/Z_0`$ with respect to the variable $`\beta f`$. The result is simpler if we use the free energy, $`F_f`$, defined as $`\mathrm{exp}(-\beta F_f)=Z_f`$. The first two moments are obtained as
$$\langle X\rangle =-\left(\frac{\partial F_f}{\partial f}\right)_{f=0},$$
(9)
$$\langle X^2\rangle -\langle X\rangle ^2=-k_BT\left(\frac{\partial ^2F_f}{\partial f^2}\right)_{f=0}.$$
(10)
The mean-squared deviation of $`X`$ is often called the classical delocalization of the particle. Higher order derivatives of $`F_f`$ lead to higher moments of the centroid density. If the moments are known, then the centroid density itself is fully determined. We have then a formal relation between the centroid density and the change in the free energy as a function of a quasi-static force acting on the particle. This central result reveals the physical meaning of the centroid density within the Schrödinger formulation.
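As a simple check of Eqs. (9) and (10), consider the harmonic oscillator, $`H_0=p^2/2m+\frac{1}{2}m\omega ^2x^2`$: completing the square in $`H_f`$ shifts the whole spectrum rigidly, so that at any temperature

$$F_f=F_0-\frac{f^2}{2m\omega ^2},\langle X\rangle =-\left(\frac{\partial F_f}{\partial f}\right)_{f=0}=0,\langle X^2\rangle -\langle X\rangle ^2=-k_BT\left(\frac{\partial ^2F_f}{\partial f^2}\right)_{f=0}=\frac{k_BT}{m\omega ^2}.$$

The classical delocalization contains no $`\hbar `$ and vanishes as $`T\to 0`$, anticipating Eq. (13) below.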
This correspondence between the centroid density and the Schrödinger formulation is valid for arbitrary temperatures. The zero temperature limit is particularly interesting, as quantum effects are then most important. Before analyzing this limit, it is convenient to study a property of the Hamiltonian $`H_f`$ that leads to the definition of MEWP's. We look for quantum states of the particle whose mean energy, $`E_0^{min}=\langle \psi _f|H_0|\psi _f\rangle `$, is minimum against small variations of $`|\psi _f\rangle `$. Moreover, the states must satisfy two constraints: i) their mean position is fixed at an arbitrary value $`\overline{x}=\langle \psi _f|x|\psi _f\rangle `$; ii) they are normalized, $`\langle \psi _f|\psi _f\rangle =1`$. By straightforward application of the calculus of variations, one finds that $`|\psi _f\rangle `$ is the ground state of $`H_f`$
$$(H_0-fx)|\psi _f\rangle =E_f|\psi _f\rangle ,$$
(11)
$`f`$ and $`E_f`$ are Lagrange multipliers chosen so that the constraints i and ii are satisfied. Note that $`f`$ is an implicit function of the arbitrary position $`\overline{x}`$, i.e., $`f\equiv f(\overline{x})`$. The minimum energy, $`E_0^{min}`$, as a function of $`\overline{x}`$, is derived from Eq. (11) as
$$E_0^{min}(\overline{x})=E_f+f\overline{x}.$$
(12)
We call the states $`|\psi _f\rangle `$ the MEWP's of the potential $`V(x)`$, whose mean energy is $`E_0^{min}(\overline{x})`$. We show next how these states are related to the ECP at zero temperature.
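The MEWP's are straightforward to generate numerically; a minimal sketch ($`\hbar =1`$, finite differences on a grid; the mass and grid parameters are illustrative choices):

```python
import numpy as np

def mewp(V, f=0.0, m=16.0, L=4.0, N=800):
    """Ground state of H_f = p^2/(2m) + V(x) - f*x on a uniform grid.
    Returns (E_f, xbar, psi); scanning f and collecting the pairs
    (xbar, E_f + f*xbar) traces out E_0^min(xbar) of Eq. (12)."""
    x, dx = np.linspace(-L, L, N, retstep=True)
    kinetic = (np.diag(np.full(N, 2.0))
               - np.diag(np.ones(N - 1), 1)
               - np.diag(np.ones(N - 1), -1)) / (2.0 * m * dx**2)
    E, vecs = np.linalg.eigh(kinetic + np.diag(V(x) - f * x))
    psi = vecs[:, 0] / np.sqrt(dx)            # normalized ground state
    xbar = dx * np.sum(x * psi**2)
    return E[0], xbar, psi
```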
At $`T=0`$, the free energy is equal to the ground state energy, $`F_f=E_f`$, and from Eqs. (9) and (10) we obtain
$$\langle X\rangle =-\left(\frac{\partial E_f}{\partial f}\right)_{f=0};\langle X^2\rangle -\langle X\rangle ^2=0.$$
(13)
$`\langle X\rangle `$ is then given by the derivative of the ground state energy of Eq. (11) with respect to $`f`$ at $`f=0`$ (by the Hellmann-Feynman theorem, $`\partial E_f/\partial f=-\langle x\rangle `$), i.e., $`\langle X\rangle `$ is equal to the mean position of the ground state of the Hamiltonian $`H_0`$
$$\langle X\rangle =\langle \psi _0|x|\psi _0\rangle =\overline{x}_0.$$
(14)
The mean-squared deviation of the centroid density vanishes \[Eq. (13)\], thus the centroid density must be a delta function centered at $`\overline{x}_0`$. This result implies that only the class of paths with centroid at $`X\equiv \overline{x}_0`$ contributes to the path integral in Eq. (1), i.e.,
$$\underset{T\to 0}{lim}Z_0^{-1}\rho (x)=\underset{T\to 0}{lim}Z(X)^{-1}\rho _X(x),\text{for }X\equiv \overline{x}_0.$$
(15)
This identity provides a clear physical interpretation of path integral quantities at $`T=0`$. It implies that the centroid density and the partition function of the particle are related by $`Z(X)=Z_0\delta (X-\overline{x}_0)`$. Moreover, the asymptotic behavior of $`Z_0`$ and $`Z(\overline{x}_0)`$ as $`T\to 0`$ is given by exponential functions of the ground state energy, $`\mathrm{exp}(-\beta E_0)`$, and of the ECP, $`\mathrm{exp}[-\beta F_{ef}(\overline{x}_0)]`$, respectively. Then, $`F_{ef}(\overline{x}_0)`$ must be equal to the ground state energy. This result is a particular case of a relation valid for any arbitrary centroid position $`X`$. An essential step for this generalization is that the probability densities $`Z(X)^{-1}\rho _X(x)`$ are invariant under any change in the Hamiltonian $`H_0`$ that is linear in the coordinate $`x`$. The final result of this generalization is: i) at $`T=0`$ the normalized probability density for a given class of paths, $`Z(X)^{-1}\rho _X(x)`$, is identical to the probability density of the MEWP, $`|\langle x|\psi _f\rangle |^2`$, whose mean position is at $`\overline{x}\equiv X`$; ii) at $`T=0`$ the ECP, $`F_{ef}(X)`$, is equal to the mean energy, $`E_0^{min}(\overline{x})`$, of the MEWP whose mean position is at $`\overline{x}\equiv X`$.
In Fig. 1 we showed the probability density $`Z(X)^{-1}\rho _X(x)`$ obtained from Monte Carlo path integral simulations of a particle in a double-well potential at temperature $`k_BT=10^{-3}`$ au. This value is a small fraction of the lowest excitation energy, $`\mathrm{\Delta }E_0=29\times 10^{-3}`$ au (tunnel splitting), and is therefore a good approximation to the zero temperature limit. For comparison, the probability densities of the MEWP's, obtained numerically as the ground state of the potential $`V_{dp}(x)-fx`$, are shown by broken lines. The identity of both probability densities is an important result that clarifies the physical meaning of fixed centroid path integrals. In Fig. 2 the energies $`V_{dp}(\overline{x})`$ and $`E_0^{min}(\overline{x})`$ are shown. These curves correspond to the limits of infinite and zero temperature for the ECP, respectively.
In QTST the second derivative of the ECP $`F_{ef}(X)`$ with respect to $`X`$ has been shown to be an important quantity in determining the pre-exponential factor of rate constants. If the effect of a small external force on a stationary quantum state is approximated by a rigid spatial displacement of the state, it can be shown that in the zero temperature limit the second derivative of the ECP with respect to $`X`$ gives an approximation to the first excitation energy of the Hamiltonian $`H_f`$, i.e.
$$\hbar \left[\frac{1}{m}\left(\frac{\partial ^2E_0^{min}(\overline{x})}{\partial \overline{x}^2}\right)\right]^{\frac{1}{2}}\approx \mathrm{\Delta }E_f.$$
(16)
In Fig. 3 we display the exact values of $`\mathrm{\Delta }E_f`$ as a function of $`\overline{x}`$ (recall that $`f`$ is an implicit function of $`\overline{x}`$). The l.h.s. of Eq. (16), drawn by a broken line, was obtained by numerical differentiation of the function $`E_0^{min}(\overline{x})`$ shown in Fig. 2. At $`\overline{x}=0`$ the approximation overestimates the energy of the tunnel splitting, $`\mathrm{\Delta }E_0`$, by 25 $`\%`$.
An approximation to the time evolution of the particle can be formulated with MEWP's: i) only MEWP's with mean position $`\overline{x}`$ and momentum $`\overline{p}`$ are allowed as time dependent states, i.e., these are of the form $`\langle x|\psi _f\rangle \mathrm{exp}(i\overline{p}x/\hbar )`$. The value of the mean force is $`\overline{f}=-f`$; ii) the time dependence of $`\overline{x}`$ and $`\overline{p}`$ is given by the Ehrenfest relations
$$\frac{d\overline{x}}{dt}=\frac{\overline{p}}{m};\frac{d\overline{p}}{dt}=\overline{f}.$$
(17)
The total energy of the MEWP, $`E_0^{min}(\overline{x})+\overline{p}^2/(2m)`$, is conserved along this time evolution. This approximation, formulated without any reference to path integral concepts, can be recognized as the zero temperature limit of CMD. The formulation of CMD as an approximate dynamics based on MEWP's provides new physical insight into this approximation. In Fig. 4 we show a phase space representation of the time evolution of $`\overline{x}(t)`$ and $`\overline{p}(t)`$ for an initial MEWP with $`\overline{x}(0)=0.7`$ au and $`\overline{p}(0)=0`$ moving in the potential $`V_{dp}`$. The exact trajectory was obtained by numerical solution of the time dependent Schrödinger equation. The CMD trajectory, derived from Eq. (17), is that of a classical particle moving in the ECP shown in Fig. 2. For comparison the phase space trajectory of a classical particle in the potential $`V_{dp}`$ is also shown. The CMD result resembles the real time trajectory.
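A sketch of the integration of Eq. (17) behind such a CMD trajectory (a leapfrog scheme; `E_min` is assumed to be a smooth interpolation of the numerically computed ECP, e.g. a spline through the points produced by the sketch above, with mean force $`\overline{f}=-\text{d}E_0^{min}/\text{d}\overline{x}`$):

```python
import numpy as np

def cmd_trajectory(E_min, xbar0=0.7, pbar0=0.0, m=16.0, dt=0.5, n_steps=4000):
    """Integrate dxbar/dt = pbar/m, dpbar/dt = fbar; the total energy
    E_0^min(xbar) + pbar^2/(2m) is conserved up to O(dt^2)."""
    def force(x, h=1e-4):                     # fbar = -dE_min/dx
        return -(E_min(x + h) - E_min(x - h)) / (2.0 * h)
    x, p = xbar0, pbar0
    traj = [(x, p)]
    p += 0.5 * dt * force(x)                  # half kick
    for _ in range(n_steps):
        x += dt * p / m                       # drift
        p += dt * force(x)                    # kick
        traj.append((x, p - 0.5 * dt * force(x)))   # synchronized momentum
    return np.array(traj)
```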
Summarizing, the centroid density has been derived in the Schrödinger formulation with the help of a function generating its moments. The centroid density is related to the response of the system to a quasi-static force. For potentials with bound states the centroid density converges to a delta function as $`T\to 0`$. Several results have been found in this limit: i) the ECP is the mean energy of the MEWP's of the potential; ii) the second derivative of the ECP with respect to the centroid position approximates the first excitation energy of the system; iii) CMD is an approximate dynamics based on MEWP's.
This work was supported by DGICYT (Spain) under contract PB96-0874. We thank E. Artacho and J.J. Sáenz for helpful discussions.
# Gravitation as a Quantum Diffusion<sup>1</sup><sup>1</sup>1Published in: ”Z.Zakir (2003) Structure of Space-Time and Matter, CTPA, Tashkent.”
## 1 Introduction
In ref. , inhomogeneous conservative diffusion with the tensor of diffusion $`\nu ^{ab}(x,t)`$ has been considered, and the similarities in the geometric descriptions of gravitation and quantum fluctuations have been discussed. The proposed idea about the physical nature of gravitation is that gravitation is an inhomogeneous Nelson diffusion, i.e. a consequence of the quantum fluctuations. This treatment may also be applied to some gauge fields by means of the Kaluza-Klein mechanism.
In ref. only the behavior of a sample particle on an inhomogeneous stochastic background has been described. In this paper the influence of the source on this stochastic background will be considered, and gravitation will be described fully in terms of the general quantum diffusion. Then the Einstein equations will be related with the corresponding diffusion equations.
## 2 Sample particles on inhomogeneous stochastic background
In the inhomogeneous diffusion considered in ref. , the mean acceleration $`E[a^i(x,t)]`$ does not contain terms with the osmotic velocity $`u^i`$, and for the diffusion of the free particle we have:
$$E\left[\frac{\partial v_i}{\partial t}+(𝐯\cdot \mathbf{\nabla })v_i\right]=0.$$
(1)
Here a new diffusional acceleration appears due to the presence of derivatives of the metrics in the Laplace-Beltrami operator $`\mathrm{\Delta }`$:
$$E\left[\frac{\partial v_i}{\partial t}+(𝐯\cdot \mathbf{\nabla })v_i\right]=E[\mathrm{\Gamma }_{ij}^kv^jv_k].$$
(2)
The diffusional acceleration $`E[\mathrm{\Gamma }_{ij}^kv^jv_k]`$ does not depend on the mass $`m`$ of the sample particle, i.e. we have an exact analog of the equivalence principle.
L.Smolin had paid attention to the equality of the quantum diffusional mass $`m_q`$, determined from the diffusion coefficient $`\nu =\hbar /2m_q`$, and the inertial mass $`m_{in}`$, with the high accuracy $`(m_{in}-m_q)/m_{in}<4\times 10^{-13}`$. Here we can consider this fact as following from the generalized equivalence principle: the equivalence between the inertial motion in curved spacetime, the motion in the gravitational field, and inhomogeneous quantum diffusion.
The independence of the acceleration from the mass of the sample particle also leads to the same acceleration of macroscopic objects and of the bases of reference frames. The acceleration of the reference frame means the appearance of a non-trivial macroscopic metric and non-zero curvature of space-time.
## 3 The influence of matter to the stochastic background
At the interaction with the stochastic background, a massive classical particle of bare mass $`m_0`$ undergoes stochastic fluctuations. In these fluctuations the energy of the vacuum around the particle partly transforms into the energy of the particle's fluctuations. As a result, the particle's energy increases by the quantum fluctuation energy $`T_{ik}^{(q)}(m_0^{-1})`$, which is inversely proportional to the bare mass $`m_0`$, and the physical energy-momentum density of matter becomes equal to:
$$T_{ik}(m)=T_{ik}^{(0)}(m_0)+T_{ik}^{(q)}(\frac{1}{m_0}).$$
(3)
Due to this energy transfer the vacuum energy density around the source decreases with respect to the unperturbed vacuum at spatial infinity. This lowering of the vacuum energy density is maximal near the source and vanishes at large distances. Such an inhomogeneity of the vacuum energy density leads to a coordinate dependence of the diffusion coefficient $`\nu _{ik}(x)`$, which can be considered as the metric tensor of an effective Riemannian manifold. As was shown in the earlier paper, the lowering of the energy density near a massive object leads to a diffusional acceleration of sample particles toward that source, and this acceleration does not depend on the mass of the sample particles.
As a physical model of this effect we can consider the behavior of two classical particles in an accelerated frame of reference. Let the first particle’s trajectory be a geodesic line. This particle does not interact with the frame of reference and its ”gravitational energy” is zero. Let the second particle be accelerated together with one of the local frames of the accelerated frame, so that it is at rest in this frame. In this case the local frame expends energy on the acceleration of the particle. As a result, the energy of the particle increases while the energy of the local frame interacting with the particle decreases. If the local frames on some surface of the extended reference frame are bonded by elastic springs (as in a trampoline), then the acceleration of the particle together with one of the local frames leads to the formation of a smooth curved surface around this frame.
Thus, the gravitational energy density is related to the lowering of the vacuum energy density around the source, i.e. to the decrease in the intensity of the quantum fluctuations. We can take this fact into account in the standard action functional $`A`$:
$$A=\frac{1}{2}\int d\mathrm{\Omega }\sqrt{-\gamma }\left(\frac{1}{\kappa }R+L_{(m)}\right),$$
(4)
by varying not the Lagrangian containing the Ricci tensor $`R_{ik}`$, but one containing the Riemann tensor $`R_{ilkm}`$ directly, as proposed earlier. We have:
$$\delta A=\frac{1}{2}\int d\mathrm{\Omega }\sqrt{-\gamma }[G_{iklm}+T_{iklm}]\gamma ^{il}\delta \gamma ^{km}=0,$$
(5)
and
$$G_{iklm}+T_{iklm}=0.$$
(6)
Here:
$$G_{iklm}=\frac{1}{\kappa }[R_{iklm}-\frac{1}{6}(\gamma _{il}\gamma _{km}-\gamma _{im}\gamma _{kl})R],$$
(7)
$$T_{iklm}=V_{iklm}+\frac{1}{2}(\gamma _{km}T_{il}-\gamma _{kl}T_{im}+\gamma _{il}T_{km}-\gamma _{im}T_{kl})-\frac{1}{6}(\gamma _{il}\gamma _{km}-\gamma _{im}\gamma _{kl})T.$$
(8)
The energy-momentum density tensor of the source $`T_{iklm}`$ contains the energy-momentum density of the matter $`T_{ik}`$, its scalar $`T=g^{ik}T_{ik}`$, and the new term $`V_{iklm}`$, which is the 4-index energy-momentum density tensor of the gravitational field, with vanishing contraction $`g^{il}V_{iklm}=0`$. In vacuum $`R_{iklm}`$ is equal to the Weyl tensor $`C_{iklm}`$, which in fact determines the energy-momentum density of the gravitational field as:
$$\frac{1}{\kappa }C_{iklm}=-V_{iklm}.$$
(9)
In the asymptotically flat spacetime this definition of the gravitational energy leads to the same total energy of the source and its gravitational field as the pseudotensor and Hamiltonian approaches.
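As a consistency check (added here; it is not spelled out in the original), contracting Eq. (6) with $`\gamma ^{il}`$ and using $`\gamma ^{il}V_{iklm}=0`$ recovers the ordinary Einstein equations:

$$\gamma ^{il}G_{iklm}=\frac{1}{\kappa }\left(R_{km}-\frac{1}{2}\gamma _{km}R\right),\gamma ^{il}T_{iklm}=T_{km},$$

so that $`\gamma ^{il}(G_{iklm}+T_{iklm})=0`$ gives $`R_{km}-\frac{1}{2}\gamma _{km}R=-\kappa T_{km}`$, i.e. the Einstein equations in this sign convention.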
Further evidence for the quantum diffusional nature of gravitation is the explanation of time dilation in a gravitational field. Due to the slowing of the quantum fluctuations around a massive source (the surrounding vacuum has ”lost” energy to the fluctuations of the source), the frequencies of the wave functions of sample particles and the energy levels of atoms, which are related to the intensity of the quantum fluctuations, become redshifted; i.e., all quantum processes near the source occur more slowly than at spatial infinity.
# 86, 43, and 22 GHz VLBI Observations of 3C 120
## 1 Introduction
The radio galaxy 3C 120 is a powerful and variable emitter of radiation at all observing frequencies. It is usually classified as a Seyfert 1 galaxy (Burbidge B67 (1967)), although its optical morphology is also consistent with that of a broad-line radio galaxy. It was among the first sources in which superluminal motion was detected, on a scale of parsecs (Seielstad et al. Se79 (1979); Walker, Benson, & Unwin Wa87 (1987)) to tens of parsecs (Benson et al. Be88 (1988); Walker Wa97 (1997)). An optical counterpart of the radio jet has also been detected (Hjorth et al. Hj95 (1995)). 3C 120 is among the closest known extragalactic superluminal sources ($`z`$=0.033), allowing the study of its inner jet structure with unusually fine linear resolution for this class of objects. Previous observations using NRAO’s Very Long Baseline Array (VLBA) at 22 and 43 GHz (Gómez et al. JL98 (1998), hereafter G98) revealed a very rich inner jet structure containing up to ten different superluminal components, with velocities between 2.3 and 5.5 $`h^{-1}c`$, mapped with a linear resolution of 0.07 $`h^{-1}`$pc. (The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.) Linear polarization was also detected in several components, revealing a magnetic field orientation that varies with respect to the jet flow direction as a function of frequency, epoch, and position along the jet.
In this Letter we present the first 86 GHz Coordinated Millimeter VLBI Array (CMVA) observations of 3C 120, providing an angular resolution of 54 $`\mu `$as, which for this source represents a linear resolution of 0.025 $`h^{-1}`$pc. We also present contemporaneous 22 and 43 GHz polarimetric VLBA observations of 3C 120 and compare these with previous observations obtained one year previously.
## 2 Observations and data analysis
The 86 GHz CMVA observation took place on 1997 October 24 (epoch 1997.82). Participating antennas were Pico Veleta, Effelsberg, Onsala, Metsähovi, Haystack, Kitt Peak 12-Meter, and VLBA Pie Town. The data were correlated with the Haystack MkIII correlator, after which global fringe fitting was performed within the NRAO Astronomical Image Processing System (AIPS) software. Because of bad weather at some of the stations and the relatively low flux density exhibited by 3C 120 during the observations, a significant amount of data were lost. We found a very consistent calibration for the American sub-array of antennas, which was used to calibrate the rest.
The 7 mm and 1.3 cm VLBA observations were performed on 1997 November 10, seventeen days after the CMVA observation. The data were recorded in 1-bit sampling VLBA format with 32 MHz bandwidth per circular polarization. The reduction of the data was performed with the AIPS software in the usual manner (e.g., Leppänen, Zensus, & Diamond Ka95 (1995)). Opacity corrections were introduced by solving for receiver temperature and zenith opacity at each antenna. Fringe fitting to determine the residual delays and fringe rates was performed for both parallel hands independently and referred to a common reference antenna. Delay differences between the right- and left-handed polarization systems were estimated over a short scan of cross-polarized data of a strong calibrator (3C 454.3). The instrumental polarization was determined by using the feed solution algorithm developed by Leppänen, Zensus, & Diamond (Ka95 (1995)).
The absolute phase offset between right and left circular polarization at the reference antenna was determined by VLA observations of the sources 0420-014 and OJ 287 on 1997 November 21, and referenced to an assumed polarization position angle of 33° for 3C 286 and 11° for 3C 138, at both observing frequencies. This provided an estimation of the absolute polarization position angle to within 9° and 7° at 22 and 43 GHz, respectively.
## 3 Results and conclusions
Figure 1 shows the resulting CMVA and VLBA images of 3C 120. Table 1 summarizes the physical parameters obtained for 3C 120 at the three frequencies. Tabulated data correspond to total flux density ($`S`$), polarized flux density ($`P`$), magnetic vector position angle ($`\chi _B`$), separation ($`r`$) and structural position angle ($`\theta `$) relative to the easternmost bright component \[which we refer to as the “core”\], and angular size (FWHM). Components in the total intensity images were analyzed by model fitting the uv data with circular Gaussian components within the software Difmap (Shepherd Se97 (1997)). We have obtained an estimate of the errors in the model fitting by introducing small changes in the fitted parameters and studying the variations in the $`\chi ^2`$ of the fit, as well as reproducing the model fitting from the start and comparing with previous results. For the 43 GHz image the estimated model fitting errors are of the order of 5 mJy in flux, between 0.01 and 0.02 mas in position, and ∼0.01 mas in component sizes. Larger errors, of the order of 50% higher, are estimated for the 22 GHz image model fitting.
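For reference, the uv-plane model used in such fits has a simple closed form; the sketch below (our illustration, not the Difmap implementation, with an arbitrary example baseline) evaluates the visibility of a single circular Gaussian component:

```python
import numpy as np

# Illustrative sketch (not the Difmap code): visibility of one circular
# Gaussian component of flux S [Jy], FWHM theta [mas], offset (x0, y0) [mas],
# sampled at baseline coordinates (u, v) in wavelengths.
MAS = np.pi / 180.0 / 3600.0 / 1000.0   # mas -> radians

def vis_gauss(u, v, S, theta, x0=0.0, y0=0.0):
    q2 = u**2 + v**2
    amp = S * np.exp(-(np.pi * theta * MAS)**2 * q2 / (4.0 * np.log(2.0)))
    phase = np.exp(-2j * np.pi * (u * x0 * MAS + v * y0 * MAS))
    return amp * phase

# e.g. a 0.054 mas component observed on a 3000 Mlambda baseline at 86 GHz
print(abs(vis_gauss(3.0e9, 0.0, S=1.51, theta=0.054)))
```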
The 3 mm CMVA image of Fig. 1 shows the existence of a bright core component. The mapping process was able to recover 1.51 Jy of the integrated total flux of 1.95$`\pm `$0.05 Jy measured by Pico Veleta (H. Ungerechts, private communication), resulting in a maximum brightness temperature of 1.15$`\times `$10<sup>10</sup> K. The combined spectrum for the core at the three observing frequencies gives an inverted spectrum with $`\alpha \simeq 1.1`$ ($`S_\nu \propto \nu ^\alpha `$). Components n1 and n2 show a steep spectrum between 22 and 43 GHz, which, together with the inverted spectrum of the core, would explain why they are not detected in the 86 GHz map, given the relatively poor dynamic range of the latter. Metsähovi flux density monitoring at 22 and 37 GHz (H. Teräsranta, private communication) observed the strongest radio flare ever seen in 3C 120 to take place starting at the end of 1997 and peaking in June 1998, which would explain the flattening of the spectrum of the core with respect to the late 1996 observations. No polarization was detected for the core at either 22 or 43 GHz.
We estimate the size of the core to lie below the resolution of the 86 GHz map, that is ≲54 $`\mu `$as, which implies a linear size less than 0.025 $`h^{-1}`$pc, or 30 light-days. This represents a direct determination of the upper limit to the size of the base of the jet that is independent of variability arguments, which depend on the quite uncertain estimate of the Doppler factor.
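This conversion is easy to verify (our arithmetic, assuming a pure Hubble-flow distance with $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>):

```python
import math

# Cross-check (ours): linear scale of 54 microarcsec at z = 0.033, for a
# pure Hubble-flow distance with H0 = 100*h km/s/Mpc.
z, c, H0 = 0.033, 2.998e5, 100.0             # km/s, km/s/Mpc
d_pc = z * c / H0 * 1e6                      # distance in h^-1 pc
theta = 54e-6 / 206265.0                     # 54 microarcsec in radians
size_pc = theta * d_pc
print(size_pc, "h^-1 pc ~", size_pc * 3.26156 * 365.25, "h^-1 light-days")
```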
The 43 and 22 GHz images show a much richer structure consisting of multiple components. We have labeled components from west to east, using upper-case letters for the map at 22 GHz. We note that several of the components in the 43 GHz image appear blended in the corresponding 22 GHz image. Through comparison with previous images obtained at the end of 1996 (G98) and extrapolation of the component speeds, we have cross–identified several components. The most reliable identification corresponds to components A, B, C, and D. Components H and K may be associated with two distinct radio flares detected by Metsähovi (H. Teräsranta, private communication) at the end of 1996 and in mid-1997, with component H appearing previously in the November 1996 and December 1996 images of G98.
In comparison with the late 1996 epochs of G98, components A, B, C, E, F, and G are found to have experienced a similar evolution. Each moved along the curved path traced by the jet in the plane of the sky. During this downstream translation their flux densities declined at both observing frequencies (except for component E at 43 GHz). This is as expected for components that are subject to expansion and radiative cooling (the latter of which might have occurred earlier and become prominent at radio frequencies only after further expansion). There is some evidence for deceleration of components upstream of D to a common value close to $`3h^{-1}c`$, but further observations are needed to confirm this. One of the largest variations can be found in polarization. Components F and G decreased their polarized flux density between November and December 1996 to undetectable values, but by November 1997 showed an increase to values between 9 and 14%. This change was accompanied by a rotation of the magnetic field vector from orthogonal to parallel to the jet axis. Components B and C experienced a similar drastic variation, changing their degree of polarization from values close to 10% in late 1996, to values below the noise in the images in Fig. 1. This suggests changes in the underlying configuration of the magnetic field and/or the shocks presumably associated with the components in the inner parsec of 3C 120.
We found a significantly different evolution for component D. During the one-year period between the G98 observations and that shown in Fig. 1, component D gained flux, and its spectrum became significantly flatter. Even more pronounced changes are found in polarization, from below the noise level in the G98 observations, to a level that made it one of the strongest components in polarized flux density, with a degree of polarization of 15% (13% at 43 GHz). All this evidence suggests significant activity in component D not present in the other components.
We interpret the different evolution occurring in component D as caused by its interaction with the external medium. As shown in the 22 GHz image, the jet clearly bends at the position of D, hence we expect at least a standing oblique shock there. While between late 1996 and November 1997 the position angle of components B and C rotated by ∼3°, the increment in component D was significantly smaller, of the order of 1.5°. As a consequence, the curvature near component D appears more pronounced than in the previous images of G98, actually showing component D centered ∼0.3 mas south of the jet axis defined by components E and C. If the jet components travel along a gently curved jet funnel, it is possible that components with relatively larger momentum (as may be the case for D) would move along a more ballistic trajectory, progressively moving closer to the jet funnel boundary, and consequently undergoing interactions with the external medium. This would explain the more ballistic trajectory of component D, while other components seem to have followed the curvature of the jet. The interaction with the external medium should produce a strong shock and turbulence (from Rayleigh-Taylor and perhaps Kelvin-Helmholtz instabilities), enhancing the emission and flattening the spectrum, as is commonly observed in the terminal hot spot of large-scale jets. The shock would also produce an enhancement and reordering of the magnetic field parallel to the shock front; this would explain the increase in the polarized flux and degree of polarization. The magnetic field orientation in D is consistent with this scenario, and contrasts with that observed for the rest of the jet, which is roughly parallel to the jet flow (except for components n1 and M).
Component H seems to share some of the properties and activity of component D. Its structural position angle shifted to the south by 2.5°, which placed it ∼0.25 mas south of the jet axis in Nov 1997. The spectrum is flatter at this later epoch, and the degree of polarization increased, from values below 4% at the end of 1996, to 23% (18% at 43 GHz) in Nov 1997, the maximum value measured in the jet. During this burst in polarization, H maintained a similar orientation of its magnetic field, almost parallel to the jet axis, with a slightly shifted orientation at 43 GHz towards the direction of component d’s field. To the north of H (h at 43 GHz), model fitting reveals the presence of component I (i), which presumably is traveling through the jet funnel, out of which H seems to be breaking. This is suggested by a slightly different orientation of the magnetic field position angle.
The structural position angle of the inner milliarcsecond components in 3C 120 underwent a mean shift to the south of $`6^{\circ }`$ between late 1996 and 1997. This indicates a progressive swing of the jet ejection direction towards the south, which may be responsible for the continuous curvature observed on parsec and kiloparsec scales (e.g., Walker Wa97 (1997)). It is then expected that some components (perhaps with relatively larger momentum) — such as D and H — would experience strong interactions with the external medium, opening a new path through which subsequently ejected components would travel relatively undisturbed.
Further observations (in progress) will provide a follow up of the changes observed in 3C 120 and test the hypothesis presented here, as well as allow comparison with simulations of the hydrodynamics and emission of relativistic jets (Gómez et al. JL97 (1997)), resulting in a better understanding of the physics involved in 3C 120 and other AGNs.
###### Acknowledgements.
This research was supported in part by Spain’s Dirección General de Investigación Científica y Técnica (DGICYT), grants PB94-1275 and PB97-1164, by NATO travel grant SA.5-2-03 (CRG/961228), and by U.S. National Science Foundation grant AST-9802941. We thank Iván Agudo-Rodríguez for preparing Fig. 1. We thank Robert Phillips and Elisabeth Ambrose for providing confirmation of the CMVA results through a map obtained using the software HOPS. We are also grateful to Barry Clark for providing *ad hoc* VLA time in order to determine the absolute polarization position angle.
# Van Hove Singularity and Superconductivity in Disordered Sr2RuO4
## I Introduction
The perovskite structure of Strontium Ruthenate, Sr<sub>2</sub>RuO<sub>4</sub>, is very similar to that of HTS copper oxides. However, its superconducting transition temperature T<sub>C</sub> is relatively low (T<sub>C</sub> ∼ 1 K) . Nevertheless, recent reports indicate that its Cooper pairs are not of the usual s–wave symmetry. In fact they suggest that this material features triplet pairing and is a superconducting analogue of the <sup>3</sup>He superfluid system \[1-4\]. Clearly, the possibility of exotic pairing engenders interest in the effects of disorder on the superconducting properties. Moreover, studies of the electronic structure have identified an extended van Hove singularity close to the Fermi energy E<sub>F</sub>, and therefore one may wonder whether the van Hove scenario could lead to a rise in T<sub>C</sub> with doping. Evidently, since doping the system always increases the disorder one should investigate both aspects simultaneously.
## II The Model
We base our discussion on the extended negative U Hubbard Hamiltonian:
$$H=\underset{ij\sigma }{\sum }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma }+\frac{1}{2}\underset{ij}{\sum }U_{ij}\widehat{n}_i\widehat{n}_j-\underset{i}{\sum }(\mu -\epsilon _i)\widehat{n}_i,$$
(1)
where $`\widehat{n}_i=\widehat{n}_{i\uparrow }+\widehat{n}_{i\downarrow }`$ and $`\widehat{n}_{i\sigma }`$ is the usual site occupation number operator $`c_{i\sigma }^{\dagger }c_{i\sigma }`$. Evidently the above $`\widehat{n}_i`$ is the charge operator on site labelled $`i`$, $`\mu `$ is the chemical potential, which at $`T=0`$ is equal to the Fermi energy $`E_F`$. Disorder is introduced into the problem by allowing the local site energy $`\epsilon _i`$ to vary randomly from site to site. Finally, $`c_{i\sigma }^{\dagger }`$ and $`c_{i\sigma }`$ are the fermion creation and annihilation operators for an electron on site $`i`$ with spin $`\sigma `$, $`t_{ij}`$ is the amplitude for hopping from site $`j`$ to site $`i`$ and $`U_{ij}`$ is the attractive interaction ($`i\neq j`$) which causes superconductivity.
In the Hartree-Fock-Gorkov approximation the equation for the Green’s function $`𝑮(i,j;ı\omega _n)`$, corresponding to the Hamiltonian in Eq. 1, is given by:
$$\underset{l}{\sum }\left[\begin{array}{cc}(ı\omega _n+\mu -\epsilon _i)\delta _{il}+t_{il}& \mathrm{\Delta }_{il}\\ \mathrm{\Delta }_{il}^{*}& (ı\omega _n-\mu +\epsilon _i)\delta _{il}-t_{il}\end{array}\right]𝑮(l,j;ı\omega _n)=\mathrm{𝟏}\delta _{ij},$$
(2)
where $`\omega _n`$ is a Matsubara frequency. Let us define the random potential $`𝑽^{\epsilon _i}`$ by:
$$𝑽^{\epsilon _i}=\left[\begin{array}{cc}\epsilon _i& 0\\ 0& -\epsilon _i\end{array}\right],$$
(3)
where $`\epsilon _i`$ is uniformly distributed on the energy interval $`[-\frac{\delta }{2},\frac{\delta }{2}]`$. The Green’s function for an impurity, described by $`𝑽^{\epsilon _i}`$ in Eq. 3, embedded in the medium described by $`𝚺(ı\omega _n)`$, is given by:
$$𝑮^{\epsilon _i}(i,i,ı\omega _n)=\{\mathrm{𝟏}-𝑮^C(i,i,ı\omega _n)[𝑽^{\epsilon _i}-𝚺(ı\omega _n)]\}^{-1}𝑮^C(i,i,ı\omega _n),$$
(4)
Following the usual CPA procedure we demand that the coherent potential Green’s function $`𝑮^C(i,i;ı\omega _n)=(ı\omega _n-ϵ_k-𝚺(ı\omega _n))^{-1}`$ satisfy the relation:
$$𝑮^C(i,i,ı\omega _n)=\langle 𝑮^{\epsilon _i}(i,i,ı\omega _n)\rangle =\frac{1}{\delta }\int _{-\delta /2}^{\delta /2}d\epsilon _i𝑮^{\epsilon _i}(i,i,ı\omega _n).$$
(5)
Evidently, Eq. 5 completely determines, that is to say can be solved for, $`𝚺(ı\omega _n)`$.
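To make the procedure concrete, here is a minimal sketch (our own illustration, not the code used for the results below) of the self-consistency loop for a scalar, normal-state analogue of Eqs. 4–5 on a 2D square lattice; all parameter values are arbitrary:

```python
import numpy as np

# Scalar (normal-state) toy CPA loop, sketching Eqs. 4-5 at one Matsubara
# frequency; parameters are arbitrary illustration values.
t, delta, mu = 1.0, 2.0, 0.0      # hopping, disorder width, chemical potential
wn = np.pi * 0.05                 # first Matsubara frequency at T = 0.05 t

N = 64                            # k-grid for the 2D tight-binding band
k = 2 * np.pi * np.arange(N) / N
KX, KY = np.meshgrid(k, k)
eps_k = -2.0 * t * (np.cos(KX) + np.cos(KY))

eps_i = np.linspace(-delta / 2, delta / 2, 201)   # disorder quadrature grid

sigma = -0.1j                     # initial guess for Sigma(i w_n)
for _ in range(200):
    G_C = np.mean(1.0 / (1j * wn + mu - eps_k - sigma))      # local G^C
    G_imp = G_C / (1.0 - G_C * (eps_i - sigma))              # Eq. 4, scalar
    G_avg = np.trapz(G_imp, eps_i) / delta                   # <G^eps>, Eq. 5
    sigma_new = sigma + 1.0 / G_C - 1.0 / G_avg              # fixed-point step
    if abs(sigma_new - sigma) < 1e-10:
        break
    sigma = sigma_new
print("Sigma(i w_n) =", sigma)
```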
Let us now proceed further with the CPA strategy and determine the averaged Green’s function matrix $`𝑮(i,j;ı\omega _n)`$ subject to the self-consistency conditions:
$$\overline{\mathrm{\Delta }}_{ij}=|U_{ij}|\frac{1}{\beta }\underset{n}{\sum }\mathrm{e}^{ı\omega _n\eta }G_{12}(i,j;ı\omega _n),\overline{n}=\frac{2}{\beta }\underset{n}{\sum }\mathrm{e}^{ı\omega _n\eta }G_{11}(i,i;ı\omega _n).$$
(6)
In this paper we assumed nearest-neighbour electron hopping and pairing on a two-dimensional lattice. In Figures 1$`a`$ and $`b`$ we present Fermi surfaces for $`n=0.55`$ and $`n=1`$, respectively. The latter case corresponds to the situation where the Fermi energy, $`E_F`$, is located exactly at the van Hove singularity.
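The singularity itself is easy to exhibit numerically; the short sketch below (ours, assuming the standard nearest-neighbour dispersion $`ϵ_k=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)`$) estimates the density of states and shows the logarithmic van Hove peak at the band centre, which $`E_F`$ reaches at $`n=1`$:

```python
import numpy as np

# Histogram estimate of the 2D tight-binding density of states; the peak
# at eps = 0 is the (logarithmic) van Hove singularity.
t, N = 1.0, 2000
k = 2 * np.pi * (np.arange(N) + 0.5) / N
KX, KY = np.meshgrid(k, k)
eps = (-2.0 * t * (np.cos(KX) + np.cos(KY))).ravel()
dos, edges = np.histogram(eps, bins=200, range=(-4 * t, 4 * t), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("DOS near band centre:", dos[np.abs(centers) < 0.1].max())
print("DOS near band edge  :", dos[np.abs(centers + 3.5) < 0.1].mean())
```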
## III Critical Temperature and Residual Resistivity
The linearised gap equation for the critical temperature T<sub>C</sub> of the p-wave superconducting phase transition reads as follows:
$$1=-|U|T_C\underset{n}{\sum }\mathrm{e}^{ı\omega _n\eta }\frac{1}{N}\underset{\stackrel{}{k}}{\sum }\frac{2(\mathrm{sin}k_x)^2}{(ı\omega _n-ϵ_\stackrel{}{k}-\mathrm{\Sigma }_{11}(ı\omega _n))(ı\omega _n+ϵ_\stackrel{}{k}-\mathrm{\Sigma }_{22}(ı\omega _n))}.$$
(7)
A useful measure of disorder is the resistivity $`\rho `$. Thus we shall study the relationship between $`\rho `$ and T<sub>C</sub>. The residual resistivity $`\rho `$ at low temperature can be obtained from the Kubo–Greenwood formula; for the disordered two-dimensional systems at hand:
$$\rho =\left\{2\frac{\mathrm{e}^2}{\pi \mathrm{\hbar }c}\frac{1}{N}\underset{k}{\sum }4(\mathrm{sin}k_x)^2t^2\left[\mathrm{Im}G_{11}^C(\stackrel{}{k},0)\right]^2\right\}^{-1},$$
(8)
where $`\mathrm{e}`$ is the electron charge, $`\mathrm{\hbar }`$ is the reduced Planck constant and $`c`$ is the distance between RuO<sub>2</sub> planes.
In short, we have solved the CPA equations (Eqs. 4,5) for various system parameters (Eq. 1) and calculated both T<sub>C</sub> and residual resistivity $`\rho `$.
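As an illustration of the second step, the following sketch (our construction, with arbitrary parameters) solves the clean-limit ($`𝚺=0`$) version of Eq. 7 for T<sub>C</sub> by bisection; in that limit the product of the two Nambu denominators reduces to $`-(\omega _n^2+ϵ_k^2)`$, cancelling the overall minus sign:

```python
import numpy as np

# Clean-limit (Sigma = 0) p-wave gap equation, Eq. 7:
# 1 = |U| T sum_n (1/N) sum_k 2 sin^2(kx) / (w_n^2 + eps_k^2).
t, U, n_k, n_mats = 1.0, 2.0, 64, 500
k = 2 * np.pi * np.arange(n_k) / n_k
KX, KY = np.meshgrid(k, k)

def rhs(T, mu=0.0):
    eps = -2.0 * t * (np.cos(KX) + np.cos(KY)) - mu
    wn = np.pi * T * (2 * np.arange(n_mats) + 1)     # positive frequencies
    chi = sum(np.mean(2.0 * np.sin(KX)**2 / (w**2 + eps**2)) for w in wn)
    return U * 2.0 * T * chi                          # factor 2: +/- w_n

Tlo, Thi = 1e-3, 1.0                                  # bracket for rhs(T) = 1
for _ in range(40):
    Tmid = 0.5 * (Tlo + Thi)
    Tlo, Thi = (Tmid, Thi) if rhs(Tmid) > 1.0 else (Tlo, Tmid)
print("T_C ~", 0.5 * (Tlo + Thi), "in units of t")
```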
To illustrate how effective a van Hove singularity can be in raising T<sub>C</sub>, in Fig. 2$`a`$ we present T<sub>C</sub>, calculated for clean systems and normalised to its maximal value T<sub>C</sub><sup>max</sup>, versus band filling $`n`$ for various values of $`U/t`$. Clearly, T<sub>C</sub> is peaked at $`n=1`$, where the Fermi energy $`E_F`$ is exactly at the van Hove singularity. For small enough interaction $`U`$ it is enhanced by a factor of 7. Turning to our results for the disordered case, in Fig. 2$`b`$ we plot T<sub>C</sub> versus residual resistivity $`\rho `$ as calculated by the CPA procedure described above. The parameters $`U/t=0.702`$ and band filling $`n=0.55`$ were chosen so that the T<sub>C</sub> vs. $`\rho `$ curve reproduces the experiments . Unlike in the Born approximation limit, the CPA residual resistivity depends nonlinearly on the strength of the disorder potential, $`\delta `$. This is illustrated in Fig. 3$`a`$, where the different curves correspond to different band fillings $`n`$. The pronounced nonlinearity for $`n=1`$ is due to a van Hove singularity being near $`E_F`$. As shown in Fig. 3$`b`$ this gives rise to an interesting upturn as $`\rho \to 0`$ in the T<sub>C</sub> vs. $`\rho `$ plot.
## IV Remarks and Conclusions
Our results confirm that, similarly to d–wave superconductors, in the case of p–wave pairing the critical temperature T<sub>C</sub> is a very sensitive function of nonmagnetic diagonal disorder. Nevertheless, they suggest that in Sr<sub>2</sub>RuO<sub>4</sub> doping could lead to a higher value of the critical temperature T<sub>C</sub>. Here we used a uniform distribution of site energy levels $`\epsilon _i`$ as the simplest model of disorder. Clearly, further study of the problem should include a more sophisticated impurity model and a more realistic band structure.
## Acknowledgements
The authors would like to thank Dr A.M. Martin and Prof. K.I. Wysokiński for helpful discussions and Prof. A.P. Mackenzie for the experimental data. This work has been partially supported by KBN grant No. 2P03B05015 and EPSRC grant No. GR/L22454.
# V751 Cyg and V Sge as transient supersoft X-ray sources
## 1 Introduction
Supersoft X-ray sources (SSS) were established as a new class of astronomical objects during the early years of this decade (Trümper et al. 1991, Greiner et al. 1991, Kahabka & van den Heuvel 1997) and are thought to contain white dwarfs accreting mass at rates high enough to allow quasi-steady nuclear burning of the accreted matter (van den Heuvel et al. 1992). The sources are highly luminous ($`L_{bol}\sim 10^{36}`$–$`10^{38}`$ ergs s<sup>-1</sup>), but since their characteristic temperatures are on the order of tens of eV, much of the energy is radiated in the far ultraviolet or soft X-ray region of the spectrum, where the radiation is easily absorbed by the interstellar medium. Because of this, only 2 close-binary Galactic supersoft sources are known (Motch et al. 1994, Beuermann et al. 1995), though there should be about 1000 in the Milky Way (Di Stefano & Rappaport 1994). The situation is further complicated by the fact that supersoft X-ray binaries are highly time variable, both at X-ray and optical wavelengths (Greiner 1995). The greatest number are known in the Magellanic Clouds, but they are difficult to study because of their distance. The quest to find new SSBs has inspired several projects including comprehensive studies of deep ROSAT pointings. So far, these were not generally successful, however.
Recently, two other approaches to search for further members of the SSS class have been attempted: (1) Using a unique variability pattern: Although most SSS are variable in their X-ray and/or optical emission, the behaviour of several systems is distinctive. Data collected during the MACHO team’s monitoring of the LMC has shown that the seemingly sporadic X-ray bright states of RX J0513.9–6951, a well-known supersoft X-ray binary in the LMC (Schaeidt et al. 1993, Pakull et al. 1993), are correlated with short-lived optical low states (Pakull et al. 1993, Reinsch et al. 1996, Southwell et al. 1996). As first suggested by Brian Warner, whose birthday we are celebrating here, at the SSS workshop in Garching in 1996, the 3 year MACHO light curve (Southwell et al. 1996) suggests a strong similarity to the VY Scl stars. VY Scl stars are a subclass of nova-like, cataclysmic variables which are bright most of the time, but occasionally drop in brightness by several magnitudes at irregular intervals (Bond 1980, Warner 1995). (2) Searching among unusual cataclysmic variables: One possible Galactic source, V Sagittae, has been suggested by studying the properties of several unusual cataclysmic variables (Patterson et al. 1998). Steiner & Diaz (1998) note the similarity of 3 other Galactic systems to V Sge.
In the following, I will describe both strategies and review the evidence for transient, luminous supersoft X-ray emission in the two cataclysmic variables (CVs) V751 Cyg and V Sge during their optical low states.
## 2 V751 Cyg
Full details of the correlated X-ray and optical observations of V751 Cyg have appeared elsewhere (Greiner et al. 1999), so we only summarize the relevant information here. The distinct lightcurve of RX J0513.9–6951 and its similarity to VY Scl stars led us to decide to monitor the light curves of the known VY Scl stars. When V751 Cyg started to drop in brightness somewhere between 1 March and 11 March 1997 (Fig. 1), we performed a target-of-opportunity ROSAT HRI observation (4660 sec) on 3 June 1997. A new X-ray source, RX J2052.2+4419, was discovered within 1<sup>′′</sup> of V751 Cyg, at a mean count rate of 0.015 cts/s. During a second ROSAT HRI observation on Dec. 2–8, 1997 the count rate and X-ray spectrum were nearly identical to the June values.
In contrast, V751 Cyg was not detected during the ROSAT all-sky survey on Nov. 19/20, 1990, giving a 3$`\sigma `$ upper limit of 0.019 cts/s in the PSPC. In addition, it was also not detected during a serendipitous pointing on Nov. 11, 1992, providing an upper limit of 0.0058 cts/s in the PSPC. On both occasions V751 Cyg was in its optical bright state. This suggests an anti-correlation of optical and X-ray intensity in V751 Cyg.
A new method (Prestwich et al. 1999) to extract reliable spectral information from HRI data allows one to craft a response matrix for a given observation. Fits using this response matrix to all the source photons of V751 Cyg show that simple black-body models with kT of a few tens of eV are consistent with the data, whereas higher temperature models (0.5 keV) can be ruled out (Fig. 2). An IUE observation performed in 1985 (during an optical high state) was used to derive the extinction towards V751 Cyg based on the broad absorption centered at 2200 Å: $`E(B-V)=0.25\pm 0.05`$, implying $`N_\mathrm{H}`$ = 1.1$`\times `$10<sup>21</sup> cm<sup>-2</sup>. This implies a distance of the order of 500 pc. With this $`N_\mathrm{H}`$ the X-ray spectral fitting gives $`kT=15_{-10}^{+15}`$ eV (see Fig. 2). At this temperature, the bolometric luminosity on 3 June 1997 is 6.5$`\times `$10<sup>36</sup> (D/500 pc)<sup>2</sup> erg/s.
Thus, during its optical low state, V751 Cyg was emitting soft X-rays with a temperature and luminosity which confirm that it is a transient supersoft X-ray source. The appearance of He ii 4686 Å emission in optical spectra taken in Sep. 1997 also indicates the presence of $`>`$54 eV photons. V751 Cyg, like the other members of the VY Scl star group, accretes at a few times 10<sup>-8</sup> M<sub>⊙</sub> yr<sup>-1</sup>. If the mass of the white dwarf in V751 Cyg is small, this may allow nuclear burning, as the high X-ray luminosity suggests. The V751 Cyg values of $`M_\mathrm{V}^{\mathrm{max}}=3.9`$ and $`\mathrm{log}\mathrm{\Sigma }=\mathrm{log}[(L_x/L_{\mathrm{Edd}})^{1/2}P_{\mathrm{orb}}^{2/3}(hr)]=-0.23`$ are consistent, within the uncertainties of $`L_x`$ and $`P_{\mathrm{orb}},`$ with the relation $`M_\mathrm{V}=0.83(\pm 0.25)-3.46(\pm 0.56)\mathrm{log}\mathrm{\Sigma }`$ found for 5 SSB (van Teeseling et al. 1997) implying that, if nuclear burning is the correct interpretation of the X-ray flux during the optical low state, then nuclear burning may continue during the optical high state.
The discovery that V751 Cyg is a transient supersoft X-ray source arose from the similarity in the optical light curve of RX J0513.9–6951 and VY Scl stars. RX J0513.9–6951 (Schaeidt et al. 1993, Pakull et al. 1993) shows ∼4 week optical low states which are accompanied by luminous supersoft X-ray emission (Reinsch et al. 1996, Southwell et al. 1996). It is generally assumed that the white dwarf accretes at a rate slightly higher than the burning rate, and thus is in an inflated state during the optical high state. Changes in the irradiation of the disk caused by the expanding/contracting envelope around the white dwarf have been proposed as explanation for the 1 mag intensity variation (Reinsch et al. 1996, Southwell et al. 1996). The explanation of the X-ray/optical variability of V751 Cyg could be similar to RX J0513.9–6951: $`\dot{M}`$ variations change both the photospheric radius and the disk spectrum. If the white dwarf has a small mass, then photospheric radius expansion is reached at 1$`\times `$10<sup>-7</sup> M<sub>⊙</sub>/yr (Cassisi et al. 1998).
The explanation for the character of the optical and UV observations is not yet clear, but it seems certain that the illumination of the donor and disk play important roles in determining what we see. If the X-ray source during the optical low state is indeed very luminous, one may expect a strong heating effect on the secondary as well as on the accretion disk. The heating of the secondary in V751 Cyg is probably comparable to that in SSS because the illumination depends on the ratio of companion radius to binary separation, which is similar in both kinds of systems. Unfortunately, no photometry has been obtained during the optical low state to immediately test for this effect in V751 Cyg, though it is in any case not expected to produce a strong modulation due to the low orbital inclination.
The question of the illumination of the accretion disk has to be addressed separately for optical low and high state. There is ample evidence in some VY Scl stars that during the optical low state the accretion disk has vanished. Though we have no direct evidence for this in V751 Cyg due to the lack of optical observations, the disk is certainly optically thin, thus drastically reducing the efficacy of illumination. In the optical high state the illumination depends on whether hydrogen burning stops or whether it continues on an inflated white dwarf at a temperature below the sensitivity range of ROSAT: If the burning stops then there are no soft X-rays which could be reprocessed. If the nuclear burning continues, reprocessing may still not be strong because the amount of reprocessing depends on the size of the accretion disk (orbital period) and only for large disks also on the flaring of the disk. For V751 Cyg the disk is so small that any flaring is probably insignificant compared to the angle which the white dwarf subtends with respect to the disk. Even if flaring were significant, reprocessing of the radiation from the white dwarf will begin to have a dominant effect on the local disk temperature (King 1997, Knigge & Livio 1998) if the white dwarf luminosity $`L_{\mathrm{WD}}>2.5L_{\mathrm{acc}}(1-\beta )^{-1}`$ (where $`\beta `$ is the albedo of the disk surface). That is, a disk around a 1 M<sub>⊙</sub> white dwarf accreting at 10<sup>-8</sup> M<sub>⊙</sub>/yr will be dominated by reprocessing only if the white dwarf temperature is ≳2$`\times `$10<sup>5</sup> K ≈ 17 eV. This is seemingly just a value between the temperatures of SSS (30–50 eV) and V751 Cyg (15 eV). Thus, one difference of the systems could be that the disk in V751 Cyg is not flared and therefore not dominated by reprocessing while the SSS disks are flared and dominated by reprocessing and thus are optically much brighter than the VY Scl disks.
## 3 V Sge
It has recently been suggested (Steiner & Diaz 1998; Patterson et al. 1998) that V Sge (and possibly also WX Cen, V617 Sge and HD 104994) have properties very similar to SSBs. These suggestions are based on the following characteristics, shared by these four stars but rare or even absent among canonical cataclysmic variables: (1) the presence of both O VI and N V emission lines, (2) a He II $`\lambda 4686`$/H$`\beta `$ emission line ratio $`>`$2, (3) rather high absolute magnitudes and very blue colours, and (4) orbital lightcurves which are characterized by a wide and deep eclipse.
V Sge has been the target of three dedicated pointed PSPC and HRI observations (one of these splits into 3 separate observation intervals), and in addition is in the field of view of another PSPC observation. A detailed comparison of the optical states of V Sge and archival ROSAT observations has shown that during optical bright states, V Sge is a faint hard X-ray source, while during optical faint states ($`V>12`$ mag), V Sge is a ‘supersoft’ X-ray source (Greiner & Teeseling 1998). Spectral fitting confirms that V Sge’s X-ray properties during its soft X-ray state may be similar to those of supersoft X-ray binaries, although a much lower luminosity cannot be excluded.
The model that has been suggested for RX J0513.9–6951 cannot explain the observational data of V Sge. First, the optical brightness changes of V Sge are very rapid: both the faint-/bright-state transitions as well as the succession of different faint states may occur on timescales of $`<`$1 day (compared to the smooth decline of several days in RX J0513.9–6951). Such very rapid changes are only possible if the white dwarf envelope expands and contracts on the Kelvin-Helmholtz timescale and the mass of the expanding envelope is rather small ($`M_{\mathrm{env}}\lesssim 10^{-9}`$ M<sub>⊙</sub>). Such a small envelope mass is difficult to accept for a white dwarf with stable shell burning (e.g. Prialnik & Kovetz 1995). Second, the expected optical eclipse would become deeper when the system becomes brighter, opposite to what has been observed (Patterson et al. 1998).
It is possible to explain the different optical/X-ray states of V Sge by a variable amount of extended uneclipsed matter, which during the optical bright states contributes significantly to the optical flux and completely absorbs the soft X-ray component (Greiner & Teeseling 1998). A simple wind model for the recently observed radio flux density of V Sge implies a mass-loss rate of the order of 10<sup>-6</sup> M<sub>⊙</sub>/yr (Lockley et al. 1997). With their (assumed) terminal velocity of 1500 km/s this wind zone is completely opaque for X-rays up to 0.7 keV, even if the wind is assumed to be circumbinary instead of arising from one component. Since the radio measurement has been obtained during optical high state, it supports the above described scenario.
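An order-of-magnitude check (ours; the inner radius $`r_0`$ below is an assumed value, not one quoted here) supports this: the hydrogen column through a spherical wind, integrated outward from $`r_0`$, is $`N_\mathrm{H}\approx \dot{M}/(4\pi \mu m_\mathrm{H}v_wr_0)`$:

```python
import math

# Order-of-magnitude sketch (ours); r0 = 1e11 cm is an assumed inner wind
# radius. The exact energy up to which the wind is opaque depends on r0.
Mdot = 1e-6 * 2e33 / 3.15e7     # mass-loss rate [g/s] for 1e-6 Msun/yr
v_w = 1.5e8                     # wind velocity [cm/s] (1500 km/s)
r0 = 1e11                       # assumed inner radius [cm]
mu_mH = 1.3 * 1.67e-24          # mean mass per hydrogen atom [g]

N_H = Mdot / (4.0 * math.pi * mu_mH * v_w * r0)
print(f"N_H ~ {N_H:.1e} cm^-2")  # ~1e23: photoelectrically thick to soft X-rays
```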
## 4 Discussion and Conclusion
As shown above, both strategies to search for new SSS were successful. This is very promising, and opens up the way to use optical observation strategies besides X-ray observations to identify new SSS. The surprising part of this development is that both V751 Cyg and V Sge are short-period CVs; their transient supersoft source nature, if confirmed, would establish a direct link between classical CVs and canonical SSS, which typically have orbital periods in the 0.5–3 day range.
For both sources, V751 Cyg and V Sge, the estimates for their X-ray luminosity (under reasonable assumptions on $`N_\mathrm{H}`$) are ∼10<sup>36</sup>–10<sup>37</sup> erg/s. This is at the lower end of the stable burning region. Two issues are relevant in this respect: (1) As for most CVs, the available evidence for VY Scl stars suggests that the white dwarfs have a low mass. At these low masses the accretion rate necessary for steady-state burning (consistent with Fujimoto 1982) is 1–3$`\times `$10<sup>-8</sup> M<sub>⊙</sub>/yr (Sion & Starrfield 1994, Cassisi 1998), and correspondingly the luminosity is lower than the canonical values for SSS. (2) Even at rates below the steady-state burning level there could be a range where the hydrogen flashes are rather mild. A recent study of this parameter space has shown that the luminosity between the flashes remains at the surprisingly high value of ∼10%–30% of the burning luminosity (Rappaport 1999).
Acknowledgement: I’m grateful to the organizers of this symposium for the exciting atmosphere which stimulated many fruitful discussions. JG is supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DLR) under contract No. 50 OR 96 09 8.
# GROWTH OF PERTURBATIONS IN GRAVITATIONAL COLLAPSE AND ACCRETION
## 1 Introduction
Gravitational instability is responsible for a wide range of structures in the observable universe. Much effort has been devoted to structure formation in cosmological models, and it is well-established that the Hubble expansion slows the growth of perturbations (e.g., Peebles 1980). In this paper we are interested in the behavior of nonspherical perturbations in the background of spherical collapse or accretion.
Hunter (1962) studied the collapse of a homogeneous, pressureless, dust cloud, and showed that in the linear regime, perturbations of arbitrary shape and scale grow asymptotically as $`\delta \rho /\rho \propto (t_0-t)^{-1}\propto R^{-3/2}`$, where $`t=t_0`$ denotes the time of complete collapse, and $`R`$ is the cloud radius. It was thought that this instability might be responsible for the fragmentation of collapsing protostellar clouds. Lin, Mestel and Shu (1965) studied the collapse of a homogeneous ellipsoidal cloud of dust, and showed that its ellipticity increases until a sheet or pancake forms. Homogeneity plays a key role in both examples. However, the presence of a slight initial central concentration significantly alters the evolution of a cloud. Since the dynamical time, $`(G\rho )^{-1/2}`$, is shortest in the central region, a cuspy density and velocity profile develops. As a result, one may question the applicability of the Hunter and Lin-Mestel-Shu instabilities to realistic astrophysical situations. Goodman & Binney (1983) have already commented on problems with the Lin-Mestel-Shu instability for inhomogeneous clouds. In this paper, we show that perturbation growth is slowed by the central mass concentration in an inhomogeneous collapse.
A related problem concerns the stability of spherical accretion flow. Bondi (1952) found a class of solutions describing steady-state accretion onto a compact object from a homogeneous medium. There is a unique transonic “critical” flow, for which the mass flux is maximum; the other subcritical solutions describe subsonic flows. Bondi speculated that nature would prefer the critical flow (see Shu 1992 for a discussion) and underlined the importance of a linear stability analysis. After several attempts by a number of authors, the correct analysis was achieved by Garlick (1979) and Moncrief (1980), who showed that both the critical and subcritical flows are globally stable. However, this does not mean that perturbations can not grow spatially. Indeed, we show in this paper that nonspherical perturbations carried by fluid elements are amplified in the supersonic region of the critical flow (see also Goldreich, Lai & Sahrling 1996; Kovalenko & Eremin 1998).
The present study originated from our attempts to understand the origin of asymmetric supernova (Goldreich, Lai & Sahrling 1996; hereafter GLS). A large body of evidence suggests that Type II supernovae are globally asymmetric, and that neutron stars receive kick velocities of order a few hundred to a thousand km s<sup>-1</sup> at birth (see, e.g., GLS, Cordes & Chernoff 1998, and references therein). The origin of the kick is unknown. A class of mechanisms relies on local hydrodynamical instabilities in the collapsed stellar core (e.g., Burrows et al. 1995; Janka & Müller 1994, 1996; Herant et al. 1994) that lead to asymmetric matter ejection and/or asymmetric neutrino emission; but numerical simulations indicate these instabilities are not adequate to account for kick velocities $`>100`$ km s<sup>-1</sup> (Burrows & Hayes 1996; Janka 1998). Global asymmetric perturbations of presupernova cores may be required to produce the observed kicks (GLS; Burrows & Hayes 1996). GLS suggested that overstable g-modes driven by shell nuclear burning might provide seed perturbations which could be amplified during core collapse. (See Lai & Qian 1998 and Arras & Lai 1999 for discussion/review of alternative mechanisms which rely on asymmetric neutrino transport induced by strong magnetic fields.)
Hints regarding perturbation growth during collapse are obtained by considering the collapse of finite-mass fluid shells onto central point masses. In the context of core-collapse supernova, one might imagine that the spherical shell mimics the outer supersonic region, while the central mass represents the homologous core. A large number of growing modes can be identified (GLS; Appendix B). In particular, the dominant dipole mode grows according to $`\mathrm{\Delta }R(\theta ,t)/R(t)\propto (t_0-t)^s`$, with $`s<0`$, where $`R`$ is the mean radius of the shell and $`\mathrm{\Delta }R`$ is the perturbation. For a core-to-shell mass ratio of order unity, the power-law index $`s\simeq -1`$, corresponding to $`\mathrm{\Delta }R/R\propto R^{-3/2}`$.
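The equivalence of the two forms is a one-line consequence of free fall: near complete collapse the shell radius obeys $`R\propto (t_0-t)^{2/3}`$, so that

$$\frac{\mathrm{\Delta }R}{R}\propto (t_0-t)^{-1}=\left[(t_0-t)^{2/3}\right]^{-3/2}\propto R^{-3/2}.$$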
The thin-shell model is too simplistic to be applicable to real presupernova collapse. One approach is to determine the stability of the self-similar collapse solution (Goldreich & Weber 1980; Yahil 1983). An analysis by Goldreich & Weber (1980) shows that the inner homologous core is stable against nonradial perturbations. This is not surprising given the significant role played by pressure in the subsonic collapse. Pressure is less important in the supersonically collapsing region, making it more susceptible to large scale instability. A stability analysis of Yahil’s self-similar solution, which extends the Goldreich-Weber solution to include a supersonically collapsing outer core, does not reveal any unstable global mode before the proto-neutron star forms (Lai 2000). However, it is perhaps more illuminating to treat the initial-value problem, and determine how initial perturbations evolve as the collapse proceeds. We carry out such an analysis in this paper.
The rest of the paper is organized as follows. The basic perturbation equations are summarized in §2, and in §3 we derive the asymptotic scaling relations for the perturbations in the regime where the collapse/accretion flow is supersonic. In §4 we present a numerical study of the evolution of perturbations during collapse; this numerical study not only confirms the analytic asymptotic relations, but also explores the regime where pressure is important. We discuss the implications of our results in §5.
## 2 Basic Equations
We consider a barotropic fluid obeying the equation of state $`p=K\rho ^\gamma `$, where $`\gamma `$ is the adiabatic index and $`K`$ is a constant. The unperturbed flow is spherically symmetric, with velocity in the radial direction. The Eulerian perturbations of density, $`\rho `$, and velocity, $`𝐯`$, can be decomposed into different angular modes, each of which has the form
$`\delta \rho (𝐫,t)`$ $`=`$ $`\delta \rho (r,t)Y_{lm}(\theta ,\varphi ),`$ (1)
$`\delta 𝐯(𝐫,t)`$ $`=`$ $`\delta v_r(r,t)Y_{lm}(\theta ,\varphi )\widehat{r}+\delta v_{\perp }(r,t)\widehat{\nabla }_{\perp }Y_{lm}(\theta ,\varphi )+\widehat{\nabla }_{\perp }\times \left[\delta v_{\mathrm{rot}}(r,t)Y_{lm}(\theta ,\varphi )\widehat{r}\right],`$ (2)
where
$$\widehat{\nabla }_{\perp }\equiv \widehat{\theta }\frac{\partial }{\partial \theta }+\frac{\widehat{\varphi }}{\mathrm{sin}\theta }\frac{\partial }{\partial \varphi },$$
(3)
and $`\widehat{r},\widehat{\theta },\widehat{\varphi }`$ are unit vectors in spherical coordinates. The perturbations of pressure, $`p`$, and gravitational potential, $`\psi `$, have the same angular dependence as $`\delta \rho (𝐫,t)`$. The perturbed mass continuity equation reads
$$\frac{d\delta \rho }{dt}+(\nabla 𝐯)\delta \rho +\frac{1}{r^2}\left(r^2\rho \delta v_r\right)^{}-l(l+1)\frac{\rho \delta v_{\perp }}{r}=0,$$
(4)
where $`\rho ,𝐯=v\widehat{r}`$ denote the unperturbed spherical flow variables, prime stands for $`/r`$, and $`d/dt=/t+𝐯`$ is the total time derivative. The perturbed radial Euler equation can be written as
$$\frac{d\delta v_r}{dt}+v^{}\delta v_r=-\left(\frac{\delta p}{\rho }\right)^{}-\left(\delta \psi \right)^{},$$
(5)
and the tangential Euler equation reduces to
$`{\displaystyle \frac{d\delta v_{\perp }}{dt}}+{\displaystyle \frac{v}{r}}\delta v_{\perp }=-{\displaystyle \frac{1}{r}}\left({\displaystyle \frac{\delta p}{\rho }}\right)-{\displaystyle \frac{1}{r}}\delta \psi ,`$ (6)
$`{\displaystyle \frac{d}{dt}}(r\delta v_{\mathrm{rot}})=0.`$ (7)
The perturbed Poisson equation is
$$\frac{1}{r^2}\left[r^2(\delta \psi )^{}\right]^{}-\frac{l(l+1)}{r^2}\delta \psi =4\pi G\delta \rho .$$
(8)
The vorticity perturbation reads
$$\nabla \times \delta 𝐯=\frac{\delta v_T}{r}\widehat{r}\times \widehat{\nabla }_{\perp }Y_{lm}+\frac{l(l+1)}{r}\delta v_{\mathrm{rot}}Y_{lm}\widehat{r}+\frac{1}{r}(r\delta v_{\mathrm{rot}})^{}\widehat{\nabla }_{\perp }Y_{lm},$$
(9)
where
$$\delta v_T\equiv \delta v_r-\delta u^{},$$
(10)
with
$$\delta u(r,t)\equiv r\delta v_{\perp }(r,t).$$
(11)
Thus $`\delta v_{\mathrm{rot}}`$ and $`\delta v_T`$ are related to the vorticity of the perturbed flow. Equation (6) is transformed to
$$\frac{d\delta u}{dt}=-\frac{\delta p}{\rho }-\delta \psi ,$$
(12)
with the aid of equation (11). Combining equations (5) and (12), we find
$$\frac{d}{dt}\delta v_T=-v^{}\delta v_T$$
(13)
Equations (7) and (13) express the conservation of circulation in a barotropic fluid. (For homogeneous collapse or expansion, as in an expanding universe, we have $`v=(\dot{R}/R)r`$, where $`R`$ is the scale factor; equation (13) then becomes the familiar $`d(R\delta v_T)/dt=0`$.) The former also reflects the conservation of angular momentum; following a fluid element, $`\delta v_{\mathrm{rot}}\propto 1/r`$. Since, as we shall prove shortly, $`v\propto r^{-1/2}`$ as $`r\to 0`$, equation (13) implies the asymptotic relation $`\delta v_T\propto r^{1/2}`$. We shall focus on irrotational flows from here on. We neglect $`\delta v_{\mathrm{rot}}`$ because it is decoupled from the density perturbation, and $`\delta v_T`$ because it decays inward. The continuity equation (4) for irrotational flow simplifies to
$$\frac{d\delta \rho }{dt}+(\nabla 𝐯)\delta \rho +\frac{1}{r^2}\left(r^2\rho \delta u^{}\right)^{}-l(l+1)\frac{\rho \delta u}{r^2}=0.$$
(14)
Note that the Poisson equation (8) has the following integral solution:
$$\delta \psi (r,t)=-\frac{4\pi G}{2l+1}\left[\frac{1}{r^{l+1}}Q_l(r,t)+r^lS_l(r,t)\right],$$
(15)
where
$$Q_l(r,t)=\int _0^rx^{l+2}\delta \rho (x,t)𝑑x,S_l(r,t)=\int _r^{\mathrm{}}x^{1-l}\delta \rho (x,t)𝑑x.$$
(16)
Equations (12), (14), and (15) determine the perturbed flow. This particular form of the perturbation equations is convenient for implementation as a Lagrangian numerical code (see §4).
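As an illustration of that implementation route (a sketch of ours, not the code of §4), the multipole integrals of equation (16) can be accumulated on a radial grid with cumulative trapezoidal sums, with $`G=1`$:

```python
import numpy as np

# Sketch (ours) of the multipole solution, Eqs. (15)-(16): given delta_rho
# on a radial grid, build Q_l(r) and S_l(r) by cumulative trapezoidal
# integration and assemble delta_psi. Units with G = 1.
def delta_psi(r, drho, l):
    fq = r**(l + 2) * drho                      # integrand of Q_l
    fs = r**(1 - l) * drho                      # integrand of S_l
    dr = np.diff(r)
    Q = np.concatenate(([0.0], np.cumsum(0.5 * (fq[1:] + fq[:-1]) * dr)))
    S_in = np.concatenate(([0.0], np.cumsum(0.5 * (fs[1:] + fs[:-1]) * dr)))
    S = S_in[-1] - S_in                         # integral from r outward
    return -4.0 * np.pi / (2 * l + 1) * (Q / r**(l + 1) + r**l * S)

# quick test: an l = 2 density shell centered at r = 3
r = np.linspace(1e-3, 10.0, 4000)
drho = np.exp(-(r - 3.0)**2)
print(delta_psi(r, drho, l=2)[::800])
```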
## 3 Asymptotic Analysis
Consider a cloud in hydrostatic equilibrium with an initial density profile that decreases outward. As its pressure is depleted, the cloud starts to collapse. Since the dynamical time, $`(G\rho )^{-1/2}`$, decreases outward, cuspy density and velocity profiles will be established after the core has collapsed. By contrast, a uniform density dust cloud collapses homologously, and remains uniform as the collapse proceeds. We now study the behavior of the flow and its perturbations in the asymptotic regime where the gas pressure is negligible compared to gravity.
### 3.1 Unperturbed Spherical Flow
Consider how the velocity, $`v_m`$, and density, $`\rho _m`$, of a fluid shell with enclosed mass $`m`$ change as the shell collapses from its initial radius $`r_{m0}`$ to a smaller radius $`r_m`$. The pressure is negligible in the supersonic region, so for $`r_m\ll r_{m0}`$, we have
$$v_m\simeq -\left(\frac{2Gm}{r_m}\right)^{1/2}\propto r_m^{-1/2}.$$
(17)
Then from the continuity equation, we obtain
$$\rho _m\propto r_m^{-3/2}.$$
(18)
Note that these relations do not require the mass of the collapsed object to be fixed. Indeed, with $`m_c(t)`$ the mass of the collapsed core at time $`t`$, we have
$$v(r,t)\simeq -\left[\frac{2Gm_c(t)}{r}\right]^{1/2}\propto r^{-1/2},\rho (r,t)\simeq \frac{\dot{m}_c(t)}{4\pi r^2|v(r,t)|}\propto r^{-3/2},$$
(19)
where $`\dot{m}_c`$ is the mass accretion rate onto the core. (If the central region of a pressureless cloud is non-singular to begin with, then at the moment when the center reaches infinite density, the central density and velocity profiles are $`\rho \propto r^{-12/7}`$ and $`v\propto -r^{1/7}`$; Penston 1969. However, after the core has formed, the profiles are given by equation \[19\].) When the accretion time, $`m_c/\dot{m}_c`$, is much longer than the dynamical time of the flow, equation (19) describes the inner region of a steady-state Bondi flow. In a dynamical collapse, when the accreted mass becomes much larger than the original core mass, dimensional analysis implies $`m_c(t)=K^{3/2}G^{-(3\gamma -1)/2}t^{4-3\gamma }\overline{m}_c`$, where $`\overline{m}_c`$ is a dimensionless number, and $`t`$ is measured from the moment when the center collapses. For $`\gamma =1`$, this reduces to the familiar $`m_c=\overline{m}_c(c_s^3/G)t`$ (see Shu 1977). Equation (19) describes the central region of Shu’s expansion-wave solution (Shu 1977), and the post-collapse extension of the Larson-Penston solution (Larson 1969; Penston 1969; Hunter 1977) in the context of star formation, as well as Yahil’s post-collapse solution in the context of core-collapse supernova (Yahil 1983).
Note that the above asymptotic scaling solution assumes supersonic flow for $`r\to 0`$, i.e., $`|v|\gg c_s\propto \rho ^{(\gamma -1)/2}\propto r^{-3(\gamma -1)/4}`$. This requires $`\gamma <5/3`$. The special case of $`\gamma =5/3`$ is considered in Appendix A.
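A quick numerical check of these asymptotics (our sketch, in units $`Gm=1`$): a pressureless shell released from rest at $`r=1`$ has $`v/[-(2Gm/r)^{1/2}]=\sqrt{1-r}\to 1`$ as $`r\to 0`$, which the integration below reproduces:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check (ours): a pressureless shell falling from rest at r = 1
# around unit enclosed mass (G*m = 1) approaches free fall, v -> -(2/r)^{1/2}.
def rhs(t, y):
    return [y[1], -1.0 / y[0]**2]

hit = lambda t, y: y[0] - 0.01          # stop when r = 0.01
hit.terminal = True

sol = solve_ivp(rhs, [0.0, 2.0], [1.0, 0.0], rtol=1e-10, atol=1e-12,
                events=hit, dense_output=True)
ts = np.linspace(0.0, sol.t[-1], 100000)
rs, vs = sol.sol(ts)
for r_t in [0.3, 0.1, 0.03, 0.01]:
    i = np.argmin(np.abs(rs - r_t))
    print(f"r = {rs[i]:.3f}  v/v_ff = {vs[i] / -np.sqrt(2.0 / rs[i]):.3f}")
```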
### 3.2 Perturbations
We investigate asymptotic power-law solutions to equations (12), (14) and (15). Let $`\delta \rho \propto r^a`$ and $`\delta u\propto r^b`$. The Poisson equation has a general solution of the form $`\delta \psi \sim Q_c/r^{l+1}+𝒪(r^2\delta \rho )`$, where the first term arises from a central multipole moment, $`Q_c`$, and the second term is due to the density perturbation outside the central core. In most astrophysical situations, $`Q_c`$ is zero or close to zero; possible exceptions are discussed separately in §3.3. For example, in accretion onto a star, the supersonic flow may be stopped by a standing shock near the stellar surface. Inside the shock, any inhomogeneity carried in by the gas will be smeared out on a local dynamical timescale. In accretion onto a Schwarzschild black hole, the event horizon defines the inner boundary of the supersonic flow, and the no-hair theorem ensures that mass multipole moments are not retained by the black hole. Thus we have $`\delta \psi \propto r^{a+2}`$.
In the asymptotic regime, $`d/dt\simeq v(\partial /\partial r)`$, so equations (12) and (14) reduce to:
$`{\displaystyle \frac{bv\delta u}{r}}+\gamma K\rho ^{\gamma -2}\delta \rho +\delta \psi =0,`$ (20)
$`\left(a+{\displaystyle \frac{3}{2}}\right){\displaystyle \frac{v\delta \rho }{r}}+\left[b\left(b-{\displaystyle \frac{1}{2}}\right)-l(l+1)\right]{\displaystyle \frac{\rho \delta u}{r^2}}=0.`$ (21)
Equation (21) implies $`b=a+2`$ for $`a\neq -3/2`$. Equation (20) has the scaling form $`𝒪(br^{b-3/2})+𝒪(Kr^{a-3(\gamma -2)/2})+𝒪(r^{a+2})=0`$, from which we see immediately that $`b=0`$ and $`a=-2`$ (for $`\gamma <5/3`$). To obtain the scaling behavior for $`\delta v_r=\delta u^{}`$, we need a higher order correction for $`\delta u`$. Let $`\delta u=\delta u_0+\delta u_1r^{b_1}`$, where $`\delta u_0`$ and $`\delta u_1`$ are constants independent of $`r`$. For $`K=0`$ (the pressureless case) or $`\gamma <2/3`$, so that $`\delta p/\rho \ll \delta \psi `$ asymptotically, equation (12) gives $`𝒪(b_1v\delta u_1r^{b_1-1})\sim 𝒪(r^{a+2})`$. We then have $`b_1=a+7/2=3/2`$. For $`K\neq 0`$ and $`\gamma >2/3`$, equation (12) reduces to $`𝒪(b_1v\delta u_1r^{b_1-1})+𝒪(r^{a-3(\gamma -2)/2})=0`$, which gives $`b_1=(5-3\gamma )/2`$. To summarize, the asymptotic scaling relations for the perturbations are:
$`\delta u`$ $`\simeq `$ $`\delta u_0+\delta u_1r^{b_1},`$ (22)
$`\delta v_r`$ $`\propto `$ $`r^{b_1-1},\delta v_{\perp }\propto r^{-1},`$ (23)
$`\delta \rho `$ $`\simeq `$ $`-2l(l+1){\displaystyle \frac{\rho \delta u}{rv}}\propto r^{-2},`$ (24)
where
$$b_1=\{\begin{array}{cc}3/2\hfill & \text{for }K=0\text{ or }\gamma <2/3\text{,}\hfill \\ (5-3\gamma )/2\hfill & \text{for }K\neq 0\text{ and }\gamma \geq 2/3\text{.}\hfill \end{array}$$
(25)
Note that the above results apply for $`\gamma <5/3`$. The special case of $`\gamma =5/3`$ Bondi accretion is discussed in Appendix A.
### 3.3 Physical Interpretation
Equations (22)-(24) describe the Eulerian perturbations. To derive scaling relations for the Lagrangian displacement, $`𝝃(𝐫,t)=\xi _r(r,t)Y_{lm}\widehat{r}+\xi _{\perp }(r,t)\widehat{\nabla }_{\perp }Y_{lm}`$, and for the Lagrangian density perturbation, $`\mathrm{\Delta }\rho `$, we note that $`\mathrm{\Delta }𝐯=\delta 𝐯+(𝝃\cdot \nabla )𝐯=d𝝃/dt=(\partial 𝝃/\partial t)+(𝐯\cdot \nabla )𝝃`$, which yields
$`\delta v_r`$ $`=`$ $`{\displaystyle \frac{\partial \xi _r}{\partial t}}+v\xi _r^{}-v^{}\xi _r,`$ (26)
$`\delta v_{\perp }`$ $`=`$ $`{\displaystyle \frac{\partial \xi _{\perp }}{\partial t}}+v\xi _{\perp }^{}-{\displaystyle \frac{v}{r}}\xi _{\perp }.`$ (27)
In the asymptotic regime, we find using equation (23) that
$$\xi _r\propto r^{b_1+1/2},\xi _{\perp }\propto r^{1/2}.$$
(28)
The Lagrangian density and velocity perturbations are
$$\frac{\mathrm{\Delta }\rho }{\rho }\sim \frac{\mathrm{\Delta }v_{\perp }}{v}\propto r^{-1/2},\frac{\mathrm{\Delta }v_r}{v}\propto r^{b_1-1/2}.$$
(29)
The scaling relations (28) and (29) can also be derived directly from Lagrangian perturbation theory.
Equations (22)-(24) and (29) are the main results of this paper. (This result was discussed without derivation in GLS; similar results were also obtained by Kovalenko & Eremin 1998 in the context of Bondi accretion.) They can be understood from the following simple consideration: The time that a fluid element spends near radius $`r`$, of order $`t_r=dt/d\mathrm{ln}r\propto r^{3/2}`$, decreases rapidly as $`r`$ decreases. Unless the specific torque, $`-\widehat{\nabla }_{\perp }(\delta \psi +\delta p/\rho )`$, increases inward sufficiently rapidly to compensate for the decreased time, the specific angular momentum, $`\delta u`$, of the fluid element will be independent of $`r`$ at small radii. Thus $`\mathrm{\Delta }v_{\perp }\propto r^{-1}`$. This leads to a tangential displacement $`\xi _{\perp }\sim t_r\delta v_{\perp }\propto r^{1/2}`$, which induces a density perturbation $`\mathrm{\Delta }\rho /\rho \sim \xi _{\perp }/r\propto r^{-1/2}`$.
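This argument is easy to verify numerically (our sketch, in units $`Gm_c=1`$ and with an arbitrary seed $`\delta u_0`$): integrating equation (27) along a free-fall trajectory with conserved $`\delta u`$ shows $`\xi _{\perp }/r`$ approaching the predicted $`r^{-1/2}`$ power law, with coefficient $`\sqrt{2}\delta u_0`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch (ours, arbitrary seed du0): follow a fluid element in the free-fall
# region, v = -(2 G m_c / r)^{1/2} with G m_c = 1, carrying a conserved
# delta_u = du0; Eq. (27) gives d(xi_perp)/dt = (v/r) xi_perp + du0 / r,
# and (xi_perp / r) * r^{1/2} should approach the constant sqrt(2)*du0.
du0 = 1e-3

def rhs(t, y):
    r, xi = y
    v = -np.sqrt(2.0 / r)
    return [v, (v / r) * xi + du0 / r]

stop = lambda t, y: y[0] - 0.008        # stop before r reaches zero
stop.terminal = True
sol = solve_ivp(rhs, [0.0, 1.0], [1.0, 0.0], rtol=1e-10, atol=1e-12,
                events=stop, dense_output=True)
ts = np.linspace(0.0, sol.t[-1], 200000)
rs, xis = sol.sol(ts)
for r_t in [0.3, 0.1, 0.03, 0.01]:
    i = np.argmin(np.abs(rs - r_t))
    print(f"r = {rs[i]:.3f}  (xi_perp/r)*r^0.5 = {xis[i]/rs[i]*rs[i]**0.5:.2e}")
```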
Equation (29) indicates that the fractional density and tangential velocity perturbations grow in supersonic collapse/accretion. The radial velocity perturbation, however, can be affected by pressure even in the regime where the pressure has negligible effect on the unperturbed flow. We see that $`\mathrm{\Delta }v_r`$ grows with decreasing $`r`$ when $`\gamma >1`$, and $`\mathrm{\Delta }v_r/v`$ grows only when $`\gamma >4/3`$ (but recall that the scaling relations apply only for $`\gamma <5/3`$).
### 3.4 Special Cases: Possibility of Faster Growth
Now consider a hypothetical situation in which the inner boundary of the flow is a “sticky sphere”: Once a fluid element enters the sphere, it gets stuck on the spot where it enters. In this case, a finite multipole moment, $`Q_c`$, will accumulate at the center (see eq. ). In the region where $`Q_c`$ dominates the gravitational perturbation, we have $`\delta \psi \propto Q_c/r^{l+1}\gg r^2\delta \rho `$. It is easy to show from equations (20)-(21) that the perturbations have the following scaling behavior (for $`\gamma <5/3`$):
$`\delta u`$ $`\approx `$ $`{\displaystyle \frac{2}{2l-1}}{\displaystyle \frac{r}{v}}\delta \psi \propto r^{1/2-l},`$ (30)
$`\delta \rho `$ $`\approx `$ $`-{\displaystyle \frac{3}{2}}{\displaystyle \frac{\rho \delta u}{rv}}\propto r^{-l-3/2},`$ (31)
$`\delta \psi `$ $`=`$ $`-\left({\displaystyle \frac{4\pi G}{2l+1}}\right){\displaystyle \frac{Q_c}{r^{l+1}}}\propto r^{-l-1}.`$ (32)
These scalings depend on $`l`$. Even for $`l=1`$, the growth is faster than the case discussed in §3.2 and §3.3. These scalings can be understood as follows: The central multipole moment exerts a torque $`\propto r^{-l-1}`$ and a radial force $`\propto r^{-l-2}`$ on a fluid element. The angular momentum grows as $`t_r\delta \psi \propto r^{1/2-l}`$, and the radial velocity perturbation grows as $`\delta v_r\propto r^{-l-1/2}`$. Thus we have $`\delta v_r/v\sim \delta v_{\perp }/v\propto r^{-l}`$. The Lagrangian displacement is $`\xi _r\sim \xi _{\perp }\sim t_r\delta v\propto r^{1-l}`$, and the resulting density perturbation is $`\mathrm{\Delta }\rho /\rho \sim \xi /r\propto r^{-l}`$.
As we have argued in §3.2, it is unlikely that the scaling relations derived in this section will apply in general situations, since it is hard to imagine that the central core can retain its multipole moments for a time much longer than the dynamical time of the flow at the inner boundary. There are two possible exceptions:
(i) For $`l=1`$ modes: The core has an extra degree of freedom, i.e., it can have linear motion in response to flow perturbations. This gives rise to a net central dipole moment and the possibility of faster growth of dipolar perturbations. Let $`m_c`$ be the core mass and $`Z_c`$ be the position (along the $`z`$-axis) of its center-of-mass relative to the origin of coordinates. The displaced core induces a potential perturbation $`\delta \psi \approx -(Gm_cZ_c/r^2)\mathrm{cos}\theta `$. From equations (30)-(32), the flow perturbations are given by
$$\frac{\delta \rho }{\rho }\approx \frac{3Z_c}{2r}\mathrm{cos}\theta ,\frac{\delta v_r}{v}\approx \frac{Z_c}{2r}\mathrm{cos}\theta ,\frac{\delta v_{\perp }}{v}\approx \frac{Z_c}{r}\mathrm{sin}\theta ,$$
(33)
where we have used $`v\approx -(2Gm_c/r)^{1/2}`$. However, the asymptotic perturbations given by equation (33) simply represent a spherical flow centered at $`Z_c`$. Thus it is not surprising that the gravitational force exerted on the core and the rate at which momentum flows across a surface surrounding it sum to zero.
(ii) For $`l=2`$ modes: Suppose the central core consists of a rotating star, or a star with a massive circumstellar disk. This could result from an early phase of accretion/collapse with significant angular momentum. Subsequent spherical accretion, with no net angular momentum, will be affected by the central quadrupole.
## 4 Numerical Calculations of Linear Perturbation Growth
The asymptotic scaling relations derived in §3 apply only in the supersonic regime. To determine the behavior of perturbations under general conditions, we numerically follow the collapse of a self-gravitating cloud and evolve the nonspherical perturbation carried by each fluid element. Because of the large disparity between the timescales involved in the central region and in the outer region, it is essential for the code to have a wide dynamical range. Previous multidimensional simulations of Bondi accretion (e.g., Ruffert 1994) did not achieve high enough resolution in the central region to reveal the growth of perturbations. We restrict our calculations to the linear regime. Thus perturbations associated with different $`Y_{lm}`$ evolve independently, and our calculations involve only one spatial dimension.
We have constructed a one-dimensional Lagrangian finite-difference code. The unperturbed flow variables $`(r,v,\rho )`$ are followed with a standard scheme (Bowers & Wilson 1991), and the Eulerian perturbations $`(\delta \rho ,\delta u,\delta \psi )`$ are evolved using equations (12), (14) and (15). The flow is covered by a uniform mass grid. The quantities $`r,v,\delta u,\delta \psi `$ are zone-edge-centered, while $`\rho ,\delta \rho `$ are zone-centered. A staggered leapfrog integration scheme is adopted to ensure second-order accuracy in time.
### 4.1 Pressureless Collapse
To calibrate our code and check the asymptotic scaling relations of §3, we study the collapse of a centrally concentrated dust cloud. Figure 1 shows an example of such a collapse calculation. The cloud, of total mass $`m=1`$ and radius $`r=1`$, is initially at rest with density profile $`\rho \propto r^{-1}`$; i.e., the radius of a mass shell with enclosed mass $`m`$ is $`r_m=m^{1/2}`$. We initialize an $`l=2`$ perturbation with $`\delta \rho /\rho =1`$ in arbitrary units and $`\delta u=0`$. The Poisson equation is solved to give the potential perturbation, $`\delta \psi `$. We impose an inner boundary at $`r_c=10^{-3}`$: Once a mass shell enters this boundary, it is removed from the simulation domain, and its perturbations are immediately smeared out; i.e., the central quadrupole moment, $`Q_c`$, is maintained at zero. The profiles $`\rho \propto r^{-3/2}`$ and $`v\propto r^{-1/2}`$ for the unperturbed flow are established near the center as the collapse proceeds. Figure 1 depicts the evolution of the perturbation carried by three different mass shells. We see that as the mass shells collapse to small radii, the analytic asymptotic scalings $`\delta u\approx `$ constant and $`\delta \rho /\rho \propto r^{-1/2}`$ are achieved. Calculations with other initial conditions confirm that these scalings are generic features of perturbation growth in the absence of central multipole moments.
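A crude but minimal version of the unperturbed part of such a calculation fits in a few lines. The sketch below is our illustration, not the authors' code: it integrates only the pressureless background flow with the $`\rho \propto r^{-1}`$ initial condition and the absorbing boundary at $`r_c=10^{-3}`$ (units $`G=m=r=1`$), using a simple kick-drift update in place of the full staggered leapfrog:

```python
# Our sketch of the unperturbed pressureless collapse of Sec. 4.1;
# the perturbation equations (12), (14), (15) are not evolved here.
import numpy as np

N = 100
m = np.linspace(0.0, 1.0, N + 1)[1:]   # enclosed mass at the zone edges
r = np.sqrt(m)                         # rho ~ r^-1  =>  r_m = m^(1/2)
v = np.zeros(N)                        # cloud initially at rest
r_c = 1e-3                             # absorbing inner boundary

alive = np.ones(N, dtype=bool)
while alive.any():
    # time step: a small fraction of the shortest local free-fall time
    dt = 0.005 * np.sqrt((r[alive]**3 / m[alive]).min())
    v[alive] -= m[alive] / r[alive]**2 * dt    # gravitational kick (G = 1)
    r[alive] += v[alive] * dt                  # drift
    alive &= r > r_c                           # remove shells inside r_c

# Near the center the flow should approach v ~ r^(-1/2), rho ~ r^(-3/2).
```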
We have also studied the case where the central multipole moment, $`Q_c`$, is nonzero, and confirmed the steeper scalings derived in §3.4.
### 4.2 Collapse with Finite Pressure
Next we study the collapse of clouds having finite pressure. For definiteness, we choose the initial cloud to be a $`\gamma =4/3`$ spherical polytrope in hydrostatic equilibrium. The collapse is initiated by reducing $`\gamma `$ to $`1.3`$, and by reducing $`K`$ by $`10\%`$. Either one of these reductions alone is adequate to induce the collapse. This mimics Type II supernova collapse, where a white dwarf core of a massive star collapses to a neutron star. However, to focus on the growth of perturbations during the collapse, we do not include a shock in our calculation. We define an inner boundary at $`r_c=0.005`$; the original cloud radius is $`r=1`$ and its mass $`m=1`$. When the central density becomes greater than $`10^6`$, corresponding to a few times nuclear density if the initial cloud has mass and radius typical of a Chandrasekhar mass white dwarf, we cut out the flow inside $`r_c`$ from the computational domain. This enables us to follow the collapse and accretion of the rest of the cloud.
Figure 2 shows the unperturbed density and velocity of several different mass shells as functions of their Lagrangian radii. The inner region collapses homologously; the Mach numbers of individual shells remain below unity outside $`r_c`$. The outer region of the flow goes through a transonic point, and eventually attains the free-fall asymptotics, with $`v\propto r^{-1/2}`$ and $`\rho \propto r^{-3/2}`$. Inspecting the flow profiles at different time slices (not shown), we confirm that our numerical results agree with the self-similar solution (Yahil 1983) in the regime where it applies.
Figure 3 and Figure 4 show two examples of the evolution of $`l=2`$ perturbations during the collapse depicted in Fig. 2. In Fig. 3, the initial perturbation is chosen to be $`\delta \rho /\rho =1`$ and $`\delta u=0`$, while in Fig. 4, the initial perturbation corresponds to the eigenfunctions of the first g-mode of a $`\gamma =4/3`$ polytrope, with adiabatic index $`\gamma _1=5/3`$. This value of $`\gamma _1`$ is used only for setting up the initial perturbations; after the collapse starts, the adiabatic index is set to $`\gamma =1.3`$. We see that the perturbations carried by the inner mass shells ($`m=0.2,0.6`$), which never become supersonic, vary in an oscillatory manner with no increase in amplitude. This is consistent with the result of Goldreich & Weber (1980) that the homologous inner core of a collapsing $`\gamma =4/3`$ polytrope is stable against non-radial perturbations. However, the outer region of the cloud ($`m>0.85`$) attains a high Mach number and eventually approaches free-fall. We see from Figs. 3 and 4 that the density perturbation grows in this outer region, and that the asymptotic scaling relations derived in §3.2 are recovered.
## 5 Discussion
Nonspherical perturbations are amplified during the supersonic collapse or accretion of a centrally concentrated gas cloud. We have derived asymptotic scaling relations for their growth. These general results have implications for several different astrophysical problems which we now discuss.
In spherical accretion onto a black hole, we expect the radiative efficiency to increase as a result of nonradial perturbations in the flow. In the asymptotic regime, the tangential velocity perturbation scales as $`\delta v_{\perp }\propto r^{-1}`$, and the corresponding Mach number scales as $`\delta v_{\perp }/c_s\propto r^{(3\gamma -7)/4}`$. For $`\gamma <5/3`$, the Mach number grows faster than $`r^{-1/2}`$. Similarly, the Mach number associated with the radial velocity perturbation (see eq. for $`\gamma >2/3`$) scales as $`\delta v_r/c_s\propto r^{-3(\gamma -1)/4}`$. One might expect the formation of shocks which lead to thermalization of the flow and high radiative efficiency (see Chang & Ostriker 1985 for previous discussion on the formation of shocks in spherical flows). This point has also been noted recently by Kovalenko & Eremin (1998), who derived similar scaling relations for Bondi accretion.
In the context of core-collapse supernovae, our results indicate that perturbations in the homologous inner core do not grow, but those in the outer core, involving $`\sim 15\%`$ of the core mass, and in the envelope are amplified. Since $`\delta \rho /\rho `$ scales as $`r^{-1/2}`$, we expect that the amplification factor is at most $`\sim 10`$ for $`r`$ decreasing from $`\sim 1500`$ km to $`\sim 15`$ km. It is possible that dipole perturbations obey the $`r^{-1}`$ scaling (see §3.4), in which case the amplification factor could be larger. Interestingly, if overstable g-modes driven by shell nuclear burning are responsible for the seed of presupernova perturbations (see GLS), it is exactly at the outer core where the perturbation amplitude is expected to be the largest. The asymmetric density perturbation may lead to asymmetric shock propagation and breakout, which then give rise to asymmetry in the explosion and a kick to the neutron star (e.g., Burrows & Hayes 1996).
Finally, we note that our analysis has neglected rotation (i.e., the flow does not have a net angular momentum). In the perturbative regime, rotation (around the $`z`$-axis) is represented by the last term of equation (2) with $`m=0`$, i.e., $`\delta 𝐯_{\mathrm{rot}}=\delta v_{\mathrm{rot}}(r,t)(\partial Y_{l0}/\partial \theta )\widehat{\varphi }`$. As shown in §2, this rotational perturbation is decoupled from the density perturbation. Therefore, provided the rotational velocity is small in comparison to the radial velocity, i.e., $`|\delta v_{\mathrm{rot}}|\ll |v|`$, we expect our scaling relations for the growth of perturbations to be valid.
A major portion of this research was done between 1995 and 1997, when D.L. was a postdoc in theoretical astrophysics at Caltech; support from a Richard C. Tolman fellowship is gratefully acknowledged. This work is supported in part by NASA grant NAG 5-8356, by a research fellowship from the Alfred P. Sloan foundation, and by NSF grant 94-14232.
## Appendix A Perturbation of Bondi Accretion for $`\gamma =5/3`$
The Bondi solution for $`\gamma =5/3`$ is special because the flow remains subsonic for $`r>0`$. We can think of the sonic point as being located at $`r=0`$ (in Newtonian theory). For $`r\ll GM/c_{\infty }^2`$, where $`M`$ is the central mass and $`c_{\infty }`$ is the sound speed at infinity, the flow velocity and density are given by
$`|v|`$ $`\approx `$ $`c_s\approx \left({\displaystyle \frac{GM}{2r}}\right)^{1/2},`$ (A1)
$`\rho `$ $`\approx `$ $`\rho _{\infty }\left({\displaystyle \frac{GM}{2c_{\infty }^2}}\right)^{3/2}r^{-3/2}.`$ (A2)
Although these have the same scalings as equations (17) and (18), the asymptotic behavior of the perturbations is quite different. Indeed, for $`\gamma =5/3`$, eqs. (20) and (21) yield
$$\delta \rho \propto r^{l-1/2},\delta u\propto r^{l+3/2},\delta \psi \propto r^l.$$
(A3)
Thus all perturbations decrease as $`r`$ decreases, in keeping with the subsonic nature of the unperturbed flow. If a central multipole moment is present (see §3.4), the following asymptotics become dominant:
$$\delta \rho \propto r^{-l-3/2},\delta u\propto r^{-l+1/2},\delta \psi \propto r^{-l-1}.$$
(A4)
## Appendix B Perturbations in Collapsing Spherical Shells
Consider a spherical fluid shell of mass $`M_s`$ falling from infinity onto a central point mass $`M_c`$. The shell radius, $`R_0(t)`$, evolves in time according to
$$\frac{d^2R_0}{dt^2}=-\frac{GM}{R_0^2},\mathrm{with}M=M_c+\frac{M_s}{2},$$
(B1)
which gives
$$R_0(t)=\left(\frac{9GM}{2}\right)^{1/3}(-t)^{2/3},$$
(B2)
where we have set $`t=0`$ at the point of complete collapse. The surface density and radial velocity are given by
$$\mathrm{\Sigma }_0(t)=\frac{M_s}{4\pi R_0^2}\propto (-t)^{-4/3},V_0(t)=\frac{dR_0}{dt}\propto (-t)^{-1/3}.$$
(B3)
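It is straightforward to verify symbolically that (B2) solves (B1) with this normalization; a quick check of our own ($`t<0`$ before complete collapse):

```python
# Our check that R_0(t) of (B2) satisfies (B1).
import sympy as sp

t = sp.symbols('t', negative=True)      # t = 0 at complete collapse
G, M = sp.symbols('G M', positive=True)

R0 = (sp.Rational(9, 2) * G * M) ** sp.Rational(1, 3) * (-t) ** sp.Rational(2, 3)
print(sp.simplify(sp.diff(R0, t, 2) + G * M / R0**2))   # -> 0
```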
### B.1 Perturbation Equations
The dynamical variables for a perturbed shell are its surface density $`\mathrm{\Sigma }(\theta ,t)`$, radius $`R(\theta ,t)`$, radial velocity $`V_r(\theta ,t)=\dot{R}`$, where dot indicates $`\partial /\partial t`$, and tangential velocity $`𝐕_{\perp }(\theta ,t)`$. We assume spherical coordinates and axisymmetry so that there is no $`\varphi `$-dependence. The core mass, $`M_c`$, is free to move. We use $`Z_c(t)`$ to denote its displacement from the coordinate origin. Note that $`\mathrm{\Sigma },V_r,𝐕_{\perp }`$ are rigorously defined from the three-dimensional fluid variables via
$$\mathrm{\Sigma }\equiv \frac{1}{R^2}\int _{R_-}^{R_+}\rho r^2𝑑r,V_r\equiv \frac{1}{\mathrm{\Sigma }R^2}\int _{R_-}^{R_+}\rho r^2v_r𝑑r,𝐕_{\perp }\equiv \frac{1}{\mathrm{\Sigma }R^2}\int _{R_-}^{R_+}\rho r^2𝐯_{\perp }𝑑r,$$
(B4)
where the integration runs through the thickness of the shell. Using these definitions and the standard hydrodynamical equations, we derive the continuity and Euler equations for the shell:
$`{\displaystyle \frac{\partial \mathrm{\Sigma }}{\partial t}}=-{\displaystyle \frac{2\dot{R}}{R}}\mathrm{\Sigma }-\mathrm{\Sigma }\nabla _{\perp }\cdot 𝐕_{\perp },`$ (B5)
$`{\displaystyle \frac{\partial V_r}{\partial t}}=\ddot{R}=-{\displaystyle \frac{1}{2}}\left[\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}\right)_{R_+}+\left({\displaystyle \frac{\partial \mathrm{\Phi }}{\partial r}}\right)_{R_-}\right],`$ (B6)
$`{\displaystyle \frac{\partial 𝐕_{\perp }}{\partial t}}=-{\displaystyle \frac{\dot{R}𝐕_{\perp }}{R}}-{\displaystyle \frac{1}{2}}\left[\left(\nabla _{\perp }\mathrm{\Phi }\right)_{R_+}+\left(\nabla _{\perp }\mathrm{\Phi }\right)_{R_-}\right],`$ (B7)
where we have assumed that the nonspherical perturbation is small. Note that here $`\nabla _{\perp }\equiv (1/R)\widehat{\nabla }_{\perp }=(1/R)\left[\widehat{\theta }(\partial /\partial \theta )+(\widehat{\varphi }/\mathrm{sin}\theta )(\partial /\partial \varphi )\right]=(1/R)\widehat{\theta }(\partial /\partial \theta )`$. The gravitational potential $`\mathrm{\Phi }=\mathrm{\Phi }_c+\mathrm{\Phi }_s`$ includes contributions from the core, $`\mathrm{\Phi }_c`$, as given by
$$\mathrm{\Phi }_c(𝐫,t)=-\frac{GM_c}{|𝐫-Z_c\widehat{z}|}=-GM_c\underset{l}{\sum }\frac{Z_c^l}{r^{l+1}}\left(\frac{4\pi }{2l+1}\right)^{1/2}Y_{l0}(\theta ,\varphi ),$$
(B8)
for $`r>Z_c`$. The potential produced by the shell satisfies
$$\nabla ^2\mathrm{\Phi }_s(𝐫,t)=4\pi G\mathrm{\Sigma }(\theta ,t)\delta [r-R(\theta ,t)].$$
(B9)
Finally we need the equation of motion for the core mass:
$$\ddot{Z}_c=-\left(\frac{\partial \mathrm{\Phi }_s}{\partial r}\right)_{r=Z_c,\theta =0}.$$
(B10)
Consider linear perturbation modes associated with spherical harmonics $`Y_{l0}`$:
$`R(\theta ,t)=R_0(t)\left[1+a_l(t)Y_{l0}\right],`$ (B11)
$`𝐕_{\perp }(\theta ,t)=\dot{R}_0(t)b_l(t)\widehat{\nabla }_{\perp }Y_{l0},`$ (B12)
$`\mathrm{\Sigma }(\theta ,t)=\mathrm{\Sigma }_0(t)\left[1+c_l(t)Y_{l0}\right].`$ (B13)
Solving equation (B9), the shell potential to linear order in $`a_l,c_l`$ is
$$\mathrm{\Phi }_s(𝐫,t)=-\frac{GM_s}{r}-\frac{1}{2l+1}\left(\frac{GM_s}{R_0}\right)\left(\frac{R_0}{r}\right)^{l+1}\left[c_l+(l+2)a_l\right]Y_{l0},$$
(B14)
for $`r>R(\theta ,t)`$ (outside the shell), and
$$\mathrm{\Phi }_s(𝐫,t)=-\frac{GM_s}{R_0}-\frac{1}{2l+1}\left(\frac{GM_s}{R_0}\right)\left(\frac{r}{R_0}\right)^l\left[c_l-(l-1)a_l\right]Y_{l0},$$
(B15)
for $`r<R(\theta ,t)`$ (inside the shell). Using the unperturbed solution (eqs. \[B2\] and \[B3\]), the perturbation equations (B5)-(B7) for the shell reduce to
$`2t\dot{a}_l+t\dot{c}_l-{\displaystyle \frac{2}{3}}l(l+1)b_l=0,`$ (B16)
$`9t^2\ddot{a}_l+12t\dot{a}_l-2a_l=-{\displaystyle \frac{4M_c}{M}}\delta _{l1}z_c+{\displaystyle \frac{4M_c}{M}}a_l-{\displaystyle \frac{2M_s}{(2l+1)M}}\left[l(l-1)a_l+{\displaystyle \frac{1}{2}}c_l\right],`$ (B17)
$`3t\dot{b}_l+b_l={\displaystyle \frac{M_c}{M}}\delta _{l1}z_c+{\displaystyle \frac{M_s}{(2l+1)M}}\left(c_l+{\displaystyle \frac{3}{2}}a_l\right),`$ (B18)
where $`\delta _{l1}`$ is the Kronecker delta, and
$$z_c\equiv \left(\frac{4\pi }{3}\right)^{1/2}\left(\frac{Z_c}{R_0}\right).$$
(B19)
The equation of motion for the core mass, equation (B10), becomes
$$9t^2\ddot{z}_c+12t\dot{z}_c-2z_c=\frac{2M_s}{3M}c_1.$$
(B20)
Setting
$$a_l,b_l,c_l,z_c\propto (-t)^s,$$
(B21)
equations (B16)-(B20) reduce to a set of algebraic equations from which the eigenvalue, $`s`$, and the corresponding eigenmode can be determined. We discuss these eigenmodes below.
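The reduction is mechanical: with $`a_l,b_l,c_l,z_c\propto (-t)^s`$ one has $`t\dot{a}_l\to sa_l`$ and $`t^2\ddot{a}_l\to s(s-1)a_l`$, so (B16)-(B20) become a linear homogeneous system whose determinant must vanish. The sketch below is our own illustration for $`l=1`$ (the mass ratio is an arbitrary example, and $`\beta =0`$, i.e. no pressure):

```python
# Our sketch: eigenvalues s of the l = 1 shell modes from (B16)-(B20).
import sympy as sp

s, a, b, c, z = sp.symbols('s a b c z')
l = 1
Mc, Ms = sp.Rational(1, 2), sp.Rational(1, 2)   # example masses only
M = Mc + Ms / 2
D2 = 9 * s * (s - 1) + 12 * s - 2    # 9t^2 d^2/dt^2 + 12t d/dt - 2 on (-t)^s

eqs = [
    2 * s * a + s * c - sp.Rational(2, 3) * l * (l + 1) * b,               # (B16)
    D2 * a + 4 * Mc / M * (z - a)
        + 2 * Ms / ((2 * l + 1) * M) * (l * (l - 1) * a + c / 2),          # (B17)
    (3 * s + 1) * b - Mc / M * z
        - Ms / ((2 * l + 1) * M) * (c + sp.Rational(3, 2) * a),            # (B18)
    D2 * z - 2 * Ms / (3 * M) * c,                                         # (B20)
]
A = sp.Matrix([[sp.expand(e).coeff(v) for v in (a, b, c, z)] for e in eqs])
print(sp.Poly(A.det(), s).nroots())
# Six roots: the trivial s = 1/3 and s = -2/3, two stable modes, and the
# unstable (s < 0) "bending" and "Jeans" modes discussed below.
```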
### B.2 $`l=1`$ Modes
There are six roots for $`s`$. Two of these, $`s=1/3`$ and $`s=-2/3`$, are trivial modes which do not involve any surface density perturbation ($`c_1=0`$), and for which the core experiences no acceleration ($`\ddot{Z}_c=0`$). These correspond to the collapse of a uniform shell onto a displaced core; for $`\mathrm{\Delta }R=R-R_0(t)\propto (-t)`$, $`\mathrm{\Delta }R/R_0\propto (-t)^{1/3}`$, and for $`\mathrm{\Delta }R=`$ constant, $`\mathrm{\Delta }R/R_0\propto (-t)^{-2/3}`$.
Two of the remaining four roots correspond to stable modes ($`s>0`$). The two unstable modes are shown in Fig. 5. Both lead to the growth of the separation between the center of mass of the shell and the position of the core: (i) “Bending” Mode: the surface density perturbation grows because one side of the shell collapses faster than the other, and the geometric center of the shell moves in opposition to the motion of the central mass; when $`M_s\to 0`$, the mode has $`s=-1`$ and $`(a_1,b_1,c_1)=(1,0,-2)`$; (ii) “Jeans” Mode: the surface density perturbation grows due to the internal tangential flow in the shell, while the shell’s geometric center suffers little displacement with respect to the position of the central mass; when $`M_s\to 0`$, the mode has $`s=-1/3`$ and $`(a_1,b_1,c_1)=(0,1,-4)`$. The “bending” mode grows more rapidly, with $`s`$ ranging from $`-1`$ for $`M_s\to 0`$ to $`s=-(\sqrt{17}+1)/6=-0.854`$ for $`M_c\to 0`$.
### B.3 $`l=2`$ Modes
For $`l=2`$, the core mass experiences no acceleration. There are two types of unstable modes as shown in Fig. 6: (i) “Bending” Mode: the north pole and south pole of the shell collapse faster and have higher density than the equator, leading to a quadrupolar density perturbation; (ii) “Jeans” Mode: the density perturbation is mainly due to tangential fluid motion within the shell. The “bending” mode is the more rapidly growing mode, with $`s`$ in the range between $`-1`$ and $`-0.85`$.
### B.4 Large-$`l`$ Limit
Our results for general $`l`$ will not be presented here. But it is of interest to consider the $`l\gg 1`$ limit.
(i) “Bending” Mode: The dispersion relation of bending waves on a pressureless, nonrotating surface is $`\omega ^2=2\pi G\mathrm{\Sigma }_0|k|`$, with $`|k|\approx l/R_0`$. This gives
$$\omega =\pm \frac{s_l}{t},\mathrm{with}s_l\equiv \frac{1}{3}\left(\frac{lM_s}{M}\right)^{1/2}.$$
(B22)
The wave evolves as $`\mathrm{exp}(-i\int \omega 𝑑t)=(-t)^{\pm is_l}`$. To obtain the amplitude evolution, we use the conservation of wave action (energy per unit mass divided by frequency) $`\propto \omega |\mathrm{\Delta }R|^2`$, which gives $`|\mathrm{\Delta }R|\propto (-t)^{1/2}`$, and $`|\mathrm{\Delta }R|/R_0\propto (-t)^{-1/6}`$. Thus
$$\frac{\mathrm{\Delta }R}{R_0}\propto (-t)^s,s=-\frac{1}{6}\pm is_l.$$
(B23)
This agrees with our numerical results.
(ii) “Jeans” Mode: The dispersion relation of density waves is $`\omega ^2=-2\pi G\mathrm{\Sigma }_0|k|`$, which gives $`\omega =\pm is_l/t`$. Similar to (i), we find
$$\frac{\mathrm{\Delta }R}{R_0}\propto (-t)^{-1/6}\mathrm{exp}(-i\int \omega 𝑑t)\propto (-t)^s,s=-\frac{1}{6}\pm s_l.$$
(B24)
This also agrees with our numerical results.
### B.5 Effect of Internal Pressure
We can include the effect of fluid pressure by adding a term $`-(1/\mathrm{\Sigma })\nabla _{\perp }(\mathrm{\Sigma }C_s^2)`$ to the tangential Euler equation (B7); the radial equation (B6) is not affected by pressure. For definiteness, we parametrize the shell-averaged sound speed, $`C_s`$, by
$$C_s=\beta \left(\frac{GM}{R_0}\right)^{1/2},$$
(B25)
where $`\beta `$ is a constant. This amounts to adding a term $`-\beta ^2c_l`$ to the right-hand side of equation (B18).
Figures 5 and 6 show the effect of pressure on the eigenvalues of the $`l=1`$ and $`l=2`$ modes, respectively. We see that pressure always tends to stabilize the modes. However, for $`M_s/M_c`$ less than a few, the “bending” mode is only slightly affected.
In the large-$`l`$ limit, the “bending” mode is unaffected by the pressure, thus equation (B23) still applies. For the “Jeans” mode, the dispersion relation is $`\omega ^2=k^2C_s^2-2\pi G\mathrm{\Sigma }_0|k|`$, which gives
$$\omega ^2=\left(2\beta ^2l^2-\frac{lM_s}{M}\right)\frac{1}{9t^2}.$$
(B26)
For $`l\gg M_s/(2\beta ^2M)`$, we have $`\omega \approx \pm (\sqrt{2}/3)\beta l/t`$. Using a similar procedure as in §B.4, we obtain
$$\frac{\mathrm{\Delta }R}{R_0}\propto (-t)^s,\mathrm{with}s\approx -\frac{1}{6}\pm i\frac{\sqrt{2}}{3}\beta l.$$
(B27)
# Uncertainties in the measurements of the Inclusive Jet Cross Section at the Tevatron
## I Introduction
The inclusive jet cross section in $`\overline{p}p`$ collisions has recently been measured by the CDF and DØ collaborations. These measurements are compared with NLO perturbative QCD predictions. The experimental measurements have uncertainties that are smaller than the uncertainties of the theoretical predictions, $`\sim `$30$`\%`$.
The CDF measurement of the inclusive jet cross section showed an excess of jet production at high transverse energy ($`E_T`$), which could be caused by new physics such as quark compositeness, by inaccuracies in the parton distribution functions, or by inadequacies in the NLO QCD predictions. The theoretical predictions are in good agreement with the DØ measurement. Both experimental measurements are also in agreement with each other. Our ability to compare quantitatively the theoretical predictions and the measurements depends on a thorough understanding of the systematic uncertainties.
## II Systematic Uncertainties
The major components of the systematic uncertainties of the CDF measurement are depicted in Fig. 2. The dominant uncertainties are due to the jet energy scale correction, the resolution unsmearing, and the integrated luminosity. The uncertainties are divided up into different components (see Fig. 2). Each component is assumed to be $`100\%`$ correlated as a function of $`E_T`$ and independent of all other components.
Similarly, the five largest uncertainties in the DØ measurement are depicted in Fig. 2. The uncertainties are either $`100\%`$ correlated, partially correlated (the correlation lies between $`-100\%`$ and $`100\%`$), or uncorrelated as a function of $`E_T`$.
In general the uncertainties in the jet energy scale, jet energy resolutions, the luminosity, etc., are assumed to be Gaussian. Hence, the uncertainties of the cross section are asymmetric, i.e. the positive and negative errors on the cross section are different. This is a direct result of the steeply falling inclusive jet cross section.
The assumption of Gaussian uncertainties is not always a valid one. One of the major sources of uncertainty in the inclusive jet cross section is the integrated luminosity. The two experiments base their luminosity calculations on different measurements of the total $`\overline{p}p`$ cross section. CDF uses its own measurement while DØ uses a world average cross section based on the CDF and E710 measurements. This leads to a $`7\%`$ difference between the luminosities quoted by the two experiments. The assumption that the uncertainty due to the luminosity is Gaussian in nature is probably incorrect.
## III Quantitative Comparisons
DØ has made quantitative comparisons between theoretical predictions and their measurement base on a $`\chi ^2`$ test. The $`\chi ^2`$ is given by
$$\chi ^2=\underset{i,j}{\sum }\delta _iV_{ij}^{-1}\delta _j$$
(1)
where $`\delta _i`$ is the difference between the data and theory for a given $`E_T`$ bin, and $`V_{ij}`$ is element $`i,j`$ of the covariance matrix:
$$V_{ij}=\rho _{ij}\mathrm{\Delta }\sigma _i\mathrm{\Delta }\sigma _j.$$
(2)
where $`\mathrm{\Delta }\sigma `$ is the quadrature sum of the systematic uncertainty and the statistical error if $`i=j`$, and the systematic uncertainty alone if $`i\ne j`$. $`\rho _{ij}`$ is the correlation between the uncertainties of two $`E_T`$ bins.
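In code, building $`V`$ and evaluating equation (1) is direct; the numbers below are illustrative only, not DØ or CDF data:

```python
import numpy as np

delta = np.array([1.2, -0.5, 2.0])    # data - theory per E_T bin (toy values)
stat  = np.array([0.8, 0.6, 0.9])     # statistical errors
syst  = np.array([1.0, 1.1, 1.5])     # systematic uncertainties
rho   = np.array([[1.0, 0.9, 0.8],    # assumed bin-to-bin correlations
                  [0.9, 1.0, 0.9],
                  [0.8, 0.9, 1.0]])

# Eq. (2): correlated systematics everywhere; the uncorrelated statistical
# errors enter only on the diagonal, added in quadrature.
V = rho * np.outer(syst, syst) + np.diag(stat**2)

chi2 = delta @ np.linalg.solve(V, delta)   # Eq. (1)
print(chi2)
```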
The construction of the covariance matrix requires that the uncertainties follow a Gaussian distribution. Hence, using $`\chi ^2`$ to determine the probability that a theoretical prediction agrees with a measurement does not take advantage of all the information available. Additionally, it does not take into account boundary conditions (for example, you cannot fluctuate a cross section more than $`100\%`$ below its value).
Parton distribution fits are now using the measurements of the inclusive jet cross section to constrain the gluon distributions. If the information available in the inclusive jet measurements is to be used to best effect, then new methods must be developed to calculate the probability that a theoretical prediction agrees with the data.
## IV Conclusion
The treatment of the systematic uncertainties of the inclusive jet cross section in $`\overline{p}p`$ collisions has been discussed. The $`\chi ^2`$ values presented are based on approximations of the uncertainties and do not use all of the information available.
I thank my colleagues on the DØ and CDF experiments for their helpful comments, suggestions and discussions.
# Reducing vortex density in superconductors using the ratchet effect
A serious obstacle that impedes the application of low and high temperature superconductor (SC) devices is the presence of trapped flux. Flux lines or vortices are induced by fields as small as the Earth’s magnetic field. Once present, vortices dissipate energy and generate internal noise, limiting the operation of numerous superconducting devices. Methods used to overcome this difficulty include the pinning of vortices by the incorporation of impurities and defects, the construction of flux dams, slots and holes, and magnetic shields which block the penetration of new flux lines in the bulk of the SC or reduce the magnetic field in the immediate vicinity of the superconducting device. Naturally, the most desirable would be to remove the vortices from the bulk of the SC. There is no known phenomenon, however, that could form the basis for such a process. Here we show that the application of an ac current to a SC that is patterned with an asymmetric pinning potential can induce vortex motion whose direction is determined only by the asymmetry of the pattern. The mechanism responsible for this phenomenon is the so-called ratchet effect, and its working principle applies to both low and high temperature SCs. As a first step, here we demonstrate that with an appropriate choice of the pinning potential the ratchet effect can be used to remove vortices from low temperature SCs in the parameter range required for various applications.
Consider a type II superconductor film of the geometry shown in Fig. 1, placed in an external magnetic field $`H`$. The superconductor is patterned with a pinning potential $`U(x,y)=U(x)`$ which is periodic with period $`\mathrm{}`$ along the $`x`$ direction, has an asymmetric shape within one period, and is translationally invariant along the $`y`$ direction of the sample. The simplest example of an asymmetric periodic potential, obtained for example by varying the sample thickness, is the asymmetric sawtooth potential, shown in Fig. 1b. In the presence of a current with density $`𝐉`$ flowing along the $`y`$ axis the vortices move with the velocity
$$𝐯=(𝐟_\mathrm{L}+𝐟_{\mathrm{𝐯𝐯}}+𝐟_𝐮)/\eta ,$$
(1)
where $`𝐟_\mathrm{L}=(𝐉\times \widehat{𝐡})\mathrm{\Phi }_0d/c`$ is the Lorentz force moving the vortices transverse to the current, $`\widehat{𝐡}`$ is the unit vector pointing in the direction of the external magnetic field $`𝐇`$, $`𝐟_𝐮=-\frac{\mathrm{d}U}{\mathrm{d}x}\widehat{𝐱}`$ is the force generated by the periodic potential, $`𝐟_{\mathrm{𝐯𝐯}}`$ is the repulsive vortex-vortex interaction, $`\mathrm{\Phi }_0=2.07\times 10^{-7}`$ G cm<sup>2</sup> is the flux quantum, $`\eta `$ is the viscous drag coefficient, and $`d`$ is the length of the vortices (i.e. the thickness of the sample).
When a dc current flows along the positive $`y`$ direction, the Lorentz force moves the vortices along the positive $`x`$ direction with velocity $`v_+`$. Reversing the current reverses the direction of the vortex velocity, but its magnitude, $`|v_-|`$, due to the asymmetry of the potential, is different from $`v_+`$. For the sawtooth potential shown in Fig. 1b the vortex velocity is higher when the vortex is driven to the right than when it is driven to the left ($`v_+>|v_-|`$). As a consequence the application of an ac current (which is the consecutive application of direct and reverse currents with density $`J`$) results in a net velocity $`v=(v_++v_-)/2`$ to the right in Fig. 1b. This net velocity, induced by the combination of an asymmetric potential and an ac driving force, is called the ratchet velocity. The ratchet velocity for the case of low vortex density (when vortex-vortex interactions are neglected) can be calculated analytically. Denoting the period of the ac current by $`T`$, the ratchet velocity of the vortices in the $`T\to \mathrm{\infty }`$ limit is given by the expression
$$v=\{\begin{array}{cc}0\hfill & \text{if }f_\mathrm{L}<f_1\hfill \\ \frac{1}{2\eta }\frac{(f_\mathrm{L}+f_2)(f_\mathrm{L}-f_1)}{f_\mathrm{L}+f_2-f_1}\hfill & \text{if }f_1<f_\mathrm{L}<f_2\hfill \\ \frac{1}{\eta }\frac{f_1f_2(f_2-f_1)}{f_\mathrm{L}^2-(f_2-f_1)^2}\hfill & \text{if }f_2<f_\mathrm{L},\hfill \end{array}$$
(2)
where $`f_1=\mathrm{\Delta }U/\ell _1`$ and $`f_2=\mathrm{\Delta }U/\ell _2`$ are the magnitudes of the forces generated by the ratchet potential on the facets of length $`\ell _1`$ and $`\ell _2`$, respectively (see Fig. 1c), $`\mathrm{\Delta }U`$ is the energy difference between the maximum and the minimum of the potential, and $`f_\mathrm{L}=|𝐟_\mathrm{L}|=J\mathrm{\Phi }_0d/c`$.
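Equation (2) can be transcribed directly into a function; the sketch below is ours, with $`\eta =1`$ by default:

```python
def ratchet_velocity(f_L, f1, f2, eta=1.0):
    """Cycle-averaged drift of eq. (2), valid in the T -> infinity limit;
    f1 = DeltaU/l1 < f2 = DeltaU/l2 are the sawtooth facet forces."""
    if f_L < f1:                      # pinned during both half-cycles
        return 0.0
    if f_L < f2:                      # depinned only in the "easy" direction
        return (f_L + f2) * (f_L - f1) / (2 * eta * (f_L + f2 - f1))
    # depinned in both directions; the asymmetry still rectifies the motion
    return f1 * f2 * (f2 - f1) / (eta * (f_L**2 - (f2 - f1)**2))
```

One can check that this expression is continuous at $`f_\mathrm{L}=f_1`$ and $`f_\mathrm{L}=f_2`$ and decays as $`f_\mathrm{L}^{-2}`$ for strong drives, so the net drift is largest for driving forces comparable to the depinning forces.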
Since for high magnetic fields vortex-vortex interactions play an important role, we have performed molecular dynamics simulations to determine the ratchet velocity for a collection of vortices. As Fig. 2 demonstrates, we find that for low vortex densities the numerical results follow closely the analytical prediction (2), and the magnitude of the ratchet velocity decreases with increasing vortex density. The vortex densities used in the simulations correspond to an internal magnetic field of about 0.7, 35, and 70 G, covering a wide range of magnetic fields. A key question for applications is if the ratchet velocity (2) is large enough to induce observable vortex motion at experimentally relevant time scales. To address this issue in Fig. 2 we plotted $`v`$ for Nb, a typical low temperature SC used in a wide range of devices, for which the potential $`U(x)`$ is induced by thickness variations of the SC. The details of the model are described in the caption of Fig. 2. As the figure indicates, the maximum ratchet velocity (5.2 m/s) is high enough to move a vortex across the typical few micrometer wide sample in a few microseconds. Furthermore, increasing the vortex density by two orders of magnitude decreases the vortex velocity only by a factor of three.
Next we discuss a potentially rather useful application of the ratchet effect by demonstrating that it can be used to drive vortices out of a SC. Consider a SC film that is patterned with two arrays of the ratchet potential oriented in opposite directions, as shown in Fig. 3a. During the application of the ac current the asymmetry of the potential in the right half moves the vortices in that region to the right, while vortices in the left half move to the left. Thus the vortices drift towards the closest edge of the sample, decreasing the vortex density in the bulk of the SC. We performed numerical simulations to quantitatively characterize this effect, the details and the parameters being described in the caption of Fig. 3. In Fig. 3b we summarize the effectiveness of vortex removal by plotting the reduced vortex density inside the SC as a function of the Lorentz force $`f_\mathrm{L}`$ and the period $`T`$ of the current. One can see that there is a well defined region where the vortex density drops to zero inside the SC, indicating that the vortices are completely removed from the bulk of the SC. Outside this region we observe either a partial removal of the vortices or the ac current has no effect on the vortex density.
The $`(f_\mathrm{L},T)`$ diagram shown in Fig. 3b has three major regimes separated by two boundaries. The $`T_1=2\eta \ell _1/\{f_\mathrm{L}-[f_1+f_{\mathrm{in}}(w+\ell _2)]\}`$ phase boundary (here we assume $`d/2<\ell _2`$, and $`f_{\mathrm{in}}(x)`$ is defined in Fig. 3) provides the time needed to move the vortex all the way up on the $`\ell _1`$ long facet of the ratchet potential at the edge of the SC, i.e., to remove the vortex from the SC. When $`T<T_1`$ the vortices cannot exit the SC. The $`T_2=2\eta \left(\frac{d/2}{f_\mathrm{L}-[f_2+f_{\mathrm{edge}}]}+\frac{\ell _2-d/2}{f_\mathrm{L}-[f_2-f_{\mathrm{in}}(w+d/2)]}\right)`$ phase boundary (where $`f_{\mathrm{edge}}`$ is also defined in Fig. 3) is the time needed for a vortex to enter from the edge of the SC past the first potential maximum. Thus, when $`T<T_2`$ the vortices cannot overcome the edge of the potential barrier. These phase boundaries, calculated for non-interacting vortices, effectively determine the vortex density in the three phases. Vortex removal is most effective in regime I, where the vortices cannot move past the first potential barrier when they try to enter the SC, but they do get past the barriers opposing their exit from the SC. Thus the vortices are swept out of the SC by the ratchet effect, and no vortex can reenter, leading to a vortex density $`\rho =0`$. Indeed, we find that the numerical simulations indicate complete vortex removal in the majority of this phase (see the contour lines in Fig. 3b). An exception is the finger structure near the crossing of the $`T_1`$ and $`T_2`$ boundaries. For fields and periods within the first finger (lowest in Fig. 3b) the vortex follows a periodic orbit inside a single potential well. The subsequent fingers represent stable periodic orbits between two, three, or more wells, respectively. Since the vortices cannot escape from these orbits, they remain trapped inside the SC, increasing the vortex density within the fingers in the phase diagram. Fig. 3b shows the analytically calculated envelopes of the regions where such trapping occurs. An important feature of the finger structure is that stable periodic orbits, which prevent vortex removal, do not exist above the line $`T_{\mathrm{tip}}=2\eta \mathrm{\Delta }Uf_\mathrm{L}/[f_1f_2(f_2-f_1)]`$ connecting the finger tips. In regime II vortices can enter the SC, but the ratchet effect still sweeps them out; thus here we expect partial removal of the vortices, the final vortex density inside the SC being determined by the balance of the vortex nucleation rate at the edge of the sample (which depends on the surface properties of the SC) and the ratchet velocity moving them out. In regime III the vortices cannot leave the SC and new vortices cannot enter the system, thus the initial density inside the SC is unchanged throughout this phase ($`\rho =\rho _0`$).
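Written as functions, the two boundaries read as follows (our transcription; the values of $`f_{\mathrm{in}}`$ at the relevant positions and $`f_{\mathrm{edge}}`$ must be supplied from the sample geometry of Fig. 3):

```python
# Our transcription of the phase boundaries quoted above.
def T1(f_L, ell1, f1, f_in_at_w_plus_ell2, eta=1.0):
    """Minimum ac period for a vortex to climb the ell1 facet and exit."""
    return 2 * eta * ell1 / (f_L - (f1 + f_in_at_w_plus_ell2))

def T2(f_L, ell2, f2, f_edge, f_in_at_w_plus_d2, d, eta=1.0):
    """Minimum ac period for a vortex entering from the edge to clear the
    first potential maximum."""
    return 2 * eta * ((d / 2) / (f_L - (f2 + f_edge))
                      + (ell2 - d / 2) / (f_L - (f2 - f_in_at_w_plus_d2)))
```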
Since the forces $`f_{\mathrm{in}}(x)`$ and $`f_{\mathrm{edge}}`$ depend on $`H`$, the position of the phase boundaries $`T_1`$ and $`T_2`$ also depends on the external magnetic field. In particular, there exists a critical field $`H^{*}`$, such that for $`H>H^{*}`$ regime I, where vortex removal is complete, disappears, but regime II with partial vortex removal does survive. We find that for Nb films of the geometry described in Fig. 3 we have $`H^{*}\sim 10`$ G. However, since $`H^{*}`$ is a consequence of the geometric barrier, its value can be modified by changing the aspect ratio of the film. Furthermore, for superconductors with elliptic cross section the geometric barrier can be eliminated, thus phase I with complete vortex removal could be extended to high magnetic fields as well.
Vortex removal is important for numerous SC applications and can improve the functioning of several devices. An immediate application of the proposed method would be improving the operation of superconducting quantum interference devices (SQUIDs), used as sensors in a wide assortment of scientific instruments. A long-standing issue in the performance of SQUIDs is $`1/f`$ noise, arising from the activated hopping of trapped vortices. Reducing the vortex density in these superconductors is expected to extend the operation regime of these devices to lower frequencies.
Although over the past few years several applications of the ratchet effect have been proposed, such as separating particles, designing molecular motors, smoothing surfaces, or rectifying the phase across a SQUID, our proposal solves a well known acute problem of condensed matter physics, by removing vortices from a SC. In contrast with most previous applications, which require the presence of thermal noise, our model is completely deterministic. Indeed, in Nb the variation in the pinning potential is $`\mathrm{\Delta }U\sim 25`$ eV, which is more than 10<sup>4</sup> times larger than $`k_\mathrm{B}T\sim 0.8`$ meV at $`T_c=9.26`$ K, thus rendering thermal fluctuations irrelevant. On the practical side, a particularly attractive feature of the proposed method is that it does not require sophisticated material processing to make it work: First, it requires standard few-micron scale patterning techniques (the micrometer tooth size was chosen so that a few teeth fit on a typical SQUID, but a larger feature size will also function if the period $`T`$ is increased proportionally). Second, the application of an ac current with appropriate period and intensity is rather easy to achieve. For applications where an ac current is not desired, the vortices can be flushed out before the normal operation of the device. On the other hand, if the superconducting device is driven by an ac current (e.g. rf SQUIDs, ac magnets, or wires carrying ac current), the elimination of the vortices will take place continuously during the operation of the device. The analytically predicted phase boundaries, whose position is determined by the geometry of the patterning, provide a useful tool for designing the appropriate patterning to obtain the lowest possible vortex density for current and frequency ranges desired for specific applications. Finally, although here we limited ourselves to low temperature SCs, the working principle of the ratchet effect applies to high temperature superconductors as well.
Acknowledgements. We wish to thank D. J. Bishop, S. N. Coppersmith, D. Grier, H. Jeong, A. Koshelev and S. T. Ruggiero for very useful discussions and help during the preparation of the manuscript. This research was supported by NSF Career Award DMR-9710998.
Correspondence and requests for material should be addressed to A.-L.B. (e-mail:alb@nd.edu).
# GRB 970228 Revisited: Evidence for a Supernova in the Light Curve and Late Spectral Energy Distribution of the Afterglow
## 1 Introduction
The discovery of both an X-ray (Costa et al. 1997a) and optical (Groot et al. 1997) afterglow to GRB 970228 revolutionized the gamma-ray burst (GRB) field. The mean temporal and spectral properties of this afterglow appeared to be consistent with the relativistic fireball model (Tavani 1997; Waxman 1997; Wijers, Rees, & Mészáros 1997; Reichart 1997; Sahu et al. 1997; Katz & Piran 1997), which predicted that GRBs would have afterglows at these and radio wavelengths (Paczyński & Rhoads 1993; Katz 1994; Mészáros & Rees 1997). However, now that the photometry has been finalized, and perhaps more importantly, now that nearly a dozen optical afterglows to GRBs are available for comparison, the afterglow of GRB 970228 is perhaps the most difficult of the observed afterglows to reconcile with the fireball model (see §2).
The most problematic feature of this afterglow for the fireball model is its extreme reddening with time: V - I changes from $`0.7\pm 0.2`$ mag 21 hours after the burst to $`2.3\pm 0.2`$ mag 26 days after the burst and $`2.2\pm 0.3`$ mag 38 days after the burst. Bloom et al. (1999) found that the afterglow of GRB 980326 exhibits a similar behavior, and argued that this is due to a supernova that overtook the light curve of the afterglow about one week after the burst.
Unfortunately, late afterglow data is sparse in the case of GRB 980326, and its redshift is not yet known. Data is more plentiful, both temporally and spectrally, in the case of GRB 970228, and its redshift, $`z=0.695\pm 0.002`$ (Djorgovski et al. 1999), is known. Therefore, in this Letter, we build on the groundbreaking work of Bloom et al. (1999), and investigate the suggestion of Dar (1999) that a supernova dominated the late afterglow of GRB 970228. In §3, we argue that the R-band light curve of this afterglow favors this hypothesis. In §4, we argue that the spectral energy distribution of this afterglow at late times also favors this hypothesis. We draw conclusions in §5.
## 2 Problems with the Relativistic Fireball Model Interpretation of the Afterglow of GRB 970228
As mentioned in §1, the most problematic feature of the afterglow of GRB 970228 for the relativistic fireball model is the afterglow’s extreme reddening with time. The fireball model can accommodate mild reddening; e.g., the passage of either the synchrotron or cooling breaks of this model’s spectrum (see e.g., Sari, Piran, & Narayan 1998) through these bands would increase the V - I color of the afterglow by $`\sim 0.3`$ mag. However, in the case of GRB 970228, V - I increased by $`\sim 1.6`$ mag.
To better quantify how this afterglow changed from early to late times, we have collected from the literature the published optical and near-infrared (NIR) photometry of the afterglow, which we list in Table 1, as well as that of the underlying host galaxy, which we list in Table 2. In Figure 1, we plot the BVRI photometry of the afterglow. Excluding the 1.5-m Bologna University Telescope (BUT) photometry at $`\sim `$17 hours after the burst (see Guarnieri et al. 1997 for a discussion of these data), clear and distinct trends can be seen in the early ($`t<5`$ days after the burst) and late ($`t>25`$ days after the burst) subsets of these data: the late afterglow appears to have faded more slowly than the early afterglow, and as already mentioned, the late afterglow appears to have been considerably redder than the early afterglow. The intermediate afterglow (5 days $`<t<25`$ days) appears to have been transitional between these two limiting behaviors.
To quantify these limiting behaviors, we have fitted the functional form $`F_\nu =F_0\nu ^at^b`$ to these two subsets of the data. For the early afterglow, we find that $`a_{BVRI}=-0.61\pm 0.32`$ and $`b=-1.58\pm 0.28`$ ($`\chi ^2=1.609`$, $`\nu =2`$); for the late afterglow, we find that $`a_{VRI}=-4.31\pm 0.30`$ and $`b=-0.89\pm 0.12`$ ($`\chi ^2=0.153`$, $`\nu =3`$). In the case of the early afterglow, the afterglow dominated the host galaxy (see Tables 1 and 2); consequently, we could ignore the contribution of the host galaxy in this fit. In the case of the late afterglow, all of the measurements, except for the Keck-II 10-m R-band measurement of Metzger et al. (1997b), were made from HST images; in these cases, the angular resolution was sufficient to separate the afterglow from the host galaxy. In the case of the R-band measurement, we instead fitted the functional form $`F_0\nu ^at^b+F_R`$, where $`F_R`$ is the flux density of the host galaxy in the R band. Employing Bayesian inference, we adopted a prior probability distribution for $`F_R`$, given by the HST/STIS R-band measurement of Castander & Lamb (1999b; see also Fruchter et al. 1999).
To confirm that the afterglow is not consistent with a single power-law spectrum and a single power-law fading, we fitted the above functional forms to the entire BVRI data set, excluding the BUT 1.5-m photometry and the 2.5-m Isaac Newton Telescope (INT) B-band measurement at $`\sim `$10 days, which may be dominated by the host galaxy (no B-band measurement of the host galaxy is available for comparison). We find that $`\chi ^2=78.487`$ for $`\nu =11`$ degrees of freedom. If we restrict ourselves to the data set fitted to above, we find that $`\chi ^2=67.844`$ for $`\nu =8`$. Comparing the model fitted above with this model, we find that $`\mathrm{\Delta }\chi ^2=67.844-1.609-0.153=66.082`$ for $`\mathrm{\Delta }\nu =8-3-2=3`$; using the $`\mathrm{\Delta }\chi ^2`$ test, this implies that the above “two limiting power-law behaviors” model is favored over this “single power-law behavior” model at the $`10^{-13.5}`$ confidence level.
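The fit itself is linear in log space. A schematic version is given below (our illustration only: the actual fit weights each point by its photometric error and, for the Keck R-band point, includes the host term $`F_R`$ with its prior):

```python
# Our schematic of the F_nu = F_0 nu^a t^b fit (unweighted, toy inputs).
import numpy as np

# log10(nu/Hz), log10(t/days), log10(F_nu) for each measurement (toy values)
log_nu = np.array([14.83, 14.74, 14.64, 14.56, 14.74, 14.56])
log_t  = np.array([0.00, 0.00, 0.00, 0.00, 0.55, 0.55])
log_F  = np.array([-28.4, -28.3, -28.2, -28.1, -29.1, -28.9])

# log F = log F0 + a log nu + b log t  (ordinary least squares)
A = np.column_stack([np.ones_like(log_F), log_nu, log_t])
(log_F0, a_fit, b_fit), *_ = np.linalg.lstsq(A, log_F, rcond=None)
print(a_fit, b_fit)   # spectral and temporal indices
```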
The temporal index of the early afterglow, $`b=-1.58\pm 0.28`$, is similar to that found by Costa et al. (1997b) in the 2 - 10 keV band ($`b=-1.33_{-0.13}^{+0.11}`$, 8 hours $`<t<4`$ days), and to that found by Frontera et al. 1998 in the 0.1 - 2.4 keV band ($`b=-1.50_{-0.23}^{+0.35}`$, 8 hours $`<t<13`$ days). Furthermore, the spectral index of the early afterglow, $`a_{BVRI}=-0.61\pm 0.32`$, is consistent with the optical to X-ray spectral index of the afterglow at these early times: $`a_{OX}\approx -0.7`$. Finally, both the spectral and temporal indices of the early afterglow are consistent (1) with the spectral and temporal indices of other afterglows, and (2) with the general expectations of the fireball model.
However, the spectral index of the late afterglow, $`a_{VRI}=-4.31\pm 0.30`$, is not consistent with the general expectations of the fireball model. In principle, one could achieve this extremely negative spectral index if the afterglow were heavily extincted, either by our galaxy, by the host galaxy, or by the immediate vicinity of the burst (see e.g., Reichart 1998). However, if that were the case, either the intrinsic spectral index of the early afterglow would have to be extremely positive, which would also defy the general expectations of the fireball model, or the level of extinction would have to increase with time, which is the opposite of what one expects: the afterglow may destroy dust, but it will not create it. Consequently, although the early afterglow appears to be consistent with the fireball model, the late afterglow does not. Likewise, the late afterglow is equally unlikely to be due to a refreshed shock, which must also meet the general expectations of the fireball model. However, the extremely negative spectral index of the late afterglow is consistent with the general expectations of a supernova at the redshift of this burst. We return to this claim in §4.
## 3 Evidence for a Supernova in the R-band Light Curve of the Afterglow of GRB 970228
In this section we investigate whether the R-band light curve of the afterglow of GRB 970228 is consistent with a supernova overtaking the light curve of the early afterglow at late times; we investigate whether the spectral energy distribution of the late afterglow is consistent with this hypothesis in §4.
First, we construct the R-band light curve. We do this by scaling all non-R-band measurements to the R-band using the fitted functional forms of §2; we have again excluded the BUT 1.5-m photometry and the INT 2.5-m B-band measurement (see §2). In the top panel of Figure 2, we plot the ground-based photometry of the afterglow plus the host galaxy, as well as the best-fit R-band light curve of the early afterglow from §2 plus the best-fit R-band flux density of the host galaxy from Table 2. In the bottom panel of Figure 2, we plot the space-based photometry of the afterglow, as well as the best-fit R-band light curve of the early afterglow.
Next, we construct an example supernova light curve. We begin with the U-band light curve of the Type Ib/c supernova SN 1998bw, which may or may not be related to GRB 980425.<sup>1</sup>In addition to SN 1998bw, the fading X-ray source 1SAX J1935.3-5252 was found in the error circle of GRB 980425 (Pian et al. 1998a,b). Since 1SAX J1935.3-5252 is typical of nearly every other GRB afterglow observed to date, and since SN 1998bw, and in particular, its low redshift of $`z=0.0085`$ (Tinney et al. 1998), are not typical of any other GRB afterglow observed to date, the association between GRB 980425 and SN 1998bw is uncertain (see also Graziani, Lamb, & Marion 1999). However, for the purposes of this Letter, any Type Ib - Ic supernova light curve is sufficient. We have chosen SN 1998bw primarily because it is well-sampled, and only secondarily because of its possible association with GRB 980425. Galama et al. (1998) observed SN 1998bw in the U band between 9 and 56 days after this burst. At earlier times we extrapolate to the U band from the B band light curve of Galama et al. (1998); at later times we extrapolate to the U band from the B band light curve of McKenzie & Schaefer (1999). Next, we transform this U-band light curve to the redshift of GRB 970228, z = 0.695, which involves (1) shifting it to the R band, (2) stretching it in time, and (3) dimming it. Here, we have set $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$; other cosmologies yield similar results. Finally, we have corrected the data for differences in Galactic extinction between the SN 1998bw line of sight in the U band ($`A_U=0.382`$ mag), and the GRB 970228 afterglow line of sight in the R band ($`A_R=0.959`$ mag), using the dust maps of Schlegel, Finkbeiner, & Davis (1998)<sup>2</sup>Software and data available at http://astro.berkeley.edu/davis/dust/index.html. in the case of the SN 1998bw line of sight, Castander & Lamb (1999a; however, see also Fruchter et al. 1999) in the case of the GRB 970228 line of sight, and the Galactic extinction curve of Cardelli, Clayton, & Mathis (1989) for $`R_V=3.1`$; no attempt has been made to quantify the differences in extinction between these sources’ host galaxies and immediate vicinities along their respective lines of sight. This redshifted, Galactic extinction-corrected, example supernova light curve is also plotted in Figure 2.
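The redshifting step can be sketched as follows (our illustration using astropy; the value of $`H_0`$ is an assumption, and the U $`\to `$ R band shift and the Galactic extinction corrections are applied separately, as described above):

```python
# Our sketch of steps (2) and (3): time stretch and cosmological dimming.
import numpy as np
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=65, Om0=0.3, Ode0=0.7)   # Omega_m = 0.3, Omega_Lambda = 0.7
z_sn, z_grb = 0.0085, 0.695

def shift_to_grb_frame(t_days, mag):
    """Stretch a SN 1998bw light curve in time and dim it to z = 0.695."""
    t_out = t_days * (1 + z_grb) / (1 + z_sn)       # cosmological time dilation
    dm = 5 * np.log10(cosmo.luminosity_distance(z_grb).to_value("Mpc")
                      / cosmo.luminosity_distance(z_sn).to_value("Mpc"))
    return t_out, mag + dm   # bandpass terms handled by the U -> R band choice
```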
Despite all of the uncertainties in this example supernova light curve, including the uncertainty in its peak luminosity, which can vary by factors of a few for Type Ib - Ic supernovae, we find the summed early afterglow plus host galaxy plus supernova light curve (Figure 2, top panel) and the summed early afterglow plus supernova light curve (Figure 2, bottom panel) to be remarkably consistent with the photometry.<sup>3</sup>The HST/STIS observation at 189 days after the burst was done in Clear Aperture mode (Fruchter et al. 1999); consequently, its measured value depends upon an assumption about the spectrum of the afterglow at this late time. In Figures 1 and 2, we assume that the spectrum of the late afterglow did not evolve with time; if, for example, the late afterglow grew bluer with time, then Figures 1 and 2 overestimate the flux density of the afterglow at this late time.
## 4 Evidence for a Supernova in the Spectral Energy Distribution of the Late Afterglow of GRB 970228
We have shown that the R-band light curve of the late afterglow of GRB 970228 is consistent with what one would expect from a supernova at the redshift of the burst. In this section, we go a step further and show that the spectral energy distribution of the late afterglow is also consistent with that of a supernova at this redshift, and that this consistency spans five spectral bands.
First, we construct the spectral energy distribution of the late afterglow. Between $`\sim `$30 and $`\sim `$38 days after the burst, Sahu et al. (1997) observed the afterglow in the V and I bands with HST/WFPC2, Metzger et al. (1997b) observed the afterglow plus host galaxy in the R band with the Keck-II 10-m, and Soifer et al. (1997) observed the afterglow plus host galaxy in the J and K bands with the Keck-I 10-m. From these R- and K-band measurements of the afterglow plus host galaxy, we subtract the contribution of the host galaxy (see Table 2); as no J-band measurement of the host galaxy is available, we replace this J-band measurement of the afterglow plus host galaxy with an upper limit. Next, we scale these afterglow measurements to a common time, $`\sim `$38 days after the burst, using the fitted functional form of the late afterglow from §2; as these measurements are nearly coincident in $`\mathrm{log}t`$, this adjustment is minor. Finally, we correct these afterglow measurements for Galactic extinction along the line of sight (see §3). We plot the resulting Galactic extinction-corrected, five-band spectral energy distribution of the late afterglow in Figure 3.
For comparison, we also plot in Figure 3 the Galactic extinction-corrected, redshifted spectral energy distribution of SN 1998bw (see §3), which Galama et al. (1998) observed in the U, B, V, R, and I bands $`22\approx 38(1+z_{SN1998bw})/(1+z_{GRB970228})`$ days after GRB 980425. When transformed to the redshift of GRB 970228, these measurements span the R through J bands.
To the limit of the uncertainties in the late afterglow spectral energy distribution measurements, and to the limit of the smaller wavelength range of the redshifted SN 1998bw spectral energy distribution, these distributions appear to be nearly identical. Furthermore, the relativistic fireball model is not expected to produce a spectral break of this nature, especially so long after the burst. Consequently, we conclude that the late afterglow of GRB 970228 was most likely dominated by a supernova.
## 5 Conclusions
In conclusion, while the early afterglow of GRB 970228 appears to have been consistent with the general expectations of the relativistic fireball model, the late afterglow does not appear to have been consistent with this model. Nor was it similar to any of the other observed afterglows, with the sole exception of the afterglow of GRB 980326, which Bloom et al. (1999) attribute to a supernova that overtook the light curve. We find that this was most likely the case with GRB 970228. Not only is the light curve of the afterglow consistent with this hypothesis, but more convincingly, the spectral energy distribution of the late afterglow is also consistent with this hypothesis, and this consistency spans five spectral bands. Furthermore, the identification of the late afterglow of GRB 970228 with a supernova strengthens the case that other GRBs, in particular, GRB 980326 and GRB 980425, are indeed related to supernovae. However, in the case of GRB 980425, many reservations remain (§3).
This research was supported in part by NASA grant NAG5-2868 and NASA contract NASW-4690. D. E. R. thanks D. Q. Lamb for comments that greatly improved this Letter, as well as for his general enthusiasm for this research.
|
no-problem/9906/astro-ph9906148.html
|
ar5iv
|
text
|
# A Search for Photometric Rotation Periods in Low-Mass Stars and Brown Dwarfs in the Pleiades
## 1 Introduction
During the last two decades, there have been many extensive studies of the rotation rates of G and K stars in open clusters (Stauffer et al. 1984, 1985, 1989; Stauffer & Hartmann (1987); Stauffer et al. (1991); Soderblom et al. (1993); Krishnamurthi et al. (1998); Queloz et al. (1998)), accompanied by considerable progress in understanding the mechanisms and mass dependence of angular momentum loss (see Pinsonneault (1997) for a review). Briefly, most stars arrive on the main sequence as relatively slow rotators, but about 20 - 25% arrive with rapid rotation ($`20<v\mathrm{sin}i<200`$ km s<sup>-1</sup>). The number of rapid rotators and the maximum observed rotation rates decline with increasing cluster age, showing that the time scale for these rapid rotators to lose most of their angular momentum ranges from tens to hundreds of millions of years, and that this time scale becomes longer with decreasing stellar mass. In order to be consistent with these observations, theoretical evolutionary models require one or more of the following features: (a) core-envelope rotational decoupling (Endal & Sofia (1981); Pinsonneault et al. (1990)); (b) coupling of the stellar rotation to that of its circumstellar disk during pre-main sequence evolution (Königl (1991); Li & Collier Cameron (1993)); and (c) saturation of the angular momentum loss rate above a specified rotation rate (Keppens et al. (1995)).
Though the general trends are clear at higher masses, there is still not much information on the rotation rates of main sequence stars of low mass (here $`M\lesssim 0.4M_{\odot }`$). Jones et al. (1997), Stauffer et al. (1997), and Oppenheimer et al. (1997) have determined rotational velocities ($`v\mathrm{sin}i`$) for a small sample of stars in the Pleiades and Hyades with masses down to $`0.1M_{\odot }`$, while Basri & Marcy (1995) and Delfosse et al. (1998) have similarly obtained rotational velocities for low mass field stars. All of these studies indicate that the spindown time scales are longer for lower mass stars, with the time scale near 0.1 $`M_{\odot }`$ being several billion years. This facet of the observational database is best explained by assuming that the critical rotation rate for saturation of the angular momentum loss rate is a function of mass (Krishnamurthi et al. (1997); Bouvier et al. (1997)).
Obtaining more rotational data in the low-mass regime would help to constrain models of angular momentum evolution at the bottom of the main sequence. It would also be of interest to extend the samples into the substellar regime (“brown dwarfs”) to explore whether current formulations of angular momentum loss for stars are applicable there as well. Brown dwarfs are very low mass objects that do not burn hydrogen, and their properties are not very well characterized. Determining rotation rates for low-mass stars and substellar objects using high-resolution spectra is extremely challenging as they are quite faint ($`I>17`$) and their spectra (late M-type) are complex. Furthermore, stars at and below the hydrogen-burning mass limit have quite small radii; the smallest rotation rates that are routinely determined spectroscopically (say $`5`$ km s<sup>-1</sup>) correspond to short rotation periods, about 0.5 day. Therefore the discovery of low-mass stars and brown dwarfs with periods in excess of 0.5 – 1 day may best be accomplished through photometry, which can detect brightness modulation from starspots. A determination of rotation periods by this method for brown dwarfs would indicate the presence of spots and hence magnetic activity on these sub-stellar objects, since the presence of starspots is a manifestation of magnetic fields.
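To make these numbers concrete, a rough estimate (assuming $`\mathrm{sin}i=1`$ and a radius $`R\approx 0.1R_{\odot }`$, typical near the hydrogen-burning limit) gives

$$P=\frac{2\pi R}{v\mathrm{sin}i}\approx \frac{2\pi \times 0.1\times 7\times 10^5\text{ km}}{5\text{ km s}^{-1}}\approx 9\times 10^4\text{ s}\approx 1\text{ d},$$

which is where the 0.5 – 1 day figure quoted above comes from.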
Motivated by these considerations, we undertook a pilot study to derive rotation periods from photometric modulation of low-mass members of the Pleiades. Section 2 describes the survey and data analysis techniques, while $`\mathrm{\S }3`$ presents our results and a discussion.
## 2 Observations and Analysis
We selected eight low-mass stars and brown dwarf candidates in the Pleiades; a list of these stars and a compilation of some of their properties is in Table 1. Columns 2–3 of that table show the J2000.0 positions for the targets, followed in columns 4–7, respectively, by their spectral classification and available photometry. The last two columns give a mass estimate, as derived from comparison to theoretical isochrones, and references for the various data about each star. The full sample ranges from $`M\sim 0.4M_{\odot }`$ down to $`M\sim 0.05M_{\odot }`$.
The two Pleiades stars with $`M\sim 0.4M_{\odot }`$, HHJ 409 and HHJ 375 (Hambly et al. (1993)), were selected for monitoring because they are known to have quite different spectroscopic rotation rates of $`v\mathrm{sin}i`$ = 69 km s<sup>-1</sup> and 10 km s<sup>-1</sup>, respectively (Stauffer et al. (1999)). The rest of the stars were selected from deep imaging surveys which have produced candidate Pleiades brown dwarfs: Teide 1 (Rebolo et al. (1995)), Teide 2 (Martín et al. (1998)), Roque 11 and Roque 13 (Zapatero Osorio et al. (1997)), and CFHT-PL-8 and CFHT-PL-12 (Bouvier et al. (1998)).
A number of the fainter stars have spectra which show measurable equivalent widths of Li I $`\lambda 6708`$ Å. As originally proposed by Magazzù et al. (1993), stars with masses below about 0.065 $`M_{\odot }`$ never develop central core temperatures sufficient to destroy lithium ($`3\times 10^6`$ K). At higher masses, the time it takes the core to achieve this temperature is a sensitive function of mass. Because low-mass stars are fully convective, the surface Li is rapidly depleted once the core reaches a high enough temperature. For clusters of the age of the Pleiades (100 – 120 Myr), the mass above which Li is depleted is about 0.08 $`M_{\odot }`$ (Stauffer et al. (1998) and references therein); consequently, finding a Li abundance which does not show significant depletion would show that a given star in the Pleiades is a brown dwarf. The stars with spectroscopic confirmation of their status as brown dwarfs are Teide 1 (Rebolo et al. (1996)), Teide 2 (Martín et al. (1998)), Roque 13 and CFHT-Pl 12 (Stauffer et al. (1998)).
We also monitored the very low-mass object 2MASS J0149090+295613 (hereafter referred to as 2MASSJ0149), which has exhibited a significant flare in H$`\alpha `$ (Liebert et al. (1999)). As this flare can be interpreted as evidence for magnetic activity, we included this object in our sample to determine if we could observe spot modulation, which could be another indicator (or confirmation) of magnetic activity.
We used the 2.4-m Hiltner telescope of the MDM Observatory on Kitt Peak to obtain $`I_C`$ (Cousins $`I`$ band) photometry of our targets from UT 11–23 November 1998. The detector was the “Wilbur” CCD, a thick $`2048\times 2048`$ Loral chip which was operated at a nominal gain of 2.3 electrons/DN and a readout noise of 4.7 electrons r.m.s. The readout was binned by a factor of two in rows and columns to yield a pixel scale of 0.393 arcseconds/pixel. In addition, we obtained a few observations of HHJ 409 in $`V`$ to measure the relative photometric amplitude at two wavelengths. Most of the run was photometric, with median seeing about $`0.8^{\prime \prime }`$; on a few nights we observed a large number of standards from Landolt (1992) for calibration, though in this paper we will only discuss amplitude variations and will not present calibrated photometry.
The EXPORT version of Sun/IRAF V2.11.1 for SunOS 4 and Solaris 2.6 was used for the initial processing of data. The raw images were corrected for overscan bias, zero-exposure level, and were flattened by twilight-sky flats. Inspection of the flats from several nights and of the sky level on deep exposures showed that the flattening process was accurate to 0.2% over most of the frames, increasing to about 1% at the extreme edges of the frames.
The observing strategy was to make repeated visits to each target and obtain 2–5 exposures in quick succession; exposures ranged from 5 to 10 sec for the bright stars HHJ 375 and HHJ 409, and were as long as 300 sec for the faintest stars in the sample. Because the target fields were uncrowded, aperture photometry (rather than PSF-fitting) was adequate. Magnitudes were obtained for each unsaturated star on each frame in an aperture of radius 3 times the stellar FWHM; the median sky was measured in an annulus with radii of 5 and 7 times the FWHM.
The assembly of the magnitudes into a time series was accomplished in two steps. In the first, the magnitudes from the repeated exposures at each visit were averaged after rescaling by an additive constant. This offset was determined by shifting the stellar coordinates to a common system, then determining the mean weighted magnitude difference between photometry from pairs of frames, using all unsaturated stars common to each frame. The weights were assigned as $`1/\sigma ^2`$, where $`\sigma `$ was the magnitude error reported from the aperture photometry software. Because there were always a number of well-exposed stars on each frame, the magnitude offset could be determined very precisely (error typically $`0.004`$ mag). The target star was included in the determination of the magnitude offset at this step because it would not have varied in the $`30`$ min spent at each visit. The average HJD was taken as the time point for each visit.
In the second step, the average photometry for the many visits to each star was similarly scaled using mean weighted magnitude differences, but this time the target star was not included in the determination of the offset. Errors in the average magnitude were computed in both steps by taking the larger of the actual weighted error in the mean, or the expected error for Gaussian statistics.
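A minimal sketch of the offset determination, assuming (one plausible reading; the text specifies the $`1/\sigma ^2`$ weights but not this detail) that the two frames' per-star errors are combined in quadrature:

```python
import numpy as np

# Weighted mean magnitude difference between two frames, over the stars
# they have in common; weights are 1/sigma^2 as in the text.
def frame_offset(m1, s1, m2, s2):
    d = np.asarray(m2) - np.asarray(m1)
    w = 1.0 / (np.asarray(s1) ** 2 + np.asarray(s2) ** 2)
    offset = np.sum(w * d) / np.sum(w)
    error = np.sqrt(1.0 / np.sum(w))   # formal error in the weighted mean
    return offset, error
```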
Periods were determined using the Scargle (1982) implementation of the PERIODOGRAM algorithm for unevenly spaced data, which also yields the “false alarm probability” (FAP) that random fluctuations would produce the derived period. We also confirmed the derived periods (or absence of a detected signal) using the algorithm described in Roberts et al. (1987) (kindly supplied by D. H. Roberts), which takes into account aliasing introduced by the sampling frequency. Both algorithms yielded the same set of stars with definite detections and the resulting periods were the same.
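A modern equivalent of this computation (not the original code of Scargle or Roberts et al.) is available in SciPy; the sketch below recovers a period from a synthetic, unevenly sampled light curve.

```python
import numpy as np
from scipy.signal import lombscargle

# Scargle-type periodogram of an unevenly sampled light curve
# (t in days, dm = differential magnitudes); the data are synthetic.
t = np.sort(np.random.uniform(0.0, 12.0, 80))
dm = 0.02 * np.sin(2 * np.pi * t / 2.6) + np.random.normal(0, 0.005, t.size)

periods = np.linspace(0.3, 10.0, 5000)
omega = 2 * np.pi / periods            # lombscargle expects angular frequency
power = lombscargle(t, dm - dm.mean(), omega)
print("best period: %.2f d" % periods[np.argmax(power)])
```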
## 3 Results and Discussion
The results of our observational program are summarized in Table 2. The first column lists the star names as in Table 1, while columns 2–4 show, respectively, the number of visits to the star, the r.m.s. scatter about the mean magnitude, and the average error in a single measurement. The last two columns display the derived period in days for the star and the false-alarm probability for the stars with definite periods.
As discussed in Section 2, the targets in our sample were selected to have a mass distribution which crossed the H-fusion boundary. We were able to measure definite rotation periods for two of the four low-mass stars (hydrogen burning) in our sample, namely HHJ 409 and CFHT-PL-8. The photometry for these two stars, phased to the derived period, is shown in Figures 1 and 2. Also shown on each figure is the photometry for a nearby star of similar brightness phased to the period of the target star; the scatter in the photometry of the nearby star is equal to the expected photometric error, verifying that the brightness variations in the target are real. In both figures, the differential photometry is plotted in the sense ($`\mathrm{\Delta }I_C=I_C-\langle I_C\rangle `$). There is insufficient time-series data to be able to find a period for HHJ 375. This star shows photometric variation at the 2.5$`\sigma `$ level, and a very approximate period of $`{\sim}1.5`$ d (FAP $`{\sim}0.4`$). CFHT-PL-8 has an amplitude of 10%; this star is comparable in mass to a star in the young cluster Alpha Persei for which Martín & Zapatero Osorio (1997) derived a photometric rotation period. None of the other stars showed variations which were as much as twice the typical error of measurement (see Table 2). Thus, we were unable to measure rotation periods for any of the objects which are likely to be sub-stellar.
We also followed HHJ 409 in $`V`$ (five visits) to determine the relative amplitude compared to that in $`I_C`$. Figure 3 plots the difference in magnitude with respect to the average magnitude in each filter for these five visits. We find that the relative amplitude in $`V`$ is 2.1 times that in $`I_C`$, and since the star gets redder as it gets fainter, the spots are cooler than the average temperature of the photosphere. The $`V-I`$ color of the star indicates a photospheric temperature of $`T=3560`$ K. To estimate roughly the temperature difference between the spots and the star, we did a simple model of blackbody spots (see figure 4). For each assumed spot temperature, we derived a filling factor which would produce an amplitude of 0.04 mag in $`I_C`$; the cooler the spots, the smaller the required filling factor to produce this amplitude. Then for each temperature, we calculated the resulting expected $`V`$ amplitude. At spot temperatures near 3330 K and filling factors near 13%, the observed $`V`$ amplitude of 0.08 mag was matched (Figure 4).
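The sketch below reproduces this kind of estimate with pure blackbody fluxes; the effective wavelengths adopted for $`V`$ and $`I_C`$ are rough placeholders, so the numbers are illustrative rather than the values behind Figure 4.

```python
import numpy as np

# Toy blackbody-spot model: for each assumed spot temperature, find the
# filling factor f that reproduces a 0.04 mag I_C amplitude, then predict
# the V amplitude. Wavelengths are rough effective values (assumptions).
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
def planck(lam, T):
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1.0))

T_star = 3560.0
lam_I, lam_V = 0.80e-6, 0.55e-6
for T_spot in np.arange(3000.0, 3500.0, 50.0):
    # spotted/unspotted flux ratio: 1 - f*(1 - B_spot/B_star)
    rI = planck(lam_I, T_spot) / planck(lam_I, T_star)
    f = (1.0 - 10**(-0.4 * 0.04)) / (1.0 - rI)   # match dI_C = 0.04 mag
    rV = planck(lam_V, T_spot) / planck(lam_V, T_star)
    dV = -2.5 * np.log10(1.0 - f * (1.0 - rV))
    print("T_spot=%4.0f K  f=%5.3f  dV=%5.3f mag" % (T_spot, f, dV))
```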
Though this measurement of the relative amplitudes in the two filters was only done for one star, it suggests that observations in wavelengths as red as $`I_C`$ may be quite useful for this kind of work, for the simple reason that the stars are so much brighter in $`I`$ than in $`V`$ that it is easier to obtain good signal-to-noise in the photometry. Still, only two of our stars had amplitudes which were larger than the errors in the photometry. Several alternative explanations arise: the non-variables may have very little chromospheric activity, which is strongly correlated with rotation (e.g., Krishnamurthi et al. (1998) and references therein). The stars may have activity cycles and it so happened that they did not have spots with a large filling factor during the time of our observations. Some higher-mass Pleiades stars show significant variations in their amplitude, which is interpreted as being the result of activity cycles (e.g., Stauffer et al. 1987b ; Prosser et al. (1993)), and our target J0149 has been quiescent except for the one flare event (Liebert et al. (1999)). Finally, the spots may have very low contrast to the average photospheric color.
There have been several efforts recently to measure activity in brown dwarfs. Neuhäuser et al. (1999) conducted a search for X-ray emission from confirmed (Li test) and candidate brown dwarfs. A very small number of the targets were detected in X-rays: only one out of 26 brown dwarfs was detected, while 4 out of 57 of the brown dwarf candidates were detected. Except for 2MASSJ0149, the brown dwarfs and brown dwarf candidates studied in this paper were all included in the Neuhäuser et al. (1999) study. No detections were made and only upper limits were obtained for those of our targets included in that study.
This lack of activity is consistent with the findings of Krishnamurthi et al. (1999), who conducted a deep search for radio emission from a sample of very low mass stars and brown dwarfs in the field; no radio emission was detected from the studied targets. A possible explanation for a lack of detectable activity in such low-mass objects has been suggested by Neuhäuser et al. (1999): brown dwarfs are fully convective objects that briefly burn deuterium and are not massive enough to burn hydrogen. When these objects stop burning deuterium, the core starts to cool down. As the temperature difference between the core and the surface decreases, the convective velocities decrease. This dampens the dynamo, which is powered not only by rotation, but also by the convection velocity. Thus, the objects may be rotating rapidly (as is seen from some measurements) but still not show much sign of activity in the form of X-ray emission, radio emission or spots.
In summary, we monitored a set of low-mass stars and brown dwarfs in the Pleiades and one low-mass star in the field in order to determine photometric rotational periods. We were able to derive rotation periods for two of the low mass stars. We thus verify (as in Martín & Zapatero Osorio 1997) that at least some stars near the hydrogen burning limit have large spot groups. The frequency of the spots is low (about 20-25%), or the spots have temperatures which are near that of the photosphere in these cool stars. We were unable to measure rotation periods for any of the (likely) substellar objects. From comparison to earlier work looking for activity, we speculate that this lack of detectable activity may be an inherent problem with using a sample of objects that are older than a few million years. If this hypothesis is correct, only very young brown dwarfs show signs of activity and it will be necessary to study such young objects to be able to detect spot activity. We will identify and study such candidates in an effort to determine if brown dwarfs show signs of magnetic activity at younger ages.
We acknowledge support from NSF grant AST-9731621 to MHP and DT. Thanks to MDM staff, and to Liam Mac Iomhair for his assistance at the telescope. AK received support from NASA grant S-56500-D to the University of Colorado.
|
no-problem/9906/cond-mat9906007.html
|
ar5iv
|
text
|
# Current dependence of grain boundary magnetoresistance in La0.67Ca0.33MnO3 films
## I Introduction
Colossal magnetoresistance (CMR) in thin films of manganese based perovskites has been investigated intensively during the last few years. One motivation is the prospect of using these materials for magnetic devices, but a strong field in the tesla range is necessary to obtain large resistance changes.
Another peculiarity, the highly spin polarized conduction band, allows one to achieve high MR in low fields, because of the spin polarized tunneling of electrons between grains . This can be realized by heteroepitaxial tunnel junctions with electrodes of manganites and a thin insulating barrier . Other approaches are to produce polycrystalline films , bulk materials , ramp-edge junctions , or thin films which are under compressive strain . As is well known from the high-$`T_C`$ superconductors, a further approach is to deposit thin manganite films on bicrystal substrates to realize a single artificial grain boundary (GB) .
In order to understand the mechanism of the large low-field MR, we have carefully studied the resistivity of thin manganite films with and without an artificial GB, induced by a bicrystal substrate, as a function of temperature, magnetic field and current.
## II Experimental
Epitaxial thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> (LCMO) were grown from a stoichiometric target by pulsed laser deposition (KrF laser, $`\lambda =248`$ nm). As substrates we used 45° SrTiO<sub>3</sub> bicrystals or monocrystals. The optimized deposition conditions were a substrate temperature of 950 °C in an oxygen partial pressure of 14 Pa. The deposition rate was 0.3 Å/pulse with a pulse frequency of 3 Hz. With a Mireau interferometer the film thickness was determined to be 190 nm. After deposition the samples were cooled in 500 hPa O<sub>2</sub> atmosphere without further annealing.
In X-ray diffraction in Bragg-Brentano geometry only film reflections corresponding to a (0 0 $`l`$) orientation of the cubic perovskite cell are visible. The lattice parameter normal to the substrate plane ($`c=3.82`$ Å) is smaller than the bulk value and indicates a tensile strain of the $`a,b`$ axis due to the lattice mismatch of the substrate. Rocking angle analysis shows epitaxial $`c`$-axis oriented growth with an angular spread smaller than 0.03° on both sides of the artificial GB. The in-plane orientation was studied by $`\varphi `$-scans on both sides of the bicrystal using the group of symmetry equivalent (2 0 2) reflections. The cubic perovskite axes of the film are parallel to those of the SrTiO<sub>3</sub> bicrystal substrate with an angular spread in the orientation of the individual epitaxial grains of less than 0.5°. Thus the 45° GB in the film induced by the bicrystal substrate is the only relevant large-angle GB in the following experiment.
The surface morphology of the samples was studied by scanning electron microscopy (SEM) and by atomic force microscopy (AFM). A shallow rectangular trench structure of 200 nm width and 5 nm depth along the grain boundary was visible in AFM. The GB itself was identified as a further sharp dip with characteristic dimensions close to the resolution limit of the wide-area scanner used. The SEM scan in Fig. 1 shows that the actual distorted GB region is smaller than 50 nm.
The films were patterned with conventional photolithography and chemically etched in an acidic solution of hydrogen peroxide. We used a meander track which crosses the GB eleven times. The total length is 5.6 mm and the width is 0.1 mm. For comparison an identical second structure was prepared adjacent to the GB, but not crossing it.
The resistivity was measured in a superconducting magnet cryostat by the standard four-point technique with a constant DC current between 10 nA and 100 $`\mu `$A. For higher currents up to 10 mA, current pulses of 3 ms length were applied in order to minimize sample heating. To see the low-field MR it is necessary to diminish demagnetization effects, which requires that the film plane is parallel to the magnetic field $`H`$. Furthermore, the angle between the GB and the magnetic field is important for the MR ratio. In this paper we show the results for $`H`$ parallel to the GB, where the low-field MR is largest.
## III Results and Discussion
The resistances of the meander tracks as a function of temperature show the usual maximum in the resistivity and its suppression in high magnetic fields. The conductivity above the Curie-temperature $`T_C=225`$ K is thermally activated, which can be explained by small polaron hopping . At low temperatures the behavior of the two meanders is not the same. The difference between the meander resistance on and beside the GB isolates the temperature dependence of the true GB resistance. In agreement with published data there is a broad flat peak around 100 K below $`T_C`$ .
In Fig. 2 the resistivity as a function of magnetic field at 4.2 K measured with a constant DC current of 1 $`\mu `$A is shown. Here the magnetic field is parallel to the plane of the film and to the GB. The MR defined as $`\mathrm{\Delta }R/R=[R(H)-R(H=0)]/R(H=0)`$ reaches its maximum value of 70 % in a field of 120 Oe at 4.2 K, which is approximately equal to the coercive field. Assuming spin polarized tunneling , the maximum MR is given by
$$\mathrm{\Delta }R/R_{max}=(R_{\uparrow \downarrow }-R_{\uparrow \uparrow })/R_{\uparrow \uparrow }=2P^2/(1-P^2)$$
(1)
with $`P=(n_{\uparrow }-n_{\downarrow })/(n_{\uparrow }+n_{\downarrow })`$ the spin polarization parameter and $`R_{\uparrow \uparrow }`$ and $`R_{\uparrow \downarrow }`$ the resistances for the parallel and antiparallel configurations of the magnetizations, respectively. A value of 70 % for $`\mathrm{\Delta }R/R`$ results in $`P\approx 0.51`$ for LCMO. We have to mention here that the different steps and switches in Fig. 2 indicate that in the neighborhood of the GB multiple magnetic domains can exist and thus the spin polarization parameter will be underestimated.
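Solving Eq. (1) for the polarization gives

$$P=\sqrt{\frac{r}{2+r}},\qquad r\equiv \mathrm{\Delta }R/R_{max},$$

so that $`r=0.70`$ yields $`P=\sqrt{0.70/2.70}\approx 0.51`$, the value quoted above.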
With higher temperatures the maximum low-field MR decreases almost linearly down to 125 K. At $`T_C`$ of the sample the hysteretic low-field MR vanishes, as can be seen in Fig. 3. This temperature dependence is different to that of magnetic tunnel junctions . The magnetic field value, where the maximum MR ratio occurs, does not shift with temperature up to 200 K. The inset of Fig. 3 shows a hysteresis loop measured close to $`T_C`$ ($`T=200`$ K). At higher temperature the hysteretic low-field MR becomes less pronounced and the intrinsic negative CMR effect becomes visible. Above 150 K the meander track not passing the GB shows comparable MR effects. Thus the artificial GB acts in this case as a magnetic field independent series resistor. This indicates loss of magnetic order in the distorted GB region well below the bulk $`T_C`$ . The low-field MR visible above 150 K in both meanders shows a maximum value of 3 % at 175 K. We think that this MR results from a large number of small angle grain boundaries between the individual epitaxial grains. This effect decreases at higher temperatures due to the loss of spin orientation in the individual grains near $`T_C`$ of the sample. Below 175 K the increasing magnetic domain size will lead to ferromagnetic coupling of a large number of grains in each magnetic domain. Thus only grains at magnetic domain boundaries contribute to the low-field MR. If at low temperatures the magnetic domains are much larger than the crystal grain size, the number of not spin aligned grains is negligible and the low-field MR vanishes, as can be seen from the lower curve of Fig. 3. At 4.2 K the noise level of the experiment sets an upper limit for the MR of the reference meander of $`<0.05`$ %. Within the above description this requires a growth of the magnetic domain size by two orders of magnitude.
The influence of the GB on the transport properties is also visible in the current-voltage ($`I`$-$`V`$) curves of the meanders. In the paramagnetic regime both meanders show a nearly identical ohmic response. Below $`T_C`$ the one which crosses the GB is strongly nonlinear in contrast to the ohmic behavior of the reference meander as can be seen in Fig. 4 for $`T=4.2`$ K in zero field. With increasing bias current the differential conductivity of the GB meander becomes asymptotic to that of the reference meander. Significant heating effects can be excluded, because the resistance decreases with increasing current, while the temperature coefficient of the resistivity is positive.
Nonlinear $`I`$-$`V`$ curves are often taken as an indication of tunneling transport. In ferromagnetic materials the inclusion of spin dependent density of states factors and hysteretic magnetizations on both sides of the tunneling barrier can explain a hysteretic MR. A description of the nonlinear $`I`$-$`V`$ curve shown in Fig. 4 with the standard Simmons tunneling model in the parabolic approximation is able to reproduce the data within the linewidth of the figure. In addition to the tunneling parameters one has to take into account in a fitting procedure the resistivity drop in the non distorted regions. This can be done either as a free parameter or by using the resistivity of the reference meander. We obtained as minimizing parameters a barrier height of $`\mathrm{\Phi }=0.26`$ eV, a width of 2.1 nm, and a resistance of the undistorted region of 1630 $`\mathrm{\Omega }`$ (reference meander $`R=1770`$ $`\mathrm{\Omega }`$). Such a small width of the barrier is not expected in view of the clear visibility of the GB in the SEM scan. In spite of the nearly perfect fit possible for the nonlinear $`I`$-$`V`$ curve of Fig. 4, the investigation of direct $`I`$-$`V`$ data gives vanishing weight to the true differential conductivity at zero bias current. Therefore it is necessary to inspect the differential conductivity shown in Fig. 5. For very low bias voltages a nearly parabolic behavior of the differential conductivity, as predicted by the Simmons model, was found by Steenbeck et al. . However, the inclusion of high bias current data in our measurements results in stronger systematic deviations of a Simmons model from the experimental data at low voltages. The differential conductivity sketched in Fig. 5 was calculated in the parabolic approximation. In view of the low barrier energy this is not appropriate at very high bias currents, which correspond to high bias voltages. But using the more general formulas given by Simmons will even enhance the deviations. Thus it is unclear whether charge transport takes place by GB-tunneling.
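The sketch below illustrates such a two-component fit; the cubic $`I(V)`$ form stands in for the Simmons expression in the parabolic approximation, and all numbers are placeholders rather than the measured meander data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component model: a nonlinear junction in series with the ohmic
# resistance R_s of the undistorted film. The junction is modeled with a
# parabolic conductance, I(V_j) = g0*(V_j + V_j**3/(3*V0**2)), so that
# dI/dV = g0*(1 + (V_j/V0)**2).
def v_of_i(i, g0, v0, r_s):
    vj = np.empty_like(i)
    for k, ik in enumerate(i):
        roots = np.roots([g0 / (3 * v0**2), 0.0, g0, -ik])
        vj[k] = roots[np.argmin(np.abs(roots.imag))].real  # the one real root
    return vj + r_s * i

i_data = np.linspace(-1e-2, 1e-2, 41)            # bias currents in A
v_data = v_of_i(i_data, 5e-4, 0.1, 1630.0)       # synthetic "data"
popt, _ = curve_fit(v_of_i, i_data, v_data, p0=[1e-3, 0.2, 1500.0])
print(popt)                                      # recovers [g0, V0, R_s]
```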
Also the observed anisotropy of the MR with respect to the magnetic field direction cannot be easily understood in this framework. Therefore as an alternative mechanism Evetts et al. proposed that the GB does not act as a tunneling barrier, but as a mesoscale region with distorted magnetic and transport properties. The resistivity of the GB region is assumed to be a function of the GB magnetization, which depends on the effective field acting in the GB region. This interpretation is in agreement with the observed anisotropy. Also the loss of magnetic order in the mesoscale region can naturally occur well below the bulk $`T_C`$ and will be responsible for the vanishing low-field MR at higher temperatures. The transport mechanisms proposed so far for the mesoscale region, however, are essentially thermally activated and ohmic. Therefore they cannot simply account for nonlinear $`I`$-$`V`$ curves.
A dependence of the hysteretic low-field MR on the current strength, which we present in the following, is a priori expected in neither a tunneling model nor a mesoscale transport model.
We investigated the dependence of the low-field MR on the transport current on a meander which shows a maximum MR peak $`\mathrm{\Delta }R/R_{max}`$ of 22 % at $`T=4.2`$ K. The applied current range spanned six decades, from 10 nA to 10 mA. Fig. 6 shows that the width of the hysteresis loops is current independent, while there is a strong current dependence of the height of the MR peaks. The suppression of the MR with increasing current is shown in Fig. 7: the peak MR drops from 22 % at a bias current of 10 nA to 1.4 % at 10 mA.
A close inspection of the data shows that a current dependence exists also for magnetic field strengths well above the hysteretic region. The decrease of this negative CMR of the GB meander with increasing current is shown on an enlarged scale in Fig. 8. On the reference meander no current dependence of the resistivity is observed. The negative linear CMR slope $`\mathrm{d}R/\mathrm{d}(\mu _0H)`$ as a function of the current was measured up to 10 T and is also plotted in Fig. 7. The close correlation between the CMR suppression and the low-field MR peak suppression with current indicates the same physical origin for both effects in the GB region. Since high bias currents correspond to high bias voltages, one possible explanation is that at high bias voltage a new spin independent channel for charge transport opens. However, due to the intimate relationship between ferromagnetism and spin polarized charge transport in the manganites a different scenario is possible: A polarized charge carrier from the undisturbed region will, due to double exchange, induce a spin polarization in the distorted region during its transit, i.e. the carrier leaves a trail of localized spins behind, which are more than usually aligned. Each manganese ion will encounter the passage of up to $`10^8`$ polarized carriers per second for the current values used in our experiment. If the spin relaxation time is much longer than 10<sup>-8</sup> s the charge transport induced magnetization will be nonzero in the time average and a second carrier can use this spin polarized path before the spin relaxation has destroyed the induced order! The existence of electron spin resonance measurements at microwave frequencies in the manganites shows that the relaxation time is longer than (10 GHz)<sup>-1</sup>, and therefore this scenario is imaginable. In the limit of full magnetization of the distorted region the GB meander resistivity will be indistinguishable from that of the reference meander. This interpretation does not depend on the details of the transport of the charge carriers from one manganese ion to the next in the mesoscale region, but requires only a hopping probability depending on the respective spin alignment as proposed by several authors .
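As a rough consistency check of the $`10^8`$ s<sup>-1</sup> figure, assume the full 10 mA crosses the 0.1 mm $`\times `$ 190 nm track cross-section and that one manganese ion occupies an area $`a^2`$ of the GB plane, with $`a\approx 3.9`$ Å (close to the lattice parameter quoted in Sec. II). Then

$$\frac{I/e}{A/a^2}=\frac{10^{-2}\text{ A}/(1.6\times 10^{-19}\text{ C})}{(10^{-4}\text{ m})(1.9\times 10^{-7}\text{ m})/(3.9\times 10^{-10}\text{ m})^2}\approx \frac{6\times 10^{16}\text{ s}^{-1}}{1.3\times 10^8}\approx 5\times 10^8\text{ s}^{-1},$$

consistent with the quoted rate at the highest bias currents.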
## IV Summary
By pulsed laser ablation we prepared thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> on SrTiO<sub>3</sub> bicrystal substrates. X-ray diffraction showed full epitaxial growth on both sides of the bicrystal substrate. Their low- and high-field MR was measured as a function of magnetic field, temperature and current. We obtained hysteretic MR values up to 70 % at low temperatures in low magnetic fields. The MR is induced by switching of magnetic domains at the coercive field. With increasing temperature the artificial grain boundary MR decreases almost linearly. We observed a highly non-ohmic behavior of the current-voltage relation, which leads to a nearly complete suppression of the MR effect at high bias currents. The current dependence of low- and high-field MR is identical, which indicates the same physical origin for this suppression. We discussed the data in view of proposed tunneling and mesoscale magnetic transport models. An explicit dependence of the spin polarization on the applied current is necessary for a consistent interpretation. In the mesoscale transport model the number of manganese ions in the distorted grain boundary region is comparable to the number of charge carriers passing the region within the spin relaxation time and we propose a current induced magnetization change.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft through project JA821/1-3.
|
no-problem/9906/hep-ph9906374.html
|
ar5iv
|
text
|
# Supersymmetric CP violation 𝜀'/𝜀 due to asymmetric 𝐴-matrix
BA-99-44
Shaaban Khalil<sup>1,2</sup> and Tatsuo Kobayashi<sup>3</sup>
<sup>1</sup>Bartol Research Institute, University of Delaware Newark, DE 19716
<sup>2</sup>Ain Shams University, Faculty of Science, Cairo 11566, Egypt
<sup>3</sup>Department of Physics, High Energy Physics Division, University of Helsinki
and
Helsinki Institute of Physics, P.O. Box 9 (Siltavuorenpenger 20 C)
FIN-00014 Helsinki, Finland
ABSTRACT
> We study contributions of supersymmetric CP phases to the CP violation $`\epsilon ^{}/\epsilon `$ in models with asymmetric $`A`$-matrices. We consider asymmetric $`A`$-matrices, which are obtained from string-inspired supergravity. We show that a certain type of asymmetry of $`A`$-matrices enhances supersymmetric contributions to the CP violation $`\epsilon ^{}/\epsilon `$ and the supersymmetric contribution to $`\epsilon ^{}/\epsilon `$ can be of order of the KTeV result, $`\epsilon ^{}/\epsilon \sim 10^{-3}`$.
1. CP violation is sensitive to physics beyond the standard model (SM), that is, CP violation , as well as flavor changing neutral current (FCNC) processes, constrains significantly physics beyond the SM, e.g., supersymmetric models. On the other hand, if any deviation of CP violation from the SM is observed, that would be a strong hint of physics beyond the SM. Many works have been done on CP violation in supersymmetric models . Recently the KTeV collaboration at Fermilab has reported a measurement of the direct CP-violation in $`K\to \pi \pi `$ decays
$$\mathrm{Re}(\epsilon ^{}/\epsilon )=(28\pm 4)\times 10^{-4}.$$
This measurement has confirmed the previous result of the NA31 experiment at CERN . Hence, it excludes the superweak models. The SM predicts a non-zero value for $`\epsilon ^{}/\epsilon `$. However, its prediction suffers from a large ambiguity due to the theoretical uncertainties in the hadronic quantities. Recent discussions on $`(\epsilon ^{}/\epsilon )`$ can be found in Refs.-. In supersymmetric models, there are several possible sources for CP violation beyond the Cabibbo-Kobayashi-Maskawa (CKM) phase $`\delta _{CKM}`$ in the SM. Only two types of physical phases remain in the minimal supersymmetric standard model (MSSM) after all appropriate field redefinitions, namely, the phases of the $`A`$-parameters and the phase of the $`\mu `$-term ($`\varphi _\mu `$). A generic $`A`$-matrix includes many degrees of freedom in its real and imaginary parts. However, in most cases the universality of the $`A`$-matrices has been assumed. That simplifies calculations, but it removes interesting degrees of freedom, in particular for CP violation. Several implications of non-universal $`A`$-matrices have been discussed in Ref. . The non-universality among the soft supersymmetry (SUSY) breaking terms plays an important role in all CP violating processes. In particular, it has been shown in Ref. that non-degenerate $`A`$-parameters can generate the experimentally observed CP violation $`\epsilon `$ even with the vanishing CKM CP phase $`\delta _{CKM}=0`$; that is, fully supersymmetric CP violation in the kaon system is possible. In Ref. , an example of symmetric $`A`$-matrices for the first $`2\times 2`$ block has been considered with exact degeneracy of the squark masses between the first and second families, because string-derived soft terms require symmetric $`A`$-matrices for exactly degenerate soft masses. This model can lead to the parameter region where we have a SUSY contribution of $`O(10^{-3})`$ to $`\epsilon `$. However, this type of model provides a very small value for $`\epsilon ^{}/\epsilon `$. This result is due to an accidental cancellation between the different contributions because of the symmetric form of the $`A`$-matrices which have been used. In this letter, we study the SUSY contributions to the CP violation $`\epsilon ^{}/\epsilon `$ in models with asymmetric $`A`$-matrices. We consider two possibilities for asymmetric $`A`$-matrices keeping the degeneracy of the squark masses. One model has almost degenerate squark masses, which are realized by dilaton-dominated SUSY breaking. In the other model, we require a delicate cancellation between the string-derived soft masses and the $`D`$-term contributions to the soft masses. The latter can lead to a large asymmetry of the $`A`$-matrix. Using these models, we calculate $`\epsilon ^{}/\epsilon `$ explicitly. We then show that in the case with asymmetric $`A`$-terms the SUSY contribution to $`\epsilon ^{}/\epsilon `$ can be of order of the KTeV result, $`\epsilon ^{}/\epsilon \sim 10^{-3}`$. In the whole analysis we take $`\delta _{CKM}=0`$ in order to show the pure SUSY contributions.
2. Here we consider the possibilities that one can obtain non-degenerate $`A`$-matrices keeping degeneracy of the squark masses. First we give a brief review on the soft SUSY breaking terms in string models,
$`\mathcal{L}_{\mathrm{SB}}`$ $`=`$ $`{\displaystyle \frac{1}{6}}h_{ijk}\varphi _i\varphi _j\varphi _k+{\displaystyle \frac{1}{2}}(\mu B)^{ij}\varphi _i\varphi _j+{\displaystyle \frac{1}{2}}(m^2)_i^j\varphi ^i\varphi _j+{\displaystyle \frac{1}{2}}M_a\lambda \lambda +\text{H.c.}`$ (1)
where the $`\varphi _i`$ are the scalar parts of the chiral superfields $`\mathrm{\Phi }_i`$ and $`\lambda `$ are the gauginos. We use the notation of trilinear coupling terms, the so-called $`A`$-terms, as $`h_{ijk}=(YA)_{ijk}`$, where $`Y_{ijk}`$ is the corresponding Yukawa coupling. We start with the (weakly coupled) string-inspired supergravity theory. Its Kähler potential is
$$K=\mathrm{ln}(S+S^{})+3\mathrm{ln}(T+T^{})+\underset{i}{}(T+T^{})^{n_i}|\mathrm{\Phi }^i|^2,$$
(2)
where $`S`$ and $`T`$ are the dilaton field and the moduli field. Now we assume a nonperturbative superpotential of $`S`$ and $`T`$, $`W_{np}(S,T)`$, is induced and $`F`$-terms of $`S`$ and $`T`$ contribute to SUSY breaking. In addition, we assume the vanishing vacuum energy. Then we parameterize $`F`$-terms
$`F^S=\sqrt{3}m_{3/2}(S+\overline{S})\mathrm{sin}\theta e^{i\alpha _S},F^T=m_{3/2}(T+\overline{T})\mathrm{cos}\theta e^{i\alpha _T}.`$ (3)
Within this framework, the soft scalar mass and the $`A`$-parameter are obtained
$`m_i^2`$ $`=`$ $`m_{3/2}^2(1+n_i\mathrm{cos}^2\theta ),`$ (4)
$`A_{ijk}`$ $`=`$ $`-\sqrt{3}m_{3/2}\mathrm{sin}\theta e^{i\alpha _s}-m_{3/2}\mathrm{cos}\theta (3+n_i+n_j+n_k)e^{i\alpha _T},`$ (5)
where $`n_i`$, $`n_j`$ and $`n_k`$ are modular weights of fields in the corresponding Yukawa coupling $`Y_{ijk}`$. Here we have assumed the corresponding Yukawa coupling $`Y_{ijk}`$ is $`T`$-independent. In addition, the gaugino masses are obtained
$`M_a`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{sin}\theta e^{i\alpha _S}.`$ (6)
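For orientation, a minimal numerical sketch of Eqs. (4)–(6), using the signs as written above; the mass scale and phases in the example call are placeholders.

```python
import numpy as np

# Evaluate the soft terms of Eqs. (4)-(6) for one Yukawa coupling Y_ijk.
# m32 = m_{3/2}; n_i, n_j, n_k are the modular weights of the three fields.
def soft_terms(m32, theta, a_S, a_T, n_i, n_j, n_k):
    m2_i = m32**2 * (1.0 + n_i * np.cos(theta)**2)          # scalar mass^2
    A_ijk = (-np.sqrt(3.0) * m32 * np.sin(theta) * np.exp(1j * a_S)
             - m32 * np.cos(theta) * (3 + n_i + n_j + n_k) * np.exp(1j * a_T))
    M_a = np.sqrt(3.0) * m32 * np.sin(theta) * np.exp(1j * a_S)  # gaugino mass
    return m2_i, A_ijk, M_a

# e.g. the (1,1) down-sector entry of the first model (weights -1, -1, -1),
# with cos(theta) = 1/sqrt(10) as used later in the text:
print(soft_terms(300.0, np.arccos(1 / np.sqrt(10)), 0.0, np.pi / 2, -1, -1, -1))
```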
It is obvious that if we require exact degeneracy of squark masses between the first and second families, i.e. $`n_{D1}=n_{D2}`$ and similar relations for the other squarks, the first $`2\times 2`$ blocks of the $`A`$-matrices for the up and down sector, $`A_{ij}^u`$ and $`A_{ij}^d`$, are degenerate. That is what has been used in Ref. . In such a case we obtain a suppressed contribution to the CP violation $`\epsilon ^{}/\epsilon `$. In the first model we use, we assign different modular weights for the first and second families in order to have asymmetric $`A`$ matrices. In the dilaton-dominant case with $`\mathrm{tan}\theta >>1`$, we have almost degenerate squark masses. In addition, the renormalization group effects due to the gaugino masses dilute the nondegeneracy. In Ref. it has been shown that the goldstino angle $`\theta `$ is constrained $`\mathrm{cos}^2\theta <1/3`$ for $`n_i-n_j=1`$ from the FCNC<sup>1</sup><sup>1</sup>1Furthermore, in Ref. it has been shown that a certain type of non-universal $`A`$-terms reduce the difference at low energy and even could tune it to vanish.. For example, as our first model with the asymmetric $`A`$-matrix, we take the following assignment of the modular weights
$`n_{Q_1}=-1,n_{Q_2}=-2,n_{Q_3}=-3,`$
$`n_{Di}=n_{Ui}=n_{H_1}=-1,n_{H_2}=-3,`$ (7)
where $`i=1,2,3`$. On the top of that, we restrict our analysis to the region $`\mathrm{cos}^2\theta <1/3`$. Under this assumption, we have the $`A`$-parameter matrix for the down sector,
$`A_{ij}^d=\left(\begin{array}{ccc}a_d& a_d& a_d\\ b_d& b_d& b_d\\ c_d& c_d& c_d\end{array}\right),`$ (11)
where
$`a_d`$ $`=`$ $`-\sqrt{3}m_{3/2}\mathrm{sin}\theta ,`$
$`b_d`$ $`=`$ $`m_{3/2}(-\sqrt{3}\mathrm{sin}\theta +e^{i\alpha ^{}}\mathrm{cos}\theta ),`$ (12)
$`c_d`$ $`=`$ $`m_{3/2}(-\sqrt{3}\mathrm{sin}\theta +2e^{i\alpha ^{}}\mathrm{cos}\theta ).`$
Here we have rotated the gaugino mass terms to be real and rotated the $`A`$-terms at the same time, and $`\alpha ^{}`$ denotes $`\alpha ^{}\equiv \alpha _T-\alpha _S`$. In this case, we have the asymmetry between $`A_{12}^d`$ and $`A_{21}^d`$, but it is limited because of $`\mathrm{cos}^2\theta <1/3`$. Such asymmetry can be enlarged in the $`h_{ijk}`$-matrix if we take an asymmetric Yukawa matrix. Thus, we discuss the two cases: one case has a typical symmetric Yukawa matrix and the other case has an example of asymmetric Yukawa matrices. As a typical type of the symmetric and realistic Yukawa matrices, we use the type shown explicitly in Ref. . The Yukawa matrices of this type lead to similar results for the CP violation. As an example of asymmetric Yukawa matrices, we take the following form,
$`Y_{ij}^u=y^u\left(\begin{array}{ccc}\lambda ^8& \lambda ^5& \lambda ^3\\ \lambda ^7& \lambda ^4& \lambda ^2\\ \lambda ^5& \lambda ^2& 1\end{array}\right),Y_{ij}^d=y^d\left(\begin{array}{ccc}\lambda ^5& \lambda ^3& \lambda ^3\\ \lambda ^4& \lambda ^2& \lambda ^2\\ \lambda ^2& 1& 1\end{array}\right),`$ (19)
where $`\lambda 0.22`$. These correspond to the Yukawa matrices with one $`O(\lambda )`$ deviation in Ref. .
3. We explain the second model which we use. We assume an extra $`U(1)`$ gauge symmetry <sup>2</sup><sup>2</sup>2This $`U(1)`$ symmetry may be anomalous or anomaly-free. and it is broken by the vacuum expectation value (VEV) of the Higgs field $`\chi `$. This breaking induces another type of contribution to soft scalar masses, i.e. the $`D`$-term contribution, which is proportional to a charge of the broken symmetry. In this case, the soft scalar mass is obtained
$`m_i^2`$ $`=`$ $`m_{3/2}^2(1+n_i\mathrm{cos}^2\theta )+q_im_D^2,`$ (20)
where $`q_i`$ is the charge of the broken $`U(1)`$ of the matter field, and $`m_D^2`$ is the universal part of the $`D`$-term contributions. Thus, the soft scalar masses are, in general, non-degenerate for $`n_i`$ and $`q_i`$. However, the soft scalar masses $`m_i^2`$ and $`m_j^2`$ are degenerate if the following two conditions are satisfied,
(a) $`n_i-n_j=C(q_i-q_j)`$,
where $`C`$ is universal for $`i`$ and $`j`$,
(b) $`m_{3/2}^2\mathrm{cos}^2\theta +m_D^2/C=0`$.
In this case, we obtain the degenerate soft scalar masses for different $`n_i`$ and $`n_j`$, that is, we can obtain non-degenerate $`A`$-matrices keeping degenerate soft scalar masses. Thus, this is a very interesting fine-tuned case in the whole parameter space.
Before calculations of $`\epsilon ^{}/\epsilon `$ in this model, we give comments on the conditions (a) and (b). We denote here the modular weight and the $`U(1)`$ charge of $`\chi `$ by $`n_\chi `$ and $`q_\chi `$ and we take the normalization such that $`q_\varphi =1`$. The VEV of $`\chi `$ induces the following terms in the superpotential,
$`W_{Yukawa}`$ $`=`$ $`Y_{ij}^u\theta (q_{H2}+q_{Qi}+q_{Uj})(<\chi >/M)^{(q_{H2}+q_{Qi}+q_{Uj})}Q^iU^jH_2`$ (21)
$`+`$ $`Y_{ij}^d\theta (q_{H1}+q_{Qi}+q_{Dj})(<\chi >/M)^{(q_{H1}+q_{Qi}+q_{Dj})}Q^iD^jH_1.`$
The superpotential includes a similar term for the lepton sector. Here the couplings $`Y_{ij}^u`$ and $`Y_{ij}^d`$ are naturally of $`O(1)`$. The suppression factors $`(<\chi >/M)^{(q_{H2}+q_{Qi}+q_{Uj})}`$ and $`(<\chi >/M)^{(q_{H1}+q_{Qi}+q_{Dj})}`$ can lead to realistic hierarchies of the Yukawa matrices . Now we consider the $`T`$-duality transformation,
$$T\to \frac{aT-ib}{icT+d},$$
(22)
where $`ad-bc=1`$ and $`a,\mathrm{},d`$ are integers. We assume that the chiral field transforms
$$\mathrm{\Phi }^i\to (icT+d)^{n_i}\mathrm{\Phi }^i.$$
(23)
Then we require that $`G\equiv K+\mathrm{ln}|W|^2`$ is duality-invariant. That implies that the superpotential should have the total modular weight $`-3`$, that is, <sup>3</sup><sup>3</sup>3See also e.g. for $`D`$-term contributions derived from superstring theory .
$`(q_{H2}+q_{Qi}+q_{Uj})n_\chi +n_{H2}+n_{Qi}+n_{Uj}`$ $`=`$ $`-3,`$
$`(q_{H1}+q_{Qi}+q_{Dj})n_\chi +n_{H1}+n_{Qi}+n_{Dj}`$ $`=`$ $`-3,`$ (24)
for non-vanishing couplings. Thus, it is obvious that for $`Q^i`$ fields the condition (a) is satisfied, i.e.
$$n_{Qi}-n_{Qj}=-n_\chi (q_{Qi}-q_{Qj}),$$
(25)
if the $`(i,k)`$ and $`(j,k)`$ entries do not vanish for one of $`k`$ in either the up-sector or down-sector. Similarly we obtain
$$n_{Ui}-n_{Uj}=-n_\chi (q_{Ui}-q_{Uj}),n_{Di}-n_{Dj}=-n_\chi (q_{Di}-q_{Dj}),$$
(26)
if $`(k,i)`$ and $`(k,j)`$ entries do not vanish for one of $`k`$ in the up-sector and down-sector, respectively. Thus, a certain type of symmetries can realize the condition (a). Let us give a comment on the condition (b), too. Now our free parameters are $`m_{3/2}`$, $`\theta `$ and $`m_D^2`$ and these are determined by VEVs. Note that if the condition (a) is satisfied, we have the very special direction in our parameter space, i.e.
$$m_{3/2}^2\mathrm{cos}^2\theta =n_\chi m_D^2,$$
(27)
which corresponds to the condition (b). Along this direction, we have the degenerate soft scalar masses, $`m_{Q(U,D)i}=m_{Q(U,D)j}`$ for $`i,j`$. For simplicity, we assume that all squark masses are universal for the direction (27), i.e. $`m_{Q(U,D)i}^2=m_0^2`$. Now we consider a vicinity around (27) and there are two types of directions to change parameters. One is to violate the degeneracy and the other is to keep the degeneracy. Here let us consider only the direction to violate the degeneracy and treat the degree of freedom of the corresponding VEV as a dynamical parameter. Thus, we can write
$$m_i^2=m_0^2+q_i\delta m_0^2,$$
(28)
around (27), where $`\delta m_0^2`$ corresponds to the dynamical direction to violate the degeneracy. It is obvious that such direction is proportional to $`q_i`$ for $`m_i`$, because of eq.(20) and the linear relation between $`q_i`$ and $`n_i`$. Now let us consider the one-loop effective potential,
$$\mathrm{\Delta }V_1=\frac{1}{64\pi ^2}\mathrm{Str}\mathcal{M}^4(\mathrm{ln}\mathcal{M}^2/Q^2-3/2).$$
(29)
Around (27), we can expand
$`\mathrm{\Delta }V_1`$ $`=`$ $`[\mathrm{\Delta }V_1]_{m_i^2=m_0^2}+\mathrm{Tr}q_i[d\mathrm{\Delta }V_1/d\delta m_0^2]_{m_i^2=m_0^2}\delta m_0^2`$ (30)
$`+`$ $`\mathrm{Tr}(q_i)^2[d^2\mathrm{\Delta }V_1/d(\delta m_0^2)^2]_{m_i^2=m_0^2}(\delta m_0^2)^2+\mathrm{}.`$
Note that the gaugino masses have no direction corresponding to $`\delta m_0^2`$. If $`\mathrm{Tr}q_i=0`$ and $`d^2\mathrm{\Delta }V_1/d(\delta m_0^2)^2>0`$, the fine-tuning direction (27) is a local minimum. As explained above, we obtain the degenerate soft scalar mass with non-degenerate $`A`$-parameters under the conditions (a) and (b). Note that the assignment of the modular weights is related to the assignment of the $`U(1)`$ charges. As a simple example, we use the following assignment,
$$n_{Q_1}=-4,n_{Q_2}=-3,n_{Q_3}=-1,$$
(31)
$$n_{U_1}=-6,n_{U_2}=-3,n_{U_3}=-1,$$
(32)
$$n_{D_1}=-3,n_{D_2}=-1,n_{D_3}=-1,$$
(33)
$$n_{H_1}=-1,n_{H_2}=-1.$$
(34)
This assignment of the modular weights and the corresponding $`U(1)`$ charge assignment lead to the Yukawa matrices (19) . In realistic string models, we have constraints on the modular weights for the MSSM matter fields , and it might be difficult to obtain e.g. the quark field with $`n_{U_1}=-6`$. However, we use this assignment as a toy supergravity model. If we make our model more complicated, e.g. by use of complicated extra symmetries, or we concentrate only on the first two families, it might be possible to assign more natural values of modular weights. Namely, our initial conditions in the second model are as follows. We assume $`m_i^2=m_0^2`$, and furthermore we assume $`m_0^2=m_{3/2}^2`$. We use the Yukawa matrices (19) and the $`A`$-matrices (5) with the above assignment of the modular weights.
4. The convenient basis to discuss flavor changing effects in the SUSY loop with gluino exchange is the so-called super-CKM basis . In this basis the relevant quark mass matrix is diagonalized, and the squarks are rotated in the same way. We define $`(\delta _{ij})_{LR}`$ by normalizing the off diagonal components by average squark mass squared $`m_{\stackrel{~}{q}}^2`$. The relevant contribution, in our models, to CP violation comes from the terms proportional to $`(\delta _{12})_{LR}`$ and $`(\delta _{12})_{RL}`$. The mass insertion $`(\delta _{12})_{LR}^d`$, for instance, is given by
$$(\delta _{12})_{LR}^d=\frac{1}{m_{\stackrel{~}{q}}^2}U_{1i}(h^d)_{ij}U_{j2}^T,$$
(35)
where $`U`$ is the matrix diagonalizing the down quark mass matrix. It was shown in Ref. that to obtain a large SUSY contribution to $`\epsilon `$ it is necessary to enhance the values of $`\mathrm{Im}(\delta _{12})_{LR}`$ and $`\mathrm{Im}(\delta _{12})_{RL}`$. The non-degeneracy of $`A`$-terms is an interesting example for enhancing these quantities. From eq.(35) we notice that the phases of all entries of the $`A`$-matrix contribute to $`(\delta _{12})_{LR}`$. It is remarkable that we have $`\mathrm{Im}(\delta _{12}^d)_{LR}\ne \mathrm{Im}(\delta _{12}^d)_{RL}`$ in the models with asymmetric $`h`$-matrices unlike the case of symmetric $`h`$-terms. For a wide region of the parameter space we find that $`\mathrm{Im}(\delta _{12}^d)_{LR}`$ is of order $`10^{-4}`$ to $`10^{-5}`$. That is similar to the case with the symmetric $`h`$-terms discussed in Ref. . For these values the CP violation $`\epsilon `$, as explained in Ref. , is of order of the experimental limit $`2.2\times 10^{-3}`$. Note that this is a genuine SUSY contribution since we are assuming that $`\delta _{CKM}=0`$. Also in this analysis we have assumed the vanishing of the phase $`\varphi _\mu `$, which is usually constrained by the experimental limit of the electric dipole moment of the neutron .
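A minimal sketch of how Eq. (35) is evaluated in practice, with the elementwise product $`h_{ij}=Y_{ij}A_{ij}`$ (no sum) and a single rotation matrix as in the equation above; the inputs are placeholders, not the model's matrices.

```python
import numpy as np

# LR mass insertion of Eq. (35): rotate h^d to the super-CKM basis and
# normalize by the average squark mass squared m_sq2. U is the matrix
# diagonalizing the down-quark mass matrix.
def delta12_LR(Y_d, A_d, U, m_sq2):
    h_d = Y_d * A_d                  # h_ij = Y_ij * A_ij, elementwise
    return (U @ h_d @ U.T)[0, 1] / m_sq2
```

The imaginary part of this quantity is what is found below to be of order $`10^{-4}`$ to $`10^{-5}`$.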
Now we estimate the $`\mathrm{\Delta }S=1`$ CP violating parameter $`\epsilon ^{}/\epsilon `$ in our two models. $`(\delta _{12}^d)_{LR}`$ and $`(\delta _{12}^d)_{RL}`$ give the important contributions to the CP violation processes in kaon physics. The relevant part of the effective hamiltonian $`_{\mathrm{eff}}`$ for $`\mathrm{\Delta }S=1`$ CP violation is
$$\mathcal{H}_{\mathrm{eff}}=C_8O_8+\stackrel{~}{C}_8\stackrel{~}{O}_8,$$
(36)
where the Wilson coefficient $`C_8`$ and the $`\mathrm{\Delta }S=1`$ effective Hamiltonian $`O_8`$ are given in Ref.. The dependence on $`(\delta _{12}^d)_{LR}`$ and $`(\delta _{12}^d)_{RL}`$ appear in $`C_8`$ and $`\stackrel{~}{C}_8`$,
$$C_8=\frac{\alpha _s\pi }{m_{\stackrel{~}{q}}^2}\left[(\delta _{12}^d)_{LL}(\frac{1}{3}M_3(x)-3M_4(x))+(\delta _{12}^d)_{LR}\frac{m_{\stackrel{~}{g}}}{m_s}(\frac{1}{3}M_1(x)-3M_2(x))\right]$$
(37)
where $`x=m_{\stackrel{~}{g}}^2/m_{\stackrel{~}{q}}^2`$ and $`\stackrel{~}{C}_8`$ can be obtained from $`C_8`$ by the exchange $`L\leftrightarrow R`$, while the matrix element of the operator $`\stackrel{~}{O}_8`$ is obtained from the matrix element of $`O_8`$ by multiplying it by $`(-1)`$. The functions $`M_1`$, $`M_2`$, $`M_3`$ and $`M_4`$ can be found in Ref. . The expression for $`\epsilon ^{}`$ is given by the following formula,
$$\epsilon ^{}=e^{i\frac{\pi }{4}}\frac{\omega }{\sqrt{2}}\xi (\mathrm{\Omega }-1)$$
(38)
where $`\omega =\mathrm{Re}A_2/\mathrm{Re}A_0`$, $`\xi =\mathrm{Im}A_0/\mathrm{Re}A_0`$ and $`\mathrm{\Omega }=\mathrm{Im}A_2/(\omega \mathrm{Im}A_0)`$. The amplitudes $`A_I`$ are defined as $`\langle \pi \pi (I)|\mathcal{H}_{\mathrm{eff}}|K^0\rangle `$ where $`I=0,2`$ is the isospin of the final two-pion state. Let us discuss the first model with the symmetric and asymmetric Yukawa matrices. We take $`\mathrm{cos}\theta =1/\sqrt{10}`$ and $`\alpha ^{}\approx \pi /2`$. In both cases, we have $`\mathrm{Im}(\delta _{12}^d)_{LR}`$ and $`\mathrm{Im}(\delta _{12}^d)_{RL}`$ of order $`10^{-4}`$ to $`10^{-5}`$ and they are not degenerate. Thus, it is possible to obtain large values of $`\epsilon ^{}/\epsilon `$. Indeed for $`\mathrm{Im}(\delta _{12}^d)_{LR}\sim 10^{-5}`$ we obtain $`\epsilon ^{}/\epsilon `$ of order of the experimental results of NA31 and KTeV, $`\epsilon ^{}/\epsilon =2.8\times 10^{-3}`$. It is interesting to note that even for $`\mathrm{Im}(\delta _{12}^d)_{LR}\sim 10^{-5}`$ one can obtain a sizable contribution to $`\epsilon `$. As pointed out in Ref. , for $`\mathrm{Re}(\delta _{12}^d)_{LR}\sim 10^{-3}`$ the $`\epsilon `$-requirement $`|2\mathrm{R}\mathrm{e}(\delta _{12}^d)_{LR}^2\mathrm{Im}(\delta _{12}^d)_{LR}^2|^{\frac{1}{2}}\lesssim 2\times 10^{-4}`$ is satisfied for the region of the parameter space which leads to $`\epsilon ^{}/\epsilon \sim 10^{-3}`$. Figure 1 shows $`\epsilon ^{}/\epsilon `$ versus $`m_{3/2}`$. The solid line and the dotted line correspond to the symmetric and the asymmetric Yukawa matrices, respectively. In both cases, the SUSY phase has a sizable contribution to $`\epsilon ^{}/\epsilon `$. As expected, the asymmetric Yukawa matrix provides larger values of $`\epsilon ^{}/\epsilon `$ than the symmetric case. For example, the asymmetric Yukawa matrix leads to the value $`\epsilon ^{}/\epsilon =2.8\times 10^{-3}`$ at $`m_{3/2}\approx 450`$ GeV.
Similarly, we can discuss the second model. Actually, the asymmetry of the $`A`$-matrices is large compared with the first model. Thus, the second model predicts much larger values of $`\epsilon ^{}/\epsilon `$ for the same value of $`\mathrm{cos}\theta `$. For example, we have $`\epsilon ^{}/\epsilon =O(10^{-2}\text{--}10^{-1})`$ for $`\mathrm{cos}\theta =1/\sqrt{10}`$ and $`m_{3/2}=O(100)`$ GeV. Such a case, i.e. such magnitude of asymmetry in the $`h`$-matrices, is excluded by the KTeV result. We find $`\epsilon ^{}/\epsilon =O(10^{-3})`$ for $`\mathrm{cos}\theta =1/10`$. For instance, we have $`\epsilon ^{}/\epsilon =3.3\times 10^{-3}`$ for $`m_{3/2}=600`$ GeV, $`\mathrm{cos}\theta =1/10`$ and $`\alpha ^{}\approx \pi /2`$. The behaviour of the $`m_{3/2}`$-dependence is similar to that of both cases of the first model.
Furthermore, the CP violation $`\epsilon ^{}/\epsilon `$ can be reduced if any elements in the Yukawa matrices (19) include suppression factors such that the asymmetry of the $`h`$-matrices is reduced. For example, here we replace the (3,1) element of the down Yukawa matrix as $`\lambda ^2\to x\lambda ^2`$. For $`x=0.1`$ we have $`\epsilon ^{}/\epsilon =O(10^{-3})`$ in the case with $`\mathrm{cos}\theta =1/\sqrt{10}`$ and $`m_{3/2}=O(100)`$ GeV. For instance, we find $`\epsilon ^{}/\epsilon =2.5\times 10^{-3}`$ for $`x=0.1`$, $`\mathrm{cos}\theta =1/\sqrt{10}`$ and $`m_{3/2}=300`$ GeV.
5. We have studied the CP violation $`\epsilon ^{}/\epsilon `$ in models with asymmetric $`A`$-matrices as well as asymmetric $`h`$-matrices. We have shown that a certain type of asymmetry enhances $`\epsilon ^{}/\epsilon `$ and it can be of order of the KTeV result, $`\epsilon ^{}/\epsilon \sim 10^{-3}`$. A large magnitude of the asymmetry leads to too large a CP violation $`\epsilon ^{}/\epsilon `$. Thus, we have a constraint on large asymmetry of the $`h`$-matrices from the experimental value $`\epsilon ^{}/\epsilon \approx 2.8\times 10^{-3}`$. Other CP violating aspects and FCNC processes would also give further constraints.
The authors would like to thank S.M.Barr for useful discussions. S.K. would like to acknowledge the support given by the Fulbright Commission and the hospitality of the Bartol Research Institute. T.K. is supported in part by the Academy of Finland under the Project no. 44129.
# Airy functions in the thermodynamic Bethe ansatz
## Abstract.
Thermodynamic Bethe ansatz equations are coupled non-linear integral equations which appear frequently when solving integrable models. Those associated with models with $`N`$=2 supersymmetry can be related to differential equations, among them Painlevé III and the Toda hierarchy. In the simplest such case the massless limit of these non-linear integral equations can be solved in terms of the Airy function. This is the only known closed-form solution of thermodynamic Bethe ansatz equations, outside of free or classical models. It turns out to give the spectral determinant of the Schrödinger equation in a linear potential.
A great deal of interesting mathematical physics has arisen from the study of integrable models of statistical mechanics and field theory. One interesting area is known as the thermodynamic Bethe ansatz (TBA), which has proven a useful tool for computing the free energy of an integrable $`1+1`$ dimensional system . One ends up with a set of coupled non-linear integral equations, the “TBA equations”. One completely unexpected result was a correspondence between a limit of these integral equations and some very well-studied non-linear differential equations, namely the Toda hierarchy . The purpose of this paper is to extend these results further, and show that in at least one case there is a closed-form but non-trivial solution of the integral equations. Not only is it interesting that such complicated equations have a simple solution in terms of the Airy function, but proving it requires utilizing some very intricate results involving the Painlevé III differential equation . Moreover, it turns out to be related to the spectral determinant of the Schrödinger equation in a linear potential .
The TBA integral equations are generically of the form
(1)
$$\epsilon _a(\theta )=m_a\mathrm{cosh}\theta -\underset{b}{\sum }\int \frac{d\theta ^{\prime }}{2\pi }\varphi _{ab}(\theta -\theta ^{\prime })\mathrm{ln}(1+e^{\mu _b-\epsilon _b(\theta ^{\prime })}).$$
Physically, $`T\epsilon _a(\theta )`$ is the energy for creating a particle of type $`a`$ and rapidity $`\theta `$ in a thermal bath at temperature $`T`$. The $`m_a`$ are the particle masses over temperature, while the $`\mu _a`$ are their chemical potentials over temperature. The kernels $`\varphi _{ab}`$ are a result of the interactions between particles. This and all unlabelled integrals in this paper run from $`-\infty `$ to $`\infty `$. The free energy per unit length is
(2)
$$F=-T^2\underset{a}{\sum }\int \frac{d\theta }{2\pi }m_a\mathrm{cosh}\theta \mathrm{ln}(1+e^{\mu _a-\epsilon _a(\theta )}).$$
Here we study TBA equations where the underlying physical system has $`N`$=2 supersymmetry. The amazing thing is that solutions of a particular limit of such TBA equations are simply related to solutions of a non-linear differential equation . Particles in an $`N`$=2 theory all have a charge, $`f_a`$, known as their fermion number. When the chemical potentials are $`\mu _a=i\pi f_a`$, a consequence of supersymmetry is that the $`\epsilon _a`$ in (1) are all constants and the free energy is $`F=0`$. This is known as the Witten index (the usual integer contributions to the index do not contribute to the free energy per unit length) . The result of is that for chemical potentials $`\mu _a=i(\pi -h)f_a`$, one can derive a differential equation for the order $`h`$ piece of the free energy. The simplest cases give the Painlevé III differential equation and the Toda hierarchy.
The TBA equations for the case at hand were derived in . They have $`\varphi _{12}=\varphi _{13}=1/\mathrm{cosh}\theta `$ with $`\varphi _{ab}=\varphi _{ba}`$ and other $`\varphi _{ab}=0`$, while $`m_2=m_3=0`$, and $`f_1=0`$, $`f_2=-f_3=1`$. For small positive $`h`$, the functions $`A`$ and $`B`$ are defined by $`\epsilon _1(\theta )=A(\theta )-\mathrm{ln}h`$, and $`\epsilon _2(\theta )=\epsilon _3(\theta )=-hB(\theta )`$. The order $`h`$ TBA equations are
(3) $`A(\theta )`$ $`=`$ $`2u(\theta )-{\displaystyle \int \frac{d\theta ^{\prime }}{2\pi }\frac{1}{\mathrm{cosh}(\theta -\theta ^{\prime })}\mathrm{ln}(1+B^2(\theta ^{\prime }))}`$
(4) $`B(\theta )`$ $`=`$ $`{\displaystyle \int \frac{d\theta ^{\prime }}{2\pi }\frac{1}{\mathrm{cosh}(\theta -\theta ^{\prime })}e^{-A(\theta ^{\prime })}}`$
where $`2u(\theta )=m_1\mathrm{cosh}\theta `$ here. In , a physics proof was given that the resulting free energy is simply related to a solution of the Painlevé III differential equation with variable $`m_1`$. This is a physics proof because one method of computation gives integral equations, the other the differential equations. This result was extended considerably in . Subsequently, the equivalence was proven directly and rigorously in .
A particularly interesting situation is the “massless” limit, where $`m_1`$ is very small. Then $`2u(\theta )=e^\theta `$, because $`m_1`$ can be removed by redefining $`\theta `$ by a shift. The result of this paper is that $`A`$ and $`B`$ in (3,4) can be found in closed form in this massless limit.
Result: When $`u(\theta )=e^\theta /2`$, the solution of (3,4) is
(5) $`e^{-A(\theta )}`$ $`=`$ $`-2\pi {\displaystyle \frac{d}{dz}}(Ai(z))^2`$
(6) $`B(\theta )`$ $`=`$ $`-2\pi {\displaystyle \frac{d}{dz}}\left[Ai(ze^{i\pi /3})Ai(ze^{-i\pi /3})\right]`$
where $`z=(3e^\theta /4)^{2/3}`$ and $`Ai(z)`$ is the Airy function.
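Assuming the signs reconstructed in (5,6), the closed form can be compared numerically with a direct fixed-point solution of (3,4); the following sketch is one way to do it, with an illustrative grid and tolerance, and with analytic corrections for the truncated kernel tails.

```python
import numpy as np
from scipy.special import airy

# Discretize the massless TBA equations (3,4), u(theta) = exp(theta)/2,
# and solve them by fixed-point iteration on a truncated grid.
th = np.linspace(-20.0, 4.0, 2401)
d = th[1] - th[0]
K = d / (2*np.pi*np.cosh(th[:, None] - th[None, :]))
# weight of the neglected tail theta' < th[0], where A and B are constant
wL = (np.pi - 2*np.arctan(np.exp(th - th[0]))) / (2*np.pi)
u = 0.5*np.exp(th)

A, B = 2*u, np.full_like(th, 1/np.sqrt(3))
for _ in range(500):
    A_new = 2*u - K @ np.log1p(B**2) - wL*np.log(4.0/3.0)  # ln(1+B^2) -> ln(4/3)
    B_new = K @ np.exp(-A_new) + wL*(2/np.sqrt(3))         # e^(-A) -> 2/sqrt(3)
    if max(abs(A_new - A).max(), abs(B_new - B).max()) < 1e-12:
        break
    A, B = A_new, B_new

# Closed form (5): exp(-A) = -2*pi d/dz Ai(z)^2 = -4*pi*Ai(z)*Ai'(z)
z = (3*np.exp(th)/4)**(2.0/3.0)
Ai, Aip, _, _ = airy(z)
print(abs(np.exp(-A_new) + 4*np.pi*Ai*Aip).max())  # small, discretization-limited
print(B_new[0], 1/np.sqrt(3))                      # theta -> -infinity limit
```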
To check the normalization, note that (3,4) imply that $`e^{-A(\theta )}\to 2/\sqrt{3}`$ and $`B(\theta )\to 1/\sqrt{3}`$ as $`\theta \to -\infty `$, in agreement with the limits of the appropriate Airy functions as $`z\to 0`$. $`Ai(z)`$ is a solution of the differential equation $`w^{\prime \prime }=zw`$, while $`e^{-A(\theta )}`$ and $`B(\theta )`$ solve $`w^{\prime \prime \prime }-4zw^{\prime }=6w`$.
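The third-order equation is the standard one satisfied by $`z`$-derivatives of products of solutions of the Airy equation, and it can be verified symbolically; a quick check (assuming sympy is available):

```python
import sympy as sp

z = sp.symbols('z')
# any two solutions of the Airy equation w'' = z w
y1, y2 = sp.airyai(z), sp.airybi(z)
w = sp.diff(y1*y2, z)   # same structural form as exp(-A) and B above
print(sp.simplify(sp.diff(w, z, 3) - 4*z*sp.diff(w, z) - 6*w))  # -> 0
```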
As far as I can tell, it is difficult to prove the result by direct substitution into the integrals; the proof instead requires utilizing some additional structure. First, define the integral operator $`K`$ which maps functions to functions with the kernel
$$K(\theta ,\theta ^{\prime })=2\frac{E(\theta )E(\theta ^{\prime })}{e^\theta +e^{\theta ^{\prime }}},$$
where
$$E(\theta )=e^{\theta /2}e^{-u(\theta )}.$$
Solutions of Painlevé III can be expressed in terms of this operator $`K`$ when $`2u=m_1\mathrm{cosh}\theta `$ . The functions $`Z_+`$ and $`Z_{-}`$ are defined as
(7)
$$Z_+=e^{-2\theta /3}(I+K)^{-1}E\qquad Z_{-}=e^{-\theta /3}(I-K)^{-1}E$$
These $`Z_\pm `$ are simply related to the functions $`Q`$ and $`P=(I\pm K)^{-1}E`$ used in . It was proven in that the functions $`A`$ and $`B`$ solve (3,4) if
(8) $`e^{-A(\theta )}`$ $`=`$ $`4\pi Z_+(\theta )Z_{-}(\theta )`$
(9) $`B(\theta )`$ $`=`$ $`4\pi e^{i\pi /3}e^{u(\theta +i\pi /2)+u(\theta -i\pi /2)}Z_+(\theta -i\pi /2)Z_{-}(\theta +i\pi /2)-i`$
The functions (8,9) obey (3,4) for any entire $`u(\theta )`$ obeying $`u(\theta )=u(\theta +2\pi i)`$ , not only the special case $`u(\theta )=e^\theta /2`$ analyzed in this paper.
With the similarity between (3,4) and (8,9), proving the result is equivalent to proving that
(10)
$$Z_+(\theta )=Ai(z)\qquad Z_{-}(\theta )=-Ai^{\prime }(z)$$
when $`u(\theta )=e^\theta /2`$. The expression (6) for $`B(\theta )`$ follows by using a few Airy-function identities. The expressions for $`Z_\pm `$ in (10) can be proven by directly evaluating the integral in their definition. Specifically, (7) requires that
(11)
$$E(\theta )=e^{2\theta /3}Z_+(\theta )+2E(\theta )\int d\theta ^{\prime }\,Z_+(\theta ^{\prime })e^{-e^{\theta ^{\prime }}/2}\frac{e^{7\theta ^{\prime }/6}}{e^\theta +e^{\theta ^{\prime }}}$$
Defining $`x\equiv e^{\theta ^{\prime }}/2`$ and using the expression of an Airy function in terms of a Bessel function, the integral is proportional to
$$\int _0^{\infty }dx\,e^{-x}\frac{\sqrt{2x}}{e^\theta +2x}K_{1/3}(x).$$
This can be looked up in , or evaluated by using Mellin transforms, under which the Bessel function $`K_\nu `$ has nice properties. One indeed finds that (11) and the analogous relation for $`Z_{-}`$ are true when $`Z_\pm `$ are given by (10). Since the solution with the appropriate analyticity properties (no zeroes and bounded in the strip $`|\mathrm{Im}\,\theta |<\pi /2`$) is unique , the result is proven.
The functions $`Z_+(\theta )`$ and $`Z_{-}(\theta )`$ are interesting in their own right. They arose as a sort of partition function of the boundary sine-Gordon problem at coupling $`g=2/3`$, and equivalently as the continuum version of the Baxter $`Q`$-operator in studies of conformal field theory . The $`Q`$ operator gives the generating function of the conserved quantities (local and non-local) of the theory. Up to normalization, the functions $`Z_\pm `$ are $`Z_{BSG}(z,\pm 1/3)`$ in , and $`Q(z)`$ at $`p=\pm 1/6`$ in . Thus the result here provides the only case (other than where the system is free or classical) where these quantities can be computed explicitly. In fact, it provides strong evidence that the results of are applicable in the repulsive regime $`g>1/2`$ as conjectured. For example, it shows that the zeroes of the eigenvalues of the $`Q`$-operator obey the pattern conjectured in . The “quantum Wronskian” of becomes equivalent to the Proposition of , and both are easily verified by using the (ordinary) Wronskian of the Airy functions. Also, by using the series expansion of , it gives a strange sequence of identities of sums of products of gamma functions.
Recently, the work of also arose in a completely new context. Namely, it was observed in that the spectral determinant for the Schrödinger equation in a potential $`|x|^\alpha `$ for any $`\alpha `$ is given precisely by the vacuum eigenvalue of this Baxter $`Q`$-operator. A non-zero angular momentum can be added, and this correspondence still holds . This is because the quantum Wronskian is equivalent to a set of functional relations for the spectral determinant derived in . The result above then means that the Airy function is the spectral determinant for the Schrödinger equation in a linear potential, a result shown directly in . It would be most interesting to understand how to extend this result.
Since the system discussed in this paper provides the simplest example of the differential equation/TBA correspondence , it seems likely that there is a simple solution of the massless limit of any TBA equations of this type. What is not yet known outside of this case is the detailed analysis of the differential equation of , which was vital to the results of . Given how beautifully the Airy function solved the problem here, it would be quite interesting to see how this result generalizes.
This work was mostly done in spring 1996, while I was a postdoc at the University of Southern California. I thank Hubert Saleur and the Packard Foundation for support then. It was presented in 1997 at the Supersymmetry and Integrable Models Workshop, University of Illinois-Chicago, and the International Workshop on Statistical Mechanics and Integrable Systems in Coolongatta. I thank the organizers for their hospitality. I thank P. Dorey for conversations, and S. Lukyanov for urging me to write this up despite the late date. Currently, my work is supported by a DOE OJI Award, a Sloan Foundation Fellowship, and by NSF grant DMR-9802813.
# Atomic Scale Sliding and Rolling of Carbon Nanotubes
## Abstract
A carbon nanotube is an ideal object for understanding the atomic scale aspects of interface interaction and friction. Using molecular statics and dynamics methods, different types of motion of nanotubes on a graphite surface are investigated. We found that each nanotube has unique equilibrium orientations with sharp potential energy minima. This leads to atomic scale locking of the nanotube. The effective contact area and the total interaction energy scale with the square root of the radius. Sliding and rolling of nanotubes have different characters. The potential energy barriers for sliding nanotubes are higher than those for perfect rolling. When the nanotube is pushed, we observe a combination of atomic scale spinning and sliding motion. The result is rolling with a friction force comparable to that for sliding.
Although the fundamental aspects of friction have been studied for centuries, our knowledge about its microscopic aspects is very limited. The invention of the atomic force microscope (AFM) and its application in measurements of atomic scale friction (friction force microscope - FFM) have made a great impact on the studies of friction. A carbon nanotube is a stable nanoobject having a cylindrical shape, thus ideal for understanding atomic scale friction. M. Falvo et al. showed that it is possible to slide, rotate and roll carbon nanotubes on a graphite surface. They demonstrated that a nanotube has preferred orientations on the graphite surface and prefers rolling to sliding when it is in atomic scale registry with the surface. In this study, we carried out molecular statics and dynamics calculations and studies of stick-slip motion for a variety of nanotubes. We found that: (i) A nanotube has sharp potential energy minima leading to orientational locking. The locking angles are directly related to the chiral angle. (ii) Sliding and rolling of nanotubes have different characters. The energy barriers for sliding are higher than the barriers for perfect rolling. (iii) The effective contact area and total interaction energy scale with the square root of the radius. (iv) A combination of sliding and spinning motion is observed when the tube is pushed. The net result is rolling, with a friction force comparable to the corresponding force for sliding.
The character of the interaction between the moving object (atom, molecule or any nanoparticle) and the underlying surface defines the motion. The interaction energy may consist of a short-range attractive part due to chemical bonding, a short-range repulsive part, and a long-range attractive van der Waals part. The interaction between a carbon nanotube and a graphite surface is similar to that between two graphite planes, which is weak and van der Waals in origin. To investigate the overall behavior of the motion of a carbon nanotube on a graphite surface, we represent the interaction between the tube and the graphite surface atoms by an empirical potential of Lennard-Jones type which was used extensively to study solid $`C_{60}`$ and nanotubes. Recent theoretical calculations showed that multiwall nanotubes on a graphite surface are not deformed significantly. The atomic scale motion is determined mostly by the interaction of the outermost layer of the nanotube with the surface. In this work we studied rigid single-wall nanotubes with different chiralities and radii.
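For concreteness, such a rigid-tube Lennard-Jones sum can be sketched as below; the C-C parameters are typical graphitic values of the kind quoted in the literature, and are illustrative assumptions here rather than the values used in this study.

```python
import numpy as np

# Illustrative graphitic C-C Lennard-Jones parameters (assumed values).
EPS = 2.64e-3   # well depth, eV
SIG = 3.37      # length scale, Angstrom

def lj_energy(tube_xyz, surf_xyz):
    """Total 12-6 Lennard-Jones energy between two rigid sets of atoms (eV)."""
    d = np.linalg.norm(tube_xyz[:, None, :] - surf_xyz[None, :, :], axis=-1)
    s6 = (SIG / d)**6
    return 4.0 * EPS * np.sum(s6*s6 - s6)

def energy_vs_height(tube_xyz, surf_xyz, heights):
    """Scan the tube-surface gap, mimicking the height optimization
    performed at each step of the motion."""
    return [lj_energy(tube_xyz + np.array([0.0, 0.0, h]), surf_xyz)
            for h in heights]
```

Rotating or translating `tube_xyz` before each evaluation then traces out energy curves of the kind discussed below.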
The energy barriers related to the motion of nanotubes can be conveniently analyzed by calculating the variation of the potential energy, $`E_P`$, and the corresponding force during the motion. Four different types of motion are considered: spinning, rotating, sliding and rolling. During each step of motion the height of the tube is optimized. We first spin and rotate the nanotubes in order to find the equilibrium positions. Fig. 1 shows the interaction energy as a function of the rotation angle between the tube axis and the graphite lattice (all the data given in this study are per Å length of the nanotubes). Each nanotube has unique equilibrium orientations repeating every 60°, reflecting the lattice symmetry of the graphite. The variation of energy near the minimum is very sharp, which causes atomic scale locking of the nanotube. Locking angles are different for different nanotubes and they are a direct measure of the chiral angle. This provides a novel method for measuring the chiralities of carbon nanotubes. Another important point in Fig. 1 is that the energy variation between two consecutive energy minima is very small (except near the minima). Thus the force needed to rotate a nanotube is very small when the tube is out of registry. These results are in good agreement with the recent experiments by M. Falvo et al.
Next, we studied sliding and rolling of carbon nanotubes. In Fig. 2(a) the variation of the total interaction energy, $`E_P(s)`$, of a (20,10) nanotube as a function of sliding distance, $`s`$, is shown. The energy variation is very small for rolling since perfect atomic scale registry is always maintained in the contact region. On the other hand, in sliding motion, all the atoms in the contact region move simultaneously out of registry to higher energy positions and contribute to the energy barrier of motion. When the nanotube is attached to an AFM tip, stick-slip motion occurs. The tube first sticks and then slips suddenly (slides or rolls or both) when the force exerted by the tip is sufficiently large. The friction force in stick-slip motion of the nanotube when it is attached to an AFM tip (with spring constant $`8.0\times 10^{-3}`$ N/m) is shown in Fig. 2(b). The area enclosed by the hysteresis curve gives the amount of energy dissipated during the stick-slip motion.
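The stick-slip cycle itself can be illustrated with a one-dimensional, Prandtl-Tomlinson-style toy model of a spring-pulled contact on a corrugated potential; all parameter values in this sketch are arbitrary illustrative choices, not the simulation parameters of this study.

```python
import numpy as np

a, U0 = 2.46, 0.05             # lattice period (Angstrom), corrugation (eV)
k, v, gamma = 0.1, 2e-3, 1.0   # spring (eV/A^2), support speed, damping
dt, nsteps = 0.02, 500_000

x, F = 0.0, []
for n in range(nsteps):
    xs = v * n * dt                                        # support position
    f = -U0*(2*np.pi/a)*np.sin(2*np.pi*x/a) + k*(xs - x)   # corrugation + spring
    x += f * dt / gamma                                    # overdamped dynamics
    F.append(k * (xs - x))                                 # lateral force on tip
# Since U0*(2*pi/a)**2 > k here, F(t) traces a sawtooth: slow elastic
# loading ("stick") followed by sudden slips of one lattice period,
# qualitatively like the hysteresis of Fig. 2(b).
```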
To investigate the tube dependence, we performed calculations with tubes of different chiralities or radii. Fig. 3(a) shows the interaction energy as a function of the nanotube radius, $`R`$. We found that the effective contact area and the interaction energy scale with the square root of the radius of the nanotube. The interaction energy is independent of chirality. The in-registry sliding force (when the tube is in a minimum energy orientation) also scales with the square root of the radius. However, the sliding force is different for nanotubes with different chiralities and the same radii. Fig. 3(b) shows the force for in-registry sliding and rolling. For a typical nanotube (radius $`\sim 13`$ nm, length $`\sim 600`$ nm), the sliding force is estimated as $`\sim 87`$ nN for an armchair tube and $`\sim 43`$ nN for a zigzag tube, in good agreement with the friction force values measured in the experiments ($`\sim 50`$ nN).
The static calculations we discussed give insight into ideal sliding and rolling. However, the dynamical behavior is crucial for the competition between sliding and rolling in the course of motion. In our model, the graphite substrate and the nanotube are considered to be rigid, but the nanotube as a whole is able to move, having translational and spinning degrees of freedom. A constant lateral force was applied to the nanotube for a short period of time (50 ps) and then the motion of the nanotube was analyzed. In the same way, we applied a constant torque or combinations of torque and lateral force to the nanotube. The total energy of the system was kept constant.
After a constant lateral force is applied to the nanotube, slide-spin motion on the atomic scale is observed. When the atoms in the contact region are in atomic scale registry it is easier for the nanotube to slide. The nanotube atoms then move from in-registry to out-of-registry positions, and it becomes easier for the nanotube to spin. By spinning, the nanotube decreases its potential energy and the atoms recover the in-registry positions. The switching of the tube motion between spinning and sliding can be clearly seen in the sliding and spinning components of the kinetic energy shown in Fig. 4(a). The switching between spinning and sliding occurs on the atomic scale and is directly related to the corrugation of the interaction energy. The total force acting on the tube in the direction of motion is shown in Fig. 4(b). The maximum value of the force is comparable to the force required to slide the nanotube (see Fig. 3(b)). Sliding and spinning distances (angle multiplied by $`R`$) as functions of time are shown in Fig. 4(c). Ideal rolling would be a perfect overlap of the sliding and spinning distances; in slide-spin motion they oscillate about each other, and the net result is equivalent to rolling.
For a better understanding of the slide-spin motion, we plot the interaction energy as a function of sliding distance and spinning angle in Fig. 5(a). The trajectory for ideal rolling is a line at the bottom of the valley-like regions. However, a nanotube performing slide-spin motion follows an oscillating path in these valleys (see Fig. 5(b)), with an amplitude of oscillation depending on the initial kinetic energy.
When the system is coupled to a heat bath or energy dissipation due to friction is considered, the slide-spin motion changes. We modeled the dissipation by additional velocity-dependent forces on the atoms close to the contact region. The results are presented in Fig. 6. Without energy dissipation the tube oscillates in the valley-like regions of the potential energy surface (see Fig. 5(a)) in the slide-spin motion. When energy dissipation is included, the total kinetic energy decreases and there is more mixing between sliding and spinning. Eventually, the nanotube performs ideal rolling. If the nanotube has very high kinetic energy, it slides over many surface unit cells. But due to energy dissipation the tube's total kinetic energy decreases and slide-spin motion starts. Afterwards the motion is close to ideal rolling (as seen in Fig. 6(b)). This atomic scale picture of rolling is very similar to the rolling of macroscopic objects. Recent molecular dynamics simulations with a fully relaxed system by J. D. Schall and D. W. Brenner reach similar conclusions.
To conclude, we investigated different types of motion of carbon nanotubes on a graphite surface. Each nanotube has unique minimum energy orientations with respect to the surface structure. The variation of the interaction energy is very sharp, leading to orientational locking of the nanotubes. The locking angles are a direct measure of the chiral angles, and this provides a novel method for measuring the nanotube chirality. We found that the effective contact area and the total interaction potential energy scale with the square root of the radius of the nanotube. A combination of atomic scale spinning and sliding motion is observed when the nanotube is pushed.
###### Acknowledgements.
The authors are grateful to R. Superfine, M. R. Falvo, J. Steele, D.W. Brenner, J.D. Schall and S. Washburn for many stimulating discussions. This work is supported by the U. S. Office of Naval Research (N00014-98-1-0593).
Email: buldum@physics.unc.edu
Email: jpl@physics.unc.edu
# Detection of the Red Giant Branch Stars in M82 Using the Hubble Space Telescope

(Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, operated by AURA, Inc. under NASA contract No. NAS5-26555.)
## 1 Introduction
M82 has been one of the most frequently targeted candidates for studying starburst galaxies and is distinguished by its very high IR luminosity ($`L_{IR}=3\times 10^{10}L_{\odot }`$, Telesco & Harper 1980). A cluster of young supernova remnants has also been observed around the nucleus of M82 (Kronberg et al. 1985). This galaxy is located in a small group of galaxies comprising M81, M82, NGC 3077 and several other smaller dwarf galaxies. The HI map of the M81 group revealed tidal tails bridging M81 with M82 and NGC 3077, suggesting a recent interaction of these galaxies (Yun, Ho & Lo 1993).
The distance to the M81 group has been measured by Freedman et al. (1994) using the $`HST`$ observations of Cepheid variables in M81. They report a distance modulus of $`(m-M)_0=`$27.80 $`\pm `$ 0.20 mag. Two other galaxies in the M81 group have also recently been $`HST`$ targets. Caldwell et al. (1998) observed the dwarf ellipticals F8D1 and BK5N, and reported distances measured by the tip of the red giant branch (TRGB) method of 28.0 $`\pm `$ 0.10 and 27.9 $`\pm `$ 0.15 mag, respectively.
As part of a long–term project to obtain direct distances to galaxies in the nearby Universe using the TRGB method, we observed two fields in the halo of M82 using the HST and Wide Field Planetary Camera 2. The details of observations and data reductions are reported in the following Section 2. In Sections 3 and 4, we discuss the detection of the RGB stars, and report a distance using the I–band luminosity function. In addition to the RGB stars, we detected a large number of stars brighter than the TRGB in the M82 halo regions. We briefly explore in Section 5 what this population of stars may be.
## 2 Observations and Reductions
Two positions in the halo region of M82 were chosen for our HST observations. A digital sky survey image of M82, is shown in Figure 1, on which the $`HST`$ Wide Field Planetary Camera 2 (WFPC2) footprints are superimposed, indicating the two regions observed. We refer to the region closer to the center of the galaxy as Field I, and the other to the east as Field II. The Planetary Camera (PC) chip covers the smallest area; we refer to this as chip 1. The three Wide Field (WF) chips cover the three larger fields and are referred to as chips 2, 3 and 4 respectively, counterclockwise from the PC. A closeup $`HST`$ image of one of the chips, WF2 field of Field I, is shown in Figure 2.
Observations of the M82 halo region were made with the WFPC2 on board the Hubble Space Telescope on July 9, 1997 using two filters, F555W and F814W. Two exposures of 500 seconds each were taken in both filters at each position. Cosmic rays were cleaned from each image before the exposures were combined to make a set of F555W and F814W frames.
The subsequent photometric analysis was done using the point spread function fitting packages DAOPHOT and ALLSTAR. These programs use automatic star finding algorithms and then measure stellar magnitudes by fitting a point spread function (PSF) constructed from other uncrowded HST images (Stetson 1994). We checked for a possible variation in the luminosity function as a function of the position on each chip by examining the luminosity functions for different parts of the chip. For each frame, we find the identical luminosity function, confirming that there are no significant systematic offsets originating from the adopted PSFs.
The F555W and F814W instrumental magnitudes were converted to the calibrated Landolt (1992) system as follows. (A detailed discussion is found in Hill et al. 1998.) The instrumental magnitudes were first transformed to the Holtzman et al. (1995) 0<sup>′′</sup>.5 aperture magnitudes by determining the aperture correction that needs to be applied to the PSF magnitudes. This was done by selecting 20–30 brighter, isolated stars on each frame. Then all the stars were subtracted from the original image except for these selected stars. Aperture photometry was carried out for these bright stars at 12 different radii ranging from 0<sup>′′</sup>.15 to 0<sup>′′</sup>.5. The 0<sup>′′</sup>.5 aperture magnitudes were determined by applying the growth curve analysis provided by DAOGROW (Stetson 1990); these were then compared with the corresponding PSF magnitudes to estimate the aperture corrections for each chip and filter combination. The values of the aperture corrections for each chip are listed in Table 1. We use a different set of aperture corrections for the two Fields. Most of the values agree with each other within $`2\sigma `$; however, slight offsets between the corrections in the two Fields are most likely due to the PSFs not sampling the images in exactly the same way. When images are co-added, the combined images are not exactly identical to the original uncombined images; that is, the precise positions of stars on the frames are slightly different. Thus we should expect some differences in the aperture corrections of the same chip in the two Fields.
Finally, the 0<sup>′′</sup>.5 aperture magnitudes are converted to the standard system via the equation:
(1)
$$M=m+2.5\mathrm{log}t+\text{C1}+\text{C2}\times (V-I)+\text{C3}\times (V-I)^2+\text{a.c.},$$
where $`t`$ is the exposure time, C1, C2 and C3 are constants, and a.c. is the aperture correction. C1 is comprised of several terms including (1) the long–exposure WFPC2 magnitude zero points, (2) the DAOPHOT/ALLSTAR magnitude zero point, (3) a correction for multiplying the original image by 4 before converting it to integers (in order to save disk space), (4) a gain ratio term due to the difference between the gain settings used for M82 and for the Holtzman et al. data (7 and 14 respectively), (5) a correction for the pixel area map which was normalized differently from that of Holtzman et al. (1995), and (6) an offset between long and short exposure times in the HST zero point calibration. C2 and C3 are color terms and are the same for all four chips. In Table 2, we summarize all three constants for each chip.
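Because the color terms involve the calibrated color $`(V-I)`$, the two filters are coupled and Eq. (1) can be solved by simple iteration. The sketch below is illustrative; the function and argument names are ours, not from a published pipeline.

```python
import numpy as np

def calibrate(v_inst, i_inst, t, cV, cI, acV, acI, n_iter=20):
    """Apply Eq. (1) to F555W/F814W instrumental magnitudes.

    cV, cI : (C1, C2, C3) for each filter, as tabulated in Table 2;
    acV, acI : aperture corrections from Table 1.
    The color term uses the calibrated (V-I), hence the iteration.
    """
    zp = 2.5 * np.log10(t)
    V = v_inst + zp + cV[0] + acV      # first guess: ignore color terms
    I = i_inst + zp + cI[0] + acI
    for _ in range(n_iter):
        col = V - I
        V = v_inst + zp + cV[0] + cV[1]*col + cV[2]*col**2 + acV
        I = i_inst + zp + cI[0] + cI[1]*col + cI[2]*col**2 + acI
    return V, I
```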
## 3 Detection of the Red Giant Stars in M82
The $`V`$ and $`I`$ photometric results are shown in the color–magnitude diagrams in Figure 3. In Table 3, astrometric and photometric data for a set of brighter, isolated reference stars are presented. The X and Y coordinates tabulated refer to those on the image of rootname u3nk0201m for Field I, and u3nka201m for Field II. We also show luminosity function histograms in Figure 4. In both Fields I and II, the WF 4 field samples the least crowded halo region of M82. Based on the observations of Cepheids in M81, the parent galaxy of M82, we know that the distance modulus of M82 is approximately $`\mu _0=27.8`$ mag. The tip of the red giant branch should therefore be observed at $`I\sim 23.7`$ mag. In all CMDs presented here, we can visually detect the position of the TRGB at $`I\sim 23.7`$–23.9 mag relatively easily; it is also evident in the luminosity functions as a jump in number counts, especially in those of Field I. If we are indeed observing the TRGB at around $`I\sim 23.8`$ mag, then a significant number of brighter stars are present in the halo regions of M82, observed above the tip of the RGB in the CMDs. In addition, comparing the two regions, more of these stars are found in Field II. This will be discussed in more detail in Section 5.
## 4 TRGB Distance to M82
The TRGB marks the core helium flash of old, low-mass stars which evolve up the red giant branch, but almost instantaneously change their physical characteristics upon ignition of helium. This restructuring of the stellar interior appears as a sudden discontinuity in the luminosity function and is observed at $`M_I\sim -4`$ mag in the $`I`$band ($`\sim 8200`$ Å). The TRGB magnitude has been shown both observationally and theoretically to be extremely stable; it varies only by $`\sim `$0.1 mag for ages 2–15 Gyr, and for metallicities between $`-2.2<`$ \[Fe/H\] $`<-0.7`$ dex (the range bounded by the Galactic globular clusters). Here, we use the calibration presented by Lee et al. (1993) which is based on the observations of four Galactic globular clusters by Da Costa & Armandroff (1990). The globular cluster distances had been determined using the RR Lyrae distance scale based on the theoretical horizontal branch model for $`Y_{MS}=0.23`$ of Lee, Demarque and Zinn (1990), and corresponds to $`M_V(\text{RR Lyrae})=0.57`$ mag at \[Fe/H\] = $`-1.5`$.
The top panel of each plot in Figure 5 shows an $`I`$band luminosity function smoothed by a variable Gaussian whose dispersion is the photometric error for each star detected. We apply a Sobel edge-detection filter to all luminosity functions to determine quantitatively and objectively the position of the TRGB, following $`E(m)=\mathrm{\Phi }(m+\sigma _m)-\mathrm{\Phi }(m-\sigma _m)`$, where $`\mathrm{\Phi }(m)`$ is the luminosity function at magnitude $`m`$, and $`\sigma _m`$ is the typical photometric error of stars of magnitude $`m`$. For the details of the Sobel filter application, readers are referred to the Appendix of Sakai, Madore & Freedman (1996). The results of the convolution are shown in the bottom panels of Figure 5. The position of the TRGB is identified with the highest peak in the filter output function.
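In outline, the smoothing and edge detection can be implemented as follows; this is a sketch with our own function names and a constant $`\sigma _m`$ for simplicity (in practice $`\sigma _m`$ varies with magnitude):

```python
import numpy as np

def smoothed_lf(mags, errs, grid):
    """Luminosity function smoothed star-by-star with a Gaussian whose
    dispersion is each star's photometric error."""
    g = np.exp(-0.5*((grid[:, None] - mags[None, :]) / errs[None, :])**2)
    return (g / (np.sqrt(2*np.pi) * errs[None, :])).sum(axis=1)

def edge_filter(grid, phi, sigma_m):
    """Sobel-type edge response E(m) = Phi(m+sigma_m) - Phi(m-sigma_m)."""
    return (np.interp(grid + sigma_m, grid, phi)
            - np.interp(grid - sigma_m, grid, phi))

# TRGB estimate = magnitude of the strongest edge, e.g.:
# grid = np.arange(22.0, 26.0, 0.01)
# i_trgb = grid[np.argmax(edge_filter(grid, smoothed_lf(I, dI, grid), 0.1))]
```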
The TRGB method works best in practice as a distance indicator when the $`I`$band luminosity function sample is restricted to those stars in the halo region only. This is mainly due to three reasons: (1) less crowding, (2) less internal extinction and (3) less contamination by AGB stars, which tend to smear out the “edge” defining the TRGB in the luminosity function. In Field I, the tip position is detected clearly in the luminosity function and filter output for the WF 4 region at $`I_{\text{TRGB}}=23.82\pm 0.15`$ mag. The one-sigma error here is determined roughly by estimating the full width at half maximum of the peak profile defining the TRGB in the filter output function. In the WF 3 region, the tip is also observable, at $`I_{\text{TRGB}}=23.72\pm 0.10`$ mag, slightly brighter than in the case of WF 4. Simulations have shown that the position of the tip shifts to a brighter magnitude due to crowding effects (Madore and Freedman 1995), and that is what we observe on WF 3.
The stellar population in Field II is comprised of more of these brighter stars (which could be AGB stars); thus restricting the luminosity function to the halo region helps especially in determining the TRGB position. Here, we obtain $`I_{\text{TRGB}}=23.71\pm 0.09`$ mag and $`I_{\text{TRGB}}=23.95\pm 0.14`$ mag for the WF 3 and WF 4 fields, respectively. The tip magnitude of WF 3 agrees extremely well with that of the same chip in Field I. However, the TRGB magnitude defined by the WF 4 sample is fainter by $`\sim `$0.17 mag compared to the halo region of Field I. There are several reasons to believe that the TRGB defined by the Field II halo region more likely corresponds to the true distance of M82. First, if one examines the WFPC2 image of Field I closely, the presence of wispy, filamentary structures is recognizable. Such features are likely to increase the uncertainties further due to variable reddening. Another, more important reason for putting less weight on the Field I WF4 data is that there are far fewer stars observed in this region. Madore and Freedman (1995) showed using simulations that the population size does matter in systematically detecting the TRGB position accurately. That is, if not enough stars are sampled in the first bin immediately fainter than the TRGB magnitude, the distance can be overestimated. We show here again how the population sampling size affects our distance estimates. We used the $`V`$ and $`I`$ photometric data of the halo of NGC 5253 (Sakai 1999), which is comprised of 1457 stars that are brighter than $`M_I\sim -3`$, and is considered here as a complete sample. The TRGB magnitude for this galaxy is $`I=23.90`$ mag. $`N`$ stars are then randomly selected from this NGC 5253 database 100 times, and for each realization the smoothed luminosity function is determined. The edge-detection filter is then applied to the luminosity function in the usual fashion to estimate the TRGB magnitude. This exercise was repeated for the case comprised solely of the RGB stars; that is, the stars brighter than the TRGB were excluded from the parent sample. We show the results for $`N=20,100`$ and $`1000`$ in Figure 6, where the number distribution of TRGB magnitudes is shown for each simulation. In Table 4, we list the average offset from the TRGB magnitude. In both cases, for the two smaller samples, the TRGB determination becomes very uncertain, as the RGB population becomes indistinguishable from the brighter intermediate-age AGB population. In the case where the RGB population is undersampled (the second scenario, in which only the RGB stars were included in the sample), the stars around the tip of the RGB are missed, yielding an overestimated distance to this galaxy. Another way to present this effect is to plot the TRGB magnitude as a function of the difference between the number of stars in the 0.15-mag bins immediately brighter and fainter than the TRGB. This is shown in Figure 7. For the least complete sample ($`N=300`$), the difference in number counts in the consecutive bins around the TRGB is merely $`\sim `$20. This figure suggests that a number count difference of at least $`\sim `$40 is needed to estimate the TRGB position accurately.
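This resampling experiment can be reproduced in outline with the two routines sketched above; in the following, the parent-sample arrays and the tip value are placeholders standing in for the NGC 5253 data.

```python
import numpy as np

rng = np.random.default_rng(0)

def trgb_offsets(mags, errs, N, n_trials=100, true_tip=23.90):
    """Offset of the detected TRGB from the complete-sample value when
    only N randomly chosen stars are used (cf. Figure 6 and Table 4)."""
    grid = np.arange(mags.min(), mags.max(), 0.01)
    out = []
    for _ in range(n_trials):
        idx = rng.choice(mags.size, size=N, replace=False)
        phi = smoothed_lf(mags[idx], errs[idx], grid)
        out.append(grid[np.argmax(edge_filter(grid, phi, 0.10))] - true_tip)
    return np.array(out)

# e.g. trgb_offsets(I_5253, dI_5253, N=100).mean() -> average tip offset
```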
Using the photometric data of the WF4 field of Field II, the TRGB is detected at $`I=23.95\pm 0.14`$ mag. The foreground extinction in the line of sight of M82 is $`A_B=0.12`$ mag (Burstein and Heiles 1982). Using the conversions $`A_V/E(V-I)=2.45`$ and $`R_V=A_V/E(B-V)=3.2`$ (Dean, Warren & Cousins (1978), Cardelli et al. (1989) and Stanek (1996)), we obtain $`A_I=0.05`$ mag. To calculate the true modulus to M82, we use the TRGB calibration of Lee et al. (1993), according to which the tip distance is determined via the relation $`(m-M)_I=I_{TRGB}-M_{bol}+BC_I`$, where both the bolometric magnitude ($`M_{bol}`$) and the bolometric correction ($`BC_I`$) depend on the color of the TRGB stars. They are defined by $`M_{bol}=-0.19[Fe/H]-3.81`$ and $`BC_I=0.881-0.243(V-I)_{TRGB}`$. The metallicity is in turn expressed as a function of the $`V-I`$ color: $`[Fe/H]=-12.65+12.6(V-I)_{-3.5}-3.3(V-I)_{-3.5}^2`$, where $`(V-I)_{-3.5}`$ is measured at the absolute $`I`$ magnitude of $`-3.5`$. The colors of the red giant stars range over $`(V-I)_0=1.5`$–2.2 (see Figure 3), which gives the TRGB magnitude of $`M_I=-4.05\pm 0.10`$. We thus derive the TRGB distance modulus of M82 to be $`(m-M)_0=27.95(\pm 0.14)_{\text{random}}[\pm 0.16]_{\text{systematic}}`$ mag. This corresponds to a linear distance of $`3.9(\pm 0.3)[\pm 0.3]`$ Mpc. The sources of errors include (1) the random uncertainty in the tip position (0.14 mag) and (2) the systematic uncertainties, mainly those due to the TRGB calibration (0.15 mag) and the HST photometry zero point (0.05 mag). Unfortunately, because the TRGB method is calibrated on the RR Lyrae distance scale, whose zero point itself is uncertain at the 0.15 mag level, the TRGB zero point subsequently has an uncertainty of 0.15 mag. Recently, Salaris & Cassisi (1997: SC97) presented a theoretical calibration of the TRGB magnitude that utilized the canonical evolutionary models of stars for a combination of various masses and metallicities for $`Y=0.23`$ (Salaris & Cassisi 1996). SC97 find that their theoretical calibration leads to a zero point that is $`\sim `$0.15 mag brighter than the empirical zero point given by Da Costa & Armandroff (1990). They attribute this systematic difference to the small sample of stars observed in the Galactic globular clusters. We did find in the previous section that under-sampling the RGB stars leads to a systematically fainter TRGB magnitude, which seems to be in agreement with SC97. Clearly, the issues pertaining to the TRGB calibration need to be reviewed in detail in the future. In this paper, we adopt a TRGB systematic calibration uncertainty of 0.15 mag based on these studies.
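The calibration chain is compact enough to encode directly; the sketch below reproduces the quoted modulus for representative colors (the specific input colors are illustrative choices within the observed range).

```python
def trgb_distance_modulus(i_trgb, vi_trgb, vi_35, a_i=0.05):
    """Lee et al. (1993) TRGB calibration, as written in the text."""
    feh = -12.65 + 12.6*vi_35 - 3.3*vi_35**2   # metallicity from color
    m_bol = -0.19*feh - 3.81                    # bolometric magnitude of tip
    bc_i = 0.881 - 0.243*vi_trgb                # I-band bolometric correction
    m_i = m_bol - bc_i                          # absolute I magnitude of tip
    return (i_trgb - a_i) - m_i                 # true modulus (m - M)_0

print(trgb_distance_modulus(23.95, 1.8, 1.7))   # ~28.0, cf. 27.95 +/- 0.14
```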
## 5 Stars Brighter than the TRGB: What are they?
It was noted in Figure 4 that Field II appears to have a considerable number of stars that are brighter than the TRGB. There are two possible scenarios to explain what these stars are: (1) blends of fainter stars due to crowding, or (2) intermediate-age asymptotic giant branch (AGB) stars. To gauge how much effect crowding has on the stellar photometry, we turn our attention to Grillmair et al. (1996), who presented HST observations of M32 halo stars. They concluded that the AGB stars detected in the same halo region by Freedman (1989) were mostly due to crowding. Upon convolving the HST data to simulate the 0<sup>′′</sup>.6 image obtained at CFHT, they successfully recovered these brighter “AGB” stars. While HST’s 0<sup>′′</sup>.1 resolution at the distance of M32, 770 kpc, corresponds to 0.37 pc, 0<sup>′′</sup>.6 resolution at the same distance corresponds to 2.2 pc. Our HST M82 data have a resolution of 1.7 pc (0<sup>′′</sup>.1 at 3.2 Mpc), indicating that those stars brighter than the first-ascent TRGB stars are, by analogy with M32, likely blends of fainter stars.
If instead we adopt the second scenario, in which these brighter stars are actually AGB stars, the first striking feature in the CMDs shown in Figure 3 is that Field II contains significantly more AGB stars in comparison to Field I. In particular, we focus on the WF3 chip of each field; we restrict the sample to smaller regions of the WF3 chips where the surface brightness is roughly in the range $`21.0\lesssim \mu _i\lesssim 21.5`$. This corresponds to the lower 3/4 of the WF3 chip in Field I (Regions 3A$`+`$3B in Figure 8), and the upper 3/4 of the WF3 chip in Field II (Regions 3A$`+`$3B). The difference in the AGB populations of the two Fields is compared in terms of $`N_{AGB}/N_{RGB}`$, defined here as the ratio of the number of stars in a 0.5-mag bin brighter than the TRGB to that in a 0.5-mag bin fainter than the TRGB. We chose the 0.5-mag bin here as it should be less affected by the incompleteness of stars detected at magnitudes $`\sim `$1 mag fainter than the TRGB. In calculating the ratios, we also assume that 20% of the fainter giants below the TRGB are actually AGB stars. The ratios of Fields I and II are, respectively, $`N_{AGB}/N_{RGB}=58/193=0.30\pm 0.04`$ and $`164/484=0.64\pm 0.04`$. Restricting the samples further to avoid the more crowded regions, by using only those stars in section 3A, we obtain $`N_{AGB}/N_{RGB}=58/193=0.30\pm 0.04`$ and $`87/172=0.51\pm 0.06`$ for Fields I and II, respectively. These ratios suggest that the difference between the two fields is significant, at a level of 4–5$`\sigma `$. Because these subregions were chosen to match the surface brightness as closely as possible, blending of stars due to crowding should not be a major factor in systematically making Field II much richer in the intermediate-age AGB population compared to Field I.
Although the present analysis cannot by any means rule out crowding as the dominant effect, and there is strong evidence that these brighter stars above the TRGB are blends of fainter stars, we conclude the paper by mentioning a possible connection between the presence of these brighter stars (if real) and the HI distribution around this galaxy. Yun, Ho and Lo (1993) presented VLA observations of M82 which revealed tidal streamers extending $`\sim 10`$ kpc from M82, characterized by two main structures. One of these streamers extends northward from the NE edge of the galaxy, coinciding with our Field II position. The integrated HI flux map of Yun et al. does not, however, reveal any neutral hydrogen in the region around Field I. If M82 is a tidally disrupted system that has undergone direct interaction with M81 and NGC 3077, could this have affected the star-formation history of M82, enhancing more recent star formation at the northeastern edge of the galaxy (Field II)? Answering this question is obviously beyond the scope of this paper, requiring much deeper, higher-resolution observations, such as with the Advanced Camera.
This work was funded by NASA LTSA program, NAS7-1260, to SS. BFM was supported in part by the NASA/IPAC Extragalactic Database.
Table 1: Aperture Corrections

| Chip | F555W | F814W |
| --- | --- | --- |
| Field I | | |
| WF 2 | $`0.048\pm 0.047`$ | $`0.082\pm 0.029`$ |
| WF 3 | $`+0.094\pm 0.040`$ | $`0.127\pm 0.027`$ |
| WF 4 | $`+0.091\pm 0.053`$ | $`+0.013\pm 0.032`$ |
| Field II | | |
| WF 2 | $`0.039\pm 0.038`$ | $`0.029\pm 0.015`$ |
| WF 3 | $`+0.053\pm 0.040`$ | $`+0.185\pm 0.022`$ |
| WF 4 | $`+0.155\pm 0.063`$ | $`+0.005\pm 0.024`$ |
Table 2: Transformation Coefficients

| Chip | C1 | C2 | C3 |
| --- | --- | --- | --- |
| F555W | | | |
| 2 | $`0.957`$ | $`0.052`$ | $`0.027`$ |
| 3 | $`0.949`$ | $`0.052`$ | $`0.027`$ |
| 4 | $`0.973`$ | $`0.052`$ | $`0.027`$ |
| F814W | | | |
| 2 | $`1.822`$ | $`0.063`$ | $`0.025`$ |
| 3 | $`1.841`$ | $`0.063`$ | $`0.025`$ |
| 4 | $`1.870`$ | $`0.063`$ | $`0.025`$ |
Figure Captions
Figure 1: A digital sky survey image of M82. Two footprints indicate the regions of the $`HST`$ WFPC2 observations.
Figure 2: A closeup view of a WFPC2 field of Field II, WF 2 chip.
Figure 3: $`(V-I)`$ vs. $`I`$ color–magnitude diagrams of each chip in the two fields. Note that a significant number of stars brighter than the TRGB are observed in the WF 2 and WF 3 chips, especially in Field II. The arrow in each CMD shows the location of the TRGB, while the dotted line represents $`V=26.9`$, roughly indicating the incompleteness level.
Figure 4: I–band luminosity histograms for each chip in both fields.
Figure 5: Smoothed luminosity function (top) and edge-detection filter output (bottom) for each position.
Figure 6: Number of simulations detecting the TRGB magnitude plotted on the x–axis. See text for details.
Figure 7: The observed TRGB magnitude in each simulation plotted as a function of the difference between the number of stars in the 1.5-mag bins above and below the TRGB ($`N_+-N_{-}`$).
Figure 8: A schematic showing the regions used to calculate the ratios of AGB–to–RGB stars.
# UrQMD at RHIC Energies
* Center of mass energy of individual collisions
* Final state is dominated by meson-meson and meson-baryon reactions,
* Fraction of high energy baryon-baryon reactions is extremely small
* Net protons, net $`\mathrm{\Lambda }+\mathrm{\Sigma }`$, net $`\mathrm{\Xi }`$, net $`\mathrm{\Omega }`$.
* Rapidity distribution of all net baryons shows a dip
* Baryon to anti-baryon ratio at midrapidity: 3/1
* Distributions of all Pions, charged Pions and Kaons
* More than 1100 Pions at midrapidity,
* Approx. 750 charged Pions at midrapidity
* Approx. 80 Kaons at midrapidity
* Mean $`p_{\perp }`$ as a function of rapidity
* Remnants of Sea-Gull effect visible in the Proton distribution (Dip at $`y=0`$)
* Plateau in the transverse momentum distribution of newly produced particles
* Distributions of Protons and Pions at midrapidity
* Slopes at freeze-out are comparable to SPS (Pb+Pb)
* Inv. slopes increase with particle mass
|
no-problem/9906/chao-dyn9906007.html
|
ar5iv
|
text
|
# Efficient algorithm for detecting unstable periodic orbits in chaotic systems
## Abstract
We present an efficient method for fast, complete, and accurate detection of unstable periodic orbits in chaotic systems. Our method consists of a new iterative scheme and an effective technique for selecting initial points. The iterative scheme is based on the semi-implicit Euler method, which has both fast and global convergence, and only a small number of initial points is sufficient to detect all unstable periodic orbits of a given period. The power of our method is illustrated by numerical examples of both two- and four-dimensional maps.
It has now been a widely accepted notion that unstable periodic orbits (UPOs) constitute the most fundamental building blocks of a chaotic system . Theoretically, the infinite number of UPOs embedded in a chaotic invariant set provides a skeleton of the set, and many dynamical invariants of physical interest, such as the natural measure, the spectra of Lyapunov exponents and fractal dimensions, as well as other statistical averages of physical measurements, can be computed from the infinite set of UPOs in a fundamental way . In Hamiltonian systems, the quantum mechanical density of states in the semiclassical regime can be expressed explicitly in terms of UPOs of the corresponding classical dynamics . The knowledge of UPOs is also of significant experimental interest because they provide a way to characterize and understand the chaotic dynamics of the underlying system . All these call for efficient techniques for detecting UPOs in chaotic systems.
Systematic detection of a complete set of UPOs of high periods embedded in a chaotic set, even in situations where the system’s equations are known, is, however, an extremely difficult problem. A fundamental reason is that the number of UPOs grows exponentially as the period increases, at a rate given by the topological entropy of the chaotic set. The basic requirements for a good detection algorithm are, therefore, fast convergence and the ability to yield a complete set of UPOs .
Recently, a general algorithm for detecting UPOs in chaotic systems was proposed by Schmelcher and Diakonos (SD) who, for the first time, computed UPOs of high periods for systems such as the Ikeda-Hammel-Jones-Moloney map . The success of the SD method relies on a globally convergent iterative scheme: convergence to UPOs can be achieved, in principle, from any initial point. However, as we will discuss shortly, this method is not very efficient from the standpoint of convergence, nor does it provide a satisfactory test for the completeness of the detected UPOs. As a matter of fact, for the Ikeda-Hammel-Jones-Moloney map, only UPOs of periods up to 13 were reported in Ref. , and one of the UPOs of period 10 was not detected.
The aim of this Letter is to present an efficient method for detecting UPOs in general chaotic systems. Our new iterative scheme is based on the semi-implicit Euler method and has the following favorable properties: near an orbit point it exhibits a fast convergence similar to that of the traditional Newton-Raphson (NR) method, while away from the orbit points it is similar to the SD method and, therefore, is globally convergent. Another key ingredient of our method is that we select initial points based on the observation that using orbit points of UPOs of other periods to initialize the search for UPOs of a given period is much more effective than using randomly selected points in the phase space or in the attractor. We find, in most cases, it is sufficient to use only orbit points of period $`p1`$ in order to detect all UPOs of period $`p`$. With such a strategy, we are able to compute UPOs for, say, the Ikeda-Hammel-Jones-Moloney map, of periods up to 22 for a total of over $`10^6`$ orbit points using roughly the same amount of computation required by the SD method to compute all UPOs of periods up to 13 that have less than 6000 orbit points . Due to its efficiency, our method allows us to compute UPOs in higher-dimensional systems, which we illustrate using a four-dimensional chaotic map.
We begin by describing the NR and the SD methods. Consider an $`N`$-dimensional chaotic map: $`𝐱_{n+1}=𝐟(𝐱_n)`$. The orbit points of period $`p`$ can be detected as zeros of the following function:
$$𝐠(𝐱)=𝐟^{(p)}(𝐱)𝐱,$$
(1)
where $`𝐟^{(p)}(𝐱)`$ is the $`p`$-times iterated map of $`𝐟(𝐱)`$. The process of finding zeros of $`𝐠(𝐱)`$ usually begins with the choice of an initial point $`𝐱_0`$, followed by the computation of successive corrections $`𝐱_{\mathrm{new}}=𝐱_{\mathrm{old}}+\delta 𝐱`$, which converge to the desired solution. In the NR method, the corrections are calculated from a set of $`N`$ linear equations:
$$𝐉(𝐱)\delta 𝐱=𝐠(𝐱),$$
(2)
where $`𝐉(𝐱)=\partial 𝐠/\partial 𝐱`$ is the Jacobian matrix. The NR method has excellent convergence properties, approximately doubling the number of significant digits upon every iteration, provided that the initial point is within the linear neighborhood of the solution. While it is relatively easy to find suitable initial points for very small periods (using, for example, a uniform grid, iterations of the map, or a random number generator), the method becomes impractical for UPOs of high periods because the volume of the basin from which $`𝐱_0`$ can be chosen decreases exponentially as the period increases. On the other hand, in the SD method, the corrections are determined as follows:
$$\delta 𝐱=\lambda \mathrm{𝐂𝐠}(𝐱),$$
(3)
where $`\lambda `$ is a small positive number and $`𝐂`$ is an $`N\times N`$ matrix with elements $`C_{ij}\in \{0,\pm 1\}`$ such that each row or column contains only one element that is different from zero. With an appropriate choice of $`𝐂`$ and a sufficiently small value of $`\lambda `$ the above procedure can find any periodic point of a chaotic system. The main advantage of the SD method is that the basin of attraction of each UPO extends far beyond its linear neighborhood, so most initial points converge to a UPO. In fact, the basins of attraction of individual orbit points completely fill a region in the phase space, and any initial point in this region converges to an orbit point.
Schmelcher and Diakonos tested their method by computing the UPOs for the Hénon map and other simple maps, for which the UPOs are known from methods specific to these maps . They also applied the method to the Ikeda-Hammel-Jones-Moloney map, for which no special technique for computing UPOs was previously available. The method appears to be particularly useful when detecting for each period the least unstable periodic orbits . However, if the goal is to determine complete sets of UPOs of increasingly higher periods, the SD method becomes inefficient due to the following two reasons: (i) the convergence rate of Eq. (3) is much slower than that of the NR method, so it takes significantly more steps to reach the desired accuracy ; and (ii) even though the SD scheme is globally convergent, the basins of attraction of individual UPOs are interwoven in a complicated manner, so it is extremely difficult to determine which initial point converges to a particular UPO. Because of this difficulty, the SD method cannot guarantee the detection of all UPOs of a given period.
To overcome the problem of slow convergence, while retaining the global convergence property, we propose the following iteration scheme:
$$[\mathrm{𝟏}\beta g(𝐱)\mathrm{𝐂𝐉}(𝐱)]\delta 𝐱=\mathrm{𝐂𝐠}(𝐱),$$
(4)
where $`g(𝐱)\equiv \|𝐠(𝐱)\|\ge 0`$ is the length of the vector $`𝐠(𝐱)`$, $`\beta >0`$ is an adjustable parameter, and $`𝐂`$ is the same matrix as in Eq. (3). In the vicinity of a UPO, the function $`g(𝐱)`$ tends to zero and the NR method is restored. In fact, it is straightforward to verify that the above scheme retains the quadratic convergence. On the other hand, away from the solution and for sufficiently large values of $`\beta `$, our scheme is similar to Eq. (3) and thus almost completely preserves the global convergence property of the SD method. This similarity is easily understood, since Eq. (3) is the Euler method with step size $`\lambda `$ for solving the following system of ODEs:
$$\frac{\mathrm{d}𝐱}{\mathrm{d}s}=\mathrm{𝐂𝐠}(𝐱).$$
(5)
On the other hand, Eq. (4) is the semi-implicit Euler method with step size $`h=1/[\beta g(𝐱)]`$ for solving the same system of ODEs. Consequently, with a sufficiently small step size, both methods closely follow the solutions to Eq. (5) and thus share the global convergence property.
To illustrate and contrast the convergence properties of the NR, the SD, and our methods, we consider the following simple example: finding zeros of the function $`g(x)=\mathrm{cos}(x^2)`$ in the interval $`(-3,3)`$. The basins of convergence for each method are shown in Fig. 1 with the thick arrows. The NR method converges to all six zeros and the basins are essentially within the linear neighborhood of each point . The SD method converges to the solution $`g(x_0)=0`$ if $`0<\lambda <2/|g^{\prime }(x_0)|`$ and $`𝐂=-\mathrm{sign}(g^{\prime }(x_0))`$. Diagram (b) in Fig. 1 shows the basin of convergence for $`0<\lambda <0.3568`$ and $`𝐂=1`$. Obvious is the global character of convergence to zeros with negative function derivatives, while zeros with positive derivatives serve as basin boundaries. With $`𝐂=-1`$ the convergence directions are reversed. The result of applying our iteration scheme with $`\beta =4.0`$ and $`𝐂=1`$ to the same function is shown in diagram (c). We see that, as in the NR method, all zeros have basins of convergence. However, the basins of zeros with negative function derivatives cover most of the interval, while the basins of the other zeros, as well as the intervals between basins, are reduced and become smaller with increasing value of $`\beta `$. Therefore, our scheme combines the efficiency of the NR method with the global character of the SD algorithm.
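In one dimension the new scheme reduces to $`\delta x=Cg/(\beta |g|-Cg^{\prime })`$, which makes a side-by-side comparison easy. The sketch below (tolerances and cutoffs are arbitrary choices) maps starting points to the zeros of $`\mathrm{cos}(x^2)`$ they reach:

```python
import numpy as np

g  = lambda x: np.cos(x**2)
dg = lambda x: -2*x*np.sin(x**2)

def newton_step(x):                 # Eq. (2) in 1D: dx = -g/g'
    return x - g(x)/dg(x)

def sd_step(x, lam=0.3, C=1):       # Eq. (3) in 1D
    return x + lam*C*g(x)

def new_step(x, beta=4.0, C=1):     # Eq. (4) in 1D
    return x + C*g(x)/(beta*abs(g(x)) - C*dg(x))

def converge(step, x0, tol=1e-12, itmax=10_000):
    x = x0
    for _ in range(itmax):
        x_new = step(x)
        if abs(x_new - x) < tol:
            return x_new            # a zero of g
        x = x_new
        if abs(x) > 10:             # escaped the region of interest
            return np.nan
    return np.nan

for x0 in np.linspace(-2.9, 2.9, 13):
    print(f"{x0:5.2f} -> {converge(new_step, x0)}")
```

Replacing `new_step` by `newton_step` or `sd_step` in the last loop reproduces the qualitative basin structure of Fig. 1.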
Another important ingredient of our method lies in the selection of initial points: we find that the most efficient strategy for detecting UPOs of period $`p`$ is to use UPOs of other periods as initial points. This is understandable, since orbit points cover the attractor in a systematic manner, which reflects the foliation of the function $`𝐟^{(p)}(𝐱)`$ and its iterates. In the cases of the Hénon and the Ikeda-Hammel-Jones-Moloney maps, we are able to detect all UPOs of period $`p`$ using only orbit points of period $`p-1`$, provided that period $`p-1`$ orbits exist. In more complicated cases of higher-dimensional maps, this simple strategy leaves a small fraction of UPOs undetected . However, in all cases, we are able to find these UPOs using period $`p+1`$ points (first we use the incomplete set of period $`p`$ orbits to find period $`p+1`$ points and then use them to complete the detection of period $`p`$ orbits). The main advantage of using orbit points of neighboring periods as initial points is that once we establish the strategy for smaller periods, it works in a similar manner for the detection of UPOs of large period. This allows us to claim with confidence that we detect all UPOs of increasingly longer periods for general multi-dimensional chaotic maps.
We now apply our method to detecting UPOs for the following Ikeda-Hammel-Jones-Moloney map :
$`x^{\prime }`$ $`=`$ $`a+b(x\mathrm{cos}\varphi -y\mathrm{sin}\varphi ),`$
$`y^{}`$ $`=`$ $`b(x\mathrm{sin}\varphi +y\mathrm{cos}\varphi ),`$ (7)
where $`\varphi =k-\eta /(1+x^2+y^2)`$, and the parameters are chosen such that the map has a chaotic attractor: $`a=1.0`$, $`b=0.9`$, $`k=0.4`$, and $`\eta =6.0`$. Detection of UPOs proceeds as follows: UPOs of period 1 and 2 are quickly found using several initial points on the attractor. Starting from $`p=3`$ we use only orbit points of period $`p-1`$ as initial points. We choose $`𝐂`$ from the set of five matrices $`\{𝐂_k|k=1,\mathrm{},5\}`$ provided in Ref. , where $`𝐂_1=\mathrm{𝟏}`$ is the identity matrix. The iteration sequence computed from Eq. (4) is terminated when it either converges to an orbit point or leaves the chaotic attractor. The average number of iterations increases linearly with $`\beta `$, which is understandable since $`\delta 𝐱\sim 1/\beta `$ for large $`\beta `$ and away from an orbit point. However, a small fraction of initial points produces very long sequences which neither converge to a UPO nor leave the attractor. In order to limit the amount of unproductive computation, we set the maximum number of iterations to 4–6 times $`\beta `$, which is sufficient for the majority of iterates to be terminated properly. The quadratic convergence of our scheme allows us to achieve, without much computational effort, accuracy limited only by the computer round-off error. Once the sequence converges to an orbit point, we check whether it belongs to a yet undetected UPO, and if so, we compute the rest of the orbit points by iterating the map and refining the solutions with a couple of NR steps \[we simply set $`\beta =0`$ in Eq. (4)\].
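Eq. (4) itself appears earlier in the paper and is not reproduced here. The sketch below (Python/NumPy; our own illustration, with the seed counts, tolerances, and the subset of $`𝐂`$ matrices chosen by us) assumes the semi-implicit form $`\delta \mathbf{x}=[\beta \|\mathbf{g}\|\mathbf{C}-D\mathbf{g}]^{-1}\mathbf{g}`$ implied by the step size quoted above, with $`\mathbf{g}(\mathbf{x})=\mathbf{f}^{(p)}(\mathbf{x})-\mathbf{x}`$ for period-$`p`$ points of the map (6)–(7). For simplicity it seeds from points on the attractor, as the paper does for periods 1 and 2, rather than from period $`p-1`$ orbits:

```python
import numpy as np

A, B, K, ETA = 1.0, 0.9, 0.4, 6.0   # parameters of the map (6)-(7)

def ikeda(v):
    x, y = v
    phi = K - ETA / (1.0 + x*x + y*y)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([A + B*(x*c - y*s), B*(x*s + y*c)])

def ikeda_jac(v, eps=1e-7):
    # Numerical Jacobian of the map (adequate for a sketch)
    J = np.empty((2, 2))
    for j in range(2):
        dv = np.zeros(2); dv[j] = eps
        J[:, j] = (ikeda(v + dv) - ikeda(v - dv)) / (2.0 * eps)
    return J

def g_and_jac(v, p):
    # g(x) = f^p(x) - x and its Jacobian, accumulated by the chain rule
    J, w = np.eye(2), v.copy()
    for _ in range(p):
        J = ikeda_jac(w) @ J
        w = ikeda(w)
    return w - v, J - np.eye(2)

def detect(seed, p, C, beta=100.0, max_iter=600, tol=1e-12):
    v = seed.copy()
    for _ in range(max_iter):
        gv, G = g_and_jac(v, p)
        if np.linalg.norm(gv) < tol:
            return v
        v = v + np.linalg.solve(beta * np.linalg.norm(gv) * C - G, gv)
        if np.abs(v).max() > 10.0:   # the sequence left the attractor region
            return None
    return None

# Look for period-3 points, seeding from a trajectory on the attractor.
# (Fixed points of f^3 also include period-1 orbits; a full implementation
# would sort the converged points into orbits of their least period.)
Cs = [np.eye(2), np.diag([1.0, -1.0]), np.diag([-1.0, 1.0])]
v = np.array([0.1, 0.1])
for _ in range(1000):                # settle onto the attractor
    v = ikeda(v)
found = []
for _ in range(200):
    v = ikeda(v)
    for C in Cs:
        fp = detect(v, p=3, C=C)
        if fp is not None and all(np.linalg.norm(fp - q) > 1e-6 for q in found):
            found.append(fp)
print(len(found), "distinct period-3 points found")
```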
Figure 2 shows the number of detected UPOs of periods 10 through 18 using different values of $`\beta `$ in the range from 10 to 3000. Note that for every period there exists a value $`\beta =\beta _{\mathrm{min}}(p)`$ above which we are guaranteed to find the maximum number of UPOs. This feature of our scheme strongly suggests that the detected orbits constitute a complete set of UPOs for each period. Since $`\beta _{\mathrm{min}}(p)`$ is approximately proportional to $`\mathrm{e}^{\alpha p}`$, where $`\alpha `$ is a positive constant, we can estimate the value of $`\beta `$ necessary to find all UPOs of increasingly longer periods. The numbers of UPOs for periods up to 13 agree with those of Schmelcher and Diakonos except for period 10, where we have detected one additional orbit. The numbers of orbits of periods 14 through 22, which were not reported previously, are given in Table I.
If we monitor the number of orbits detected with different matrices $`𝐂`$, we note that, for a wide range of values of $`\beta `$, after we use the identity matrix $`𝐂_1`$, only a few UPOs remain undetected. For example, with $`\beta =5000`$ and $`𝐂=𝐂_1`$ in Eq. (4), our method detects 14 699 orbits of period 21, and only one new orbit is detected with $`𝐂=𝐂_2`$. To understand this feature of our method, which is common to all the maps tested, we show in Fig. 3, for the chaotic attractor of Eq. (6), the number of period 13 orbits detected with $`𝐂_1`$ (solid dots) and the number of additional orbits detected with $`𝐂_k`$, $`k=2,\mathrm{},5`$ (triangles). For $`100<\beta <1000`$, almost all UPOs are detected with $`𝐂`$ being the identity matrix. At larger values of $`\beta `$ the number of orbits thus detected decreases, but the remaining orbits are always detected with the other matrices. For $`\beta >10^5`$ the numbers converge to those of the SD iteration scheme, where about half of the orbits are detected with $`𝐂_1`$ and the other half with $`𝐂_2`$ and $`𝐂_3`$. This behavior of our scheme follows directly from the convergence considerations of Fig. 1 and results in a greatly improved efficiency compared to either the NR or the SD method.
Finally, we briefly describe the performance of our method for other maps. In the case of the Hénon map our algorithm works extremely well and, for the standard parameter values $`(a,b)=(1.4,0.3)`$, detects all UPOs up to period 29 with $`\beta <500`$, $`𝐂=𝐂_1`$ and $`𝐂_2`$, using for initialization only orbit points of period $`p-1`$. We have also applied our algorithm to detecting UPOs in a four-dimensional map: two coupled Ikeda maps with coupling of the form $`\varphi _{(1,2)}=k-\eta /(1+x_{(1,2)}^2+y_{(1,2)}^2)+2\pi \epsilon (x_{(2,1)}-x_{(1,2)})`$, with the parameters chosen such that the system has two positive Lyapunov exponents. We estimate the topological entropy of this system to be $`h_\mathrm{T}\approx 1.6`$, so the number of orbits grows extremely fast with increasing orbit length. We have detected complete sets of UPOs up to period 7 with $`\beta <1000`$. We have found that the reliability of the algorithm is not affected by the increased dimensionality of the system. Even though the number of possible matrices $`𝐂`$ in four dimensions is 384, only a dozen of them are needed to detect all UPOs. The necessary set of matrices $`𝐂`$ can be selected empirically when detecting short UPOs and then used in the detection of longer orbits.
In conclusion, we have proposed an efficient algorithm for the detection of UPOs in chaotic systems and have successfully detected a large number of UPOs in several two- and higher-dimensional maps. Our method allows for a verification of the completeness of the detected sets of orbits and achieves accuracy limited only by the round-off error.
This work was supported by AFOSR under Grant No. F49620-98-1-0400 and by NSF under Grant No. PHY-9722156.
|
no-problem/9906/astro-ph9906008.html
|
ar5iv
|
text
|
# On the Continuum Shape of Broad Absorption Line Quasars
## 1. Introduction
In optically selected QSO samples, $`\sim `$10% are broad absorption line (BAL) QSOs (Weymann et al. 1991). BAL QSOs can be subdivided into high- and low-ionization (hi-BALs and lo-BALs) where hi-BALs have absorptions from high ionization species (N v, C iv, and Si iv) and lo-BALs, which make up only $`\sim `$10% of the BAL QSO population, have additional absorptions from low-ionization species (Al iii, Al ii, and sometimes Mg ii). In some cases lo-BALs also have Fe ii and Fe iii absorptions (iron lo-BALs).
Becker et al. (1997) reported the discovery of two new iron lo-BALs from radio-selected samples (FIRST survey; Becker et al. 1995) with one of the objects (1556+3517) showing severe extinction in the spectrum. Goodrich (1997) pointed out that a significant number of BAL QSOs are missed in optically selected samples due to attenuation of the continuum and that the true fraction of BAL QSOs might be as high as 30% instead of 10%. This could mean that these heavily reddened objects are more common among QSOs than previously thought.
The continuum shape of BAL QSOs is often flat or distorted compared to that of non-BAL QSOs (Weymann et al. 1991). Sprayberry and Foltz (1992) showed that the composite spectra of high- and low-ionization BAL QSOs can be brought into coincidence by dereddening the lo-BAL composite spectrum relative to the hi-BAL composite with appropriate extinction laws. This method shows only the tendency for lo-BALs to be redder than hi-BALs on average.
The present work describes a systematic method for estimating the amount of extinction in individual hi- and lo-BAL spectra. A systematic derivation of the extinction for all BAL QSO types allows a homogeneous comparison of the whole data set, which can also be extended to non-BAL QSO spectra. Section 2 describes the observational data, section 3 introduces the dereddening procedure for BAL QSOs, section 4 presents the results and discussion, and conclusions are given in section 5.
## 2. Observational Data
The observational data consist of a total of 26 BAL QSO spectra; 15 objects are from Weymann et al. (1991), 8 from Korista et al. (1993), 2 from Brotherton et al. (1997), 1 from Ogle (1997, private communication) and parts of spectra from Becker et al. (1997) and Wills (1998, private communication) have kindly been made available by the authors. From the sample we chose only those spectra which had spectral coverage of all three bands defined below in order to do the colour measurements.
We divide the quasars into hi- and lo-BAL types (hi and lo in table 1), similarly to the classification in Weymann et al. (1991). However, the classification into hi- and lo-BALs is somewhat ambiguous, and for those cases where it is not clear to which group an object belongs we indicate this with “hi–lo”. Additionally we use the classification of the iron lo-BALs (iron in table 1) which was introduced by Becker et al. (1997). Table 1 (columns 1, 2, and 3) lists the names of the program BAL QSOs, the redshifts and the classifications of the objects.
## 3. Dereddening Procedure
The extinction laws used here are the Milky Way (MW), Large Magellanic Cloud (LMC), and Small Magellanic Cloud (SMC) from Pei (1992), the attenuation curve from Calzetti et al. (1994) and theoretical extinction curves for astronomical silicate with different grain sizes from Laor and Draine (1993).
### 3.1. The Unreddened Continuum
Sprayberry and Foltz (1992) established the idea of dust extinction in BAL QSOs by showing that lo-BALs are reddened relative to hi-BALs. We take a more general approach by comparing BAL QSOs with a representative non-BAL QSO sample. By comparing BAL QSO spectra with a single spectral shape we assume that all BAL QSOs are reddened relative to the representative spectrum, which can be understood as an “intrinsic quasar spectrum”. We refer to subsection 4.1 for a discussion of the intrinsic quasar spectrum.
For the present investigation the following spectra are taken into consideration for determining the continuum shape of an unreddened QSO spectrum: (a) the composite spectrum from Francis et al. (1991; hereafter FSED), (b) the composite spectrum from Zheng et al. (1997; hereafter ZSED), and (c) an artificially steepened FSED (hereafter FSED-plus) with the spectral index (defined by $`F_\nu \propto \nu ^\alpha `$) corresponding to $`\alpha =-1`$ between 1250 and 2100 Å and $`\alpha =+0.4`$ between 2100 and 3100 Å. We note that spectra of BAL QSOs are included in the composite spectrum from Francis et al. (1991), but the final spectral slope of the composite was chosen to have the median slope and thus the BAL continuum will hardly affect the final shape.
The spectral slopes of FSED-plus result from the following consideration. A comparison (e.g. figure 9 in Zheng et al. 1997) shows that FSED and ZSED have an almost identical continuum shape between 1250 Å and 2100 Å, with $`\alpha \simeq -1`$. However, longward of 2100 Å the Francis et al. (1991) composite spectrum is harder, with $`\alpha \simeq -0.3`$, whereas the Zheng et al. (1997) composite remains flat ($`\alpha \simeq -1`$). Examination of the distribution of spectral slopes (Francis et al. 1991, figure 1) shows that $`\alpha `$ has a wide spread around the median value of $`\alpha =-0.32`$. Thus we can consider $`\alpha =-1`$ (soft spectrum) as a lower bound and $`\alpha =+0.4`$ (hard spectrum) as an upper bound for the spectral index at wavelengths $`>2100`$ Å. Our choice of FSED-plus together with the composite spectra FSED and ZSED should cover most of the range in the spectral index.
For the sake of clarity, later in figure 5 we approximate the continuum shape of the composite spectrum FSED as a broken power-law with $`\alpha =-1`$ for 1200 Å $`<\lambda <`$ 2100 Å and $`\alpha =-0.32`$ for $`\lambda >`$ 2100 Å. Note that all measurements are carried out on the composite spectra and not on the approximate power-law.
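Such a broken power-law is trivial to generate for overplotting; below is a minimal sketch (Python/NumPy; the normalization at the break is arbitrary and ours):

```python
import numpy as np

def broken_power_law(wave, alpha_blue=-1.0, alpha_red=-0.32, wave_break=2100.0):
    """Continuum F_nu ~ nu^alpha written in wavelength, F_nu ~ lam^(-alpha).

    alpha_blue applies shortward of the break, alpha_red longward of it;
    the two segments are matched at wave_break so the curve is continuous.
    """
    wave = np.asarray(wave, dtype=float)
    return np.where(wave < wave_break,
                    (wave / wave_break) ** (-alpha_blue),
                    (wave / wave_break) ** (-alpha_red))

wave = np.linspace(1200.0, 3100.0, 500)   # rest-frame wavelength in Angstrom
continuum = broken_power_law(wave)
```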
### 3.2. Definition of the Bands
We define three bands in the rest frame of the quasar, around 1750 Å, 2100 Å, and 3000 Å. Measurement of the continuum level of BAL QSO spectra is difficult, since BAL spectra have numerous absorption lines of high- and low-ionization elements that arise in the broad and narrow line region of the quasar. Therefore, windows, which seem relatively free from either emission or absorption, were defined within the bands as follows:
B1750: at 1715–1735 Å and 1760–1780 Å
B2100: at 1980–2010 Å, 2125–2145 Å, and 2185–2195 Å
B3000: at 2880–2930 Å and 3030–3090 Å
The total coverage of the bands is $`\mathrm{\Delta }\lambda \approx 1250`$ Å, with separations of 350 Å and 900 Å between the bands. This is sufficient to trace extinction effects in the UV region where the extinction curves rise steeply. However, the necessity for a wide spectral coverage restricts the number of objects which can be analyzed. Note that B2100 is located near the 2175 Å feature of the Galactic extinction curve.
Figure 1 shows the position of the windows in the individual bands. As an example we show two BAL spectra: one which has relatively strong absorption lines of low and high ionization species (lo-BAL 0840+3633) and another which has only high ionization lines (hi-BAL 2225−0534).
The choice of the windows can be summarized as follows:
Band around 1750 Å: (figure 1, upper panel) This band lies between C iv and the semiforbidden emission line of C iii\]. Two windows, (1a) at 1715–1735 Å and (1b) at 1760–1780 Å, cover a total of 40 Å. The strong absorption lines are the singlet Al ii$`\lambda \mathrm{\hspace{0.17em}1671}`$ and the doublets Si ii$`\lambda \lambda \mathrm{\hspace{0.17em}1808},1817`$ and Al iii$`\lambda \lambda \mathrm{\hspace{0.17em}1855},1863`$. The Si ii line is only $`\mathrm{\Delta }\lambda =28`$ Å away from window (1b) (which corresponds to a velocity of $`v=c\mathrm{\Delta }\lambda /\lambda \simeq \mathrm{4\hspace{0.17em}000}`$ km s<sup>-1</sup>), but the blueshift of broad absorption lines of Si ii seldom exceeds velocities of 4 000 km s<sup>-1</sup> and is thus unlikely to affect the measurement in this window. Al iii shows much stronger BALs; however, the width of its absorption line is unlikely to exceed the 13 000 km s<sup>-1</sup> that would be needed to affect measurements in (1b). The weaker features are absorptions from multiplet lines of Fe ii and possibly Ni iii. In strongly absorbed quasar spectra these weak emissions and absorptions may have an effect on window (1a). The emission-like feature around 1790 Å is caused by the Fe ii uv 191 transition.
Band around 2100 Å: (figure 1, middle panel) The second band lies between C iii\] and Mg ii. Three windows at 1980–2010 Å, 2125–2145 Å, and 2185–2195 Å cover a total of 60 Å. This definition is close to the one used by Weymann et al. (1991). Window (2a) is positioned before the rise of the semiforbidden line emission of C iii\] $`\lambda \mathrm{\hspace{0.17em}1909}`$ on the red side. The soft rise between (2a) and (2b) with its peak around 2070 Å is likely caused by emission from the Fe ii multiplet. This iron multiplet also shows BALs in some quasars but its absorption is unlikely to reach window (2a) which is 9 000 km s<sup>-1</sup> away. The well-known strong multiplets of Fe ii uv 1, 64 (around 2600 Å) and Fe ii uv 2, 3, 35, 36 (around 2400 Å) produce deep blueshifted BALs in 0840+3633. The series of three strong absorptions is likely caused by Fe ii (around 2250 Å) and possibly Ni ii (around 2210 Å and 2170 Å). Windows (2b) and (2c) are set to avoid these spectral features.
Band around 3000 Å: (figure 1, lower panel) The third band is on the red side of Mg ii. Two windows at 2880–2930 Å and 3030–3090 Å cover a total of 110 Å. The main feature here is the emission lines of Mg ii$`\lambda \lambda \mathrm{\hspace{0.17em}2796},2803`$. The continuum level around Mg ii is raised by numerous Fe ii multiplets and the Balmer continuum of hydrogen. Although the continuum to the red of Mg ii seems smooth, there appears a weak emission feature around 3000 Å, possibly caused by Fe ii, which varies in strength from object to object. Window (3a) is placed between Mg ii and this emission feature and (3b) is placed to the red side of the emission.
### 3.3. Calibration of the Diagnostic Diagram
The measurement of the fluxes $`F_{1750}`$, $`F_{2100}`$, and $`F_{3000}`$ in the bands is obtained by integrating the spectrum over the windows and dividing it by the total width of the bands, i.e. it is the average flux per Å. Although the windows have been carefully chosen, it is important to check, e.g., for overlapping BALs when measuring the bands, and to correct the measurement if necessary.
Table 1 (columns 4 and 5) gives the derived colour indices, which are defined as $`F_{1750}/F_{2100}`$ and $`F_{2100}/F_{3000}`$. The two-colour diagnostic diagram (TCDD) is obtained by plotting the colour indices as ($`x,y`$)-pairs. The errors (shown in figure 2) are estimated by dividing the maximum difference of flux in a band by the square root of the number of bins within the integration interval.
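As a minimal sketch of this measurement (Python/NumPy; our own illustration, assuming the spectrum has already been shifted to the quasar rest frame), the windows of section 3.2 can be coded directly:

```python
import numpy as np

# Rest-frame windows (Angstrom) of the three bands defined in section 3.2
WINDOWS = {
    "B1750": [(1715.0, 1735.0), (1760.0, 1780.0)],
    "B2100": [(1980.0, 2010.0), (2125.0, 2145.0), (2185.0, 2195.0)],
    "B3000": [(2880.0, 2930.0), (3030.0, 3090.0)],
}

def band_flux(wave, flux, band):
    """Average flux per Angstrom over the windows of one band."""
    total, width = 0.0, 0.0
    for lo, hi in WINDOWS[band]:
        sel = (wave >= lo) & (wave <= hi)
        total += np.trapz(flux[sel], wave[sel])   # integrated flux in window
        width += hi - lo                          # nominal window width
    return total / width

def colour_indices(wave, flux):
    """The two colour indices F1750/F2100 and F2100/F3000 of table 1."""
    f = {b: band_flux(wave, flux, b) for b in WINDOWS}
    return f["B1750"] / f["B2100"], f["B2100"] / f["B3000"]
```

In practice each window should also be inspected for overlapping BALs, as noted above, before the indices are accepted.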
The effects of reddening with the various extinction laws, including the SMC, LMC, and MW types from Pei (1992) and astronomical silicate from Laor and Draine (1993), are investigated by applying them to the composite spectra from Francis et al. (1991) and Zheng et al. (1997). After reddening the composite spectra, the intensities in the bands are determined with the same procedure as for the observed spectra.
For simplicity, we use a single grain size for the extinction curve of astronomical silicate, where the following sizes are used (given is the logarithm of the grain size in $`\mu `$m): $`-3`$, $`-2`$, $`-1.5`$, $`-1.35`$, $`-1.25`$, and $`-0.75`$.
Numerous multiplet lines of iron-group elements can either depress the continuum or produce emission features in the bands and influence the measurements of the colour indices. Unfortunately, at this stage model calculations of the low-ionization region cannot provide hints of whether the continuum level in the bands is lowered or raised. At present, our definition of the colour indices provides a homogeneous measurement for all quasars, including lo-BALs, and yields a robust estimate of the continuum shape.
## 4. Results and Discussions
The extinction of the quasars can be obtained from the TCDD once the comparison spectrum for the diagram is chosen (see subsection 4.1). In order to distinguish the type of extinction more accurately we introduce a classification scheme for the determination of the extinction type in subsection 4.2. The information obtained from the TCDD is used to deredden the spectra and we show examples for six of the program quasars at the end of this section.
### 4.1. The Two-Colour Diagnostic Diagrams
Figure 2 shows the positions of the program quasars in the TCDD together with lines produced from FSED-plus (panel a), FSED (panel b), and ZSED (panel c) applying empirical extinction laws.
Figure 3 shows the same program quasars as figure 2, but this time the lines are produced by applying theoretical extinction curves of astronomical silicate with different grain sizes to FSED-plus, FSED, and ZSED (panels a, b, and c, respectively). By comparing figures 2 and 3 it becomes apparent that intermediate and large grain sizes mimic the lines in figure 2 that are produced by the SMC and LMC extinction laws, respectively. The region above the line produced by the SMC law in panel b of figure 2 may be explained by reddening FSED with extinction curves of astronomical silicate with small grain sizes.
The first thing to notice in figure 2 is that the bulk of observations have $`F_{2100}/F_{3000}`$ values which are higher than those derived from ZSED at the top where the curves join (figure 2 panel c). We attribute this to a poor representation of the flux around 3000 Å where the composite is derived from only a few quasars. The position of FSED in the TCDD, on the other hand, is at one of the bluest positions in the diagram (panel b). FSED-plus is further to the blue in the diagram (panel a) as expected since we chose a value resulting in the hardest (bluest) spectrum in our comparison.
Nonetheless, the upper sequence of observations (see e.g. figure 2 panel b, to the left of the SMC-line) implies the existence of a yet bluer quasar spectrum. Such an intrinsic spectrum would have a spectral slope harder than the Francis et al. (1991) composite. The artificially hardened FSED-plus is only one way to represent the intrinsic spectrum; however, it should be remembered that FSED-plus is a hypothetical spectrum, whereas FSED and ZSED are derived from observations, and it is therefore less desirable. Another method of obtaining the intrinsic spectrum of quasars is to take the steepest spectra from a quasar survey and construct a composite spectrum. If such an intrinsic spectrum exists, it would mean that almost all observed quasar spectra are reddened by dust.
### 4.2. Determination of the Extinction Types
From the discussion above we conclude that the ZSED and FSED-plus are less suitable for representing the intrinsic shape of BAL QSOs. Therefore, the determination of the amount and type of extinction will be carried out using FSED.
Figure 4 shows the TCDD with lines for the case of the Francis et al. (1991) composite spectrum when empirical extinction laws are applied. From the positions of the curves produced using the SMC, LMC, and MW extinction laws we define the following classification for the extinction type: SMC, SMC-LMC, LMC, and LMC-MW. Additionally, we define the “silicate” type for objects to the left of the SMC-line. We subdivide the regions between the SMC, LMC, and MW curves and assign each object to an extinction type according to its position in the TCDD. In cases where it is not clear whether the spectrum has silicate or SMC type extinction, this is marked with “silicate/SMC” in table 1. Note that $`F_{2100}/F_{3000}`$ mainly provides information about the amount of dust, while $`F_{1750}/F_{2100}`$ generally distinguishes between extinction types.
The lo-BALs with iron absorptions (0059−2735 and 0840+3633; open squares in figure 4) lie on the SMC-line. Within its errors, even the iron lo-BAL 1556+3517, which suffers severe extinction, lies near the SMC-line (see panel b of figure 2). An alternative solution for iron lo-BALs is given by astronomical silicate with a grain size of about 0.05 $`\mu `$m. However, better agreement of the continuum level at shorter wavelengths, between C iv$`\lambda \mathrm{\hspace{0.17em}1549}`$ and Si iv$`\lambda \mathrm{\hspace{0.17em}1394}`$, is achieved when the spectra are dereddened with the SMC extinction law.
The lo-BALs without iron absorption (1331−0108, 2202−2007, and 2358+0216; open circles in figure 4) have a peculiar continuum shape which none of the empirical extinction laws can explain. In this case astronomical silicate with a small grain size of about 0.03 to 0.04 $`\mu `$m best reproduces the extinction in all three bands.
The hi–lo and hi-BALs, on the other hand, have a wider distribution near the “origin” of the curves in the TCDD. However, their small amount of extinction makes it difficult to uniquely distinguish the type of extinction.
Table 1 (column 6) shows the determined extinction type for each object. For those objects which have positions above the silicate-line (grain size 0.001 $`\mu `$m) no extinction type could be derived and they have been marked with “–” in table 1.
It is remarkable that most of the observations in the TCDD do not lie on the curve produced with the MW extinction law. There are only two quasars with extinction type LMC-MW in our sample. Since $`F_{2100}`$ is very sensitive to the 2175 Å feature, we can conclude from the lack of MW type extinction objects that the 2175 Å bump is virtually absent in BAL QSOs. For the same reason, theoretical extinction curves of graphite grains (Laor and Draine 1993), which have pronounced features around 2200 Å, are unsuitable for dereddening BAL QSO spectra.
There is a clear division in the determined extinction laws (table 1) between lo-BALs and hi-BALs. This can be due to different viewing angles amongst the various types of BAL QSOs (Yamamoto 1998), which would mean that the extinction occurs in separate regions where different dust compositions, grain size distributions, and physical conditions might affect the extinction law. It might also be due to a strong selection effect, since only a few lo-BALs are known.
### 4.3. Extinction with Spatial Distribution of Dust
The spatial distribution of dust is regarded as important for determining the effect of extinction on the spectrum in the case of galaxies (e.g. Witt et al. 1992; Calzetti et al. 1994). In the case of quasars, however, there is only one continuum source with most of the continuum-forming region being too hot for dust to survive. Therefore, the uniform dust-screening model commonly used for stars might be the most suitable configuration of dust for investigations of extinction effects in quasars.
To demonstrate the robustness of the TCDD in the case of a more arbitrary spatial distribution of the dust in extended regions, we apply the attenuation curve (AC) from Calzetti et al. (1994) to FSED. Figure 4 illustrates that the AC-line lies close to the SMC-line on its left side, and shows that a configuration more complicated than simple screening does not significantly affect the shape of the lines in the diagram. We expect that a more realistic treatment of the dust distribution around quasars, with a full solution of the radiative transfer problem, is unlikely to invalidate our assumption of the dust-screen model, in which the absorption occurs outside the BAL region.
### 4.4. Dereddening of Quasar Spectra
The method employed by Sprayberry and Foltz (1992) to show evidence for extinction in BAL QSOs used only composite spectra from Weymann et al. (1991). We show in our diagram (figure 4) that the composite spectra they used, namely the hi- and lo-BALs, are both affected by dust. This means that estimation of the extinction in lo-BALs was done with respect to the intrinsically reddened composite spectrum of hi-BALs and can only be regarded as a hint for dust extinction in quasars.
In our case the type and amount of intrinsic reddening of BAL QSOs are obtained from the TCDD by looking for the observed points’ nearest positions to the lines produced by the extinction laws. Since we apply only standard extinction laws (SMC, LMC, MW, and astronomical silicate) we assign, for simplicity, one type for the cases where positions in the TCDD indicate an intermediate type of extinction.
Several trials of the dereddening process were made for each object with the suggestion offered by the TCDD as first guess. In most cases, the extinction predicted by the TCDD gave satisfactory results. Then the spectra in the region around C iv and Si iv were examined for the quality of the fit and the amount of extinction slightly readjusted by eye to match the continuum level between C iv and Si iv. A more systematic fit would be possible if another band could be defined in this region. However, the continuum in this region is subjected to the C iv broad absorption line, whose maximal width strongly varies from object to object. This makes it impossible to define a band between C iv and Si iv that would serve for all BAL QSOs as a measure of the continuum level.
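Schematically, the dereddening step multiplies the observed flux by $`10^{0.4E(B-V)k(\lambda )}`$, where $`k(\lambda )=A(\lambda )/E(B-V)`$ is the chosen extinction curve. A minimal sketch follows (Python/NumPy; `k_lambda` stands in for the Pei 1992 or Laor and Draine 1993 curves, which are not reproduced here):

```python
import numpy as np

def deredden(wave, flux, ebv, k_lambda):
    """Remove extinction: F_intrinsic = F_obs * 10**(0.4 * E(B-V) * k(lam)).

    k_lambda(wave) must return the normalized extinction curve
    A(lambda)/E(B-V), e.g. an SMC, LMC, or MW curve from Pei (1992),
    or an astronomical-silicate curve from Laor and Draine (1993).
    """
    return flux * 10.0 ** (0.4 * ebv * np.asarray(k_lambda(wave)))

# Usage: start from the extinction type and amount suggested by the TCDD,
# then adjust ebv by eye until the continuum between C IV and Si IV
# matches the comparison spectrum (FSED), as described above.
```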
### 4.5. Examples of Dereddened Spectra
Six objects are chosen (marked with an asterisk in table 1) for presentation of the dereddened spectra, as follows: the object with the highest extinction (1556+3517), to show that the recovery of the spectrum works correctly for extreme cases; at intermediate extinction, representatives of the various extinction types (2225−0534, 0840+3633, and 1331−0108, which we have determined to be of LMC, SMC, and silicate type, respectively), to show that spectra dereddened with the predicted extinction types indeed recover the overall shape of the spectrum; and two objects taken from the region with the highest population density in the TCDD (1228+1216 and 1303+3080), one of them near the MW-line in figure 4.
Figure 5 shows dereddened and observed spectra together with the corresponding power-law continua. Numerous emission lines significantly change the local shape of the quasar continuum which cannot be presented with the power-law and therefore we show FSED in panel (a) as a guide of the continuum level.
The iron lo-BAL 0840+3633 in panel (a) was dereddened with SMC type extinction. The successful recovery of the spectrum shows that the strong Mg ii and Fe ii absorption lines affect the flux measurements only marginally. It is striking that even the heavily reddened spectrum of 1556+3517 (panel b) can be recovered with the extinction values determined from the TCDD. In this case the ratio of the flux in B2100 between the observed and dereddened spectra is a factor of $`60`$. For these two objects, adjustment of $`E(B-V)`$ to values slightly smaller than suggested by the TCDD was necessary.
The dereddened spectrum of the weakly reddened hi-BAL 1228+1216 (panel c) almost exactly matches FSED. The significantly more reddened hi-BAL 2225−0534 (panel d), for which the LMC type extinction curve was applied, is also in good agreement with FSED. Note that FSED is only shown in panel (a) of figure 5.
Dereddening of the hi-BAL 1303+3080 (panel e), which was classified as LMC-MW, with a MW type extinction law gave somewhat better results than did the LMC type. In the TCDD (figure 4) this object is the closest to the MW-line for which measurable extinction could be derived. We show one example of astronomical silicate type extinction from our sample, 1331−0108 (panel f). In this case, dereddening with the extinction values derived from the TCDD also leads to satisfactory results. We note that dereddening of 1331−0108 with the SMC type extinction law, as figure 2 (panel a) would suggest, cannot recover the region around C iv, whereas the steep change of the silicate extinction curve in this region gives a better fit.
In general, better fits were achieved with the empirical extinction laws. The only exceptions in the whole quasar sample were 1331−0108, 2202−2007, and 2358+0216, for which only astronomical silicate with a grain size smaller than for the other objects could recover the hump-like shape of the continuum around 1750 Å.
## 5. Conclusions
26 BAL QSOs have been analyzed in this paper for which we have defined bands to measure the flux and constructed a two-colour diagnostic diagram (TCDD). Based on the TCDD we introduced a classification scheme for the determination of the extinction type and estimation of the amount of extinction for individual objects. Using this information we can deredden observed spectra with standard extinction laws (SMC, LMC, MW, and astronomical silicate). From our investigation we draw the following conclusions:
For low-ionization BAL QSOs, the SMC type extinction law is appropriate for dereddening while the range of extinction laws for high-ionization BAL QSOs varies from Milky Way, LMC to SMC.
The TCDD also indicates that the Milky Way type extinction curve is not applicable for most of the BAL QSOs.
The Francis et al. (1991) composite spectrum is more suitable than Zheng et al. (1997) composite spectrum for the representation of the continuum shape of BAL QSOs.
The similarity of the shapes of BAL QSOs’ dereddened continua to those of non-BAL QSOs suggests that both types of quasar have the same kind of continuum source. This means that a representative non-BAL (normal) QSO spectrum such as the composite spectrum from Francis et al. (1991) might be appropriate for determining the continuum shape of incident radiation for the broad line region.
There is a hint of the existence of an “intrinsic” continuum shape of quasars that is bluer than the Francis et al. (1991) composite spectrum. This would imply that all quasars observed might have spectra that are reddened by dust.
We are grateful to M. S. Brotherton, P. M. Ogle, B. J. Wills, and R. J. Weymann for making available the observational data. We would also like to thank P. J. Francis for making available the composite spectrum from Francis et al. (1991). For valuable comments on the manuscript we thank E. J. Wampler, V. Korchagin, T. Nakajima, A. Kučinskas, and M. Iye. Finally, we thank Y. McLean and T. Hoffmann for careful reading of the manuscript. This work was supported by the Japan Society for the Promotion of Science.
## References
Becker R.H., Gregg M.D., Hook I.M., McMahon R.G., White R.L., Helfand D.J. 1997, ApJ 479, L93
Becker R.H., White R.L., Helfand D.J. 1995, ApJ 450, 559
Brotherton M.S., Tran H.D., van Breugel W., Dey A., Antonucci R. 1997, ApJ 487, L113
Calzetti D., Kinney A.L., Storchi-Bergmann T. 1994, ApJ 429, 582
Francis P.J., Hewett P.C., Foltz C.B., Chaffee F.H., Weymann R.J., Morris S.L. 1991, ApJ 373, 465 (FSED)
Goodrich R.W. 1997, ApJ 474, 606
Korista K.T., Voit G.M., Morris S.L., Weymann R.J. 1993, ApJS 88, 357
Laor A., Draine B.T. 1993, ApJ 402, 441
Pei Y.C. 1992, ApJ 395, 130
Sprayberry D., Foltz C.B. 1992, ApJ 390, 39
Weymann R.J., Morris S.L., Foltz C.B., Hewett P.C. 1991, ApJ 373, 23
Witt A.N., Thronson H.A. Jr, Capuano J.M. Jr 1992, ApJ 393, 611
Yamamoto T. M. 1998, PhD Thesis, The University of Tokyo
Zheng W., Kriss G.A., Telfer R.C., Grimes J.P., Davidsen A.F. 1997, ApJ 476, 469 (ZSED)
|
no-problem/9906/hep-ph9906436.html
|
ar5iv
|
text
|
# What can we learn from the Caldwell plot?

(Talk given at DIS’99, Zeuthen, Germany, April 1999)
## 1 Introduction
The Caldwell plot of $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ presented at the DESY Workshop in November 1997 surprised the community. The results appeared to indicate that we had reached a region in x and $`Q^2`$ where pQCD is no longer valid. DGLAP evolution led us to expect that $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ at fixed $`Q^2`$ would be a monotonically increasing function of $`\frac{1}{x}`$, whereas a superficial glance at the data suggests that the logarithmic derivative of $`F_2`$ deviates from the expected pQCD behaviour, and has a turnover in the region of $`2\lesssim Q^2\lesssim 4`$ GeV<sup>2</sup> (see fig.1, where the ZEUS data and the GRV’94 predictions are shown). Opinions were also voiced that the phenomenon is connected with the transition from ”hard” to ”soft” interactions.
Amongst the problems one faces in attempting to comprehend the data is the fact that, due to kinematic constraints, the data are sparse, and each point shown pertains to a different pair of values of x and $`Q^2`$. We lack the luxury of having measurements at several different values of x for fixed values of $`Q^2`$, which would allow one to deduce the detailed behaviour of $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$.
## 2 Results
We show that the Caldwell plot is in agreement with the pQCD expectations once screening corrections (SC), which become more important as one goes to lower values of x and $`Q^2`$, are included. To provide a check of our calculations, we compare with the results one derives using the ALLM’97 parametrization , which we use as a ”pseudo data base”.
Following the method suggested by Levin and Ryskin and by Mueller , we calculate the SC pertaining to $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ for both the quark and the gluon sectors. In fig.2 we show our results, as well as those of ALLM, compared with the experimental data.
In figs.3 and 4 we display our calculations of the logarithmic derivative of $`F_2`$ after SC have been incorporated, together with the ALLM results: in fig.3 for fixed values of $`Q^2`$ and varying values of x, and in fig.4 for fixed x and varying values of $`Q^2`$. In fig.4 we also show the experimental results for comparison. We note that $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ at fixed $`Q^2`$, both in our calculations and in the ”pseudo data” (ALLM), remains a monotonically increasing function of $`\frac{1}{x}`$.
From fig.4 we note that for fixed x, $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ decreases as $`Q^2`$ becomes smaller. The decrease becomes stronger as we go to lower values of x. This phenomenon, which is due to SC, adds to the confusion in interpreting the Caldwell plot.
## 3 Conclusions
1) We have obtained a good description of $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ for x $`\lesssim `$ 0.1.
2) At low $`Q^2`$,
$$\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}\propto Q^2$$
both in the pseudo data and in our calculations.
3) Our results suggest that there is a smooth transition between the ”soft” and ”hard” processes.
4) The apparent turnover of $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}(Q^2/Q_0^2)}`$ is an illusion, created by the experimental limitation of measuring the logarithmic derivative of $`F_2`$ only at particular correlated values of $`Q^2`$ and x; a toy illustration is sketched below.
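The following toy illustration (ours, not part of the talk; all functional forms and numbers are invented for demonstration only) shows how a slope that is monotonic in $`\frac{1}{x}`$ at fixed $`Q^2`$, but damped as $`Q^2`$ at low $`Q^2`$, develops an apparent turnover when sampled along a HERA-like locus where $`Q^2`$ shrinks together with x:

```python
import numpy as np

def toy_slope(x, q2):
    # Invented toy model: grows monotonically with 1/x at fixed Q^2,
    # and is damped roughly as Q^2 at low Q^2 (mimicking screening).
    return x ** (-0.3) * q2 / (q2 + 2.0)

# Caldwell-style sampling: each x comes with its own (correlated) Q^2
x = np.logspace(-5, -1, 25)
q2_locus = 1.0e5 * x              # toy HERA-like correlation, Q^2 in GeV^2

along_locus = toy_slope(x, q2_locus)   # shows an apparent turnover
fixed_q2 = toy_slope(x, 10.0)          # monotonic in 1/x, no turnover

i = along_locus.argmax()
print(f"apparent turnover at x ~ {x[i]:.1e}, Q^2 ~ {q2_locus[i]:.1f} GeV^2")
```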
The detailed calculations and results that this talk was based on appear in and
## 4 Acknowledgements
I would like to thank my friends and colleagues Genya Levin and Uri Maor for an enjoyable and fruitful collaboration.
|
no-problem/9906/quant-ph9906034.html
|
ar5iv
|
text
|
# Classical interventions in quantum systems. II. Relativistic invariance
Asher Peres (E-mail: peres@photon.technion.ac.il)
Department of Physics, Technion—Israel Institute of Technology, 32 000 Haifa, Israel
Abstract
If several interventions performed on a quantum system are localized in mutually space-like regions, they will be recorded as a sequence of “quantum jumps” in one Lorentz frame, and as a different sequence of jumps in another Lorentz frame. Conditions are specified that must be obeyed by the various operators involved in the calculations so that these two different sequences lead to the same observable results. These conditions are similar to the equal-time commutation relations in quantum field theory. They are sufficient to prevent superluminal signaling. (The derivation of these results does not require most of the contents of the preceding article. What is needed is briefly summarized here, so that the present article is essentially self-contained.)
PACS numbers: 03.65.Bz, 03.30.+p, 03.67.\*
Physical Review A 61, 022117 (2000)
I. THE PROBLEM
Quantum measurements are usually considered as quasi-instantaneous processes. In particular, they affect the wave function instantaneously throughout the entire configuration space. Measurements of finite duration cannot alleviate this conundrum. Is this quasi-instantaneous change of the quantum state, caused by a local intervention, consistent with relativity theory? The answer is not obvious. The wave function itself is not a material object forbidden to travel faster than light, but we may still ask how the dynamical evolution of an extended quantum system that undergoes several measurements in distant spacetime regions is described in different Lorentz frames.
Difficulties were pointed out long ago by Bloch , Aharonov and Albert , and many others . Still before them, in the very early years of quantum mechanics, Bohr and Rosenfeld had given a complete relativistic theory of the measurement of quantum fields, but these authors were not concerned about the properties of the new quantum states that resulted from these measurements and their work does not answer the question that was raised above. Other authors considered detectors in relative motion, and therefore at rest in different Lorentz frames. These works also do not give an explicit answer to the above question: a detector in uniform motion is just as good as one that has undergone an ordinary spatial rotation (accelerated detectors involve new physical phenomena and are not considered in this article). The point is not how individual detectors happen to move, but how the effects due to these detectors are described in different ways in one Lorentz frame or another.
In the preceding article , the notion of measurement was extended to the more general one of intervention. An intervention consists of the acquisition and recording of information by a measuring apparatus, possibly followed by the emission of classical signals for controlling the execution of further interventions. More generally, a consequence of the intervention may be a change of the environment in which the quantum system evolves. These effects are the output of the intervention. These notions are refined in Sect. II of the present article so as to be applicable to relativistic situations.
A relativistic treatment is essential to analyze space-like separated interventions, such as in Bohm’s version of the Einstein-Podolsky-Rosen “paradox” (hereafter EPRB) which is sketched in Fig. 1, with two coordinate systems in relative motion. In that experiment, a pair of spin-$`\frac{1}{2}`$ particles is prepared in a singlet state at time $`t_0`$ (referred to one Lorentz frame) or $`t_0^{}`$ (referred to another Lorentz frame). The particles move apart and are detected by two observers. Each observer measures a spin component along an arbitrarily chosen direction. The two interventions are mutually space-like as shown in the figure. Event A occurs first in $`t`$-time, and event B is the first one in $`t^{}`$-time. The evolution of the quantum state of this bipartite system appears to be genuinely different when recorded in two Lorentz frames in relative motion. The quantum states are not Lorentz-transforms of each other. Yet, all the observable results are the same. Consistency of the theoretical formalism imposes definite relationships between the various operators used in the calculations. These are investigated in Sect. III.
Another example, this one taken from real life, is the detection system in the experimental facility of a modern high energy accelerator . Following a high energy collision, thousands of detection events occur in locations that may be mutually space-like. Yet, some of the detection events are mutually time-like, for example when the world line of a charged particle is recorded in an array of wire chambers. High energy physicists use a language which is different from the one in the present article. For them, an “event” is one high energy collision together with all the subsequent detections that are recorded. This “event” is what I call here an experiment (while they call “experiment” the complete experimental setup that may be run for many months). Their “detector” is a huge machine weighing thousands of tons, while here the term detector means each elementary detecting element, such as a new bubble in a bubble chamber or a small segment of wire in a wire chamber. (A typical wire chamber records only which wire was excited. However, it is in principle possible to approximately locate the place in that wire where the electric discharge occurred, if we wish to do so.) Apart from the above differences in terminology, the events that follow a high energy collision are an excellent example of the circumstances discussed in the present article.
Returning to the Einstein-Podolsky-Rosen conundrum, we must analyze whether it actually involves a genuine quantum nonlocality. Such a claim has led some authors to suggest the possibility of superluminal communication. This would have disastrous consequences for relativistic causality . Bell’s theorem asserts that it is impossible to mimic quantum correlations by classical local “hidden” variables, so that any classical imitation of quantum mechanics is necessarily nonlocal. However Bell’s theorem does not imply the existence of any nonlocality in quantum theory itself. It is shown in Sect. IV that quantum measurements do not allow any information to be transmitted faster than the characteristic velocity that appears in the Green’s functions of the particles involved in the experiment. In a Lorentz invariant theory, this limit is the velocity of light, of course. The last section is devoted to a few concluding remarks.
II. RELATIVISTIC INTERVENTIONS
This section includes a brief summary of some parts of the preceding article and contains all the material necessary to make the present one self-contained. Besides this summary, new notions are introduced to cope with the relativistic nature of the phenomena under discussion.
First, recall that each intervention is described by a set of classical parameters . The latter include the location of that intervention in spacetime, referred to an arbitrary coordinate system. The coordinates are classical numbers, just as time in the Schrödinger equation is a classical parameter. We also have to specify the speed and orientation of the apparatus in that coordinate system and various other input parameters that define the experimental conditions under which the measuring apparatus operates. The input parameters are determined by classical information received from the past light-cone at the point of intervention, or they may be chosen arbitrarily (in a random way) by the observer and/or the apparatus.
I just mentioned the existence of a past light-cone. Actually, the only notion needed at the present stage is a partial ordering of the interventions: namely, there are no closed causal loops. This property defines the terms earlier and later. The input parameters of an intervention are deterministic (or possibly stochastic) functions of the parameters of earlier interventions, but not of the stochastic outcomes resulting from later interventions, as explained below.
In the conventional presentation of non-relativistic quantum mechanics, each intervention has a (finite) number of outcomes, which are also known as “results of measurements” (for example, this or that detector clicks). In a relativistic treatment, the spatial separation of the detectors is essential and each detector corresponds to a different intervention. The reason is that if several detectors are set up so that they act at a given time in one Lorentz frame, they would act at different times in another Lorentz frame. However, a knowledge of the time ordering of events is essential in our dynamical calculations, so that we want the parameters of an intervention to refer unambiguously to only one time (indeed to only one spacetime point). Therefore, an intervention can involve only one detector and it can have only two possible outcomes: either there was a “click” or there wasn’t.
Note that the absence of a click, while a detector was present, is also a valid result of an intervention. The state of the quantum system does not remain unchanged: it has to change to respect unitarity. The mere presence of a detector that could have been excited implies that there has been an interaction between that detector and the quantum system. Even if the detector has a finite probability of remaining in its initial state, the quantum system correlated to the latter acquires a different state . The absence of a click, when there could have been one, is also an event and is part of the historical record.
The effect of an intervention on a quantum system initially prepared in the state $`\rho `$ is given by Eq. (20) in the preceding article:
$$\rho \to \rho _\mu ^{\prime }=\sum _m A_{\mu m}\,\rho \,A_{\mu m}^{\dagger },$$
(1)
where $`\mu `$ is a label that indicates which detector was involved and whether or not it was activated. The initial $`\rho `$ is assumed to be normalized to unit trace, and the trace of $`\rho _\mu ^{\prime }`$ is the probability of occurrence of outcome $`\mu `$. Each symbol $`A_{\mu m}`$ in the above equation represents a matrix (not a matrix element). These may be rectangular matrices where the number of rows depends on $`\mu `$. The number of columns is of course equal to the order of the initial $`\rho `$. Thus, the Hilbert space of the resulting quantum system may have a different number of dimensions than the initial one. A quantum system whose description starts in a given Hilbert space may evolve in a way that requires a set of Hilbert spaces with different dimensions. If one insists on keeping the same Hilbert space for the description of the entire experiment, with all its possible outcomes, this can still be achieved by defining it as a Fock space.
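In matrix form, Eq. (1) is straightforward to realize numerically. The sketch below (Python/NumPy; an illustration of ours, not part of the article) applies the Kraus matrices of one intervention to a state and returns the outcome probabilities together with the normalized post-intervention states:

```python
import numpy as np

def intervene(rho, kraus_sets):
    """Apply Eq. (1) for each possible outcome mu.

    kraus_sets[mu] is the list of matrices A_{mu m}; the matrices may be
    rectangular, so the output Hilbert space can differ from the input one.
    Returns, for each outcome, its probability and the normalized state.
    """
    results = []
    for A_list in kraus_sets:
        rho_mu = sum(A @ rho @ A.conj().T for A in A_list)
        p = np.real(np.trace(rho_mu))
        results.append((p, rho_mu / p if p > 0 else rho_mu))
    return results

# Example: a sharp spin-z test on the state |+><+| (hypothetical numbers)
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho = plus @ plus.conj().T
P_up, P_down = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
for p, _ in intervene(rho, [[P_up], [P_down]]):
    print(f"outcome probability: {p:.3f}")   # 0.500 each
```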
Each experiment yields a record that comprises a complete list of which detectors were available (including when and where) and whether these detectors reacted. Such a record is objective: everyone agrees on what happened (e.g., which detectors clicked) irrespective of the state of motion of the observers who read these records. Therefore everyone agrees on the relative frequency of each type of record among all the records that are observed if the experiment is repeated many times, and the theoretical probabilities also have to be the same for everyone.
What is the role of relativity theory here? We may likewise ask what is the role of translation and/or rotation invariance in a nonrelativistic theory. The point is that the rules for computing quantum probabilities involve explicitly the spacetime coordinates of the interventions. Lorentz invariance (or rotation invariance, as a special case) says that if the classical spacetime coordinates are subjected to a particular linear transformation, then the probabilities remain the same. This invariance is not trivial because the rule for computing the probability of occurrence of a given record involves a sequence of mathematical operations corresponding to the time ordered set of all the relevant interventions. If we only consider the Euclidean group, all we have to know is how to transform the classical parameters, and the wave function, and the various operators, under translations and rotations of the coordinates. However, when we consider genuine Lorentz transformations, we have not only to Lorentz-transform the above symbols, but we are faced with a new problem: the natural way of calculating the result of a sequence of interventions, namely by considering them in chronological order, is different for different inertial frames. The issue is not only a matter of covariance of the symbols at each intervention and between consecutive interventions. There are genuinely different prescriptions for choosing the sequence of mathematical operations in our calculation. The principle of relativity asserts that there are no privileged inertial frames. Therefore these different orderings ought to give the same set of probabilities, and this demand is not trivial.
The experimental records are the only real thing we have to consider. Their observed relative frequencies are objective numbers and are Lorentz invariant. On the other hand, wave functions and operators are mathematical concepts useful for computing quantum probabilities, but they have no real existence . All the difficulties that have been associated with a relativistic theory of quantum measurements are due to attributing a real nature to the symbols that represent quantum states.
Note also that while interventions are localized in spacetime, quantum systems are pervasive. In each experiment, irrespective of its history, there is only one quantum system. The latter typically consists of several particles or other subsystems, some of which may be created or annihilated by the various interventions. The next two sections of this article are concerned with sharp localized interventions on quantum systems that freely evolve throughout spacetime between these interventions, and in particular with the Lorentz covariance of the results.
III. TWO MUTUALLY SPACELIKE INTERVENTIONS
Consider again the EPRB gedankenexperiment which is depicted in Fig. 1, with two coordinate systems in relative motion. There exists a Lorentz transformation connecting the initial states $`\rho `$ (at time $`t_0`$) and $`\rho ^{}`$ (at time $`t_0^{}`$) before the two interventions, and likewise there is a Lorentz transformation connecting the final states at times $`t_f`$ and $`t_f^{}`$ after completion of the two interventions. On the other hand, there is no Lorentz transformation relating the states at intermediate times represented by the lines that pass between interventions A and B . This may be contrasted with the ontology of classical relativistic theory. Classical theory asserts that fields, velocities, etc., transform in a definite way and that the equations of motion of particles and fields behave covariantly. For example if the expression for the Lorentz force is written $`f_\mu =F_{\mu \nu }u^\nu `$ in one frame, the same expression is valid in any other frame. These symbols ($`f_\mu `$, etc.) have objective values. They represent entities that really exist, according to the theory. On the other hand, wave functions have no objective value. They do not transform covariantly when there are interventions. Only the classical parameters attached to each intervention transform covariantly. Yet, in spite of the non-covariance of $`\rho `$, the final results of the calculations (the probabilities of specified sets of events) are Lorentz invariant.
Note that each line in Fig. 1 represents one instant of the time coordinate, as in the ordinary non-relativistic formulation of quantum mechanics. There is no way of defining a relativistic proper time for a quantum system which is spread all over space. It is possible to define a proper time for each apparatus, which has classical coordinates and follows a continuous world-line. However, this is not necessary. We are only interested in a discrete set of interventions, and the latter are referred to a common coordinate system that covers the whole of spacetime. There is no role for the private proper times that might be attached to the apparatuses’ world-lines.
If we attempt to generalize the parallel straight lines in Fig. 1 to a spacelike foliation in a curved spacetime, as we would have in general relativity, we encounter the difficulty that no such foliation may exist globally. However, there is no need for such a global foliation and in particular we do not assume the validity of a Schwinger-Tomonaga equation, $`i\delta \mathrm{\Psi }/\delta \sigma =H(\sigma )\mathrm{\Psi }`$, as can be found in the work of Aharonov and Albert . The only condition that we need is the absence of closed timelike curves. Namely, if two events can be connected by continuous timelike (or null) curves, without past-future zigzags, then all these curves have the same orientation.
Returning to special relativity, consider the evolution of the quantum state in the Lorentz frame where intervention A is the first one to occur and has outcome $`\mu `$, and B is the second intervention, with outcome $`\nu `$. Between these two events, nothing actually happens in the real world. It is only in our mathematical calculations that there is a deterministic evolution of the state of the quantum system. This evolution is not a physical process. For example, the quantum state of Schrödinger’s legendary cat, doomed to be killed by an automatic device triggered by the decay of a radioactive atom, evolves into a superposition of “live” and “dead” states. This is a manifestly absurd situation for a real cat. The only meaning that such a quantum state can have is that of a mathematical tool for statistical predictions on the fates of numerous cats subjected to the same cruel experiment.
What distinguishes the intermediate evolution between interventions from the one occurring at an intervention is the unpredictability of the outcome of the latter: either there is a click or there is no click of the detector. This unpredictable macroscopic event starts a new chapter in the history of the quantum system which acquires a new state, according to Eq. (1). As long as there is no such branching, the quantum evolution will be called free, even though it may depend on external classical fields that are specified by the classical parameters of the preceding interventions.
Quantum mechanics asserts that during the free evolution of a closed quantum system, its state undergoes a unitary transformation generated by a Hamiltonian. The latter depends in a prescribed way on the preceding outcome(s) according to the protocol that has been specified for the experiment. The unitary operator for the evolution following intervention A with outcome $`\mu `$, and ending at intervention B, will be denoted by $`U_{BA_\mu }`$. (More generally, it is possible to consider an evolution which is continuously perturbed by the environment, as in the last section of the preceding article . In that case, the unitary evolution would be replaced by a more general continuous completely positive map, so that instead of $`U_{BA_\mu }`$ there would be Kraus operators with additional indices to be summed over. I shall refrain from using this more general formalism so as not to get into an unnecessarily complicated argument. Anyway, the presence of such a pervasive environment would break Lorentz invariance.)
Note that the chronological order of the indices in $`U_{BA_\mu }`$ is from right to left (just as is the order for consecutive applications of a product of linear operators), and in particular that $`U_{BA_\mu }`$ does not depend on the future outcome at intervention B. Likewise, there is a unitary operator $`U_{A0}`$ for the evolution that precedes event A, and an operator $`U_{fB_\nu }`$ for the final evolution that follows outcome $`\nu `$ of intervention B. The final quantum state at time $`t_f`$ is given by a generalization of Eq. (1):
$$\rho _f=\sum _{m,n}K_{mn}\rho K_{mn}^{\dagger },$$
(2)
where
$$K_{mn}=U_{fB_\nu }B_{\nu n}U_{BA_\mu }A_{\mu m}U_{A0}.$$
(3)
The same events can also be described in the Lorentz frame where B occurs first. We have, with the primed variables,
$$\rho _f^{\prime }=\sum _{m,n}L_{mn}^{\prime }\rho ^{\prime }L_{mn}^{\prime \dagger },$$
(4)
where
$$L_{mn}^{\prime }=V_{fA_\mu }^{\prime }A_{\mu m}^{\prime }V_{AB_\nu }^{\prime }B_{\nu n}^{\prime }V_{B0}^{\prime }.$$
(5)
Here, the unitary operator for the free evolution between the two interventions has been denoted by $`V_{AB_\nu }^{\prime }`$. It is not related in any obvious way to the operator $`U_{BA_\mu }`$. These operators indeed correspond to different slabs of spacetime. Likewise the other evolution operators in the primed coordinates have been called $`V^{\prime }`$ with appropriate subscripts. Note that $`\text{Tr}(\rho _f)=\text{Tr}(\rho _f^{\prime })`$ is the joint probability of occurrence of the records $`\mu `$ and $`\nu `$ during the experiment.
Einstein’s principle of relativity asserts that there is no privileged inertial frame, and therefore both descriptions given above are equally valid. Formally, the state $`\rho _f`$ (at time $`t_f`$) and the state $`\rho _f^{\prime }`$ (at time $`t_f^{\prime }`$) have to be Lorentz transforms of each other. This requirement imposes severe restrictions on the various matrices that appear in the preceding equations. In order to investigate this problem, consider a continuous Lorentz transformation from the primed to the unprimed frame. As long as the order of occurrence of A and B is not affected by this continuous transformation of the spacetime coordinates, the latter is implemented in the quantum formalism by unitary transformations of the various operators. These unitary transformations obviously do not affect the observable probabilities.
Therefore, in order to investigate the issue of relativistic invariance, it is sufficient to consider two Lorentz frames where A and B are almost simultaneous: either A occurs just before B, or just after B. There is of course no real difference in the actual physical situations and the Lorentz “transformation” between these two arbitrarily close frames (primed and unprimed) is performed by the unit operator. In particular, $`U_{BA_\mu }=\mathrm{𝟏}=V_{AB_\nu }^{\prime }`$, since there is no finite time lapse for any evolution to occur between the two events. The only difference resides in our method for calculating the final quantum state: first A then B, or first B then A. Consistency of the two results is obviously achieved if
$$A_{\mu m}B_{\nu n}=B_{\nu n}A_{\mu m},$$
(6)
or
$$[A_{\mu m},B_{\nu n}]=0.$$
(7)
This equal-time commutation relation, which was derived here as a sufficient condition for consistency of the calculations, is always satisfied if the operators $`A_{\mu m}`$ and $`B_{\nu n}`$ are direct products of operators pertaining to the two subsystems:
$$A_{\mu m}=a_{\mu m}\otimes \mathrm{𝟏}\hspace{1em}\text{and}\hspace{1em}B_{\nu n}=\mathrm{𝟏}\otimes b_{\nu n},$$
(8)
where 1 now denotes the unit matrix of each subsystem. This relationship is obviously fulfilled if there are two distinct apparatuses whose dynamical variables commute, and moreover if the dynamical variables of the quantum subsystems commute. This is indeed a necessary condition for legitimately calling them subsystems.
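The consistency argument is easy to check numerically. Below is a minimal sketch (Python/NumPy assumed; the random Kraus sets, with all outcome labels pooled into a single index, are illustrative and not part of the original argument) verifying that operators of the form of Eq. (8) satisfy Eq. (7), and that the final state of Eqs. (2)-(3) is then independent of the order in which the two interventions are applied:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_kraus(k, d):
    """k random Kraus operators on a d-dim space with sum_i M_i^dag M_i = 1."""
    g = rng.normal(size=(k * d, d)) + 1j * rng.normal(size=(k * d, d))
    q, _ = np.linalg.qr(g)           # orthonormal columns: q^dag q = 1_d
    return q.reshape(k, d, d)

dA, dB = 2, 3
A = [np.kron(a, np.eye(dB)) for a in rand_kraus(2, dA)]  # A = a (x) 1, Eq. (8)
B = [np.kron(np.eye(dA), b) for b in rand_kraus(2, dB)]  # B = 1 (x) b, Eq. (8)

# Equal-time commutation relation, Eq. (7):
assert all(np.allclose(a @ b, b @ a) for a in A for b in B)

# With trivial unitaries in Eq. (3), the (unnormalized) final state of Eq. (2)
# is the same whichever intervention is taken to act first:
rho = np.eye(dA * dB) / (dA * dB)
rho_AB = sum(b @ a @ rho @ a.conj().T @ b.conj().T for a in A for b in B)
rho_BA = sum(a @ b @ rho @ b.conj().T @ a.conj().T for a in A for b in B)
assert np.allclose(rho_AB, rho_BA)
```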
The analogy with relativistic quantum field theory is manifest: field operators belonging to points at space-like distances commute (or anticommute in the case of fermionic fields). Quantum field theory mostly uses the Heisenberg picture or the interaction picture, while in the present work it is the Schrödinger picture that is employed. This makes no difference in Eq. (7), which applies to equal times. Could we also have anticommutation relations here? It is easily seen that it is possible to introduce a minus sign on the right hand side of Eq. (6), or even an arbitrary phase factor $`e^{i\varphi _{AB}}`$. However, this generalization will not be investigated in the present article whose subject is quantum mechanics, not quantum field theory.
One may wonder whether the result expressed in Eq. (8) is trivial. Direct products were postulated in the very early years of quantum mechanics by Weyl as the only reasonable way for describing composite systems. Here, this representation was derived from an argument involving Lorentz invariance. However, such a proof may well be circular: it assumes a relativistic partial ordering of events, i.e., the impossibility of superluminal signaling, while this impossibility is proved in quantum field theory by assuming the tensor product representation for composite systems. This issue was also investigated by Rosen in the context of molecular biology. According to Rosen, while any microphysical system can be expressed as a composite of subsystems, there is no reason to suppose that such a factorization is unique, because rings of operators may in general be factored in many distinct ways. Only if it were found that the factorization is unique would this imply that there is only one way in which the state of a system can be synthesized from the states of simpler subsystems.
Returning to Eq. (8), it is important to remember that an intervention can change the dimensions of the quantum system. Here is a simple example. The quantum system initially consists of a pair of spin-$`\frac{1}{2}`$ particles, as in the EPRB experiment. The two observers are called Alice and Bob, as usual. Alice, who intervenes at A, uses an apparatus that contains a subsystem $`𝒮`$ prepared as an entangled state of a spin-$`\frac{1}{2}`$ particle and a particle of spin 1. She receives a particle of spin $`\frac{1}{2}`$ (that is, one of the two particles of the quantum system under observation) and she measures the Bell operator of the composite system formed by that particle and the spin-$`\frac{1}{2}`$ particle in $`𝒮`$. That measurement can have four different outcomes, and according to its result Alice performs one of four specified unitary rotations on the spin 1 particle of $`𝒮`$. She then discards everything but that particle of spin 1, and she releases the latter for future experiments. In this way, Alice’s intervention converts an incoming spin-$`\frac{1}{2}`$ system into an outgoing spin 1 system.
Likewise, Bob’s intervention, located space-like with respect to Alice’s, outputs a spin 2 particle when Bob receives one with spin $`\frac{1}{2}`$. How shall we describe the sequence of events in the frame where Alice is the first one to act, and in the frame where Bob is first?
Alice’s $`A_{\mu m}`$ matrices are direct products of a matrix of dimensions $`3\times 2`$ and the two-dimensional unit matrix, as in Eq. (8). Thereafter, there is a free unitary evolution, where $`U_{BA_\mu }`$ has order 6. Then Bob’s $`B_{\nu n}`$ matrices are direct products of a 3-dimensional unit matrix and one of dimension $`5\times 2`$. The final $`\rho `$ is 15-dimensional (the final quantum system consists of a particle of spin 1 and a particle of spin 2). A similar description holds, mutatis mutandis, in the frame where Bob acts first (this frame is denoted by primes). The unitary matrix $`V_{AB_\nu }^{\prime }`$ for the free evolution from B to A is of order 10, while $`U_{BA_\mu }`$ was of order 6. Obviously these cannot be Lorentz transforms of each other. They would not be Lorentz transforms even if dimensions were the same. However, the final $`\rho _f`$ and $`\rho _f^{\prime }`$ have to be Lorentz transforms of each other.
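The dimension bookkeeping in this example can be traced with random matrices standing in for the various blocks. A shapes-only sketch (Python/NumPy assumed; the matrix entries are meaningless here, only the factorized structure and the matrix orders matter):

```python
import numpy as np

rng = np.random.default_rng(2)

def block(rows, cols):
    """Random complex matrix standing in for one Kraus or evolution block."""
    return rng.normal(size=(rows, cols)) + 1j * rng.normal(size=(rows, cols))

rho0 = np.eye(4) / 4                  # initial pair of spin-1/2 particles

A = np.kron(block(3, 2), np.eye(2))   # Alice: (3 x 2) block (x) 1_2, a 6 x 4 map
U = block(6, 6)                       # free evolution U_{BA_mu}, order 6
B = np.kron(np.eye(3), block(5, 2))   # Bob: 1_3 (x) (5 x 2) block, a 15 x 6 map

K = B @ U @ A                         # one term of Eq. (3): a 15 x 4 matrix
rho_f = K @ rho0 @ K.conj().T
print(rho_f.shape)                    # (15, 15): spin 1 (x) spin 2
```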
Are $`A_{\mu m}`$ and $`A_{\mu m}^{\prime }`$ related by a Lorentz transformation? We have seen that $`A_{\mu m}`$ is a direct product of a matrix of dimension $`3\times 2`$ and the two-dimensional unit matrix. On the other hand, $`A_{\mu m}^{\prime }`$ is a direct product of a matrix of dimension $`3\times 2`$ and the 5-dimensional unit matrix (the latter acts on the spin 2 particle that Bob has produced). Then, the non-trivial parts of $`A_{\mu m}`$ and $`A_{\mu m}^{\prime }`$, both rectangular $`3\times 2`$ matrices, are Lorentz transforms of each other. We may also, if we wish, call the complete $`A_{\mu m}`$ and $`A_{\mu m}^{\prime }`$ matrices “Lorentz transforms” if we accept that unit matrices of any order be considered as Lorentz transforms of each other.
IV. SUPERLUMINAL COMMUNICATION?
Bell’s theorem has led some authors to suggest the feasibility of superluminal communication by means of quantum measurements performed on correlated systems far away from each other . It will now be shown that such a possibility is ruled out by the present relativistic formalism. We have already assumed that there exists a partial ordering of events. Superluminal communication would mean that the deliberate choice of the test performed by an observer (or the random choice of the test performed by his apparatus) could influence, deterministically or at least statistically, the outputs of tests located at a space-like distance from that observer (or apparatus) and having a later time-coordinate. If this were true for any pair of space-like separated events, this would lead to the possibility of propagating information backwards in time between events with time-like separation. For example, we may have A in the past light cone of B, and both A and B space-like with respect to C. Then B could superluminally influence C in the frame where B occurs earlier than C, and in another frame C would likewise influence A, so that B could indirectly influence A. Therefore the assumptions of Lorentz invariance, the existence of random inputs, and the restriction of causal relationships between time-like related events to the future direction are incompatible with causal relationships between space-like separated events.
All this was discussed ad nauseam at the classical level many years ago, when tachyons were popular . More recently, superluminal group velocities have actually been observed in barrier tunneling in condensed matter . However, special relativity does not forbid the group velocity to exceed $`c`$. It is the front velocity of a wave packet that is the relevant criterion for signal transmission, and the front velocity never exceeds $`c`$. What novelty does quantum theory bring to this issue? The common wisdom is that the measuring process creates a “reality” that did not exist objectively before the intervention . Let us examine this claim more carefully.
Consider a classical situation analogous to the EPRB setup: a bomb, initially at rest, explodes into two fragments carrying opposite angular momenta. Alice and Bob, far away from each other, measure arbitrarily chosen components of $`𝐉_1`$ and $`𝐉_2`$. (They can measure all the components, since these have objective values.) Yet, Bob’s measurement tells him nothing of what Alice did, nor even whether she did anything at all. He can only know with certainty what would be the result found by Alice if she measures her J along the same direction as him, and make statistical inferences for other possible directions of Alice’s measurement.
In the quantum world, consider two spin-$`\frac{1}{2}`$ particles in a singlet state. Alice measures $`\sigma _z`$ and finds +1, say. This tells her what the state of Bob’s particle is, namely the probabilities that Bob would obtain +1 if he measures (or has measured, or will measure) $`𝝈`$ along any direction he chooses. This is manifestly counterfactual information: nothing changes at Bob’s location until he performs the experiment himself, or receives a classical message from Alice telling him the result that she found. No experiment performed by Bob can tell him whether Alice has measured (or will measure) her half of the singlet. The rules are exactly the same as in the classical case. It does not matter at all that quantum correlations are stronger than classical ones and violate the Bell inequality.
A seemingly paradoxical way of presenting these results is to ask the following naive question: suppose that Alice finds that $`\sigma _z=1`$ while Bob does nothing. When does the state of Bob’s particle, far away, become the one for which $`\sigma _z=1`$ with certainty? Though this question is meaningless, it has a definite answer: Bob’s particle state changes instantaneously. In which Lorentz frame is this instantaneous? In any frame! Whatever frame is chosen for defining simultaneity, the experimentally observable result is the same, owing to Eq. (7). This does not violate relativity because relativity is built in that equation, as will now be shown in a formal way.
Consider again Eqs. (2) and (3) which give the final (unnormalized) $`\rho _f`$ following two interventions in which Alice gets the result $`\mu `$, and then Bob gets the result $`\nu `$. The probability for that pair of results is $`\text{Tr}(\rho _f)`$. If event B lies in the future light cone of A, there can be ordinary classical communication from A to B and there is no causality controversy. We are interested here in the case where B is spacelike with respect to A. The problem is to prove that the probability of Bob’s outcome $`\nu `$ is independent of whether or not Alice intervenes before him (in any Lorentz frame). Note that the unitary matrices in Eq. (3) are the Green’s functions for the propagation of the complete quantum system, and that its subsystems may interact in a nontrivial way even when they are macroscopically separated (for example, these may be charged particles).
Fortunately, we don’t need to know these Green’s functions explicitly. We simply note that the probabilities that we are seeking are invariant under unitary transformations of the various operators in Eq. (3). In particular, they are not affected by the initial $`U_{A0}`$ and final $`U_{fB_\nu }`$. There still is the intermediate unitary operator $`U_{BA_\mu }`$ for the propagation of the composite quantum system between times $`t_A`$ and $`t_B`$. That quantum system is not a localized object. Its velocity is not a well defined concept and it is meaningless to argue that it is less than the velocity of light. However, it is possible to eliminate $`U_{BA_\mu }`$ by using the same stratagem as in Sect. III: we perform a Lorentz transformation of the spacetime coordinates, which is implemented by a unitary transformation of the quantum operators (so that all probabilities are invariant), in such a way that the time elapsing between interventions A and B is arbitrarily small, and therefore $`U_{BA_\mu }\to \mathrm{𝟏}`$.
The probability that Bob gets a result $`\nu `$, irrespective of Alice’s result, thus is
$$p_\nu =\sum _\mu \text{Tr}\left(\sum _{m,n}B_{\nu n}A_{\mu m}\rho A_{\mu m}^{\dagger }B_{\nu n}^{\dagger }\right).$$
(9)
We now employ Eq. (7) to exchange the positions of $`A_{\mu m}`$ and $`B_{\nu n}`$, and likewise those of $`A_{\mu m}^{\dagger }`$ and $`B_{\nu n}^{\dagger }`$, and then, using the cyclic property of the trace, we move $`A_{\mu m}`$ from the first position to the last one in the product of operators in the traced parenthesis. We thereby obtain expressions
$$\sum _mA_{\mu m}^{\dagger }A_{\mu m}=E_\mu .$$
(10)
As explained in , these are elements of a positive operator valued measure (POVM) that satisfy $`\sum _\mu E_\mu =\mathrm{𝟏}`$. Therefore Eq. (9) reduces to
$$p_\nu =\text{Tr}\left(\sum _nB_{\nu n}\rho B_{\nu n}^{\dagger }\right),$$
(11)
whence all expressions involving Alice’s operators $`A_{\mu m}`$ have totally disappeared. The statistics of Bob’s result are not affected at all by what Alice may do at a spacelike distance, so that no superluminal signaling is possible.
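The disappearance of Alice’s operators can also be illustrated concretely for the singlet example above. A minimal numerical sketch (Python/NumPy assumed), with Alice measuring along a random direction and Bob measuring $`\sigma _z`$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Singlet state of two spin-1/2 particles, rho = |psi><psi|.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Bob's intervention: B_nu = 1 (x) P_nu, projectors onto sigma_z = +/-1.
B = [np.kron(np.eye(2), (np.eye(2) + s * sz) / 2) for s in (+1, -1)]

def bob_probs(state):
    """p_nu = Tr(B_nu state B_nu^dag), as in Eq. (11)."""
    return np.real([np.trace(b @ state @ b.conj().T) for b in B])

p_idle = bob_probs(rho)               # (a) Alice does nothing

# (b) Alice first measures n.sigma along a random direction, A_mu = P_mu (x) 1,
#     summed over her (unrecorded) outcomes mu as in Eq. (9).
n = rng.normal(size=3)
n /= np.linalg.norm(n)
nsig = n[0] * sx + n[1] * sy + n[2] * sz
A = [np.kron((np.eye(2) + s * nsig) / 2, np.eye(2)) for s in (+1, -1)]
rho_after = sum(a @ rho @ a.conj().T for a in A)

assert np.allclose(p_idle, bob_probs(rho_after))  # no superluminal signaling
```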
Note that in order to obtain meaningful results the entire experiment has to be considered as a whole: namely, what was prepared in the past light cone of all the interventions, and the complete set of results that were obtained, and are known in their joint future light cone. It is tempting and it is often possible to dissect an experiment into consecutive steps, just as it is often possible to discuss separately the properties of entangled particles. However, if ambiguities (or conflicting predictions, or any other “paradoxes”) are encountered, what has to be done is to consider the whole entangled system and the whole experiment. Contrary to naive intuition, there is no physical state vector that interpolates between the initial and the final states. Such interpolations can formally be written, but they are not unique, not Lorentz covariant, and therefore they are physically meaningless.
Yet, there is an important exception to the above rule: if there exists a spacetime point such that there are interventions in the past and future light cones of that point, but no intervention is spacelike with respect to it, then it is possible to divide the experiment into two steps, before and after that point. It is then meaningful to define not only an initial state $`\rho _0`$ and a final state $`\rho _f`$, but also an intermediate state $`\rho _i`$ at that point. It is conventional to refer such a state to a spacelike hyperplane that passes through the point, but actually the only role of that hyperplane is to define the Lorentz frame in which we write a mathematical description of the state.
It thus appears that the notion of quantum state should be reassessed. There are two types of states: first, there are physically meaningful states, attached to spacetime points with respect to which no classical intervention has a spacelike location. Then, between any two such points, we may draw a continuous timelike curve and try to attach a quantum state to each one of the points of that curve. These interpolating states can indeed be defined as shown in the present article, by considering a set of parallel spacelike hyperplanes. However, states defined in such a way are merely formal mathematical expressions and they have no invariant physical meaning.
In summary, relativistic causality cannot be violated by quantum measurements. The fundamental physical assumption that was needed in the above proof was that Lorentz transformations of the spacetime coordinates are implemented in quantum theory by unitary transformations of the various operators. This is the same as saying that the Lorentz group is a valid symmetry of the physical system.
V. CONCLUDING REMARKS
In the present article it has been shown that a careful treatment, avoiding any speculations that have no experimental support, leads to the “peaceful coexistence” of quantum mechanics and special relativity. The spacetime coordinates of the observers’ interventions are classical parameters subject to ordinary (classical) Lorentz transformations. The latter are implemented in quantum mechanics by unitary transformations of the operators. There are no essentially new features in the causality issue that arise because of quantum mechanics. Quantum correlations do not carry any information, even if they are stronger than Bell’s inequality allows. The information has to be carried by material objects, quantized or not.
The issue of information transfer is essentially nonrelativistic. Replace “superluminal” by “supersonic” and the argument is exactly the same. The maximal speed of communication is determined by the dynamical laws that govern the physical infrastructure. In quantum field theory, the field excitations are called “particles” and their speed over macroscopic distances cannot exceed the speed of light. In condensed matter physics, linear excitations are called phonons and the maximal speed is that of sound.
The classical-quantum analogy (with bomb fragments carrying opposite angular momenta $`𝐉_1=-𝐉_2`$) becomes complete if we use statistical mechanics for treating the classical case. The distribution of bomb fragments is given by a Liouville function in phase space. When Alice measures $`𝐉_1`$, the Liouville function for $`𝐉_2`$ is instantly altered, however far Bob is from Alice. No one would find this surprising, since it is universally agreed that a Liouville function is only a mathematical tool representing our statistical knowledge. Likewise, the wave function $`\psi `$, or the corresponding Wigner function which is the quantum analogue of a Liouville function, should be considered as mere mathematical tools for computing probabilities. It is only when they are regarded as physical objects that superluminal paradoxes arise.
The essential difference between the classical and quantum functions which change instantaneously as the result of measurements is that the classical Liouville function is attached to objective properties that are only imperfectly known. On the other hand, in the quantum case, the probabilities are attached to potential outcomes of mutually incompatible experiments, and these outcomes do not exist “out there” without the actual interventions. Unperformed experiments have no results .
ACKNOWLEDGMENTS
I am grateful to California Institute of Technology, where this research program began, for its hospitality, and in particular to Chris Fuchs for many helpful comments and an inexhaustible supply of references. I also had fruitful discussions with Dagmar Bruß, Rainer Plaga, Barbara Terhal and Daniel Terno. This work was supported by the Gerard Swope Fund and the Fund for Encouragement of Research.
FIG. 1. A quantum system is prepared at point P. The interventions A and B are mutually space-like. The solid and dotted lines represent equal times, $`t`$ and $`t^{}`$ respectively, in two Lorentz frames in relative motion. Event A occurs first in $`t`$-time, and event B is the first one in $`t^{}`$-time.
no-problem/9906/quant-ph9906053.html
# Calculation of the Deflection of Light Ray near the Sun with Quantum-corrected Newton’s Gravitation Law
## Abstract
The deflection of light ray passing near the Sun is calculated with quantum-corrected Newton’s gravitation law. The satisfactory result suggests that there may exist other theoretical possibilities besides the theory of relativity.
The deflection of a light ray near the Sun was a successful prediction of Einstein’s theory of relativity, a problem for which classical mechanics is of no use. It was one of the best experimental supports of Einstein’s theory. But as many people know, a successful synthesis of Einstein’s theory with quantum theory has not emerged. In the attempt to construct a theory of quantum gravity, some scientists have even suggested that one of the two theories is only provisional. Though research on such a theory of quantum gravity has produced no convincing breakthrough, some of its features can be seen. One of the crucial points is the quantization of spacetime.
To get a theory which is compatible with quantum mechanics and at the same time preserves the successful conclusions of the theory of relativity as much as possible, I proposed a theoretical framework in recent years \[3-5\]. It has given new insights into problems like the EPR paradox. Also, consideration of space quantization in this framework naturally gives a correction to Newton’s Gravitation Law:
$$F=G\frac{Mm}{r(r-\delta )}\qquad (1)$$
where $`\delta `$ is the space quantum. We have used this formula in the calculation of the precession of planetary perihelions and obtained quite good results. Here, we shall see that it also gives a satisfactory explanation of the deflection of a light ray passing at the edge of the Sun.
The classical orbital equation for a central force is
$$h^2u^2(\frac{d^2u}{d\theta ^2}+u)=\frac{F}{m}\qquad (2)$$
where $`r=\frac{1}{u}`$ and $`\theta `$ are the polar coordinates of the photon, $`h=cR`$, $`c`$ is the speed of light and $`R`$ is the radius of the Sun. Substituting (1) into (2), we get
$$h^2u^2(\frac{d^2u}{d\theta ^2}+u)=\frac{GMu^2}{1-\delta u}\qquad (3)$$
To first order in $`\delta u`$, this becomes
$$h^2u^2(\frac{d^2u}{d\theta ^2}+u)=GMu^2(1+\delta u)\qquad (4)$$
It follows that
$$\frac{d^2u}{d\theta ^2}+(1-D\delta )u=D\qquad (5)$$
where $`D=\frac{GM}{h^2}`$. It is straightforward to verify that the solution of equation (5) is
$$u=A\mathrm{cos}(\sqrt{1-D\delta }\,\theta )+B\mathrm{sin}(\sqrt{1-D\delta }\,\theta )+\frac{D}{1-D\delta }\qquad (6)$$
in which the constants $`A`$ and $`B`$ can be determined by the following considerations. When $`\theta =0`$, $`r=\mathrm{\infty }`$, so that $`u=0`$. This leads to $`A=-\frac{D}{1-D\delta }`$. From $`y=r\mathrm{sin}\theta `$ and (6) we have
$$\frac{1}{y}=\frac{D}{1-D\delta }\frac{1-\mathrm{cos}(\sqrt{1-D\delta }\,\theta )}{\mathrm{sin}\theta }+B\frac{\mathrm{sin}(\sqrt{1-D\delta }\,\theta )}{\mathrm{sin}\theta }\qquad (7)$$
when $`\theta \to 0`$, $`y\to R`$. It follows that $`B=\frac{1}{R\sqrt{1-D\delta }}`$. Thus finally the orbital equation is
$$u=\frac{D}{1-D\delta }\left(1-\mathrm{cos}(\sqrt{1-D\delta }\,\theta )\right)+\frac{1}{R\sqrt{1-D\delta }}\mathrm{sin}(\sqrt{1-D\delta }\,\theta )\qquad (8)$$
Designating the final polar angle of the light ray (when $`r\to \mathrm{\infty }`$) as $`\varphi `$, the deflection angle may be expressed as $`\mathrm{\Delta }\theta =\varphi -\pi `$. Thus the equation for $`\varphi `$ is
$$\frac{D}{1-D\delta }\left(1-\mathrm{cos}(\sqrt{1-D\delta }\,\varphi )\right)+\frac{1}{R\sqrt{1-D\delta }}\mathrm{sin}(\sqrt{1-D\delta }\,\varphi )=0\qquad (9)$$
This gives
$$\varphi =\frac{2}{\sqrt{1-D\delta }}\left[\mathrm{arctan}\left(-\frac{\sqrt{1-D\delta }}{RD}\right)+m\pi \right]\qquad (10)$$
where $`m`$ is any integer.
In determining the constant $`B`$, we suppose $`\theta \to 0`$. This is equivalent to presuming $`R\to 0`$. In this arithmetical process the space quantum $`\delta `$ has been specified, somewhat inadvertently. This is in complete accordance with the physical meaning of the space quantum: the unmeasurable quantity presupposed to be zero in the problem. This has great exemplary significance in determining the uncertainty quantum, which is crucial in our theoretical framework. In Table 1 we show our calculation of $`\mathrm{\Delta }\theta `$ with $`\delta =R`$, $`1.3R`$ and $`2R`$, together with the experimental observation. It is easily seen that our calculation gives a quite satisfactory explanation of the deflection of the light ray.
Another interesting and perhaps also important discovery in the calculation is that $`m=1`$ is the only integer giving a reasonable $`\mathrm{\Delta }\theta `$ value. From Table 2 it is easy to see that other values of $`m`$ produce incredibly large values which, oddly enough, are symmetrical relative to $`m=1`$, where there occurs a sudden, dramatic fall in the order of magnitude. I believe this is a profound reflection of the innate quantum nature of the problem, and therefore an indication that our framework is reasonable.
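The entries of Table 1 can be regenerated directly from Eq. (10). A minimal numerical sketch (Python assumed; the solar values below are standard inputs, and $`m=1`$ is taken as argued above):

```python
import math

GM = 1.327e20          # GM of the Sun, m^3 s^-2
c = 2.998e8            # speed of light, m/s
R = 6.96e8             # solar radius, m (ray grazing the limb)

D = GM / (c * R) ** 2  # D = GM/h^2 with h = cR

def deflection_arcsec(delta, m=1):
    """Deflection angle from Eq. (10), converted to arc seconds."""
    k = math.sqrt(1.0 - D * delta)
    phi = (2.0 / k) * (math.atan(-k / (R * D)) + m * math.pi)
    return (phi - math.pi) * (180.0 / math.pi) * 3600.0

for delta in (R, 1.3 * R, 2 * R):
    print(f"delta = {delta / R:.1f} R: {deflection_arcsec(delta):.2f} arcsec")
```

With $`\delta =1.3R`$ this gives about 1.77 arc seconds, close to the observed value of roughly 1.75 arc seconds.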
It is one of the most important topics in modern physics to preserve the quantum features of quantum mechanics while keeping the theoretical competence of the theory of relativity. Our research indicates that there may exist theoretical possibilities other than the theory of relativity.
REFERENCES
1. R. Penrose, The Emperor’s New Mind, (Oxford Univ. Press,1989)
2. S. W. Hawking & R. Penrose, Sci. American, 275, 1, 44(1996)
3. Z. Wang, quant-ph/9605017
4. Z. Wang, quant-ph/9605019
5. Z. Wang, quant-ph/9807035
6. Z. Wang, quant-ph/9806071
7. Z. Wang, quant-ph/9804070
8. Y. B. Zhou, Theoretical Mechanics, (JiangSu Science Press, 1961)
9. K. R. Lang, Astrophysical Formulae, (Springer-Verlag, 1974)
no-problem/9906/quant-ph9906072.html
# The Mach-Zehnder and the Teleporter
## Abstract
We suggest a self-testing teleportation configuration for photon q-bits based on a Mach-Zehnder interferometer. That is, Bob can tell how well the input state has been teleported without knowing what that input state was. One could imagine building a “locked” teleporter based on this configuration. The analysis is performed for continuous variable teleportation but the arrangement could equally be applied to discrete manipulations.
(June 1999)
One problem with teleportation experiments as they are currently performed is that Victor (the verifier) must examine the teleported state to determine if the machine is working. Victor prepares the original input state and is the only person who knows its identity. Because of the imperfect nature of experiments, even Victor must be careful not to be tricked when deciding if some level of teleportation has occurred. In principle one might imagine checking once that the teleporter is working and then leaving it to run, but in practice machines drift. Thus it would be useful if a constant, straightforward assessment of the teleportation could be carried out without prior knowledge of the input.
Consider first the set-up shown schematically in Fig.1(a). Basically we place a teleporter in one arm of a Mach-Zehnder interferometer, inject a single photon, in an arbitrary polarization superposition state, into one port, then use the interference visibility at the output ports to characterize the efficacy of teleportation. The beauty of such a set-up is that the visibility does not depend on the input state, so we can assess how well the teleporter is working without knowing what is going into it. Let us see how this works.
The input for one port of the interferometer is in the arbitrary polarization superposition state
$$|\varphi \rangle _a=\frac{1}{\sqrt{2}}(x|1,0\rangle +y|0,1\rangle )$$
(1)
where $`|n_h,n_v\rangle \equiv |n_h\rangle _h|n_v\rangle _v`$, $`n_h`$ and $`n_v`$ are the photon numbers in the horizontal and vertical polarizations respectively, and $`|x|^2+|y|^2=1`$. The input of the other port is in the vacuum state $`|\varphi \rangle _b=|0,0\rangle `$. The Heisenberg picture operators for the four input modes (two spatial times two polarization) are $`a_h`$ and $`a_v`$ (superposition), and $`b_h`$ and $`b_v`$ (vacuum). We propagate these operators through the Mach-Zehnder (including the teleporter). After the first beamsplitter we can write
$`c_{h,v}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(a_{h,v}+b_{h,v})`$ (2)
$`d_{h,v}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(a_{h,v}-b_{h,v})`$ (3)
One of the beams ($`c`$) is then teleported using the continuous variable method discussed in Reference . The individual polarization modes of $`c`$ are separated using a polarizing beamsplitter. Each mode is then mixed on a 50:50 beamsplitter with a correspondingly polarized member of an entangled pair of beams. The entangled pairs may come from two separate 2-mode squeezers or alternatively a single polarization/number entangler could be used .Amplitude and phase quadrature measurements are carried out respectively on the two output beams for each mode either through homodyne detection or parametric amplification . A classical channel for each of the polarization modes is formed from these measurements which are passed to the reconstruction site where they are used to displace the corresponding entangled pair for each mode. The output $`c_T`$ is formed by combining the two displaced polarization modes on a polarizing beamsplitter. Under conditions for which losses can be neglected the output from the teleporter is
$$c_{h,v,T}=\lambda c_{h,v}+(\lambda \sqrt{H}-\sqrt{H-1})f_{h,v,1}^{\dagger }+(\sqrt{H}-\lambda \sqrt{H-1})f_{h,v,2}$$
(4)
where $`\lambda `$ is the feedforward gain in the teleporter, the $`f_{h,v,i}`$ are vacuum inputs to the 2-mode squeezer providing the entanglement for the teleporter (see Fig.2(a)) and $`H`$ is the parametric gain of the squeezer. The fields are recombined in phase at the final beamsplitter giving the outputs
$`a_{h,v,out}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(c_{h,v,T}+d_{h,v})`$ (5)
$`b_{h,v,out}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(c_{h,v,T}-d_{h,v})`$ (6)
The photon counting rates of the two arms have expectation values
$`<a_{out}^{\dagger }a_{out}>`$ $`=`$ $`\langle \varphi |_a\langle \varphi |_b(a_{h,out}^{\dagger }+a_{v,out}^{\dagger })(a_{h,out}+a_{v,out})|\varphi \rangle _a|\varphi \rangle _b`$ (7)
$`=`$ $`0.25(1+\lambda )^2+(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (8)
$`<b_{out}^{\dagger }b_{out}>`$ $`=`$ $`\langle \varphi |_a\langle \varphi |_b(b_{h,out}^{\dagger }+b_{v,out}^{\dagger })(b_{h,out}+b_{v,out})|\varphi \rangle _a|\varphi \rangle _b`$ (9)
$`=`$ $`0.25(1-\lambda )^2+(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (10)
In the limit of very strong entanglement squeezing ($`\sqrt{H}-\sqrt{H-1}\to 0`$) we find from Eq. 4 that $`c_{h,v,T}\to c_{h,v}`$ for unity gain ($`\lambda =1`$), i.e. perfect teleportation. For the same conditions (and only for these conditions) the visibility of the Mach-Zehnder outputs,
$$V=\frac{<a_{out}^{\dagger }a_{out}>-<b_{out}^{\dagger }b_{out}>}{<a_{out}^{\dagger }a_{out}>+<b_{out}^{\dagger }b_{out}>}$$
(11)
goes to one, indicating the state of the teleported arm exactly matches that of the unteleported arm. Notice that the expectation values (Eq.10), and thus the visibility, do not depend on the actual input state (no dependence on $`x`$ and $`y`$). Hence we can demonstrate that the teleporter is operating ideally even if we do not know the state of the input. Classical limits can be set by examining the visibility obtained with no entanglement ($`H=1`$). In Fig.3 we plot the visibility versus feedforward gain in the teleporter for the cases of no entanglement (0%), 50% entanglement squeezing and 90% entanglement squeezing. Maximum visibility in the classical case is $`0.42`$. Increasing entanglement leads to increasing visibility.
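The gain dependence just described follows directly from Eqs. (7)-(11). A minimal sketch (Python/NumPy assumed; identifying "$`s`$ entanglement squeezing" with a squeezed-quadrature variance $`(\sqrt{H}-\sqrt{H-1})^2=1-s`$ is our reading and may differ in detail from the convention behind Fig. 3):

```python
import numpy as np

def H_from_squeezing(s):
    """Parametric gain H giving a squeezed-quadrature variance of (1 - s)."""
    r = np.sqrt(1.0 - s)                  # r = sqrt(H) - sqrt(H-1)
    return ((r + 1.0 / r) / 2.0) ** 2     # since sqrt(H) + sqrt(H-1) = 1/r

def visibility(lam, H):
    """Mach-Zehnder visibility from Eqs. (8), (10) and (11)."""
    noise = (lam * np.sqrt(H) - np.sqrt(H - 1.0)) ** 2
    a = 0.25 * (1.0 + lam) ** 2 + noise
    b = 0.25 * (1.0 - lam) ** 2 + noise
    return (a - b) / (a + b)

lam = np.linspace(0.0, 2.0, 2001)
for s in (0.0, 0.5, 0.9):                 # 0%, 50%, 90% entanglement squeezing
    V = visibility(lam, H_from_squeezing(s))
    i = int(np.argmax(V))
    print(f"{100 * s:3.0f}% squeezing: V_max = {V[i]:.2f} at gain {lam[i]:.2f}")
```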
It is known that using a single mode squeezed beam, divided in half on a beamsplitter (see Fig.2(b)), instead of a true 2-mode squeezed source (which exhibits Einstein, Podolsky, Rosen (EPR) correlations), can still produce fidelities of teleportation higher than the classical limit for coherent state inputs. Loock and Braunstein have recently contrasted various single mode and 2-mode squeezing schemes on the basis of their fidelity. It is educational to examine how well the single squeezer teleporter performs in our single photon Mach-Zehnder. The input output relation for a single squeezer teleporter is
$$c_{h,v,S}=\lambda c_{h,v}+\frac{1}{\sqrt{2}}((\lambda \sqrt{H}-\sqrt{H-1})f_{h,v,1}^{\dagger }+(\sqrt{H}-\lambda \sqrt{H-1})f_{h,v,1}+\lambda f_{h,v,2}^{\dagger }+f_{h,v,2})$$
(12)
The expectation values for the outputs then become
$`<a_{out}^{\dagger }a_{out}>`$ $`=`$ $`0.25(1+\lambda )^2+0.5(\lambda \sqrt{H}-\sqrt{H-1})^2+0.5\lambda ^2`$ (13)
$`<b_{out}^{\dagger }b_{out}>`$ $`=`$ $`0.25(1-\lambda )^2+0.5(\lambda \sqrt{H}-\sqrt{H-1})^2+0.5\lambda ^2`$ (14)
In Fig. 3 we also present the visibility as a function of gain for the single squeezer case with squeezing of 87.5%. The squeezing is picked such that the average coherent-state unity-gain fidelity is the same as for the 50% squeezed 2-mode entanglement (the criterion used in Ref. ). The performance of the single squeezer teleporter is clearly inferior. Although it achieves a better visibility than the classical teleporter, it never exceeds or equals, for any gain, the performance of the 50% squeezed 2-mode teleporter. The maximum visibility of the 2-mode teleporter is 25% higher. We conclude that the entanglement of the single squeezer is not as useful for teleportation as might be suggested by the coherent-state average fidelity measure.
In the experiments we have imagined so far the level of visibility has been determined not only by the ability of the teleporter to reproduce the input states of the photons (the mode overlap) but also by the efficiency with which input photons to the teleporter lead to correct output photons (the power balance). It is of interest to try to separate these effects. We can investigate just state reproduction if we allow attenuation to be applied to beam $`d`$, thus “balancing” the Mach-Zehnder by compensating for the loss introduced by the teleporter (see Fig.1(b)). The attenuated beam $`d`$ becomes
$$d_{h,v,A}=\sqrt{\eta }d_{h,v}+\sqrt{1-\eta }g_{h,v}$$
(15)
where $`g`$ is another vacuum field and $`\eta `$ is the intensity transmission of the attenuator. The expectation values of the outputs (using 2-mode entanglement) are now
$`<a_{out}^{\dagger }a_{out}>`$ $`=`$ $`0.25(\sqrt{\eta }+\lambda )^2+(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (16)
$`<b_{out}^{\dagger }b_{out}>`$ $`=`$ $`0.25(\sqrt{\eta }-\lambda )^2+(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (17)
In Fig.4 we plot visibility versus gain, using the attenuation $`\eta `$ to optimize the visibility ($`\eta \le 1`$). Now we can always achieve unit visibility for any finite level of entanglement by operating at gain $`\lambda _{opt}=\frac{\sqrt{H-1}}{\sqrt{H}}`$ and balancing the interferometer by setting $`\eta =\lambda _{opt}^2`$. The high visibility is achieved because at gain $`\lambda _{opt}`$ the teleporter behaves like pure attenuation . That is, the photon flux of the teleported field is reduced, but no “spurious photons” are added to the field. Thus, at this gain, all output photons from the teleporter are in the right state, but various input photons are “lost”. This effect does not occur for the single squeezer teleporter (also plotted in Fig.4), whose performance is not improved by balancing the interferometer, further emphasizing its lack of useful entanglement.
So far we have considered test arrangements in which a teleported field is compared with one which is not teleported. However, the result of Eq. 17 suggests a self-testing arrangement for a teleporter. Suppose we place a teleporter in both arms of the interferometer as portrayed in Fig.1(c). Writing an expression for the teleported beam $`d`$ similar to Eq. 4, we find the expectation values of the outputs are now
$`<a_{out}^{\dagger }a_{out}>`$ $`=`$ $`\lambda ^2+2(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (18)
$`<b_{out}^{\dagger }b_{out}>`$ $`=`$ $`2(\lambda \sqrt{H}-\sqrt{H-1})^2`$ (19)
where we have assumed the gains of the two teleporters are the same. By monitoring the “dark” output port ($`b_{out}`$) it may be possible to keep the system “locked” to maximum visibility, without any knowledge of the input state or requiring the destruction of the output state ($`a_{out}`$). Once again, under low loss conditions, unit visibility is achieved for gain $`\lambda _{opt}`$ as illustrated in Fig.5. The added complexity of using two teleporters may be justified in practice by the greater versatility of this system.
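A short sketch of the locking signal implied by Eq. (19) (Python/NumPy assumed; $`H=2`$ is an arbitrary illustrative value):

```python
import numpy as np

H = 2.0                                   # parametric gain of each entangler
lam_opt = np.sqrt(H - 1.0) / np.sqrt(H)   # gain at which the dark port nulls

def dark_port(lam, H):
    """Dark-port counts of Eq. (19) for the two-teleporter interferometer."""
    return 2.0 * (lam * np.sqrt(H) - np.sqrt(H - 1.0)) ** 2

for lam in (0.5, lam_opt, 1.0):
    print(f"gain {lam:.3f}: dark-port counts = {dark_port(lam, H):.4f}")
```

An error signal derived from the monitored port vanishes exactly at the gain $`\lambda _{opt}`$, so the system could in principle be servoed to maximum visibility without disturbing the output state.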
In conclusion, we have examined a Mach-Zehnder arrangement for testing the efficacy of single photon qubit teleportation. The major advantage of this arrangement is it doesn’t require the tester to know the input state of the photon. We have contrasted the results obtained with no entanglement, single mode entanglement and true 2-mode entanglement using continuous variable teleportation. The highest visibilities are always achieved with 2-mode entanglement. We have also suggested that a “locked” teleporter could be constructed using a generalization of the testing scheme. We have only examined here the case where losses can be neglected. Losses reduce visibilities but the general trends discussed here remain the same.
no-problem/9906/hep-ex9906034.html
# Total Forward and Differential Cross Sections of Neutral 𝐷 Mesons Produced in 500 GeV/𝑐 𝜋⁻–Nucleon Interactions
## Abstract
We measure the neutral $`D`$ total forward cross section and the differential cross sections as functions of Feynman-$`x`$ ($`x_F`$) and transverse momentum squared for 500 GeV/$`c`$ $`\pi ^{-}`$–nucleon interactions. The results are obtained from 88 990$`\pm `$460 reconstructed neutral $`D`$ mesons from Fermilab experiment E791 using the decay channels $`D^{\text{ 0}}\to K^{-}\pi ^+`$ and $`D^{\text{ 0}}\to K^{-}\pi ^+\pi ^{-}\pi ^+`$ (and charge conjugates). We extract fit parameters from the differential cross sections and provide the first direct measurement of the turnover point in the $`x_F`$ distribution, 0.0131$`\pm `$0.0038. We measure an absolute $`D^{\text{ 0}}`$+$`\overline{D}^{\text{ 0}}`$ ($`x_F`$$`>`$0) cross section of $`15.4_{-1.8}^{+2.3}`$ $`\mu `$barns/nucleon (assuming a linear $`A`$ dependence). The differential and total forward cross sections are compared to theoretical predictions and to results of previous experiments.
FERMILAB-Pub-99/185-E
Charm hadroproduction is a convolution of short range processes that can be calculated in perturbative quantum chromodynamics (QCD) and long range processes that cannot be treated perturbatively and thus must be modeled using experimental measurements. The large theoretical uncertainties from both contributions are reflected in the relatively large number of input parameters that can be adjusted when comparing models to the results of experiments. A single measurement, no matter how precise, cannot unambiguously determine these parameters. However, the results of high statistics measurements like the ones reported here, when combined with other measurements of similar precision, can constrain such parameters as the charm quark mass, the intrinsic transverse momentum of the partons in the incoming hadrons, and the effective factorization and renormalization scales used in theoretical calculations.
We report here measurements of the differential cross sections versus the kinematic variables Feynman-$`x`$ ($`x_F`$) and transverse momentum squared ($`p_T^2`$), as well as the total forward cross section for the hadroproduction of neutral $`D`$ mesons. The relatively high pion beam momentum, 500 GeV/$`c`$, coupled with the good geometric acceptance of the Tagged Photon Laboratory (TPL) spectrometer, allows us to investigate a wide kinematic region that includes points at negative $`x_F`$. We are able to measure the shape of the differential cross section versus $`x_F`$ with sufficient precision to confirm, for the first time, that the turnover in the cross section does occur at $`x_F`$$`>`$0, as expected for incident pions .
Combining data from two $`D^{\text{ 0}}`$ decay modes, $`D^{\text{ 0}}\to K\pi `$ and $`D^{\text{ 0}}\to K\pi \pi \pi `$, <sup>1</sup>Charge conjugates are always implied. We use $`D^{\text{ 0}}`$ to represent the sum of $`D^{\text{ 0}}`$ and $`\overline{D}^{\text{ 0}}`$. Similarly, $`K\pi `$ ($`K\pi \pi \pi `$) includes $`K^{-}\pi ^+`$ ($`K^{-}\pi ^+\pi ^{-}\pi ^+`$) and charge conjugate. we extract a sample of 88 990$`\pm `$460 (78730$`\pm `$430 at $`x_F`$$`>`$0) fully reconstructed charm decays to use for these measurements. In addition to the greater statistical significance, the use of two modes provides a means to better understand the systematic errors associated with the reconstruction of the decay products of these fully-charged decays.
The data were accumulated during the 1991/1992 Fermilab fixed-target run of experiment E791 . The experiment utilized the spectrometer built by the previous TPL experiments, E516 , E691 , and E769 , with significant improvements. The experiment employed a 500 GeV/$`c`$ $`\pi ^{-}`$ beam tracked by eight planes of proportional wire chambers (PWC’s) and six planes of silicon microstrip detectors (SMD’s). The beam impinged on one 0.52-mm thick platinum foil (1.6 cm in diameter) followed by four 1.56-mm thick diamond foils (1.4 cm in diameter), each foil center separated from the next by an average of 1.53 cm, allowing most charm particles to decay in air. The downstream spectrometer consisted of 17 planes of SMD’s for vertexing and tracking along with 35 planes of drift chambers, 2 PWC planes, and 2 analysis magnets (bending in the same direction) for track and momentum measurement. Two multicell threshold Čerenkov counters, an electromagnetic calorimeter, a hadronic calorimeter, and a wall of scintillation counters for muon detection provided particle identification. The trigger was generated using signals from scintillation counters as well as the electromagnetic and hadronic calorimeters. The beam scintillation counters included a beam counter 1.3 cm in diameter (14 cm upstream of the first target) and a large beam-halo veto counter with a 1.0 cm hole (8 cm upstream of the first target). The interaction counter was located 2.0 cm downstream of the last target and 0.6 cm upstream of the first SMD plane. The first-level trigger required a signal corresponding to at least 1/2 of that expected for a minimum ionizing particle (MIP) in the beam counter, no signal greater than 1/2 of a MIP in the beam halo counter, and a signal corresponding to greater than $`\sim `$4.5 MIP’s in the interaction counter (consistent with a hadronic interaction in one of the targets). The second-level trigger required more than 3 GeV of transverse energy in the calorimeters. Additional requirements eliminated events with multiple beam particles. A fast data acquisition system collected data at rates up to 30 Mbyte/s with 50 $`\mu `$s/event deadtime. Over 2$`\times `$10<sup>10</sup> events were written to 24 000 8mm magnetic tapes during a six-month period.
The raw data were reconstructed and filtered to keep events with at least two separated vertices, consistent with a primary interaction and a charm particle decay. Following the event reconstruction and filtering, selection criteria for the $`D^{\text{ 0}}`$ candidates were determined by maximizing $`S/\sqrt{S+B}`$ where $`S`$ is the (normalized) number of signal events resulting from a Monte Carlo simulation and $`B`$ is the number of background events appearing in the data sidebands of the reconstructed $`K\pi `$ or $`K\pi \pi \pi `$ mass distribution. Only selection variables that are well modeled by the Monte Carlo simulation were used. The final selection criteria varied by decay type ($`K\pi `$ and $`K\pi \pi \pi `$) and by $`x_F`$ region. The full range of a cut variation is given in the descriptions below. To eliminate generic hadronic interaction backgrounds as well as secondary interactions, the secondary vertex was required to be longitudinally separated from the primary vertex by more than 8-11 times the measurement uncertainty on the longitudinal separation ($`\sim `$400 $`\mu `$m) and to lie outside of the target foils. Backgrounds from the primary interaction were also reduced by requiring that the candidate decay tracks miss the primary vertex by at least 20-40 $`\mu `$m. To ensure a correctly reconstructed charm particle and primary vertex, the momentum vector of the $`D^{\text{ 0}}`$ candidate was required to point back to within 35-60 $`\mu `$m of the primary vertex and to have a momentum component perpendicular to the line connecting the primary and secondary vertices of less than 350-450 MeV/$`c`$. Finally, the sum of the squares of the transverse momenta of the decay tracks relative to the candidate $`D^{\text{ 0}}`$ momentum vector was required to be greater than 0.4 (GeV/$`c`$)<sup>2</sup> (0.15 (GeV/$`c`$)<sup>2</sup>) for the $`K\pi `$ ($`K\pi \pi \pi `$) candidates to favor the decay of a high-mass particle. All primary vertices were required to occur in the diamond targets. Thus, our results come from a light, isoscalar target. The Čerenkov information is not used in this analysis; all particle-identification combinations are tried. The inclusive $`K\pi `$ and $`K\pi \pi \pi `$ signals are shown in Fig. 1.
The reconstructed data were split into 20 bins of $`x_F`$, integrating over all $`p_T^2`$, and 20 bins of $`p_T^2`$, integrating over $`x_F`$$`>`$0. To combine data from varied conditions (e.g., using particles that pass through one and two magnets), the normalized mass ($`m_n`$) is constructed for each candidate using its calculated mass and error ($`m`$ and $`\sigma _m`$) and the measured mean mass ($`m_D`$): $`m_n\equiv \frac{m-m_D}{\sigma _m}`$. Using the binned maximum likelihood method, the normalized mass distributions were fit to a simple Gaussian for the signal and linear or quadratic polynomials for the background.
The acceptance can be factorized into trigger efficiency ($`ϵ_{trig}`$) and reconstruction efficiency ($`ϵ_{rec}`$). Most of the trigger inefficiency is due to vetoes on multiple beam particles. These resulted in a (70.3$`\pm `$1.1$`\pm `$4.2)% trigger efficiency, where the first error is statistical and the second error is systematic. The interaction and transverse energy requirements were greater than 99% efficient for reconstructable hadronic charm decays. Writing and reading the data tapes was (97.6$`\pm `$1.0)% efficient. Combining these efficiencies gives $`ϵ_{trig}=(68.3\pm 4.4)\%`$, where the error is dominated by the systematic error. The reconstruction efficiency is obtained from a Monte Carlo simulation. The Monte Carlo simulation used Pythia/Jetset as a physics generator and models the effects of resolution, geometry, magnetic fields, and detector efficiencies as well as all analysis cuts. The efficiencies were separately modeled for five evenly spaced temporal periods during the experiment. This was motivated by a highly inefficient region of slowly increasing size in the center of the drift chambers caused by the 2 MHz pion beam. The Monte Carlo events were weighted to match the observed data distributions of $`x_F`$, $`p_T^2`$, and the summed $`p_T^2`$ of all two-magnet charged tracks in the event other than those from the candidate $`D`$ meson. The resulting reconstruction efficiencies as a function of $`x_F`$ and $`p_T^2`$ are shown in Fig. 2.
From the number of reconstructed $`D^{\text{ 0}}`$ candidates, the reconstruction efficiency, the trigger efficiency, and the PDG branching fractions , we obtain the number of $`D^{\text{ 0}}`$ mesons produced in our experiment during the experiment livetime, $`N_{prod}`$. The cross section as a function of each variable $`z`$ (where $`z`$ = $`x_F`$ or $`p_T^2`$), is:
$$\sigma (\pi ^{-}N\to D^{\text{ 0}}X;z)=\frac{N_{prod}(D^{\text{ 0}};z)}{T_N\,N_{\pi ^{-}}}.$$
(1)
$`T_N`$ is the number of nucleons per area in the target (calculated from the target thickness) and is $`(1.224\pm 0.004)\times 10^6`$ nucleons/$`\mu `$b. $`N_{\pi ^{-}}`$ is the number of incident $`\pi ^{-}`$ particles during the experiment livetime. This is obtained directly from a scaler which counted clean beam particles ($`>`$1/2 MIP signals in the beam and interaction counters and no signal greater than 1/2 MIP in the beam-halo veto counter) during the experiment livetime. These are the only beam particles which could cause a first-level trigger. Using Eq. 1, we obtained the $`D^{\text{ 0}}`$+$`\overline{D}^{\text{ 0}}`$ differential cross sections versus $`x_F`$ and $`p_T^2`$ shown in Figs. 3 and 4.
The systematic errors are divided into two categories and incorporated in two stages. The *uncorrelated* systematic errors are determined individually for the $`K\pi `$ and $`K\pi \pi \pi `$ results. These systematic errors include uncertainties in the Monte Carlo modeling of the selection criteria, the background functions, and the widths used in the Gaussian signal functions. The *correlated* systematic errors are calculated for the combined $`D^{\text{ 0}}`$ result, obtained from adding the $`K\pi `$ and $`K\pi \pi \pi `$ samples together, weighted by the inverse-square of the combined statistical and uncorrelated systematic errors. The correlated errors are associated with uncertainties in the $`D^{\text{ 0}}`$ lifetime, the Monte Carlo production model, the Monte Carlo weighting procedure, and the run period weighting procedure. Finally, we compare our measured $`K\pi `$ to $`K\pi \pi \pi `$ branching ratio to the PDG value to estimate the residual tracking and vertexing efficiency modeling error. In addition to these errors, which can affect both the normalization and the shape of the differential cross sections, there are two errors which affect only the normalization: the uncertainties in the trigger efficiency and target thickness.
For the differential cross sections shown in Figs. 3 and 4, the systematic errors are factorized into shape and normalization parts. The error bars in the figures show the sum, in quadrature, of the statistical and all systematic errors after factoring out the normalization component. Although the relative importance varies bin-by-bin, the most important systematic errors generally come from uncertainties in the signal width and the Monte Carlo efficiency modeling. In all cases, the systematic error dominates. For the total forward cross section, all of the errors are summarized and summed in Table 1.
In the past, $`x_F`$ distributions have been fit with
$$\frac{d\sigma }{dx_F}=A(1-|x_F|)^n.$$
(2)
Fitting Eq. 2 in the range $`0.05<x_F<0.50`$, we find $`n=4.61\pm 0.19`$, as shown in Fig. 3. This function does not provide a complete representation of our data. Although the $`\chi ^2/dof`$ is small (0.3), the value of $`n`$ is quite dependent on the range fitted and on the errors on the data points. Another function, which can be extended into the negative $`x_F`$ region, is an extension of Eq. 2 which uses a power-law function in the tail region and a Gaussian in the central region; that is,
$$\frac{d\sigma }{dx_F}=\{\begin{array}{cc}A(1-|x_F-x_c|)^{n^{\prime }},\hfill & |x_F-x_c|>x_b\hfill \\ A^{\prime }\mathrm{exp}\left[-\frac{1}{2}(\frac{x_F-x_c}{\sigma })^2\right],\hfill & |x_F-x_c|<x_b\hfill \end{array}.$$
(3)
Requiring continuous functions and derivatives allows us to write Eq. 3 with one normalization parameter and three shape parameters: $`n^{\prime }`$ gives the shape in the tail region, $`x_c`$ is the turnover point, and $`x_b`$ is the boundary between the Gaussian and power-law function. The fit parameters from this function are nearly independent of the fit range. Fitting our data in the range $`-0.125<x_F<0.50`$ gives $`n^{\prime }=4.68\pm 0.21`$, $`x_c=0.0131\pm 0.0038`$, and $`x_b=0.062\pm 0.013`$ with a $`\chi ^2/dof`$=0.4, as shown in Fig. 3. This is the first measurement of the turnover point $`x_c`$ in the charm sector. The fact that it is significantly greater than zero is consistent with a harder gluon distribution in the beam pions than in the target nucleons.
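Matching Eq. (3) and its first derivative at $`|x_F-x_c|=x_b`$ fixes the remaining constants, $`\sigma ^2=x_b(1-x_b)/n^{\prime }`$ and $`A^{\prime }=A(1-x_b)^{n^{\prime }}\mathrm{exp}(x_b^2/2\sigma ^2)`$. A minimal sketch evaluating the fitted shape with the central values above (Python/NumPy assumed; the overall normalization is arbitrary here):

```python
import numpy as np

n_p, x_c, x_b = 4.68, 0.0131, 0.062   # central values of the fit to Eq. (3)
A = 1.0                               # arbitrary overall normalization

# Continuity of the function and its derivative at |x_F - x_c| = x_b:
sigma2 = x_b * (1.0 - x_b) / n_p
A_gauss = A * (1.0 - x_b) ** n_p * np.exp(x_b ** 2 / (2.0 * sigma2))

def dsigma_dxF(x):
    """Eq. (3): Gaussian core joined smoothly onto power-law tails."""
    u = np.abs(x - x_c)
    tail = A * (1.0 - u) ** n_p
    core = A_gauss * np.exp(-0.5 * (x - x_c) ** 2 / sigma2)
    return np.where(u > x_b, tail, core)

x = np.array([x_c - x_b, x_c, x_c + x_b, 0.3])
print(dsigma_dxF(x))                  # smooth across the joins, peak at x_c
```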
The functions which have been used in the past to fit the $`p_T^2`$ distribution are:
$$\frac{d\sigma }{dp_T^2}=Ae^{-bp_T^2}$$
(4)
at low $`p_T^2`$ ($`p_T^2`$$`<`$4.0 (GeV/$`c`$)<sup>2</sup> for this analysis),
$$\frac{d\sigma }{dp_T^2}=Ae^{-b^{\prime }p_T}$$
(5)
at high $`p_T^2`$ ($`p_T^2`$$`>`$1.0 (GeV/$`c`$)<sup>2</sup> for this analysis), and
$$\frac{d\sigma }{dp_T^2}=\left[\frac{A}{\alpha m_c^2+p_T^2}\right]^\beta $$
(6)
over all $`p_T^2`$ with $`m_c`$ set to 1.5 GeV/$`c`$<sup>2</sup> . The results of fitting these equations to the data are shown in Fig. 4. For the ranges given above, the fit results are:
* $`b`$ = 0.83$`\pm `$0.02 with $`\chi ^2/dof`$ = 2.8,
* $`b^{\prime }`$ = 2.41$`\pm `$0.03 with $`\chi ^2/dof`$ = 1.7, and
* $`\alpha `$ = 2.36$`\pm `$0.23 (GeV/$`c`$<sup>2</sup>)<sup>-2</sup> and $`\beta `$ = 5.94$`\pm `$0.39 with $`\chi ^2/dof`$ = 0.3.
Equation 4 does not provide a good fit even over the very limited range to which it is applied. While the $`\chi ^2/dof`$ (1.7) of the fit to Eq. 5 is not good, it appears to be a reasonable fit to the data. Equation 6 provides a very good fit to the data over the entire range of $`p_T^2`$. Unfortunately, using two free parameters (in addition to the normalization) makes it more difficult to compare to other experiments and theory since the parameters in this fit are highly correlated. This is reflected in the large (7-10%) errors on $`\alpha `$ and $`\beta `$ compared to the error on $`b^{}`$ (1%), as shown above.
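With the fitted central values above, the three parameterizations can be compared directly (a minimal sketch, Python/NumPy assumed; the normalizations $`A`$ are dropped, so only the relative fall-off with $`p_T^2`$ is meaningful):

```python
import numpy as np

mc = 1.5                      # charm quark mass, GeV/c^2, as in Eq. (6)
b, b_p = 0.83, 2.41           # fitted slopes of Eqs. (4) and (5)
alpha, beta = 2.36, 5.94      # fitted parameters of Eq. (6)

def eq4(pt2): return np.exp(-b * pt2)                         # low p_T^2 only
def eq5(pt2): return np.exp(-b_p * np.sqrt(pt2))              # high p_T^2 only
def eq6(pt2): return (1.0 / (alpha * mc ** 2 + pt2)) ** beta  # all p_T^2

pt2 = np.array([0.5, 2.0, 8.0])         # (GeV/c)^2
for f in (eq4, eq5, eq6):
    print(f.__name__, f(pt2) / f(0.5))  # shapes normalized at pt2 = 0.5
```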
Figures 5 and 6 show a comparison of our $`x_F`$ and $`p_T^2`$ distributions to theoretical predictions for charm quark and $`D`$ meson production. Although the data come from $`D^{\text{ 0}}`$ mesons, the theoretical predictions for charm quark production are included for completeness. The theoretical curves are normalized to obtain the best fit (lowest $`\chi ^2/dof`$) to our data. The theoretical predictions come from a next-to-leading order (NLO) calculation by Mangano, Nason, and Ridolfi (MNR) and the Pythia/Jetset event generator. The MNR NLO charm quark calculation uses SMRS2 (HMRSB ) NLO parton distribution functions for the pion (nucleon), a charm quark mass of 1.5 GeV/$`c`$<sup>2</sup>, and an average intrinsic transverse momentum of the incoming partons ($`\sqrt{\langle k_t^2\rangle }`$) of 1.0 GeV/$`c`$. The value for $`\sqrt{\langle k_t^2\rangle }`$ was suggested by M. L. Mangano and is independently motivated by the study of azimuthal angle correlations between two charm particles in the same event . The $`D`$ meson results are obtained by convoluting the charm quark results with the Peterson fragmentation function with $`ϵ=0.01`$. The low value for $`ϵ`$ was also suggested by M. L. Mangano in response to a reanalysis of $`D`$ fragmentation in $`e^+e^{-}`$ collisions . The Pythia/Jetset event generator uses leading order DO2 (CTEQ2L ) parton distribution functions for the pion (nucleon), a charm quark mass of 1.35 GeV/$`c`$<sup>2</sup>, $`\sqrt{\langle k_t^2\rangle }`$ of 0.44 GeV/$`c`$, and the Lund string fragmentation scheme to obtain $`D^{\text{ 0}}`$ results. Tables 2 and 3 show a comparison of our $`x_F`$ and $`p_T^2`$ fit results to theoretical predictions and to recent high-statistics charm experiments which used pion beams. The evident energy dependence of the shape parameters in Tables 2 and 3 is consistent with theoretical predictions .
We obtain the total forward cross section by summing the $`x_F`$ differential cross section for $`x_F`$$`>`$0 and assuming the cross section for 0.8$`<`$$`x_F`$$`<`$1.0 is half that of the cross section for 0.6$`<`$$`x_F`$$`<`$0.8 but with the same error. Assuming a linear dependence on the atomic number , we obtain the neutral $`D`$ total forward cross section, $`\sigma (D^{\text{ 0}}+\overline{D}^{\text{ 0}};x_F>0)=15.4_{-1.8}^{+2.3}`$ $`\mu `$barns/nucleon. To obtain the total charm cross section, $`\sigma (c\overline{c})`$, we multiply our $`D^{\text{ 0}}`$+$`\overline{D}^{\text{ 0}}`$ cross section by 1.7. This accounts for three multiplicative effects: the relative production of charm quarks compared to $`D^{\text{ 0}}`$ mesons (2.1), the conversion from $`x_F`$$`>`$0 to all $`x_F`$ (1.6), and the conversion to the $`c\overline{c}`$ cross section from the sum of charm plus anticharm cross sections (0.5) . We compare our total charm cross section to other experiments and to the NLO predictions as a function of pion-beam energy in Fig. 7. All experimental results are obtained by multiplying the $`D^{\text{ 0}}`$+$`\overline{D}^{\text{ 0}}`$ cross section by 1.7. The rise of the charm production cross section with energy is modeled reasonably well by the NLO theory, although the absolute value at any point depends greatly on the input parameters to the theory.
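The factor of 1.7 is just the product of the three effects listed above (a trivial check; 15.4 is the measured forward cross section in microbarns/nucleon):

```python
charm_per_D0 = 2.1     # charm quarks per neutral D
forward_to_all = 1.6   # x_F > 0  ->  all x_F
pair_convention = 0.5  # (charm + anticharm)  ->  c cbar pairs

factor = charm_per_D0 * forward_to_all * pair_convention
print(factor, 15.4 * factor)   # ~1.7, ~26 microbarns/nucleon
```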
In this paper we have presented the total forward cross section and differential cross sections versus $`x_F`$ and $`p_T^2`$ for $`D^{\text{ 0}}`$ mesons from Fermilab experiment E791 data. This analysis represents the first measurement of the $`D^{\text{ 0}}`$ cross section for a 500 GeV/$`c`$ pion beam. The high statistics allows us to clearly observe a turnover point greater than zero ($`x_c`$ = 0.0131$`\pm `$0.0038) in the Feynman-$`x`$ distribution, providing evidence for a harder gluon distribution in the pion than in the nucleon.
We have compared our differential cross section results to predictions from the next-to-leading order calculation by Mangano, Nason, and Ridolfi and to the Monte Carlo event generator Pythia by T. Sjöstrand et al. With suitable choices for the intrinsic $`k_t`$ of the partons and the Peterson fragmentation function parameter, the NLO $`D`$ meson calculation provides a good match to the $`p_T^2`$ spectra and a fair match to the $`x_F`$ distribution. The string fragmentation scheme in Pythia softens the original charm quark $`p_T^2`$ distribution too much, and hardens the $`x_F`$ spectra too much in both directions. However, the Pythia $`D^{\text{ 0}}`$ result does predict the flattening of the $`x_F`$ cross section at high $`x_F`$. The many adjustable parameters in the theoretical models allow one to obtain distributions which are quite consistent with these data. Unfortunately, a given set of parameters is neither unique, nor does it necessarily provide a good match to other data. In conjunction with other charm production results from this and other recent high-statistics experiments, however, it may be possible to find a unique set of parameters. These results come from experiments with a variety of beam energies and types, and include measurements of differential cross sections, production asymmetries, and correlations between two charm particles in the same event.
Unlike the uncertainties in the theoretical calculations of the differential cross sections, the uncertainties in the theoretical calculation of the total cross section come mostly from the perturbative calculation. The relatively large uncertainties are due to the low mass of the charm quark, which results in a large (unknown) contribution from higher-order terms. The total forward $`D^{\text{ 0}}`$+$`\overline{D}^{\text{ 0}}`$ cross section measured by E791 is $`\sigma (D^{\text{ 0}}+\overline{D}^{\text{ 0}};x_F>0)=15.4_{-1.8}^{+2.3}`$ $`\mu `$barns/nucleon, assuming a linear atomic number dependence. The cross section is consistent with the MNR NLO prediction.
We express our special thanks to S. Frixione, M. L. Mangano, P. Nason, and G. Ridolfi for the use, and help in the use, of their NLO QCD software. We gratefully acknowledge the assistance from Fermilab and other participating institutions. This work was supported by the Brazilian Conselho Nacional de Desenvolvimento Científico e Tecnológico, CONACyT (Mexico), the Israeli Academy of Sciences and Humanities, the U.S. Department of Energy, the U.S.-Israel Binational Science Foundation, and the U.S. National Science Foundation.
# AIAA-99-2144 PROPULSION THROUGH ELECTROMAGNETIC SELF-SUSTAINED ACCELERATION
## Introduction
In 1881 J. J. Thomson first realized that a charged particle was more resistant to being accelerated than an otherwise identical neutral particle. His observation marked the origin of the concept of electromagnetic mass of charged particles. This concept was developed into a full theory mostly by Heaviside, Searle, Lorentz, Poincaré, Abraham, Fermi and Rohrlich. It follows from this theory that it is the unbalanced repulsion of the volume elements of an accelerating charged particle that causes the resistance to its acceleration known as inertia (the mutual repulsion of the volume elements of an inertial charge is completely balanced, and therefore there is no net force acting on the charge). Alternatively, the unbalanced attraction of accelerating opposite charges results in further enhancement of their acceleration. By the equivalence principle, these opposite effects (resistance to the acceleration of like charges and enhancement of the acceleration of unlike charges, resulting from unbalanced repulsion and unbalanced attraction, respectively) should also occur when the charges are in a gravitational field. The equivalence principle requires that these effects be present in a gravitational field as well, but does not provide any insight into what causes them there. The answer to this question is that it is a spacetime anisotropy around massive bodies that causes those effects. It manifests itself in the anisotropy of the velocity of electromagnetic signals (for short, the velocity of light). Since it is now believed that the anisotropy of the velocity of light around massive bodies results from the curvature of spacetime, here we shall discuss two *independent* results which indicate that the correct interpretation of general relativity is in terms of spacetime anisotropy, not spacetime curvature. While the first result just shows that there is no need for spacetime curvature, since spacetime anisotropy *alone* accounts for all inertial and gravitational effects, the second one directly demonstrates that the standard curved-spacetime interpretation of general relativity contradicts the gravitational redshift experiments. The implication that there is no spacetime curvature is crucial not only for understanding and possible utilization of the effects discussed but also for gaining deeper insight into the nature of inertia and gravitation. One far-reaching consequence of the anisotropy of spacetime is that inertia and gravitation can (at least in principle) be electromagnetically manipulated.
*It is the anisotropy of spacetime that causes the phenomena traditionally called inertia and gravitation.* An analysis of the classical electromagnetic mass theory in conjunction with general relativity leads to the conclusion that (i) gravitational attraction is caused by the anisotropy of spacetime around massive objects and (ii) inertia (and inertial mass) as described in an accelerating reference frame originates from the spacetime anisotropy in that frame. The essence of this analysis is as follows. Consider a classical electron in the Earth's gravitational field (a quantum mechanical treatment of the electromagnetic mass is not possible at present, since quantum mechanics does not offer a model for the quantum object itself). Due to the anisotropy of the velocity of light in a non-inertial reference frame (supported in the Earth's gravitational field), the electric field of an electron on the Earth's surface is distorted, which gives rise to a self-force originating from the interaction of the electron charge with its distorted electric field; put another way, the force arises from the mutual unbalanced repulsion of the volume elements of the electron charge. This self-force tries to force the electron to move downwards and coincides with what is traditionally called the gravitational force. (The self-force which starts to act on the electron whenever its electric field distorts effectively resists this distortion; that is why the self-force strives to make the electron move downwards with an acceleration $`𝐠`$ in order to compensate the spacetime anisotropy, which in turn eliminates the distortion of the electron's electric field.) The electric self-force is proportional to the gravitational acceleration $`𝐠`$, and the coefficient of proportionality is the mass "attached" to the electron's electric field, which proves to be equal to the electron mass. The anisotropy of the speed of light in the Earth's vicinity is compensated if the electron is falling toward the Earth's surface with an acceleration $`𝐠`$. In other words, the electron is falling in order to keep its electric field undistorted. A Coulomb (undistorted) field does not give rise to any self-force acting on the electron; that is why the motion of a falling electron is non-resistant (inertial, or geodesic). It also follows that a falling electron does not radiate, since its electric field is the Coulomb field and therefore does not contain the radiation $`r^{-1}`$ terms. If the electron is prevented from falling, it can no longer compensate the anisotropy of the speed of light, its field distorts, and as a result a self-force pulling the electron downwards arises. The resistance which an electron offers to being accelerated is similarly described in an accelerating reference frame as caused by the anisotropy of the speed of light there: when the electron accelerates its electric field distorts, and the electron resists that deformation.
In such a way, given the fact that the speed of light is anisotropic in non-inertial reference frames, all inertial and gravitational effects (including the equivalence of inertial and gravitational mass) of the electron and the other elementary charged particles are fully and consistently accounted for if both the inertial and passive gravitational mass of the elementary charged particles are entirely electromagnetic in origin. (On the one hand, an entirely electromagnetic mass of an elementary charged particle, an electron for example, is supported by the fact that the electromagnetic mass of the classical electron is equal to its observable mass. On the other hand, the electromagnetic mass raises the question of the stability of the electron: what keeps its charge together? This question cannot be adequately addressed until a quantum-mechanical model of the electron structure is obtained. An important feature of the electromagnetic mass theory is that the stability problem does not interfere with the derivation of the self-force (acting on a non-inertial electron) containing the electromagnetic mass. This hints that perhaps there is no real problem with the stability of the electron, as a future quantum mechanical model of the electron itself may find; if there were one, it would inevitably emerge in the calculation of the self-force.) The gravitational attraction and inertia of all matter can be accounted for as well if it is assumed that there are no elementary neutral particles in nature. A direct consequence from here is that only charged particles, or particles that consist of charged constituents, possess inertial and passive gravitational mass. (It is evident that in this case the electromagnetic mass theory predicts zero neutrino mass and appears to be in conflict with the apparent mass of the $`Z^0`$ boson, which is involved in the neutral weak interactions. The resolution of this apparent conflict could lead either to restricting the electromagnetic mass theory, in the sense that not the entire mass is electromagnetic, or to reexamining the facts believed to prove (i) that the $`Z^0`$ boson is a fundamentally neutral particle (unlike the neutron), and (ii) that it does possess inertial and gravitational mass if truly neutral.) Stated another way, it is only elementary charges that comprise a body; there is no such fundamental quantity as mass. This means that a body's (inertial or passive gravitational) mass corresponds to the energy stored in the electric fields of all elementary charged particles comprising the body. However, the inertial and passive gravitational mass of a body manifest themselves as such, as a measure of the body's resistance to being accelerated, only if the body is subjected to an acceleration (kinematic or gravitational). This resistance originates from the unbalanced mutual repulsion of the volume elements of every elementary charged constituent of the body. The active gravitational mass of a body proves to be electromagnetic as well, originating from its charged constituents, since there is no mass but only charges (given that the inertial and passive gravitational masses are electromagnetic, the electromagnetic nature of the active gravitational mass follows immediately, since the three kinds of masses are equal).
As there is no need for spacetime curvature, since all gravitational effects are fully accounted for by the electromagnetic nature of the passive gravitational mass and the anisotropic velocity of light in the vicinity of a massive object, it follows that it is the object's charges (and their fields) that cause the anisotropy of spacetime around the object.
One thing concerning the electromagnetic mass theory which is often overlooked should be especially stressed: even if the mass is viewed as only partly electromagnetic, as presently believed, it still follows that inertia and gravitation are, at least in part, electromagnetic in origin. It should also be noted that once the fact of the partly electromagnetic origin of inertia and gravitation is fully realized, a thorough analysis of this open issue can be carried out, which will most probably lead to the result that electromagnetic interaction is the only cause behind inertia and gravitation, which are now regarded as separate phenomena. (Such an analysis will be presented in another paper. The basic idea is to demonstrate that it is highly unlikely that Nature has invented two drastically different and independent causes of gravitation: an anisotropic spacetime for elementary charged particles and a curved spacetime for elementary neutral ones, such as the $`Z^0`$ boson if it turns out to be a truly neutral particle. If the mass of the elementary charged particles is regarded as only partly electromagnetic, the phenomenon of gravitation becomes even more complicated: spacetime must be anisotropic, to account for the gravitational interaction of the electromagnetic part of the mass of the particles, as well as curved, to account for the gravitational interaction of the non-electromagnetic part of the mass of the charged particles.)
*The concept of spacetime curvature is in direct contradiction with experiments*. A recently obtained result shows that the gravitational redshift contradicts the curved-spacetime interpretation of general relativity. It has not been noticed up to now that both the frequency and the velocity of a photon change in the gravitational redshift experiment. In such a way, the measurement of a change in a photon's frequency is in fact an indirect measurement of a change in its local velocity in this experiment. This shows that the local velocity of a photon depends upon its pre-history (whether it has been emitted at the observation point or at a point of different gravitational potential), a result that contradicts the standard curved-spacetime interpretation of general relativity, which requires that the local velocity of light always be $`c`$. Therefore the gravitational redshift demonstrates that general relativity cannot be interpreted in terms of spacetime curvature. This situation calls for another interpretation of the mathematical formalism of general relativity. The possibility of interpreting the Riemann tensor not in terms of spacetime curvature but in terms of spacetime anisotropy has always existed since the creation of general relativity but has received no attention. In such an interpretation the Riemannian geometry describes not a curved but an anisotropic spacetime, thus linking gravitation to the anisotropy of spacetime.
An additional indication that spacetime around massive bodies is anisotropic (not curved) comes from the following argument. According to the standard curved-spacetime interpretation of general relativity, the gravitational effects observed in a non-inertial reference frame $`N^g`$ on the Earth's surface are caused by the curvature of spacetime originating from the Earth's mass. The principle of equivalence requires that what is happening in $`N^g`$ be happening in a non-inertial (accelerating) frame $`N^a`$ as well. The gravitational effects in general relativity include time and length effects in addition to the pre-relativistic ones (falling of bodies and their weight). These, according to the standard interpretation of general relativity, are also caused by the spacetime curvature around the Earth. By the principle of equivalence, the time and length effects must be present in $`N^a`$ as well. An inertial observer can explain those effects happening in $`N^a`$ by employing only special relativity. An observer in $`N^a`$, however, can explain them neither by *directly* making use of the frame's acceleration nor by the anisotropic velocity of light there, since the concepts of time and space are more fundamental than the concepts of velocity and acceleration. The non-inertial observer in $`N^a`$ has to prove that spacetime in $`N^a`$ is anisotropic due to $`N^a`$'s acceleration by obtaining the spacetime interval in $`N^a`$. Only then can the observer derive the time and length effects in $`N^a`$ by using the anisotropic spacetime interval there. Therefore the standard interpretation of general relativity leads to a picture involving the principle of equivalence that is not quite satisfying: if one cannot distinguish between the effects in $`N^a`$ and in $`N^g`$, then why is the spacetime in $`N^a`$ anisotropic while in $`N^g`$ it is curved? Taking into account the two results discussed above, the picture becomes perfectly consistent: spacetime in both $`N^a`$ and $`N^g`$ is anisotropic (in $`N^a`$ the spacetime anisotropy is caused by the frame's acceleration, while in $`N^g`$ it originates from the elementary charges that comprise the Earth).
The result that spacetime is anisotropic (that the velocity of light in it is different in different directions) has enormous implications both for understanding the nature of inertia and gravitation and for the possibility of controlling them, since both inertia and gravitation turn out to be electromagnetic in origin, at least in part. (It is now an established, but unexplainably ignored, fact from the electromagnetic mass theory that the inertial mass and inertia are at least partly electromagnetic in origin.) There exist two theoretical possibilities for electromagnetic manipulation of inertia and gravitation:
(i) *Changing the anisotropy of spacetime.* Since one of the corollaries of the electromagnetic mass theory is that the anisotropy of spacetime around a body is caused by the body’s charged constituents (and their electromagnetic fields), the employment of strong electromagnetic fields can create a local spacetime anisotropy which may lead to a body being propelled without being subjected to a direct force.
(ii) *Using the spacetime anisotropy.* Due to the anisotropic velocity of light in an accelerating reference frame, the electromagnetic attraction of the opposite charges of an accelerating electric dipole enhances its accelerated motion; it leads to a self-sustaining accelerated motion perpendicular to the dipole's axis. According to the principle of equivalence, an electric dipole supported in a uniform gravitational field should levitate. In such a way, a strong electromagnetic attraction between oppositely charged parts of a non-inertial device may lead to its propulsion, or at least to a reduction of its mass, thus allowing the external force that accelerates the device (or the weight of the device) to be reduced.
The possibility of manipulating inertia and gravitation by changing the anisotropy of spacetime was reported previously. This paper deals with the possibility of altering inertia and gravitation by using the spacetime anisotropy.
## Self-Sustained Acceleration
The equations of classical electrodynamics applied to an accelerating electric dipole show that it can undergo self-sustaining accelerated motion perpendicular to its axis, meaning that the electromagnetic attraction of the opposite charges of a dipole not only does not resist its accelerated motion but further increases it. The application of the principle of equivalence shows that an electric dipole supported in a uniform gravitational field will also be subjected to a self-sustained acceleration which may lead to the dipole's levitation. Here we shall derive this effect in a gravitational field directly, without applying the equivalence principle.
Consider a non-inertial reference frame $`N^g`$ supported in a gravitational field of strength $`𝐠`$. The gravitational field is directed opposite to the $`y`$ axis. A dipole with a separation distance $`d`$ between the two charges is lying along the $`x`$ axis. Due to the spacetime anisotropy in $`N^g`$ (manifesting itself in the anisotropic velocity of light in $`N^g`$), the electric field of the negative charge $`-q`$ with coordinates ($`-d,0`$) at the point with coordinates ($`0,0`$), where the positive charge $`+q`$ is, is distorted. (Equation (1) below gives the electric field of this charge at rest in a gravitational field. If the charge is instead uniformly accelerated with $`𝐚=-𝐠`$, its electric field at a distance $`d`$ is
$$𝐄_+^a=-\frac{q}{4\pi ϵ_o}\left(\frac{𝐧_+}{d^2}+\frac{(𝐚⋅𝐧_+)}{c^2d}𝐧_+-\frac{𝐚}{c^2d}\right).$$
This is the electric field as described in an inertial reference frame; the calculation of the electric field in the accelerated frame in which the dipole is at rest gives the same expression, due to the anisotropy of spacetime in that frame.) The distorted field is
$$𝐄_+^g=-\frac{q}{4\pi ϵ_o}\left(\frac{𝐧_+}{d^2}-\frac{(𝐠⋅𝐧_+)}{c^2d}𝐧_++\frac{𝐠}{c^2d}\right)$$
(1)
where $`𝐧_+`$ is a unit vector pointing from the negative charge toward the positive charge and $`𝐧_+=\widehat{𝐱}`$, where $`\widehat{𝐱}`$ is a unit vector along the $`x`$ axis. Since $`𝐠⋅𝐧_+=0`$ ($`𝐠`$ is orthogonal to $`𝐧_+`$) the electric field (1) reduces to
$$𝐄_+^g=-\frac{q}{4\pi ϵ_od^2}\widehat{𝐱}-\frac{q}{4\pi ϵ_oc^2d}𝐠.$$
The force with which the negative charge attracts the positive one in the anisotropic spacetime in $`N^g`$ is
$$𝐅_{+-}^g=q\left(1-\frac{𝐠⋅𝐧_+}{2c^2}\right)𝐄_+^g$$
where $`𝐧_+`$ is a unit vector pointing from the positive charge toward the negative charge. Noting that $`𝐠⋅𝐧_+=0`$ we can write
$$𝐅_{+-}^g=-\frac{q^2}{4\pi ϵ_od^2}\widehat{𝐱}-\frac{q^2}{4\pi ϵ_oc^2d}𝐠.$$
(2)
The first term in (2) is the ordinary force with which the negative charge attracts the positive one. The second term represents the vertical component of the force (2) that is opposite to $`𝐠`$ and has a levitating effect on the positive charge.
The calculation of the force with which the negative charge of the dipole is attracted by the positive one gives
$$𝐅_{-+}^g=\frac{q^2}{4\pi ϵ_od^2}\widehat{𝐱}-\frac{q^2}{4\pi ϵ_oc^2d}𝐠.$$
(3)
The net (self) force acting on the dipole as a whole is directly obtained from (2) and (3)
$$𝐅_{self}^g=𝐅_{+-}^g+𝐅_{-+}^g=-\frac{q^2}{2\pi ϵ_oc^2d}𝐠.$$
(4)
Therefore, unlike the attraction of the charges of an inertial dipole, which does not produce a net force acting on the dipole, the mutual attraction of the charges of a dipole in a gravitational field becomes unbalanced and results in a self-force which opposes the dipole’s weight. The effect of the self-force (4) on the dipole can be explained in the sense that a fraction of the dipole of mass
$$m_{att}=\frac{q^2}{2\pi ϵ_oc^2d},$$
(5)
resulting from the unbalanced attraction of the two charges, is subjected to an acceleration $`-𝐠`$ as long as the dipole stays in a gravitational field of strength $`𝐠`$. While the mass (5) remains smaller than the dipole mass, the effect of the self-force (4) will be a reduction of the dipole mass by $`m_{att}`$, since the self-force is opposite to the dipole’s weight. When $`m_{att}`$ becomes equal to the dipole mass (i.e. when $`𝐅_{self}^g`$ becomes equal in magnitude to the weight of the dipole), the dipole starts to levitate. Further increase of $`m_{att}`$ will result in lifting of the dipole.
If the charges of the dipole are an electron and a positron its weight is $`𝐅=2m_e𝐠`$, where $`m_e`$ is the mass of the electron (and the positron). Using the electron electromagnetic mass
$$m_e=\frac{e^2}{4\pi ϵ_oc^2r_0},$$
(6)
where $`r_0`$ is the classical electron radius, we can calculate the resultant force acting on the dipole supported in a gravitational field
$$𝐅_{res}=𝐅+𝐅_{self}^g=\frac{e^2}{2\pi ϵ_oc^2}\left(\frac{1}{r_0}-\frac{1}{d}\right)𝐠.$$
(7)
As seen from (7), the dipole will start to levitate when the separation distance $`d`$ between its charges becomes equal to $`r_0`$. However, this could hardly be achieved in a laboratory since $`r_0\approx 10^{-15}`$ m.
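As an illustration, the levitation condition implied by Eq. (7) can be checked numerically. The Python sketch below, using standard SI constants, evaluates the resultant force for a few assumed separation distances; the script is ours, and only Eq. (7) itself comes from the text.

```python
import math

# Numerical check of Eq. (7): the resultant force on an electron-positron
# dipole vanishes when the separation d equals the classical electron radius.
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
c    = 2.99792458e8        # speed of light, m/s
g    = 9.81                # gravitational acceleration, m/s^2
r0   = 2.8179403262e-15    # classical electron radius, m

def F_res(d):
    """Resultant vertical force (along g) on the dipole, Eq. (7)."""
    return e**2 / (2 * math.pi * eps0 * c**2) * (1.0 / r0 - 1.0 / d) * g

for d in (1e-10, 1e-12, r0):   # atomic-scale separation, smaller, and d = r0
    print(f"d = {d:9.3e} m  ->  F_res = {F_res(d):+.3e} N")
# For d >> r0 this reproduces the full weight 2*m_e*g ~ 1.8e-29 N;
# F_res vanishes at d = r0, the levitation condition quoted above.
```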
Consider now a reference frame $`N^a`$ which is uniformly accelerating with an acceleration $`𝐚=-𝐠`$. Let the dipole be at rest in $`N^a`$, placed in such a way that the acceleration $`𝐚`$ is perpendicular to its axis. In a similar fashion to what we have done in the case of a dipole in $`N^g`$, here too it can be shown that due to the anisotropic speed of light in $`N^a`$ there is a self-force acting on the dipole as a whole, which is given by
$$𝐅_{self}^a=\frac{q^2}{2\pi ϵ_oc^2d}𝐚$$
(8)
(the self-force (8) can be directly obtained from (4) by applying the equivalence principle and substituting $`𝐠=-𝐚`$). A fraction of the dipole of mass $`m_{att}`$ resulting from the unbalanced attraction of the two charges will be subjected to an acceleration $`𝐚`$ as long as the whole dipole is experiencing the same acceleration $`𝐚`$. This means that the fraction of the dipole of mass $`m_{att}`$ accelerates on its own (due to the unbalanced attraction of the two charges), which results in a reduction of the dipole’s resistance to being accelerated by the external force. Therefore, in order to maintain the same acceleration $`𝐚`$, the external force accelerating the dipole should be reduced. Stated another way, the dipole mass is effectively reduced, and the resistance which the dipole offers to being accelerated is reduced as well. When $`m_{att}`$ becomes equal to the dipole mass, the resistance of the dipole to being accelerated will cease and consequently there will be no external force needed to accelerate it. The dipole will continue to maintain its acceleration entirely on its own; it will be in a state of self-sustained accelerated motion. This type of motion resembles the inertial motion of an object: as a free object continues to move with constant velocity until being prevented from doing so, a dipole (initially accelerated by an external force) whose charges and separation distance ensure that $`m_{att}`$ is equal to the dipole mass will continue to move with constant acceleration on its own until being prevented from doing so.
If the charges of the dipole are an electron and a positron the external force accelerating the dipole will be
$$𝐅_{ext}=2m_e𝐚.$$
Taking into account (8), the dipole will maintain a constant acceleration if
$$𝐅_{ext}+𝐅_{self}^a=2m_e𝐚.$$
Noting that for an electron and a positron the mass in (8) will be
$$m_{att}=\frac{e^2}{2\pi ϵ_oc^2d}$$
we can write
$$𝐅_{ext}=\left(2m_e-m_{att}\right)𝐚.$$
If we assume that $`m_{att}>2m_e`$ (which is unlikely to be achieved since the separation distance $`d`$ between the charges should be smaller than the dimension $`r_0`$ of the classical electron considered) an external force would be needed to slow down the dipole in order that it maintains its uniform acceleration $`𝐚`$.
The effect of mass reduction caused by the mutual attraction of the accelerating dipole’s charges can be described in the following way as well. Instead of regarding the self-force (8) as subjecting only a part of the dipole of mass $`m_{att}`$ to the acceleration $`𝐚`$ it is also possible to say that $`𝐅_{self}^a`$ subjects the whole dipole of mass $`2m_e`$ to an acceleration $`𝐚_{att}`$ originating from the unbalanced attraction between the electron and the positron. Then using the electron electromagnetic mass (6) we obtain the relation between $`𝐚_{att}`$ and $`𝐚`$
$$\frac{e^2}{2\pi ϵ_oc^2r_0}𝐚_{att}=\frac{e^2}{2\pi ϵ_oc^2d}𝐚$$
or
$$𝐚_{att}=\frac{r_0}{d}𝐚.$$
(9)
As seen from (9), the dipole will experience a self-sustained acceleration $`𝐚_{att}\geq 𝐚`$ if $`d\leq r_0`$.
## Conclusion
At present it does not appear realistic to expect that a self-sustained acceleration of a body (equal to or greater than its initial acceleration) can be achieved. However, the possibility of eventual practical applications of the effect of mass reduction can be assessed if macroscopic charge distributions are considered. It appears that most promising will be the use of specially designed capacitors, like the commercial ones which consist of alternately charged layers of metal foil rolled into the shape of a cylinder, with the cylinder axis parallel to $`𝐚`$ or $`𝐠`$. Such capacitors can be charged to large amounts of charge; there are capacitors already available on the market that can carry a charge well above $`1C`$. With such (and greater) amounts of charge the experimental testing of the mass reduction effect now appears possible, although confirmation is hardly in question, since the effect is a direct consequence of classical electrodynamics applied to a non-inertial dipole. The purpose of this paper is to demonstrate that the practical applicability of the mass reduction effect may be within reach if proper technological effort is invested.
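To make the orders of magnitude concrete, the Python sketch below evaluates the mass reduction of Eq. (5) for a few macroscopic charges and separations; the values of $`q`$ and $`d`$ are illustrative assumptions, not parameters of any actual device.

```python
import math

# Order-of-magnitude estimate of the mass reduction m_att = q^2/(2 pi eps0 c^2 d)
# for a macroscopic pair of opposite charges (illustrative numbers only).
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s

def m_att(q, d):
    """Mass associated with the unbalanced attraction, Eq. (5); q in C, d in m."""
    return q**2 / (2 * math.pi * eps0 * c**2 * d)

for q, d in [(1.0, 1e-3), (1.0, 1e-4), (10.0, 1e-3)]:
    print(f"q = {q:5.1f} C, d = {d:7.1e} m  ->  m_att = {m_att(q, d) * 1e3:8.3f} g")
# A 1 C pair separated by 1 mm gives m_att of order 0.2 g.
```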
## Acknowledgement
I would like to thank Dr. G. Hathaway for bringing to my attention the papers of Cornish and Griffiths in a discussion of the effects considered here.
# Inherent Structure Entropy of Supercooled Liquids
## Abstract
We present a quantitative description of the thermodynamics in a supercooled binary Lennard Jones liquid via the evaluation of the degeneracy of the inherent structures, i.e. of the number of potential energy basins in configuration space. We find that for supercooled states, the contribution of the inherent structures to the free energy of the liquid almost completely decouples from the vibrational contribution. An important byproduct of the presented analysis is the determination of the Kauzmann temperature for the studied system. The resulting quantitative picture of the thermodynamics of the inherent structures offers new suggestions for the description of equilibrium and out-of-equilibrium slow-dynamics in liquids below the Mode-Coupling temperature.
In recent years, a significant effort has been devoted to understanding the fundamental nature of glass-forming materials, a long-standing open problem in condensed matter physics. Theoretical, experimental and numerical efforts have broadened our knowledge of the physical mechanisms responsible for the dramatic slowing down of the dynamics in supercooled liquids (by more than 15 orders of magnitude) as the temperature $`T`$ changes over a small range on approaching the glass transition temperature.
Some recent theoretical approaches build upon ideas which were presented several decades ago. In these works, the slowing down of the dynamics was connected to the presence of basins in configuration space. The short time dynamics (on a $`ps`$ time scale) was related to the process of exploring a finite region of phase space around a local potential energy minimum, while the long time dynamics was connected to the transition among different local potential energy minima. In this picture, upon cooling the intra-basin motion becomes more and more separated in time from the slow (and strongly $`T`$-dependent) inter-basins motion. The decrease of the entropy of supercooled liquids on cooling was associated with the progressive ordering of the system in configuration space, i.e. in the progressive population of basins with deeper energy but of lower degeneracy.
Following these ideas, Stillinger and Weber introduced the concept of inherent structure (IS), defined as local minimum configuration of the $`3N`$-dimensional potential energy surface. A basin in configuration space was defined as the set of points that — via a steepest descent path along the potential energy hypersurface — map to the same IS. This precise operational definition of a basin allows the configuration space to be partitioned into an ensemble of distinct basins. Thus, the canonical partition function $`Z_N`$ for a system of $`N`$ atoms at inverse temperature $`\beta =1/k_BT`$ can be written as
$$Z_N=\lambda ^{-3N}\sum _\alpha \mathrm{exp}(-\beta \mathrm{\Phi }_\alpha )\int _{R_\alpha }\mathrm{exp}[-\beta \mathrm{\Delta }_\alpha (𝐫^N)]𝑑𝐫^N$$
(1)
where $`R_\alpha `$ is the set of points composing the basin $`\alpha `$, $`\mathrm{\Phi }_\alpha `$ is the potential energy of minimum $`\alpha `$ and the non-negative quantity $`\mathrm{\Delta }_\alpha (𝐫^N)`$ measures the potential energy at a point $`𝐫^N`$ belonging to the basin $`\alpha `$ relative to the minimum. The integration over the momenta introduces the thermal wavelength $`\lambda =\sqrt{\beta h^2/2\pi m}`$, where $`m`$ is the mass. Eq. (1) shows that both the IS energy and the thermal excitation within the basin region $`R_\alpha `$ contribute to $`Z_N`$. Stillinger and Weber also noted that, if the value of the potential energy minimum uniquely characterizes the properties of the basin, then a very powerful simplification of Eq. (1) can be performed. By introducing a density of states $`\mathrm{\Omega }(e_{IS})`$ with IS-energy $`e_{IS}`$, $`Z_N`$ can be written as
$$Z_N\simeq \int 𝑑e_{IS}\mathrm{\Omega }(e_{IS})\mathrm{exp}[-\beta e_{IS}-\beta f(\beta ,e_{IS})]$$
(2)
where
$$\beta f(\beta ,e_{IS})=-\mathrm{ln}\left(\int _{R(e_{IS})}\mathrm{exp}[-\beta \mathrm{\Delta }_{e_{IS}}\mathrm{\Phi }(𝐫^N)]\frac{d𝐫^N}{\lambda ^{3N}}\right)$$
(3)
can be interpreted as the free energy of the system when confined in one of the characteristic basins with IS energy $`e_{IS}`$. Then, the probability that a configuration of the liquid extracted from an equilibrium ensemble of configurations at temperature $`T`$ is associated with an IS with energy between $`e_{IS}`$ and $`e_{IS}+\delta e_{IS}`$ is
$$P(e_{IS},T)=\frac{\mathrm{\Omega }(e_{IS})\delta e_{IS}\mathrm{exp}[-\beta e_{IS}-\beta f(\beta ,e_{IS})]}{Z_N(\beta )}=\frac{\mathrm{exp}[-\beta (e_{IS}-TS_{conf}(e_{IS})+f(\beta ,e_{IS}))]}{Z_N(\beta )}$$
(4)
where we have defined $`S_{conf}(e_{IS})\equiv k_B\mathrm{ln}[\mathrm{\Omega }(e_{IS})\delta e_{IS}]`$, since $`\mathrm{\Omega }(e_{IS})\delta e_{IS}`$ is the number of states between $`e_{IS}`$ and $`e_{IS}+\delta e_{IS}`$.
The formalism proposed by Stillinger and Weber, although often used in the past to clarify structural issues in liquids, was for a long time not applied quantitatively to computer studies of the glass-transition problem, due to the significant computational effort required to equilibrate atomic configurations at low $`T`$. Only recently, Sastry et al. addressed the problem of evaluating the $`T`$ dependence of the average IS energy ($`\overline{e}_{IS}`$) in supercooled states in a binary mixture of LJ particles, observing a significant decrease of $`\overline{e}_{IS}`$ on supercooling. This result, also observed for other models of liquids, e.g. in models for water and orthoterphenyl, furnishes strong evidence of the relevant role played by the low-energy basins on cooling. In a recent work, we proposed to invert the relation between $`\overline{e}_{IS}`$ and $`T`$ to define an effective temperature at which the configurational part of the system is in equilibrium. This hypothesis, which has proven useful in interpreting the aging process in a model liquid in terms of progressive thermalization of the IS, supports the validity of Eq. (2) and, together with the work of Sastry et al., calls for an effort to check the formal expression for the supercooled liquid free energy, i.e. its $`T`$-range of validity, and to evaluate the $`e_{IS}`$ dependence of the configurational entropy. This article is a first effort in this direction.
The model system we study is the well-known 80-20 Lennard-Jones $`AB`$ binary mixture, composed of 1000 atoms in a volume $`V_o=(9.4)^3`$, corresponding to a reduced density of 1.2039. Units of length and energy are defined by the $`\sigma `$ and $`ϵ`$ parameters of the $`AA`$ Lennard-Jones interaction potential. The mass of atom $`A`$ is chosen to be 1. In these units, $`k_B=1`$. Simulations, covering the range $`0.446<T<5`$, have been performed in the canonical ensemble by coupling the system to a Nosé-Hoover thermostat. This system is well characterized and its slow dynamics has been studied extensively. The critical temperature of Mode Coupling Theory, $`T_{MCT}`$, for this system is $`0.435`$.
Between 500 and 1000 equilibrium configurations for each $`T`$ (covering more than $`80`$ million integration time steps for each $`T`$) have been quenched to their local minima by using a standard conjugate-gradient minimization algorithm. By performing this large number of quenches we are able to determine not only $`\overline{e}_{IS}`$ and its $`T`$ dependence but also the probability distribution $`P(e_{IS},T)`$, shown in Fig. 1-(A).
In the $`T`$-region where Eq. (4) is supposed to hold, curves of $`\mathrm{ln}[P(e_{IS},T)]+\beta e_{IS}`$ are equal to $`S_{conf}(e_{IS})/k_B-\beta f(\beta ,e_{IS})`$, except for the $`T`$-dependent constant $`\mathrm{ln}[Z(\beta )]`$. If $`f(\beta ,e_{IS})`$ has only a weak dependence on $`e_{IS}`$, then it is possible to superimpose $`P(e_{IS},T)`$ curves at different temperatures which overlap in $`e_{IS}`$. The resulting $`e_{IS}`$-dependent curve is, except for an unknown constant, $`S_{conf}(e_{IS})/k_B`$ in the $`e_{IS}`$ range sampled within the studied $`T`$ interval. This procedure is displayed in Fig. 1-(B). We note that while below $`T=0.8`$ curves for different $`T`$ lie on the same master curve, above $`T=0.8`$ curves for different $`T`$ have a different $`e_{IS}`$ dependence, thus showing the progressive $`e_{IS}`$-dependence of $`f(\beta ,e_{IS})`$. The overlap between different $`P(e_{IS},T)`$ curves below $`T=0.8`$ indicates that the obtained master curve is indeed, except for an unknown constant, the $`e_{IS}`$ configurational entropy, i.e. the logarithm of the number of basins with the same $`e_{IS}`$ value.
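A minimal sketch of this superposition procedure, in Python, is given below; the input histograms would come from the quench statistics, and the constant-shift matching scheme is our illustration of the idea rather than the exact analysis code.

```python
import numpy as np

# Master-curve construction used for Fig. 1-(B): at each T,
# ln P(e_IS,T) + e_IS/T equals S_conf(e_IS)/k_B - beta*f(beta) - ln Z(beta)
# (k_B = 1 units), so curves from different temperatures can be shifted by
# constants until they superimpose on their overlapping e_IS range.
def master_curve(P_of_T):
    """P_of_T: dict mapping T -> (e_IS bin centers, normalized histogram)."""
    e_ref = s_ref = None
    for T in sorted(P_of_T):
        e, P = P_of_T[T]
        mask = P > 0
        e, s = e[mask], np.log(P[mask]) + e[mask] / T
        if e_ref is None:
            e_ref, s_ref = e, s
        else:
            lo, hi = max(e.min(), e_ref.min()), min(e.max(), e_ref.max())
            if lo >= hi:
                continue                   # no overlap in e_IS: cannot match
            sel = (e >= lo) & (e <= hi)    # shift so curves agree on overlap
            shift = np.mean(np.interp(e[sel], e_ref, s_ref) - s[sel])
            e_ref = np.concatenate([e_ref, e])
            s_ref = np.concatenate([s_ref, s + shift])
        order = np.argsort(e_ref)          # keep the abscissa sorted for interp
        e_ref, s_ref = e_ref[order], s_ref[order]
    return e_ref, s_ref   # S_conf(e_IS)/k_B up to one overall constant
```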
It is particularly relevant that, for $`T<0.8`$, $`f(\beta ,e_{IS})\approx f(\beta )`$ and thus $`Z_N`$ (Eq. 2) is well approximated by the product of a vibrational contribution \[$`e^{-\beta f}`$\] and a configurational contribution depending only on the IS energies and their degeneracy \[$`\int 𝑑e_{IS}\mathrm{\Omega }(e_{IS})e^{-\beta e_{IS}}`$\]. Thus the liquid can be considered as composed of two independent subsystems, described respectively by the IS and by the vibrational part. The IS subsystem can be considered as a continuum of levels characterized by an energy value $`e_{IS}`$ and an associated degeneracy $`\mathrm{\Omega }(e_{IS})`$. When the IS subsystem is in thermal equilibrium (with the vibrational subsystem), the $`T`$ dependence of the average configurational entropy $`\overline{S}_{conf}`$ can be evaluated using the standard thermodynamic relation
$$\frac{d\overline{S}_{conf}(T)}{d\overline{e}_{IS}}=\frac{1}{T}$$
(5)
i.e. by integrating the $`T`$ dependence of $`d\overline{e}_{IS}/T`$.
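A minimal numerical sketch of this integration is shown below; the temperature grid and $`\overline{e}_{IS}`$ values are made-up placeholders standing in for the measured averages.

```python
import numpy as np

# Thermodynamic integration of Eq. (5):
# S_conf(T) - S_conf(T_min) = integral of (1/T) d e_IS(T),
# evaluated here with a trapezoid rule in d(e_IS)/T.
T    = np.array([0.446, 0.5, 0.6, 0.8, 1.0, 2.0, 5.0])              # placeholder grid
e_IS = np.array([-7.72, -7.70, -7.67, -7.62, -7.58, -7.47, -7.35])  # placeholders

dS = np.cumsum(np.concatenate(
    [[0.0], np.diff(e_IS) / (0.5 * (T[1:] + T[:-1]))]))
for t, s in zip(T, dS):
    print(f"T = {t:5.3f}   S_conf(T) - S_conf(T_min) = {s:+.4f} k_B")
```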
While the evaluation of the $`T`$-dependence of $`S_{conf}`$ already furnishes relevant information, the evaluation of the unknown integration constant would allow a determination of the number of IS with the same $`e_{IS}`$ and, via a suitable low-$`T`$ extrapolation, a determination of the so-called Kauzmann temperature $`T_K`$, i.e. the $`T`$ at which the configurational entropy appears to approach zero. To do so we exploit the fact that Eq. (2) predicts that the liquid free energy $`F_{liquid}(T)`$ can be written as
$$F_{liquid}(T)=-k_BT\mathrm{ln}[Z_N(T)]=\overline{e}_{IS}(T)-TS_{conf}(\overline{e}_{IS}(T))+f(\beta ,\overline{e}_{IS}(T))$$
(6)
where $`\overline{e}_{IS}(T)`$ is the $`e_{IS}`$ value which maximises the integrand in Eq. (2). We assume that at the lowest studied $`T`$, the unknown $`f(\beta ,\overline{e}_{IS})`$ can be approximated by the harmonic free energy of a disordered system characterized by the eigenfrequency spectrum calculated from the distribution of IS at the corresponding $`T`$. In this approximation, the entropy of the liquid, which can be calculated via thermodynamic integration, provides an estimate of the configurational entropy in absolute units once the entropy of the corresponding harmonic disordered solid is subtracted.
To calculate the liquid entropy we perform a thermodynamic integration, first along the $`T=5.0`$ isotherm from infinite volume down to $`V_o`$, followed by a $`T`$ integration of the specific heat at fixed volume down to the lowest studied temperature. Since the $`T`$ dependence of the potential energy $`E`$ along the studied isochore is extremely well described by the law $`E(T)\sim T^{3/5}`$, in agreement with recent theoretical predictions for dense fluids, the potential energy contribution to the liquid entropy follows the law $`T^{-2/5}`$. By adding the kinetic energy contribution we obtain an analytic expression which can reliably be extrapolated to temperatures lower than the studied ones.
The evaluated $`T`$ dependence of the liquid and disordered-solid entropies is reported in Fig. 2. We also show in Fig. 2 the contribution to the disordered-solid entropy arising from the $`T`$-dependence of the eigenfrequency spectrum, to confirm that the $`T`$-dependence of the harmonic solid frequencies contributes only weakly to the entropy. The $`T`$ dependence of $`\mathrm{\Delta }S\equiv S_{liquid}-S_{disorderedsolid}`$ is shown in Fig. 3-(A). We note that this difference vanishes at $`T=0.297\pm 0.02`$, which defines $`T_K`$ for the studied binary mixture. An independent recent estimate of $`T_K`$ for this system, based on an integral equation approach and on a similar analysis of simulation data, is $`T_K=0.29`$. The resulting ratio between $`T_K`$ and $`T_{MCT}`$ supports the view that the studied system has an intermediate fragility character, as recently predicted by Angell and coworkers on the basis of a comparison between experimental results and numerical data for the same system.
At the lowest studied temperatures, where the harmonic approximation is valid, $`\mathrm{\Delta }S`$ coincides with $`S_{conf}(T)`$. We use this identity to calculate the unknown integration constant for the inherent structure entropy. Moreover, by integrating $`Td(\mathrm{\Delta }S(T))`$, both the configurational-energy dependence of the configurational entropy (Fig. 1-(C)) and the $`T`$ dependence of the configurational energy (Fig. 3-(B)) can be evaluated, allowing us to bridge the gap between $`T_K`$ and the lowest $`T`$ at which we were able to equilibrate the system. The present analysis predicts $`e_{IS}(T_K)=-7.82\pm 0.01`$. Thus, both the configurational entropy and energy around $`T_{MCT}`$ are roughly halfway between their $`T_K`$ and high-$`T`$ values, suggesting that the ordering process in configuration space at the lowest temperature which we have been able to equilibrate is far from complete.
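A sketch of how $`T_K`$ follows from this functional form: writing $`\mathrm{\Delta }S(T)=a-bT^{-2/5}`$, the root of $`\mathrm{\Delta }S`$ gives $`T_K`$. In the actual analysis $`a`$ and $`b`$ would come from a fit to the measured $`\mathrm{\Delta }S(T)`$; the coefficients below are chosen only so that the root lands near the measured value.

```python
# Locating the Kauzmann temperature from the entropy-difference law
# dS(T) = a - b*T**(-2/5), which vanishes at T_K = (b/a)**(5/2).
a = 2.0                        # illustrative high-T plateau of dS, in k_B
b = a * 0.297**0.4             # chosen so that T_K ~ 0.297 by construction

def delta_S(T):
    return a - b * T**(-0.4)

T_K = (b / a) ** 2.5
print(f"T_K = {T_K:.3f}, delta_S(T_K) = {delta_S(T_K):.2e}")
```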
The data reported in this article offer a quantitative thermodynamic analysis of the supercooled state. This picture confirms the fruitful ideas put forward a long time ago and shows that a thermodynamic approach to the inherent-structure subsystem becomes possible in supercooled states, since the thermodynamics of the inherent structures almost completely decouples from the “vibrational” thermodynamics. In particular, the quantitative evaluation of the degeneracy of the inherent structures for a well-characterized system constitutes a basis for a comprehensive description of the slow dynamics below $`T_{MCT}`$, the $`T`$-range in which an accurate theoretical prediction is still missing. Previous estimates of the degeneracy of $`e_{IS}`$, the key ingredient for a description of the dynamics in terms of IS, were available only for systems composed of fewer than 50 atoms. The proposed view of a supercooled liquid as composed of two weakly coupled subsystems (the IS subsystem and the “vibrational” subsystem) will also offer stimulating ideas for a microscopic understanding both of the recently proposed out-of-equilibrium thermodynamics theories and of the aging process.
Acknowledgment: We thank B. Coluzzi, G. Parisi and P. Verrocchio for sharing with us their recent independent results on the BM LJ system. We thank G. Parisi for bringing a relevant reference to our attention. We thank S. Ciuchi, A. Crisanti, K. Kawasaki, E. La Nave, P. Poole, and A. Scala for comments. F.S. and P.T. acknowledge support from MURST PRIN 98 and W.K. from the DFG through SFB 262.
# Effects of construction history on the stress distribution under a sand pile
## Abstract
We report experiments on cohesionless granular piles to determine the effect of construction history on the static stress distribution. The stresses beneath the piles are monitored using a very sensitive capacitive technique. The piles are formed either by release of granular material from a relatively small outlet (localized source) or from a large-diameter sieve (homogeneous rain). The stress profiles resulting from localized-source inputs have a clear stress dip near the center of the pile, while the results from a homogeneous rain show no stress dip. We also show that the stress profiles scale simply with the pile height. Experiments on wedge-shaped piles show the same effects but to a lesser degree.
Granular systems have captured much recent interest because of their rich phenomenology, and important applications. Static arrays show inhomogeneous spatial stress profiles called stress chains, where forces are carried primarily by a small fraction of the total number of grains. Recent numerical simulations and experiments have shown that the structure and the nature of these chains plays a critical role in the dynamics and statics of dense granular systems even in the absence of strong disorder of the granular packings (see Fig. 1). Necessarily, the presence of these chains must be reflected in the continuum constitutive relations which are needed to close the governing equations and thereby, solve even the simplest boundary value problems in granular statics .
The stress profile under a static pile of granular material provides a useful method for probing the effects of stress chains and the history of their formation. The literature contains many experiments examining stress profiles under static piles of granular material. Although there are a number of such studies, they are not in mutual agreement. In addition, a number of competing constitutive models have been invoked to explain the experimental observations.
The present experiments have been carried out with the aim of resolving the experimental conflict by determining as carefully as possible the relation between the preparation of a heap and the stress profile at its base. It is important to clarify the effect of construction technique on the stress structure for the following reasons:
1. To help understand the wide variation in past data.
2. To test some of the theories which depend explicitly on construction history.
Of the possible pile geometries, conical and wedge-shaped heaps have been the most frequently studied. Many of the experiments on conical piles have indicated, contrary to simple intuition, that there is a dip in the pressure profile beneath the center. The existence of a dip in the stress profile for wedge-shaped piles is an open question.
There are important technical considerations in determining whether there is a stress dip. The most important of these is the fact that even modest deformations either of the surface supporting the pile or of the force detector may lead to erroneous measurements. In addition, if the pile is formed by dropping material onto the heap from a considerable height, as opposed to gentler deposition methods, it is likely that residual stresses become frozen into the heap. In such a case, or for a very heavy load, there is likely to be a characteristic length associated with the deformation of the pile under its own weight.
There are only a few reported experiments addressing the influence of the construction technique or filling rate on the stress. (However, two of us have probed the effect of the granular packing history on the mean pressure at the bottom of a silo.) Regarding sand-piles in particular, we are aware of only one set of experiments that considered the effect of construction technique on stress profiles, namely the work of Lee and Herrington for wedge-shaped piles of sand. These authors constructed piles using three different methods and found that the different construction techniques yielded results that were identical within the resolution of their instruments; no dip was recorded.
In the present experiments, we explore the effects of construction procedures on the pressure profiles using two different methods to build both conical and wedge-shaped heaps. We use detectors with very high resolution and very small deflection. We also build these piles on rigid base plates. In the case of a conical pile, we explicitly investigate the scaling of this profile with total mass for piles with a height ratio up to three times that of the smallest pile, corresponding to a mass ratio of $`30`$.
Several details of these experiments are important. We used sand of diameter $`1.2mm`$ $`\pm 0.4mm`$ and angle of repose $`32^o`$. The base plate on which we constructed most of these piles was a $`15.0`$ $`mm`$ thick duralumin support which was adequate to prevent deflection under the weight of the pile. (Some additional experiments were carried out using a $`1.3mm`$ steel base. These experiments used a fixed funnel height, and they are discussed below.) For a typical sand pile of $`H=8cm`$ height, we estimated the maximal sagging of the bottom plate to be $`w_m=6.5\mu m`$. Therefore $`w_m/H=10^{-5}`$, a value smaller by a factor of $`10^3`$ than the relative deflection for which sagging of the base might create a significant perturbation. A single capacitive normal stress (i.e. pressure) sensor of diameter $`11.3`$ mm ($`9`$ grain diameters) was placed flush with the surface of the base plate. We then determined the normal stress at various locations along the radial axis of the conical piles or along the short edge of the wedge-shaped piles by repeated construction of heaps with the same mass of sand. The resolution of the measuring device was $`0.25\%`$ of the typical maximum stress for an $`8cm`$ pile, which corresponded to a vertical deflection of the sensor of $`1.3\mu m`$. We tested the consistency of the measurements with different membrane thicknesses, and we found consistent results within experimental resolution. Here, we present data obtained with only one of these membranes, which had a thickness $`t=100\mu m`$. The sensor was calibrated against the hydrostatic pressure of a water column. However, the response of the sensor to known weights of granular material was consistently somewhat smaller, by a factor of $`0.9`$, than for water. We emphasize that this reduction was constant throughout the measurements. In particular, using a calibration based on granular mass, we consistently found that the integrated weight of the pile was correct.
We constructed both types of heaps by two qualitatively different procedures. The first of these used a funnel and we refer to it as the ‘localized source’ procedure; the second used a sieve, and we refer to it as the ‘raining procedure’. The following paragraphs give details on each method. Fig. 2 and Fig. 3 show photographs of these two configurations.
The localized-source procedure: We formed the pile using a funnel with an outlet that was much smaller than the final pile diameter. The funnel lifted steadily and slowly so that the outlet was always slightly above the apex during the heap formation. This approach, as opposed to a fixed funnel height, avoided effects from the deposition of particles with large kinetic energies that varied with the distance between the apex of the heap and the bottom of the funnel. For conical piles, the sand emptied from a conical funnel of outlet diameter $`11.7`$ mm ($`10`$ grains) onto the duralumin plate; the latter had a diameter larger than that of the final heap. For wedge-shaped piles the sand emptied from a wedge-shaped funnel with an outlet that was $`11.7`$ mm in the short direction and $`20`$ cm in the long direction. The dimensions of the supporting surface and the bottom of the wedge-shaped pile were $`20`$ cm X $`26`$ cm. Boundaries consisting of two Plexiglas walls $`2.0`$ cm thick and taller than the peak of the pile preserved the wedge-shape as the pile formed; the remaining two sides, parallel to the long direction of the wedge, were open. The sensor was placed halfway between the supporting walls, and at various distances from the centerline of the heap. During the experiments we measured the volume of the known mass of granular material forming the pile to determine its average volume fraction, $`\rho `$.
The raining procedure: The second construction method was designed to build up a pile in which the stress chains, and hence the principal stress directions, were more nearly vertical. The containers from which the sand was poured had cross-sectional dimensions slightly larger than the platform on which the heap formed. The bottoms of these containers were wire meshes with $`0.40`$ cm diameter holes. To form the heaps, the containers were filled while resting on the platform; they were then raised slowly above the platform, allowing a steady rain of sand onto the heap. Excess sand, at angles greater than the angle of repose, was allowed to avalanche off the platform. For this procedure, the base platform had the same size as the bottom of the final pile. The final mass of sand and the pile volume was measured at the end of the procedure. For conical piles we used a cylindrical container and a supporting platform of diameter $`26`$ cm ($`236`$ grain diameters). For the wedge-shaped piles, we used a rectangular box with dimensions $`20`$ cm X $`26`$ cm; the platform was identical to the one used in the localized-source procedure.
Pressure profiles and photographs of the final conical and wedge-shaped piles are shown in Figs. 2 and 3. The distance from the center of the heap is scaled by $`R`$, where $`R`$ is the pile radius for conical heaps, and the distance from the center axis for wedge-shaped heaps. The pressure is scaled by the hydrostatic pressure, $`\rho gH`$. The bars represent the standard deviation of several independent runs, not experimental error, which is about $`0.25\%`$.
The entire weight of each pile was obtained by curve-fitting the measured profile and integrating over the base area. This value is then compared with the known weight of the pile. For conical piles and localized-source wedge-shaped piles the discrepancy between the two measurements is about $`1.5\%`$ or less. For the raining procedure applied to wedge-shaped piles, we observe a discrepancy as large as $`8\%`$. This relatively large “missing mass” for the wedge-shaped pile may be caused by a screening effect of the walls, which support some of the weight.
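A minimal sketch of this consistency check for the conical geometry is given below; the fitted profile, the bulk density, and the pile dimensions are illustrative stand-ins for the measured ones.

```python
import numpy as np

# Consistency check: integrate a fitted pressure profile over the circular
# base, W = 2*pi * int_0^R p(r) r dr, and compare with the known pile weight.
R, H = 0.13, 0.08              # pile radius and height, m (illustrative)
rho, g = 1600.0, 9.81          # assumed bulk density (kg/m^3) and gravity

def p_fit(r):
    """Toy profile with a central dip, in Pa (stand-in for the actual fit)."""
    x = r / R
    return rho * g * H * (0.3 + 1.4 * x * np.exp(-4.0 * x)) * (x < 1.0)

r = np.linspace(0.0, R, 2001)
W_integrated = 2.0 * np.pi * np.trapz(p_fit(r) * r, r)
W_known = rho * g * np.pi * R**2 * H / 3.0     # weight of a cone of sand
print(f"integrated / known weight = {W_integrated / W_known:.3f}")
# For the real fitted profiles the ratio is within ~1.5% of unity (see text);
# the toy profile used here is only schematic.
```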
Data for the conical piles created by the localized-source method show a clear pressure minimum at $`r/R=0`$. A maximum in the stress of $`0.6\rho gH`$ occurs at a position $`r/R\approx 0.3`$, which agrees reasonably well with previous conical pile data. The dimensionless stress at $`r/R=0`$, $`0.3\rho gH`$, is $`50\%`$ lower than the maximum stress. Experiments performed with a fixed funnel height show a larger pressure difference between the maximum at $`r/R\approx 0.3`$ and the value at $`r/R=0`$. This suggests that the particles pack differently with different deposition energies. For the raining procedure, the pressure is clearly largest near or at $`r=0`$, with maximum value $`0.6\rho gH`$.
A dip does not occur in the profiles of the heaps created by the raining method. Rather, there is a peak pressure of about $`0.6\rho gH`$ at $`r/R=0`$, and a steady drop in the pressure moving out towards the edge of the pile.
For the wedge-shaped piles we find results qualitatively identical to those for conical piles. For the raining procedure, the stress profile shows no indication of a central dip. However, for the localized-source case there is a clear minimum at $`r/R=0`$. The value of this dip is notably smaller than for the analogous conical heap, i.e. only $`15\%`$ lower than the maximum stress, rather than $`50\%`$ lower. The pressure at the center is about $`0.65\rho gH`$. The maximum in the stress occurs at $`r/R\approx 0.25`$ with a value of about $`0.75\rho gH`$. While the dip is smaller than in the conical pile case, there is a definite variation in the shapes of the profiles which indicates a difference in the stress structure caused by the deposition process.
An important question concerns the dependence of the stress profile on the heap size. Earlier experiments suggested that the size and relative position of the stress maximum may vary with the size of the pile. Alternatively, Radjai has suggested that the relative size of the funnel opening to the size of the heap may be important.
We have investigated the issue of heap size by considering conical piles built with the localized-source procedure. Specifically, we obtained data for heap heights spanning $`4.5`$ cm to $`14.0`$ cm by simply stopping the filling process at various stages to obtain stress data. This variation by a factor of $`3`$ in the maximum height of the piles corresponds to a variation by a factor of $`30`$ in the mass, and hence in the peak stress (for a cone of fixed repose angle the mass grows as $`H^3`$, and $`3^3=27`$). The resulting data are displayed in Fig. 4. While there is some scatter in the results, the normalized profiles collapse surprisingly well. The peak occurs consistently at $`r/R=0.3`$, and the stress at $`r=0`$ is consistently $`50\%`$ of the peak stress. This finding disagrees with earlier studies by Jotaki et al., who also examined conical piles formed by pouring from funnels. These authors found that the larger piles had deeper dips in the stress at the center. The difference between their data and ours is that Jotaki et al used a fixed funnel height for a given heap height. Larger piles were formed by setting the funnel progressively higher, so material dropped from the larger heights had more energy than for smaller heights. The height dependence observed by Jotaki et al may be explained by density differences in the packings, with a corresponding height dependence of the scaled stresses. In experiments where we fixed the funnel height at a height $`z>H`$ above the base, we found that the stress dips were deeper than in the experiments where we gradually raised the funnel.
To conclude, we have shown that the construction history affects the pressure distribution at the bottom of a sand pile on a rigid base. These experiments were conducted for conical and wedge-shaped piles. We observed the existence of a pressure dip at the center of a sand pile if the filling procedure corresponded to a localized source. We found that the pressure profile scaled linearly with the pile height, within the experimental scatter. It seems likely that the progressive formation of the pile by successive small avalanches leads to the occurrence of a pressure dip. In the case of a more uniformly vertical filling via a raining procedure, the dip disappears. A localized-source procedure with a fixed pouring height tends to produce a height dependent stress profile (with a dip). We have shown that the dip in these experiments cannot be caused by a deformation of the base. If small deflections of the base (of order $`10^{-5}`$) were an issue, then that effect should appear in both the localized-source and raining procedures, and it would also prevent the collapse of the data for different heap heights.
A heuristic explanation of the mechanism producing the dip is that the flow of particles during the localized-source procedure forms stress chains oriented preferentially in the direction of the slope (c.f. Fig. 1). These chains form arches which shield the center from some of the weight, thereby forming the dip. These effects agree qualitatively with the explanations of Wittmer et al. Explanations for the magnitude of the dip and its variation with geometry are still lacking. We will present additional details and a more extensive comparison to theory elsewhere.
Acknowledgments We appreciate useful interactions with and comments from Shigeyuki Tajima. The work of DH and RPB was supported by the National Science Foundation under Grants DMR-9802602 and DMS-9803305, and by NASA under Grant NAG3-1917. This work was also supported by grant P.I.C.S.-563 from the CNRS. Two of us (E.C. and L.V.) acknowledge the efficient technical assistance and the pertinent advice of J. Lanuza, P. Lepert and J. Servais.
# Electric Screening Mass of the Gluon with Gluon Condensate at Finite Temperature
USM-TH-80
Iván Schmidt<sup>1</sup> and Jian-Jun Yang <sup>1,2</sup>
<sup>1</sup> Departamento de Física, Universidad Técnica Federico Santa Maria,
Casilla 110-V, Valparaíso, Chile
<sup>2</sup> Department of Physics, Nanjing Normal University, Nanjing 210097, P. R. China
Abstract
The electric screening mass of the gluon at finite temperature is estimated by considering the gluon condensate above the critical temperature. We find that the thermal gluons acquire an electric mass of order $`T`$ due to the gluon condensate.
PACS number(s): 12.38.Mh; 12.38.Lg; 52.25.Kn
1. Introduction
The color screening effect is one of the main features of the high temperature quark-gluon plasma (QGP). The value of the electric screening mass ($`m_e\sim g(T)T`$), which gives rise to the Debye screening of the heavy quark potential, has been known in perturbative theory for quite some time. Although much progress in perturbative calculations has been achieved by the newly developed techniques of resummed perturbation theory, it seems that non-perturbative effects still dominate in the temperature regime attainable in heavy ion experiments in the near future. Especially in the regime right above the critical temperature, nonperturbative effects are expected to be important.
Quark and gluon condensates, which describe the nonperturbative effects of the QCD ground state, have been extensively used in QCD sum rules in order to study hadron properties at zero and finite temperature. Recently, the quark propagator at finite temperature including the gluon condensate was calculated by Schäfer and Thoma , and the contribution of the gluon condensate to the gluon propagator at zero temperature has also been extensively studied . Here we will extend these investigations to the study of the self energy of the gluon at finite temperature by using the thermal gluon propagator. The main purpose of this paper is to estimate the gluon contribution to the electric screening mass of the gluon at finite temperature.
2. Gluon Condensate Contribution to Electric Screening Mass of the Gluon at Finite Temperature
The lowest order contribution of the gluon, ghost and quark condensates to the gluon propagator is depicted by the diagrams in Fig. 1. The quark condensate arises from chiral symmetry breaking, so its contributions in Fig. 1(e-f) vanish due to chiral symmetry restoration above the phase transition. The gluon condensate above the critical temperature, which is associated with the breaking of scale invariance, has been measured on the lattice recently. We will focus on the gluon condensate contribution to the self energy of the gluon. In principle, in order to preserve the Slavnov-Taylor identity (STI), the ghost condensate contributions in Fig. 1(b-c) should be taken into account to make sure that the gluon self energy including the gluon condensate is transverse, which complicates the problem since the expression for the gluon self energy then contains unknown condensates. However, as pointed out by Lavelle , the gluon condensate is always the main contribution to the nonperturbative gluon propagator at $`T=0`$. We assume that this still holds in the case $`T\ne 0`$. To avoid unnecessary complications, here we only analyze the gluon condensate contribution to the self energy of the gluon at small momenta and extract the electric screening mass of the gluon due to nonperturbative effects above the phase transition.
The real part of the gluon self-energy in the one-loop approximation is determined by the 1-1 component of the Feynman propagator in thermo-field dynamics (TFD). Similar to the perturbative case, using the Feynman rules of thermo-field dynamics and adopting the imaginary time formalism, the self-energy of the gluon with the gluon condensate contributions shown in Fig. 1, is
$`\mathrm{\Pi }_{\mu \nu }(p_0,\vec{p})`$ $`=`$ $`{\displaystyle \frac{iN_c}{(N_c^2-1)}}4\pi \alpha _siT{\displaystyle \sum _{k_0=2\pi inT}}{\displaystyle \int \frac{d^3k}{(2\pi )^3}D_{\lambda \lambda ^{\prime }}^{(\mathrm{NP})}(k_0,\vec{k})}`$ (1)
$`\times `$ $`[(2p-k)_\lambda g_{\mu \rho }+(p+2k)_\mu g_{\rho \lambda }+(k-p)_\rho g_{\lambda \mu }]`$ (2)
$`\times `$ $`[(2p-k)_{\lambda ^{\prime }}g_{\sigma \nu }+(p-k)_\sigma g_{\nu \lambda ^{\prime }}+(2k-p)_\nu g_{\lambda ^{\prime }\sigma }]`$ (3)
$`\times `$ $`[{\displaystyle \frac{g^{\rho \sigma }}{(p-k)^2}}+{\displaystyle \frac{(1-\xi )(p-k)^\rho (p-k)^\sigma }{(p-k)^4}}]`$ (4)
where at finite temperature the zeroth components of momentum 4-vectors take on discrete values, namely, $`k_0=2\pi inT`$ with integer $`n`$. This is a direct consequence of Fourier analysis in the imaginary time formalism of Matsubara. The lowest Matsubara mode with $`n=0`$ should be a good approximation as long as $`p`$ is not much larger than the critical temperature $`T_c`$. At zero temperature, the transversality and Lorentz-invariance of the gluon polarization tensor require it to have the form $`\mathrm{\Pi }_{\mu \nu }(k)=\mathrm{\Pi }(k^2)(g_{\mu \nu }-k_\mu k_\nu /k^2)`$. At finite temperature, Lorentz-invariance is lost and the polarization operator is a combination of transverse and longitudinal tensor structures. Therefore, the nonperturbative gluon propagator $`D_{\lambda \lambda ^{\prime }}^{(\mathrm{NP})}=D_{\lambda \lambda ^{\prime }}^{full}-D_{\lambda \lambda ^{\prime }}^{pert}`$, which contains the gluon condensate, is generally assumed to have the form
$$D_{\lambda \lambda ^{\prime }}^{(\mathrm{NP})}(k_0,\vec{k})=D_L(k_0,\vec{k})P_{\lambda \lambda ^{\prime }}^L+D_T(k_0,\vec{k})P_{\lambda \lambda ^{\prime }}^T,$$
(5)
where $`D_{T,L}`$ are the transverse and longitudinal parts of the nonperturbative gluon propagator at finite temperature, and the bare gluon propagator has been subtracted since we are not interested in perturbative corrections to the gluon self energy. In (5), the longitudinal and transverse projectors can be written as
$$P_{\lambda \lambda ^{\prime }}^L=\frac{k_\lambda k_{\lambda ^{\prime }}}{k^2}-g_{\lambda \lambda ^{\prime }}-P_{\lambda \lambda ^{\prime }}^T$$
(6)
$$P_{\lambda 0}^T=0,P_{ij}^T=\delta _{ij}-\frac{k_ik_j}{\vec{k}^2}$$
(7)
Using the same method as in Ref. , we expand the gluon propagator in (4) for small loop momenta $`k`$ and keep only terms which are bilinear in $`k`$, which relates the gluon condensate to moments of the gluon propagator. We obtain
$`\mathrm{\Pi }_{00}(p_0=0,\vec{p})`$ $`=`$ $`{\displaystyle \frac{4\pi \alpha _sN_c}{(N_c^2-1)}}T{\displaystyle \int \frac{d^3\vec{k}}{(2\pi )^3}\left(\frac{8}{9}D_T-\frac{136}{45}D_L\right)\frac{\vec{k}^2}{\vec{p}^2}}.`$ (8)
The moments of the longitudinal and transverse gluon propagators are related to the chromoelectric and chromomagnetic condensates via
$$\langle \vec{E}^2\rangle _T=8T\int \frac{d^3\vec{k}}{(2\pi )^3}\vec{k}^2D_L(0,\vec{k}),$$
(9)
$$\langle \vec{B}^2\rangle _T=16T\int \frac{d^3\vec{k}}{(2\pi )^3}\vec{k}^2D_T(0,\vec{k}).$$
(10)
Therefore, $`\mathrm{\Pi }_{00}(p_0=0,\vec{p})`$ can be rewritten as
$`\mathrm{\Pi }_{00}(p_0=0,\vec{p})`$ $`=`$ $`{\displaystyle \frac{4\pi \alpha _sN_c}{(N_c^2-1)}}\left[{\displaystyle \frac{1}{18}}\langle \vec{B}^2\rangle _T-{\displaystyle \frac{17}{45}}\langle \vec{E}^2\rangle _T\right]{\displaystyle \frac{1}{\vec{p}^2}}.`$ (11)
Furthermore, from the expectation values of the space and timelike plaquettes $`\mathrm{\Delta }_{\sigma ,\tau }`$ of lattice QCD, the chromoelectric and chromomagnetic condensates can be extracted as
$$\frac{\alpha _s}{\pi }\langle \vec{E}^2\rangle _T=\frac{4}{11}\mathrm{\Delta }_\tau T^4-\frac{2}{11}\langle G^2\rangle _{T=0},$$
(12)
$$\frac{\alpha _s}{\pi }\langle \vec{B}^2\rangle _T=\frac{4}{11}\mathrm{\Delta }_\sigma T^4+\frac{2}{11}\langle G^2\rangle _{T=0},$$
(13)
Hence, we obtain,
$`\mathrm{\Pi }_{00}(p_0=0,\vec{p})`$ $`=`$ $`{\displaystyle \frac{4N_c\pi ^2}{(N_c^2-1)\vec{p}^2}}\left[{\displaystyle \frac{2}{99}}\left(\mathrm{\Delta }_\sigma -{\displaystyle \frac{34}{5}}\mathrm{\Delta }_\tau \right)T^4+{\displaystyle \frac{29}{495}}\langle G^2\rangle _{T=0}\right]`$ (14)
where the gluon condensate at zero temperature is taken as
$$\langle G^2\rangle _{T=0}=(2.5\pm 1.0)T_c^4$$
(15)
The critical temperature is taken as $`T_c=260`$ MeV in the following calculations.
The electric screening mass is related to the low momentum behavior of the gluon polarization tensor, $`\mathrm{\Pi }_{\mu \nu }(p_0,\vec{p})`$. It is generally defined as the zero momentum limit ($`|\vec{p}|\rightarrow 0`$) in the static sector ($`p_0=0`$) of $`\mathrm{\Pi }_{00}(p_0,\vec{p})`$ , i.e.,
$$m_e^2=\mathrm{\Pi }_{00}(0,|\vec{p}|\rightarrow 0)$$
(16)
However, the above definition cannot be correct since beyond leading order in the coupling it is gauge dependent in non-Abelian theories. A better way is to define the electric screening mass as the position of the pole of the propagator at spacelike momentum
$$p^2+\mathrm{\Pi }_{00}(0,p^2=-m_e^2)=0,$$
(17)
where $`p=|\vec{p}|`$. Using this definition, the leading perturbative contribution to the electric screening mass reads
$$m_e^2(T)=(\frac{N_c}{3}+\frac{N_f}{6})g^2(T)T^2.$$
(18)
Using the lattice results for the plaquette expectation values, the electric screening mass can be obtained by solving Eq. (17) numerically. The results are shown in Fig. 2. From Fig. 2, one can see that the screening mass due to the gluon condensate is almost proportional to $`T`$ when $`T>1.3T_c`$. In addition, the electric screening mass is not sensitive to the value of the zero temperature gluon condensate.
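Since eq. (14) gives $`\mathrm{\Pi }_{00}`$ of the form $`A(T)/\vec{p}^2`$, the pole condition of Eq. (17) reduces to $`m_e^4=|A(T)|`$, so $`m_e`$ grows linearly in $`T`$ once the $`T^4`$ term dominates. The sketch below, in which the plaquette values $`\mathrm{\Delta }_\sigma ,\mathrm{\Delta }_\tau `$ are rough placeholders rather than the lattice data used in Fig. 2, illustrates this behavior.

```python
import numpy as np

Nc, Tc = 3.0, 0.260                  # colors; critical temperature (GeV)
G2_0 = 2.5 * Tc**4                   # <G^2>_{T=0}, central value of eq. (15)

def m_e(T, d_sigma=0.3, d_tau=0.2):  # Delta_sigma, Delta_tau: placeholders
    # A(T) from eq. (14); m_e^4 = |A| solves the pole condition (17)
    A = (4 * Nc * np.pi**2 / (Nc**2 - 1)) * (
        (2.0 / 99.0) * (d_sigma - (34.0 / 5.0) * d_tau) * T**4
        + (29.0 / 495.0) * G2_0)
    return abs(A) ** 0.25

for T in (1.5 * Tc, 2.0 * Tc, 3.0 * Tc):
    print(f"T = {T:.2f} GeV: m_e/T ~ {m_e(T) / T:.2f}")
```

The ratio $`m_e/T`$ approaches a constant at large $`T`$, reproducing the approximate linearity seen in Fig. 2.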
3. Summary and Discussion
To sum up, we investigated the gluon condensate contribution to the electric screening mass of the gluon at finite temperature. In the plasma, the gluons acquire an electric screening mass which is approximately proportional to $`T`$. This screening mass comes mainly from the thermal gluon condensate and is not sensitive to the value of the zero temperature gluon condensate. The electric mass gives rise to the Debye screening of the heavy quark potential. The magnitude of $`m_e`$ strongly influences the existence or non-existence of charmonium in the high temperature phase according to the following behaviour of the potential $`V(r)`$ between gauge invariant sources:
$$V(r)\sim \frac{\mathrm{e}^{-2m_er}}{r^2}.$$
(19)
Therefore, the gluon screening mass of order $`T`$ due to nonperturbative effects is supposed to influence the dissociation of charmonium in the QGP.
In principle, this work can be extended to study the screening behavior of the gluon magnetic sector. It is well known that the absence of static magnetic screening in the hard thermal loop resummed propagator leads to infrared singularities in perturbative calculations, i.e. the vanishing of the magnetic mass-squared results in the divergence of the expansion of the thermodynamic potential. Static magnetic screening vanishes in perturbative calculations at any finite order, but a magnetic mass of the gluon might appear in the nonperturbative gluon propagator. Unfortunately, in order to study the transversality of the gluon self energy, the STI should be treated exactly. Hence the ghost condensate and higher order condensates at finite temperature would have to be included.
ACKNOWLEDGEMENTS
We would like to thank Drs. M. Lavelle and M.H. Thoma for discussions. This work was supported in part by Fondecyt (Chile) postdoctoral fellowship 3990048, by Fondecyt (Chile) grant 1990806 and by a Cátedra Presidencial (Chile), and also by Natural Science Foundation of China grant 19875024.
# Wide-field VLBI Imaging
## 1 Introduction
The undistorted field of view of a given VLBI data set is usually limited by two main effects: bandwidth smearing and time-average smearing (Bridle & Schwab, 1989). The narrower the individual frequency channels and the shorter the integration time, the larger the unaberrated field of view. Data generated by VLBI correlators are comprised of a set of measurements of the complex visibility as a function of frequency (or delay) and time. Most continuum VLBI data sets are delivered to the astronomer with relatively narrow frequency channels ($`\sim 0.5`$ MHz) and short integration times ($`\sim 2`$ s). For example, a typical $`\lambda 18`$ cm EVN data set, in its original form, boasts a field-of-view, $`\theta _{fov}`$, in excess of $`\frac{1}{2}`$ arcminute. The same EVN data set, having, for example, been averaged in frequency over $`64`$ MHz, has a field-of-view of $`\sim 300`$ milliarcseconds (mas). This reduction of the field-of-view by a factor in excess of two orders of magnitude is often considered to be unimportant. The aim of this paper is to show, by illustration, that this presumption may no longer be valid.
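Below is a minimal sketch of these limits, using the common rules of thumb that bandwidth smearing restricts the usable field to roughly $`\theta _{syn}\nu /\mathrm{\Delta }\nu `$ and time smearing to roughly $`\theta _{syn}/(\omega _Et_{int})`$, where $`\omega _E`$ is the Earth rotation rate. The baseline length is an assumed value and order-unity coefficients are omitted, so the numbers are only indicative.

```python
import numpy as np

c = 3.0e8                       # m/s
nu = 1.66e9                     # 18 cm observing frequency (Hz)
B = 1.0e6                       # assumed ~1000 km baseline (m)
theta_syn = (c / nu) / B        # synthesized beam (rad)
omega_E = 7.3e-5                # Earth rotation rate (rad/s)

for dnu, t_int in [(0.5e6, 2.0), (64e6, 60.0)]:
    theta_bw = theta_syn * nu / dnu            # bandwidth-smearing limit
    theta_t = theta_syn / (omega_E * t_int)    # time-smearing limit
    fov = min(theta_bw, theta_t) * 206265.0    # rad -> arcsec
    print(f"dnu = {dnu/1e6:5.1f} MHz, t = {t_int:4.1f} s: fov ~ {fov:6.1f} arcsec")
```

With narrow channels and short integrations the usable field is of order an arcminute or more; heavy frequency averaging shrinks it to of order a second of arc, in line with the reduction described above.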
## 2 New Developments in Wide-Field VLBI Imaging
In the early 1980s, the off-line computer resources available to most astronomers were ill-equipped to deal with extremely large and cumbersome VLBI data sets. Not surprisingly, one of the major goals of VLBI data reduction was to severely average continuum data at the earliest possible stage in the analysis process (as soon as fringe-fitting corrections had been applied). Today, the processing power and data storage volumes enjoyed by the vast majority of VLBI astronomers are $`\sim 2`$ orders of magnitude greater than those of the shared systems used previously. Nevertheless, the custom of excessive data averaging continues. This practice severely limits the natural field of view of VLBI images, and is often inappropriate - especially in the era of high sensitivity observations.
### 2.1 Detection of hot-spots in Large-scale Jets
In Fig. 1 we present a $`\lambda =18`$ cm EVN-only map of the gravitational lens, 0957+561 A,B. The data were calibrated in the usual manner. To avoid bandwidth smearing, the data were held in the form of 28 independent but contiguous 2 MHz channels. The data were averaged in time in a baseline dependent manner, with integration times ranging from 2.5 seconds (on the longest projected baselines) to 30 seconds (on the shortest projected baselines). Fig. 1 clearly shows the two main, compact components, 0957+561 A,B, and a very faint compact source known as G, lying about 1 arcsec to the north of B. All three components have been detected by previous VLBI campaigns. However, the real excitement relates to the fact that we have detected and imaged two low brightness temperature features ($`T_b\sim 10^6`$–$`10^7`$ K) that are associated with compact regions or hot-spots in the singly imaged arc-second scale jet that dominates VLA maps of this source. There must be many more cases where similar low surface brightness emission goes undetected, simply because a wide-field approach to the data analysis is not pursued. VLBI observations of such emission could improve our understanding of large-scale jet physics, distinguishing between various hot-spot models and allowing a comparison between the properties of the jet (e.g. flow velocity) and the intergalactic medium on pc and kpc scales.
### 2.2 Imaging Faint SNR in the Starburst Galaxy M82
A wide-field approach has also been applied by Pedlar et al. (1999) to $`\lambda 18`$ cm EVN observations of the starburst galaxy M82. Previous VLBI observations had focussed on the brightest SNR (41.95+575), but by following a wide-field approach to the data analysis Pedlar et al. (1999) have been able to generate exquisite images of 4 other compact SNR in the field, of which the faintest has a peak flux of $`\sim 0.4`$ mJy/beam. A sub-section of the entire 1 arcminute field is shown in Fig. 2. Pedlar et al. (1999) have also re-analysed “vintage” EVN data of M82 from epoch 1986. By employing a wide-field approach in the re-analysis they have been able to measure expansion rates for one of the SNR and place upper-limits on two others. This work is a superb example of the type of information “gain” one can easily acquire through the general application of wide-field techniques to VLBI data.
### 2.3 Going Deeper into a Crowded Sky
The advances in VLBI hardware described elsewhere in this volume suggest that for a Global VLBI array r.m.s. noise levels of $`\sim 10\mu `$Jy/beam will be attainable by the end of the millennium. Looking further ahead (10-15 years) there is every reason to believe that $`\sim 1\mu `$Jy/beam noise levels will be achievable. At this level of sensitivity, the radio sky becomes a very crowded place. At the $`\mu `$Jy level one may expect to encounter 1 source every few arcseconds (see Muxlow et al. 1999). We can expect to reach these levels of sensitivity within the next 10-20 years, by which time wide-field VLBI imaging may have evolved into a standard VLBI processing route in order to avoid source confusion. Even at the level of a few mJy a wide-field approach may pay dividends. Garrington et al. (1999) have embarked on a survey of faint mJy sources located within a few degrees of a bright, compact radio source, 1156+295. Two of the faint sources surveyed are separated by only a few arcminutes on the sky. Even though the observations had not been set up with wide-field imaging specifically in mind, it was still possible to produce tapered images of both sources simultaneously from a single $`\lambda 6`$ cm global VLBI data set (see Fig. 3). We note that this wide-field approach permits a very direct measurement of the relative astrometric positions of radio sources in the same undistorted field of view.
## 3 Current Limitations
Currently the main limits on wide-field imaging are set by the maximum data rate that can be generated by VLBI correlators. In other words, the limitation is exactly the reverse of the situation in the early 1980’s, where the bottleneck was associated with off-line data processing facilities. Another restriction is the size of the primary beam of the larger VLBI antennas, especially phased arrays (e.g. WSRT & VLA<sub>27</sub>). In this case at least, “small is beautiful”. This latter point should be borne in mind when the next generation of large telescope arrays are being designed (e.g. SKA), especially with regard to their possible incorporation within existing VLBI networks.
# A Remark on Brane Stabilization in Brane World
## Abstract
In this note we discuss dynamical mechanisms for brane stabilization in the brane world context. In particular, we consider supersymmetry preserving brane stabilization, and also brane stabilization accompanied by supersymmetry breaking. These mechanisms are realized in some four dimensional $`𝒩=1`$ supersymmetric orientifold models. For illustrative purposes we consider two explicit orientifold models previously constructed in . In both of these models branes are stabilized at a finite distance from the orientifold planes. In the first model brane stabilization occurs via supersymmetry preserving non-perturbative gauge dynamics. In the second model supersymmetry is dynamically broken, and brane stabilization is due to an interplay between non-perturbatively generated superpotential and tree-level Kähler potential.
preprint: HUTP-99/A025, NUB 3201
The discovery of D-branes is likely to have important phenomenological implications. Thus, the Standard Model gauge and matter fields may live inside of $`p\le 9`$ spatial dimensional $`p`$-branes, while gravity lives in a larger (10 or 11) dimensional bulk of the space-time. This “Brane World” picture (for recent developments, see, e.g., ; the brane world picture in the effective field theory context was discussed in ; TeV-scale compactifications were originally discussed in in the context of supersymmetry breaking) a priori appears to be a viable scenario, and, based on considerations of gauge and gravitational coupling unification, dilaton stabilization and weakness of the Standard Model gauge couplings, in it was actually argued to be a likely description of nature. In particular, these phenomenological constraints seem to be embeddable in the brane world scenario (with the Standard Model fields living on branes with $`3<p<9`$), which therefore might provide a coherent picture for describing our universe . This is largely due to the much higher degree of flexibility of the brane world scenario compared with, say, the old perturbative heterotic framework.
An important question arising in the brane world context is the issue of brane stabilization. In particular, in many four dimensional $`𝒩=1`$ supersymmetric Type I/Type I′ vacua there are flat directions corresponding to the positions of branes either with respect to each other or to the orientifold planes. Perturbatively, therefore, branes are not stabilized in such vacua. Upon supersymmetry breaking we might generically expect brane stabilization to occur. However, brane stabilization can sometimes occur via supersymmetry preserving non-perturbative dynamics. The purpose of this note is to illustrate some possible mechanisms of brane stabilization in the brane world context. In particular, brane stabilization in the mechanisms we discuss here is due to non-perturbative gauge dynamics in the world-volume of the branes, which can be supersymmetry preserving as well as supersymmetry breaking dynamics. In the latter case the VEV of the field whose F-term breaks supersymmetry is precisely the inter-brane distance or the distance between the branes and an orientifold plane. For illustrative purposes we present explicit string models where these mechanisms are realized. These examples, which are consistent string vacua (with four non-compact and six compact space-time dimensions) satisfying the requirements of tadpole and anomaly cancellation, were originally constructed in , and here we briefly review these constructions. In particular, the models we consider in this paper are four dimensional $`𝒩=1`$ supersymmetric orientifolds with non-trivial NS-NS $`B`$-flux.
Here we should emphasize that in the following discussions we will assume that the dilaton is somehow stabilized via some other dynamics whose relevant scale is higher than the strong scale of the gauge dynamics responsible for brane stabilization. We can then safely integrate the dilaton out and consider brane stabilization for a fixed gauge coupling determined by the dilaton VEV. On the other hand, if dilaton stabilization is not achieved, one runs into the usual runaway problem - the ground state of the theory is located at infinitely weak coupling, where branes are destabilized and supersymmetry is restored (in the cases where brane stabilization is accompanied by supersymmetry breaking). The examples we discuss in this note should therefore only be understood as illustrative (toy) representatives of $`𝒩=1`$ gauge theories (arising from consistent string vacua) where non-perturbative dynamics can stabilize the VEV of a field which in the brane world context is interpreted as measuring the separation between branes or between branes and orientifold planes.
Let us begin our discussion with the case where brane stabilization is accompanied by supersymmetry breaking. Thus, consider an $`𝒩=1`$ supersymmetric gauge theory with $`SU(4)`$ gauge group and matter consisting of one chiral superfield $`\mathrm{\Phi }`$ transforming in the two-index antisymmetric irrep $`\mathrm{𝟔}`$ of $`SU(4)`$. Note that this theory is anomaly free. There is a flat direction in this theory along which $`\mathrm{\Phi }`$ acquires a VEV as follows: $`\mathrm{\Phi }=\text{diag}(Xϵ,Xϵ)`$, where $`ϵ\equiv i\sigma _2`$ is a $`2\times 2`$ antisymmetric matrix ($`\sigma _2`$ is a Pauli matrix). The gauge group along this flat direction is broken down to $`Sp(4)\simeq SO(5)`$, and the only matter left is a singlet of $`Sp(4)`$ (which is precisely the chiral superfield acquiring the VEV $`X`$). Thus, below the mass scale $`X`$ we have a pure $`𝒩=1`$ supersymmetric gauge theory with the gauge group $`Sp(4)`$ plus the singlet.
Supersymmetry in this model is actually broken. In fact, this is already the case in the context of global supersymmetry. The reason for this is that the gaugino condensate in the $`Sp(4)`$ gauge theory depends non-trivially on $`X`$. This can be seen as follows. The gauge coupling running starts at the string scale $`M_s`$ (which we will identify as the UV cut-off of the theory). Between the scales $`M_s`$ and $`X`$ (where we assume $`X\ll M_s`$) the gauge coupling of the $`SU(4)`$ gauge theory runs with the one-loop $`\beta `$-function coefficient $`b_0=11`$. Below the mass scale $`X`$ the gauge coupling of the $`Sp(4)`$ gauge theory runs with the one-loop $`\beta `$-function coefficient $`\stackrel{~}{b}_0=9`$. Matching the two gauge couplings at the scale $`X`$ then gives the following strong scale $`\stackrel{~}{\mathrm{\Lambda }}`$ of the $`Sp(4)`$ theory:
$$\stackrel{~}{\mathrm{\Lambda }}=\mathrm{\Lambda }^{b_0/\stackrel{~}{b}_0}X^{1-b_0/\stackrel{~}{b}_0}=\mathrm{\Lambda }^{11/9}X^{-2/9},$$
(1)
where $`\mathrm{\Lambda }\equiv M_s\mathrm{exp}(-2\pi S/b_0)`$ is the strong scale of the original $`SU(4)`$ gauge theory. (Here $`S\equiv 4\pi /g^2+i\theta /2\pi `$ is the dilaton VEV which determines the $`SU(4)`$ gauge coupling $`g`$ at the string scale as well as the vacuum angle $`\theta `$.) The gaugino condensation in the $`Sp(4)`$ theory generates the following superpotential<sup>‡</sup><sup>‡</sup>‡Note that for a pure $`𝒩=1`$ supersymmetric gauge theory with a simple gauge group $`G`$ the non-perturbative superpotential generated by the gaugino condensation (in the $`\overline{\text{DR}}`$ subtraction scheme) is given by $`W=h_G\mathrm{\Lambda }_G^3`$, where $`h_G`$ is the dual Coxeter number of the group $`G`$ ($`h_G=N`$ for $`G=SU(N)`$, $`h_G=N-2`$ for $`G=SO(N)`$, $`h_G=N+1`$ for $`G=Sp(2N)`$, and in our conventions $`Sp(2N)`$ has rank $`N`$), and $`\mathrm{\Lambda }_G\sim \mathrm{exp}(-2\pi S/b_0(G))`$ is the strong scale of the theory (here $`b_0(G)=3h_G`$).
$$W=3\stackrel{~}{\mathrm{\Lambda }}^3=3\mathrm{\Lambda }^{11/3}X^{-2/3}.$$
(2)
Note that this superpotential has runaway behavior in $`X`$ so that for any finite $`X`$ supersymmetry is broken. In the context of global supersymmetry the vacuum is located at infinite $`X`$ where supersymmetry is restored. In the locally supersymmetric context, however, the situation is quite different.
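The one-loop matching behind eq. (1) can be checked numerically, as in the sketch below; the values of $`g^2`$ at $`M_s`$ and of $`X`$ are illustrative assumptions (units $`M_s=1`$).

```python
import numpy as np

# One-loop matching of the SU(4) and Sp(4) couplings at the scale X,
# in units M_s = 1. Inputs g2 and X are assumed for illustration.
g2, X = 0.5, 1.0e-3
b0, b0t = 11.0, 9.0                 # SU(4) with one 6; pure Sp(4)

Lam = np.exp(-8.0 * np.pi**2 / (b0 * g2))        # Lambda = exp(-2*pi*S/b0)
inv_aX = 8.0 * np.pi**2 / g2 + b0 * np.log(X)    # 8*pi^2/g^2 run down to X
Lam_t = X * np.exp(-inv_aX / b0t)                # Sp(4) scale from matching

print(Lam_t, Lam**(b0 / b0t) * X**(1.0 - b0 / b0t))   # the two values agree
```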
The key point here is the following. Thus, as was pointed out in , once such a theory with a runaway direction is coupled to supergravity, generically there is a natural shut-down scale for runaway directions, namely, the Planck scale. More precisely, this is the case for a certain large class of Kähler potentials, which, in particular, includes Kähler potentials of minimal type. Local supersymmetry breaking and stabilization of the runaway directions is then due to the interplay between non-perturbatively generated runaway superpotential and tree-level Kähler potential.
Let us illustrate the above point by considering a theory in which there is a runaway superpotential $`W=c/X^\alpha `$ generated via some non-perturbative dynamics. Let us assume that the Kähler potential for $`X`$ is minimal: $`K=XX^{*}`$. For simplicity, in the following we will adopt the units where the reduced Planck scale $`M_P=1/\sqrt{8\pi G_N}`$ is set to one, so that the scalar potential reads:
$$V=e^K\left(K_{XX^{*}}^{-1}|F_X|^2-3|W|^2\right),$$
(3)
where the F-term $`F_X=W_X+K_XW`$. Let $`\rho \equiv XX^{*}`$. Then we have
$$V(\rho )=|c|^2e^\rho \left(\frac{\alpha ^2}{\rho ^{\alpha +1}}-\frac{2\alpha +3}{\rho ^\alpha }+\frac{1}{\rho ^{\alpha -1}}\right).$$
(4)
The extrema of this scalar potential are located at $`\rho _0=\alpha `$ and $`\rho _\pm =\alpha +1\pm \sqrt{\alpha +1}`$. Here we are assuming that $`\alpha >0`$. Then the extremum at $`\rho =\rho _0`$ corresponds to a supersymmetric maximum with vanishing F-term. The extrema at $`\rho =\rho _\pm `$ correspond to minima with broken supersymmetry. Thus, in such a theory $`X`$ is indeed stabilized and supersymmetry is broken. Note that in the runaway superpotential in (2) generated in the $`SU(4)`$ gauge theory with one chiral $`\mathrm{𝟔}`$ which we discussed above we have $`\alpha =2/3`$.
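The quoted extrema follow from a short computation; the sketch below (setting $`|c|=1`$) factors $`dV/d\rho `$ and recovers $`\rho _0=\alpha `$ together with the quadratic whose roots are $`\rho _\pm `$.

```python
import sympy as sp

rho, alpha = sp.symbols('rho alpha', positive=True)
V = sp.exp(rho) * (alpha**2 / rho**(alpha + 1)
                   - (2 * alpha + 3) / rho**alpha
                   + 1 / rho**(alpha - 1))
# Strip the positive prefactor exp(rho) * rho^(-alpha-2); the extrema are
# the roots of the remaining cubic in rho.
cubic = sp.expand(sp.simplify(sp.diff(V, rho) * sp.exp(-rho) * rho**(alpha + 2)))
print(sp.factor(cubic))
# -> (rho - alpha) times (rho**2 - 2*(alpha + 1)*rho + alpha**2 + alpha),
#    up to overall sign, i.e. rho = alpha and rho = alpha + 1 +- sqrt(alpha + 1)
```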
Note, however, that the superpotential (2) is only valid for $`X\ll M_s`$, whereas the above minima correspond to values of $`X\sim M_P`$ (here we can assume for simplicity that the reduced Planck scale and the string scale are of the same order of magnitude). So, strictly speaking, the above quantitative analysis is not valid. However, the qualitative conclusion that $`X`$ is stabilized and supersymmetry is broken is still correct. The reason for this is the following. Note that for $`X\gg M_s`$ we would have pure $`Sp(4)`$ super-Yang-Mills theory below the cut-off scale $`M_s`$. Then the superpotential generated in this case would be independent of $`X`$: $`W=3\stackrel{~}{\mathrm{\Lambda }}^3`$, where $`\stackrel{~}{\mathrm{\Lambda }}\equiv M_s\mathrm{exp}(-2\pi S/\stackrel{~}{b}_0)`$. The corresponding scalar potential (in the assumption of the minimal Kähler potential) is then given by<sup>§</sup><sup>§</sup>§Here we should point out that for $`X\gg M_s`$ we must also take into account the fact that generically the dilaton may not be constant, but vary logarithmically in the two transverse directions corresponding to the motion of the branes. This would modify the $`X`$ dependence in $`V`$, but it is not difficult to see that the qualitative conclusions on brane stabilization at $`X\sim M_s`$ would still remain correct.
$$V(\rho )=|W|^2e^\rho (\rho -3).$$
(5)
This scalar potential can be seen to stabilize $`\rho `$ at $`\rho =2`$, where the F-term is non-zero, hence supersymmetry is broken. For intermediate values of $`X`$ (that is, $`X\sim M_s`$), the superpotential is a non-trivial function of $`X`$ interpolating between $`X^{-2/3}`$ for small $`X`$ and a constant for large $`X`$. (This interpolating function depends on the details of various threshold corrections around the scale $`M_s`$.) From the limiting behavior, however, it is clear that $`X`$ is stabilized and supersymmetry is broken (subject to the appropriate assumptions on the Kähler potential). As we have already mentioned, in the following we will present an explicit string model whose low energy effective field theory contains the $`SU(4)`$ gauge theory with one chiral $`\mathrm{𝟔}`$ discussed above, and where the VEV $`X`$ is identified with the separation between D-branes and the corresponding orientifold plane.
Next, we would like to discuss an example where brane stabilization occurs via supersymmetry preserving dynamics. Thus, consider an $`𝒩=1`$ supersymmetric gauge theory with $`SU(4)\otimes SU(4)`$ gauge group and matter consisting of one chiral superfield $`\mathrm{\Phi }`$ transforming in $`(\mathrm{𝟏𝟎},\mathrm{𝟏})`$ of $`SU(4)\otimes SU(4)`$, one chiral superfield $`Q`$ transforming in $`(\overline{\mathrm{𝟒}},\mathrm{𝟒})`$, and one chiral superfield $`\stackrel{~}{Q}`$ transforming in $`(\overline{\mathrm{𝟒}},\overline{\mathrm{𝟒}})`$. Note that this theory is anomaly free. We will assume that there is a tree-level superpotential given by
$$W_{tree}=\lambda \mathrm{\Phi }Q\stackrel{~}{Q},$$
(6)
where $`\lambda `$ is the corresponding Yukawa coupling. There are three flat directions in this model. One corresponds to the field $`Q`$ acquiring a non-zero VEV. The gauge group along the flat direction $`Q\ne 0`$ is broken down to $`SU(4)_{diag}\subset SU(4)\otimes SU(4)`$. After the gauge symmetry breaking we have a singlet of $`SU(4)_{diag}`$ coming from $`Q`$, and also one chiral $`\mathrm{𝟔}`$ of $`SU(4)_{diag}`$ coming from $`\stackrel{~}{Q}`$. The rest of the fields in $`Q`$ are eaten in the super-Higgs mechanism, whereas the rest of the fields in $`\stackrel{~}{Q}`$ pair up with those in $`\mathrm{\Phi }`$ and become heavy via the corresponding coupling in the tree-level superpotential (6). Thus, along this flat direction we have the $`SU(4)`$ gauge theory with one chiral $`\mathrm{𝟔}`$ plus a singlet. This is the same theory as that already discussed above except for the presence of the extra singlet. One can analyze this case in a similar fashion, and it is not difficult to see that VEV stabilization is accompanied by supersymmetry breaking just as in the previous case. Note that the second flat direction, along which $`\stackrel{~}{Q}\ne 0`$, gives rise to the same theory as above.
In the following we will therefore be interested in the third flat direction, along which $`\mathrm{\Phi }=\text{diag}(X,X,X,X)`$ and the gauge group is broken down to $`SO(4)\otimes SU(4)`$. The only matter left is a singlet (coming from $`\mathrm{\Phi }`$), which is precisely the chiral superfield acquiring the VEV $`X`$. The rest of the fields in $`\mathrm{\Phi }`$ are eaten in the super-Higgs mechanism, and the fields $`Q,\stackrel{~}{Q}`$ acquire mass $`\lambda X`$ due to the corresponding coupling in the tree-level superpotential (6). Thus, at low energies we have gaugino condensates in both the $`SO(4)\simeq SU(2)\otimes SU(2)`$ and $`SU(4)`$ subgroups. Matching the gauge couplings of $`[SU(2)\otimes SU(2)]\otimes SU(4)`$ with the original gauge couplings of $`SU(4)\otimes SU(4)`$ at the corresponding threshold scales, we obtain the following non-perturbative superpotential on this branch of the classical moduli space (here we assume that the gauge couplings of both $`SU(4)`$ subgroups are the same at the string scale):
$$W=2\lambda ^4X^{-2}\mathrm{\Lambda }^5+4\lambda X(\mathrm{\Lambda }^{\prime })^2,$$
(7)
where $`\mathrm{\Lambda }\equiv M_s\mathrm{exp}(-2\pi S/b_0)`$, and $`\mathrm{\Lambda }^{\prime }\equiv M_s\mathrm{exp}(-2\pi S/b_0^{\prime })`$. Here $`b_0=5`$ and $`b_0^{\prime }=8`$ are the one-loop $`\beta `$-function coefficients of the first and second $`SU(4)`$ subgroups, respectively, above the corresponding threshold scales.
In fact, we can generalize the above model as follows. Thus, consider an $`𝒩=1`$ supersymmetric gauge theory with $`SU(M)\otimes SU(N)`$ gauge group and matter consisting of one chiral superfield $`\mathrm{\Phi }`$ transforming in $`(𝐑_\eta ,\mathrm{𝟏})`$ of $`SU(M)\otimes SU(N)`$, one chiral superfield $`Q`$ transforming in $`(\overline{𝐌},𝐍)`$, and one chiral superfield $`\stackrel{~}{Q}`$ transforming in $`(\overline{𝐌},\overline{𝐍})`$. Here $`𝐑_\eta =𝐒`$ for $`\eta =+1`$, and $`𝐑_\eta =𝐀`$ for $`\eta =-1`$, where $`𝐒`$ is the two-index symmetric ($`M(M+1)/2`$ dimensional) representation of $`SU(M)`$, while $`𝐀`$ is the two-index antisymmetric ($`M(M-1)/2`$ dimensional) representation of $`SU(M)`$. Anomaly cancellation requires that $`M=2N-4\eta `$. We will assume that there is a tree-level superpotential given by (6). Along the flat direction $`\mathrm{\Phi }\ne 0`$ the gauge group is broken down to $`G_\eta (M)\otimes SU(N)`$, where $`G_\eta (M)=SO(M)`$ for $`\eta =+1`$, and $`G_\eta (M)=Sp(M)`$ for $`\eta =-1`$. The only matter left is a singlet coming from $`\mathrm{\Phi }`$. The gaugino condensation in both the $`G_\eta (M)`$ and $`SU(N)`$ subgroups gives rise to the following non-perturbative superpotential:
$$W=C(N,\eta )\lambda ^{\frac{N}{N-3\eta }}X^{-\frac{N-2\eta }{N-3\eta }}\mathrm{\Lambda }^{\frac{4N-11\eta }{N-3\eta }}+N(\lambda X)^{2-\frac{4\eta }{N}}(\mathrm{\Lambda }^{\prime })^{1+\frac{4\eta }{N}},$$
(8)
where $`\mathrm{\Lambda }^{4N-11\eta }\equiv M_s^{4N-11\eta }\mathrm{exp}(-2\pi S)`$, and $`(\mathrm{\Lambda }^{\prime })^{N+4\eta }\equiv M_s^{N+4\eta }\mathrm{exp}(-2\pi S)`$ (and we are assuming that the gauge couplings of both $`SU(M)`$ and $`SU(N)`$ are the same at the string scale). Here $`C(N,\eta )=N-3\eta `$ except when $`N=4,\eta =+1`$. In the latter case $`G_\eta (M)=SO(4)\simeq SU(2)\otimes SU(2)`$ (that is, $`G_\eta (M)`$ is not simple), and $`C(N,\eta )=2`$ (instead of 1) as we have to add the contributions of the gaugino condensates in both $`SU(2)`$ subgroups. (Here we are assuming that the phases of the gaugino condensates are the same for both $`SU(2)`$ subgroups.) We note that in deriving the superpotential (8) some care is needed. In particular, for $`\eta =-1`$ we have the breaking $`SU(M)\rightarrow Sp(M)`$, and the $`Sp(M)`$ gauge coupling at the scale $`X`$ is the same as that of $`SU(M)`$. However, for $`\eta =+1`$ we have the breaking $`SU(M)\rightarrow SO(M)`$, and the $`SO(M)`$ gauge coupling $`\stackrel{~}{g}(X)`$ at the scale $`X`$ is related to the $`SU(M)`$ gauge coupling $`g(X)`$ at the same scale via $`2\stackrel{~}{g}^2(X)=g^2(X)`$. This is due to the special embedding of $`SO(M)`$ into $`SU(M)`$. (Thus, in the perturbative heterotic language $`SO(M)`$ in this case is realized via a level-2 current algebra.) In particular, this is important in obtaining the correct powers of $`X`$ and $`\mathrm{\Lambda }`$ as well as the numerical coefficient $`C(N,\eta )`$ in the first term in (8).
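As a quick consistency check on eq. (8), both terms carry mass dimension 3 for any $`N`$ and $`\eta =\pm 1`$, since $`X`$, $`\mathrm{\Lambda }`$ and $`\mathrm{\Lambda }^{\prime }`$ each have dimension 1; a one-line symbolic verification:

```python
import sympy as sp

# Mass dimensions of the two terms in eq. (8), with dim(X) = dim(Lambda) = 1.
N, eta = sp.symbols('N eta')
dim_first = -(N - 2 * eta) / (N - 3 * eta) + (4 * N - 11 * eta) / (N - 3 * eta)
dim_second = (2 - 4 * eta / N) + (1 + 4 * eta / N)
print(sp.simplify(dim_first), sp.simplify(dim_second))   # -> 3 3
```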
Let us go back to the non-perturbative superpotential (7) in the $`SU(4)\otimes SU(4)`$ example discussed above. Let us first consider the globally supersymmetric setup. The F-term of $`X`$ vanishes for
$$X=\lambda \left(\frac{\mathrm{\Lambda }^5}{(\mathrm{\Lambda }^{\prime })^2}\right)^{1/3}=\lambda M_s\mathrm{exp}(-\pi S/2).$$
(9)
For a large enough dilaton VEV $`S`$ (that is, for small enough gauge coupling at the string scale<sup>¶</sup><sup>¶</sup>¶Note that large $`S`$ in our conventions does not necessarily imply weak string coupling. Thus, as was pointed out in , the string coupling can be of order 1, while the gauge coupling (at the string scale) of a gauge theory arising from a set of $`p`$-branes compactified on a $`p-3`$ dimensional space can be small if the corresponding compactification volume $`V_{p-3}`$ is large in the string units. This volume is absorbed in the above definition of the dilaton VEV $`S`$.) we have $`X\ll M_s`$, so that the above quantitative analysis is valid in the globally supersymmetric context. That is, the VEV $`X`$ is stabilized via supersymmetry preserving dynamics. Here we would like to address the issue of whether the same is true once we couple this system to supergravity.
To answer the above question, consider a generic system of superfields $`\mathrm{\Phi }_i`$ with the superpotential $`W`$. Let $`K(\mathrm{\Phi },\overline{\mathrm{\Phi }})`$ be the Kähler potential, which we will assume to be non-singular for non-zero finite values of $`\mathrm{\Phi }_i`$. The scalar potential is given by (once again, here we set the reduced Planck scale $`M_P`$ to one):
$$V=e^K\left(G_{i\overline{j}}F_i(F_j)^{*}-3WW^{*}\right),$$
(10)
where the summation over repeated indices is understood, $`G_{i\overline{j}}\equiv K_{i\overline{j}}^{-1}`$ is the inverse of the Kähler metric, $`F_i\equiv W_i+K_iW`$ are the F-terms for the fields $`\mathrm{\Phi }_i`$, and $`W_i\equiv \partial W/\partial \mathrm{\Phi }_i`$.
Note that if all of the F-terms $`F_i`$ are vanishing, local supersymmetry is unbroken. It is easy to verify that a point with $`F_i\equiv 0`$ corresponds to an extremum of the scalar potential. (In this case $`V_i=V_{\overline{i}}=0`$.) We would like to investigate the conditions under which this extremum is actually a local minimum of the scalar potential. To do this we need the following quantities evaluated at the point $`F_i\equiv 0`$:
$`V_{ij}`$ $`=`$ $`-e^KF_{ij}W^{*},`$ (11)
$`V_{i\overline{j}}`$ $`=`$ $`e^K\left[G_{i^{\prime }\overline{j}^{\prime }}F_{i^{\prime }i}(F_{j^{\prime }j})^{*}-2K_{i\overline{j}}WW^{*}\right].`$ (12)
Here $`F_{ij}\equiv (F_i)_j=W_{ij}+(K_{ij}-K_iK_j)W`$. Note that if, say, the absolute values of all the eigenvalues of the matrix $`W_{ij}`$ are much larger<sup>\*</sup><sup>\*</sup>\*One can easily write down the precise condition for the minimum which does not require that the absolute values of the eigenvalues of the matrix $`W_{ij}`$ be much larger than $`|W|`$, but only that they satisfy certain inequalities. than $`|W|`$ at the point $`F_i\equiv 0`$, then this point corresponds to a local minimum of the scalar potential<sup>\**</sup><sup>\**</sup>\**Here we assume that the Kähler potential and its derivatives are of order 1 or smaller at $`\mathrm{\Phi }_i=\mathrm{\Phi }_i^{(0)}`$, which is usually the case.
The above discussion has the following implications for the fate of globally supersymmetric minima corresponding to the superpotential $`W`$ in the local supergravity context. Suppose the F-flatness conditions $`W_i=0`$ in the global setting have a solution with all the VEVs stabilized at $`\mathrm{\Phi }_i=\mathrm{\Phi }_i^{(0)}`$. Furthermore, suppose that the eigenvalues of the matrix $`W_{ij}(\mathrm{\Phi }^{(0)})`$ are all non-zero, and their absolute values are much larger than $`|W(\mathrm{\Phi }^{(0)})|`$. Then in the vicinity of the point $`\mathrm{\Phi }_i^{(0)}`$ there is another point $`\mathrm{\Phi }_i^{(1)}`$ which is a locally supersymmetric minimum of the scalar potential (10). The proof of this statement is very simple. First, the F-flatness conditions in the local setting are given by $`W_i(\mathrm{\Phi })=-K_i(\mathrm{\Phi },\overline{\mathrm{\Phi }})W(\mathrm{\Phi })`$. Since $`W_{ij}(\mathrm{\Phi }^{(0)})`$ is non-singular, these equations have a solution at $`\mathrm{\Phi }_i=\mathrm{\Phi }_i^{(1)}`$, where $`\mathrm{\Phi }_i^{(1)}=\mathrm{\Phi }_i^{(0)}-M_{ij}(\mathrm{\Phi }^{(0)})K_j(\mathrm{\Phi }^{(0)},\overline{\mathrm{\Phi }}^{(0)})W(\mathrm{\Phi }^{(0)})+𝒪(ϵ^2)`$, and $`ϵ\ll 1`$ is the absolute value of the largest eigenvalue of the matrix $`M_{ij}(\mathrm{\Phi }^{(0)})W(\mathrm{\Phi }^{(0)})`$, where $`M_{ij}(\mathrm{\Phi }^{(0)})`$ is the matrix inverse to $`W_{ij}(\mathrm{\Phi }^{(0)})`$. Moreover, according to our discussion above, the point $`\mathrm{\Phi }_i^{(1)}`$ corresponds to a local minimum. Here we note that the physical reason for this is clear - since in the globally supersymmetric setup all the fields are heavy at the minimum, there is no Goldstino candidate required for local supersymmetry breaking once we couple the system to supergravity.
Let us now go back to the superpotential (7), and see whether the aforementioned conditions are satisfied at the global minimum given by (9). Thus, we have
$$|W_{XX}/W|=2|\lambda |^{-2}M_s^{-2}\mathrm{exp}(\pi \text{Re}(S))\gg 1.$$
(13)
Here we are using the $`M_P=1`$ units, and we are assuming that the Yukawa coupling is of the order of the gauge coupling at the string scale: $`|\lambda |\sim g`$. Thus, in this model the VEV $`X`$ is indeed stabilized without supersymmetry breaking, and the stabilized value of $`X`$ is much smaller than $`M_s`$ (for large enough values of $`S`$).
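For concreteness, the sketch below evaluates eqs. (9) and (13) in $`M_P=1`$ units; the values of $`M_s`$ and $`\mathrm{Re}(S)`$ are assumptions chosen only to illustrate the hierarchy.

```python
import numpy as np

Ms = 0.5                         # string scale in Planck units (assumption)
S = 10.0                         # Re(S) (assumption); fixes g^2 = 4*pi/S
lam = np.sqrt(4.0 * np.pi / S)   # |lambda| ~ g

X = lam * Ms * np.exp(-np.pi * S / 2.0)              # eq. (9)
ratio = 2.0 * np.exp(np.pi * S) / (lam**2 * Ms**2)   # eq. (13)
print(f"X/M_s = {X / Ms:.1e}, |W_XX/W| = {ratio:.1e}")
# X << M_s and |W_XX/W| >> 1, so the supersymmetric minimum survives
# the coupling to supergravity.
```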
Next, as we promised in the beginning, we are going to give the explicit string construction of the above models. In fact, these models were already constructed in where a more detailed discussion can be found. Here we will briefly review the construction in , and identify various flat directions in the classical moduli space with the motion of D-branes in the corresponding transverse directions.
Consider Type I compactified on the following orbifold Calabi-Yau three-fold with $`SU(3)`$ holonomy: $`ℳ_3\equiv (T^2\otimes T^2\otimes T^2)/(𝐙_3\otimes 𝐙_3)`$. Let $`g_1`$ and $`g_2`$ be the generators of the two $`𝐙_3`$ subgroups. Their action on the complex coordinates $`z_1,z_2,z_3`$ parametrizing the three two-tori is given by:
$`g_1z_1=\omega z_1,g_1z_2=\omega ^{-1}z_2,g_1z_3=z_3,`$ (14)
$`g_2z_1=z_1,g_2z_2=\omega z_2,g_2z_3=\omega ^{-1}z_3.`$ (15)
Here $`\omega \equiv \mathrm{exp}(2\pi i/3)`$. This Calabi-Yau three-fold has the Hodge numbers $`(h^{1,1},h^{2,1})=(84,0)`$, so there are 84 chiral supermultiplets in the closed string sector. In the open string sector we have D9-branes only, whose number depends on the rank $`b`$ of the NS-NS $`B`$-field $`B_{ij}`$ ($`i,j=1,\dots ,6`$ label the real compact directions corresponding to the complex coordinates $`z_1,z_2,z_3`$). Note that since $`B_{ij}`$ is antisymmetric, $`b`$ can only take the even values $`0,2,4,6`$. The untwisted tadpole cancellation conditions then imply that we have $`2^{5-b/2}`$ D9-branes . In the following we are going to be interested in the cases $`b=2`$ and $`b=4`$.
$`\bullet `$ $`b=2`$. We have 16 D9-branes. The orientifold projection is of the $`Sp`$ type. The solution to the twisted tadpole cancellation conditions (up to equivalent representations) is given by<sup>††</sup><sup>††</sup>††Here we work with $`2^{4-b/2}\times 2^{4-b/2}`$ (rather than $`2^{5-b/2}\times 2^{5-b/2}`$) Chan-Paton matrices as we choose not to count the orientifold images of the corresponding D-branes. :
$`\gamma _{g_1,9}=\text{diag}(\omega 𝐈_4,\omega ^{-1}𝐈_4),`$ (16)
$`\gamma _{g_2,9}=\text{diag}(\omega 𝐈_2,𝐈_2,\omega ^{-1}𝐈_2,𝐈_2).`$ (17)
Here we have chosen $`B_{12}=1/2`$, $`B_{34}=B_{56}=0`$. The gauge group is $`U(4)\otimes U(4)`$. The matter consists of the following chiral superfields: $`\mathrm{\Phi }=(\mathrm{𝟏𝟎},\mathrm{𝟏})(+2,0)`$, $`Q=(\overline{\mathrm{𝟒}},\mathrm{𝟒})(-1,+1)`$, $`\stackrel{~}{Q}=(\overline{\mathrm{𝟒}},\overline{\mathrm{𝟒}})(-1,-1)`$, where the $`U(1)`$ charges are given in parentheses. The tree-level superpotential, which can be derived using the standard techniques, reads:
$$𝒲=\lambda \mathrm{\Phi }Q\stackrel{~}{Q}.$$
(18)
Note that this is precisely the second model discussed above except for the extra $`U(1)`$’s. In fact, the first $`U(1)`$ is anomalous. It is broken by VEVs of some of the twisted closed string sector chiral multiplets which transform non-linearly under the anomalous $`U(1)`$. The second $`U(1)`$ is anomaly free, and it is not difficult to see that it has no effect on the above discussion of VEV stabilization.
Note that in the above construction we have D9-branes. We can T-dualize to arrive at the corresponding Type I vacuum with, say, D3-branes. Then the space transverse to the D3-branes is $`ℳ_3`$. The point in the moduli space with the unbroken gauge group corresponds to the brane configuration where all D3-branes are sitting on top of the corresponding orientifold plane. The flat directions along which $`\mathrm{\Phi }\ne 0`$, $`\stackrel{~}{Q}\ne 0`$ and $`Q\ne 0`$ then correspond to the branes moving off the orientifold plane in the $`z_1`$, $`z_2`$ and $`z_3`$ directions, respectively. Thus, if $`\mathrm{\Phi }\ne 0`$ then motion in the $`z_2,z_3`$ directions is not allowed due to the corresponding F-flatness conditions implied by the superpotential (18). On the other hand, as we have discussed above, the VEV of $`\mathrm{\Phi }`$ is stabilized via supersymmetry preserving non-perturbative dynamics on this branch of the classical moduli space, so that the D3-branes are stabilized at a finite distance from the orientifold plane.
$`\bullet `$ $`b=4`$. We have 8 D9-branes. The orientifold projection is of the $`SO`$ type. The solution to the twisted tadpole cancellation conditions is given by :
$`\gamma _{g_1,9}=𝐈_4,`$ (19)
$`\gamma _{g_2,9}=\text{diag}(\omega 𝐈_2,\omega ^{-1}𝐈_2).`$ (20)
Here we have chosen $`B_{12}=B_{34}=1/2`$, $`B_{56}=0`$. The gauge group is $`U(4)`$. The matter consists of one chiral superfield $`\mathrm{\Phi }=\mathrm{𝟔}(+2)`$. Note that we have an anomalous $`U(1)`$ in this model, which is broken by VEVs of the corresponding twisted closed string sector chiral multiplets<sup>‡‡</sup><sup>‡‡</sup>‡‡Note that in this model the corrections to the Kähler potential due to the anomalous $`U(1)`$ breaking are important for the quantitative analysis of brane stabilization. They, however, do not affect the qualitative picture of brane stabilization. Note that there are no renormalizable couplings in this case. This model is then the same as the first model we discussed above. Once we T-dualize so that we have D3- instead of D9-branes, the motion of the D3-branes in the $`z_3`$ direction is described by the $`\mathrm{\Phi }`$ VEV. Note that the D3-branes are frozen in the $`z_1,z_2`$ directions as the corresponding moduli are absent. As we saw above, the VEV of $`\mathrm{\Phi }`$ in this model is stabilized via supersymmetry breaking non-perturbative dynamics, so that the D3-branes are stabilized at a finite distance from the orientifold plane.
Here we should point out that the above open string spectra correspond to perturbative (from the orientifold viewpoint) sectors. However, unlike in some other cases (examples of which were recently discussed in ), non-perturbative orientifold sector states in these models become heavy and decouple once we blow up the orbifold singularities (which is required to break anomalous $`U(1)`$’s) along the lines of .
Before we finish this note, we would like to make a few comments. First, it would be interesting to see whether in models of this type log-like corrections to the Kähler potential could stabilize branes at distances large compared with the string length along the lines of . Also, the above discussion assumed dilaton stabilization. In a more realistic setup dilaton would have to be stabilized via the standard “race-track” mechanism , or via a quantum modification of the moduli space (also see ), or else via some other mechanism. It would be interesting to understand these issues more explicitly in the brane world context (for a general discussion see ).
Finally, we should point out that brane stabilization mechanisms similar to those discussed above were considered in in the non-compact context where gravity decouples from the gauge theory dynamics. Here we consider consistent compact tadpole-free orientifold models. Brane stabilization in the latter context was also discussed in , where branes were argued to be stabilized via an interplay between non-perturbatively generated superpotential and an anomalous $`U(1)`$. However, as we have mentioned above, anomalous $`U(1)`$’s in the orientifold vacua are generically broken by VEVs of the corresponding twisted closed string sector chiral multiplets, which ameliorates the effect of the anomalous $`U(1)`$’s leaving behind the usual non-perturbative runaway superpotential (as in the first model discussed in this note).
I would like to thank Gia Dvali and Gregory Gabadadze for discussions. This work was supported in part by the grant NSF PHY-96-02074, and the DOE 1994 OJI award. I would also like to thank Albert and Ribena Yu for financial support.
# MiniBooNE: the Booster Neutrino Experiment
## I Introduction
Currently, the Liquid Scintillator Neutrino Detector (LSND) at the Los Alamos National Laboratory is the only accelerator-based neutrino experiment to have evidence for neutrino oscillations .
The Booster Neutrino Experiment (BooNE) at Fermilab is being prepared to conclusively test these results. The experiment will take place at a new neutrino beamline coming off of the FNAL 8 GeV proton Booster. The first phase of BooNE — MiniBooNE — will be a single-detector experiment. MiniBooNE will obtain approximately 1000 events per year if the LSND signal is due to $`\nu _\mu \rightarrow \nu _e`$ oscillations, and will be capable of establishing the signal with greater than 5$`\sigma `$ significance. This new experiment expects to be collecting data by the end of 2001.
KARMEN, the Karlsruhe Rutherford Medium Energy Neutrino experiment at the Rutherford Appleton Laboratory, is very similar to LSND, has been running since 1990, and will be completed in early 2001. Therefore we discuss the results of both KARMEN and LSND to highlight the experimental context that MiniBooNE will encounter.
## II LSND and KARMEN
LSND presented its first evidence for $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ oscillations in 1995 . In the same year, KARMEN completed its first phase of searching for the same oscillations, but lacked the sensitivity to confirm or refute LSND . After upgrading its cosmic ray veto , KARMEN resumed data taking in February 1997 and plans to continue running through 2001. Results based on KARMEN data collected up to April 1998 were available at the time of DPF’99 . More recently KARMEN has updated results based on data collected through February 1999 .
LSND and KARMEN both search for $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$. Both experiments are at 800 MeV proton accelerators where muon-antineutrinos are produced from the decay of muons at rest, through the decay chain:
$`\pi ^+\rightarrow \mu ^+\nu _\mu `$ (2)
$`\mu ^+\rightarrow e^+\nu _e\overline{\nu }_\mu `$ (endpoint $`52.8`$ MeV)
Both experiments employ liquid scintillator detectors. Without being able to directly distinguish $`e^+`$ from $`e^{-}`$, the experiments distinguish the appearance of $`\overline{\nu }_e`$ from the presence of $`\nu _e`$ by correlating the electron-type track in position and time with the photon from an associated neutron capture reaction. Neutrons captured on protons produce 2.2 MeV photons:
$`\overline{\nu }_ep\rightarrow e^+n`$ (4)
$`np\rightarrow d\gamma (2.2\mathrm{MeV})`$
There are, of course, differences between the two experiments . The LSND detector is three times the size of the one at KARMEN, 167 tons versus 56 tons; and LSND’s proton exposure of more than 25000 C outpaces KARMEN’s post-upgrade expectation of 9000 C. LSND is positioned 30 m from the neutrino source, compared with 17.6 m for KARMEN. By being further from the target, LSND gains sensitivity to lower $`\mathrm{\Delta }m^2`$ at the expense of reduced neutrino flux. LSND is a single tank whereas KARMEN is segmented into 512 modules, and the concentration of scintillator in LSND is lower than at KARMEN. KARMEN therefore has better position and energy resolution, whereas LSND can measure the track direction and use Cherenkov rings for particle identification. Finally, because the target at LSND includes a drift space and the detector is positioned downstream, LSND is able to also search for the charge conjugate $`\nu _\mu \rightarrow \nu _e`$ oscillation using neutrinos from $`\pi ^+`$ decay in flight .
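The role of the source-detector distance can be made concrete with the standard two-flavor appearance probability $`P=\mathrm{sin}^22\theta \mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E)`$ ($`\mathrm{\Delta }m^2`$ in $`\mathrm{eV}^2`$, $`L`$ in m, $`E`$ in MeV). The sketch below evaluates it at the two baselines; the oscillation parameters are illustrative sample values, not fitted ones.

```python
import numpy as np

def appearance_prob(dm2_eV2, sin2_2theta, L_m, E_MeV):
    """Two-flavor oscillation probability."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

dm2, sin2_2th, E = 1.0, 3.0e-3, 40.0   # sample point; E ~ 40 MeV
for name, L in [("LSND", 30.0), ("KARMEN", 17.6)]:
    print(f"{name}: P = {appearance_prob(dm2, sin2_2th, L, E):.1e}")
```

The longer LSND baseline shifts the sensitivity toward lower $`\mathrm{\Delta }m^2`$ at the cost of flux, as noted above.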
Based on analysis of data collected through April 1998, corresponding to a proton exposure of 2900 C, KARMEN observes 0 candidate events while expecting $`2.88\pm 0.13`$ background events. With these data, the KARMEN sensitivity to an LSND-type signal is on the order of 1 event. Figure 1 shows the limit derived from this experimental observation overlaying the LSND favored region. Also shown is KARMEN’s sensitivity, the limit if only the background expectation were observed.
Noting that KARMEN expects three times more data, a glance at Figure 1 raises the question: at this rate, won’t KARMEN soon rule out LSND? Not so fast. The difference between KARMEN’s sensitivity and limit curves merits scrutiny. The limit benefits from the non-observation of even the expected background. KARMEN will keep running, and if future data contain events at a rate consistent with the background expectation, KARMEN’s limit will necessarily move closer to its sensitivity contour. The sensitivity will improve with more data and with a more sophisticated likelihood analysis. Nevertheless, KARMEN will lack the sensitivity to rule out or confirm LSND.
The situation is illustrated by KARMEN’s recently updated results, based on data gathered through February 1999, about half of its ultimate total. With a new analysis, 8 events are observed, while $`7.8\pm 0.5`$ background events are expected. For the favored LSND parameters, KARMEN would expect between 1.5 and 5.5 oscillation events. The limit is now weaker than in Figure 1. These results encroach on LSND’s allowed region, but they do not rule out LSND. Another experiment will be needed to make a conclusive statement about LSND, and that is where MiniBooNE comes in.
## III MiniBooNE: experimental design
MiniBooNE will use a new neutrino beamline coming off of the FNAL 8 GeV proton Booster. The Booster is a reliable machine, expected to provide $`2\times 10^7`$ s of running per year, while delivering $`5\times 10^{12}`$ protons per 1 $`\mu `$s pulse at a rate of 5 Hz to MiniBooNE. Furthermore, the Booster will be able to deliver beam to MiniBooNE while also supplying protons for the TeVatron and NuMI programs.
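Those beam parameters fix the yearly protons on target. As a quick arithmetic check using only the numbers quoted above:

```python
protons_per_pulse = 5e12
pulse_rate_hz = 5
seconds_per_year = 2e7
pot_per_year = protons_per_pulse * pulse_rate_hz * seconds_per_year
print(f"protons on target per year ~ {pot_per_year:.0e}")  # ~5e20
```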
The secondary pion beam will emerge from a two-horn focusing system into a 50 m decay region. The pion decay length will be either 25 m or 50 m depending on the position of a movable steel beam stop (varying the decay length provides a check of experimental systematics). The detector will be positioned 500 m downstream of the decay region.
The detector will consist of a spherical tank 6.1 m (20 feet) in radius filled with 807 tons of pure mineral oil. An inner-tank structure at 5.75 m will support phototubes and form an optical barrier, separating the tank into a central main volume and an outer veto shield. Cherenkov and scintillation light from neutrino interactions in the main volume will be detected by 1280 8-inch phototubes, providing 10% photocathode coverage of the 445 ton fiducial volume. (Undoped mineral oil tends to scintillate modestly from the presence of intrinsic impurities.) The veto shield will be viewed by 240 phototubes mounted on the tank wall.
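The quoted 10% photocathode coverage follows from the geometry above. A sketch of the check, assuming the 8-inch figure refers to the photocathode diameter:

```python
import math

support_radius_m = 5.75            # inner phototube support structure
n_phototubes = 1280
pmt_diameter_m = 8 * 0.0254        # 8 inches, assumed photocathode diameter
photocathode_area = n_phototubes * math.pi * (pmt_diameter_m / 2) ** 2
support_sphere_area = 4 * math.pi * support_radius_m ** 2
print(f"coverage ~ {photocathode_area / support_sphere_area:.1%}")  # ~10.0%
```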
Typical neutrino energies will be from 0.5 to 1.0 GeV. In one year of running, the experiment will collect approximately 500000 reconstructed $`\nu _\mu `$ events. The intrinsic $`\nu _e`$ contamination in the beam will be approximately 0.3%, yielding approximately 1500 reconstructed $`\nu _e`$ background events.
## IV MiniBooNE: analysis description
The detector will reconstruct quasielastic $`\nu _e`$ interactions by identifying electrons via their characteristic Cherenkov and scintillation light signatures. Besides the $`\nu _ee^{}`$ signal, several backgrounds will contribute. The analysis will come down to accounting for the backgrounds and determining whether or not there is an excess. The background sources will be due to $`\nu _e`$ contamination in the beam and the misidentification of muons and $`\pi ^0`$’s in the tank as electrons. Because the neutrinos are at higher energies than at LSND and KARMEN, neutrons will not play a role in the signal and will not contribute background.
The detector will record the time of the initial hit and total charge for each phototube. From this information, the track position and direction will be determined. Muon tracks will be distinguished from electron tracks by their Cherenkov rings and scintillation light. Electrons will tend to produce “fuzzy” rings due to multiple scattering and bremsstrahlung, while muon rings will tend to have sharp outer boundaries. Electrons also tend to have a high fraction of prompt (Cherenkov) light compared to late (scintillation) light, whereas muons produce relatively more late light.
The $`\nu _e`$ contamination in the beam is due to decays of pions and kaons. Monte Carlo simulation constrained by production data will be used to limit the systematic uncertainty in the $`\nu _e`$ background to better than 10%. In addition, it will be possible to measure the pion energy spectrum using the observed $`\nu _\mu `$ events, virtually all (99%) of which will come from pion decay. The technique exploits the classic energy-angle correlation in neutrino beams, which will be enhanced here by the relatively low beam energy and small solid angle subtended by the detector. By measuring the pion spectrum, MiniBooNE expects to reduce the uncertainty in the pion component of the $`\nu _e`$ background to less than 5%.
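The energy–angle correlation referred to here is standard two-body $`\pi ^+\to \mu ^+\nu _\mu `$ kinematics: in the relativistic small-angle limit, $`E_\nu \approx 0.43E_\pi /(1+\gamma _\pi ^2\theta ^2)`$. A minimal sketch (the pion energy and angles below are illustrative assumptions, not experiment parameters):

```python
import math

M_PI, M_MU = 139.57, 105.66  # MeV, charged pion and muon masses

def neutrino_energy(e_pi_mev, theta_rad):
    """E_nu from pi -> mu nu decay at lab angle theta
    (relativistic, small-angle approximation)."""
    gamma_pi = e_pi_mev / M_PI
    return (1 - (M_MU / M_PI) ** 2) * e_pi_mev / (1 + (gamma_pi * theta_rad) ** 2)

e_pi = 2000.0  # MeV, illustrative pion energy
for theta_mrad in (0.0, 6.0, 12.0):  # a 6 m detector at 500 m subtends ~12 mrad
    e_nu = neutrino_energy(e_pi, theta_mrad * 1e-3)
    print(f"theta = {theta_mrad:4.1f} mrad  E_nu = {e_nu:6.1f} MeV")
# Inverting this relation, the measured nu_mu spectrum at the detector's
# small, well-defined angle constrains the parent pion spectrum.
```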
Ninety-two percent of the muons contained in the detector will decay, and they will be relatively easily identified by the presence of a second track. However, the 8% of muons that are captured have a greater chance of being misidentified. The misidentification of muon captures will be estimated by studying the large sample of muons that decay and determining the particle identification algorithm performance while ignoring the decay track. Using this technique, which does not rely on Monte Carlo simulation, the muon misidentification uncertainty is expected to be below 5%.
Most neutral pions will be identified by their two electromagnetic decay tracks. The small fraction (1%) of asymmetric $`\pi ^0`$ decays will not yield two resolvable tracks and will therefore be more likely to be misidentified. The misidentification contribution of these decays will be studied with Monte Carlo simulation, which will be constrained by the large sample of measured $`\pi ^0`$’s in the experiment. The pion misidentification uncertainty is expected to be 5%.
## V MiniBooNE: prospects
If oscillations occur as indicated by LSND, MiniBooNE will observe an excess of approximately 1000 events in one year of running. Figure 2 shows the number of excess events and their significance for two points in the LSND favored region. The significance is calculated using the systematic uncertainties for the various background sources above.
MiniBooNE will gain additional sensitivity by measuring the energy dependence of the $`\nu _e`$ events. The oscillation signal has a different energy distribution from the background. Therefore an underestimate of the background will not necessarily lead to a fictitious oscillation signal. Figure 3 shows the exclusion contours for the energy-dependent fit as well as the limit based on using the total number of observed events above background.
## VI Conclusions
LSND has presented evidence for muon to electron neutrino oscillations. KARMEN is now searching for these oscillations, but KARMEN is unlikely to have the sensitivity to reach a conclusion about LSND. Another experiment will be needed.
MiniBooNE is being prepared to fill this need. The detector and new 8 GeV beam line are being designed at Fermilab, and the experiment is scheduled to start data taking at the end of 2001.
MiniBooNE will either rule out LSND or it will demonstrate the signal and home in on the oscillation parameters. Should a signal be found, BooNE would be ready to continue its experimental program with a second detector, whose position would be determined by the MiniBooNE result.
## VII Acknowledgements
It is a pleasure to thank my MiniBooNE colleagues. I thank Eric Church for discussions about LSND and Klaus Eitel for discussions about KARMEN.
# Remarks on M Theoretic Cosmology
## 1 Moduli and Inflation
Generic compactifications of M theory to four dimensions with only four SUSYs have no moduli. That is, there are no theorems which prevent the occurrence of a superpotential on the would-be moduli space. Furthermore, all symmetries of M theory are gauge symmetries. As a consequence of D-terms, the true moduli space is made up of fields invariant under all continuous gauge symmetries. Thus, the superpotential is restricted only by discrete symmetries and these generically do not require it to vanish. The exception is a discrete complex R symmetry, . The submanifold of field space invariant under such a symmetry, and containing no directions of R charge 2, is a true moduli space<sup>1</sup><sup>1</sup>1Another possible region of field space where the superpotential vanishes has been explored by Witten . Witten’s argument for vanishing superpotential uses a $`U(1)`$ symmetry valid only in a certain large volume limit to draw exact conclusions about the superpotential. I find it mildly suspicious and will not include Witten’s region in the discussion here, but it may have an important role to play.
Most discussions of the phenomenology of string/M theory have been based either on low energy SUGRA, or on weakly coupled string expansions. In these discussions the apparent moduli space of the theory is much larger than the true moduli space. There are theorems which prevent the occurrence of a superpotential to all orders in the perturbative expansion. If one works in the regime where nonperturbative corrections to the superpotential are small, then the phrase “superpotential for moduli” is not an oxymoron. It was in this context that the idea of moduli as inflatons was proposed . A serious problem with this regime was pointed out long ago by Dine and Seiberg . Within the context of a systematic perturbative expansion one cannot stabilize the moduli (small couplings or large radii) whose large values justify the expansion. Racetrack models and Kähler stabilization are two attempts to get around this problem. Neither leads to a reliable calculational framework, and their fundamental postulates have not been verified.
Another reason for avoiding extreme regions of moduli space was pointed out by Moore and Horne . In extreme regions of moduli space, the metric on the space can be reliably calculated and the infinite regions have finite volume. This means that the system dynamically avoids the extreme regions. For example, if the potential has two minima with vanishing cosmological constant, one in the interior of moduli space and the other in one of the extreme regions, then a generic motion of the system will end up at the minimum in the interior.
In a vacuum state with no large moduli, on the other hand, it is not clear what the term modular inflation could mean. This is particularly true if one adopts Witten’s explanation of the ratio between the Planck and unification scales, with the concomitant conclusion that the fundamental scale of M theory is on the order of $`10^{16}`$ GeV. The simplest interpretation of the magnitude of primordial energy density fluctuations in inflationary cosmology invokes a vacuum energy during inflation of approximately this order of magnitude. In what sense can fields with a potential energy of this order of magnitude be considered moduli? Recall that the motivation for separating moduli out from the other degrees of freedom of M theory is that they are supposed to parametrize low energy motions of the system among would-be ground states.
In fact, scenarios like that of Horava and Witten contain the clue to an answer to this question. The universe is separated into “branes” and bulk, and the latter has more SUSY than the former. The bulk universe then has 8 approximately conserved supercharges and thus contains fields which would act as true moduli were it not for the presence of the branes. The superpotential for the moduli is generated on the branes.
Let us examine the consequences of this fact. At an energy scale small compared to the mass of the bulk Kaluza-Klein modes on the compact manifold, the world is effectively four dimensional. The moduli become fields in this four dimensional effective field theory. Since the effective theory has only four SUSYs, these fields have a superpotential. Since it comes only from the vicinity of the branes on which the larger SUSY algebra is broken, it is independent of the volume of the internal space, and has, by dimensional analysis, the form
$$W=M^3w(\theta _a)$$
(1)
where $`\theta _a`$ are dimensionless parameters characterizing the internal geometry. On the other hand, the kinetic term for these zero modes, just like the Einstein term for the zero modes of the gravitational field, is proportional to the volume $`V_7`$ of the internal manifold, and has the form
$$M^9V_7\sqrt{g}G_{ab}(\theta )\partial _\mu \theta _a\partial ^\mu \theta _b.$$
(2)
Note that $`M^9V_7=m_P^2=\frac{1}{8\pi G_N}`$ is, as the notation indicates, the same coefficient which multiplies the Einstein action. Furthermore, although the volume $`V_7`$ is itself a modulus, when we pass to the Einstein conformal frame in which $`V_7`$ is replaced by its vacuum value, the kinetic term of the moduli is rescaled in precisely the same manner as the gravitational action. It is then natural to define canonical scalar fields by $`\varphi _a=m_P\theta _a`$. Their action has the form
$$\sqrt{g}\left[G_{ab}(\varphi /m_P)\partial _\mu \varphi ^a\partial ^\mu \varphi ^b-\frac{M^6}{m_P^2}v(\varphi /m_P)\right].$$
(3)
The slow roll equations of motion derived from this action are
$$3Hd\varphi ^a/dt=-\frac{M^6}{m_P^2}G^{ab}\frac{\partial v}{\partial \varphi ^b}.$$
(4)
and lead to the equation
$$dv/dt=-\frac{M^3}{3m_P^2\sqrt{v}}\partial _avG^{ab}\partial _bv.$$
(5)
where $`\partial _a`$ refers to the derivative with respect to the dimensionless variable $`\theta ^a`$. We have also used the slow roll expression for $`H`$ in terms of the potential. From Eq. (5) we immediately derive an expression for the number of e-foldings
$$N_e=3\int \frac{v}{\partial _avG^{ab}\partial _bv}\partial _cvd\theta ^c.$$
(6)
where the integral is over the trajectory in moduli space that the system follows during the time interval when the slow roll approximation is valid. We see that in order to obtain a large number of e-foldings we need a potential which is flat in the sense that $`|\partial v|/v\lesssim 1/N_e`$. The phenomenologically necessary $`N_e\sim 60`$ can be achieved with only a mild fine tuning of dimensionless coefficients. Correspondingly, the conditions on the potential which ensure the validity of the slow roll approximation are order one conditions on the derivatives of the potential and do not contain any small dimensionless numbers.
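As a concrete single-field illustration of Eq. (6): with $`G_{ab}=1`$ the expression reduces to $`N_e=3\int (v/v^{})d\theta `$, so a fractional slope $`|v^{}/v|\approx 1/20`$ sustained over $`\mathrm{\Delta }\theta \approx 1`$ already gives $`N_e\approx 60`$. A numerical sketch, using an assumed toy exponential profile (not a potential from the text):

```python
import numpy as np

# Toy single-field potential v(theta) = exp(-theta/20), so |v'/v| = 1/20.
slope = 1.0 / 20.0
theta = np.linspace(0.0, 1.0, 2001)
v = np.exp(-slope * theta)
dv = np.gradient(v, theta)
# Eq. (6) with G_ab = 1 reduces to N_e = 3 * integral of (v / v') dtheta.
n_e = 3 * np.trapz(np.abs(v / dv), theta)
print(f"N_e ~ {n_e:.0f}")  # ~60 for a fractional slope of 1/20
```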
The fact that actions of the form (3) give rise to inflation with minimal fine tuning, and that such actions naturally arise for moduli in string theory, was pointed out in . The general point that moduli might provide the flat-potentialed, weakly coupled fields necessary for inflation was first made in . Here we note that in brane scenarios, it is the bulk moduli which play this role. By contrast, moduli associated with a single brane will have a natural scale $`M`$ and do not play the role of inflatons in quite as gracious a manner.
Another pleasant surprise awaits us when we plug the potential from Eq. (3) into the standard formula for the amplitude of the primordial energy density fluctuations generated by inflation. Up to numbers of order one we find
$$\frac{\delta \rho }{\rho }\sim N_\lambda (M/m_P)^3\sim 10^{-5}$$
(7)
where the numerical value comes from the measured cosmic microwave background fluctuations, and $`N_\lambda \sim 50`$. This gives $`M\sim (2/10)^{1/3}\times 2\times 10^{16}`$ GeV, which, given the crudeness of the calculation, is the unification scale. To put this in the most dramatic manner possible, we can say that a brane scenario of the Horava-Witten type, given the unification scale as input, predicts the correct amplitude for inflationary density fluctuations. Furthermore, the whole scenario only makes sense because of the same large volume factor that underlies Witten’s explanation of the ratio between the Planck and unification scales. This is necessary at a conceptual level to understand why it is sensible to think about a modulus with a superpotential of order the fundamental scale, and at a phenomenological level to understand the magnitude of the density fluctuations.
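Inverting Eq. (7) reproduces the quoted scale. A sketch of the arithmetic, taking the reduced Planck mass $`m_P=(8\pi G_N)^{-1/2}\approx 2.4\times 10^{18}`$ GeV as an assumed input:

```python
m_planck = 2.4e18          # GeV, reduced Planck mass (assumed input)
delta_rho_over_rho = 1e-5  # measured CMB fluctuation amplitude
n_lambda = 50
m_unification = m_planck * (delta_rho_over_rho / n_lambda) ** (1.0 / 3.0)
print(f"M ~ {m_unification:.1e} GeV")  # ~1.4e16 GeV: the unification scale
```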
Although it has no connection with our discussion here, we cannot resist pointing out the other piece of evidence for a scale of the same order as $`M`$. Any theory of the type we are discussing would be expected to contain corrections to the standard model Lagrangian of the form (in superfield notation) $`\frac{1}{M}LLH^2`$, which gives rise to neutrino masses. It is a matter of public record now that such masses exist, with an estimated value for $`M`$ between $`0.6`$ and $`1.8\times 10^{15}`$ GeV. Although this is an order of magnitude shy of the unification scale, I believe the uncertainties in coefficients of order one in dimensional analysis could easily make up the difference. If not, we will have the interesting problem of explaining the existence of two close but not identical energy scales in fundamental physics.
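The scale extracted from neutrino masses follows from the dimension-five operator above, $`m_\nu \sim v^2/M`$ with $`v`$ the electroweak vev. A sketch, where the assumed neutrino mass is taken at the atmospheric scale (an input not given in the text):

```python
v_ew = 174.0         # GeV, electroweak Higgs vev
m_nu_gev = 0.05e-9   # GeV (~0.05 eV), assumed atmospheric-scale mass
M = v_ew ** 2 / m_nu_gev
print(f"M ~ {M:.1e} GeV")  # ~6e14 GeV, at the low end of the quoted range
```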
Finally, we want to note that this scenario for inflation does not suffer from the runaway problem pointed out by Brustein and Steinhardt . These authors noted that the inflationary vacuum energy is much larger than the SUSY breaking scale. Furthermore, the minimum of the effective potential was assumed close to the region of weak string coupling. There was then a distinct possibility that the inflaton field would overshoot the small barrier separating it from the extreme weak coupling regime where string theory is incompatible with experiment. In the present scenario, the coupling is not assumed to be weak (nor the volume extremely large). Furthermore the inflationary potential has nothing to do with SUSY breaking. There is no runaway problem at all.
### 1.1 SUSY breaking
The authors of the papers in agonized over the discrepancy between the unification scale and the scale of SUSY breaking. In fact, they discussed and discarded what I now believe is the obvious solution of this problem, because of problems specific to weakly coupled string theory. The obvious way to avoid SUSY breaking at the scale $`M`$, is to insist that the superpotential 1 has a SUSY minimum. In fact, the existence of such minima is generic , requiring only the solution of $`n`$ complex equations for $`n`$ unknowns. However, in general, the superpotential will not vanish at such a minimum but instead will give rise to a negative cosmological constant. We refer the reader to for the elementary argument that in a postinflationary universe, such a SUSY point in moduli space is not a stable attractor of cosmological solutions. Instead, generic solutions which try to fall into such a minimum, recollapse on microscopic time scales.
The stable postinflationary attractors of a supersymmetric cosmology are points in moduli space with vanishing superpotential and SUSY order parameters. These can be characterized in terms of a symmetry. Namely, any complex R symmetry forces the superpotential to vanish, and if there are no fields of R charge 2 then the SUSY order parameter vanishes as well. The R symmetry must of course be discrete, since we are discussing M theory. If in addition, there do exist fields of R charge 0, then there will be an entire submanifold on which the superpotential vanishes and SUSY is preserved. Our future considerations will concentrate on this submanifold, which, following the terminology in the introduction, we call the true moduli space. It is the locus of restoration of a discrete R symmetry with the above properties.
Before proceeding to the discussion of SUSY breaking on the true moduli space, we should introduce the final characters in our story, the boundary or brane moduli. We could in fact have inserted such fields, which arise as excitations localized on one of the branes, into our discussion of inflation. However, they would have been of little use there, as their natural scale is $`M`$ rather than $`m_P`$ and they are rapidly driven to their instantaneous minima during the inflationary era. At lower energies however they will play an interesting role.
In addition to these moduli fields, any brane scenario will contain a variety of gauge fields and matter fields in nontrivial representations of the gauge group. The moduli will interact with these fields via the moduli dependence of bare gauge and yukawa coupling parameters in the effective theory as well as through a variety of irrelevant operators. If the gauge couplings are asymptotically free and do not run to infrared fixed points at low energy, this description of the physics only makes sense if the bare gauge couplings are sufficiently small that the scale at which the effective coupling becomes large is substantially below the scale $`M`$. Otherwise it is not consistent to include the gauge degrees of freedom in the low energy effective theory. The weakness of bare couplings in these scenarios is not evident a priori, as it would be in a purely perturbative approach. The underlying physics is assumed to be strongly coupled. Witten has shown how the small unified coupling of the standard model can be explained in terms of a product of a large number of factors of order one in a geometry of large dimensions. We will assume that similar numerical factors explain the strength of the gauge interactions that lead to SUSY breaking.
The main role of the gauge interactions is not to break SUSY, but rather the discrete R symmetry. If we fix the moduli and treat the gauge theory as a flat space quantum field theory, then SUSY remains unbroken even though a nonperturbative superpotential is generated. The scale of this superpotential is determined via a standard renormalization group analysis in terms of the bare gauge coupling function $`f(\varphi /m_P,\chi /M)`$, where we have indicated dependence on both bulk and boundary moduli. For simplicity we assume that $`f`$ is a large constant $`f_0`$ plus a smaller, moduli dependent, term. The conclusions are not affected by this assumption. The scale $`\mu `$ of the nonperturbative superpotential is then determined by $`f_0`$. It takes the form
$$W_1=\mu ^3w_1(\varphi /m_P,\chi /M)$$
(8)
We have eliminated all (composite) superfields related to the gauge interactions from this expression by solving their F and D flatness conditions for fixed values of the moduli. The possibility of doing this is equivalent to the statement that the gauge theory does not itself break SUSY. We assume that $`W_1`$ does not vanish at any minimum of the effective potential. This is the statement of spontaneous R symmetry breaking. As a consequence, SUSY minima of the potential have negative cosmological constant of order at least $`\mu ^6/m_P^2`$ and are not attractors of the cosmological equations. Thus, cosmologically, R symmetry breaking forces the moduli to choose a minimum with spontaneously broken SUSY<sup>2</sup><sup>2</sup>2The tunneling amplitudes of such nonsupersymmetric vacua into supersymmetric AdS vacua are incredibly tiny and might be identically zero, as discussed in ..
Phenomenology requires a value of $`\mu `$ which gives acceptable squark masses. The details depend on whether or not we can set the F terms of the boundary moduli equal to zero (if there are no bulk moduli this is not consistent with our other assumptions). If we can, then the nonvanishing F terms are of order $`\frac{\mu ^3}{m_P}`$. A standard argument shows that squark masses will be of order $`\frac{\mu ^3}{m_P^2}`$, about the same as the gravitino. Assuming this is about a TeV we find $`\mu \sim 10^{13}`$ GeV . An attractive feature of this scenario is that the positive and negative terms in the SUGRA potential are naturally of the same order of magnitude. Although we have no real understanding of why the cosmological constant is so small, this fact of nature is an indication of a relation between the scales of R symmetry breaking and of SUSY breaking. In models in which the SUSY breaking F term originates as a bulk modulus the correct order of magnitude relation between these scales arises automatically.
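The value $`\mu \sim 10^{13}`$ GeV follows from requiring $`\mu ^3/m_P^2\sim 1`$ TeV. A sketch of the arithmetic, with the same assumed reduced Planck mass as above:

```python
m_planck = 2.4e18   # GeV, reduced Planck mass (assumed input)
m_squark = 1e3      # GeV, ~TeV-scale squark/gravitino masses
mu = (m_squark * m_planck ** 2) ** (1.0 / 3.0)
print(f"mu ~ {mu:.1e} GeV")  # ~1.8e13 GeV
```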
By contrast, if we assume that the SUSY breaking F term is that of a boundary modulus, the negative term in the potential is of order $`\frac{M^2}{m_P^2}\sim 10^{-4}`$ smaller than the positive term. To understand the cancellation of the cosmological constant, one can, following , introduce two gauge groups. The first leads to spontaneous R symmetry breaking with unbroken SUSY at a scale $`\mu _1`$ while the second breaks SUSY at $`\mu _2`$. If $`(\mu _1/\mu _2)^6\sim m_P^2/M^2`$ one can again obtain “order of magnitude cancellation” of the cosmological constant, but the scenario clearly lacks simplicity. In this scenario squark masses are of order $`\mu _2^3/M^2`$, and the gravitino is lighter than this by a factor $`10^4`$ and weighs about $`100\mathrm{MeV}`$. $`\mu _2`$ has to be about $`5\times 10^{11}`$ GeV.
The first of these scenarios is clearly simpler, but as we now recall, it leads to the cosmological moduli problem. The scalar fields in the bulk moduli multiplets acquire masses from the SUSY violating potential of order $`m_M\sim \mu ^3/m_P^2`$, which is the same order of magnitude as the gravitino and squark masses, i.e. a TeV. They have only nonrenormalizable couplings to ordinary matter, scaled by $`m_P`$. Thus, their nominal reheat temperature , $`\sqrt{m_M^3/m_P}`$, is of order $`3\times 10^{-2}`$ MeV, and the universe is matter dominated at the time that nucleosynthesis is supposed to be taking place. The thermal inflation scenario can solve this problem, and we will review another solution below, but it might tempt us into adopting the scenario with boundary moduli as the instigators of SUSY breaking.
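The nominal reheat estimate used here is $`T_{rh}\sim \sqrt{m_M^3/m_P}`$ for a gravitationally coupled modulus; a sketch of the numbers:

```python
m_planck = 2.4e18    # GeV, reduced Planck mass (assumed input)
m_modulus = 1e3      # GeV, TeV-scale modulus mass
t_reheat_gev = (m_modulus ** 3 / m_planck) ** 0.5
print(f"T_rh ~ {t_reheat_gev * 1e3:.0e} MeV")  # ~2e-2 MeV, below ~1 MeV BBN
```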
In this case, one would assume that all bulk moduli are frozen by the initial superpotential of order $`M^3`$. Dine has advocated that the proper vacuum should be an enhanced symmetry point of moduli space at which all moduli (he does not make a distinction between bulk and boundary fields) are nonsinglets. We temporarily adopt this point of view, but only for the bulk moduli. Then the boundary moduli masses are of order $`1`$ TeV, but their couplings to ordinary matter are scaled by $`M`$ rather than $`m_P`$. The reheat temperature is rescaled by a factor of $`10^2`$ and is (just) above the temperature for nucleosynthesis. The Hot Big Bang occurs just in time to light the furnace in which the primordial elements were formed.
One still has to account for baryogenesis. Adopting a mechanism suggested long ago by Holman, Ramond and Ross we aver that this can come from the decay of the moduli themselves. All of their interactions are of order the fundamental scale of M theory, so there is no reason for them to preserve accidental symmetries like baryon and lepton number. It is quite reasonable that they also violate CP, though the status of CP in M theory is somewhat more obscure. The decay itself is an out of equilibrium process, so all of the Sakharov criteria for baryogenesis are fulfilled. However, we must also take note of the theorem of Weinberg , according to which baryon number violating terms in the Hamiltonian must act twice in order to generate an asymmetry. In the decay of moduli, the first action of the Hamiltonian comes at no cost in amplitude, because the modulus must decay somehow and there is no reason for its baryon number violating decays to be significantly smaller than those which conserve baryon number. However the second baryon number violating interaction should not be highly suppressed if we want to generate a reasonable baryon asymmetry. Indeed, a one TeV, gravitationally coupled, particle which produces a baryon asymmetry of order one in its decay, also produces of order $`(1\mathrm{T}\mathrm{e}\mathrm{V}/3\mathrm{M}\mathrm{e}\mathrm{V})`$ or $`3\times 10^5`$ photons. Thus a large suppression of the average baryon number per decay would give too small a baryon asymmetry. A way out of this difficulty is to admit renormalizable baryon number violating operators in the supersymmetric standard model. Discrete symmetries such as a $`Z_2`$ lepton parity can adequately suppress all unobserved baryon and lepton number violating processes in the laboratory, while allowing such operators with quite large coefficients. An unfortunate casualty of this mechanism is the lightest SUSY particle. The LSP is no longer stable in the scenario described above and we have to look elsewhere for a dark matter candidate.
With this scenario in mind, let us return to the situation with bulk moduli. Suppose that the coefficient in the order of magnitude relation between the moduli mass and the fundamental parameters is $`m_M=5\times \mu ^3/m_P^2`$, while the squark mass is actually $`m_{\stackrel{~}{q}}=\mu ^3/4m_P^2=1`$ TeV. Then the reheat temperature for the bulk moduli is multiplied by a factor of $`20^{3/2}\approx 10^2`$ and is again just above $`1`$ MeV. Nucleosynthesis is again saved and baryogenesis can take place in the process of reheating. Again we must invoke renormalizable baryon number violation. Now however, there are natural candidates for dark matter. Imagine a boundary modulus whose potential energy is substantially smaller than the estimate $`\mu ^3/M^2`$ coming from Eq. (8). We will call this the dark modulus, because it will be our dark matter candidate. It has a potential of the form $`U=\mathrm{\Lambda }^4u(D/M)`$. (In , where this scenario was first proposed, the candidate was a QCD axion field, which arises under certain natural conditions in Horava-Witten scenarios.) This model works, but the mechanism is much more general and does not require energy densities as small as those of the axion.
Now let us briefly review cosmic history. First we have inflation generated by bulk moduli fields which are not on the true moduli space<sup>3</sup><sup>3</sup>3Perhaps we should call these inflamoduli. . This period ends after of order $`100`$ e-foldings, and the universe is heated by inflamoduli decay to a temperature of order $`10^9`$ GeV . The primordial plasma quickly redshifts away. Furthermore, as soon as the inflamoduli potential energy density falls to $`\mu ^6/m_P^2`$, the universe becomes dominated by the coherent oscillations of the true bulk moduli. The dark modulus remains frozen at some generic point on its potential until the Hubble parameter falls to the mass scale of this field. At this point the energy density of the universe is of order $`\rho \sim m_P^2\mathrm{\Lambda }^4/M^2`$, which is of order $`(m_P/M)^2\sim 10^4`$ times larger than the energy density of the dark modulus. The important point now is that this ratio is preserved by further cosmic evolution until the true bulk moduli decay. After that time, the dark energy density grows linearly with the inverse temperature relative to radiation, and matter radiation equality occurs at $`10^{-4}`$ MeV. This is close enough to the true value for the observable universe that the factors of order one which we have neglected throughout might account for the difference. $`\mathrm{\Lambda }`$ must satisfy two constraints in order for this scenario to work: the dark moduli must remain frozen until the true bulk moduli begin to oscillate, and the dark modulus must have a lifetime at least as long as the age of the universe. The second constraint is by far the stronger, and leads to $`\mathrm{\Lambda }<3\times 10^6`$ GeV. Axions satisfy this constraint by a large margin. Note that this scenario completely removes the conventional cosmological constraint on the axion decay constant. Axions will be very weakly coupled and will escape all of the usual schemes for detecting them.
In view of the more natural explanation of the ratio between R symmetry and SUSY breaking scales, and the existence of a dark matter candidate in the bulk modulus scenario for SUSY breaking, we tentatively reject the idea that SUSY breaking is triggered by the F term of a boundary modulus. Its only advantage over the bulk modulus scenario is that we do not have to massage coefficients of order one in order to push the reheat temperature above an MeV.
For completeness, we should also discuss the possibility that SUSY breaking itself is caused by gauge interactions which are weakly coupled at the fundamental scale. This is required if we assume, with Dine , that moduli are fixed at some enhanced symmetry point. Scenarios of this sort are attractive because they allow us to use the idea of gauge mediation to solve the SUSY flavor problem. Gauge interactions generate superpotentials of the form $`\mu _1^3w_{g_1}(C_1/m_1)+\mu _2^3w_{g_2}(C_2/m_2)`$, where the $`C`$’s are composite superfields and the $`m_i`$ are the nonperturbative low energy scales generated by asymptotic freedom. Again, in order to cancel the cosmological constant, we must introduce an R breaking gauge theory with scale $`m_1`$, which preserves SUSY, and a SUSY breaking gauge theory with scale related by $`m_1^6=m_P^2m_2^4`$.
There is no cosmological moduli problem in this picture, since all moduli are assumed to be frozen by the initial superpotential. Moduli and dark matter in gauge mediated SUSY breaking models have been discussed in .
### 1.2 Density Fluctuations Redux
There is a small discrepancy in what we have said up till now, which the reader may have been rushed into ignoring. We bragged about achieving the right magnitude for energy density fluctuations of the inflaton, but then proceeded to claim that the energy we see today in the universe comes from another source entirely, viz. the true bulk moduli.
It is easy to see however that the true moduli inherit the fluctuations of the inflaton. In a given region the moduli fields start to oscillate when the Hubble constant is about equal to their mass. In a region of inflaton overdensity, this will happen later and the ratio of modular energy density in the overdense and average regions will start to increase like $`a^3`$. This will continue until the moduli in the overdense region begin to oscillate, after which the ratio will remain constant. Since the decrease of oscillating modular energy and oscillating inflaton energy follows the same scaling law, the magnitude of modular fluctuations will be the same as those in the original inflaton field.
We have assumed here that the true moduli begin to oscillate before the inflatons decay into radiation. Since the reheat temperature is $`10^9`$ GeV and the oscillation energy scale is $`10^{11}`$ GeV, this assumption is valid.
Another question to worry about is the possibility of large isocurvature fluctuations in the true bulk moduli fields. However, during inflation, when the inflamoduli are excited away from their minimum, these are not light fields. The nonzero values of the inflamoduli break R symmetry. The true moduli space is a “river valley” running between the hills of the inflationary potential, and during inflation the system lies in the hills above the valley, where the potential is not flat in the valley direction. Indeed, this situation persists long into the era when the inflamoduli have begun to oscillate, because of the factor of $`10^{20}`$ between the inflationary and SUSY violating energy densities.
These issues deserve a more careful analysis, because it is possible that the transfer of fluctuations could leave some observable relic in the cosmic microwave background or that an observable level of isocurvature fluctuations could be generated. It is unclear to me whether reliable conclusions can be obtained without more information about the nature of the potential. Nonetheless, it appears that to a first approximation, the true moduli inherit the adiabatic perturbations of the inflaton field, so that the estimates we made above can be directly related to measurements of microwave background fluctuations.
### 1.3 Generalizing Horava-Witten
The moduli space of 11 dimensional SUGRA compactifications which preserve $`𝒩=1`$ SUSY in four Minkowski dimensions splits into three components. These are Joyce sevenfolds, F theory limits of compactification on Calabi-Yau fourfolds, and Heterotic limits of compactification on $`K3\times CY_3`$. These may be continuously connected when short distance physics is properly taken into account. In addition, there may be many branches of moduli space which join onto these through generalized extremal transitions. The moduli space is thus highly complex.
The cosmological arguments of this paper indicate that the phenomenologically relevant compactifications may belong to a highly constrained submanifold of this complicated space. Namely, they should preserve eight supercharges in the bulk. The breaking to $`𝒩=1`$ should occur only on branes. SUGRA compactifications preserving eight SUSYs are much more constrained. The holonomy must be contained in $`SU(3)`$ which implies that the manifold is the product of a Calabi-Yau threefold times a torus, modded out by a discrete group $`\mathrm{\Gamma }`$. In order to obtain a smooth manifold with eight SUSYs , $`\mathrm{\Gamma }`$ should act freely and the holonomy around the new cycles created by $`\mathrm{\Gamma }`$ identification should be in $`SU(3)`$. Clearly, a way to obtain Horava-Witten like scenarios is to allow fixed manifolds of $`\mathrm{\Gamma }`$, on which an additional SUSY is broken. The original scenario of Horava and Witten was a $`CY_3\times S^1`$ compactification in which $`\mathrm{\Gamma }`$ is a $`Z_2`$ reflection on the $`S^1`$. The fixed planes carry $`E8`$ gauge groups, and one must also choose an appropriate gauge bundle. A further generalization allows five branes wrapped on two cycles of $`CY_3`$ to live between the planes.
It seems likely that more complicated choices of $`\mathrm{\Gamma }`$ might lead to a wider class of scenarios. The problem of classifying scenarios of this type seems quite manageable <sup>4</sup><sup>4</sup>4Preliminary results on the classification problem have been obtained by L.Motl.. The moduli space of compactifications of M theory on $`CY_3`$ times a torus has a reasonably complicated structure, replete with extremal transitions. Nonetheless, it is considerably simpler than the fourfold or Joyce manifold problem, and we know much more about its structure. Thus, if cosmology really points us in the direction of generalized Horava-Witten compactifications, we have made real progress in the search for the true vacuum of M theory.
## 2 Conclusions
Witten’s explanation of the discrepancy between the Planck and unification scales in the context of Horava-Witten compactifications, poses a challenge for inflationary cosmology and particularly for the notion that moduli are inflatons. In fact, the enhanced bulk SUSY of these compactifications gives us a clean definition of modular inflatons. The scenario then makes an order of magnitude prediction of the amplitude of primordial density fluctuations in terms of the unification scale.
Cosmological arguments first discussed in then focus attention on the true moduli space of M theory, a locus of enhanced discrete R symmetry. Such a space almost certainly exists . It is the attractor of postinflationary cosmological evolution. The further evolution of the universe then depends on whether this space contains bulk moduli. In the attractive scenario in which it does, the initial Hot Big Bang generated by inflation, is soon dominated by the energy density stored in coherent oscillations of true bulk moduli. By making optimistic but plausible assumptions about coefficients of order one in order of magnitude estimates, one obtains a reheat temperature above that required by nucleosynthesis. The decay of true bulk moduli, rather than that of the inflaton, generates the Hot Big Bang of classical cosmology. The baryon asymmetry must also be generated in these decays, and this is possible if the SUSY standard model contains renormalizable baryon number violating interactions (compatible with laboratory tests of baryon and lepton number conservation). As a consequence of this, there is no LSP dark matter candidate. Instead, boundary moduli with a suppressed potential energy act as a natural source of dark matter. Indeed, the ratio between the Planck and unification scales appears again in this scenario, this time in explaining the temperature at which matter and radiation make equal contributions to the energy density of the Universe. This estimate comes out an order of magnitude too high, but given the crudity of the calculation it seems quite plausible that this mechanism could be compatible with observation. The “dark modulus ”which appears in this scenario could be a QCD axion with decay constant of order the unification scale. Our unconventional origin for the Hot Big Bang completely removes the cosmological upper bound on this decay constant. Such a particle would be undetectable in presently proposed axion searches.
If a cosmology like that outlined here turns out to be correct, one might be tempted to revise Einstein’s famous estimate of the moral qualities of a hypothetical Creator. The current standard model of cosmology was constructed in the sixties. Since then there has been much speculation about cosmology at times earlier than that at which the primordial elements were synthesized. Most of it has been based on an eminently reasonable extrapolation of the Hot Big Bang to energy densities orders of magnitude higher. If the present scenario is correct, no such extrapolation is possible, and the conditions in the Universe in the first fraction of the First Three Minutes were considerably different from those at any subsequent time. There was a prior Big Bang after inflation, whose remnants may be forever hidden from us. The dark matter which dominates our universe is so weakly coupled to ordinary matter that its detection is far beyond the reach of currently planned experiments. The QCD and electroweak phase transitions never occurred.
The only dramatic prediction of this scenario for currently planned experiments is the occurrence of renormalizable baryon number violation in the low energy SUSY world. The details of the baryogenesis scenario envisaged here should be worked out more carefully, and combined with laboratory constraints, to nail down precisely which kind of operators are allowed. The scenario is thus easily falsifiable, but even the discovery of renormalizable baryon number violating interactions among SUSY particles will not be a confirmation of our cosmology. Similarly, any evidence for the existence of more or less conventional WIMP dark matter will be a strong indication that the present speculations are incorrect, but the failure to discover WIMPS will not prove that they are correct.
Instead one will have to rely on the slow accumulation of evidence against alternatives: ruling out vanishing up quark mass and spontaneous CP violation as solutions to the strong CP problem, the failure of conventional axion and WIMP searches, the discovery of renormalizable B violation. These will be steps on the road to proving that this cosmology is correct, but the end of that road is not in sight.
###### Acknowledgments.
I am grateful to Sean Carroll, Willy Fischler, Lubos Motl and Paul Steinhardt for valuable discussions. This work was supported in part by the DOE under grant number DE-FG02-96ER40559.
# The hyperfine structure of highly charged {_ 92}²³⁸U ions with rotationally excited nuclei
## Abstract
The hyperfine structure (hfs) of electron levels of $`{}_{92}{}^{}{}_{}{}^{238}`$U ions with the nucleus excited in the low-lying rotational $`2^+`$ state with an energy $`E_{2^+}=44.91`$ keV is investigated. In hydrogenlike uranium, the hfs splitting for the $`1s_{1/2}`$ ground state of the electron amounts to $`1.8`$ eV. The hyperfine-quenched (hfq) lifetime of the $`1s2p`$ $`{}_{}{}^{3}P_{0}^{}`$ state has been calculated for heliumlike $`{}_{92}{}^{}{}_{}{}^{238}`$U and was found to be two orders of magnitude smaller than for the ion with the nucleus in the ground state. The possibility of a precise determination of the nuclear $`g_r`$ factor for the rotational $`2^+`$ state by measurements of the hfq lifetime is discussed.
Atomic hfs experiments and purely nuclear measurements (pionic scattering, etc.) are two complementary ways of obtaining information on nuclear moments. However, up to now atomic hfs experiments have been performed exclusively for atoms or ions with nuclei in the ground state. In this paper we point out that such experiments are also feasible for highly charged ions with rotationally excited even-$`A`$ nuclei.
The low-lying excited rotational state of the $`{}_{92}{}^{}{}_{}{}^{238}`$U nucleus with excitation energy $`E_{2^+}=44.91`$ keV plays an important role in recent accurate calculations of the Lamb shift for highly charged uranium ions. In particular, for these ions the contribution of the $`2^+`$ state dominates in calculations of nuclear polarization shifts . However, in such calculations this state enters as a virtual nuclear excitation. In the present work we consider the situation of a real excitation of the $`2^+`$ state due to the interaction of the incoming atomic uranium beam with a target in beam-foil experiments.
The empirical energy spectrum of a rotationally excited nucleus in the ground-state band fits well to the formula ($`\mathrm{}=c=1`$)
$$E_I=𝒜I(I+1).$$
(1)
In Eq. (1), $`I`$ denotes the total angular momentum of the rotating nucleus and $`𝒜`$ is the rotational constant. The latter can be related to the moment of inertia $`\mathcal{I}`$ of the nucleus according to $`𝒜=1/(2\mathcal{I})`$. The lowest rotational excitation in $`{}_{92}{}^{}{}_{}{}^{238}`$U is the electric quadrupole transition $`0^+\to 2^+`$ within the ground-state band with an excitation energy $`E_{2^+}=44.91`$ keV. Fitting this energy to Eq. (1) yields the rotational constant $`𝒜\approx 7.5`$ keV .
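Extracting $`𝒜`$ from the measured $`2^+`$ energy is one line of arithmetic; as a check:

```python
e_2plus_kev = 44.91
spin = 2
rot_constant = e_2plus_kev / (spin * (spin + 1))  # Eq. (1) with I = 2
print(f"A ~ {rot_constant:.2f} keV")  # ~7.5 keV
```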
A rotationally excited nucleus should have a magnetic moment $`𝝁=\mu ^{}𝒏+\mu _Ng_r(𝑰-\mathrm{\Omega }𝒏)`$ associated with its total angular momentum $`𝑰`$ . Here $`\mu ^{}𝒏`$ denotes the magnetic moment of the nonrotating nucleus, $`𝒏`$ represents the unit vector directed along the nuclear axis, and $`\mu _N`$ is the nuclear magneton. The ratio $`g_r=\mathcal{I}_p/\mathcal{I}`$ defines the gyromagnetic factor for the rotation of the nucleus, with $`\mathcal{I}_p`$ being the protonic part of the total moment of inertia $`\mathcal{I}`$. The projection $`\mathrm{\Omega }=(𝑰\cdot 𝒏)`$ characterizes the various rotational bands. After averaging over rotations the magnetic moment is directed along the conserved vector $`𝑰`$:
$$\widehat{𝝁}=\frac{\mu }{I}\widehat{𝑰}=\mu ^{}\overline{𝒏}+\mu _Ng_r(\widehat{𝑰}-\mathrm{\Omega }\overline{𝒏}).$$
(2)
Multiplying Eq. (2) by $`\widehat{𝑰}`$ and passing over to eigenvalues, we obtain
$$\mu =\mu ^{}\frac{\mathrm{\Omega }}{I+1}+\mu _Ng_r\left(I-\frac{\mathrm{\Omega }^2}{I+1}\right).$$
(3)
Thus, in highly charged ions with $`\mu ^{}=0`$ but rotationally excited nuclei, a hyperfine structure splitting of levels should arise.
In the point-nucleus approximation the hfs magnetic-dipole interaction operator $`\widehat{H}_{\mathrm{hfs}}`$ can be written in the form:
$$\widehat{H}_{\mathrm{hfs}}(𝒓)=e\widehat{𝝁}\cdot \frac{\left[𝜶\times 𝒓\right]}{r^3},$$
(4)
where the nuclear magnetic moment is defined by Eq. (2), $`e`$ is the electron charge $`(e<0)`$, and $`𝜶`$ and $`𝒓`$ are the Dirac matrices and the spatial coordinate for the atomic electron, respectively. For a spinless nucleus in the ground-state band $`(\mathrm{\Omega }=0)`$ the expression (3) yields $`\mu =\mu _Ng_rI`$.
The hfs correction to the energy levels of the hydrogenlike $`{}_{92}{}^{}{}_{}{}^{238}`$U ion is defined by the standard Landé expression $`\mathrm{\Delta }E_{\mathrm{hfs}}(F)=Ca/2`$, where the cosine factor is $`C=F(F+1)-I(I+1)-j(j+1)`$, $`j`$ is the total electron angular momentum, $`F`$ is the total angular momentum of the ion, and the hfs constant $`a`$ is determined by
$$a=g_r\frac{\alpha }{m_p}\frac{\kappa }{j(j+1)}\int _0^{\infty }\frac{dr}{r^2}P_{nlj}(r)Q_{nlj}(r).$$
(5)
Here $`\alpha =e^2`$ is the fine-structure constant, $`m_p`$ is the proton mass, and $`P_{nlj}(r)`$ and $`Q_{nlj}(r)`$ are the upper and lower radial components of the electron wave function characterized by the principal quantum number $`n`$ and the relativistic quantum number $`\kappa =(l-j)(2j+1)`$. Employing analytical results from Refs. , one finds
$$a=\alpha (\alpha Z)^3g_r\frac{m_e^2}{m_p}\frac{\kappa \left[2\kappa (\gamma +n_r)-N\right]}{j(j+1)N^4\gamma (4\gamma ^2-1)}(1-\delta _{nlj}),$$
(6)
where $`\gamma =\sqrt{\kappa ^2-(\alpha Z)^2}`$, $`N=\sqrt{n_r(2\gamma +n_r)+\kappa ^2}`$, $`m_e`$ is the electron mass, $`Z`$ is the number of protons, and $`n_r`$ is the radial quantum number ($`n=n_r+|\kappa |`$). The correction $`\delta _{nlj}`$ accounts for the finite nuclear charge distribution. The nuclear magnetization distribution correction as well as QED corrections are rather small and therefore negligible. For electron states with $`j=1/2`$, the hfs splitting $`\mathrm{\Delta }E_\mu `$ between the states with $`F=I+1/2`$ and $`F=I-1/2`$ is just $`\mathrm{\Delta }E_\mu =(I+1/2)a`$. Assuming homogeneous nuclear charge and mass distributions one obtains $`g_r=Z/A`$ for a nucleus with mass number $`A`$. Thus from atomic hfs experiments with rotationally excited nuclei one can deduce directly the deviation of the empirical $`g_r`$ factor from the $`Z/A`$ approximation. To our knowledge, $`g_r`$ factors for <sup>238</sup>U have been determined only for rotational states with spin $`I=6`$ and higher by means of measurements of the precession angles in transient magnetic fields by $`\gamma `$-ray–particle coincidences . These measurements cannot be performed in the case of the highly converted, low-lying $`2^+`$ and $`4^+`$ nuclear states.
For the $`1s_{1/2}`$ ground state of hydrogenlike $`{}_{92}{}^{}{}_{}{}^{238}`$U the splitting is indicated in Fig. 1. The value of $`E_{1s_{1/2}}^{(0)}`$ corresponds to the ground-state energy level of the uranium ion with the unexcited nucleus. The uncertainty of the value $`E_{1s_{1/2}}^{(0)}`$ is determined by the Lamb shift calculations. The present theoretical value for the Lamb shift is $`\mathrm{\Delta }E_{1s_{1/2}}^{(\mathrm{th})}=464.7\pm 1.0`$ eV , while the experimental value is $`\mathrm{\Delta }E_{1s_{1/2}}^{(\mathrm{exp})}=470\pm 16`$ eV . The evaluation of the hfs constant (6) for the ground state of $`{}_{92}{}^{}{}_{}{}^{238}`$U<sup>91+</sup> with $`g_r=Z/A`$ yields $`a=0.89`$ eV for a point nucleus and $`a=0.72`$ eV for an extended one. In the latter case, the finite-size correction $`\delta _{1s}=0.19`$ has been approximated according to results obtained in Ref. . Then one finds a hyperfine splitting $`\mathrm{\Delta }E_\mu =1.8`$ eV ($`\mathrm{\Delta }\lambda =0.69`$ $`\mu `$m), which is well resolvable within the accuracy of 1 eV envisaged for near-future level shift measurements in hydrogenlike uranium .
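Equation (6) can be evaluated directly for the $`1s_{1/2}`$ state ($`\kappa =-1`$, $`n_r=0`$) to check the numbers quoted above. The sketch below uses $`g_r=Z/A`$ and the finite-size correction $`\delta _{1s}=0.19`$ from the text:

```python
import math

alpha = 1 / 137.036
m_e, m_p = 0.511e6, 938.272e6       # eV
Z, A_mass, I = 92, 238, 2
g_r = Z / A_mass                    # homogeneous-distribution estimate

kappa, n_r, j = -1, 0, 0.5          # 1s_{1/2} quantum numbers
gamma = math.sqrt(kappa ** 2 - (alpha * Z) ** 2)
N = math.sqrt(n_r * (2 * gamma + n_r) + kappa ** 2)
a_point = (alpha * (alpha * Z) ** 3 * g_r * m_e ** 2 / m_p
           * kappa * (2 * kappa * (gamma + n_r) - N)
           / (j * (j + 1) * N ** 4 * gamma * (4 * gamma ** 2 - 1)))
a_extended = a_point * (1 - 0.19)   # finite-nuclear-size correction
print(f"a(point) = {a_point:.2f} eV, a(extended) = {a_extended:.2f} eV")
print(f"splitting (I + 1/2) a = {(I + 0.5) * a_extended:.1f} eV")
```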
The lifetime $`\tau _{2^+}`$ of the excited rotational state $`2^+`$ of the $`{}_{92}{}^{}{}_{}{}^{238}`$U nucleus can be obtained from the known empirical value for the reduced transition probability $`B(E2;0^+\to 2^+)=12.3`$ $`e^2b^2`$ , where b denotes barn. This leads to $`\tau _{2^+}\sim 10^2`$ ns. One should point out that the much smaller value $`\tau _{2^+}\sim 10^2`$ ps given in the literature corresponds to neutral uranium atoms and is due to the internal conversion process. The latter decay channel is absent in H- and He-like $`{}_{92}{}^{}{}_{}{}^{238}`$U ions (see also ). The time $`\tau _{2^+}`$ is large enough to consider the magnetic interaction between the electron in hydrogenlike uranium and the rotating nucleus as a stationary problem. The time of revolution $`\tau _{\mathrm{rot}}`$ associated with this nuclear excitation can be deduced from
$`E_{2^+}={\displaystyle \frac{1}{2}}\mathcal{I}\omega _{\mathrm{rot}}^2={\displaystyle \frac{1}{4𝒜}}\left({\displaystyle \frac{2\pi }{\tau _{\mathrm{rot}}}}\right)^2`$
yielding $`\tau _{\mathrm{rot}}\sim 10^{-4}`$ fs, which is negligibly small compared to $`\tau _{2^+}`$.
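Solving the relation above for $`\tau _{\mathrm{rot}}`$ gives $`\tau _{\mathrm{rot}}=\pi \hbar /\sqrt{𝒜E_{2^+}}`$; as a numerical check:

```python
import math

hbar_ev_s = 6.582e-16
A_ev = 7.5e3        # eV, rotational constant
E_ev = 44.91e3      # eV, 2+ excitation energy
tau_rot = math.pi * hbar_ev_s / math.sqrt(A_ev * E_ev)
print(f"tau_rot ~ {tau_rot / 1e-15:.1e} fs")  # ~1e-4 fs
```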
Let us now discuss the possibility of an experimental verification of this effect using the SIS/ESR facility at GSI in Darmstadt. We consider a beam of bare uranium ions with typical kinetic energy $`E_{\mathrm{kin}}\approx 320`$ MeV/u which corresponds to a velocity $`v\approx 0.67c`$. From the lifetime $`\tau _{2^+}\sim 10^2`$ ns it follows that the decay length of the $`2^+`$ state is larger than $`25`$ m behind the foil. The number of ions $`n_i`$ with rotationally excited nuclei that can be prepared per second is given by $`n_i=J\sigma 𝒩`$, where $`J`$ denotes the intensity of the ion beam, $`\sigma `$ is the cross section for exciting the nucleus inside the foil, and $`𝒩`$ is the number of foil atoms per unit area.
The Coulomb excitation cross section $`\sigma `$ for uranium nuclei in collisions with nuclei of the carbon foil can be estimated within the framework of the equivalent photon method as described in Ref. . Assuming that only the rotational $`2^+`$ state of <sup>238</sup>U with the energy $`E_{2^+}=44.91`$ keV is excited, the photonuclear absorption cross section may be approximated by $`\sigma _\gamma ^{E2}(\epsilon )\approx \frac{4\pi ^3}{75}\left(\frac{\epsilon }{\hbar c}\right)^3B(E2)\delta (\epsilon -E_{2^+})`$. This approximation is legitimized by the huge value of the reduced transition strength $`B(E2)`$ of the rotational $`2^+`$ state as the most dominant collective nuclear excitation of the <sup>238</sup>U isotope. Then the total cross section $`\sigma `$ results as $`\sigma =\int \frac{\mathrm{d}\epsilon }{\epsilon }n^{E2}(\epsilon )\sigma _\gamma ^{E2}(\epsilon )`$, where $`n^{E2}(\epsilon )`$ denotes the number of equivalent photons. The adiabaticity parameter involved in the problem is equal to $`\xi =\frac{b_{\mathrm{min}}E_{2^+}}{\hbar c\beta \gamma }`$, where $`\beta =v/c\approx 0.67`$, $`\gamma =(1-\beta ^2)^{-1/2}`$, and $`b_{\mathrm{min}}`$ is the minimum impact parameter taken as the sum of the two nuclear radii. Since $`\xi \approx 2.6\times 10^{-3}`$ is quite small, the number of equivalent photons can be approximated by $`n^{E2}(E_{2^+})\approx \frac{4\alpha Z_f^2}{\pi \gamma ^2\xi ^2\beta ^4}`$, where $`Z_f`$ is the nuclear charge number of the foil atoms. Finally, we obtain $`\sigma \approx 10.7`$ fm<sup>2</sup>. Taking the value $`𝒩\approx 0.5\times 10^{20}`$ $`\mathrm{cm}^{-2}`$ for a typical carbon foil density $`\rho =1`$ mg/$`\mathrm{cm}^2`$ together with a characteristic intensity $`J\approx 10^{10}`$ ions/s, one finds $`n_i\approx 0.5\times 10^5`$ ions/s. Only a fraction of about $`0.5\times 10^{-5}`$ of the primary ions are ions with excited nuclei in the rotational $`2^+`$ state. However, even if all electrons captured in the foil are supposed to decay to the $`1s_{1/2}`$ ground state by the emission of Lyman radiation, a direct measurement of the hfs splitting is at present prohibited by the low efficiency ($`10^{-8}`$) of the required high-resolution spectrometers.
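The excitation rate at the end of this paragraph is straightforward arithmetic on the quantities quoted above; as a sketch:

```python
beam_intensity = 1e10    # ions/s
sigma_cm2 = 10.7e-26     # 10.7 fm^2 converted to cm^2
foil_atoms_cm2 = 0.5e20  # atoms/cm^2 for a 1 mg/cm^2 carbon foil
n_excited = beam_intensity * sigma_cm2 * foil_atoms_cm2
print(f"excited ions per second ~ {n_excited:.1e}")                    # ~0.5e5
print(f"excited fraction of beam ~ {n_excited / beam_intensity:.1e}")  # ~0.5e-5
```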
Still there exists another possibility for the observation of the effect. It is based on the measurement of the hfq lifetime of the metastable $`1s2p`$ $`{}_{}{}^{3}P_{0}^{}`$ level in $`{}_{92}{}^{}{}_{}{}^{238}`$U<sup>90+</sup> ions. This effect was observed in Refs. for the isoelectronic sequence of heliumlike ions with non-zero nuclear spin. The level scheme of the first excited states in the $`{}_{92}{}^{}{}_{}{}^{238}`$U<sup>90+</sup> ion without taking into account the hfq decay channels is depicted in Fig. 2. The energy values and partial transition probabilities were calculated within the framework of the multiconfigurational Dirac-Fock method (MCDF) . The energies of the electron levels include the radiative (electron self-energy and vacuum polarization) and the exact one-photon exchange corrections. The E1M1 two-photon transition rate has been calculated by Drake and the 2E1 decay rate is taken from Ref. .
Hyperfine quenching of the metastable $`2^3P_0`$ state results from a mixing with the short-lived $`2^3P_1`$ state by the hyperfine interaction (4). The partial widths $`\mathrm{\Gamma }_J`$ ($`J=0,1`$) for $`2^3P_J`$ levels due to the radiative E1 transitions to the ground $`1^1S_0`$ state are related by
$$\mathrm{\Gamma }_0=\eta ^2\mathrm{\Gamma }_1,$$
(7)
where the mixing coefficient $`\eta `$ is defined by
$$\eta =\underset{i=1}{\overset{2}{\sum }}\frac{\langle 2^3P_0|\widehat{H}_{\mathrm{hfs}}(𝒓_i)|2^3P_1\rangle }{E_{2^3P_0}-E_{2^3P_1}}$$
(8)
and the rotationally-induced hyperfine interaction operators $`\widehat{H}_{\mathrm{hfs}}(𝒓_i)`$ are given by Eq. (4). The coefficient $`\eta `$ can be expressed directly through the $`g_r`$ factor. Performing the integrations over the angles, the matrix element in expression (8) reads
$`2^3P_0|{\displaystyle \underset{i=1}{\overset{2}{}}}\widehat{H}_{\mathrm{hfs}}(𝒓_i)|2^3P_1`$ $`=`$ $`g_r{\displaystyle \frac{2\alpha }{3m_p}}\sqrt{I(I+1)}{\displaystyle _0^{\mathrm{}}}{\displaystyle \frac{dr}{r^2}}\left[P_{1s}(r)Q_{1s}(r)+P_{2p_{1/2}}(r)Q_{2p_{1/2}}(r)\right]`$ (9)
$`=`$ $`g_r\alpha (\alpha Z)^3\sqrt{I(I+1)}{\displaystyle \frac{m_e^2}{m_p}}\left[{\displaystyle \frac{(2\gamma +2\sqrt{2\gamma +2})}{(2\gamma +2)^2\gamma (4\gamma ^21)}}(1\delta _{2p}){\displaystyle \frac{(1\delta _{1s})}{\gamma (2\gamma 1)}}\right],`$ (10)
where $`\gamma =\sqrt{1(\alpha Z)^2}`$.
For the $`{}_{92}{}^{}{}_{}{}^{238}`$U<sup>90+</sup> ion, the mixing coefficient (8) has been calculated for $`g_r=Z/A`$ in the framework of the MCDF approach to yield
$`\eta ={\displaystyle \frac{0.764\text{eV}}{E_{2^3P_0}E_{2^3P_1}}}=0.696\times 10^2.`$
It leads to the appearance of an additional contribution to the radiative width of the $`2^3P_0`$ level, that turns out to be $`0.147\times 10^{13}`$ s<sup>-1</sup>. As a result, the lifetime of the $`2^3P_0`$ level is diminished from 56 ps to 0.67 ps, which corresponds to a decay length of about $`0.18`$ mm in the laboratory. We should emphasize that there will be no background from the ions with unexcited nuclei, since in those ions the one-photon transition $`2^3P_01^1S_0`$ with the energy $`\omega _096.271`$ keV is absolutely forbidden. The lifetime of 56 ps for ions in the $`2^3P_0`$ state arises from the transition $`2^3P_02^3S_1`$ with the energy $`\omega _1256.21`$ eV, which is far away from $`\omega _0`$ ($`70\%`$ of the total width) as well as from the two-photon transition $`2^3P_01^1S_0`$ ($`30\%`$ of the total width). Both of these transitions cannot give any background contribution in the proposed experiment. The transition $`2^3P_11^1S_0`$ with the energy of about $`96.162`$ keV does also not contribute to the background, since the level $`2^3P_1`$ has a lifetime of 0.033 fs and hence decays already inside of the foil. In order to obtain clean spectra without loss of efficiency, a measurement of coincidences between heliumlike ions and photons should be performed. Thus, the observation of the predicted effect becomes feasible utilizing the beam-foil time-of-flight technique. In view of Eq. (7), the accuracy of the determination of the $`g_r`$ factor is even two times better than the accuracy of the measured hfq lifetime, which in this region can be expected to be at the level of about $`1`$%.
###### Acknowledgements.
The authors are indebted to I.M. Band, V.I. Isakov, and K. H. Speidel for helpful discussions. L. L. and A. N. are grateful to the Technische Universität Dresden and the Max-Planck-Institut für Physik komplexer Systeme (MPI) for the hospitality and for financial support from the MPI, DFG, and the RFFI (grant no. 99-02-18526). G. S. and G. P. acknowledge financial support from BMBF, DAAD, DFG, and GSI.
|
no-problem/9906/patt-sol9906010.html
|
ar5iv
|
text
|
# Dispersion relations to oscillatory reaction-diffusion systems with the self-consistent flow
## I Introduction
In biological systems of cell population, taxis induces cooperative movement of cells or the flow of cellular mass. Cells behave according to chemical and physiological signals given by themselves. Two well-known pattern formations for population dynamics of cells are the aggregation of cellular slime molds and the branching growth of bacterial colonies. Slime mould cells secrete signaling chemicals responding to extracellular chemicals, and move towards increasing some chemical concentration. They aggregate to centers of the wave pattern of chemicals with dendritic streaming lines and eventually form slugs. Morphology of bacterial colonies depends on agar concentration and nutrient level. Bacteria are swimming toward high concentration of nutrients, and form various colonial patterns as concentric rings and dendritic branchings. Mechanisms of these pattern formations have been modeled by taxis equations or reaction-diffusion systems with the cellular flow.
For one of protozoan myxomycete, the Physarum plasmodium, which is a unicellular organism, a reaction-diffusion-flow model is presented in relation to control mechanism of amoeboid movements. The flow of endoplasm and chemicals induced by protoplasmic streaming in Physarum plays an important role in individual behavior under a large plasmodium state. It is not difficult to look for reaction, diffusion and flow couplings in various biological process, more and more. The self-consistent flow as above is realized by biological active process, and is a different character from physico-chemical ones. Effects of the self-consistent flow on simple reaction-diffusion systems are to be clarified in order to study a mechanism of functional self-organization of multicellular systems.
In the present article, we discuss basic effects of the flow on oscillatory phenomena in a reaction-diffusion system with the self-consistent flow. Although we dealt with a model for the Physarum plasmodium, it is expected that the results obtained here are applicable to other biological systems with the flow. We carry out numerical calculation of the dispersion relation in the system, and show effects of the flow on traveling plane waves. Then we derive the dynamics of phase waves from reaction-diffusion-advection equations in general form by means of limit cycle perturbations. In the coefficient of the gradients of the phase, the advection terms compete with diffusion terms.
## II Model equations for the plasmodium
The plasmodium of Physarum has a cytoplasmic cortex (ectoplasm) filled with endoplasm. The ectoplasm shows contractile oscillation everywhere within the organism , and the contraction causes intracellular streaming of the endoplasm . The contraction-relaxation behavior is regulated by metabolic cycles of chemical oscillators . Metabolic chemicals transported by endoplasmic streaming, affect chemical oscillators .
A model of contractile and motile dynamics of the Physarum plasmodium is represented by conservation of the cytoplasmic mass and reaction-diffusion system of metabolic elements with endoplasmic streaming. Under some assumptions, oscillatory dynamics in the plasmodium is governed by the following reaction-diffusion equations with advection terms:
$$\frac{𝐮}{t}+M\stackrel{}{}𝐮\stackrel{}{}𝐮=𝐟(𝐮)+D\stackrel{}{}^2𝐮,$$
(1)
where $`𝐮`$ is a $`N`$-component vector of reaction species which are separated into two types of metabolic elements, free chemicals and non-free ones. While the former is transported by the protoplasmic streaming, the latter is bound or stored at some cytoplasmic structure. Reaction kinetics $`𝐟`$ denotes the metabolic oscillation which emerges from the coupling of two types of reaction elements mentioned above, and the system is assumed to have a limit cycle orbit. The quantity $`D`$ is a diagonal matrix of diffusion constants. A tensor $`M`$ represents advection coefficients, here $`M\stackrel{}{}𝐮`$ is a flow vector for each reaction component. This system has a generic form of reaction-diffusion equations with the self-consistent flow, and we derive the phase dynamics from eq. 1 in section IV.
In our numerical calculations, we use a more simple two-variable model:
$`{\displaystyle \frac{u}{t}}+\stackrel{}{w}\stackrel{}{}=f(u,v)+D\stackrel{}{}^2u,`$ (2)
$`{\displaystyle \frac{v}{t}}=g(u,v),`$ (3)
where $`u`$ and $`v`$ are the concentrations of a free chemical substance and a bound/stored one, respectively. We adopt the Schnackenberg’s tri-molecular two species model for reaction kinetics of chemicals,
$$f(u,v)=au+u^2v,g(u,v)=bu^2v,$$
(4)
here $`a`$ and $`b`$ are positive constants. In the spatially homogeneous conditions ($`D=0`$ and $`\stackrel{}{w}=\stackrel{}{0}`$), the system has a stable limit cycle as shown in Fig. 1 for $`ba>(a+b)^3`$. The quantity $`D`$ is the diffusion constant of the free chemical $`u`$. The velocity of the endoplasmic flow $`\stackrel{}{w}`$ is determined by the concentration of the metabolic chemical as
$$\stackrel{}{w}=q\stackrel{}{}u,$$
(5)
and thus the flow is self-consistent. Here $`q`$ depends on the permeability and the mechanism of intracellular pressure. In the following calculation, $`q`$ is assumed to be constant.
## III Dispersion relation
The numerical calculations were carried out for eq. 3 on a ring, that is a one-dimensional region with periodic boundary conditions. Using the limit cycle, we initiate a pulse traveling on the ring, and solve eq. 3 until the solution becomes periodic in time. After we measured the rotating period of the traveling pulse on the ring, we repeat the calculation for rings of different lengths. As above, the dispersion relation has been obtained for periodic wave trains with the stable propagation.
In the measurements, parameters have been set as $`a=0.1`$, $`b=0.5`$, and $`D=1.0`$. We have used the explicit Euler’s method for reaction terms, upwind differencing method for advection terms, and the implicit method for diffusion terms. The data have been recorded for waves in stationary propagation.
Dispersion curves in the reaction-diffusion-advection system 3 are shown in Fig. 2 (wave number vs frequency) and Fig. 3 (period vs velocity) for various values of the advection constant $`q`$. Variations in the self-consistent flow make the propagation of plane waves quite changed. The inflection point which separates two branches corresponding to trigger waves and phase waves , moves according to the flow. It is the point that the variation of propagating behavior depends on the wave length. The other point is that the sign of $`q`$ gives different effects on waves in propagation features.
For the positive value of $`q`$, the oscillation frequency $`\omega `$ of phase waves gets slightly greater, while that of trigger waves becomes smaller. Hence, phase waves travel a longer way for the greater value of $`q`$ even if they have the same frequency. Such a case of positive $`q`$ corresponds to the phenomena of phase waves in the plasmodium, when we regard the reaction kinetics of eq. 4 as the oscillation of some metabolic element like Ca<sup>2+</sup>. The flow make the plasmodium communicate local information to the wide area in the cell with phase waves.
When the advection constant $`q`$ is negative, the opposite situation to the case of positive $`q`$ arises. Furthermore, we find out the steep rise of the frequency in Fig. 2. Such steepness denotes the sharp transition from phase waves to trigger waves with increasing dimensions of the propagating media.
## IV Phase dynamics
In this section, we show that the effect of the flow comes out through the coefficient of the nonlinear term in phase dynamics for reaction-diffusion-advection systems. By means of limit cycle perturbations, the dynamics of phase waves in ordinary reaction-diffusion systems for oscillatory media are described by the Burgers equation. We adopt the similar method to oscillatory reaction-diffusion equations with advection terms introduced in section II.
We assume that the limit cycle of eq. 1 has the frequency $`\omega _0`$. Then a solution of homogeneous oscillation is
$`𝐮=𝐮_0(\tau ),\tau =\omega _0t,\text{where}`$
$`\omega _0𝐮_0^{}=𝐟(𝐮_0),\text{and}𝐮_0(\tau +2\pi )=𝐮_0(\tau ).`$
Since the system 1 is invariant under the time translation, $`𝐮_0(\tau +\psi )`$ ($`\psi `$ is an arbitrary constant) is also a solution of eq. 1.
Let us consider the dynamics of phase waves in eq. 1 when the advection and diffusion terms are small. We introduce multiple scales and asymptotic expansions,
$`\stackrel{}{R}=\sqrt{ϵ}\stackrel{}{r},\tau =\omega _0t,T=ϵt,`$ (6)
$`𝐮=𝐮_0(\tau +\psi )+ϵ𝐮+\mathrm{},`$ (7)
where $`ϵ`$ is a small parameter and $`\psi =\psi (\stackrel{}{R},T)`$. Substitution of eq. 7 into eq. 1 yields perturbation equations for each order in $`ϵ`$:
$`\omega _0{\displaystyle \frac{𝐮_0}{\tau }}=𝐟(𝐮_0),`$ (8)
$`L𝐮_j=𝐛_j,L=\omega _0{\displaystyle \frac{}{\tau }}{\displaystyle \frac{𝐟}{𝐮}}(𝐮_0),`$ (9)
here $`j=1,2,\mathrm{}`$, and $`𝐛_j`$ denotes the inhomogeneous term of the $`j`$th order equation. For the first order equation in eq. 9, the inhomogeneous term is
$`𝐛_1`$ $`=`$ $`{\displaystyle \frac{𝐮_0}{T}}M\stackrel{}{}_R𝐮_0\stackrel{}{}_R𝐮_0+D\stackrel{}{}_R^2𝐮_0`$
$`=`$ $`𝐮_0^{}{\displaystyle \frac{\psi }{T}}M𝐮_0^{}𝐮_0^{}|\stackrel{}{}_R\psi |^2`$
$`+D𝐮_0^{\prime \prime }|\stackrel{}{}_R\psi |^2+D𝐮_0^{}\stackrel{}{}_R^2\psi ,`$
where $`\stackrel{}{}_R`$ is the nabla operator in respect to scaled coordinates $`\stackrel{}{R}`$. Thus the solvability condition for $`𝐮_1`$ gives the dynamics of phase waves:
$$\frac{\psi }{T}=c_1\stackrel{}{}_R^2\psi +c_2|\stackrel{}{}_R\psi |^2.$$
(10)
If we use a new dependent variable $`\stackrel{}{V}\stackrel{}{}_R\psi `$, then it satisfies the Burgers equation. The coefficients $`c_1`$ and $`c_2`$ are obtained from the relations,
$`c_j=𝐯^{},𝐯_j/𝐯^{},𝐮_0,`$
$`𝐯_1=D𝐮_0^{},𝐯_2=D𝐮_0^{\prime \prime }M𝐮_0^{}𝐮_0^{},`$
here $`𝐯^{},𝐯{\displaystyle _0^{2\pi }}d\psi (𝐯^{},𝐯)`$ and $`𝐯^{}`$ is the nontrivial periodic solution to the adjoint differential equation of $`L𝐯=\mathrm{𝟎}`$. Equation 10 describes slow and slight modulation of the homogeneous oscillation with the frequency $`\omega _0`$ by the phase $`\psi `$. We note that the coefficient of the nonlinear term, $`c_2`$, show competition between diffusion and advection.
When we use the quantity $`\varphi =\omega _0t+\psi `$, eq. 10 becomes
$$\frac{\varphi }{t}=\omega _0+c_1\stackrel{}{}^2\varphi +c_2|\stackrel{}{}\varphi |^2.$$
(11)
The dispersion relation is thus estimated from the phase equation 11 through the wave characteristics $`\omega =\varphi /t`$ and $`\stackrel{}{k}=\stackrel{}{}\varphi `$ as
$$\omega =\omega _0+c_2k^2+\mathrm{},k=|\stackrel{}{k}|.$$
(12)
Since the scaling of coordinates in the perturbation expansions 7 means spatially slight modulation, $`k=O(\sqrt{ϵ})`$, eq. 12 is the Taylor expansion for the dispersion curve, $`\omega =\omega (k)`$ in the vicinity of $`k=0`$. Thus, the coefficient of nonlinear term in eq. 11 is $`c_2=\omega ^{\prime \prime }(0)/2`$. Here eq. 12 has no linear term in $`k`$ because of the reflectional symmetry in space of eq. 1. As mentioned above, $`c_2`$ depends on advection constants as well as diffusion constants, and hence $`c_2`$ can take the value of a wide range.
We point out that the dispersion relation 12 is only applicable to some of periodic waves with stationary traveling. It is not adopted to waves with non-uniform phase gradients . In such a case, we need to use the phase equation 11, or analyze eq. 1 directly.
## V Discussion
By means of numerical calculations for eq. 3, it has been shown that the flow make remarkable differences in the dispersion curves or propagation of phase waves. To elucidate such differences induced by the flow, we consider the spatial scale. The two-component system 3 has the diffusion term only in the free element. Thus, without advection term, the variation of the diffusion constant is canceled by the scaling of spatial coordinates. In contrast, such a cancellation is impossible with the advection term because the advection coefficient is independent of the diffusion constant.
The deformation of the dispersion curves with the flow term has been shown for the branch of phase waves by the phase equation derived from eq. 1. Furthermore, we have found out another phenomenon of phase wave mentioned below.
Advection coefficients govern the coefficient $`c_2`$ of the quadratic term in the wave number $`k`$ to the dispersion relation 12. The coefficient $`c_2`$ is given by the addition of two parts which stem from the diffusion and advection terms, respectively. Thus, the competition between the diffusion and advection constants governs the value of $`c_2`$. The variation of $`c_2`$ gives a quadratic change in the frequency of long waves. This result elucidates the branch of phase waves ($`k0`$) for the dispersion curves obtained numerically in section III.
Varying the advection constants, we can take $`c_2`$ negative. Such a situation seems to be impossible for simple reaction-diffusion equations without the advection terms. The dispersion curve is convex for $`c_2<0`$, and hence $`\omega =0`$ at some wave number. This means that the phase wave is frozen and stationary. It sounds strange, and we need to study these waves in detail. One of such a system is the $`\lambda `$-$`\omega `$ system with advection terms of phase gradients.
When the coefficient $`c_2`$ becomes sufficiently small, it may be possible to derive other types of phase equations by introducing different scaling of independent variables from ones used in the present analysis. Furthermore we need to study waves with relaxation oscillations to know the characteristics of trigger waves with the flow, because the phase equation is satisfied only in the region of the phase-wave branch.
## Acknowledgments
This study was supported by The Sumitomo Foundation (Grant No. 970628), and The Institute of Physical and Chemical Research (RIKEN) (T. N.).
|
no-problem/9906/hep-th9906002.html
|
ar5iv
|
text
|
# Spatial Compactification and Decay-Rate Behaviour
## Abstract
The transition from instanton-dominated quantum tunneling regime to sphaleron-dominated classical crossover regime is explored in (1+1)-dimensional scalar field theory when spatial coordinate is compactified. It is shown that the type of sphaleron transition is critically dependent on the circumference of the spatial coordinate.
preprint: KNTP-99-03
Recently, much attention is paid to the winding number transition from instanton-dominated quantum tunneling regime to the sphaleron-dominated classical crossover regime in $`SU(2)`$-Higgs model, which is believed to describe electroweak phase transition in early universe. The active research in this field is mainly for the hope to understand baryon number violating process, which is very important consequence of electroweak chiral anomaly.
Since, unfortunately, the sphaleron transition in $`SU(2)`$-Higgs model or real electroweak theories is too complicated to treat and it needs lot a numerical calculation, it is very hard to understand the real mechanism of the electroweak phase transition by investigating these models directly. Hence a decade ago Mottola and Wipf(MW) adopted a non-linear $`O(3)`$ model with a soft symmetry breaking term as a toy model for the study of baryon number violating process. This model has an advantage that analytical expression of the sphaleron solution can be derived by paralleling Manton’s original argument. Recently, the sphaleron transition in this model with and without Skyrme term is examined.
Comparing, however, the result of Ref. with that of Ref., one can obviously conclude that MW model in itself cannot play a key role of toy model for electroweak theory when $`M_H>6.665M_W`$, where $`M_H`$ and $`M_W`$ are masses of Higgs and $`W`$ particles, respectively. In this region $`SU(2)`$-Higgs theory exhibits a smooth second-order sphaleron transition, while MW model exhibits a first-order sphaleron transition in the full range of its parameter space. Hence it may be helpful in understanding the real nature of the electroweak phase transition if one can find a simple toy model which exhibits both first-order and second-order sphaleron transitions. We argue in the present letter that this can be achieved by giving a nontrivial topology to the spatial coordinate.
If we impose a compactified spatial coordinate $`x`$, it naturally generates a periodic boundary condition $`\varphi (x=0)=\varphi (x=L)`$, where $`\varphi `$ is arbitrary scalar field and $`L`$ is a circumference of a compactified spatial coordinate. On the other hand, the decay transition of a metastable state at finite temperature is governed by classical configuration which satisfies another periodic boundary condition at temporal coordinate: $`\varphi (\tau =\tau _0)=\varphi (\tau =\tau _0+1/T)`$, where $`\tau `$ and $`T`$ are Euclidean time and temperature. This means we have two distinct periodic boundary conditions in this case, which makes the mechanism of the sphaleron transition to be very complicate. In this letter we will explore this issue by introducing a simple (1+1)-dimensional scalar field model and show the type of sphaleron transition is dependent on the circumference of the spatial coordinate.
Now, let us start with Euclidean action
$$S_E=𝑑\tau 𝑑x\left[\frac{1}{2}\left(\frac{\varphi }{\tau }\right)^2+\frac{1}{2}\left(\frac{\varphi }{x}\right)^2+U(\varphi )\right],$$
(1)
where $`U(\varphi )`$ is usual inverted double well potential
$$U(\varphi )=\frac{\mu ^2}{2a^2}(\varphi ^2a^2)^2+\frac{\mu ^2}{2a^2}a^4.$$
(2)
It is very easy to show that sphaleron transition for the model (1) with usual non-compactified spatial coordinate is smooth second order if non-linear perturbation or number of negative modes approach are employed. Both approaches yield an identical sufficient condition for the first-order sphaleron transition and are very useful for the discussion of the effect of the arbitrary wall thickness in the bubble nucleation. Fig. 1 describes action-vs-temperature diagram in this simple model, which shows the type of the sphaleron transition to be second order.
Now, let us consider action (1) with a compactified spatial coordinate. In this case as mentioned before sphaleron solution $`\varphi _s(x)`$ must satisfy a periodic boundary condition $`\varphi _s(x)=\varphi _s(x+L)`$. The explicit expression of $`\varphi _s(x)`$ is
$$\varphi _s(x)=\frac{a}{\mu }\beta (k)\text{dn}[\beta (k)x,\kappa ],$$
(3)
where $`k`$ is modulus of elliptic function and
$`\kappa `$ $`=`$ $`{\displaystyle \frac{2\sqrt{k}}{1+k}},`$ (4)
$`\beta (k)`$ $`=`$ $`\mu {\displaystyle \frac{1+k}{\sqrt{1+k^2}}}.`$ (5)
Since Jacobian Elliptic function $`\text{dn}[y,\kappa ]`$ has period $`2K(\kappa )`$, where $`K`$ is complete elliptic function, the circumference $`L`$ is defined
$$L_n=\frac{2n}{\beta (k)}K(\kappa ),$$
(6)
where $`n`$ is some integer. Using $`\varphi _s(x)`$ the classical action for sphaleron solution is straightforwardly computed:
$`{\displaystyle \frac{S_n}{P}}`$ $``$ $`{\displaystyle _{L_n/2}^{L_n/2}}𝑑x\left[{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\varphi _s}{x}}\right)+U(\varphi _s)\right]=n{\displaystyle \frac{S_1}{P}},`$ (7)
$`{\displaystyle \frac{S_1}{P}}`$ $`=`$ $`{\displaystyle \frac{a^2\mu ^2}{3\beta (k)}}{\displaystyle \frac{(1+k)^2}{1+k^2}}\left[4E(\kappa ){\displaystyle \frac{(1k)^2}{1+k^2}}K(\kappa )\right],`$ (8)
where $`P`$ is period of sphaleron solution, i.e., $`1/T`$ and $`E`$ is another complete elliptic function. Since $`S/P`$ is interpreted as a barrier height of energy, the barrier height with $`L=L_n`$ is n-times higher than that with $`L=L_1`$, and hence decay-rate is negligible for large $`n`$. In this letter, therefore, we will confine ourselves to only $`L=L_1`$ case.
Now we apply the result of non-linear perturbation presented in Ref. in this model. For this we expand $`\varphi (x,\tau )`$ around sphaleron $`\varphi _s(x)`$;
$$\varphi (x,\tau )=\varphi _s(x)+\eta (x,\tau ),$$
(9)
where $`\eta (x,\tau )`$ is small fluctuation field. Inserting it into the equation of motion
$$\frac{^2\varphi }{\tau ^2}+\frac{^2\varphi }{x^2}=U^{}(\varphi ),$$
(10)
one can get
$$\widehat{l}\eta =\widehat{h}\eta +G_2[\eta ]+G_3[\eta ],$$
(11)
where
$`\widehat{l}`$ $`=`$ $`{\displaystyle \frac{^2}{\tau ^2}},`$ (12)
$`\widehat{h}`$ $`=`$ $`{\displaystyle \frac{^2}{x^2}}+U^{\prime \prime }(\varphi _s),`$ (13)
$`G_2[\eta ]`$ $`=`$ $`{\displaystyle \frac{1}{2}}U^{\prime \prime \prime }(\varphi _s)\eta ^2,`$ (14)
$`G_3[\eta ]`$ $`=`$ $`{\displaystyle \frac{1}{6}}U^{\prime \prime \prime \prime }(\varphi _s)\eta ^3.`$ (15)
It is well-known that the eigenvalue equation of $`\widehat{h}`$ is standard Lamé equation:
$$\frac{d^2\psi }{dz^2}+\left[\lambda N(N+1)\kappa ^2\text{sn}^2[z,\kappa ]\right]\psi =0.$$
(16)
Although the solutions of Lamé equation with period $`4K(\kappa )`$ and $`2K(\kappa )`$ are well-known, solutions with other periods have not been known yet. Since $`\varphi _s(x)`$ has period $`2K(\kappa )/\beta (k)`$, the physically meaningful solutions of Lamé equation in this model are those whose periods are $`2K(\kappa )/\beta (k)n`$, where $`n=1,2,3,\mathrm{}`$. The eigenfunctions with period $`2K(\kappa )/\beta (k)`$ and their corresponding eigenvalues are summarized at Table I.
The $`\kappa `$-dependence of eigenvalues $`h_0`$, $`h_1`$, and $`h_2`$ are shown at Fig. 2. As shown in Fig. 2 $`u_0(x)`$, $`u_1(x)`$, and $`u_2(x)`$ given at Table I are the lowest three eigenstates of $`\widehat{h}`$. Although the existence of higher states is obvious, it is impossible to derive the eigenfunctions and their eigenvalues analytically until now. However, the knowledge of the lowest three eigenstates is sufficient for the discussion of the effect of compactified spatial coordinate in the sphaleron transition. Note that $`h_2`$ is very close to zero compared to $`h_0`$ in the small $`\kappa `$ region. We will show in the following that this effect guarantees the different types of sphaleron transition in the small $`\kappa `$ region from that in large $`\kappa `$ region.
The nomalization constants $`C_0`$, $`C_1`$, and $`C_2`$ defined at Table I are easily derived by direct calculation. Since $`C_1`$ is not needed for further discussion, we give only the explicit form of $`C_0`$ and $`C_2`$:
$`C_0^2`$ $`=`$ $`{\displaystyle \frac{3}{4}}{\displaystyle \frac{\beta (\kappa )\kappa ^4}{\frac{1}{3}K(\kappa )\left[(2\kappa ^2)\sqrt{1\kappa ^2\kappa ^2}+1\kappa ^2+\kappa ^4\right]+E(\kappa )\sqrt{1\kappa ^2\kappa ^2}}},`$ (17)
$`C_2^2`$ $`=`$ $`{\displaystyle \frac{3}{4}}{\displaystyle \frac{\beta (\kappa )\kappa ^4}{\frac{1}{3}K(\kappa )\left[(2\kappa ^2)\sqrt{1\kappa ^2\kappa ^2}+1\kappa ^2+\kappa ^4\right]E(\kappa )\sqrt{1\kappa ^2\kappa ^2}}},`$ (18)
where $`\kappa ^21\kappa ^2`$. Here, $`\beta (\kappa )`$ is $`\kappa `$-dependence of $`\beta (k)`$ whose explicit form can be obtained by using Eq.(4):
$$\beta (\kappa )=\mu \sqrt{\frac{2}{2\kappa ^2}}\frac{1\sqrt{1\kappa ^2}}{\sqrt{(2\kappa ^2)2\sqrt{1\kappa ^2}}}.$$
(19)
Ref. has shown that the perturbation near sphaleron solution yields a sufficient condition for the sharp sphaleron transition. The explicit form of this sufficient criterion is
$$\frac{1}{b^2}\left[l(\omega )l(\omega _s)\right]<u_0f[u_0]><0.$$
(20)
where $`b`$ is a small parameter, which is associated with small amplitude of periodic solution whose center is $`\varphi _s`$ at quantum mechanical model, and $`\omega _s`$ is sphaleron frequency
$$\omega _s\sqrt{h_0}=\sqrt{2}\mu \left[1+2\frac{\sqrt{1\kappa ^2\kappa ^2}}{2\kappa ^2}\right]^{1/2}.$$
(21)
$`f[u_0]`$ in Eq.(20) is defined as follows:
$$f[u_0]=\frac{1}{2}\frac{G_2}{\eta }|_{\eta =u_0}\left[\widehat{h}^1+\frac{1}{2}[\widehat{h}l(2\omega _s)]^1\right]G_2[u_0]+\frac{3}{4}G_3[u_0]$$
(22)
where $`l(\omega )\omega ^2`$.
If one uses a completeness condition for $`\widehat{h}`$, it is easy to show that the condition (20) reduces to
$$I_0(\kappa )+I_2(\kappa )+I_4(\kappa )+J(\kappa )<0,$$
(23)
where
$`I_0(\kappa )`$ $`=`$ $`\left[{\displaystyle \frac{1}{h_0}}+{\displaystyle \frac{1}{2}}{\displaystyle \frac{1}{h_0l(2\omega _s)}}\right]<u_0G_2[u_0]>^2,`$ (24)
$`I_2(\kappa )`$ $`=`$ $`\left[{\displaystyle \frac{1}{h_2}}+{\displaystyle \frac{1}{2}}{\displaystyle \frac{1}{h_2l(2\omega _s)}}\right]<u_2G_2[u_0]>^2,`$ (25)
$`I_4(\kappa )`$ $`=`$ $`{\displaystyle \underset{n=4}{\overset{\mathrm{}}{}}}\left[{\displaystyle \frac{1}{h_n}}+{\displaystyle \frac{1}{2}}{\displaystyle \frac{1}{h_nl(2\omega _s)}}\right]<u_nG_2[u_0]>^2,`$ (26)
$`J(\kappa )`$ $`=`$ $`{\displaystyle \frac{3}{4}}<u_0G_3[u_0]>.`$ (27)
It is worthwhile noting that the only positive one in Eq.(23) is $`I_0(\kappa )`$. Now, it is clear why small $`h_2`$ in small $`\kappa `$ region changes the type of sphaleron transition to be sharp first-order. Since $`1/h_2`$ is involved in $`I_2(\kappa )`$ and it becomes large value in small $`\kappa `$ region, the dominant contribution of the left-hand side of Eq. (23) can be $`I_2(\kappa )`$, and hence first-order transition may take place.
Now let us compute $`I_0(\kappa )`$, $`I_2(\kappa )`$, and $`J(\kappa )`$ explicitly. Using a recurrence relation of $`G_n𝑑u\text{sn}^n[u,\kappa ]`$
$$G_{2m+2}=\frac{\text{sn}^{2m1}[u,\kappa ]\text{cn}[u,\kappa ]\text{dn}[u,\kappa ]+2m(1+\kappa ^2)G_{2m}+(12m)G_{2m2}}{(2m+1)\kappa ^2},$$
(28)
it is straightforward to show
$$J(\kappa )=\frac{3\mu ^2C_0^4}{a^2\beta (\kappa )}\left[A^4K(\kappa )4A^3j_2(\kappa )+6A^2j_4(\kappa )4Aj_6(\kappa )+j_8(\kappa )\right],$$
(29)
where
$`A`$ $`=`$ $`{\displaystyle \frac{1+\kappa ^2+\sqrt{1\kappa ^2\kappa ^2}}{3\kappa ^2}},`$ (30)
$`j_2(\kappa )`$ $`=`$ $`{\displaystyle \frac{1}{\kappa ^2}}\left[K(\kappa )E(\kappa )\right],`$ (31)
$`j_4(\kappa )`$ $`=`$ $`{\displaystyle \frac{1}{3\kappa ^4}}\left[(2+\kappa ^2)K(\kappa )2(1+\kappa ^2)E(\kappa )\right],`$ (32)
$`j_6(\kappa )`$ $`=`$ $`{\displaystyle \frac{1}{15\kappa ^6}}\left[(8+3\kappa ^2+4\kappa ^4)K(\kappa )(8+7\kappa ^2+8\kappa ^4)E(\kappa )\right],`$ (33)
$`j_8(\kappa )`$ $`=`$ $`{\displaystyle \frac{1}{105\kappa ^8}}\left[(48+16\kappa ^2+17\kappa ^4+24\kappa ^6)K(\kappa )(48+40\kappa ^2+40\kappa ^4+48\kappa ^6)E(\kappa )\right].`$ (34)
The $`\kappa `$-dependence of $`J(\kappa )`$ is shown in Fig. 3. Fig. 3 shows that $`J(\kappa )`$ in Eq. (29) correctly recovers the $`\kappa =1`$ limit of $`J(\kappa )`$, $`108\mu ^3/70\sqrt{2}a^2`$, which can be obtained easily by calculating same quantity in the same model with non-compactified spatial coordinate.
Now, let us compute $`I_0(\kappa )`$. Using a recurrence relation of $`D_n𝑑u\text{dn}^n[u,\kappa ]`$
$$D_{2m+3}=\frac{\kappa ^2\text{dn}^{2m}[u,\kappa ]\text{sn}[u,\kappa ]\text{cn}[u,\kappa ]2m\kappa ^2D_{2m1}+(2m+1)(2\kappa ^2)D_{2m+1}}{2(m+1)},$$
(35)
and identity $`\kappa ^2\text{sn}^2[u,\kappa ]+\text{dn}^2[u,\kappa ]=1`$, one can show straightforwardly
$$I_0(\kappa )=\frac{15\pi ^2\mu ^2C_0^6}{128a^2h_0}\left[518A+24A^216A^3\right]^2.$$
(36)
Once again one can see the correct $`\kappa =1`$ limit of $`I_0(\kappa )`$, $`3375\pi ^2\mu ^3/4096\sqrt{2}a^2`$ in Fig. 3.
Finally, direct computation of $`I_2(\kappa )`$ yields
$`I_2(\kappa )=`$ $``$ $`{\displaystyle \frac{3h_28h_0}{2h_2(h_24h_0)}}{\displaystyle \frac{9\pi ^2\mu ^2}{64a^2}}C_0^4C_2^2`$ (37)
$`\times `$ $`\left[56(2A+B)+8(A^2+2AB)16A^2B\right]^2,`$ (38)
where
$$B=\frac{1+\kappa ^2\sqrt{1\kappa ^2\kappa ^2}}{3\kappa ^2}.$$
(39)
The fact $`I_2(\kappa =1)=0`$ as shown in Fig. 3 means that there is no correspondent discrete mode at $`\kappa =1`$. In fact, if one derives $`\widehat{h}`$ in the same model with non-compactified space, it is easy to show that $`\widehat{h}`$ becomes usual Pöschl-Teller type operator which has only two discrete modes.
Although it is impossible to calculate $`I_4(\kappa )`$ analytically, we know that it is small negative value, which results in the following inequality
$$\frac{1}{b^2}[l(\omega )l(\omega _s)]<I_0(\kappa )+I_2(\kappa )+J(\kappa ).$$
(40)
Fig. 4 shows $`\kappa `$-dependence of $`\frac{a^2\beta (\kappa )}{\mu ^2C_0^4}\left(I_0(\kappa )+I_2(\kappa )+J(\kappa )\right)`$. Eq.(40) and Fig. 4 guarantee the existence of $`\kappa _c>\kappa ^{}=0.820621`$, which distinguishes the types of transition. As expected this simple model exhibits a first-order sphaleron transition in small $`\kappa `$ region and a second-order sphaleron transition in large $`\kappa `$ region which can be expected from the result of $`\kappa =1`$ case. Fig. 5 shows the numerical result of actions for the vacuum bounce and sphaleron solutions in the compactified model. It is worthwhile noting that the difference between two action values becomes smaller and smaller when the circumference of the compactified spatial coordinate decreases. This means the possibility for the occurrence of sharp transition is enhanced in small $`\kappa `$ region, which is consistent with our main result.
ACKNOWLEDGMENT
This work was partially supported by the Kyungnam University Research Fund in 1999.
|
no-problem/9906/gr-qc9906037.html
|
ar5iv
|
text
|
# Strongly gravitating empty spaces
## 1 Introduction
The choice of mathematical model for spacetime has important physical significance. B. Riemann suggested that the geometry of space may be more than just a mathematical tool defining a stage for physical phenomena, and may in fact have profound physical meaning in its own right . With the advent of general relativity theoreticians started to think of the spacetime as a differential manifold. Since then various assumptions about the spacetime topology and geometry have been discussed in the literature . Until recently, the choice of differential structure of the spacetime manifold has been assumed to be trivial because most topological spaces used for modeling spacetime have natural differential structures and these differential structures where (wrongly) thought to be unique. Therefore the counterintuitive discovery of exotic $`𝐑^4`$’s following from the work of Freedman and Donaldson raised various discussions about the possible physical consequences of this discovery. Exotic $`𝐑^4`$’s are smooth ($`C^{\mathrm{}}`$) four-manifolds which are homeomorphic to the Euclidean four-space $`𝐑^4`$ but not diffeomorphic to it. Exotic $`𝐑^4`$’s are unique to dimension four, see \[5-11\] for details. Later we have realized that exotic (nonunique) smooth structures are abundant in dimension four. For example it is sufficient to remove one point from a given four-manifold to obtain a manifold with exotic differential structures ; every manifold of the form $`M\times 𝐑`$, $`M`$ being compact 3-manifold has infinitely many inequivalent differential structures. Such manifolds play important role in theoretical physics and astrophysics. Therefore the physical meaning of exotic smoothness must be thoroughly investigated. This is not an easy task: we only know few complicated coordinate descriptions and most mathematicians believe that there is no finite atlas on an exotic $`𝐑^4`$. To our knowledge, only few physical examples have been discussed in the literature . In this paper we would like to discuss some peculiarities that may happen while studying the theory of gravity on some exotic $`𝐑^4`$’s. The most important result is that on some topologically trivial spaces there exist only ”complicated” solutions solutions to the Einstein equations. By this we mean that there may be no stationary cosmological model solutions and/or that empty space can gravitate. Such solutions are counterintuitive but we are aware of no physical principle that would require rejection of such spacetimes (besides common sense?).
## 2 General relativity on exotic $`𝐑^4`$’s with few symmetries
As it was written in the previous section, exotic $`𝐑^4`$’s are defined as four-manifolds that are homeomorphic to the fourdimensional Euclidean space $`𝐑^4`$ but not diffeomorphic to it. There are infinitely many of such manifolds (at least a two parameter family of them) . Note that exotic differential structures do not change the definition of the derivative. The essential difference is that the rings of real differentiable functions are different on nondiffeomorphic manifolds. In the case of exotic $`𝐑^4`$’s this means that there are some continuous functions $`𝐑^4𝐑`$ that are smooth on one exotic $`𝐑^4`$ and only continuous on another and vice versa . To proceed we will recall several definitions. We will call a diffeomorphism $`\varphi :MM`$, where $`M`$ is a (pseudo-)Riemannian manifold with metric tensor $`g`$ an isometry if and only if it preserve $`g`$, $`\varphi ^{}g=g`$ . Such mappings form a group called the isometry group. We say that a smooth manifolds has few symmetries provided that for every choice of differentiable metric tensor, the isometry group is finite. Recently, L. R. Taylor managed to construct examples of exotic $`𝐑^4`$’s with few symmetries . Among these there are examples with nontrivial but still finite isometry groups. Taylor’s result, although concerning Riemannian structures, has profound consequences for the analysis of the possible role of differential structures in physics where Lorentz manifolds are commonly used. To show this let us define a (non-)proper actions of a group on manifolds as follows. Let $`G`$ be a locally compact topological group acting on a metric space $`X`$. We say that $`G`$ acts properly on $`X`$ if and only if for all compact subsets $`YX`$, the set $`\{gG:gYY\mathrm{}\}`$ is also compact. Restating this we say that $`G`$ acts nonproperly on $`X`$ if and only if there exist sequences $`x_nx`$ in $`X`$ and $`g_n\mathrm{}`$ in $`G`$, such that $`g_nx_n`$ converges in $`X`$. Here $`g_n\mathrm{}`$ means that the sequence $`g_n`$ has no convergent subsequence in the compact open topology on the set of all isometries \[15 p. 202\]. Note that for many manifolds a proper $`G`$ action is topologically impossible and on the other hand a nonproper $`G`$ action on a Lorentz (or pseudo-Riemannian) manifolds is for all but a few groups also impossible . Our discussion would strongly depend on the later fact and on the theorems proved by N. Kowalsky . First of all let us quote :
###### Theorem 1
Let G be Lie transformation group of a differentiable manifold X. If G acts properly on X, then G preserves a Riemannian metric on X. The converse is true if G is closed in Diff(X).
As a special case we have:
###### Theorem 2
Let G and X be as above, and in addition assume G connected. If G acts properly on X preserving a time-orientable Lorentz metric, then G preserves a Riemannian metric and an everywhere nonzero vector field on X.
If we combine these theorems with the Taylor’s results we immediately get:
###### Theorem 3
Let G be a Lie transformation group acting properly on an exotic $`𝐑^\mathrm{𝟒}`$ with few symmetries and preserving a time-orientable Lorentz metric. Then G is finite.
Further, due to N. Kowalsky, we also have :
###### Theorem 4
Let G be a connected noncompact simple Lie group with finite center. Assume that G is not locally isomorphic to SO(n, 1) or SO(n,2). If G acts nontrivially on a manifold X preserving a Lorentz metric, then G actually acts properly on X.
and
###### Theorem 5
If G acts nonproperly and nontrivially on X, then G must be locally isomorphic to SO(n,1) or SO(n,2) for some n.
The general nonproper actions of Lie groups locally isomorphic to SO(n,1) or SO(n,2) would be discussed in ref. . In many cases it is possible to describe the cover $`\stackrel{~}{X}`$ up to Lorentz isometry.
Now, suppose we are given an exotic $`𝐑_\theta ^4`$ with few symmetries. Given any boundary conditions, we can try to solve the Einstein equations on $`𝐑_\theta ^4`$. Suppose we have found some solution to the Einstein equations on $`𝐑_\theta ^4`$. Whatever the boundary conditions be we would face one of the two following situations.
* The isometry group G of the solution acts properly on $`𝐑_\theta ^4`$. Then according to Theorem 3 G is finite. There is no nontrivial Killing vector field and the solution cannot be stationary . The gravitation is quite ”complicated” and even empty spaces do evolve. Note that this conclusion is valid for any open subspace of $`𝐑_\theta ^4`$. This means that this phenomenon cannot be localized on such spacetimes.
* The isometry group G of the solution acts nonproperly on $`𝐑_\theta ^4`$. Then G is locally isomorphic to SO(n,1) or SO(n,2 ). But the nonproper action of G on $`𝐑_\theta ^4`$ means that there are points infinitely close together in $`𝐑_\theta ^4`$ ($`x_nx`$) such that arbitrary large different isometries ($`g_n\mathrm{}`$) in G maps them into infinitely close points in $`𝐑_\theta ^4`$ ($`g_nx_ny𝐑_\theta ^4`$). There must exists quite strong gravity centers to force such convergence (even in empty spacetimes). Such spacetimes are unlikely to be stationary.
We see that in both cases Einstein gravity is quite nontrivial even in the absence of matter. Let us recall that if a spacetime has a Killing vector field $`\zeta ^a`$, then every covering manifolds admit appropriate Killing vector field $`\zeta ^{}_{}{}^{}a`$ such that it is projected onto $`\zeta ^a`$ by the differential of the covering map. This means that discussed above properties are ”projected” on any space that has exotic $`𝐑^4`$ with few symmetries as a covering manifold eg quotient manifolds obtained by a smooth action of some finite group. Note that we have proven a weaker form of the Brans conjecture : there are four-manifolds (spacetimes) on which differential structures can act as a source of gravitational force just as ordinary matter does.
## 3 Conclusions
The existence of topologically trivial spacetimes that admit only ”nontrivial” solutions to the Einstein equations is very surprising. Such phenomenon might be also possible for other four-manifolds admitting exotic differential structures enumerated in the Introduction. The first reaction is to reject them as being unphysical mathematical curiosities. But this conclusion might be erroneous \[6-8, 13\]. Besides the arguments put forward by Brans \[6-8\] and Asselmeyer we would like to add the following. Suppose that spacetime is only a secondary entity emerging as a result of interactions between physical (matter) fields. If we use the A. Connes’ noncommutative geometry formalism do describe Nature then Dirac operators and their spectra define the spacetime structure . There are known examples of differential structures that are distinguished by spectra of the Dirac operators \[23-25\]. This suggest that fundamental interactions of matter are ”responsible” for the selection of the differential structure and might not have ”chosen” the simplest structure of the spacetime manifold. Although it is unlikely that the spectrum of the Dirac operator alone would allow to distinguish the differential structure in a general case, it seems to be reasonable to conjecture that physically equivalent spacetimes must be isospectral and we might hope that this would solve the problem with the plethora of exotic differential structures. If Nature have not used exotic smoothness we physicists should find out why only one of the existing differential structures has been preferred. Does it mean that the differential calculus, although very powerful, is not necessary (or sufficient) for the description of the laws of physics? It might not be easy to find any answer to these questions.
Let us conclude by saying that if exotic smoothness has anything to do with the physical world it may be a source/ explanation of various astrophysical and cosmological phenomena. Dark matter and vacuum energy substitutes and attracting centers are the most obvious among them \[27-29\]. ”Exoticness” of the spacetime might be responsible for the recently discovered anomalies in the large redshift supernovae properties. The process of ”elimination” of exotic differential structures might also result in the emergence time or spacetime signature.
Acknowledgments. The author would like to thank K. Kołodziej and I. Bednarek for stimulating and helpful discussions.
### References
* Riemann B, Über die Hypothesen, welche der Geometrie zugrunde liegen (1854).
* Sładkowski J, hep-th/9610093.
* Freedman M H, J. Diff. Geom. 17 (1982) 357.
* Donaldson S K, J. Diff. Geom 18 (1983) 279.
* Gompf R, J. Diff. Geom. 37 (1993) 199.
* Brans C and Randall D, Gen. Rel. and Grav. 25 (1993) 205.
* Brans C, Class. Quantum. Grav. 11 (1994) 1785.
* Brans C, J. Math. Phys. 35 (1994) 5494.
* Sładkowski J, Acta Phys. Pol. B27 (1996) 1649.
* Freedman M H and Taylor L R, J. Diff. Geom. 24 (1986) 69.
* Bizaca Z and Etnyre J, Topology 37 (1998) 461.
* Bizaca Z, J. Diff. Geom. 39 (1994) 491.
* Asselmeyer T, Class. Quantum Grav. 14 (1997) 749.
* Sładkowski J, Int. J. Theor. Phys. 33 (1994) 2381.
* Helgason S, Differential Geometry, Lie Groups and Symmetric spaces, Academic Press, 1978.
* Taylor R L, math.QT/9807143.
* Szaro J P, Am. J. Math. 120 (1998) 129.
* Kowalsky N, Ann. Math. 144 (1997) 611.
* Kowalsky N, Actions of SO(n,1) and SO(n,2) on Lorentz manifolds, in preparation.
* Wald R, General Relativity, The Univ. of Chicago Press, Chicago (1984).
* Connes A, Noncommutative Geometry, Academic Press London 1994.
* Sładkowski J, Acta Phys. Pol. B 25 (1994) 1255.
* Donnelly H, Bull. London Math. Soc. 7, 147 (1975).
* Kreck M and Stolz S, Ann. Math. 127 (1988) 373.
* Stolz S, Inv. Math. 94 (1988) 147.
* Fintushel R and Stern R J, math.SG/9811019.
* Bednarek I and Mańka R, Int. J. Mod. Phys. D 7 (1998) 225.
* Mańka R and Bednarek I, Astron. Nachr. 312 (1991) 11.
* Mańka R and Bednarek I, Astrophys. and Space Science 176 (1991) 325.
* Isham C and Butterfield J, in The Arguments of Time, J. Butterfield (ed.), Oxford Univ. Press, Oxford 1999.
* Heller M and Sasin W, Phys. Lett A 250 (1998) 48.
|
no-problem/9906/hep-ph9906537.html
|
ar5iv
|
text
|
# Closing the Low-mass Axigluon Window
## Motivation
The possible existence of an axigluon was first realized in chiral color models chiral , where the gauge group of the strong interaction is extended from $`SU(3)_C`$ to $`SU(3)_L\times SU(3)_R`$. At low energy, this larger color gauge group breaks to the usual $`SU(3)_C`$ with its octet of massless vector gluons, but it leaves a residual $`SU(3)`$ with an octet of massive, axial vector bosons called axigluons. In these chiral color models, the axigluon is expected to have a mass of order the electroweak scale. Similar states are predicted in technicolor models tc .
In order to search for these states, Bagger, Schmidt and King bsk noted that the di-jet cross section at hadron colliders would be modified by the addition of $`s`$-channel axigluon exchange. Searches were performed by the UA1 and CDF collaborations, with limits of $`150GeV<M_A<310GeV`$ by UA1 UA1-dijet and $`120GeV<M_A<980GeV`$ by CDF CDF-dijet . Given additional center of mass energy and/or luminosity, these di-jet searched at hadron colliders will easily raise the upper exclusion limit, but it will be difficult to decrease the lower exclusion limit.
Several additional search strategies were suggested involving the $`Z^0`$ and the large amounts of data taken by the LEP experiments. Rizzo rizzo suggested $`Z^0q\overline{q}A`$ and Carlson, et al.carlson suggested $`Z^0gA`$ going through a quark loop. These suggestions involve low rates, and the former requires precision multi-jet reconstruction.
In the remainder of this report, I will address some additional search strategies for the axigluon, and report on the current status of the search.
## $`\mathrm{{\rm Y}}`$ decays to real axigluons
The decay of the $`\mathrm{{\rm Y}}`$ family is an ideal area to search for low mass, strongly interacting particles. In the Standard Model, the dominant hadronic decay mode of any heavy vector quarkonium state ($`J^{PC}=1^{}`$) is the 3 gluon mode, $`V_Qggg`$, where $`Q`$ refers to the specific flavor of heavy quark. The decay to a single gluon is forbidden by color, while the decay to 2 gluons is forbidden by both the Landau-Pomeranchuk-Yang theorem lpy (which forbids the decay of a $`J=1`$ state to 2 massless spin 1 states) and quantum numbers ($`C=1`$ for the gluon, so an odd number of gluons are needed for this particular decay). The leading order decay rate of a heavy quarkonium state to 3 gluons is well known:
$$\mathrm{\Gamma }(V_Qggg)=\frac{40(\pi ^29)\alpha _s^3}{81\pi M_V^2}|R(0)|^2$$
(1)
where $`M_V`$ is the quarkonium state’s mass and R(0) is the non-relativistic, radial wavefunction evaluated at the origin.
A heavy, vector quarkonium state may decay into a gluon plus an axigluon. As the axigluon is massive, the Landau-Pomeranchuk-Yang theorem is avoided, and the axigluon has $`C=+1`$. The decay rate for $`V_QAg`$ is given by mhr1 :
$$\mathrm{\Gamma }(V_QAg)=\frac{16\alpha _s^2}{9M_V^2}|R(0)|^2(1x)(1+\frac{1}{x})$$
(2)
where $`x=\left(\frac{M_A}{M_V}\right)^2`$. Both this decay rate and the leading order Standard Model rate depend on the non-relativistic radial wavefunction; a ratio of these two decay rates does not depend on the wavefunction, and, as such, had much less uncertainty. The ratio is given by:
$$\frac{\mathrm{\Gamma }(V_QAg)}{\mathrm{\Gamma }(V_Qggg)}=\frac{18\pi }{5\alpha _s(\pi ^29)}(1x)(1+\frac{1}{x})$$
(3)
Notice that, since the gluon plus axigluon mode has one fewer power of $`\alpha _s`$, the ratio is large (the numerical factor in front of the kinematical structure is approximately 100).
This ratio, as a function of $`x`$ is shown in Figure 1. The addition of this new hadronic decay mode will at least double the hadronic width of a vector quarkonium state, even for an axigluon mass nearly equal to the quarkonium state mass. Using this process and the $`\mathrm{{\rm Y}}`$ system, we can exclude an axigluon with mass below about $`10GeV`$. A analysis by Cuypers and Frampton cf yielded quantitatively similar conclusions.
## $`\mathrm{{\rm Y}}`$ decays to virtual axigluons
In addition to $`\mathrm{{\rm Y}}`$ decays to real axigluons, it is possible to study $`\mathrm{{\rm Y}}`$ decays to virtual axigluons, $`\mathrm{{\rm Y}}gA^{}(q\overline{q})`$. The decay rate is given by mhr2
$$\mathrm{\Gamma }(V_Qq\overline{q}g)=\frac{2^8n\alpha _s^3}{3^5\pi }\frac{M_V^2}{M_A^4}F(x)|R(0)|^2$$
(4)
where $`n`$ is the number of active quark flavors (in this case 4) and
$$F(x)=\frac{3}{2}x^2\left(2x\mathrm{ln}\left(\frac{x}{x1}\right)2\frac{1}{x}\right).$$
(5)
As before, we can look at the ratio of this hadronic width to the dominant Standard Model width:
$$\frac{\mathrm{\Gamma }(V_Qq\overline{q}g)}{\mathrm{\Gamma }(V_Qggg)}=\frac{128F(x)}{15(\pi ^29)x^2}.$$
(6)
This time, there is no large numerical factor. This ratio is shown in Figure 2. The dashed lines in the figure indicate 2 possible exclusion limits that can be made using data. The more conservative estimate is to argue that our knowledge of the $`\mathrm{{\rm Y}}`$ width is such that a correction to the standard width larger than 50% is unacceptable; thus, this ratio is smaller than 0.5, which gives an upper exclusion limit of $`M_A<21GeV`$. A less conservative estimate is to compare this correction to the expected rate to QCD radiative corrections to the Standard Model rate and other possible contributions to the hadronic width (e.g., $`\mathrm{{\rm Y}}\gamma ^{}q\overline{q}`$), and argue that another correction larger than these is unacceptable. In this case, the ratio must be less than 0.25, excluding axigluons with mass smaller than $`25GeV`$.
Not long after our work on the $`\mathrm{{\rm Y}}`$, Cuypers and Frampton and Cuypers, Falk and Frampton cff published papers on the $`R`$ value in $`e^+e^{}`$ collisions at low energy. They included the full set of QCD radiative corrections, including axigluon radiative corrections, to the tree level process. They exclude an axigluon with $`M_A<50GeV`$ using PEP and PETRA data.
## Top production
The top is too short lived to allow for a toponium state; if it did, the same techniques that worked in the $`\mathrm{{\rm Y}}`$ system would work for toponium as well. On the other hand, because of the large mass of the top, top production is inherently perturbative, $`q\overline{q}t\overline{t}`$ is well understood, and it can be used to search for a light axigluon. The parton level cross section for $`q\overline{q}t\overline{t}`$, due to an $`s`$-channel gluon, is well known:
$$\left(\frac{d\sigma }{d\widehat{t}}\right)_0=\frac{1}{16\pi \widehat{s}^2}\frac{64\pi ^2}{9}\alpha _s^2\left[\frac{(m^2\widehat{t})^2+(m^2\widehat{u})^2+2m^2\widehat{s}}{\widehat{s}^2}\right]$$
(7)
and the cross section with the addition of an $`s`$-channel axigluon is mr1
$$\left(\frac{d\sigma }{d\widehat{t}}\right)_{q\overline{q}}=\left(\frac{d\sigma }{d\widehat{t}}\right)_0\left[1+|r(\widehat{s})|^2+4\mathrm{}(r(\widehat{s}))\frac{(\widehat{t}\widehat{u})\widehat{s}\beta }{(\widehat{t}\widehat{u})^2+\widehat{s}^2\beta ^2}\right]$$
(8)
where $`r(\widehat{s})=\frac{\widehat{s}}{\widehat{s}M_A^2+iM_A\mathrm{\Gamma }_A}`$ and $`\beta `$ is the top quark velocity parameter, $`\beta =\sqrt{1\frac{4m^2}{\widehat{s}}}`$. The addition of an $`s`$-channel axigluon affects both the total cross section and the forward-backward asymmetry (only the interference term affects the forward-backward asymmetry).
The results on total cross section are shown in Figure 3. From the relatively good agreement between experimental values of the top cross section top-D0 ; top-CDF and theoretical calculations bc ; cat , we can say that an axigluon is disfavored by top production cross section, but nothing conclusive can be said.
Shown in Figure 4 is the forward-backward asymmetry in top production as a function of axigluon mass. Without an axigluon, the asymmetry is identically zero.
## Miscellaneous
Unitarity is violated, in that $`Q\overline{Q}Q\overline{Q}`$ will be non-perturbative unless
$$M_A>\sqrt{\frac{5\alpha _s}{3}}M_Q$$
(9)
as pointed out by Robinett rick . Using the top quark as $`Q`$, this leads to a lower limit on the axigluon mass of $`M_A>73GeV`$ mr2 .
Higgs searches, e.g., by CDF, can make use of the process $`p\overline{p}W+X^0`$, where $`X^0`$ is the neutral Higgs boson, and it is assumed to decay to $`b\overline{b}`$ bhat . The limit on Higgs boson mass is such that $`\sigma BR>1520pb`$ are not allowed. The same final state is possible with an axigluon in place of the Higgs boson; we find the part level cross section for $`q\overline{q}^{}WA`$ to be mr2 :
$$\frac{d\widehat{\sigma }}{d\widehat{t}}=\frac{4\alpha _s}{9}\left[\frac{G_FM_W^2}{\sqrt{2}}\right]\frac{|V_{qq^{}}|^2}{\widehat{u}\widehat{t}\widehat{s}^2}\left[\widehat{u}^2+\widehat{t}^2+2\widehat{s}(M_W^2+M_A^2)\frac{M_A^2M_W^2(\widehat{u}^2+\widehat{t}^2)}{\widehat{u}\widehat{t}}\right].$$
(10)
Assuming $`BR(Ab\overline{b})=\frac{1}{5}`$, and calculating the cross section for the associated production of $`W+A`$, a conservative lower limit of $`M_A>70GeV`$ is possible, using the same analysis at the Higgs search.
Finally, we can examine the value of $`\alpha _s`$, extracted from low energy data but run up to $`M_Z`$ to the value of $`\alpha _s`$ extracted from the hadronic width of the $`Z^0`$ at the pole. Since the axigluon mass is expected to be at least $`70GeV`$, the running of $`\alpha _s`$ should not be affected much by the axigluon. Then, the $`R`$ value at low energy, or the hadronic width at the $`Z^0`$ pole, is subject to a correction from real and virtual axigluons cff , of the form:
$$\left[1+\frac{\alpha _s(\sqrt{s})}{\pi }f\left(\frac{\sqrt{s}}{M_A}\right)+𝒪(\alpha _s^2)\right]$$
(11)
where the function $`F(\sqrt{s}/M_A)`$ is calculated in Ref. cff . The Particle Data Group pdg quotes a value of $`\alpha _s`$ from various low energy data run up to $`M_Z`$ as $`\alpha _s^{(LE)}=0.118\pm 0.004`$, while the extraction from the hadronic with of the $`Z^0`$ at $`M_Z`$ as $`\alpha _s^{(HE)}=0.123\pm 0.004\pm 0.002`$. Attributing the difference in the extracted values of $`\alpha _s`$ to the axigluon gives abound on the $`f(\sqrt{s}/M_A)`$ term, such that $`f(M_Z/M_A)0.042\pm 0.050`$. This implies that $`f(M_Z/M_A)<0.092(0.142)`$ at the $`65\%`$ $`(95\%)`$ level, and that $`M_Z>570GeV(365GeV)`$ at the same confidence levels. Should the agreement between the low energy and high energy extractions of $`\alpha _s`$ increase, the corresponding lower limit on the axigluon mass would also increase.
## Conclusions
The existence of an axigluon is predicted in chiral color models. A low-mass axigluon is difficult to exclude in typical collider experiments (e.g., using di-jet data). Other approaches must be used to rule out axigluons with masses below $`125GeV`$. $`\mathrm{{\rm Y}}`$ decay, top production, unitarity bounds and associated production of a $`W`$ boson with an axigluon can exclude axigluons with mass below about $`70GeV`$. A comparison of $`\alpha _s`$ as extracted in low energy experiments and high energy experiments can rule out an axigluon with a mass lower than $`365GeV`$. This completely closes the low-mass axigluon window, and when combined with the CDF limits, an axigluon with mass below about $`1TeV`$ is not allowed. An axigluon, if it exists, is in the realm of $`TeV`$ physics.
## acknowledgements
I would like to acknowledge the support of Penn State through a Research Development Grant and the Mont Alto Faculty Affairs Committee’s Professional Development Fund. I would like to thank Rick Robinett for carefully reading this manuscript.
# Conventional mechanisms for “exotic” superconductivity
## Abstract
We consider the pairing state due to the usual BCS mechanism in substances of cubic and hexagonal symmetry where the Fermi surface forms pockets around several points of high symmetry. We find that the symmetry imposed on the multiple pocket positions could give rise to a multidimensional nontrivial superconducting order parameter. The time reversal symmetry in the pairing state is broken. We suggest several candidate substances where such ordering may appear, and discuss means by which such a phase may be identified.
Most conventional superconductors are described very well by the BCS theory. The electron-phonon interaction mediates an attraction between electrons that is stronger than the Coulomb repulsion. This gives rise to the Cooper instability of the normal state leading to the appearance of a condensate of pairs. The order parameter (the anomalous function $`F`$) in this case belongs to the “s-wave” type, i.e., it is invariant with respect to the transformations of $`G\times R`$, where $`G`$ is the crystal point group and $`R`$ is the time reversal operation. As a result the quasiparticle spectrum has a gap, which leads to well-known experimental consequences. A variety of materials: <sup>3</sup>He , UBe<sub>13</sub> , UPt<sub>3</sub> , high $`T_c`$ materials , and Sr<sub>2</sub>RuO<sub>4</sub>, have been discovered that potentially break the $`G\times R`$ symmetry of the normal state. A well known example is the $`A`$-phase of <sup>3</sup>He which is not rotationally or time reversal invariant (note that the $`B`$ phase of <sup>3</sup>He is both rotation and time reversal invariant). Such non $`s`$-wave superconductors are usually expected to have a gapless excitation spectrum and arise when the interaction itself depends upon the superconducting ground state (in the case of <sup>3</sup>He the BCS ground state is the $`B`$ phase and the spin fluctuation feedback effect is required to stabilize the $`A`$ phase ). All the possible symmetry classes of the superconducting state in crystalline materials were enumerated in Ref. (for a review see Ref.).
We show below that exotic superconductivity can be a much more common phenomenon and does not require unusual mechanisms. The electron-phonon and Coulomb interactions are enough to give rise to a multidimensional order parameter which would have lower symmetry than the normal state - including the breaking of time-reversal invariance. The effects we consider are possible in metals with several pockets which are centered at or around some symmetry points of the Brillouin Zone (BZ). A BCS approximation generalized to the multi-band case (see, e.g., ) will be used. The new point here is that since the form of the interaction parameters describing the two-electron scattering on and between the different pockets of the Fermi surface (FS) is fixed by symmetry, the resulting superconducting state need not be $`s`$-wave. Below we consider three cases in detail: a) three FS pockets centered about the X-points of a simple cubic lattice; b) three FS pockets at the M-points of the hexagonal lattice; c) four FS pockets at the L-points in the face-centered cubic lattice. A complete analysis of all other high symmetry points is possible, and will be published elsewhere.
We emphasize that this FS structure is not unusual. Indeed, superconductivity with pockets as in case (a) and $`T_c\sim 0.1K`$ is found in LaB<sub>6</sub>. Another example is given by the superconducting semiconductors such as PbTe, SnTe, or SrTiO<sub>3</sub> . Many materials exist where such FS sheets co-exist with other non-symmetry related FS sheets and some of these materials have anomalous superconducting properties. One example is CeCo<sub>2</sub> ; Fig. 1 shows some of the FS sheets of CeCo<sub>2</sub> ($`T_c=1.6`$ K). We will return to this later.
We will use the generalized Ginzburg–Landau (GL) functional to identify possible nontrivial superconducting phases. The Hamiltonian for several separate pieces of the FS can be written in the following form:
$$H=\underset{\alpha \sigma 𝐩}{\sum }ϵ(𝐩)a_{\alpha \sigma }^{\dagger }(𝐩)a_{\alpha \sigma }(𝐩)+\frac{1}{2}\underset{𝐤,𝐤^{\prime },𝐪}{\sum }\underset{\alpha \beta \sigma \sigma ^{\prime }}{\sum }\lambda _{\alpha \beta }(𝐪)a_{\alpha \sigma }^{\dagger }(𝐤+𝐪)a_{\beta \sigma ^{\prime }}^{\dagger }(𝐤^{\prime }-𝐪)a_{\alpha \sigma ^{\prime }}(𝐤^{\prime })a_{\beta \sigma }(𝐤),$$
(1)
where $`\sigma `$ and $`\sigma ^{\prime }`$ are spin indices, $`\lambda _{\alpha \beta }(𝐪)`$ includes the interaction for scattering two electrons from the pocket $`\alpha `$ into pocket $`\beta `$ which is due to both Coulomb and electron-phonon terms. Introducing the anomalous Green’s function $`\widehat{F}_\alpha (x-x^{\prime })`$ for each FS sheet $`\alpha `$, the corresponding Gor’kov equations can be used to obtain the following solution at finite temperatures for the case of singlet pairing:
$$F_\alpha ^{\dagger }(\omega _n,𝐩)=\frac{\mathrm{\Delta }_\alpha ^{\dagger }(𝐩)}{\omega _n^2+\xi ^2+|\mathrm{\Delta }_\alpha (𝐩)|^2},$$
(2)
where
$$\mathrm{\Delta }_\alpha ^{\dagger }(𝐩)=-\frac{T}{(2\pi )^3}\underset{\beta }{\sum }\underset{n}{\sum }𝑑𝐤\lambda _{\beta \alpha }(𝐩-𝐤)F_\beta ^{\dagger }(\omega _n,𝐤).$$
(3)
Expanding Eq. (2) in $`|\mathrm{\Delta }_\alpha |`$ up to the third order turns Eq. (3) into a variation of the GL functional with respect to the vector order parameter $`\mathrm{\Delta }_\alpha (𝐩)`$. For simplicity we assume that each $`\mathrm{\Delta }_\alpha (𝐩)`$ is constant along the corresponding FS, and to fourth order in $`\mathrm{\Delta }_\alpha `$ we can write
$$F_s-F_n=-\mathrm{\Delta }_\alpha ^{*}\left[(\widehat{\lambda }^{-1})_{\alpha \beta }+\delta _{\alpha \beta }\frac{mp_0}{2\pi ^2}\mathrm{ln}\left(\frac{2\gamma \omega _D}{\pi T}\right)\right]\mathrm{\Delta }_\beta +\frac{7\zeta (3)mp_0}{32\pi ^4T^2}\underset{\beta }{\sum }|\mathrm{\Delta }_\beta |^4$$
(4)
where $`\widehat{\lambda }^{-1}`$ is the matrix inverse to the interaction $`\lambda _{\alpha \beta }`$, $`\omega _D`$ is the cutoff (Debye) frequency.
We now analyze three different cases for multiple FS sheets.
(a) Three X points in a cubic lattice.
The interaction matrix $`\widehat{\lambda }`$ for three X points takes the following general form:
$$\lambda _{\alpha \beta }=\lambda \delta _{\alpha \beta }+\mu (1-\delta _{\alpha \beta }).$$
(5)
Here $`\lambda `$ is the interaction on the same pocket, $`\mu `$ couples any two different pockets. Consider first the linearized gap equation Eqs.(2) and (3) to determine $`T_c`$:
$$\mathrm{\Delta }_\alpha ^{\dagger }\frac{2\pi ^2}{mp_0}=-\underset{\beta }{\sum }\lambda _{\alpha \beta }\mathrm{\Delta }_\beta ^{\dagger }\mathrm{ln}\left(\frac{2\gamma \omega _D}{\pi T_c}\right)$$
(6)
The three $`\mathrm{\Delta }_\alpha `$ transform among each other under cubic symmetry transformations, forming a 3D reducible representation of the cubic group $`O_h`$, which is split into a $`1D`$ $`A_{1g}`$ and a $`2D`$ $`E_g`$ irreducible representation. These two representations correspond to different order parameters with two critical temperatures:
$`T_{c,E}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left({\displaystyle \frac{2\pi ^2}{mp_0(\lambda -\mu )}}\right)(2D)`$ (7)
$`T_{c,A}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left({\displaystyle \frac{2\pi ^2}{mp_0(\lambda +2\mu )}}\right)(1D)`$ (8)
(the terms in the exponents must be negative for the Cooper effect to take place).
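The splitting behind Eqs. (7) and (8) can be checked directly: the $`3\times 3`$ matrix of Eq. (5) has the eigenvalue $`\lambda +2\mu `$ once and $`\lambda -\mu `$ twice. A minimal numerical sketch, with purely illustrative coupling values and $`mp_0`$ set to 1:

```python
# Sketch verifying the eigenvalue splitting of the interaction matrix (5);
# lam and mu values are illustrative assumptions, not numbers from the text.
import numpy as np

lam, mu = -0.30, 0.10          # attractive intra-pocket, repulsive inter-pocket
L = lam * np.eye(3) + mu * (np.ones((3, 3)) - np.eye(3))

vals = np.sort(np.linalg.eigvalsh(L))
print(vals)                    # -> [-0.4, -0.4, -0.1] = (lam-mu twice, lam+2mu)

# BCS-style Tc ~ exp(2*pi^2/(m*p0*coupling)): the more negative coupling wins.
tc = lambda g: np.exp(2 * np.pi**2 / g)
print(tc(lam - mu) > tc(lam + 2 * mu))   # True: the E_g channel has higher Tc
```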
The basis wave function for the 1D identity representation is
$$l=(\mathrm{\Delta }_1+\mathrm{\Delta }_2+\mathrm{\Delta }_3)/\sqrt{3}$$
(9)
and the basis wave functions for the 2D representation can be chosen as
$`\eta _1`$ $`=`$ $`(\mathrm{\Delta }_1+ϵ\mathrm{\Delta }_2+ϵ^2\mathrm{\Delta }_3)/\sqrt{3}`$ (10)
$`\eta _2`$ $`=`$ $`(\mathrm{\Delta }_1+ϵ^2\mathrm{\Delta }_2+ϵ\mathrm{\Delta }_3)/\sqrt{3},`$ (11)
where $`ϵ=\mathrm{exp}(2\pi i/3)`$. From Eqs. (7) and (8), if $`\lambda -\mu <0`$ and $`\mu >0`$, then the superconductivity will belong to the nontrivial 2D $`E_g`$ representation, i.e., if the interaction between two different FS pockets is dominated by Coulomb repulsion. Let us consider the latter case in detail. Rewriting the Landau functional Eq.(4) in terms of $`l`$, $`\eta _1`$ and $`\eta _2`$, we obtain, for temperatures $`T`$ near $`T_{c,E}`$:
$$\frac{2\pi ^2}{mp_0}\delta F=\frac{T-T_{c,E}}{T_{c,E}}(|\eta _1|^2+|\eta _2|^2)+\mathrm{ln}(T_{c,E}/T_{c,A})|l|^2+\frac{7\zeta (3)}{48\pi ^2T_{c,E}^2}(|\eta _1|^4+|\eta _2|^4+4|\eta _1|^2|\eta _2|^2+F_{l\eta }^{(4)}),$$
(12)
where $`F_{l\eta }^{(4)}`$ is the fourth order term in the GL functional which may admix the $`1D`$ representation $`l`$:
$`F_{l\eta }^{(4)}=2l(\eta _1^{*})^2\eta _2+2l\eta _1(\eta _2^{*})^2+h.c.`$ (13)
With $`T_{c,A}<T_{c,E}`$ the superconducting instability will correspond to the $`2D`$ representation $`E_g`$ of the cubic point group. The fourth order coefficients in Eq.(12) indicate that the class $`O(D_2)`$ is energetically the most favorable. This class corresponds to a phase with $`\eta _2=0,\eta _1\ne 0`$ in Eq.(12). The symmetry properties of this class are known. Time-reversal symmetry is broken and allows for antiferromagnetic domains and for fractional vortices to appear (see, e.g., ). In principle point nodes should appear where the FS intersects the cube diagonals. This would lead to an electronic contribution with $`T^3`$ behavior in the heat capacity at low temperatures. In our case, however, there is no FS along the diagonals of the cube and the low temperature thermodynamic properties will be determined by the gap of the same magnitude for all three FS sheets. In the presence of another FS, for example, at the $`\mathrm{\Gamma }`$-point, the nontrivial order $`O(D_2)`$ will be induced on it. In this case the point nodes will exist and power laws in thermodynamic properties due to the superconductivity should be seen experimentally. The anisotropy of the upper critical field ($`H_{c2}`$) near $`T_c`$ for this class also requires that (at least) two vortex lattice phases (with a second order transition between them) exist when the magnetic field is applied along the $`(1,1,0)`$ and equivalent directions. Note from Eq.(13) that the terms linear in $`l`$ identically disappear for this class, i.e., there will be no admixture of the $`s`$-wave component.
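That the single-component state minimizes the quartic terms of Eq. (12) is a one-line computation: at fixed $`|\eta _1|^2+|\eta _2|^2`$, the combination $`|\eta _1|^4+|\eta _2|^4+4|\eta _1|^2|\eta _2|^2`$ is smallest when one component vanishes. A minimal numerical sketch:

```python
# Sketch: minimize the quartic part of Eq. (12) at fixed amplitude
# |eta_1|^2 + |eta_2|^2 = 1. With x = |eta_1|^2 the quartic equals
# x**2 + (1-x)**2 + 4*x*(1-x) = 1 + 2*x*(1-x), minimized at x = 0 or 1,
# i.e. at a single-component state (eta_1 != 0, eta_2 = 0): class O(D_2).
import numpy as np

x = np.linspace(0.0, 1.0, 101)
quartic = x**2 + (1 - x)**2 + 4 * x * (1 - x)
print(x[np.argmin(quartic)], quartic.min())   # -> 0.0 (or 1.0), value 1.0
```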
(b) Three M points in a hexagonal lattice.
Calculations in this case are the same as in the previous case, i.e., Eqs.(5)-(13) apply. We only have to specify the symmetry properties and the superconducting class in this case since the symmetry of the lattice is different. The three-dimensional representation is split by the hexagonal group $`D_{6h}\times R`$ into 1D($`A_{1g}`$) and 2D($`E_{2g}`$). Note that the basis functions for $`E_{2g}`$ can be once again chosen as given by Eq.(11). The phase with $`\eta _1\ne 0,\eta _2=0`$ has the lowest free energy and in this case corresponds to the nontrivial class $`D_6(C_2)`$. Time-reversal symmetry for this class is broken, and ferromagnetism is allowed. Point nodes (at two points of intersection of an additional FS at the $`\mathrm{\Gamma }`$-point with the six-fold axis) can be seen in thermodynamic properties but again these nodes are only present if such a FS exists. The upper critical field is isotropic near $`T_c`$ for this superconductivity representation. Nevertheless, it can be shown that there will also exist (at least) two distinct vortex lattice phases with a second order transition between them for the magnetic field applied in the basal plane .
(c) Four L-points in the fcc lattice.
The interaction and the linearized gap equation for the four $`L`$ points again take the form Eqs.(5-6). This time the 4D representation $`\mathrm{\Delta }_\alpha `$ is split into the 1D($`A_{1g}`$) and 3D($`F_{2g}`$) irreducible representations.
The critical temperatures for the two representations are now given by
$`T_{c,F}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left({\displaystyle \frac{2\pi ^2}{mp_0(\lambda -\mu )}}\right)(3D)`$ (14)
$`T_{c,A}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left({\displaystyle \frac{2\pi ^2}{mp_0(\lambda +3\mu )}}\right)(1D),`$ (15)
The basis functions for the 1D and 3D representations are
$$l=(\mathrm{\Delta }_1+\mathrm{\Delta }_2+\mathrm{\Delta }_3+\mathrm{\Delta }_4)/2(1D)$$
(16)
and
$`\eta _x`$ $`=`$ $`(\mathrm{\Delta }_1-\mathrm{\Delta }_2-\mathrm{\Delta }_3+\mathrm{\Delta }_4)/2`$ (17)
$`\eta _y`$ $`=`$ $`(\mathrm{\Delta }_1+\mathrm{\Delta }_2-\mathrm{\Delta }_3-\mathrm{\Delta }_4)/2(3D)`$ (18)
$`\eta _z`$ $`=`$ $`(\mathrm{\Delta }_1-\mathrm{\Delta }_2+\mathrm{\Delta }_3-\mathrm{\Delta }_4)/2.`$ (19)
The nontrivial 3D representation is stable if $`\lambda -\mu <0`$ and $`\mu >0`$, i.e., if the interaction is attractive on each pocket alone, while it is repulsive between two different pockets. As above, we can expand Eq.(4) in terms of $`l`$ and $`\stackrel{}{\eta }`$. Dropping the 1D identity representation, we get:
$$\frac{2\pi ^2}{mp_0}\delta F=\frac{T-T_{c,F}}{T_{c,F}}(\stackrel{}{\eta }\cdot \stackrel{}{\eta }^{*})+\frac{7\zeta (3)}{64\pi ^2T_{c,F}^2}[2(\stackrel{}{\eta }\cdot \stackrel{}{\eta }^{*})^2+|\stackrel{}{\eta }^2|^2-2(|\eta _x|^4+|\eta _y|^4+|\eta _z|^4)].$$
(20)
The GL coefficients in Eq.(20) place the system right on the boundary of two phases, superconducting classes $`D_4^{(2)}(D_2)\times R`$ and $`D_4(E)`$ (see Fig.2). This degeneracy is an artifact of the BCS theory; it is not lifted by higher order terms in the GL functional. The presence of a FS at the $`\mathrm{\Gamma }`$ point lifts this degeneracy. As a result, the magnetic superconducting class $`D_4(E)`$ is likely to appear. This class also allows ferromagnetism. Note that it has a line of nodes (at the intersection of the FS with the horizontal plane of symmetry), so that $`C\propto T^2`$ at low temperatures. There are also point nodes at the intersection of the FS with the four-fold symmetry axis. This representation also exhibits an anisotropy of $`H_{c2}`$ near $`T_c`$. In principle multiple vortex lattice phases can also exist for this class but they are not required by symmetry as they are for the $`E`$ representations discussed above.
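The degeneracy can be seen by evaluating the quartic part of Eq. (20) for the two competing states at unit amplitude; a small numerical sketch (the states below are representative examples chosen by us):

```python
# Sketch checking the degeneracy of Eq. (20): the real state (1,0,0) and the
# complex state (1,i,0)/sqrt(2) give the same quartic energy.
import numpy as np

def quartic(eta):
    n2 = np.vdot(eta, eta).real            # eta . eta* (vdot conjugates)
    sq = np.dot(eta, eta)                  # eta . eta  (no conjugation)
    return 2 * n2**2 + abs(sq)**2 - 2 * np.sum(np.abs(eta)**4)

real_state = np.array([1.0, 0.0, 0.0])                   # D_4^(2)(D_2)xR-type
complex_state = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)  # D_4(E)-type
print(quartic(real_state), quartic(complex_state))       # both 1.0: degenerate
```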
In the above cases, other than the standard isotropic order parameter, only multidimensional order parameters appeared. This leads to the possibility of domain walls between different equivalent superconducting states and as a possible consequence the existence of inhomogeneous magnetic order (see also ). Also in all the above cases the resulting superconducting states had gaps of equal magnitude on each of the FS sheets. In such a case a Hebel-Slichter peak in $`1/T_1`$ measurements may be present. The non-trivial representations were stable when the pair interaction between the different sheets was repulsive (independent of the intra-sheet interaction). In many materials the Coulomb repulsion can be comparable to the attraction between electrons due to electron-phonon interactions. This is illustrated through the reduction of the isotope effect due to Coulomb repulsion. It is well known that for a number of metals $`T_c\propto M^{-\alpha }`$, where not only $`\alpha \ne 0.5`$ but it may even have the opposite sign ($`\alpha \simeq -5`$ for $`\alpha `$-U ). The sensitivity of these exotic superconducting phases to impurities needs a more detailed analysis. However, provided the inter-pocket defect scattering amplitudes are much smaller than the intra-pocket amplitudes, these phases will survive the presence of a considerable amount of defects (due to the ordinary BCS pairing on each sheet). This will be studied in more detail in .
In summary, we have shown that exotic superconductivity can appear merely as a competition of the phonon and Coulomb interactions if the FS consists of several pockets located at some symmetry points. Time-reversal symmetry is broken for the nontrivial order, meaning that the superconducting transition should be accompanied by some kind of magnetic order. The simplest methods to identify exotic order parameters are, apart from the phase-sensitive measurements, the power law dependence of the heat capacity (due to the nodes), measurements of the upper critical field anisotropy $`H_{c2}(\theta )`$ at $`T_c`$ , or the observation of transitions between different vortex lattice phases. Note that if the FS pockets are fully isolated then the nodes are absent since the order parameter is then constant on each FS pocket and changes phase as one moves from one pocket to another. Nodes could appear, however, if there are “necks” connecting different sheets or if superconductivity is induced on a FS centered, for example, around the $`\mathrm{\Gamma }`$-point. The upper critical field anisotropy near $`T_c`$ does not work as a test of nontrivial order in the hexagonal group . The magnetic order, on the other hand, can be observed in $`\mu `$SR measurements or magnetization measurements in small enough samples (where the dimensions are on the order of the penetration depth). A general classification of all cases of nontrivial superconductivity of the type considered above is possible. These FS sheets are not always centered on the BZ boundary, as, for example, in some doped semiconductors and CeCo<sub>2</sub> . We postpone a detailed analysis of the various possibilities to a future work.
We would like to thank Z.Fisk, D. Khokhlov, J.R. Schrieffer, and the members of the NHMFL condensed matter theory group seminar for useful discussions and comments. This work was supported by the National High Magnetic Field Laboratory through NSF cooperative agreement No. DMR-9527035 and the State of Florida.
# Continuous Quantum Measurement and the Emergence of Classical ChaosLA-UR-99-3187
## Abstract
We formulate the conditions under which the dynamics of a continuously measured quantum system becomes indistinguishable from that of the corresponding classical system. In particular, we demonstrate that even in a classically chaotic system the quantum state vector conditioned by the measurement remains localized and, under these conditions, follows a trajectory characterized by the classical Lyapunov exponent.
The emergence of classical chaos from quantum mechanics is probably the most important theoretical problem in the study of the quantum to classical transition. Because of the absence of chaos in isolated quantum systems noqc and the noncommutativity of the twin limits $`\hbar \to 0`$ (the semiclassical limit) and $`t\to \mathrm{\infty }`$ (the late-time limit, necessary to describe chaos), the fundamental mechanism of how classical chaos arises from quantum mechanics remains to be elucidated. While there has been much progress recently in the development of sophisticated semiclassical methods for chaotic dynamical systems semiclass , attempts to unambiguously characterize notions of chaos in the exact quantum dynamics attempts and to extract classical chaos as a formal semiclassical limit have been less successful: a rigorous quantifier of ‘quantum chaos’ on par with the classical Lyapunov exponents has yet to be found. And, since formal techniques have so far not succeeded in extracting trajectories from isolated quantum systems, they have not been able to explain the generation of chaotic time series in actual experimental situations. The experimental state of the art has, however, reached the stage where the quantum to classical transition can now be probed directly expts . So, it is crucial that one understand the mechanism underlying this transition in order to interpret existing results and design future experiments.
As real experiments always deal with open systems, and the interaction with the measuring apparatus necessary to deduce classical behavior provides an irreducible disturbance on the free evolution of the quantum system, the resulting decoherence and conditioned evolution could play a crucial role in the emergence of the classical limit, and of chaos, from the underlying quantum dynamics. Indeed, some qualitative results in this direction already exist hzs ; qsd . In this Letter, we show that, even in the absence of any other interaction with the environment, the theory of continuous quantum measurements applied to the quantum dynamics of classically chaotic systems provides a quantitatively satisfactory explanation of how classical chaos, and Lyapunov exponents characterizing it, emerges from quantum mechanics.
Open quantum systems are often studied by writing the evolution equation for the reduced density matrix obtained by tracing over the degrees of freedom in the environment. Although this process, called decoherence, can be extremely effective in suppressing interference effects and thereby making the quantum Wigner function approach the corresponding classical phase space distribution function hzs , it does not succeed in extracting localized ‘trajectories’ from the quantum dynamics. Without the existence of such trajectories it is extremely difficult, if not impossible, to rigorously quantify the existence of chaos both mathematically and in actual experimental practice. Since in order to extract classical trajectories systems must be observed, one expects observed quantum systems to obey classical dynamics in the macroscopic limit.
What is therefore desired is an unraveling of the Master equation which provides a more detailed understanding of the trajectories underlying the average system dynamics. When these detailed trajectories follow classical dynamics (albeit noisy), one can infer that the average distributions generated by them also become classical. This then provides a ‘microscopic’ understanding of the quantum to classical transition demonstrated, e.g., in Ref. hzs .
The first requirement in this program is to have a good model of continuous quantum measurement. Even though ‘continuous’ measurement is always an idealization, real experimental situations exist which approximate it extremely closely, and simple models which correspond accurately to these processes have now been developed cqm1 ; cqm1a ; cqm2 ; cqm3 . These models show that as a necessary result of the information it provides, continuous measurement produces and maintains localization in phase space. On the other hand, the Ehrenfest theorem guarantees that well-localized quantum systems effectively obey classical mechanics. As the measurement process, in addition to localizing the state, also introduces a noise in its evolution, to obtain classical mechanics one must be in a regime in which the localization is sufficiently strong and, yet, the resulting noise sufficiently weak. We show that such a regime exists, and is precisely the one which governs macroscopic objects, i.e., $`\hbar \ll S`$, where $`S`$ is the action of the system. In what follows, we refer to this regime as the classical regime. Our central result is that, once this regime is achieved, the localized trajectories for the continuously observed quantum system obey the classical dynamics (possibly chaotic) for that system driven by a weak noise. As a result, even at a finite but non-zero value of $`\hbar `$, the quantum ‘trajectories’ possess the same Lyapunov exponents as the corresponding classical system. As one goes deeper into the classical regime with $`\hbar \to 0`$, one can make the noise progressively smaller by optimizing the measurement, and, in the limit, the intrinsic classical Lyapunov exponents are recovered.
In order for this mechanism to satisfactorily explain the quantum-classical transition, the following conditions need to be satisfied: (1) localization as discussed above, (2) suppression of measurement noise, (3) the actual value of the measurement strength should become irrelevant, and (4) the measurement record (i.e., the actual results of the continuous measurement process), suitably band-limited, should follow the classical trajectory. These conditions are studied in more detail below.
We consider, for simplicity, a single quantum degree of freedom, with position and momentum operators denoted by $`X`$ and $`P`$, evolving under an unperturbed Hamiltonian $`P^2/2m+V(X)`$. Quantum mechanics then dictates the familiar Heisenberg equations of motion for these operators: $`\dot{X}=P/m`$, and $`\dot{P}=-\partial _XV(X)\equiv F(X)`$. Except in the limit $`\hbar \to 0`$, a continuous observation with finite measurement strength does not localize either the position or the momentum completely. Nevertheless, we can describe the state of the particle in terms of the central moments of $`X`$ and $`P`$ and, anticipating the limit, assign it to a point in phase space given by the mean values $`\langle X\rangle `$ and $`\langle P\rangle `$.
The most natural measurement to use is a continuous measurement of position, not only because this is often what is observed with mechanical detectors, but also because real schemes for the continuous measurement of position, considered in the field of quantum optics, may be described very simply measx . In addition, a continuous measurement of position is an unraveling of the thermal Master equation in the high temperature limit, so that results demonstrated for this case also apply to decoherence due to a weakly coupled, high temperature thermal bath. We stress however, that we do not expect the particular measurement model to effect the results significantly; any measurement or interaction which produces a localization in phase space should lead to classical behavior in essentially the same manner.
Under continuous position measurement the evolution of the wavefunction becomes stochastic. The stochastic master equation for the density matrix $`\rho (t)`$, conditioned on the measurement record $`\langle X\rangle +\xi (t)`$ with $`\xi (t)\equiv (8\eta k)^{-1/2}dW/dt`$, is measx
$`\rho (t+dt)`$ $`=`$ $`\rho -\left({\displaystyle \frac{i}{\hbar }}[H,\rho ]+k[X,[X,\rho ]]\right)dt`$ (1)
$`+\sqrt{2\eta k}\left([X,\rho ]_+-2\rho \,\mathrm{Tr}(\rho X)\right)dW,`$
where $`k`$ is a constant specifying the strength of the measurement, $`\eta `$ is the measurement efficiency and is a number between 0 and 1, and $`dW`$ is a Wiener process, satisfying $`(dW)^2=dt`$. When $`\eta =1`$, the evolution preserves the purity of the state and can be rewritten in a way which allows it to be understood as a series of diffuse projection measurements cqm2 on an unnormalized wavefunction $`\stackrel{~}{\psi }`$:
$`|\stackrel{~}{\psi }(t+dt)\rangle =e^{-2k\,dt\,(X-(\langle X\rangle +\xi (t)))^2}e^{-iH\,dt/\hbar }|\stackrel{~}{\psi }(t)\rangle ,`$ (2)
where $`\xi (t)`$, the difference between $`\langle X\rangle `$ and the measured value of the position, becomes a white noise in the limit that $`dt`$ tends to zero. Under this continuous measurement process, the average values of position and momentum evolve according to
$`d\langle X\rangle `$ $`=`$ $`(\langle P\rangle /m)dt+\sqrt{2k}V_x(t)dW`$ (3)
$`d\langle P\rangle `$ $`=`$ $`\langle F(X)\rangle dt+\sqrt{2k}C_{xp}(t)dW,`$ (4)
where $`V_x`$ is the variance in position, and $`C_{xp}`$ is the symmetrized covariance between $`X`$ and $`P`$ eomav . Thus the effect of the measurement is to provide some zero-mean noise proportional to the square root of the measurement strength $`k`$ and to the width of the distribution. It is important to note, however, that this is just the first in a hierarchy of equations for the moments; the equations governing the second moments contain terms depending on higher moments, and so on up the hierarchy.
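For concreteness, the following minimal sketch shows one way to iterate Eq. (2) numerically, assuming $`\eta =1`$ and a split-operator/FFT step of the kind used for the simulations reported below. The grid and the values $`\hbar =0.1`$, $`k=1`$ are purely illustrative assumptions (resolving the $`\hbar =10^{-5}`$ regime discussed later requires much larger grids):

```python
# Sketch (not the authors' code): one stochastic step of Eq. (2) with eta = 1,
# using the driven double well of Eq. (8) below; all values are illustrative.
import numpy as np

hbar, m, k, dt = 0.1, 1.0, 1.0, 1e-3
x = np.linspace(-6, 6, 2048)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
kinetic = np.exp(-1j * p**2 * dt / (2 * m * hbar))

def V(t):
    # B = 0.5, A = 10, Lambda = 10, omega = 6.07 as in Eq. (8)
    return 0.5 * x**4 - 10.0 * x**2 + 10.0 * x * np.cos(6.07 * t)

def step(psi, t):
    psi = np.exp(-0.5j * V(t) * dt / hbar) * psi          # half potential kick
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))          # free flight
    psi = np.exp(-0.5j * V(t + dt) * dt / hbar) * psi     # half potential kick
    xm = np.sum(x * np.abs(psi)**2) * dx                  # <X>
    xi = np.random.randn() / np.sqrt(8 * k * dt)          # xi = (8k)^(-1/2) dW/dt
    psi = np.exp(-2 * k * dt * (x - (xm + xi))**2) * psi  # diffuse projection
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)     # renormalize

psi = ((np.pi * hbar)**-0.25 * np.exp(-(x + 3)**2 / (2 * hbar))).astype(complex)
for n in range(1000):
    psi = step(psi, n * dt)
```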
Even though the structure of the hierarchy makes it almost impossible to obtain analytic answers to questions regarding the behavior of the variances, and resulting noise strength, which are the crucial quantities determining the quantum to classical transition, we can, nevertheless, discuss the effect of varying $`\hbar `$ by truncating the hierarchy at the second order, and looking at the steady state solution for the variances. These equations show that to maintain enough localization to guarantee that, at a typical point on the trajectory, $`\langle F(X)\rangle \approx F(\langle X\rangle )`$, as required in the classical limit, the measurement strength, $`k`$, must stop the spread of the wavefunction at the unstable pointsfootnote1 , $`\partial _xF>0`$:
$$8\eta k\gtrsim \frac{\partial _x^2F}{F}\sqrt{\frac{\partial _xF}{2m}}.$$
(5)
Note that this condition is automatically satisfied for linear systems, where the quantum dynamics of the expectation values is identical to the classical evolution.
On the other hand, a large measurement strength introduces noise into the trajectory. If we demand that, averaged over a characteristic time period of the system, the changes in position and momentum due to the noise be small compared to those induced by the classical dynamics, it is sufficient that, at a typical point on the trajectory, the measurement satisfy
$$\frac{2\left|\partial _xF\right|}{\eta s}\le \hbar k\le \frac{\left|\partial _xF\right|s}{4},$$
(6)
where $`s`$ is the typical value of the actionfootnote2 of the system in units of $`\hbar `$. Obviously as $`s`$ becomes much larger than $`2\sqrt{2}\eta ^{-1/2}`$, this relationship is satisfied for an ever larger range of $`k`$, and this defines the classical limit.
Finally, in experiments one usually considers the measurement record itself rather than the estimated state of the system as we have done. As measurement introduces a white noise, it is important to investigate the condition under which the record tracks the estimate faithfully. If $`\mathrm{\Delta }t`$ is the time over which the continuous measurement is averaged to obtain the record, and we allow ourselves a maximum of $`\mathrm{\Delta }x`$ as the position noise, it is easy to see that the measurement strength needs to satisfy
$$8\eta k>\frac{1}{\mathrm{\Delta }t(\mathrm{\Delta }x)^2}$$
(7)
With this introduction, we consider, as an example, a bounded, one-dimensional, driven system with the Hamiltonian,
$$H=P^2/2m+BX^4-AX^2+\mathrm{\Lambda }X\mathrm{cos}(\omega t).$$
(8)
with $`m=1`$, $`B=0.5`$, $`A=10`$, $`\mathrm{\Lambda }=10`$, $`\omega =6.07`$. This Hamiltonian has been used before in studies of quantum chaos linbal and quantum decoherence hzs and, in the parameter regime used here, a substantial area of the accessible phase space is stochastic. The numerical method used to solve Eq. (1) is a split-operator, spectral algorithm implemented on a parallel supercomputer.
Simulations at various values of $`\hbar `$ confirm that as $`\hbar `$ is reduced, both the steady-state variance and the resulting noise (for optimal measurement strengths) are reduced, as expected. As the dynamical time scale of this problem is $`10.1`$, we decide to average the continuous observation record over a period of $`0.01`$. Similarly, as the range of the motion covers distances of $`O(10)`$, we demand that the position be tracked to an accuracy of $`0.01`$. By Eq. 7, this means we need $`\eta k\sim O(10^5)`$ or larger. In our example, we choose the energy to be $`O(10^2)`$, and the corresponding typical action turns out to be $`O(10)`$, and the typical nonlinearity makes the rhs of Eq. 5 $`O(1)`$. We see that a choice of $`\hbar =10^{-5}`$, $`\eta =1`$ and $`k=10^5`$, satisfies all the constraints for a classical motion. In Fig. 1 we demonstrate that in this regime, localization is maintained in spite of low noise. Fig. 1(a) shows a typical phase space trajectory, with the position variance during the evolution, $`V_x\equiv (\mathrm{\Delta }X)^2`$, plotted in Fig. 1(b). We find that the width $`\mathrm{\Delta }X`$ is always bounded by $`3.4\times 10^{-3}`$. Furthermore, as is immediately evident from the smoothness of the trajectory in Fig. 1(a), the noise is also negligible on these scales.
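These order-of-magnitude checks can be redone in a few lines; the typical $`|\partial _xF|\sim 20`$ used below is our own estimate from the potential of Eq. (8), not a number quoted in the text:

```python
# Arithmetic sketch of the parameter consistency above, with hbar = 1e-5,
# k = 1e5, eta = 1, action ~ 10 in absolute units; dF is an assumed estimate.
hbar, k, eta = 1e-5, 1e5, 1.0
s = 10.0 / hbar                       # action O(10) in units of hbar
dF = 20.0                             # typical |d_x F| = |20 - 6*X**2| ~ 20

lo, hi = 2 * dF / (eta * s), dF * s / 4
print(lo, hbar * k, hi)               # 4e-05 << 1.0 << 5e6: Eq. (6) holds

dt, dx = 0.01, 0.01                   # averaging time and target accuracy
print(8 * eta * k, 1 / (dt * dx**2))  # 8e5 vs 1e6: Eq. (7) holds at O(1)
```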
Thus, we find that continuous measurement can effectively obtain classical mechanics from quantum mechanics. We substantiate this further by demonstrating that the trajectories we obtain show the common signatures of classical chaos. A direct way to compare qualitatively the global nature of the quantum and classical trajectories in phase space is to compare the stroboscopic maps (the distribution of the locations of the system at a constant phase of the driving term). Fig. 2 demonstrates the excellent correspondence between the classical and quantum maps in this regard.
On a more quantitative level, we now calculate the key characteristic of chaos, the maximal Lyapunov exponent, $`\lambda `$, and compare it against that of the classical system driven with a similar amount of noise. We start with the definition of $`\lambda `$: that for a chaotic system the distance between two nearby trajectories, $`\mathrm{\Delta }(t)`$, evolves, on average, as $`\mathrm{ln}\mathrm{\Delta }(t)\sim \lambda t`$, as long as $`\mathrm{\Delta }(t)`$ is small and $`t`$ is large. To calculate this we take 10 fiducial trajectories (9 in the quantum case) starting at the point (-3,8) and at 17 points along each trajectory, separated by time intervals of 20 each, we obtain neighboring trajectories by varying the noise realization. The distance between the fiducial and these neighboring trajectories is tracked for a time interval of 8. The values of $`\mathrm{ln}\mathrm{\Delta }(t)`$ thus obtained are averaged over all the instances and plotted versus $`t`$ in Fig. 3, both for the classical (with a small amount of noise) and the continuously measured quantum systems. For very small separations $`\mathrm{\Delta }`$, the noise dominates, which gives rise to an initial steep slope. This is followed by a linear region dominated by the Lyapunov exponent. Eventually $`\mathrm{\Delta }(t)`$ becomes large and the curve flattens out. The behaviors of the observed quantum and noisy classical systems are essentially indistinguishable, and the Lyapunov exponent $`0.57(2)`$ is the same for both. Performing the analysis with the classical system without noise, this time using 50 fiducial trajectories with initial points in a neighborhood of (-3,8), we obtain a Lyapunov exponent of $`0.56(1)`$, in agreement with the previous values.
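The estimation procedure just described can be sketched generically as follows; `evolve` is a hypothetical stand-in for any noisy integrator (quantum-trajectory or classical) returning a trajectory sampled at the given times:

```python
# Sketch of the Lyapunov estimate: average ln(separation) over many
# fiducial/neighbor pairs that differ only in noise realization, then fit
# the linear, exponent-dominated region. `evolve` is a placeholder.
import numpy as np

def lyapunov(evolve, fiducials, T=8.0, n=200):
    ts = np.linspace(0.0, T, n)
    logs = []
    for state in fiducials:
        a = evolve(state, ts, seed=1)        # fiducial noise realization
        b = evolve(state, ts, seed=2)        # neighbor: different noise
        sep = np.linalg.norm(a - b, axis=1)
        logs.append(np.log(sep + 1e-300))
    mean_log = np.mean(logs, axis=0)
    # slope of the middle (linear) part of <ln Delta(t)> estimates lambda
    i, j = n // 4, 3 * n // 4
    return np.polyfit(ts[i:j], mean_log[i:j], 1)[0]
```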
After having demonstrated that in the classical regime, the localization and low noise conditions are satisfied simultaneously, we study the sensitivity to the measurement strength. To this effect, we vary $`k`$ between $`2\times 10^4`$ and $`5\times 10^5`$. The Lyapunov exponents remained unchanged within the quoted errors; only at $`k=5\times 10^5`$ did the noise start to wash out the flat region of the curve.
# Erratum to: “Measurement of $`\varphi `$ meson parameters in $`K_L^0K_S^0`$ decay mode with CMD-2” [Phys. Lett. B 466 (1999) 385–391]
In the analysis described in our Letter a wrong sign of the cross section correction for the beam energy spread was chosen. Although this error only slightly affected the values of the $`\varphi `$ meson mass and the product of its branching ratios, it unfortunately resulted in a strong overestimation of the $`\varphi `$ meson total width. By the time the error was discovered, various improvements in the reconstruction and calibration procedures allowed an increase in the accuracy of the cross section measurement. The resulting systematic uncertainty of the cross section and branching ratio measurements is 1.7%, common to all energy scans.
The revised results of our analysis are presented in Table 1. Within the errors they are consistent with those in the Letter. The new corrected value of the $`\varphi `$ meson total width is:
$$\mathrm{\Gamma }_\varphi =4.280\pm 0.033\pm 0.025MeV.$$
The authors would like to thank their colleagues from the SND Collaboration for pointing out this error.
## I INTRODUCTION
The discovery of a discrete family of asymptotically flat particle-like solutions for the static, spherically symmetric Einstein–Yang–Mills (EYM) equations with $`SU(2)`$ gauge group, made by Bartnik and McKinnon in 1988 , evoked considerable interest in these equations and their various generalizations. The intensity of investigations performed in this field is shown by the latest review , which summarizes a decade’s work and contains more than three hundred references to publications on the subject. However, there are still some problems which remain unsolved. One of the most interesting of these is probably to prove the existence of the metric oscillations in the neighborhood of the origin, $`r=0`$, which were found numerically during the study of the interior structure of EYM black holes . The initial purpose of the present work was to solve this problem. Some other open questions can be found in .
Let us note that there are at least two approaches to the analysis of local solutions of nonlinear ordinary differential equations. One of them, namely, the asymptotic theory of differential equations, in some cases makes it possible to obtain a complete classification of solutions in a singular point neighborhood. However, the right hand side of the studied equation must as a rule satisfy some rather specific conditions (see, e.g., ). Another way, known as the theory of dynamical systems, or the qualitative theory of differential equations, is less restrictive in this sense, though it also does not always lead to a comprehensive description of the behavior of solutions (see, e.g., ). Nevertheless, there are a number of problems in astrophysics and cosmology, which were solved basing on this approach (see and references therein).
In this paper, dynamical systems methods are used for the analysis of the asymptotic behavior of EYM solutions. This enables us to prove the existence of the above mentioned solutions with the oscillating metric, as well as the existence of local solutions for all known formal power series expansions, and to find two new local solutions. Moreover, a classification of local solutions in the vicinity of the origin is obtained. In particular, it is shown that there exists a neighborhood of $`r=0`$ such that the metric function has a fixed sign in it. Specifically, if the limiting value of the gauge function equals $`\pm 1`$, then all real solutions belong to the Schwarzschild and Bartnik–McKinnon type. In other cases, the solution behavior depends on the sign of the metric function. Namely, if the metric function is positive, then all solutions possess the behavior of the Reissner–Nordström type. If, on the contrary, the metric function is negative, then almost all solutions are such that the metric function oscillates with its amplitude growing unboundedly as $`r\to 0`$, but the gauge function is monotonous (though its derivative also oscillates with the unboundedly growing amplitude). Only particular solutions in this case exhibit asymptotic behavior of the “anti–Reissner–Nordström” type. This result also gives the negative answer to the question stated in , whether $`r=0`$ is a limit point for zeros of the metric function.
We have also considered the asymptotic behavior of solutions in the far field, $`r\gg 1`$, and in the vicinity of the points where the metric function tends to zero. This analysis leads to the discovery of two new local singular solutions and allows us to obtain some conclusions concerning the extendibility of solutions and their limiting behavior as the number of the gauge function nodes tends to infinity.
A detailed discussion of the physical interpretation of the EYM equations solutions can be found in .
## II THE EQUATIONS
Recall that the space-time metric for the static, spherically symmetric EYM equations can be written as
$$ds^2=\sigma ^2N\,dt^2-N^{-1}dr^2-r^2\left(d\vartheta ^2+\mathrm{sin}^2\vartheta \,d\phi ^2\right),$$
where $`N`$ and $`\sigma `$ depend on $`r`$, and the Yang–Mills gauge field reads as
$$A=(T_2\,d\vartheta -T_1\mathrm{sin}\vartheta \,d\phi )w+T_3\mathrm{cos}\vartheta \,d\phi ,$$
where $`T_i=\frac{1}{2}\tau _i`$ are the $`SU(2)`$ group generators and $`\tau _i`$ are the Pauli matrices, $`i=1,2,3`$ (see, e.g., ).
The EYM equations in this framework take the form of two ordinary differential equations for the metric function $`N`$ and the gauge function $`w`$:
$$\begin{array}{cc}& r^3N^{\prime }+\left(1+2w^{\prime 2}\right)r^2N+\left(1-w^2\right)^2-r^2=0,\hfill \\ & r^3Nw^{\prime \prime }-\left[\left(1-w^2\right)^2-r^2+r^2N\right]w^{\prime }+(1-w^2)rw=0,\hfill \end{array}$$
(1)
and a decoupled equation for $`\sigma `$:
$$\frac{\sigma ^{\prime }}{\sigma }=\frac{2w^{\prime 2}}{r}.$$
Since (1) do not involve $`\sigma `$, one can use these to obtain $`N`$ and $`w`$, and then solve the equation for $`\sigma `$. Thus we restrict our considerations to Eqs. (1). We also remark that (1) are invariant under the transformation $`r\to -r`$; thus, in what follows we discuss only the region $`r\ge 0`$.
For the purposes of studying the EYM solutions at finite $`r`$, it is convenient to rewrite (1) in terms of $`w`$ and $`u=r^2N`$. They become
$$\begin{array}{cc}& ru^{\prime }-\left(1-2w^{\prime 2}\right)u+\left(1-w^2\right)^2-r^2=0,\hfill \\ & ruw^{\prime \prime }-\left[u+\left(1-w^2\right)^2-r^2\right]w^{\prime }+\left(1-w^2\right)rw=0.\hfill \end{array}$$
(2)
Recall that the only known explicit solutions of (2) are the Schwarzschild solution
$$w\equiv \pm 1,\qquad u=ar+r^2,$$
(3)
and the Reissner–Nordström solution
$$w\equiv 0,\qquad u=1+br+r^2,$$
(4)
where $`a`$ and $`b`$ are arbitrary constants.
In order to apply the theory of dynamical systems to the analysis of the EYM equations (2), it is necessary to write them as an autonomous system of first-order differential equations. Thus we introduce the function $`v=w^{\prime }`$ and an independent variable $`t`$ defined by $`dr=rudt`$. After making these changes, we obtain the dynamical system
$$\begin{array}{cc}\hfill \dot{r}& =ru,\hfill \\ \hfill \dot{u}& =\left[\left(1-2v^2\right)u-\left(1-w^2\right)^2+r^2\right]u,\hfill \\ \hfill \dot{v}& =\left[u+\left(1-w^2\right)^2-r^2\right]v-\left(1-w^2\right)rw,\hfill \\ \hfill \dot{w}& =ruv.\hfill \end{array}$$
(5)
Notice that this system has solutions with $`r\equiv 0`$ and $`u\equiv 0`$, which do not correspond to solutions of (2).
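As a quick illustration of the analysis that follows, the vector field of (5) can be integrated numerically; starting slightly off the critical set $`\mathrm{𝐴𝑅𝑁}^+`$ (listed below) with $`u<0`$, the $`(u,v)`$ components spiral outward while $`r`$ shrinks toward zero, as established in Sec. IV. A minimal sketch (the initial data are an illustrative choice):

```python
# Sketch (not the authors' code): integrate system (5) with a generic solver.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    r, u, v, w = y
    a2 = (1 - w**2)**2
    return [r * u,
            ((1 - 2 * v**2) * u - a2 + r**2) * u,
            (u + a2 - r**2) * v - (1 - w**2) * r * w,
            r * u * v]

w0 = 0.5                                      # illustrative value, w0 != +-1
y0 = [1e-6, -1.01 * (1 - w0**2)**2, 1.0, w0]  # small kick off ARN^+
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)
print(sol.y[1].min(), sol.y[1].max())         # u < 0 oscillates, amplitude grows
```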
The first step to start analyzing (5) is to determine the critical points. It is easy to verify that the dynamical system (5) has the following critical sets:
$$\begin{array}{cc}\mathrm{𝐴𝑅𝑁}^\pm :\hfill & (0,-(1-w^2)^2,\pm 1,w),\hfill \\ \mathrm{𝑅𝑁}:\hfill & (0,(1-w^2)^2,0,w),\hfill \\ \mathrm{𝑆𝐵𝑀}^\pm :\hfill & (0,0,v,\pm 1),\hfill \\ W:\hfill & (0,0,0,w),\hfill \\ \mathrm{𝑅𝐻}:\hfill & (r,0,(1-w^2)rw/[(1-w^2)^2-r^2],w),\hfill \\ \mathrm{𝐷𝐻}^\pm :\hfill & (\pm 1,0,v,0).\hfill \end{array}$$
All the critical sets belong to the hyperplanes $`r=0`$ and/or $`u=0`$, in which the conditions of the existence–uniqueness theorem do not hold for (2).
In what follows we shall not give a global phase portrait for (5), but we shall mainly concentrate on the results that have direct consequence for the EYM equations (2).
## III PRELIMINARY INVESTIGATION OF THE ORIGIN NEIGHBORHOOD
Let us consider the projection of (5) into the hyperplane $`r=0`$. This immediately leads to $`w\equiv w_0=\mathrm{const}`$. Thus the dynamical system (5) reduces to
$$\begin{array}{cc}\hfill \dot{u}& =-\alpha ^2u+\left(1-2v^2\right)u^2,\hfill \\ \hfill \dot{v}& =\alpha ^2v+uv,\hfill \end{array}$$
(6)
where $`\alpha =1-w_0^2`$. Notice that (6) is invariant under the transformation $`v\to -v`$. Hence, the phase portrait will be symmetric with respect to the $`u`$-axis.
Since (6) contains a free parameter $`\alpha `$, it is convenient to split the analysis into two steps. Let us begin with $`\alpha =0`$. In this case, (6) reads as
$$\begin{array}{cc}& \dot{u}=\left(1-2v^2\right)u^2,\hfill \\ & \dot{v}=uv.\hfill \end{array}$$
(7)
One can easily solve this system. First, the critical points, which are the projection of the sets $`\mathrm{𝑆𝐵𝑀}^\pm `$ and $`W`$, give $`u\equiv 0`$, $`v\equiv \mathrm{const}`$. Next,
$$C_2-C_1t=\frac{1}{v}e^{v^2}-2\int _0^ve^{\stackrel{~}{v}^2}𝑑\stackrel{~}{v}=\frac{1}{v}e^{v^2}+i\sqrt{\pi }\mathrm{erf}(iv),\quad \text{and}\quad u=C_1ve^{-v^2},$$
where $`C_1`$ and $`C_2`$ are arbitrary constants, and $`v\ne 0`$. Finally, for $`v\equiv 0`$ one has $`u=(C-t)^{-1}`$, where $`C`$ is an arbitrary constant, $`t\ne C`$. We remark that if $`u<0`$, then the nontrivial solutions tend to zero as $`t\to +\mathrm{\infty }`$. In the opposite case, they tend to zero as $`t\to -\mathrm{\infty }`$. In particular, it follows that if $`lim_{r\to 0}w(r)=\pm 1`$ for a solution of (2), then $`lim_{r\to 0}w^{\prime }(r)=0`$.
The phase portrait of (7) is shown in Fig. 1.
Now let us study (6) for $`\alpha \ne 0`$ (i.e., for $`w_0\ne \pm 1`$). In this case, the system (6) has the following critical points: $`Z`$: $`(0,0)`$, $`A^\pm `$: $`(-\alpha ^2,\pm 1)`$, and $`R`$: $`(\alpha ^2,0)`$.
The point $`Z`$ is the projection of $`W`$. The eigenvalues of $`Z`$ are $`\lambda _u=-\alpha ^2`$ and $`\lambda _v=\alpha ^2`$. Thus, $`Z`$ is a saddle. It has four separatrices, which can be easily obtained explicitly. The repelling separatrices, denote them by $`S^\pm `$ in accordance with the sign of $`v`$, are tangent to the eigenvector $`\zeta _v=(0,1)`$. They can be written as
$$u\equiv 0,\qquad v=C_ve^{\alpha ^2t}.$$
(8)
Here and henceforth the letter $`C`$, with an alphabetical subscript ($`C_u`$, $`C_v`$, etc.), denotes a nonzero constant.
The attracting separatrices are tangent to the eigenvector $`\zeta _u=(1,0)`$. They take the form
$$u=\frac{\alpha ^2}{1+C_ue^{\alpha ^2t}},\qquad v\equiv 0.$$
(9)
One of these separatrices, denote it by $`S_1`$, belongs to the half-plane $`u<0`$. For $`S_1`$, $`C_u<0`$ and $`t>-\alpha ^{-2}\mathrm{ln}|C_u|`$. Another separatrix, $`S_2`$, lies in the half-plane $`u>0`$. It has $`C_u>0`$ and joins $`Z`$ to $`R`$ (see Fig. 2).
The point $`R`$ is the projection of the critical curve $`\mathrm{𝑅𝑁}`$. The eigenvalues of $`R`$ are $`\lambda _u=\alpha ^2`$ and $`\lambda _v=2\alpha ^2`$. Thus, it is an unstable node. Almost all trajectories that approach $`R`$ as $`t\to -\mathrm{\infty }`$, are tangent to the eigenvector $`\zeta _u=(1,0)`$. The corresponding separatrices have the form (9). One of them, namely, $`S_2`$ joins $`R`$ to $`Z`$. Another one, $`S_3`$, is defined for $`C_u<0`$ and $`t<-\alpha ^{-2}\mathrm{ln}|C_u|`$.
There are also two separatrices, denote them by $`T^\pm `$, which are tangent to the eigenvector $`\zeta _v=(0,1)`$. Let us show that they have the form
$$u=\alpha ^2\left(1-\frac{2}{3}v^2\right)+o(v^2)$$
(10)
as $`v\to 0`$. To see this, define $`u_v=(u-\alpha ^2)v^{-1}`$. Now the dynamical system (6) reads as
$$\begin{array}{cc}\hfill \dot{u}_v& =-\alpha ^2\left(u_v+2\alpha ^2v\right)-2\left(2\alpha ^2+u_vv\right)u_vv^2,\hfill \\ \hfill \dot{v}& =2\alpha ^2v+u_vv^2.\hfill \end{array}$$
For $`v=0`$, this system has a saddle $`(0,0)`$ with the eigenvalues $`\lambda _{u_v}=-\alpha ^2`$, $`\lambda _v=2\alpha ^2`$ and the eigenvectors $`\zeta _{u_v}=(1,0)`$, $`\zeta _v=(-\frac{2}{3}\alpha ^2,1)`$. The separatrices that are tangent to the eigenvector $`\zeta _{u_v}`$, belong to the line $`v=0`$. Hence, there are no corresponding trajectories of (6). Conversely, the eigenvector $`\zeta _v`$ determines the outgoing separatrices, which take the form $`u_v=-\frac{2}{3}\alpha ^2v+o(v)`$ as $`v\to 0`$. This yields (10).
Note that the same technique can be used to find the higher order terms in (10). This is also valid for the asymptotic solutions presented below.
Finally, the points $`A^\pm `$ represent projections of the critical curves $`\mathrm{𝐴𝑅𝑁}^\pm `$. The eigenvalues of $`A^\pm `$ are $`\lambda _{u,v}=\frac{1}{2}\alpha ^2(1\pm i\sqrt{15})`$. Thus, these critical points are repelling foci. It is important to note here that for all the trajectories that spiral away from $`A^\pm `$, the metric function $`u`$ is strictly negative because of the separatrices $`S^\pm `$, and $`v`$ preserves its sign due to the separatrix $`S_1`$ (or, the same, because of the above mentioned invariance of (6) under the transformation $`v\to -v`$). Since there are no other finite critical points in the half-plane $`u<0`$, the trajectories that spiral away from the points $`A^\pm `$, do not have limit cycles. We remark that this can also be easily proved if we define the Dulac function for (6) by $`(u^2v)^{-1}`$. Hence, $`u`$ and $`v`$ exhibit oscillations with the amplitude growing infinitely as $`t\to +\mathrm{\infty }`$.
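These classifications are easy to confirm numerically from the Jacobian of (6); a short sketch (with $`\alpha ^2=1`$ for brevity):

```python
# Numerical check of the critical-point types of system (6) at Z, R, A^+.
import numpy as np

a2 = 1.0
J = lambda u, v: np.array([[-a2 + 2 * (1 - 2 * v**2) * u, -4 * v * u**2],
                           [v, a2 + u]])

print(np.linalg.eigvals(J(0.0, 0.0)))   # -1, 1: saddle Z
print(np.linalg.eigvals(J(a2, 0.0)))    # 1, 2: unstable node R
print(np.linalg.eigvals(J(-a2, 1.0)))   # 0.5 +/- 1.94j: repelling focus A^+
```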
Fig. 2 shows the phase portrait of (6) near the points $`A^\pm `$, $`Z`$, and $`R`$. Notice that this portrait is drastically different from that one shown in Fig. 1. Thus, $`\alpha `$ is a bifurcation parameter for the dynamical system (6).
In order to obtain the global phase portrait of (6), one has to study the behavior of its trajectories at infinity. Using the standard transform to the projective coordinates, one can find out that the system (6) has four critical points at the $`(u,v)`$ phase plane boundary, namely, $`U^\pm `$: $`(u=\pm \mathrm{\infty },v=0)`$ and $`V^\pm `$: $`(u=0,v=\pm \mathrm{\infty })`$. The points $`U^\pm `$ are saddles, and $`V^\pm `$ are saddle-nodes. The separatrices $`S_3`$ and $`S_1`$ are the only trajectories, which approach the points $`U^\pm `$ from the finite region of the phase plane. The points $`V^\pm `$, besides the separatrices $`S^\pm `$, have ingoing trajectories, which emanate from $`R`$. These trajectories have the form $`u=\frac{1}{2}\alpha ^2v^{-2}+o(v^{-2})`$ as $`v\to \pm \mathrm{\infty }`$. The boundary of the phase plane contains two separatrices that join $`U^+`$ to $`V^\pm `$ and two separatrices that go from $`V^\pm `$ to $`U^{}`$.
## IV THE ORIGIN NEIGHBORHOOD
### A The critical curves $`\mathrm{𝐴𝑅𝑁}^\pm `$
Let us turn to the analysis of the dynamical system (5) near the critical curves $`\mathrm{𝐴𝑅𝑁}^\pm `$: $`(0,-(1-w^2)^2,\pm 1,w)`$ for $`w\ne \pm 1`$. The excluded points also belong to the lines $`\mathrm{𝑆𝐵𝑀}^\pm `$ and will be studied below. Notice that the curves $`\mathrm{𝐴𝑅𝑁}^\pm `$ (with the points $`w=\pm 1`$ excluded) lie in the region $`u<0`$. Hence, in a neighborhood of these curves, $`t`$ growing to infinity corresponds to decreasing $`r`$ in the EYM equations (2).
The eigenvalues of $`\mathrm{𝐴𝑅𝑁}^\pm `$ are $`\lambda _r=-(1-w^2)^2`$, $`\lambda _{u,v}=\frac{1}{2}(1-w^2)^2(1\pm i\sqrt{15})`$, and $`\lambda _w=0`$. It should be recalled at this point that an $`n`$-dimensional critical set necessarily has $`n`$ zero eigenvalues (see, e.g., ). Thus, the zero eigenvalue $`\lambda _w`$ corresponds to the fact that $`\mathrm{𝐴𝑅𝑁}^\pm `$ are one-dimensional sets of critical points. Since other eigenvalues have nonzero real part, $`\mathrm{𝐴𝑅𝑁}^\pm `$ are hyperbolic sets.
The eigenvalues $`\lambda _{u,v}`$ determine three-dimensional unstable manifolds $`M^\pm `$ of the curves $`\mathrm{𝐴𝑅𝑁}^\pm `$, respectively. Since $`\lambda _{u,v}`$ are complex, the trajectories that lie on $`M^\pm `$ describe oscillatory behavior of $`u`$ and $`v`$. The trajectories found above that spiral away from the points $`A^\pm `$ are the projection of the trajectories that lie on $`M^\pm `$ in the plane $`(r=0,w=w_0\ne \pm 1)`$. The separatrices $`S^\pm `$ and $`S_1`$ also have obvious counterparts for (5):
$$r\equiv 0,\qquad u\equiv 0,\qquad v=C_ve^{\alpha ^2t},\qquad w\equiv w_0,$$
(11)
and
$$r\equiv 0,\qquad u=\frac{\alpha ^2}{1+C_ue^{\alpha ^2t}},\qquad v\equiv 0,\qquad w\equiv w_0,$$
(12)
respectively, where $`w_0=\mathrm{const}\ne \pm 1`$, and $`C_u<0`$. These two-dimensional separatrices preserve the signs of $`u`$ and $`v`$ for the trajectories on $`M^\pm `$.
Recall that the trajectories that spiral away from $`A^\pm `$ do not have limit cycles. Evidently, the same is valid for the trajectories on $`M^\pm `$.
Next, due to the negative eigenvalue $`\lambda _r`$, each of the curves $`\mathrm{𝐴𝑅𝑁}^\pm `$ has two-dimensional stable separatrices, which are tangent to the eigenvectors
$$\zeta _r^\pm =(1,\pm 4(1-w^2)w,\frac{w}{1-w^2},\pm 1),$$
where the upper sign applies for $`\mathrm{𝐴𝑅𝑁}^+`$ and the lower one for $`\mathrm{𝐴𝑅𝑁}^{}`$. It is easy to see that these separatrices correspond to a one-parameter family of the EYM solutions that exist in the origin neighborhood. Thus, we conclude with
###### Proposition 1.
Let $`𝒰^{}`$ be a set of solutions for the EYM equations (2) such that they are defined in some neighborhood of $`r=0`$, $`u(r)<0`$ in this neighborhood, and $`lim_{r\to 0}w(r)=w_0<\mathrm{\infty }`$, $`w_0\ne \pm 1`$. Then $`𝒰^{}`$ is nonempty. Moreover, almost all solutions of (2) that belong to $`𝒰^{}`$, are monotonic for the gauge function $`w`$ and oscillating for the metric function $`u`$. These solutions have the following properties:
1. The amplitude of the metric function oscillations grows unboundedly as $`r\to 0`$.
2. The values of the metric function at the points of maximum form a sequence, which monotonically converges to zero as $`r\to 0`$.
3. The derivative of the gauge function also oscillates with the amplitude growing unboundedly as $`r\to 0`$, and gets closer to zero on each cycle of the oscillations, but its sign remains unchanged.
Besides these solutions, $`𝒰^{}`$ also contains a one-parameter family of local solutions of the “anti–Reissner–Nordström” type:
$$\begin{array}{cc}\hfill u& =-(1-w_0^2)^2\pm 4(1-w_0^2)w_0r+o(r),\hfill \\ \hfill w& =w_0\pm r+\frac{w_0}{2(1-w_0^2)}r^2+o(r^2)\hfill \end{array}$$
(13)
as $`r\to 0`$, where $`w_0\ne \pm 1`$.
Formal expansions of the form (13) and some corresponding numerical solutions were found in . This paper was also the first to present the oscillating solutions. Some of their properties were analyzed in .
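Expansion (13) can be verified symbolically by substituting it into Eqs. (2) and checking that the residuals vanish at the stated orders; a minimal sketch (upper signs taken, $`w_0`$ left symbolic):

```python
# Sketch: substitute the upper-sign expansion (13) into Eqs. (2) and check
# that the residuals have no constant or linear term in r.
import sympy as sp

r, w0 = sp.symbols('r w0')
u = -(1 - w0**2)**2 + 4 * (1 - w0**2) * w0 * r
w = w0 + r + w0 / (2 * (1 - w0**2)) * r**2

eq_a = r * sp.diff(u, r) - (1 - 2 * sp.diff(w, r)**2) * u + (1 - w**2)**2 - r**2
eq_b = (r * u * sp.diff(w, r, 2)
        - (u + (1 - w**2)**2 - r**2) * sp.diff(w, r)
        + (1 - w**2) * r * w)

print([sp.simplify(eq_a.diff(r, n).subs(r, 0)) for n in range(2)])  # [0, 0]
print([sp.simplify(eq_b.diff(r, n).subs(r, 0)) for n in range(2)])  # [0, 0]
```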
### B The critical curve $`\mathrm{𝑅𝑁}`$
Now let us study (5) in the vicinity of the curve $`\mathrm{𝑅𝑁}`$: $`(0,(1-w^2)^2,0,w)`$ for $`w\ne \pm 1`$. Similar to the above, the excluded points also belong to the critical sets $`\mathrm{𝑆𝐵𝑀}^\pm `$ (and $`W`$) and will be studied below. Notice that $`\mathrm{𝑅𝑁}`$ (with the points $`w=\pm 1`$ excluded) lies in the region $`u>0`$. Hence, in a neighborhood of $`\mathrm{𝑅𝑁}`$, $`t`$ growing to infinity corresponds to increasing $`r`$ in the EYM equations (2).
The eigenvalues of $`\mathrm{𝑅𝑁}`$ are $`\lambda _r=\lambda _u=(1-w^2)^2`$, $`\lambda _v=2(1-w^2)^2`$, and $`\lambda _w=0`$. Therefore, all trajectories of (5) in the vicinity of $`\mathrm{𝑅𝑁}`$ belong to an unstable four-dimensional manifold; they correspond to a three-parameter family of EYM solutions.
The eigenvalue $`\lambda _v`$ determines two-dimensional separatrices, which are tangent to the eigenvector $`\zeta _v=(0,0,1,0)`$ and take the form
$$r\equiv 0,\qquad w\equiv \mathrm{const}\ne \pm 1,\qquad u=\alpha ^2\left(1-\frac{2}{3}v^2\right)+o(v^2)\quad \text{as }v\to 0.$$
The separatrices $`T^\pm `$ found above represent their projection in the plane $`(r=0,w=w_0\ne \pm 1)`$. The EYM equations (2) do not have any corresponding solution. Conversely, the eigenvectors $`\zeta _r=(1,0,w/(1-w^2),0)`$ and $`\zeta _u=(0,1,0,0)`$ determine trajectories, which correspond to the EYM solutions
$$\begin{array}{cc}\hfill u& =(1-w_0^2)^2+u_1r+o(r),\hfill \\ \hfill w& =w_0+\frac{w_0}{2(1-w_0^2)}r^2+o(r^2)\hfill \end{array}$$
(14)
as $`r\to 0`$, where $`w_0\ne \pm 1`$ and $`u_1`$ are arbitrary constants, and the higher order terms contain one more parameter.
Let us show how one can choose the third parameter in (14). The procedure will be similar to that one used for obtaining (10). Namely, consider (5) in the local coordinates
$$r,\qquad u_r=\frac{u-(1-w^2)^2}{r},\qquad v_r=\frac{v}{r},\qquad w.$$
Then the corresponding dynamical system, which we omit for brevity, has the critical surface $`(0,u_r,w/(1-w^2),w)`$. (The other critical sets either have $`w=\pm 1`$ or do not belong to the hyperplane $`r=0`$.) The eigenvalues of this surface are $`\lambda _r=\lambda _{v_r}=(1-w^2)^2`$ and $`\lambda _{u_r}=\lambda _w=0`$. It follows that all trajectories in a neighborhood of this surface belong to an unstable four-dimensional manifold. The nonzero eigenvalues have the eigenvectors $`\zeta _r=(1,1+2w^2,0,0)`$ and $`\zeta _{v_r}=(0,0,1,0)`$. Thus, all the trajectories assume the form
$$u_r=u_1+(1+2w_0^2)r+o(r),\qquad v_r=\frac{w_0}{1-w_0^2}+v_1r+o(r),\qquad w=w_0+o(r)$$
as $`r\to 0`$, where $`w_0\ne \pm 1`$, $`u_1`$, and $`v_1`$ are arbitrary constants. Changing back to the initial variables and taking into account the above discussion, one has
###### Proposition 2.
Let $`𝒰^+`$ be a set of solutions for the EYM equations (2) such that they are defined in some neighborhood of $`r=0`$, $`u(r)>0`$ in this neighborhood, and $`lim_{r\to 0}w(r)=w_0<\mathrm{\infty }`$, $`w_0\ne \pm 1`$. Then $`𝒰^+`$ is nonempty. Moreover, all solutions of (2) that belong to $`𝒰^+`$, form a three-parameter family of local solutions of the Reissner–Nordström type:
$$\begin{array}{cc}\hfill u& =(1-w_0^2)^2+u_1r+r^2+o(r^2),\hfill \\ \hfill w& =w_0+\frac{w_0}{2(1-w_0^2)}r^2+w_3r^3+o(r^3)\hfill \end{array}$$
(15)
as $`r\to 0`$, where $`w_0`$, $`u_1`$ and $`w_3`$ are arbitrary constants, $`w_0\ne \pm 1`$.
A formal power series expansion (15) was presented in . Some black hole solutions with this asymptotic behavior were first found numerically in . The local existence proof for these solutions was given in . We remark that here we follow the terminology introduced in , which is slightly different from that used in and .
### C The critical lines $`\mathrm{𝑆𝐵𝑀}^\pm `$
The lines $`\mathrm{𝑆𝐵𝑀}^\pm `$: $`(0,0,v,\pm 1)`$ are degenerate critical sets, since the eigenvalues $`\lambda _r`$, $`\lambda _u`$, and $`\lambda _w`$ are equal to zero. In order to study the behavior of trajectories of (5) in a neighborhood of these lines, we use the standard technique .
First, define $`\overline{w}`$ by $`w=\overline{w}\pm 1`$, where the upper sign applies for $`\mathrm{𝑆𝐵𝑀}^+`$ and the lower one for $`\mathrm{𝑆𝐵𝑀}^{}`$. Now the lines $`\mathrm{𝑆𝐵𝑀}^\pm `$ are transformed to the $`v`$-axis. Next, introduce the local coordinates
$$r_u=\frac{r}{u},u,v,w_u=\frac{\overline{w}}{u},$$
(16)
in which the dynamical system (5) reads as
$$\begin{array}{cc}\hfill \dot{r}_u& =(2v^2+K)r_u,\hfill \\ \hfill \dot{u}& =(12v^2K)u,\hfill \\ \hfill \dot{v}& =(1+K)v+(1\pm uw_u)(2\pm uw_u)r_uuw_u,\hfill \\ \hfill \dot{w}_u& =(12v^2K)w_u+r_uv,\hfill \end{array}$$
(17)
where $`K=[(2\pm uw_u)^2w_u^2r_u^2]u`$, and an overdot stands for derivatives with respect to $`\tau `$ defined by $`d\tau =udt`$ (thus, $`\tau =\mathrm{ln}r+\mathrm{const}`$).
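The analysis that follows can be mechanized. The sketch below (ours, not part of the original analysis; Python with SymPy) linearizes a polynomial vector field at a critical point and reads off the eigenvalues and eigenvectors. The toy field `f` is a placeholder chosen for illustration, not the system (17) itself, whose transcribed signs may differ; in principle the same helper can be applied in each local chart used below.

```python
import sympy as sp

def linearize(f, x, x0):
    """Eigen-data of the Jacobian of the vector field f at the critical point x0."""
    J = sp.Matrix(f).jacobian(sp.Matrix(x))   # symbolic Jacobian of the field
    J0 = J.subs(dict(zip(x, x0)))             # evaluate at the critical point
    return J0, J0.eigenvects()                # [(eigenvalue, multiplicity, [vectors])]

# Toy planar field with a hyperbolic saddle at the origin (NOT system (17)):
u, v = sp.symbols('u v')
J0, ev = linearize([u * (1 - u) - u * v, -v + u * v], (u, v), (0, 0))
for lam, mult, vecs in ev:
    print(lam, mult, [list(w) for w in vecs])   # 1 and -1: unstable/stable directions
```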
The system (17) has one critical set in the hyperplane $`u=0`$, namely, the $`r_u`$-axis, which is an unstable hyperbolic line. The corresponding eigenvalues are $`\lambda _{r_u}=0`$, $`\lambda _u=\lambda _v=1`$, and $`\lambda _{w_u}=1`$. The two-dimensional ingoing separatrices
$$r_u\mathrm{const},u0,v0,w_u=C_we^\tau ,$$
which are tangent to the eigenvector $`\zeta _{w_u}=(0,0,0,1)`$, belong to the hyperplane $`u=0`$. Hence, they do not correspond to any trajectories of (5). In turn, the eigenvalues $`\lambda _u`$ and $`\lambda _v`$, which have the eigenvectors $`\zeta _u=(r_u^3,1,0,0)`$ and $`\zeta _v=(0,0,2,r_u)`$, determine the outgoing three-dimensional separatrices
$$r_u=r_0r_0^3u+o(u),v=2v_1u+o(u),w_u=r_0v_1u+o(u)$$
as $`u0`$, where $`r_0`$ and $`v_1`$ are arbitrary constants. This implies
###### Proposition 3.
All solutions of the EYM equations (2) such that $`lim_{r0}w(r)=\pm 1`$ belong to a two-parameter family of local solutions of the Schwarzschild type:
$$\begin{array}{cc}\hfill u& =u_1r+r^2+o(r^2),\hfill \\ \hfill w& =\pm 1+w_2r^2+o(r^2)\hfill \end{array}$$
(18)
as $`r0`$, where $`u_1`$ and $`w_2`$ are arbitrary constants.
In particular, these local solutions describe the behavior of the Bartnik–McKinnon particle-like solutions (for $`u_1=0`$) and the black hole solutions of the Schwarzschild type in the vicinity of the origin.
It is interesting to note that the separatrices that are tangent to the eigenvector $`\zeta _u`$, can be written as
$$u=\frac{1+u_1r_u}{r_u^2},v0,w_u0,$$
where $`u_1`$ is an arbitrary constant, and $`r_u0`$. Obviously, these separatrices correspond to the Schwarzschild solution (3).
We also remark that the coordinates (16) enable us to resolve the degeneracy of the $`v`$-axis along the $`u`$-direction. Analysis of the $`r`$- and $`w`$-directions leads to the same conclusion for the EYM equations as stated in Proposition 3.
We emphasize here that we discuss only real EYM solutions, though the EYM equations also possess complex solutions. For example, a study of (5) in the local coordinates $`(r,u/r^2,v,\overline{w}/r)`$ leads to the discovery of complex EYM solutions of the form
$$\begin{array}{cc}& u=\left(1+4w_1^2\right)\left(r^2+w_0w_1r^3\right)+o(r^3),\hfill \\ & w=w_0+w_1r\frac{1}{8}w_0r^2+o(r^2)\hfill \end{array}$$
as $`r0`$, where $`w_0=\pm 1`$ and $`w_1=\pm (\sqrt{3}\pm i\sqrt{5})/4`$. These solutions do not have free parameters.
### D The critical line $`W`$
The eigenvalues of the critical line $`W`$: $`(0,0,0,w)`$ are $`\lambda _r=0`$, $`\lambda _u=(1w^2)^2`$, $`\lambda _v=(1w^2)^2`$, and $`\lambda _w=0`$. Hence, $`W`$ is degenerate for any $`w`$. The eigenvalues $`\lambda _u`$ and $`\lambda _v`$ are nonzero and have different signs whenever $`w\pm 1`$. In this case, $`W`$ is an unstable critical set with the outgoing two-dimensional separatrices (11) and the ingoing two-dimensional separatrices (12). Investigation of (5) in the vicinity of $`W`$ gives the same result for the EYM equations as already stated in Proposition 3. For this reason we omit the discussion.
Thus, we have obtained a description of the EYM solutions that have finite values of the gauge function $`w`$ in a neighborhood of the origin. It follows from our considerations that for all these solutions $`lim_{r0}w(r)=w_0<\mathrm{}`$. A standard analysis of the dynamical system (5) in the projective coordinates $`(rz,uz,vz,z=1/w)`$ reveals that Eqs. (2) do not have solutions such that $`lim_{r0}w(r)=\mathrm{}`$. Hence, the results of this section can be summarized in the following classification of the EYM solutions defined in the vicinity of the origin.
###### Theorem.
All real solutions of the EYM equations (2), defined in a neighborhood of $`r=0`$, belong to one of the following disjoint classes:
1. $`w_0=\pm 1`$. In this case, all solutions are of the Schwarzschild and Bartnik–McKinnon type (18).
2. $`w_0\pm 1`$, and the metric function $`u`$ is negative in some neighborhood of $`r=0`$. In this case, almost all solutions are such that the metric function oscillates with unboundedly growing amplitude as $`r0`$, while the gauge function is monotonic (though its derivative also oscillates with unboundedly growing amplitude). Only particular solutions in this case exhibit asymptotic behavior of the “anti–Reissner–Nordström” type (13).
3. $`w_0\pm 1`$, and the metric function $`u`$ is positive in some neighborhood of $`r=0`$. In this case, all solutions belong to the Reissner–Nordström type (15).
We remark that this classification scheme explains why almost all interior black hole solutions, found numerically in , exhibit oscillatory behavior of the metric.
## V CRITICAL SETS FOR $`r0`$
### A The critical surface $`\mathrm{𝑅𝐻}`$
Now let us discuss the remaining critical sets of (5). The surface $`\mathrm{𝑅𝐻}`$: $`(r,0,(1w^2)rw/[(1w^2)^2r^2],w)`$ is an unstable hyperbolic set. The eigenvalues of $`\mathrm{𝑅𝐻}`$ are $`\lambda _r=0`$, $`\lambda _u=[(1w^2)^2r^2]`$, $`\lambda _v=(1w^2)^2r^2`$, and $`\lambda _w=0`$. The three-dimensional separatrices that are tangent to the eigenvector $`\zeta _v=(0,0,1,0)`$ read as
$$rr_0=\mathrm{const},u0,v=\frac{\gamma }{\beta }+C_ve^{\beta t},ww_0=\mathrm{const},$$
where $`\beta =(1w_0^2)^2r_0^20`$ and $`\gamma =(1w_0^2)r_0w_0`$. Obviously, they do not correspond to any EYM solution. In turn, the three-dimensional separatrices that are tangent to the eigenvector
$$\zeta _u=(1,\frac{G}{r},\frac{\{2F^4[F^3+(13w^2)r^2]r^2\}Fw}{2G^3},\frac{Frw}{G}),$$
where $`F=1w^2`$ and $`G=F^2r^2`$, correspond to a two-parameter family of local EYM solutions. It is convenient to fix one of these parameters and to write down these solutions as follows.
###### Proposition 4.
For any fixed $`r_h>0`$, the EYM equations (2) possess a one-parameter family of local solutions of the form
$$\begin{array}{cc}\hfill u& =\frac{(1w_h^2)^2r_h^2}{r_h}s+o(s),\hfill \\ \hfill w& =w_h+\frac{(1w_h^2)r_hw_h}{(1w_h^2)^2r_h^2}s\hfill \\ & +\frac{(1w_h^2)\{2(1w_h^2)^4[(1w_h^2)^3+(13w_h^2)r_h^2]r_h^2\}w_h}{4[(1w_h^2)^2r_h^2]^3}s^2+o(s^2)\hfill \end{array}$$
(19)
as $`s=rr_h0`$, where $`w_h`$ is a constant, satisfying $`|1w_h^2|r_h`$.
The local solutions (19) represent the EYM solutions in the vicinity of a regular horizon. For the black hole solutions, this is either an event horizon (if $`r_h>|1w_h^2|`$), or an interior Cauchy horizon (if $`r_h<|1w_h^2|`$). The first existence proof for these local solutions was given in . Some black hole solutions with an interior horizon were found numerically in .
Note that $`\mathrm{𝑅𝐻}`$ transforms to the line $`W`$ for $`r=0`$.
### B The critical lines $`\mathrm{𝐷𝐻}^\pm `$
The lines $`\mathrm{𝐷𝐻}^\pm `$: $`(\pm 1,0,v,0)`$ are degenerate. All their eigenvalues are equal to zero. Since the EYM equations (2) are invariant under the transformation $`rr`$, we shall study (5) only in the vicinity of the line $`\mathrm{𝐷𝐻}^+`$.
It is convenient to use the local coordinates
$$\overline{r}=r1,u_r=\frac{u}{\overline{r}^2},v,w_r=\frac{w}{\overline{r}},$$
in which the dynamical system (5) reads as
$$\begin{array}{cc}\hfill \dot{\overline{r}}& =(1+\overline{r})\overline{r}u_r,\hfill \\ \hfill \dot{u}_r& =\left[2\left(2+\overline{r}+2\overline{r}v^2\right)u_r+\left(1+2w_r^2\right)\overline{r}\overline{r}^3w_r^4\right]u_r,\hfill \\ \hfill \dot{v}& =2vw_r\left(v+w_ru_rv+2vw_r^2\right)\overline{r}+\overline{r}^2w_r^3+(1+vw_r)\overline{r}^3w_r^3,\hfill \\ \hfill \dot{w}_r& =(1+\overline{r})(vw_r)u_r,\hfill \end{array}$$
(20)
where an overdot stands for derivatives with respect to $`\tau `$ defined by $`d\tau =\overline{r}dt`$.
There are two critical sets of (20) in the hyperplane $`\overline{r}=0`$, namely, the point $`D`$: $`(0,1,0,0)`$ and the line $`L`$: $`(0,0,\frac{1}{2}w_r,w_r)`$. The eigenvalues of $`D`$ are $`\lambda _{\overline{r}}=1`$, $`\lambda _{u_r}=2`$, and $`\lambda _{v,w_r}=\frac{1}{2}(3\pm i\sqrt{3})`$. Thus, $`D`$ is a saddle. The outgoing one-dimensional separatrices
$$\overline{r}=\frac{1}{1+C_re^\tau },u_r1,v0,w_r0,$$
which are tangent to the eigenvector $`\zeta _{\overline{r}}=(1,0,0,0)`$, correspond to the extreme Reissner–Nordström solution
$$w0,u=(1r)^2.$$
(21)
Besides this, in the vicinity of $`D`$ there exists a stable three-dimensional manifold. In order to determine whether this manifold belongs to the hyperplane $`\overline{r}=0`$, one may study a projection of (20) into $`\overline{r}=\mathrm{const}`$. It turns out that the critical point $`(1,0,0)`$ exists for any $`\overline{r}`$ and has the eigenvalues $`\lambda _{u_r}=2\overline{r}`$ and $`\lambda _{v,w_r}=\frac{1}{2}(3+\overline{r})\pm i\sqrt{(3+\overline{r})(1+3\overline{r})}`$. Hence, for any $`\overline{r}>1/3`$ all the eigenvalues have negative real part, and $`\lambda _v`$ and $`\lambda _{w_r}`$ are complex conjugate. Thus, the stable three-dimensional manifold does not belong to the hyperplane $`\overline{r}=0`$, and the trajectories on this manifold correspond to a two-parameter family of local EYM solutions.
It is interesting to note that the projection of (20) into the plane $`(\overline{r}=0,u_r=1)`$ gives a system of two linear differential equations
$$\begin{array}{cc}\hfill \dot{v}& =-2v-w_r,\hfill \\ \hfill \dot{w}_r& =v-w_r,\hfill \end{array}$$
(22)
which can be easily solved:
$$\begin{array}{cc}\hfill v& =\left[C_1\mathrm{cos}\left(\frac{\sqrt{3}}{2}\tau \right)-\frac{\sqrt{3}}{3}(C_1+2C_2)\mathrm{sin}\left(\frac{\sqrt{3}}{2}\tau \right)\right]\mathrm{exp}\left(-\frac{3}{2}\tau \right),\hfill \\ \hfill w_r& =\left[C_2\mathrm{cos}\left(\frac{\sqrt{3}}{2}\tau \right)+\frac{\sqrt{3}}{3}(2C_1+C_2)\mathrm{sin}\left(\frac{\sqrt{3}}{2}\tau \right)\right]\mathrm{exp}\left(-\frac{3}{2}\tau \right),\hfill \end{array}$$
where $`C_1`$ and $`C_2`$ are the constants of integration. Thus, the projection of (20) into $`(\overline{r}=0,u_r=1)`$ represents linear oscillations of $`v`$ and $`w_r`$ with infinitely many zeros.
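As a quick numerical illustration (a sketch only, under the sign conventions reconstructed in (22) above), one can integrate the linear system and observe the decaying oscillations directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

# System (22) with the signs reconstructed above:
#   dv/dtau   = -2 v - w_r,   dw_r/dtau = v - w_r
def rhs(tau, y):
    v, w = y
    return [-2.0 * v - w, v - w]

sol = solve_ivp(rhs, (0.0, 12.0), [1.0, 0.0], rtol=1e-12, atol=1e-14,
                dense_output=True)
tau = np.linspace(0.0, 12.0, 4000)
v = sol.sol(tau)[0]

# The amplitude decays as exp(-3 tau / 2), and consecutive zeros of v are
# spaced by 2*pi/sqrt(3) in tau: infinitely many zeros as tau -> +infinity.
print(np.sum(np.diff(np.sign(v)) != 0))        # 3 zero crossings on [0, 12]
print(np.max(np.abs(v) / np.exp(-1.5 * tau)))  # bounded envelope, <= 2/sqrt(3)
```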
It is easy to see that $`\overline{r}0`$ corresponds to $`\tau \mathrm{}`$, so that $`w^{}`$ diverges as $`r1`$. Conversely, $`r`$ goes away from $`1`$ as $`\tau +\mathrm{}`$, and these solutions tend to the extreme Reissner–Nordström solution. Thus, we have
###### Proposition 5.
There exists a neighborhood of $`r=1`$, in which the EYM equations (2) have a two-parameter family of local solutions with the gauge function oscillating with infinitely many zeros, and the metric function tending to zero.
These solutions can be treated as a description of the limiting behavior of the EYM solutions in the vicinity of $`r=1`$ as the number of the gauge function nodes tends to infinity. The limiting behavior of the EYM solutions was first studied in . Solutions that exhibit oscillations of $`w`$ were first discussed in . But, in addition to the results of , we see that the gauge function may have infinitely many zeros not only to the left of $`r=1`$, but also to the right.
Finally, the line $`L`$ has the eigenvalues $`\lambda _{\overline{r}}=0`$, $`\lambda _{u_r}=2`$, $`\lambda _v=2`$, and $`\lambda _{w_r}=0`$. Thus, $`L`$ is a degenerate set. The eigenvectors $`\zeta _{u_r}=(0,1,\frac{3}{16}w_r,\frac{3}{4}w_r)`$ and $`\zeta _v=(0,0,1,0)`$ determine two-dimensional separatrices, which lie in the hyperplane $`\overline{r}=0`$. Thus, they do not correspond to any EYM solution. Further investigation of the line $`L`$ did not reveal any trajectories of (5) that have corresponding EYM solutions different from those discussed above.
## VI SOLUTIONS WITH A SINGULAR HORIZON
A typical EYM solution cannot be continued to the far field, since it has a singular horizon, i.e., a point at which the metric function tends to zero and the gauge function stays finite, but its derivative diverges . This fact was first noticed in , and a power series expansion describing the behavior of the EYM solutions in the vicinity of a singular horizon was given. Let us show how one can obtain singular EYM solutions using dynamical systems methods. This will also lead us to the discovery of two new local solutions.
Let us rewrite the dynamical system (5) as
$$\begin{array}{cc}\hfill \dot{r}& =ruz^2,\hfill \\ \hfill \dot{u}& =\left[\left(2z^2\right)u+\left(1w^2\right)^2z^2r^2z^2\right]u,\hfill \\ \hfill \dot{z}& =\left[u+\left(1w^2\right)^2r^2\left(1w^2\right)rzw\right]z^3,\hfill \\ \hfill \dot{w}& =ruz,\hfill \end{array}$$
(23)
where $`z=1/v`$, and an overdot stands for derivatives with respect to $`\tau `$ defined by $`dt=z^2d\tau `$.
For $`z=0`$, the critical points of (23) form a degenerate plane $`(r,0,0,w)`$. In order to study (23) in the vicinity of this plane, we introduce the local coordinates $`(r,u_z=u/z^2,z,w)`$, in which (23) can be written as
$$\begin{array}{cc}\hfill \dot{r}& =ru_zz^2,\hfill \\ \hfill \dot{u}_z& =\left[\left(23z^2\right)u_z\left(1w^2\right)^2+r^2+2\left(1w^2\right)rzw\right]u_z,\hfill \\ \hfill \dot{z}& =\left[u_zz^2+\left(1w^2\right)^2r^2\left(1w^2\right)rzw\right]z,\hfill \\ \hfill \dot{w}& =ru_zz,\hfill \end{array}$$
(24)
where an overdot stands for derivatives with respect to $`t`$.
The system (24) has two critical sets in the hyperplane $`z=0`$, namely, the plane $`RW_1`$: $`(r,0,0,w)`$ and the surface $`\mathrm{𝑆𝐻}`$: $`(r,[(1w^2)^2r^2]/2,0,w)`$, which are nondegenerate whenever
$$r|1w^2|.$$
(25)
The eigenvalues of $`RW_1`$ are $`\lambda _r=0`$, $`\lambda _{u_z}=(1w^2)^2r^2`$, $`\lambda _z=\lambda _{u_z}`$, and $`\lambda _w=0`$. Thus, $`RW_1`$ is an unstable hyperbolic set. One can easily see that if (25) holds, then the eigenvectors $`\zeta _{u_z}=(0,1,0,0)`$ and $`\zeta _z=(0,0,1,0)`$ determine the three-dimensional separatrices
$$rr_0=\mathrm{const},u_z=\frac{\beta }{2+C_ue^{\beta t}},z0,ww_0=\mathrm{const},$$
and
$$rr_0,u_z0,z=\frac{\beta }{\gamma +C_ze^{\beta t}},ww_0,$$
respectively, where $`\beta =(1w_0^2)^2r_0^20`$ and $`\gamma =(1w_0^2)r_0w_0`$. Clearly, these separatrices do not correspond to any EYM solution.
Next, the eigenvalues of $`\mathrm{𝑆𝐻}`$ are $`\lambda _r=0`$, $`\lambda _{u_z}=\lambda _z=[(1w^2)^2r^2]`$, and $`\lambda _w=0`$. Hence, if (25) holds, then all trajectories of (24) in the vicinity of $`\mathrm{𝑆𝐻}`$ belong to a four-dimensional manifold. The eigenvalues $`\lambda _{u_z}`$ and $`\lambda _z`$ have the eigenvectors $`\zeta _{u_z}=(0,1,0,0)`$ and $`\zeta _z=(0,0,1,\frac{1}{2}r)`$. Thus, all trajectories in the vicinity of $`\mathrm{𝑆𝐻}`$ take the form
$$r=r_0+o(z),u_z=\frac{\beta }{2}+u_1z+o(z),w=w_0\frac{r_0}{2}z+o(z)$$
as $`z0`$, where $`\beta 0`$, and $`u_1`$ is an arbitrary constant. This leads to
###### Proposition 6.
Let $`r_0>0`$ be a point such that
$$\underset{rr_0}{lim}u(r)=0,\underset{rr_0}{lim}w(r)=w_0<\mathrm{},\underset{rr_0}{lim}w^{}(r)=\mathrm{},$$
and $`\beta =(1w_0^2)^2r_0^20`$. Then all solutions of the EYM equations (2) in the vicinity of $`r_0`$ have the form
$$u=\frac{2\beta }{r_0}s^2+u_1s^3+o(s^3),w=w_0\pm \sqrt{r_0}s+o(s)$$
(26)
as $`s=\sqrt{r_0r}0`$, where $`u_1`$ is an arbitrary constant. These solutions do not have other parameters besides $`w_0`$ and $`u_1`$.
Thus, local solutions (26) exist in the vicinity of any point $`r_0>0`$, $`r_0|1w_0^2|`$.
Notice that solutions (19) were determined by the trajectories that belong to a three-dimensional manifold. Unlike them, singular solutions (26) correspond to the trajectories that lie on a four-dimensional manifold. It follows immediately that in the vicinity of an arbitrary point $`r_0>0`$ such that $`lim_{rr_0}u(r)=0`$, $`lim_{rr_0}w(r)=w_0`$, and $`r_0|1w_0^2|`$, almost all solutions of the EYM equations (2) exhibit asymptotic behavior (26) and, therefore, cannot be continued toward $`r=\mathrm{}`$.
Let us also mention that it follows from (26) and the above analysis of the critical sets $`\mathrm{𝑅𝐻}`$ and $`\mathrm{𝐷𝐻}^\pm `$ that almost all EYM solutions, defined at an arbitrary finite point $`r_0>0`$, can be continued to the left for all $`r<r_0`$. A detailed investigation of extendibility of solutions of the EYM equations can be found in .
Now let us study (23) in the vicinity of the curve $`r=|1w^2|`$. We start with $`|w|1`$. In the local coordinates
$$r_z=\frac{1}{z}(r1+w^2),u_z=\frac{u}{z^3},z,w,$$
the dynamical system (23) reads as
$$\begin{array}{cc}\hfill \dot{r}_z& =\left(1w^2\right)\left(z+2w\right)u_z+\left[2(z+w)u_zz\left(1w^2\right)^2w\right]r_z\hfill \\ & \left(1w^2\right)\left(2+zw\right)r_z^2r_z^3z,\hfill \\ \hfill \dot{u}_z& =\left[2\left(12z^2\right)u_z+3\left(1w^2\right)^2w+\left(1w^2\right)\left(4+3zw\right)r_z+2r_z^2z\right]u_z,\hfill \\ \hfill \dot{z}& =\left[\left(1w^2\right)^2wu_zz^2+\left(1w^2\right)\left(2+zw\right)r_z+r_z^2z\right]z,\hfill \\ \hfill \dot{w}& =\left(1w^2+r_zz\right)u_zz,\hfill \end{array}$$
(27)
where an overdot stands for derivatives with respect to $`\tau `$ defined by $`d\tau =zdt`$.
The system (27) has six critical sets in the hyperplane $`z=0`$, namely, $`\mathrm{𝐿𝑊}`$: $`(0,0,0,w)`$, $`\mathrm{𝐿𝑅}^\pm `$: $`(r_z,0,0,\pm 1)`$, $`\mathrm{𝐶𝑅}`$: $`(\frac{1}{2}(1w^2)w,0,0,w)`$, $`RU_1`$: $`((1w^2)w,\frac{1}{2}(1w^2)^2w,0,w)`$, and $`RU_2`$: $`(\frac{3}{2}(1w^2)w,\frac{3}{2}(1w^2)^2w,0,w)`$. Analysis of the first four critical sets does not reveal any trajectories of (27) that have corresponding EYM solutions. Thus we discuss only the curves $`RU_1`$ and $`RU_2`$.
The eigenvalues of $`RU_1`$ are $`\lambda _{r_z}=\lambda _{u_z}=(1w^2)^2w`$, $`\lambda _z=(1w^2)^2w`$, and $`\lambda _w=0`$. Thus, $`RU_1`$ is an unstable hyperbolic critical set whenever
$$w0,\pm 1.$$
(28)
In this case, all trajectories on a three-dimensional manifold, determined by $`\lambda _{r_z}`$ and $`\lambda _{u_z}`$, are tangent to the eigenvector $`\zeta _{r_z,u_z}=(1,(1w^2),0,0)`$, which defines the two-dimensional separatrices
$$u_z=\frac{1}{2}\alpha ^2w_0\alpha r_z,z0,ww_00,\pm 1.$$
One of these separatrices joins $`RU_1`$ to $`\mathrm{𝐶𝑅}`$. However, the whole manifold belongs to the hyperplane $`z=0`$. Thus, the trajectories on it do not correspond to any EYM solution.
In contrast, the two-dimensional separatrices that are tangent to the eigenvector
$$\zeta _z=(\frac{1}{8}\left(1w^2\right)\left(35w^2\right),\frac{1}{4}\left(1w^2\right)^2\left(23w^2\right),1,\frac{1}{2}\left(1w^2\right)),$$
take the form
$$\begin{array}{cc}\hfill r_z& =\alpha w_0+\frac{1}{8}\alpha \left(35w_0^2\right)z+o(z),\hfill \\ \hfill u_z& =\frac{1}{2}\alpha ^2w_0\frac{1}{4}\alpha ^2\left(23w_0^2\right)z+o(z),\hfill \\ \hfill w& =w_0\frac{1}{2}\alpha z+o(z)\hfill \end{array}$$
(29)
as $`z0`$, where $`w_00,\pm 1`$, and thus have corresponding EYM solutions.
Finally, the eigenvalues of $`RU_2`$ are $`\lambda _{r_z}=(1w^2)^2w`$, $`\lambda _{u_z}=3(1w^2)^2w`$, $`\lambda _z=2(1w^2)^2w`$, and $`\lambda _w=0`$. Hence, $`RU_2`$ is also an unstable hyperbolic critical set whenever (28) holds. The eigenvector $`\zeta _{u_z}=(1,(1w^2),0,0)`$ defines the two-dimensional separatrices
$$u_z=\alpha r_z,z0,ww_00,\pm 1.$$
One of them joins $`RU_2`$ to $`\mathrm{𝐿𝑊}`$. Besides these separatrices, there is also a three-dimensional manifold, defined by the eigenvalues $`\lambda _{r_z}`$ and $`\lambda _z`$. Almost all trajectories on this manifold are tangent to the eigenvector $`\zeta _{r_z}=(1,3(1w^2),0,0)`$, which determines the separatrices
$$u_z=\frac{3}{2}\alpha ^2w_03\alpha s+o(s)\text{as }s=r_z+\frac{3}{2}\alpha w_00,z0,ww_0,$$
where $`w_00,\pm 1`$. In addition, there exist two-dimensional separatrices, which are tangent to the eigenvector
$$\zeta _z=(\frac{3}{40}\left(1w^2\right)\left(1312w^2\right),\frac{9}{40}\left(1w^2\right)^2\left(119w^2\right),1,\frac{3}{4}\left(1w^2\right))$$
and take the form
$$\begin{array}{cc}\hfill r_z& =\frac{3}{2}\alpha w_0+\frac{3}{40}\alpha \left(1312w_0^2\right)z+o(z),\hfill \\ \hfill u_z& =\frac{3}{2}\alpha ^2w_0\frac{9}{40}\alpha ^2\left(119w_0^2\right)z+o(z),\hfill \\ \hfill w& =w_0\frac{3}{4}\alpha z+o(z)\hfill \end{array}$$
(30)
as $`z0`$, where $`w_00,\pm 1`$. These separatrices, together with (29), have corresponding singular EYM solutions. Conversely, all trajectories defined by the eigenvalues $`\lambda _{u_z}`$ and $`\lambda _{r_z}`$ belong to the hyperplane $`z=0`$ and have counterparts neither for the dynamical system (23) nor for the EYM equations.
Analysis of (23) in the vicinity of the curve $`r=(1w^2)`$ for $`|w|1`$ is completely analogous to the previous case. The dynamical system (23), written in the local coordinates
$$r_z=\frac{1}{z}(r+1w^2),u_z,z,w,$$
has the same critical sets in the hyperplane $`z=0`$ as (27), except for the curves $`RU_1`$ and $`RU_2`$, which in this case have the opposite sign of $`u_z`$. Asymptotic formulas (29) and (30) convert to
$`r_z`$ $`=\alpha w_0\frac{1}{8}\alpha \left(35w_0^2\right)z+o(z),`$
$`u_z`$ $`=\frac{1}{2}\alpha ^2w_0\frac{1}{4}\alpha ^2\left(23w_0^2\right)z+o(z),`$
$`w`$ $`=w_0+\frac{1}{2}\alpha z+o(z),`$
and
$`r_z`$ $`=\frac{3}{2}\alpha w_0\frac{3}{40}\alpha \left(1312w_0^2\right)z+o(z),`$
$`u_z`$ $`=\frac{3}{2}\alpha ^2w_0\frac{9}{40}\alpha ^2\left(119w_0^2\right)z+o(z),`$
$`w`$ $`=w_0+\frac{3}{4}\alpha z+o(z)`$
as $`z0`$, respectively. Recall that $`\alpha =1w_0^2=r_00,1`$ here. Combining these solutions with (29) and (30), we get
###### Proposition 7.
In the vicinity of any point $`r_0>0`$, $`r_01`$, the EYM equations (2) have solutions of the form
$$u=\pm 4\xi \sqrt{r_0}w_0s^3+O(s^4),w=w_0\pm \sqrt{r_0}s+o(s),$$
and
$$u=\pm \frac{16}{3}\xi w_0w_{12}s^3+O(s^4),w=w_0\pm w_{12}s+o(s),$$
as $`s=\sqrt{r_0r}0`$, where $`r_0=|1w_0^2|`$, $`\xi =\mathrm{sgn}(1w_0^2)`$, and $`w_{12}=\sqrt{3r_0/2}`$. These solutions do not have free parameters.
To the best of the author’s knowledge, the presented local solutions are new.
Notice that Proposition 7 excludes the cases $`r_0=1`$ and $`r_0=0`$, which correspond to $`w_0=0,\pm 1`$. Analysis of the first one reveals complex solutions of the EYM equations. We do not discuss them here, since their physical interpretation is unclear. The latter case has already been discussed in Sec. IV.
## VII SOLUTIONS IN THE FAR FIELD
The behavior of the EYM solutions in the far field, $`r1`$, was studied in great detail; see and references therein. In this section, we briefly give another existence proof for the asymptotically flat solutions and obtain a description of the limiting behavior of the EYM solutions as the number of the gauge function nodes tends to infinity. To implement this task, we return to the EYM equations (1), but we change $`r`$ to $`z=1/r`$. Next, we rewrite (1) as a dynamical system of the form
$$\begin{array}{cc}\hfill \dot{z}& =z^2N,\hfill \\ \hfill \dot{N}& =\left[1+\left(1+2z^4v^2\right)N+\left(1w^2\right)^2z^2\right]zN,\hfill \\ \hfill \dot{v}& =\left[13N\left(1w^2\right)^2z^2\right]zv\left(1w^2\right)w,\hfill \\ \hfill \dot{w}& =z^2vN,\hfill \end{array}$$
(31)
where $`v=w^{}(z)`$, and an overdot stands for derivatives with respect to $`t`$ defined by $`dz=z^2Ndt`$.
The dynamical system (31) has two critical sets in the hyperplane $`z=0`$, namely, the planes $`\mathrm{𝐴𝐹}^\pm `$: $`(0,N,v,\pm 1)`$ and $`\mathrm{𝑂𝑆}`$: $`(0,N,v,0)`$. Both of them are degenerate. Thus we perform the standard procedure of their investigation for finite $`N`$ and $`v`$.
### A The critical planes $`\mathrm{𝐴𝐹}^\pm `$
In this case, we introduce the local coordinates $`(z,N,v,w_z=\overline{w}/z)`$, where $`w=\overline{w}\pm 1`$; here the upper sign applies for $`\mathrm{𝐴𝐹}^+`$ and the lower one for $`\mathrm{𝐴𝐹}^{}`$. Now (31) can be written as
$$\begin{array}{cc}\hfill \dot{z}& =zN,\hfill \\ \hfill \dot{N}& =\left[1+\left(1+2z^4v^2\right)N+(2\pm zw_z)^2z^4w_z^2\right]N,\hfill \\ \hfill \dot{v}& =\left[13N(2\pm zw_z)^2z^4w_z^2\right]v+(1\pm zw_z)(2\pm zw_z)w_z,\hfill \\ \hfill \dot{w}_z& =(vw_z)N,\hfill \end{array}$$
(32)
where an overdot stands for derivatives with respect to $`\tau `$ defined by $`d\tau =zdt`$.
The dynamical system (32) has two critical lines in the hyperplane $`z=0`$, namely, $`Z_1`$: $`(0,1,w_z,w_z)`$ and $`Z_2`$: $`(0,0,2w_z,w_z)`$. The eigenvalues of $`Z_1`$ are $`\lambda _z=\lambda _N=1`$, $`\lambda _v=3`$, and $`\lambda _{w_z}=0`$. The eigenvector $`\zeta _v=(0,0,2,1)`$ defines the ingoing two-dimensional separatrices
$$z0,N1,w_z=w_0\frac{1}{2}v,$$
where $`w_0`$ is an arbitrary constant. Since these separatrices belong to the hyperplane $`z=0`$, they do not correspond to any trajectories of (31).
The eigenvalues $`\lambda _z`$ and $`\lambda _N`$ have the eigenvectors $`\zeta _z=(1,0,\pm \frac{3}{2}w_z^2,\pm \frac{3}{4}w_z^2)`$ and $`\zeta _N=(0,1,\frac{3}{2}w_z,\frac{3}{4}w_z)`$, where the signs in $`\zeta _z`$ correspond to the signs in the right hand sides of (32). Thus, the three-dimensional outgoing separatrices take the form
$$\begin{array}{cc}\hfill N& =1+n_1z+o(z),\hfill \\ \hfill v& =w_1+\frac{3}{2}(\pm w_1n_1)w_1z+o(z),\hfill \\ \hfill w_z& =w_1+\frac{3}{4}(\pm w_1n_1)w_1z+o(z)\hfill \end{array}$$
as $`z0`$, where $`n_1`$ and $`w_1`$ are arbitrary constants. Clearly, these separatrices correspond to a two-parameter family of the asymptotically flat solutions of (1).
###### Proposition 8.
The EYM equations (1) possess a two-parameter family of solutions such that $`lim_r\mathrm{}w(r)=w_{\mathrm{}}=\pm 1`$. All these solutions have the form
$$\begin{array}{cc}\hfill N& =1+n_1r^1+o(r^1),\hfill \\ \hfill w& =w_{\mathrm{}}+w_1r^1+\frac{3}{4}(w_{\mathrm{}}w_1n_1)w_1r^2+o(r^2)\hfill \end{array}$$
as $`r\mathrm{}`$, where $`n_1`$ and $`w_1`$ are arbitrary constants.
Next, the eigenvalues of $`Z_2`$ are $`\lambda _z=0`$, $`\lambda _N=1`$, $`\lambda _v=1`$, and $`\lambda _{w_z}=0`$. Hence, $`Z_2`$ is an unstable degenerate set. The eigenvectors $`\zeta _N=(0,1,6w_z,3w_z)`$ and $`\zeta _v=(0,0,1,0)`$ determine two-dimensional separatrices, which belong to the hyperplane $`z=0`$. Thus, they have no corresponding trajectories of (31). Closer analysis of $`Z_2`$ did not reveal any trajectories of (32) that correspond to EYM solutions.
### B The critical plane $`\mathrm{𝑂𝑆}`$
In this case, we study (31) in the local coordinates $`(z,N,v,w_z=w/z)`$, in which (31) may be written as
$$\begin{array}{cc}\hfill \dot{z}& =zN,\hfill \\ \hfill \dot{N}& =\left[1+\left(1+2z^4v^2\right)N+\left(1z^2w_z^2\right)^2z^2\right]N,\hfill \\ \hfill \dot{v}& =\left[13N\left(1z^2w_z^2\right)^2z^2\right]v\left(1z^2w_z^2\right)w_z,\hfill \\ \hfill \dot{w}_z& =(vw_z)N,\hfill \end{array}$$
(33)
where an overdot stands for derivatives with respect to $`\tau `$.
The system (33) has two critical sets in the hyperplane $`z=0`$, namely, the point $`P`$: $`(0,1,0,0)`$ and the line $`Z_3`$: $`(0,0,w_z,w_z)`$. The eigenvalues of $`P`$ are $`\lambda _z=\lambda _N=1`$ and $`\lambda _{v,w_z}=\frac{1}{2}(3\pm i\sqrt{3})`$. Thus, $`P`$ is a saddle. The eigenvectors $`\zeta _z=(1,0,0,0)`$ and $`\zeta _N=(0,1,0,0)`$ determine the outgoing two-dimensional separatrices
$$N=1+n_1z+z^2,v0,w_z0,$$
where $`n_1`$ is an arbitrary constant. Obviously, these separatrices correspond to the Reissner–Nordström solution (4).
Next, trajectories that belong to a stable two-dimensional manifold defined by the eigenvalues $`\lambda _v`$ and $`\lambda _{w_z}`$ spiral toward $`P`$ as $`\tau +\mathrm{}`$. These solutions may be written down explicitly, since for $`z0`$ and $`N1`$ the system (33) reads exactly as (22) with $`w_r`$ replaced by $`w_z`$. However, the whole manifold belongs to the hyperplane $`z=0`$, so that the trajectories on it do not correspond to any EYM solution. One may treat these trajectories as a description of the limiting behavior of the EYM solutions as the number of the gauge function nodes tends to infinity.
Finally, the line $`Z_3`$ is an unstable degenerate critical set. The eigenvalues of $`Z_3`$ are $`\lambda _z=0`$, $`\lambda _N=1`$, $`\lambda _v=1`$, and $`\lambda _{w_z}=0`$. One can easily see that the eigenvectors $`\zeta _N=(0,1,\frac{3}{2}w_z,0)`$ and $`\zeta _v=(0,0,1,0)`$ define two-dimensional separatrices, which belong to the hyperplane $`z=0`$. Thus, they do not have corresponding trajectories of (31). Additional study of $`Z_3`$ did not reveal any trajectories of (33) that have corresponding EYM solutions.
Let us mention in conclusion that though our investigation was restricted to local solutions of the EYM equations, dynamical systems methods can also be used for the analysis of the global behavior of the solutions. This will be the subject of a forthcoming publication.
## Acknowledgments
The author thanks Prof. D. V. Gal’tsov for suggesting the problem and for constant attention to the investigation, Prof. O. I. Bogoyavlensky for explaining some details of the method used in and for comments on the manuscript, Profs. Yu. A. Fomin and G. V. Kulikov for their sustained support, and Prof. J. A. Smoller for kindly sending numerous articles on the EYM equations.
The work was partially supported by the RFBR, Grant No. 96-02-18899.
# Capital flow in a two-component dynamical system
## I Introduction
The economy is an intriguing complex dynamical system, the understanding of which is of vital importance to society . From the point of view of a physicist, it may be seen as a natural phenomenon whose microscopic “laws of motion” should be discovered, with consequences drawn from them that are amenable to experimental verification (or falsification). Statistical physics is successfully involved in the investigation of various collective dynamic phenomena which come from interdisciplinary areas, like car traffic , city growth , pedestrian dynamics , forest fires , river networks or biological evolution .
As in these problems, numerical simulations of various “minimal” models of economic behavior play an important role, in parallel with analytical approaches . Even though the hope of detailed prediction of future market behavior can, in all probability, rarely be satisfied, knowledge of the parameters that are crucial for the probabilistic properties of economic events is very important to decision-makers on all levels.
One group of models investigated so far is based on threshold dynamics of the players in the market, which was shown to be equivalent to a stochastic process with both multiplicative and additive noise; this then leads naturally (see e.g. ) to the power-law tails in the distribution of price changes which are observed in reality. Similar in spirit are the models based on a diffusion-annihilation process, where buyers and sellers are considered as particles which disappear once they meet . Percolation theory was invoked to account for the herd behavior of market agents , which gives a power-law distribution truncated by an exponential if the connectivity of players is close to, but not exactly at, the percolation threshold. This is in accord with the “truncated Lévy” distribution found in more refined recent analyses of stock market data . A model based on a non-linear Langevin equation was developed in order to explain the apparent “phase transition” character of market crashes, proposed recently by several authors. (See e.g. .)
Another model, which implements on-line adaptation of the community of players, also reproduces the scaling of price changes very well . A more abstract approach is used in the minority game model , where real money is lacking. An important feature of this model is the presence of a transition from a chaotic to a periodic phase when the number of players is increased. (It is quite interesting that a similar transition seems to take place in the threshold dynamics model as well ; on increasing the number of players, we may pass from the intermittent behavior characterized by power-law tails to an ordinary random-walk process. However, this phenomenon was not explicitly investigated in .)
In our model, we want to take into account the fact that there are at least two types of investors. First, there are individuals who produce some commodity and need other commodities to keep production going. It is this type of economic agent that the market was originally designed for. However, a second type of investor comes soon: the speculator, who observes the price changes due to the disequilibrium between demand and offer and makes a profit from the information carried by the price signal. It is expected, though, that the influence of speculators is not at all purely negative. When they discover some regularity in the price fluctuations, they take advantage of it, but at the same time their activity has a feedback effect on the price, so that the very fluctuations the speculators are exploiting are destroyed. In fact, this may lead to an overall decrease of price fluctuations, making the market more stable.
The question is whether this common-sense reasoning can be supported by more rigorous arguments. The aim of the present work is to implement a model of a market in which the mutual influence of producers and speculators may be studied.
## II Description of the two-component model
The economy is an open system. Just as thermodynamic systems can self-organize into a low-entropy state only under the condition that there is a flow of energy through the system, non-trivial self-organization in the market is driven by the supply of wealth from the surrounding environment. So, the first fundamental players in the economic game are the producers, who exploit the outside opportunities. In our model, their real economic activity will be mimicked by buying and selling a single commodity in a regular manner. For simplicity, we will call it stock. Each producer is periodically supplied a fixed amount of stock and money from outside.
A real market also needs speculators, who play a positive role in absorbing temporary disequilibria of demand and offer. The result is a more liquid market with reduced fluctuations. The speculators are selfish, with no explicit wish besides net gain. We may look at them as providers of a service for which they are paid. However, none of them individually provides a specific service; their utility stems from a collective effect.
An important feature of our model is that both producers and speculators may abstain from the game if they feel that it does not pay to participate. Thus, the number of players in each group is self-adjusting, depending on the parameters of the model. This is similar in spirit to the grand-canonical ensemble in statistical physics.
Borrowing biological terminology, the producers are “autotrophs” who live in symbiosis with “heterotrophic” speculators. As in biological communities, we also implement Darwinian evolution of the speculators, so that they adapt collectively to the actions of the producers.
Let us be more specific now. The dynamical system we are going to investigate is a simplified model of a market, in discrete time. In each step, some amount of stock is traded. The price of stock $`x(t)`$ as a function of time $`t`$ is the output signal of the market. The price is the manifestation of large amount of “microscopic” activity due to players on the market. Each player is characterized by two dynamical variables, the amount of stock $`S_i(t)`$ and the amount of money $`B_i(t)`$, where index $`i`$ denotes the player. So, the total capital owned at time $`t`$ by $`i`$’th player is $`W_i(t)=B_i(t)+x(t)S_i(t)`$.
There are two kinds of players in our model. There are $`N_\mathrm{p}`$ producers and $`N_\mathrm{s}`$ speculators. We denote by $`N=N_\mathrm{p}+N_\mathrm{s}`$ the total number of players. The producers follow a fixed strategy of buying and selling, irrespective of the current or past price. On the other hand, the decisions of the speculators are based on the analysis of the past evolution of the price.
The strategies are characterized as follows. Each producer, $`i=1,2,\mathrm{},N_p`$, has its own period $`\tau _i`$ and time-scale $`T_i`$ on which he or she invests. The periods are chosen randomly among the numbers 2 to 6, the time-scales randomly from 7 to 10. The investment follows a random but quenched pattern $`a_i(t^{})`$, $`t^{}=0,1,\mathrm{},\tau _i-1`$. In order to avoid a systematic excess on the demand or offer side, we require that the investment is balanced for each producer, $`\sum _{t^{}=0}^{\tau _i-1}a_i(t^{})=0`$. Apart from this constraint, the $`a_i`$’s are drawn from a uniform distribution on the interval $`(-1,1)`$. Finally, all producers are given the same overall amplitude of their investment, $`ϵ`$. At time step $`t`$, producer $`i`$ participates if he or she has positive capital. In this case, he or she attempts to buy the following amount of the commodity
$$\overline{\mathrm{\Delta }S_i}=ϵ\left(a_i([t/T_i]\mathrm{mod}\tau _i)-\lambda \mathrm{ln}\frac{x(t)}{<W>}\right).$$
(1)
In this formula, we denote by $`[t/T_i]`$ the integer division of $`t`$ by $`T_i`$, and $`<W>=\frac{1}{N}\sum _{i=1}^NW_i`$ is the average wealth of the players. The last term, with the logarithm, expresses the fact that the stock has an intrinsic value. Its price is measured relative to the average wealth of the population, so that if the price is higher, the strategy of the producer is slightly biased towards selling, while if the price is lower, the producer is more likely to buy. The use of the logarithm follows from the fact that the evolution of the price is a multiplicative process rather than an additive one. An analogous term also plays a crucial role in the analytic approach of Ref. . The parameter $`\lambda `$ measures the strength of the bias caused by the intrinsic value of the commodity. Throughout this article, we use the value $`\lambda =0.01`$.
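As an illustration, a minimal sketch of the producer's order (1) could read as follows. The function and variable names are ours, the minus sign in front of $`\lambda `$ follows the reconstruction above, and the mean-subtraction recipe for enforcing the zero-sum constraint is one simple choice, not prescribed by the text.

```python
import numpy as np

def quenched_pattern(tau_i, rng):
    """Uniform draws on (-1, 1), shifted so that the pattern sums to zero."""
    a = rng.uniform(-1.0, 1.0, tau_i)
    return a - a.mean()

def producer_order(i, t, a, tau, T, x_t, W_mean, eps, lam=0.01):
    """Attempted purchase of producer i at step t, Eq. (1).

    a[i]   -- quenched zero-sum pattern of length tau[i];
    tau[i] -- period (2..6); T[i] -- investment time-scale (7..10).
    """
    phase = (t // T[i]) % tau[i]                 # [t/T_i] mod tau_i
    return eps * (a[i][phase] - lam * np.log(x_t / W_mean))
```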
The speculators differ from the producers in two respects. First, they do not feel the intrinsic value of the traded product, so that the logarithmic term of Eq. (1) is missing. But the crucial difference resides in the ability of the speculators to analyze the past price signal and decide according to their expectations about the future. Like the producers, the speculators may have their own time-scales on which they analyze the signal, as well as different memory lengths. However, in the present work we limit ourselves to the case where all speculators have a time-scale equal to one step of the dynamics and the memory is fixed and uniform across the whole community of speculators.
The speculators have memory $`M`$. This means that they are able to use the information of the $`M`$ previous values of the price, $`x(t),x(t-1),\mathrm{},x(t-M)`$. This sequence is then transformed into a bit string $`\sigma =[\sigma _1,\sigma _2,\mathrm{},\sigma _M]`$ containing the information on whether the price went up or down at a given instant in the past. We adopt the convention that 0 means an increase and 1 means a decrease of the price. Therefore $`\sigma _j=\theta (x(t-j)-x(t-j+1))`$, where $`\theta (x)`$ is the Heaviside function. The strategy of the $`i`$-th speculator is the function which prescribes, for each bit string, whether the speculator should buy or sell. We adopt the convention that 0 means selling and 1 buying. Then the strategy is a function $`\mathrm{\Sigma }_i(\sigma )`$ with possible values 0 or 1.
The strategy has a score $`b_i(t)`$ counting its success rate at time $`t`$. If it correctly predicted the change of the price from step $`t`$ to $`t+1`$, one point is added to the score; otherwise one point is subtracted. There is, in principle, a non-trivial question of what we mean by saying that a strategy “correctly predicted the price change”. In fact, it should be checked a posteriori that the rule we used for distinguishing successful strategies really corresponds to winning behavior. However, in our model we used a prescription for the price change which enables us to say a priori which strategy did a good job. For the moment, we present the rule for counting the success rate and return to its justification later, when we discuss the prescription for the price change. The essence is that it is good to buy if the price will go down, and vice versa. So, when a player is inserted into the market, the score is set to zero, and in each successive step the score is updated as follows: $`b_i(t+1)=b_i(t)+1`$ if $`\theta (x(t)-x(t+1))=\mathrm{\Sigma }_i(\sigma (t))`$ and $`b_i(t+1)=b_i(t)-1`$ in the opposite case.
There is Darwinian selection among the speculators. Every 5 steps, the speculator with the lowest capital is removed and replaced by a new player with a newly chosen strategy. However, the capital, amount of stock, and money are inherited from the removed player. This amounts to not really removing the player; rather, the player picks a new strategy instead of the old, doomed one. We define the age $`v_i(t)`$ of speculator $`i`$ as the number of time steps since the last replacement of the strategy. We also implemented random mutations, which affect good and bad players equally. Every 57 steps a player is chosen at random and its strategy is randomly changed.
If the speculator feels that the strategy is bad, he or she may abstain in order to avoid losses. For player $`i`$ to participate, we require that $`b_i(t)/v_i(t)>0.05`$.
Those speculators, who do participate, attempt to buy the following amount of stock:
$$\overline{\mathrm{\Delta }S_i}=\delta (2\mathrm{\Sigma }_i(\sigma (t))-1).$$
(2)
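A sketch of the speculator's side, encoding the price history into the bit string $`\sigma `$, looking up the strategy, and scoring it, might look like this. The dictionary-based strategy table is our choice of data structure, not prescribed by the text, and the tie-breaking convention for equal prices is ours.

```python
import numpy as np

def sigma_bits(prices, M):
    """Bit string sigma: sigma_j = theta(x(t-j) - x(t-j+1));
    0 codes a price increase, 1 a decrease, as in the text."""
    return tuple(int(prices[-1 - j] >= prices[-j]) for j in range(1, M + 1))

def speculator_order(strategy, prices, M, delta):
    """Attempted purchase, Eq. (2): +delta to buy, -delta to sell."""
    action = strategy[sigma_bits(prices, M)]      # Sigma_i(sigma): 1 buy, 0 sell
    return delta * (2 * action - 1)

def update_score(b, action, x_t, x_next):
    """+1 if the action matched theta(x(t) - x(t+1)), i.e. buying before a
    drop or selling before a rise; -1 otherwise."""
    went_down = int(x_t >= x_next)
    return b + 1 if action == went_down else b - 1

# A random strategy table for memory M = 3: one action per bit string
rng = np.random.default_rng(0)
M = 3
strategy = {tuple(bits): int(rng.integers(2))
            for bits in np.ndindex(*(2,) * M)}
```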
When we know what amount of stock the players want to buy or sell, we can compute the change of the price. It is not a priori clear what precisely the price change should be. The only obvious requirement is that the price should go up when there is more demand than offer, and go down when the offer prevails. The demand is $`D=\sum _{i,\overline{\mathrm{\Delta }S_i}>0}\overline{\mathrm{\Delta }S_i}`$ and the offer is $`O=-\sum _{i,\overline{\mathrm{\Delta }S_i}<0}\overline{\mathrm{\Delta }S_i}`$. In previous works, two recipes for the price change were used. In , time averages of demand and offer were computed, and the new price was obtained by multiplying the old price by the ratio of the average demand to the average offer. Essentially the same prescription is applied in .
On the other hand, within the approaches based on threshold dynamics or diffusion-annihilation processes the deal is realized when the bid and ask prices meet, which determines the reported stock price at that moment. (It is interesting to note, that between the deals the price is undefined.)
Here we adopt an approach closer to the former one. The new price is computed by multiplying the old price by a factor that increases with the ratio $`D/O`$.
$$x(t+1)=F(\frac{D}{O})x(t)$$
(3)
where the function $`F(r)`$ should obey two conditions, $`F(1)=1`$ and $`F^{}(r)>0`$. The simplest choice consists in taking $`F(r)=r`$, but as we have seen in our simulations, this leads to price fluctuations far beyond realistic values. So, we use a non-linear form which suppresses the fluctuations,
$$F(r)=\mathrm{exp}(\alpha \mathrm{tanh}(\mathrm{log}(r)/\alpha ))$$
(4)
which has the property that $`F(r)\simeq r`$ for $`r`$ close to 1. We have found that $`\alpha =0.02`$ gives realistic price fluctuations, so we keep this value throughout the simulations.
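In code, the price update (3)-(4) is a one-liner; the sketch below (ours) also illustrates the two limiting regimes:

```python
import numpy as np

def price_factor(D, O, alpha=0.02):
    """Eqs. (3)-(4): F(1) = 1, F'(r) > 0, and |log F| <= alpha."""
    r = D / O
    return np.exp(alpha * np.tanh(np.log(r) / alpha))

print(price_factor(1.0, 1.0))     # 1.0 exactly: balanced market
print(price_factor(1.001, 1.0))   # ~1.001, i.e. F(r) ~ r near r = 1
print(price_factor(10.0, 1.0))    # ~exp(alpha) ~ 1.0202: damped imbalance
```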
There should be conservation of stock in each trade event. Therefore the amount actually traded by a single player is not the same as the attempted volume. If $`D>O`$, then for $`\overline{\mathrm{\Delta }S_i}>0`$ the actual change of stock is lower than attempted, $`\mathrm{\Delta }S_i=\frac{O}{D}\overline{\mathrm{\Delta }S_i}`$, while for $`\overline{\mathrm{\Delta }S_i}<0`$ the actual traded amount is the same as attempted, $`\mathrm{\Delta }S_i=\overline{\mathrm{\Delta }S_i}`$.
If, on the contrary, $`D<O`$, then for $`\overline{\mathrm{\Delta }S_i}<0`$, $`\mathrm{\Delta }S_i=\frac{D}{O}\overline{\mathrm{\Delta }S_i}`$, while for $`\overline{\mathrm{\Delta }S_i}>0`$, $`\mathrm{\Delta }S_i=\overline{\mathrm{\Delta }S_i}`$.
After the trade is completed, the new amount of capital, stock and money is
$`W_i(t+1)`$ $`=`$ $`B_i(t)+x(t+1)S_i(t)`$ (5)
$`S_i(t+1)`$ $`=`$ $`S_i(t)+\mathrm{\Delta }S_i`$ (6)
$`B_i(t+1)`$ $`=`$ $`B_i(t)-x(t+1)\mathrm{\Delta }S_i`$ (7)
Finally, every 120 steps, the producers receive wealth from outside. The value of the influx is governed by the parameter $`\eta `$. The total amount distributed among the producers is $`\eta N_\mathrm{p}`$, but those who do not participate at that moment do not receive anything. If $`p_\mathrm{p}`$ is the fraction of currently participating producers, those who do participate increase their capital by $`\eta /p_\mathrm{p}`$.
Summarizing the algorithm which defines our model, the following operations are performed in each step.
(i) Calculate amount of the commodity, attempted to buy by producers (Eq. (1)) and speculators (Eq. (2)).
(ii) Calculate demand and offer and from them the new price, according to (3) and (4).
(iii) Calculate new values of capital, stock and money according to (5), (6), and (7).
The following actions are performed periodically.
(iv) If the step is a multiple of 57, a randomly chosen speculator randomly changes its strategy. If the step is a multiple of 120, wealth is added to the producers.
An important property of the above algorithm is that the game has a minority character. Indeed, because the price is established after the attempted traded amounts $`\overline{\mathrm{\Delta }S_i}`$ were fixed, it is an advantage to sell if the price goes up and to buy when the price comes down. Therefore, going against the majority is an advantage. This enables us to decide what score should be attributed to the strategies: those leading to the minority side receive +1 point, while those which lead to the majority side receive -1 point. In the abstract minority game it was possible, of course, only for fewer than half of the players to be on the minority side. The presence of producers, however, makes it possible for any number of speculators to have a minority strategy, at least in principle.
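Putting steps (i)-(iii) together, one trading round could be sketched as follows. This is our code, not the authors'; it assumes both the demand and the offer sides are nonempty and uses the sign conventions restored above.

```python
import numpy as np

def trading_step(x_t, dS_bar, S, B, alpha=0.02):
    """One trading round, steps (i)-(iii).

    dS_bar -- attempted orders of the participating players (buy > 0, sell < 0);
    S, B   -- current stock and money holdings.  Assumes D > 0 and O > 0.
    """
    D = dS_bar[dS_bar > 0.0].sum()            # demand
    O = -dS_bar[dS_bar < 0.0].sum()           # offer, taken positive
    # Eqs. (3)-(4): multiplicative price update with tanh saturation
    x_next = x_t * np.exp(alpha * np.tanh(np.log(D / O) / alpha))

    dS = dS_bar.copy()                        # ration the majority side so that
    if D > O:                                 # the traded stock is conserved
        dS[dS_bar > 0.0] *= O / D
    else:
        dS[dS_bar < 0.0] *= D / O

    W_next = B + x_next * S                   # Eq. (5)
    S_next = S + dS                           # Eq. (6)
    B_next = B - x_next * dS                  # Eq. (7)
    return x_next, W_next, S_next, B_next

# Example: two buyers and one seller; the buyers are rationed by O/D = 0.8
x1, W1, S1, B1 = trading_step(1.0, np.array([0.3, 0.2, -0.4]),
                              np.ones(3), np.ones(3))
```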
## III Results
The fundamental variable of our model is the price. A typical time evolution of the price is shown in Fig. 1. The long-time average of the price grows, due to the fact that capital is regularly injected into the system. We observed that the increase of the price is higher if the capital influx is higher. We also measured the price fluctuations. We observed that the relative price fluctuations remain constant in the long-time average, so that the absolute fluctuations grow with time at the same rate as the long-time average of the price does.
One of the main questions is how the relative price fluctuations are changed by the presence of the speculators. We define the time-averaged relative price fluctuations using exponential averages
$$f=\sqrt{\frac{\sum _{t=1}^{T}\lambda ^{T-t+1}x^2(t)\sum _{t=1}^{T}\lambda ^t}{\left(\sum _{t=1}^T\lambda ^{T-t+1}x(t)\right)^2}-1}$$
(8)
where $`T`$ is the duration of the simulation run, and the parameter was chosen to be $`\lambda =0.9999`$.
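A direct transcription of (8) is a few lines (a sketch, using the minus sign reconstructed under the square root above):

```python
import numpy as np

def relative_fluctuation(x, lam=0.9999):
    """Exponentially weighted relative price fluctuation f, Eq. (8)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    t = np.arange(1, T + 1)
    w = lam ** (T - t + 1)                  # weights lambda^(T-t+1)
    norm = np.sum(lam ** t)                 # the same geometric sum as sum(w)
    val = np.sum(w * x**2) * norm / np.sum(w * x)**2 - 1.0
    return np.sqrt(max(val, 0.0))           # clip tiny negative round-off

print(relative_fluctuation(np.ones(1000)))  # 0: a constant price
rng = np.random.default_rng(1)
print(relative_fluctuation(1.0 + 0.01 * rng.standard_normal(10**5)))  # ~0.01
```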
The relative weight of the speculators compared with the producers is the quantity $`\xi =N_\mathrm{s}\delta /N_\mathrm{p}ϵ`$. Figure 2 shows the dependence of the time-averaged relative price fluctuations on $`\xi `$. We can see a pronounced minimum around $`\xi =0.5`$, which suggests that, from the point of view of price stability, there is an optimal weight of the speculators.
This phenomenon can be better understood when we observe the participation of producers ($`p_\mathrm{p}`$) and speculators ($`p_\mathrm{s}`$), defined as the percentage of those who take part in the trading. Fig. 3 shows the time dependence of the participation in a typical run.
After a transient period, the participation fluctuates around a stationary value, which grows with the capital influx $`\eta `$. The dependence of the time-averaged participation on the parameter $`\xi `$ is shown in Fig. 4. The most important observation is the substantial decrease of the participation of the speculators for values of $`\xi `$ close to $`0.5`$. The participation of producers has a shallow minimum around the same value $`\xi =0.5`$, which is also close to the position of the minimum in the relative price noise.
The following picture arises from these observations. If the speculators are too prudent (small $`\delta `$), the price fluctuations are high, because of the demand-offer disequilibrium. The price changes follow a periodic pattern induced by the periodic quenched pattern of trading of each individual producer. Speculators are able to extract the information about the periodic price changes and use it to make a profit. Because they trade with little capital (low $`\delta `$), they do not influence the price much, and many speculators can gain. On the other hand, the gain is also small.
If the investors become more aggressive, through an increase of $`\delta `$, they have a larger influence on the price, which leads to the suppression of the price fluctuations; but at the same time their ability to use the periodic price fluctuations to make a profit is also suppressed. As a result, fewer speculators have a successful strategy and fewer speculators participate.
There is a transition between the low-aggressivity regime, where many speculators make little profit, and the high-aggressivity regime, where few speculators can make a large profit. The transition occurs around $`\xi =0.5`$, and it is characterized by an optimum of the relative price noise and also by the minimum of the participation of the producers, which means that fewer producers are able to gain. The advantage is a more stable price; the disadvantage is less profit for the producers. This is a manifestation of the common-sense consideration, repeated throughout the economics literature, that higher profit is riskier.
In Figs. 5 and 6 the situation is illustrated by the time evolution of the increase of the total wealth of producers and speculators, defined by
$`\mathrm{\Delta }W_\mathrm{p}(t)=`$ $`{\displaystyle \sum _{i\in \mathrm{producers}}}(W_i(t)-1)`$ (9)
$`\mathrm{\Delta }W_\mathrm{s}(t)=`$ $`{\displaystyle \sum _{i\in \mathrm{speculators}}}(W_i(t)-1).`$ (10)
A lower $`\delta `$ (Fig. 5) is characterized by large fluctuations of the capital of the producers, while the capital of the speculators grows slowly. When $`\delta `$ is larger (Fig. 6), the capital of the producers fluctuates less, but the speculators have a significantly larger profit.
The influence of the influx of capital into the system, measured by the parameter $`\eta `$, can be seen by comparing Figs. 6 and 7. The picture remains qualitatively the same; however, a larger influx means a larger profit mainly for the producers, while the increase of the profit of the speculators is much smaller. Fig. 3 shows that the participation of the producers is much more influenced by the parameter $`\eta `$ than that of the speculators.
## IV Conclusions
We introduced and studied a model of an open economy. The influx of capital leads to the coexistence of producers and speculators. We showed that the presence of the speculators can be useful to the economy through the suppression of the price fluctuations. If we increase the aggressivity of the speculators, there is a smooth transition from a regime with a small but less risky profit for the speculators to a regime with a larger profit, accessible only to a smaller fraction of the speculators. The transition occurs close to the minimum of the price fluctuations, which is the optimal state for the producers. If we accept the supposition that the optimal strategy for the speculators should be derived from a compromise between the mutually exclusive requirements of risk and profit, we can conclude that the optimum for the speculators also lies in the transition region. As a result, the optima for producers and speculators lie close to one another, and their mutual coexistence is better described as symbiosis than as parasitism.
###### Acknowledgements.
We acknowledge the support from the European TMR Network-Fractals c.n. FMRXCT980183. F.S. wishes to thank the University of Fribourg, Switzerland, for the financial support and kind hospitality.
# ACCELERATION OF ULTRA HIGH ENERGY COSMIC RAYS
## 1 INTRODUCTION
I have been asked to summarize “conventional” schemes for the acceleration of UHE cosmic rays, though any physical process capable of endowing a subatomic particle with the kinetic energy of a well-hit baseball/cricketball can hardly be considered conventional. This means that I shall leave others to review mechanisms that attribute the origin of these particles to topological defects, strings, monopole decay, supersymmetric hadrons, cosmic necklaces, cryptons and so on. Indeed, I suspect that the “hidden agenda” is for me to fail at my appointed task, and, like my colleagues on the MACHO experiment, to make the world safe for elementary particle theorists. I shall not disappoint.
Many of the issues that I will cover have been recognised for some time and have been well-discussed in several excellent reviews including Hillas (1984) and Cronin (1996) and the many relevant contributions to the recent conference on this subject (Krizmanic, Ormes & Streitmatter 1998), including, especially, the lively summary by the late David Schramm. The conference proceedings edited by Chupp & Benz (1994) is also relevant.
## 2 THE COSMIC RAY SPECTRUM
In order to give this topic some context, consider the complete cosmic ray spectrum (eg Berezinski et al 1990). This extends over nearly twelve decades of energy from the proton rest mass, $`1`$ GeV, where their energy density is that of the microwave background, to at least $`300`$ EeV ($`50`$ J $`\sim 3\times 10^{-8}`$ m<sub>Pl</sub>). We can consider the cosmic ray spectral energy density inferred at the solar system, $`U(E)=(4\pi /c)(dI/d\mathrm{ln}E)\sim 5\times 10^{-14}(E/10\mathrm{G}\mathrm{e}\mathrm{V})^{-0.7}`$ J m<sup>-3</sup> (correcting for solar modulation), extending from $`10`$ GeV to the “knee” at $`100`$ TeV-10 PeV. (10 GeV cosmic rays are about 10 m apart and have an energy density comparable with that of the microwave background.) The spectrum steepens above the knee: $`U(E)\sim 4\times 10^{-18}(E/10\mathrm{P}\mathrm{e}\mathrm{V})^{-1.1}`$ J m<sup>-3</sup>. It then dips and flattens around the “ankle” ($`1`$-$`10`$ EeV). UHE cosmic rays - the toenail clippings of the universe - are observed up to 300 EeV and, with a little imagination, $`U(E)\sim 1.5\times 10^{-21}(E/10\mathrm{E}\mathrm{e}\mathrm{V})^{-0.5}`$ J m<sup>-3</sup>, comparable with the estimated, integrated background from $`\gamma `$-ray bursts. (Despite the large uncertainty, and the fact that the number density has fallen by $`10`$ orders of magnitude, we do measure the EeV spectrum better than the MeV spectrum, of which we are, quite decently, ignorant.)
The $`1`$ GeV-$`100`$ TeV cosmic rays are of Galactic origin. The ratio of Li, Be, B secondaries to C, N, O primaries measures their range to be $`\lambda (E)\sim 100(E/10\mathrm{G}\mathrm{e}\mathrm{V})^{-0.6}`$ kg m<sup>-2</sup> (eg Axford 1994). The cosmic ray luminosity of the Galaxy is then estimated as $`M_dU(E)c/\lambda (E)\sim 2\times 10^{33}(E/1\mathrm{G}\mathrm{e}\mathrm{V})^{-0.1}`$ W, where $`M_d`$ is the gas mass of the disk. Scaling from the local galaxy luminosity density (per $`\mathrm{ln}E`$ and assuming $`h=0.6`$), we derive an average, cosmological, luminosity density (per $`\mathrm{ln}E`$), $`<>(E)\sim 4\times 10^{-37}(E/10\mathrm{G}\mathrm{e}\mathrm{v})^{-0.1}`$ W m<sup>-3</sup>, for $`10\mathrm{G}\mathrm{e}\mathrm{V}<E<100\mathrm{T}\mathrm{e}\mathrm{V}`$. (For comparison, the stellar luminosity density is $`10^{-33}`$ W m<sup>-3</sup>.)
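As a back-of-the-envelope check of this estimate (our arithmetic; the disk gas mass below is an assumed round value, not a figure quoted in the text):

```python
# Order-of-magnitude check of L_CR ~ M_d * U(E) * c / lambda(E) at E = 10 GeV.
M_sun = 2.0e30                 # kg
M_d   = 7.0e9 * M_sun          # assumed gas mass of the Galactic disk, kg
U     = 5.0e-14                # J m^-3, energy density per ln E at 10 GeV
lam   = 100.0                  # kg m^-2, escape grammage at 10 GeV
c     = 3.0e8                  # m s^-1

L_CR = M_d * U * c / lam
print(f"L_CR ~ {L_CR:.1e} W")  # ~2e33 W, matching the estimate in the text
```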
The UHE particles are almost surely extragalactic. As with $`\gamma `$-ray bursts, there is no good evidence for disk, halo, cluster or supercluster anisotropy, despite some tantalising hints in the past (Takeda et al 1999). Furthermore, magnetic confinement by the Galaxy is impossible - the Larmor radius $`r_L(E)`$ of a 300 EeV cosmic ray in a $`\mu `$G field is $`300`$ kpc. If we assume that UHE cosmic rays are protons (and assuming that they are not only makes matters worse), then they have a short lifetime to photo-pion production on the microwave background (Greisen 1966, Zatsepin & Kuzmin 1966). The characteristic lifetime of a $`60`$-$`300`$ EeV cosmic ray is, very roughly, $`T(E)\sim 0.1(E/300\mathrm{E}\mathrm{e}\mathrm{V})^{-2}`$ Gyr. This implies that the luminosity density increases with energy: $`<>(E)\sim U(E)/T(E)\sim 10^{-37}(E/300\mathrm{E}\mathrm{e}\mathrm{V})^{1.5}`$ W m<sup>-3</sup>. At the highest measured energy, the estimated cosmological luminosity density is not significantly different from that of 10 GeV cosmic rays. The change in slope in the source spectrum above $`60`$ EeV is a strong indication that these UHE cosmic rays comprise a quite distinct component from their lower energy counterparts.
In order to investigate this further, it is necessary to take account of the fluctuations in energy loss. Taking the 17 events reported by the AGASA collaboration above 60 EeV, it is possible to derive a maximum likelihood estimate of the unnormalized energy density, uncorrected for biases in detection efficiency. I find that if $`U(E)\propto E^{-\alpha };E>E_{\mathrm{min}}=60`$ EeV, then $`\alpha =1.2\pm 0.5`$. I then calculate the probability that a particle of energy $`E_0`$ has energy $`>E`$ after time $`t`$, $`P(E,t;E_0)`$, following Aharonian & Cronin (1994) and Bahcall & Waxman (1999). If we assume a power law for the luminosity density, $`L(E)\propto E^\beta ;E>E_{\mathrm{min}}=60`$ EeV, then the logarithm of the likelihood for obtaining the observed events is
$$\sum _i\mathrm{ln}\left[(1-\beta )E_{\mathrm{min}}^{(1-\beta )}\int _{E_i}^{\infty }dE_0\,E_0^{\beta -2}\int _0^{\infty }dt\,\frac{\partial P}{\partial \mathrm{ln}E}\right]$$
(1)
Maximizing this function with respect to variation of $`\beta `$ gives the estimate $`\beta =0.3\pm 0.2`$. A more sophisticated computation that takes into account the detection probabilities of the different events should be performed, but it is unlikely to change the conclusion that the spectral luminosity density actually increases in the 60-300 EeV energy range and may even be consistent with a single, “top down” source with energy well above 300 EeV.
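For orientation, the simpler of the two fits (the slope $`\alpha `$ of $`U(E)\propto E^{-\alpha }`$ above $`E_{\mathrm{min}}`$, ignoring propagation) has a closed-form maximum-likelihood solution, sketched below on a synthetic 17-event sample; the energies are placeholders, not the AGASA events, and the full $`\beta `$ fit additionally folds in $`P(E,t;E_0)`$ as in eq. (1).

```python
# Toy maximum-likelihood slope estimate for a power law above E_min.
import numpy as np

rng = np.random.default_rng(1)
E_MIN, N_EV = 60.0, 17                         # EeV; 17 events as in the text
E = E_MIN * (1.0 + rng.pareto(1.2, N_EV))      # synthetic power-law sample
alpha_hat = N_EV / np.log(E / E_MIN).sum()     # closed-form MLE for the slope
sigma = alpha_hat / np.sqrt(N_EV)              # ~0.3-0.5 for 17 events
print(f"alpha = {alpha_hat:.1f} +/- {sigma:.1f}")
```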
There have been reports that UHE cosmic rays are significantly clustered on the sky. Specifically, in a sample of 47 events observed with AGASA, there are three pairs and one triple above 40 EeV with separations $`<2.5^o`$, comparable with the positional errors (Takeda et al 1999). (There are two more coincidences with events drawn from other samples.) There is no clear pattern for the associated particles to be ordered in energy and, in particular, one double has a 106 EeV particle arriving over 3 yr after a 44 EeV particle.
If these associations are real, then there are three important implications. Firstly, as particles are likely to be deflected by the intergalactic magnetic field through an angle $`\delta \theta \sim (D\ell _B)^{1/2}/r_L(E)`$, they will be delayed by $`\sim D^2\ell _B/r_L(E)^2c\propto E^{-2}`$, where $`\ell _B`$ is the field correlation length and $`D\sim cT(150\mathrm{EeV})\sim 30`$ Mpc is the supposed source distance (cf Miralda-Escudé & Waxman 1996). Even Aesop would be challenged to explain how a $`40`$ EeV cosmic ray precedes a $`100`$ EeV cosmic ray if they started at the same time and we must conclude that the source persists for several years, at least. This would rule out all particle/defect and $`\gamma `$-ray burst models. Secondly, the small deflection angles at low energy limit the intergalactic field strength to $`B<20(\ell _B/1\mathrm{Mpc})^{-1/2}`$ fT, far smaller than generally supposed, though probably not excludable by direct observation. Thirdly, the presence of three $`\sim 40`$ EeV cosmic rays associated with high energy cosmic rays of much shorter range implies that the background of low energy cosmic rays not associated with high energy events must be larger than the incidence of clustered events by roughly the ratio of their typical lifetimes, $`\sim 30`$, which more than accounts for the remainder of the low energy sample. This, in turn, implies that the high energy cosmic rays must come from very few sources which are, consequently, quite energetic: $`E\sim 10^{44}(\tau /3\mathrm{yr})`$ J, where $`\tau `$ is their lifetime. (If the low energy cosmic rays are scattered through $`2.5^o`$, then $`\tau >10^5`$ yr and $`E\sim 3\times 10^{48}`$ J.)
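The scalings above are easy to evaluate numerically. In the sketch below only the formulas for $`\delta \theta `$ and the delay come from the text; the 10 fT field strength and 1 Mpc cell size are trial values. At these values the relative delays come out at millennia rather than years, which is the quantitative content of the argument that a few-year arrival spread demands either a much weaker field or a long-lived source.

```python
# Deflection and delay scalings for protons over an assumed 30 Mpc path.
import numpy as np

MPC, YR, E_CH, C = 3.086e22, 3.156e7, 1.602e-19, 3e8
EEV = 1.602e-1                                 # 1 EeV in joules
B, l_B, D = 10e-15, 1 * MPC, 30 * MPC          # 10 fT field (trial value)

for E in (40 * EEV, 100 * EEV):
    r_L = E / (E_CH * C * B)                   # proton Larmor radius [m]
    dtheta = np.sqrt(D * l_B) / r_L            # rms deflection [rad]
    delay = D**2 * l_B / (r_L**2 * C)          # relative delay, ~ E^-2
    print(f"{E/EEV:5.0f} EeV: {np.degrees(dtheta):.2f} deg, {delay/YR:.0f} yr")
```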
However, this clustering hypothesis, which is necessarily a posteriori when expressed in detail, is only supported with modest confidence. (A simple, Monte Carlo simulation distributing 47 points at random on half the sky and looking for similar patterns is quite instructive.) Particle/defect/burst explanations of UHE cosmic rays need not yet be rejected on these grounds.
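The Monte Carlo suggested above is only a few lines; a version is sketched below. A uniform exposure over the hemisphere is assumed, which flatters the real experiment somewhat, but it already shows that a few per cent of random 47-event skies contain three or more close pairs, which is why the clustering is only supported with modest confidence.

```python
# 47 points thrown at random on a hemisphere; count pairs within 2.5 deg.
import numpy as np

rng = np.random.default_rng(0)
N_EV, N_TRIALS, SEP = 47, 20000, np.radians(2.5)
pairs = np.empty(N_TRIALS, dtype=int)
for t in range(N_TRIALS):
    phi = rng.uniform(0.0, 2.0 * np.pi, N_EV)
    ct = rng.uniform(0.0, 1.0, N_EV)              # uniform in cos(theta)
    st = np.sqrt(1.0 - ct**2)
    v = np.stack([st * np.cos(phi), st * np.sin(phi), ct], axis=1)
    close = np.arccos(np.clip(v @ v.T, -1.0, 1.0)) < SEP
    np.fill_diagonal(close, False)
    pairs[t] = close.sum() // 2                   # number of close pairs
print(f"mean pairs = {pairs.mean():.2f}, P(>=3 pairs) = {(pairs >= 3).mean():.3f}")
```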
## 3 COSMIC RAY ACCELERATION
The standard model of bulk cosmic ray production is first order Fermi acceleration at strong, super-Alfvénic shocks associated with supernova remnants and, possibly, winds from hot stars (eg Blandford & Eichler 1987). A typical relativistic proton will cross a shock, travelling with speed $`u`$, $`O(c/u)`$ times, gaining energy $`\mathrm{\Delta }E/E=O(u/c)`$ on each traversal through scattering by hydromagnetic waves moving slowly with respect to the converging fluid flows on either side of the front. The net mean relative energy gain is $`O(1)`$, but the process is statistical and a kinetic calculation shows that the transmitted spectrum will be a power law in momentum, $`f(p)\propto p^{-3r/(r-1)}`$, where $`r`$ ($`=4`$ for a strong shock) is the compression ratio. This mechanism can account, broadly, for the power (eg Malkov 1999), the slope (eg Axford 1994) and the composition (eg Ellison et al 1997) of GeV cosmic rays. Shock acceleration is also, arguably, observed directly in SN1006 (Koyama et al 1995), as well as in the solar system (eg Erdös & Balogh 1994).
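The statistical character of the process is easy to demonstrate with a Bell-style test-particle sketch: per shock-crossing cycle a particle gains $`\mathrm{\Delta }E/E\sim u/c`$ and escapes downstream with probability $`\sim u/c`$, which yields an integral spectrum $`N(>E)\propto E^{-1}`$ for a strong shock, the counterpart of the $`f(p)\propto p^{-4}`$ quoted above. The shock speed $`\beta =u/c=0.05`$ below is an assumed illustrative value.

```python
# Minimal test-particle Monte Carlo of first-order Fermi acceleration.
import numpy as np

rng = np.random.default_rng(2)
beta = 0.05                               # u/c (assumed)
gain, p_esc = 1.0 + beta, beta            # per-cycle gain and escape prob.
E = np.ones(100000)
alive = np.ones(E.size, dtype=bool)
while alive.any():
    E[alive] *= gain                      # energy gain for this cycle
    alive &= rng.random(E.size) > p_esc   # downstream escape
s_pred = -np.log(1.0 - p_esc) / np.log(gain)   # analytic integral slope ~1
print(f"s = {s_pred:.2f}; N(>10E0)/N = {(E > 10).mean():.3f} vs {10**-s_pred:.3f}")
```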
The maximum energy to which a particle can be accelerated at a shock front is dictated by the scattering mean free path, $`\ell (E)\sim (B/\delta B)_{r_L}^2r_L(E)`$, where $`\delta B`$ is the amplitude of resonant hydromagnetic waves with wavelength matched to the particle Larmor radius. The diffusion scale-length of cosmic rays ahead of the shock is $`\sim \ell c/u`$ and, assuming that this is limited by the size of the shock $`R`$, we arrive at the unsurprising result that the maximum energy achievable in shock acceleration, assuming $`\delta B<B`$ and the presence of a large scale magnetic field, is $`E_{\mathrm{max}}=eV\sim euBR\sim ed\mathrm{\Phi }/dt`$, the product of the charge and the motional potential difference across the whole shock. Equivalently, we conclude that in order to accelerate a proton by this mechanism to an energy $`eV`$, the rate of dissipation of energy must exceed $`L_{\mathrm{min}}\sim V^2/Z`$, where $`Z=\mu _0u=\mu _0E/B`$ is the effective impedance of the accelerator in SI units.
Imposing this condition for a supernova remnant in the interstellar medium leads to an estimate $`E_{\mathrm{max},\mathrm{snr}}\sim 30`$ TeV (eg Axford 1994), close to the knee. An additional source is needed between the knee and the ankle, where the source is generally supposed to be metagalactic. Larger shocks, especially those at Galactic wind termination shocks (eg Jokipii & Morfill 1987) and associated with gas flows around groups and clusters of galaxies, have been invoked (eg Norman et al 1995). These shocks are likely to be relatively weak and therefore to transmit steeper spectra, as observed. The major uncertainty is the strength of the magnetic field. If $`B\sim 30`$ pT at a galactic shock and $`\sim 10`$ pT at a cluster shock, then $`E_{\mathrm{max}}\sim 10,100`$ PeV respectively. Neither site is likely to accelerate the highest energy particles.
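The supernova-remnant estimate follows from $`E_{\mathrm{max}}\sim euBR`$ and $`L_{\mathrm{min}}\sim V^2/\mu _0u`$, as the sketch below shows; the shock speed, field and radius there are assumed round numbers chosen to reproduce the quoted $`\sim 30`$ TeV, not parameters taken from the text.

```python
# E_max ~ e u B R and L_min ~ V^2/(mu0 u) for an SNR-like shock.
MU0, PC = 4e-7 * 3.141592653589793, 3.086e16

u, B, R = 5e6, 1e-10, 2 * PC          # m/s, T (0.1 nT), m  (assumptions)
V = u * B * R                         # motional potential drop [V]
L_min = V**2 / (MU0 * u)              # minimum dissipation rate [W]
print(f"E_max ~ {V:.1e} eV, L_min ~ {L_min:.1e} W")   # ~3e13 eV = 30 TeV
```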
An alternative accelerator is the unipolar inductor (eg Goldreich & Julian 1969). The archetypical example is a pulsar: a spinning, magnetised neutron star. The surface field will be quite complex but a certain quantity of magnetic flux $`\mathrm{\Phi }`$ can be regarded as “open” and traceable to large distances from the star (well beyond the light cylinder). As the star is an excellent conductor, an EMF will be electromagnetically induced across these open field lines, $`V\sim \mathrm{\Omega }\mathrm{\Phi }`$, where $`\mathrm{\Phi }`$ is the total, open magnetic flux. This EMF will cause currents to flow along the field and, as the inertia of the plasma is likely to be insignificant, the only appreciable impedance in the circuit is related to the electromagnetic impedance of free space, $`Z\sim 0.3\mu _0c\sim 100`$ $`\mathrm{\Omega }`$. The maximum energy to which a particle can be accelerated is $`E_{\mathrm{max}}\sim eV`$ and the total rate at which energy is extracted from the spin of the pulsar is $`L_{\mathrm{min}}\sim V^2/Z`$. Taking the Crab pulsar as an example, $`E_{\mathrm{max}}\sim 30`$ PeV for protons and $`L_{\mathrm{min}}\sim 10^{31}`$ W. As the stellar surface may well comprise iron, even the Crab pulsar has the capacity to accelerate up to EeV cosmic rays. However, it is not obvious that all of this potential difference will actually be made available for particle acceleration. In particular, this is unlikely to happen in the pulsar magnetosphere as a large electric field parallel to the magnetic field will be shorted out by electron-positron pairs, which are very easy to produce, and radiative drag is likely to be severe. A more reasonable site is the electromagnetic pulsar wind and the surrounding nebula, where particles can gain energy as they undergo gradient drift between the pole and the equator (Bell 1992). Pulsars may well contribute to the spectrum of intermediate energy cosmic rays.
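The Crab numbers can be reproduced to order of magnitude as below; the surface field, stellar radius and spin rate are standard textbook values (assumptions, not taken from the text), and the $`O(1)`$ factors in the open flux are only indicative.

```python
# The Crab pulsar as a unipolar inductor, to order of magnitude.
import numpy as np

B, R, OMEGA, C, Z = 4e8, 1e4, 190.0, 3e8, 100.0    # T, m, rad/s, m/s, ohm
phi_open = np.pi * B * R**2 * (OMEGA * R / C)      # open magnetic flux [Wb]
V = OMEGA * phi_open / (2.0 * np.pi)               # EMF ~ Omega * Phi
print(f"V ~ {V:.1e} V -> E_max ~ {V:.0e} eV, L ~ {V**2/Z:.1e} W")
# prints ~2e16 V, i.e. tens of PeV for protons and L ~ 1e31 W, as quoted
```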
A third, prototypical accelerator is a flare, for example one occurring on the solar surface or in the Earth's magnetotail. Here magnetic instability leads to a catastrophic rearrangement, which must be accompanied by a large inductive EMF. Unless the instabilities are explosive, the effective impedance is again $`\sim \mu _0u`$, where $`u`$ is a characteristic speed. Non-relativistic flares generally convert most of the dissipated magnetic energy into heat and are notoriously inefficient in accelerating high energy particles.
Other acceleration mechanisms have been proposed and may contribute to the acceleration of the bulk of Galactic cosmic rays and relativistic electrons in non-thermal sources. These include a variety of second order processes and steady, magnetic reconnection. Many of them can be observed to operate within the solar system. However, they are thought to be too slow to be relevant to the acceleration of the highest energy cosmic rays.
## 4 ZEVATRONS
Having argued that the three most potent, observed accelerators are shocks, unipolar inductors and flares, let us see how they can be modified to account for $`\sim `$ ZeV cosmic rays. Firstly, note that, as $`u\rightarrow c`$, mildly relativistic shocks minimise the power that has to be invoked to attain high energy. Specifically, we need a power $`>10^{39}`$ W to account for 300 EeV cosmic rays and this exceeds the bolometric luminosity of a powerful quasar. One of the few sites where such a large potential difference can be achieved is the termination shock of a powerful radio jet like that associated with Cygnus A (eg Cavallo 1978). Stretching the numbers a little, we combine a field strength $`\sim 10`$ nT with a speed $`\sim c`$ and a transverse scale $`\sim 3`$ kpc, which gives $`E_{\mathrm{max}}\sim 300`$ EeV. The problem with this model is that observed UHE cosmic rays are not positionally identified with the few known radio sources within $`D(E)\sim 30`$ Mpc that might be powerful enough to account for them (cf Farrar & Biermann 1998).
A more elaborate shock accelerator is the $`\gamma `$-ray burst blast wave (eg Waxman 1995). Here, the shocks (assumed to be spherical) are ultrarelativistic with Lorentz factor $`\mathrm{\Gamma }`$. The maximum energy, measured in the frame of the explosion, to which a proton can be accelerated in a dynamical timescale from an ultrarelativistic shock of radius $`R`$ is $`E_{\mathrm{max}}\sim eB^{}Rc`$, where $`B^{}`$ is the comoving field strength. The explosion power, adopting the most elementary of assumptions, is then $`L_{\mathrm{min}}\sim 4\pi \mathrm{\Gamma }^2(E_{\mathrm{max}}/e)^2/\mu _0c`$. Observed bursts have typical explosion powers estimated to be $`L_{\mathrm{exp}}\sim 10^{45}`$ W, which can be consistent with 300 EeV proton acceleration as long as $`\mathrm{\Gamma }<300`$, which is just compatible with existing models. A serious physical constraint is the avoidance of radiative loss in this environment. An observational concern with this model is the improbability of having enough active bursts close enough to supply the highest energy particles roughly isotropically (cf Waxman & Miralda-Escudé 1996).
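The minimum-power relation is straightforward to evaluate at the quoted numbers; the sketch below simply confirms that $`\mathrm{\Gamma }\lesssim 300`$ keeps $`L_{\mathrm{min}}`$ below the $`\sim 10^{45}`$ W estimated for observed bursts.

```python
# Blast-wave minimum power, L_min ~ 4 pi Gamma^2 (E_max/e)^2 / (mu0 c).
import numpy as np

MU0, C = 4e-7 * np.pi, 3e8
V = 3e20                                   # E_max/e for 300 EeV protons [V]
for gamma in (100.0, 300.0):
    L = 4 * np.pi * gamma**2 * V**2 / (MU0 * C)
    print(f"Gamma = {gamma:3.0f}: L_min ~ {L:.1e} W")   # < ~1e45 W
```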
The most relevant variant on unipolar induction is magnetic energy extraction from spinning black holes, where the magnetic field is supported by external current, and the horizon is an imperfect conductor with resistance $`\sim 100\mathrm{\Omega }`$ (eg Thorne et al 1986). This impedance is matched to the electromagnetic load so that roughly half of the available spin energy ends up in the irreducible mass of the hole, the remainder being made available for particle acceleration. The total electromagnetic power needed to account for $`300`$ EeV acceleration is, once more, $`\sim 10^{39}`$ W. A rapidly spinning, $`10^9`$ M<sub>⊙</sub> hole endowed with a field strength $`>1`$ T, or a $`10^5`$ M<sub>⊙</sub> hole threaded by a $`>10^4`$ T field, suffices to accelerate 300 EeV particles. The major concern with this model is that the radiation background must be extremely low in order that catastrophic loss due to pion and pair production be avoided. Specifically, it is necessary that the microwave luminosity in an acceleration zone, of size $`R`$, be $`<10^{34}(R/10^{14}\mathrm{m})`$ W, far smaller than the unobserved electromagnetic power.
The best generalization of flare acceleration involves “magnetars”, which are young, spinning neutron stars endowed with a $`10-100`$ GT surface magnetic field, as first postulated by Thompson & Duncan (1996). Now, the observation of $`5-7`$ s period pulsations from three “soft gamma repeaters” effectively confirms their identification as old magnetars that have been decelerated by electromagnetic torque and which are now powered by magnetic energy released in a series of giant flares (Kouveliotou et al 1998). The inductive EMFs associated with an electromagnetic flare from a magnetar can be as high as $`3\times 10^{19}`$ V, making them candidate UHE accelerators because the surface composition is likely to be Fe. However, the available reservoir of magnetic energy is only $`\sim 10^{40}`$ J and the magnetar birthrate is no more than $`\sim 10^{-3}`$ yr<sup>-1</sup> in the Galaxy. This rules them out as an extragalactic source. Only if UHE cosmic rays have a Galactic origin (and the large scale anisotropy observations suggest quite strongly that they do not) can there be enough power in magnetars to account for the UHE energy density.
## 5 DISCUSSION
I have argued, tentatively, that UHE cosmic rays are created in a new population of extragalactic sources with an average luminosity density that approaches that of Galactic cosmic rays. I have also described problems with each of the candidate “conventional” mechanisms for accelerating protons to these high energies. Quite different, general inferences have been drawn here, from the same data, by Waxman, and elsewhere by others. All of this underscores the need for better statistics which should be met by the Auger project. Perhaps the most pressing need is to understand if particles of very different energy have a common origin. If true, this must rule out essentially all primordial particle/topological defect, neutron star, $`\gamma `$-ray burst explanations, leaving only massive black holes and radio source models among the possibilities discussed above. In this case, it will be possible to seek identifications, especially at the highest energies, where the positions will be most accurate and the delays due to magnetic scattering the smallest. If, alternatively, clustering and its implications are not substantiated, then the next best clues will probably come from composition studies and detailing the large scale distribution on the sky.
The most exciting outcome of all of this is that we are dealing with a new particle or defect with energy well out of the range of terrestrial accelerators. (For example, if there is a particle of energy $`E_X`$ which decays with half life $`\tau _X`$ into $`N`$ protons, then the cosmological energy density of these particles must be $`\mathrm{\Omega }_X\sim 3\times 10^{-8}N^{-1}(E_X/1\mathrm{YeV})(H_0\tau _X)`$.) Whatever happens, in a subject where the dullest and most conventional theories involve massive, spinning, black holes, ultrarelativistic blast waves and 100 GT fields threading nuclear matter, the future is guaranteed to be interesting.
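The closure-fraction scaling in the parenthesis can be checked numerically. The bookkeeping below (the number of X particles needed to supply the $`\sim 300`$ EeV proton injection rate, times their mass-energy, over the critical density) is a reconstruction, and the luminosity density and critical density are rough values; it recovers the quoted coefficient only to within an order of magnitude.

```python
# Order-of-magnitude check of the relic-particle closure fraction.
EV = 1.602e-19
H0 = 2.0e-18                          # s^-1 (h ~ 0.6, as assumed in Sec. 2)
RHO_CRIT = 8e-10                      # critical energy density [J m^-3] (rough)
L_UHE = 1e-37                         # W m^-3 per ln E at ~300 EeV (Sec. 2)
E_P = 300e18 * EV                     # energy of an observed proton [J]

def omega_x(E_X, N, tau_X):
    n_X = (L_UHE / E_P) * tau_X / N   # number density of X needed [m^-3]
    return n_X * E_X / RHO_CRIT

print(f"Omega_X ~ {omega_x(1e24 * EV, 1, 1/H0):.0e}")   # ~1e-7 for tau = 1/H0
```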
## Acknowledgements
I am indebted to John Bahcall, Jim Cronin, Michael Hillas, Martin Rees, Alan Watson and Eli Waxman for stimulating discussions and the editors for their forbearance. I also gratefully acknowledge the hospitality of the Institute for Advanced Study (through the Sloan Foundation) and the Institute of Astronomy (through the Beverly and Raymond Sackler Foundation) as well as NASA grant 5-2837.
# Upper limit on the prompt muon flux derived from the LVD underground experiment
## I Introduction
The depth – angular distribution of muon intensity measured in an underground experiment is closely related to the muon energy spectrum at the surface. Assuming the muon survival probabilities are well known for every depth and every muon energy at the surface, the analysis of the measured depth – zenith angle distribution of intensity allows us to evaluate the parameters of the muon spectrum at sea level, i.e. the normalization constant, the power index of the primary all-nucleon spectrum, $`\gamma `$, and the prompt muon flux from the decay of charmed particles produced together with pions and kaons in high-energy hadron-nucleus interactions.
Among these characteristics the value of the prompt muon flux attracts particular interest. It can be evaluated from the zenith-angle distributions of muon intensities measured at various muon energies or various depths. The fraction of prompt muons cannot be estimated from the muon energy spectrum or depth-intensity curve measured at a single zenith angle, because the same effect can be produced either by prompt muons, or by a decrease of $`\gamma `$, or both.
The charmed particles are produced together with pions and kaons in the collisions of primary cosmic rays with air nuclei. They have such short lifetimes that they decay immediately (if their energy is less than 1000 TeV) into muons and other particles. Thus, for them there is no competition between interaction and decay, and the prompt muon energy spectrum has almost the same slope as the primary spectrum. Due to the rise of the charm production cross-section in the energy range 100–1000 TeV, the power index of the prompt muon spectrum, $`\gamma _c`$, can be a little lower than $`\gamma `$. However, possible scaling violation in the fragmentation region can increase the value of $`\gamma _c`$. Due to the absence of the competition between interaction and decay of charmed particles, the zenith-angle distribution of prompt muons is almost flat compared with the $`\mathrm{sec}\theta `$ - distribution of the conventional muons (from the decay of pions and kaons). This makes it possible to estimate the fraction of prompt muons by analysing the zenith-angle distribution of muon intensities.
Numerous calculations of the prompt muon flux have been done. Different models give prompt muon fluxes which vary by two orders of magnitude. This is due to the uncertainties in the charm production cross-section, $`\sigma _c`$, the $`x`$-distribution of charmed particles ($`x=E_c/E_0`$) produced in pA-collisions, and the branching ratio of charmed particle decay into muons. The most uncertain parameter, which results in the large dispersion of the predicted prompt muon flux, is the $`x`$-distribution of produced charmed particles in the fragmentation region, important for the charm-produced cosmic-ray muons. This distribution at high energies cannot be measured precisely at accelerators, which give information only about small $`x`$. Thus, to check the models of charm production, experiments with high-energy cosmic-ray muons are useful.
The search for the prompt muon flux has been carried out with several detectors located at the surface and underground. In practice, it is convenient to express the prompt muon flux in terms of the ratio, $`R_c`$, of the prompt muon flux to that of pions at vertical. Since the slope of the prompt muon spectrum is close to that of the pion spectrum, the ratio $`R_c`$ is almost constant for all muon energies available in the existing experiments. The experimental data collected up to now show a large variation of $`R_c`$ (from $`\sim 0`$ to $`4\times 10^{-3}`$).
In a previous paper we presented our measurement of the single muon ‘depth – vertical intensity’ curve and the evaluation of the power index of the meson spectrum in the atmosphere using the ‘depth – vertical intensity’ relation for single muons. Here we present the analysis of the all-muon sample, which includes muon events of all multiplicities. The muon survival probabilities used to obtain the value of $`\gamma `$ there have been published previously, and were calculated with earlier muon interaction cross-sections. After the publication of these results, a new calculation of the cross-section of muon bremsstrahlung and of the corrections to the knock-on electron production cross-section became available. In the present analysis we have taken these corrections into account and we have estimated the uncertainties of $`\gamma `$ due to the uncertainties of the cross-sections used to simulate the muon transport through the rock. In this paper we present a more detailed evaluation of the characteristics of the muon spectrum at sea level, including the ratio of the prompt muon flux to that of pions, using the depth – zenith angle distributions of muon intensities ($`I_\mu (x,\theta )`$) measured with LVD in the underground Gran Sasso Laboratory. The analysis is based on increased statistics compared with the previous publications. The ‘depth – vertical intensity’ relation for the all-muon sample and its analysis are presented in a separate paper.
In Section 2 the detector and the procedure of data processing are briefly described. In Section 3 the results of the analysis of the muon intensity distribution ($`I_\mu (x,\theta )`$) are presented. In Section 4 we discuss our results in comparison with the data of other experiments and theoretical expectations. Section 5 contains the conclusions.
## II LVD and data processing
The LVD (Large Volume Detector) experiment is located in the underground Gran Sasso Laboratory at a minimal depth of about 3000 hg/cm<sup>2</sup>. The complete LVD will consist of 5 towers. The 1st tower has been running since June 1992, and the 2nd one since June 1994. The data presented here were collected with the 1st LVD tower during 21804 hours of live time.
The 1st LVD tower contains 38 identical modules . Each module consists of 8 scintillation counters and 4 layers of limited streamer tubes (tracking detector) attached to the bottom and to one vertical side of the supporting structure. A detailed description of the detector was given in . One LVD tower has the dimensions of $`13\times 6.3\times 12`$ m<sup>3</sup>.
The LVD measures the atmospheric muon intensities from 3000 hg/cm<sup>2</sup> to more than 12000 hg/cm<sup>2</sup> (which correspond to the median muon energies at the sea level from 1.5 TeV to 40 TeV) at the zenith angles from $`0^o`$ to $`90^o`$ (on the average, the larger depths correspond to higher zenith angles).
We have used in the analysis the muon events with all multiplicities, as well as the sample of single muons. Our basic results have been obtained with the all-muon sample. This sample contains about 2 million reconstructed muon tracks.
The acceptances for each angular bin have been calculated using the simulation of muons passing through LVD taking into account muon interactions with the detector materials and the detector response. The acceptances for both single and multiple muons were assumed to be the same.
As a result of the data processing the angular distribution of the number of detected muons $`N_\mu (\varphi ,\mathrm{cos}\theta )`$ has been obtained. An angular bin width of $`1^o\times 0.01`$ (in $`\varphi \times \mathrm{cos}\theta `$) has been used. The analysis refers to the angular bins for which the efficiency of the muon detection and track reconstruction is greater than 0.03. We have excluded from the analysis the angular bins with a large variation of depth.
The measured $`N_\mu (\varphi ,\mathrm{cos}\theta )`$-distribution has been converted to the depth – angular distribution of muon intensities, $`I_\mu (x,\mathrm{cos}\theta )`$, using the formula:
$$I_\mu (x_m,\mathrm{cos}\theta _i)=\frac{\underset{j}{}N_\mu (x_m(\varphi _j),\mathrm{cos}\theta _i)}{_j(A(x_m(\varphi _j),\mathrm{cos}\theta _i)ϵ(x_m(\varphi _j),\mathrm{cos}\theta _i)\mathrm{\Omega }_{ij}T)}$$
(1)
where the summing up has been done over all angles $`\varphi _j`$ contributing to the depth $`x_m`$; $`A(x_m(\varphi _j),\mathrm{cos}\theta _i)`$ is the cross-section of the detector in the plane perpendicular to the muon track at the angles ($`\varphi _j,\mathrm{cos}\theta _i`$); $`ϵ(x_m(\varphi _j),\mathrm{cos}\theta _i)`$ is the efficiency of muon detection and reconstruction; $`\mathrm{\Omega }_{ij}`$ is the solid angle for the angular bin, and $`T`$ is the live time. We have chosen a depth bin width increasing with depth to have comparable statistics in all depth bins from 3 to 10 km w.e. Thus, the depth bin width increases from about 100 m w.e. at 3000 m w.e. to more than 500 m w.e. at about 10000 m w.e. The muon intensities have been converted to the middle points of the depth bins taking into account the predicted depth – intensity relations for different zenith angles (we have used the parameters of the muon spectrum at sea level which fit well the ‘depth – vertical muon intensity’ relation measured by LVD). The angular bin width has been taken equal to $`\mathrm{\Delta }(\mathrm{cos}\theta )=0.025`$. The conversion to the middle points of the angular bins has been done according to the predicted angular dependence for muons from pion and kaon decay. However, due to the small angular bins this conversion does not change the angular distributions.
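For concreteness, a toy numerical version of eq. (1) for a single (depth, $`\mathrm{cos}\theta `$) bin is sketched below; every array is an illustrative placeholder, not LVD data, and only the structure of the estimator follows the equation.

```python
# Toy version of eq. (1): counts over contributing phi cells divided by
# the summed exposure for one (depth, cos theta) bin.
import numpy as np

counts = np.array([120.0, 95.0, 80.0])       # N_mu in contributing phi cells
area = np.array([60.0, 55.0, 50.0]) * 1e4    # projected areas A [cm^2]
eff = np.array([0.70, 0.65, 0.60])           # detection x reconstruction eff.
omega = np.radians(1.0) * 0.01               # solid angle of a 1 deg x 0.01 cell [sr]
t_live = 21804 * 3600.0                      # live time [s]

i_mu = counts.sum() / ((area * eff * omega).sum() * t_live)
print(f"I_mu = {i_mu:.2e} cm^-2 s^-1 sr^-1")
```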
## III Analysis of the depth – zenith angle distribution of muon intensity measured by LVD
The data analysis has included the procedure of fitting the measured depth – zenith angle distribution of muon intensity with the distributions calculated using the known muon survival probabilities, modified for the new muon bremsstrahlung cross-section, and a muon spectrum at sea level with three free parameters: the normalization constant, $`A`$, the power index of the primary all-nucleon spectrum, $`\gamma `$, and the ratio of prompt muons to pions, $`R_c`$. The depth – angular distributions of muon intensity have been calculated using the equation:
$$I_\mu (x,cos\theta )=_0^{\mathrm{}}P(E_{\mu 0},x)\frac{dI_{\mu 0}(E_{\mu 0},\mathrm{cos}\theta )}{dE_{\mu 0}}𝑑E_{\mu 0},$$
(2)
where $`P(E_{\mu 0},x)`$ is the probability for muon with an initial energy $`E_{\mu 0}`$ at sea level to survive at the depth $`x`$ in Gran Sasso rock, and $`\frac{dI_{\mu 0}(E_{\mu 0},\mathrm{cos}\theta )}{dE_{\mu 0}}`$ is the muon spectrum at sea level which has been taken according to :
$`{\displaystyle \frac{dI_{\mu 0}(E_{\mu 0},\mathrm{cos}\theta )}{dE_{\mu 0}}}`$ $`=`$ $`A\times 0.14E_{\mu 0}^{-\gamma }`$ (3)
$`\times `$ $`\left({\displaystyle \frac{1}{1+\frac{1.1E_{\mu 0}\mathrm{cos}\theta ^{}}{115GeV}}}+{\displaystyle \frac{0.054}{1+\frac{1.1E_{\mu 0}\mathrm{cos}\theta ^{}}{850GeV}}}+R_c\right)`$ (4)
where the values of $`cos\theta `$ have been substituted by $`cos\theta ^{}`$, taken either from the literature or from a simple consideration of the curvature of the Earth's atmosphere. In a search for a small contribution of prompt muons it is necessary to know precisely the angular dependence of the conventional (from pion and kaon decay) muon intensity at all energies of interest. In the standard parameterisation $`cos\theta ^{}=E_{\pi ,K}^{cr}(cos\theta =1)/E_{\pi ,K}^{cr}(cos\theta )`$, where $`E_{\pi ,K}^{cr}`$ are the critical energies of pions and kaons. $`cos\theta ^{}`$ can also be understood as the cosine of the zenith angle of the muon direction at the height of muon production. The height of muon production increases from 17 km at $`cos\theta =1`$ to about 32 km at $`cos\theta =0`$. We have found that the values of $`cos\theta ^{}`$ depend on the model of the atmosphere in the range $`cos\theta =0-0.3`$. In Figure 1 we present the predicted angular dependences of conventional muon intensities at the energy of 10 TeV. As can be seen, all curves almost coincide at $`cos\theta =0.3-1`$. However, there is a large spread of the functions at $`cos\theta =0-0.3`$. The calculations using eq. (4) with the tabulated $`cos\theta ^{}`$ (upper solid curve) or the treatment of the Earth curvature with a muon production height of 32 km (dash-dotted curve), as well as earlier published results (dashed curve), give quite similar results at all $`cos\theta `$, while the original calculations (lower solid curve) and the treatment of the Earth curvature with a muon production height of 17 km (dotted curve) are far below or above the other curves at small $`cos\theta `$. To be independent of the model we have restricted the range of $`cos\theta `$ used in the analysis to 0.3 – 1. This increases the statistical error of the results, decreasing at the same time the systematic uncertainty related to the model used. This also reduces the sensitivity of the experiment to small values of $`R_c`$. We note that the uncertainties in the rock thickness and rock density are quite high at small $`cos\theta `$. Moreover, the large derivative of the column density with angle, together with the muon scattering effect, leads to high uncertainties in the muon flux. This also justifies our decision to restrict the range of zenith angles used in the analysis.
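The "simple consideration of the curvature" variant is reproduced below: the zenith angle at an assumed production height $`h`$ follows from the sine rule in the Earth-centred triangle. This is only the geometric alternative mentioned above, not the full parameterisation used in the fit.

```python
# Curved-atmosphere correction: cos(theta*) at production height h.
import numpy as np

R_EARTH = 6371.0                                       # km

def cos_theta_star(cos_theta, h_km):
    sin_star = np.sqrt(1.0 - cos_theta**2) / (1.0 + h_km / R_EARTH)
    return np.sqrt(1.0 - sin_star**2)

for ct in (0.0, 0.1, 0.3):
    print(ct, [round(cos_theta_star(ct, h), 3) for h in (17.0, 32.0)])
# at cos(theta)=0 the two heights give ~0.07 vs ~0.10, i.e. the large
# spread near the horizon; by cos(theta)=0.3 the curves have converged
```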
We have added the term $`R_c`$, the ratio of prompt muons to pions, to the original formula. Here it has been assumed that the power index of the prompt muon spectrum is equal to that of the primary spectrum. In fact, due to the rapid rise of the charm production cross-section and the possible scaling violation in the fragmentation region, the prompt muon spectrum may have a power index, $`\gamma _c`$, different from $`\gamma `$. But the value of $`\gamma _c`$ depends on the model of charm production. To be independent of the models we have used, as a first approximation, the assumption $`\gamma _c`$=$`\gamma `$. The full formula has been multiplied by the additional normalization constant $`A`$, which has been considered as a free parameter together with $`\gamma `$ and $`R_c`$.
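A sketch of eqs. (2)-(4) is given below. The survival probability is replaced by a crude step function from the continuous-loss approximation $`dE/dX=-(a+bE)`$, with $`a`$ and $`b`$ as assumed round numbers rather than the LVD simulation values; it nonetheless shows how the $`R_c`$ term flattens the angular distribution at fixed slant depth, which is the handle the fit exploits.

```python
# Sketch of the intensity calculation of eqs. (2)-(4).
import numpy as np
from scipy.integrate import quad

A_LOSS, B_LOSS = 200.0, 0.4            # GeV/(km w.e.), (km w.e.)^-1 (assumed)

def e_min(x):                          # sea-level energy reaching slant depth x
    return (A_LOSS / B_LOSS) * (np.exp(B_LOSS * x) - 1.0)

def sea_level(e, cos_t, a=1.8, gamma=2.77, r_c=0.0):   # eq. (4), E in GeV
    return a * 0.14 * e**-gamma * (1.0 / (1.0 + 1.1 * e * cos_t / 115.0)
                                   + 0.054 / (1.0 + 1.1 * e * cos_t / 850.0)
                                   + r_c)

def intensity(x, cos_t, r_c=0.0):      # eq. (2); cos_t stands in for cos(theta*)
    return quad(sea_level, e_min(x), 1e8, args=(cos_t, 1.8, 2.77, r_c))[0]

for r_c in (0.0, 2e-3):                # the prompt term flattens the ratio
    print(r_c, intensity(8.0, 0.5, r_c) / intensity(8.0, 1.0, r_c))
```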
As a result of the fitting procedure we have obtained the values of the free parameters: $`A=1.84\pm 0.31`$, $`\gamma =2.77\pm 0.02`$ and the upper limit $`R_c<2\times 10^{-3}`$. Here and hereafter we present the errors at 68% confidence level (C.L.) and the upper limits at 95% C.L. The value of $`\chi ^2`$ is equal to 316.7 for 330 degrees of freedom. The estimates of the parameters $`A`$ and $`\gamma `$ are strongly correlated: the larger the value of $`\gamma `$, the larger the normalization factor $`A`$ should be. Figure 2 shows the contour plot of the allowed region in the $`A-\gamma `$ plane. The dependence of $`\chi ^2`$ on $`R_c`$ is presented in Figure 3, which was used to obtain the upper limit on $`R_c`$. The errors of the parameters include both statistical and systematic uncertainties. The latter takes into account the possible uncertainties in the depth and local density, but does not take into account the uncertainty in the cross-sections used to simulate the muon transport through the rock. If we add the uncertainty in the muon interaction cross-sections, the error of $`\gamma `$ increases from 0.02 to 0.05 (the uncertainty due to different cross-sections is discussed further in Section IV). This uncertainty, however, does not influence the upper limit on $`R_c`$. We note that the energy in eq. (4) is expressed in GeV and the intensity is expressed in cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. If we restrict our analysis to the depth range 5 – 10 km w.e., we obtain the following values of the parameters: $`A=1.6_{-0.6}^{+0.8}`$, $`\gamma =2.76\pm 0.06`$ and $`R_c<3\times 10^{-3}`$.
The angular distributions of muon intensities for the depth ranges of interest are presented in Figure 4 together with calculations for $`R_c=0`$ (best fit – solid curve) and $`R_c=2\times 10^{-3}`$ (upper limit – dashed curve). The normalizations of both calculations have been done independently using the fitting procedure. The data at all zenith angles are shown, but the analysis was restricted to the range $`0.3<cos\theta <1`$. The error bars show both statistical and systematic uncertainties. The calculated distributions have been obtained using eq. (4) and the values of $`cos\theta ^{}`$ described above. As can be seen from Figure 4, there is no evident increase of the deviation of the data points from the best-fit predictions ($`R_c=0`$) with increasing depth at large $`\mathrm{cos}\theta `$, as there should be if a significant prompt muon flux were present. The deepest depth bin is the exception. However, due to small statistics, the data at very large depth do not affect much the total value of $`\chi ^2`$.
If an alternative formula for the muon spectrum at sea level is used instead of eq. (4), the best-fit values of $`\gamma `$ decrease by 0.04-0.05 and are in agreement with the previously published values for single muons analysed using that formula. This difference, being comparable with our total error, is due to the factor present in the alternative formula which takes into account the rise of the hadron–nucleus cross-section at high energies. This factor appears in the calculation if the rise of the total hadron–nucleus cross-section with energy is due to the rise of the differential cross-section in the central region, while scaling is conserved in the fragmentation region. This factor makes the muon energy spectrum steeper, and the difference in the power index of the muon spectrum is about 0.04-0.05.
A similar analysis performed for single muons also shows no evidence for a prompt muon flux. We found the same value of the power index and the same upper limit on the prompt muon flux, while the absolute intensity is 10% smaller.
## IV Discussion
From the analysis of the depth-angular and depth distributions of muon intensities measured by LVD the following estimates of the parameters of the muon spectrum at sea level have been obtained: $`A=1.84\pm 0.31`$, $`\gamma =2.77\pm 0.02`$ (68$`\%`$ C.L.), $`R_c<2\times 10^{-3}`$ (95$`\%`$ C.L.). The errors include both statistical and systematic errors, with the systematic error dominating. The systematic error takes into account the possible uncertainties in the depth and local density, which have been estimated from the difference between the measured and predicted intensities for all angular and depth bins. The uncertainties of rock thickness and local density both result in an uncertainty of the column density and, hence, in an uncertainty of the muon flux. The distribution of fractional differences between measured and predicted intensities has been found to be close to gaussian with a standard deviation of about 0.04. This value has been assumed as the systematic error of the muon intensity due to the column density uncertainty. It is equivalent to a column density error of about 1$`\%`$ at a depth of 3 km w.e. It is obvious that the systematic error is more important at small depth, where the statistics is high and the statistical error is negligibly small. An additional systematic error due to the uncertainties of the cross-sections of muon interactions used to simulate the muon survival probabilities should be included. We estimate the total uncertainty in $`\gamma `$ as 0.05 and in $`A`$ as 0.5. The uncertainty in the cross-sections, however, does not affect the upper limit on $`R_c`$. To check this we have fitted the LVD data with the intensities calculated with an alternative muon bremsstrahlung cross-section and obtained the following results: $`A=1.86\pm 0.32`$, $`\gamma =2.78\pm 0.02`$ (68$`\%`$ C.L.), $`R_c<2\times 10^{-3}`$ (95$`\%`$ C.L.). The alternative muon bremsstrahlung cross-section is a little smaller, which makes the muon ’depth-intensity’ curve (with fixed $`A`$, $`\gamma `$ and $`R_c`$) flatter. This is compensated in the data analysis by an increase of $`\gamma `$. But the shape of the calculated angular distribution of muon intensities at any fixed depth, used to extract the value of $`R_c`$, is not changed and, hence, the limit on the ratio of the prompt muon flux to that of pions remains unchanged. However, the absolute value of the prompt muon flux (or its limit) varies with the muon cross-sections used, since the flux depends also on the normalization constant, $`A`$, and the power index, $`\gamma `$ (see eq. (4)).
The value of $`\gamma `$ obtained with the LVD data is in reasonable agreement with the results of many other surface and underground experiments. However, the results obtained in the experiments which used an indirect method of measuring the muon spectrum, in particular the measurement of the depth–intensity curve, are strongly affected by the muon interaction cross-sections and the algorithm applied to calculate the muon intensities. We have used the most accurate cross-sections known at present, and an algorithm which allows us to calculate the muon intensities with an accuracy of $`1\%`$ for a given set of muon interaction cross-sections and for a homogeneous medium. The algorithm can strongly influence the calculated muon intensities and, hence, the final results. Thus, the observed agreement (or disagreement) in the value of $`\gamma `$ does not necessarily mean agreement (or disagreement) in the data themselves.
The conservative upper limit on the fraction of prompt muons obtained with the LVD data ($`R_c<2\times 10^{-3}`$), even under the simple assumption that the power index of the prompt muon spectrum, $`\gamma _c`$, is equal to that of the primaries, $`\gamma `$, rules out many models of prompt muon production which predict a fraction of prompt muons of more than $`2\times 10^{-3}`$. To make this conclusion more reliable we have carried out the analysis of the depth – angular distribution of muon intensity using the prompt muon spectra predicted by different models (without a constant term $`R_c`$). We conclude that the LVD data contradict the predictions of model 1, model II and model A of the published calculations. The predictions of model 3, model I, models B and C, the recombination quark-parton model (RQPM) and some other models are comparable with the LVD upper limit, and these models cannot be ruled out. At the same time the LVD result favours the models of charm production based on the quark-gluon string model (QGSM) and the dual parton model, which predict a low prompt muon flux.
The upper limit (95$`\%`$ C.L.) obtained with the LVD data is lower than the value of $`R_c`$ found in the MSU experiment ($`R_c=(2.6\pm 0.8)\times 10^{-3}`$ at $`E_{\mu 0}=5`$ TeV). The LVD upper limit does not contradict the values of the prompt muon flux obtained in the Baksan and KGF underground experiments. Our result agrees with that of NUSEX, which did not reveal any deviation from the angular distribution expected for conventional muons.
We point out that the LVD sensitivity to the prompt muon flux is limited mainly by the systematic uncertainties connected with the slant depth, local density fluctuations and the differences in the theoretical shape of the underground muon intensities.
## V Conclusions
The analysis of the depth–angular distribution of muon intensity measured by LVD in the depth range 3000-10000 hg/cm<sup>2</sup> has been done. The parameters of the muon energy spectrum at sea level have been obtained (see eq. (4)): $`A=1.8\pm 0.5`$, $`\gamma =2.77\pm 0.05`$ and $`R_c<2\times 10^{-3}`$ (95$`\%`$ C.L.). The errors include both statistical and systematic uncertainties. The upper limit on the fraction of prompt muons, $`R_c`$, favours the models of charm production based on QGSM and the dual parton model, and rules out several models which predict a high prompt muon flux. A similar analysis performed for single muon events gives the same power index and the same upper limit on the fraction of prompt muons, while the normalization constant is 10% smaller.
## VI Acknowledgements
We wish to thank the staff of the Gran Sasso Laboratory for their aid and collaboration. This work is supported by the Italian Institute for Nuclear Physics (INFN) and in part by the Italian Ministry of University and Scientific-Technological Research (MURST), the Russian Ministry of Science and Technologies, the Russian Foundation of Basic Research (grant 96-02-19007), the US Department of Energy, the US National Science Foundation, the State of Texas under its TATRP program, and Brown University.