# Measurements of Neutrons in 11.5 A GeV/c Au + Pb Heavy-Ion Collisions
## I Introduction
As constituents of the colliding nuclei in a relativistic Au + Pb heavy ion interaction, neutrons carry over 60% of the incident energy. Knowledge of the final distribution of neutrons resulting from these collisions is therefore important for determining the amount of energy deposited in the central rapidity region in such collisions. Because the time scale of these collisions is so small (the total duration until hadronic freeze-out is believed to be on the order of 10 fm/c), the collision dynamics are dictated by the strong interaction. It is then reasonable to assume that the behaviour of neutrons in the collision should closely parallel that of protons, which have been extensively measured for similar collision systems and energies. Indeed, in the absence of neutron data this has been widely assumed in the calculation of light nuclei coalescence parameters (sometimes with the explicit assumption that the neutron to proton ratio available for coalescence is the primordial ratio).
It is also widely assumed that the neutron to proton ratio at hadronic freeze-out should show significant equilibration from the initial Au + Pb ratio of 1.52:1. Measurements of nucleons in Au + Au collisions at lower energy do in fact exhibit this. This equilibration is expected to be enhanced at AGS energies as a result of a large amount of strong resonance production in the collision region, which should have the effect of speeding the system toward chemical equilibrium and so transferring some of the initial isospin imbalance to an excess of $`\pi ^-`$ over $`\pi ^+`$.
## II Experiment 864
### A The E864 Spectrometer
BNL Experiment 864 is an open geometry, high data rate spectrometer that was designed chiefly to search for cold strange quark matter (strangelets) which may be produced in heavy ion collisions. Plan and elevation views of the spectrometer are shown in Figure 1. A thorough description of the apparatus is provided in .
A beam of gold ions with momentum 11.5 A GeV/c is incident on a fixed lead target. The interaction products then travel downstream through two dipole magnets M1 and M2. A collimator inside of M1 defines the experimental acceptance; for neutral particles this is -32 mr to 114 mr in the horizontal and -17 mr to -51.3 mr in the vertical.
The charged particle tracking system consists of three hodoscope scintillator walls (H1, H2, and H3) and two straw tube stations (S2 and S3). The hodoscopes provide for each charged particle hit a measurement of time, charge, and position. This information is then used to build track candidates which are either rejected or confirmed and further refined by straw tube position information. With knowledge of the fields in M1 and M2, tracked particles are then identified by mass computed through rigidity, charge, and velocity with the assumption that the tracks originate from the target.
At the downstream end of the apparatus is the E864 hadronic calorimeter, which is crucial to the neutral particle analyses. The calorimeter (see Figure 2) consists of an array of 58 x 13 towers, each 10 cm x 10 cm on the front face and 117 cm long. This lead/scintillator sampling calorimeter is of a spaghetti design with scintillating fibers running lengthwise down each calorimeter tower, giving a total lead to scintillator ratio of 4.55:1 by volume. The calorimeter has excellent resolution for hadronic showers in energy ($`\sigma _E/E=0.34/\sqrt{E}+0.035`$ for $`E`$ in units of GeV) and time ($`\sigma _t\approx `$ 400 ps) and is described in detail in .
Collision centrality is defined in E864 through a measurement of charged particle multiplicity. The E864 multiplicity counter is an annular piece of scintillator placed around the beam pipe 13 cm downstream of the target that subtends an angular range from 16.6° to 45.0°. The annulus is separated into four quadrants, each of which is viewed by a photomultiplier tube. The sum of the integrated charge signal from the four quadrants is proportional to the charged particle multiplicity of the collision and is used to define event centrality.
### B Neutron analysis
We measure the invariant multiplicity of neutrons by dividing momentum space into bins in rapidity and transverse momentum of size $`\mathrm{\Delta }y`$ by $`\mathrm{\Delta }p_T`$. In terms of the actual experimental quantities, we then have the invariant multiplicity in a momentum bin with average transverse momentum $`<p_T>`$ as
$$\frac{1}{2\pi p_T}\frac{d^2N}{dydp_T}=\frac{1}{2\pi <p_T>\mathrm{\Delta }y\mathrm{\Delta }p_T}\frac{N_{counts}}{N_{events}}\frac{1}{ϵ_{ACC}(y,p_T)\times ϵ_{REC}(y,p_T)}$$
(1)
Here $`N_{counts}`$ is the number of neutrons reconstructed in our calorimeter analysis of $`N_{events}`$ events. $`ϵ_{ACC}(y,p_T)`$ is the geometric acceptance for neutrons in our apparatus, and $`ϵ_{REC}(y,p_T)`$ is the efficiency for reconstructing with our analysis algorithm those neutrons which are accepted.
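As a concrete illustration of how Eq. (1) is evaluated per bin, the following sketch uses placeholder numbers rather than measured E864 values:

```python
import math

def invariant_multiplicity(n_counts, n_events, pt_mean, dy, dpt,
                           eff_acc, eff_rec):
    """Evaluate Eq. (1) for one (y, pT) bin.

    n_counts -- neutrons reconstructed in the calorimeter analysis
    n_events -- number of analyzed events
    pt_mean  -- average transverse momentum <pT> of the bin (GeV/c)
    dy, dpt  -- bin widths in rapidity and in pT (GeV/c)
    eff_acc  -- geometric acceptance eps_ACC(y, pT)
    eff_rec  -- reconstruction efficiency eps_REC(y, pT)
    """
    phase_space = 2.0 * math.pi * pt_mean * dy * dpt
    return (n_counts / n_events) / (phase_space * eff_acc * eff_rec)

# Illustrative bin only; none of these numbers are measured values.
print(invariant_multiplicity(n_counts=1500, n_events=2.0e5, pt_mean=0.2,
                             dy=0.2, dpt=0.05, eff_acc=0.10, eff_rec=0.35))
```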
The first step in determining $`N_{counts}`$ is to identify all those calorimeter towers in a given event which are peak towers. We define a peak tower as a tower which has more energy deposited than any of its 8 neighbors. For each peak tower, we define the corresponding energy shower as including all towers in a 3x3 grid centered on the peak tower. A 3x3 array is used because the improvement in energy resolution obtained by using a 5x5 grid is slight and the resulting contamination is much larger. Approximately 90% of the shower energy for a neutron with a kinetic energy of 6 GeV is contained in a 3x3 grid.
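A minimal sketch of this peak-finding step follows; the array shape matches the 58x13 tower geometry above, while the skipping of edge towers and the strict-maximum tie convention are simplifying assumptions of the sketch:

```python
import numpy as np

def find_showers(towers):
    """Return (row, col, E_3x3) for every peak tower, i.e. every tower
    with more energy deposited than any of its 8 neighbors; edge towers
    are skipped here for brevity."""
    showers = []
    for i in range(1, towers.shape[0] - 1):
        for j in range(1, towers.shape[1] - 1):
            window = towers[i - 1:i + 2, j - 1:j + 2]
            peak = towers[i, j]
            if peak > 0 and peak == window.max() and (window == peak).sum() == 1:
                showers.append((i, j, window.sum()))  # 9-tower energy sum
    return showers

# Toy event: one shower, ~6 GeV total, centered on tower (20, 6).
ev = np.zeros((58, 13))
ev[19:22, 5:8] = 0.15          # leakage into the 3x3 neighbors
ev[20, 6] = 4.95               # peak tower
print(find_showers(ev))        # [(20, 6, 6.15)]
```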
Each of these energy showers is then put through the following series of contamination cuts (a schematic sketch of the cuts follows this list):
* There must be no charged particle track found using only the hodoscopes which points to any of the nine towers in the shower. With the dipole fields in M1 and M2 set to 1.5 Tesla (the fields are aligned in the same direction), most charged particles are swept out of the neutron fiducial region, so that the ratio of proton hits to neutron hits is approximately 1 to 3 with some variation as a function of position. Charged pions, kaons, and deuterons are light and/or rare enough that calorimeter hits due to these species are suppressed by at least another order of magnitude.
Of the charged particle peaks in the neutral fiducial region of the calorimeter, we estimate from Monte Carlo simulations that 81% are rejected by this method, compared with 6% of neutron peaks.
* There must be no energy peak larger than some minimum energy, $`E_{PK}`$, in the square of 16 towers which borders the shower. Values of $`E_{PK}`$ used in the analysis vary from 1.5 to 2.5 GeV as a function of rapidity.
* A cut is made on the ratio, $`R_{5x5/3x3}`$, of total energy in the 25 towers around the peak to total energy in the 9 towers around the peak. The maximum allowable value of $`R_{5x5/3x3}`$ in the analysis ranges from 1.7 to 2.5. The cut values for $`E_{PK}`$ and $`R_{5x5/3x3}`$ were chosen by observing the values of these quantities for showers designated as clean by the other contamination cuts, with consideration given both to the level of background present as a function of rapidity and to keeping the efficiency as high as reasonably possible.
* Each tower of the shower which has a nonzero time must show agreement within a time window, $`t_{max}`$, with the peak tower. Values of $`t_{max}`$ used were 1.6 ns for bins of $`y`$ = 1.7 and 1.75 ns elsewhere. Side tower time resolutions in neutron showers were approximately 500 ps (not Gaussian) with some variation as a function of energy. This cut was adjusted to be 95% efficient for isolated neutron showers at rapidity 1.9 and above.
* In bins of rapidity 2.5 and greater, a clear separation can be seen between neutrons and photons on a plot of shower mass versus percentage of shower energy in the peak tower, indicating that neutron showers are much wider than photon showers. For these rapidities, we place a cut on the ratio of energy in the peak tower to energy in the 9 tower sum to reduce contamination from photons, rejecting showers for which this ratio is larger than 0.83.
* Finally, the shower energy profile is compared with the energy profiles of several hundred thousand isolated proton showers which span the full range of incident angles and front face hit positions and most of the rapidity range of neutrons incident on the calorimeter.
The fraction of shower energy in each of the nine towers is calculated and rounded to the nearest 5%. This set of nine fractions is then compared with the set of fractions for each of these isolated proton showers. If fewer than two matching sets of fractions are found, the shower is discarded.
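A schematic rendering of the cuts just listed; the per-shower fields and the exact rapidity dependence of the thresholds are simplified assumptions of this sketch, with the cut values taken from the ranges quoted above:

```python
def passes_contamination_cuts(shower, y):
    """Apply the shower cleaning cuts described in the list above.
    `shower` is a hypothetical dict holding the per-shower quantities."""
    e_pk  = 1.5 if y < 2.0 else 2.5              # border-peak veto, GeV (1.5-2.5)
    r_max = 1.7 if y >= 2.5 else 2.5             # max R_5x5/3x3 (1.7-2.5)
    t_max = 1.6 if abs(y - 1.7) < 0.1 else 1.75  # timing window, ns

    if shower["has_pointing_track"]:                       # charged veto
        return False
    if shower["max_border_peak"] > e_pk:                   # neighbor-peak veto
        return False
    if shower["e_5x5"] / shower["e_3x3"] > r_max:          # isolation ratio
        return False
    if any(abs(t - shower["t_peak"]) > t_max               # tower timing
           for t in shower["side_times"]):
        return False
    if y >= 2.5 and shower["e_peak"] / shower["e_3x3"] > 0.83:  # photon cut
        return False
    return shower["profile_matches"] >= 2                  # proton-library match
```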
For those showers which survive the cuts listed above, mass is calculated from the peak tower time, nine tower energy sum, and shower position as $`m=E_{sum}/(\gamma _{peak}-1)`$. A momentum is also assigned to the shower assuming a neutron mass and using energy and time measurements weighted according to their errors.
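A sketch of this mass calculation from time of flight; the flight path below is a hypothetical target-to-calorimeter distance, not the surveyed E864 geometry:

```python
import math

C_LIGHT = 0.299792458          # m/ns

def shower_mass(e_sum, t_peak, path_m):
    """m = E_sum / (gamma_peak - 1), treating the 9-tower sum as the
    kinetic energy and getting gamma from the peak-tower flight time."""
    beta = path_m / (C_LIGHT * t_peak)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return e_sum / (gamma - 1.0)

# A 6 GeV kinetic-energy neutron over an assumed 29 m flight path
# arrives after about 97.6 ns and reconstructs near the nucleon mass:
print(shower_mass(e_sum=6.0, t_peak=97.6, path_m=29.0))  # ~0.92 GeV/c^2
```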
Momentum space is then divided into bins of 50 MeV/c in $`p_T`$ by 0.2 units in $`y`$. For each bin we make a mass plot as shown in Figure 3. There is a background at low mass which is clearly evident for rapidities of 2.3 and below. This background is predicted qualitatively by our GEANT detector simulations as a mixture of scattered photons and hadrons, but the predicted level is lower than the level seen in the data by a factor of four or more. We believe that this discrepancy is due to a combination of our modelling of the calorimeter time response near threshold being somewhat incorrect and the absence from our GEANT simulations of certain downstream geometries which may contribute scattered particles to this low energy background.
We count neutrons in the mass range from 0.55 $`GeV/c^2`$ to 1.55 $`GeV/c^2`$ and then subtract out the contribution of this background according to a parameterization of an exponentially decaying background plus a Gaussian signal. Subtracting away the background in this manner leaves a neutron signal shape that agrees well with simulations (Figure 4) in which isolated proton showers are overlaid on the calorimeter to simulate the calorimeter response to neutron showers with the contamination of a heavy-ion event. (Note that while the shape agrees well, the energy scale in the simulations must often be adjusted by around 5% to show agreement with the data; possible systematic error from this effect is dealt with separately as part of the study of differences in calorimeter response between protons and neutrons.) This low mass background produces only a small correction to the number of neutrons counted, never larger than 14% according to our parameterization.
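The subtraction can be sketched as a five-parameter fit per bin; the starting values here are guesses for illustration, not the parameterization actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_plus_gauss(m, a, b, n0, mu, sigma):
    """Exponentially decaying background plus a Gaussian neutron peak."""
    return a * np.exp(-b * m) + n0 * np.exp(-0.5 * ((m - mu) / sigma) ** 2)

def background_subtracted_counts(mass, counts):
    """Fit one (y, pT) mass plot and return the neutron count in
    0.55 < m < 1.55 GeV/c^2 after removing the fitted background."""
    p0 = (counts.max(), 3.0, counts.max(), 0.94, 0.20)   # illustrative seeds
    popt, _ = curve_fit(exp_plus_gauss, mass, counts, p0=p0)
    sel = (mass > 0.55) & (mass < 1.55)
    bkg = popt[0] * np.exp(-popt[1] * mass[sel])
    return counts[sel].sum() - bkg.sum()
```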
Two classes of background can produce mass peaks under the neutron peak and are subtracted away using Monte Carlo simulations. The first class is from calorimeter hits by particle species other than neutrons. This includes mainly neutral kaons and protons which are missed by the tracking system. This class amounts to less than a 10% correction to the number of counted neutrons in most of the momentum bins in which we measure. The second class of background is due to neutrons which do not come directly from the target but come from inelastic scatterings in other parts of the apparatus (neutrons which elastically scatter are dealt with as part of the geometric acceptance calculation). These are largely from the upper edge of the collimator, which sits approximately one meter downstream of the target; only scattering sources near the target can produce neutrons with time and energy combinations which will allow them to fall under the mass peak of neutrons from the target. For this background, we have a check on the accuracy of the Monte Carlo simulations because a similar background is present for protons. For protons we can use tracking information to determine whether a track originated in the target or the collimator, and so we can compare the Monte Carlo predictions of this scattered background to what is present in the actual data. We find agreement to better than 25% between the amount of background predicted by detector simulations using two different input distributions and the background seen in the data for the protons. Corrections for this background are as large as 25% in central collisions near center of mass rapidity and decrease with increasing rapidity.
Each of these backgrounds is calculated and subtracted separately in each $`y,p_T`$ bin: the backgrounds discussed above are summarized in Table I.
$`ϵ_{ACC}(y,p_T)`$ is essentially the ratio of the number of neutrons which leave the target with momentum inside a given $`(y,p_T)`$ bin to the number of neutrons which strike the calorimeter (not including those which are from inelastic scattering) with momentum inside that $`(y,p_T)`$ bin. It is determined simply by a GEANT simulation of the experimental apparatus. The results of the acceptance simulation are largely insensitive to the assumed neutron input distribution, but some sharing between bins does take place particularly at large rapidity and transverse momentum.
To determine $`ϵ_{REC}`$, we have constructed a library of isolated proton showers using the charged tracking system to identify protons and contamination cuts both in the calorimeter and from tracking to ensure clean showers. To determine the efficiency for neutron reconstruction in the calorimeter as a function of energy and position (or rapidity and transverse momentum), the overall times and energies of these clean proton showers were altered while leaving the relative times and energy fractions intact to simulate neutron showers. These fake neutron showers were overlaid on complete events in the calorimeter, one per event. Because in this manner we can simulate a shower of a neutron of known momentum striking the calorimeter in a known position, we can determine the efficiency for reconstructing these fake neutron showers and take this to be our reconstruction efficiency for real neutron showers. We find an average efficiency of approximately 35% with variations as a function of momentum.
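In pseudocode form, the embedding procedure amounts to the loop below; `reconstruct` stands for the full shower finding and cut chain described earlier, and the tower-dictionary event format is an assumption of this sketch:

```python
def embedding_efficiency(events, fake_showers, reconstruct):
    """eps_REC estimate: overlay one fake neutron shower of known
    momentum per event, rerun the analysis, and count recoveries."""
    found = 0
    for event, fake in zip(events, fake_showers):
        merged = {t: event.get(t, 0.0) + fake.get(t, 0.0)
                  for t in set(event) | set(fake)}   # add tower energies
        peak = max(fake, key=fake.get)               # embedded position
        found += any(s["peak"] == peak for s in reconstruct(merged))
    return found / len(events)
```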
The efficiencies of the individual analysis cuts are listed in Table II, both for a neutron shower on an empty calorimeter and for a neutron shower in a central heavy ion event. These efficiencies vary as a function of momentum and so are listed in two different rapidity ranges.
The overall efficiency on an empty calorimeter is on average about 70% and higher near central rapidity than near beam rapidity. The extra factor of two (from 70% to 35%) of loss in efficiency is then due to occupancy in the calorimeter. There are on average 10 showers in the neutral fiducial region of the calorimeter with peak energy greater than 1 GeV in a central event, leading to an overall occupancy of about 15% of the towers having an energy of 500 MeV or higher in an average event. The occupancy is somewhat greater nearer to the neutral line ($`\sim `$ 20%) and less near the edges ($`\sim `$ 5%). Although the overall occupancy is smaller for less central events, it is in fact slightly larger near the neutral line and thus the overall efficiency increases only by a few percent, with a larger increase at high transverse momentum (away from the neutral line).
The efficiencies for finding the neutron showers are crucial numbers in this analysis, so it is important that the method described above give us an accurate calculation of these efficiencies. As a check of this method we have repeated the process by following essentially the same recipe using protons rather than neutrons. That is, we add a proton shower from our shower library (along with fake hits in our other detectors to simulate the corresponding charged particle track) to a heavy ion event and calculate our efficiency for finding this fake proton shower (when we find the corresponding fake track). For protons, we can compare this efficiency to the efficiency for finding a real proton shower when we know a real proton hits the calorimeter (i.e. we find a proton track in our data). We do in fact find that the two methods of calculating the proton efficiencies agree to within 10% of one another, with the differences largely explained by inefficiencies in our method for identifying isolated proton showers (refer to the paragraph following this one). With the implicit assumption that proton and neutron shower energy profiles will be basically indistinguishable at these energies of a few GeV, we conclude from this study that our method for determining the neutron efficiencies is sound to within 10%. Other differences in these processes for the protons and neutrons (faking of charged particle tracks, slightly different angles of incidence for protons and neutrons across the calorimeter) have been studied and are not significant sources of error ( and , respectively).
Two small corrections are then made to the $`ϵ_{REC}`$ numbers determined in this manner. The first is because we are artificially increasing the calorimeter occupancy by adding these fake neutron showers. We account for this following reference and find that it amounts to never more than a 5% correction for any momentum bin and is significantly less over most of the momentum space we measure. The second is because the cuts placed on the proton shower library which were necessary to ensure isolated showers result in throwing out a few percent of showers which are not significantly contaminated. In particular, since part of the requirement for a clean proton shower is timing agreement among the towers in the shower, the set of proton showers does not include the tails of the timing agreement distribution. Thus neutron efficiencies calculated using proton showers are slightly overestimated. We determine the size of this correction partially by using data from a later run of the experiment with an incident beam of protons rather than heavy ions to reduce contamination of the proton showers. We estimate a resulting correction that is 2% at center of mass rapidity and rises to 9% at beam rapidity.
Sources of possible systematic error which we have quantified include:
* Error due to the assumed input distributions of neutrons and sharing between neighboring bins (particularly in transverse momentum) in determination of both $`ϵ_{REC}`$ and $`ϵ_{ACC}`$. By using alternative input distributions, we estimate the size of this effect to be approximately 5% over most of the momentum space which we measure.
* Possible differences in the calorimeter response to proton and neutron showers. This is quantified by explicitly changing the gain factors used in the analysis and observing the resulting change in measured yields. This adds only 3% to 5% systematic error over most of the acceptance but becomes larger near the edges of our kinematic acceptance.
* Assumed input distribution for background studies, both for scattered neutrons and for particle species other than neutrons. This adds a 10% systematic uncertainty near center of mass rapidity and decreases at higher rapidity.
* Uncertainty in fit parameters in the subtraction of low mass background. This is estimated to add a maximum systematic error of 8% in any given bin and the uncertainty decreases as rapidity increases.
The statistical errors are generally small compared with the systematic ones. We add these two types of errors in quadrature and list the total uncertainty in each bin along with the measurement in Table III.
## III Results and Discussion
As in the E864 light nuclei measurements, we divide events into three centrality classes: 10% most central, 10-38% central and 38-66% central. The centrality is defined by our multiplicity detector and is reported in terms of percent of the geometric cross section defined by $`\sigma =\pi r_0^2(A_{Au}^{1/3}+A_{Pb}^{1/3})^2`$ with $`r_0`$ = 1.2 fm.
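For reference, with $`r_0=1.2`$ fm and $`A_{Au}=197`$, $`A_{Pb}=208`$ this gives

$$\sigma =\pi (1.2\ \mathrm{fm})^2\left(197^{1/3}+208^{1/3}\right)^2\approx 624\ \mathrm{fm}^2\approx 6.2\ \mathrm{b},$$

of which the centrality classes above are percentile slices.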
Measurements of neutron multiplicity in 10% most central Au + Pb collisions are presented in Figure 5 and in Table III. Also shown in Figure 5 are E864 measurements of proton invariant multiplicity in central collisions for the rapidity range where they overlap the neutron measurements. Measurements in each rapidity bin are multiplied by a different factor of ten for presentational purposes. Agreement with the protons is quite close where comparisons are possible. This is consistent with the assumption that the spectra of the two species should not differ considerably other than by an overall scale factor, justifying for example the calculation of the light nuclei coalescence parameters, $`B_A`$, in terms of the ratio of coalesced nuclei to protons only rather than protons and neutrons.
The agreement between the two species is made more quantitative by Figure 6, in which we show the neutron to proton ratio along with the triton to $`{}_{}{}^{3}He`$ ratio in the rapidity region where all four species are measured by E864. If we assume no kinematic dependence of the $`n/p`$ ratio and take a statistically weighted average of each point shown in Figure 6, we find an average $`n/p`$ ratio of $`1.14\pm 0.08`$.
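Here and below, the statistically weighted average of points $`r_i\pm \sigma _i`$ is the usual inverse-variance mean,

$$\bar{r}=\frac{\sum _ir_i/\sigma _i^2}{\sum _i1/\sigma _i^2},\qquad \sigma _{\bar{r}}=\left(\sum _i1/\sigma _i^2\right)^{-1/2}.$$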
To determine the ratio which is present at hadronic freeze-out of the system, we need to subtract from the nucleon multiplicities the results of feed-down from hyperon decays which occur long after freeze-out. To make this subtraction, we assume a $`\mathrm{\Lambda }`$ distribution which is parameterized according to measurements by E891 and use a distribution for the $`\mathrm{\Sigma }`$ hyperons given by the cascade code RQMD version 2.3. We follow the hyperon decay products through a GEANT simulation of the E864 apparatus to determine the number of neutrons and protons in each momentum bin which are produced in these decays. We find that the contribution to nucleon invariant multiplicities from hyperon feed-down is on the order of 15%. After correcting for this feed-down, we find an average $`n/p`$ ratio at freeze-out of $`1.19\pm 0.08`$.
At rapidities far from the beam rapidity of 3.2, such as are shown in Figure 6, light nuclei are very unlikely to be beam fragments and must therefore be formed by a coalescence mechanism. We expect then that the triton to $`{}_{}{}^{3}He`$ ratio should match the neutron to proton ratio which is present at the time when this coalescence occurs. Computing a statistically weighted average, we obtain a $`t/^3He`$ ratio of $`1.23\pm 0.04`$, which is consistent with our value for the freeze-out $`n/p`$ ratio.
The incident nuclei have a total $`n/p`$ ratio of 1.52, so this observed final ratio of 1.19 signifies considerable equilibration of the two species from their initial abundances. This is not surprising in light of the amount of strong resonance production which is believed to occur in the collision system and which should facilitate the evolution of the system toward chemical equilibrium. Evidence for a large amount of $`\mathrm{\Delta }(1232)`$ resonance production is present in the measured pion transverse momentum spectra and $`\pi ^-`$ to $`\pi ^+`$ ratio in Au + Au type collisions at the AGS from experiments 866 and 877. RQMDv2.3 in fact predicts that for some duration of the evolution of an AGS Au + Au collision, the majority of baryons exist as strong resonances, and the resulting RQMD predictions for the n/p ratio match reasonably well with our measurements (see Figure 7). Indeed, in a simplified isobar model (following ) in which half of all the incident nucleons are excited to resonances by the reaction $`N+N\to N+\mathrm{\Delta }`$ with isospin conserved, the neutron to proton ratio reaches a value of less than 1.1 without any further interactions.
In a model which imposes chemical and thermal equilibrium on the system, there is the approximate constraint that $`R_1\equiv (N_n/N_p)^2=N_{\pi ^-}/N_{\pi ^+}`$ (this is only strictly true if we also assume a Boltzmann distribution for each species). If we also impose the approximate conditions that the total number of nucleons is conserved (ignoring strange baryons) and that the total charge of the nucleons plus pions at freeze-out is equal to the initial charge of the system, we can obtain an approximate value for the ratio $`R_2\equiv (N_n+N_p)/(N_{\pi ^+}+N_{\pi ^-})`$. With a freeze-out neutron to proton ratio of 1.19, we obtain values of approximately 1.4 and 3 for $`R_1`$ and $`R_2`$, respectively. Including feed-down from resonances in this simple equilibrium model does little to change these numbers.
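The value of $`R_1`$ follows in one line: with Boltzmann statistics, the equilibrium conditions $`\mu _{\pi ^-}=\mu _n-\mu _p`$ and $`\mu _{\pi ^+}=-(\mu _n-\mu _p)`$ (from $`n\leftrightarrow p+\pi ^-`$ and $`p\leftrightarrow n+\pi ^+`$) give $`N_{\pi ^-}/N_{\pi ^+}=(N_n/N_p)^2`$, so

$$R_1=\left(\frac{N_n}{N_p}\right)^2=(1.19)^2\approx 1.4.$$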
These ratios are strongly dependent on the input $`n/p`$ ratio, and the results of these simple calculations can be made to agree reasonably well with measurements by E866 and E877 if we instead assume an $`n/p`$ ratio of 1.11 (the lower end of the range included in $`1.19\pm 0.08`$). This yields values of approximately 1.2 and 1.3 for $`R_1`$ and $`R_2`$, respectively; thus this set of measurements can be accommodated within this simple model. Note also that the inclusion of a light quark saturation factor larger than 1 as proposed in can change the predictions from such a simple picture.
The predictions of neutron invariant multiplicity from RQMD version 2.3 with and without mean field potentials are shown in Figure 8. As demonstrated, with potentials turned on the multiplicities near center of mass rapidity are underpredicted by a factor of approximately two. Agreement in this rapidity range is much better with mean fields switched off. Near beam rapidity, RQMD overpredicts the neutron yields; this is due at least in part to the fact that light nuclei are not included in RQMD but are likely present in large numbers as beam fragments near beam rapidity.
In the rapidity bins in which we have sufficient coverage in transverse momentum, the neutron data fit well to Boltzmann distributions in transverse mass,
$$\frac{1}{2\pi p_T}\frac{d^2N}{dydp_T}\propto m_Te^{-m_T/T}$$
(2)
(with $`m_T=\sqrt{p_T^2+m^2}`$) as shown in Figure 9. The extracted inverse slope parameters, $`T`$, are shown in Table IV. For the fits shown in Figure 9, we have excluded the points at lowest transverse momentum near beam rapidity (shown in Figure 9 as hollow circles) to minimize the effect of spectator neutrons on these slope parameters. Alternatively, we can use a fit to a sum of two Boltzmann distributions in these bins to account for these spectator neutrons, and the resulting slope parameters are the same as shown in Table IV within the quoted uncertainties. For the bins $`y`$=2.3 and $`y`$=2.5 where we also have measurements of proton inverse slope parameters, the slopes agree quite closely (see Table IV).
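A sketch of the inverse-slope extraction; the seed values are illustrative, and the exclusion of the hollow low-$`p_T`$ points is left to the caller:

```python
import numpy as np
from scipy.optimize import curve_fit

M_N = 0.9396                      # neutron mass, GeV/c^2

def boltzmann(pt, norm, T):
    """Eq. (2): invariant multiplicity proportional to mT*exp(-mT/T)."""
    mt = np.sqrt(pt ** 2 + M_N ** 2)
    return norm * mt * np.exp(-mt / T)

def inverse_slope(pt, inv_mult, err):
    """Fit one rapidity bin; returns (T, sigma_T) in GeV."""
    popt, pcov = curve_fit(boltzmann, pt, inv_mult, sigma=err,
                           p0=(float(inv_mult[0]), 0.25),
                           absolute_sigma=True)
    return popt[1], float(np.sqrt(pcov[1, 1]))
```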
In Figure 10 we display the yields dN/dy for participant neutrons. These were determined by directly integrating our measurements where available and extrapolating with the Boltzmann fits shown in Figure 9 where necessary. The points with significant contributions from spectator neutrons which are displayed in Figure 9 as hollow points were not integrated directly (i.e. the Boltzmann fit was used to integrate these points).
Due to our limited coverage in $`p_T`$ near center of mass rapidity we cannot accurately integrate the $`m_T`$ spectra to measure $`dN/dy`$ for neutrons in this region. To examine the behaviour of the spectrum as a function of rapidity we plot the invariant multiplicity near $`p_T`$=0 versus $`y`$ in Figure 11 a). This $`p_T`$ range (150 to 250 MeV/c) was chosen because it was common among all rapidity bins. A similar plot showing comparison with the protons in a similar $`p_T`$ bite (100 to 200 MeV/c) is shown in Figure 11 b). There is some evidence here that the neutrons exhibit a slight peak near midrapidity while the protons are flat, but in light of the size of the systematic errors on these points, this evidence is slight. One can ask if such a difference in shape would be consistent with the additional Coulomb repulsion felt by the protons. Under the assumption that the Coulomb force only has an effect after the nucleons reach freeze-out from the strong force, one can with a very simplified model estimate the effect of the Coulomb force on a proton following reference . Assuming a freeze-out radius $`r`$ and that all net charge of the source is contained within $`r`$ (in a simple spherically symmetric model, this should provide a generous upper limit for the Coulomb effect), a proton with center of mass momentum $`p_p`$ will be accelerated to a momentum of $`\sqrt{p_p^2+2Z_Ne^2/r}`$. Taking $`Z_N`$ = 150 and $`r`$ = 5 fm, we find that a proton at center of mass rapidity with $`p_T`$ = 150 MeV/c will receive an extra $`p_T`$ kick of 20 MeV/c from the Coulomb interaction. With a more realistic assumption including some form of radial flow, however, the amount of charge that is contained within a sphere with radius equal to the freeze-out radius of such a proton should be at most only a few percent of $`Z_N`$ and so we do not expect any observable effect from the Coulomb interaction.
Shown in Figure 12 are the neutron multiplicities for 10-38% and 38-66% most central events. These measurements include larger uncertainties than are present in the 10% most central data, particularly near center of mass rapidity where to a first approximation the neutron signal scales as the number of participants while background sources tend to remain constant or grow as the number of spectators. Corrections due to beam interactions which do not occur in the target are taken into account for these centralities using data from empty target runs, and this is not a significant source of systematic error.
We do see the qualitative behaviour which we expect at these centralities: the multiplicities near center of mass rapidity scale crudely with the number of participant nucleons, and we see a larger contribution from spectator neutrons at high rapidity and low $`p_T`$ as we go to less central events. We also note that the inverse slope parameters (see Table IV) become smaller as centrality decreases and, as in the central data, agreement between the proton and neutron inverse slope parameters is quite close where comparisons are possible.
## IV Summary
We have presented results from Experiment 864 for neutron invariant multiplicities produced in 11.5 A GeV/c Au + Pb collisions. These are the first neutron measurements for a system of comparable size at AGS energies or above.
We observe little kinematic dependence of the neutron to proton ratio, consistent with the idea that the neutron spectrum should to a good approximation differ from the proton spectrum only by an overall scale factor. An average neutron to proton freeze-out ratio of $`1.19\pm 0.08`$ is observed within 0.8 units of midrapidity. This value is consistent with E864 measurements of the ratio of coalesced tritons to $`{}_{}{}^{3}He`$ nuclei and represents a significant equilibration from the initial state.
## V Acknowledgements
We gratefully acknowledge the efforts of the AGS staff in providing the beam. This work was supported in part by grants from the Department of Energy (DOE) High Energy Physics Division, the DOE Nuclear Physics Division, and the National Science Foundation.
# Comment on “Dynamic Screening…” by Opher and Opher
## 1 Introduction
The rate of thermonuclear reactions in stars is increased by Debye-Hückel screening because the screening reduces the Coulomb repulsion. In the weak screening limit, which is the first classical approximation, the reaction rate enhancement factor $`w`$ was calculated by Salpeter (1954): $`w=1+\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }\equiv Z_1Z_2e^2/(TR_D)`$, $`Z_{1,2}`$ are the charges of the fusing nuclei, $`T`$ is the temperature, and $`R_D`$ is the Debye radius.
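As a rough numerical illustration of the size of the effect (the solar-core plasma parameters below are round assumed values, used only to set the scale):

```python
import math

E2    = 1.44e-7    # e^2 = alpha*hbar*c, in eV*cm
kT    = 1.35e3     # solar-core temperature, eV (~1.57e7 K)
n_eff = 1.0e26     # effective sum n_i*Z_i*(Z_i+1), cm^-3 (assumed)

r_d = math.sqrt(kT / (4.0 * math.pi * E2 * n_eff))  # Debye radius, cm
lam = E2 / (kT * r_d)                               # Lambda for Z1=Z2=1 (p+p)
print(r_d, 1.0 + lam)   # ~2.7e-9 cm and w ~ 1.04: a few-percent enhancement
```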
Thirty-four years later Carraro, Schafer, & Koonin (1988) made an interesting suggestion. They noticed that the Gamow energy of the reacting nuclei is high, i.e., only rare fast-moving nuclei have a noticeable chance to fuse. Fast ions will induce a smaller electrostatic response in the plasma than assumed by Salpeter. The authors proposed that Salpeter’s weak screening formula is not strictly valid because $`\mathrm{\Lambda }`$ actually depends on the Gamow energy of the reaction ($`\mathrm{\Lambda }`$ decreases with the increasing Gamow energy). The phenomenon was termed dynamic screening. Stellar evolution people started to use the modified fusion rates in their numerical codes.
I have recently explained that this effect is actually absent (Gruzinov, 1998). In the classical weak screening limit, $`w`$ does not depend on the Gamow energy of the fusing nuclei, and is given by the Salpeter formula. My explanation boils down to the following. The weak screening formula of Salpeter can be derived in the framework of classical statistical mechanics (e.g. DeWitt, Graboske, & Cooper, 1973). In classical statistical mechanics, fast nuclei are just as screened as slow nuclei, because the Gibbs distribution factorizes into kinetic and configuration parts. Therefore, the Gamow energy cannot enter the expression for the screening enhancement factor. A physical mechanism responsible for this somewhat paradoxical velocity-independence of the screening was identified.
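In symbols: for a classical Hamiltonian $`H=K(\mathbf{p})+U(\mathbf{q})`$ the Gibbs weight factorizes,

$$P(\mathbf{p},\mathbf{q})\propto e^{-H/T}=e^{-K(\mathbf{p})/T}\,e^{-U(\mathbf{q})/T},$$

so the configurational factor, which contains all of the screening correlations, is the same for fast and slow nuclei; this is the step that forbids any Gamow-energy dependence of $`w`$ in the classical weak screening limit.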
Now I have to return to this problem because Opher and Opher have published an ApJ paper claiming that my proof is wrong. I will explain that their arguments are incorrect.
## 2 Opher & Opher (1999)
(i) I quote Opher & Opher (1999) “The exact Gibbs distribution takes into account dynamic corrections”. I quote Landau & Lifshitz (1980) “…the probabilities for momenta and coordinates are independent, in the sense that any particular values of the momenta do not influence the probabilities of the various values of the coordinates, and vice versa.”
(ii) In my paper I calculate the thermal electric field using the classical ($`\hbar =0`$) theory. Opher & Opher (1999) propose to use $`\hbar \ne 0`$. I know that $`\hbar >0`$, and I have a rough idea of how to calculate quantum corrections to the fusion rates’ enhancement factor (Gruzinov & Bahcall, 1998). But the point of my paper was to show that the dynamic screening effect is in fact absent. Since the paper of Carraro, Schafer, & Koonin (1988) introduces the dynamic screening in a purely classical way, I have used the classical theory to show that the effect is spurious.
## 3 Conclusion
Today Salpeter’s weak screening formula is the most reliable approximation for fusion rates in the Sun (Gruzinov & Bahcall, 1998, Adelberger et al., 1998). The accuracy of Salpeter’s formula is not worse than a few percent (the calculated fusion rates deviate from the real fusion rates by no more than a few percent for all relevant solar nuclear reactions).
###### Acknowledgements.
This work was supported by NSF PHY-9513835.
# HLRZ1999_41 Scaling of magnetic monopoles in the pure compact QED
## 1 MOTIVATION
The phase transition between the confinement and Coulomb phases of the strongly coupled pure U(1) lattice gauge theory (pure compact QED) remains puzzling. For the extended Wilson and Villain actions, the presence of the two-state signal on finite lattices has been recently confirmed. On the other hand, a scaling behaviour of various bulk quantities and of the gauge-ball spectrum consistent with a second order phase transition and universality has been observed outside the narrow region in which the two-state signal occurs. (See ref. for references.)
Because the order of the phase transition for these actions is unknown, the extrapolation of these phenomena to the thermodynamic limit is uncertain. However, even if the scaling behaviour is only a transient phenomenon, it indicates that there is a region of the phase diagram described by an interacting effective field theory. It is of interest to investigate the properties of such a theory even if it is “only” effective, as effective theories are useful in physics. Here we address the question whether such a theory includes monopole degrees of freedom.
## 2 MAIN RESULTS
We have observed scaling of some observables related to the magnetic monopoles in the pure compact QED with Villain action.
In the Coulomb phase we find at various values of the coupling $`\beta `$ a very clean exponential decay of the monopole correlation function in a large range of distances. This demonstrates the dominance of a single particle state in this correlation function, the monopole, whose mass we determine. Due to its Coulomb magnetic field, the monopole mass strongly depends on the finite lattice size. However, we find that it can be reliably extrapolated to the infinite volume.
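Schematically, the mass extraction from the correlator's exponential decay looks as follows (toy data; in practice the plateau would be fitted over a window and the result then extrapolated in the volume as described in the text):

```python
import numpy as np

def effective_mass(corr):
    """m_eff(t) = log(C(t)/C(t+1)); a single-particle exponential
    C(t) ~ A*exp(-m*t) gives a plateau at m (lattice units)."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

c = 2.0 * np.exp(-0.35 * np.arange(12))   # toy correlator with m = 0.35
print(effective_mass(c))                  # flat at ~0.35
```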
The scaling behaviour of the extrapolated monopole mass $`m_{\infty }`$ at the phase transition follows a simple power law (fig. 1)
$$m_{\infty }(\beta )=a_m(\beta -\beta _c^{\mathrm{Coul}})^{\nu _\mathrm{m}},$$
(1)
with the critical exponent
$$\nu _\mathrm{m}=0.49(4).$$
(2)
The inverse mass reaches a magnitude of at least three lattice spacings.
The monopole condensate in the confinement phase shows a much weaker $`L`$ dependence. Its value extrapolated to the infinite volume, $`\rho _{\infty }`$, scales with the power law
$$\rho _{\infty }=a_\rho (\beta _c^{\mathrm{conf}}-\beta )^{\beta _{\mathrm{exp}}},$$
(3)
with the magnetic exponent
$$\beta _{\mathrm{exp}}=0.197(3).$$
(4)
As shown in fig. 2, the function (3) describes the data extremely well over a broad interval, and the scaling behaviour of the condensate is thus well established.
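Both power laws, Eqs. (1) and (3), are three-parameter fits of the same form; a minimal sketch (starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(beta, a, beta_c, expo):
    """a * |beta - beta_c|^expo, the form of Eqs. (1) and (3)."""
    return a * np.abs(beta - beta_c) ** expo

def fit_scaling(beta, obs, err):
    """Fit m_inf(beta) or rho_inf(beta); returns parameters and errors."""
    popt, pcov = curve_fit(power_law, beta, obs, sigma=err,
                           p0=(1.0, 0.643, 0.3), absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```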
The superscripts “Coul” and “conf” indicate that the corresponding values of $`\beta _c`$ have been determined by the power law fits using data only from one phase. Their values are
$$\beta _c^{\mathrm{Coul}}=0.6424(9)$$
(5)
and
$$\beta _c^{\mathrm{conf}}=0.6438(1).$$
(6)
Both values are consistent within two error bars.
Further results and technical details of our calculations are published in . We have adopted the methods of ref. . A quantity related to $`\rho _{\infty }`$ has been studied also in refs. .
## 3 INTERPRETATION OF RESULTS
The monopole mass in the Coulomb phase scales with the same Gaussian exponent $`\nu _m`$ which is also observed for the scalar gauge ball. This holds at least until the inverse mass of the latter achieves five lattice spacings. This implies that if one chooses the scalar gauge ball to become massless while the other gauge balls, whose $`\nu `$ is about 1/3, have finite non-vanishing masses, the monopoles will be massless and therefore important. Even if the scalar mass is chosen finite nonzero, and other gauge balls thus decouple, the monopoles stay present. Therefore the effective field theory would include monopole degrees of freedom, being thus a very interesting abelian gauge theory. This may be a sufficient motivation for further investigation of compact QED by the lattice community.
Now let us try to interpret our results from the point of view of Statistical Mechanics. The coexistence of first and second order phenomena is a typical property of tricritical points (TCP). As indicated schematically in fig. 3, in their vicinity crossover regions (shaded) separate regions of different behaviour even in the thermodynamic limit.
Approaching the phase transition along path a may first reveal second-order-like behaviour determined by the tricritical point, and only very close to the phase transition does the presence of the two-state signal show up. In finite systems, a two-state signal can appear even at the end of path b.
The observed properties of the compact QED with various actions can be explained by assuming the existence of a tricritical part of the manifold separating the confinement and Coulomb phases in the multidimensional space of possible couplings. Thus under this hypothesis a genuine continuum limit of the compact QED would exist.
Such a manifold may, but need not, include the couplings which have already been used for the investigation of compact QED. Therefore, the search for this manifold may require the introduction and investigation of new types of coupling terms. As the monopoles are relevant, the space of generalized couplings in which the TCP is to be located is likely to include the monopole degrees of freedom. Their influence on the transition has been studied in refs. . A possible TCP in this context has been discussed by Kleinert.
However, even if the finding of such a manifold may be challenging, the indications for its existence are remarkable: (i) the clean scaling behaviour like that of the monopole observables, and (ii) several universal phenomena in some intervals of couplings close to the phase transition points. In fact, these properties allow one to investigate the corresponding continuum limit without an actual localization of the tricritical points.
Another scenario is that the coexistence of first and second order phenomena is due to a rare, but not impossible hybrid situation depicted in fig. 4. A continuum limit would then exist in spite of the latent heat present in the thermodynamic limit.
## ACKNOWLEDGEMENTS
We thank J. Cox and U.-J. Wiese for discussions and suggestions. Computations have been performed at NIC Jülich (former HLRZ Jülich).
# A time-space varying speed of light and the Hubble Law in static Universe
## I INTRODUCTION
Recently a number of papers have been published in which the possibility of light speed variability with time has been investigated. It has been shown that models of a Varying Light Speed might resolve some cosmological problems, such as the flatness problem, the quasi-flatness problem, the horizon problem, etc.
Investigations of the possibility of variability of fundamental constants with time have a long history, and there are various approaches to the problem. After it had become clear that such a fundamental constant as the curvature radius of our Universe varies with time, a doubt arose about the constancy of other physical constants. An excellent review of the research devoted to the variability of physical constants with time can be found in Ref. .
It is obvious that introduction of variability of light speed with time is not possible without considerable modification of the Theory of Relativity. Recently a generalization of the Lorentz transformation (the so-called Projective Lorentz Transformation) has been obtained. Within this approach the variability of light speed with time and distance arises naturally from the analysis of transformations between two observers within different inertial reference systems. In the Projective Theory of Relativity, besides the fundamental speed $`c`$ there exists a new constant $`\lambda `$ that determines the magnitude of corrections to the Theory of Relativity. If $`\lambda =0`$, we return to the Lorentz transformations and the Theory of Relativity.
In this paper we consider in detail the possibility of the variability of light speed with time, and show that to describe it consistently it is necessary to modify not only the Lorentz transformation but also the translational transformations between two rest observers. This eliminates some contradictions and makes the physical picture clearer.
In Sec. II, on the basis of two simple postulates we obtain the form of the functional dependence of light speed on time and distance. Agreement between the variability of light speed and the relativistic principles requires modification of the Theory of Relativity, which is considered in Sec. III for the case of rest observers within one inertial reference system. It is shown in Sec. IV that the variability of the light speed with time and distance results in Hubble's redshift for rest sources. Formulae for aberration are obtained which can also be interpreted in terms of the Hubble Law. Possible experimental implications of the theory are studied in Sec. V.
## II A TIME-SPACE VARYING SPEED OF LIGHT
First of all let us consider in general the possibility of the variability of light speed with time. Our purpose is to obtain the simplest and most natural mode of variability of the speed with time. In particular, it is preferable that photons (by ”photon” we mean a light signal or wave packet that is much smaller than the distance it is travelling) still move without acceleration.
We require that the following postulates hold:
1. The Light Speed varies with time and distance: $`C\to C(t,\vec{r})`$
2. The speed of a particular photon is constant along its trajectory.
The first postulate seems obvious from the relativistic point of view. If a physical constant varies with time it must vary with distance as well. The second one in some respect introduces the variability of light speed with time minimally. This means that though in some point of space $`r_0`$ the light speed varies with time $`C(t,r_0)`$, if we observe the movement of a particular photon, we will find it travelling uniformly along the trajectory $`\vec{r}=\vec{r}_0+\vec{C}(t_0,\vec{r}_0)(t-t_0)`$, at a constant speed $`\vec{C}_0=\vec{C}(t_0,\vec{r}_0)=\vec{C}(t,\vec{r})`$, where $`\vec{r}_0`$, $`t_0`$ are some fixed point and moment of time. In other words, the function of light speed $`\vec{C}(t,\vec{r})`$ must satisfy the following functional equation:
$$\vec{C}(t,\vec{r}_0+\vec{C}(t_0,\vec{r}_0)(t-t_0))=\vec{C}(t_0,\vec{r}_0)$$
(1)
for any $`t`$, $`t_0`$, $`\stackrel{}{r}_0`$.
To solve this equation, let us consider the trajectory of the moving photon. Since $`\vec{C}_0=\vec{C}(t_0,\vec{r}_0)`$, $`\vec{r}_0`$ is a function of $`\vec{C}_0`$ and $`t_0`$. Thus the trajectory of the photon is
$$\vec{r}=\vec{r}_0+\vec{C}_0(t-t_0)=\vec{F}_1(\vec{C}_0,t_0)+\vec{C}_0t$$
(2)
or, since $`\vec{C}_0=\vec{C}(t,\vec{r})`$, we have
$$\vec{F}_1(\vec{C}(t,\vec{r}),t_0)=\vec{r}-\vec{C}(t,\vec{r})t.$$
(3)
The fixed moment of time $`t_0`$ can be chosen arbitrarily and does not depend on the current position $`\vec{r}`$ and time $`t`$; thus the function $`\vec{F}_1`$ does not depend on $`t_0`$. So, the most general solution of equation (1) has the following form:
$$\vec{C}(t,\vec{r})=\vec{F}\left(\vec{r}-\vec{C}(t,\vec{r})t\right)$$
(4)
where $`\vec{F}(\vec{\xi })`$ is an arbitrary function.
To make the function $`\vec{F}(\vec{\xi })`$ more specific we need to introduce additional postulates. It is however easy to see that resolving Eq. (4) in elementary functions is only possible if $`\vec{F}`$ is linear: $`\vec{F}(\vec{\xi })=\vec{c}+\lambda c^2\vec{\xi }`$, where $`\vec{c}`$ and $`\lambda c^2`$ are constants. Therefore, the simplest non-trivial dependence of the light speed on time and distance satisfying the above formulated axioms has the following form:
$$\vec{C}(t,\vec{r})=\frac{\vec{c}+\lambda c^2\vec{r}}{1+\lambda c^2t}$$
(5)
The constant $`\lambda `$ is a new fundamental constant which determines the magnitude of effects caused by the dependence of light speed on time and distance. In particular, if $`\lambda =0`$, the light speed is constant and equal to the constant $`c`$ ($`\vec{c}=c\vec{n}`$, where $`\vec{n}`$ is a unit vector). The initial moment of time $`t=0`$ corresponds to the present moment when the fixation of units of measurement takes place. The unit of time is chosen so that the light velocity is equal to $`C(0,0)=c=299792458\ \mathrm{m\,s}^{-1}`$ at that moment ($`t=0`$).
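The two postulates are easy to check numerically for the solution (5); the value of $`\lambda `$ below is an arbitrary hypothetical choice:

```python
import numpy as np

C0  = 299792458.0      # c = C(0,0), m/s
LAM = 1.0e-27          # hypothetical lambda, s/m^2

def C(t, r):
    """Eq. (5) in one dimension."""
    return (C0 + LAM * C0 ** 2 * r) / (1.0 + LAM * C0 ** 2 * t)

# Second postulate: a photon emitted at (t0, r0) keeps its speed
# along its straight trajectory r = r0 + C(t0, r0) * (t - t0).
t0, r0 = 0.0, 0.0
c_emit = C(t0, r0)
for t in (1.0, 1.0e3, 1.0e6):                  # seconds
    r = r0 + c_emit * (t - t0)
    assert np.isclose(C(t, r), c_emit)
print("constant along the trajectory:", c_emit)
```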
If the parameter $`\lambda `$ is small, the effects connected with the light speed variability with time and distance will manifest themselves in long times $`t`$ and at big distances $`r`$ from an observer. That is, only at cosmological scale.
As it was mentioned in Sec.I, consistent introduction of light speed variability with time and distance requires a considerable generalization of the Theory of Relativity. In Refs. , it was shown how such a generalization can be applied to the Lorentz transformation.
The functional dependence (5) requires a generalization of the transformation between two rest observers within one inertial reference system. Indeed, let us consider the rest observer at the origin $`x=0`$ who at the moment $`t=0`$ emits a light signal in the direction of the second rest observer at the point $`x=R`$. The speed of this signal equals $`C(0,0)=c`$, and propagating according to the second postulate at the constant speed $`c=C(t,ct)`$, it reaches the second observer at the moment of time $`t=R/c`$. However, the second observer cannot reflect this signal with the same speed, because in that case it would return to $`x=0`$ with the speed ”$`c`$”, which is greater than the speed of light for that moment:
$$c>C(\frac{2R}{c},0)=\frac{c}{1+2\lambda cR}.$$
(6)
It is especially strange from the point of view of the observer at $`x=0`$, because for him the light speed
$$\vec{C}(t,0)=\frac{\vec{c}}{1+\lambda c^2t}$$
(7)
is isotropic, and he can receive and emit signals with the same speed in any direction ( for the given moment of time).
Such an apparent inequality of two rest observers shows that it is necessary to consider in detail the relation not only between the measurements performed by observers in different inertial frames of reference but also between observers within the same inertial frame. These transformations, along with the Projective Lorentz Transformations, provide the necessary generalization of the Theory of Relativity.
## III GENERALIZATION OF TRANSLATIONAL TRANSFORMATIONS.
Let us consider two rest observers within one inertial system who are situated at the points $`x=0`$ and $`x=R`$. We denote coordinates and times of events as measured by the first and second observers respectively by $`x,y,z,t`$ and $`X,Y,Z,T`$. The question is as follows: ”What is the most natural way to generalize translational transformations?”
$$\{\begin{array}{c}X=x-R,\hfill \\ Y=y\hfill \\ T=t\hfill \end{array}\stackrel{\mathrm{?}}{\to }\{\begin{array}{c}X=X(x,y)\hfill \\ Y=Y(x,y)\hfill \\ T=T(x,y,t)\hfill \end{array}$$
(8)
(Below we will only consider two dimensions $`(x,y)`$, because all the formulae for $`y`$ and $`z`$ components are equivalent.)
To solve the stated problem we use the Principle of Parametrical Incompleteness, which consists in the following. The set of axioms of classical mechanics is complete, and any statement formulated within the theory framework can be either proved or denied on the basis of these axioms. Reducing the number of axioms would result in the appearance of indeterminable parameters and functions, i.e. incompleteness of the theory. However, there may exist such simplifications for which only a finite set of constants remains indeterminable. These constants will then play the role of the fundamental physical constants, and the incompleteness will be parametrical.
In this way one could build the relativistic theory with the constant $`c`$ and quantum mechanics with the Planck constant $`\hbar `$. This is, so to speak, the correspondence principle in reverse. We conventionally obtain classical mechanics from relativistic mechanics in the limit $`c\to \infty `$. However, it is possible to obtain relativistic mechanics (and other generalizations) from classical mechanics by reducing the number of axioms. With each of these generalizations of classical mechanics some fundamental physical constant will be connected.
Let us formulate five axioms concerning two observers in the same reference frame.
Axioms
1. The transformations of coordinates and time are continuous, differentiable, and single-valued functions.
2. If from the point of view of one observer a free particle moves uniformly, it will move uniformly from the point of view of another observer.
3. The observers agree on units of length so that their relative distance is equal to $`R`$.
4. All the observers are equal and the transformations compose a group.
5. Space is isotropic.
The first axiom is standard for the majority of physical constructions. The second one is actually a definition of inertial reference systems and time: we define time so that the movement of a free particle is as simple as possible. The third one is a definition of units of length: two rest observers assume, by mutual agreement, that the distance between them is equal to $`R`$. These axioms are very strong and completely fix the functional form of the transformations. We can show (see Appendix) that the most general transformations satisfying the first three axioms are:
$`X`$ $`=`$ $`{\displaystyle \frac{x-R}{1-\sigma (R)x}},`$ (9)
$`Y`$ $`=`$ $`{\displaystyle \frac{\gamma (R)y}{1-\sigma (R)x}}`$ (10)
$`T`$ $`=`$ $`{\displaystyle \frac{a(R)x+b(R)t+c(R)+d(R)y}{1-\sigma (R)x}},`$ (11)
where $`\sigma (R),\gamma (R),a(R),b(R),c(R),d(R)`$ are some unknown functions. The linear fractional transformations (9)-(11) are well known as the most general geometrical transformations mapping a straight line into a straight line. This is the main point of the second axiom.
The requirement of fulfillment of group properties (axiom 4) means that there are at least three equal observers for whom:
$$x_2=\frac{x_1-R_1}{1-\sigma _1x_1},x_3=\frac{x_2-R_2}{1-\sigma _2x_2}=\frac{x_1-R_3}{1-\sigma _3x_1},$$
(12)
where $`\sigma _i=\sigma (R_i)`$. These equations are satisfied only if
$$\frac{\sigma (R_1)}{R_1}=\frac{\sigma (R_2)}{R_2}=\alpha =const$$
(13)
and
$$R_3=\frac{R_1+R_2}{1+\alpha R_1R_2}.$$
(14)
Since relative distances $`R_1`$ and $`R_2`$ are arbitrary, $`\alpha `$ is a fundamental constant which is the same for all the observers.
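The composition law (14) is straightforward to verify numerically; $`\alpha `$ and the distances below are arbitrary illustrative numbers:

```python
ALPHA = 1.0e-6          # hypothetical curvature constant

def translate(x, R):
    """Spatial part of the projective translation, Eq. (16), in 1D."""
    return (x - R) / (1.0 - ALPHA * R * x)

def compose(R1, R2):
    """Eq. (14): the relative distance of the composed translation."""
    return (R1 + R2) / (1.0 + ALPHA * R1 * R2)

x, R1, R2 = 7.0, 100.0, 250.0
print(translate(translate(x, R1), R2))   # two steps...
print(translate(x, compose(R1, R2)))     # ...equal one step with R3
```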
The reverse transformation corresponds to the substitution $`R\to -R`$, and since
$$y=\frac{1-\alpha R^2}{\gamma (R)}\frac{Y}{1+\alpha RX},$$
(15)
we have $`\gamma (R)\gamma (-R)=1-\alpha R^2`$. The isotropy of space (axiom 5) implies that the transformations are invariant under inversion of the spatial axes $`y\to -y`$, $`Y\to -Y`$, $`R\to -R`$ etc. This leads to the fact that the function $`\gamma (R)`$ is even, and for the space transformations we obtain
$$X=\frac{x-R}{1-\alpha Rx},Y=\frac{y\sqrt{1-\alpha R^2}}{1-\alpha Rx}.$$
(16)
These formulae formally coincide with the velocity transformations in the relativistic theory. It means that observers are placed in homogeneous and isotropic space of constant curvature. The coordinates they use to measure physical distance are Cartesian coordinates on Beltrami’s map. Beltrami’s space touches the space at the point where the observer is situated, and it possesses the property that any geodesic line is projected on it as a straight line. In the simplest case of a two-dimensional sphere, Beltrami’s map is a plane tangent to the sphere. The projection on the plane is made from the centre of the sphere. Physical and geometrical distances to some point are connected by the equation $`S_{phys}=\mathrm{tan}(S_{geom})`$, and for Lobachevsky space of negative curvature by $`S_{phys}=\mathrm{tanh}(S_{geom})`$. Analogous relations between geometrical and physical values also exist in the velocity space of the Theory of Relativity.
Now let us consider the transformation of time. Suppose that all events lying in some plane normal to the $`x`$ axis occur simultaneously from the point of view of one observer. Then they will be simultaneous from the point of view of the other observer as well. It means that $`T=T(x,t)`$ and $`d(R)=0`$. The requirement of isotropy (axiom 5) leads to the fact that the functions $`c(R),b(R)`$ are even, and $`a(R)`$ is an odd one. Analogously to the coordinate case we find the reverse transformation and require it to coincide with the initial one after the replacement $`R\to -R`$. This gives the following equations:
$$b(R)=\sqrt{1-\alpha R^2},c(R)=\frac{a(R)}{\alpha R}(\sqrt{1-\alpha R^2}-1)$$
(17)
The composition of transformations $`t_2=f(t_1,x_1,R_1)`$, $`t_3=f(t_2,x_2,R_2)=f(t_1,x_1,R_3)`$ is possible only if
$$\frac{a(R_1)}{R_1}=\frac{a(R_2)}{R_2}=\lambda =const.$$
(18)
So, we obtain:
$$T=\frac{t\sqrt{1-\alpha R^2}+\lambda Rx+(\sqrt{1-\alpha R^2}-1)\lambda /\alpha }{1-\alpha Rx}.$$
(19)
We should note that the synchronization procedure is derived automatically: an event that happens between the observers at equal distances from them, $`x=-X=(1-\sqrt{1-\alpha R^2})/\alpha R`$, is simultaneous for them: $`T=t`$.
Using transformations (16) and (19) it is easy to obtain transformations for the speed of particle as measured by each of the observers $`\stackrel{}{U}=d\stackrel{}{X}/dT`$, $`\stackrel{}{u}=d\stackrel{}{x}/dt`$:
$`U_X`$ $`=`$ $`{\displaystyle \frac{u_x\sqrt{1-\alpha R^2}}{1+\lambda Ru_x-\alpha R(x-u_xt)}}`$ (20)
$`U_Y`$ $`=`$ $`{\displaystyle \frac{u_y+\alpha R(yu_x-xu_y)}{1+\lambda Ru_x-\alpha R(x-u_xt)}}.`$ (21)
If the particle moves uniformly, $`\stackrel{}{r}=\stackrel{}{r}_0+\stackrel{}{u}t`$, the transformed speed does not vary with time, but it does vary with the ”initial” position of the particle $`\stackrel{}{r}_0`$.
Here we should note that, if $`\alpha =(\lambda c)^2`$, the formula (5) for the light speed $`\stackrel{}{C}(t,\stackrel{}{r})`$ possesses the following properties:
1. $`\stackrel{}{C}(t,\stackrel{}{r})`$ is invariant for both observers. This means that, if $`\stackrel{}{C}(t,\stackrel{}{r})`$ is transformed as a speed via (20),(21), the same function expressed in the coordinates of each observer stands on the right and on the left of the transformations (20),(21); a numerical check is sketched after this list. In the case of light moving along the $`x`$ axis we have:
$$C(T,X)=\frac{C(t,x)\sqrt{1-(\lambda cR)^2}}{1+\lambda RC(t,x)-(\lambda c)^2R(x-C(t,x)t)}$$
(22)
The movement in an arbitrary direction is considered in the next section.
2. $`\stackrel{}{C}(t,\stackrel{}{r})`$ is the maximal possible speed for the given point of space $`\stackrel{}{r}`$ and given moment of time $`t`$.
On the basis of these two properties we call $`\stackrel{}{C}(t,\stackrel{}{r})`$ the speed of light.
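The following sketch assumes the explicit one-dimensional form $`C(t,x)=c(1+\lambda cx)/(1+\lambda c^2t)`$, read off from Eq. (34) with $`\omega =0`$; with that assumption and arbitrary parameter values, it verifies the invariance (22) numerically.

```python
import numpy as np

# Sketch, assuming C(t,x) = c(1 + lam*c*x)/(1 + lam*c^2*t) for light
# along +x (read off from Eq. (34) with omega = 0); parameters arbitrary.
# Property 1: the same function, written in each observer's coordinates,
# satisfies the transformation (22).
c, lam, R = 1.0, 0.1, 0.6
alpha = (lam * c) ** 2

def C(t, x):                                   # local light speed along +x
    return c * (1.0 + lam * c * x) / (1.0 + lam * c**2 * t)

t, x = 0.9, 0.3                                # an arbitrary event
X = (x - R) / (1.0 - alpha * R * x)                        # Eq. (23)
T = (np.sqrt(1 - (lam*c*R)**2) / (1 - alpha*R*x)
     * (1 + lam*c**2*t) - 1) / (lam * c**2)                # Eq. (24)

rhs = (C(t, x) * np.sqrt(1 - (lam*c*R)**2)
       / (1 + lam*R*C(t, x) - alpha*R*(x - C(t, x)*t)))    # Eq. (22)
assert np.isclose(C(T, X), rhs)
print("C is form-invariant:", C(T, X))
```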
Therefore, for a consistent introduction into the theory of the light velocity (5) varying with time and distance, it is necessary to generalize the translational transformations for observers at rest:
$$X=\frac{x-R}{1-(\lambda c)^2Rx},Y=\frac{y\sqrt{1-(\lambda cR)^2}}{1-(\lambda c)^2Rx}.$$
(23)
$$1+\lambda c^2T=\frac{\sqrt{1-(\lambda cR)^2}}{1-(\lambda c)^2Rx}\left(1+\lambda c^2t\right).$$
(24)
If two observers move at a relative speed $`v`$, the generalized Lorentz transformations have the following form:
$`x^{\prime }`$ $`=`$ $`{\displaystyle \frac{\gamma (x-vt)}{1+\lambda v\gamma x-\lambda c^2(\gamma -1)t}},`$ (25)
$`y^{\prime }`$ $`=`$ $`{\displaystyle \frac{y}{1+\lambda v\gamma x-\lambda c^2(\gamma -1)t}},`$ (26)
$`t^{\prime }`$ $`=`$ $`{\displaystyle \frac{\gamma (t-vx/c^2)}{1+\lambda v\gamma x-\lambda c^2(\gamma -1)t}},`$ (27)
where $`\gamma =1/\sqrt{1v^2/c^2}`$ is the Lorentz factor. The formulae (23)-(27) form the basis of kinematics of the Projective Theory of Relativity, within which the speed of light varies with time but at the same time is an invariant of the theory.
The contradiction considered in Sec. II is now easy to resolve. From the point of view of the first observer the signal reaches the second observer $`x=R`$ at the moment of time $`t=R/c`$ with the speed $`u=c`$. From the point of view of the second observer the speed of the signal (20) and the moment of time (24) are equal to:
$$U=c\sqrt{\frac{1-\lambda cR}{1+\lambda cR}},\lambda c^2T=\sqrt{\frac{1+\lambda cR}{1-\lambda cR}}-1.$$
(28)
The observer reflects the signal with the same speed, $`U\to -U`$. However, due to the transformation of speeds, its speed relative to the first observer (20) equals:
$$u=-c\frac{1-\lambda cR}{1+\lambda cR}.$$
(29)
In a time $`t=R/c+R/|u|`$ the signal returns to the first observer at the point $`x=0`$, and has the same speed as any other light signal at that moment of time:
$$C(\frac{R}{c}+\frac{R}{|u|},0)=c\frac{1-\lambda cR}{1+\lambda cR}=|u|$$
(30)
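A numerical restatement of this resolution (arbitrary parameters, assuming $`C(t,0)=c/(1+\lambda c^2t)`$ as above): the reflected signal arrives exactly when the local light speed has decayed to $`|u|`$.

```python
import numpy as np

# Sketch with arbitrary values: a signal of speed c is sent at t = 0,
# reflected at x = R, and returns with speed |u| of Eq. (29); by then the
# local light speed C(t,0) = c/(1 + lam*c^2*t) has decreased to |u|.
c, lam, R = 1.0, 0.1, 0.5
U = c * np.sqrt((1 - lam*c*R) / (1 + lam*c*R))  # seen by observer 2, Eq. (28)
u = -c * (1 - lam*c*R) / (1 + lam*c*R)          # seen by observer 1, Eq. (29)
t_return = R / c + R / abs(u)
C_return = c / (1.0 + lam * c**2 * t_return)
assert np.isclose(C_return, abs(u))
print("returning signal speed:", abs(u), "= local light speed:", C_return)
```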
Therefore, if two particles have the same speed from the point of view of one observer, they will have different speeds for another observer. The equality of speeds is as relative a notion as the simultaneity of events is. This happens because the Projective Transformations do not conserve parallelism of straight lines.
If we add to the initial system of axioms the requirements that the equality of two speeds and the course of time be absolute, we obtain a complete axiom system in which the incompleteness associated with the undetermined constants disappears ($`\lambda =0`$, $`\alpha =0`$). If we exclude these axioms, we obtain the more general, parametrically incomplete theory with new fundamental physical constants. This is the Principle of Parametrical Incompleteness.
## IV THE HUBBLE LAW. EXPANSION OF THE STATIC UNIVERSE.
An interesting consequence of the results of previous sections arises when the Doppler effect is analyzed within one inertial system.
1. Hubble Law
Let us consider a remote source at rest with coordinates $`\stackrel{}{R}`$ emitting light in the direction of an observer situated at the origin $`x=0`$. According to the observer’s clock, the light pulse emitted at the moment of time $`t_1`$ reaches him at the moment $`t_2`$. Since the speed of this signal is constant, $`C(R,t_1)=C(0,t_2)`$, and it moves in the direction towards the observer, $`\stackrel{}{c}=-c\stackrel{}{R}/R`$, we have the following relation between $`R,t_1,t_2`$:
$$(t_2-t_1)c=R+\lambda c^2Rt_2.$$
(31)
Let us assume that light pulses are emitted with the period $`\tau _0=\mathrm{\Delta }T_1`$ and are received with the period $`\tau =\mathrm{\Delta }t_2`$. Since the source’s time $`T`$ and the observer’s time $`t`$ are related by Eq. (24), the interval $`\mathrm{\Delta }T`$ equals $`\mathrm{\Delta }t/\sqrt{1-(\lambda cR)^2}`$ for $`\stackrel{}{x}=\stackrel{}{R}`$. Thus the period of emission is $`\tau _0=\mathrm{\Delta }t_1/\sqrt{1-(\lambda cR)^2}`$, and introducing the redshift parameter $`z`$ we finally obtain:
$$1+z=\frac{\tau }{\tau _0}=\sqrt{\frac{1+\lambda cR}{1-\lambda cR}}.$$
(32)
Interpreting the redshift according to Doppler’s formula, we obtain the Hubble law: $`\stackrel{}{V}=\lambda c^2\stackrel{}{R}`$, but such an interpretation would not be correct in this case.
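A small numerical sketch (arbitrary units with $`c=\lambda =1`$, not from the paper) shows how the exact redshift (32) of a static source compares with the linear Hubble law $`z\approx \lambda cR`$ that a Doppler reading would suggest:

```python
import numpy as np

# Sketch (units with c = lam = 1): redshift (32) of a *static* source
# versus distance, compared with the linear law z ~ lam*c*R.
c, lam = 1.0, 1.0
for R in (0.01, 0.1, 0.3, 0.6, 0.9):
    z = np.sqrt((1 + lam*c*R) / (1 - lam*c*R)) - 1.0
    print(f"lam*c*R = {lam*c*R:4.2f}   z = {z:7.4f}   linear z = {lam*c*R:4.2f}")
```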
2. Distance Measurement
We can obtain the same result by the following reasoning. Suppose the observer at $`x=0`$ performs a radiolocation experiment, measuring the distance to an object at rest at $`x=R`$. At the moment of time $`t_1`$ this observer emits a light signal with the speed $`C(0,t_1)`$, receiving it back at time $`t_2`$ with the speed $`C(0,t_2)`$. If the observer (despite the different speeds of the emitted and reflected signals) assumed the distance to the object to be equal to $`l=(t_2-t_1)c/2`$, he would probably conclude that the object moves away from him at Hubble’s speed:
$$l=\frac{c}{2}\left(\frac{R}{C(t_1)}+\frac{R}{C(t_2)}\right)=R+\lambda c^2Rt=R+Vt,$$
(33)
where $`t=(t_2+t_1)/2`$. Such an interpretation would not, of course, be correct. If the observer emitted signals at a speed $`u<C(0,t_2)`$, he could (with appropriate conditions of reflection) receive them at the same speed, and the distance $`l=(t_2-t_1)u/2`$ would be unchanging and equal to $`R`$.
3. Aberration of Light
Let us obtain another useful equation which can also be interpreted in terms of the Hubble speed. An expression similar to that for aberration in the Theory of Relativity arises for a light source and receiver both at rest. Suppose the light travels in some direction $`(\mathrm{cos}\omega ,\mathrm{sin}\omega )`$ relative to the observer at $`x=0`$, and in the direction $`(\mathrm{cos}\mathrm{\Omega },\mathrm{sin}\mathrm{\Omega })`$ relative to the observer at $`x=R`$. Then the components of the light velocity are equal to
$$C_x=c\frac{\mathrm{cos}\omega +\lambda cx}{1+\lambda c^2t},C_y=c\frac{\mathrm{sin}\omega +\lambda cy}{1+\lambda c^2t}.$$
(34)
If we substitute (34) and the similar equations for the second observer into (20),(21), where $`\alpha =(\lambda c)^2`$, we obtain an identity for any $`x,y,t`$ only if:
$`\mathrm{sin}\mathrm{\Omega }`$ $`=`$ $`{\displaystyle \frac{\sqrt{1-(\lambda cR)^2}\mathrm{sin}\omega }{1+\lambda cR\mathrm{cos}\omega }},`$ (35)
$`\mathrm{cos}\mathrm{\Omega }`$ $`=`$ $`{\displaystyle \frac{\mathrm{cos}\omega +\lambda cR}{1+\lambda cR\mathrm{cos}\omega }}.`$ (36)
These formulae formally coincide with those for aberration in the Theory of Relativity if we set $`\lambda cR=V/c`$. So, we again come to the Hubble formula.
4. “Expansion” of the static Universe
If we admit the possibility that the speed of light varies with time, we necessarily come to the following cosmological model. The Universe is a stationary space of constant curvature (the Lobachevsky space). The curvature is not connected with the presence of matter and is an intrinsic property of empty space. The course of time in the Universe is defined so that it looks as simple as possible. This leads to the flat pseudo-Euclidean space-time.
The evolution of the Universe is connected with the decrease of the speed of light with time; 14(?) billion years ago the speed of light was equal to infinity. We now take this moment as the origin of time, i.e. make the shift $`t\to t-1/\lambda c^2`$ in all the formulae. Because of the infinite speed of interactions, the early Universe was homogeneous and hot. However, there was no singularity of matter. All the clocks in the Universe were synchronized ($`C=\infty `$) and pointed at the zero time mark:
$$T=\frac{\sqrt{1-(\lambda cR)^2}}{1-(\lambda c)^2\stackrel{}{R}\stackrel{}{r}}t.$$
(37)
With the course of time the speed of light was decreasing, the Universe was cooling, and the clocks located at the distance $`r=R`$ from us started to advance compared to our clock:
$$T=\frac{t}{\sqrt{1-(\lambda cR)^2}}>t$$
(38)
Nevertheless, we observe the Universe in its past state
$$T_v=\sqrt{\frac{1-\lambda cR}{1+\lambda cR}}t=\frac{t}{1+z}<t,$$
(39)
because the speed of light is finite, $`C(t,0)=(\lambda ct)^{-1}`$ (here $`z`$ is the redshift parameter).
The frequency of the light we receive from remote sources at rest is shifted to the red. The farther the source is situated from us, the more the frequency of its light is shifted to the red, in agreement with the Hubble Law.
The distance $`R_m=1/\lambda c`$ is the maximal possible distance an observer can measure, and at the same time is the radius of curvature of the Lobachevsky space. We point out that we are talking here about physical distances but not about geometrical distances which are unlimited in the Lobachevsky space. The situation is completely identical to the velocity space of the theory of relativity, for which there is the maximum possible speed $`c`$ but there is no finite limit on geometrical distance $`s=\mathrm{artanh}(u/c)`$. At any moment of time according to our clocks $`t`$ we see areas situated at the distance $`R_m`$ from us at the moment of time $`T=0`$ according to the local clock. The infinite value of the red shift parameter $`z`$ corresponds to these areas.
Although the Hubble Law is realized automatically in this cosmological model, it is obvious that including matter and gravitation into consideration can change the properties of our Space in some way, for instance, to make it expand. In this case the Hubble effect will consist of two components - the usual Doppler redshift and the shift connected with the new fundamental constant $`\lambda `$. As a result, the actual age of our Universe could be much greater than the value derived from the Hubble Law.
## V CONCLUSION: VARYING SPEED OF LIGHT AND EXPERIMENT
Let us discuss the applicability of the proposed theory to the real World. Since Hubble’s effect is naturally described within the Projective Theory of Relativity, it would be interesting to associate Hubble’s constant $`H=65km/sec/Mpc=6.7\times 10^{-11}year^{-1}`$ with the constant $`\lambda c^2`$. In this case the change of the light velocity with time would be as follows ($`r=0`$, $`t=0`$):
$$\frac{\mathrm{\Delta }C}{C}=-\lambda c^2\mathrm{\Delta }t=-6.7\times 10^{-11}\frac{\mathrm{\Delta }t}{year}.$$
(40)
Obviously, the dimensional value $`C(t,0)`$ can be expressed in terms of some units of length and time, e.g. the atomic units $`\hbar ^2/me^2`$ and $`\hbar ^3/me^4`$. In particular, the dimensionless combination $`\alpha (t)=e^2/\hbar C(t)`$ should change. The laboratory value of $`\alpha `$ is known at present (1997) with an accuracy of $`4\times 10^{-9}`$: $`\alpha ^{-1}=137.03599993(52)`$, which is close to the magnitude of the change (40).
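For orientation, a two-line estimate (with the rounded constants quoted above) of how many years of the drift (40) it would take to match the quoted laboratory accuracy of $`\alpha `$:

```python
# Sketch: the fractional drift (40) implied by lam*c^2 = H, compared
# with the 4e-9 relative accuracy of the 1997 laboratory value of alpha.
H = 6.7e-11            # per year
sigma_alpha = 4e-9     # relative accuracy of alpha
print("years of drift matching the lab accuracy:", sigma_alpha / H)  # ~60
```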
Here let us make one point clear about testing the dependence of the light velocity on time. There are two entities in our theory: $`C(t,\stackrel{}{r})`$ and $`c`$. The first one is the light velocity and the maximal possible speed of material objects; the second one is the fundamental speed arising from the parametrical incompleteness of the axioms of the theory. Only after a generalization of Quantum Electrodynamics to the case of the Projective Theory of Relativity would it be possible to say which of the constants enters $`\alpha `$. The fine-structure constant may thus depend on $`c`$, $`\alpha =e^2/\hbar c`$, and not change with time.
Recently, a new direct limit, $`|\dot{\alpha }/\alpha |<10^{-14}year^{-1}`$, has been derived from spectral properties of distant ($`z=1÷3.5`$) quasars ,. However, this does not mean that (40) is falsified by experiment. Indeed, we observe an object situated at the distance $`R`$ from us in its past state, at the moment $`t=-R/c`$ according to our clock. That time corresponds to the local time of the object $`T=-z/(1+z)\lambda c^2`$ and, therefore, the light velocity measured by an observer situated near the object equals $`C(0,T)=c(1+z)`$. From his point of view, the dimensionless combination $`\alpha (T)=e^2/\hbar C(0,T)`$ is $`1+z`$ times less than our measurement shows at the present moment of time $`t=0`$. According to the rule of transformation for speeds measured by distant observers, the light emitted by the object at the speed $`c(1+z)`$ moves at $`c`$ from our point of view and, therefore, $`\alpha =e^2/\hbar c`$. That is why the measurement of $`\alpha `$ based on the spectra of quasars does not allow us to test the change of the light velocity in time.
It is likely that only direct laboratory measurement of the light velocity in terms of the atomic units of length and time would provide a direct test for (40).
ACKNOWLEDGMENTS
I would like to thank Prof. Orlyanskij and Prof. Manida for fruitful discussions and Dr. Zaslavsky and Dr. Tishchenko for their comments on this manuscript.
APPENDIX
Let us consider arbitrary independent differentiable transformations of the coordinate $`x`$ and time $`t`$:
$$X=f(x),T=g(x,t).$$
(41)
We require the system of coordinates $`(x,t)`$ and $`(X,T)`$ to satisfy the definition of inertial reference systems:
$$\frac{du}{dt}=0\Longleftrightarrow \frac{dU}{dT}=0,$$
(42)
i.e. a free particle moves uniformly from the point of view of all observers.
By definition, the speeds are $`u=dx/dt`$ and $`U=dX/dT`$, thus:
$$U=\frac{uf_x}{g_xu+g_t},$$
(43)
where $`g_x=\partial g(x,t)/\partial x`$, etc. Differentiating (43) with respect to $`T`$ ($`dT=(g_xu+g_t)dt`$) and taking into account that the coefficients of the obtained polynomial in $`u`$ must be equal to zero (since $`u`$ is arbitrary), we obtain the system of differential equations:
$`f_{xx}g_x`$ $`=`$ $`g_{xx}f_x`$ (44)
$`f_{xx}g_t`$ $`=`$ $`2g_{xt}f_x`$ (45)
$`g_{tt}f_x`$ $`=`$ $`0.`$ (46)
Solving this system, we obtain:
$`f(x)`$ $`=`$ $`{\displaystyle \frac{ax+b}{1+\alpha x}},`$ (47)
$`g(x,t)`$ $`=`$ $`{\displaystyle \frac{\gamma t+a^{\prime }x+b^{\prime }}{1+\alpha x}}.`$ (48)
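A quick symbolic check (a sketch; the primed constants are written `a1`, `b1`) that (47), (48) indeed solve the system (44)-(46):

```python
import sympy as sp

# Sketch: verify with sympy that (47), (48) satisfy (44)-(46);
# the integration constants stay symbolic.
x, t = sp.symbols('x t')
a, b, g, ap, bp, al = sp.symbols('a b gamma a1 b1 alpha')

f = (a*x + b) / (1 + al*x)
gxt = (g*t + ap*x + bp) / (1 + al*x)

eq44 = sp.diff(f, x, 2)*sp.diff(gxt, x) - sp.diff(gxt, x, 2)*sp.diff(f, x)
eq45 = sp.diff(f, x, 2)*sp.diff(gxt, t) - 2*sp.diff(gxt, x, t)*sp.diff(f, x)
eq46 = sp.diff(gxt, t, 2)*sp.diff(f, x)
assert all(sp.simplify(e) == 0 for e in (eq44, eq45, eq46))
print("(47)-(48) solve the system (44)-(46)")
```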
In the more general case of two dimensions, the linear fractional transformations have the following form:
$`X`$ $`=`$ $`{\displaystyle \frac{ax+by+c}{1+\alpha x+\beta y}},`$ (49)
$`Y`$ $`=`$ $`{\displaystyle \frac{\overline{a}x+\overline{b}y+\overline{c}}{1+\alpha x+\beta y}},`$ (50)
$`T`$ $`=`$ $`{\displaystyle \frac{\gamma t+a^{\prime }x+b^{\prime }y+c^{\prime }}{1+\alpha x+\beta y}}.`$ (51)
It is assumed that the third axiom is equivalent to the following equations:
$$\{\begin{array}{c}X(0,y)=-R,X(R,y)=0,\hfill \\ Y(R,0)=0,Y(0,0)=0.\hfill \end{array}$$
(52)
and we obtain Eq. (10).
Experiment WA102 is designed to study exclusive final states formed in the reaction
$$pp\to p_f(X^0)p_s$$
(1)
at 450 GeV/c. The subscripts $`f`$ and $`s`$ indicate the fastest and slowest particles in the laboratory respectively, and $`X^0`$ represents the central system, presumed to be produced by double exchange processes: in particular Double Pomeron Exchange (DPE). The experiment has been performed using the CERN Omega Spectrometer, the layout of which is described in ref. . In previous analyses it has been observed that, when the centrally produced system is analysed as a function of the parameter $`dP_T`$, which is the difference in the transverse momentum vectors of the two exchanged particles, all the undisputed $`q\overline{q}`$ states (i.e. $`\eta `$, $`\eta ^{\prime }`$, $`f_1(1285)`$ etc.) are suppressed as $`dP_T`$ goes to zero, whereas the glueball candidates $`f_0(1500)`$, $`f_0(1710)`$ and $`f_2(1950)`$ are prominent .
In addition, an interesting effect has been observed in the azimuthal angle $`\varphi `$ which is defined as the angle between the $`p_T`$ vectors of the two outgoing protons. Historically it has been assumed that the Pomeron, with “vacuum quantum numbers”, transforms as a scalar and hence that the $`\varphi `$ distribution would be flat for resonances produced by DPE. The $`\varphi `$ dependences observed are clearly not flat and considerable variation is observed among the resonances produced.
Several theoretical papers have been published on these effects . All agree that the exchanged particle must have J $`>`$ 0 and that J = 1 is the simplest explanation for the observed $`\varphi `$ distributions. Close and Schuler have calculated the $`\varphi `$ dependences for the production of resonances with different $`J^{PC}`$ for the case where the exchanged particle is a Pomeron that transforms like a non-conserved vector current. In order to try to get some insight into the nature of the particles exchanged in central $`pp`$ interactions we will compare the predictions of this model with the data for resonances with different $`J^{PC}`$ observed in the WA102 experiment.
The simplest situation is the production of $`J^{PC}`$ = $`0^{-+}`$ states, where the model of Close and Schuler predicts
$$\frac{d^3\sigma }{d\varphi dt_1dt_2}\propto t_1t_2\mathrm{sin}^2\varphi $$
(2)
where $`t_1`$ and $`t_2`$ are the four-momentum transfers at the beam-fast and target-slow vertices respectively. Fig. 1a) and b) show the experimental $`\varphi `$ distributions for the $`\eta `$ and $`\eta ^{\prime }`$. They have been fitted to the form $`\alpha \mathrm{sin}^2\varphi `$, which describes the data well. It has also been found experimentally that $`d\sigma /dt`$ is proportional to $`t`$ (where $`t`$ is $`t_1`$ or $`t_2`$), as predicted from equation (2).
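An illustrative sketch of such a fit (the bin contents below are invented for illustration, not WA102 data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fitting a phi distribution to the 0-+ form alpha*sin^2(phi),
# as done for the eta and eta' in Fig. 1a),b). Counts are invented.
phi = np.radians(np.arange(15.0, 180.0, 30.0))        # six bin centres
counts = np.array([60.0, 410.0, 760.0, 735.0, 390.0, 50.0])

def model(p, a):
    return a * np.sin(p) ** 2

popt, pcov = curve_fit(model, phi, counts, sigma=np.sqrt(counts), p0=[800.0])
print(f"alpha = {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f}")
```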
The fact that both the $`\eta `$ and $`\eta ^{\prime }`$ signals are suppressed at small four-momentum transfers, where Double Pomeron Exchange (DPE) is believed to be dominant, was assumed to imply that the $`0^{-+}`$ states do not couple to DPE . However, from equation (2) it can be seen that if DPE is mediated via Pomerons transforming as vector particles, then the production of $`0^{-+}`$ resonances will be suppressed at small $`t`$. Equation (2) is general to all vector-vector exchange processes, so to investigate whether the $`\eta `$ and $`\eta ^{\prime }`$ are produced by DPE we have attempted to determine their cross sections as a function of energy. To do so we have calculated the ratio of the cross sections measured by the WA76 experiment, at 85 GeV/c ($`\sqrt{s}`$ = 12.7 GeV), to those measured by the WA102 experiment at 450 GeV/c ($`\sqrt{s}`$ = 29.1 GeV).
Up to now the determination of this ratio has been limited to resonances that decay to final states containing only charged particles, due to the fact that there was no calorimeter in the 85 GeV/c run of the WA76 experiment. However, the WA76 experiment was able to reconstruct the $`\eta \pi ^+\pi ^{-}`$ mass spectrum using the decay $`\eta \to \pi ^+\pi ^{-}(\pi ^0)_{missing}`$ . In this mass spectrum the $`\eta ^{\prime }`$ and $`f_1(1285)`$ are seen. The cross section of the $`f_1(1285)`$ at 85 GeV/c has been well measured through its all-charged-particle decay mode and hence can be used to determine the cross section of the $`\eta ^{\prime }`$, after taking into account the different acceptance and combinatorial effects; this gives
$$\frac{\sigma _{450}(\eta ^{\prime })}{\sigma _{85}(\eta ^{\prime })}=0.72\pm 0.16$$
(3)
For Pomeron-Pomeron exchange we would expect a value of $`\approx `$ 1.0, while for $`\rho `$-$`\rho `$ exchange the value would be $`\approx `$ 0.2. Due to charge conjugation the $`\eta ^{\prime }`$ cannot be produced by $`\omega `$-Pomeron exchange, and isospin forbids $`\rho `$-Pomeron exchange. Therefore, it would appear that DPE is dominant in $`\eta ^{\prime }`$ production. For the $`\eta `$ there is no possibility of determining the ratio, because in the $`\pi ^+\pi ^{-}\pi ^0`$ channel there is no suitable reference signal.
The cross section as a function of energy for the $`J^{PC}`$ = $`1^{++}`$ $`f_1(1285)`$ and $`f_1(1420)`$ has been found to be constant . Hence both the $`f_1(1285)`$ and $`f_1(1420)`$ are consistent with being produced by DPE . For the $`J^{PC}`$ = $`1^{++}`$ states the model of Close and Schuler predicts that $`J_Z`$ = $`\pm 1`$ should dominate, which has been found experimentally to be correct , and in addition that
$$\frac{d^3\sigma }{d\varphi dt_1dt_2}\propto (\sqrt{t_2}-\sqrt{t_1})^2+4\sqrt{t_1t_2}\mathrm{sin}^2\varphi /2$$
(4)
Fig. 1c) and d) show the $`\varphi `$ distributions for the $`f_1(1285)`$ and $`f_1(1420)`$. The distributions have been fitted to the form $`\alpha +\beta \mathrm{sin}^2\varphi /2`$, which describes the data well. Equation (4) also predicts that when $`|t_2-t_1|`$ is small $`d\sigma /d\varphi `$ should be proportional to $`\mathrm{sin}^2\varphi /2`$, while when $`|t_2-t_1|`$ is large $`d\sigma /d\varphi `$ should be constant. Fig. 1e) and f) show the $`\varphi `$ distributions for the $`f_1(1285)`$ for $`|t_1-t_2|`$ $`\le `$ 0.2 GeV<sup>2</sup> and $`|t_1-t_2|`$ $`\ge `$ 0.4 GeV<sup>2</sup> respectively; as can be seen from the figures, the expected trend is observed in the data.
The $`f_0(980)`$, $`f_0(1500)`$ and $`f_2(1270)`$ are other states for which the cross section as a function of energy has been found to be constant and hence are consistent with being produced by DPE. For the scalar states and for the tensor states with $`J_Z`$ = 0 Close and Schuler have predicted that
$$\frac{d\sigma }{d\varphi }\propto (R-\mathrm{cos}\varphi )^2$$
(5)
where R is predicted from ref. to be a function of $`t_1t_2`$ and can be negative or positive. In order to compare the data with the model we have studied the $`\varphi `$ dependences of the $`f_0(980)`$, $`f_0(1500)`$ and $`f_2(1270)`$ through their decays to $`\pi ^+\pi ^{-}`$ .
In ref. a Partial Wave Analysis (PWA) was performed in six bins of $`\varphi `$ in order to determine the $`\varphi `$ dependences of the above resonances. It has not been possible to perform a PWA as a function of both $`\varphi `$ and $`t_1t_2`$. Therefore, in order to determine the $`\varphi `$ dependences in intervals of $`t_1t_2`$, we have performed a fit to the total mass spectrum in each interval using the method described in ref. . A PWA has been performed in each $`t_1t_2`$ interval discussed below, integrated over $`\varphi `$, to determine the amount of $`f_2(1270)`$ produced with $`J_Z`$ = 0 compared to $`J_Z`$ = $`\pm 1`$. The amount of $`J_Z`$ = $`\pm 1`$ is found to be $`\approx `$ 10 % of the $`J_Z`$ = $`0`$ contribution, irrespective of the $`t_1t_2`$ interval. The $`J_Z`$ = $`\pm 2`$ contribution is consistent with zero.
Fig. 2a), d) and g) show the $`\varphi `$ distributions, for the $`f_0(980)`$, $`f_0(1500)`$ and $`f_2(1270)`$ respectively, for all the data. These distributions are similar to those found from a fit to the PWA amplitudes . The $`\varphi `$ distributions have been fitted to the form given in equation (5). The values of R determined from the fit are given in table 1. Since R is predicted to be a function of $`t_1t_2`$, the $`\varphi `$ distributions have been analysed in two different intervals of $`t_1t_2`$; the corresponding values of R are given in table 1. Fig. 2b), e) and h) show the $`\varphi `$ distributions, for the $`f_0(980)`$, $`f_0(1500)`$ and $`f_2(1270)`$ respectively, for $`|t_1t_2|`$ $`\le `$ 0.01 GeV<sup>4</sup>. Fig. 2c), f) and i) show the corresponding $`\varphi `$ distributions for $`|t_1t_2|`$ $`\ge `$ 0.08 GeV<sup>4</sup>.
For the resonances studied to date, in the WA102 experiment, the model of Close and Schuler is in qualitative agreement with the data. Hence the data are consistent with the hypothesis that the Pomeron transforms as a non-conserved vector current.
In order to understand what happens if a different particle is exchanged, a study has been made of the reactions
$$pp\to \mathrm{\Delta }_f^{++}(\pi ^{-})p_s$$
(6)
and
$$pp\to \mathrm{\Delta }_f^{++}(\rho ^{-})p_s$$
(7)
In this case a particle with $`I`$ = 1 has to be exchanged at the $`p`$-$`\mathrm{\Delta }^{++}`$ vertex and hence we are no longer studying reactions which proceed via DPE. In the case of central $`\pi ^{-}`$ production the most likely production mechanism is $`\pi `$-Pomeron exchange. For $`\rho ^{-}`$ production the most likely production mechanisms are $`\rho `$-Pomeron and $`\pi `$-$`\pi `$ exchange.
To select reaction (6) a study has been made of the reaction
$$pp\to p_f(\pi ^+\pi ^{-})p_s$$
(8)
at 450 GeV/c. The isolation of reaction (8) has been described in ref. . Fig. 3a) shows the $`p_f\pi ^+`$ effective mass spectrum, where a clear peak corresponding to the $`\mathrm{\Delta }^{++}(1232)`$ can be observed. In order to separate reaction (6) from the reaction
$$pp\to N_f^{*}p_s$$
(9)
where $`N_f^{*}\to \mathrm{\Delta }_f^{++}\pi ^{-}`$, the rapidity gap between the $`\pi ^{-}`$ and the $`p_f\pi ^+`$ system has been required to be greater than 2.0 units. The resulting $`p_f\pi ^+`$ effective mass spectrum is shown in fig. 3b). Reaction (6) has been selected by requiring $`M(p_f\pi ^+)`$ $`\le `$ 1.4 GeV. The remaining $`p_f\pi ^+\pi ^{-}`$ and $`\pi ^+\pi ^{-}`$ mass spectra have no resonance contributions.
The four-momentum transfer ($`|t_{fast}|`$) at the beam-$`\mathrm{\Delta }^{++}`$ vertex is shown in fig. 3c) and has been fitted to the form
$$\frac{d\sigma }{dt}=\frac{\alpha |t|}{(|t|+m_\pi ^2)^2}e^{-2\beta (|t|+m_\pi ^2)}$$
(10)
which is the standard expression used to describe $`\pi `$ exchange . The first bin in the distribution has been excluded from the fit due to the fact that the uncertainties in the acceptance correction are greatest in this bin. The fit describes the data well and yields $`\beta `$ = 2.4 $`\pm `$ 0.2 GeV<sup>-2</sup>, consistent with $`\pi `$ exchange .
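A minimal sketch of such a fit (pseudo-data generated here, not WA102 data; the paper's result is $`\beta `$ = 2.4 $`\pm `$ 0.2 GeV<sup>-2</sup>):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fitting |t_fast| to the one-pion-exchange form (10).
m_pi2 = 0.0195  # m_pi^2 in GeV^2

def ope_form(t, a, beta):
    return a * t / (t + m_pi2)**2 * np.exp(-2.0 * beta * (t + m_pi2))

t = np.linspace(0.05, 0.6, 12)                 # |t| bin centres, GeV^2
y = ope_form(t, 1000.0, 2.4)
y *= np.random.default_rng(1).normal(1.0, 0.03, t.size)  # 3% scatter
popt, _ = curve_fit(ope_form, t, y, p0=[900.0, 2.0])
print(f"fitted beta = {popt[1]:.2f} GeV^-2")
```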
The four-momentum transfer ($`|t_{slow}|`$) at the target-slow vertex is shown in fig. 3d) and has been fitted to the form $`e^{-b|t|}`$. The data with $`|t|`$ $`\le `$ 0.1 GeV<sup>2</sup> have been excluded from the fit due to the poor acceptance for the slow proton in this range. The fit yields a value of $`b`$ = 6.2 $`\pm `$ 0.1 GeV<sup>-2</sup>, which is compatible with Pomeron exchange being the dominant contribution .
In this case the azimuthal angle $`\varphi `$ is defined as the angle between the $`p_T`$ vectors of the slow proton and the $`\mathrm{\Delta }^{++}`$, and is shown in fig. 3e). The $`\varphi `$ distribution is consistent with being flat, as would be expected if the process were dominated by the exchange of a particle with spin 0, as in $`\pi `$ exchange.
To select reaction (7) a study has then been made of the reaction
$$pp\to p_f(\pi ^+\pi ^{-}\pi ^0)p_s$$
(11)
the isolation of which has been described in ref. . Fig. 4a) shows the $`p_f\pi ^+`$ effective mass spectrum, where a clear peak corresponding to the $`\mathrm{\Delta }^{++}(1232)`$ can be observed; this has been selected by requiring $`M(p_f\pi ^+)`$ $`\le `$ 1.4 GeV in order to select the reaction
$$pp\to \mathrm{\Delta }_f^{++}(\pi ^{-}\pi ^0)p_s$$
(12)
A rapidity gap of 2.0 units is required between the $`\mathrm{\Delta }^{++}`$ and the $`\pi ^{-}\pi ^0`$ system.
Fig. 4b) shows the $`\pi ^{-}\pi ^0`$ effective mass spectrum, where a clear peak corresponding to the $`\rho ^{-}(770)`$ can be observed. The mass spectrum has been fitted using two Breit-Wigners, representing the $`\rho ^{-}(770)`$ and the broad enhancement in the 1.65 GeV region, plus a background of the form $`a(m-m_{th})^b\mathrm{exp}(-cm-dm^2)`$, where $`m`$ is the $`\pi ^{-}\pi ^0`$ mass, $`m_{th}`$ is the $`\pi ^{-}\pi ^0`$ threshold mass and a, b, c, d are fit parameters. The fit yields for the $`\rho ^{-}(770)`$ M = 771 $`\pm `$ 3 MeV, $`\mathrm{\Gamma }`$ = 160 $`\pm `$ 15 MeV and for the 1.65 GeV region M = 1660 $`\pm `$ 9 MeV, $`\mathrm{\Gamma }`$ = 240 $`\pm `$ 25 MeV, which could be due to the $`\rho _3(1690)`$ or the $`\rho (1700)`$.
In order to determine the four-momentum transfer ($`|t|`$) at the beam-$`\mathrm{\Delta }^{++}`$ vertex for $`\rho ^{-}`$ production, the $`\pi ^{-}\pi ^0`$ mass spectrum has been fitted in 0.05 GeV<sup>2</sup> bins of $`t`$ with the parameters of the resonances fixed to those obtained from the fits to the total data. The resulting distribution is shown in fig. 4c). In this case it cannot be fitted with the $`\pi `$-exchange formula and instead has been fitted to the form $`e^{-b|t|}`$. The first bin in the distribution has been excluded from the fit due to the fact that the uncertainties in the acceptance correction are greatest in this bin. The fit yields a value of $`b`$ = 4.7 $`\pm `$ 0.1 GeV<sup>-2</sup>, which is compatible with $`\rho `$ exchange being the dominant contribution .
The four-momentum transfer ($`|t_{slow}|`$) at the target-slow vertex is shown in fig. 4d) and has been fitted to the form $`e^{-b|t|}`$. The data for $`|t|`$ $`\le `$ 0.1 GeV<sup>2</sup> have been excluded from the fit due to the poor acceptance for the slow proton in this range. The fit yields a value of $`b`$ = 6.1 $`\pm `$ 0.1 GeV<sup>-2</sup>, which is compatible with Pomeron exchange being the dominant contribution .
In order to determine the azimuthal angle $`\varphi `$ between the $`p_T`$ vectors of the slow proton and the $`\mathrm{\Delta }^{++}`$ for $`\rho ^{-}`$ production, the $`\pi ^{-}\pi ^0`$ mass spectrum has been fitted in 30 degree bins of $`\varphi `$ with the parameters of the resonances fixed to those obtained from the fits to the total data. The resulting distribution is shown in fig. 4e). The $`\varphi `$ distribution is clearly not flat in this case. Hence the $`\rho ^{-}`$ is consistent with being produced by particles that carry spin, for example $`\rho `$-Pomeron exchange with the Pomeron transforming like a non-conserved vector current.
In summary, for the resonances studied to date which are compatible with being produced by DPE, the model of Close and Schuler is in qualitative agreement with the data and hence is consistent with the Pomeron transforming like a non-conserved vector current. When one of the exchanged particles is known to have spin 0, namely in $`\pi `$-Pomeron exchange, the $`\varphi `$ distribution is flat. When $`\rho `$-Pomeron exchange is the dominant contribution, the $`\varphi `$ distribution is not flat.
Acknowledgements
This work is supported, in part, by grants from the British Particle Physics and Astronomy Research Council, the British Royal Society, the Ministry of Education, Science, Sports and Culture of Japan (grants no. 04044159 and 07044098), the French Programme International de Cooperation Scientifique (grant no. 576) and the Russian Foundation for Basic Research (grants 96-15-96633 and 98-02-22032).
Figures
Figure 1
Figure 2
Figure 3
Figure 4
## 1. Introduction
It is known that inside a rotating superconductor (SC) the magnetic field does not vanish, but takes a certain value $`B_L`$ called the “London field strength”; this is given, in terms of the electron charge $`e`$, the electron mass $`m`$ and the angular velocity $`\omega `$ of the SC, by a very simple expression , namely
$$B_L=2m\omega /e$$
(1)
This relation holds for any value of the external magnetic field, including the case $`B_{ext}=0`$. The strength of $`B_L`$ is determined by the London equation (or better by a generalization of it—see ), written in the reference frame co-rotating with the SC.
The London field can be interpreted as the field needed to give the superelectrons the same rotation velocity as the crystal lattice. While the normal electrons keep pace with the positive ions due to ohmic friction, the Cooper pairs are mechanically decoupled from the lattice and need the field $`B_L`$ to move on circular orbits with frequency $`\omega `$. This interpretation also justifies a simple intuitive argument to explain a posteriori the presence of the London field inside a rotating SC, with strength given by (1): it is essential that over the sample as a whole $`v_{pairs}=v_{lattice}`$, because otherwise very large bulk currents would flow.
Starting in the Sixties, the London field has been measured in several experiments. In all measurements it was found that the mass $`m`$ defining the strength $`B_L`$ corresponds to the bare electron mass, i.e., not the effective mass $`m^{*}`$, renormalized by the interactions with the lattice, but the mass of the free electron (apart from small relativistic corrections and some further minor corrections ). This made it possible to use precise measurements of $`B_L`$ to deduce precise values of the parameters $`e`$ and $`m`$ (and also $`\hbar `$, since in a rotating ring of area $`S`$ the flux is quantized and the quanta correspond to steps $`\mathrm{\Delta }\nu =\hbar /4mS`$ in the rotation frequency) .
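An order-of-magnitude sketch of these numbers (rounded constants; the ring area is chosen here only for illustration, and the step formula is the $`\mathrm{\Delta }\nu =\hbar /4mS`$ quoted above):

```python
import numpy as np

# Sketch: London field (1) for a 100 Hz rotor and the rotation-frequency
# step per flux quantum for an illustrative ring of area S = 1 cm^2.
m = 9.109e-31      # bare electron mass, kg
e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # J s

omega = 2.0 * np.pi * 100.0
B_L = 2.0 * m * omega / e            # Eq. (1): ~7e-9 T
S = 1.0e-4                           # ring area, m^2
dnu = hbar / (4.0 * m * S)           # ~0.3 Hz
print(f"B_L = {B_L:.2e} T,  Delta nu = {dnu:.2f} Hz")
```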
The easiest way to check that $`m`$ is the bare electron mass and not the effective mass is by starting from the full Hamiltonian of the solid including the electrons, the ions, and all their interactions. Transforming to the rotating frame, one finds that each electron “feels” the additional vector potential
$$𝐀_\omega (𝐫)=m/e(\omega \times 𝐫)$$
(2)
(with field $`2(m/e)\omega `$), regardless of its complicated interaction with the ions and the other electrons.
## 2. Acceleration-deceleration phases. Variations of $`m^{*}`$ in a layered disk
Within this well-established framework, two important variations can occur. Let us consider cases when
(i) The rotation velocity of the SC is suddenly changed
or
(ii) The SC has a composite structure, being made of two or more parts with different chemical and crystal properties and different values of the effective electron mass $`m^{*}`$ (it is known, for instance, that the crystal structure of YBCO depends on the oxygen doping of the material; it happens quite frequently that the doping process of large ceramic samples results in portions of the sample having different oxygen content; the effective electron mass $`m^{*}`$, typically 4-5 times larger than the bare mass, depends on this content; compare also Section 4),
or both (i) and (ii).
As we shall show in the following, the combined effect of accelerations-decelerations and inhomogeneities in the material can spoil the dynamical equilibrium between superelectrons and lattice ions usually associated with the London field.
When this happens, the relative velocity of the superelectrons with respect to the lattice ions can indeed lead to strong “slide” supercurrents—although we shall see that these are not bulk currents, but are always confined to thin layers.
It is quite straightforward to evaluate the order of magnitude of these currents. Let us consider a ceramic SC like YBCO. A typical value of the London length $`\lambda `$ in the conduction planes $`ab`$ is $`\lambda =0.2\mu m`$. From the expression for the London length, $`\lambda =\sqrt{m^{*}/(\mu _0n_se^2)}`$, one finds for the density $`n_s`$ of superconducting charge carriers $`n_s\approx 10^{27}m^{-3}`$. In a material with critical current density of the order of $`10^8A/m^2`$, this corresponds to an intrinsic velocity of the carriers $`v_j=j/\rho =j/(en_s)\approx 0.6m/s`$. For comparison, in a SC rotating at 100 $`Hz`$ (6000 $`rpm`$) the rotation velocity of the lattice 10 $`cm`$ apart from the axis is $`v_{rot}\approx 63m/s`$. Therefore, in this case the slide current can be up to 100 times larger than the critical intrinsic supercurrent.
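The estimate can be reproduced in a few lines (a sketch; with $`m^{*}=4.5`$ bare masses the formula gives $`n_s`$ of order a few $`\times 10^{27}m^{-3}`$, which the text rounds to $`10^{27}`$, whence $`v_j\approx 0.6m/s`$; the exact prefactors matter only at the order-of-magnitude level):

```python
import numpy as np

# Sketch of the order-of-magnitude estimate in the text for YBCO.
mu0 = 4.0e-7 * np.pi
e = 1.602e-19
m_star = 4.5 * 9.109e-31       # effective mass, 4-5 bare masses (text)
lam = 0.2e-6                   # in-plane London length, m

n_s = m_star / (mu0 * lam**2 * e**2)
v_j = 1.0e8 / (e * n_s)        # carrier velocity at j_c = 1e8 A/m^2
v_rot = 2.0 * np.pi * 100.0 * 0.10   # lattice, 10 cm off axis at 100 Hz
print(f"n_s ~ {n_s:.1e} m^-3, v_j ~ {v_j:.2f} m/s, v_rot ~ {v_rot:.0f} m/s")
```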
## 3. London-Maxwell equations for accelerating SCs
In order to achieve a better understanding of the problem, it is useful first to briefly recall the theoretical ingredients employed for the description of accelerated SCs at thermodynamic equilibrium.
More generally, let us consider the interplay between a moving SC and the e.m. field. This includes the case of rotating SC samples, or samples which oscillate or accelerate along a line. Some of these cases were studied very early . A general formalism, suitable for the description of all these situations, has been given by Peng et al. . They proposed a unified phenomenological approach to study the electrodynamics of both an arbitrarily moving SC and a SC under the influence of non-electromagnetic external forces, including Newtonian gravitation and gravitational waves. This theoretical work has provided a basis for the analysis of several experiments which exploit the London field for precise determinations of the Cooper-pair mass and the ratio $`\hbar /m`$ ; a similar analysis was applied to the readout systems of the Stanford gyroscope experiment .
Usually one starts from the covariant generalizations of the London or Ginzburg-Landau equations. The Ginzburg-Landau theory is needed if one wants to include non-linear effects and spatial variations in the order parameter $`|\psi |^2`$. If we focus on situations in which the perturbing fields and currents are so weak that $`|\psi |^2=n_s=const.`$, then the Ginzburg-Landau equation reduces to the London equations which describe the motion of superelectrons. (In dealing with Type II SCs, we must further assume that the external magnetic field is zero, otherwise partial flux penetration will occur.)
In order to account for the motion of the SC, London introduced the concept that the net current should be the sum of the supercurrent and the current due to the motion of the ions. He then combined the London equations, the equations of motion of ions, and Maxwell equations to study the electrodynamics of a rotating SC in the presence of e.m. fields. More generally, denoting by $`u`$ and $`U`$ the 4-velocities of superelectrons and ions, respectively, the net electric 4-current will be
$$J^\mu =2n_se(U^\mu -u^\mu )$$
(3)
This equation must be added to the Maxwell equations and the covariant equations of motion for the superelectrons. In the non-relativistic limit of low velocities ($`𝐯`$ for the electrons, $`𝐕`$ for the ions) these take the usual form
$`{\displaystyle \frac{d𝐯}{dt}}{\displaystyle \frac{e}{m^{}}}𝐄+{\displaystyle \frac{1}{m^{}}}𝐟`$ (4)
$`\times 𝐯{\displaystyle \frac{e}{m^{}}}𝐁+{\displaystyle \frac{1}{m^{}}}{\displaystyle 𝑑t\times 𝐟}`$ (5)
implying
$$𝐯=\frac{e}{m^{*}}𝐀+\frac{1}{m^{*}}\int dt\,𝐟$$
(6)
where $`𝐟`$ is the external force acting on superelectrons. In conclusion, after taking the curl and time derivative, we find
$`\nabla ^2𝐄-{\displaystyle \frac{d^2𝐄}{dt^2}}={\displaystyle \frac{2\mu _0n_se^2}{m^{*}}}\left(𝐄-{\displaystyle \frac{𝐟}{e}}+{\displaystyle \frac{m^{*}}{e}}{\displaystyle \frac{d𝐕}{dt}}\right)`$ (7)
$`\nabla ^2𝐁-{\displaystyle \frac{d^2𝐁}{dt^2}}={\displaystyle \frac{2\mu _0n_se^2}{m^{*}}}\left(𝐁+{\displaystyle \frac{1}{e}}{\displaystyle \int dt\,\nabla \times 𝐟}-{\displaystyle \frac{m^{*}}{e}}\nabla \times 𝐕\right)`$ (8)
By solving these equations with suitable boundary conditions and with the equations for the motion of ions and for the external forces acting on the superelectrons, one can describe the electrodynamics of an arbitrary moving SC in the presence of e.m. fields to lowest order. The forces acting on ions, which do not appear explicitly in these equations, are involved in the different expressions of $`𝐕`$ for various cases. We note, as stressed by Peng et al. , that:
- an external force $`𝐟`$, the electric field and the acceleration of ions are coupled;
- the curl of an external force, the magnetic field and the curl of $`𝐕`$ are coupled, too;
- the motion of ions, $`𝐕`$, and the magnetic vector potential play the same role.
The effective electron mass $`m^{*}`$ coincides with the bare mass if the electrons are in relative equilibrium with the lattice ($`𝐯=𝐕`$), but in general it is different (the cyclotron frequency of charge carriers in a crystal depends on $`m^{*}`$—see for instance ).
Note that the London equations (4), (5)—a crucial component of the final eqs. (7), (8)—are equivalent to the minimization of the free energy $`F`$ of the SC; thus they hold at thermodynamic equilibrium. One might wonder whether transient phases can be important in the presence of sudden accelerations. For the cases to which these equations have been previously applied , they seem to work well, and this means that the system is always close to equilibrium. Layered SCs in fast non-uniform rotation may represent a notable exception. In the next section we describe a first heuristic approach to this problem.
## 4. Heuristic description of non-equilibrium states
We have seen that a SC accelerating under the action of an external force can be described in a first approximation by the London-Maxwell equations (7), (8). All the quantities involved in these expressions are mutually coupled in a complex way, so the problem is hard to solve in general form. Moreover, in certain situations the system can be far from thermodynamic equilibrium. Let us then give a heuristic description of the case of a rotating layered SC, taking advantage of the causal connections which are already known, namely:
(i) a bulk “slide current” due to a sudden acceleration generates, by virtue of the London equations (i.e., by minimization of the free energy of the SC), an increase in the surface supercurrent;
(ii) by virtue of the Maxwell equations, the surface supercurrent produces a London field inside the SC;
(iii) this field brings the Cooper pairs in relative equilibrium with the rotating lattice.
Let us proceed by steps and illustrate three different possible situations: the case of a homogeneous rotating bar; the case of a rotating bar with two parts of comparable thickness, made of different materials; and the case of a rotating bar with a thin layer of different material.
Case of a homogeneous rotating bar
Suppose a homogeneous cylindrical SC bar is rotating at angular velocity $`\omega `$ and equilibrium has been reached, with a London field $`B_L=2m\omega /e`$ inside the bar. Then the bar accelerates, reaching an angular velocity $`\omega ^{\prime }`$. At first the superelectrons are left behind; therefore they recover at once their effective mass and a large bulk slide current arises. After that, however, the supercurrent at the surface of the bar grows, in such a way as to produce a London field $`B_L^{\prime }=2\omega ^{\prime }m^{*}/e`$. This sets the superelectrons again in equilibrium with the lattice, and finally (possibly after a few further “oscillations” with respect to relative equilibrium) the field attains the new equilibrium value $`B_L^{\prime }=2\omega ^{\prime }m/e`$. The whole process is fast and usually not observed in the classical experiments involving slow and steady rotors (compare our conclusions in Section 5).
More precisely, note that in this situation the steps (i) and (ii) above can be best visualized through an analogy between the SC and an ideal solenoid, as follows:
(i’) A driving electromotive force (EMF) is applied to the solenoid. Like many EMFs, it is of “thermodynamic” origin (minimization of the free energy of the SC) and takes a characteristic time $`\mathrm{\Delta }t_{EMF}`$ to reach its maximum.
(ii’) The current in the solenoid grows in response to the external EMF, but due to the self-inductance of the system, this growth takes a characteristic time $`\mathrm{\Delta }t_{induction}`$.
The total time required to reach the new equilibrium configuration is thus of the order of $`\mathrm{\Delta }t_{equilibrium}=\mathrm{\Delta }t_{EMF}+\mathrm{\Delta }t_{induction}`$. If this is much smaller than the characteristic acceleration time, then the system will just pass through a sequence of equilibrium states.
In the following, however, we shall consider “sudden accelerations”, with characteristic times smaller than $`\mathrm{\Delta }t_{equilibrium}`$.
Case of a rotating bar with two parts of comparable thickness
Let us next consider a cylindrical bar made of two parts, 1 and 2, of comparable thickness (see figure, A). The material is different in the two parts, and so is the effective mass of the electrons. Suppose the system is initially in equilibrium: the bar rotates with constant angular velocity around its vertical axis and the London field is the same in both parts, corresponding to the bare electron mass ($`B_{L1}=B_{L2}=2m\omega /e`$).
If the angular velocity is suddenly increased, a transient phase follows. The skin supercurrents $`j_1`$ and $`j_2`$ must increase too, in order to produce a stronger London field. In the meanwhile, the superelectrons are unable to follow the rotation frequency of the lattice, and are in relative motion with respect to it.
The magnetic field needed to bring the electrons again to rest with respect to the lattice is different in the two parts, because the effective masses are different. The same is true for the skin supercurrents $`j_1`$ and $`j_2`$. As in the previous case, there will be some oscillations around the relative equilibrium positions, but soon a new state is reached (if there are no further accelerations), with $`B_{L1}^{\prime }=B_{L2}^{\prime }=2m\omega ^{\prime }/e`$. This is clearly the state with minimum total free energy $`F=F_1+F_2`$.
Case of a rotating bar with a thin layer of different material
Finally, we consider the previous case in the limit when Part 2 of the cylindrical bar is much thinner than Part 1 (see figure). Let us also suppose that $`m_2^{*}>m_1^{*}`$, i.e., the effective electron mass is larger in 2 than in 1. It is easy to see that after a sudden acceleration, Part 2 cannot reach a new equilibrium situation, but remains in a sort of metastable state.
In fact, following the acceleration the skin supercurrent $`j_1`$ increases until the London field $`B_{L1}`$ brings the superelectrons in 1 again into relative equilibrium with the lattice (possibly after some oscillations); this field, however, is not strong enough to establish relative equilibrium in 2, where the effective electron mass is larger.
Note that Part 2 being very thin (and much thinner than the radius of the bar), the magnetic field in it cannot be substantially different from $`B_{L1}`$. For the same reason, the free energy $`F_2`$ gives a negligible contribution to the total free energy, so while $`F_1`$ must be at a minimum and no bulk slide current can exist in 1, such a current can indeed be present in 2 in the circumstances we are considering.
Also note that the “feedback” magnetic field generated by this superficial current lies in a plane orthogonal to the bar axis; therefore it does not tend to compensate for the insufficient London field and does not oppose the surface current.
In practice, a thin layer like the Part 2 considered above can be present in a SC not only because of intentional differences in the oxygen doping of the material, but also for other reasons. For instance, the bulk of the material might have been subjected to a melting treatment, while the base was less affected because it was in contact with a coolant.
The essential point, for the anomalous behavior described above to occur, is that the crystal structure of Part 2 must be different from that of Part 1, and the effective electron mass larger. This behavior could also depend on the temperature, because the different crystal structures of parts 1 and 2 could imply different critical temperatures, and typically we expect $`T_{c2}<T_{c1}`$. In this case, the metastable states will be most relevant at temperatures $`T`$ such that $`T_{c2}<T<T_{c1}`$.
## 5. Conclusions
In this work, after recalling in Section 3 the Maxwell-London equations for the general case of a SC in accelerated motion, we have set out in Section 4 a heuristic approach to the case of a layered SC in fast non-uniform rotation.
We have seen that in this situation the system can enter non-equilibrium states. In particular, if one of the layers is much thinner than the others, and the effective electron mass in it larger, “slide” surface currents can arise, with density higher than the critical density $`j_c`$ of the material. (Compare the estimate given in the Introduction, which yields $`j\approx 100j_c`$ for a 100% difference between the rotation velocity of the lattice and that of the superconducting carriers; for smaller differences, $`j/j_c`$ scales proportionally.)
This phenomenon is interesting in itself, but also because in some experiments involving rotating SCs the real operating conditions are not far from those considered here in principle: namely one has large ceramic disks (10-30 $`cm`$ in diameter), rotating at frequencies of thousands of $`rpm`$, with acceleration and braking phases during which the rotation frequency varies by a few percent in a few seconds; moreover, these large superconducting samples are often made of several layers, having different crystal structure and oxygen doping. (For comparison, the classical experiments involve rotors with a maximum size of $`5cm`$ and maximum rotation frequencies of $`5Hz`$, driven by steady gas flows).
The possible presence, in these systems, of large surface currents like those predicted by our analysis, could be checked directly through magneto-optical techniques . Alternatively, one could look for indirect evidence, for instance investigating the effect of these anomalous currents on the material that supports them.
This work has been partially supported by the A.S.P., Associazione per lo Sviluppo Scientifico e Tecnologico del Piemonte, Turin - Italy.
FIGURE CAPTION
Fig. 1 - A: rotating bar with two parts of comparable thickness; B: rotating bar with a thin layer of different material. The rotation axis is vertical in the figures.
# Real-time dynamics in the 1+1 D abelian Higgs model with fermions
ITFA-99-23, THU-99-26. Work supported by FOM/NWO, presented by J. Smit.
## Abstract
In approximate dynamical equations, inhomogeneous classical (mean) gauge and Higgs fields are coupled to quantized fermions. The equations are solved numerically on a spacetime lattice. The fermions appear to equilibrate according to the Fermi-Dirac distribution with time-dependent temperature and chemical potential.
1. The real-time path integral for quantum fields is very difficult to evaluate numerically, and approximations need to be made before giving the problem to the computer. Two types of approximations are currently in use: classical, and gaussian, such as Hartree or large $`N`$. The classical approximation gives valuable nonperturbative results but suffers complications due to (Rayleigh-Jeans type) divergencies. The gaussian approximation has the benefit of staying within the quantum domain, where we know how to deal with divergencies, but it is not good enough for large times. Naturally, one would like to combine the good aspects of both approximations . A crucial test is to see whether the system equilibrates quantum-like, and not classical equipartition-like, despite the fact that one is just solving a large number of coupled nonlinear equations which conserve energy. This is one motivation for the present study. Another is the intrinsic interest in the complicated nonperturbative dynamics of the abelian Higgs model with fermions.
2. The 1+1 D abelian Higgs model coupled axially to fermions is qualitatively similar to the electroweak sector of the Standard Model. As for the SU(2) case in 4D, it can be rewritten in a form with vectorial gauge couplings and Majorana-Yukawa couplings. For $`N\to \infty `$ fermion replicas, the equations of motion reduce to a classical field approximation for the bosonic variables, with a quantal fermion backreaction :
$`\partial _\mu F^{\mu \nu }+e^2i\left((D^\nu \phi )^{*}\phi -\phi ^{*}D^\nu \phi \right)`$
$`+(e^2/2)\langle \overline{\psi }i\gamma ^\nu \psi \rangle `$ $`=`$ $`0,`$
$`(D_\mu D^\mu +\mu ^2+2\lambda \phi ^{*}\phi )\phi `$ $`=`$ $`0,`$
where we have specialized to zero Yukawa coupling. The fermion backreaction is specified as follows. Introduce a complete set of orthonormal mode functions $`u_\alpha ,v_\alpha `$ for the fermions, which satisfy the Dirac equation
$`\gamma ^\mu D_\mu u_\alpha =0,\gamma ^\mu D_\mu v_\alpha =0.`$
Next define the fermion operator $`\widehat{\psi }`$,
$$\widehat{\psi }(x)=\sum _\alpha [\widehat{b}_\alpha u_\alpha (x)+\widehat{d}_\alpha ^{\dagger }v_\alpha (x)],$$
in terms of annihilation and creation operators $`\widehat{b}_\alpha `$, $`\widehat{b}_\alpha ^{\dagger }`$, …. The fermion back reaction is then specified by the initial conditions $`\langle \widehat{b}_\alpha ^{\dagger }\widehat{b}_{\alpha ^{\prime }}\rangle =n_\alpha \delta _{\alpha \alpha ^{\prime }}`$, $`\langle \widehat{d}_\alpha ^{\dagger }\widehat{d}_{\alpha ^{\prime }}\rangle =\overline{n}_\alpha \delta _{\alpha \alpha ^{\prime }}`$, etc.
The above system of equations has been implemented on a lattice using Wilson’s fermion method for the spatial derivative and the staggered fermion interpretation for a ‘naive’ discrete time derivative . Usual expectations on fermion number non-conservation tied to sphaleron transitions are correctly represented on the lattice. For simplicity we continue with continuum notation.
3. Here we are especially interested in the thermalization properties of the fermions. We tested for local (in time) equilibration of the fermions by comparing their distribution function with the Fermi-Dirac distribution. The distribution function was identified from the equal-time fermion two-point function, averaged over space (a circle with circumference $`L`$),
$`S(z,t)={\displaystyle \frac{1}{L}}{\displaystyle \int _0^L}dx\,\langle \psi (x,t)\overline{\psi }(x+z,t)\rangle _{\text{g.f.}}.`$
Here g.f. indicates a complete gauge fixing. Alternatively, the two-point function can be rendered gauge invariant by supplying a parallel transporter $`U(x,y)=\mathrm{exp}[i\int _x^ydz\,A_1(z)/2]`$. We have used the latter method, but it is actually closely related to complete Coulomb gauge fixing .
$`S(p,t)={\displaystyle \int _0^L}dz\,e^{ipz}S(z,t)`$
would be given in terms of distribution functions $`N_p`$, $`\overline{N}_p`$ as follows:
$`\text{Tr}S(p,t)`$ $`=`$ $`[1-N_p(t)-\overline{N}_p(t)]{\displaystyle \frac{m_p(t)}{\omega _p(t)}},`$
$`\text{Tr}i\gamma ^1S(p,t)`$ $`=`$ $`[1-N_p(t)-\overline{N}_p(t)]{\displaystyle \frac{p}{\omega _p(t)}},`$
$`\text{Tr}i\gamma ^0S(p,t)`$ $`=`$ $`1-N_p(t)+\overline{N}_p(t),`$
$`\omega _p(t)`$ $`=`$ $`\sqrt{m_p^2(t)+p^2},`$
with $`\text{Tr}\gamma _5S(p,t)=0`$ because of parity invariance. For free fermions $`N_p`$, $`\overline{N}_p`$ and $`m`$ are time-independent. Assuming that the interacting model can be described approximately by quasiparticles, we now use the above equations to define $`N_p(t)`$, $`\overline{N}_p(t)`$ and $`m_p(t)`$. In a non-equilibrium situation they will depend on time.
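As a sketch of this quasiparticle definition (function name and test values invented here), the three traces can be inverted for $`N_p`$, $`\overline{N}_p`$ and $`m_p`$ as follows:

```python
import numpy as np

# Sketch: invert the free-field relations for N_p, Nbar_p and m_p from
# the three traces of the equal-time two-point function at momentum p.
def extract(tr_1, tr_ig1, tr_ig0, p):
    # tr_1 = Tr S, tr_ig1 = Tr i*gamma^1 S, tr_ig0 = Tr i*gamma^0 S
    depletion = np.hypot(tr_1, tr_ig1)      # equals 1 - N - Nbar, since
                                            # (m/omega)^2 + (p/omega)^2 = 1
    sum_n = 1.0 - depletion                 # N + Nbar
    diff_n = 1.0 - tr_ig0                   # N - Nbar
    m = p * tr_1 / tr_ig1                   # (m/omega) / (p/omega)
    return 0.5*(sum_n + diff_n), 0.5*(sum_n - diff_n), m

# round trip on free-field input: N = 0.3, Nbar = 0.1, m = 0.5, p = 1.2
omega = np.hypot(0.5, 1.2)
print(extract(0.6*0.5/omega, 0.6*1.2/omega, 1.0 - 0.3 + 0.1, 1.2))
# -> (0.3, 0.1, 0.5)
```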
The simulations had the following parameters: $`n_f=2`$ flavors (related to fermion doubling in time), spatial size $`m_\phi L\approx 6.4`$, $`\lambda /e^2=0.25`$ ($`m_AL\approx 9`$), coupling $`e^2/m_\phi ^2\approx 0.25`$, with spatial lattice spacing $`am_\phi \approx 0.10`$, temporal spacing $`a_0/a=0.005`$ and $`L/a=64`$ spatial lattice sites.
Fig. 1 shows the Chern-Simons number $`C=\int dx\,A_1/2\pi `$ and the axial charge $`Q_5/n_f`$ for a simulation starting with a fermionic vacuum (i.e. $`n_\alpha ,\overline{n}_\alpha =0`$) and some kinetic energy stored in a few low-momentum modes of the Higgs field. The anomalous fermion number non-conservation equation, $`\mathrm{\Delta }Q_5=n_f\mathrm{\Delta }C`$, is well obeyed, since the two curves are indistinguishable (initially $`Q_5=C=0`$). The oscillations correspond roughly to the basic period $`2\pi /m_A`$. To smoothen these we average $`S(p,t)`$ over a time interval $`t_{\mathrm{av}}`$ before extracting the distribution functions. We used $`et_{\mathrm{av}}=4`$ and studied the behavior of $`N_{\mathrm{av}p}(t)\equiv [N_p(t)+\overline{N}_p(t)]/2`$. As expected, $`m_p(t)`$ is effectively zero. Surprisingly, $`N_{\mathrm{av}p}(t)`$ quite quickly resembles a Fermi-Dirac distribution for inverse temperature $`\beta `$ and $`Q_5`$-chemical potential $`\mu `$:
$`f_p(\beta ,\mu )`$ $`=`$ $`\{\mathrm{exp}[\beta (E_p-\mu q_{5p})]+1\}^{-1},`$
$`E_p`$ $`=`$ $`|p|,q_{5p}=p/|p|`$
(the axial charge of a fermion depends on the sign of $`p`$). Fig. 2 shows $`\mathrm{ln}(N_{\mathrm{av}p}^{-1}-1)`$ versus $`ap`$ at various times. We see linear behavior, $`\mathrm{ln}(N_{\mathrm{av}p}^{-1}-1)\approx \beta (t)[|p|\pm \mu (t)]`$, suggesting local (in time) equilibrium.
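A minimal sketch of this Fermi-Dirac test (synthetic $`N_p`$ values, not simulation data): a straight-line fit of $`\mathrm{ln}(N^{-1}-1)`$ against $`|p|`$ returns $`\beta `$ from the common slope and $`\mu `$ from the splitting of the intercepts for the two signs of $`p`$, as in Fig. 2.

```python
import numpy as np

# Sketch: recover beta and mu from synthetic Fermi-Dirac occupancies.
beta_true, mu_true = 4.0, 0.15
p = np.linspace(0.1, 1.0, 10)
N_pos = 1.0 / (np.exp(beta_true * (p - mu_true)) + 1.0)   # q_5 = +1
N_neg = 1.0 / (np.exp(beta_true * (p + mu_true)) + 1.0)   # q_5 = -1

s_pos, i_pos = np.polyfit(p, np.log(1.0/N_pos - 1.0), 1)
s_neg, i_neg = np.polyfit(p, np.log(1.0/N_neg - 1.0), 1)
beta = 0.5 * (s_pos + s_neg)
mu = 0.5 * (i_neg - i_pos) / beta
print(f"beta = {beta:.2f}, mu = {mu:.3f}")
```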
The distribution functions of modes with $`ap\gtrsim 0.5`$ are consistent with zero, so these modes are practically not excited. Furthermore, since the relevant $`p`$ are small in lattice units, discretization effects are reasonably small. To achieve this, the initial energy stored in the Bose fields has to be in the relatively low momentum modes only – the fields are far from classical equilibrium.
Figs. 3 and 4 show the effective temperature and chemical potential as a function of time. Note that $`T_{p>0}(t)\approx T_{p<0}(t)`$. If the fermions were free, then their energy and axial charge densities would follow from the Fermi-Dirac distribution according to
$`E_F/L`$ $`=`$ $`n_f(\pi T^2/6+\mu ^2/2\pi ),`$
$`Q_5/L`$ $`=`$ $`n_f\mu /\pi .`$
Conversely, $`E_F`$ and $`Q_5`$ imply an effective temperature and chemical potential; these are also plotted in Figs. 3,4 (data labeled $`E_F,C`$ resp. $`C`$). The $`E_F,C`$-temperature appears systematically lower than that from $`N_{\mathrm{av}p}`$. This may be due to the fact that the fermions are not free.
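Inverting the two relations above for the effective temperature and chemical potential is immediate; a minimal sketch:

```python
import numpy as np

def T_mu_from_charges(E_F, Q5, L, n_f=2):
    """T and mu implied by E_F and Q_5 through the free-fermion relations above."""
    mu = np.pi * Q5 / (n_f * L)
    T = np.sqrt((6.0 / np.pi) * (E_F / (n_f * L) - mu**2 / (2.0 * np.pi)))
    return T, mu
```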
4. In conclusion, we see evidence for fast equilibration of fermions coupled to classical Bose fields. It is important that the classical fields can be spatially inhomogeneous, since this allows the fermions to scatter nontrivially, e.g. by their (screened) Coulomb interaction. The Bose fields have created fermions and lost some energy, and they are not in (classical) equilibrium in the time span shown here. More details on this work can be found in .
# Excitation Spectra of Structurally Dimerized and Spin-Peierls Chains in a Magnetic Field
## I Introduction
In the absence of a magnetic field, structurally dimerized spin chains, such as $`(VO)_2P_2O_7`$, have a magnetic response similar to spin-Peierls chains, such as $`CuGeO_3`$. In both cases, there is a one-magnon triplet bound state at energy transfer $`\omega =\mathrm{\Delta }_z`$, where $`\mathrm{\Delta }_z`$ measures the singlet-triplet spin gap, followed by a two-magnon continuum of states with an onset at higher frequencies, $`\omega =2\mathrm{\Delta }_z`$. The one-magnon bound state arises from the confinement of soliton-antisoliton pairs by an effective attractive potential caused by the lattice dimerization. In the case of spin-Peierls chains, there is additional coupling of the spins to the elastic degrees of freedom of the lattice, causing the softening of a phonon mode. In contrast, the atomic positions of structurally dimerized chains are completely locked, ruling out a feedback between the spins and the phonons. It is this magneto-elastic feedback in the spin-Peierls compounds which allows the spin gap to remain open even at large magnetic fields, while in the structurally dimerized chains, the gap closes beyond a critical magnetic field strength, $`h_{c1}`$, due to the deconfinement of the soliton-antisoliton pairs. In both cases the triplet spectra become incommensurate at $`h>h_{c1}`$. However, we will see that for the structurally dimerized chains they are gapless soliton-antisoliton continua, whereas in the spin-Peierls compounds the dominant one-magnon bound state remains gapped, acquiring a modulation which depends on the magnitude of the applied field.
In the adiabatic approximation (suppressing the phonon dynamics), the effective Hamiltonian of antiferromagnetically correlated spins coupled to a crystal lattice is given by,
$`H=J\sum _{r=1}^N(1+\delta _r)𝐒_r𝐒_{r+1}+\frac{K}{2}\sum _{r=1}^N\delta _r^2,`$ (1)
where $`J`$ is the Heisenberg exchange constant, $`\delta _r`$ are local lattice distortions, and $`K`$ is the lattice spring constant. The feedback of the phonons to the spin degrees of freedom is contained in the dependence of $`\delta _r`$ on $`K`$. In the absence of a magnetic field, $`\delta _r=\delta (-1)^r`$. The dimerization parameter $`\delta `$ is a constant for structurally dimerized chains, whereas $`\delta \propto K^{-3/2}`$ for spin-Peierls systems. The structurally dimerized chain can be viewed as a limiting case of Eq. (1) with a vanishing spring constant ($`K=0`$) and a “frozen” (inelastic) lattice modulation, which does not vary with an applied magnetic field. On the contrary, spin-Peierls chains in a sufficiently high magnetic field, $`h>h_{c1}`$, gain elastic energy by adjusting their lattice to the field. This magneto-elastic distortion can be rather well approximated by a sinusoidal form, $`\delta _r=\delta \mathrm{cos}(qr)`$, with $`q=\pi +2\pi S_{tot}^z/N`$. With these two modulations of $`\delta _r`$, the magnetic field induced transition from the dimerized to the incommensurate phase is continuous for structurally dimerized chains, while it is first order for spin-Peierls chains, where a jump of $`K\delta ^2/4`$ occurs in the elastic energy.
In this work, the spin excitation spectra in a magnetic field of these two systems are contrasted. Using exact diagonalization techniques on finite lattices of up to $`N=24`$ sites with periodic boundary conditions, the triplet and singlet excitations are calculated, allowing a direct comparison with inelastic neutron and Raman experiments. The strength of this method, although restricted to relatively small lattice sizes, is the accessibility of the full excitation spectrum for a given cluster. As our interest is primarily in the magnetic response, we concentrate on two effective, purely magnetic model Hamiltonians, $`H_{dim}`$ and $`H_{sP}`$, derived from $`H`$ (Eq. 1). Phonon contributions, other than entering via the parametrization of $`\delta _r`$, will be neglected.
For the structurally dimerized spin-1/2 antiferromagnetic Heisenberg chain, the model Hamiltonian $`H`$ reduces to
$`H_{dim}=J\sum _{r=1}^N(1+\delta (-1)^r)𝐒_r𝐒_{r+1}.`$ (2)
In a quasi-one-dimensional compound, such as $`KCuCl_3`$, the dimerization parameter, $`\delta >0`$, originates from the alternating spacing of the spin-carrying $`Cu^{2+}`$ ions. In the limit $`\delta =1`$, the system is an ensemble of $`N/2`$ uncoupled dimers with only two energy levels per dimer, and a spin gap $`\mathrm{\Delta }_z=2J`$. In the opposite limit $`\delta =0`$, $`H_{dim}`$ reduces to the isotropic antiferromagnetic Heisenberg chain which is quasi-long-range ordered, and thus belongs to a different universality class from the dimerized system. For sufficiently small lattice dimerizations, a regime with sizeable inter-dimer interactions can be identified with the scaling properties of the massive Thirring model.
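Since the spin gap of $`H_{dim}`$ is obtained below by exact diagonalization, a compact illustration of such a computation may be useful. A minimal Lanczos sketch (our own, for orientation only; the production calculations in this work additionally employ continued-fraction expansions and Shanks extrapolation):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def spin_gap(N=12, delta=0.5, J=1.0):
    """Singlet-triplet gap of H_dim on an N-site ring (N even), from the
    lowest levels of the S^z_tot = 0 and S^z_tot = 1 sectors. Sites are
    labeled r = 0..N-1, so the bond pattern is shifted by one relative to
    Eq. (2); for even N this leaves the spectrum unchanged."""
    def lowest(Sz):
        states = [s for s in range(2**N) if bin(s).count("1") == N // 2 + Sz]
        index = {s: i for i, s in enumerate(states)}
        H = lil_matrix((len(states), len(states)))
        for i, s in enumerate(states):
            for r in range(N):
                Jr = J * (1.0 + delta * (-1) ** r)
                b1, b2 = (s >> r) & 1, (s >> ((r + 1) % N)) & 1
                if b1 == b2:
                    H[i, i] += 0.25 * Jr              # S^z S^z, parallel spins
                else:
                    H[i, i] -= 0.25 * Jr              # S^z S^z, antiparallel
                    flipped = s ^ (1 << r) ^ (1 << ((r + 1) % N))
                    H[i, index[flipped]] += 0.5 * Jr  # (S+S- + S-S+)/2
        return eigsh(H.tocsr(), k=1, which="SA")[0][0]
    return lowest(1) - lowest(0)
```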
The effective magnetic Hamiltonian for spin-Peierls compounds is given by
$`H_{sP}=J\sum _{r=1}^N(1+\delta \mathrm{cos}(qr))𝐒_r𝐒_{r+1}.`$ (3)
Here, the feedback due to the interactions of the spins with the lattice phonons enters through a field dependent modulation of the effective nearest-neighbor exchange integral, $`J_{eff}(r)=J(1+\delta \mathrm{cos}(qr))`$, where $`q=\pi +2\pi S_{tot}^z/N`$. In the commensurate phase ($`h<h_{c1}`$), the modulation is fixed at $`q=\pi `$, and $`H_{sP}`$ is identical to $`H_{dim}`$. However, in the incommensurate regime $`q`$ continuously grows from $`\pi `$ to $`2\pi `$, mimicking the elastic distortion of the underlying lattice due to the dynamical coupling with the spins. Hence, all eigenstates of this system have a spin as well as a phonon component.
In the subsequent section, the spin excitation spectra of structurally dimerized chains ($`H_{dim}`$) in a magnetic field are discussed, followed by a section on spin-Peierls systems ($`H_{sP}`$). We finish with some concluding remarks. As we are interested in capturing the generic features, and in particular the differences, of the two physical situations described above, no compound specific parameters, such as inter-chain or next nearest-neighbor exchange couplings, are considered. Rather, our focus will be on understanding the characteristic features of the phases in the most elementary magnetic models.
## II Structurally Dimerized Chain
It is remarkable that for some quasi-one-dimensional compounds, such as $`(VO)_2P_2O_7`$ and $`CuGeO_3`$, it has been quite difficult to establish a unique microscopic model. For example, from fits to early measurements of the uniform susceptibility on $`(VO)_2P_2O_7`$ it has been concluded that this material is either a structurally dimerized or a frustrated spin-1/2 Heisenberg chain with a sizeable next-nearest-neighbor exchange coupling. In both cases, a spin gap opens up either due to a structural or to a frustration induced dimerization, and the resulting thermodynamic response is quite similar for the two proposed models. Only recently was it shown by inelastic neutron scattering spectroscopy that $`(VO)_2P_2O_7`$ is indeed a structurally dimerized chain, and that it is the lattice distortion rather than any frustration which gives rise to the observed spin gap. Similarly, there are still rather different effective parameter sets for $`\delta `$ and $`J_2`$ used in the current literature on the spin-Peierls compound $`CuGeO_3`$. In one case $`(\delta =0.03,J_2=0.24)`$ the spin gap opens because of the lattice distortion, while in the other case $`(\delta =0.014,J_2=0.36)`$ the frustration alone is large enough to cause a spin gap. It is thus of particular interest to examine the full spin excitation spectrum of these quasi-one-dimensional materials in a magnetic field, in order to pinpoint the most relevant microscopic interactions.
Let us start by discussing the phase diagram of the structurally dimerized spin-1/2 Heisenberg chain in a magnetic field, shown in Fig. 1: (i) for $`|h|<h_{c1}`$, the system is in a spin-liquid phase with a singlet-triplet spin gap $`\mathrm{\Delta }_z`$ and $`h_{c1}=\mathrm{\Delta }_z`$; (ii) for $`h_{c1}<|h|<h_{c2}`$ it is a gapless spin-density wave with a field-dependent modulation; and (iii) for $`|h|>h_{c2}=2J`$ it is fully spin-polarized in the direction of the applied magnetic field. To determine the dependence of $`\mathrm{\Delta }_z`$ on the dimerization, we use Shanks’ transformation on lattices of up to $`N=24`$ sites. The asymptotic form of the spin gap for a given dimerization obeys a finite-size scaling relation,
$`\mathrm{\Delta }_z(N,\delta )=\mathrm{\Delta }_z(N=\mathrm{\infty },\delta )+A(\delta )\mathrm{exp}(-\mathrm{\Gamma }(\delta )N),`$ (4)
where the constants $`A(\delta )`$ and $`\mathrm{\Gamma }(\delta )`$ are obtained from Shanks’ recursive equations. In accordance with Ref. , we find that the initially proposed dependence, $`\mathrm{\Delta }_z(N=\mathrm{\infty },\delta )\propto \delta ^{2/3}/\sqrt{|\mathrm{log}\delta |},`$ matches our extrapolation rather poorly, while the form $`\mathrm{\Delta }_z(N=\mathrm{\infty },\delta )=2\delta ^{3/4}`$ (shown in Fig. 1) gives an excellent fit to our data over the whole range of parameter space, $`\delta \in (0,1]`$. One likely reason for this discrepancy is that the initial analytical prediction is valid only for very small values of $`\delta `$, difficult to access with a finite-size scaling procedure. Furthermore, in this regime higher order logarithmic corrections also become important. Down to rather small values of $`\delta `$ ($`\delta >0.3`$), the dependence of the spin gap is linear to leading order, $`\mathrm{\Delta }_z\approx (1+3\delta )/2`$, indicating that the picture of weakly interacting dimers - a perturbation about the limit of isolated dimers ($`\delta =1`$) - is applicable in this parameter regime. However, at low values of the dimerization parameter ($`\delta \lesssim 0.3`$) the interactions between the dimers become increasingly important, leading to a deviation from the linear dependence of the spin gap on $`\delta `$.
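For reference, Shanks’ transformation is a simple sequence accelerator which can be iterated; a minimal sketch:

```python
import numpy as np

def shanks(seq):
    """One Shanks step: S(A_n) = (A_{n+1}A_{n-1} - A_n^2)/(A_{n+1} + A_{n-1} - 2A_n)."""
    a = np.asarray(seq, dtype=float)
    return (a[2:] * a[:-2] - a[1:-1] ** 2) / (a[2:] + a[:-2] - 2.0 * a[1:-1])

# given gaps = [Delta_z(N) for N = 4, 6, ..., 24],
# shanks(shanks(gaps)) sharpens the N -> infinity estimate
```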
The energy gaps between the groundstate and the lowest excited singlet and triplet as a function of the lattice dimerization are shown in Fig. 2. The magnitudes of the gaps in this figure quantitatively resemble the thermodynamic limit ($`N\to \mathrm{\infty }`$), as they were obtained from Shanks’ transformation. In the limit of vanishing dimerization, the gaps disappear, indicating that the groundstate is qualitatively different for the cases of vanishing and finite dimerization. The singlet gap is always larger than the triplet gap, and their ratio is $`\mathrm{\Delta }_S/\mathrm{\Delta }_z=2`$ for most of parameter space. However, at lower values of $`\delta `$ this ratio becomes smaller, possibly approaching the predicted value of $`\sqrt{3}`$ as $`\delta \to 0`$. Unfortunately, the quality of our finite-size extrapolation procedure deteriorates in this limit, and no definite confirmation of $`\mathrm{\Delta }_S/\mathrm{\Delta }_z=\sqrt{3}`$ can be drawn from this study, although our data is consistent with this value. It is clear, however, that the region of small dimerization must be governed by a field theory such as the massive Thirring model with a non-linear dependence of the excitation gaps on the effective mass, which in turn is proportional to $`\delta `$.
In Fig. 3, the triplet excitation spectra of the structurally dimerized spin-1/2 Heisenberg chain are shown, along with the corresponding dispersion relations in the insets. These spectra were calculated by an exact numerical diagonalization of 18-site chains with periodic boundary conditions, combined with a continued fraction expansion to obtain the full dynamical response functions. Let us first examine the dynamical spin structure factor,
$`S^{zz}(k,\omega )=\sum _n|\langle n|S_k^z|0\rangle |^2\delta (\omega -E_n+E_0),`$ (5)
where $`S_k^z=\frac{1}{\sqrt{N}}\sum _r\mathrm{exp}(ikr)S_r^z`$ is the projection of the spin operator parallel to the applied magnetic field, $`|n\rangle `$ denotes an eigenstate of the Hamiltonian with energy $`E_n`$, orthogonal to the groundstate with energy $`E_0`$, and magnetization $`m=S_{tot}^z/N\in [0,1/2]`$. In the dimerized phase (Fig. 3(a)), the dominant feature in the spectrum is the one-magnon bound state with an onset frequency $`\omega =\mathrm{\Delta }_z\approx (1+3\delta )/2`$, well separated from a continuum of states starting at twice this energy. Increasing the magnetic field, the one-magnon bound state moves down to lower energies, and eventually the gap closes at $`h=h_{c1}`$ (Fig. 3(b)). Beyond $`h_{c1}`$, the soliton-antisoliton confinement potential is thus overcome by the magnetic field, and the bound state decays into a low-energy two-spinon continuum, similar to the spin-1/2 Heisenberg chain. In addition, there are continua of states at higher energies. In particular, the lowest continuum (starting at $`\omega =\mathrm{\Delta }_z`$) carries most of the spectral weight. At the onset of the incommensurate phase ($`h=h_{c1}`$, Fig. 3(b)) the phase space of this low-energy continuum is strongly restricted, and therefore its width is small. The reason will become clear from the discussion below in terms of the corresponding spinless fermion picture. As the applied field is increased from $`h_{c1}`$ to $`h_{c2}`$, the width of the low-energy continuum grows, and the wave vector of the dominant infrared divergence moves continuously from $`q=\pi `$ to $`q=2\pi `$. In Fig. 3(c), the triplet excitation spectra are shown at a particular magnetization, $`m=4/18`$, corresponding to a magnetic field $`h\approx 1.58J`$. Clearly, the modulation at this field is incommensurate, and the magnetic unit cell is enlarged by approximately a factor of two with respect to its size at zero field. Furthermore, the phase space for triplet excitations is reduced with increasing magnetic field, leading to an overall loss of spectral weight at higher fields. Finally, close to $`h_{c2}`$ two triplet bands emerge, split by a dimerization gap, $`\mathrm{\Delta }_\pm =2\delta `$ (Fig. 3(d)). Low-energy spectral weight away from long wavelengths disappears, and the low-frequency dispersion approaches $`\omega \propto k^2`$, characteristic of ferromagnetism.
The Hamiltonian of the structurally dimerized spin-1/2 Heisenberg chain can be mapped onto a model of spinless fermions via the Jordan-Wigner transformation. Due to the lattice dimerization, the spinless fermion band is split into two parts which disperse according to
$`\omega _\pm (k)/J=1\pm \sqrt{\delta ^2+(1-\delta ^2)\mathrm{cos}^2(k)},`$ (6)
giving rise to a dimerization gap, $`\mathrm{\Delta }_\pm =2\delta `$, and to a total single particle bandwidth of $`2J`$. While these dispersion relations are exact in the XY-limit of $`H_{dim}`$, they are only slightly renormalized in the isotropic Heisenberg limit for sufficiently large dimerization values ($`\delta >0.3`$), as can be seen by comparing $`\omega _-(k)`$ and $`\omega _+(k)`$ with the dispersions in the inset of Fig. 3(d). Furthermore, in the spinless fermion picture, the applied field corresponds to a chemical potential. At vanishing magnetic field, the chemical potential lies in the center of the gap between $`\omega _-`$ and $`\omega _+`$. In order to excite an unbound pair of particles, a minimum energy of $`2\mathrm{\Delta }_\pm `$ is needed. In addition, due to the attractive scattering between the spinless fermions, an exciton-type particle-hole bound state is formed with a minimum energy of $`\mathrm{\Delta }_z(h)`$, dispersing at $`h=0`$ as
$`\omega _z(k)/J=(1+\delta )-(1-\delta )\mathrm{cos}(2k)/2,`$ (7)
as observed in the inset of Fig. 3(a). At small magnetic fields ($`h<h_{c1}`$) this dominant one-magnon triplet mode carries most of the weight in the dynamical structure factor. Furthermore, there is a second gap ($`\mathrm{\Delta }_2=\mathrm{\Delta }_z`$) between the one-magnon bound state and a continuum of states which is a simple convolution of two magnons with a dispersion $`\omega _z(k)`$. With increasing magnetic field ($`h\to h_{c1}`$) the bound state moves down to lower energies, and eventually $`\mathrm{\Delta }_z`$ vanishes at $`h_{c1}`$, whereas the onset of the continuum now occurs at $`\mathrm{\Delta }_2(h=h_{c1})=\mathrm{\Delta }_z(h=0)`$. Beyond $`h_{c1}`$ the one-magnon bound state disappears and decays into a particle-hole continuum as the effective confining potential is overcome by the applied field.
Using only $`\omega _z(k)`$ and $`\omega _\pm (k)`$, the complete magnetic field dependence of the triplet spectra can thus be understood qualitatively within a simple rigid-band picture. In the incommensurate phase, the chemical potential moves into the lower band, $`\omega _-(k)`$. The continuum of states at low energies arises from two-particle excitations within $`\omega _-(k)`$, whereas the continua at higher frequencies stem from processes involving interband scattering. The modulation wave vector $`q`$, corresponding to the applied magnetic field $`h`$, is obtained from the solution of $`h=\omega _-(q)`$. At fields slightly above $`h_{c1}`$, the phase space for intraband scattering processes is restricted within the lower band, which is almost full. This is the reason for the narrow width of the low-energy continuum in Fig. 3(b), just at $`h=h_{c1}`$. At large magnetic fields, $`h\to h_{c2}`$, the phase space of triplet excitations is exhausted, and the dynamical structure factor traces out the single particle bands of the spinless fermions (Fig. 3(d)).
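This rigid-band bookkeeping is easy to automate; a small sketch (function names are ours) that evaluates $`\omega _-(k)`$ of Eq. (6) and inverts $`h=\omega _-(q)`$ numerically:

```python
import numpy as np

def omega_minus(k, delta, J=1.0):
    """Lower spinless-fermion band of Eq. (6) (exact in the XY limit)."""
    return J * (1.0 - np.sqrt(delta**2 + (1.0 - delta**2) * np.cos(k) ** 2))

def modulation_wavevector(h, delta, J=1.0, npts=200001):
    """Invert h = omega_-(q) on the branch rising monotonically from q = pi;
    the remaining part of the incommensurate range follows from the symmetry
    of the band about q = 3*pi/2."""
    q = np.linspace(np.pi, 1.5 * np.pi, npts)
    return q[np.argmin(np.abs(omega_minus(q, delta, J) - h))]
```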
Because the spectra in Fig. 3 were obtained on finite-size lattices, there are no true branch cuts in $`S^{zz}(k,\omega )`$. Rather, discrete sets of poles appear where continua are expected to emerge in the thermodynamic limit. In order to distinguish bound state poles from sets of poles which become part of a continuum in the thermodynamic limit, a finite-size scaling analysis of the individual pole positions and weights is necessary. From such an extrapolation, using chains with N = 4, … , 24 sites, we find that the bound state with $`\omega _z(k)`$ (Fig. 3(a)) is indeed stable, whereas the other poles in the spectrum merge into continua as the lattice size is increased to infinity. At smaller values of the lattice dimerization ($`\delta \lesssim 0.3`$), the general features of the triplet spectra are the same as discussed above. However, once the bandwidths of $`\omega _\pm (k)`$ become larger than the dimerization gap separating the two bands, $`\mathrm{\Delta }_\pm `$, the higher-energy continua, which are separated from each other for larger dimerizations, begin to overlap. Using the exact dispersion expressions for the XY limit (Eq. 6), this crossover occurs at $`\delta _c=1/3\approx 0.3`$, consistent with the deviations from the weakly interacting dimer picture observed in our numerical data.
Let us now turn to the spin excitation spectra of the structurally dimerized spin chain in a magnetic field, as they are probed by Raman scattering measurements. Within the Loudon-Fleury theory - assuming resonant scattering - the effective $`A_{1g}`$ Raman operator of a one-dimensional spin system is proportional to $`\sum _r𝐒_r𝐒_{r+1}`$. Setting the proportionality constant equal to one, the dynamical Raman response function takes the form
$`I(\omega )=\sum _n|\langle n|\sum _r𝐒_r𝐒_{r+1}|0\rangle |^2\delta (\omega -E_n+E_0).`$ (8)
In the following discussion of $`I(\omega )`$, terms in the Raman operator which are proportional to the Hamiltonian are omitted. In Fig. 4 the Raman spectra for 20-site chains are shown as a function of the magnetic field. At zero magnetization ($`h<h_{c1}`$), one singlet bound state is expected with a gap $`\mathrm{\Delta }_s`$, followed by a continuum of excitations at higher energies, involving 4 spinons. In the regime of large dimerization ($`\delta >\delta _c`$), the singlet gap is at $`\mathrm{\Delta }_s=2\mathrm{\Delta }_z\approx (1+3\delta )J`$ (Fig. 4(c)). This bound state is best understood by considering the limit of complete dimerization ($`\delta =1`$). The corresponding groundstate at $`h=0`$ is a product of singlet dimers with energy $`-3J/2`$ per dimer. Two such dimer singlets can be excited by the Raman operator into a 4-site singlet state. The energy difference, $`\mathrm{\Delta }_s`$, between these states is approximately equal to $`(1+3\delta )J`$ as long as the dimer-dimer interactions are sufficiently small ($`\delta >\delta _c`$). In the limit of complete dimerization, one obtains exactly $`\mathrm{\Delta }_s=4J`$. In the opposite limit, the singlet bound state moves to lower energies (Fig. 4(b)), and most of the spectral weight is transferred into the zero-frequency peak, not shown here. Apart from the bound state at $`\mathrm{\Delta }_s`$, there is a continuum of excitations at higher energies. Evidently, the spectra in Fig. 4 are plagued by severe finite-size effects, such that it is particularly difficult to distinguish by inspection the precursors of continua from emerging isolated bound states. However, from the scaling behavior of the individual pole positions and weights we conclude that our data at zero magnetic field are consistent with the picture of an isolated bound state at $`\mathrm{\Delta }_s`$, followed by a continuum of states above a threshold $`\omega \gtrsim \mathrm{\Delta }_z`$ in the thermodynamic limit. Beyond $`h_{c1}`$, the bound state disappears, and the onset frequency of the continuum increases from $`\mathrm{\Delta }_z`$ at $`h_{c1}`$ up to $`2J`$ close to $`h_{c2}`$. In Fig. 4(a) the integrated weight of the Raman spectrum, $`W=\int d\omega \,I(\omega )`$, is plotted as a function of the magnetization. $`W`$ becomes very small as $`\delta \to 0`$. Furthermore, with increasing magnetic field, $`W(m)`$ decreases more rapidly - with a purely concave shape - in the case of small dimerization (Fig. 4(b)). However, this subtle difference is most likely of little experimental relevance because important compound specific contributions, such as frustrating longer range interactions or interchain coupling, have been neglected in this discussion.
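For the small chains used here, the pole structure of Eq. (8) can also be obtained by brute-force full diagonalization instead of a continued-fraction expansion; a minimal sketch (our own illustration, with the Hamiltonian and the Raman operator supplied as dense matrices in the same basis):

```python
import numpy as np

def raman_poles(H, R):
    """Pole positions w_n = E_n - E_0 and weights |<n|R|0>|^2 of I(omega)."""
    E, V = np.linalg.eigh(H)
    amp = V.T @ (R @ V[:, 0])       # <n| R |0> for every eigenstate n
    w, weight = E - E[0], np.abs(amp) ** 2
    keep = w > 1e-8                 # drop the elastic zero-frequency peak
    return w[keep], weight[keep]
```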
## III Spin-Peierls Chain
In contrast to the structurally dimerized chains, spin-Peierls chains have a gapped incommensurate phase as their lattice distortion adapts magneto-elastically to the applied field. For a self-consistent treatment of this incommensurate regime, the phonon and the spin degrees of freedom thus have to be treated on an equal footing. In large scale numerical studies of the full adiabatic spin-phonon Hamiltonian $`H`$ (Eq. 1), it has been shown that the structural distortion of the lattice and the modulation of the local magnetization have the shape of solitons, natural for one-dimensional systems. As the focus of this work is the magnetic response, an effective parametrization for the lattice distortion of the form $`\delta _r=\delta \mathrm{cos}(qr)`$ is used, instead of treating the elastic part of $`H`$ self-consistently. Especially at higher magnetic fields, this form gives results almost identical to the self-consistent treatment of the lattice dynamics for observables such as the local magnetization. Furthermore, we have verified that other (solitonic) parametrizations yield spin excitation spectra which are only minutely different from those with a sinusoidal lattice distortion.
In Fig. 5, the magnetization curves, $`m(h)`$, of the spin-1/2 structurally dimerized Heisenberg chain and of the spin-Peierls system are shown. They were calculated by numerical diagonalization of 22-site chains. The magnetization of the structurally dimerized chain has a plateau between $`h=0`$ and $`h_{c1}=\mathrm{\Delta }_z`$. $`m(h)`$ then rises continuously from zero at $`h_{c1}`$ up to $`m=1/2`$ at $`h_{c2}=2J`$. The particle-hole symmetry of the corresponding spinless fermion band is reflected by the point symmetry of $`m(h)`$ about its midpoint. In contrast, the magnetization of the spin-Peierls chain jumps discontinuously at $`h_{c1}`$ from zero to a finite value. The position of $`h_{c1}`$ and the magnitude of the discontinuity depend on the lattice spring constant $`K`$. Therefore, a precise determination of these quantities is beyond the realm of our effective - purely magnetic - theory using $`H_{sP}`$. However, from self-consistent calculations it has been concluded that the second order transition line, $`h_{c1}(\delta )`$, for the structurally dimerized chain (Fig. 1) becomes first order in the spin-Peierls case, and moves down towards lower fields with increasing lattice spring constant.
Let us now turn to the triplet excitation spectrum of the spin-Peierls chain in a magnetic field, shown in Fig. 6. For $`h<h_{c1}`$, the spin response is identical to that of the structurally dimerized chain (Fig. 6(a)). The lattice dimerization gives rise to a scattering potential peaked at $`q=2k_F=\pi `$. This attraction between pairs of particles at opposite ends of the Brillouin zone leads to the formation of a one-magnon bound state. In the incommensurate phase (Fig. 6(b-d)), the lattice distortion, $`q=\pi +2\pi m(h)`$, adapts to the magnetic field, thus supporting a bound state with a minimum energy $`\omega =\mathrm{\Delta }_z(h)\lesssim \mathrm{\Delta }_z(h=0)`$ and a modulation $`q`$ due to an effective field dependent potential, $`V_{eff}(q)\propto \delta \mathrm{cos}(qr)`$. At higher energies, there are continua of states. In contrast to the dimerized phase, in the incommensurate regime the second gap, between the onset of the bound state and the onset of the lowest continuum, is smaller than $`\mathrm{\Delta }_z(h)`$. This can be understood within the spinless fermion picture: as the magnetic field is increased beyond $`h_{c1}`$, the corresponding Fermi wave vector moves away from $`\pi /2`$. Opposite to the case of the structurally dimerized chain, the scattering potential adapts to the changing magnetic field, such that there is always an instability at the Fermi level due to particle-hole scattering with momentum transfer $`q=2k_F(h)=\pi +2\pi m(h)`$. Furthermore, in the incommensurate phase the chemical potential is offset from the center of the gap. The energy difference between $`\mu `$ and the lower edge of the gap is $`\mathrm{\Delta }_-(h)`$, and between $`\mu `$ and the upper edge it is $`\mathrm{\Delta }_+(h)`$, shown in Fig. 7, where the values of $`\mathrm{\Delta }_-`$ and $`\mathrm{\Delta }_+`$ have been evaluated by acting with $`S^-`$ and $`S^+`$ on the groundstate. The onset of the one-magnon bound state occurs at a higher energy, $`\mathrm{\Delta }_z(h)>(\mathrm{\Delta }_-(h)+\mathrm{\Delta }_+(h))/2`$. Only in the dimerized phase is it found that $`\mathrm{\Delta }_z=\mathrm{\Delta }_-=\mathrm{\Delta }_+`$, and thus $`\mathrm{\Delta }_z=(\mathrm{\Delta }_-+\mathrm{\Delta }_+)/2`$, because of the additional particle-hole symmetry for the special case of half-filling. The magnetic field dependence of the triplet spectra in Fig. 6 follows exactly this picture. For example, at magnetization $`m=4/18`$ (Fig. 6(c)), the onset of the one-magnon bound state is at $`\omega =0.82J`$, whereas the lowest continuum of states starts at $`\mathrm{\Delta }_-+\mathrm{\Delta }_+=0.39J+0.69J=1.08J`$, thus separating the bound state from the continuum by a second gap of $`\mathrm{\Delta }_2=0.26J<\mathrm{\Delta }_z(h)=0.82J`$.
Similar to the structurally dimerized chain, the phase space of triplet excitations is gradually reduced in the incommensurate phase of the spin-Peierls system, leading to a loss of spectral weight in $`S^{zz}(k,\omega )`$ with increasing magnetic field. Furthermore, the widths of the triplet bands shrink at higher fields, indicating an effective localization of triplets. For example, the band width $`W`$ of the one-magnon bound state is drastically reduced (see inset of Fig. 7), going practically to zero beyond $`m_c\approx 0.3`$. Its dispersion, $`\omega _z(k)`$, oscillates rapidly as the modulation vector $`q`$ approaches $`2\pi `$. The reason for this behavior is that the real-space magnetic unit cell grows with increasing magnetic field. At sufficiently high fields, it grows beyond the size of any finite cluster. For the parameter choice and lattice size we use, this happens at approximately $`m_c`$. Beyond $`m_c`$, the corresponding modulation of the effective nearest-neighbor exchange integral, $`J_{eff}(r)=J(1+\delta \mathrm{cos}(qr))`$, has only one minimum in the finite chain where triplets are trapped, leading to a “smearing” of $`\omega (k)`$ in momentum space (Fig. 6(d)). While this localization of magnons is obviously an artefact of the finite cluster calculation, it may be realized in mesoscopic chains, as soon as the size of the magnetic unit cell exceeds the mesoscopic length scale. Also, such a localization can easily be stabilized by a pinning of the distortion, to which a physical system is highly susceptible as the modulation grows toward infinity. In this case, a real-space picture of (almost) localized triplets is most appropriate at high magnetic fields, close to $`h_{c2}`$.
The singlet gap $`\mathrm{\Delta }_s`$, shown in Fig. 7, corresponds to the onset of the finite frequency Raman spectrum (Fig. 8). It is always larger than the triplet gap, and has a similar dependence on the magnetization. From an examination of the finite-size scaling behavior of the poles in the Raman spectrum, there appears to be a low-energy singlet bound state for all fields $`h<h_{c2}`$. In particular, an analysis of the $`\delta =0.4`$ Raman spectra (Fig. 8(c)) suggests that in the thermodynamic limit the two lowest poles merge into one, and their spectral weight extrapolates to a finite value, thus indicating the existence of a two-magnon bound state. For smaller dimerizations (such as $`\delta =0.1`$ in Fig. 8(b)) it is difficult to determine from our finite-size data whether there is a bound state. This is consistent with a recent numerical study of the Raman spectrum in $`CuGeO_3`$ which has considered an even smaller dimerization constant $`\delta \approx 0.03`$. Here it was argued that the strong magnon-magnon interactions destabilize the singlet bound state. However, as seen in Figs. 8(b) and (c), the Raman excitation spectra are qualitatively rather similar for these two parameter choices. Consider for example the spectra at the onset of the incommensurate phase ($`m=1/20`$), shown in the second lowest curves of Figs. 8(b) and (c). At low energies, there is a pair of poles (the second pole is not visible for $`\delta =0.1`$), separated from a set of poles at higher energies with a close spacing. As discussed above, the low-frequency poles merge in the thermodynamic limit, whereas the high-frequency set of poles appears to evolve into a continuum of states. We therefore suspect that singlet bound states may exist at the lower edge of the spectrum for any finite $`\delta `$, but much larger clusters may be needed for a numerical confirmation of the singlet bound state at small values of $`\delta `$. Furthermore, the total finite-frequency spectral weight $`W(m)`$, shown in Fig. 8(a), has the same dependence on the magnetization for both choices of $`\delta `$, indicating that there should be only one massive field theory applicable for the whole range $`\delta \in (0,1]`$.
## IV Conclusions
In summary, we have analyzed the spin excitation spectra for two distinct models in a magnetic field: the structurally dimerized chain and the spin-Peierls chain. Below a critical field, $`h_{c1}`$, both systems are in a spin liquid phase, composed of interacting singlet dimers. Above $`h_{c1}`$, the spin gap of the structurally dimerized chain closes, whereas the spin-Peierls system supports a singlet-triplet gap up to $`h_{c2}`$. Therefore, the excitation spectra of these two models are quite different in their incommensurate phases. In the structurally dimerized chain, a soliton-antisoliton continuum appears at low energies, separated by a dimerization gap $`\mathrm{\Delta }_\pm =2\delta `$ from a second continuum at higher frequencies. In the spin-Peierls chain there is a triplet bound state with an onset at $`\mathrm{\Delta }_z`$, and a higher-energy continuum of states, starting at $`\mathrm{\Delta }_-+\mathrm{\Delta }_+<2\mathrm{\Delta }_z`$. Parts of the triplet band may thus overlap with the continuum. Common features in the spin excitation spectra of these two systems are (i) an incommensurate, field-dependent modulation $`q=\pi +2\pi m(h)`$ for $`h_{c1}<h<h_{c2}`$, (ii) a loss of overall spectral weight with increasing magnetic field, and (iii) full spin polarization beyond $`h_{c2}`$.
Furthermore, there are qualitative changes in the spectra of the structurally dimerized chain, depending on the magnitude of the dimerization parameter $`\delta `$. The region of large dimerization in the $`h\delta `$ phase diagram can be understood within the valence bond picture of weakly interacting dimers, whereas for small values of $`\delta `$ the interactions between the dimers are important, reducing the ratio of the singlet to the triplet gap and increasing the bandwidths of the spectral features.
The triplet excitation spectra of the spin-Peierls chain in the incommensurate phase contain a one-magnon bound state with an onset at $`\mathrm{\Delta }_z`$, and a soliton-antisoliton continuum with an onset at higher frequencies, $`\omega =\mathrm{\Delta }_-+\mathrm{\Delta }_+`$. The actual dimerization strengths of real spin-Peierls compounds are typically smaller than the values we have studied here, which were chosen to improve numerical stability. However, the spin excitation spectra obtained for $`H_{sP}`$ are qualitatively similar for the whole range of $`\delta `$ we were able to study.
We wish to thank A. Honecker, B. Normand, G. Uhrig, and S. Wessel for useful discussions, and acknowledge the Zumberge Foundation for financial support.
# Leptonic Jet Models of Blazars: Broadband Spectra and Spectral Variability
## Introduction
Recent high-energy detections and simultaneous broadband observations of blazars, determining their spectra and spectral variability, are posing strong constraints on currently popular jet models of blazars. 66 blazars have been detected by EGRET at energies above 100 MeV hartman99a , the two nearby high-frequency peaked BL Lac objects (HBLs) Mrk 421 and Mrk 501 are now multiply confirmed sources of multi-GeV – TeV radiation punch92 ; petry96 ; quinn96 ; brad97 , and the TeV detections of PKS 2155-314 chadwick99 and 1ES 2344+514 catanese98 are awaiting confirmation. Most EGRET-detected blazars exhibit rapid variability mukherjee97 , in some cases on intraday and even sub-hour (e. g., gaidos96 ) timescales, where generally the most rapid variations are observed at the highest photon frequencies.
The broadband spectra of blazars consist of at least two clearly distinct spectral components. The first one extends in the case of flat-spectrum radio quasars (FSRQs) from radio to optical/UV frequencies, in the case of HBLs up to soft and even hard X-rays, and is consistent with non-thermal synchrotron radiation from ultrarelativistic electrons. The second spectral component emerges at $`\gamma `$-ray energies and peaks at several MeV – a few GeV in most quasars, while in the case of some HBLs the peak of this component appears to be located at TeV energies.
The bolometric luminosity of EGRET-detected quasars and some low-frequency peaked BL Lac objects (LBLs) during flares is dominated by the $`\gamma `$-ray emission. If this emission were isotropic, it would correspond to enormous luminosities (up to $`10^{49}\,\mathrm{erg}\,\mathrm{s}^{-1}`$) which, in combination with the short observed timescales (implying a small size of the emission region), would lead to a strong modification of the emissivity spectra by $`\gamma \gamma `$ absorption, in contradiction to the observed smooth power-laws at EGRET energies. This has motivated the concept of relativistic beaming of radiation emitted by ultrarelativistic particles moving at relativistic bulk speed along a jet (for a review of these arguments, see schlickeiser96 ). While it is generally accepted that blazar emission originates in relativistic jets, the radiation mechanisms responsible for the observed $`\gamma `$ radiation are still under debate. It is not clear yet whether in these jets protons are the primarily accelerated particles, which then produce the $`\gamma `$ radiation via photo-pair and photo-pion production, followed by $`\pi ^0`$ decay and synchrotron emission by secondary particles (e. g., mannheim93 ), or electrons (and positrons) are accelerated directly and produce $`\gamma `$-rays in Compton scattering interactions with the various target photon fields in the jet mg85 ; dermer92 ; sikora94 ; boettcher97 .
In this review, I will describe the current status of blazar models based on leptons (electrons and/or pairs; in the following, the term “electrons” refers to both electrons and positrons) as the primary constituents of the jet which are responsible for the $`\gamma `$-ray emission. Hadronic jet models are discussed in a separate paper by J. Rachen rachen99 . In Section 2, I will give a description of the model and discuss the different $`\gamma `$-ray production mechanisms and their relevance for different blazar classes. In Section 3, I will review recent progress in understanding intrinsic differences between different blazar classes and present state-of-the-art model calculations, using a leptonic jet model, to undermine the general theoretical concept. In Section 4, I will discuss broadband spectral variability of individual blazars and their interpretation in the framework of leptonic jet models. A simple quasi-analytical toy model for blazar broadband spectral variability will be presented in section 5.
## Model description and radiation mechanisms
The basic geometry of leptonic blazar jet models is illustrated in Fig. 1. At the center of the AGN, an accretion disk around a supermassive, probably rotating, black hole is powering a relativistic jet. Along this pre-existing jet structure, occasionally blobs of ultrarelativistic electrons are ejected at relativistic bulk velocity.
The electrons are emitting synchrotron radiation, which will be observable at IR – UV or even X-ray frequencies, and hard X-rays and $`\gamma `$-rays via Compton scattering processes. Possible target photon fields for Compton scattering are the synchrotron photons produced within the jet (the SSC process, mg85 ; maraschi92 ; bloom96 ), the UV – soft X-ray emission from the disk — either entering the jet directly (the ECD \[External Comptonization of Direct disk radiation\] process; dermer92 ; ds93 ) or after reprocessing at the broad line regions or other circumnuclear material (the ECC \[External Comptonization of radiation from Clouds\] process; sikora94 ; bl95 ; dss97 ), or jet synchrotron radiation reflected at the broad line regions (the RSy \[Reflected Synchrotron\] mechanism; gm96 ; bednarek98 ; bd98 ).
The relative importance of these components may be estimated by comparing the energy densities of the respective target photon fields. Denoting by $`u_B^{\prime }`$ the co-moving energy density of the magnetic field, the energy density of the synchrotron radiation field, governing the luminosity of the SSC component, may be estimated by $`u_{sy}^{\prime }\approx u_B^{\prime }\tau _T\gamma _e^2`$, where $`\tau _T=n_{e,B}^{\prime }R_B^{\prime }\sigma _T`$ is the Thomson depth of the relativistic plasma blob and $`\gamma _e`$ is the average Lorentz factor of electrons in the blob. The SSC spectrum exhibits a broad hump without strong spectral break, peaking around $`ϵ_{SSC}\sim (B^{\prime }/B_{cr})D\gamma _e^4\sim ϵ_{sy}\gamma _e^2`$, where $`B^{\prime }`$ is the co-moving magnetic field, $`B_{cr}=4.414\times 10^{13}`$ G, and $`D=\left(\mathrm{\Gamma }[1-\beta _\mathrm{\Gamma }\mathrm{cos}\theta _{obs}]\right)^{-1}`$ is the Doppler factor associated with the bulk motion of the blob. Throughout this paper, all photon energies are described by the dimensionless quantity $`ϵ=h\nu /(m_ec^2)`$.
If the blob is sufficiently far from the central engine of the AGN so that the accretion disk can be approximated as a point source of photons, its photon energy density (in the co-moving frame) is $`u_D^{\prime }\approx L_D/(4\pi z^2c\mathrm{\Gamma }^2)`$, where $`L_D`$ is the accretion disk luminosity, and $`z`$ is the height of the blob above the accretion disk. The ECD spectrum can exhibit a strong spectral break, depending on the existence of a low-energy cutoff in the electron distribution function, and peaks at $`ϵ_{ECD}\sim ϵ_D(D/\mathrm{\Gamma })\gamma _e^2`$, where $`ϵ_D`$ is the average photon energy of the accretion disk radiation (typically of order $`10^{-5}`$ for Shakura-Sunyaev type accretion disks shakura73 around black holes of $`10^8`$–$`10^{10}M_{\odot }`$).
Part of the accretion disk and the synchrotron radiation will be reprocessed by circumnuclear material in the broad line region and can re-enter the jet. Since this reprocessed radiation is nearly isotropic in the rest-frame of the AGN, it will be strongly blue-shifted into the rest-frame of the relativistically moving plasma blob. Thus, assuming that a fraction $`a_{BLR}`$ of the radiation is rescattered into the jet trajectory, we find for the energy density of rescattered accretion disk photons: $`u_{ECC}^{\prime }\approx L_Da_{BLR}\mathrm{\Gamma }^2/(4\pi r_{BLR}^2c)`$, where $`r_{BLR}`$ is the average distance of the BLR material from the central black hole. The ECC photon spectrum peaks around $`ϵ_{ECC}\sim ϵ_DD\mathrm{\Gamma }\gamma _e^2\sim ϵ_{ECD}\mathrm{\Gamma }^2`$.
For the synchrotron mirror mechanism, additional constraints due to light travel time effects need to be taken into account in order to estimate the reflected synchrotron photon energy density (for a detailed discussion see bd98 ), which is well approximated by $`u_{RSy}^{\prime }\approx u_{sy}^{\prime }\,4\mathrm{\Gamma }^3a_{BLR}(R_B^{\prime }/\mathrm{\Delta }r_{BLR})(1-2\mathrm{\Gamma }R_B^{\prime }/z)`$, where $`\mathrm{\Delta }r_{BLR}`$ is a measure of the geometrical thickness of the broad line region. Similar to the SSC spectrum, the RSy spectrum does not show a strong spectral break. It peaks around $`ϵ_{RSy}\sim (B^{\prime }/B_{cr})D\mathrm{\Gamma }^2\gamma _e^4\sim ϵ_{SSC}\mathrm{\Gamma }^2`$.
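Collecting the four estimates, the target fields can be compared numerically for a given source; a small sketch in cgs units (function and argument names are ours, and the expressions are only the order-of-magnitude forms quoted above):

```python
import numpy as np

c = 2.998e10  # speed of light in cm/s

def target_energy_densities(u_B, tau_T, gamma_e, L_D, z, Gamma,
                            a_BLR, r_BLR, R_B, dr_BLR):
    """Comoving energy densities of the four target photon fields."""
    u_sy  = u_B * tau_T * gamma_e**2
    u_ECD = L_D / (4.0 * np.pi * z**2 * c * Gamma**2)
    u_ECC = L_D * a_BLR * Gamma**2 / (4.0 * np.pi * r_BLR**2 * c)
    u_RSy = u_sy * 4.0 * Gamma**3 * a_BLR * (R_B / dr_BLR) * (1.0 - 2.0 * Gamma * R_B / z)
    return {"sy": u_sy, "ECD": u_ECD, "ECC": u_ECC, "RSy": u_RSy}
```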
## Trends between different blazar classes
There appears to be a more or less continuous sequence in the broadband spectral properties of blazars, ranging from FSRQs over LBLs to HBLs, which was first presented in a systematic way in fossati97 . While in FSRQs the synchrotron and $`\gamma `$-ray peaks are typically located at infrared and MeV – GeV energies, respectively, they are shifted towards higher frequencies in BL Lacs, occurring at medium to even hard X-rays and at multi-GeV – TeV energies in some HBLs. The bolometric luminosity of FSRQs is — at least during $`\gamma `$-ray high states — strongly dominated by the $`\gamma `$-ray emission, while in HBLs the relative power outputs in synchrotron and $`\gamma `$-ray emission are comparable.
Detailed modeling of several blazars has indicated that this sequence appears to be related to the relative contribution of the external Comptonization mechanisms ECD and ECC to the $`\gamma `$-ray spectrum. While most FSRQs are successfully modelled with external Comptonization models (e. g., dss97 ; sambruna97 ; muk99 ; hartman99b ), the broadband spectra of HBLs are consistent with pure SSC models (e. g., mk97 ; pian98 ; petry99 ). BL Lacertae, a LBL, appears to be intermediate between these two extremes, requiring an external Comptonization component to explain the EGRET spectrum madejski99 ; bb99 . Figs. 2 and 3 illustrate detailed modeling results of two objects located at opposite ends of this sequence of blazars, using the jet radiation transfer code described in boettcher97 ; bb99 .
A physical interpretation of this sequence in the framework of a unified jet model for blazars was given in ghisellini98 . Assume that the average energy of electrons, $`\gamma _e`$, is determined by the balance of an energy-independent acceleration rate $`\dot{\gamma }_{acc}`$ and radiative losses, $`\dot{\gamma }_{rad}\approx (4/3)c\sigma _T(u^{\prime }/m_ec^2)\gamma ^2`$, where the target photon density $`u^{\prime }`$ is the sum of the sources intrinsic to the jet, $`u_B^{\prime }+u_{sy}^{\prime }`$, plus external photon sources, $`u_{ECD}^{\prime }+u_{ECC}^{\prime }+u_{RSy}^{\prime }`$. The average electron energy will then be $`\gamma _e\propto (\dot{\gamma }_{acc}/u^{\prime })^{1/2}`$. If one assumes that the properties determining the acceleration rate of relativistic electrons do not vary significantly between different blazar subclasses, then an increasing energy density of the external radiation field will obviously lead to a stronger radiation component due to external Comptonization, but also to a decreasing average electron energy $`\gamma _e`$, implying that the peak frequencies of both spectral components are displaced towards lower frequencies.
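The balance argument amounts to a two-line computation; a sketch (assuming the radiative-loss rate quoted above, in cgs units):

```python
import numpy as np

sigma_T, c, mec2 = 6.652e-25, 2.998e10, 8.187e-7  # cm^2, cm/s, m_e c^2 in erg

def gamma_equilibrium(gdot_acc, u_prime):
    """Solve gdot_acc = (4/3) c sigma_T (u'/m_e c^2) gamma^2 for gamma_e."""
    return np.sqrt(3.0 * gdot_acc * mec2 / (4.0 * c * sigma_T * u_prime))
```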
## Spectral variability of blazars
Between flaring and non-flaring states, blazars show very distinct spectral variability. Not only does the emission at the highest frequencies generally vary on the shortest time scales, but also the flaring amplitudes are significantly different among different wavelength bands. FSRQs often show spectral hardening of their $`\gamma `$-ray spectra during $`\gamma `$-ray flares (e. g., collmar97 ; hartman96 ; wehrle98 ), and the flaring amplitude in $`\gamma `$-rays is generally larger than in all other wavelength bands. The concept of multi-component $`\gamma `$-ray spectra of quasars, as first suggested for PKS 0528+134 in collmar97 , offers a plausible explanation for this spectral variability due to the different beaming patterns of different radiation mechanisms, as pointed out in dermer95 . This has been applied to PKS 0528+134 in bc98 and muk99 .
The results of muk99 indicate that $`\gamma `$-ray flaring states of PKS 0528+134 are consistent with an increasing bulk Lorentz factor $`\mathrm{\Gamma }`$ of ejected jet material, while at the same time the low-energy cutoff $`\gamma _1`$ of the electron distribution injected into the jet is lowered. This is in agreement with the physical picture that due to an increasing $`\mathrm{\Gamma }`$, the quasi-isotropic external photon field is more strongly Lorentz boosted into the blob rest frame, leading to stronger external Compton losses, implying a lower value of $`\gamma _1`$. The external Compton $`\gamma `$-ray components depend much more strongly on the bulk Lorentz factor than the synchrotron and SSC components do. This leads naturally to a hardening of the $`\gamma `$-ray spectrum, if the SSC mechanisms plays an important or even dominant role in the X-ray — soft $`\gamma `$-ray regime, while external Comptonization is the dominant radiation mechanism at higher $`\gamma `$-ray energies. The results of this study on PKS 0528+134 are discussed in more detail in muk99p .
While this flaring mechanism is plausible for FSRQs, short-timescale, correlated X-ray and $`\gamma `$-ray flares of the HBLs Mrk 421 and Mrk 501 mk97 ; pian98 and synchrotron flares of other HBLs (e. g., PKS 2155-304, georg98 ; kataoka99 ) have been explained successfully in the context of SSC models where flares are related to an increase of the maximum electron energy, $`\gamma _2`$, and a hardening of the electron spectrum. How the spectral evolution in synchrotron flares can be used to constrain the magnetic field and the physics of injection and acceleration of relativistic pairs, has been described in detail in the previous talk by R. Sambruna sambruna99 .
Comparing detailed spectral fits to weekly averaged broadband spectra of Mrk 501 petry99 over a period of 6 months, we have found that TeV and hard X-ray high states on intermediate timescales are consistent with a hardening of the electron spectrum (decreasing spectral index) and an increasing number density of high-energy electrons, while the value of $`\gamma _2`$ has only minor influence on the weekly averaged spectra. Fig. 5 shows how the spectral index of the injected electron distribution and the density of high-energy electrons resulting from our fits are varying in comparison to the RXTE ASM, BATSE, and HEGRA 1.5 TeV light curves. For a more detailed discussion of this analysis see boettcher99 .
These variability studies seem to indicate that due to the different dominant $`\gamma `$ radiation mechanisms in quasars and HBLs also the physics of $`\gamma `$-ray flares and extended high states is considerably different. While in FSRQs the $`\gamma `$-ray emission and its flaring behavior appears to be dominated by conditions of the external radiation field, this influence is unimportant in the case of HBLs where emission lines are very weak or absent, implying that the BLR might be very dilute, leading to a very weak external radiation field, which becomes negligible compared to the synchrotron radiation field intrinsic to the jet.
## A toy model for spectral variability
On the basis of the estimates of the photon energy densities of the various radiation fields and the peak energies of the diverse radiation components given in Section 2, one can develop a very simple, quasi-analytic toy model which allows us to study the influence of parameter variations on the predicted broadband spectrum of a blazar. For construction of this toy model, I assume that the magnetic field within the jet is in equipartition with the ultrarelativistic electrons, and that the light-travel time effects affecting the efficiency of the synchrotron mirror mechanism can be parametrized by a correction factor $`f_{ltt}\sim 0.1`$ so that $`u_{RSy}^{\prime }\approx u_{sy}^{\prime }a_{BLR}\mathrm{\Gamma }^3f_{ltt}`$. Then, the entire broadband spectrum, accounting for all synchrotron and inverse-Compton components, is determined by 9 parameters: $`\mathrm{\Gamma }`$, $`\theta _{obs}`$, $`\dot{\gamma }_{acc}`$, $`L_D`$, $`ϵ_D`$, $`a_{BLR}`$, the scale height $`z`$ of energy dissipation in the jet, $`R_B^{\prime }`$, and $`\tau _T`$. In most cases, several of these parameters can be constrained by independent observations. The radiation spectra of each individual component are approximated by double power-laws with a smooth transition.
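One convenient realization of such a component is a smoothly broken power law in $`\nu F_\nu `$; a sketch (the break-sharpness parameter $`s`$ and all names are our own choices, not necessarily the exact functional form used for Fig. 6):

```python
import numpy as np

def component(eps, eps_pk, F_pk, a1, a2, s=1.0):
    """nu*F_nu of one component: slope a1 below eps_pk, a2 (< 0) above it;
    s controls how sharp the transition at the peak is."""
    x = eps / eps_pk
    return F_pk * (0.5 * (x ** (-a1 / s) + x ** (-a2 / s))) ** (-s)

# the toy-model broadband spectrum is a sum of such components, e.g.
# total = component(eps, eps_sy, ...) + component(eps, eps_SSC, ...) + ...
```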
The thick solid curve in Fig. 6 shows a toy model calculation with the location of peak frequencies and the relative contributions of the $`\gamma `$ radiation components as found in our fits to PKS 0528+134 based on detailed simulations muk99 . The hard X-ray to soft $`\gamma `$-ray spectrum below $`\sim 1`$ MeV is dominated by the SSC mechanism, while at higher energies, the ECC mechanism is dominant. The other curves in Fig. 6 indicate the effect of single parameter changes on the broadband spectrum which could be thought of as the cause of flares at $`\gamma `$-ray energies.
From Fig. 6, one can see that an increasing bulk Lorentz factor leads to a strong flare at $`\gamma `$-ray energies, while only moderate flaring at infrared and X-ray frequencies results (dotted curve). A shift of the synchrotron peak towards lower frequencies is predicted boe99b . If the BLR albedo $`a_{BLR}`$ increases (short-dashed curve), the result is a slight increase in the power output at high-energy $`\gamma `$-rays, while due to enhanced external-Compton cooling the flux in the synchrotron and SSC components even decreases. An increased acceleration rate $`\dot{\gamma }_{acc}`$ (long-dashed curve), leads to a strong synchrotron and SSC flare, where the largest variability amplitude is expected at MeV energies, while only moderate variability at higher frequencies is predicted. If the density of relativistic electrons in the blob increases during a flare (dot-dashed curve), the variability amplitude is again predicted to be largest at MeV energies and the peak frequencies of all components are expected to be shifted slightly towards lower frequencies. A decreasing observing angle — which could be a consequence of a bending jet — leads to a shift of the entire broadband spectrum towards higher fluxes and slightly higher peak frequencies.
Obviously, definite conclusions can not be drawn from this simplistic toy-model analysis. However, it may support previous results that an increasing bulk Lorentz factor is a viable and plausible explanation for the spectral variability observed in PKS 0528+134 and possibly also in other FSRQs.
Fig. 7 illustrates the variation of the power output in the different external Compton $`\gamma `$-ray components with respect to the synchrotron component, if $`\mathrm{\Gamma }`$ is the only parameter changing during a $`\gamma `$-ray flare. Most notably, a very steep relation between $`\nu F_\nu ^{peak}(sy)`$ and $`\nu F_\nu ^{peak}(ECC)`$ is predicted for values of the Lorentz factor close to the critical $`\mathrm{\Gamma }`$, at which the observer is looking at the superluminal angle. This dependence can be significantly steeper than quadratic, $`\mathrm{\Delta }\nu F_\nu ^{peak}(sy)\propto \left[\mathrm{\Delta }\nu F_\nu ^{peak}(ECC)\right]^\alpha `$ with $`\alpha >2`$, which has recently been observed in the prominent 1996 flare of 3C279 wehrle98 .
|
no-problem/9909/cond-mat9909115.html
|
ar5iv
|
text
|
# Quantum Antidots: Coulomb Blockade or no Coulomb Blockade?
In a recent Letter Kataoka et al. critique our statement ”… there is no Coulomb Blockade for resonant tunneling through an antidot since there is no isolated region that is being charged.” They furnish their ”proof” of Coulomb Blockade (CB) in quantum antidots (QAD) in the integer quantum Hall regime by means of an intricate experiment: they sense the variation of the fringe electric field when the electron occupation of a QAD changes as a function of applied magnetic field.
While using or not using the terminology ”Coulomb Blockade” is a matter of semantics to a large extent, the common usage of the term refers to charging of an isolated metallic island where: (i) the electron energy spectrum is continuous without the CB, and (ii) the charging energy can be conveniently expressed as $`U_C=e^2/2C`$ with $`C`$ being the total capacitance of the island to the outside world. Here ”isolated” is important because an electrical insulator defines a conduction electron vacuum, and thus allows us to (i) define the metallic island, and (ii) fix the number of electrons therein. The notion of CB is most useful when $`C`$ is constant (so long as just a few electrons are added or subtracted), and can be easily found from the geometry. The notion of CB is less useful even for single electron tunneling in quantum dots, where $`U_C`$ acquires additional large contributions, such as the size quantization energy and the intradot Coulomb interaction energy, both being of the same order as the geometric charging energy. Here the point is that the physics changes from essentially single-particle in metallic islands to many-body in true quantum dots; the simple single particle models give numerically inaccurate energies, and neglect a new class of many body effects, such as the spin singlet - triplet transitions of a two electron state in a quantum dot.
The notion of CB is even less useful for QADs, and is even conceptually ill defined as no vacuum separates the electrons bound on the QAD from all the rest of the electrons in the system. That is, it is not clear which electrons are to be considered as bound on the QAD and which are not (Fig. 1). The only natural criterion is to consider the energy spectrum as discrete (electrons bound on the QAD) for $`\mathrm{\Delta }E`$ greater than temperature and any external excitation, and continuous otherwise. Thus we are forced to conclude that the ”size” of the QAD depends on temperature and on the voltage used to measure conductance. In addition, if one were to estimate the geometric CB charging energy for a QAD, one obtains $`\sim `$12 meV for the QAD of Ref. and $`\sim `$4 meV for the QAD of Ref. ; these values are some 200 times greater than the experimental level spacing $`\mathrm{\Delta }E`$ obtained from thermal activation in both works.
Further, all the experimental results of Ref. simply tell us that the electron energy spectrum on the QAD is discrete, and that electrons respond to an electric field. However, a discrete energy spectrum by itself is not commonly taken to imply Coulomb Blockade. For example, atomic and molecular energy spectra are discrete, yet standard texts do not attribute this to CB.
V. J. Goldman
Department of Physics
SUNY at Stony Brook
Stony Brook, NY 11794-3800
# Propagation of light in non-inertial reference frames
## 1 Introduction
One of the fundamental facts of modern physics is the constancy of the speed of light. Einstein regarded it as one of the two postulates on which special relativity is based. So far, however, little attention has been paid to the status of this postulate when teaching special relativity. It turns out that the constancy of the speed of light is a direct consequence of the relativity principle, not an independent postulate. To see this let us consider the two postulates of special relativity as formulated by Einstein in his 1905 paper "On the electrodynamics of moving bodies": "the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture (the purport of which will hereafter be called the "Principle of Relativity") to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of the motion of the emitting body" . As the principle of relativity states that "the laws of physics are the same in all inertial reference frames" and the constancy of the speed of light means that "the speed of light is the same in all inertial reference frames (regardless of the motion of the source or the observer)", it follows that the second postulate is indeed a consequence of the first: the law describing the propagation of light is the same for all inertial observers.
This becomes even clearer if it is taken into account that the relativity principle is a statement of the impossibility to detect absolute motion. Since all inertial observers (moving with constant velocity) are completely equivalent (none is in absolute motion) according to the principle of relativity, they all observe the same phenomena and describe them by using the same laws of physics. Therefore it does follow that light should propagate with the same speed in all inertial frames; otherwise, if an inertial observer found that the speed of light were not c in his reference frame, that observer would say that he detected his absolute motion.
That all inertial observers are equivalent is also seen from the fact that they are represented by geodesic worldlines which in the case of flat spacetime are straight worldlines. However, when an observer is accelerating his worldline is not geodesic (not a straight worldline in flat spacetime). Therefore, accelerated motion, unlike motion with constant velocity, is absolute - there is an absolute difference between a geodesic and a non-geodesic worldline. This means that the laws of physics in inertial and non-inertial reference frames are not the same. An immediate consequence is that the speed of light is not constant in non-inertial frames - a non-inertial observer can detect his accelerated motion by using light signals.
It is precisely this corollary of special relativity that has received little attention in courses and books on relativity. In fact, that corollary is regularly used, but since this is done implicitly, confusion is not always avoided. For instance, an observer in Einstein’s thought experiment involving an accelerating elevator can discover his accelerated motion by the deflection of a light ray from its horizontal path. An observer in a rotating reference frame, a rotating disk for example, can also detect the disk’s accelerated motion by using light: light signals emitted from a point M in opposite directions along the rim of the disk do not arrive at the same time at M (this is the so-called Sagnac effect) . It is explicitly stated in general relativity that the local speed of light is always c, which implies that the speed of light along a finite distance is not necessarily c. However, up to now no average velocity of light propagating between two points has been defined. Before introducing such a velocity let us consider several examples that demonstrate the need for it.
Einstein’s thought experiment involving an elevator at rest in a parallel gravitational field of strength $`𝐠`$ and an elevator accelerating with an acceleration $`𝐚=𝐠`$ was designed to demonstrate the equivalence of the non-inertial reference frames $`N^g`$ (elevator at rest in the gravitational field) and $`N^a`$ (elevator accelerating in space devoid of gravity). Einstein called this equivalence the principle of equivalence: it is not possible by experiment to distinguish between the non-inertial frames $`N^a`$ and $`N^g`$ which means that all physical phenomena look the same in $`N^a`$ and $`N^g`$. Therefore if a horizontal light ray propagating in $`N^a`$ bends, a horizontal light ray propagating in $`N^g`$ should bend as well.
Although even introductory physics textbooks have started to discuss Einstein’s elevator experiment, an obvious question has been overlooked: "Are light rays propagating in an elevator in a vertical direction (parallel and anti-parallel to $`𝐚`$ or $`𝐠`$) also affected by the accelerated motion of the elevator or its being in a gravitational field?" The answer to this question requires the introduction of an average *coordinate* velocity of light which turns out to be different from $`c`$ in the case of vertical light rays (see Figure 1 and the detailed discussion in Section 2). It should be stressed that it is the average coordinate velocity of light between two points that is different from $`c`$; the local speed of light measured at a point is always $`c`$.
A second average velocity of light - an average *proper* velocity of light - is required for the explanation of the Shapiro time delay , . It also turns out not to be always $`c`$. The fact that it takes more time for a light signal to travel between two points $`P`$ and $`Q`$ in a gravitational field than between the same points in flat spacetime as determined by an observer at one of the points indicates that the average velocity of light between the two points is smaller than $`c`$. As the proper time of the observer is used in measuring that velocity it seems appropriate to call it average *proper* velocity of light. Unlike the average coordinate velocity the average proper velocity of light between two points depends on which point it is measured at. This fact confirms the dependence of the Shapiro time delay on the point where it is measured and shows, as we shall see in Section 3, that in the case of a parallel gravitational field it is not always a delay effect (in such a field the average proper velocity of light is defined in terms of both the proper distance and proper time of an observer). A light signal will be delayed *only* if it is measured at a point $`P`$ that is farther from the gravitating mass producing the parallel field; if it is measured at the other point $`Q,`$ closer to the mass, it will take less time for the signal to travel the same distance which shows that the average proper velocity of the signal determined at $`Q`$ is greater than that measured at $`P`$ and greater than $`c`$.
Since the average velocities of light in $`N^a`$ and $`N^g`$ are calculated in order to verify their agreement with the equivalence principle, only a parallel gravitational field will be considered in Sections 2 and 3. That is why the expressions for the average light velocities will be derived for this case. Their generalized expressions for the case of the Schwarzschild metric and other metrics can be easily obtained (see Appendix).
Section 4 deals with the propagation of light in a rotating frame. In the case of light signals propagating in opposite directions along the rim of a rotating disk only the introduction of a coordinate velocity (depending on the centripetal acceleration of the disk) can explain why an observer on the disk discovers that the light signals do not arrive at the same time at the source point.
The introduction of the average velocities of light also sheds some light on a subtle feature of the propagation of light in the vicinity of a massive body - whether or not light falls in its gravitational field. The particle aspect of light seems to entail that a photon, like any other particle, should fall in a gravitational field (due to the mass corresponding to its energy); the deflection of light by a massive body appears to support such a view. And indeed this view is sometimes implicitly or explicitly expressed in papers and books, although the correct explanation is given in some books on general relativity . It has been recently claimed that the issue of whether or not a charge falling in a gravitational field radiates can be resolved by assuming that the charge’s electromagnetic field is also falling . An electromagnetic field falling in a gravitational field, however, implies that light falls in the gravitational field as well. Even Einstein and Infeld appear to suggest that as a light beam has mass on account of its energy it will fall in a gravitational field: "A beam of light will bend in a gravitational field exactly as a body would if thrown horizontally with a velocity equal to that of light" . This comparison is not quite precise since the vertical component of the velocity of the body will increase as it falls, whereas the velocity of the "falling" light beam is decreasing for a non-inertial observer (supported in a gravitational field), as we shall see below. Statements such as "a beam of light will accelerate in a gravitational field, just like objects that have mass" and therefore "near the surface of the earth, light will fall with an acceleration of 9.81 $`m/s^2`$" have found their way into introductory physics textbooks . It will be shown in Section 3 that during its "fall" in a gravitational field light is slowing down - a negative acceleration of 9.81 $`m/s^2`$ is decreasing its velocity (at a point $`P`$ near the Earth’s surface as seen from another point above or below $`P`$).
## 2 Average coordinate velocity of light
Why the average velocity of light between two points in a gravitational field is not generally equal to $`c`$ can be most clearly shown by considering two extra light rays parallel and anti-parallel to the gravitational acceleration $`𝐠`$ in the Einstein thought experiment involving an elevator at rest in the Earth’s gravitational field (Figure 1).
* Figure 1 - Propagation of light in the Einstein elevator at rest in a parallel gravitational field.
Three light rays are emitted simultaneously in the elevator which is at rest in the non-inertial reference frame $`N^g`$. Two rays are emitted from points $`A`$ and $`C`$ towards point $`B`$ and the third light ray is following the null path from $`D`$, spatially directed along constant $`z`$ towards $`B`$, to $`B^{\prime }`$. The emission of the three rays is also simultaneous in the local Lorentz (inertial) frame $`I`$ which is momentarily at rest with respect to $`N^g`$ at the moment the light rays are emitted ($`I`$ and $`N^g`$ have a common instantaneous three-dimensional space at this moment and therefore common simultaneity). At the next moment, as $`I`$ starts to fall in the gravitational field, it will appear to an observer in $`I`$ that the elevator moves upwards with an acceleration $`g=\left|𝐠\right|`$. Therefore, as seen from $`I`$, the three light rays will arrive simultaneously not at point $`B`$, but at $`B^{\prime }`$, since for the time $`t=r/c`$ the elevator moves (from $`I`$’s viewpoint) a distance $`\delta =gt^2/2=gr^2/2c^2`$. As the simultaneous arrival of the three rays at point $`B^{\prime }`$ is an *absolute* (observer-independent) fact due to its being a *single* event, it follows that the rays arrive simultaneously at $`B^{\prime }`$ as seen from $`N^g`$ as well. Since for the *same* coordinate time $`t=r/c`$ in $`N^g`$ the three light rays travel different distances $`DB^{\prime }\approx r`$, $`AB^{\prime }=r+\delta `$, and $`CB^{\prime }=r-\delta `$ before arriving simultaneously at $`B^{\prime }`$, an observer in $`N^g`$ concludes (to within terms of order $`c^{-2}`$) that the *average* velocity of the light ray propagating from $`A`$ to $`B^{\prime }`$ is slightly greater than $`c`$
$$c_{AB^{\prime }}^g=\frac{r+\delta }{t}\approx c\left(1+\frac{gr}{2c^2}\right).$$
The average velocity $`c_{CB^{\prime }}^g`$ of the light ray propagating from $`C`$ to $`B^{\prime }`$ is slightly smaller than $`c`$
$$c_{CB^{\prime }}^g=\frac{r-\delta }{t}\approx c\left(1-\frac{gr}{2c^2}\right).$$
It is easily seen that, to within terms of order $`c^{-2}`$, the average velocity of light between $`A`$ and $`B`$ is equal to that between $`A`$ and $`B^{\prime }`$, i.e. $`c_{AB}^g=c_{AB^{\prime }}^g`$ and also $`c_{CB}^g=c_{CB^{\prime }}^g`$:
$$c_{AB}^g=\frac{r}{t-\delta /c}=\frac{r}{t-gt^2/2c}=\frac{c}{1-gr/2c^2}\approx c\left(1+\frac{gr}{2c^2}\right)$$
(1)
and
$$c_{CB}^g=\frac{r}{t+\delta /c}\approx c\left(1-\frac{gr}{2c^2}\right).$$
(2)
As the average velocities (1) and (2) are not determined with respect to a specific point since the *coordinate* time $`t`$ is involved in their calculation, it is clear that (1) and (2) represent the average *coordinate* velocities of light between the points $`A`$ and $`B`$ and $`C`$ and $`B`$, respectively.
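To get a feel for the size of these corrections, the sketch below evaluates (1) and (2) numerically; the height $`r`$ and the use of the terrestrial $`g`$ are illustrative assumptions chosen only to show the scale of the effect.

```python
# Scale of the average coordinate velocities (1) and (2); r is illustrative.
c = 2.99792458e8   # m/s
g = 9.81           # m/s^2
r = 100.0          # vertical separation, m

c_AB = c * (1 + g * r / (2 * c**2))   # Eq. (1), ray propagating downwards
c_CB = c * (1 - g * r / (2 * c**2))   # Eq. (2), ray propagating upwards

print(f"c_AB - c = {c_AB - c:.3e} m/s")   # ~ +1.6e-6 m/s
print(f"c - c_CB = {c - c_CB:.3e} m/s")   # ~ +1.6e-6 m/s
```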
These expressions for the average coordinate velocity of light in $`N^g`$ can be also obtained from the coordinate velocity of light at a point in a parallel gravitational field. In such a field proper and coordinate times do not coincide whereas proper and coordinate distances are the same as follows from the standard spacetime interval in a *parallel* gravitational field
$$ds^2=\left(1+\frac{2gz}{c^2}\right)c^2dt^2-dx^2-dy^2-dz^2$$
which can be also written as \[18, p. 173\]
$$ds^2=\left(1+\frac{gz}{c^2}\right)^2c^2dt^2-dx^2-dy^2-dz^2.$$
(3)
Note that due to the existence of a horizon at $`z=-c^2/g`$ \[18, pp. 169, 172-173\] there are constraints on the size of non-inertial reference frames (accelerated or at rest in a parallel gravitational field) which are represented by the metric (3). If the origin of $`N^g`$ is changed, say to $`z_B=0`$ (See Figure 1), the horizon moves to $`z=-c^2/g-\left|z_B\right|`$.
The coordinate velocity of light at a point $`z`$ can be obtained from (3) (for $`ds^2=0`$)
$$c^g\left(z\right)\equiv \frac{dz}{dt}=\pm c\sqrt{\left(1+\frac{gz}{c^2}\right)^2}=\pm c\left(1+\frac{gz}{c^2}\right).$$
(4)
The $`+`$ and $`-`$ signs are for light propagating along or against $`+z`$, respectively. Therefore, the coordinate velocity of light at a point $`z`$ is locally isotropic in the $`z`$ direction. It is clear that the coordinate velocity (4) cannot become negative due to the constraints on the size of non-inertial frames which ensure that $`\left|z\right|<c^2/g`$ \[18, pp. 169, 172\].
As seen from (4) the coordinate velocity of light is a function of $`z`$ which shows that we can calculate the average coordinate velocity between $`A`$ and $`B`$ by taking an average over the distance from $`A`$ to $`B`$. As $`c^g\left(z\right)`$ is not only continuous on the interval $`[z_A,z_B]`$ (for $`\left|z\right|<c^2/g`$), but is also a linear function of $`z`$, we can write
$$c_{AB}^g=\frac{1}{z_B-z_A}\int _{z_A}^{z_B}c^g\left(z\right)dz=c\left(1+\frac{gz_B}{c^2}+\frac{gr}{2c^2}\right),$$
(5)
where we took into account that $`z_A=z_B+r`$. When the coordinate origin is at point $`B`$ ($`z_B=0`$) the expression (5) coincides with (1).
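As a quick consistency check, the distance average (5) can be verified numerically against the coordinate velocity (4); all parameter values in the sketch below are illustrative.

```python
# Verify the distance average (5) by numerically integrating the coordinate
# velocity (4); parameter values are illustrative.
from scipy.integrate import quad

c, g = 2.99792458e8, 9.81
z_B, r = 0.0, 100.0          # origin at B, point A a height r above B
z_A = z_B + r

c_of_z = lambda z: c * (1 + g * z / c**2)        # Eq. (4), + direction
avg_num = quad(c_of_z, z_B, z_A)[0] / (z_A - z_B)
avg_closed = c * (1 + g * z_B / c**2 + g * r / (2 * c**2))   # Eq. (5)

print(f"numerical  : {avg_num:.9f} m/s")
print(f"closed form: {avg_closed:.9f} m/s")
```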
The coordinate velocity of light $`c^g\left(z\right)`$ is also continuous on the interval $`[t_A,t_B]`$, but in order to calculate $`c_{AB}^g`$ by taking an average of the velocity of light over the time of its propagation from $`A`$ to $`B`$ we should find the dependence of $`z`$ on $`t`$. From (3) we can write (for $`ds^2=0`$):
$$dz=c\left(1+\frac{gz}{c^2}\right)dt.$$
By integrating, and neglecting terms proportional to $`c^{-2}`$, we find that $`z=ct`$, which shows that $`c^g\left(z\right)`$ is also linear in $`t`$ (to within terms proportional to $`c^{-2}`$):
$$c^g\left(t\right)=\pm c\left(1+\frac{gt}{c}\right).$$
Therefore for the average coordinate velocity of light between points $`A`$ and $`B`$ we have:
$`c_{AB}^g`$ $`=`$ $`{\displaystyle \frac{1}{t_B-t_A}}{\displaystyle \int _{t_A}^{t_B}}c^g\left(z\right)𝑑t={\displaystyle \frac{1}{t_B-t_A}}{\displaystyle \int _{t_A}^{t_B}}c\left(1+{\displaystyle \frac{gz}{c^2}}\right)𝑑t`$ (6)
$`=`$ $`{\displaystyle \frac{1}{t_B-t_A}}{\displaystyle \int _{t_A}^{t_B}}c\left(1+{\displaystyle \frac{gt}{c}}\right)𝑑t=c\left(1+{\displaystyle \frac{gz_B}{c^2}}+{\displaystyle \frac{gr}{2c^2}}\right),`$
where the magnitude of $`c^g\left(z\right)`$ was used and it was taken into account that $`z_A=z_B+r`$ and $`z_A=ct_A`$ and $`z_B=ct_B`$. As expected this expression coincides with (5) and for $`z_B=0`$ is equal to (1).
The fact that $`c^g\left(z\right)`$ is linear in both $`z`$ and $`t`$ (to within terms of order $`c^{-2}`$) makes it possible to calculate the average coordinate velocity of light propagating between $`A`$ and $`B`$ (See Figure 1) by using the values of $`c^g\left(z\right)`$ only at the end points $`A`$ and $`B`$:
$$c_{AB}^g=\frac{1}{2}\left(c_A^g+c_B^g\right)=\frac{1}{2}\left[c\left(1+\frac{gz_A}{c^2}\right)+c\left(1+\frac{gz_B}{c^2}\right)\right]$$
and as $`z_A=z_B+r`$
$$c_{AB}^g=c\left(1+\frac{gz_B}{c^2}+\frac{gr}{2c^2}\right).$$
(7)
This expression coincides with the expressions for $`c_{AB}^g`$ in (5) and (6).
For the average coordinate velocity of light propagating between $`B`$ and $`C`$ we obtain
$$c_{BC}^g=c\left(1+\frac{gz_B}{c^2}-\frac{gr}{2c^2}\right)$$
(8)
since $`z_C=z_Br`$. As noted above when the coordinate origin is at point $`B`$ ($`z_B=0`$) the expressions (7) and (8) coincide with (1) and (2).
The average coordinate velocities (7) and (8) correctly describe the propagation of light in $`N^g`$, yielding the right expression $`\delta =gr^2/2c^2`$ (See Figure 1). It should be stressed that without these average coordinate velocities the fact that the light rays emitted from $`A`$ and $`C`$ arrive not at $`B`$, but at $`B^{\prime }`$ cannot be explained.
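A two-line check confirms that (7) and (8), applied over the common coordinate time $`t=r/c`$, reproduce the distances $`r\pm \delta `$ of Figure 1 (illustrative values of $`g`$ and $`r`$):

```python
# Check that (7) and (8) reproduce the shift delta = g*r^2/(2*c^2) of Fig. 1.
c, g, r = 2.99792458e8, 9.81, 100.0
z_B = 0.0                            # coordinate origin at B

t = r / c                            # common coordinate travel time
c_AB = c * (1 + g * z_B / c**2 + g * r / (2 * c**2))   # Eq. (7)
c_CB = c * (1 + g * z_B / c**2 - g * r / (2 * c**2))   # Eq. (8)

delta = g * r**2 / (2 * c**2)
print(abs(c_AB * t - (r + delta)) < 1e-12)   # A -> B' covers r + delta: True
print(abs(c_CB * t - (r - delta)) < 1e-12)   # C -> B' covers r - delta: True
```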
As a coordinate velocity, the average coordinate velocity of light is not determined with respect to a specific point and depends on the choice of the coordinate origin. Also, it is the same for light propagating from $`A`$ to $`B`$ and for light travelling in the opposite direction, i.e. $`c_{AB}^g=c_{BA}^g`$. Therefore, like the coordinate velocity (4), the average coordinate velocity is also isotropic. Notice, however, that the average coordinate velocity of light is isotropic only in the sense that the average light velocity between two given points is the same in both directions. As seen from (7) and (8), the average coordinate velocity of light between different pairs of points, the same distance apart, is different. As a result, as seen in Figure 1, the light ray emitted at $`A`$ arrives at $`B`$ before the light ray emitted at $`C`$.
In an elevator (at rest in the non-inertial reference frame $`N^a`$) accelerating with an acceleration $`a=\left|𝐚\right|`$, where the metric is \[18, p. 173\]
$$ds^2=\left(1+\frac{az}{c^2}\right)^2c^2dt^2-dx^2-dy^2-dz^2,$$
(9)
the expressions for the average coordinate velocity of light between $`A`$ and $`B`$ and $`B`$ and $`C`$, respectively, are
$$c_{AB}^a=c\left(1+\frac{az_B}{c^2}+\frac{ar}{2c^2}\right)$$
(10)
and
$$c_{BC}^a=c\left(1+\frac{az_B}{c^2}-\frac{ar}{2c^2}\right)$$
(11)
in agreement with the equivalence principle.
## 3 Average proper velocity of light
The average coordinate velocity of light explains the propagation of light in the Einstein elevator and in non-inertial reference frames in general, but cannot be used in a situation where the average light velocity between two points (say a source and an observation point) is determined with respect to one of the points. Such situations occur in the Shapiro time delay. As the local velocity of light is $`c`$ the average velocity of light between a source and an observation point depends on which of the two points is regarded as a reference point with respect to which the average velocity is determined (at the reference point the local velocity of light is always $`c`$). The dependence of the average velocity on which point is chosen as a reference point demonstrates that this velocity is anisotropic. This anisotropic velocity can be regarded as an average *proper* velocity of light since it is determined with respect to a given point and therefore its calculation involves the proper time at that point. It is also defined in terms of the proper distance as determined by an observer at the same point (in the case of a parallel gravitational field).
Consider a light source at point $`B`$ (See Figure 1). To calculate the average proper velocity of light originating from $`B`$ and observed at $`A`$ (that is, as seen from $`A`$) we have to determine the initial velocity of a light signal at $`B`$ and its final velocity at $`A`$, both with respect to $`A`$. As the local velocity of light is $`c`$ the final velocity of the light signal determined at $`A`$ is obviously $`c`$. Noting that in a parallel gravitational field proper and coordinate distances are the same we can determine the initial velocity of the light signal at $`B`$ as seen from $`A`$
$$c_B^g=\frac{dz_B}{d\tau _A}=\frac{dz_B}{dt}\frac{dt}{d\tau _A}$$
where $`dz_B/dt=c^g\left(z_B\right)`$ is the coordinate velocity of light (4) at $`B`$
$$c^g\left(z_B\right)=c\left(1+\frac{gz_B}{c^2}\right)$$
and $`d\tau _A=ds_A/c`$ is the proper time for an observer with constant spatial coordinates at $`A`$
$$d\tau _A=\left(1+\frac{gz_A}{c^2}\right)dt.$$
As $`z_A=z_B+r`$ and $`gz_A/c^2<1`$ (since for any value of $`z`$ in $`N^g`$ there is a restriction $`\left|z\right|<c^2/g`$), for the coordinate time $`dt`$ we have (to within terms of order $`c^{-2}`$)
$$dt\approx \left(1-\frac{gz_A}{c^2}\right)d\tau _A=\left(1-\frac{gz_B}{c^2}-\frac{gr}{c^2}\right)d\tau _A.$$
Then for the initial velocity $`c_B^g`$ at $`B`$ as seen from $`A`$ we obtain
$$c_B^g=c\left(1+\frac{gz_B}{c^2}\right)\left(1-\frac{gz_B}{c^2}-\frac{gr}{c^2}\right)$$
or, keeping only terms up to order $`c^{-2}`$,
$$c_B^g=c\left(1-\frac{gr}{c^2}\right).$$
(12)
Therefore an observer at $`A`$ will determine that a light signal is emitted at $`B`$ with the velocity (12) and during the time of its journey towards $`A`$ (away from the Earth’s surface) will accelerate with an acceleration $`g`$ and will arrive at $`A`$ with a velocity exactly equal to $`c`$.
For the average proper velocity $`\overline{c}_{BA}^g=(1/2)(c_B^g+c)`$ of light propagating from $`B`$ to $`A`$ as seen from $`A`$ we have
$$\overline{c}_{BA}^g\left(asseenfromA\right)=c\left(1-\frac{gr}{2c^2}\right).$$
(13)
As the local velocity of light at $`A`$ (measured at $`A`$) is $`c`$ it follows that if a light signal propagates from $`A`$ towards $`B`$ its initial velocity at $`A`$ is $`c`$, its final velocity at $`B`$ is (12) and therefore, as seen from $`A`$, it is subjected to a negative acceleration $`g`$ and will slow down as it "falls" in the Earth’s gravitational field. This shows that the average proper speed $`\overline{c}_{AB}^g\left(asseenfromA\right)`$ of a light signal emitted at $`A`$ with the initial velocity $`c`$ and arriving at $`B`$ with the final velocity (12) will be equal to the average proper speed $`\overline{c}_{BA}^g\left(asseenfromA\right)`$ of a light signal propagating from $`B`$ towards $`A`$. Thus, as seen from $`A`$, the back and forth average proper speeds of light travelling between $`A`$ and $`B`$ are the *same*.
Now let us determine the average proper velocity of light between $`B`$ and $`A`$ with respect to point $`B`$. A light signal emitted at $`B`$ as seen from $`B`$ will have an initial (local) velocity $`c`$ there. The final velocity of the signal at $`A`$ as seen from $`B`$ will be
$$c_A^g=\frac{dz_A}{d\tau _B}=\frac{dz_A}{dt}\frac{dt}{d\tau _B}$$
where $`dz_A/dt=c^g\left(z_A\right)`$ is the coordinate velocity of light at $`A`$
$$c^g\left(z_A\right)=c\left(1+\frac{gz_A}{c^2}\right)$$
and $`d\tau _B`$ is the proper time at $`B`$
$$d\tau _B=\left(1+\frac{gz_B}{c^2}\right)dt.$$
Then, as $`z_A=z_B+r`$, we obtain for the velocity of light at $`A`$ as determined from $`B`$
$$c_A^g=c\left(1+\frac{gr}{c^2}\right).$$
(14)
Using (14), the average proper velocity of light propagating from $`B`$ to $`A`$ as determined from $`B`$ becomes
$$\overline{c}_{BA}^g\left(asseenfromB\right)=c\left(1+\frac{gr}{2c^2}\right).$$
(15)
If a light signal propagates from $`A`$ to $`B`$ its average proper speed $`\overline{c}_{AB}^g\left(asseenfromB\right)`$ will be equal to $`\overline{c}_{BA}^g\left(asseenfromB\right)`$, the average proper speed of light propagating from $`B`$ to $`A`$. This demonstrates that for an observer at $`B`$ a light signal emitted from $`B`$ with a velocity $`c`$ will accelerate towards $`A`$ with an acceleration $`g`$ and will arrive there with the final velocity (14). As determined by the $`B`$-observer, a light signal emitted from $`A`$ with the initial velocity (14) will be slowing down (with $`g`$) as it "falls" in the Earth’s gravitational field and will arrive at $`B`$ with a final velocity exactly equal to $`c`$. Therefore an observer at $`B`$ will agree with an observer at $`A`$ that a light signal will accelerate with an acceleration $`g`$ on its way from $`B`$ to $`A`$ and will decelerate while "falling" in the Earth’s gravitational field during its propagation from $`A`$ to $`B`$, but will disagree on the velocity of light at the points $`A`$ and $`B`$.
Comparing (13) and (15) demonstrates that the two average proper speeds between the same points $`A`$ and $`B`$ are not equal and depend on where they are measured from. As we expected, the fact that the local velocity of light at the reference point is $`c`$ makes the average proper velocity between two points dependent on where the reference point is. An immediate consequence is that the Shapiro time delay does not always mean that it takes more time for light to travel a given distance in a parallel gravitational field than the time needed in flat spacetime.
In the case of a parallel gravitational field the Shapiro time effect for a round trip of a light signal propagating between $`A`$ and $`B`$ determined from point $`A`$ will be indeed a delay effect:
$$\mathrm{\Delta }\tau _A=\frac{2r}{c\left(1-gr/2c^2\right)}\approx \mathrm{\Delta }t_{flat}\left(1+\frac{gr}{2c^2}\right),$$
where $`\mathrm{\Delta }t_{flat}=2r/c`$ is the time for the round trip of light between $`A`$ and $`B`$ in flat spacetime. However, an observer at $`B`$ will determine that it takes less time for a light signal to complete the round trip between $`A`$ and $`B`$:
$$\mathrm{\Delta }\tau _B=\frac{2r}{c\left(1+gr/2c^2\right)}\approx \mathrm{\Delta }t_{flat}\left(1-\frac{gr}{2c^2}\right).$$
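Numerically the asymmetry between the two determinations is easy to display; in the sketch below the separation $`r`$ is an assumed value.

```python
# Round-trip proper times between A and B in a parallel gravitational field,
# as determined at A (farther from the mass) and at B (closer). r is assumed.
c, g, r = 2.99792458e8, 9.81, 1.0e3

dt_flat = 2 * r / c
dtau_A = dt_flat * (1 + g * r / (2 * c**2))   # a genuine delay at A
dtau_B = dt_flat * (1 - g * r / (2 * c**2))   # shorter than flat at B

print(f"flat spacetime: {dt_flat:.15e} s")
print(f"seen from A   : {dtau_A:.15e} s (longer)")
print(f"seen from B   : {dtau_B:.15e} s (shorter)")
```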
However, in the Schwarzschild metric the Shapiro effect is always a delay effect, since the average proper speed of light in that metric is always smaller than $`c`$, as shown in the Appendix.
The average proper velocity of light between $`A`$ and $`B`$ can be also obtained by using the average coordinate velocity of light (7) between the same points:
$$c_{AB}^g\equiv \frac{r}{\mathrm{\Delta }t}=c\left(1+\frac{gz_B}{c^2}+\frac{gr}{2c^2}\right)$$
Let us calculate the average proper velocity of light propagating between $`A`$ and $`B`$ as determined from point $`A.`$ This means that we will use $`A`$’s proper time $`\mathrm{\Delta }\tau _A=\left(1+gz_A/c^2\right)\mathrm{\Delta }t`$:
$$\overline{c}_{AB}^g(asseenfromA)=\frac{r}{\mathrm{\Delta }\tau _A}=\frac{r}{\mathrm{\Delta }t}\frac{\mathrm{\Delta }t}{\mathrm{\Delta }\tau _A}$$
Noting that $`r/\mathrm{\Delta }t`$ is the average coordinate velocity (7) and $`z_A=z_B+r`$ we have (to within terms of order $`c^{-2}`$)
$$\overline{c}_{AB}^g(asseenfromA)\approx c\left(1+\frac{gz_B}{c^2}+\frac{gr}{2c^2}\right)\left(1-\frac{gz_A}{c^2}\right)\approx c\left(1-\frac{gr}{2c^2}\right)$$
which coincides with (13).
The calculation of the average proper velocity of light propagating between $`A`$ and $`B`$, but as seen from $`B`$ yields the same expression as (15):
$$\overline{c}_{AB}^g(asseenfromB)=\frac{r}{\mathrm{\Delta }\tau _B}=\frac{r}{\mathrm{\Delta }t}\frac{\mathrm{\Delta }t}{\mathrm{\Delta }\tau _B}\approx c\left(1+\frac{gz_B}{c^2}+\frac{gr}{2c^2}\right)\left(1-\frac{gz_B}{c^2}\right)\approx c\left(1+\frac{gr}{2c^2}\right).$$
As evident from (13) and (15) the average proper velocity of light emitted from a common source and determined at different points around the source is anisotropic in $`N^g`$ - if the observation point is above the light source the average proper speed of light is slightly smaller than $`c`$ and smaller than the average proper speed as determined from an observation point below the source. If an observer at point $`B`$ (See Figure 1) determines the average proper velocities of light coming from $`A`$ and $`C`$ he will find that they are also anisotropic - the average proper velocity of light coming from $`A`$ is greater than that emitted at $`C`$ and therefore the light from $`A`$ will arrive at $`B`$ before the light from $`C`$ (provided that the two light signals from $`A`$ and $`C`$ are emitted simultaneously in $`N^g`$). However, if the observer at $`B`$ (See Figure 1) determines the back and forth average proper speeds of light propagating between $`A`$ and $`B`$ he finds that they are the same (the back and forth average proper speeds of light between $`B`$ and $`C`$ are also the same).
The calculation of the average proper velocities of light in an accelerating frame $`N^a`$ gives:
$$c_{BA}^a\left(asseenfromA\right)=c\left(1-\frac{ar}{2c^2}\right)$$
and
$$c_{BA}^a\left(asseenfromB\right)=c\left(1+\frac{ar}{2c^2}\right).$$
where $`a=|𝐚|`$ is the frame’s proper acceleration.
## 4 The Sagnac effect
The Sagnac effect can be described as follows. Two light signals emitted from a point M on the rim of a rotating disk and propagating along its rim in opposite directions will not arrive simultaneously at M. There still exist people who question special relativity, and their main argument has been this effect. They claim that for an observer on the rotating disk the speed of light is not constant - that the Galilean law of velocity addition ($`c+v`$ and $`c-v`$, where $`v`$ is the orbital speed at a point on the disk rim) should be used by the rotating observer in order to explain the time difference in the arrival of the two light signals at M. What makes such claims even more persistent is the lack of a clear position on the issue of the speed of light in non-inertial reference frames. What special relativity states is that the speed of light is constant only in inertial reference frames - this constancy follows from the impossibility to detect absolute motion (more precisely, it follows from the non-existence of absolute motion). Accelerated motion can be detected and for this reason the coordinate velocity of light in non-inertial reference frames is a function of the proper acceleration of the frame. The rotating disk is a non-inertial reference frame and its acceleration can be detected by different means including light signals. That is why it is not surprising that the coordinate velocity of light as determined on the disk depends on the centripetal acceleration of the disk. As we shall see below, the coordinate velocity of light calculated on the disk is not a manifestation of the Galilean law of velocity addition.
Consider two disks whose centers coincide. One of them is stationary, the other rotates with constant angular velocity $`\omega `$. As the stationary disk can be regarded as an inertial frame its metric is the Minkowski metric:
$$ds^2=c^2dt^2-dx^2-dy^2-dz^2.$$
(16)
To write the interval $`ds^2`$ in polar coordinates we use the transformation
$$t=t\qquad x=R\mathrm{cos}\mathrm{\Phi }\qquad y=R\mathrm{sin}\mathrm{\Phi }\qquad z=z.$$
(17)
By substituting (17) in (16) we get
$$ds^2=c^2dt^2-dR^2-R^2d\mathrm{\Phi }^2-dz^2.$$
(18)
Let an observer on the rotating disk use the coordinates $`t`$, $`r`$, $`\phi `$, and $`z`$. The transformation between the coordinates on the stationary and on the rotating disk is obviously:
$$t=t\qquad R=r\qquad \mathrm{\Phi }=\phi +\omega t\qquad z=z.$$
(19)
Time does not change in this transformation since the coordinate time on the rotating disk is given by the clock at its center and this clock is at rest with respect to the inertial stationary disk . By substituting (19) in (18) we obtain the metric on the rotating disk:
$$ds^2=\left(1-\frac{\omega ^2r^2}{c^2}\right)c^2dt^2-dr^2-r^2d\phi ^2-2\omega r^2dtd\phi -dz^2.$$
(20)
As light propagates along null geodesics ($`ds^2=0`$) we can calculate the tangential coordinate velocity of light $`c^\phi \equiv r\left(d\phi /dt\right)`$ from (20) by taking into account that $`dr=0`$ and $`dz=0`$ for light propagating on the surface of the rotating disk along its rim (of radius $`r`$). First we have to determine $`d\phi /dt`$. From (20) we can write
$$r^2\left(\frac{d\phi }{dt}\right)^2+2\omega r^2\left(\frac{d\phi }{dt}\right)-\left(1-\frac{\omega ^2r^2}{c^2}\right)c^2=0.$$
The solution of this quadratic equation gives two values for $`d\phi /dt`$ - one in the direction in which $`\phi `$ increases ($`+\phi `$) (in the direction of the rotation of the disk) and the other in the opposite direction ($`-\phi `$):
$$\left(\frac{d\phi }{dt}\right)^{+\phi }=-\omega +\frac{c}{r};\left(\frac{d\phi }{dt}\right)^{-\phi }=-\omega -\frac{c}{r}.$$
Then for the tangential coordinate velocities $`c^{+\phi }`$ and $`c^{-\phi }`$ we obtain
$$c^{+\phi }\equiv r\left(\frac{d\phi }{dt}\right)^{+\phi }=c\left(1-\frac{\omega r}{c}\right)$$
(21)
and
$$c^{-\phi }\equiv r\left(\frac{d\phi }{dt}\right)^{-\phi }=-c\left(1+\frac{\omega r}{c}\right).$$
(22)
As seen from (21) and (22) the tangential coordinate velocities $`c^{+\phi }`$ and $`c^{-\phi }`$ are constant for a given $`r`$, which means that (21) and (22) also represent the average coordinate velocities of light. The coordinate speed of light propagating in the direction of the rotation of the disk is smaller than the coordinate speed in the opposite direction.
This fact allows an observer on the rotating disk to explain why two light signals emitted from a point M on the disk rim and propagating along the rim in opposite directions will not arrive simultaneously at M - as the coordinate speed of the light signal travelling against the disk rotation is greater than the speed of the other signal, it will arrive at M first.
The time it takes a light signal travelling along the rim of the disk in the direction of its rotation to complete one revolution is
$$\mathrm{\Delta }t^{+\phi }=\frac{2\pi r}{c^{+\phi }}=\frac{2\pi r}{c\left(1-\omega r/c\right)}=\frac{2\pi r}{c-\omega r}.$$
The time for the completion of one revolution by the light signal propagating in the opposite direction is:
$$\mathrm{\Delta }t^{-\phi }=\frac{2\pi r}{\left|c^{-\phi }\right|}=\frac{2\pi r}{c\left(1+\omega r/c\right)}=\frac{2\pi r}{c+\omega r}.$$
The arrival of the two light signals at M is separated by the time interval:
$$\delta t=\mathrm{\Delta }t^{+\phi }-\mathrm{\Delta }t^{-\phi }=\frac{4\pi \omega r^2}{c^2-\omega ^2r^2}.$$
(23)
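For a sense of scale, the sketch below evaluates (23) for an illustrative tabletop disk; the radius and rotation rate are assumptions, and the result is compared with the familiar approximation $`4A\omega /c^2`$ with $`A=\pi r^2`$.

```python
# Sagnac time difference (23) for an illustrative rotating disk.
import math

c = 2.99792458e8
r = 0.5                         # disk radius, m (assumed)
omega = 2 * math.pi * 100.0     # 100 revolutions per second (assumed)

dt = 4 * math.pi * omega * r**2 / (c**2 - omega**2 * r**2)   # Eq. (23)
print(f"exact (23)   : {dt:.3e} s")                          # ~ 2.2e-14 s
print(f"4*A*omega/c^2: {4 * math.pi * r**2 * omega / c**2:.3e} s")
```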
The time difference (23) is caused by the different coordinate speeds of light in the $`+\phi `$ and $`-\phi `$ directions. Here it should be specifically stressed that $`c^{+\phi }`$ and $`c^{-\phi }`$ are different from $`c`$ owing to the accelerated motion (rotation) of the disk. In terms of the orbital velocity $`v=\omega r`$ it appears that the two tangential coordinate velocities can be written as a function of $`v`$
$$c^{+\phi }=c\left(1-\frac{v}{c}\right)=c-v;c^{-\phi }=-c\left(1+\frac{v}{c}\right)=-(c+v),$$
which resemble the Galilean law of velocity addition. However, it is completely clear that this resemblance is misleading: due to the centripetal (normal) acceleration $`a^N=v^2/r`$ the direction of the orbital velocity constantly changes during the rotation of the disk, which means that $`c^{+\phi }`$ and $`c^{-\phi }`$ depend on the normal acceleration of the disk:
$$c^{+\phi }=c\left(1-\frac{\sqrt{a^Nr}}{c}\right)$$
(24)
and
$$c^{-\phi }=-c\left(1+\frac{\sqrt{a^Nr}}{c}\right).$$
(25)
As expected, the expressions (24) and (25) are similar to the average coordinate velocities (10) and (11) (for $`z_B=0`$) in the sense that all of these coordinate velocities depend on acceleration, not velocity.
## 5 Conclusions
The paper revisits the question of the constancy of the speed of light by pointing out that it has two answers - the speed of light is constant in all inertial reference frames, but when determined in a non-inertial frame it depends on the frame’s proper acceleration. It has been shown that the complete description of the propagation of light in non-inertial frames of reference requires an average coordinate and an average proper velocity of light. The need for an average coordinate velocity was demonstrated in the case of Einstein’s elevator thought experiment - to explain the fact that two light signals emitted from points $`A`$ and $`C`$ in Figure 1 meet at $`B^{\prime }`$, not at $`B`$. It was also shown that an average proper velocity of light is implicitly used in the Shapiro time delay effect; when such a velocity is explicitly defined it follows that in the case of a parallel gravitational field the Shapiro effect is not always a delay effect.
The Sagnac effect was also revisited by defining the coordinate velocity of light in the non-inertial frame of the rotating disk. That velocity naturally explains the fact that two light signals emitted from a point on the rim of the rotating disk and propagating along its rim in opposite directions do not arrive simultaneously at the same point.
## 6 Acknowledgments
I would like to thank Mark Stuckey for his constructive and helpful comments.
## 7 Appendix - Shapiro time delay
Although it is recognized that the retardation of light (the Shapiro time delay) is caused by the reduced speed of light in a gravitational field \[12, pp. 196, 197\], an expression for the average velocity of light has not been derived so far. Now we shall see that the introduction of an average proper velocity of light makes it possible to calculate this effect directly in terms of that velocity. It is the average proper velocity of light that is needed in the Shapiro time delay since the time measured in this effect is the proper time at a given point.
We shall consider the treatment of the Shapiro time delay in \[12, Sec. 4.4\]. A light (in fact, a radio) signal is emitted from the Earth (at $`z_1<0`$) which propagates in the gravitational field of the Sun, is reflected by a target planet (at $`z_2>0`$), and travels back to Earth. The path of the light signal (parallel to the $`z`$ axis) is approximated by a straight line \[12, pp. 196\]. The distance between this line and the Sun (along the $`x`$ axis) is $`b`$. The total proper time from the emission of the light signal to its arrival back on Earth is \[12, pp. 197, 198\]:
$$\mathrm{\Delta }\tau =2\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}\right)\left(\frac{z_2+\left|z_1\right|}{c}+\frac{2GM_{\odot }}{c^3}\mathrm{ln}\frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}\right).$$
(26)
As the approximated distance between the Earth (at $`z_1<0`$) and the target planet (at $`z_2>0`$) is $`z_2+\left|z_1\right|`$ we can define the average proper velocity of a light signal travelling that distance as determined on Earth:
$`\overline{c}_{z_1z_2}^g(asseenfromEarth)`$ $`=`$ $`{\displaystyle \frac{z_2+\left|z_1\right|}{\mathrm{\Delta }\tau _{Earth}}}={\displaystyle \frac{z_2+\left|z_1\right|}{\mathrm{\Delta }t}}{\displaystyle \frac{\mathrm{\Delta }t}{\mathrm{\Delta }\tau _{Earth}}}`$ (27)
$`=`$ $`c_{z_1z_2}^g{\displaystyle \frac{1}{\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}\right)}},`$
where $`c_{z_1z_2}^g`$=$`\left(z_2+\left|z_1\right|\right)/\mathrm{\Delta }t`$ is the average coordinate velocity of light and it was taken into account that
$$\mathrm{\Delta }\tau _{Earth}=\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}\right)\mathrm{\Delta }t$$
is the proper time as measured on Earth and obtained from the Schwarzschild metric (the effect of the Earth’s gravitational field is neglected).
We have seen in Section 2 that the average coordinate velocity $`c_{z_1z_2}^g`$ can be calculated either as an average over time or over distance, so
$$c_{z_1z_2}^g=\frac{1}{z_2+\left|z_1\right|}\int _{z_1}^{z_2}c\left(z\right)dz,$$
where
$$c\left(z\right)=c\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z^2+b^2}}\right)$$
is the coordinate velocity of light at a point in the case of the Schwarzschild metric. Then
$`c_{z_1z_2}^g`$ $`=`$ $`{\displaystyle \frac{c}{z_2+\left|z_1\right|}}{\displaystyle \int _{z_1}^{z_2}}\left(1-{\displaystyle \frac{2GM_{\odot }}{c^2\sqrt{z^2+b^2}}}\right)𝑑z`$
$`=`$ $`{\displaystyle \frac{c}{z_2+\left|z_1\right|}}\left(z_2+\left|z_1\right|-{\displaystyle \frac{2GM_{\odot }}{c^2}}\mathrm{ln}{\displaystyle \frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}}\right)`$
$`=`$ $`c\left(1-{\displaystyle \frac{2GM_{\odot }}{c^2\left(z_2+\left|z_1\right|\right)}}\mathrm{ln}{\displaystyle \frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}}\right).`$
By substituting this expression for the average coordinate velocity of light in (27) we can obtain the average proper velocity of light in the Schwarzschild metric:
$$\overline{c}_{z_1z_2}^g(asseenfromEarth)=\frac{c}{\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}\right)}\left(1-\frac{2GM_{\odot }}{c^2\left(z_2+\left|z_1\right|\right)}\mathrm{ln}\frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}\right)$$
or
$$\overline{c}_{z_1z_2}^g(asseenfromEarth)\approx c\left(1+\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}-\frac{2GM_{\odot }}{c^2\left(z_2+\left|z_1\right|\right)}\mathrm{ln}\frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}\right).$$
For the total proper time
$$\mathrm{\Delta }\tau =\frac{2\left(z_2+\left|z_1\right|\right)}{\overline{c}_{z_1z_2}^g(asseenfromEarth)}$$
from the emission of the light signal to its arrival back on Earth we have
$`\mathrm{\Delta }\tau `$ $`=`$ $`{\displaystyle \frac{2\left(z_2+\left|z_1\right|\right)\left(1-\frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}\right)}{c\left(1-\frac{2GM_{\odot }}{c^2\left(z_2+\left|z_1\right|\right)}\mathrm{ln}\frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}\right)}}`$
$`\approx `$ $`2\left(1-{\displaystyle \frac{2GM_{\odot }}{c^2\sqrt{z_1^2+b^2}}}\right)\left({\displaystyle \frac{z_2+\left|z_1\right|}{c}}+{\displaystyle \frac{2GM_{\odot }}{c^3}}\mathrm{ln}{\displaystyle \frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}}\right)`$
and (26) is recovered. The total proper time can be also written (to within terms proportional to $`c^{-3}`$) as
$$\mathrm{\Delta }\tau \approx 2\left(\frac{z_2+\left|z_1\right|}{c}-\frac{2GM_{\odot }\left(z_2+\left|z_1\right|\right)}{c^3\sqrt{z_1^2+b^2}}+\frac{2GM_{\odot }}{c^3}\mathrm{ln}\frac{\sqrt{z_2^2+b^2}+z_2}{\sqrt{z_1^2+b^2}-\left|z_1\right|}\right).$$
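To connect (26) with the familiar order of magnitude, the sketch below evaluates its logarithmic term for an assumed superior-conjunction geometry (Earth at 1 au, target planet at about 1.5 au, ray grazing the solar limb); the distances are round-number assumptions, not ephemeris data.

```python
# Logarithmic contribution to the round-trip proper time (26) for a radio
# signal grazing the Sun; the geometry is an illustrative assumption.
import math

c, G = 2.99792458e8, 6.674e-11
M_sun, R_sun = 1.989e30, 6.96e8

z1 = -1.496e11     # Earth, m
z2 = 2.279e11      # target planet, m
b = R_sun          # impact parameter: ray grazing the solar limb

log_term = math.log((math.sqrt(z2**2 + b**2) + z2) /
                    (math.sqrt(z1**2 + b**2) - abs(z1)))
delay = 2 * (2 * G * M_sun / c**3) * log_term
print(f"logarithmic term of (26): {delay*1e6:.0f} microseconds")   # ~ 250 us
```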
## 1. Introduction
In recent years, we have witnessed a careful experimental investigation of the question of long-range magnetic order in rare-earth based icosahedral quasicrystals . Nevertheless, discussions of this matter have been somewhat unclear as to the actual nature of the magnetic order one would expect to see in antiferromagnetic (AF) quasicrystals, if they were to exist. A partial answer to this question can be obtained from a theory of the symmetry of magnetically ordered quasicrystals . I intend to show here that such a theory not only provides a valuable tool for analyzing neutron diffraction data, but also helps to narrow down the possible magnetic ordering one would expect to see in the classes of quasicrystals that are known to exist today. I hope that this will help in guiding the continuing search for new quasicrystals with this unique physical property.
## 2. The spin density field and its symmetry
A magnetically-ordered crystal, whether periodic or aperiodic, is most directly described by its spin density field $`𝐒(𝐫)`$. This field is a 3-component real-valued function, transforming like an axial vector under $`O(3)`$ and changing sign under time inversion. One may think of this function as defining a set of classical magnetic moments, or spins, on the atomic sites of the material. For quasiperiodic crystals the spin density field may be expressed as a Fourier sum with a countable infinity of wave vectors
$$𝐒(𝐫)=\underset{𝐤\in L}{\sum }𝐒(𝐤)e^{i𝐤\cdot 𝐫}.$$
(1)
The set $`L`$ of all integral linear combinations of the wave vectors in (1) is called the magnetic lattice. Its rank $`D`$ is the smallest number of wave vectors needed to generate it by integral linear combinations. For quasiperiodic crystals, by definition, the rank is finite. For the special case of periodic crystals the rank is equal to the dimension $`d`$ of physical space.
In elastic neutron scattering experiments, every wave vector $`𝐤`$ in $`L`$ is a candidate for a magnetic Bragg peak whose intensity is given by
$$I(𝐤)\propto |𝐒(𝐤)|^2-|\widehat{𝐤}\cdot 𝐒(𝐤)|^2,$$
(2)
where $`𝐤`$ is the scattering wave vector and $`\widehat{𝐤}`$ is a unit vector in its direction. I have shown elsewhere that under generic circumstances there can be only three reasons for not observing a magnetic Bragg peak at $`𝐤`$ even though $`𝐤`$ is in $`L`$: (a) The intensity $`I(𝐤)0`$ but is too weak to be detected in the actual experiment; (b) The intensity $`I(𝐤)=0`$ because $`𝐒(𝐤)`$ is parallel to $`𝐤`$; and (c) The intensity $`I(𝐤)=0`$ because magnetic symmetry requires the Fourier coefficient $`𝐒(𝐤)`$ to vanish. I shall explain below exactly how this symmetry requirement, or “selection rule,” comes about.
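As a minimal numerical illustration of (2), the sketch below evaluates the transverse projection for an assumed Fourier coefficient and verifies case (b): a coefficient parallel to $`𝐤`$ gives zero intensity. The vectors are arbitrary examples.

```python
import numpy as np

# Magnetic Bragg intensity, Eq. (2): only the component of S(k) perpendicular
# to the scattering direction contributes. S_k and k are arbitrary examples.
k = np.array([1.0, 0.0, 0.0])
S_k = np.array([0.3 + 0.1j, 0.0 + 0.0j, 0.0 + 0.5j])

k_hat = k / np.linalg.norm(k)
I = np.vdot(S_k, S_k).real - abs(np.vdot(k_hat, S_k)) ** 2
print(f"I(k) proportional to {I:.4f}")

# Case (b) of the text: S(k) parallel to k gives vanishing intensity.
S_par = (0.2 + 0.7j) * k_hat
print(np.isclose(np.vdot(S_par, S_par).real - abs(np.vdot(k_hat, S_par))**2, 0))
```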
The theory of magnetic symmetry in quasiperiodic crystals, which is described in more detail in Ref. , is a reformulation of Litvin and Opechowski’s theory of spin space groups . Their theory, which is applicable to periodic crystals, is extended to quasiperiodic crystals by following the ideas of Rokhsar, Wright, and Mermin’s “Fourier-space approach” to crystallography . At the heart of this approach is a redefinition of the concept of point-group symmetry which enables one to treat quasicrystals directly in physical space . The key to this redefinition is the observation that point-group rotations (proper or improper), when applied to a quasiperiodic crystal, do not leave the crystal invariant but rather take it into one that contains the same spatial distributions of bounded structures of arbitrary size.
This generalized notion of symmetry, termed “indistinguishability,” is captured by requiring that any symmetry operation of the magnetic crystal leave invariant all spatially-averaged autocorrelation functions of its spin density field $`𝐒(𝐫)`$ for any order $`n`$ and for any choice of components $`\alpha _i\{x,y,z\}`$,
$`C_{\alpha _1\mathrm{}\alpha _n}^{(n)}(𝐫_1,\mathrm{},𝐫_n)`$ (3)
$`=\underset{V\to \mathrm{\infty }}{lim}{\displaystyle \frac{1}{V}}{\displaystyle \int _V}𝑑𝐫S_{\alpha _1}(𝐫_1-𝐫)\mathrm{}S_{\alpha _n}(𝐫_n-𝐫).`$
I have shown in the Appendix of Ref. that an equivalent statement for the indistinguishability of any two quasiperiodic spin density fields, $`𝐒(𝐫)`$ and $`𝐒^{\prime }(𝐫)`$, is that their Fourier coefficients are related by
$$𝐒^{\prime }(𝐤)=e^{2\pi i\chi (𝐤)}𝐒(𝐤),$$
(4)
where $`\chi `$, called a gauge function, is a real-valued scalar function which is linear (modulo integers) on $`L`$. Only in the case of periodic crystals can one replace $`2\pi \chi (𝐤)`$ by $`𝐤\cdot 𝐝`$, reducing indistinguishability to the requirement that the two crystals differ at most by a translation $`𝐝`$.
With this in mind, we define the point group $`G`$ of the magnetic crystal to be the set of operations $`g`$ from $`O(3)`$ that leave it indistinguishable to within rotations $`\gamma `$ in spin space, possibly combined with time inversion. Accordingly, for every pair $`(g,\gamma )`$ there exists a gauge function, $`\mathrm{\Phi }_g^\gamma (𝐤)`$, called a phase function, which satisfies
$$𝐒(g𝐤)=e^{2\pi i\mathrm{\Phi }_g^\gamma (𝐤)}\gamma 𝐒(𝐤).$$
(5)
Since $`𝐒([gh]𝐤)=𝐒(g[h𝐤])`$, one easily establishes that the transformations $`\gamma `$ in spin space form a group $`\mathrm{\Gamma }`$ and that the pairs $`(g,\gamma )`$ satisfying the point-group condition (5) form a subgroup of $`G\times \mathrm{\Gamma }`$ which we call the spin point group $`G_𝐒`$. The corresponding phase functions, one for each pair in $`G_𝐒`$, must satisfy the group compatibility condition,
$$\forall (g,\gamma ),(h,\eta )\in G_𝐒:\mathrm{\Phi }_{gh}^{\gamma \eta }(𝐤)\equiv \mathrm{\Phi }_g^\gamma (h𝐤)+\mathrm{\Phi }_h^\eta (𝐤),$$
(6)
where “$`\equiv `$” denotes equality modulo integers. A spin space group, describing the symmetry of a magnetic crystal, whether periodic or aperiodic, is thus given by a magnetic lattice $`L`$, a spin point group $`G_𝐒`$, and a set of phase functions $`\mathrm{\Phi }_g^\gamma (𝐤)`$, satisfying the group compatibility condition (6).
## 3. The diffraction pattern: A thinned-out magnetic lattice or a shifted nuclear lattice?
I said earlier that every wave vector in the magnetic lattice is a candidate for a diffraction peak unless symmetry forbids it. We are now in a position to understand how this happens. Given a wave vector $`𝐤\in L`$ we examine all spin point-group operations $`(g,\gamma )`$ for which $`g𝐤=𝐤`$. These elements form a subgroup of the spin point group, called the little spin group of $`𝐤`$, $`G_𝐒^𝐤`$. For elements $`(g,\gamma )`$ of $`G_𝐒^𝐤`$, the point-group condition (5) can be rewritten as
$$\gamma 𝐒(𝐤)=e^{2\pi i\mathrm{\Phi }_g^\gamma (𝐤)}𝐒(𝐤).$$
(7)
This implies that the Fourier coefficient $`𝐒(𝐤)`$ is required to be a simultaneous eigenvector of all spin transformations $`\gamma `$ in the little spin group of $`𝐤`$, with the eigenvalues given by the corresponding phase functions. If a non-trivial 3-dimensional axial vector satisfying Eq. (7) does not exist then $`𝐒(𝐤)`$ will necessarily vanish. If such an eigenvector does exist, its form might still be constrained to lie in a particular subspace of spin space.
Of particular interest are spin transformations $`\gamma `$ that leave the spin density field indistinguishable without requiring any rotation in physical space. These transformations are paired in the spin point group with the identity rotation $`e`$ and form a normal and abelian subgroup of $`\mathrm{\Gamma }`$ called the lattice spin group $`\mathrm{\Gamma }_e`$. In the special case of periodic crystals, the elements of $`\mathrm{\Gamma }_e`$ are spin transformations that when combined with translations leave the magnetic crystal invariant.
The lattice spin group plays a key role in determining the outcome of elastic neutron scattering, for if a magnetic crystal has a nontrivial lattice spin group $`\mathrm{\Gamma }_e`$ then $`\{e\}\times \mathrm{\Gamma }_e\subseteq G_𝐒^𝐤`$ for every $`𝐤`$ in the magnetic lattice, restricting the form of all the $`𝐒(𝐤)`$’s. This may result in a substantial thinning-out of the magnetic lattice, whereby only a fraction of the wave vectors give rise to actual magnetic Bragg peaks. Because this thinning of the magnetic lattice is often quite extensive, it is common practice to describe the magnetic peaks not as a thinned-out magnetic lattice but rather in terms of the nuclear lattice $`L_0`$ (the one observed above the magnetic ordering temperature) which is shifted by so-called “magnetic propagation vectors.” These two descriptions are in fact equivalent and with some care can be used interchangeably.
## 4. Where should we look?
In the past I have tabulated all the decagonal spin space groups , as well as all the lattice spin groups for icosahedral quasicrystals . In the latter case I also listed explicitly, for every wave vector $`𝐤`$ in the magnetic lattice, whether through Eq. (7), symmetry requires $`𝐒(𝐤)`$ to vanish or to take any special form. In a future publication I plan to provide complete tables of spin space groups and the requirements which they impose on neutron scattering experiments for all the relevant quasiperiodic crystal systems (octagonal, decagonal, dodecagonal, and icosahedral).
Clearly, the theory of spin space groups provides a helpful tool for analyzing neutron diffraction experiments. It lists the patterns of magnetic Bragg peaks, compatible with each symmetry class, which can then be directly compared with experiment. But on a more basic level this theory answers one of the fundamental questions that have been debated in recent years, which is whether it is even possible to have long-range quasiperiodic magnetic order. It establishes that even though symmetry may impose constraints on the possible forms of magnetic order one can have in a given quasicrystal, it clearly does not forbid the existence of such order. Thus, there is no symmetry-based argument which disallows long-range magnetic order in quasicrystals.
Why is it, then, that we have not yet observed unequivocal long-range magnetic order in a quasicrystal? It might be because energetic considerations lead to local frustration and spin-glass ordering; it might be due to some other physical argument; or it might simply be because we have not found it yet. If this is the case, then a more practical question to ask of a theory of magnetic symmetry is whether it can offer any suggestions as to where to look for such order. Indeed, symmetry considerations may assist us in deciding in which quasicrystal systems to look first for the simplest kind of non-trivial magnetic ordering. Such ordering would be the quasiperiodic analog of a simple AF periodic crystal where half the spins are pointing “up” and the other half are pointing “down.” Symmetry arguments can guide us to those systems where such ordering is possible.
I therefore close this essay with a short discussion of what this quasiperiodic AF order looks like, followed by the list of systems which are compatible with such order. It would then be up to the metallurgists and material scientists to find the right chemical systems which can sustain local magnetic moments and at the same time are likely to have stable phases in these crystal systems.
## 5. The quasiperiodic antiferromagnet
The simple AF crystal, whether periodic or aperiodic, has a lattice spin group $`\mathrm{\Gamma }_e`$ containing only two elements: the identity operation $`ϵ`$ and time inversion $`\tau `$. In the case of time inversion the selection rule (7) becomes
$$\tau 𝐒(𝐤)=-𝐒(𝐤)=e^{2\pi i\mathrm{\Phi }_e^\tau (𝐤)}𝐒(𝐤),$$
(8)
which requires $`𝐒(𝐤)`$ to vanish unless $`\varphi _e^\tau (𝐤)\equiv 1/2`$. On the other hand, application of the group compatibility condition (6) to $`(e,\tau )^2=(e,ϵ)`$ gives two possible values for this phase,
$$\varphi _e^\tau (𝐤)\equiv 0\text{ or }\frac{1}{2}.$$
(9)
It is not too difficult to show that exactly half of the wave vectors in the magnetic lattice $`L`$ have $`\varphi _e^\tau (𝐤)\equiv 0`$ and will therefore not appear in the neutron diffraction pattern. These wave vectors constitute a sublattice $`L_0`$ of index 2 in $`L`$. One can then describe the set of wave vectors appearing in the diffraction diagram either as the magnetic lattice $`L`$ without all the wave vectors in $`L_0`$, or as $`L_0`$ shifted by $`𝐪`$, where $`𝐪`$, a "magnetic propagation vector," is any vector in $`L`$ which is not in the sublattice $`L_0`$. In the simplest scenario $`L_0`$ is also the nuclear lattice but this is not necessarily the case.
Consider a 1-dimensional spin chain with this lattice spin group. If the chain is periodic then its (Fourier) magnetic lattice is given by all integral multiples of a single wave vector $`𝐛^{*}`$ (I will keep the superscript $`*`$ as a reminder that we are in Fourier space). Because phase functions are linear it suffices to specify the value of $`\varphi _e^\tau `$ on $`𝐛^{*}`$ and that will determine its value on any wave vector in the lattice. Of the two possible values (9) the first, $`\varphi _e^\tau (𝐛^{*})\equiv 0`$, will result through the selection rule (8) in $`𝐒(𝐤)`$ being zero everywhere and therefore $`𝐒(𝐫)=0`$ as well. The only non-trivial assignment is therefore $`\varphi _e^\tau (𝐛^{*})\equiv 1/2`$ which through the selection rule (8) implies that all lattice wave vectors that are even multiples of $`𝐛^{*}`$ will be missing, or "extinct," from the diffraction pattern.
If the spin chain is quasiperiodic, say having a rank of 2, then its magnetic lattice will be given by all integral linear combinations of two wave vectors, $`𝐛_1^{*}`$ and $`𝐛_2^{*}`$, whose magnitudes are incommensurate. In this case the phase function $`\varphi _e^\tau `$ is fully determined by specifying its two independent values on $`𝐛_1^{*}`$ and $`𝐛_2^{*}`$. At first glance it would seem as if there are three distinct non-trivial assignments of values given by
$$(\varphi _e^\tau (𝐛_1^{}),\varphi _e^\tau (𝐛_2^{}))\equiv (0,\frac{1}{2})\quad \mathrm{or}\quad (\frac{1}{2},0)\quad \mathrm{or}\quad (\frac{1}{2},\frac{1}{2}).$$
(10)
It turns out that these three assignments are equivalent, leading to the same spin space group, due to the fact that for a quasiperiodic chain one has the added freedom of changing the basis of the magnetic lattice. A basis transformation from $`(𝐛_1^{},𝐛_2^{})`$ to $`(𝐛_1^{}+𝐛_2^{},𝐛_2^{})`$ or to $`(𝐛_1^{},𝐛_1^{}+𝐛_2^{})`$ takes one, respectively, from the first or second assignment in (10) to the third. Thus, the diffraction pattern of a quasiperiodic AF spin chain can always be described as a magnetic lattice given by wave vectors of the form $`𝐤=n_1𝐛_1^{}+n_2𝐛_2^{}`$ where all vectors with $`n_1+n_2`$ even are extinct. Equivalently, it may be described as a lattice $`L_0`$, generated by the wave vectors $`𝐛_1^{}+𝐛_2^{}`$ and $`𝐛_1^{}-𝐛_2^{}`$, and shifted by the vector $`𝐛_1^{}`$.
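As a concrete illustration of how the selection rule operates, the following sketch (not from the paper; the index window is an arbitrary choice) enumerates the wave vectors $`𝐤=n_1𝐛_1^{}+n_2𝐛_2^{}`$ of a rank-2 chain and checks which survive a given assignment of the phase function:

```python
# Minimal sketch: which magnetic Bragg peaks survive the selection rule
# phi(k) = 1/2 (mod 1) for a rank-2 (e.g., Fibonacci) spin chain.
from itertools import product

def surviving_peaks(phi1, phi2, nmax=3):
    """Labels (n1, n2) of wave vectors k = n1*b1 + n2*b2 allowed in the
    magnetic diffraction pattern; phase functions are linear, so
    phi(k) = n1*phi1 + n2*phi2 (mod 1)."""
    allowed = []
    for n1, n2 in product(range(-nmax, nmax + 1), repeat=2):
        if abs((n1 * phi1 + n2 * phi2) % 1.0 - 0.5) < 1e-9:
            allowed.append((n1, n2))
    return allowed

# For the assignment (1/2, 1/2) of Eq. (10), exactly the vectors with
# n1 + n2 even are extinct, as stated in the text:
peaks = surviving_peaks(0.5, 0.5)
assert all((n1 + n2) % 2 == 1 for (n1, n2) in peaks)
print(len(peaks), "allowed peaks in the window, every one with n1 + n2 odd")
```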
Knowing the different possibilities in Fourier space allows us to immediately construct simple direct-space examples of AF spin chains having these symmetries. Figures 1(a) and (b) show two periodic AF spin chains in which the “magnetic unit cell” is twice or four times as large as the “nuclear unit cell.” Both of these chains will exhibit the same magnetic diffraction peaks, the only way to distinguish them being a direct comparison with the nuclear diffraction pattern, which can be obtained above the magnetic ordering temperature. Figures 1(c)-(e) show three AF Fibonacci chains, obtained by setting the ratio $`b_1^{}/b_2^{}`$ to the golden mean $`(1+\sqrt{5})/2`$, and using the three different assignments of the phase function values given in (10). Again, as discussed above, all three are expected to have the same magnetic diffraction peaks and the only way to distinguish them is a comparison with the nuclear diffraction pattern.
Which of the actual quasicrystal systems that are known to exist today allow simple AF order? Axial quasicrystals admit two kinds of simple AF order. Since they are all quasiperiodic in the plane normal to the $`n`$-fold axis and periodic along this axis, it is always possible to have periodic AF order along the $`n`$-fold axis. This would give an AF quasicrystal, but not in the truly quasiperiodic sense that we are interested in. Only when $`n`$ is a power of 2 is it possible to have true quasiperiodic AF order in the plane normal to the $`n`$-fold axis . Thus, among the known axial quasicrystals one should concentrate the search for simple AF order on the octagonal crystal system.
Only two of the three Bravais classes in the icosahedral system admit simple AF order . Such order is possible if the nuclear lattice is simple (giving a magnetic lattice which is body-centered in Fourier space) or if it is face-centered in Fourier space (giving a simple icosahedral magnetic lattice). Unfortunately, most of the known icosahedral quasicrystals, including the rare-earth based ones, are face-centered in direct space and therefore do not allow simple AF order. Furthermore, icosahedral quasicrystals which are body-centered in direct space are not yet known to exist. Thus, in the icosahedral system, one should look for simple AF order in crystals that have a simple icosahedral nuclear lattice.
# Progress in Monte Carlo Calculations of Fermi Systems: Normal Liquid 3He
## Abstract
The application of the diffusion Monte Carlo method to a strongly interacting Fermi system such as normal liquid <sup>3</sup>He is explored. We show that the fixed-node method, together with the released-node technique and a systematic method to analytically improve the nodal surface, constitutes an efficient strategy to improve the calculation up to a desired accuracy. This methodology shows unambiguously that backflow correlations, when properly optimized, are enough to generate an equation of state of liquid <sup>3</sup>He in excellent agreement with experimental data from equilibrium up to freezing.
Liquid <sup>3</sup>He has been for many years a benchmark in the field of quantum many-body physics. The Fermi statistics of its atoms, combined with the strong correlations induced by the hard core of their interatomic potential, has turned it into a paradigm of strongly-correlated Fermi systems. At zero temperature, an approximate microscopic description has been achieved by means of variational methods, both for the Fermi liquid <sup>3</sup>He as well as for its bosonic counterpart liquid <sup>4</sup>He . From a Monte Carlo viewpoint, the quantum many-body problem can be tackled in a more ambitious way with the aid of the Green’s function Monte Carlo (GFMC) and the diffusion Monte Carlo (DMC) methods . The variational wave function can be used as an input for the Monte Carlo method which, for boson systems, is able to solve the Schrödinger equation of the $`N`$-body system, providing exact results. In Fermi systems such as liquid <sup>3</sup>He, the exactness of the method is lost due to the well-known sign problem, which makes a straightforward interpretation of the wave function impossible.
The cancellation methods developed up to now to solve this intricate problem have proved their efficiency in model problems or with very few particles, but become unreliable for real many-body systems . In the meantime, the approximate fixed-node (FN) method has become a standard tool. In the FN-DMC method, the antisymmetry is introduced in the trial wave function used for importance sampling by imposing its nodal surface as a boundary condition. This approach provides upper bounds to the exact eigenvalues, the quality of which is related to the accuracy of the nodal surface of the trial wave function. The main drawback of the FN-DMC method is the lack of control over the influence of the imposed nodal surface on the results obtained, not to mention the impossibility of properly correcting for such an effect. In the present Letter, we come back to this problem using the FN-DMC as a main approach, but crucially combined with two auxiliary methods: the released-node (RN) estimation technique and an analytical method able to enhance the quality of any given nodal surface. The combination of the above methods provides information on the bias due to the imposed nodal surface and a procedure which can evaluate and bring down, in principle to any arbitrary required precision, this influence. This complete program has been applied to the study of liquid <sup>3</sup>He, bringing, as we will show, the systematic error under control and down to levels below the current statistical errors. This has allowed a very accurate microscopic calculation of the equation of state of liquid <sup>3</sup>He, including a prediction for the negative-pressure region and the spinodal density.
DMC and GFMC calculations have provided up to now the best upper bounds to the ground-state energy of liquid <sup>3</sup>He . These calculations have unambiguously shown the relevance of the Feynman-Cohen-type backflow correlations in the improvement of the energy. However, the energy gain appears too small to recover the experimental data and, what is more conclusive, the density dependence of the pressure, which stresses the curvature of the equation of state, shows clear differences with experiment. A conclusion that naturally emerged from those results was that the nodal surface, originated by backflow correlations, is not accurate enough and probably new state-dependent correlations ought to be considered. Our present results prove that backflow correlations, when properly optimized, effectively do provide very accurate nodal surfaces.
In the DMC method, the Schrödinger equation written in imaginary time is translated into a diffusion-like differential equation which can be stochastically solved in an iterative procedure. Specific information on the implementation of the DMC method is given in Ref. . As far as the FN framework is concerned, the choice of the trial wave function $`\psi (\text{R})`$ used for importance sampling is a key point. The simplest model is the Jastrow-Slater wave function
$$\psi =\psi _\mathrm{J}D_{\uparrow }D_{\downarrow },$$
(1)
with $`\psi _\mathrm{J}=\prod _{i<j}f(r_{ij})`$ a Jastrow wave function and $`D_{\uparrow }`$ ($`D_{\downarrow }`$) a Slater determinant of the spin-up (spin-down) atoms with single-particle orbitals $`\phi _{\alpha _i}(𝐫_j)=\mathrm{exp}(i𝐤_{\alpha _i}𝐫_j)`$.
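A schematic sketch of how such a trial wave function is evaluated is given below; the McMillan two-body factor quoted later in the text is used, and the box size, particle number and units (lengths in $`\sigma `$) are illustrative choices of our own, not the paper's actual setup:

```python
# Sketch of the Jastrow-Slater trial wave function (1) for one spin species.
import numpy as np

def log_jastrow(R, b=1.15):
    """log psi_J with the McMillan factor f(r) = exp(-0.5*(b/r)^5)."""
    logpsi = 0.0
    for i in range(len(R)):
        for j in range(i + 1, len(R)):
            r = np.linalg.norm(R[i] - R[j])
            logpsi -= 0.5 * (b / r) ** 5   # work in logs to avoid underflow
    return logpsi

def slater(R, kvecs):
    """Determinant of plane-wave orbitals phi_a(r_j) = exp(i k_a . r_j)."""
    M = np.exp(1j * kvecs @ R.T)           # M[a, j] = phi_a(r_j)
    return np.linalg.det(M)

# 7 same-spin particles: the k = 0 orbital plus the first momentum shell
L = 5.0
ks = (2 * np.pi / L) * np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
                                 [0, 1, 0], [0, -1, 0],
                                 [0, 0, 1], [0, 0, -1]], float)
R = np.random.default_rng(0).uniform(0.0, L, size=(7, 3))
print("log psi_J =", log_jastrow(R), "  |D| =", abs(slater(R, ks)))
```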
In this variational description (1), the dynamical correlations induced by the interatomic potential are well modelled by the Jastrow factor, and the statistical correlations, implied by the antisymmetry, are introduced with a Slater determinant of plane waves, which is the exact wave function of the free Fermi sea. The two factors account well for the dynamical correlations and the Fermi statistics when these effects are independently considered, but their product is only a relatively poor approximation for a strongly correlated Fermi liquid. It is well known, from previous variational and GFMC/DMC calculations , that a significant improvement on the Jastrow-Slater model is achieved by introducing backflow correlations in $`\phi _{\alpha _i}(\text{r}_j)`$, a name taken from Feynman and Cohen’s famous work on the microscopic description of the phonon-roton spectrum in liquid <sup>4</sup>He .
At this point, it becomes essential to set up a method for analytically enhancing a given model. Such a procedure is already contained in the imaginary-time Schrödinger equation. Let us consider a time-dependent wave function $`\varphi (𝐑,t)`$, with $`\varphi (𝐑,t=0)=\psi (𝐑)`$ the initial guess for the trial wave function, satisfying
$$-\frac{\partial \varphi (𝐑,t)}{\partial t}=H\varphi (𝐑,t).$$
(2)
A natural choice for a more accurate trial wave function is obtained by solving Eq. (2) to first order in $`t`$. Near the nodes, which is the relevant region for our purposes, one readily captures the main correction to the original $`\psi _\mathrm{A}(𝐑)\equiv D_{\uparrow }D_{\downarrow }`$ in the form $`\varphi (𝐑,t)=\psi _\mathrm{J}(𝐑)\psi _\mathrm{A}(\stackrel{~}{𝐑})`$ with $`\stackrel{~}{𝐑}=𝐑(t)`$ and $`\mathrm{d}𝐑/\mathrm{d}t=D𝐅_\mathrm{J}(𝐑)`$, $`𝐅_\mathrm{J}(𝐑)`$ being the drift force coming from the Jastrow wave function $`\psi _\mathrm{J}(𝐑)`$ and $`D=\hbar ^2/(2m)`$. In this form, the new nodal surface is described by the original antisymmetric wave function but with arguments that are shifted due to the effect of dynamical correlations. It is worth noting that this approach generates the Feynman-Cohen backflow in $`\phi _{\alpha _i}(𝐫_j)`$ ($`\psi _\mathrm{A}^{\mathrm{BF}}(𝐑)`$) as a first-order correction to the plane-wave orbitals. Entering recursively with $`\psi _\mathrm{A}^{\mathrm{BF}}(𝐑)`$, the next-order correction can be obtained analytically (see Eq. (5)).
Finally, once a specific model for the nodal surface has been chosen, it is necessary to establish a method to test its quality. This can be accomplished by means of the released-node technique . In the RN approach a small boson component is allowed to mix into the wave function, with the primary effect of resetting the nodal surface to the exact position. This is technically accomplished by introducing a positive-definite guiding wave function $`\psi _\mathrm{g}(\text{R})`$ so that the walkers are not confined to a region of definite sign of $`\psi (\text{R})`$. The basic requirements on choosing $`\psi _\mathrm{g}(\text{R})`$ are twofold: proximity to $`|\psi (\text{R})|`$ away from the nodal surface, and being positive-definite at the nodes. The choice we have made is
$$\psi _\mathrm{g}(𝐑)=(\psi (𝐑)^2+a^2)^{1/2},$$
(3)
$`a`$ being a parameter which controls the crossing frequency. Other choices can be considered, but the specific details of $`\psi _\mathrm{g}(\text{R})`$ are not relevant since the RN energy is calculated by projecting out its antisymmetric component. The use of $`\psi _\mathrm{g}(\text{R})`$ does not introduce any systematic bias in the RN energies, which approach the exact eigenvalue when $`t_\mathrm{r}\rightarrow \infty `$ , $`t_\mathrm{r}`$ being the maximum allowed lifetime after the first crossing. However, the variance of the energy grows exponentially with $`t_\mathrm{r}`$ due to the boson component, and thus in general the asymptotic value cannot be obtained. In contrast, what is straightforwardly available is the slope of the energy versus $`t_\mathrm{r}`$ at $`t_\mathrm{r}=0`$, which provides a direct measure of the quality of the input nodal surface (the true antisymmetric ground-state wave function would generate a zero slope), and constitutes a means of comparing different trial wave functions. In particular, it provides feedback information on whether the next analytical correction to $`\psi _\mathrm{A}^{\mathrm{BF}}(𝐑)`$ is necessary.
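The role of $`\psi _\mathrm{g}`$ can be made concrete with a few lines of code (a sketch only; how walker weights and crossing times are bookkept in the actual calculation is not specified here):

```python
import numpy as np

def psi_g(psi_val, a):
    """Positive-definite guide of Eq. (3); a controls the crossing frequency."""
    return np.sqrt(psi_val ** 2 + a ** 2)

def rn_sign_weight(psi_val, a):
    """Sign-carrying factor psi/psi_g with which the antisymmetric
    component is projected out of the released-node ensemble."""
    return psi_val / psi_g(psi_val, a)

# The factor changes sign smoothly as a walker crosses a node:
print(rn_sign_weight(np.array([-0.2, -0.01, 0.01, 0.2]), a=0.05))
```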
We have applied all the above methodology to the study of normal liquid <sup>3</sup>He at zero temperature. The results reported have been obtained with $`N=66`$ particles, but we have made size checks using also $`N=54`$ and $`N=114`$. In Fermi systems, the kinetic energy includes statistical contributions that show an oscillating behavior with $`N`$. We have observed that this behavior follows very closely that of a discretized Fermi-gas energy, a fact that could be expected since such a term appears explicitly in the local kinetic energy. It is worth noticing that the case $`N=66`$ is especially well suited for MC calculations, as the correction amounts to only 0.015 K.
As in previous calculations, we use a short-ranged backflow in the form $`\phi _{\alpha _i}(𝐫_j)=\mathrm{exp}(i𝐤_{\alpha _i}\stackrel{~}{𝐫}_j^{\mathrm{BF}})`$, with
$$\stackrel{~}{𝐫}_j^{\mathrm{BF}}=𝐫_j+\lambda _\mathrm{B}\sum _{k\ne j}\eta (r_{jk})𝐫_{jk},$$
(4)
and $`\eta (r)=\mathrm{exp}(-((r-r_\mathrm{B})/\omega _\mathrm{B})^2)`$. The two-body correlation factor has been chosen of McMillan type, $`f(r)=\mathrm{exp}(-0.5(b/r)^5)`$, and the pairwise HFD-B(HE) Aziz potential , which has proved highly accurate in liquid <sup>4</sup>He calculations , has modelled the atomic interactions. At the experimental equilibrium density $`\rho _0^{\mathrm{expt}}=0.273\sigma ^{-3}`$ ($`\sigma =2.556`$ Å), we have started the calculation with $`b=1.15\sigma `$ and the backflow parameters optimized in Ref. ($`\lambda _\mathrm{B}=0.14`$, $`r_\mathrm{B}=0.74\sigma `$, $`\omega _\mathrm{B}=0.54\sigma `$). With this initial set of parameters, the results obtained are clearly biased by the trial wave function. Even though the RN approach corrects numerically for the shortcomings of $`\psi (𝐑)`$ and, in some applications, allows for an exact estimation of the eigenvalue, this is not the case for liquid <sup>3</sup>He. In Fig. 1, the RN energies as a function of the released times are shown for the cases $`\lambda _\mathrm{B}=0`$ and $`\lambda _\mathrm{B}=0.14`$. As one can see, at small imaginary times the RN method reveals the presence of corrections to the FN energies, but a common asymptotic regime is far beyond the scope of the available MC data.
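For concreteness, a sketch of the backflow transformation (4) with the Gaussian $`\eta (r)`$ and the parameter set quoted above (lengths in units of $`\sigma `$; the sign convention for $`𝐫_{jk}`$ is our assumption):

```python
import numpy as np

LAM_B, R_B, W_B = 0.14, 0.74, 0.54        # backflow parameters from the text

def eta(r):
    return np.exp(-((r - R_B) / W_B) ** 2)

def backflow(R, lam=LAM_B):
    """Backflow-shifted coordinates of Eq. (4)."""
    Rbf = R.copy()
    for j in range(len(R)):
        for k in range(len(R)):
            if k != j:
                rjk = R[j] - R[k]          # assumed to point from k to j
                Rbf[j] += lam * eta(np.linalg.norm(rjk)) * rjk
    return Rbf

R = np.random.default_rng(1).uniform(0.0, 5.0, size=(10, 3))
print("max backflow displacement:", np.max(np.abs(backflow(R) - R)))
```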
The next step was then to look for the next-order correction to backflow correlations, as well as for a possibly better set of backflow parameters. We have found that the ones we were using correspond to a local minimum of the FN energy, and that a narrower but deeper minimum exists with $`\lambda _\mathrm{B}=0.35`$ and $`r_\mathrm{B}`$ and $`\omega _\mathrm{B}`$ unchanged. The resulting energy versus released time is also plotted in Fig. 1. The ratio of initial slopes, $`1:0.27:0.016`$ for $`\lambda _\mathrm{B}=0,0.14,0.35`$, provides information on the accuracy of $`\psi (𝐑)`$. In the optimal case, $`\lambda _\mathrm{B}=0.35`$, the slope is practically nonexistent and the energy correction would be $`0.01`$ K if the asymptotic regime could be reached. In order to get additional evidence on the size of this correction, and as a final check on the reliability of our results, we have included corrections to the backflow trial wave function using the analytical method previously described. It can be shown that these new terms incorporate explicit three-body correlations in $`\phi _{\alpha _i}(\stackrel{~}{𝐫}_j)`$ of the form
$$\stackrel{~}{𝐫}_j^{\mathrm{BFT}}=\stackrel{~}{𝐫}_j^{\mathrm{BF}}+\lambda _{\mathrm{BT}}\sum _{k\ne j}\eta (r_{jk})\left(\mathbf{G}_j-\mathbf{G}_k\right),$$
(5)
with $`\mathbf{G}_i=\sum _{l\ne i}\eta (r_{il})𝐫_{il}`$. We have carried out a FN-DMC calculation with this new trial wave function at $`\rho _0^{\mathrm{expt}}`$ and the resulting energy correction has been found to be $`<0.01`$ K. Both this analytical check and the numerical findings provided by the RN method point to the excellent description of the nodal surface in liquid <sup>3</sup>He provided by backflow correlations.
The FN energies with $`\lambda _\mathrm{B}=0`$ (no backflow), $`\lambda _\mathrm{B}=0.14`$, and $`\lambda _\mathrm{B}=0.35`$ are reported in Table I, together with the corresponding kinetic energy, obtained as the difference between the total energy and a pure estimate of the potential energy. The comparison with the experimental energy , also contained in the table, shows the successive improvement of the FN-DMC result, culminating in excellent agreement for $`\lambda _\mathrm{B}=0.35`$. Concerning the kinetic energy, a sizeable difference between theory and experiment survives, a fact that has generally been attributed to long-range wings in the high-$`q`$ inelastic response that are difficult to incorporate effectively in the experimental analysis.
The FN-DMC calculation has been extended to a wide range of densities, from the spinodal point up to a maximum value $`\rho =0.403\sigma ^{-3}`$, located near the experimental freezing density $`\rho _\mathrm{f}^{\mathrm{expt}}=0.394\sigma ^{-3}`$. We have used $`N=114`$ only at the highest density; below that, $`N=66`$ has proved to be accurate enough. Among the three variational parameters entering the backflow wave function (4), only $`\lambda _\mathrm{B}`$ shows a density dependence, which is nearly linear in the range studied ($`\lambda _\mathrm{B}=0.42`$ at $`\rho =0.403\sigma ^{-3}`$). The results are displayed in Fig. 2 in comparison with the experimental data of Ref. . The solid line in the same figure is a third-degree polynomial fit to our data with $`\chi ^2/\nu =1.2`$. According to this fit, the equilibrium density is $`\rho _0=0.274(1)\sigma ^{-3}`$ and the energy at this density $`(E/N)_0=-2.464(7)`$ K, in close agreement with experimental data.
The quality of the equation of state is even more evident in its derivatives. In Fig. 3, the behavior of the pressure and the sound velocity with density is shown in comparison with experimental data from Refs. . The theoretical prediction for both quantities, derived from the polynomial fit to $`E/N(\rho )`$ (Fig. 2), again shows excellent agreement with the experimental data from equilibrium up to freezing. The sound velocity, which at $`\rho _0^{\mathrm{expt}}`$ is $`c=182.2(6)`$ m$`/`$sec, in close agreement with the experimental value $`c^{\mathrm{expt}}=182.9`$ m$`/`$sec , goes down to zero at the spinodal point. The location of this point has previously been obtained both from extrapolation of experimental data at positive pressures and from density-functional theories . The present microscopic calculation allows for an accurate determination, free from extrapolation uncertainties, that locates the spinodal point at a density $`\rho _\mathrm{s}=0.202(2)\sigma ^{-3}`$, corresponding to a negative pressure $`P_\mathrm{s}=-3.09(20)`$ atm, much closer to equilibrium than in liquid <sup>4</sup>He, where $`P_\mathrm{s}=-9.30(15)`$ atm .
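The chain from the fitted $`E/N(\rho )`$ to the pressure, sound velocity and spinodal point is purely mechanical; the sketch below uses a toy cubic with made-up coefficients (the published fit parameters are not reproduced here) and the standard zero-temperature relations $`P=\rho ^2\mathrm{d}(E/N)/\mathrm{d}\rho `$ and $`mc^2=\mathrm{d}P/\mathrm{d}\rho `$:

```python
import numpy as np

rho0, e0, A, B = 0.274, -2.46, 2.0, 1.0   # toy numbers, NOT the published fit

def e(rho):
    """Toy cubic E/N in K, expanded in x = (rho - rho0)/rho0."""
    x = (rho - rho0) / rho0
    return e0 + A * x ** 2 + B * x ** 3

rho = np.linspace(0.16, 0.42, 4000)
P = rho ** 2 * np.gradient(e(rho), rho)   # T = 0 pressure
dPdrho = np.gradient(P, rho)              # proportional to c^2; c = 0 at spinodal

mask = rho < rho0                          # spinodal lies below equilibrium
print("equilibrium density ~", rho[np.argmin(e(rho))])
print("spinodal density    ~", rho[mask][np.argmin(np.abs(dPdrho[mask]))])
```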
In conclusion, we have analyzed the possibilities of the diffusion Monte Carlo method in the study of Fermi systems. The FN-DMC method, combined with the RN technique to evaluate the effect of the nodal-surface model used in the trial function, and with a systematic method to analytically improve the nodal surface, constitutes a closed loop able to improve the quality of the antisymmetric wave function up to a required precision. The fact that this general approach has allowed us to deal with a strongly interacting Fermi liquid suggests that it could also be useful for tackling other Fermi systems. We have applied this methodology to the study of liquid <sup>3</sup>He. The effect of corrections beyond the backflow terms has been evaluated and found to be less than 0.01 K. The equation of state so obtained presents an accuracy comparable to the result obtained in bosonic liquid <sup>4</sup>He. The precision demonstrated by the method allows for subsequent studies such as the characterization of spin-polarized liquid <sup>3</sup>He. Preliminary calculations for the fully-polarized phase at $`\rho _0^{\mathrm{expt}}`$ indicate a less bound system, with an energy $`E/N=-2.22(4)`$ K.
This research has been partially supported by DGES (Spain) Grant No. PB96-0170-C03-02. We also acknowledge the supercomputer facilities provided by the CEPBA.
# Parent Stars of Extrasolar Planets V: HD 75289
## 1 Introduction
In our continuing series on the parent stars of extrasolar planets (Gonzalez 1997, Paper I; Gonzalez 1998, Paper II; Gonzalez & Vanture 1998, Paper III; and Gonzalez et al. 1999, Paper IV), we have reported on the results of our spectroscopic analyses of these stars. Other similar studies include Fuhrmann et al. (1997, 1998) and Sadakane et al. (1999). The most significant finding so far has been the high mean metallicity of these stars, as a group, compared to the metallicity distribution of nearby solar-type stars. Additional extrasolar planet candidates continue to be announced by planet hunting groups using the Doppler method. Herein, we report on a Local Thermodynamic Equilibrium (LTE) abundance analysis of HD 75289, which was announced on 1 February 1999 (Udry et al. 1999) to harbor a low-mass object with a 3.5 day nearly-circular orbit.
In addition to the new candidate listed above, we also present new analyses of the spectra of $`\upsilon `$ And and $`\tau `$ Boo, which had been the subject of Paper I. The basic stellar parameters and abundances of these two stars were not well-determined in that study, due to their relatively broad lines, which resulted in a short linelist. We will improve on that study by adding several Fe I,II lines carefully chosen to better constrain the solutions. Also, we report on extended spectroscopic analyses of the following parent stars: HD 187123, HD 210277, 14 Her, and $`\rho ^1`$ Cnc. These stars were discussed in Papers III and IV, but we had not performed a general abundance analysis from their spectra (note, we had included $`\rho ^1`$ Cnc in Paper II, but that study was superseded by Paper III). We close with a summary of the abundance patterns among stars with planets and compare them with those of nearby F and G dwarfs without (known) planets.
## 2 Observations
High-resolution, high S/N ratio spectra were obtained with the 2dcoude echelle spectrograph (described in Tull et al. 1995) at the McDonald Observatory 2.7 m telescope. This is the same instrument employed in Papers I to III (in Paper IV, we analyzed spectra obtained by G. Marcy with HIRES on the Keck I). The spectral resolving power (determined from the FWHM of the Th-Ar lines in the comparison lamp spectrum) is about 65,000, and the S/N ratio is about 450-500. The spectral coverage ranges from 3700 to 10,000 Å, with gaps between orders beyond about 5500 Å. The data reduction methods are the same as those employed in Papers I to III. Two spectra of HD 75289 were obtained, for a total exposure time of 20 minutes. Spectra of a hot star with a high $`v\mathrm{sin}i`$ value were also obtained in order to divide out telluric lines (with the IRAF program, telluric).
We derived a heliocentric radial velocity of $`+9.9\pm 0.5`$ km s<sup>-1</sup> (formal error) from our spectra of HD 75289 obtained on HJD $`=2451215.784`$; this estimate is based on four clean Fe I lines with laboratory wavelengths adopted from Gratton et al.’s (1989) study. Note that while we did not observe a radial velocity standard, the stability of the 2dcoude spectrograph should result in systematic velocity errors of no more than about 0.5 km s<sup>-1</sup>. Our velocity estimate differs significantly from Gratton et al.’s radial velocity of $`+1`$ km s<sup>-1</sup>, which is the mean of six observations they made over a two week period. Based on a comparison with other published velocity estimates for this star, they concluded that HD 75289 is a radial velocity variable (with an amplitude of a few km s<sup>-1</sup>). However, Udry et al. report a systemic velocity of 9.26 km s<sup>-1</sup>, which is consistent with our single velocity estimate. In addition, they obtain a very good fit with a simple Keplerian model, implying that the velocity is not variable at the few km s<sup>-1</sup> level.
## 3 Analysis
### 3.1 Spectroscopic analysis
The present method of analysis is the same as that employed in Paper III for $`\rho ^1`$ Cnc. Briefly, it makes use of the line analysis code, MOOG (Sneden 1973, updated version), the Kurucz (1993) LTE plane parallel model atmospheres, and Fe I, II equivalent width (EW) measurements to determine the following atmospheric parameters: T<sub>eff</sub>, $`\mathrm{log}g`$, $`\xi _\mathrm{t}`$, and \[Fe/H\], where the symbols have their usual meanings. The Fe linelist was put together from Table 1 of Paper I, Table 2 of Paper II, and Table 1 of Paper IV. Lines of elements other than Fe were selected from Table 1 of Paper I and Table 2 of Paper II. We have also added additional lines to both lists. Their $`gf`$-values were calculated from an inverted solar analysis using the Kurucz et al. (1984) Solar Flux Atlas or our spectrum of Vesta. The final linelist, along with the EW values, is listed in Tables 1 and 2. The number of lines employed for each star varies somewhat due to the variations in intrinsic line width and temperature from one star to another; for example, low excitation lines are weaker in hot star spectra, and weak lines are more difficult to measure in spectra with relatively high $`v\mathrm{sin}i`$, i.e., $`\upsilon `$ And and $`\tau `$ Boo, and in cool stars with strong-lined spectra due to crowding, i.e., 14 Her and $`\rho ^1`$ Cnc.
The abundances of Li and Al were determined via comparison of synthesized spectra with the observed spectra. The method employed to determine the Li abundance is described in our previous papers. Our estimate of the Al abundance is based on the Al I pair at 6696 and 6698 Å; they are sufficiently close to the Li I line at 6708 Å that we were able to determine the Li and Al abundances with the same synthesized spectral region. This is a change from our previous studies, where we had relied primarily on the 7835 and 8772 Å pairs. Unfortunately, the spectrograph setup was such that the 6696 and 6698 Å Al pair fell just outside the order containing the Li I line in our spectrum of HD 75289, so we do not quote an Al abundance for this star.
The results of the analyses are presented in Tables 2 to 5. The calculations of the uncertainties and the contribution from systematic errors are the same as those discussed in Paper III; systematic errors should be negligible in the present study, since these stars are similar to the Sun. We argued in Paper III that our LTE analysis of $`\rho ^1`$ Cnc, which is about 500 K cooler than the Sun, probably does not suffer from significant systematic errors. As in Paper III, we refrain from quoting formal uncertainties in \[Fe/H\] less than 0.05 dex.
### 3.2 Derived parameters
We have determined the masses and ages in the same way as in Paper II. Using the Hipparcos parallaxes (ESA 1997) and the stellar evolutionary isochrones of Schaller et al. (1992) and Schaerer et al. (1993), along with our spectroscopic $`T_{\mathrm{eff}}`$ estimates, we have estimated the masses and ages for $`\upsilon `$ And, $`\tau `$ Boo, and HD 75289. Due to the large parallaxes and hence small distances, neither the Lutz & Kelker (1973) nor extinction corrections were applied. We list the results in Table 4. The corresponding theoretical surface gravities are: $`\mathrm{log}g=4.10\pm 0.04`$, $`4.25\pm 0.03`$, and $`4.33\pm 0.02`$ for $`\upsilon `$ And, $`\tau `$ Boo, and HD 75289, respectively. The close agreement between the observed and theoretical surface gravities for all three stars supports the assumptions that went into the calculation of the stellar evolutionary isochrones and the LTE abundance analyses.
## 4 Discussion
### 4.1 $`\upsilon `$ And and $`\tau `$ Boo
The present spectroscopic analyses of $`\upsilon `$ And and $`\tau `$ Boo are a significant improvement over those reported in Paper I, as evidenced both by the reduction in the uncertainties of the derived physical parameters and by the closer agreement with the spectroscopic analyses of Fuhrmann et al. (1998) and with photometrically-derived parameters. While our new $`T_{\mathrm{eff}}`$ estimates are significantly smaller than those in Paper I (by 110 K for $`\upsilon `$ And and by 180 K for $`\tau `$ Boo), the \[Fe/H\] values are similar; this is due to the fact that we derived a complete new set of atmosphere parameters for each star, not just a new $`T_{\mathrm{eff}}`$. As a result, the basic conclusions of Gonzalez (1999a), which is a study of the chemo-dynamical properties of stars-with-planets, are not altered. In particular, the conclusion that $`\tau `$ Boo possesses an anomalously high \[Fe/H\] value for its age and Galactocentric distance still holds. Finally, we note that the value of $`\xi _\mathrm{t}`$ we obtain for $`\tau `$ Boo is unusually small compared to the other hot stars we analyzed; its $`\xi _\mathrm{t}`$ value should be larger, given its higher $`T_{\mathrm{eff}}`$ value.
### 4.2 HD 75289
The Bright Star Catalog (Hoffleit 1982) designates HD 75289 as G0Ia-0:, which is clearly incorrect. Gratton et al. included HD 75289 in their spectroscopic abundance study of G and K supergiants (their work confirmed that HD 75289 was in fact a metal-rich dwarf and not a supergiant). Of the EW measurements reported in their paper and ours, there are 15 spectral lines in common, with an average difference of only $`+1.5`$ mÅ between them. They went on to derive the atmospheric parameters based on a total of 35 Fe I and 5 Fe II lines, and obtained the following results: $`T_{\mathrm{eff}}=6000`$ K, $`\mathrm{log}g=3.8`$, $`\xi _\mathrm{t}=1.3`$ km s<sup>-1</sup>, and \[A/H\] $`=0.2`$. We note that the present work, which uses a similar method of analysis and a larger number of Fe lines to better constrain these same stellar parameters, is in close agreement with their results. The only exception to this statement is $`\mathrm{log}g`$, where our derived value of 4.47 differs considerably. The stellar evolutionary $`\mathrm{log}g`$ value tends to support our estimate.
Gonzalez (1999a) compared the \[Fe/H\] estimates of the parent stars to the mean trends of \[Fe/H\] with age and mean Galactocentric distance, $`R_\mathrm{m}`$, among field stars. Among the young stars (age $`\le 2`$ Gyr), not only is $`\tau `$ Boo too metal-rich for its value of $`R_\mathrm{m}`$, but so is HD 75289. They are both metal-rich relative to the typical field star of the same $`R_\mathrm{m}`$ by $`+0.26`$ dex.
Henry et al. (1996) reported a $`\mathrm{log}R_{\mathrm{HK}}^{\prime }`$ value of $`-5.00`$ for HD 75289 from a single measurement; we confirm the low chromospheric activity level of this star from examination of the Ca II H and K lines in our spectra. This measure places it among the low-activity stars of the roughly 800 stars observed by them. Employing the activity-age relation of Donahue (1993), as reported in Henry et al. (1996), we derive an age of 5.6 Gyr, nearly a factor of three greater than the age derived from its position on the HR diagram. None of the other parent stars displays such a large activity age relative to the evolutionary age (70 Vir has an evolutionary age nearly four times its activity age, but this discrepancy is likely due to its more evolved state compared to the other parent stars). Further, Udry et al. reported a $`v\mathrm{sin}i`$ value of 4.37 km s<sup>-1</sup>, about half the $`v\mathrm{sin}i`$ value of $`\upsilon `$ And, itself 3.5 Gyr old according to its location on the HR diagram.
Hence, according to its chromospheric activity level and rotation, HD 75289 is older than the Sun, while it is younger according to stellar evolution. A possible way out of this dilemma may be to invoke a phenomenon that spun down HD 75289 faster than is typical for stars of its spectral type.
### 4.3 Abundance Trends
To search for subtle abundance anomalies among the star-with-planets sample, we will compare our results to high-quality abundance analyses of the general field population. The best sources of data on abundances of field stars are Edvardsson et al. (1993), Tomkin et al. (1997), Feltzing & Gustafsson (1998), and Gustafsson et al. (1999). All four studies are based on the Uppsala Astronomical Observatory group analysis techniques, and, hence, should be consistent with each other. In addition to these, we will make use of several studies of Li abundances among field and open cluster stars. In the following, for most elements, we will compare abundances relative to Fe (as \[X/Fe\]), since such a quantity is less sensitive to systematic differences among various studies.
Among the elements measured in the stars-with-planets sample, Li has the potential to give us the greatest insight into the process of planet formation. Its abundance in a stellar atmosphere is affected by a number of physical processes, some of which are possibly related to the presence of planets (see discussion in Paper II). Among the stellar parameters found to correlate with Li abundance are $`T_{\mathrm{eff}}`$, age, and metallicity (Pasquini, Liu, & Pallavicini 1994). What’s more, the Li abundance on the surface of an F star might be enhanced as a result of the accretion of rocky material (see Alexander 1967 and Paper II). Following Pasquini et al., we have derived an equation relating $`\mathrm{log}ϵ(Li)`$ to $`T_{\mathrm{eff}}`$, the chromospheric emission measure ($`R_{\mathrm{HK}}^{\prime }`$), and \[Fe/H\]:
$$\mathrm{log}ϵ(Li)=-80.246+0.806[Fe/H]+0.431\mathrm{log}R_{\mathrm{HK}}^{\prime }+22.436\mathrm{log}T_{\mathrm{eff}}$$
(1)
The stars used to calibrate this equation are from Pasquini et al., Favata et al. (1997), and Randich et al. (1999), with the $`R_{\mathrm{HK}}^{\prime }`$ values from Henry et al. (1996). The ranges of applicability of the parameters are: $`-0.61\le `$ \[Fe/H\] $`\le +0.22`$, $`5458\le T_{\mathrm{eff}}\le 6180`$ K, $`-5.24\le \mathrm{log}R_{\mathrm{HK}}^{\prime }\le -4.34`$, and $`+0.83\le \mathrm{log}ϵ(Li)\le +2.92`$. Note, some of our stars are outside the metallicity range of equation 1. All the stars-with-planets but one have observed Li abundances less than the values calculated from equation 1 (Figure 2). The largest deviation on this plot is $`\tau `$ Boo; there are two points to note about it: its $`T_{\mathrm{eff}}`$ value is beyond the range for which equation 1 is calibrated, and it is within the so-called ”Li dip” seen among open cluster stars (Balachandran 1995). It is also important to note that many stars of the same temperature, age, and metallicity range as those used to determine equation 1 do not have detectable Li; an example of this is the large spread in Li among single stars of the same colors in M67 (Jones, Fischer, & Soderblom 1999). The Li abundance of HD 75289 is not unusual compared to the field star sample, but it might be slightly high with respect to its evolutionary age.
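In code, equation (1) and its stated domain of validity read as follows (a direct transcription; the warnings are our own addition, to flag the extrapolations the text mentions):

```python
import math

def log_eps_li(feh, log_rhk, teff):
    """Expected Li abundance from Eq. (1)."""
    if not -0.61 <= feh <= 0.22:
        print("warning: [Fe/H] outside calibration range")
    if not -5.24 <= log_rhk <= -4.34:
        print("warning: log R'_HK outside calibration range")
    if not 5458 <= teff <= 6180:
        print("warning: T_eff outside calibration range")
    return -80.246 + 0.806 * feh + 0.431 * log_rhk + 22.436 * math.log10(teff)

print(round(log_eps_li(0.0, -4.9, 5800), 2))   # a solar-like test point
```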
The study of Gustafsson et al. is probably the most accurate study of C abundances among F and G disk stars to date. They employed the \[C I\] line at 8727 Å. The \[C/Fe\] values display remarkably small scatter about a mean trend with respect to \[Fe/H\] (see Figure 4 of Gustafsson et al.). In Figure 1 we present the \[C/Fe\] estimates from Gustafsson et al. and Tomkin et al. (who did not employ the \[C I\] line) for field stars as well as the stars-with-planets. A small trend of \[C/Fe\] with Galactocentric distance has been removed from the individual data points (amounting to $`0.015`$ dex per kpc). Some of the Tomkin et al. stars and all but one of the stars-with-planets fall below the mean trend line; $`\tau `$ Boo displays the largest negative deviation. The \[C/Fe\] estimate for HD 217107 is from Sadakane et al.’s two measurements: the \[C I\] and 5380 Å lines; the estimate for 51 Peg is the average from Paper II and Tomkin et al. It is always possible that there is a systematic offset between our \[C/Fe\] estimates and those of Gustafsson et al. due to the different lines used, but it is not likely to be significant since both studies are differential relative to the Sun. The deviation of the Sun’s \[C/Fe\] value from the mean trend in Figure 1 is also notable; while it may not seem like a large difference, the error bars on the data point corresponding to the Sun are effectively zero, since Gustafsson et al.’s study is differential with respect to the Sun (for additional discussion on this point see Gustafsson et al. and Gonzalez 1999b).
Feltzing & Gustafsson examined abundance trends (as \[X/Fe\] versus \[Fe/H\]) among metal-rich disk stars. For most elements, there are no significant deviations from the solar ratios, but they did find a significant upturn in \[Na/Fe\] for stars with \[Fe/H\] $`>0.00`$, reaching \[Na/Fe\] $`0.20`$ for the most metal-rich stars. Among the stars-with-planets sample, the mean \[Na/Fe\] value is $`0.02`$; $`\rho ^1`$ Cnc, HD 75289, and HD 210277 have the smallest \[Na/Fe\]. They used the same two Na I lines we employed in our study. The mean values of \[X/Fe\] for the other elements among our sample stars do not appear to differ significantly from the trends seen among disk stars.
The most obvious abundance trend among the stars-with-planets studied so far is their high mean metallicity compared to the general field population. Our estimate for the \[Fe/H\] value of HD 75289, $`+0.28`$, is close to the mean of the so-called ”hot Jupiter” systems. Another recently announced system, HD 217107, was studied spectroscopically by Randich et al. and Sadakane et al., who obtained \[Fe/H\] $`=+0.30`$ and $`+0.31`$, respectively.
## 5 Conclusions
The results of our analysis of high-resolution spectra of HD 75289 confirm that it is a metal-rich star, with \[Fe/H\] $`=+0.28`$. Its evolutionary age, 2.1 Gyr, is much less than the age derived from its chromospheric emission measure, $`R_{\mathrm{HK}}^{\prime }`$.
Compared, as a group, to nearby F and G dwarfs, the stars-with-planets sample displays the following peculiarities:
* The latest additions to this group, HD 75289 and HD 217107, continue the trend, first noted in Paper I, that stars-with-planets are metal-rich relative to the nearby field star population.
* The stars, $`\tau `$ Boo, $`\rho ^1`$ Cnc, 14 Her, HD 75289, and HD 217107, are much more metal-rich than F and G dwarfs of similar ages and mean Galactocentric distances.
* Compared to field stars with detectable Li, stars-with-planets tend to have smaller Li abundances when corrected for differences in $`T_{\mathrm{eff}}`$, \[Fe/H\], and $`R_{\mathrm{HK}}^{\prime }`$.
* The \[Na/Fe\] and \[C/Fe\] values of stars-with-planets are, on average, smaller than the corresponding quantities among field stars of the same \[Fe/H\].
In summary, while the numbers are still small, the data on stars-with-planets are beginning to indicate ways in which they differ from the general field star population. These abundance anomalies might be useful in constraining future searches for extrasolar planets, and they will be very helpful in theoretical studies of planet formation.
The authors are grateful to David Lambert for obtaining spectra of HD 75289 at our request. Thanks also go to Robert Kurucz for his model atmospheres and Chris Sneden for use of his code, MOOG. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. The research has been supported in part by the Kennilworth Fund of the New York Community Trust.
# Oscillations of Cumulant Moments - Universality of Amplitudes
## Abstract
We demonstrate on simple examples that the oscillatory behaviour of moments of multiplicity distributions $`P(n)`$ observed in $`e^+e^{-}`$ annihilations, in hadronic $`pp`$ collisions and in collisions on nuclei, $`p+A`$, is to a large extent caused by the experimental artifact of measuring only a limited range of $`P(n)`$. In particular we show that by applying a suitable universal cut-off procedure to the measured $`P(n)`$ one gets, for the reactions mentioned before, oscillations of similar magnitude. The location of zeros of oscillations as a function of the rank of moments and their shapes remain, however, distinctively different for different types of reactions considered. This applies to some extent also to collisions of nuclei, which otherwise follow their own pattern of behaviour.
PACS numbers: 13.65.+i 13.65.-7 24.60.-k
The problem of the possible physical origin and information content of oscillations in the cumulant moments of the corresponding multiplicity distributions $`P(n)`$ started with the QCD calculation of the respective generating functional. It turned out that the resulting cumulant moments oscillate as a function of their rank in a way depending on the QCD parameters used . This finding was confirmed by the analysis of $`e^+e^{-}`$ and hadronic $`pp`$ data, which showed that, indeed, the $`q`$-th rank normalized cumulant moment of observed negatively charged multiplicity distributions oscillates irregularly around zero with increasing $`q`$ (the minimum points being located around $`q\simeq 5`$). There was therefore a hope that analysis of these oscillations could provide a crucial test of QCD .
These expectations were, however, soon confronted with observations that the same data can be equally well described by more phenomenological methods based on the solutions of stochastic processes , for example by the negative binomial distribution (NBD) (in its truncated version) or by the modified negative binomial distribution (MNBD) . The interesting finding was that the NBD and MNBD differ distinctively in this context in the following sense: for untruncated multiplicity distributions the $`q`$-th rank normalized cumulant moment of the NBD is always positive and decreases monotonically with increasing $`q`$, whereas for the MNBD it can oscillate in a way depending on the choice of parameters. Therefore, in this approach, the behavior of cumulant moments obtained from experimental data seemed to provide a new constraint on models of multiplicity distributions. In particular it was shown in that the cumulant moments of negatively charged particles in $`e^+e^{-}`$ collisions can be described by both the truncated NBD and the (truncated or not) MNBD (which performs better in the $`e^+e^{-}`$ case).
The observation that truncation of the NBD makes the corresponding moments oscillate confirmed the statement made before in Ref. . It was said there that an important (if not the exclusive) factor leading to oscillations of moments is the experimental fact of the necessary truncation of the observed $`P(n)`$ at some maximal multiplicity $`n_{max}`$. The observed differences between results from different reactions therefore seem to reflect only the level of this truncation.
Cumulant moments obtained from the $`hA`$ experimental data also oscillate, with a magnitude which is much bigger than that observed in $`hh`$ collisions . The specific feature of these oscillations is that they can be attributed not only to truncation of $`P(n)`$ but also to the fact that the number of elementary collisions in the $`hA`$ reaction is necessarily limited by the geometry of the collision . This is, in fact, a kind of truncation as well, but this time it is caused by the geometry of the collision rather than by the experimental setup. This new geometrical factor should therefore be even more important in heavy ion collisions .
The above discussion clearly shows that the information content of the oscillation phenomenon remains unclear and subject to debate. The aim of our note is thus to shed new light on this problem by discussing a couple of simple but illustrative numerical examples of oscillations in $`e^+e^{-}`$ annihilations, hadronic $`pp`$ collisions and collisions involving nuclei, $`pA`$. We shall not attempt here a fit to experimental data because this was done already in the relevant works quoted here. Our intention was rather to use the existing experience on this subject (especially that contained in ) in order to demonstrate a possible universality existing in the $`e^+e^{-}`$, $`pp`$ and $`pA`$ data on oscillations of moments. We shall also address, albeit only briefly, heavy ion collisions, $`AB`$, in this context.
We shall use, as is usually done, the following moments of the multiplicity distribution $`P(n)`$ (cf. ):
$$H_q=\frac{K_q}{F_q}$$
(1)
where
$$K_q=\frac{k_q}{\langle n\rangle ^q},\qquad F_q=\frac{f_q}{\langle n\rangle ^q}$$
(2)
with $`k_q`$ and $`f_q`$ being the usual cumulant and factorial moments of rank $`q`$ of $`P(n)`$. What is observed experimentally is that they oscillate and that these oscillations differ substantially depending on the type of reaction, and do so in two ways:
* their amplitudes vary, increasing from the value of $`10^{-3}`$ for $`e^+e^{-}`$ annihilations, via the value of $`10^{-2}`$ for $`pp`$ collisions, up to the value of $`10^{-1}`$ for $`pA`$ and heavy ion ($`AB`$) reactions;
* their shapes are different, with the frequency of oscillations in $`q`$ being highest for $`e^+e^{-}`$ reactions.
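For reference, a sketch of how the $`H_q`$ of Eqs. (1)-(2) can be computed from a measured $`P(n)`$; the relation between factorial moments and factorial cumulants used here is the standard recursion $`k_q=f_q-\sum _{m=1}^{q-1}\binom{q-1}{m}k_{q-m}f_m`$:

```python
import numpy as np
from math import comb

def h_moments(P, qmax=16):
    """H_q = K_q/F_q = k_q/f_q for q = 2..qmax (the <n>^q factors cancel)."""
    P = np.asarray(P, float)
    P = P / P.sum()
    n = np.arange(len(P), dtype=float)
    f = [1.0]                                # f_0 = 1
    for q in range(1, qmax + 1):
        falling = np.ones_like(n)
        for i in range(q):
            falling *= n - i                 # n(n-1)...(n-q+1)
        f.append(float(np.sum(falling * P)))
    k = [0.0, f[1]]                          # k_1 = <n>
    for q in range(2, qmax + 1):
        k.append(f[q] - sum(comb(q - 1, m) * k[q - m] * f[m]
                            for m in range(1, q)))
    # note: high ranks suffer numerical cancellation; keep <n>, qmax moderate
    return np.array([k[q] / f[q] for q in range(2, qmax + 1)])
```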
As was said before, the main cause of these oscillations is supposed to be the experimental truncation of the corresponding $`P(n)`$. In order to compare the results of such truncation for different reactions we propose to use a universal variable $`u`$ defined in the following way:
$$u=\frac{n_{max}-\langle n\rangle }{\sigma _n}.$$
(3)
This variable measures the distance of the cut-off point, $`n_{max}`$, from the (process-specific) mean multiplicity $`\langle n\rangle `$. It does this in terms of the standard deviation $`\sigma _n`$ (which is obtained from the same $`P(n)`$). In this way it allows one to compare results from different reactions by providing a kind of natural and universal measure for terminating the $`P(n)`$ under consideration at some value of $`n_{max}`$.
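Applying the cut-off (3) to a given distribution is then straightforward (whether one renormalizes $`P(n)`$ after the cut is an implementation detail not specified here; we do so in this sketch):

```python
import numpy as np

def truncate_at_u(P, u):
    """Cut P(n) at n_max = <n> + u*sigma_n, computed from the full P(n)."""
    P = np.asarray(P, float) / np.sum(P)
    n = np.arange(len(P))
    mean = float(np.sum(n * P))
    sigma = float(np.sqrt(np.sum((n - mean) ** 2 * P)))
    n_max = int(round(mean + u * sigma))
    Pt = P[: n_max + 1].copy()
    return Pt / Pt.sum()                     # renormalize after the cut
```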
In Fig. 1 we show examples of $`H_q`$ moments calculated for $`e^+e^{-}`$, $`pp`$ and $`pA`$ reactions for two different choices of the value of the cut-off parameter $`u`$. In each case we have used identical multiplicity distributions (and all other relevant parameters) as those used in Refs. when describing the same reactions. They were then cut off, for each reaction considered, at the same value of the variable $`u`$ defined in eq. (3), and from them the corresponding moments $`H_q`$ were calculated. As can be seen, cutting off multiplicity distributions $`P(n)`$ (calculated for different reactions) at the same values of $`u`$ results in comparable values of the amplitudes of the observed oscillations. Although they are still not identical, the previously mentioned differences in amplitudes are enormously reduced, being now of the same order of magnitude. This feature apparently does not depend on the actual value of the variable $`u`$ used (although changes of $`u`$ affect the shape of the oscillations). It proves therefore that the increase of the amplitudes of oscillations observed between $`e^+e^{-}`$ and $`pA`$ reactions is caused mainly by the different experimental cut-off procedures (quantified here by the different values of the variable $`u`$ used in the respective processes) applied to the measured $`P(n)`$. In $`e^+e^{-}`$ processes, with the smallest amplitudes of oscillations, the $`P(n)`$ were measured most accurately, up to very high multiplicities (i.e., to large values of the ratio $`z=n/\langle n\rangle `$). The opposite situation is encountered in $`pA`$ processes. This is the main result of our note.
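Reusing the two helpers sketched above, a truncated NBD (with parameters of our own choosing, purely for illustration) already shows the effect qualitatively:

```python
import numpy as np
from scipy.stats import nbinom

k, mean = 4.0, 10.0                        # illustrative NBD parameters
P = nbinom.pmf(np.arange(200), k, k / (k + mean))

print("untruncated: min H_q =", h_moments(P).min())   # all positive
for u in (8.0, 5.0):
    Hq = h_moments(truncate_at_u(P, u))
    print(f"u={u}: sign changes in H_q:",
          int(np.sum(np.diff(np.sign(Hq)) != 0)))
```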
This kind of universality (even if only approximate) makes the sizes of the amplitudes of oscillations not particularly sensitive to the dynamical details of the $`P(n)`$ of interest. Not much is left in this observable when different experiments, but with the same values of the variable $`u`$, are compared with each other. On the other hand, the character of the oscillations, as visualised by their frequency in the rank $`q`$ of moments, remains visibly different for different types of reactions and can therefore be used for dynamical discrimination between different models. For example, in Fig. 2 we show oscillations of $`H_q`$ moments obtained for the same value of $`u=7`$ using $`P(n)`$ in the form of the MNBD as given in :
$`P(0)`$ $`=`$ $`{\displaystyle \frac{\left(1+r_1\right)^N}{\left(1+r_2\right)^{N+k}}},`$
$`P(n)`$ $`=`$ $`{\displaystyle \frac{1}{n!}}\left({\displaystyle \frac{r_1}{r_2}}\right)^N{\displaystyle \sum _{j=0}^{N}}{}_{N}{}^{}C_{j}^{}{\displaystyle \frac{\mathrm{\Gamma }(k+n+j)}{\mathrm{\Gamma }(k+j)}}\left({\displaystyle \frac{r_2-r_1}{r_1}}\right)^j{\displaystyle \frac{r_2^n}{\left(1+r_2\right)^{n+k+j}}},`$ (4)
where $`N,k,r_{1,2}`$ are parameters. Referring to for details, we shall say only that if $`k=0`$, the summation in eq. (4) runs from $`j=1`$ up to $`j=N`$ and the resultant distribution is called the MNBD. In this case the parameters $`r_{1,2}`$ are given by the average multiplicity $`\langle n\rangle `$ and the second moment $`C_2`$ of the corresponding $`P(n)`$ as $`r_{1,2}=\frac{1}{2}\left(C_2-1-\frac{1}{\langle n\rangle }\mp \frac{1}{N}\right)\langle n\rangle `$. For $`N=0`$, the parameter $`r_1`$ disappears from (4) and it reduces to the NBD with $`r_2=\frac{\langle n\rangle }{k}`$. The parameters $`r_1`$ and $`r_2`$ of the MNBD now reflect the structure of the oscillations rather than their amplitudes. As one can see, they change systematically from parameters describing $`e^+e^{-}`$ annihilation (left-top panel) to those typical for hadronic $`pp`$ collisions (right-bottom panel) .
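Having the closed form (4), the MNBD is simple to evaluate numerically. The sketch below transcribes it directly (log-Gamma is used for stability; the parameter values are illustrative only, and for the NBD limit $`N=0`$ a dummy nonzero $`r_1`$ must be passed since it cancels):

```python
import numpy as np
from math import comb, lgamma

def mnbd_pmf(nmax, N, k, r1, r2):
    """P(n) of Eq. (4); for k = 0 the j = 0 term is absent (MNBD proper)."""
    P = np.zeros(nmax + 1)
    P[0] = (1 + r1) ** N / (1 + r2) ** (N + k)
    jmin = 1 if k == 0 else 0
    for n in range(1, nmax + 1):
        s = sum(comb(N, j) * np.exp(lgamma(k + n + j) - lgamma(k + j))
                * ((r2 - r1) / r1) ** j / (1 + r2) ** (n + k + j)
                for j in range(jmin, N + 1))
        P[n] = (r1 / r2) ** N * r2 ** n * s / np.exp(lgamma(n + 1))
    return P

mean, C2, N = 10.0, 1.5, 4                  # illustrative MNBD (k = 0) input
r1 = 0.5 * (C2 - 1 - 1 / mean - 1 / N) * mean
r2 = 0.5 * (C2 - 1 - 1 / mean + 1 / N) * mean
P = mnbd_pmf(120, N, 0, r1, r2)
print("normalization:", P.sum(), " <n> =", (np.arange(121) * P).sum())
```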
To summarize, we stress again that the magnitude of the observed oscillations of the $`H_q`$ moments of multiplicity distributions $`P(n)`$ essentially reflects our ability to measure, in a given reaction, large multiplicities. When analysing data using the same value of our universal cut-off parameter $`u`$, one gets comparable values of amplitudes for all reactions of interest. It means that this quantity is not sensitive to the dynamical details of the reaction. The shape of the oscillations remains, however, sensitive to such details. It can therefore be used to extract new dynamical information from different multiplicity distributions (when compared at the same values of the cut-off parameter $`u`$).
A separate issue is the problem of oscillations in heavy ion collisions, $`AB`$, which we now briefly address for the sake of completeness of the presentation. They do not share the property discussed above. The reason is the following. As was already mentioned, in the collision of two nuclei, $`A`$ and $`B`$, the nuclear geometry is the main factor responsible for the shape and properties of the corresponding multiparticle distribution of produced secondaries $`P(n)`$ . This fact is crucial in generating oscillations in the respective cumulant moments. To show this with an example, let us first write the typical corresponding multiplicity distribution for an $`A+B`$ collision:
$$P(n)=\sum _{\mu =1}^{\mu _{tot}}p(\mu )\prod _{i=1}^{\mu }P_i(n_i)\delta \left(n-\sum _{i=1}^{\mu }n_i\right).$$
(5)
It contains two ingredients: the distribution $`p(\mu )`$ of the number of emitting sources $`\mu `$ and the respective “elementary” multiplicity distributions of particles produced from such sources, $`P_i(n_i)`$. The emitting sources can be, for example, understood as in . Their distribution can be calculated in the same way as in . In the example below we have used a simple Monte Carlo code in which two colliding nuclei consisting of $`A`$ and $`B`$ nucleons, respectively, collide with each other. Nucleons are distributed in each nucleus according to a standard Saxon-Woods (SW) distribution (with diffuseness $`0.49`$ fm for $`S`$ and $`0.545`$ fm for $`Pb`$ nuclei and corresponding nuclear radii given by the formula: $`r[\mathrm{fm}]=1.12A^{1/3}-0.86/A^{1/3}`$). They collide with each other with probability given by their (total inelastic) cross section $`\sigma =32`$ mb. This provides us with $`p(\mu )`$. On the other hand, $`P_i(n_i)`$ has been taken again from the MNBD fits to elementary collisions performed in . In nuclear collisions two distinct classes of events occur and must be treated separately: central and minimum bias collisions. In our case central collisions were chosen as the $`1\%`$ of the collisions with the smallest impact parameter. In Fig. 3 we show results for the moments $`H_q`$ of $`P(n)`$ from eq. (5) calculated for $`S+S`$ (left panels) and $`Pb+Pb`$ (right panels) minimum bias (upper panels) and central (lower panels) collisions, for the same value of the variable $`u=5`$. Notice that the magnitude of the amplitude of oscillations (especially for central collisions) remains different from that in the corresponding panels of Fig. 1. It means that in this case there is no such universality as in the previously discussed reactions. On the other hand, however, minimum bias collisions are distinctively different from the central ones, which show only very small oscillations. The patterns shown apparently depend only weakly on the choice of the colliding nuclei (i.e., on the parameters of the Monte Carlo producing $`p(\mu )`$).
To better understand the results presented in Fig. 3 one should realise that central collisions result in a large number of elementary collisions, i.e., in a large number of emitting sources $`\mu `$ in each event. Therefore, because of the central limit theorem, irrespective of the details of the elementary collisions, $`P(n)`$ must have a gaussian-like shape. We can parametrize it as: $`P(n)=P_0\mathrm{exp}\left(-\frac{(n-n_0)^2}{2a}\right)`$. On the other hand, minimum bias collisions result in $`p(\mu )`$ of a box-like, or Saxon-Woods (SW)-like, shape and such will also be the resultant $`P(n)`$: $`P(n)=\frac{P_0}{1+e^{(n-n_0)/a}}`$. In Fig. 4 we show, as an illustration, some typical examples of oscillation patterns emerging from both types of distributions. Notice that whereas we essentially observe no oscillations in the case of a gaussian $`P(n)`$ (or, if at all, they start only at large $`q`$), we see strong oscillations for the SW $`P(n)`$. They are caused in this case by the box-like shape of the SW distribution, which is best demonstrated by the fact that they gradually vanish with the increasing diffuseness of the SW form used, i.e., with increasing values of the parameter $`a`$ . It should be pointed out here that the results presented in Fig. 4 were obtained without additional truncation in multiplicity, i.e., with $`n_{max}=\mathrm{\infty }`$ in (3). All oscillations present there are thus of an entirely different origin than simple truncation of $`P(n)`$. They are governed by the geometrical parameter $`a`$ and by the level of observability of the total $`P(n)`$.
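Reusing `h_moments` from the earlier sketch, the contrast between the two shapes can be checked directly (we use a smaller $`n_0`$ than in Fig. 4 to keep the high-rank cumulants numerically safe in double precision, and simply print the low-rank values):

```python
import numpy as np

n = np.arange(0, 121)
n0 = 40.0
P_gauss = np.exp(-(n - n0) ** 2 / (2 * 8.0))       # gaussian-like, a = 8
P_sw = 1.0 / (1.0 + np.exp((n - n0) / 1.0))        # box-like SW edge, a = 1

for name, P in (("gaussian", P_gauss), ("Saxon-Woods", P_sw)):
    Hq = h_moments(P, qmax=10)                     # normalizes P internally
    print(f"{name:12s} H_2..H_6 =", np.round(Hq[:5], 5))
```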
Summarizing, we have demonstrated an (approximate) universality of the amplitudes of oscillations of cumulant moments when compared at the same values of the variable $`u`$ as defined in (3). It shows up for a range of reactions from $`e^+e^{-}`$ annihilation processes, via hadronic $`pp`$ collisions, to $`pA`$ reactions. The latter start to show the influence of the geometry of the collision process, which entirely dominates the truly nuclear collisions.
Acknowledgements
G.W. would like to extend his gratitude to the Physics Department of Shinshu University and to Matsusho Gakuen Junior College for their warm hospitality during his visit to Matsumoto where this work originated. M.B. is partially supported by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture (No. 06640383) and by the Exchange Program between JSPS and the Polish Academy of Sciences. N.S. thanks for the financial support by Matsusho Gakuen Junior College.
Figure captions
* Examples of $`H_q`$ moments of multiplicity distributions for the $`e^+e^{-}`$ annihilation (upper panels), $`pp`$ reactions (middle panels) and $`pA`$ reactions (bottom panels) for two chosen values of the parameter $`u`$: $`u=5`$ (left panels) and $`u=8`$ (right panels). (The $`P(n)`$ data are the same as in ).
* The $`H_q`$ moments obtained from the MNBD for $`P(n)`$ for different values of its characteristic parameters $`r_1`$ and $`r_2`$ (cf. ). The upper-left panel corresponds to $`e^+e^{-}`$ and the bottom-right one to $`pp`$ reactions, respectively. The value of the parameter $`u=7`$ remains the same throughout. Notice the gradual change of the frequency of oscillations, whereas their amplitudes remain essentially of the same order of magnitude.
* The $`H_q`$ moments calculated for $`S+S`$ (left panels) and $`Pb+Pb`$ (right panels) collisions of the minimum bias (upper panels) and central (lower panels) type. In both cases $`u=5`$.
* Examples of oscillation patterns for Gaussian-like (upper panels) and SW-like (lower panels) shapes of the multiplicity distribution $`P(n)`$, see text for details. Short-dashed, long-dashed and full lines correspond to the parameter $`a`$ equal to $`a=2,20,80`$ for the Gaussian-like distributions and to $`a=99,50,10,0.001`$ for the SW-like distributions; in both cases $`n_0=400`$.
# A Wide–Field Spectroscopic Survey In The Cluster Lens Cl0024+17
## 1. Introduction
In the past, studies of the lensing cluster Cl0024+17 have revealed a strong discrepancy, of up to a factor of 3, between estimates of the cluster mass obtained with different methods (galaxy dynamics, X-ray emission, strong and weak lensing). These results are summarized in Fig. 1. In order to better understand this discrepancy, our group has undertaken a wide field spectroscopic survey in the cluster.
## 2. Spectroscopic Sample
The largest redshift sample (prior to 1999) of members of Cl0024 is the one by Dressler & Gunn (1992), which included 31 redshifts resulting in a velocity dispersion $`\sigma =1200`$ km/s. We obtained a total of 626 spectra in the cluster field during three observing runs at CFHT in 1993, 1995 and 1996 and one run at WHT in 1996. Adding to our catalogue the sample of 107 cluster redshifts of Dressler et al. (1999) gives a total of 697 spectra. We obtain sufficiently secure redshifts for 615 objects in a 21 by 28 arcmin<sup>2</sup> field centred on the cluster.
## 3. Results
The histogram (Fig. 2) shows the detailed structure of the velocity distribution around the cluster redshift. We can clearly distinguish the relaxed main cluster and an unrelaxed extension towards lower redshifts.
Defining as cluster members the 227 objects with redshifts $`0.388<z<0.405`$, we find a central redshift of $`\overline{z}=0.3949\pm 0.0006`$ and a velocity dispersion $`\sigma =667_{-51}^{+74}\text{ km/s}`$ (biweight estimators, bootstrap errors). A Gaussian with these parameters gives a good visual fit (see Fig. 2). Using the simple spherically symmetric isothermal model of Schneider, Dressler & Gunn (1986), we obtain a mass for the main cluster of $`M=1.4\times 10^{14}h^{-1}\text{M}_{\odot }`$ (at 500 kpc).
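As a rough consistency check (our own estimate; the Schneider, Dressler & Gunn model differs in detail), the singular isothermal sphere scaling $`M(r)=2\sigma ^2r/G`$ with $`\sigma =667`$ km/s and $`r=500h^{-1}`$ kpc gives

$$M\approx \frac{2\times (667\text{ km/s})^2\times 500h^{-1}\text{ kpc}}{G}\approx 1.0\times 10^{14}h^{-1}\text{M}_{\odot },$$

of the same order as the value quoted above.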
We tentatively interpret the foreground extension as a filament aligned with the line of sight and estimate a lower limit for its mass as $`5\times 10^{13}h^{-1}\text{M}_{\odot }`$. In order to better separate the contributions of the cluster and the filament to the total lensing mass, a precise weak lensing mass profile out to $`>1h^{-1}`$ Mpc will be needed.
## References
Bonnet, H., Mellier, Y., Fort, B. 1994, ApJ, 427, L83
Broadhurst, T. 1999, astro-ph/9902316
Dressler, A., Gunn, J. E. 1992, ApJS, 78, 1
Dressler, A. et al. 1999, ApJS, 122, 51
Schneider, D. P., Dressler, A., Gunn, J. E. 1986, AJ, 92, 523
Soucail, G. et al. 1999, in preparation
# The Hubble Constant from the HST Key Project on the Extragalactic Distance Scale
## 1. Introduction
Back in 1984, the goal of the HST Key Project on the Extragalactic Distance Scale (hereafter, KP) was announced to be the measurement of the Hubble constant, $`H_0`$, with 10% accuracy. The plan of attack was to set the zero points for a variety of secondary distance indicators by measuring distances to 18 nearby calibrators using the most reliable of the primary standard candles, Cepheid variables (Figure 1). Crucial to the success of the mission was the ability of secondary distance indicators to reach beyond the local supercluster into the unperturbed Hubble flow. Fifteen years later, four have proven equal to the challenge: the Surface Brightness Fluctuation Method, the Fundamental Plane for early-type galaxies, the Tully-Fisher relation and the Type Ia Supernovae. All are subject to biases arising from implicit assumptions made about the stellar population of the galaxies they target. Such biases, which could have serious effects if any one distance indicator were to be used alone, are attenuated when the constraints imposed by all indicators are combined. The final KP results are presented in Ferrarese et al. (2000a), Kelson et al. (2000), Sakai et al. (2000), Gibson et al. (2000), Mould et al. (2000) and Freedman et al. (2000). Here we will summarize those efforts, focusing on the merits and disadvantages of each indicator, and pointing out areas where future work is needed.
The following sets the scene for the remainder of this paper: the calibration of the PL relation is based on the LMC sample of Madore & Freedman (1991). It assumes a true distance modulus to the LMC of $`18.50\pm 0.13`$ mag (Mould et al. 2000), no dependence of the Cepheid PL relation on the metallicity of the variable stars, a ratio of total to selective absorption $`R_V=A(V)/E(B-V)=3.3`$, and a reddening law following Cardelli, Clayton and Mathis (1989). Each of these assumptions is examined and their effects on the final error budget are assessed in the last section. We strove for homogeneity between the Cepheid distance scale and all secondary distance indicators: the treatment of the calibrator sample is consistent internally and with the distant sample to which the calibration is applied (Ferrarese et al. 2000b, Macri et al. 2000, Gibson et al. 2000, Kelson et al. 2000), providing a meaningful comparison of the results. This is perhaps the most distinctive mark of the KP compared to previous work.
## 2. Surface Brightness Fluctuations
The KP calibration of the Surface Brightness Fluctuation method (Tonry & Schneider 1988) is discussed in Ferrarese et al. (2000a). The largest database of SBF measurements comprises ∼300 galaxies within the local supercluster, observed from the ground in the Kron-Cousins $`I`$-band by John Tonry and collaborators (Tonry et al. 1999, Ajhar et al. 2000). A much smaller (∼20 galaxies), but more distant survey has employed the WFPC2 on board HST (Ajhar et al. 1997, Thomsen et al. 1997, Pahre et al. 1999, Lauer et al. 1998); indeed, it is from this pool of ∼20 galaxies that the six at $`cz`$ ∼ 3000–7000 km s<sup>-1</sup> have been singled out by the KP for deriving $`H_0`$. The use of HST for SBF measurements has some drawbacks, however: the calibration of the fluctuation magnitudes cannot proceed directly against the Cepheids, since only one galaxy, M31, is in common between the two methods. Furthermore, fluctuation magnitudes are known to depend strongly on the metallicity (traditionally expressed as a V–I color) of the underlying stellar population (Tonry et al. 1997). Such a dependence cannot be properly quantified for the HST-SBF sample because of its limited size (Ajhar et al. 1997). Both problems can be overcome, but they certainly are important enough to deserve further study. As for the color dependence, the KP approach is to assume it to be the same as determined empirically for the larger ground based $`I`$-band survey (Tonry et al. 1997). This choice is supported by stellar population synthesis models (Worthey 1994), and the similar response curves of the $`I`$ and the filter used for the HST/WFPC2 measurements, F814W.
Once the color dependence is accounted for, the absolute magnitude of the fluctuations measured with HST can be derived using the 16 galaxies which also have ground based $`I`$-band SBF measurements, calibrated against the Cepheids. The latter calibration, however, poses some further problems: all galaxies with $`I`$-SBF and Cepheid distances (see Table 1) are early-type spirals, for which SBF measurements become challenging (Tonry et al. 1999). Dust and other contaminants can conspire to artificially brighten the measured fluctuations, and the stellar population in bulges might not be identical to that of the ellipticals which are the preferred targets of the method (leading to a different color dependency). On the other hand, using only SBF data to early-type galaxies would force an indirect calibration whose validity relies on the precarious assumption of a spatial coincidence between late and early-type galaxies, further aggravated by the small sample of galaxies with Cepheid distances in any group. Given the current data (Ferrarese et al. 2000b), the direct calibration for the six galaxies in Table 1, and the indirect one for the six groups/clusters with mean Cepheid and SBF distances (including Leo I, Virgo and Fornax), differ by 0.1 mag, the indirect calibration leading to larger distances. To avoid uncertainties introduced by cluster depth effects, the KP has preferred the direct calibration, but we stress that the reliability of the SBF measurements in spirals remains to be tested, and coupled with the not fully satisfactory constraints imposed on the color dependence of HST-SBF, is the main source of concern in the present calibration of the SBF method.
Because the 5 galaxies comprising the SBF distant sample are confined within 5000 km s<sup>-1</sup> (excluding the not very well constrained measurement in the Coma cluster by Thomsen et al. 1997), SBF is more susceptible to the effects of large scale flows than the other indicators. Furthermore, three of the five galaxies lie in the immediate vicinity of one of the major contributors to the local flow field, i.e., the Great Attractor. To provide a first order estimate of the bias introduced by the flow field in estimating $`H_0`$, we considered a simple multi-attractor model fully described in Mould et al. (2000). The model assumes three mass concentrations, the local supercluster, the Great Attractor, and the Shapley Concentration, acting independently so that corrections for each are additive. Using velocities corrected for this flow model, and the HST-SBF distances derived using the direct calibration described above, we obtain $`H_0`$ = 69$`\pm `$4 (random) $`\pm `$5 (systematic) km s<sup>-1</sup> Mpc<sup>-1</sup>. $`H_0`$ happens to remain unaltered if velocities corrected to the CMB frame are used instead; nevertheless, we estimate an 8 km s<sup>-1</sup> Mpc<sup>-1</sup> random error on $`H_0`$ per cluster due to corrections for the flow field. This represents the major contributor to the random uncertainty; random errors in the fluctuation magnitudes and V–I colors carry only 30% of the weight. The systematic uncertainty is due mostly to the systematic error in the Cepheid distance scale (the distance to the LMC and the photometric calibration of the WFPC2), and partly to the internal error in the SBF calibration.
## 3. Fundamental Plane
Among the distance indicators targeted by the KP, the Fundamental Plane (FP) has the distinct disadvantage of not being calibratable directly against the Cepheids. The approach followed for the KP by Kelson et al. (2000) is to base the FP calibration on the Leo I group and the Fornax and Virgo clusters, for which several Cepheid distances exist. Based on Monte Carlo simulations following Gonzales & Faber (1997), Kelson et al. test the hypothesis of a spatial coincidence between the Cepheid spirals and the FP ellipticals in these clusters, and conclude that this assumption leads to an underestimate of the FP distances. Accurate evaluation of this bias would require a detailed knowledge of the clusters’ 3D spatial structure which, alas, is lacking. The simulations suggest a 5% downward correction to $`H_0`$, but at the very high price of a 5% uncertainty, which alone will account for a quarter of the systematic error budget on $`H_0`$.
The kinematic data (i.e., the velocity dispersion) used by Kelson et al. for the fundamental plane in Leo I, Virgo and Fornax are from Dressler et al. (1987) and Faber et al. (1989), while the photometric parameters (i.e., effective radius and surface brightness) are derived from original $`I`$-band data. The distant sample consists of 11 clusters observed in Gunn $`r`$ by Jorgensen et al. (1995ab) in the range $`cz`$ ∼ 1100–12000 km s<sup>-1</sup>. For consistency, the slope of the FP for the calibrators is assumed to be the same as for the distant sample, the $`I`$ band photometry for the local calibrators is transformed to Gunn $`r`$, and the velocity dispersion is corrected to the same aperture used for the distant sample. None of these steps produces major contributions to the error budget.
There appear to be no additional major impasses with the calibration: the zero points derived from the three clusters are consistent with each other. In view of the findings of Fruchter et al. (this volume) it is noteworthy, even if inconsequential, that the FP dispersion is found to be almost double in Fornax compared to Virgo and Leo I (∼0.09 dex compared to ∼0.05 dex). The calibration applied to the distant sample leads to $`H_0`$=78$`\pm `$7 (random) $`\pm `$8 (systematic) km s<sup>-1</sup> Mpc<sup>-1</sup>. The major source of random uncertainty is associated with the slope and zero point of the fundamental plane and only in part with the Cepheid distances of the calibrating clusters. The systematic uncertainty derives mainly from the systematic error in the Cepheid distance scale, the uncertain accounting of the spatial separation between spirals and ellipticals (see above), and only in minor part from cluster population incompleteness bias.
## 4. Tully-Fisher Relation
The KP calibration of the $`BVRIH`$ Tully-Fisher relations is presented in Sakai et al. (2000). A summary has been written by Shoko Sakai for these proceedings, therefore only a few words will be spent here. The calibration, based on 21 galaxies with Cepheid distances (Table 1), is applied to four distant cluster samples. Of these the largest is the $`I`$-band survey of Giovanelli et al. (1997), which comprises 555 galaxies in 24 clusters within 9500 km s<sup>-1</sup>. The $`B`$ and $`V`$-band samples of Bothun et al. (1985) and the $`H`$-band sample of Aaronson et al. (1982, 1986) span a comparable range in redshift space, but are of significantly smaller size. The most significant problem with the TF analysis is the discrepancy, at the 2$`\sigma `$ level, between the values of $`H_0`$ derived from the $`I`$ and $`H`$ surveys: 74$`\pm `$2 km s<sup>-1</sup> Mpc<sup>-1</sup> and 67$`\pm `$3 km s<sup>-1</sup> Mpc<sup>-1</sup> respectively (random errors only). Sakai et al. thoroughly investigated the cause of this discrepancy, and even though a secure culprit could not be identified, circumstantial evidence points to the $`H`$-band photometry as the most likely suspect. Work is in progress (Macri et al. 2000) to re-derive $`H`$ magnitudes for the local calibrators, and resolve the $`I`$ vs $`H`$ disagreement. At this time, the most prudent course of action is to adopt a weighted average of the values of $`H_0`$ from all four surveys. This leads to $`H_0`$ = 71$`\pm `$4 (random) $`\pm `$10 (systematic) km s<sup>-1</sup> Mpc<sup>-1</sup>. The random error is shared equally by errors in the CMB velocities for the distant samples, and the random error in the Tully-Fisher moduli (mainly deriving from errors in the photometry, and only partly in the linewidths). The systematic uncertainties come from the systematic error in the Cepheid distance scale and in the TF zero point. Notice that unlike SBF and FP, which target the dust free environments of early-type galaxies, TF and SNIa carry the extra burden of having to deal with internal extinction, which produces an additional term in the final error budget.
## 5. Type Ia Supernovae
A chronicle on $`H_0`$ from Type Ia Supernovae (SNIa) would be divided into three chapters. In chapter 1, values in the upper 50s would be routinely recorded under the assumption that the magnitude at peak acts as a standard candle (Hamuy et al. 1996, Riess et al. 1996, Saha et al. 1997). Chapter 2 opens with the realization that the intrinsic brightness at maximum light correlates with the decline rate (as first suggested by Phillips 1993): slow decliners are intrinsically brighter than fast decliners. Because the local calibrators happen to be slower decliners than the average SNIa in the distant samples, correction for this effect leads to a substantial 8% increase in $`H_0`$ (Saha et al. 1999, Hamuy et al. 1996, Riess et al. 1998, Phillips et al. 1999). The KP wrote chapter 3 by re-deriving Cepheid distances to the local SNIa host galaxies (Gibson et al. 2000), thus revising the calibration and leading to a further 6% increase in $`H_0`$.
It is thanks to the efforts of Allan Sandage and collaborators that we can now rely on Cepheid distances to six nearby SNIa host galaxies when none existed before (Saha et al. 1994, 1995, 1996a, 1996b, 1997, 1999), even if nothing compares to the strategic planning of Tanvir et al. (1995), who published a Cepheid distance to NGC 3368 three years before its SNIa went off. Loyal to the KP commitment of providing a consistent footing to all secondary distance indicators, Gibson et al. (2000) set out to derive new photometry and distances for all of the SNIa host galaxies. In two cases (NGC 4496A and IC4182) the new distances agree with the ones originally published; but in all other cases the re-analysis led to consistently smaller distance moduli, by an average of 0.12 magnitudes. The causes of the discrepancies vary from galaxy to galaxy; they include disagreements in the photometry (which are quickly amplified by the de-reddening procedure used in deriving the distance moduli), differences in the sample of Cepheids, and differences in the treatment of reddening. Many pages of justifications and details are given in Gibson et al.; the punch line is that these new distances should be preferred when SNIa are compared to the other secondary distance indicators, as they are derived in a manner consistent with that used for the 18 galaxies originally observed as part of the KP.
Three local supernovae are excluded from the calibration because of the poor quality of their light curves: SN1895B in NGC 5253 (the galaxy also hosted the better sampled SN1972E) and the SNe in NGC 4414 and NGC 4496A. Gibson et al. follow Suntzeff et al. (1999) in the adoption of the SNe data ($`B`$, $`V`$ and $`I`$ photometry and extinction corrections) for both the local and distant sample. The latter comprises 35 Calán-Tololo/CfA SNe (Phillips et al. 1999), in the range $`cz`$ ∼ 1000–31000 km s<sup>-1</sup>. Both the distant and nearby samples are corrected for the decline rate versus peak luminosity relation obtained by Phillips et al. (1999) from a low extinction subset of the Calán-Tololo SNe. Averaging the $`B`$, $`V`$ and $`I`$ data Gibson et al. derive a weighted Hubble constant of $`H_0`$=68$`\pm `$2 (random) $`\pm `$5 (systematic) km s<sup>-1</sup> Mpc<sup>-1</sup>. The main contributions to the random errors come from the scatter in the Hubble diagram and from errors in the photometry and distances for the local calibrators. The systematic error is propagated directly from the Cepheid distance scale.
How does the KP value of $`H_0`$ compare to the findings of other groups? We will consider the two most recent works. Suntzeff et al. (1999), which is the source of the KP database for both the local and distant SNe, quote $`H_0`$ = 63$`\pm `$2 km s<sup>-1</sup> Mpc<sup>-1</sup>which, as expected, agrees with Gibson et al. when the new distances for the local calibrators are accounted for. Saha et al. (1999) obtain $`H_0`$ = 60$`\pm `$2 km s<sup>-1</sup> Mpc<sup>-1</sup> in both $`B`$ and $`V`$. Part of the difference with Gibson et al. is again due to the adoption of new distances, but an additional 6% must be accounted for. This is due to differences in the internal extinction corrections for the local sample, and in the foreground extinction for the distant sample (the two having opposite effects on $`H_0`$), and in the actual photometry adopted for the local SNe. Other factors, such as the adoption of slightly different distant samples, different formalism for the rate of decline versus peak magnitude relation, and different assumption as to a dependence of $`H_0`$ on redshift, do not produce appreciable differences.
## 6. Combining the Constraints
By combining the constraints imposed on $`H_0`$ by each indicator (Figure 2), we can reduce the propagation of uncorrelated systematics. There is some amount of covariance in the values of $`H_0`$ from Table 2, due for example to the sharing of some of the calibrator galaxies (Table 1), and to the common underlying assumption on the distance to the LMC. To properly account for the interplay of random and systematic errors, Mould et al. (2000) have developed Monte Carlo simulations in which all uncertainties and parameter dependences for each indicator are propagated through and investigated thoroughly. The final result is $`H_0`$=71$`\pm `$6 km s<sup>-1</sup> Mpc<sup>-1</sup>. The error distribution is symmetrical, with a 1$`\sigma `$ width of 9%, fulfilling the KP original goal.
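A schematic version of such a simulation is sketched below (this is not the actual KP code: the four indicator values are those quoted in the preceding sections, and treating a common Cepheid zero-point error of about 6%, roughly the $`\pm `$0.13 mag LMC uncertainty, as fully correlated between indicators is our simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# (H0, random error, systematic error) for SBF, FP, TF and SNIa, as quoted above
indicators = [(69, 4, 5), (78, 7, 8), (71, 4, 10), (68, 2, 5)]

f_common = 0.06     # assumed fully correlated Cepheid/LMC zero-point fraction
n_trials = 20_000
draws = np.empty(n_trials)
for t in range(n_trials):
    common = rng.normal(0.0, f_common)      # shared zero-point shift
    vals, wts = [], []
    for h0, rnd, syst in indicators:
        # remainder of each systematic treated as independent
        indep = np.sqrt(max(syst**2 - (f_common * h0) ** 2, 0.0))
        vals.append(h0 * (1.0 + common)
                    + rng.normal(0.0, rnd) + rng.normal(0.0, indep))
        wts.append(1.0 / rnd**2)
    draws[t] = np.average(vals, weights=wts)

print(f"H0 = {draws.mean():.0f} +/- {draws.std():.0f} km/s/Mpc")
```

With these inputs the combined distribution is centred near 71 km s<sup>-1</sup> Mpc<sup>-1</sup> with a width of several km s<sup>-1</sup> Mpc<sup>-1</sup>, in line with the quoted result.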
## 7. Future Directions
There is still substantial room for improvement in the result given above. One common dimension they all share is the assumption of a 50$`\pm `$3 kpc distance to the LMC. A distribution of LMC distances compiled from the literature (Mould et al. 2000), if indeed peaked at 50 kpc, is not symmetric: red clump distances are as low as 43 kpc (e.g., Stanek et al. 1999), while Mira variables define the upper envelope at 55 kpc (Reid 1998). Mould et al. investigate the consequences of replacing the adopted probability distribution with this empirical, skewed compilation. As a result $`H_0`$ would increase by 4.5%, and the associated error would jump to 12%. While this is an extreme, and somewhat unorthodox revision, it does illustrate the rather heavy repercussion of this one assumption. The debate on the LMC distance promises to be as heated as the controversy on $`H_0`$ itself, and a resolution might have to await the launch of SIM in 2005. In the meantime, an update on the LMC PL relation is long overdue: the current calibration is based on 32 Cepheids, fewer than the Cepheid sample size of several KP galaxies! Progress is being made (e.g., Moffett 1998, Barnes et al. 1999, Tanvir & Boyle 1999, Udalski et al. 1999); within the KP, Kim Sebo is leading an effort which has so far produced BVRIH light curves for over 200 LMC Cepheids (Sebo et al. 2000). Finally, systematics in the calibration of the PL relation must also be explored in more galaxies having Cepheid-independent distance estimates: promising starts are the study conducted by Maoz et al. (1999) in NGC 4258, and the DIRECT project targeting M31 and M33 (e.g., Mochejska et al. 1999).
The metallicity dependence of the Cepheid PL relation, and the uncertainties in the photometric calibration of the WFPC2, closely follow the LMC distance in generating the largest uncertainty in $`H_0`$. While the latter will soon be better constrained (Stetson et al. 2000, Saha et al. 2000), not much agreement has yet been reached for the former (e.g., Alibert et al. 1999, Caputo et al. 1999, Storm et al. 1998, Sasselov et al. 1997, Kochanek 1997). If the mild, and only marginally significant, metallicity dependence found for the KP by Kennicutt et al. (1998) is applied to the Cepheid distances of the local calibrators, the value of $`H_0`$ would decrease by 4.5%.
Further progress is also to be expected in improving the calibrator samples for some of the secondary distance indicators. Sandage and collaborators are still actively hunting down SNIa hosts. Indeed in the near future three more calibrators will be added to the current sample of six. One is a member of Fornax, and will help to better constrain the distance to the cluster needed for the FP calibration. Finally, during this conference we learned that a lot of effort is being spent in developing new distant samples (Germany, Willick, Courteau, Giovanelli, Colless, this volume). In particular, the Mount Stromlo Abell Cluster SN Search (see also Reiss et al. 1998) coupled with the ongoing Calán-Tololo and Asiago surveys, promises to double the current number of distant SNe, with the result that not only the Hubble diagram, but also dependences of the peak magnitude on second parameters can be further refined. The $`I`$-band Tully-Fisher sample of Dale et al. (1999) will push the method to 25000 km s<sup>-1</sup>, twice as far as the sample currently used by the KP, while Fundamental Plane parameters for 80 new clusters are expected from the EFAR project.
## References
Aaronson, M., et al. 1982, ApJS, 50, 241
Aaronson, M., et al. 1986, ApJ, 302, 536
Ajhar, E. A., et al. 1997, AJ, 114, 626
Alibert, Y., et al. 1999, A&A, 344, 551
Barnes, T., et al. 1999, astro-ph/9903095
Bothun, G. D., et al. 1985, ApJS, 57, 423
Caputo, F., et al. 1999, astro-ph/9902279
Cardelli, J. A., et al. 1989, ApJ, 345, 245
Dale, A. D., et al. 1999, astro-ph/9907059
Dressler, A., et al. 1987, ApJ, 313, 42
Faber, S. M., et al. 1989, ApJS, 69, 763
Ferrarese, L., et al. 2000a, ApJ, in press (astro-ph/9908192)
Ferrarese, L., et al. 2000b, ApJS, submitted
Freedman, W. L., et al. 2000, in preparation
Gibson, B., et al. 2000, ApJ, in press (astro-ph/9908149)
Giovanelli, R., et al. 1997, AJ, 113, 22
Gonzales, A. H., & Faber, S. M. 1997, ApJ, 485, 80
Hamuy, M., et al. 1996, AJ, 112, 2398
Huchra, J., & Mader, J. 1998, http://cfa-www.harvard.edu/~huchra, ZCAT Version July 15, 1998
Jorgensen, I., et al. 1995a, MNRAS, 273, 1097
Jorgensen, I., et al. 1995b, MNRAS, 276, 1341
Kelson, D., et al. 2000, ApJ, submitted
Kennicutt, R., et al. 1998, ApJ, 498, 181
Kochanek, C. S. 1997, ApJ, 491, 13
Lauer, T. R., et al. 1998, ApJ, 499, 577
Macri, L., et al. 2000, ApJS, submitted
Madore, B. F., & Freedman, W. L. 1991, PASP, 103, 933
Maoz, E., et al. 1999, Nature, in press (astro-ph/9908140)
Mochejska, B. J., et al. 1999, astro-ph/9904343
Moffett, T. J., et al. 1998, ApJS, 117, 135
Mould, J. R., et al. 2000, ApJ, submitted
Pahre, M. A., et al. 1999, ApJ, 515, 79
Phillips, M. M. 1993, ApJ, 413, 105
Phillips, M. M., et al. 1999, astro-ph/9907052
Reid, I. N. 1998, AJ, 115, 204
Reiss, D. J., et al. 1998, AJ, 115, 26
Riess, A. G., et al. 1996, ApJ, 473, 88
Riess, A. G., et al. 1998, AJ, 116, 1009
Saha, A., et al. 1994, ApJ, 425, 14
Saha, A., et al. 1995, ApJ, 438, 8
Saha, A., et al. 1996a, ApJ, 466, 55
Saha, A., et al. 1996b, ApJS, 107, 693
Saha, A., et al. 1997, ApJ, 486, 1
Saha, A., et al. 1999, astro-ph/9904389
Sakai, S., et al. 2000, ApJ, submitted
Sasselov, D., et al. 1997, A&A, 324, 471
Stanek, K. Z., et al. 1999, astro-ph/9908041
Storm, J., et al. 1998, astro-ph/9811376
Suntzeff, N., et al. 1999, AJ, 117, 1175
Tanvir, N. R., & Boyle, A. 1999, MNRAS, 304, 957
Tanvir, N. R., et al. 1995, Nature, 377, 27
Thomsen, B., et al. 1997, ApJ, 483, L37
Tonry, J. L., & Schneider, P. 1988, AJ, 96, 807
Tonry, J. L., et al. 1997, ApJ, 475, 399
Tonry, J. L., et al. 1999, astro-ph/9907062
Udalski, A., et al. 1999, astro-ph/9907236
Willick, J. A. 1999, ApJ, 516, 47
Worthey, G. 1994, ApJS, 95, 107
# Matrix Elements without Quark Masses on the Lattice
## 1 Introduction
Since the original proposals of using lattice QCD to study hadronic weak decays , substantial theoretical and numerical progress has been made: the main theoretical aspects of the renormalization of composite four-fermion operators are well understood ; the calculation of $`K^0`$–$`\overline{K}^0`$ mixing has reached a level of accuracy which is unmatched by any other approach ; increasing precision has been gained in the determination of the electro-weak penguin amplitudes necessary for the prediction of the CP-violation parameter $`ϵ'/ϵ`$ ; finally, matrix elements of $`\mathrm{\Delta }S=2`$ operators which are relevant to the study of FCNC effects in SUSY models have been computed . Methods, symbols and results reported in this talk are fully described in .
## 2 Matrix elements without quark masses
The analysis of $`K^0`$–$`\overline{K}^0`$ mixing with the most general $`\mathrm{\Delta }S=2`$ effective Hamiltonian requires the knowledge of the matrix elements $`\langle \overline{K}^0|O_i|K^0\rangle `$ of the parity-conserving parts of the following operators
$`O_1`$ $`=`$ $`\overline{s}^\alpha \gamma _\mu (1-\gamma _5)d^\alpha \overline{s}^\beta \gamma _\mu (1-\gamma _5)d^\beta ,`$
$`O_2`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\alpha \overline{s}^\beta (1-\gamma _5)d^\beta ,`$
$`O_3`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\beta \overline{s}^\beta (1-\gamma _5)d^\alpha ,`$ (1)
$`O_4`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\alpha \overline{s}^\beta (1+\gamma _5)d^\beta ,`$
$`O_5`$ $`=`$ $`\overline{s}^\alpha (1-\gamma _5)d^\beta \overline{s}^\beta (1+\gamma _5)d^\alpha .`$
On the lattice, matrix elements of weak four-fermion operators are computed from first principles. But, following the common lore, they are usually given in terms of the so-called $`B`$-parameters which measure the deviation of their values from those obtained in the Vacuum Saturation Approximation (VSA). For operators in (1), the $`B`$-parameters are usually defined as
$`\langle \overline{K}^0|O_1(\mu )|K^0\rangle `$ $`=`$ $`{\displaystyle \frac{8}{3}}M_K^2f_K^2B_1(\mu ),`$ (2)
$`\langle \overline{K}^0|O_i(\mu )|K^0\rangle `$ $`=`$ $`{\displaystyle \frac{C_i}{3}}\left({\displaystyle \frac{M_K^2f_K}{m_s(\mu )+m_d(\mu )}}\right)^2B_i(\mu ),`$
where $`C_i=-5,1,6,2`$ for ($`i=2,\dots ,5`$). In (2), $`\langle \overline{K}^0|O_1|K^0\rangle `$ is parameterized in terms of well-known experimental quantities and $`B_1(\mu )`$ ($`B_K(\mu )\equiv B_1(\mu )`$). On the contrary, the $`\langle \overline{K}^0|O_i|K^0\rangle `$ ($`i=2,\dots ,5`$) depend quadratically on the quark masses in (2), while they are expected to remain finite in the chiral limit and to depend only linearly on the quark masses. Contrary to $`f_K`$, $`M_K`$, etc., quark masses cannot be directly measured by experiments and the present accuracy in their determination is still rather poor.
Therefore, whereas for $`O_1`$ we introduce $`B_K`$ as an alias of the matrix element, by using (2) we replace each of the “SUSY” matrix elements with two unknown quantities, i.e. the $`B`$-parameter and $`m_s+m_d`$. To overcome these problems, we propose the following new parameterization of the $`\mathrm{\Delta }S=2`$ operators
$`\langle \overline{K}^0|O_1(\mu )|K^0\rangle `$ $`=`$ $`{\displaystyle \frac{8}{3}}M_K^2f_K^2B_1(\mu ),`$ (3)
$`\langle \overline{K}^0|O_i(\mu )|K^0\rangle `$ $`=`$ $`M_{K^*}^2f_K^2\stackrel{~}{B}_i(\mu ).`$
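Equating (2) and (3) makes explicit that all the dependence on the poorly known quark masses is confined to the translation between the two conventions; for $`i=2,\dots ,5`$ one has

$$\stackrel{~}{B}_i(\mu )=\frac{C_i}{3}\left(\frac{M_K^2}{M_{K^*}\left(m_s(\mu )+m_d(\mu )\right)}\right)^2B_i(\mu ).$$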
The $`\stackrel{~}{B}_i(\mu )`$-parameters are still dimensionless quantities and can be computed on the lattice by studying appropriate ratios of three- and two-point functions . By simply using them, we have eliminated any fictitious reference to the quark masses, hence reducing the systematic errors on the corresponding physical amplitudes. An alternative parameterization, not used here, which may be useful in the future is reported in . The VSA and $`B`$-parameters are also used for matrix elements of operators which enter the $`\mathrm{\Delta }S=1`$ effective Hamiltonian. Notice that this “conventional” parameterization is solely responsible for the apparent quadratic dependence of $`ϵ'/ϵ`$ on the quark masses. This introduces a redundant source of systematic error which can be avoided by parameterizing the matrix elements in terms of measured experimental quantities; therefore a better determination of the strange quark mass $`m_s(\mu )`$ will not improve our theoretical knowledge of $`ϵ'/ϵ`$. In this work we have computed the matrix elements $`\langle \pi |O_i^{3/2}|K\rangle `$ of the four-fermion operators $`O_i^{3/2}`$ ($`i=7,8,9`$) which contribute to the $`\mathrm{\Delta }I=3/2`$ sector of $`ϵ'/ϵ`$. In the chiral limit $`\langle \pi \pi |O_i^{3/2}|K\rangle `$ can be obtained, using soft pion theorems, from $`\langle \pi ^+|O_i^{3/2}|K^+\rangle `$. For degenerate quark masses, $`m_s=m_d=m`$, and in the chiral limit, we find
$`\underset{m\to 0}{\mathrm{lim}}\langle \pi ^+|O_7^{3/2}|K^+\rangle `$ $`=`$ $`M_\rho ^2f_\pi ^2\underset{m\to 0}{\mathrm{lim}}\stackrel{~}{B}_5(\mu )`$
$`\underset{m\to 0}{\mathrm{lim}}\langle \pi ^+|O_8^{3/2}|K^+\rangle `$ $`=`$ $`M_\rho ^2f_\pi ^2\underset{m\to 0}{\mathrm{lim}}\stackrel{~}{B}_4(\mu )`$
$`\underset{m\to 0}{\mathrm{lim}}\langle \pi ^+|O_9^{3/2}|K^+\rangle `$ $`=`$ $`{\displaystyle \frac{8}{3}}M_\pi ^2f_\pi ^2\underset{m\to 0}{\mathrm{lim}}B_1(\mu ).`$
## 3 Renormalization Group Invariant Operators
Physical amplitudes can be written as
$$\langle F|\mathcal{H}_{eff}|I\rangle =\langle F|\vec{O}(\mu )|I\rangle \vec{C}(\mu ),$$
(4)
where $`\vec{O}(\mu )\equiv (O_1(\mu ),\dots ,O_N(\mu ))`$ is the operator basis (for example the one in (1) for the $`\mathrm{\Delta }S=2`$ case) and $`\vec{C}(\mu )`$ the corresponding Wilson coefficients, represented as a column vector. $`\vec{C}(\mu )`$ is expressed in terms of its counterpart, computed at a large scale $`M`$, through the renormalization-group evolution matrix $`\widehat{W}[\mu ,M]`$
$$\vec{C}(\mu )=\widehat{W}[\mu ,M]\vec{C}(M),$$
(5)
where the initial conditions $`\vec{C}(M)`$ are obtained by perturbative matching of the full theory to the effective one at the scale $`M`$ at which all the heavy particles have been removed. $`\widehat{W}[\mu ,M]`$ can be written as (see for example )
$`\widehat{W}[\mu ,M]=\widehat{M}[\mu ]\widehat{U}[\mu ,M]\widehat{M}^{-1}[M],`$ (6)
where $`\widehat{U}=(\alpha _s(M)/\alpha _s(\mu ))^{(\gamma _O^{(0)T}/2\beta _0)}`$ is the leading-order evolution matrix and $`\widehat{M}(\mu )`$ is an NLO matrix defined in . The Wilson coefficients $`\vec{C}(\mu )`$ and the renormalized operators $`\vec{O}(\mu )`$ are usually defined in a given scheme, at a fixed renormalization scale $`\mu `$, and they depend on the renormalization scheme and scale in such a way that only $`H_{eff}`$ is scheme and scale independent. To simplify the matching procedure, we propose a Renormalization Group Invariant (RGI) definition of Wilson coefficients and composite operators which generalizes what is usually done for $`B_K`$ and for quark masses. We define
$$\widehat{w}^{-1}[\mu ]\equiv \widehat{M}[\mu ]\left[\alpha _s(\mu )\right]^{-\widehat{\gamma }_O^{(0)T}/2\beta _0},$$
(7)
and using Eqs. (6) and (7) we obtain
$$\widehat{W}[\mu ,M]=\widehat{w}^{-1}[\mu ]\widehat{w}[M].$$
(8)
The effective Hamiltonian (4) can be written as $`\mathcal{H}_{eff}=\vec{O}^{RGI}\vec{C}^{RGI}`$, where
$$\vec{C}^{RGI}=\widehat{w}[M]\vec{C}(M),\qquad \vec{O}^{RGI}=\vec{O}(\mu )\widehat{w}^{-1}[\mu ].$$
(9)
$`\vec{C}^{RGI}`$ and $`\vec{O}^{RGI}`$ are scheme and scale independent at the order at which we are working. Therefore the effective Hamiltonian is split into terms which are individually scheme and scale independent. This procedure is generalizable to any effective weak Hamiltonian. The RGI $`\stackrel{~}{B}`$-parameters can be defined as
$$\stackrel{~}{B}_i^{RGI}=\underset{j}{\sum }\stackrel{~}{B}_j(\mu )w^{-1}(\mu )_{ji}.$$
(10)
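As a one-operator illustration (a leading-order sketch with an illustrative value of $`\mathrm{\Lambda }_{QCD}`$; the full analysis uses the NLO matrix $`\widehat{w}`$ of Eq. (7) and includes the operator mixing):

```python
import numpy as np

def alpha_s(mu, Lambda_QCD=0.34, nf=3):
    """One-loop running coupling (illustrative Lambda_QCD in GeV)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 2.0 * np.pi / (beta0 * np.log(mu / Lambda_QCD))

def B_RGI_LO(B_mu, mu, gamma0=4.0, nf=3):
    """Leading-order RGI B-parameter for a single, non-mixing operator:
    B^RGI = [alpha_s(mu)]^(-gamma0/(2 beta0)) * B(mu).
    For gamma0 = 4 and nf = 3 this is the familiar relation
    hat-B_K = alpha_s(mu)^(-2/9) * B_K(mu)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_s(mu, nf=nf) ** (-gamma0 / (2.0 * beta0)) * B_mu

# a hypothetical B_K(2 GeV) = 0.60 then gives hat-B_K of about 0.74
print(B_RGI_LO(0.60, 2.0))
```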
## 4 Numerical results
All details concerning the extraction of the matrix elements from correlation functions and the computation of the non-perturbative renormalization constants of the lattice operators can be found in . In this talk we report the results obtained in . The simulations have been performed at $`\beta =6.0`$ (460 configurations) and $`6.2`$ (200 configurations) with the tree-level Clover action, for several values of the quark masses and for different meson momenta. The main results we have obtained for the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }I=3/2`$ matrix elements and their comparison with the results in are reported in Tables 1 and 2. It is interesting to note, as expected from chiral perturbation theory, that the matrix elements of the $`\mathrm{\Delta }S=2`$ SUSY operators are enhanced with respect to the SM one by a factor of 2–12 at $`\mu =2`$ GeV.
In Figure 1 we show the strong dependence of $`\langle \overline{K}^0|O_4|K^0\rangle `$ on the strange quark mass when the “conventional” parameterization (2) is used, to be compared with the result obtained with the new parameterization. The results for the analogous $`\mathrm{\Delta }C=2`$ and $`\mathrm{\Delta }B=2`$ matrix elements presented at the conference are reported in . Although we have data at two different values of the lattice spacing, the statistical errors, and the uncertainties in the extraction of the matrix elements, are too large to enable any extrapolation to the continuum limit. For this reason, the best estimate of the central values of the $`B`$-parameters is obtained by averaging the results obtained at the two values of $`\beta `$ . As far as the errors are concerned, we take the larger of the two statistical errors.
# Magnetic Field Dependent Tunneling in Glasses
## Abstract
We report on experiments giving evidence for quantum effects of electromagnetic flux in barium alumosilicate glass. In contrast to expectation, below $`100`$mK the dielectric response becomes sensitive to magnetic fields. The experimental findings include both, the complete lifting of the dielectric saturation by weak magnetic fields and oscillations of the dielectric response in the low temperature resonant regime. As origin of these effects we suggest that the magnetic induction field violates the time reversal invariance leading to a flux periodicity in the energy levels of tunneling systems. At low temperatures, this effect is strongly enhanced by the interaction between tunneling systems and thus becomes measurable.
The low-temperature properties of glasses have been attributed to low-energy excitations present in nearly all amorphous solids and disordered crystals . Considerable theoretical and experimental effort has been expended to understand these excitations. In the standard tunneling model (TM) they are described on a phenomenological basis by non-interacting two-level tunneling systems (TLSs). These TLSs are thought to consist of small atomic entities which are able to tunnel between at least two equilibrium positions. Thus, a TLS can be approximately treated like a particle moving in an asymmetric double-well potential. The excitation energy between the two lowest states of the asymmetric double well, $`E=\sqrt{\Delta ^2+\Delta _0^2}`$, is determined by the asymmetry $`\Delta `$ and the tunnel splitting $`\Delta _0`$. Because of the random structure of glasses, $`\Delta `$ and $`\Delta _0`$ are broadly distributed. According to the TM, a distribution $`P(\Delta ,\Delta _0)=\overline{P}/\Delta _0`$ is assumed, where $`\overline{P}`$ is a constant.
Treating the coupling of TLSs to external acoustic and electric fields as a weak perturbation, the TM successfully explains many of the anomalous thermal, acoustic and dielectric low-temperature properties of glasses. Various recent experiments, however, demonstrate considerable deviations from this theory. In particular, we refer here to the permittivity (dielectric constant) $`\epsilon '`$ which levels off at very low temperatures whereas the TM predicts a $`\mathrm{ln}(1/T)`$ behavior. Interaction between TLSs has been suggested as a possible origin for these discrepancies , although a comprehensive theory is still missing. Phenomenologically, a low energy cut-off $`\Delta _{0,\mathrm{min}}`$ may be introduced for the spectral density of tunneling states to account for the levelling-off. Values of $`\Delta _{0,\mathrm{min}}/k_\mathrm{B}\approx 1`$ mK indeed describe the $`\epsilon '(T)`$ data quite well. However, applying the TM to the dielectric and acoustic behavior of glasses at higher temperatures requires values of $`\Delta _{0,\mathrm{min}}/k_\mathrm{B}<10^{-3}`$ mK. The remarkably enhanced magnitude of $`\Delta _{0,\mathrm{min}}`$ at very low temperatures indicates that interaction between TLSs leads to a renormalization of essential parameters of the TM such as $`\overline{P}`$ and $`\Delta _0`$. In this case quasiparticles are considered rather than bare non-interacting TLSs. Such renormalization effects might partially account for the general success of the standard tunneling model, although this has not been explicitly stated yet.
Moreover, it is conceivable that due to the interaction between TLSs the action of electromagnetic fields on the quantum-mechanical state of charged TLSs becomes measurable. This is motivated mainly by the recently reported observation that the dielectric response of multicomponent glasses is extremely sensitive to weak magnetic fields . A theoretical interpretation as a magnetic flux effect (Aharonov-Bohm effect) can be given in a generalized tunneling model .
In the present paper we report on measurements of the dielectric response of a BaO-Al<sub>2</sub>O<sub>3</sub>-SiO<sub>2</sub>-glass in magnetic fields ranging from a few mT up to $`25`$T. Indeed, we were able to demonstrate the existence of quantum effects of electromagnetic flux in glasses: magnetic fields cause drastic changes of the dielectric response and lead to oscillatory variations.
Barium alumosilicate glass is characterized by a large intrinsic polarizability. Therefore, thick-film sensors based on this glass have already been used in glass capacitance thermometers and also in previous experiments at very low temperatures . In order to substantiate the existence of the surprising phenomena we are going to report, and to exclude experimental artifacts, the measurements were carried out in Berlin, Karlsruhe and Grenoble under different experimental conditions. In the Grenoble experiment at fields up to $`25`$T the sensor was inserted into the mixing chamber, whereas in Berlin it was mounted on a silver post bolted to a nuclear demagnetization stage. In Karlsruhe it was attached to a silver cold-finger connected to the mixing chamber of a dilution refrigerator. The measurements were performed at $`1`$kHz either with a home-made bridge, consisting of a nine-decade inductive voltage divider and a lock-in amplifier, or with a self-balancing capacitance bridge (Andeen-Hagerling, model 2500A). Based on an extended temperature scale the temperature was measured by means of a <sup>3</sup>He melting curve thermometer, an Au:Er susceptibility thermometer, and a pulsed platinum NMR thermometer. In the three experiments with substantially different setups fully consistent results were obtained.
The temperature dependence of the permittivity $`\mathrm{\Delta }\epsilon '(T)/\epsilon '=[\epsilon '(T)-\epsilon '(T_0)]/\epsilon '(T_0)`$ and of the dielectric loss $`\mathrm{\Delta }\mathrm{tan}\delta (T)=\mathrm{tan}\delta (T)-\mathrm{tan}\delta (T_0)`$ without magnetic field is shown in Fig. 1 by the full dots (open symbols refer to measurements at finite fields and will be discussed later). In qualitative agreement with the predictions of the TM, $`\epsilon '`$ decreases logarithmically with decreasing temperature, passes through a minimum at $`T_\mathrm{m}=113`$ mK and then increases logarithmically. The low temperature logarithmic variation is caused by the resonant response of coherently driven TLSs in the low frequency limit. At higher temperatures relaxation sets in, leading to the minimum and the subsequent increase. As mentioned before, the weak temperature variation of $`\epsilon '(T)`$ at the lowest temperatures, sometimes called “dielectric saturation”, is not predicted by the TM. Moreover, the theoretically expected ratio of –2:1 of the two $`\mathrm{log}(T)`$-slopes is not observed either. Instead, as in previous experiments , the ratio is found to be closer to –1:1.
Within the TM the resonant part of the permittivity is given by
$$\epsilon _{\mathrm{res}}'-1=\frac{2\overline{P}p^2}{3\epsilon _0}\int _{\Delta _{0,\mathrm{min}}}^{E_{\mathrm{max}}}dE\,\frac{\sqrt{E^2-\Delta _{0,\mathrm{min}}^2}}{E^2}\mathrm{tanh}\left(\frac{E}{2k_\mathrm{B}T}\right),$$
(1)
and formally, the saturation effect of our sample can be accounted for by a cut-off energy $`\Delta _{0,\mathrm{min}}/k_\mathrm{B}=12.2`$ mK, which is a remarkably high value. The quantity $`\overline{P}p^2/\epsilon _0`$ is determined to be $`1.03\times 10^{-2}`$, where $`p`$ is the average magnitude of the dipole moment of the TLSs.
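For $`\Delta _{0,\mathrm{min}}\to 0`$ and $`k_\mathrm{B}T\ll E_{\mathrm{max}}`$ the integral in Eq. (1) can be evaluated asymptotically (a standard estimate, up to a constant of order unity):

$$\int _0^{E_{\mathrm{max}}}\frac{dE}{E}\mathrm{tanh}\left(\frac{E}{2k_\mathrm{B}T}\right)\approx \mathrm{ln}\left(\frac{E_{\mathrm{max}}}{2k_\mathrm{B}T}\right),$$

which is the $`\mathrm{ln}(1/T)`$ behavior of the TM mentioned above; a finite $`\Delta _{0,\mathrm{min}}`$ removes the low-energy states and produces the saturation once $`k_\mathrm{B}T`$ becomes comparable to $`\Delta _{0,\mathrm{min}}`$.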
At low frequencies the dielectric loss is caused by relaxation and $`\mathrm{tan}\delta \propto T^3`$ is expected since one-phonon processes should be dominant. However, as shown in Fig. 1 the measured temperature variation is considerably weaker. It is worth noting that analogous deviations from the prediction of the TM have been observed in low frequency acoustic experiments .
A further intriguing effect, which cannot be explained by the TM either, is the non-linear dielectric response to the amplitude of the excitation voltage, as shown in Fig. 2. With increasing voltage the minimum of $`\epsilon '`$ is shifted towards higher temperatures, which leads at the same time to different plateau heights at the lowest temperatures. The dependence of the dielectric response on the amplitude of the applied ac-field has been treated theoretically by Stockburger et al. . In contrast to our observation, however, their model predicts a change of the $`\mathrm{ln}(1/T)`$-slope and a voltage independent plateau. We suppose that the dependence on the amplitude of the applied ac-field is a direct consequence of electric flux acting on the quantum-mechanical state of TLSs and is therefore linked to the surprising magnetic field effects which will be discussed in the remainder of this paper.
Astonishing phenomena were found in magnetic fields. As shown in Fig. 3 the permittivity in the resonant regime varies non-monotonically with the magnetic field: $`\mathrm{\Delta }\epsilon '(B)/\epsilon '=[\epsilon '(B)-\epsilon '(B=0)]/\epsilon '(B=0)`$ increases with field strength, passes through a first maximum at about $`30`$ mT, and exhibits a second one around $`250`$ mT. As indicated in the insert of this figure another even more pronounced maximum occurs at the high field of about $`18`$ T. In the lower part of the figure the variation of the dielectric loss $`\mathrm{\Delta }\mathrm{tan}\delta =\mathrm{tan}\delta (B)-\mathrm{tan}\delta (B=0)`$ is drawn, which exhibits an oscillatory behavior, too: with increasing field strength the small rise at low fields is followed by a strong decrease. After going through a minimum the loss increases again.
An interpretation of the surprising magnetic field dependence of the dielectric response of glasses at low temperature can be given in a generalized tunneling model . There, the motion of a particle with charge $`Q`$ on a closed path with a double-well potential is considered. The presence of an induction field $`𝑩`$ violates the time reversal invariance due to the Aharonov-Bohm phase and leads to an energy spectrum of the TLSs which varies periodically with magnetic flux. The energy splitting of the ground state is then given by $`E(\varphi )=\sqrt{\Delta ^2+t(\varphi )^2}`$, where the tunneling splitting $`t(\varphi )=\Delta _0\mathrm{cos}(\pi \varphi /\varphi _0)`$ depends periodically on the magnetic flux $`\varphi `$ through the area enclosed by the path of the tunneling particle. The period of the oscillation is determined by $`\varphi _0=h/Q`$. The magnetic field causes the TLSs to carry a persistent tunneling current resulting in a magnetic moment. The low-temperature thermodynamic properties such as the specific heat or the permittivity calculated in the generalized model of independent TLSs are consequently periodic functions of the magnetic flux. The resonant part $`\epsilon _{\mathrm{res}}'`$ of the permittivity can be calculated using Eq. (1) as before, after substituting the tunneling parameter $`\Delta _0`$ and its lower limit $`\Delta _{0,\mathrm{min}}`$ by the flux dependent quantities $`\Delta _0\mathrm{cos}(\pi \varphi /\varphi _0)`$ and $`\Delta _{0,\mathrm{min}}\mathrm{cos}(\pi \varphi /\varphi _0)`$, respectively. $`\Delta _{0,\mathrm{min}}`$ vanishes for $`\varphi =\varphi _0/2`$ and $`\epsilon _{\mathrm{res}}'`$ should exhibit a maximum. In this case the integral in Eq. (1) can be approximated by $`\mathrm{ln}(1/T)`$, meaning that $`\epsilon _{\mathrm{res}}'(T)`$ is expected to vary logarithmically with temperature, in accordance with the TM.
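A numerical sketch of this substitution is given below (units, cut-off and the overall prefactor are our arbitrary choices; only the shape of the flux dependence matters):

```python
import numpy as np
from scipy.integrate import quad

def eps_res(T, d0min, Emax=1.0e4):
    """Integral of Eq. (1) with energies in mK and k_B = 1, up to the
    constant prefactor 2 Pbar p^2 / (3 eps0)."""
    f = lambda E: np.sqrt(E**2 - d0min**2) / E**2 * np.tanh(E / (2.0 * T))
    val, _ = quad(f, d0min, Emax, limit=200)
    return val

# flux dependence: Delta_0,min -> Delta_0,min * |cos(pi phi / phi0)|
T, d0min = 5.0, 12.2                        # mK, values quoted in the text
phi = np.linspace(0.0, 1.0, 51)             # phi / phi0
eps = [eps_res(T, d0min * abs(np.cos(np.pi * x))) for x in phi]
# eps(phi) is periodic and maximal at phi/phi0 = 1/2, where the cut-off
# vanishes and the ln(1/T) behaviour of the standard TM is recovered
```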
The experimental confirmation of this prediction follows from Fig. 4, where the variation of the permittivity with the magnetic field is shown for different temperatures. The maximum values of $`\mathrm{\Delta }\epsilon '(B)`$ were taken from this graph and plotted in Fig. 1 as open circles. These data points fall approximately onto the curves predicted by the TM for independent TLSs with the small value of $`\Delta _{0,\mathrm{min}}/k_\mathrm{B}\approx 1`$ mK. It is worth mentioning that already a field of the order of $`100`$ mT completely lifts the saturation of $`\epsilon _{\mathrm{res}}'(T)`$ at low temperatures. Similarly, the minimum values of $`\mathrm{\Delta }\mathrm{tan}\delta `$ have been taken and drawn in the lower part of Fig. 1. These data points coincide approximately with the $`T^3`$-dependence predicted by the TM.
As mentioned above, the specific heat is expected to exhibit an oscillatory behavior like the permittivity . Although we did not measure this quantity, we have observed that the time needed to reach thermal equilibrium after changing the external magnetic field depends on the applied field. It seems that the variation of $`\epsilon '`$ is accompanied by corresponding changes of the specific heat.
From Fig. 3 we estimate that the oscillation period of $`\epsilon '(B)`$ is roughly $`200`$ mT. Consequently a charge of $`Q\approx 4\times 10^5|e|`$ is required, where $`e`$ is the elementary charge. According to , this large value originates from the strong coupling between the TLSs. This is consistent with the large renormalized value of $`\Delta _{0,\mathrm{min}}/k_\mathrm{B}=12.2`$ mK which also indicates that the coupling between the TLSs is rather strong. Assuming an average distance between the TLSs of $`10^{-8}`$ m and an averaged dipole moment of $`p=2|e|\times 10^{-10}`$ m, the mean dipole-dipole interaction energy $`U_{\mathrm{int}}`$ is estimated to be of order $`U_{\mathrm{int}}/k_\mathrm{B}\approx 100`$ mK. This means that flux effects should indeed become observable below $`T\approx U_{\mathrm{int}}/k_\mathrm{B}\approx 100`$ mK. It has been shown that under certain conditions the excitation spectrum of $`N`$ strongly coupled TLSs with charge $`q`$ is equivalent to that of a single particle with charge $`Q=Nq`$ tunneling along a closed path. Therefore, it is tempting to introduce quasiparticles whose tunneling paths are pierced by a flux with the periodicity $`\varphi _0=h/(Nq)`$.
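For orientation, this estimate can be made explicit (a back-of-the-envelope evaluation on our part; the dielectric screening $`\epsilon _r`$ of the host glass is an assumed parameter):

$$U_{\mathrm{int}}\approx \frac{p^2}{4\pi \epsilon _0\epsilon _rr^3}=\frac{(2|e|\times 10^{-10}\text{ m})^2}{4\pi \epsilon _0\epsilon _r(10^{-8}\text{ m})^3}\approx \frac{0.7\text{ K}}{\epsilon _r}k_\mathrm{B},$$

so that an effective $`\epsilon _r`$ of order 7 reproduces the quoted $`U_{\mathrm{int}}/k_\mathrm{B}\approx 100`$ mK.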
The theoretical considerations by Kettemann et al. show that another mechanism exists which contributes to changes in the energy spectrum of coupled TLSs, too. The dipole moment of asymmetrical TLSs increases with decreasing tunneling splitting $`t(\varphi )`$, and thus varies with the external induction field $`𝑩`$. This implies that the dipolar coupling between TLSs also depends on $`𝑩`$. Thus, the energy of the ground and the first excited state of a cluster of two or three coupled TLSs may cross already at weak magnetic fields. As a result the value of $`\Delta _0`$ and therefore also of $`\Delta _{0,\mathrm{min}}`$ is altered by the magnetic field and consequently also the dielectric response of the TLSs.
In a quantitative analysis the average over all orientations and over the charges carried by the clusters of coupled TLSs must be calculated. Thus the oscillations become smeared out, especially at higher magnetic fields. In addition, the decrease of the dielectric loss and its oscillatory behavior indicate that the magnetic field also has an influence on the dynamics of coupled TLSs, in particular on their relaxation times. In a comprehensive treatment of the phenomena the interplay of the two effects, level crossing and strong coupling between TLSs, has to be taken into account. In this way it should also be possible to understand the maximum in $`\epsilon '`$ observed at $`18`$ T.
As demonstrated by Fig. 4 the magnetic effects become increasingly pronounced with decreasing temperature. Therefore, the question arises: What happens at ultra-low temperatures? It seems that a transition takes place from mesoscopic to macroscopic large clusters of strongly coupled TLSs. In our view the phase transition reported recently to occur at $`5.84`$mK in BaO-Al<sub>2</sub>O<sub>3</sub>-SiO<sub>2</sub>-glass is therefore intimately connected with the observations discussed here.
The authors thank G. Schuster, A. Hoffmann, D. Hechtfischer, and P. van der Linden for helping us with the realization of the experiments. The work has been supported through the DFG (Grant Hu359/11) and the TMR Programme (no. ERBFMGECT950077).
# Nonclassical correlations of phase noise and photon number in quantum nondemolition measurements
## I Introduction
In classical measurements, infinite precision is always desirable. Therefore there is no need for a fundamental measurement theory describing limited resolution. Instead, the lack of precision in any actual measurement is either neglected or considered to be an error which degrades the value of the measurement data obtained. In quantum mechanics, however, measurement precision always comes at a price. In particular, infinite precision requires a measurement interaction which completely randomizes some of the unobserved system properties. Consequently, limited precision may actually be desirable in quantum measurements.
For instance, a single mode of the electromagnetic field with a well defined photon number must have a completely random phase. Therefore a precise photon number measurement destroys phase coherence and all associated interference properties of the field mode with other coherent modes. If phase coherence and interference properties are preserved the intensity of the field mode can only be determined with a precision too low to resolve single photons. On the other hand, quantization emerges only when phase coherence is lost. Nevertheless a complete characterization of the light field dynamics requires both information about the intensity and the phase distributions. In general, it is therefore realistic to consider a compromise between phase uncertainty and intensity uncertainty.
In quantum nondemolition measurements of photon number, information about the photon number $`\widehat{n}`$ is obtained through the interaction of the measured field with a probe field or with probe atoms . This interaction introduces phase noise into the measured system, as required by the uncertainty relations . Since the purpose of the procedure is a measurement of photon number, it is very tempting to assume that a perfect resolution of photon number is the ideal case and therefore more desirable than a limited resolution. However, Kitagawa and coworkers have pointed out that even if photon number states are not resolved, a quantum nondemolition measurement of photon number may produce a minimum uncertainty state of phase and photon number. There is a trade-off, then, between the noise introduced and the resolution achieved, which requires the definition of a much larger class of ideal quantum measurements. By generalizing the conventional projective measurement postulate, it is possible to investigate this class of ideal quantum measurements, focussing especially on the transitional regime between classical low noise measurements at low resolution and the extreme quantum regime of fully resolved quantization and complete dephasing. It is shown in the following that the statistical properties of such intermediate resolution measurements include nonclassical correlations between the measured photon number and the phase noise introduced in the measurement which can only be observed in this transitional regime.
In part II, a theoretical description of photon number measurements with variable resolution is given and the effective measurement postulate is derived. In particular, the measurement operator provides a description of the dephasing caused by the measurement interaction.
In part III, the statistics of the measurement results are obtained. The transition from the classical limit to the quantum limit is discussed by pointing out the appearance of nonclassical correlations between the measurement result and the coherence after the measurement.
In part IV, the correlations are compared to fundamental properties of the operator formalism. It is shown that the statistics of the measurement results correspond to a specific operator ordering in the evaluation of correlations.
In part V, the results are summarized and possible implications are discussed. It is argued that the measurement statistics reveal that there is more to quantum reality than the integer photon number. By providing coherence, half integer photon numbers or “fuzzy” photon numbers also contribute to observable fact.
## II Variable resolution in ideal photon number measurements
### A Light field quantization and measurement precision
Based on the application of lasers, modern quantum optics has provided a characterization of the quantum mechanical light field which is much closer to a classical theory of noisy fields than the operator formalism would suggest. In particular, the classical property of light field coherence is much easier to control than the nonclassical property of quantized photon number. It is indeed difficult to measure the exact photon number of a single, well defined light field mode. In multi-mode open systems such as lasers, Langevin equations offer a better description of the light field dynamics than photon number rate equations, even in the presence of amplitude squeezing. This dominance of the classical wave properties in lasers has motivated a new kind of criticism of the photon picture, expressed especially in the notion of “lasing without photons” by Siegman. Even in the light of conventional quantum mechanics, it is questionable whether the concept of photon number has any meaning before it is definitely measured. In particular, Heisenberg emphasized that no value can be assigned to a physical property if the system is not in an eigenstate of that property. After all, what photon number should be assigned to a coherent superposition of photon number states? It should be obvious that one cannot just pick out one eigenvalue while neglecting the others. Nevertheless, this point is so contrary to our natural intuition that it still raises controversies among physicists.
In a quantum nondemolition measurement of photon number, a nonlinear coupling mechanism is utilized to shift a noisy and continuous pointer variable by an amount proportional to the photon number. As a consequence, the measurement readout of the photon number measurement is generally both noisy and continuous. The discreteness of the photon number eigenvalues only emerges if the noise in the pointer variable is sufficiently low. Thus, the actual measurement result obtained is usually a continuous variable and not a discrete one. In order to study the emergence of photon number quantization, one should therefore examine the properties of quantum measurements with variable resolution and continuous values for the photon number measurement results. If the reality of integer photon numbers is somehow “created” in the measurement, there should be a transition from classical fields to quantized fields depending only on the measurement resolution. While the basic tools for such an analysis are indeed provided by the standard quantum theory of measurement, the axiomatic nature of the mathematical approach often obscures the intuitive classical limit. Therefore, it is useful to formulate a generalized measurement postulate taking into account the limited measurement resolution. This measurement postulate summarizes the conventional results while illustrating the fundamental aspects of coherence and decoherence more clearly, providing a shortcut to the derivation of quantum noise features.
### B Generalized measurement postulate for pointer measurements
In a quantum nondemolition measurement, a pointer variable $`n_m`$ of the probe system is shifted by an amount corresponding to the photon number $`n`$ of the light field. However, since the pointer variable $`n_m`$ is itself noisy, there is some error in this procedure. Assuming Gaussian noise, the probability distribution of $`n_m`$ subject to an uncertainty of $`\delta n`$ reads
$$P(n_m)=\left(2\pi \delta n^2\right)^{-1/2}\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right).$$
(1)
This distribution applies to a photon number eigenstate. In order to describe the effects of a measurement on superpositions of photon number states, it is necessary to define an operator $`\widehat{P}_{\delta n}(n_m)`$, such that the general effect of a measurement result $`n_m`$ with a quantum mechanical uncertainty $`\delta n`$ on an initial state $`|\psi _i\rangle `$ is given by $`\widehat{P}_{\delta n}(n_m)|\psi _i\rangle `$. The probability of obtaining the result $`n_m`$ and the state $`|\psi _f(n_m)\rangle `$ after the measurement are then given by
$`P(n_m)`$ $`=`$ $`\langle \psi _i|\widehat{P}_{\delta n}^{\dagger }(n_m)\widehat{P}_{\delta n}(n_m)|\psi _i\rangle `$ (2)
$`|\psi _f(n_m)\rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{P(n_m)}}}\widehat{P}_{\delta n}(n_m)|\psi _i\rangle .`$ (3)
Note that the measurement thus described is ideal, since a pure state remains pure and no additional decoherence is introduced. It is assumed that the measurement system is prepared in a well defined quantum state and that the readout is accurate. The source of the uncertainty in the measurement is the quantum noise in the pointer variable $`n_m`$ before the measurement interaction takes place. By increasing this noise, the phase noise introduced in the measurement interaction is reduced and vice versa. In a realistic situation, there may be additional measurement uncertainties due to an inaccurate readout of the pointer or due to additional phase noise introduced in the measurement interaction. Such additional noise sources cause decoherence and change the pure state $`|\psi _f(n_m)\rangle `$ into a mixture, which would have to be represented by a density matrix. In the following, however, it is assumed that such additional noise sources can be avoided. It is then possible to deduce the correct measurement operator by comparing equations (1) and (2). It reads
$$\widehat{P}_{\delta n}(n_m)=\left(2\pi \delta n^2\right)^{-1/4}\mathrm{exp}\left(-\frac{(\widehat{n}-n_m)^2}{4\delta n^2}\right).$$
(4)
This operator describes the relation of the photon number operator $`\widehat{n}`$ with the value $`n_m`$ obtained in the measurement. Thus the connection between the quantum system and the classical measurement readout is established. Although the standard measurement postulate as formulated by von Neumann can be recovered by either letting $`\delta n`$ approach zero or by applying $`\widehat{P}_{\delta n}(n_m)`$ many times, the generalized concept of measurement represented by $`\widehat{P}_{\delta n}(n_m)`$ describes a much wider range of physical situations and is definitely closer to the kind of perception we know from everyday experience. In particular, it describes the classical limit of the uncertainty relations in the case of low resolution, $`\delta n\gg 1`$.
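To make the measurement postulate concrete, the following minimal sketch (Python with NumPy; the example state, the Fock space truncation, and all names are illustrative choices, not part of the original analysis) applies the operator of equation (4) to a state vector in the photon number basis, returning the outcome density of equation (2) and the normalized post-measurement state of equation (3):

```python
import numpy as np

def measure(psi, n_m, dn):
    """Apply the Gaussian measurement operator of Eq. (4) in the photon
    number basis; return the outcome density P(n_m) of Eq. (2) and the
    normalized post-measurement state of Eq. (3)."""
    n = np.arange(len(psi))
    P_op = (2*np.pi*dn**2)**(-0.25) * np.exp(-(n - n_m)**2 / (4*dn**2))
    phi = P_op * psi                   # P_{dn}(n_m) applied to |psi_i>
    prob = np.vdot(phi, phi).real      # <psi_i| P^2 |psi_i>, Eq. (2)
    return prob, phi / np.sqrt(prob)

# Example: the superposition (|4> + |5>)/sqrt(2).  A sharp measurement
# (dn = 0.1) collapses it onto a single number state; a soft measurement
# (dn = 2) leaves the superposition almost untouched.
psi = np.zeros(10)
psi[4] = psi[5] = 1/np.sqrt(2)
for dn in (0.1, 2.0):
    prob, psi_f = measure(psi, n_m=4.0, dn=dn)
    print(dn, prob, np.round(np.abs(psi_f[3:7])**2, 3))
```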
### C Photon number squeezing and phase noise
Although, strictly speaking, the phase of a light field mode is not an observable since no phase operator can be constructed, approximate operators and phase space distributions show that there is an uncertainty relation between photon number and phase given by $`\delta n\delta \varphi \ge 1/2`$. The role of this uncertainty in quantum nondemolition measurements of photon number has been investigated in the context of measurements using the optical Kerr effect. It will be shown in the following that the generalized measurement operator $`\widehat{P}_{\delta n}(n_m)`$ faithfully reproduces these experimentally confirmed results.
Since the phase itself cannot be represented by an operator, it is more realistic to illustrate the decoherence induced by the phase noise by analyzing the reduction in the expectation value of the complex field amplitude $`\widehat{a}`$. Adding Gaussian phase noise with a variance of $`\delta \varphi ^2`$ to an arbitrary field state reduces the initial expectation value of the amplitude $`\widehat{a}_i`$ to a final value of
$$\widehat{a}_f=\mathrm{exp}\left(-\frac{\delta \varphi ^2}{2}\right)\widehat{a}_i.$$
(5)
The overall average $`\widehat{a}_f(\text{av.})`$ of the field expectation value after the measurement is given by
$`\widehat{a}_f(\text{av.})`$ $`=`$ $`{\displaystyle \int \langle \psi _f(n_m)|\widehat{a}|\psi _f(n_m)\rangle P(n_m)dn_m}`$ (6)
$`=`$ $`{\displaystyle \int \langle \psi _i|\widehat{P}_{\delta n}(n_m)\widehat{a}\widehat{P}_{\delta n}(n_m)|\psi _i\rangle dn_m}`$ (7)
$`=`$ $`\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\langle \psi _i|\widehat{a}|\psi _i\rangle .`$ (8)
According to equation (5), this reduction in amplitude corresponds to a Gaussian phase noise with a variance of
$$\delta \varphi ^2=\frac{1}{4\delta n^2}.$$
(9)
Thus the amount of phase noise introduced in the measurement corresponds to the minimum noise required by the uncertainty relation of phase and photon number for a measurement resolution of $`\delta n`$. This is a direct consequence of assuming an ideal quantum mechanical measurement which does not introduce additional phase noise. In a realistic situation, it is likely that the phase noise introduced is somewhat higher than this ideal quantum limit. Relation (9) may then be used to determine how much excess phase noise is introduced in a given experimental setup. Note that this excess noise may originate not only from an additional source of decoherence, but also from an inaccurate readout of the pointer variable.
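The dephasing relation can also be checked numerically. The sketch below (Python/NumPy; the coherent state amplitude, truncation, and integration grid are arbitrary choices) integrates the unnormalized amplitude of equation (7) over all outcomes and compares the result with the decoherence factor of equation (8):

```python
import numpy as np

# Numerical check of Eq. (8): integrating <psi_i| P a P |psi_i> over all
# outcomes n_m reproduces exp(-1/(8 dn^2)) <psi_i| a |psi_i>.
alpha, N, dn = 2.0, 30, 0.5
n = np.arange(N)
fact = np.concatenate(([1.0], np.cumprod(np.arange(1.0, N))))
psi = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(fact)   # coherent state

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
grid = np.linspace(-5, N + 5, 4001)          # integration grid for n_m
total = 0.0
for n_m in grid:
    P_op = (2*np.pi*dn**2)**(-0.25) * np.exp(-(n - n_m)**2 / (4*dn**2))
    phi = P_op * psi
    total += np.vdot(phi, a @ phi)           # integrand of Eq. (7)
total *= grid[1] - grid[0]
print(total, np.exp(-1/(8*dn**2)) * np.vdot(psi, a @ psi))
```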
## III The emergence of quantization
### A Measurement of a coherent state
If the initial state $`|\psi _i\rangle `$ is a coherent state $`|\alpha \rangle `$ with the photon number state expansion
$$|\alpha \rangle =\mathrm{exp}\left(-\frac{|\alpha |^2}{2}\right)\sum _n\frac{\alpha ^n}{\sqrt{n!}}|n\rangle ,$$
(10)
then the measurement statistics defined by equation (2) reads
$`P(n_m)`$ $`=`$ $`\langle \alpha |\widehat{P}_{\delta n}^2(n_m)|\alpha \rangle `$ (11)
$`=`$ $`{\displaystyle \frac{\mathrm{exp}(-|\alpha |^2)}{\sqrt{2\pi \delta n^2}}}{\displaystyle \sum _n}{\displaystyle \frac{|\alpha |^{2n}}{n!}}\mathrm{exp}\left(-{\displaystyle \frac{(n-n_m)^2}{2\delta n^2}}\right),`$ (12)
and the coherent amplitude $`\widehat{a}_f`$ after the measurement reads
$`\widehat{a}_f(n_m)`$ $`=`$ $`{\displaystyle \frac{\langle \alpha |\widehat{P}_{\delta n}(n_m)\widehat{a}\widehat{P}_{\delta n}(n_m)|\alpha \rangle }{\langle \alpha |\widehat{P}_{\delta n}^2(n_m)|\alpha \rangle }}`$ (13)
$`=`$ $`\alpha \mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right){\displaystyle \frac{\sum _n\frac{|\alpha |^{2n}}{n!}\mathrm{exp}\left(-\frac{(n+\frac{1}{2}-n_m)^2}{2\delta n^2}\right)}{\sum _n\frac{|\alpha |^{2n}}{n!}\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right)}}.`$ (14)
The results shown in figures 1 to 4 have been calculated using these exact results. However, it is helpful to apply some approximations in order to identify the quantization effects.
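A direct transcription of the exact expressions (12) and (14) might look as follows (an illustrative Python/NumPy sketch, not the code used for the figures; the truncation $`N`$ is an arbitrary choice). At $`\delta n=0.3`$ it already shows the central effect discussed below: an integer outcome leaves less coherence than a half integer one.

```python
import numpy as np

def exact_stats(alpha, dn, n_m, N=60):
    """Exact measurement statistics of Eqs. (12) and (14) for a coherent
    state with real amplitude alpha, truncated at N photons."""
    n = np.arange(N)
    fact = np.concatenate(([1.0], np.cumprod(np.arange(1.0, N))))
    w = np.exp(-alpha**2) * alpha**(2*n) / fact          # Poisson weights
    g_int  = np.exp(-(n - n_m)**2 / (2*dn**2))           # integer comb
    g_half = np.exp(-(n + 0.5 - n_m)**2 / (2*dn**2))     # half-integer comb
    P = (2*np.pi*dn**2)**(-0.5) * np.sum(w * g_int)
    a_f = alpha * np.exp(-1/(8*dn**2)) * np.sum(w * g_half) / np.sum(w * g_int)
    return P, a_f

for n_m in (9.0, 9.5):      # integer vs half integer outcome, alpha = 3
    print(n_m, exact_stats(3.0, 0.3, n_m))
```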
For $`|\alpha |^2\gg 1`$, the photon number distribution may be approximated by a Gaussian distribution with a mean photon number $`|\alpha |^2`$ and a photon number fluctuation of $`|\alpha |`$. The application of the measurement operator $`\widehat{P}_{\delta n}(n_m)`$ then results in a convolution of two Gaussians. If the photon number resolution $`\delta n`$ is much smaller than the photon number fluctuation $`|\alpha |`$, then the amplitude of the photon number state components of $`|\alpha \rangle `$ does not change much within the measurement interval of $`n_m\pm \delta n`$ and the convolution may be approximately factorized into a product reading
$`\widehat{P}_{\delta n}(n_m)|\alpha \rangle `$ $`\approx `$ $`\stackrel{\text{Gaussian intensity distribution of }\alpha }{\overbrace{(2\pi |\alpha |^2)^{-1/4}\mathrm{exp}\left(-{\displaystyle \frac{(n_m-|\alpha |^2)^2}{4|\alpha |^2}}\right)}}`$ (16)
$`\times \underset{\text{decoherence and quantization effects}}{\underbrace{{\displaystyle \sum _n}(2\pi \delta n^2)^{-1/4}\mathrm{exp}\left(-{\displaystyle \frac{(n-n_m)^2}{4\delta n^2}}\right)\mathrm{exp}\left(i\varphi n\right)|n\rangle }},`$
where the phase $`\varphi `$ is defined by $`\alpha =|\alpha |\mathrm{exp}(i\varphi )`$.
It is thus possible to separate the state dependent photon number distribution from the fundamental effects of decoherence and quantization. By applying the approximations of equation (16) to the measurement statistics described by equations (11) and (13), an even clearer separation of classical noise properties and quantization effects is obtained. The approximate results read
$$P(n_m)\approx \underset{\text{ classical intensity distribution}}{\underbrace{(2\pi |\alpha |^2)^{-1/2}\mathrm{exp}\left(-\frac{(n_m-|\alpha |^2)^2}{2|\alpha |^2}\right)}}\underset{\text{ quantization effects}}{\underbrace{\sum _n(2\pi \delta n^2)^{-1/2}\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right)}}$$
(17)
for the probability, and
$$\widehat{a}_f(n_m)\approx \underset{\text{ classical amplitude average}}{\underbrace{\mathrm{exp}\left(i\varphi \right)\sqrt{n_m+\frac{1}{2}}\mathrm{exp}\left(-\frac{1}{8\delta n^2}\right)}}\underset{\text{ quantization effects}}{\underbrace{\frac{\sum _n\mathrm{exp}\left(-\frac{(n-\frac{1}{2}-n_m)^2}{2\delta n^2}\right)}{\sum _n\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right)}}}$$
(18)
for the coherent amplitude. Note that only the phase of the coherent amplitude expectation value $`\widehat{a}_f`$ after the measurement depends on the initial value of $`\alpha `$. The absolute value is determined by the measurement result and is proportional to $`\sqrt{n_m+1/2}`$. This result corresponds to the classical notion that the absolute value of the coherent amplitude should be the square root of the intensity.
The sums which express the quantization effects in equations (17) and (18) are periodic functions of $`n_m`$. In other words, quantization effects only depend on how close the measurement result $`n_m`$ is to an integer value. Because of this periodicity, the sums can be expressed as Fourier series. Specifically,
$$(2\pi \delta n^2)^{-1/2}\sum _n\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right)=1+2\sum _{k=1}^{\infty }\mathrm{exp}\left(-2\pi ^2\delta n^2k^2\right)\mathrm{cos}\left(2\pi kn_m\right)$$
(19)
and
$$(2\pi \delta n^2)^{-1/2}\sum _n\mathrm{exp}\left(-\frac{(n-\frac{1}{2}-n_m)^2}{2\delta n^2}\right)=1+2\sum _{k=1}^{\infty }(-1)^k\mathrm{exp}\left(-2\pi ^2\delta n^2k^2\right)\mathrm{cos}\left(2\pi kn_m\right).$$
(20)
Note that the Fourier coefficients are Gaussians in the modulation frequency variable $`k`$. The high frequency components of the periodic modulations are therefore strongly suppressed. Depending on the measurement resolution $`\delta n`$, it is reasonable to limit the expansion to only the first few contributions. This resolution dependent truncation of the Fourier series defines the transition from the classical regime to the quantum regime.
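Equation (19) is an instance of the Poisson summation formula, and a few lines suffice to verify it numerically (a sketch; the parameter values below are arbitrary):

```python
import numpy as np

dn, n_m = 0.3, 0.37
n = np.arange(-200, 201)
lhs = (2*np.pi*dn**2)**(-0.5) * np.sum(np.exp(-(n - n_m)**2 / (2*dn**2)))
k = np.arange(1, 20)
rhs = 1 + 2*np.sum(np.exp(-2*np.pi**2*dn**2*k**2) * np.cos(2*np.pi*k*n_m))
print(lhs, rhs)   # the two sides agree to machine precision
```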
### B From the classical limit to full quantization
In the classical limit, all Fourier components with $`k>1`$ are negligible. The measurement probability and the expectation value of the coherent field after the measurement read
$`P_{\text{class.}}(n_m)`$ $`=`$ $`(2\pi |\alpha |^2)^{-1/2}\mathrm{exp}\left(-{\displaystyle \frac{(n_m-|\alpha |^2)^2}{2|\alpha |^2}}\right)`$ (21)
$`\widehat{a}_{f,\text{class.}}(n_m)`$ $`=`$ $`\sqrt{n_m+1/2}\mathrm{exp}\left(i\varphi \right)\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right).`$ (22)
These results correspond to the classical assumption of continuous light field intensity and equally continuous Gaussian noise in the light field phase and amplitude. A typical example is shown in figure 1 for a coherent state with an amplitude of $`\alpha =3`$. The measurement resolution is at $`\delta n=0.7`$, quite close to the quantum limit. Nevertheless, the approximate results of equations (21) and (22) correspond quite well to the more precise results of equations (11) and (13). Indeed, the main discrepancy between the probability distribution $`P(n_m)`$ given by equation (21) and the exact result is due to the asymmetry of the Poissonian photon number distribution, which has been neglected by assuming a Gaussian photon number distribution in equations (17) and (18). This deviation becomes much smaller as the average photon number of the coherent state is increased. However, the Gaussian form is already a good approximation at the average photon number of nine shown in the examples.
As the quantum limit is approached, the classical results are modulated by quantum effects. In the probability distribution of measurement results, this modulation appears as a fringe pattern similar to that caused by an interference effect. At the same time, a complementary fringe pattern emerges in the coherence after the measurement as given by $`\widehat{a}_f(n_m)`$. The lowest order contributions to these quantization effects read
$`P(n_m)`$ $`=`$ $`P_{\text{class.}}(n_m)\left(1+2\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{cos}\left(2\pi n_m\right)\right)`$ (23)
$`\widehat{a}_f(n_m)`$ $`=`$ $`\widehat{a}_{f,\text{class.}}(n_m){\displaystyle \frac{1-2\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{cos}\left(2\pi n_m\right)}{1+2\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{cos}\left(2\pi n_m\right)}}.`$ (24)
The accuracy of this approximation is worst for $`\widehat{a}_f(n_m)`$ at integer or half integer values of $`n_m`$. At these points, it is accurate to within 1% for $`\delta n\ge 0.27`$ and accurate to within 10% for $`\delta n\ge 0.23`$. Thus, the reliability of the lowest order approximation is generally very high above $`\delta n\approx 0.25`$. Figure 2 shows the probability distribution and the coherent amplitude after the measurement at a resolution of $`\delta n=0.4`$. This resolution corresponds to a modulation factor of $`2\mathrm{exp}(-2\pi ^2\delta n^2)=0.085`$. The modulation is still very weak and the likelihood of obtaining an integer result is only about 1.2 times higher than the likelihood of obtaining a half integer result. Nevertheless, the quantization fringes in $`P(n_m)`$ and the decoherence fringes in $`\widehat{a}_f(n_m)`$ are clearly visible. The anticorrelation of the probability peaks and the coherence maxima is illustrated in figure 2 c), which shows the respective modulations near $`n_m=9`$, normalized using the classical results at $`n_m=9`$. Figure 3 shows the probability distribution and the coherent amplitude after the measurement at a resolution of $`\delta n=0.3`$. This resolution corresponds to a modulation factor of $`2\mathrm{exp}(-2\pi ^2\delta n^2)=0.338`$. The likelihood of obtaining an integer result is about twice as high as that of obtaining a half integer result, and the reduction in the coherent amplitude is about four times greater for integer $`n_m`$ than for half integer $`n_m`$. At an average decoherence factor of $`\mathrm{exp}(-1/(8\delta n^2))=0.25`$, the average coherent amplitude after the measurement is still quite significant. A measurement resolution of $`\delta n=0.3`$ thus combines aspects of photon number quantization and aspects of phase coherence, defining the center of the transitional regime between continuous field intensities and quantized photon numbers.
Between a resolution of $`\delta n=0.3`$ and a resolution of $`\delta n=0.2`$, the approximation given by equation (24) breaks down. For $`\delta n<0.2`$, the probability distribution is given by isolated Gaussians centered around integer measurement results $`n_m`$. Half integer results become extremely unlikely. However, if such an unlikely result is obtained, there still is coherence even in extremely precise measurements. This fact is usually obscured by the assumption of infinite precision inherent in the conventional projective measurement postulate. Figure 4 shows the probability distribution and the coherence after the measurement for a resolution of $`\delta n=0.2`$. Note that the approximation given by equation (23) is still very good for the probability distribution. However, the relative error in the peak values of the coherent amplitude $`\widehat{a}_f`$ after the measurement is nearly 100%. Therefore, the dashed curve in figure 4 b) does not show the approximate result, but instead shows the classical approximation $`\widehat{a}_{f,\text{class.}}`$ given by equation (22). This comparison illustrates the relatively high coherence at half integer measurement results $`n_m`$. At half integer measurement results $`n_m`$, the expectation value $`\widehat{a}_f`$ of the coherent amplitude is equal to $`(\sqrt{n_m+1/2})/2`$, or one half of the amplitude corresponding to a classical light field intensity of $`n_m+1/2`$. This result is valid for all $`\delta n<0.2`$, regardless of the average dephasing induced by the measurement interaction. Therefore, the peak values of the coherence after the measurement are much higher than the classical results, while the minima at integer photon number are actually closer to zero than the classical interpretation of dephasing would suggest. In the case of $`\delta n=0.2`$ shown in figure 4, the classical approximation predicts an average decoherence factor of $`\mathrm{exp}(-1/(8\delta n^2))=0.044`$. However, the peak values of coherence at half integer photon number are more than ten times higher and the minima at integer photon number are more than ten times lower than the classically expected coherence after dephasing. Since the likelihood of integer results is about ten times higher than the likelihood of half integer results, the main contribution to the average coherence after the measurement still originates from half integer photon number results. Even at fully resolved quantization, the half integer photon number results thus provide a contribution to the dephasing statistics.
### C Correlation between quantization and dephasing
The discussion above reveals a clear qualitative difference between measurement results $`n_m`$ of integer photon number and of half integer photon number. To obtain a quantitative expression, it is necessary to define a measure of quantization associated with each measurement result $`n_m`$. In the following, the quantization $`Q`$ of a measurement result $`n_m`$ is therefore defined as
$$Q(n_m)=\mathrm{cos}\left(2\pi n_m\right).$$
(25)
Thus, the quantization $`Q`$ of integer values of $`n_m`$ is +1 and the quantization of half integer values is -1. In the classical case, this results in an average quantization of zero. The average quantization $`\overline{Q}`$ of the measurement results is given by
$`\overline{Q}`$ $`=`$ $`{\displaystyle \int Q(n_m)P(n_m)dn_m}`$ (26)
$`=`$ $`\mathrm{exp}\left(-2\pi ^2\delta n^2\right).`$ (27)
Since $`\overline{Q}`$ depends only on $`\delta n`$, it may be used as an experimental measure of the resolution obtained in quantum nondemolition measurements of photon number. It is now possible to evaluate the correlation between the quantization observed and the coherence after the measurement by averaging the product,
$`\overline{Q\widehat{a}_f}`$ $`=`$ $`{\displaystyle \int Q(n_m)\widehat{a}_f(n_m)P(n_m)dn_m}`$ (28)
$`=`$ $`-\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\alpha `$ (29)
$`=`$ $`-\overline{Q}\widehat{a}_f(\text{av.}).`$ (30)
The average of the product of quantization and coherence is exactly equal to the negative product of the averages. Therefore, quantization and coherence are strongly anti-correlated. The correlation $`C(Q,\widehat{a}_f)`$ is given by
$`C(Q,\widehat{a}_f)`$ $`=`$ $`\overline{Q\widehat{a}_f}-\overline{Q}\widehat{a}_f(\text{av.})`$ (31)
$`=`$ $`-2\overline{Q}\widehat{a}_f(\text{av.})`$ (32)
$`=`$ $`-2\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\alpha .`$ (33)
Figure 5 shows this correlation as a function of the measurement resolution $`\delta n`$. The magnitude of the correlation is maximal at $`\delta n=1/(2\sqrt{\pi })`$, which is a resolution of about 0.282 photons. At this point, the average quantization $`\overline{Q}`$ is equal to $`\mathrm{exp}(-\pi /2)\approx 0.208`$ and the average coherent amplitude $`\widehat{a}_f(\text{av.})`$ after the measurement is equal to $`\mathrm{exp}(-\pi /2)\approx 0.208`$ times the original amplitude $`\alpha `$.
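The location of this maximum follows directly from the exponents in equation (33) and can be confirmed numerically, as in the following sketch (the grid and range are arbitrary):

```python
import numpy as np

# |C(Q, a_f)| = 2 exp(-2 pi^2 dn^2 - 1/(8 dn^2)) |alpha|, evaluated per unit |alpha|
dn = np.linspace(0.05, 1.0, 20000)
C = 2*np.exp(-2*np.pi**2*dn**2 - 1/(8*dn**2))
print(dn[np.argmax(C)], 1/(2*np.sqrt(np.pi)))   # both ~0.282
print(np.exp(-np.pi/2))                          # Q-bar at the optimum, ~0.208
```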
There appears to be a well defined transition from the classical limit to the quantum limit of measurement resolution at $`\delta n=1/(2\sqrt{\pi })`$, which is characterized by statistical properties not observable in either the extreme quantum limit or the classical limit. Since it should be possible to obtain these statistical properties from experimental results, some measure of reality must be attributed to the concept of variable quantization $`Q`$. Specifically, even though it is clear that only measurement results of full quantization $`Q=1`$ remain as the resolution is increased, the reduced decoherence at $`Q=-1`$ demonstrates that such results cannot be interpreted as measurement errors due to either a higher or a lower photon number. This measurement scenario thus highlights the problem of assuming the existence of an integer photon number before the photon number is actually measured. Obviously, quantization is not a property of the system which is simply hidden by the noise of the low precision measurement in the classical limit. Some very real physical properties are associated with noninteger values of photon number measurement results. Possibly, it is necessary to consider operator values other than the eigenvalues as part of the physical reality associated with quantum mechanical operator variables.
## IV Fundamental properties of the operator formalism
### A Quantization and the parity operator
The generalized measurement operator $`\widehat{P}_{\delta n}(n_m)`$ describes both classical and quantum mechanical features of measurements in terms of a quantum mechanical operator. Classically, it would be possible to distinguish between the measurement result $`n_m`$ and the actual photon number $`n`$. In quantum mechanics, however, the photon number $`\widehat{n}`$ is an operator which does not have a well defined value unless the field is in a photon number eigenstate. Therefore, the relationship between the measurement result $`n_m`$ and the photon number operator $`\widehat{n}`$ is quite different from the classical relationship between a noisy measurement result and the true value of the measured quantity.
A quantum mechanical property which may provide a connection between the definition of quantization $`Q`$ based on the measurement result $`n_m`$ and the properties of the photon number operator $`\widehat{n}`$ is the parity $`\widehat{\mathrm{\Pi }}`$ defined as
$$\widehat{\mathrm{\Pi }}=(-1)^{\widehat{n}}.$$
(34)
The square of the parity $`\widehat{\mathrm{\Pi }}^2`$ may then be associated with the quantization $`Q`$. Of course, the quantum mechanical value of quantization is always one. However, by “breaking apart” the square of the parity, a correlation between quantization and coherent field amplitude may be established. It reads
$$\langle \widehat{\mathrm{\Pi }}\widehat{a}\widehat{\mathrm{\Pi }}\rangle -\langle \widehat{\mathrm{\Pi }}^2\rangle \langle \widehat{a}\rangle =-2\langle \widehat{\mathrm{\Pi }}^2\rangle \langle \widehat{a}\rangle .$$
(35)
If $`\langle \widehat{\mathrm{\Pi }}^2\rangle `$ is identified with $`\overline{Q}`$ and $`\langle \widehat{a}\rangle `$ is identified with $`\widehat{a}_f(\text{av.})`$, this correlation corresponds to the one given in equation (31). The relationship between coherence and quantization can thus be traced to the anti-commutation between parity and field amplitude, $`\widehat{\mathrm{\Pi }}\widehat{a}=-\widehat{a}\widehat{\mathrm{\Pi }}`$. One could indeed argue that the correlation which appears in the measurement is hidden in the commutation relations of the operator formalism.
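The anticommutation behind equation (35) is easily confirmed in a truncated Fock basis, as the following sketch does for a coherent state (the amplitude and truncation are arbitrary choices):

```python
import numpy as np

N = 30
n = np.arange(N)
Pi = np.diag((-1.0)**n)                        # parity operator of Eq. (34)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
fact = np.concatenate(([1.0], np.cumprod(np.arange(1.0, N))))
psi = np.exp(-2.0**2/2) * 2.0**n / np.sqrt(fact)   # coherent state, alpha = 2
print(np.vdot(psi, Pi @ a @ Pi @ psi))  # equals -<a> ...
print(np.vdot(psi, a @ psi))            # ... while <a> itself is positive
```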
### B Ambiguous correlations in the operator formalism
The correlation given in equation (35) is of course a result of the specific order in which the operators have been applied. Since $`\widehat{\mathrm{\Pi }}^2`$ is always one, there is no correlation as soon as both parity operators are placed on the same side of the field operator $`\widehat{a}`$. In principle, it is not possible to determine the correlation between noncommuting quantum variables directly from the operator formalism because of this ambiguity concerning the ordering of the operators.
In particular, the case of photon number quantization and parity belongs to a general class of correlations based on the inequality
$$\frac{1}{2}\left(\langle \widehat{A}\widehat{B}^2\rangle +\langle \widehat{B}^2\widehat{A}\rangle \right)\ne \langle \widehat{B}\widehat{A}\widehat{B}\rangle ,$$
(36)
where $`\widehat{A}`$ and $`\widehat{B}`$ represent arbitrary noncommuting operator variables. The operator ordering $`\widehat{B}\widehat{A}\widehat{B}`$ allows correlations even if the quantum state is an eigenstate of $`\widehat{A}`$ or $`\widehat{B}^2`$. This property definitely contradicts any assumption of classical statistics. Nevertheless, such correlations can be obtained in experiment, even though the outcome of a direct measurement of $`\widehat{A}`$ or $`\widehat{B}`$ performed on the initial state would be perfectly predictable. Thus the quantum nondemolition measurement discussed in this paper represents an example of a more general class of measurements revealing fundamental nonclassical properties of quantum statistics.
### C Operator ordering and physical reality
In the theory of quantum mechanics, the classical values of physical variables are replaced by operators. Consequently, it is not possible to assign a well defined value to an operator variable if the system is not in an eigenstate of the operator. This situation calls for a review of our concepts of physical reality, as can be seen from the arguments concerning entanglement and the debate over hidden variables. Quantum mechanical uncertainty is definitely quite different from a classical lack of knowledge, and this difference is revealed in the correlations between noncommuting variables. For instance, the EPR argument basically uses the entanglement of two particles to establish a correlation between position and momentum of the same particle, thus trying to circumvent the restrictions imposed by uncertainty on Einstein's arguments in the Bohr-Einstein dialogue. However, as Bell has shown, the correlations between noncommuting variables thus obtained cannot be represented by a classical probability distribution. Since this paradox is an inherent property of the operator formalism, it should be possible to trace its origin directly to the fundamental nonclassical properties of quantum mechanical measurements.
In principle it would be desirable to know the value of a correlation between noncommuting variables such as the parity $`\widehat{\mathrm{\Pi }}`$ and the coherent amplitude $`\widehat{a}`$ without reference to a measurement. If there were hidden variables defining classical values for both operators, there should also be a well defined correlation. However, the formalism itself introduces an ambiguity. A formal calculation of correlations based on the expectation values of operator products raises the question of operator ordering. A particularly striking ambiguity is represented by equation (35), since it permits a correlation of $`\widehat{\mathrm{\Pi }}^2`$ with the coherent amplitude even though the eigenvalues of $`\widehat{\mathrm{\Pi }}^2`$ are all one. Of course one could argue that it should not be allowed to separate the square of the parity operator. However, such a postulate would not be based on any physical observation but only on preconceived notions of what reality should be like. It is therefore important to note that unusual correlations such as the one given by equation (35) can have a real physical meaning in measurement statistics.
Since quantum mechanics does not allow the simultaneous assignment of well defined physical values to noncommuting observables, it is not possible to discuss correlations between such observables without a definition of the measurement by which such correlations are obtained. The futility of trying a more general approach is clearly revealed by the ambiguity of the correlations caused by the commutation relations between operators.
## V Conclusions and Outlook
### A Interpretation of the nonclassical correlations
The results presented above show that a quantum nondemolition measurement reveals much more than just the photon number of a light field at an intermediate measurement resolution close to $`\delta n=0.3`$. In this intermediate regime, the property that phase coherence in the field requires quantum coherence between neighbouring photon number states emerges visibly as a correlation between the continuous measurement result $`n_m`$ and the coherence after the measurement $`\widehat{a}_f`$. This measurement scenario thus reveals the difference between quantum mechanical uncertainty and a classical lack of precision. In particular, there is a real physical difference between the measurement results of half integer photon number and the measurement results of integer photon number which makes it impossible to argue that the measurement of half integer photon number is merely an error. By introducing the variable $`Q`$ to denote the quantization of the measurement result, it is possible to evaluate the correlation between quantization and decoherence in the measurement. In the operator formalism, the quantization can be interpreted as the square of the parity operator $`\widehat{\mathrm{\Pi }}`$. It is then possible to derive the observed correlation directly from the operator formalism.
The correlation obtained both from the statistics of the quantum nondemolition measurement and from the operator statistics suggests the reality of half integer photon number results. Depending on the circumstances, quantum measurements may therefore reveal physical values of operator variables which are quite different from the eigenvalues of the corresponding operators. At the same time, the ambiguity of the correlations between operator variables shows that neither an identification of the eigenvalues $`n`$ nor of the measurement results $`n_m`$ with elements of reality can be valid. It is therefore not sufficient to extend the range of photon number values. Instead, the statistics of physical properties should be based on the measurement results obtained in a specific measurement setup. The ambiguity in the formalism can then be resolved by applying the appropriate generalized measurement postulate.
It seems that the physical property of light field intensity given by the photon number cannot be attributed to any measurement independent elements of reality. Possibly, it might be a useful compromise to regard the measurement results $`n_m`$ as elements of a fundamentally noisy reality, while acknowledging the qualitative dependence of the measurement result on the resolution $`\delta n`$. In the classical limit, the identification of $`n_m`$ with the actual light field intensity is usually not problematic. Therefore, our classical concept of reality survives on the macroscopic level, even though it has to be abandoned in the microscopic regime. In the quantum limit, $`n_m`$ can again be identified with the eigenvalues of the operator $`\widehat{n}`$. In this manner, a continuous transition between our classical concept of reality and the mysterious properties of the quantum regime can be described.
### B Experimental possibilities
The measurement statistics described here should be obtainable by carefully evaluating the data obtained in any quantum nondemolition measurement followed by a measurement of field coherence, e.g. by homodyne detection. It is important, however, to keep track of the correlation between the measurement result $`n_m`$ and the corresponding average results of the field measurements $`\widehat{a}_f(n_m)`$. This requires some amount of time resolution, for example in the form of light field pulses or perhaps of solitons in fibers. Unfortunately, it is extremely difficult to realize quantum nondemolition measurements of high resolution in the optical regime. The experimental results cited here are still well in the classical regime of $`\delta n>1`$. Possibly, a realization based on the interaction of single atoms with a microwave mode might be more promising. In particular, the use of a variable number of single probe atoms passed through the cavity should allow a particularly reliable variation of the photon number resolution parameter $`\delta n`$.
The challenge presented by the aspects of quantum theory discussed above is to obtain sufficient control of quantum coherence to explore the properties at the very limit of quantum mechanical uncertainty. The effects observed in this regime should then help to illustrate the quantum mechanical properties utilized for quantum computation, quantum communication, and other aspects of quantum information . The continuous transition from the classical aspects of optical coherence to the quantum properties of the light field can also serve as a tool to pinpoint the technological requirements for more complex implementations of quantum optical devices.
## Acknowledgements
The author would like to acknowledge support from the Japan Society for the Promotion of Science (JSPS).
# Emerging Challenges in Computational Topology
## 1 Introduction and Background
Over the past 15 years, computational geometry has become a very productive area, with applications in fields such as graphics, robotics, and computer-aided design. Computational geometry, however, primarily focuses on discrete problems involving point sets, polygons, and polyhedra, and uses combinatorial techniques to solve these problems. It is now time for computational geometry to broaden its scope in order to meet the challenges set forth in the President’s Information Technology Advisory Committee (PITAC) Report \[PIT98\] and the Information Technology for the Twenty-First Century (IT<sup>2</sup>) Initiative \[IT99\], specifically the need for accelerated progress in information visualization, advanced scientific and engineering computation, and computational algorithms and methods.
There is a need to extend computational geometry—with its emphasis on provable correctness, efficiency, and robustness—to continuous domains, curved surfaces, and higher dimensions. Such an extension brings computational geometry into contact with classical topology, just as earlier research led to inextricable connections with combinatorial geometry—to the great benefit of both fields.
We intend the name computational topology to encompass both algorithmic questions in topology (for example, recognizing knots) and topological questions in algorithms (for example, whether a discrete construction preserves the topology of the underlying continuous domain).
Research into computational topology has started already \[Veg97\], and is at present being undertaken separately by topology, computational geometry, and computer graphics communities, among others. Each of these fields has developed its own favored approaches to shape representation, manipulation, and analysis. Algorithms are often specific to certain data representations, and the underlying questions common to all approaches have not been given adequate attention.
The Workshop on Computational Topology, June 11–12, 1999, in Miami Beach, Florida, brought together researchers involved in aspects of computational topology. The purposes of this interdisciplinary workshop were to set goals for computational topology, identify important problems and areas, and describe key techniques common to many areas.
## 2 Goals
Geometric computing is a fundamental element in several of the areas highlighted in the IT<sup>2</sup> initiative: information visualization \[IT99, section 2, “Fundamental Information Technology Research”, under human-computer interaction and information management\], advanced science and engineering computation \[IT99, section 3, “Advanced Computing for Science, Engineering, and the Nation”\], and computational and algorithmic methods \[IT99, section 3, under computer science and enabling technology\]. Scientific and engineering computing often simulates physical objects and their interactions, on scales that vary from the atomic to the astronomical. Modeling the shapes of these objects, and the space surrounding them, is a difficult part of these computations. Information visualization also involves shapes and motions, as well as sophisticated graphics rendering techniques. Each of these two areas, as well as many others, would benefit from advances in generic computational and algorithmic methods.
Some of the most difficult and least understood issues in geometric computing involve topology. Up until now, work on topological issues has been scattered among a number of fields, and its level of mathematical sophistication has been rather uneven. This report argues that a conscious focus on computational topology will accelerate progress in geometric computing.
Topology separates global shape properties from local geometric attributes, and provides a precise language for discussing these properties. Such a language is essential for composing software programs, such as connecting a mesh generator to a computational fluid dynamics simulation. Mathematical abstraction can also unify similar concepts from different fields. For example, basic questions of robot reachability or molecular docking become similar topological questions in the appropriate configuration or conformation spaces. Finally, by separating shape manipulation from application-specific operations, we expect to improve reliability of geometric computing in many domains, just as other large software systems (for example, operating systems and internet routing) have gained reliability through layered design.
## 3 Areas and Problems
We have identified five main areas in which computational topology can lead to advances in simulation and visualization.
* Shape acquisition. The entry of the shapes of physical objects into the computer is becoming increasingly automated. Part of this process is developing algorithms that turn a set of measurements or readings into a topologically valid shape representation.
* Shape representation. Many different computer representations of shape are in use. Describing the relationships between them, converting from one to the other, and developing new representations, all require topological ideas and methods.
* Physical simulation. For scientific and engineering computations, shape representations are typically meshed into small pieces. Many of the issues that have arisen in mesh generation are topological.
* Configuration spaces. Configuration or conformation spaces represent the possible motions of objects moving among obstacles, mechanical devices, or molecules. These spaces are usually high-dimensional and non-Euclidean, and hence raise some rather deep topological questions.
* Topological computation. Some recent advances in topology itself involve algorithms and computation. Better software for geometric computing will help advance this approach to topology, while new techniques and representations developed for topological problems will contribute to the advancement of geometric computing.
In each area we have selected a few problems for more detailed discussion.
### 3.1 Shape Acquisition
Computer representations of shapes can either be designed by a person using CAD tools or acquired from an existing physical object. The latter approach offers advantages of speed and faithfulness to an original, which is of course crucial in applications such as medical imaging. Automatic acquisition of shapes poses a wealth of geometric and topological challenges.
#### Shape Reconstruction from Scattered Points.
Modern laser scanners can measure a large number of points on the surface of a physical object in a matter of seconds. The most basic computational problem is the reconstruction of the “most reasonable” geometric shape that generated the point sample. We find algorithmic solutions to this problem in both the computational geometry and the computer graphics literature. Consistent with the dominant cultures in these two areas, solutions suggested in computational geometry are discrete in nature, while solutions in computer graphics are based on numerical ideas.
The early work of Hoppe et al. \[HDMS92\] in computer graphics drew wide attention to the reconstruction problem. The authors give an algorithm that works for point data sampled everywhere densely on the surface of the object. A basic step in the reconstruction estimates the surface normal at a point using a best-fit plane determined by near neighbors. This idea is inherently differential and limits the algorithm to shapes and data for which locally linear approximations provide useful information. Raindrop Geomagic, Inc. takes a more global approach in its software Wrap, which reconstructs a shape using the 3-dimensional Delaunay triangulation of the sampled points \[Rai96\]. Amenta and Bern describe another algorithm using the Delaunay triangulation together with differential ideas \[AB98\], and go on to prove that their algorithm gives a geometrically close and topologically correct output, under certain assumptions about the point sample. This result rationalizes the reconstruction process and focuses attention on the more difficult cases in which the assumptions are violated, for example, surfaces with creases or boundaries, and sample points with noise.
#### Manifold and Space Learning.
Mathematically, it makes perfect sense to generalize the reconstruction problem from $`R^\mathrm{3}`$ to higher-dimensional Euclidean space. Perhaps somewhat surprisingly, this generalization makes sense also from the viewpoint of applications, including speech recognition, weather forecasting, and economic prediction. Many natural phenomena can be sampled by individual measurements, where each measurement can be interpreted as a point in $`R^d`$, for some fixed dimension $`d`$.
The reconstruction problem in $`R^d`$ is more difficult than in $`R^\mathrm{3}`$ not only because $`d`$ usually exceeds 3, but also because we have no a priori knowledge about the intrinsic dimension of the shape that we wish to reconstruct. It might have mixed or fractal, or even altogether ambiguous, dimension. Since the input data is a discrete set of points, which by definition has dimension zero, the question itself is highly ambiguous and the answer depends on the scale at which we view the data. The idea of scale dependent variation as applied in the definition of fractal or Hausdorff dimension \[Mat95\] thus suggests itself. It appears in the work of Jones \[Jon90\], where local dimension is estimated through the variation of linear best-fits in a hierarchy of nested neighborhoods. It is also manifest in the work of Edelsbrunner and Mücke \[EM94\], who define alpha shapes as a family of reconstructed shapes parametrized by scale. One of the challenging problems in this context is the study of the interaction between noise and scale.
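As a toy illustration of this scale dependence, the following two-dimensional sketch filters Delaunay triangles by circumradius, a minimal stand-in for the alpha shape family of \[EM94\] (it is not the full weighted, three-dimensional construction; SciPy's Delaunay routine and all parameters are our illustrative choices):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex_2d(points, alpha):
    """Keep the Delaunay triangles whose circumradius is below alpha --
    a minimal 2D stand-in for the alpha-shape family of reconstructions."""
    kept = []
    for s in Delaunay(points).simplices:
        p, q, r = points[s]
        a = np.linalg.norm(q - r)          # side lengths
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        area = 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))
        if area > 1e-12 and a*b*c / (4*area) < alpha:   # circumradius R = abc/4K
            kept.append(s)
    return kept

# A noisy circle: small alpha keeps only the thin ring of sample triangles,
# large alpha also keeps the big triangles spanning the hole.
rng = np.random.default_rng(0)
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)] + 0.02*rng.standard_normal((200, 2))
print(len(alpha_complex_2d(pts, 0.2)), len(alpha_complex_2d(pts, 2.0)))
```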
#### Reconstruction from Slices.
In many applications the input data includes additional information that can help in reconstructing the shape. Examples are estimates of surface normals provided by the scanner or information encoded in the sampling sequence. A classic version of the reconstruction problem in the latter category presents the data in slices, each slice consisting of one or more polygons given by a cyclic sequence of vertices. Usually, the slicing planes are parallel, in which case the reconstruction reduces to connecting each pair of contiguous slices.
A problematic aspect of this approach to shape reconstruction is the nonsymmetric treatment of coordinate directions. In other applications, however, nonsymmetric treatment seems warranted or even necessary. For example, the animation of a moving planar shape can be viewed as sweeping a surface in $`R^\mathrm{2}`$ times time. Branching occurs at critical points, which correspond to moments in time where the shape changes its topology. The relevant mathematics here is Morse theory, which studies the combinatorial and differential structure of critical points \[Mil63\].
A related problem is morphing: given two surfaces in $`R^\mathrm{3}`$, construct a continuous deformation of one surface into the other. The deformation may be a homotopy or a cobordism. Again we can view this as a problem in Morse theory, only one dimension higher than before. This view is adopted in \[CEF98\], where canonical deformations are used in the construction of “shape spaces”. Such spaces could be useful in building databases of shapes, such as drug compounds, anatomical structures, and mechanical tools.
#### Crystal Structures from X-ray Data.
The standard tool for determining conformations of atoms and molecules in crystals is X-ray crystallography. Missing phase information must be inferred to convert observed Bragg diffraction intensity data into a phased Fourier amplitude set. This process continues to become more routine even for macromolecules such as proteins and viruses. A Fourier transform of the amplitude set then produces an electron density function over $`R^\mathrm{3}`$. If the observed intensity set extends far enough, the peaks of the density function provide starting atomic coordinates suitable for least squares refinement; however, macromolecular density maps usually do not have atomic resolution.
It is again convenient to describe the situation using the language of Morse theory. The density is viewed as the height function of a 3-manifold in $`R^\mathrm{4}`$, with four types of critical points: “peaks”, “passes”, “pales”, and “pits”. The reconstruction of the atomic or molecular configuration may be complicated by the presence of excessive noise, thermal motion, positional and occupancy disorder, or lack of atomic resolution. Morse theory interpretations of crystallographic density functions are being carried out for a variety of crystal structures, ranging from macromolecules with less than atomic resolution \[FCGL97\] to ultra precise small molecule structures for which quantum mechanical perturbations, such as lone pair density peaks in the middle of covalent bonds, are detectable \[Bad94\]. When thermal motion is the primary focus, neutron Bragg diffraction data often are used, from which nuclear density maps rather than electron density maps are produced; these do not include quantum perturbations.
Advances in computational topology can contribute to the above and to other problems such as classification schemes for crystal structures using Heegaard level surfaces between passes and pales \[Joh99\], and certain related minimal surfaces \[LN99\]. Delaunay-based reconstruction might provide useful tools to address the problem of “topological noise” such as spurious peaks and passes. Morse theory can also play a role in efficient algorithms for finding structures in electron density data \[CSA99\].
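A crude flavor of this analysis can be conveyed by classifying grid points of a sampled density map through comparison with their neighbors, as in the sketch below (Python/NumPy; it detects only peaks and pits, and a faithful treatment of passes and pales requires considerably more care):

```python
import numpy as np

def classify_extrema(density):
    """Classify interior grid points of a sampled 2D scalar field as peaks
    or pits by comparison with their eight neighbors.  Saddle points
    ("passes"/"pales") are deliberately left out of this toy version."""
    peaks, pits = [], []
    for i in range(1, density.shape[0] - 1):
        for j in range(1, density.shape[1] - 1):
            nb = density[i-1:i+2, j-1:j+2].copy()
            v = density[i, j]
            nb[1, 1] = -np.inf
            if v > nb.max():
                peaks.append((i, j))
            nb[1, 1] = np.inf
            if v < nb.min():
                pits.append((i, j))
    return peaks, pits

# Toy density map with two Gaussian "atoms"
x, y = np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 80))
rho = np.exp(-((x - 0.7)**2 + y**2) / 0.1) + np.exp(-((x + 0.7)**2 + y**2) / 0.1)
print(classify_extrema(rho))   # two peaks, near the two atom centers
```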
### 3.2 Shape Representation
Data structures for representing shapes have emerged independently in many different fields. These representations include unstructured collections of polygons (“polygon soup”), polyhedral models, subdivision surfaces, spline surfaces, implicit surfaces, skin surfaces, and alpha shapes. Generally speaking, these methods are at best adequate within their own fields, and not well-suited for connecting across fields. At least in the CAD area, there is growing awareness that future systems must be more mathematically sophisticated than today’s systems. Rida Farouki writes, “At the heart of this problem lie some deep mathematical issues, concerned with the computation, representation, and manipulation of complex geometries” \[Far99\]. Shape representation appears to be an ideal area for collaboration between mathematicians and computer scientists.
#### Conversion Between Different Representations.
A number of ad hoc methods exist for converting between different types of representations, usually to and from polyhedral models. These methods typically use geometric criteria to evaluate the conversion, for example, guaranteeing that the original surface is pointwise not more than a small tolerance distant from the polygonal mesh. Geometry alone is insufficient, however, as it does not guarantee topological properties such as “watertightness”. Including topological criteria in the evaluation will lead to more correct conversion programs.
#### Topology Preserving Simplification.
The process of replacing a polygonal surface with a simpler one, while essential to many hierarchical representations, is notorious for introducing topological errors which can be fatal for later operations. A popular method, edge contraction \[HDD<sup>+</sup>93\], can be applied to general simplicial complexes, but is not in general guaranteed to produce a complex homeomorphic to the original. Dey et al. \[DEGN98\], however, have proved that the complex after contraction is homeomorphic to the original if the neighborhood of the contracted edge satisfies a link condition. For 2- and 3-manifolds, the link condition defines the contractable edges.
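The link condition itself is inexpensive to check. The sketch below spells out one reading of it for a 2-manifold given as a list of triangles (a simplified test, not a transcription of the general algorithm of \[DEGN98\]): an edge of the tetrahedron boundary fails, while an octahedron edge passes.

```python
def links(triangles, a, b):
    """Vertex links of a and b, their edge links, and the link of edge (a,b)."""
    la_v = {w for t in triangles if a in t for w in t if w != a}
    lb_v = {w for t in triangles if b in t for w in t if w != b}
    la_e = {frozenset(t) - {a} for t in triangles if a in t}
    lb_e = {frozenset(t) - {b} for t in triangles if b in t}
    lab = {w for t in triangles if a in t and b in t for w in t if w not in (a, b)}
    return la_v, lb_v, la_e, lb_e, lab

def contractible(triangles, a, b):
    """Link condition for contracting edge (a,b) in a triangulated 2-manifold:
    the links of a and b may share only the vertices of Lk(ab), and no edges."""
    la_v, lb_v, la_e, lb_e, lab = links(triangles, a, b)
    return la_v & lb_v == lab and not (la_e & lb_e)

tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]   # boundary of a tetrahedron
octa = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
        (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]    # boundary of an octahedron
print(contractible(tetra, 0, 1), contractible(octa, 0, 1))   # False True
```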
Even if the output of a simplification process is homeomorphic to the input, however, there is no guarantee that the output is correctly embedded. Self-intersections, for instance, are often introduced, a problem sometimes known as “bubbling”. One step towards guaranteeing a correct embedding was a paper by Varshney et al. \[CVM<sup>+</sup>96\], in which a simplified 2-manifold is fitted into a shell around the original. Much more work, both in providing mathematically verifiable guarantees and in developing efficient algorithms, is required.
#### Smoothness and Nonsmoothness.
Smooth surface representations are commonly divided into implicit and explicit representations. Implicit surfaces can be defined by blending parametrized surfaces such as splines, or as level sets of scalar functions. The advantages of implicit surfaces include high degree of smoothness for arbitrary topology, ease of raytracing, and ease of combining several objects by blending. Disadvantages include difficulties with parameterization and conversion to polygonal meshes. Moreover, implicit surfaces may have singularities, which can be difficult to detect and control.
The most common explicit representation—ubiquitous in CAD—is that of nonuniform rational B-spline (NURBS) patches. A more systematic approach, which offers the advantage of multiresolution control, involves subdivision surfaces \[ZSD<sup>+</sup>99\]. Both of these methods, however, produce surfaces with defects, for example, flat spots or areas of relatively low smoothness near extraordinary points. Whether or not a point is extraordinary depends on the local topology within the representing mesh, and has nothing to do with its geometric location. The changed amount of smoothness is thus an artifact of the representation, and should ideally not exist. An important challenge in smooth surfaces is to ensure integral measures of visual smoothness (fairness). Variational surfaces aim to handle such measures directly.
In the other direction, there is also need for representations that can handle singularities such as boundaries, creases, and corners. With standard polyhedral models, there is no distinction between the creases resulting from discretization and those that represent true surface features. Spline patches give rise to creases of high algebraic degree that cannot be manipulated directly, and implicit surfaces rarely allow any control of singularities.
#### Multiscale Representations.
Multiscale representations, whether implicit or explicit, hold out the promise of efficiency even for very complex geometries. We identify the main challenge as developing representations that allow controlled topology changes between levels, while supporting a variety of efficient multiscale operations, such as animation, editing, and “signal processing”.
Implicit multiscale representations (level sets) have been used in volume rendering. Volume data are themselves represented on regular or adaptively refined (octree) grids; thus it is natural to use classic functional multiscale representations such as wavelets \[WE97, Wes94\], yet it is also possible to construct unstructured mesh hierarchies on volume data \[WJ95\]. While allowing topological changes at different levels of the hierarchy, implicit representations offer little control of such changes. On the other hand, at least in the case of volume data represented on regular grids, signal processing techniques can be used to handle some of the topological problems \[Wes94\].
Some of the current explicit methods \[SZL92, RB93, GH97\] do allow topological changes, but the control over such changes is relatively limited and the hierarchies created by these methods are unsuitable for many purposes, for example, it may be difficult or impossible to parameterize finer levels of hierarchies over the coarse levels. The recent work of El-Sana and Varshney \[ESV98\] based on alpha shapes \[EM94\] aims to perform topological simplification in a more controlled manner.
#### Qualitative Geometry and Multiscale Topology.
For the final highlighted problem, we move from shape representation to shape analysis. Topological invariants (see Section 3.5) such as Betti numbers are insensitive to scale, and do not distinguish between tiny holes and large ones. Moreover, features such as pockets, valleys, and ridges—which are sometimes crucial in applications—are not usually treated as topological features at all. Nevertheless, topological spaces naturally associated with a given surface can be used to capture scale-dependent and qualitative geometric features.
For example, the lengths of shortest linking curves \[DG98\], closed curves through or around a hole, can be used to distinguish small from large holes, and the areas of compressing disks, which “seal off” a hole, can be used to distinguish long narrow pipes from direct openings. The topology of offset or “neighborhood” surfaces is an appropriate tool for classifying depressions in a surface: a sinkhole with a small opening will seal off as the neighborhood grows, whereas a shallow puddle will not. Edelsbrunner has already used this idea to design an algorithm to detect pockets in molecular surfaces \[EFL98\], but further investigations are necessary to answer questions on the border of geometry and topology.
### 3.3 Physical Simulation
Scientific computing has traditionally been concerned with numerical issues such as the convergence of discrete approximations to partial differential equations (PDEs), the stability of integration methods for time-dependent systems, and the computational efficiency of software implementations of these numerical methods. Of central importance has been the ultimate use of these techniques in the solution of complex problems in science and engineering such as the modeling of combustion systems, aerodynamics, structural mechanics, molecular dynamics, and problems from a large number of other application areas. However, as these applications have become more complex, the local convergence properties of numerical methods have not proven to be sufficient to ensure either correctness or robustness. There are a number of research areas where topological and differential methods could be integrated with existing numerical techniques in scientific computing to help resolve these difficulties.
#### Hexahedral Mesh Generation.
For many scientific applications, the preferred discretization is a hexahedral mesh partitioning the domain into cuboids. A common approach to hexahedral mesh generation involves extending a quadrilateral mesh on the domain surface to a three-dimensional volume mesh of the entire domain \[BP97\]. Even though several software implementations of this approach exist \[TM95\], it is not yet known whether this extension can always be done. An obvious necessary condition for the existence of a hexahedral mesh is that there be an even number of boundary quadrilaterals; this is also sufficient to guarantee the existence of a topological mesh, meaning one in which hexahedral faces may be slightly nonplanar, for domains forming a simple topological structure \[Mit96, Thu93\] or having a bipartite boundary \[Epp99\]. However, it is not clear whether similarly simple conditions can guarantee the existence of a polyhedral mesh or whether additional algebraic conditions on the surface must be imposed.
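The parity condition can be checked mechanically. The sketch below (the face table of the hexahedron follows an assumed vertex-ordering convention) extracts the boundary quadrilaterals of a hexahedral complex as the faces incident to exactly one cell; every hexahedral mesh has evenly many of them, which is why an odd count obstructs an extension.

```python
from collections import Counter

# faces of a hexahedron, as positions in its 8-vertex tuple (assumed ordering:
# vertices 0-3 on the bottom face, 4-7 directly above them)
HEX_FACES = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
             (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

def boundary_quads(hexes):
    """Quadrilateral faces belonging to exactly one cell."""
    count = Counter()
    for cell in hexes:
        for face in HEX_FACES:
            count[frozenset(cell[i] for i in face)] += 1
    return [f for f, n in count.items() if n == 1]

# two hexahedra glued along one face: 2*6 - 2 = 10 boundary quads, even
hexes = [(0, 1, 2, 3, 4, 5, 6, 7), (4, 5, 6, 7, 8, 9, 10, 11)]
quads = boundary_quads(hexes)
print(len(quads), len(quads) % 2 == 0)   # 10 True
```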
Another important issue in automatic mesh generation is element quality; poorly shaped elements (flat or skinny, especially skinny in the “wrong direction”) are directly responsible for poorly conditioned matrices \[Fri72\] and hence slow and inaccurate numerical computations \[BR78\]. For triangular, tetrahedral, and quadrilateral meshes, the solution to poor quality elements has been the introduction of “provably good” meshing methods that guarantee to produce a mesh with all elements having good quality according to various metrics \[BE97, BEG94\]. However for hexahedral meshes, little is known about quality metrics and even less is known about provably good meshing. Recent work using the Jacobian matrix norm as a quality metric for hexahedral elements has shown promise for finite-element calculations \[Knu99\].
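As a concrete instance of a Jacobian-based metric, the sketch below computes the minimum scaled Jacobian of a hexahedral element: at each corner, the determinant of the three emanating edge vectors divided by the product of their lengths (the corner numbering is an assumed convention, and this is only one of the metrics considered in \[Knu99\]).

```python
import numpy as np

# neighbours of each corner in an 8-vertex hexahedron (assumed ordering:
# vertices 0-3 on the bottom face, 4-7 directly above them)
NBRS = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
        (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

def min_scaled_jacobian(verts):
    """1.0 for a cube; small or negative for poorly shaped or inverted cells."""
    v = np.asarray(verts, dtype=float)
    q = []
    for c, nbr in enumerate(NBRS):
        e = np.stack([v[j] - v[c] for j in nbr])           # 3 corner edges
        q.append(np.linalg.det(e) / np.prod(np.linalg.norm(e, axis=1)))
    return min(q)

cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
print(min_scaled_jacobian(cube))        # 1.0
shear = cube.copy(); shear[4:, 0] += 0.9
print(min_scaled_jacobian(shear))       # ~0.74: sheared element, lower quality
```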
#### Anisotropic Mesh Generation.
In many applications the underlying physics is not isotropic and, as a result, standard mesh generation methods and element quality metrics are not appropriate. This is the case, for example, in modeling the fluid flow in a boundary layer, or in groundwater flow calculations where the porosity is highly nonisotropic because of geological features such as faults and layering of strata. For these problems element aspect ratios of 1000:1 are sometimes necessary; however, the generation of these meshes is often ad hoc. For isotropic problems the shape optimization of elements based on measures generated from local metrics computed from the Hessian of element error functions has proved useful \[Rip92\] and optimal for the finite-element approximation of given functions \[Sim94\]. A promising area of future research is the extension of these results to anisotropic problems. For example, one would like to characterize the existence of canonical triangulations (perhaps something akin to Delaunay triangulation) given the Riemannian metrics generated by the error function estimates.
#### Moving Meshes.
In problems such as casting and molding, the domain changes with time, and it is convenient to adapt the existing mesh rather than recomputing an entirely new mesh. Moving mesh problems also arise in Lagrangian discretization strategies for time-dependent PDEs. The challenge problem here is the identification and correction of topological changes as the mesh changes over time.
#### Visualization.
Large-scale simulations can generate terabytes of numerical data. The analysis and interpretation of this voluminous data has become an increasingly important research problem. One promising approach extracts features such as vortex lines or sheets \[SZF<sup>+</sup>93\]. The topology and qualitative geometry of these features can be of great interest. Examples include identification of voids and pockets in molecular surfaces \[Bad94, EFL98\], and simulation of high-temperature superconductors, in which magnetic field lines “tangle” with impurities in the material \[GKL<sup>+</sup>96, JP93\].
### 3.4 Configuration Spaces
The notion of *configuration space* (also called *parametric space* or *realization space*) is used in numerous areas, including robotics, graphics, molecular biology, computer vision, and databases for representing the space of all possible states of a system characterized by many degrees of freedom. Instead of defining configuration spaces in general, we will illustrate the concept by giving an example in robotics.
In *robot motion planning*, the problem is to compute a collision-free motion between two given placements—or configurations—of a given robot among a set of obstacles. A configuration is typically described as a list of real parameters, and the set of all possible configurations is called the configuration space. The *free configuration space* $`\mathcal{F}`$ is the subset of the configuration space in which the robot does not intersect any obstacle. The robot can move from an initial configuration to a final configuration without intersecting any obstacle if and only if these two configurations lie in the same connected component of free configuration space. Planning a collision-free motion thus maps to planning the motion of a point in $`\mathcal{F}`$. In other words, motion-planning problems map to connectivity questions, or related topological questions, in $`\mathcal{F}`$. Many other problems can be couched in terms of configuration spaces. Important examples include assembly planning and molecular docking \[HKL97, HLW97, Lat91\]. The topology of configuration spaces is little understood, except in very rudimentary cases, such as that of an object under rigid motion.
#### Representation and Computation.
Most interesting configuration spaces are semialgebraic sets, finite Boolean combinations of solution sets of polynomial inequalities and equalities. The question of representing and computing a semialgebraic set has received much attention in the last two decades. Since the topology of a semialgebraic set can be quite intricate, developing a suitable representation is a challenging (and not fully solved) problem. A common technique to represent a semialgebraic set is to partition it into semialgebraic sets of constant description complexity, each of which is homeomorphic to $`R^j`$ for some $`j`$ \[Bri93, Lie91, SS83\]. Some commonly used general decomposition schemes are Collins' decomposition \[ACM84, Col75\] and vertical decomposition \[CEGS89\]. Because of efficiency considerations, we want to minimize the number of cells in the decomposition. A major open question in this area is to compute a decomposition of minimum size.
In motion planning, we are interested in computing a single connected component of $`\mathcal{F}`$. (It is not even obvious that a connected component of a semialgebraic set is also semialgebraic; this was proved only recently \[BPR98\].) What is the combinatorial or topological complexity of such a component? Recently, Basu proved tight bounds on the sum of Betti numbers and used them to prove a sharp bound on the combinatorial complexity of a single component \[Bas98\]. However, no efficient algorithm is known for computing a single cell. A related open problem is to develop an efficient stratification scheme for a single component of a semialgebraic set.
In some applications even more challenging problems arise. If the obstacles are moving as well as the robot, then we need to update a road-map or stratification dynamically. Also, flexible objects such as elastic bands, rope, or cloth cannot be properly represented with a finite number of degrees of freedom. How can we represent configuration spaces of such objects? The key here may be to capture the notion that different configurations have an energy associated with them, and that only low-energy configurations are of interest. Are there good ways to parametrize these low-energy configurations and to plan motions among them?
#### Approximation.
Computing exact high-dimensional configuration spaces is impractical. Thus it is reasonable to ask for approximate representations. Much of the difficulty in approximating a high-dimensional configuration space is in understanding and simplifying the topology of the space. Although several algorithms are known for simplifying the geometry of a surface, little is known about simplifying topology.
Recently, Monte Carlo algorithms have been developed for representing a higher dimensional semialgebraic set by a $`\mathrm{1}`$-dimensional network \[KLMR95, KŠL96\]. Intuitively, this network is an approximate representation of the road map (a network of $`\mathrm{1}`$-dimensional curves that captures the connectivity information of $`\mathcal{F}`$). These methods sample points in $`\mathcal{F}`$ and connect them by an edge if they can be connected by a direct path inside $`\mathcal{F}`$. So far very simple strategies have been developed for choosing random points. These methods work well when $`\mathcal{F}`$ is simple, but better sampling techniques are needed to handle planning problems involving narrow corridors or other difficult areas, in such a way that the connectivity of the sampled configuration space is preserved.
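A minimal sketch of this strategy for a planar point robot (all names and parameters illustrative; `in_free` is the only interface to $`\mathcal{F}`$, and the local planner simply samples the straight segment between two configurations):

```python
import numpy as np

def prm_connected(start, goal, in_free, n_samples=500, radius=0.3, seed=0):
    """Sample configurations, join nearby pairs whose connecting segment
    stays free, and test whether start and goal end up connected."""
    rng = np.random.default_rng(seed)
    pts = [np.asarray(start, float), np.asarray(goal, float)]
    while len(pts) < n_samples + 2:
        p = rng.random(2)
        if in_free(p):
            pts.append(p)

    def segment_free(a, b, steps=25):
        return all(in_free(a + s * (b - a)) for s in np.linspace(0.0, 1.0, steps))

    parent = list(range(len(pts)))            # union-find over samples
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < radius and segment_free(pts[i], pts[j]):
                parent[find(i)] = find(j)
    return find(0) == find(1)

free = lambda p: np.linalg.norm(np.asarray(p) - 0.5) > 0.2   # disk obstacle
print(prm_connected((0.05, 0.05), (0.95, 0.95), free))       # True
```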
#### Decomposition.
Dimension reduction is one approach to developing faster algorithms for problems in high dimensions. One possibility for motion planning is to search for solutions in one or more projections of the configuration space and then lift the solution back to the original space. For example, suppose we want to plan a motion for two disks in the plane amid obstacles. The four-dimensional free space of this system can be computed by decomposing the two-dimensional free space of each disk into simple cells, and then lifting these cells into $`R^\mathrm{4}`$. Proving that such a strategy succeeds requires several sophisticated techniques from algebraic topology, including Mayer-Vietoris sequences \[AdBvdS<sup>+</sup>98, FWY86, HW86a, HW86b\]. Characterizing the situations in which the configuration space can be decomposed and finding the “optimal” decomposition of the configuration space are two interesting open problems in this area.
### 3.5 Topological Computation
The study of algorithms for topological problems has grown quite popular in recent years; it is one of the few growing branches of topology. In the last few years, there have been several workshops and the founding of an on-line community—www.computop.org. Much of the recent effort has focused on classifying the inherent complexity of topological problems. Typically, planar problems are easy (polynomially solvable), problems in $`R^\mathrm{3}`$ are hard (exponentially solvable and thought to be NP-complete), and problems in $`R^\mathrm{4}`$ and higher dimensions are known to be undecidable.
#### Unknot Recognition.
A knot is said to be unknotted if it can be deformed to a (geometric) circle without passing through itself. In the early 1960s, Haken used a combinatorial representation of surfaces, called normal surfaces, in an algorithm for deciding if a knot is unknotted \[Hak61\]. A recent collaboration between mathematicians and computer scientists showed that this algorithm will take at most exponential time in the number of crossings of the knot \[HLP99\]. It is still open, however, whether this problem is NP-complete or can be solved in subexponential time.
#### Knot and Link Equivalence.
Two knots are equivalent if one can be deformed into the other without passing through itself. Knot equivalence is known to be decidable \[Hem92\], but the algorithm is extremely complicated and the computational complexity is as yet unknown. An important related question asks whether two links (collections of intertwined knots) are equivalent. No algorithm is yet known for this problem.
#### Three-Sphere Recognition.
The development of almost normal surfaces, a generalization of normal surfaces, has led to the Rubinstein-Thompson algorithm for deciding if a manifold is the $`\mathrm{3}`$-sphere \[Tho94\]. Recent work of Casson shows that this algorithm will take at most exponential time.
#### Shellings.
A shelling of a cell complex is an ordering of the cells such that if cells are added one by one in that order the topological type remains invariant. While interesting for their own sake, shellings also provide a very useful calculational tool. Hence it is an important algorithmic problem to determine if a cell complex is shellable and, if not, modify it so that it is. Current algorithms \[Let99\] are not yet practical, and improvements are needed.
#### Hyperbolic Geometry.
Three-dimensional manifolds with a hyperbolic structure have many useful properties, allowing extremely efficient and powerful topological calculations. The computer software package SnapPea \[Wee\], written by Weeks, implements many of these calculations and has proven an extremely useful tool for low-dimensional topology. There are still many open questions in algorithmic hyperbolic geometry, for example, whether it is possible to decide if a manifold has a hyperbolic structure.
#### Topological Invariants.
Another area of interest, with a number of practical applications outside mathematics, is the calculation of topological invariants. Many physical objects can change geometry more easily than they can change topology. Examples range from molecules to alphabetic characters to geological formations. For these objects, topological invariants offer a more meaningful description than geometric measures.
The most useful topological invariants involve homology, which defines a sequence of groups describing the “connectedness” of a topological space. For example, the Betti numbers of an object embedded in $`R^\mathrm{3}`$ are respectively the number of connected components separated by gaps, the number of circles surrounding tunnels, and the number of shells surrounding voids. Technically, the Betti numbers are the ranks of the free parts of the homology groups. For more abstract topological spaces, not embedded in $`R^\mathrm{3}`$, the relevant invariants include torsion coefficients as well.
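For a finite simplicial complex this computation reduces to linear algebra: the $`k`$-th Betti number over $`Z/2`$ (sufficient when torsion plays no role, as for the embedded complexes above) equals the number of $`k`$-simplices minus the ranks of the boundary maps in dimensions $`k`$ and $`k+1`$. The following sketch (an illustration, not an efficient algorithm) computes these ranks by Gaussian elimination over GF(2).

```python
import numpy as np
from itertools import combinations

def betti_numbers(maximal_simplices, top_dim=2):
    """Betti numbers over Z/2 via ranks of boundary matrices."""
    faces = [set() for _ in range(top_dim + 1)]
    for s in maximal_simplices:
        s = tuple(sorted(s))
        for k in range(1, len(s) + 1):
            for f in combinations(s, k):
                faces[k - 1].add(f)
    faces = [sorted(fk) for fk in faces]

    def boundary_rank(k):  # rank of the map C_k -> C_{k-1} over GF(2)
        if k <= 0 or k >= len(faces) or not faces[k]:
            return 0
        idx = {f: i for i, f in enumerate(faces[k - 1])}
        M = np.zeros((len(faces[k - 1]), len(faces[k])), dtype=np.uint8)
        for j, f in enumerate(faces[k]):
            for sub in combinations(f, k):   # its (k-1)-dimensional faces
                M[idx[sub], j] = 1
        rank = 0
        for col in range(M.shape[1]):        # Gaussian elimination mod 2
            piv = next((i for i in range(rank, M.shape[0]) if M[i, col]), None)
            if piv is None:
                continue
            M[[rank, piv]] = M[[piv, rank]]
            for i in range(M.shape[0]):
                if i != rank and M[i, col]:
                    M[i] ^= M[rank]
            rank += 1
        return rank

    return [len(faces[k]) - boundary_rank(k) - boundary_rank(k + 1)
            for k in range(top_dim + 1)]

# hollow tetrahedron = the four boundary triangles of the 3-simplex (0,1,2,3):
print(betti_numbers(list(combinations(range(4), 3))))  # [1, 0, 1]
```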
For $`\mathrm{2}`$-manifolds without boundary, the homology can be computed quite easily by computing Euler characteristics and orientability. The case of 3-complexes requires more sophistication, but computational geometers have devised quite efficient algorithms for the case of 3-complexes embedded in $`R^\mathrm{3}`$ \[DE95, DG98\]. However, these algorithms use the three-dimensional embedding heavily and it is not yet clear whether they can be extended to general complexes. These problems are not just of mathematical interest: nonmanifold $`\mathrm{2}`$-complexes are used quite often in modeling shock fronts, crack propagation, or domains made of two different materials. In dimensions beyond 3, there is as yet no algorithm that is practical for large complexes.
From a practical point of view it may often be impossible to determine the topology of an object completely, and estimation of topological invariants may be appropriate. In materials science, structural properties of composite materials such as concrete or high-impact plastic appear to be related to the Betti numbers of randomly selected cross sections.
Finally, in addition to studying the shape of objects in space, topological computations may prove useful in studying the shape of space itself! Research is currently underway using astronomical data to investigate the geometry and topology of the universe. One approach uses maps of cosmic background radiation to piece together the global structure of the universe \[CW98, Wee98\].
## 4 Techniques
We can already identify a number of techniques that computational topology could bring to bear on the applications described above. We list them in order from general scientific principles down to specific algorithmic methods.
* Mathematical Viewpoint. Topology separates global shape properties from local geometric attributes, and provides a precise language for discussing these properties. Such a language is essential for composing software applications, such as connecting a mesh generator to a computational fluid dynamics simulation. Mathematical abstraction can also unify similar concepts from different fields. For example, basic questions of robot reachability or molecular docking become topological questions in the appropriate configuration or conformation spaces.
* Asymptotic Analysis. The signature technique of theoretical computer science is asymptotic worst-case and average-case analysis of algorithms. This type of analysis, while sometimes overemphasized as an end in itself, is helpful in providing a common yardstick to measure progress and encourage future work. Although proving upper bounds on algorithm performance is usually a matter of concrete analysis, topological ideas such as Betti numbers can be useful in proving lower bounds \[Yao94\].
* Exact Geometric Computation. This technique draws on algebraic number theory to ensure the topological correctness of geometric computations. In principle, this technique solves most of the numerical robustness problems in such applications as CAD modeling and computational simulation.
* Differential Methods. Many techniques from differential geometry, such as Morse theory for studying singularities, are essential in analyzing surfaces and models in diverse applications such as medical imaging, crystallography, and molecular modeling.
* Topological Methods in Discrete Geometry. Topological results such as the Borsuk-Ulam theorem \[Bor33\], that any continuous antipodal function on a sphere must have a zero, have commonly been used in discrete geometry to prove the existence of geometric configurations such as ham sandwich cuts and centerpoints \[Bjö95, Živ97\]. However, such methods do not generally lead to efficient algorithms for finding such configurations \[Ko95\], so further research on effective existence proofs may be warranted.
* Multiscale Synthesis and Analysis. Multiresolution techniques have already assumed great importance in the synthesis of computer graphics models and in numerical methods for physical simulation. Multiscale techniques are fast becoming equally important in visualization and analysis of unstructured “natural” data. One example \[Ler, Jon90\] uses techniques drawn from geometric measure theory and harmonic analysis for approximating a set with best fit planes at different resolutions. This approach segments a point set or image into subsets of different geometric structure; by combining continuous and discrete analysis, it produces results even for noisy data.
* Normal Surfaces. Invented by Kneser \[Kne29\], normal surfaces were the basis for Haken’s knot algorithm \[Hak61\], and have been used in numerous algorithmic and finiteness results in topology \[Has97, JS98\]. Instead of representing a curve or surface with an explicit mesh or parameterization, normal surface theory describes how that curve or surface intersects a given mesh of the ambient space. This yields a very efficient representation for densely folded curves and surfaces, which has potential in applications where such curves and surfaces occur. Moreover, normal surface theory provides a natural “addition” operation for surfaces, which is useful for their manipulation.
## 5 Recommendations
* Research Community. There is a need to build a computational topology research community including computer scientists, engineers, and mathematicians. Such a community could be held together by organizing workshops and conference special sessions, and by maintaining Web sites, bibliographies, and collections of open problems. Techniques from topology have already been used in geometric computing and vice versa. We want to strengthen and formalize this link.
* Online Clearinghouse. To encourage a sense of community, we should establish a clearinghouse of research projects, papers, software, and informal communications between workers in this area. The web site already present at www.computop.org could possibly provide a location for this collection.
* Research Funding. Grant opportunities are needed to encourage further work in these areas, either as a separate initiative or as continued funding from the relevant areas within NSF.
* Continuation of Workshop. It seems premature to establish a journal or annual conference series in this area, but at the least there should be another workshop on computational topology. This year’s workshop was an invitation-only, direction-finding session; what is needed now is a forum for collecting new work in the area and fostering continued interdisciplinary collaboration. Perhaps such an event could be held in conjunction with the annual ACM Symposium on Computational Geometry, to be held next year in Hong Kong.
# THE SCALED UNIVERSE
## 1 Introduction
It has been argued by several authors that it is a Brownian process that underlies quantum behaviour and the fractal dimension of two for quantum paths. On the other hand the fractal nature of the macro universe has been noticed over the past several years . Indeed, this is obvious: matter is concentrated in atoms, stars and planets, galaxies, clusters and so on, and not spread uniformly. Indeed, uniformity is dependent on the scale of observation or resolution. Not just that: the mysterious curiosity of a "cosmic" quantization has also been noticed. We will now show that the underlying connection between quantum type phenomena at different scales has a Brownian underpinning: there exists what may be called a "Scaled Quantum Mechanics".
## 2 Scaling
We first observe that in Brownian motion we have
$$x\sim \mathrm{\Delta }x\sqrt{n}$$
(1)
where, for example $`\mathrm{\Delta }x`$ is the typical length of a step, $`n`$ is the number of steps and $`x`$ is the distance covered.
We next observe that the following relations hold:
$$R\sim l_1\sqrt{N_1}$$
(2)
$$R\sim l_2\sqrt{N_2}$$
(3)
$$l_2\sim l_3\sqrt{N_3}$$
(4)
$$R\sim l\sqrt{N}$$
(5)
where $`N_1\sim 10^6`$ is the number of superclusters in the universe, $`l_1\sim 10^{25}cms`$ is a typical supercluster size, $`N_2\sim 10^{11}`$ is the number of galaxies in the universe and $`l_2\sim 10^{23}cms`$ is the typical size of a galaxy, $`l_3\sim 1`$ light year is a typical distance between stars and $`N_3\sim 10^{11}`$ is the number of stars in a galaxy, $`R`$ being the radius of the universe $`\sim 10^{28}cms`$, $`N\sim 10^{80}`$ is the number of elementary particles, typically pions, in the universe and $`l`$ is the pion Compton wavelength.
Equations (5) and (1) have been compared and it has been argued that this is symptomatic of quantum behaviour. Here, the Compton wavelength is a length scale within which we can find the corresponding mass. In this same spirit, and in the light of the comments in Section 1, we can expect that equations (2) to (4) would also lead to a quantum type behaviour, though not at the micro scale represented by the pion (as in equation (5)) but rather at a suitable higher scale.
In fact, as we will now show, this is indeed the case, with a scaled Planck constant given by
$$h_1\sim 10^{93}$$
(6)
for super clusters;
$$h_2\sim 10^{74}$$
(7)
for galaxies and
$$h_3\sim 10^{54}$$
(8)
for stars.
Let us start with equation (5). It is quite remarkable that (5) (and a corresponding equation with the radius of the universe replaced by its age and the Compton wavelength replaced by its Compton time) can be deduced in a cosmological scheme in which elementary particles, typically pions, are fluctuationally produced out of a background Zero Point Field. This scheme is consistent with astrophysical observations and also deduces from theory the various large number relations which were hitherto considered to be magical coincidences. Further, we have,
$$M=Nm,$$
(9)
where $`M`$ is the mass of the universe and $`m`$ the pion mass and $`N`$ is defined in (5). From (9) and (5) we can deduce,
$$\left(\frac{R}{l}\right)^2\sim \frac{M}{m}$$
(10)
From (10), we can easily deduce that there is the scaled Planck constant $`h_1`$ given in (6), such that,
$$R=\frac{h_1}{Mc}$$
(11)
Equation (11) shows that with this scaled constant $`h_1`$, the radius of the universe turns out to be the counterpart of the Compton wavelength. Earlier it was argued that an electron could be modelled as a Kerr-Newman Black Hole with radius given by the Compton wavelength. It is interesting to note that this is also true for the universe itself with the scaled Compton wavelength: in fact, in the case of the electron, the spin was given by
$$S_K=\int ϵ_{klm}x^lT^{m0}d^3x=\frac{h}{2}$$
(12)
where the domain of integration was a sphere of radius given by the Compton wavelength. If this is carried over to the case of the universe, with radius given by (5) or (11) and mass as in (9) we get from (12)
$$S_U=N^{3/2}h\sim h_1$$
(13)
where $`h_1`$ is as in (6) and $`S_U`$ denotes the counterpart of electron spin.
In fact the origin of $`h_1`$ is in (13): from this point of view, (6) is not mysterious. In this case $`h_1`$ turns out to be the spin of the universe itself, in broad agreement with Gödel's spin value for Einstein's equations . Incidentally this is also in agreement with the Kerr limit of the spin of the rotating Black Hole of mass given by (9). Further, as pointed out by Kogut and others, the angular momentum of the universe given in (13) is compatible with a rotation from the cosmic background radiation anisotropy. Finally it is also close to the observed rotation as deduced from the anisotropy of cosmic electromagnetic radiation as reported by Nodland and Ralston and others.
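The scalings above can be checked to order of magnitude with a few lines of Python (a sketch; the numerical inputs are the standard CGS values of $`h`$, $`c`$ and the pion mass, and the value of $`N`$ quoted in Section 2):

```python
# Order-of-magnitude check of equations (6), (9), (11) and (13) in CGS units.
h, c, m_pi, N = 6.63e-27, 3.0e10, 2.5e-25, 1.0e80   # erg s, cm/s, g, pions

h1 = N**1.5 * h        # eq. (13): S_U = N^(3/2) h, identified with h_1
M = N * m_pi           # eq. (9): mass of the universe
R = h1 / (M * c)       # eq. (11): scaled "Compton wavelength"

print(f"h1 ~ {h1:.0e} erg s")   # ~7e93, i.e. ~10^93 as in eq. (6)
print(f"R  ~ {R:.0e} cm")       # ~9e27, i.e. ~10^28 cm, the radius in (5)
```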
We next use (3), and the well known fact that $`G\frac{m_G}{l_2}\sim v^2`$, along with the relation,
$$m_Gvl_2=h_2,$$
(14)
which is the analogue of quantized angular momenta. It immediately follows that $`h_2^2=Gm_G^3l_2`$, whence $`h_2\sim 10^{74}`$, which gives equation (7). Further from (14) it follows that
$$v\sim 10^7cm/sec,$$
which is consistent with the quantized large scale velocities that have been observed .
Similarly, taking the cue from (4), if, in conjunction with the well known Kepler type equation, viz.,
$$G\frac{m_S}{r}\sim V^2,$$
where $`m_S`$ is the mass of the sun, we use the counterpart of equation (14) for the sun, we can get the relation (8), and then the planetary angular momentum $`nh_3`$ gives correctly what may be called the Bode-Titius type relation for the planetary distances. This was noted by Nottale, Agnese, Festa, Laskar, Carneiro and others and so will not be elaborated here. The above considerations not only provide a rationale for this behaviour, but also show how this fits into a more generalized scaling principle.
## 3 Discussion
(i) We have given a rationale for quantum like behaviour at large scales, expressed by equations (6), (7) and (8). These express scales at the level of superclusters (and the universe), galaxies and stars. It may be mentioned that a hierarchical structure in the context of the now defunct strong gravity was considered by Caldirola and coworkers.
(ii) We saw in Section 2 that the universe shows up as a black hole. Indeed this has been argued from an alternative view point (cf.ref.). Moreover, this is symptomatic of a holistic or Machian behaviour (cf.ref.). This in fact has been the purport of earlier considerations (cf.refs. and , for example).
Moreover, this universal black hole could just be a scale expressing the upper limit of our capability to observe. In the spirit of reference , there could be several such black holes or parallel universes.
In this context, we may refer to the fact that in earlier work (cf. for example refs. and ), a background zero point field (ZPF) or Prigogine's quantum vacuum was considered, out of which elementary particles, typically pions, were fluctuationally created - the energy of the ZPF within the pion Compton wavelength being the rest energy of the pion.
We could turn the perspective around and consider the creation of the universal black hole instead, at the scaled "Compton" wavelength of the universe, as given by equation (11). It would then appear that the structure formation of the universe would be due to, as Mandelbrot pointed out, a curdling process (cf.ref. and ).
Indeed, we are prisoners of perspective: the following analogy would clarify. Let us consider the drawing on (two dimensional) paper of a (three dimensional) match box or rectangular parallelepiped. There are two rectangles - the outside rectangle and the inside one. Depending on which of these we start with, the match box would be either going inwards or coming outwards, two totally different possibilities.
(iii) Given equations like (6), (11) and subsequent considerations of Section 2, it would be natural to expect the universe to be a "wave packet", though not in the spirit of Hawking. We can see that this is indeed so. In fact, for a Gaussian wave packet we have,
$$R\sim \frac{\sigma }{\sqrt{2}}\left(1+\frac{h_1^2T^2}{\sigma ^4M^2}\right)^{1/2}\sim \frac{1}{\sqrt{2}}\frac{h_1T}{\sigma M}$$
(15)
where, now, $`R`$ and $`T`$ denote the radius and age of the universe (at a given time), $`M`$ its mass and $`\sigma \sim R`$ is the spread of the wave packet.
Remembering that $`R\sim cT`$, (15) actually reduces to equation (11)! The width of the wave packet is the "Compton" length. Differentiation of (15) gives,
$$\dot{R}\sim \frac{h_1}{\sigma M}\sim \frac{h_1}{M\sigma ^2}R\equiv HR$$
(16)
Equation (16) resembles Hubble’s law. We can show that this is indeed so: Using (9), and (13), it follows from (16) that,
$$H\sim \frac{c}{\sqrt{N}l}\sim \frac{Gm^3c}{h^2}$$
(17)
where the last equality has been deduced previously (cf.refs.) and can be easily verified. Not only does (17) give the correct value of the Hubble constant, but it is also Weinberg’s ”mysterious” empirical relation (cf.ref.), giving the pion mass in terms of the Hubble constant or vice versa.
Interestingly, from (15), using again $`RcT`$, we can deduce that
$$Mc^2Th_1$$
(18)
Equation (18) and (11) are the analogues of Heisenberg’s Uncertainity Principle.
# Light curves and metal abundances of RR Lyrae variables in the bar of the Large Magellanic Cloud
## 1. Observations, data reduction and calibration
We have collected B,V CCD photometry in two 13 x 13 arcmin fields located close to the bar of the LMC and overlapping with fields #6 and #13 of the MACHO microlensing experiment (see http://wwwmacho.mcmaster.ca), using the 1.5m Danish telescope in La Silla. The photometric data set consists of 58 V and 25 B frames of each field. 118 variables were identified in the two fields (62 ab type RR Lyrae, 30 RR<sub>c</sub>, 10 RR<sub>d</sub>, 6 Cepheids, 9 eclipsing binaries and 1 $`\delta `$ Scuti). Photometry was accurately tied to the Johnson standard photometric system using a large number of standard stars from Landolt (1992).
Low resolution spectra (R=450, res. element=9 Å) were obtained for 7 of the RR<sub>d</sub> variables with EFOSC2 at the 3.6 m ESO telescope, and metal abundances have been derived using the $`\mathrm{\Delta }`$S technique (Preston 1959). For calibration purposes we also took spectra at minimum light of 8 field RR Lyraes of known $`\mathrm{\Delta }`$S (of which one, a c type, was followed along the pulsation cycle), and of 14 stars of the open cluster Collinder 140, which contains spectral type standard stars.
Photometric data were analyzed using the package DoPHOT (Schechter, Mateo, & Saha, 1993). Spectroscopic data were reduced using the standard IRAF packages for long slit spectra. Total numbers of 28000 and 25000, and 23000 and 19000 objects were measured in the V and B frames of the two fields, respectively.
The average magnitude of the LMC clump stars is $`<V>`$ = 19.202 ($`\sigma `$=0.202; 8979 stars). The comparison with the Alcock et al. (1997, hereinafter A97) $`<V>`$ for the clump, as can be read from their Fig. 3, shows that our value is about 0.10 mag "fainter".
## 2. Identification, period search and pulsational properties of the RR Lyrae variables
Variables were identified on the V and B frames independently. Periods were defined using the program GRATIS (GRaphical Analyzer TIme Series; Montegriffo, Clementini, & Di Fabrizio 1999, in preparation) which was run on the differential photometry of the variables with respect to a number of stable reference stars. The period search procedure was to perform a Lomb analysis (Lomb 1976) on a wide period interval first, and then use a Fourier analysis to refine periods and find the best fitting models. Average residuals from the best fitting models for single-mode pulsators with well sampled light curves are 0.02–0.03 mag in V, and 0.04–0.06 mag in B. Figure 1 shows examples of the V and B light curves of an ab and a c type RR Lyrae in our sample. The period distribution of the c type RR Lyraes in our fields peaks at $`<`$P(RR<sub>c</sub>)$`>`$=0.314 $`\pm `$ 0.047 (average on 30 stars), while $`<`$P(RR<sub>ab</sub>)$`>`$=0.577$`\pm `$0.077 (average on 60 stars), to compare with 0.342 and 0.583 of Alcock et al. (1996, hereinafter A96). Our shortest period ab type RR Lyrae has period 0.318 d, and there are two other RR<sub>ab</sub>’s with periods around 0.40 d. We derived $`<V>`$ and $`<B>`$ intensity average magnitudes as well as V and B amplitudes (A<sub>V</sub> and A<sub>B</sub>) for all variables with complete light curves. The average $`<V>`$ apparent magnitude of the RR Lyraes in our sample is $`<V>=19.325\pm 0.170`$ (75 stars), to compare with $`<V>`$=19.4 from A96. On the assumption that E(B$`-`$V)=0.10 (Bessel 1991) and A<sub>V</sub>=3.1\[E(B$`-`$V)\] for the LMC we find:
$`<V_0>`$=19.015$`\pm 0.020`$ at \[Fe/H\]$`\sim -1.5`$ field RR Lyraes, this paper
$`<V_0>`$=19.09 at \[Fe/H\]$`\sim -1.7`$ field RR Lyraes, A96
$`<V_0>`$=19.06$`\pm 0.06`$ field RR Lyraes, Kinman et al. (1991)
$`<V_0>`$=18.94$`\pm 0.040`$ at \[Fe/H\]$`=-1.9`$ cluster RR Lyraes, Walker (1992)
Allowing for the 0.4 dex difference in \[Fe/H\] our $`<V_0>`$ is in very good agreement with Walker (1992), thus showing that there is no clear evidence for a difference in luminosity between field and cluster RR Lyraes in the LMC.
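The two-step period search described above can be sketched as follows (illustrative code on a synthetic light curve, using the SciPy Lomb periodogram; the Fourier refinement performed with GRATIS is not reproduced here):

```python
import numpy as np
from scipy.signal import lombscargle

def coarse_period(t, mag, pmin=0.2, pmax=1.0, n=20000):
    """Lomb scan over a wide period window (days); the best peak would then
    be refined with a Fourier-series fit of the folded light curve."""
    omega = 2.0 * np.pi / np.linspace(pmin, pmax, n)   # angular frequencies
    power = lombscargle(t, mag - mag.mean(), omega)
    return 2.0 * np.pi / omega[np.argmax(power)]

# synthetic RRc-like curve: P = 0.314 d, 58 irregular epochs over 30 nights
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 58))
mag = 19.3 + 0.25 * np.sin(2 * np.pi * t / 0.314) + rng.normal(0.0, 0.02, t.size)
print(coarse_period(t, mag))   # ~0.314 d
```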
## 3. Spectroscopy of the double-mode RR Lyraes
Spectra for 7 of the RR<sub>d</sub> variables falling in our fields were obtained at phases corresponding to the minimum light. Metallicities were inferred from these spectra using the $`\mathrm{\Delta }S`$ index and the Clementini et al. (1995) calibration of $`\mathrm{\Delta }`$S in terms of metallicity (\[Fe/H\] = $`-0.194\times \mathrm{\Delta }S-0.08`$).
Dealing with variables which pulsate both in the fundamental and first overtone, the question arises whether these stars should be treated as ab or c type pulsators in measuring $`\mathrm{\Delta }`$S. Following Kemper (1982) and discussions in Clement, Kinman, & Suntzeff (1991), we considered our targets as c type variables. $`\mathrm{\Delta }`$S values were thus measured from spectra with Hydrogen spectral type later than A8, and applying phase corrections derived from the field RR<sub>c</sub> T Sex. Table 1 lists the $`\mathrm{\Delta }`$S values and corresponding metallicities we derived for our targets. Values for star #5 are rather uncertain since the spectrum of this star has very low S/N. Errors on the quoted $`\mathrm{\Delta }`$S are of the order of 0.7–1 $`\mathrm{\Delta }`$S subclasses, corresponding to an error of about 0.20–0.30 dex in \[Fe/H\].
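Applying the quoted calibration is then immediate (the $`\mathrm{\Delta }`$S values below are illustrative, not the Table 1 entries):

```python
def feh(delta_s):
    # Clementini et al. (1995): [Fe/H] = -0.194 x DeltaS - 0.08
    return -0.194 * delta_s - 0.08

for ds in (5.0, 7.0, 9.0):                 # illustrative DeltaS values
    # a 0.7-1 subclass error in DeltaS maps to ~0.2-0.3 dex in [Fe/H]
    print(f"DeltaS = {ds:.1f} -> [Fe/H] = {feh(ds):+.2f}")
```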
## 4. The mass-metallicity relation for the RR Lyrae stars
A97 provides P<sub>0</sub>/P<sub>1</sub> ratios for the 7 double-mode variables in Table 1. These ratios can be used together with Petersen diagrams (Petersen 1973) and the Bono et al. (1996) loci of model pulsators to estimate the masses of our targets (see e.g. Figure 2 of A97). Masses obtained with this procedure are listed in the last column of Table 1 and plotted against metallicity in Figure 2. Also shown is the mass-metallicity relation defined by double-mode pulsators in the globular clusters M68 (Walker 1994), M15 (Nemec 1985) and IC 4499 (Clement et al. 1986, Walker & Nemec 1996) and two RR<sub>d</sub>’s in the Milky Way (Clement et al., 1991). Although there is some scatter, and there are only a few field objects, most of the LMC RR<sub>d</sub>’s seem to follow the general mass-metallicity relation defined by the cluster RR<sub>d</sub>’s. Hence, no clear-cut evidence is found for a difference in mass between field and cluster RR Lyraes.
## References
Alcock C. et al. 1996, AJ, 111, 1146, A96
Alcock C. et al. 1997, ApJ, 482, 89, A97
Bessel M.S. 1991, A&A, 242, L17
Bono G., Caputo F., Castellani, V. & Marconi M. 1996, ApJ, 471, L33
Clement C.M., Nemec J.M., Robert N., Wells T., Dickens R.J. & Bingham E.A. 1986, AJ, 92, 825
Clement C.M., Kinman T.D. & Suntzeff N.B. 1991, ApJ, 372, 273
Clementini G., Carretta E., Gratton R.G., Merighi R., Mould J.R. & McCarthy J.K. 1995, AJ, 110, 2319
Kemper E. 1982, AJ, 87, 1395
Kinman T.D., Stryker L.L., Hesser J.E., Graham J.A., Walker A.R., Hazen M.L., & Nemec J.M. 1991, PASP, 103, 1279
Landolt A. 1992, AJ, 104, 340
Lomb N.R. 1976, Ap. Space Sci., 39, 447
Nemec J.M. 1985, AJ, 90, 240
Petersen J.O. 1973, A&A 27, 89
Preston G.W. 1959, ApJ, 130, 507
Schechter, P.L., Mateo M. & Saha A. 1993, PASP, 105, 1342
Walker A.R. 1992, ApJ, 390, L81
Walker A.R. 1994, AJ, 108, 555
Walker A.R. & Nemec J.M. 1996, AJ, 112, 2026
# Comment on ”Quantum Scattering of Heavy Particles from a 10 K Cu(111) Surface”
In a recent Letter Althoff et al. reported a study of scattering of thermal Ne, Ar and Kr atoms from a Cu(111) surface in which they assessed the corresponding Debye-Waller factor (DWF) as a function of the particle mass $`m`$ in a wide range of substrate temperature $`T`$. The experiments were interpreted by the semiclassical DWF theory in which the projectile is treated as moving on the classical recoilless trajectory $`𝐫(t)`$ and the surface vibrations are quantized. This gives the DWF in the form $`I_{00}=\mathrm{exp}[-2W(m,T)]`$ where the Debye-Waller exponent $`2W`$ (DWE) in the essentially one-dimensional (1D) approach of Refs. and the limit $`T\to 0`$ depends on the particle incoming energy $`E_i`$, surface Debye temperature $`\mathrm{\Theta }_D`$ and the static atom-surface potential $`V(z)`$, but not on $`m`$. On the other hand, in the scattering regime $`T\gg \mathrm{\Theta }_\tau =\hbar /(k_B\tau )`$ and $`\mathrm{\Theta }_\tau <\mathrm{\Theta }_D`$, where $`\tau `$ is the effective collision time, it scales as $`2W\propto m^{1/2}T`$. However, the experiments described in were carried out in the quantum scattering regime in which, as we show below, neither of the above semiclassical scalings holds and the semiclassical DWE significantly deviates from the exact quantum one both in the low and high $`T`$ limits, irrespective of the functional form of $`V(z)`$. Hence, the quantum scattering data cannot be reliably interpreted by the semiclassical theory but rather require the quantum one.
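For orientation, under the high-temperature scaling $`2W\propto m^{1/2}T`$ the semiclassical DWE ratio of two projectiles at equal $`T`$ depends only on their mass ratio (a simple numerical illustration, not part of the full 3D calculation):

```python
import numpy as np

# Semiclassical prediction: at fixed T, 2W grows with the square root of the
# projectile mass, so exp(-2W) falls much faster for Kr than for Ne.
mass = {"Ne": 20.18, "Ar": 39.95, "Kr": 83.80}   # amu
for el in ("Ar", "Kr"):
    print(f"2W({el}) / 2W(Ne) = {np.sqrt(mass[el] / mass['Ne']):.2f}")
# -> 1.41 and 2.04
```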
To substantiate the above statements we carry out a fully three-dimensional quantum and semiclassical calculation of the DWF relevant to the experiments of Ref. . We start from the quantum DWE (cf. Ref. , Eq. (3)) in which we also include prompt sticking processes, because of their large contribution to the quantum DWF, and use theoretical $`V(z)`$'s which produce a closer fit of the semiclassical DWF to the data. The quantum DWF has the correct semiclassical limit, which enables pinpointing the breakdown of the semiclassical description. A comparison of the measured and calculated DWF's shown in Fig. 1 reveals general agreement between the measured and quantum, and not the semiclassical, values, and thereby establishes the validity of the quantum approach. A systematic small underestimate at very low $`T`$ appears because our results are uncorrected for $`T`$-independent diffuse elastic scattering.
# Recent results on self-dual configurations on the torus (presented by A. González-Arroyo)
## 1 Motivation
The purpose of this paper is to briefly review the progress made during the last year in the investigation of $`SU(N)`$ self-dual Yang-Mills (YM) configurations on the 4-dimensional torus ($`T^4`$). It is well-known that there is a procedure (the ADHM construction ) to construct all self-dual configurations on the sphere. Conversely, for the torus case not even the simplest fully non-abelian solution is known analytically (abelian-like solutions are known for some torus sizes). Constructing them provides a challenge for both physicists and mathematicians.
From the Physics point of view, compactification on the sphere amounts to the condition of finite action implying that the configuration approaches a pure gauge at infinity. This is not a physically necessary condition even at a purely classical level. Since the action is an extensive quantity it makes more sense to demand finiteness of the action density. In the Confinement regime, it is reasonable to expect that typical configurations are different from the classical vacuum almost everywhere. On the other hand, configurations on the torus can be looked at as periodic configurations on $`R^4`$ having finite action density. Although periodicity is also an unphysical feature for the dominant Yang-Mills configurations to have, knowledge of these configurations might help us to understand better some structures which might be present in the vacuum.
Besides, there are some cases in which one deals with periodicity in some of the variables. This can be seen as formulations on the non-compact manifolds $`T^n\times R^{4n}`$. The case $`n=1`$ occurs naturally when studying field theory at finite temperature. The case $`n=3`$ has been studied in relation with the Hamiltonian formulation for YM fields on the torus. Recently , it has been seen how the $`n=2`$ case is relevant in constructing $`SU(N)`$ YM vortex-like configurations in $`R^4`$. It is an interesting question to investigate how the $`T^4`$ configurations are related to these non-compact manifold ones.
## 2 Introduction
Yang-Mills fields on the torus are classified by the topological charge $`Q`$ and by the twist sectors (see Ref. for an introduction to the subject and a review of older results). The latter are a discrete number of sectors labelled by two 3-vectors of integers modulo $`N`$ ($`\stackrel{}{k}`$ and $`\stackrel{}{m}`$). The possible values of the topological charge are restricted by twist:
$$Q=\frac{\stackrel{}{k}\cdot \stackrel{}{m}}{N}+\mathrm{integer}.$$
(1)
Orthogonal twists are those for which $`\stackrel{}{k}\cdot \stackrel{}{m}=0`$ (mod $`N`$); the rest are called non-orthogonal. Only in this case is the topological charge an integer (and the action a multiple of $`8\pi ^2`$).
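The constraint (1) is easily evaluated for given twist vectors; the following small sketch (illustrative, using exact rational arithmetic) returns the fractional part of $`Q`$ for $`SU(2)`$:

```python
from fractions import Fraction

def q_fractional_part(k, m, N=2):
    """Fractional part of Q = k.m/N + integer (eq. 1); it vanishes exactly
    for orthogonal twists, i.e. when k.m = 0 mod N."""
    km = sum(ki * mi for ki, mi in zip(k, m))
    return Fraction(km, N) % 1

print(q_fractional_part((1, 0, 0), (1, 0, 0)))  # 1/2: allows Q = 1/2, 3/2, ...
print(q_fractional_part((1, 1, 0), (0, 0, 1)))  # 0: orthogonal twist, integer Q
```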
Self-dual solutions, if they exist, form a manifold whose dimensionality (up to gauge transformations) is given by $`4QN`$. Four of these modes correspond to space-time translations of the solution. Existence and non-existence have been proved in some cases. Particularly relevant is the non-existence of $`Q=1`$ self-dual configurations on the torus without twist ($`\stackrel{}{k}=\stackrel{}{m}=0`$ mod $`N`$).
Apart from the afore-mentioned quasi-abelian solutions, most of what is known about these solutions comes from numerical studies on the lattice. There is, however, an important duality (involutive) transformation –the Nahm transform – which maps $`SU(N)`$ self-dual configurations on the torus with topological charge $`Q`$, onto $`SU(Q)`$ self-dual configurations on the dual torus with topological charge $`N`$. Unfortunately, besides its role in the proof of Ref. , little use has been made of this property to increase our knowledge on these configurations. Part of the progress we report, has to do with fixing this state of affairs.
The study of self-dual configurations depends on the group rank, twist, topological charge and torus size. For simplicity the study has focused on $`SU(2)`$ and low values of the topological charge ($`Q=\frac{1}{2},1`$). This forces non-zero twist. For obvious symmetry reasons the study has centered on hypercubical tori of size $`l_s^3\times l_t`$. For large and small aspect ratios $`l_s/l_t`$ one can study and exploit the connection with $`R^3\times S_1`$ and $`T^3\times R`$ configurations. Now we will summarise what was known of these configurations before.
$`l_t/l_s\gg 1`$: For $`Q=\frac{1}{2}`$ (non-orthogonal twist) the solution is unique up to translations. The configuration is exponentially localised in time and as $`l_t/l_s`$ goes to infinity approaches a configuration known as the twisted instanton. For $`Q=1`$ the description of the resulting configuration depends on twist. For $`\stackrel{}{m}\ne 0`$ and $`\stackrel{}{k}=0`$ (mod $`N=2`$) (Space-like twist) the configuration is generically described by 2 twisted instantons separated in time. The 2 space-time locations describe the 8 parameters of the moduli space. For $`Q=1`$, $`\stackrel{}{k}\ne 0`$ and $`\stackrel{}{m}=0`$ (mod 2) (Time-like twist) one gets a family of (exponentially localised in time) configurations which, as $`l_t/l_s`$ goes to infinity, can be parametrised by the holonomies: the spatial Polyakov loops at infinite time.
$`l_s/l_t\gg 1`$: In this case the $`R^3\times S_1`$ configurations with $`Q=1`$ are known analytically . Generically, they are given by 2 lumps which for large separations are simply BPS monopoles of various masses, fixed by the holonomy of the configuration (the time-like Polyakov loop at spatial infinity). We recently studied, by lattice methods, the large $`l_s/l_t`$ configurations on the torus . These configurations neatly approach the analytic calorons with some restrictions on the moduli space. For $`Q=\frac{1}{2}`$ a single fixed-mass periodic BPS monopole is obtained. For $`Q=1`$ and time-like twist one obtains in each cell a couple of equal-mass constituent monopoles with arbitrary locations. Conversely, for $`Q=1`$ and space-like twist one gets a couple of variable-mass monopoles with fixed relative positions.
## 3 New results
In the last year there have been several developments which we will now list:
* A numerical method, based on lattice gauge theories, has been developed to implement the NT numerically . It allows to obtain the NT of lattice configurations approximating the self-dual continuum ones. The method has proved quite precise and stable.
* It has been shown how to extend the definition of the Nahm transform to the non-zero twist case . The extension preserves the main properties of the original NT. The transform of an $`SU(N)`$ self-dual configuration on the torus with topological charge $`Q`$ is now an $`SU(QN_0)`$ self-dual configuration on some Nahm-dual torus with topological charge $`N/N_0`$. The integer $`N_0`$ depends on twist and equals $`1`$ for zero twist.
* An analysis of the NT of $`Q=1`$ self-dual configurations on $`T^3\times R`$ for twisted boundary conditions in time has been performed. Although, the original self-dual configuration is not known, its NT is known to be an abelian field in $`T^3`$, which is self-dual everywhere except at certain pointlike singularities. These singularities act as dyonic sources and their location is determined by the holonomies (spatial Polyakov loop at infinite time) of the original $`T^3\times R`$ configuration. With this information one is able to construct this Nahm-dual abelian field everywhere.
* The numerical investigation of the periodic calorons mentioned before .
Equipped with this new information and techniques we recently studied the action of the NT on the torus configurations under concern. One finds that configurations with large and small $`l_s/l_t`$ are mapped onto each other by the NT. Furthermore, for $`Q=1`$ the time-like or space-like nature of the twist is preserved by the transformation. The results can be summarised as follows:
* The $`Q=\frac{1}{2}`$ twisted instanton maps onto the $`Q=\frac{1}{2}`$ periodic caloron. This can be tested by the numerical NT and shows remarkable precision (see fig. 1 of the second ref. in ).
* For $`Q=1`$ and time-like twist we see that the approximate holonomy of the $`l_t/l_s\gg 1`$ periodic instanton maps onto the relative position of two equal-mass constituent monopoles. Taking the limit $`l_t\to \infty `$ ($`l_s`$ fixed) we see that the dyonic singularities of the NT of $`T^3\times R`$ configurations are nothing but BPS monopoles with the non-abelian cores (of size $`1/l_t`$) shrunk to 0.
* For $`Q=1`$ and space-like twist, the holonomy of the large $`l_t/l_s`$ configuration is fixed, which explains the fixed relative position of the constituent monopoles of the Nahm transform. Furthermore, the time distance between the 2 twisted instantons of this configuration maps onto the holonomy (and hence the monopole masses) of the periodic caloron configuration. This is clearly depicted in Fig. 1.
For details the reader is referred to
In summary, all the known information is nicely linked non-trivially together by the NT. A general pattern mapping approximate holonomies to lump positions emerges from our study.
# Edinburgh-1999/12 Determination of $`V_{\mathrm{ub}}`$ from $`B\to \pi l\overline{\nu }`$ on the lattice
## 1 INTRODUCTION
Semileptonic decays of mesons containing a $`b`$ quark play an important role in the determination of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements. The transition amplitude of the decay $`B\to \pi l\overline{\nu }`$ factorizes into leptonic and hadronic parts. This hadronic matrix element can be parameterised by two form factors
$`\langle \pi (\stackrel{}{k})|V^\mu |B(\stackrel{}{p})\rangle `$ $`=`$ $`f_+(q^2)(p+k-q\mathrm{\Delta }_{m^2})^\mu `$ (1)
$`+`$ $`f_0(q^2)q^\mu \mathrm{\Delta }_{m^2}`$
where $`\mathrm{\Delta }_{m^2}=(m_B^2-m_\pi ^2)/q^2`$ and $`q=p-k`$. In the limit of zero lepton mass, the total decay rate is given by
$$\mathrm{\Gamma }=\frac{G_F^2|V_{\mathrm{ub}}|^2}{192\pi ^3m_B^3}\int _0^{\eta ^2}[\lambda (q^2)]^{3/2}|f_+(q^2)|^2dq^2$$
(2)
where $`\eta ^2=(m_B-m_\pi )^2`$ and
$$\lambda (q^2)=(m_B^2+m_\pi ^2-q^2)^2-4m_B^2m_\pi ^2.$$
(3)
We can determine the decay rate from the $`q^2`$ dependence of the form factor $`f_+(q^2)`$ and then compare to the experimental measure of the decay rate to extract $`V_{\mathrm{ub}}`$.
## 2 DETAILS OF THE CALCULATION
The 216 quenched gauge configurations were generated using the Wilson action on a $`24^3\times 48`$ lattice. The quark propagators were calculated using an $`𝒪(a)`$ improved action, where the coefficient $`c_{SW}`$ has been determined non-perturbatively (NP). We use four heavy quarks with masses around charm, $`(\kappa _H=0.1200,0.1233,0.1266,0.1299)`$. Three light quarks with masses around strange $`(\kappa _L=0.1346,0.1351,0.1353)`$ are used for the active propagator, and the heaviest two for the spectator. The heavy quarks were smeared and the light quarks fuzzed. The chiral limit has been determined to be $`\kappa _{\mathrm{crit}}=0.135815`$, and the physical value of $`m_\pi /m_\rho `$ corresponds to $`\kappa _n=0.13577`$. The lattice spacing is set by $`m_\rho `$ and $`a^{-1}=2.64`$ GeV.
We obtain the form factors from the heavy-to-light three-point correlation functions, using the masses and amplitudes from the heavy-light and light-light two-point correlation functions. The general method is given in . We place the operator for the heavy-light pseudoscalar meson at $`T=20`$ rather than the mid-point of the lattice to check for contamination from different time orderings. We use eight different combinations of $`\stackrel{}{p}`$ and $`\stackrel{}{k}`$ to determine the $`q^2`$ dependence; $`00`$, $`01`$, $`0\sqrt{2}`$, $`10`$, $`11`$, $`11_{}`$, $`11_{}`$ and $`1\sqrt{2}_{}`$ in lattice units. There is no $`00`$ channel for $`f_+`$.
### 2.1 Mass dependent renormalisation
We can also remove all $`𝒪(a)`$ errors from matrix elements of on-shell states by an appropriate definition of the currents. For the vector current for degenerate quarks of mass $`m_Q`$, we have
$$V_\mu ^R=Z_V(1+b_Vam_Q)\{V_\mu +c_Va\frac{1}{2}(\partial _\nu +\partial _\nu ^{*})T_{\mu \nu }\}$$
(4)
where $`V_\mu `$ and $`T_{\mu \nu }`$ are the local lattice vector and tensor currents respectively. Both $`b_V`$ and $`Z_V`$ have been determined non-perturbatively . Preliminary results for a non-perturbative determination of the mixing coefficient $`c_V`$ exist, but we use the one-loop perturbative estimate, which is small. Defining
$$Z_V^{\mathrm{eff}}\equiv Z_V(1+b_Vam_Q).$$
(5)
For the forward degenerate matrix element, we can calculate $`Z_V^{\mathrm{eff}}`$ from our data. We show the comparison to $`Z_V^{\mathrm{eff}}`$ evaluated for these quark masses using the non-perturbative $`Z_V`$ and $`b_V`$ in Table 1. The excellent agreement suggests higher order discretisation effects are limited.
## 3 CHIRAL EXTRAPOLATION
To evaluate the form factor $`f_i`$ at physical quark masses we must consider both the intrinsic dependence of $`f_i`$ and the indirect mass dependence arising from the change in $`q^2`$:
$$f_i=f_i(q_{(\kappa _A,\kappa _S)}^2,\kappa _A,\kappa _S).$$
(6)
In previous UKQCD analyses the $`q^2`$ dependence was modelled by an extra term. This is potentially difficult to control. Here we extrapolate whilst holding $`q^2`$ fixed. This approach yields a more reliable extrapolation. This is discussed in more detail in .
We first interpolate the form factors to a chosen set of $`q^2`$ values for each quark mass combination. The values of $`q^2`$ are chosen such that we interpolate for each light quark combination and that for different heavy quark masses, the sets of $`q^2`$ values correspond to the heavy quarks having the same velocity. This is discussed in the section on the heavy extrapolation.
The form of the interpolation function is motivated by pole dominance models,
$$f_i(q^2)=\frac{f_i(0)}{(1-q^2/m_i^2)^{n_i}}.$$
(7)
where $`i`$ is either $`+`$ or $`0`$. However, as we interpolate in $`q^2`$, any model dependence in the chiral extrapolation is mild; this is shown in figure 1.
We then extrapolate the form factors at fixed $`q^2`$ to $`\kappa _n`$ with the light quarks non-degenerate;
$`f(\kappa _S,\kappa _A)`$ $`=`$ $`\alpha +\beta \left({\displaystyle \frac{1}{\kappa _S}}-{\displaystyle \frac{1}{\kappa _{\mathrm{crit}}}}\right)`$ (8)
$`+\gamma \left({\displaystyle \frac{1}{\kappa _S}}+{\displaystyle \frac{1}{\kappa _A}}-{\displaystyle \frac{2}{\kappa _{\mathrm{crit}}}}\right).`$
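A minimal sketch of this fixed-$`q^2`$ fit (illustrative: the synthetic values below stand in for measured form factors; equation (8) is fitted by least squares and evaluated at $`\kappa _n`$):

```python
import numpy as np

def chiral_extrapolate(kappa_S, kappa_A, f, k_crit=0.135815, k_n=0.13577):
    """Fit f = alpha + beta (1/kS - 1/k_crit) + gamma (1/kS + 1/kA - 2/k_crit)
    at fixed q^2 and evaluate at the physical point kS = kA = k_n."""
    xS = 1.0 / np.asarray(kappa_S) - 1.0 / k_crit
    xA = 1.0 / np.asarray(kappa_A) - 1.0 / k_crit
    X = np.column_stack([np.ones_like(xS), xS, xS + xA])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(X, np.asarray(f), rcond=None)
    xn = 1.0 / k_n - 1.0 / k_crit
    return alpha + (beta + 2.0 * gamma) * xn

# synthetic data exactly of the form (8), so the fit is recovered exactly
kS = np.array([0.1346, 0.1346, 0.1351, 0.1351, 0.1353, 0.1353])
kA = np.array([0.1346, 0.1351, 0.1346, 0.1353, 0.1351, 0.1353])
f = 0.9 + 0.02 * (1/kS - 1/0.135815) + 0.01 * (1/kS + 1/kA - 2/0.135815)
print(chiral_extrapolate(kS, kA, f))
```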
## 4 HEAVY QUARK MASS EXTRAPOLATION
Heavy quark effective theory (HQET) is used to motivate the form of the extrapolation to the $`B`$ meson scale. The scaling relations $`f_+\propto \sqrt{M}`$ and $`f_0\propto 1/\sqrt{M}`$ are determined at fixed four-velocity $`v`$. Defining the recoil variable,
$$vk=\frac{M_P^2+m_\pi ^2q^2}{2M_P}$$
(9)
we can then extrapolate the form factors at fixed $`v\cdot k`$ to the $`B`$ meson scale:
$$Cf_i(v\cdot k)M_P^{s_i/2}=\gamma _i\left(1+\frac{\delta _i}{M_P}+\frac{ϵ_i}{M_P^2}\right)$$
(10)
where $`s_i=-1`$ when $`i=+`$, and $`s_i=+1`$ when $`i=0`$. The coefficient $`C`$ is the logarithmic matching factor,
$$C(M_P,m_B)=\left(\frac{\alpha _s(m_B)}{\alpha _s(M_P)}\right)^{2/\beta _0}$$
(11)
and $`\beta _0=11`$ in quenched QCD.
## 5 RESULTS
The resulting form factors are plotted in figure 2.
Pole dominance models, equation 7, combined with the heavy quark scaling relations suggest that $`n_+=n_0+1`$. Light-cone scaling further suggests $`n_0=1`$. We also impose the kinematic constraint $`f_0(0)=f_+(0)`$, to parameterise the form factors by a pole for $`f_0`$ and a dipole for $`f_+`$. A slightly more sophisticated pole/dipole parameterisation for $`f_0`$ and $`f_+`$, consistent with the same constraints, has been suggested by Becirevic and Kaidalov (BK) :
$`f_+(q^2)`$ $`=`$ $`{\displaystyle \frac{c_B(1-\alpha )}{(1-q^2/m_{B^*}^2)(1-\alpha q^2/m_{B^*}^2)}}`$
$`f_0(q^2)`$ $`=`$ $`{\displaystyle \frac{c_B(1-\alpha )}{(1-q^2/\beta m_{B^*}^2)}}.`$ (12)
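To make this last step concrete, the sketch below evaluates the BK parameterisation and integrates the decay rate numerically. The standard tree-level expression for $`d\mathrm{\Gamma }/dq^2`$ with a massless lepton is assumed here in place of equation 2 (which appears earlier in the write-up), and the values of $`c_B`$ and $`\alpha `$ are placeholders rather than our fitted parameters.

```python
import numpy as np
from scipy.integrate import quad

MB, MPI, MBSTAR = 5.2792, 0.1396, 5.3249   # B, pi and B* masses in GeV
GF = 1.16637e-5                            # Fermi constant in GeV^-2

def f_plus(q2, cB, alpha):
    """BK dipole form for f_+ (equation 12)."""
    x = q2 / MBSTAR**2
    return cB * (1.0 - alpha) / ((1.0 - x) * (1.0 - alpha * x))

def dGamma_dq2(q2, cB, alpha):
    """Tree-level dGamma/dq^2 for B -> pi l nu divided by |V_ub|^2,
    massless lepton (assumed form of equation 2)."""
    lam = (MB**2 + MPI**2 - q2)**2 - 4.0 * MB**2 * MPI**2   # Kallen function
    p_pi = np.sqrt(max(lam, 0.0)) / (2.0 * MB)              # pion momentum
    return GF**2 / (24.0 * np.pi**3) * p_pi**3 * f_plus(q2, cB, alpha)**2

cB, alpha = 0.4, 0.4                       # hypothetical fit parameters
rate, _ = quad(dGamma_dq2, 0.0, (MB - MPI)**2, args=(cB, alpha))
print(rate / 6.582e-25 * 1e-12, "ps^-1 per |V_ub|^2")   # hbar = 6.582e-25 GeV s
```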
We fit both parameterisations to the form factors. This is shown in figure 2. We can now use these models to calculate the total decay rate from equation 2. The results are
$$\mathrm{\Gamma }(B\to \pi l\nu )/|V_{\mathrm{ub}}|^2=9.0\pm 3.0\pm 3.2(\mathrm{ps})^{-1}$$
(13)
where the first error is statistical and the second is systematic. The systematic errors are estimated by trying different interpolation functions for the chiral extrapolation, a linear fit to the heaviest three quarks for the heavy extrapolation and estimates of the lattice spacing from different quantities, e.g. $`r_0`$. We can use this model dependent result to extract $`V_{\mathrm{ub}}`$ from experimental data ,
$$|V_{\mathrm{ub}}|=(3.7\pm 0.5\pm 0.7\pm 0.7)\times 10^{-3}.$$
(14)
The third error is the experimental error in the branching ratio.
This is a preliminary model dependent result. The form factors are well determined in the range 16–22 $`\mathrm{GeV}^2`$. The total decay rate is dominated by low $`q^2`$ due to phase space. Here we have no data and are reliant on models of the $`q^2`$ dependence. The differential decay rate could be used to extract $`V_{\mathrm{ub}}`$ in a model independent manner, but there is no experimental data available yet. This work was supported by EPSRC grant GR/K41663 and PPARC grant GR/L29927.
# CLUSTER EMISSION OF 8Be IN THE 28Si+12C FUSION REACTION AT LOW TEMPERATURE.
## I INTRODUCTION
In recent years, extensive efforts have been made to understand the decay of light di-nuclear systems (A$`<`$60) formed through low-energy heavy-ion reactions . In most of the reactions studied, the properties of the observed fully energy damped yields have been successfully explained in terms of either a fusion-fission (FF) mechanism or a deep-inelastic (DI) orbiting mechanism. The strong resonance-like structures observed in elastic and inelastic excitation functions of <sup>24</sup>Mg+<sup>24</sup>Mg and <sup>28</sup>Si+<sup>28</sup>Si have indicated the presence of shell stabilized, highly deformed configurations in the <sup>48</sup>Cr and <sup>56</sup>Ni compound systems respectively . The present work aims to investigate the possible occurrence of highly deformed configurations in the <sup>40</sup>Ca di-nucleus produced in the <sup>28</sup>Si+<sup>12</sup>C reaction through the study of light charged particle (LCP) emission.
## II EXPERIMENTAL PROCEDURES
The experiment was performed at the IReS Strasbourg VIVITRON tandem facility using 112.6 MeV <sup>28</sup>Si beams on a <sup>12</sup>C(160 $`\mu `$g/cm<sup>2</sup>) target. Both the heavy ions and their associated LCP’s were detected using the ICARE charged particle multidetector array . The heavy ions were detected in eight telescopes, each consisting of an ionisation chamber (IC) followed by a 500 $`\mu `$m Si detector. The in-plane coincident LCP’s were detected using four triple telescopes ( Si 40 $`\mu `$m, Si 300 $`\mu `$m, 2 cm CsI(Tl)), 16 double telescopes ( Si 40 $`\mu `$m, 2 cm CsI(Tl)) and two double telescopes (IC, Si 500 $`\mu `$m) located at the most backward angles. Typical inclusive $`\alpha `$ energy spectra are shown in Fig.1.a.
## III EXPERIMENTAL RESULTS AND STATISTICAL-MODEL CALCULATIONS
The data analysis was performed using CACARIZO , the Monte Carlo version of the statistical-model code CASCADE. The angular momentum distribution, needed as the main input to constrain the calculation, was taken from <sup>28</sup>Si+<sup>12</sup>C complete fusion data . The other ingredients, such as the nuclear level densities and the barrier transmission coefficients, are usually deduced from the study of the evaporated LCP spectra. Standard statistical-model calculations are not able to reproduce the shape of experimental $`\alpha `$-particle energy spectra satisfactorily . Several attempts have been made to explain this anomaly either by changing the emission barrier or by using a spin-dependent level density. In hot rotating nuclei, the level density at higher angular momentum should be spin dependent. In CACARIZO, the level density, $`\rho (E,J)`$, for a given angular momentum $`J`$ and energy $`E`$ is given by the well known Fermi gas expression:
$$\rho (E,J)=\frac{(2J+1)}{12}a^{1/2}\left(\frac{\hbar ^2}{2\mathrm{\Im }_{eff}}\right)^{3/2}\frac{1}{(E-\mathrm{\Delta }-E_J)^2}\mathrm{exp}(2[a(E-\mathrm{\Delta }-E_J)]^{1/2}),$$
where $`a`$ is the level density parameter, $`\mathrm{\Delta }`$ is the pairing correction, $`E_J=\frac{\hbar ^2}{2\mathrm{\Im }_{eff}}J(J+1)`$ and $`\mathrm{\Im }_{eff}=\mathrm{\Im }_0\times (1+\delta _1J^2+\delta _2J^4)`$ with $`\mathrm{\Im }_0`$ the rigid body moment of inertia and $`\delta _1,\delta _2`$ the deformability parameters. By changing the deformability parameters one can simulate the deformation effects on the level densities. For the <sup>28</sup>Si + <sup>28</sup>Si reaction , the shapes of the inclusive and exclusive $`\alpha `$ energy spectra are well reproduced by using large deformation effects . Similarly the experimental inclusive $`\alpha `$ energy spectra for <sup>28</sup>Si + <sup>12</sup>C of Fig.1.a are better described by using deformation effects (dotted lines) than with the standard liquid-drop deformation (dashed lines).
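A minimal numerical sketch of this level density is given below; the level density parameter $`a=A/8`$ MeV<sup>-1</sup>, the rigid-body moment of inertia and the deformability parameters are illustrative assumptions, not the inputs actually used in CACARIZO.

```python
import numpy as np

HBARC = 197.327   # MeV fm

def level_density(E, J, A=40, delta=0.0, r0=1.2, d1=0.0, d2=0.0):
    """Spin-dependent Fermi gas level density rho(E, J) as in the
    expression above; E and delta in MeV, A is the mass number
    (40 for the 40Ca di-nucleus), d1 and d2 are the deformability
    parameters delta_1 and delta_2."""
    a = A / 8.0                                  # level density parameter, MeV^-1
    R = r0 * A ** (1.0 / 3.0)                    # nuclear radius, fm
    I0 = 0.4 * A * 938.9 * R**2 / HBARC**2       # rigid-body (2/5)MR^2 over hbar^2, MeV^-1
    Ieff = I0 * (1.0 + d1 * J**2 + d2 * J**4)    # deformed moment of inertia
    EJ = J * (J + 1) / (2.0 * Ieff)              # rotational energy, MeV
    U = E - delta - EJ                           # intrinsic excitation energy
    if U <= 0.0:
        return 0.0
    return ((2 * J + 1) / 12.0 * np.sqrt(a) * (1.0 / (2.0 * Ieff)) ** 1.5
            / U**2 * np.exp(2.0 * np.sqrt(a * U)))

print(level_density(E=60.0, J=20, d1=2e-4))      # toy numbers
```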
The exclusive energy spectra of $`\alpha `$-particles in coincidence with individual S and P ER’s shown in Figs.1.b and 1.c are quite interesting. The dotted lines are the predictions of CACARIZO using non-zero values of the deformability parameters. The energy spectra associated with S are completely different from those associated with P. The latter are reasonably well reproduced by the CACARIZO curves whereas the model could not predict the shape of the spectra obtained in coincidence with S (Fig.1.b). This is due to the fact that an additional component might be significant in this case. One could suggest the hypothesis of a contribution arising from the decay of unbound <sup>8</sup>Be produced in a binary reaction <sup>40</sup>Ca $`\to `$ <sup>32</sup>S+<sup>8</sup>Be. In order to determine the sources of both the $`\alpha `$ emission and <sup>8</sup>Be breakup the invariant cross sections in coincidence with P and S are plotted in Fig.2 in the $`(V_{\parallel },V_{\perp })`$ plane. Fig.2.b shows the invariant cross sections in coincidence with P, whose maxima are centered on the compound nucleus velocity as expected for a fusion-evaporation mechanism. Fig.2.a presents two additional contributions (in circles) for angles close to 30° and 60° arising from the binary decay of unbound <sup>8</sup>Be. This conclusion is also consistent with the ER kinematical analysis of the S and P exclusive energy spectra (not shown here). The question of the real nature (FF or orbiting) of this decay process remains to be explored.
## IV SUMMARY
The $`\alpha `$-particle energy spectra measured in coincidence with S have an additional component which may come from the decay of <sup>8</sup>Be, which is unbound and produced through the binary decay <sup>40</sup>Ca $`\to `$ <sup>32</sup>S + <sup>8</sup>Be. Work is in progress to analyse the proton energy spectra as well as the in-plane angular correlations of both the $`\alpha `$-particles and the protons.
# Any Recent Progress in the Theory of Pulsating Stars?
## 1 Introduction
Well in accordance with the information era in which we live, variable-star research prospers, producing huge amounts of high-quality data. For the various families of pulsators, a large number of member stars are monitored regularly. Concerning theory, this calls for new – probably statistical – avenues to be taken to eventually improve our understanding of stellar physics and evolution. We are finally no longer confined to detailed studies of single objects only, but have access to statistically significant ensembles. Therefore, conceptual advances are to be expected once the ensemble aspect is given proper consideration in theoretical studies.
The last few years, however, have mostly seen the continuation of traditional approaches to stellar pulsation theory. The contributions from these approaches show that important basic aspects are still to be clarified, and even discovered. We have not yet reached a plateau in our understanding with only minor quantitative bunny-hill problems left to be solved.
Section 2 of this contribution deals with classical, Newtonian stars for which formal tool-making aspects and excitation results are discussed. Section 3 is devoted to compact objects for which we have seen a considerable body of papers appearing recently. Much of the excitement is generated by additional efforts in theory in connection with the upcoming gravitational-wave detectors, and the recent launch of the Chandra spacecraft.
## 2 Newtonian Stars
### 2.1 Formal Developments
Lee and Saio (1986, 1987) introduced a series expansion of the latitudinal ($`\theta `$) dependence of pulsational perturbations in rotating stars. The (usually rather low) number of terms in this series representation always left some doubt about the quality of the results, as it was not obvious if the dominant contributions were included since the basis functions – associated Legendre polynomials – generally do not converge quickly. This phase has finally been overcome with the introduction of a direct integration method for Laplace’s tidal equation in the $`\theta `$ direction (Bildsten et al. 1996, Lee & Saio 1997). Solving this additional eigenvalue problem provides further sets of eigenvalues associated with modified g and r modes and oscillatory convective modes, respectively. An important observation is that the g modes’ amplitudes become increasingly more confined to low latitudes as the ratio of the rotational to the oscillation frequency increases. Oscillatory convective modes, on the other hand, have only low amplitudes close to the equator. Lee & Saio (1997) concluded positively that the instability results for resonant couplings of envelope modes with oscillatory convective modes in upper main-sequence stars (Lee & Saio 1986) remain valid when using the new scheme, rather than the crude two-term approximation used in the past.
For tidally disturbed stars in close binary systems, Savonije & Papaloizou (1997) went a step further, fully accounting for the Coriolis force and casting the perturbation problem into a two-dimensional boundary-value problem which was solved with a complex-valued, finite-difference approach in both radial and latitudinal directions. The effect of the time-dependent external tidal potential was included as a fixed-$`\mathrm{}`$ (quadrupole) perturbation with adjustable orbital frequency. Based on this scheme, Witte & Savonije (1999) studied the tidal exchange of energy and angular momentum in a rotating, massive, slightly evolved main-sequence star in an eccentric binary system. For various uniform rotation rates and series of orbital periods, the resonances with r and g modes were computed. These resonances appear to efficiently alter the orbital evolution, as well as synchronize the massive star’s rotation.
### 2.2 Excitation of Pulsations
The most celebrated pulsating variables of the recent past are clearly the EC 14026 or variable sdB (sdBV) stars. Their popularity arose from the rare instance in which the theoretical prediction appeared before the publication of observational evidence of such objects (Charpinet et al. 1996, Kilkenny et al. 1997). The observational data for this class continues to grow rapidly (e.g. Koen et al. 1998a) and an instability domain is beginning to emerge from these data. Theoretically, the instability is clearly induced by the Z-bump in the opacity. The sdBVs have rather large surface gravities and correspondingly high densities in their envelopes. A consideration of the ‘opacity mountain’, plotted over the density-temperature plane, shows that the sharpness of the Z-bump decreases as the density increases in the critical temperature interval. It is then no surprise that canonical heavy-element abundances are not sufficient to provide enough driving to overcome radiative damping. Nevertheless, the radiation field seems to be appropriate for radiative levitation to induce a spatial distribution of heavier ions to sufficiently steepen the Z-bump so that eventually the model stars become pulsationally overstable in the observed temperature – $`\mathrm{log}g`$ domain (e.g. Fontaine et al. 1998). Concerning the particularities of the Z-bump, PG 1605+072 is of interest (Koen et al. 1998b). It shows about a dozen, and possibly even more, oscillation modes with periods roughly in the range of 200 to 540 s. These periods are a factor two to three longer than what is found in the other sdBV stars. Spectroscopic deduction of the stellar parameters indicates that PG 1605+072 has a surface gravity up to 0.5 dex lower than the rest of the class. This might indeed hint at an enhanced excitation of p modes under lower-density conditions.
The success with the sdBV class fostered evolutionary and pulsational studies of post-horizontal branch stars evolving towards the white dwarf cooling tracks across the subdwarf domain. Therefore, the prediction of gravity-mode instabilities induced by the $`ϵ`$-mechanism in the thin H-burning shell of low-mass stars having settled close to the white dwarf cooling track is not surprising (Charpinet et al. 1997). Potential instabilities were found for low-$`\ell `$ and low radial-order g modes with periods between 40 and 120 s. The instability region extends from about 4.64 to 4.88 in $`\mathrm{log}T_{\mathrm{eff}}`$. The corresponding objects in the sky would probably be classified as DAO stars. Currently, however, there is no observational evidence for the existence of such variables.
Based on the coincidence of the thermal time-scale on top of a prospective driving region and the pulsation period, it is clear that the H I/He I ionization zone must play the dominant role in the excitation of the high-order p modes of roAp stars. Dziembowski & Goode (1996) also argued in this direction, but they were not successful in actually finding overstable modes. After postulating a chromosphere, Gautschy et al. (1998) could raise the outer edge of the p -mode cavity sufficiently high to eventually identify overstable roAp-like acoustic modes. Homogeneous stellar envelopes allowed, however, rapidly oscillating Ap star modes, as well as longer-period $`\delta `$ Scuti-like modes, to be excited. Only after additionally introducing chemically stratified envelopes (hypothetically caused by element sedimentation) could they restrict the excited modes to the short-period domain. The Gautschy et al. (1998) picture still has weak points, such as the absence of any observational evidence for chromospheres in roAp stars, and too many simultaneously excited modes of various spherical degrees.
The nature, and in particular the physics, of strange modes is still debated. Their origin is becoming considerably clearer now. Buchler et al. (1997) even found strange modes in Cepheid models. Their strange modes can be analyzed even in the adiabatic limit, very much like in the massive main-sequence star models of Glatzel & Kiriakidis (1993) or Saio et al. (1998). Finding strange modes in the adiabatic limit removes much of their strangeness. It is seen that they owe their existence to a sharp ridge in the acoustic cavity where waves with appropriate oscillation frequency are effectively reflected, giving birth to an additional oscillation spectrum of the stellar envelope. Also, the large growth rates of these modes – let us call them adiabatic strange modes – are easily explainable. They are confined so much to the outermost stellar layers that the mode inertia is very small. From quasi-adiabatic treatments we know that the imaginary part of the eigenfrequency scales inversely with the mode inertia. Therefore, even for a rather low driving efficiency, the growth rate can become very large. These adiabatic strange modes are, however, only part of the whole story. As pointed out in Saio et al. (1998), there are stars with ‘nonadiabatic strange modes’ that do not show up in the adiabatic limit. In this latter case the cavity splitting is believed to show up only in the fully nonadiabatic case. In other words, for these modes, the interaction between the thermal and the mechanical reservoir of the oscillator is essential. The nonadiabatic strange modes are of particular interest, since they contain all the ‘strange’ mode physics, such as instability bands in the nonadiabatic reversible limit. The crossings of adiabatic strange modes with regular modes, on the other hand, unfold into avoided crossings only. The detailed physics of the unfolding is still unexplained. A first step towards understanding the instability of nonadiabatic strange modes was made by Saio et al. (1998). In a simple model system, the radiation-pressure gradient is identified to drive strange-mode instabilities very much like the gas-pressure gradient does for dynamical instabilities. Numerical experience shows indeed that nonadiabatic strange modes pop up whenever the model stars show strongly radiation-dominated layers.
Partly in connection with strange modes in massive stars a well-documented controversy between Glatzel & Kiriakidis (1998) and Stothers (Stothers & Chin 1993, Stothers 1999) developed around the concept of dynamical instability. Stothers (1999) attributed nonlinear oscillatory instabilities of massive model-star envelopes to the nonadiabatic manifestation of an adiabatically diagnosed $`\langle \mathrm{\Gamma }\rangle <4/3`$ instability. The brackets denote a suitably defined spatial average; this suitability was also a point of controversy. Glatzel & Kiriakidis (1993) criticized dynamical-instability claims by Stothers & Chin (1993) and argued for the necessity of a fully nonadiabatic treatment to understand LBV variability and eruptions in these stars. Even if the nonadiabatic, nonlinear simulations of dynamically unstable models (computed with the adiabaticity constraint) are strongly pulsationally unstable, it is formally unclear how to connect the pulsational with the dynamical instability. In a system in which the time-scale of energy exchange and the time-scale for sound propagation through the system become comparable, the concept of dynamical instability loses its foundation. Adopting, however, a more pragmatic point of view, we can certainly say that whenever we come across a stellar model with a dynamically unstable adiabatic fundamental mode, it is worth a closer look as it is obviously hardly bound anymore, even in a thermodynamically more appropriate framework.
Based on the Chandrasekhar–Milne expansion of the mechanical structure equations for a rotating star, Lee (1998) investigated the influence of rotational deformation on the stability of axisymmetric acoustic and gravity modes in B-type main sequence stars. Acoustic nonradial modes were found to be stabilized at high rotation frequencies, i.e. for rotation speeds approaching break-up speed. Quasi-radial modes, however, remained overstable even in the very rapid rotators. The results were, therefore, comparable with the conclusions by Lee & Baraffe (1995), who studied the same phenomenon for non-axisymmetric perturbations. Lee’s (1998) explorative nonadiabatic investigation of low-frequency g modes, representative for the pulsation modes in slowly pulsating B stars (SPBs), led him to conclude that they are unlikely to be damped out by the star’s rotation, despite the fact that the importance of rotation, measured by the ratio of the rotational to the oscillation frequency, is higher for the g modes than for the acoustic modes. Quasi-adiabatic eigensolutions by Ushomirsky & Bildsten (1998) showed, on the other hand, that g modes appropriate for SPBs could be damped by rotation. Furthermore, oscillation modes which were found to be stable in non-rotating models could be spun up to overstability. Ushomirsky & Bildsten (1998) argued that it is the period measured in the co-rotating frame of reference which is the important quantity to fit the thermal time-scale of the envelope above the driving region. Lee’s (1998) view is different; he argues that the mean radius of the equipotential surfaces, introduced in his analyses, shifts the excitation region to larger effective radii as the rotation speed of a star increases. Thereby, the effective period which can match the thermal time-scale of the envelope overlying the excitation region drops. This effect was found to be more pronounced for the p modes in $`\beta `$ Cephei stars than for the g modes in SPBs. In contrast to the Ushomirsky & Bildsten (1998) analysis, which relies on the direct solution of the tidal equation, Lee (1998) used a two-term expansion in the $`\theta `$-dependence of the perturbations which might not be fully adequate for the problem. In other words, the quasi-adiabatic analyses versus the Legendre-polynomial expansions in latitude have not yet converged to a common prediction. In any case, it appears to be clear that rotating models are necessary to fully understand the observed populations of blue main-sequence pulsators, particularly in stellar clusters. It might well be that rotation removes some of the instabilities found in non-rotating models. Rapid rotators might also be unstable at lower effective temperatures than non-rotating stars. As stellar rotation constricts the amplitudes of g modes to lower stellar latitudes, the lack of rapid rotators observed among SPB stars (e.g. Balona & Koen 1994) might possibly be attributable to an observational selection effect in the end.
The behavior of r modes in differentially rotating stellar envelopes was studied by Wolff (1998) in connection with solar oscillations. The geostrophic mode with vanishing oscillation frequency measured in the co-rotating frame picks up a finite oscillation frequency in the differential rotation case. The r -mode spectrum then consists of a slow and a fast branch. Depending on the magnitude of rotational shear and the generalized spherical degrees of rapid and slow r modes, their characters can converge. They can even merge in frequency space (as a function of shear parameterization) and thereafter cease to exist.
## 3 Compact Objects
As white dwarfs cool, the central density eventually rises sufficiently for the stellar matter to pass through a first-order phase transition to crystallize. The existence of a crystalline interior has consequences for the interpretation of the white dwarfs’ luminosity function and therefore their dating. As DA white dwarfs can show up as g mode pulsators whose instability region – measured in the $`\rho `$-$`T`$ plane – extends into the domain where crystallization could have started, they are prospective candidates to search for observable consequences of a solid interior. Indeed, the DAV star BPM 37093 is the prime candidate for such investigations. Bradley (1996) contemplated that crystallization should manifest itself in the rate of period change which differs from a star with a Coulomb-fluid interior, first because of the release of latent heat at crystallization and later on due to Debye cooling. The monitoring time necessary to detect such an effect would be very long – of the order of ten to twenty years. The effect dominating the period change might, however, be the growing of the crystallized sphere in the star’s interior (Winget et al. 1997). The latter authors suggested, based on a simplified treatment of the linear eigenvalue problem, the measurement of the period spacings of identified g modes in stars like BPM 37093 to deduce the magnitude of crystallization. They found that the mean period spacing, measured relative to an uncrystallized star with otherwise identical properties, increases by up to about 30%, depending on the mass fraction of the crystallized interior. As the mean period spacing of g modes also depends on other stellar parameters, such as thickness of H and He layers and position on the HR plane, complementary mode properties have to be found to measure crystallization conclusively.
A more favorable environment to observe effects of solid stellar matter was discussed by Duncan (1998). He suggested a search for seismic toroidal modes excited during star-quakes in neutron stars. If soft gamma repeaters (SGR) are highly magnetized neutron stars experiencing fracture events in their solid crusts, global seismic oscillations can be excited very much like on Earth. As the crust is rather superficial, seismic modes might attain significant amplitudes on the surface to make them observable. Duncan (1998) therefore proposed a search for periodicities in SGR signals from some ten Hz upwards.
Considerable efforts have gone into studying relativistic oscillations of compact objects. A huge body of data exists showing time-variable phenomena in the frequency domain from Hz to kHz which are, in principle, attributable to oscillations on neutron stars. Much of the present enthusiasm originated from the prospect to observe such signatures in the forthcoming gravitational wave detectors (Owen et al. 1998).
The Chandrasekhar–Friedman–Schutz (CFS) instability is known to destabilize spheroidal modes in inviscid rotating relativistic bodies. Under the influence of viscous forces, only f modes with $`\ell =m=2`$ survive in objects rotating close to break-up speed (e.g. Lindblom 1995). Therefore, the CFS instability was considered to be of little importance in nature. This situation changed when Andersson (1998) found that toroidal modes in rotating stars are strongly overstable to the CFS gravitational-radiation reaction. The instability grows proportional to $`(\mathrm{\Omega }\sqrt{R^3/M})^{10}`$ for sectoral dipole modes; $`\mathrm{\Omega }`$ stands for the rotation rate of a star with mass $`M`$ and radius $`R`$. Friedman & Morsink (1998) proved that Andersson’s numerical finding applies to any relativistic rotating body. For most of the r modes, the CFS instability persists at any value of $`\mathrm{\Omega }`$. Hence, the severe constraint of very rapidly rotating relativistic bodies associated with spheroidal modes disappeared. Nevertheless, to estimate which modes could survive in realistic neutron stars, the effects of rather uncertain viscous processes needed to be studied. First attempts revealed that r -modes might survive in the temperature window between $`10^9`$ and $`10^{10}`$ K. Below the lower boundary, shear viscosity damps the r -modes, and above about $`10^{10}`$ K bulk viscosity over-compensates the CFS instability. In this scenario, young hot, rapidly spinning neutron stars are expected to pass through the instability window within the first few years after their birth, losing up to $`0.01M_{\odot }c^2`$ of energy, thereby slowing their rotation rate to five to ten percent of the break-up rotation speed. This mechanism could explain why some young pulsars are observed to rotate slowly compared with what is expected from angular-momentum conservation during collapse. Furthermore, the r -mode instability would not permit milli-second pulsars to form via accretion-induced collapse of white dwarfs. The white dwarfs would get too hot in the process and lose much of their angular momentum through the CFS instability. Instead, milli-second pulsars must be formed by means of accretion onto a neutron star, keeping the recipient object always at sufficiently low temperature to suppress the r -mode instability. Observationally, gravitational wave radiation generated by the CFS instability is now expected to be detectable by enhanced LIGO interferometers to distances as far as the Virgo cluster (Owen et al. 1998).
The r -modes referred to in the last paragraph are only one sub-class of inertial modes that owe their existence to rotation. Others, for which the spheroidal contribution to the eigensolutions is of comparable strength to the toroidal one, were investigated for CFS instability by Yoshida & Lee (1999). They find that the ‘inertial modes’ in these stars are also CFS unstable, but less so than the r -modes. Nevertheless, the most overstable inertial modes appear to survive even in the dissipative case.
Thermally or compositionally stratified neutron stars also possess g modes with sufficiently long periods to satisfy the condition for CFS instability. Lai (1999) investigated this case and concluded that g modes become overstable by means of the CFS instability if the rotation rate is comparable to or larger than the considered g -mode frequency. Viscosity seems to extinguish the instability, except possibly around temperatures of $`10^9`$ K. The g -mode instability is, in any case, orders of magnitude weaker than the inertial-mode instability and might, therefore, not be of importance in nature.
Gravity-mode oscillations in neutron stars have been a focus of numerous studies in the past (see e.g. Gautschy & Saio 1995). Lately, Bildsten & Cumming (1998) added a new prospective g -mode cavity to the theoretical picture. They proposed an additional class of g modes in matter-accreting neutron stars. These modes owe their existence to the compositional inhomogeneity developing at the base of the hydrogen and helium layer that builds up on the surface and transmutes into iron-group elements by unstable H/He burning and by electron captures. Very much like at the outer edge of the convective cores of massive main-sequence stars, a sharp gradient in the molecular weight builds up which acts as a cavity for the previously mentioned g modes and an interface mode. In the case of the upper main-sequence stars this kind of gravity mode was baptized core g modes. In the case of neutron stars the cavity lies rather close to the surface, actually at the base of the superficial suprafluid ocean. We refer to modes being essentially trapped in this composition interface as interface g modes. In addition, if the neutron star’s background is not isentropic, the non-vanishing Brunt–Väisälä frequency permits the ‘normal’ thermal buoyancy g modes to show up. The resulting frequency spectrum can then become rather intricate as the frequency domains of the interface and thermal g modes can overlap. In the adiabatic mode treatment of Bildsten & Cumming (1998) the resulting mode interaction between the two families unfolds into avoided crossings.
The complicated nuclear physics and thermodynamics in neutron stars can produce a multitude of narrow g mode cavities by density jumps alone. Much of the attention given to the interface g modes at the bottom of the superficial suprafluid originates from the frequencies of these modes, which are compatible with observed QPOs in the frequency domain of a few times 10 Hz. The adiabatic properties of the interface modes in slowly rotating neutron stars have been well discussed. The case of rapid rotation – measured relative to the g mode frequency – has still to be tackled in more detail. In particular, however, the excitation mechanisms of the interface g modes and the f -type interface mode have to be addressed. As the interface modes are well trapped in the compositional transition region, they will hardly see anything of the overlying H/He burning. A conceivable driving agent could, however, be an electron-capture instability of the transition layer becoming oscillatory by nonadiabaticity so close to the neutron star’s surface.
###### Acknowledgements.
The Swiss National Science Foundation supported the author via a PROFIL2 fellowship during the writing of this review and Vienna’s Café Eiles provided a comfortable Bohemian atmosphere to finalize it. As usual, H. Harzenmoser gave selflessly away his knowledge of the literature collected in his unequaled brain-oriented database. Finally I would like to thank L. Szabados and D. Kurtz for their editing efforts.
# Partition function zeros of the 𝑄-state Potts model for non-integer 𝑄
## I introduction
The $`Q`$-state Potts model on a lattice $`G`$ for integer $`Q`$ is defined by the Hamiltonian
$$\mathcal{H}=-J\underset{\langle i,j\rangle }{\sum }\delta (\sigma _i,\sigma _j),$$
(1)
where $`J`$ is the coupling constant, $`\langle i,j\rangle `$ indicates a sum over nearest-neighbor pairs, and $`\sigma _i=1,\dots ,Q`$. Fortuin and Kasteleyn have shown that the partition function can be written as
$$Z=\underset{G^{\prime }\subseteq G}{\sum }Q^{n(G^{\prime })}(e^{\beta J}-1)^{b(G^{\prime })},$$
(2)
where the summation is taken over all subgraphs $`G^{\prime }\subseteq G`$, and $`n(G^{\prime })`$ and $`b(G^{\prime })`$ are, respectively, the number of clusters and occupied bonds in $`G^{\prime }`$. In Eq. (2) $`Q`$ need not be an integer and Eq. (2) defines the partition function of the $`Q`$-state Potts model for non-integer $`Q`$. In this paper we discuss the ferromagnetic (FM) and antiferromagnetic (AF) properties of the two-dimensional Potts model for non-integer $`Q`$ using the partition function zeros in the complex temperature plane (Fisher zeros).
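Since Eq. (2) is a finite sum over bond subsets, it can be evaluated directly at any real $`Q`$ for very small graphs. The following sketch (a brute-force Python evaluation; the four-site plaquette used as input is purely illustrative) makes the definition concrete.

```python
from itertools import combinations

def potts_Z_fk(sites, bonds, Q, v):
    """Fortuin-Kasteleyn partition function, Eq. (2), with
    v = exp(beta*J) - 1; Q may be any real number. The cost grows as
    2**len(bonds), so this is only for tiny graphs."""
    def n_clusters(edge_subset):
        parent = {s: s for s in sites}           # union-find cluster count
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for i, j in edge_subset:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
        return len({find(s) for s in sites})

    return sum(Q ** n_clusters(sub) * v ** nb
               for nb in range(len(bonds) + 1)
               for sub in combinations(bonds, nb))

sites = [0, 1, 2, 3]                             # a single square plaquette
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(potts_Z_fk(sites, bonds, Q=2.0, v=1.0))    # integer Q: matches the spin sum
print(potts_Z_fk(sites, bonds, Q=1.5, v=1.0))    # non-integer Q is equally valid
```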
We have calculated the exact partition function of the Potts model on finite lattices of size $`3\le L\le 8`$ by the Chen-Hu algorithm based on the Fortuin-Kasteleyn representation. We have used self-dual boundary conditions for which the Fisher zeros of the Potts model map into themselves under the duality transformation, $`p\to 1/p`$, where $`p=(e^{\beta J}-1)/\sqrt{Q}`$. The self-dual lattices considered in this paper are periodic in the horizontal direction and there is another site above the $`L\times L`$ square lattice, which connects to $`L`$ sites on the first row.
Fig. 1 shows the Fisher zeros in the complex $`p`$ plane of the Potts model for several values of non-integer $`Q`$. For $`Q<1`$ no zero lies on the unit circle $`p_0=e^{i\theta }`$ and as $`Q`$ approaches one from below some zeros approach the unit circle. In Fig. 1a we have omitted half of the Fisher zeros, which give no additional information because they can be obtained from the zeros shown using the duality transformation. For $`Q>1`$ zeros begin to lie on the unit circle $`p_0`$ and the number of zeros on the unit circle increases with increasing $`Q`$. Fig. 1b shows the Fisher zeros in the range $`1<Q<2`$. The two circles shown are the FM and AF loci for the 2-state (Ising) model in the thermodynamic limit. For $`Q>2`$ the zeros in the AF region ($`\mathrm{Re}(p)<0`$) become more scattered. Finally, as shown in Fig. 1d, for large $`Q`$ all the zeros lie on the unit circle $`p_0`$. The distribution of zeros varies continuously with $`Q`$ with the exception of $`Q=1`$ and $`Q=2`$. For $`Q=1`$ the zeros are all degenerate at $`p=-1`$, while for $`Q=2`$ the symmetry between the FM and AFM Ising model forces the AF zeros to lie on the locus $`p=-\sqrt{2}+e^{i\theta }`$.
## II the ferromagnetic Potts model
There is no rigorous proof that the FM critical point is $`p_c=1`$ for $`Q<4`$ except for $`Q=2`$. For $`1<Q<4`$ we observe that the zero closest to the positive real axis always lies on the unit circle and approaches the critical point $`p_c=1`$ in the thermodynamic limit. For $`Q<1`$, however, because no zero lies on the unit circle for a finite-size lattice, we need to calculate the FM critical point from the zero closest to the point $`p_c=1`$ (the first zero). By using the Bulirsch-Stoer (BST) algorithm we extrapolated our results for finite lattices to infinite size. The error estimates are twice the difference between the ($`n-1,1`$) and ($`n-1,2`$) approximants. We find that the first zero converges on the critical point $`p_c=1`$.
From the first zero, $`p_1`$, we have calculated the thermal exponent $`y_t(L)`$ defined as
$$y_t(L)=-\frac{\mathrm{ln}\{\mathrm{Abs}[p_1(L+1)-1]/\mathrm{Abs}[p_1(L)-1]\}}{\mathrm{ln}[(L+1)/L]}$$
(3)
or
$$y_t(L)=-\frac{\mathrm{ln}\{\mathrm{Im}[p_1(L+1)]/\mathrm{Im}[p_1(L)]\}}{\mathrm{ln}[(L+1)/L]}.$$
(4)
For $`Q<1`$ the imaginary part of the first zero is not a monotonic function of $`L`$, and so in this range we used Eq. (3) to calculate $`y_t`$. For $`Q>1`$ Eq. (4) was found to give the best estimate. Fig. 2 shows the BST estimates of the thermal exponent which are in excellent agreement with the den Nijs formula $`y_t=(3-3x)/(2-x)`$, where $`x=(2/\pi )\mathrm{cos}^{-1}(\sqrt{Q}/2)`$. Blöte et al. have obtained results similar to Fig. 2 using heat capacity data on infinitely long strips.
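For reference, the den Nijs prediction is simple to evaluate numerically; the snippet below reproduces the exact values $`y_t=1`$ at $`Q=2`$ and $`y_t=3/2`$ at $`Q=4`$.

```python
import numpy as np

def y_t_den_nijs(Q):
    """Thermal exponent from the den Nijs formula, valid for 0 < Q <= 4."""
    x = (2.0 / np.pi) * np.arccos(np.sqrt(Q) / 2.0)
    return (3.0 - 3.0 * x) / (2.0 - x)

for Q in (1.0, 2.0, 3.0, 4.0):
    print(Q, y_t_den_nijs(Q))   # 0.75, 1.0, 1.2, 1.5
```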
## III the antiferromagnetic Potts model
For AF interaction $`J<0`$ the physical interval is $`0\le e^{\beta J}\le 1`$, which corresponds to $`-1/\sqrt{Q}\le p\le 0`$. Baxter has conjectured the existence of an AF critical point for $`Q\le 3`$ at $`p_c=(\sqrt{4-Q}-2)/\sqrt{Q}`$, and we expect that in the thermodynamic limit the locus of the Fisher zeros cuts the negative real axis between $`-1/\sqrt{Q}`$ and 0. For $`Q=2.9`$ and $`L=8`$ (Fig. 1c) two zeros already lie on the negative real axis, but they are outside the physical interval. Table I shows the real and imaginary parts of the zero $`p_a(L)`$ closest to the AF interval of the negative real axis for $`Q=0.1`$ and 2.9 for even-size lattices, and the last row of Table I gives the BST estimates for $`L\to \mathrm{\infty }`$. Fig. 3 shows the BST estimates of the critical points of the Potts antiferromagnet from the Fisher zeros for $`0<Q<3`$ which are in excellent agreement with the Baxter conjecture.
From the Fisher zeros we have also calculated the thermal exponent $`y_t(L)`$ defined as
$$y_t(L)=-\frac{\mathrm{ln}\{\mathrm{Abs}[p_a(L+2)-p_c]/\mathrm{Abs}[p_a(L)-p_c]\}}{\mathrm{ln}[(L+2)/L]}.$$
(5)
Fig. 4 shows the thermal exponent $`y_t(L)`$ for $`0<Q<3`$. In Fig. 4 as $`Q`$ increases the thermal exponent decreases and around $`Q=2`$ it is consistent with the known exact value $`y_t=1`$ of the Ising model. Note that these data are calculated from lattices of size $`L=4`$, 6, and 8 only, and no attempt is made to extrapolate to infinite size.
To our knowledge these are the first such calculations of $`y_t`$ for non-integer $`Q`$. Of course for the Ising model, $`Q=2`$, the exact result $`y_t=1`$ has been known for many years, and our results are consistent with this. The value of $`y_t`$ for $`Q=3`$ has been the subject of some debate recently. Ferreira and Sokal expect $`y_t=1/2`$, while Wang et al. found $`y_t=0.77`$. Although it might appear from Fig. 4 that our results for $`Q=3`$ agree with the result of Wang et al., we must emphasize that these results are for very small lattices and have not been extrapolated to infinite size. If we include calculations for $`Q=3`$ using the microcanonical transfer matrix for $`L=10`$, and apply the BST algorithm our results are inconclusive. We are currently extending our calculations to larger lattices in order to address this interesting question.
ACKNOWLEDGMENTS
This work was supported in part by the National Science Council of the Republic of China (Taiwan) under grant number NSC 88-2112-M-001-011.
# The flavour and quark mass dependence of thermodynamic quantities in lattice QCD
This work was partly supported by the Deutsche Forschungsgemeinschaft under grant Ka 1198/3-1 and the EU TMR network grant ERBFMRX-CT97-0122.
## 1 INTRODUCTION
In this paper thermodynamic observables like the critical temperature $`T_c/\sqrt{\sigma }`$ and the pressure $`p/T^4`$ have been calculated in full QCD with an improved staggered fermion action. The use of improved actions in finite temperature calculations is strongly suggested by results from the pure gauge sector which have shown that improved actions reduce finite cut-off effects in thermodynamic quantities significantly . In addition, the calculation of the pressure with the standard staggered action did show strong cut-off effects when comparing results obtained on lattices with temporal extent $`N_\tau =4`$ and 6 .
The relevant number of flavours in finite temperature QCD is expected to be two light and one heavier quark. To quantify the effect of varying the number of quark flavours simulations with $`N_f=`$2, 2+1 and 3 are performed.
## 2 THE SIMULATION
The fermion fields have been simulated using the tree-level p4 action including fat-links with a fat-weight $`\omega =0.2`$ in the one-link derivative. This action is constructed to improve the rotational symmetry of the free quark propagator up to $`𝒪(p^4)`$. The high temperature ideal gas limit of the free energy is approached much faster for the p4 than for the staggered action (Figure 1).
Figure 1. The fermion free energy in the ideal gas limit.
Fat-links lead to an improved flavour symmetry which results in a reduced pion splitting measured by the quantity
$$\delta =\frac{m_{\pi _2}^2-m_\pi ^2}{m_\rho ^2}.$$
In Figure 2, $`\delta `$ is shown for the staggered and p4 fat action measured at different quark masses. Comparing results at the same lattice spacing $`a`$ one finds a reduction of the pion splitting of about a factor of 2.
For the gauge fields the tree-level $`1\times 2`$ action has been used which in pure gauge simulations led to an improved finite cut-off behaviour for quantities like latent heat and pressure .
The gauge and fermion fields have been updated with the standard Hybrid R algorithm with a step size $`\mathrm{\Delta }\tau <m_qa/2`$ and a trajectory length of 0.8. The lattice sizes are $`16^3\times 4`$ and $`16^4`$.
Figure 2. The pion splitting $`\delta `$ for standard $`(N_f=2)`$ and p4 fat improved $`(N_f=3)`$ actions.
## 3 THE CRITICAL TEMPERATURE AND THE TEMPERATURE SCALE
The critical temperature has been calculated for various quark masses for 3 flavour QCD and in addition also for 2 and 2+1 flavours at a quark mass of $`m_q=0.10`$. The pseudo-critical coupling $`\beta _c`$ was determined by the peak position of the susceptibility of the Polyakov-loop and the chiral condensate, respectively. At the pseudo-critical couplings zero-temperature calculations on $`16^4`$ lattices have been performed measuring the string-tension $`\sqrt{\sigma }`$ and the meson masses $`m_{PS}`$ and $`m_V`$. The results for $`T_c/\sqrt{\sigma }`$ are plotted in Figure 3. The qualitative behaviour of the 2 and 3 flavour values for the standard and p4 action is quite similar; in both cases $`T_c`$ drops quite fast already at large quark masses. For the improved p4 action, $`T_c/\sqrt{\sigma }`$ shows only a small difference between the different numbers of flavours.
Figure 3. The critical temperature $`T_c/\sqrt{\sigma }`$ for 2, 2+1 and 3 flavours for the p4 action. For comparison also 2 flavour results obtained with the standard action are plotted.
To set the temperature scale for the pressure calculations zero temperature simulations have been performed at a variety of $`\beta `$ values for 2, 2+1 and 3 flavours. The resulting string tension data have been fitted to an ansatz proposed by Allton (see Figure 4)
$$(a\sqrt{\sigma })(\beta )=R(\beta )(1+c_2\widehat{a}^2(\beta )+c_4\widehat{a}^4(\beta )+\cdots )/c_0$$
with $`\widehat{a}^2(\beta )=R(\beta )/R(\beta _0)`$, the two-loop beta function $`R(\beta )`$ and the normalization $`\beta _0`$.
Figure 4. The string tension data and the fit to the data for the $`N_f=2+1`$ case.
## 4 THE EQUATION OF STATE
The pressure $`p/T^4`$ has been calculated for 2, 2+1 and 3 quark flavours. The masses of the light quarks are $`m/T=0.4`$, the mass of the heavy quark is $`m/T=1.0`$. From the difference of the gluonic part of the action on finite temperature and zero temperature lattices, $`\langle S^{1\times 2}\rangle _0-\langle S^{1\times 2}\rangle _T`$, the pressure can be calculated according to
$$\frac{p}{T^4}\Big|_{\beta _0}^\beta =N_\tau ^4\int _{\beta _0}^\beta d\beta ^{\prime }\left(\langle S^{1\times 2}\rangle _0-\langle S^{1\times 2}\rangle _T\right)$$
In Figure 5 the results for the differences of the gluonic action densities are shown. One observes an increase of the maximum of that quantity corresponding to the increase of the number of flavours.
Figure 5. The action differences $`\langle S^{1\times 2}\rangle _0-\langle S^{1\times 2}\rangle _T`$ for $`N_f=2`$, 2+1 and 3 for the p4 fat improved action.
The integration over these curves gives the pressure $`p/T^4`$ plotted in Figure 6. For larger temperatures the 2+1 flavour pressure is closer to the 3 flavour than to the 2 flavour pressure, as expected from the high temperature Stefan Boltzmann limit (see arrows in Figure 6). If one normalizes the pressure to the appropriate ideal gas value one finds the same temperature dependence for 2 and 3 flavours for the p4 action (Figure 7). Figure 7 also shows that the systematic deviations from the Stefan Boltzmann limit obtained with different fermion actions are qualitatively in agreement with what has been calculated as the cut-off effect of the free energy at $`N_\tau =4`$ for the different actions (see Figure 1).
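In practice the $`\beta `$-integration of the action differences is done with a simple quadrature. Below is a sketch of a trapezoidal implementation; the $`\beta `$ values and the shape of $`\langle S^{1\times 2}\rangle _0-\langle S^{1\times 2}\rangle _T`$ are toy placeholders, not our measured data.

```python
import numpy as np

def pressure_over_T4(betas, delta_S, Nt=4):
    """Integral-method pressure: cumulative trapezoidal integration of
    <S>_0 - <S>_T over beta, times Nt^4, as in the equation above.
    beta_0 must lie deep enough in the confined phase that p(beta_0)
    is negligible."""
    p = np.zeros_like(betas)
    for i in range(1, len(betas)):
        p[i] = p[i - 1] + 0.5 * (delta_S[i] + delta_S[i - 1]) * (betas[i] - betas[i - 1])
    return Nt**4 * p

betas = np.linspace(3.5, 4.1, 13)                          # placeholder couplings
delta_S = 0.002 / (1.0 + np.exp(-(betas - 3.65) / 0.05))   # toy crossover shape
print(pressure_over_T4(betas, delta_S))
```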
Figure 6. The pressure for $`N_f=2`$, 2+1 and 3 for the p4 fat improved action.
Figure 7. The pressure normalized to the ideal gas value $`p/p_{SB}`$ for $`N_\tau =4`$.
Figure 8. The 2 flavour pressure for the standard action ($`N_\tau =4`$, 6), the p4 fat action ($`N_\tau =4`$) and the rescaled continuum pure gauge result.
The cut-off effects of the 2 flavour pressure for the p4-fat action and the standard staggered action are compared in Figure 8. As the pressure calculated with the p4 action seems to scale with the relevant number of degrees of freedom we may expect to obtain an estimate for the remaining cut-off dependence by comparing with the appropriately rescaled continuum extrapolation for the pure gauge pressure. This is shown as the black curve in Figure 8. If this curve approximates the continuum extrapolated pressure of full QCD at high temperatures the p4 action indeed reduces strongly the cut-off effects. A direct verification will require a simulation on a $`N_\tau =6`$ lattice.
# UTCCP-P-69 Sept. 1999 Equation of state for SU(3) gauge theory with RG improved action
Talk presented by M. Okamoto
## 1 Introduction
The study of thermodynamic properties of QCD is crucial for understanding the early Universe and relativistic heavy-ion collisions. The data is encapsulated in the equation of state (EOS). Recently the Bielefeld group determined the EOS in the continuum limit for pure gauge theory using the standard plaquette action. Extending this result to full QCD will require the use of improved actions to compensate the increased computer power necessary for full QCD simulations. As a first step of such a program, we have studied the EOS for pure gauge theory with a renormalization-group (RG) improved action. A summary of results is presented in this article.
## 2 Simulation parameters
The RG-improved action we use has the form $`S_g=c_0(1\times 1\mathrm{loop})+c_1(1\times 2\mathrm{loop})`$ with $`c_0=1-8c_1`$ and $`c_1=-0.331`$.
We perform simulations on $`16^3\times 4`$ and $`32^3\times 8`$ lattices, and also on symmetric $`16^4`$ and $`32^4`$ lattices, at about 10 values of $`\beta =6/g^2`$ in the range $`T/T_c\approx 0.9`$–$`3.5`$. We generate 20 000 to 36 000 iterations after thermalization on asymmetric lattices, and about 10 000 iterations on symmetric lattices, where one iteration consists of one pseudo-heat-bath sweep followed by four over-relaxation sweeps. Errors are determined by the jack-knife method.
## 3 Temperature scale and critical temperature
We fix the temperature scale using the string tension of the static quark potential: $`\frac{T}{T_c}=\frac{(a\sqrt{\sigma })(\beta _c)}{(a\sqrt{\sigma })(\beta )}`$. For this purpose, we fit results for $`a\sqrt{\sigma }`$ using a scaling ansatz proposed by Allton,
$$(a\sqrt{\sigma })(\beta )=f(\beta )(1+c_2\widehat{a}(\beta )^2+c_4\widehat{a}(\beta )^4)/c_0$$
(1)
where $`f(\beta )`$ is the two-loop scaling function of SU(3) gauge theory, and power corrections in the pseudo lattice spacing $`\widehat{a}(\beta )\equiv \frac{f(\beta )}{f(\beta _1)}`$ are introduced to incorporate deviations from two-loop scaling, with $`\beta _1`$ an arbitrary reference point.
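A sketch of how such a fit can be set up is given below; the two-loop scaling function uses the standard SU(3) coefficients with $`g^2=6/\beta `$, while the data points, the reference point $`\beta _1`$ and the starting values are placeholders rather than our measured string tensions.

```python
import numpy as np
from scipy.optimize import curve_fit

b0 = 11.0 / (4.0 * np.pi) ** 2          # SU(3) universal beta-function coefficients
b1 = 102.0 / (4.0 * np.pi) ** 4

def f_two_loop(beta):
    """Two-loop scaling function a*Lambda (up to a constant), g^2 = 6/beta."""
    g2 = 6.0 / beta
    return (b0 * g2) ** (-b1 / (2.0 * b0**2)) * np.exp(-1.0 / (2.0 * b0 * g2))

def ansatz(beta, c0, c2, c4, beta1=2.4):
    """Allton-type ansatz of Eq. (1); beta1 is the arbitrary reference point."""
    ahat = f_two_loop(beta) / f_two_loop(beta1)
    return f_two_loop(beta) * (1.0 + c2 * ahat**2 + c4 * ahat**4) / c0

beta_data = np.array([2.2, 2.3, 2.4, 2.5, 2.6, 2.7])        # placeholder data
sigma_data = np.array([0.48, 0.43, 0.39, 0.35, 0.32, 0.29])
popt, pcov = curve_fit(ansatz, beta_data, sigma_data, p0=[0.3, 0.1, 0.1])
print(popt)   # fitted c0, c2, c4
```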
In Fig. 1, we plot the critical temperature in units of the string tension, $`T_c/\sqrt{\sigma }=1/(N_ta\sqrt{\sigma }(\beta _c))`$, for the RG-improved action together with the result for the standard plaquette action. Making a quadratic extrapolation in $`1/N_t`$, we find $`T_c/\sqrt{\sigma }=0.650(5)`$ in the continuum limit for the RG-improved action, which is 3% higher than the value $`0.630(5)`$ for the standard plaquette action. The discrepancy may be caused by systematic uncertainties in the determination of the string tension for the two actions, which differ in the details.
## 4 Equation of state
### 4.1 Integral method
We calculate the energy density $`ϵ`$ and pressure $`p`$ using the integral method:
$$\frac{p}{T^4}\Big|_{\beta _0}^\beta =\int _{\beta _0}^\beta d\beta ^{\prime }\mathrm{\Delta }S,\qquad \frac{ϵ-3p}{T^4}=T\frac{\mathrm{d}\beta }{\mathrm{d}T}\mathrm{\Delta }S.$$
(2)
Here $`\mathrm{\Delta }S\equiv N_t^4\left(S_T-S_0\right)`$, where $`S_T`$ and $`S_0`$ are the expectation values of the action density $`S=S_g/N_s^3N_t`$ at finite and zero temperature, respectively. The beta function $`d\beta /dT`$ is determined from the scale parametrized by (1).
Our results for the pressure $`p`$ are shown in Fig.2 together with those from the standard action. We find that the magnitude of the cut-off dependence for the RG-improved action is similar to that for the standard action, and opposite in sign.
We extrapolate the energy density and pressure to the continuum limit, assuming a quadratic dependence on $`1/N_t`$ as the RG-improved action has discretization errors of $`O(a^2)`$. In Fig.3, results of the extrapolation are plotted by solid lines, and are compared with those for the standard plaquette action (dot-dashed lines). As observed in this figure, results in the continuum limit for the two actions are in good agreement with each other within the estimated error of 3–4%.
We note that the curves for the plaquette action in Fig.3 are obtained from a reanalysis of the data in Ref. employing the same ansatz for the string tension (1) in order to avoid ambiguities arising from the choice of scale. In practice we found the changes in the pressure and energy density due to the choice of scale to be very small, being less than the statistical error of 1–3%.
### 4.2 Operator method
The energy density can also be calculated through the operator method:
$$\frac{ϵ}{T^4}=\frac{18N_t^4}{g^2}\left[c_s(S_s-S_0)-c_t(S_t-S_0)\right]$$
(3)
where $`c_s`$ and $`c_t`$ are the asymmetry coefficients. The pressure can then be obtained with the second equation of (2).
In Fig. 4 we compare results obtained with the integral and operator methods. The one-loop perturbative values are used for the asymmetry coefficients. We observe that the two sets of results are largely consistent with each other. The remaining discrepancy may well arise from the use of one-loop asymmetry coefficients; for the plaquette action, non-perturbative effects are known to be important.
In the high-temperature limit one may calculate the EOS by perturbation theory. The leading-order results for the EOS are shown by horizontal lines in Fig. 4. As has been noted some time ago, the perturbative value for $`N_t=4`$ is very small for the RG-improved action. Our numerical results are much larger than this value, at least in the range $`T/T_c\lesssim 3.5`$ explored in the present work.
A possible source of the discrepancy is a breakdown of perturbation theory due to the infrared divergence. Another possibility is that pressure and energy density decrease towards the perturbative values at high temperatures. This, however, would require an unusual situation of a negative $`\mathrm{\Delta }S`$ since the pressure is expressed as an integral of $`\mathrm{\Delta }S`$ with the integral method.
## 5 Conclusions
Our continuum result for the EOS of pure SU(3) gauge theory obtained with an RG-improved gauge action shows good agreement with that from the plaquette action. This provides concrete support for the expectation that continuum results are insensitive to the choice of lattice actions. We also found that the energy density and pressure for $`N_t=4`$ overshoot the perturbative high temperature limit. Understanding the origin of this behavior is left for future investigations.
This work is supported in part by the Grants-in-Aid of Ministry of Education, Science and Culture (Nos. 09304029, 10640246, 10640248, 10740107, 11640250, 11640294, 11740162). SE, KN and M. Okamoto are JSPS Research Fellows. AAK and TM are supported by the Research for the Future Program of JSPS.
# Saturation of the width of the strength function
Supported in part by FAPESP.
## Abstract
The strength function of a single state $`|d\rangle `$ is studied using the deformed Gaussian orthogonal ensemble. In particular we study the dependence of the spreading width of $`|d\rangle `$ on the degree of mixing.
The mixing of a single state with a background of complicated states is important in the description of a variety of phenomena in nuclear physics such as isobaric analogue resonances, giant dipole resonances and the decay out of superdeformed rotational bands. Such mixing can be conveniently described by the strength function and it is interesting to study the generic features of this object. Previous studies in this vein have investigated a single state coupled to a background generated by a two-dimensional anharmonic oscillator and the spreading of a shell model basis state over the shell model eigenstates due to the residual interaction . Here we use random matrix theory .
We write the Hamiltonian, H, as the sum of two terms,
$$H=H_0+𝒱.$$
(1)
The eigenstates, $`|n\rangle `$, and eigenvalues, $`E_n`$, of $`H`$ satisfy
$$H|n\rangle =E_n|n\rangle ,\qquad n=1,\dots ,N+1,$$
(2)
whilst for $`H_0`$ we have
$`H_0|k\rangle `$ $`=`$ $`E_k|k\rangle ,k=1,\dots ,N,`$ (3)
$`H_0|d\rangle `$ $`=`$ $`E_d|d\rangle ,d=N+1.`$ (4)
The strength function is then defined as
$$F_d(E)=\underset{n=1}{\overset{N+1}{\sum }}|\langle d|n\rangle |^2\delta (E-E_n),$$
(5)
and describes how the state $`|d\rangle `$ is distributed over the $`N+1`$ eigenstates of $`H`$. We shall choose the energy of $`|d\rangle `$, $`E_d`$, such that it lies in the middle of the spectrum of $`H`$.
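A minimal numerical sketch of this construction is given below; the matrix-element variances follow the DGOE definitions quoted later in the text, setting the diagonal element of $`|d\rangle `$ to zero puts $`E_d`$ near the middle of the spectrum, and the Lorentzian smoothing is applied here to raw rather than unfolded energies as a simplification.

```python
import numpy as np

def dgoe_strength_function(N=50, lam=0.3, a=2.0, I=0.5, n_grid=400, seed=0):
    """One DGOE realisation of H = H0 + V (Eqs. 1-4) and the smoothed
    strength function of Eq. (5), with the delta function replaced by a
    Lorentzian of width I."""
    rng = np.random.default_rng(seed)
    M = N + 1
    H = np.diag(rng.normal(0.0, a / np.sqrt(2 * N), size=M))       # H0 diagonal
    off = np.triu(rng.normal(0.0, lam * a / np.sqrt(4 * N), size=(M, M)), 1)
    H += off + off.T                                               # symmetric V
    H[M - 1, M - 1] = 0.0                    # E_d in the middle of the spectrum
    E, vec = np.linalg.eigh(H)
    w2 = vec[M - 1, :] ** 2                  # |<d|n>|^2 components
    E_grid = np.linspace(E[0], E[-1], n_grid)
    F = np.zeros_like(E_grid)
    for En, wn in zip(E, w2):
        F += wn * (I / (2.0 * np.pi)) / ((E_grid - En) ** 2 + (I / 2.0) ** 2)
    return E_grid, F

E_grid, F = dgoe_strength_function(lam=0.4)
```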
Some insight into the behaviour of the strength function can be obtained by performing a two step diagonalisation of $`H`$ . Let us represent the Hamiltonian in the basis $`\{|k\rangle ,|d\rangle \}`$. Diagonalising $`H`$ in the $`N`$–dimensional subspace defined by excluding $`|d\rangle `$ we obtain the set of eigenvectors $`|q\rangle =\sum _k\langle k|q\rangle |k\rangle `$ with eigenvalues $`E_q`$, $`q=1,\dots ,N`$. The Hamiltonian in the basis $`\{|q\rangle ,|d\rangle \}`$ has diagonal matrix elements $`E_q`$, $`q=1,\dots ,N`$ and $`E_d`$, $`d=N+1`$ and non-zero off-diagonal elements $`𝒱_{dq}=𝒱_{qd}=\sum _k𝒱_{dk}\langle k|q\rangle `$ (the Hamiltonian is assumed to be real symmetric). The diagonalisation of the intermediate matrix may be carried out analytically so that using a Lorentzian of width $`I`$ to represent the $`\delta `$–function in Eq. (5), one obtains
$$F_d(E)=\frac{1}{2\pi }\frac{\mathrm{\Gamma }_d^{\downarrow }+I}{(E-E_d-\mathrm{\Delta }_d^{\downarrow })^2+(\frac{\mathrm{\Gamma }_d^{\downarrow }+I}{2})^2},$$
(6)
where
$`\mathrm{\Gamma }_d^{\downarrow }(E)=I{\displaystyle \underset{q}{\sum }}{\displaystyle \frac{|𝒱_{dq}|^2}{(E-E_q)^2+(\frac{I}{2})^2}}`$ (7)
and
$`\mathrm{\Delta }_d^{\downarrow }(E)={\displaystyle \underset{q}{\sum }}{\displaystyle \frac{|𝒱_{dq}|^2(E-E_q)}{(E-E_q)^2+(\frac{I}{2})^2}}.`$ (8)
By making the further assumptions that the eigenvalues $`E_q`$ are equi-distant with mean spacing $`D`$, that the squared matrix elements $`|𝒱_{qd}|^2`$ have approximately the same order of magnitude, $`𝒱^2`$, for all $`q`$ and that the magnitude of $`\sqrt{𝒱^2}`$ is smaller than the energy range in which it may be considered constant (whilst being larger than $`D`$ in order for the strength function to have meaning) the strength function may be approximated by a Lorentzian
$$F_d(E)\simeq \frac{1}{2\pi }\frac{\mathrm{\Gamma }_d^{\downarrow }}{(E-E_d)^2+(\frac{\mathrm{\Gamma }_d^{\downarrow }}{2})^2},$$
(9)
where the spreading width is given by the “golden rule”
$$\mathrm{\Gamma }_d^{\downarrow }\simeq 2\pi \frac{𝒱^2}{D}.$$
(10)
We wish to study how the distribution $`F_d(E)`$ depends on the degree of mixing, by which we mean the strength of $`𝒱`$. To that end we employ the deformed Gaussian orthogonal ensemble (DGOE) . In this model $`H`$ is real symmetric and its matrix elements are taken to be independent Gaussian distributed random numbers with zero mean and variances $`\langle (H_0)_{k,k}^2\rangle =\frac{a^2}{2N}`$ for the diagonal matrix elements and $`\langle 𝒱_{k,k^{}}^2\rangle =\frac{\lambda ^2a^2}{4N}`$ for the off-diagonal matrix elements. We take $`\langle 𝒱_{k,d}^2\rangle =\langle 𝒱_{d,k}^2\rangle =\frac{\lambda ^2a^2}{4N}`$ as well, although choosing a different variance for this matrix element may be appropriate in some applications.
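A minimal numerical sketch of this ensemble may help fix the definitions. The code below (our own construction; all names are ours) generates DGOE matrices with the variances just quoted, diagonalises them, and accumulates the strength function of Eq. (5) with the $`\delta `$–function represented by a Lorentzian; the polynomial unfolding of Eq. (13) is approximated here by quoting the smoothing width in units of the mean level spacing.

```python
# Sketch: ensemble-averaged strength function of the DGOE, Eq. (5).
import numpy as np

def dgoe_strength(N=50, lam=0.1, a=2.0, I_units=3.0, n_real=100, seed=0):
    rng = np.random.default_rng(seed)
    E_grid = np.linspace(-a, a, 400)
    F = np.zeros_like(E_grid)
    for _ in range(n_real):
        # (N+1)x(N+1) real symmetric matrix with the DGOE variances.
        H = np.diag(rng.normal(0.0, a / np.sqrt(2 * N), N + 1))
        iu = np.triu_indices(N + 1, k=1)
        H[iu] = rng.normal(0.0, lam * a / np.sqrt(4 * N), iu[0].size)
        H += np.triu(H, 1).T
        E_n, vecs = np.linalg.eigh(H)
        w = vecs[-1, :] ** 2                   # |<d|n>|^2, |d> = last basis state
        I = I_units * (E_n[-1] - E_n[0]) / N   # width ~ I_units mean spacings
        F += (w * (I / (2 * np.pi)) /
              ((E_grid[:, None] - E_n) ** 2 + (I / 2) ** 2)).sum(axis=1)
    return E_grid, F / n_real
```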
The parameter $`\lambda `$ may be varied between $`0`$ and $`1`$ and determines the degree of mixing. The parameter $`a`$ determines the energy interval over which the eigenvalues are distributed. In the limiting case $`\lambda =0`$ the eigenvalues $`E_n`$ are Gaussian distributed;
$$\rho _0(E)=\frac{N^{\frac{3}{2}}}{a\sqrt{\pi }}\text{exp}(-\frac{NE^2}{a^2}),$$
(11)
whilst in the limit $`\lambda =1`$ they are distributed according to the Wigner semicircular law;
$$\rho _1(E)=N\frac{2}{\pi a^2}\sqrt{a^2-E^2}.$$
(12)
We perform an unfolding of the spectrum of $`H`$ defined by
$$x_n=\int _{-\mathrm{\infty }}^{E_n}dE\rho (E),$$
(13)
where $`\rho (E)`$ is the smoothly varying part of the level density. Thus the unfolded spectrum has mean level density equal to 1 and is dimensionless. The smooth variation of the level density was obtained in practice by fitting the cumulative level density (staircase function)
$$x=\underset{n=1}{\overset{N+1}{\sum }}\theta (E-E_n)$$
(14)
(the letter $`\theta `$ denotes the unit step function) to a polynomial using the method of linear least squares . For the case $`N=50`$ (used in the calculations below) we found that the best visual fit was obtained with a polynomial of degree 5.
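As an illustration, this unfolding step might be implemented as follows (a sketch under the stated choices, i.e. a degree-5 polynomial for $`N=50`$; the function name is ours):

```python
# Sketch: unfold a spectrum by fitting the staircase of Eq. (14)
# with a low-degree polynomial, as in Eq. (13).
import numpy as np

def unfold(E_n, degree=5):
    E_n = np.sort(E_n)
    staircase = np.arange(1, E_n.size + 1)        # cumulative level count
    coeffs = np.polyfit(E_n, staircase, degree)   # smooth part of the staircase
    return np.polyval(coeffs, E_n)                # unfolded levels, unit spacing
```

The unfolded levels then have a mean spacing of order unity and may replace the raw eigenvalues in the strength-function sketch above.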
In order to compare our calculation of the strength function on the unfolded energy scale with the Lorentzian approximation, Eqs. (9) and (10), we apply a simplified unfolding to these approximate formulae, defined by
$$x=\frac{E}{D}.$$
(15)
Thus the unfolded version of the golden rule is
$$\frac{\mathrm{\Gamma }_d^{\downarrow }}{D}\simeq 2\pi \frac{𝒱^2}{D^2}.$$
(16)
Persson and Åberg proposed the following formula for the mean (i.e. averaged over the energy range where the eigenvalues are distributed) density of states of the DGOE, which interpolates between $`\lambda =0`$ and $`\lambda =1`$ and is valid when $`a=2`$:
$$\overline{\rho }_\lambda =\frac{N^{\frac{3}{2}}}{4\lambda N^{\frac{1}{2}}+7N^{-1.5\lambda }}.$$
(17)
Figure 1 displays our calculations of the strength function (solid lines) for various $`\lambda `$ using Eqs. (6–8), where the eigenvalues (unfolded using the procedure defined by Eq. (13)) and eigenvectors are generated by the DGOE. We performed our calculations using $`N=50`$, whilst the width of the Lorentzian used to represent the $`\delta `$–function (Eq. (5)) is $`I=3`$. We set $`a=2`$.
The dotted lines in fig. 1 are the strength function calculated using the Lorentzian approximation, Eqs. (9) and (10). The mean square value of the coupling is taken to be $`𝒱^2=\frac{\lambda ^2a^2}{4N}`$. The mean level spacing $`D`$ is taken to be $`\frac{1}{\rho _0(E=0)}=\frac{a\sqrt{\pi }}{N^{\frac{3}{2}}}`$, the lower limit for the level spacing of the DGOE spectra.
Using the estimates of the previous paragraph, the condition $`\frac{𝒱^2}{D^2}>1`$ (for the single state $`|d\rangle `$ to be significantly mixed with more than a single one of the $`|q\rangle `$) implies $`\lambda >\frac{2\sqrt{\pi }}{N}`$. Thus for $`N=50`$, although we can calculate the strength function for arbitrarily small $`\lambda `$, it is only meaningful for $`\lambda >0.07`$. The value of $`I`$ should be chosen so that it is a negligible fraction of the combined width $`\mathrm{\Gamma }_d^{\downarrow }+I`$ whilst being greater than the level spacing (unity after unfolding).
From fig. 1 we can see that for weak $`\lambda `$ the strength function has an approximately Lorentzian shape. As we decrease $`\lambda `$ below 0.07 the strength function approaches a Lorentzian whose width is dominated by $`I`$. Above $`\lambda =0.07`$ the strength function starts to deviate significantly from the Lorentzian shape (already visibly so at $`\lambda =0.1`$), broadening towards a semicircle before $`\lambda =1`$. An ensemble average was performed over 100 realizations for the cases $`\lambda =0.05`$, 0.08 and 0.1, whilst to obtain a relatively smooth strength function for $`\lambda =0.2`$, 0.5 and 1 it was necessary to average over 1000 realizations.
In fig. 2 we plot the following calculations for the spreading width of $`|d\rangle `$ as a function of $`\lambda `$:
(i) Eq. (7), at the peak value $`E=E_d`$ where the eigenvalues (unfolded using the procedure defined by Eq. (13)) and eigenvectors are generated by the DGOE. An ensemble average was performed over 100 realisations for all $`\lambda `$; otherwise the same parameters are used as were used in fig. 1.
(ii) The limiting value ($`\lambda =1`$) for the width calculated using the golden rule (Eq. 16) and density Eq. (12) at $`E=0`$: $`2\pi \left(\frac{a^2}{4N}\right)\left(\frac{2N}{a\pi }\right)^2=\frac{2N}{\pi }`$.
(iii) The golden rule expression for the width using the Gaussian ($`\lambda =0`$) density, Eq. (11), at $`E=0`$: $`2\pi \left(\frac{\lambda ^2a^2}{4N}\right)\left(\frac{N^{\frac{3}{2}}}{a\sqrt{\pi }}\right)^2=\frac{\lambda ^2N^2}{2}`$.
(iv) The golden rule with the density Eq. (17): $`2\pi \left(\frac{\lambda ^2a^2}{4N}\right)\overline{\rho }_\lambda ^2`$.
(v) The full width at half maximum (FWHM) of the strength function $`F_d(E)`$, Eqs. (6–8), calculated using the DGOE.
Three regions may be identified in the DGOE calculations (lines (i) and (v) in fig. 2). We see that $`\mathrm{\Gamma }_d^{\downarrow }(E_d)`$ for very weak coupling ($`\lambda <0.04`$) has a quadratic dependence which accurately follows the golden rule calculation (line (iii)). Between $`\lambda =0.05`$ and $`\lambda =0.15`$ this dependence is linear, becoming very weak as $`\lambda `$ approaches unity. The behaviour of the DGOE is simulated by Eq. (17) of Persson and Åberg for the average level density (line (iv)). At $`\lambda =1`$ our DGOE calculation of $`\mathrm{\Gamma }_d^{\downarrow }(E_d)`$ is approximately a factor $`\left(\frac{\rho _1(E=0)}{\overline{\rho }_1}\right)^2=\left(\frac{4}{\pi }\right)^2=1.6`$ greater than the calculation which employs Eq. (17) . The FWHM (line (v)) also has a quadratic dependence on $`\lambda `$ for weak $`\lambda `$. A linear dependence is maintained up to $`\lambda =0.3`$, after which the $`\lambda `$–dependence becomes very weak as $`\lambda `$ approaches unity. When $`\lambda =0`$ the FWHM is just equal to the averaging interval $`I(=3)`$. In the opposite limit of $`\lambda =1`$ the FWHM is essentially equal to $`N(=50)`$; i.e. the state $`|d\rangle `$ is spread over all the eigenstates of $`H`$. Thus in the limit $`\lambda =1`$, the golden rule (line (ii)) is a factor $`\frac{2}{\pi }`$ smaller than the FWHM.
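For reference, the FWHM plotted as curve (v) can be read off a sampled strength function by locating the two half-maximum crossings; a simple sketch (our own helper, assuming the peak lies well inside the energy grid):

```python
# Sketch: FWHM of a sampled strength function by linear interpolation.
import numpy as np

def fwhm(E_grid, F):
    half = F.max() / 2.0
    above = np.where(F >= half)[0]
    i, j = above[0], above[-1]   # assumes 0 < i and j < len(F) - 1
    left = np.interp(half, [F[i - 1], F[i]], [E_grid[i - 1], E_grid[i]])
    right = np.interp(half, [F[j + 1], F[j]], [E_grid[j + 1], E_grid[j]])
    return right - left
```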
Ref. investigated the spreading width of a basis state of a shell model Hamiltonian as a function of the strength of the residual interaction (corresponding to $`\lambda a`$ above). They identified regions where the spreading width has a quadratic dependence on $`\lambda `$ (weak mixing), becoming linear for larger $`\lambda `$. They also found that the golden rule cannot be used to estimate the FWHM of the strength function for strong mixing.
In conclusion, we have studied how the shape and width of the strength function of a single state depend on the degree of mixing, using random matrix theory. An application of these results to the decay out of a superdeformed rotational band is in progress.
A.J.S. acknowledges helpful comments on the manuscript by Professor R.C. Johnson.
|
no-problem/9909/hep-ph9909328.html
|
ar5iv
|
text
|
# CONSTITUENT QUARK STRUCTURE AND $`F_2^p`$ DATA
## Abstract
We have calculated the partonic structure of a constituent quark in the leading order in QCD for the first time and examined its implications for the proton structure function $`F_2^p`$ data from HERA. It turns out that, although the result agrees qualitatively with the data, a finer refinement requires an inherent component due to the gluons which act as the binding agent between constituent quarks in the proton. Good agreement with the nucleon structure function, $`F_2(x,Q^2)`$, is reached for a wide range of $`x=[10^{-6},1]`$ and $`Q^2=[0.5,5000]GeV^2`$. PACS Numbers 13.60 Hb, 12.39.-x, 13.88 +e, 12.20.Fv
Our understanding of hadronic structure is based on two ingredients: (i) hadronic spectroscopy, which was the original motivation for the introduction of quarks. In this discipline the quarks are massive particles and their bound states describe the static properties of hadrons. Quarks of this domain are usually referred to as constituent quarks (CQ). (ii) Deep inelastic scattering (DIS) data are the other source of information about hadronic structure. The interpretation of DIS data relies upon the quarks of the QCD Lagrangian, current quarks, which have a very small mass. The two kinds of quarks differ not only in their masses but also in other important properties; for example, the color charge of the quark field in the QCD Lagrangian is not gauge invariant, whereas the color associated with the constituent quark is a well defined entity.
The constituent quark is defined as a quasi-particle emerging from the dressing of a valence quark with gluons and $`q\overline{q}`$ pairs in QCD. It is, however, not so easy to pin down the process of dressing itself, for the construction of such an object requires that it carry color and spin among other quantum numbers while remaining in conformity with color confinement. Such an object has been studied recently within the framework of field theory and its emergence from QCD is established.
The concept of using the CQ as an intermediate step between the quarks of the QCD Lagrangian and hadrons is not new. Altarelli and Cabibbo and co-workers used it in the context of a broken $`SU(6)`$x$`O(3)`$ long before; R.C. Hwa, in his elaborated work terming a CQ a valon, extended it and showed its application to many physical processes . In Ref. it was suggested that the concept of dressed quark and gluon might be useful in the area of jet physics and the heavy quark effective theory. Although the concept and the use of the CQ have been around for quite some time, no one has actually calculated its partonic content. The goal of this paper is to calculate the structure function of a CQ and to examine its applicability to the structure function $`F_2`$ of the proton, for which there are ample data from DIS experiments covering a wide range of kinematics in both $`x`$ and $`Q^2`$. Since, by definition, a constituent quark has identical structure in every hadron, once its structure is established it would in principle permit the calculation of the structure of any hadron. In doing so, we will follow the philosophy outlined in Ref.: that is, in a DIS experiment at high enough $`Q^2`$ it is the structure of the CQ which is being probed, while at sufficiently low $`Q^2`$ this structure cannot be resolved and a CQ behaves as a valence quark. In this picture, partons ”observed” in DIS experiments are considered as the components of the CQ. Under the assumption given above, in a high $`Q^2`$ DIS experiment the structure function of a CQ can be written as:
$$F_2^U(z,Q^2)=\frac{4}{9}z(G_{\frac{u}{U}}+G_{\frac{\overline{u}}{U}})+\frac{1}{9}z(G_{\frac{d}{U}}+G_{\frac{\overline{d}}{U}}+G_{\frac{s}{U}}+G_{\frac{\overline{s}}{U}})+\mathrm{}$$
(1)
where all the functions on the right-hand side are the probability functions for quarks having momentum fraction $`z`$ in a $`U`$-type CQ at $`Q^2`$. A similar expression can be written for the $`D`$-type CQ. These functions are calculated using the moment space representation. Moments are defined as
$$M_2(n,Q^2)=\int _0^1z^{n-2}F_2(z,Q^2)dz$$
(2)
$$M_\mathrm{\Omega }(n,Q^2)=\int _0^1z^{n-1}F_\mathrm{\Omega }(z,Q^2)dz$$
(3)
where $`\mathrm{\Omega }`$ stands for valence, singlet and non-singlet partons in a CQ. These moments are calculated in the leading order in QCD and are given as:
$$M^{u,d(val)/CQ}(n,Q^2)=M^{NS}=exp[d_{NS}t]$$
(4)
$$M^{sea/CQ}(n,Q^2)=\frac{1}{2f}[M^S-M^{NS}]$$
(5)
where $`f`$ stands for the number of active flavors, and $`t`$ is the evolution parameter defined as
$$t=\mathrm{ln}\frac{\mathrm{ln}\frac{Q^2}{\mathrm{\Lambda }^2}}{\mathrm{ln}\frac{Q_0^2}{\mathrm{\Lambda }^2}}$$
(6)
We have taken $`Q_0^2=0.215`$ $`GeV^2`$ and $`\mathrm{\Lambda }=0.22`$ GeV. $`M^{NS}`$ is the non-singlet moment and $`M^S`$ is the singlet one, given as:
$$M^S=\frac{1}{2}(1+\rho )exp(d_+t)+\frac{1}{2}(1-\rho )exp(d_{-}t)$$
(7)
$`d_{NS}`$, $`d_{+(-)}`$, and $`\rho `$ are anomalous dimensions given in Ref. . In Figure 1, we have shown these moments for several $`Q^2`$ values. Once the moments are calculated, the associated distribution functions can be obtained using inverse Mellin transformation techniques, albeit after tedious calculations. For the valence sector the following expression is obtained:
$$zu_{v/CQ}(z,Q^2)=zd_{v/CQ}(z,Q^2)=az^b(1-z)^c$$
(8)
where $`a`$, $`b`$, and $`c`$ are functions of $`t`$. This distribution function for the valence quark in a CQ satisfies the usual number sum rule: $`\int _0^1u_v(z,Q^2)dz=1`$ for all $`Q^2`$. For the sea quark distribution in a CQ, the following form is obtained:
$$zq_{sea(CQ)}(z,Q^2)=z^\beta (1-z)^\gamma (\alpha z^{0.5}+\delta z+\psi )$$
(9)
Again, $`\beta `$, $`\gamma `$, … are functions of $`t`$. In Figure 2, the parton distributions in a CQ are depicted for several $`Q^2`$. Of course, there are no experimental data on the structure function of a CQ to check against the above results. The best we can do is to use the convolution theorem and attempt to calculate the proton structure function using eqs. (8, 9). To put it differently, we convolute the CQ structure function with the CQ distribution in a proton and sum over all three CQs in the proton:
$$F_2^p(x,Q^2)=\underset{CQ}{\sum }\int _x^1𝑑yG_{\frac{CQ}{p}}(y)F_2^{CQ}(\frac{x}{y},Q^2)$$
(10)
where $`G_{\frac{CQ}{p}}(y)`$ is the probability of finding a CQ with momentum fraction $`y`$ in the proton; these distributions are given in eqs. (14-15) below. The summation runs over all constituent quarks of the hadron. In Figures 3 and 4 we present various parton distributions in the proton obtained using the above procedure. As is evident from Figure 5, going one step further to the structure function of the proton, $`F_2^p`$, we notice that these results fall a few percent short of representing the actual data. We attribute this shortcoming to the fact that in the formation of the bound state a CQ can emit a gluon which in turn decays into $`q`$-$`\overline{q}`$ pairs; therefore there should be a residual component to the partons in a hadron. We call this component the inherent partons; its contribution can be calculated using splitting functions. Even though this component is intimately related to the bound state problem, and hence has a non-perturbative origin, for not too small values of $`Q^2`$ we have calculated the process $`CQ\to CQ+\mathrm{gluon}\to q\overline{q}`$ perturbatively at an initial $`Q_{CQ}^2=0.65`$ $`GeV^2`$, where $`\alpha _s`$ is still small enough. The corresponding splitting functions are as follows:
$$P_{gq}(z)=\frac{4}{3}\frac{1+(1-z)^2}{z}$$
(11)
$$P_{qg}(z)=\frac{1}{2}(z^2+(1-z)^2)$$
(12)
For the joint probability distribution of the process at hand we get:
$$q_{inh}(x,Q^2)=\overline{q}_{inh}=N\frac{\alpha _s^2}{(2\pi )^2}\int _x^1\frac{dy}{y}P_{qg}(\frac{x}{y})\int _y^1\frac{dz}{z}P_{gq}(\frac{y}{z})G_{CQ}(z)$$
(13)
$`N`$ is a factor depending on $`Q^2`$ and $`G_{CQ}`$ is the distribution of a CQ in the proton. We borrow from :
$$G_{U/p}(y)=7.98y^{0.65}(1-y)^2$$
(14)
$$G_{D/p}(y)=6.01y^{0.35}(1-y)^{2.3}$$
(15)
Now, adding this last contribution to the sea quark distributions that emerged from the CQ structure, we can see from Figure 5 that the model reproduces the experimental data on $`F_2^p`$ rather nicely, even in leading order, for a wide range of $`Q^2`$ from as low as $`0.5`$ $`GeV^2`$ all the way up to several thousand $`GeV^2`$, and for $`x`$ down to $`10^{-6}`$. For the purpose of comparison we have also shown the next-to-leading-order solution of GRV . One last point that deserves to be addressed here is the incorporation of the $`\overline{u}-\overline{d}`$ asymmetry in the proton. We have not taken this breaking of $`SU(2)`$ in the nucleon sea into consideration here, because there is no asymmetry in the creation of the light flavor sea within the CQ. The asymmetry is specific to the hadron, for which the structure of a CQ is common. This asymmetry at the hadronic level can be incorporated in this model by requiring an inherent $`q`$ or $`\overline{q}`$ to recombine with a CQ and make a meson-baryon bound state. That means a nucleon may fluctuate into $`\pi `$$`N`$ and $`\pi \mathrm{\Delta }`$ bound states, but this is very similar to the notion of meson cloud models of the nucleon and will not be addressed here.
In conclusion, we have calculated for the first time the internal structure of a constituent quark explicitly in the leading order in QCD and examined its applicability to the real data on the proton structure function, $`F_2^p`$. We have found that to describe the data one needs to consider an extra component in the sea sector, which is attributed to the formation of CQ bound states in the hadron.
Acknowledgment: We thank the Abdus Salam ICTP for the warm hospitality of the center, where this work was carried out in part.
Appendix In this appendix we give the numerical values for the parameters appearing in eqs. (8, 9, 13) of the text at a typical value of $`Q^2=0.65`$ $`GeV^2`$:
For the valence quark distribution of eq. 8:
$`a=0.348`$, $`b=1.238`$, $`c=0.672`$.
For the sea distribution of eq. 9 we have:
$`\alpha =0.0155`$, $`\beta =0.073`$, $`\gamma =1.097`$, $`\delta =0.0084`$, and $`\psi =0.0086`$; and for the normalization factor, $`N`$, appearing in eq. 13 we have: $`N=0.65`$.
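With these numbers, Eq. (10) can be evaluated directly. The sketch below assembles $`F_2^{CQ}`$ from Eqs. (1), (8) and (9), taking all sea species equal (our reading of Eq. (1)), convolutes it with Eqs. (14) and (15), and sums over the two U-type and one D-type constituent quarks; the inherent-parton term of Eq. (13) is omitted here for brevity.

```python
# Sketch: F_2^p from the CQ convolution, Eq. (10), at Q^2 = 0.65 GeV^2.
import numpy as np

a_v, b_v, c_v = 0.348, 1.238, 0.672                        # Eq. (8)
al, be, ga, de, ps = 0.0155, 0.073, 1.097, 0.0084, 0.0086  # Eq. (9)

def z_valence(z):                # z u_v(z) = z d_v(z) in a CQ, Eq. (8)
    return a_v * z**b_v * (1.0 - z)**c_v

def z_sea(z):                    # z q_sea(z) in a CQ, Eq. (9)
    return z**be * (1.0 - z)**ga * (al * np.sqrt(z) + de * z + ps)

def F2_CQ(z, e2):                # Eq. (1) with a common sea distribution
    return e2 * z_valence(z) + (4.0 / 3.0) * z_sea(z)

def G_U(y): return 7.98 * y**0.65 * (1.0 - y)**2           # Eq. (14)
def G_D(y): return 6.01 * y**0.35 * (1.0 - y)**2.3         # Eq. (15)

def F2_p(x, n=4000):             # Eq. (10), summed over U, U and D
    y = np.linspace(x, 1.0, n + 2)[1:-1]   # open interval (x, 1)
    dy = y[1] - y[0]
    integrand = 2.0 * G_U(y) * F2_CQ(x / y, 4.0 / 9.0) \
              + G_D(y) * F2_CQ(x / y, 1.0 / 9.0)
    return float(np.sum(integrand) * dy)

print(F2_p(0.01))                # one sample point of F_2^p
```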
Figure Caption
Figure-1. Moments of partons in a CQ at various $`Q^2`$.
Figure-2. Parton distributions in a CQ at several $`Q^2`$.
Figure-3. Prediction of the model for valence distribution in proton.
Figure-4. Prediction of the model for the sea quark distribution in proton is depicted. Notice that the thin line is that of the CQ and the dotted line is the contribution of inherent partons. The thick line is the sum of the two.
Figure-5. Proton structure function $`F_2^p`$ as a function of $`x`$ calculated using the model (thick line) and compared with the data from Ref. for different $`Q^2`$ values. The thin line is the prediction of GRV Ref..
|
no-problem/9909/astro-ph9909043.html
|
ar5iv
|
text
|
# VLT observations of GRB 990510 and its environment Based on observations collected at the European Southern Observatory, Chile, programs 63.P-0003, 63.H-0009, and 63.O-0567 (B).
## 1 Introduction
Gamma-ray bursts (GRBs) have their origin in highly dynamical processes which result in relativistic blast waves (e.g. Piran 1999). The currently best candidates are mergers of compact stars or the collapse of massive stars. The blast wave and its interaction with the surrounding interstellar or circumstellar matter creates an optical afterglow which has been observed for eleven GRBs thus far. For seven of these, the host galaxies have been found and for five the redshift could be measured (Hogg & Fruchter 1999). While the final cataclysms of massive stars occur within or near the host galaxies, less massive systems like binary neutron stars can be expelled from their hosts at large velocities and suffer the merging event at distances from the hosts exceeding a Mpc (Fryer et al. 1999).
GRB 990510 was detected by BATSE, Ulysses, and BeppoSAX as a very bright burst lasting 68 s (see Wijers et al. 1999 for the burst profile and a first summary). The optical afterglow was discovered by Vreeswijk et al. (1999a) and subsequently followed by numerous observers. An early spectrum located the optical transient (OT) at a redshift $`z=1.62`$ (Vreeswijk et al. 1999b). GRB 990510 is the first burst to show a clearly defined achromatic break in the optical light curve (Harrison et al. 1999, Israel et al. 1999, Stanek et al. 1999), which was readily interpreted as firm evidence for beaming and a total energy release substantially less than the isotropic value of $`3\times 10^{53}`$ ergs. In this Letter, we report observations with the ESO VLT of the late decline of the optical afterglow at two epochs.
## 2 Observations
GRB 990510 was observed with ESO’s Very Large Telescope (VLT) Unit 1 Antu equipped with the Focal Reducer Low Dispersion Spectrograph (FORS1) between May 14 and 18, 1999, and again between June 8 and 11, 1999 (3.8–8.0 and 28.7–31.6 heliocentric days after burst, hereafter HDAB). We performed photometry in the Bessel B,V,R, and the Gunn I bands, supplemented by long-slit spectrophotometry. The scale was 0.2”/pixel in all observations except for the B, R images on June 8, 1999 which were taken in the high-resolution mode of FORS1 with 0.1”/pixel. The data were reduced with standard ESO-MIDAS procedures. Photometry was derived relative to the comparison stars given by Pietrzynski & Udalski (1999), with stars A and B near the optical transient (Fig. 3) serving as secondary standards (Table 1). Due to wind buffeting, the point spread function (PSF) was slightly non-circular in some images, so magnitudes were determined by a special PSF-template routine. Table 1 provides a summary of the observations with times given in heliocentric Julian days (HJD) and HDAB. 1-$`\sigma `$ internal statistical errors are quoted for the photometric magnitudes.
Long-slit spectrophotometry was performed on May 14 and 16, 1999. The slit was oriented at a position angle of 15° to include objects C south and A north of the OT (Fig. 3) and, coincidentally, also two M stars 12” north of the OT ($`R=22.1,23.5`$, outside of Fig. 3). The late spectroscopy of the OT is reported elsewhere (Vreeswijk et al. 1999).
The seeing encountered during the observations was never better than 0.8” (Table 2). This resulted in 5-$`\sigma `$ detection thresholds of $`B`$ = 26.6 on June 8, $`V`$ = 25.6 on May 18, $`R`$ = 26.1 on June 8, and $`I`$ = 25.0 on May 14 and June 11.
## 3 Results
### 3.1 Spectrophotometry
Our slit spectra of objects A and C reveal their stellar nature. Object A is a dK$`6\pm 1`$ star of the disk population. Its brightness (Table 1) places it at a distance of $`d\simeq 2`$ kpc, outside the galactic dust layer. The spectral type implies an intrinsic $`V-I=1.40\pm 0.13`$ and $`E_{B-V}=0.14\pm 0.09`$, consistent with $`E_{B-V}=0.20\pm 0.03`$ quoted by Stanek et al. (1999). Object C, initially suspected to be the host galaxy, is pointlike. Its spectrum is included in Fig. 1. The probable presence of Mg I $`\lambda 5167`$ suggests that it is an early dK star at a distance of $`\simeq 15`$ kpc. The photometry of objects B and D suggests they are stars of spectral types dK5 and dM0, respectively (Table 1).
Figure 1 also shows the extinction-corrected spectrophotometric flux distribution of the OT on HDAB 3.9, when the OT had $`R`$ = 21.98, corrected with $`E_{B-V}`$ = 0.20 (histogram; binned in intervals of 200Å). The solid dots denote the photometric result. For the interval of 4900–9000Å, the flux distribution can be described by a power law $`F_\nu \propto \nu ^{-\beta }\propto \lambda ^{\beta }`$ with $`\beta =0.55\pm 0.10`$. For higher frequencies, the spectral flux distribution steepens, as noted already by Stanek et al. (1999). The steepening may be intrinsic to the source or indicate additional absorption outside our Galaxy.
### 3.2 Light curve
Our BVRI photometry provides colour information late in the afterglow which is complementary to the ample information gathered earlier by other observers. Around HDAB 1.5, the light curve was found to change from an early power law $`F=kt^{-\alpha }`$ with $`\alpha =0.8`$ to a steeper power law with index 2.2 (Harrison et al. 1999, Israel et al. 1999, Stanek et al. 1999, and references therein).
In the HDAB 0.6–1.1 interval, i.e. before the break, the OT had colours $`B-V=0.57\pm 0.02`$, $`V-R=0.41\pm 0.01`$, and $`R-I=0.47\pm 0.01`$. The early spectral flux distribution was possibly slightly bluer, as judged from the Mt. Stromlo data (Harrison et al. 1999), which yield $`V-R=0.31\pm 0.03`$ at HDAB 0.15 and $`V-R=0.36\pm 0.05`$ for HDAB 0.4. Throughout the further evolution, however, there is no evidence for a significant change in colours: (i) in the HDAB 1.6–2.0 interval $`B-V=0.61\pm 0.07`$, $`V-R=0.42\pm 0.03`$, and $`R-I=0.40\pm 0.02`$, consistent with the colours before the break; (ii) on HDAB 3.85, well after the break, our BRI photometry yields $`B-R=0.98\pm 0.07`$ and $`R-I=0.49\pm 0.06`$; and (iii) the latest colour information available, for HDAB 5.7–8.0, yields a mean $`V-R=0.37\pm 0.03`$ (all colours derived from magnitudes corrected for long-term trends). These results demonstrate that the decay of the afterglow is achromatic in VRI at least for the interval HDAB 0.6–8.0 and in BVRI for HDAB 0.8–3.9. We feel justified, therefore, in converting all measurements to equivalent $`R`$-magnitudes and fitting a single grand total light curve (Fig. 2, upper panel). The final mean colours used in this conversion were determined iteratively and are given below.
The errors attached to the individual data points are the statistical errors derived from the noise in the CCD images. Systematic zero point errors are typically quoted as 0.02–0.03 mag, consistent with the internal scatter of $`\simeq 0.02`$ mag and the lack of obvious time variability among the 101 (of a total of 163) data points in the HDAB 0.6–1.1 interval. While short term variability is certainly small, a systematic search for such variability among all photometric data is still pending. In order to see if systematic zero point errors affect the fit, we also analysed a data set for which 0.03 mag was quadratically added to the statistical errors before fitting the light curve.
Burst models (Piran 1999, Rhoads 1999, Sari et al. 1999) explain the afterglow as synchrotron emission of shock-accelerated electrons injected into an expanding medium with a power law spectrum $`E^{-p}`$. In jet models, the time dependent spectral flux at frequencies below the cooling break varies as $`F_\nu (t)\propto \nu ^{-\beta }t^{-\alpha }`$, with $`\beta =(p-1)/2`$ independent of time and $`\alpha =3(p-1)/4`$ or $`\alpha =p`$, depending on, respectively, whether the opening angle of the relativistically beamed radiation $`\vartheta \simeq 1/\gamma <\theta `$ early in the expansion, or $`\vartheta >\theta `$ at later time when the jet has been slowed down ($`\theta `$ = opening angle of the jet, $`\gamma `$ = bulk Lorentz factor). We choose a function
$$F(t)=(F_1^{-n}+F_2^{-n})^{-1/n}\mathrm{with}F_i=k_it^{-\alpha _i},n>0$$
(1)
to describe the transition between the early and late power laws $`F_1`$ and $`F_2`$, where $`F_1=F_2`$ at the transition time $`t=t_{*}`$. Eq. (1) was also employed by Rhoads (1999) to parameterise his numerical models and is a more general form of the expressions used by Israel et al. (1999), Stanek et al. (1999), and by Harrison et al. (1999), who assumed $`n=1`$ and $`n\simeq 1.5`$, respectively. Of the five free parameters in Eq. (1) ($`k_{1,2},\alpha _{1,2},n`$), the exponent $`n`$ provides a measure of the relative width and the smoothness of the transition from $`F_1`$ to $`F_2`$. The extrapolation of the OT brightness from HDAB$`8.0`$ to late times will also depend on the choice of $`n`$.
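A short numerical sketch of this parameterisation may be useful. The code below casts Eq. (1) in magnitude form and shows how it might be fitted with a standard least-squares routine; the epochs and magnitudes are placeholders generated from the model itself, not the photometry of Tables 1 and 2, and the seed values are merely representative.

```python
# Sketch: the smoothly broken power law of Eq. (1) in magnitudes.
import numpy as np
from scipy.optimize import curve_fit

def model_mag(t, m1, alpha1, alpha2, n, t_star):
    # F_i = k_i t^{-alpha_i}, with F_1(t_star) = F_2(t_star);
    # m1 is the magnitude of the early component at t = 1 day.
    k1 = 10.0 ** (-0.4 * m1)
    f1 = k1 * t ** (-alpha1)
    f2 = k1 * t_star ** (alpha2 - alpha1) * t ** (-alpha2)
    F = (f1 ** (-n) + f2 ** (-n)) ** (-1.0 / n)
    return -2.5 * np.log10(F)

# placeholder epochs [days] and magnitudes standing in for the real data
t_obs = np.array([0.6, 0.8, 1.0, 1.6, 2.0, 3.9, 5.7, 8.0])
R_obs = model_mag(t_obs, 19.57, 0.79, 2.41, 0.87, 1.75)
popt, pcov = curve_fit(model_mag, t_obs, R_obs,
                       sigma=np.full(t_obs.size, 0.03),
                       p0=[19.5, 0.8, 2.2, 1.0, 1.6],
                       bounds=([15, 0, 0, 0.1, 0.5], [25, 3, 5, 5, 5]))
print(dict(zip(["m1", "alpha1", "alpha2", "n", "t*"], popt)))
```

In practice one would supply the measured magnitudes with their individual errors; the fitted parameters can then be propagated to predict the late-time brightness.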
We first fit the light curve of the OT using all of the available data up to HDAB 8.0. For the data sets with zero (0.03 mag) systematic errors added, we find $`\alpha _1=0.79\pm 0.04(0.81\pm 0.05),\alpha _2=2.41\pm 0.16(2.31\pm 0.16),n=0.87\pm 0.22(1.09\pm 0.34)`$, and $`t_{*}=1.75\pm 0.23(1.57\pm 0.19)`$ days. The quoted uncertainties are 1-$`\sigma `$ errors. The fitted $`\chi ^2/\nu =333/165`$ d.o.f. $`=2.02`$ shows that the fit to the unaltered data is not perfect, but is acceptable if systematic errors of 0.03 mag are considered ($`\chi ^2/\nu =0.83`$). The observed $`n=0.9`$–$`1.1`$ differs significantly from the value $`n=0.4`$ found by Rhoads (1999) when fitting his numerical results, which predict a rather smooth transition. The more abrupt break may be due to the superposition of several effects, such as the slowing of the jet and its sideways spread (Harrison et al. 1999). By fitting the shape of the mean light curve to the fluxes in each photometric band we obtain the magnitude normalisations corresponding to the quantities $`k_i`$ in Eq. 1: $`B_1=20.550\pm 0.012`$, $`V_1=19.961\pm 0.003`$, $`R_1=19.565\pm 0.003`$, and $`I_1=19.075\pm 0.003`$. The corresponding colours are consistent with those for HDAB 0.6–1.1. The lack of substantial colour variations can be seen from the small scatter in the residuals of the best fit (Fig. 2).
For comparison, for $`n`$ fixed at 1.0 we obtain $`\alpha _1=0.80\pm 0.02`$, $`\alpha _2=2.33\pm 0.04`$, $`t_{*}=1.63\pm 0.08`$ days and $`\chi ^2/\nu =2.0`$ for a fit to the unaltered data points. Since the errors of the $`\alpha `$’s and of $`n`$ are correlated, fixing $`n`$ reduces the errors in the other parameters artificially. Our fit to the light curve with $`n\simeq 1`$ allows us to predict the brightness of the OT at later times with some confidence that uncertainties in the shape of the light curve have been accounted for as far as possible. We predict an OT magnitude of $`R=26.92_{-0.33}^{+0.46}`$ on June 8 (HDAB 28.8), when our BR photometry was performed.
Fruchter et al. (1999) have reported detections with the HST-STIS open CCD on June 8.1 and June 17.9 (HDAB 28.7 and 38.5). Assuming an $`F_\nu \propto \nu ^{-0.6}`$ spectrum, they obtained $`V=27.0\pm 0.2`$ and $`V=27.8\pm 0.3`$, corresponding to $`R=26.6\pm 0.2`$ and $`R=27.4\pm 0.3`$, respectively. Given the red sensitivity of the STIS-CCD, these $`R`$ magnitudes are quite robust. We conclude that our June 8 result and the first of the HST detections are consistent with each other.
Assuming that all photometry including the late data points is that of the decaying OT, we obtain nearly identical light curve parameters: $`R_1=19.539\pm 0.063(19.555\pm 0.082),\alpha _1=0.82\pm 0.03(0.84\pm 0.03);\alpha _2=2.23\pm 0.07(2.18\pm 0.07)`$; $`n=1.17\pm 0.17(1.43\pm 0.26);t_{*}=1.51\pm 0.09(1.41\pm 0.08);\text{and}\chi ^2/\nu =1.87(0.81)`$. Thus, all of the available photometry is consistent with the light curve of a pure OT.
The observed spectral index $`\beta \simeq 0.55`$ and the values of $`\alpha _1`$ and $`\alpha _2`$ are in excellent agreement with the theoretical predictions if the cooling break stays at wavelengths shorter than the visual band until HDAB 8.0. The slope of the electron injection spectrum is then $`p=2.1\pm 0.1`$, in agreement with the conclusions drawn by Harrison et al. (1999) and Stanek et al. (1999).
Figures 3a-c depict the late decay of the OT from an equivalent $`R`$-magnitude of 21.95 (HDAB 3.8) over 23.75 (HDAB 8.0) to $`\simeq 26.4`$ (HDAB 28.8) (Table 2). Down to the 5-$`\sigma `$ thresholds for a detection at an arbitrary position, $`B=26.6`$, $`V=25.6`$, $`R=26.1`$ and $`I=25.0`$, our images show no other object within 6” of the OT besides the stars A, B, C, and D. The faint emission on June 8 at the OT position is more easily seen when the high-resolution image (0.1” pixels) is slightly smoothed. Figure 3d is an enhanced version of 3c, smoothed with a Gaussian of 0.3” standard deviation and with star C PSF-subtracted. While the emission appears to be slightly offset from the OT position by 0.4” and may signify an underlying object, the shift is within the statistical uncertainty expected for such a faint source.
### 3.3 The quest for the host galaxy of GRB 990510
Although the light curve analysis does not require another light source in addition to the OT, such a contribution cannot be excluded. One possibility is that an underlying SN event contributes to the June 1999 light level. Taking the $`R`$-band light curve of SN1998bw (Galama et al. 1998, Iwamoto et al. 1998) adjusted to $`z=1.62`$ (Bloom et al. 1999), one expects a peak magnitude of $`R=26.8`$ (reddened 27.3), just consistent with the 3-$`\sigma `$ upper limit of $`R=27.4`$ of the fit \[fitted flux relative to star A with R = 19.17 is $`(1.0\pm 1.4)\times 10^{-4}`$\]. Forcing the fit to use a peak value of 27.4 results in an increase in $`\alpha _2`$ from 2.37 to 2.71 and a correlated decrease in $`n`$ from 0.94 to 0.60. Alternatively, the best-fit contribution by an underlying galaxy, which adds a constant to the late-time light curve, yields a 3-$`\sigma `$ upper limit of 27.6 (27.1 de-reddened) \[flux relative to star A $`(2.0\pm 1.7)\times 10^{-4}`$\]. The forced fit with the 3-$`\sigma `$ upper limit requires $`\alpha _2`$ and $`n`$ changes nearly identical to those of the corresponding SN fit. We note that many burst models do not require a SN event and that a host fainter than 27 mag presents no problem with the galaxy population responsible for GRBs (Hogg & Fruchter 1999). Fig. 2 shows the fits obtained for the OT alone (early data or all data) and the forced fits with the OT and the 3-$`\sigma `$ upper-limit SN and host contributions added: there is presently no way to distinguish between the three models, but the fact that the early and late OT light curve fits are so similar strongly suggests that we have only seen the OT.
For $`z=1.62`$ (Galama et al. 1999) and a standard cosmology with $`H_o=70`$ km s<sup>-1</sup>Mpc<sup>-1</sup> and $`\mathrm{\Omega }_o=0.3`$, the distance modulus is 45.2. The de-reddened host magnitude $`R>27.1`$ then translates to a restframe $`M_\mathrm{R}>-18.4`$ for a starburst spectrum similar to NGC 4449 (Bruzual & Charlot 1993) and to $`M_\mathrm{R}>-17`$ for a pure starburst spectrum. Hence, the host is intrinsically faint if at $`z=1.62`$. While one might be willing to place the burst at a larger $`z`$, there is a probable limit from the expected Ly$`\alpha `$-forest or Lyman-continuum absorption. The lack of a pronounced depression at wavelengths longer than 3600Å limits the acceptable redshift to $`z\lesssim 2.0`$ for Ly$`\alpha `$-forest and to $`z\lesssim 2.9`$ for Lyman-continuum absorption.
The only brighter object possibly related to the OT is the one denoted E in Fig. 3d, 12” to the SE. With a diameter of $`1^{\prime \prime }`$ and its decidedly blue colour it may qualify as a starburst galaxy. Its de-reddened magnitudes and colours are well within the range found for the hosts of other GRBs and OTs: $`V\simeq 23.8`$, $`B-V\simeq 0.1`$, $`V-R\simeq 0.3`$, $`R-I\simeq 0.4`$, assuming $`E_{B-V}`$ = 0.20. The observed spectral flux distribution is typical of the UV spectrum of a starburst or irregular galaxy (e.g. Bruzual & Charlot 1993). If this is actually a galaxy at $`z=1.62`$, it would be separated by at least 100 kpc from the OT. This would not be a problem if the GRB progenitor were a low-mass system ejected from this galaxy (Fryer et al. 1999). The absence of an associated SN or host galaxy at the OT position would support such a scenario, while the suspected connection of long gamma-ray bursts with massive progenitors does not.
## 4 Conclusions
The recent observations of the optical transient of GRB 990510 strongly support the synchrotron model for gamma-ray bursts. The pre-break light curve and the $`\lambda >4900`$ Å optical flux distribution are consistent with the adiabatic cooling of relativistic electrons with an $`E^{-p}`$ injection spectrum with $`p\simeq 2.1`$. The steep late decline is a firm indicator of the presence of a slowing jet (Meszaros & Rees 1999, Rhoads 1999, Sari, et al. 1999).
The light level on June 8 is quite consistent with being solely due to the transient, but contributions by either a SN with $`R>27.4`$ or a host with $`R>27.6`$ (3-$`\sigma `$ upper limits) cannot be ruled out. If located at $`z=1.62`$, the SN could have been nearly as bright as expected from appropriately scaling SN1998bw, while a starburst galaxy as the host would have $`M_\mathrm{R}\gtrsim -17`$. Proving the presence or absence of a host galaxy at the OT position or identifying fainter nearby candidates requires additional deep exposures. If ejection of the progenitor of GRB 990510 is considered a possibility, the blue extended object 12” east with de-reddened $`R=23.6`$ could be a host candidate.
###### Acknowledgements.
We thank the ESO staff for the competent performance of part of the observations in TOO service mode and our referee, Ralph Wijers, for his helpful comments. KB thanks Dieter Hartmann for enlightening discussions and comments on GRBs and Wolfram Kollatschny for helpful comments on starburst galaxies.
|
no-problem/9909/hep-ph9909391.html
|
ar5iv
|
text
|
# New Physics Effects in Doubly Cabibbo Suppressed $`D`$ Decays
## I Introduction
The decay $`\overline{D}^0\to K^+\pi ^{-}`$ proceeds via the quark sub-process $`\overline{c}\to d\overline{u}\overline{s}`$ and is Cabibbo favored:
$$A^{\mathrm{SM}}(\overline{D}^0\to K^+\pi ^{-})\sim G_F|V_{cs}V_{ud}|.$$
(1)
The decay $`D^0\to K^+\pi ^{-}`$ proceeds via the quark sub-process $`c\to du\overline{s}`$ and is doubly Cabibbo suppressed:
$$A^{\mathrm{SM}}(D^0\to K^+\pi ^{-})\sim G_F|V_{cd}V_{us}|.$$
(2)
If $`D^0`$–$`\overline{D}^0`$ mixing is large, then there could be a significant contribution to the latter from $`D^0\to \overline{D}^0\to K^+\pi ^{-}`$, where the second stage is Cabibbo favored. The most sensitive experimental searches for $`D^0`$–$`\overline{D}^0`$ mixing use indeed this process. The fact that the first-mix-then-decay amplitude gives a different time dependence than the direct decay allows experimenters to distinguish between the two contributions and to set unambiguous upper bounds on the mixing.
The standard model (SM) prediction for $`D^0`$–$`\overline{D}^0`$ mixing, $`(\mathrm{\Delta }m_D/m_D)_{\mathrm{SM}}\lesssim 10^{-16}`$ , is well below the present experimental sensitivity, $`(\mathrm{\Delta }m_D/m_D)_{\mathrm{exp}}<8.5\times 10^{-14}`$ . If mixing is discovered within an order of magnitude of present bounds, its theoretical explanation will require contributions from New Physics. Even more convincing evidence for New Physics will arise if CP violation plays a role in the $`D^0\to K^+\pi ^{-}`$ decay . The reason is that, while the calculation of the total rate suffers from large hadronic uncertainties related to the long distance contributions, the SM prediction that there is no CP violation is very safe, since it is only related to the fact that the third generation plays almost no role in both the mixing and the decay.
Most if not all present analyses of the search for $`D^0`$–$`\overline{D}^0`$ mixing through $`D\to K\pi `$ decays make the assumption that the New Physics can significantly affect the mixing but not the decay. This is a plausible assumption. The SM contribution to the mixing is highly suppressed because it is second order in $`\alpha _W`$ and has a very strong GIM suppression factor, $`m_s^4/(M_Wm_c)^2`$. The mixing is then sensitive to New Physics which could contribute at tree level (as in multi-scalar models), or through strong interactions (as in various supersymmetric models), etc. On the other hand, the SM contribution to the decay is through the tree-level $`W`$-mediated diagram. One does not expect that New Physics could give competing contributions.
Yet, since the decay in question is doubly Cabibbo suppressed, one may wonder if indeed the assumption that it gets no New Physics contributions is safe. It is the purpose of this work to test this assumption in a more concrete way. (For previous work on related processes, see .) We examine various reasonable extensions of the standard model with new tree level contributions to the decay. For each model, we present the relevant phenomenological constraints and find an upper bound on the new contributions to $`D^0K^+\pi ^{}`$.
From (1) and (2) we get the following (naive) estimate for the ratio of amplitudes:
$$\left|\frac{A^{\mathrm{SM}}(D^0\to K^+\pi ^{-})}{A^{\mathrm{SM}}(\overline{D}^0\to K^+\pi ^{-})}\right|\sim \left|\frac{V_{cd}V_{us}}{V_{cs}V_{ud}}\right|\approx 0.05.$$
(3)
The value of this ratio from the recent CLEO results is about $`0.058`$. Thus, if New Physics contributions to $`D^0K^+\pi ^{}`$ are to compete with the doubly Cabibbo suppressed SM amplitude, the corresponding effective New Physics coupling $`G_N`$ should satisfy
$$G_N>10^{-2}G_F.$$
(4)
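As a quick numerical check of this estimate (the CKM magnitudes below are our inputs):

```python
# Check of Eqs. (3)-(4): the doubly Cabibbo suppressed amplitude ratio.
V_us = V_cd = 0.22
V_ud = V_cs = 0.975
print((V_cd * V_us) / (V_cs * V_ud))   # ~0.05, hence G_N of order 1e-2 G_F
```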
In Section II we investigate whether this is possible in various New Physics scenarios. In Section III we study model-independent bounds on new tree-level contributions to $`D^0\to K^+\pi ^{-}`$. We conclude in Section IV.
## II Specific Models
### A Supersymmetry without $`R`$-parity
Supersymmetry without $`R`$-parity ($`R_p`$) predicts new tree diagrams contributing to the decay. The lepton number violating terms $`\lambda _{ijk}^{}L_iQ_jd_k^c`$ give a slepton-mediated contribution with an effective coupling:
$$G_N^\lambda ^{}=\frac{\lambda _{21k}^{}\lambda _{12k}^{}}{4\sqrt{2}M^2(\stackrel{~}{\mathrm{}}_{Lk}^{})}.$$
(5)
These couplings are severely constrained by $`K^0`$$`\overline{K}^0`$ mixing (see e.g. ):
$$\lambda _{21k}^{\prime }\lambda _{12k}^{\prime }<10^{-9}(\mathrm{for}M(\stackrel{~}{\ell }_{Lk})=100\text{ GeV}).$$
(6)
This rules out any significant contribution to $`D^0\to K^+\pi ^{-}`$ from slepton exchange in models of $`R_p`$ violation:
$$\frac{G_N^{\lambda ^{\prime }}}{G_F|V_{cd}V_{us}|}<3\times 10^{-8}.$$
(7)
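The arithmetic behind Eqs. (5)-(7) is easily reproduced (a sketch in GeV units, with our conventions for the inputs):

```python
# Check of Eqs. (5)-(7): slepton-mediated coupling vs. the SM amplitude.
import math

G_F = 1.166e-5                     # Fermi constant [GeV^-2]
lam_prod = 1e-9                    # K0-K0bar mixing bound of Eq. (6)
M_slepton = 100.0                  # [GeV]
G_N = lam_prod / (4 * math.sqrt(2) * M_slepton**2)   # Eq. (5)
print(G_N / (G_F * 0.22 * 0.22))   # ~3e-8, as in Eq. (7)
```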
The baryon number violating terms $`\lambda _{ijk}^{\prime \prime }u_i^cd_j^cd_k^c`$ give a squark-mediated contribution with an effective coupling
$$G_N^{\lambda ^{\prime \prime }}=\frac{\lambda _{113}^{\prime \prime }\lambda _{223}^{\prime \prime }}{4\sqrt{2}M^2(\stackrel{~}{b}_R)}.$$
(8)
The $`\lambda _{223}^{\prime \prime }`$ coupling is only constrained by requiring that it remain in the perturbative domain up to the unification scale, and could be of order unity . The $`\lambda _{113}^{\prime \prime }`$ coupling is, however, severely constrained by the upper bound on $`n\overline{n}`$ oscillations :
$$|\lambda _{113}^{\prime \prime }|<10^{-4}(\mathrm{for}M(\stackrel{~}{q})=100\text{ GeV}).$$
(9)
This rules out a significant contribution to $`D^0\to K^+\pi ^{-}`$ from squark exchange in models of $`R_p`$ violation:
$$\frac{G_N^{\lambda ^{\prime \prime }}}{G_F|V_{cd}V_{us}|}<3\times 10^{-3}.$$
(10)
### B Multi-Scalar Models
Extensions of the scalar sector, beyond the single Higgs doublet of the SM, predict new tree diagrams contributing to the decay.
In two Higgs doublet models (2HDM) with natural flavor conservation, there is a charged Higgs ($`H^\pm `$) mediated contribution. The trilinear coupling of the physical charged Higgs to the $`u_i\overline{d}_j`$ bilinear is
$$\mathcal{L}_{H^\pm }=\frac{ig}{\sqrt{2}m_W}\overline{u_i}\left[m_{u_i}\mathrm{cot}\beta P_L+m_{d_j}\mathrm{tan}\beta P_R\right]V_{ij}d_jH^++h.c.,$$
(11)
where $`m_W`$ is the mass of the $`W`$-boson, $`m_q`$ is the mass of the quark $`q`$, $`\mathrm{tan}\beta =v_u/v_d`$ is the ratio of vevs and $`P_{R,L}=(1\pm \gamma _5)/2`$. It follows that the charged Higgs mediated contribution is also doubly Cabibbo suppressed. Then, for large $`\mathrm{tan}\beta `$, the suppression with respect to the SM contribution is given by
$$\frac{G_N^{H^\pm }}{G_F|V_{cd}V_{us}|}\sim \frac{m_dm_s\mathrm{tan}^2\beta }{M_{H^\pm }^2}<4\times 10^{-4}.$$
(12)
To obtain the upper bound, we used the constraint from $`b\to c\tau \nu `$ :
$$\mathrm{tan}\beta <0.5\left(\frac{M_{H^\pm }}{\text{ GeV}}\right),$$
(13)
and the ranges of quark masses given in Ref. . For $`\mathrm{tan}\beta 1`$ we have
$$\frac{G_N^{H^\pm }}{G_F|V_{cd}V_{us}|}\sim \frac{m_sm_c}{M_{H^\pm }^2}<10^{-4}.$$
(14)
To obtain the upper bound, we used $`M_{H^\pm }>54.5`$ GeV . Thus there are no significant contributions to $`D^0\to K^+\pi ^{-}`$ from charged Higgs exchange within the 2HDM.
Multi Higgs doublet models with natural flavor conservation but with more than two Higgs doublets have parameters that are less constrained and, in particular, provide new sources of CP violation. There are several charged scalars that can mediate the $`D^0\to K^+\pi ^{-}`$ decay. If we take the simplest case in which only one of them contributes in a significant way (see e.g. ), then its couplings are similar to those of Eq. (11) except that $`\mathrm{tan}\beta `$ and $`\mathrm{cot}\beta `$ are replaced by, respectively, $`X`$ and $`Y`$. In general, $`X`$ and $`Y`$ are complex and, moreover, $`|XY|\ne 1`$. Eq. (12) is modified:
$$\frac{G_N^{H^\pm }}{G_F|V_{cd}V_{us}|}\sim \frac{m_dm_s|X|^2}{M_{H^\pm }^2}<10^{-2}.$$
(15)
To obtain the upper bound, we used the perturbativity bound $`|X|<130`$ and the lower bound on $`M_{H^\pm }`$. Note that this contribution is not only constrained to be small, but it also carries no new CP violating phase. In contrast, the new contribution that replaces that of Eq. (14) does carry a new phase:
$$\frac{G_N^{H^\pm }}{G_F|V_{cd}V_{us}|}\sim \frac{m_sm_cYX^{*}}{M_{H^\pm }^2}<3\times 10^{-4}.$$
(16)
To obtain the upper bound, we used the constraint from $`b\to s\gamma `$, $`|XY|<4`$ . The bound on a CP violating contribution is even somewhat stronger, since the measurement of $`b\to s\gamma `$ gives $`\mathrm{Im}(X^{*}Y)<2`$ . In any case, the contribution from charged Higgs exchange in multi Higgs doublet models is, at most, at the percent level. The CP violating part of this contribution is at most of order $`10^{-4}`$ .
It is possible that Yukawa couplings are naturally suppressed by flavor symmetries rather than by natural flavor conservation . In such a framework, there is a contribution to $`D^0\to K^+\pi ^{-}`$ from neutral scalar exchange. To estimate these contributions, we use the explicit models of Ref. . Here, a horizontal $`U(1)_H`$ symmetry is imposed. At low energies, the symmetry is broken by a small parameter $`\lambda `$ (usually taken to be of the order of the Cabibbo angle, $`\lambda \simeq 0.2`$), leading to selection rules. The scalar sector consists of two Higgs doublets, $`\varphi _u`$ and $`\varphi _d`$, and a single scalar singlet $`S`$. The effective coupling of the $`S`$ scalar to quarks is given by
$$\mathcal{L}_S=Z_{ij}^qS\overline{q_{iR}}q_{jL}+h.c.(q=u,d;i,j=1,2,3).$$
(17)
The order of magnitude of $`Z_{ij}^q`$ is determined by the selection rules related to the broken flavor symmetry:
$$Z_{ij}^q\sim \frac{M_{ij}^q}{\langle S\rangle },M_{ij}^q\sim \lambda ^{H(q_{jL})+H(\overline{q_{iR}})+H(\varphi _q)}\langle \varphi _q\rangle .$$
(18)
The horizontal charges $`H`$ of the quark and Higgs fields are determined by the physical flavor parameters:
$`|V_{ij}|`$ $`\sim `$ $`\lambda ^{H(q_{iL})-H(q_{jL})},`$ (19)
$`m(q_i)`$ $`\sim `$ $`\lambda ^{H(q_{iL})+H(\overline{q_{iR}})+H(\varphi _q)}\langle \varphi _q\rangle .`$ (20)
Using (19) and (20) we can express the suppression of the relevant Yukawa couplings in terms of the quark masses and mixing angles:
$$|Z_{uc}^u|\sim \frac{m_c|V_{12}|}{\langle S\rangle },|Z_{cu}^u|\sim \frac{m_u}{\langle S\rangle |V_{12}|},|Z_{ds}^d|\sim \frac{m_s|V_{12}|\mathrm{tan}\beta }{\langle S\rangle },|Z_{sd}^d|\sim \frac{m_d\mathrm{tan}\beta }{\langle S\rangle |V_{12}|}.$$
(21)
These couplings give rise to various operators that induce $`c\to ud\overline{s}`$ at tree level. For the leading contributions, we find
$$\frac{G_N^S}{G_F|V_{cd}V_{us}|}\sim \frac{m_cm_s\mathrm{tan}\beta }{\langle S\rangle ^2}\frac{m_W^2}{m_S^2}<5\times 10^{-3}.$$
(22)
To obtain the upper bound, we used $`\mathrm{tan}\beta <130`$ and the very conservative bounds $`\langle S\rangle ,m_S>m_W`$. Other models give a similar or even stronger suppression. We conclude that there are no significant contributions to $`D^0\to K^+\pi ^{-}`$ from neutral Higgs exchange within multi-scalar models with approximate flavor symmetries.
### C Left-Right Symmetric Models
Left-right symmetric (LRS) models predict new tree-level contributions, mediated by the $`W_R`$ gauge bosons. The relevant interactions are given by
$$\mathcal{L}_{CC}=\frac{g_R}{\sqrt{2}}\overline{u_{iR}}\gamma _\mu V_{ij}^Rd_{jR}W_R^{\mu +}+h.c.,$$
(23)
where $`V^R`$ is the mixing matrix for the right-handed quarks. For a general model of an extended electroweak gauge group $`G=SU(2)_L\times SU(2)_R\times U(1)_{B-L}`$, the interactions of Eq. (23) lead to
$$\frac{G_N^{W_R}}{G_F\left|V_{cd}V_{us}^{*}\right|}=\frac{g_R^2}{g_L^2}\frac{m_{W_L}^2}{m_{W_R}^2}\left|\frac{V_{cd}^RV_{us}^{R*}}{V_{cd}V_{us}^{*}}\right|.$$
(24)
However, in left-right symmetric models, an extra discrete symmetry is imposed. It leads to the relation $`g_L=g_R`$ and, in models of spontaneous CP violation or of manifest left-right symmetry, to $`|V_{ij}|=|V_{ij}^R|`$. Then Eq. (24) is simplified:
$$\frac{G_N^{\mathrm{LRS}}}{G_F\left|V_{cd}V_{us}^{*}\right|}=\frac{m_{W_L}^2}{m_{W_R}^2}<\frac{1}{430},$$
(25)
where the upper bound comes from the $`\mathrm{\Delta }m_K`$ constraint .
In $`SU(2)_L\times SU(2)_R\times U(1)_{B-L}`$ models where $`V`$ and $`V_R`$ are independent mixing matrices, it is possible to avoid the $`\mathrm{\Delta }M_K`$ constraints . This is done by fine tuning the relevant entries in $`V_R`$ to be very small. In particular, it was shown that in such a framework there could be interesting implications for CP violation in the $`B`$ system . However, as concerns the $`D^0\to K^+\pi ^{-}`$ decay, the situation is different: the same mixing elements that contribute to $`D^0\to K^+\pi ^{-}`$, that is $`V_{cd}^RV_{us}^{R*}`$, contribute also to $`K\overline{K}`$ mixing. If they are switched off, to avoid the $`\mathrm{\Delta }m_K`$ constraint, the new contribution to $`D^0\to K^+\pi ^{-}`$ vanishes as well. One can see this independently of the details of the model by noticing that the $`G_N^{W_R}`$ effective coupling of Eq. (24) can be combined with the flavor-changing $`G_FV_{cd}V_{us}^{*}`$ coupling of the SM to produce a contribution to $`K\overline{K}`$ mixing. Indeed, one finds for the CP conserving contribution :
$$\mathrm{Re}\left(\frac{G_N^{W_R}}{G_FV_{cd}V_{us}^{*}}\right)<0.2,$$
(26)
and for the CP violating contribution :
$$\mathrm{Im}\left(\frac{G_N^{W_R}}{G_FV_{cd}V_{us}^{*}}\right)<0.002.$$
(27)
We learn that in such fine-tuned models, the $`W_R`$-mediated contribution to the decay rate could be non-negligible, but the CP violating contribution is very small.
### D Extra Quarks in SM Vector-Like Representations
In models with non-sequential (‘exotic’) quarks, the $`Z`$-boson has flavor changing couplings, leading to a $`Z`$-mediated contribution to the $`D^0\to K^+\pi ^{-}`$ decay. For example, in models with additional up quarks in the vector-like representation $`(\mathrm{𝟑},\mathrm{𝟏},+2/3)\oplus (\overline{\mathrm{𝟑}},\mathrm{𝟏},-2/3)`$ and additional down quarks in the vector-like representation $`(\mathrm{𝟑},\mathrm{𝟏},-1/3)\oplus (\overline{\mathrm{𝟑}},\mathrm{𝟏},+1/3)`$, the flavor changing $`Z`$ couplings have the form
$$\mathcal{L}_Z=\frac{g}{2\mathrm{cos}\theta _W}\left(U_{ij}^u\overline{u}_{Li}\gamma _\mu u_{Lj}-U_{ij}^d\overline{d}_{Li}\gamma _\mu d_{Lj}\right)Z^\mu +h.c..$$
(28)
Here, $`U^q=V_L^q\mathrm{diag}(1,1,1,0)V_L^{q\mathrm{\dagger }}`$, where $`V_L^q`$ is the $`4\times 4`$ diagonalizing matrix for $`M_qM_q^{\mathrm{\dagger }}`$ ($`M_q`$ being the quark mass matrix). The flavor changing couplings are constrained by $`\mathrm{\Delta }M_K`$ and $`\mathrm{\Delta }M_D`$:
$$|U_{sd}^d|<2\times 10^{-4},|U_{cu}^u|<7\times 10^{-4}.$$
(29)
The resulting effective four fermi coupling is given by
$$\frac{G_N^Z}{G_F|V_{cd}V_{us}|}\sim \frac{|U_{sd}^dU_{cu}^u|}{|V_{cd}V_{us}|}<3\times 10^{-6}.$$
(30)
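Again the bound follows from simple arithmetic (the CKM inputs are ours):

```python
# Check of Eq. (30) using the bounds of Eq. (29).
U_sd, U_cu = 2e-4, 7e-4
print(U_sd * U_cu / (0.22 * 0.22))   # ~3e-6
```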
The same bound applies for the case of vector-like quark doublets, $`(\mathrm{𝟑},\mathrm{𝟐},+1/6)\oplus (\overline{\mathrm{𝟑}},\mathrm{𝟐},-1/6)`$ . The flavor changing $`Z`$ couplings are to right-handed quarks, with a mixing matrix $`U^q=V_R^q\mathrm{diag}(0,0,0,1)V_R^{q\mathrm{\dagger }}`$. Here $`V_R^q`$ is the $`4\times 4`$ diagonalizing matrix for $`M_q^{\mathrm{\dagger }}M_q`$.
We learn that a significant contribution to $`D^0\to K^+\pi ^{-}`$ from $`Z`$-mediated flavor changing interactions is ruled out.
## III Model Independent Analysis
We have seen that the contributions to $`D^0\to K^+\pi ^{-}`$ in various reasonable extensions of the SM cannot compete with the $`W`$-mediated process. Still, it would be useful if one could show model-independently that CP violation in the decay can be neglected. We try to accomplish this task for all possible tree level contributions to the $`D^0\to K^+\pi ^{-}`$ decay. Our analysis proceeds as follows: we first list all relevant (anti)quark bilinears and their transformation properties under the SM gauge group $`𝒢_{SM}=SU(3)_C\times SU(2)_L\times U(1)_Y`$. If the two quarks have the same (opposite) chirality, they couple to a scalar (vector) boson. Altogether there are ten possible bilinears (plus their hermitian conjugates), shown in Tab. 1 . Here $`Q`$ denotes the left-handed quark doublet, $`q=u,d`$ denote the right-handed quark singlets, and the superscript $`c`$ refers to the respective antiquarks. The examples given in the last column refer to the models discussed in Section II.
In general, the presence of a heavy boson $`ℬ`$ that couples to any of the above quark bilinears $`B_{ij}`$ with trilinear couplings $`\lambda _{ij}^{ℬ}`$, where $`i,j=1,2,3`$ refer to the quark flavors, gives rise to the four quark operator $`B_{ij}^{\mathrm{\dagger }}B_{kl}`$ with the effective coupling
$$G_N^{ℬ}=C_{CG}\frac{\lambda _{ij}^{ℬ*}\lambda _{kl}^{ℬ}}{4\sqrt{2}M_{ℬ}^2},$$
(31)
at energy scales well below the mass $`M_{ℬ}`$. ($`C_{CG}`$ is the appropriate Clebsch-Gordan coefficient.) For intermediate diquarks, we only discuss color triplets. The discussion of color sextets follows similar lines.
| Bilinear $`B`$ | $`SU(3)_C`$ | $`SU(2)_L`$ | $`Y`$ | Couples to Boson $`ℬ`$ | Example |
| --- | --- | --- | --- | --- | --- |
| $`Qd^c`$ | $`\mathrm{𝟏}`$ | $`\mathrm{𝟐}`$ | $`1/2`$ | $`𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)`$ | $`\stackrel{~}{L}`$ (SUSY $`\overline{)}R_p`$) |
| $`u^cd^c`$ | $`\overline{\mathrm{𝟑}}\overline{\mathrm{𝟑}}`$ | $`\mathrm{𝟏}`$ | $`1/3`$ | $`𝒮(\overline{\mathrm{𝟑}},\mathrm{𝟏},1/3)`$ | $`\stackrel{~}{d}^c`$ (SUSY $`\overline{)}R_p`$) |
| $`Qu^c`$ | $`\mathrm{𝟏}`$ | $`\mathrm{𝟐}`$ | $`1/2`$ | $`𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)`$ | $`H_u`$ (2HDM) |
| $`QQ`$ | $`\mathrm{𝟑}\mathrm{𝟑}`$ | $`\mathrm{𝟐}\mathrm{𝟐}`$ | $`1/3`$ | $`𝒮(\mathrm{𝟑},𝑳,1/3)`$ \[$`𝑳`$=1,3\] | |
| $`ud^c`$ | $`\mathrm{𝟏}`$ | $`\mathrm{𝟏}`$ | $`1`$ | $`𝒱(\mathrm{𝟏},\mathrm{𝟏},1)`$ | $`W_R`$ (LRS) |
| $`qq^c`$ | $`\mathrm{𝟏}`$ | $`\mathrm{𝟏}`$ | $`0`$ | $`𝒱(\mathrm{𝟏},\mathrm{𝟏},0)`$ | |
| $`Qd`$ | $`\mathrm{𝟑}\mathrm{𝟑}`$ | $`\mathrm{𝟐}`$ | $`1/6`$ | $`𝒱(\mathrm{𝟑},\mathrm{𝟐},1/6)`$ | |
| $`Qu`$ | $`\mathrm{𝟑}\mathrm{𝟑}`$ | $`\mathrm{𝟐}`$ | $`5/6`$ | $`𝒱(\mathrm{𝟑},\mathrm{𝟐},5/6)`$ | |
| $`QQ^c`$ | $`\mathrm{𝟏}`$ | $`\mathrm{𝟐}\mathrm{𝟐}`$ | $`0`$ | $`𝒱(\mathrm{𝟏},𝑳,0)`$ \[$`𝑳`$=1,3\] | $`Z`$ (extra $`q`$’s) |
Tab. 1: Quark-(Anti)Quark Bilinears
In order to predict the rates of the relevant hadronic processes one would need to take into account QCD corrections as well as the hadronic matrix elements. Since we are mainly interested in ratios between the rates due to New Physics and those from the SM, using (31) is sufficient to obtain an order-of-magnitude estimate for such ratios.
The first entry in Tab. 1 is realized in supersymmetric models without $`R_p`$ (SUSY $`\overline{)}R_p`$): $`𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)`$ is the slepton doublet $`\stackrel{~}{L_k}`$, with $`\lambda _{ij}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}=\lambda _{ijk}^{\prime }`$. As we have pointed out in Section II A, non-vanishing $`\lambda _{12}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}`$ and $`\lambda _{21}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}`$ give rise not only to tree-level contributions to $`D^0\to K^+\pi ^{-}`$ but also to $`K^0`$–$`\overline{K}^0`$ mixing, which severely constrains the effective coupling $`G_N`$. In this case the bound arises only from the presence of the trilinear couplings, and supersymmetry does not play a role. The bound in (7) is then model-independent.
The second entry in Tab. 1 is also realized in supersymmetric models without $`R_p`$: $`𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)`$ is the down squark $`\stackrel{~}{d_k^c}`$, with $`\lambda _{ij}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}=\lambda _{ijk}^{\prime \prime }`$. For the $`\lambda ^{\prime \prime }`$ coupling, however, the constraint comes from the upper bound on $`n\overline{n}`$ oscillations: to violate baryon number but conserve strangeness or beauty, an internal loop with charginos is required . Supersymmetry does play a role in the bound on $`\lambda _{113}^{\prime \prime }`$, and the bound does not hold for a generic $`\lambda _{11}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}`$. More generally, there is no strong model-independent bound on any diagonal $`\lambda _{ii}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}`$ coupling. The bound on the scale of compositeness , $`\mathrm{\Lambda }(qqqq)>1.6`$ TeV, suggests a bound for the $`i=1`$ case, $`|\lambda _{11}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}|<0.2`$, which implies $`G_N^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}<0.3G_F`$ (assuming $`|\lambda _{22}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},1/3)}|\sim 1`$). We learn then that one could construct models which incorporate color-triplet weak-singlet scalars where there is a large CP violating contribution to $`D^0\to K^+\pi ^{-}`$.
The coupling of $`Qu^c`$ to $`𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)`$ appears in the two Higgs doublet model with natural flavor conservation, as discussed in Section II B. In this model, the effective coupling is suppressed by the quark masses and the CKM matrix elements. But even if the doublet $`𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)`$ is unrelated to the generation of the quark masses, one can derive a model-independent bound, which relies only on the $`SU(2)_L`$ symmetry: Non-vanishing $`\lambda _{12}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}`$ and $`\lambda _{21}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}`$ give not only a charged scalar mediated contribution to $`D^0\to K^+\pi ^{-}`$, but also a neutral scalar mediated contribution to $`D^0`$–$`\overline{D}^0`$ mixing. We are assuming that the New Physics takes place at a scale that is comparable to or higher than the electroweak breaking scale, so that $`SU(2)_L`$ breaking effects are not large and the masses of the charged and neutral scalars are similar. Consequently, the upper bound on $`D^0`$–$`\overline{D}^0`$ mixing translates into
$$G_N^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)_{12}}=\frac{\lambda _{12}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)\,*}\lambda _{21}^{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}}{4\sqrt{2}M_{𝒮(\mathrm{𝟏},\mathrm{𝟐},1/2)}^2}<10^{-7}G_F,$$
(32)
too small to compete with the SM contribution.
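To get a sense of the scale such a bound probes, one can invert (32) for the mediator mass. The short Python sketch below does this for the illustrative choice $`|\lambda _{12}^{*}\lambda _{21}|=1`$; the unit couplings are an assumption made purely to translate the bound into energy units, not something fixed by the analysis above.

```python
import math

G_F = 1.166e-5                     # Fermi constant in GeV^-2
G_N_max = 1e-7 * G_F               # the upper bound of Eq. (32)

# M > sqrt(|lambda_12* lambda_21| / (4*sqrt(2)*G_N_max)), here with unit couplings
M_min = math.sqrt(1.0 / (4 * math.sqrt(2) * G_N_max))   # in GeV
print(f"M > {M_min / 1e3:.0f} TeV")                     # about 390 TeV
```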
The coupling of the $`QQ`$ bilinear to a scalar field could induce $`D^0\to K^+\pi ^{-}`$ if the flavor diagonal entries, $`\lambda _{11}^{𝒮(\mathrm{𝟑},𝑳,-1/3)}`$ and $`\lambda _{22}^{𝒮(\mathrm{𝟑},𝑳,-1/3)}`$, are non-zero. For an $`SU(2)_L`$ singlet ($`𝑳=\mathrm{𝟏}`$), the coupling is flavor anti-symmetric and therefore $`\lambda _{ii}^{𝒮(\mathrm{𝟑},\mathrm{𝟏},-1/3)}=0`$. For an $`SU(2)_L`$ triplet ($`𝑳=\mathrm{𝟑}`$), the coupling is flavor symmetric and $`\lambda _{ii}^{𝒮(\mathrm{𝟑},\mathrm{𝟑},-1/3)}\ne 0`$ is possible. (For scalar $`SU(3)_C`$ sextets the situation would be reversed.) However, while the $`Q_{EM}=-1/3`$ component mediates $`D^0\to K^+\pi ^{-}`$, the $`Q_{EM}=+2/3`$ component induces $`K^0`$–$`\overline{K}^0`$ mixing and the $`Q_{EM}=-4/3`$ component induces $`D^0`$–$`\overline{D}^0`$ mixing. We find:
$$G_N^{𝒮(\mathrm{𝟑},\mathrm{𝟑},-1/3)_{12}}=\frac{\lambda _{11}^{𝒮(\mathrm{𝟑},\mathrm{𝟑},-1/3)}\lambda _{22}^{𝒮(\mathrm{𝟑},\mathrm{𝟑},-1/3)}}{4\sqrt{2}M_{𝒮(\mathrm{𝟑},\mathrm{𝟑},-1/3)}^2}<10^{-7}G_F,$$
(33)
too small to compete with the SM contribution.
Among the vector bosons listed in Tab. 1 we already encountered specific examples for the color singlets $`𝒱(\mathrm{𝟏},\mathrm{𝟏},1)`$ ($`W_R`$ in LRS models) and $`𝒱(\mathrm{𝟏},\mathrm{𝟑},0)`$ ($`Z`$-induced FCNCs due to extra quarks). The discussion we presented in Section II C can be generalized to any theory that contains a vector boson $`𝒱(\mathrm{𝟏},\mathrm{𝟏},1)`$ that couples to $`ud^c`$ as in (23). Note that the $`W_R`$ (being a gauge boson) has flavor diagonal couplings in the flavor basis and only the charged components induce flavor transitions between the mass eigenstates, while the neutral component cannot mediate FCNCs. Still, as we have seen in Section II C, the contribution from the left-right box diagram to $`\mathrm{\Delta }M_K`$ and $`ϵ_K`$ imposes severe constraints on the $`D^0\to K^+\pi ^{-}`$ amplitude due to $`𝒱(\mathrm{𝟏},\mathrm{𝟏},1)`$ exchange and rules out significant CP violation in the decay.
For the generic coupling $`\lambda ^{𝒱(\mathrm{𝟏},\mathrm{𝟑},0)}`$ we can adopt the specific result obtained in Section II D. Since the argument based on the bounds from $`K^0`$–$`\overline{K}^0`$ and $`D^0`$–$`\overline{D}^0`$ oscillations only used the trilinear couplings (28), it can be generalized to the generic couplings $`\lambda ^{𝒱(\mathrm{𝟏},\mathrm{𝟏},0)}`$. Because all quark-antiquark bilinears $`B_{ij}`$ that couple to $`𝒱(\mathrm{𝟏},\mathrm{𝟏},0)`$ are gauge-invariant, one can easily see that $`𝒱(\mathrm{𝟏},\mathrm{𝟏},0)`$ exchange induces not only the flavor-conserving effective operator $`B_{ij}^{\dagger }B_{ij}`$ but also the flavor-violating operator $`B_{ij}B_{ij}`$ that gives rise to neutral meson mixing.
For the remaining vector couplings in Tab. 1, the decay $`D^0\to K^+\pi ^{-}`$ can be induced if the flavor diagonal couplings $`\lambda _{ii}^{𝒱(\mathrm{𝟑},\mathrm{𝟐},Y)}`$ for both $`i=1`$ and 2 are non-zero. Note that the intermediate vector boson carries color. Since the respective quark bilinears contain one $`SU(2)_L`$ doublet, the effective operator that gives rise to $`D^0\to K^+\pi ^{-}`$ is related by an $`SU(2)_L`$ rotation to an operator that induces $`K^0`$–$`\overline{K}^0`$ \[for $`𝒱(\mathrm{𝟑},\mathrm{𝟐},1/6)`$\] and $`D^0`$–$`\overline{D}^0`$ \[for $`𝒱(\mathrm{𝟑},\mathrm{𝟐},5/6)`$\] oscillations at tree level. Since $`SU(2)_L`$ breaking effects are small, the data from neutral meson mixing imply $`G_N^{𝒱(\mathrm{𝟑},\mathrm{𝟐},1/6)}<10^{-7}G_F`$ and $`G_N^{𝒱(\mathrm{𝟑},\mathrm{𝟐},5/6)}<10^{-6}G_F`$, ruling out any significant contribution to $`D^0\to K^+\pi ^{-}`$.
## IV Conclusions
We have examined well-motivated extensions of the standard model that give new, tree-level contributions to the $`D^0\to K^+\pi ^{-}`$ decay. We showed that in all the models that we considered, strong phenomenological constraints imply that these contributions can be safely neglected.
We have extended our discussion to a model-independent analysis of all possible tree level contributions to the decay. We found that there is only one case where a large contribution to $`D^0\to K^+\pi ^{-}`$ is possible. This is the case where two right-handed quarks, $`u^cd^c`$, couple to an $`SU(2)_L`$-singlet scalar, $`𝒮(\overline{\mathrm{𝟑}},\mathrm{𝟏},1/3)`$. Such a coupling is present in SUSY without $`R_p`$, but in this model the relevant coupling is constrained by $`n\overline{n}`$ oscillations, ruling out a contribution that is comparable to the SM doubly Cabibbo suppressed diagram.
In our analysis, we have implicitly assumed that there are no significant accidental cancellations between various contributions to the processes from which we derive our constraints. It is possible to construct fine-tuned models where there is a large new contribution to $`D^0K^+\pi ^{}`$ while the related contributions to flavor changing neutral current processes are small.
We conclude that, in general, the assumption that New Physics could affect the $`D^0\to K^+\pi ^{-}`$ decay and, in particular, its CP violating part, only through $`D^0`$–$`\overline{D}^0`$ mixing is a good one; it holds to better than one percent in all the reasonable and well-motivated extensions of the standard model that we have examined. One could construct, however, viable (even if unmotivated) models where there is a new, large \[$`𝒪(0.3)`$\] and possibly CP violating contribution to the decay.
###### Acknowledgements.
We thank Zvi Lipkin for asking us the questions that led to this study. Y. N. is supported in part by the United States – Israel Binational Science Foundation (BSF), by the Israel Science Foundation founded by the Israel Academy of Sciences and Humanities, and by the Minerva Foundation (Munich).
# Most Real Bars are Not Made by the Bar Instability
## 1. Introduction
I am probably not the only person who once thought that the bar instability was responsible for the bars we see in a decent fraction of nearby galaxies (Sellwood & Wilkinson 1993). While understanding the absence of strong bars in most galaxies was a major headache for galaxy dynamicists (Ostriker & Peebles 1973; Toomre 1974), we took comfort from the fact that we thought we knew why some galaxies have bars. Now that we understand why galaxies are not barred, we can no longer claim to comprehend the origin of bars – our problem has inverted!
I have two reasons for doubting that bars, especially those in bright galaxies, could have formed through the usual global instability. The first, which has been evident for some time, is that many barred galaxies have strong inner Lindblad resonances (ILRs) and the second is the recent claim by Abraham et al. (1999; see also Merrifield, this volume) that bars were less common at $`z>0.5`$. The latter result is clearly still quite tentative but, even if it went away, the ILRs in bars are themselves ample evidence against straightforward instabilities.
## 2. Inner Lindblad resonances in bars
A number of lines of evidence all suggest that many bars possess ILRs. Recall that the gravitational stresses from a bar drive gas inwards towards the center until an ILR is encountered, where it piles up in a ring (e.g. Athanassoula 1992). Optical nuclear rings with diameters of a few hundred pc (Buta & Crocker 1993), often the sites of vigorous star formation, are seen in many, though not quite all, barred galaxies; a particularly beautiful case is the HST image of NGC 4314 (Benedict et al. 1998). Ring-like concentrations of molecular gas, often with “twin peaks” morphology, are now being observed with molecular interferometer arrays (e.g. Helfer & Blitz 1995; Turner 1996; Kenney 1997; Sakamoto et al. 1999). Further, Athanassoula (1992) argued that gas flow models could produce shocks at the positions of the offset dust lanes along the bar in many galaxies only if a strong ILR were present. Those barred galaxies for which the observed gas velocity field has been modelled all appear to have ILRs (Duval & Athanassoula 1983; Lindblad et al. 1996; Regan et al. 1997; Weiner et al. 1999). Finally, there is strong evidence for an ILR in the bar of the Milky Way (Binney et al. 1991; Weiner & Sellwood 1999).
### 2.1. Disc Stability
Much of this overwhelming body of evidence in favour of a central density high enough to ensure an ILR has been known for some time. Yet it did not seem to represent much more than a nagging worry because we did not fully understand how galaxy discs were stabilized. Now that we know a high central density really does stabilize a galaxy, this minor worry has suddenly become serious.
Toomre (1981) argued that a dense centre could prevent the bar instability by inserting an ILR to cut the feedback to the swing-amplifier. Only numerical simulations with reasonable particle numbers and good time and spatial resolution are able to reproduce the correct behaviour in the central regions and confirm this prediction. They have now established that galaxy models containing massive discs can be dynamically cool and yet not form bars (Sellwood 1985; Sellwood & Moore 1999; Sellwood 1999). Rubin et al. (1997) and Sofue et al. (1999) show that virtually all bright galaxies ($`V_{\mathrm{max}}\gtrsim 150\text{ km s}^{-1}`$) have dense centres – the reason for the stability of real galaxies is now clear.
If the mass distribution in barred galaxies today is such that it should have inhibited a bar from forming, why are these galaxies barred? We can dismiss two obvious ideas. If bars formed with much higher pattern speeds and have since slowed down (without getting longer, see below) then co-rotation would lie well beyond the end of the bar, which contradicts much of the evidence already cited as well as direct measurements (Merrifield & Kuijken 1995; Gerssen et al. 1999). Perhaps the mass distribution was originally more uniform, but enough gas has subsequently been driven into the centre to create the ILR. This idea seems physically reasonable since as little as 1–2% of the galaxy mass, together with the supporting response of the stars, is sufficient (Sellwood & Moore 1999). However, this same process weakens or destroys the bar, as has been argued by Norman and his co-workers (Hasan & Norman 1990; Pfenniger & Norman 1990) and reproduced in simulations (Friedli 1994; Norman et al. 1996).
## 3. Hubble Deep Fields
The study of the barred galaxy fraction as a function of redshift by Abraham et al. (1999; see also Merrifield, this volume) raises a further difficulty for the bar instability picture. They find very few strongly barred galaxies at $`z>0.5`$, suggesting that bars develop long after the discs of these galaxies are assembled. This result may suggest a gradual build-up of the disc until the rapid dynamical instability occurs. However, late infalling material probably contributes to the outer disc, and not to the central density (e.g. Simard et al. 1999), and so will have less effect on global stability and, furthermore, such an idea would not avoid the stabilizing effect of the observed dense centres.
## 4. Other Bar-Formation Mechanisms
The above discussion suggests that we should abandon the idea that bars are caused by the global dynamical instability. If this most obvious mechanism for bar formation is excluded, what are the alternatives?
One possibility is an encounter with another galaxy which triggers a bar (e.g. Noguchi 1987; Gerin et al. 1990; Mihos et al. 1997). There is some evidence for a higher barred fraction in dense environments (Elmegreen et al. 1990; Giuricin et al. 1993), suggesting that this does occur in practice. However, the idea is unattractive for two reasons: first, interactions were more common in the early universe, so the bar fraction should build up quickly, in contradiction to Abraham et al. Second, Miwa & Noguchi (1998) find that bars formed through tidal encounters generally have rather low pattern speeds, whereas most bars are believed to rotate rapidly, as noted above.
Lynden-Bell (1979) argued for a gradual secular bar growth through orbit trapping. However, his mechanism would again form bars having slow figure rotation, whereas almost all evidence points to rapid figure rotation.
I currently favour episodic growth, which I reported in some of my early simulations (Sellwood 1981). In this process, a short, weak bar can become longer and stronger through trapping of erstwhile disc particles into the bar; strong spiral patterns, which carry away angular momentum, can add many particles to the bar. It differs from Lynden-Bell’s mechanism because changes occur in $`\sim 1`$ orbital period and depend crucially on the phase of the spiral relative to the bar. After one such spiral pattern, the bar is significantly longer and slightly slower than before, but co-rotation remains just beyond the end of the bar. It should be noted, however, that all simulations so far in which I have witnessed this process have required an initial seed bar.
## 5. Conclusions
If bars were formed by the global bar instability, then (1) they should have formed early in a galaxy’s life and (2) they should not form when the centre is dense. Both predictions are inconsistent with observations, the second much more decisively, arguing strongly that bars were not formed in this manner.
Thus an alternative bar-forming mechanism is needed. I propose one such possibility, but the idea is not fully worked out. Ideally some observational test is needed that would be able to distinguish a bar formed through this, or any other secular process, from one formed through a global dynamical instability. It would also be desirable to be able to predict the distribution of bar strengths in galaxies today, although this may have to await substantial progress in our understanding of the late stages of galaxy formation.
#### Acknowledgments.
This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.
## References
Abraham, R. G., Merrifield, M. R., Ellis, R. S., Tanvir, N. & Brinchman, J. 1999, MNRAS, 308, 596
Athanassoula, E. 1992, MNRAS, 259, 345
Benedict, G. F., Howell, A., Jorgensen, I., Chapell, D., Kenney, J. & Smith, B. J. 1998, STSCI Press Release C98
Buta, R. & Crocker, D. A. 1993, AJ, 105, 1344
Binney, J., Gerhard, O., Stark, A., Bally, J. & Uchida, K. 1991, MNRAS, 252, 210
Duval, M. F. & Athanassoula, E. 1983, A&A, 121, 297
Elmegreen, D. M., Elmegreen, B. G. & Bellin, A. D. 1990, ApJ, 364, 415
Friedli, D. 1994, in Mass-Transfer Induced Activity in Galaxies, ed. I. Shlosman (Cambridge: Cambridge University Press) p 268
Gerin, M., Combes, F. & Athanassoula, E. 1990, A&A, 146, 268
Gerssen, J., Kuijken, K. & Merrifield, M. R. 1999, MNRAS, 306, 926
Giuricin, G., Mardirossian, F., Mezzetti, M. & Monaco, P. 1993, ApJ, 411, 13
Hasan, H. & Norman, C. 1990, ApJ, 361, 69
Helfer, T. T. & Blitz, L. 1995, ApJ, 450, 90
Kenney, J. 1997, in The Central Regions of the Galaxy and Galaxies, IAU Symp. 184, (to appear)
Lindblad, P. A. B., Lindblad, P. O. & Athanassoula, E. 1996, A&A, 313, 65
Lynden-Bell, D. 1979, MNRAS, 187, 101
Merrifield, M. R. & Kuijken, K. 1995, MNRAS, 274, 933
Mihos, J. C., McGaugh, S. S. & de Blok, W. J. G. 1997, ApJ, 477, 79
Miwa, T. & Noguchi, M. 1998, ApJ, 499, 149
Norman, C. A., Sellwood, J. A. & Hasan, H. 1996, ApJ, 462, 114
Noguchi, M. 1987, MNRAS, 228, 635
Ostriker, J. P. & Peebles, P. J. E. 1973, ApJ, 186, 467
Pfenniger, D. & Norman, C. 1990, ApJ, 363, 391
Regan, M. W., Vogel, S. N. & Teuben, P. J. 1997, ApJ, 482, 143
Rubin, V. C., Kenney, J. D. P. & Young, J. S. 1997, AJ, 113, 1250
Sakamoto, K., Okamura, S. K., Ishizuki, S. & Scoville, N. Z. 1999, astro-ph/9906454
Sellwood, J. A. 1981, A&A, 99, 362
Sellwood, J. A. 1985, MNRAS, 217, 127
Sellwood, J. A. 1999, in Galaxy Dynamics – A Rutgers Symposium, eds. D. Merritt, J. A. Sellwood & M. Valluri (San Francisco: ASP) 182, p 351
Sellwood, J. A. & Moore, E. M. 1999, ApJ, 510, 125
Sellwood, J. A. & Wilkinson, A. 1993, Rep. Prog. Phys., 56, 173
Simard, L., et al. 1999, astro-ph/9902147
Sofue, Y. et al. 1999, ApJ, in press (astro-ph/9905056)
Toomre, A. 1974, in Highlights of Astronomy, 3, ed. G. Contopoulos (Dordrecht: Reidel) p 457
Toomre, A. 1981, in Structure and Evolution of Normal Galaxies, eds. S. M. Fall & D. Lynden-Bell (Cambridge: Cambridge University Press) p 111
Turner, J. L. 1996, in Barred Galaxies, IAU Colloq. 157, eds. R. Buta, B. G. Elmegreen, D. A. Crocker (San Francisco: ASP), 91, p 143
Weiner, B. & Sellwood, J. A. 1999, ApJ, 524
Weiner, B., Sellwood, J. A., Williams, T. B. & van Gorkom, J. 1999, ApJ, (submitted)
# Constrained randomization of time series data
## Abstract
A new method is introduced to create artificial time sequences that fulfil given constraints but are random otherwise. Constraints are usually derived from a measured signal for which surrogate data are to be generated. They are fulfilled by minimizing a suitable cost function using simulated annealing. A wide variety of structures can be imposed on the surrogate series, including multivariate, nonlinear, and nonstationary properties. When the linear correlation structure is to be preserved, the new approach avoids certain artifacts generated by Fourier-based randomization schemes. PACS: 05.45.+b
Randomization of data and Monte Carlo resampling of probability distributions is a common technique in statistics . In the context of nonlinear time series analysis it has been discussed by several authors and is usually referred to as the method of surrogate data . A null hypothesis for the nature of a time series can be tested by comparing the value of an observable $`\gamma `$ obtained using the data with values obtained using a collection of surrogate time series representing the null hypothesis. All but the simplest null assumptions allow for certain structures, for example linear serial correlations. There are two distinct ways to implement such structures when creating surrogate series. Traditional bootstrap methods use explicit model equations that have to be extracted from the data. This typical realizations approach can be very powerful for the computation of confidence intervals, provided the model equations can be extracted successfully. As discussed in Ref. , the alternative approach of constrained realizations is more suitable for the purpose of hypothesis testing. It avoids the fitting of model equations by directly imposing the desired structures onto the randomized time series. However, the choice of possible null hypothesis has so far been limited by the difficulty of imposing arbitrary structures on otherwise random sequences. Algorithms exist mainly for the following cases. (1) The null hypothesis of independent random numbers from a fixed but unknown distribution can be tested against permutations without repetition of the data since these conserve the sample distribution exactly. (2) The case of Gaussian noise with arbitrary linear correlations leads to the Fourier transform method. The Fourier transform of the data is multiplied by random phases and then transformed back, conserving the sample periodogram. (See Ref. for the multivariate case.) (3) Surrogates with a given distribution and given linear correlations are needed for the null hypothesis of a monotonically rescaled Gaussian linear stochastic process. This is approximately achieved by the amplitude adjusted Fourier transform (AAFT) algorithm and the more accurate iterative method proposed in Ref. .
This paper will introduce a general method for generating random time sequences subject to quite general constraints. Any null hypothesis that leads to a complete set of observables can thus be tested for. All the above cases can be dealt with (often with higher accuracy), but also multivariate, nonstationary, nonlinear or other constraints can be implemented. In all the applications in this paper, the single time probability distribution will be one of the constraints, leading to the requirement that the randomized sequence is a permutation of a fixed collection of values. All other constraints, for example part or all of the lags of the autocorrelation function, will be formulated in terms of a cost function which is then minimized among all possible permutations by the method of simulated annealing.
After giving the actual randomization scheme I will discuss some major applications. We will show that the algorithm yields a more accurate nonlinearity test and avoids known artifacts that are introduced by end effects with ordinary, Fourier-based surrogates. We will also give examples with more general null hypotheses than that of a rescaled stationary linear stochastic process. For these examples, previous methods could not provide appropriate surrogates.
The algorithm is conceptually very simple:
1. Specify constraints $`𝒞_i(\{\stackrel{~}{x}_n\})=0`$ in terms of a cost function $`E(\{\stackrel{~}{x}_n\})`$, constructed to have a global minimum when the constraint is fulfilled.
2. Minimize $`E(\{\stackrel{~}{x}_n\})`$ among all permutations $`\{\stackrel{~}{x}_n\}`$ of a time series $`\{x_n\}`$ by simulated annealing. Configurations are updated by exchanging pairs in $`\{\stackrel{~}{x}_n\}`$.
Examples of its use will be given below.
The simulated annealing method is particularly useful for combinatorial minimization with false minima. It goes back to Metropolis et al. , and is thoroughly discussed in the literature . Essentially, the cost function is interpreted as an energy in a thermodynamic system. At some finite “temperature” $`T`$, system configurations are visited consecutively with a probability according to the Boltzmann distribution $`e^{E/T}`$ of the canonical ensemble. This is achieved by accepting changes of the configuration with a probability $`p=1`$ if the energy is decreased $`(\mathrm{\Delta }E<0)`$ and $`p=e^{\mathrm{\Delta }E/T}`$ if the energy is increased, $`(\mathrm{\Delta }E0)`$. The temperature is decreased slowly, thereby “annealing” the system to the ground state of minimal “energy”, that is, the minimum of the cost function. In the limit $`T0`$, all ground state configurations can be reached with equal probability. Although some general rigorous convergence results are available, in practical applications of simulated annealing some problem-specific choices have to be made. In particular, apart from the cost function itself, one has to specify a method of updating the configurations and a schedule for lowering the temperature. A way to efficiently reach all permutations by small individual changes is by exchanging randomly chosen (not necessarily close-by) pairs. In many cases, an exchange of two points is reflected in a rather simple update of the cost function. This is important for speed of computation. Many cooling schemes have been discussed in the literature . In this work, the temperature is multiplied by a factor $`\alpha <1`$ at each cooling step. Cooling is done if either the number of successful updates since the last cooling exceeds $`N_{\mathrm{succ}}`$, or the total number of configurations visited during this cooling step exceeds $`N_{\mathrm{total}}`$. It is difficult to give general rules on how to choose $`\alpha ,N_{\mathrm{succ}},`$ and $`N_{\mathrm{total}}`$. Slow cooling is necessary if the desired accuracy of the constraint is high. It seems reasonable to increase $`N_{\mathrm{succ}}`$ and $`N_{\mathrm{total}}`$ with the system size, but also with the number of constraints incorporated in the cost function. Generally, one can choose a tolerance for the constraints, start with rather fast cooling and repeat the analysis with a slower cooling rate if the accuracy has not been met. Other more sophisticated cooling schemes may be suitable depending on the specific situation. The reader is referred to the standard literature .
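As a concrete illustration of steps 1 and 2, here is a minimal Python sketch for the case where the constraint is the sample auto-covariance of Eq. (1) below, with an $`E^{(\mathrm{\infty })}`$-type cost. The function names, the recomputation of the full auto-covariance at every step, and the simple stopping criterion are all simplifying assumptions chosen for readability; a faster incremental update is sketched after Eq. (2).

```python
import numpy as np

def autocov(x, lags):
    """Sample auto-covariance C(tau) of Eq. (1), tau = 0..lags (zero mean assumed)."""
    N = len(x)
    return np.array([np.dot(x[tau:], x[:N - tau]) / (N - tau)
                     for tau in range(lags + 1)])

def anneal_surrogate(x, lags=50, T=1.0, alpha=0.9, n_succ=1000, n_total=10000,
                     tol=1e-4, seed=0):
    """Constrained randomization by pairwise swaps; cost = max_tau |C - C_data|."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    c_goal = autocov(x, lags)
    s = rng.permutation(x)                       # start from a random permutation
    E = np.max(np.abs(autocov(s, lags) - c_goal))
    while E > tol:                               # in practice also cap cooling steps
        succ = total = 0
        while succ < n_succ and total < n_total:
            i, j = rng.choice(len(s), size=2, replace=False)
            s[i], s[j] = s[j], s[i]              # trial swap of a random pair
            E_new = np.max(np.abs(autocov(s, lags) - c_goal))
            if E_new < E or rng.random() < np.exp((E - E_new) / T):
                E, succ = E_new, succ + 1        # accept (Metropolis rule)
            else:
                s[i], s[j] = s[j], s[i]          # reject: undo the swap
            total += 1
        T *= alpha                               # cooling step
    return s
```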
Let us first demonstrate that the algorithm yields more accurate results than previous methods for the most prominent application of surrogate data, which is statistical testing for nonlinearity in a time series. Consider the null hypothesis that there is a sequence $`\{y_n\}`$ that has been generated by a Gaussian linear stochastic process. As the only allowed kind of nonlinearity, the actual data $`\{x_n\}`$ consists of observations of $`\{y_n\}`$ made through a monotone instantaneous measurement function: $`x_n=f(y_n)`$. As discussed e.g. in Ref. , the corresponding Monte Carlo sample has to be constrained to have (i) the same single time probability distribution and (ii) the same sample auto-covariance function
$$C(\tau )=\frac{1}{N-\tau }\sum _{n=\tau }^{N-1}x_nx_{n-\tau }$$
(1)
for all lags $`\tau =0,\mathrm{\dots },N-1`$. (Zero mean has been imposed for simplicity of notation.)
In the actual test, a nonlinear observable $`\gamma `$ is computed for the data and a collection of surrogate data sets. (See Ref. for a comparison of the performance of different statistics $`\gamma `$.) The null hypothesis will be rejected if the result $`\gamma _0`$ obtained for the data is incompatible with the probability distribution of $`\gamma `$ estimated from the surrogates. Note that although even different realizations of the same process will have the same sample auto-covariance function only up to statistical fluctuations, it is essential that the surrogates are constrained to $`C(\tau )^{(\mathrm{data})}`$ as accurately as possible–since almost every discriminating statistic $`\gamma `$ will depend on $`C(\tau )`$, we are otherwise likely to introduce a bias and possibly spurious rejections of the null hypothesis. See also the discussion in Ref. .
Previous attempts to implement the above constraints have only been partially successful. In the scheme introduced here, property (i) is easily implemented by considering as candidates for randomized series all permutations of the measured time sequence $`\{y_n\}`$. Requirement (ii) can be achieved by finding a permutation of $`\{y_n\}`$ which, within the desired accuracy, minimizes a cost function like the following :
$$E^{(q)}=\left[\sum _{\tau =0}^{N-1}|C(\tau )-C^{(\mathrm{data})}(\tau )|^q\right]^{1/q}.$$
(2)
Provided the annealing scheme is brought to convergence with high accuracy, the known artifacts that remain with previous approaches can be avoided.
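The remark above that a pair exchange leads to a simple update of the cost function can be made concrete. The following sketch computes the change in $`C(\tau )`$ for a proposed swap in $`O(\mathrm{lags})`$ operations rather than $`O(N\cdot \mathrm{lags})`$; it assumes zero-mean data as in Eq. (1), and the interface is only one possible choice.

```python
import numpy as np

def delta_autocov(s, i, j, lags):
    """Change in C(tau), tau = 1..lags, if s[i] and s[j] were swapped.

    A product x_n * x_{n-tau} is affected only when n or n-tau equals i or j,
    so at most four products change per lag; C(0) is permutation-invariant."""
    N = len(s)
    d = np.zeros(lags + 1)
    for tau in range(1, lags + 1):
        pairs = {(i, i - tau), (i + tau, i), (j, j - tau), (j + tau, j)}
        pairs = [(n, m) for n, m in pairs if m >= 0 and n < N]
        old = sum(s[n] * s[m] for n, m in pairs)
        s[i], s[j] = s[j], s[i]                  # temporarily apply the swap
        new = sum(s[n] * s[m] for n, m in pairs)
        s[i], s[j] = s[j], s[i]                  # undo: this is a trial evaluation
        d[tau] = (new - old) / (N - tau)
    return d
```

An annealing loop can then keep a running copy of $`C(\tau )`$ and add this difference whenever a swap is accepted, which is what makes large $`N`$ and slow cooling schedules affordable.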
As discussed in , the original (AAFT) algorithm can show a bias towards a flat spectrum for short sequences. The iterative scheme proposed in removes this bias to a satisfactory approximation for practical work. Let us however compare the accuracy of the previously proposed schemes to the present algorithm. For comparability, a cost function is chosen with respect to the time periodic sample auto-covariance function
$$C_p(\tau )=\frac{1}{N}\sum _{n=0}^{N-1}x_nx_{(n-\tau )\mathrm{mod}N}.$$
(3)
which corresponds to the Fourier spectrum through the Wiener-Khinchin theorem. Minimizing $`E_p^{(\mathrm{\infty })}=\mathrm{max}_{\tau =0}^{N/2}|C_p(\tau )-C_p^{(\mathrm{data})}(\tau )|`$ will reproduce the auto-covariance $`C_p^{(\mathrm{data})}`$ measured on the data. Time series of length $`N=1000`$ are generated by an autoregressive model but measured using a nonlinear measurement function: $`x_n=y_n^3`$, $`y_n=0.9y_{n-1}+\eta _n`$. The residual maximal deviations of the auto-covariances of the time series and surrogate sets were determined for (i) random permutations of the data, (ii) usual AAFT surrogates, (iii) surrogates created with the iterative scheme given in Ref. and (iv) outcomes of the annealing procedure for different cooling protocols. Note that with slower cooling, arbitrarily high accuracy can be reached in principle. Averages over 20 realizations were determined for cases (i) to (iii). The iterative scheme (iii) was repeated until a fixed point was reached, which was the case after about 200 iterations. Table I summarizes the results. Computation times on a DEC alpha workstation at 400MHz clock rate are given only for relative comparison. The price for the superior accuracy of the annealing scheme is its much higher computational cost.
As mentioned earlier, all previous randomization schemes make use of the Fourier transform in order to achieve the desired linear correlation structure. Note, however, that two sequences with the same Fourier amplitudes do not quite have the same auto-covariance function $`C(\tau )`$, eq. (1). The Wiener-Khinchin theorem only says that the periodic sample auto-covariance function $`C_p(\tau )`$, eq. (3), will be the same. This amounts to assuming that the measured time series is exactly one period of an infinite periodic signal, which is of course not what we believe to be the case. The artifact generated by this flaw of previous algorithms has been discussed e.g. in Ref. . The periodically extended sequence may undergo a phase slip or even a finite jump at $`n=N`$. The surrogate series will have the power contained in that slip spread out over the whole observation time, leading to additional high frequency content. Although spurious results can be partially suppressed by selecting a segment of the data that approximately returns to the initial value, it is desirable to preserve the auto-covariance function $`C(\tau )`$ in eq. (1) rather than $`C_p(\tau )`$ in eq. (3). With the annealing scheme proposed in this paper, this can be easily done by choosing an appropriate cost function.
As an illustration, consider a particular autoregressive process of order two, $`x_n=1.3x_{n-1}-0.31x_{n-2}+\eta _n`$. Since it is almost unstable, short realizations often show a large difference between the first and the last point. Periodic continuation turns this difference into a large step with broad frequency content. For a realization of 160 points we found that for a Fourier-based surrogate (method in Ref. , same periodic auto-covariance function $`C_p(\tau )`$), the sample autocorrelation $`C(1)/C(0)`$ was reduced from 0.92 to 0.85. Consequently, the power in the first differences is increased by a factor of two and short term predictability is strongly reduced. This can lead to spurious rejections of the null hypothesis of a linear process. A sequence obtained by minimizing $`E^{(\mathrm{\infty })}=\mathrm{max}_{\tau =0}^{N-1}|C(\tau )-C^{(\mathrm{data})}(\tau )|/\tau `$ yielded the correct value of $`C(1)`$ within $`2\times 10^{-4}`$.
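For readers who want to reproduce this setup, the following lines generate such a nearly unstable AR(2) realization (unit-variance Gaussian noise and the seed are assumptions; the paper does not specify them) and display the end-point mismatch that periodic continuation turns into a step.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar2(n, a1, a2, burn=500):
    """Realization of x_n = a1*x_{n-1} + a2*x_{n-2} + eta_n, discarding a transient."""
    x = np.zeros(n + burn)
    eta = rng.standard_normal(n + burn)
    for k in range(2, n + burn):
        x[k] = a1 * x[k - 1] + a2 * x[k - 2] + eta[k]
    return x[burn:]

x = ar2(160, 1.3, -0.31)       # characteristic roots ~0.99 and ~0.32: almost unstable
print(x[0] - x[-1])            # typically large: the source of the spurious step
```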
Apart from its potential for greater accuracy, the most striking feature of the new scheme is its generality and flexibility. This point will be demonstrated in the following examples which are by no means exhaustive. Note that none of the examples below could be studied with previous surrogate data schemes. Let us first study a multivariate example, a simultaneous recording of the breath rate and the instantaneous heart rate of a human subject during sleep. (Data set B of the Santa Fe Institute time series contest in 1991, samples 1800–4350.) Regarding the heart rate recording on its own, one easily detects nonlinearity, in particular an asymmetry under time reversal. An interesting question however is, how much of this structure can be explained by linear dependence on the breath rate, the breath rate also being non-time-reversible. In order to answer this question, one has to make surrogates that have the same autocorrelation structure but also the same cross-correlation with respect to the fixed input signal, the breath rate. (Here the breath rate data is not randomized, which is of course also possible within this framework.) Accordingly, a constraint is formulated involving all lags up to 500 of the auto-covariance and the cross-covariance ($`C_{xy}`$) functions. The cost function is taken to be $`\mathrm{max}_{\tau =0}^{500}|C(\tau )-C^{(\mathrm{data})}(\tau )|/\tau +\mathrm{max}_{\tau =-500}^{500}|C_{xy}(\tau )-C_{xy}^{(\mathrm{data})}(\tau )|/(|\tau |+1)`$, other choices are possible. Further suppose that during one minute the equipment spuriously recorded a constant value. In order not to interpret this artifact as structure, the same artifact is generated in the surrogates, simply by excluding these data points from the permutation scheme.
Figure 1 shows the measured breath rate (upper trace) and instantaneous heart rate (middle trace). The lower trace shows a surrogate conserving both auto- and cross-correlations. The cooling rate was $`\alpha =0.95`$, $`N_{\mathrm{succ}}=10000`$, $`N_{\mathrm{total}}=3\times 10^5`$. None of the auto- and cross-covariances differed from the goal by more than $`5\times 10^{-4}`$ in units of the variance of the data after 3h of annealing. (DEC alpha workstation at 400MHz clock rate.) The visual impression from Fig. 1 is that while the linear cross-correlation with the breath rate explains the cyclic structure of the heart rate data, other features remain unexplained. In particular, the surrogates don’t show the asymmetry under time reversal seen in the data. Possible explanations of the remaining structure include artifacts due to the peculiar way of deriving heart rate from inter-beat intervals, nonlinear coupling to the breath activity, nonlinearity in the cardiac system, and others.
Let us finally give a nonstationary example, an AR(2) process with periodically modulated variance: $`x_n=1.6x_{n-1}-0.8x_{n-2}+b_n\eta _n`$ with $`b_n=1+\mathrm{sin}^2(2\pi n/1000)`$. In Fig. 2 a realization ($`N=2000`$) is shown together with two surrogate series. The first (middle trace) has been generated by the AAFT algorithm, the second (lower trace) has been generated by the annealing scheme to preserve the first 100 lags of $`C(\tau )`$ but also the running variance in blocks of length 200, overlapping by 100.
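The quoted process is easy to generate; here is a short sketch (seed and noise distribution are again arbitrary choices) that also evaluates the running variance used as a constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
b = 1 + np.sin(2 * np.pi * np.arange(N) / 1000) ** 2    # modulation b_n
x = np.zeros(N)
for n in range(2, N):
    x[n] = 1.6 * x[n - 1] - 0.8 * x[n - 2] + b[n] * rng.standard_normal()

# running variance in blocks of length 200, overlapping by 100
blocks = [x[k:k + 200] for k in range(0, N - 199, 100)]
run_var = np.array([blk.var() for blk in blocks])
```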
In this paper it has been demonstrated that randomization under a wide variety of constraints can be achieved with a permutation scheme that minimizes a suitable cost function using simulated annealing. The approach is very general. Constraints are not restricted to linear correlations. Multivariate, nonlinear, but also time dependent, nonstationary properties can be easily implemented. A wider range of examples will be studied elsewhere .
Resampling with constraints is the method of choice for hypothesis testing, where it is preferable to parametric bootstrap methods. Although a general, nonparametric resampling scheme has been introduced in this paper, care has to be taken when similar ideas are to be exploited for the determination of error bounds. The variance of statistical estimators usually depends on the constraints imposed. To which extent reliable error distributions can be obtained by selecting a minimal set of constraints and using resampling with replacement will be a subject of future work.
I thank James Theiler, Daniel Kaplan, Thomas Schürmann, Holger Kantz, Rainer Hegger, and Eckehard Olbrich for stimulating discussions, and the Max Planck Institute for Physics of Complex Systems in Dresden for kind hospitality. This work was supported by the SFB 237 of the Deutsche Forschungsgemeinschaft.
# How well do domain wall fermions realize chiral symmetry?
## 1 Introduction
Lattice QCD with massless domain wall fermions (including fermion loop effects) is expected to have the $`\mathrm{SU}_L(N_f)\times \mathrm{SU}_R(N_f)`$ chiral symmetry of the continuum when the extent of the extra dimension, $`L_s`$, becomes infinite. For simulations, where the volume is finite and particles are not strictly massless, reliable techniques are needed to quantify the symmetry breaking for finite $`L_s`$. Such techniques are needed to see the expected $`\mathrm{exp}(-\alpha L_s)`$ dependence of chiral breaking for full QCD and determine if this is also the case for the quenched theory.
Here we report results from two techniques for measuring chiral symmetry breaking due to finite $`L_s`$; the first uses the pion mass and the second the axial Ward identity. At zero temperature, the pion mass is governed by the axial Ward identity. However, in simulations with finite volume and with finite quark masses, it is important to check the agreement between these approaches.
The axial Ward identity is the origin of the Gell-Mann-Oakes-Renner (GMOR) relation, discussed previously for domain wall fermions in . The fermion action of is used, with the modifications of . Some details on the numerical methods are in . See for a general review on domain wall fermions and references.
## 2 $`m_{\mathrm{res}}`$ for domain wall fermions
If the dominant effect of finite $`L_s`$ is to produce an extra contribution, $`m_{\mathrm{res}}`$, to the total quark mass, then one would expect
$$m_\pi ^2=c_0(V)+c_1(V)(m_f+m_{\mathrm{res}})+\cdots $$
(1)
where $`V`$ is the space-time volume and it is expected that $`c_0(V)\to 0`$ as $`V\to \mathrm{\infty }`$. For finite volume, a result we call $`m_{\mathrm{res}}^{(m_\pi ^2)}`$ can be found from $`m_\pi ^2(m_f=0)/c_1(V)`$, which is $`m_{\mathrm{res}}`$ when $`c_0(V)=0`$.
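In practice this is a straight-line fit. The following sketch shows the arithmetic with hypothetical numbers standing in for measured $`m_\pi ^2`$ values; the real analysis uses correlated fits with jackknife errors, which are omitted here.

```python
import numpy as np

# hypothetical (m_f, m_pi^2) valence data at fixed L_s -- placeholder values only
m_f   = np.array([0.02, 0.06, 0.10])
m_pi2 = np.array([0.35, 0.62, 0.88])

c1, intercept = np.polyfit(m_f, m_pi2, 1)   # m_pi^2 = intercept + c1 * m_f
m_res_mpi2 = intercept / c1                 # = m_pi^2(m_f=0)/c1, assuming c_0(V)=0
print(m_res_mpi2)
```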
Using the flavor non-singlet axial current in we integrate the axial Ward–Takahashi identity to get
$$\langle \overline{q}_0q_0\rangle =m_f\chi _\pi +\mathrm{\Delta }J_5$$
(2)
where the pseudoscalar susceptibility is (no sum on $`a`$)
$$\chi _\pi \equiv 2\sum _x\left\langle \overline{q}_x\gamma _5\frac{\lambda ^a}{2}q_x\,\overline{q}_0\gamma _5\frac{\lambda ^a}{2}q_0\right\rangle ,$$
(3)
the additional contribution from chiral mixing due to finite $`L_s`$ is
$$\mathrm{\Delta }J_5\equiv 2\sum _x\left\langle j_5^a(x,L_s/2)\,\overline{q}_0\gamma _5\frac{\lambda ^a}{2}q_0\right\rangle ,$$
(4)
and $`q_x`$ are four-dimensional fermion fields defined from the appropriate right- and left-handed fields at the boundaries of the extra dimension.
For large volumes in the chirally broken phase, the pseudoscalar susceptibility is expected to behave as
$$\chi _\pi =a_1/(m_f+m_{\mathrm{res}})+a_0+𝒪(m_f+m_{\mathrm{res}}).$$
(5)
The first term again says that, for large volumes, the pion is massless at $`m_f=m_{\mathrm{res}}`$, while $`a_0`$ gives the contribution due to the massive modes. Clearly, the pion pole contribution only dominates for large enough volumes and small enough $`m_f+m_{\mathrm{res}}`$.
$`j_5^a(x,L_s/2)`$, a pseudoscalar density located midway between the domain walls, also has a pole contribution, whose coefficient is suppressed by propagation from $`L_s/2`$ to the boundaries. Since $`\chi _\pi `$ and $`\mathrm{\Delta }J_5`$ both have a pole at $`m_f=m_{\mathrm{res}}`$ and when the pole terms dominate (2) $`\overline{q}_0q_0`$ is finite, we can write in general
$$\mathrm{\Delta }J_5=m_{\mathrm{res}}\chi _\pi +b_0+𝒪(m_f+m_{\mathrm{res}}).$$
(6)
We define $`m_{\mathrm{res}}^{(\mathrm{GMOR})}`$ by simultaneously fitting to the form
$$\langle \overline{q}_0q_0\rangle =(m_f+m_{\mathrm{res}})\chi _\pi +b_0.$$
(7)
and $`\chi _\pi `$ as given in (5). For a given $`L_s`$, this is a four parameter fit for $`a_1,a_0,b_0`$ and $`m_{\mathrm{res}}^{(GMOR)}`$. Note that only if $`b_0/\chi _\pi `$ is small can we get a reliable estimate for $`m_{\mathrm{res}}^{(GMOR)}`$ from $`\langle \overline{q}_0q_0\rangle /\chi _\pi -m_f`$. For full QCD, both $`m_{\mathrm{res}}^{(GMOR)}`$ and $`b_0`$ should approach zero exponentially in $`L_s`$, since both involve propagation from $`L_s/2`$ to the walls.
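A minimal version of this simultaneous four-parameter fit might look as follows; the data arrays are hypothetical placeholders, the fit is uncorrelated, and the $`𝒪(m_f+m_{\mathrm{res}})`$ terms are dropped, so this only illustrates the structure of the procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, m_f, chi_pi, qbarq):
    a1, a0, b0, m_res = p
    r1 = chi_pi - (a1 / (m_f + m_res) + a0)        # Eq. (5), leading terms only
    r2 = qbarq - ((m_f + m_res) * chi_pi + b0)     # Eq. (7)
    return np.concatenate([r1, r2])

# hypothetical measurements at one value of L_s
m_f    = np.array([0.02, 0.06, 0.10, 0.14])
chi_pi = np.array([1.90, 0.95, 0.68, 0.55])
qbarq  = np.array([0.060, 0.073, 0.086, 0.099])

fit = least_squares(residuals, x0=[0.05, 0.10, 0.003, 0.05],
                    args=(m_f, chi_pi, qbarq))
a1, a0, b0, m_res_gmor = fit.x
```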
## 3 $`m_{\mathrm{res}}`$ in quenched QCD
We first find $`m_{\mathrm{res}}^{(m_\pi ^2)}`$ for the $`\beta =5.7`$, $`m_0=1.65`$, $`8^3\times 32`$ quenched domain wall spectrum study we reported last year. For quenched QCD, the observed zero mode contribution to $`\langle \overline{q}_0q_0\rangle `$ is small for $`m_f\ge 0.02`$, so we restrict our attention to this mass range here. Figure 1 shows $`m_{\mathrm{res}}^{(m_\pi ^2)}`$ for our $`\beta =5.7`$, $`m_0=1.65`$, $`8^3\times 32`$ simulations from a correlated, linear fit of $`m_\pi ^2`$ to valence masses 0.02, 0.06, 0.10, with errors from jackknifing. The $`L_s=32`$ and 48 values are the same within errors, making the large $`L_s`$ limit seem non-zero. For $`L_s=24`$ the result for a $`16^3\times 32`$ lattice is also shown, revealing that finite volume effects are noticeable.
Figure 1 also shows $`m_{\mathrm{res}}^{(GMOR)}`$ from a correlated fit to (7) and (5). The $`16^3`$ GMOR point is on top of the $`8^3`$ point in the plot. The fits $`m_{\mathrm{res}}=0.059(14)\mathrm{exp}[-0.052(10)L_s]`$ and $`b_0=0.0035(11)\mathrm{exp}[-0.051(13)L_s]`$ (not shown) are consistent with $`\mathrm{\Delta }J_5`$ vanishing in the $`L_s\to \mathrm{\infty }`$ limit.
The agreement between the two methods for $`L_s\le 32`$ is reasonable and only occurs since $`b_0`$ is included in the fits. $`m_{\mathrm{res}}^{(GMOR)}`$ is volume independent for $`L_s=24`$, while $`m_{\mathrm{res}}^{(m_\pi ^2)}`$ is not. The discrepancy for $`L_s=48`$ may be due to finite volume, but needs further study.
## 4 $`m_{\mathrm{res}}`$ in $`N_f=2`$ QCD
For full QCD, we have done extensive simulations with the Wilson gauge action and domain wall fermions on $`8^3\times 4`$ volumes for several values of $`L_s`$ with $`\beta =5.2`$, $`m_0=1.9`$ and $`m_f=0.02`$. For these lattices, which are in the low temperature phase, we show $`\langle \overline{q}q\rangle /\chi _\pi `$ in Figure 2. An exponential fit, yielding $`\langle \overline{q}q\rangle /\chi _\pi =0.02+0.082(3)\mathrm{exp}[-0.027(2)L_s]`$ for $`16\le L_s\le 40`$ with $`\chi ^2/N_{\mathrm{dof}}=2.76/2`$, is also shown.
We have also done uncorrelated fits, using valence masses between 0.02 and 0.14, to extract $`m_{\mathrm{res}}^{(GMOR)}`$ and $`b_0`$, since for the dynamical simulations there is not enough data to resolve the covariance matrix. (For the quenched case there was little difference between the correlated and uncorrelated fits.) All fits have $`N_{\mathrm{dof}}=4`$ and $`\chi ^2/N_{\mathrm{dof}}\lesssim 1`$. $`m_{\mathrm{res}}^{(\mathrm{GMOR})}`$ is also shown in Figure 2 and the dashed line fit is $`m_{\mathrm{res}}^{(\mathrm{GMOR})}=0.17(2)\mathrm{exp}[-0.026(6)L_s]`$ for $`10\le L_s\le 40`$ with $`\chi ^2/N_{\mathrm{dof}}=0.35/4`$. $`b_0`$ is not shown, but also fits the exponential form $`b_0=0.0100(16)\mathrm{exp}[-0.0147(67)L_s]`$ with $`\chi ^2/N_{\mathrm{dof}}=0.20/4`$ over the same range in $`L_s`$.
In Table 1 we compare the two methods of extracting $`m_{\mathrm{res}}`$ using valence spectrum data from $`N_f=2`$, $`8^3\times 32`$ scale setting calculations. All data was fit for $`0.02\le m_f^{(\mathrm{val})}\le 0.1`$. For the GMOR fit, $`N_{\mathrm{dof}}=2`$ and $`\chi ^2/N_{\mathrm{dof}}\lesssim 1`$ for all fits. We note that the two methods agree within statistics.
We can also calculate $`m_{\mathrm{res}}`$ both ways but with data as a function of the dynamical mass. Since there are only two dynamical masses, both methods are unconstrained so the errors quoted come from naive extrapolation. The results are summarized in Table 2.
## 5 Conclusions
We have gotten good agreement between $`m_{\mathrm{res}}^{(m_\pi ^2)}`$ and $`m_{\mathrm{res}}^{(GMOR)}`$ for a wide range of quenched and dynamical simulations by including the non-pole terms in the susceptibilities. From our current data, $`m_{\mathrm{res}}^{(GMOR)}`$ appears less volume dependent. For the quenched simulations at $`L_s=48`$, the two methods do not agree, possibly as a result of finite volume effects. This case warrants further study.
Whether chiral symmetry is fully restored for quenched simulations in the $`L_s\to \mathrm{\infty }`$ limit of domain wall fermions is still an open question. For $`N_f=2`$ QCD, $`m_{\mathrm{res}}^{(GMOR)}`$ falls exponentially, even at quite strong coupling. The rate of chiral symmetry restoration is very slow, leaving much room for improvement.
All numerical calculations were performed on the 400 Gflops QCDSP machine at Columbia and the 6 Gflops QCDSP machine at Ohio State.
# Irreducibility of some quantum representations of mapping class groups
## 1. Introduction
The Witten-Reshetikhin-Turaev topological quantum field theories (see Reshetikhin and Turaev \[RT\] or Turaev’s book \[T\]) provide many interesting finite-dimensional representations of mapping class groups of surfaces, about which little is currently known. In this paper we will consider only the representations coming from the $`SU(2)`$ theory, which may be defined and studied via the Kauffman bracket skein theory (see Lickorish \[L\], Roberts \[R1\]). Calculations using this approach are easier and more concrete than in the more general cases (where one has to work more explicitly with quantum groups) but typically provide insight into the general cases, which can be worked out along the same lines (as for example with the integrality results of Masbaum-Roberts \[MR\] and then Masbaum-Wenzl \[MW\]).
Recent papers by Funar \[Fun\] and Masbaum \[M\] studied the question of whether the image of the mapping class group of a closed surface under such a representation (at an $`r`$th root of unity) was infinite or not. Here, another aspect will be considered: are the representations irreducible? I was asked this question by Ivan Smith, who was interested in the geometric quantization approach to the representations, and found it difficult to answer using such an algebro-geometric approach. Surprisingly, even the TQFT literature does not seem to provide the answer. The purpose of this note, therefore, is to make a start by answering the question at least in some cases.
###### Theorem.
Let $`r\ge 3`$ be prime. Then the $`SU(2)`$ TQFT representation of the mapping class group of a closed surface of genus $`g`$, at an $`r`$th root of unity, is irreducible.
Unfortunately the proof which will be explained below does not seem to generalise in a straightforward way (unlike, say, the methods of \[R2\]) to either non-prime $`r`$ or to higher rank quantum groups. Neither is it completely clear how to extend it to the case of surfaces with punctures.
## 2. Skein theory preliminaries
The proof of the theorem is not very long or complicated, so this explanation of the background will be kept equally brief. The main purpose is simply to fix the notation. The paper by Lickorish \[L\] and the book of Kauffman and Lins \[KL\] contain full explanations of how one uses skein theory to build $`3`$-manifold invariants, whilst \[BHMV, R3\] explain how to develop a full TQFT using the same principles.
Fix $`r\ge 3`$, the integer ‘level’, and let $`A=e^{2\pi i/4r}`$. The symbol $`𝒮M`$ denotes the Kauffman skein space (a complex vector space defined using the parameter $`A`$) of a compact oriented 3-manifold $`M`$. It is the vector space generated by isotopy classes (rel boundary) of framed links inside $`M`$, modulo the usual local Kauffman bracket relations.
The Jones-Wenzl idempotents $`f^{(a)}`$ of the Temperley-Lieb algebras (skein spaces of a cylinder with $`a`$ points at each end) are defined for $`a=0,1,\mathrm{\dots },r-1`$. Inside any skein space $`𝒮M`$ one can consider the subspace spanned by elements consisting of $`f^{(r-1)}`$ in a small cylinder with the ends connected up in any way. Factoring out by this subspace gives the reduced skein space $`M`$. It is a fact (see Roberts \[R3\]) that the reduced skein space of any $`3`$-manifold depends only on its boundary, and gives a model for the Witten-Reshetikhin-Turaev vector space of this boundary.
In particular if $`H`$ is a handlebody and $`\mathrm{\Sigma }`$ its boundary then the reduced skein space $`H`$ is identified with the W-R-T space usually written (for example in Blanchet, Habegger, Masbaum and Vogel \[BHMV\]) as $`V(\mathrm{\Sigma })`$. In \[BHMV\], the space $`V(\mathrm{\Sigma })`$ is constructed as a quotient of $`𝒮H`$, but in a fairly abstract way; the point of the reduced skein space is simply that it is an explicit local combinatorial description of the quotient.
The most important such spaces are those associated to a solid torus, denoted $`𝒮T`$ and $`T`$ for convenience. The skein space $`𝒮T`$ is a polynomial algebra with one generator $`\alpha `$, a single curve winding once around the torus. The elements $`\varphi _a\in 𝒮T`$ ($`a=0,1,\mathrm{\dots },r-2`$), given by taking the $`a`$th Chebyshev polynomial of $`\alpha `$ (or by closing up Jones-Wenzl idempotents) are particularly important, as they descend to a basis for the $`(r-1)`$-dimensional space $`T`$. Lickorish’s construction of $`3`$-manifold invariants is based on an element $`\mathrm{\Omega }\in T`$ defined by
$$\mathrm{\Omega }=\eta \sum _{a=0}^{r-2}\mathrm{\Delta }(a)\varphi _a,$$
where
$$\eta =\frac{A^2-A^{-2}}{i\sqrt{2r}}=\sqrt{2/r}\,\mathrm{sin}(\pi /r)\qquad \text{and}\qquad \mathrm{\Delta }(a)=(-1)^a\frac{A^{2(a+1)}-A^{-2(a+1)}}{A^2-A^{-2}}.$$
(In \[BHMV\] and \[MR\], the symbol $`\omega `$ was used instead of $`\mathrm{\Omega }`$; the convention here agrees with the one in \[R2\].)
For a handlebody $`H`$ of genus $`g2`$, one can again write down a basis of $`H`$. The usual basis is given by picking $`3g3`$ discs chopping up $`H`$ in a pants decomposition, and then drawing a trivalent spine dual to the decomposing discs. The standard basis elements $`v`$ are made by attaching Jones-Wenzl idempotents to the edges of this graph and joining them suitably at the vertices. They are parametrised by the labellings of their idempotents, in other words by a subset of the set of labellings of the edges by integers in the range $`0`$ to $`r2`$. The vacuum vector $`v_0`$ is the basis vector corresponding to the empty link (all labels are $`0`$).
The action of the mapping class group $`\mathrm{\Gamma }_g`$ on $`H`$ can be defined in a natural but implicit way (see \[R1\]), but here it is more useful to have an explicit description of the action of a positive Dehn twist $`T_\gamma `$ about a curve $`\gamma \mathrm{\Sigma }`$. If $`x`$ is an element of $`H`$ then $`T_\gamma x`$ is represented by adjoining to a skein element representing $`x`$ the curve $`\gamma `$ with $`\mathrm{\Omega }`$ inserted onto it (with framing $`1`$ relative to the surface), drawn just inside the boundary of $`H`$.
In particular, if $`\gamma `$ is the boundary of one of the pants discs, the twist $`T_\gamma `$ acts on a standard basis vector $`v`$ by putting a full positive twist in the edge passing through this disc. Idempotents are eigenvectors under such twist operations. Consequently, if the relevant edge is coloured with $`a`$, then $`T_\gamma v=\xi _av`$, where $`\xi _a=(1)^aA^{a^2+2a}`$ is the associated eigenvalue (twist coefficient).
###### Remark.
The representation defined by these twist generators (or as constructed in \[R1\]) is only projective, and it is customary to lift to a genuine linear representation of a central extension of $`\mathrm{\Gamma }_g`$. However, the projective ambiguity has no bearing on the question of irreducibility, so this fact can be safely ignored.
## 3. Proof of the theorem
###### Lemma 1.
If $`r`$ is prime then the vectors $`t_b`$, for $`b=0,1,\mathrm{\dots },r-2`$, given by placing $`b`$ parallel $`-1`$-framed copies of $`\mathrm{\Omega }`$ in the solid torus, form a basis for $`T`$.
###### Proof.
The quickest way to see this is to use the non-degenerate pairing $`T\times T\to \mathbb{C}`$ obtained by gluing together two solid tori to make $`S^3`$ (whose skein space is canonically $`\mathbb{C}`$). Pairing $`t_b`$ with $`\varphi _a`$ results in $`\xi _a^b\mathrm{\Delta }(a)`$, so that the change of basis matrix expressing the $`t_b`$, viewed as linear functionals, in terms of the dual functionals $`\varphi _a^{*}`$ is a Vandermonde matrix $`(\xi _a^b)`$ times a diagonal matrix whose diagonal entries are the (non-zero) $`\mathrm{\Delta }(a)`$. Now $`\xi _a=(-1)^aA^{a^2+2a}`$, and one can easily check that these are all distinct when $`r`$ is prime, hence the first matrix is invertible. The second is obviously invertible, and so the $`t_b`$ indeed form a basis. ∎
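A quick numerical sanity check of this lemma, with the conventions fixed above (all comparisons are up to floating-point rounding, and the overall normalisation of the pairing is immaterial for the rank):

```python
import numpy as np

r = 7                                     # any prime r >= 3
A = np.exp(2j * np.pi / (4 * r))
a = np.arange(r - 1)                      # colours a = 0, ..., r-2
Delta = (-1.0) ** a * (A ** (2 * (a + 1)) - A ** (-2 * (a + 1))) / (A ** 2 - A ** -2)
xi = (-1.0) ** a * A ** (a ** 2 + 2 * a)  # twist coefficients

print(len(set(np.round(xi, 10))))         # r-1: the xi_a are pairwise distinct
V = xi[:, None] ** a[None, :]             # Vandermonde matrix (xi_a^b)
M = np.diag(Delta) @ V                    # pairing matrix, Delta(a) * xi_a^b
print(np.linalg.matrix_rank(M))           # r-1: full rank, so the t_b form a basis
```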
###### Lemma 2.
Suppose $`C`$ is a collection of disjoint curves on $`\mathrm{\Sigma }_g`$. Then one can obtain an element of $`H_g`$ by viewing these curves as lying in $`H_g`$ and attaching $`\mathrm{\Omega }`$ to each one with framing $`-1`$ relative to the surface. Such vectors $`\mathrm{\Omega }(C)`$ span $`H_g`$. Therefore the image of the group algebra of $`\mathrm{\Gamma }_g`$ applied to the vacuum vector $`v_0`$ is all of $`H_g`$.
###### Proof.
Consider $`H_g`$ as a thickened $`(g-1)`$-holed disc. The reduced skein space is spanned by a finite number of elements of $`𝒮H_g`$, which may be represented by links lying in the holed disc (consider the usual planar projection onto this disc). Given such a planar link $`L`$, isotop it to be near to the boundary $`\mathrm{\Sigma }_g`$, and rewrite each of its components (thought of as an element of $`T`$) as a linear combination of the basis elements $`t_b`$. Since each $`t_b`$ is really $`\mathrm{\Omega }(C)`$, for $`C`$ a single curve parallelled $`b`$ times, this proves the assertion about spanning. The final part follows immediately from the description of the action of a Dehn twist on $`H_g`$. ∎
Note that this does not immediately imply irreducibility. For example, if $``$ acts on $`^2`$ with distinct eigenvalues then the action is reducible but the group algebra applied to any non-eigenvector is all of $`^2`$. (This example can even be chosen to be unitary.)
###### Lemma 3.
The subgroup $`P_g`$ of the mapping class group generated by Dehn twists on the standard pants curves is a free abelian group, under whose action $`H_g`$ breaks up as a sum of one-dimensional representations, spanned by the standard basis vectors $`v`$.
###### Proof.
The standard basis vectors are certainly simultaneous eigenvectors for the twists generating $`P_g`$, as each twist just multiplies the vector by a twist coefficient $`\xi _a`$. But their collections of eigenvalues are distinct since the $`\xi _a`$ are (when $`r`$ is prime) and hence they span individual one-dimensional eigenspaces. ∎
Now the theorem can be proved. Suppose $`\theta `$ is any endomorphism of $`H_g`$ commuting with the action of the whole mapping class group $`\mathrm{\Gamma }_g`$. Then it certainly acts diagonally with respect to the standard basis, because it commutes in particular with the subgroup $`P_g`$ whose eigenvectors they are. Let us write $`\theta v=\lambda _vv`$ for a standard basis vector $`v`$. All we need to do is to show that the matrix of $`\theta `$ is actually a scalar to conclude, via Schur’s lemma, that $`H_g`$ is an irreducible representation of $`\mathrm{\Gamma }_g`$. To see this, observe that any standard basis vector $`v`$ can be generated from the vacuum vector $`v_0`$ by the action $`\psi `$ of some element of the group algebra of $`\mathrm{\Gamma }_g`$, by lemma 2. Then, since $`\theta `$ commutes with $`\psi `$,
$$\theta v=\theta \psi v_0=\psi \theta v_0=\lambda _{v_0}\psi v_0=\lambda _{v_0}v,$$
but also $`\theta v=\lambda _vv`$, so $`\theta `$ is a scalar.
## 4. Further comments
There are certainly cases in which the representations are not irreducible. It is difficult to find these in general, but in genus $`1`$, there is a large body of literature studying modular invariant partition functions for affine Lie algebras which provides a more than complete solution. See Cappelli-Itzykson-Zuber \[CIZ\], Fuchs’ book \[Fuc\], and Gannon \[Ga\] for a short proof of the result of \[CIZ\].
The problem studied in these references is to find all $`SL(2,\mathbb{Z})`$-invariant linear combinations
$$Z=\sum _{a,b=0}^{r-2}Z_{a,b}\,\varphi _a\otimes \overline{\varphi }_b\in V\otimes \overline{V},$$
where $`V=V(\mathrm{\Sigma }_1)`$ (and $`SL(2,\mathbb{Z})=\mathrm{\Gamma }_1`$ is the mapping class group of the torus acting on it), and the coefficients $`Z_{a,b}`$ are non-negative integers. Since $`\overline{V}\cong V^{*}`$, such elements can be thought of as invariant endomorphisms of $`V`$, and a reasonable first step in classifying such elements $`Z`$ is to find the commutant of $`SL(2,\mathbb{Z})`$ in $`\mathrm{End}(V)`$. This is carried out by Cappelli, Itzykson and Zuber \[CIZ\], who find the dimension of the commutant in terms of the number of divisors of $`r`$. (They then go on to find the non-negative integer matrices $`Z`$ lying in the commutant and to obtain an amazing $`ADE`$ classification.) In particular, the commutant is trivial when $`r`$ is prime, which agrees with the irreducibility theorem proved above, and also shows that it is sharp in genus $`1`$.
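This genus-$`1`$ statement is easy to probe numerically. The sketch below assumes the standard $`SU(2)`$ level $`k=r-2`$ matrices $`S_{ab}=\sqrt{2/r}\,\mathrm{sin}(\pi ab/r)`$ and $`T`$ (up to an overall phase, which does not affect the commutant) and computes the dimension of the joint commutant as a null space; this brute-force construction is an assumption of the sketch, since \[CIZ\] derive the dimension analytically rather than this way.

```python
import numpy as np
from scipy.linalg import null_space

def commutant_dim(r):
    """dim over C of {X : XS = SX and XT = TX} in End(V), dim V = r-1."""
    a = np.arange(1, r)                                   # shifted labels a+1
    S = np.sqrt(2.0 / r) * np.sin(np.pi * np.outer(a, a) / r)
    T = np.diag(np.exp(1j * np.pi * a ** 2 / (2 * r)))    # overall phase dropped
    I = np.eye(r - 1)
    # [X, M] = 0 written as linear equations on vec(X); S and T are symmetric
    L = np.vstack([np.kron(I, S) - np.kron(S, I),
                   np.kron(I, T) - np.kron(T, I)])
    return null_space(L).shape[1]

print([(r, commutant_dim(r)) for r in range(3, 11)])      # 1 whenever r is prime
```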
The higher-genus situation does not seem to have been studied much. It is worth observing that the method of \[CIZ\] for finding the commutant in genus $`1`$ relies on averaging over the image of $`\mathrm{\Gamma }_1=SL(2,\mathbb{Z})`$, which is finite (see for example Gilmer \[Gi\]). In higher genus, as noted in the introduction, the image is known to be (with finitely many small $`r`$ exceptions) infinite, so such methods fail. Whether one can apply the methods of skein theory to the problem of modular invariants, or vice versa, remains to be seen.
# High-density QCD: the effects of strangeness
## 1 Introduction
Although lattice gauge theory has been successfully applied to QCD at zero baryon density and non-zero temperature, we know very little about QCD at high density and low temperature, a regime which is physically relevant to neutron star physics and low-energy heavy-ion collisions. Strong evidence has been marshalled that the ground state in this regime spontaneously breaks the color gauge symmetry by a condensate of “Cooper pairs” of quarks. The pattern of symmetry breaking has been found to be very different for the cases of 2 and 3 flavors. In this paper I report on the results of a recent investigation into the more realistic 2+1 flavor theory.
Let us start by reviewing the two- and three-flavor cases.
(1) Two massless flavors. The ground state is two-flavor color superconducting (2SC): chirally symmetric $`u`$-$`d`$ pairing. The pattern is $`\langle q_i^\alpha C\gamma _5q_j^\beta \rangle \propto \epsilon ^{\alpha \beta 3}\epsilon _{ij}`$ (color indices $`\alpha ,\beta `$, flavor indices $`i,j`$), which breaks $`SU(3)_{\mathrm{color}}\to SU(2)_{\mathrm{color}}`$, leaving $`SU(2)_L\times SU(2)_R`$ unbroken.
(2) Three massless flavors. The ground state is 3-flavor color-flavor locked (CFL), with pairing between all flavors. Chiral symmetry is broken. The pattern is $`\langle q_i^\alpha C\gamma _5q_j^\beta \rangle \propto \delta _i^\alpha \delta _j^\beta +c\delta _j^\alpha \delta _i^\beta `$, breaking $`SU(3)_{\mathrm{color}}\times SU(3)_L\times SU(3)_R\to SU(3)_{\mathrm{color}+\mathrm{L}+\mathrm{R}}`$. The ansatz is symmetric only under equal and opposite color and flavor rotations. Since color is vectorial, this breaks the axial flavor symmetry. Even though it only pairs left-handed quarks with left-handed and right-handed with right-handed, color-flavor locking breaks chiral symmetry.
(3) 2+1 flavors. Even if one quark is massive, there is still a CFL phase. The pattern is more complicated than the 3-flavor case (see for details), but the essence is that $`u`$-$`s`$ and $`u`$-$`d`$ pairing breaks the $`SU(3)_{\mathrm{color}}\times SU(2)_L\times SU(2)_R`$ to $`SU(2)_{\mathrm{color}+\mathrm{L}+\mathrm{R}}`$, breaking chiral symmetry because the flavor symmetries are locked to color.
## 2 Phase diagram
In Fig. 1 we give a conjectured phase diagram for 2+1 flavor QCD, classifying the phases according to which global symmetries of the Hamiltonian they leave unbroken. These are the $`SU(2)_L\times SU(2)_R`$ flavor rotations of the light quarks, and the $`U(1)_S`$ of strangeness.
We make the following assumptions about the quark phases: (1) $`m_u=m_d=0`$. Including small $`u,d`$ masses would have little effect on pairing . (2) Zero temperature. (3) The quark phase can be described by an NJL-type model, with a 4-fermion interaction normalized by the zero-density chiral condensate. (4) Electromagnetism is ignored. (5) Weak interactions are ignored.
In the baryonic phases, we assume that baryon Fermi surfaces are unstable to pairing in channels which preserve rotational invariance, breaking internal symmetries such as isospin if necessary.
To explain Fig. 1, imagine following two lines of increasing density ($`\mu `$), one at high $`m_s`$, then one at low $`m_s`$.
At high $`m_s`$, we start in the vacuum, where chiral symmetry is broken. At $`\mu _\mathrm{o}\approx 300\mathrm{MeV}`$, one finds nuclear matter, in which $`p`$-$`p`$ and $`n`$-$`n`$ pairing breaks isospin. At $`\mu _\mathrm{V}`$, we find a first-order phase transition to the 2SC phase of color-superconducting quark matter, in which the red and green $`u`$ and $`d`$ quarks pair in isosinglet channels. Pairing of the blue quarks is weak, and we ignore it. When $`\mu `$ exceeds the constituent strange quark mass we enter “2SC+s” in which there is a strange quark Fermi surface, with weak $`s`$-$`s`$ pairing, but no $`u`$-$`s`$ or $`d`$-$`s`$ pairing. Finally, when the chemical potential is high enough that the Fermi momenta for the strange and light quarks become comparable, there is a first-order phase transition to the color-flavor locked (CFL) phase. Chiral symmetry is once again broken. Gapless superconductivity occurs in the metastable region near the locking transition.
At low $`m_s`$, the story starts out the same way. As density rises we enter the nuclear matter phase with $`pp`$ and $`nn`$ pairing. Then we enter the strange baryonic matter phase, with Fermi surfaces for the $`\mathrm{\Lambda }`$ and/or $`\mathrm{\Sigma }`$ and $`\mathrm{\Xi }`$. These pair with themselves in spin singlets, breaking $`U(1)_S`$ and isospin and chirality. We can imagine two possibilities for what happens next as $`\mu `$ increases further. (1) Deconfinement: the baryonic Fermi surface is replaced by $`u,d,s`$ quark Fermi surfaces, which are unstable against pairing, and we enter the CFL phase, described above. An “isospin” $`SU(2)_{\mathrm{color}+V}`$ is restored, but chiral symmetry remains broken. (2) No deconfinement: the Fermi momenta of all of the octet baryons are now similar enough that baryons with differing strangeness can pair in isosinglets ($`p\mathrm{\Xi }^{-}`$, $`n\mathrm{\Xi }^0`$, $`\mathrm{\Sigma }^+\mathrm{\Sigma }^{-}`$, $`\mathrm{\Sigma }^0\mathrm{\Sigma }^0`$, $`\mathrm{\Lambda }\mathrm{\Lambda }`$), restoring isospin. The interesting point is that scenario (1) and scenario (2) are indistinguishable. They both have a global $`SU(2)`$ symmetry, and an unbroken $`U(1)`$ gauge symmetry. This is the “continuity of quark and hadron matter” described by Schäfer and Wilczek. We conclude that for low enough strange quark mass, $`m_s<m_s^{\mathrm{cont}}`$, there may be a region where sufficiently dense baryonic matter has the same symmetries as quark matter, and there need not be any phase transition between them. The mapping between the baryonic and quark gaps is given in table 1, along with their transformation properties under the unbroken symmetries.
## 3 Possible lattice calculations
It has been pointed out that there is a tricritical point in the $`\mu `$-$`T`$ plane for two-flavor QCD, which may be experimentally detectable in heavy-ion collisions. As $`m_s`$ is reduced below its physical value $`m_s^{\mathrm{phys}}`$, this tricritical point moves to lower chemical potential, and at strange quark mass $`m_s^{*}`$ it occurs at $`\mu =0`$ (Fig. 2). For lower $`m_s`$ the phase transition is first-order. Lattice calculations give $`m_s^{*}\approx \frac{1}{3}m_s^{\mathrm{phys}}`$.
For $`m_s`$ just above $`m_s^{*}`$, $`\mu _{\mathrm{crit}}`$ is very low, so one should be able to estimate it by calculating derivatives of observables such as the chiral susceptibility with respect to $`\mu `$ at $`\mu =0`$. (Previously, $`\mu `$-derivatives have only been calculated at $`m_s=\mathrm{\infty }`$, where there is no nearby critical behavior, and no clear signal was seen). This could be extrapolated to give some indication of $`\mu _{\mathrm{crit}}(m_s^{\mathrm{phys}})`$, the value of the critical chemical potential in real-world QCD—a prediction that would be of direct value to heavy-ion experimentalists, who need to know at what energy to run their accelerators in order to see the phenomena predicted to occur near the critical point.
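A hedged sketch of the expansion logic (our notation, not the original’s): since the partition function satisfies $`Z(\mu )=Z(-\mu )`$ by charge conjugation, odd $`\mu `$-derivatives vanish at $`\mu =0`$, so an observable such as the chiral susceptibility has a Taylor series in $`\mu ^2`$,

$$\chi (\mu ,T)=\chi (0,T)+\frac{\mu ^2}{2}\frac{\partial ^2\chi }{\partial \mu ^2}\bigg|_{\mu =0}+O(\mu ^4),$$

whose radius of convergence is limited by the nearest singularity in the complex $`\mu `$ plane; rapid growth of the measured coefficients as $`m_s`$ approaches $`m_s^{*}`$ from above would therefore signal a small $`\mu _{\mathrm{crit}}`$.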
It should also be noted that existing finite-$`\mu `$ techniques such as imaginary chemical potential may well be useful in this crossover/critical region, since baryon masses become light as the baryons continuously melt into quarks, so there will be little suppression of the amplitudes of higher Fourier modes in imaginary $`\mu `$.
Acknowledgments
I thank J. Berges, K. Rajagopal, and F. Wilczek for their collaboration on the work reported here. It was supported in part by the U.S. Department of Energy (D.O.E.) under cooperative research agreement #DE-FC02-94ER40818.
# Evidence for Doppler-Shifted Iron Emission Lines in Black Hole Candidate 4U 1630-47
## 1 Introduction
Discovered by the Uhuru mission (Jones et al. 1976), 4U 1630-47 is a well-studied black hole candidate (BHC; Tanaka & Lewin 1995 and references therein; also see, e.g., Parmar et al. 1995, 1997, Kuulkers et al. 1997, and Oosterbroek et al. 1998 for more recent results). The source lies in the general direction of the Galactic center. Observations show that the source is heavily absorbed in soft X-rays, indicating a large distance ($`\sim `$10 kpc) within the disk of our Galaxy. No optical counterpart has yet been identified, perhaps due to the expected large visual extinction ($`>`$ 20 magnitude), so the dynamical evidence for the presence of a black hole in the binary system is still missing. 4U 1630-47 is considered a BHC only because of its similarities in the X-ray properties to some of the dynamically-determined black hole systems (Parmar et al. 1986). It is a transient X-ray source, like most known BHCs, but is one of the few such sources that exhibit frequent X-ray outbursts (e.g. Jones et al. 1976; Priedhorsky 1986; Kuulkers et al. 1997).
The canonical X-ray continuum of a BHC consists of two components (see review by Tanaka & Lewin 1995 and references therein): an ultra-soft component at low energies ($`<`$ 10 keV) and a power-law component at high energies, with photon indices roughly in the range of 1.5–2.5. The shape of the former is roughly that of a blackbody with temperature a few tenths to $`\sim `$2 keV, and has been successfully modeled by X-ray emission from the innermost portion of an optically thick (but geometrically thin) accretion disk surrounding the central black hole, while the latter is generally attributed to inverse-Comptonization processes due to the presence of energetic electrons (thermal or non-thermal or both) in the binary system. For transient BHCs, the disk component dominates the overall X-ray energy output over the Compton component near the peak of an X-ray outburst (which is similar to the soft state for persistent BHCs like Cyg X-1, in terms of the observed X-ray properties). Observations show that the power-law component is steep (with photon index $`\sim `$2.5) and extends to nearly 1 MeV without any apparent breaks (Grove et al. 1998), which would seem to signify the role of non-thermal Comptonization in the X-ray production processes (Coppi 1999, and references therein). As the outburst decays, the power-law component becomes flatter and thus increasingly dominant in the energy distribution; it also breaks at tens to several hundred keV, which is the signature of thermal Comptonization. The studies of 4U 1630-47 have shown that, for the most part, the source fits this description.
Emission lines, as well as absorption lines and edges, are sometimes observed in the X-ray spectrum of BHCs (e.g., Barr et al. 1985; Kitamoto et al. 1990; Done et al. 1992; Ebisawa et al. 1996; Cui et al. 1997, 1998; Ueda et al. 1998). These narrow spectral features appear in the energy range 6–8 keV and are usually attributed to emission or absorption processes involving iron K-shell electrons. Although the exact location of line-emitting matter is often debatable, there is evidence, at least for some BHCs (e.g., Fabian et al. 1989; Cui et al. 1998; Życki et al. 1999), that the observed iron $`K\alpha `$ line originates in the innermost part of the accretion disk, close to the black hole. If this proves to be the case, the profile of such lines would be distorted by the strong gravitational field of the hole (e.g., Fabian et al. 1989) and could then be carefully modeled to possibly derive the intrinsic properties of the hole, such as its angular momentum (Laor 1991; Bromley et al. 1997). Also observed are Doppler-shifted emission lines from the relativistic jets of SS 433 (Kotani et al. 1996), which is sometimes thought to contain a neutron star, although this is highly debatable (Mirabel & Rodriguez 1999). The studies of such lines have proven fruitful in gaining insights into the physical properties of the jets (Kotani et al. 1996). The emission lines could, therefore, become a valuable tool for probing the immediate surroundings of X-ray emitting regions, in terms of its geometry and kinematics, and provide important diagnostics of matter there, such as its abundance and ionization state. In this paper, we report, for the first time, the detection of a pair of correlated emission lines in the X-ray spectrum of 4U 1630-47 during its 1996 outburst, using archival data from the Rossi X-ray Timing Explorer (RXTE).
## 2 Observations
Fig. 1 shows the light curve of 4U 1630-47, obtained by the All-Sky Monitor (ASM) aboard RXTE, that highlights the portion of its “flat-topped” 1996 X-ray outburst when the source was also observed by RXTE’s main instruments. (The full ASM light curve can be obtained from the public archival database maintained by the RXTE Guest Observer Facility through their web site at http://heasarc.gsfc.nasa.gov/docs/xte/asm_products.html.) The 16 pointed RXTE observations (as marked in Fig. 1) cover a good portion of the peak plateau of the outburst, initially at a pace of once per day, with the pace much reduced towards the end of the monitoring campaign. The last observation occurred near the end of the decaying phase, about two months after the onset of the outburst. This particular observation has provided a valuable dataset for studying the transition period as the source was on its way back to the quiescent state and for comparing the X-ray properties between a “low state” and a “high state” (i.e., during the peak of the outburst).
RXTE carries two pointing instruments, the Proportional Counter Array (PCA) and the High-Energy X-ray Timing Experiment (HEXTE). The PCA covers a nominal energy range 2–60 keV and the HEXTE 15–250 keV. Although our main focus in this study is on spectral lines, we have examined data from both instruments, hoping to better constrain the underlying continuum with a wider energy band. Unlike typical BHCs, however, the observed X-ray spectrum of 4U 1630-47 at the peak of the outburst is so steep (few PCA counts above $`\sim `$30 keV) that the short HEXTE exposures provide little additional information. Hence, we present the results from analyzing only the PCA data. Table 1 summarizes the key parameters of the PCA observations. Note that the PCA consists of five individual Proportional Counter Units (PCUs), but not all PCUs were turned on during all observations.
## 3 Data Analysis and Results
We performed spectral analysis using the Standard 2 data. For all PCUs, the calibration of the first xenon layer has achieved the most satisfactory results, while that of the other two layers is more uncertain, especially in the energy range where iron spectral features are often observed. Therefore, we based our analysis on data from the first xenon layer only.
We used the calibration files (including response matrices and background models) delivered with the most recent release of FTOOLS (version 4.2). For each observation, we constructed separate spectra (both source+background and background only) for the first layer of each PCU. We then carefully examined the quality of background models, under the assumption that the observed counts in the highest energy channels can be attributed entirely to the background. We found that the results were quite satisfactory except for two observations. In those two cases (observations 3 and 4 in Table 1), the background spectrum seems to be of the correct shape but the overall normalization is a bit too low. We have tried but failed to find any obvious culprits for the problem. For these two observations, we simply re-normalized the background spectrum (by a factor of 1.08 and 1.15, respectively) so that it matches the sum spectrum at the highest energy channels. This procedure seems to work well in the sense that the quality of background subtraction achieved for these two observations appears to be comparable to that for others. We also note that for observations 14 and 15, although the background spectrum looks reasonable on average, the individual data points scatter much more than usual at high energies. The cause for such excessive scatter is not clear either.
An important calibration issue is the lack of satisfactory estimation of the systematic uncertainties in the response matrices of the PCA. A common practice is to add a certain amount (e.g., 1% of total counts) in quadrature to Poisson errors uniformly across the entire energy range. We know that the uncertainty is significantly larger near xenon absorption edges, so this approach does not always work well for RXTE observations of bright sources, which are dominated by systematic uncertainties. In fact, this leads to a more fundamental problem about the applicability of $`\chi ^2`$ statistics to the modeling of such RXTE data. While we can perhaps still use these statistics to compare different models to a certain degree, one should be cautious of relying on such statistics alone to reject or accept a certain model. Unfortunately, we are currently not aware of any more objective ways of estimating systematic uncertainties in the calibration of the PCA. Since our primary objective here is to study spectral lines, instead of modeling the continuum, we have decided to simply follow the common practice by adding a 1% systematic error to the data. For this study, we are mostly concerned with the instrumental artifacts that are localized near the xenon edges, because they could mimic line-like features. For the first xenon layer, the known artifacts seem to be quite small.
After subtracting the background spectrum from the sum, we proceeded to model the resulting source spectrum. For the continuum, we adopted a composite model of a multi-color disk (“diskbb” in XSPEC; e.g., Mitsuda et al. 1984) and a power law, which is typical of BHCs. We then simultaneously applied the model to the first-layer spectra of all PCUs, with the relative (to PCU 0) normalization of the PCUs floating to account for any slight mismatches among the detectors. All the results are, therefore, normalized to the absolute calibration of PCU 0. This model alone fails to fit the observed X-ray spectrum in all cases (see the upper panel of Fig. 2 for Observation 1); the residual plot reveals two line-like features between 5 and 8 keV. The fit becomes much improved (and formally acceptable in terms of the $`\chi ^2`$ statistics) after two Gaussian functions have been added to the model (see the lower panel of Fig. 2). Fig. 3 shows the unfolded spectrum for Observation 1, along with each individual component of the model. The lines are located at $`\sim `$5.5 keV and $`\sim `$7.5 keV, respectively, with equivalent widths $`\sim `$188 eV and $`\sim `$76 eV (see Table 3). We carried out the same analyses for other observations. The results are summarized in tables 2 and 3 for the continuum and lines, respectively.
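For illustration only (the actual fits use XSPEC’s “diskbb” disk model, the PCA responses, and the real data), the two-Gaussians-on-a-continuum step can be sketched on synthetic counts as follows; all numbers below are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the composite model: a steep power-law continuum
# (substituting for diskbb + power law) plus two Gaussian lines.
def model(E, norm, gamma, n1, e1, s1, n2, e2, s2):
    continuum = norm * E ** (-gamma)
    line1 = n1 * np.exp(-0.5 * ((E - e1) / s1) ** 2)
    line2 = n2 * np.exp(-0.5 * ((E - e2) / s2) ** 2)
    return continuum + line1 + line2

E = np.linspace(3.0, 20.0, 300)                        # keV grid
truth = (10.0, 3.5, 0.8, 5.5, 0.4, 0.3, 7.5, 0.3)      # fake parameters
rng = np.random.default_rng(1)
data = model(E, *truth) * (1.0 + 0.02 * rng.standard_normal(E.size))

p0 = (8.0, 3.0, 0.5, 5.4, 0.5, 0.2, 7.4, 0.4)          # starting guesses
popt, _ = curve_fit(model, E, data, p0=p0)
print("fitted line energies: %.2f keV, %.2f keV" % (popt[3], popt[6]))
```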
During the first observation, a large “dip” was observed in the X-ray intensity of the source, as shown in Fig. 4. It has been suggested that the dip can be accounted for by a partial covering of the X-ray emitting disk by passing clouds, and that the complicated structures within the dip simply reflect the change in the absorbing column density through the clouds (Tomsick et al. 1998). Although it is not our primary goal here to study the origin of the dip, for the purpose of comparison (the dip was excluded from the analysis already described), we constructed an average X-ray spectrum for the dip and conducted similar analyses with the same continuum model. We found that the dip spectrum can be described satisfactorily (in terms of the reduced $`\chi ^2`$ value) with the addition of only one Gaussian function (Fig. 5). The Gaussian seems to correspond to the lower-energy line seen in the non-dip spectrum, although the line seems much broader and stronger. The results for the dip are summarized in Table 4. In this model, no significant change is required in the line-of-sight column density during the dip (by comparing Table 4 to Table 2). Instead, only the disk component seems to have varied significantly: the temperature at the inner edge of the disk becomes higher, but the overall normalization is lower, which perhaps indicates that the inner edge of the disk moves closer to the central black hole. This would be opposite to some of the dips observed in black hole candidate GRS 1915+105, which have been attributed to a viscous instability that causes the disruption of the inner part of the disk (Belloni et al. 1997). No similar dips are seen during the subsequent RXTE observations of 4U 1630-47.
It is clear, from tables 2 and 3, that the model adopted seems to describe the observed X-ray spectra of 4U 1630-47 quite well, except for observations 10, 14, and 15. For Observation 10, the large reduced $`\chi ^2`$ values are almost entirely due to significant structures in the residual at low energies ($`<`$ 10 keV); Observation 14 suffers from the same problem, in addition to large residuals at high energies ($`>`$ 20 keV) which are caused by the excessive scatter of data points in the background spectrum already mentioned; the latter also explains the problem for Observation 15. These problems are illustrated in Fig. 6, taking Observation 14 as an example. The magnitude of the low-energy structures in observations 10 and 14 is quite small, compared to the emission lines detected. While the features might represent interesting real effects, we will not pursue them any further in this study.
The continuum of 4U 1630-47 is unusually soft during the peak of the 1996 outburst, with power-law photon indices in the range of 3–5 (for comparison, the photon index is typically around 2.5 for BHCs). The power-law component varies significantly during the outburst. The disk component appears quite typical of BHCs, with the temperature $`\sim `$1.3 keV at the inner edge of the disk. The high line-of-sight column density may simply indicate the location of the source: it is buried in the disk of our Galaxy and far away from us, as also concluded from other studies.
Although a pair of emission lines are detected in all observations during the peak of the outburst (the first 15 in Table 1), in some cases, the higher-energy line is quite weak and thus the detection is not very significant. For such cases, the derived line equivalent width can be significantly affected by the calibration uncertainties. It is clear, however, that the detected lines cannot be entirely attributed to instrumental artifacts, because they vary greatly from observation to observation and show no apparent correlation with the exposure time (which roughly determines the signal-to-noise ratio of the data here). To further investigate possible artifacts caused by calibration uncertainties, we analyzed a PCA observation of the Crab (Observation ID 10200-01-20-00) which was made during the same epoch (on 01/31/97), following exactly the same procedure (e.g., only the first xenon layer of each PCU is used). The X-ray spectrum is well described by an absorbed power law and the derived model parameters are all within the nominal values for the source ($`N_H=3.1\times 10^{21}\text{ }cm^{-2}`$, photon index $`2.16`$, and normalization $`12.5\text{ }ph\text{ }keV^{-1}\text{ }cm^{-2}\text{ }s^{-1}`$). The residuals of the fit can be quantified by the ratio of the data to the model. We obtained such residual plots for both the Crab and 4U 1630-47 (from the continuum model without Gaussian functions), and the ratio between the two plots should roughly be free of any calibration uncertainties. One such ratio plot (for PCU 0 alone) is shown in Fig. 7. The two emission lines are clearly present. Assuming that the spectrum of the Crab is featureless, we conclude that both lines are physically associated with 4U 1630-47, even though the inferred line parameters (physical width and equivalent width, in particular) may be sensitive to calibration uncertainties. Fig. 8 highlights the detected emission lines in all observations after the underlying continuum has been subtracted from the source spectrum.
Only one emission line is present in the last observation (#16, as listed in Table 1; see also Fig. 8), as the source is just about to return to the quiescent state. Interestingly, the line is located roughly midway between the pair of lines seen during the peak of the outburst, and it appears to be much stronger. The measured power-law photon index ($`\sim `$2) is typical of transitional periods between “high” and “low” spectral states in BHCs (e.g., Cui et al. 1997). The X-ray emitting portion of the disk seems to be much cooler, and the inner disk edge becomes farther away from the central black hole. The measured column density is $`\sim `$30% lower than that during the peak. Similar evolution of the column density was also noted in another study of 4U 1630-47 during the decaying phase of its 1998 outburst (Oosterbroek et al. 1998). It was speculated that a substantial amount of material could be produced at the onset of the outburst, which, subsequently, is either accreted onto the black hole or expelled from the binary system. Alternatively, the larger column density might indicate a significant increase in the scale height of the accretion disk during the outburst, if the binary system is highly inclined with respect to the line of sight (as speculated by Kuulkers et al. 1998).
## 4 Discussion
The RXTE observations of 4U 1630-47 have revealed the presence of two emission lines when the source was near the peak of its 1996 X-ray outburst. Although the lines seem to move around at random, they move in unison, so as to keep their separation roughly constant (see Table 3). Also, the lines vary significantly in strength, but with the lower-energy line always much stronger than the higher-energy one. The measured line fluxes are reasonably correlated, as shown in Fig. 9. The correlation between the lines seems to imply a causal connection — perhaps they share a common origin. Probably both originate in a single $`K_\alpha `$ line from highly ionized iron that is Doppler-shifted either in a Keplerian accretion disk or in a bi-polar outflow, or even in both. The required line-of-sight velocity is only roughly $`0.15c`$, which can easily be accommodated in both scenarios. Although the detailed line properties (such as profiles) are expected to be different for the two cases, the lack of adequate energy resolution of the data makes it impossible to favor one over the other at present. In both cases, a change in the line energy might be due to the variation in the ionization state of the line-emitting matter.
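As a back-of-envelope check of that velocity (our arithmetic, assuming the rest energy is simply the midpoint of the pair, close to the 6.7 keV line of He-like iron):

```python
# First-order Doppler shift: E_obs = E0 (1 +/- beta) for the blue- and
# red-shifted peaks, so beta = (E_blue - E_red) / (2 E0).
e_red, e_blue = 5.5, 7.5            # keV, observed line pair
e0 = 0.5 * (e_red + e_blue)         # assumed rest energy, 6.5 keV
beta = (e_blue - e_red) / (2.0 * e0)
print(f"E0 = {e0:.1f} keV, v/c ~ {beta:.2f}")   # ~0.15
```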
If the observed emission line pair is from a single, double-peaked iron $`K_\alpha `$ line that originates in the accretion disk close to the central black hole, both gravitational and transverse Doppler shifts would tend to move line photons toward lower energies. In this case, a stronger red peak may only reflect the skewness of the line profile, which is not resolved here, again due to the lack of energy resolution. Further support for a disk origin of the line pair can be derived from the fact that the higher-energy line appears to be much narrower than the lower-energy one in most cases (see Table 3). This favors a line profile with a much more extended red wing (than the blue wing), which is characteristic of a disk line. Moreover, during the dip, the inner edge of the disk appears to have moved closer to the black hole (see discussion in § 3). We would, therefore, expect that the red peak grows stronger, due to stronger gravitational redshift; this indeed appears to be the case (see Table 4). For a disk line, the highly blue-shifted peak would necessarily require a rather high inclination angle of the accretion disk (e.g., Fabian et al. 1989), which would be consistent with the dipping activity observed in 4U 1630-47 if the dip is caused by absorption (see arguments by Kuulkers et al. 1998). The presence of broad (or “smeared”) iron lines from the accretion disk of other BHCs has previously been suggested by Życki and his co-workers in a series of papers (e.g., Życki, Done, & Smith 1999, and references therein), by modeling the reflection component of the X-ray spectrum.
The outflow model, on the other hand, cannot naturally explain why the red-shifted line is so much stronger than the blue-shifted one, since the opposite is expected as the result of Doppler boosting. However, the problem is not fatal, due to our ignorance of astrophysical jets and the physical processes therein. It is known that the measured radio fluxes from the receding and approaching jets in “micro-quasars” are not always consistent with Doppler boosting (Hjellming & Rupen 1995; Fender et al. 1999). In fact, the receding jet sometimes appears brighter than the approaching one. Sometimes, the relative brightness of twin jets flip-flops as the jets evolve (see Fig. 2 in Fender et al. 1999). These “anomalies” almost certainly reflect the intrinsic difference between the two jets. Similar difference could also have existed in a bi-polar outflow from 4U 1630-47, to account for the observed line flux ratio. Moreover, the asymmetry in the environment surrounding either side of the outflow might cause, for example, the emission from the approaching flow being obscured more than that from the receding one, independent of any other physical processes. Therefore, the outflow scenario is by no means ruled out, although a disk origin of the observed emission lines does seem more likely.
Only one emission line is seen during the transition of the source to the quiescent state. For a line that originates in the innermost portion of the accretion disk, it would imply that the inner edge of the disk has moved farther away from the black hole (and thus the effects of Doppler shift become small) as the transition proceeded, consistent with the inferred evolution of the disk from the model fit (see § 3). It is worth pointing out that such evolution of the accretion disk agrees qualitatively with the expectation of the “advection-dominated accretion flow” (ADAF) models for state transitions in BHCs (e.g., Esin et al. 1998). If the line originates in a bi-polar outflow, on the other hand, a single-peaked line would imply that the velocity of the outflow is much reduced during the transition. Given the limited spectral resolution of the PCA, it is also possible that the observed lines are a mixture of emission lines both from the inner region of the accretion disk and from a bi-polar outflow which might only occur during the peak of the outburst. In this case, the evolution of the emission line from being double-peaked to single-peaked, as the source approaches the quiescent state, could be attributed to the cessation of the outflow and the receding of the inner edge of the accretion disk during the process. During the transition, the increase in the equivalent width of the line (see Table 3) is probably due to the combination of a much flatter power-law component and a larger solid angle subtended by a larger Comptonizing region (as required, e.g., by the ADAF models).
It is interesting that no emission lines have ever been reported for 4U 1630-47 previously. This can either be attributed to the much improved spectral capability of the RXTE instrumentation, especially the large effective area of the PCA which is essential for detecting relatively weak, broad lines, or to the rare occurrence of such spectral features. Indeed, the observed X-ray properties of 4U 1630-47 during the 1996 outburst appear to be somewhat unusual, compared to other transient BHCs in outburst. Most notably, the observed X-ray continuum is extremely soft: hardly any counts are detected above $`\sim `$30 keV. At high energies, the continuum can still be well described by a single power law but the photon index reaches as high as $`\sim `$5, compared to typical values of $`<`$ 3 for BHCs. In 1998, 4U 1630-47 experienced another X-ray outburst, which was well covered by the ASM/RXTE (see, again, http://heasarc.gsfc.nasa.gov/docs/xte/asm_products.html for the ASM light curve), as well as by the RXTE’s pointing instruments. We will report the results for this outburst in a future publication (Cui et al., in preparation). The preliminary spectral results from the PCA data give no indication for the presence of a similar emission-line pair during the outburst. The observed power-law continuum is quite typical of BHCs, which was also noted based on the results from BeppoSAX observations (Oosterbroek et al. 1998). Perhaps the 1996 outburst is an unusual one for 4U 1630-47, in which a bi-polar outflow might indeed have been formed. The source was observed at radio wavelengths with the Australia Telescope Compact Array and the Very Large Array during this period, but no emission was detected (Kuulkers et al. 1997; Hjellming et al. 1999, although radio emission was observed during the 1998 outburst, which the authors suggested as evidence for the presence of jets).
Finally, it seems reasonable to ask why no Doppler-shifted emission lines have ever been observed from the “microquasars” that are known to occasionally produce relativistic bi-polar jets with superluminal motion, given that the observed pair of emission lines may indicate the formation of a bi-polar outflow in 4U 1630-47 during the peak of its 1996 outburst. Insufficient coverage at X-rays is unlikely to be the answer, since at least in the case of GRS 1915+105 the source was regularly monitored by RXTE during a period when superluminal radio jets were detected (Fender et al. 1999). In fact, the observations of GRS 1915+105 at other times provided evidence for more subtle mass ejection events, combined with observations at other wavelengths (Eikenberry et al. 1998; Mirabel et al. 1998), yet no Doppler-shifted X-ray lines were reported. Either such lines were overlooked in the published work based on these observations, or they were simply not present. We think that the latter is more likely, especially considering the fact that no reliable detection of any emission lines in microquasars has been reported. It has been speculated that the general lack (or the weakness) of iron $`K_\alpha `$ lines in BHCs can be attributed to the high ionization state of matter in the vicinity of central black holes due to relatively high X-ray luminosity (Ross & Fabian 1993; Matt, Fabian, & Ross 1996). It is known that the fluorescent photons from Fe XVII–XXIII are very likely to be resonantly absorbed by the next ionized species and eventually destroyed by the Auger effect in a typical environment such as an accretion disk (Ross & Fabian 1993). Consequently, no (or very weak) iron $`K_\alpha `$ lines are expected. At even higher ionization states, however, the $`K_\alpha `$ lines from Fe XXIV–XXVI can escape rather easily, due to the lack of competing Auger processes, and thus the yield is quite high. In reality, there should exist a range of ionization states in the accretion disk or outflows, so the dominating ionization state of matter determines the strength of iron $`K_\alpha `$ lines and ultimately whether the lines are detectable. It is worth noting that iron absorption lines have been detected in microquasars GRS 1915+105 and GRO J1655-40 by ASCA (Ebisawa 1996; Ueda et al. 1998), although arguments can be made, in the case of GRO J1655-40, for the presence of an emission line, during a “dipping” period, with a characteristic profile of a disk line (see the residual plot for this case in both Ebisawa 1996 and Ueda et al. 1998). Much improved spectral capability of X-ray spectrometers on future missions, such as Chandra, XMM, and Astro-E, can hopefully resolve much of the ambiguity in the interpretation of emission lines observed in BHCs.
The authors gratefully acknowledge support from NASA through its Long-Term Space Astrophysics program and RXTE Guest Observer program. This work has made use of the results provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF and of the archival databases maintained by the High Energy Astrophysics Science Archive Research Center at NASA’s Goddard Space Flight Center.
# Photoelectron Spectra of Aluminum Cluster Anions: Temperature Effects and Ab-Initio Simulations
## Abstract
Photoelectron (PES) spectra from aluminum cluster anions, Al$`_n^{-}`$ ($`12\le n\le 15`$), at various temperature regimes, were studied using ab-initio molecular dynamics simulations and experimentally. The calculated PES spectra, obtained via shifting of the simulated electronic densities of states by the self-consistently determined values of the asymptotic exchange-correlation potential, agree well with the measured ones, allowing reliable structural assignments and theoretical estimation of the clusters’ temperatures.
PACS: 36.40.Cg, 36.40.Mr, 71.24.+q
Photoelectron spectroscopy (PES) is a rich source of information pertaining to the electronic structure and excitation spectra of atoms, molecules and condensed phases. Materials clusters exhibit a high sensitivity of the electronic spectrum to the geometrical structure which often differs from that of the bulk, and show a high propensity to form structural isomers that may dynamically interconvert at finite temperatures. Consequently, high-resolution PES emerges as an important tool in cluster science, particularly in the face of severe difficulties in applying common direct structure-determination techniques to such systems.
However, a reliable interpretation of PES spectra is often theoretically challenging due to several factors, including: final-state effects, electronic and ionic relaxations, thermal ionic motions, and structural isomerizations. With the advent of accurate ab-initio methods for electronic structure calculations, theoretical investigations of some of these issues have been pursued. Particularly pertinent to our study is the development of methods which allow practical and reliable simulations of PES spectra including dynamical (finite-temperature) effects \[2-4a\].
In this paper we address, via the use of ab-initio BO-LSD-MD (Born-Oppenheimer local-spin-density molecular dynamics) simulations \[4a\], methodological issues pertaining to simulations and analysis of finite-temperature PES spectra. We performed accurate (and practical) calculations of PES spectra from the recorded density of states of the clusters using a “generalized Koopmans’ theorem” (GKT), concurrent with simulations of the ionic dynamics. Furthermore, in conjunction with measured high-resolution PES spectra for Al$`_n^{-}`$ ($`12\le n\le 15`$) cluster anions, we illustrate that the simulated spectra provide a (quantitatively) faithful description of the measured ones, including thermal effects, thus allowing reliable assignments of ground as well as isomeric structures. Additionally, we demonstrate that through comparisons between simulated PES spectra and those measured in three (experimentally undetermined) temperature regimes, estimates of the clusters’ temperatures can be obtained.
In the BO-LSD-MD method the motions of the ions evolve in time according to classical equations of motion, with the electronic Hellmann-Feynman forces evaluated at each MD time step via a self-consistent solution of the Kohn-Sham (KS) equations, using the LSD exchange-correlation functional after Ref., and in conjunction with non-local norm-conserving pseudopotentials. An important element of the method, distinguishing it from those used previously in PES studies, is the fact that it does not employ supercells (periodic replicas of the system), and consequently charged systems as well as those having permanent and/or dynamically developed multipole moments are simulated accurately in a straightforward manner on equal footing with neutral ones (i.e. alleviating the need for an artificial neutralizing background, large calculational cells, and/or approximate treatment of long-range multipole image interactions).
The ground state structures of Al$`_{12}^{-}`$–Al$`_{15}^{-}`$, determined by us through structural optimization starting from those of the corresponding neutral clusters \[4b\], are displayed in Figure 1. Aluminum clusters in this size range energetically favor icosahedral-based structures; Al$`_{12}^{-}`$ having an oblate deformed shape, that of Al$`_{13}^{-}`$ being close to an ideal icosahedron, and those of Al$`_{14}^{-}`$ and Al$`_{15}^{-}`$ being capped icosahedra. For Al$`_{15}^{-}`$ we find that in the energy-optimal structure the two capping atoms are located on the opposite sides of a “core” icosahedron, resulting in a strongly deformed prolate shape (see Fig. 1).
The electronic structure of the ground state cluster anions exhibits sizable gaps ($`E_g`$) between the highest-occupied KS molecular orbitals (HOMO) and the lowest unoccupied ones (LUMO), as well as odd-even alternation (as a function of the number of electrons) in the vertical detachment energy (vDE) shown in Table I. Al$`_{13}^{-}`$ is electronically “magic” (i.e. 40 valence electrons), having an exceptionally high vDE and the largest $`E_g`$. Its electronic structure reflects the corresponding neutral cluster, which was found \[4b\] to exhibit a clear jellium-type filling sequence $`1s^21p^61d^{10}2s^2`$ for the lowest 20 single-particle states. The remaining 20 states, which would correspond to jellium $`1f^{14}2p^6`$ states, are grouped into two broadly overlapping subbands (finite temperature broadening of these bands is displayed in Figure 2) and show significant $`p`$-$`f`$ mixing; this level scheme is known to be a consequence of the $`I_h`$ icosahedral symmetry.
Although the KS states are not necessarily the “true” molecular orbitals of the system, it has been observed that the KS HOMO eigenvalue of the $`N`$-electron system, $`ϵ_{HOMO}(N)`$, bears a well-defined relation to the ionization potential $`I(N)`$ and electron affinity $`A(N)`$, through Janak’s theorem. Following these ideas we make use here of a “generalized Koopmans’ theorem” (GKT)
$$-ϵ_{HOMO}(N)+v_{xc}^{\mathrm{\infty }}=I_{GKT}(N),$$
(1)
where $`v_{xc}^{\mathrm{\infty }}`$ is the asymptotic limit of the exchange-correlation potential. This nonvanishing energy shift is required for an accurate description of the asymptotic KS equations. While rigorously the vertical detachment energy would be given by $`E(N-1)-E(N)`$, where $`E(N)`$ and $`E(N-1)`$ are, respectively, the total energies of the anion and neutral (unrelaxed) clusters, Eq. (1) suggests a practical approach for evaluation of the threshold region of finite-temperature PES spectra through MD simulations. Accordingly, neglecting hole-relaxation effects, the vDE for removing the electron from the HOMO state may be well estimated by ($`-ϵ_{HOMO}+v_{xc}^{\mathrm{\infty }}`$), provided that $`v_{xc}^{\mathrm{\infty }}`$ remains constant to a good approximation, regardless of spatial details of the $`N`$–electron system (such as isomeric atomic configurations of the cluster). To explore the validity of this condition we have calculated for each of the cluster anions the energy shift $`v_{xc}^{\mathrm{\infty }}=E(N-1)-E(N)+ϵ_{HOMO}(N)`$ for a selected set of structures (including the ground state one and 10 other configurations chosen randomly from finite-temperature MD trajectories). The calculated values of $`v_{xc}^{\mathrm{\infty }}`$ were found to have a spread of no more than 0.04 eV for each of the clusters, and furthermore we found that the dependence of $`v_{xc}^{\mathrm{\infty }}`$ on $`N`$ ($`37\le N\le 46`$) is very weak (see Table I).
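The arithmetic of this consistency check is simple enough to sketch; the numbers below are placeholders, not values from Table I:

```python
import numpy as np

# Shift per configuration, following the definition above:
#   v_xc_inf = E(N-1) - E(N) + eps_HOMO(N),
# evaluated for the ground state plus sampled MD configurations.
# All energies are hypothetical, for illustration only (eV).
E_anion   = np.array([-120.31, -120.05, -119.87])   # E(N)
E_neutral = np.array([-117.42, -117.18, -116.99])   # E(N-1), ions unrelaxed
eps_homo  = np.array([-0.92, -0.93, -0.91])

v_xc_inf = (E_neutral - E_anion) + eps_homo
print("shifts:", v_xc_inf, " spread:", np.ptp(v_xc_inf))  # spread ~0.03 eV
```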
While this procedure could be repeated to determine $`v_{xc}^{\mathrm{\infty }}`$ values for vDEs from deeper (lower-energy) KS states, we have chosen to use a simpler (and more practical) procedure whereby we use the shift calculated for the HOMO level also for the deeper states. In this way we generate the full PES spectra from the density of states (DOS) recorded in the course of the BO-LSD-MD simulations (a minimal sketch of this step is given below). As shown below, this theoretically founded procedure yields a faithful description of the experimental data, and it is a viable and reliable alternative to previously used methods for finite-temperature ab-initio MD simulations of PES spectra which were based on either ad-hoc shifts (aligning the theoretical DOS with the dominant features in the measured spectra) or on a first-order perturbative treatment \[3b\]. Furthermore, the comparative analysis of the simulated PES spectra and the measured ones (see below) validates a posteriori certain general assumptions underlying MD simulations of PES spectra, i.e.: neglect of finite-lifetime effects of the hole (see also Ref. where it is noted that such effects may contribute only for very small clusters); the use of vDEs (i.e. neglect of ionic relaxations following the detachment process); and, assumed equal weights for all states contributing to the PES spectrum (i.e. neglect of photoelectron transition-matrix effects, which may affect line-shape features and certainly the absolute PE cross-sections, but not the locations of spectral features, i.e. binding energies).
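A minimal numpy sketch of the shifting-and-broadening step (the 0.06 eV Gaussian width, the equal state weights, and the sign convention of Eq. (1) as reconstructed above are our assumptions):

```python
import numpy as np

def pes_from_dos(eigs_per_frame, v_xc_inf, width=0.06, npts=600):
    """Thermally averaged PES from occupied KS eigenvalues (eV)
    recorded along an MD trajectory: binding energy of each state is
    -eps + v_xc_inf, per the GKT; equal weight for every state."""
    be = np.concatenate([-np.asarray(e) + v_xc_inf for e in eigs_per_frame])
    grid = np.linspace(be.min() - 0.5, be.max() + 0.5, npts)
    gauss = np.exp(-0.5 * ((grid[:, None] - be[None, :]) / width) ** 2)
    spec = gauss.sum(axis=1) / (len(eigs_per_frame) * width * np.sqrt(2 * np.pi))
    return grid, spec
```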
The measured PES spectra for Al$`_{12}^{-}`$–Al$`_{15}^{-}`$ are shown in Figures 2 and 3 (solid line). It is found that clusters leaving the nozzle early (short residence time) are quite “hot” whereas clusters leaving the nozzle late (long residence time) are “colder”. Indeed the PES spectra for the cold clusters shown in Fig. 2 and the bottom panel of Fig. 3 exhibit well-defined features. On the other hand, hot clusters exhibit much broader and diffused spectral features, as shown in Figure 3 for Al$`_{13}^{-}`$, where we display spectra measured for three different residence times, labeled as “cold”, “warm”, and “hot”. Comparisons between the locations (binding energies) of the peaks and shoulders in the measured and simulated spectra for the cold clusters (simulation temperatures of 130 K to 260 K, see caption to Fig. 2) validate the $`v_{xc}^{\mathrm{\infty }}`$–shifting procedure of the calculated DOS described above. The widths of the peaks in the theoretical PES spectra originate solely from atomic thermal vibrations since at these low temperatures isomerization effects and/or strong shape fluctuations do not occur. The good agreement achieved here, without any adjustable parameters other than the ionic temperature in the MD simulations, strongly indicates that the “cold” clusters in the experiments are indeed well below room temperature.
Theoretical PES spectra corresponding to isomeric structures of Al$`_{13}^{-}`$–Al$`_{15}^{-}`$, calculated at 0 K, are also shown in Fig. 2 (see inset for the threshold regions of Al$`_{13}^{-}`$, and the dotted line in the panels for Al$`_{14}^{-}`$ and Al$`_{15}^{-}`$). The isomers for Al$`_{13}^{-}`$ and Al$`_{14}^{-}`$ are the aforementioned decahedral ones, and in the Al$`_{15}^{-}`$ isomer two neighboring triangular facets are capped. Comparison between these spectra and those calculated for the ground state clusters as well as with the measured ones suggests overall that at low temperatures either these isomers do not occur, or that their abundance in the cluster beam is rather low. In this context we note that the decahedral isomer of Al$`_{13}^{-}`$, in short MD simulations at about room temperature, transformed readily into the icosahedral one. This supports our conclusion pertaining to the low abundance in the cold beam of clusters “trapped” in isomeric structures; however, even a small relative (quenched) concentration of such an isomer in the cold Al$`_{13}^{-}`$ beam may be sufficient to account for the low-binding energy tail observed in the measured PES spectra for Al$`_{13}^{-}`$ (see inset in Fig. 2).
Both the experimental and theoretical PES spectra, shown in Figure 3 for Al$`_{13}^{-}`$, which were measured at the three temperature regimes mentioned above and simulated at the indicated temperatures, exhibit gradual broadening and “smearing” of the PES spectral features as the temperature increases. We also observe that the binding energy of the main peak is rather insensitive to the thermal conditions, while the line-shape near the threshold region (lower binding energies) exhibits a rather pronounced temperature dependence.
The broadening of the spectral features and the (so-called) “thermal tail effect” near threshold originate from the variations of the electronic structure caused by enhanced vibrational motions at the higher temperatures, as well as from increased isomerization rates (e.g. in the “warm” regime) governed by the free-energy of the cluster (that is, enhanced contributions of lower-frequency modes to the vibrational entropy), and from disordering (“melting”) of the cluster in the “hot” regime (where inspection of the atomic trajectories reveals frequent transitions between a broad assortment of configurations). Indeed, examination of the vibrational DOS of the simulated clusters (obtained via Fourier transformation of the atomic velocity autocorrelation functions) revealed a marked gradual softening of the clusters at the “warm” and “hot” regimes (that is, a shifting of the vibrational spectrum to lower frequencies) coupled with increasing overlap between the frequency regions of the various modes due to large anharmonicities.
In light of the above we judge the overall agreement between the simulated and measured spectra and their thermal evolution as rather satisfactory, and the remaining discrepancies (mainly in line-shapes) may be attributed to insufficient sampling during the 5 ps MD simulations of the thermally-expanded phase-space of the clusters.
The methodology developed in this study for practical calculations of finite-temperature PES spectra, through ab-initio MD simulations of aluminum cluster anions with no adjustable parameters other than the clusters’ temperatures, was demonstrated to yield results in agreement with high-resolution PES spectra measured at various thermal conditions of the cluster beam. Such comparative analysis allows reliable structural assignments and theoretical estimation of the clusters’ temperatures, as well as gaining insights into the electronic and structural properties of clusters and their thermal evolution.
Computations were performed mainly on Cray T3E at the Center for Scientific Computing, Espoo, Finland, and in part on an IBM RISC 6000/SP2 at the GIT Center for Computational Materials Science. Work in the University of Jyväskylä is supported by the Academy of Finland, and at Georgia Tech by the US DOE. The experimental work is supported by the NSF and performed at EMSL, a DOE user facility located at PNNL, which is operated for DOE by Battelle Memorial Institute. L. S. W. acknowledges support from the Alfred P. Sloan Foundation. J. A. wishes to thank the Väisälä Foundation for support.
## 1 Introduction
Previous work described a computerized system for automatically reconstructing cascade tracks from digital images of an emulsion chamber’s x-ray films, originally developed for JACEE analysis (Zager, 1997). We build on that work to describe how spot densities may be measured from information available to the computer as a byproduct of the track reconstruction process.
The result of the reconstruction process is a list of tracks in the chamber, and the spots which make up those tracks. The next step in our analysis is to measure the density on the x-ray films by hand for each spot. Since the computer has both the images of those films and the location of each spot, it is reasonable to try to automate this process.
Spot density is commonly used as an indication of shower energy in emulsion chambers (Burnett, 1986). Traditionally, spot density is measured by an optical instrument which measures the transmission of light through a 200–300 micron slit. Recently this technique has been extended to micro-densitometry, in which an optical density measurement is made by computer analysis of a microscopic image of that film. Here we describe attempts to measure density by computer analysis of an image of an entire x-ray film. This image is necessarily of far lower resolution than the microscopic image used for micro-densitometry, so a new technique must be developed.
## 2 Technique
We use optical density to estimate $`N_e`$, the number of electrons in a shower. $`N_e`$ is directly related to the total energy in the electromagnetic component of a shower. The relation between $`N_e`$ and density is dependent on film and development conditions, so for each set of x-ray films, a calibration is done between the density of a given spot and the number of singly-ionizing tracks seen under a microscope in the nuclear emulsion. Optical density is defined as
$$D=-\mathrm{log}_{10}\left(\frac{I_{\mathrm{transmitted}}}{I_{\mathrm{incident}}}\right).$$
(1)
To generate a quantity which is independent of $`I_{\mathrm{𝑖𝑛𝑐𝑖𝑑𝑒𝑛𝑡}}`$, we use
$$D_{\mathrm{net}}\equiv D_{\mathrm{fg}}-D_{\mathrm{bg}}=-\mathrm{log}_{10}\left(\frac{I_{\mathrm{fg}}}{I_{\mathrm{bg}}}\right),$$
(2)
where $`D_{\mathrm{fg}}`$ and $`D_{\mathrm{bg}}`$ are the densities of the foreground (the spot) and the background (the neighborhood of the spot), respectively.
The x-ray film is imaged by a CCD camera at approximately $`3400\times 2700`$ pixels, 12 bit grayscale. The film itself is $`50\times 40`$ cm, so a pixel is approximately 150 microns on an edge.
### 2.1 Measuring the Background Density
The background density of the film itself varies due to slight irregularities in the development process. The digital image of that film has further variations in background density due to illumination of the film and the optical qualities of the lens used. These combine to produce significant variations in the local background density, so it is necessary to measure $`D_{\mathrm{bg}}`$ in the neighborhood of each spot.
To rephrase the problem, we wish to measure the average density over all pixels in the neighborhood of a spot which are not part of that spot or of any other spot. Fortunately, in the process of identifying spots, the program has already identified every pixel of the image which belongs to a spot. Since a spot has blurry edges, it is important to exclude the edges from the calculation.
To exclude these edges we use a standard image processing technique called dilation, which causes the edges of a feature to grow by one pixel (Russ, 1995). By applying fourteen dilation rounds, we move the edge of each spot out fourteen pixels, or about 2 mm. This is sufficient to exclude the edges of all but the largest showers. We can then safely consider all the remaining pixels in the neighborhood to be background, unaffected by showers.
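A minimal sketch of this masking step (our own functions and window choices, assuming scipy is available; the spot mask itself comes from the track reconstruction stage):

```python
import numpy as np
from scipy import ndimage

def background_mask(spot_mask, rounds=14):
    """True where a pixel is safely background: every spot is grown
    outward by `rounds` pixels (~2 mm at 150 microns/pixel) and the
    grown regions are excluded."""
    grown = ndimage.binary_dilation(spot_mask, iterations=rounds)
    return ~grown

def local_background(image, bg_mask, y, x, half=50):
    """Mean background intensity in a window around (y, x); the
    window half-size is an illustrative choice, not the paper's."""
    win = (slice(max(y - half, 0), y + half), slice(max(x - half, 0), x + half))
    return image[win][bg_mask[win]].mean()
```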
### 2.2 Foreground Density
The size of a single pixel in our image is comparable to the size of the slit in optical densitometry. Since a CCD has a linear response to intensity, it should be possible to measure the density by taking the darkest pixel of the spot as $`I_{\mathrm{fg}}`$, and the average intensity of the background near the spot as $`I_{\mathrm{bg}}`$.
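Put together with the background mask above, the measurement reduces to a few lines (a sketch, reusing the hypothetical helpers defined earlier):

```python
import numpy as np

def net_density(image, spot_mask, bg_mask):
    """D_net per Eq. (2): darkest spot pixel over mean local
    background, with darker pixels recording lower intensity."""
    i_fg = image[spot_mask].min()
    i_bg = image[bg_mask].mean()
    return np.log10(i_bg / i_fg)   # = -log10(I_fg / I_bg)
```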
Figure 1 shows the calibration of our CCD and image acquisition system against Bausch & Lomb neutral density filters. Our system is linear, although it reads systematically low by a scale factor of about 1.7. We have seen similar scale factors when cross-calibrating other densitometers, so this is not a cause for concern (Olson, 1995).
With a 12-bit grayscale image, the largest $`D_{\mathrm{net}}`$ possible is $`D_{\mathrm{net}}=-\mathrm{log}_{10}(1/4096)=3.6.`$
However, under realistic illumination conditions we find a situation where $`I_{\mathrm{bg}}`$ is about 75% of the full range, and $`I_{\mathrm{fg}}`$ is about 10%. This translates to a maximum $`D_{\mathrm{net}}`$ of about 0.9. The density of spots on an x-ray film varies with the development of the film: the longer the development, the greater the density of a given spot. We tend to develop our films to produce a maximum $`D_{\mathrm{net}}`$ around 2.0. This leaves the bulk of spots well under $`D_{\mathrm{net}}`$ of 1.0. The simple method described above may be adequate for the majority of events, but we need a different method to estimate the density of the highest energy events.
### 2.3 Spot Area
Although density may saturate for higher energy events, a spot’s size will continue to grow. Shower energy has been successfully related to the area of a microscopic emulsion image (Fuki, 1995). The concept here is similar, but at a macroscopic scale. To turn spot area into a useful measurement, we need two things: a way to accurately measure the area, and a correlation between area and density.
Measuring the area of a spot is somewhat tricky. The lateral profile of charged particles in a cascade falls off approximately as $`r^{-1}`$ (Olson, 1995), so the spot has very soft edges and blurs seamlessly into the background. Since there is no hard edge, we somewhat arbitrarily choose one which we can construct. We compute the spot’s area as the sum of all pixels contiguous with the spot which are darker than a given threshold. The selection of this threshold is discussed below. Once again, this information is a byproduct of the track reconstruction phase.
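A sketch of this area measurement using connected-component labeling (the `threshold` argument is the discrimination level discussed below; `seed` is assumed to be a pixel known to lie inside the spot):

```python
import numpy as np
from scipy import ndimage

def spot_area(image, seed, threshold):
    """Count the pixels contiguous with `seed` that are darker than
    `threshold` (darker means lower CCD intensity).

    image     : 2-D array of linear CCD intensities.
    seed      : (row, col) of a pixel inside the spot.
    threshold : intensity below which a pixel counts as part of a spot.
    """
    dark = image < threshold             # candidate spot pixels
    labels, _ = ndimage.label(dark)      # group contiguous dark regions
    spot_label = labels[seed]            # the region containing the seed
    return int(np.count_nonzero(labels == spot_label))
```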
### 2.4 Adjustment for Inclination
A simple model of an inclined event would indicate that the area for an inclined shower varies as $`[\mathrm{cos}\theta ]^{-1}`$ due to the projection of the spot onto the film plane. But density also has a slope dependence due to physical characteristics of the x-ray film. Emulsion thickness, grain size, and the presence of a second emulsion on the back of the film all contribute. A study of the effect of inclination on density found results consistent with a simple $`[\mathrm{cos}\theta ]^{-1}`$ scaling, but could not rule out exponents in the range (-0.8) – (-1.2) (Olson, 1995). We stick to a simple model in which the effect of inclination on spot area will tend to be canceled out by the effect on density, so inclination is neglected.
## 3 Results
We applied the techniques above to a set of x-ray films exposed during the thirteen day JACEE-13 Antarctic balloon flight (Wilkes, 1995). Tracks in one emulsion chamber were reconstructed manually, then densities measured manually by micro-densitometry. We used the software to generate an independent map and set of spot measurements. Finally we matched the two sets of tracks.
### 3.1 Density-Density Correlation
$`D_{\mathrm{net}}`$ measured manually and $`D_{\mathrm{net}}`$ measured by the software are loosely correlated, as shown in figure 2. At the low end, $`D_{\mathrm{net}}<0.2,`$ the errors are probably dominated by the measurement of $`D_{\mathrm{fg}}`$. When measuring $`D_{\mathrm{fg}}`$ manually, the image is carefully aligned so that the darkest part of the spot is centered in the window. The automatic system does not have this luxury. These low-density spots are small, generally 7–8 pixels on an edge. The darkest point may or may not align with the center of the pixel. Large spots are less vulnerable to this problem because the darkest part of the spot will occupy more than one pixel.
### 3.2 Density-Area Correlation
Figure 3 shows the correlation between spot area and density. Only measurements from one x-ray film are shown because the area of a spot is dependent in part on the threshold intensity chosen to separate an image from its background. As the track reconstruction software adjusts this threshold differently for each image, it is only meaningful to compare spot areas measured on the same image.
Clearly we would like to be able to compare density, and therefore area, across different images. This requires a consistent threshold for each image. Another pass through the images can easily accomplish this once track reconstruction is complete.
The correlation between area and density looks good above $`D_{\mathrm{net}}`$ = 0.3. The variation seen at the lower end is probably due to the difficulty in discriminating between pixels which make up the image, and pixels which make up the background. Setting the threshold between foreground and background to be higher may produce better results, but may also tend to produce precision errors. These smaller spots are images of as few as 36 pixels. Increasing the discrimination threshold will reduce these spots further.
## 4 Conclusions
A careful study of the discrimination between the spot and its background is likely to benefit both the measurement of area and of literal density. The measurement of density may also be helped by fitting the measured intensities to an assumed lateral distribution function. By combining the literal density measurement with measurement of spot area, the automatic system may be able to produce good measurements of spot density. It is likely that this combination would rely more on density for lower-energy particles, and more on area for higher energy particles. This would allow us to estimate shower energy very quickly compared to current methods. The technique is not limited to analysis of x-ray films. Real-time electronic detectors which produce similar images of a cascade could employ the same method.
The author wishes to acknowledge the densitometry work of Bjorn S. Nilsen.
## 5 References
Burnett, T. H. et al., 1986, Nucl. Inst. Meth., A251, 583–595.
Fuki, M. and Takahashi, Y., 1995, Proc. 24th ICRC (Rome), 3, 677.
Olson, E. D., 1995, The Energy Spectrum of Primary Cosmic Ray Nuclei above 1 TeV per Nucleon, Ph.D. thesis, University of Washington, Seattle.
Russ, J. C., 1995, The Image Processing Handbook, CRC Press, Boca Raton.
Wilkes, R. J. et al., 1995, Proc. 24th ICRC (Rome), 3, 615.
Zager, E. L. et al., 1997, Proc. 25th ICRC (Durban), 7, 293.
# A method for spatial deconvolution of spectra
## 1 Background and motivation
Tremendous efforts have been devoted to the development of numerical methods aimed at improving the spatial resolution of astronomical images. However, the techniques proposed so far and most commonly used (e.g., Richardson, 1972, Lucy, 1974, Skilling & Bryan, 1984), tend to produce the so-called “deconvolution artefacts” (oscillations in the vicinity of high spatial frequency structures) and to alter the photometric and astrometric properties of the original data. Recently, Magain, Courbin & Sohy (1998; hereafter MCS) proposed and implemented a new deconvolution algorithm which overcomes such drawbacks. Its success is mainly the consequence of a deliberate choice to achieve an improved resolution rather than an infinite one, hence avoiding the retrieval of spatial frequencies forbidden by the sampling theorem. Many successful applications of the algorithm have been carried out in the framework of an intensive effort to obtain detailed light/mass maps for lensing galaxies (e.g., Courbin et al. 1997, 1998, Burud et al. 1998a,b). These results, when compared with more recent Hubble Space Telescope (HST) images, demonstrate the efficiency of the method to produce reliable high resolution images.
Undoubtedly, high spatial resolution plays a key role in most of the major advances made in observational astrophysics. This is true not only in imaging, but also in spectroscopy. We present in this paper a spectroscopic version of the MCS algorithm and we demonstrate from realistic simulations that flux calibrated spectra of severely blended point sources can be accurately recovered. We also show how the algorithm can be used to decontaminate the spectrum of extended objects from light pollution by very nearby, eventually bright, point sources.
## 2 Spatial deconvolution of spectra
MCS have shown that sampled images should not be deconvolved with the observed Point Spread Function (PSF), but with a narrower function, chosen so that the final deconvolved image can be properly sampled, whatever sampling step is adopted to represent it. For this purpose, one can choose the final PSF $`ℛ(\stackrel{}{x})`$ of the deconvolved image and compute the PSF $`𝒮(\stackrel{}{x})`$ which should be used to perform the deconvolution instead of the observed PSF $`𝒯(\stackrel{}{x})`$. The profile $`𝒮(\stackrel{}{x})`$ may be obtained by inverting the equation (see Sect. 3.):
$`𝒯(\stackrel{}{x})=𝒮(\stackrel{}{x})∗ℛ(\stackrel{}{x}),`$ (1)
where the symbol “$`∗`$” stands for the convolution. Equation (1) should always be considered together with the important constraint that all three profiles must be properly sampled.
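As an illustration of how equation (1) might be inverted in practice (a sketch, not necessarily the authors' implementation): since all three profiles are properly sampled on a common grid, one can divide in Fourier space with a small regularizer. For Gaussian profiles the result is analytic: $`𝒮`$ is again a Gaussian whose variance is the difference of the variances of $`𝒯`$ and $`ℛ`$.

```python
import numpy as np

def deconvolution_kernel(t, r, eps=1e-6):
    """Solve t = s * r for the narrower kernel s on a common 1-D grid.

    t   : samples of the observed PSF (assumed consistently centered).
    r   : samples of the chosen final PSF (narrower than t).
    eps : regularizer guarding against near-zero frequencies of r.
    """
    T = np.fft.fft(t)
    R = np.fft.fft(r)
    S = T * np.conj(R) / (np.abs(R) ** 2 + eps)  # Wiener-style division
    return np.real(np.fft.ifft(S))
```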
A straightforward consequence of choosing the shape of the PSF $`ℛ(\stackrel{}{x})`$ is that it is, indeed, exactly known. Such prior knowledge can be used to decompose the data (image or spectrum) into a sum of point sources with known analytical spatial profile, and a deconvolved numerical background. This decomposition was successfully used in the MCS image deconvolution algorithm. We apply the same fundamental rule to construct an algorithm for the spatial deconvolution of spectra.
### 2.1 Constructing the algorithm
A 2-D spectrum can be described as $`M`$ spectral resolution elements (e.g., $`M`$ lines), composed of $`N`$ spatial resolution elements (e.g., $`N`$ columns). Each spectral resolution element of the spectrum can be approximated as a quasi-monochromatic 1-D image (see Appendix) that we decompose as in the MCS image deconvolution algorithm. The pixel intensities $`ℱ(\stackrel{}{x})`$ of such a 1-D image may then be written as
$`ℱ(\stackrel{}{x})=ℬ(\stackrel{}{x})+{\displaystyle \sum _{k=1}^{N_{*}}}a_kℛ(\stackrel{}{x}-\stackrel{}{c_k}),`$ (2)
which is the sum of a 1-D numerical deconvolved background $`ℬ(\stackrel{}{x})`$ and of $`N_{*}`$ profiles $`ℛ(\stackrel{}{x})`$ with intensities $`a_k`$ and centers $`c_k`$. The profile $`ℛ(\stackrel{}{x})`$ is chosen to be Gaussian, with fixed width (i.e., resolution) all along the spectral direction. The final deconvolved spectrum is therefore corrected for seeing variations with wavelength. Moreover, the spectra may suffer from slit misalignment with respect to the physical dimensions of the detector and from atmospheric refraction. As a consequence, the position of a given point source on the detector is wavelength dependent. The deconvolved (and decomposed) 2-D spectrum which matches the data best may therefore be obtained by minimizing the function
$`𝒞_{\chi ^2}`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}{\displaystyle \sum _{i=1}^{N}}{\displaystyle \frac{1}{\sigma _{i,j}^2}}\left[𝒮_j(\stackrel{}{x})∗\left(ℬ_j(\stackrel{}{x})+{\displaystyle \sum _{k=1}^{N_{*}}}a_{k,j}ℛ(\stackrel{}{x}-\stackrel{}{c}_{k,j})\right)-𝒟_j(\stackrel{}{x})\right]_i^2,`$ (3)
where $`𝒟_j(\stackrel{}{x})`$ is the $`j^{th}`$ spectral element of the data spectrum, $`𝒮_j(\stackrel{}{x})`$ is the $`j^{th}`$ spectral element of the (narrower) PSF. Note that the width of the profile $`𝒮(\stackrel{}{x})`$ may vary with wavelength, as does the observed profile $`𝒯(\stackrel{}{x})`$, in order to ensure that $`ℛ(\stackrel{}{x})`$ does not. The sum over $`i`$ is the summation of the $`N`$ pixel values along the spatial direction of the $`M`$ spectral resolution elements.
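For a single spectral resolution element, the term entering equation (3) could be coded along the following lines (a sketch with hypothetical argument names; `fftconvolve` plays the role of the convolution by $`𝒮_j`$):

```python
import numpy as np
from scipy.signal import fftconvolve

def chi2_element(params, data, sigma, s, x, fwhm):
    """Chi-squared contribution of one spectral element (equation 3).

    params : (background array of length N, amplitudes a_k, centers c_k).
    data   : observed spatial profile D_j (length N).
    sigma  : 1-sigma noise per pixel.
    s      : narrow deconvolution PSF S_j.
    x      : pixel coordinates along the spatial direction.
    fwhm   : fixed FWHM of the final Gaussian profile R.
    """
    b, amps, centers = params
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    model = b.copy()
    for a_k, c_k in zip(amps, centers):
        model += a_k * np.exp(-0.5 * ((x - c_k) / sig) ** 2)
    model = fftconvolve(model, s, mode="same")  # re-convolve with S_j
    return np.sum(((model - data) / sigma) ** 2)
```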
Importantly, the deconvolution has to be performed under the constraint that the deconvolved background $`ℬ(\stackrel{}{x})`$ is smooth on the length scale of the final resolution (represented by the profile $`ℛ(\stackrel{}{x})`$). This is efficiently done by minimizing
$`ℋ_1`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}{\displaystyle \sum _{i=1}^{N}}\left[ℬ_j(\stackrel{}{x})-ℛ(\stackrel{}{x})∗ℬ_j(\stackrel{}{x})\right]_i^2`$ (4)
where the notations and indices are the same as in equation (3). This smoothing applies in the spatial direction only and independently for each spectral resolution element.
In spectroscopy, one may take advantage of an additional prior knowledge available: the fact that the position of a given point source at a given wavelength is highly correlated with its position in the neighbouring spectral resolution elements. We introduce this prior knowledge as a second constraint:
$`ℋ_2`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}\left[c_j-{\displaystyle \sum _{j^{\prime }=-W/2}^{W/2}}g_{j^{\prime }}c_{j+j^{\prime }}\right]^2`$ (5)
where the function $`g`$ has a Gaussian shape, for simplicity. It is defined over a box of W pixels centered on the $`j^{th}`$ spectral resolution element. Its Full Width at Half Maximum (FWHM) $`w_g=2\sqrt{\frac{\mathrm{ln}2}{b_g}}`$ defines the typical scale length where the correlation applies. The function is normalized to a total flux of one and is simply written as
$`g_j`$ $`=`$ $`{\displaystyle \frac{1}{G}}e^{-b_gj^2},`$ (6)
$`G`$ $`=`$ $`{\displaystyle \sum _{j=-W/2}^{W/2}}e^{-b_gj^2}`$
The final algorithm we propose for the spatial deconvolution of spectra therefore involves the minimization of the function
$`𝒞=\lambda 𝒞_{\chi ^2}+ℋ_1+\mu ℋ_2`$ (7)
The Lagrange multipliers $`\lambda `$ and $`\mu `$ should be chosen so that the deconvolved spectrum matches the data correctly, in the $`\chi ^2`$ sense, once it is re-convolved with the PSF $`𝒮(\stackrel{}{x})`$. This is described for image deconvolution in Courbin et al. (1997, 1998) and Burud et al. (1998a,b) and in the following section for the specific case of spectra.
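Schematically, the quantity of equation (7) might be assembled as follows (a sketch; the actual minimization runs over the backgrounds, intensities and centers simultaneously):

```python
import numpy as np
from scipy.signal import fftconvolve

def total_cost(chi2, backgrounds, centers, r, g, lam, mu):
    """Penalized cost of equation (7), with hypothetical inputs.

    chi2        : summed chi-squared term of equation (3).
    backgrounds : (M, N) array of deconvolved backgrounds B_j.
    centers     : (M,) array of point-source centers c_j.
    r           : final PSF R sampled on the spatial grid.
    g           : normalized Gaussian weights over the window W (eq. 6).
    """
    # H1: each background row must be smooth on the scale of R
    smoothed = np.array([fftconvolve(b, r, mode="same") for b in backgrounds])
    h1 = np.sum((backgrounds - smoothed) ** 2)
    # H2: each center must track its g-weighted local average (eq. 5)
    h2 = np.sum((centers - np.convolve(centers, g, mode="same")) ** 2)
    return lam * chi2 + h1 + mu * h2
```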
## 3 Simulations
The algorithm is tested on two different types of simulated data. The first simulation involves blends of point sources only, while the second one also considers extended sources, hence illustrating the capability of the algorithm to unveil faint extended objects hidden by much brighter ones. Both simulations include the effects of slit misalignment, seeing variation as a function of wavelength, as well as exaggerated atmospheric refraction. Gaussian photon noise and readout noise are also added to the data. A typical signal-to-noise ratio for the data is 200-300 per spectral resolution element. The deconvolutions are performed as in imaging and the optimal choice of the different Lagrange multipliers to be used is guided by the visual inspection of the residual maps (RM). The RM is the difference between the raw data and the deconvolved image re-convolved by the PSF $`𝒮(\stackrel{}{x})`$, in units of the noise. An accurate deconvolution should therefore leave a flat RM with a mean value of 1.
### 3.1 Blended point sources
The two point sources included in the simulations are separated by two pixels and are observed with a resolution of 4 pixels FWHM. The deconvolved spectra have a resolution of 2 pixels FWHM. Figure 1 compares the spatial profiles of the simulated data before and after deconvolution. In all simulations, the PSF used is not assumed to be perfectly known. It is determined from the spectrum of a simulated star by applying the PSF construction algorithm described in Section 4.
We first test the algorithm on a blend of two very different objects, with extreme and opposite colors. For example, Figures 2 and 3 consider the spectrum of a quasar contaminated by that of a star of similar or fainter luminosity (from 0. to 1.8 mag fainter depending on wavelength). Figure 2 shows the result of the deconvolution as well as the RM which reflects the good agreement between the data and the deconvolved spectrum all along the wavelength range considered. Figure 3 displays the 1-D spectra obtained by integrating the light of the 2-D spectra along the spatial direction, together with the flux ratio between the deconvolved spectra and the spectra used to build the simulated data (insets). The latter clearly demonstrates that (1) the simulated and recovered spectra agree very well within the noise and (2) there is no mutual light contamination between the spectra.
A second test involves a blend of point sources with identical spectra but different luminosities, e.g., the images of a lensed quasar. Our simulation consists of two quasar images with a magnification ratio of 1.8 mag. As in our first example, the separation between the sources is 2 pixels and the spectra have a spatial resolution of 4 pixels FWHM. The 1-D deconvolved spectra are displayed in Figure 4, confirming the results obtained with our first simulation. Figure 4 also illustrates the effect of the correlation introduced in equation 5 on the position of the sources as a function of wavelength. While the deconvolved spectra on the left panels were obtained without introducing any correlation ($`\mu =0`$ in equation 7), the ones on the right panels were obtained by choosing the $`\mu `$ parameter leading to the best possible RM. Choosing a too small $`\mu `$ multiplier leads to over-fitting of the data and to a “noisy” deconvolved spectrum, while a larger $`\mu `$ leads to under-fitting.
### 3.2 Extended sources
The simulations presented in the previous section show that the algorithm is efficient in deconvolving/extracting the individual spectra of very blended point sources and that their relative flux distribution is not modified by the deconvolution process. We now show how the algorithm can also be used to extract the spectrum of extended faint objects hidden by (eventually much brighter) point sources.
Figure 5 presents the results of such a simulation. The spectra of two quasar images are generated, with a separation of 6 pixels. The spatial resolution is 4 pixels FWHM and the signal-to-noise ratio of the brightest spectrum is about 200-300 per spectral resolution element, depending on wavelength. The flux ratio between the QSO images is 3 (1.2 magnitudes). The spectrum of the – extended – lensing galaxy is also incorporated in the simulated data. It is situated only 2 pixels away from the centroid of the faintest QSO image and is about 3 to 5 magnitudes fainter than the QSO images (depending on wavelength) and therefore completely invisible in the raw data (see panel (a) of Figure 5). Panel (b) shows the result of the deconvolution, panel (c) displays the 2-D deconvolved spectrum of the lensing galaxy alone. This spectrum is our best result among a number of deconvolutions using different smoothness intensities and different correlation factors on the center of the point sources ($`\lambda `$ and $`\mu `$ in equation 7). The RM, as defined at the beginning of this section is displayed in panel (e) and does not show any significant structure, as expected for an appropriate choice of $`\lambda `$ and $`\mu `$. Figure 6 confirms the good results obtained in Figure 5. The 1-D spectra of the 2 QSO images as well as the spectrum of the very faint lensing galaxy are in very good agreement with the input spectra, in spite of the blending and high luminosity contrast. The emission line in the spectrum of the lensing galaxy is well recovered and its (spectral) position is retrieved with an accuracy of 0.1 pixel.
## 4 Generating the PSF
Deconvolving spectra requires a good knowledge of the instrumental PSF all along the wavelength range available. This condition is fulfilled as soon as the spectrum of a star or any other point source is recorded together with the spectrum of scientific interest. The construction of the PSF is carried out as with the image deconvolution algorithm, i.e., the PSF $`𝒮(\stackrel{}{x})`$ is modeled as the sum of a Moffat profile and of a numerical image containing all small residual differences between the Moffat and the observed PSF. The analytical one dimensional spatial profile at wavelength $`j`$ is simply written as:
$`ℳ_j(\stackrel{}{x})=a_j\left[1+b_j(\stackrel{}{x}-c_j)^2\right]^{-\beta },`$ (8)
where $`a_j`$ is the intensity of the profile, $`b_j`$ defines its width, $`c_j`$ is its center along the spatial direction and $`\beta `$ characterizes the wings of the profile. $`ℳ(\stackrel{}{x})`$ must have the resolution of the PSF $`𝒮(\stackrel{}{x})`$ needed for the deconvolution process and is obtained by minimizing the $`\chi ^2`$ between the observed PSF $`𝒯(\stackrel{}{x})`$ and $`ℛ(\stackrel{}{x})∗ℳ(\stackrel{}{x})`$. As in the MCS deconvolution, $`ℛ(\stackrel{}{x})`$ is the adopted shape of the PSF after deconvolution. The $`\chi ^2`$ can be written as follows:
$`\chi _ℳ^2`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}{\displaystyle \sum _{i=1}^{N}}{\displaystyle \frac{1}{\sigma _{i,j}^2}}\left[ℛ(\stackrel{}{x})∗ℳ_j(\stackrel{}{x}-\stackrel{}{c_j})-𝒯_j(\stackrel{}{x}-\stackrel{}{c_j})\right]_i^2`$ (9)
where the $`i`$ and $`j`$ indices are respectively running along the spatial and spectral directions. As for the deconvolution, it is highly desirable to use any prior knowledge available on the shape and position of the PSF spectrum. The center $`c_j`$ of the spectrum at wavelength $`j`$ is highly correlated to the position at neighbouring wavelength. The same constraint can be applied to the shape ($`b`$ and $`\beta `$ in equation 8) of the Moffat profile. Such constraints are taken into account by minimizing equation (9) together with the three constraints:
$`𝒢_1`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}\left[c_j-{\displaystyle \frac{1}{G}}{\displaystyle \sum _{j^{\prime }=-W/2}^{W/2}}g_{j^{\prime }}c_{j+j^{\prime }}\right]^2`$ (10)
$`𝒢_2`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}\left[b_j-{\displaystyle \frac{1}{G}}{\displaystyle \sum _{j^{\prime }=-W/2}^{W/2}}g_{j^{\prime }}b_{j+j^{\prime }}\right]^2`$ (11)
$`𝒢_3`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}\left[\beta _j-{\displaystyle \frac{1}{G}}{\displaystyle \sum _{j^{\prime }=-W/2}^{W/2}}g_{j^{\prime }}\beta _{j+j^{\prime }}\right]^2`$ (12)
where the function $`g`$ is the same as in equations (5) and (6). Constructing the analytical Moffat component of the PSF can therefore be done by minimizing the function:
$`𝒞_1=\chi _ℳ^2+\mu _1𝒢_1+\mu _2𝒢_2+\mu _3𝒢_3`$ (13)
The strengths of the correlations introduced on the shape and position of the Moffat profile are modified by the three Lagrange multipliers $`\mu _1`$, $`\mu _2`$, $`\mu _3`$. While their choice obviously influences the quality of the fit, it is not a critical parameter. Indeed, a PSF is never a perfect Moffat profile and a numerical residual image has to be added to the analytical component of the PSF. Thus, the parameters $`\mu _1`$, $`\mu _2`$ and $`\mu _3`$ have to be chosen so that $`ℳ(\stackrel{}{x})`$ matches at best the PSF, but an additional numerical component is mandatory to build $`𝒮(\stackrel{}{x})`$ with the accuracy required for the deconvolution algorithm to work properly. This numerical image $`𝒩(\stackrel{}{x})`$ must not contain any structure of spatial frequency above the highest frequency contained in $`ℛ(\stackrel{}{x})`$. It is therefore constructed by minimizing:
$`𝒞_2`$ $`=`$ $`{\displaystyle \sum _{j=1}^{M}}{\displaystyle \sum _{i=1}^{N}}{\displaystyle \frac{\lambda }{\sigma _{i,j}^2}}\left[ℛ(\stackrel{}{x})∗𝒩_j(\stackrel{}{x})-𝒦_j(\stackrel{}{x})\right]_i^2`$ (14)
$`+`$ $`{\displaystyle \sum _{j=1}^{M}}{\displaystyle \sum _{i=1}^{N}}\left[𝒩_j(\stackrel{}{x})-ℛ(\stackrel{}{x})∗𝒩_j(\stackrel{}{x})\right]_i^2,`$
where,
$`𝒦(\stackrel{}{x})`$ $`=`$ $`𝒯(\stackrel{}{x})-\left[ℛ(\stackrel{}{x})∗ℳ(\stackrel{}{x})\right]`$ (15)
is the numerical component of the PSF.
The Lagrange parameter $`\lambda `$ is chosen so that $`ℛ(\stackrel{}{x})∗𝒩(\stackrel{}{x})`$ matches at best $`𝒦(\stackrel{}{x})`$ in the sense of a $`\chi ^2`$ and the final PSF is simply the sum of $`ℳ(\stackrel{}{x})`$ and $`𝒩(\stackrel{}{x})`$.
The result of the process is a PSF $`𝒮(\stackrel{}{x})`$ which incorporates seeing variations as a function of wavelength and takes into account atmospheric refraction. Using such a PSF for the deconvolution of spectra affected by the same atmospheric refraction therefore leads to a deconvolved spectrum corrected both for seeing variation and for atmospheric refraction.
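A minimal sketch of the analytical component: the Moffat profile of equation (8), fitted here to each spectral element independently (the convolution by $`ℛ`$ and the smoothness constraints of equations (10)-(12) are omitted, so this would only provide a starting point for the full minimization of equation (13)):

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat(x, a, b, c, beta):
    """1-D Moffat profile of equation (8)."""
    return a * (1.0 + b * (x - c) ** 2) ** (-beta)

def fit_moffat_rows(psf_spectrum, x):
    """Independent per-wavelength fits to a PSF-star spectrum."""
    fits = []
    for row in psf_spectrum:                           # one element j per row
        p0 = (row.max(), 0.1, x[np.argmax(row)], 2.5)  # rough initial guess
        popt, _ = curve_fit(moffat, x, row, p0=p0)
        fits.append(popt)
    return np.array(fits)
```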
One may first think that obtaining the spectrum of a reference star is a serious limitation to the technique. However, while for some applications (e.g., multiple QSOs) long slit spectroscopy may be difficult since a suitable PSF star well aligned with the different objects of interest might not be available, observing with a Multi Object Spectrograph (MOS) will in most cases allow one not only to observe the blended objects, but also to obtain simultaneously the spectra of one or more PSFs. Observing several PSF stars has the further advantage of allowing substantial improvement of the spatial sampling. Higher spatial resolution can then be achieved as well as a more accurate point source/background separation. In any case, either in long slit spectroscopy or MOS, particular care should be paid to the centering of objects and PSFs on the slit(s). The slit edges clip the PSF’s wings. Although our deconvolution procedure can handle this, clipping has to be similar in the object spectrum and in the PSF spectrum. Observing with wide slits will minimize the effect of centering errors and PSF clipping.
## 5 Limitations: signal-to-noise and sampling
There are limitations to any algorithm, and the present one is no exception. It should also be understood that, while improvement of the data is the aim, the algorithm cannot extract non-existing information.
Our simulations show that high S/N data are usually required to achieve accurate point-source/background separation, especially when dealing with strong blends. In the most extreme case of two objects exactly superposed, e.g., a QSO and its host galaxy, the quality of the decomposition also depends on the physical size of the extended object projected on the plane of the sky, as compared with the seeing value. As clearly there is no way to separate two or more PSFs located exactly at the same position, the main limitation in such a case remains the seeing of the observations.
In addition to the seeing, the spatial sampling of the data also influences the results. On 4m class telescopes pixel sizes tend to be large in order to beat readout noise and to observe faint objects. A common pixel size is 0.25′′ or more, which often leads to poor or even critical sampling, in the case of near-IR spectrographs or space instruments. As a consequence, the gain in spatial resolution is often limited to less than a factor 2 for data taken with present day spectrographs. However, the situation is improving, as 8m class telescopes have much smaller pixel sizes, of the order of 0.1′′. In addition, our algorithm can make use of an oversampled PSF (in the spatial direction) which is obtained from the spectra of several PSF stars. Such observations are possible in MOS mode. The spatial information needed to restore the PSF spectrum on a grid of pixels smaller than in the original data is then available, as the PSF spectra are not all centered in the same way, relative to the central pixel of each slitlet. According to the sampling theorem, the gain in resolution is only limited by the sampling in the deconvolved spectrum, not by the sampling of the original data. This sampling can in principle be as small as the user wishes, but we restrict ourselves to a factor two gain. The number of PSF stars to be observed simultaneously to improve further the sampling would be too large. This still allows substantial improvement of the spatial resolution, even for critically sampled data. Note finally that even if the use of an oversampled PSF allows one to deconvolve critically sampled spectra, better sampling (FWHM $`\sim `$ 5-6 pixels) leads, as one should expect, to much more reliable results, in particular in view of accurate background/point source separation.
## 6 Conclusions
We have described a new method for the spatial deconvolution of spectra which can be used not only to de-blend point sources, but also to decompose spectra into a sum of point sources and extended sources. We have shown from realistic simulations that the relative flux distribution in such deconvolved spectra is very well recovered, hence making it possible to perform accurate spectrophotometric measurements of very blended objects. In our simulations, we resolve and extract the individual spectra of sources separated by only 2 pixels, under seeing conditions of 4 pixels FWHM. For modern spectrographs mounted on 8-10m class telescopes, this is equivalent to sources separated by 0.2-0.3′′ under 0.4-0.6′′ seeing conditions. The signal-to-noise required for the method to work efficiently is of the order of 200, which is presently easy to reach in a few hours integration time, even down to magnitudes of the order of 21-22.
Clearly, the new extension of the MCS image deconvolution algorithm has a wide field of applications (see Courbin et al. 1999 for more details on how to use the algorithm in practice). The most original and promising applications may consist in spectroscopic studies involving extended objects hidden by – often brighter – point sources. Our simulations show an example of application to gravitationally lensed quasars, where the redshift of a very faint lensing galaxy can be measured, hence making it possible to estimate $`H_0`$ from multiply imaged QSOs with known time delays. A similarly interesting application will be to take full advantage of the ability of the algorithm to decompose spectra, in order to carry out the first systematic spectroscopic study of quasar host galaxies. With current instrumentation mounted on 8-10m class telescopes, sufficiently high signal-to-noise spectra can be obtained for low redshift quasars in order to derive precise rotation curves of their host galaxy, provided the spectrum of the bright QSO nucleus can be removed accurately. The present spectra deconvolution algorithm is very well suited for such a purpose and may therefore allow significant progress towards the measurement of the mass of the central black hole in QSO host galaxies.
APPENDIX
We assume that the spatial deconvolution of spectra simplifies to a number of independent deconvolutions of quasi-monochromatic 1-D images. This assumption makes sense if (i) the PSF is stable in the spatial direction, (ii) it varies slowly with wavelength, i.e., it does not show any significant changes in the spectral direction on a length scale comparable to the seeing disk at a given wavelength, and (iii) the PSF is separable.
Conditions (i) and (ii) are usually fulfilled provided PSF stars can be found close to the object to deconvolve and provided very low spectral resolution is not aimed at. Condition (iii) might be more difficult to fulfill exactly. Using the same notations as in the main body of the paper, but this time in 2 dimensions, we note $`𝒮(x,y)`$ the (narrower) PSF at pixel $`(x,y)`$, where $`x`$ is in the spatial direction and $`y`$ is in the spectral direction. $`𝒮`$ is separable if it can be written as
$`𝒮(x,y)`$ $`=`$ $`𝒫_y(x)𝒦(y),`$ (16)
where $`𝒫_y(x)`$ and $`𝒦(y)`$ are two 1-D spatial distributions. The index $`y`$ refers to possible variations of the PSF $`𝒫_y(x)`$ with wavelength. In other words, $`𝒮(x,y)`$ is not necessarily symmetric about its center but, if elongated, must have its major axis parallel (or perpendicular) to the slit. In practice, this means that the algorithm may not be fully applicable if significant tracking errors affect the data.
In the following we will assume that the instrument used to take the data has a decent optical quality, operates at relatively high spectral resolution and has a reliable tracking system. We can then write the (noise free) intensity of a data pixel $`𝒟(x,y)`$ as
$`𝒟(x,y)=𝒮_y(x,y)∗ℱ(x,y)`$ (17)
where $`ℱ(x,y)`$ is the signal of scientific interest. Moderately high spectral resolution ensures us that $`𝒮_y(x,y)`$ does not vary too fast with wavelength. We can therefore assume that it is constant over a spatial area approximately equal to the seeing disk. We can now consider only one 1-D spectral resolution element $`y`$ so that $`𝒮_y(x,y)`$ can be simplified to $`𝒮(x,y)`$. Blurring by the spectral PSF of the spectrograph $`𝒲(y)`$ leads to
$`𝒟(x,y)={\displaystyle \int \left[𝒮∗ℱ\right](x,y^{\prime })𝒲(y-y^{\prime })dy^{\prime }}`$ (18)
or more explicitly,
$`𝒟(x,y)={\displaystyle \int \int 𝒮(x-x^{\prime \prime },y^{\prime }-y^{\prime \prime })ℱ(x^{\prime \prime },y^{\prime \prime })dx^{\prime \prime }dy^{\prime \prime }𝒲(y-y^{\prime })dy^{\prime }}`$ (19)
Now, we can write,
$$𝒬(x-x^{\prime \prime },y-y^{\prime \prime })=\int 𝒮(x-x^{\prime \prime },y^{\prime }-y^{\prime \prime })𝒲(y-y^{\prime })dy^{\prime }$$
to obtain,
$`𝒟(x,y)=𝒬(x,y)∗ℱ(x,y)`$ (20)
Therefore, if $`𝒮(x,y)`$ is separable, $`𝒬(x,y)`$ is also separable, so that we can finally write $`𝒟(x,y)`$ as the convolution of a spectrum with a 1-D profile in the spatial direction:
$`𝒟(x,y)`$ $`=`$ $`𝒫(x)∗\left[𝒦(y)∗ℱ(x,y)\right]`$ (21)
with
$`𝒬(x,y)=𝒫(x)𝒦(y)`$ (22)
The function $`𝒦(y)∗ℱ(x,y)`$ is the spectrum of scientific interest, and $`𝒫(x)`$ has here the same role as the 1-D PSF $`𝒮(x)`$ used in the algorithm.
F. Courbin acknowledges financial support from Chilean grant FONDECYT/3990024. M. Kirkove and S. Sohy are supported by contracts ARC 94/99-178 “Action de Recherche Concertée de la Communauté Française (Belgium)” and “Pôle d’Attraction Interuniversitaire” P4/05 (SSTC Belgium). We also thank the European Southern Observatory for additional support.
# Worst fluctuation method for fast Value-at-Risk estimates
## 1 Introduction
A very important issue for the control of risk of complex portfolios, which involve many non-linear assets such as options, interest rate derivatives, etc., is to be able to estimate reliably the Value-at-Risk, or equivalently the probability of large downward moves, deep in the tails of the probability distributions . This is a difficult problem, since both the non-Gaussian nature of the fluctuations of the underlying assets and the non-linear dependence of the price of the derivatives must be dealt with. A solution to cope with non-linearity is to use Monte-Carlo simulations based on Gaussian multivariate statistics for the time evolutions of all the assets underlying the portfolio. This solution is however time consuming (especially to obtain good statistics in the tails of the distribution) and cannot be used for fast VaR, or $`\mathrm{\Delta }`$VaR calculations, which are important for real time estimates of the influence of a particular trade on the global exposure of a portfolio. More importantly, this method is not reliable because of the strongly non-Gaussian nature of the extreme moves. ‘Fat tails’ effects are well known and lead to a significant increase of the VaR estimate, even in the simplest case of a linear portfolio, for example containing stocks only. These fat tails can be further amplified by the non linear nature of the relation between derivative products and the underlying assets, thereby leading to very large differences between a Gaussian VaR estimate and reality.
The aim of this paper is to introduce a method, called the ‘optimal fluctuation method’ in the physics literature . This method is well suited to estimate large risks in the case where the fluctuations of the ‘explicative factors’ are strongly non-Gaussian (a more precise statement will be made below). An approximate formula can be obtained for the Value-at-Risk of a general non linear portfolio. This formula can easily be implemented numerically. The basic idea is to identify the ‘most dangerous’ market configuration for a given portfolio. In the case of strongly non Gaussian fluctuations, the largest moves of the portfolio correspond to a large change in one explicative factor, accompanied by the simultaneous ‘typical’ changes of all the others. In a Gaussian world, on the contrary, the large moves of a portfolio correspond to a ‘conspiracy’, where all factors coherently change by a small amount.
## 2 Fat-tailed explicative factors
Let us assume that the variations of the value of the portfolio can be written as a function $`df(e_1,e_2,\mathrm{},e_M)`$ of a set of $`M`$ independent random variables $`e_a`$, $`a=1,\mathrm{},M`$, called ‘explicative factors’. These factors can be determined by a classical Principal Component Analysis, where the correlation matrix of all assets’ increments is diagonalized. However, it might be more useful for our purpose to consider other definitions of the correlation, more suited to tail events (see, e.g. ). On short time scales, most relevant for VaR estimates, all trend effects are negligible, and we shall therefore set $`\langle e_a\rangle =0`$ and $`\langle e_ae_b\rangle =\delta _{a,b}\sigma _a^2`$, where $`\langle \mathrm{\dots }\rangle `$ denotes the average over the relevant probability distribution. The sensitivity of the portfolio to these ‘explicative factors’ can be measured as the derivatives of the value of the portfolio with respect to the $`e_a`$. We shall therefore introduce the different $`\mathrm{\Delta }`$’s and $`\mathrm{\Gamma }`$’s as:
$$\mathrm{\Delta }_a=\frac{\partial f}{\partial e_a}\qquad \mathrm{\Gamma }_{a,b}=\frac{\partial ^2f}{\partial e_a\partial e_b}.$$
(1)
We are interested in the probability for a large fluctuation $`df^{*}`$ of the portfolio. We will first surmise that this is due to a particularly large fluctuation of one explicative factor – say $`a=1`$ – that we will call the ‘dominant’ factor. (The generalization to several factors will be discussed below). Note that this is not always true, and depends on the statistics of the fluctuations of the $`e_a`$. A condition for this assumption to be true will be discussed below, and requires in particular that the tail of the dominant factor should not decrease faster than an exponential. Fortunately, this is a good assumption in financial markets, but would be completely wrong for Gaussian statistics. Note also that the dominant factor depends a priori, via the $`\mathrm{\Delta }`$’s, on the portfolio’s composition.
## 3 The dominant factor approximation
The aim is to compute the Value-at-Risk of a certain portfolio. This is defined as the value $`df^{*}`$ such that the probability $`𝒫_>(df^{*})`$ (defined as the cumulative probability that the variation of $`f`$ exceeds $`df^{*}`$) is equal to a certain probability $`p`$ – say $`1\%`$ for a $`99\%`$ confidence VaR. Our assumption about the existence of a dominant factor means that these events correspond to a market configuration where the fluctuation $`\delta e_1`$ is large, while all other factors are relatively small. Therefore, the large variations of the portfolio can be approximated as:
$$df(e_1,e_2,\mathrm{},e_M)=df(e_1)+\underset{a=2}{\overset{M}{}}\mathrm{\Delta }_ae_a+\frac{1}{2}\underset{a,b=2}{\overset{M}{}}\mathrm{\Gamma }_{a,b}e_ae_b,$$
(2)
where $`df(e_1)`$ is a shorthand notation for $`df(e_1,0,\mathrm{}.0)`$, and all the derivatives are calculated at the point $`(e_1,0,\mathrm{}.0)`$. Now, we use the fact that:
$$𝒫_>(df^{*})=\int \prod _{a=1}^{M}de_aP(e_1,e_2,\mathrm{\dots },e_M)\mathrm{\Theta }(df(e_1,e_2,\mathrm{\dots },e_M)-df^{*}),$$
(3)
where $`\mathrm{\Theta }(x>0)=1`$ and $`\mathrm{\Theta }(x<0)=0`$ is the Heaviside function. We now expand the $`\mathrm{\Theta }`$ function to second order, and perform the integration over the $`e_a`$’s ($`a>1`$), to finally obtain (see for details):
$$𝒫_>(df^{*})=𝒫_>(e_1^{*})+\sum _{a=2}^{M}\frac{\mathrm{\Gamma }_{a,a}^{*}\sigma _a^2}{2\mathrm{\Delta }_1^{*}}P(e_1^{*})-\sum _{a=2}^{M}\frac{\mathrm{\Delta }_a^2\sigma _a^2}{2\mathrm{\Delta }_1^2}\left(P^{\prime }(e_1^{*})+\frac{\mathrm{\Gamma }_{1,1}^{*}}{\mathrm{\Delta }_1^{*}}P(e_1^{*})\right),$$
(4)
where $`e_1^{*}`$ is such that $`df(e_1^{*})=df^{*}`$, and $`\mathrm{\Delta }_1^{*}`$ is computed for $`e_1=e_1^{*}`$, $`e_{a>1}=0`$ and where $`P(e_1)`$ is the marginal probability distribution of the first factor. This probability density can be estimated empirically, and fitted to one of the possible distributions known to describe financial data well, such as a Truncated Lévy, Hyperbolic, or Student distribution .
In order to find the Value-at-Risk $`df^{*}`$, one should thus solve (4) for $`e_1^{*}`$ with $`𝒫_>(df^{*})=p`$, and then compute $`df(e_1^{*},0,\mathrm{},0)`$. Note that the equation is not trivial since the Greeks must be estimated at the solution point $`e_1^{*}`$. In practice, the dominant factor can be found by trial and error, by computing $`e_a^{*}`$ for all $`M`$ explicative factors, and choosing the one that leads to the largest VaR.
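For the linear case of the next section, this trial-and-error search might be sketched as follows (assuming unit-scale Student factors with $`\nu =4`$ degrees of freedom, close to the simulations of Section 5, and using the analytic derivative of the Student density):

```python
from scipy.stats import t as student
from scipy.optimize import brentq

NU = 4  # Student tail exponent, as in the numerical tests below

def p_prime(e):
    """Derivative of the unit-scale Student-t pdf with NU dof."""
    return student.pdf(e, NU) * (-(NU + 1) * e / (NU + e ** 2))

def var_linear(deltas, p=0.01):
    """Solve equation (5) for each candidate dominant factor and keep
    the worst loss.  Unit factor scales are assumed (sigma_a = 1)."""
    worst = 0.0
    for a, d_a in enumerate(deltas):
        corr = sum(d ** 2 for b, d in enumerate(deltas) if b != a)
        corr /= 2.0 * d_a ** 2
        f = lambda e: student.sf(e, NU) - corr * p_prime(e) - p
        e_star = brentq(f, 0.5, 50.0)           # root of equation (5)
        worst = max(worst, abs(d_a) * e_star)   # df* ~ Delta_a e_a*
    return worst

# Example with the 'L' portfolio of Section 5:
# var_linear([1.0, 0.5, 0.2, 0.05], p=0.01)
```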
## 4 Discussion of the result
Let us discuss the general result (4) in the simple case of a linear portfolio of assets, such that no convexity is present: the $`\mathrm{\Delta }_a`$’s are constant and the $`\mathrm{\Gamma }_{a,a}`$’s are all zero. The equation then takes the following simpler form:
$$𝒫_>(e_1^{*})-\sum _{a=2}^{M}\frac{\mathrm{\Delta }_a^2\sigma _a^2}{2\mathrm{\Delta }_1^2}P^{\prime }(e_1^{*})=p.$$
(5)
Naively, one could have thought that in the dominant factor approximation, the value of $`e_1^{*}`$ would be the Value-at-Risk value of $`e_1`$ for the probability $`p`$, defined as:
$$𝒫_>(e_{1,VaR})=p.$$
(6)
However, the above equation shows that there is a correction term proportional to $`P^{\prime }(e_1^{*})`$. Since the latter quantity is negative, one sees that $`e_1^{*}`$ is actually larger than $`e_{1,VaR}`$, and therefore $`df^{*}>df(e_{1,VaR})`$. This reflects the effect of all other factors, which tend to increase the Value-at-Risk of the portfolio.
The result obtained above relies on a second order expansion; when are higher order corrections negligible? It is easy to see that higher order terms involve higher order derivatives of $`P(e_1)`$. A condition for these terms to be negligible in the limit $`p\to 0`$, or $`e_1^{*}\to \mathrm{\infty }`$, is that the successive derivatives of $`P(e_1)`$ become smaller and smaller. This is true provided that $`P(e_1)`$ decays more slowly than exponentially, for example as a power-law. In this case, which corresponds to financial reality , each term in the expansion is a factor $`1/e_1^{*}`$ smaller than the previous one, which indeed becomes negligible in the limit $`p\to 0`$, $`e_1^{*}\to \mathrm{\infty }`$. On the contrary, when $`P(e_1)`$ decays faster than exponentially (for example in the Gaussian case), then the expansion proposed above completely loses its meaning, since higher and higher corrections become dominant when $`p\to 0`$. This is indeed expected: in a Gaussian world, a large event results from the accidental superposition of many small events; the idea of expanding around one large event is therefore not adapted. In a power-law world, large events are associated to one single large fluctuation which dominates over all the others, and the above method is congenial for fast and precise VaR estimates. The limiting case where $`P(e_1)`$ decays as an exponential is interesting, since it is often a good approximation for the tail of the fluctuations of financial assets. Taking $`P(e_1)\simeq \alpha _1\mathrm{exp}(-\alpha _1|e_1|)`$, one finds that $`e_1^{*}`$ is the solution of:
$$e_1^{*}=\frac{1}{\alpha _1}\left[\mathrm{log}\frac{1}{p}+\mathrm{log}\left(1+\sum _{a=2}^{M}\frac{\mathrm{\Delta }_a^2\alpha _1^2\sigma _a^2}{2\mathrm{\Delta }_1^2}\right)\right].$$
(7)
Since one has $`\sigma _1^2\sim \alpha _1^{-2}`$, the correction term is small provided that the variance of the portfolio generated by the dominant factor is much larger than the sum of the variances of all other factors.
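A quick numerical illustration of equation (7), with purely illustrative numbers (not taken from the text): for $`\alpha _1=1`$, $`p=1\%`$ and subdominant factors contributing half of the dominant variance, the correction raises the naive exponential VaR by about 9%.

```python
import numpy as np

alpha_1, p = 1.0, 0.01
correction = 0.5  # sum of Delta_a^2 alpha_1^2 sigma_a^2 / (2 Delta_1^2)

e_naive = np.log(1.0 / p) / alpha_1                          # ~4.61
e_star = (np.log(1.0 / p) + np.log1p(correction)) / alpha_1  # ~5.01
```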
Coming back to equation (4), one expects that if the dominant factor is correctly identified, and if the distribution is such that the above expansion makes sense, an approximate solution is given by $`e_1^{*}=e_{1,VaR}+ϵ`$, with:
$$ϵ\simeq \sum _{a=2}^{M}\frac{\mathrm{\Gamma }_{a,a}\sigma _a^2}{2\mathrm{\Delta }_1}-\sum _{a=2}^{M}\frac{\mathrm{\Delta }_a^2\sigma _a^2}{2\mathrm{\Delta }_1^2}\left(\frac{P^{\prime }(e_{1,VaR})}{P(e_{1,VaR})}+\frac{\mathrm{\Gamma }_{1,1}}{\mathrm{\Delta }_1}\right),$$
(8)
where now all the Greeks are estimated at $`e_{1,VaR}`$.
## 5 Numerical tests and improved approximation schemes
We have tested numerically the above idea by calculating the VaR of simple portfolios which depend on four independent factors $`e_1,\mathrm{},e_4`$, that we chose to be independent Student variables of unit variance with a tail exponent $`\mu =4`$. We have considered two different portfolios: ‘linear’ (L), whose variations are given by $`dℒ=e_1+\frac{1}{2}e_2+\frac{1}{5}e_3+\frac{1}{20}e_4`$, and ‘quadratic’ (Q), whose variations are given by $`d𝒬=dℒ+(dℒ)^2`$. We have determined the $`99\%`$, $`99.5\%`$ and $`99.9\%`$ VaR of these portfolios both using a Monte-Carlo (mc) scheme and the above approach. The results are summarized in Table 1, where we give the simple estimate based on the VaR of the most dangerous factor $`e_{1,VaR}`$ (called Th. 0) and the estimate which includes the contribution of the other factors $`e_1^{*}`$ (called Th. 1). For both L and Q, one sees that the latter estimate represents a significant improvement over the naive $`e_{1,VaR}`$ estimate. On the other hand, our calculation still underestimates the mc result, in particular for the Q portfolio. This can be traced back to the fact that there are actually other dangerous market configurations which contribute to the VaR for this particular choice of parameters. Our formalism can however easily be adapted to the case where two (or more) dangerous configurations need to be considered. The general equations read:
$$𝒫_{>a}=𝒫_>(e_a^{*})+\sum _{b\ne a}^{M}\frac{\mathrm{\Gamma }_{b,b}^{*}\sigma _b^2}{2\mathrm{\Delta }_a^{*}}P(e_a^{*})-\sum _{b\ne a}^{M}\frac{\mathrm{\Delta }_b^2\sigma _b^2}{2\mathrm{\Delta }_a^2}\left(P^{\prime }(e_a^{*})+\frac{\mathrm{\Gamma }_{a,a}^{*}}{\mathrm{\Delta }_a^{*}}P(e_a^{*})\right),$$
(9)
where $`a=1,\mathrm{},K`$ are the $`K`$ different dangerous factors. The $`e_a^{*}`$, and therefore $`df^{*}`$, are determined by the following $`K`$ conditions:
$$df^{*}(e_1^{*})=df^{*}(e_2^{*})=\mathrm{\dots }=df^{*}(e_K^{*});\qquad 𝒫_{>1}+𝒫_{>2}+\mathrm{\dots }+𝒫_{>K}=p.$$
(10)
We have computed the VaR for the different portfolios in the $`K=2`$ approximation. The results are reported in Table 1 under the name ‘Th. 2’; one sees that the agreement with the mc simulations is improved, and is actually excellent in the linear case, where both the configurations $`e_1`$ positive and large and $`e_2`$ positive and large contribute. In the quadratic case, the two most dangerous configurations are $`e_1`$ positive and large and $`e_1`$ negative and large. In order to get perfect agreement with the Monte-Carlo result, one should extend the calculation to $`K=3`$, taking into account the configuration where $`e_2`$ is positive and large.
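The Monte-Carlo reference values of Table 1 can be reproduced in spirit with a few lines (a sketch; the factors are rescaled so that the Student variables have unit variance, and the VaR is read off as an upper quantile of the portfolio variations, consistent with the definition of $`𝒫_>`$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Unit-variance Student factors (mu = 4): a standard t(4) has variance 2.
e = rng.standard_t(4, size=(n, 4)) / np.sqrt(2.0)
dL = e @ np.array([1.0, 0.5, 0.2, 0.05])   # 'linear' portfolio variations
dQ = dL + dL ** 2                          # 'quadratic' portfolio variations

for name, df in [("L", dL), ("Q", dQ)]:
    print(name, np.quantile(df, [0.99, 0.995, 0.999]))
```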
## 6 Conclusion
In summary, we have shown how one can actually take advantage of the strongly non-Gaussian nature of the fluctuations of financial assets to simplify the analytic calculation of the Value-at-Risk of complicated non linear portfolios. The resulting equations are not hard to solve numerically, and should allow fast VaR and $`\mathrm{\Delta }`$VaR estimates of large portfolios, where by construction the influence of rare events is taken into account reliably. This method is a short-cut to Monte-Carlo methods, in the sense that we directly identify the events which are most relevant for extreme risks, without having to ‘wait’ for them to appear in a Monte-Carlo sampling. Interestingly, the calculation allows one to visualize directly the ‘dangerous’ market configurations which corresponds to these extreme risks, by re-expressing all the real assets variations in terms of the dominant factors. In this sense, our method corresponds to a correctly probabilized ‘scenario’ calculation.
### Acknowledgments
We thank J.P. Aguilar, P. Cizeau, L. Laloux, A. Miens and A. Matacz for enlightening discussions. This work has been supported by the software company atsm, with which this idea is currently being implemented.
# Gamma-Ray Bursts as Internal Shocks Caused by Deceleration
## 1 INTRODUCTION
Gamma-ray bursts (GRBs) are characterized by chaotic time histories which are often followed by x-ray, optical, and radio afterglows (Metzger et al. (1997); Costa et al. (1997); Frail et al. (1997)). The optical afterglows have shown redshifted absorption features which firmly established a cosmological distance scale for the events. The distance implies that GRBs emit on the order of $`10^{52}`$ to $`10^{54}`$ erg (assuming isotropic emission). GRBs are also often characterized by emission up to 100 MeV with occasional reports of emission up to 10 GeV (Hurley et al. (1994)). Given the photon density implied by the cosmological distances, photons above $`\sim `$ 1 MeV would be destroyed by photon-photon attenuation if the emission is isotropic in our rest frame. Large relativistic bulk motion (Lorentz factors of $`\gtrsim `$ 100) allows for a much larger emitting surface combined with relativistic beaming that reduces the photon-photon attenuation (Fenimore, Epstein, & Ho (1993)).
The high Lorentz factor plays a crucial role in virtually all models of GRBs. Originally, the prime suspect for the source of $`10^{52}`$ erg was a neutron star-neutron star merger (Paczynski (1986)). However, such mergers were thought to occur on timescales (a few millisec) much shorter than GRB timescales (up to $`10^3`$ sec). Mészáros & Rees (1993) suggested that a relativistic shell would be formed by the initial release of energy. The shell could emit for a long time ($`10^7`$ sec). If the shell is mostly moving directly at the observer, the shell stays close to the photons it emits such that they all arrive at a detector over a short period of time. If the shell moves with velocity $`v`$, then photons emitted over a duration $`\mathrm{\Delta }t`$ arrive at the detector compressed into a duration of only $`(c-v)\mathrm{\Delta }t/c\simeq \mathrm{\Delta }t/(2\mathrm{\Gamma }^2)`$ where $`\mathrm{\Gamma }`$ is the bulk Lorentz factor = $`(1-\beta ^2)^{-1/2}`$ and $`\beta =v/c`$. The shell would emit due to the formation of an “external” shock when the shell decelerates by sweeping up the interstellar medium (ISM). In this explanation, density variations in the ISM cause the observed time structure. The deceleration is expected to occur at
$$R_{\mathrm{dec}}=5(\rho E_0)^{1/3}\mathrm{\Gamma }_0^{-2/3}\mathrm{cm}$$
(1)
where $`\rho `$ is the ambient density (in cm⁻³), $`E_0`$ is the total energy (in erg) generated by the central site, and $`\mathrm{\Gamma }_0`$ is the initial Lorentz factor (Rees & Mészáros (1994)). For typical values such as $`\rho =1`$ cm⁻³, $`E_0=10^{53}`$ erg, and $`\mathrm{\Gamma }_0=100`$, the deceleration occurs at about $`10^{17}`$ cm. The initial Lorentz factor is set by the baryon loading, that is
$$E_0=\mathrm{\Gamma }_0m_0c^2$$
(2)
where $`m_0`$ is the mass of the shell, presumably carried by the baryons.
An alternative explanation is that the central site produced a series of shells. Collisions between shells produce the gamma rays through internal shocks (Rees & Mészáros (1994)). The faster shells catch up with the slower shells. The collision radius is roughly
$$R_{\mathrm{col}}=c\mathrm{\Gamma }^2\mathrm{\Delta }T$$
(3)
where $`\mathrm{\Delta }T`$ is a typical time of variation in a GRB (Rees & Mészáros (1994)). For typical values such as $`\mathrm{\Delta }T`$ = 0.1 to 1.0 s, $`R_{\mathrm{col}}`$ is about $`10^{15}`$ cm. The observed duration of the GRB is set by the duration of the activity at the central site.
There are a series of arguments that indicate that the gamma-ray phase is not caused by external shocks. These arguments are all related to the fact that the size of the shell at the deceleration radius is much larger than a causally connected region on the shell. For example, precursors and gaps require large causally disconnected regions to coordinate their activity (Fenimore, Madras, & Nayakshin (1996)). The observed variability implies that only a small fraction of the shell emits (Fenimore et al. (1996); Sari & Piran (1997)), and the average profile of many GRBs is inconsistent with that expected from a shell (Fenimore (1999)). Finally, the constancy of the pulse width throughout the bursts indicates that we are not seeing a range of angles on a shell and that the shell is not decelerating (Ramirez-Ruiz & Fenimore 1999a ). A single decelerating shell would produce pulses that get progressively wider.
However, external shocks have been very successful in explaining the x-ray, optical, and radio afterglows (see review by, e.g., Piran (1999)). These afterglows usually show power law decays and spectral variability expected from a decelerating shell. Thus, a general picture has formed where the central site produces multiple shells for tens of seconds. These shells collide, producing the gamma-ray phase by internal shocks and then merge into a single shell which interacts with the ISM to produce the afterglows (Sari & Piran (1997)). One-dimensional hydrodynamical calculations have reproduced many of the features of the spectral evolution (Panaitescu & Mészáros (1998)). There have been several detailed calculations of what is expected from the internal shocks. Mochkovitch, Maitia, & Marques (1995), Kobayashi, Piran, & Sari (1997), and Daigne & Mochkovitch (1998) used Monte Carlo calculations of internal shocks where the $`\mathrm{\Gamma }`$, mass, time, and thickness of multiple shells are picked randomly to demonstrate that internal shocks can produce the variability seen in GRBs. By following their trajectories, it is determined when they collide. The duration of the resulting simulated GRBs is effectively the duration of the activity at the central site. The rise of each pulse depends on the time for a reverse shock to cross the shell and the fall depends on the curvature of the shell at the radius of interaction (Kobayashi, Piran, & Sari (1997)). The radius of interaction depends on the Lorentz factors and the amount of time between the production of the shells at the central site. Indeed, the resulting time histories bear some similarities to the observed bursts.
The strong optical emission discovered by the Robotic Optical Transient Search Experiment, ROTSE (Akerlof et al. (1999)), has served as an excellent test case for the external shock model. Sari & Piran (1999) and Mészáros & Rees (1999) fit forward and reverse external shocks and had excellent agreement with the time of the optical peak, the rise and fall times, the overall magnitude, and the break in the decay phase.
## 2 The Role of $`\mathrm{\Gamma }`$
The Lorentz factor is not well determined observationally. The lack of apparent photon-photon attenuation up to $`\sim `$ 100 MeV implies only a lower limit of $`\sim 100`$ (Fenimore, Epstein, & Ho (1993)). The $`\mathrm{\Gamma }`$ determined by Sari & Piran (1999) for GRB990123 depended on some parameters adopted from Wijers & Galama (1999) for GRB970508, but gave a similarly low value of $`\mathrm{\Gamma }=200`$. However, there have been recent reports of possible TeV emission from GRBs (Leonor et al. (1999)) implying that $`\mathrm{\Gamma }`$ may be much larger. Also, if $`\mathrm{\Gamma }`$ is small, the efficiency of internal shocks is small, of the order of 10%.
The first shell starts to decelerate when it has swept up $`\mathrm{\Gamma }^{-1}`$ of its initial mass. Thus, a larger $`\mathrm{\Gamma }`$ means the deceleration will occur at a smaller radius. But a larger $`\mathrm{\Gamma }`$ means that the multiple shells will collide at a larger radius. Combining equations (1) and (3), internal shocks will occur at about the same radii as the deceleration if $`\mathrm{\Gamma }>10^{-10}(\rho E_0)^{1/4}`$. From the few observed redshifts, we know that GRBs have a rather large range of fluences, with typical values between $`10^{52}`$ and $`10^{54}`$ erg for isotropic emission. We have little direct knowledge of the ambient density in the vicinity of a GRB, but it is reasonable to assume values of $`\rho `$ between 0.1 and 10.0, so $`\rho E_0`$ varies from $`10^{51}`$ to $`10^{55}`$ erg cm⁻³. For values of $`\mathrm{\Gamma }`$ of $`500`$ to $`3000`$, the internal shocks will occur about the same place as the deceleration.
Once the first shell decelerates (making an external shock), the rest of the shells will rapidly catch up to it resulting in rather efficient internal shocks. In previous models (e.g., Kobayashi et al. (1997)), $`\mathrm{\Gamma }`$ was about $`10^2`$ to $`10^4`$ but deceleration was not included. It was assumed that the internal shocks would form at small radii and later the merged shell would suffer deceleration and an external shock. Perhaps a few straggler shells would catch up to the first shell after it decelerated and rejuvenate it during the afterglow phase (Panaitescu, Mészáros, & Rees (1998)), but most of the gamma-rays were assumed to form at small radii relative to the external shock.
We propose that the typical Lorentz factor is large enough such that the first shell decelerates before all of the multiple shells have a chance to collide. The deceleration is very rapid once it starts, effectively equivalent to slamming on the brakes. The rest of the shells catch up and collide with it. Since the efficiency of converting bulk motion to radiation in an internal shock depends on the difference of the colliding $`\mathrm{\Gamma }`$’s, the fact that the first shell is decelerating implies that the efficiency will be higher than in previous models. Thus, the collisions are internal shocks, but the place and efficiency of many of the collisions are set by the deceleration.
## 3 Ingredients for a Model
In the internal shock model, multiple shells are generated by an unspecified process at a central site. The parameters of our model will be similar to those of Mochkovitch, Maitia, & Marques (1995), Kobayashi et al. (1997) and Daigne & Mochkovitch (1998), including the time the $`i`$-th shell was generated ($`t_{0i}`$), the initial width of the shell ($`l_i`$), the minimum and maximum initial Lorentz factor ($`\mathrm{\Gamma }_{\mathrm{min}},\mathrm{\Gamma }_{\mathrm{max}}`$), and the initial energy ($`E_i`$). Kobayashi et al. (1997) allowed for selecting the initial mass, energy, or density. However, all three gave similar results, and we will restrict ourselves to selecting the energy. The initial $`m_i`$ is then found from equation (2). Since Kobayashi et al. (1997) presented unitless intensities, it was unnecessary for them to specify $`E_i`$; the bulk energy is, however, necessary to set the deceleration. The energy release can be estimated from bursts with observed redshifts. GRB970508 had a peak luminosity, $`L`$, of $`3\times 10^{51}`$ erg s<sup>-1</sup>. Other GRBs have shown extreme redshifts (e.g., Kulkarni et al. (1999)), implying $`L2\times 10^{53}`$ erg s<sup>-1</sup>. We will uniformly select $`E_i`$ between $`E_{\mathrm{min}}`$ (=$`10^{49}`$ erg) and $`E_{\mathrm{max}}`$, and vary $`E_{\mathrm{max}}`$ from $`10^{51}`$ to $`10^{53.5}`$ erg.
We randomly select $`t_{0i+1}t_{0i}`$ from a Poisson distribution based on the rate of peak occurrence. Thus, we specify the duration of the activity at the central site ($`T_{\mathrm{dur}}`$) and the expected number of peaks ($`N`$) such that the actual number of peaks is random. Since they were presenting results in unitless time, Kobayashi et al. (1997) set the burst duration, shell separation, and the number of peaks to be constants. These differences are not important when there is no deceleration. We will use parameters that are roughly equivalent to Kobayashi et al. (1997), that is, $`N=85`$, $`T_{\mathrm{dur}}`$ = 60 s, and $`l_i`$ = 0.2 s. Bursts often have gaps, which implies that the activity at the central site can turn off for a while. To demonstrate the effects of turning off the central site, we impose a gap in the activity between $`T=20`$ and $`T=33`$ s.
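For concreteness, the shell generation can be sketched as follows. This is an illustrative Python transcription, not the authors' code: the uniform draws in $`\mathrm{\Gamma }`$ and $`E`$, the discarding of shells that fall inside the gap, and the relation $`m=E/(\mathrm{\Gamma }c^2)`$ (the text's eq. 2 is not reproduced here) are our assumptions; only the parameter values come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_shells(N=85, T_dur=60.0, gamma_min=100.0, gamma_max=3.2e4,
                    E_min=1e49, E_max=3.6e53, l=0.2, gap=(20.0, 33.0)):
    """Draw ejection times (Poisson process at rate N/T_dur), Lorentz
    factors, bulk energies [erg] and widths [s] for the shells; shells
    inside the imposed gap are dropped to mimic the engine switching off."""
    rate = N / T_dur
    t, t0 = 0.0, []
    while t < T_dur:
        t += rng.exponential(1.0 / rate)
        if t < T_dur and not (gap[0] < t < gap[1]):
            t0.append(t)
    n = len(t0)
    gamma = rng.uniform(gamma_min, gamma_max, n)
    E = rng.uniform(E_min, E_max, n)
    width = rng.uniform(0.0, l, n)
    mass = E / (gamma * 9e20)      # m = E / (Gamma c^2), grams (assumption)
    return np.array(t0), gamma, E, width, mass
```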
Until they collide with the first shell or with each other, the motion of every shell except the first is uniform, $`R_i(t)=c\beta _i(t-t_{0i})`$. If $`\mathrm{\Gamma }_i`$ is greater than $`\mathrm{\Gamma }_j`$, the two shells will collide at time $`t_{ij}`$ when $`R_i(t_{ij})=R_j(t_{ij})`$, which defines the collision radius (=$`R_c`$); the collision occurs at
$$t_{ij}=2\frac{\mathrm{\Gamma }_i^2\mathrm{\Gamma }_j^2}{\mathrm{\Gamma }_i^2-\mathrm{\Gamma }_j^2}\mathrm{\Delta }t_{0ij}$$
(4)
where $`\mathrm{\Delta }t_{0ij}=t_{0i}-t_{0j}`$. The resulting pulse arrives at a detector at the relative time of arrival
$$T_{\mathrm{toa}}=t_{ij}-R_c/c=t_{0i}+\frac{\mathrm{\Gamma }_j^2}{\mathrm{\Gamma }_i^2-\mathrm{\Gamma }_j^2}\mathrm{\Delta }t_{0ij}.$$
(5)
Thus, the relative time of arrival at a detector will have a close one-to-one relationship with the time the shell was created (i.e., $`t_{0i}`$).
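Equations (4) and (5) translate directly into code; a minimal Python sketch (function and variable names are ours):

```python
def collision(gamma_i, gamma_j, t0i, t0j):
    """Collision time t_ij (eq. 4) and detector arrival time T_toa (eq. 5)
    for a fast shell i (launched at t0i) catching a slower shell j
    (launched earlier at t0j); valid for Gamma_i > Gamma_j >> 1."""
    dt = t0i - t0j
    gi2, gj2 = gamma_i**2, gamma_j**2
    t_ij = 2.0 * gi2 * gj2 / (gi2 - gj2) * dt
    T_toa = t0i + gj2 / (gi2 - gj2) * dt
    return t_ij, T_toa
```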
In order to conserve both momentum and energy when shells collide, some of the bulk energy must be converted to internal energy which will be radiated away. If $`E_{\mathrm{rad}}`$ is the generated internal energy, conservation of energy dictates that
$$m_i\mathrm{\Gamma }_i+m_j\mathrm{\Gamma }_j=\left[m_{ij}+\frac{E_{\mathrm{rad}}}{c^2}\right]\mathrm{\Gamma }_{ij}$$
(6)
where $`\mathrm{\Gamma }_{ij}`$ is the Lorentz factor of the resulting shell and the resulting mass is $`m_{ij}=m_i+m_j`$. Conservation of momentum gives:
$$m_i\beta _i\mathrm{\Gamma }_i+m_j\beta _j\mathrm{\Gamma }_j=\left[m_{ij}+\frac{E_{\mathrm{rad}}}{c^2}\right]\beta _{ij}\mathrm{\Gamma }_{ij}$$
(7)
where, as usual, the $`\mathrm{\Gamma }`$ terms are related to the $`\beta `$ terms as $`\mathrm{\Gamma }=(1-\beta ^2)^{-1/2}`$. The post-collision $`\beta `$ is
$$\beta _{ij}=\frac{m_i\beta _i\mathrm{\Gamma }_i+m_j\beta _j\mathrm{\Gamma }_j}{m_i\mathrm{\Gamma }_i+m_j\mathrm{\Gamma }_j}$$
(8)
which has the approximate solution (Kobayashi et al. (1997))
$$\mathrm{\Gamma }_{ij}^2=\mathrm{\Gamma }_i\mathrm{\Gamma }_j\frac{m_i\mathrm{\Gamma }_i+m_j\mathrm{\Gamma }_j}{m_i\mathrm{\Gamma }_j+m_j\mathrm{\Gamma }_i}.$$
(9)
The first shell is decelerated by the ISM. We use equations (6) and (7) with $`\mathrm{\Gamma }_j=1`$ and $`m_j`$ equal to the mass swept up during the time step to determine the velocity of the first shell as a function of time.
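A compact implementation of the collision and deceleration bookkeeping of equations (6)-(9) might look as follows (Python; the isotropic swept-up-mass estimate in `decelerate_step` is our reading of the prescription above):

```python
import math

C2 = 9e20  # c^2 in cm^2 s^-2

def merge(m_i, g_i, m_j, g_j):
    """Inelastic shell collision, eqs. (6)-(9). Returns the merged Lorentz
    factor (approximate solution, eq. 9), the merged mass, and E_rad, the
    comoving internal energy of eq. (6) in erg (Gamma_ij * E_rad is the
    corresponding lab-frame energy budget)."""
    m_ij = m_i + m_j
    g_ij = math.sqrt(g_i * g_j * (m_i * g_i + m_j * g_j)
                     / (m_i * g_j + m_j * g_i))
    e_rad = C2 * ((m_i * g_i + m_j * g_j) / g_ij - m_ij)
    return g_ij, m_ij, e_rad

def decelerate_step(m, g, r, dr, n_ism=1.0):
    """Advance the first shell by dr [cm]: sweep up the ISM mass in that
    interval (n_ism protons cm^-3) and merge it as a shell at rest."""
    dm = 4.0 * math.pi * r * r * dr * n_ism * 1.67e-24
    return merge(m, g, dm, 1.0)
```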
The colliding shells make a peak in the gamma-ray time history at relative time $`t_{ij}-R_c/c`$. When two shells collide, forward and reverse shocks traverse the shells. If the internal energy is promptly converted into radiation, the merged shell emits for about the time that it takes for the reverse shock to cross the shell (see Kobayashi et al. (1997)). The $`\mathrm{\Gamma }`$ factors for the forward and reverse shocks are found from Sari & Piran (1995):
$$\mathrm{\Gamma }_{fs}=\mathrm{\Gamma }_{ij}\left[\frac{1+2\mathrm{\Gamma }_{ij}/\mathrm{\Gamma }_i}{2+\mathrm{\Gamma }_{ij}/\mathrm{\Gamma }_i}\right]^{1/2}$$
(10)
and
$$\mathrm{\Gamma }_{rs}=\mathrm{\Gamma }_{ij}\left[\frac{1+2\mathrm{\Gamma }_{ij}/\mathrm{\Gamma }_j}{2+\mathrm{\Gamma }_{ij}/\mathrm{\Gamma }_j}\right]^{1/2}.$$
(11)
## 4 Pulse Shape
A shell that coasts without emitting photons and then emits for a short period of time produces a pulse with a rise time related to the time the shell emits and a decay dominated by curvature effects (Fenimore et al. (1996)). In the internal shock model, the shell emits for the time it takes the reverse shock to cross the shell that is catching up, that is (Kobayashi et al. (1997)),
$$\mathrm{\Delta }t_{\mathrm{cross}}=l_j/(\beta _j-\beta _{rs}).$$
(12)
The time of arrival at a detector (relative to the start of the pulse) of photons generated at angle $`\theta `$ from the line of sight is
$$T(\theta )=R_c(1-\mathrm{cos}\theta )/c$$
(13)
(Note that in our previous papers it was more convenient to measure time from when the shell left the central site; that convention is not used here because the shell does not move at a constant speed.) At angle $`\theta `$, the Doppler factor, $`\mathrm{\Lambda }`$, is $`\mathrm{\Gamma }_{ij}(1-\beta _{ij}\mathrm{cos}\theta )`$. At time $`T`$ in the pulse, the $`\mathrm{\Lambda }`$ factor is
$$\mathrm{\Lambda }(T)=\frac{R_c+2\mathrm{\Gamma }_{ij}^2cT}{2\mathrm{\Gamma }_{ij}R_c}$$
(14)
To calculate the observed pulse shape, one needs to combine the Doppler beaming with the volume of material that can contribute at time $`T`$. Following the method of Summer & Fenimore (1998), the resulting pulse shape is
$`V(T)`$ $`=`$ $`0\mathrm{if}T<0`$
$`=`$ $`\psi {\displaystyle \frac{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +3}-R_c^{\alpha +3}}{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +1}}}\mathrm{if}0<2\mathrm{\Gamma }_{ij}^2T<\mathrm{\Delta }t_{\mathrm{cross}}`$
$`=`$ $`\psi {\displaystyle \frac{(R_c+c\mathrm{\Delta }t_{\mathrm{cross}})^{\alpha +3}-R_c^{\alpha +3}}{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +1}}}\mathrm{if}2\mathrm{\Gamma }_{ij}^2T>\mathrm{\Delta }t_{\mathrm{cross}}`$
(15)
where $`\psi `$ is a constant and $`T`$ is measured from the start of the pulse.
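The piecewise profile above is straightforward to evaluate; a sketch in Python follows (we fold the two emitting branches into a single `min()`, and the default $`\alpha `$ is an illustrative choice, not a value taken from the text):

```python
C = 2.998e10  # speed of light [cm s^-1]

def pulse(T, R_c, g_ij, dt_cross, alpha=-1.0, psi=1.0):
    """Pulse profile of eq. (15): rise while the reverse shock crosses the
    shell, curvature-dominated decay afterwards. T [s] since pulse start,
    R_c [cm], dt_cross [s] in the lab frame."""
    if T < 0.0:
        return 0.0
    x = R_c + 2.0 * g_ij**2 * C * T
    top = min(x, R_c + C * dt_cross)   # emission stops after the crossing
    return psi * (top**(alpha + 3.0) - R_c**(alpha + 3.0)) / x**(alpha + 1.0)
```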
The cooling is very rapid, so the internal energy generated by the collision, $`E_{\mathrm{rad}}`$, is immediately turned into photons. An observer, using a detector such as BATSE, sees the fraction that falls in the BATSE bandpass of 50 to 300 keV, $`f_{\mathrm{BATSE}}`$. Since we do not understand exactly how the internal energy is distributed, we cannot predict $`f_{\mathrm{BATSE}}`$. However, GRBs often have a “Band” spectral shape with $`\alpha =-1,\beta =-2.5,`$ and $`E_{\mathrm{peak}}=250`$ keV (Band et al. (1993)). If that shape is valid over the entire range of emission, $`f_{\mathrm{BATSE}}`$ is $`0.37`$. We generate simulated time histories as the sum of pulses with the shape from equation (15) and integrated fluence $`f_{\mathrm{BATSE}}E_{\mathrm{rad}}`$. We generate the time history with 0.064 s samples (to mimic BATSE) and then find the peak emitted luminosity in 0.256 s ($`=L_{256}`$). Ignoring cosmological redshift effects, the BATSE catalog value of $`P_{256}`$ should be related to $`L_{256}`$.
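The $`L_{256}`$ extraction amounts to a sliding four-sample average of the 0.064 s time history; for example:

```python
import numpy as np

def peak_L256(luminosity, dt=0.064):
    """Peak luminosity averaged over a sliding 0.256 s window, from a time
    history sampled at the BATSE resolution of dt = 0.064 s."""
    k = int(round(0.256 / dt))                       # 4 samples per window
    window = np.convolve(luminosity, np.ones(k), mode="valid") / k
    return window.max()
```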
## 5 A Typical Simulation
To summarize our model, we have eight parameters: the duration of the activity ($`T_{\mathrm{dur}}`$), the rate of explosions at the central site ($`N/T_{\mathrm{dur}}`$), the range of Lorentz factors ($`\mathrm{\Gamma }_{\mathrm{min}},\mathrm{\Gamma }_{\mathrm{max}}`$), the range of energy release at the central site ($`E_{\mathrm{min}},E_{\mathrm{max}}`$), the ISM density ($`\rho `$), and the range of initial thicknesses (0 to $`l`$). To be comparable to Kobayashi et al. (1997), we will set $`N=85`$, $`T_{\mathrm{dur}}=60`$ s, and $`l`$ = 0.2 s. Kobayashi et al. (1997) parameterized much of their results based on $`\mathrm{\Gamma }_{\mathrm{max}}/\mathrm{\Gamma }_{\mathrm{min}}`$ since the overall efficiency of the conversion of bulk energy to radiation was primarily dependent on that parameter. With deceleration, we have found that the efficiency depends mostly on $`\mathrm{\Gamma }_{\mathrm{max}}`$, so we have set $`\mathrm{\Gamma }_{\mathrm{min}}`$ to the minimum required for the high energy emission (100), and varied $`\mathrm{\Gamma }_{\mathrm{max}}`$ from $`10^{2.5}`$ to $`10^{4.5}`$. Kobayashi et al. (1997) had no analog to $`E_{\mathrm{min}},E_{\mathrm{max}}`$, and $`\rho `$. Since we selected $`E`$ uniformly between $`E_{\mathrm{min}}`$ and $`E_{\mathrm{max}}`$, $`E_{\mathrm{min}}`$ is not important as long as it is much less than $`E_{\mathrm{max}}`$. We set $`E_{\mathrm{min}}`$ to $`10^{49}`$ erg. For $`\rho `$ we have used 1 cm<sup>-3</sup>.
Figure 1 shows a typical simulation with $`E_{\mathrm{max}}=3.6\times 10^{53}`$ erg and $`\mathrm{\Gamma }_{\mathrm{max}}=3.2\times 10^4`$. In Figure 1a, $`\rho `$ is zero, so there is no deceleration. About $`1.5\times 10^{55}`$ erg (assuming isotropy) were released at the central site in 81 shells. The burst duration at the observer is approximately the duration of the activity at the central site. About 25% of the bulk energy was received by the observer in the period $`T_{\mathrm{dur}}`$. Figure 1b is the same simulation (i.e., same set of random numbers), but includes deceleration of the first shell in an ISM with $`\rho =1`$ cm<sup>-3</sup>. Both simulations appear similar because both reflect the activity of the central engine (see eq. 5). The dotted line is the contribution to the time history from collisions with the first shell. It tends to add a DC level with a few wide peaks, but it raises the fraction of the bulk energy converted to radiation to 45%. Some of the radiation will arrive after $`T_{\mathrm{dur}}`$ because curvature will delay it.
Figure 2a gives the Lorentz factor of the first shell in Figure 1b. Given the high value of $`\mathrm{\Gamma }_{\mathrm{max}}`$, it quickly decelerates, but other shells collide with it, giving it a boost and maintaining a large $`\mathrm{\Gamma }`$ for most of the burst. The deceleration occurs because the first shell collides with the ISM. The resulting internal energy must also radiate away. In Figure 2b we show the contribution to the time history of Figure 1b from the internal energy generated by the deceleration, assuming it radiates in the BATSE bandpass. It tends to be smooth and would fill in gaps if it had an $`f_{\mathrm{BATSE}}`$ similar to that of the internal shocks. We define $`E_{\mathrm{dec},\mathrm{dur}}`$ to be the internal energy generated by the collision of the first shell with the ISM that would arrive at the detector with $`T_{\mathrm{toa}}<T_{\mathrm{dur}}`$. For the case in Figure 1b, $`E_{\mathrm{dec},\mathrm{dur}}`$ is 38% of the bulk motion energy.
## 6 Efficiency of Converting Bulk Energy
The efficiency of the conversion is an important constraint. Although the time histories imply that GRBs are produced by central engines with internal shocks, internal shocks usually do not convert most of the bulk motion energy into radiation (e.g., $`25`$%, Kobayashi et al. (1997)). Observationally, the afterglows account for only a small percentage of the energy, so it is not clear where most of the energy goes. The efficiency for an individual collision can be found from the initial and final bulk energies:
$$ϵ_{ij}=1-\frac{m_{ij}\mathrm{\Gamma }_{ij}}{m_i\mathrm{\Gamma }_i+m_j\mathrm{\Gamma }_j}$$
(16)
If there is no deceleration, the shells will collide until the remaining shells are ordered with decreasing value of the Lorentz factors. Let $`n`$ be the number of remaining shells. The overall efficiency depends on how much energy remains in un-collided shells:
$$ϵ=1-\frac{\sum _{ij=0}^{ij=n}m_{ij}\mathrm{\Gamma }_{ij}}{\sum _{i=0}^{i=N}m_i\mathrm{\Gamma }_{0i}}$$
(17)
When deceleration occurs, $`n`$ is 1, another reason why our model gives higher efficiency than previous models.
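Both efficiencies are one-line bookkeeping on top of the merger routine sketched earlier; for example:

```python
def collision_efficiency(m_i, g_i, m_j, g_j):
    """Per-collision efficiency, eq. (16)."""
    g_ij, m_ij, _ = merge(m_i, g_i, m_j, g_j)  # merge() from the sketch above
    return 1.0 - m_ij * g_ij / (m_i * g_i + m_j * g_j)

def overall_efficiency(m0, g0, m_left, g_left):
    """Overall efficiency, eq. (17): unity minus the bulk energy still
    locked in the n surviving shells over the total initially injected."""
    remaining = sum(m * g for m, g in zip(m_left, g_left))
    injected = sum(m * g for m, g in zip(m0, g0))
    return 1.0 - remaining / injected

# Example: equal masses with Gamma_i = 10 Gamma_j radiate ~42% of the
# bulk energy in a single collision.
print(collision_efficiency(1.0, 1000.0, 1.0, 100.0))
```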
To study the effects of deceleration we have generated sets of 128 bursts under a variety of conditions. Figure 3 shows the average efficiency as a function of $`\mathrm{\Gamma }_{\mathrm{max}}`$. The curve labeled “No Deceleration” is effectively the same result as Kobayashi et al. (1997). The curves labeled “Deceleration, IS” include an ISM with $`\rho =1`$ cm<sup>-3</sup>. We ran models for a range of $`E_{\mathrm{max}}`$ (maximum energy per shell) and interpolated the results to find the efficiency at three values of the peak $`L_{256}`$ in the time histories: $`3\times 10^{50},3\times 10^{51}`$, and $`3\times 10^{52}`$ erg s<sup>-1</sup> (the solid, dotted, and dashed lines, respectively). Figure 4 shows the corresponding average radii of the internal shocks that produce pulses arriving with $`T_{\mathrm{toa}}<T_{\mathrm{dur}}`$, that is, during the GRB phase. (Including all internal shocks would produce a misleading result when there is little deceleration, because a few stragglers would finally collide at radii orders of magnitude larger.) The curves labeled “ES” are the average radii at which the Lorentz factor of the first shell is reduced by half. Once the first shell starts to decelerate, a fair number of the shells collide with it, raising the average amount of the bulk energy released in internal shocks. These collisions are more efficient since there is a greater disparity between the $`\mathrm{\Gamma }`$ factors. For large values of $`\mathrm{\Gamma }_{\mathrm{max}}`$, the efficiency rises to 40%.
The curves labeled “Deceleration, IS” in Figure 3 are based on the ratio of the internal energy generated by collisions between shells (including the first shell) to the total generated bulk motion energy. This does not include the bulk motion energy lost to energizing the ISM. Eventually, all of the bulk motion energy is lost to the ISM. In previous models it was assumed that this happened far from where the internal shocks occur. In the curves labeled “Deceleration, IS+ES”, we include $`E_{\mathrm{dec},\mathrm{dur}}`$ in the efficiency, that is, the internal energy from the collision with the ISM whose photons would start to arrive during the burst. For large values of $`\mathrm{\Gamma }_{\mathrm{max}}`$, nearly 85% of the bulk motion energy is lost during the GRB phase.
## 7 Discussion
Internal shocks are capable of producing the variability that is the signature of GRBs (Kobayashi et al. (1997)). However, it has been believed that internal shocks are inefficient, converting only $`<25`$% of the bulk motion energy into radiation. Since the afterglows only account for a few percent of the radiated energy, it has been unclear where most of the energy goes.
In this paper, for the first time, deceleration of the first shell is included in an internal shock model. For $`\mathrm{\Gamma }_{\mathrm{max}}>10^3`$, there are two ways that deceleration is an important catalyst for converting bulk motion into radiation. First, the deceleration occurs because the bulk motion must energize the ISM that it runs into. Much of the energy used to energize the ISM goes into internal energy. This is rather effective because the bulk motion energy of the first shell is $`\mathrm{\Gamma }_iM_ic^2`$, where the mass grows as other shells run into the first shell. If deceleration causes the $`\mathrm{\Gamma }`$ of the first shell to drop by 50%, nearly 50% of the bulk motion energy will be used. Second, the rapid deceleration causes shells to plow into the back of the first shell. The efficiency for converting bulk motion (for equal mass shells) scales as $`1-(\mathrm{\Gamma }_j/\mathrm{\Gamma }_i)^{1/2}`$ if $`\mathrm{\Gamma }_i`$ is much larger than $`\mathrm{\Gamma }_j`$, as is the case when the $`j`$-th shell is decelerating. These two effects combine to release up to 85% of the bulk motion energy while $`T_{\mathrm{toa}}`$ is less than $`T_{\mathrm{dur}}`$, that is, during the GRB phase. Although the bulk motion might be effectively converted to internal energy, the resulting electron distribution is difficult to predict, so we do not know how much of the emission will occur in the BATSE bandpass (i.e., $`f_{\mathrm{BATSE}}`$ is uncertain).
Thus, one can identify three types of contributions to the time history, each with a different character: the internal shocks that do not involve the first shell, the internal shocks involving the first shell, and the external shock produced as the first shell decelerates.
The internal shocks that do not involve the first shell are characterized by narrow pulses with nearly constant width throughout the time history (see, for example, Fig. 1a). The $`T_{\mathrm{toa}}`$ for these pulses is dominated by the time the shells were produced at the central site (following eq. 5). They occur at similar radii from the central site. If $`\mathrm{\Gamma }_i`$ is selected randomly between a small $`\mathrm{\Gamma }_{\mathrm{min}}`$ and $`\mathrm{\Gamma }_{\mathrm{max}}`$, many pulses form with a similar Lorentz factor: $`\mathrm{\Gamma }_{ij}\simeq (\mathrm{\Gamma }_i\mathrm{\Gamma }_j)^{1/2}\mathrm{\Gamma }_{\mathrm{max}}/2`$. The pulse shape depends mostly on $`R_c`$ and $`\mathrm{\Gamma }_{ij}`$ (eq. 15), so the pulses are quite similar.
The internal shocks involving the first shell occur at ever increasing radii with a generally decreasing Lorentz factor. Thus, the pulse shape of equation (15) produces peaks that are wider and wider (see the dotted curve in Fig. 1b). In our previous papers, we argued that the time history could not arise from a single shell because the pulses did not get wider and wider. This argument is still valid, and this paper shows how multiple shells can produce narrow peaks throughout the event in the presence of wider and wider pulses from a single shell. Indeed, a recent analysis of 387 pulses in 28 BATSE GRBs shows that the most intense pulses in a burst have nearly identical widths throughout the burst, while the weak pulses show a trend to become wider as the burst progresses (Ramirez-Ruiz & Fenimore 1999b). This is precisely what is seen in simulations when deceleration is included.
There have been claims of upper limits on the possible Lorentz factor because the deceleration must occur at greater radii than the internal shocks (Lazzati, Ghisellini, & Celotti (1999)) to avoid making progressively wider peaks. We do not find this to be the case, and the Lorentz factor can be much larger, allowing more of the bulk motion energy to be released during the GRB phase.
The third type of contribution arises from the external shock as the first shell energizes the ISM. This is a smooth component with some variation as the Lorentz factor increases due to collisions with faster shells and decreases due to deceleration (see Fig. 2b). Previous external shock models (e.g., Dermer & Mitman (1999)) have assumed the shock interacts with ISM clouds that are much smaller than the size of the shell; this was necessary to produce the temporal variability. We do not assume any structure in the ISM, so the contribution from this component is quite smooth.
Figure 5 shows BATSE time histories that have the characteristics of our simulations. Figure 5a is BATSE burst 2831. It has many narrow peaks throughout the time history, but also gaps that go back to background. The presence of gaps implies little deceleration of the first shell, because the gaps are not filled in. The gaps would occur because the central site turns on and off. We note that this burst is the record holder for the highest energy photons, 18 GeV (Hurley et al. (1994)).
Figure 5b is BATSE burst 2329; initially it shows narrow peaks, but then broader peaks on top of a slowly varying level. The structure on the rise of burst 2329 is statistically significant. The slowly varying component and widening pulses imply substantial deceleration of the first shell. In other bursts (e.g., BATSE burst 130), there are gaps where the last pulse before the gap has a slow decay, similar to that seen in Figure 5b. However, these gaps can go all the way down to the background. Such bursts seem to show the signature of internal shocks on a decelerating first shell, but not the contribution from the energization of the ISM. Since we do not understand the mechanism by which the internal energy is distributed, the $`f_{\mathrm{BATSE}}`$ values associated with each component might be different, and the emission might appear in different bandpasses. Perhaps the $`f_{\mathrm{BATSE}}`$ for energizing the ISM is small and its internal energy is radiated at lower energies, such as the x-ray excesses reported by Preece et al. (1995).
Apparently, some bursts involve deceleration and some do not. The Lorentz factor required for deceleration depends very weakly on $`\rho `$ and $`E_0`$ (i.e., as $`(\rho E_0)^{1/4}`$). Thus it seems more likely that intrinsic variations in $`\mathrm{\Gamma }_{\mathrm{max}}`$ are the reason why some bursts show more deceleration than others.
If the prompt emission is caused by the first shell, as suggested by the analysis of GRB990123 (Sari & Piran (1999)), we would expect events with a slowly varying component to be more likely to show prompt, bright optical emission or early afterglows.
In summary, it is possible to convert a large fraction ($`85`$%) of the bulk motion energy into radiation during the gamma-ray burst phase with internal shocks if deceleration of the first shell is accounted for and the Lorentz factor is $`>10^3`$.
acknowledgment: The authors gratefully acknowledge useful conversations with Re’em Sari.
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
# Properties of scalar–isoscalar mesons from multichannel interaction analysis below 1800 MeV
## 1 INTRODUCTION
Study of scalar-isoscalar mesons is an important issue of QCD : one expects presence of some scalar ($`J^{PC}=0^{++},I=0`$) glueballs . Here we shall try to see what can be learned from the present experimental knowledge of the scalar-isoscalar $`\pi \pi `$ and $`K\overline{K}`$ phase shifts. We consider an unitary model with separable interactions in three channels: $`\pi \pi `$, $`K\overline{K}`$ and an effective $`2\pi 2\pi `$, denoted $`\sigma \sigma `$ , in a mass range from the $`\pi \pi `$ threshold up to 1800 MeV . Several solutions are obtained by fitting $`\pi \pi `$ phase shifts from the CERN-Cracow-Munich analysis of the $`\pi ^{}p_{}\pi ^+\pi ^{}n`$ reaction on a polarized target together with lower energy $`\pi \pi `$ and $`K\overline{K}`$ data from reactions on unpolarized target (see references given in ).
## 2 RESULTS
The different solutions A, B, E and F of our model are characterized by the presence or absence of $`K\overline{K}`$ and $`\sigma \sigma `$ bound states when all the interchannel couplings are switched off. For the fully coupled case, poles of the $`S`$-matrix located in the complex energy plane not too far from the physical region are interpreted as scalar resonances. In all our solutions we find a wide $`f_0(500)`$, a narrow $`f_0(980)`$ and a relatively narrow $`f_0(1400)`$ which splits into two resonances lying on different sheets, classified according to the signs of $`\mathrm{Im}k_{\pi \pi },\mathrm{Im}k_{K\overline{K}},\mathrm{Im}k_{\sigma \sigma }`$. Their average masses and widths are summarized in Table 1. The finding of the two states near 1400 MeV seems to indicate that the $`\pi \pi `$ data taken with a polarized target are quite compatible with the Crystal Barrel and other LEAR data, which require a broad $`f_0(1370)`$ and a narrower $`f_0(1500)`$ to be explained. We have furthermore studied the dependence of the positions of the $`S`$-matrix singularities on the interchannel coupling strengths in order to find the origin of the resonances. We have also looked at the interplay between $`S`$-matrix zeroes and poles.
In the $`\pi \pi `$ channel ($`j=1`$) one can define three branching ratios $`b_{1j}=\sigma _{1j}/\sigma _{11}^{tot},j=1,2,3`$. Our model has in total nine such ratios. Here $`\sigma _{11}`$ is the elastic $`\pi \pi `$ cross section, $`\sigma _{1j}`$ are the transition cross sections to $`K\overline{K}`$ ($`j=2`$) and $`\sigma \sigma `$ ($`j=3`$) and $`\sigma _{11}^{tot}`$ is the total $`\pi \pi `$ cross section. One has $`b_{11}+b_{12}+b_{13}=1.`$ The energy dependence of these ratios is plotted in Fig. 1 for solution B.
Above the $`K\overline{K}`$ threshold one can define an average branching ratio:
$$\overline{b}_{12}=\frac{1}{M_{\mathrm{max}}-M_{\mathrm{min}}}\int _{M_{\mathrm{min}}}^{M_{\mathrm{max}}}b_{12}(E)dE.$$
The branching ratios for the $`f_0(1500)`$ decay into five channels, $`\pi \pi `$, $`\eta \eta `$, $`\eta \eta ^{}`$, $`K\overline{K}`$ and $`4\pi `$, have been given as 29, 5, 1, 3 and 62%, respectively. The two main disintegration channels are $`\pi \pi `$ and $`4\pi `$. In our model the $`4\pi `$ channel is represented by the effective $`\sigma \sigma `$ channel, and we also obtain large fractions for the averaged branching ratios $`\overline{b}_{11}`$ and $`\overline{b}_{13}`$. If we calculate the ratio $`b_{13}/b_{11}`$ exactly at 1500 MeV, we obtain 2.4, 1.2 and 2.3 for solutions A, B and E, respectively. These numbers show the importance of the $`4\pi `$ channel, in agreement with experiment. If we choose the energy interval from 1350 MeV to 1500 MeV, our average branching ratios near the $`f_0(1400)`$ for solution B are $`\overline{b}_{11}=0.61`$, $`\overline{b}_{12}=0.16`$ and $`\overline{b}_{13}=0.23`$. We know that the extraction of branching ratios from experiment is a difficult task. The average branching ratios depend quite sensitively on the energy bin chosen in the actual calculation, as seen in Fig. 1. In particular the branching ratio $`b_{12}`$ ($`\pi \pi `$ $``$ $`K\overline{K}`$ transition) is very small around 1420 MeV, close to the position of our $`f_0(1400)`$ resonance poles. This is in qualitative agreement with the small $`K\overline{K}`$ branching ratio (3%) quoted above.
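Numerically, the energy-averaged branching ratio defined above is just a normalized trapezoidal integral of the sampled $`b_{12}(E)`$; a minimal Python sketch (the grid and selection are illustrative):

```python
import numpy as np

def average_branching(E, b, E_min=1350.0, E_max=1500.0):
    """Energy-averaged branching ratio b-bar over [E_min, E_max] [MeV]
    from b(E) sampled on the grid E, via the trapezoidal rule."""
    sel = (E >= E_min) & (E <= E_max)
    e, y = E[sel], b[sel]
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(e))
    return integral / (E_max - E_min)
```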
## 3 CROSSING SYMMETRY AND CHIRAL CONSTRAINTS
We have looked for a new solution fitting the previous $`\pi \pi `$ and $`K\overline{K}`$ data and satisfying some chiral constraints at the $`\pi \pi `$ threshold $`s=4m_\pi ^2`$. We have fixed 2 parameters of our model in such a way that the $`\pi \pi `$ scattering amplitude $`T_{11}(4m_\pi ^2)=0.21m_\pi ^{-1}`$ and the $`K\overline{K}`$ amplitude $`T_{22}(4m_\pi ^2)=0.13m_\pi ^{-1}`$. The first value corresponds to a scalar-isoscalar scattering length $`a_0^0`$ close to those obtained in two-loop calculations in chiral perturbation theory, and the second, for $`K\overline{K}`$, to the leading-order value. Our unitary model for the $`J=I=0`$ amplitudes should be supplemented by a suitable parameterization of the $`J=0,I=2`$ and $`J=I=1`$ waves in order to satisfy some minimum crossing symmetry properties. Parameters of our separable potentials can be constrained in such a way that the above set of amplitudes satisfies Roy’s equations in an approximate way. In order to do so we have used Roy’s equations with the higher-energy and $`J2`$ contributions estimated as in the literature. We have integrated the partial wave spectral functions up to $`s=46m_\pi ^2`$. A previously published parameterization has been used for the $`I=J=1`$ wave. For the scalar-isotensor wave we have built a fit to the available phases using a rank-2 separable potential, imposing a scattering length $`a_0^2=-0.045m_\pi ^{-1}`$ close to the two-loop results. With such a value the new set of three amplitudes satisfies Roy’s equations better, as can be seen in Fig. 2.
There we have compared (for $`J=0`$ and $`I=0,2`$) the real parts of the partial wave amplitudes $`\mathrm{Re}f_J^I(s)`$, as calculated from $`\mathrm{Re}f_J^I=(1/2)\sqrt{s/(s-4)}\mathrm{sin}2\delta _J^I(s)`$, for solution A (dash-dot line) and for the new solution (solid line), with those given by Roy’s equations for solution A (short-dashed line) and for the new solution (long-dashed line). We find that if $`a_0^0`$ is close to 0.2 $`m_\pi ^{-1}`$ then $`a_0^2`$ should be in the vicinity of $`-0.04`$ $`m_\pi ^{-1}`$ in order to satisfy Roy’s equations for the isoscalar and isotensor waves. The $`I=J=1`$ wave, not shown here, is less sensitive to these values and satisfies Roy’s equation relatively well. This preliminary study can be further extended by the inclusion of other possible chiral constraints, such as those on the $`\pi \pi `$ $``$ $`K\overline{K}`$ transition. One can also try to improve the treatment of the high partial waves and the high-energy contributions to Roy’s equations.
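For reference, the comparison in Fig. 2 only requires evaluating this relation from the fitted phase shifts; a one-line Python transcription:

```python
import math

def re_f(s, delta):
    """Re f = (1/2) sqrt(s/(s-4)) sin(2 delta), with s in units of
    m_pi^2 and the phase shift delta in radians."""
    return 0.5 * math.sqrt(s / (s - 4.0)) * math.sin(2.0 * delta)
```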
Acknowledgments: We thank B. Moussallam, J. Stern and R. Vinh Mau for helpful discussions. This work has been supported by IN2P3-Polish laboratories Convention (project No. 99-97). R. Kamiński thanks NATO for a grant.
# Overshoot in Giant Stars
## 1 Introduction
The observation of metal-poor stars in globular clusters and in the field has revealed many well-defined correlations of abundance ratios. If these results are combined with theoretical predictions for the nucleosynthetic production sites, the chemical evolution of stellar populations can be reconstructed. Among the nucleosynthetic production sites to be considered are the asymptotic giant branch (AGB) stars, which are potentially efficient producers of carbon, nitrogen and $`s`$-process elements \[Jehin et al., 1999\]. In contrast to the contribution from massive stars and supernovae, which can be identified down to the lowest \[Fe/H\] ratios, AGB stars contribute with some delay to the enrichment of the ISM.
AGB stars are characterized by an increasingly degenerate C/O core and two shells of nucleosynthetic processing: the helium burning shell and the hydrogen burning shell \[Iben and Renzini, 1983\]. The He-shell can become thermally unstable which leads to the recurrent thermonuclear run-away of He-burning in the shell, the thermal pulses (TP). During the TP the region between the hydrogen and the helium shell, the intershell region, becomes convectively unstable due to the huge nuclear energy production of the He-shell (Fig. 1) \[Blöcker, 1999\].
The predictions of yields, chemical enrichment and surface abundances as well as stellar parameters, like luminosity, are sensitively dependent on the overshoot phenomenon. In the next section we will review the most important changes introduced to the AGB models if overshoot is considered. Section 3 will discuss new stellar evolution models of hydrogen-deficient post-AGB stars and their observational counterparts. We argue that this class of stars can only be understood if overshoot in AGB stars is indeed operating.
## 2 Nucleosynthesis and mixing in AGB stars with overshoot
Stellar AGB models with overshoot are different from models without overshoot with respect to
* the formation of a $`{}_{}{}^{13}\text{C}`$ pocket which is a crucial ingredient of the concept of $`{}_{}{}^{13}\text{C}`$ being the dominant neutron source of the main solar $`s`$-process component produced in low-mass AGB stars,
* the occurrence and efficiency of the third dredge-up \[Herwig et al., 1997, Herwig et al., 1999b\],
* the composition in the intershell and thereby also of the ejecta and
* the structural evolution \[Herwig et al., 1998\].
### 2.1 The neutron source for the $`s`$-process
At the end of the third dredge-up phase the hydrogen-rich convective envelope and the carbon-rich intershell region are in direct contact. A tiny region can form where protons and $`{}_{}{}^{12}\text{C}`$ coexist. It has been shown by Herwig et al. \[Herwig et al., 1997\] that such a region follows naturally if depth-dependent overshoot, modeling an exponentially declining turbulent velocity field, is considered. During the following interpulse evolution $`{}_{}{}^{13}\text{C}`$ forms. The $`{}_{}{}^{13}\text{C}`$ burns under radiative conditions \[Straniero et al., 1995\] and releases neutrons via the reaction $`{}_{}{}^{13}\text{C}(\alpha ,n)^{16}\text{O}`$. Current models of $`s`$-process nucleosynthesis show that this is the dominant mechanism of heavy-element synthesis in low-mass AGB stars \[Gallino et al., 1998\]. Together with the H-burning ashes, the heavy elements formed by this mechanism are then engulfed by the He-flash convection zone and can reach the surface through the next dredge-up event.
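The depth-dependent overshoot referred to here is commonly implemented as a diffusion coefficient that decays exponentially beyond the convective boundary; the sketch below follows the Herwig et al. (1997) form as we understand it (both the functional form and the value $`f0.02`$ are assumptions, not taken from this text):

```python
import math

def d_overshoot(z, D0, Hp, f=0.02):
    """Diffusion coefficient a radial distance z beyond the convective
    boundary, for an exponentially declining velocity field:
    D(z) = D0 * exp(-2 z / (f * Hp)), with Hp the pressure scale height
    and D0 the convective value at the boundary."""
    return D0 * math.exp(-2.0 * z / (f * Hp))
```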
Also note that dredge-up below the He-flash convection zone leads to a phase during which the temperature at the bottom of the convective boundary exceeds $`T2.7\times 10^8\mathrm{K}`$; this phase lasts for about $`30\mathrm{yr}`$, while models without overshoot have a much shorter high-$`T`$ phase of only a few years.
### 2.2 The third dredge-up
Perhaps one of the most compelling aspects of models with overshoot is the ease with which they produce low-mass carbon stars due to efficient dredge-up. For example, our $`M_{\mathrm{ZAMS}}`$=1.7$`\mathrm{M}_{}`$, Z=0.02 model sequence becomes a carbon star after the 10<sup>th</sup> TP, when the hydrogen-free core has a mass of $`M_\mathrm{H}0.6\mathrm{M}_{}`$. The luminosities are $`\mathrm{log}L4.1`$, $`3.6`$ and $`3.9`$ at the TP luminosity maximum, at the minimum, and during the quiescent interpulse phase, respectively. Similar results have also been found for models of lower metallicity, and it seems that the problem of the theoretically missing low-luminosity carbon stars can be solved by some overshoot below the He-flash convection zone. The enhanced dredge-up efficiency is caused by the modification of the intershell abundances. The abundances found with overshoot show qualitatively the same dependence on pulse number as models without overshoot \[Schönberner, 1979, Boothroyd and Sackmann, 1988\]. However, with overshoot the intershell contains much more carbon and oxygen at the expense of helium. For example, for a 3$`\mathrm{M}_{}`$ stellar model after the 13<sup>th</sup> TP we find \[He,C,O\]=\[0.39,0.41,0.16\]<sup>1</sup><sup>1</sup>1All abundances are given in mass fractions., whereas the values from calculations without overshoot are \[He,C,O\]=\[0.70,0.26,0.01\]. It has actually been found that the amount of overshoot applied at the bottom of the He-flash convection zone, the resulting helium abundance and the dredge-up after the TP are proportional \[Herwig et al., 1999b\].
The combined effect of efficient dredge-up and the enhanced oxygen and carbon content of the intershell region overturns the common belief that AGB stars do not contribute to the Galactic oxygen production. Models with overshoot predict that a 3$`\mathrm{M}_{}`$ star of solar metallicity will eject about 0.03$`\mathrm{M}_{}`$ of primary $`{}_{}{}^{16}\text{O}`$ formed by He-shell burning. Also, the surface abundance of oxygen (and thus the O/H ratio) is larger by about a factor of two to three due to overshoot. Unfortunately, the determination of the oxygen abundance in giant stars is difficult and sometimes yields ambiguous results \[Fulbright and Kraft, 1999\].
## 3 Evidence for overshoot in AGB stars from models and observations of post-AGB stars
In the previous section we have shown that overshoot from below the He-flash convection zone leads to important modifications of the structure and chemical evolution of AGB stars. It is therefore important to collect evidence for the existence and efficiency of overshooting at this convective boundary. Such evidence comes from a class of H-deficient post-AGB stars with unusual surface abundances: the \[WC\]-type central stars of planetary nebulae \[Koesterke and Hamann, 1997, Hamann, 1997, De Marco et al., 1998\] and the PG 1159 stars \[Dreizler and Heber, 1998, Werner et al., 1999\]. Spectroscopic abundance analyses have revealed that these stars are very carbon-rich and also *oxygen-rich!* Typically, one finds \[He,C,O\]=\[0.50,0.33,0.17\] for PG 1159 stars. In particular the large mass fraction of oxygen has been puzzling, because AGB stellar models have shown such a high oxygen abundance only *below* the intershell and helium-burning region, where the helium abundance is already much lower than the observed $`50\%`$. Thus, AGB models without overshoot do not show the observed abundance pattern of H-deficient post-AGB stars at any depth and, moreover, no scenario of combined mixing and mass loss could be identified to solve the problem. Almost any post-AGB evolutionary model sequence is H-normal \[Blöcker, 1995\]. The only advance has been the discovery that a thermal pulse on the post-AGB, shortly before the star becomes a white dwarf, may lead to convective H-burning in the intershell during the He-flash (see Iben & McDonald \[Iben and McDonald, 1995\] and references therein). This *very late* TP brings up the intershell abundances and thereby leads to a significant abundance change at the surface. However, the models which were available so far fail to predict the observed high abundance of oxygen.
With overshoot the abundances in the intershell coincide naturally with the abundance pattern found in the H-deficient post-AGB stars (Fig. 3). Also, a certain range of observed abundances may reflect the time and metallicity dependence of the intershell abundance. We have performed new stellar evolution calculations of the post-AGB phase which take the modifications of the intershell abundance of AGB stars due to overshoot into account. Also, we employ a time-dependent and coupled treatment of nuclear burning and convective mixing to correctly describe the nuclear processing of protons under the conditions encountered in the convectively unstable He-shell (for details see Herwig \[Herwig, 2000\]). We found that, within the very late thermal pulse scenario, H-deficient Born-again post-AGB stars (see Fig. 4) exhibit surface abundances (also for oxygen) and parameters in the range observed \[Herwig et al., 1999a\].
## 4 Conclusions
In this paper we have reported on interdependent advances in the recent stellar evolution modeling of AGB and post-AGB stars, which mutually add to their significance. Currently the modification of the intershell abundances following from the overshoot concept appears to be the *only* possible theoretical explanation for the large oxygen abundances observed in H-deficient post-AGB stars. Thus, the application of some overshoot from below the He-flash convection zone of AGB stars solves a long-standing observational puzzle.
In turn, this importance of AGB overshoot for the H-deficient post-AGB stars is presently one of the strongest pieces of evidence that such overshoot is indeed operating. Other problems can be resolved as well, like the modeling of carbon stars with luminosities as low as those observed in the Magellanic Clouds. More support for the overshoot concept comes from hydrodynamic modeling (see the contribution by D. Arnett, this volume). On the other hand, we have so far not found any evidence against the operation of overshoot in AGB stars (though we might not have searched for it hard enough yet).
At the present stage we can conclude that the operation of some overshoot in AGB stars appears to be very likely. The consequences for the chemical yields and the enrichment are both qualitative and quantitative: while the larger dredge-up efficiency enhances the mass fraction of synthesized material in the ejecta in general, the relative abundance fractions are also changed, most remarkably for oxygen.
## Acknowledgements
This work has been supported by the *Deutsche Forschungsgemeinschaft, DFG* (La 587/16). We would also like to thank Drs. W.-R. Hamann, L. Koesterke and N. Langer for very helpful discussions.
# The Space Density of Galaxies through 𝜇_B(0) = 25.0 mag arcsec-2
## 1 Introduction
Our ability to detect objects in the Universe has been likened to standing in a well-lit room in the middle of the night and trying to look through the window to describe the garden 100 yards away. Although it may be possible to definitively say that the garden exists, and even to describe some of the large, well-defined plants, coming up with a quantitative description of the fainter, or smaller, or more distant plants is an extremely difficult task. Moreover, of primary scientific interest is not the garden itself, but rather the evolutionary history of the plants that occupy it. With only this one view of the garden available to us, it would be extremely unlikely that our derived evolutionary history would be very accurate. The parallel between the garden and galaxy detection should be clear. In 1965 Arp attempted to quantify this limited view of the universe by defining a “band of visibility,” outside of which we are unable to discern galaxies due to either their small apparent size or their optically diffuse nature. Arp’s argument was later quantified by Disney (1976), who showed that the visibility bias is rather severe.
Since Arp’s work was published, we have successfully broadened the “band of visibility” through improvements in both instruments and detection techniques. As an example, the superb angular resolution of HST has allowed for the distinction between true stars and galaxies which appear star-like in lower resolution surveys (e.g. Lilly et al. 1995; Ellis et al. 1996; Cowie et al. 1996; Steidel et al. 1996; Abraham et al. 1996; Morris et al. 1999; O’Neil, Bothun, & Impey 1999). The fact that most of these newly resolved galaxies are very far away means that the local Universe is not filled up with little, dinky, high surface brightness galaxies. On the other end of the spectrum, both improvements in detection techniques (e.g. Malin 1978; Schwartzenberg et al. 1995) and the advent of CCD cameras as an observing tool have allowed for the detection of increasingly diffuse (low signal-to-noise) stellar systems (e.g. Impey, Bothun, & Malin 1988; Davies, Phillipps, & Disney 1988; Dalcanton et al. 1997; Schombert & Bothun 1988; O’Neil, Bothun, & Cornell 1997; Matthews & Gallagher 1997). Indeed, the recent detection of extremely low surface brightness (LSB) dwarf spheroidal galaxies around Andromeda by Armandroff, Davies, & Jacoby (1998) is consistent with the local Universe having a large population of low-mass, nearly invisible galaxies. The Andromeda discovery underscores the severity of surface brightness selection effects. Where once the Milky Way stood alone in the Local Group as a unique host of 7 LSB dwarf spheroidals, we now have detected an apparently equivalent population around M31.
As new observational techniques broaden the visibility band, thus allowing new objects to be detected, the issue shifts from an existence proof to establishing the true space density of these newly discovered galaxies. In establishing this, we must understand the survey limitations. To compensate for these limitations, corrections must be made for the decreased probability of detecting a galaxy the closer it lies to the survey limits. The mathematical formalism of this correction has been both extensively discussed in the literature over the last 20 years (i.e. Disney 1976; Disney & Phillipps 1983; McGaugh 1996; de Jong 1996) and applied to the available data (i.e. McGaugh, Bothun, & Schombert 1995; de Jong 1996). These preliminary applications suggest that the space density of galaxies as a function of their central surface brightness as measured in the blue ($`\mu _B`$(0)) is relatively flat, or at most slowly falling, out to $`\mu _B`$(0) = 23.5 B mag arcsec<sup>-2</sup>. However, Dalcanton et al. (1997) present data which suggest that the space density continues to rise past this limit. We emphasize that these results refer exclusively to non-dwarf galaxies, i.e. those objects with scale lengths larger than $``$ 1 kpc.
In this paper we use the sample of LSB galaxies in the O’Neil, Bothun and Schombert (1999) catalog to determine the surface brightness distribution function from $`\mu _B`$(0) = 22.0 B mag arcsec<sup>-2</sup> through 25.0 B mag arcsec<sup>-2</sup> (the catalog limits). In addition to extending the known distribution by at least one mag arcsec<sup>-2</sup>, the O’Neil, Bothun, & Schombert survey has the advantage of having both well defined survey limits and known $`\mu _B`$(0), scale lengths, and redshifts, allowing for the use of a bivariate correction for the survey selection and the accurate determination of the surface brightness distribution function in the 23.0 – 25.0 B mag arcsec<sup>-2</sup> range. The paper is laid out as follows: Section 2 describes the volumetric correction applied to the data, Section 3 discusses the overall form of our determined surface brightness distribution, and Section 4 briefly describes the implications of these results.
## 2 The Volume Correction
As has often been discussed, detecting high surface brightness (HSB) galaxies within a survey is considerably easier than detecting low surface brightness (LSB) galaxies (e.g. Freeman 1970; Disney 1976; Disney & Phillipps 1983; McGaugh, Bothun, & Schombert 1995; Davies 1990; de Jong 1996; Bothun, Impey, & McGaugh 1997; Dalcanton et al. 1997). Thus determining the true (underlying) surface brightness distribution of a sample of galaxies requires accounting for the probability of a galaxy being detected by a survey of a given design. For a field galaxy survey, the probability of detection is determined simply by the volume available for detecting a galaxy of a given size and luminosity (i.e. its surface brightness). The volume-corrected surface brightness distribution is thus
$$\varphi (\mu _0)=\underset{i=1}{\overset{N}{}}\frac{S^i}{V_{max}^i}$$
(1)
where i is summed over all N galaxies in the sample, $`S^i`$ is 0 or 1 depending on whether a galaxy lies within the described volume, and $`V_{max}=\frac{4\pi }{3}d_{max}^3`$, the maximum volume in which a galaxy could be detected.
For a surface brightness limited sample (i.e. the catalog of O’Neil, Bothun, & Schombert 1999, hereafter OBS), $`d_{max}`$ can be found by requiring that the diameter of the galaxy be equal to or greater than the minimum detectable diameter ($`\theta =2r\theta _l`$). For a galaxy with an exponential surface brightness profile this gives:
$$\mu (r)=\mu _0+\mathrm{\hspace{0.25em}1.086}\frac{r}{\alpha }$$
(2)
$$\theta =2r=1.84\alpha (\mu _l-\mu _0)\frac{h}{d}(\mu _l-\mu _0)$$
(3)
$$d_{max}(\mu _0)\frac{h}{\theta _l}(\mu _l-\mu _0).$$
(4)
where $`\mu _0`$ is the central surface brightness of the galaxy, $`\alpha `$ is its scale length in arcsec, and h is its scale length in kpc. Thus, for a surface brightness limited sample,
$$V_{max}(\mu _0)\left(\frac{h}{\theta _l}\right)^3(\mu _l-\mu _0)^3\left(\frac{\alpha d}{\theta _l}\right)^3(\mu _l-\mu _0)^3.$$
(5)
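In practice the correction is a weighted histogram: each galaxy enters with weight $`1/V_{max}`$. A minimal Python sketch (the unit handling, the retained constant 1.84, and the final normalization are illustrative choices on our part):

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # radians per arcsec

def v_max(h_kpc, mu0, mu_l=25.0, theta_l=25.0):
    """Maximum volume (eqs. 4-5, constants kept) in which an exponential
    disk of scale length h [kpc] and central surface brightness mu0
    satisfies both the isophotal limit mu_l and the diameter limit
    theta_l [arcsec]; returns kpc^3."""
    d_max = 1.84 * h_kpc * (mu_l - mu0) / (theta_l * ARCSEC)   # kpc
    return (4.0 * np.pi / 3.0) * d_max**3

def phi(mu0, h_kpc, bins):
    """Volume-corrected distribution, eq. (1), normalized to unity."""
    w = 1.0 / v_max(np.asarray(h_kpc), np.asarray(mu0))
    counts, edges = np.histogram(mu0, bins=bins, weights=w)
    return counts / counts.sum(), edges
```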
## 3 The Surface Brightness Distribution, $`\varphi (\mu _0)`$
Figure 5 shows the results of applying the correction given in equation 5 to the O’Neil, Bothun and Cornell (1997) data, using the redshifts available in OBS. The limiting diameter was set to 25” and $`\mu _l`$ = 25.0 mag arcsec<sup>-2</sup>. This corresponds to an approximate minimum physical diameter of 3 kpc (so again, these are non-dwarf galaxies). For the OBS survey, the limiting central surface brightness was found through an extensive series of computer models, in which Monte-Carlo-type simulations of the images were created and searched for galaxies (O’Neil, Bothun, & Cornell 1997). As the true underlying galaxy distribution of the computer-generated images was known, the detection cut-off could be well determined. As such, $`\mu _l`$, and thus V<sub>max</sub>, for the OBS catalog is extremely well known.
Because the OBS sample is not uniformly distributed in space, but instead follows the same large scale structure as the high surface brightness (HSB) galaxies in the region (i.e. Figure 2 of OBS), performing a V/V<sub>max</sub> test on the galaxies and normalizing the distribution function to that (i.e. de Jong 1996) would be extremely difficult and possibly misleading at best. In practice, the OBS sample lies in a shell bounded by radial velocities of 4000 and 12000 km sec<sup>-1</sup>. The data for this sample, as well as for the comparison samples, have therefore been normalized to one (Figure 5). Additionally, to insure against bias due to under-sampling within a bin, the data from OBS were binned to 0.5 mag arcsec<sup>-2</sup>. The error bars for these data are simply $`\sqrt{N}/N`$. The low values of the surface brightness distribution between 22 – 23 mag arcsec<sup>-2</sup> are artificial, caused by the 22.0 B mag arcsec<sup>-2</sup> cut-off imposed on the survey sample in the OBS catalog. This was not corrected for.
The data from this survey extend the faint end of the surface brightness distribution function in a horizontal line from 23.0 mag arcsec<sup>-2</sup> through 25.0 mag arcsec<sup>-2</sup>, the survey cut-off, matching the predictions made by, e.g., McGaugh (1996) and Impey & Bothun (1997). Our distribution does, however, contradict some of the data points of both de Jong (1996) and Davies (1990) in the 23.0 – 24.0 mag arcsec<sup>-2</sup> range, where the de Jong and Davies samples appear to dip downwards.
The first, and most obvious, explanation for the discrepancy between our data and those of both Davies (1990) and de Jong (1996) is that the volumetric corrections for one or more of these samples were done incorrectly, either through mis-identifying the selection limits of the survey or through poor statistical sampling. The OBS sample is designed to look for galaxies above the 22.0 mag arcsec<sup>-2</sup> range, and is therefore complete through 24.0 mag arcsec<sup>-2</sup>, with the uncorrected data having a flat surface brightness distribution from 22.5 through 24.0 mag arcsec<sup>-2</sup> (i.e. Figure 8 of O’Neil 1998). Additionally, the surface brightness and diameter cut-offs for the OBS sample were determined through computer modeling (O’Neil, Bothun, & Cornell 1997) and therefore are well determined. In contrast, the de Jong sample ranges in central surface brightness from approximately 20.0 mag arcsec<sup>-2</sup> through 24.1 mag arcsec<sup>-2</sup>, with the majority of the galaxies lying in the 21.0 – 22.0 mag arcsec<sup>-2</sup> range. The volumetric corrections for the de Jong sample are concerned only with the diameter limit ($`\theta _l`$) imposed on the survey and do not account for potential surface brightness selection effects, an omission which de Jong states could cause his survey to be undersampled at the faint-$`\mu `$(0), large scale length and bright-$`\mu `$(0), small scale length ends of the spectrum. Combined with the survey’s under-sampling in the $`\mu `$(0) $`>`$ 22.5 mag arcsec<sup>-2</sup> range, this could result in an artificial drop in the de Jong surface brightness distribution.
Like the de Jong (1996) sample, the Davies (1990) sample covers the entire range of central surface brightnesses, but in this case the total number of galaxies involved in the survey should preclude any difficulties with under-sampling. The Davies sample was corrected for surface brightness selection effects, with a limiting central surface brightness $`\mu _{lim}(0)`$ = 25.6 mag arcsec<sup>-2</sup> and $`\theta _l`$=7”. This low value of $`\theta _l`$ potentially mixed dwarfs and non-dwarfs together, which could greatly confuse the situation. More importantly, no galaxies were actually detected near the defined survey limits. Thus it is entirely possible that the chosen sample limits simply do not accurately reflect the nature of the survey and thus are inappropriate for determining the volumetric correction. This could account for the apparent under-sampling in the 23.25 mag arcsec<sup>-2</sup> bin compared to our data.
Figure 5 shows the results of changing the binning of the OBS sample, from bins of 1.0 mag arcsec<sup>-2</sup> through bins of 0.3 mag arcsec<sup>-2</sup>. The behavior of the surface brightness distribution as the data become undersampled mimics the behavior of the de Jong and Davies samples. It is therefore possible that, as both the de Jong and Davies samples are primarily HSB galaxy samples, they are relatively undersampled in the lower surface brightness regions.
At this point the importance of the chosen value of $`\mu _{lim}`$ should also be noted. Choosing a value which is fainter (or brighter) than the true survey limit will result in an artificial lowering (raising) of the surface brightness distribution slope at faint $`\mu _0`$. This is not surprising, as it is simply a statement that if a survey is believed to extend to, say, 26.0 mag arcsec<sup>-2</sup> and yet detects no objects with $`\mu _0`$ $``$ 25.0 mag arcsec<sup>-2</sup>, it would be accurate to assume that a fall-off in galaxy number density at faint ($`\mu _0`$ $``$ 25.0) surface brightness is occurring. The OBS sample is no exception to this rule. Were $`\mu _{lim}`$ taken to be 26.0, a slight decline in the slope of the surface brightness distribution function, beginning at 24.0 mag arcsec<sup>-2</sup>, would be evident. As $`\mu _{lim}`$ was carefully determined for the OBS sample, though, it should be an accurate representation of the survey’s true limitations. With this and the above considerations in mind, it is likely that the flat surface brightness distribution given by the OBS sample through 24.5 – 25.0 mag arcsec<sup>-2</sup> is an accurate representation of the surface brightness distribution in the local (z$`<`$0.05) universe. The implication of such a flat distribution remains profound.
## 4 LSB Galaxies and the Baryon Fraction
In 1991 Walker et al. used the latest nuclear cross sections to calculate the primordial light-element abundances (D, <sup>3</sup>He, <sup>4</sup>He, and <sup>7</sup>Li) within the framework of the standard hot big bang cosmological model. Their calculations led to a nucleon-to-photon ratio of
$$2.8\le n_b/n_\gamma \times 10^{10}\le 4.0$$
and a baryon density parameter of $`\mathrm{\Omega }_Bh_{100}^2`$ = 0.0125 $`\pm `$ 0.0025. Estimating the known baryonic mass density of the universe through $`\mathrm{\Omega }_B`$ = $`\mathrm{\Omega }_{E/S0}`$ \+ $`\mathrm{\Omega }_{Sp}`$ \+ $`\mathrm{\Omega }_{clusters}`$ \+ $`\mathrm{\Omega }_{groups},`$ Persic and Salucci (1992) found $`\mathrm{\Omega }_B`$ = (2.2 + 0.6$`h_{100}^{-3/2}`$) x 10<sup>-3</sup>, showing that 70% – 80% of the predicted baryonic mass is not even accounted for in standard galaxy catalogs.
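The 70% – 80% shortfall follows directly from the two numbers above; a quick Python check (taking $`h_{100}=1`$, and with the sign of the Persic & Salucci exponent as reconstructed here):

```python
def omega_b_bbn(h100=1.0):
    """BBN prediction of Walker et al. (1991): Omega_B h100^2 = 0.0125."""
    return 0.0125 / h100**2

def omega_b_catalog(h100=1.0):
    """Luminous-baryon census of Persic & Salucci (1992); the negative
    exponent is our reading of the garbled original."""
    return (2.2 + 0.6 * h100**-1.5) * 1e-3

missing = 1.0 - omega_b_catalog() / omega_b_bbn()
print(f"fraction of BBN baryons not in catalogs (h100 = 1): {missing:.0%}")
# prints ~78%, i.e. the 70%-80% shortfall quoted above
```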
In their review, Impey & Bothun (1997) speculate that if the M/L of LSB galaxies is higher than in HSB galaxies, as is indicated by most LSB galaxy studies (i.e. McGaugh & de Blok 1997; OBS), if the faint end of the luminosity function has a slope of $`-`$1.6 to $`-`$1.8 (i.e. de Propris et al. 1995; Bothun, Impey, & Malin 1991), and if the central surface brightness distribution remains flat through 28.0 – 30.0 B mag arcsec<sup>-2</sup>, then the total contribution to the baryonic mass from galaxies is $`\mathrm{\Omega }_Bh_{100}^2`$ = 0.014 – 0.020, well within the bounds set by Walker et al. Whether the underlying distribution is flat through 28.0 B mag arcsec<sup>-2</sup> or falls off after 26.0 B mag arcsec<sup>-2</sup> awaits deeper data. Nevertheless, the results of this study fortify those of McGaugh, Bothun, & Schombert (1995) and McGaugh (1996) and strongly suggest that a lot of baryons are contained in potentials that host very diffuse and hard-to-detect galaxies.
## 5 Conclusion
Using a bivariate volume correction, we extend the surface brightness distribution function in a horizontal line from the Freeman value of 21.65 B mag arcsec<sup>-2</sup> through 25.0 B mag arcsec<sup>-2</sup>, the limit of the OBS catalog. This result is consistent with previous studies (e.g. McGaugh, Bothun, & Schombert 1995; Dalcanton et al. 1997) but extends them to fainter surface brightness levels. Our result is somewhat inconsistent with the findings of two previous surveys in the 23.0 – 24.0 B mag arcsec<sup>-2</sup> range (Davies 1990; de Jong 1996). However, our survey was designed specifically to detect galaxies in this surface brightness range and has well defined survey limits ($`\theta _l,\mu _l`$), which gives us considerable confidence in our results. It can therefore be said that LSB galaxies are both important in their own right and very significant contributors to the total baryonic mass in the universe.
REFERENCES
Abraham, R. G., Valdes, F., Yee, H., van den Bergh, S. 1994 ApJ 432, 75
Armandroff, T., Davies, J., Jacoby, G. 1998 AJ 110, 2287
Arp, H. 1965 ApJ 142, 402
Bothun, G., Impey, C., & McGaugh, S. 1997 PASP 109, 745
Bothun, G., Impey, C., & Malin, D. 1991 ApJ 376, 404
Cowie, L. L., Songaila, A., Hu, E. M., & Cohen, J. G. 1996 AJ 112, 839
Dalcanton, J., et al. 1997 AJ 114, 635
Davies, J. 1990 MNRAS 244, 8
Davies, J., Phillipps, S., Disney, M. 1988 MNRAS 231, 69
de Jong, R. 1996 A&A 313, 46
de Propris, R., Pritchet, C., Harris, W., & McClure, R. 1995 ApJ 450, 524
Disney, M., & Phillipps, S. 1983 MNRAS 205, 1253
Disney, M. 1976, Nature 263, 573
Ellis, R. S., Colless, M., Broadhurst, T., Heyl, J., & Glazebrook, K. 1996 MNRAS 280, 235
Freeman, K. 1970 ApJ 160, 811
Impey, C., & Bothun, G. 1997 ARA&A 35, 267
Impey, C., Bothun, G., & Malin, D. 1988 ApJ 330, 634
Lilly, S. J., Le Fevre, O., Crampton, D., Hammer, F., & Tresse, L. 1995 ApJ 455, 50
Malin, D. 1978 Nature 276, 591
Matthews, L., & Gallagher, J. 1997 AJ 114, 1899
McGaugh, S., & de Blok, W. J. G. 1997 MNRAS 290, 533
McGaugh, S. 1996 MNRAS 280, 337
McGaugh, S., Bothun, G., & Schombert, J. 1995 AJ 110, 573
Morris, S., et al. 1999 ApJ, in press
O’Neil, K., Bothun, G., & Impey, C. 1999, preprint
O’Neil, K., Bothun, G., & Schombert, J. 1999, preprint
O’Neil, K. 1998 Ph.D. Dissertation, University of Oregon, Eugene
O’Neil, K., Bothun, G., & Cornell, M. 1997 AJ 113, 1212
Persic, M. & Salucci, P. 1992 MNRAS 258, 14
Schombert, J., & Bothun, G. 1988 AJ 95, 1389
Schwartzenberg, J., et al. 1995 MNRAS 275, 121
Steidel, C., Giavalisco, M., Pettini, M., Dickinson, M., & Adelberger, K. 1996 ApJ 462, 17
Walker, T., et al. 1991 ApJ 376, 51
# Acknowledgements
We are grateful to Professor T. Yamanaka for a discussion on the results and details of the KTeV experiment.
Figure 1: Exclusion regions for resonance production of the first KK graviton excitation in (a) the Drell-Yan (corresponding to the diagonal lines) and dijet (represented by the bumpy curves) channels at the Tevatron and (b) Drell-Yan production at the LHC. (a) The solid curves represent the results for Run I, while the dashed, dotted curves correspond to Run II with 2, 30 fb⁻¹ of integrated luminosity, respectively. (b) The dashed, solid curves correspond to 10, 100 fb⁻¹. The excluded region lies above and to the left of the curves.
The large disparity between the electroweak and apparent fundamental scale of gravity, known as the hierarchy problem, is a primary mystery of particle physics. Traditionally, new symmetries, particles, or interactions have been introduced at the electroweak scale to stabilize this hierarchy. However, it is possible that our 4-dimensional vision of gravity does not represent the full theory and that the observed value of the Planck scale, $`M_{Pl}`$, is not truly fundamental. A scenario of this type due to Arkani-Hamed, Dimopoulos, and Dvali (ADD) proposes the existence of $`n`$ additional compact dimensions and relates the fundamental $`4+n`$ dimensional Planck scale, $`M`$, to our effective 4-dimensional value through the volume of the compactified dimensions, $`M_{Pl}^2=V_nM^{2+n}`$. Setting $`M\sim 1`$ TeV to remove the above hierarchy necessitates a large size for the extra dimensions, with a compactification scale of $`\mu _c=1/r_c\sim `$ eV–MeV for $`n=2`$–$`7`$. This, unfortunately, introduces another hierarchy between $`\mu _c`$ and $`M`$, which must somehow be stabilized. Nonetheless, this scenario has received much attention as it affords concrete phenomenological tests. Since it is experimentally determined that the Standard Model (SM) fields do not feel the effects of additional dimensions of this size, they are confined to a wall, or 3-brane, while gravity is allowed to propagate freely in the full higher-dimensional space, or bulk. Kaluza-Klein (KK) towers of gravitons, which can interact with the wall fields, result from compactification of the bulk. The coupling of each KK excitation is $`M_{Pl}`$ suppressed; however, the mode spacing is determined by $`\mu _c`$ and is thus very small compared to typical collider energies. This allows the summation over an enormous number of KK states which can be exchanged or emitted in a physical process, thereby reducing the summed suppression from $`1/M_{Pl}`$ to $`1/M`$, or $`\sim `$ TeV<sup>-1</sup>. This has resulted in a vast array of phenomenological and astrophysical studies, with present collider data bounding $`M\gtrsim 1`$ TeV for all $`n`$, and Supernova 1987A cooling and $`\gamma `$ ray flux constraints setting $`M\gtrsim 50`$–$`110`$ TeV for $`n=2`$ only.
An alternative higher dimensional scenario has recently been proposed by Randall and Sundrum (RS), where the hierarchy is generated by an exponential function of the compactification radius, called a warp factor. They assume a 5-dimensional non-factorizable geometry, based on a slice of $`AdS_5`$ spacetime. Two 3-branes, one being visible with the other being hidden, with opposite tensions reside at $`S^1/Z_2`$ orbifold fixed points, taken to be $`\varphi =0,\pi `$, where $`\varphi `$ is the angular coordinate parameterizing the extra dimension. The solution to Einstein’s equations for this configuration, maintaining 4-dimensional Poincare invariance, is given by the 5-dimensional metric
$$ds^2=e^{-2\sigma (\varphi )}\eta _{\mu \nu }dx^\mu dx^\nu +r_c^2d\varphi ^2,$$
(1)
where the Greek indices run over ordinary 4-dimensional spacetime, $`\sigma (\varphi )=kr_c|\varphi |`$ with $`r_c`$ being the compactification radius of the extra dimension, and $`0\le |\varphi |\le \pi `$. Here $`k`$ is a scale of order the Planck scale and relates the 5-dimensional Planck scale $`M`$ to the cosmological constant. Similar configurations have also been found to arise in M/string-theory. An extension of this scenario where the higher dimensional space is non-compact, i.e., $`r_c\to \infty `$, is discussed in Ref. and several aspects of this and related ideas have been investigated in Ref. .
$$\overline{M}_{Pl}^2=\frac{M^3}{k}(1-e^{-2kr_c\pi })$$
(2)
for the reduced effective 4-D Planck scale. Assuming that we live on the 3-brane located at $`|\varphi |=\pi `$, it is found that a field on this brane with the fundamental mass parameter $`m_0`$ will appear to have the physical mass $`m=e^{-kr_c\pi }m_0`$. TeV scales are thus generated from fundamental scales of order $`M_{Pl}`$ via a geometrical exponential factor and the observed scale hierarchy is reproduced if $`kr_c\simeq 12`$. Hence, due to the exponential nature of the warp factor, no additional large hierarchies are generated. In fact, it has been demonstrated that the size of $`\mu _c`$ in this scenario can be stabilized without fine tuning of parameters, making this theory very attractive.
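A quick numerical check of this exponential suppression (a sketch only; the reduced Planck mass value of $`2.4\times 10^{18}`$ GeV is a standard number assumed here, not quoted in the text):

```python
import math

kr_c = 12.0                          # k r_c ~ 12, as quoted above
warp = math.exp(-math.pi * kr_c)     # warp factor e^{-k r_c pi}
M_Pl_bar = 2.4e18                    # reduced Planck mass in GeV (assumed value)

print(f"warp factor = {warp:.2e}")                 # ~4.2e-17
print(f"M_Pl x warp = {M_Pl_bar * warp:.0f} GeV")  # ~100 GeV, i.e. the weak scale
```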
The graviton KK spectrum is quite different in this scenario than in the case with factorizable geometry, resulting in a distinctive phenomenology. As we will see below, the masses and couplings of each individual KK excitation are determined by the scale $`\mathrm{\Lambda }_\pi =\overline{M}_{Pl}e^{-kr_c\pi }\sim `$ TeV. This implies that these KK states can be separately produced on resonance with observable rates at colliders up to the kinematic limit. We will examine the cases of KK graviton production in Drell-Yan and dijet events at hadron colliders as well as the KK spectrum line-shape at high-energy linear $`e^+e^-`$ colliders. In the circumstance where a resonance is observed, we outline the procedure to be employed in order to uniquely determine the parameters of this model. In the case where no direct production is observed, we compute the bounds on the parameter space in the contact interaction limit. We find that data from present accelerators already place meaningful constraints on the parameter space of this scenario. The phenomenology of these KK gravitons is similar in spirit to the production of traditional KK excitations of the SM gauge fields, but differs in detail because of the form of the KK wavefunction due to the non-factorizable metric and their spin.
We now calculate the mass spectrum and couplings of the graviton KK modes in the effective 4-dimensional theory on the 3-brane at $`\varphi =\pi `$. The starting point is the 5-dimensional Einstein’s equation for the RS configuration, which is given in Ref. . We parameterize the tensor fluctuations $`h_{\alpha \beta }`$ by taking a linear expansion of the flat metric about its Minkowski value, $`\widehat{G}_{\alpha \beta }=e^{-2\sigma }\left(\eta _{\alpha \beta }+\kappa ^*h_{\alpha \beta }\right)`$, where $`\kappa ^*`$ is an expansion parameter. In order to obtain the mass spectrum of the tensor fluctuations, we consider the 4-dimensional $`\alpha \beta `$ components of Einstein’s equation with the replacement $`G_{\alpha \beta }\to \widehat{G}_{\alpha \beta }`$, keeping terms up to $`𝒪(\kappa ^*)`$. We work in the gauge with $`\partial ^\alpha h_{\alpha \beta }=h_\alpha ^\alpha =0`$. Upon compactification the graviton field $`h_{\alpha \beta }`$ is expanded into a KK tower
$$h_{\alpha \beta }(x,\varphi )=\underset{n=0}{\overset{\mathrm{}}{}}h_{\alpha \beta }^{(n)}(x)\frac{\chi ^{(n)}(\varphi )}{\sqrt{r_c}},$$
(3)
where the $`h_{\alpha \beta }^{(n)}(x)`$ correspond to the KK modes of the graviton on the background of Minkowski space on the 3-brane. In a gauge where $`\eta ^{\alpha \beta }\partial _\alpha h_{\beta \gamma }^{(n)}=\eta ^{\alpha \beta }h_{\alpha \beta }^{(n)}=0`$, the equation of motion of $`h_{\alpha \beta }^{(n)}`$ is given by
$$\left(\eta ^{\alpha \beta }\partial _\alpha \partial _\beta -m_n^2\right)h_{\mu \nu }^{(n)}(x)=0,$$
(4)
corresponding to the states with masses $`m_n\ge 0`$. Using the KK expansion (3) for $`h_{\alpha \beta }`$ in $`\widehat{G}_{\alpha \beta }`$, Einstein’s equation in conjunction with the above equation of motion yields the following differential equation for $`\chi ^{(n)}(\varphi )`$
$$-\frac{1}{r_c^2}\frac{d}{d\varphi }\left(e^{-4\sigma }\frac{d\chi ^{(n)}}{d\varphi }\right)=m_n^2e^{-2\sigma }\chi ^{(n)}.$$
(5)
The orthonormality condition for $`\chi ^{(n)}`$ is found to be $`\int _{-\pi }^{\pi }d\varphi \,e^{-2\sigma }\chi ^{(m)}\chi ^{(n)}=\delta _{mn}`$. In deriving Eq. (5), we have used $`\left(d\sigma /d\varphi \right)^2=(kr_c)^2`$ and $`d^2\sigma /d\varphi ^2=2kr_c\left[\delta (\varphi )-\delta (\varphi -\pi )\right]`$, as required by the orbifold symmetry for $`\varphi \in [-\pi ,\pi ]`$. The solutions for $`\chi ^{(n)}`$ are then given by
$$\chi ^{(n)}(\varphi )=\frac{e^{2\sigma }}{N_n}\left[J_2(z_n)+\alpha _nY_2(z_n)\right],$$
(6)
where $`J_2`$ and $`Y_2`$ are Bessel functions of order 2, $`z_n(\varphi )=m_ne^{\sigma (\varphi )}/k`$, $`N_n`$ represents the wavefunction normalization, and $`\alpha _n`$ are constant coefficients.
Defining $`x_n\equiv z_n(\pi )`$, and working in the limit that $`m_n/k\ll 1`$ and $`e^{kr_c\pi }\gg 1`$, the requirement that the first derivative of $`\chi ^{(n)}`$ be continuous at the orbifold fixed points yields
$$\alpha _n\simeq x_n^2e^{-2kr_c\pi },\quad \text{and}\quad J_1(x_n)=0,$$
(7)
so that the $`x_n`$ are simply roots of the Bessel function of order 1. Note that the masses of the graviton KK excitations, given by $`m_n=kx_ne^{-kr_c\pi }`$, are dependent on the roots of $`J_1`$ and are not equally spaced, in contrast to most KK models with one extra dimension. For $`x_n\ll e^{kr_c\pi }`$, we see that $`\alpha _n\ll 1`$, and hence $`Y_2(z_n)`$ can be neglected compared to $`J_2(z_n)`$ in Eq. (6). We thus obtain for the normalization
$$N_n\simeq \frac{e^{kr_c\pi }}{\sqrt{kr_c}}J_2(x_n);\quad n>0,$$
(8)
and the normalization of the zero mode is simply $`N_0=1/\sqrt{kr_c}`$.
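Since the spectrum is fixed by the roots of $`J_1`$, it is straightforward to tabulate. A minimal sketch using SciPy’s Bessel-zero routine (the choice $`m_1=600`$ GeV matches the example used later in the text):

```python
from scipy.special import jn_zeros

x = jn_zeros(1, 5)        # first roots of J_1: 3.8317, 7.0156, 10.1735, ...
print(x / x[0])           # mass ratios m_n/m_1: 1, 1.83, 2.66, 3.48, 4.30

m1 = 600.0                # if the first excitation sits at 600 GeV ...
print(m1 * x[1:] / x[0])  # ... the next ones lie at ~1099, 1593, 2086, 2579 GeV
```

The unequal spacing of these ratios is the distinctive signature noted above.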
Having found the solutions for $`\chi ^{(n)}`$, we can now derive the interactions of $`h_{\alpha \beta }^{(n)}`$ with the matter fields on the 3-brane. Starting with the 5-dimensional action and imposing the constraint that we live on the brane at $`\varphi =\pi `$, we find the usual form of the interaction Lagrangian in the 4-dimensional effective theory,
$$\mathcal{L}=-\frac{1}{M^{3/2}}T^{\alpha \beta }(x)h_{\alpha \beta }(x,\varphi =\pi ),$$
(9)
where $`T_{\alpha \beta }(x)`$ is the symmetric conserved Minkowski space energy-momentum tensor of the matter fields and we have used the definition $`\kappa ^*=2/M^{3/2}`$. Expanding the graviton field into the KK states of Eq. (3) and using the above normalization in Eq. (8) for $`\chi ^{(n)}(\varphi )`$ we find via Eq. (2)
$$\mathcal{L}=-\frac{1}{\overline{M}_{Pl}}T^{\alpha \beta }(x)h_{\alpha \beta }^{(0)}(x)-\frac{1}{\mathrm{\Lambda }_\pi }T^{\alpha \beta }(x)\sum _{n=1}^{\infty }h_{\alpha \beta }^{(n)}(x).$$
(10)
Here we see that the zero mode separates from the sum and couples with the usual 4-dimensional strength, $`\overline{M}_{Pl}^{-1}`$; however, all the massive KK states are only suppressed by $`\mathrm{\Lambda }_\pi ^{-1}`$, where we find that $`\mathrm{\Lambda }_\pi =e^{-kr_c\pi }\overline{M}_{Pl}`$, which is of order the weak scale.
Our calculations have been performed with the assumption $`k<M`$ with $`M\sim \overline{M}_{Pl}`$, so that the 5-dimensional curvature is small compared to $`M`$ and the solution for the bulk metric can be trusted. This implies that the ratio $`k/\overline{M}_{Pl}`$ cannot be too large and we take $`k/\overline{M}_{Pl}\le 1`$ in our analysis below. As we will see, the value of this ratio is central to the phenomenological investigation of this model. In order to get a feel for the natural size of this parameter, we perform a simple estimate using string theoretic arguments. The string scale $`M_s`$ can be related to $`\overline{M}_{Pl}`$ in 4-dimensional heterotic string theories by $`M_s\simeq g_{_{YM}}\overline{M}_{Pl}`$, where $`g_{_{YM}}`$ is the 4-dimensional Yang-Mills gauge coupling constant, and the tension $`\tau _3`$ of a $`D`$ 3-brane is given by
$$\tau _3=\frac{M_s^4}{g(2\pi )^3},$$
(11)
where $`g`$ is the string coupling constant. For $`g_{_{YM}}\simeq 0.7`$ and $`g\simeq 1`$, we find $`\tau _3\simeq 10^{-3}\overline{M}_{Pl}^4`$. In the RS scenario, the magnitude of the 3-brane tension is given by $`V=24\overline{M}_{Pl}^2k^2`$. Requiring that $`V=\tau _3`$ suggests
$$\frac{k}{\overline{M}_{Pl}}\simeq 10^{-2}.$$
(12)
We take the range $`0.01\le k/\overline{M}_{Pl}\le 1`$ in our phenomenological analysis; however, the above discussion suggests that string theoretic and curvature considerations favor the lower end of this range. We note that recent work on gauge unification in a modified RS scenario also favors smaller values for this ratio.
Constraints on the parameters of this model can be obtained by direct collider searches for the first graviton excitation at the Tevatron or LHC. The cleanest signal for graviton resonance production will be either an excess in Drell-Yan events, $`q\overline{q},gg\to G^{(1)}\to \ell ^+\ell ^-`$ (in analogy to searches for extra neutral gauge bosons), or in the dijet channel, $`q\overline{q},gg\to G^{(1)}\to q\overline{q},gg`$. Note that gluon-gluon initiated processes now contribute to Drell-Yan production. This differs from the ADD scheme where individual resonances associated with graviton exchange are not observable due to the tiny mode spacing. Using the above Lagrangian (10), the production cross section, decay widths, and branching fractions relevant for graviton production can be obtained in a straightforward manner. We assume that the first excitation only decays into SM states, so that for a fixed value of the first graviton excitation mass, $`m_1`$, the value of $`k/\overline{M}_{Pl}`$ completely determines all of the above quantities. In fact, the total width is found to be proportional to $`(k/\overline{M}_{Pl})^2`$. Keeping in mind that theoretic arguments favor a smaller value for this parameter, and to get a handle on the possible constraints that arise from these channels, we employ the narrow width approximation. This is strictly valid only for values of $`k/\overline{M}_{Pl}\lesssim 0.3`$ but well approximates the true search reach obtained via a more complete analysis. We then compare our results with the existing Tevatron bounds. The lack of any signal for a new resonance in either the Drell-Yan or dijet channel in the data then provides a constraint on $`k/\overline{M}_{Pl}`$ for any given value of $`m_1`$ as shown in Fig. 1(a). We also perform a similar analysis to estimate the future $`95\%`$ C.L. parameter exclusion regions at both Run II at the Tevatron and at the LHC under the assumption that no signal is found; these results are displayed in Figs. 1(a) and (b). The dijet constraints for Run II were estimated by a simple luminosity (and $`\sqrt{s}`$) rescaling of the published Run I results. Note that the Drell-Yan and dijet channels play complementary roles at the Tevatron in obtaining these limits. We expect a dijet search at the LHC to yield poor results due to the large QCD background at this higher center-of-mass energy.
The discovery of the first graviton excitation as a resonance at a collider will immediately allow the determination of all of the fundamental model parameters through measurements of its mass and width, $`m_1`$ and $`\mathrm{\Gamma }_1`$, respectively. To demonstrate this, we make use of the two relations $`\mathrm{\Lambda }_\pi =m_1\overline{M}_{Pl}/kx_1`$ and $`\mathrm{\Gamma }_1=\rho m_1x_1^2(k/\overline{M}_{Pl})^2`$, where $`x_1`$ is the first non-zero root of the $`J_1`$ Bessel function and $`\rho `$ is a constant which depends on the number of open decay channels; it is fixed provided we assume that the graviton decays only to SM fields. Using these relations we immediately find that $`r_c=\mathrm{ln}[kx_1/m_1]/k\pi `$ with $`k=\overline{M}_{Pl}[\mathrm{\Gamma }_1/m_1\rho x_1^2]^{1/2}`$. In addition, the spin-2 nature of the graviton can be determined via angular distributions of its decay products.
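A sketch of this inversion is given below. The inputs are purely hypothetical: $`m_1`$, $`\mathrm{\Gamma }_1`$, and the channel-dependent constant $`\rho `$ are placeholders, not measured or computed values, and the reduced Planck mass is an assumed standard number.

```python
import math

x1 = 3.8317                            # first zero of J_1
M_Pl_bar = 2.4e18                      # reduced Planck mass in GeV (assumed value)
m1, Gamma1, rho = 600.0, 10.0, 0.09    # HYPOTHETICAL mass, width (GeV), width constant

k_over_MPl = math.sqrt(Gamma1 / (rho * m1 * x1**2))
Lambda_pi = m1 / (x1 * k_over_MPl)                # = m1 * M_Pl / (k * x1)
k = k_over_MPl * M_Pl_bar
kr_c = math.log(k * x1 / m1) / math.pi            # from m1 = k x1 e^{-k r_c pi}

print(f"k/M_Pl = {k_over_MPl:.3f}, Lambda_pi = {Lambda_pi/1e3:.2f} TeV, k*r_c = {kr_c:.1f}")
```

With these example inputs the script returns $`k/\overline{M}_{Pl}\approx 0.11`$, $`\mathrm{\Lambda }_\pi \approx 1.4`$ TeV, and $`kr_c\approx 11`$, of the size anticipated above.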
To exhibit how the tower of graviton excitations may appear at a collider, Fig. 2 displays the cross section for $`e^+e^-\to \mu ^+\mu ^-`$ as a function of $`\sqrt{s}`$, assuming $`m_1=600`$ GeV and taking various values of $`k/\overline{M}_{Pl}`$ for purposes of demonstration. We see that for small values of $`k/\overline{M}_{Pl}`$ the gravitons appear as ever widening peaks and are almost regularly spaced, with the widths and the spacing both being dependent on successive roots of $`J_1`$. However, as $`k/\overline{M}_{Pl}`$ grows, the peaks become too wide to be identified as true resonances and the classic KK signature of successive peaks becomes lost. Instead, it would appear experimentally that there is an overall large enhancement of the cross section, similar to what might be expected from a contact interaction. One may worry that at some point the cross section may grow so large as to violate the partial wave unitarity bound of $`\sigma _U=20\pi /s`$, which is appropriate to the case of initial and final fermion states with helicity of 1. However, even for values of $`k/\overline{M}_{Pl}`$ as large as unity we find that unitarity will not be violated until $`\sqrt{s}`$ is at least several TeV.
In the circumstance that gravitons are too massive to be directly produced at colliders, their contributions to fermion pair production may still be felt via virtual exchange. For smaller values of $`k/\overline{M}_{Pl}`$, this would be similar to observing the effects of the SM $`Z`$ boson before the resonance turns on, or for larger values, to searching for contact interactions. The 4-fermion matrix element is easily computed from the Lagrangian (10) and is seen to reproduce that derived for the scenario of ADD with large extra factorizable dimensions with the replacement
$$\frac{\lambda }{M_s^4}\to \frac{1}{8\mathrm{\Lambda }_\pi ^2}\sum _{n=1}^{\infty }\frac{1}{s-m_n^2}.$$
(13)
The advantage in this scenario over the factorizable case is that there are no divergences associated with performing the sum since there is only one new dimension, and hence uncertainties associated with the introduction of a cut-off do not appear. In the limit of $`m_n^2\gg s`$, the sum over the KK graviton propagators becomes $`-[k\mathrm{\Lambda }_\pi /\overline{M}_{Pl}]^{-2}\sum _n1/x_n^2`$, which rapidly converges. The $`95\%`$ C.L. search reach in the $`\mathrm{\Lambda }_\pi `$–$`k/\overline{M}_{Pl}`$ plane is given in Fig. 3 for various (a) $`e^+e^-`$ and (b) hadron colliders. In $`e^+e^-`$ annihilation we have examined the unpolarized (and polarized for the case of high energy linear colliders) angular and $`\tau `$ polarization distributions, summing over $`e,\mu ,\tau ,c,b`$ (and $`t`$, if kinematically accessible) final states, and included initial state radiation, heavy quark tagging efficiencies, an angular cut around the beam pipe, and $`90\%`$ beam polarization where applicable. For hadron colliders we examined the lepton pair invariant mass spectrum and forward-backward asymmetry in Drell-Yan production, for both $`e`$ and $`\mu `$ final states. We also investigated the case where the first two excitations are too close to the collider center-of-mass energy to use the approximation $`m_n^2\gg s`$. The bounds in $`e^+e^-`$ annihilation for this case are given by the solid curves in Fig. 3(a). We see that there is very little difference in the resulting constraints.
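The rapid convergence of this sum is easy to verify numerically; in fact, the Rayleigh identity for Bessel zeros gives $`\sum _n1/x_n^2=1/8`$ exactly for the zeros of $`J_1`$. A quick check:

```python
from scipy.special import jn_zeros

x = jn_zeros(1, 1000)            # zeros of J_1
print((1 / x**2).sum())          # 0.1249... -> approaches 1/8
print((1 / x[:5]**2).sum())      # ~0.107 already from the first five terms
```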
As a last point, we note that whereas graviton tower emission was an important probe of the ADD scenario, this is no longer true in the RS model since the graviton states are so massive and can be individually examined on resonance.
In this paper we have explored the phenomenological implications of the Randall-Sundrum localized gravity model of non-factorizable 5-dimensional spacetime, and contrasted it with the ADD scenario. We ($`i`$) derived the interaction of the KK tower of gravitons with the SM fields, ($`ii`$) obtained limits on the model parameters using existing data from colliders, both through direct production searches and via virtual exchange contributions, and estimated what future colliders can do to extend these bounds. ($`iii`$) We described the appearance of KK tower production at high energy linear colliders, the possible loss of the conventional KK signature of successive peaks due to the ever growing widths of these excitations, and ($`iv`$) demonstrated how measurements of the properties of the first KK state would completely determine the model parameters.
We find the scenario of gravity localization to be theoretically very attractive, and even more importantly, to have distinctive experimental tests. We hope that future experiment will eventually reveal the existence of higher dimensional spacetime.
Acknowledgements:
We would like to thank M. Dine, M. Schmaltz, and M. Wise for beneficial discussions.
# Timing Spectroscopy of Quasi-Periodic Oscillations in the Low-Mass X-ray Neutron Star Binaries
## 1 Introduction
KiloHertz quasi-periodic oscillations (QPO) in low mass X-ray neutron star (NS) binaries (LMXB’s) discovered by the Rossi X-Ray Timing Explorer (Strohmayer et al. 1996, van der Klis et al. 1996, Zhang et al. 1996) made it possible to evaluate quantitatively the previously suggested beat-frequency model (Alpar & Shaham 1985). Most of the nineteen sources with QPO have twin peaks in the kHz part of the spectrum (lower frequency marked by K and the higher one by h in the spectrum of the source Sco X-1 shown in Figure 1). Within the beat-frequency interpretation, the upper kHz QPO represents the Keplerian frequency (van der Klis et al. 1996) while the lower kHz QPO is produced by the beating of this frequency with the NS spin frequency, $`\nu _{spin}`$, which has been identified with the frequency of nearly coherent oscillations observed during bursts, $`\nu _{burst}`$, or half that (Strohmayer et al. 1996, Miller et al. 1998). According to Mendez & van der Klis (1999), “if this interpretation is correct, $`\mathrm{\Delta }\nu `$ equals $`\nu _{spin}`$ and should remain constant. The NS spin cannot change by $`20\%`$–$`30\%`$ on time scales of days to months.” For a number of sources it has been shown that $`\mathrm{\Delta }\nu =\nu _h-\nu _K`$ decreases systematically with $`\nu _K`$ (van der Klis et al. 1997, Psaltis et al. 1998, Markwardt, Strohmayer & Swank 1999, Mendez et al. 1998, Mendez & van der Klis 1999). These recent findings have undermined the beat-frequency model. The new (two-oscillator) model (Osherovich and Titarchuk 1999a and Titarchuk and Osherovich 1999, hereafter OT99a and TO99, respectively) assumes that the lower frequency $`\nu _K`$ is the Keplerian frequency. Following Titarchuk, Lapidus & Muslimov (1998, hereafter TLM), we assume a viscous transition layer in the NS disk (see Figure 1) which is bounded by the solid surface of the NS (the inner boundary) and by the first Keplerian orbit (the outer boundary). The adjustment of the Keplerian disk to the sub-Keplerian inner boundary creates conditions favorable for the formation of a hot blob at the outer boundary of the transition layer (TLM). When thrown out into the magnetosphere, the blob becomes a Keplerian oscillator under the influence of the Coriolis force with two modes: a) a radial mode with frequency
$$\nu _h=[\nu _K^2+(\mathrm{\Omega }/\pi )^2]^{1/2}$$
(1)
where $`\mathrm{\Omega }`$ is the angular velocity of the rotating magnetosphere, and b) a mode perpendicular to the disk with frequency
$$\nu _L=(\mathrm{\Omega }/\pi )(\nu _K/\nu _h)\mathrm{sin}\delta $$
(2)
where $`\delta `$ is the angle between $`𝛀`$ and the vector normal to the plane of the Keplerian oscillations ($`\delta \ll 1`$). From the observed $`\nu _h`$ and $`\nu _K`$, using (1), one finds $`\mathrm{\Omega }`$ as a function of $`\nu _K`$ and verifies the existence of the angle $`\delta `$ which satisfies equation (2) for all observed $`\nu _K`$ and $`\nu _h`$. For Sco X-1, it was found that $`\delta =5.5\pm 0.5^o`$ (OT99a). The second oscillator in the two-oscillator model has the frequency of viscous oscillations $`\nu _\mathrm{v}`$, for which the dependence of $`\nu _\mathrm{v}`$ on $`\nu _K`$ is obtained through dimensional analysis of the radial transport of angular momentum in the transition layer, controlled by a viscous force determined by the Reynolds number, which, in turn, is related to the mass accretion rate (TLM, TO99). The solution for $`\nu _\mathrm{v}`$ can be presented in the form
$$\nu _\mathrm{v}=f(\nu _K)C_N$$
(3)
where $`f(\nu _K)`$ is a universal function for any NS and the constant $`C_N`$ reflects the properties of the specific source. The diffusive process in the transition layer is characterized by a break frequency $`\nu _b`$ (see Figure 1) which is related to $`\nu _\mathrm{v}`$ by a power law (TO99):
$$\nu _b\propto \nu _\mathrm{v}^{1.61}.$$
(4)
For the source 4U 1728-34, the coefficient of proportionality is found to be 0.041 (TO99). The angle $`\delta `$, the constant $`C_N`$ and the profile of $`\mathrm{\Omega }`$ determine the frequency range of $`\nu _L`$, $`\nu _\mathrm{v}`$ and $`\nu _b`$. Some of the QPOs have been described previously within the framework of the two-oscillator model (TO99). For the first time, we identify all predicted frequency branches and compare the inferred main physical parameters for two stars (section 2, 3 and 4). In section 5, we extend the comparison of the theoretical relation (4) with observed spectra from other atoll sources and Z-sources. The summary and discussion are presented in section 6.
## 2 Classification of QPOs in Sco X-1
The frequencies of the observed QPOs for Sco X-1 are plotted in Figure 2 as a function of $`\nu _K`$. From the observed $`\nu _h`$ (upper hybrid frequency marked by asterisks in Figure 2) and $`\nu _K`$, the profile of $`\mathrm{\Omega }(\nu _K)`$ has been calculated and modeled using the theoretically inferred magnetic multipole structure of the differentially rotating magnetosphere (OT99a). The resulting $`\mathrm{\Omega }`$ profile is
$$\mathrm{\Omega }/2\pi =C_0-C_1\nu _K^{4/3}+C_2\nu _K^{8/3}-C_3\nu _K^4$$
(5)
where $`C_2=2(C_1C_3)^{1/2}`$. The constants $`C_0\equiv \mathrm{\Omega }_0/(2\pi )=345`$ Hz, $`C_1=3.29\times 10^{-2}`$ Hz<sup>-1/3</sup>, $`C_2=1.017\times 10^{-5}`$ Hz<sup>-5/3</sup> and $`C_3=7.76\times 10^{-10}`$ Hz<sup>-3</sup> have been obtained by a least-squares fit with $`\chi ^2=37.6/39`$ (OT99a). Using equation (5) for $`\mathrm{\Omega }`$, from formula (2) we have calculated $`\nu _L`$ for different $`\delta `$. The best fit to the data presented by open circles in Figure 2 has been found for $`\delta =5.5^o`$. The second harmonic of the low branch, $`2\nu _L\approx 90`$ Hz, observed by van der Klis et al. (1997), has also been fitted by our theoretical curve in Figure 2. We employ the distance determination in the Lebesgue function space $`L_2`$ (e.g. Korn & Korn 1961, §15.2.2), defining the relative Lebesgue measure $`\epsilon _{RL}`$ as the square root of the ratio of the sum of squared differences (between the theoretical and observational values) to the sum of squares of the theoretical values. Presumably $`\epsilon _{RL}`$ is a better measure of the difference between observations and theory than the $`\chi ^2`$ criterion, given the actual large uncertainty in determining the QPO and break frequencies. This uncertainty is likely significantly larger than the statistical errors because the low frequency features are broad and lie atop continuum noise of unknown shape. If the data set is not uniform, one can use a second measure for better control: the rms deviation $`\epsilon _{rms}`$, defined as the ratio of the rms to the mean value of the frequency range (cf. Bendat & Piersol 1971). Typically $`\epsilon _{RL}\approx \epsilon _{rms}`$; for $`\nu _L`$ in Fig. 2, $`\epsilon _{RL}=\epsilon _{rms}=`$ 4.6%.

The frequencies $`\nu _L`$ and $`2\nu _L`$ are marked as HBO in the original spectra for Sco X-1 QPOs presented by Wijnands & van der Klis 1999 (Figure 3 in their paper). In the same figure, the break frequency was marked (with a question mark) and an extra noise component was also shown ($`\nu _\mathrm{v}`$ in our interpretation). We have also analyzed the archival data obtained by the RXTE observatory in the course of Sco X-1 monitoring on May 25-28, 1996. Standard tasks from FTOOLS v4.2 were used in our data analysis according to the RXTE Cook Book. To obtain broad band power density spectra (PDS), the original 1/8192 s data were regrouped in 32 s segments with 1/4096 s time resolution, and each spectrum was calculated over a continuous time interval with a constant offset. The PDS obtained were fit over the whole range (1/32 – 2048 Hz) using King and Lorentzian profiles. Lorentzian components were added to the model when their statistical significance was higher than $`4\sigma `$. In order to characterize the break in each PDS, we define the “break frequency” $`\nu _b`$ as the frequency at which the power drops to 50% of the low frequency plateau value (in terms of the King model).

Our confidence in the identification of $`\nu _b`$ and $`\nu _\mathrm{v}`$ (Figs. 1-2) for Sco X-1 is based on two tests. First, the observations for the two branches in the lower part of the spectra satisfy our theoretical relation (4), namely $`\nu _b=0.041\nu _\mathrm{v}^{1.61}`$. Second, we verify that the extra noise component has a frequency which satisfies the theoretically derived dependence of $`\nu _\mathrm{v}`$ on $`\nu _K`$.
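The branch frequencies follow directly from relations (1), (2), and (5). The short script below is a sketch only — it simply evaluates the fitted Sco X-1 constants quoted above, with $`\delta =5.5^o`$ — and reproduces the observed ranges of the upper hybrid and low branches.

```python
import numpy as np

# Sco X-1 constants of eq. (5), as quoted above
C0, C1, C2, C3 = 345.0, 3.29e-2, 1.017e-5, 7.76e-10

def omega_2pi(nu_k):                  # magnetospheric rotation Omega/2pi, eq. (5)
    return C0 - C1*nu_k**(4/3) + C2*nu_k**(8/3) - C3*nu_k**4

def nu_h(nu_k):                       # upper hybrid frequency, eq. (1)
    return np.sqrt(nu_k**2 + (2*omega_2pi(nu_k))**2)   # Omega/pi = 2*(Omega/2pi)

def nu_L(nu_k, delta_deg=5.5):        # low branch, eq. (2)
    return 2*omega_2pi(nu_k) * (nu_k/nu_h(nu_k)) * np.sin(np.radians(delta_deg))

nu_k = np.array([600.0, 700.0, 800.0, 850.0])
print(nu_h(nu_k))   # ~[904, 986, 1055, 1082] Hz
print(nu_L(nu_k))   # ~[43, 47, 50, 50] Hz
```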
In formula (3), the universal function $`f(\nu _K)`$ obtained numerically from a solution for radial viscous oscillations at the outer edge of the transition region, in fact, can be approximated by a polynomial
$$\nu _\mathrm{v}=C_N(a_1\nu _K+a_2\nu _K^2+a_3\nu _K^3+a_4\nu _K^4)$$
(6)
where $`a_1=1.5033\times 10^{-3}`$, $`a_2=1.2289\times 10^{-6}`$ Hz<sup>-1</sup>, $`a_3=1.176\times 10^{-8}`$ Hz<sup>-2</sup> and $`a_4=1.2289\times 10^{-11}`$ Hz<sup>-3</sup>. The best fit for $`\nu _\mathrm{v}`$ ($`\epsilon _{RL}=8.4`$%), presented in Figure 2 by a solid line (marked viscous), is found for $`C_N=9.76`$. The line for the related $`\nu _b`$, with the observational points (marked by filled circles) closely matching the theory ($`\epsilon _{RL}=13.9`$%), is also shown in Figure 2. Notice that in the vicinity of $`\nu _K\approx 800`$ Hz the line for $`\nu _\mathrm{v}`$ crosses the line for $`\nu _L`$, suggesting the possibility of interactions of the two oscillators.
## 3 Classification of the QPOs in the Source 4U 1728-34
The classification of QPOs in the source 4U 1728-34 is presented in Figure 3. The profile of $`\mathrm{\Omega }/2\pi `$ has been modelled using observations for $`\nu _h`$ (Mendez & van der Klis 1999) and formula (5) with constants $`C_0=382`$ Hz, $`C_1=5.475\times 10^{-2}`$ Hz<sup>-1/3</sup>, $`C_2=1.5132\times 10^{-5}`$ Hz<sup>-5/3</sup> and $`C_3=1.0492\times 10^{-9}`$ Hz<sup>-3</sup> for $`\chi ^2=10.5/8`$ (OT99a). A survey of the observed QPO frequencies in the source 4U 1728-34 was presented in Ford & van der Klis (1998, hereafter FVK). We identify the 100 Hz Lorentzian frequency (presented in Table 1 there) with $`2\nu _L`$. The best fit is found for $`\delta =8.3\pm 1.0^o`$, using the extrapolation of $`\mathrm{\Omega }(\nu _K)`$ into the range of $`\nu _K`$ below 550 Hz and also the observed $`\nu _L`$ for $`\nu _K>800`$ Hz. The previously identified $`\nu _\mathrm{v}`$ (TO99) corresponds to the LF Lorentzian frequency in the above-mentioned Table 1 of FVK, and $`\nu _b`$ corresponds to the broken PL break frequency for three days: March 1, February 24, and February 18. The data for February 16, as explained in FVK and TO99, have to be treated separately (the observed $`4.83\pm 0.69`$ Hz oscillation does not belong to the $`\nu _b`$ branch). For this source, the constant $`C_N`$ is found to be equal to 7.1. In comparison with the Sco X-1 classification (Figure 2), the $`\nu _\mathrm{v}`$ and $`\nu _b`$ branches for the source 4U 1728-34 have a much wider range and therefore are verified with greater confidence: $`\epsilon _{RL}=8.5`$% for $`\nu _\mathrm{v}`$ and $`\epsilon _{RL}=19.5`$% for $`\nu _b`$. The branches $`\nu _L`$ and $`2\nu _L`$ are not observed simultaneously, but still match the theoretical curves ($`\epsilon _{RL}=21`$% for $`\nu _L`$).
## 4 Comparison Between QPO Spectra of Sco X-1 and Atoll Source 4U 1728-34
Due to the slightly higher $`\nu _h`$ for source 4U 1728-34, the resulting $`\mathrm{\Omega }`$ profile has a higher frequency than the $`\mathrm{\Omega }`$ profile for Sco X-1. For Sco X-1, $`\mathrm{\Omega }_{max}/2\pi \approx 348`$ Hz, while for source 4U 1728-34, $`\mathrm{\Omega }_{max}/2\pi \approx 415`$ Hz. But even more important for the values of $`\nu _L`$ is the difference in $`\delta `$ between the two stars. For the source 4U 1728-34, $`\delta =8.3\pm 1.0^o`$ and the $`2\nu _L`$ branch in Figure 3 changes from 124 Hz ($`\nu _K=353`$ Hz) to 182 Hz ($`\nu _K=788`$ Hz). The same $`2\nu _L`$ branch for Sco X-1 stays below 100 Hz for all observed $`\nu _K`$ due to the smaller $`\delta =5.5\pm 0.5^o`$ and the smaller $`\mathrm{\Omega }`$ \[see formula (2)\]. On the other hand, for Sco X-1 the constant $`C_N=9.76`$ is greater than $`C_N=7.1`$ for source 4U 1728-34. Therefore, the frequency of viscous oscillations $`\nu _\mathrm{v}`$ and the related $`\nu _b`$ are higher for Sco X-1 for the same $`\nu _K`$, as shown in Figures 2 and 3. For the same Reynolds number, the greater $`C_N`$ implies a higher viscosity (TO99).
## 5 Relation Between the Break Frequencies $`\nu _b`$ and Viscous Frequency $`\nu _\mathrm{v}`$
The relation between $`\nu _b`$ and $`\nu _\mathrm{v}`$ has been approximated by a power law (4) (TO99). Recent observations of Wijnands & van der Klis (1999) make it possible to verify formula (4) for a number of atoll sources and Z-sources. In Figure 4, the observed $`\nu _b`$ are plotted vs. $`\nu _\mathrm{v}`$ on a log-log scale. The open circles correspond to Z-sources and asterisks represent data for atoll sources. The straight theoretical line confirms that the power law $`\nu _b\approx 0.041\nu _\mathrm{v}^{1.61}`$ is correct for atoll sources. When $`\nu _b`$ is multiplied by a factor of 4, data for Z-sources match the same straight line (closed circles represent $`4\nu _b`$ for Z-sources in Figure 4). Thus for Z-sources
$$\nu _b\approx 0.01\nu _\mathrm{v}^{1.61}.$$
(7)
It is conceivable that the factor of 4 difference in the proportionality constant is due to the dominant role of different eigenmodes of the diffusive process for Z-sources in comparison with atoll sources. If this is the case, then relation (7) corresponds to the first mode and formula (4) represents the second mode of break frequency (see e.g. Titarchuk 1994 for the details of the diffusion theory). In general, the diffusion equation suggests $`\nu _b(n)\propto n^2`$, where $`n`$ is the integer number of the corresponding eigenmode. The overall $`\epsilon _{RL}`$ including the Z-source reduced points (closed circles) is 27%.
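A minimal numerical illustration of the proposed mode scaling (the $`n^2`$ dependence is normalized to relation (7) for $`n=1`$, so that $`n=2`$ gives a coefficient of 0.04, close to the 0.041 of relation (4); the $`\nu _\mathrm{v}`$ values are arbitrary):

```python
def nu_break(nu_v, n):
    # nu_b(n) ~ n^2 scaling, normalized to eq. (7) for the first mode (n = 1)
    return 0.01 * n**2 * nu_v**1.61

for nu_v in (10.0, 30.0, 60.0):                       # arbitrary viscous frequencies [Hz]
    print(nu_v, nu_break(nu_v, 1), nu_break(nu_v, 2)) # Z-source vs. atoll break values
```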
## 6 Summary and Discussion
In this Letter we have identified all predicted frequency branches for two stars. Their frequencies compare favorably with the two-oscillator model. For the source 4U 1728-34, the angle $`\delta `$ is found ($`\delta =8.3\pm 1.0^o`$). This is the third star for which $`\delta `$ has been inferred from QPO observations. For Sco X-1, $`\delta =5.5\pm 0.5^o`$ (OT99a) and for source 4U 1702-42, $`\delta =3.9\pm 0.2^o`$ (Osherovich & Titarchuk 1999b). The predicted relation (4) has been confirmed for Sco X-1 and source 4U 1728-34 and also for a number of other atoll and Z-sources. The difference by a factor of 4 between the values of $`\nu _b`$ for the two groups of stars suggests the participation of two modes in the diffusive process in the viscous transition layer in the disk. By comparison, Sco X-1 has greater viscosity in the transition layer than the source 4U 1728-34, assuming the same Reynolds number. We stress the identification of the lowest of the twin kHz frequencies $`\nu _K`$ as a Keplerian frequency, in contrast to the previous views. The solutions for $`\nu _\mathrm{v}`$ and $`\nu _b`$ have been found by integration along the radius $`R`$. The conversion of the resulting profiles into the frequency domain is done according to the formula for the Keplerian frequency $`\nu _K`$ (e.g. TO99, Eq. 1). The same is true for the modelling of the $`\mathrm{\Omega }`$ profile. The resulting striking correspondence between the observed spectra of QPO (Figures 2 and 3) and the predictions of the two-oscillator model we view as confirmation of the correct identification. An extrapolation of the $`\mathrm{\Omega }`$ profile is not expected to be accurate, as the data are insufficient to allow a reliable estimate of the NS spin frequency. The complexity of the QPO phenomena still leaves unsolved problems such as the $`\sim 1`$ Hz and $`\sim 6`$ Hz QPO phenomena.
We are grateful to Eric Ford, Mariano Mendez, Rudy Wijnands and Michiel van der Klis for the data and Joe Fainberg, Bob Stone and particularly the referee for the fruitful suggestions.
# The Structure of the Local Universe and the Coldness of the Cosmic Flow
## 1. Introduction: Cosmic Migrations versus Local Chills
When speaking in terms of the motions of the objects populating the immediate vicinity of our Local Group, i.e. out to a distance of several tens of Megaparsec, our cosmic neighbourhood represents a rather chilly sector of the Universe. Rather than resembling a buzzing hive of galaxies rushing crisscross through space without any well-defined destination, we appear to be participating in a highly organized and coherent matter stream advancing towards a seemingly preordained direction. These streams are the instruments through which vast amounts of matter get channelled from their initial locations in the pristine and almost featureless primordial Universe towards the sites where matter accumulates in the process of building up the pronounced and complex patterns that nowadays we recognize in the large scale distribution of galaxies.
Within the gravitational instability scenario of structure formation, it is these same continuously waxing density excesses and depressions that induce the cosmic migration flows through their combined gravitational action. It is this intimate dynamical link between the distribution of matter, the induced cosmic flows, and the emergence of structure in the Universe that prompted substantial interest in the characteristics of the observed cosmic velocity field as useful fossil probes of the structure formation process. The amount of matter residing within the observed structures will be directly reflected in the corresponding matter streams. Hence, in a high-$`\mathrm{\Omega }`$ Universe we expect the cosmic flows to involve higher velocities. This is true on any scale, whether it concerns the large scale bulk motions associated with the assembly of the characteristic foamlike patterns encountered on scales of tens of Megaparsec or structures on much smaller scales, whose dynamical timescales are so short that they have evolved to highly nonlinear stages in which the matter currents got (partially) “thermalized” and therefore lost the memory of their original state.
Early assessment of the small-scale random motions of galaxies, estimated on the basis of pairwise velocity dispersions, revealed that locally the Universe is rather cold. While we participate in a bulk flow of $`\sim 600`$ km/s, the random velocities with respect to the mean flow are estimated to be in the range of a mere 200–300 km/s. This low value of the velocity dispersion, in combination with the pronounced structure displayed by the distribution of galaxies, was in fact a strong argument for the latter being a biased tracer of the underlying matter distribution. The assumption of bias, in particular in the form of the (overly) simplistic linear bias factor $`b`$, would then imply that the matter distribution has not evolved as far as suggested by the pronounced nature of the galaxy distribution, and hence would be in agreement with the low value of the “thermal” motions in the local Universe.
## 2. Potential Eccentricities and Tidal Stresses
In an attempt to interpret the significance of the coldness of the local cosmic flow, we put forward an alternative view. Rather than interpreting the coldness of the flow as a property of the global Universe, we argue that the solution of the issue lies in our rather atypical local cosmic vicinity. Within a distance of a few tens of Megaparsec we have not yet reached a fully representative volume of the Universe. This can be discerned rather straightforwardly from the local galaxy distribution. Even more explicit, however, is the situation when assessing the velocities of galaxies in our cosmic neighbourhood. These velocities reflect the underlying gravitational force and potential field, both having far larger coherence lengths than the density distribution. Hence, the volume probed by peculiar galaxy velocity surveys represents a rather restricted dynamical probe, unlikely to come anywhere near fairly sampling the overall Universe.
Meticulous analysis of the peculiar velocities of galaxies in the immediate cosmic neighbourhood have unveiled the nearby superstructures of the Great Attractor (GA) and the Perseus-Pisces supercluster (PP) as the dynamically dominating protagonists in the local cosmic tug of war. Although there are certainly several other contenders pulling their weight, be it in the form of nearby local structures or far-away monsters like the Shapley concentration, their contribution is unlikely to represent more than a moderate modification to the basic dynamical constellation set by the GA and the PP. Moreover, there does not appear to be any compelling evidence for the existence of dynamically influential mass concentrations beyond a distance of $`150h^{-1}\text{Mpc}`$.
A telling illustration of this preponderance of GA and PP is shown in the contour map of figure 1. It contains a reconstruction of the linear gravitational potential field (Gaussian scale $`R_f=5h^{-1}\text{Mpc}`$) in our local Universe, within a region of size $`160h^{-1}`$ Mpc centered on our Local Group. Shown is a planar section approximately coinciding with the Supergalactic Plane. It is based on the set of measured peculiar velocities of galaxies in the Mark III catalogue (Willick et al. 1997), processed by means of the Wiener filter reconstruction technique developed by Zaroubi et al. (1995). Evidently, the gravitational potential is dominated by two huge potential wells, the Great Attractor on the lefthand side and the Pisces-Perseus supercluster region on the righthand side. Also the pdf of the gravitational potential (insert in figure 1), displaying an atypical shoulder, corroborates the atypical nature of the local gravitational potential. Moreover, seemingly we find ourselves located right near the centre of a configuration strongly reminiscent of that of a canonical quadrupolar pattern, with two massive density enhancements along the horizontal axis and void regions concentrated around the perpendicular bisecting plane. The direct implication of the morphology depicted in figure 1 is that we are located near the saddle point of a strong field of tidal shear. The red edges in figure 1, superposed on the potential contours and having a size and direction proportional to the strength of the compression along the indicated direction, represent the compressional component of this implied tidal force. Evidently, the tidal shear is very strong within the realm of the two huge matter concentrations where the density reaches high values. Very important to note, however, is that the tidal shear is indeed very strong near our own position as well, a region of rather modest density, due to the fact that we are located roughly halfway in between the Great Attractor and the Perseus-Pisces chain.
Such a region of moderate to low local density in combination with a strong external tidal field may be expected to experience a different kinematical evolution from that of a similar isolated site. Its contraction will not be dominated by its own self-gravity alone; external tidal forces of a similar order of magnitude will substantially shear the corresponding matter flows and lead to an anisotropic collapse. Although collapse may be accelerated along the compressional direction (Icke 1973), the shearing along the other directions may readily delay virialization and thus yield an ungenerically cold region.
## 3. Cosmic Moulding
To assess the viability of the heuristic picture sketched in the preceding section, we set out on an exploration of the kinematical evolution of a Local Group-like region in an appropriate large-scale setting. The issue at stake involves the “thermalization” of a small-scale feature like our Local Group, definitely a nonlinear phenomenon, but also the influence of large-scale linearly or quasi-linearly evolving structures setting the external force field. No analytical approximations are known that would describe such situations to any satisfactory extent. This prompted our investigation by means of N-body simulations of the evolution of configurations resembling the local cosmic vicinity.
Of crucial importance to this study is the issue of an appropriate representation of the local Universe. We might restrict ourselves to some of the known properties of the Local Group, as for instance the density perturbation it represents, its peculiar velocity of $`600\text{km/s}`$, or maybe even some additional environmental requirements like the presence of a few nearby massive clusters. However, as indicated by e.g. Van de Weygaert & Bertschinger (1996) such single localized constraints still leave ample freedom for the overall matter distribution. So much freedom that we cannot be guaranteed to actually assess the appropriate cosmological situation. We would not be able to assure the correct spatial extent and structure of the large scale environment and consequently would also fail in having the correct temporal development of the local force fields. Because we argue that it is precisely the collusion between the specific localized conditions and the particular configuration of the large-scale environment that may offer the explanation for the coldness of the local cosmic flow, we choose to set up optimally moulded initial conditions for our simulations instead of taking large random realizations and searching for objects that appear to be rather similar to our own Local Group.
Focussing specifically on the dynamical and kinematical evolution of the local Universe, we invoke the observational information yielded by surveys of galaxy peculiar velocities, arguably the best available objective source on the mass distribution in the cosmic neighbourhood. On sufficiently large scales these velocities reflect directly the mass distribution, so that we can use them to distill an optimally significant reconstruction of the prevailing primordial linear density field in the local Cosmos. We achieve this by applying a Wiener filter algorithm to the sample of measured peculiar velocities of galaxies in the Mark III catalogue (Willick et al. 1997), yielding a reconstruction that is optimal in terms of having a maximum signal/noise ratio (see Zaroubi et al. 1995). The Wiener filtered field in Figure 1 represents our cosmic environment on linear scales of $`R_f>5h^{-1}\text{Mpc}`$. Such a reconstruction restricts itself to regions that are still within the linear regime, and whose statistical properties are still Gaussian.
We discard further observational constraints on the small-scale clumpiness and motions in the local Universe. Instead, we generate and superpose several realizations of small-scale density and velocity fluctuations according to a specific power spectrum of fluctuations, with a global cosmological background specified by $`H_0`$ and $`\mathrm{\Omega }_0`$, and with the small-scale noise being appropriately modulated by the large-scale Wiener filter reconstructed density field. To this end we invoke the technique of constrained random fields (see Hoffman & Ribak 1991, van de Weygaert & Bertschinger 1996), with the Wiener filtered field $`𝐬_{wn}`$ playing the role of the “mean field”. This setup allows us to test the likelihood of a cold local patch of the Universe embedded within a large scale environment reminiscent of the observed one in the local Universe.
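A minimal one-dimensional sketch of the Hoffman–Ribak construction invoked here (the Gaussian covariance model and the imposed constraint values are purely illustrative, not taken from the Mark III data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.arange(n)

# Prior covariance of the Gaussian field (illustrative Gaussian correlation function)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 10.0**2)
f_unc = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Linear constraints c = H f: here, field values imposed at three grid points
idx = [30, 100, 170]
c = np.array([1.5, -0.8, 2.0])                 # imposed "mean field" values
H = np.zeros((3, n)); H[range(3), idx] = 1.0

# Hoffman-Ribak: f_c = f_unc + C H^T (H C H^T)^{-1} (c - H f_unc)
f_c = f_unc + C @ H.T @ np.linalg.solve(H @ C @ H.T, c - H @ f_unc)

print(f_c[idx])   # the constrained realization reproduces the constraints exactly
```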
## 4. Local Universe in a computer shell
A particular constrained realization of a patch of the Universe resembling the primordial density field in our local cosmic neighbourhood is depicted in the upper lefthand frame of figure 2. It concerns a constrained realization of our Local Universe for the Standard Cold Dark Matter scenario, with $`\mathrm{\Omega }_0=1.0`$ and $`H_0=50\text{km/s/Mpc}`$. Its subsequent evolution is followed by means of a P<sup>3</sup>M N-body code. The resulting distribution of the particles in a central slice through the simulation box is shown in the upper righthand frame of figure 2. Clearly recognizable are massive concentrations of matter at the locations where in the real Universe we observe the presence of the Great Attractor region (slightly “north” of the “west” direction) and the Perseus-Pisces region (slightly “south” of the “east”). It is interesting to see how vast and extended these regions in fact are, certainly not to be identified with well-defined singular objects.
## 5. The local cosmic weather: Cool and Stormy
The corresponding velocity field is shown in the lower lefthand frame. To appreciate the small-scale and large-scale contributions to the peculiar velocity field, in figure 3 we have decomposed the full velocity field (central frame) into the large-scale bulk flow $`𝐯_{bulk}`$ (lefthand, vectors in the top frame, amplitude contour plot in the lower frame) and the residual small-scale “velocity dispersion” $`𝐯_\sigma `$ (for technical details see Van de Weygaert & Hoffman 1999). The decomposition criterion is set by top-hat filtering the velocity field at the scale of nonlinearity, $`R_{TH}=8h^{-1}\text{Mpc}`$. Note the striking matter displacement pattern revealed by the bulk flow field and its close relationship to the large-scale features emerging in the particle distribution (figure 2). Equally interesting is the fact that the small-scale velocity dispersion field also appears to bear the marks of underlying large-scale features: not only do we see substantial “thermal” velocities at the sites of cluster concentrations, but we can also recognize sizeable small-scale velocities near the locations of filaments. When we compare the contour plots of the bulk motion and the dispersion velocities, we can already discern that while we – located at the centre of the simulation box – are embedded in a region with a high bulk flow, evidently incited by the GA and the PP region, we also find ourselves in a region of exceptionally low velocity dispersion. Indeed a telling reproduction of the observed “coldness” of the local cosmos.
Assessing the relative large-scale and small-scale contributions to the velocity field, we quantify the “coldness” of the cosmic flow by means of the ratio between the local bulk flow velocities $`|𝐯|_{bulk}`$ and the “dispersion” velocity $`\sigma (𝐯)`$,
$$\mathcal{M}\equiv |𝐯|_{bulk}/\sigma (𝐯).$$
(1)
yielding a quantity whose cosmic average was introduced as the “cosmic Mach number” $`\mathcal{M}`$ by Ostriker & Suto (1990). They propagated this quantity as a useful and complementary characterization of structure formation scenarios, quantifying the relative contributions of large and small scale matter perturbations with the great virtue of being insensitive to the amplitude of the power spectrum of density perturbations. We, however, are not so much interested in the cosmic average as in the spatial distribution and coherence of the point-to-point Mach number $`\mathcal{M}(𝐱)`$, and its relation to the underlying matter field. For the simulation mentioned above and illustrated in figure 2 and figure 3, we show the spatial structure of the Mach number field in the lower righthand frame of figure 2. A comparison with the corresponding density map reveals the interesting aspect of a large coherent band of high Mach number values running from the lower lefthand side to the upper righthand side of the simulation box, avoiding both the Great Attractor region and the Pisces-Perseus region, situated approximately in between those two complexes. Superposed on this large-scale pattern are a plethora of small-scale features. For our purpose the most significant of these is the fact that we – situated near the centre – appear to be right near a towering peak of the Mach number distribution. Zooming in on the Mach number field we can readily appreciate this in figure 4, in the contour map frames. We should note that as the cosmic environment was specified on a scale of $`R_g>5h^{-1}\text{Mpc}`$ we cannot really accurately pinpoint our own location to within a region of a similar size. In this respect it is very interesting to see a small Local Group-size clump of particles near the peak of the Mach number distribution (top righthand frame fig. 4). It might indeed be the ultimate illustration of a Local Group-like object, cold, relatively isolated, but a member of a huge coherent complex. Even within a high-$`\mathrm{\Omega }`$ Universe such a configuration may lead to an uncharacteristically cold situation. This can be most clearly appreciated when comparing the global Mach number one-point probability distribution in the bottom frame of figure 4 with the distribution in limited central regions (sphere in the top frame). Superposed on the global pdf are distributions from central regions in four different realizations, the most shifted one corresponding to the simulation illustrated in the previous figures. Evidently, in two cases nothing exceptional is observed, but in two other cases we observe uncharacteristically “cold” environments. Hence, the atypical local velocity field realization may make us prone to infer flawed conclusions with respect to the global Universe, and we should take care to take into account the rather particular spatial local matter configuration in which we are embedded.
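A schematic of how such a point-to-point Mach number map can be computed on a gridded velocity field (a sketch only, on stand-in random data; the smoothing length in grid cells stands in for the $`8h^{-1}`$ Mpc top-hat):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 64, 64, 64))   # stand-in 3-component velocity field

size = 8   # top-hat smoothing length in grid cells (stands in for 8 Mpc/h)
v_bulk = np.stack([uniform_filter(v[i], size=size, mode='wrap') for i in range(3)])
v_res = v - v_bulk                         # small-scale residual motions

bulk_speed = np.sqrt((v_bulk**2).sum(axis=0))
sigma = np.sqrt(uniform_filter((v_res**2).sum(axis=0), size=size, mode='wrap'))

mach = bulk_speed / sigma                  # M(x) = |v|_bulk / sigma(v)
print(mach.mean(), mach.max())
```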
### Acknowledgments.
We are grateful to A. Dekel for his permission to use the reconstructed density field based on the Mark III, and to B. Jones and E. Branchini for encouraging remarks.
## References
Hoffman, Y., Ribak, E., 1991, ApJ, 380, L5
Icke, V., 1973, A&A, 27, 1
Ostriker, J.P., Suto, Y., 1990, ApJ, 348, 378
van de Weygaert, R., Bertschinger, E., 1996, MNRAS, 281, 84
van de Weygaert, R., Hoffman, Y., 1999, ApJ, submitted
Willick, J.A., Courteau, S., Faber, S.M., Burstein, D., Dekel, A., Strauss, M.A., 1997, ApJS, 109, 333
Zaroubi, S., Hoffman, Y., Fisher, K.B., Lahav, O., 1995, ApJ, 449, 446
# NUCLEAR DATA REQUIREMENTS FOR THE PRODUCTION OF MEDICAL ISOTOPES IN FISSION REACTORS AND PARTICLE ACCELERATORS
## 1 Introduction
Medical radioisotope production is receiving increased attention due to the many advances in nuclear medicine. In addition to further development in diagnostic nuclear medicine, pioneering work is being done in therapeutic applications of radioisotopes. For example, radiolabeled monoclonal antibodies are being used to treat leukemia and lymphoma, brachytherapy is being used to treat prostate cancer, radioactive stents are being used to prevent restenosis (reclogging of arteries) following angioplasty treatment of coronary heart disease, and radioisotopes are being used to palliate the excruciating bone pain associated with metastatic cancer.
Continued success in developing cures for cancer, and ultimately in treating large numbers of cancer patients, is adversely impacted by the lack of knowledge of certain neutron capture cross sections for medically important radioisotopes. Without these data, medical radioisotope production cannot be optimized. Optimization is critical not only for economic reasons, but also for applications requiring the production of very high specific activity radioisotopes. In many cases, trial and error with mixed success is still the norm for producing certain radioisotopes of medical significance.
## 2 Reactor-Spectrum Data Needs
As an example, the thermal and resonance integral cross sections are known for <sup>186</sup>W and <sup>187</sup>W , but not for <sup>188</sup>W. Thus, optimal production of the medically important radioisotope <sup>188</sup>W may not be realized, since calculations related to the design of <sup>186</sup>W targets, their placement in a reactor and irradiation times cannot be performed accurately.
The design of a reactor focused on isotope production needs to consider the neutron cross sections of the medical radioisotopes to be produced, so that proper neutronic conditions can be achieved for optimal radioisotope production; the same neutron cross section information is needed to design targets (and hopefully a reactor) for the optimal production of medical radioisotopes.
Production of medical radioisotopes in fission reactor systems must be optimized with respect to several different parameters: position, target composition, density, configuration, etc. Research is required to determine the needed cross sections. Knowledge of these cross sections will benefit several practical applications and will also provide modern data for many isotopes for which such information was previously unavailable.
The main objective of an initiative to address the cross section deficiencies is to measure the cross sections of greatest projected need. Table 1 identifies several medical radioisotopes for which deficiencies in cross section knowledge impede efficient, high specific activity production. In order to demonstrate how the lack of knowledge of a cross section impacts production results, calculations were made for six important medical isotope products. Table 2 shows these results, comparing values obtained with the known cross sections to those obtained assuming unity cross sections.
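To make the sensitivity concrete, consider the textbook one-group activation balance $`dN/dt=\varphi \sigma N_T\lambda N`$, whose end-of-irradiation activity is $`A(t)=\varphi \sigma N_T(1e^{\lambda t})`$. The sketch below (Python) shows how an uncertainty in $`\sigma `$ propagates directly into the producible activity; all numerical values are placeholders of our own choosing, and real cases such as <sup>188</sup>W production proceed by double neutron capture, which this one-step model deliberately ignores.

```python
import math

def eoi_activity(phi, sigma_barn, n_target, half_life_d, t_irr_d):
    """End-of-irradiation activity (Bq) for single-step activation:
    dN/dt = phi*sigma*N_T - lambda*N  =>  A(t) = phi*sigma*N_T*(1 - e^{-lambda*t}).
    Neglects target burn-up and any precursor chain."""
    lam = math.log(2.0) / (half_life_d * 86400.0)
    rate = phi * sigma_barn * 1e-24 * n_target        # captures per second
    return rate * (1.0 - math.exp(-lam * t_irr_d * 86400.0))

n_t = 1e-3 / 187.0 * 6.022e23        # ~1 mg of a target nuclide with A ~ 187
for sigma in (10.0, 30.0, 100.0):    # barns; spanning an assumed uncertainty
    a = eoi_activity(1e14, sigma, n_t, half_life_d=70.0, t_irr_d=30.0)
    print(f"sigma = {sigma:6.1f} b  ->  A = {a / 3.7e10:6.3f} Ci")
```

A factor-of-ten spread in the assumed cross section produces nearly a factor-of-ten spread in the deliverable activity, which is exactly why target design and irradiation schedules cannot be optimized without the measured values.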
## 3 Medium-Energy Data Evaluations
In the radiation environment of a proton accelerator target, neutron and proton reactions may both contribute significantly to the production of the desired radionuclide. Medium-energy protons may each produce a few tens of neutrons in a high-Z target, each having a significant range and contribution to particle flux. The complexities resulting from the myriad of possible reaction paths, along with spatially varying flux magnitudes and spectra, require the evaluation of pertinent cross sections and fluxes. These are evaluated in sequential calculations with the LAHET Code System LCS — the combination of LAHET and MCNP, or their subsequent combination in MCNPX — with the CINDER’90 nuclide inventory code; in this sequence, cross sections for reactions of protons and medium-energy neutrons are calculated with on-line nuclear models, and evaluated lower-energy neutron reaction cross sections are contained in the CINDER’90 library. This state-of-the-art sequence is used effectively in the analysis of medium-energy designs but requires a significant investment of CPU time.
Nuclear models have also been utilized with the limited available measured cross section data to form evaluations for a growing number of target nuclides. Neutron and proton cross sections from threshold to 1.7 GeV have been evaluated for the stable isotopes of O, F, Ne, Na, Mg, Al, S, Cl, Ar, K, Zn, Ga, Ge, As, Zr, Nb, Mo, Xe, Cs, Ba, La, and Hg — or about 30% of the naturally occurring stable nuclides. These evaluations have used available measured data from the LANL T-2 compilation, the evaluations of the EAF97 library for neutrons below 20 MeV, and calculations with HMS-ALICE, CEM95, and the BERTINI and ISABEL models of LAHET. Samples of the data and evaluations for two of the nearly 700 reactions evaluated to date are shown in Figure 1. Complete results are given in Ref. 10.
## 4 Conclusions
The status of simulation methods and data available for the description of isotope production is fair and improving, but many additional cross section measurements and evaluations are needed. Consequently, further research in obtaining better cross section information will have positive benefits in the field of medical science.
# Four-Neutrino Mixing
## I Introduction
Neutrino oscillations were proposed by B. Pontecorvo more than forty years ago, following an analogy with $`K^0\overline{K}^0`$ oscillations. Today this beautiful quantum mechanical phenomenon is the subject of intensive experimental and theoretical research. Besides the intrinsic interest related to the investigation of the fundamental properties of neutrinos, it is considered to be one of the best ways to explore physics beyond the Standard Model.
The best evidence in favor of the existence of neutrino oscillations has been recently provided by the measurement in the Super-Kamiokande experiment of an up–down asymmetry of high-energy $`\mu `$-like events generated by atmospheric neutrinos:
$$𝒜_\mu \equiv (D_\mu -U_\mu )/(D_\mu +U_\mu )=0.311\pm 0.043\pm 0.01.$$
(1)
Here $`D_\mu `$ and $`U_\mu `$ are, respectively, the number of downward-going and upward-going events, corresponding to the zenith angle intervals $`0.2<\mathrm{cos}\theta <1`$ and $`-1<\mathrm{cos}\theta <-0.2`$. Since the fluxes of high-energy downward-going and upward-going atmospheric neutrinos are predicted to be equal with high accuracy on the basis of geometrical arguments , the Super-Kamiokande evidence in favor of neutrino oscillations is model-independent and provides a confirmation of the indications in favor of oscillations of atmospheric neutrinos found in the Super-Kamiokande experiment itself and in other experiments through the measurement of the ratio of $`\mu `$-like and $`e`$-like events (Kamiokande, IMB, Soudan 2) and through the measurement of upward-going muons produced by neutrino interactions in the rock below the detector (MACRO) . Large $`\nu _\mu \to \nu _e`$ oscillations of atmospheric neutrinos are excluded by the absence of an up–down asymmetry of high-energy $`e`$-like events generated by atmospheric neutrinos and detected in the Super-Kamiokande experiment ($`𝒜_e=0.036\pm 0.067\pm 0.02`$) and by the negative result of the CHOOZ long-baseline $`\overline{\nu }_e`$ disappearance experiment . Therefore, the atmospheric neutrino anomaly consists of the disappearance of muon neutrinos and can be explained by $`\nu _\mu \to \nu _\tau `$ and/or $`\nu _\mu \to \nu _s`$ oscillations (here $`\nu _s`$ is a sterile neutrino that does not take part in weak interactions).
Other indications in favor of neutrino oscillations have been obtained in solar neutrino experiments (Homestake, Kamiokande, GALLEX, SAGE, Super-Kamiokande) and in the LSND experiment .
The flux of electron neutrinos measured in all five solar neutrino experiments is substantially smaller than the one predicted by the Standard Solar Model, and a comparison of the data of the different experiments indicates an energy dependence of the solar $`\nu _e`$ suppression, which represents rather convincing evidence in favor of neutrino oscillations . The disappearance of solar electron neutrinos can be explained by $`\nu _e\to \nu _\mu `$ and/or $`\nu _e\to \nu _\tau `$ and/or $`\nu _e\to \nu _s`$ oscillations .
The accelerator LSND experiment is the only one that claims the observation of neutrino oscillations in specific appearance channels: $`\overline{\nu }_\mu \to \overline{\nu }_e`$ and $`\nu _\mu \to \nu _e`$. Since the appearance of neutrinos with a different flavor represents the true essence of neutrino oscillations, the LSND evidence is extremely interesting and its confirmation (or disproof) by other experiments should receive high priority in future research. Four such experiments have been proposed and are under study: BooNE at Fermilab, I-216 at CERN, ORLaND at Oak Ridge and NESS at the European Spallation Source . Among these proposals only BooNE is approved and will start in 2001.
Neutrino oscillations occur if neutrinos are massive and mixed particles , i.e. if the left-handed components $`\nu _{\alpha L}`$ of the flavor neutrino fields are superpositions of the left-handed components $`\nu _{kL}`$ ($`k=1,\mathrm{},N`$) of neutrino fields with definite mass $`m_k`$:
$$\nu _{\alpha L}=\sum _{k=1}^{N}U_{\alpha k}\nu _{kL},$$
(2)
where $`U`$ is a $`N\times N`$ unitary mixing matrix. From the measurement of the invisible decay width of the $`Z`$-boson it is known that the number of light active neutrino flavors is three , corresponding to $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$ (active neutrinos are those taking part in standard weak interactions). This implies that the number $`N`$ of massive neutrinos is greater than or equal to three. If $`N>3`$, in the flavor basis there are $`N_s=N-3`$ sterile neutrinos, $`\nu _{s_1}`$, …, $`\nu _{s_{N_s}}`$, that do not take part in standard weak interactions. In this case the flavor index $`\alpha `$ in Eq. (2) takes the values $`e,\mu ,\tau ,s_1,\mathrm{},s_{N_s}`$.
## II The necessity of at least three independent $`\mathrm{\Delta }m^2`$’s
The three pieces of evidence in favor of neutrino oscillations found in solar and atmospheric neutrino experiments and in the accelerator LSND experiment imply the existence of at least three independent neutrino mass-squared differences. This can be seen by considering the general expression for the probability of $`\nu _\alpha \to \nu _\beta `$ transitions in vacuum, which can be written as
$$P_{\nu _\alpha \to \nu _\beta }=\left|\sum _{k=1}^{N}U_{\alpha k}^{\ast }U_{\beta k}\mathrm{exp}\left(-i\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\right)\right|^2,$$
(3)
where $`\mathrm{\Delta }m_{kj}^2\equiv m_k^2-m_j^2`$, $`j`$ is any of the mass-eigenstate indices, $`L`$ is the distance between the neutrino source and detector and $`E`$ is the neutrino energy. The range of $`L/E`$ characteristic of each type of experiment is different: $`L/E\sim 10^{11}-10^{12}\mathrm{eV}^{-2}`$ for solar neutrino experiments, $`L/E\sim 10^2-10^3\mathrm{eV}^{-2}`$ for atmospheric neutrino experiments and $`L/E\sim 1\mathrm{eV}^{-2}`$ for the LSND experiment. From Eq. (3) it is clear that neutrino oscillations are observable in an experiment only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that
$$\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\gtrsim 0.1$$
(4)
(the precise lower bound depends on the sensitivity of the experiment) in a significant part of the energy and source-detector distance intervals of the experiment (if the condition (4) is not satisfied, $`P_{\nu _\alpha \to \nu _\beta }\simeq \left|\sum _kU_{\alpha k}^{\ast }U_{\beta k}\right|^2=\delta _{\alpha \beta }`$). Since the range of $`L/E`$ probed by the LSND experiment is the smallest one, a large mass-squared difference is needed for LSND oscillations:
$$\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 10^{-1}\mathrm{eV}^2.$$
(5)
Specifically, the maximum likelihood analysis of the LSND data in terms of two-neutrino oscillations gives
$$0.20\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 2.0\mathrm{eV}^2.$$
(6)
Furthermore, from Eq. (3) it is clear that a dependence of the oscillation probability on the neutrino energy $`E`$ and the source-detector distance $`L`$ is observable only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that
$$\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\gtrsim 1.$$
(7)
Indeed, all the phases $`\mathrm{\Delta }m_{kj}^2L/2E\gg 1`$ are washed out by the average over the energy and source-detector ranges characteristic of the experiment. Since a variation of the oscillation probability as a function of neutrino energy has been observed both in solar and atmospheric neutrino experiments, and the ranges of $`L/E`$ characteristic of these two types of experiments are different from each other and different from the LSND range, two more mass-squared differences with different scales are needed:
$`\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-12}-10^{-11}\mathrm{eV}^2\text{(VO)},`$ (8)
$`\mathrm{\Delta }m_{\mathrm{atm}}^2\sim 10^{-3}-10^{-2}\mathrm{eV}^2.`$ (9)
The condition (8) for the solar mass-squared difference $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ has been obtained under the assumption of vacuum oscillations (VO). If the disappearance of solar $`\nu _e`$’s is due to the MSW effect , the condition
$$\mathrm{\Delta }m_{\mathrm{sun}}^2\lesssim 10^{-4}\mathrm{eV}^2\text{(MSW)}$$
(10)
must be fulfilled in order to have a resonance in the interior of the sun. Hence, in the MSW case $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ must be at least one order of magnitude smaller than $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$.
It is possible to ask if three different scales of neutrino mass-squared differences are needed even if the results of the Homestake solar neutrino experiment are neglected, allowing an energy-independent suppression of the solar $`\nu _e`$ flux. The answer is that the data still cannot be fitted with only two neutrino mass-squared differences, because an energy-independent suppression of the solar $`\nu _e`$ flux requires large $`\nu _e\to \nu _\mu `$ or $`\nu _e\to \nu _\tau `$ transitions generated by $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ or $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$. These transitions are forbidden by the results of the Bugey and CHOOZ reactor $`\overline{\nu }_e`$ disappearance experiments and by the non-observation of an up–down asymmetry of $`e`$-like events in the Super-Kamiokande atmospheric neutrino experiment .
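For concreteness, the general probability (3) is easy to evaluate numerically for any assumed unitary mixing matrix and set of mass-squared differences. The following minimal sketch (Python; the mixing angle, $`\mathrm{\Delta }m^2`$ and $`L/E`$ values are our own illustrative choices, not fitted parameters) implements Eq. (3) and checks it against the familiar two-neutrino formula used in the analyses below:

```python
import numpy as np

def osc_prob(U, dm2, LE, a, b):
    """P(nu_a -> nu_b) from the general vacuum formula, Eq. (3).

    U   : N x N unitary mixing matrix
    dm2 : dm2[k] = m_k^2 - m_1^2 in eV^2 (j = 1 chosen as reference)
    LE  : L/E in natural units (eV^-2)
    """
    phases = np.exp(-1j * np.asarray(dm2) * LE / 2.0)
    amp = np.sum(np.conj(U[a, :]) * U[b, :] * phases)
    return float(np.abs(amp) ** 2)

# two-flavor consistency check against sin^2(2theta) sin^2(dm2 L / 4E)
theta, dm2_21, LE = 0.6, 2.5e-3, 400.0
U2 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])
p_num = osc_prob(U2, [0.0, dm2_21], LE, 0, 1)
p_ref = np.sin(2 * theta) ** 2 * np.sin(dm2_21 * LE / 4.0) ** 2
assert abs(p_num - p_ref) < 1e-12
```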
## III Four-neutrino schemes
The existence of three different scales of $`\mathrm{\Delta }m^2`$ implies that at least four light massive neutrinos must exist in nature. Here we consider the schemes with four light and mixed neutrinos , which constitute the minimal possibility that allows one to explain all the existing data with neutrino oscillations. In this case, in the flavor basis the three active neutrinos $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$ are accompanied by a sterile neutrino $`\nu _s`$ that does not take part in standard weak interactions.
The six types of four-neutrino mass spectra with three different scales of $`\mathrm{\Delta }m^2`$ that can accommodate the hierarchy $`\mathrm{\Delta }m_{\mathrm{sun}}^2\ll \mathrm{\Delta }m_{\mathrm{atm}}^2\ll \mathrm{\Delta }m_{\mathrm{LSND}}^2`$ are shown qualitatively in Fig. III. In all these mass spectra there are two groups of close masses separated by the “LSND gap” of the order of 1 eV. In each scheme the smallest mass-squared difference corresponds to $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ ($`\mathrm{\Delta }m_{21}^2`$ in schemes I and B, $`\mathrm{\Delta }m_{32}^2`$ in schemes II and IV, $`\mathrm{\Delta }m_{43}^2`$ in schemes III and A), the intermediate one to $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ ($`\mathrm{\Delta }m_{31}^2`$ in schemes I and II, $`\mathrm{\Delta }m_{42}^2`$ in schemes III and IV, $`\mathrm{\Delta }m_{21}^2`$ in scheme A, $`\mathrm{\Delta }m_{43}^2`$ in scheme B) and the largest mass-squared difference $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ is relevant for the oscillations observed in the LSND experiment. The six schemes are divided into four schemes of class 1 (I–IV), in which there is a group of three masses separated from an isolated mass by the LSND gap, and two schemes of class 2 (A, B), in which there are two couples of close masses separated by the LSND gap.
## IV The disfavored schemes of class 1
In the following we will show that the schemes of class 1 are disfavored by the data if the negative results of short-baseline accelerator and reactor disappearance neutrino oscillation experiments are also taken into account . Let us remark that in principle one could check which schemes are allowed by doing a combined fit of all data in the framework of the most general four-neutrino mixing scheme, with three mass-squared differences, six mixing angles and three CP-violating phases as free parameters. However, at the moment it is not possible to perform such a fit because of the enormous complications due to the presence of too many parameters and to the difficulties involved in a combined fit of the data of different experiments, which are usually analyzed by the experimental collaborations using different methods. Hence, we think that it is quite remarkable that one can exclude the schemes of class 1 with the following relatively simple procedure.
Let us define the quantities $`d_\alpha `$, with $`\alpha =e,\mu ,\tau ,s`$, in the schemes of class 1 as
$$d_\alpha ^{(\mathrm{I})}\equiv |U_{\alpha 4}|^2,\quad d_\alpha ^{(\mathrm{II})}\equiv |U_{\alpha 4}|^2,\quad d_\alpha ^{(\mathrm{III})}\equiv |U_{\alpha 1}|^2,\quad d_\alpha ^{(\mathrm{IV})}\equiv |U_{\alpha 1}|^2.$$
(11)
Physically $`d_\alpha `$ quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the isolated neutrino, whose mass is separated from the other three by the LSND gap.
The probability of $`\nu _\alpha \to \nu _\beta `$ ($`\beta \ne \alpha `$) and $`\nu _\alpha \to \nu _\alpha `$ transitions (and the corresponding probabilities for antineutrinos) in short-baseline experiments are given by
$$P_{\nu _\alpha \to \nu _\beta }=A_{\alpha ;\beta }\mathrm{sin}^2\frac{\mathrm{\Delta }m_{41}^2L}{4E},\qquad P_{\nu _\alpha \to \nu _\alpha }=1-B_{\alpha ;\alpha }\mathrm{sin}^2\frac{\mathrm{\Delta }m_{41}^2L}{4E},$$
(12)
with the oscillation amplitudes
$$A_{\alpha ;\beta }=4d_\alpha d_\beta ,\qquad B_{\alpha ;\alpha }=4d_\alpha (1-d_\alpha ).$$
(13)
The probabilities (12) have the same form as the corresponding probabilities in the case of two-neutrino mixing, $`P_{\nu _\alpha \to \nu _\beta }=\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2(\mathrm{\Delta }m^2L/4E)`$ and $`P_{\nu _\alpha \to \nu _\alpha }=1-\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2(\mathrm{\Delta }m^2L/4E)`$, which have been used by all experimental collaborations for the analysis of the data in order to get information on the parameters $`\mathrm{sin}^2(2\vartheta )`$ and $`\mathrm{\Delta }m^2`$ ($`\vartheta `$ and $`\mathrm{\Delta }m^2`$ are, respectively, the mixing angle and the mass-squared difference in the case of two-neutrino mixing). Therefore, we can use the results of their analyses in order to get information on the corresponding parameters $`A_{\alpha ;\beta }`$, $`B_{\alpha ;\alpha }`$ and $`\mathrm{\Delta }m_{41}^2`$.
The exclusion plots obtained in short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that
$$d_\alpha \leq a_\alpha ^0\quad \text{or}\quad d_\alpha \geq 1-a_\alpha ^0\qquad (\alpha =e,\mu ),$$
(14)
with
$$a_\alpha ^0=\frac{1}{2}\left(1-\sqrt{1-B_{\alpha ;\alpha }^0}\right)\qquad (\alpha =e,\mu ),$$
(15)
where $`B_{e;e}^0`$ and $`B_{\mu ;\mu }^0`$ are the upper bounds, which depend on $`\mathrm{\Delta }m_{41}^2`$, of the oscillation amplitudes $`B_{e;e}`$ and $`B_{\mu ;\mu }`$ given by the exclusion plots of $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments. From the exclusion curves of the Bugey reactor $`\overline{\nu }_e`$ disappearance experiment and of the CDHS and CCFR accelerator $`\nu _\mu `$ disappearance experiments it follows that $`a_e^0\lesssim 3\times 10^{-2}`$ for $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the LSND range (6) and $`a_\mu ^0\lesssim 0.2`$ for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.4\mathrm{eV}^2`$ .
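Translating a disappearance exclusion amplitude $`B_{\alpha ;\alpha }^0`$ into the bound $`a_\alpha ^0`$ is a one-line application of Eq. (15); a small sketch (Python; the $`B^0`$ inputs are rough placeholder magnitudes, not values read from the published exclusion curves):

```python
from math import sqrt

def a0(B0):
    """Eq. (15): the small-mixing root of B = 4 d (1 - d) = B0."""
    return 0.5 * (1.0 - sqrt(1.0 - B0))

print(a0(0.12))   # B0 ~ 0.12 gives a0 ~ 0.03, the magnitude quoted for a_e^0
print(a0(0.64))   # B0 ~ 0.64 gives a0 = 0.2, the magnitude quoted for a_mu^0
```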
Therefore, the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that $`d_e`$ and $`d_\mu `$ are either small or large (close to one). Taking into account the unitarity limit $`d_e+d_\mu \leq 1`$, for each value of $`\mathrm{\Delta }m_{41}^2`$ above about $`0.3\mathrm{eV}^2`$ there are three regions in the $`d_e`$–$`d_\mu `$ plane that are allowed by the results of disappearance experiments: region SS with small $`d_e`$ and $`d_\mu `$, region LS with large $`d_e`$ and small $`d_\mu `$ and region SL with small $`d_e`$ and large $`d_\mu `$. These three regions are illustrated qualitatively by the three shadowed areas in Fig. IV. For $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ there is no constraint on the value of $`d_\mu `$ from the results of short-baseline $`\nu _\mu `$ disappearance experiments and there are two regions in the $`d_e`$–$`d_\mu `$ plane that are allowed by the results of $`\overline{\nu }_e`$ disappearance experiments: region S with small $`d_e`$ and region LS with large $`d_e`$ and small $`d_\mu `$ (the smallness of $`d_\mu `$ follows from the unitarity bound $`d_e+d_\mu \leq 1`$). These two regions are illustrated qualitatively by the two shadowed areas in Fig. IV.
Let us consider now the results of solar neutrino experiments, which imply a disappearance of electron neutrinos. The survival probability of solar $`\nu _e`$’s averaged over the fast unobservable oscillations due to $`\mathrm{\Delta }m_{41}^2`$ and $`\mathrm{\Delta }m_{31}^2`$ is bounded by
$$P_{\nu _e\to \nu _e}^{\mathrm{sun}}\geq d_e^2.$$
(16)
Therefore, only the possibility
$$d_e\leq a_e^0$$
(17)
is acceptable in order to explain the observed deficit of solar $`\nu _e`$’s with neutrino oscillations. Indeed, the solar neutrino data imply an upper bound for $`d_e`$, that is shown qualitatively by the vertical lines in Figs. IV and IV. It is clear that the regions LS in Figs. IV and IV are disfavored by the results of solar neutrino experiments.
In a similar way, since the survival probability of atmospheric $`\nu _\mu `$’s and $`\overline{\nu }_\mu `$’s is bounded by
$$P_{\nu _\mu \to \nu _\mu }^{\mathrm{atm}}\geq d_\mu ^2,$$
(18)
large values of $`d_\mu `$ are incompatible with the asymmetry (1) observed in the Super-Kamiokande experiment. The upper bound for $`d_\mu `$ that follows from atmospheric neutrino data is shown qualitatively by the horizontal lines in Figs. IV and IV. It is clear that the region SL in Fig. IV, that is allowed by the results of $`\nu _\mu `$ short-baseline disappearance experiments for $`\mathrm{\Delta }m_{41}^20.3\mathrm{eV}^2`$, and the large–$`d_\mu `$ part of the region S in Fig. IV are disfavored by the results of atmospheric neutrino experiments. A precise calculation shows that the Super-Kamiokande asymmetry (1) and the exclusion curve of the Bugey $`\overline{\nu }_e`$ disappearance experiment imply the upper bound
$$d_\mu \leq 0.55\equiv a_\mu ^{\mathrm{SK}}.$$
(19)
This upper bound is depicted by the horizontal line in Fig. IV (the vertically hatched area above the line is excluded).
In Fig. IV we have also shown the bound $`d_\mu \leq a_\mu ^0`$ or $`d_\mu \geq 1-a_\mu ^0`$ obtained from the exclusion plot of the short-baseline CDHS $`\nu _\mu `$ disappearance experiment, which excludes the shadowed region. It is clear that the results of short-baseline disappearance experiments and the Super-Kamiokande asymmetry (1) imply that $`d_\mu \leq 0.55`$ for $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ and that $`d_\mu `$ is very small for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$. However, this range of $`d_\mu `$ is disfavored by the results of the LSND experiment, which imply a lower bound $`A_{\mu ;e}^{\mathrm{min}}`$ for the amplitude $`A_{\mu ;e}=4d_ed_\mu `$ of $`\nu _\mu \to \nu _e`$ oscillations. Indeed, we have
$$d_ed_\mu \geq A_{\mu ;e}^{\mathrm{min}}/4.$$
(20)
This bound, shown qualitatively by the LSND exclusion curves in Figs. IV and IV, excludes region SS in Fig. IV and the small-$`d_\mu `$ part of region S in Fig. IV. From Figs. IV and IV one can see in a qualitative way that in the schemes of class 1 the results of the solar, atmospheric and LSND experiments are incompatible with the negative results of short-baseline experiments.
A quantitative illustration of this incompatibility is given in Fig. IV. The curve in Fig. IV labelled LSND + Bugey (the diagonally hatched area is excluded) represents the constraint
$$d_\mu \geq A_{\mu ;e}^{\mathrm{min}}/4a_e^0,$$
(21)
derived from the inequality (20) using the bound (17). One can see that the results of the LSND experiment exclude the range of $`d_\mu `$ allowed by the results of short-baseline disappearance experiments and by the Super-Kamiokande asymmetry (1). Hence, in the framework of the schemes of class 1 there is no range of $`d_\mu `$ that is compatible with all the experimental data.
The incompatibility of the experimental results with the schemes of class 1 is shown also in Fig. IV, where we have plotted in the $`A_{\mu ;e}`$–$`\mathrm{\Delta }m_{41}^2`$ plane the upper bound $`A_{\mu ;e}\leq 4a_e^0a_\mu ^0`$ for $`\mathrm{\Delta }m_{41}^2>0.26\mathrm{eV}^2`$ and $`A_{\mu ;e}\leq 4a_e^0a_\mu ^{\mathrm{SK}}`$ for $`\mathrm{\Delta }m_{41}^2<0.26\mathrm{eV}^2`$ (solid line, the region on the right is excluded). One can see that this constraint is incompatible with the LSND-allowed region (shadowed area).
Summarizing, we have reached the conclusion that the four schemes of class 1 shown in Fig. III are disfavored by the data.
## V The favored schemes of class 2
The four-neutrino schemes of class 2 are compatible with the results of all neutrino oscillation experiments if the mixing of $`\nu _e`$ with the two mass eigenstates responsible for the oscillations of solar neutrinos ($`\nu _3`$ and $`\nu _4`$ in scheme A and $`\nu _1`$ and $`\nu _2`$ in scheme B) is large and the mixing of $`\nu _\mu `$ with the two mass eigenstates responsible for the oscillations of atmospheric neutrinos ($`\nu _1`$ and $`\nu _2`$ in scheme A and $`\nu _3`$ and $`\nu _4`$ in scheme B) is large . This is illustrated qualitatively in Figs. V and V, as we are going to explain.
Let us define the quantities $`c_\alpha `$, with $`\alpha =e,\mu ,\tau ,s`$, in the schemes A and B as
$$c_\alpha ^{(\mathrm{A})}\equiv \sum _{k=1,2}|U_{\alpha k}|^2,\qquad c_\alpha ^{(\mathrm{B})}\equiv \sum _{k=3,4}|U_{\alpha k}|^2.$$
(22)
Physically $`c_\alpha `$ quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B). The exclusion plots obtained in short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that
$$c_\alpha \leq a_\alpha ^0\quad \text{or}\quad c_\alpha \geq 1-a_\alpha ^0\qquad (\alpha =e,\mu ),$$
(23)
with $`a_\alpha ^0`$ given in Eq. (15).
The shadowed areas in Figs. V and V illustrate qualitatively the regions in the $`c_e`$–$`c_\mu `$ plane allowed by the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments. Figure V is valid for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$ and shows that there are four regions allowed by the results of short-baseline disappearance experiments: region SS with small $`c_e`$ and $`c_\mu `$, region LS with large $`c_e`$ and small $`c_\mu `$, region SL with small $`c_e`$ and large $`c_\mu `$ and region LL with large $`c_e`$ and $`c_\mu `$. The quantities $`c_e`$ and $`c_\mu `$ can both be large, because the unitarity of the mixing matrix implies that $`c_\alpha +c_\beta \leq 2`$ and $`0\leq c_\alpha \leq 1`$ for $`\alpha ,\beta =e,\mu ,\tau ,s`$. Figure V is valid for $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$, where there is no constraint on the value of $`c_\mu `$ from the results of short-baseline $`\nu _\mu `$ disappearance experiments. It shows that there are two regions allowed by the results of short-baseline $`\overline{\nu }_e`$ disappearance experiments: region S with small $`c_e`$ and region L with large $`c_e`$.
Let us now take into account the results of solar neutrino experiments. Large values of $`c_e`$ are incompatible with solar neutrino oscillations because in this case $`\nu _e`$ has large mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations and, through the unitarity of the mixing matrix, small mixing with the two massive neutrinos responsible for solar neutrino oscillations. Indeed, in the schemes of class 2 the survival probability $`P_{\nu _e\to \nu _e}^{\mathrm{sun}}`$ of solar $`\nu _e`$’s is bounded by
$$P_{\nu _e\to \nu _e}^{\mathrm{sun}}\geq c_e^2/2,$$
(24)
and its possible variation $`\mathrm{\Delta }P_{\nu _e\to \nu _e}^{\mathrm{sun}}(E)`$ with neutrino energy $`E`$ is limited by
$$\mathrm{\Delta }P_{\nu _e\to \nu _e}^{\mathrm{sun}}(E)\leq \left(1-c_e\right)^2.$$
(25)
If $`c_e`$ is large as in the LS or LL regions of Fig. V or in the L region of Fig. V, we have
$$P_{\nu _e\to \nu _e}^{\mathrm{sun}}\geq \frac{\left(1-a_e^0\right)^2}{2}\simeq \frac{1}{2},\qquad \mathrm{\Delta }P_{\nu _e\to \nu _e}^{\mathrm{sun}}(E)\leq (a_e^0)^2\lesssim 10^{-3},$$
(26)
for $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the LSND range (6). Therefore $`P_{\nu _e\to \nu _e}^{\mathrm{sun}}`$ is bigger than 1/2 and practically does not depend on neutrino energy. Since this is incompatible with the results of solar neutrino experiments interpreted in terms of neutrino oscillations , we conclude that the regions LS and LL in Fig. V and the region L in Fig. V are disfavored by solar neutrino data, as illustrated qualitatively by the vertical exclusion lines in Figs. V and V.
Let us consider now the results of atmospheric neutrino experiments. Small values of $`c_\mu `$ are incompatible with atmospheric neutrino oscillations because in this case $`\nu _\mu `$ has small mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations. Indeed, the survival probability of atmospheric $`\nu _\mu `$’s is bounded by
$$P_{\nu _\mu \to \nu _\mu }^{\mathrm{atm}}\geq \left(1-c_\mu \right)^2,$$
(27)
and it can be shown that the Super-Kamiokande asymmetry (1) and the exclusion curve of the Bugey $`\overline{\nu }_e`$ disappearance experiment imply the upper bound
$$c_\mu \geq 0.45\equiv b_\mu ^{\mathrm{SK}}.$$
(28)
This limit is depicted qualitatively by the horizontal exclusion lines in Figs. V and V. Therefore, we conclude that the regions SS and LS in Fig. V and the small-$`c_\mu `$ parts of the regions S and L in Fig. V are disfavored by atmospheric neutrino data.
Finally, let us consider the results of the LSND experiment. In the schemes of class 2 the amplitude of short-baseline $`\nu _\mu \to \nu _e`$ oscillations is given by
$$A_{\mu ;e}=4\left|\sum _{k=1,2}U_{ek}U_{\mu k}^{\ast }\right|^2=4\left|\sum _{k=3,4}U_{ek}U_{\mu k}^{\ast }\right|^2.$$
(29)
The second equality in Eq. (29) is due to the unitarity of the mixing matrix. Using the Cauchy–Schwarz inequality we obtain
$$c_ec_\mu \geq A_{\mu ;e}^{\mathrm{min}}/4\quad \text{and}\quad \left(1-c_e\right)\left(1-c_\mu \right)\geq A_{\mu ;e}^{\mathrm{min}}/4,$$
(30)
where $`A_{\mu ;e}^{\mathrm{min}}`$ is the minimum value of the oscillation amplitude $`A_{\mu ;e}`$ observed in the LSND experiment. The bounds (30) are illustrated qualitatively in Figs. V and V. One can see that the results of the LSND experiment confirm the exclusion of the regions SS and LL in Fig. V and the exclusion of the small-$`c_\mu `$ part of region S and of the large-$`c_\mu `$ part of region L in Fig. V.
Summarizing, if $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$ only the region SL in Fig. V, with
$$c_e\leq a_e^0\quad \text{and}\quad c_\mu \geq 1-a_\mu ^0,$$
(31)
is compatible with the results of all neutrino oscillation experiments. If $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ only the large-$`c_\mu `$ part of region S in Fig. V, with
$$c_e\leq a_e^0\quad \text{and}\quad c_\mu \geq b_\mu ^{\mathrm{SK}},$$
(32)
is compatible with the results of all neutrino oscillation experiments. Therefore, in any case $`c_e`$ is small and $`c_\mu `$ is large. However, it is important to notice that, as shown clearly in Figs. V and V, the inequalities (30) following from the LSND observation of short-baseline $`\nu _\mu \to \nu _e`$ oscillations imply that $`c_e`$, albeit small, has a lower bound and $`c_\mu `$, albeit large, has an upper bound:
$$c_e\geq A_{\mu ;e}^{\mathrm{min}}/4\quad \text{and}\quad c_\mu \leq 1-A_{\mu ;e}^{\mathrm{min}}/4.$$
(33)
## VI Conclusions
We have seen that only the two four-neutrino schemes A and B of class 2 in Fig. III are compatible with the results of all neutrino oscillation experiments. Furthermore, we have shown that the quantities $`c_e`$ and $`c_\mu `$ in these two schemes must be, respectively, small and large. Physically $`c_\alpha `$, defined in Eq. (22), quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B).
The smallness of $`c_e`$ implies that electron neutrinos do not oscillate in atmospheric and long-baseline neutrino oscillation experiments. Indeed, one can obtain rather stringent upper bounds for the probability of $`\nu _e`$ transitions into any other state and for the size of CP or T violation that could be measured in long-baseline experiments in the $`\nu _\mu \to \nu _e`$ and $`\overline{\nu }_\mu \to \overline{\nu }_e`$ channels .
Let us consider now the effective Majorana mass in neutrinoless double-$`\beta `$ decay,
$$|\langle m\rangle |=\left|\sum _{k=1}^{4}U_{ek}^2m_k\right|.$$
(34)
In scheme A, since $`c_e`$ is small, the effective Majorana mass is approximately given by
$$|\langle m\rangle |\simeq \left|U_{e3}^2+U_{e4}^2\right|m_4\simeq \left|U_{e3}^2+U_{e4}^2\right|\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}.$$
(35)
Therefore, in scheme A the effective Majorana mass can be as large as $`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$ . On the other hand, in scheme B the contribution of the “heavy” neutrino masses $`m_3`$ and $`m_4`$ to the effective Majorana mass is strongly suppressed :
$$|\langle m\rangle |_{34}\equiv \left|U_{e3}^2m_3+U_{e4}^2m_4\right|\leq c_em_4\lesssim a_e^0\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}.$$
(36)
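As a rough worked number (our own arithmetic, combining the LSND range (6) with the bound $`a_e^0\lesssim 3\times 10^{-2}`$ quoted in Section IV): the LSND gap gives $`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}\simeq 0.45-1.4\mathrm{eV}`$, so in scheme A the effective Majorana mass may reach the eV scale, while in scheme B

$$|\langle m\rangle |_{34}\lesssim 3\times 10^{-2}\times 1.4\mathrm{eV}\approx 4\times 10^{-2}\mathrm{eV},$$

roughly two orders of magnitude smaller.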
Finally, if the upper bound $`N_\nu ^{\mathrm{BBN}}<4`$ for the effective number of neutrinos in Big-Bang Nucleosynthesis is correct , the mixing of $`\nu _s`$ with the two mass eigenstates responsible for the oscillations of atmospheric neutrinos must be very small . In this case atmospheric neutrinos oscillate only in the $`\nu _\mu \to \nu _\tau `$ channel and solar neutrinos oscillate only in the $`\nu _e\to \nu _s`$ channel. This is very important because it implies that the two-generation analyses of solar and atmospheric neutrino data give correct information on neutrino mixing in the two four-neutrino schemes A and B. Otherwise, it will be necessary to reanalyze the solar and atmospheric neutrino data using a general formalism that takes into account the possibility of simultaneous transitions into active and sterile neutrinos in solar and atmospheric neutrino experiments .
## Acknowledgments
I would like to thank S.M. Bilenky, W. Grimus and T. Schwetz for enjoyable and stimulating collaboration on the topics presented in this report.
# Rigorous estimates of the tails of the probability distribution function for the random linear shear model.
## I Background
It is a well documented experimental fact that, while the statistics of the velocity field in a turbulent flow are roughly Gaussian, the statistics of other quantities like the pressure, derivatives of velocity and a passively advected scalar are generally far from Gaussian. For example Castaing, et. al. observed in experiments in a Rayleigh-Bénard convection cell that for Rayleigh number $`Ra<10^7`$ the distribution of temperature appeared to be roughly Gaussian, while for larger Rayleigh numbers, $`Ra>10^8`$, the temperature distribution appeared to be closer to exponential. In related work Ching studied the probability distribution functions (pdfs) for temperature differences at different scales, again in a Rayleigh-Bénard cell, and found that the pdfs over a wide range of scales were well approximated by a ‘stretched exponential’ distributions of the form
$`P(T)=e^{-C|T|^\beta }.`$
At the smallest scales the observed value of the exponent was $`\beta \approx 0.5`$, while at the largest scales the observed exponent was roughly $`\beta \approx 1.7`$. Kailasnath, Sreenivasan and Stolovitky measured the pdfs of velocity differences in the atmosphere for a wide range of separation scales. They found distributions similar to the ones found by Ching, with exponents ranging from $`\beta \approx 0.5`$ for separation distances in the dissipative range to $`\beta \approx 2`$ on the integral scale. Finally Thoroddsen and Van Atta studied thermally stratified turbulence in a wind tunnel and found the probability distributions of the density to be roughly Gaussian, while the distributions of the density gradients were exponential.
A complete understanding of such intermittency lies at the heart of understanding fluid turbulence, and would certainly require a detailed understanding of the creation of small scale fluid structures involving both patchy regions of strong vorticity and intense gradients . An alternative starting point is to assume the statistics of the flow are known a priori and to determine how these statistics are manifest in a passively evolving quantity. This question of inherited statistics is significantly easier than the derivation of a complete theory for fluid turbulence, though still retains many inherent difficulties such as problems of closure.
Motivated by the Chicago experiments of the late 80’s , and earlier work, there has been a tremendous effort towards understand the origin of the intermittent temperature probability distribution function in passive scalar models with prescribed (usually Gaussian) velocity statistics. For a very complete review of the subject of turbulent diffusion, including a full discussion of scalar intermittency, see the recent survey article of Majda and Kramer . Most of the work on the scalar statistics has either been directed at understanding the anomalous scaling of temperature structure functions, or at understanding the shape of the tail of the limiting scalar pdf.
There has been a wealth of theoretical efforts addressing this last issue of the tail. A somewhat common theme, particularly in the pumped case, is the prediction that the scalar pdf should develop an exponential tail. For example Kraichnan, Shraiman and Siggia and Balkovsky and Falkovich all find exponential tails. Another important question is to understand the pdf of the scalar gradient. Naturally, gradient information may be expected to amplify contributions from small scales, and a general theory relating the scalar tail with the gradient tail, even for passively evolving quantities would be quite valuable. There has been somewhat less theoretical effort aimed at exploring the difference in statistics between the scalar and the scalar gradient. Chertkov, Falkovich and Kolokolov, Chertkov Kolokolov and Vergassola and Balkovsky and Falkovich have explored this question and have found a stretched exponential distribution of the scalar gradient in situations for which the scalar has an exponential tail. Holzer and Siggia, and Chen and Kraichnan have observed similar phenomena numerically.
In this paper we examine the scalar and scalar gradient pdf tail in an exactly solvable model first studied by Majda and by McLaughlin and Majda , who were able to construct explicit formulas for the moments of a passive scalar advected by a rapidly fluctuating linear shear flow in terms of $`N`$-dimensional integrals. In that work, it was established that the degree of length scale separation between the initial scalar field and the fluid flow is inherent to the development of a broader than Gaussian pdf.
Here, we explicitly calculate the tails of the pdf for this model. We begin by analyzing the expression derived by Majda for the large time $`2N`$th moment of the pdf for the random uniform shear model, which is given by an integral over $`𝐑^N`$. From these normalized moments, we will construct the tail of the associated pdf. We point out that in this calculation the convergence of the pdf for finite time to the pdf for infinite time is weak - for fixed moment number the finite time moment converges to the limiting moment. The convergence is almost certainly not uniform in the moments. For a more thorough investigation of the uniformity of this limiting process in the context of general, bounded periodic shear layers, see Bronski and McLaughlin.
The tail is calculated in two steps. First, using direct calculation and gamma function identities we are able to reduce the $`N`$-dimensional integral to a single integral of Laplace type, from which the asymptotic behavior of the $`2N`$th moment follows easily. The asymptotic behavior of the moments is important for determining the tails of the probability distribution function, as we establish below. Second, we consider the problem of reconstructing the probability measure from the moments. Using ideas from complex analysis, mainly some basic facts about entire functions of finite order and type, we are able to provide rigorous estimates for the rate of decay of the tails of the measure. We find that the tails decay like
$`\mathrm{exp}(-c_\alpha |T|^{\frac{4}{3+\alpha }})`$
so depending on the precise value of the parameter $`\alpha `$ (defined in section II, below, which sets the degree of scale separation between the scalar and flow field) the model admits tails which are Gaussian, exponential, or stretched exponential. We also show that in this model higher order derivatives of the scalar in the shear direction are always more intermittent, with a very simple relationship between the exponents of the scalar and its derivative. The distributions of derivatives in the cross-shear direction, however, display the same tails as the scalar itself.
We remark that, while the stream-line topology for shear profiles is admittedly much simpler than that in fully developed turbulence, the fact that the exact limiting tail for the decaying scalar field may be explicitly and rigorously constructed suggests such models to be exceptionally attractive for testing the validity of different perturbation schemes. It is also extremely interesting because it demonstrates that, at least for unbounded flows, a positive Lyapunov exponent (as would typically occur for a general Batchelor flow) is not necessary for intermittency. For an interesting discussion of the role of Lyapunov exponents in producing intermittency see the work of Chertkov, Falkovich, Kolokolov and Lebedev.
### A The random shear model
Here, we briefly review the framework of the random shear model. We follow Majda, and consider the free evolution of a passive scalar field in the presence of a rapidly fluctuating shear profile:
$`{\displaystyle \frac{\partial T}{\partial t}}+\gamma (t)v(x){\displaystyle \frac{\partial T}{\partial y}}`$ $`=`$ $`\overline{\kappa }\mathrm{\Delta }T.`$ (1)
The random function, $`\gamma (t)`$, represents multiplicative, mean zero Gaussian white noise, delta correlated in time:
$`\langle \gamma (t)\gamma (s)\rangle =\delta (|t-s|)`$
where the brackets, $`\langle \rangle `$, denote the ensemble average over the statistics of $`\gamma (t)`$. The original model considered by Majda involved the case of a uniform shear layer, $`v(x)=x`$, which leads to the moments considered below . It is a quite general fact, not special to shear profiles, that a closed evolution equation for the arbitrary N-point correlator is available for the special case of rapidly fluctuating Gaussian noise; see work of Majda for a path integral representation of this fact for the special case of random shear layers. For the scalar evolving in (1), the N point correlator, defined as:
$`\psi _N(𝐱,𝐲,t)`$ $`=`$ $`{\displaystyle \left\langle \prod _{j=1}^{N}T(x_j,y_j,t)\right\rangle }`$ (2)
$`𝐱`$ $`=`$ $`(x_1,x_2,x_3,\mathrm{},x_N)`$ (3)
$`𝐲`$ $`=`$ $`(y_1,y_2,y_3,\mathrm{},y_N)`$ (4)
is a function: $`\psi _N:R^{2N}\times [0,\infty )\to R^1`$ satisfying
$`{\displaystyle \frac{\partial \psi _N}{\partial t}}`$ $`=`$ $`\overline{\kappa }\mathrm{\Delta }_{2N}\psi _N+{\displaystyle \frac{1}{2}}{\displaystyle \sum _{i,j=1}^{N}}v(x_i)v(x_j){\displaystyle \frac{\partial ^2\psi _N}{\partial y_i\partial y_j}}`$ (5)
where $`\mathrm{\Delta }_{2N}`$ denotes the $`2N`$ dimensional Laplacian.
We next describe the initial scalar field. Following Majda, we assume that the scalar is initially a mean zero, Gaussian random function depending only upon the variable, $`y`$:
$$T|_{t=0}=\int _{R^1}𝑑W(k)e^{2\pi iky}|k|^{\frac{\alpha }{2}}\widehat{\varphi }_0(k)\qquad \alpha >-1$$
(6)
Here, $`\widehat{\varphi }_0(k)`$ denotes a rapidly decaying (large $`k`$) cut-off function satisfying $`\widehat{\varphi }_0(-k)=\widehat{\varphi }_0(k),\widehat{\varphi }_0(0)\ne 0`$ and $`dW`$ denotes complex Gaussian white noise with
$`\langle dW\rangle _W`$ $`=`$ $`0`$
$`\langle dW(k)dW(\eta )\rangle _W`$ $`=`$ $`\delta (k+\eta )dkd\eta `$
The spectral parameter, $`\alpha `$, appearing in (6) is introduced to adjust the excited length scales of the initial scalar field, with increasing $`\alpha `$ corresponding to initial data varying on smaller scales. We remark that the more general case, involving initial data depending upon both $`x`$ and $`y`$ and data possessing both mean and fluctuating components, was analyzed by McLaughlin and Majda .
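Before turning to the correlator, note that a discrete realization of the initial data (6) is easy to synthesize by coloring white noise in Fourier space; the sketch below (Python; the Gaussian form of the cut-off $`\widehat{\varphi }_0`$, the grid size and the overall normalization are illustrative assumptions):

```python
import numpy as np

def initial_scalar(n=4096, Ly=1.0, alpha=1.0, seed=0):
    """One realization of the random initial data T(y, t=0) of Eq. (6):
    a Gaussian field with spectrum ~ |k|^alpha |phi0(k)|^2, built by FFT.
    The Gaussian form of phi0 and the cut-off scale are assumptions."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=Ly / n)       # k >= 0 modes suffice for a real field
    kc = k[len(k) // 4]                    # illustrative cut-off scale
    amp = np.zeros_like(k)                 # k = 0 mode left zero: mean-zero field
    amp[1:] = k[1:] ** (alpha / 2.0) * np.exp(-((k[1:] / kc) ** 2))
    noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    return np.fft.irfft(amp * noise, n=n)  # overall normalization arbitrary
```

Increasing $`\alpha `$ shifts the excited modes to larger $`k`$, i.e. to smaller spatial scales — precisely the scale-separation mechanism discussed above.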
For this case involving shear flows, the evolution of this $`N`$ point correlator may be immediately converted to parabolic quantum mechanics through partial Fourier transformation in the $`𝐲`$ variable. For the particular initial data presented in (6), this yields the following solution formula:
$`\psi _N={\displaystyle \int _{R^N}}e^{2\pi i𝐤𝐲}\widehat{\psi }_N(𝐱,𝐤,t){\displaystyle \prod _{j=1}^{N}}\widehat{\varphi }_0(k_j)|k_j|^{\frac{\alpha }{2}}dW(k_j)`$
where the N-body wavefunction, $`\widehat{\psi }_N(𝐱,𝐤,t)`$ satisfies the following parabolic Schrödinger equation:
$`{\displaystyle \frac{\partial \widehat{\psi }_N}{\partial t}}`$ $`=`$ $`\overline{\kappa }\mathrm{\Delta }_𝐱\widehat{\psi }_N-V_{int}(𝐤,𝐱)\widehat{\psi }_N`$ (7)
$`\widehat{\psi }_N|_{t=0}`$ $`=`$ $`1`$ (8)
The interaction potential, $`V_{int}(𝐤,𝐱)`$, is
$`V_{int}`$ $`=`$ $`4\pi ^2\overline{\kappa }|𝐤|^2+2\pi ^2({\displaystyle \sum _{j=1}^{N}}k_jv(x_j))^2.`$
For the special case of a uniform, linear shear profile, with $`v(x)=x`$, the quantum mechanics problem in (7) is exactly solvable in any spatial dimension. Taking the ensemble average over the initial Gaussian random measure using a standard cluster expansion, the general solution formula for $`\langle \psi _N(𝐱,𝐲,t)\rangle _W`$ is obtained in terms of $`N`$ dimensional integrals. The normalized, long time flatness factors, $`\mu _{2N}^\alpha =\lim _{t\to \infty }\frac{\langle T^{2N}\rangle }{\langle T^2\rangle ^N}`$, are calculated by evaluating the correlator along the diagonal,
$`𝐱`$ $`=`$ $`(x,x,x,\mathrm{},x)`$
$`𝐲`$ $`=`$ $`(y,y,y,\mathrm{},y)`$
and utilizing the explicit long time asymptotics available through Mehler’s formula. This leads to the following set of normalized moments for the decaying scalar field, $`T`$:
$`\mu _{2N}^\alpha `$ $`=`$ $`{\displaystyle \frac{(2N)!}{2^NN!\sigma ^N}}{\displaystyle \int _{R^N}}𝑑𝐤{\displaystyle \frac{\prod _{j=1}^{N}|k_j|^\alpha }{\sqrt{\mathrm{cosh}(|𝐤|)}}}`$ (9)
$`\sigma `$ $`=`$ $`{\displaystyle \int _{R^1}}𝑑k{\displaystyle \frac{|k|^\alpha }{\sqrt{\mathrm{cosh}|k|}}}.`$ (10)
Observe that these normalized moments depend upon the parameter $`\alpha `$. By varying this parameter Majda and McLaughlin established that the degree of scale separation between the initial scalar and flow field is important in the development of a broader than Gaussian pdf . They demonstrated through numerical quadrature of these integrals for low order moments that as the initial scalar field develops an infrared divergence (with $`\alpha \to -1`$, corresponding to the loss of scale separation between the initial scalar field and the infinitely correlated linear shear profile) the limiting single point scalar distribution has Gaussian moments. Conversely they showed that as the length scale of the initial scalar field is reduced, corresponding to increasing values of $`\alpha `$, the limiting distribution shows growing moments indicative of a broad tailed distribution. On the basis of these low order moment comparisons, these studies suggest that within these models the limiting pdf should be dependent upon the scale separation between the scalar and flow field. A fundamental issue concerns whether and how this scale dependence is manifest in the pdf tail. Below, we address precisely this issue, and rigorously establish that the intuition put forth by Majda and McLaughlin is correct through the explicit calculation of the limiting pdf tail.
## II Asymptotics of the probability distribution
### A Notation
Recall from the previous section that the work of Majda derived exact expressions for the moments of a one parameter family of models indexed by the exponent $`\alpha `$. In the remainder of the paper $`d\mu ^\alpha (T)`$ will denote the probability measure for the passive scalar $`T`$ in the Majda model with exponent $`\alpha `$. The $`i^{th}`$ moment of the probability measure $`d\mu ^\alpha (T)`$ will be denoted by $`\mu _i^\alpha `$. In this particular model the distribution is symmetric and thus all odd moments vanish.
### B Large $`N`$ asymptotics of the moments
In this model the exact expression for the $`2N`$th moment is given by
$`\mu _{2N}^\alpha `$ $`=`$ $`{\displaystyle \frac{(2N)!}{\sigma ^N2^NN!}}{\displaystyle \int _{𝐑^N}\frac{\prod _{j=1}^{N}|k_j|^\alpha }{\sqrt{\mathrm{cosh}(|\stackrel{}{k}|)}}𝑑k_1𝑑k_2𝑑k_3\mathrm{}𝑑k_N}`$
$`\sigma `$ $`=`$ $`{\displaystyle \int _{𝐑^1}\frac{|k|^\alpha }{\sqrt{\mathrm{cosh}(k)}}𝑑k}`$
As noted by Majda , $`\mathrm{cosh}(|\stackrel{}{k}|)\le \prod _i\mathrm{cosh}(k_i)`$, which implies that the normalized flatness factors are strictly larger than those of a Gaussian, implying broad tails. The simplest way to analyze this integral, and in particular to understand the behavior for large $`N`$, is to introduce spherical coordinates. Spherical coordinates in $`N`$ dimensions can easily be constructed iteratively in terms of spherical coordinates in $`N-1`$ dimensions as follows. The coordinates in $`N`$ dimensional spherical coordinates are $`\{r,\theta _1,\theta _2,\theta _3\mathrm{}\theta _{N-1}\}.`$ If $`\{x_1^{N-1},x_2^{N-1}\mathrm{}x_{N-1}^{N-1}\}`$ are coordinates on $`𝐑^{N-1}`$ then coordinates on $`𝐑^N`$ are given by
$`x_j^N`$ $`=`$ $`x_j^{N-1}\mathrm{sin}(\theta _{N-1})\qquad j=1,\mathrm{},N-1`$
$`x_N^N`$ $`=`$ $`r\mathrm{cos}\theta _{N-1}`$
Using this construction it is simple to calculate that the volume element in $`N`$ dimensional spherical coordinates is given by
$`dx_1dx_2\mathrm{}dx_N=r^{N-1}dr{\displaystyle \prod _{j=1}^{N-1}}\mathrm{sin}^{j-1}(\theta _j)d\theta _j\qquad \theta _1\in [0,2\pi ],\theta _{i>1}\in [0,\pi ].`$
Since the volume element is a product measure the $`N`$ dimensional integral factors as a product of $`N`$ one dimensional integrals and we are left with the expression
$`\mu _{2N}^\alpha ={\displaystyle \frac{(2N)!}{\sigma ^N2^NN!}}I_0(N){\displaystyle \prod _{j=1}^{N-1}}I_j,`$
where the $`I_j`$ are given by
$`I_0(N)`$ $`=`$ $`{\displaystyle \int _0^{\infty }}r^{N(\alpha +1)-1}{\displaystyle \frac{dr}{\sqrt{\mathrm{cosh}(r)}}}`$ (11)
$`I_1`$ $`=`$ $`{\displaystyle \int _0^{2\pi }}|\mathrm{sin}(\theta )|^\alpha |\mathrm{cos}(\theta )|^\alpha 𝑑\theta `$ (12)
$`I_j`$ $`=`$ $`{\displaystyle \int _0^\pi }|\mathrm{sin}(\theta )|^{j(\alpha +1)-1}|\mathrm{cos}(\theta )|^\alpha 𝑑\theta \qquad j>1.`$ (13)
The angular integrals can be done explicitly in terms of gamma functions, using the beta function identity
$`2{\displaystyle \int _0^{\pi /2}}|\mathrm{sin}(\theta )|^{2z-1}|\mathrm{cos}(\theta )|^{2w-1}𝑑\theta =\beta (z,w)={\displaystyle \frac{\mathrm{\Gamma }(z)\mathrm{\Gamma }(w)}{\mathrm{\Gamma }(z+w)}}`$
which leads to the expression
$`\mu _{2N}^\alpha `$ $`=`$ $`2{\displaystyle \frac{(2N)!}{\sigma ^N2^NN!}}I_0(N){\displaystyle \prod _{j=1}^{N-1}}{\displaystyle \frac{\mathrm{\Gamma }(\frac{\alpha +1}{2})\mathrm{\Gamma }(j\frac{\alpha +1}{2})}{\mathrm{\Gamma }((j+1)\frac{\alpha +1}{2})}}`$ (14)
$`=`$ $`2{\displaystyle \frac{(2N)!(\mathrm{\Gamma }(\frac{\alpha +1}{2}))^{N-1}}{\sigma ^N2^NN!}}I_0(N){\displaystyle \prod _{j=1}^{N-1}}{\displaystyle \frac{\mathrm{\Gamma }(j\frac{\alpha +1}{2})}{\mathrm{\Gamma }((j+1)\frac{\alpha +1}{2})}}.`$ (15)
Observe that the product telescopes - the numerator of one term is the denominator of the next - giving the final expression
$`\mu _{2N}^\alpha `$ $`=`$ $`2{\displaystyle \frac{(2N)!}{\sigma ^N2^NN!}}{\displaystyle \frac{(\mathrm{\Gamma }(\frac{\alpha +1}{2}))^N}{\mathrm{\Gamma }(N\frac{\alpha +1}{2})}}{\displaystyle \int r^{N(\alpha +1)-1}\frac{dr}{\sqrt{\mathrm{cosh}(r)}}}`$ (16)
$`=`$ $`2{\displaystyle \frac{(2N)!}{\sigma ^N2^NN!}}{\displaystyle \frac{(\mathrm{\Gamma }(\frac{\alpha +1}{2}))^N}{\mathrm{\Gamma }(N\frac{\alpha +1}{2})}}I_0(N)`$ (17)
The integral over the radial variable $`I_0(N)`$ cannot be done explicitly, but the large $`N`$ asymptotics are given by
$`I_0(N)\sim 2^{N(\alpha +1)+\frac{1}{2}}\mathrm{\Gamma }(N(\alpha +1)),`$
so that the large $`N`$ behavior of the moments is given by
$$\mu _{2N}^\alpha \sim 2^{N\alpha +\frac{3}{2}}\frac{(2N)!}{\sigma ^NN!}\frac{\mathrm{\Gamma }(N(\alpha +1))(\mathrm{\Gamma }(\frac{\alpha +1}{2}))^N}{\mathrm{\Gamma }(N(\frac{\alpha +1}{2}))}.$$
(18)
Note that since
$`{\displaystyle \frac{\mathrm{\Gamma }(N(\alpha +1))}{\mathrm{\Gamma }(\frac{N(\alpha +1)}{2})}}\to \infty \quad \text{as}\quad N\to \infty `$
the moments are strictly larger than the moments of the Gaussian. We will use this to provide rigorous quantitative estimates for the tails of the distribution.
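Both the closed form (17) and the asymptotic form (18) are straightforward to evaluate; the following quadrature check (Python with scipy; our own verification sketch, not taken from the paper) computes the normalized moments and the ratio of the two expressions, which should approach one as $`N`$ grows:

```python
import numpy as np
from math import lgamma, log, exp
from scipy.integrate import quad

def sigma_alpha(alpha):
    # sigma = integral over R of |k|^alpha / sqrt(cosh k) dk
    return 2.0 * quad(lambda k: k**alpha / np.sqrt(np.cosh(k)), 0.0, np.inf)[0]

def mu_exact(N, alpha):
    """Normalized moment mu_{2N}^alpha from the closed form (17)."""
    I0 = quad(lambda r: r**(N * (alpha + 1) - 1.0) / np.sqrt(np.cosh(r)),
              0.0, np.inf)[0]
    logpre = (log(2.0) + lgamma(2 * N + 1)
              - N * (log(sigma_alpha(alpha)) + log(2.0)) - lgamma(N + 1)
              + N * lgamma((alpha + 1) / 2.0) - lgamma(N * (alpha + 1) / 2.0))
    return exp(logpre) * I0

def mu_asym(N, alpha):
    """Large-N asymptotic form (18) of the same moment."""
    logmu = ((N * alpha + 1.5) * log(2.0) + lgamma(2 * N + 1)
             - N * log(sigma_alpha(alpha)) - lgamma(N + 1)
             + lgamma(N * (alpha + 1)) + N * lgamma((alpha + 1) / 2.0)
             - lgamma(N * (alpha + 1) / 2.0))
    return exp(logmu)

for N in (1, 2, 4, 8):
    e = mu_exact(N, 1.0)
    print(N, e, mu_asym(N, 1.0) / e)   # ratio -> 1 as N grows
```

For $`\alpha =1`$ the first line prints $`\mu _2^\alpha =1`$, as required by the normalization, while $`\mu _4^\alpha `$ already exceeds the Gaussian flatness value of 3.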
### C The Hamburger moment problem
Having computed simple expressions for the moments of the pdf, as well as asymptotic expressions for large moment number, it is natural to ask the question of whether one can do the inverse problem, and deduce the pdf itself. The problem of determining a measure from its moments is a classical one, known as the Hamburger moment problem. This problem has a rich theory, and we mention only a very few of the most basic results here. For an overview of the subject, see the book by Shohat and Tamarkin or the recent electronic preprint by Simon.
The two most important questions are, of course, existence and uniqueness. There is a necessary and sufficient condition for a set of numbers $`\{\mu _i\}`$ to be the moments of some probability measure, namely that the expectation of any positive polynomial be positive. This translates into the following linear algebraic conditions on the diagonal determinants of the Hankel matrix, the matrix with $`i,j^{th}`$ entry $`\mu _{i+j}`$:
$`\left|\mu _0\right|>0,\left|\begin{array}{cc}\mu _0& \mu _1\\ \mu _1& \mu _2\end{array}\right|>0,\left|\begin{array}{ccc}\mu _0& \mu _1& \mu _2\\ \mu _1& \mu _2& \mu _3\\ \mu _2& \mu _3& \mu _4\end{array}\right|>0\mathrm{}`$
These conditions appear to be quite difficult to check in practice. However since the moments considered here are, by construction, the moments of a pdf this condition must hold.
A more subtle question is the issue of uniqueness of the measure, usually called determinacy in the literature of the moment problem. One classical sufficient condition for the determinacy of the moment problem is the following condition, due to Carleman: If the moments $`\mu _n`$ are such that the following sum diverges
$`{\displaystyle \sum _{j=1}^{\infty }}(\mu _{2j})^{-\frac{1}{2j}}=\infty `$
then the Hamburger moment problem is determinate. Given the asymptotic expression for the moments given in Equation (18) it is easy to check that
$`(\mu _{2j}^\alpha )^{\frac{1}{2j}}\sim cj^{\frac{\alpha +3}{4}},`$
so that the Carleman sum behaves like $`\sum _jj^{-(\alpha +3)/4}`$, which diverges precisely when $`(\alpha +3)/4\le 1`$, i.e. when $`\alpha \le 1`$. Thus there is a unique measure with these moments for $`-1<\alpha \le 1`$. We will see later that this corresponds to probability distribution functions with tails that range from Gaussian through exponential.
In the case $\alpha>1$, which, as we will see later, corresponds to stretched exponential tails, the problem probably does not have a unique solution. Indeed there are classical examples of collections of moments with the same asymptotic behavior as the stretched exponential distribution for which the moment problem has a whole family of solutions.
Given this, we come to the question of actually calculating the measure given the moments. There is a rather involved theory for this in the determinate case involving, among other things, orthogonal polynomials and continued fractions, but in general this problem is extremely difficult. However we show in the next section that it is relatively straightforward to reconstruct the tails of the measure from the moments.
### D Asymptotics of the tails of the distribution
Recall that $`\mu _{2N}^\alpha `$ is the $`2N`$th moment of some probability measure $`d\mu ^\alpha (T)`$,
$$\mu_{2N}^{\alpha}=\int T^{2N}\,d\mu^{\alpha}(T).$$
(19)
We are interested in calculating the asymptotic rate of decay of the tails of the probability measure $`d\mu ^\alpha (T)`$. The information about the behavior of the tails of the distribution is contained in the asymptotic behavior of the large moments. We study the tails of the measure $`d\mu ^\alpha (T)`$ by introducing the function
$$f^{\alpha}(z)=\sum_{j=0}^{\infty}\frac{\mu_{2j}^{\alpha}\,z^{2j}}{\Gamma(\frac{j(3+\alpha)}{2})\,C^{2j}},$$
(20)
where $`C`$ is some as yet unspecified constant. The factor of $`\mathrm{\Gamma }(\frac{j(3+\alpha )}{2})`$ is chosen so that the series for $`f^\alpha `$ has a finite but non-zero radius of convergence. This will give us the sharpest control over the tails of $`d\mu ^\alpha (T)`$. It is convenient to demand that the radius of convergence of the series be one. Using the root test it is straightforward to check that the radius of convergence of the sum is given by
$$r^{*}=C\,2^{-(\alpha+2)}\,\frac{(\alpha+3)^{\frac{\alpha+3}{4}}}{(\alpha+1)^{\frac{\alpha+1}{4}}}\sqrt{\frac{\sigma}{\Gamma(\frac{\alpha+1}{2})}},$$
so we choose
$`C=2^{\alpha +2}{\displaystyle \frac{(\alpha +1)^{\frac{\alpha +1}{4}}}{(\alpha +3)^{\frac{\alpha +3}{4}}}}\sqrt{{\displaystyle \frac{\mathrm{\Gamma }(\frac{\alpha +1}{2})}{\sigma }}}.`$
Since the coefficients $`\mu _{2N}^\alpha `$ are the moments of a probability measure $`d\mu ^\alpha (T)`$ we have the alternative expression
$$f^{\alpha}(z)=\sum_{j=0}^{\infty}\frac{z^{2j}}{C^{2j}\,\Gamma(\frac{j(3+\alpha)}{2})}\int T^{2j}\,d\mu^{\alpha}(T).$$
When $`z`$ is inside the radius of convergence of the sum (i.e. $`|z|<1`$) we can switch the integration and the summation and get the following expression for $`f^\alpha `$
$$f^{\alpha}(z) = \int\sum_{j=0}^{\infty}\frac{T^{2j}z^{2j}}{C^{2j}\,\Gamma(\frac{j(3+\alpha)}{2})}\,d\mu^{\alpha}(T) \qquad (21)$$
$$\phantom{f^{\alpha}(z)} = \int F^{\alpha}(zT)\,d\mu^{\alpha}(T). \qquad (22)$$
We note a few simple facts. First notice that the function $f^{\alpha}(z)$ is a kind of generalized Laplace transform of the measure $d\mu^{\alpha}(T)$. The quantity inside the integral, $F^{\alpha}(zT)=\sum_{j=0}^{\infty}\frac{T^{2j}z^{2j}}{C^{2j}\Gamma(\frac{j(3+\alpha)}{2})}$, converges absolutely for all $z$ and thus $F^{\alpha}(zT)$ is an entire function of the complex variable $z$. Further we know that the integral must converge for $|z|<1$ and diverge for some $|z|>1$, since the original series converged in a circle of unit radius. We note that the entire function $F^{\alpha}(z)$ satisfies
$$|F^{\alpha}(z)| = \Bigl|\sum_{j=0}^{\infty}\frac{z^{2j}}{C^{2j}\,\Gamma(\frac{j(3+\alpha)}{2})}\Bigr| \qquad (23)$$
$$\le \sum_{j=0}^{\infty}\frac{|z|^{2j}}{C^{2j}\,\Gamma(\frac{j(3+\alpha)}{2})} \qquad (24)$$
$$= F^{\alpha}(|z|), \qquad (25)$$
so the function $F^{\alpha}(z)$ grows fastest along the real axis. Thus we know that the integral in Equation (22) converges for $-1<z<1$ and diverges for $z>1$ or $z<-1$. Thus the problem of understanding the rate of decay of the tails of the probability measure $d\mu^{\alpha}(T)$ has been reduced to that of determining the rate of growth of the function $F^{\alpha}(zT)$. There is a well-developed theory for studying the rate of growth of entire functions, the theory of entire functions of finite order. We recall only the basic facts here; the interested reader is referred to the texts of Ahlfors and Rubel with Colliander.
The radial maximal function $`M_F(r)`$ of an entire function $`F(z)`$ is defined to be the maximum of the absolute value of $`F`$ over a ball of radius $`r`$ centered on the origin:
$`M_F(r)=\underset{|z|=r}{\mathrm{max}}|F(z)|`$
The order $`\rho `$ of a function $`F`$ is defined to be
$`\rho =\underset{r\mathrm{}}{lim\; sup}{\displaystyle \frac{\mathrm{log}_+\mathrm{log}_+M_F(r)}{\mathrm{log}_+(r)}},`$
where $\log_+(x)=\max(0,\log(x))$, if this limit exists. It is easy to see from this definition that $F$ being of order $\rho$ means that $F$ grows asymptotically like $\exp(A(z)|z|^{\rho})$ along the direction of maximum growth in the complex plane, where $A(z)$ grows more slowly than any power of $z$. A related notion is the type of a function of finite order. If $F$ is of order $\rho$ then the type $\tau$ is defined to be
$`\tau =\underset{r\mathrm{}}{lim\; sup}{\displaystyle \frac{\mathrm{log}_+M_F(r)}{r^\rho }}`$
when this limit exists. Again speaking very roughly the type $`\tau `$ gives the next order asymptotics: if $`F`$ is of order $`\rho `$ and type $`\tau `$ then $`F`$ grows like $`B(z)\mathrm{exp}(\tau |z|^\rho )`$, where $`B(z)`$ is subdominant to the exponential term. Note that by Equation (25) the function $`F^\alpha `$ grows fastest along the real axis, and thus the maximal rate of growth in the complex plane is exactly the rate of growth along the real axis.
There exist alternate characterizations of the order and type of a function in terms of the Taylor coefficients $`A_n`$ which are very useful for our purposes. These are given as follows:
$$\rho = \limsup_{r\to\infty}\frac{\log_+\log_+ M_F(r)}{\log_+(r)} = \limsup_{n\to\infty}\frac{n\log(n)}{-\log(|A_n|)} \qquad (26)$$
$$\tau = \limsup_{r\to\infty}\frac{\log_+ M_F(r)}{r^{\rho}} = \frac{1}{\rho e}\limsup_{n\to\infty}\,n\,|A_n|^{\rho/n}. \qquad (27)$$
For the proofs we refer to the text of Rubel with Colliander. Using the expressions given in equations (26) and (27) we find that the order $`\rho `$ and type $`\tau `$ of $`F^\alpha (z)`$ are given by
$$\rho^{\alpha}=\limsup_{n\to\infty}\frac{2n\log(2n)}{\log\bigl(C^{2n}\,\Gamma(\frac{(3+\alpha)n}{2})\bigr)}=\frac{4}{3+\alpha}$$
$$\tau^{\alpha}=\frac{1}{\rho e}\limsup_{n\to\infty}\,2n\,\Bigl|C^{2n}\,\Gamma\bigl(\tfrac{(3+\alpha)n}{2}\bigr)\Bigr|^{-\rho/2n}=\frac{1}{C^{\rho}}$$
Thus we know that $F^{\alpha}(zT)$ grows like $A(zT)\exp(C^{-\rho}|z|^{\frac{4}{3+\alpha}}|T|^{\frac{4}{3+\alpha}})$ along the real axis, where $A(zT)$ grows more slowly than $\exp(D|T|^{\frac{4}{3+\alpha}})$ for any $D>0$. Further we know that the integral
$$\int F^{\alpha}(zT)\,d\mu^{\alpha}(T)$$
converges for $|z|<1$ and diverges for $z>1$ or $z<-1$, so to leading order the rate of decay of the measure $d\mu^{\alpha}(T)$ is given by $\exp(-|C|^{-4/(3+\alpha)}|T|^{4/(3+\alpha)})$. It is easy to check that as $\alpha\to-1$ this estimate becomes $\exp(-\frac{T^2}{4})$, recovering the normalized Gaussian.
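These growth rates are easy to probe numerically: summing the series for $F^{\alpha}$ directly (here with $C=1$, which only rescales the argument) and fitting $\log\log F$ against $\log x$ recovers the order $\rho=4/(3+\alpha)$. A rough sketch, with $\alpha$ chosen arbitrarily:

```python
import numpy as np
from scipy.special import gammaln

alpha, C = 0.2, 1.0
j = np.arange(1, 400)   # series index (the j = 0 term is omitted)

def log_F(x):
    # log of F(x) = sum_j x^{2j} / (C^{2j} Gamma(j(3+alpha)/2)), summed in log space
    terms = 2 * j * np.log(x / C) - gammaln(j * (3 + alpha) / 2.0)
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())

xs = np.linspace(5.0, 60.0, 30)
log_log_F = np.log([log_F(x) for x in xs])
slope = np.polyfit(np.log(xs), log_log_F, 1)[0]
print(slope, 4.0 / (3 + alpha))   # slope estimates the order rho = 4/(3+alpha)
```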
This result is probably best restated in terms of the cumulative distribution function, rather than the probability measure. If $P(T,T')=\int_T^{T'}d\mu(T)$, with $T'>T$, then it is easy to show that the above implies that
$$\lim_{T\to\infty}\exp\bigl(c|T|^{\frac{4}{3+\alpha}}\bigr)\,P(T,T') = 0 \quad\text{for } c<|C|^{-\frac{4}{3+\alpha}}$$
$$\phantom{\lim_{T\to\infty}\exp\bigl(c|T|^{\frac{4}{3+\alpha}}\bigr)\,P(T,T')} = \infty \quad\text{for } c>|C|^{-\frac{4}{3+\alpha}}$$
## III Interpretation and concluding remarks
Physically the Majda model can be thought of as a model for the behavior of a passive scalar at small scales, when the scale of the flow field is much larger than the scale of the variations of the scalar. Recall that the random scalar is given by
$$T(y) = \int |k|^{\frac{\alpha}{2}}\,\widehat{\varphi}_0(k)\,e^{2\pi iky}\,dW(k) \qquad (28)$$
$$\langle T(y)T(y')\rangle = \int |k|^{\alpha}\,|\widehat{\varphi}_0(k)|^2\,e^{2\pi ik(y-y')}\,dk. \qquad (29)$$
In the limit as $\alpha$ approaches $-1$ there is an infrared divergence, so that the energy of the scalar is concentrated at larger and larger scales. In this case $\frac{4}{3+\alpha}\to 2$, so the normalized distribution function becomes Gaussian, as was originally observed by Majda.
One important fact about this model which we would like to emphasize is that it predicts that higher derivatives of the advected scalar should be increasingly intermittent, a fact which was not strongly emphasized in previous work. Observe that due to the special nature of shear flows the scalar derivative $\partial T/\partial y$ satisfies the same equation as the scalar $T$, with no additional terms! We further note that the initial condition for the derivative of the scalar is given by
$$\frac{\partial T}{\partial y} = \int 2\pi i\,|k|^{\frac{\alpha}{2}}\,k\,\widehat{\varphi}_0(k)\,e^{2\pi iky}\,dW(k) \qquad (30)$$
$$\Bigl\langle \frac{\partial T}{\partial y}\,\frac{\partial T}{\partial y'}\Bigr\rangle = 4\pi^2\int |k|^{\alpha+2}\,|\widehat{\varphi}_0(k)|^2\,e^{2\pi ik(y-y')}\,dk, \qquad (31)$$
so the derivative of the scalar has a representation of the same form as the representation of the scalar itself, but with the exponent $\alpha$ increased by two, and a slightly modified $\varphi_0(k)$. Recall that the exponent $\alpha$ determines the amount of energy at the largest scales and thus the degree of intermittency, with the tails decaying as $\exp(-T^{4/(3+\alpha)})$. Our calculation shows that increasing $\alpha$ increases the width of the tails of the probability distribution function, implying that derivatives are more intermittent! These predictions for the behavior of the tails of the scalar as compared with the scalar gradient are in extremely good agreement with experimental and numerical results. For instance our calculation shows that if the scalar has exponent $\alpha=-1$, so that the probability distribution function of the scalar has Gaussian tails, then the derivative of the scalar has exponent $\alpha=1$, implying that the distribution of the derivative has exponential tails. This agrees quite well with the experiments of Van Atta and Thoroddsen, as just one example, who observe that in turbulent thermally stratified flow the pdf for the density has Gaussian tails, while the pdf of the density gradient has exponential tails. Similarly if the scalar has exponent $\alpha=1$, so that the distribution of the scalar itself is exponential, then the derivative of the scalar should have exponent $\frac{2}{3}$. This agrees with the calculations of Chertkov, Falkovich and Kolokolov; Balkovsky and Falkovich also predict exponential tails for the scalar and stretched exponential tails with exponent $\frac{2}{3}$ for the scalar gradient in the Batchelor regime. This also shows reasonably good agreement with the numerical experiments of Holzer and Siggia. In their experiments Holzer and Siggia find that a scalar with exponential tails has a gradient with stretched exponential tails. For large Peclet number the exponent of these stretched exponential tails is in the range of $.66$–$.563$.
Of course one can eliminate $`\alpha `$ entirely, and one finds the following relationship between the distribution of the scalar and the scalar gradient within this model. If $`T`$ is distributed according to a stretched exponential pdf with exponent $`\rho `$, and the gradient $`T_y`$ according to a stretched exponential pdf with exponent $`\rho ^{}`$, then $`\rho ,\rho ^{}`$ are related by
$`{\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{\rho }}={\displaystyle \frac{1}{\rho ^{}}}.`$
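Within this model the relation is an immediate consequence of $\rho=4/(3+\alpha)$ for the scalar and $\rho'=4/(3+(\alpha+2))=4/(5+\alpha)$ for its gradient; a two-line symbolic check:

```python
import sympy as sp

alpha = sp.symbols('alpha')
rho = 4 / (3 + alpha)        # tail exponent of the scalar
rho_prime = 4 / (5 + alpha)  # tail exponent of the gradient (alpha -> alpha + 2)
print(sp.simplify(sp.Rational(1, 2) + 1 / rho - 1 / rho_prime))  # prints 0
```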
It would be extremely interesting to check if this relationship, or some generalization of it, holds in greater generality than shear flows. The above numerical and experimental evidence suggests that this might not be an unreasonable hope.
The distribution of the $x$, or cross-shear, derivatives can also be calculated using the same explicit representations derived by Majda. Calculations by the authors for deterministic initial data have shown that derivatives in the cross-shear direction have a distribution with the same asymptotic behavior as the scalar itself. This should be compared to and contrasted with the papers of Son, and Balkovsky and Fouxon, which predict distributions with very broad tails (all of the higher moments diverge as $t\to\infty$) and which predict the same distribution for derivatives of the scalar as for the scalar itself.
We would also like to comment on the relationship between intermittency and the Lyapunov exponents of the underlying flow field. A number of papers have addressed the problem of intermittency in the large Peclet number limit by attempting to relate broader than Gaussian tails to the Lyapunov exponents of the flow field. It is worth noting that a shear flow does not possess a positive Lyapunov exponent, but as we have shown here a shear flow can generate exponential and stretched exponential tails in the passive scalar. This shows that chaotic behavior in the underlying flow, while probably an important effect in realistic flows, is not necessary for the generation of broad tails and intermittency.
Finally we would like to comment on the rate of approach to the limiting measure in time. The results presented here analyze the infinite time limit of the measure. As mentioned earlier the convergence to this limiting measure is expected to be highly non-uniform. A preliminary calculation by the authors for a special choice of the cut-off function $`\widehat{\varphi }_0(k)`$ suggests that for large but finite times the pdf looks like the pdf for the infinite time problem in some core region, with Gaussian tails outside this core region. As time increases the size of this core region demonstrating non-Gaussian statistics grows, and the Gaussian tails get pushed out to infinity. We believe this same picture to hold for any choice of the cut-off function $`\widehat{\varphi }_0(k)`$, but more work is needed to establish this fact.
Acknowledgements: Jared C. Bronski would like to acknowledge support from the National Science Foundation under grant DMS-9972869. Richard M. McLaughlin would like to acknowledge support from NSF Career Grant DMS-97019242, and would like to thank L. Kadanoff and the James Franck Institute for support during the writing of this paper, and Raymond T. Pierrehumbert for several useful conversations. The authors would like to thank Misha Chertkov, Leo Kadanoff and Kenneth T-R. McLaughlin for several conversations, and Pete Kramer for an extremely thorough reading of the original manuscript.
# A superradiance resonance cavity outside rapidly rotating black holes
## Abstract
We discuss the late-time behaviour of a dynamically perturbed Kerr black hole. We present analytic results for near extreme Kerr black holes that show that the large number of virtually undamped quasinormal modes that exist for nonzero values of the azimuthal eigenvalue $`m`$ combine in such a way that the field oscillates with an amplitude that decays as $`1/t`$ at late times. This prediction is verified using numerical time-evolutions of the Teukolsky equation. We argue that the observed behaviour may be relevant for astrophysical black holes, and that it can be understood in terms of the presence of a “superradiance resonance cavity” immediately outside the black hole.
A brief background. — Our understanding of the generic response of a black hole to dynamic perturbations is based on seminal work from 30 years ago. Exponentially damped quasinormal-mode (QNM) ringing was first observed in numerical experiments by Vishveshwara , and the subsequent late-time power-law fall-off (that all perturbative fields decay as $t^{-2l-3}$ in the Schwarzschild geometry) was discovered by Price . A considerable body of work has since established the importance of these two phenomena for black-hole physics. We now know that most black-hole signals are dominated by the slowest damped QNMs, and many reliable methods for investigating these modes have been developed . The nature of the late-time tail has also been studied in great detail. In particular it has been established that it is a generic effect independent of the presence of an event horizon: The tail arises from backscattering off of the weak gravitational potential in the far zone . However, the fact that our understanding has reached a mature level does not mean that no problems remain in this field. A few years ago, the quasinormal modes had been calculated also for Kerr black holes , but there were no actual calculations demonstrating the presence of power-law tails. Neither were there any dynamical studies of rotating black holes. Several recent developments have served to change this situation and improve our understanding of dynamical rotating black holes. Of particular relevance has been an effort to develop a reliable framework for perturbative time-evolutions of Kerr black holes . There have also been recent efforts to analytically approximate the late-time power-law tails for Kerr black holes . Furthermore, numerical relativity is reaching a stage where fully nonlinear studies of spinning black holes are feasible.
Kerr black-hole spectroscopy. — With the likely advent of gravitational-wave astronomy only a few years away, the onus is on theorists to provide detailed predictions of the many scenarios that may lead to detectable gravitational waves. In this context, the question whether we can realistically hope to do “black-hole spectroscopy” by detecting QNM signals and inverting them to infer the black hole’s mass and angular momentum is highly relevant . For slowly rotating black holes this presents a serious challenge. Using standard results one can readily estimate that the effective gravitational-wave amplitude for QNMs is (cf. similar estimates for pulsating stars )
$$h_{\mathrm{eff}}\approx 4.2\times 10^{-24}\left(\frac{\delta}{10^{-6}}\right)^{1/2}\left(\frac{M}{M_{\odot}}\right)\left(\frac{15\,\mathrm{Mpc}}{r}\right)$$
(1)
where $\delta$ is the radiated energy as a fraction of the black-hole mass $M$. The frequency of the radiation depends on the black-hole mass as $f\approx 12\,(M_{\odot}/M)\,\mathrm{kHz}$. Given these relations, and recalling the estimated sensitivity of the generation of detectors that is under construction, the detection of QNM signals from slowly rotating solar-mass black holes seems rather unlikely. The situation will be rather different for low-frequency signals from supermassive black holes in galactic nuclei and detection with LISA, the space-based interferometric gravitational-wave antenna. Also, there is recent evidence that “middle weight” black holes, in the range $100$–$1000M_{\odot}$, may exist . For such black holes the most important QNMs would radiate at frequencies where the new generation of ground based detectors reach their peak sensitivity. If there are indeed such black holes out there we may hope to take their fingerprints in the future.
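For orientation, these scalings are trivial to tabulate; the sketch below simply evaluates Eq. (1) and the $f\approx 12\,(M_\odot/M)\,\mathrm{kHz}$ scaling for a few masses, with $\delta$ and $r$ held at the fiducial values quoted above (no new estimates are involved):

```python
def h_eff(M_solar, delta=1e-6, r_Mpc=15.0):
    # Eq. (1): effective QNM gravitational-wave amplitude
    return 4.2e-24 * (delta / 1e-6) ** 0.5 * M_solar * (15.0 / r_Mpc)

def f_qnm_kHz(M_solar):
    # frequency scaling quoted in the text: f ~ 12 (M_sun / M) kHz
    return 12.0 / M_solar

for M in (1.0, 100.0, 1000.0, 1e6):
    print(M, h_eff(M), f_qnm_kHz(M))
```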
It has been suggested that QNM signals from rapidly rotating black holes would be easier to detect. This belief is based on the fact that some QNMs become very long lived as $a\to M$. In fact, mode calculations predict the existence of an infinite set of essentially undamped modes in the extreme Kerr limit . The available investigations into the detectability of QNM signals have focused on the slow damping of these modes . It has been shown that the decreased damping of the mode may increase the detectability considerably. However, these results have to be interpreted with some caution. What has been shown is the (anticipated) effect that a slower damped mode is easier to detect than a short-lived one, provided that the modes are excited to a comparable amplitude. This is a rather subtle issue that pertains to the question whether it is easier to excite a slowly damped QNM than a short-lived one. Intuitively, one might expect this not to be the case. In similar physical situations the build-up of energy in a long-lived resonant mode takes place on a time-scale similar to the eventual mode damping. Thus it ought to be very difficult to excite a QNM that has characteristic damping several times longer than the dynamical timescale of the excitation process. This argument suggests that the amplitude of each long-lived mode ought to vanish in the limit $a\to M$ when the e-folding time of the mode increases dramatically . In view of this it would seem rather dubious to conclude that the detectability of a QNM signal actually improves as $a\to M$. All may not be lost, however, because even if each individual QNM has an infinitesimal amplitude for rapidly spinning black holes, a large number of modes approach the same limiting frequency as $a\to M$. These modes may combine to give a considerable signal .
A surprising analytic result. — We want to assess the change in “detectability” of the QNMs as $`aM`$, i.e. as we approach the extreme Kerr black hole case. As a suitable model problem, we consider a massless scalar field. As is well known, the equation that governs such a field (which follows immediately from $`\mathrm{}\mathrm{\Phi }=0`$) is similar to the master equation for both electromagnetic and gravitational perturbations of a rotating black hole that was first derived by Teukolsky . In the following we briefly outline our calculation and discuss the main results. A more exhaustive discussion will be presented elsewhere. We use standard Boyer-Lindquist coordinates, and approach the QNM problem in the frequency domain (obtained via the integral transform used in ). Furthermore, we use the symmetry of the problem to separate the dependence on the azimuthal angle $`\phi `$. In essence, we are using a decomposition;
$$\Phi=\frac{e^{im\phi}}{2\pi}\sum_{l=0}^{\infty}\int_{-\infty}^{+\infty}\frac{R_{lm}(\omega,r)}{\sqrt{r^2+a^2}}\,S_{lm}(\omega,\theta)\,e^{i\omega t}\,d\omega$$
(2)
It should be noted that the rotation of the black hole couples the various multipoles through the (frequency dependent) spheroidal angular functions $`S_{lm}`$ .
In direct analogy with the Schwarzschild case the initial value problem for the scalar field can be solved using a Green’s function constructed from solutions to the homogeneous radial equation for $R_{lm}(\omega,r)$. One of the required solutions, that satisfies the causal condition at the event horizon $r_+=M+\sqrt{M^2-a^2}$, has asymptotic behaviour
$$R_{lm}^{\mathrm{in}}\sim\begin{cases}e^{ikr_*}&\text{as }r\to r_+,\\ A_{\mathrm{out}}\,e^{-i\omega r_*}+A_{\mathrm{in}}\,e^{i\omega r_*}&\text{as }r\to+\infty.\end{cases}$$
(3)
Here
$$k=\omega-\frac{ma}{2Mr_+}=\omega-m\omega_+,$$
(4)
where $\omega_+$ is the angular velocity of the event horizon, and $r_*$ is the tortoise coordinate. It is useful to recall that a monochromatic wave is superradiant if it has frequency in the range $0<\omega<m\omega_+$ .
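The superradiance condition is a one-line check once $\omega_+$ is known; in units $G=c=1$, with $r_+=M+\sqrt{M^2-a^2}$ from above and $\omega_+=a/(2Mr_+)$ from Eq. (4). A small sketch (the sample numbers are arbitrary):

```python
import math

def omega_plus(M, a):
    # horizon angular velocity, Eq. (4): omega_+ = a / (2 M r_+)
    r_plus = M + math.sqrt(M * M - a * a)
    return a / (2.0 * M * r_plus)

def is_superradiant(omega, m, M, a):
    # condition quoted in the text: 0 < omega < m * omega_+
    return 0.0 < omega < m * omega_plus(M, a)

M = 1.0
for a in (0.5, 0.9, 0.998, 1.0):
    print(a, omega_plus(M, a), is_superradiant(0.4, 2, M, a))
```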
A QNM is defined as a frequency $\omega_n$ at which $A_{\mathrm{in}}=0$. Assuming that $A_{\mathrm{in}}\approx(\omega-\omega_n)\,\alpha_n$ close to $\omega=\omega_n$, we can deduce (via the residue theorem) that the contribution from each such mode to the evolution of the scalar field is
$$\Phi_n(t,r,\theta)=\frac{A_{\mathrm{out}}}{2\omega_n\alpha_n}\,e^{i\omega_n(t-r_*)}\sum_{l=0}^{\infty}S_{lm}(\omega_n,\theta)\,\mathcal{I}_{lm}$$
(5)
where $\mathcal{I}_{lm}(\omega_n,r)$ is a complicated expression that depends on the details of the initial data (here assumed to have support only far away from the black hole), cf. .
Let us now focus on the case of nearly extreme Kerr black holes, i.e. on the case $a\approx M$. Then we can benefit from an approximation due to Teukolsky and Press , that suggests that there will exist an infinite set of QNMs that can be approximated by
$$\omega_nM\approx\frac{m}{2}-\frac{1}{4m}\,e^{(\theta-2n\pi)/2\delta}\,(\cos\phi-i\sin\phi)$$
(6)
where $\delta$, $\theta$ and $\phi$ are positive constants (not to be confused with the coordinates), and $n$ is an integer labelling the modes. It is easy to see that as $n\to\infty$ the modes become virtually undamped, and that they are located close to the upper limit of the superradiant frequency interval. That such a set of long-lived QNMs will exist agrees with other mode-calculations . Given the location of the QNMs we can extend the calculation to deduce also the form of the asymptotic amplitudes $A_{\mathrm{out}}$ and $A_{\mathrm{in}}$ (or rather, the coefficient $\alpha_n$) for each $\omega_n$. This enables us to approximate the contribution of each long-lived QNM to the field via (5). Doing this we find that the longest lived modes have exponentially small amplitudes. Thus we predict that the individual QNM will not in general be excited to a large amplitude, in agreement with the intuitive expectation. Expressing this result in terms of the effective amplitude of a corresponding gravitational-wave QNM, we would have
$$h_{\mathrm{eff}}\propto\sqrt{\frac{\text{Re}\,\omega_n}{\text{Im}\,\omega_n}}\,\frac{A_{\mathrm{out}}}{\alpha_n}\sim e^{-n\pi/2\delta}$$
(7)
In other words, the assumption that the long-lived modes may be easier to detect than (say) their short-lived counterparts for slowly rotating black holes is cast in serious doubt. A recent, more detailed, calculation of the QNM excitation coefficients for $a\to M$ supports this conclusion.
This does not, however, mean that the long-lived QNMs are without relevance. On the contrary, the fact that there is a large number of such modes has a very interesting consequence. After combining all the long-lived modes we find
$$\sum_{n=0}^{\infty}\frac{A_{\mathrm{out}}}{\alpha_n}\,e^{i\omega_n(t-r_*)}\sim\frac{e^{im\omega_+t}}{t}\quad\text{as }t\to\infty$$
(8)
This is an unexpected result: It suggests that, when summed, the contribution from the slowly damped QNMs of a near extreme Kerr black hole corresponds to an oscillating signal whose magnitude falls off with time as a power-law. Furthermore, the decay of this signal is considerably slower than the standard power-law tail. The decay of $1/t$ should be compared to the tail-results of, for example, Ori and Barack that suggest that $\Phi\sim t^{-(l+|m|+3+q)}$ where $q=0$ for even $l+m$ and 1 for odd $l+m$ (derived only for non-extreme black holes). Hence, we predict that the oscillating QNM-tail will dominate the late-time behaviour of a perturbed near extreme Kerr black hole.
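The mechanism behind Eq. (8) can be mimicked with a toy sum: frequencies approaching $m\omega_+$ geometrically, as in Eq. (6), with relative amplitudes $A_{\mathrm{out}}/\alpha_n\propto e^{-n\pi/\delta}$ (the combination consistent with the $e^{-n\pi/2\delta}$ scaling of $h_{\mathrm{eff}}$ once the $\sqrt{\text{Re}\,\omega/\text{Im}\,\omega}$ factor is divided out). All numerical values below are arbitrary toy choices, not fits to the Teukolsky problem; a minimal sketch:

```python
import numpy as np

delta, phi, m_omega_plus = 2.0, 1.0, 1.0    # arbitrary toy values
n = np.arange(200)
eps = np.exp(-n * np.pi / delta)            # omega_n -> m*omega_+ geometrically
omega = m_omega_plus - eps * (np.cos(phi) - 1j * np.sin(phi))   # damped modes
amp = eps                                   # A_out / alpha_n ~ e^{-n pi / delta}

t = np.logspace(1, 6, 40)
signal = np.abs((amp * np.exp(1j * np.outer(t, omega))).sum(axis=1))
slope = np.polyfit(np.log(t), np.log(signal), 1)[0]
print(slope)                                # close to -1, i.e. |signal| ~ 1/t
```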
Numerical confirmation. — Our analytic result is obviously surprising. However, in view of the many approximations involved in the derivation of (8) considerable caution is warranted, and an alternative confirmation of the analytic prediction is desirable. Fortunately, the recent effort to develop a framework for doing perturbative time-evolutions for Kerr black holes provides the means for testing our result. Hence, we have performed a set of evolutions (for various values of $`m`$) using the same scalar field code that was used to study superradiance in a dynamical context . As initial data we have chosen a generic Gaussian pulse originally located far away from the black hole.
Our numerical evolution results can be succinctly summarised as follows (further details will be discussed elsewhere):
i) For extreme Kerr black holes ($a=M$) the numerical evolutions show the predicted oscillating $1/t$ behaviour for all $m\ne 0$, cf. Figure 1.
ii) For $a<M$ we find a similar behaviour, i.e. that at late times the field is well approximated by oscillations whose amplitude decays as $t^{-\mu}$, with $\mu$ rapidly increasing from 1 as $a$ departs from $M$.
iii) For axisymmetric perturbations ($m=0$) the numerical evolution recovers the standard power-law tail. For our particular choice of initial data (that contains the $l=0$ multipole) the tail falls off as $t^{-3}$.
Our interpretation of these results is: Firstly, the numerical evolutions confirm the analytic prediction for extreme Kerr black holes, i.e. that the field will oscillate with an amplitude that decays as $1/t$ at very late times. Secondly, and more important physically, the numerical data suggests that the late-time behaviour is qualitatively similar also in cases when $a$ is significantly smaller than $M$. The late-time behaviour was found to be consistent with an oscillating tail in all cases we have considered so far (essentially $a\gtrsim 0.9M$). We have also verified that the observed late-time behaviour cannot be accounted for by a single slowly damped QNM when $a<M$. It is important to emphasise that the result for $a<M$ was not predicted by the analytical work (since the approximate modes we used are relevant only for $a\approx M$). In other words, the numerical experiments indicate that the effect could be of relevance for all rapidly rotating black holes and may well dominate the standard power-law tail for a large range of astrophysical black hole parameters. Intuitively one would expect there to exist a critical value of the rotation parameter $a$ above which the new effect becomes relevant. More detailed numerical work is needed to establish this, and investigate the role of the new effect further.
A physical interpretation. — Given both the analytic prediction for extreme Kerr black holes and the numerical evolution results for $a\lesssim M$, an intriguing picture emerges. The results seem to suggest the existence of a new phenomenon in black-hole physics, with relevance at late times. We recall that the QNMs are typically interpreted, in analogy with scattering resonances in quantum physics, as originating from waves that are temporarily trapped close to the peak of the curvature potential (corresponding to the unstable photon orbit at $r=3M$ in the Schwarzschild spacetime), and that the late-time power-law tail arises because of backscattering off of the weak potential in the far zone. Can the present results be interpreted in a similar intuitive vein? We think they can, and propose the following explanation: Consider the fate of an essentially monochromatic wave that falls onto the black hole. Provided that the frequency is in the interval $0<\omega<m\omega_+$ the wave will be superradiant. In effect, this means that a distant observer will see waves “emerging from the horizon”, cf. (3), even though a local observer sees the waves crossing the event horizon (at $r_+$) . This results in the scattered wave being amplified. In addition to this, one can establish that the effective potential has a peak outside the black hole (which is not immediately obvious since the potential is frequency dependent in the Kerr case) for a range of frequencies including the superradiant interval. Now, the combination of the causal boundary condition at the horizon effectively corresponding to waves “coming out of the black hole” (according to a distant observer) and the presence of a potential peak leads to waves potentially being trapped in the region close to the horizon. In effect, there is a “superradiance resonance cavity” outside the black hole. Again according to a distant observer, waves can only escape from this cavity by leakage through the potential barrier to infinity. Presumably it is this leakage that then leads to the observed $1/t$ decay. Furthermore, it should be noted that superradiant amplification is strongest for frequencies close to $m\omega_+$. Thus superradiance effects a form of parametric amplification on waves in the cavity. At very late times, the dominant oscillation frequency ought to be that which experiences the strongest amplification, i.e. $m\omega_+$. This is, of course, exactly the result of our analytic calculation. The above argument is illustrated schematically in Figure 2.
Concluding remarks. — We have presented the results of an investigation into the late-time behaviour of a perturbed Kerr black hole. An analytic calculation for the near extreme Kerr black hole case led to two important results. Firstly, we deduced that even though some QNMs become very slowly damped as $a\to M$ these modes will not be easier to detect with a gravitational-wave detector. Secondly, we arrived at the rather surprising prediction that the large number of virtually undamped QNMs that exist for each value of $m\ne 0$ combine in such a way that the field oscillates with an amplitude that decays as $1/t$ at late times. This decay is considerably slower than the standard power-law tail. The analytic prediction was then verified using numerical time-evolutions of the Teukolsky equation. These evolutions, performed for a larger range of the black-hole rotation parameter, suggest that the observed behaviour may well be present also for astrophysical black holes (which we recall must have $a\le 0.998M$ ). Finally, we proposed an intuitive explanation of the observed phenomenon: That waves of certain frequencies are effectively trapped in a “superradiance resonance cavity” immediately outside the black hole. In conclusion, we find these results tremendously exciting: They indicate the presence of a new phenomenon in black-hole physics that may well be of astrophysical relevance.
We acknowledge helpful discussions with Amos Ori and Leor Barack. KG thanks the State Scholarship Foundation of Greece for financial support.
# Generalized bit cumulants for chaotic systems: Numerical results
## I. INTRODUCTION
Bit cumulants offer a convenient characterization of the fluctuating bit numbers of probability distributions generated by chaotic systems . Especially, the second bit cumulant, which measures the variance of the bit number, is equivalent to heat capacity in the thermodynamic analogy. This quantity is also helpful to discuss sensitivity to correlations among subsystems .
Our purpose here is to generalize the bit cumulants within nonextensive thermostatistics of Tsallis . The latter formalism is based on a non-logarithmic entropy
$$S_q=\frac{1-\sum_{i=1}^{W}p_i^q}{q-1},$$
(1)
where $|1-q|$ is a measure of nonextensivity of the entropy i.e. its feature of non-additivity with respect to entropies of statistically independent subsystems. Based on the idea of q-deformed bit numbers , Tsallis entropy can be written in two equivalent forms
$$S_q=\sum_{i=1}^{W}[a_i]\,p_i=\sum_{i=1}^{W}[b_i]\,p_i^q,$$
(2)
where $b_i=-a_i$ is the bit number and $[x]=\frac{q^x-1}{q-1}$. As $q\to 1$, $[x]\to x$ and $S_q\to-\sum_i p_i\ln p_i$, the Shannon entropy. Thermostatistics based on this formalism obeys the Legendre Transform structure of the standard formalism . Apart from this, various (in)equalities of thermodynamics are properly generalized or are left invariant with respect to $q$.
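For a quick numerical illustration of Eq. (1) and the $q\to 1$ limit (the four-state test distribution below is an arbitrary choice):

```python
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))            # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)    # Eq. (1)

p = [0.5, 0.3, 0.15, 0.05]
for q in (0.5, 0.9, 0.999, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))              # approaches Shannon as q -> 1
```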
Tsallis formalism has found a number of significant applications, such as stellar polytropes , two-dimensional pure electron plasma turbulence , solar neutrinos , anomalous diffusion , dynamical response theory , to name only a few. These systems are characterized by one of the following: long range interactions, long term memory effects or multifractal-like phase space. Particularly, Tsallis formalism has yielded important insights into low-dimensional dissipative systems at the onset of chaos or at bifurcation points . Recently, a nonextensive thermostatistics based on multifractal formalism was developed by the authors which related the degree of nonextensivity $(1-q)$ to the precision of a calculation . Thus it is relevant to discuss the alternative tool of bit cumulants within the nonextensive approach.
This paper is organized as follows: in section II we briefly discuss the standard bit cumulants. In section III, we present generalized version of bit cumulants and apply the first and second cumulant to logistic-like family of maps. Section IV concludes the study.
## II. STANDARD BIT CUMULANTS
Bit cumulants $`\mathrm{\Gamma }_k`$ of order $`k`$ are defined via a generating function
$$G(\sigma)=\ln\Bigl(\sum_i p_i\exp(\sigma a_i)\Bigr)=\sum_{k=0}^{\infty}\left(\frac{\sigma^k}{k!}\right)\Gamma_k.$$
(3)
Alternatively, we can write
$$\Gamma_k=\frac{\partial^k}{\partial\sigma^k}G(\sigma)\Big|_{\sigma=0}.$$
(4)
The zeroth cumulant is zero. The first cumulant $\Gamma_1$ is the Shannon entropy $\langle a_i\rangle$ of the distribution $\{p_i\}$. The second cumulant is the variance of the bit number, i.e., $\Gamma_2=\langle a_i^2\rangle-\langle a_i\rangle^2$. From the thermodynamics point of view, $\Gamma_2$ is of major importance. An important property of the $\Gamma_k$ is that they are additive with respect to statistically independent systems or subsystems. Thus for a composite system $(I+II)$ whose joint probabilities factorize as $p_{ij}^{(I+II)}=p_i^{(I)}\,p_j^{(II)}$, we have $\Gamma_k^{(I+II)}=\Gamma_k^{(I)}+\Gamma_k^{(II)}$. This may be taken as the extensive feature of the standard bit cumulants.
## III. GENERALIZED BIT CUMULANTS
We have seen (Eq. (2)) that within Tsallis thermostatistics, the generalized bit-number is given by $`[a_i]`$. Thus we define the new generating function of bit cumulants as
$$G^{(q)}(\sigma)=\ln\Bigl(\sum_i p_i\exp(\sigma[a_i])\Bigr)=\sum_{k=0}^{\infty}\left(\frac{\sigma^k}{k!}\right)\Gamma_k^{(q)}.$$
(5)
The generalized bit cumulant may be defined as
$$\Gamma_k^{(q)}=\frac{\partial^k}{\partial\sigma^k}G^{(q)}(\sigma)\Big|_{\sigma=0}.$$
(6)
It is easy to see that the first cumulant is the Tsallis entropy $\langle[a_i]\rangle$. The second cumulant is the variance of the generalized bit number $[a_i]$, and is given by
$$\Gamma_2^{(q)}=\langle[a_i]^2\rangle-\langle[a_i]\rangle^2.$$
(7)
Alternately, in terms of the bit number $[b_i]$ (Eq. (2)), one can write $\Gamma_2^{(q)}=\sum_i[b_i]^2p_i^{2q-1}-\bigl(\sum_i[b_i]p_i^q\bigr)^2$. Note that it is also possible to obtain the second cumulant by defining a generalized free energy $\Psi_q$ and using the relation $\Gamma_2^{(q)}=\partial^2\Psi_q/\partial\beta^2\big|_{\beta=1}$, such that
$$\Gamma_2^{(q)}=q\left\{\sum_i[b_i]^2p_i^{2q-1}-\Bigl(\sum_i[b_i]p_i^q\Bigr)^2\right\}.$$
(8)
Thus the two cumulants differ only by a factor of $q$. In the following, we will apply Eq. (8), as it is related to the general thermodynamic framework as established in .
An important distinctive feature of the new cumulants is that they are non-additive (nonextensive) with respect to independent subsystems. In this paper, we apply the first and second generalized cumulants to the study of chaotic systems, such as the logistic map and the logistic-like family of maps.
### A. First cumulant
As said above, the first generalized bit cumulant is the Tsallis entropy $S_q$ itself. For an ergodic map, we write $S_q=\frac{\langle\rho^{q-1}-1\rangle}{1-q}$ where $\rho$ is the natural invariant density of the map. Consider the standard logistic map $x_{n+1}=rx_n(1-x_n)$, $x_n\in[0,1]$, which is chaotic above $r=r_c=3.569945\ldots$. As Fig. 1 shows, for $q<1$, with decrease in box size $\epsilon$ Tsallis entropy shows a corresponding increase. This behaviour is comparable to that shown in Fig. 2 by the Shannon entropy $S_1=-\langle\ln\rho\rangle$, although at a given parameter value $r$, and for $q<1$, $S_q>S_1$ at the same box size. This latter feature is already known for an equiprobability distribution , but here it is shown for a nonuniform distribution such as that generated by the logistic map. Moreover, Tsallis entropy provides notable variation with respect to $q$ (Fig. 3). For $q<1$, as $(1-q)$ increases, $S_q$ also shows an increase. Thus the results of Fig. 1 and Fig. 3 suggest an interesting possibility: Tsallis entropy evaluated at a smaller box size and small $(1-q)$ can be matched by the value of entropy at a larger box size and larger $(1-q)$. In other words, for a given value of Tsallis entropy, there can be a range of $(1-q)$ and box size values $\epsilon$, and it is interesting to see the relation between the two. For concreteness, we choose a fixed $r$ value and evaluate $S_q$ at some box size and given $q$ value. Then we keep $S_q$ fixed to within good approximation and plot the corresponding $1-q$ and $1/V=-1/\ln\epsilon$ values in Fig. 4. Note that $V$ is the volume parameter in the thermodynamic analogy . Thus the thermodynamic limit $V\to\infty$ is equivalent to $\epsilon\to 0$. The direct proportionality between $(1-q)$ and $1/V$ plays an important role in the nonextensive formalism for chaotic systems .
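The kind of numerical experiment behind Figs. 1–4 can be sketched in a few lines: generate a long orbit of the logistic map, bin it into boxes of size $\epsilon$, and evaluate $S_q$ from the box probabilities. Orbit length, transient, and the values of $r$, $q$ and $\epsilon$ below are illustrative choices only:

```python
import numpy as np

def box_probabilities(r=3.8, eps=1e-3, n_iter=1_000_000, n_trans=10_000):
    # long orbit of the logistic map, binned into boxes of size eps
    x = 0.1
    for _ in range(n_trans):
        x = r * x * (1.0 - x)
    xs = np.empty(n_iter)
    for i in range(n_iter):
        x = r * x * (1.0 - x)
        xs[i] = x
    counts, _ = np.histogram(xs, bins=int(round(1.0 / eps)), range=(0.0, 1.0))
    return counts[counts > 0] / n_iter

def S_q(p, q):
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

for eps in (1e-2, 1e-3, 1e-4):
    p = box_probabilities(eps=eps)
    print(eps, S_q(p, q=0.8), -np.sum(p * np.log(p)))   # S_q grows as eps shrinks
```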
### B. Second cumulant
In the canonical framework, the second cumulant is equivalent to heat capacity. In the following, we make a detailed study of generalized second cumulants. In terms of probabilities $p_i$, we can write Eq. (8) as
$$\Gamma_2^{(q)}=\frac{q}{(q-1)^2}\left\{\sum_i p_i^{2q-1}-\Bigl(\sum_i p_i^q\Bigr)^2\right\}.$$
(9)
$\Gamma_2^{(q)}$ goes to the second bit cumulant, $\Gamma_2=\langle(\ln p_i)^2\rangle-\langle\ln p_i\rangle^2$, as $q\to 1$.
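Both Eq. (9) and this limit are easy to verify for any discrete distribution (the test distribution below is arbitrary):

```python
import numpy as np

def gamma2_q(p, q):
    # Eq. (9): generalized second bit cumulant
    p = np.asarray(p, dtype=float)
    return q / (q - 1.0) ** 2 * (np.sum(p ** (2 * q - 1)) - np.sum(p ** q) ** 2)

def gamma2(p):
    # standard second bit cumulant: variance of -ln p
    p = np.asarray(p, dtype=float)
    return np.sum(p * np.log(p) ** 2) - np.sum(p * np.log(p)) ** 2

p = [0.4, 0.3, 0.2, 0.1]
for q in (1.2, 1.02, 1.002):
    print(q, gamma2_q(p, q), gamma2(p))   # gamma2_q converges to gamma2 as q -> 1
```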
To discuss the non-additive property of $\Gamma_2^{(q)}$, consider again a composite system (I+II), for which $p_{ij}^{I+II}=p_i^{(I)}p_j^{(II)}$. Then
$$\Gamma_2^{(q)}(I+II) = \Gamma_2^{(q)}(I)+\Gamma_2^{(q)}(II) \qquad (12)$$
$$\qquad -\,2(1-q)\left\{\Gamma_2^{(q)}(I)\,\langle[a_j]\rangle_{II}+\Gamma_2^{(q)}(II)\,\langle[a_i]\rangle_{I}\right\}$$
$$\qquad +\,q(1-q)^2\left\{\langle[a_i]^2\rangle_{I}\langle[a_j]^2\rangle_{II}-\langle[a_i]\rangle_{I}^{2}\langle[a_j]\rangle_{II}^{2}\right\}.$$
The non-additive feature also indicates correlations among subsystems I and II when $q\ne 1$.
Now for ergodic maps, based on Eq. (9) we propose the generalized bit variance density or heat capacity given by
$$C_2^{(q)}=\frac{q}{(q-1)^2}\bigl(\langle\rho^{(2q-1)}\rangle-\langle\rho^q\rangle^2\bigr).$$
(13)
For $q\to 1$, we have $C_2=\langle(\ln\rho)^2\rangle-\langle\ln\rho\rangle^2$. We make a study of Eq. (13) for the logistic-like family of maps. These are given by $x_{n+1}=1-a|x_n|^z$, $z>1$, $0<a<2$ and $-1\le x\le 1$. Especially for $z=2$, we have the standard logistic map in its centered representation. Fig. 5 shows both $C_2$ and $C_2^{(q)}$ vs. $a$ for $z=2$. It appears there is a kind of scaling factor between $C_2$ and $C_2^{(q)}$. To check this, we plot $C_2$ vs. $C_2^{(q)}$ in Fig. 6 and note that most of the points can be fitted to a straight line.
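A sketch of the corresponding numerical scan: estimate box probabilities of the invariant measure of $x_{n+1}=1-a|x_n|^z$ from a long orbit, and evaluate the bit variance from Eq. (9) at $q=1$ and $q\ne 1$. Bin counts, orbit lengths and the parameter values are illustrative, with $a$ kept inside the chaotic regime:

```python
import numpy as np

def invariant_probs(a, z, bins=1000, n_iter=500_000, n_trans=10_000):
    # box probabilities of the invariant measure of x -> 1 - a|x|^z on [-1, 1]
    x = 0.3
    for _ in range(n_trans):
        x = 1.0 - a * abs(x) ** z
    xs = np.empty(n_iter)
    for i in range(n_iter):
        x = 1.0 - a * abs(x) ** z
        xs[i] = x
    counts, _ = np.histogram(xs, bins=bins, range=(-1.0, 1.0))
    return counts[counts > 0] / n_iter

def bit_variance(p, q):
    if abs(q - 1.0) < 1e-12:     # q -> 1 limit: variance of -ln p
        return np.sum(p * np.log(p) ** 2) - np.sum(p * np.log(p)) ** 2
    return q / (q - 1.0) ** 2 * (np.sum(p ** (2 * q - 1)) - np.sum(p ** q) ** 2)

for a in (1.5, 1.7, 1.9, 2.0):
    p = invariant_probs(a, z=2.0)
    print(a, bit_variance(p, 1.0), bit_variance(p, 0.9))
```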
One can ask how this relation between $C_2$ and $C_2^{(q)}$ depends on the nature of the map. In Fig. 7, we show results for different $z$ values. The scaling factor between $C_2^{(q)}$ and $C_2$, which is measured by the slope of straight-line fits to graphs such as Fig. 6, shows a monotonic decrease with increasing $z$ value (Fig. 8). In other words, the deviation of the slope from unity decreases with increase in $z$ value, i.e. the function $C_2^{(q)}$ is less sensitive to $q$ for higher values of $z$.
Alternatively, for a given map (fixed $z$ value), one can enquire how the above mentioned slope changes with $q$. Naturally for $q\to 1$, $C_2^{(q)}\to C_2$ and the slope tends to unity. These results are shown in Fig. 9.
## IV. CONCLUSION
We have generalized the bit cumulants within the nonextensive approach. In this paper, we have concentrated on properties of the first and second bit cumulants. We have seen how, keeping Tsallis entropy (the first cumulant) constant, we get a connection between box size (which represents the precision of a calculation) and the degree of nonextensivity $1-q$. Secondly, we have done a detailed study of the second bit cumulant, applying it to the logistic-like family of maps. We note that for large $z$, $C_2^{(q)}\to C_2$. In the light of this, we would like to point out a feature seen in studies on sensitivity to initial conditions in similar systems . There, as $z\to\infty$, the nonextensivity index $q\to 1$. Further work elucidating this connection will be welcome.
## ACKNOWLEDGEMENTS
RR would like to thank the University Grants Commission, India for the grant of a Senior Research Fellowship.
# The Future of Extragalactic Research
## 1 Introduction
Predictions regarding the future directions of research in science have a very poor track record. The main reason for this is, no doubt, that the most important developments in science are also the most unexpected ones. It follows that the great discoveries in astronomy cannot be predicted by linear extrapolation from past trends. Furthermore, serendipitous discovery plays a major role in the advancement of our science. Nevertheless, it is perhaps safe to let one’s thinking about the future be guided by George Ellery Hale’s notion that progress in observational astronomy always requires “Light! More light!” (Woodbury, 1944). During recent decades the greatest advance has come from the replacement of photographic plates by charge coupled detectors. Such CCDs increase the efficiency of photon detection by two orders of magnitude. Furthermore they produce data in a format that can be easily manipulated by digital computers. An additional gain by a factor of $1\times 10^2$ in global light gathering power has resulted by going from Hale’s single 5-m reflector on Palomar Mountain to a world-wide arsenal of a dozen or so 5-m to 10-m telescopes. Since the efficiency of CCD detectors is already close to 100%, little additional gain can be expected from increases in detector speed. Although significant increases in telescope aperture are technically possible, it seems doubtful whether Society would be willing to support the expenditures required to increase the size of telescopes by two orders of magnitude. The only remaining way to increase the number of photons available to astronomers would appear to be by increasing detector size, and hence the area of the sky, that can be imaged at any given time. Such wide-field detectors will make it possible to undertake enormous homogeneous surveys of various classes of astronomical objects. One might also dream of detectors that could determine the wavelength of each incident photon. This would allow one to avoid the inefficiencies that are inherent in photometry through intermediate-band filters and in (low dispersion) spectroscopy.
On visiting the offices of the Mount Wilson and Palomar Observatories in the nineteen-sixties I was shown one or more plates of the Crab Nebula by four individuals. Each of these astronomers jealously guarded such plates in his own private files! Clearly such a situation, where large telescope data do not (after a short proprietary period) become part of the common heritage of mankind, is highly inefficient. In future all large telescope observations will be available in large publicly available digital data bases where they can be mined, or combined, at will. This will make it possible for all photons collected globally over many years to contribute to the progress of astronomy.
## 2 Galaxies
A few years ago a funding agency asked me to help evaluate the proposal for the Sloan Digital Sky Survey. On reading this document I was suddenly struck by the realization that the acquisition and manipulation of such enormous data bases would become central to the astronomy of the twenty-first century. In this connection one is reminded of Joseph Stalin’s dictum that quantity [of tanks] has a quality all its own. By the same token a computer chip with millions of diodes on its surface has become qualitatively different from a device that contains a few vacuum tubes. Similarly the human brain, which contains millions of neurons, allows us to explore “dimensions” that are not available to a small-brained mouse.
The availability of homogeneous surveys of millions of galaxies should allow us to gain deep new insights into many aspects of galaxy evolution. The accuracy of automatic classifications of individual galaxy images by “neural networks” is greatly inferior to that of craftsmen like G. de Vaucouleurs, A.R. Sandage, and W.W. Morgan. However, such mass-produced computer classifications will allow one to investigate the differences between the distributions of millions of galaxies of differing morphological type over the sky. Furthermore enormous homogeneous data sets on galaxies at different redshifts will make it possible to study both the evolutionary histories of different types of galaxies, and the changes in merger rates between galaxies, over a significant fraction of the lifetime of the Universe. Finally very large homogeneous samples of galaxies may enable one to identify very rare (or unusual) objects that might be worthy of more detailed study.
It is perhaps ironic that such a future style of astronomical observation, which is based on large data samples, will be more similar to that employed by Harlow Shapley and his collaborators at Harvard in the nineteen-thirties, than it is to the modern Palomar/KPNO style of observing, which tends to involve more intensive study of individual objects and small samples.
## 3 Quasars
Astronomers appear to be particularly attracted to violent events, such as supernova explosions and outbursts in active galactic nuclei. Supernovae are of enormous interest because such events (1) liberate vast amounts of energy, (2) produce a large fraction of the heavy elements that are required to sustain life, (3) serve as possible calibrators of the extragalactic distance scale, and (4) form black holes. It appears probable that individual supernovae of Type II might be detectable at greater redshifts than those at which the first generation of metal-poor galaxies are visible. Large digital sky surveys might therefore be used to generate samples of ancient supernovae at the edge of the observable Universe.
Large, deep, and homogeneous surveys should also allow one to study rare quasars with unusually large redshifts, that were formed when the Universe was still quite young. Furthermore such surveys would probably turn up many unusual active nuclei that would warrant more detailed study with large narrow-field telescopes. One should, however, heed the warning by Erasmus (1510) that “We fools have a particular trick of liking best whatever comes to us from farthest away.” One should therefore never forget that much can also be learned from detailed study of nearby galaxies, such as the members of the Local Group, and from large homogeneous surveys that can be used for systematic discovery of the oldest and most metal-poor objects in the Milky Way.
## 4 The Future
One should always expect the unexpected. The 100-inch Hooker telescope on Mt. Wilson was mainly built to provide the light-gathering power needed for high-dispersion spectroscopy of stars, but its main claim to fame is that it allowed Hubble (1929) to discover the expansion of the Universe. By the same token the Palomar 5-m reflector was mainly built to establish the extragalactic distance scale, but it will probably go down in history as the telescope that discovered quasars.
Harlow Shapley once remarked that one should hang ribbons around telescopes, rather than medals around the necks of famous astronomers. Recent experience with the Keck telescope and Hubble Space Telescope tends to support this view. The development of more powerful telescopes has clearly been the dominant driving force behind many of the most spectacular advances of modern astronomy. Nevertheless it might turn out that larger detectors, and faster computers to analyze their output, will be the most important drivers of twenty-first century astronomy. This confirms the view of baseball great Yogi Berra that “The future isn’t what it used to be”.
# 1 Wilson graph for the disk correlators
hep-th/9909140
ETH-TH/99-25
ESI-759
September 1999
CONFORMAL BOUNDARY CONDITIONS
AND THREE-DIMENSIONAL
TOPOLOGICAL FIELD THEORY
Giovanni Felder , Jürg Fröhlich , Jürgen Fuchs and Christoph Schweigert
ETH Zürich
CH – 8093 Zürich
> Abstract
> We present a general construction of all correlation functions of a two-dimensional rational conformal field theory, for an arbitrary number of bulk and boundary fields and arbitrary topologies. The correlators are expressed in terms of Wilson graphs in a certain three-manifold, the connecting manifold. The amplitudes constructed this way can be shown to be modular invariant and to obey the correct factorization rules.
Two-dimensional conformal field theory plays a fundamental role in the theory of two-dimensional critical systems of classical statistical mechanics , in quasi one-dimensional condensed matter physics and in string theory . The study of defects in systems of condensed matter physics , of percolation probabilities and of (open) string perturbation theory in the background of certain string solitons, the so-called D-branes , forces one to analyze conformal field theories on surfaces that may have boundaries and / or can be non-orientable.
In this letter we present a new description of correlation functions of an arbitrary number of bulk and boundary fields on general surfaces. We also show how to compute various types of operator product coefficients from our formulas. For simplicity, in this letter we restrict our attention to boundary conditions that preserve all bulk symmetries. Moreover, we take the modular invariant torus partition function that encodes the spectrum of bulk fields of the theory to be given by charge conjugation. Technical details and complete proofs will appear in a separate publication.
Given a chiral conformal field theory, such as a chiral free boson, our aim is to compute correlation functions on a two-dimensional surface $X$ that may be non-orientable and can have a boundary. To this end, we first construct the so-called double $\widehat{X}$ of the surface $X$. This is an oriented surface, on which an orientation reversing map $\sigma$ of order two acts in such a way that $X$ is obtained as the quotient of $\widehat{X}$ by $\sigma$. Thus $\widehat{X}$ is a two-fold cover of $X$; but this cover is branched over the boundary points, which correspond to fixed points of the map $\sigma$. For example, when $X$ is the disk $D$, then $\widehat{X}$ is the two-sphere and $\sigma$ is the reflection about its equatorial plane. For $X$ the cross-cap, i.e. the real projective plane $\mathbb{RP}^2$, $\widehat{X}$ is again the two-sphere, but $\sigma$ is now the antipodal map. Finally, when $X$ is closed and orientable, the double $\widehat{X}$ consists of two disconnected copies of $X$ with opposite orientation, $\widehat{X}\cong X\sqcup(-X)$.
Quite generally, correlation functions on a surface $`X`$ can be constructed from conformal blocks on its double $`\widehat{X}`$ . As a first step, one has to find the pre-images on $`\widehat{X}`$ of all insertion points on $`X`$, and associate a primary field of the chiral conformal field theory to each of them. Since bulk points have two pre-images, for a bulk field two chiral labels $`j`$ and $`j^{}`$ are needed, corresponding to left and right movers. Boundary fields, in contrast, carry a single label $`k`$; yet, they should not be thought of as chiral objects.
Having associated these labels to the geometric data, we can assign a vector space of conformal blocks, not necessarily of dimension one, to every collection of bulk and boundary fields on $X$. The correlation function is one specific element in this space. This element must obey modular invariance and factorization properties. The conformal bootstrap programme allows one to determine the correlation function by imposing these properties as constraints. Fortunately, the connection between conformal field theory in two dimensions and topological field theory in three dimensions supplies us with a most direct way to construct concrete elements in the spaces of conformal blocks. What one must do in order to specify a definite element in the space of conformal blocks is to find a three-manifold $M_X$ whose boundary is $\widehat{X}$,
$$\partial M_X=\widehat{X},$$
(1)
as well as a Wilson graph $`W`$ in $`M_X`$ that ends at the marked points on $`\widehat{X}`$. This can be done for any rational conformal field theory; for details, which are based on the axiomatization in , we refer to . In the particular case of WZW models, Chern–Simons theory can be used for this construction. For these models, the element in the space of conformal blocks is obtained by the Chern–Simons path integral
$$\int \mathcal{D}A\,W\,\mathrm{exp}\left(\mathrm{i}\frac{k}{4\pi }\int _{M_X}\mathrm{Tr}\left(A\wedge \mathrm{d}A+\frac{2}{3}A\wedge A\wedge A\right)\right)$$
(2)
with appropriate parabolic conditions at the punctures.
Thus to obtain a correlation function on $`X`$, we first construct a certain three-manifold $`M_X`$ with boundary $`\widehat{X}`$, which we call the connecting three-manifold. Technically, the manifold $`M_X`$ can be characterized as follows. When $`X`$ does not have a boundary, then $`M_X=(\widehat{X}\times [-1,1])/\mathbb{Z}_2`$, where the group $`\mathbb{Z}_2`$ acts on $`\widehat{X}`$ by $`\sigma `$ and on the interval $`[-1,1]`$ by the sign flip $`t\mapsto -t`$ for $`t\in [-1,1]`$. Thus $`M_X`$ consists of pairs $`(x,t)`$ with $`x`$ a point on the double $`\widehat{X}`$ and $`t`$ in $`[-1,1]`$, modulo the identification $`(x,t)\sim (\sigma (x),-t)`$. For fixed $`x`$, the points of the form $`(x,t)`$ form a segment, the connecting interval, joining the two pre-images of a point in $`X`$. When $`X`$ has a boundary, we obtain $`M_X`$ from $`(\widehat{X}\times [-1,1])/\mathbb{Z}_2`$ by contracting the connecting intervals over the boundary to single points, in such a way that $`M_X`$ remains a smooth manifold. (An equivalent construction, in which the boundary intervals are not contracted, was given in .)
It is readily checked that the boundary of the connecting manifold $`M_X`$ is indeed the double $`\widehat{X}`$. Moreover, $`M_X`$ connects the two pre-images of a bulk point by an interval in such a manner that the connecting intervals for distinct bulk points do not intersect. Let us list a few examples. For a disk, the connecting manifold is a solid three-ball, and the connecting intervals are all perpendicular to the equatorial plane. Similarly, when $`X`$ is the annulus, $`M_X`$ is a solid torus. For $`X`$ the cross-cap, the connecting manifold $`M_X`$ is best characterized by the fact that when glueing to its boundary a solid ball, we obtain $`S^3/\mathbb{Z}_2\cong \mathbb{RP}^3`$, which coincides with the group manifold of the Lie group SO$`(3)`$. For closed orientable surfaces $`X`$, the bundle $`M_X`$ is just the trivial bundle $`X\times [-1,1]`$; e.g. when $`X`$ is a sphere, then $`M_X`$ can be visualized as consisting of the points between two concentric spheres.
The next step is to specify a certain Wilson graph in $`M_X`$. The prescription, which is illustrated in figure 1 for the case of a disk with an arbitrary number of insertions in the bulk and on the boundary, is as follows. First, for every bulk insertion $`j`$, one joins the pre-images of the insertion point by a Wilson line running along the connecting interval. Next, one inserts one circular Wilson line parallel to each component of the boundary (a similar idea was presented in ) and joins every boundary insertion $`k`$ on the respective boundary component by a short Wilson line to the corresponding circular Wilson line. Moreover, the circular Wilson lines are required to run “close to the boundary”, in the sense that none of the connecting intervals of the bulk fields passes between the circular Wilson lines and the boundary of $`X`$.
So far we have only specified the geometric information for the conformal blocks. To proceed, we also must attach a primary label of the chiral conformal field theory to each segment of the Wilson graph. For the bulk points, this prescription is immediate, as we are dealing with the charge conjugation modular invariant. Similarly, we are naturally provided with the labels $`k`$ for the short Wilson lines that connect the boundary insertions with the circular Wilson lines. In addition, the segments of the circular Wilson lines should encode the boundary conditions of the corresponding boundary segments. Recalling that those boundary conditions which preserve all bulk symmetries can be labelled by the primary fields of the chiral conformal field theory , we attach such a primary label $`a`$ to every segment of the circular Wilson lines. Finally, we must consider the three-valent junctions on the circular Wilson lines. For each of them we choose an element $`\alpha `$ in the space of chiral couplings between the label $`k`$ for the boundary field and the two adjacent boundary conditions $`a,b`$. The dimension of this space of couplings is given by the fusion rules $`\mathrm{N}_{kb}^a`$ of the chiral theory. Indeed, it is known that boundary operators need an additional degeneracy label that takes its values in the space of chiral three-point blocks.
As a matter of fact, every segment of the Wilson graph should also be equipped with a framing – in other words, we should not just specify a graph, but a ribbon graph. Moreover, the boundary $`\widehat{X}`$ of $`M_X`$ must be endowed with additional structure, too. A careful discussion of these issues will be presented in . As a side remark, we mention that the circular Wilson lines already come with a natural thickening to ribbons, which is obtained by connecting them to the pre-image of the boundary of $`X`$ in $`\widehat{X}`$. (In figure 1 this is indicated by a shading.) Note that in the case of symmetry breaking boundary conditions the labels of boundary fields and boundary conditions can be more general than in the bulk. This can be implemented in our picture, as the corresponding part of the graph with the circular Wilson line is disconnected from the rest of the Wilson graph.
Using appropriate surgery on three-manifolds, we can prove that the correlation functions obtained by our prescription possess the correct factorization (or sewing) properties and that they are invariant under large diffeomorphisms or, in more technical terms, under the relative modular group . For a detailed account of these issues we refer to . Here we restrict ourselves to the analysis of a few situations of particular interest; we also show how to recover known results for the structure constants from our formulas.
In our approach the structure constants are obtained as the coefficients that appear in the expansion of the specific element in the space of conformal blocks that represents a correlation function in a standard basis for the conformal blocks. For two points on the boundary of a solid three-ball such a standard basis is given by a Wilson line (with trivial framing) connecting the two points, while for three points one takes a Mercedes star shaped junction of three Wilson lines. Our general strategy for computing the coefficients is then to glue another three-manifold to the connecting manifold so as to obtain the partition function or, in mathematical terms, the link invariant, for a closed three-manifold. The values of such link invariants are available in the literature, see e.g. .
Our first example is the correlator of two (bulk) fields on $`S^2`$, a closed and orientable surface. For the space of blocks to be non-zero, the two fields must be conjugate, i.e. carry labels $`j`$ and $`j^{\vee }`$, respectively. According to our prescription, the connecting manifold then consists of the filling between two concentric two-spheres, and the Wilson graph consists of two disjoint lines connecting the spheres, both labelled by $`j`$; this is depicted in figure 2. The space of conformal blocks for this situation is one-dimensional; its standard basis is displayed in figure 3. Thus the relevant three-manifold is given by the disconnected sum of two balls, each of which carries a single Wilson line. To both manifolds we glue two balls in which a Wilson line labelled by $`j`$ is running. In the case of the correlation function, the resulting manifold is a three-sphere with an unknot labelled by $`j`$, for which the value of the link invariant is $`S_{0,j}`$. ($`S`$ is the modular S-transformation matrix of the chiral conformal field theory, and the label $`0`$ refers to the vacuum primary field.) When applied to the manifold in figure 3, the glueing procedure produces two disjoint copies of $`S^3`$, each with an unknot labelled by $`j`$; the corresponding partition function is $`S_{0,j}^2`$. Comparing the two results we see that the two-point function on the sphere is expressed in terms of the standard basis as
$$C(S^2;j,j^{\vee })=S_{0,j}^{-1}\,B(S^2;j,j^{\vee })\otimes B(S^2;j,j^{\vee }).$$
(3)
In other words, the normalization of the bulk fields $`j`$ differs by a factor of $`(S_{0,j})^{1/2}`$ from the more conventional prescription where they are ‘canonically normalized to one’.
Next we discuss an example featuring an orientable surface with boundary; we compute the one-point amplitude for a bulk field $`j`$ on a disk $`D`$ with boundary condition $`a`$. Again the space of blocks is one-dimensional. Our task is then to compare the Wilson graph of figure 4 with the standard basis that is displayed in figure 5. (In the present context, this particular conformal block is often called an ‘Ishibashi state’). We now obtain the three-sphere by glueing with a single three-ball. When applied to the graph of figure 5, we get the unknot with label $`j`$ in $`S^3`$, for which the partition function is $`S_{0,j}`$. In the case of figure 4 we get a pair of linked Wilson lines with labels $`a`$ and $`j`$ in $`S^3`$; the value of the link invariant for this graph is $`S_{a,j}`$. Comparison thus shows that the correlation function is $`S_{a,j}/S_{0,j}`$ times the standard two-point block on the sphere,
$$C(D_a;j)=(S_{a,j}/S_{0,j})\,B(S^2;j,j^{\vee }).$$
(4)
Taking into account the normalization of the bulk fields as obtained in formula (3), we recover the known result that the correlator for a canonically normalized bulk field $`j`$ on a disk with boundary condition $`a`$ is $`S_{a,j}/\sqrt{S_{0,j}}`$ times the standard two-point block on the sphere. (This relation forms the basis of the so-called boundary state formalism .)
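As a concrete numerical aside, eq. (4) together with the normalization factor of eq. (3) can be tabulated once a chiral theory is chosen. The sketch below assumes an SU(2) WZW model at level $`k`$, whose modular S-matrix has the standard closed form $`S_{ab}=\sqrt{2/(k+2)}\,\mathrm{sin}(\pi (a+1)(b+1)/(k+2))`$; the choice of model and level is ours, made only for illustration.

```python
import numpy as np

# Illustrative chiral data: SU(2) WZW model at level k (not fixed by the text).
k = 2
n = k + 1
S = np.array([[np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (a + 1) * (b + 1) / (k + 2))
               for b in range(n)] for a in range(n)])

assert np.allclose(S @ S, np.eye(n))  # S^2 = 1 here: all SU(2) labels are self-conjugate

# Disk one-point coefficient S_{a,j}/sqrt(S_{0,j}) for a canonically normalized
# bulk field j with boundary condition a, combining eq. (4) with the
# normalization discussed after eq. (3):
for a in range(n):
    print([S[a, j] / np.sqrt(S[0, j]) for j in range(n)])
```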
As a third example, we study again a one-point correlator of a (bulk) field $`j`$, now on the cross-cap $`\mathbb{RP}^2`$, which does not have a boundary, but is non-orientable. The latter property forces us to be careful with the framing. The structure constants are obtained by comparing the correlator $`C(\mathbb{RP}^2;j)`$ with the ‘cross-cap state’ $`\psi _j`$. This state is defined in figure 6; it is similar to the basis element $`B(S^2;j,j^{\vee })`$ of the two-point blocks on $`S^2`$, but now the Wilson line in the three-ball has a non-trivial framing, and accordingly in figure 6 we have drawn a ribbon instead of a line. A priori we could twist the line either by $`+\pi `$, thereby obtaining some state $`\psi _j^+`$, or by $`-\pi `$ and obtain another state $`\psi _j^-`$. These two vectors differ by a factor of $`\mathrm{e}^{2\pi \mathrm{i}\mathrm{\Delta }_j}`$, with $`\mathrm{\Delta }_j`$ the conformal weight of $`j`$. Salomonically, we define the cross-cap state as
$$\psi _j:=\mathrm{e}^{\pi \mathrm{i}\mathrm{\Delta }_j}\psi _j^-=\mathrm{e}^{-\pi \mathrm{i}\mathrm{\Delta }_j}\psi _j^+.$$
(5)
Again the comparison of the correlator $`C(\mathbb{RP}^2;j)`$ with the standard basis $`\psi _j`$ is carried out by glueing a three-ball with a Wilson line to the ball of figure 6. In contrast to the previous cases, however, this line is given a non-trivial framing; choosing the framing in such a way that the twist of the cross-cap state is undone, glueing the ball to the cross-cap state yields $`S^3`$ with the unknot, with partition function $`Z(S^3;j)=S_{0,j}`$.
As already mentioned, glueing the three-ball to the connecting manifold of the cross-cap yields SO$`(3)`$. It is also known that SO$`(3)`$ can be obtained from $`S^3`$ by a surgery on the unknot with framing $`2`$. (Following how the framed graph is mapped by the surgery, one may visualize the situation as in figure 7.) Taking all framings properly into account, we obtain
$$Z(\mathrm{SO}(3);j)=T_0^{1/2}\underset{k}{\sum }S_{0,k}(T_k)^2S_{k,j}T_j^{1/2}=P_{0,j}$$
(6)
(with $`T_j\equiv \mathrm{e}^{2\pi \mathrm{i}(\mathrm{\Delta }_j-c/24)}`$) for the invariant of this three-manifold, where in the second equality we expressed the result through the matrix $`P:=T^{1/2}ST^2ST^{1/2}`$. We have thereby recovered the known formula
$$C(\mathbb{RP}^2;j)=(P_{0,j}/S_{0,j})\psi _j$$
(7)
for the one-point correlator on the cross-cap.
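The matrix $`P`$ is built entirely from chiral data, so eq. (7) can also be checked numerically. The sketch below again assumes the SU(2) level-$`k`$ data $`\mathrm{\Delta }_a=a(a+2)/(4(k+2))`$ and $`c=3k/(k+2)`$ (the same illustrative model as above; all concrete numbers are ours, not the letter's):

```python
import numpy as np

# Cross-cap coefficients P_{0,j}/S_{0,j} of eq. (7) for SU(2) level k = 2
# (self-contained copy of the illustrative S-matrix used above).
k = 2
n = k + 1
S = np.array([[np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (a + 1) * (b + 1) / (k + 2))
               for b in range(n)] for a in range(n)])
c = 3.0 * k / (k + 2)
Delta = np.array([a * (a + 2) / (4.0 * (k + 2)) for a in range(n)])
T_half = np.diag(np.exp(1j * np.pi * (Delta - c / 24)))  # T^(1/2)

P = T_half @ S @ (T_half ** 4) @ S @ T_half              # P = T^(1/2) S T^2 S T^(1/2)
print(np.round(P[0, :] / S[0, :], 6))                    # cross-cap one-point coefficients
```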
As a final example, consider three boundary fields $`i,j,k`$ on a disk. The relevant Wilson graph in the three-ball is of the type shown in figure 1, without any vertical Wilson lines along connecting intervals; it consists of a circular line (with segments labelled $`a,b,c`$) with three short Wilson lines (labelled $`i,j,k`$) attached to it. We must compare it to the standard basis for three-point blocks on the sphere, which is a Mercedes star shaped junction. This comparison can be made by performing a single fusing operation, followed by a contraction of the loop. For boundary fields, it is natural to define the correlation functions as linear forms on the degeneracy spaces for boundary operators. Denoting a basis of the degeneracy space for the boundary operator $`\psi _i^{ac}`$ by $`\{e_\alpha [ica^{\vee }]\}`$, normalized by the quantum trace condition $`\mathrm{tr}(e_\alpha [ica^{\vee }]e_\beta [i^{\vee }ac^{\vee }])=\delta _{\alpha ,\beta }`$, we find that
$$C(D_{a,b,c};i,j,k)\left(e_\alpha [ica^{\vee }]\otimes e_\beta [jab^{\vee }]\otimes e_\gamma [kbc^{\vee }]\right)=\underset{\kappa }{\sum }\frac{S_{0,0}}{S_{k,0}}\left\{\begin{array}{ccc}i& c& a\\ b^{\vee }& j^{\vee }& k^{\vee }\end{array}\right\}_{\gamma \kappa }^{\alpha \beta }e_\kappa [kji],$$
(8)
where the symbol $`\left\{\begin{array}{ccc}i& j& k\\ l& m& n\end{array}\right\}_{\gamma \delta }^{\alpha \beta }`$ denotes a fusing matrix (or quantum $`6j`$-symbol) .
One important conclusion we can draw from our results is that the construction of correlation functions from conformal blocks can be performed in a completely model-independent manner. All structure constants, for any arbitrary conformal field theory, can be expressed in terms of purely chiral data, such as conformal weights, the modular S-matrix, fusing matrices and the like. All specific properties of concrete models already enter at the chiral level. Physical quantities, such as the magnetization of an open spin chain or open string amplitudes in the background of D-branes, can be expressed in terms of the correlators studied in this letter.
# RM3-TH/99-8 Penguin amplitudes: charming contributions

Talk given by M. Ciuchini at KAON ‘99, June 21–26, 1999, University of Chicago, Chicago, IL, USA.
## Abstract
We briefly introduce the Wick-contraction parametrization of hadronic matrix elements and discuss some applications to $`B`$ and $`K`$ physics.
<sup>a</sup>INFN - Sezione di Roma III and Dipartimento di Fisica,
Università di Roma Tre, Via della Vasca Navale 84, I-00146 Roma, Italy
<sup>b</sup>INFN - Sezione di Roma and Dipartimento di Fisica,
Università di Roma “La Sapienza”, P.le A. Moro 2, I-00185 Roma, Italy
<sup>c</sup>Technische Universität München, Physik Department,
D-85784 Garching, Germany
In spite of the progress in non-perturbative techniques, the computation of hadronic matrix elements is still an open problem, particularly when the final state contains more than one meson. In this case, methods based on Euclidean field theory, such as QCD sum rules or lattice QCD, have serious difficulties in computing physical amplitudes . Besides the standard parametrization of hadronic matrix elements in terms of $`B`$ parameters, it is useful for phenomenological studies to introduce a different parametrization, based on the contractions of quark fields inside the matrix element. In the following, we briefly discuss the Wick-contraction parametrization introduced within the framework of non-leptonic $`B`$ decays in ref. .
To be concrete, let us illustrate how this parametrization works in a few examples taken from $`B`$ physics. Consider the Cabibbo-allowed decay $`B^+\to \overline{D}^0\pi ^+`$. Only two operators of the $`\mathrm{\Delta }B=1`$ effective weak Hamiltonian contribute to this amplitude, namely
$`\langle \overline{D}^0\pi ^+|Q_1^{\mathrm{\Delta }C=1}|B^+\rangle `$ $`=`$ $`\langle \overline{D}^0\pi ^+|\overline{b}\gamma _\mu (1-\gamma _5)d\,\overline{u}\gamma ^\mu (1-\gamma _5)c|B^+\rangle ,`$
$`\langle \overline{D}^0\pi ^+|Q_2^{\mathrm{\Delta }C=1}|B^+\rangle `$ $`=`$ $`\langle \overline{D}^0\pi ^+|\overline{b}\gamma _\mu (1-\gamma _5)c\,\overline{u}\gamma ^\mu (1-\gamma _5)d|B^+\rangle .`$ (1)
In this particularly simple example, the quark fields in the operators can be contracted only according to the emission topologies $`DE`$ and $`CE`$, shown in fig. 1. In the absence of a method for computing them, these contractions can be taken as complex parameters in phenomenological studies. The matrix elements can be rewritten as
$`\langle \overline{D}^0\pi ^+|Q_1^{\mathrm{\Delta }C=1}|B^+\rangle `$ $`=`$ $`CE_{LL}(d,u,c;B^+,\overline{D}^0,\pi ^+)`$
$`+DE_{LL}(c,u,d;B^+,\pi ^+,\overline{D}^0),`$
$`\langle \overline{D}^0\pi ^+|Q_2^{\mathrm{\Delta }C=1}|B^+\rangle `$ $`=`$ $`CE_{LL}(c,u,d;B^+,\overline{D}^0,\pi ^+)`$
$`+DE_{LL}(d,u,c;B^+,\pi ^+,\overline{D}^0).`$
The subscript $`LL`$ refers to the Dirac structure of the inserted operators. In general there are 14 different topologies . Of course, in order to be predictive, one needs to introduce relations among different parameters, given by dynamical assumptions based on flavour symmetries, chiral properties, heavy quark expansion, $`1/N`$ expansion, etc. This approach proves particularly useful for studying the $`\mathrm{\Delta }S=1`$ $`B`$ decays. For instance, let us consider the decay $`B^+\to K^+\pi ^0`$. Its amplitude receives contributions from all the operators of the $`\mathrm{\Delta }B=1`$, $`\mathrm{\Delta }S=1`$ effective Hamiltonian. We consider only the matrix elements of operators which are both proportional to the largest Wilson coefficients $`C_1`$ and $`C_2`$ and leading order in the Cabibbo angle. They read
$`\langle K^+\pi ^0|Q_1^c|B^+\rangle `$ $`=`$ $`\langle K^+\pi ^0|\overline{b}\gamma _\mu (1-\gamma _5)s\,\overline{c}\gamma ^\mu (1-\gamma _5)c|B^+\rangle `$
$`=`$ $`DP_{LL}(c,s,u;B^+,K^+,\pi ^0),`$
$`\langle K^+\pi ^0|Q_2^c|B^+\rangle `$ $`=`$ $`\langle K^+\pi ^0|\overline{b}\gamma _\mu (1-\gamma _5)c\,\overline{c}\gamma ^\mu (1-\gamma _5)s|B^+\rangle `$
$`=`$ $`CP_{LL}(c,s,u;B^+,K^+,\pi ^0).`$
The penguin contractions $`CP`$ and $`DP`$ are shown in fig. 1. We stress the difference between penguin operators, which we have neglected here, and penguin contractions, which can contribute to the matrix elements of any operator. This kind of non-perturbative contribution, called “charming penguins” in refs. , could dominate $`\mathrm{\Delta }S=1`$, $`\mathrm{\Delta }C=0`$ $`B`$ decays because other contributions are either proportional to the small Wilson coefficients $`C_3`$–$`C_{10}`$ or are doubly Cabibbo suppressed, as in the case of the factorizable emission topologies of $`Q_{1,2}^u`$. A detailed analysis of non-leptonic $`B`$ decays in this framework can be found in refs. . The presence of “charming penguin” contributions is likely to make the naïve factorization approach fail in describing this class of decays.
A different, but related, parametrization of hadronic matrix elements has been recently proposed in ref. . In this approach, the parameters are the suitable combinations of Wick contractions and Wilson coefficients which are renormalization scale and scheme independent. In this way, the relations among contractions enforced by the renormalization group equations are explicit. Besides, the phenomenological determination of the parameters does not depend on the choice of the Wilson coefficients. On the other hand, imposing relations among parameters based on dynamical assumptions may be more involved.
Matrix-element parametrizations are less useful when applied to $`K`$ decays, because there are few decay channels to fix the parameters and test the assumptions. (Indeed, in the case of $`K\to \pi \pi `$, there are only two complex amplitudes, corresponding to the $`\pi \pi (I=0,2)`$ final states.) In addition, chiral relations allow one to connect matrix elements with one pion to those with two or more pions in the final states, the former being calculable with lattice QCD. However, a reliable lattice determination of $`\langle \pi \pi |Q_6|K\rangle `$, the dominant contribution to $`\epsilon ^{\prime }/\epsilon `$ , is presently missing .
Let us apply the Wick-contraction parametrization to $`K\to \pi \pi `$ and verify whether there could be a connection between the longstanding problem of the $`\mathrm{\Delta }I=1/2`$ rule and a large value of the matrix element of $`Q_6`$, as suggested by the recent measurement of $`\epsilon ^{\prime }/\epsilon `$. In terms of the Buras-Silvestrini parameters , the amplitudes $`K\to \pi \pi `$ with definite isospin are
$`\mathrm{Re}A_2`$ $`\propto `$ $`{\displaystyle \frac{1}{3}}\left(E_1+E_2\right),`$
$`\mathrm{Re}A_0`$ $`\propto `$ $`\left({\displaystyle \frac{2}{3}}E_1+{\displaystyle \frac{1}{3}}E_2-A_2+P_1^{\prime }+P_3^{\prime }\right),`$ (4)
$`\mathrm{Im}A_0`$ $`\propto `$ $`\left(P_1+P_3\right),`$
where $`E_{1,2}`$ are the emission parameters, $`A_2`$ is built with annihilations and
$`P_1`$ $`\equiv `$ $`{\displaystyle \underset{i=2}{\overset{5}{\sum }}}\left(y_{2i-1}\langle Q_{2i-1}\rangle _{CE}+y_{2i}\langle Q_{2i}\rangle _{DE}\right)`$
$`+`$ $`{\displaystyle \underset{i=3}{\overset{10}{\sum }}}\left(y_i\langle Q_i\rangle _{CP}+y_i\langle Q_i\rangle _{DP}\right)+{\displaystyle \underset{i=2}{\overset{5}{\sum }}}\left(y_{2i-1}\langle Q_{2i-1}\rangle _{CA}+y_{2i}\langle Q_{2i}\rangle _{DA}\right),`$
$`P_3`$ $`\equiv `$ $`{\displaystyle \underset{i=2}{\overset{5}{\sum }}}\left(y_{2i-1}\langle Q_{2i-1}\rangle _{DA}+y_{2i}\langle Q_{2i}\rangle _{CA}\right)`$
$`+`$ $`{\displaystyle \underset{i=3}{\overset{10}{\sum }}}\left(y_i\langle Q_i\rangle _{CPA}+y_i\langle Q_i\rangle _{DPA}\right),`$
are the penguin-like parameters. The notation $`\langle Q_i\rangle _{CE}`$ refers to the connected emission with the insertion of the operator $`Q_i`$, etc.
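For orientation, the size of the $`\mathrm{\Delta }I=1/2`$ enhancement that these parameters must reproduce follows directly from the measured amplitudes; the numbers below are the standard approximate experimental values, quoted here only as an illustrative aside:

```python
# Measured K -> pi pi isospin amplitudes (standard approximate values, GeV):
re_A0 = 3.33e-7          # Re A_0, the I = 0 amplitude
re_A2 = 1.50e-8          # Re A_2, the I = 2 amplitude
print(re_A0 / re_A2)     # ~22: the "large ratio Re A_0 / Re A_2" discussed below
```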
Neglecting annihilations, as suggested by the large-$`N`$ counting or by CPS+chiral symmetries , we are left with four parameters and three measured quantities. It is unlikely that $`\mathrm{Re}A_0`$ is dominated by emissions, since the large ratio $`\mathrm{Re}A_0/\mathrm{Re}A_2`$ would require large cancellations between $`E_1`$ and $`E_2`$, see eq. (4). Therefore, in the most natural scenario, both $`\mathrm{Re}A_0`$ and $`\mathrm{Im}A_0`$ are dominated by penguin parameters. Notice that $`P_1`$ and $`P_1^{\prime }`$ are different, so that no parametric relation between $`\mathrm{Re}A_0`$ and $`\mathrm{Im}A_0`$ can be established. However, the following relation holds
$$P_1^{\prime }=z_1\langle Q_1\rangle _{DP}+z_2\langle Q_2\rangle _{CP}+P_1(y\to z),$$
(6)
where $`y_i`$ and $`z_i`$ are the Wilson coefficients of the 3-flavour effective weak Hamiltonian. Given this relation, we can envisage a dynamical mechanism to connect the two parameters. Let us assume that $`P_1^{\prime }\gg P_1(y\to z)`$ and that $`\langle Q_{1,2}\rangle _{DP}`$ and $`\langle Q_{5,6}\rangle _{DP}`$ share the same enhancement. (We found that these two assumptions are compatible.) Arguments may be provided to assume that
$$f=\langle Q_1\rangle _{DP}-\frac{1}{N_c}\langle Q_2\rangle _{CP}\simeq \langle Q_5\rangle _{DP}-\frac{1}{N_c}\langle Q_6\rangle _{CP}.$$
(7)
By using the experimental value of $`\mathrm{Re}A_0`$ and factorizing the emission contractions, we extract $`f`$, from which we derive
$$B_1=9,B_2=7.5,B_5=B_6=1.5.$$
(8)
It is interesting that the same mechanism enhances the $`B`$ parameters entering $`\mathrm{Re}A_0`$ by a factor of $`10`$ and those of $`\mathrm{Im}A_0`$ by a factor of $`2`$, as required by the theoretical calculations to explain the experimental data.
Alternatively, we could assume $`P_1^{\prime }\simeq P_1(y\to z)`$ in eq. (6), namely that everything comes from the penguin operators $`Q_5`$ and $`Q_6`$. This is the old suggestion of SVZ . Using perturbative coefficients, it is possible to show that this requires $`B_6\simeq 20`$ in order to fit $`(\mathrm{Re}A_0)_{\text{exp}}`$. Such a large value is excluded by the measurement of $`\epsilon ^{\prime }/\epsilon `$.
To summarize, a connection between the enhancement of $`\mathrm{Re}A_0`$ and a large value of $`\epsilon ^{\prime }/\epsilon `$ cannot be established without some assumption on the long-distance dynamics. We have presented a simple example, which assumes penguin-contraction dominance and shows the correct pattern of enhancements. In this respect, models could give some insight, but quantitative predictions may prove hard to produce. Hopefully, non-perturbative renormalization and new computing techniques will help overcome the problems which prevent the lattice computation of $`\mathrm{Re}A_0`$ and $`B_6`$ .
M.C. and L.S. thank A. Buras for useful discussions and excellent steaks, beer and particularly desserts. G.M. looks forward to acknowledging the same in the future.
## I Introduction
In the Minimal Standard Model (MSM) the neutrino is massless. This is because with only the left-handed neutrino $`\nu _L`$ we cannot build a Dirac mass term, and with only the Higgs doublet $`\varphi `$ we cannot build a Majorana mass term for $`\nu _L`$ after spontaneous symmetry breaking. However, strong indications for nonzero neutrino mass come from solar and atmospheric neutrino experiments and on the theoretical side there are several extensions of the MSM that lead to nonzero neutrino mass .
The simplest one is to add the right-handed neutrino $`\nu _R`$ in order to have the analogue of the quark $`u_R`$ in the leptonic sector. When this is done, it becomes possible to give a Dirac mass to the neutrino by means of the same mechanism used for the other fermions. Thus we expect this mass to be of the same order of magnitude as the other fermion masses. Moreover, it is now also possible to have a bare Majorana mass term for $`\nu _R`$, and the corresponding value of the mass is not constrained if the gauge group is the same as in the MSM. Therefore we have a new mass scale in the extended theory , and it is a key problem to understand whether this new scale is associated with new physics, that is a larger gauge group, and at what energy this possibly happens.
If the Dirac mass of the neutrino is of the same order as the other quark or lepton masses, the seesaw mechanism relates the smallness of the neutrino mass to a very large scale in the Majorana term. Of course we have three generations of fermions and we expect three light neutrinos and three heavy ones. We assume that the light neutrino mass spectrum is hierarchical as it happens for the quark and charged lepton mass spectra. We denote by $`m_1`$, $`m_2`$, $`m_3`$ the Dirac masses, by $`M_1`$, $`M_2`$, $`M_3`$ the heavy neutrino masses, and by $`m_{\nu _1}`$, $`m_{\nu _2}`$, $`m_{\nu _3}`$ the light neutrino masses. From the solar and atmospheric neutrino experiments we can infer , with some uncertainty, the values of $`m_{\nu _2}`$ and $`m_{\nu _3}`$, and of the neutrino mixing matrix $`U`$,
$$\nu _\alpha =U_{\alpha i}\nu _i,$$
(1)
where $`U`$ is unitary and connects the mass eigenstates $`\nu _i`$ ($`i=1,2,3`$) to the weak eigenstates $`\nu _\alpha `$ ($`\alpha =e,\mu ,\tau `$).
The aim of this paper is to calculate the heavy neutrino masses under simple hypotheses on the Dirac masses of neutrinos and on the matrix
$$V_D=V_\nu ^{\dagger }V_e$$
(2)
which is the analogue of the CKM matrix in the Dirac leptonic sector. Namely, we assume $`V_D\simeq V_{CKM}`$, and $`M_\nu \simeq M_u`$ or $`M_\nu \simeq M_e`$. For $`m_{\nu _1}`$ we allow a variation of three orders in the hierarchical regime. We are mostly interested in $`M_3`$ (the mass of the heaviest right-handed neutrino), which is related to the new mass scale of the theory, and in $`M_1`$ (the mass of the lightest right-handed neutrino), which has some importance in baryogenesis $`via`$ leptogenesis . In Grand Unified Theories (GUTs) $`M_3`$ is associated with unification or intermediate scales , thus we match our results with these scales. General considerations on heavy neutrino masses in the seesaw mechanism can be found in ref. . In the present paper we give a numerical analysis based on the hypotheses above and the experimental data on solar and atmospheric neutrinos.
## II Theory
Let us briefly explain the effect of the seesaw mechanism on the leptonic mixing. The part of the Lagrangian we have to consider is
$$\overline{e}_LM_ee_R+\overline{\nu }_LM_\nu \nu _R+g\overline{\nu }_Le_LW+\overline{\nu }_L^cM_R^{\prime }\nu _R$$
(3)
where $`M_e`$ and $`M_\nu `$ are the Dirac mass matrices of charged leptons and neutrinos respectively, and $`M_R^{\prime }`$ is the Majorana mass matrix of right-handed neutrinos. If we assume the elements of $`M_R^{\prime }`$ much greater than those of $`M_\nu `$, the seesaw mechanism leads to the effective Lagrangian
$$\overline{e}_LM_ee_R+\overline{\nu }_LM_L^{\prime }\nu _R^c+g\overline{\nu }_Le_LW+\overline{\nu }_L^cM_R^{\prime }\nu _R$$
(4)
where
$$M_L^{\prime }=M_\nu M_R^{\prime -1}M_\nu ^T$$
(5)
is the Majorana mass matrix of left-handed neutrinos (in this context the left-handed neutrinos are called light neutrinos and the right-handed neutrinos are called heavy neutrinos). Diagonalization of $`M_e`$, $`M_L^{}`$ gives (renaming the fermion fields)
$$\overline{e}_LD_ee_R+\overline{\nu }_LD_L\nu _R^c+g\overline{\nu }_LV_{lep}e_LW+\overline{\nu }_L^cM_R^{\prime }\nu _R.$$
(6)
Of course we can also diagonalize $`M_R^{\prime }`$ without changing other parts of this Lagrangian. The unitary matrix $`V_{lep}`$ describes the weak interactions of light neutrinos with charged leptons. The following steps clarify the structure of $`V_{lep}`$.
If in eqn.(3) we first diagonalize $`M_e`$ and $`M_\nu `$, obtaining
$$\overline{e}_LD_ee_R+\overline{\nu }_LD_\nu \nu _R+g\overline{\nu }_LV_De_LW+\overline{\nu }_L^cM_R\nu _R,$$
(7)
then the seesaw mechanism gives
$$\overline{e}_LD_ee_R+\overline{\nu }_LM_L\nu _R^c+g\overline{\nu }_LV_De_LW+\overline{\nu }_L^cM_R\nu _R$$
(8)
with
$$M_L=D_\nu M_R^{-1}D_\nu .$$
(9)
Then, we diagonalize also $`M_L`$,
$$\overline{e}_LD_ee_R+\overline{\nu }_LD_L\nu _R^c+g\overline{\nu }_LV_sV_De_LW+\overline{\nu }_L^cM_R\nu _R,$$
(10)
and, comparing with eqn.(6), we recognize that
$$V_{lep}=V_sV_D$$
(11)
where
$$V_sM_LV_s^T=D_L.$$
(12)
We also understand that
$$V_{lep}=U^{\dagger },$$
(13)
and point out that $`M_R^{\prime }`$ ($`M_L^{\prime }`$) differs from $`M_R`$ ($`M_L`$) by a unitary transformation, hence they have the same eigenvalues.
In the Lagrangian (3) it is possible to diagonalize $`M_e`$ or $`M_R`$ without changing the observable quantities. The same is not true for $`M_\nu `$. Moreover, $`M_e`$ and $`M_R`$ can be diagonalized simultaneously. In the Lagrangian (4) the following matrices can be diagonalized: $`M_e`$, $`M_L`$, $`M_R`$, both $`M_L`$ and $`M_R`$, both $`M_e`$ and $`M_R`$. When we set $`M_e=D_e`$ in eqn.(4) we have $`M_L=UD_LU^T`$, and when we set $`M_L=D_L`$ we get $`M_e=U^{\dagger }D_eU`$ if $`M_e`$ is chosen hermitian or $`M_eM_e^{\dagger }=U^{\dagger }D_e^2U`$ if $`M_e`$ contains three zeros .
## III Experiment
Experimental information on neutrino masses and mixings is increasing rapidly. To be definite we refer to , where the matrix $`U`$ is written as
$$U=\left(\begin{array}{ccc}c_{12}& s_{12}& 0\\ -s_{12}c_{23}& c_{12}c_{23}& s_{23}\\ s_{12}s_{23}& -c_{12}s_{23}& c_{23}\end{array}\right).$$
(14)
There is a zero in position 1-3, although it is only constrained to be much less than one . The experimental data on oscillation of atmospheric and solar neutrinos lead to three possible numerical forms for $`U`$, corresponding to the three solutions of the solar neutrino problem, namely small mixing and large mixing MSW (smMSW and lmMSW) , and vacuum oscillations (VO) . Choosing the central values of neutrino masses and of $`s_{12}`$ and $`s_{23}`$ from ref. , we always have $`m_{\nu _3}=5.7\times 10^{-11}`$ GeV, and
$$U=\left(\begin{array}{ccc}1& 0.04& 0\\ -0.032& 0.80& 0.60\\ 0.024& -0.60& 0.80\end{array}\right)\equiv U_1,$$
(15)
$`m_{\nu _2}=2.8\times 10^{-12}`$ GeV for small mixing MSW,
$$U=\left(\begin{array}{ccc}0.91& 0.42& 0\\ -0.336& 0.726& 0.60\\ 0.252& -0.544& 0.80\end{array}\right)\equiv U_2,$$
(16)
$`m_{\nu _2}=4.4\times 10^{-12}`$ GeV for large mixing MSW, and
$$U=\left(\begin{array}{ccc}0.80& 0.60& 0\\ -0.474& 0.632& 0.61\\ 0.366& -0.488& 0.79\end{array}\right)\equiv U_3,$$
(17)
$`m_{\nu _2}=9.2\times 10^{-15}`$ GeV for vacuum oscillations. We also consider maximal and bimaximal mixing as limiting cases of $`U_1`$ and $`U_3`$, respectively:
$$U_m=\left(\begin{array}{ccc}1& 0& 0\\ 0& \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ 0& -\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{array}\right),$$
(18)
$$U_b=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& 0\\ -\frac{1}{2}& \frac{1}{2}& \frac{1}{\sqrt{2}}\\ \frac{1}{2}& -\frac{1}{2}& \frac{1}{\sqrt{2}}\end{array}\right).$$
(19)
Experimental data on oscillations only give $`\mathrm{\Delta }m_{32}^2`$, $`\mathrm{\Delta }m_{21}^2`$, from which, for hierarchical light neutrino masses, we obtain the values of $`m_{\nu _3}`$, $`m_{\nu _2}`$, because $`\mathrm{\Delta }m_{32}^2\simeq m_{\nu _3}^2`$, $`\mathrm{\Delta }m_{21}^2\simeq m_{\nu _2}^2`$. For $`m_{\nu _1}`$ we will assume $`m_{\nu _1}\le 10^{-1}m_{\nu _2}`$. From the unitary matrices written above we see that leptonic mixing between the second and third family is large, while the mixing between the first and second family may be large or small. It is well-known that in the quark sector all mixings are small.
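The conversion from the measured mass-squared differences to the masses quoted above is elementary; as a sketch (the input $`\mathrm{\Delta }m^2`$ values are representative central values, chosen here only for illustration):

```python
import math

# Hierarchical limit: m_nu3 ~ sqrt(dm2_atm), m_nu2 ~ sqrt(dm2_sol).
dm2_atm = 3.2e-3   # eV^2, atmospheric (illustrative central value)
dm2_sol = 7.8e-6   # eV^2, solar, small-mixing MSW (illustrative)
m_nu3 = math.sqrt(dm2_atm) * 1e-9   # eV -> GeV
m_nu2 = math.sqrt(dm2_sol) * 1e-9
print(m_nu3, m_nu2)                 # ~5.7e-11 GeV and ~2.8e-12 GeV, as quoted
```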
## IV Calculation
Our determination of $`M_1,M_2,M_3`$ is based on the following assumptions. Looking at eqn.(11) we see that $`V_s`$ could be responsible for the enhancement of lepton mixing . Therefore, it is suggestive to assume that the matrix $`V_D`$ has just the form of the CKM matrix
$$V_D=\left(\begin{array}{ccc}1-\frac{1}{2}\lambda ^2& \lambda & \lambda ^4\\ -\lambda & 1-\frac{1}{2}\lambda ^2& \lambda ^2\\ \lambda ^3-\lambda ^4& -\lambda ^2& 1\end{array}\right).$$
(20)
This is similar to the form which might originate from GUTs
$$V_D=\left(\begin{array}{ccc}1-\frac{1}{18}\lambda ^2& \frac{1}{3}\lambda & \lambda ^4\\ -\frac{1}{3}\lambda & 1-\frac{1}{18}\lambda ^2& \lambda ^2\\ \frac{1}{3}\lambda ^3-\lambda ^4& -\lambda ^2& 1\end{array}\right),$$
(21)
the difference being in the element $`V_{D12}`$; this is in turn similar to the $`V_D`$ which results from the analogy of ref. , where $`V_{D23}\simeq 2\lambda ^2`$. We also assume $`M_\nu \simeq M_u`$ or $`M_\nu \simeq M_e`$. In particular,
$$M_\nu =D_\nu =\frac{m_\tau }{m_b}D_u,$$
(22)
where the factor is due to running , or
$$M_\nu =D_\nu =D_e.$$
(23)
We use quark (and charged lepton) masses at the scale $`M_Z`$ as in ref. . It is important to notice that the values $`M_1,M_2,M_3`$ do not depend on the assumption $`M_\nu =D_\nu `$, because $`M_R`$ undergoes a unitary transformation. Also $`V_D`$ does not change if we rotate $`e_L`$ as $`\nu _L`$. In fact, one can always diagonalize $`M_u`$ without changing $`V_{CKM}`$ , and $`M_\nu `$ without changing both $`V_D`$ and $`M_1,M_2,M_3`$. We vary $`m_{\nu _1}`$ by three orders down from $`m_{\nu _1}=10^{-1}m_{\nu _2}`$ to $`m_{\nu _1}=10^{-4}m_{\nu _2}`$. In the tables we report our results (notation: $`x\mathrm{e}y\equiv x\times 10^y`$, in GeV). They are obtained in the following way. From eqn.(11) we have
$$V_s=V_{lep}V_D^{\dagger };$$
(24)
using eqn.(12) we get
$$M_L=V_s^TD_LV_s,$$
(25)
and from eqn.(9) we obtain
$$M_R=D_\nu M_L^{-1}D_\nu $$
(26)
and then its eigenvalues. We see that in the case $`M_\nu \simeq M_u`$ the VO solution leads to a scale for $`M_3`$ well above the unification scale (around the Planck scale), while the MSW solutions are consistent with this scale. $`M_2`$ is around the intermediate scale. In the case $`M_\nu \simeq M_e`$ the VO solution gives $`M_3`$ near the unification scale, while the MSW solutions bring it near an intermediate scale. Also we notice that $`M_1`$ may be of the order of $`10^6`$ GeV, a relatively small value . The huge value of $`M_3`$ in the VO case is due mainly to the lower values of $`m_{\nu _1}`$, $`m_{\nu _2}`$ with respect to the MSW case. There is no substantial change if the factor $`m_\tau /m_b`$ is erased from eqn.(22): the numerical results are rescaled by the value 2.8. We have introduced such a factor because the relation $`M_u\simeq M_\nu `$ is typical of GUTs, where it is true at the unification scale, while the factor $`m_\tau /m_b`$ appears at low energy due to running. It can be checked that there is no essential difference between the results obtained by eqn.(20) ($`V_D\simeq V_{CKM}`$) and those obtained by eqn.(21) ($`V_D\simeq V_{GUT}`$). In fact, the numbers differ by no more than one order of magnitude. Moreover, comparing values in the two MSW cases, the effect of changes in $`m_{\nu _1}`$ is apparent: in the small mixing solution $`M_3`$ varies by one order, in the large mixing solution by three orders. If we want $`M_3`$ not to exceed the unification scale, then $`m_{\nu _1}`$ cannot be much smaller than $`m_{\nu _2}`$ in the large mixing MSW case. Maximal and bimaximal mixings confirm the results obtained for small mixing MSW and vacuum oscillations, respectively.
| | smMSW | lmMSW | VO | max | bimax |
| --- | --- | --- | --- | --- | --- |
| $`M_1`$ | 1.3e6 | 6.8e5 | 2.5e6 | 1.1e6 | 4.0e6 |
| | 1.6e6 | 2.3e6 | 3.1e9 | 1.2e6 | 3.7e9 |
| $`M_2`$ | 3.1e10 | 1.5e10 | 1.7e12 | 3.0e10 | 1.4e12 |
| | 9.4e11 | 2.0e10 | 2.0e12 | 1.1e13 | 1.5e12 |
| $`M_3`$ | 1.7e15 | 2.8e15 | 2.2e18 | 2.2e15 | 3.7e18 |
| | 4.7e16 | 1.9e18 | 1.9e21 | 5.0e15 | 3.3e21 |
Table 1: $`V_D\simeq V_{CKM},M_\nu \simeq M_u`$
| | smMSW | lmMSW | VO | max | bimax |
| --- | --- | --- | --- | --- | --- |
| $`M_1`$ | 1.7e5 | 8.6e4 | 2.2e5 | 1.4e5 | 1.7e5 |
| | 2.0e5 | 9.7e4 | 1.3e7 | 1.6e5 | 1.4e7 |
| $`M_2`$ | 2.1e9 | 1.0e9 | 1.2e11 | 2.0e9 | 9.3e10 |
| | 5.4e10 | 1.3e9 | 1.4e11 | 3.8e11 | 1.0e11 |
| $`M_3`$ | 5.0e11 | 7.9e11 | 6.1e14 | 6.2e11 | 1.0e15 |
| | 1.5e13 | 5.3e14 | 5.2e17 | 2.8e12 | 9.3e17 |
Table 2: $`V_D\simeq V_{CKM},M_\nu \simeq M_e`$
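A minimal numerical sketch of the recipe of eqs. (24)–(26) is given below. All inputs (quark masses at $`M_Z`$, $`\lambda =0.22`$, the smMSW matrix $`U_1`$, $`m_{\nu _1}=10^{-1}m_{\nu _2}`$) are illustrative placeholders rather than the exact input values used for the tables, so the output should only be expected to reproduce the orders of magnitude of the first column of table 1:

```python
import numpy as np

lam = 0.22
V_D = np.array([[1 - lam**2 / 2,   lam,             lam**4],
                [-lam,             1 - lam**2 / 2,  lam**2],
                [lam**3 - lam**4,  -lam**2,         1.0   ]])   # eq. (20)
U1 = np.array([[ 1.0,    0.04,  0.0],
               [-0.032,  0.80,  0.60],
               [ 0.024, -0.60,  0.80]])                          # eq. (15)
V_lep = U1.T                                   # eq. (13) for a real U
D_nu = (1.75 / 2.9) * np.diag([2.2e-3, 0.65, 175.0])             # eq. (22), GeV
m_nu3, m_nu2 = 5.7e-11, 2.8e-12                                  # GeV
D_L = np.diag([0.1 * m_nu2, m_nu2, m_nu3])     # m_nu1 = 1e-1 m_nu2

V_s = V_lep @ V_D.T                            # eq. (24), all matrices real
M_L = V_s.T @ D_L @ V_s                        # eq. (25)
M_R = D_nu @ np.linalg.inv(M_L) @ D_nu         # eq. (26)
print(np.sort(np.abs(np.linalg.eigvals(M_R)))) # M_1, M_2, M_3 in GeV
```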
As a matter of fact $`V_D\simeq V_{CKM}`$ and $`V_D\simeq V_{GUT}`$ are not so different from $`V_D\simeq I`$. In such a case $`V_{lep}\simeq V_s`$. The opposite case is $`V_{lep}\simeq V_D`$ and then $`V_s\simeq I`$, that is, when $`M_\nu =D_\nu `$ also $`M_R=D_R`$. From the seesaw mechanism we obtain
$$M_i=\frac{m_i^2}{m_{\nu _i}},$$
(27)
which gives $`M_3\sim 10^{14}`$, $`M_2\sim 10^{10}`$, $`M_1\sim 10^6÷10^9`$ (MSW), $`M_2\sim 10^{13}`$, $`M_1\sim 10^9÷10^{12}`$ (VO) GeV in the case $`M_\nu \simeq M_u`$; $`M_3\sim 10^{10}`$, $`M_2\sim 10^9`$, $`M_1\sim 10^5÷10^8`$ (MSW), $`M_2\sim 10^{12}`$, $`M_1\sim 10^8÷10^{11}`$ (VO) GeV in the case $`M_\nu \simeq M_e`$. In the VO solution with $`M_\nu \simeq M_e`$, $`M_2`$ exceeds $`M_3`$, and also $`M_1`$ can do it.
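A one-line check of eq. (27) for the heaviest state reproduces the quoted scale (with illustrative inputs $`m_t(M_Z)\sim 170`$ GeV and $`m_\tau /m_b\sim 0.6`$):

```python
m3 = 0.6 * 170.0          # third-generation Dirac mass (m_tau/m_b) m_t, GeV
m_nu3 = 5.7e-11           # GeV
print(m3**2 / m_nu3)      # ~1.8e14 GeV, i.e. M_3 ~ 1e14 as quoted above
```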
Let us now briefly discuss the sensitivity to input mixing angles of the results reported in the first three columns of tables 1,2. By allowing $`s_{12}`$ and $`s_{23}`$ to vary inside the ranges reported in ref. we have found that the numerical values of right-handed neutrino masses change by no more than one order of magnitude. The same happens if $`U_{13}`$ is different from zero up to 0.1. Therefore the above considerations on the physical scales do not change.
It is also interesting to match our results, obtained by a hierarchical spectrum, with the degenerate spectrum and the democratic mixing
$$U_d=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}}& 0\\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& -\frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}\end{array}\right).$$
(28)
Assuming as light neutrino mass $`m_0=2`$ eV, relevant for hot dark matter, and the democratic mixing $`U=U_d`$, we obtain $`M_1=9.2\times 10^2`$, $`M_2=7.7\times 10^7`$, $`M_3=5.5\times 10^{12}`$ GeV for $`M_\nu \simeq M_u`$ and $`M_1=1.2\times 10^2`$, $`M_2=5.3\times 10^6`$, $`M_3=1.5\times 10^9`$ GeV for $`M_\nu \simeq M_e`$, that is $`M_3`$ at the intermediate scale and $`M_1`$ even at the electroweak scale. From eqns.(25),(26) we see that in the case of degenerate masses $`M_R`$ is proportional to $`1/m_0`$ and one can easily obtain $`M_3`$ when $`m_0`$ is lowered.
## V Conclusion
We have calculated the heavy neutrino masses in a seesaw framework, under simple hypotheses on the Dirac sector, and using experimental limits on light neutrino masses and mixings. The results have been matched with intermediate and unification scales. A key result is that the large mixing MSW solution can be reconciled with GUTs. The analysis can be improved when more precise data become available. Also the effect of phases should be considered . There are several recent studies about the seesaw mechanism , based on various forms of mass matrices; a nice review is in ref. . Instead, in this paper, we work on the matrix $`V_D`$ and on $`D_\nu `$, that is, the leptonic quantities which correspond, in the quark sector, to the observable quantities.
# Disorder-driven non-Fermi-liquid behavior in CeRhRuSi<sub>2</sub>
## I Introduction
A breakdown of the standard Landau Fermi-liquid theory is signaled in certain heavy-fermion metals by anomalies in thermodynamic, transport, and optical properties at low temperatures and frequencies. Although exceptions exist, the anomalous properties are usually as follows: the Sommerfeld specific heat coefficient $`\gamma (T)=C(T)/T`$ diverges as $`\mathrm{ln}T`$; the magnetic susceptibility $`\chi (T)`$ varies as $`1-aT^{1/2}`$ or diverges as $`\mathrm{ln}T`$ or a weak inverse power of temperature; the electrical resistivity departs linearly with temperature from its $`T=0`$ value; and optical conductivity experiments in the non-Fermi-liquid (NFL) alloy UCu<sub>3.5</sub>Pd<sub>1.5</sub> indicate a transport relaxation rate which varies linearly with frequency at low temperatures.
Attempts to understand this NFL behavior invoke one or more characteristics common to most such systems, viz., the possibility of an unconventional Kondo effect, proximity to a quantum critical point (QCP), structural disorder, or a combination of the latter two. Recent experimental work has stressed the role that disorder can play. In particular, the observed inhomogeneous broadening of copper nuclear magnetic resonance (NMR) lines in the NFL alloys UCu<sub>5-x</sub>Pd<sub>x</sub>, $`x=1.0`$ and 1.5, could be described by a disorder-induced spatial distribution of local susceptibilities $`\chi _j`$. Such a susceptibility distribution originates in the interplay between structural disorder and many-body effects intrinsic to $`f`$-electron systems, such as the Kondo effect and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction between the magnetic moments.
It is clear that if an interaction between moments is present the term “non-Fermi liquid” must be used with care, since strictly speaking Fermi-liquid theory deals only with the lowest-lying excitations of a system of interacting fermions and hence is correct only in the zero-temperature limit in the absence of a phase transition. Any kind of magnetic phase transition or glassy spin freezing at nonzero temperature invalidates this condition, and NFL behavior is no longer a surprise. Following convention in this field, we nevertheless continue to designate as NFL systems materials in which the properties mentioned above are found at intermediate temperatures, with the proviso “nearly NFL” applied if there is evidence for a crossover to a new state at low temperatures.
In this paper we report measurements of the magnetic susceptibility and <sup>29</sup>Si NMR linewidth in the nearly-NFL heavy-fermion alloy CeRhRuSi<sub>2</sub>, and consider the data in the light of two such disorder-driven scenarios: (1) the so-called “Kondo disorder” picture of Bernal et al. and Miranda, Dobrosavljević, and Kotliar, in which the RKKY interaction is disregarded and the local susceptibility distribution is associated with a corresponding distribution of single-ion Kondo temperatures $`T_K`$, and (2) a recent model by Castro Neto and co-workers based on the existence of quantum Griffiths singularities in a disordered system with RKKY couplings which is close to a QCP. In the latter case Kondo and RKKY phenomena compete with each other in the random environment, and the susceptibility is associated with fluctuations of magnetic clusters. Both of these models have been shown to account for the observed susceptibility and NMR broadening in UCu<sub>5-x</sub>Pd<sub>x</sub>, and the Kondo-disorder model is in agreement with the transport and optical data for UCu<sub>5-x</sub>Pd<sub>x</sub> alloys. The Griffiths-phase picture describes the thermodynamic properties of a number of NFL materials.
It should be pointed out that NFL mechanisms based on unconventional Kondo effects have to date been treated only for isolated $`f`$ ions. It would be useful to extend such pictures, first to the case of ordered $`f`$-ion-based compounds and then with the inclusion of disorder in analogy with the Griffiths-phase model.
Similarities and differences between the Kondo-disorder and Griffiths-phase theories of NFL behavior and the necessity for their modification at low temperatures are discussed in the light of our experimental data. Fits of the theories to the temperature dependence of the bulk susceptibility determine the parameters of each model, each of which then predicts the temperature dependence and size of the NMR linewidth with no further adjustable parameters. The measured linewidths are in good agreement with both models. This corroborates the conclusion of Graf et al., based on susceptibility and specific heat measurements, that disorder-driven NFL behavior is important in this system.
The isostructural alloy system Ce(Ru<sub>1-x</sub>Rh<sub>x</sub>)<sub>2</sub>Si<sub>2</sub> exhibits a variety of complex behavior associated with the Kondo effect and magnetic interactions. The phase diagram of this system is shown in Fig. 1.
The end compound CeRu<sub>2</sub>Si<sub>2</sub> is a heavy-fermion metal which shows no long-range magnetic order down to 20 mK, whereas CeRh<sub>2</sub>Si<sub>2</sub> is an antiferromagnet with Néel temperature $`T_N=36`$ K. Antiferromagnetism is found for low ($`0.1x0.3`$) and high ($`0.5x1`$) rhodium doping. Neutron diffraction experiments in the low-doping range show that the antiferromagnetism is incommensurate (i.e., the $`4f`$ electrons are itinerant), but becomes commensurate, indicative of local moments, for $`x0.5`$. For $`x0.4`$ the specific heat indicates a characteristic energy scale, usually associated with the average Kondo temperature $`T_K`$ of the material, which is higher than $`T_N`$ in this composition range.
The concentration $`x=0.5`$ is near the critical value for suppression of $`T_N`$ to zero. In spite of its stoichiometric composition CeRhRuSi<sub>2</sub> is a disordered alloy, as the Rh and Ru atoms occupy the same crystallographic site and there is no evidence of superlattice formation. For $`x=0.5`$ Graf et al. found the weak divergences characteristic of NFL behavior in $`\gamma (T)`$ and $`\chi (T)`$ for temperatures between 1 and 30 K. There was no evidence for magnetic ordering, and Graf et al. concluded that NFL phenomena in CeRhRuSi<sub>2</sub> are driven by structural disorder. To our knowledge the region $`0.3x0.5`$ has not been examined for NFL behavior.
Below 1 K $`\gamma (T)`$ was seen to saturate, suggesting that CeRhRuSi<sub>2</sub> exhibits a crossover from a region of anomalous magnetic response to a Fermi-liquid ground state as the temperature is lowered. It should be noted, however, that recent specific heat and ac susceptibility studies of UCu<sub>5-x</sub>Pd<sub>x</sub> suggest that saturation of $`\gamma (T)`$ in this system may be associated with magnetic ordering, possibly of a spin-glass nature or in the form of superparamagnetic clusters. More information on the behavior of NFL systems at low temperatures is clearly needed.
The bulk susceptibility agrees well with the Kondo-disorder model but is overestimated by the Griffiths-phase picture at low temperatures. This is not surprising, since the possibility of nearly-NFL behavior is built into the Kondo-disorder model, whereas the Griffiths-phase theory in its present form neglects effects, such as transverse Kondo fluctuations or residual interactions between clusters, which could modify or remove the Griffiths singularities which cause the NFL behavior.
The experimental data obtained to date do not discriminate clearly between the two pictures, since the distribution function $`P(\chi )`$ which describes the inhomogeneous distribution of susceptibilities contains no qualitative feature sensitive to the existence of RKKY-coupled clusters. We speculate that dynamical properties such as nuclear spin-lattice relaxation rates may be more sensitive to low-lying excitations associated with spin-spin couplings, particularly at low temperatures, and suggest that further measurements of spin relaxation rates be carried out.
Sec. II of the paper describes our measurements of bulk magnetic susceptibility and <sup>29</sup>Si NMR spectra in CeRhRuSi<sub>2</sub>. The relation between inhomogeneity in the susceptibility and the NMR linewidth is reviewed in Sec. III. Sec. IV treats the single-ion Kondo-disorder and Griffiths-phase disorder-driven NFL mechanisms. Analysis of the susceptibility and NMR data is discussed in Sec. V, and Sec. VI gives our conclusions.
## II Experiment
Measurements of bulk susceptibility and <sup>29</sup>Si NMR spectra were carried out on unaligned and field-aligned powder samples of CeRhRuSi<sub>2</sub> at a frequency of 20.220 MHz and temperatures in the range 4.2–230 K. Field-swept NMR spectra were obtained using pulsed-NMR spin-echo signals and the frequency-shifted-and-summed Fourier-transform processing technique described by Clark et al. The solid curve in Fig. 2 shows a <sup>29</sup>Si NMR spectrum from an unaligned powder sample at 4.2 K.
We attempted to fit this spectrum to an anisotropic powder pattern convoluted with a Gaussian broadening function, but found a poor fit if the width of the broadening is assumed independent of crystallite orientation. The low-field side of the spectrum, which is due to those crystallites with $`c`$ axes parallel to the applied field $`𝐇_0`$ ($`H_0\simeq 24`$ kOe), is more strongly broadened than the high-field side. Fits to the low-field region of the spectrum yielded a crude estimate of the extra broadening, which becomes large at low temperatures as predicted by the disorder-driven NFL mechanisms discussed above.
The magnetic susceptibility $`\chi (T)=M(H,T)/H`$, where $`M(H,T)`$ is the bulk magnetization of the system, is strongly anisotropic in the Ce(Ru<sub>1-x</sub>Rh<sub>x</sub>)<sub>2</sub>Si<sub>2</sub> series, with the $`c`$-axis susceptibility $`\chi _c(T)`$ ($`𝐇_0\parallel 𝐜`$) much larger than the $`ab`$-plane susceptibility $`\chi _{ab}(T)`$ ($`𝐇_0\perp 𝐜`$). This suggests that the extra broadening observed for $`𝐇_0\parallel 𝐜`$ might be due to disorder in the susceptibility similar to that found in UCu<sub>5-x</sub>Pd<sub>x</sub>, and we were motivated to measure the linewidth in a field-aligned powder sample. The powder was mixed with epoxy, which was allowed to harden in a magnetic field of 60 kOe. The torque on the anisotropic moment aligned the $`c`$ axis of each single-crystal powder grain in the direction of the applied field before the epoxy hardened.
The anisotropic susceptibility measured in this field-aligned powder sample is shown in Fig. 3.
These data agree well with measurements on a small single crystal of CeRhRuSi<sub>2</sub> (not shown). A strong Curie-Weiss-like temperature dependence is found for $`\chi _c(T)`$, whereas $`\chi _{ab}(T)`$ is small and only weakly temperature dependent. The curves give fits of $`\chi _c(T)`$ to the Kondo-disorder and Griffiths-phase models as discussed below in Sects. IV A and IV B, respectively.
Figure 2 also gives the <sup>29</sup>Si NMR spectra measured in the field-aligned powder sample for $`𝐇_0\parallel 𝐜`$ and $`𝐇_0\perp 𝐜`$. It can be seen that, as expected, the line is wider for $`𝐇_0\parallel 𝐜`$ than for $`𝐇_0\perp 𝐜`$. The small shoulder on the high-field side of the $`𝐇_0\parallel 𝐜`$ line indicates that the alignment of crystallites in this sample is not perfect. The large linewidth anisotropy implies, however, that neither a small misalignment of the crystallites in the sample nor a slight misalignment of the sample with respect to $`𝐇_0`$ affects linewidth measurements appreciably for $`𝐇_0\parallel 𝐜`$. <sup>29</sup>Si NMR spectra from a more completely aligned but smaller sample (not shown) confirmed this expectation. The misalignment does, however, preclude any attempt to obtain information about the shape of $`P(\mathrm{\Delta })`$ from the shape of the NMR line.
## III Susceptibility inhomogeneity and NMR linewidth
Since the NMR frequency shift of a given nucleus is determined by the interaction between its magnetic moment and those of the surrounding electrons, any spatial variation of the electronic magnetic susceptibility will be reflected in the NMR linewidth as a distribution of frequency shifts. A quantitative understanding of the susceptibility inhomogeneity requires analysis of the relation between it and the NMR linewidth, independent of the particular mechanism which causes the inhomogeneity.
The NMR frequency shift $`K`$ measures the time-averaged effective field produced by the local moment at the resonating nucleus. In a paramagnet the relative shift $`K_i`$ of the $`i^{\mathrm{th}}`$ nucleus is related to the local susceptibility $`\chi _j`$ associated with the $`j^{\mathrm{th}}`$ $`f`$-ion electronic moment by
$$K_i=\underset{j}{}a_{ij}\chi _j,$$
(1)
where $`a_{ij}`$ is the hyperfine coupling constant between the $`j^{\mathrm{th}}`$ moment and the $`i^{\mathrm{th}}`$ nucleus. It is straightforward to carry out the spatial averages and show that
$`\overline{K}=a\overline{\chi },\qquad a\equiv {\displaystyle \underset{j}{\sum }}a_{ij},`$
where a bar designates a spatial average in this and the following. Similarly, the rms spread of shifts $`\delta K\equiv (\overline{K^2}-\overline{K}^2)^{1/2}`$ is related to the corresponding rms spread of susceptibilities $`\delta \chi `$ by
$`\delta K=a^{\prime }\delta \chi ,`$
where $`a^{\prime }`$ is an effective hyperfine coupling constant, discussed in more detail below. As a consequence
$$\delta \chi /\overline{\chi }=\delta K/(a^{\prime }\overline{\chi }).$$
(2)
If each nucleus is coupled to more than one moment \[cf. Eq. (1)\], any spatial correlation between the moment susceptibilities will affect the value of $`a^{}`$. There are two extreme limits in considering this spatial correlation. The term “long-range correlation” (LRC) will be used to describe the situation where the correlation length between local moments is much longer than the local-moment near-neighbor spacing. Similarly, “short-range correlation” (SRC) describes the situation where the variation of susceptibility from site to site is random or nearly so, i.e., where the correlation length which describes this variation is of the order of or shorter than a lattice constant. Note that this correlation is only a phenomenological description of the inhomogeneous susceptibility, and is not necessarily related to critical behavior of the system. For a given system we do not know a priori which (if either) of these limits is applicable, although in the single-ion Kondo-disorder model we might expect that random ligand disorder would lead to relatively short-range correlation.
It can be shown that the values of $`a^{\prime }`$ in the LRC and SRC limits (denoted by $`a_{\mathrm{LRC}}^{\prime }`$ and $`a_{\mathrm{SRC}}^{\prime }`$, respectively) are given by
$`a_{\mathrm{LRC}}^{\prime }=|a|;\qquad a_{\mathrm{SRC}}^{\prime }=\left({\displaystyle \underset{j}{\sum }}a_{ij}^2\right)^{1/2}.`$
Assuming for simplicity that the hyperfine coupling is predominantly to an effective number $`n_{\mathrm{eff}}`$ of $`f`$-ion near neighbors and is the same effective value $`a_{\mathrm{eff}}`$ for each of these neighbors, it follows that
$`a_{\mathrm{SRC}}^{\prime }=\sqrt{n_{\mathrm{eff}}}\,a_{\mathrm{eff}}\quad \mathrm{and}\quad a_{\mathrm{LRC}}^{\prime }=n_{\mathrm{eff}}\,a_{\mathrm{eff}},`$
so that
$$a_{\mathrm{SRC}}^{\prime }=a_{\mathrm{LRC}}^{\prime }/\sqrt{n_{\mathrm{eff}}}.$$
(3)
In the LRC limit the fractional susceptibility inhomogeneity $`\delta \chi /\overline{\chi }`$ is given by the relative NMR linewidth $`\delta K/\left|\overline{K}\right|`$. Since $`\delta K=\sigma /H_0`$, where $`\sigma `$ is the rms linewidth in magnetic field units, we have from Eq. (2)
$`{\displaystyle \frac{\delta \chi }{\overline{\chi }}}`$ $`=`$ $`{\displaystyle \frac{\delta K}{a_{\mathrm{LRC}}^{\prime }\overline{\chi }}}={\displaystyle \frac{\delta K}{\left|\overline{K}\right|}}`$ (4)
$`=`$ $`{\displaystyle \frac{\sigma }{\left|\overline{K}\right|H_0}}\qquad \text{(LRC limit)}.`$ (5)
Thus $`\sigma /(\left|\overline{K}\right|H_0)`$, which can be derived from the NMR data, is an estimator for $`\delta \chi /\overline{\chi }`$ in the LRC limit. The corresponding estimator in the SRC limit can be obtained from $`\sigma /(\left|\overline{K}\right|H_0)`$ simply by scaling by the factor $`\sqrt{n_{\mathrm{eff}}}`$ \[Eq. (3)\]:
$`{\displaystyle \frac{\delta \chi }{\overline{\chi }}}`$ $`=`$ $`{\displaystyle \frac{\delta K}{a_{\mathrm{SRC}}^{}\overline{\chi }}}=\left({\displaystyle \frac{\delta K}{\left|\overline{K}\right|}}\right)\left({\displaystyle \frac{a_{\mathrm{LRC}}^{}}{a_{\mathrm{SRC}}^{}}}\right)`$ (6)
$`=`$ $`\sqrt{n_{\mathrm{eff}}}\left({\displaystyle \frac{\delta K}{\left|\overline{K}\right|}}\right)\text{(SRC limit)}.`$ (7)
The above assumes that the coupling constants $`a_{ij}`$ are not disordered, i.e., that they have the same values for crystallographically equivalent positions of nucleus $`i`$ and $`f`$-ion $`j`$. If this is not the case and the $`a_{ij}`$ are also disordered, then it can be shown that
$$\frac{\delta K}{\left|\overline{K}\right|}=\left[\left(\frac{\delta \chi }{\overline{\chi }}\right)^2+A^2\right]^{1/2},$$
(8)
where $`A`$ is a term which expresses the effect of the disordered $`a_{ij}`$. Now in existing disorder-driven models it is found that $`\delta \chi /\overline{\chi }`$ varies considerably with $`\overline{\chi }`$ (with temperature an implicit parameter), tending to a value of order unity at low temperatures and vanishing as $`\overline{\chi }\to 0`$ (high temperatures). On the other hand $`A`$ is found to be independent of $`\overline{\chi }`$. Disorder in the $`a_{ij}`$ will therefore result in a nonzero intercept in a plot of $`\delta K/\left|\overline{K}\right|`$ vs. $`\overline{\chi }`$, and its effect can be removed by subtracting this intercept in quadrature from the raw $`\delta K/\left|\overline{K}\right|`$ data. It can be shown that this correction is valid in both the LRC limit and the SRC limit.
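As a concrete illustration of this procedure, the following sketch (in Python; the function and variable names are ours, not from the original analysis) forms the LRC- and SRC-limit estimators of $`\delta \chi /\overline{\chi }`$ from measured shifts and linewidths, including the quadrature removal of the coupling-disorder intercept $`A`$ of Eq. (8).

```python
import numpy as np

def inhomogeneity_estimators(K, sigma, H0, A, n_eff):
    """Estimators of delta_chi/chi_bar from NMR data on a common
    temperature grid.

    K     : mean shifts K_bar
    sigma : rms linewidths in magnetic field units
    H0    : applied field
    A     : intercept of delta_K/|K_bar| vs. chi_bar from disorder
            in the couplings a_ij [Eq. (8)]
    n_eff : effective number of equally coupled near-neighbor moments
    """
    dK_rel = sigma / (np.abs(K) * H0)           # raw delta_K / |K_bar|
    # remove coupling-constant disorder in quadrature [Eq. (8)]
    corrected = np.sqrt(np.clip(dK_rel**2 - A**2, 0.0, None))
    lrc = corrected                             # LRC limit [Eq. (5)]
    src = np.sqrt(n_eff) * corrected            # SRC limit [Eq. (7)]
    return lrc, src
```

In practice $`A`$ would be obtained by extrapolating the raw $`\delta K/\left|\overline{K}\right|`$ data to $`\overline{\chi }=0`$, as is done for the <sup>29</sup>Si data in Sec. V B.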
It should be stressed that this “NMR technology” is quite independent of the specific mechanism which causes the susceptibility inhomogeneity.
## IV Disorder-driven NFL mechanisms
In considering systems where NFL behavior is driven by disorder it is convenient to study the spatially-distributed local susceptibility $`\chi _j`$. Simple linear response theory shows that the zero-temperature local susceptibility can be associated with a characteristic local energy scale $`\mathrm{\Delta }_j`$ by
$$\chi _j\propto \frac{1}{\mathrm{\Delta }_j},$$
(9)
where $`\mathrm{\Delta }_j`$ is essentially the excitation energy from the ground state to the first excited state. At finite temperatures $`T`$ the local susceptibility $`\chi (\mathrm{\Delta },T)`$ depends strongly on the microscopic details which couple the magnetic degrees of freedom.
We can therefore speak of distributions of energy scales or susceptibilities, characterized by distribution functions $`P(\mathrm{\Delta })`$ and $`P(\chi )`$, respectively; thus $`P(\chi ,T)=P(\mathrm{\Delta })/|\partial \chi (\mathrm{\Delta },T)/\partial \mathrm{\Delta }|`$. Once $`P(\mathrm{\Delta })`$ \[or $`P(\chi ,T)`$\] is known we can obtain spatial averages of physical quantities such as the $`n^{\mathrm{th}}`$ moment $`\overline{\chi ^n}(T)`$ of the local susceptibility distribution, which is given by
$$\overline{\chi ^n}(T)=\int _0^{\infty }\chi ^n(\mathrm{\Delta },T)P(\mathrm{\Delta })\,d\mathrm{\Delta }$$
(11)
$$=\int _0^{\infty }\chi ^nP(\chi ,T)\,d\chi .$$
(12)
Knowledge of the first and second moments of $`P(\chi ,T)`$ is sufficient for interpretation of the bulk susceptibility and NMR linewidth data.
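For a distribution $`P(\mathrm{\Delta })`$ known only numerically, the moments of Eq. (IV) can be evaluated by simple quadrature. The following minimal sketch (Python; the names are our own, purely illustrative) returns $`\overline{\chi }(T)`$ and $`\delta \chi (T)`$ from a tabulated $`P(\mathrm{\Delta })`$ and a model $`\chi (\mathrm{\Delta },T)`$.

```python
import numpy as np

def chi_moment(n, T, Delta, P_Delta, chi):
    """n-th moment of the local susceptibility [Eq. (IV)]: quadrature
    of chi(Delta, T)**n against P(Delta) on the grid Delta.
    chi is a callable chi(Delta, T); P_Delta is P on the same grid."""
    return np.trapz(chi(Delta, T)**n * P_Delta, Delta)

def mean_and_spread(T, Delta, P_Delta, chi):
    """Return chi_bar(T) and delta_chi(T) = (chi2_bar - chi_bar^2)**0.5."""
    m1 = chi_moment(1, T, Delta, P_Delta, chi)
    m2 = chi_moment(2, T, Delta, P_Delta, chi)
    return m1, np.sqrt(max(m2 - m1**2, 0.0))
```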
### A Single-ion Kondo disorder
We take the Ce-ion spins to be coupled to the conduction-electron spins by an $`s`$-$`f`$ exchange coupling described by a coupling constant $`g=N(E_F)J`$, where $`N(E_F)`$ is the density of conduction-electron states at the Fermi surface and $`J`$ is the Ce-ion–conduction-electron exchange interaction. If the system is disordered on the ligand sites, as in CeRhRuSi<sub>2</sub>, $`g`$ will be randomly distributed according to a distribution function $`P(g)`$. In the simplest picture of the Kondo effect, the Kondo temperature $`T_K`$, which characterizes the energy scale of the single-ion Kondo effect, is given by $`T_K=E_Fe^{-1/g}`$, where $`E_F`$ is the Fermi energy. Thus a narrow distribution of $`g`$ can lead to a wide distribution of $`T_K`$ when $`g`$ is small. In this picture we immediately identify $`\mathrm{\Delta }`$ with $`T_K`$.
If the distribution function $`P(\mathrm{\Delta })=P(T_K)`$ is broad enough so that $`P(T_K\to 0)`$ does not vanish, then at any nonzero temperature $`T`$ those $`f`$ ions for which $`T_K<T`$ are not compensated (i.e., are not described by Fermi-liquid theory) and give rise to the NFL behavior. In view of Eq. (9) one sees that regions of the system where $`T_K`$ is very small (sites with very large low-temperature susceptibility) dominate the thermal and transport properties. Miranda et al. have treated this picture in detail, and have shown that it predicts the observed low-temperature behavior of the Sommerfeld coefficient $`\gamma (T)`$, susceptibility $`\chi (T)`$, and resistivity $`\mathrm{\Delta }\rho (T)=\rho (T)-\rho (0)`$ ($`\gamma \propto \chi \propto \mathrm{ln}T`$ and $`\mathrm{\Delta }\rho \propto T`$, respectively) provided that $`P(T_K\to 0)`$ is finite.
The resulting distribution function $`P(T_K)`$ is given by
$$P(T_K)=P(g)\left|\frac{dg}{dT_K}\right|=\frac{g^2P(g)}{T_K}$$
(13)
with $`g=1/\mathrm{ln}(E_F/T_K)`$. As a convenient parameterization of the Kondo physics we take the susceptibility to have the Curie-Weiss form
$$\chi (T,T_K)=𝒞/(T+\alpha T_K),$$
(14)
where $`𝒞`$ is the Curie constant. The value of $`\alpha `$ was estimated by comparing this Curie-Weiss law to the exact Bethe-ansatz solution; the two functional forms differ by $`\lesssim 10`$% for $`\alpha \approx 2.5`$. Assuming a Gaussian distribution for $`P(g)`$, the mean $`\overline{g}`$ and rms width $`\delta g`$ of the distribution can be found by fitting Eq. (IV) with $`n=1`$ to the measured bulk (i.e., spatially averaged) susceptibility.
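A minimal numerical sketch of this model (Python; parameter and function names are ours) is given below. It averages the Curie-Weiss form of Eq. (14) over a Gaussian $`P(g)`$; averaging over $`g`$ with weight $`P(g)`$ is equivalent to averaging over $`T_K`$ with the transformed distribution of Eq. (13).

```python
import numpy as np

def kondo_disorder_moments(T, g_mean, g_width, E_F, C, alpha=2.5, n_g=2001):
    """chi_bar(T) and delta_chi/chi_bar for the Kondo-disorder model:
    T_K = E_F*exp(-1/g), chi(T, T_K) = C/(T + alpha*T_K) [Eq. (14)],
    with g Gaussian-distributed (mean g_mean, rms width g_width)."""
    g = np.linspace(g_mean - 5*g_width, g_mean + 5*g_width, n_g)
    g = g[g > 0]                              # T_K is undefined for g <= 0
    P_g = np.exp(-(g - g_mean)**2 / (2*g_width**2))
    P_g /= np.trapz(P_g, g)                   # normalize on the grid
    T_K = E_F * np.exp(-1.0/g)
    chi = C / (T + alpha*T_K)
    m1 = np.trapz(chi * P_g, g)
    m2 = np.trapz(chi**2 * P_g, g)
    return m1, np.sqrt(m2 - m1**2) / m1
```

Fitting the first moment to the measured bulk susceptibility (e.g., with a least-squares routine) then yields $`\overline{g}`$ and $`\delta g`$, and the second moment gives the predicted $`\delta \chi /\overline{\chi }`$ with no further adjustable parameters.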
### B Griffiths-phase model
In the Griffiths-phase model of NFL behavior the low-energy physics is dominated by rare and large clusters which can tunnel through classically forbidden regions. These correlated regions are generated by above-average values of the RKKY interaction. The tunneling is produced by the spin-flip processes present in the Kondo effect. In this scenario the Griffiths singularities appear close to a QCP below the percolation threshold and are therefore intrinsically related to QCP physics. It is intuitively clear that the clusters can be effectively described in terms of two-level systems, with a tunneling energy $`E`$ which is distributed over the sample due to the structural disorder. Obviously we have $`\mathrm{\Delta }=E`$ in this picture.
The distribution of $`E`$ is obtained by mapping the problem onto the Ising model in a transverse field; this procedure is valid in the limit of large magnetic anisotropy, as appears to be the case in CeRhRuSi<sub>2</sub> (cf. Fig. 3). Then it can be shown that
$$P(E)=\{\begin{array}{cc}\frac{\lambda }{ϵ_0}\left(\frac{E}{ϵ_0}\right)^{\lambda -1},\hfill & 0<E<ϵ_0,\hfill \\ \multicolumn{2}{c}{}\\ 0,\hfill & E>ϵ_0,\hfill \end{array}$$
(15)
where $`\lambda `$ is an exponent that determines the behavior of the response functions ($`0\le \lambda \le 1`$), and $`ϵ_0`$ is a high-energy cutoff which must be determined for each specific system.
As discussed above, the local zero-temperature susceptibility in this picture is $`\chi (0,E)=𝒞/E`$; large clusters with small energy scales have large susceptibilities. At high temperatures one expects the clusters to be disordered and behave paramagnetically, resulting in a Curie behavior $`\chi (E,T)=𝒞/T`$ for $`T\gg E`$. We therefore assume a Curie-Weiss interpolation formula
$$\chi (E,T)=\frac{𝒞}{T+E}$$
(16)
as for the Kondo-disorder model. But in the present case this approximation is intended to incorporate all the interaction effects which determine the susceptibility of a multi-ion cluster, not just single-ion Kondo physics.
Using Eqs. (IV), (15), and (16) it is straightforward to show that for $`T\ll ϵ_0`$
$$\overline{\chi }(T)=\frac{\pi \lambda 𝒞}{ϵ_0}\left(\frac{ϵ_0}{T}\right)^{1\lambda }$$
(17)
and
$$\frac{\delta \chi (T)}{\overline{\chi }(T)}=\left[\frac{1\lambda }{2\pi \lambda }\right]^{1/2}\left(\frac{ϵ_0}{T}\right)^{\lambda /2}.$$
(18)
The critical behavior is determined by the single nonuniversal temperature-independent exponent $`\lambda `$. For $`\lambda <1`$ the susceptibility diverges algebraically as $`T0`$ ($`H=0`$), and NFL behavior is obtained. The divergence increases (i.e., the NFL behavior becomes stronger) with decreasing $`\lambda `$. The case $`\lambda =1`$ is marginal, and leads to logarithmic singularities as in the Kondo-disorder approach.
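The corresponding Griffiths-phase averages can be evaluated without special care at the integrable $`E\to 0`$ divergence by sampling $`E`$ through the inverse of the cumulative distribution of Eq. (15). A minimal sketch (Python; the names are ours):

```python
import numpy as np

def griffiths_moments(T, lam, eps0, C, n_E=4000):
    """chi_bar(T) and delta_chi/chi_bar for P(E) of Eq. (15) with the
    Curie-Weiss interpolation chi(E, T) = C/(T + E) of Eq. (16).
    The CDF of P(E) is (E/eps0)**lam, so E = eps0*u**(1/lam) with u
    uniform on (0, 1) samples P(E) exactly."""
    u = (np.arange(n_E) + 0.5) / n_E          # midpoint grid on (0, 1)
    E = eps0 * u**(1.0/lam)
    chi = C / (T + E)
    m1 = chi.mean()                           # equal weights in u
    m2 = (chi**2).mean()
    return m1, np.sqrt(m2 - m1**2) / m1
```

For $`T\ll ϵ_0`$ the computed ratio should grow as $`(ϵ_0/T)^{\lambda /2}`$, the asymptotic behavior of Eq. (18).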
### C Kondo disorder versus Griffiths singularities
Both the Kondo-disorder and Griffiths-singularities pictures deal with a similar aspect of disorder, viz., the physics of rare events with large susceptibilities. It is clear, however, that the microscopic aspects of the two models are very different. The Kondo-disorder model uses non-interacting single-ion physics, and no aspect of the RKKY interaction is present. The Griffiths-phase approach, on the other hand, tries to take both RKKY and Kondo phenomena into account on an equal footing, and has a strong connection with QCP physics.
From the point of view of local properties as measured in NMR spectra these approaches give similar results. The spatial properties of the two approaches are very different, however, since the formation of clusters in the Griffiths phase requires spatially extended structure. In this case one could look for the existence of clusters via superparamagnetic response, which is well understood in the context of spin glasses, or for a momentum dependence of the inelastic neutron scattering. In addition, one would expect cluster formation to slow down the spin fluctuations relative to the free-ion fluctuation rate, which is essentially $`T_K`$. Experiments that are sensitive to the fluctuation rate may therefore be able to distinguish between the two theories.
## V Results and Discussion
### A Bulk magnetic susceptibility
A fit of the Kondo-disorder model result for $`\overline{\chi }(T)`$ \[Eq. (IV) with $`n=1`$ and $`\mathrm{\Delta }=T_K`$, and Eqs. (13) and (14) for $`P(T_K)`$ and $`\chi (T,T_K)`$\] to the experimental $`c`$-axis susceptibility $`\chi _c(T)`$ is shown as the solid curve in Fig. 3. We obtain the same coupling constant distribution width $`\delta g=0.021`$ as Graf et al., and a somewhat smaller mean $`\overline{g}=0.160`$ compared to 0.175 from Ref. . The coupling constants are less widely distributed than in the NFL system UCu<sub>5-x</sub>Pd<sub>x</sub>, $`x=1.0`$ and 1.5, consistent with weaker “nearly NFL” behavior in CeRhRuSi<sub>2</sub>.
Figure 3 also gives the Griffiths-phase model prediction for $`\chi _c(T)`$ (dashed curve), obtained by fitting $`\overline{\chi }(T)`$ from Eq. (IV) ($`n=1`$), with $`\mathrm{\Delta }=E`$ and using Eqs. (15) and (16), to the bulk $`c`$-axis susceptibility. It can be seen that at low temperatures the Griffiths-phase fit overestimates the experimental data slightly. This is to be expected, since in the simple Griffiths-phase model there is no possibility of a return to Fermi-liquid behavior at low temperatures: the system is a true NFL as long as $`\lambda <1`$. But the susceptibility data begin to exhibit the saturation expected from the conclusions of Graf et al., and therefore are not well described by an algebraically divergent temperature dependence. There is, however, a region of intermediate temperatures in which both the Kondo-disorder and Griffiths-singularity models agree very well with experiment. From the Griffiths-phase fit in this region we obtain $`ϵ_0=170\pm 10`$ K and $`\lambda =0.88\pm 0.02`$. The latter value is considerably larger (i.e., the NFL behavior is weaker) than that found in UCu<sub>5-x</sub>Pd<sub>x</sub>, as was also the case for the Kondo-disorder model.
Figure 4 shows the distribution functions $`P(\mathrm{\Delta })`$ which result from the Kondo-disorder ($`\mathrm{\Delta }=T_K`$) and Griffiths-phase ($`\mathrm{\Delta }=E`$) model fits.
It can be seen that the two functions are very different. The Kondo-disorder distribution function $`P(T_K)`$ exhibits a maximum near 12 K and is small below $`\sim `$1 K and above $`\sim `$100 K, whereas the Griffiths-phase distribution function $`P(E)`$ is broader and diverges weakly as $`E\to 0`$. These differences are much more marked in $`P(\mathrm{\Delta })`$ than in the corresponding fits to the susceptibility (Fig. 3); spatially-averaged experimental quantities are insensitive to the exact form of $`P(\mathrm{\Delta })`$. It is clear that the Griffiths-phase model could fit the data better if $`P(E)\to 0`$ as $`E\to 0`$, which in an extended Griffiths-phase picture would occur if there were an upper cutoff on the cluster susceptibility.
Thus CeRhRuSi<sub>2</sub> does not exhibit “true” NFL behavior. In the Kondo-disorder model, which shows this most explicitly, the best fit indicates that all spins are in a Kondo-compensated Fermi-liquid state at low enough temperatures. This is consistent with the results of Graf et al. that $`\gamma (T)\to \mathrm{const}.`$ and $`\mathrm{\Delta }\rho (T)\propto T^2`$ as $`T\to 0`$. We note again, however, that as mentioned in Sec. I the saturation of $`\gamma (T)`$ does not necessarily indicate Fermi-liquid behavior at low temperatures; other physics, such as magnetic freezing, may be involved.
### B <sup>29</sup>Si NMR linewidths
The <sup>29</sup>Si $`c`$-axis NMR shift $`K_c`$ and linewidth $`\sigma _c`$ are plotted against the $`c`$-axis bulk susceptibility $`\chi _c`$ in Fig. 5, with temperature an implicit parameter.
(The $`ab`$-plane parameters $`K_{ab}`$ and $`\sigma _{ab}`$, not shown, are small and only weakly temperature dependent.) It can be seen that $`\sigma _c`$ varies more rapidly with $`\chi _c`$ than $`K_c`$, as expected qualitatively from disorder-driven theories of NFL behavior. Although the expected linear relation between $`K_c`$ and $`\chi _c`$ is observed at high temperatures (small $`\chi _c`$), $`K_c(\chi _c)`$ tends to a constant for large $`\chi _c`$. This saturation is not well understood, but may be due to a small amount of second phase; this could have a strong Curie-Weiss-like susceptibility but little effect on the NMR shift since the number of nuclei in the second phase would be small. The observed nonlinearity is not more than $`\sim `$20% and does not affect our conclusions significantly.
Figure 6 plots $`\sigma _c/(K_cH_0)=\delta K_c/K_c=\delta K_c/(a_{\mathrm{LRC}}^{}\chi _c)`$ \[Eq. (5)\] versus $`\chi _c`$.
As discussed in Sec. III, $`\sigma _c/(K_cH_0)`$ is an estimator for $`\delta \chi /\overline{\chi }`$ in the LRC limit. The data extrapolate to a non-zero value as $`\chi _c\to 0`$, which indicates that the coupling constants $`a_{ij}`$ are also disordered. We therefore subtracted the extrapolated intercept from the raw values in quadrature \[cf. Eq. (8)\] to obtain corrected data in the LRC limit, also shown in Fig. 6 (triangles). These corrected data represent $`\delta K_c/(a_{\mathrm{LRC}}^{}\chi _c)`$ with $`\delta K_c`$ due only to susceptibility inhomogeneity. The corresponding (corrected) values of $`\delta K_c/(a_{\mathrm{SRC}}^{}\chi _c)`$ were obtained from Eq. (7) of Sec. III, with $`n_{\mathrm{eff}}`$ chosen as described below.
For both the Kondo-disorder and Griffiths-phase models $`\delta \chi /\overline{\chi }`$ was calculated from Eq. (IV) ($`n=2`$) and $`\overline{\chi }(T)`$ with no further adjustable parameters, since $`P(\mathrm{\Delta })`$ had been previously determined by the fits to the bulk susceptibility. Figure 7 compares $`\delta K_c/(a^{}\chi _c)`$ in both limits with the theoretical behavior of $`\delta \chi /\overline{\chi }`$ from these theories, again with temperature an implicit parameter.
It can be seen that the theoretical predictions are similar and that they both overestimate $`\delta K_c/(a_{\mathrm{LRC}}^{}\chi _c)`$ considerably, but that the agreement with $`\delta K_c/(a_{\mathrm{SRC}}^{}\chi _c)`$ is excellent when $`n_{\mathrm{eff}}`$ in Eq. (7) is taken to be 6.
That this value is sensible can be concluded from examination of the Al<sub>4</sub>Ba-type crystal structure of CeRhRuSi<sub>2</sub>, shown in Fig. 8, where it can be seen that each Si site is coordinated by four Ce nearest neighbors and one Ce next-nearest neighbor.
Thus $`n_{\mathrm{eff}}`$ is approximately the coordination number for the first two near-neighbor shells and is therefore reasonable, given the approximation of an effective number of equally-coupled neighbors.
For CeRhRuSi<sub>2</sub> we do not have the independent verification of the SRC limit that was available from comparison of NMR and muon spin rotation ($`\mu `$SR) spectra in the case of UCu<sub>5-x</sub>Pd<sub>x</sub>. (For a review of the $`\mu `$SR technique see, for example, Ref. .) The latter alloys have a cubic crystal structure and the positive-muon ($`\mu ^+`$) interstitial stopping sites possess octahedral and tetrahedral point symmetries, which are sufficiently high to render the $`\mu ^+`$ frequency shift isotropic. Then the $`\mu ^+`$ linewidth reflects the susceptibility inhomogeneity rather than anisotropic powder-pattern broadening. Preliminary $`\mu `$SR measurements in an unaligned powder sample of CeRhRuSi<sub>2</sub> show that in this alloy the anisotropic contribution dominates the powder-pattern linewidth, much as in the unaligned-powder spectrum of Fig. 2, and the disorder-induced broadening cannot be determined accurately.
Unfortunately field-aligned powder samples cannot be used in $`\mu `$SR experiments. The packing fraction of the powder must be small ($`\lesssim 20`$%) in order to allow free rotation of the powder grains during alignment, and then only a correspondingly small fraction of the muons stop in the sample; the rest stop in the epoxy and give a spurious signal. Thus we cannot confirm the SRC limit by comparing results between NMR and $`\mu `$SR. We also note that no other nucleus in CeRhRuSi<sub>2</sub> is favorable for NMR; stable Ce isotopes possess no nuclear magnetic moment, and Ru and Rh isotopes have very small gyromagnetic ratios. Nevertheless, the SRC-limit estimate of $`\delta \chi _c/\chi _c`$ is in excellent self-consistent agreement with the disorder-driven models.
## VI Conclusions
The picture that emerges from our <sup>29</sup>Si NMR study of CeRhRuSi<sub>2</sub> exhibits similarities and differences when compared to the preceding NMR investigation of UCu<sub>5-x</sub>Pd<sub>x</sub>, $`x=1.0`$ and 1.5. The most important similarity is the fact that in both cases the NMR data are in excellent agreement with predictions of disorder-driven theories of NFL behavior. Our results therefore confirm the conclusions of Graf et al. that such a mechanism drives NFL properties in CeRhRuSi<sub>2</sub>. The most important differences between the two systems are that in CeRhRuSi<sub>2</sub> (1) within the single-ion Kondo-disorder model the disorder is not enough to prevent a return to Fermi-liquid behavior at temperatures $`\lesssim 1`$ K, and (2) the determination of the appropriate correlation length limit (LRC or SRC) has not been made independently of comparison with theory. The agreement between theory and experiment assuming the SRC limit (Fig. 7) is satisfactory.
From the experimental point of view, the relatively small differences between the predictions of the single-ion Kondo-disorder picture and the Griffiths-phase theory show how difficult it is to discriminate between these two mechanisms for disorder-driven NFL behavior in the NMR linewidths. We speculate, however, that the dynamics of the spins will be quite different in the two cases, particularly at low temperatures.
The single-ion Kondo disorder model predicts inhomogeneous relaxation due to the distributed $`T_K`$. This mechanism yields a spatially averaged spin-lattice relaxation rate $`\overline{T_1^{-1}(T)}=\int dT_K\,P(T_K)\,T_1^{-1}(T,T_K)`$. It is convenient to approximate $`T_1^{-1}(T,T_K)`$ by
$`T_1^{-1}(T,T_K)\propto \{\begin{array}{cc}1/T_K,\hfill & T>T_K,\hfill \\ \multicolumn{2}{c}{}\\ T/T_K^2,\hfill & T<T_K,\hfill \end{array}`$
which captures the crossover to Fermi-liquid (Korringa) behavior for $`T<T_K`$. For a model rectangular $`P(T_K)`$ given by
$`P(T_K)=\{\begin{array}{cc}{\displaystyle \frac{1}{T_M-T_m}},& T_m<T_K<T_M,\hfill \\ \multicolumn{2}{c}{}\\ 0,& \text{otherwise,}\hfill \end{array}`$
where $`T_m`$ and $`T_M`$ are minimum and maximum values of $`T_K`$, respectively, it is straightforward to show that $`\overline{T_1^{-1}}`$ varies linearly with $`T`$ for $`T<T_m`$ and goes smoothly to a constant for $`T>T_M`$. Such behavior would be generally expected to characterize $`\overline{T_1^{-1}}`$ as long as $`P(T_K\to 0)=0`$. Thus in this scenario $`\overline{T_1^{-1}}`$ depends on temperature but is independent of resonance frequency $`\omega `$.
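This piecewise behavior is easy to verify numerically; a minimal sketch (Python, arbitrary rate units; the names are ours):

```python
import numpy as np

def avg_rate(T, T_m, T_M, n=4000):
    """Spatially averaged T_1^-1 for the rectangular P(T_K) model:
    the local rate is 1/T_K for T > T_K and T/T_K**2 for T < T_K."""
    T_K = np.linspace(T_m, T_M, n)
    rate = np.where(T > T_K, 1.0/T_K, T/T_K**2)
    return np.trapz(rate, T_K) / (T_M - T_m)
```

Evaluating this on a grid of temperatures reproduces the linear regime below $`T_m`$ and the plateau above $`T_M`$.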
In contrast, it can be easily shown from the expression for the dissipative dynamic susceptibility $`\chi ^{\prime \prime }(\omega )`$ in the Griffiths-phase theory
$`\chi ^{\prime \prime }(\omega )\propto \omega ^{-1+\lambda }\mathrm{tanh}(\mathrm{\hbar }\omega /k_BT)`$
that, assuming the validity of this picture at the very low nuclear (muon) frequencies ($`\mathrm{\hbar }\omega \ll k_BT`$),
$`\overline{T_1^{-1}}\propto \omega ^{-1+\lambda },`$
independent of temperature. The frequency is given by $`\omega =\gamma H_0`$, where $`\gamma `$ is the nuclear (muon) gyromagnetic ratio. Thus the dependence of $`\overline{T_1^{-1}}`$ on temperature and $`H_0`$ differs considerably between the two theories.
It should be noted that the NMR linewidths obtained from experimental data are not necessarily true sample rms averages, if the lines have extended shoulders which are lost in the noise and not taken into account. The fact that good fits are obtained with Gaussian lines seems to make this unlikely, but it is not hard to see that at low temperatures $`P(\chi )`$ should be broad and asymmetric in both the Kondo-disorder and Griffiths-phase models. If the experimental linewidth characterizes only “typical” environments it will underestimate the true sample average. This would render our quantitative results somewhat uncertain, but would not invalidate the conclusion that disorder is an important element in the NFL behavior of CeRhRuSi<sub>2</sub> above 1 K.
Finally, we discuss the relation of the disorder-driven theories to the observed crossover to a new regime (Fermi-liquid behavior, cluster formation, magnetic freezing, etc.) in CeRhRuSi<sub>2</sub> below 1 K. The crossover is described empirically by the Kondo-disorder model, which by itself gives no clue as to why there should (or should not) be a suppression of low Kondo temperatures. Recently, however, Miranda and Dobrosavljević have reported a microscopic calculation of the form of $`P(T_K)`$ for various levels of disorder. They find that the distribution is singular only for sufficiently strong disorder, whereas for slightly weaker disorder $`P(T_K)\to 0`$ at small $`T_K`$, consistent with a return to Fermi-liquid behavior at the lowest temperatures. This feature is in at least qualitative agreement with our results.
To explain the crossover the Griffiths-phase scenario would have to be extended beyond its simplest form to include a mechanism which reduces the response of the largest clusters. The mechanism behind such a reduction could be the breakdown of the assumption of strong single-ion anisotropy made in the Griffiths-phase theory; transverse fluctuations of the Ce ions might constitute a damping mechanism which rounds off the Griffiths singularities. Alternatively, superparamagnetic spin freezing of the clusters could occur at very low temperatures.
It is possible that similar crossovers occur in other NFL materials, perhaps at temperatures which have not yet been explored. In any event, our experimental findings indicate that further development of current theories of disorder-driven NFL behavior is required to understand NFL phenomena at low temperatures.
One of us (D.E.M.) is grateful for discussions with C. H. Booth, R. H. Heffner, M. F. Hundley, and R. Modler. This research was supported by the U.S. NSF, Grant no. DMR-9418991 (U.C. Riverside), by the Research Corporation (Whittier College), and by the U.C. Riverside Academic Senate Committee on Research, and was performed in part under the auspices of the U.S. DOE (Los Alamos). One of us (A.H.C.N.) acknowledges support from the Alfred P. Sloan Foundation.
# Mechanisms for slow strengthening in granular materials
## I INTRODUCTION
The strength of granular matter is an important macroscopic property. Under many circumstances a layer of granular material at rest can sustain a load, i.e. it behaves like a solid. However, applied shear forces can lead to partial fluidization, a phenomenon that has no direct analog in homogeneous materials . The threshold shear stress $`\sigma _{Max}`$ needed to initiate flow also determines when a granular material will lose much of its ability to sustain loads, and is therefore a good measure of strength.
The shear strength of a granular layer is of great interest in geophysical applications, due to the granular composition of earthquake faults. Various studies (described in section II) have shown a gradual strengthening of a granular layer at geophysical pressures, as the time between stress relieving events is increased. Since strengthening of geophysical faults can affect the temporal distribution of earthquakes (see Ref. for references), understanding the underlying mechanisms and developing appropriate models is of practical interest .
In previous studies by our group the frictional properties of wet and dry sheared granular materials at low pressures were determined in detail by shearing the layer by means of a plate resting upon it. At small applied stresses, high resolution stress and displacement measurements can provide an accurate determination of the instantaneous frictional force experienced by the plate. The static friction coefficient $`\mu _s`$ (the ratio of the threshold static shear stress $`\sigma _{Max}`$ to normal stress) was found to be reproducible under given experimental conditions. However, changes in the experimental history influence $`\mu _s`$. In this article we study the effect of the experimental history on the threshold static shear stress of granular material at low applied normal stress. One central parameter of the experimental history is the waiting time $`\tau `$ for which the layer has been at rest. In addition, the shear stress and humidity during the waiting time are varied, and measurements are made in fluid-saturated material. The effects of applied stress reversal are also studied.
In order to determine mechanisms for strengthening, we measure the instantaneous frictional force and the instantaneous vertical dilation of the layer during waiting and during motion. We also image a cross section of the granular layer in some experiments and track the motion of individual grains during waiting and during motion of the plate. Our results indicate that several different mechanisms are needed to account for the observed strengthening. We find that granular matter can strengthen due to a slow shift in the particle arrangement under shear stress. This observation is consistent with a model for granular matter as consisting of a fragile network of stress chains that adjusts to the applied stress . Strengthening is also influenced by humidity, since condensation of liquid bridges adds an attractive surface tension force between grains . We show clearly that this cannot be the only cause of strengthening by carrying out experiments under water, where liquid bridges cannot be formed. Finally, strengthening may occur by the evolution of individual microcontacts through slow creep . Our experiments are not sufficiently sensitive to determine whether creep on length scales of the size of microcontacts actually occurs. However, strengthening of individual contacts alone cannot account for the experimental results. In addition to these slow time-dependent effects, the static friction coefficient can also be increased by compaction of the granular material under some circumstances, and by cycling of the applied shear stress.
Relevant work on friction and work related to proposed strengthening mechanisms are discussed in section II. The experimental results are described in section III and discussed in section IV.
## II BACKGROUND
### A Strengthening in solid-on-solid friction
In friction between dry solid surfaces (solid-on-solid friction) the static friction coefficient $`\mu _s`$ is determined by the number and strength of microcontacts that support the applied stress (for recent reviews on friction, see Refs. ). Since the strength of microcontacts changes with time or applied stress, and since their number changes as well through stress-induced creep, $`\mu _s`$ is not a constant, but is influenced by the state of the system. The static friction coefficient - the ratio of the shear force needed to initiate sliding to the normal force - has generally been found to increase approximately logarithmically with the time of (quasi)-stationary contact between materials.
The static strength of a glass-on-glass interface was recently investigated experimentally by Berthoud et al. . In addition to logarithmic strengthening, the authors find that the rate of increase grows with the temperature of the material. The strengthening rate is twice as high when a shear stress is applied during the waiting time, but strengthening persists without an applied shear stress. This strengthening was found to be a result of an increase in the load bearing area through load-induced creep of individual microcontacts.
### B Strengthening in geophysical experiments
In studies at geophysical pressures ($`\sim 20\,\mathrm{MPa}`$) using simulated fault gouge (granular quartz powder), Marone et al. found logarithmic strengthening of a granular layer with time, if shear stress is applied during the waiting time. On the other hand, when the shear stress is removed, an immediate strengthening of the material is observed followed by a slow weakening with waiting time. Nakatani et al. showed that the magnitude of the instantaneous strengthening is proportional to the amount by which the applied shear stress is reduced. These effects are suggested to be a consequence of ’overconsolidation’, i.e., rearrangement of particles into a more compact configuration.
### C Rate and state theories
In order to describe the dependence of the frictional force on the state of the system, rate and state theories were developed by Dieterich and Ruina , which introduce an additional variable $`\mathrm{\Theta }`$ that characterises the system state. In general, several variables might be necessary to describe the state of the system, but the two models focus on the simplest possible case of one state variable determining the system state. In both theories the frictional force is given by
$$\mu =\mu _0+a\mathrm{ln}\frac{V}{V_0}+b\mathrm{ln}\frac{V_0\mathrm{\Theta }}{D_c}$$
(1)
where $`\mu _0`$, $`V_0`$, and $`D_c`$ are characteristic constants of the materials, and $`V`$ is the instantaneous sliding velocity. In this approach one does not distinguish between static and dynamic cases. The models differ in the differential equation for the state variable $`\mathrm{\Theta }`$. In the Ruina model, for which $`d\mathrm{\Theta }/dt=-(V\mathrm{\Theta }/D_c)\mathrm{ln}\left[V\mathrm{\Theta }/D_c\right]`$, creep is necessary ($`V\ne 0`$) to change $`\mathrm{\Theta }`$. On the other hand, time of contact alone increases $`\mathrm{\Theta }`$ and thereby $`\mu `$ in the Dieterich model, where $`d\mathrm{\Theta }/dt=1-V\mathrm{\Theta }/D_c`$. Different equations have also been proposed by Nielsen, Carlson and Olsen to describe earthquake faults. In their model, $`\mathrm{\Theta }`$ changes both with the time of contact and with creep, using a characteristic strengthening time $`\tau _c`$ and a characteristic weakening length of contacts $`l_c`$ ($`d\mathrm{\Theta }/dt=(1-\mathrm{\Theta })/\tau _c-\mathrm{\Theta }V/l_c`$). The friction coefficient is determined by $`\mathrm{\Theta }`$ and increases with velocity with a viscous coefficient $`\eta `$ ($`\mu =\mathrm{\Theta }+\eta V`$).
While they successfully explain important earthquake characteristics , these rate and state models do not describe the instantaneous strengthening when the shear stress is released during a waiting time .
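To make the role of the state variable concrete, the following sketch (Python, explicit Euler; purely illustrative, not a published implementation) integrates the Dieterich law for a prescribed velocity history. During a hold the state variable grows approximately linearly in time, so the $`b`$ term of Eq. (1) produces the logarithmic time strengthening discussed above; note that a true hold must be represented by a small but nonzero creep velocity, since the $`a\mathrm{ln}(V/V_0)`$ term is undefined at $`V=0`$.

```python
import numpy as np

def dieterich_mu(t, V, mu0, a, b, V0, Dc, theta0):
    """Integrate the Dieterich (aging) law dTheta/dt = 1 - V*Theta/Dc
    with explicit Euler steps for a velocity history V(t), then
    evaluate the friction coefficient of Eq. (1).
    t, V : float arrays of increasing times and velocities (> 0)."""
    theta = np.empty(len(t))
    theta[0] = theta0
    for i in range(1, len(t)):
        dt = t[i] - t[i-1]
        theta[i] = theta[i-1] + dt * (1.0 - V[i-1]*theta[i-1]/Dc)
    return mu0 + a*np.log(V/V0) + b*np.log(V0*theta/Dc)
```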
### D Force transmission in granular media
In solid-on-solid friction, the microcontacts form a two-dimensional disordered array at the interface between the solids. In granular materials, however, simulations indicate that several layers of grains move during the shearing process . This is not entirely surprising, given that the interior of the material is not stronger than the surface region, as long as there are no cohesive forces between particles. In order to study granular friction theoretically, one therefore must analyse a three dimensional network of microcontacts. The failure of a subset of those contacts allows motion of the shearing plate. Distinct element method simulations by Morgan et al. indicate that the grain size distribution and interparticle frictional properties can strongly affect the shear stress necessary for a failure of that contact network and thus granular flow.
Microcontacts can be characterized by the total force and shear force on the contact, and the orientation of the microcontact relative to the total applied stress. Recent experiments have shown that the distribution and orientation of individual particle contacts is highly nonuniform and hysteretic. Shear and normal stresses are transmitted along stress chains, as for example demonstrated in recent simulations by Radjai et al. and in experiments using birefringent disks by Howell et al. . The magnitude of stress between particles follows an exponential probability distribution unless the packing fraction exceeds a critical threshold. The arrangement of stress chains strongly depends on the history of the sample. Small particle rearrangements, e.g. through heating of an individual particle, can alter the stress transmission . Recent experiments by Vanel et al. have revealed that the stresses at the bottom of a sandpile can have either a maximum or a local minimum at the center, depending on the preparation procedure. This experiment indicates that even the preferential direction of stress chains may be affected by the history of the sample. One model of stress transmission by Cates et al. does not try to develop a stress-strain relation, but instead relates different components of the stress tensor to each other. The granular material is described as a fragile network of stress transmitting contacts. If the direction of the applied stress changes, the network breaks. Simulations by Radjai et al. suggest that the network of stress chains can be separated into a strong subnetwork in which most contact surfaces are perpendicular to the direction of total stress, and a weak subnetwork of contacts oriented preferentially perpendicular to it. The weak subnetwork sustains shear stress and breaks through frictional sliding between bead surfaces. The strong subnetwork sustains the normal load, and its contacts change mostly through rolling.
### E Role of dilation
It has been known since Reynolds that shearing a granular material produces dilation, i.e. an expansion of the material in the direction perpendicular to the motion. In wet granular materials, our group found that a granular layer dilates by $`10\%`$ of one grain diameter while the top layer is translated by one grain diameter ; this observation suggests that at most a few layers are involved in the dilation. Computer simulations by Thompson and Grest and Zhang and Campbell have indicated that dilation is prominent in stick-slip motion. A region of about 6-12 grain diameters typically dilates and starts to flow in the simulations at small normal forces. Zhang and Campbell found an approximately linear decrease in particle velocity with depth at high normal forces, and a faster than linear decrease at low normal forces.
Experiments at geophysical pressures by Marone have shown that the granular layer becomes compacted during the waiting time at high normal stresses . Marone notes that compaction increases in proportion to the frictional strength of the material; this indicates a close relation between compaction and strength. The role of dilation therefore should be investigated in connection with measurements of the frictional strength.
### F Role of humidity
Experiments in a rotating cylinder have indicated that strengthening of granular material (indicated by an increased critical angle of repose) might be due to the formation of liquid bridges, which introduce attractive capillary forces between grains. A logarithmic increase of the critical angle of repose of the pile with waiting time was observed. The rate of increase was found to depend strongly on humidity and no strengthening occurred for bead diameters $`d>500\mu m`$ or for vanishing humidity. This behavior is consistent with liquid bridges being the sole cause of strengthening in this study, but angle of repose experiments only look at strengthening in the limit of very small stress, where the role of other strengthening mechanisms might be diminished.
We varied the humidity in some experiments and carried out experiments under water, where liquid bridges cannot form, in order to distinguish this mechanism of strengthening from other possibilities.
### G This work on strengthening in granular materials
Judging from the three dimensional nature of the sheared region, the role of dilation, humidity, and the nonuniform and hysteretic transmission of stress through a granular material, we cannot expect a priori that the laws that govern strengthening of solids will hold for granular materials.
In our experiments a small, but non-negligible normal force is applied to the granular material, in contrast to the strengthening studies based on the critical angle of repose. On the other hand, the applied normal force is smaller than the one used in geophysical experiments by a factor of about $`10^6`$. The threshold stress for plastic deformation of the glass particles is reached for an approximate area for individual microcontacts of the order of $`25\mathrm{nm}^2`$ per particle in our experiments . Creep of individual microcontacts would therefore be at the nm scale, which cannot be resolved with our apparatus. Particle fracture - which occurs in geophysical experiments and yields characteristic grain size distributions - plays no role in our studies. With the ability to image the motion of particles and to measure creep and dilation on the $`\mu \mathrm{m}`$ scale, our experiments focus on the role of the arrangement of microcontacts (the fabric of the granular material) in determining its shear strength.
## III Experimental Results
### A Apparatus
The experimental setup, shown in Figure 1 has been described previously , including the modifications for experiments under water . Our experiments were carried out in a thin $`3\mathrm{m}\mathrm{m}`$ layer of $`103\pm 14\mu \mathrm{m}`$ diameter glass beads (Jaygo Inc.) in a $`11\times 18.5\mathrm{cm}`$ tray. A transparent acrylic plate ($`5.28\times 8.15\mathrm{cm}`$) of weight $`26.7`$ g is placed on top of the granular material. Good contact between plate and layer is assured either by etching grooves or gluing a layer of beads to the plate’s lower surface. The plate is pushed across the granular material by a spring that touches only a small steel ball glued to the plate; this allows for vertical motion of the plate. The spring’s fulcrum is moved toward the plate at constant speed by a microstep stepper motor. Bending of the spring is measured with a displacement sensor, which indicates the force applied to the plate by the spring with a relative precision of better than $`0.1\%`$. The vertical position of the plate is measured with a second displacement sensor having a resolution of $`0.1\mu \mathrm{m}`$. As indicated in Figure 1, the granular material and the plate are kept under water in some experiments. The motion of particles is imaged from the side with a fast camera (Kodak Motioncorder SR-500).
### B Particle motion in sheared granular layers
In order to establish how deeply the motion of the top plate penetrates into the granular material, we image the position of glass beads at the side of the cell. A $`5.3\times 18.5\mathrm{cm}`$ tray - slightly wider than the plate - was designed with smooth glass sidewalls to allow for optimal visualization. Through direct illumination we obtain small circular bright spots for all beads close to the wall. We track the motion of all particles in the layer closest to the wall from images taken at $`500`$ frames per second during motion of the plate. (The motion is captured during one slip in the stick-slip regime, which is described in section III C). The changes in bead positions between frames are used to calculate the particle tracks and instantaneous velocities. Since the conditions at the wall are not identical to the inside of a granular material, motion of particles near the side yield only an approximation of particle motion in the interior. However, the observations regarding the depth profile and nonuniformity of motion are unlikely to be qualitatively different for the interior.
One measurement, where $`2000`$ glass beads are tracked during a short ($`40\mathrm{m}\mathrm{s}`$) slip, is shown in Figure 2. Individual particle velocities and directions of motion are indicated by the direction and length of individual lines. Approximately $`5`$ layers of beads are moving. The particle velocity decreases more strongly with depth than would be the case for a fluid, but qualitatively very similar to simulations . The motion of neighboring particles is not perfectly correlated but differs in direction and velocity. This indicates that the material cannot be described as a solid with a single fracture plane, but that most individual particle contacts are broken up. Slip of the plate therefore involves the breakup of most particle contacts within about 5 layers close to the plate: this process is quite different from the breakup of a 2D array of contacts that occurs when a solid starts sliding on another solid. A more detailed study of particle motion is in progress and will be reported elsewhere .
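The original tracking procedure is not spelled out here; one standard approach, valid at 500 frames per second where the displacement between frames is small compared to the bead spacing, is nearest-neighbor linking of bead centroids in consecutive frames. A minimal sketch (Python; the names are ours):

```python
import numpy as np

def frame_velocities(pos_a, pos_b, dt, max_disp):
    """Link bead centroids between two frames by nearest neighbor.
    pos_a : (N, 2) positions in the first frame
    pos_b : (M, 2) positions in the second frame
    Returns velocities for beads whose nearest candidate lies within
    max_disp, plus a boolean mask of successfully linked beads."""
    d = np.linalg.norm(pos_b[None, :, :] - pos_a[:, None, :], axis=2)
    j = d.argmin(axis=1)                 # closest candidate in frame b
    ok = d[np.arange(len(pos_a)), j] < max_disp
    return (pos_b[j[ok]] - pos_a[ok]) / dt, ok
```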
### C Strengthening in dry granular materials
In dry granular materials stick-slip motion with long sticking times and short, fast slips is the prevalent behavior at constant motor speed in our experiments. The strength of the material was therefore measured from the spring displacement, i.e. the maximum spring displacement prior to a slip. In the steady state, the slip reproducibly starts at the same spring displacement, but the first slip after the motor is started can be different. In the first set of experiments to be considered, the motor is stopped during stick-slip motion and restarted after a waiting time $`\tau `$. The applied stress during waiting varies by roughly $`30\%`$ (depending on how long after a slip event the motor is stopped). Figure 3 shows the spring displacement vs time, with the motor started at $`t=1.19`$ s after a waiting time $`\tau =22`$ s (dotted line), $`\tau =1130`$ s (dashed line), or $`\tau =26,400`$ s (solid line). The total spring displacement before the first slip increases with waiting time, but after at most two slips a steady stick-slip motion is reached with the same maximum spring displacement for all waiting times (the curves are offset by $`150\mu \mathrm{m}`$).
In some experiments no shear stress was applied during the waiting time by moving the motor backward prior to waiting. The spring displacement when the motor started with a completely unbent spring at $`t=8.2`$ s after a waiting time of $`37,213`$ s is shown in figure 4. After several slips the motor direction is reversed at $`t=50`$ s. In this case, the stress just before the first slip is not enhanced. At low normal stress in the stick-slip regime, the strength of dry granular material therefore only increases with waiting time if a shear stress is applied during the waiting time.
The ratio of the maximum spring displacement prior to the first slip $`F_{max}`$ to the average maximum spring displacement $`F_{norm}`$ of all except the first two slips indicates the relative strengthening of the material with waiting time. This ratio is shown in figure 5 for the stressed case. The waiting time strengthening under shear stress appears to be faster than logarithmic when the full range of waiting times $`\tau `$ is included. On shorter timescales the material becomes roughly $`2\%`$ stronger (compared to continuous stick-slip motion) per decade increase in $`\tau `$. On longer timescales $`\tau >1000`$ s the strengthening is roughly $`10\%`$ per decade with a characteristic initiation time $`\tau _0`$ (the time before which little strengthening occurs based on an extrapolation of the approximately logarithmic increase) of roughly $`600\pm 300`$ s.
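One convenient way to summarize such data with a single initiation time is a fit of the form $`F_{max}/F_{norm}=1+c\mathrm{ln}(1+\tau /\tau _0)`$, which is flat for waiting times short compared to $`\tau _0`$ and logarithmic for long ones. This interpolation form is our own illustrative choice, not a fit form taken from the data analysis; a sketch (Python, with hypothetical data arrays):

```python
import numpy as np
from scipy.optimize import curve_fit

def strengthening(tau, c, tau0):
    """F_max/F_norm = 1 + c*ln(1 + tau/tau0): approximately constant
    for tau << tau0, growing by c*ln(10) per decade for tau >> tau0."""
    return 1.0 + c * np.log1p(tau / tau0)

# with measured arrays tau_s (waiting times, s) and ratio (F_max/F_norm):
# popt, pcov = curve_fit(strengthening, tau_s, ratio, p0=(0.04, 600.0))
```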
### D Role of Dilation
The vertical position $`h(t)`$ of the plate indicates the dilation or compaction of the granular material. Due to small deviations from perfect flatness of the target plate for the vertical displacement sensor and slow fluctuations in the sensor readout due to small variations in temperature, the absolute dilation cannot be computed to better than $`1\mu \mathrm{m}`$ over long times or over horizontal plate movements long compared to the particle diameter.
Figure 6 shows the vertical plate position after a waiting time under shear stress for the experiments described in Figure 3. After the motor is started, a gradual dilation of the granular material during the sticking time takes place, followed by rapid dilation as the slip starts and compaction immediately following the slip. The first slip after a long waiting time (solid line) is followed by especially strong compaction.
The vertical dilation without applied shear stress during waiting is shown in Figure 7, which corresponds to the spring displacement data of Figure 4. As the plate starts to move, the layer gradually dilates. The gradual dilation does not influence the maximum frictional force, which is comparable for all slips as shown in Figure 4. As soon as the shear stress is released at $`t>60`$ s, the layer compacts almost instantaneously.
### E Strengthening under water
Since the formation of liquid bridges between particles could be the cause for the observed strengthening of granular material, as described in , a second set of experiments was carried out under water, where liquid bridges are absent. Continuous sliding is prevalent under water because the fluid lubricates the contacts. Figure 8 shows the typical behavior of the spring displacement $`d(t)`$ (Fig. 8a) and of the vertical position $`h(t)`$ (Fig. 8b) as functions of time $`t`$ in two different cases: (1) The horizontal stress is released before the experiment; (2) The horizontal stress is continuously applied. In both cases the spring displacement during the transient (which is proportional to the frictional force at the small accelerations considered here) is larger than the frictional force during steady sliding. The force reaches a maximum $`F_{max}`$ at the maximum dilation rate. At later times the material continues to dilate and approaches a steady state dilation, while the frictional force decreases toward a steady sliding frictional force. In case (2), the layer is initially less packed and the total dilation $`\mathrm{\Delta }h`$ observed during the experiment is smaller.
As observed for the dry granular material, we find that the maximum frictional force $`F_{max}`$ depends on the resting time $`\tau `$, and on the horizontal stress applied during this interval, as shown in Figure 9. The experimental procedure is as follows: The plate is initially pushed at constant velocity ($`28.17\,\mu \mathrm{m}/\mathrm{s}`$) until the steady state regime is reached with a dynamic friction force $`F_d`$. Then, the motion of the translator is suddenly stopped and the plate stops at a well-defined horizontal applied stress ($`F=3.2\times 10^{-2}\,\mathrm{N}\approx F_d`$). The translator motion is started again after a delay $`\tau `$. As for dry granular materials, we find that the maximum value of the frictional force $`F_{max}`$ depends on the waiting-time $`\tau `$: The maximum frictional force increases by roughly $`10\%`$ of $`F_d`$ for each order of magnitude increase in waiting time with a characteristic initiation time $`\tau _0=0.6\pm 0.2`$ s, a factor of $`10^3`$ faster than for a dry granular material. The maximum frictional force $`F_{max}`$ increases by about $`40\%`$ in $`10`$ hours. On the other hand, if no horizontal stress is applied during the waiting-time $`\tau `$ (the spring is pulled back), no increase of the maximum frictional force is measured. For short waiting times $`F_{max}`$ is found to be larger when no stress is applied; this result is in agreement with the fact that the maximum frictional force increases with the total dilation of the layer $`\mathrm{\Delta }h`$ in the continuous sliding case for a wet granular material . The two curves for strengthening with and without applied shear stress intersect for $`\tau \approx 10^4`$ s. For longer waiting times $`F_{max}`$ becomes larger for waiting under an applied shear stress while the dilation following the waiting ($`\mathrm{\Delta }h`$) remains larger when no stress is applied.
A careful study of the behavior of the vertical position of the plate $`h`$ as a function of its horizontal position $`x`$ shown in Figure 10 indicates that $`h`$ reaches a maximum during the transient regime when a horizontal stress is applied during the waiting time. The distance over which the vertical position of the plate reaches its maximum is about one particle radius $`R`$, comparable to the distance over which the system reaches the steady state regime in experiments starting without applied shear stress . However, after a waiting time under stress, the vertical position of the plate decreases for a sliding distance of a few particle diameters. The frictional force reaches its asymptotic value over the distance $`R`$, and appears not to be affected by this decrease of the vertical position of the plate.
Creep during the waiting time can be directly observed. If the plate is kept under a shear stress comparable to the steady sliding value, it creeps slowly ($`dx/dt\approx 1\,\mu \mathrm{m}/\mathrm{h}`$) and goes up (Fig. 11). Such strong creep has not been observed in dry granular materials. The vertical velocity $`dh/dt`$ of the plate is of the same order of magnitude as the horizontal velocity $`dx/dt`$. Moreover, the whole vertical displacement of the plate $`\delta h`$ after $`7`$ hours is about $`20\,\mu \mathrm{m}`$, much larger than any vertical displacement of the plate observed in the dynamical regime.
## IV Discussion and Conclusion
The main results of this investigation of the shear strength of a granular material at low normal forces are as follows:
(a) The strength of a granular material increases roughly logarithmically with the time of stationary contact $`\tau `$ (waiting time) in both dry material (Fig. 5) and wet material (Fig. 9), if a shear stress is applied during the waiting time. The characteristic initiation time $`\tau _0`$ is roughly three orders of magnitude smaller for wet granular material.
(b) In both dry and wet granular matter, the strength does not increase with $`\tau `$ if no shear stress is applied during the waiting time (Figs. 4 and 9).
(c) In both the dry and the wet granular material, the layer compacts immediately, when the shear stress is released. This compaction leads to an instantaneous increase in the strength of the wet granular material, but not in the dry case (Figs. 4 and 9).
(d) Particle tracking reveals that in a region of approximately $`5`$ particle layers below the sheared surface, particles move and lose contact with each other during stick-slip motion (Fig. 2). The particle velocities within this fluidized region decrease faster than linearly.
(e) Dilation or compaction often influences the strength of the material. However, gradual slight dilation in a dry material (Fig. 7) and gradual slight compaction in a wet material (Fig. 10) do not necessarily influence the frictional forces.
In order to understand these results we need to go back to the main question: What are the microscopic mechanisms for strengthening? In the following paragraphs we look at individual mechanisms and the experimental results that support them or show their limitations.
The gradual strengthening of individual microcontacts through creep on the level of microcontacts has been found in other work to lead to strengthening in solid-on-solid friction. The microcontact size is determined by the yield stress of the material. Unless the contact area is flat, the material deforms until the microcontact is just large enough to support the stress. Strengthening of microcontacts should therefore be present for granular materials with nm scale contact areas just as it is for solid-on-solid friction.
Another mechanism that can strengthen microcontacts is an attractive surface force, created by gradually developing liquid bridges between particles at rest. This mechanism accounts for the observed increase in strengthening at higher humidity, but can not explain why strengthening also occurs for a granular layer under water. Other mechanisms for strengthening must therefore be present.
No strengthening is observed in the absence of an applied shear stress even though the absence of applied shear stress only decreases the total stress on microcontacts by 10–20%. In addition, the observed creep distance under water (Fig. 11) is several orders of magnitude larger than the characteristic microcontact length; this suggests that microcontacts break rather than strengthen during the time at rest. Microcontact strength can therefore not account for strengthening under water or the lack of strengthening in the absence of an applied shear stress in both dry and wet material.
The large creep under water suggests that grains rearrange more easily in wet than in dry granular matter. One possible explanation is that water lubricates individual contacts between grains, which allows grains to slip past each other with significantly smaller frictional force. Lubrication is also evident in the reduced friction coefficient of a wet granular material . Even though large creep decreases the contact time of microcontacts, strengthening occurs at the same rate, and with a much smaller characteristic initiation time $`\tau _0`$, in wet granular matter. This indicates that rearrangements in the network of contacts contribute significantly to strengthening. The rigidity of the network of contacts can influence the frictional strength of the material, since direct observations indicate that a three dimensional network of contacts is broken (and a fluidized region of several layers close to the sheared surface forms) when sliding starts.
The observation of strengthening solely under shear stress differs from results in solid-on-solid friction, but is similar to results at geophysical pressures . It implies that the contact network changes in response to an applied shear stress regardless of how the contacts themselves evolve. This is consistent with the description of a granular material as a fragile network that breaks if the direction of the applied stress changes. Recent experiments in a Couette cell, which will be reported in detail elsewhere , support the conclusion that the direction of the applied stress matters. In the Couette cell a clockwise shear stress that is less than the slip threshold is first applied for a waiting time $`\tau `$. If the granular material is subsequently sheared clockwise, a roughly logarithmic increase in the strength with $`\tau `$ is observed, as for the flat layer studied in this paper. On the other hand, if the subsequent shear is counterclockwise, no strengthening is observed.
The density of the granular material is a rough indicator of the density of the contact network, as more grains tend to touch each other in denser configurations. However, compaction and dilation are not always correlated with frictional strength in our experiments, possibly because we are limited to measuring the mean density, not the local density of the surface region, where the network of microcontacts fails. The possibility that the orientation of microcontacts could be as important for the strength of the material as the density of contacts should also be explored further.
When the shear stress is completely released, a significant density change occurs. This suggests that the granular material under normal stress can only compact if the existing contact network breaks - which in our case happens when the shear stress is removed. Recent experiments in a Couette cell show that changing the direction of shear stress can be used to strengthen the granular material rapidly. The inner cylinder of the Couette cell is connected to a motor through a soft spring, which allows for variations in the applied shear force at forces below the frictional strength of the granular material. When the direction of the applied shear stress is reversed periodically at stresses below that necessary to initiate sliding, the strength of the material can be increased rapidly in proportion to the number of direction reversals.
In conclusion, strengthening in these experiments can be explained by two fundamentally different phenomena. One is a strengthening of individual microcontacts due to time of contact or slow creep - or due to the formation of liquid bridges. The other fundamental strengthening mechanism is related to the spatial arrangement of beads and hence the arrangement and orientation of microcontacts. The strength of the contact network (sometimes called the fabric of the granular material) can be related to the compaction of the granular material, but our experiments indicate that compaction is not the only determinant of the strength of the network.
Strengthening due to rearrangements alone can also be related to a wide range of other systems that can jam, such as molecules in a glass. It has recently been suggested that some aspects of the jamming behavior, or the “unjamming” which we study, might have a common theoretical description, so a good understanding of the strengthening might eventually be useful for a description of the properties of these other systems as well.
## V Acknowledgments
This work was supported by the U.S. National Science Foundation under Grant DMR-9704301. J.-C. G. thanks the Centre National de la Recherche Scientifique (France) for supporting the research of its members carried out in foreign laboratories. The optical measurements were carried out by P. Ingebretson. We appreciate helpful discussions with C. Scholz and C. Marone.
# Topology in CP<sup>N-1</sup> models: a critical comparison of different cooling techniques
## Abstract
Various cooling methods, including a recently introduced one which smoothes out only quantum fluctuations larger than a given threshold, are applied to the study of topology in 2d CP<sup>N-1</sup> models. A critical comparison of their properties is performed.
Two-dimensional CP<sup>N-1</sup> models play an important role in quantum field theory because they share many properties with QCD. In particular, they possess instanton classical solutions and one can define topological charge and susceptibility in analogy with QCD. CP<sup>N-1</sup> models represent therefore a useful theoretical laboratory to investigate numerical methods to be eventually applied to study the topology in QCD. One of the most powerful tools for the study of the topological structure of the vacuum is the “cooling” method. It consists in measuring the topologically relevant quantities on the ensemble of lattice configurations obtained by replacing each equilibrium configuration by the one resulting after a sequence of local minimizations of the action.
The aim of this work is to get insight into a new cooling method first adopted in Ref. , by comparing it with the “standard” and with its “controlled” version adopted by the Pisa group .
We chose the standard discretization for the action of the 2d CP<sup>N-1</sup> model:
$$S^L=-N\beta \underset{n,\mu }{\sum }\left(\overline{z}_{n+\mu }z_n\lambda _{n,\mu }+\overline{z}_nz_{n+\mu }\overline{\lambda }_{n,\mu }-2\right)$$
(1)
where $`z_n`$ is an $`N`$component complex scalar field with $`\overline{z}_nz_n=1`$, $`\lambda _{n,\mu }`$ is a U(1) gauge field satisfying $`\overline{\lambda }_{n,\mu }\lambda _{n,\mu }=1`$ and $`\beta =1/(Ng)`$, with $`g`$ the lattice coupling. We used the standard action both in Monte Carlo simulations and in the cooling instead of any improved lattice action, since we wanted to test the cooling techniques in a situation where cutoff effects are large. The lattice topological charge density was defined as
$$q^L(x)=\frac{i}{2\pi }\underset{\mu \nu }{\sum }ϵ_{\mu \nu }\mathrm{Tr}\left[P(x)\mathrm{\Delta }_\mu P(x)\mathrm{\Delta }_\nu P(x)\right],$$
(2)
with
$$\mathrm{\Delta }_\mu P(x)\equiv \frac{P(x+\mu )-P(x-\mu )}{2},\qquad P_{ij}(x)\equiv \overline{z}_i(x)z_j(x),$$
(3)
giving for the lattice topological susceptibility
$$\chi ^L\equiv \underset{x}{\sum }\langle q^L(x)q^L(0)\rangle =\frac{1}{L^2}\langle \left(Q^L\right)^2\rangle ,$$
(4)
where $`Q^L\equiv \sum _xq^L(x)`$ and $`L`$ is the lattice size. The cooling algorithm consists in assigning to each lattice variable $`z_n`$ and $`\lambda _{n,\mu }`$ a new value $`z_n^{\mathrm{new}}`$ and $`\lambda _{n,\mu }^{\mathrm{new}}`$ which locally minimizes the action, keeping all other variables fixed. In the “standard cooling” these replacements are unconstrained. We will call “new cooling” the one for which the replacements are done only if the angle $`\alpha `$ between the new and the old field variables is larger than a given value $`\delta `$, and “Pisa cooling” the one for which the local minimization is performed with the constraint $`\alpha \le \delta _{\mathrm{Pisa}}`$. Notice that between the “Pisa cooling” and the “new cooling” there is a substantial difference: while “Pisa cooling” acts first on the smoother fluctuations, the “new cooling” performs local minimizations only if these fluctuations are larger than a given threshold. Moreover the “new cooling” automatically stops when there are no more fluctuations beyond the threshold. It should be pointed out that any cooling procedure causes a partial loss of the topological content of the cooled configuration (namely “small instantons”). However this loss occurs at a fixed scale in lattice units and thus vanishes in the continuum limit, unless the instanton distribution is ultraviolet singular (as in the case of CP<sup>1</sup>).
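For concreteness, the sketch below shows how the definitions (2)-(4) translate into a measurement on a stored configuration. It is our own numpy illustration rather than code from this work; the array layout and the random (topologically rough) test field with periodic boundaries are assumptions made purely for the demonstration:

```python
import numpy as np

def topological_charge(z):
    """Q^L = sum_x q^L(x) for a 2d CP^{N-1} field z of shape (L, L, N),
    normalized so that zbar.z = 1 on every site; periodic boundaries."""
    # projector P_ij(x) = zbar_i(x) z_j(x)
    P = np.conj(z)[..., :, None] * z[..., None, :]
    # symmetric lattice derivative Delta_mu P(x) = (P(x+mu) - P(x-mu))/2
    dP = [(np.roll(P, -1, axis=mu) - np.roll(P, +1, axis=mu)) / 2.0
          for mu in (0, 1)]
    # q^L(x) = (i/2pi) eps_{mu nu} Tr[P(x) Delta_mu P(x) Delta_nu P(x)]
    tr01 = np.einsum('xyij,xyjk,xyki->xy', P, dP[0], dP[1])
    tr10 = np.einsum('xyij,xyjk,xyki->xy', P, dP[1], dP[0])
    q = (1j / (2.0 * np.pi)) * (tr01 - tr10)   # real up to rounding
    return q.real.sum()

# on a rough random field Q^L is far from an integer; cooling drives
# the configuration toward (near-)integer values of Q^L
rng = np.random.default_rng(0)
L, N = 16, 4
z = rng.normal(size=(L, L, N)) + 1j * rng.normal(size=(L, L, N))
z /= np.linalg.norm(z, axis=-1, keepdims=True)
print(topological_charge(z))
```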
We considered first an “artificial” 1-instanton configuration discretized on the lattice. Although the three different cooling procedures act differently, since they start to deform the configuration in different regions, the curves for $`Q^L`$ and $`S^L`$ under the three coolings as a function of the instanton size $`\rho `$ fall on top of each other (see Fig. 1).
We determined the topological susceptibility using the “new cooling” and compared the results with those from the “field theoretical method” . In the field theoretical approach one has
$$\chi ^L(\beta )=a^2Z(\beta )^2\chi ^{\mathrm{cont}}+M(\beta )+O(a^4),\qquad M(\beta )=a^2A(\beta )\langle S(x)\rangle _{\mathrm{np}}+P(\beta )I,$$
(5)
where $`Z(\beta )`$ is a finite multiplicative renormalization of the discretized topological charge density , while $`M(\beta )`$ is an additive renormalization containing mixing to operators of equal or lower dimension and same quantum numbers, namely the action density and the identity operator. On the lattice $`\chi ^L(\beta )`$ is measured during the Monte-Carlo simulation, and $`\chi ^{\mathrm{cont}}`$ is extracted by subtracting the renormalizations, which are computed non-perturbatively on the lattice by means of the “heating method” ($`Z`$ and $`P`$ can also be computed perturbatively ). The “field theoretical method” can be improved by using a smeared topological charge density operator , built from the standard operator by replacing the fields $`z`$ and $`\lambda _\mu `$ with smeared fields $`z^{\mathrm{smear}}`$ and $`\lambda _\mu ^{\mathrm{smear}}`$. In this way the renormalizations are strongly reduced and a much better accuracy is achieved for $`\chi ^{\mathrm{cont}}`$.
We performed numerical simulations for $`N=4`$, $`10`$, $`21`$. The simulation algorithm is a mixture of 4 microcanonical updates and 1 over-heat bath. For each simulation we collected 100K configurations after 10K thermalization updating steps. We used several values of the $`\delta `$ parameter in the “new cooling” algorithm, while for the “Pisa cooling” we considered $`\delta _{\mathrm{Pisa}}=0.2`$. To set the scale we have taken the correlation length $`\xi _G`$ defined as the second moment of the correlation function $`\langle \text{Tr}P(x)P(0)\rangle `$. In Fig. 2 we show the behavior of $`\chi ^L`$ on configurations cooled by the “new cooling” with different $`\delta `$’s, while in Fig. 3 we plot $`\chi ^{\mathrm{cont}}\xi _G^2`$ for the values of $`\delta `$ which correspond to the peak region in $`\chi ^L`$ in Fig. 2 and compare the results with the field theoretical determination (with and without smearing). There is consistency between the two determinations for all the considered values of $`\delta `$, which are those for which the smoothing of the configuration works best.
We have also calibrated the three cooling techniques (number of cooling steps for the standard and Pisa cooling versus $`\delta `$ for the new cooling) in order that the average energy on the ensemble cooled in the three different ways is the same. Then, comparing by eye several thermal configurations obtained after equivalent amounts of the three coolings, we have observed that the distributions and the shapes of the instanton bumps are roughly the same. Also the values of $`\chi ^{\mathrm{cont}}\xi _G^2`$ obtained after “equivalent” coolings are in agreement within the statistical errors. The only measurement where we could see a discrepancy is that of the “shell” correlation function of the topological charge density $`\langle q_L(r)q_L(0)\rangle `$ (see Fig. 4), which at short distances is slightly larger for the new cooling than for the standard and Pisa coolings.
Our conclusion is that, except for the last observation which deserves further study, there is no appreciable difference between the three types of cooling we have investigated.
# Feasibility study of using the overlap-Dirac operator for hadron spectroscopy. Presented by C. McNeile.
## 1 INTRODUCTION
Two critical systematic errors in the calculation of the $`f_B`$ decay constant from lattice QCD are the chiral extrapolations and the unquenching errors . The only way to reduce these errors is to simulate QCD with lighter quark masses. Unfortunately, because of exceptional configurations, it is difficult to further reduce the masses of the light quarks in quenched simulations with the clover operator. Progress in reducing the sea quark masses in dynamical fermion simulations with Wilson like quarks is slow .
It seems plausible that the difficulty of simulating with light quark masses with the clover operator is due to explicit chiral symmetry breaking in the action. Neuberger has derived a fermion operator, called the overlap-Dirac operator, that has a lattice chiral symmetry .
Our goal is to simulate the overlap-Dirac operator in a mass region ($`M_{PS}/M_V=0.3`$ to $`0.5`$), which is inaccessible to clover quarks (but not staggered quarks ). Most of the techniques developed in the quenched theory can be used for full QCD simulations .
## 2 THE OVERLAP-DIRAC OPERATOR
The massive overlap-Dirac operator is
$$D^N=\frac{1}{2}\left(1+\mu +(1-\mu )\gamma _5\frac{H(m)}{\sqrt{H(m)^{\dagger }H(m)}}\right)$$
(1)
where $`H(m)`$ is the hermitian Wilson fermion operator with negative mass, defined by
$$H(m)=\gamma _5(D^W-m)$$
(2)
where $`D^W`$ is the standard Wilson fermion operator. The parameter $`\mu `$ is related to the physical quark mass and lies in the range $`0`$ to $`1`$. The $`m`$ parameter is a regulating mass, in the range between a critical value and 2. The physics should be independent of the mass $`m`$, but the value of $`m`$ affects the locality of the operator and the number of iterations required in some of the algorithms used to compute the overlap-Dirac operator.
## 3 NUMERICAL TECHNIQUES
Quark propagators are calculated using a sparse matrix inversion algorithm. The inner step of the inverter is the application of the fermion matrix to a vector. For computations that use the overlap-Dirac, the step function
$$ϵ(H)\underline{b}=\frac{H}{\sqrt{H^{\dagger }H}}\underline{b}$$
(3)
must be computed using some sparse matrix algorithm. The nested nature of the algorithm required to calculate the quark propagators for the overlap-Dirac operator makes the simulations considerably more expensive than those that use traditional fermion operators.
Practical calculations of the overlap operator are necessarily approximate. To judge the accuracy of our approximate calculation we used the (GW) Ginsparg-Wilson error:
$$\left\|\gamma _5D^N\underline{x}+D^N\gamma _5\underline{x}-2D^N\gamma _5D^N\underline{x}\right\|\frac{1}{\left\|\underline{x}\right\|}$$
(4)
which just checks that the matrix obeys the Ginsparg-Wilson relation .
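To make the operator and the error measure concrete, here is a small self-contained sketch (our own illustration, not the production code of this study): it builds the massless overlap operator of Eq. (1) by exact diagonalization, with a random hermitian matrix and a diagonal sign matrix standing in for $`H(m)`$ and $`\gamma _5`$, and evaluates the Ginsparg-Wilson error of Eq. (4). Exact diagonalization is only feasible at toy sizes, which is the point of the iterative methods discussed below.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# toy stand-in for the hermitian kernel H(m) = gamma5 (D^W - m)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
g5 = np.diag(np.where(np.arange(n) % 2 == 0, 1.0, -1.0))  # toy gamma5, g5^2 = 1

# exact step function eps(H) = H / sqrt(H^dagger H) by diagonalization
w, V = np.linalg.eigh(H)
eps = (V * np.sign(w)) @ V.conj().T

mu = 0.0   # massless overlap operator, for which the GW relation is exact
DN = 0.5 * ((1 + mu) * np.eye(n) + (1 - mu) * g5 @ eps)

# Ginsparg-Wilson error of Eq. (4) on a random test vector
x = rng.normal(size=n) + 1j * rng.normal(size=n)
gw = g5 @ DN @ x + DN @ g5 @ x - 2.0 * DN @ (g5 @ (DN @ x))
print(np.linalg.norm(gw) / np.linalg.norm(x))   # close to machine precision
```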
Our numerical simulations were done using $`\beta =5.9`$ quenched gauge configurations, with a volume of $`16^3\times 32`$. The quark propagators were generated from point sources. For all the algorithms we investigated, we used $`m`$ equal to $`1.5`$.
## 4 LANCZOS BASED METHOD
Borici has developed a method to calculate the action of the overlap-Dirac operator on a vector, using the Lanczos algorithm. In exact arithmetic, the Lanczos algorithm generates an orthonormal set of vectors that tridiagonalises the matrix.
$$HQ_n=Q_nT_n$$
(5)
where $`T_n`$ is a tridiagonal matrix. The columns of $`Q_n`$ contain the Lanczos vectors.
The “trick”, to evaluate the step function (Eq. 3), is to set the target vector $`\underline{b}`$ as the first vector in the Lanczos sequence. An arbitrary function $`f`$ of the matrix $`H`$ acting on a vector is constructed using
$$(f(H)b)_i=\underset{j}{\sum }(Q_nf(T_n)Q_n^{\dagger })_{ij}b_j=\left\|b\right\|\left(Q_nf(T_n)\right)_{i1}$$
(6, 7)
where the orthogonality of the Lanczos vectors has been used. The $`f(T_n)`$ matrix is computed using standard dense linear algebra routines. For the step function the eigenvalues of $`T_n`$ are replaced by their moduli. Eq. 7 is linear in the Lanczos vectors and thus can be computed in two passes.
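A dense toy version of the method (our own sketch, with assumptions: all Lanczos vectors are kept in memory instead of using two passes, the eigenvalue replacement is written directly as $`\lambda \to \mathrm{sign}(\lambda )`$, which is equivalent to dividing by the moduli, and no reorthogonalization or breakdown handling is attempted) is:

```python
import numpy as np

def lanczos_step_fn(H, b, nsteps):
    """Approximate eps(H) b = H (H^2)^{-1/2} b with nsteps Lanczos
    iterations, following Eqs. (5)-(7); rounding-induced loss of
    orthogonality is deliberately not repaired."""
    n = len(b)
    Q = np.zeros((n, nsteps), dtype=complex)
    alpha = np.zeros(nsteps)
    beta = np.zeros(max(nsteps - 1, 0))
    Q[:, 0] = b / np.linalg.norm(b)
    q_prev, beta_prev = np.zeros(n, dtype=complex), 0.0
    for j in range(nsteps):
        r = H @ Q[:, j] - beta_prev * q_prev
        alpha[j] = np.vdot(Q[:, j], r).real
        r = r - alpha[j] * Q[:, j]
        if j + 1 < nsteps:
            beta[j] = np.linalg.norm(r)
            q_prev, beta_prev = Q[:, j], beta[j]
            Q[:, j + 1] = r / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    w, U = np.linalg.eigh(T)                 # T is real symmetric
    sgnT = (U * np.sign(w)) @ U.T            # step function on the subspace
    return np.linalg.norm(b) * (Q @ sgnT[:, 0])   # Eq. (7) with f = sign

# quick accuracy check against the exact step function
rng = np.random.default_rng(3)
n = 120
A = rng.normal(size=(n, n))
H = (A + A.T) / 2
b = rng.normal(size=n) + 0j
w, V = np.linalg.eigh(H)
exact = (V * np.sign(w)) @ V.T @ b
print(np.linalg.norm(lanczos_step_fn(H, b, 60) - exact))
```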
The major problem with the Lanczos procedure is the loss of the orthogonality of the sequence of vectors due to rounding errors. It is not clear how this lack of orthogonality affects the final results. Some theoretical analysis has been done on this method . It is claimed that the lack of orthogonality is not important for some classes of functions.
In Fig. 1, we plot the eigenvalues of the Ginsparg-Wilson operator, as a function of the number of Lanczos steps, for a $`2^4`$ hot $`SU(3)`$ configuration. As the number of Lanczos steps increases, the eigenvalue spectrum moves closer to a circle (the correct result). Even after 50 iterations of the Lanczos algorithm, there are still small deviations from the circle.
Unfortunately, it is much harder to look at the eigenvalue spectrum using a production gauge configuration, so we computed the GW error instead. The GW error was: $`5\times 10^{-3}`$ (50 iterations), $`6\times 10^{-4}`$ (100 iterations), and $`3\times 10^{-4}`$ (300 iterations) on a single gauge configuration.
Fig. 2 is an effective mass plot of the pion, for two choices of mass and number of Lanczos sweeps. The “plateau” in the pion effective mass plot for the approximate operator that used 50 Lanczos iterations is higher than the lowest pion mass that can be reached with non-perturbatively improved clover. It is not clear what causes the “shoulder” in Fig. 2. We would like to compute the eigenvalues of the overlap-Dirac operator on the bigger gauge configuration, to check how accurately we are computing the overlap-Dirac operator.
## 5 RATIONAL APPROXIMATION
The step function can be approximated by a rational approximation.
$$ϵ(H)\approx H\left(c_0+\underset{k=1}{\overset{N}{\sum }}\frac{c_k}{H^2+d_k}\right)$$
(8)
The rational approximation typically approximates the step function between two values. The eigenvalues of the matrix $`H`$ should lie in the region where the approximation is good. The coefficients $`c_k`$ and $`d_k`$ can be obtained from the Remez algorithm . The number of iterations required in the inverter is controlled by the smallest $`d_k`$ coefficient, which acts like a mass. We have not yet implemented the technique of projecting out some of the low lying eigenmodes .
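Written out, Eq. (8) is just a sum of shifted inversions. The sketch below (our own illustration, under the assumption that Remez coefficients $`c_0`$, $`c_k`$, $`d_k`$ are supplied; dense solves stand in for the multi-shift Krylov solver one would use in practice) applies the approximation to a vector:

```python
import numpy as np

def eps_rational(H, b, c0, c, d):
    """eps(H) b ~= H (c0 + sum_k c_k (H^2 + d_k)^{-1}) b, as in Eq. (8).
    H: hermitian matrix; b: vector; (c0, c, d): rational coefficients."""
    n = len(b)
    H2 = H @ H
    y = c0 * b
    for ck, dk in zip(c, d):
        # one shifted solve per pole; the smallest d_k dominates the
        # iteration count when an iterative solver replaces the dense one
        y = y + ck * np.linalg.solve(H2 + dk * np.eye(n), b)
    return H @ y
```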
On one configuration we obtained GW errors of $`1\times 10^{-4}`$ and $`4\times 10^{-5}`$, for the $`N=6`$ and $`N=8`$ optimal rational approximations . The multiplicative scaling factor was tuned to obtain the best results. Unfortunately, the above results required up to 600 iterations for the smallest $`d_k`$, which was too large to use as the inner step of a quark propagator inverter.
One feature of the optimal rational approximation is that the lowest $`d_k`$ factor is smaller than the square of the lower edge of the approximation’s region of validity, which means that the condition number of the inversion is that of the matrix $`H^2`$. We experimented with a hybrid quadrature and series approximation to Robertson’s integral representation of the step function.
$$ϵ(H)=\int _0^{\infty }\frac{2H}{\pi (t^2+H^2)}𝑑t\approx \int _0^{\theta _S}\frac{2H}{\pi (t^2+H^2)}𝑑t+\int _{\theta _S}^{\theta _L}\frac{2H}{\pi (t^2+H^2)}𝑑t$$
In the equation above, the first integral was approximated using an open quadrature rule and the second integral was approximated by a Chebyshev series. The step size in the quadrature formulae reduces the condition number of the required inversion. In our preliminary tests of the algorithm, the hybrid method produced a substantial reduction in the number of iterations required over the optimal rational approximation. However the computed solution was less accurate than that produced by the optimal rational approximation, because the Ginsparg-Wilson error was $`6\times 10^{-4}`$.
Clearly more work is required on the algorithms that calculate the step function, before the overlap-Dirac operator can be used in the quark mass region we are interested in.
This work is supported by PPARC. The computations were carried out on the T3D and T3E at EPCC in Edinburgh.
# Acoustic Energy and Momentum in a Moving Medium
## I Introduction
Many discussions of the “energy” and “momentum” associated with waves propagating through moving fluids can be found in the physics, engineering, and mathematical fluid mechanics literature . Various definitions are proposed, some of which lead to conserved quantities, and some to quantities that are not conserved but instead exchanged between the wave and the mean flow. In part the multiplicity of definitions is due to difficulty in deciding what fraction of the energy or momentum of the system properly belongs to the wave and what fraction should be associated with the moving medium. It is also often unclear how to divide equations expressing conservation laws into terms relating to the conserved quantity, and terms acting as sources for this quantity. Related to these primarily cosmetic problems are more fundamental issues as to whether the “energy” or “momentum” under discussion is the true newtonian energy or momentum, or instead pseudo-energy and pseudomomentum. Thus we have the question “what is the momentum of a sound wave” raised by Sir Rudolf Peierls in his book Surprises in Theoretical Physics , and the salutary polemic “On the ‘Wave Momentum’ Myth” by M. E. McIntyre .
The most extensive analyses of conserved wave properties have been carried out by the fluid mechanics community . Typically these papers adopt a lagrangian (following individual particles in the flow) or mixed lagrangian-eulerian approach, as opposed to the purely eulerian (describing the flow in terms of a velocity field) approach which would be most familiar to a physicist. In addition a physicist reading this literature feels the lack of a general organising principle behind the definition and derivation of the conservation laws. The present paper is intended to remedy some of these problems — at least for the special case of sound waves propagating through an irrotational homentropic flow. Although rather a restricted class of motions, this is still one of considerable interest in condensed matter physics. It includes phonon propagation in a bose condensate, and therefore lies at the heart of the two fluid model of superfluids. By exploiting W. Unruh’s ingenious identification of the wave equation for sound waves in such a flow with the equation for a scalar field propagating in a background gravitational field, I extract the conservation laws from the principle of general covariance. Deriving the conservation laws in this way may seem like a case of taking a sledge-hammer to crack a nut, but the formalism is familar to most physicists, automatic in application, and the ambiguities in defining the conserved quantities turn out to lie in the choice of whether to identify the energy-momentum tensor as $`T^{\mu \nu }`$ or as $`T_\nu ^\mu `$. Also, when quantities are not conserved, as is the case of the wave momentum in a shear flow, their sources arise naturally from the connection terms in the covariant derivative.
In section two I discuss the action describing the irrotational motion of an homentropic fluid. In section three I derive Unruh’s equation from the action principle. In section four I explain why we often need information beyond the solutions of the linearized wave equation, and in section five derive the conservation equations that follow from the linearized equation. Section six interprets these equations in terms of the motion of phonons. The discussion section repeats the warning that the appearance of quantum quasi-particles in any argument is a sure sign that we are considering pseudomomentum and not the actual momentum of the system.
The work reported here was motivated by a desire to understand better the role of acoustic radiation stress in the two-fluid model of a superfluid. It may be relevant to the recent controversy over the Iordanskii force acting on a vortex moving with respect to the normal fluid component. The use of the Unruh formalism in this context was suggested by Volovik.
## II The Action Principle
The most straightforward way of deriving conservation laws starts from an action principle. Noether’s theorem then provides us with an explicit formula for a conserved quantity corresponding to each symmetry of the action. In fluid mechanics, unfortunately, at least when we restrict ourselves to an eulerian description of the flow field, action principles are in short supply. Of course there must exist some action principle because ultimately the fluid can be treated as a system of particles. A particle-based action, however, requires a lagrangian description of the flow. When it is re-expressed in eulerian terms constraints appear, and these limit its utility.
If we restrict ourselves to flows that are both irrotational and homentropic — the latter term meaning in practice that we assume the pressure to be a function of the fluid density only — then the number of degrees of freedom available to the fluid is dramatically reduced. In this case the eulerian equations of motion are derivable from the action
$$S=\int d^4x\left\{\rho \dot{\varphi }+\frac{1}{2}\rho (\nabla \varphi )^2+u(\rho )\right\}.$$
(1)
Here $`\rho `$ is the mass density, $`\varphi `$ the velocity potential, and the overdot denotes differentiation with respect to time. The function $`u`$ may be identified with the internal energy density.
Equating to zero the variation of $`S`$ with respect to $`\varphi `$ yields the continuity equation
$$\dot{\rho }+\nabla \cdot (\rho 𝐯)=0,$$
(2)
where $`𝐯\equiv \nabla \varphi `$. Varying $`\rho `$ gives Bernoulli’s equation
$$\dot{\varphi }+\frac{1}{2}𝐯^2+\mu (\rho )=0,$$
(3)
where $`\mu (\rho )=du/d\rho `$. In most applications $`\mu `$ would be identified with the specific enthalpy. For a superfluid condensate the entropy density, $`s`$, is identically zero and $`\mu `$ is the local chemical potential.
It is worth noting that our action could not have arisen from some rewriting of the action for the motion of a system of individual particles. We are allowing variations of $`\rho `$ without requiring simultaneous variations of $`\varphi `$, and such variations conjure new matter out of nothing.
The gradient of the Bernoulli equation is Euler’s equation of motion for the fluid. Combining this with the continuity equation yields a momentum conservation law
$$\partial _t(\rho v_i)+\partial _j(\rho v_jv_i)+\rho \partial _i\mu =0.$$
(4)
We simplify (4) by introducing the pressure, $`P`$, which is related to $`\mu `$ by $`P(\rho )=\int \rho \,d\mu `$. Then we can write
$$\partial _t(\rho v_i)+\partial _j\mathrm{\Pi }_{ji}=0,$$
(5)
where $`\mathrm{\Pi }_{ij}`$ is given by
$$\mathrm{\Pi }_{ij}=\rho v_iv_j+\delta _{ij}P.$$
(6)
This is the usual form of the momentum flux tensor in fluid mechanics.
The relations $`\mu =du/d\rho `$ and $`\rho =dP/d\mu `$ show that $`P`$ and $`u`$ are related by a Legendre transformation, $`P=\rho \mu u(\rho )`$. From this and the Bernoulli equation we see that the pressure is equal to minus the action density,
$$P=-\left\{\rho \dot{\varphi }+\frac{1}{2}\rho (\nabla \varphi )^2+u(\rho )\right\}.$$
(7)
Consequently we can write
$$\mathrm{\Pi }_{ij}=\rho \partial _i\varphi \partial _j\varphi -\delta _{ij}\left\{\rho \dot{\varphi }+\frac{1}{2}\rho (\nabla \varphi )^2+u(\rho )\right\}.$$
(8)
This is the flux tensor that would appear were we to use Noether’s theorem to derive a law of momentum conservation directly from the invariance of the action under the translation $`\varphi (𝐫)\to \varphi (𝐫-𝐚)`$, $`\rho (𝐫)\to \rho (𝐫-𝐚)`$. This is not a trivial point because there are two similar, but distinct, notions of “momentum”. True momentum is associated with the symmetry of the action under a simultaneous translation of all the particles in the system. Its conservation requires an absence of external forces. Pseudomomentum is the quantity that is preserved when the action is left invariant under a relocation of the disturbance in the medium, with the reference position of each individual particle left unchanged. Conservation of pseudomomentum requires homogeneity of the medium rather than of space. Replacing the field $`\varphi (𝐫)`$ by $`\varphi (𝐫-𝐚)`$ would normally correspond to the latter symmetry, but, because of the absence of explicit particles, at this point in our discussion the two concepts coincide.
## III The Unruh Metric
We now obtain the linearized wave equation for the propagation of sound waves in a background mean flow. Let
$$\varphi =\varphi _{(0)}+\varphi _{(1)},\qquad \rho =\rho _{(0)}+\rho _{(1)}.$$
(9, 10)
Here $`\varphi _{(0)}`$ and $`\rho _{(0)}`$ define the mean flow. We assume that they obey the equations of motion. The quantities $`\varphi _{(1)}`$ and $`\rho _{(1)}`$ represent small amplitude perturbations. Expanding $`S`$ to quadratic order in these perturbations gives
$$S=S_0+\int d^4x\left\{\rho _{(1)}\dot{\varphi }_{(1)}+\frac{1}{2}\left(\frac{c^2}{\rho _{(0)}}\right)\rho _{(1)}^2+\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2+\rho _{(1)}𝐯\cdot \nabla \varphi _{(1)}\right\}.$$
(11)
Here $`𝐯\equiv 𝐯_{(0)}=\nabla \varphi _{(0)}`$. The speed of sound, $`c`$, is defined by
$$\frac{c^2}{\rho _{(0)}}=\frac{d\mu }{d\rho }|_{\rho _{(0)}},$$
(12)
or more familiarly
$$c^2=\frac{dP}{d\rho }.$$
(13)
The terms linear in the perturbations vanish because of our assumption that the zeroth-order variables obey the equation of motion.
The equation of motion for $`\rho _{(1)}`$ derived from (11) is
$$\rho _{(1)}=-\frac{\rho _{(0)}}{c^2}\{\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)}\}.$$
(14)
In general we are not allowed to substitute a consequence of an equation of motion back into the action integral. Here however, because $`\rho _{(1)}`$ occurs quadratically, we may use (14) to eliminate it and obtain an effective action for the potential $`\varphi _{(1)}`$ only
$$S_{(2)}=\int d^4x\left\{\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2-\frac{\rho _{(0)}}{2c^2}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})^2\right\}.$$
(15)
The resultant equation of motion for $`\varphi _{(1)}`$ is
$$\left(\frac{\partial }{\partial t}+\nabla \cdot 𝐯\right)\frac{\rho _{(0)}}{c^2}\left(\frac{\partial }{\partial t}+𝐯\cdot \nabla \right)\varphi _{(1)}=\nabla \cdot (\rho _{(0)}\nabla \varphi _{(1)}).$$
(16)
Note that in deriving this equation we have not assumed that the background flow $`𝐯`$ is steady, only that it satisfies the equations of motion. Naturally, in order for our waves to be distinguishable from the background flow, the latter should be slowly changing and have a longer length scale than the wave motion.
Now (16) is equivalent to a perhaps more familiar equation
$$\left(\frac{\partial }{\partial t}+𝐯\cdot \nabla \right)\frac{1}{c^2}\left(\frac{\partial }{\partial t}+𝐯\cdot \nabla \right)\varphi _{(1)}=\frac{1}{\rho _{(0)}}\nabla \cdot (\rho _{(0)}\nabla \varphi _{(1)}),$$
(17)
as can be seen by using the mass conservation equation $`\partial _t\rho _{(0)}+\nabla \cdot (\rho _{(0)}𝐯)=0`$, but the form (16) has the advantage that it can be written as<sup>*</sup><sup>*</sup>*I use the convention that greek letters run over four space-time indices $`0,1,2,3`$ with $`0\equiv t`$, while roman indices refer to the three space components.
$$\frac{1}{\sqrt{g}}\partial _\mu \sqrt{g}g^{\mu \nu }\partial _\nu \varphi _{(1)}=0,$$
(18)
where
$$\sqrt{g}g^{\mu \nu }=\frac{\rho _{(0)}}{c^2}\left(\begin{array}{cc}1,& 𝐯^T\\ 𝐯,& 𝐯𝐯^T-c^2\mathrm{𝟏}\end{array}\right).$$
(19)
This is perhaps most easily seen by observing that the action (15) is equal to $`S`$ where
$$S=-\int d^4x\frac{1}{2}\sqrt{g}g^{\mu \nu }\partial _\mu \varphi _{(1)}\partial _\nu \varphi _{(1)}=\int d^4x\sqrt{g}L.$$
(20)
Equation (18) has the same form as that of a scalar wave propagating in a gravitational field with Riemann metric $`g_{\mu \nu }`$. The idea of writing the acoustic wave equation in this way, as well as the general relativity analogy, is due to Unruh . I will therefore refer to $`g_{\mu \nu }`$ as the Unruh metric.
The $`4`$-volume measure $`\sqrt{g}`$ is equal to $`\rho _{(0)}^2/c`$, and the covariant components of the metric are
$$g_{\mu \nu }=\frac{\rho _{(0)}}{c}\left(\begin{array}{cc}c^2-v^2,& 𝐯^T\\ 𝐯,& -\mathrm{𝟏}\end{array}\right).$$
(21)
The associated space-time interval is therefore
$$ds^2=\frac{\rho _{(0)}}{c}\left\{c^2dt^2-\delta _{ij}(dx^i-v^idt)(dx^j-v^jdt)\right\}.$$
(22)
Metrics of this form, although without the overall conformal factor $`\rho _{(0)}/c`$, appear in the Arnowitt-Deser-Misner (ADM) formalism of general relativity. There $`c`$ and $`v^i`$ are referred to as the lapse function and shift vector respectively. They serve to glue successive three-dimensional time slices together to form a four dimensional space-time. In our present case, provided $`\rho _{(0)}/c`$ can be regarded as a constant, each $`3`$-space is ordinary flat $`𝐑^\mathrm{𝟑}`$ equipped with the rectangular cartesian metric $`g_{ij}^{(space)}=\delta _{ij}`$ — but the resultant space-time is in general curved, the curvature depending on the degree of inhomogeneity of the mean flow $`𝐯`$.
In the geometric acoustics limit sound will travel along the null geodesics defined by $`g_{\mu \nu }`$. Even in the presence of spatially varying $`\rho _{(0)}`$ we would expect the ray paths to depend only on the local values of $`c`$ and $`𝐯`$, so it is perhaps a bit surprising to see the density entering the expression for the Unruh metric. However an overall conformal factor does not affect null geodesics, and thus variations in $`\rho _{(0)}`$ do not influence the ray tracing. For steady flow, and in the case that only $`𝐯`$ is varying, it is shown in the appendix that the null geodesics coincide with the ray paths obtained by applying Hamilton’s equations for rays
$$\dot{x}^i=\frac{\partial \omega }{\partial k_i},\qquad \dot{k}_i=-\frac{\partial \omega }{\partial x^i},$$
(23)
to the appropriate Doppler shifted frequency
$$\omega (𝐱,𝐤)=c|𝐤|+𝐯\cdot 𝐤.$$
(24)
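As an aside, the ray equations (23) with the dispersion relation (24) are straightforward to integrate numerically. The following sketch (our own illustration; the uniform shear flow and the crude Euler stepping are assumptions chosen for simplicity) traces a single ray:

```python
import numpy as np

def trace_ray(x0, k0, v, jac_v, c=1.0, dt=1e-3, nsteps=4000):
    """Integrate dx/dt = d(omega)/dk, dk/dt = -d(omega)/dx for
    omega = c|k| + v(x).k; jac_v(x) returns J[i, j] = dv_i/dx_j."""
    x = np.asarray(x0, dtype=float).copy()
    k = np.asarray(k0, dtype=float).copy()
    path = [x.copy()]
    for _ in range(nsteps):
        xdot = c * k / np.linalg.norm(k) + v(x)  # group velocity
        kdot = -jac_v(x).T @ k                   # (dk/dt)_i = -k_j dv_j/dx_i
        x += dt * xdot
        k += dt * kdot
        path.append(x.copy())
    return np.array(path), k

# a ray crossing a uniform shear v = (alpha*y, 0): k_y decreases while
# omega stays constant along the ray, since the flow is steady
alpha = 0.3
v = lambda x: np.array([alpha * x[1], 0.0])
jac_v = lambda x: np.array([[0.0, alpha], [0.0, 0.0]])
path, k_final = trace_ray([0.0, 0.0], [1.0, 0.5], v, jac_v)
```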
When $`𝐯`$ is in the $`x`$ direction only, we can also rewrite $`ds^2`$ as
$$ds^2=\frac{\rho _{(0)}}{c}\left\{\left(dx-(v+c)dt\right)\left(dx-(v-c)dt\right)-dy^2-dz^2\right\}.$$
(25)
This shows that the $`xt`$ plane null geodesics coincide with the expected characteristics of the wave equation in the background flow.
## IV Momentum Flux
The fluid in a sound wave has average velocity zero, but since the fluid is compressed in the half cycle when it is moving in the direction of propagation and rarefied when it is moving backwards there is a net mass current (and hence a momentum density) which is of second order in the sound wave amplitude $`a_0`$. This becomes clearer if one solves the equation
$$\frac{dx}{dt}=v(x)=a_0\mathrm{cos}(kx-\omega t)$$
(26)
for the trajectory $`x(t)`$ of a fluid particle. This equation is non-linear ($`x`$ appears inside the cosine), and solving perturbatively one finds a secular drift at second order in $`a_0`$.
$$x(t)=x(0)+\text{oscillations}+\frac{1}{2}a_0^2\left(\frac{k}{\omega }\right)t+\cdots .$$
(27)
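The secular drift in (27) is easy to verify numerically. The short sketch below (illustrative parameters of our own choosing) integrates Eq. (26) with a midpoint rule and compares the measured mean Lagrangian velocity with $`\frac{1}{2}a_0^2k/\omega `$:

```python
import numpy as np

a0, k, omega = 0.1, 2.0, 1.0        # wave parameters (illustrative)
dt, T = 1e-3, 2000.0
x, t = 0.0, 0.0
for _ in range(int(T / dt)):
    # midpoint (RK2) step of dx/dt = a0 cos(k x - omega t)
    xm = x + 0.5 * dt * a0 * np.cos(k * x - omega * t)
    x += dt * a0 * np.cos(k * xm - omega * (t + 0.5 * dt))
    t += dt
print(x / T)                        # measured mean Lagrangian velocity
print(0.5 * a0**2 * k / omega)      # predicted Stokes drift, Eq. (27)
```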
Although the time average of the eulerian fluid velocity, $`v`$, is zero, the time average of the lagrangian velocity, $`v_L=\dot{x}`$, is not. The difference between the two average velocities is the Stokes drift. The Stokes drift is $`O(a_0^2)`$ while the wave equation is accurate only to $`O(a_0)`$, so care is necessary before using its solutions to evaluate the mass current. Similar problems occur in defining the energy density and energy and momentum fluxes which also require second order accuracy.
We can expand the velocity field as
$$𝐯=𝐯_{(0)}+𝐯_{(1)}+𝐯_{(2)}+\cdots ,$$
(28)
where the second-order correction $`𝐯_{(2)}`$ arises as a consequence of the nonlinearities in the equations of motion. This correction will possess both oscillating and steady components. The oscillatory components arise because a strictly harmonic wave with frequency $`\omega _0`$ will gradually develop higher frequency components due to the progressive distortion of the wave as it propagates. (A plane wave eventually degenerates into a sequence of shocks.) These distortions are usually not significant in considerations of energy and momentum balance. The steady terms, however, represent $`O(a_0^2)`$ alterations to the mean flow caused by the sound waves, and these often possess energy and momentum comparable to that of the sound field.
Even if we temporarily ignore these effects and retain only $`𝐯_{(1)}`$ as determined from the linearized wave equation, the density and pressure will still have expansions
$$\rho =\rho _{(0)}+\rho _{(1)}+\rho _{(2)}+\cdots ,\qquad P=P_{(0)}+P_{(1)}+P_{(2)}+\cdots .$$
(29, 30)
As before, the grading $`(n)`$ refers to the number of powers of the sound wave amplitude in an expression. The small parameter in these expansions is the Mach number given by a typical value of $`v_{(1)}`$ divided by the local speed of sound.
Consider for example the momentum density $`\rho 𝐯`$ and the momentum flux
$$\mathrm{\Pi }_{ij}=\rho v_iv_j+\delta _{ij}P.$$
(31)
It is reasonable to define the momentum density and the momentum flux tensor associated with the sound field as the second order terms
$$𝐣^{(\mathrm{phonon})}=\langle \rho 𝐯\rangle =\langle \rho _{(1)}𝐯_{(1)}\rangle +𝐯\langle \rho _{(2)}\rangle ,$$
(32)
and
$$\mathrm{\Pi }_{ij}^{(\mathrm{phonon})}=\langle \rho _{(0)}v_{(1)i}v_{(1)j}\rangle +v_i\langle \rho _{(1)}v_{(1)j}\rangle +v_j\langle \rho _{(1)}v_{(1)i}\rangle +\delta _{ij}\langle P_{(2)}\rangle +v_iv_j\langle \rho _{(2)}\rangle .$$
(33)
(The angular brackets indicate that we should take a time average over a sound wave period. There is no need to consider terms first order in the amplitude because these average to zero.) We see that to compute them we need to consider the second order contributions to both $`P`$ and $`\rho `$.
We can compute $`P_{(2)}`$ in terms of first order quantities from
$$\mathrm{\Delta }P=\frac{dP}{d\mu }\mathrm{\Delta }\mu +\frac{1}{2}\frac{d^2P}{d\mu ^2}(\mathrm{\Delta }\mu )^2+O((\mathrm{\Delta }\mu )^3)$$
(34)
and Bernoulli’s equation in the form
$$\mathrm{\Delta }\mu =-\dot{\varphi }_{(1)}-\frac{1}{2}(\nabla \varphi _{(1)})^2-𝐯\cdot \nabla \varphi _{(1)},$$
(35)
together with
$$\frac{dP}{d\mu }=\rho ,\frac{d^2P}{d\mu ^2}=\frac{d\rho }{d\mu }=\frac{\rho }{c^2}.$$
(36)
Expanding out and grouping terms of appropriate orders gives
$$P_{(1)}=-\rho _{(0)}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})=c^2\rho _{(1)},$$
(37)
which we already knew, and
$$P_{(2)}=-\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2+\frac{1}{2}\frac{\rho _{(0)}}{c^2}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})^2.$$
(38)
We see that $`P_{(2)}=-\sqrt{g}L`$ where $`L`$ is the Lagrangian density for our sound wave equation.
To extract $`\rho _{(2)}`$ in this manner we need more information about the equation of state of the fluid than is used in the linearized theory. This information is most conveniently parameterized by the logarithmic derivative of the speed of sound with pressure (a fluid-state physics analogue of the Grüneisen parameter). Using this together with the previous results for $`P_{(2)}`$, we find that
$$\rho _{(2)}=\frac{1}{c^2}P_{(2)}-\frac{1}{\rho _{(0)}}\rho _{(1)}^2\frac{d\mathrm{ln}c}{d\mathrm{ln}\rho }|_{\rho _{(0)}}.$$
(39)
## V Conservation Laws
While we cannot compute the “true” energy and momentum densities and fluxes without including non-linear corrections to the motion, it is often more useful to find closely related quantities whose conservation laws are a consequence of the linearized wave equation, and which therefore provide information about the solutions of this equation. Our “general relativistic” formalism provides a systematic way of finding such conserved quantities. It is well known that any action $`S`$ automatically provides us with a covariantly conserved and symmetric energy-momentum tensor $`T_{\mu \nu }`$ defined by
$$T_{\mu \nu }=-\frac{2}{\sqrt{g}}\frac{\delta S}{\delta g^{\mu \nu }}.$$
(40)
The functional derivative is here defined by
$$\delta S=\int d^4x\sqrt{g}\frac{\delta S}{\delta g^{\mu \nu }}\delta g^{\mu \nu }.$$
(41)
It follows from the equations of motion derived from $`S`$ that
$$D_\mu T^{\mu \nu }=0,$$
(42)
where $`D_\mu `$ is the covariant derivative. For example
$$D_\alpha A_\sigma ^{\mu \nu }=\partial _\alpha A_\sigma ^{\mu \nu }+\mathrm{\Gamma }_{\alpha \gamma }^\mu A_\sigma ^{\gamma \nu }+\mathrm{\Gamma }_{\alpha \gamma }^\nu A_\sigma ^{\mu \gamma }-\mathrm{\Gamma }_{\alpha \sigma }^\gamma A_\gamma ^{\mu \nu }.$$
(43)
The $`\mathrm{\Gamma }_{\beta \gamma }^\alpha `$ are the components of the Levi-Civita connection compatible with the Unruh metric, viz.
$$\mathrm{\Gamma }_{\beta \gamma }^\alpha =g^{\alpha \rho }[\beta \gamma ,\rho ],$$
(44)
where
$$[\beta \gamma ,\rho ]=\frac{1}{2}\left(\frac{\partial g_{\gamma \rho }}{\partial x^\beta }+\frac{\partial g_{\beta \rho }}{\partial x^\gamma }-\frac{\partial g_{\beta \gamma }}{\partial x^\rho }\right).$$
(45)
For our scalar field
$$T^{\mu \nu }=\partial ^\mu \varphi _{(1)}\partial ^\nu \varphi _{(1)}-g^{\mu \nu }\left(\frac{1}{2}g^{\alpha \beta }\partial _\alpha \varphi _{(1)}\partial _\beta \varphi _{(1)}\right).$$
(46)
The derivatives with raised indices in (46) are defined by
$$\partial ^0\varphi _{(1)}=g^{0\mu }\partial _\mu \varphi _{(1)}=\frac{1}{\rho _{(0)}c}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)}),$$
(47)
and
$$\partial ^i\varphi _{(1)}=g^{i\mu }\partial _\mu \varphi _{(1)}=\frac{1}{\rho _{(0)}c}\left(v_i(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})-c^2\partial _i\varphi _{(1)}\right).$$
(48)
Thus
$$T^{00}=\frac{1}{\rho _{(0)}^3}\left(\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2+\frac{1}{2}\frac{\rho _{(0)}}{c^2}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})^2\right)=\frac{c^2}{\rho _{(0)}^3}\left(\frac{W_r}{c^2}\right)=\frac{c^2}{\rho _{(0)}^3}\stackrel{~}{\rho }_{(2)}.$$
(49, 50, 51)
The last two equalities serve as a definition of $`W_r`$ and $`\stackrel{~}{\rho }_{(2)}`$. The quantity $`W_r`$ is often described as the acoustic energy density relative to the frame moving with the local fluid velocity. Because its conservation law will depend on the steadiness of the flow rather than the absence of time-dependent external forces, it is more correctly a pseudo-energy density.
Using (37), and (38) in the form
$$\frac{1}{2}g^{\alpha \beta }\partial _\alpha \varphi _{(1)}\partial _\beta \varphi _{(1)}=\frac{c}{\rho _{(0)}^2}P_{(2)},$$
(52)
we can express the other components of (46) in terms of physical quantities. We find that
$`T^{i0}=T^{0i}`$ $`=`$ $`{\displaystyle \frac{c^2}{\rho _{(0)}^3}}\left({\displaystyle \frac{1}{c^2}}(P_{(1)}v_{(1)i}+v_iW_r)\right)`$ (53)
$`=`$ $`{\displaystyle \frac{c^2}{\rho _{(0)}^3}}\left(\rho _{(1)}v_{(1)i}+v_i\stackrel{~}{\rho }_{(2)}\right).`$ (54)
The first line in this expression shows that, up to an overall factor, $`T^{i0}`$ is the energy flux – the first term being the rate of working by a fluid element on its neighbour, and the second the advected energy. The second line is written so as to suggest the usual relativistic identification of (energy-flux)$`/c^2`$ with the density of momentum. This interpretation, however, requires that $`\stackrel{~}{\rho }_{(2)}`$ be the second order correction to the density, which it is not.
Similarly
$$T^{ij}=\frac{c^2}{\rho _{(0)}^3}\left(\rho _{(0)}v_{(1)i}v_{(1)j}+v_i\rho _{(1)}v_{(1)j}+v_j\rho _{(1)}v_{(1)i}+\delta _{ij}P_{(2)}+v_iv_j\stackrel{~}{\rho }_{(2)}\right).$$
(55)
We again see that if we identify $`\stackrel{~}{\rho }_{(2)}`$ with $`\rho _{(2)}`$ then $`T^{ij}`$ has exactly the form we expect for the second order momentum flux tensor.
The reason why the linear theory makes the erroneous identification of $`\rho _{(2)}`$ with $`\stackrel{~}{\rho }_{(2)}`$ is best seen if we set $`𝐯=const.`$ Then the equation
$$\partial _tT^{00}+\partial _iT^{i0}=0,$$
(56)
holds. This reads
$$\frac{c^2}{\rho _{(0)}^3}\left(\partial _t\stackrel{~}{\rho }_{(2)}+\partial _i(\rho _{(1)}v_{(1)i}+v_i\stackrel{~}{\rho }_{(2)})\right)=0,$$
(57)
and looks very much like the second order continuity equation
$$\partial _t\rho _{(2)}+\partial _i(v_{(2)i}\rho _{(0)}+\rho _{(1)}v_{(1)i}+v_i\rho _{(2)})=0,$$
(58)
once we ignore $`𝐯_{(2)}`$. When we go beyond the linear theory (58) provides a source or sink term in the mass conservation equation for $`𝐯_{(2)}`$ , and is not an equation determining $`\rho _{(2)}`$.
We can also write the mixed co- and contra-variant components of the energy momentum tensor $`T_\nu ^\mu =T^{\mu \lambda }g_{\lambda \nu }`$ in terms of physical quantities. This mixed tensor turns out to be more useful than the doubly contravariant tensor. Because we no longer enforce a symmetry between the indices $`\mu `$ and $`\nu `$, the quantity $`W_r`$ is no longer required to perform double duty as both an energy and a density. We find
$$T_0^0=\frac{c}{\rho _{(0)}^2}\left(W_r+\rho _{(1)}𝐯_{(1)}\cdot 𝐯\right),\qquad T_0^i=\frac{c}{\rho _{(0)}^2}\left(\frac{P_{(1)}}{\rho _{(0)}}+𝐯\cdot 𝐯_{(1)}\right)(\rho _{(0)}v_{(1)i}+\rho _{(1)}v_{(0)i}),$$
(59, 60)
and
$$T_i^0=-\frac{c}{\rho _{(0)}^2}\rho _{(1)}v_{(1)i},\qquad T_j^i=-\frac{c}{\rho _{(0)}^2}\left(\rho _{(0)}v_{(1)i}v_{(1)j}+v_i\rho _{(1)}v_{(1)j}+\delta _{ij}P_{(2)}\right).$$
(61, 62)
We see that $`\stackrel{~}{\rho }_{(2)}`$ does not appear here, and all these terms may be identified with physical quantities which are reliably computed from solutions of the linearized wave equation.
The covariant conservation law can be written as either $`D_\mu T^{\mu \nu }=0`$ or as $`D_\mu T_\nu ^\mu =0`$. The two equations are consistent with each other because the covariant derivative is defined so that $`D_\lambda g_{\mu \nu }=g_{\mu \nu }D_\lambda `$. To extract the physical meaning of these equations we need to evaluate the connection forms $`\mathrm{\Gamma }_{\nu \lambda }^\mu `$.
In what follows I will consider only a steady background flow, and further one for which $`\rho _0`$, $`c`$, and hence $`\sqrt{g}=\rho _{(0)}^2/c`$ can be treated as constant. To increase the readability of some expressions I will also choose units so that $`\rho _0`$ and $`c`$ become unity and no longer appear as overall factors in the metric or the four dimensional energy-momentum tensors. I will however reintroduce them when they are required for dimensional correctness in expressions such as $`\rho _{(0)}𝐯_{(1)}`$ or $`W_r/c^2`$.
From the Unruh metric we find
$$[ij,k]=0,\qquad [ij,0]=\frac{1}{2}(\partial _iv_j+\partial _jv_i),\qquad [i0,j]=\frac{1}{2}(\partial _iv_j-\partial _jv_i),$$
$$[0i,0]=[i0,0]=-\frac{1}{2}\partial _i|v|^2,\qquad [00,i]=\frac{1}{2}\partial _i|v|^2,\qquad [00,0]=0.$$
(63-68)
I have retained the expression $`\frac{1}{2}(\partial _iv_j-\partial _jv_i)`$ in $`[i0,j]`$, since it is possible that our wave equation has greater generality than its derivation.
We therefore find
$$\mathrm{\Gamma }_{00}^0=\frac{1}{2}(𝐯\cdot \nabla )|v|^2,\qquad \mathrm{\Gamma }_{i0}^0=-\frac{1}{2}\partial _i|v|^2+\frac{1}{2}v_j(\partial _iv_j-\partial _jv_i),$$
$$\mathrm{\Gamma }_{00}^i=\frac{1}{2}v_i(𝐯\cdot \nabla )|v|^2-\frac{1}{2}\partial _i|v|^2,\qquad \mathrm{\Gamma }_{ij}^0=\frac{1}{2}(\partial _iv_j+\partial _jv_i),$$
$$\mathrm{\Gamma }_{j0}^i=-\frac{1}{2}v_i\partial _j|v|^2+\frac{1}{2}(\partial _jv_k-\partial _kv_j)(v_kv_i-c^2\delta _{ik}),\qquad \mathrm{\Gamma }_{jk}^i=\frac{1}{2}v_i(\partial _jv_k+\partial _kv_j).$$
(69-74)
From (45) we have
$$\mathrm{\Gamma }_{\mu \beta }^\mu =\frac{1}{\sqrt{g}}\frac{\partial \sqrt{g}}{\partial x^\beta },$$
(75)
so, with $`\sqrt{g}=const.`$, the trace $`\mathrm{\Gamma }_{\mu \beta }^\mu `$ is zero. One may verify that the above expressions for $`\mathrm{\Gamma }_{\nu \lambda }^\mu `$ obey this identity.
We now evaluate
$$D_\mu T^{\mu 0}=\partial _\mu T^{\mu 0}+\mathrm{\Gamma }_{\mu \gamma }^\mu T^{\gamma 0}+\mathrm{\Gamma }_{\mu \nu }^0T^{\mu \nu }=\partial _\mu T^{\mu 0}+\mathrm{\Gamma }_{\mu \nu }^0T^{\mu \nu }.$$
(76, 77)
After a little algebra we find
$$\mathrm{\Gamma }_{\mu \nu }^0T^{\mu \nu }=\frac{1}{2}(\partial _iv_j+\partial _jv_i)(\rho _{(0)}v_{(1)i}v_{(1)j}+\delta _{ij}P_{(2)}).$$
(78)
Note the non-appearance of $`\rho _{(1)}`$ and $`\stackrel{~}{\rho }_{(2)}`$ in the final expression — even though both quantities appear in $`T^{\mu \nu }`$.
The conservation law therefore becomes
$$\partial _tW_r+\partial _i(P_{(1)}v_{(1)i}+v_iW_r)+\frac{1}{2}\mathrm{\Sigma }_{ij}(\partial _iv_j+\partial _jv_i)=0,$$
(79)
where
$$\mathrm{\Sigma }_{ij}=\rho _{(0)}v_{(1)i}v_{(1)j}+\delta _{ij}P_{(2)}.$$
(80)
This is an example of the general form of energy law derived by Longuet-Higgins and Stuart, originally in the context of ocean waves. (See also for a slightly earlier, but less general, case.) The relative energy density, $`W_r\propto T^{00}`$, is not conserved. Instead an observer moving with the fluid sees the waves acquiring energy from the mean flow at a rate given by the product of a radiation stress $`\mathrm{\Sigma }_{ij}`$ with the mean-flow rate of strain. Such non-conservation is not surprising. Seen from the viewpoint of the moving frame the flow is no longer steady, while (pseudo) energy conservation requires a time-independent medium.
Notice that, since we are assuming that $`\rho _{(0)}`$ is a constant, we should for consistency require $`\nabla \cdot 𝐯=0`$. Thus the isotropic part of the radiation stress (the part $`\propto \delta _{ij}`$) does no work. This is fortunate because the non-linear theory shows that the isotropic radiation stress contains a part dependent on $`d\mathrm{ln}c/d\mathrm{ln}\rho `$ which is missed by the linear approximation. (see however,)
We now examine the energy conservation law coming from the zeroth component of the mixed energy-momentum tensor. We need
$$D_\mu T_0^\mu =\partial _\mu T_0^\mu -\mathrm{\Gamma }_{\mu 0}^\rho T_\rho ^\mu =\partial _\mu T_0^\mu -[\mu 0,\rho ]T^{\mu \rho }=\partial _\mu T_0^\mu -[i0,0]T^{i0}-[00,i]T^{0i}-[i0,j]T^{ij}.$$
(81-83)
We now observe that $`T^{i0}=T^{0i}`$ while $`[00,i]=-[i0,0]`$, and that $`[i0,j]=-[j0,i]`$, while $`T^{ij}=T^{ji}`$. Thus the connection contribution vanishes. This form of the energy conservation law is therefore
$$\partial _t\left(W_r+\rho _{(1)}𝐯_{(1)}\cdot 𝐯\right)+\partial _i\left((\frac{P_{(1)}}{\rho _{(0)}}+𝐯\cdot 𝐯_{(1)})(\rho _{(0)}v_{(1)i}+\rho _{(1)}v_{(0)i})\right)=0.$$
(84)
Here we see that the combination $`W_r+\rho _{(1)}𝐯_{(1)}\cdot 𝐯`$ does correspond to a conserved energy. This conservation law was originally derived by Blokhintsev for slowly varying flows, and more generally by Cantrell and Hart in their study of the acoustic stability of rocket engines. See also reference , and eq. (5.18).
Now we turn to the equation for momentum conservation. Working similarly to the energy law we find
$$D_\mu T_j^\mu =\partial _\mu T_j^\mu -[\mu j,\rho ]T^{\mu \rho }=\partial _\mu T_j^\mu -[0j,0]T^{00}-[ij,0]T^{i0}-[0j,i]T^{0i}=\partial _\mu T_j^\mu -\rho _{(1)}v_{(1)i}\partial _jv_i.$$
(85-87)
Again notice the cancellation of the terms containing $`\stackrel{~}{\rho }_{(2)}`$.
The covariant conservation equation $`D_\mu T_j^\mu =0`$ therefore reads
$$\partial _t\rho _{(1)}v_{(1)j}+\partial _i\left(\rho _{(0)}v_{(1)i}v_{(1)j}+v_i\rho _{(1)}v_{(1)j}+\delta _{ij}P_{(2)}\right)+\rho _{(1)}v_{(1)i}\partial _jv_i=0.$$
(88)
The connection terms have provided a source term for the momentum density. Thus, in an inhomogeneous flow field, momentum is exchanged between the waves and the mean flow.
## VI Phonons and Conservation of Wave Action
If the mean flow changes only slowly over many wavelengths, the sound field can locally be approximated by a plane wave
$$\varphi (x,t)=a_0\mathrm{cos}(𝐤\cdot 𝐱-\omega t).$$
(89)
The frequency $`\omega `$ and the wave-vector $`𝐤`$ are here related by the Doppler-shifted dispersion relation $`\omega =\omega _r+𝐤\cdot 𝐯,`$ where the relative frequency, $`\omega _r=c|k|`$, is that measured in the frame moving with the fluid. A packet of such waves moves at the group velocity
$$𝐔=\dot{𝐱}=c\frac{𝐤}{|k|}+𝐯.$$
(90)
As the wave progresses through regions of varying $`𝐯`$, the parameters $`𝐤`$ and $`a_0`$ will slowly evolve. The change in $`𝐤`$ is given by the ray tracing formula (A17)
$$\frac{dk_j}{dt}=-k_i\frac{\partial v_i}{\partial x^j},$$
(91)
where the time derivative is taken along the ray
$$\frac{d}{dt}=\frac{\partial }{\partial t}+𝐔\cdot \nabla .$$
(92)
The evolution of the amplitude $`a_0`$ is linked with that of the energy density, $`W_r`$ through
$$W_r=\frac{1}{2}a_0^2\rho _{(0)}\frac{\omega _r^2}{c^2}.$$
(93)
Now for a homogeneous stationary fluid we would expect our macroscopic plane wave to correspond to a quantum coherent state whose energy is given in terms of the (quantum) average phonon density $`\overline{N}`$ as
$$E_{tot}=(\text{Volume})W_r=(\text{Volume})\overline{N}\hbar \omega _r.$$
(94)
Since it is a density of “particles”, $`\overline{N}`$ should remain the same when viewed from any frame; consequently the relation
$$\overline{N}\hbar =\frac{W_r}{\omega _r}$$
(95)
should hold true generally. In classical fluid mechanics the quantity $`W_r/\omega _r`$ is called the wave action.
The time averages of other components of the energy momentum tensor may also be expressed in terms of $`\overline{N}`$. For the mixed tensor we have
$$\langle T_0^0\rangle =\langle W_r+𝐯\cdot \rho _{(1)}𝐯_{(1)}\rangle =\overline{N}\hbar \omega ,\qquad \langle T_0^i\rangle =\langle (\frac{P_{(1)}}{\rho _{(0)}}+𝐯\cdot 𝐯_{(1)})(\rho _{(0)}v_{(1)i}+\rho _{(1)}v_i)\rangle =\overline{N}\hbar \omega U_i,$$
$$\langle T_i^0\rangle =-\langle \rho _{(1)}v_{(1)i}\rangle =-\overline{N}\hbar k_i,\qquad \langle T_j^i\rangle =-\langle \rho _{(0)}v_{(1)i}v_{(1)j}+v_i\rho _{(1)}v_{(1)j}+\delta _{ij}P_{(2)}\rangle =-\overline{N}\hbar k_jU_i.$$
(96-99)
The last result uses the fact that $`P_{(2)}=0`$ for a plane progressive wave.
Inserting these approximate expressions for the time averages into the Blokhintsev energy conservation law (84) we find that
$$\frac{\partial \overline{N}\hbar \omega }{\partial t}+\nabla \cdot (\overline{N}\hbar \omega 𝐔)=0.$$
(100)
We can write this as
$$\overline{N}\hbar \left(\frac{\partial \omega }{\partial t}+𝐔\cdot \nabla \omega \right)+\hbar \omega \left(\frac{\partial \overline{N}}{\partial t}+\nabla \cdot (\overline{N}𝐔)\right)=0.$$
(101)
The first term is equal to $`d\omega /dt`$ along the rays and vanishes for a steady mean flow as a consequence of the hamiltonian nature of the ray tracing equations. The second term must therefore also vanish. This represents the conservation of phonons, or in classical language, the conservation of wave-action.
In a similar manner the time average of (87) may be written
$$0=\frac{\partial \overline{N}k_j}{\partial t}+\nabla \cdot (\overline{N}k_j𝐔)+\overline{N}k_i\frac{\partial v_i}{\partial x^j}=\overline{N}\left(\frac{\partial k_j}{\partial t}+𝐔\cdot \nabla k_j+k_i\frac{\partial v_i}{\partial x^j}\right)+k_j\left(\frac{\partial \overline{N}}{\partial t}+\nabla \cdot (\overline{N}𝐔)\right).$$
(102, 103)
We see therefore that the momentum law is equivalent to phonon-number conservation combined with the ray tracing equation (A17).
## VII Discussion
The possibility of interpreting the time average of our momentum conservation law in terms of quantum quasi-particles should warn us that we are dealing with pseudomomentum and not with newtonian momentum. Nonetheless the quantity $`\langle \rho _{(1)}𝐯_{(1)}\rangle =\overline{N}\hbar 𝐤`$ is reliably computed from the linearized wave equation, and is part of the true momentum. It is simply not all of it. Even in the absence of a mean flow with its $`\rho _{(2)}𝐯`$ contribution we still have to contend with $`\rho _{(0)}𝐯_{(2)}`$, and this can be important. As an example , consider a closed cylinder filled with fluid. At one end of the cylinder a piston is driven so as to generate plane sound waves which completely span the cross section of the tube. At the other end a second piston is driven at the same frequency with its phase adjusted so as to absorb the sound waves without reflection. It is easy to see that an extra pressure equal to $`W_r`$ is exerted on the ends of the tube over and above whatever isotropic pressure acts on the ends and sides equally. It is “obvious” that this is the force per unit area $`\overline{N}\hbar 𝐤c`$ required to generate and absorb the phonon beam “momentum”. Unfortunately for this simple idea, it is equally obvious that the time average center-of-mass velocity of the fluid in the tube vanishes, so the true momentum density in the beam is exactly zero. The $`\rho _{(1)}𝐯_{(1)}`$ contribution to the momentum density is exactly cancelled by a $`\rho _{(0)}𝐯_2`$ counterflow. This eulerian streaming is driven by the fluid source term for $`𝐯_{(2)}`$ implicit in (58) . (In a lagrangian description the particles merely oscillate back and forth with no secular drift). The momentum flux however is exactly the same as if (the italics are from ) there was no medium and the phonons were particles possessing momentum $`\hbar 𝐤`$. This is frequently true: the flux of pseudomomentum is often equal to the flux of true momentum to $`O(a^2)`$ accuracy. Pseudomomentum flux can therefore be used to compute forces. On the other hand the density of true momentum in the fluid and the density of pseudomomentum are usually unrelated. (This does not mean that the attribution of momentum to a phonon in the two-fluid model for a superfluid is incorrect: in superfluid hydrodynamics the $`\rho _{(0)}𝐯_2`$ counterflow is accounted for separately from the $`\rho _{(1)}𝐯_{(1)}=\overline{N}\hbar 𝐤`$ normal-component mass flux, and the counterflow is included in the supercurrent needed to enforce $`\nabla \cdot (\rho _n𝐯_n+\rho _s𝐯_s)=0`$.)
It should be said that the $`\rho _{(0)}𝐯_2`$ counterflow will not always cancel the $`\rho _{(1)}𝐯_1`$ wave pseudomomentum. The $`𝐯_2`$ flow depends on the geometry. It is found from the source equation (58) and from the force the sound field applies to the fluid. The latter will be small when there is no dissipation, as is the case in a superfluid, and for an isolated sound beam source in an infinite medium $`𝐯_2`$ will consist of a flow directed radially inwards towards the transducer of sufficient magnitude to supply the mass flowing out along the sound beam. In the presence of dissipation the force becomes important, leading to acoustic streaming.
Consider our closed cylinder further. From (39) we see that in a system with fixed $`P`$, and in the presence of the sound wave, the mean density of the fluid will be reduced by
$$\rho _{(2)}=-\frac{W_r}{c^2}\frac{d\mathrm{ln}c}{d\mathrm{ln}\rho }|_{\rho _{(0)}}.$$
(104)
Since our cylinder has fixed volume, this density reduction cannot take place. Instead it is opposed by a pressure on the cylinder wall
$$\mathrm{\Delta }P=W_r\frac{d\mathrm{ln}c}{d\mathrm{ln}\rho }|_{\rho _{(0)}},$$
(105)
that must be added to the isotropic pressure in the absence of the sound wave. The complete radiation stress tensor is therefore
$$\mathrm{\Sigma }_{ij}=W_r\left(\frac{k_ik_j}{k^2}+\delta _{ij}\frac{d\mathrm{ln}c}{d\mathrm{ln}\rho }\right).$$
(106)
This result goes back to Brillouin. The true radiation stress therefore differs from the pseudomomentum flux tensor in its isotropic part. Forces computed from pseudomomentum flux will therefore be incorrect if this pressure gradient is important. Usually it is not. See for examples.
## VIII Acknowledgements
This work was supported by grant NSF-DMR-98-17941. I would like to thank Edouard Sonin, David Thouless, and Ping Ao for many discussions, and Stefan Llewellyn-Smith for useful e-mail.
## A Geodesics and Hamiltonian Flows
In this appendix we show that the null geodesics of the Unruh metric coincide with conventional hamiltonian optics ray tracing. The usual ray tracing equations are derived from $`\omega (𝐤,𝐱)`$ as
$$\dot{𝐱}=\frac{\partial \omega }{\partial 𝐤},\qquad \dot{𝐤}=-\frac{\partial \omega }{\partial 𝐱}.$$
(A1)
In our case $`\omega (𝐤,𝐱)=c|k|+𝐯\cdot 𝐤`$. Thus
$$\frac{dx^i}{dt}=v_i+c\frac{k_i}{|k|},\qquad \frac{dk_i}{dt}=-\frac{\partial v_j}{\partial x^i}k_j.$$
(A2)
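These ray equations are straightforward to integrate numerically. The following minimal sketch (Python with NumPy/SciPy; the swirling flow field `v`, the constant sound speed, and all initial values are illustrative assumptions, not taken from the text) integrates (A2) directly, which gives a cross-check on the geodesic correspondence derived below:

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0  # sound speed, taken constant for this sketch

def v(x):
    # An assumed smooth background flow (a solid-body-like swirl); any smooth choice will do.
    return np.array([-0.3 * x[1], 0.3 * x[0]])

def grad_v(x, eps=1e-6):
    # Finite-difference approximation to dv_j/dx_i, needed for the k equation in (A2).
    g = np.zeros((2, 2))
    for i in range(2):
        dx = np.zeros(2)
        dx[i] = eps
        g[i] = (v(x + dx) - v(x - dx)) / (2 * eps)
    return g

def rhs(t, y):
    # y = (x_1, x_2, k_1, k_2); implements dx/dt and dk/dt from (A2).
    x, k = y[:2], y[2:]
    xdot = v(x) + c * k / np.linalg.norm(k)
    kdot = -grad_v(x) @ k
    return np.concatenate([xdot, kdot])

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 1.0, 0.0], rtol=1e-8)
x, k = sol.y[:2, -1], sol.y[2:, -1]
# For a steady flow, omega = c|k| + v.k should be conserved along the ray (cf. (A15)).
print(c * np.linalg.norm(k) + v(x) @ k)
```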
We begin by noting that geodesics with an affine parameter $`\tau `$ are stationary paths for the lagrangian
$$L=\frac{1}{2}g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau }.$$
(A3)
To make connection with the ray tracing formalism we consider the corresponding hamiltonian
$$H=\frac{1}{2}g^{\mu \nu }p_\mu p_\nu ,$$
(A4)
and write down Hamilton’s equations with $`\tau `$ playing the role of time
$`{\displaystyle \frac{dx^\mu }{d\tau }}`$ $`=`$ $`{\displaystyle \frac{\partial H}{\partial p_\mu }}=g^{\mu \nu }p_\nu `$ (A5)
$`{\displaystyle \frac{dp_\mu }{d\tau }}`$ $`=`$ $`-{\displaystyle \frac{\partial H}{\partial x^\mu }}=-{\displaystyle \frac{1}{2}}{\displaystyle \frac{\partial g^{\alpha \beta }}{\partial x^\mu }}p_\alpha p_\beta .`$ (A6)
Combining these gives
$$\frac{d^2x^\mu }{d\tau ^2}=\frac{\partial g^{\mu \beta }}{\partial x^\alpha }\frac{dx^\alpha }{d\tau }p_\beta +g^{\mu \nu }\left(-\frac{1}{2}\frac{\partial g^{\alpha \beta }}{\partial x^\nu }\right)p_\alpha p_\beta .$$
(A7)
Now for matrices $`𝐠`$ we have
$$d𝐠^{-1}=-𝐠^{-1}(d𝐠)𝐠^{-1},$$
(A8)
so with $`(𝐠)_{\alpha \beta }=g_{\alpha \beta }`$ and $`(𝐠^{-1})_{\alpha \beta }=g^{\alpha \beta }`$ we can write
$$\frac{d^2x^\mu }{d\tau ^2}+\frac{1}{2}g^{\mu \nu }\left(\frac{\partial g_{\nu \alpha }}{\partial x^\beta }+\frac{\partial g_{\nu \beta }}{\partial x^\alpha }-\frac{\partial g_{\alpha \beta }}{\partial x^\nu }\right)\frac{dx^\alpha }{d\tau }\frac{dx^\beta }{d\tau }=0,$$
(A9)
which is the geodesic equation.
We now examine these equations for the particular case of the Unruh metric. We define a $`4`$-vector $`p_\mu =(\omega ,-k_i)`$ so that $`p_\mu x^\mu =\omega t-𝐤\cdot 𝐱`$. Then
$$H=\frac{1}{2}g^{\mu \nu }p_\mu p_\nu =\frac{1}{2}\left((\omega -𝐯\cdot 𝐤)^2-c^2|k|^2\right).$$
(A10)
Hamilton’s equations become
$$\frac{dx^0}{d\tau }=\frac{dt}{d\tau }=\frac{\partial H}{\partial \omega }=\omega -𝐯\cdot 𝐤,$$
(A11)
and
$$\frac{dx^i}{d\tau }=-\frac{\partial H}{\partial k_i}=v_i(\omega -𝐯\cdot 𝐤)+c^2k_i.$$
(A12)
For null geodesics $`(\omega -𝐯\cdot 𝐤)^2-c^2|k|^2=0`$, or $`(\omega -𝐯\cdot 𝐤)=c|k|`$. Thus
$$\frac{dx^i}{dt}=v_i+\frac{c^2k_i}{(\omega -𝐯\cdot 𝐤)},$$
(A13)
or
$$\frac{dx^i}{dt}=v_i+c\frac{k_i}{|k|},$$
(A14)
which is the group velocity equation. We also find
$$\frac{d\omega }{d\tau }=-\frac{\partial H}{\partial t}=0$$
(A15)
if the flow is steady, and
$$\frac{dk_i}{d\tau }=\frac{\partial H}{\partial x^i}=-(\omega -𝐯\cdot 𝐤)\frac{\partial v_j}{\partial x^i}k_j,$$
(A16)
which is equivalent to the momentum evolution equation
$$\frac{dk_i}{dt}=-\frac{\partial v_j}{\partial x^i}k_j.$$
(A17)
# INTRODUCTION to the Yuri Golfand Memorial Volume MANY FACES OF SUPERWORLD
## Basic Biographic Data<sup>1</sup><sup>1</sup>1Below I quote from a note written by Misha Marinov in 1994 for the Proceedings of the Israeli Physical Society, a source inaccessible outside Israel. I am grateful to M. Marinov for providing me with this publication from his archive. Some additional data were kindly communicated to me by Mrs. N. Koretz-Golfand.
Yuri Abramovich Golfand was born in Kharkov, Ukraine, on January 10, 1922. Like many Soviet scholars of his generation, he started his education at the Kharkov University, Department of Physics and Mathematics. This was in 1938. The Second World War (WW II) interrupted his academic career; in 1941 Golfand becomes a cadet of the Military Airforce Academy. The end of 1944 found him at a front-line airdrome where he worked as a technician. After the end of the war, in 1945, Golfand resumed his studies, this time at the Department of Mathematics of the Leningrad University. He graduated in 1946 and got his PhD in mathematics within a year and a half. At the end of the 1940’s, Golfand worked for an electrical engineering research institute. In 1951, he joined the group of I.E. Tamm at the Theory Department of the Lebedev Physical Institute (FIAN) in Moscow. He stayed there for 40 years, with a long break, of which more will be said later. For a year or two, Golfand was marginally involved in the nuclear bomb project, like many of his colleagues at that time. Approximately at the same time he got interested in fundamental physics. In the 1950’s and 60’s, Golfand carried out several projects in quantum field theory, in particular, on applications of the functional methods. In 1959, he published a famous work on the method of renormalization, based on the assumption that the four-dimensional momentum space has a constant nonzero curvature. That was one of the fascinating attempts to introduce an elementary length into relativistic field theory.
In 1972, the Academy of Sciences conducted a routine campaign of personnel cuts. At the FIAN Theory Department it was decided that Golfand was the least worthy member of the group, whose work was unimpactful. As a result, he was fired from FIAN in 1973. This unfortunate turn of events left very little choice to Golfand – he decides to apply for the exit visa to Israel, which only aggravates his situation. In due time there comes a refusal. In those days such an application was considered to be high treason. Thus, Golfand becomes a refusenik – a nonperson, according to the Orwellian nomenclature – with all ensuing political consequences. His struggle lasted for many years. This chapter belongs to a different book, however, which has yet to be written. We will not touch it.
Golfand was unemployed for 7 years, until 1980, when he was accepted back to FIAN (but not to the Theoretical group), under strong pressure from the world physics community, and, in particular, the American Physical Society. It was only in June of 1990 – seventeen years after the original application – that the permission was granted to Golfand’s family to leave the Soviet Union, which at this time was rapidly approaching its demise. Within a few months his family moved to Israel. An official farewell letter from FIAN, signed by Academician L. Keldysh, the Director General, arrived a few days before the departure. The concluding paragraph of the letter reads: “I would like to express my deep and sincere regret for the damage which has been inflicted on you, and henceforth on the Institute, by your dismissal from FIAN.”
It will be fair to add that shortly before this, the Soviet Academy of Sciences awarded Yuri Golfand with the Tamm Prize for Theoretical Physics. This was the only award Yuri Golfand ever received.
Golfand spent the last years of his life in Haifa. Because of his age, he could not get a regular professorship, so he settled for a research fellowship at Technion, under a special program of the Israeli Government. Yuri Abramovich Golfand died on February 17, 1994, in Jerusalem, from complications of a brain stroke.
## Work on Supersymmetry; Chronology
It is known that Golfand discussed the Bose–Fermi symmetry with his colleagues in the late 1960’s, trying to solve the puzzle of weak interactions, before the advent of the Glashow-Weinberg-Salam theory. That is why he was so much preoccupied with the problem of parity violation which is clearly visible in the first published work on four-dimensional supersymmetry. Evgeny Likhtman recollects that when he appeared in FIAN as Golfand’s PhD student, in the spring of 1968, Golfand had already found an extension of the Poincaré algebra by bispinor generators. (Today the extension found by Golfand is referred to as the super-Poincaré algebra, while the bispinor generators are called the supercharges.) In the review article it is mentioned that the searches for extensions of the Poincaré algebra conducted by Golfand in the late 1960’s were originally also motivated by the desire to bypass the well-known no-go theorems due to Coleman and Mandula, and Weinberg (or to establish new no-go theorems).
Golfand and Likhtman worked on various aspects of supersymmetry for several years. Their first published paper entitled “Extension of the Algebra of Poincaré Group Generators and Violation of $`P`$ Invariance” contained a field-theoretic model, which in modern terms can be described as supersymmetric quantum electrodynamics (QED) with the mass term of the photon/photino fields, plus two chiral matter superfields. (I suggest we call it the Golfand–Likhtman model.) Adding the photon mass term in the Abelian gauge theory does not spoil renormalizability. Alternatively, one can get this model from massless super-QED by adding a Higgs sector, and breaking U(1) spontaneously. The masses of the physical Higgs fields are then sent to infinity while the photon/photino mass is kept finite. The requirement of renormalizability was very important to Golfand, who tried to follow as closely as possible the pattern of the only respectable field theory of the time, quantum electrodynamics. On the other hand, the absence of massless particles was also a precondition – otherwise Golfand and Likhtman would have settled for massless super-QED, which is significantly simpler than the model they found. This shows that Golfand kept in mind phenomenological applications in weak interactions.
The paper was received by the Editorial office of JETP Letters on March 10, 1971. To set the time scale, I should mention that the famous paper of Gervais and Sakita, known to everybody, was received by the Editorial Office of Nuclear Physics on August 13, 1971. Golfand and Likhtman also prepared a detailed publication, which appeared in the I.E. Tamm Memorial Volume. The only date one can infer now with certainty in connection with this publication is that this Volume was sent to print on March 20, 1972. In fact, according to Likhtman’s memoirs, both papers were prepared practically simultaneously in the end of 1970. For Western readers I should explain some essential details regarding the publication process in the Soviet Union. To publish a scientific paper was much more than just typing the manuscript and mailing it to the publisher. There was a long latent period, associated with getting all sorts of clearances. First, the so-called Expert Commission (a group of authorized fellow physicists in the given institution) was supposed to study the paper and recommend its publication. According to the official rules they had to certify that no new discoveries were reported, because if they were, the Expert Commission had to recommend to classify the paper right away. Of course, people tended to stretch the official rules, otherwise not a single breakthrough paper would have ever appeared in the Soviet Union.
At the next stage the paper would go to the so-called Regime Department whose task was to check that no references to classified work or undesirable persons were made, no subversive ideas put forward, and so on. With all this paperwork done, the decision to allow (disallow) publication was to be made by the Director of the Institute. This is not the end of the story, however. All materials intended for publication had to be cleared through the so-called GLAVLIT, the almighty agency whose sole obligation was to ensure total Censorship in the country. While at the previous stages the author had at least some minimal control over what was going on with his (her) paper, GLAVLIT was a total black box.
The process of getting all clearances could extend anywhere from weeks to many months, and the paper was officially nonexistent until the very end. The author could not even refer to it in his/her further work. Thus, Likhtman’s recollections that the paper was completed in 1970, and the official submission date of March 10, 1971, are not inconsistent.
In their second paper on supersymmetry, Golfand and Likhtman described in detail a recursive procedure of building supersymmetric models. By this time Likhtman, following Golfand’s instructions, worked out the free field representations of the super-Poincaré algebra in several practically important cases (today we would say, the chiral and vector supermultiplets were constructed). So, they knew how the numbers of the boson and fermion degrees of freedom in the supersymmetric Lagrangians should be balanced. This determined a starting point – the particle content of the models to be built. Then they suggested cataloguing all possible interaction terms in the Lagrangian, compatible with renormalizability, order by order in the coupling constant, and the corresponding terms in the supercharges, with unknown coefficients. The coefficients were to be fixed by imposing the anti-commutation relation $`\{Q_\alpha ,\overline{Q}_{\dot{\alpha }}\}=2P_{\alpha \dot{\alpha }}`$, order by order in the coupling constant. (I use here the modern notation: $`Q_\alpha `$ denotes the supercharge and $`P_{\alpha \dot{\alpha }}`$ the energy-momentum operator.) Needless to say, this was a much more time- and labor-consuming procedure than the superfield formalism of the present day. Note, however, that the work I describe now took place eight years before the invention of this formalism.
In the very same paper, in addition to the already established super-Poincaré algebra, Golfand and Likhtman presented (a limiting case of) the super-deSitter algebra.
Golfand continued to work on this range of ideas even after his forced retirement, through the years of unemployment. Misha Marinov recollects: “Soon after the Wess–Zumino preprint appeared, in January or February of 1974, I was invited to give a talk on supersymmetry at the Institute of Physical Problems. Golfand was in the first row and listened to my explanations very attentively. Then we talked about all details. Yuri Abramovich was greatly impressed by the Wess–Zumino work, though he said it was too technically complicated and that his approach was more elegant. It is curious to note that Abrikosov who attended this seminar too, strongly objected against the exploitation of the prefix “super” since it was already in use in another context in superconductors.”
Later Golfand, together with Likhtman, wrote an extended review for the collection Supersymmetry: A Decade of Development, edited by Peter West, where they summarized their own results and tried to indicate where they stood in relation to other numerous results on supersymmetry which were obtained by that time. This was the last paper written as a team by Golfand and Likhtman.
## Likhtman
Under the spell of Golfand’s ill fate, the academic path of Evgeny Pinkhasovich Likhtman went astray. You will read this story in his memoirs published in this Volume. It should only be added that in the beginning of the 1970’s, before Wess and Zumino, Likhtman published several papers of his own, devoted to various aspects of supersymmetry. In particular, on page 8 of Ref. one reads: “As is known, in relativistic quantum field theory, in transforming the free energy operator to the normal-ordered form there emerges an infinite term which is interpreted as the vacuum energy. It is also known that the sign of this term is different for particles subject to the Bose and Fermi statistics. The number of boson states is always equal to the number of fermion states. From this it follows that the infinite positive energy of the boson states in any of the representations of the \[super-Poincaré\] algebra is annihilated by the infinite negative energy of the fermion states.” In one of his JETP Letters papers, Likhtman mentions in passing that in supersymmetric theories the one-loop boson mass diverges not quadratically, but, like the fermion mass, only logarithmically. Thus, he apparently was the first to establish two fundamental properties of the supersymmetric theories, distinguishing them from all others – the vanishing of the vacuum energy and the absence of the quadratic divergences. It was Likhtman who gave a talk on supersymmetry at ITEP in the 1970’s. This was my only personal encounter with him.
E. Likhtman remains an employee of the Institute of Scientific and Technical Information in Moscow to the present day. The only change is that back in the 1970’s the institute was referred to as “All-Union”, while now, with the fall of the Soviet Union, this part of the title is gone.
## Missed Crossroads
It is natural to ask why the ideas of supersymmetry did not take root in the Moscow particle physics community right away, immediately after the discovery of Golfand and Likhtman. The community was strong, vibrant and versatile, and yet it missed a key turn on the pathway of theoretical physics.
Of course, it is hardly possible now to give a certain answer. I will still suggest a few conjectures.
One of the reasons might be a negative attitude to field theory in general which was prevalent in the community after Landau’s discovery of the “zero charge.” Even after the renormalizability of the Glashow-Weinberg-Salam model was proved by ’t Hooft in 1972, some of the elders of the community, whose opinions were highly respected, continued to openly express their animosity towards field theory. A radical turn occurred only in November of 1974, after the discovery of $`J/\psi `$, with the advent of quantum chromodynamics (QCD).
Perhaps, more importantly, Golfand was not taken seriously by many of his former colleagues. To this day some of them insist that “he himself did not understand what he did because he was not really a good physicist.” This is a quotation from a letter which I got about half a year ago, when the work on this Volume began. The author of the letter then continued: “I cannot remember a single interesting statement on physics which he ever made. Usually he was quite ironic about doing physics. He would occasionally come to a seminar, sit there and then disappear, without saying much or producing anything.” I hasten to add that this opinion is by no means shared by all of Golfand’s colleagues. Human memory is rather selective, and in many cases we see what we want to see… Still, it gives an idea of the general attitude.
The point of view that Golfand did not understand what he was doing is absolutely unsubstantiated either by the analysis of three papers on supersymmetry produced by Golfand and Likhtman or by recollections of Likhtman and others. From these papers, and from the problems Golfand formulated for Likhtman’s PhD work, it is evident that Golfand clearly saw the contours of the theoretical construction they were building together with Likhtman, posed the right questions, and found adequate theoretical tools for their solution. Perhaps, some weakness was on the side of phenomenology. For instance, the issue of the parity nonconservation in the Golfand-Likhtman model, a persistent theme in Refs. , was never elaborated in full. This is easily explained by the isolation in which they were working, and the lack of enthusiasm on the side of their colleagues. The soil fertile to the ideas of Wess and Zumino, provided by CERN in 1974, was totally absent in the case of Golfand and Likhtman.
## Glimpses
Golfand was a frequent participant of the ITEP theory seminars. I used to bump into him in the corridors of ITEP regularly. At first I did not know who this small man, with warm eyes and a kind smile, was. So, I asked my thesis advisor, Prof. B.L. Ioffe. Ioffe lowered his voice to the level of whisper and replied that this was Golfand, the discoverer of supersymmetry. Later, whenever he spoke of him, Ioffe would automatically lower his voice even if we were alone in Ioffe’s office. This would emphasize, without any words, that Golfand was a nonperson.
Everybody who knew Golfand remembers his smile and his eyes. Usually he looked a little bit out of touch with reality, decoupled from the surrounding world, with thoughts directed inside rather than outside. Marinov wrote in 1994 in Proceedings of the Israeli Physical Society that the “Technion colleagues will remember forever Golfand’s smile and his quiet and sympathizing eyes.” Elsewhere he elaborates<sup>2</sup><sup>2</sup>2An excerpt from the interview with N. Portnova, 1995, unpublished.: “It was extremely interesting to socialize with Golfand. He was sparing with words, he listened more than he talked; his eyes, that were always alive and radiated warm energy, participated in the conversation.”
Here is how Lars Brink describes Golfand: “When Mike \[Green\] and I gave talks at Lebedev in 1984 he had managed to sneak in, and I met him in the shadow there. The last time I saw him was at the Sakharov meeting in 1991. He had emigrated then and was back, so I talked to him several times. He represented to me a character that I have only seen in Russia, the enormous warm heart, the sadness around the eyes, somewhat subdued, a person whom you instantly like and feel complete confidence in…” Stanley Deser, who also knew Golfand personally, called him a man who came premature.<sup>3</sup><sup>3</sup>3 I quote here from Lars Brink and Misha Marinov not accidentally. Lars (together with S. Deser, D. Gross, Y. Ne’eman, B. Zumino and some other Western physicists) was absolutely instrumental in Golfand’s survival through the years of unemployment. It was their victory when Golfand was reinstated in FIAN in 1980. Misha, a recent immigrant in Israel himself, did whatever he could to help Golfand to “blend in” into a complicated Israeli life during the most painful transition period. Golfand’s knowledge of Hebrew was rudimentary, and he would be essentially helpless without Marinov’s constant assistance. Later Marinov took the idea of this Memorial Volume close to heart; he helped to locate Golfand’s widow, Mrs. Koretz-Golfand, in Israel.
After I drafted this introduction, I sent it to a few colleagues whose opinion I value, soliciting comments. As a result, I got a letter from Prof. B. Ioffe which, to my mind, adds important touches to Golfand’s human and scientific portrait. I reproduce it below, with insignificant abbreviations.
Ioffe writes: “I knew Golfand from 1951. Very close friendly relations developed beginning in around 1957: we visited each other at home, which is very unusual for me. Once we celebrated together the New Year (1958 or 1959, I do not remember exactly), meeting the New Year midnight in a frosty and snowy forest (this was near Povarovo, 50 km from Moscow) by the fire we made. We were on skis, Golfand liked skiing as much as I did.
In the 1950’s and 60’s I often discussed physics with him; such discussions were very fruitful for me. Almost nobody knows now that Golfand invented the path integral formulation in field theory independently from Feynman. It was in the early 1950’s (probably, in 1952). He represented a field theory (he considered the scalar field theory) as an integral over the mesonic fields. Golfand did not follow Feynman’s route who started from path integrals in quantum mechanics. In fact, he did not know of Feynman’s work at that time. When later he gave a talk at the Tamm seminar, people were very skeptical, because nobody understood the subject – the presentation was different from Feynman’s, and only very few in the audience knew about Feynman’s paper, and nobody understood it either. After some time it became clear, that it was just the functional integral.
When Golfand was unemployed, there was a serious problem with getting permission for Golfand’s participation in ITEP seminars. As you remember, all “outside” participants were to be included in a “list”, which had to be cleared through the ITEP Regime Department. The permission would be granted only to people who had positions in physics institutes; the name of the institute had to be explicitly indicated in the list we submitted to the Regime Department before each seminar. Since I was responsible for the list and was signing it each time (formally till 1977 it was Berestetsky’s signature, but in fact I did it), I was committing fraud on a regular basis “putting” Golfand to some ad hoc institute, just to let him in. My whisper referred not to Golfand’s name per se, but to the fact, that he was unemployed. At this difficult time, we continued seeing each other, often exchanged phone calls, etc. After his divorce, our relations cooled off a little, though.
When I heard that Golfand was fired from FIAN, I expressed my dissatisfaction to a few FIAN people. Each time the response was: it was not me who did it. As I remember, Golfand was unemployed not all the time from 1973 to ’80; sometimes he had a part-time teaching job at a technical college. After he was taken back to FIAN, Golfand continued to attend the ITEP seminars on a regular basis, but he refused to participate in the FIAN seminars. That was the demonstration of how strongly he was offended.”
## About This Volume
When the idea of this Memorial Volume came to my mind, I wrote a letter to my colleagues, fellow theoretical physicists. I sent it to about two dozen active members of the HEP community, those who determine the trends of the modern high-energy physics, and to several physicists from the younger generation – some of them I considered to be rising stars. The response was overwhelmingly positive. With one or two exceptions, all agreed to participate enthusiastically, and I got many very valuable suggestions as to the structure of the Volume. Its scientific part will, hopefully, represent a full picture of the huge tree into which supersymmetry grew today. The book will be used in the community in the years to come, and this is the best tribute to Golfand one can think of. I am sincerely grateful to all participants of the project, to whom I would like to say thank you. Together, we did a good job.
Also of importance is the first part of the book, which consists of the memoirs of Golfand’s widow, Mrs. Natasha Koretz-Golfand, and his former student, Dr. Evgeny Likhtman. They are both emotional and moving, precious evidence of the past, gone forever… I am very grateful to Mrs. Koretz-Golfand and to Dr. Likhtman for their willingness to share with us, and with those who will come after us, their personal recollections. Also included in the first part is a historical survey of the scientific ideas that paved the way to supersymmetry, written by Prof. M. Marinov, and the English translation of the Golfand-Likhtman paper from the Tamm memorial Volume.
## Conclusions
Golfand’s career in theoretical physics spanned over 40 years. He was the author of several dozen papers<sup>4</sup><sup>4</sup>4See the list of Golfand’s publications at the end of this Volume. devoted to aspects of field theory, out of which two<sup>5</sup><sup>5</sup>5In fact, one can view these two papers as a short and long version of one and the same paper. In essence, Golfand will be remembered as a one-work man. How many 200-paper collections would be gladly traded for a single paper, like that? opened to us the doors to the superworld that will stay with us forever. I will risk saying that the discovery of supersymmetry was the single most important contribution of Soviet fundamental physics after WW II. I will go so far as to conjecture that supersymmetry will play the same revolutionary role in physics of the 21st century as special and general relativity in physics of the 20th century. Treatises on the pioneers of supersymmetry – and Yuri Golfand definitely belongs to them – will be written by professional historians of science.
Minneapolis, August 24, 1999
# Visualizing Conformations in Molecular Dynamics
## I Introduction
Molecular dynamics simulations on large computers have become one of the mainstays for investigating the functions of biomolecules. Using statistical algorithms, they create a large number of snapshots of the molecule that approximate the expected distribution of molecular shapes in actual molecular processes. By looking for typical shapes (conformations) and transition paths in this large data set, biochemists can learn about the molecular bases of biochemical processes. Such understanding is important in particular in designing more efficient medical drugs.
Identifying typical shapes in such a large data set is itself a difficult task . In most simulations, one focuses on a few characteristic numbers, like the angles or distances between specific atoms in the molecule, and monitors the change of these quantities in the simulation. This approach requires some advance knowledge about which parts of the molecule are important to the dynamics, and also makes it difficult to perform the analysis automatically.
To use all the information computed in a molecular dynamics simulation, one must leave the analysis step again to a computer. We present here two procedures to aid in this task: a projection method to visualize the molecular configurations of a trajectory as a point set in the plane, and a cluster analysis to identify clusters of similar configurations in the trajectory. These methods can be applied automatically to any molecular dynamics trajectory and result in a tentative identification of conformations in the trajectory.
## II Procedure
### II.1 Configurations and feature vectors
The output of a molecular dynamics simulation is a trajectory, i.e. sequence of configurations that depicts the evolution of the molecule in time. If the molecule consists of $`n`$ atoms, the configuration $`x`$ is described by $`n`$ 3-dimensional vectors $`𝐱_i𝐑^3`$ that specify the cartesian position of each atom in 3-dimensional space. Other information, in particular which atoms are connected by chemical bonds, is a property of the molecule and usually does not change during the simulation.
To classify the configurations, we must quantify how much their geometries differ. We thus assign to each configuration a feature vector that describes the geometry of the configuration in such a way that similar configurations have similar feature vectors. However, the set of $`3n`$ numbers that make up the cartesian positions of the atoms is unsuited, as identical geometries can appear with different rotations and translations. While the translational freedom can easily be fixed by requiring that the center of mass coincides with the origin of the coordinate frame, the rotational degree of freedom is extremely difficult to eliminate. Fixing the axes of inertia can lead to sudden artificial rotations when the axes become degenerate, while fixing certain atoms to the coordinate axes always introduces an undesirable bias.
A feature vector that is invariant under translations and rotations and does not introduce any bias can be chosen by considering the set of intramolecular distances
$$\{d_{ij}(x)=|𝐱_i-𝐱_j|,\quad i,j\in 1,\dots ,n\}.$$
(1)
The price to pay is that instead of $`3n`$ elements, this vector has now $`n(n-1)/2`$, but geometrically identical configurations have identical feature vectors, and the cartesian distance in $`n(n-1)/2`$-dimensional space is a natural measure of conformational distance:
$$d(x,y)=\sqrt{\frac{1}{n(n-1)/2}\sum _{i>j}\left(d_{ij}(x)-d_{ij}(y)\right)^2}.$$
(2)
Another frequent way to choose a feature vector is to use the dihedral angles between certain atoms as basic degrees of freedom. This is a natural choice as dihedral angles are the main degrees of motion in the simulations (atomic distances and bond angles are usually much more rigid). However, the potential energy that determines the dynamics of the molecule depends on the spatial distance of the atoms, and the relation between dihedral angles and spatial distances is involved at best. We prefer here to put all information as unbiased as possible to the algorithm and depend on it to extract the relevant degrees of freedom.
The feature vector (1) is vast compared to the number of degrees of freedom (in our example molecule, it has 2415 elements as compared to $`70\times 3-6=204`$ degrees of freedom). Some of its elements will show little or no fluctuation (e.g. the ones associated to the lengths of chemical bonds), others will fluctuate thermally, and still others will assume different values in different conformations and thus exhibit a double-peaked distribution. To reduce the thermal noise, we analyze the elements of the feature vector statistically and select those whose distribution has the largest width. As thermal fluctuations are smaller than the conformational changes, this also selects the distances most affected by conformational changes. A similar procedure has been used by to identify essential degrees of freedom in cartesian coordinate space.
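A minimal sketch of this feature-extraction step (Python/NumPy; the array layout and the number of retained distances are our own illustrative choices, not those of the original analysis code):

```python
import numpy as np

def feature_vector(coords):
    # Intramolecular distances d_ij, i > j, of Eq. (1) for one configuration.
    # coords: (n, 3) cartesian atom positions; the result is invariant under
    # rotations and translations of the molecule.
    diff = coords[:, None, :] - coords[None, :, :]
    dmat = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.triu_indices(len(coords), k=1)
    return dmat[i, j]

def conformational_distance(fx, fy):
    # Distance (2) between two configurations, given their feature vectors.
    return np.sqrt(np.mean((fx - fy) ** 2))

def widest_features(features, keep=500):
    # Keep the distances whose distributions over the trajectory have the
    # largest width; 'features' has shape (N_configs, n(n-1)/2).
    widths = features.std(axis=0)
    return np.argsort(widths)[::-1][:keep]
```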
### II.2 Low-dimensional approximations
The feature vector space is by far too large to be visualized directly. To capture the major properties of the point set that represents a trajectory in this space, we seek to visualize it in a plane, i.e. to assign each configuration a point in the two-dimensional plane such that the geometrical similarity between configurations is reproduced as faithfully as possible. After having chosen a distance measure (2) on the trajectory, this reduces to the general problem of visualizing an arbitrary distance matrix $`D_{ij}`$ between a set of $`N`$ configurations, where $`i`$ and $`j`$ now number configurations.
One choice is to require that the mean quadratic deviation of the conformational distance from the distance in the plane, given by
$$D^2=\sum _{i>j}\left(|𝐱_i-𝐱_j|-D_{ij}\right)^2$$
(3)
is minimized by the choice of the points $`𝐱_i`$, i.e. that the derivative of the quantity with respect to the position of the $`k`$-th point
$$\frac{\partial D^2}{\partial 𝐱_k}=\sum _i\frac{𝐱_k-𝐱_i}{|𝐱_k-𝐱_i|}\left(|𝐱_k-𝐱_i|-d_{ik}\right)=0$$
(4)
vanishes. This equation can be pictured physically by a set of springs that connect the points and whose natural length is given by the desired distance between the points.
We solve the minimum problem of (3) numerically by the conjugate-gradient method. Though there is no guarantee that the minimum found by this method is the global one, the minimization takes place in a $`2N`$-dimensional space where it is improbable that a false minimum is stable in all directions. An example of this is the situation where we have a solution of Eq. (4) in $`D-1`$ dimensions and then extend the solution space to $`D`$ dimensions by setting $`x_{i,D}=0`$ for all $`i`$, which still satisfies Eq. (4). However, this minimum (in $`D-1`$ dimensions) now turns out to be a saddle point in $`D`$ dimensions, where the second derivative of $`D^2`$ is
$$\frac{\partial ^2D^2}{\partial x_{k,D}\partial x_{l,D}}=\{\begin{array}{cc}\frac{d_{kl}-|𝐱_k-𝐱_l|}{|𝐱_k-𝐱_l|}& \text{if }k\ne l\hfill \\ -\sum _{i\ne k}\frac{d_{ki}-|𝐱_k-𝐱_i|}{|𝐱_k-𝐱_i|}& \text{if }k=l\hfill \end{array}$$
(5)
In a true minimum, this quantity is positive, thus requiring that
$$d_{kl}\ge |𝐱_k-𝐱_l|\quad \text{for all }k\ne l$$
(6)
but also
$$\sum _{i\ne k}\frac{d_{ki}-|𝐱_k-𝐱_i|}{|𝐱_k-𝐱_i|}\le 0\quad \text{for all }k.$$
(7)
This will happen only if the first inequality is an equality, i.e. if the solution is complete.
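A sketch of this stress minimization (Python/SciPy; the random initialization and the convergence settings are our assumptions, not those of the original program), with the gradient (4) supplied analytically:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def plane_map(D, dim=2, seed=0):
    # Map N configurations into 'dim' dimensions by minimizing the stress (3)
    # with the conjugate-gradient method; D is the N x N conformational
    # distance matrix.
    N = D.shape[0]
    x0 = np.random.default_rng(seed).standard_normal(N * dim)
    iu = np.triu_indices(N, k=1)

    def stress(x):
        d = pdist(x.reshape(N, dim))          # same pair ordering as iu
        return ((d - D[iu]) ** 2).sum()

    def grad(x):
        # Eq. (4): sum_i (x_k - x_i)/|x_k - x_i| (|x_k - x_i| - D_ki), times 2.
        X = x.reshape(N, dim)
        d = squareform(pdist(X)) + np.eye(N)  # eye avoids division by zero on the diagonal
        w = (d - D - np.eye(N)) / d
        np.fill_diagonal(w, 0.0)
        diff = X[:, None, :] - X[None, :, :]
        return 2.0 * (w[:, :, None] * diff).sum(axis=1).ravel()

    res = minimize(stress, x0, jac=grad, method="CG")
    return res.x.reshape(N, dim)
```

The random start makes coincident points, where the gradient would be singular, unlikely.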
Another widely used low-dimensional approximation is based on the singular-value decomposition (SVD) of the feature matrix . Let $`a_{ij}`$ be the feature matrix of $`i=1,\dots ,n`$ objects with $`j=1,\dots ,m`$ features each. (In our example, $`n`$ is the number of configurations while $`m`$ is the number of intramolecular distances.) The singular-value decomposition expresses this matrix as a series
$$a_{ij}=\sum _k\lambda _ku_i^{(k)}v_j^{(k)}$$
(8)
where $`𝐮^{(k)}`$ and $`𝐯^{(k)}`$ are $`n`$\- and $`m`$-dimensional, resp., orthonormalized basis vectors, and $`\lambda _k`$ gives the weight of $`k`$-th term. The number of terms in the series is the rank of the matrix, it is at most the lower of $`n`$ and $`m`$.
The relation between singular-value decomposition and point sets in low-dimensional space can be seen by calculating the distance between feature vectors in terms of the SVD:
$`D_{ij}^2`$ $`=`$ $`{\displaystyle \sum _k}(a_{ik}-a_{jk})^2`$ (9)
$`=`$ $`{\displaystyle \sum _k}\left({\displaystyle \sum _l}\lambda _l(u_i^{(l)}-u_j^{(l)})v_k^{(l)}\right)^2`$
$`=`$ $`{\displaystyle \sum _k}\lambda _k^2\left(u_i^{(k)}-u_j^{(k)}\right)^2`$
when orthonormality of $`v^{(k)}`$ is taken into account. Thus the vectors
$$\{\lambda _ku_i^{(k)}:k=1,\dots ,m\}$$
(10)
can be interpreted as specifying the cartesian positions in $`m`$-dimensional space of the $`i`$-th data point. When we choose the $`\lambda _k`$ in decreasing order, truncating the singular-value decomposition after $`l`$ terms will lead to position vectors in $`l`$-dimensional space that are best approximations in a linear sense.
The major difference between the two approaches is that the SVD performs the approximation in a linear manner: When the dimension of the approximation space is decreased from $`D`$ to $`D-1`$, the new approximation is simply obtained by orthogonally projecting out the last coordinate. In contrast, in the approximation obtained from minimizing (3), the nonlinearity introduced by the square root redistributes some of the “lost” distance in the remaining dimensions.
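The SVD construction itself is a few lines in practice; a sketch (Python/NumPy; whether to center the feature matrix first is left open by the text, so the centering here is our choice):

```python
import numpy as np

def svd_map(A, dim=2):
    # Positions (10) from the singular-value decomposition (8) of the feature
    # matrix A (N configurations x m selected distances), truncated to 'dim'
    # terms; this gives the best approximation in the linear sense.
    A = A - A.mean(axis=0)                         # centering: our choice
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :dim] * s[:dim]                    # lambda_k u_i^(k), k = 1..dim
```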
### II.3 Cluster analysis
Cluster analysis is a statistical method to partition the point set into disjoint subsets with the property that the points in a subset are in some sense closer to each other than to the remaining points. There are several different ways to make this statement mathematically precise. We choose the notion of minimum residual similarity between clusters which leads to a natural formulation of the problem in terms of eigensystem analysis and to a heuristic algorithm for its solution. This spectral method goes back to works by Donath and Hoffmann on graph partitioning in computer logic and Fiedler on acyclic graphs and was later picked up by Hendrickson . Other cluster analysis methods based on neural networks or fuzzy clustering have also been applied to molecular dynamics simulations .
Amadei et al. went further by introducing the concept of essential dynamics in which the coordinate space of the molecule is split into a small essential subspace and a larger non-essential subspace. They assumed a linear factorization of the coordinate space and identified essential coordinates by large second moments of their distribution, assuming that these distributions are mainly non-Gaussian double-peaked shapes.
To be as flexible as possible, we assume that a similarity measure
$$0\le a_{ij}\le 1,\quad 1\le i\le n$$
(11)
is given between the $`n`$ data points, where $`a_{ij}=0`$ indicates complete dissimilarity and $`a_{ij}=1`$ complete identity of configurations $`i`$ and $`j`$. The residual similarity of a cluster $`C\subset \{1,\dots ,n\}`$ characterizes how similar elements of the cluster are to elements outside the cluster
$$R(C)=\sum _{i\in C,j\notin C}A_{ij}.$$
(12)
We wish to partition the data set into two subsets such that this quantity is minimized. Let $`a_i`$ be the characteristic vector of this partition, with value $`a_i=1`$ indicating that $`i\in C`$, and otherwise $`a_i=-1`$. Then the residual similarity can be rewritten
$`R(C)`$ $`=`$ $`{\displaystyle \frac{1}{4}}{\displaystyle \sum _{ij}}(a_i-a_j)^2A_{ij}`$ (13)
$`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \sum _{ij}}a_i\left({\displaystyle \sum _k}A_{ik}\delta _{ij}-A_{ij}\right)a_j`$
$`=`$ $`(a,Ma)`$
with the Laplacian matrix
$$M_{ij}=\{\begin{array}{cc}-A_{ij}& \text{if }i\ne j\hfill \\ \sum _kA_{ik}& \text{if }i=j\hfill \end{array}.$$
(14)
To find the minimum of the expectation value $`(a,Ma)`$ over the vectors that have elements $`\pm 1`$ only is a hard combinatorial problem. However, if we relax the problem and allow real values for the $`a_i`$ with the constraint $`|a|=1`$, the problem is exactly the problem of finding the second-lowest eigenvector of the matrix $`M`$ (the lowest eigenvector corresponds to the solution $`a_i\equiv 1`$ that does not lead to a proper partition). Since eigenvectors are orthogonal, the second-lowest eigenvector satisfies
$$\sum _ia_i^2=1\quad \text{and}\quad \sum _ia_i=0$$
(15)
and thus guarantees that it will contain both positive and negative components. In graph theory, this eigenvector is called the characteristic valuation of a graph.
Low-lying eigenvectors of a matrix can be found using iterative methods even for moderately large matrices. There is, however, no safe way to recover from it the solution of the combinatorial problem, where $`a`$ is restricted to values of $`\pm 1`$, but it can be argued that for most matrices the eigenvector will constitute a good approximation to the combinatorial problem. We thus map the continuous value $`a_i`$ to a discrete value $`\stackrel{~}{a}_i`$ using a threshold $`l`$:
$$\stackrel{~}{a}_i=\{\begin{array}{cc}-1& \text{if }a_i\le l\hfill \\ +1& \text{if }a_i>l\hfill \end{array}.$$
(16)
The threshold can now be determined by minimizing the residual similarity over all possible thresholds. In this way, the minimization problem is reduced from $`n!`$ to just $`n`$ options, and the characteristic valuation serves as a heuristic to determine the options that are taken into consideration.
The measure of residual similarity favors in general splitting off a single point from the data set since (13) contains in this case only $`n1`$ terms, as compared to $`n^2/4`$ when splitting symmetrically. This automatically introduces a quality control in the splits, as central splits occur only when the cluster separation is rather favorable, but might also hinder the analysis of noisy data. However, the special form (13) was only chosen to turn the problem into an eigenvalue problem. As the whole procedure is heuristic in nature, we may well decide to use a different similarity measure when determining the splitting threshold, e.g. a measure that includes a combinatorial factor
$$R(C)=\frac{1}{|C|(n-|C|)}\sum _{i\in C,j\notin C}A_{ij}.$$
(17)
Which measure is correct depends mainly on the application. The original measure is stricter in what it returns as a cluster, while the latter measure favors balanced splittings. In some problems, like partitioning matrices for processing in a parallel computer, one may even demand that each split is symmetrical.
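A sketch of one bisection step (Python/NumPy; a dense eigensolver is used for brevity, whereas the iterative methods mentioned above would be preferred for long trajectories):

```python
import numpy as np

def spectral_split(A, balanced=False):
    # One split: the characteristic valuation is the second-lowest eigenvector
    # of the Laplacian matrix (14); a sweep over the n-1 thresholds it suggests
    # then minimizes the residual similarity (12), or the variant (17) with the
    # combinatorial factor when balanced=True.
    n = A.shape[0]
    M = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(M)
    order = np.argsort(vecs[:, 1])
    best_cut, best_cost = 1, np.inf
    for cut in range(1, n):
        R = A[np.ix_(order[:cut], order[cut:])].sum()
        if balanced:
            R /= cut * (n - cut)
        if R < best_cost:
            best_cut, best_cost = cut, R
    return order[:best_cut], order[best_cut:]
```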
Another approach taken frequently in cluster analysis is to use the singular-value decomposition . If we go back to the feature matrix $`A_{ij}`$ and its singular value decomposition (8), it turns out that the vectors $`u^{(l)}`$ correspond to minima of the expectation value with respect to the feature matrix squared, i.e.
$$\sum _k\left(\sum _iu_iA_{ik}\right)^2=\sum _{ij}u_i(A_i\cdot A_j)u_j$$
(18)
where we introduced the row vectors $`A_i`$ of the matrix $`A_{ik}`$, i.e. the feature vector of data point $`i`$. Thus in this approach the role of the similarity matrix is taken over by the scalar-product matrix of the feature vectors. The major differences are
1. The scalar products are not less than or equal to one, but this could easily be fixed by globally rescaling the scalar-product matrix, which does not change the vectors $`u^{(l)}`$.
2. The scalar products can be negative. The notion of a scalar-product is not of similarity and dissimilarity but rather the trichotomy of similar, orthogonal, and antagonistic.
Thus, singular-value decomposition seems suitable for feature vectors that characterize orthogonal qualities. However, this is not the case in our feature vectors, so we chose a similarity measure based upon distance.
After partitioning the data set into two subsets, we proceed to apply the algorithm again to these subsets. In this way, one obtains a splitting tree that terminates only when the subset size is smaller than three. For many applications, such a tree is already quite useful as it orders the data points in such a way that similar data points are usually close to each other.
To identify clusters in the splitting tree, we found that the average width of the cluster relative to that of its parent cluster gives the best indication. To calculate the average width of a cluster we use the Euclidean distance in the high-dimensional space and average over all distinct pairs of points in a cluster. This quantity relative to that of the parent cluster basically indicates how much closer the points are on average in the subcluster than in the original cluster, and thus how much the split improves the cluster criterion. Consider e.g. the situation where there are three clusters. The first split will result in one correctly identified cluster and a second pseudo-cluster that encompasses the other two, but the relative width of the true cluster will be much smaller than that of the pseudo-cluster. Only after the next split will it be revealed that the latter consists of two clusters. Typical values for this quantity are between $`0.5`$ and $`0.8`$.
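The recursion and the width criterion can be sketched as follows (building on `spectral_split` above; the dictionary layout of the tree is our own convention):

```python
import numpy as np
from scipy.spatial.distance import pdist

def average_width(F):
    # Mean distance over all distinct pairs of points, computed from the
    # feature vectors F of one cluster in the high-dimensional space.
    return pdist(F).mean()

def split_tree(A, F, members=None, min_size=3):
    # Recursive bisection; each child records its average width relative to
    # the parent (typical values quoted above are between 0.5 and 0.8).
    if members is None:
        members = np.arange(A.shape[0])
    node = {"members": members, "children": []}
    if len(members) < min_size:
        return node
    left, right = spectral_split(A[np.ix_(members, members)])
    w_parent = average_width(F[members])
    for part in (members[left], members[right]):
        child = split_tree(A, F, part, min_size)
        if len(part) >= 2:
            child["relative_width"] = average_width(F[part]) / w_parent
        node["children"].append(child)
    return node
```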
## III Results
We apply our methods to a molecular dynamics simulation of the molecule adenylyl(3’-5’)cytidylyl(3’-5’)cytidin in vacuum . This is a very simple tri-ribonucleotide, consisting of three residues and 70 (effective) atoms. The simulation was performed using the GROMOS96 extended atom force field. For the analysis, we chose a subset of 1000 configurations equidistantly from the trajectory.
Fig. 1 shows the two-dimensional map of the trajectory found by minimizing (3). The points are connected by a line in the same sequence as they are generated in the Monte Carlo simulation. This information does not enter in determining the locations of the points in the plane, so the fact that the line segments are rather short indicates that points adjacent in the trajectory are mapped to nearby points in the plane and thus are recognized as geometrically similar by the algorithm. The one pair of lines that crosses nearly the whole plane horizontally is actually made up of the first three data points and is therefore a transient effect from before the molecule became equilibrated.
We immediately notice that there are at least three clearly different groups of points which constitute conformations in a geometrical sense, i.e. subsets of the trajectory with similar geometrical properties. That they are also dynamical conformations can be seen from the fact that the connecting line of the points only very rarely crosses from one point group into the next. This again confirms that the two-dimensional layout in the plane chosen by the algorithm represents correctly the dynamics of the system.
The representation of Fig. 1 can be compared to a representation where the dimensionality of the system is reduced by chemical understanding. As the system consists of three residues, most of the conformational dynamics can be assumed to be in the geometrical layout of the residues. This can be described by only three numbers, and we chose two of them to create the two-dimensional representation shown in Fig. 2. This picture is similar to Fig. 1 in that there appear approximately three distinct point groups, and it can be verified that they correspond to the point groups from Fig. 1. However, the separation of the point groups is less clear than in Fig. 1. This indicates that the conformational dynamics is not simply the motion of the centers of the residues, but there are also smaller rearrangements in the residues themselves that are correlated to the large-scale motions. By considering an unbiased measure for geometrical similarity, all those little rearrangements enter and reinforce the distance between conformations in the plane.
### III.1 Cluster analysis
Applying the cluster algorithm to the similarity matrix of the trajectory, the first few splits remove 22 isolated points before the small cluster shown in Fig. 3a with 40 points and a relative width of 0.46 (both compared to its immediate predecessor and to the initial point set) shows up. The remaining points are split some steps further into a cluster with 698 points shown in Fig. 3b and another cluster with 230 points shown in Fig. 3c with relative widths of about 0.53. After some more steps, the larger subcluster is broken into three subclusters with 388, 52, and 21 points, resp., and relative widths of 0.91, 0.73, and 0.61, resp., as shown in Fig. 3d, e, and f. Similarly, the smaller subcluster also separates into three weak subclusters.
The splitting line of the large cluster at the bottom is also visible in Fig. 1. Such a pattern usually indicates that besides a large conformational change that induces the three clearly visible clusters, where the middle cluster is clearly a transitional state, there is another smaller conformational change, possibly in one of the glucose rings, independent of the larger one. As it only affects a small part of the molecule, the conformational distance is smaller and is then imprinted like a fine structure on the clusters. That such changes are visible in the plot is an advantage of considering all atom coordinates without bias.
## IV Conclusions and Outlook
We have demonstrated a method for projecting a molecular dynamics trajectory onto a plane to capture the conformational structure of the trajectory. Conformations can in this way be easily identified by visual inspection. Cluster analysis on the full conformational distance matrix also revealed these clusters, but also allowed to discern fine structure inside the clusters caused by smaller conformational changes.
To simplify the analysis of a trajectory, we have created a Java application that reads the output files of the combined plane mapping/cluster analysis program and displays the two-dimensional map. This program interacts directly with an Open Inventor molecular visualization application by means of a Unix pipe. Whenever the user selects a point in the plane, the corresponding configuration is shown in the visualization program. The user can also choose to display identified clusters using different colors in the map.
Identifying which parts of the molecule are responsible for different conformations is a much more difficult problem. We use a visualization application that allows the user to form groups of atoms that are visualized by ellipsoids. In this way, a molecule can be easily reduced to its functional groups where it is much easier to spot conformational changes. However, small conformational changes as those that show up as a fine structure on the plane map are easily lost in this representation.
As a first attempt at aiding the eye in discovering unusual motions of the molecule, we implemented a simple OpenGL effect in the visualization application that allows to blend several frames of an animation in the hope that large changes stand out more clearly in this representation. Fig. 4 shows one such picture. Certainly more research can be expended on how to identify and visualize the essential degrees of freedoms.
The concept of essential molecular dynamics has been introduced to reduce the number of degrees of freedom in the simulation. Both the plane map and the cluster analysis can be used to impose new coordinates on the system. For the point map, these are simply the $`x`$ and $`y`$ positions of the configuration in the plane. Once a certain point map has been established, new configurations can be fitted into the plane by minimizing the residual distance while keeping all other points fixed. Similarly, the cluster analysis assigns to each configuration a position in the tree that can be seen as a (discrete) essential coordinate. How such essential coordinates can be reintroduced into the dynamics of the system is still an open question.
# Physical Effects of Infrared Quark Eigenmodes in LQCD Talk presented by E. Eichten
## 1 Basics
In the truncated determinant approach (TDA) to full QCD, the quark determinant, $`𝒟(A)=det(H)=det(\gamma _5(D/(A)-m))`$, is split up gauge invariantly into an infrared part and an ultraviolet part.
$$𝒟(A)=𝒟_{IR}(A)𝒟_{UV}(A)$$
(1)
The ultraviolet part, $`𝒟_{UV}`$, can be accurately fit by a linear combination of a small number of Wilson loops. The infrared part $`𝒟_{IR}(A)`$ is defined as the product of the lowest $`N_\lambda `$ positive and negative eigenvalues of $`H`$, with $`|\lambda _i|\le \mathrm{\Lambda }_{QCD}`$ (typically, $`\sim `$ 400-500 MeV). The eigenvalues $`\lambda _a`$ of $`H`$ are gauge invariant and measure quark off-shellness. The cutoff (for the separation in Eq. 1) is tuned to include as much as possible of the important low-energy chiral physics of the unquenched theory while leaving the fluctuations of $`\mathrm{ln}𝒟_{IR}`$ of order unity after each sweep updating all links with the pure gauge action (assuring a tolerable acceptance rate for the accept/reject stage). This procedure works well even for kappa values arbitrarily close to kappa critical.
Initial studies using TDA focus on the qualitative physical effects of the inclusion of the infrared quark eigenmodes. For this purpose, coarse lattices with large physical volumes are appropriate.
## 2 Bellwethers
The coarse lattices given in Table 1 are being studied on PC clusters. The $`O(a^2)`$ improved gauge action, $`\beta =6.8[1.0(plaq)-0.08268(rect)-0.01240(para)]`$, was adjusted in Ref. to have approximately the same lattice scale $`a=0.4fm`$ as the naive gauge action at $`\beta =4.5`$. The physical lattice size is 2.4 (fm) for PW6 and RW6 and 3.2 (fm) for RW8; $`\kappa _c=.2190`$ for PW6, and $`.1960`$ for RW6 and RW8; and the eigenvalue cutoff scale is 560 MeV for PW6 and RW6, and 445 MeV for RW8. In addition, $`10^3\times 20`$ lattices at $`\beta =5.7`$, $`c_{SW}=1.57`$ and $`n_{cut}=520`$ ($`E_{cut}=460MeV`$) are being generated on ACPMAPS.
As shown in Figure 1 it took about 10,000 full steps for the PW6 lattice configurations to equilibrate (reflecting critical slowing down - a few hundred suffice on small volumes).
Four bellwethers can be used to characterize the physical differences between quenched and full QCD. They are discussed (in order of increasing difficulty to observe in lattice calculations) in the following four subsections.
### 2.1 Topolopy
The topological charge, $`Q_{TOP}`$, can be expressed in terms of the eigenvalues of the Wilson-Dirac operator.
$$Q_{TOP}=\frac{1}{2\kappa }(1-\frac{\kappa }{\kappa _c})\sum _{i=1}^{N}\frac{1}{\lambda _i}$$
(2)
This sum is quickly saturated by the low eigenvalues.
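A short sketch of how (2) can be evaluated in practice (Python/NumPy; that the input eigenvalues come signed and ordered by increasing magnitude is our assumption):

```python
import numpy as np

def q_top_partial_sums(lam, kappa, kappa_c):
    # Partial sums of Eq. (2) over the signed eigenvalues lam of H; the
    # running estimate should saturate once the low modes are included.
    lam = np.asarray(lam, dtype=float)
    return (1.0 - kappa / kappa_c) / (2.0 * kappa) * np.cumsum(1.0 / lam)
```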
In full QCD, configurations with very small eigenvalues of $`H`$ are suppressed by the quark determinant factor. In particular, non-zero topological charges must be suppressed in the chiral limit ($`m_q\to 0`$). Furthermore, the functional dependence of the topological charge distribution, $`P_Q`$, on the light quark mass $`m_q`$ is predicted by the chiral analysis of Leutwyler and Smilga.
$$P_Q=I_Q(x)^2-I_{Q+1}(x)I_{Q-1}(x)$$
(3)
where
$$x=\frac{1}{2}Vf_\pi ^2m_\pi ^2=Vm_q<\overline{\psi }\psi >,$$
(4)
$`I_Q`$ are modified Bessel Functions of order Q and V is the total space-time volume.
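The distribution (3) is straightforward to evaluate; a sketch (Python/SciPy; the values of $`x`$ are purely illustrative):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def p_q(Q, x):
    # Unnormalized Leutwyler-Smilga weight (3) for topological charge Q.
    return iv(Q, x) ** 2 - iv(Q + 1, x) * iv(Q - 1, x)

Qs = np.arange(-6, 7)
for x in (0.5, 5.0):              # lighter quark (small x) vs. heavier (larger x)
    w = p_q(Qs, x)
    print(x, np.round(w / w.sum(), 4))
```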
The quantitative agreement with the expected behaviour of Eq. 3 has already been reported for the TDA method. General agreement is also observed on the coarse lattices of the present studies.
### 2.2 Eta Prime Mass
The relation between the axial $`U(1)`$ anomaly and the $`\eta ^{\prime }`$ mass is well understood in full QCD. For two light quarks ($`N_f=2`$), $`m_{\eta ^{\prime }}^2=m_\pi ^2+m_0^2`$ where $`m_0^2=2N_f\chi /f_\pi ^2`$ and the topological susceptibility is $`V\chi =<Q_{TOP}^2>_{\mathrm{quenched}}`$. The full $`\eta ^{\prime }`$ propagator is the sum of a connected (valence quark) term and a disconnected (hairpin) term. Thus, in the continuum, the momentum space full propagator can be written:
$`(p^2+m_\pi ^2+m_0^2)^{-1}=(p^2+m_\pi ^2)^{-1}`$
$`-m_0^2(p^2+m_\pi ^2)^{-1}(p^2+m_\pi ^2+m_0^2)^{-1}`$
These separate terms and their sum are shown in Figure 2 for the RW6 lattices. The cancellation between the valence and hairpin terms in the full propagator is evident.
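The cancellation between the two terms is an exact algebraic identity, which a few lines verify directly (Python/NumPy; the masses are arbitrary illustrative numbers, not fitted values):

```python
import numpy as np

m_pi2, m02 = 0.04, 0.25                      # illustrative masses squared
p2 = np.linspace(0.0, 2.0, 50)
connected = 1.0 / (p2 + m_pi2)               # valence-quark term
hairpin = -m02 / ((p2 + m_pi2) * (p2 + m_pi2 + m02))
full = 1.0 / (p2 + m_pi2 + m02)
assert np.allclose(connected + hairpin, full)
```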
### 2.3 Static Energy
To date no convincing evidence for string breaking in full QCD has been presented using calculations of the static energy alone. However, string breaking has been seen using the TDA method in 2D QED and by the HMC method in 3D QCD. Studies of the $`\overline{c}c`$ and $`\overline{b}b`$ systems lead to the expectation that virtual pair effects (below heavy-light meson pair production threshold) will soften the linear rise in the static energy, while above threshold the potential will flatten out, i.e., the string will break. The heavy-light meson mass is $`0.81\pm .02`$ for the RW6 lattices.
Figure 3 shows the static energy for 200 RW6 lattices versus the same number of unquenched $`6^4`$ lattices at $`\beta =4.5`$. There is evidence of screening from the virtual pairs but no hard evidence of string breaking is found. Seeing string breaking will require more statistics (to study $`(T=3)/(T=2)`$ with small error bars) and also the study of the $`RW8`$ lattices.
### 2.4 Vector Meson Resonances
For the RW6 and RW8 lattices at $`\kappa =.1950`$, the rho mass is 1.33 and the pion mass is 0.205 (in lattice units); hence the $`\pi /\rho `$ mass ratio is close to the physical value. For example, the rho propagator for the RW6 lattice is shown in Figure 4. However, since this is a P-wave coupling, the physical volume of the lattice must be large enough that the decay is allowed with the first nonzero momentum, $`p_{min}=\frac{2\pi }{Na}`$. This requires a $`10^4`$ lattice (RW10) or creating a rho with initial momentum $`p_{min}`$. Neither of these alternatives has been studied as yet.
## 3 Status
The present status of full QCD bellwethers is as follows:
* The behaviour of the topological charge distribution $`\langle Q^2\rangle `$ as a function of light quark mass $`m_q`$ – Seen.
* The eta prime mass - $`m_{\eta ^{\prime }}^2=m_\pi ^2+m_0^2`$ – Seen.
* The static energy - string breaking. – In progress but needs more statistics.
* Vector meson resonances - $`\rho \pi \pi `$. – Yet to be studied in detail.
Results for these four bellwether processes on coarse lattices should be available within a few months.
# Experimental Evidence for Resonant-Tunneling in a Luttinger-Liquid
## Abstract
We have measured the low temperature conductance of a one-dimensional island embedded in a single mode quantum wire. The quantum wire is fabricated using the cleaved edge overgrowth technique and the tunneling is through a single state of the island. Our results show that while the resonance line shape fits the derivative of the Fermi function, the intrinsic line width decreases in a power-law fashion as the temperature is reduced. This behavior agrees quantitatively with Furusaki’s model for resonant tunneling in a Luttinger-liquid.
One-dimensional (1D) electronic systems are expected to show unique transport behavior as a consequence of the Coulomb interaction between carriers. Unlike in two and three dimensions, where the Coulomb interaction affects the transport properties only perturbatively, in 1D it completely modifies the ground state from its well-known Fermi-liquid form, and the Fermi surface is qualitatively altered even for weak interactions. Today, it is well established theoretically that the low temperature transport properties of interacting 1D-electron systems are described in terms of a Luttinger-liquid rather than a Fermi-liquid. The difference between a Luttinger-liquid and a Fermi-liquid becomes dramatic already in the presence of a single impurity. According to Landauer’s theory the conductance of a single channel wire with a barrier is given by $`G=\left|t\right|^2e^2/h`$, where $`\left|t\right|^2`$ is the transmission probability through the barrier. This result holds even at finite temperatures, assuming the transmission probability is independent of energy, as is often the case for barriers that are sufficiently above or below the Fermi energy. In 1D, interactions play a crucial role in that they form charge density correlations. These correlations, similar in nature to charge density waves, are easily pinned by even the smallest barrier, resulting in zero transmission and, hence, a vanishing conductance at zero temperature. At finite temperatures the correlation length is finite and the conductance decreases as a power-law of temperature, $`G(T)\propto T^{2/g-2}`$. Here $`g\approx 1/\sqrt{1+\frac{U}{2E_F}}`$ where $`U`$ is the Coulomb energy between particles and $`E_F`$ is the Fermi energy in the wire. Despite the vast theoretical understanding of Luttinger-liquids, only a handful of experiments have been interpreted using such models. For example, in clean semiconductor wires prepared by the cleaved edge overgrowth (CEO) method, contrary to theory, the conductance is suppressed from its universal value. Although not fully understood, this suppression is believed to be a result of Coulomb interactions that suppress the coupling between the reservoirs and the wire region. Other measurements done on weakly disordered wires show a weak temperature dependence of the conductance that is attributed to the Coulomb forces between electrons in the wire. Finally, the strongest manifestation of interaction in the clean limit comes from tunneling experiments such as the one recently reported on single walled carbon nanotubes and those performed on the chiral Luttinger-liquid.
In this work we have focused on the transport properties through confined states in a 1D wire, namely, when a 1D island is embedded in a 1D wire. The 1D island is formed at low densities such that the disorder potential in the wire exceeds the Fermi energy at several points along the wire. Resonant tunneling (RT) has previously been studied experimentally in a chiral Luttinger-liquid when the resonant level width was larger than the electron temperature. However, an unequivocal verification of the theoretical prediction has not been obtained. Theoretically, the problem of RT has been considered by Kane and Fisher and was later extended by Furusaki to include many resonant levels and the effects of Coulomb blockade (CB). Our measurements probe the intrinsic width, $`\mathrm{\Gamma }_i`$, of several resonant states as the temperature is lowered. In contrast to conventional CB theory, where $`\mathrm{\Gamma }_i`$ is temperature independent, we find that $`\mathrm{\Gamma }_i`$ decreases as a power law of temperature over our entire temperature range (2.5 K to 0.25 K). The measured behavior is in quantitative agreement with the theoretical prediction of Furusaki.
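To make the analysis concrete, a minimal sketch of the power-law extraction is given below; the $`(T,\mathrm{\Gamma }_i)`$ pairs are synthetic stand-ins for the widths extracted from the peak areas as described further on, not our measured data.

```python
import numpy as np

# Synthetic (T, Gamma_i) pairs in (K, arbitrary units), standing in for the
# intrinsic widths extracted from the Coulomb-blockade peak areas.
T = np.array([0.25, 0.5, 1.0, 1.5, 2.5])
gamma_i = 0.4 * T ** (1 / 0.74 - 1)  # generated with g = 0.74 for illustration

# Furusaki's prediction Gamma_i ~ T^(1/g - 1) is a straight line in log-log:
slope, _ = np.polyfit(np.log(T), np.log(gamma_i), 1)
print("extracted g =", 1.0 / (slope + 1.0))  # recovers ~0.74
```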
The 1D wires are fabricated by the CEO method (see Fig. 1a and ). The electrons are confined by a $`25nm`$ square well potential in one direction and by a triangular potential well approximately $`10nm`$ wide (binding them to the cleaved edge). To create a 1D island within our wire we have chosen to study $`5\mu m`$ long wires that show disorder induced deviations from the conductance plateaus (see Fig. 1b). The wire conductance is measured as a function of its density (by negatively biasing the top gate, see Fig. 1a) using standard lock-in techniques. A fixed excitation voltage of $`10\mu V`$ is applied across the wire and the corresponding current is measured. As the density of electrons in the wire is reduced, the 1D modes are depopulated one by one until only a single mode remains partially populated. Several broad resonances appear superimposed on the ’last plateau’ with an average conductance of $`0.45\times 2e^2/h`$. We attribute the broad resonances to above-barrier scattering when the Fermi energy is higher than the disorder potential. The deviation of the conductance plateau from the universal value has been studied in detail in , however, a detailed theoretical explanation is still lacking. It should be noted that on similar but cleaner wires we have previously reported plateau conductance values of $`0.8\times 2e^2/h`$. The larger deviation observed here suggests that disorder plays an important role in the suppression of the value of the conductance plateaus. As the density is further reduced, the highest potential barrier in the wire crosses the Fermi energy, the last mode is pinched off, and the wire splits into two parts. Upon further decrease in density a second barrier crosses the Fermi energy, thereby forming a 1D island, and transport occurs through resonant states. Of course, as the density is reduced even further, more islands will form in the wire. However, for transport to occur through them, at least one confined state in each island must be concurrently aligned with the Fermi energy. Since this condition is very unlikely, the wire is expected to be completely pinched off when more than one island has developed. Therefore, the sharp resonances in the sub-threshold region, which are almost equally spaced in gate voltage, are attributed to CB resonances through a single 1D island.
The conductance due to RT of a particle between two Fermi-liquid leads is easily calculated using the Landauer formula, $`G_{FL}=\frac{e^2}{h}\int \left|t(\epsilon )\right|^2\left(-\frac{\partial f}{\partial \epsilon }\right)d\epsilon `$, where $`\left|t(\epsilon )\right|^2`$ has the Breit-Wigner line shape centered around the resonant energy $`\epsilon _0`$, $`\left|t(\epsilon )\right|^2=\frac{\mathrm{\Gamma }_i^2}{\left(\epsilon -\epsilon _0\right)^2+\mathrm{\Gamma }_i^2}`$, and $`f`$ is the Fermi function. When $`k_BT\gg \mathrm{\Gamma }_i`$, the case of interest here, one finds that $`G_{FL}=\frac{e^2}{h}\mathrm{\Gamma }_i\frac{\pi }{4k_BT}\mathrm{cosh}^{-2}\left(\frac{\epsilon _0-\mu }{2k_BT}\right)`$ with $`\mu `$ the chemical potential in the leads. The main outcome of this analysis is that the line shape of the resonance is the derivative of the Fermi function, its full width at half maximum equals $`3.53k_BT`$, and the area under the peak (or the peak height multiplied by $`k_BT`$) is proportional to $`\mathrm{\Gamma }_i`$. In the conventional theory of CB, $`\mathrm{\Gamma }_i`$ depends on the transmission probabilities through the individual barriers, which are independent of temperature and hence should lead to a peak area independent of temperature. In the case of RT from a Luttinger-liquid, the individual transmission probabilities are suppressed as the temperature is lowered. Therefore it is expected, and has been shown theoretically, that the extracted $`\mathrm{\Gamma }_i`$ should drop to zero as $`\mathrm{\Gamma }_i\propto T^{1/g-1}`$. The resonance line shape, however, in the case of $`k_BT\gg \mathrm{\Gamma }_i`$, has been shown to be only slightly modified by the interactions and the change is too small to be detected experimentally. We, therefore, deduce the electron temperature (in units of gate voltage) from the fit to the derivative of the Fermi function. The deduced temperature from all the resonances in Fig. 1b is the same and follows the fridge temperature linearly. Such a fit also allows us to calibrate the gate voltage in units of energy, thereby extracting the charging energy (distance between peaks), estimated to be $`2.2meV`$. Knowing the cross-section of the wire, we estimate the length of the island from the charging energy to be $`100-200nm`$. It is, therefore, likely that the 1D island is connected on both sides to two 1D conductors that are several microns long. Figure 2 shows the extracted $`\mathrm{\Gamma }_i`$ for the peaks marked in Fig. 1. It is clear that $`\mathrm{\Gamma }_i`$ is not constant but rather drops as a power law of temperature. The extracted values of $`g`$ for the two peaks are 0.82 (peak $`\mathrm{\#}`$1 in Fig. 1) and 0.74 (peak $`\mathrm{\#}`$2 in Fig. 1). The change in $`g`$ results from the change in density induced in the 1D wire when moving from one peak to the next. Similar power law behavior is observed for all measured resonances in three different wires. The observed power law behavior is direct proof of Luttinger-liquid behavior in our CEO wires.
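A numerical sketch of the Fermi-liquid resonance formula (illustrative parameters, energies in arbitrary units) confirms the $`k_BT\gg \mathrm{\Gamma }_i`$ limit quoted above:

```python
import numpy as np
from scipy.integrate import quad

kT, Gamma_i, eps0 = 0.10, 0.005, 0.0  # chosen so that k_B T >> Gamma_i

def conductance(mu):  # in units of e^2/h
    # -df/de for the Fermi function, written in a numerically safe form
    df = lambda e: 1.0 / (4 * kT * np.cosh((e - mu) / (2 * kT)) ** 2)
    bw = lambda e: Gamma_i ** 2 / ((e - eps0) ** 2 + Gamma_i ** 2)  # Breit-Wigner
    return quad(lambda e: bw(e) * df(e), -2.0, 2.0, points=[eps0, mu])[0]

mu = 0.05
limit = Gamma_i * (np.pi / (4 * kT)) * np.cosh((eps0 - mu) / (2 * kT)) ** -2
print(conductance(mu), limit)  # agree up to corrections of O(Gamma_i / k_B T)
```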
At sufficiently high temperatures the assumption of tunneling through a single resonant state breaks down, and we should expect an increase in the extracted $`\mathrm{\Gamma }_i`$ due to transport through a few excited states of the 1D island. The possibility of an excited state affecting the temperature dependent conductance is of interest since it allows a better test of Furusaki’s model. The excited state spectrum of the 1D island is extracted from differential conductance measurements at finite source drain voltage, $`V_{ds}`$. Figure 3 shows a gray scale plot of the differential conductance as a function of the top gate voltage and $`V_{ds}`$. For peak $`\mathrm{\#}`$1 (in Fig. 1b) several excited states can be observed. The lowest three, at $`V_{ds}=0.4meV`$, $`V_{ds}=0.7meV`$ and $`V_{ds}=1.5meV`$, are only very weakly coupled (approximately 15% of the intensity of the main peak) and would therefore contribute very little to the overall conductance. However, the fourth excited state at $`V_{ds}=1.7meV`$ is more strongly coupled. Since an excited state contributes to the conductance only when $`4k_BT\gtrsim \mathrm{\Delta }E`$ ($`\mathrm{\Delta }E`$ is the energy of the excited state), within our temperature range of 0.25 K to 2.5 K only the ground state contributes significantly and one expects a single power law behavior, as is indeed observed in Fig. 2. It should be noted that theoretically, $`g`$ can also be written in terms of the charging energy, $`U_c`$, and the level spacing, $`\mathrm{\Delta }E`$, as $`g\approx 1/\sqrt{1+U_c/\mathrm{\Delta }E}`$. In our 1D island the ratio $`U_c/\mathrm{\Delta }E\approx 5`$ and, hence, one expects $`g\approx 0.4`$. The large disagreement between the measured $`g`$ and the expected one is not understood at this stage.
A different case is presented in Fig. 4, with a strongly coupled excited state at $`V_{ds}=-0.6meV`$. Hence, we expect that at temperatures above 1.2 K this excited state would contribute to the conductance. Fig. 5 shows the temperature dependence of the extracted $`\mathrm{\Gamma }_i`$ of this CB peak. Indeed, above 1 K $`\mathrm{\Gamma }_i`$ deviates from the low temperature power law, indicating a contribution of an additional transport channel to the total conductance. At low temperatures, though, only the ground state contributes to the conductance. Therefore, a fit of the low temperature data to a power law enables us to extract a $`g`$ value of 0.66 for this wire. Using this $`g`$ value and the measured energy of the excited state ($`-0.6meV`$ from Fig. 4), we use Furusaki’s model to predict the dependence of $`\mathrm{\Gamma }_i`$ over the entire temperature range. The dashed curve in Fig. 5 is the result of such a calculation, where only the coupling strength to the excited state has been adjusted. We see that the temperature dependence predicted by the model agrees quantitatively with the measured dependence, further supporting the fact that Luttinger-liquid behavior describes the transport properties of these resonances.
In conclusion, we have studied the temperature dependence of CB resonances of a 1D island embedded in an interacting 1D wire. The observed power law dependence of the intrinsic resonance width on temperature is direct proof of Luttinger-liquid behavior in this system. The measured $`g`$ values range from 0.66 to 0.82 for the various resonances studied. The measured behavior agrees quantitatively with the model of Furusaki even when excited states of the 1D island are taken into account.
We would like to thank H. L. Stormer for the fruitful collaboration. We would also like to thank A. M. Finkel’stein and A. Stern for helpful discussions. This work is supported by the US-Israel BSF.
# Quantum states and generalized observables: a simple proof of Gleason’s theorem
## Abstract
A quantum state can be understood in a loose sense as a map that assigns a value to every observable. Formalizing this characterization of states in terms of generalized probability distributions on the set of *effects*, we obtain a simple proof of the result, analogous to Gleason’s theorem, that any quantum state is given by a density operator. As a corollary we obtain a von Neumann-type argument against non-contextual hidden variables. It follows that on an individual interpretation of quantum mechanics, the values of effects are appropriately understood as propensities.
PACS numbers: 03.65.Ca; 03.65.Ta; 03.67.-a.
In this paper we will characterize a notion of quantum states that takes into account the general representation of observables as ‘positive operator valued measures’ (POVMs). The idea of a state as an expectation value assignment will be extended to that of a generalized probability measure on the set $`ℰ(ℋ)`$ of all *effects*, that is, the positive operators which can occur in the range of a POVM. All such generalized probability measures are found to be of the standard form, i.e., determined by a density operator. This result constitutes a simplified proof and, at the same time, a more comprehensive variant of Gleason’s theorem. The paper concludes with an application of this result to the question of hidden variables.
In the traditional formulation of quantum mechanics in Hilbert space, states are described as density operators and observables are represented as self-adjoint operators. Alternatively, and equivalently, experimental events and propositions are represented as orthogonal projection operators, and states are defined as generalized probability measures on the non-Boolean lattice $`𝒫(ℋ)`$ of projections, i.e., as functions $`E\mapsto v(E)`$ with the properties
(P1) $`0\le v(E)\le 1`$ for all $`E`$;
(P2) $`v(I)=1`$;
(P3) $`v(E+F+\cdots )=v(E)+v(F)+\cdots `$ for any sequence $`E,F,\ldots `$ with $`E+F+\cdots \le I`$.
According to Gleason’s theorem, all states are given by density operators so that $`v(E)=v_\rho (E)=tr[\rho E]`$, provided that the dimension of the complex Hilbert space is at least 3. The duality of states and observables is thus characterized through the trace expression $`tr[\rho E]`$, which in the minimal interpretation gives the probability of an outcome associated with $`E`$ in a measurement performed on a system in state $`\rho `$.
In quantum physics there are many experimental procedures leading to measurements whose outcome probabilities are expectations not of projections but rather of *effects*. It is therefore natural to define a quantum state as a generalized probability measure not just on $`𝒫(ℋ)`$ but on the full set of effects, $`ℰ(ℋ)`$, in such a way that the conditions (P1)–(P3) hold for all $`E,F,\ldots \in ℰ(ℋ)`$. Note that while for sets of projections the condition $`E+F+\cdots \in 𝒫(ℋ)`$ is equivalent to $`E,F,\ldots `$ being mutually orthogonal and thus commuting, commutativity is no longer necessary for $`E+F+\cdots \le I`$ to hold if $`E,F,\ldots `$ are effects. The following analogue of Gleason’s theorem then holds.
Theorem. Any generalized probability measure $`E\mapsto v(E)`$ on $`ℰ(ℋ)`$ with the properties (P1)–(P3) is of the form $`v(E)=tr[\rho E]`$ for all $`E`$, for some density operator $`\rho `$.
Proof. It is trivial to see that $`v(E)=nv(\frac{1}{n}E)`$ for all positive integers $`n`$. It then follows immediately that $`v(pE)=pv(E)`$ for any rational $`p\in [0,1]`$. Observe also that additivity and positivity entail that any effect valuation is order preserving, $`E\le F\Rightarrow v(E)\le v(F)`$. Let $`\alpha `$ be any real number, $`0\le \alpha \le 1`$. Let $`p_\mu `$ and $`q_\nu `$ be sequences of rational numbers in $`[0,1]`$ converging to $`\alpha `$ from below and from above, respectively. It follows that $`v(p_\mu E)=p_\mu v(E)\le v(\alpha E)\le v(q_\nu E)=q_\nu v(E)`$. Hence, $`v(\alpha E)=\alpha v(E)`$.
Let $`A`$ be any positive bounded operator not in $`ℰ(ℋ)`$. We can always write $`A=\alpha E`$, with $`E\in ℰ(ℋ)`$ and suitable $`\alpha \ge 1`$. Let $`E_1,E_2\in ℰ(ℋ)`$ be such that $`A=\alpha _1E_1=\alpha _2E_2`$. Assume without loss of generality that $`1\le \alpha _1<\alpha _2`$. Then $`v(E_2)=\frac{\alpha _1}{\alpha _2}v(E_1)`$, and so $`\alpha _1v(E_1)=\alpha _2v(E_2)`$. Thus we can uniquely define $`v(A)=\alpha _1v(E_1)`$.
Let $`A,B`$ be positive bounded operators. Take $`\gamma >1`$ such that $`\frac{1}{\gamma }(A+B)\in ℰ(ℋ)`$. Then we can write $`v(A+B)`$ as $`\gamma v(\frac{1}{\gamma }(A+B))=\gamma v(\frac{1}{\gamma }A)+\gamma v(\frac{1}{\gamma }B)=v(A)+v(B)`$.
Finally, let $`C`$ be an arbitrary bounded self-adjoint operator. Assume we have two different decompositions $`C=A-B=A^{\prime }-B^{\prime }`$ into a difference of positive operators. We have $`v(A)+v(B^{\prime })=v(B)+v(A^{\prime })`$ and so $`v(A)-v(B)=v(A^{\prime })-v(B^{\prime })`$. Thus we can uniquely define $`v(C):=v(A)-v(B)`$. It is now straightforward to verify the linearity of the map $`v`$ thus extended to all of $`ℬ_s(ℋ)`$. We have found that any generalized probability measure on effects extends to a unique positive linear functional which is normal (due to the $`\sigma `$-additivity). It is well known that any such functional is obtained from a density operator (e.g., , Lemma 1.6.1, or see the direct elementary proof due to von Neumann ). $`\mathrm{}`$
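The content of the theorem is easy to check numerically in the forward direction: any density operator defines a valuation satisfying (P1)–(P3) on an arbitrary POVM. A small sketch (the dimension and outcome count below are arbitrary choices):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(0)
d, n = 3, 4  # Hilbert space dimension and number of POVM outcomes

def rand_pos(d):  # a random positive operator
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return a @ a.conj().T

rho = rand_pos(d)
rho /= np.trace(rho).real                       # a density operator

mats = [rand_pos(d) for _ in range(n)]
s = inv(sqrtm(sum(mats)))
effects = [s @ m @ s for m in mats]             # E_i >= 0 with sum_i E_i = I

v = [np.trace(rho @ E).real for E in effects]   # v(E) = tr[rho E]
print(np.round(v, 4), round(sum(v), 12))        # each value in [0,1]; sum = 1
```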
The conclusion of our theorem is the same as that of Gleason’s theorem. The extreme simplicity of the proof in comparison to Gleason’s proof is due to the fact that the domain of generalized probability measures is substantially enlarged, from the set of projections to that of all effects.
The statement of the present theorem also extends to the case of 2-dimensional Hilbert spaces where Gleason’s theorem fails. It is worth noting that the dispersion-free valuations constructed on the set of projections of a 2-dimensional Hilbert space (see, e.g., ), simply do not extend to any valuations on the full set of effects. The reason must be seen in the fact that the additivity requirement for $`v`$ on sets of pairwise orthogonal projections is too weak to enforce the linearity of $`v`$, considering that such sets of projections are mutually commutative.
Here is a simple intuitive argument demonstrating that there are no linear extensions of any dispersion-free valuation on the projections of a 2-dimensional Hilbert space. We use the Poincaré sphere representation of positive operators of trace 1, $`A=1/2(I+𝐚\cdot \sigma )`$, where $`\sigma =(\sigma _x,\sigma _y,\sigma _z)`$, $`𝐚=(a_x,a_y,a_z)`$, with $`|𝐚|^2=a_x^2+a_y^2+a_z^2\le 1`$. All projections are then either $`I`$ or $`O`$ or $`P=1/2(I+𝐧\cdot \sigma )`$, with $`|𝐧|=1`$. Let $`v`$ be a dispersion-free valuation on the projections. Any pair of mutually orthogonal projections $`P,P^{\prime }=I-P`$ will have values 1 and 0 such that their sum is 1. Hence there are non-orthogonal pairs $`P=1/2(I+𝐧\cdot \sigma )`$, $`Q=1/2(I+𝐦\cdot \sigma )`$ such that both have value 0. If $`v`$ had a linear extension, then all the effects corresponding to the line segment joining $`𝐧`$ and $`𝐦`$, $`E=\lambda P+(1-\lambda )Q`$, with $`0\le \lambda \le 1`$, would have values $`v(E)=\lambda v(P)+(1-\lambda )v(Q)=0`$. On the other hand, we can write $`E`$ in its spectral decomposition $`E=\mu R+(1-\mu )R^{\prime }`$, where $`0<\mu <1`$ if $`0<\lambda <1`$. Assume that $`v(R)=1`$ and $`v(R^{\prime })=0`$; then $`v(E)=\mu \ne 0`$, which contradicts the previous conclusion that $`v(E)=0`$. Hence there is no consistent linear extension of $`v`$.
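The contradiction can be exhibited numerically; in the sketch below the two value-0 projections are chosen along the $`z`$ and $`x`$ axes of the Poincaré sphere (any non-orthogonal pair would do):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

P = 0.5 * (I2 + sz)  # projection assigned v(P) = 0
Q = 0.5 * (I2 + sx)  # non-orthogonal projection, also assigned v(Q) = 0

lam = 0.5
E = lam * P + (1 - lam) * Q        # linearity would force v(E) = 0
mu = np.linalg.eigvalsh(E).max()   # spectral weight in E = mu R + (1 - mu) R'
print(mu, 1 - mu)                  # ~0.854 and ~0.146: any dispersion-free
                                   # valuation gives v(E) = mu or 1-mu, never 0
```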
Up to this point we have restricted ourselves to the minimal interpretation of quantum states and observables, according to which these entities are tools for calculating experimental probabilities. We have shown that, given the set of effects as a representation of all experimental yes-no questions, any quantum state, understood as a generalized probability measure on the set of effects, is given in the familiar way by a density operator.
This result entails a formalization of the well-known fact that quantum mechanics is an irreducibly probabilistic theory: in contrast to classical probability theory, quantum probabilities cannot be decomposed into convex combinations of dispersion-free (that is, $`\{0,1\}`$-valued) generalized probability measures.
We conclude with a brief outline of an application of the above result to interpretations of quantum mechanics that go beyond the scope of the minimal interpretation. Such an interpretation will consider observables as representations of *properties* of a system and effects as yes-no propositions about the possible values of the observables. The role of states will be to assign values to observables and effects. In a deterministic world, one would expect a complete state description to assign one of the values 1 or 0 to each effect of a complete collection $`E_i`$ (with $`\sum _iE_i=I`$), in such a way that 1 occurs exactly once. Thus the sum of the values for all $`E_i`$ is 1.
This consideration leads to the idea of defining states as *effect valuations*, that is, as functions $`v:E\mapsto v(E)`$ of effects with the properties: $`v(E)\ge 0`$, and $`v(E)+v(F)+\cdots =1`$ if $`E+F+\cdots =I`$.
It is easy to see that every effect valuation has the properties (P1)–(P3) of generalized probability measures, and conversely. Hence the above theorem entails that any effect valuation is of the form $`v(E)=tr[\rho E]`$ for all $`E\in ℰ(ℋ)`$ and some density operator $`\rho `$.
An interpretation of valuations as truth value assignments would require the numbers $`v(E)`$ to be either 1 or 0, indicating the occurrence or nonoccurrence of an outcome associated with $`E`$. Valuations with this property are referred to as *dispersion-free*. The above theorem entails immediately that dispersion-free effect valuations defined everywhere on $`ℰ(ℋ)`$ do not exist. It follows that non-contextual hidden variables, understood as dispersion-free, globally defined valuations, are excluded in quantum mechanics.
The argument against non-contextual hidden variables thus obtained formally resembles that of von Neumann. However, von Neumann’s problematic assumption, that of additivity of a valuation over arbitrary (countable) sets of (commuting or noncommuting) self-adjoint operators, is here replaced by the requirement of additivity over (countable) sets of effects that add up to $`I`$. Such collections of effects constitute a POVM and are thus jointly measurable in a single experiment. It thus makes sense to consider hypothetical simultaneous (hidden, dispersion-free) values of such sets of effects, and hence also the values of sums of effects, provided these sums are bounded by $`I`$.
In the case of a pure state $`\rho =|\phi \rangle \langle \phi |`$, the occurrence of values $`v_\rho (E)`$ strictly between 0 and 1 indicates a situation where the property associated with $`E`$ is objectively indeterminate, that is, its presence or absence is not just subjectively unknown. This interpretation is in accord with the *propensity* interpretation of probabilities, according to which the number $`v_\rho (E)`$ gives a measure of the system’s objective tendency to trigger an outcome represented by effect $`E`$ if the state is given by $`\rho `$ and a measurement is made of a POVM containing $`E`$.
As an example, $`E_1`$ and $`E_2=IE_1`$ could represent the propositions that a quantum particle is in the upper and lower path of an interferometer, respectively. If a pure state $`\rho `$ is a superposition of states $`\rho _1`$, $`\rho _2`$ in which $`E_1`$ and $`E_2`$ are real, respectively (i.e., $`v_{\rho _1}(E_1)=v_{\rho _2}(E_2)=1`$), then there is no convex decomposition of that state in terms of valuations which are dispersion-free, even only with respect to $`E_1,E_2`$. The fact that $`0<v_\rho (E_i)<1`$ is then an expression of the *indeterminateness* of the properties $`E_1,E_2`$ in the state $`\rho `$. The most appropriate way of accounting for this situation seems to be to say that the localization of the quantum particle is extended over the space occupied by the two paths of the interferometer. The quantum particle is present, *to a degree quantified by the number* $`v_\rho (E_i)`$, in each of the two paths represented by $`E_i`$. If forced by a measurement to decide whether to show up in the upper or lower path, it will do so with a propensity quantified by those numbers.
A related interpretation of valuations for unsharp measurements as approximate truth values has recently been advocated by T. Breuer in this journal , who applied Gleason’s theorem to obtain a Kochen-Specker theorem for unsharp spin observables.
The nonexistence of dispersion-free effect valuations raises the interesting question whether there are subsets of the set of effects, with meaningful structures, on which such dispersion-free valuations can be defined. Interesting constructions demonstrating a positive answer to this question are presented for subsets of projections in , and also for effects in . Intuitively, it appears that the valuations of Bub are defined on relatively sparse sets of projections, but these sets do possess some structures that can be argued to be necessary for a consistent set of definite properties; by contrast, the valuations of Kent are defined on ‘dense’ sets of POVMs where it is not obvious that these are equipped with such ‘logical’ structures. The important task remains to explore how far one can go in defining non-contextual dispersion-free valuations on subsets of effects with appropriate structures, without running into conflict with the modified Gleason theorem proven here.
# Is the Large Magellanic Cloud a Large Microlensing Cloud?
## 1 INTRODUCTION
The location of the microlensing events towards the Large Magellanic Cloud (LMC) is a matter of controversy. Alcock et al. (1997a) assert that the lensing population lies in the Galactic halo and comprises perhaps $`50\%`$ of its total mass. Early suggestions that the LMC may provide the bulk of the lenses were made by Sahu (1994) and Wu (1994), and this location is favored by the data on the binary caustic crossing events (Kerins & Evans 1999). One of the main obstacles to general acceptance of this idea has been the sheer number of observed lensing events, which appear to be too great to be accommodated by the LMC alone. The experimental estimate of the microlensing optical depth $`\tau `$ towards the LMC is $`2.1_{-0.8}^{+1.3}\times 10^{-7}`$ (e.g., Bennett 1998). This is substantially greater than the optical depth of simple tilted disk models of the LMC. For example, Gould’s (1995) ingenious calculation involving the virial theorem sets the self-lensing optical depth of the LMC disk as $`1\times 10^{-8}`$. Section 2 of this paper generalizes Gould’s analysis to provide upper limits on the self-lensing optical depth of thick models of the LMC disk. These values are smaller than, but of comparable magnitude to, the observations. So, it is reasonable to suggest that the microlensing signal may come either from a fattened LMC disk or a Milky Way halo only partly composed of lensing objects. The timescale distributions and the geometric pattern of events across the face of the LMC disk will of course be different in these two cases. The timescales of events for the same mass functions will be longer for lenses in the LMC as compared to those in the Milky Way halo, because the lower velocity dispersion of the LMC outweighs the effects of the smaller Einstein radii. However, the use of the timescales as a discriminant is spoiled by the fact that there is no reason why the Milky Way halo and the LMC should have the same mass function. A more hopeful indicator may be the distribution of events across the face of the LMC disk. If the lenses lie in the Milky Way halo, the events will trace the surface density of the LMC, whereas if the lenses lie in the Clouds, the events will be more concentrated towards the dense bar and central regions, scaling like the surface density squared. How long will it take to distinguish between the two possibilities? Section 3 develops a maximum likelihood estimator that incorporates all the timescale and positional information to provide the answer to this question, both for the existing surveys like MACHO and for the next generation experiments like SuperMACHO (Stubbs 1998). Finally, Section 4 evaluates the strategies by which the riddle of the location of the lenses may be solved.
## 2 OBESE MAGELLANIC DISKS
Gould’s (1995) limit relates the self-lensing optical depth of any thin disk to its vertical velocity dispersion $`\sigma _z`$ via
$$\tau =2\frac{\sigma _z^2}{c^2}\mathrm{sec}^2i$$
(1)
where $`c`$ is the velocity of light and $`i`$ is the inclination angle. Taking the observed velocity dispersion of CH stars as $`20\mathrm{km}\mathrm{s}^{-1}`$ (Cowley & Hartwick 1991) and the inclination angle $`i=27^{\circ }`$ (de Vaucouleurs & Freeman 1973), Gould argued that the self-lensing optical depth of the LMC disk is likely to be $`1\times 10^{-8}`$, which is some 20 times smaller than the observations. As Gould’s derivation depends only on the Poisson and Jeans equations for highly flattened geometries, the formula is clearly irreproachable. How could it be yielding misleading results as to the self-lensing optical depth? Consider a thought experiment in which a very thin disk is gradually surrounded by a flattened shell of matter, bounded by two similar concentric ellipsoids (a homoeoid). By Newton’s theorem, the attraction at any internal point of a homoeoid vanishes. So, the introduction of the homoeoid leaves the velocity dispersion in the thin disk quite unchanged. But, the self-lensing optical depth is strongly enhanced. In applying Gould’s formula, we must be very careful not to use the velocity dispersion of the thin disk, but rather the mass-weighted velocity dispersion of the entire configuration – otherwise we will obtain a misleadingly small answer. It is clearly worthwhile extending the calculations to more sophisticated, structure-rich models of the LMC, in which the self-lensing optical depth is written in terms of the velocity dispersion of the thin disk population (which is directly accessible to observations) instead of the mass-weighted velocity dispersion.
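The quoted figure follows directly from Eq. (1); a one-line check:

```python
import math

c = 299792.458          # speed of light in km/s
sigma_z = 20.0          # CH-star dispersion (Cowley & Hartwick 1991), km/s
i = math.radians(27.0)  # de Vaucouleurs & Freeman (1973) inclination

tau = 2 * (sigma_z / c) ** 2 / math.cos(i) ** 2
print(f"{tau:.1e}")     # ~1.1e-08, the ~1e-8 quoted above
```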
Let us now derive formulae for the self-lensing optical depth of an ensemble of $`n`$ exponential disks, each with a different scale height $`h_i`$, mid-plane density $`\rho _i`$ and column density $`\mathrm{\Sigma }_i=2\rho _ih_i`$. Clearly, this is a very idealized representation of the LMC, although similar models of the Milky Way disk have already proved useful (c.f., Gould 1989). The vertical density law for the disk is
$$\rho (z)=\sum _{i=1}^{n}\rho _i\mathrm{exp}\left(-\frac{|z|}{h_i}\right).$$
(2)
The relationship between height $`z`$ and potential $`\varphi `$ is given by solving Poisson’s equation in the form appropriate for a flattened geometry (see Binney & Tremaine 1987, chap 2). Gould (1995) shows that the self-lensing optical depth of any thin disk with total column density $`\mathrm{\Sigma }`$ is
$$\tau =\frac{2\pi G\mathrm{\Sigma }\mathrm{sec}^2i}{c^2}\int _0^{\mathrm{\infty }}dz\{1-[G(z)]^2\},\qquad G(z)=\frac{2}{\mathrm{\Sigma }}\int _0^zdy\rho (y)$$
(3)
For our ensemble of exponential disks we have
$$G(z)=1-\sum _{i=1}^{n}F_i\mathrm{exp}\left(-\frac{|z|}{h_i}\right).$$
(4)
where $`F_i`$ is the mass fraction in each population. The self-lensing optical depth is now entirely analytic and given by
$$\tau =2\frac{\sigma _1^2}{c^2}\mathrm{sec}^2i\times \frac{1}{F_1h_1}\left[\frac{4}{3}\sum _{i=1}^{n}F_ih_i-\frac{2}{3}\sum _{i,j=1}^{n}\frac{F_iF_jh_ih_j}{h_i+h_j}\right],$$
(5)
The formula has been written in terms of $`\sigma _1`$, which is the vertical velocity dispersion of the thinnest disk population only. It is the line-of-sight velocity dispersion of the youngest, thinnest populations in the LMC that is observationally reasonably well-determined. The Jeans equation has been solved under the assumption that the thinnest disk dominates the gravitational potential near the mid-plane. Our formula only assumes that the thinnest population is in equilibrium. It makes no assumptions as to the relationship between velocity dispersion and height for the thicker populations. It is therefore the appropriate formula for an equilibrium thin disk surrounded by dispersed and patchy populations of stars. Let us note that (5) really estimates the value of the optical depth near the LMC center (as the radial structure of the disk is ignored). The assumption that the disks are exponential rather than completely isothermal (that is, sech-squared) causes our estimates to be on the low side. The assumption that the line-of-sight velocity dispersion $`\sigma _{\mathrm{los}}`$ is roughly equal to the vertical velocity dispersion $`\sigma _z`$ causes our estimates to be on the high side. This correction factor depends on the uncertain shape of the velocity dispersion tensor in the LMC thin disk. If the velocity dispersions in the disk are well-approximated by epicyclic theory, then $`\sigma _\varphi ^2\approx \sigma _z^2\approx 0.45\sigma _R^2`$ (Binney & Tremaine, 1987, p. 199). In this case, $`\sigma _{\mathrm{los}}^2`$ overestimates $`\sigma _z^2`$ by a factor $`(1+0.6\mathrm{sin}^2i)`$. Finally, in the limit of a single, thin disk ($`n=1`$), our result (5) reduces to Gould’s original formula, as it should.
Weinberg (1999) has described self-consistent simulations of the tidal forcing of the Magellanic Clouds by the Milky Way galaxy. He shows that the effect of this tidal heating is to fatten the structure of the LMC. He reports that $`1\%`$ of the disk mass has a height larger than 6 kpc (which we will call “the veil”) and $`10\%`$ above 3 kpc (“the shroud”). Let us devise a three component model of the LMC, composed of a massive thin disk surrounded by an intermediate shroud and an extended veil. To model the LMC, let us take the scale height of the thin disk as $`h_\mathrm{d}\approx 300\mathrm{pc}`$ (Bessell, Freeman & Wood 1986). The vertical velocity dispersion of the stars in this disk is $`\approx 30\mathrm{km}\mathrm{s}^{-1}`$. The scale heights of the shroud $`h_\mathrm{s}`$ and the veil $`h_\mathrm{v}`$ are $`3\mathrm{kpc}`$ and $`6\mathrm{kpc}`$ respectively. As suggested by Weinberg’s (1999) calculation, we put $`10\%`$ of the mass in the shroud and $`1\%`$ in the veil. De Vaucouleurs & Freeman (1973) estimated the inclination angle of the LMC to be $`27^{\circ }`$ by assuming that the optical and 21 cm HI isophotes should be circular. This is not likely to be a good approximation for such an irregular structure as the LMC, and so this widely-used value of the inclination angle is at least open to some doubt. More recently, evidence from detailed fitting of the surface photometry (excluding the star forming regions) and from the low frequency radio observations (which are less sensitive to local effects) suggests a higher value of the inclination angle of the main disk of $`i\approx 45^{\circ }`$ (see e.g., Alvarez, Aparici & May 1987; Bothun & Thomson 1988). Westerlund (1997) reviews all the evidence and argues that this higher value of the inclination is most likely. We will consider both possibilities. When $`10\%`$ of the mass is in the shroud and $`1\%`$ in the veil, the self-lensing optical depth is $`0.7\times 10^{-7}`$ if $`i=27^{\circ }`$ and $`1.1\times 10^{-7}`$ if $`i=45^{\circ }`$. Figure 1 shows how the self-lensing optical depth varies as the mass fractions in the shroud and the veil are changed. Marked on Figure 1 are the contours corresponding to the best observational estimate of $`2.1\times 10^{-7}`$, together with the $`1\sigma `$ and $`2\sigma `$ lower limits. If the mass fractions are increased to $`15\%`$ and $`5\%`$ respectively, then the optical depth is $`1.2\times 10^{-7}`$ if $`i=27^{\circ }`$ and $`1.9\times 10^{-7}`$ if $`i=45^{\circ }`$. These values are comparable to the observed optical depth of $`2.1_{-0.8}^{+1.3}\times 10^{-7}`$ (Bennett 1998). On moving to the larger inclination, the assumption that the line-of-sight dispersion is roughly equal to the vertical velocity dispersion becomes less valid. Using our earlier correction based on epicyclic theory, some $`15\%`$ of the increase in the optical depth on moving to the larger inclination of $`45^{\circ }`$ is spurious. However, the important conclusion to draw from these calculations is that it requires comparatively little luminous material at higher scale heights above the LMC thin disk to give a substantial boost to the optical depth.
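Eq. (5) is easily coded up; the sketch below reproduces the two optical depths just quoted, using the mass fractions and scale heights from the text:

```python
import numpy as np

def tau_self(sigma1, F, h, incl_deg):
    """Self-lensing optical depth of nested exponential disks, Eq. (5).
    sigma1 in km/s is the vertical dispersion of the thinnest component,
    which must be listed first in the fractions F and scale heights h."""
    c = 299792.458
    F, h = np.asarray(F, float), np.asarray(h, float)
    cross = np.sum(np.outer(F * h, F * h) / np.add.outer(h, h))
    bracket = (4.0 / 3.0) * np.sum(F * h) - (2.0 / 3.0) * cross
    return 2 * (sigma1 / c) ** 2 / np.cos(np.radians(incl_deg)) ** 2 \
        * bracket / (F[0] * h[0])

# Thin disk + shroud (10%) + veil (1%); scale heights in kpc
F, h = [0.89, 0.10, 0.01], [0.3, 3.0, 6.0]
print(tau_self(30.0, F, h, 27.0))  # ~0.7e-7
print(tau_self(30.0, F, h, 45.0))  # ~1.1e-7
```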
There is one obvious difficulty with this suggestion. There are no visible tracers in the LMC with a velocity dispersion greater than $`33\mathrm{km}\mathrm{s}^{-1}`$ (Hughes, Wood & Reid 1991; Westerlund 1997). If in equilibrium, any luminous material belonging to disks with scale heights of $`3\mathrm{kpc}`$ or $`6\mathrm{kpc}`$ must have a larger velocity dispersion than observed. For example, the tidal heating mechanism advocated by Weinberg (1999) must produce some visible hot tracers. The stars that are heated are expected to have the same luminosity function as those that remain in the thin disk. There are two possible loopholes in this line of argument. First, it might be possible for a metal-rich, old population with a large velocity dispersion to have eluded detection. Second, the relationship between scale height and velocity dispersion applies only to steady-state equilibrium models. If this is not the case, then it may be possible for populations to be dispersed at larger heights above the LMC thin disk than suggested by their vertical velocity dispersion. It is worth cautioning that equilibrium models of the LMC may be a poor guide to interpreting the kinematics. In particular, no equilibrium models of galaxies with off-centered bars are presently known, either analytically or as the endpoints of N-body experiments. If both these loopholes are closed, then the last possibility is that any lenses in the larger scale height populations must be dark or at the very least dim – perhaps low mass stars or compact objects. This is difficult to rule out, although there are no obvious natural mechanisms to produce such components. In this case, our self-lensing formula (5) will overestimate the microlensing optical depth, as the populations of lenses and sources do not coincide. It should be replaced by
$$\tau =\frac{2}{3}\frac{\sigma _1^2}{c^2}\mathrm{sec}^2i\times \frac{1}{F_1h_1}\sum _{i=1}^{n}F_ih_i$$
(6)
For the same mass fractions $`F_i`$, the optical depths (6) are reduced by a typical factor of $`3`$ from our earlier self-lensing estimates (5). Aubourg et al. (1999) and Salati et al. (1999) have recently advanced models of the LMC surrounded by swathes of low mass stars and suggested that they could provide most of the observed microlensing optical depth, although others have contested this (e.g., Gyuk, Dalal & Griest 1999).
## 3 THE LOCATION OF THE LENSES
Can the positions and timescales of the microlensing events be used to determine whether the dominant lens population lies in the LMC or in the Milky Way halo? The Bayesian likelihood estimator employed by Alcock et al. (1997a) can be extended to consider lenses from multiple galactic components distributed over a finite solid angle. For an experiment of lifetime $`T`$ in which $`N(T)`$ events are observed with Einstein diameter crossing durations $`\widehat{t}_i`$ and Galactic coordinates $`l_i,b_i`$ ($`i=1\mathrm{}N`$), one can ascribe a likelihood $`L`$, where
$$\mathrm{ln}L(f_{1\ldots n},\varphi _{1\ldots n})=-\sum _{j=1}^{n}f_j𝒩(\varphi _j,T)+\sum _{i=1}^{N(T)}\mathrm{ln}\left[\sigma (l_i,b_i)ℰ(\widehat{t}_i,l_i,b_i,T)\sum _{j=1}^{n}f_j\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _j,l_i,b_i)}{\mathrm{d}\widehat{t}_i}\right],$$
(7)
to a galactic model comprising $`j=1\ldots n`$ components, each component being characterised by a lens fraction $`f_j`$ and mass function $`\varphi _j`$. In the above formula, $`\mathrm{\Gamma }`$ is the theoretical event rate, $`ℰ`$ is the detection efficiency, $`\sigma `$ is the number of sources per unit solid angle and
$$𝒩(\varphi _j,T)=T\int \sigma (l,b)ℰ(\widehat{t},l,b,T)\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _j,l,b)}{\mathrm{d}\widehat{t}}d\widehat{t}dl\mathrm{d}(\mathrm{sin}b)$$
(8)
is the number of events predicted for component $`j`$ when $`f_j=1`$. The spatial variation of microlensing events has been studied before by Gyuk (1999), though using the optical depth and rate rather than the timescales (and with the emphasis on the inner Galaxy).
Let us set up two competing models. In the first, the Milky Way halo provides the dominant lens population, although there is some residual contribution from the stars in the LMC disk and bar. In the second, there is no Milky Way halo and the LMC disk and bar are augmented by the existence of an enveloping shroud and veil, so that all the lenses reside close to or in the LMC. The density laws describing the components are summarised in Table 1. In both cases, the LMC disk and bar are populated with lens masses $`m`$ drawn from the ordinary stellar disk population. The broken power-law
$$\varphi _{\mathrm{LMC}}\propto m^{-\gamma }\qquad (m_\mathrm{L}=0.08\mathrm{M}_{\odot }\le m\le m_\mathrm{U}=10\mathrm{M}_{\odot }),\qquad \gamma =\{\begin{array}{cc}0.75\hfill & (m_\mathrm{L}\le m<0.5\mathrm{M}_{\odot })\hfill \\ 2.2\hfill & (0.5\mathrm{M}_{\odot }\le m\le m_\mathrm{U})\hfill \end{array}$$
(11)
describes the LMC stellar mass function (c.f., Hill, Madore & Freedman 1994; Gould, Bahcall & Flynn 1997). For our Milky Way halo, we adopt a $`\delta `$-function
$$\varphi _\mathrm{h}\propto \frac{1}{m}\delta (m-m_{\mathrm{dark}})$$
(12)
as characterising the lens mass. For the competing LMC-only model, there is an extended shroud and veil (hereafter collectively referred to simply as the shroud) enveloping the LMC stellar disk and bar. For simplicity, let us investigate the case in which the shroud consists primarily of dark lenses (either remnants or low-mass stars). Since the Milky Way halo and LMC shroud populations are both dark, we always make comparisons assuming the same lens mass $`m_{\mathrm{dark}}`$. For this calculation, we make the simplifying assumption that the LMC is virialized, so that any increase in the mass of the shroud implies a corresponding increase in its velocity dispersion. This is important because changes in the velocity dispersion affect the derived lens timescale distribution. Suppose the ratio of the disk to shroud masses is originally $`r`$. Then if the mass of our shroud is increased by a factor $`f_\mathrm{s}`$, the virial theorem indicates that the velocity dispersion increases by a factor $`f_\sigma =\sqrt{(f_\mathrm{s}+r)/(1+r)}`$. We must also make the corresponding transformations $`\widehat{t}f_\sigma ^1\widehat{t}`$ and $`\mathrm{d}\mathrm{\Gamma }/\mathrm{d}\widehat{t}f_\sigma ^2f_\mathrm{s}\mathrm{d}\mathrm{\Gamma }/\mathrm{d}\widehat{t}`$.
Let us proceed by simulating microlensing experiments over a range of lifetimes $`T`$. We assume the Milky Way halo is an isothermal spherical halo of amplitude $`v_0=220\mathrm{km}\mathrm{s}^{-1}`$. A fraction $`f_\mathrm{h}`$ of the halo comprises lenses of mass $`m_{\mathrm{dark}}`$. This provides us with our input model with which to generate “observed” events. The expected number of events for an experiment of lifetime $`T`$ is simply $`𝒩(T)=𝒩(\varphi _{\mathrm{LMC}},T)+f_\mathrm{h}𝒩(\varphi _\mathrm{h},T)`$, where $`𝒩(\varphi _{\mathrm{LMC}},T)`$ and $`𝒩(\varphi _\mathrm{h},T)`$ are obtained from eqn (8). We then generate a Poisson realisation $`N(T)`$ for the number of observed events. We approximate the current generation of microlensing surveys by an ideal experiment which monitors the central $`3^{\circ }\times 3^{\circ }`$ of the LMC. For each event a location is generated from within this region using the distribution
$$P(l,b|T)\propto \sigma (l,b)\int ℰ(\widehat{t},l,b,T)\left[\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _{\mathrm{LMC}},l,b)}{\mathrm{d}\widehat{t}}+f_\mathrm{h}\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _\mathrm{h},l,b)}{\mathrm{d}\widehat{t}}\right]d\widehat{t},$$
(13)
which traces the event number density as a function of position. The Einstein diameter crossing time $`\widehat{t}`$ is generated from the distribution
$$P(\widehat{t}|l,b,T)\propto ℰ(\widehat{t},l,b,T)\left[\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _{\mathrm{LMC}},l,b)}{\mathrm{d}\widehat{t}}+f_\mathrm{h}\frac{\mathrm{d}\mathrm{\Gamma }(\varphi _\mathrm{h},l,b)}{\mathrm{d}\widehat{t}}\right].$$
(14)
The detection efficiency $`ℰ`$ is not just a function of $`\widehat{t}`$ and $`T`$, but also of Galactic coordinates $`l`$ and $`b`$. The spatial dependency of $`ℰ`$ has not yet been assessed by any of the current experiments and is inevitably experiment-specific. In the following analysis we consider an idealized microlensing survey in which the spatial dependency is sufficiently weak to be neglected. This is not a good assumption for the current LMC microlensing surveys, which do not observe all regions with the same frequency, but the method we present is general and can be used to take account of spatial variations in efficiencies when these become available. As microlensing experiments continue, they become more sensitive to longer duration events. However, the efficiency $`ℰ`$ does not approach unity because of photometric limits imposed by the observing conditions. Instead one might anticipate, say, a limiting efficiency $`ℰ_{\mathrm{max}}\approx 0.5`$. We propose the following model for the time evolution of the efficiency for our ideal experiment:
$$ℰ=\{\begin{array}{cc}\mathrm{max}[0,ℰ_{\mathrm{short}}(\widehat{t})]\hfill & (\widehat{t}<\widehat{t}_{\mathrm{peak}})\hfill \\ \mathrm{max}[0,ℰ_{\mathrm{short}}(\widehat{t}_{\mathrm{peak}})]\mathrm{exp}\{-[\mathrm{log}(\widehat{t}_{\mathrm{peak}}/\widehat{t})/0.5]^2\}\hfill & (\widehat{t}\ge \widehat{t}_{\mathrm{peak}})\hfill \end{array}$$
(15)
where
$$\widehat{t}_{\mathrm{peak}}=0.12T,\qquad ℰ_{\mathrm{short}}=\mathrm{min}\{ℰ_{\mathrm{max}},0.2[\mathrm{log}(\widehat{t}/\text{days})-0.38]\},\qquad ℰ_{\mathrm{max}}=0.5.$$
(16)
Here, $`\widehat{t}_{\mathrm{peak}}`$ is the Einstein diameter crossing time at which the efficiency peaks, which of course depends on the experiment lifetime $`T`$. As Figure 2 shows, the model (dashed lines) provides an excellent approximation of the Alcock et al. (1997a) 2.1-year efficiencies (solid line). It is also broadly consistent with provisional 4-year MACHO efficiency estimates (Sutherland 1999). Note from Figure 2 that the limiting efficiency $`ℰ_{\mathrm{max}}`$ is not reached until $`T\approx 20`$ years, much longer than the nominal lifetime of the MACHO experiment. Let us emphasize that this model is only a plausible representation of how the efficiencies for the current generation of microlensing experiments might evolve.
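For definiteness, here is a direct transcription of Eqs. (15) and (16), taking both $`\widehat{t}`$ and $`T`$ in days and reading 'log' as base 10 (both are assumptions of this sketch):

```python
import numpy as np

def efficiency(t_hat, T, eff_max=0.5):
    # Toy efficiency model of Eqs. (15)-(16); t_hat and T assumed in days
    t_peak = 0.12 * T
    short = lambda t: np.minimum(eff_max, 0.2 * (np.log10(t) - 0.38))
    rising = np.maximum(0.0, short(t_hat))
    tail = np.maximum(0.0, short(t_peak)) * \
        np.exp(-(np.log10(t_peak / t_hat) / 0.5) ** 2)
    return np.where(t_hat < t_peak, rising, tail)

T = 2.1 * 365.25  # a 2.1-year experiment, cf. Figure 2
print(efficiency(np.array([10.0, 50.0, 200.0, 1000.0]), T).round(3))
```

With these conventions the model reaches $`ℰ_{\mathrm{max}}=0.5`$ only once $`\widehat{t}_{\mathrm{peak}}`$ exceeds roughly 760 days, i.e. for $`T\approx 17`$ years, consistent with the remark above.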
We can now use simulated datasets to compute likelihoods for any desired theoretical model via eqn (7). For the dataset, we calculate the likelihood $`L_\mathrm{h}`$ for the input (true) halo, LMC disk and bar parameters. Let the likelihood of the competing model of a shrouded LMC disk and bar be $`L_\mathrm{s}`$. The ratio $`L_\mathrm{s}/L_\mathrm{h}`$ then provides a direct measure of the preference of the dataset for the (true) halo model or (false) shroud model. Given just these two alternatives, we can define a discrimination measure
$$D=\frac{L_\mathrm{h}}{L_\mathrm{s}+L_\mathrm{h}}$$
(17)
which is the probability, given the data, that the halo, rather than the shroud, represents the underlying model. Individual datasets can be misleading, so we generate a large ensemble of datasets for every experiment lifetime $`T`$. (Specifically, we use either $`10^5`$ datasets or a cumulative total of $`3\times 10^6`$ events, whichever is reached first). From the resulting distribution of $`D`$ values, it is possible to assess not just the degree of discrimination for a particular dataset between the input and comparison models, but also the likelihood of obtaining a dataset with at least that level of discrimination.
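In practice $`D`$ is best computed from log-likelihoods to avoid numerical underflow; a sketch of the statistic and of the $`D_{95}`$ extraction (the log-likelihood pairs below are invented placeholders):

```python
import numpy as np

def discrimination(logL_h, logL_s):
    # D = L_h / (L_s + L_h), Eq. (17), evaluated stably from logs
    m = max(logL_h, logL_s)
    lh, ls = np.exp(logL_h - m), np.exp(logL_s - m)
    return lh / (lh + ls)

# One D value per mock dataset; the pairs here are placeholders
pairs = [(-100.2, -103.5), (-98.7, -99.1), (-101.0, -104.8)]
D = np.array([discrimination(lh, ls) for lh, ls in pairs])
print(np.percentile(D, 5))  # D_95: the 95% lower limit on D
```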
In Figure 3, we plot $`D_{95}`$ (that is, the $`95\%`$ lower limit on $`D`$) computed from the ensemble of simulated datasets, for a variety of input and comparison models (all assuming $`f_\mathrm{s}=1`$) for experiments with a lifespan of up to 20 years. The figure clearly illustrates how much longer it takes to distinguish between the competing models for smaller halo fractions and larger lens masses. For halo fractions $`f_\mathrm{h}\gtrsim 0.3`$, we expect our experiment to clearly distinguish between the two models after about 5 years if $`m_{\mathrm{dark}}\lesssim 0.5\mathrm{M}_{\odot }`$. The amount of time $`T`$ required to decisively reject the shroud model is about twice as long for lens masses $`m_{\mathrm{dark}}=0.5\mathrm{M}_{\odot }`$ as for $`m_{\mathrm{dark}}=0.1\mathrm{M}_{\odot }`$, and this is due to the larger number of events typically observed for the lower mass lenses. If our ideal experiment is indicative of the progress of the MACHO survey, it seems that even after 10 years the experiment may still be unable to clearly distinguish between the halo and shrouded LMC models if $`f_\mathrm{h}\lesssim 0.1`$ and $`m_{\mathrm{dark}}\gtrsim 0.1\mathrm{M}_{\odot }`$.
Table 2 shows the experimental lifetime $`T`$, in years, required to constrain $`D_{95}>0.95`$, at which point $`95\%`$ of datasets clearly reject the shroud model. The limits displayed in Figure 3 are summarised in columns 2–4 of Table 2. Columns 5–7 show the equivalent limits if one employs a likelihood statistic that does not take into account the spatial distribution of the events. Columns 8–10 are for a shroud mass factor half as large as assumed in Figure 3. For columns 5–7, we have assumed that the timescale distribution at all locations is the same as the distribution for the line of sight through the LMC centre. We see that the spatial distribution of events becomes an increasingly important discriminant for halos of lower lens fraction and lenses of larger mass. In the case where $`f_\mathrm{h}=0.1`$, the incorporation of the angular distribution of events into the likelihood statistic greatly enhances the sensitivity of the analysis to the lens population. In this case the numbers of generated events are similar for the competing halo and shroud models, and so the spatial distribution becomes an important discriminatory factor. Comparing columns 2–4 with columns 8–10, where a shroud mass factor $`f_\mathrm{s}=0.5`$ is assumed, we see that less massive LMC disks are easier to distinguish. The overall constraints for $`f_\mathrm{s}=0.5`$ are stronger for a given $`T`$ than those for $`f_\mathrm{s}=1`$ because the halo events always outnumber LMC events. The relative constraints for different $`f_\mathrm{h}`$ and $`m_{\mathrm{dark}}`$ are similar for both values of $`f_\mathrm{s}`$; LMC and halo models with $`m_{\mathrm{dark}}=0.5\mathrm{M}_{\odot }`$ are about twice as difficult to differentiate as those with $`m_{\mathrm{dark}}=0.1\mathrm{M}_{\odot }`$.
Stubbs (1998) has proposed a next generation microlensing survey (provisionally dubbed “SuperMACHO”) capable of detecting events at a rate at least an order of magnitude greater than current experiments. Gould (1999) finds that coverage of the whole LMC disk is the key to maximizing the returns of such a survey. In Figure 4 we compare the discrimination capability of SuperMACHO with that of current surveys, assuming that SuperMACHO commences nine years after the current surveys, and that the current experiments are continued through the next decade (in reality, the current surveys are scheduled to terminate in the next few years). Let us assume that the SuperMACHO angular coverage will be as suggested by Gould (1999), namely $`11^{\circ }\times 11^{\circ }`$ centered on the LMC bar, that the number of detected events will be ten times greater than current yields, and that the detection efficiency evolves according to equations (15) and (16). In reality the SuperMACHO detection efficiency is likely to be qualitatively different from that of the current experiments because many of the central fields will be strongly blended.
In Figure 4 we have re-plotted the $`95\%`$ limit on the discriminatory power ($`D_{95}`$) of current surveys for the case $`f_\mathrm{h}=0.1`$, $`m_{\mathrm{dark}}=0.5\mathrm{M}_{\odot }`$ (solid line). This time we plot $`D_{95}`$ against epoch rather than experiment lifetime. We adopt 1992.5 as the start of the current surveys (it actually corresponds to the start of the MACHO survey), while the SuperMACHO survey, shown by the dashed line, is assumed to start in 2001. Whilst current surveys would take 20 years to distinguish clearly between LMC and halo populations for this model, SuperMACHO takes only 18 months to reach the same level of discrimination. SuperMACHO will surpass the sensitivity of current surveys within a year of starting (if it indeed starts on the assumed date). A survey along the lines of SuperMACHO represents one of the best ways to discriminate statistically between halo and LMC lens populations in the next few years.
## 4 CONCLUSIONS
It is possible to build models of the Large Magellanic Cloud (LMC) with microlensing optical depths that are comparable to, although lower than, the observations. Such models are fatter than is conventional, with material extending to scale heights of $`6`$ kpc above the plane of the LMC disk, as is suggested by Weinberg’s (1999) numerical simulations of the evolution of the LMC in the tidal field of the Milky Way. This paper has derived the formula for the self-lensing optical depth of an equilibrium thin disk surrounded by stars dispersed at greater scale heights. As a shorthand, we call such material the shroud, even though its distribution may be quite patchy. When $`10\%`$ of the total column density is in the shroud, the self-lensing optical depth is typically between $`0.7\times 10^{-7}`$ and $`1.1\times 10^{-7}`$. The self-lensing optical depth rises to between $`1.2\times 10^{-7}`$ and $`1.9\times 10^{-7}`$ when $`20\%`$ of the column density is in the shroud. These figures should be compared to the observational estimate of $`2.1_{-0.8}^{+1.3}\times 10^{-7}`$. Provisional estimates using the 4-year dataset suggest that the optical depth may be lower (Sutherland 1999). Additionally, the difficulty of reproducing the high optical depths reported by both Udalski et al. (1994) and Alcock et al. (1997b) towards the Galactic Center using barred models of the inner Galaxy (e.g., Häfner et al. 1999) hints at a possible systematic over-estimate afflicting the experimental values. Clearly, the suggestion that almost all the microlensing events emanate at or close to the LMC cannot be dismissed lightly.
The difficulty with fattening the LMC disk is that there are no known LMC populations with a line of sight velocity dispersion exceeding $`33\mathrm{km}\mathrm{s}^{-1}`$ (Hughes et al. 1991). Stars in equilibrium in a thick disk with a scale height of $`\sim 3`$ kpc typically possess a larger velocity dispersion than this. One possibility is that the shroud stars belong to an old, metal-rich population that could have evaded detection. More likely, perhaps, is that the material in the shroud is not in a steady-state at all. Its spatial distribution may be quite patchy, making it difficult to pick out against the bright central bar. A final option is that the shroud is composed of dark or dim material, such as low mass stars or compact objects (cf. Aubourg et al. 1999). Self-lensing optical depths then overestimate the true optical depth by a factor of $`\sim 3`$, though this may be partly compensated by increasing the mass fraction in the shroud. The idea is tantamount to enveloping the LMC in its own dark halo. So, a shrouded LMC may not dispense with the need for compact dark matter. It merely re-locates it from the Milky Way halo to the LMC, though of course a much lower total mass budget in compact objects is implied. A dark shroud is difficult to rule out, although there is no obvious way to arrange the low mass stars or compact objects around the LMC thin disk.
It is natural to hope that the spatial distribution of events across the face of the LMC disk and the timescale information can be used to identify the main location of the lenses. In some circumstances, an experiment lifetime of $`<5`$ years is sufficient to decide between the competing claims of Milky Way halo lenses and LMC lenses. However, there is an awkward régime in which fattened LMC disks can mimic anorexic halos and several decades of survey work are needed for discrimination. The difficult models to distinguish are Milky Way halos in which the lens fraction is very low ($`f_\mathrm{h}<0.1`$) and obese LMC disks composed of lenses with a typical mass of low luminosity stars or greater, $`m_{\mathrm{dark}}\gtrsim 0.1\mathrm{M}_{\odot }`$. This suggests that the timescales and the geometric distribution of the microlensing events may not be sufficient for an unambiguous resolution of the puzzle of the origin of the lenses within the lifetime of the current surveys.
One suggested approach to this problem is to employ a much more sensitive microlensing survey covering the whole LMC disk, not just the regions around the bar. The proposed “SuperMACHO” survey (Stubbs 1998) should be able to discriminate between even anorexic halos and fattened LMC disks within 18 months of starting. So, the commencement of a program like SuperMACHO represents one of the most promising ways to answer this question in the next few years. In the meantime, we may still hope to differentiate between the lens locations using data from binary caustic crossing events and from the presence or absence of parallax events. As Kerins & Evans (1999) have already argued, the former are a particularly powerful diagnostic. If the next binary caustic crossing event has a high projected velocity, then this securely establishes a lensing component in the Milky Way halo. If the next binary caustic crossing event has a low projected velocity, then – given the existing dataset – it becomes overwhelmingly likely that most of the lenses lie in a fattened LMC. This method, though, does suffer from a possible bias if the Milky Way halo is under-endowed with binaries.
In the longer term, a definitive test is to measure simultaneously the photometric and astrometric microlensing signals of a few events with the Space Interferometry Mission (SIM), which is currently scheduled for launch in mid 2005. This suggestion has been advanced by Boden, Shao & van Buren (1998) and Gould & Salim (1999). It enables the unambiguous identification of the lens location at the cost of about 20 hours exposure time per event with SIM. Since this method is able to discern the location of the lenses on an event-by-event basis, rather than by ensemble likelihood statistics, SIM and SuperMACHO should provide useful and complementary datasets. One way or another, the location of the lenses will be known within five years or so.
We thank Ken Freeman, Andy Gould, Geza Gyuk, Paul Schechter and Will Sutherland for a number of helpful conversations and suggestions. Martin Weinberg and Pierre Salati kindly forwarded material in advance of publication. NWE is supported by the Royal Society, while EK acknowledges financial support from PPARC (grant number GS/1997/00311).
# Abelian monopole condensation in lattice gauge theories
## 1 Introduction
The dual superconductivity mechanism to explain color confinement has been suggested since the early days of QCD . The first evidence for dual superconductivity was obtained by studying the dual Meissner effect . More recently an alternative method to detect dual superconductivity has been proposed by the Pisa Group : it consists of measuring a disorder parameter given in terms of an operator with nonzero magnetic charge and nonvanishing v.e.v. in the confined phase. In the case of non-Abelian gauge theories they need to perform the Abelian projection. Indeed the Pisa Group found evidence of Abelian monopole condensation in several gauges: plaquette gauge, butterfly gauge and Polyakov gauge .
The aim of this work is to investigate the dynamics of lattice gauge theories in an Abelian monopole background field in a gauge-invariant way. We use the gauge-invariant effective action for an external background field, defined by means of the lattice Schrödinger functional
$$𝒵\left[\stackrel{}{A}_a^{\text{ext}}\right]=\int 𝒟U\mathrm{exp}(-S_W),$$
(1)
where $`S_\text{W}`$ is the Wilson action and $`\stackrel{}{A}^{\text{ext}}(\stackrel{}{x})=\stackrel{}{A}_a^{\text{ext}}(\stackrel{}{x})\lambda _a/2`$ is the external field. The integration constraint over the lattice links is $`U_\mu (x)|_{x_4=0}=U_\mu ^{\text{ext}}(\stackrel{}{x},0)`$, where $`U_\mu ^{\text{ext}}`$ is the lattice version of the external field $`A_\mu ^{\text{ext}}`$. The Schrödinger functional is invariant under arbitrary lattice gauge transformations of the boundary links. The lattice effective action for the background field $`A_\mu ^{\text{ext}}(\stackrel{}{x})`$ is ($`L_4`$ extension in Euclidean time):
$$\mathrm{\Gamma }\left[\stackrel{}{A}^{\text{ext}}\right]=-\frac{1}{L_4}\mathrm{ln}\left\{\frac{𝒵[U^{\text{ext}}]}{𝒵[0]}\right\}$$
(2)
$`𝒵[0]`$ is the lattice Schrödinger functional without external background field (i.e. $`U_\mu ^{\text{ext}}=1`$). Note that due to the manifest gauge invariance of the lattice background field effective action we do not need to fix the gauge.
## 2 U(1)
We are interested in the effective action with a Dirac magnetic monopole background field. In the continuum the Dirac magnetic monopole field with the Dirac string in the direction $`\stackrel{}{n}`$ is:
$$e\stackrel{}{b}(\stackrel{}{r})=\frac{n_{\mathrm{mon}}}{2}\frac{\stackrel{}{r}\times \stackrel{}{n}}{r(r-\stackrel{}{r}\cdot \stackrel{}{n})}.$$
(3)
where, according to the Dirac quantization condition, $`n_{\mathrm{mon}}`$ is an integer and $`e`$ is the electric charge (magnetic charge = $`n_{\mathrm{mon}}/2e`$). We consider the gauge-invariant background field action Eq. (2) where the external background field is given by the lattice version of the Dirac magnetic monopole field. In the numerical simulations we put the lattice Dirac monopole at the center of the time slice $`x_4=0`$. To avoid the singularity due to the Dirac string we locate the monopole between two neighbouring sites. We have checked that the results are not too sensitive to the precise position of the magnetic monopole. We introduce the disorder parameter for confinement:
$$\mu =e^{-E_{\mathrm{mon}}L_4}=\frac{𝒵[\mathrm{mon}]}{𝒵[0]},$$
(4)
where $`𝒵[0]`$ is the Schrödinger functional with $`n_{\mathrm{mon}}=0`$. According to the physical interpretation of the effective action, $`E_{\mathrm{mon}}`$ is the energy to create a monopole. To avoid the problem of dealing with a partition function we consider $`E_{\mathrm{mon}}^{}=\partial E_{\mathrm{mon}}/\partial \beta `$, which is analogous to the parameter $`\rho `$ introduced by the Pisa group . Note that $`E_{\mathrm{mon}}^{}`$ is given by the difference between the average plaquette $`P`$ obtained from configurations without and with the monopole field.
We performed lattice simulations on $`16^4`$, $`24^4`$ and $`32^4`$ lattices with periodic boundary conditions using the Quadrics/Q4 - QH1 in Bari. Note that the links belonging to the time slice $`x_4=0`$ and to the spatial boundary are constrained (no update). The constraint on the links starting from sites belonging to the spatial boundary corresponds in the continuum to the usual requirement that the fluctuations over the background field vanish at the spatial infinity. The contributions to $`E_{\mathrm{mon}}^{}`$ due to the constrained links must be subtracted, i.e.: only the dynamical links must be taken into account in evaluating $`E_{\mathrm{mon}}^{}`$.
In the strong coupling region $`\beta <\beta _c`$ the monopole energy is zero. This means that, according to Eq. (4), the disorder parameter $`\mu \simeq 1`$. Near the critical coupling $`\beta _c\simeq 1`$, $`E_{\mathrm{mon}}^{}`$ displays a sharp peak which grows and narrows as the lattice volume is increased. This means that the disorder parameter decreases towards zero in the thermodynamic limit when $`\beta \to \beta _c`$. In the weak coupling region ($`\beta >\beta _c`$) the plateau in $`E_{\mathrm{mon}}^{}`$ indicates that the monopole energy tends to the classical monopole action, which behaves linearly in $`\beta `$. In order to obtain $`\mu `$ we perform the numerical integration of $`E_{\mathrm{mon}}^{}`$:
$$E_{\mathrm{mon}}=\int _0^\beta E_{\mathrm{mon}}^{}(\beta ^{})\,d\beta ^{}$$
(5)
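As an illustration of this step (a minimal sketch in our own notation, not the production analysis code), the integration of $`E_{\mathrm{mon}}^{}`$ followed by the exponentiation of Eq. (4) might look as follows; the measured values of $`E_{\mathrm{mon}}^{}`$ would come from the plaquette differences described above.

```python
import numpy as np

def disorder_parameter(beta, e_mon_prime, L4):
    """Trapezoidal integration of E'_mon(beta'), Eq. (5), followed by
    mu = exp(-E_mon * L4), Eq. (4). `e_mon_prime` holds the measured
    plaquette differences (dynamical links only) on the grid `beta`."""
    increments = 0.5 * (e_mon_prime[1:] + e_mon_prime[:-1]) * np.diff(beta)
    e_mon = np.concatenate(([0.0], np.cumsum(increments)))
    return np.exp(-L4 * e_mon)

# Illustrative (made-up) data: a sharp peak in E'_mon near beta_c ~ 1
beta = np.linspace(0.2, 1.4, 61)
e_prime = 0.05 * beta + 2.0 * np.exp(-((beta - 1.01) / 0.05) ** 2)
mu = disorder_parameter(beta, e_prime, L4=16)  # ~1 at small beta, drops near beta_c
```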
We found that the disorder parameter $`\mu `$ is different from zero in the confined phase (i.e. the monopoles condense in the vacuum). Moreover $`\mu \to 0`$ when $`\beta \to \beta _c`$ in the thermodynamic limit (the precise determination of $`\beta _c`$ requires a finite size scaling analysis). Our result is gauge-invariant owing to the manifest gauge invariance of the Schrödinger functional.
## 3 SU(3)
We have studied the Abelian monopole condensation in pure SU(2) lattice gauge theory at finite temperature. Here we restrict ourselves to the more interesting case of SU(3) gauge theory. In this case the maximal Abelian group is U(1) $`\times `$ U(1). Therefore we have two different types of Abelian monopole. Let us consider, firstly, the Abelian monopole field
$$g\stackrel{}{b}^a(\stackrel{}{r})=\delta ^{a,3}\frac{n_{\mathrm{mon}}}{2}\frac{\stackrel{}{r}\times \stackrel{}{n}}{r(r-\stackrel{}{r}\cdot \stackrel{}{n})}.$$
(6)
which we call the $`T_3`$-Abelian monopole. The functional integration constraint now amounts, on the lattice, to fixing the links belonging to the time slice $`x_4=0`$.
In the present case the disorder parameter is defined as:
$$\mu =e^{-F_{\mathrm{mon}}/T}=\frac{𝒵[\mathrm{mon}]}{𝒵[0]},$$
(7)
where $`T=1/L_t`$ is the temperature and $`F_{\mathrm{mon}}`$ is the free energy per monopole. We measure $`F_{\mathrm{mon}}^{}=\partial F_{\mathrm{mon}}/\partial \beta `$. Again this corresponds to measuring the difference between the average plaquette without and with the monopole field.
From Fig. 2 we see that in the thermodynamic limit the disorder parameter $`\mu \simeq 1`$ in the confined phase; moreover $`\mu \to 0`$ when $`\beta \to \beta _{\mathrm{c},\mathrm{}}=5.6925(2)`$ in the infinite volume limit.
The second type of Abelian monopole field is obtained from Eq. (6) by replacing $`\delta ^{a,3}`$ with $`\delta ^{a,8}`$. A previous study found that the disorder parameters for the two independent Abelian monopoles defined by means of the Polyakov projection coincide within errors. On the contrary, our numerical results show a dramatic difference in $`F_{\mathrm{mon}}^{}`$.
The peak of $`F_{\mathrm{mon}}^{}`$ in the case of the $`T_8`$-Abelian monopole is about an order of magnitude greater than in the $`T_3`$-Abelian monopole case (see Fig. 3). Consequently, in the former case, the disorder parameter $`\mu `$ tends to zero more sharply.
## 4 Conclusions
We have studied Abelian monopole condensation both in the Abelian gauge theory U(1) and in the finite temperature non-Abelian gauge theories SU(2) and SU(3). We introduce a disorder parameter which signals Abelian monopole condensation in the confined phase. Our definition of the disorder parameter is gauge invariant by construction. Our numerical results suggest that the disorder parameter is different from zero in the confined phase and tends to zero when the gauge coupling $`\beta \to \beta _c`$ in the thermodynamic limit. Our estimates of the critical couplings are in fair agreement with those in the literature. A precise determination of the critical couplings and the critical exponents in the infinite volume limit could be obtained by means of a finite size scaling analysis. In the case of the SU(3) gauge theory, there are two independent Abelian monopole fields related to the two diagonal generators of the gauge group. Remarkably, we find that the non perturbative vacuum reacts strongly in the case of the $`T_8`$-Abelian monopole. This seems to suggest that the vacuum monopole condensate is predominantly formed by $`T_8`$-Abelian monopoles.
# Energy dissipation statistics in a shell model of turbulence
## Abstract
The Reynolds number dependence of the statistics of energy dissipation is investigated in a shell model of fully developed turbulence. The results are in agreement with a model which accounts for fluctuations of the dissipative scale with the intensity of energy dissipation. It is shown that the assumption of a fixed dissipative scale leads to a different scaling with Reynolds number which is not compatible with numerical results.
One of the most important problems in fully developed turbulence is the description of the energy transfer mechanism. In stationary situations, the energy injected at the large scale $`\ell _0`$ is transferred at a rate $`\overline{ϵ}`$ down to the dissipative scale $`\eta `$, where it is removed at the same rate by viscous dissipation. The fundamental assumption in the study of fully developed turbulence is that in the limit of very high Reynolds numbers $`Re`$, the energy dissipation $`\overline{ϵ}`$ becomes independent of $`Re`$ (i.e. of the viscosity, since $`Re=u_0\ell _0/\nu `$, with $`u_0`$ a typical large scale velocity) . In the same limit, the Kolmogorov theory predicts universal scaling of the velocity structure functions in the inertial range of scales $`\eta <\ell <\ell _0`$:
$$S_q(\ell )\equiv \langle \left(\delta u(\ell )\right)^q\rangle \sim u_0^q\left(\frac{\ell }{\ell _0}\right)^{\zeta _q}$$
(1)
with exponents $`\zeta _q=q/3`$.
Several decades of experimental and numerical investigation have shown that scaling laws (1) are indeed observed but with exponents $`\zeta _q`$ corrected with respect to the Kolmogorov prediction . This is the essence of the intermittency problem, which has received a lot of attention in the modern approach of the study of fully developed turbulence.
Experiments have shown that intermittency also affects the energy dissipation statistics, which are not uniform in the turbulent domain. A phenomenological description of intermittency is the multifractal model . This model introduces a continuous set of scaling exponents $`h`$ which relate the velocity fluctuations entering (1) to the large scale velocity $`u_0`$:
$$\delta u(\ell )\sim u_0\left(\frac{\ell }{\ell _0}\right)^h.$$
(2)
The exponent $`h`$ is realized with a probability $`\left(\frac{\ell }{\ell _0}\right)^{Z(h)}`$ where $`Z(h)`$ is the codimension of the fractal set on which the $`h`$-scaling holds. The scaling exponents of the structure functions (1) are obtained by a steepest descent argument over the exponents $`h`$:
$$\zeta _q=\underset{h}{inf}\left[qh+Z(h)\right].$$
(3)
The scaling region is bounded from below by the Kolmogorov dissipation scale $`\eta `$ at which dissipation starts to dominate, i.e. the local Reynolds number is of order 1:
$$\frac{\eta \delta u(\eta )}{\nu }\sim 1$$
(4)
At variance with the Kolmogorov theory, in the multifractal description of intermittent turbulence, the dissipative scale is a fluctuating quantity. This implies a series of consequences which have been investigated in past years . As we shall see later the description of the fluctuations of the dissipative scale is crucial for the correct evaluation of the Reynolds number dependence.
In this Paper we are interested in the dependence of the statistics of energy dissipation on the Reynolds number. The physical picture is that dissipation becomes more and more intermittent as the Reynolds number increases. Assuming that the multifractal description can be pushed down to the dissipative scale, one predicts for the moments of energy dissipation a power-law dependence on $`Re`$, with exponents related to the structure function exponents (1) . We will see that this prediction is rather natural and confirmed by numerical simulations on a shell model.
The dimensional argument for the prediction goes as follows. In a dimensional approach, the energy dissipation is evaluated as
$$ϵ=\nu \sum _{\alpha ,\beta }\left(\frac{\partial u_\alpha }{\partial x_\beta }\right)^2\sim \nu \left(\frac{\delta u(\eta )}{\eta }\right)^2$$
(5)
From (2) and (4) one has that $`\eta \sim \ell _0Re^{-1/(1+h)}`$. Inserting this in (5) and computing the average of the different moments, one ends up with the expression
$$\langle ϵ^p\rangle \sim \overline{ϵ}^p\int d\mu (h)\,Re^{-\frac{3ph-p+Z(h)}{1+h}}\sim \overline{ϵ}^p\,Re^{-\theta _p}$$
(6)
where the integral has been evaluated by a steepest descent argument (assuming $`Re\to \mathrm{}`$) and
$$\theta _p=\underset{h}{inf}\left[\frac{3ph-p+Z(h)}{1+h}\right].$$
(7)
The standard inequality of the multifractal model (following from the exact result $`\zeta _3=1`$), $`Z(h)\ge 1-3h`$, implies for (7) that $`\theta _1=0`$, which is nothing but the requirement of finite, nonvanishing dissipation in the limit $`Re\to \mathrm{}`$. For $`p>1`$, $`\theta _p<0`$, i.e. the tail of the distribution of $`ϵ`$ becomes wider with increasing Reynolds number.
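To make the prediction concrete, here is a small numerical evaluation of (7) (our own sketch). The shell-model $`Z(h)`$ of Fig. 2 is not tabulated here, so the She-Leveque exponents are used as a stand-in multifractal spectrum; the mechanics, an inverse Legendre transform followed by the minimization in (7), are the same.

```python
import numpy as np

# Stand-in exponents: She-Leveque, zeta_q = q/9 + 2(1 - (2/3)^(q/3)),
# which satisfy zeta_3 = 1. The codimension follows from the inverse
# Legendre transform Z(h) = sup_q [zeta_q - q h].
q = np.linspace(0.0, 12.0, 2401)
zeta = q / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (q / 3.0))

h = np.linspace(0.17, 1.0 / 3.0, 600)   # range reliably probed by 0 <= q <= 12
Z = np.max(zeta[None, :] - q[None, :] * h[:, None], axis=1)

def theta(p):
    """theta_p = inf_h [3ph - p + Z(h)] / (1 + h), Eq. (7)."""
    return np.min((3.0 * p * h - p + Z) / (1.0 + h))

print(theta(1.0))   # ~0: dissipation stays finite as Re -> infinity
print(theta(2.0))   # < 0: the second moment grows with Re
```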
Let us stress that the above argument is only a reasonable dimensional argument. It is essentially based on two assumptions: a physical one concerning the fluctuations of the dissipative scale according to (4), and a more formal one on the possibility of extending the multifractal description down to the dissipative scales. The two assumptions are independent: indeed, as we will see, it is possible to give different predictions by changing assumption (4) .
It would thus be important to address the problem with experiments or direct numerical simulation at high Reynolds numbers. Recent high resolution DNS gives support to (7) , but the Reynolds number is not large enough to discriminate clearly between different predictions.
Shell models are extremely simplified models of turbulence. Nevertheless, they are deterministic, nonlinear dynamical models which display intermittency and anomalous scaling exponents reminiscent of real turbulence . Their main advantage is that with shell models one can perform very accurate simulations at very high Reynolds numbers; for this reason they are thus natural candidates for a numerical investigation of Reynolds number dependence.
In shell models, the velocity fluctuations are represented by a single complex variable $`u_n`$ on shells of geometrically spaced wavenumber $`k_n=k_0\mathrm{\hspace{0.17em}2}^n`$. The particular model we adopt for our investigation is a recently introduced model which displays strong intermittency corrections . The model equations are
$$\frac{du_n}{dt}=ik_n\left(u_{n+2}u_{n+1}^{*}-\frac{1}{4}u_{n+1}u_{n-1}^{*}+\frac{1}{8}u_{n-1}u_{n-2}\right)-\nu k_n^2u_n+f_n$$
(8)
where $`\nu `$ is the viscosity and $`f_n`$ is a forcing term restricted to the first two shells. For $`\nu =f_n=0`$ the model conserves the total energy $`E=\sum _n|u_n|^2`$. For simplicity, the forcing adopted for the present simulations is $`f_n\propto 1/u_n^{*}`$, which guarantees a constant energy input $`\overline{ϵ}`$. The large scale Reynolds number of the simulation is estimated as $`Re=\overline{ϵ}^{1/3}/(\nu k_0^{4/3})`$ and is numerically controlled by the value of the viscosity.
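For concreteness, a minimal time integrator for Eq. (8) might look as follows (a sketch under stated assumptions, not the code behind the statistics reported here): the viscous term is treated exactly through an integrating factor, the nonlinear term with a plain Euler step, and the normalization of the constant-flux forcing on the first two shells (chosen so that the total input is $`\overline{ϵ}`$) is our own choice.

```python
import numpy as np

N, k0, nu, eps_bar, dt = 22, 1.0, 1e-7, 1.0, 1e-6
k = k0 * 2.0 ** np.arange(N)
u = 1e-3 * (np.random.randn(N) + 1j * np.random.randn(N))

def nonlinear(u):
    """Nonlinear term of Eq. (8); shells outside 0..N-1 are set to zero."""
    v = np.zeros(N + 4, dtype=complex)
    v[2:N + 2] = u
    return 1j * k * (v[4:N + 4] * np.conj(v[3:N + 3])
                     - 0.25 * v[3:N + 3] * np.conj(v[1:N + 1])
                     + 0.125 * v[1:N + 1] * v[0:N])

decay = np.exp(-nu * k ** 2 * dt)             # exact viscous damping per step
eps_series = []
for step in range(500000):
    f = np.zeros(N, dtype=complex)
    f[:2] = 0.25 * eps_bar / np.conj(u[:2])   # constant energy input eps_bar
    u = decay * (u + dt * (nonlinear(u) + f))
    if step % 100 == 0:                       # Eq. (10): instantaneous dissipation
        eps_series.append(2.0 * nu * np.sum(k ** 2 * np.abs(u) ** 2))
# dt must resolve the fastest shells; production runs need smaller steps
# and higher-order schemes than this sketch.
```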
The chaotic dynamics is responsible for intermittency corrections to the structure functions exponent $`\zeta _q`$, here defined by means of
$$S_q(n)=\langle |u_n|^q\rangle \sim k_n^{-\zeta _q},$$
(9)
which are close to the experimental values . In Figure 1 we plot the spectrum of structure function exponents obtained from very long simulations. The multifractal codimension $`Z(h)`$ is numerically obtained from $`\zeta _q`$ by inverting the Legendre transform (3). The result is shown in Figure 2. We observe that, because of the strong intermittency in the model, it is numerically difficult to obtain statistical convergence of structure functions (9) of order $`q>8`$. As a consequence, the minimum exponent for $`Z(h)`$ is $`h_{min}\simeq 0.2`$.
From the energy balance equation we have the instantaneous energy dissipation
$$ϵ=2\nu \sum _nk_n^2|u_n|^2$$
(10)
whose average is $`\langle ϵ\rangle =\overline{ϵ}`$ in stationary conditions.
We have performed very long simulations at different Reynolds numbers, starting from $`Re=2\times 10^5`$ up to $`Re=10^8`$. For each simulation we computed the different moments of energy dissipation, $`\langle ϵ^p\rangle `$. Shell model dynamics is characterized by strong bursts of energy dissipation, which limits the possibility of computing high order moments with confidence. Here we limited ourselves to moments $`p\le 8`$. In Figure 3 we plot the behavior of $`\langle ϵ^p\rangle `$ as a function of $`Re`$ for different values of $`p`$. The power law behavior is evident and the scaling exponent $`\theta _p`$ can be estimated with good accuracy. By construction $`\langle ϵ\rangle =\overline{ϵ}`$ is independent of $`Re`$.
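The exponent extraction itself is a straightforward log-log fit (sketch; `eps_moments` stands for the time-averaged moments measured at each Reynolds number, which are not reproduced here):

```python
import numpy as np

Re = np.array([2e5, 2e6, 2e7, 1e8])

def fit_theta(eps_moments, Re):
    """Slopes of log<eps^p> vs log Re; with the convention of Eq. (6),
    <eps^p> ~ Re^(-theta_p), so theta_p is minus the fitted slope.
    eps_moments[i, j] = measured <eps^p_j> at Reynolds number Re[i]."""
    return np.array([-np.polyfit(np.log(Re), np.log(m), 1)[0]
                     for m in eps_moments.T])
```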
The scaling exponents $`\theta _p`$ of (6) are plotted in Figure 4, together with the multifractal prediction (7). Let us observe that, because the largest $`q`$ in (9) is $`q=8`$, the estimate of (7) is limited to values $`p\lesssim 2.5`$. For higher $`p`$, the numerical evaluation of (7) feels the effect of the cutoff of $`h`$ at $`h_{min}`$. Nevertheless, we have a rather large range of moments ($`0\le p\le 2.5`$) over which the numerical data display a perfect agreement with (7).
As discussed above, prediction (7) makes use of the fluctuating dissipative scale $`\eta `$. If, on the contrary, one assumes that the dissipation scale enters (5) as an averaged quantity, the prediction for $`\theta _p`$ is different: assuming $`\nu \delta u(\stackrel{~}{\eta })^2/\stackrel{~}{\eta }^2\sim \overline{ϵ}`$ as the definition of the (non-fluctuating) dissipation scale $`\stackrel{~}{\eta }`$ (this is the only choice that ensures that $`ϵ\sim Re^0`$) , one ends up with $`\stackrel{~}{\theta }_p=[\zeta _{2p}-p\zeta _2]/[2-\zeta _2]`$ .
Our results allow us to discriminate between the prediction (7) and the one obtained with a non-fluctuating dissipative scale. Figure 4 shows that the numerical $`\theta _p`$ is definitely not compatible with the latter alternative, whereas it supports with good accuracy the prediction (7).
In conclusion, it is an expected consequence of the existence of intermittency in the energy transfer that the dissipative scale $`\eta `$ fluctuates according to the local intensity of energy dissipation, being smaller where the dissipation is stronger and vice versa. The fluctuations of the inner scale of turbulence reflect onto the Reynolds dependence of the statistics of energy dissipation. Long numerical simulations of shell-models confirm with great accuracy the validity of the multifractal model, which accounts for the fluctuations of $`\eta `$, and rule out alternative models which do not describe properly the correlations between $`\eta `$ and $`ϵ`$.
# The Ground State of the Two-Dimensional Hubbard Model
## Abstract
We have studied the ground state of the two-dimensional Hubbard model by using the recently proposed adaptive sampling quantum Monte Carlo (ASQMC) method. We have paid attention to models whose non-interacting band dispersion is almost flat near $`(\pi ,0)`$. To minimize the effect of the finite size gap overlying the Fermi level, we have tuned both the filling and the band structure. We found an enhancement of the d-wave correlation function at large distance, a spin gap, and a momentum distribution function consistent with a d-wave gap. We also found the coexistence of commensurate and incommensurate peaks in $`S(\stackrel{}{q})`$, which does not contradict a recent experimental finding that both the resonance peak and the incommensurate peaks reside at the same doping level in YBCO and BSCCO.
It is very difficult to calculate ground state properties of large two-dimensional (2D) Hubbard clusters away from half filling when $`U/t`$ is not small, because of the notorious negative sign problem for quantum Monte Carlo (QMC) methods. We have recently proposed the adaptive sampling quantum Monte Carlo (ASQMC) method to cope with this problem. The project to develop the ASQMC method was influenced by the publication of the constrained path (CP) Monte Carlo method by Zhang, Carlson and Gubernatis (ZCG). However, we found it is possible to reduce the negative sign ratio without any constraint if we adopt a new update scheme. This may be possible because the update scheme adopted by ZCG itself seems to suppress the negative sign ratio, which should be the reason why their method is superior in accuracy to the more elaborate positive projection method proposed by Fahy and Hammann 8 years earlier. We developed our update scheme within the framework of the “exact update method” of the auxiliary field quantum Monte Carlo method. Our scheme is not only easier to understand for those who have been working on Hubbard and related models, but is also superior in accuracy to ZCG’s scheme because it is free from the inaccurate mixed estimator method for measurements, and its variants, which are indispensable to the CP method. This should be one of the main reasons why our ASQMC method gives more accurate results than the CP method. Indeed, we observed that the ASQMC gives a better value for the ground state energy of the 14-electron $`4\times 4`$ cluster at $`U/t=12`$. Even without any constraint, the negative sign ratio is greatly reduced in all the cases studied, which makes measurements stable. $`\langle \mathrm{Sign}\rangle `$ decreases to $`0`$ very slowly as we increase the projection imaginary time $`\tau `$ (keeping $`\mathrm{\Delta }\tau `$ and $`\tau _c`$ fixed). Typically, $`\tau =20`$ to $`30`$ is feasible. We have performed the serial correlation test in many cases and found that samples obtained by the ASQMC are statistically independent, and that the variance calculated by the standard formula gives a good estimate of the true variance. As an example we show the autocorrelation function for the d-wave superconducting correlation function at $`\stackrel{}{q}=0`$, for parameter set (i) described below, in Fig. 1.
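In sketch form (our own minimal version, with a random series standing in for the measured d-wave correlator), the serial correlation test amounts to estimating the normalized autocorrelation function and the integrated autocorrelation time:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation rho(t) of a Monte Carlo time series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x * x)
    return np.array([np.mean(x[:len(x) - t] * x[t:]) / var
                     for t in range(max_lag + 1)])

series = np.random.randn(10000)     # stand-in for the measured correlator
rho = autocorrelation(series, 50)
tau_int = 0.5 + np.sum(rho[1:])     # ~0.5 for statistically independent samples;
                                    # the standard variance formula is reliable
                                    # only when tau_int is close to this value
```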
We used the ASQMC method to study the ground state properties of the 2D Hubbard model. The parameter region to which we paid attention was such that the non-interacting band dispersion was almost flat near the $`(\pi ,0)`$ point. We tuned the electron filling and the band dispersion such that the energy difference between the highest occupied level (HOL) and the lowest unoccupied level (LUL) was close to zero. The Hubbard model we studied may belong to a different class from the one frequently studied on the bipartite lattice. We have studied various parameter regions of $`6\times 6`$, $`8\times 8`$, $`10\times 10`$ and $`12\times 12`$ clusters. Here we discuss the following three cases: (i) a $`6\times 6`$ lattice with $`t^{}=0.1667`$ and $`t^{\prime \prime }=0.2`$, $`U=4`$, and $`28`$ electrons; (ii) a $`10\times 10`$ lattice with $`t^{}=0.2`$ and $`t^{\prime \prime }=0.05`$, $`U=4`$, and $`84`$ electrons; (iii) a $`12\times 12`$ lattice with $`t^{}=0.1676`$ and $`t^{\prime \prime }=0.2`$, $`U=4`$, and $`92`$ electrons. To make sure that our result has converged to the $`\tau \to \mathrm{}`$ limit, we show the $`\tau `$ dependence of the ground state energy of case (i) in Fig. 2. The calculation was stable at least up to $`\tau =20`$, but the energy had already converged at $`\tau =3`$. We observed similarly fast convergence in case (ii). Hence, we proceeded with calculations at $`\tau =3`$ to save CPU time. The fast convergence suggests that the energy gaps among states with the same symmetry are relatively large, while the gap between the ground state and low lying excited states in the finite size system can be small if these states have different symmetries. This is because we are using the projector method. We found an enhancement of the d-wave superconducting correlation at large distance in the ground state when we used the parameter sets (i) and (ii). We also found that the momentum distribution function can be well described by the d-wave BCS mean field theory. We found exponential decay of the distance dependence of the spin correlation function, which is suggestive of the opening of a spin gap. All of these are consistent with a d-wave superconducting ground state. Even more surprisingly, the commensurate peak coexists with the incommensurate peaks in $`S(\stackrel{}{q})=\int d\omega S(\stackrel{}{q},\omega )`$ of the superconducting state. This is characteristic of the parameter region containing the three parameter sets. The coexistence was seen most clearly in an overdoped region, case (iii), as shown in Fig. 3. (The interval between the incommensurate peaks is largest in (iii).) Experiments on YBCO and BSCCO seem to suggest that both the resonance peak and the incommensurate peaks exist at different $`\omega `$ but at the same filling. This suggestion from experiments does not contradict our result for $`S(\stackrel{}{q})`$. In the other parameter regions we have so far not observed a similar $`S(\stackrel{}{q})`$. The numerical results obtained so far with the ASQMC method suggest that the band dispersion close to $`(\pi ,0)`$ is crucial to the ground state of the 2D Hubbard model.
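For reference, the diagnostic behind Fig. 3 is a lattice Fourier transform of the equal-time spin-spin correlations; the sketch below (placeholder data, our own notation) shows the shape of that computation.

```python
import numpy as np

def structure_factor(C):
    """S(q) = sum_r exp(i q.r) C(r) for an L x L correlation array C,
    C[i, j] standing for <S^z_0 S^z_(i,j)> from the ASQMC measurements."""
    L = C.shape[0]
    S = np.fft.fft2(C).real
    q = 2.0 * np.pi * np.fft.fftfreq(L)
    return q, S

C = 0.01 * np.random.rand(12, 12)   # placeholder correlations on a 12x12 lattice
q, S = structure_factor(C)
# Coexisting commensurate/incommensurate structure would show up as peaks
# at q = (pi, pi) and at (pi, pi +/- delta), (pi +/- delta, pi).
```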
# Consistent histories, quantum truth functionals, and hidden variables
## 1 Introduction
Textbook quantum theory is beset by numerous conceptual difficulties when it comes to providing a physical interpretation of the mathematical formalism of quantum theory. The unsatisfactory nature of the usual treatments results in a variety of paradoxes: Schrödinger’s cat, Einstein-Podolsky-Rosen, and the like . The consistent histories (sometimes called decoherent histories) approach to quantum mechanics disposes of these difficulties by combining probability theory with the standard Hilbert space formalism in a coherent way through the use of histories which satisfy certain consistency conditions.
A central principle of the more recent formulations of consistent history (CH) ideas is the single framework (also known as the single family, single logic, or single set) rule, which plays an essential role in ensuring the logical consistency of quantum theory, resolving various quantum paradoxes, and getting rid of the mysterious superluminal influences which are sometimes thought to be a consequence of the quantum formalism. The purpose of this Letter is to explain how the single framework rule renders the CH approach fully compatible with a well-known result on the impossibility of hidden variables in a Hilbert space of dimension three or greater due to Bell , and Kochen and Specker . Following recent work by Bassi and Ghirardi , we shall explore the problem using the notion of a truth functional. While this is not absolutely essential—the ideas of ordinary probability theory suffice, when they are properly used—it is convenient for the conceptual issues we will be discussing, and assists in explaining why our key conclusions are directly contrary to those of .
Since we will be concerned with the Hilbert space description of a system at a single time, most of the formal machinery of the CH approach—histories, consistency, and assignment of probabilities—is not needed for the following discussion. This will allow us to concentrate on the most essential point, which is that a quantum Hilbert space differs in crucial respects from a classical phase space, and this mathematical difference must be reflected in a physical interpretation of the theory. The idea of a truth functional, which may be unfamiliar to some readers, is developed in Sec. 2 using a classical phase space, where classical intuition is a reliable guide, and then applied in Sec. 3 to a quantum Hilbert space, where certain classical ideas run into difficulty. It is at this point that results on no-hidden-variables are used, illustrated with a two-spin paradox due to Mermin , based on an idea of Peres . The CH approach to truth functionals and the no-hidden-variables theorem is the subject of Sec. 4, while the claims found in are discussed in Sec. 5.
## 2 Classical Truth Functionals
Consider a classical mechanical system, such as a simple harmonic oscillator, described by a phase space $`\mathrm{\Gamma }`$, with $`\gamma `$ a representative point. Any physical property $`P`$ of the system corresponds to some set of points $`𝒫`$ in the phase space for which this property is true, and we shall define the corresponding indicator function $`P(\gamma )`$ to be 1 whenever $`\gamma `$ is in $`𝒫`$, and 0 otherwise. For example, let $`P`$ be the property that the total energy of the oscillator is less than some constant $`E_0`$. Then $`𝒫`$ is the the region inside an ellipse in the $`x,p`$ plane ($`x`$ the position, $`p`$ the momentum), and $`P(\gamma )`$ is 1 for $`\gamma `$ inside and 0 for $`\gamma `$ outside this ellipse. If $`I`$ is the function equal to 1 everywhere on the phase space, the indicator of the negation $`\stackrel{~}{P}`$ of $`P`$, “energy greater than or equal to $`E_0`$” in our example, is the function $`I-P`$: it is 0 wherever $`P`$ is 1, and 1 wherever $`P`$ is 0. Likewise, if $`P`$ and $`Q`$ are any two properties, the product $`PQ`$ of the two indicators is the indicator for the conjunction $`P\wedge Q`$ of $`P`$ and $`Q`$, the property “$`P`$ AND $`Q`$”. Similarly, the disjunction $`P\vee Q`$, “$`P`$ OR $`Q`$”, corresponds to the indicator $`P+Q-PQ`$.
Consider a coarse graining of the phase space into a collection $`𝒟`$ of $`N`$ non-overlapping regions or “cells”. Then we can write
$$I=\sum _{j=1}^ND_j,$$
(1)
where $`D_j`$ is the indicator corresponding to the $`j`$’th cell. Since the cells do not overlap it follows that
$$D_jD_k=\delta _{jk}D_j,$$
(2)
consistent with the obvious fact that $`I^2=I`$. The set of $`2^N`$ indicators of the form
$$P=\sum _{j=1}^N\pi _jD_j,$$
(3)
where $`\pi _j`$ is either 0 or 1, form a Boolean algebra $`ℬ`$ of properties, in which $`P\wedge Q`$ is $`PQ`$, and $`P\vee Q`$ is $`P+Q-PQ`$, as defined above.
Given such a Boolean algebra $`ℬ`$, we define a truth functional $`\theta `$ to be a function which assigns to every property $`P`$ in $`ℬ`$ the value 1 (true) or 0 (false) in a way which satisfies the following three conditions:
$$\theta (I)=1,\theta (I-P)=1-\theta (P),\theta (PQ)=\theta (P)\theta (Q).$$
(4)
These correspond to the rather sensible requirements that something is always true, that $`P`$ is true if and only if its negation $`\stackrel{~}{P}=I-P`$ is false, and that both $`P`$ and $`Q`$ are true if and only if their conjunction $`PQ`$ is true.
One can show that for a given coarse graining $`𝒟`$ with Boolean algebra $`ℬ`$, there is a one-to-one correspondence between truth functionals on $`ℬ`$ and the elements of $`𝒟`$. That is to say, any function $`\theta `$ taking only the values 0 and 1 and satisfying (4) must be of the form
$$\theta _k(P)=\{\begin{array}{cc}1\hfill & \text{if }PD_k=D_k\text{,}\hfill \\ 0\hfill & \text{if }PD_k=0\text{.}\hfill \end{array}$$
(5)
for some $`k`$. In terms familiar from probability theory, one can regard the non-overlapping cells which constitute the coarse graining $`𝒟`$ as a sample space of mutually exclusive possibilities, one and only one of which actually occurs, or is “true”, namely the cell which contains the phase point $`\gamma `$ which represents the actual or true state of the system. From this perspective $`\theta _k(P)`$ is the probability of the property $`P`$ conditional upon the property $`D_k`$, and we have the usual identification of “true” with “probability one” and “false” with “probability zero”.
Notice that it is because we are assuming that $`P`$ is of the form (3) that the product $`PD_k`$ must have one of the two forms on the right side of (5): no property of the form (3) can include part but not all of some cell $`D_k`$. Consequently, (5) defines a truth functional for indicators belonging to this particular algebra $`ℬ`$, but not for all possible properties; in this sense a truth functional is relative to a particular decomposition $`𝒟`$, or Boolean algebra $`ℬ`$. However, it is also possible to construct a universal truth functional which is not limited to a single Boolean algebra, but which will assign 0 or 1 to any indicator on the classical phase space in a manner which satisfies (4). To do this, choose some point $`\gamma _0`$ in $`\mathrm{\Gamma }`$, and let
$$\theta _0(P)=P(\gamma _0).$$
(6)
That is, $`\theta _0`$ assigns the value 1 to any property which contains the point $`\gamma _0`$, and 0 to any property which does not contain this point, in agreement with how one would normally understand “true” in a case in which the state of the system is correctly described by $`\gamma _0`$. As we shall see, a key difference between classical and quantum physics is the fact that in the latter there are no universal truth functionals as long as the Hilbert space has a dimension greater than 2.
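The classical constructions above are easy to make concrete. In the toy sketch below (all names are ours), indicators on a discretized phase space are boolean arrays, the cells $`D_j`$ tile the space, and the universal truth functional of (6) simply evaluates an indicator at the chosen point $`\gamma _0`$; the conditions in (4) can then be checked directly.

```python
import numpy as np

gamma = np.arange(100)                          # 100 "phase-space points"
cells = [(gamma % 4) == j for j in range(4)]    # a 4-cell coarse graining
I = np.ones(100, dtype=bool)                    # the identity indicator

def theta0(P, gamma0=17):
    """Universal classical truth functional, Eq. (6): theta_0(P) = P(gamma_0)."""
    return int(P[gamma0])

P = cells[0] | cells[1]              # an element of the Boolean algebra
assert theta0(I) == 1                                # theta(I) = 1
assert theta0(~P) == 1 - theta0(P)                   # negation rule
assert theta0(P & cells[1]) == theta0(P) * theta0(cells[1])   # conjunction rule
```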
## 3 Quantum Truth Functionals
The quantum counterpart of a classical phase space is a Hilbert space $``$. For our purposes it suffices to consider cases in which $``$ is of finite dimension, thus avoiding the mathematical complications of infinite-dimensional spaces. The counterpart of a classical property is a linear subspace $`𝒫`$ of $``$, with a corresponding orthogonal projection operator or projector $`P`$. If $`I`$ is the identity operator, the negation $`\stackrel{~}{P}`$ of the property $`P`$ corresponds to the projector $`IP`$, and the conjunction $`PQ`$ of two properties corresponds to the projector $`PQ`$ in the case in which $`P`$ and $`Q`$ commute with each other. If $`PQQP`$, then neither $`PQ`$ nor $`QP`$ is a projector, so there is no obvious way to define a property corresponding to the conjunction. We shall return to this later, in Sec. 4.
The quantum counterpart of a coarse graining of a classical phase space is a decomposition $`𝒟`$ of the identity which has precisely the form (1), where the $`D_j`$ are now mutually orthogonal projectors which satisfy (2). This decomposition gives rise to a set of projectors of the form (3), all of which commute with each other, and which form a Boolean algebra $`ℬ`$ analogous to the algebra of classical indicator functions.
A quantum truth functional $`\theta `$ assigns to every projector $`P`$ in the Boolean algebra (3) the value 0 or 1 in a way which satisfies the three conditions in (4). Once again, there is a one-to-one correspondence between truth functionals and the elements of $`𝒟`$, and each such truth functional is of the form (5) for some $`k`$. The intuitive interpretation is also similar in the quantum and classical cases. The projectors which enter the decomposition $`𝒟`$, or equivalently the corresponding subspaces of $`ℋ`$, form a sample space of mutually exclusive possibilities, one and only one of which is a correct description of the system, and thus “true”. Larger subspaces in $`ℋ`$ which contain this true space are also true, and the others are false.
A truth functional associated with the decomposition $`𝒟`$ can be used to assign a (real) numerical value to an observable corresponding to a Hermitian operator $`A`$ written in the form
$$A=\sum _ja_jD_j,$$
(7)
where the $`a_j`$ are, of course, the eigenvalues of $`A`$. Thus if the truth functional is $`\theta _k`$, (5), then $`D_k`$ is true, and any $`D_j`$ with $`j`$ unequal to $`k`$ is false, so $`\theta _k`$ assigns to $`A`$ the (eigen)value $`a_k`$. Similarly, given some collection of observables represented by commuting Hermitian operators $`A,B,C,\mathrm{}`$, the operators can be simultaneously diagonalized using a single decomposition of the identity. Then a truth functional associated with this decomposition will assign numerical values to each of the operators in a consistent way, so that, for example, if $`A`$, $`B`$, and $`C`$ are assigned the eigenvalues $`a`$, $`b`$, and $`c`$, then, for example, the operator $`AB+2C`$—which, of course, can also be represented using the same decomposition of the identity—will be assigned the value $`ab+2c`$.
In analogy with the classical case, Sec. 2, let us define a universal quantum truth functional $`\theta `$ as one which assigns to every projector $`P`$ (thus every subspace) of $`ℋ`$ one of the two values 0 or 1 in a way which satisfies the rules in (4), with, however, the following qualification. If two projectors $`P`$ and $`Q`$ do not commute, so that $`PQ`$ is not a projector, then the third rule in (4) should be ignored; we only require that it hold in cases in which $`PQ=QP`$. When applied to projectors belonging to the Boolean algebra $`ℬ`$ associated with some particular decomposition $`𝒟`$ of the identity, a universal truth functional has the same properties as an “ordinary” truth functional; that is, if the decomposition is (1), then $`\theta `$ coincides, on this Boolean algebra, with $`\theta _k`$, (5), for some specific $`k`$. Consequently, a universal truth functional, if it exists, can be used to assign to every observable (Hermitian operator) on $`ℋ`$ one of its eigenvalues, and for commuting collections of Hermitian operators these values will satisfy the usual algebraic rules associated with ordinary numbers when one considers products and sums of operators, as in the example considered above.
Alas, it was shown by Bell , and by Kochen and Specker that if $`ℋ`$ has a dimension of three or more, universal truth functionals do not exist. A very simple nonexistence proof for a Hilbert space of dimension 4 is provided by Mermin’s paradox for two spin-half particles , based upon the following “magic square”, which was also used in :
$$\begin{array}{ccc}\sigma _x^a& \sigma _x^b& \sigma _x^a\sigma _x^b\\ \sigma _y^b& \sigma _y^a& \sigma _y^a\sigma _y^b\\ \sigma _x^a\sigma _y^b& \sigma _y^a\sigma _x^b& \sigma _z^a\sigma _z^b\end{array}$$
(8)
Here $`\sigma _x^a`$ is the Pauli $`\sigma _x`$ operator for spin $`a`$, $`\sigma _y^b`$ the $`\sigma _y`$ operator for spin $`b`$, and so forth. Note that the operators for particle $`a`$ commute with those for particle $`b`$, whereas the commutator of $`\sigma _x^a`$ and $`\sigma _y^a`$ is $`2i\sigma _z^a`$, etc. Each of the nine operators in the square has eigenvalues $`+1`$ and $`-1`$, and each eigenvalue is two-fold degenerate. The three operators in each row in (8) commute with each other, as do the three operators in each column. In addition, it is not hard to show that the product of the three operators in each row is the identity $`I`$. The product of the three operators in each of the first two columns is $`I`$, but the product of those in the third column is $`-I`$.
These mathematical properties are incompatible with the existence of a universal truth functional. For suppose that such a functional existed. Then, as explained earlier, it could be used to assign a numerical eigenvalue of $`\pm 1`$ to each of the nine operators in the square. Since the operators in each row commute with one another, the usual algebraic properties would be preserved for the numerical assignments corresponding to this row. This would mean that the product of the numbers in any given row would have to be $`+1`$, since a truth functional must assign to $`I`$ its only eigenvalue, $`+1`$. Similarly, the product of the numerical values in the first two columns would have to be $`+1`$, and in the last column it would have to be $`-1`$. But no such assignment of numerical values exists. For example,
$$\begin{array}{ccc}-1& -1& +1\\ +1& -1& -1\\ -1& -1& +1\end{array}$$
(9)
satisfies all the product rules, except that the product of the values in the second column is $`-1`$, not $`+1`$. To see that no assignment of $`\pm 1`$ can satisfy all the rules, find the product of the three numbers in every row, next the product of the three numbers in every column, and, finally, take the product of all six of these products. The result will be the product of the squares of all nine entries in the $`3\times 3`$ matrix, thus $`+1`$, whereas the rules would require that it be $`-1`$.
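Both halves of this argument are easy to verify numerically. The sketch below (our own check, using numpy) builds the nine operators of (8) from Pauli matrices, confirms the row and column products, and then exhausts all $`2^9`$ candidate value assignments to show that none satisfies the six product constraints.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
A = lambda s: np.kron(s, I2)        # operator acting on spin a
B = lambda s: np.kron(I2, s)        # operator acting on spin b

square = [[A(X),        B(X),        A(X) @ B(X)],
          [B(Y),        A(Y),        A(Y) @ B(Y)],
          [A(X) @ B(Y), A(Y) @ B(X), A(Z) @ B(Z)]]

I4 = np.eye(4)
for row in square:                                  # each row multiplies to +I
    assert np.allclose(row[0] @ row[1] @ row[2], I4)
for c, sign in zip(range(3), (1, 1, -1)):           # columns: +I, +I, -I
    col = [square[r][c] for r in range(3)]
    assert np.allclose(col[0] @ col[1] @ col[2], sign * I4)

# No assignment of +/-1 values satisfies all six product constraints:
ok = [v for v in product((1, -1), repeat=9)
      if v[0] * v[1] * v[2] == v[3] * v[4] * v[5] == v[6] * v[7] * v[8] == 1
      and v[0] * v[3] * v[6] == 1 and v[1] * v[4] * v[7] == 1
      and v[2] * v[5] * v[8] == -1]
assert ok == []
```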
## 4 Consistent Histories and Truth Functionals
How does the consistent histories approach deal with Mermin’s magic square and similar paradoxes? To understand this, let us return to a problem mentioned earlier, that of making sense of the conjunction $`P\wedge Q`$, “$`P`$ AND $`Q`$”, of two quantum properties when the projectors do not commute, $`PQ\ne QP`$. (Notice that this problem never arises in classical physics, since the product of two indicators on the classical phase space is the same in either order.) For example, for a spin-half particle, the projector for the property $`\sigma _y=+1`$ is $`\frac{1}{2}(I+\sigma _y)`$, and that for the property $`\sigma _x=+1`$ is $`\frac{1}{2}(I+\sigma _x)`$. As these projectors obviously do not commute with each other, can one make sense of the statement $`\sigma _y=+1`$ AND $`\sigma _x=+1`$?
The answer of the consistent historian is that one cannot make sense of $`\sigma _y=+1`$ AND $`\sigma _x=+1`$: it is a meaningless statement in the sense that the CH interpretation assigns it no meaning. In the CH approach there are no hidden variables, and thus there is a one-to-one correspondence between quantum properties and subspaces of the Hilbert space. Since every one-dimensional subspace of the two-dimensional Hilbert space $`ℋ`$ of a spin-half particle corresponds to a spin in a particular direction, there is none left over which could plausibly represent $`\sigma _y=+1`$ AND $`\sigma _x=+1`$.
To be sure, one might consider assigning to $`\sigma _y=+1`$ AND $`\sigma _x=+1`$ the zero element of $`ℋ`$, which is a zero-dimensional subspace corresponding to the property which is always false, analogous to an indicator which is everywhere zero on a classical phase space. This, in fact, was the proposal (for this situation) of Birkhoff and von Neumann in their discussion of quantum logic . It is important to notice the difference between their approach and the one used in CH. A proposition which is meaningful but false is very different from a meaningless proposition: the negation of a false proposition is a true proposition, whereas the negation of a meaningless proposition is equally meaningless. The Birkhoff and von Neumann approach requires, as they pointed out, a modification of the ordinary rules of propositional logic, whereas the CH approach does not. However, in CH quantum theory it is necessary to exclude meaningless properties from meaningful discussions, which is not a trivial task.
In particular, in CH quantum theory any quantum description of a single system at a particular time must employ a single framework, which is to say a single Boolean algebra of commuting projectors based upon a definite decomposition of the identity. To be sure, alternative descriptions can be constructed using different decompositions of the identity, but these cannot be combined to form a single description, nor can logical reasoning about a quantum system be carried out by combining results from two different Boolean algebras. This is the single framework rule, which is central to a correct understanding of CH quantum theory, and the point most often misunderstood by physicists unfamiliar with the CH approach.
In some cases results for two different Boolean algebras can be combined by using the device of a “common refinement”: a third algebra which includes all the projectors of the first two algebras. If a common refinement exists, the two algebras (or frameworks) are said to be compatible; if not, they are incompatible. If two algebras are compatible, one can use the common refinement instead of the original algebras as the single framework required by the single framework rule. However, such a refinement exists if and only if all the projectors in one of the algebras commute with all of the projectors in the other algebra. Consequently, it is not possible to produce a consistent quantum description which combines results from two Boolean algebras when some of the projectors in one do not commute with some of the projectors in the other.
The single framework rule as applied to Boolean algebras of properties refers to a single system at a single instant of time. Given two nominally identical systems, there is no reason why one cannot use one framework for the first and a different framework for the second. For instance, in the case of the two spin-half particles in (8), there is no problem using $`\sigma _x`$ for one and $`\sigma _y`$ for the other. Thus, whereas $`\sigma _y=+1`$ AND $`\sigma _x=+1`$ is a meaningless expression for a single particle, $`\sigma _y^a=+1`$ AND $`\sigma _x^b=+1`$ makes perfectly good sense. Conversely, when incompatible frameworks turn up in some quantum discussion, it is best to think of them as referring to two different systems, or to a single system at two different times. (The latter is an example of a history, and when histories involve three or more times, the CH approach imposes additional rules—but these are outside the scope of the present discussion.)
In view of the preceding remarks, the reader will not be surprised to learn that truth functionals are meaningful constructions within CH quantum theory provided they refer to a single framework or Boolean algebra. Thus a universal truth functional makes no sense as soon as the Hilbert space is of dimension two, since one already has projectors which do not commute with each other, and simultaneously (i.e., with a single truth functional) assigning truth values to properties which cannot simultaneously enter the same quantum description is meaningless. The same objection applies to universal truth functionals in higher-dimensional Hilbert spaces, but, as already noted, there are no such things in Hilbert spaces of dimension three or more!
As the single framework rule may seem a bit abstract, let us see how it applies to the case of the magic square (8). As long as one considers a single row, the operators commute with each other, and hence they are of the general form (7) using a single decomposition of the identity. There is, therefore, no problem with introducing a truth functional which assigns to each of these operators one of its eigenvalues; for example, those in the top row of (9). Of course, the same remark applies to the operators in the first column of (8): they commute with each other, and can all be expressed in terms of a single decomposition of the identity. However, this decomposition of the identity is different from the one used for the first row, and the two are incompatible—they have no common refinement—as is immediately obvious from the fact that $`\sigma _x^b`$ in the first row does not commute with $`\sigma _y^b`$ in the first column. Consequently, it makes no sense to simultaneously assign values to all the operators in the top row and those in the first column, much less values to all nine operators in the square. For this reason it is impossible to construct a paradox if one pays attention to the CH rules.
## 5 The Argument of Bassi and Ghirardi
In Bassi and Ghirardi use the magic square (8) to argue that CH ideas, when combined with three assumptions which they regard as reasonable (and even necessary for a sound interpretation) together with a fourth assumption usually made by proponents of the CH interpretation (but which they themselves find questionable), lead to a logical contradiction. The four assumptions can be stated briefly as follows, where the language has been changed so that it refers to a system at a single time (rather than to histories), as this is all that is needed for the present discussion:
(a) Ordinary rules of classical reasoning apply to a single Boolean algebra of projectors.
(b) It is possible to assign a truth value to every projector which occurs in a particular Boolean algebra.
(c) A projector should be assigned the same truth value in all Boolean algebras which contain it.
(d) Any Boolean algebra of projectors can be used to construct a legitimate quantum description.
From these assumptions Bassi and Ghirardi deduce the existence of a universal truth functional, to use the terminology employed in this Letter, and then show that such an object is inconsistent with the mathematical properties of the operators in (8).
Assumptions (a) and (d) are part of the standard CH approach. As for (b), we noted in Sec. 3 that if a decomposition of the identity contains $`N`$ projectors, there are $`N`$ distinct truth functionals associated with the corresponding Boolean algebra $`ℬ`$, so it is always possible to choose one of them and use it to assign a truth value to the projectors in $`ℬ`$. Thus (b) agrees with the CH interpretation. (A statement in Sec. 5 of could be interpreted to mean that (b) is equivalent to the assumption that every property has a truth value; however, the interpretation given here coincides with that intended by the authors .)
Assumption (c) is ambiguous. As noted in Sec. 3, any quantum truth functional is associated with a particular Boolean algebra. One could understand (c) to mean that for a given projector $`P`$, we should consider all Boolean algebras which contain it, and in each of these algebras we should pick a truth functional which assigns the same value, say 1, to $`P`$. This can certainly be done, as a mathematical exercise, for a particular projector $`P`$. However, the authors of mean something quite a bit stronger . Namely, that every projector on $`ℋ`$—or, if one does not make assumption (d), every projector belonging to some collection of acceptable properties—is assigned a definite truth value, 1 or 0, independent of the Boolean algebra which contains it. Understood in this way, (c) combined with (a), (b), and (d) is equivalent to the assumption of a universal truth functional, an impossibility for a Hilbert space of dimension 3 or more, as explained in Sec. 3, and contrary to CH quantum mechanics for the reasons indicated in Sec. 4.
However, the single framework rule of CH quantum theory is already inconsistent with the first (weaker) interpretation of (c) in the preceding paragraph, in the following sense. Suppose that $`P`$ belongs to two incompatible Boolean algebras $`ℬ^{}`$ and $`ℬ^{\prime \prime }`$ containing projectors which do not commute with each other. Then $`ℬ^{}`$ and $`ℬ^{\prime \prime }`$ cannot be combined in a single quantum description. Therefore, if both are to be employed, they cannot refer to the same physical system at the same time. But if we are dealing with two physical systems, or the same system at two different times, there is, in general, no reason to suppose that two truth functionals should coincide for $`P`$ or for any other projector in $`ℬ^{}\cap ℬ^{\prime \prime }`$.
It may help to consider a specific example. Let $`ℬ^{}`$ be the Boolean algebra corresponding to the operators on the first row of the magic square (8), $`ℬ^{\prime \prime }`$ that of the first column, and choose truth functionals $`\theta ^{}`$ and $`\theta ^{\prime \prime }`$ which assign (as discussed in Sec. 3) the values shown in the first row and the first column of (9), respectively. Then both $`\theta ^{}`$ and $`\theta ^{\prime \prime }`$ assign to $`\sigma _x^a`$ the value $`-1`$, in accordance with the weaker interpretation of (c). Nonetheless, they cannot possibly refer to the same physical situation, because $`\theta ^{}`$ assigns to $`\sigma _x^b`$ the value $`-1`$, and $`\theta ^{\prime \prime }`$ to $`\sigma _y^b`$ the value $`+1`$. But a single system at a single time cannot have both a value for $`\sigma _x^b`$ and a value for $`\sigma _y^b`$ if one uses the Hilbert space of standard quantum mechanics, for the reasons discussed in Sec. 4 above. Consequently, the fact that $`\theta ^{}`$ and $`\theta ^{\prime \prime }`$ assign the same value, $`-1`$, to $`\sigma _x^a`$ is, from the CH point of view, devoid of any particular significance, since if these truth functionals are describing different systems this agreement is fortuitous.
In summary, if (c) is understood in the stronger of the two senses discussed above it leads, in combination with (d), to the existence of a universal truth functional, which conflicts with the Bell-Kochen-Specker results as well as with consistent histories. However, a clear conflict with the single framework rule of CH quantum theory already arises under the weaker interpretation of (c), as soon as two or more incompatible Boolean algebras are used to refer to a single physical system at the same time.
Unfortunately, the single framework rule, despite the prominence given to it in several of the references cited in their Letter, is not mentioned by Bassi and Ghirardi when they introduce what they consider to be the basic principles of CH quantum theory, nor is it referred to, except somewhat obliquely in their Sec. 2(c), until after they have completed their definitions and their main argument. When at the beginning of Sec. 4 they state the rule in its entirety for the first time, they admit that their argument is, indeed, in conflict with this rule, but then offer the excuse that they are employing a different form of reasoning from that employed in CH quantum theory. While the single framework rule is appropriate for the latter, it cannot, they assert, apply to the former.
There seems to be no reason to debate this point, as Bassi and Ghirardi are surely not obligated to adopt the rules for quantum reasoning which the developers of the CH approach regard as most appropriate. One could wish, however, that they had made plain much earlier in their Letter that it is not “standard” consistent histories quantum theory, but instead an alternative version they themselves invented, with different rules of reasoning, which leads to a logical contradiction. In this regard theirs can be added to a list of work by other authors (see Sec. V C and D, and App. A of ; also ) which shows that attempting to construct a histories interpretation of quantum theory while omitting the single framework rule generally leads to unsatisfactory results.
Although the claim that (standard) CH leads to a logical contradiction is unfounded for the reasons just noted, there is another aspect of which deserves comment. Translated into the language used in this Letter, the authors make what is, in essence, the claim that one cannot treat quantum properties as part of an “objective reality” if one denies the existence of a universal truth functional. This is the sort of conceptual and philosophical issue which cannot be settled by an appeal to logic and mathematics; instead it requires the application of physical intuition and an exercise of judgment. It is the case that in CH quantum theory, properties at a single time, and histories, which are sequences of quantum properties at successive times, are, under appropriate conditions, considered to be “real” and “objective”. Some issues involved in treating the CH approach as a realistic interpretation of quantum theory are discussed in Sec. V B of , and the conclusion, translated into the terminology of the present Letter, is that there is no reason why one should regard the existence of universal truth functionals as a necessary part of quantum reality. Rather than repeat the argument here, let me simply note that I believe that provides a quite adequate response, even though it was written earlier, to the concerns raised in .
## Acknowledgments
The author is indebted to T. A. Brun and O. Cohen for reading the manuscript, and to A. Bassi and G. C. Ghirardi for correspondence regarding . The research described here was supported by the National Science Foundation Grant PHY 99-00755.
|
no-problem/9909/cond-mat9909055.html
|
ar5iv
|
text
|
# A Fast Algorithm for Generating Long Self-Affine Profiles
## I Introduction
With the advent of the computer as a serious research tool, there has been a revolution in the quantitative description of processes and structures that before were deemed too complex. Two of the key concepts used for this description are the fractal and its close relative, the self-affine structure . In the early eighties, much effort was spent identifying and describing various physical systems having fractal or self-affine structure. As time went by, focus slowly shifted from pure description to asking why such structures would appear. This led to the development of the science of complex growth phenomena. Now, many aspects of these are well understood. However, there is still a host of interesting but unanswered questions — see e.g. Refs. for recent reviews. More recently, focus has again begun to shift somewhat, and one sees work dealing with the physical consequences of the presence of fractal or self-affine structures. A concrete example of these three levels of development may be found in the study of fracture surfaces. In the early eighties, Mandelbrot et al. characterized fracture surfaces as self-affine; in the early nineties attempts were made to understand why fracture surfaces are self-affine . Recently, phenomena such as two-phase flow in fracture joints have been studied .
In order to study the physical consequences of the presence of self-affine surfaces, algorithms generating these must be found. There are already several in existence, see e.g. Feder . However, subtle phenomena require the generation of huge surfaces. Two aspects of the algorithms then become important: (i) how self-affine are the surfaces that are generated, and (ii) how fast is the algorithm.
The most popular algorithm used today is the Fourier filtering algorithm. This algorithm, which has a fast implementation thanks to the fast Fourier transform, consists of generating, in the Fourier domain, uncorrelated Gaussian random numbers which are filtered by a decaying power-law filter of exponent $`-2H-1`$, where $`H`$ is the Hurst exponent (to be defined in Section II). By taking advantage of the inverse fast Fourier transform, self-affine surfaces in real space, with the desired correlations, are generated.
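As a concrete illustration, here is a minimal NumPy sketch of this Fourier filtering procedure (our own illustration, not code from the literature). The exponent $`-2H-1`$ quoted above refers to the power spectrum; on the Fourier amplitudes this corresponds to a filter $`|q|^{-(2H+1)/2}`$. The treatment of the $`q=0`$ mode is an arbitrary choice made for definiteness.

```python
import numpy as np

def ffm_profile(N, H, rng=None):
    """Self-affine profile of length N via Fourier filtering."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.fft.rfftfreq(N)                    # nonnegative frequencies
    amp = np.zeros_like(q)
    amp[1:] = q[1:] ** (-(2 * H + 1) / 2.0)   # decaying power-law filter
    noise = rng.standard_normal(q.size) + 1j * rng.standard_normal(q.size)
    return np.fft.irfft(amp * noise, n=N)     # back to real space
```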
It has previously been shown that the Fourier filtering algorithm has the disadvantage that the self-affine correlations in the limit of large systems only exists over a fraction of the total system size. This is due to aliasing effects. For large enough systems this fraction might be well below 1%. One might overcome the above problem by e.g. temporarily generating a much larger surface than actually needed, and only use a small fraction of the total size. This is, however, not a very appealing approach, as the computer time and memory needed easily become too large. Another way of getting around this problem is due to Makse et al. . Here the (Fourier-space) filter function is modified by the introduction of a large momentum cut-off through the use of a modified Bessel function in the Fourier transform of the power spectrum. They show that this large momentum cut-off, while irrelevant for the large scale behavior in real space, is essential in order to suppress the aliasing effect and thereby obtaining surfaces with the desired scaling properties over the entire system size.
In this paper we report on an alternative filtering algorithm based on wavelets which a priori, and without modifications, gives rise to self-affine correlations which extends (up to finite size effects) over the entire system independently of its size. This algorithm is also computationally cheaper than the traditional (or modified) Fourier filtering algorithm.
This paper is organized as follows: In Section II we briefly review the defining properties of self-affine surfaces. Here we also include some results which will prove useful to us later. Section III is devoted to the outline of the new algorithm. In Section IV we present numerical studies of this algorithm. We conclude in Section V.
## II Self-affine surfaces
We limit our discussion in this paper to $`(1+1)`$-dimensional surfaces, which we will call profiles. A (statistically) self-affine profile, $`h(x)`$, is by definition a structure which remains (statistically) invariant under the following scaling relation. (Strictly speaking, in order to fully define the self-affine structure, one should also introduce the topothesy, which is defined as the length scale $`l`$ over which the RMS height $`\sigma `$ satisfies $`\sigma (l)=l`$. However, we will not explicitly need this quantity here, and will therefore simply make no further reference to it.)
$`x`$ $`\to `$ $`\lambda x,`$ (2)
$`h`$ $`\to `$ $`\lambda ^Hh.`$ (3)
Here $`\lambda `$ is a real number and $`H`$, known as the roughness or Hurst exponent, characterizes this invariance. This exponent is usually in the range from zero to one. When $`H=1/2`$, the profile is not correlated. An example of such a profile is the Brownian motion in one dimension. In this case, we interpret time as $`x`$ and $`h(x)`$ as the position of the Brownian particle at time $`x`$. When $`H>1/2`$ the profile is persistent, while when $`H<1/2`$ it is anti-persistent.
We show in Fig. 1 an example of a self-affine profile generated by the algorithm to be presented in this paper.
From the scaling relation (II), one can often relatively easily derive scaling relations for related quantities. In this paper we will later explicitly need the scaling relation for the second order structure function
$`S(\mathrm{\Delta }x)`$ $`=`$ $`\left\langle \left|h(x+\mathrm{\Delta }x)-h(x)\right|^2\right\rangle _x,`$ (4)
where $`\left\langle \cdots \right\rangle _x`$ represents the average over the position variable $`x`$, and the power spectrum $`P(q)`$, defined as the Fourier transform of the height-height correlation function. They scale as
$`S(\mathrm{\Delta }x)`$ $`\sim `$ $`(\mathrm{\Delta }x)^{2H},`$ (5)
and
$`P(q)`$ $`\sim `$ $`q^{-2H-1}.`$ (6)
We will make use of these two scaling relations in Section IV.
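For later reference, the following is a minimal sketch (ours, not taken from the paper) of how Eq. (5) translates into a numerical estimate of $`H`$: compute the second order structure function of a sampled profile on a grid of lags and fit a power law. The lag grid and fit range are arbitrary illustrative choices.

```python
import numpy as np

def structure_function(h, lags):
    """Second order structure function S(dx) of Eq. (4)."""
    return np.array([np.mean((h[dx:] - h[:-dx]) ** 2) for dx in lags])

def hurst_estimate(h):
    lags = np.unique(np.logspace(0, np.log10(len(h) // 4), 20).astype(int))
    S = structure_function(h, lags)
    slope, _ = np.polyfit(np.log(lags), np.log(S), 1)
    return 0.5 * slope  # since S ~ (dx)**(2H)
```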
## III The wavelet filtering algorithm
Recently, the wavelet transform has been used to analyze self-affine profiles . The idea behind this analysis is as follows: We denote the wavelet transform of a function $`h(x)`$ by $`𝒲[h](a,b)`$, where $`a`$ and $`b`$ are the scaling and location variables respectively. They form together the wavelet domain. In Ref. the authors introduced what they called the average wavelet coefficient function, defined as $`W[h](a)=\left\langle \left|𝒲[h](a,b)\right|\right\rangle _b`$, where $`\left\langle \cdots \right\rangle _b`$ denotes the average over all the location parameters $`b`$ corresponding to one and the same scale $`a`$. For a self-affine function, $`h(x)`$, this quantity should scale as
$`W[h](a)`$ $`\sim `$ $`a^{H+1/2}.`$ (7)
In much the same way as the Fourier filtering algorithm is used for generating self-affine profiles via the fast Fourier transform, a wavelet based filtering technique can be based on Eq. (7) in combination with the fast wavelet transform. The output of the fast wavelet transform is a vector organized as a collection of various levels or hierarchies, all of different lengths, where each level, $`\ell `$, is associated with a corresponding scale $`a_{\ell }`$. The two first components of this vector, also known as level $`\ell =0`$, are associated with the scaling function. All the other components are “true” wavelet coefficients, such that at level $`\ell `$, corresponding to scale $`a_{\ell }=2^{\ell }`$, there are $`N_{\ell }=2^{\ell }`$ coefficients. These coefficients (using our convention) are arranged such that the coefficients of the highest level are found at the end of the vector, and the levels decrease monotonically towards the top of the vector, corresponding to the level $`\ell =0`$.
Hence, the wavelet based algorithm, which we will be referring to as the wavelet filtering algorithm (WFA), consists of the following three steps (a code sketch is given after the list):
* Generate in the wavelet-domain normalized uncorrelated Gaussian numbers $`\{\eta _i\}`$, with $`i=1,2,\mathrm{},N`$ where $`N`$ is the number of discrete points $`x_i`$ that together with $`h_i=h(x_i)`$ constitute the self-affine profile. $`N`$ is assumed, due to the use of the fast wavelet transform, to be a power of $`2`$.
* Filter these random numbers according to
$`w_i`$ $`=`$ $`\left(a_{\ell (i)}\right)^{H+1/2}{\displaystyle \frac{\eta _i}{\left\langle \left|\eta \right|\right\rangle _{\ell (i)}}},\quad i=1,2,\mathrm{},N,`$ (8)
to obtain the wavelet coefficients $`\{w_i\}`$. Here $`a_{\ell (i)}=2^{\ell (i)}`$ represents the scale at level $`\ell (i)`$, where $`\ell (i)`$ is defined as the level corresponding to the location index $`i`$ of the vector $`w_i`$ (or $`\eta _i`$). Furthermore, $`\left\langle \left|\eta \right|\right\rangle _{\ell (i)}`$ represents the average of the absolute value of those $`\eta _i`$ that together form level $`\ell (i)`$.
* Perform the inverse fast wavelet transform on $`\{w_i\}`$, with the (compactly supported) wavelet of your choice, to obtain the (real-space) self-affine profile of predefined Hurst exponent $`H`$.
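A minimal sketch of these three steps is given below, using NumPy and the PyWavelets package. It is our own illustration rather than the authors' code: PyWavelets orders coefficients coarsest-first (the reverse of the vector layout described above), the Daubechies $`D12`$ wavelet used in Section IV corresponds to "db6" in the PyWavelets naming scheme, and zeroing the scaling coefficients is an arbitrary choice that merely fixes the overall offset.

```python
import numpy as np
import pywt

def wfa_profile(N, H, wavelet="db6", rng=None):
    """Self-affine profile of length N (a power of 2) with Hurst exponent H."""
    rng = np.random.default_rng() if rng is None else rng
    wav = pywt.Wavelet(wavelet)
    levels = pywt.dwt_max_level(N, wav.dec_len)
    # Transforming a zero signal yields coefficient arrays of the right shape:
    # coeffs[0] holds the scaling coefficients, and coeffs[k] (k >= 1) holds
    # the detail coefficients at scale a = 2**(levels - k + 1), finest last.
    coeffs = pywt.wavedec(np.zeros(N), wav, level=levels)
    for k in range(1, len(coeffs)):
        a = 2.0 ** (levels - k + 1)
        eta = rng.standard_normal(coeffs[k].shape)
        # Step (ii), Eq. (8): power-law filter in scale, normalized per level.
        coeffs[k] = a ** (H + 0.5) * eta / np.mean(np.abs(eta))
    coeffs[0][:] = 0.0  # pin the coarsest (scaling) coefficients
    return pywt.waverec(coeffs, wav)[:N]

h = wfa_profile(4096, H=0.6)  # the parameters of the profile shown in Fig. 1
```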
With the wavelet filtering algorithm, good quality self-affine surfaces with predefined Hurst exponent can be generated. In Fig. 1 we show an example of a self-affine surface of Hurst exponent $`H=0.6`$ and length $`N=4096`$ generated by the algorithm just outlined. It is worth noting that the above three steps can be modified in order to deal with surfaces in higher dimensions . In this case the speed of the surface generating algorithm becomes very important.
One of the prominent features of the wavelet transform is that the basis functions, the wavelets, are localized in both space and frequency. As a consequence, among other things, there is no aliasing, or it is at least heavily suppressed compared to the Fourier transform. This implies that the wavelet filtering algorithm should automatically result in surfaces which have the desired correlations over the entire length of the profile, and not just a small fraction of it. Hence, independent of the system size, the WFA is capable of generating profiles with well-defined long-range correlations. We will demonstrate the validity of this claim in Section IV.
Before turning to the numerical studies of the WFA, we add some remarks regarding the computational efficiency of this algorithm. The most time-consuming part of the WFA is the inverse wavelet transform. To a good approximation, at least for larger system sizes, this time determines the overall computational time of the entire algorithm. The fast wavelet transform needs $`𝒪(cN)`$ operations, where $`c`$ is a positive real number whose value depends on the wavelet used . Thus, the number of operations needed for generating a surface by the WFA is linear in the number of points belonging to the profile. In comparison, the Fourier filtering algorithm, whose speed is mainly controlled by the fast Fourier transform, needs $`𝒪(N\mathrm{log}_2N)`$ operations to generate a profile. For large system sizes the difference in execution time between the WFA and the Fourier filtering algorithm becomes significant.
## IV Numerical results
In order to test numerically the predictions of the previous section, we have chosen to study the second order structure function, $`S(\mathrm{\Delta }x)`$, and the power spectrum $`P(q)`$ of self-affine profiles generated with the wavelet filtering algorithm. The appropriate scaling relations for these two quantities are given by Eqs. (5) and (6). They will provide us with independent information enabling us to accurately quantify over which length scales the self-affine correlations exist. The numerical experiments, for which the results will be presented shortly, were performed as follows: We generated, by WFA, an ensemble of long self-affine profiles all with the same Hurst exponent $`H`$. For each profile the structure function and power spectrum were calculated, and these were averaged over the ensemble of profiles.
In Fig. 2 we give the numerical results for the second order structure function obtained as described above. The predefined Hurst exponents, used by the surface generator, were from bottom to top $`H=0.8`$, $`0.6`$, $`0.4`$, and $`0.2`$ as indicated in the figure. The length of each profile was $`N=2^{25}=\mathrm{33\hspace{0.17em}554\hspace{0.17em}432}`$. The number of profiles used in obtaining the averages was $`N_h=50`$, and the wavelet used was of the Daubechies-type ($`D12`$) . The dashed lines are regression fits to the numerical data. They correspond (from bottom to top) to Hurst exponents of $`H=0.80\pm 0.01`$, $`0.60\pm 0.01`$, $`0.41\pm 0.01`$, and $`0.22\pm 0.02`$, all consistent, within the errorbars, with the predefined exponents given above. One easily observes from Fig. 2 that the correlations extend over all scales except for the largest lags $`\mathrm{\Delta }x`$. The reason that the last few large lags do not fit into this general picture is finite size effects. We have also undertaken the above analysis for different types of wavelets, taken from the Daubechies family, and for various system sizes, finding no results that are inconsistent with those presented in Fig. 2.
In order to make the comparison with the Fourier filtering algorithm even more apparent, we present in Fig. 3 the average power spectrum obtained using the same surfaces as were used in Fig. 2. The correlations again span most scales. The dashed regression fits lead to the following exponents (from bottom to top): $`H=0.80\pm 0.01`$, $`0.61\pm 0.01`$, $`0.41\pm 0.01`$, and $`0.20\pm 0.01`$, which again is in excellent agreement with the values of the Hurst exponent used for the generation of the underlying profiles.
Figs. 2 and 3 indicate that the self-affine correlations span all but the largest scales of the profiles. We stress that this is a generic property of the wavelet filtering algorithm, and no modification of the algorithm is needed in order to handle large system sizes in a satisfactory manner. This is a consequence of the celebrated property of the wavelets being localized both in space and frequency.
The calculations of this paper were performed on a SGI/Cray Origin 2000 supercomputer based on the R10000 chip from SGI. On this machine the average cpu time needed for generating a profile of the length used above ($`N=2^{25}`$) was $`t_{\text{wfa}}=45`$s for the WFA and $`t_{\text{ffm}}=125`$s for the traditional (or modified) Fourier filtering algorithm. Hence the speedup gained by using the wavelet filtering algorithm over the Fourier filtering algorithm is close to a factor $`3`$. For system sizes $`N\lesssim 10^3`$, we could not observe any significant difference between the two algorithms.
## V Conclusions
To summarize, we have introduced a fast and simple algorithm for generating long (or short) self-affine profiles. This algorithm, named the wavelet filtering algorithm, is demonstrated to overcome the aliasing problem that troubles the traditional Fourier filtering algorithm. Furthermore, the wavelet based filtering technique outperforms its Fourier-domain counterpart by a large margin with respect to computational cost, at least for large system sizes.
###### Acknowledgements.
I.S. would like to thank the Research Council of Norway and Norsk Hydro ASA for financial support. A.H. thanks H.N. Nazareno and F.A. Oliveira for warm hospitality and the I.C.C.M.P. for support. This work has received support from the Research Council of Norway (Program for Supercomputing) through a grant of computing time.
|
no-problem/9909/cond-mat9909162.html
|
ar5iv
|
text
|
# Finite-size corrections to the free energies of crystalline solids
## Abstract
We analyse the finite-size corrections to the free energy of crystals with a fixed center of mass. When we explicitly correct for the leading ($`\mathrm{ln}N/N`$) corrections, the remaining free energy is found to depend linearly on $`1/N`$. Extrapolating to the thermodynamic limit ($`N\to \infty `$), we estimate the free energy of a defect-free crystal of particles interacting through an $`r^{-12}`$-potential. We also estimate the free energy of a perfect hard-sphere crystal near coexistence: at $`\rho \sigma ^3`$=1.0409, the excess free energy of a defect-free hard-sphere crystal is $`5.91889(4)kT`$ per particle. This, however, is not the free energy of an equilibrium hard-sphere crystal. The presence of a finite concentration of vacancies results in a reduction of the free energy that is some two orders of magnitude larger than the present error estimate.
The earliest numerical technique to compute the free-energy of crystalline solids was introduced some 30 years ago by Hoover and Ree . At present, the “single-occupancy-cell” method of Ree and Hoover is less widely used than the so-called “Einstein-crystal” method proposed by Frenkel and Ladd . The latter method employs thermodynamic integration of the Helmholtz free energy along a reversible artificial pathway between the system of interest and an Einstein crystal. The Einstein crystal serves as a reference system, as its free energy can be computed analytically. Since its introduction, the Einstein-crystal method has been used extensively in studies of phase equilibria involving crystalline solids. For numerical reasons - to suppress a weak divergence of the integrand - the Einstein-crystal method calculations have to be carried out at fixed center of mass. The free energy of the reference crystal is also calculated under the center-of-mass constraint, and the final calculated free energy of the unconstrained crystal is determined by correcting for the effect of imposing the constraint in the calculations. In the original paper, the fixed center-of-mass constraint was only applied to the particle coordinates, but not to the corresponding momenta. This is irrelevant as long as one computes the free-energy difference between two structures that have either both constrained or both unconstrained centers of mass. However, when computing the absolute free energy of a crystal, one needs to transform from the constrained to the unconstrained system. In the original paper, this transformation was not performed consistently. This resulted in a small but noticeable effect on the computed absolute free energy of the crystal. Below, we describe the proper approach to calculating the free energy of arbitrary molecular crystalline solids. The derivation differs from the earlier work in two respects: first of all, we explicitly show the effect of momentum constraints; secondly, we generalize the expression to an arbitrary crystal containing atoms or molecules with different masses.
The main point of interest involves the calculation of the partition function of a crystal with and without a constrained center of mass. The partition function for an unconstrained, $`d`$-dimensional crystalline solid of $`N_{mol}`$ molecules composed of a total of $`N`$ atoms is given by
$$Q=c_N\int d^{dN}\vec{r}\,d^{dN}\vec{p}\,\mathrm{exp}[-\beta \mathcal{H}(\vec{r}_i,\vec{p}_i)]$$
(1)
where $`c_N`$=$`(h^{dN_{mol}}N_1!N_2!\mathrm{}N_m!)^1`$, where there are $`N_1`$ indistinguishable molecules of type 1, $`N_2`$ molecules of type 2, etc., where $`N_1+N_2+\mathrm{}+N_m=N_{mol}`$, and $`h`$ is Planck’s constant. It should be noted that, in all calculations of phase equilibria between systems that obey classical statistical mechanics, Planck’s constant drops out of the result. Hence, in what follows, we omit all factors $`h`$. Using the result of the appendix in an article by Ryckaert and Ciccotti , one can show that the constrained partition function $`Q^{con}`$ is given by
$`Q^{con}`$ $`=`$ $`c_N{\displaystyle \int d^{dN}\vec{r}\,d^{dN}\vec{p}\,\mathrm{exp}[-\beta \mathcal{H}(\vec{r}_i,\vec{p}_i)]}`$ (2)
$`\times `$ $`\delta (\vec{\sigma }(\vec{r}))\,\delta (𝐆^{-1}\vec{\dot{\sigma }})`$ (3)
where $`\vec{\sigma }(\vec{r})`$ and $`\vec{\dot{\sigma }}`$ are the constraints and time derivatives of the constraints, respectively, and
$$G_{ij}=\sum _{k=1}^{N}\frac{1}{m_k}\frac{\partial \sigma _i}{\partial \vec{r}_k}\cdot \frac{\partial \sigma _j}{\partial \vec{r}_k}$$
(4)
The same integration limits implicit in Eqn. (1) are also used in Eqn. (3). To constrain the center of mass (CM), we take $`\vec{\sigma }(\vec{r})=\sum _{i=1}^N\mu _i\vec{r}_i`$, and thus, $`\vec{\dot{\sigma }}=\sum _{i=1}^N(\mu _i/m_i)\vec{p}_i`$, where $`\mu _i\equiv m_i/\sum _im_i`$. Note that in Eqns. (1) and (3) we have assumed that there are no additional internal molecular constraints, such as fixed bond lengths or bond angles.
We first consider the case of an Einstein crystal, which has a potential energy function given by $`U_{Ein}=(\alpha /2)\sum _{i=1}^N(\vec{r}_i-\vec{r}_i^{(0)})^2`$, where $`\vec{r}_i^{(0)}`$ are the equilibrium lattice positions. Note that the particles in a crystal are associated with specific lattice points and therefore behave as if they are distinguishable - thus, $`c_N=1`$ (as we omit the factor $`1/h^{d(N-1)}`$). It is easy to show that
$$Q_{Ein}^{CM}=Z_{Ein}^{CM}P_{Ein}^{CM},$$
(5)
with
$`Z_{Ein}^{CM}`$ $`=`$ $`{\displaystyle \int d^{dN}\vec{r}\prod _{i=1}^{N}\mathrm{exp}[-(\beta \alpha /2)r_i^2]\,\delta (\sum _{i=1}^{N}\mu _i\vec{r}_i)}`$ (6)
$`=`$ $`\left({\displaystyle \frac{\alpha \beta }{2\pi \sum _i\mu _i^2}}\right)^{d/2}\left({\displaystyle \frac{2\pi }{\alpha \beta }}\right)^{Nd/2}`$ (7)
$`=`$ $`\left({\displaystyle \frac{\alpha \beta }{2\pi \sum _i\mu _i^2}}\right)^{d/2}Z_{Ein}`$ (8)
and
$`P_{Ein}^{CM}`$ $`=`$ $`{\displaystyle \int d^{dN}\vec{p}\prod _{i=1}^{N}\mathrm{exp}[-(\beta /2m_i)p_i^2]\,\delta (\sum _{i=1}^{N}\vec{p}_i)}`$ (9)
$`=`$ $`\left({\displaystyle \frac{\beta h^2}{2\pi M}}\right)^{d/2}{\displaystyle \prod _{i=1}^{N}}\left({\displaystyle \frac{2\pi m_i}{\beta h^2}}\right)^{d/2}`$ (10)
$`=`$ $`\left({\displaystyle \frac{\beta h^2}{2\pi M}}\right)^{d/2}P_{Ein}\text{ ,}`$ (11)
where $`M=\sum _im_i`$, while $`Z_{Ein}`$ and $`P_{Ein}`$ are the corresponding contributions to $`Q_{Ein}`$, the partition function of the unconstrained Einstein crystal. Clearly,
$$Q_{Ein}^{CM}=\left(\sum _im_i/\sum _im_i^2\right)^{d/2}(\beta ^2\alpha h^2/4\pi ^2)^{d/2}Q_{Ein}$$
(12)
Similarly, one can show that the partition function for an arbitrary crystalline system subject to the CM constraint is given by
$$Q^{CM}=Z^{CM}(\beta h^2/2\pi M)^{d/2}\prod _{i=1}^{N}(2\pi m_i/\beta h^2)^{d/2},$$
(13)
with
$$Z^{CM}=\int d^{dN}\vec{r}\,\mathrm{exp}[-\beta U(\vec{r}_i)]\,\delta (\sum _{i=1}^{N}\mu _i\vec{r}_i)$$
(14)
while the partition function of the unconstrained crystal is given by
$$Q=Z\prod _{i=1}^{N}(2\pi m_i/\beta h^2)^{d/2},$$
(15)
with
$$Z=\int d^{dN}\vec{r}\,\mathrm{exp}[-\beta U(\vec{r}_i)]$$
(16)
Note that, as far as the kinetic part of the partition function is concerned, the effect of the fixed center of mass constraint is the same for an Einstein crystal as for an arbitrary “realistic” crystal. Using Eqns. (13) and (15), the Helmholtz free energy difference between the constrained and unconstrained crystal is given by
$`\beta (F-F^{CM})`$ $`=`$ $`-\mathrm{ln}(Q/Q^{CM})`$ (17)
$`=`$ $`\mathrm{ln}(Z^{CM}/Z)-{\displaystyle \frac{d}{2}}\mathrm{ln}(2\pi M/\beta h^2)`$ (18)
We note that
$`{\displaystyle \frac{Z^{CM}}{Z}}`$ $`=`$ $`{\displaystyle \frac{\int d^{dN}\vec{r}\,\mathrm{exp}[-\beta U(\vec{r}_i)]\,\delta \left(\sum _i\mu _i\vec{r}_i\right)}{\int d^{dN}\vec{r}\,\mathrm{exp}[-\beta U(\vec{r}_i)]}}`$ (19)
$`=`$ $`\left\langle \delta ({\displaystyle \sum _i}\mu _i\vec{r}_i)\right\rangle =𝒫(\vec{r}_{CM}=\vec{0})`$ (20)
where $`\vec{r}_{CM}\equiv \sum _i\mu _i\vec{r}_i`$, and $`𝒫(\vec{r}_{CM})`$ is the probability distribution function of the center of mass, $`\vec{r}_{CM}`$.
To calculate $`𝒫(\vec{r}_{CM})`$ we exploit the fact that the equilibrium crystal lattice is invariant under translations through linear combinations of integer multiples of the lattice vectors. This is true if the crystal lattice is subject to periodic boundary conditions. Consequently, the probability distribution of the center of mass of the lattice is evenly distributed over a volume equal to that of the Wigner-Seitz cell of the lattice positioned at the center of the volume over which we carry out the integration in the partition function. Since the average center of mass of the crystal is equal to the center of mass of the lattice, it follows that $`𝒫(\vec{r}_{CM})=1/V_{ws}=N_{ws}/V`$, where $`V_{ws}`$ is the volume of a Wigner-Seitz cell, and $`N_{ws}`$ is the number of such cells in the system. Thus, $`Z^{CM}/Z=𝒫(\vec{r}_{CM}=\vec{0})=N_{ws}/V`$. In the case of one molecule per cell, this implies $`Z^{CM}/Z=N_{mol}/V`$, where $`N_{mol}`$ is the number of molecules in the system.
In the Frenkel-Ladd free energy calculation, the free energy difference between the constrained crystal and the reference systems is given by
$$\beta F^{CM}=\beta F_{Ein}^{CM}-\beta \int _0^1𝑑\lambda \left\langle \mathrm{\Delta }U\right\rangle _\lambda ^{CM}$$
(21)
where the statistical average of $`\mathrm{\Delta }U\equiv U_{Ein}-U`$ is calculated by simulation at fixed CM as a function of $`\lambda `$ under an effective potential given by $`\stackrel{~}{U}(\lambda )=(1-\lambda )U+\lambda U_{Ein}`$. Note that the center of mass must be calculated in the same manner as described in the paragraph above. Further, note that this expression is only rigorously valid for systems interacting with continuous potentials. In the case of particles with discontinuous potentials, e.g. hard particles, the internal potential energy cannot be turned off continuously. The calculation for this case differs slightly, and is discussed in detail in the original article and in Ref. .
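In practice the $`\lambda `$-integral in Eq. (21) is evaluated on a finite set of quadrature points; Gauss-Legendre quadrature is one common choice, sketched below. Here `mean_dU` is a hypothetical stand-in for a fixed-center-of-mass simulation returning $`\left\langle \mathrm{\Delta }U\right\rangle _\lambda ^{CM}`$ at a given coupling.

```python
import numpy as np

def integrate_dU(mean_dU, n_points=16):
    """Approximate the integral of <Delta U>_lambda over lambda in [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n_points)  # nodes/weights on [-1, 1]
    lam = 0.5 * (x + 1.0)                             # map nodes to [0, 1]
    values = np.array([mean_dU(l) for l in lam])      # one simulation per node
    return 0.5 * np.dot(w, values)                    # 0.5 = Jacobian of the map
```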
Using Eqns. (12), (18) and (21), we find that the free energy per molecule of the unconstrained crystal is given by
$`{\displaystyle \frac{\beta F}{N_{mol}}}`$ $`=`$ $`-\left({\displaystyle \frac{dN}{2N_{mol}}}\right)\mathrm{ln}(2\pi /\beta \alpha )-{\displaystyle \frac{1}{N_{mol}}}\mathrm{ln}{\displaystyle \prod _{i=1}^{N}}\left[{\displaystyle \frac{2\pi m_i}{\beta h^2}}\right]^{d/2}`$ (24)
$`-{\displaystyle \frac{\beta }{N_{mol}}}{\displaystyle \int _0^1}𝑑\lambda \left\langle \mathrm{\Delta }U\right\rangle _\lambda ^{CM}`$
$`-{\displaystyle \frac{d}{2N_{mol}}}\mathrm{ln}\left({\displaystyle \frac{\alpha \beta }{2\pi \sum _i\mu _i^2}}\right)-{\displaystyle \frac{\mathrm{ln}(V/N_{mol})}{N_{mol}}}`$
If we consider the special case of a system of single-atom, identical particles ($`m_i=m`$ and $`N=N_{mol}`$), we obtain the following:
$`{\displaystyle \frac{\beta F}{N}}`$ $`=`$ $`-{\displaystyle \frac{d}{2}}\mathrm{ln}\left[{\displaystyle \frac{4\pi ^2m}{\alpha \beta ^2h^2}}\right]-{\displaystyle \frac{\beta }{N}}{\displaystyle \int _0^1}𝑑\lambda \left\langle \mathrm{\Delta }U\right\rangle _\lambda ^{CM}`$ (26)
$`-{\displaystyle \frac{d}{2N}}\mathrm{ln}(\alpha \beta /2\pi )-{\displaystyle \frac{d}{2}}{\displaystyle \frac{\mathrm{ln}N}{N}}+{\displaystyle \frac{\mathrm{ln}\rho }{N}}\text{ ,}`$
where $`\rho \equiv N/V`$. The difference between the present result and the one obtained in ref. is in the fourth term on the right-hand side: $`-d\,\mathrm{ln}N/2N`$. The original article implicitly gave the value +$`\mathrm{ln}N/2N`$ for a 3D crystal. While the difference between the two expressions tends to zero in the limit of large $`N`$, it is non-negligible for system sizes typically employed in the numerical calculations. However, the calculated free energy differences between two solids, such as that between the FCC and HCP hard-sphere crystals, to which the method was applied both in the original article and, more recently, in Ref. , are unaffected by this correction.
In practice, we usually need not calculate the absolute free energy of a crystal, but the excess free energy, $`F_{ex}\equiv F-F_{id}`$, where $`F_{id}`$ is the ideal gas free energy. Let us therefore compute the finite-size corrections to the latter quantity: Given that $`\beta F_{id}/N=-\mathrm{ln}[V^N(2\pi m/\beta h^2)^{dN/2}/N!]/N`$, we find
$`{\displaystyle \frac{\beta F_{ex}}{N}}`$ $`=`$ $`-{\displaystyle \frac{d}{2}}\mathrm{ln}\left[{\displaystyle \frac{2\pi }{\alpha \beta }}\right]-{\displaystyle \frac{\beta }{N}}{\displaystyle \int _0^1}𝑑\lambda \left\langle \mathrm{\Delta }U\right\rangle _\lambda ^{CM}`$ (29)
$`-{\displaystyle \frac{d}{2N}}\mathrm{ln}(\alpha \beta /2\pi )+{\displaystyle \frac{\mathrm{ln}\rho }{N}}`$
$`-{\displaystyle \frac{d+1}{2}}{\displaystyle \frac{\mathrm{ln}N}{N}}-\mathrm{ln}\rho +1-{\displaystyle \frac{\mathrm{ln}2\pi }{2N}},`$
where we have used $`\mathrm{ln}N!\approx N\mathrm{ln}N-N+(\mathrm{ln}2\pi N)/2`$.
Hoover has analyzed the system-size dependence of the entropy of a classical harmonic crystal with periodic boundaries. In this study, it was established that the leading finite-size correction to the free energy per particle of a harmonic crystal is equal to $`\beta ^{-1}\mathrm{ln}N/N`$. If the harmonic approximation is valid, then this implies that the integral in Eqn. (21) should vary as $`+\mathrm{ln}N/N`$ plus higher order correction terms of the order of $`N^{-1}`$, $`N^{-2}`$, etc. Consequently, an inspection of Eqn. (29) suggests that $`\beta F_{ex}/N+(d-1)\mathrm{ln}N/(2N)`$ will scale as $`N^{-1}`$, if we neglect terms of order $`𝒪(1/N^2)`$.
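The extrapolation this suggests is straightforward. The sketch below (our own illustration) applies it to synthetic data of the expected form, with made-up coefficients `f_inf` and `c1`, and recovers the input intercept; in an actual application the array `f_N` would hold the measured values of $`\beta F_{ex}/N`$.

```python
import numpy as np

d = 3
N = np.array([216.0, 810.0, 1728.0, 5832.0, 12096.0])

# Synthetic "measurements" with the predicted N-dependence (placeholder
# coefficients, chosen only to demonstrate the fitting procedure):
f_inf, c1 = 1.973, -2.4
f_N = f_inf + c1 / N - (d - 1) * np.log(N) / (2 * N)

# Remove the leading ln(N)/N correction, then fit linearly in 1/N:
y = f_N + (d - 1) * np.log(N) / (2 * N)
slope, intercept = np.polyfit(1.0 / N, y, 1)
print(f"extrapolated beta*F_ex/N = {intercept:.5f}")  # recovers f_inf
```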
To test this prediction we have used the Einstein-crystal method to calculate the absolute Helmholtz free energy of a (three-dimensional) FCC crystal of soft spheres interacting with the pair potential $`u(r)=ϵ(\sigma /r)^{12}`$ for systems of size $`N`$=216, 810, 1728, 5832 and 12096. The sizes were chosen such that the simulation box shape is cubic for each system. Further, the simulations were carried out at $`k_BT/ϵ`$=1.0, $`\rho \sigma ^3`$=1.1964, and employed a coupling constant of $`\alpha \sigma ^2/ϵ`$=66.0. The results are shown in Figure 1 and are clearly consistent with the predictions. The solid line is a linear fit which extrapolates to $`\beta F_{ex}/N`$=1.97298(7) at $`N=\infty `$ (where the figure between brackets is an estimate of the error in the last digit). Incidentally, we note that, at this density and temperature, the FCC phase of soft spheres is more stable than the HCP phase (by an amount $`\mathrm{\Delta }F_{FCC-HCP}/(Nk_BT)=-0.0028(8)`$). The present results suggest that we are able to correctly account for the leading ($`\mathrm{ln}N/N`$) dependence of the free energy of an arbitrary crystal. In the analytical calculation of the free energy of a harmonic crystal, it is always assumed that the center of mass of the crystal is fixed. Hence, the numerical results presented above do not provide an independent test of the validity of our expression for the contribution to the free energy due to the center-of-mass motion of the crystal.
We can perform a similar analysis for a system of hard spheres ($`\rho \sigma ^3`$=1.0409, $`\lambda _{max}=\alpha /2`$=3000). The results are shown in Figure 2. For hard spheres, $`\beta F_{ex}/N`$ extrapolates to a value of $`5.91889(4)`$ at $`N=\infty `$, well within the error margin of the original results of Hoover and Ree ($`5.924(15)`$). Note that the slopes of the fits (which are proportional to the $`1/N`$ behavior of the finite size effect) are similar, although not exactly equal. It should be stressed that none of these calculations take into account the existence of defects in the crystal, which, at these levels of precision, is significant. In fact, using the early numerical results by Bennett and Alder we can estimate the equilibrium vacancy concentration in a hard-sphere crystal at coexistence to be $`2.6\times 10^{-4}`$. Such a vacancy concentration has a noticeable effect on the location of the melting point. For instance, the Gibbs free energy per particle at coexistence is lowered by an amount $`\mathrm{\Delta }\mu \approx 3\times 10^{-3}kT`$ . This correction is far from negligible, as it is some two orders of magnitude larger than the present numerical accuracy in the absolute free energy. It is likely that vacancies also lower the equilibrium free energy of the soft-sphere crystal. However, for that model, the equilibrium vacancy concentration has, to our knowledge, not been computed.
We would like to thank Stella Consta, Benito Groh, Jonathan Doye and Bela Mulder for stimulating and very useful discussions relating to the material discussed in this work. This work is part of the research program of the “Stichting Fundamenteel Onderzoek der Materie” (FOM) and is supported by NWO (‘Nederlandse Organisatie voor Wetenschappelijk Onderzoek’). JP acknowledges the financial support provided by the Computational Materials Science program of NWO, and by the Natural Sciences and Engineering Research Council of Canada.
|
no-problem/9909/cond-mat9909012.html
|
ar5iv
|
text
|
# Forest-fire models as a bridge between different paradigms in Self-Organized Criticality
\[
## Abstract
We turn the stochastic critical forest-fire model introduced by Drössel and Schwabl (PRL 69, 1629, 1992) into a deterministic threshold model. This new model has many features in common with sandpile and earthquake models of Self-Organized Criticality. Nevertheless our deterministic forest-fire model exhibits in detail the same macroscopic statistical properties as the original Drossel and Schwabl model. We use the deterministic model and a related semi-deterministic version of the model to elaborate on the relation between forest-fire, sandpile and earthquake models.
\]
Several types of models of self-organised criticality (SOC) exist. The original cellular automaton models were defined by a deterministic and conservative updating algorithm, with thresholds (barriers to activity), and stochastic driving . A new variation was developed by Olami, Feder and Christensen (OFC), who realised that a non-conservative threshold model might remain critical if driven uniformly. The OFC model is completely deterministic except for a random initial configuration. In both types of model the threshold is assumed to play a crucial role as a local rigidity which allows for a separation of time scales and, equally important, produces a large number of metastable states. The dynamics take the system from one of these metastable states to another. It is believed that separation of time scales and metastability are essential for the existence of scale invariance in these models.
A seemingly very different type of model was developed by Drössel and Schwabl (DS). No threshold appears explicitly in this model and the separation of time scales is put in by hand by tuning the rates of two stochastic processes which act as driving forces for the model. The DS forest-fire (FF) is defined on a $`d`$-dimensional square lattice. Empty sites are turned into “trees” with a probability $`p`$ per site in every time step. A tree can catch fire stochastically when hit by “lightning”, with probability $`f`$ each time step, or deterministically when a neighbouring site is on fire. The model is found to be critical in the limit $`p\to 0`$ together with $`f/p\to 0`$. This model is a generalization of a model first suggested by Bak, Chen and Tang which is identical to the DS model except that it does not contain the stochastic ignition by lightning. The BCT system is not critical (in less than three dimensions, see ). A continuous variable, uniformly driven deterministic version also shows regular behavior for low values of $`p`$. A useful review can be found in .
In the present letter we present a transformation of the forest-fire model into a completely deterministic system. This model is an extension of the recently introduced auto-ignition forest-fire, a simple variation on the DS model. As in that model, we find that all macroscopic statistical measures of the system are preserved. Specifically, we show that the three models have the same exponent for the probability density describing clusters of trees, similar probability densities of tree ages and, perhaps most unexpectedly, almost the same power spectrum for the number of trees on the lattice as a function of time. It is surprising that the temporal fluctuation spectrum can be the same in the deterministic model as in the DS forest fire, since even a small stochastic element in an updating algorithm is known to be capable of altering the power spectrum in a significant way .
Definition of model – The SOC FF can be recast into an auto-ignition model. This model is identical to the DS model, except that the spontaneous ignition probability $`f`$ is replaced by an auto-ignition mechanism by which trees ignite automatically when their age $`T`$ after inception reaches a value $`T_{max}`$. Choosing this value suitably with respect to $`p`$ gives a system with exactly the same behaviour and statistical properties as the DS model. Thus one stochastic driving process has been removed and a threshold introduced, while maintaining the SOC state; this model also displays explicitly the relationship between threshold dynamics and the separation of time scales so necessary for the SOC state.
The auto-ignition model can be turned into a completely deterministic critical model by eliminating the stochastic growth mechanism. In the deterministic model (which we shall call the regen FF) each cell is given an integer parameter $`T`$ which increases by one each time step. If $`T>0`$, the cell is said to be occupied, otherwise it is empty (or regenerating). The initial configuration is a random distribution of $`T`$-values and fires. Fires spread through nearest neighbours and the auto-ignition mechanism is again operative so that a tree catches fire when its $`T=T_{max}`$. However, in this model when a tree catches fire the result is a decrement of $`T_{regen}`$ from its $`T`$-value. Note that when $`T_{regen}<T_{max}`$, a cell may still be occupied after it has been ignited. The parameters $`T_{max}`$ and $`T_{regen}`$ can be thought of as having a qualitatively reciprocal relationship with $`f`$ and $`p`$ respectively (in terms of the average ‘waiting time’ for spontaneous ignition and tree regrowth), though this is less straightforward in the latter case because trees are not always burned down by fire. It is evident that $`T_{regen}`$ also sets, and allows direct control of, the degree of dissipation of the $`T`$-parameter in the system.
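To make the update rule concrete, here is a minimal sketch of one synchronous time step of the regen model on a two-dimensional periodic lattice. It is our own illustration, not the authors' code, and the relative order of growth, fire spreading and auto-ignition within a step is an assumption.

```python
import numpy as np

def regen_step(T, fire, T_max, T_regen):
    """One synchronous update of the deterministic regen forest-fire model.

    T    : integer array; a cell is an occupied "tree" where T > 0.
    fire : boolean array marking the cells that caught fire last step.
    """
    # Fire spreads through nearest neighbours (periodic boundaries), and
    # occupied cells auto-ignite when their counter reaches the threshold.
    neighbour_fire = np.zeros_like(fire)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        neighbour_fire |= np.roll(fire, shift, axis=axis)
    new_fire = (T > 0) & (neighbour_fire | (T >= T_max))
    # Uniform deterministic drive plus dissipation on ignited cells.
    T = T + 1 - T_regen * new_fire
    return T, new_fire
```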
Results – We now turn to a comparison between the statistical properties of the stochastic DS FF and the entirely deterministic regen model, with reference to the partly deterministic auto-ignition model.
First we consider the probability density $`p(s)`$ of the tree cluster sizes, simulated for different parameters in the different models. It is well known that the correlation length in the DS model (as measured by the cut-off $`s_c`$ in $`p(s)`$) increases as the critical point is approached by decreasing $`p`$, $`f`$ and $`f/p`$ . There is a corresponding increase in the power law regime for the cluster distribution in the auto-ignition model as $`p`$ is decreased and $`T_{max}`$ is increased. The scaling behaviour of the cut-off $`s_c`$ is difficult to ascertain due to the limited range of data available, but seems to be of the form $`\mathrm{ln}(s_c)\sim pT_{max}`$, although we cannot exclude an algebraic dependence of the form $`s_c\sim (pT_{max})^a`$, with $`a\approx 6`$. Fig. 1 shows scaling plots for the regen model, and we see that here too the cut-off $`s_c`$ scales with the increasing ratio $`t=T_{max}/T_{regen}`$. We have approximately $`\mathrm{ln}(s_c)\sim T_{max}`$, though again the relation may be algebraic. The conclusion is that all three models approach a critical state described by the same power law $`p(s)\sim s^{-\tau }`$ with $`\tau \approx 2`$.
One expects the power law observed in the cluster size distribution to be reflected in power laws for spatial correlation functions. It is particularly interesting to study the age-age correlation function:
$$C(r)=\left\langle T(𝐫+𝐫_0)T(𝐫_0)\right\rangle -\left\langle T(𝐫_\mathrm{𝟎})\right\rangle ^2$$
(1)
This correlation function was never studied for the DS FF because the model does not consider the age $`T(𝐫)`$ explicitly. In Fig. 2 we show the behavior of the age-age correlation function in the regen and DS models.
As usual it is difficult to obtain a substantial power law region because of finite size limitations. Nevertheless it is clear that $`C(r)`$ does exhibit power law dependence on $`r`$, and we find $`C(r)\sim r^{-\eta }`$ with $`\eta \approx 0.32`$, $`0.21`$ and $`0.23`$ for the regen, auto-ignition and DS models respectively. Interestingly, the same correlation function for empty sites (which have negative $`T`$ in the regen model) is also a power law, with $`\eta \approx 0.13`$.
Let us now turn to the temporal characteristics of the models. In Fig. 3 we show that the probability distribution of the ages of the trees has a very similar form for all three models.
All are broad and exponential in character. Since it is a microscopic property, it is not surprising that there is some variation between the models. This variation may also be the reason for the different exponents in the age-age correlation functions mentioned above.
It is remarkable that the DS FF exhibits a cut-off in the age distribution which is nearly as sharp as the cut-off in the two threshold models. This shows that the stochastic ignition process in the DS model, characterized by the lightning probability $`f`$, can be replaced in surprising detail by the deterministic age threshold.
The collective temporal behaviour is represented by the power spectrum of the time variation of the total number of trees on the lattice. In Fig. 4 these power spectra are shown for the DS and regen models (again, the power spectrum for the auto-ignition model is nearly identical).
Our most surprising result is that the deterministic regeneration model has nearly the same power spectrum as the two other models, particularly in the light of the differences in the age profiles above.
The equivalence between the three models allows us to think of the probabilistic growth and lightning in the DS FF model as effectively acting as thresholds. Qualitatively one can readily see that the probabilistic nature of the growth and the lightning can be interpreted as a kind of rigidity. Namely, an empty site has a rigidity against being turned into a tree described by $`1/p`$. A tree has rigidity against fire, in the sense that a tree only catches fire if it is nearest neighbour to a fire or when it is hit by lightning.
Discussion – We now discuss the relationship between the regen model presented above and other SOC models.
Our regen model is similar to the deterministic model introduced by Chen, Bak and Jensen. The crucial difference however, is that in the previous model the ratio $`T_{regen}/T_{max}`$ \- which must be decreased to move closer to the critical point and obtain scale free behavior - is effectively held fixed at a finite value, and hence the model does not allow one to approach the critical state.
The regen model has several features in common with the sandpile and earthquake models. It is similar to both sets of models in that the intrinsic dynamics is entirely deterministic and controlled by thresholds. The model is uniformly driven like the OFC earthquake model , and moreover, our deterministic FF model is genuinely non-conservative. It is worth noting that distributing the increase in $`T`$ randomly in a limited number of portions (rather than equally across all trees) each time step was found to destroy the criticality as the size of the portions increased. In one important respect our model is more similar to the BTW sandpile model than to the OFC model. Namely, when a site suffers relaxation (a tree catches fire) a fixed amount $`T_{regen}`$ is subtracted from the dynamical variable of that site. The same happens in the BTW model. In the OFC model, on the other hand, the dynamical variable of a relaxing site is reset to zero. This property has been argued to allow for a marginal synchronisation in the model and hence to be responsible for the OFC model’s ability, in contrast to the BTW model, to remain critical even in the non-conservative regime. Seen in this context the deterministic FF model presented here constitutes a very interesting mix of features from the BTW and OFC models. Our regen FF model is non-conservative, uniformly driven and though the microscopic update does not support a marginal synchronisation, nevertheless the model does exhibit the same scale free behavior as the DS FF.
This gives a direct link between the SOC behaviour of the BTW, OFC and DS FF models, each of which are commonly assumed to be representative of different and distinct classes of SOC models (e.g. in ). Furthermore, the change in the mechanism for the renewal of the forest (from a probability for growth to a time for regeneration) and the resultant sandpile-like picture allows the identification of $`p`$ with a dissipation parameter (in terms of the subtraction of $`T_{regen}`$ on ignition) rather than as a driving parameter. This is quite contrary to the normally held and most obvious view - for the DS FF - that $`p`$ is the driving parameter (creating trees in the system), and that if anything $`f`$ controls the dissipation (the complete combustion of trees into empty sites). If this is so, we can speculate that it may be possible to relate the physical limits for critical behaviour in the BTW sandpile:
$`h,h/ϵ\to 0`$
(where $`h`$ is the driving rate and $`ϵ`$ the dissipation) and, recalling the qualitatively reciprocal relationships between $`f,p,T_{max}`$ and $`T_{regen}`$ noted earlier, the DS and regen forest-fire models:
$`f,f/p\to 0`$, and $`1/T_{max},T_{regen}/T_{max}\to 0`$
The main difference between the deterministic FF model and the sandpile and earthquake models is that the dynamical variable $`T`$ is not transported to neighboring sites when a site relaxes and that the threshold exists only for the initiation and not propagation of avalanches. This difference can be summarized as the FF model being a model of two coupled fields, fires and trees, whereas the sandpile and earthquake models contain one self-coupled field, the energy of a site.
Another difference consists in that the thresholds of the deterministic FF model must be tuned (to infinity) for the model to approach the critical regime. The reason for this is that the thresholds relate directly to the rate of driving in the model. The sandpile and earthquake models are different in that the SOC limit of slow driving can be reached without a tuning of the thresholds.
Finally, we note that the regen model is critical with periodic boundary conditions (in contrast to the BTW and OFC models) and without external driving (unlike the DS model), and is therefore the only system which can be said to be completely self-contained.
Conclusion – We have demonstrated that the stochastic Drossel-Schwabl forest-fire model can be turned into a deterministic threshold model without changing any of the collective statistical measures of the system in a significant way. The model illuminates greatly the relationship between different types of SOC models.
Acknowledgements – HJJ is supported by EPSRC under contract GR/L12325. PSR is the recipient of an EPSRC PhD studentship. We would like to thank Barbara Drössel and Kim Christensen for helpful discussion and insight.
|
no-problem/9909/astro-ph9909264.html
|
ar5iv
|
text
|
# Untitled Document
THE ASTEROSEISMOLOGY METACOMPUTER
T.S. Metcalfe and R.E. Nather
Department of Astronomy, University of Texas, Austin, TX 78701 U.S.A.
Received September 1, 1999.
Abstract. We have developed a specialized computational instrument for fitting models of pulsating white dwarfs to observations made with the Whole Earth Telescope. This metacomputer makes use of inexpensive PC hardware and free software, including a parallel genetic algorithm which performs a global search for the best-fit set of parameters.
Key words: instrumentation: miscellaneous – methods: numerical – stars: white dwarfs
1. INTRODUCTION
White dwarf asteroseismology offers the opportunity to probe the structure and composition of stellar objects governed by relatively simple principles. The observational requirements of asteroseismology have been addressed by the development of the Whole Earth Telescope (WET), but the analytical procedures need to be refined to take full advantage of the possibilities afforded by the WET data.
The adjustable parameters in our computer models of white dwarfs presently include the total mass, the temperature, the H and He layer masses, the core composition, and the transition zone thicknesses. Finding a proper set of these to provide a close fit to the observed data is difficult. The current procedure is a cut-and-try process guided by intuition and experience, and is far more subjective than we would like. Objective procedures for determining the best-fit model are essential if asteroseismology is to become a widely-accepted and reliable astronomical technique. We must be able to demonstrate that, within the range of different values the model parameters can assume, we have found the only solution, or the best one if more than one is possible. To address this problem, we are applying a search-and-fit technique employing a genetic algorithm (GA), which can explore the myriad parameter combinations possible and select for us the best one, or ones (cf. Goldberg 1989, Charbonneau 1995, Metcalfe & Nather 1999).
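To illustrate the kind of objective search involved, here is a toy real-coded genetic algorithm (our own sketch, not the PIKAIA code used below): each individual is a vector of model parameters, `misfit` is a hypothetical stand-in for the comparison between a computed pulsation spectrum and the observations, and the selection, crossover and mutation schemes are arbitrary illustrative choices.

```python
import numpy as np

def ga_minimize(misfit, bounds, pop=64, gens=200, rng=None):
    """Evolve a population toward the parameter vector minimizing misfit."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([misfit(x) for x in X])     # evaluations are independent
        parents = X[np.argsort(f)[: pop // 2]]   # keep the fitter half
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.uniform(size=parents.shape)
        children = alpha * parents + (1 - alpha) * mates               # crossover
        children += rng.normal(0.0, 0.02 * (hi - lo), children.shape)  # mutation
        X = np.clip(np.vstack([parents, children]), lo, hi)
    f = np.array([misfit(x) for x in X])
    return X[np.argmin(f)]
```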
Although genetic algorithms are more efficient than other comparably global techniques, they are still quite demanding computationally. To be practical, the GA-based fitting technique requires a dedicated instrument to perform the calculations. Over the past year, we have designed and configured such an instrument—an isolated network of 64 minimal PCs running Linux. Since the structure of a GA is very conducive to parallelization, this metacomputer allows us to run our code much faster than would otherwise be possible.
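The parallelization exploits the fact that the fitness evaluations within a generation are independent: a master process scatters parameter sets to the nodes and gathers back scalar fitnesses. A minimal modern stand-in for that master-worker pattern (illustrative only; the actual system described below used PVM):

```python
from multiprocessing import Pool

def evaluate_population(misfit, population, n_workers=64):
    """Score every parameter set in parallel, one task per individual."""
    # misfit must be a picklable, top-level function for multiprocessing.
    with Pool(n_workers) as pool:
        return pool.map(misfit, population)
```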
2. HARDWARE
In January 1998, around the time that the idea of commodity parallel processing started getting a lot of attention, we were independently designing a metacomputer of our own. Our budget was modest, so we set out to get the best performance possible per dollar without restricting the ability of the machine to solve our specific problem.
The original Beowulf cluster (Becker et al. 1995), which we didn’t know about at the time, had a number of features which, though they contributed to the utility of the machine as a multi-purpose computational tool, were unnecessary for our particular problem. We wanted to use each node of the metacomputer to run identical tasks with small, independent sets of data. The results of the calculations performed by the nodes consisted of just a few numbers which only needed to be communicated to the master process, never to another node. Essentially, network bandwidth was not an issue because the computation to communication ratio of our application was extremely high, and hard disks were not needed on the nodes because our problem did not require any significant amount of data swapping. In the end we settled on a design including one master server augmented by minimal nodes connected by a simple 10base-2 network (see Figure 1).
Fig. 1. The 64 minimal nodes of the metacomputer on shelves surrounding the master computer.
The master computer is a Pentium-II 333 MHz system with three NE-2000 compatible network cards, each of which drives 1/3 of the nodes on a subnet. Since a single ethernet card can handle up to 30 devices, no repeater was necessary.
The slave nodes were assembled from components obtained at a discount computer outlet. Each node consists of an ATX tower case with a motherboard, processor and fan, a single 32 MB SDRAM, and an NE-2000 compatible network card with a custom made boot-EPROM. The nodes are connected in series with 3-ft ethernet coaxial cables. Half of the nodes contain Pentium-II 300 MHz processors, while the other half are AMD K6-II 450 MHz chips. The total cost of the system was around $25k, but it could be built for considerably less today, and less still tomorrow.
3. SOFTWARE
To make the metacomputer work, we relied on the open-source Linux operating system and software tools. We programmed the EPROMs with Gero Kuhlmann’s NETBOOT package to allow each node to download and mount an independent Linux filesystem on a small ramdisk partition. We used Tom Fawcett’s YARD package to create the filesystem, and we included in it a pared down version of the PVM software developed at Oak Ridge National Laboratory (Geist et al. 1994).
Finally, we incorporated the message passing routines of the PVM library into PIKAIA, a general purpose public-domain GA developed by Charbonneau (1995), and we modified the white dwarf evolution and pulsation codes (see Wood 1990, Bradley 1993, Montgomery 1998) to allow reliable and automated calculation of the normal modes of oscillation for white dwarf stars with a wide range of masses, temperatures, and other parameters.
4. BENCHMARKS
Measuring the absolute performance of the metacomputer is difficult because the result strongly depends on the fraction of Floating-point Division operations (FDIVs) used in the benchmark code. Table 1 lists four different measures of the absolute speed in Millions of FLoating-point Operations Per Second (MFLOPS).
Table 1. The absolute speed of the metacomputer (all entries in MFLOPS).

| Benchmark | P-II 300 MHz | K6-II 450 MHz | Total Speed |
| --- | --- | --- | --- |
| MFLOPS(1) | 80.6 | 65.1 | 4662.4 |
| MFLOPS(2) | 47.9 | 67.7 | 3699.2 |
| MFLOPS(3) | 56.8 | 106.9 | 7056.0 |
| MFLOPS(4) | 65.5 | 158.9 | 7180.8 |
The code for MFLOPS(1) is essentially scalar, so it reflects scalar rather than vector performance, which lies far below what fully vectorizable code can achieve. Also, its percentage of floating-point division operations (FDIVs), 9.6%, is considered somewhat high. The code for MFLOPS(2) is fully vectorizable, but the percentage of FDIVs (9.2%) is still somewhat on the high side. The code for MFLOPS(3) is also fully vectorizable, with a moderate percentage of FDIVs (3.4%). The code for MFLOPS(4) is fully vectorizable, and the percentage of FDIVs is zero.
We feel that MFLOPS(3) provides the best measure of the expected performance for the white dwarf code, because of the moderate percentage of FDIVs. Adopting this value, we have achieved a price to performance ratio near $3.50/MFLOPS.
ACKNOWLEDGEMENTS. We would like to thank Gary Hansen for donating the 32 K6-II 450 MHz processors through an arrangement with AMD. This work was made possible by a grant from the National Science Foundation.
REFERENCES
Becker, D., Sterling, T., Savarese, D., Dorband, J., Ranawak, U., and Packer, C. 1995, Beowulf: A Parallel Workstation for Scientific Computation, in Proceedings of the International Conference on Parallel Processing (New York: Institute of Electrical and Electronics Engineers).
Bradley, P. 1993, Ph.D. Thesis, University of Texas at Austin.
Charbonneau, P. 1995, ApJS, 101, 309.
Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R., and Sunderam, V. 1994, PVM: Parallel Virtual Machine, A Users’ Guide and Tutorial for Networked Parallel Computing, (Cambridge: MIT Press).
Goldberg, D. 1989, Genetic Algorithms in Search, Optimization, and Machine Learning, (Reading, MA: Addison Wesley).
Metcalfe, T.S. and Nather, R.E. 1999, Linux Journal, 65, 58.
Montgomery, M. 1998, Ph.D. Thesis, University of Texas at Austin.
Wood, M. 1990, Ph.D. Thesis, University of Texas at Austin.
# FLIERs as stagnation knots from partially collimated outflows
## 1. Introduction and model description
FLIERs in planetary nebulae were originally identified with the structures previously known as ansae in elliptical planetary nebulae (Balick et al. 1993), though their peculiar characteristics are now recognized in a much wider variety of PNe (e.g. Corradi et al. 1996; Guerrero et al. 1999), and their interpretation has resisted a consistent explanation to date (Balick et al. 1998). FLIERs are characterised by outflow radial velocities of the order of 30-50 km/s, ionization gradients decreasing outwards from the nebular core, and ‘head-tail’ morphologies, notably with the tails pointing outwards from the nucleus. More recent observations reveal very high radial velocities for other FLIERs (Corradi et al. 1999, Gonçalves et al. these proceedings, Redman et al. 1999). Mainly two types of models have been explored for the formation of FLIERs (see Balick et al. 1998 and references therein), namely, ionization fronts (IF) on localized dense knots or bow-shocks of fast knots ramming through the shell of the PN. Recently, Redman & Dyson (1999) and Dyson & Redman (these proceedings) discuss a model in which FLIERs represent recombination fronts (RF) in mass-loaded jets.
In this work we present a simple hydrodynamic model for the formation of symmetric axial knots with supersonic velocities from a low-density bipolar outflow with reduced momentum flux along its axis. The knots are formed from cooling shocked ambient gas in the stagnation region of the combined outflow. The evolution of a dense knot propagating through a thin ambient medium has been studied by Jones, Kang & Tregillis (1994) using hydrodynamic simulations and by Soker & Regev (1998) from analytic arguments in the specific context of FLIERs. However, those works do not discuss the formation process for the knots. Previously, we introduced the idea of a “stagnation knot” to model the large scale structure of the giant envelope of the PN KjPn8 (Steffen & López, 1998). The reduced momentum flux around the axis causes the bow-shock to become concave instead of convex in this region. If the bow-shock is non-radiative, the ambient medium passing through the oblique region of the shock is then “refracted” towards the axis, instead of away from it, as happens in conventional bow-shocks. The accumulated material in the stagnation region may then be held together for sufficient time to cool and be compressed into a dense knot. If the shock is radiative, the compressed post-shock material is later crushed into a single knot or multiple knots on and around the axis. As long as the central outflow continues and drives the expanding shock, the stagnation knot moves at roughly the same speed as the rest of the bow-shock and remains at the bright rim formed by shocked ambient gas. However, when the outflow ceases the envelope slows down rather quickly, whereas the dense knot continues to move outwards at its original speed. As the amount of swept-up mass from the ambient medium increases, the knot’s motion is progressively slowed down.
## 2. Simulations
In order to investigate the dynamical properties of the knots formed in the stagnation region of a partially collimated low-density outflow, we present two cases (A & B) of 2D-hydrodynamical simulations in axisymmetry using the Corali-code (Raga et al. 1995) with a 5-level binary adaptive grid and a maximum of $`513\times 257`$ grid cells, with physical sizes of $`5\times 10^{17}`$ cm by $`2.5\times 10^{17}`$ cm for run A and half of these measures for run B. The cooling was calculated according to the description in Steffen et al. (1997) and references therein. The outflow was initialized on a sphere with a radius of $`2.5\times 10^{16}`$ cm ($`2.5\times 10^{16}`$ cm), a velocity of 2000 km/s (800 km/s) and a half opening angle $`\theta _0=15^{\circ }`$ ($`15^{\circ }`$) (the angle of highest momentum flux) for run A (B). The initial number density of the outflow is $`30\mathrm{cm}^{-3}`$ ($`40\mathrm{cm}^{-3}`$), constant on the sphere, whereas that of the ambient medium was assumed to be $`120\mathrm{cm}^{-3}`$. The outflow is switched off after $`1.5\times 10^{10}`$ seconds (473 years) for both runs. The momentum flux was modulated as a function of the angle $`\theta `$ from the axis, arbitrarily using $`v(\theta )=v_0(\theta _0/\theta )^2`$ for $`\theta >\theta _0`$ and $`v(\theta )=v_0/(1-0.25(\theta -\theta _0)/\theta )`$ for $`\theta \le \theta _0`$.
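For reference, the adopted angular modulation law is easy to tabulate; the short Python sketch below (our illustration, independent of the Corali setup) evaluates $`v(\theta )`$ for the run A parameters.

```python
# Angular modulation of the outflow speed quoted above (run A parameters).
import numpy as np

v0 = 2000.0                   # km/s, initial speed for run A
theta0 = np.radians(15.0)     # half opening angle of highest momentum flux

def v(theta):
    theta = np.asarray(theta, dtype=float)
    inside = v0 / (1.0 - 0.25 * (theta - theta0) / theta)   # theta <= theta0
    outside = v0 * (theta0 / theta) ** 2                    # theta >  theta0
    return np.where(theta <= theta0, inside, outside)

print(np.round(v(np.radians([5.0, 10.0, 15.0, 30.0, 60.0, 90.0])), 1))
```

Note that the law is continuous at $`\theta =\theta _0`$, where both branches give $`v_0`$, and that the momentum flux is reduced toward the axis, as required.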
## 3. Results
Key ingredients often found in PNe with FLIERs are observed to form in the models. The outflow creates a low density bipolar cavity with an outer dense and bright rim (see Figure 1) of shocked halo gas which propagates at a few tens of kilometers per second. The rim develops instabilities which produce high density knots propagating at velocities similar to the rim. These knots might be identified as non-axial FLIERs similar to those observed in NGC 7662 (see also Dwarkadas & Balick 1998). In run A (Figure 1a to 1d), after forming a sort of dense “polar cap” at $`t\approx 300`$ years the stagnation region begins to be compressed to a knot, which we associate with the FLIER. The expansion speed of the rim, the stagnation knot and the instability knots ranges between 60 and 150 km/s at this time. After the stellar wind ceases, the expansion speed of the rim and the instability knots rapidly drops to around 50 km/s. The stagnation knot, however, continues its linear motion at a speed of around 150 km/s which drops more slowly. This model leads to representative parameters consistent with observations of PNe with FLIERs.
In the more extreme case of run B (Figure 1e and 1f) the stagnation knot reaches 250 km/s and produces a long feature of fast material far away from the main nebula. The region through which the stagnation knot has propagated shows an outward increase in velocity (Fig. 1d,f). As the dense knot continues, the smaller pieces spread out along the path, developing a kinematic pattern of roughly linear increase of speed with distance, as observed in MyCn 18 (Bryce et al. 1997, Redman et al., these proceedings) and other recent cases (Corradi et al. 1999, and Gonçalves et al., these proceedings).
Within the framework of this model, it would be interesting to search for those young elliptical PNe that show signs of polar caps as likely candidates to develop into compact FLIERs. Full details of this work will appear shortly elsewhere.
## References
Balick, B., et al. 1993, ApJ, 411, 778
Balick, B., et al. 1998, ApJ, 116, 360
Bryce et al. 1997, ApJ, 487, L161
Dwarkadas, V.V., & Balick, B. 1998, ApJ, 497, 267
Corradi, R.L.M., et al. 1996, A&A, 313, 913
Corradi, R.L.M., et al. 1999, ApJ, in press
Guerrero, M., Vázquez, R. & López, J.A. 1999, AJ, 117, 967
Jones, T.W., Kang, H., & Tregillis, I.L. 1994, ApJ, 432, 194
Raga, A.C., et al. 1995, MNRAS, 296, 833
Redman, M.P., Dyson, J.E. 1999, MNRAS, 302, L17
Soker, N., Regev, O. 1998, AJ, 116, 2462
Steffen, W., et al. 1997, MNRAS, 286, 1032
Steffen, W., López, J.A. 1998, ApJ, 508, 696
# Application of the lattice Green’s function for calculating the resistance of infinite networks of resistors
## I Introduction
It is an old question to find the resistance between two adjacent grid points of an infinite square lattice in which all the edges represent identical resistances $`R`$. It is well known that the result is $`R/2`$, and an elegant and elementary solution of the problem is given by Aitchison . The electric-circuit theory is discussed in detail in a classic text by van der Pol and Bremmer and they derive the resistance between nearby points on the square lattice. In Doyle’s and Snell’s book the connection between random walks and electric networks is presented, including many interesting results and useful references. Recently Venezian and Atkinson et. al. also studied the problem and in these papers the reader can find additional references. Venezian’s method for finding the resistance between two arbitrary grid points of an infinite square lattice is based on the principle of the superposition of current distributions. This method was further utilized by Atkinson et. al. and applied to two dimensional infinite triangular and hexagonal lattices as well as infinite cubic and hypercubic lattices. In this paper we present an alternative approach using lattice Green’s functions. This Green’s function method may have several advantages: (i) It can be used straightforwardly for more complicated lattice structures such as body and face centered cubic lattices. (ii) The results derived by this method reflect the symmetry of the lattice structures although they may not be suitable for numerical purposes. However, from these results one may derive other integral representations of the resistance between two nodes, which can be used for evaluating the integrals either algebraically or numerically with high precision. Later in this paper we shall give examples for this procedure. (iii) From the equation for the Green’s function one can, in principle, derive some so-called recurrence formulas for the resistances between arbitrary grid points of an infinite lattice. In this paper such recurrence formulas are derived for an infinite square lattice (for the first time to the best of the author’s knowledge). (iv) In condensed matter physics the application of the lattice Green’s function has become a very efficient tool. The analytical behavior of the lattice Green’s function has been extensively studied over the past three decades for several lattice structures. We make use of the knowledge of this analytical behavior for an infinite cubic lattice and give the asymptotic value of the resistance as the separation of the two nodes tends to infinity. Throughout this paper we shall utilize the results known in the literature about the lattice Green’s function and also give some important references. (v) Finally, our approach for networks of resistors may serve as a didactically good example for introducing the Green’s function method as well as many basic concepts such as the Brillouin zone (BZ) used in solid state physics. We therefore feel that our Green’s function method is of some physical interest.
The application of the Green’s function proved to be a very effective method for studying the transport in inhomogeneous conductors and it has been used successfully by Kirkpatrick for percolating networks of resistors. Our approach for obtaining the resistance of a lattice of resistors is closely related to that used by Kirkpatrick. Economou’s book gives an excellent introduction to the Green’s function. A review of the lattice Green’s function is given by Katsura et. al. The lattice Green’s function is also utilized in the theory of the Kosterlitz–Thouless–Berezinskii phase transition of the screening of topological defects (vortices). A review of the latter problem is given by Chaikin et. al. in their book . The phase transition in classical two-dimensional lattice Coulomb gases has been studied by Lee et. al. also by using the lattice Green’s function in their Monte Carlo simulations. The above examples of the applications of the lattice Green’s function are just a selection of many problems known in the literature. A review of the Green’s function of the so-called tight-binding Hamiltonian (TBH) used for describing the electronic band structures of crystal lattices is presented in Economou’s book . The lattice Green’s function defined in this paper is related to the Green’s function of the TBH. Below we shall point out that the resistance in a given lattice of resistors is related to the Green’s function of the TBH at the energy at which the density of states is singular. This singularity is one of the van Hove singularities of the density of states .
## II Hypercubic Lattice
Consider a $`d`$ dimensional lattice which consists of all lattice points specified by position vectors $`𝐫`$ given in the form
$$𝐫=l_1𝐚_1+l_2𝐚_2+\mathrm{}+l_d𝐚_d,$$
(1)
where $`𝐚_1,𝐚_2,\mathrm{},𝐚_d`$ are independent primitive translation vectors, and $`l_1,l_2,\mathrm{},l_d`$ range through all integer values (i.e. positive and negative integers, as well as zero). In the case of a $`d`$ dimensional hypercube all the primitive translation vectors have the same magnitude, say $`a`$, $`i.e.,`$ $`|𝐚_1|=|𝐚_2|=\mathrm{}=|𝐚_d|=a`$. Here $`a`$ is the lattice constant of the $`d`$ dimensional hypercube.
In the network of resistors we assume that the resistance of all the edges of the hypercube is the same, say $`R`$. We wish to find the resistance between the origin and a given lattice point $`𝐫_0`$ of the infinite hypercube. We denote the current that can enter at lattice point $`𝐫`$ by $`I(𝐫)`$. However, for measuring the resistance between two sites the current has to be zero at all other sites. Similarly, the potential at site $`𝐫`$ will be denoted by $`V(𝐫)`$. Then, at site $`𝐫`$, according to Ohm’s and Kirchhoff’s laws, we may write
$$I(𝐫)R=\sum _𝐧\left[V(𝐫)-V(𝐫+𝐧)\right],$$
(2)
where the $`𝐧`$ are the vectors from site $`𝐫`$ to its nearest neighbors ($`𝐧=\pm 𝐚_i,i=1,\mathrm{},d`$). The right hand side of Eq. (2) may be expressed by the so-called lattice Laplacian defined on the hypercubic lattice:
$$\mathrm{\Delta }_{(𝐫)}f(𝐫)=\sum _𝐧\left[f(𝐫+𝐧)-f(𝐫)\right].$$
(3)
The above defined lattice Laplacian corresponds to the finite-difference representation of the Laplace operator. The lattice Laplacian $`(1/a^2)\mathrm{\Delta }_{(𝐫)}f(𝐫)`$ yields the correct form of the Laplacian in the continuum limit, i.e. $`a\to 0`$. The lattice Laplacian is widely used to solve partial differential equations with the finite-difference method.
To measure the resistance between the origin and an arbitrary lattice point $`𝐫_0`$ we assume that a current $`I`$ enters at the origin and exits at lattice point $`𝐫_0`$. Therefore, the current is zero at all the lattice points except for $`𝐫=0`$ and $`𝐫_0`$, where it is $`I`$ and $`I`$, respectively. Thus, Eq. (2), with the lattice Laplacian, can be rewritten as
$$\mathrm{\Delta }_{(𝐫)}V(𝐫)=-I(𝐫)R,$$
(4)
where current at lattice point $`𝐫`$ is
$$I(𝐫)=I\left(\delta _{𝐫,0}-\delta _{𝐫,𝐫_0}\right).$$
(5)
The resistance between the origin and $`𝐫_0`$ is
$$R(𝐫_0)=\frac{V(\mathrm{𝟎})-V(𝐫_0)}{I}.$$
(6)
To find the resistance we need to solve Eq. (4). This is a Poisson-like equation and may be solved by using the lattice Green’s function:
$$V(𝐫)=R\sum _{𝐫^{\prime }}G(𝐫-𝐫^{\prime })I(𝐫^{\prime }),$$
(7)
where the lattice Green’s function is defined by
$$\mathrm{\Delta }_{(𝐫^{\prime })}G(𝐫-𝐫^{\prime })=-\delta _{𝐫,𝐫^{\prime }}.$$
(8)
Finally, the resistance between the origin and $`𝐫_0`$ can be expressed by the lattice Green’s function. Using Eq. (5) and (6) we obtain
$$R(𝐫_0)=2R[G(\mathrm{𝟎})-G(𝐫_0)],$$
(9)
where we have made use of the fact that the lattice Green’s function is even, $`i.e.`$ $`G(-𝐫)=G(𝐫)`$. Equation (9) is our central result for the resistance.
To find the lattice Green’s function defined by Eq. (8) we take periodic boundary conditions at the edges of the hypercube. Consider a hypercube with $`L`$ lattice points along each side. Thus the total number of sites in the $`d`$ dimensional hypercube is $`L^d`$. Substituting the Fourier transform
$$G(𝐫)=\frac{1}{L^d}\sum _{𝐤\in \mathrm{BZ}}G(𝐤)e^{i\mathrm{𝐤𝐫}}$$
(10)
of the lattice Green’s function into Eq. (8), we find
$$G(𝐤)=\frac{1}{\epsilon (𝐤)},$$
(11)
for the $`d`$ dimensional hypercube where we have defined
$$\epsilon (𝐤)=2\sum _{i=1}^d\left(1-\mathrm{cos}\mathrm{𝐤𝐚}_i\right).$$
(12)
Owing to the periodic boundary conditions, the wave vector $`𝐤`$ in Eq. (11) is limited to the first Brillouin zone and is given by
$$𝐤=\frac{m_1}{L}𝐛_1+\frac{m_2}{L}𝐛_2+\dots +\frac{m_d}{L}𝐛_d,$$
(13)
where $`m_1,m_2,\dots ,m_d`$ are integers such that $`-L/2\le m_i\le L/2`$ for $`i=1,2,\dots ,d`$, and $`𝐛_j`$ are the reciprocal lattice vectors defined by $`𝐚_i𝐛_j=2\pi \delta _{ij}`$, $`i,j=1,2,\dots ,d`$. Here we assumed that $`L`$ is an even integer, which will be irrelevant in the limit $`L\to \infty `$. The mathematical description of the crystal lattice and the concept of the Brillouin zone can be found in many books on solid state physics.
Finally, the lattice Green’s function takes the form
$$G(𝐫)=\frac{1}{L^d}\sum _{𝐤\in \mathrm{BZ}}\frac{e^{i\mathrm{𝐤𝐫}}}{\epsilon (𝐤)}.$$
(14)
If we take the limit $`L\mathrm{}`$ then the discrete summation over $`𝐤`$ can be substituted by an integral :
$$\frac{1}{L^d}\sum _{𝐤\in \mathrm{BZ}}\to v_0\int _{𝐤\in \mathrm{BZ}}\frac{d^d𝐤}{(2\pi )^d},$$
(15)
where $`v_0=a^d`$ is the volume of the unit cell of the $`d`$ dimensional hypercube. Thus the lattice Green’s function is
$$G(𝐫)=v_0\int _{𝐤\in \mathrm{BZ}}\frac{d^d𝐤}{(2\pi )^d}\frac{e^{i\mathrm{𝐤𝐫}}}{\epsilon (𝐤)}.$$
(16)
Using Eqs. (9) and (16) in $`d`$ dimensions the resistance between the origin and lattice point $`𝐫_0`$ is
$$R(𝐫_0)=2Rv_0\int _{𝐤\in \mathrm{BZ}}\frac{d^d𝐤}{(2\pi )^d}\frac{1-e^{i\mathrm{𝐤𝐫}_0}}{\epsilon (𝐤)}.$$
(17)
The above result can be simplified if we specify the lattice point as $`𝐫_0=l_1𝐚_1+l_2𝐚_2+\dots +l_d𝐚_d`$:
$$R(l_1,l_2,\dots ,l_d)=R\int _{-\pi }^\pi \frac{dx_1}{2\pi }\dots \int _{-\pi }^\pi \frac{dx_d}{2\pi }\frac{1-e^{i(l_1x_1+\dots +l_dx_d)}}{\sum _{i=1}^d(1-\mathrm{cos}x_i)}.$$
(18)
From this final expression of the resistance one can see that the resistance does not depend on the angles between the unit vectors $`𝐚_1,𝐚_2,\mathrm{},𝐚_d`$. Physically this means that the hypercube can be deformed without the change of the resistance between any two lattice points. The resistance in topologically equivalent lattices is the same. For further references we also give the lattice Green’s function for a $`d`$ dimensional hypercube:
$$G(l_1,l_2,\dots ,l_d)=\int _{-\pi }^\pi \frac{dx_1}{2\pi }\dots \int _{-\pi }^\pi \frac{dx_d}{2\pi }\frac{e^{i(l_1x_1+\dots +l_dx_d)}}{2\sum _{i=1}^d(1-\mathrm{cos}x_i)}.$$
(19)
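As a quick numerical illustration (ours, not from the original), Eq. (18) for $`d=2`$ can be checked by brute-force quadrature over the Brillouin zone; the exact values $`R(1,0)=R/2`$ and $`R(1,1)=2R/\pi `$ derived later in this section serve as tests. Better-behaved single-integral forms follow in Sec. II C.

```python
# Direct midpoint-rule evaluation of Eq. (18) for d = 2 (square lattice).
import numpy as np

def resistance_square(l1, l2, n=400):
    # Midpoint grid on (-pi, pi); it never hits the k = 0 point exactly.
    x = (np.arange(n) + 0.5) * 2.0 * np.pi / n - np.pi
    x1, x2 = np.meshgrid(x, x)
    num = 1.0 - np.cos(l1 * x1 + l2 * x2)
    den = (1.0 - np.cos(x1)) + (1.0 - np.cos(x2))
    return np.mean(num / den)          # resistance in units of R

print(resistance_square(1, 0))         # ~0.5
print(resistance_square(1, 1))         # ~0.6366 = 2/pi
```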
### A Conducting medium, continuum limit
The resistance in an infinite conducting medium can be obtained by taking the limit as the lattice constant $`a`$ tends to zero in Eq. (17). Denoting the electrical conductivity of the medium by $`\sigma `$, the resistance of a $`d`$ dimensional hypercube with lattice constant $`a`$, according to Ohm’s law, is given by
$$R=\frac{1}{\sigma a^{d-2}}.$$
(20)
Using the approximation $`\epsilon (𝐤)\approx 𝐤^2a^2`$ for $`|𝐚_i|=a\to 0`$, Eq. (17) can easily be reduced to
$$R(𝐫_0)=\frac{2}{\sigma }\int _{𝐤\in \mathrm{BZ}}\frac{d^d𝐤}{(2\pi )^d}\frac{1-e^{i\mathrm{𝐤𝐫}_0}}{𝐤^2}.$$
(21)
The same result for the resistance of a conducting medium is given in Chaikin’s book.
### B Linear chain, $`d=1`$
Consider a linear chain of identical resistors $`R`$. The resistance between the origin and site $`l`$ can be obtained by taking $`d=1`$ in the general result given in Eq. (18):
$$R(l)=R\int _{-\pi }^\pi \frac{dx}{2\pi }\frac{1-e^{ilx}}{1-\mathrm{cos}x}.$$
(22)
The integral can be evaluated by the method of residues and gives the following very simple result:
$$R(l)=Rl.$$
(23)
This can be interpreted as the resistance of $`l`$ resistances $`R`$ in series. The current flows only between the two sites separated by a finite distance. The two semi-infinite segments of the chain do not affect the resistance.
### C Square lattice, $`d=2`$
Using Eq. (18), the resistance in two dimensions between the origin and $`𝐫_0=l_1𝐚_1+l_2𝐚_2`$ is
$`R(l_1,l_2)=R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \frac{1-e^{il_1x_1+il_2x_2}}{2-\mathrm{cos}x_1-\mathrm{cos}x_2}}`$ (24)
$`=R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \frac{1-\mathrm{cos}\left(l_1x_1+l_2x_2\right)}{2-\mathrm{cos}x_1-\mathrm{cos}x_2}}.`$ (25)
The resistance between two adjacent lattice sites can easily be obtained from the above expression without evaluating the integrals in Eq. (24) or (25). Note that interchanging $`l_1`$ and $`l_2`$ in Eq. (25) does not change the resistance, $`i.e.,`$ $`R(l_1,l_2)=R(l_2,l_1)`$. This is consistent with the symmetry of the lattice. Then, from Eq. (25), the sum of $`R(0,1)`$ and $`R(1,0)`$ yields
$$R(0,1)+R(1,0)=R\int _{-\pi }^\pi \frac{dx_1}{2\pi }\int _{-\pi }^\pi \frac{dx_2}{2\pi }=R.$$
(26)
Thus $`R(0,1)=R(1,0)=R/2`$. The resistance between two adjacent lattice sites is $`R/2`$, which is a well known result .
In general the integrals in Eq. (24) have to be evaluated numerically. It is shown in Appendix A how one integral can be performed in Eq. (24). We found the same result as Venezian and Atkinson:
$$R(l_1,l_2)=R\int _0^\pi \frac{dy}{\pi }\frac{1-e^{-\left|l_1\right|s}\mathrm{cos}l_2y}{\mathrm{sinh}s},$$
(27)
where
$$\mathrm{cosh}s=2-\mathrm{cos}y.$$
(28)
It turns out that this expression is more stable numerically than Eq. (25). As an application of the above formula one can calculate the resistance between second nearest neighbors exactly. After some algebra one finds
$$R(1,1)=R\int _0^\pi \frac{dy}{\pi }\frac{(1-\mathrm{cos}y)^2}{\sqrt{(2-\mathrm{cos}y)^2-1}}=\frac{2}{\pi }R.$$
(29)
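The one-dimensional integral of Eq. (27) is well suited to standard quadrature; a minimal sketch (ours, assuming SciPy is available):

```python
# Numerical evaluation of Eq. (27), with cosh s = 2 - cos y.
import numpy as np
from scipy.integrate import quad

def R_square(l1, l2):
    def integrand(y):
        s = np.arccosh(2.0 - np.cos(y))
        return (1.0 - np.exp(-abs(l1) * s) * np.cos(l2 * y)) / np.sinh(s)
    return quad(integrand, 0.0, np.pi)[0] / np.pi   # units of R

print(R_square(1, 0))   # 0.5
print(R_square(1, 1))   # 0.63662... = 2/pi
print(R_square(2, 1))   # 0.77324... = 4/pi - 1/2
```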
The energy dependent lattice Green’s function of the tight-binding Hamiltonian for a square lattice is given by
$$G(E;l_1,l_2)=\int _{-\pi }^\pi \frac{dx_1}{2\pi }\int _{-\pi }^\pi \frac{dx_2}{2\pi }\frac{\mathrm{cos}\left(l_1x_1+l_2x_2\right)}{E-\mathrm{cos}x_1-\mathrm{cos}x_2}$$
(30)
\[see Eq. (5.31) in Economou’s book\]. This is a generalization of our Green’s function by introducing a new variable $`E`$ instead of the value 2 in the denominator in Eq. (19) for $`d=2`$. Note that a factor 2 appearing in the denominator of our Green’s function in Eq. (19) is missing in Eq. (30). This is related to the fact that in the Schrödinger equation the Laplacian is multiplied by a factor of $`1/2`$ while in our case the Laplace equation is solved. Comparing Eqs. (19) (for $`d=2`$) and (30) one can see that the resistance is
$$R(l_1,l_2)=R\left[G(2;0,0)-G(2;l_1,l_2)\right].$$
(31)
Based on the equation for the Green’s function Morita derived the recurrence formulas for the Green’s function $`G(E;l_1,l_2)`$ for an infinite square lattice (see Eqs. (3.8) and (4.2)-(4.4) in Morita’s paper). Applying Morita’s results (with $`E=2`$) to the resistance given in Eq. (31) we obtained the following recurrence formulas for the resistance:
$`R(m+1,m+1)`$ $`=`$ $`{\displaystyle \frac{4m}{2m+1}}R(m,m)-{\displaystyle \frac{2m-1}{2m+1}}R(m-1,m-1),`$ (32)
$`R(m+1,m)`$ $`=`$ $`2R(m,m)-R(m,m-1),`$ (33)
$`R(m+1,0)`$ $`=`$ $`4R(m,0)-R(m-1,0)-2R(m,1),`$ (34)
$`R(m+1,p)`$ $`=`$ $`4R(m,p)-R(m-1,p)-R(m,p+1)-R(m,p-1)\text{ if }0<p<m.`$ (35)
We have seen that $`R(1,0)=R/2`$ and $`R(1,1)=2R/\pi .`$ Since we know the exact values of $`R(1,0)`$ and $`R(1,1)`$ (obviously $`R(0,0)=0`$) one can calculate the resistance exactly for arbitrary nodes by using the above given recurrence formulas. This way we obtained the same results as Atkinson et al. found using Mathematica. The advantage of our recurrence relations is that they provide a new, very simple and effective tool to calculate the resistance between arbitrary nodes on a square lattice. We note that van der Pol and Bremmer also gave the exact values of the resistance for nearby points in a square lattice using a different approach.
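Since only the seeds $`R(0,0)=0`$, $`R(1,0)=R/2`$ and $`R(1,1)=2R/\pi `$ are needed, a table of resistances follows in a few lines; a sketch (ours):

```python
# Filling a table of square-lattice resistances from the recurrences
# (32)-(35); values are in units of R, and R(m,p) = R(p,m) by symmetry.
from math import pi

N = 8
R = {(0, 0): 0.0, (1, 0): 0.5, (1, 1): 2.0 / pi}
for m in range(1, N):
    R[m + 1, m + 1] = (4 * m * R[m, m] - (2 * m - 1) * R[m - 1, m - 1]) / (2 * m + 1)
    R[m + 1, m] = 2 * R[m, m] - R[m, m - 1]
    R[m + 1, 0] = 4 * R[m, 0] - R[m - 1, 0] - 2 * R[m, 1]
    for p in range(1, m):
        R[m + 1, p] = 4 * R[m, p] - R[m - 1, p] - R[m, p + 1] - R[m, p - 1]

print(R[2, 0], R[2, 1], R[2, 2])   # 2 - 4/pi, 4/pi - 1/2, 8/(3 pi)
```

With exact arithmetic (e.g. tracking the rational parts and the rational multiples of $`1/\pi `$ separately) the same loop reproduces the exact values, since every $`R(m,p)`$ on the square lattice is of the form $`a+b/\pi `$ with rational $`a`$ and $`b`$.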
It is interesting to see the asymptotic form of the resistance for large values of $`l_1`$ or/and $`l_2`$. In Appendix B we derive the asymptotic form of the lattice Green’s function for a square lattice \[see Eq. (B11)\]. Inserting Eq. (B11) into the general result of the resistance given in Eq. (9) the asymptotic form of the resistance is
$$R(l_1,l_2)=\frac{R}{\pi }\left(\mathrm{ln}\sqrt{l_1^2+l_2^2}+\gamma +\frac{\mathrm{ln}8}{2}\right),$$
(36)
where $`\gamma =0.5772\mathrm{}`$ is the Euler-Mascheroni constant. The same result was obtained by Venezian except that we got an exact value of the numerical constant in Eq. (36) whereas it was numerically approximated in Venezian’s paper. The resistance is logarithmically divergent for large values of $`l_1`$ and $`l_2`$. A similar behavior was found for conducting medium by Chaikin et. al. in their book. In the theory of the Kosterlitz–Thouless–Berezinskii phase transition of the screening of topological defects (vortices) the same asymptotic form of the Green’s function (as given in Eq. (B11)) has been used for a square lattice . Finally, we note that Doyle and Snell showed in their book (pp. 122–123) that the resistance goes to infinity as the separation between nodes tends to infinity but the asymptotic form was not derived.
### D Simple cubic lattice, $`d=3`$
In three dimensions the resistance between the origin and a lattice point $`𝐫_0=l_1𝐚_1+l_2𝐚_2+l_3𝐚_3`$ can be obtained from Eq. (18):
$`R(l_1,l_2,l_3)`$ $`=`$ $`R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_3}{2\pi }}{\displaystyle \frac{1-e^{il_1x_1+il_2x_2+il_3x_3}}{3-\mathrm{cos}x_1-\mathrm{cos}x_2-\mathrm{cos}x_3}}`$ (37)
$`=`$ $`R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_3}{2\pi }}{\displaystyle \frac{1-\mathrm{cos}\left(l_1x_1+l_2x_2+l_3x_3\right)}{3-\mathrm{cos}x_1-\mathrm{cos}x_2-\mathrm{cos}x_3}}.`$ (38)
Similarly to the case of a square lattice, the exact value of the resistance between two adjacent lattice sites can be obtained from the above expression. Clearly, from Eq. (38) (and for symmetry reasons, too) $`R(1,0,0)=R(0,1,0)=R(0,0,1)`$ and
$$R(1,0,0)+R(0,1,0)+R(0,0,1)=R\int _{-\pi }^\pi \frac{dx_1}{2\pi }\int _{-\pi }^\pi \frac{dx_2}{2\pi }\int _{-\pi }^\pi \frac{dx_3}{2\pi }=R.$$
(39)
Therefore, the resistance between adjacent sites is $`R/3`$. It is easy to show much in the same way that in a $`d`$ dimensional hypercube the resistance between adjacent sites is $`R/d`$.
Unlike in a square lattice, in a simple cubic lattice the resistance does not diverge as the separation of the entering and exiting sites increases, but tends to a finite value. One can write for the resistance $`R(l_1,l_2,l_3)=2\left[G(0,0,0)-G(l_1,l_2,l_3)\right]`$ (in units of $`R`$) where $`G(l_1,l_2,l_3)`$ is given in Eq. (19) for $`d=3.`$ It is well known from the theory of Fourier series (Riemann’s lemma) that $`\mathrm{lim}_{p\to \infty }\int _a^b𝑑x\phi (x)\mathrm{cos}px=0`$ for any integrable function $`\phi (x).`$ Hence, $`G(l_1,l_2,l_3)\to 0`$ (which indeed corresponds to the boundary condition of the Green’s function at infinity) and thus, $`R(l_1,l_2,l_3)\to 2G(0,0,0)`$ when any of the $`l_1,l_2,l_3\to \infty .`$ The value of $`G(0,0,0)`$ was evaluated for the first time by Watson and subsequently by Joyce in a closed form in terms of elliptic integrals. The following exact result was found
$$2G(0,0,0)=\left(\frac{2}{\pi }\right)^2\left(18+12\sqrt{2}-10\sqrt{3}-7\sqrt{6}\right)\left[𝐊(k_0)\right]^2=0.505462\dots ,$$
(40)
where $`k_0=(2-\sqrt{3})(\sqrt{3}-\sqrt{2})`$ and
$$𝐊(k)=\int _0^{\pi /2}𝑑\theta \frac{1}{\sqrt{1-k^2\mathrm{sin}^2\theta }}$$
(41)
is the complete elliptic integral of the first kind. It is worth mentioning that a simpler result was obtained by Glasser et. al. (see also Doyle’s and Snell’s book) who calculated the integrals in terms of gamma functions:
$$2G(0,0,0)=\frac{\sqrt{3}-1}{96\pi ^3}\mathrm{\Gamma }^2(1/24)\mathrm{\Gamma }^2(11/24).$$
(42)
Thus, the resistance in units of $`R`$ for a simple cubic lattice tends to the finite value 0.505462$`\dots `$ when the separation between the entering and exiting sites tends to infinity.
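As a quick check of Eq. (42) (our sketch):

```python
# Evaluating the Glasser form of the Watson integral, Eq. (42).
from math import gamma, pi, sqrt

g = (sqrt(3.0) - 1.0) / (96.0 * pi ** 3) * gamma(1.0 / 24.0) ** 2 * gamma(11.0 / 24.0) ** 2
print(g)   # 0.505462..., the limiting resistance in units of R
```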
For finite separations the formula for the resistance given in Eq. (38) is not suitable for numerical purposes since the integrals converge slowly with increasing the number of mesh points. Similarly to the case of a square lattice, we can use the energy dependent lattice Green’s function of the tight-binding Hamiltonian defined for a simple cubic lattice as
$$G(E;l_1,l_2,l_3)=\int _{-\pi }^\pi \frac{dx_1}{2\pi }\int _{-\pi }^\pi \frac{dx_2}{2\pi }\int _{-\pi }^\pi \frac{dx_3}{2\pi }\frac{\mathrm{cos}\left(l_1x_1+l_2x_2+l_3x_3\right)}{E-\mathrm{cos}x_1-\mathrm{cos}x_2-\mathrm{cos}x_3}$$
(43)
\[see Eq. (5.49) in Economou’s book \]. This is a generalization of our Green’s function by introducing a new variable $`E`$ instead of the value 3 in the denominator in Eq. (19) for $`d=3`$. Note that the reason for the missing factor of 2 in Eq. (43) is the same as that explained in the previous section. Comparison of Eqs. (19) (for $`d=3`$) and (43) yields
$$R(l_1,l_2,l_3)=R\left[G(3;0,0,0)-G(3;l_1,l_2,l_3)\right].$$
(44)
The analytic behavior of the lattice Green’s function $`G(E;l_1,l_2,l_3)`$ has been extensively studied over the past three decades. Numerical values of $`G(E;l_1,l_2,l_3)`$ were given by Koster et al. and Wolfram et al. by using the following representation of the Green’s function for a simple cubic lattice:
$$G(E\ge 3;l,m,n)=\int _0^\infty 𝑑te^{-Et}I_l(t)I_m(t)I_n(t),$$
(45)
where $`I_l`$ is the modified Bessel function of order $`l`$. However, it turns out that another representation of the lattice Green’s functions is more suitable for numerical calculations. For $`3\le E`$ the following form of the Green’s function along an axis was given in terms of complete elliptic integrals of the first kind by Horiguchi
$$G(E;l,0,0)=\frac{1}{\pi ^2}\int _0^\pi 𝑑x𝐊(k)k\mathrm{cos}lx,$$
(46)
where
$$k=\frac{2}{E-\mathrm{cos}x}$$
(47)
and $`𝐊(k)`$ is defined in Eq. (41). In Fig. 1 we plotted the numerical values of the resistance using Eq. (46) for $`E=3`$ and $`1l100`$.
It is seen from the figure that the resistance tends rapidly to its asymptotic value given in Eq. (40). We would mention that Glasser et. al. gave other useful integral representations of the lattice Green’s function for the hypercubic lattice for arbitrary dimension $`d`$.
It was shown by Joyce that the function $`G(E;0,0,0)`$ can be expressed in the form of a product of two complete elliptic integrals of the first kind. Horiguchi and later on Morita obtained recurrence relations for the function $`G(E;l_1,l_2,l_3)`$ of the simple cubic lattice for an arbitrary site $`l_1,l_2,l_3`$ in terms only of $`G(E;0,0,0)`$, $`G(E;2,0,0)`$ and $`G(E;3,0,0)`$. The last two Green’s functions were expressed by Horiguchi and Morita in closed form in terms of complete elliptic integrals. From these results one can calculate the Green’s function for an arbitrary lattice point. As an example, one of the recurrence formulas we obtained from the recurrence formula of the Green’s function (at $`E=3`$) given by Horiguchi is
$$R(m,1,0)=-\frac{1}{4}R(m-1,0,0)+\frac{3}{2}R(m,0,0)-\frac{1}{4}R(m+1,0,0).$$
(48)
Together with other recurrence formulas given in Horiguchi’s and Morita’s paper one can find $`R(l_1,l_2,l_3)`$ for arbitrary values of $`l_1,l_2,l_3`$. The numerical values of the resistance for small $`l_1,l_2,l_3`$ are listed in Table I.
Our results are in agreement with those given by Atkinson.
It is worth mentioning some useful references for the lattice Green’s function of other three dimensional lattices, such as body centered and face centered. The exact values of $`G(1;0,0,0)`$ as well as other exact results can be found in Joyce’s paper for a body centered cubic lattice. Morita gave the recurrence relations of the lattice Green’s function for a body centered cubic lattice. Inoue derived the recurrence relations for a face centered cubic lattice and obtained the exact values of the lattice Green’s function at some lattice points. Based on the results given in these papers one can calculate the resistance between arbitrary nodes for cubic lattices.
Finally we would like to point out that in Eq. (44) the resistance is related to the energy dependent Green’s function at energy $`E=3`$. However, it is known that the density of states at this energy corresponds to one type of the van Hove singularities . Therefore, the resistance for a simple cubic lattice is related to the Green’s function of the tight-binding Hamiltonian at the value of the energy at which the density of states has a van Hove type singularity. In the general case of a $`d`$ dimensional hypercube the Green’s function of the tight-binding Hamiltonian is infinite at $`E=d`$ for $`d=2`$ (logarithmically divergent) while it has a finite value for $`d=3`$ as it was shown above. In the latter case the derivative of the Green’s function with respect to $`E`$ is singular.
## III Rectangular lattice
In this section we shall calculate the resistance of a rectangular lattice in which the resistance of each edge is proportional to its length. Consider a rectangular lattice with unit vectors $`𝐚_1`$ and $`𝐚_2`$ and introduce parameter $`p=\left|𝐚_1\right|/\left|𝐚_2\right|.`$ Let $`R`$ be the resistance of the edge along the direction of $`𝐚_2`$. To find the resistance between the origin and site $`𝐫_0`$ assume a current $`I`$ enters at the origin and exits at site $`𝐫_0`$. We denote the potential at lattice point $`𝐫`$ by $`V(𝐫)`$. Then, according to Ohm’s and Kirchhoff’s laws, we may write
$$\mathrm{\Delta }_\mathrm{r}^{\mathrm{rect}}V(𝐫)=-I\left(\delta _{𝐫,0}-\delta _{𝐫,𝐫_0}\right)R,$$
(49)
where the ‘rectangular’ Laplacian is defined by
$$\mathrm{\Delta }_\mathrm{r}^{\mathrm{rect}}V(𝐫)=\frac{V(𝐫+𝐚_1)-V(𝐫)}{p}+\frac{V(𝐫-𝐚_1)-V(𝐫)}{p}+V(𝐫+𝐚_2)-V(𝐫)+V(𝐫-𝐚_2)-V(𝐫).$$
(50)
Then the equation for the lattice Green’s function corresponding to the ‘rectangular’ Laplace equation is
$$\mathrm{\Delta }_{\mathrm{r}^{\prime }}^{\mathrm{rect}}G(𝐫-𝐫^{\prime })=-\delta _{𝐫,𝐫^{\prime }}.$$
(51)
The Green’s function can be calculated in a way similar to the hypercubic case in Sec. II. We have
$$G(𝐫)=v_0\int _{𝐤\in \mathrm{BZ}}\frac{d^d𝐤}{(2\pi )^d}\frac{e^{i\mathrm{𝐤𝐫}}}{\epsilon (𝐤)},$$
(52)
where $`v_0=pa^2`$ is the area of the unit cell, and the Brillouin zone is a rectangle with sides $`2\pi /\left|𝐚_1\right|`$ and $`2\pi /\left|𝐚_2\right|`$ along the directions of $`𝐚_1`$ and $`𝐚_2`$, respectively, and
$$\epsilon (𝐤)=2\left[\frac{1}{p}\left(1-\mathrm{cos}\mathrm{𝐤𝐚}_1\right)+1-\mathrm{cos}\mathrm{𝐤𝐚}_2\right].$$
(53)
Using Eq. (9), the resistance between the origin and lattice point $`𝐫_0=l_1𝐚_1+l_2𝐚_2`$ for a given $`p`$ is
$`R(p;l_1,l_2)`$ $`=`$ $`R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \frac{1-e^{il_1x_1+il_2x_2}}{\frac{1}{p}\left(1-\mathrm{cos}x_1\right)+1-\mathrm{cos}x_2}}`$ (54)
$`=`$ $`R{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_1}{2\pi }}{\displaystyle \int _{-\pi }^\pi }{\displaystyle \frac{dx_2}{2\pi }}{\displaystyle \frac{1-\mathrm{cos}\left(l_1x_1+l_2x_2\right)}{\frac{1}{p}\left(1-\mathrm{cos}x_1\right)+1-\mathrm{cos}x_2}}.`$ (55)
The integral over $`x_2`$ can be evaluated similarly to a square lattice (see Appendix A). We find
$$R(p;l_1,l_2)=R\int _0^\pi \frac{dx}{\pi }\frac{1-e^{-\left|l_2\right|s}\mathrm{cos}l_1x}{\mathrm{sinh}s},$$
(56)
where
$$\mathrm{cosh}s=1+\frac{1}{p}-\frac{1}{p}\mathrm{cos}x.$$
(57)
Note that $`l_1`$ and $`l_2`$ are interchanged here compared to Eq. (27).
Now it is interesting to see how the resistance between adjacent sites varies with $`p=\left|𝐚_1\right|/\left|𝐚_2\right|.`$ The resistance between nearest neighbors is different along the $`x`$ and the $`y`$ axis. Thus, unlike in the case of a square lattice, $`R(p;1,0)\ne R(p;0,1)`$ for a rectangular lattice except for the trivial case $`p=1`$ (square lattice). No sum rule exists for $`R(p;1,0)+R(p;0,1)`$ as in the case of a square lattice. In Fig. 2 the resistances $`R(p;1,0)`$ and $`R(p;0,1)`$ are plotted as functions of $`p`$.
From Fig. 2 one can see that the resistance $`R(p;1,0)`$ increases with increasing $`p`$. It can be shown that $`R(p;1,0)\to \infty `$ as $`p\to \infty `$. This is physically clear since the lattice constant along the $`x`$ axis increases, resulting in an increasing resistance of each of the segments in this direction. On the other hand, along the $`y`$ axis a saturation of $`R(p;0,1)`$ can be seen, which is not obvious at all. Expanding the integral in the expression of $`R(p;0,1)`$ in inverse powers of $`p`$, for large $`p`$ we get $`R(p;0,1)\approx R(1-(2/\pi )p^{-1/2}+O(p^{-3/2}))`$. Thus, $`R(p;0,1)\to R`$ when $`p\to \infty `$. Some additional results are presented for the energy dependent lattice Green’s function for a rectangular lattice in the papers of Morita et al. and Katsura et al.
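The curves of Fig. 2 follow directly from Eqs. (56) and (57); a sketch (ours):

```python
# Nearest-neighbour resistances of the rectangular lattice from Eq. (56),
# with cosh s = 1 + (1 - cos x)/p.
import numpy as np
from scipy.integrate import quad

def R_rect(p, l1, l2):
    def integrand(x):
        s = np.arccosh(1.0 + (1.0 - np.cos(x)) / p)
        return (1.0 - np.exp(-abs(l2) * s) * np.cos(l1 * x)) / np.sinh(s)
    return quad(integrand, 0.0, np.pi)[0] / np.pi   # units of R

for p in (0.25, 1.0, 4.0, 100.0):
    print(p, R_rect(p, 1, 0), R_rect(p, 0, 1))
# R(p;1,0) grows without bound, while R(p;0,1) saturates toward R;
# at p = 1 both reduce to the square-lattice value 1/2.
```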
## IV Triangular lattice
In this section we calculate the resistance in a triangular lattice in which the resistance of each edge is identical, say $`R`$. First we consider a triangular lattice with unit vectors $`𝐚_1`$ and $`𝐚_2`$, and with a lattice constant $`a=\left|𝐚_1\right|=\left|𝐚_2\right|`$. We choose $`𝐚_1`$ and $`𝐚_2`$ such that the angle between them is $`120^{\circ }`$. We introduce a third vector by $`𝐚_3=-(𝐚_1+𝐚_2)`$. The vectors drawn from each lattice point to its 6 nearest neighbors are $`\pm 𝐚_1,\pm 𝐚_2,\pm 𝐚_3`$. Assume that a current $`I`$ enters at the origin and exits at site $`𝐫_0`$, and that the potential at site $`𝐫`$ is $`V(𝐫)`$. Again, from Ohm’s and Kirchhoff’s laws, we find
$$\mathrm{\Delta }_\mathrm{r}^{\mathrm{triang}}V(𝐫)=-I\left(\delta _{𝐫,0}-\delta _{𝐫,𝐫_0}\right)R,$$
(58)
where the ‘triangular’ Laplacian is defined by
$$\mathrm{\Delta }_\mathrm{r}^{\mathrm{triang}}V(𝐫)=\sum _{i=1}^3\left[V(𝐫-𝐚_i)-2V(𝐫)+V(𝐫+𝐚_i)\right].$$
(59)
It is important to note that the triangular Laplacian $`\mathrm{\Delta }_\mathrm{r}^{\mathrm{triang}}`$ defined above is $`2/(3a^2)`$ times that used for solving the Laplace equation with the finite-difference method on a triangular lattice. The factor $`2/(3a^2)`$ ensures that the lattice Laplacian in the finite-difference method yields the correct form of the Laplacian in the continuum limit ($`a\to 0`$). However, in our case this limit does not exist since there are also connections (resistors) between adjacent lattice points along the direction of the vector $`𝐚_3`$.
The equation for the lattice Green’s function corresponding to the ‘triangular’ Laplace equation is given by
$$\mathrm{\Delta }_{\mathrm{r}^{\prime }}^{\mathrm{triang}}G(𝐫-𝐫^{\prime })=-\delta _{𝐫,𝐫^{\prime }}.$$
(60)
The Green’s function can be calculated in the same way as in the hypercubic case. We have
$$G(𝐫)=v_0\int _{𝐤\in \mathrm{BZ}}\frac{d^2𝐤}{(2\pi )^2}\frac{e^{i\mathrm{𝐤𝐫}}}{2\sum _{i=1}^3\left(1-\mathrm{cos}\mathrm{𝐤𝐚}_i\right)},$$
(61)
where $`v_0=(\sqrt{3}/2)a^2`$ is the area of the unit cell, and the vector $`𝐤`$ given in Eq. (13) runs over the Brillouin zone of the triangular lattice, which is a regular hexagon.
The resistance between the origin and site $`𝐫_0`$ can be obtained from Eq. (9) with the lattice Green’s function for a triangular lattice and it yields
$$R(𝐫_0)=Rv_0\int _{𝐤\in \mathrm{BZ}}\frac{d^2𝐤}{(2\pi )^2}\frac{1-e^{i\mathrm{𝐤𝐫}_0}}{\sum _{i=1}^3\left(1-\mathrm{cos}\mathrm{𝐤𝐚}_i\right)}=Rv_0\int _{𝐤\in \mathrm{BZ}}\frac{d^2𝐤}{(2\pi )^2}\frac{1-\mathrm{cos}\mathrm{𝐤𝐫}_0}{\sum _{i=1}^3\left(1-\mathrm{cos}\mathrm{𝐤𝐚}_i\right)}.$$
(62)
Note that the factor $`2`$ dropped out from the denominator.
Using Eq. (62) we can easily find the resistance between two adjacent lattice sites. For symmetry reasons it is clear that $`R(𝐚_1)=R(𝐚_2)=R(𝐚_3)`$. On the other hand
$$\sum _{i=1}^3R(𝐚_i)=Rv_0\int _{𝐤\in \mathrm{BZ}}\frac{d^2𝐤}{(2\pi )^2}=R.$$
(63)
In the last step we have made use of the fact that the volume of the Brillouin zone is $`1/v_0`$. Hence, the resistance between two adjacent lattice sites is $`R(𝐚_1)=R/3`$.
It is worth mentioning that without changing the resistance between two arbitrary lattice points one can transform the triangular lattice to a square lattice in which there are also resistors between the end points of one of the diagonals of each square. This topologically equivalent lattice is more suitable for evaluating the necessary integrals over the Brillouin zone since the Brillouin zone becomes a square. The same transformation was used by Atkinson et al. in their paper. If we choose $`𝐚_1=(1,0)`$ and $`𝐚_2=(0,1)`$, then the resistance between the origin and lattice point $`𝐫_0=n𝐚_1+m𝐚_2`$ is given by
$$R(n,m)=R\int _{-\pi }^\pi \frac{dx}{2\pi }\int _{-\pi }^\pi \frac{dy}{2\pi }\frac{1-e^{inx+imy}}{3-\mathrm{cos}x-\mathrm{cos}y-\mathrm{cos}(x+y)}.$$
(64)
In Appendix C we perform one integral in Eq. (64) using the method of residues in a similar way to Appendix A. After some algebra it is easy to see that the result given in Eq. (C4) is in agreement with Atkinson’s result:
$$R(n,m)=R\int _0^{\pi /2}\frac{dx}{\pi }\frac{1-e^{-\left|n-m\right|s}\mathrm{cos}(n+m)x}{\mathrm{sinh}s\mathrm{cos}x},$$
(65)
where $`\mathrm{cosh}s=2\mathrm{sec}x-\mathrm{cos}x.`$
For $`n=m`$ in Eq. (65) we have
$$R(n,n)=2R\int _0^{\pi /2}\frac{dx}{\pi }\frac{1-\mathrm{cos}2nx}{\sqrt{\left(3-\mathrm{cos}2x\right)^2-4\mathrm{cos}^2x}}.$$
(66)
Evaluating this expression by means of the program Maple we obtained the same results for $`n=m`$ as Atkinson. Like in the case of a square lattice we believe that similar recurrence formulas exist for a triangular lattice but further work is necessary along this line. Similarly, it would be interesting to find the asymptotic form of the resistance as the separation between the two nodes tends to infinity. According to some preliminary work the resistance is again logarithmically divergent.
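Eq. (65) is likewise easy to evaluate numerically; a sketch (ours; both $`(1,0)`$ and $`(1,1)`$ are nearest neighbours in the topologically equivalent lattice, so both must give $`R/3`$):

```python
# Triangular-lattice resistance from Eq. (65), cosh s = 2 sec x - cos x.
import numpy as np
from scipy.integrate import quad

def R_tri(n, m):
    def integrand(x):
        s = np.arccosh(2.0 / np.cos(x) - np.cos(x))
        return ((1.0 - np.exp(-abs(n - m) * s) * np.cos((n + m) * x))
                / (np.sinh(s) * np.cos(x)))
    return quad(integrand, 0.0, np.pi / 2.0)[0] / np.pi   # units of R

print(R_tri(1, 0), R_tri(1, 1))   # both nearest neighbours: 1/3
print(R_tri(2, 1))                # a second-nearest neighbour
```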
## V Honeycomb lattice
In this section we calculate the resistances in an infinite honeycomb lattice of resistors. Atkinson and Steenwijk have studied this lattice structure exploiting the fact that the hexagonal lattice can be constructed from the triangular lattice by the application of the so-called $`\mathrm{\Delta }Y`$ transformation. One advantage of our Green’s function method is that we do not use such a $`\mathrm{\Delta }Y`$ transformation, specific only for the honeycomb lattice, therefore our method can also be used in a straightforward manner for other lattice structures. As we shall see later, in the honeycomb lattice each unit cell contains $`two`$ lattice points, which is the main structural difference from the triangular lattice. Including more than one lattice point in the unit cell, the method outlined in this section can be viewed as a generalization of the Green’s function method discussed in the previous sections.
We assume that all the resistors have the same resistance $`R`$. The lattice structure and the unit cell are shown in Fig. 3. The angle between $`𝐚_1`$ and $`𝐚_2`$ is $`120^{}`$, and $`|𝐚_1|=|𝐚_2|=\sqrt{3}a`$, where $`a`$ is the length of the edges of the hexagons. There are two types of lattice points in each unit cell denoted by $`A`$ and $`B`$ in Fig. 3.
From now on the position of the lattice points $`A`$ and $`B`$ will be given by the position of the unit cell in which they are located. Assume the origin is at one of the lattice points $`A`$, and then the position of a unit cell can be specified by the position vector $`𝐫=n𝐚_1+m𝐚_2`$, where $`n,m`$ are arbitrary integers. We denote the potential and the current in one of the unit cells by $`V_A(𝐫)`$ and $`V_B(𝐫)`$, and $`I_A(𝐫)`$ and $`I_B(𝐫)`$, respectively, where subscripts $`A`$ and $`B`$ refer to the corresponding lattice points. Owing to Ohm’s and Kirchhoff’s laws the currents $`I_A(𝐫)`$ and $`I_B(𝐫)`$ in the unit cell specified by $`𝐫`$ satisfy the following equations:
$`I_A(𝐫)`$ $`=`$ $`{\displaystyle \frac{V_A(𝐫)-V_B(𝐫)}{R}}+{\displaystyle \frac{V_A(𝐫)-V_B(𝐫-𝐚_1)}{R}}+{\displaystyle \frac{V_A(𝐫)-V_B(𝐫-𝐚_1-𝐚_2)}{R}},`$ (67)
$`I_B(𝐫)`$ $`=`$ $`{\displaystyle \frac{V_B(𝐫)-V_A(𝐫)}{R}}+{\displaystyle \frac{V_B(𝐫)-V_A(𝐫+𝐚_1)}{R}}+{\displaystyle \frac{V_B(𝐫)-V_A(𝐫+𝐚_1+𝐚_2)}{R}}.`$ (68)
Assuming periodic boundary conditions again the potential $`V_A(𝐫)`$ can be given by its Fourier transform
$$V_A(𝐫)=\frac{1}{N}\sum _{𝐤\in \mathrm{BZ}}V_A(𝐤)e^{i\mathrm{𝐤𝐫}},$$
(69)
where $`N`$ is the number of unit cells, and analogous expressions are valid for $`V_B(𝐫),I_A(𝐫)`$ and $`I_B(𝐫)`$. Here $`𝐤`$ is in the Brillouin zone. Thus, we may rewrite Eq. (68) as
$$𝐋(𝐤)\left[\begin{array}{c}V_A(𝐤)\\ V_B(𝐤)\end{array}\right]=R\left[\begin{array}{c}I_A(𝐤)\\ I_B(𝐤)\end{array}\right],$$
(70)
where
$$𝐋(𝐤)=\left(\begin{array}{cc}3& -h^{\ast }\\ -h& 3\end{array}\right),$$
(71)
and
$$h=1+e^{i\mathrm{𝐤𝐚}_1}+e^{i𝐤\left(𝐚_1+𝐚_2\right)}.$$
(72)
Equation (70) is indeed the Fourier transform of the Poisson-like equation that determines the potentials for a given current distribution. However, in this case the Laplace operator is a 2x2 matrix. In $`𝐫`$-representation the analogous equation for a hypercubic lattice was given in Eq. (4).
The equation for the Fourier transform of the Green’s function (which is a 2x2 matrix for a honeycomb lattice, too) can be defined analogously to a hypercubic lattice in Eq. (8):
$$𝐋(𝐤)𝐆(𝐤)=1.$$
(73)
The solution of Eq. (73) for the Green’s function is
$$𝐆(𝐤)=\frac{1}{9-\left|h\right|^2}\left(\begin{array}{cc}3& h^{\ast }\\ h& 3\end{array}\right),$$
(74)
where $`9-\left|h\right|^2=2\left(3-\mathrm{cos}\mathrm{𝐤𝐚}_1-\mathrm{cos}\mathrm{𝐤𝐚}_2-\mathrm{cos}\mathrm{𝐤𝐚}_3\right)`$ and we have introduced a third vector, $`𝐚_3=-\left(𝐚_1+𝐚_2\right)`$.
There are four types of resistance. We denote the resistance between a lattice point $`A`$ as the origin and site $`𝐫_0=n𝐚_1+m𝐚_2`$ (which is an $`A`$-type site) by $`R_{AA}(𝐫_0)`$, while the resistance between the origin and site $`𝐫_0+(2𝐚_1+𝐚_2)/3`$ (which is a $`B`$-type site in the unit cell at $`𝐫_0`$) is denoted by $`R_{AB}(𝐫_0)`$. For symmetry reasons, it follows for the other two types of resistance that $`R_{BB}(𝐫_0)=R_{AA}(𝐫_0)`$ and $`R_{BA}(𝐫_0)=R_{AB}(-𝐫_0).`$ To measure $`R_{AA}(𝐫_0)`$ the currents at sites $`A`$ and $`B`$ in the unit cell at $`𝐫`$ are
$$I_A(𝐫)=I\left(\delta _{𝐫,0}-\delta _{𝐫,𝐫_0}\right)\text{and}I_B(𝐫)=0,$$
(75)
while for measuring $`R_{AB}(𝐫_0)`$ we have
$$I_A(𝐫)=I\delta _{𝐫,0}\text{and}I_B(𝐫)=-I\delta _{𝐫,𝐫_0}.$$
(76)
Thus
$`R_{AA}(𝐫_0)`$ $`=`$ $`{\displaystyle \frac{V_A(𝐫=\mathrm{𝟎})-V_A(𝐫_0)}{I}}.`$ (77)
$`R_{AB}(𝐫_0)`$ $`=`$ $`{\displaystyle \frac{V_A(𝐫=\mathrm{𝟎})-V_B(𝐫_0)}{I}}.`$ (78)
First, consider the resistance $`R_{AA}(𝐫_0)`$. From Eqs. (70), (73)–(75) we obtain
$$V_A(𝐤)=IRG_{11}(𝐤)\left(1-e^{-i\mathrm{𝐤𝐫}_0}\right).$$
(79)
Hence, using Eq. (69) we can obtain the $`𝐫`$-dependence of the potential $`V_A(𝐫)`$ and then Eq. (77) yields
$$R_{AA}(𝐫_0)=R\frac{2}{N}\sum _{𝐤\in \mathrm{BZ}}G_{11}(𝐤)\left(1-\mathrm{cos}\mathrm{𝐤𝐫}_0\right)=R\frac{3}{N}\sum _{𝐤\in \mathrm{BZ}}\frac{1-\mathrm{cos}\mathrm{𝐤𝐫}_0}{3-\mathrm{cos}\mathrm{𝐤𝐚}_1-\mathrm{cos}\mathrm{𝐤𝐚}_2-\mathrm{cos}\mathrm{𝐤𝐚}_3}.$$
(80)
Using Eq. (15), the summation over $`𝐤`$ can be substituted by an integral and we obtain
$$R_{AA}(𝐫_0)=3Rv_0\int _{𝐤\in \mathrm{BZ}}\frac{d^2𝐤}{(2\pi )^2}\frac{1-\mathrm{cos}\mathrm{𝐤𝐫}_0}{3-\mathrm{cos}\mathrm{𝐤𝐚}_1-\mathrm{cos}\mathrm{𝐤𝐚}_2-\mathrm{cos}\mathrm{𝐤𝐚}_3},$$
(81)
where $`v_0=3\sqrt{3}a^2`$ is the area of the unit cell. One can see that the same expression was found for a triangular lattice except for the factor $`3v_0`$ here \[see Eq. (62)\]. Note that although the area of the unit cell $`v_0`$ is 6 times bigger than that for the triangular lattice, the area of the Brillouin zone is 6 times less and thus, the ratio of the resistances for honeycomb and triangular lattices is 3. Therefore, the resistance between two $`A`$-type nodes is three times the corresponding resistance in the triangular lattice. This again agrees with Atkinson’s result.
Similarly, using Eqs. (70), (73), (74) and (76) we can obtain $`V_A(𝐤)`$ and $`V_B(𝐤)`$. From Eq. (69) the potential $`V_A(𝐫)`$ can be determined, and analogously $`V_B(𝐫)`$. Finally, Eq. (78) leads to
$`R_{AB}(𝐫_0)`$ $`=`$ $`R{\displaystyle \frac{1}{N}}{\displaystyle \sum _{𝐤\in \mathrm{BZ}}}\left(G_{11}(𝐤)+G_{22}(𝐤)-G_{12}(𝐤)e^{-i\mathrm{𝐤𝐫}_0}-G_{21}(𝐤)e^{i\mathrm{𝐤𝐫}_0}\right)`$ (82)
$`=`$ $`Rv_0{\displaystyle \int _{𝐤\in \mathrm{BZ}}}{\displaystyle \frac{d^2𝐤}{(2\pi )^2}}{\displaystyle \frac{3-\mathrm{cos}\mathrm{𝐤𝐫}_0-\mathrm{cos}𝐤\left(𝐫_0+𝐚_1\right)-\mathrm{cos}𝐤\left(𝐫_0+𝐚_1+𝐚_2\right)}{3-\mathrm{cos}\mathrm{𝐤𝐚}_1-\mathrm{cos}\mathrm{𝐤𝐚}_2-\mathrm{cos}\mathrm{𝐤𝐚}_3}}`$ (83)
$`=`$ $`{\displaystyle \frac{1}{3}}\left[R_{AA}(𝐫_0)+R_{AA}(𝐫_0+𝐚_1)+R_{AA}(𝐫_0+𝐚_1+𝐚_2)\right].`$ (84)
The same result was found by Atkinson and Steenwijk.
The resistance between second nearest neighbor lattice sites may be found from Eq. (81). For symmetry reasons $`R_{AA}(𝐚_1)=R_{AA}(𝐚_2)=R_{AA}(𝐚_3)`$ and from Eq. (81) we have $`\sum _{i=1}^3R_{AA}(𝐚_i)=3R.`$ Thus, the resistance between second nearest neighbor lattice sites is $`R.`$ Using Eq. (84), the resistance for adjacent lattice sites is $`R_{AB}(\mathrm{𝟎})=1/3\left[R_{AA}(\mathrm{𝟎})+R_{AA}(𝐚_1)+R_{AA}(𝐚_1+𝐚_2)\right]=2R/3`$, since obviously, $`R_{AA}(\mathrm{𝟎})=0`$. We would just mention that the limiting value of the resistance is again infinite as the separation between nodes tends to infinity (see p. 139 of Doyle’s and Snell’s book).
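Putting the pieces together, all honeycomb resistances follow from the triangular integral, Eq. (65), via the factor-3 relation above and the combination rule Eq. (84); a self-contained sketch (ours):

```python
# Honeycomb resistances from the triangular-lattice result (units of R).
import numpy as np
from scipy.integrate import quad

def R_tri(n, m):                               # Eq. (65)
    def integrand(x):
        s = np.arccosh(2.0 / np.cos(x) - np.cos(x))
        return ((1.0 - np.exp(-abs(n - m) * s) * np.cos((n + m) * x))
                / (np.sinh(s) * np.cos(x)))
    return quad(integrand, 0.0, np.pi / 2.0)[0] / np.pi

def R_AA(n, m):                                # A-type to A-type at n*a1 + m*a2
    return 3.0 * R_tri(n, m)

def R_AB(n, m):                                # Eq. (84); a1 -> (1,0), a2 -> (0,1)
    return (R_AA(n, m) + R_AA(n + 1, m) + R_AA(n + 1, m + 1)) / 3.0

print(R_AA(1, 0))   # second-nearest neighbours: 1.0
print(R_AB(0, 0))   # adjacent sites: 2/3
```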
We note that one can transform the honeycomb lattice to a topologically equivalent brick-type lattice shown in Fig. 4 without changing the resistance between two arbitrary lattice points.
This fact has been utilized in the case of the triangular lattice in the previous section.
As it can be seen from the above results, the resistance between arbitrary lattice points in a honeycomb lattice can be related to the corresponding triangular lattice. This kind of relation is the consequence of the so-called duality transformation in which the variables are transformed to Fourier transform variables (for more details see Chaikin’s book pp. 578–584). The dual lattice of a triangular lattice is a honeycomb lattice. However, if the unit cell contains more than two non-equivalent lattice points then such a connection might not be used. On the other hand, the Green’s function method outlined in the example of the honeycomb lattice can still be applied straightforwardly.
###### Acknowledgements.
The author wishes to thank G. Tichy, T. Geszti, P. Gnädig, A. Piróth, S. Redner and L. Glasser for helpful discussions. This work was supported by the Hungarian Science Foundation OTKA T025866, the Hungarian Ministry of Education (FKFP 0159/1997) and OMFB within the program “Dynamics of Nanostructures”.
## A Integration in the expression of the resistance for a square lattice
Starting from Eq. (24) for the resistance $`R(n,m)`$ we have
$$R(n,m)=R\int _{-\pi }^\pi \frac{dy}{2\pi }I(y),$$
(A1)
where
$$I(y)=\int _{-\pi }^\pi \frac{dx}{2\pi }\frac{1-e^{inx}e^{imy}}{2-\mathrm{cos}x-\mathrm{cos}y}.$$
(A2)
Since $`R(-n,m)=R(n,m)`$, we take $`n>0`$. To proceed further, we perform the integral in $`I(y)`$ using the method of residues. Introducing the complex variable $`z=e^{ix}`$ the integral $`I(y)`$ reads
$$I(y)=-2i\oint \frac{dz}{2\pi }\frac{1-z^ne^{imy}}{2z\left(2-\mathrm{cos}y\right)-z^2-1},$$
(A3)
where the path of integration is the unit circle. The denominator has roots at $`z_+=e^{i\alpha _+}`$ and $`z_{-}=e^{i\alpha _{-}}`$, where $`\alpha _+`$ and $`\alpha _{-}`$ satisfy the equation $`\mathrm{cos}\alpha =2-\mathrm{cos}y`$ and $`\alpha _{-}=-\alpha _+.`$ It is clear that for $`-\pi <y<\pi `$ we have $`2-\mathrm{cos}y>1`$, therefore the solution for $`\alpha `$ is purely imaginary. Thus we introduce $`s`$ with $`\alpha _+=-\alpha _{-}=is`$, where $`s`$ satisfies the equation
$$\mathrm{cosh}s=2-\mathrm{cos}y.$$
(A4)
For $`-\pi <y<\pi `$ it is true that $`s>0`$, so the two poles of the integrand in $`I(y)`$ are real numbers and satisfy the following inequalities: $`z_+=e^{-s}<1`$ and $`z_{-}=e^s>1`$. Thus the pole $`z_+`$ is within the unit circle, while $`z_{-}`$ is outside. According to the residue theorem $`I(y)=-2i\cdot 2\pi i\cdot \text{residues within the unit circle}`$. Using Eq. (A4) the residue of the integrand of $`I(y)`$ at $`z_+`$ is
$$\frac{1}{2\pi }\frac{1-e^{-ns}e^{imy}}{2\left(2-\mathrm{cos}y\right)-2z_+}=\frac{1}{2\pi }\frac{1-e^{-ns}e^{imy}}{2\mathrm{sinh}s}.$$
(A5)
We obtain
$$I(y)=-2i\cdot 2\pi i\cdot \frac{1}{2\pi }\frac{1-e^{-ns}e^{imy}}{2\mathrm{sinh}s}=\frac{1-e^{-ns}e^{imy}}{\mathrm{sinh}s},$$
(A6)
where $`s`$ satisfies Eq. (A4). Finally, the resistance $`R(n,m)`$ for arbitrary integers $`n,m`$ becomes
$$R(n,m)=R\int _{-\pi }^\pi \frac{dy}{2\pi }\frac{1-e^{-\left|n\right|s}e^{imy}}{\mathrm{sinh}s}=R\int _0^\pi \frac{dy}{\pi }\frac{1-e^{-\left|n\right|s}\mathrm{cos}my}{\mathrm{sinh}s}.$$
(A7)
The same result was found by Venezian.
## B The Asymptotic form of the lattice Green’s function for a square lattice
In this Appendix we derive the asymptotic form of the lattice Green’s function for a square lattice. The lattice Green’s function at site $`𝐫=0`$ is divergent since $`\epsilon (𝐤)=0`$ for $`𝐤=0`$. Therefore we calculate the asymptotic form of $`G(\mathrm{𝟎})G(𝐫)`$. Starting from Eq. (19) the lattice Green’s function for site $`𝐫=n𝐚_1+m𝐚_2`$ in a square lattice becomes
$$G(\mathrm{𝟎})-G(n,m)=\frac{1}{2}\int _{-\pi }^\pi \frac{dy}{2\pi }\int _{-\pi }^\pi \frac{dx}{2\pi }\frac{1-e^{inx}e^{imy}}{2-\mathrm{cos}x-\mathrm{cos}y}.$$
(B1)
The integral over $`x`$ is the same as $`I(y)`$ in Eq. (A2), so we can use the result obtained in Eq. (A6):
$$G(\mathrm{𝟎})-G(n,m)=\frac{1}{2}\int _{-\pi }^\pi \frac{dy}{2\pi }\frac{1-e^{-\left|n\right|s}e^{imy}}{\mathrm{sinh}s}=\int _0^\pi \frac{dy}{2\pi }\frac{1-e^{-\left|n\right|s}\mathrm{cos}my}{\mathrm{sinh}s}.$$
(B2)
We follow the same method for deriving the asymptotic form of the lattice Green’s function for large $`m`$ and $`n`$ as Venezian. A similar method was used in Chaikin’s book (see pp. 295–296) in the case of a continuous medium in two dimensions. We break the integral in Eq. (B2) into three parts:
$$G(\mathrm{𝟎})-G(n,m)=I_1+I_2+I_3,$$
(B3)
where
$`I_1`$ $`=`$ $`{\displaystyle \int _0^\pi }{\displaystyle \frac{dy}{2\pi }}{\displaystyle \frac{1-e^{-\left|n\right|y}\mathrm{cos}my}{y}},`$ (B4)
$`I_2`$ $`=`$ $`{\displaystyle \int _0^\pi }{\displaystyle \frac{dy}{2\pi }}\left({\displaystyle \frac{1}{\mathrm{sinh}s}}-{\displaystyle \frac{1}{y}}\right),`$ (B5)
$`I_3`$ $`=`$ $`{\displaystyle \int _0^\pi }{\displaystyle \frac{dy}{2\pi }}\left({\displaystyle \frac{e^{-\left|n\right|y}\mathrm{cos}my}{y}}-{\displaystyle \frac{e^{-\left|n\right|s}\mathrm{cos}my}{\mathrm{sinh}s}}\right),`$ (B6)
and $`s`$ satisfies Eq. (A4). The first integral $`I_1`$ can be expressed by the integral exponential $`\text{Ein}(z)`$ :
$$I_1=\frac{1}{2\pi }\text{Re}\left\{\int _0^\pi 𝑑y\frac{1-e^{-\left(\left|n\right|-im\right)y}}{y}\right\}=\frac{1}{2\pi }\text{Re}\left\{\text{Ein}\left[\pi \left(\left|n\right|-im\right)\right]\right\},$$
(B7)
where $`\text{Ein}(z)`$ is defined by
$$\text{Ein}(z)=\int _0^z𝑑t\frac{1-e^{-t}}{t}.$$
(B8)
For large values of its argument, $`\text{Ein}(z)\approx \mathrm{ln}z+\gamma `$, where $`\gamma =0.5772\dots `$ is the Euler-Mascheroni constant. Thus, for large $`n`$ and $`m`$ $`I_1`$ can be approximated by
$$I_1\approx \frac{1}{2\pi }\left(\mathrm{ln}\left|\pi \left(\left|n\right|-im\right)\right|+\gamma \right)=\frac{1}{2\pi }\left(\mathrm{ln}\sqrt{n^2+m^2}+\gamma +\mathrm{ln}\pi \right).$$
(B9)
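This asymptotic behaviour can be verified directly; the snippet below (our own check, not part of the derivation; it assumes SciPy) integrates Ein along the ray $`t=zu`$, $`0\le u\le 1`$, and compares with $`\mathrm{ln}z+\gamma `$:

```python
import numpy as np
from scipy.integrate import quad

def Ein(z):
    """Ein(z) = int_0^z (1 - e^{-t})/t dt, integrated along t = z*u."""
    f = lambda u: (1.0 - np.exp(-z * u)) / u
    re, _ = quad(lambda u: f(u).real, 0.0, 1.0)
    im, _ = quad(lambda u: f(u).imag, 0.0, 1.0)
    return re + 1j * im

gamma = 0.5772156649015329
for n, m in [(3, 4), (10, 10), (30, 20)]:
    z = np.pi * (n - 1j * m)
    print(n, m, abs(Ein(z) - (np.log(z) + gamma)))  # tends to 0 for large |z|
```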
Using Eq. (A4) the integral $`I_2`$ can be evaluated exactly:
$$I_2=\int _0^\pi \frac{dy}{2\pi }\left(\frac{1}{\sqrt{\left(2-\mathrm{cos}y\right)^2-1}}-\frac{1}{y}\right)=\frac{1}{2\pi }\left(\frac{\mathrm{ln}8}{2}-\mathrm{ln}\pi \right).$$
(B10)
In the integral $`I_3`$ the integrand is close to zero for small values of $`y`$ and $`s`$ since $`s\approx \mathrm{sinh}s\approx y`$, while for larger values of $`y`$ and $`s`$ the exponentials are negligible, therefore $`I_3\approx 0`$.
Finally, we find that the lattice Green’s function for large arguments, $`i.e.,`$ $`\left|𝐫\right|=a\sqrt{n^2+m^2}\to \mathrm{}`$, becomes
$$G(𝐫)=G(\mathrm{𝟎})-\frac{1}{2\pi }\left(\mathrm{ln}\frac{\left|𝐫\right|}{a}+\gamma +\frac{\mathrm{ln}8}{2}\right).$$
(B11)
The same result is quoted on page 296 of Chaikin’s book. In the theory of the Kosterlitz–Thouless–Berezinskii phase transition Kosterlitz used the same asymptotic form of the Green’s function for a square lattice.
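As a numerical illustration (ours; it reuses Eq. (B2) and assumes SciPy), the exact and asymptotic forms can be compared for a few lattice sites:

```python
import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649015329

def G_diff(n, m):
    """G(0) - G(n,m) for the square lattice, from Eq. (B2)."""
    def integrand(y):
        ch = 2.0 - np.cos(y)            # cosh s, Eq. (A4)
        sh = np.sqrt(ch * ch - 1.0)     # sinh s
        return (1.0 - (ch - sh) ** abs(n) * np.cos(m * y)) / sh
    value, _ = quad(integrand, 0.0, np.pi)
    return value / (2.0 * np.pi)

def G_diff_asym(n, m):
    """Asymptotic form of G(0) - G(r), Eq. (B11), with a = 1."""
    return (np.log(np.hypot(n, m)) + GAMMA + 0.5 * np.log(8.0)) / (2.0 * np.pi)

for n, m in [(3, 4), (10, 0), (20, 21)]:
    print((n, m), G_diff(n, m), G_diff_asym(n, m))  # agreement improves with r
```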
## C Integration in the expression of the resistance for a triangular lattice
Introducing the coordinate transformations $`x^{\prime }=(x+y)/2`$ and $`y^{\prime }=(x-y)/2`$, and using the complex variable $`z=e^{iy^{\prime }/2}`$ we have $`R(n,m)=\frac{1}{2}R\int _{-\pi }^{\pi }𝑑x^{\prime }/2\pi I(x^{\prime })`$ from Eq. (64), where
$$I(x^{\prime })=2i\oint \frac{dz}{2\pi }\frac{1-z^{(n-m)}e^{i(n+m)\frac{x^{\prime }}{2}}}{z^2\mathrm{cos}\frac{x^{\prime }}{2}-z\left(3-\mathrm{cos}x^{\prime }\right)+\mathrm{cos}\frac{x^{\prime }}{2}},$$
(C1)
and the path of integration is the unit circle. The factor 1/2 in front of $`R(n,m)`$ is the Jacobian corresponding to the transformations of variables. Since $`R(n𝐚_1-m𝐚_2)=R(m𝐚_1-n𝐚_2)`$, we take $`n-m>0.`$ The denominator has roots in $`z_+=e^{i\alpha _+/2}`$ and $`z_-=e^{i\alpha _-/2}`$ and it is easy to see that they satisfy the equations $`\alpha _+=-\alpha _-`$ and $`2\mathrm{cos}(\alpha _+/2)=(3-\mathrm{cos}x^{\prime })/\mathrm{cos}(x^{\prime }/2)`$. For $`-\pi <x^{\prime }<\pi `$ it is clear that $`\alpha _+`$ is purely imaginary. If we choose $`\alpha _+`$ such that $`\mathrm{Im}\alpha _+>0`$, then $`z_+<1`$ and $`z_->1.`$ Thus, with $`\alpha _+=is`$, the residue of the integrand of $`I(x^{\prime })`$ at $`z_+`$ (which is inside the unit circle, $`i.e.,`$ $`s>0`$) is
$$\frac{1}{2\pi }\frac{1-e^{-(n-m)\frac{s}{2}}e^{i(n+m)\frac{x^{\prime }}{2}}}{2z_+\mathrm{cos}\frac{x^{\prime }}{2}-\left(3-\mathrm{cos}x^{\prime }\right)}=-\frac{1}{2\pi }\frac{1-e^{-(n-m)\frac{s}{2}}e^{i(n+m)\frac{x^{\prime }}{2}}}{2\mathrm{sinh}\frac{s}{2}\mathrm{cos}\frac{x^{\prime }}{2}},$$
(C2)
where $`s`$ satisfies the equation
$$2\mathrm{cosh}\frac{s}{2}=\frac{3-\mathrm{cos}x^{\prime }}{\mathrm{cos}\frac{x^{\prime }}{2}}.$$
(C3)
Finally, the resistance for arbitrary integers $`n,m`$ is
$$R(n,m)=\frac{R}{2}\int _{-\pi }^{\pi }\frac{dx^{\prime }}{2\pi }\frac{1-e^{-\left|n-m\right|\frac{s}{2}}e^{i(n+m)\frac{x^{\prime }}{2}}}{\mathrm{sinh}\frac{s}{2}\mathrm{cos}\frac{x^{\prime }}{2}}.$$
(C4)
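Again the result can be checked numerically. The sketch below (our own; SciPy assumed) integrates the real part of Eq. (C4) — the imaginary part vanishes over the symmetric interval — and recovers the nearest-neighbour value $`R/3`$ expected for a lattice of coordination number six:

```python
import numpy as np
from scipy.integrate import quad

def resistance_tri(n, m, R=1.0):
    """R(n,m) of the infinite triangular lattice, from Eq. (C4)."""
    k, l = abs(n - m), n + m
    def integrand(x):
        ch = 0.5 * (3.0 - np.cos(x)) / np.cos(0.5 * x)  # cosh(s/2), Eq. (C3)
        sh = np.sqrt(ch * ch - 1.0)                     # sinh(s/2)
        # (ch - sh) equals e^{-s/2}
        num = 1.0 - (ch - sh) ** k * np.cos(0.5 * l * x)
        return num / (sh * np.cos(0.5 * x))
    value, _ = quad(integrand, 0.0, np.pi)
    return 0.5 * R * value / np.pi

print(resistance_tri(1, 0))    # expect 1/3 (nearest neighbours)
print(resistance_tri(1, -1))   # nearest neighbours along a1 - a2: also 1/3
```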
# Analyzing X-Ray Pulsar Profiles: Geometry and Beam Pattern of Her X-1
## 1 Introduction
Since its discovery in 1972 by the UHURU satellite (Tananbaum et al. 1972), the X-ray binary system Hercules X-1/HZ Herculis has become the best studied of its class, of which about 44 members are known today (Bildsten et al. 1997). These systems are understood to be fast spinning neutron stars that are accreting matter from a massive companion star either via Roche lobe overflow or from the stellar wind of the companion. Since the neutron stars have strong magnetic fields, the accreted matter is funnelled along the field lines onto the magnetic poles, where most of the energy is released in the form of X-radiation. Generally the magnetic axis and the rotation axis are not aligned. Therefore a large fraction of the detected flux from these sources is pulsed, as during the course of each revolution of the neutron star the beams from the poles sweep through our line of sight.
Her X-1/HZ Her combines most of the properties that can be found in X-ray binaries, which has made it one of the favourite sources of X-ray astronomers. From the observation of eclipses and from pulse timing analyses the orbital parameters are well determined. The masses of the neutron star and its optical companion are $`1.3M_{\odot }`$ and $`2.2M_{\odot }`$ respectively, the orbital period of Her X-1 is $`1.7\mathrm{d}`$, and the inclination of the orbital plane is $`i>80^{\circ }`$ (Deeter, Boynton, & Pravdo 1981). In addition to the pulse period of 1.24 s, i.e. the rotation period of the neutron star, Her X-1 also displays X-ray intensity variations on a period of about 35 days. Such a long-term variability is only known for two other pulsars: LMC X-4 and SMC X-1. The 35-day cycle of Her X-1 is nowadays ascribed to the precession of a warped accretion disk which periodically obscures the neutron star from our view (Petterson, Rothschild, & Gruber 1991, Schandl & Meyer 1994). During its high intensity or main-on state, Her X-1 has a luminosity $`L_\mathrm{x}\approx 2.5\times 10^{37}\mathrm{ergs}\mathrm{s}^{-1}`$ (2-60 keV) (McCray et al. 1982). The maximum flux of the short-on state is typically only 30% of that of the main-on. Balloon observations in 1977 allowed for the first time the indirect measurement of the magnetic field strength of some $`10^{12}`$ G through the detection of a spectral feature in the hard X-ray spectrum (Trümper et al. 1978), interpreted as a cyclotron absorption line at about 40 keV.
The pulse shapes of Her X-1 are highly asymmetric and depend on energy and on the phase of the 35-day cycle. In several studies phenomenological emission patterns have been used to reproduce the asymmetric pulse profiles of Her X-1. Wang & Welter (1981) fitted the geometry of two antipodal polar caps with asymmetric fan-beam patterns. In this approach the asymmetry of the emission pattern was attributed to asymmetric accretion due to the plasma becoming attached to the magnetic field lines away from the corotation radius. However it is not clear whether an asymmetric accretion stream must produce an asymmetric beam pattern (Basko & Sunyaev 1975). Another way of introducing asymmetry into the pulse shapes is via non-antipodal emission regions. Leahy (1991) used two offset rings on the surface of the neutron star with symmetric pencil-beams and Panchenko & Postnov (1994) modelled two antipodal polar caps and one ringlike area which was attributed to a non-coaxial quadrupole configuration of the magnetic field. Further studies have shown that relativistic light deflection near the neutron star plays an important role when emission models are used to explain the observed pulse shapes (e.g. Riffert et al. 1993, Leahy & Li 1995).
In this analysis we take up the idea of a non-antipodal location of the emission regions caused by a slightly distorted magnetic dipole field. We assume that the emission originating from the regions near the magnetic poles only depends on the viewing angle between the magnetic axis and the direction of observation which means that the emission is symmetric with respect to the local magnetic axis. In contrast to previous studies where specific emission models have been used to fit the pulse profiles, the method used here does not involve any assumptions on the polar emission. Instead it tests in a general way whether the pulse profiles are compatible with the assumption that they are the sum of two independent symmetric components.
The method we use to analyze pulse profiles is briefly summarized in the following §2.1. In §2.2 we list the analyzed data. The results of the analysis are presented in §2.3. We show that the data of Her X-1 are indeed compatible with the idea of a slightly distorted magnetic dipole field. Further we find indications in the contributions to the pulse profiles that the emission from both poles is identical. We determine the location of the magnetic poles and reconstruct the beam pattern, which is discussed in §3.1. In the following §3.2 we examine the dependence of the pulse shape on the phase of the 35-day cycle. We argue that the contributions to the pulse profile undergo different attenuation resulting in the observed evolution of the pulse shapes during the main-on state of the 35-day cycle.
## 2 Analysis
### 2.1 The Method
This section is a short summary of the method we use to analyze the energy dependent pulse profiles of Her X-1. We will focus on the main ideas and assumptions omitting both formal derivations and technical details. A comprehensive presentation of the material including a test case has been given in Kraus et al. (1995).
Consider the emission region near one of the magnetic poles of the neutron star. Radiation escapes from the accretion stream and from the star’s surface and, while close to the star, is deflected in the gravitational field of the neutron star. A distant observer who cannot spatially resolve the emission region measures the integrated flux coming from the entire visible part of the emission region. The observed integrated flux depends on the direction of observation because the direction of observation determines which part of the emission region is visible and also because the radiation emitted by the accretion stream and the neutron star is presumably beamed. This function, namely the flux of a single emission region measured by a distant observer as a function of the direction of observation, is the link between the properties of the emission region and the contribution of that emission region to the pulse profile. In the following we will call this function the beam pattern of the emission region. The contribution of the emission region to the pulse profile, which we will refer to as a single-pole pulse profile, depends both on the beam pattern and on the pulsar geometry, i.e., on the orientation of the rotation axis with respect to the direction of observation and on the location of the magnetic pole on the neutron star. In short: local emission pattern plus relativistic light deflection determine the beam pattern, beam pattern plus geometry result in a certain single-pole pulse profile, and the superposition of the single-pole pulse profiles of both emission regions is the total pulse profile.
#### a. decomposition into single-pole pulse profiles
In the following we are going to assume that the beam pattern is axisymmetric with respect to the magnetic axis (i.e., to the axis that passes through the center of the neutron star and through the magnetic pole). The axisymmetric beam pattern is a function of only one variable, the angle $`\theta `$ between the direction of observation and the magnetic axis. Consider now the single-pole pulse profile $`f(\varphi )`$, where $`\varphi `$ is the angle of rotation of the neutron star. It can easily be shown that the single-pole pulse profile produced by an axisymmetric beam pattern is symmetric in the following sense: there is a rotation angle $`\mathrm{\Phi }`$, so that $`f(\mathrm{\Phi }-\varphi )=f(\mathrm{\Phi }+\varphi )`$ for all values of $`\varphi `$. The fact that $`f`$ is periodic in $`\varphi `$ implies that the same symmetry must hold with respect to the rotation angle $`\mathrm{\Phi }+\pi `$.
Now turn to the total pulse profile produced as the sum of the two symmetric single-pole pulse profiles. If the emission regions are antipodal, i.e., the two magnetic axes are aligned, it turns out that the symmetry points $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_1+\pi `$ of the first single-pole pulse profile fall on the same rotation angles as the symmetry points $`\mathrm{\Phi }_2`$ and $`\mathrm{\Phi }_2+\pi `$ of the second single-pole pulse profile. Their sum, the total pulse profile, is therefore symmetric with respect to the same symmetry points. If the emission regions are not antipodal, however, the symmetry points of the two single-pole pulse profiles do not coincide (except for certain special displacements from the antipodal positions) and the total pulse profile is asymmetric.
Given an observed asymmetric pulse profile, we can ask if it could possibly have been built up out of two symmetric contributions with symmetry points that do not coincide. If so, it must be possible to find two symmetric (and periodic) functions $`f_1`$ and $`f_2`$ with the pulse profile $`f`$ as their sum. By writing the observed pulse profile, defined by a certain number $`N`$ of discrete data points $`f(\varphi _k)`$, as a Fourier sum and with an ansatz for $`f_1`$ and $`f_2`$ in the form of Fourier sums also, the following can easily be shown: For an arbitrary choice of symmetry points $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_2`$, there are two periodic functions $`f_1`$ and $`f_2`$, $`f_1`$ symmetric with respect to $`\mathrm{\Phi }_1`$ and $`f_2`$ symmetric with respect to $`\mathrm{\Phi }_2`$, such that $`f=f_1+f_2`$, and the two symmetric functions are uniquely determined. Exceptions to this rule occur only if $`(\mathrm{\Phi }_1-\mathrm{\Phi }_2)/\pi `$ is a rational number. In this case the symmetric functions may not exist or, if they exist, may not be uniquely determined. It must also be noted that the symmetric functions obviously can only be determined up to a constant $`C`$, since $`f_1+C`$ and $`f_2-C`$ are also a solution if $`f_1`$ and $`f_2`$ are.
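The decomposition itself amounts to a 2×2 linear solve per harmonic. The following sketch (our own illustration, not the code used in this work; it assumes a pulse profile sampled on a uniform phase grid) makes this explicit; note that the determinant $`\mathrm{sin}(k(\mathrm{\Phi }_2-\mathrm{\Phi }_1))`$ vanishes exactly in the exceptional cases mentioned above:

```python
import numpy as np

def decompose(f, Phi1, Phi2):
    """Split a periodic profile f (uniformly sampled over one period) into
    f1 + f2 with f_i symmetric about Phi_i, i.e. f_i(Phi_i + x) = f_i(Phi_i - x).
    The constant C is split evenly, since it is not fixed by the decomposition."""
    N = len(f)
    phi = 2.0 * np.pi * np.arange(N) / N
    c = np.fft.rfft(np.asarray(f, dtype=float)) / N
    f1 = np.full(N, 0.5 * c[0].real)
    f2 = np.full(N, 0.5 * c[0].real)
    for k in range(1, N // 2):          # highest coefficients left out (cf. § 2.2)
        A, B = 2.0 * c[k].real, -2.0 * c[k].imag  # k-th harmonic: A cos + B sin
        det = np.sin(k * (Phi2 - Phi1))
        if abs(det) < 1e-10:            # (Phi1 - Phi2)/pi rational: no unique split
            raise ValueError("degenerate symmetry points at harmonic %d" % k)
        a = (A * np.sin(k * Phi2) - B * np.cos(k * Phi2)) / det
        b = (B * np.cos(k * Phi1) - A * np.sin(k * Phi1)) / det
        f1 += a * np.cos(k * (phi - Phi1))
        f2 += b * np.cos(k * (phi - Phi2))
    return f1, f2
```

Scanning such decompositions over a grid of symmetry-point pairs and applying the physical criteria given below is then straightforward.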
Thus, in principle every choice of a pair of symmetry points corresponds to a unique decomposition of any pulse profile into two symmetric contributions. For such a decomposition to be an acceptable solution, however, $`f_1`$ and $`f_2`$ also have to meet the following physical criteria in order to be interpreted as single-pole pulse profiles:
1. They must not have negative values, since they represent photon fluxes.
2. They must be reasonably simple and smooth. We do not expect the polar contributions to have a shape that is more complex than the pulse profile. Especially modulations of the single-pole pulse profiles that cancel out in the sum are not compatible with the assumption of two independent and therefore uncorrelated emission regions.
3. They must conform to the energy dependence of the pulse profile. The decomposition can be done independently for pulse profiles in different energy ranges. Since the symmetry points are determined by the pulsar geometry, the same symmetry points must give acceptable decompositions according to the criteria 1 and 2 in all energy ranges. Finally the single-pole pulse profiles should show the same gradual energy dependence as the pulse profile.
Given the existence of formal decompositions for all pairs of symmetry points and the criteria mentioned above, we are left with the two-dimensional parameter space of all possible values of $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_2`$, which we search for points with acceptable decompositions. For practical purposes, the parameters we use are the quantities $`\mathrm{\Phi }_1`$ and $`\mathrm{\Delta }:=\pi -(\mathrm{\Phi }_1-\mathrm{\Phi }_2)`$. The parameter space that contains every possible unique decomposition then is $`0\le \mathrm{\Phi }_1\le \pi `$ and $`0\le \mathrm{\Delta }\le \pi /2`$. In the analysis of just one pulse profile there will in general be a number of different acceptable decompositions. This number may be significantly reduced by the energy dependence of the pulse profile. In general, the existence of symmetry points with acceptable decompositions in all energy channels is by no means guaranteed. If such a pair of symmetry points is found, then it is indeed possible to build up the observed pulse profile out of two symmetric contributions, and we can conclude that the analyzed data are compatible with the assumption that the asymmetry of the observed pulse profile is caused by the non-antipodal locations of the magnetic poles. The symmetric functions can be interpreted as the single-pole pulse profiles due to the two emission regions.
A successful decomposition provides information both on the geometry and on the beam pattern. As to the geometry, we obtain a value for the parameter $`\mathrm{\Delta }`$. This parameter is related to the locations of the emission regions on the neutron star (see Figure 1). The beam pattern is related to the single-pole pulse profile via the geometric parameters, i.e., the location of the emission region on the neutron star and the direction of observation. Since these parameters are not known, one cannot directly deduce the beam pattern from the single-pole pulse profile. It can be shown, however, that an appropriate transformation of the single-pole pulse profile and the beam pattern turns the transformed single-pole pulse profile into a scaled, but undistorted copy of a section of the transformed beam pattern. Although the scaling factor is a geometric quantity and therefore not known, this still provides an intuitive understanding of what a section of the beam pattern must look like. Since in the case of Her X-1 it is possible to eventually reconstruct the beam pattern, the information obtained at this stage mainly serves as a starting point for the next step of the analysis and we will not go into details about the transformation mentioned above.
#### b. search for an overlap region and determination of the geometry
In general, the two emission regions on the neutron star may or may not be equal (i.e., have the same beam pattern). If they are equal, this fact may be apparent in the single-pole pulse profiles in the following way. Since in general the rotation axis and the magnetic axis of the neutron star are not aligned, the viewing angle $`\theta `$ between the magnetic axis and the direction of observation of each emission region changes with rotation angle $`\varphi `$. The range $`\theta `$ can cover for each magnetic pole depends on the location of that pole on the neutron star and on the direction of observation, where $`0^{\circ }\le \theta _{\mathrm{min}}\le \theta _{\mathrm{max}}\le 180^{\circ }`$. Only in the special case where both the magnetic axis and the direction of observation are perpendicular to the rotation axis, $`\theta `$ takes all values between $`0^{\circ }`$ and $`180^{\circ }`$. Since the emission regions have different locations on the neutron star, their ranges of values of $`\theta `$ are different. Depending on the geometry, these two ranges for $`\theta `$ may overlap. For an ideal dipole configuration e.g., the condition under which an overlap in the ranges of values of $`\theta `$ of both poles exists is $`\mathrm{\Theta }_\mathrm{O}+\mathrm{\Theta }_\mathrm{m}>\pi /2`$, where $`\mathrm{\Theta }_\mathrm{O}`$ is the angle between the rotation axis and the line of sight, and $`\mathrm{\Theta }_\mathrm{m}`$ is the angle between the rotation axis and the magnetic axis. Consider an angle $`\stackrel{~}{\theta }`$ in the overlap region. At some instant during the course of one revolution of the neutron star, at rotation angle $`\varphi `$, one emission region is seen under the angle $`\stackrel{~}{\theta }`$. At a different instant, at rotation angle $`\varphi ^{\prime }`$, the other emission region is seen under the same angle $`\stackrel{~}{\theta }`$. If the beam patterns of the two emission regions are identical, then the flux detected from the one emission region at $`\varphi `$ is equal to the flux detected from the other emission region at $`\varphi ^{\prime }`$. Thus, if an overlap region exists, the corresponding part of the beam pattern shows up in both single-pole pulse profiles, though at different values of rotation angle. Since the single-pole pulse profiles can be transformed into undistorted (though scaled) copies of sections of the beam patterns, such a part of the beam pattern that shows up in both single-pole pulse profiles should be readily recognizable. Note that the occurrence and size of the overlap region depends on the geometric parameters and must therefore be the same for pulse profiles in different energy channels.
If an overlap region is found in the single-pole pulse profiles obtained in the decomposition, this is an indication that there are two emission regions with identical beam patterns. Since each single-pole pulse profile provides a section of the beam pattern and the two sections overlap, we can then combine the two sections by superposing the overlapping parts. As a result we obtain the total visible section of the beam pattern. Superposing the overlapping parts of the two sections of the beam pattern amounts to determining the relation between the corresponding values $`\varphi `$ and $`\varphi ^{}`$ of the rotation angle. On the other hand, the relation between $`\varphi `$ and $`\varphi ^{}`$ can be expressed in terms of the unknown geometric parameters of the system. Thus, the superposition provides a constraint on the geometry.
Again omitting all details we simply note the procedure for superposing the overlapping parts of the two sections of the beam pattern. The single-pole pulse profiles $`f_1(\varphi )`$ with symmetry point $`\mathrm{\Phi }_1`$ and $`f_2(\varphi )`$ with symmetry point $`\mathrm{\Phi }_2`$ are transformed into functions of a common variable $`q`$ through $`\mathrm{cos}(\varphi -\mathrm{\Phi }_1)=q`$ for $`f_1`$ and $`\mathrm{cos}(\varphi -\mathrm{\Phi }_2)=(q-a)/b`$ for $`f_2`$. The real numbers $`a`$ and $`b>0`$ are determined by means of a fit which minimizes the quadratic deviation between $`f_1(q)`$ and $`f_2(q)`$ in the overlap region. At this point the constant $`C`$, which determines how the unpulsed flux has to be distributed to the single-pole pulse profiles, can also be computed. Since $`a`$ and $`b`$ can be expressed in terms of the unknown geometric parameters of the pulsar, their best-fit values constitute constraints on the pulsar geometry. The results of this second step of the analysis are the total visible beam pattern as a function of $`q`$ and two constraints on the geometric parameters.
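A minimal sketch of this fit (ours; the optimizer and interpolation scheme are implementation choices, not taken from the paper) is:

```python
import numpy as np
from scipy.optimize import minimize

def _as_function_of(q, g):
    """Order (q, g) pairs so g can be interpolated as a function of q; the
    symmetry of a single-pole profile makes it single-valued in q."""
    i = np.argsort(q)
    return q[i], g[i]

def overlap_misfit(params, phi, f1, f2, Phi1, Phi2, npts=200):
    """Quadratic deviation between the two transformed sections, using
    cos(phi - Phi1) = q for f1 and cos(phi - Phi2) = (q - a)/b for f2."""
    a, b, C = params
    if b <= 0.0:
        return np.inf
    q1, g1 = _as_function_of(np.cos(phi - Phi1), f1 + C)
    q2, g2 = _as_function_of(a + b * np.cos(phi - Phi2), f2 - C)
    lo, hi = max(q1[0], q2[0]), min(q1[-1], q2[-1])
    if lo >= hi:
        return np.inf                   # no overlap for these parameters
    q = np.linspace(lo, hi, npts)
    return np.sum((np.interp(q, q1, g1) - np.interp(q, q2, g2)) ** 2)

# e.g.: minimize(overlap_misfit, x0=(0.0, 1.0, 0.0),
#                args=(phi, f1, f2, Phi1, Phi2), method="Nelder-Mead")
```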
The geometric information obtained so far (i.e., the values of $`\mathrm{\Delta }`$, $`a`$, and $`b`$) is not quite sufficient in itself to completely determine the pulsar geometry. It needs to be supplemented by an independent determination of any one additional geometric parameter or by an additional constraint. We suggest that this supplement may be obtained by means of the assumption that the rotation axis of the neutron star is perpendicular to the orbital plane. In this case, the angle $`\mathrm{\Theta }_\mathrm{O}`$ between the direction of observation and the rotation axis of the neutron star is given by the inclination of the orbital plane. The assumption of $`\mathrm{\Theta }_\mathrm{O}=i`$ seems to be quite plausible since accreted mass also carries angular momentum from the massive companion, and this transfer is expected to align the rotation axes of the binary stars on a timescale short compared to the lifetime of the system. However, this assumption need not hold true for all binary systems. With the inclination substituted for $`\mathrm{\Theta }_\mathrm{O}`$, the analysis of the pulse profiles determines the positions of the emission regions on the neutron star.
Once the pulsar geometry is known, we also obtain the equation relating the auxiliary variable $`q`$ and the viewing angle $`\theta `$, so that the reconstructed beam pattern can be transformed into a function of $`\theta `$. However, it turns out that the relation between $`q`$ and $`\theta `$ involves an ambiguity which cannot be resolved within this analysis. It is due to the fact that we are not able to relate a single-pole pulse profile to one of the two emission regions. Therefore, we obtain two different possible solutions for the beam pattern and a choice between them must be based on either theoretical considerations and model calculations, or on additional information on the source.
### 2.2 The Data
The analysis presented in this paper is based on pulse profiles of the main-on and short-on states of Her X-1. The analyzed sample contains a total of 148 pulse profiles from 20 different observations. References, the platform of the detectors, year of observation, the total energy range, the state of the 35-day cycle, the number of separate observations and the total number of pulse profiles of the respective observations are listed in Table 1. The data reduction including background subtraction has been done by the respective authors. In order to compare the pulse profiles from different observations, the pulse profiles of the main-on have been aligned in phase so that their common features match best. Since the pulse profiles of the short-on are markedly different, their features have been aligned with respect to the main-on as suggested by Deeter et al. (1998).
At energies below 1 keV, the pulses of Her X-1 have a sinusoidal shape which is interpreted as reprocessed hard X-radiation at the inner edge of the accretion disk (McCray et al. 1982). Since the origin of these soft X-rays is not the region near the magnetic poles, the analysis is restricted to higher energies. Above 1 keV the pulse profiles of Her X-1 are highly asymmetric and their typical energy dependence has been examined in a variety of studies (see Deeter et al. 1998, and references therein). In the analysis the pulse profiles are written as Fourier series. Since the higher Fourier coefficients are presumably affected by aliasing and also may have fairly large statistical errors, the highest coefficients are set to zero. This has a smoothing effect depending on the number of Fourier coefficients concerned. An example of the typical energy dependence of the pulse profiles and their representation in the analysis is given in the top row of Figure 2. It shows pulse profiles in three different energy ranges of an EXOSAT observation (Kahabka 1987) during the main-on state. The observed pulse profiles are plotted with crosses. The profiles plotted as solid lines are inverse Fourier-transformed using 32 out of originally 60 Fourier coefficients.
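The smoothing referred to here is plain truncation of the Fourier series; schematically (our own short illustration, not the pipeline actually used):

```python
import numpy as np

def truncate_profile(profile, nkeep=32):
    """Keep the lowest nkeep Fourier coefficients of a profile, zero the rest."""
    c = np.fft.rfft(profile)
    c[nkeep:] = 0.0
    return np.fft.irfft(c, n=len(profile))
```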
### 2.3 Results
#### a. decomposition into single-pole pulse profiles
In a first run, the decomposition method has been simultaneously applied to the 103 pulse profiles of the 15 observations of the main-on state. Due to the large number of distinct pulse shapes and due to the fact that they have a relatively low level of unpulsed flux, the positive flux criterion has led to an exclusion of about 90% of the whole parameter space of possible symmetry points $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_1+\mathrm{\Delta }`$. Further sorting out the decompositions (i.e. the single-pole pulse profiles) that are qualitatively too complicated to match the criterion of two independent emission regions only left over one type of decomposition. The energy dependence of this type of decomposition is as smooth as that of the pulse profiles. Thus we have found acceptable decompositions in a small range of $`\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_1+\mathrm{\Delta }`$ which are all of the same type. This type of decomposition is unique in the sense that a small deviation from the ’best-values’ of the symmetry points results in decompositions that look similar but become more and more complicated the larger the deviation becomes until they do not match the physical criteria any more. A systematic variation of the best-values of the symmetry points, which could be caused by free precession of the neutron star, is not observed. The lower panels in Figure 2 show the decompositions of the typical pulse profiles of the respective top panels. The unpulsed flux has been distributed to the single-pole pulse profiles according to the constant $`C`$ as derived in the second step of the analysis (see § 2.1). The single-pole pulse profiles show that the energy dependence of the pulse profiles is mainly due to the change of one polar contribution (dashed curve) where an additional peak appears above 10 keV, whereas the pulse shape of the other pole (solid curve) does not change much. Interestingly, the contributions of the emission regions we obtain look very similar to those of Panchenko & Postnov (1994) obtained from a model calculation mentioned in § 1. Similar components were also obtained by Kahabka (1987) in an attempt to model the observed pulse shapes by means of 3 to 5 gaussians, a sinusoidal component and a constant flux.
Extending the analysis to the short-on and the turn-on of the main-on we also find acceptable decompositions in the same range of the symmetry points as in the main-on. Since the pulses of the short-on state have quite a different shape compared to the main-on, their decompositions look different as well. An example of a typical short-on pulse profile and its decomposition is given in Figure 3.
#### b. search for an overlap region and determination of the geometry
In the next step of the analysis a two-parameter fit has been applied to the decompositions in order to find out whether there is a range where the shapes of the polar contributions match. For the 62 pulse profiles of 8 of the main-on observations we have found a set of fit parameters which correspond to an overlap range where the two curves of each decomposition match astonishingly well if the statistical errors of the data are taken into account. This is shown in Figure 4 where the solid (dashed) curve corresponds to the respective single-pole pulse profile in Figure 2. The typical errors ($`\pm \sigma `$) as derived from error propagation of the statistical errors of the data are indicated in the upper right corner of each panel. The range where the curves overlap corresponds to values of the viewing angle $`\theta `$ under which both emission regions are seen during the course of one revolution of the neutron star. Introducing a scaling factor as an additional fit parameter we have achieved acceptable fits for the 15 pulse profiles of another three main-on observations. These profiles are further discussed in §3.2. Acceptable fits could not be achieved for only four observations, all at late phases of the main-on when the flux had already dropped to less than 60% of the maximum flux of the respective 35-day cycle.
The dependence of the results for the location of the magnetic poles on the direction of observation $`\mathrm{\Theta }_\mathrm{O}`$ is shown in Figure 5. Assuming that $`\mathrm{\Theta }_\mathrm{O}`$ is equal to the inclination $`i`$ of the system and adopting $`i=83^{\circ }(\pm 4^{\circ })`$ (Kunz 1996, private communication), we obtain the polar angles of the magnetic poles $`\mathrm{\Theta }_1\approx 18^{\circ }`$ and $`\mathrm{\Theta }_2\approx 159^{\circ }`$ with an offset from antipodal positions of $`\delta <5^{\circ }`$ (see Figure 1). The small value obtained for $`\delta `$ confirms the assumption that a fairly small distortion of the magnetic dipole field is enough to explain the considerable asymmetry of the pulse profiles of Her X-1. The error bars at $`\mathrm{\Theta }_\mathrm{O}=83^{\circ }\pm 4^{\circ }`$ demonstrate how little the best-fit parameters determined for different pulse profiles vary.
The remaining ambiguity in the determination of the beam pattern is indicated by the different units of the lower ($`\theta _+`$) and the upper ($`\theta _-`$) x-axis in Figure 4. However as discussed in § 3.2 the study of the evolution of the pulse profile with the 35-day cycle indicates that the $`\theta _+`$-solution is presumably the correct one. The beam pattern of the emission regions has been reconstructed in the range $`66^{\circ }<\theta _+<116^{\circ }`$ or $`64^{\circ }<\theta _-<114^{\circ }`$ (for $`\mathrm{\Theta }_\mathrm{O}=83^{\circ }`$). The emission regions are unobservable under values of $`\theta _+`$ ($`\theta _-`$) outside this range.
Concerning the decompositions of the short-on pulse profiles, fits of a quality similar to those found for the main-on are not achieved. Additionally the values of the best fit parameters are different from each other and different from those of the main-on. The same holds true for the pulse profiles of the observation of the turn-on, which unfortunately ended when the flux had reached only about 2/3 of the maximum flux of this 35-day cycle, as the lightcurve of the All-Sky-Monitor (ASM) onboard Rossi X-ray Timing Explorer (RXTE) shows (Wilms 1999, private communication).
## 3 Interpretation
The results show that the pulse profiles of Her X-1 are compatible with the idea that the beam pattern is symmetric and that a distorted dipole field is responsible for the asymmetry of the pulse profiles. The analysis does not permit us to discriminate between exact symmetry and a small asymmetry of the beam pattern. In the case of a small asymmetry, test calculations suggest that the beam pattern derived above can be regarded as a fair approximation to the azimuthally averaged beam pattern. Considering the above results a large asymmetry of the beam pattern seems unlikely. A prominent asymmetry of the pulse profiles that is primarily due to an asymmetric beam pattern cannot in general be mimicked by displaced symmetric emission regions, because one choice of displacement will hardly produce simple and smooth ’false-symmetric constituents’ for many different energies and luminosities with their respective distinct asymmetric pulse shapes. However, the possibility that the asymmetry of the pulse profiles of Her X-1 is primarily due to an asymmetric beam pattern cannot be rigorously excluded. With this caveat in mind, we will in this section discuss the consequences of the reconstructed symmetric beam pattern.
### 3.1 Beam Pattern
The beam pattern can also be plotted as a polar diagram with the magnetic axis ($`\theta =0^{\circ }`$) as symmetry axis. This is done in Figure 6. It shows the $`\theta _+`$-solution for the beam pattern in the energy ranges 6.0 - 8.3 keV (solid line) and 20.0 - 23.0 keV (dashed line). In the overlap range the mean values of the single-pole contributions are plotted. Each beam pattern is normalized so that the total power emitted into the observable solid angle is unity. The $`\theta _-`$-solution can be obtained by turning the diagram upside down. No information on the beam pattern is available in the shaded regions.
The visibility of the emission region up to an angle of at least $`116^{\circ }`$ is due to a lateral extension of the emission region along the neutron star surface, emission of radiation from the plasma at a certain height above the pole and relativistic light deflection near the neutron star surface. We can get an idea of the effect of light deflection if we imagine the emission to be originating from a hypothetical point source located at the pole of the neutron star. With an assumption about the ratio of the radius of the neutron star $`r_\mathrm{n}`$ to its Schwarzschild-radius $`r_\mathrm{s}`$, the asymptotic angle $`\theta `$ under which the magnetic axis is seen by the distant observer can be transformed into the intrinsic angle $`\vartheta `$ under which the radiation is emitted from the point source (see Figure 7). Figure 8 shows how this transformation changes the asymptotic beam pattern of Figure 6 for $`r_\mathrm{n}/r_\mathrm{s}=2.8`$. Again the emission pattern is normalized to have an integrated power of unity. It also illustrates the necessity of taking the effects of relativistic light deflection into account when modelling the emission regions, as has been previously pointed out by other authors (e.g. Nollert et al. 1989).
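For the hypothetical point source this transformation is the usual Schwarzschild light-bending integral. The sketch below (our illustration, not code from this analysis; SciPy assumed) maps the intrinsic emission angle $`\vartheta `$ to the asymptotic angle $`\theta `$ for $`r_\mathrm{n}/r_\mathrm{s}=2.8`$:

```python
import numpy as np
from scipy.integrate import quad

def asymptotic_angle(theta_e, rn_over_rs=2.8):
    """Asymptotic angle theta (rad) of a photon leaving radius r_n at angle
    theta_e to the radial direction, in a Schwarzschild spacetime."""
    rs, rn = 1.0, rn_over_rs
    b = rn * np.sin(theta_e) / np.sqrt(1.0 - rs / rn)   # impact parameter
    # substitution u = 1/r: theta = int_0^{1/rn} b du / sqrt(1 - b^2 u^2 (1 - rs*u))
    integrand = lambda u: b / np.sqrt(1.0 - b * b * u * u * (1.0 - rs * u))
    theta, _ = quad(integrand, 0.0, 1.0 / rn)
    return theta

for deg in (30.0, 60.0, 85.0):
    print(deg, np.degrees(asymptotic_angle(np.radians(deg))))
```

Emission angles below $`90^{\circ }`$ already map to asymptotic angles well beyond $`90^{\circ }`$, which is why the emission region remains visible far past the limb.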
All beam patterns obtained from the various observations exhibit the same basic structure and energy dependence. Only the relative sizes of their substructures differ. The overall structure is quite complex as can be seen from the representative beam patterns in Figure 6. It has an increasing component towards the direction of the magnetic axis. Near the highest angles of the visible range the flux has a maximum at $`\theta _+\approx 108^{\circ }`$. These major components can be interpreted as a pencil- and a fan-beam respectively. The relative size of the fan-beam component decreases with increasing energy. Another relatively small feature occurs at $`\theta _+\approx 80^{\circ }`$. Above 15 keV the beam pattern has an additional increasing component at $`\theta _+>114^{\circ }`$. This feature seems to become dominant above 28 keV and might therefore be responsible for the observed widening of the main peak of the pulse profile in this energy range (Soong 1990a, Kuster 1998, private communication). The occurrence of such a feature in this energy regime indicates a possible relation with electron cyclotron absorption at about 40 keV, favoured by many authors (e.g. Gruber et al. 1999). Unfortunately the energy resolution and the statistics of the data covering the range above 30 keV available to us and suitable for the analysis were not good enough to give an insight into this property of the beam pattern.
The beam patterns describe the flux as a function of viewing angle $`\theta `$ in the various energy ranges. The results can also be plotted as energy dependent spectra showing the flux depending on energy for various viewing angles. The left panel in Figure 9 shows 12 beam patterns in the energy range between 0.92 and 26.0 keV obtained from pulse profiles of an EXOSAT observation (Kahabka 1987). Since in the pulse profiles the response of the detectors is not considered, the flux of the beam patterns derived from the pulse profiles is normalized at the arbitrarily chosen angle $`\theta _+=90^{\circ }`$ (indicated by an arrow). The angular range is divided into four sections in which the main features of the beam patterns are located. The other panels of Figure 9 show spectra at various viewing angles $`\theta _+`$. Due to the normalization, the spectrum at the angle of normalization is just a horizontal line and all other spectra are relative to that particular one. Each spectrum contains two curves corresponding to the ME-Argon and the ME-Xenon proportional counters of EXOSAT. It can be easily seen that the spectra in section III, interpreted as fan-beam, are very soft compared to the spectra in section I, interpreted as pencil-beam. The spectra in section IV, interpreted as the high energy feature above, are even harder than those in section I. It should be pointed out that there is a great difference between the kind of spectra presented here and spectra obtained from pulse phase spectroscopy. At a particular pulse phase the poles are generally seen under different viewing angles and the spectra from both poles are always superposed. Nevertheless we can identify the sections in the beam pattern that are responsible for features in the pulse profile. Then we compare the spectra in these sections with spectra at the phases where the corresponding features in the pulse profile occur. E.g. section III of the beam pattern corresponds to the maxima of the second single-pole contribution around phases 0.15 and 0.45 (see Figure 2) which are responsible for the shoulder in the leading edge and the secondary maximum in the trailing edge of the peak of the pulse profile. The hardness ratio at these parts of the pulse profile is relatively low (see e.g. Deeter et al. 1998), which is consistent with the soft spectra in section III. On the other hand the hardness ratio at the peak of the pulse profile is very high corresponding to the sections I and IV where the spectra are relatively hard as well.
Due to the anisotropy of the beam pattern the flux depends on the viewing angle and therefore on the location of the poles and the inclination of the system. Hence the observed luminosity of the pulsar also depends on the geometry and the inclination. Since we expect other pulsars to have similar anisotropic beam patterns but different geometries and inclinations, the fact that none has a luminosity $`L_\mathrm{x}\gtrsim 10^{38}`$ erg/s indicates that the trend of the flux to increase towards the direction of the magnetic axis can be expected to reverse at small viewing angles. This would be consistent with the picture of the radiation escaping into the direction of the magnetic axis being blocked due to electron cyclotron absorption.
Since the components identified in the energy dependent beam pattern and the corresponding parts of the pulse profile directly reflect the properties of the processes of the emission regions, the beam pattern should be further compared with emission models.
### 3.2 Evolution of the Pulse Profile with the 35-day Cycle
The evolution of the pulse profile with the 35-day cycle has been studied intensively by many authors (e.g. Kahabka 1987, Ögelman & Trümper 1988, Soong et al. 1990b, Scott 1993). Deeter et al. (1998) summarize the observations establishing that the changes in pulse profile throughout the course of a 35-day cycle are systematic. Several attempts have been made to explain the change of the pulse shape with the 35-day phase (e.g. Bai 1981, Trümper et al. 1986, Petterson et al. 1991). In this section we discuss a scenario in which the column densities along the lines of sight onto the poles are different due to a partial obscuration of the neutron star by the inner edge of the accretion disk. This results in a different attenuation of the polar contributions.
As observed during a two-day-long continuous monitoring with RXTE, the pulse shape of Her X-1 does not change significantly during turn-on, whereas the spectra show strong photoelectric absorption (Kuster et al. 1998). This is in contrast to the behaviour during the decline of the main-on, when the pulse shape undergoes systematic changes while no spectral changes are prominent (Deeter et al. 1998, and references therein). The observations concerning the spectral behaviour can be explained in terms of a twisted and tilted accretion disk (Schandl & Meyer 1994). At turn-on the outer edge of the warped disk recedes from the line of sight to the neutron star whereas at the end of the main-on the inner edge sweeps into the line of sight. Since the obscuring material at the outer edge of the disk is relatively cool compared to the very dense material at the inner edge, photoelectric absorption is only present during turn-on. By taking into account the scale heights of the corresponding parts of the disk, a warped disk profile also provides a mechanism to explain the different behaviour of the pulse shape. The density gradient in the obscuring material at the outer edge of the disk is relatively small. Thus the radiation emerging from both polar regions experiences the same absorption and the pulse profile does not change appreciably during the early stages of the main-on. Since on the other hand, the scale height of the inner edge of the disk is comparable to the size of the neutron star, the poles become obscured successively towards the end of the main-on. Therefore the radiation from one pole becomes attenuated more with respect to the other, leading to changes in the pulse profile. This situation is schematically illustrated in Figure 10.
The different attenuation of the radiation from the poles is apparent in the decompositions. Figure 11 shows two pulse profiles of an EXOSAT observation (Kahabka 1987) during one 35-day cycle at $`\mathrm{\Psi }_{35}=0.136`$ near maximum intensity (solid) and at $`\mathrm{\Psi }_{35}=0.234`$ during the decay phase of the main-on state (dotted). The shoulder in the leading edge and the secondary maximum in the trailing edge of the peak are less prominent in the decay phase. We find that we can model the pulse profile at the end of the main-on state with the decompositions found for the pulse profile at maximum intensity by scaling one component with respect to the other. The pulse shape plotted with crosses in Figure 11 is reproduced from the decompositions of the pulse shape at maximum intensity by scaling the second component by a factor of 0.7 and adding the unscaled first component. It indeed closely resembles the features of the pulse profile in the decay phase. We conclude that during the decay phase the neutron star was partly obscured.
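In terms of the decomposition this modelling is a one-parameter operation (sketch ours; the 0.7 scale factor is the value quoted above):

```python
import numpy as np

def attenuated_profile(f1, f2, scale=0.7):
    """Decay-phase model: the component from the pole behind the inner disk
    edge (here f2) is attenuated, the other component is left unchanged."""
    return np.asarray(f1, dtype=float) + scale * np.asarray(f2, dtype=float)
```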
The fact that the second component has to be scaled simply means that the radiation from the second pole is attenuated more than the radiation from the first pole. Therefore the second pole must be located on that side of the neutron star which is on the opposite side of the accretion disk with respect to the observer. This enables us to decide between the $`\theta _-`$- and the $`\theta _+`$-solution discussed in section 2.3. It follows that the second component must correspond to the higher values of the viewing angle $`\theta `$ and therefore the $`\theta _+`$-solution must be the correct one. In a previous analysis of pulse profiles of the X-ray binary Cen X-3 (Kraus et al. 1996), we have also found unique decompositions and the beam patterns and their energy dependence are quite similar to those of Her X-1. But we were not able to decide between the $`\theta _+`$- and the $`\theta _-`$-solution. However the similarity of the beam patterns suggests that the $`\theta _+`$-solution is the correct one for this pulsar, too.
Many authors have noted a narrowing of the main peak during the decay phase of the 35-day cycle (see Kunz 1996). Different attenuation of the components obtained in the analysis provides a natural explanation of this behaviour of the pulse profile.
As the amount of matter along the lines of sight increases, not only attenuation but also scattering of radiation from other directions into the line of sight increases. This leads to an increasing fraction of scattered flux in the pulse profile and the pulsed fraction <sup>1</sup><sup>1</sup>1$`\text{pulsed fraction}=1-\frac{\text{minimum flux of pulse profile}}{\text{mean flux of pulse profile}}`$ decreases. Other processes that lead to an increase of unpulsed flux are reprocessing of the direct beams by the interposed material or reflection from the disk (McCray et al. 1982). In other words the pulsed fraction is an indicator of the fraction of radiation that is coming directly from the polar regions. A pulse profile which contains a large fraction of scattered flux will have a small pulsed fraction. From such a pulse profile we cannot expect to be able to reconstruct the beam pattern. Figure 12 shows that indeed the pulse profiles for which an acceptable fit has been found (denoted by filled symbols) are just those with a high pulsed fraction. The fact that the analysis of the pulse profiles of the short-on state has not led to acceptable fits can then be understood in terms of the low value of their pulsed fraction. Accounting for possible attenuation, the components can be scaled in the fit procedure. For the 15 pulse profiles of three main-on observations with a flux of about 70% of the typical maximum flux of the main-on state, this leads to a significant decrease of the deviation $`\lambda _{\mathrm{red}}^2`$ <sup>2</sup><sup>2</sup>2 $`\lambda _{\mathrm{red}}^2=\frac{1}{N-\nu }\sum _\mathrm{N}(f_1(\mathrm{i})-f_2(\mathrm{i}))^2`$, where $`\nu `$ is the number of fit parameters between the two curves. These pulse profiles typically have a medium pulsed fraction.
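For reference, the two statistics used in this discussion, as defined in the footnotes, read in code form (our transcription):

```python
import numpy as np

def pulsed_fraction(profile):
    """1 - (minimum flux)/(mean flux) of a pulse profile."""
    p = np.asarray(profile, dtype=float)
    return 1.0 - p.min() / p.mean()

def lambda_red_sq(f1, f2, nu):
    """Reduced deviation between the overlapping curves:
    sum_i (f1_i - f2_i)^2 / (N - nu), with nu the number of fit parameters."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return np.sum((f1 - f2) ** 2) / (len(f1) - nu)
```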
Observations show that the spectral behaviour at X-ray turn-on is similar for the short-on and main-on states and that the pulse shape also changes during short-on (Deeter et al. 1998). This suggests that the configuration of the disk causing the spectral behaviour and the evolution of the pulse profile described above in the case of the main-on state is similar during short-on. Thus the outer part of the disk is responsible for the turn-on of the short-on state, whereas it ends when the inner edge of the disk passes into the line of sight.
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG).
# The Complex Phase Lag Behavior of the 3–12 Hz Quasi-Periodic Oscillations during the Very High State of XTE J1550–564
## 1 Introduction
The soft X-ray transient and black-hole candidate XTE J1550–564 was discovered in early September 1998 (Smith 1998) with the All Sky Monitor on board the Rossi X-ray Timing Explorer (RXTE). Subsequent observations with the RXTE Proportional Counter Array during the initial rise showed 0.08–4 Hz quasi-periodic oscillations (QPOs) in the power spectrum, superimposed on strong ($`\sim `$30% rms amplitude) band-limited noise (Cui et al. 1999). On 19–20 September 1998, a strong X-ray flare was detected with a peak luminosity of $`\sim `$6.5 Crab (Remillard et al. 1998). During this flare, QPOs near 185 Hz were discovered (McClintock et al. 1998; Remillard et al. 1999) simultaneously with low-frequency ($`<`$20 Hz) QPOs, strongly indicating that XTE J1550–564 was in the very high state (VHS) during these observations. The X-ray spectral properties and the rapid X-ray variability during the first part of the outburst were discussed by Cui et al. (1998), Sobczak et al. (1999), and Remillard et al. (1999). A second VHS episode began on March 4 1999, when QPOs near 280 and 6 Hz were detected (Homan, Wijnands, & van der Klis 1999a). These QPOs and the other rapid X-ray variability will be discussed by Homan et al. (1999b). In this Letter, we concentrate on the small subset of the observations discussed by Homan et al. (1999b) when XTE J1550–564 was in the VHS, showing QPOs above 100 Hz and around 6 Hz. We report on the very complex phase lag behavior of the low-frequency QPOs.
## 2 Observation and selection method
Our primary goal is to study the phase lag behavior of the low-frequency QPOs when XTE J1550–564 was in the 1999 VHS episode (see Homan et al. 1999b). The public RXTE observations used are listed in Table 1. Data were accumulated in several different observational modes, which were simultaneously active. We used the ’Binned’ mode data, with a time resolution of 4 ms in 8 photon energy bands (covering 2–13.1 keV), and the ’Event’ mode data, with a time resolution of 16 $`\mu `$s in 16 bands (13.1–60 keV). To determine the QPO properties, we calculated power and cross spectra using 16 or 256-s intervals and a Nyquist frequency of 128 Hz. To correct for the dead-time effects on the lags, we subtracted the average 50–125 Hz cross-vector from the cross spectra (see van der Klis et al. 1987).
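A minimal sketch of this procedure (ours, not the pipeline actually used; the sign convention is chosen to match the text, where negative lags mean that the soft photons arrive later; e.g. nper = 4096 bins of 4 ms gives 16-s segments) is:

```python
import numpy as np

def phase_lag_spectrum(soft, hard, dt, nper):
    """Average the cross spectrum of two simultaneous light curves over
    segments of nper bins, apply the 50-125 Hz dead-time correction, and
    return frequencies, phase lags, and the corrected cross vector."""
    nseg = len(soft) // nper
    cross = np.zeros(nper // 2 + 1, dtype=complex)
    for i in range(nseg):
        s = np.fft.rfft(soft[i * nper:(i + 1) * nper])
        h = np.fft.rfft(hard[i * nper:(i + 1) * nper])
        cross += np.conj(h) * s          # angle < 0: soft photons arrive later
    cross /= nseg
    freq = np.fft.rfftfreq(nper, dt)
    band = (freq >= 50.0) & (freq <= 125.0)
    cross -= cross[band].mean()          # subtract average 50-125 Hz cross vector
    return freq, np.angle(cross), cross

def band_lag(freq, cross, nu0, fwhm):
    """Phase lag averaged over one FWHM centred on a QPO at frequency nu0."""
    sel = np.abs(freq - nu0) <= 0.5 * fwhm
    return np.angle(cross[sel].mean())
```

The helper band_lag corresponds to the QPO-lag averaging over one FWHM used in §§ 3 and 4 below.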
The VHS observations can be divided according to the properties of the QPOs below 20 Hz. We distinguish two types based on the Q value (frequency divided by FWHM) of the 6 Hz QPO<sup>2</sup><sup>2</sup>2Although the frequency of this QPO varied between 5 and 7 Hz, for reasons of clarity this QPO will be referred to as the ’6 Hz QPO’., and its harmonic structure: one type with a relatively broad (Q $`<3`$) 6 Hz QPO with a harmonic at 12 Hz (type A QPOs; § 3; Fig. 5a), and no other detectable harmonics, and one with a relatively narrow (Q $`>6`$) 6 Hz QPO with harmonics at 3, 12, 18 (although not always detectable due to the limited statistics), and possibly at 9 Hz (type B QPOs; § 4; Fig. 5b). The observations are listed in Table 1 according to the type of QPO they contain. Although the power spectra of the 40401-01-57-00 and 40401-01-58-01 observations show fewer harmonics and stronger band-limited noise, these observations are classified as containing type B QPOs because of the high Q ($`>6`$) values of the 6 Hz QPOs.
## 3 Type A low-frequency QPOs
Figure 5a shows a typical power spectrum (i.e., for observation 40401-01-50-00) containing type A QPOs. Clearly visible is the 6 Hz QPO with a shoulder at higher frequencies. When using the data combined for the total RXTE energy range (effectively 2–60 keV) and fitting the data with two Lorentzian or two Gaussian functions, the frequency of the second QPO is not twice the frequency of the 6 Hz QPO. The ratio between the frequency of the second QPO and the frequency of the 6 Hz QPO is 1.5–1.8 (depending on observation and on the fit function [Lorentzian or Gaussian]). Fitting the total energy band power spectrum with two QPO functions which are harmonically related to each other results in fits which are unacceptable. However, by using only the data in the range 13.1–48 keV (or 8.0–48 keV when the statistics did not allow us to detect the 12 Hz QPO in the 13.1–48 keV range), the 6 Hz QPO is only marginally detectable and the most pronounced QPO is the 12 Hz QPO. The ratio between its frequency and that of the 6 Hz QPO (1.91–2.06; depending on observation) indicates that a harmonic relation with the 6 Hz QPO is likely. Clearly, the structure of the power spectrum is more complex than two harmonically related QPOs. Either the QPO shapes are more complex than Lorentzian or Gaussian functions, or an extra noise component (maybe a QPO) is present between the two QPOs.
A typical phase lag spectrum (i.e., for observation 40401-01-50-00) calculated between 2.5–6.5 and 6.5–48.0 keV is shown in Figure 5c. It can clearly be seen that in the frequency range 6–12 Hz, the soft photons ($`<`$ 6.5 keV) lag the hard ones ($`>`$ 6.5 keV) by $`\sim `$0.3 radian. Negative lags mean that the soft photons arrive later than the hard ones (a soft lag). The phase lag of the power law noise component below 1 Hz was consistent with being zero (0.03$`\pm `$0.04 radian for the frequency range 0.01–1 Hz). An extrapolation of the phase lags (assuming a constant phase lag) of this noise component into the QPO frequency range cannot explain the phase lags observed for the QPOs. The lags determined for the QPOs must therefore be intrinsic to the QPOs. In the phase lag spectrum, the QPOs cannot be distinguished from each other or from the possible extra noise component in between them. Therefore, any interpretation of the QPO lags should be performed with caution.
In order to determine the phase lags as a function of photon energy, the frequency and the FWHM of both QPOs are needed. We decided to determine the frequency and FWHM of the 6 Hz QPO by fitting a Gaussian function in the total energy band. A Gaussian function fitted slightly better than a Lorentzian, although both functions yielded acceptable fits. Because the type B QPOs need to be fitted with Gaussian functions in order to obtain acceptable fits (§ 4), we decided, for consistency, to use also Gaussian functions for the type A QPOs (note that a Gaussian function will result in a somewhat smaller FWHM than a Lorentzian function). After determining the properties of the 6 Hz QPO (FWHM 2–5 Hz; frequency 5.2–6.0 Hz) by using the full energy range, we determined the FWHM (2.5–10 Hz) of the 12 Hz QPO in the energy range above 8.0 or 13.1 keV by fixing its frequency to twice the 6 Hz QPO frequency. Usually, the fits were acceptable, although broad excess noise was sometimes present under the 12 Hz QPO. When including an extra component in the fit to account for this noise, the FWHM of the 12 Hz QPO was not significantly altered.
Using the thus obtained QPO parameters, we determined the frequencies and the FWHM of the QPOs for the different type A observations. We determined the phase lags between different energy ranges by calculating the average lags in the frequency range determined by the QPO FWHM centered on the QPO frequency. As a reference band, we used the 4.4–5.1 keV band. Figure 5a shows the phase lags as a function of photon energy for observation 40401-01-50-00. The soft photons ($`<`$5 keV) of the 6 Hz QPO (open squares) lag the hard ones ($`>`$5 keV) by $`\sim `$1.3 radian; the soft photons of the 12 Hz QPO (open triangles) lag the hard ones by $`\sim `$0.6 radian (they are $`<3\sigma `$ different from the 6 Hz QPO lags). The 6 Hz QPO lags in all the other observations with type A QPOs were consistent with these lags. Due to limited statistics, the 12 Hz QPO lags could only significantly be determined for observations 40401-01-50-00 and 40401-01-51-00. The measured lags and the upper limits obtained for the other observations were consistent with each other.
## 4 Type B low-frequency QPOs
Figure 5b shows a typical power spectrum (i.e., for observation 40401-01-53-00) containing type B low-frequency QPOs. Clearly visible is the very significant ($`>50\sigma `$) 6 Hz QPO (with Q $`>6`$) and those near 3, 12, and 18 Hz. The 3 Hz QPO seems to be the fundamental. Between the 6 and 12 Hz QPOs excess noise near 9 Hz is present, possibly due to another harmonic. A typical phase lag spectrum is shown in Figure 5d (calculated between 2.5–6.5 and 6.5–48.0 keV). Owing to the fact that the QPOs are relatively narrow, the individual components in the phase lag spectrum can more easily be distinguished than for the type A QPOs. Most striking is the fact that the 3 and 12 Hz QPOs have soft lags (meaning that the photons below 6.5 keV arrive later than the ones above 6.5 keV) of 0.3–0.4 radian, whereas the 6 Hz QPO has a hard lag of 0.3 radian. Thus, the lags in the different QPO harmonics have different signs.
The power law noise between 0.01–0.5 Hz had a marginally significant soft lag of 0.05$`\pm `$0.02 radian. Above 0.5 Hz the lag increased (and became significant) to about 0.23$`\pm `$0.03 radian between 1 and 2.5 Hz. This is the frequency range where an extra noise component between 1 and 3 Hz is present, indicating that the observed lags in this frequency range are most likely those for this extra noise component and not for the power law noise. The above reported lag for the 3 Hz QPO could also be (partly) due to the lag of this extra noise component, but it is difficult to disentangle the two components. Clearly, an extrapolation of the soft lags (assuming a constant phase lag) below 3 Hz into the 6 Hz QPO frequency range cannot explain the hard lags observed for this QPO.
To determine the frequencies and the FWHM of these QPOs, we fitted the power spectra with several Gaussian functions (one for each QPO) whose centroids were harmonically related (using Lorentzian functions resulted in unacceptable fits). By including a power law function for the noise component below 1 Hz, and two extra Gaussians to fit the excess noise at 9 Hz and the noise between 1 and 3 Hz, we obtained acceptable fits with $`\chi _{\mathrm{red}}^2\approx 1`$. In this way, we obtained the frequencies and FWHMs of the various QPO harmonics. We determined the phase lags of the 3, 6, and 12 Hz QPOs in different energy ranges by calculating the lags in a frequency range of one FWHM centered on the QPO frequency. As a reference band we again used the 4.4–5.1 keV band. The statistics of the 18 Hz QPO were not sufficient to allow detection of its lags. During some observations, the statistics of the other QPOs were also insufficient to allow significant detections.
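A minimal sketch of such a harmonically tied model is given below; the 9 Hz and 1–3 Hz extra Gaussians are omitted for brevity, and all parameter names and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def harmonic_model(f, pl_norm, pl_idx, nu, a3, w3, a6, w6, a12, w12, a18, w18):
    """Power law plus Gaussians at nu, 2nu, 4nu, 6nu (the 3, 6, 12, and 18 Hz
    harmonics); all centroids are tied to the single fundamental nu."""
    def g(a, mu, fwhm):
        s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return a * np.exp(-0.5 * ((f - mu) / s) ** 2)
    return (pl_norm * f**(-pl_idx) + g(a3, nu, w3) + g(a6, 2 * nu, w6)
            + g(a12, 4 * nu, w12) + g(a18, 6 * nu, w18))

# usage, with f, power, and err taken from the averaged power spectrum:
# popt, pcov = curve_fit(harmonic_model, f, power, sigma=err,
#                        p0=[0.5, 1.0, 3.0, 1, 0.5, 2, 0.5, 1, 1, 0.5, 2])
```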
A typical example of the resulting phase lags (i.e., for observation 40401-01-53-00) as a function of photon energy is shown in Figure 5b. The soft photons ($`<`$5 keV) of the 3 and 12 Hz QPOs lag the hard ones ($`>`$5 keV) by as much as 0.6 radian; the soft photons of the 6 Hz QPO precede the hard ones by about 0.6 radian. Again it is clearly visible that the phase lags of the different QPO harmonics have different signs. The QPO lags in the other observations with type B QPOs (except for observations 40401–01–57–00 and 40401–01–58–01, see below) were consistent with these lags, although for observation 40401–01–51–01 the hard lags of the 6 Hz QPO were only 0.3 radian, i.e., half of what is observed for the other observations (the soft lags of the 3 and 12 Hz QPOs during this observation are consistent with those obtained for the QPOs during the other observations).
The QPOs during the 40401–01–57–00 and 40401–01–58–01 observations do not follow the general type B picture. Their power spectra are shown in Figures 5a and b. During observation 40401–01–57–00, strong band-limited noise is present besides the type B QPOs (the Q is $``$6, and QPOs at 3 and 12 Hz are visible). The corresponding phase lag spectrum is shown in Figure 5c. Up to about 8 Hz the soft photons (below 6.5 keV) lag the hard ones (above 6.5 keV). Around 8–9 Hz the lags suddenly change sign and become positive. The measured lags for these QPOs are likely a combination of the intrinsic QPO lags and the noise lags. The 3 Hz QPO phase lag is heavily affected by the lags of the underlying noise component; the 6 Hz QPO is also affected, though less than the 3 Hz QPO. Figure 5c shows the obtained phase lags versus photon energy for the different QPOs. Clearly, the lags measured in the frequency ranges of the 3 and 6 Hz QPOs now have the same sign: the soft photons lag the hard ones by 0.2 and 0.3 radian, respectively. However, the 12 Hz QPO now has a reversed sign: the soft photons precede the hard ones by $``$0.2 radian. This behavior is quite different from what is observed for the other type B QPOs.
During observation 40401–01–58–01, the source probably made a state transition within several minutes (see Homan et al. 1999b). We only used the data after this transition, when two QPOs at 3 and 6 Hz were visible in the power spectrum (see Fig. 5b). An extra noise component in the same frequency range as the QPOs is visible. The phase lag spectrum of this observation is shown in Figure 5d. It is impossible to distinguish the 3 Hz QPO lag from that of the noise component. The combination of these two components results in a soft lag of $`0.2`$ radian. The 6 Hz QPO has hard lags. The absolute amplitude of its lag ($`0.2`$ radian) is smaller than those of the other type B QPOs. This is most likely due to dilution by the broad band-limited noise component, which has soft lags. The phase lags of the two QPOs versus energy are shown in Figure 5d.
## 5 Discussion
We have presented the complex phase lag behavior of the low-frequency QPOs which are observed during the March 1999 VHS episode of XTE J1550–564. We distinguish two QPO types: one type with a relatively broad (Q $`<3`$) 6 Hz QPO with a harmonic at 12 Hz, and one with a relatively narrow (Q $`>6`$) 6 Hz QPO with harmonics at 3, 12, 18, and possibly 9 Hz. The QPO phase lag behavior differs between the two types. The first type always has soft lags, with a maximum amplitude of 0.6–1.3 radian. For the other type, the different harmonics have lags of different sign; the maximum absolute amplitudes of the lags are also 0.6–1.3 radian. These phase lags are intrinsic to the QPOs. Extrapolation of the phase lags (assuming a constant phase lag) of the power law noise at frequencies below the QPO frequencies cannot explain the observed QPO lags. The noise lags are consistent with zero, with upper limits of 0.06–0.15 radian, which is significantly lower than the QPO lags (0.6–1.3 radian). When an extra noise component (in addition to the power law noise) is present in the QPO frequency range, the phase lag behavior becomes more complex, but detailed studies of the lags are difficult due to the dilution by this noise component.
Our results show that the phase lag behavior of the low-frequency QPOs in XTE J1550–564 is quite complex. The different signs of the lags of different harmonics are especially unexpected. These different signs strongly indicate that differences in light-travel time between the soft and hard photons cannot account for the lags; in that case, one would expect the lags of all harmonics to have the same sign. This rules out the possibility that the lags are entirely due to a Comptonizing region surrounding the area where the QPOs are produced. The different signs demonstrate that the QPO waveform at low photon energies is significantly different from that at higher energies. The same mechanism which produces the QPOs most likely also causes this difference.
The phase lags of low-frequency VHS QPOs have previously been measured only for the black-hole transient GS 1124–683 (Takizawa et al. 1997). Because we see significant differences between different observations of XTE J1550–564, it is difficult to compare the results for these two sources. The power spectrum of the 11 Jan 1991 observation of GS 1124–683 (see Fig. 3 of Takizawa et al. 1997) shows QPOs similar to those in our power spectra with type B QPOs: several harmonically related QPOs with a Q of $``$6 for the 6 Hz QPO. The lags of the 6 Hz QPO in GS 1124–683 (between photons above 3 keV and those at 3 keV) have the same sign (hard lags) as our type B QPOs, but smaller amplitudes (0.2–0.4 radian). Another possible difference is that in GS 1124–683 the photons below 3 keV also lag the $``$3 keV ones. This energy range cannot be probed accurately with RXTE, so similar behavior cannot be excluded in XTE J1550–564. The non-detection of any lags for the noise is consistent with our results. The main difference is that for GS 1124–683 the lags have the same sign for both the 3 and the 6 Hz QPOs (see Takizawa et al. 1997), while for the type B QPOs in XTE J1550–564 the signs are different. Also, during our only observation with strong band-limited noise (40401-01-57-00), the noise and the QPOs had soft lags, contrary to what is seen for GS 1124–683. A uniform picture for the low-frequency black-hole QPOs cannot be constructed from the results available so far.
The only other systems for which the phase lags of the low-frequency QPOs have been studied are the neutron star low-mass X-ray binaries (LMXBs). QPOs around 4–7 Hz are observed in the intrinsically brightest of these systems (van der Klis 1995), and occasionally in the lower-luminosity ones (Wijnands, van der Klis, & Rijkhorst 1998; Wijnands & van der Klis 1999; Revnivtsev et al. 1999). However, so far no harmonics have been observed, and the phase lags of these QPOs are either much larger (up to $`\pi `$ radian; Mitsuda & Dotani 1989) or consistent with zero (e.g., Wijnands et al. 1999). The other type of low-frequency QPOs seen in the neutron star LMXBs are the 15–70 Hz QPOs. Often, two harmonically related QPOs are seen. Their lags are around 0.3–0.6 radian, but the hard photons always lag the soft ones (e.g., Vaughan et al. 1994), which is different from XTE J1550–564. Also, in these neutron star QPOs the amplitude of the phase lag of the fundamental is about half that of its second harmonic, whereas in XTE J1550–564 the absolute phase lags of the different harmonics are very similar. On the basis of the lags, it is doubtful whether the QPOs in the neutron star LMXBs are related to the low-frequency QPOs in XTE J1550–564.
This work was supported by ASTRON (grant 781-76-017), by NOVA, and by NASA through Chandra Postdoctoral Fellowship grant number PF9-10010 awarded by CXC, which is operated by SAO for NASA under contract NAS8-39073. This research has made use of data obtained through the HEASARC Online Service, provided by the NASA/GSFC.
# Electroweak symmetry breaking in Higgs mechanism with composite operators and solution of naturalness
## I. Introduction
At present, the Standard Model enjoys almost total success in experimental measurements . The only remaining white spot on its body is the empirical verification of the mechanism for the spontaneous breaking of electroweak symmetry. In this respect, the minimal model involving a single local Higgs field has a disadvantage: the stability of the potential under quantum loop corrections requires the quadratic divergency in the self-action to be restricted by the introduction of a “low” energy cut-off $`\mathrm{\Lambda }\sim 10^3`$ GeV, which is not a natural physical scale, lying far below the desirable ones : the GUT scale, $`M_{\mathrm{GUT}}\sim 10^{16}`$ GeV , or even the Planck mass, $`M_{\mathrm{Pl}}\sim 10^{19}`$ GeV. The reason why $`\mathrm{\Lambda }`$ is so small has to originate beyond the Standard Model. Two highways to a “new physics” are the most popular. The first one is technicolor, postulating an extra-strong interaction for new technifermions, which form “QCD-like” condensates, breaking down the electroweak symmetry and giving masses to the ordinary gauge bosons. Despite some problems with the generation of realistic mass values for the quarks and leptons and with the suppression of flavor changing neutral currents, the extended technicolor provides quite a clear picture of what happens in the region deeper than $`10^3`$ GeV. However, the strictest objection against this way is the comparison with the current measurements, which disfavor the technicolor models possessing calculability . A general consideration of models with the condensation of heavy fermions is reviewed in ref. , while a brilliant presentation of the ideas on the electroweak symmetry breaking with composite operators, of the techniques, and of the results is given in a comprehensive survey by C.T. Hill and E.H. Simmons . However, the condensation, in general, does not provide us with a solution of naturalness. In fact, this approach reformulates the problem as a fine-tuning phenomenon, since the separation of the dynamics responsible for the composite operators at a high scale from the low-energy electroweak physics takes place at effective couplings tuned to some critical values. Therefore, we need additional arguments in order to address naturalness in the framework of the condensation mechanism with composite operators. We present an idea in this direction below.
The second way is supersymmetry, which reforms the quadratic divergency in the self-action of the Higgs field into a logarithmic one, so that the scale $`\mathrm{\Lambda }`$ is prescribed to be the splitting between the particles of the Standard Model and their super-partners. Therefore, the supersymmetry has to be broken in a manner conserving the logarithmic behavior of the renormalization, which is an additional challenge to study and a degree of ambiguity. However, the advantage is the stability of the Higgs potential, so that $`\mathrm{\Lambda }`$ certainly is a reasonable scale reflecting the physics of the supersymmetric theory. What remains is the question: why is the basic SUSY scale so “low” in comparison with the GUT scale? Hence, naturalness is again the problem, now posed in a sharper context.
If the ultraviolet cut-off energy in the loop calculations is placed close to the Planck scale (see Fig. 1<sup>1</sup><sup>1</sup>1The figure originally appeared in ref. , and it is taken from ref. , while the two-loop consideration recently was done in ref. .), the Standard Model suffers from an inherent inconsistency except in a narrow window of possible values of the Higgs particle mass: $`m_H=160\pm 20`$ GeV, which does not contradict the value following from the precise measurements of electroweak parameters in electron-positron annihilation at the $`Z`$ boson peak.
The reasons for this inconsistency are the following: At lower masses the vacuum stability is broken, i.e., the quartic coupling constant of the scalar field changes its sign . At higher masses the theory enters the strong self-interaction regime, which indicates that the quartic coupling constant becomes infinite (akin to the Landau pole) at a scale less than the offered cut-off . If the scale of the ultraviolet cut-off in the SM is much lower than the Planck scale, then the region of higgs masses providing the SM consistency is wider. However, such scales are not natural. A low cut-off scale should indicate a new dynamics. While the vacuum instability is an unavoidable physical constraint, the phase of strong higgs self-interaction could be treated in the framework of the following representation: The scalar higgs can be described in terms of the local field below an ultraviolet cut-off $`\mathrm{\Lambda }`$ placed close to the onset of the strong regime. At virtualities higher than $`\mathrm{\Lambda }`$, the strongly self-coupled higgs is no longer a fundamental, local quantum. The dynamics should be described by means of weakly interacting particles, so that a composite operator with appropriate quantum numbers has to correspond to the higgs in the ‘dual’ limit, implying that the effective potential of the composite operator develops a vacuum expectation value for the global (independent of the space-time point) source of the operator. This strong self-interaction regime could be realized with no involvement of an extended underlying theory like technicolor dynamics, since some composite operators can develop an appropriate effective potential in the framework of the standard electroweak symmetry.
Our assumptions are the following:
1. We choose a form of composite operators describing the nonlocal phase of the higgs in the strong self-interaction regime (SSIR) and suppose a connection of these operators to the higgses. The assumption of nonlocality of the higgses allows us to replace the strong self-interaction regime of the theory with local Higgs fields by the weak self-interaction regime (WSIR) of the sources for the composite operators.
2. The interactions of fermions and gauge bosons in the SSIR are given by the dynamics of SM with no local scalar higgses as well as no extensions like a technicolor or so.
3. Concerning the position of the scale $`\mathrm{\Lambda }`$, denoting the infrared cut-off in the calculations with the composite operators as well as the ultraviolet cut-off in the local theory with the scalar higgs, we put it at the (infrared) fixed point for the Yukawa coupling constant of the heaviest fermion in the local theory. The numerical value of $`\mathrm{\Lambda }`$ is given by the masses of the weak-interaction gauge bosons.
4. We consider Yukawa couplings of the only heaviest fermion generation in the SM.
5. In the SSIR we introduce the ultraviolet cut off $`M\mathrm{\Lambda }`$. At $`M`$ the electroweak symmetry is exactly restored.
6. We match the effective potential of sources for the composite operators with the potential of corresponding local scalar fields at the scale $`\mathrm{\Lambda }`$.
The corresponding divisions of virtualities are presented in Fig. 2.
It is important to stress that the global sources of the composite operators develop the Higgs-like potential in the region $`[\mathrm{\Lambda };M]`$, so that the corresponding couplings of the self-interaction as well as the Yukawa constants fall off to zero as the virtuality increases from $`\mathrm{\Lambda }`$ to $`M`$. Therefore, the dynamics of local interactions is perturbative in the region $`[\mathrm{\Lambda };M]`$, while the notion of the “strong self-interaction regime”, strictly speaking, applies to the theory with the local Higgs field, i.e., we replace $`\mathrm{SSIR}|_{\mathrm{local}}\to \mathrm{WSIR}|_{\mathrm{composite}}`$.
Postponing a supersymmetric extension for a time, in this paper we develop a new insight into the breaking of electroweak symmetry by exploring the dynamics of the SM to calculate an effective potential for the source of a bi-local operator with no technicolor interactions. The physical reasoning for the choice of the operator under study was hinted at in ref. . So, in the second order of perturbation theory we write down the following contribution to the action:
$$iS_{2m}=\int dx\,dy\,𝐓[\overline{L}_L(x)\text{ / }B(x)L_L(x)\overline{L}_R(y)\text{ / }B(y)L_R(y)]\,4\pi \alpha _Y\frac{Y_L}{2}\frac{Y_R}{2},$$
(1)
where we have introduced the notations $`L_L`$ for the left-handed doublets and $`L_R`$ for the right-handed singlets, $`B`$ is the gauge field of the weak hypercharge $`Y`$, and $`\alpha _Y`$ is its coupling constant. Note that the gauge field of the local $`𝖴(\mathrm{𝟣})`$-group is the only one interacting with both the left-handed and right-handed fermions. If we suggest nontrivial vacuum correlators with the characteristic distance $`r\sim 1/v`$
$`\langle 0|𝐓[\text{ / }B(x)L_L(x)\overline{L}_R(y)\text{ / }B(y)]|0\rangle `$ $`\sim `$ $`{\displaystyle \frac{\delta (x-y)}{v^4}}\langle 0|𝐓[\text{ / }B(x)L_L(x)\overline{L}_R(x)\text{ / }B(x)]|0\rangle `$ (2)
$`\sim `$ $`\delta (x-y)v,`$
supposing that the scales of expectations for $`BB`$ and $`L_L\overline{L}_R`$ are driven by $`v^2`$ and $`v^3`$, respectively, then the Dirac masses of fermions are determined by the action<sup>2</sup><sup>2</sup>2By the way, Eq.(3) implies that in the SM the neutrino is massless since its right-handed component is decoupled, $`Y_R=0`$.
$$S_{fm}\sim \int dx\,\overline{L}_L(x)L_R(x)\,v\,4\pi \alpha _Y\frac{Y_L}{2}\frac{Y_R}{2}+\mathrm{h}.\mathrm{c}.$$
(3)
In this way we extend the SM action by the initial bi-local bare $`J`$-term
$$S_{ib}=\int dx\,dy\,N_JJ(x,y)[\overline{L}_R(x)\underset{}{\text{ / }B^{}(x)\text{ / }B^{}(y)}L_L(y)]-\int dx\,\varphi (x)J(x,x)+\mathrm{h}.\mathrm{c}.,$$
where $`N_J=\pi \alpha _YY_LY_R`$, and $`\underset{}{}`$ denotes the propagation of the transverse U(1)-gauge field $`B_\mu ^{}=(g_{\mu \nu }-\partial _\mu \partial _\nu /\partial ^2)B^\nu `$, which is independent of the longitudinal mode, so that
$$\underset{}{B_\mu ^{}(x)B_\nu ^{}(0)}=-ig_{\mu \nu }\int \frac{d^4p}{(2\pi )^4}e^{-ipx}\frac{1}{p^2}$$
to the leading order of perturbation theory. At the bare level, the equation of motion for the bi-local field results in the straightforward substitution of the local field $`\varphi `$, as it stands in the above consideration of the correlators developing the vacuum expectation values. After the analysis of divergences in the $`J`$-dependent Green functions, the corresponding counter-terms must be added to the action. Then the $`J`$-source can be integrated out or renormalized, which results in a Higgs-like action containing couplings to the fermions as well as a suitable potential to develop the spontaneous breaking of the electroweak symmetry.
We stress that, apart from the operators described above, no other suitable composite operators with the quantum numbers relevant to the Higgs interactions, i.e., providing the generation of fermion masses through Yukawa-like couplings, appear in the second order of the SM gauge interactions.
In this paper we calculate the effective potential up to the quartic term for the sources corresponding to the bi-local composite operators of quarks and leptons to the one-loop accuracy of renormalization in the SM. The normalization condition of potential parameters: $`\mu ^2`$ and $`\lambda `$ standing in
$$V(J^{\dagger },J)=-\mu ^2J^{\dagger }J+\lambda (J^{\dagger }J)^2,$$
is strictly defined in the SM, since we do not invoke any additional interactions. Therefore, both $`\mu ^2`$ and $`\lambda `$ for a nonfundamental source must be equal to zero, exactly, i.e. $`V=0`$, which, however, can be satisfied at a single scale $`M`$ only, because of the logarithmic renormalization of the couplings, so that
$$\mu ^2(M)=0,\lambda (M)=0.$$
(4)
It is essential that the choice of composite operators conforms to the effective action of the SM in the second order over the gauge couplings. Otherwise, the introduction of arbitrary composite operators with the given properties with respect to the gauge symmetry generally would not imply the imposition of the matching condition (4), which is extremely important, since it removes the uncertainty of the potential due to a finite renormalization of the parameters.
Below $`M`$, the mass parameter $`\mu ^2(\mathrm{\Lambda })`$, depending on the “infrared” cut-off $`\mathrm{\Lambda }`$, is positive, and the electroweak symmetry is broken down. So, we suppose that the bi-local representation is valid in the range of virtualities $`[\mathrm{\Lambda };M]`$, while below $`\mathrm{\Lambda }`$ we can explore the local Higgs fields.
As was shown in ref. , a variety of composite operators appropriate for the Higgs quantum numbers can be rearranged so that practically arbitrary values of the higgs mass or the $`t`$ quark mass could be derived. In other words, in order to get a definite description of the higgs sector, one should suppress the contributions of a lot of composite operators except the special ones. This dominance of several composite operators is usually motivated by an extended dynamics beyond the SM, for instance, by technicolor providing the dominance of some bound channels.
In the present paper, the choice of the dominant composite operators is dictated by the SM gauge symmetry, since we isolate the only composite structure in the second order over the gauge couplings, while the appearance of other operators takes place at higher orders, and, hence, their contributions must be suppressed<sup>3</sup><sup>3</sup>3This suppression becomes even better at higher virtualities because of the asymptotic freedom of the nonabelian interactions, while the abelian charge remains small up to the GUT scale.. Moreover, this motivation of the form of the composite operators leads us to add the matching condition (4), which is a new idea for the composite models, and it is certainly due to the electroweak nature of the composite operators.
Thus, for a walker travelling from low scales to higher ones, the whole picture of the electroweak symmetry breaking looks as follows:
* The SM extension with several Higgs fields is the local theory with the ultraviolet cut-off $`\mathrm{\Lambda }`$.
* The parameters of the Higgs potential at the scale $`\mathrm{\Lambda }`$ are matched with the effective potential of the bi-local source, calculated in the range $`[\mathrm{\Lambda };M]`$, so that $`M`$ denotes the scale where the potential is exactly zero.
The value of $`\mathrm{\Lambda }`$, hence, can be related to the masses of the gauge bosons, or the vacuum expectation value (vev) $`v_{\mathrm{SM}}`$ of the Higgs field in the SM. The value of $`M`$ with respect to $`\mathrm{\Lambda }`$ is fixed by two simple requirements: at the matching point $`\mathrm{\Lambda }`$ the Yukawa constant of the $`t`$-quark calculated with the composite operators is determined by the condition of the infrared fixed point in the local theory , while the Yukawa constant is expressed in terms of the abelian gauge coupling at the scales $`\mathrm{\Lambda }`$ and $`M`$ due to the consistent matching condition of the potential (4). At fixed $`\mathrm{\Lambda }`$ this supposition pushes $`M`$ up to the GUT scale, which implies a solution of naturalness. So, we can read off the third point:
* The Yukawa constants of heaviest fermions in the local theory have the matching conditions at $`\mathrm{\Lambda }`$ to the couplings given by the bi-local representation, so that the infrared fixed point for the $`t`$-quarks is exactly reached.
The masses of $`b`$-quark and $`\tau `$-lepton can be also calculated after the use of both the definite matching at $`\mathrm{\Lambda }`$ and infrared fixed points in the RG equations below $`\mathrm{\Lambda }`$.
Finally, the potential of Higgs fields at $`\mathrm{\Lambda }`$ can serve to estimate the masses of neutral scalar particles by means of RG evolution and the infrared fixed point for the quartic vertex $`\lambda `$. The important property of fixed points under consideration is that the Yukawa constants and quartic coupling are given by appropriate combinations of gauge coupling constants.
Thus, the local theory with the local Higgs fields and the electroweak symmetry breaking can certainly be matched to the effective potential of the sources for the bi-local composite operators of quarks and leptons at the scale $`\mathrm{\Lambda }`$ and to the corresponding Yukawa constants, which are calculable in the region of virtualities $`[\mathrm{\Lambda };M]`$, so that the fixed-point matching of the $`t`$-quark coupling and the symmetry matching condition of null effective potential (4) result in $`M`$ lying in the GUT region.
Then we find the following general results:
1. Three bi-local composite operators formed by the fermions of the heaviest generation develop the effective potential of their sources, so that nonzero vev’s break the electroweak symmetry. Above the scale $`\mathrm{\Lambda }`$ we treat these dynamics as the strong self-interaction regime for three independent scalar higgses, equivalent to the weak self-interaction regime for three independent sources of the composite operators.
2. The position of the matching point $`\mathrm{\Lambda }=633`$ GeV is fixed by the measured masses of the gauge bosons, once the higgs sector is given by three independent scalar fields.
3. At $`\mathrm{\Lambda }`$ the infrared fixed point condition is satisfied for the Yukawa coupling of the $`t`$ quark only, while the couplings of the $`b`$ quark and $`\tau `$ lepton evolve to the fixed points at lower scales, in agreement with the current data available. The masses of the higgses evolve too.
4. Under item 3, the position of the ultraviolet cut-off $`M\sim 10^{12}`$–$`10^{19}`$ GeV with respect to $`\mathrm{\Lambda }`$ is given by the condition of zero effective potential for the sources of the composite operators, which is governed by the renormalization group for the $`𝖴(1)`$ hypercharge, so that we find a natural hierarchy between $`M`$ and $`\mathrm{\Lambda }`$.
The paper is organized in the following way: Section II is devoted to the definition of sources for the bi-local composite operators and calculation of effective potential to the one-loop accuracy. The masses of gauge bosons and Yukawa constants of fermions are evaluated in Section III at the scale $`\mathrm{\Lambda }`$. The exploration of infrared fixed point conditions for the Yukawa constants and quartic Higgs coupling is considered in Section IV. Numerical estimates of masses for the heaviest fermions as well as the Higgs fields are given in Section V. In Section VI we shortly discuss the problem of generations and the vacuum structure. The obtained results and the points of discussion are summarized in Conclusion.
## II. Sources of composite operators and effective potential
Let us define the following bare actions for the sources of bi-local operators
$`S_\tau `$ $`=`$ $`{\displaystyle \int dx\,dy\,N_\tau J_\tau ^{\dagger }(x,y)[\overline{\tau }_R(x)\underset{}{\text{ / }B^{}(x)\text{ / }B^{}(y)}\tau _L(y)]}+\mathrm{h}.\mathrm{c}.,`$
$`S_t`$ $`=`$ $`{\displaystyle \int dx\,dy\,N_tJ_t^{\dagger }(x,y)[\overline{t}_R(x)n\underset{}{\text{ / }B^{}(x)\text{ / }B^{}(y)}\overline{n}t_L(y)]}+\mathrm{h}.\mathrm{c}.,`$ (5)
$`S_b`$ $`=`$ $`{\displaystyle \int dx\,dy\,N_bJ_b^{\dagger }(x,y)[\overline{b}_R(x)n\underset{}{\text{ / }B^{}(x)\text{ / }B^{}(y)}\overline{n}b_L(y)]}+\mathrm{h}.\mathrm{c}.,`$
where we have introduced the $`\mathrm{𝖲𝖴}(\mathrm{𝟥})`$-triplet unit-vector $`n_i`$, so that $`\overline{n}n=1`$, and the $`n`$-dependent terms in the effective action after the account for the loop-corrections have to be averaged over $`n_i`$ to restore the explicit invariance under the transformations of $`\mathrm{𝖲𝖴}(\mathrm{𝟥})`$. For instance, since we generally have $`n_i\overline{n}_j=\frac{1}{3}\delta _{ij}+\frac{1}{\sqrt{3}}\lambda _{ij}^aF_a`$, we can straightforwardly check that
$$\langle n_i\overline{n}_j\rangle =\frac{1}{3}\delta _{ij},\langle F_a\rangle =0,\langle F_aF_b\rangle =\frac{1}{8}\delta _{ab},$$
and so on.
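These averages are straightforward to verify numerically. The following sketch samples the unit vector $`n_i`$ uniformly on the sphere $`\overline{n}n=1`$ in complex 3-space and reproduces $`\langle n_i\overline{n}_j\rangle =\frac{1}{3}\delta _{ij}`$; the sample size is arbitrary:

```python
import numpy as np

# Monte Carlo check of the averages over the SU(3)-triplet unit vector n_i:
# sampling n uniformly on the unit sphere reproduces <n_i nbar_j> = delta_ij/3.
rng = np.random.default_rng(0)
N = 200_000
z = rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))
n = z / np.linalg.norm(z, axis=1, keepdims=True)        # uniform unit vectors
avg = np.einsum('ki,kj->ij', n, n.conj()) / N
print(np.round(avg, 3))                                 # approximately diag(1/3, 1/3, 1/3)
```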
For nonzero $`Y_L`$ and $`Y_R`$, which are under consideration, we can redefine the factors $`N_J`$ to include the hypercharges into the definition of the sources, so that $`\stackrel{~}{N}_p=\alpha _Y`$ and $`\stackrel{~}{J}_p=\pi Y_LY_RJ_p`$ for $`p=t,b,\tau `$, which does not change the final results concerning the physical quantities: masses and couplings. In what follows we omit the tildes for the sake of brevity.
In the calculations of the effective potential we consider the global values of the sources, independent of the local coordinates: $`\partial _{x,y}J(x,y)\to 0`$. The corresponding vertex derived from the actions (5) is shown in Fig. 3. For the $`t`$-quark it has the form
$$\mathrm{\Gamma }_t=i\alpha _YJ^{}\overline{t}_R(p)n\frac{4i}{p^2}\overline{n}t_L(p)+\mathrm{h}.\mathrm{c}.$$
(6)
The diagrams for the calculation of quadratic and quartic terms of effective potential are shown in Figs. 4 and 5, respectively.
The parameters of potential
$$V(J^{\dagger },J)=-\mu ^2J^{\dagger }J+\lambda (J^{\dagger }J)^2,$$
(7)
can be written down in Euclidean space as
$`i\mu _B^2`$ $`=`$ $`iN_J^2{\displaystyle \int _{\mathrm{\Lambda }^2}^{M^2}}{\displaystyle \frac{d^4p}{(2\pi )^4}}{\displaystyle \frac{4^2\mathrm{tr}[P_L\text{ / }p\text{ / }p]}{(p^2)^4}},`$ (8)
$`i4\lambda _B`$ $`=`$ $`2iN_J^4{\displaystyle \int _{\mathrm{\Lambda }^2}^{M^2}}{\displaystyle \frac{d^4p}{(2\pi )^4}}{\displaystyle \frac{4^4\mathrm{tr}[P_L\text{ / }p\text{ / }p\text{ / }p\text{ / }p]}{(p^2)^8}},`$ (9)
which are independent of the fermion flavor. Here $`P_L=\frac{1}{2}(1-\gamma _5)`$ is the projector on the left-handed fermions.
Supposing $`M^2\gg \mathrm{\Lambda }^2`$, we find
$`\mu _B^2`$ $`=`$ $`N_J^2{\displaystyle \frac{2}{\pi ^2}}{\displaystyle \frac{1}{\mathrm{\Lambda }^2}},`$ (10)
$`\lambda _B`$ $`=`$ $`N_J^4{\displaystyle \frac{4}{\pi ^2}}{\displaystyle \frac{1}{\mathrm{\Lambda }^8}}.`$ (11)
As we have already mentioned in the Introduction, the effective potential has to be subtracted so that at the scale $`M`$ it equals zero exactly, since we deal with the source of composite operators not involving any interactions beyond the gauge interactions of the SM. Then, we get
$`\mu _R^2(\mathrm{\Lambda })`$ $`=`$ $`{\displaystyle \frac{2}{\pi ^2}}{\displaystyle \frac{1}{\mathrm{\Lambda }^2}}\alpha _Y^2(M)(1-\varkappa ^2(\mathrm{\Lambda })),`$ (12)
$`\lambda _R(\mathrm{\Lambda })`$ $`=`$ $`{\displaystyle \frac{4}{\pi ^2}}{\displaystyle \frac{1}{\mathrm{\Lambda }^8}}\alpha _Y^4(M)(1-\varkappa ^2(\mathrm{\Lambda }))^2,`$ (13)
where we have introduced the notation for
$$\varkappa (\mathrm{\Lambda })=\frac{\alpha _Y(\mathrm{\Lambda })}{\alpha _Y(M)},$$
with the normalization $`\varkappa (M)=1`$. The scale-independent factors $`\alpha _Y^{2,4}(M)`$ can be removed by the redefinition of the sources: $`J^{\prime }=\alpha _YJ`$, which we imply below. In addition, we introduce $`J(\mathrm{\Lambda })=\frac{1}{\mathrm{\Lambda }^2}J^{\prime }`$ to obtain more usual notations. Then,
$`\mu ^2(\mathrm{\Lambda })`$ $`=`$ $`{\displaystyle \frac{2}{\pi ^2}}(1-\varkappa ^2(\mathrm{\Lambda }))\mathrm{\Lambda }^2,`$ (14)
$`\lambda (\mathrm{\Lambda })`$ $`=`$ $`{\displaystyle \frac{4}{\pi ^2}}(1-\varkappa ^2(\mathrm{\Lambda }))^2.`$ (15)
The vacuum expectation value, vev, is given by $`J^{\dagger }J=\frac{\mu ^2}{2\lambda }`$, so that
$$J^{\dagger }(\mathrm{\Lambda })J(\mathrm{\Lambda })=\frac{1}{4}\frac{1}{1-\varkappa ^2(\mathrm{\Lambda })}\mathrm{\Lambda }^2.$$
(16)
Remember that the potential parameters are the same for all charged heavy fermions: the $`t`$-quark, $`b`$-quark and $`\tau `$-lepton. The density of vacuum energy is independent of flavor, too,
$$V(\mathrm{vac})=-\frac{\mu ^4}{4\lambda }=-\frac{1}{4\pi ^2}\mathrm{\Lambda }^4.$$
Then the action, represented as the sum of terms over space-time intervals with $`d^4x\sim 1/\mathrm{\Lambda }^4`$, has the form
$$S(\mathrm{vac})=\int d^4x\,V(\mathrm{vac})\sim -\frac{1}{4\pi ^2},$$
and it is independent of $`\mathrm{\Lambda }`$.
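For orientation, the following sketch evaluates eqs. (14)–(16) and the vacuum energy density numerically; the inputs $`\mathrm{\Lambda }=633`$ GeV and $`\varkappa =0.53`$ anticipate the values obtained in Section V and eq. (26):

```python
import numpy as np

# Evaluation of eqs. (14)-(16) and the vacuum energy density; Lambda = 633 GeV
# and kappa = 0.53 anticipate the numbers of Section V and eq. (26).
Lam, kappa = 633.0, 0.53

mu2 = (2.0 / np.pi**2) * (1.0 - kappa**2) * Lam**2      # eq. (14), GeV^2
lam = (4.0 / np.pi**2) * (1.0 - kappa**2)**2            # eq. (15)
vev2 = mu2 / (2.0 * lam)                                # J^dagger J at the minimum, eq. (16)
v_vac = -mu2**2 / (4.0 * lam)                           # V(vac) = -Lambda^4/(4 pi^2)

print(f"mu^2 = {mu2:.3e} GeV^2, lambda = {lam:.3f}, sqrt(J+J) = {np.sqrt(vev2):.0f} GeV")
print(f"V(vac) = {v_vac:.3e} GeV^4 vs -Lambda^4/(4 pi^2) = {-Lam**4 / (4.0 * np.pi**2):.3e}")
```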
## III. Masses of gauge bosons and Yukawa constants
The diagrams, which result in the masses of gauge bosons, are shown in Fig. 6, where the permutations over the gauge bosons are implied.
We straightforwardly find that the couplings of gauge bosons are proportional to the differences of their charges, so that
$$m_{12}^2A_1^\mu A_2^\nu g_{\mu \nu }\propto (Q_1^L-Q_1^R)(Q_2^L-Q_2^R)A_1^\mu A_2^\nu g_{\mu \nu },$$
where $`Q^{L,R}`$ denote the charges of left-handed and right-handed fermions. This implies that the vector-like gauge bosons, i.e. when $`Q^L=Q^R`$, remain massless.
For the $`W`$- and $`Z`$-bosons, after the subtraction procedure $`\varkappa ^2\to (1-\varkappa ^2)`$, we find
$`m_W^2`$ $`=`$ $`{\displaystyle \frac{4\pi \alpha _2}{2}}{\displaystyle \underset{p}{\sum }}{\displaystyle \frac{\mathrm{\Lambda }_p^2}{4\pi ^2}},`$ (17)
$`m_Z^2`$ $`=`$ $`m_W^2{\displaystyle \frac{1}{\mathrm{cos}^2\theta _W}},`$ (18)
where $`\theta _W`$ is the Weinberg angle , as usual, and the sum is taken over the heavy flavors $`p=t,b,\tau `$. As we have seen in the previous section, $`\mathrm{\Lambda }_p=\mathrm{\Lambda }`$ is independent of flavor, and, hence, we can introduce the Higgs field $`h_p`$ with the vev $`\langle h_p\rangle =v`$, so that
$$v=\frac{\mathrm{\Lambda }}{2\pi },h_p(v)=\frac{1}{\pi }\sqrt{1-\varkappa ^2(v)}J_p(v).$$
Thus, we get
$$m_W^2=\frac{4\pi \alpha _2}{2}\mathrm{\hspace{0.33em}3}v^2,$$
(19)
so that $`v_{\mathrm{SM}}^2=3v^2\approx (174\,\mathrm{GeV})^2`$, when the potential at the scale $`\mathrm{\Lambda }`$ has the form<sup>4</sup><sup>4</sup>4There is a possibility to change the convention on the prescription of the scale by replacing $`\mathrm{\Lambda }\to v`$.
$$V(h_p,h_p^{\dagger })=-2\mathrm{\Lambda }^2h_p^{\dagger }h_p+(2\pi )^2(h_p^{\dagger }h_p)^2.$$
(20)
We check that the quadratic term $`-2\mathrm{\Lambda }^2`$ is exactly reproduced by the one-loop calculation in the local $`\varphi ^4`$-theory with $`\lambda =(2\pi )^2`$ and cut-off $`\mathrm{\Lambda }`$.
The masses of the fermions at the same scale can be derived from the diagram shown in Fig. 3 by putting the fermion momenta at the given virtuality, $`p^2=\mathrm{\Lambda }^2`$. Then, after the appropriate subtraction \[$`\varkappa \to (1-\varkappa )`$\], we get
$$m_p=\lambda _pv,$$
with
$`\lambda _t(v)=\lambda _b(v)`$ $`=`$ $`{\displaystyle \frac{4\pi }{3\sqrt{2}}}\sqrt{{\displaystyle \frac{1-\varkappa (v)}{1+\varkappa (v)}}},`$ (21)
$`\lambda _\tau (v)`$ $`=`$ $`3\lambda _t(v).`$ (22)
Replacing $`v`$ by $`v_{\mathrm{SM}}`$, we find $`\lambda _p^{\mathrm{SM}}=\lambda _p/\sqrt{3}`$.
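A quick numerical pass through eqs. (19), (21) and (22) looks as follows; the values of $`\alpha _2`$ and $`\varkappa (v)`$ below are representative inputs rather than fit results:

```python
import numpy as np

# Inputs: the measured W mass and representative weak-scale couplings;
# kappa(v) follows from eq. (26) with alpha_s(v) of about 0.116.
m_W, alpha_2, kappa = 80.42, 0.034, 0.53

v = m_W / np.sqrt(6.0 * np.pi * alpha_2)        # eq. (19): m_W^2 = (4 pi alpha_2/2) 3 v^2
Lam = 2.0 * np.pi * v
lam_t = (4.0 * np.pi / (3.0 * np.sqrt(2.0))) * np.sqrt((1.0 - kappa) / (1.0 + kappa))
print(f"v = {v:.1f} GeV, Lambda = {Lam:.1f} GeV")
print(f"lambda_t(v) = lambda_b(v) = {lam_t:.2f}, lambda_tau(v) = {3.0 * lam_t:.2f}")
```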
Thus, we have calculated the masses of gauge bosons, Yukawa constants of heaviest fermions and the parameters of Higgs potential at the scale $`\mathrm{\Lambda }`$, which have to be matched with the quantities of local theory valid below $`\mathrm{\Lambda }`$.
## IV. Infrared fixed points
By now we have the local theory, with the cut-off $`\mathrm{\Lambda }`$, containing three neutral Higgs fields coupled to the appropriate heavy fermions in each sector, where the Yukawa couplings have to be matched with the values calculated in the effective potential of the sources for the composite operators.
The one-loop RG equations for the couplings<sup>5</sup><sup>5</sup>5The corresponding two-loop RG equations are given in ref. . We shortly comment the influence of two-loop corrections below. have the form
$`{\displaystyle \frac{d\mathrm{ln}\lambda _t}{d\mathrm{ln}\mu }}`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^2}}\left[{\displaystyle \frac{9}{2}}\lambda _t^2-8g_3^2-{\displaystyle \frac{17}{12}}g_Y^2-{\displaystyle \frac{9}{4}}g_2^2\right],`$
$`{\displaystyle \frac{d\mathrm{ln}\lambda _b}{d\mathrm{ln}\mu }}`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^2}}\left[{\displaystyle \frac{9}{2}}\lambda _b^2-8g_3^2-{\displaystyle \frac{5}{12}}g_Y^2-{\displaystyle \frac{9}{4}}g_2^2\right],`$ (23)
$`{\displaystyle \frac{d\mathrm{ln}\lambda _\tau }{d\mathrm{ln}\mu }}`$ $`=`$ $`{\displaystyle \frac{1}{(4\pi )^2}}\left[{\displaystyle \frac{5}{2}}\lambda _\tau ^2-{\displaystyle \frac{15}{4}}g_Y^2-{\displaystyle \frac{9}{4}}g_2^2\right],`$
where $`g_3^2=4\pi \alpha _s`$ is the QCD coupling, $`g_Y^2=4\pi \alpha _Y`$ is the hypercharge coupling, and $`g_2^2=4\pi \alpha _2`$ is the $`\mathrm{𝖲𝖴}(\mathrm{𝟤})`$-group coupling. At “low” virtualities about $`v\sim 100`$ GeV, the dominant contribution to the $`\beta `$-functions of the quark couplings is given by QCD. We suppose that the value of the matching point $`\mathrm{\Lambda }`$ is dictated by the fixed point condition for the $`t`$-quark: $`\frac{d\mathrm{ln}\lambda _t}{d\mathrm{ln}\mu }=0`$ , i.e.
$$\lambda _t^2(v)=\frac{64\pi }{9}\alpha _s(v)+\frac{34\pi }{27}\alpha _Y(v)+2\pi \alpha _2(v)\approx \frac{64\pi }{9}\alpha _s(v),$$
(24)
when the matching gives
$$\lambda _t^2(v)=\frac{8\pi ^2}{9}\frac{1-\varkappa (v)}{1+\varkappa (v)}.$$
(25)
Therefore, we find
$$\varkappa (v)=\frac{1-\frac{8\alpha _s(v)}{\pi }}{1+\frac{8\alpha _s(v)}{\pi }}.$$
(26)
Due to the contribution of the hypercharge, the difference between the RG equations for $`\lambda _b`$ and $`\lambda _t`$ causes the infrared fixed point for the $`b`$-quark to be reached at a lower scale than for the $`t`$-quark. Indeed, the fixed point condition for the $`b`$-quark reads
$$\frac{9}{2}(\lambda _b^2(\mu )-\lambda _t^2(\mu ))=-g_Y^2(\mu ).$$
(27)
Making use of matching condition $`\lambda _b(v)=\lambda _t(v)`$, we can write down
$$\frac{d\mathrm{ln}\lambda _t/\lambda _b}{d\mathrm{ln}\mu }=-\frac{1}{(4\pi )^2}g_Y^2,$$
for small changes, so that
$$\lambda _b(\mu )-\lambda _t(\mu )\approx -\frac{\lambda _t(\mu )}{(4\pi )^2}g_Y^2\mathrm{ln}\frac{\mathrm{\Lambda }}{\mu }.$$
(28)
Then, we can derive from (27) and (28) the following estimate of current mass for the $`b`$-quark
$$\mathrm{ln}\frac{m_t}{m_b(\widehat{v}_b)}=\frac{\pi }{4\alpha _s(m_b(\widehat{v}_b))},$$
(29)
where the current mass of $`t`$-quark is given by
$$m_t(m_t)=\frac{8}{3}\sqrt{\pi \alpha _s(v)}v,$$
since the evolution of the $`t`$-quark mass above the scale $`v`$ is determined by the running of the effective constant, which is negligibly small in the interval $`[v,m_t]`$, and, hence, $`m_t(m_t)\approx m_t(v)`$ with quite a high accuracy. The scale of the $`b`$-quark normalization is given by the following
$$m_b(\widehat{v}_b)=\frac{8}{3}\sqrt{\pi \alpha _s(\widehat{v}_b)}\widehat{v}_b,$$
and we use the QCD evolution to extract the current mass of $`b`$-quark at the scale of its value
$$m_b(m_b)=m_b(\widehat{v}_b)\left(\frac{\alpha _s(m_b(m_b))}{\alpha _s(\widehat{v}_b)}\right)^{12/25}.$$
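This chain of relations can be solved with a simple root finder; the sketch below uses a one-loop $`\alpha _s`$ with fixed $`n_f`$ and $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, so it only lands in the ballpark of the three-loop numbers quoted in Section V:

```python
import numpy as np
from scipy.optimize import brentq

def alpha_s(mu, lam_qcd=0.26, nf=4):
    # simple one-loop running; the text uses three-loop running with thresholds
    return 2.0 * np.pi / ((11.0 - 2.0 * nf / 3.0) * np.log(mu / lam_qcd))

m_t = 165.0                                   # GeV, current t-quark mass
def eq29(vb_hat):
    mb = (8.0 / 3.0) * np.sqrt(np.pi * alpha_s(vb_hat)) * vb_hat  # m_b(v_hat_b)
    return np.log(m_t / mb) - np.pi / (4.0 * alpha_s(mb))

vb_hat = brentq(eq29, 1.0, 50.0)              # root of eq. (29)
mb_hat = (8.0 / 3.0) * np.sqrt(np.pi * alpha_s(vb_hat)) * vb_hat
mb_mb = mb_hat * (alpha_s(mb_hat) / alpha_s(vb_hat)) ** (12.0 / 25.0)  # run down
print(f"v_hat_b = {vb_hat:.2f} GeV, m_b(v_hat_b) = {mb_hat:.2f} GeV, "
      f"m_b(m_b) = {mb_mb:.2f} GeV")
```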
Next, we can evaluate the mass of $`\tau `$-lepton in the same manner. At low energies we modify the RG equation for the $`\tau `$-coupling, neglecting the four-fermion weak interactions and taking into account the photon contribution. So, we have
$$\frac{d\mathrm{ln}\lambda _\tau }{d\mathrm{ln}\mu }=\frac{1}{(4\pi )^2}\left[\frac{5}{2}\lambda _\tau ^2-24\pi \alpha _{em}\right],$$
(30)
and the infrared fixed point condition reads off
$$\lambda _\tau ^2=\frac{48\pi }{5}\alpha _{em}.$$
(31)
The change of $`\lambda _\tau `$ from the matching value $`\lambda _\tau ^2=9\lambda _t^2=64\pi \alpha _s`$ can be found in the solution of
$$\frac{d\mathrm{ln}\lambda _\tau }{d\mathrm{ln}\mu }\approx \frac{1}{(4\pi )^2}\frac{5}{2}\cdot 9\lambda _t^2\approx 40\frac{\alpha _s}{4\pi },$$
(32)
so that
$$\lambda _\tau (\mu )=\lambda _\tau (v)\left(\frac{\alpha _s(\mu )}{\alpha _s(v)}\right)^{-\frac{40}{2b_3}},$$
(33)
where $`b_3=11-\frac{2}{3}n_f=9`$ at $`n_f=3`$. From (31), (33) and $`\lambda _t^2=64\pi \alpha _s/9`$ we deduce the relation
$$\alpha _s(m_\tau )=\alpha _s(v)\left(\frac{3}{20}\frac{\alpha _{em}(m_\tau )}{\alpha _s(v)}\right)^{-\frac{9}{40}}.$$
(34)
Note that the one-loop evolution over such a large range of scales is quite a rough approximation. To improve the estimate of the $`\tau `$-lepton mass, we integrate (32) numerically with the same boundary conditions and extract the value under consideration.
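A sketch of this numerical integration is given below; it steps eq. (32) down in $`\mathrm{ln}\mu `$ until the fixed-point condition (31) is met. The one-loop $`\alpha _s`$ with fixed $`n_f`$ is a simplification, so the resulting scale is only indicative of where the fixed point is reached; the final extraction of $`m_\tau `$ follows the more refined procedure described above:

```python
import numpy as np

def alpha_s(mu, lam_qcd=0.26, nf=4):
    # simple one-loop running, a stand-in for the text's three-loop alpha_s
    return 2.0 * np.pi / ((11.0 - 2.0 * nf / 3.0) * np.log(mu / lam_qcd))

alpha_em = 1.0 / 134.0
v = 100.8                                                 # GeV
mu = v
lam_tau = 3.0 * np.sqrt(64.0 * np.pi * alpha_s(v) / 9.0)  # matching: lambda_tau = 3 lambda_t
dlnmu = 1.0e-4
while lam_tau**2 > 48.0 * np.pi * alpha_em / 5.0:         # fixed point, eq. (31)
    lam_tau *= 1.0 - (40.0 * alpha_s(mu) / (4.0 * np.pi)) * dlnmu  # eq. (32)
    mu *= np.exp(-dlnmu)
print(f"fixed point (31) reached near mu = {mu:.2f} GeV, lambda_tau = {lam_tau:.3f}")
```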
Let us consider the way to estimate the masses of the neutral Higgs bosons. The RG equations for the quartic couplings of the scalar particles coupled to the heaviest fermions are the following:
$$\frac{d\lambda }{d\mathrm{ln}\mu }=\frac{3}{2\pi ^2}\left[\lambda ^2-\frac{a_p}{4}\lambda _p^4\right],$$
(35)
where $`a_t=a_b=1`$, $`a_\tau =\frac{1}{3}`$, and we neglect the contribution given by the electroweak gauge couplings. This approximation is quite reasonable, since at $`\mathrm{\Lambda }(v)`$ the quartic couplings $`\lambda (v)=(2\pi )^2`$ dominate. For the Higgs fields coupled to the $`t`$\- and $`b`$-quarks, the infrared fixed points coincide with each other to the order under consideration, so that
$$\lambda (\mu _H)\approx \frac{1}{2}\lambda _{t,b}^2(\mu _H)=\frac{32\pi }{9}\alpha _s(\mu _H),$$
which implies that the corresponding masses of the scalars are degenerate to high accuracy. Let us evaluate the scale at which the infrared fixed point is reached. The evolution can be approximated at large $`\lambda `$ by the equation
$$\frac{1}{\lambda (\mu _H)}=\frac{1}{\lambda (v)}+\frac{3}{2\pi ^2}\mathrm{ln}\frac{v}{\mu _H},$$
so that we derive
$$\mathrm{ln}\frac{v}{\mu _H}\approx \frac{3\pi }{16\alpha _s(\mu _H)}-\frac{1}{6},$$
(36)
If we use the RG evolution for the QCD coupling
$$\frac{1}{\alpha _s(\mu _H)}=\frac{1}{\alpha _s(v)}-\frac{b_3}{2\pi }\mathrm{ln}\frac{v}{\mu _H},$$
at $`n_f=5`$, we arrive at
$$\mathrm{ln}\frac{v}{\mu _H}\approx \frac{6}{55}\frac{\pi }{\alpha _s(v)},$$
(37)
although the straightforward equation for the scale in (36) can be more accurate numerically.
Thus, following the general relation for the mass of Higgs field,
$$m_H(\mu )=2\sqrt{\lambda (\mu )}v,$$
we have the estimates
$`m_H(v)`$ $`=`$ $`4\pi v,`$ (38)
$`m_H(\mu _H)`$ $`=`$ $`{\displaystyle \frac{8}{3}}\sqrt{2\pi \alpha _s(\mu _H)}v.`$ (39)
As for the Higgs field coupled to the $`\tau `$-lepton, it is easy to recognize that the corresponding scale $`\mu `$ is much greater than for the scalars coupled to the heaviest quarks, and, hence, its mass is greater than those considered above. Indeed, we can use the evolution of $`\lambda _\tau `$ at large scales, where it is driven as $`\lambda _\tau =3\lambda _t`$, so that we derive the relation analogous to (36)
$$\mathrm{ln}\frac{v}{\mu _{H_\tau }}\approx \frac{\pi }{16\sqrt{3}\alpha _s(\mu _{H_\tau })}-\frac{1}{6},$$
(40)
and
$$m_{H_\tau }(\mu _{H_\tau })=8\sqrt{\frac{2}{\sqrt{3}}\pi \alpha _s(\mu _{H_\tau })}v.$$
Now we are ready to obtain numerical estimates.
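As a preview of those estimates, the self-consistent solution of eq. (36) together with eq. (39) can be obtained by a simple fixed-point iteration; the one-loop $`\alpha _s`$ used below is approximate:

```python
import numpy as np

def alpha_s(mu, lam_qcd=0.26, nf=4):
    # approximate one-loop running
    return 2.0 * np.pi / ((11.0 - 2.0 * nf / 3.0) * np.log(mu / lam_qcd))

v = 100.8                                  # GeV
mu_H = 10.0                                # starting guess
for _ in range(100):                       # fixed-point iteration of eq. (36)
    mu_H = v * np.exp(-(3.0 * np.pi / (16.0 * alpha_s(mu_H)) - 1.0 / 6.0))
m_H = (8.0 / 3.0) * np.sqrt(2.0 * np.pi * alpha_s(mu_H)) * v   # eq. (39)
print(f"mu_H = {mu_H:.1f} GeV, m_H = {m_H:.0f} GeV")
```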
## V. Numerical evaluation and the naturalness
First of all, the vev’s of Higgs fields are directly given by the masses of gauge bosons, so that
$$v=100.8\pm 0.1\mathrm{GeV},$$
and the cut-off
$$\mathrm{\Lambda }=2\pi v=633.0\pm 0.6\mathrm{GeV},$$
where we use the experimental data shown in Table 1.
The estimates for the masses of fermions depend on the values of QCD coupling constant. We put the value<sup>6</sup><sup>6</sup>6The central value is slightly displaced from the “world average” $`\alpha _s(m_Z)=0.119\pm 0.002`$ , though it is within the current uncertainty. However, this parameter corresponds to the LEP fit as well as to the recent global fit of structure functions .
$$\alpha _s(m_Z)=0.122\pm 0.003,$$
which corresponds to $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=255\pm 45`$ MeV in the three-loop approximation for the $`\beta `$-function. We suppose that the threshold values for changing the number of active quark flavors are equal to $`\widehat{m}_b=4.3`$ GeV and $`\widehat{m}_c=1.3`$ GeV. The variation of the threshold values is not as important in the estimates as the uncertainty in $`\alpha _s`$, which dominates the error-bars.
Then we can numerically solve the equations in the previous section to find the current masses
$`m_t(m_t)`$ $`=`$ $`165\pm 1\mathrm{GeV},`$
$`m_b(m_b)`$ $`=`$ $`4.18\pm 0.38\mathrm{GeV},`$ (41)
$`m_\tau (m_\tau )`$ $`=`$ $`1.78\pm 0.27\mathrm{GeV}.`$
The one-loop relation of perturbative QCD for the pole mass of quark is given by
$$m^{(p)}=m(m)\left(1+\frac{4}{3\pi }\alpha _s(m)\right).$$
Then we estimate
$`m_t^{(p)}`$ $`=`$ $`173\pm 2\mathrm{GeV},`$
$`m_b^{(p)}`$ $`=`$ $`4.62\pm 0.40\mathrm{GeV}.`$
The QED correction to the $`\tau `$-lepton mass is negligibly small.
We see that the $`t`$-quark mass is in good agreement with the direct measurements. The $`b`$-quark mass is in the desirable region. It is close to that estimated in the QCD sum rules , where $`m_b(m_b)=4.25\pm 0.15`$ GeV , and in the potential approach , where $`m_b(m_b)=4.20\pm 0.06`$ GeV. It is worth noting that the pole mass is not a quantity with good convergency in the OPE approach (see references in ), so we present it to the first order for the sake of reference. However, we stress also that the deviations from the central values are caused by the uncertainties in the $`\alpha _s`$ running.
The infrared fixed masses of neutral scalars, coupled with the $`t`$\- and $`b`$-quarks and the $`\tau `$-lepton, equal
$$m_H=306\pm 5\mathrm{GeV},m_{H_\tau }=552\pm 9\mathrm{GeV},$$
(42)
which can be compared with the global fit of the SM at LEP yielding $`m_H=76_{-47}^{+85}`$ GeV . The central value of this fit was recently excluded by the direct searches at modern LEP energies, where the constraint $`m_H>95`$ GeV was obtained . We expect, however, that many-doublet models of the Higgs sector have a different connection to the LEP data. Indeed, the fit of the SM with the single Higgs particle yields the value for the logarithm $`l_H=\mathrm{log}_{10}m_H^{\mathrm{SM}}[\mathrm{GeV}]=1.88_{-0.41}^{+0.33}`$, whereas this correction contributes to the observed quantities basically through the coupling to the massive gauge bosons. Then, we can write down the following approximation for this value in the model under consideration:
$$l_H=\frac{1}{3}\underset{p}{\sum }\kappa _p\mathrm{log}_{10}m_{H_p}[\mathrm{GeV}],$$
where the factor $`\frac{1}{3}`$ represents the fraction of scalar coupling in the squares of gauge boson masses, respectively for $`p=t,b,\tau `$, and $`\kappa _p`$ stands for the possible formfactors at high virtualities of the order of masses of Higgs fields. To test, we put the simple approximation
$$\kappa _p\approx \frac{1}{1+\frac{m_{H_p}^2}{\mathrm{\Lambda }^2}},$$
which results in $`\kappa _t=\kappa _b`$ close to unity, and $`\kappa _\tau \approx \frac{1}{4}`$, so that the value under consideration is equal to
$$l_H\approx 1.86,$$
that is optimistically close to what was observed at LEP. So, the values in (42) are not in contradiction with the current data.
Next, since we deal with a strongly coupled version of the Higgs sector (remember that $`m_H(v)\approx 1267`$ GeV), we need a more careful consideration of the effective potential to take into account the higher dimensional operators representing the multi-higgs couplings. So, we keep (42) as soft estimates of the masses of the Higgs fields, which implies that the decays into the massive gauge bosons are the dominant modes for these scalar particles .
Finally, we evaluate the scale $`M`$, where the electroweak symmetry has to be exactly restored. The value of $`\varkappa (v)`$ is equal to
$$\varkappa (v)=\frac{\alpha _Y(v)}{\alpha _Y(M)}=0.532\pm 0.005,$$
which implies $`\alpha _1^{-1}(M)\approx 32`$. The implication of $`\varkappa `$ for $`M`$ depends on the running of $`\alpha _Y=\frac{3}{5}\alpha _1`$ :
$$\frac{1}{\alpha _1(M)}=\frac{1}{\alpha _1(v)}-\frac{b_1}{2\pi }\mathrm{ln}\frac{M}{v},$$
where $`b_1`$ is model-dependent. So, in the SM $`b_1=\frac{4}{3}n_g+\frac{1}{10}n_h`$ with $`n_g=3`$ being the number of fermion generations and $`n_h`$ the number of Higgs doublets, and we obtain<sup>7</sup><sup>7</sup>7Numerically, we put $`\alpha _1^{-1}=58.6`$ for the order-of-magnitude estimate.
$$M_{\mathrm{SM}}\approx 2.5\cdot 10^{19}\,\mathrm{GeV},$$
while in the SUSY extension $`b_1=2n_g+\frac{3}{10}n_h`$, so that
$$M_{\mathrm{SUSY}}\approx 7\cdot 10^{12}\,\mathrm{GeV}.$$
Hence, we obtain the broad constraints
$$M=7\cdot 10^{12}\text{–}2.5\cdot 10^{19}\,\mathrm{GeV},$$
and the value strongly depends on the set of fields in the region above the cut-off $`\mathrm{\Lambda }`$. At present, we cannot strictly draw a conclusion on a preferable point. However, we can state that the offered mechanism for the breakdown of the electroweak symmetry solves the problem of naturalness, since the observed “low” scale of gauge boson masses is reasonably related to the “high” scale of GUT or even Planck mass.
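The order-of-magnitude arithmetic behind these numbers is compact enough to sketch; $`\alpha _1^{-1}(v)=58.6`$ and $`\varkappa =0.532`$ are the inputs quoted above, and $`n_h=3`$ doublets are assumed in both cases, matching the three-field higgs sector of this model:

```python
import numpy as np

# Inputs quoted in the text: alpha_1^{-1}(v) and kappa = alpha_Y(v)/alpha_Y(M);
# n_h = 3 Higgs doublets in both cases, matching the three-field higgs sector.
inv_a1_v, kappa, v = 58.6, 0.532, 100.8
inv_a1_M = kappa * inv_a1_v
n_g, n_h = 3, 3

for label, b1 in [("SM", 4.0 / 3.0 * n_g + n_h / 10.0),
                  ("SUSY", 2.0 * n_g + 3.0 * n_h / 10.0)]:
    M = v * np.exp(2.0 * np.pi * (inv_a1_v - inv_a1_M) / b1)  # from the alpha_1 running
    print(f"{label}: b1 = {b1:.2f}, M ~ {M:.1e} GeV")
```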
Finally, we comment on possible uncertainties of the numerical estimates and the role of the two-loop corrections. First, we analyze the subleading terms in eq. (24). The gauge charges neglected in the fixed point condition (24) result in a displacement of the $`t`$ quark mass by about 4 GeV, if we do not change the normalization of the QCD coupling constant. In this respect, we note that with the account of the gauge charge corrections in (24) the same central value of the $`t`$ quark mass, i.e. 165 GeV, is reproduced at $`\alpha _s(m_Z^2)=0.118`$, which coincides with the Particle Data Group “world-average”. Next, the two-loop corrections in the RG equations for the Yukawa couplings as applied to the $`t`$ quark lead to an additional displacement of the fixed point value. However, in this case we have to take into account the one-loop correction to the relation between the current mass and the pole mass of the $`t`$ quark due to the Higgs sector, which results in the following additive renormalization of $`m_t(m_t)`$
$$\frac{\delta m_t(m_t)}{m_t(m_t)}=\frac{1}{16\pi ^2}\frac{9}{2}\frac{m_t^2}{v_{\mathrm{SM}}^2},$$
at the higgs mass $`m_H\approx 2m_t`$. The above correction compensates the displacement due to the two-loop modification of the infrared fixed-point condition for the $`t`$ quark.
Second, we study the two-loop corrections to the fixed point condition for the $`b`$ quark. The corresponding modification of (27) reads
$$\frac{9}{2}(\lambda _b^2(\mu )-\lambda _t^2(\mu ))\approx -g_Y^2(\mu )\left(1-\frac{1}{16\pi ^2}\frac{4}{3}\lambda _t^2(\mu )\right)+O(g^4),$$
(43)
which results in the appropriate correction in (28), viz., a small change (about 2%) of the slope in front of the logarithm. The solution of the equations for the running $`b`$ quark mass under the variations caused by the introduction of the two-loop corrections and by the uncertainty in the QCD coupling constant is shown in Fig. 7. We can straightforwardly see that the variation of the slope in the RG equation for the Yukawa constant of the $`b`$ quark due to the two-loop corrections results in uncertainties which are much smaller than the variation of the $`b`$ quark mass caused by the uncertainties in the running coupling constant of QCD at moderate virtualities about the $`b`$ quark mass. Therefore, the dominant origin of the uncertainty in the $`b`$ quark mass is the normalization of $`\alpha _s`$. The same conclusion can be drawn for the mass of the $`\tau `$ lepton. Therefore, the uncertainty in the estimates of the masses of the $`b`$ quark and the $`\tau `$ lepton is not essentially changed by the introduction of the two-loop corrections, while the value of the $`t`$ quark mass depends on the normalization of $`\alpha _s`$ and on the two-loop corrections combined, so that the uncertainty in the current mass can reach 4 GeV in $`m_t`$.
As for the estimates of masses for the scalar fields, we emphasize that they give preliminary results, and further investigations are in progress, since we should, first, sum up subleading terms with higher powers of the higgs field squared in the effective potential and, second, consider complete RG equations for the quartic self-coupling, including suppressed terms. Nevertheless, our preliminary estimates show that the scalar fields should be significantly heavy.
## VI. Generations, the number of Higgs fields and vacuum
In the previous sections we have introduced three independent global sources for the bi-local operators composed of the fermions of the heaviest generation, i.e., the $`t`$ quark, $`b`$ quark and $`\tau `$ lepton. In the SSIR, these sources acquire effective potentials providing the spontaneous breaking of the electroweak symmetry. Below the scale $`\mathrm{\Lambda }`$ we assume the connection of these potentials with the potentials of the local Higgs fields. Thus, we suppose the introduction of three independent local Higgs doublets at low energies. Therefore, we suggest the condensation of the sources related to the heaviest generation only.
We have found the ‘democratic’ form of the potentials of the independent sources in the SSIR. All three potentials have the same values of the quadratic and quartic couplings, while we suggest an evidently broken ‘democracy’ for the fermion generations, since we do not introduce the condensation of sources for the composite operators built of the junior fermions.
In this section we describe a possible development on the problem of fermion generations and the structure of vacuum in the Higgs sector.
So, let us introduce the notation of normalized vev’s for the global sources connected with the Higgs fields as follows:
$$\chi _p=h_p/v,p=\tau ,t,b,$$
and the corresponding vacua $`|0_p\rangle `$, so that
$$\langle 0_p|\chi _{p^{\prime }}|0_{p^{\prime \prime }}\rangle =\delta _{pp^{\prime }}\delta _{p^{\prime }p^{\prime \prime }}.$$
(44)
Then we easily find that the mass terms
$`\mathcal{L}_Y^\tau `$ $`\sim `$ $`\overline{\tau }_R\tau _L\chi _\tau +\mathrm{h}.\mathrm{c}.,`$
$`\mathcal{L}_Y^t`$ $`\sim `$ $`\overline{t}_Rt_L\chi _t+\mathrm{h}.\mathrm{c}.,`$ (45)
$`\mathcal{L}_Y^b`$ $`\sim `$ $`\overline{b}_Rb_L\chi _b+\mathrm{h}.\mathrm{c}.,`$
could be represented by means of fields
$$\left(\begin{array}{c}\phi _1\\ \phi _2\\ \phi _3\end{array}\right)=\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}1& 1& 1\\ 1& \omega & \omega ^2\\ 1& \omega ^2& \omega \end{array}\right)\left(\begin{array}{c}\chi _\tau \\ \chi _t\\ \chi _b\end{array}\right)=U\left(\begin{array}{c}\chi _\tau \\ \chi _t\\ \chi _b\end{array}\right),$$
(46)
as follows
$`\mathcal{L}_Y^\tau `$ $`\sim `$ $`\overline{\tau }_R\tau _L(\phi _1+\phi _2+\phi _3)+\mathrm{h}.\mathrm{c}.,`$
$`\mathcal{L}_Y^t`$ $`\sim `$ $`\overline{t}_Rt_L(\phi _1+\omega ^2\phi _2+\omega \phi _3)+\mathrm{h}.\mathrm{c}.,`$ (47)
$`\mathcal{L}_Y^b`$ $`\sim `$ $`\overline{b}_Rb_L(\phi _1+\omega \phi _2+\omega ^2\phi _3)+\mathrm{h}.\mathrm{c}.,`$
where we omit the Yukawa couplings and use the matrix $`U`$ defined in terms of $`\omega =\mathrm{exp}(i\frac{2\pi }{3})`$.
This transformation in (46) relates the ‘heavy’ basis of $`\chi _p`$ to the ‘democratic’ basis of $`\phi _i`$. The definition (46) can be equivalently changed by permutations $`\chi _\tau \to \chi _t\to \chi _b`$ or by permutations of the columns of the matrix $`U`$. Such permutations correspond to the finite cyclic group $`\mathbb{Z}_3`$ with the basis $`\omega `$, so that the complex phases of $`\phi _i`$ are given by $`e^{iq\frac{2\pi }{3}}`$ with the charges $`q=(0,1,-1)`$ of $`(\phi _1,\phi _2,\phi _3)`$.
Further, we can note that the vacuum fields have simple connections as follows:
$`\left(\begin{array}{c}\phi _1\\ \phi _2\\ \phi _3\end{array}\right)={\displaystyle \frac{1}{\sqrt{3}}}\left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)`$ $`\leftrightarrow `$ $`\langle 0_\tau |𝝌|0_\tau \rangle =\langle 0_\tau |\left(\begin{array}{c}\chi _\tau \\ \chi _t\\ \chi _b\end{array}\right)|0_\tau \rangle =\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),`$
$`\left(\begin{array}{c}\phi _1\\ \phi _2\\ \phi _3\end{array}\right)={\displaystyle \frac{1}{\sqrt{3}}}\left(\begin{array}{c}1\\ \omega \\ \omega ^2\end{array}\right)`$ $`\leftrightarrow `$ $`\langle 0_t|𝝌|0_t\rangle =\langle 0_t|\left(\begin{array}{c}\chi _\tau \\ \chi _t\\ \chi _b\end{array}\right)|0_t\rangle =\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),`$
$`\left(\begin{array}{c}\phi _1\\ \phi _2\\ \phi _3\end{array}\right)={\displaystyle \frac{1}{\sqrt{3}}}\left(\begin{array}{c}1\\ \omega ^2\\ \omega \end{array}\right)`$ $`\leftrightarrow `$ $`\langle 0_b|𝝌|0_b\rangle =\langle 0_b|\left(\begin{array}{c}\chi _\tau \\ \chi _t\\ \chi _b\end{array}\right)|0_b\rangle =\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right),`$
where the conditions of normalization (44) are reproduced.
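These correspondences are easy to check numerically; the sketch below verifies that $`U`$ is unitary and maps the ‘heavy’ basis vacua onto the three ‘democratic’ configurations:

```python
import numpy as np

# Check of eq. (46) and the vacuum correspondences: acting with U on the
# 'heavy' basis vacua (1,0,0), (0,1,0), (0,0,1) reproduces the configurations
# (1,1,1)/sqrt(3), (1,w,w^2)/sqrt(3), (1,w^2,w)/sqrt(3).
w = np.exp(2j * np.pi / 3.0)
U = np.array([[1, 1, 1],
              [1, w, w**2],
              [1, w**2, w]]) / np.sqrt(3.0)

assert np.allclose(U @ U.conj().T, np.eye(3))           # unitarity
for chi in np.eye(3):                                   # chi = (1,0,0), (0,1,0), (0,0,1)
    print(np.round(np.sqrt(3.0) * (U @ chi), 3))
```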
Let us postulate the extended definition of the vacuum
$$|vac\rangle =|0_\tau \rangle |0_t\rangle |0_b\rangle ,$$
(87)
which implies the $`\mathbb{Z}_3`$ symmetry of the vacuum. Then, the couplings introduced in (47) are extended to three generations of fermions, whereas only the generation of $`\tau `$, $`t`$ and $`b`$ is heavy, while the two junior generations are massless.
The vacuum definition (87) could be treated as the following assumption:
The number of generations equals the number of charged flavors in the generation as well as the number of Higgs fields in the local phase.
Thus, we postulate the $`\mathbb{Z}_3`$ symmetry of the vacuum as the fundamental dynamical principle of the theory. Moreover, we suggest that this symmetry of the vacuum is exact, so that it is conserved under radiative corrections to the Yukawa constants of the fermions.
A realistic description of the generations, i.e., a model with nonzero masses of the junior fermions, is beyond the scope of this work. Nevertheless, we add two notes.
First, a general structure of Yukawa interactions with the $`\mathbb{Z}_3`$ symmetry of the vacuum has the form
$`\mathcal{L}_Y^\tau `$ $`\sim `$ $`\overline{\tau }_R\tau _L(g_1^\tau \phi _1+g_2^\tau \phi _2+g_3^\tau \phi _3)+\mathrm{h}.\mathrm{c}.,`$
$`\mathcal{L}_Y^t`$ $`\sim `$ $`\overline{t}_Rt_L(g_1^t\phi _1+g_2^t\omega ^2\phi _2+g_3^t\omega \phi _3)+\mathrm{h}.\mathrm{c}.,`$ (88)
$`\mathcal{L}_Y^b`$ $`\sim `$ $`\overline{b}_Rb_L(g_1^b\phi _1+g_2^b\omega \phi _2+g_3^b\omega ^2\phi _3)+\mathrm{h}.\mathrm{c}.,`$
where the constants $`g_i`$ can be restricted by the following conditions: $`g_{2,3}`$ are real, while $`g_1`$ can be complex. So, the symmetric point
$$g_1=g_2=g_3=1,$$
restores the hierarchy of a single heavy generation and two massless generations. A development of the ansatz (88) with realistic values of the parameters, consistent with the current data on the quark masses and the CKM mixing matrix of the charged quark currents, was given in Ref. .
Second, the same form of the mass matrix following from (88) could result in a leading-order symmetry in the neutrino sector. Indeed, if we put
$$g_1=e^{i\frac{2\pi }{3}},|g_1|=g_2=g_3=1,$$
then we get completely degenerate neutrinos, while small deviations in the phase of $`g_1`$ and in the absolute values of $`g_i`$ will result in the small differences of neutrino masses squared observed in neutrino oscillations .
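Both statements can be illustrated numerically. The sketch below assumes, purely for illustration, that the effective mass matrix takes a circulant form built from the couplings in (88); this specific matrix form is our assumption, not a formula given in the text, but its eigenvalues $`g_1+g_2\omega ^k+g_3\omega ^{2k}`$ reproduce the advertised spectra.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

def mass_spectrum(g1, g2, g3):
    # hypothetical circulant mass matrix with first row (g1, g2, g3);
    # its eigenvalues are g1 + g2*w^k + g3*w^(2k), k = 0, 1, 2
    M = np.array([[g1, g2, g3],
                  [g3, g1, g2],
                  [g2, g3, g1]])
    return np.sort(np.abs(np.linalg.eigvals(M)))

print(mass_spectrum(1, 1, 1))   # -> [0, 0, 3]: one heavy, two massless generations
print(mass_spectrum(w, 1, 1))   # -> [sqrt(3)]*3: completely degenerate 'neutrinos'
```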
## VII. Conclusion
In this work we have argued that there are three starting points for the presented consideration of the Higgs sector of the electroweak theory. These motivations are the following:
First, as is well known, in the Standard Model the Yukawa coupling constant of the $`t`$ quark, obtained from the measurement of the $`t`$ quark mass, is close to its value at the infrared fixed point derived from the renormalization group equation. If the Higgs sector is extended to three scalar fields separately coupled to the three heaviest charged fermions, i.e., the $`t`$, $`b`$ quarks and the $`\tau `$ lepton, with the same vacuum expectation values of the Higgs fields, then the Yukawa coupling constant of the $`t`$ quark sits exactly at the infrared fixed point. The challenge is whether this coincidence is accidental or not. We treat this fact as a fundamental feature of the dynamics determining the development of masses. Moreover, the problem acquires an additional insight because only one fermion generation is heavy, while the two junior generations are approximately massless. These features can be accommodated by introducing the fundamental $`\mathbb{Z}_3`$ symmetry of the vacuum, so that this symmetry is conserved under the radiative loop corrections responsible for the development of nonzero masses of the junior generations.
Second, the strong self-interaction regime in the Higgs sector of the Standard Model at large virtualities can be treated as an indication of nonlocality, i.e., of the compositeness of the operators relevant to the electroweak symmetry breaking. We have introduced a separation of virtuality regions: the local Higgs phase in the range $`[0;\mathrm{\Lambda }]`$, the nonlocal strong self-interaction regime in the range $`[\mathrm{\Lambda };M]`$, and the symmetric phase above $`M`$. We have determined the form of the composite operators and their connection to the local phase by considering the second order of the effective action in the SM.
Third, the development of the effective potential for the global sources of composite operators from the point of the symmetric phase $`M\sim 10^{12}-10^{19}`$ GeV is stopped at the infrared fixed point $`\mathrm{\Lambda }\approx 633`$ GeV for the Yukawa coupling constant of the $`t`$ quark. If the dynamics of the evolution is given by the same electroweak group, then the large logarithm $`\mathrm{ln}M/\mathrm{\Lambda }`$ is close to the value appearing in the calculation of the GUT scale. So, since the breaking of electroweak symmetry and the fermion mass generation involve the composite operators with both left and right handed fermions, the gauge interaction of the $`𝖴(1)`$ group determines the evolution of the parameters of the effective potential in the strong self-interaction regime. Then the logarithm $`\mathrm{ln}M/\mathrm{\Lambda }`$ in the coupling $`g_1`$ has a value depending on the set of fundamental fields above the scale $`\mathrm{\Lambda }`$. In any case, $`M`$ should be close to the GUT scale, and this fact suggests a solution of the naturalness problem.
Then, following the above motivations, we have evaluated the basic parameters of the model. We have calculated the effective potential for the sources of the composite operators responsible for breaking the electroweak symmetry and for generating the masses of the gauge bosons and the heaviest fermions. The corresponding couplings serve as the matching values for the quadratic and quartic constants in the potential of the local Higgs fields as well as for the Yukawa interactions at the scale $`\mathrm{\Lambda }`$, which is the ultraviolet cut-off for the local theory and the lower boundary of the $`[\mathrm{\Lambda };M]`$ range for the effective potential of sources coupled with the bi-local composite operators of quarks and leptons. At $`M`$ the local gauge symmetry is restored, so that the effective potential is exactly equal to zero.
Matching the Yukawa constant of the $`t`$-quark to the infrared fixed point at the scale $`\mathrm{\Lambda }`$, related to the gauge boson masses, we have found the null-potential value $`M`$ in the range of the GUT scale, which points to a solution of the naturalness problem. The exploration of the fixed points has resulted in the following current masses of the heaviest fermions: $`m_t(m_t)=165\pm 4`$ GeV, $`m_b(m_b)=4.18\pm 0.38`$ GeV and $`m_\tau (m_\tau )=1.78\pm 0.27`$ GeV. Two degenerate neutral Higgs fields have the infrared fixed mass $`m_H=306\pm 5`$ GeV, and the third scalar has the mass $`m_{H_\tau }=552\pm 9`$ GeV. These estimates do not contradict the current constraints coming from the experimental data.
Some questions call for additional consideration. Without discussing here the possible ways to study them, we list the directions requiring progress.
* What is the picture for the generation of the Yukawa constants responsible for the masses of the “junior” fermions?
As we have supposed in this paper, the three sectors of Higgs fields are coupled to the appropriate heavy fermions, so that one needs a mechanism based on a symmetry causing the junior generations to be massless at the leading order.
* What are the constraints on the model parameters following from the current data on flavor changing neutral currents and from the precision measurements at LEP?
We expect that this point cannot raise serious objections against the model, since we do not involve any interactions distinct from the gauge ones composing the SM.
* The most constructive question concerns a supersymmetric extension of the mechanism under consideration. Can SUSY provide new features or yield the masses of the super-partners?
In our opinion, the SUSY extension is more complicated, since there are many different relations between the mixtures of various sparticles, which are all expected to be essentially massive ($`\stackrel{~}{m}\sim \mathrm{\Lambda }`$), in contrast to the SM, wherein the junior generations are decoupled from the Higgs fields at the leading order.
* A simple application, we think, is an insertion of the model into the TeV-scale Kaluza-Klein framework, which is now under intensive development .
In that case $`\varkappa (v)`$ changes its logarithmic behavior into a power dependence on the scale, and $`M`$ returns to a value not far from the matching point $`\mathrm{\Lambda }\sim 1`$ TeV, as it should be in the KK approach.
Thus, we have offered a model of electroweak symmetry breaking which provides a positive perspective on naturalness and which calls for the deeper studies now in progress.
This work is supported in part by the Russian Foundation for Basic Research, grants 99-02-16558, 01-02-99315, 01-02-16585 and 00-15-96645, and by the Russian Ministry of Education, grant E00-3.3-62.
## 1 Introduction
The electron-proton collider HERA started operation in the summer of 1992. The proton beam energy was 820 GeV while the electron beam energy was 27.5 GeV. The $`\sim `$25 nb<sup>-1</sup> of data collected by the experiments ZEUS and H1 in the first running period extended the kinematic range covered by deep inelastic scattering, DIS, measurements by two orders of magnitude in both $`Q^2`$, the four-momentum transfer squared, and $`x`$, the fraction of the proton four-momentum carried by the struck quark. The proton structure function was extracted from the data and observed to rise rapidly with decreasing $`x`$ ; a dramatic result, not expected by many.
In the years 1992-1997 H1 and ZEUS have each collected a luminosity of $`\sim `$1 pb<sup>-1</sup> using electron beams and $`\sim `$50 pb<sup>-1</sup> using positron beams. These data have been used to make measurements which test the electroweak Standard Model, SM, and the theory of the strong interactions, QCD, in both neutral current, NC, and charged current, CC, DIS. Jet analyses in DIS and photoproduction have been used to address fundamental issues in QCD. The observation of diffraction in DIS has led to a careful investigation of the transition from the kinematic region in which perturbative QCD is valid to the region where phenomenological models based on Regge theory must be applied (see for example and references therein).
During the running period August 1998 to April 1999 $`\sim `$20 pb<sup>-1</sup> of $`e^{-}p`$ data were delivered with a proton beam energy of 920 GeV. The data collected by H1 and ZEUS will be used to study the dependence of the NC and CC DIS cross sections on the charge of the lepton beam.
The HERA experiments will continue to take data until May 2000 when a long, 9 month, shutdown is scheduled. The shutdown will be used to upgrade the HERA accelerator and the collider detectors. The HERA luminosity will be increased by a factor of five and longitudinal lepton beam polarisation will be provided for ZEUS and H1. The physics motivation for this major upgrade programme is discussed in detail in reference . Reference also contains a discussion of the physics potential of HERA were polarised proton beams and beams of heavy nuclei to be provided. These interesting possibilities will not be discussed further here.
This report is organised as follows: section 2 contains a selection of current results from H1 and ZEUS which will benefit from more data or from improvements in the quality of the data expected to come from the detector upgrades; the HERA machine and the collider detector upgrades are reviewed in section 3; section 4 contains a selection of key physics topics which will be addressed after the upgrade; finally, section 5 contains a summary.
## 2 A Selection of Open Questions
The wealth of data provided by the successful operation of HERA over the past seven years has allowed a set of open questions to be clearly defined. Since a discussion of each open question can not be attempted here, the case for the upgrade of the accelerator and of the collider detectors will be made using three key examples.
### 2.1 Determination of the Gluon Density of the Proton
The double differential cross section for $`e^\pm p`$ NC DIS may be written in the form
$$\frac{d\sigma _{e^\pm p}^{\mathrm{NC}}}{dxdQ^2}=\frac{2\pi \alpha ^2}{xQ^4}\left[Y_+F_2^{\mathrm{NC}}\mp Y_{-}xF_3^{\mathrm{NC}}-y^2F_L^{\mathrm{NC}}\right]=\frac{2\pi \alpha ^2}{xQ^4}Y_+\stackrel{~}{\sigma }_{e^\pm p}^{\mathrm{NC}}$$
(1)
where $`Y_\pm =1\pm (1-y)^2`$ with $`y=Q^2/xs`$, $`\sqrt{s}`$ is the $`e^\pm p`$ centre of mass energy, $`\alpha `$ is the fine structure constant and $`\stackrel{~}{\sigma }_{e^\pm p}^{\mathrm{NC}}`$ is referred to as the reduced cross section. The structure functions $`F_2^{\mathrm{NC}}`$ and $`xF_3^{\mathrm{NC}}`$ contain parton density functions, PDFs, and electromagnetic and weak couplings . The longitudinal structure function $`F_L^{\mathrm{NC}}`$ arises from QCD corrections to the naive quark parton model and makes a significant contribution to the NC cross section only at low $`Q^2`$. The structure function $`F_2^{\mathrm{NC}}`$ can be written as the sum of two terms, $`F_2^{\mathrm{NC}}=F_2^{\mathrm{em}}+F_2^{\gamma /Z}`$. $`F_2^{\mathrm{em}}`$ contains the purely electromagnetic contribution, while $`F_2^{\gamma /Z}`$ receives contributions from the parity conserving terms which arise from $`Z`$ exchange and photon-$`Z`$ interference. ZEUS and H1 have measured the structure function $`F_2^{\mathrm{NC}}`$ over the kinematic range $`3.6\times 10^{-5}<x<0.65`$ and $`0.14<Q^2<2\times 10^4\mathrm{GeV}^2`$ and extracted $`F_2^{\mathrm{em}}`$ from these measurements . The error in $`F_2^{\mathrm{em}}`$ is now dominated by systematic uncertainties over a large part of this range. Fits to the data using next to leading order, NLO, QCD have been used to extract the gluon density, $`xg`$, with a precision varying from 20% at $`x\sim 3\times 10^{-5}`$, $`Q^2=20\mathrm{GeV}^2`$ to 15% at $`x\sim 4\times 10^{-4}`$, $`Q^2=7\mathrm{GeV}^2`$ . This is an indirect determination of $`xg`$ since the positron scatters from a quark. A quantity more directly sensitive to the gluon density is the charm production cross section. This may be measured using a sample of events which contain a $`D^{*}`$ meson reconstructed using the decay chain $`D^{*}\to D^0\pi \to K\pi \pi `$. Both H1 and ZEUS have performed such a measurement in DIS and in photoproduction . A sensitive test of QCD can be made by comparing the measured charm cross sections to the cross sections obtained from the scaling violations of $`F_2^{\mathrm{em}}`$. At present the data are not sufficiently precise to allow a quantitative comparison to be made.
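As a numerical illustration of the kinematic factors in Eq. (1), the short Python sketch below evaluates the NC double differential cross section; the beam energies are those quoted in this report, while the structure-function values are invented purely for illustration and are not measured inputs.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036
S = 4.0 * 27.5 * 920.0        # (e p centre-of-mass energy)^2 in GeV^2, HERA beam energies
GEV2_TO_PB = 3.894e8          # conversion of GeV^-2 to pb

def d2sigma_nc(x, Q2, F2, xF3=0.0, FL=0.0, charge=+1):
    """Eq. (1): e+-p NC DIS double differential cross section in pb/GeV^2."""
    y = Q2 / (x * S)
    Yp, Ym = 1.0 + (1.0 - y)**2, 1.0 - (1.0 - y)**2
    # xF3 enters with a minus sign for e+ beams and a plus sign for e- beams
    reduced = F2 - charge * (Ym / Yp) * xF3 - (y**2 / Yp) * FL
    return 2.0 * np.pi * ALPHA_EM**2 / (x * Q2**2) * Yp * reduced * GEV2_TO_PB

# toy structure-function values, for illustration only
print(d2sigma_nc(x=0.01, Q2=100.0, F2=1.2, xF3=0.02, FL=0.05))
```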
In order to investigate the extent to which the gluon distribution extracted from the scaling violation of $`F_2^{\mathrm{em}}`$ agrees with that inferred from the charm production cross sections, it is necessary to extend the range of $`x`$ and $`Q^2`$ over which $`F_2^{\mathrm{em}}`$ is measured to high precision, so increasing the precision with which $`xg`$ can be extracted. At the same time a much larger charm data set is required to reduce the errors on the charm cross sections. The HERA luminosity upgrade combined with the improved charm tagging efficiency to be provided by the detector upgrades will make it possible to address the issue of whether the gluon distribution extracted from $`F_2^{\mathrm{em}}`$ is also that required to describe the charm production cross sections.
### 2.2 Neutral Current DIS Cross Section Measurement at High $`Q^2`$
The effect of $`Z`$ exchange on the NC DIS cross section has been observed at high $`Q^2`$. This is demonstrated in figure 1 where the single differential cross sections $`d\sigma _{e^+p}^{\mathrm{NC}}/dQ^2`$ and $`d\sigma _{e^+p}^{\mathrm{NC}}/dx`$ are compared with the expectations of the SM . The SM, including the $`Z`$ exchange contribution, describes the data while a calculation in which the $`Z`$ exchange contribution is ignored is in clear conflict with the data. The SM predicts that the $`\gamma Z`$ interference contribution to NC $`e^+p`$ scattering is destructive while in $`e^{-}p`$ scattering it is constructive. The measured single differential cross sections $`d\sigma _{e^+p}^{\mathrm{NC}}/dQ^2`$ and $`d\sigma _{e^{-}p}^{\mathrm{NC}}/dQ^2`$, shown in figure 2, are consistent with this expectation . It will be possible to extract the structure function $`xF_3^{\mathrm{NC}}`$ by combining the $`e^{-}p`$ and $`e^+p`$ data, however, with the luminosities currently available the precision of the measurement will be dominated by the statistical uncertainty. To improve the measurement will require a significant increase in luminosity and some improvements in the detection of the scattered lepton at high $`x`$ and at high $`Q^2`$.
A second motivation for significantly improving the NC data set at high $`x`$ and high $`Q^2`$ is indicated in figure 3 where $`\stackrel{~}{\sigma }_{e^+p}^{\mathrm{NC}}`$ is plotted as a function of $`Q^2`$ for various fixed values of $`x`$ . For $`x=0.45`$ the data points at the highest $`Q^2`$ lie above the SM expectation. In order to establish this effect as something other than a statistical fluctuation requires a significant increase in the size of the NC data set.
### 2.3 Searches for Physics Beyond the Standard Model
It is in the nature of searches for physics beyond the SM to push to the edges of phase space. Hence, such analyses are always in need of a large increase in data. Searches for physics beyond the SM at HERA are reviewed elsewhere in these proceedings . The limits presented in reference indicate that HERA plays a crucial role in exploring the full panorama of new physics precisely because it is the world's only $`ep`$ collider. As such the HERA collider experiments have the potential to make decisive contributions in the search for R-parity violating SUSY, leptoquarks, contact interactions and much more besides.
## 3 The Upgrades to the HERA Machine and the Collider Detectors
### 3.1 The HERA Accelerator Upgrade
The goals of the HERA upgrade programme are to provide an increase of a factor of five in luminosity and to provide longitudinal lepton beam polarisation for ZEUS and H1. Over a six year running period it is anticipated that a total luminosity of 1000 pb<sup>-1</sup> will be delivered .
The key parameters of the upgraded machine are summarised in table 1 . The five fold increase in luminosity is to be achieved by stronger focusing of the lepton and proton beams. In order to achieve the strong focusing required superconducting magnets must be installed close to the interaction region inside the H1 and ZEUS detectors. In order to calculate the synchrotron radiation background a maximum lepton beam energy, $`E_e`$, of 30 GeV was used. In operation $`E_e`$ will be reduced somewhat to ensure that the RF system performs reliably at high current. HERA has been operating reliably with a proton beam energy of 920 GeV since 1998 so that it is likely that the proton beam energy will remain 920 GeV after the upgrade.
The second major goal of the HERA upgrade, the provision of longitudinal lepton beam polarisation for the collider experiments, will be achieved by the provision of spin rotators for ZEUS and H1. Figure 4 shows the build up of polarisation during HERA running in 1998 . The asymptotic value of $`\sim `$60% is routinely obtained in HERA. Spin rotators have been operating at the HERMES interaction point for several years and are routinely providing 60% longitudinal polarisation. The present performance falls only slightly short of the design goal of 70% lepton beam polarisation at HERA after the upgrade.
### 3.2 The Collider Detector Upgrades
The collider detector upgrades focus on providing optimal performance for charm and beauty tagging and for the reconstruction of the scattered lepton at high $`x`$ and high $`Q^2`$ . Charm tagging is strongly enhanced by the ability to identify displaced vertices. Hence ZEUS will install a silicon micro-vertex detector. The boost of the lepton-quark centre of mass system throws the decay products of charmed particles preferentially into the forward direction. Hence the ZEUS micro-vertex detector will be equipped with a set of ‘wheels’ to cover the forward region. H1 already operates a micro-vertex detector. In order to enhance the charm tagging efficiency a forward silicon tracker will be installed and a set of ‘wheels’ will be added to the backward silicon tracker in order to measure the azimuthal coordinate so complementing the existing ‘wheels’ which measure the radial coordinate. To further enhance track reconstruction in the forward direction both collaborations will upgrade the forward tracking system . H1 will install a system of planar drift chambers while ZEUS is building a set of planar straw tube trackers. These detectors will significantly enhance track reconstruction in the forward direction and so lead to improved charm tagging and enhanced reconstruction of the scattered lepton particularly at high $`x`$ and at high $`Q^2`$ .
In view of the redesigned lepton and proton beams both experiments are planning upgrades to the luminosity measurement systems and H1 plans to upgrade the system of roman pots used to detect elastically scattered protons. Both collaborations plan trigger upgrades in order to increase flexibility and selectivity, particularly for events containing charmed particles.
## 4 Physics at HERA after the Upgrade
Following the HERA upgrade the proton will be probed using each of the four possible combinations of lepton beam charge and polarisation. The combination of high luminosity and polarisation will lead to a rich and diverse programme of measurements which can only be sketched below using a few examples. A detailed description of the physics opportunities afforded by the upgrade is to be found in reference .
### 4.1 Proton Structure
The large data volume will allow $`F_2^{\mathrm{NC}}`$ to be extracted with an accuracy of $`\sim `$3% over the kinematic range $`2\times 10^{-5}<x<0.7`$ and $`2\times 10^{-5}<Q^2<5\times 10^4\mathrm{GeV}^2`$ . This estimate is based on an assumed luminosity of 1000 pb<sup>-1</sup> and reasonable assumptions for systematic uncertainties. Such a measurement will represent a significant improvement over current results. If QCD evolution codes which go beyond next to leading order become available and a careful study of the dependence of the systematic errors on the kinematic variables is made, it will be possible to determine $`\alpha _\mathrm{S}`$ from the scaling violations of $`F_2^{\mathrm{NC}}`$ with a precision of $`0.003`$. The gluon distribution will also be extracted from such a fit. The result of a fit in which systematic errors are handled in an optimal way and assuming a luminosity of 1000 pb<sup>-1</sup> is shown in figure 5. The gluon distribution will be determined with a precision of $`\sim `$3% for $`x=10^{-4}`$ and $`Q^2=20\mathrm{GeV}^2`$
The combination of high luminosity and high charm tagging efficiency transforms the measurement of the charm contribution to $`F_2^{\mathrm{NC}}`$, $`F_2^{c\overline{c}}`$ . For example, figure 6 shows the precision on $`F_2^{c\overline{c}}`$ expected from a luminosity of 500 pb<sup>-1</sup>. The precision will be sufficient to allow a detailed study of the charm production cross section to be made. The lifetime tag provided by the silicon micro-vertex detector allows the tagging of $`b`$-quarks. Figure 7 shows the anticipated result of a measurement of the ratio of the beauty contribution to $`F_2^{\mathrm{NC}}`$, $`F_2^{b\overline{b}}`$, to $`F_2^{c\overline{c}}`$ assuming a luminosity of 500 pb<sup>-1</sup> . The figure indicates that H1 and ZEUS will be sensitive to the beauty content of the proton as well as the charm content.
In the quark parton model CC DIS is sensitive to specific quark flavours. The $`e^+p`$ CC DIS cross section is sensitive to the $`d`$\- and $`s`$-quark parton densities and the $`\overline{u}`$\- and $`\overline{c}`$-anti-quark densities, while the $`e^{-}p`$ CC DIS cross section is sensitive to the $`u,c,\overline{d}`$ and $`\overline{s}`$ parton density functions. With the large CC data sets expected following the upgrade it will be possible to use $`e^\pm p`$ CC data to determine the $`u`$\- and $`d`$-quark densities. Further, by identifying charm in CC DIS it will be possible to determine the strange quark contribution to the proton structure function $`F_2^{\mathrm{NC}}`$ with an accuracy of between 15% and 30% .
In summary, following the upgrade the HERA collider experiments will make a complete survey of the parton content of the proton.
### 4.2 Tests of the Electroweak Standard Model
The high luminosity provided by the upgrade will allow access to low cross section phenomena such as the production of real $`W`$-bosons. The SM cross section for the process $`ep\to eWX`$ is $`\sim `$1 pb which, combined with an acceptance of $`30\%`$, gives a sizeable data sample for a luminosity of 1000 pb<sup>-1</sup> (roughly 1 pb $`\times `$ 0.3 $`\times `$ 1000 pb<sup>-1</sup> $`\approx `$ 300 reconstructed events). The production of the $`W`$-boson at HERA is sensitive to the non-abelian coupling $`WW\gamma `$ . The sensitivity of HERA to non-SM couplings is comparable to the sensitivities obtained at LEP and at the Tevatron and complementary in that at HERA the $`WW\gamma `$ vertex is probed in the space-like regime.
The full potential of electroweak tests at HERA will be realised through measurements using polarised lepton beams . Two types of electroweak test have been considered. The first involves the interpretation of NC and CC cross section measurements in terms of parameters of the SM such as the mass of the $`W`$ boson, $`M_W`$, and the mass of the top quark, $`m_t`$. The consistency of the SM requires that the values extracted must be in agreement with those obtained in measurements of the same parameters in other experiments. The second form of SM test involves the determination of parameters, such as the light quark NC couplings, which are not free parameters in the SM. In this case a deviation from the SM prediction would be a signal for new physics.
Within the SM NC and CC DIS cross sections may be written in terms of $`\alpha `$, $`M_W`$ and $`m_t`$ together with the mass of the $`Z`$ boson, $`M_Z`$, and the mass of the Higgs boson, $`M_H`$. In order to test the consistency of the theory we may fix the values of $`\alpha `$ and $`M_Z`$ to those obtained at LEP or elsewhere and use measurements of the NC and CC DIS cross sections to place constraints in the $`M_W`$, $`m_t`$ plane for fixed values of $`M_H`$. The SM is consistent if the values of the parameters $`M_W`$ and $`m_t`$ obtained agree with the values determined in other experiments. The result of such an analysis is shown in figure 8 where one standard deviation contours in the $`M_W`$, $`m_t`$ plane have been derived from anticipated measurements of NC and CC DIS in $`e^{-}p`$ scattering with an electron polarisation of $`-70\%`$ . Combining NC and CC data corresponding to a luminosity of 1000 pb<sup>-1</sup> with a top mass measurement from the Tevatron with a precision of $`\pm 5`$ GeV yields a measurement of $`M_W`$ with an error of $`60`$ MeV.
The sensitivity of NC DIS to lepton beam polarisation is shown in figure 9(a). The figure shows the ratio
$$R=\left(\frac{d^2\sigma ^{\mathrm{NC}}}{dxdQ^2}\right)/\left(\frac{d^2\sigma ^{\mathrm{em}}}{dxdQ^2}\right)$$
(2)
where $`d^2\sigma ^{\mathrm{em}}/dxdQ^2`$ is the differential cross section obtained if only photon exchange is taken into account. The strong polarisation dependence of the NC cross section can be used to extract the NC couplings of the light quarks. In such an analysis the CC cross section may be used to reduce the sensitivity of the results to uncertainties in the PDFs . The precision of the results obtained depends strongly on the degree of polarisation of the lepton beam, as shown in figure 9(b). The figure shows the anticipated error on the vector and axial-vector couplings of the $`u`$-quark, $`v_u`$ and $`a_u`$ respectively, obtained in a fit in which $`v_u`$ and $`a_u`$ are allowed to vary while all other couplings are fixed at their SM values. With a luminosity of 250 pb<sup>-1</sup> per charge and polarisation combination, and taking the vector and axial-vector couplings of the $`u`$\- and $`d`$-quarks as free parameters, one obtains a precision of 13%, 6%, 17% and 17% for $`v_u`$, $`a_u`$, $`v_d`$ and $`a_d`$ respectively. By comparing these results with the NC couplings of the $`c`$\- and $`b`$-quarks obtained at LEP a stringent test of the universality of the NC couplings of the quarks will be made.
## 5 Summary
Deep inelastic scattering has played, and continues to play, a central role in the development of the understanding of the interactions among the fundamental particles. The HERA upgrade programme provides exciting opportunities which are both qualitatively and quantitatively new. The measurements to be performed in the years following the HERA upgrade will impinge directly on the description of the structure of the proton, the nature of the strong interaction and the electroweak sector of the Standard Model.
# Holographic Bound in Brans-Dicke Cosmology
## I Introduction
In black hole theory, we know that the total entropy of matter inside a black hole cannot be greater than the Bekenstein-Hawking entropy, which is $`1/4`$ of the area of the event horizon of the black hole measured in Planck units . The extension of this statement to more general situations leads to the holographic principle . The most radical version of the holographic principle, motivated by the AdS/CFT conjecture, is that all the information about a physical system in a spatial region is encoded in the boundary. The application of this idea to cosmology was first considered by Fischler and Susskind . The universe does not have a boundary, so how can we apply the holographic principle to cosmology? Fischler and Susskind answered this question by considering the space inside the particle horizon. They proposed that the matter entropy inside a spatial volume of particle horizon would not exceed $`1/4`$ of the area of the particle horizon measured in Planck units. They found that the flat universe and the open universe obey this version of the holographic principle. However, the closed universe violates this principle. This may imply that our universe is flat or open. On the other hand, it may imply that we need to revise the holographic principle somehow. Easther and Lowe used the generalized second law of thermodynamics to replace the holographic principle . Bak and Rey considered the apparent horizon instead of the particle horizon to solve the problem . In cosmology, there is a natural choice of length scale, the Hubble distance $`H^{-1}`$. $`H^{-1}`$ coincides with the particle horizon and the apparent horizon apart from an order 1 numerical factor for the flat universe, but it becomes much larger than the apparent horizon for the closed universe. So we know that the choice of $`H^{-1}`$ as the horizon cannot solve the problem of the violation of the holographic principle in the closed universe. The holographic principle in cosmology is also discussed in and . Einstein’s theory may not describe gravity at very high energy. The simplest generalization of Einstein’s theory is Brans-Dicke theory. The recent interest in scalar-tensor theories of gravity arises from inflationary cosmology, supergravity and string theory. There exists at least one scalar field, the dilaton field, in the low energy effective bosonic string theory. Scalar degrees of freedom arise also upon compactification of higher dimensions. In this paper, we apply the Fischler and Susskind proposal to Brans-Dicke cosmology in both the Jordan and Einstein frames.
## II Brans-Dicke Cosmology in Jordan Frame
The Brans-Dicke Lagrangian in Jordan frame is given by
$$\mathcal{L}_{BD}=\sqrt{-\gamma }\left[\varphi \stackrel{~}{R}+\omega \gamma ^{\mu \nu }\frac{\partial _\mu \varphi \partial _\nu \varphi }{\varphi }\right]-\mathcal{L}_m(\psi ,\gamma _{\mu \nu }).$$
(1)
The above Lagrangian (1) keeps its form under the conformal transformations
$$g_{\mu \nu }=\mathrm{\Omega }^2\gamma _{\mu \nu },\mathrm{\Omega }=\varphi ^\lambda ,(\lambda \ne \frac{1}{2}),\sigma =\varphi ^{1-2\lambda },\overline{\omega }=\frac{\omega -6\lambda (\lambda -1)}{(2\lambda -1)^2}.$$
For the case $`\lambda =1/2`$, we make the following transformations
$$g_{\mu \nu }=e^{\alpha \sigma }\gamma _{\mu \nu },$$
(2)
$$\varphi =\frac{1}{2\kappa ^2}e^{\alpha \sigma },$$
(3)
where $`\kappa ^2=8\pi G`$, $`\alpha =\beta \kappa `$, and $`\beta ^2=2/(2\omega +3)`$. Remember that the Jordan-Brans-Dicke Lagrangian is not invariant under the above transformations (2) and (3). The homogeneous and isotropic Friedman-Robertson-Walker (FRW) space-time metric is
$$ds^2=-dt^2+a^2(t)\left[\frac{dr^2}{1-kr^2}+r^2d\mathrm{\Omega }\right],$$
(4)
the above metric can be written as
$$ds^2=-dt^2+a^2(t)\left[d\chi ^2+\mathrm{\Sigma }^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\right],$$
(5)
where
$$\mathrm{\Sigma }=\{\begin{array}{cc}\chi \hfill & k=0,\\ \mathrm{sinh}\chi \hfill & k=-1,\\ \mathrm{sin}\chi \hfill & k=1.\end{array}$$
(6)
Based on the FRW metric and the perfect fluid $`T_m^{\mu \nu }=(\rho +p)U^\mu U^\nu +pg^{\mu \nu }`$ as the matter source, we can get the evolution equations of the universe from the action (1)
$$H^2+\frac{k}{a^2}+H\frac{\dot{\varphi }}{\varphi }-\frac{\omega }{6}\left(\frac{\dot{\varphi }}{\varphi }\right)^2=\frac{8\pi }{3\varphi }\rho ,$$
(7)
$$\ddot{\varphi }+3H\dot{\varphi }=4\pi \beta ^2(\rho -3p),$$
(8)
$$\dot{\rho }+3H(\rho +p)=0.$$
(9)
If we are given a state equation for the matter $`p=\gamma \rho `$, then the solution to Eq. (9) is
$$\rho a^{3(\gamma +1)}=C_1.$$
(10)
Most of the cosmological solutions in this paper were given in . For the case $`k=0`$, we can get the power-law solutions to the Eqs. (7) and (8) with the help of Eq. (10),
$$a(t)=a_0t^p,\varphi (t)=\varphi _0t^q,$$
(11)
where
$$p=\frac{2+2\omega (1-\gamma )}{4+3\omega (1-\gamma ^2)},q=\frac{2(1-3\gamma )}{4+3\omega (1-\gamma ^2)},-1\le \gamma <1-\frac{2}{3+\sqrt{6}/\beta },$$
(12)
$`a_0`$ and $`\varphi _0`$ are integration constants, and $`[q(q-1)+3pq]\varphi _0=4\pi \beta ^2(1-3\gamma )C_1a_0^{-3(\gamma +1)}`$. The solution for $`\gamma =1/3`$ is very special because the scalar field does not evolve. We will take care of this case later. The particle horizon is
$$r_H=\int _0^t\frac{d\stackrel{~}{t}}{a(\stackrel{~}{t})}=\frac{4+3\omega (1-\gamma ^2)}{a_0[2+\omega (1-\gamma )(1+3\gamma )]}t^{1-p}.$$
(13)
Therefore, the ratio between the entropy inside the particle horizon and the area of the horizon is
$$\frac{S}{GA/4}=\frac{4}{3G}ϵ\frac{r_H}{a^2}=\frac{4ϵ}{3G}\frac{4+3\omega (1-\gamma ^2)}{a_0^3[2+\omega (1-\gamma )(1+3\gamma )]}t^{1-3p},$$
(14)
where $`ϵ`$ is the constant comoving entropy density, and $`1-3p=-[2+3\omega (1-\gamma )^2]/[4+3\omega (1-\gamma ^2)]`$. The holographic bound is satisfied for $`\gamma `$ in the range given by Eq. (12) if the above ratio is not greater than 1 initially.
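Since the argument rests on the sign of $`1-3p`$, the following minimal numerical sketch evaluates the exponents of Eqs. (12) and (14); the values of $`\omega `$ and $`\gamma `$ used are illustrative assumptions. The exponent $`1-3p`$ comes out negative throughout the allowed range, so the ratio indeed decreases with time.

```python
def exponents(gamma, omega):
    # power-law solutions of Eqs. (11)-(12); S/(GA/4) scales as t^(1-3p)
    D = 4.0 + 3.0 * omega * (1.0 - gamma**2)
    p = (2.0 + 2.0 * omega * (1.0 - gamma)) / D
    q = 2.0 * (1.0 - 3.0 * gamma) / D
    return p, q, 1.0 - 3.0 * p

omega = 500.0   # consistent with the observational bound quoted later in the text
for gamma in (-0.9, 0.0, 1.0/3.0, 0.9):
    p, q, s = exponents(gamma, omega)
    # check the closed form 1-3p = -(2+3*omega*(1-gamma)^2)/(4+3*omega*(1-gamma^2))
    assert abs(s + (2 + 3*omega*(1-gamma)**2) / (4 + 3*omega*(1-gamma**2))) < 1e-12
    print(f"gamma={gamma:+.2f}: p={p:.4f}, q={q:+.5f}, 1-3p={s:+.5f}")
```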
For the case $`k=\pm 1`$, we do not have a general solution for all values of $`\gamma `$, so we consider two special cases: the matter dominated universe with $`\gamma =0`$ and the radiation dominated universe with $`\gamma =1/3`$. It is convenient to use the conformal time $`d\eta =dt/a(t)`$.
For $`\gamma =1/3`$, we can solve Eq. (8) to get
$$a^3\dot{\varphi }=C_2,$$
(15)
where $`C_2\ne 0`$ is an integration constant.
* $`k=1`$, the solutions are
$$\varphi (\eta )=\varphi _0\left[\frac{4\pi C_1\mathrm{tan}(\eta +\eta _0)/3+\sqrt{16\pi ^2C_1^2/9+C_2^2/6\beta ^2}-\sqrt{C_2^2/6\beta ^2}}{4\pi C_1\mathrm{tan}(\eta +\eta _0)/3+\sqrt{16\pi ^2C_1^2/9+C_2^2/6\beta ^2}+\sqrt{C_2^2/6\beta ^2}}\right]^{\sqrt{3\beta ^2/2}},$$
(16)
$$a^2(\eta )\varphi (\eta )=\frac{4\pi C_1}{3}+\sqrt{\frac{16\pi ^2C_1^2}{9}+\frac{C_2^2}{6\beta ^2}}\mathrm{sin}[2(\eta +\eta _0)],$$
(17)
where $`\eta _0`$ is an integration constant. The entropy to area ratio is
$$\frac{S}{GA/4}=\frac{ϵ(2\eta -\mathrm{sin}2\eta )\varphi (\eta )}{G\mathrm{sin}^2\eta \{4\pi C_1/3+\sqrt{64\pi ^2C_1^2/9+2C_2^2/3\beta ^2}\mathrm{sin}[2(\eta +\eta _0)]\}}.$$
(18)
Note that $`0\le 2(\eta +\eta _0)\le \pi `$, so we see that the holographic bound can be satisfied if it is satisfied initially.
* $`k=-1`$ and $`C_2^2<32\pi ^2\beta ^2C_1^2/3`$, we have solutions
$$a^2(\eta )\varphi (\eta )=\frac{4\pi C_1}{3}+\frac{1}{16}e^{2(\eta +\eta _0)}+\left(\frac{64\pi ^2C_1^2}{9}-\frac{2C_2^2}{3\beta ^2}\right)e^{-2(\eta +\eta _0)},$$
(19)
$$\varphi =\varphi _0\left[\frac{(4\pi C_1/3+b)\mathrm{tanh}(\eta +\eta _0)-c+C_2/\beta \sqrt{6}}{(4\pi C_1/3+b)\mathrm{tanh}(\eta +\eta _0)-c-C_2/\beta \sqrt{6}}\right]^{\sqrt{3\beta ^2/2}},$$
(20)
where $`b=1/16+64\pi ^2C_1^2/9-2C_2^2/3\beta ^2`$ and $`c=1/16-64\pi ^2C_1^2/9+2C_2^2/3\beta ^2`$. The Brans-Dicke scalar field changes very slowly compared to the scale factor. Therefore the holographic bound
$$\frac{S}{GA/4}=\frac{ϵ(\mathrm{sinh}2\eta -2\eta )\varphi }{G\mathrm{sinh}^2\eta [4\pi C_1/3+e^{2(\eta +\eta _0)}/16+(64\pi ^2C_1^2/9-2C_2^2/3\beta ^2)e^{-2(\eta +\eta _0)}]}\le 1,$$
(21)
will be satisfied if it is satisfied initially.
* $`k=0`$, the solutions are
$$a^2(\eta )\varphi (\eta )=\frac{8\pi C_1}{3}(\eta +\eta _0)^2-\frac{C_2^2}{16\pi \beta ^2C_1},$$
(22)
$$\varphi (\eta )=\varphi _0\left[\frac{\eta +\eta _0-\sqrt{6}C_2/16\pi \beta C_1}{\eta +\eta _0+\sqrt{6}C_2/16\pi \beta C_1}\right]^{\sqrt{3\beta ^2/2}}.$$
(23)
The Brans-Dicke scalar field $`\varphi `$ slowly increases up to $`\varphi _0`$ as the universe expands. The holographic bound
$$\frac{S}{GA/4}=\frac{4ϵ}{3G}\frac{\eta \varphi (\eta )}{8\pi C_1(\eta +\eta _0)^2/3-C_2^2/16\pi \beta ^2C_1}\le 1$$
(24)
can be satisfied if it is satisfied initially.
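As an illustration of this last case, the sketch below evaluates the ratio (24) for arbitrarily chosen constants $`C_1`$, $`C_2`$, $`\beta `$ and $`\eta _0`$ (these values are assumptions for illustration, not fitted numbers, and $`\varphi _0`$ is set to one) and shows the expected fall-off at late times.

```python
import numpy as np

def ratio_k0(eta, C1=1.0, C2=0.5, beta=0.1, eta0=1.0, eps_over_G=1.0):
    # Eqs. (22)-(24) for k=0, gamma=1/3, with phi0 = 1
    shift = np.sqrt(6.0) * C2 / (16.0 * np.pi * beta * C1)
    phi = ((eta + eta0 - shift) / (eta + eta0 + shift))**np.sqrt(1.5 * beta**2)
    a2phi = 8.0 * np.pi * C1 * (eta + eta0)**2 / 3.0 - C2**2 / (16.0 * np.pi * beta**2 * C1)
    return (4.0 * eps_over_G / 3.0) * eta * phi / a2phi

for eta in (0.5, 1.0, 5.0, 50.0):
    print(f"eta = {eta:5.1f}:  S/(GA/4) ~ {ratio_k0(eta):.4f}")
# the ratio falls off like 1/eta at late times, so the bound holds once it holds initially
```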
For $`\gamma =0`$, the solutions are:
$$a(\eta )=a_0e^{b\eta },\varphi =\varphi _0e^{-b\eta },$$
(25)
where $`b^2=-2k/(2+\omega )`$ and $`4\pi \beta ^2C_1=-a_0\varphi _0b^2`$.
* $`k=-1`$ and $`-2<\omega <-3/2`$, the above solutions (25) are exponential expansion in the conformal time $`\eta `$ or linear expansion in the coordinate time $`t`$. The entropy to area ratio is
$$\frac{S}{GA/4}=\frac{ϵ(\mathrm{sinh}2\eta -2\eta )}{Ga_0^2e^{2b\eta }\mathrm{sinh}^2\eta }.$$
(26)
So the holographic bound can be satisfied for $`-2<\omega <-3/2`$ if it is satisfied initially.
* $`k=1`$ and $`\omega <-2`$, the solutions (25) are linear in the coordinate time $`t`$. The entropy to area ratio is
$$\frac{S}{GA/4}=\frac{ϵ(2\eta -\mathrm{sin}2\eta )}{Ga_0^2e^{2b\eta }\mathrm{sin}^2\eta }.$$
(27)
It is obvious that the holographic bound can be violated when $`\eta =n\pi `$ for any integer $`n`$.
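A short numerical check of Eq. (27) makes the violation explicit; the value $`\omega =-3`$ used below is an illustrative choice satisfying the requirement $`\omega <-2`$, and the constants $`a_0`$ and $`ϵ/G`$ are set to one for simplicity.

```python
import numpy as np

def ratio_closed(eta, omega=-3.0, a0=1.0, eps_over_G=1.0):
    # Eq. (27): k=1, gamma=0, with b^2 = -2k/(2+omega) > 0 for omega < -2
    b = np.sqrt(-2.0 / (2.0 + omega))
    return eps_over_G * (2.0*eta - np.sin(2.0*eta)) / (a0**2 * np.exp(2.0*b*eta) * np.sin(eta)**2)

for eta in (1.0, 2.0, 3.0, np.pi - 1e-2, np.pi - 1e-4):
    print(f"eta = {eta:.4f}:  S/(GA/4) ~ {ratio_closed(eta):.3e}")
# the 1/sin^2(eta) factor makes the ratio diverge as eta -> n*pi
```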
In fact, the current experimental constraint on $`\omega `$ is $`\omega >500`$ or $`\beta ^2<0.002`$. The solutions (25) may therefore not be physical. However, the low energy effective theory of the string theory can lead to $`\omega =-1`$, so we may need to explore the possibility of negative $`\omega `$. For positive $`\omega `$, we need to solve the equations numerically. When $`\omega \to \mathrm{\infty }`$ and at late times, the Brans-Dicke cosmological solutions become general relativistic solutions.
## III Brans-Dicke Cosmology in Einstein Frame
The Brans-Dicke Lagrangian in Einstein frame is obtained by the conformal transformations (2) and (3),
$$\mathcal{L}=\sqrt{-g}\left[\frac{1}{2\kappa ^2}R-\frac{1}{2}g^{\mu \nu }\partial _\mu \sigma \partial _\nu \sigma \right]-\mathcal{L}_m(\psi ,e^{\alpha \sigma }g_{\mu \nu }).$$
(28)
The perfect fluid becomes $`T_m^{\mu \nu }=e^{2\alpha \sigma }[(\rho +p)U^\mu U^\nu +pg^{\mu \nu }]`$. From the FRW metric in the Einstein frame, we can get the evolution equations of the universe from the action (28)
$$H^2+\frac{k}{a^2}=\frac{\kappa ^2}{3}\left(\frac{1}{2}\dot{\sigma }^2+e^{2\alpha \sigma }\rho \right),$$
(29)
$$\ddot{\sigma }+3H\dot{\sigma }=\frac{1}{2}\alpha e^{2\alpha \sigma }(\rho -3p),$$
(30)
$$\dot{\rho }+3H(\rho +p)=-\frac{3}{2}\alpha \dot{\sigma }(\rho +p).$$
(31)
With $`p=\gamma \rho `$, the solution to Eq. (31) is
$$\rho a^{3(\gamma +1)}e^{3\alpha (\gamma +1)\sigma /2}=C_3,$$
(32)
where $`C_3`$ is a constant of integration. For the flat universe $`k=0`$, combining Eqs. (29), (30) and (31), we have
$$ae^{\alpha (1-\gamma )\sigma /\beta ^2(1-3\gamma )}=C_4,$$
(33)
where $`C_4`$ is an integration constant and the above equation is valid for $`-1\le \gamma <1-2/(3+\sqrt{6}/\beta )`$ and $`\gamma \ne 1/3`$. To obtain the above solution, we assume that $`\dot{\sigma }a^3\to 0`$ and $`\dot{a}a^2\to 0`$ when $`a\to 0`$. From Eqs. (29), (32) and (33), we get
$$H^2=\frac{2\kappa ^2(1-\gamma )^2C_3C_4^{\beta ^2(1-3\gamma )^2/2(1-\gamma )}}{6(1-\gamma )^2-\beta ^2(1-3\gamma )^2}a^{-[6(1-\gamma ^2)+\beta ^2(1-3\gamma )^2]/2(1-\gamma )}.$$
(34)
The particle horizon is
$$r_H=\int _0^a\frac{d\stackrel{~}{a}}{\stackrel{~}{a}^2H}=Ba^{[2(1-\gamma )(1+3\gamma )+\beta ^2(1-3\gamma )^2]/4(1-\gamma )},$$
(35)
where
$$B=\frac{4C_4^{-\beta ^2(3\gamma -1)^2/4(1-\gamma )}\sqrt{6(1-\gamma )^2-\beta ^2(1-3\gamma )^2}}{[2(1-\gamma )(1+3\gamma )+\beta ^2(3\gamma -1)^2]\sqrt{2\kappa ^2C_3}}$$
is a constant coefficient. The entropy to area ratio is
$$\frac{S}{GA/4}=\frac{4ϵB}{3G}a^{-[6(1-\gamma )^2-\beta ^2(1-3\gamma )^2]/4(1-\gamma )}.$$
(36)
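The sign of the exponent in Eq. (36) controls whether the ratio grows or decays with expansion. The following minimal sketch evaluates it for the current constraint $`\beta ^2<0.002`$; the list of sample $`\gamma `$ values is an illustrative assumption. The exponent stays negative throughout the range of Eq. (12) and vanishes only at the upper endpoint.

```python
import numpy as np

def exponent(gamma, beta2):
    # S/(GA/4) ~ a^exponent in the Einstein frame, Eq. (36)
    return -(6.0*(1.0-gamma)**2 - beta2*(1.0-3.0*gamma)**2) / (4.0*(1.0-gamma))

beta2 = 0.002                                  # current bound, omega > 500
gstar = 1.0 - 2.0/(3.0 + np.sqrt(6.0/beta2))   # upper end of the range in Eq. (12)
for g in (-1.0, 0.0, 1.0/3.0, 0.9, gstar):
    print(f"gamma = {g:+.4f}:  exponent = {exponent(g, beta2):+.5f}")
# negative throughout the allowed range; it vanishes exactly as gamma -> gstar
```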
For $`\gamma =1`$, we find that
$$Ha^3=C_6,$$
where $`C_6`$ is an integration constant. The entropy to area ratio
$$\frac{S}{GA/4}=\frac{2ϵ}{3GC_6},$$
is also a constant.
For $`\gamma =1/3`$, we have
$$a^3\dot{\sigma }=C_5,$$
(37)
where $`C_5\ne 0`$ is an integration constant.
1. $`k=0`$, the entropy to area ratio is
$$\frac{S}{GA/4}=\frac{4ϵ}{G\kappa C_3}\frac{\sqrt{C_3a^2+C_5^2/2}-\sqrt{C_5^2/2}}{\sqrt{3}a^2}.$$
(38)
Therefore, from Eqs. (36) and (38), we see that the holographic principle is satisfied for $`-1\le \gamma <1-2/(3+\sqrt{6}/\beta )`$ provided that it is satisfied initially.
2. $`k=-1`$ and $`\kappa ^2C_3^2\ge 6C_5^2`$, we have
$$e^{2\chi _H}=\frac{2\sqrt{a^4+\kappa ^2C_3^2a^2/3+\kappa ^2C_5^2/6}+2a^2+\kappa ^2C_3/3}{2\sqrt{\kappa ^2C_5^2/6}+\kappa ^2C_3/3}.$$
(39)
The entropy to area ratio is
$$\frac{S}{GA/4}=\frac{ϵ(\mathrm{sinh}2\chi _H-2\chi _H)}{Ga^2\mathrm{sinh}^2\chi _H}.$$
(40)
As $`a`$ increases, $`4S/GA`$ decreases. The holographic bound is satisfied if it is satisfied initially.
3. $`k=1`$, we have
$$2\chi _H=\mathrm{arcsin}\frac{\kappa C_3}{\sqrt{\kappa ^2C_3^2+6C_5^2}}+\mathrm{arcsin}\frac{6a^2-\kappa ^2C_3}{\sqrt{\kappa ^4C_3^2+6\kappa ^2C_5^2}}.$$
(41)
The holographic bound
$$\frac{S}{GA/4}=\frac{ϵ(2\chi _H-\mathrm{sin}2\chi _H)}{Ga^2\mathrm{sin}^2\chi _H}\le 1$$
(42)
is satisfied if it is satisfied initially.
For $`\gamma =0`$ and $`k^2=1`$, we do not have any analytical solution. We need to solve the problem numerically.
## IV Conclusions
We have analyzed the holographic principle in Brans-Dicke theory. For the flat universe, we find that the holographic bound can be satisfied for any matter with $`-1\le \gamma <1-2/(3+\sqrt{6}/\beta )`$. For the universe with $`k^2=1`$, we do not have general analytical solutions for all values of $`\gamma `$. In particular, we do not have an analytical solution for the matter dominated $`k^2=1`$ universe. We know that in standard Friedman cosmology, the holographic principle is violated for the closed matter dominated universe near the maximal expansion. To check the holographic bound for the $`k=1`$ matter dominated Brans-Dicke cosmological model, we need to do a numerical calculation. However, the numerical results in tell us that the expansion rates in Brans-Dicke models are slower than those in Friedman models. At large times, the difference becomes negligible. Therefore we expect that the holographic bound is also violated for the $`k=1`$ matter dominated universe in Brans-Dicke cosmology.
More recently, Bousso considered the holographic bound in regions generated by null geodesics. He gave a prescription to select light sheets, which are hypersurfaces generated by surfaces orthogonal to null geodesics with non-positive expansion. This covariant entropy bound can hold in general space-times (the author thanks Raphael Bousso for the references). I believe that the covariant entropy bound is also satisfied for Brans-Dicke cosmological models. To defend this belief, we need to do numerical calculations.
# Effective density dependent pairing forces in the T=1 and T=0 channels.
## I Introduction
The novel availability of exotic nuclei has spurred an immense revival of nuclear structure investigations . Indeed nuclei close to the neutron or proton drip lines may exhibit very unusual features such as pronounced neutron or proton skins , and neutron halos . Among many very interesting questions, nuclear pairing has again come to the forefront of theoretical interest. Indeed the existence of neutron halos is due to the pairing force, and in heavier proton rich $`N\simeq Z`$ nuclei the proton-neutron pairing may play an important role . In this work we therefore want to address some problems of neutron-neutron and proton-neutron pairing. This concerns for instance considerations of the effective pairing interactions. However, we will also discuss some other aspects of more general character. We will mostly study the infinite matter case.
## II Generalities on the nuclear pairing forces
It is a well established fact that, aside from the exception of magicity, nuclei are superfluid. There can be $`nn`$ as well as $`pp`$ pairing whereas $`pn`$ pairing is less frequent. One of the main questions we will treat here is the effective pairing force. We will do this in the framework of homogeneous nuclear matter at various densities. The limit to finite nuclei can be established through the Local Density Approximation (LDA) which seems to work very well also for the nuclear pairing problem . Quite generally the equation for the gap $`\mathrm{\Delta }`$ in nuclear matter can be written as
$$\mathrm{\Delta }_𝐩=-\underset{𝐤}{\sum }v_{\mathrm{𝐩𝐤}}\frac{\mathrm{\Delta }_𝐤}{2\sqrt{\left(ϵ_k-ϵ_F\right)^2+\mathrm{\Delta }_𝐤^2}}$$
(1)
where $`v_{\mathrm{𝐩𝐤}}`$ is the (effective) pairing force, the $`ϵ_k`$ are the Brueckner-Hartree-Fock single particle energies and $`ϵ_F`$ is the Fermi energy. The summation goes over momentum states. In (1) we did not specify whether we consider the $`T=1`$ or $`T=0`$ channels.
The first aspect we want to discuss is what kind of force $`v_{pk}`$ shall be used in Eq.(1) from a microscopic point of view. The answer to this question is in principle very well known since the early days of superconductivity and superfluidity. Since the gap equation can be derived from the Bethe-Salpeter equation for the two particle many-body Green’s function , the pairing force $`v_{pk}`$ is built out of the sum of all particle-particle irreducible Feynman graphs . To lowest order in the bare interaction it is given by Fig. 1.
In Fig. 1 the dot stands for the bare vertex. The second term represents a $`ph`$ screening correction to the bare force. The very important point we want to make here is that no Bethe-Goldstone or Brueckner G-matrix of any kind can be used in the gap equation. Since the gap equation is already a kind of in-medium two-body Schroedinger equation (see e.g. Ref. ), one cannot use a G-matrix which is itself a solution of the in-medium two-body problem. Otherwise there is severe double counting. Since the G-matrix essentially softens the short range repulsion, one expects that pairing becomes enhanced if it is used in the gap equation. In the pairing problem everything depends exponentially on the system parameters, and this effect can then be quite large. A demonstration is given in Fig. 2, where the $`nn`$ gap is calculated once with the bare Paris force and once with the corresponding G-matrix . One sees that the use of the G-matrix enhances the gap value by practically a factor of two.
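A back-of-the-envelope way to see the size of this effect is the weak-coupling BCS estimate $`\mathrm{\Delta }\sim 2ϵ_Fe^{-1/g}`$ with $`g=N(0)|v|`$. The sketch below uses purely indicative numbers (they are assumptions, not taken from the calculations behind Fig. 2) to show how a modest softening of the interaction inflates the gap.

```python
import numpy as np

HB2M = 20.735        # hbar^2/2m in MeV fm^2
kF = 1.0             # fm^-1
eF = HB2M * kF**2    # Fermi energy, ~20.7 MeV

# weak-coupling estimate Delta ~ 2*eps_F*exp(-1/g), g = N(0)|v|
for g in (0.40, 0.48):   # ~20% stronger effective force, mimicking G-matrix softening
    print(f"g = {g:.2f}:  Delta ~ {2.0*eF*np.exp(-1.0/g):.2f} MeV")
```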
Sometimes Eq.(1) is divided into a low momentum and a high momentum space and the high momentum space is eliminated by consistently renormalizing the bare interaction in the low momentum space . This type of procedure is, of course, perfectly allowed, since it is only a mathematical trick for solving Eq.(1). Unfortunately it has been a quite widespread habit in nuclear physics for decades (see for example and the critics given in ) to use some kind of G-matrix in Eq.(1), as for example Skyrme forces, which are to be considered as a phenomenological representation of a microscopic G-matrix. One may object that one of the most successful nuclear $`nn`$ pairing forces, namely the Gogny force , is also to be considered as a G-matrix. Things are, however, more subtle there, as we now will explain.
We see that D1S is still much closer to Paris than D1. Indeed D1S has been readjusted to give in first place a lower surface tension than D1 but at the same time to give a smaller even-odd staggering so that it becomes in closer agreement with experiment. It is very surprising that this readjustment brought D1S so close to the bare Paris force. So in the $`S=0T=1`$ channel the Gogny force acts like a realistic bare force at least in what concerns energies up to the Fermi energy. This conclusion was also found in and is further confirmed by the fact that the scattering length corresponding to D1S, $`a_{D1S}=12.16`$ fm, is very large and of the same order of magnitude as the experimental value $`a=18.5`$ fm.
The reason why the Gogny force acts like a free force in the $`nn`$ pairing channel in spite of the fact that it has been adjusted to the G-matrix from the Sprung-Tourreil force can only be guessed: probably for this force in that channel the Pauli blocking is so efficient that in the G-matrix equation, $`G=v+v\frac{Q}{e}G`$ , the second term on the r.h.s. is suppressed. On the other hand the question remains why experiment apparently demands a pairing force very close to the bare one. This is true at least in the $`T=1`$ channel. For the $`T=0`$ channel much less investigations have been performed and it is unclear whether a bare force can be used as well. One reason which can be advanced to explain the validity of the bare force is a possible cancellation between screening effects and effective mass enhancement. Graphically these two possibly opposing effects are shown to lowest order in the interaction in Fig. 4.
In this respect it should be mentioned that the Hartree-Fock-Bogoliubov (HFB) calculations with the Gogny force are performed with the so called $`k`$mass $`m^{}<m`$. However one knows that the corresponding level density close to the Fermi energy is much too small. Including $`E`$mass corrections such as the one shown in Fig. 4 brings the effective mass at the Fermi level back to the bare mass or even overshoots it. For consistency the screening of the bare force also shown in Fig. 4 must be included. Larger effective masses enhance pairing while screening probably weakens it so that the net effect could be the bare force. To investigate such effects, extreme care must be taken that both contributions of Fig. 4 are treated on the same footing. Since, as already mentioned, pairing depends exponentially on the system parameters, the slightest imbalance (for example in treating both graphs of Fig. 4 in slightly different approximations) may cause strong erroneous results. One way to treat things consistently could be to use the Gorkov equations and develop the normal and abnormal parts of the mass operator matrix to second order Born approximation and solve the corresponding gap equation numerically. In medium effects similar to the ones shown in Fig. 4 have been included in the past to the pairing problem in one way or the other . Practically all calculations resulted in an important reduction of $`\mathrm{\Delta }=\mathrm{\Delta }\left(k_F\right)`$ compared to the values shown in Fig. 3. It can be deduced from the study in that a reduction of pairing in infinite matter obtained with the Gogny force in a global way, i.e. for all values $`0k_F1.4`$ fm<sup>-1</sup>, inevitably leads also to a reduction of pairing in finite nuclei of the same proportions (this fact can be understood via the local density approximation which as mentioned already, on average, yields comparable results to quantal calculations ). It therefore can be concluded that e.g. a reduction of the $`\mathrm{\Delta }`$values in Fig. 3 by a factor of two (a scenario often encountered in the calculations of references mentioned above) will fail to reproduce experimental gap values of nuclei when the underlying theory is applied to finite nuclei.
Concluding these general considerations we want to say that in absence of any necessity stemming from experimental facts it is probably safe to treat nuclear pairing in conventional mean field theory with the bare nucleon-nucleon potential as this is indicated from the microscopic theory and as apparently is needed to reproduce experimental facts in the $`T=1`$ channel. Using this philosophy one arrives naturally for $`T=0np`$ pairing at much stronger gap values since the $`NN`$ force is strongest in this channel. We will give some more details about this in the next section and also discuss how the bare interaction in the gap equation can be replaced by an equivalent density dependent zero range force such as they have become quite popular recently in the nuclear structure problem.
## III Effective density dependent zero range pairing forces
In the last section we gave arguments that, at least as a first guess, it is indicated to use as the pairing force the bare nucleon-nucleon potential. We here want to develop arguments that this strategy is not necessarily orthogonal to the popular employment of density dependent zero range forces with a cutoff. Such arguments have first been developed by Bertsch and Esbensen and we here want to refine these arguments on the one hand and on the other side extend them also to $`T=0np`$ pairing.
A qualitative argument why a density independent finite range force in the gap equation (Eq.(1)) can be replaced by a density dependent zero range one with a cutoff goes as follows. For $`s`$-wave pairing only the angle averaged matrix element $`\stackrel{~}{v}_{pk}`$ enters the gap equation $`\mathrm{\Delta }_p=-\sum _k\stackrel{~}{v}_{pk}\kappa _k`$, where $`\kappa _k=\mathrm{\Delta }_k/2E_k`$ is the abnormal density and
$$E_k=\sqrt{\left(ϵ_k-ϵ_F\right)^2+\mathrm{\Delta }_k^2}$$
(2)
the quasiparticle energy. The former is very much peaked at $`k=k_F`$ with a peak width of the order $`\mathrm{\Delta }=\mathrm{\Delta }_{k_F}`$. Since anyway in pairing problems only the gap values at $`k\simeq k_F`$ matter, we see that for $`\mathrm{\Delta }_{k_F}`$ only the value of the matrix element $`\stackrel{~}{v}_{k_Fk_F}`$ plays a significant role. In the Gogny force, this matrix element as a function of $`k_F`$ is shown in Fig. 5. Since a $`\delta `$-force is a constant in $`k`$-space, one has to weight the $`\delta `$-force with a $`k_F`$-, i.e., density dependent factor similar to $`\stackrel{~}{v}_{k_Fk_F}`$ in order to recover the essential pairing features of the original finite range force. The only thing we have to add is a cutoff, since otherwise the gap equation would diverge. Bertsch and Esbensen therefore proposed the following density dependent zero range force
$$v(𝐫_1,𝐫_2)=v_0\left[1-\eta \left(\frac{\rho \left(\frac{𝐫_1+𝐫_2}{2}\right)}{\rho _0}\right)^\alpha \right]\delta \left(𝐫_1-𝐫_2\right)$$
(3)
where $`v_0,\eta ,\alpha `$ are adjustable parameters and $`\rho _0`$ is the saturation density. In the gap equation (1), Eq.(3) must be supplemented with a cutoff value $`ϵ_{C\text{ }}`$ which thus constitutes a fourth parameter. However, at zero density the cutoff and $`v_0`$ must be chosen such that the scattering length $`a`$ is reproduced. For Eq. (3) one obtains the relation
$$v_0=-\frac{\hbar ^2}{m}2\pi ^2\frac{1}{\frac{\pi }{a}+k_C}$$
(4)
where $`\frac{\hbar ^2k_C^2}{2m}=ϵ_C`$. The neutron-neutron scattering length is very large $`\left(a=18.5\text{ fm}\right)`$ and we take in Eq.(4) the limit $`a\to \mathrm{\infty }`$ that leads to the relation for $`v_0`$ also used in . One finally remains with three adjustable parameters $`(\eta ,\alpha ,ϵ_C)`$ and the gap equation reads
$$1=-\frac{v_0}{\pi ^2}\left[1-\eta \left(\frac{\rho }{\rho _0}\right)^\alpha \right]\left(\frac{m^{*}\left(\rho \right)}{2\hbar ^2}\right)^{3/2}\int _0^{ϵ_C}dϵ\sqrt{\frac{ϵ}{\left(ϵ-ϵ_F\right)^2+\mathrm{\Delta }^2}}$$
(6)
With respect to , we also considered a density dependent effective mass. Since finite nuclei calculations are performed with such an effective mass, one must account for it when adjusting a $`\delta `$-force which shall later be used in BCS or HFB calculations. For the effective mass we take the one corresponding to the Gogny force,
$$\left(\frac{m^{*}\left(\rho \right)}{m}\right)^{-1}=1+\frac{m}{2\hbar ^2}\frac{k_F}{\sqrt{\pi }}\underset{c=1}{\overset{2}{\sum }}\left[W_c+2\left(B_c-H_c\right)-4M_c\right]\mu _c^3e^{-x_c}\left[\frac{\mathrm{cosh}\left(x_c\right)}{x_c}-\frac{\mathrm{sinh}\left(x_c\right)}{x_c^2}\right]$$
(8)
with $`x_c=k_F^2\mu _c^2/2,`$ and the coefficients $`W_c,B_c,H_c,M_c,\mu _c`$ corresponding to the Gogny force D1.
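For orientation, Eq. (8) is straightforward to evaluate numerically. The following minimal sketch (Python) does so; the D1 coefficients written into it are quoted from standard tabulations of the Gogny force and should be checked against the original parameter set, and the value of $`\hbar ^2/m`$ is an assumed standard constant.

```python
import numpy as np

# Minimal sketch of Eq. (8): (m*/m)^(-1) in symmetric nuclear matter for a
# Gogny-type force. The D1 values (W, B, H, M in MeV; mu in fm) are quoted
# from standard tabulations and should be verified against the original set.
HB2M = 41.44  # hbar^2/m in MeV fm^2 (assumed value)
D1 = [(-402.40, -100.00, -496.20, -23.56, 0.7),
      ( -21.30,  -11.77,   37.27, -68.81, 1.2)]

def mstar_over_m(kf):
    """Effective mass ratio m*(rho)/m at Fermi momentum kf (in fm^-1)."""
    s = 0.0
    for W, B, H, M, mu in D1:
        x = 0.5 * (kf * mu) ** 2
        s += ((W + 2.0 * (B - H) - 4.0 * M) * mu**3 * np.exp(-x)
              * (np.cosh(x) / x - np.sinh(x) / x**2))
    return 1.0 / (1.0 + kf / (2.0 * HB2M * np.sqrt(np.pi)) * s)

print(mstar_over_m(1.35))  # ~0.67 at the saturation Fermi momentum
```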
In Fig. 6 we show the fit to $`\mathrm{\Delta }\left(k_F\right)`$ in the isovector channel obtained from Eq. (6) with $`ϵ_C=60`$ MeV, $`\eta =0.45,\alpha =0.47`$ . Also shown is the fit corresponding to the bare mass (i.e., $`m^{*}/m=1`$) with $`ϵ_C=60`$ MeV, $`\eta =0.70,\alpha =0.45`$ , as in Ref. . In both cases, the corresponding $`v_0`$ value is $`v_0=-481`$ MeV fm<sup>3</sup>. We see that the fits are good for values of $`k_F`$ up to the saturation value $`k_F=1.35`$ fm<sup>-1</sup>. A density dependent $`\delta `$force has also been used for $`T=1`$ pairing in finite nuclei in the context of HFB and in the context of relativistic Hartree-Bogoliubov . The strength used there is, however, larger. If we use the pairing force of Ref. with $`V_0=-700`$ MeV fm<sup>3</sup>, we get the dotted curve shown in Fig. 6, which corresponds to the following parameters in our notation: $`v_0=-1400`$ MeV fm<sup>3</sup>, $`ϵ_C=7`$ MeV, $`\eta =1`$ and $`\alpha =1`$.
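Fit curves of this kind are easily generated. The sketch below solves Eq. (6) for $`\mathrm{\Delta }(k_F)`$ by bisection with the bare-mass parameter set quoted above; the constant $`\hbar ^2/m=41.44`$ MeV fm<sup>2</sup> and the identification $`(\rho /\rho _0)^\alpha =(k_F/1.35\text{ fm}^{-1})^{3\alpha }`$ are assumptions of the sketch, and the effective-mass version follows by replacing $`\hbar ^2/m`$ with $`\hbar ^2/m^{*}(\rho )`$, e.g., from the previous sketch.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: solve the gap equation (6) for Delta(k_F), bare mass version.
# Parameters are the bare-mass fit quoted in the text; hbar^2/m is assumed.
HB2M = 41.44                            # MeV fm^2
EPS_C, ETA, ALPHA = 60.0, 0.70, 0.45
KF0 = 1.35                              # saturation Fermi momentum (fm^-1)
K_C = np.sqrt(2.0 * EPS_C / HB2M)       # eps_C = hbar^2 k_C^2 / 2m
V0 = -2.0 * np.pi**2 * HB2M / K_C       # Eq. (4), a -> infinity: ~ -481 MeV fm^3

def rhs(delta, kf):
    """Right-hand side of Eq. (6) for a trial gap delta (MeV)."""
    eps_f = 0.5 * HB2M * kf**2
    dd = 1.0 - ETA * (kf / KF0) ** (3.0 * ALPHA)  # 1 - eta*(rho/rho0)^alpha
    pref = -V0 / np.pi**2 * dd * (0.5 / HB2M) ** 1.5
    val, _ = quad(lambda e: np.sqrt(e / ((e - eps_f) ** 2 + delta**2)),
                  0.0, EPS_C, points=[eps_f], limit=200)
    return pref * val

def gap(kf):
    """Bisect rhs(delta) = 1; rhs decreases monotonically with delta."""
    lo, hi = 1e-6, 30.0
    if rhs(lo, kf) < 1.0:
        return 0.0                      # only the trivial solution survives
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid, kf) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

for kf in (0.4, 0.8, 1.2, 1.35):
    print(f"k_F = {kf:4.2f} fm^-1  ->  Delta = {gap(kf):5.2f} MeV")
```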
For finite nuclei, the force (3) can be used in BCS approximation
$$\mathrm{\Delta }_i=-\sum _{k,ϵ_k\le ϵ_C}\langle i\overline{i}\left|v\right|k\overline{k}\rangle \frac{\mathrm{\Delta }_k}{2E_k}$$
(9)
or in the HFB approach where the gap equation has the form (9) in the canonical basis. We want to point out that the cutoff has to be counted relative to the bottom of the single-particle well and not from its edge.
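As an illustration of how Eq. (9) is iterated in practice, here is a schematic sketch; the single-particle spectrum, the Fermi energy and the constant matrix elements are placeholders of the sketch, not a realistic nucleus.

```python
import numpy as np

# Schematic iteration of the BCS gap equation (9). The single-particle
# energies are measured from the bottom of the well, as the cutoff
# prescription requires; all numbers below are placeholders.
nlev = 20
eps = np.linspace(0.0, 40.0, nlev)     # s.p. energies from the well bottom
eps_f, eps_c = 20.0, 60.0              # Fermi energy and cutoff (MeV)
g = -0.7 * np.ones((nlev, nlev))       # <i ibar|v|k kbar>, schematic constant
act = eps <= eps_c                     # only states below the cutoff enter

delta = np.ones(nlev)                  # starting guess (MeV)
for _ in range(500):
    E = np.sqrt((eps - eps_f) ** 2 + delta**2)
    new = -(g[:, act] @ (delta[act] / (2.0 * E[act])))
    if np.max(np.abs(new - delta)) < 1e-10:
        break
    delta = new
print(delta[0])  # constant matrix elements yield a state-independent gap
```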
## IV Proton-neutron pairing in the T=0 channel
In this section we want to extend our considerations to $`np`$ pairing in the $`T=0`$, i.e. the deuteron, channel. As we suggested earlier, as a first guess one should investigate the gap equation with the bare force. The gap equation in homogeneous symmetric nuclear matter has recently been solved for the $`T=0`$ channel using the bare Paris force with single particle energies obtained in the Brueckner-Hartree-Fock approximation. Since in the deuteron channel $`\left(T=0,S=1,L=0,2\right)`$ we have a mixture of $`s`$ and $`d`$waves involving the tensor force, the net outcome is more attraction, leading to the deuteron bound state in free space. This increased attraction carries over to the gap equation (which in the zero density limit turns into the Schroedinger equation for the deuteron, see ) and, not unexpectedly (remember the exponential dependence), the gap values in the $`T=0`$ channel as a function of $`k_F`$ are much larger, exceeding those in the $`T=1`$ channel by more than a factor of two. This is shown in Fig. 7 (Ref. ).
The use of the bare force in the $`T=0`$ channel may, however, be more questionable than in the $`T=1`$ channel. This stems from the involvement of the $`d`$wave, i.e. the tensor force. The latter seems to be more affected by medium effects than the $`s`$wave part, and therefore great care must certainly be exercised in this channel. In particular, it has been shown in that higher shell admixtures make the tensor force appear weaker in the valence space. Again the possible balance of the two graphs of Fig. 4 should be thoroughly investigated with respect to $`s`$ and $`d`$wave contributions. We do not exclude the possibility that the tensor force is largely screened in the medium, in which case the enhancement of the $`T=0`$ gap values may be brought back closer to the values to which we are used in the $`T=1`$ case. However, without detailed investigations at hand, we here stick to our working hypothesis and base our considerations on the bare force scenario. In this sense it may be interesting to adjust, as we did in the $`T=1`$ case, a density dependent $`\delta `$force to the calculation with the Paris force shown in Fig. 7. In principle, in this case, the parameter $`v_0`$ should be chosen such that the deuteron binding energy is reproduced in free space. We found, however, that with this condition the cutoff parameter must be chosen very large, rendering this force impractical in actual calculations. We therefore adopted the strategy of also varying the parameter $`v_0`$ within very narrow limits, which may slightly degrade the gap values at very low densities but significantly improves them at the higher densities. In Fig. 8 we show such an adjustment using various cutoffs. The value of $`v_0`$ used for the fits in Fig. 8 is
$`v_0=-1.05{\displaystyle \frac{\hbar ^2}{m}}{\displaystyle \frac{2\pi ^2}{k_C}}.`$
These fits should be useful for finite nuclei calculations.
## V Concluding remarks
In this paper we critically reviewed the use of effective nuclear pairing forces. We argued that a Bethe-Goldstone or Brueckner G-matrix must not be used in the gap equation. As a first guess, not knowing anything better, the free nucleon-nucleon force may be tried in the gap equation. At least in the traditional $`T=1`$ channel this prescription seems to work remarkably well, since the best phenomenological force, namely the Gogny force, acts very nearly like a free force in the $`T=1`$ pairing channel. We then advocated that the same strategy should be adopted in the $`T=0`$ channel. We pointed out that the situation may, however, be slightly more subtle there, because it is the action of the tensor force which makes the $`T=0`$ channel more attractive than the $`T=1`$ one. The tensor part of the nuclear interaction is, however, a very delicate subject, and it may well be that it is more affected by screening than the rest of the force. In the second part of the work we demonstrated that the use of density dependent zero range forces in the pairing channel is not necessarily orthogonal to the use of finite range density independent forces. Following Bertsch and Esbensen , we give parametrizations of density dependent $`\delta `$forces which reproduce the gap values in both $`T=0`$ and $`T=1`$ channels very well over the whole range of relevant nuclear matter densities. Such forces, augmented by a cutoff, should then also be useful for calculations in finite nuclei.
## ACKNOWLEDGMENTS
We acknowledge useful discussions with N. Vinh Mau. This work was supported by DGICYT-IN2P3 agreement and by DGICYT (Spain) under contract number PB95/0123.
# RUHN-99-1 Chiral fermions on the lattice

*Talk at Lattice ’99, June 29 – July 3, Pisa, Italy.*
## 1 Introduction
My talk is divided into three parts. In the first part the relevance of the chiral fermion issue to fundamental particle physics and to numerical QCD will be explained. The second part is the bulk part of my talk and will present the main ideas and properties of the “overlap”. I shall restrict myself to the period from 1992 to one year ago, summer 1998, the time of the lattice 98 conference, held in Boulder, Colorado. Presumably the next plenary speaker on chiral fermions will focus on the last year, between summer 1998 and summer 1999, so overlap will be avoided. In the last part of my talk I shall try to convince you that these new developments present many opportunities for fresh ideas.
My main partner in the overlap construction was R. Narayanan. I have also collaborated with P. Huet, Y. Kikukawa, A. Yamada and P. Vranas. Important contributions to the overlap development were made by Randjbar-Daemi and J. Strathdee. More recently, the pace of developments has picked up, mainly in the vector-like context, and beautiful work has been done by R. Edwards, U. Heller, J. Kiskis, by Ting-Wai Chiu and by Keh-Fei Liu and his collaborators. New results are coming out almost daily, but this is material for the next lattice conference.
## 2 Relevance
The minimal standard model (MSM) works well experimentally, is a chiral gauge theory and constitutes a good effective low energy description of the theory of everything (TOE). The relatively varied $`U(1)_Y`$ charges of the MSM reflect the chiral nature of the MSM by assuring anomaly free couplings to the gauge bosons and gravity. These charges fit snugly into representations of larger groups leading to $`SU(5)`$ and $`SO(10)`$ grand unified theories (GUT). All these new gauge theories are also chiral. Supersymmetric extensions of the MSM or of the GUTs also must be chiral. By normal physics standards, the TOE is best described as unknown at present. It seems likely that it is not an ordinary field theory as it contains gravity. It could be the case that in the TOE there is no well defined concept of chirality.
The most basic question about chirality was asked by Holger-Bech Nielsen many years ago. I paraphrase it to: Is a chiral gauge theory completely isolated from ultra-violet effects ? In other words is it truly renormalizable ? In yet another equivalent form the question is: Is the chiral nature of the theory compatible with an arbitrarily large scale separation between typical scales and new physics scales ?
We know very well that the answer to the above question is“yes” in perturbation theory to any order. But, outside perturbation theory, lattice difficulties have raised the suspicion that the answer might be “no”. This was the situation from the early 1970s to the early 1990s. The main achievement I am reporting on occurred during the period 1992 to 1997 and amounts to replacing the suspected “no” by an almost compelling “yes”.
Accepting the “yes” from now on, the next question one should ask is: What can we learn about Nature from the lattice difficulties and from their resolution ?
A mathematician might answer that we learn nothing because Nature isn’t a lattice. This physicist’s answer is different: If in Nature there were an infinite number of fermions per unit four volume, chirality at low energies could emerge naturally, without fine tuning. The TOE could “know” nothing about chiral gauge theories. New mechanisms in the TOE produce the appropriate set of light degrees of freedom in a natural way. Chiral gauge theories appear because they are the single consistent interaction between these light degrees of freedom in the long distance limit.
So, lattice field theory may have made a contribution to the understanding of one of the most fundamental issues in theoretical particle physics. This would certainly not be the first time, but is worth keeping in mind in the social climate of today’s particle theory.
An important spinoff of the developments on chiral gauge theories is feeding back into our own subfield. For the first time we know, in principle and also in practice (to some degree), that numerical QCD can treat global chiral symmetries exactly, on the lattice. This subfield is rapidly expanding and I am sure more will happen over the next few years.
## 3 Overlap essentials
### 3.1 Infinite number of fermions
If the number of fermions is infinite the theory is not precisely defined (yet). This provides an opportunity to “cheat”:
We can write down a theory that looks vectorial, but could equally well be viewed as chiral. Suppose we have a string of right-handed Weyl fermions stretching from $`\mathrm{}`$ to $`+\mathrm{}`$ and below it another string, now made out of left-handed fermions. We can think of the different fermions as being labeled by a new discrete flavor index increasing along the strings. If we pair right-handed with left-handed Weyl particles starting from the middle of the strings outwards, we conclude that we have an infinite number of Dirac fermions and the theory is vector-like. However, suppose we start pairing from both infinite ends inwards. One could easily make a “mistake” and end up with, say, one unpaired Weyl fermion. Now the theory looks chiral.
We are restricting the action to be bilinear in the fermion fields. The action for a multiplet of Weyl fermions would be $`\overline{\psi }W\psi `$, where $`W`$ is the Weyl operator in a background gauge field. We shall always assume we are working on a compact Euclidean manifold (it makes no sense to have to worry also about infrared issues, the thermodynamic limit and so forth - we have our plate full already). We know that the gauge fields over a compact manifold fall into distinct “blobs”, each labeled by the topological charge $`Q\{U\}`$, where $`U`$ represents the gauge background in lattice notation although at this point we are still in the continuum. Moreover, we know that $`W`$ is structurally affected by $`Q\{U\}`$ via the famous Atiyah-Singer index theorem, which, in a loose but quite sensible sense means that (the number of rows of $`W`$)-(the number of columns of $`W`$)=$`Q\{U\}`$. Clearly, one cannot just write down a finite size matrix $`W`$ with such a property. But, if the size of $`W`$ is infinite it is at least conceivable that the Atiyah-Singer index theorem will hold on the lattice since, after all, $`\mathrm{}-\mathrm{}`$ can be anything.
It is easy to imagine writing down a formula for $`W`$ which is gauge covariant. This means that replacing the background gauge field by a gauge transform is equivalent to conjugation by a formally unitary matrix dependent only on the gauge transformation. Thus, $`detW`$ would be gauge invariant. But, since $`W`$ is infinite, $`detW`$ isn’t well defined and it is conceivable that gauge invariance does not hold, since the manipulations involving the determinant of an infinite matrix, the product of infinite matrices and the merely formal unitarity of some of these may end up being incorrect. This creates an opportunity for anomalies to enter, an opportunity we are obligated to create one way or another. It is important to see that anomalies can show up in a way that is independent of taking the ultraviolet cutoff to infinity or the infrared cutoff to zero.
Thus, postulating an infinite number of fermions creates the right openings, and the problem becomes only how to “cheat” honestly.
### 3.2 Brief history of chiral issues
In the late sixties Adler, Bell, Jackiw and Bardeen discovered and understood anomalies in the context of particle physics . The importance of this discovery cannot be overstated. Starting from the mid seventies Stora, Zumino and others unveiled the beautiful mathematical structure of anomalies. At the algebraic level the very elegant descent equations were seen to relate anomalies in various dimensions. During the early to mid eighties the understanding of anomalies was enriched by discovering their topological meaning. It is fair to say that during this period the physics of fermions in a classical gauge background was put on firm (and elegant) mathematical grounds .
For what follows, a crucial step was taken by Callan and Harvey who provided a physical realization of the relation between anomalies in different dimensions (as algebraically reflected in the descent equations). They connected the consecutive dimensions in the descent equations by studying physical embeddings in a given manifold of sub-manifolds (“defects”) of lower dimension. Prompted by this work, Boyanovsky, Dagotto and Fradkin studied similar arrangements in condensed matter. They also proposed that the famous chiral fermion problem of lattice field theory might be solved this way. But, they did not pursue their insight and our community paid no attention.
The situation changed in 1992 when Kaplan, again motivated by the work of Callan and Harvey, made a very specific and compelling case that the setup Callan and Harvey used could be put on the lattice and that this was a new way to deal with the lattice fermion problem. During the same year, starting from a completely different point of view, Frolov and Slavnov made a proposal containing an infinite number of auxiliary fields to regulate $`SO(10)`$ gauge theory with a 16-plet of left-handed Weyl fermions. This is a chiral gauge theory and each irreducible matter multiplet can accommodate one family of the MSM plus one additional $`SU(3)\times SU(2)\times U(1)`$-neutral right-handed neutrino.
Narayanan and I asked whether the two ways, one by Kaplan, and the other by Frolov and Slavnov had anything in common. Our conclusion was that they did and the heart of the matter in both cases was that although regulators were present, the systems had an infinite number of fermions per unit Euclidean four-volume . This led to the chiral overlap.
We spent several years testing and convincing ourselves that indeed the unbelievable had happened and the lattice fermion problem had been laid to rest. Luckily, we were not entirely alone doing this. Obviously, if the chiral gauge theory problem was solved one also had a way to exactly preserve global chiral symmetries in vector-like theories, like QCD. How competitive this was when compared to standard numerical QCD was a question we postponed. Eventually, focusing on QCD, a simplification of the overlap in the case of vector-like gauge theories was found producing a relatively simple formula for the fermion kernel for lattice Dirac fermions with exact global chiral symmetry .
Forgotten for many years was a prophetic paper by Ginsparg and Wilson in which the nowadays well known GW relation for the kernel of a vector-like theory was written down. Ginsparg and Wilson showed that if the kernel obeyed their relation one would have both exact global chiral symmetries and the $`U(1)`$ anomaly, directly on the lattice. They arrived at their relation being motivated by Renormalization Group ideas. In four dimensions a vector-like gauge theory would be classically conformally invariant if matter were massless, which in the fermion case also implies chiral symmetry. Since the Renormalization Group is built to extract the anomalous realization of dilations in the quantum context, it was a natural place to start. The Renormalization Group provided them with a rather formal derivation of their relation. To be sure, the GW relation itself and its consequences were, however, well defined and clear. But, they could not find an explicit kernel satisfying their relation in the presence of a gauge background, and probably because of this the idea was forgotten. It was a pleasant surprise to realize that the overlap Dirac operator relevant to the vector-like case satisfied the GW relation . Of course, in the context of the overlap construction it was obvious a priori that one had global chiral symmetries and exactly massless quarks. But, that the GW relation would turn out to be satisfied was not built in as an explicit requirement in the construction.
Very soon after the overlap Dirac operator was shown to obey the GW relation, Narayanan showed that the derivation that took one from the chiral overlap to the vector case by combining a left handed fermion with a right handed one could be reversed. Narayanan factorized the overlap Dirac operator using only the fact that the latter obeys the GW relation, and therefore it became possible to start from the GW relation and get to the chiral case by factorization. No new lattice results have been obtained starting from the GW relation directly, although some derivations can be made to look more familiar. Later on, I shall explain in greater detail the connection between the overlap and the GW relation.
Let me note some important properties of the overlap approach to chiral fermions. It works in any even dimension both for gauge and for gravitational backgrounds. It is independent of the renormalizability of the dynamics of the background. As such, the overlap is not Renormalization Group motivated, although Ginsparg and Wilson were. There is no meaningful GW relation in odd dimensions. However, there are global issues in odd dimensional gauge theories with massless fermions that are intimately related to chiral fermions. These issues are captured in the overlap approach which extends naturally to odd dimensions.
### 3.3 The basic idea
$$\mathcal{L}_\psi =\overline{\psi }/D\psi +\overline{\psi }\left(\mathcal{M}P_L+\mathcal{M}^{\dagger }P_R\right)\psi $$
(1)
$`\overline{\psi }`$ and $`\psi `$ are Dirac fields and the mass matrix $`\mathcal{M}`$ is infinite. It has a single zero mode but its adjoint has no zero modes. This would be impossible if the mass matrix were finite. It clearly means we have one massless Weyl fermion whose handedness can be switched by interchanging the chiral projectors. It is very important that as long as $`\mathcal{M}\mathcal{M}^{\dagger }>0`$ this setup is stable under small deformations of the mass matrix. This stability comes from the internal supersymmetric quantum mechanics generated by the mass matrix.
Kaplan’s domain wall suggests the following realization:
$$\mathcal{M}=\partial _s+f(s)$$
(2)
where $`s\in (-\mathrm{},\mathrm{})`$ and $`f`$ is fixed at $`-\mathrm{\Lambda }^{}`$ for negative $`s`$ and at $`\mathrm{\Lambda }`$ for positive $`s`$. There is no mathematical difficulty associated with the discontinuity at $`s=0`$. Originally, the $`s`$variable was discretized, but this is unnecessary.
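A minimal numerical illustration of this index, under conventions chosen purely for the sketch (a forward difference that maps $`N+1`$ nodes to $`N`$ midpoints, unit wall heights): the truncated $`\mathcal{M}`$ is then rectangular, so $`\mathcal{M}`$ and $`\mathcal{M}^{\dagger }`$ can have kernels of different dimension, and the zero mode of $`\mathcal{M}`$ is the normalizable wall profile pinned at $`s=0`$.

```python
import numpy as np

# Toy check of M = d/ds + f(s), f = -Lambda' for s < 0 and +Lambda for s > 0.
# A forward difference maps N+1 nodes to N midpoints, so the truncated M is
# an N x (N+1) matrix: dim ker M = 1 while dim ker M^dagger = 0.
N, h = 400, 0.05
s = (np.arange(N + 1) - N / 2) * h               # node positions
smid = 0.5 * (s[1:] + s[:-1])                    # midpoint positions
Lam = Lamp = 1.0                                 # illustrative wall heights
f = np.where(smid > 0.0, Lam, -Lamp)

M = np.zeros((N, N + 1))
i = np.arange(N)
M[i, i] = -1.0 / h + 0.5 * f                     # f averaged onto the stencil
M[i, i + 1] = 1.0 / h + 0.5 * f

print(N + 1 - np.linalg.matrix_rank(M))          # 1: one zero mode of M
print(N - np.linalg.matrix_rank(M.T))            # 0: none for M^dagger

# the zero mode is the domain wall profile ~ exp(-|s| Lambda), pinned to s=0
zero_mode = np.linalg.svd(M)[2][-1]
print(s[np.argmax(np.abs(zero_mode))])           # peak sits at the defect
```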
Before proceeding let me remark that other realizations of $``$ ought to be possible, but nothing concrete has been worked out up to now.
The infinite path integral over the fermions is easily “done”: on the positive and negative segments of the real line respectively one has propagation with an $`s`$-independent “Hamiltonian”. The infinite extent means that at $`s=0`$ the path integrals produce the overlap (inner product) between the two ground states of the many fermion systems corresponding to each side of the origin in $`s`$. The infinite extent also means infinite exponents linearly proportional to the respective energies - these factors are subtracted. One is left with the overlap formula which expresses the chiral determinant as $`\langle v^{}\{U\}|v\{U\}\rangle `$. The states are in second quantized formalism. By convention, they are normalized, but their phases are left arbitrary. This ambiguity is essential, as we shall see later on. It has no effect in the vector-like case. In first quantized formalism the overlap is:
$$\langle v^{}\{U\}|v\{U\}\rangle =\underset{k^{}k}{det}M_{k^{}k}$$
(3)
The elements of the matrix $`M`$ are the overlaps between single body wave-functions, $`M_{k^{}k}=v_k^{}^{}v_k`$. (In the 94 paper $`M`$ was denoted by $`O^{RR}`$ and the single particle states $`v_k,v_k^{}^{}`$ by $`\psi _K^{R+},\psi _K^{}^R`$.) The $`v^{}`$’s span the negative energy subspace of $`H^{}\equiv \gamma _5(/D_4+\mathrm{\Lambda }^{})`$ and the $`v`$’s span the negative energy subspace of $`H\equiv \gamma _5(/D_4-\mathrm{\Lambda })`$. I used continuum like notation to emphasize that the Hamiltonians are arbitrary regularizations of massive four dimensional Dirac operators with large masses of opposite signs. One may wonder why the different signs can matter at all. A simple way to see the difference is to consider a gauge background consisting of one instanton. While $`detH^{}`$ is positive, $`detH`$ is negative. In a complete, massive, four dimensional theory the mass sign could be traded for a topological $`\theta `$ parameter and the two cases would correspond to $`\theta =0`$ and $`\theta =\pi `$.
The Hamiltonians only enter as defining the Dirac seas and there is no distinction between the different levels within each sea; all that matters is whether a certain single particle state has negative or positive energy. Thus, all the required information is also contained in the operators $`ϵ=\epsilon (H)`$ and $`ϵ^{}=\epsilon (H^{})`$ where $`\epsilon `$ is the sign function. Thus the $`v^{}`$’s are all the $`-1`$ eigenstates of $`ϵ^{}`$ and the $`v`$’s are all the $`-1`$ eigenstates of $`ϵ`$. To switch chiralities one only has to switch the sign of the Hamiltonians. This is a result of charge conjugation combined with a particle-hole transformation.
Nowadays the defining equations for the states $`v`$ and $`v^{}`$ are expressed with the help of the projectors $`\frac{1+ϵ}{2}`$, $`\frac{1+ϵ^{}}{2}`$ (they annihilate the states $`v`$ and $`v^{}`$ respectively). But this is only a notational novelty.
When $`\mathrm{\Lambda }^{}`$ is taken to infinity in lattice units one is left with $`ϵ^{}=\gamma _5`$. Thus, $`ϵ^{}`$ becomes gauge field independent and so become the associated states. (This simplification was already there in the Boyanovsky, Dagotto, Fradkin paper, but has been rediscovered in the domain wall context by Shamir.) Physically, $`ϵ^{}`$ can be thought of as a lattice representation of a continuum, positive infinite mass, five dimensional Hamiltonian for Dirac fermions in a static gauge field; one can decouple the fermions from the gauge field entirely. On the other hand, $`ϵ`$ always maintains a dependence on the gauge background and its trace gives the gauge field topology. The parameter $`\mathrm{\Lambda }`$ is restricted to a finite range in lattice units and cannot be taken to infinity. Physically, one can think of $`ϵ`$ as a lattice representation of a continuum, negative infinite mass, five dimensional Hamiltonian for Dirac fermions in a static gauge field; the unavoidable dependence on the gauge field reflects the continuum result that infinitely massive fermions in odd dimensions cannot decouple from the gauge fields for both signs of the mass term.
### 3.4 The vectorial case
We add the states $`w`$ corresponding to the chirality opposite to that represented by the states $`v`$ above. As just said, all this requires is to switch the signs of the $`ϵ`$’s. The left handed and right handed fermions do not mix, each being self-coupled by $`M_{k^{}k}^R=v_k^{}^{}v_k,M_{k^{}k}^L=w_k^{}^{}w_k`$.
We wish to combine the two systems and get rid of the extra, unused dimensions in each case. This is possible in the vector-like case, but not in the chiral case, because, although the shapes of $`M^R`$ and $`M^L`$ change as a function of the gauge background topology, they change in a complementary way: The number of rows is fixed and the number of columns of $`M^R`$ plus the number of columns of $`M^L`$ is also fixed, equal to twice the number of rows. Thus $`M^R`$ and $`M^L`$ can be packed together into a square matrix of fixed size. To describe $`M^R`$ or $`M^L`$ alone one needs a larger space because the shape of these matrices fluctuates. To describe both matrices together however, extra dimensions are not needed.
In the most important case, at zero topology, we are searching for a simplified formula for the product of the determinants of $`M^R`$ and $`M^L`$. The two $`ϵ`$’s generate a relatively simple algebra; the main new operator in this algebra is the unitary operator $`V=ϵ^{}ϵ`$. A very basic linear set of elements in that algebra is $`O=\alpha ϵ+\beta ϵ^{}+\gamma ϵ^{}ϵ+\delta `$. The matrix elements of $`O`$ between any $`v`$ or $`w`$ states are trivially expressible in terms of corresponding overlaps. Picking $`\alpha =\beta =0,\gamma =\delta =\frac{1}{2}`$ we can kill all $`vw`$ cross terms and the matrix elements of $`O`$ are determined by those of $`M^R`$ and $`M^L`$:
$$\left(\begin{array}{cc}w^{}& v^{}\end{array}\right)^{}\frac{1+ϵ^{}ϵ}{2}\left(\begin{array}{cc}w& v\end{array}\right)=\left(\begin{array}{cc}M^L& 0\\ 0& M^R\end{array}\right)$$
(4)
Both $`\left(\begin{array}{cc}w& v\end{array}\right)`$ matrices are unitary since the columns make up orthonormal bases. Using charge conjugation one can assure that the determinants of the two $`M`$-matrices are complex conjugates of each other. With
$$D_o=\frac{1+ϵ^{}ϵ}{2}$$
(5)
one trivially derives $`detD_o=|detM^L|^2`$. When $`\mathrm{\Lambda }^{}=\mathrm{}`$, $`\left(\begin{array}{cc}w^{}& v^{}\end{array}\right)`$ is the unit matrix. Moving the unitary factor $`\left(\begin{array}{cc}w& v\end{array}\right)`$ to the other side of equation (4) we see that $`D_o`$ has been “factorized”. The columns $`v`$ span the kernel of $`\frac{1+ϵ}{2}`$ and the columns $`w`$ span the orthogonal complement of this subspace.
Another way to decouple $`v`$ from $`w`$ is to choose in $`O`$ $`\alpha =\beta =\frac{1}{2},\gamma =\delta =0`$:
$$\left(\begin{array}{cc}w^{}& v^{}\end{array}\right)^{}\frac{ϵ^{}+ϵ}{2}\left(\begin{array}{cc}w& v\end{array}\right)=\left(\begin{array}{cc}M^L& 0\\ 0& M^R\end{array}\right)$$
(6)
At $`\mathrm{\Lambda }^{}=\mathrm{}`$, and after moving the unitary factor $`\left(\begin{array}{cc}w& v\end{array}\right)`$ to the other side of equation (6) we obtain the hermitian overlap Dirac operator, $`H_o=ϵ^{}D_o`$ studied at SCRI.
It is important that even if we keep $`\mathrm{\Lambda }^{}`$ finite in lattice units and $`H^{}`$ has a nontrivial gauge dependence, $`H^{}`$ always has exactly as many negative as positive energy eigenstates and there is an impenetrable (as a function of the gauge background) gap in its spectrum around zero. In other words, $`ϵ^{}`$ is never sensitive to gauge field topology.
Let me add here that a version of $`D_o`$ can be derived starting with a lattice implementation of the see-saw mechanism obeying a Froggatt-Nielsen symmetry. This $`D_o`$ is obtained in the limit of infinite see-saw partners .
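These formulas are easy to play with numerically. The following self-contained sketch builds $`D_o`$ on a tiny two dimensional $`U(1)`$ lattice; all conventions in it ($`r=1`$ Wilson term, $`\gamma _5=\sigma _3`$, negative mass $`-\mathrm{\Lambda }`$ with $`0<\mathrm{\Lambda }<2`$) are choices of the sketch and not prescriptions from the text.

```python
import numpy as np

# Sketch: overlap operator D_o = (1 + eps' eps)/2 with eps' = gamma_5 and
# eps = sign(gamma_5 D_W(-Lambda)) on a small 2D U(1) lattice. The r = 1
# Wilson term and gamma_5 = sigma_3 are conventions of this sketch.
L = 6
V = L * L
gam = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
       np.array([[0, -1j], [1j, 0]], dtype=complex)]   # sigma_2
G5 = np.kron(np.eye(V), np.diag([1.0 + 0j, -1.0]))     # gamma_5 = sigma_3

def site(x, y):
    return (x % L) * L + (y % L)

def wilson_dirac(U, m0):
    """Wilson-Dirac matrix; U[mu, x, y] is the U(1) link phase at (x, y)."""
    D = np.zeros((2 * V, 2 * V), dtype=complex)
    for x in range(L):
        for y in range(L):
            i = site(x, y)
            D[2*i:2*i+2, 2*i:2*i+2] += (m0 + 2.0) * np.eye(2)
            for mu, (dx, dy) in enumerate(((1, 0), (0, 1))):
                j = site(x + dx, y + dy)
                D[2*i:2*i+2, 2*j:2*j+2] += -0.5 * (np.eye(2) - gam[mu]) * U[mu, x, y]
                D[2*j:2*j+2, 2*i:2*i+2] += -0.5 * (np.eye(2) + gam[mu]) * np.conj(U[mu, x, y])
    return D

def overlap(U, lam=1.0):
    Hw = G5 @ wilson_dirac(U, -lam)   # hermitian, by gamma_5-hermiticity of D_W
    w, S = np.linalg.eigh(Hw)
    eps = S @ np.diag(np.sign(w)) @ S.conj().T       # eps = sign(H)
    return 0.5 * (np.eye(2 * V) + G5 @ eps), eps

rng = np.random.default_rng(1)
U = np.exp(1j * 0.2 * rng.normal(size=(2, L, L)))    # a rough gauge background
Do, eps = overlap(U)
Vop = G5 @ eps                                       # V = eps' eps
print(np.allclose(Vop @ Vop.conj().T, np.eye(2 * V)))        # V is unitary
print(np.allclose(G5 @ Do + Do @ G5, 2.0 * Do @ G5 @ Do))    # GW relation holds
```

The last check anticipates section 3.8: with $`ϵ^{}=\gamma _5`$, $`D_o`$ automatically satisfies the GW relation.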
### 3.5 Topology and fermions
The topological charge $`Q\{U\}`$ is the difference between the number of columns and rows of $`M^L`$, which is the negative of the same quantity for $`M^R`$. Since $`tr(ϵ^{})\equiv 0`$, $`Q\{U\}=-\frac{1}{2}trϵ`$. When $`Q\{U\}\ne 0`$, $`det\left(\begin{array}{cc}M^L& 0\\ 0& M^R\end{array}\right)=0`$ because either among the first columns or among the last there are too many zeros to maintain linear independence. This implies $`detD_o=0`$, and hence exact zero modes for the overlap Dirac operator . Using $`trϵ^{}=0`$ the formula $`Q\{U\}=-\frac{1}{2}trϵ`$ can be written in many equivalent ways. These days a popular way is $`Q\{U\}=-trϵ^{}D_o`$ with the sum over sites contained in the trace made explicit. The summand is a lattice version of the topological density.
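Continuing the sketch above (it reuses `L`, `G5` and `overlap` defined there), a constant-flux $`U(1)`$ torus configuration — a standard construction, used here as an assumption of the sketch — displays both the trace formula and the exact zero modes:

```python
# Continues the previous sketch (reuses L, G5 and overlap). A standard
# constant-flux U(1) torus configuration carrying q flux quanta; the
# construction below is an assumption of this sketch.
def flux_links(q):
    U = np.ones((2, L, L), dtype=complex)
    for x in range(L):
        for y in range(L):
            U[1, x, y] = np.exp(2j * np.pi * q * x / (L * L))
            if x == L - 1:
                U[0, x, y] = np.exp(-2j * np.pi * q * y / L)
    return U

for q in (0, 1, 2):
    Do, eps = overlap(flux_links(q))
    Q = -0.5 * np.trace(eps).real                  # Q{U} = -tr(eps)/2
    nzero = int(np.sum(np.abs(np.linalg.eigvals(Do)) < 1e-8))
    print(q, round(Q), nzero)
# |Q| reproduces q (the overall sign is convention dependent) and D_o
# acquires |Q| exact zero modes, i.e. det D_o = 0.
```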
The impact of topology on fermion dynamics is easiest to see in second quantized language (our original formulation). We denote second quantized operators by hats: $`\widehat{H}=\widehat{a}^{}H\widehat{a},\widehat{H}^{}=\widehat{a}^{}H^{}\widehat{a},\widehat{N}=\widehat{a}^{}\widehat{a}`$, $`[\widehat{H},\widehat{N}]=[\widehat{H}^{},\widehat{N}]=0`$. The second quantized states entering the overlap satisfy: $`\widehat{N}|v^{}\rangle =\frac{1}{2}𝒩|v^{}\rangle `$ and $`\widehat{N}|v\rangle =(\frac{1}{2}𝒩+Q\{U\})|v\rangle `$. Here $`𝒩=tr1`$. Clearly, $`Q\{U\}\ne 0`$ forces $`\langle v^{}|v\rangle =0`$. This immediately leads to nonvanishing, automatically normalized ’t Hooft vertices. For example, if $`Q\{U\}=1`$, $`\langle v^{}|\widehat{a}|v\rangle \ne 0`$.
The consequences of the ’t Hooft vertices are far reaching. In the vector-like case they provide the solution to the $`U(1)_A`$ problem, now in an entirely rigorous setting. In a background that carries topological charge 1 for each flavor we shall have $`\langle v^{}|\widehat{a}|v\rangle \ne 0,\langle w^{}|\widehat{a}^{}|w\rangle \ne 0`$ which gives the two $`\overline{\psi }_R\psi _L`$, $`\overline{\psi }_L\psi _R`$ factors per flavor that make up the vector-like ’t Hooft vertex. In the chiral case one can get explicit fermion number violation. A simple example of this can be found in two dimensions, in an abelian gauge model with fermionic matter consisting of one charge 2 right handed fermion $`\psi `$ and four left handed charge -1 fermions $`\chi _\alpha `$ $`(\sigma _\mu =(1,i))`$. The ’t Hooft vertex gives a nonzero expectation value to the operator $`V=\psi \sigma \psi \chi _1\chi _2\chi _3\chi _4`$. The model is exactly soluble and known to have a massless composite sextet of fermions $`\mathrm{\Phi }_{\alpha \beta }=\psi \chi _\alpha \chi _\beta `$. These fermions are actually Majorana-Weyl, although the original fermions were just Weyl. This is needed to match the global $`SU(4)`$ anomalies associated with the four $`\chi `$ fermion fields at the composite level. The ’t Hooft vertex $`V`$ (note the absence of barred fermion fields) provides the kinetic energy term for these composites.
The confirmation of the above features numerically in the 21111 chiral model represented a major step since it showed that even the subtler details of chiral fermion dynamics were captured by the overlap in an effortless way. More traditional approaches to the chiral fermion problem always had to come up with tenuous explanations for how such effects might be recovered in the continuum.
### 3.6 Chiral symmetry breaking
We find ourselves on the lattice, with exact global chiral symmetries, with correct anomalies and with exactly obeyed mass inequalities. Based on the continuum, it seems that the day is not far off when we shall be able to claim to have a rigorous proof of spontaneous chiral symmetry breaking directly on the lattice.
In the meantime, numerical work on the spectral properties of $`H_o`$ by SCRI has produced “experimental” evidence for spontaneous chiral symmetry breaking. Related observations were made by the Kentucky group. See also .
### 3.7 Anomalies
The second quantized states entering the overlap each come from a single body Hamiltonian which is analytic in the gauge link variables. Thus, they carry Berry phases. There is a natural connection (an abelian gauge field) over the space of gauge fields (playing the role of parameters in the familiar Berry setup). Integrating Berry’s connection along a smooth closed loop in gauge field space generates an invariant phase, Berry’s phase. There is such a connection associated with each state in the overlap, but the one associated with $`|v^{}\rangle `$ can be made to vanish by taking $`\mathrm{\Lambda }^{}`$ to infinity. $`𝒜=\langle v\{U\}|dv\{U\}\rangle `$ is the expression giving the connection once some representatives of the rays $`|v\{U\}\rangle `$ are chosen (possibly using patches with overlays). $`𝒜`$ is not quite a function over the space of gauge fields; it is a connection, in the sense that it could be defined in patches and in their overlays the several definitions could differ by gauge transformations. Berry’s phase is nontrivial at the “perturbative level” because $`𝒜`$ has curvature (abelian field strength) $`\mathcal{F}=d𝒜`$ which does not vanish. Unlike $`𝒜`$, $`\mathcal{F}`$ is a function of the gauge field independent of the ray representatives used to define the connection. In second quantized notation $`\mathcal{F}=\langle dv\{U\}|dv\{U\}\rangle `$ (antisymmetrization is implicit). In first quantized language one has $`\mathcal{F}=\sum _{k\in \mathrm{Dirac}\mathrm{sea}}dv_k^{}dv_k`$. Note that the overlaps entering the connection and the curvature only compare variations to the same state. This is why $`\mathcal{F}`$ is a local functional of the gauge background. It does not depend on phase choices, so can be expressed in terms of the sign function alone: $`\mathcal{F}=\frac{1}{4}tr\left(\epsilon (H)d\epsilon (H)d\epsilon (H)\right)`$.
A special case is interesting. Take the gauge group as $`SU(2)`$ and the fermions in the fundamental representation. There is a simple basis in which $`H\{U\}`$ is a real matrix with no complex entries. Thus, it is natural to choose all eigenstates real and therefore Berry’s connection and its curvature vanish. Nevertheless, Berry’s phase factors can be nontrivial giving sign flips when states are taken round some loops. The $`U(1)`$ bundle of states one usually has in the overlap is replaced by a $`Z_2`$ bundle, and the latter could be twisted. This is how Witten’s global anomalies show up in the overlap . Let me remind you that seeing Witten’s global anomalies was beyond the reach of all approaches to the chiral fermion problem before the overlap.
My main message is that Berry’s phase encodes all anomalies in the theory. Let us see how this works in the ordinary, complex case .
In the continuum one defines two currents. (I shall restrict my discussion to a nonabelian semisimple gauge group and to a spacetime topology of a $`d`$-torus.) The consistent current is the variation of the chiral determinant with respect to the gauge field. As a variation it obeys “curl-free” constraints, also known as the Wess-Zumino conditions. When there are anomalies the consistent current cannot be covariant with respect to gauge transformations. If the determinant were gauge invariant the consistent current would trivially also be covariant. Even when there are anomalies and the consistent current cannot be covariant it can be made so by adding a local, exactly known polynomial in the gauge fields and their derivatives. This quantity is called $`\mathrm{\Delta }J`$. Although both the covariant and the consistent currents are non-local functionals (in the continuum) of the gauge background, their difference, $`\mathrm{\Delta }J`$, is local. $`\mathrm{\Delta }J`$, by itself, fixes the anomaly. In short, gauge invariance is restorable if and only if $`\mathrm{\Delta }J`$ vanishes (on account of anomaly cancelation).
Now let us go back to the lattice. We make some smooth phase choice for the states representing the ground state rays and compute the variation of the overlap with respect to the external gauge fields. This should produce a lattice version of the consistent current on the lattice because it is the variation of something. One writes $`J^{\mathrm{cons}}=𝒜-𝒜^{}+J^{\mathrm{cov}}`$. The Berry phase terms contain the part of $`J^{\mathrm{cons}}`$ which is not guaranteed to be gauge covariant. The remainder, defined as $`J^{\mathrm{cov}}`$, is given (at $`\mathrm{\Lambda }^{}=\mathrm{}`$) by $`\frac{\langle v^{}|dv_{\perp }\rangle }{\langle v^{}|v\rangle }`$; it is independent of the phase choices and has naive gauge transformation properties. Here, $`|x_{\perp }\rangle \equiv |x\rangle -|v\rangle \langle v|x\rangle `$, which is independent of the phase of $`|v\rangle `$. The “curl” of $`\mathrm{\Delta }J=𝒜-𝒜^{}`$ is $`\mathcal{F}-\mathcal{F}^{}`$ and does not vanish in general.
The analogy sketched above has not yet been fully fleshed out, but one result is available. Pick an abelian background in the direction of a $`U(1)`$ subgroup with charges $`q_i`$. In the abelian context, $`\mathcal{F}`$ is gauge invariant and can be viewed as defined over gauge orbits. If the anomaly does not vanish one can find a two-torus in the space of gauge orbits over which the necessarily quantized integral of $`\mathcal{F}`$ is nonzero. (In $`d`$ even dimensions the integral of $`\mathcal{F}`$ goes as $`\sum _iq_i^{\frac{d}{2}+1}`$). This implies that no “small” deformation of $`H\{U\}`$ can make $`\mathcal{F}\equiv 0`$ and hence $`\mathrm{\Delta }J\equiv 0`$. This leads to two conjectures (for the complex case):
If and only if anomalies cancel it is possible to smoothly deform $`H\{U\}`$ and $`H^{}\{U\}`$ such that $`\mathcal{F}=\mathcal{F}^{}`$. If $`\mathcal{F}=\mathcal{F}^{}`$ one can choose the second quantized states $`|v\{U\}\rangle `$ and $`|v^{}\{U\}\rangle `$ smoothly such that the action of the gauge group is non-projective: for any $`g\in 𝒢`$, $`G(g)|v\{U\}\rangle =|v\{U^g\}\rangle `$ and the same for $`|v^{}\rangle `$. If these conjectures prove true one can preserve exact gauge invariance of the overlap if anomalies cancel, but, to do so, one needs to fine tune the Hamiltonians.
So, we must ask whether this is “natural”. The answer is that fine tuning is not necessary to get full gauge invariance in the continuum limit. Even before fine tuning the gauge breaking of the overlap is of a specific kind because the Hamiltonians are gauge covariant, implying (excluding backgrounds with degenerate fermionic ground states) $`G(g)|v\{U\}\rangle =e^{i\chi (\{U\},g)}|v\{U^g\}\rangle `$. $`\chi -\chi ^{}`$ is a lattice Wess-Zumino action. By fine tuning we conjectured that one can make $`\chi =\chi ^{}`$ if anomalies cancel. But even if the lattice Wess-Zumino action is not zero, as long as anomalies cancel, it can be small in the sense that one can expand in it. (The cancelation of anomalies implies that in the continuum limit the lattice Wess-Zumino action will have no contribution from the continuum Wess-Zumino action.) Then the mechanism discovered by Förster, Nielsen and Ninomiya shows that exact gauge invariance will be restored in the continuum limit and the Higgs like degrees of freedom representing gauge transformations decouple . This was checked numerically in the above mentioned abelian two dimensional model already in 1997. Although this was only two dimensions it was not at all trivial.
The next chiral speaker will concentrate on the phase of the overlap . Berry’s connection and curvature will be seen to play a central role. It is important to stress that the problem has reduced to a phase choice only because in the overlap this is the single source of gauge breaking, just as emphasized in the continuum context by Fujikawa: Any fermionic correlation function in a fixed gauge background violates gauge covariance by no more and no less than the determinantal anomaly.
### 3.8 GW and overlap
We already heard that the chiral overlap produced the vector-like operator $`D_o`$ and that $`D_o`$ can be factorized to give back the chiral overlap. Now we focus on the relation between $`D_o`$ and GW. For related considerations, see .
Let us first specify what is meant by GW (I adopt a restricted definition including $`\gamma _5`$-hermiticity.): (1a) One is given a local hermitian positive operator $`R`$ which commutes with $`\gamma _5`$. The issue is to find a Dirac operator satisfying $`\{\gamma _5,D^{-1}-R\}=0`$ and (1b) $`\gamma _5D=(\gamma _5D)^{\dagger }`$.
Clearly, the operator $`D_c^{-1}=D^{-1}-R`$ anticommutes with $`\gamma _5`$ and is $`\gamma _5`$-hermitian. (By standard wisdom, $`D_c`$ cannot be local.) Define the operator $`V=(D_c-1)(D_c+1)^{-1}`$. $`V`$ is seen to be $`\gamma _5`$-hermitian and unitary. Inverting the relation, we find $`D_c^{-1}=\frac{1-V}{1+V}`$, leading trivially to $`D=(1+V)\frac{1}{1+R-(1-R)V}`$, which is the most general solution of (1a+b), in terms of a unitary hermitian operator $`ϵ\equiv \gamma _5V`$. ($`D_o`$ corresponds to $`R=1`$.) Obviously, $`ϵ`$ squares to unity. Although this satisfies the GW requirement, it is not enough to produce massless fermions. One also needs that topology be given by $`Q\{U\}=-\frac{1}{2}trϵ`$. In our realization of the overlap we used $`ϵ=\epsilon (H)`$ with a sparse $`H\{U\}`$ analytic in the link variables and showed that topology and perturbation theory produce the correct chiral answers.
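This algebra can be verified in a few lines. In the toy check below, $`V`$ is a random unitary $`\gamma _5`$-hermitian matrix and $`R=r\cdot 1`$ is taken to be a scalar — simplifying assumptions of the sketch, not restrictions of the construction.

```python
import numpy as np

# Toy check that D = (1+V) [ (1+R) - (1-R)V ]^(-1) solves the GW relation
# {gamma_5, D^(-1)} = 2 R gamma_5 together with gamma_5-hermiticity, for a
# random unitary gamma_5-hermitian V and scalar R = r*1 (assumed setup).
n, r = 8, 0.7
rng = np.random.default_rng(0)
g5 = np.kron(np.diag([1.0, -1.0]), np.eye(n // 2))   # a grading "gamma_5"
K = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
K = K + K.conj().T                                   # generic hermitian kernel
w, S = np.linalg.eigh(K)
V = g5 @ S @ np.diag(np.sign(w)) @ S.conj().T        # V = gamma_5 sign(K)

D = (np.eye(n) + V) @ np.linalg.inv((1 + r) * np.eye(n) - (1 - r) * V)
Dinv = np.linalg.inv(D)
print(np.allclose(g5 @ Dinv + Dinv @ g5, 2 * r * g5))   # (1a) with this R
print(np.allclose(g5 @ D @ g5, D.conj().T))             # (1b)
```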
It is trivial that the overlap provides a solution to GW. What is the physical meaning of $`ϵ`$ ?
Physically, $`ϵ`$ by itself describes Dirac fermions, but they have infinite mass. Therefore, unlike $`D_c`$, $`ϵ`$ is local (except when ill defined). In the continuum, the infinite mass Hermitian Euclidean Dirac operator would have a spectrum concentrated at $`\pm \mathrm{}`$. $`ϵ`$ is a rescaled version, with spectrum at $`\pm 1`$. Any lattice operator $`H`$, representing Dirac fermions with order $`\frac{1}{a}`$ negative mass produces an $`ϵ=\epsilon (H)`$. A solution of GW, $`ϵ`$, that also satisfies the additional conditions required of massless fermions is an acceptable $`H`$, and reproduces itself in an overlap construction since $`ϵ=\epsilon (ϵ)`$. The overlap provides more flexibility, allowing the replacement of $`\gamma _5`$ by $`ϵ^{}`$.
It is unreasonable to view the GW relation as pivotal in Nature because it is just a formula, not the embodiment of a fundamental principle. Moreover, the formula accepts also unphysical solutions. On the other hand, the overlap is a direct reflection of a system consisting of an infinite number of fermions governed by some internal dynamics realizing an internal index; it is easier to accept that this is a natural mechanism, conceivably operative in Nature.
Had events in this decade occurred in reversed chronological order the infinite fermion number “explanation” of the GW relation might have been viewed as an inspired insight.
## 4 A list of projects
There is plenty to do and you are invited to join the chiral subfield! To make my case, I shall present a list of projects. I don’t suggest that you slavishly execute any one of them. The intention is more to inspire you, so you come up with your own idea. The main project seems difficult to me:
- Find a genuinely non-overlap way to solve the chiral fermion problem. If this is possible we shall conclude that the overlap only solved our problem, not necessarily that faced by Nature.
Let me turn to less ambitious proposals:
### 4.1 General particle physics
- Examine which aspects of low energy physics would be particularly sensitive to an UV regulator of the overlap type.
- Find a natural way to explain the subtraction of the infinite Dirac sea vacuum energies.
- Prove that one cannot find an acceptable solution to the GW relation which is nearest neighbor even in only one direction. Proceed to argue that unitarity in Minkowski space requires an infinite number of fermions.
### 4.2 Numerical 4D chiral gauge theories
In order to avoid dealing with a complex measure but still treat a non-trivial chiral model I suggest solving numerically an $`SU(2)`$ gauge theory with one Weyl $`j=\frac{3}{2}`$ multiplet. The chiral determinant is real and there are no Witten anomalies. But, there also is no singlet $`\psi \psi `$ bilinear.
- What is the phase structure as a function of $`\beta `$, the gauge coupling ?
- Does the model confine ? What is its particle spectrum ?
- Are there massless fermion states ?
### 4.3 QCD
- Go to the $`F_4`$ lattice to disallow terms of the form $`\sum _\mu p_\mu ^4`$ which are scalars on a hypercubic lattice but not in nature. (This is analogous to Higgs work, where, strictly speaking, claims about Nature on the basis of lattice work cannot be made using hypercubic lattices without fine tuning away the $`\sum _\mu p_\mu ^4`$ term.) Compute, by Monte Carlo simulation, the order $`p^4`$ coefficients in a chiral effective Lagrangian for pions resulting from massless quarks . It is suggested to do this using finite size soft pion theorems of the type used previously in $`F_4`$ lattice Higgs work . (Let me take the opportunity to correct a misunderstanding that occurred during the discussion following my talk; contrary to a comment from the audience, this problem has not been solved in a poster presented at this conference .)
- Use $`D_o`$ to define nonperturbative improvement coefficients to standard actions. The improvement intends to hasten the restoration of chirality in the continuum limit. On a gauge configuration typical of a fixed $`\beta `$ evaluate $`c`$ and $`c^{}`$ by minimizing $`\parallel c^{}D_o-(D_W+c\sigma F)\parallel ^2`$.
- The sign function $`\epsilon (M)`$ is well defined even for complex $`M`$: investigate QCD at nonzero chemical potential but zero quark mass using the overlap.
- Use the Wilson-Dirac operator $`D_W`$ to define the pure gauge action, as well as $`ϵ`$. This should reduce the density of states with low $`H_W^2`$ eigenvalues. For example, a pure gauge action could contain the term $`tr\frac{1}{H_W^2}`$, or, alternatively, one could use the determinant of a function of $`H_W`$ implemented by auxiliary heavy bosonic fields.
# A statistical interpretation of the correlation between intermediate mass fragment multiplicity and transverse energy
## I Introduction
Nuclear multifragmentation is arguably the most complex nuclear reaction, involving both collective and internal degrees of freedom to an extent unmatched even by fission. As in fission, multifragmentation is expected to present a mix of statistical and dynamical features.
A substantial body of evidence has been presented in favor of the statistical nature of several features such as fragment multiplicities , charge distributions , and angular distributions . Recently however, evidence has been put forth for the lack of statistical competition between intermediate mass fragment (IMF) emission and light charged particle (LCP) emission. More specifically, it has been shown that for the reaction <sup>136</sup>Xe+<sup>209</sup>Bi at 28 $`A`$MeV: a) LCP emission saturates with increasing number of emitted IMFs ; b) with increasing transverse energy ($`E_t`$), the contribution of the LCPs to $`E_t`$ saturates while that of the IMFs becomes dominant ; c) there is a strong anti-correlation of the leading fragment kinetic energy with the number of IMFs emitted . This body of evidence seems to suggest that beyond a certain amount of energy deposition most, if not all, of the energy goes into IMF production rather than into LCP emission in a manner inconsistent with statistical competition.
Given the importance of these results in showing a potential failure of the statistical picture and a possible novel dynamical mechanism of IMF production, we have applied the same analysis to a set of systematic measurements of <sup>129</sup>Xe+<sup>197</sup>Au at several bombarding energies. In what follows we report on 1) new experimental data that confirm the general nature of the observations in ; 2) new experimental data which show trends that are different from those observed in ; 3) the effectiveness of gating on IMF multiplicity ($`N_{\mathrm{IMF}}`$) as an event-selection strategy; and 4) the reproduction of key results with statistical model calculations.
## II Experimental setup
LCP and IMF yields and their correlations with, and contributions to $`E_t`$ were determined for the reaction <sup>129</sup>Xe+<sup>197</sup>Au at 30, 40, 50, and 60 $`A`$MeV. The experiments were performed at the National Superconducting Cyclotron Laboratory at Michigan State University (MSU). Beams of <sup>129</sup>Xe, at intensities of about $`10^7`$ particles per second, irradiated gold targets of approximately 1 mg/cm<sup>2</sup>. The beam was delivered to the 92 inch scattering chamber with a typical beam spot diameter of 2-3 mm.
For the bombarding energies of 40, 50 and 60 $`A`$MeV, LCPs and IMFs emitted at laboratory angles of 16°-160° were detected using the MSU Miniball . As configured for this experiment, the Miniball consisted of 171 fast plastic (40 $`\mu `$m)-CsI(2 cm) phoswich detectors, with a solid angle coverage of approximately 87% of 4$`\pi `$. Identification thresholds for $`Z`$=3, 10, and 18 fragments were approximately 2, 3, and 4 MeV/nucleon, respectively. Less energetic charged particles with energies greater than 1 MeV/nucleon were detected in the fast plastic scintillator foils, but were not identified by $`Z`$ value. Isotopic identification was achieved for hydrogen and helium isotopes with energies less than 75 MeV/nucleon. Energy calibrations were performed using elastically scattered <sup>12</sup>C beams at forward angles and by using the punch-through points of the more backward detectors to normalize to existing data . The energy calibrations are estimated to be accurate to about 10% at angles less than 31° and to about 20% for the more backward angles.
Particles going forward (θ < 16°) were measured with the LBL forward array , a high resolution Si-Si(Li)-plastic scintillator array. Fragments of charge $`Z`$=1-54 were detected with high resolution using a 16-element Si(300 $`\mu `$m)-Si(Li)(5 mm)-plastic(7.6 cm) array with a geometrical efficiency of approximately 64%. Where counting statistics allowed, individual atomic numbers were resolved for $`Z`$=1-54. Representative detection thresholds of $`Z`$=2, 8, 20 and 54 fragments were approximately 6, 13, 21, and 27 MeV/nucleon, respectively. Energy calibrations were obtained by directing 18 different beams ranging from $`Z`$=1 to 54 into each of the 16 detector elements. The energy calibration of each detector was accurate to better than 1%, and position resolutions of $`\pm `$1.5 mm were obtained.
The complete detector system for these higher energies (LBL array + Miniball) subtended angles from 2° to 160° and had a geometric acceptance of approximately 88% of 4$`\pi `$. As a precaution against secondary electrons, detectors at angles larger than 100° were covered with Pb-Sn foils of thickness 5.05 mg/cm<sup>2</sup> (this increased the detection thresholds for these backward detectors). Both the Miniball and forward array were cooled and temperature stabilized.
For the 30 $`A`$MeV data set, the forward going particles (θ = 8°-23°) were measured by the MULTICS array , a high resolution gas-Si-Si(Li)-CsI array. Detection thresholds were approximately 2.5 MeV/nucleon for all fragments ($`Z`$=1-54), and the resolution in $`Z`$ was much better than 1 unit for $`Z<`$30. Energy calibrations were performed by directing 18 separate beams into each of the 36 telescopes. The calibration beams had energies of $`E/A`$=30 and 70 MeV, and ranged in mass from <sup>12</sup>C to <sup>129</sup>Xe. An energy resolution of better than 2% was obtained. Position calibrations of the Si elements of the MULTICS array were performed with the procedure of ref. . The angular resolution was estimated to be approximately 0.2°. Charged particles emitted beyond 23° were detected with the Miniball in a setup similar to the higher bombarding energies described above. The complete detector system covered approximately 87% of 4$`\pi `$.
Data were taken under two trigger conditions: at least two Miniball elements triggered or at least one IMF observed in the relevant forward array.
Further details of the experimental setups can be found in refs. .
## III Comparison with previous results
Following the procedure outlined in , the average LCP yields were determined as a function of $`N_{\mathrm{IMF}}`$ (which serves as a rough measure of impact parameter or energy deposition). Fig. 1 shows an example of such an analysis for the reaction <sup>129</sup>Xe+<sup>197</sup>Au at bombarding energies between 30 and 60 $`A`$MeV. The average LCP multiplicity ($`\langle N_{\mathrm{LCP}}\rangle `$) does indeed saturate with increasing $`N_{\mathrm{IMF}}`$, as observed in . However, the value to which $`\langle N_{\mathrm{LCP}}\rangle `$ saturates ($`\langle N_{\mathrm{LCP}}\rangle _{\mathrm{max}}`$) rises with increasing bombarding energy and is listed in Table I. The IMF multiplicity at which the saturation occurs is approximately 4-5 at 30 $`A`$MeV and rises with increasing bombarding energy to a value of 8-9 at 60 $`A`$MeV.
The average LCP contribution to $`E_t`$ ($`\langle E_t^{\mathrm{LCP}}\rangle `$) saturates in a bombarding energy dependent fashion as well (see $`\langle E_t^{\mathrm{LCP}}\rangle _{\mathrm{max}}`$ in Table I and open symbols of Fig. 1, bottom panel). In contrast, the average IMF contribution to $`E_t`$ ($`\langle E_t^{\mathrm{IMF}}\rangle `$) rises linearly with increasing IMF multiplicity. The significance of the bombarding energy dependence of these observations will be discussed in the next section.
We now explore the dependence of these same variables on $`E_t`$. According to the procedure outlined in , the average yields of multiplicity and transverse energy for both IMFs and LCPs were determined as a function of $`E_t`$ (which serves as a measure of impact parameter or energy deposition ). In Fig. 2 are plotted $`\langle N_{\mathrm{IMF}}\rangle `$, $`\langle N_{\mathrm{LCP}}\rangle `$, $`\langle E_t^{\mathrm{IMF}}\rangle `$, and $`\langle E_t^{\mathrm{LCP}}\rangle `$ as a function of $`E_t`$ for bombarding energies between 30 and 60 $`A`$MeV. All the observables rise with increasing $`E_t`$, in disagreement with the observations in . In , the value of $`\langle E_t^{\mathrm{LCP}}\rangle `$ is observed to saturate to a relatively small value compared to $`\langle E_t^{\mathrm{IMF}}\rangle `$ (see Fig. 3), which is at variance with the observations in Fig. 2. The origin of this disagreement will be discussed in the next section.
Lastly, according to the procedure in , the average kinetic energy of the projectile-like fragment ($`\langle E/A\rangle _{\mathrm{PLF}}`$; the PLF is defined as the heaviest forward-moving particle in an event, with $`Z_{\mathrm{PLF}}\ge 10`$ and θ ≤ 23°) has been determined as a function of $`N_{\mathrm{IMF}}`$, an example of which is given in Fig. 4. Here, we confirm the observation in . For increasing $`N_{\mathrm{IMF}}`$, the energy per nucleon of the leading fragment decreases continuously.
The three aforementioned observations have been used to suggest that, above a certain excitation energy, the IMFs get the lion’s share of the energy while the LCPs lose their capability to compete . In the following section, we explore each of these observations and suggest possible alternative explanations.
## IV Interpretation
We begin with the saturation of $`\langle N_{\mathrm{LCP}}\rangle `$ and $`\langle E_t^{\mathrm{LCP}}\rangle `$ as opposed to the continuous rise of $`\langle E_t^{\mathrm{IMF}}\rangle `$ observed in Fig. 1. $`\langle E_t^{\mathrm{IMF}}\rangle `$ rises linearly since
$$E_t^{\mathrm{IMF}}=\sum _{i=1}^{N_{\mathrm{IMF}}}E_i\mathrm{sin}^2\theta _i\approx N_{\mathrm{IMF}}\langle ϵ_t^{\mathrm{IMF}}\rangle ,$$
(1)
where $`\langle ϵ_t^{\mathrm{IMF}}\rangle `$ is the average transverse energy of an IMF. Thus, the reason for the continuous rise of $`\langle E_t^{\mathrm{IMF}}\rangle `$ can be understood quite simply. But what is the reason for the saturation of $`\langle E_t^{\mathrm{LCP}}\rangle `$ and $`\langle N_{\mathrm{LCP}}\rangle `$? We believe that the values of $`N_{\mathrm{IMF}}`$ where $`\langle N_{\mathrm{LCP}}\rangle `$ and $`\langle E_t^{\mathrm{LCP}}\rangle `$ saturate represent the tails of the IMF multiplicity distribution, which are determined by the most central collisions.
For example, the values of IMF multiplicity at which the observables in Fig. 1 saturate ($`N_{\mathrm{IMF}}^{\mathrm{sat}}`$) can be understood in terms of an impact parameter scale. Consider the probability $`P`$ of emitting $`N_{\mathrm{IMF}}`$ IMFs and its integrated yield
$$S(N_{\mathrm{IMF}})=\sum _{i=N_{\mathrm{IMF}}}^{\mathrm{}}P(i)$$
(2)
as shown in Fig. 5 for the reaction <sup>129</sup>Xe+<sup>197</sup>Au at 50 $`A`$MeV. Average impact parameter scales, as they are commonly employed, are proportional to $`\sqrt{S}`$ . Note that the multiplicities at which saturation occurs represent roughly 5% of the total integrated cross section (dashed line in the bottom panel of Fig. 5). The $`N_{\mathrm{IMF}}`$ value $`N_{\mathrm{IMF}}^{\mathrm{sat}}`$ for which $`S\approx 0.05`$ is listed in Table I for each of the different bombarding energies. $`N_{\mathrm{IMF}}^{\mathrm{sat}}`$ tracks rather well the maximum average IMF multiplicity ($`\langle N_{\mathrm{IMF}}\rangle _{\mathrm{max}}`$) measured for the most central collisions (top 5% of events) based upon the $`E_t`$ scale.
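For illustration, the geometric prescription reads as follows; the multiplicity distribution entered below is made up for the sketch and is not measured data.

```python
import numpy as np

# Sketch of the geometric impact-parameter scale built on Eq. (2):
# b-hat ~ sqrt(S). P(N_IMF) below is illustrative, not measured data.
P = np.array([0.28, 0.24, 0.18, 0.12, 0.08, 0.05, 0.03, 0.015, 0.005])
P = P / P.sum()
S = np.cumsum(P[::-1])[::-1]          # S(N) = sum over i >= N of P(i)
b_hat = np.sqrt(S)                    # reduced impact parameter b/b_max
for n, (s_val, b) in enumerate(zip(S, b_hat)):
    print(f"N_IMF >= {n}:  S = {s_val:5.3f}   b-hat = {b:4.2f}")
# the multiplicity where S drops below ~0.05 tags the ~5% most central events
```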
The above observations demonstrate that large IMF multiplicities ($`N_{\mathrm{IMF}}>\langle N_{\mathrm{IMF}}\rangle _{\mathrm{max}}`$) have small probabilities and represent the extreme tails of events associated with the most central collisions. In other words, events with increasing values of $`N_{\mathrm{IMF}}`$ in the saturation region of Fig. 1 do not come from increasingly more central collisions where more energy has been dissipated. Thus, $`N_{\mathrm{IMF}}`$ is useful as a global event selector over only a very limited range.
Consequently, it is expected that statistical models should exhibit similar trends as those observed in Fig. 1. Examples of such predictions are shown in Fig. 6 for the statistical multifragmentation model SMM (open symbols) and for percolation (solid symbols) . In both models an excitation energy ($`E`$) distribution was used to mimic an impact parameter ($`b`$) weighting. Assuming that $`b`$=0 events give rise to the maximum excitation energy ($`E_{\mathrm{max}}`$), we have chosen the number of events at a given $`E`$ proportional to $`E_{\mathrm{max}}-E`$. The “excitation energy” for the percolation calculation is essentially represented by the number of broken bonds and is calculated as per ref. .
Both calculations show a saturation of $`N_{\mathrm{LCP}}`$ when plotted as a function of IMF multiplicity. This behavior can be understood in terms of a simple model. Consider the statistical emission of two particle types with barriers $`B_1`$ and $`B_2`$ (and $`B_2>B_1`$). Assume the emission probabilities are $`p_i\propto \mathrm{exp}[-B_i/T]`$ ($`i=1,2`$) with $`p_1+p_2=1`$. With the temperature $`T`$ characterized in terms of the total multiplicity $`n_{\mathrm{tot}}=n_1+n_2=\alpha T`$, and ignoring mass conservation, the solution for $`n_1`$ as a function of $`n_2`$ can be calculated for a distribution of excitation energies like that described above. The solution of this model is shown by the asterisk symbols in the top right panel of Fig. 6 for $`B_1`$=8, $`B_2`$=24, $`T_{\mathrm{max}}`$=10 and $`\alpha =2`$ (and $`N_{\mathrm{IMF}}`$=$`n_2`$, $`N_{\mathrm{LCP}}`$=$`n_1`$). This saturation is qualitatively similar to that of the other statistical models listed in Fig. 6 and to the behavior observed in Fig. 1. Furthermore, the saturation value of $`N_{\mathrm{LCP}}`$, as well as the value of $`N_{\mathrm{IMF}}`$ at which saturation occurs, both depend on the maximum energy used in the calculation. Consequently, for statistical emission one expects (and observes in Fig. 1) a bombarding energy dependence of the saturation which reflects the total energy available to the decaying system. These behaviors are generic features that are present in any statistical model .
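For illustration, this two-barrier toy model can be simulated directly. The sketch below uses the quoted parameters ($`B_1`$=8, $`B_2`$=24, $`T_{\mathrm{max}}`$=10, $`\alpha =2`$) with an event weight proportional to $`T_{\mathrm{max}}-T`$; the sampling choices (Poisson total multiplicity, binomial species split) are our own assumptions, made only to mimic event-by-event fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)
B1, B2, Tmax, alpha = 8.0, 24.0, 10.0, 2.0

# Event weight ~ (Tmax - T), mimicking the E_max - E weighting above;
# T is clipped away from zero to avoid overflow in the Boltzmann factor.
T = np.clip(Tmax * (1.0 - np.sqrt(rng.random(200000))), 0.05, None)
p2 = 1.0 / (1.0 + np.exp((B2 - B1) / T))    # p_i ~ exp(-B_i/T), p1 + p2 = 1

n_tot = rng.poisson(alpha * T)              # n_1 + n_2 = alpha * T on average
n2 = rng.binomial(n_tot, p2)                # high-barrier "IMF-like" species
n1 = n_tot - n2                             # low-barrier "LCP-like" species

# <n_1> as a function of n_2 rises and then saturates, as in Fig. 6.
for m in range(n2.max() + 1):
    sel = n2 == m
    if sel.sum() > 100:
        print(m, round(n1[sel].mean(), 2))
```

The saturation emerges because large $`n_2`$ values are reached only through fluctuations of events already near $`T_{\mathrm{max}}`$, not through any additional energy supply.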
For completeness, the IMF and LCP yields from the SMM calculations are plotted as a function of $`E_t`$ as well in Fig. 6 (left panels). There is no saturation of $`E_t^{\mathrm{LCP}}`$ with increasing $`E_t`$ as was observed in . Instead, this calculation shows qualitatively the same trends as experimentally observed in Fig. 2.
What then causes the (unconfirmed) saturation of $`E_t^{\mathrm{LCP}}`$ observed in <sup>136</sup>Xe+<sup>209</sup>Bi (bottom panel of Fig. 7)? We believe that the saturation observed in <sup>136</sup>Xe+<sup>209</sup>Bi is likely due to the limited dynamic range of the detectors used. The charged particle yields from the <sup>136</sup>Xe+<sup>209</sup>Bi reaction were measured with the dwarf array whose thin CsI crystals (thickness of 4 mm for polar angles $`\theta =55^{\circ }`$–$`168^{\circ }`$, 8 mm for $`\theta =32^{\circ }`$–$`55^{\circ }`$ and 20 mm for $`\theta =4^{\circ }`$–$`32^{\circ }`$) are unable to stop energetic LCPs. For example, protons punch through 4 mm of CsI at an energy of 30 MeV. Consequently, their contribution to $`E_t`$ could be significantly underestimated.
An example of the distortions that would be caused by the detector response of the dwarf array on the similar <sup>129</sup>Xe+<sup>197</sup>Au reaction at 30 $`A`$MeV is given in Fig. 7. In the top panel is plotted $`E_t^{\mathrm{LCP}}`$ and $`E_t^{\mathrm{IMF}}`$ as a function of $`E_t`$ as measured by the MULTICS/Miniball collaboration. The thicknesses of the CsI crystals from these detectors range from 20 to 40 mm. Protons punch through 20 mm of CsI with an energy of 76 MeV. In the middle panel of Fig. 7, the <sup>129</sup>Xe+<sup>197</sup>Au data have been “filtered” using the dwarf array high energy cutoffs which remove high energy particles from $`E_t`$. After filtering, the two prominent features observed in the <sup>136</sup>Xe+<sup>209</sup>Bi data set (bottom panel of Fig. 7) then appear in the filtered data. Namely, $`E_t^{\mathrm{LCP}}`$ saturates to a small value and $`E_t^{\mathrm{IMF}}`$ becomes the “apparent” dominant carrier of $`E_t`$. These two features are likely to be instrumental in origin and therefore do not warrant a physical interpretation.
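A minimal version of such a software filter is sketched below. The punch-through energies follow the numbers quoted in the text (30 MeV for 4 mm, 76 MeV for 20 mm of CsI); the 8 mm value and the toy event are assumptions made purely for illustration.

```python
import numpy as np

def punch_through(theta_deg):            # MeV, protons in CsI
    if theta_deg < 32.0:  return 76.0    # 20 mm
    if theta_deg < 55.0:  return 45.0    # 8 mm (interpolated assumption)
    return 30.0                          # 4 mm

def transverse_energy(E, theta_deg, filtered=False):
    """E_t = sum E_i sin^2(theta_i), optionally dropping punch-through LCPs."""
    E = np.asarray(E, float)
    th = np.radians(np.asarray(theta_deg, float))
    keep = np.ones(E.size, bool)
    if filtered:
        keep = E < np.array([punch_through(t) for t in theta_deg])
    return float(np.sum(E[keep] * np.sin(th[keep]) ** 2))

E, th = [12, 60, 95, 25], [70, 40, 20, 100]   # made-up LCP energies (MeV), angles
print(transverse_energy(E, th), transverse_energy(E, th, filtered=True))
```

The filtered $`E_t`$ is systematically much smaller, reproducing the apparent LCP saturation without any physics input.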
Last of all, we come to the behavior of the average kinetic energy of the projectile-like fragment $`E/A_{\mathrm{PLF}}`$ as a function of $`N_{\mathrm{IMF}}`$, an example of which is given in Fig. 4 for <sup>129</sup>Xe+<sup>197</sup>Au at 40 $`A`$MeV (solid circles). From the decrease of $`E/A_{\mathrm{PLF}}`$ with $`N_{\mathrm{IMF}}`$, it was concluded that the kinetic energy of the PLF is expended for the production of IMFs . It was also argued that for increasing IMF multiplicity, the saturation of $`N_{\mathrm{LCP}}`$ represents a critical excitation energy value beyond which no further amount of relative kinetic energy between the PLF and TLF is converted into heat. In other words, the IMFs no longer compete with the LCPs for the available energy – they get it all.
One can test the consistency of this explanation by studying the same observable, $`E/A_{\mathrm{PLF}}`$, but now as a function of $`N_{\mathrm{LCP}}`$ (open symbols, top panel of Fig. 4). We observe the same dependence as that of the IMFs – a monotonic decrease of $`E/A_{\mathrm{PLF}}`$ with increasing $`N_{\mathrm{LCP}}`$, which reaches a value of $`\sim `$17 MeV at the largest multiplicities. This behavior persists whether we restrict ourselves to the saturation region ($`N_{\mathrm{IMF}}\geq 6`$, triangles) or not (open circles). The similar behavior of $`E/A_{\mathrm{PLF}}`$ with respect to $`N_{\mathrm{IMF}}`$ and $`N_{\mathrm{LCP}}`$ indicates that the LCPs do compete with the IMFs for the available energy.
This can be seen more clearly by pre-selecting events with a better global observable, $`E_t`$ , as done in the bottom panel of Fig. 4. Once a window of $`E_t`$ is selected, a corresponding value of $`E/A_{\mathrm{PLF}}`$ is also determined, and there is no longer any strong dependence of $`E/A_{\mathrm{PLF}}`$ on $`N_{\mathrm{IMF}}`$ or $`N_{\mathrm{LCP}}`$. In fact, the resulting $`N_{\mathrm{IMF}}`$ and $`N_{\mathrm{LCP}}`$ selections both give the same value of $`E/A_{\mathrm{PLF}}`$, consistent with a scenario where both species compete for the same available energy.
## V Conclusions
In summary, we have made a systematic study of LCP and IMF observables as a function of IMF multiplicity and transverse energy for the reaction <sup>129</sup>Xe+<sup>197</sup>Au at bombarding energies between 30 and 60 $`A`$MeV.
We observe that $`N_{\mathrm{LCP}}`$ and $`E_t^{\mathrm{LCP}}`$ saturate as a function of $`N_{\mathrm{IMF}}`$ in a bombarding energy dependent way. These saturations are predicted by statistical models and are fundamental features of statistical decay . A bombarding energy dependence of $`N_{\mathrm{LCP}}`$, $`E_t^{\mathrm{LCP}}`$, and $`N_{\mathrm{IMF}}^{\mathrm{sat}}`$ is expected (and experimentally observed) within the framework of statistical decay.
In addition, it has been demonstrated in a model independent fashion that the LCPs compete with the IMFs for the available energy. By using $`E_t`$, a more sensitive event selection is obtained. The analysis also demonstrates the limited usefulness of event classification using only $`N_{\mathrm{IMF}}`$.
We do not observe a saturation of $`E_t^{\mathrm{LCP}}`$ as a function of $`E_t`$ at any bombarding energy. The saturation of $`E_t^{\mathrm{LCP}}`$ as a function of $`E_t`$ observed in ref. is likely due to instrumental distortions. We can account for this saturation by filtering the present measurements of <sup>129</sup>Xe+<sup>197</sup>Au with the experimental thresholds present in refs. . The resulting distortions to the data are large and induce qualitative changes in the trends of the data, causing an unphysical saturation of $`E_t^{\mathrm{LCP}}`$. Therefore, the observations listed in do not demonstrate any measurable failure of statistical models that would justify invoking dynamical IMF production by default. While the IMFs may indeed be produced dynamically, the observations listed in refs. do not provide evidence for such a conclusion.
Acknowledgments
This work was supported by the Nuclear Physics Division of the US Department of Energy, under contract DE-AC03-76SF00098, and by the National Science Foundation under Grants No. PHY-8913815, No. PHY-90117077, and No. PHY-9214992. One of us (L.B.) acknowledges a fellowship from the Natural Sciences and Engineering Research Council (NSERC), Canada, and another (A.F.) acknowledges economic support from the Fundación J.B. Sauberan, Argentina.
Present addresses:
Indiana University Cyclotron Facility, 2401 Milo B. Sampson Ln, Bloomington, IN 47408
Washington Aerial Measurements Operations, Bechtel Nevada, P.O. Box 380, Suitland, MD 20752
<sup>§</sup> Instituto de Fisica, Universidade de Sao Paulo, C.P. 66318, CEP 05389-970, Sao Paulo, Brazil
Physics Department, Seoul National University, Seoul, 151-742, Korea.
Physics Department, Ohio State University, Columbus, OH 43210
<sup>∗∗</sup>Dipartimento di Fisica and Istituto Nazionale di Fisica Nucleare, Via A. Valerio 2, 34127 Trieste, Italy
<sup>††</sup>Physics Department, Hope College, Holland, MI 49423
## 1 Introduction:
Perhaps the most promising model proposed for the origin of the $`\sim `$GeV extragalactic $`\gamma `$-ray background (EGRB), first detected by SAS-2 \[Fichtel, Simpson, & Thompson 1978\] and later confirmed by EGRET, is that it is the collective emission of an isotropic distribution of faint, unresolved blazars \[Stecker & Salamon 1996 and references therein\]. EGRET has determined that the EGRB spectrum is consistent with a single power law,
$$\frac{dN_\gamma }{dE}=(7.32\pm 0.34)\times 10^{-6}\left(\frac{E}{0.451\,\mathrm{GeV}}\right)^{-2.10\pm 0.03}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\mathrm{GeV}^{-1}$$
(1)
between 0.1 and $`50`$ GeV (statistics limited) \[Sreekumar et al. 1998\]. Because GLAST has a roughly energy-independent area of $`10^4`$ cm<sup>2</sup> above 0.1 GeV (compared to EGRET’s $`10^3`$ cm<sup>2</sup> at 1 GeV and $`10^2`$ cm<sup>2</sup> at 100 GeV), with an estimated point source sensitivity (PSS) nearly two orders of magnitude lower than EGRET’s, GLAST will be able to (a) detect something on the order of $`10^2`$ times more blazars than EGRET, and (b) measure the EGRB spectrum to $`>1`$ TeV (assuming the EGRET power law spectrum). These two capabilities will enable GLAST to either strongly support or reject the unresolved-blazar hypothesis for the origin of the EGRB.
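A back-of-the-envelope check of point (b): integrating the power law of Eq. (1) above 100 GeV and folding in rough instrument parameters reproduces the $`10^3`$–$`10^4`$ photon estimate quoted in section 3. In the sketch below the effective area is the value quoted above, while the average field of view and exposure time are our own illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def dN_dE(E):                        # Eq. (1): photons cm^-2 s^-1 sr^-1 GeV^-1
    return 7.32e-6 * (E / 0.451) ** (-2.10)

F_100, _ = quad(dN_dE, 100.0, np.inf)   # integral EGRB flux above 100 GeV

area = 1.0e4                         # cm^2, roughly energy independent (text)
omega = 2.0                          # sr, assumed average field of view
t_obs = 3 * 3.15e7                   # s, assumed ~3 yr survey exposure
print("EGRB photons above 100 GeV: %.0e" % (F_100 * area * omega * t_obs))
```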
## 2 The Unresolved Blazar Model:
To determine the collective output of all $`\gamma `$-ray blazars, one can use the observed EGRET distribution of $`\gamma `$-ray luminosities and extrapolate to obtain a “direct” $`\gamma `$-ray luminosity function (LF) $`f_\gamma (l_\gamma ,z)`$ \[Chiang, et al. 1995 \] ($`f_\gamma `$ units being sources/co-moving volume-differential luminosity), where $`l_\gamma `$ is the differential luminosity (s<sup>-1</sup>) at a (source) fiducial photon energy $`E_f`$, and we assume power law spectra for all sources, $`l(E)=l_\gamma (E/E_f)^{-\alpha }`$. Alternatively, one can make use of much larger catalogs at other wavelengths, and assume some relationship between the source luminosities at the catalog wavelength and the GeV region \[Padovani et al. 1993; Stecker, Salamon, & Malkan 1993\]. Both methods are fraught with potential errors. In the former method, only the “tip of the iceberg” of the $`\gamma `$-ray LF has been observed by EGRET, and extrapolation to lower luminosities that fall below EGRET’s PSS (for any reasonable source distance) involves some level of assumption. In the latter method, the assumption of a linear relation between the luminosities of a source at disparate wavelengths is by no means well established \[Padovani, et al. 1993; Muecke, et al 1996, Mattox, et al. 1997\].
We used the latter method in the past \[Stecker and Salamon 1996\] to estimate the contribution of unresolved blazars to the EGRB, and found that up to 100% of EGRET’s EGRB can be accounted for. This model assumes a linear relationship between the differential $`\gamma `$-ray luminosity $`l_\gamma `$ at $`E_f=0.1`$ GeV and the differential radio luminosity $`l_r`$ at 2.7 GHz for all sources, viz., $`l_\gamma =\kappa l_r`$, where $`\kappa `$ is a constant. One can then use the measured radio LF $`f_r(l_r,z)`$ for blazars (primarily for flat spectrum radio quasars) \[Dunlop & Peacock 1990\] to calculate the collective $`\gamma `$-ray output of all blazars.
The simplified elements of the calculation follow (the details can be found in Stecker and Salamon 1996). First, the $`\gamma `$-ray and radio LFs are related by $`f_\gamma (l_\gamma ,z)=\kappa ^{-1}f_r(\kappa ^{-1}l_\gamma ,z)`$. The number of sources $`𝒩`$ detected by a detector is a function of the detector’s point source sensitivity (PSS) at the fiducial energy $`E_f`$, $`[F(E_f)]_{\mathrm{min}}`$, where the integral $`\gamma `$-ray number flux $`F`$ is related to $`l_\gamma `$ by
$$F(E)=\frac{l_\gamma (E/E_f)^{-\alpha }}{4\pi \alpha (1+z)^{\alpha +1}R_0^2r^2},$$
(2)
where $`R_0r`$ is the luminosity distance to the source. The number of sources at redshift $`z`$ seen at Earth with an integral flux $`F(E_f)`$ is given by
$$\frac{d𝒩}{dF(E_f)}\mathrm{\Delta }F(E_f)=4\pi \int R_0^3r^2\,dr\,f_\gamma (l_\gamma ,z(r))\,\mathrm{\Delta }l_\gamma ,$$
(3)
where $`l_\gamma `$ in the integrand depends on $`z(r)`$ and $`F(E_f)`$ from Eq. 2. Figure 1 shows our 1996 calculations for the number of sources versus flux, compared to the EGRET detections. The cutoff at $`10^{-7}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ for $`E_f=0.1`$ GeV, their quoted PSS, is evident.
To calculate the EGRB, we integrate over all sources not detectable by the telescope to obtain the differential number flux of EGRB photons at an observed energy $`E_0`$:
$$\frac{dN_\gamma }{dE}(E_0)=4\pi \int R_0^3r^2\,dr\int d\alpha \,p(\alpha )\int _{l_{min}}^{l_{max}}\frac{dF}{dE}(E_0(1+z))\,f_\gamma (l_\gamma ,z)\,e^{-\tau (E_0,z)}\,dl_\gamma .$$
(4)
This expression includes an integration over the probability distribution of spectral indices $`\alpha `$ (based on the 2nd EGRET Catalog \[Thompson et al. 1995\]). There is also an important attenuation factor in this expression, due to the loss of $`\gamma `$-rays as they propagate through the intergalactic medium and interact with cosmic UV, optical, and IR background photons to produce $`e^\pm `$ pairs. Although there are significant uncertainties in estimates of the $`z`$-dependence of the soft photon background \[Salamon & Stecker 1998\], a qualitative feature cannot be avoided: If a substantial fraction of the EGRB is from high-$`z`$ flat spectrum radio quasars, a steepening in the spectrum should be seen at energies above 20 GeV. Figure 2 shows the calculated EGRB spectrum (based on EGRET’s PSS) compared to EGRET data. The “bump” in the spectrum above 10 GeV is due to there being a finite width in the assumed power-law spectral index distribution of blazars, which produces a summed unabsorbed spectrum with a positive second derivative \[Stecker & Salamon 1996\]. Although the match between the model’s spectrum and the recent EGRET EGRB data \[Sreekumar et al. 1998\] does not appear to be very good, one must examine it in the light of the assumptions and uncertainties. The location and amplitude of the “bump” depend upon the amount of high redshift absorption, which, in turn, depends upon uncertain UV and optical soft photon background densities \[Salamon & Stecker 1998\]. The “bump” also depends upon the poorly known blazar spectral index distribution, which we have assumed to be independent of quiescent-state luminosity. Finally, we note that the assumed power-law form for high redshift quasars above 10 GeV energy is highly questionable in light of the predictions of the Compton models for $`\gamma `$-ray emission from these sources; in fact their spectra are predicted to steepen at these energies. We also note that EGRET’s EGRB error bars are systematic, not statistical, and that Sreekumar et al. have stated that the model cannot be ruled out based on the EGRET data.
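The positive-second-derivative statement is easy to verify numerically: a superposition of power laws whose indices have a finite spread is a log-sum-exp in $`\mathrm{log}E`$ and is therefore convex in log-log space, even before absorption is applied. The sketch below uses an illustrative Gaussian index distribution centered at 2.1; the width is an assumption.

```python
import numpy as np

E = np.logspace(-1, 3, 200)                          # GeV
alphas = np.random.default_rng(1).normal(2.1, 0.3, 4000)
spec = np.mean(E[None, :] ** (-alphas[:, None]), axis=0)   # summed spectrum

logE, logS = np.log(E), np.log(spec)
curv = np.gradient(np.gradient(logS, logE), logE)    # d^2(log S)/d(log E)^2
print("minimum curvature of the summed spectrum:", curv.min())  # > 0
```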
## 3 GLAST and the EGRB:
Figure 1 shows that $`𝒪(10^3)`$ blazars should be detectable by GLAST, assuming it achieves a PSS of $`2\times 10^{-9}`$ cm<sup>-2</sup>s<sup>-1</sup>, and Figure 2 shows that this should reduce the EGRB by a factor of $`2`$ for $`E>1`$ GeV. These predictions are arrived at by replacing EGRET’s PSS with GLAST’s in our 1996 model.
Figure 2 shows our predicted reduction in the EGRB from unresolved blazars to be observed by GLAST compared with that observed by EGRET. The dashed line in Figure 2 represents the energy interval in which it is difficult to determine GLAST’s EGRB spectrum, due to the effect of worsening angular resolution with decreasing energy. For a lower PSS level, the number of detectable sources is larger. However, the angular resolution limits the number of distinct sources one can identify, crudely given by $`𝒩(E)\approx 4\pi /\pi \sigma _\theta ^2(E)`$, where $`\sigma _\theta (E)`$ is the energy-dependent angular resolution. When $`𝒩(>F_{min})>𝒩(E)`$, individual sources are no longer separable from the EGRB. With GLAST’s proposed angular resolution function (GLAST Science Document), the equality $`𝒩(>F_{min})=𝒩(E)`$ occurs at $`E`$ somewhat less than 1 GeV; thus for $`E<1`$ GeV there is not as much of a reduction in the level of the EGRB (compared to EGRET) as for energies $`E>1`$ GeV.
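The crossing point can be estimated with a toy PSF model. In the sketch below the resolution function $`\sigma _\theta (E)=3.5^{\circ }(E/1\,\mathrm{GeV})^{-0.8}`$ is an assumption made for illustration only (it is not the actual GLAST response); with it, the confusion condition is met somewhat below 1 GeV, in line with the statement above.

```python
import numpy as np

N_src = 1.0e3                                   # ~number of detectable blazars
E = np.logspace(-1.0, 1.0, 400)                 # GeV
sigma = np.radians(3.5 * (E / 1.0) ** -0.8)     # assumed PSF width (radians)
N_res = 4.0 / sigma ** 2                        # N(E) = 4*pi / (pi sigma^2)
E_conf = E[np.argmin(np.abs(N_res - N_src))]    # where N(>F_min) = N(E)
print("sources become confused below E = %.2f GeV" % E_conf)
```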
We conclude that GLAST can test the unresolved blazar background model in three ways: (1) GLAST should see roughly 2 orders of magnitude more blazars than EGRET because of its ability to detect the fainter blazars which contribute to the EGRB in our model. Moreover, GLAST can test our assumption of an average linear relation between $`\gamma `$-ray and radio flux and our assumed redshift distribution of blazars by testing the details of our source count versus flux prediction. (2) With GLAST’s improved PSS leading to more blazars being resolved out, fewer unresolved blazars will be left to contribute to the EGRB, thus reducing the level of the measured EGRB compared to EGRET’s. (3) GLAST’s much greater aperture at 100 GeV will allow a determination of whether or not a steepening exists in the EGRB, since the number of EGRB $`\gamma `$-rays recorded by GLAST above 100 GeV will be of order $`10^3`$ to $`10^4`$, assuming a continuation of the EGRET power law spectrum. However, this last test must be qualified because of the unknown amount of steepening in the high-redshift quasar spectra above 10 GeV, which can mimic an absorption effect.
References
Chiang, J. et al. 1995, ApJ 452, 156.
Dunlop, J.S. and Peacock, J.A. 1990, MNRAS 247, 19.
Fichtel, C.E., Simpson, G., & Thompson, D.J. 1978, ApJ 222, 833.
Mattox, J.R., et al. 1997, ApJ 481, 95.
Muecke, A., et al. 1996, A&AS 120, 541.
Padovani, P. et al. 1993, MNRAS 260, L21.
Salamon, M.H. and Stecker, F.W. 1998, ApJ 493, 547.
Sreekumar, P. et al. 1998, ApJ 494, 523.
Stecker, F.W., Salamon, M.H., & Malkan, M.A. 1993, ApJ 410, L71.
Stecker, F.W. and Salamon, M.H. 1996, ApJ 464, 600.
Thompson, D.J. et al. 1995, ApJS 101, 259.
# KEK-TH-645 RIKEN BNL Research Center preprint SU(4) pure-gauge phase structure and string tensions. Talk presented by SO. The authors thank RIKEN, Brookhaven National Laboratory and the U.S. Department of Energy for providing the facilities essential for the completion of this work. SO thanks the RIKEN BNL Research Center for its hospitality.
## Abstract
We present numerical evidence that the SU(4) pure-gauge dynamics has a finite-temperature first-order phase transition. For a $`6\times 20^3`$ lattice, this transition occurs at the inverse-square coupling of $`8/g^2\approx 10.79`$. Below this and above the known bulk phase transition at $`8/g^2\approx 10.2`$ is a confined phase in which we find two different string tensions, one between the fundamental $`\mathrm{𝟒}`$ and $`\overline{\mathrm{𝟒}}`$ representations and the other between the self-dual diquark $`\mathrm{𝟔}`$ representations. The ratio of these two is about 1.5. The correlation in the adjoint representation suggests no string forms between adjoint charges.
There are renewed interests in SU($`N_c`$) pure Yang-Mills theory with large $`N_c`$:
1) Finite-temperature phase structure of quantum chromodynamics (QCD) would be easier to understand if the SU($`N_c`$) pure Yang-Mills system has a second-order phase transition for $`N_c\geq 4`$ . With standard large-$`N_c`$ analysis where $`N_cg^2`$ is held fixed, the Z($`N_c`$) deconfinement transition occurs at $`T_d\sim O(1)`$, separating a confining phase with free energy $`F\sim O(1)`$ and a deconfining phase with $`F\sim O(N_c^2)`$. The deconfining temperature $`T_d\sim O(1)`$ is not affected if $`N_f`$ and $`g^2N_c`$ are held fixed and $`N_c\to \infty `$. If the transition is first order, it is not affected either. So large $`N_c`$ is not a reasonable guide for the $`T\neq 0`$ QCD phase structure with the Columbia phase diagram , unless SU($`N_c`$) pure Yang-Mills dynamics has a second-order deconfining phase transition for all $`N_c\geq 4`$.
2) New developments in M/string theory predict such things as glueball spectrum at large $`N_c`$ and large $`g^2`$ or ratio between different string tensions for $`N_c\geq 4`$ .
3) The dimensionless ratio $`T_d/\sqrt{\sigma }`$ of the deconfining temperature $`T_d`$ and string tension $`\sigma `$ is expected to be independent of $`N_c`$ with a value $`\sqrt{3/\pi (D-2)}`$, with $`D`$ being the space-time dimensions .
Here we report the results of our numerical investigation on the order of the deconfining phase transition and the ratio of string tensions for $`N_c=4`$ . We use the single-plaquette action defined in the fundamental $`\mathrm{𝟒}`$-representation of the SU(4) gauge group. Combinations of pseudo-heatbath or Metropolis and over-relaxation algorithms are used in updating 4, 6 and 8 $`\times 8^3`$, $`12^3`$, $`16^3`$ or $`20^3`$ lattices. Various workstations are used for the numerical calculations, while migration to the RIKEN BNL QCDSP mother boards is planned. We look at the following observables: plaquette, Polyakov loops, $`L(\vec{x})=(1/N_c)\mathrm{tr}\prod _{t=1}^{L_t}U(\vec{x},t;\widehat{t})`$, in $`\mathrm{𝟒}`$ (fundamental), $`\mathrm{𝟔}`$ (antisymmetric diquark), $`\mathrm{𝟏𝟎}`$ (symmetric diquark) and $`\mathrm{𝟏𝟓}`$ (adjoint) representations, deconfinement fraction, and Polyakov loop correlation $`\langle L(\vec{0})L(\vec{r})^{\ast }\rangle \sim \mathrm{exp}(-F(r)/T)\sim r^{-1}\mathrm{exp}(-L_t\sigma r)=\mathrm{exp}(-L_t\sigma r-\mathrm{ln}r)`$ in $`\mathrm{𝟒}`$, $`\mathrm{𝟔}`$, $`\mathrm{𝟏𝟎}`$ and $`\mathrm{𝟏𝟓}`$ representations.
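For orientation, extracting a string tension from such correlators amounts to a two-parameter fit. The sketch below fits synthetic correlator data, made up to resemble the $`L_t=6`$ numbers quoted later, to $`\langle L(\vec{0})L(\vec{r})^{\ast }\rangle =A\,r^{-1}\mathrm{exp}(-L_t\sigma r)`$; it is not the analysis code actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

Lt = 6
r = np.arange(2.0, 9.0)
rng = np.random.default_rng(4)
# Synthetic correlator with sigma ~ 0.068 plus 2% noise (illustrative values).
corr = 0.5 / r * np.exp(-Lt * 0.068 * r) * (1 + 0.02 * rng.standard_normal(r.size))

def model(r, A, sigma):
    # <L(0) L(r)*> ~ A r^-1 exp(-L_t sigma r)
    return A / r * np.exp(-Lt * sigma * r)

(A, sigma), cov = curve_fit(model, r, corr, p0=(1.0, 0.1))
print("fitted sigma =", sigma, "+/-", np.sqrt(cov[1, 1]))
```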
This SU(4) pure Yang-Mills system is known to have a bulk phase transition near $`\beta =8/g^2\approx 10.2`$, separating two confining phases : across this transition the plaquette jumps discontinuously but the average Polyakov line in the fundamental $`\mathrm{𝟒}`$ representation remains zero on both sides. However, if the lattice extent in the temperature direction $`L_t`$ is too small, this bulk transition drives a first-order finite-temperature deconfining transition just above itself in $`\beta `$ . We confirmed this is indeed the case for $`L_t=4`$. But for $`L_t\geq 6`$ we have good enough separation between the bulk and finite-temperature transitions as desired.
As is shown in Figure 1, on a $`6\times 20^3`$ lattice we confirmed coexistence of confined and deconfined phases at the coupling $`\beta `$=10.79:
This strongly suggests a first-order deconfining phase transition. Work in progress confirms this phase coexistence as we extend the simulation from the present 3500 evolution steps (1 evolution = 5 heat bath + 1 over relaxation steps) to 20000 steps. We plan further study with finite-size scaling.
String tensions in the SU($`N_c`$) pure Yang-Mills system are classified by the center Z($`N_c`$) $`N_c`$-ality. With $`N_c=4`$, the fundamental ($`\mathrm{𝟒}`$) charge has 4-ality $`k=1`$, the two diquark ($`\mathrm{𝟔}`$ and $`\mathrm{𝟏𝟎}`$) charges $`k=2`$, and adjoint ($`\mathrm{𝟏𝟓}`$) $`k=0`$. The string tensions between these charges and their anticharges are predicted to behave as $`\sigma _k`$ $`\propto `$ $`\mathrm{min}\{k,N_c-k\}`$ by a standard strong-coupling analysis, $`k(N_c-k)`$ by another strong coupling analysis , and $`\mathrm{sin}(k\pi /N_c)`$ by a SUSY strong coupling analysis . Generally the ratio $`\sigma _k/\sigma _1`$ should fall in the interval $`1\leq \sigma _k/\sigma _1\leq 2`$ . Note also that $`N_c=4`$ is the first example with different string tensions: in the SU(3) pure Yang-Mills system the fundamental ($`\mathrm{𝟑}`$) and the symmetric diquark ($`\mathrm{𝟔}`$) tensions are the same .
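The three scalings give concretely different predictions for $`\sigma _2/\sigma _1`$ at $`N_c=4`$ (2, 4/3 and $`\sqrt{2}`$, respectively), as the following few lines verify; the measured ratio of about 1.5 is the discriminating number.

```python
import numpy as np

Nc = 4
predictions = {
    "min{k, Nc-k}": lambda k: min(k, Nc - k),
    "k(Nc-k)":      lambda k: k * (Nc - k),
    "sin(k pi/Nc)": lambda k: np.sin(k * np.pi / Nc),
}
for name, f in predictions.items():
    print(name, "sigma_2/sigma_1 =", f(2) / f(1))
```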
In our numerical calculation on a $`6\times 16^3`$ lattice at $`\beta =10.70`$ (see Figure 2):
we find a clear difference between $`\mathrm{𝟒}`$- and $`\mathrm{𝟔}`$-string tensions extracted from $`\mathrm{𝟒}`$- and $`\mathrm{𝟔}`$-Polyakov loop correlations. From fitting these data we have $`\sigma _4`$=0.068(4) and $`\sigma _6`$=0.108(17). At a stronger coupling of $`\beta `$=10.65 their values are 0.086(3) and 0.142(57) respectively. Thus their ratio $`\sigma _6/\sigma _4`$ does not show much temperature dependence and falls in the interval (1,2) as it should. We have yet to see any signal for $`\mathrm{𝟏𝟎}`$ and $`\mathrm{𝟏𝟓}`$ representations from this lattice, probably because of too strong couplings. On the other hand at a weaker coupling of $`\beta `$=10.85 on a smaller lattice of $`8\times 12^3`$ (see Figure 3)
we find clear flattening of the adjoint ($`\mathrm{𝟏𝟓}`$) correlation.
For thermodynamics of SU(4) pure-gauge theory we confirmed that the bulk transition and $`T\neq 0`$ phase change are separated on $`L_t\geq 6`$ lattices, the bulk transition is at $`\beta _b\approx 10.2`$ and the finite-temperature 1st-order phase transition is at $`\beta _d\approx 10.79`$ ($`L_t=6`$) and $`\approx 10.9`$ ($`L_t=8`$), and easier to establish than the weakly first-order SU(3) transition. For string tensions, at $`L_t`$=6 we find signals for different string tensions in fundamental $`\mathrm{𝟒}`$ and antisymmetric $`\mathrm{𝟔}`$ representations, but no signal yet for symmetric diquark $`\mathrm{𝟏𝟎}`$ and adjoint $`\mathrm{𝟏𝟓}`$ representations. These strings satisfy a relation, $`1<\sigma _6/\sigma _4<2,`$ as they should, and the ratio does not show any strong temperature dependence. Combining these findings for thermodynamics and string tensions at $`L_t`$=6, we find an inequality: $`T_d/\sqrt{\sigma _4(T=0)}`$ $`<`$ $`T_d/\sqrt{\sigma _4(T\approx T_d)}`$ $`\approx `$ 0.64 $`<`$ $`\sqrt{3/\pi (D-2)}`$, just like in SU(2) and SU(3) pure-gauge results. At a weaker coupling of $`\beta =10.85`$ using an $`L_t`$=8 lattice we now have rather good evidence that there exists no string in the adjoint ($`\mathrm{𝟏𝟓}`$) representation. We plan further investigation on larger and finer lattices, probably using smaller partitions of the QCDSP parallel supercomputer at RIKEN BNL Research Center.
# Lagrangian torus fibration of quintic Calabi-Yau hypersurfaces III: Symplectic topological SYZ mirror construction for general quintics
## 1 Introduction
This paper is a sequel of . The motivation of studying Lagrangian torus fibrations of Calabi-Yau manifolds comes from Strominger-Yau-Zaslow’s proposed approach toward mirror symmetry (). According to their proposal, on each Calabi-Yau manifold, there should be a special Lagrangian torus fibration. This conjectural special Lagrangian torus fibration structure of Calabi-Yau manifolds is further used to give a possible geometric explanation of the mirror symmetry conjecture. Despite its great potential in solving the mirror symmetry conjecture, there are very few known examples of special Lagrangian submanifolds or special Lagrangian fibrations for dimension $`n\geq 3`$. Given our lack of knowledge for special Lagrangians, one may consider relaxing the requirement and consider Lagrangian fibrations, which are largely unexplored and interesting in their own right. For many applications to mirror symmetry, especially those concerning (symplectic) topological structure of fibrations, Lagrangian fibrations will provide quite sufficient information. In this paper, as in the previous two papers (, ), we will mainly be concerned with Lagrangian torus fibrations of Calabi-Yau hypersurfaces in toric varieties, namely, the symplectic topological aspect of SYZ mirror construction.
In this paper, we will construct Lagrangian torus fibrations of generic quintic Calabi-Yau hypersurfaces and their mirror manifolds in complete generality. With the detailed understanding of Lagrangian torus fibrations, we will be able to prove the symplectic topological SYZ mirror conjecture for Calabi-Yau quintic hypersurfaces in $`CP^4`$. More precisely,
###### Theorem 1.1
For generic quintic Calabi-Yau hypersurface $`X`$ near the large complex limit, and its mirror Calabi-Yau manifold $`Y`$ near the large radius limit, there exist corresponding Lagrangian torus fibrations
$$\begin{array}{ccccccc}X_{s(b)}&\subset &X&\phantom{xxxx}&Y_b&\subset &Y\\ & &\downarrow & & & &\downarrow \\ & &\mathrm{\Delta }& & & &\mathrm{\Delta }_w\end{array}$$
with singular locus $`\mathrm{\Gamma }\subset \mathrm{\Delta }`$ and $`\mathrm{\Gamma }^{\prime }\subset \mathrm{\Delta }_w`$, where $`s:\mathrm{\Delta }_w\to \mathrm{\Delta }`$ is a natural homeomorphism, $`s(\mathrm{\Gamma }^{\prime })=\mathrm{\Gamma }`$. For $`b\in \mathrm{\Delta }_w\backslash \mathrm{\Gamma }^{\prime }`$, the corresponding fibers $`X_{s(b)}`$ and $`Y_b`$ are naturally dual to each other.
This theorem will be proved in more precise form as in theorem 6.2.
Remark: It is important to point out that our purpose is not merely to construct a Lagrangian fibration for Calabi-Yau manifold. We want to construct the Lagrangian fibration corresponding to the Kähler and the complex moduli of the Calabi-Yau manifold. Namely, for Calabi-Yau manifolds with different Kähler and complex structures, the structures of the corresponding Lagrangian fibration (singular set, singular locus, singular fibers, etc.) should be different and reflect the corresponding Kähler and complex structures.
The gradient flow approach we developed in is ideal for this purpose. In addition to its clear advantage of being able to naturally produce Lagrangian fibrations, for general Calabi-Yau hypersurfaces, the gradient flow will naturally produce a rather “canonical” family of Lagrangian torus fibrations continuously depending on the moduli of complex and Kähler structures of the Calabi-Yau hypersurface. Some part of the fibration structure (singular set, singular locus, singular fibers, etc.) will depend on the complex structure, and some other part will depend on the Kähler structure. It is important to understand the dependence precisely. For general Calabi-Yau hypersurfaces, the dependence on complex and Kähler structure is somewhat mixed. For quintics, the complex moduli has 101 dimensions and the Kähler moduli has only 1 dimension, so the fibration structures mainly depend on the complex moduli. On the other side, for the mirrors of quintics, the Kähler moduli has 101 dimensions and the complex moduli has only 1 dimension, so the fibration structures mainly depend on the Kähler moduli. Therefore, it is worthwhile to first discuss the case of quintics and their mirrors in detail, where the dependence can be seen more clearly. This will help us better understand the dependence of the fibration structures on complex and Kähler moduli for more general cases. Take singular locus as an example. According to the discussion in section 2, the structure of singular locus for the quintics is largely a result of different “string diagram” structures of the singular set curves near the large complex limit. According to the discussion in section 4, the structure of singular locus for the mirrors of quintics is largely a result of the combinatorial structures of different crepant resolutions of singularities. For more general cases that we will discuss in , both phenomena will come into effect together in forming the singular locus.
As a byproduct, the detailed understanding of the Lagrangian torus fibration structures also provides better understanding of the mirror symmetry inspired partial compactification of the complex moduli of Calabi-Yau manifolds. As is well known, for the sake of mirror symmetry, the complex moduli of Calabi-Yau hypersurfaces in toric varieties near the large complex limit should be partially compactified according to the secondary fan (see , , etc.). Authors of also proposed a related chamber decomposition of the complex moduli of Calabi-Yau hypersurfaces near the large complex limit so that the chambers under the monomial-divisor mirror map correspond to the Kähler cones of the different birationally equivalent models of the mirror, which make up the top dimensional cones of the secondary fan. To our knowledge, so far there is no satisfactory intrinsic geometric explanation of this chamber decomposition nor convincing direct reasoning (without going to the Kähler moduli of the mirror) why secondary fan compactification should be the suitable partial compactification for mirror symmetry purpose. Our construction of Lagrangian torus fibrations as application gives an intrinsic geometric explanation of this chamber decomposition, consequently provides direct reasoning for the secondary fan compactification. More precisely, the Calabi-Yau manifolds in the same chamber near the large complex limit are exactly those whose Lagrangian torus fibrations have the graph singular locus with the same combinatorial type. This interpretation (discussed in section 3.1) can potentially be generalized to determine the chamber decomposition and partial compactification of the complex moduli near the large complex limit for more general Calabi-Yau manifolds in non-toric situations.
Next we give a brief review of our work in the first two papers .
Our work on Lagrangian torus fibrations of Calabi-Yau manifolds starts with . In , we described a very simple and natural construction of Lagrangian torus fibrations via gradient flow based on a natural Lagrangian torus fibration of the large complex limit. This method in principle will produce Lagrangian torus fibrations for general Calabi-Yau hypersurfaces in toric varieties. For simplicity, we described the case of the Fermat type quintic Calabi-Yau threefold family $`\{X_\psi \}`$ in $`CP^4`$ defined by:
$$p_\psi =\sum _{k=1}^{5}z_k^5-5\psi \prod _{k=1}^{5}z_k=0$$
near the large complex limit $`X_{\infty }`$
$$p_{\infty }=\prod _{k=1}^{5}z_k=0$$
in great detail. Most of the essential features of the gradient flow for the general cases already show up there. One crucial discovery in is that although the Lagrangian torus fibration of $`X_{\infty }`$ has many lower-dimensional torus fibers, the gradient flow will automatically yield a Lagrangian torus fibration for smooth $`X_\psi `$ with 3-dimensional fibers everywhere and singular fibers clearly located. We also discussed the “expected” structure of the special Lagrangian torus fibration, particularly we computed the monodromy transformations of the “expected” special Lagrangian fibration and discussed the “expected” singular fiber structures implied by the monodromy information. Then we compared our Lagrangian fibration constructed via gradient flow with the “expected” special Lagrangian fibration and noted the differences. Finally, we discussed its relevance to mirror construction for Calabi-Yau hypersurfaces in toric varieties.
The so-called “expected” special Lagrangian fibrations first used in refer to the generally expected structure of the special Lagrangian torus fibrations of Calabi-Yau manifolds at the time of , based on our knowledge of the elliptic fibrations of K3 surfaces corresponding to SYZ in dimension 2. More precisely, the special Lagrangian fibration was expected to be $`C^{\infty }`$, which necessarily has codimension 2 singular locus. The codimension 2 singular locus condition is crucial for the SYZ conjecture as originally proposed to be valid for special Lagrangian fibrations. The Lagrangian fibrations we constructed in , with codimension 1 singular locus and different singular fibers, do not have exactly the “expected” structure of a $`C^{\infty }`$ special Lagrangian fibration. Recent examples of D. Joyce indicate that the structure of the actual special Lagrangian fibrations are probably not as expected, instead, they probably resemble more closely the structure of the natural Lagrangian fibrations we constructed in with codimension 1 singular locus. (This suggests that the singular loci of the special Lagrangian fibrations for a Calabi-Yau manifold and its mirror are probably different, based on the structure of codimension 1 singular loci of our Lagrangian fibrations for quintic Calabi-Yau and its mirror, which will be discussed in detail in this paper. Consequently, the original SYZ conjecture for special Lagrangian fibrations needs to be modified.) This recent development makes the Lagrangian torus fibrations with codimension 1 singular locus potentially more important as a starting point to deform to the actual special Lagrangian fibration. Nevertheless, it is still interesting and important to construct and study Lagrangian torus fibrations with codimension 2 singular locus, because firstly we can show the symplectic SYZ mirror conjecture (as precisely formulated in section 6) to be valid for this kind of fibrations, secondly they have many good properties and are very useful for topological computations, and also hold independent interest from a symplectic geometric point of view. In this paper, the term “expected” Lagrangian torus fibrations will more narrowly refer to the topological structure of Lagrangian torus fibrations with codimension 2 topological singular locus.
From the above discussion, special Lagrangian fibrations in SYZ construction are likely to be non-smooth. Coincidentally, our gradient flow approach naturally produces piecewise smooth (Lipschitz) Lagrangian fibrations. However, in our opinion, $`C^{\infty }`$ Lagrangian fibrations should still play an important role for the symplectic topological aspect of SYZ construction. This is indicated by a somewhat surprising fact discussed in , that a general piecewise smooth (Lipschitz) Lagrangian fibration can not be smoothed to a $`C^{\infty }`$ Lagrangian fibration by small perturbation. More precisely, the singular locus of a general piecewise smooth (Lipschitz) Lagrangian fibration is usually of codimension 1 while the singular locus of the corresponding $`C^{\infty }`$ Lagrangian fibration necessarily has codimension 2. For this reason, it is desirable to try to find out how smooth one can make the naturally constructed piecewise smooth (Lipschitz) Lagrangian fibrations. When the smoothing can be achieved, we get a nice fibration with well behaved singular locus and singular fibers. When the smoothing can only be partially achieved, the obstruction will give us better understanding of the symplectic topology of the Calabi-Yau manifold and shed light on the reason why the special Lagrangian fibration can not be smooth in general. In , we constructed many local examples of piecewise smooth (Lipschitz) Lagrangian fibrations that are not $`C^{\infty }`$. We also discussed methods to squeeze the codimension 1 singular locus of our piecewise smooth (Lipschitz) Lagrangian fibration into codimension 2 by symplectic geometry techniques. This can be viewed as the first step (topological modification) toward the smoothing of our Lagrangian torus fibration. Partial smoothing (analytical modification) of such Lagrangian fibration with codimension 2 singular locus will be discussed in .
For the understanding of the topological aspect of SYZ, a construction of purely topological (non-Lagrangian) torus fibration can also be of interest. Our method clearly can be easily modified to construct non-Lagrangian torus fibrations with much less technical difficulty and better smoothness. We choose to construct more difficult Lagrangian fibrations because first of all our gradient flow naturally produces Lagrangian fibrations. Secondly, Lagrangian fibrations could impose strong constraint on the topological types of singular fibers (see section 2 of ). The very difficulty involved in smoothing the Lagrangian fibrations can be taken as an indication that the actual special Lagrangian fibrations may not be smooth in general ( also indicated by recent examples of D. Joyce ). Non-Lagrangian fibrations, which can easily be smoothed, would not give us such insight. In light of all these, Lagrangian fibrations seem to be a good compromise between the very rigid special Lagrangian fibrations and general non-Lagrangian torus fibrations, which lack control of singular fibers.
In we provided the technical details for the gradient flow construction. The flow we use to produce Lagrangian fibration is the so-called normalized gradient flow. Gradient flows of smooth functions with non-degenerate critical points have been used intensively in the mathematical literature, for example in Morse theory. But the gradient flow in our situation is very unconventional. The critical points of our function are usually highly degenerate and often non-isolated. Worst of all our function is not even smooth (it has infinities along some subvarieties). In , we computed local models for the singular vector fields, then proved a structural stability for the gradient vector fields we use, which ensures that the gradient flow would behave as we expect. Other technical aspects in include the deformation of symplectic manifold and symplectic submanifold structures, the construction of toroidal Kähler metrics, and the symplectic deformation of the Lagrangian torus fibration with codimension 1 singular locus into a Lagrangian torus fibration with codimension 2 singular locus.
In the following we briefly discuss the contents of this paper.
As we know, the mirror symmetry conjecture first of all amounts to identifying the complex moduli of a Calabi-Yau manifold with the complexified Kähler moduli of its mirror manifold, namely, specifying the mirror map. Then it further concludes that the quantum geometry of a Calabi-Yau manifold is equivalent to the quantum geometry of its mirror partner. Despite the important role of the Fermat type quintic family in the history of mirror symmetry, and the nice features of Lagrangian fibrations and singular fibers of this family as discussed in and , the Fermat type quintic Calabi-Yau family is a highly symmetric and very special type of Calabi-Yau manifolds. In fact, this family is located at the boundary of the moduli space of quintics. Their mirror manifolds are highly degenerate Calabi-Yau orbifolds that are located at the boundary of the mirror Kähler cone. To understand mirror symmetry for quintic Calabi-Yau threefolds, it is more important to understand Lagrangian torus fibrations of generic quintic Calabi-Yau hypersurfaces in $`CP^4`$ and their mirrors, which we will discuss in this paper.
In Section 2 we construct Lagrangian torus fibrations for generic quintics, first with codimension 1 singular locus, then with codimension 2 singular locus. The construction of Lagrangian torus fibrations with codimension 1 singular locus is automatic using the gradient flow method, just as for the Fermat quintics, and works the same way even for more general situations. The main difficulties come from understanding the structure of the resulting codimension 1 singular locus and squeezing the codimension 1 singular locus into codimension 2. For Fermat type quintics, the codimension 1 singular locus can be easily seen as a fattening of a graph, so we can always modify the Lagrangian fibrations with codimension 1 singular locus to the expected topological type. It seems hopeless to do the same for general quintics due to the generally badly behaved singular locus. It turns out that for general quintics near the large complex limit (in a suitable sense), the codimension 1 singular locus is much better behaved, and we can do the same thing as in the Fermat type Calabi-Yau case to get the expected Lagrangian torus fibrations.
More precisely, let $`F:X\to S^3`$ be the Lagrangian torus fibration of a quintic constructed by the gradient flow. The singular set $`C\subset X`$ is a complex curve (more precisely, $`C=\mathrm{Sing}(X_{\infty })\cap X`$). The singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$ is usually a 2-dimensional object (called amoeba in ), which in general can be rather chaotic. Miraculously, when $`X`$ is near the large complex limit (in a suitable sense defined in section 2), according to results in our paper , the singular locus $`\stackrel{~}{\mathrm{\Gamma }}`$ is actually a fattening of some 1-dimensional graph $`\mathrm{\Gamma }`$. With this fact and the corresponding squeezing result in , the general methods developed in and will enable us to construct Lagrangian torus fibrations with codimension 2 graph singular locus for generic quintics near the large complex limit.
In section 3 we discuss the complex moduli for the quintics and the Kähler moduli for their mirror manifolds. In subsection 3.1, we provide a direct geometric reconstruction of the secondary fan compactification of the complex moduli for the quintics based on our construction of Lagrangian torus fibrations for Calabi-Yau quintics from section 2. The discussion of the SYZ mirror construction needs an identification of the complex moduli of a Calabi-Yau near the large complex limit with the complexified Kähler moduli of the mirror Calabi-Yau to start with. According to the work of Aspinwall-Greene-Morrison in , for Calabi-Yau hypersurfaces in toric varieties, it is more natural to consider the monomial-divisor mirror map instead of the actual mirror map, which is a higher order perturbation of the monomial-divisor mirror map. Our symplectic topological SYZ construction will be based on the monomial-divisor mirror map. Results in do not directly apply to our case. By using a slicing theorem we remove some restrictions in and construct the monomial-divisor mirror map for quintic Calabi-Yau hypersurfaces in the form we need based on .
Lagrangian torus fibrations of the mirrors of quintics are constructed in section 4. As proposed by Greene, Plesser and others (see ), the mirror manifolds of quintics can be understood as crepant resolutions of orbifold quotients of Fermat type quintics. The Lagrangian torus fibration structures of the mirrors of quintics mostly depend on the Kähler moduli, which can be reduced to the classical knowledge of resolution of singularities. Indeed, most of the singular locus of our natural Lagrangian torus fibration for the mirror of a quintic with codimension 1 singular locus is actually of codimension 2, which has a nice interpretation in terms of resolution of singularities. The remaining codimension 1 part of the singular locus is a disjoint union of finitely many fattened “Y”. The discussion here on squeezing to codimension 2 singular locus is a bit tricky, because locally the Kähler metric may not be Fubini-Study. Besides the discussion of the construction of Lagrangian torus fibrations for the mirrors of quintics with codimension 1 and codimension 2 singular locus (in subsections 5.1 and 5.3), we also discuss the construction of non-Lagrangian torus fibrations with codimension 2 singular locus (in subsection 5.2), which is much easier.
The actual symplectic topological SYZ construction is worked out in section 5. We first identify the bases and the singular locus of the two fibrations, then we establish the duality of the regular fibers. In the process, we also find a very simple way to compute the monodromy of the Lagrangian torus fibrations. In section 6, certain classes of generic singular fibers of Lagrangian torus fibrations are discussed and a proposal of SYZ mirror duality for such generic singular fibers is presented.
The gradient flow method and the construction of Lagrangian torus fibrations, slicing theorem, monomial-divisor map, construction of singular locus, identification of base spaces, duality relation of the fibers and monodromy computation, etc., can all be generalized to the general situation of Calabi-Yau hypersurfaces in a toric variety corresponding to a reflexive polyhedron based on the methods and constructions we developed in , , and in this paper. These generalizations and the symplectic topological SYZ mirror conjecture for general generic Calabi-Yau hypersurfaces in a toric variety corresponding to a reflexive polyhedron is proved in complete generality in . In , our construction is further generalized to Calabi-Yau complete intersections in toric varieties.
There is an earlier work by Zharkov () on the construction of certain non-Lagrangian torus fibrations of Calabi-Yau hypersurfaces in toric variety. The work of Leung and Vafa () from physics point of view, touched upon several important ideas related to our work. There is also the work of M. Gross , which appeared on the internet at around the same time as this paper, where certain non-Lagrangian torus fibrations for quintic Calabi-Yau are constructed based on the information of the torus fibration structure for the mirror of quintic and with the help of C.T.C. Wall’s existence theorem. One major difference between our work and these other results is that our torus fibrations are naturally Lagrangian fibrations.
Note on notation: Unless otherwise specified, Lagrangian fibration in this paper only refers to piecewise smooth (Lipschitz) Lagrangian fibration. The notion of convex in this paper is sometimes non-standard. In certain cases, it probably would be called concave in conventional terms. For precise definitions, please refer to the respective sections. Another thing is that we usually use $`\mathrm{\Delta }`$ to denote the set of integral points in a Newton polyhedron, but sometimes we use $`\mathrm{\Delta }`$ to denote the corresponding real polyhedron.
## 2 Lagrangian torus fibrations of general quintic Calabi-Yau threefolds
In , , we constructed Lagrangian torus fibrations for the Fermat type quintic Calabi-Yau family $`\{X_\psi \}`$ in $`CP^4`$ by a flow along vector fields. It is easy to observe that the same construction will also produce Lagrangian torus fibrations for general quintics. Let
$$z^m=\prod _{k=1}^{5}z_k^{m_k},\ \ |m|=\sum _{k=1}^{5}m_k,\ \ \text{for }m=(m_1,m_2,m_3,m_4,m_5)\in \mathbb{Z}_{\geq 0}^5.$$
Then a general quintic homogeneous polynomial can be written as
$$p(z)=\underset{|m|=5}{}a_mz^m.$$
Let $`m_0=(1,1,1,1,1)`$, and denote $`a_{m_0}=\psi `$. Consider the quintic Calabi-Yau family $`\{X_\psi \}`$ in $`CP^4`$ defined by
$$p_\psi (z)=p_a(z)+\psi \prod _{k=1}^{5}z_k=\sum _{m\neq m_0,|m|=5}a_mz^m+\psi \prod _{k=1}^{5}z_k=0.$$
When $`\psi `$ approaches $`\infty `$, the family approaches its “large complex limit” $`X_{\infty }`$ defined by
$$p_{\infty }=\prod _{k=1}^{5}z_k=0.$$
Consider the meromorphic function
$$s=\frac{p_{\infty }(z)}{p_a(z)}$$
defined on $`CP^4`$. Let $`\omega `$ denote the Kähler form of a Kähler metric $`g`$ on $`CP^4`$, and $`\nabla f`$ denote the gradient vector field of the real function $`f=Re(s)`$ with respect to the Kähler metric $`g`$. As in , we will similarly use the flow of the normalized gradient vector field $`V=\frac{\nabla f}{|\nabla f|^2}`$ to construct Lagrangian torus fibration for $`X_\psi `$ based on Lagrangian torus fibration of $`X_{\infty }`$.
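The defining property of the normalized gradient flow is that $`df(V)=1`$, so flowing for time $`t`$ carries the level set $`\{f=c\}`$ to $`\{f=c+t\}`$; this is what transports the fibration structure from one level set to another. A toy numerical illustration in $`\mathbb{R}^2`$, with a stand-in function in place of $`Re(s)`$, is sketched below.

```python
import numpy as np

def f(x):       return x[0]**2 - x[1]**2      # stand-in for Re(s)
def grad_f(x):  return np.array([2*x[0], -2*x[1]])

# Euler integration of the normalized gradient flow V = grad f / |grad f|^2;
# f increases at unit rate along the flow, so f(x(t)) ~ f(x(0)) + t.
x, dt = np.array([1.0, 0.5]), 1e-3
for _ in range(1000):                          # total flow time t = 1
    g = grad_f(x)
    x = x + dt * g / np.dot(g, g)
print(f(x))                                    # ~ 0.75 + 1.0 = 1.75
```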
From our experience on Fermat type quintic Calabi-Yau hypersurfaces, we know that the Lagrangian fibration we get by this flow usually has codimension 1 singular locus. Due to the highly symmetric nature of the Fermat type quintics, in it was relatively easy to figure out explicitly the “expected” Lagrangian fibration structure with codimension 2 singular locus and the corresponding singular fibers. In that case, the singular locus of the Lagrangian fibration with codimension 1 singular locus is a fattened version of the codimension 2 singular locus of the “expected” Lagrangian fibration. Therefore, even without perturbing to the Lagrangian fibration with codimension 2 singular locus, we could already compute the monodromy. A rather magical explicit symplectic deformation construction (discussed in section 9 of ) was used to deform the Lagrangian torus fibration with codimension 1 singular locus to one with codimension 2 singular locus.
The major difficulty in generalizing this program to general quintic Calabi-Yau threefolds is that for general quintics, the singular locus of the Lagrangian fibration for $`X_\psi `$ constructed from deforming the standard Lagrangian torus fibration of $`X_{\infty }`$ via the flow of $`V`$ can be fairly arbitrary and does not necessarily resemble the fattening of any “expected singular locus”. Worst of all, in the case of general quintic Calabi-Yau threefolds, there is no obvious guess as to what the “expected” codimension 2 singular locus should be and how it will vary when deforming in the complex moduli of the quintic Calabi-Yau. One clearly expects the “expected” singular locus to be some graph in $`\mathrm{\Delta }\cong S^3`$. But it seems to take some miracle (at least to me when I first dreamed about it) for a general singular set $`C`$ (which is an algebraic curve) to project to the singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$ that resembles a fattening of a graph.
Interestingly, a miracle happens here! It largely relies on a better understanding of what it means to be near the large complex limit. The large complex limit was a rather elusive concept from physics. From early works of , it is apparent that $`X_{\infty }`$ can be viewed in some sense as a representative of the large complex limit for quintic Calabi-Yau. (We will often refer to $`X_{\infty }`$ in this paper as the large complex limit in this narrow sense.) Later, authors of and others proposed that there are many large complex limits corresponding to a chamber decomposition of the complex moduli of quintic Calabi-Yau near $`X_{\infty }`$. (Each chamber under the monomial-divisor mirror map corresponds to the complexified Kähler cone of one of the birationally equivalent models of the mirror of quintic.) Each chamber characterizes a particular way quintics get near $`X_{\infty }`$. This can be interpreted equivalently from two different perspectives as either different ways quintics get near the same large complex limit or quintics approaching different large complex limits. (In this paper, we will more often adopt the first interpretation.) For our purpose, when quintics get deep into each such chamber and approach $`X_{\infty }`$, the singular locus of our fibration will resemble the fattening of a graph. For different chambers the corresponding graphs will be different. From this point of view, our discussion of the singular locus in terms of amoebas actually will give a very natural new way to reconstruct this chamber decomposition without going to the mirror, which will be discussed in more detail in section 3 when we talk about the complex moduli space of quintic.
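The two-dimensional analogue of this projection is the amoeba of a plane curve, which is easy to sample numerically. In the sketch below the polynomial and the large coefficient $`t`$ are illustrative only; as $`t`$ grows (the analogue of moving deep into a chamber toward the large complex limit), the sampled amoeba visibly thins toward a one-dimensional graph.

```python
import numpy as np

rng = np.random.default_rng(2)
t = 1.0e3    # large coefficient, mimicking "near the large complex limit"

# Random points on the curve: sample z1 with log-uniform modulus, then
# solve p(z1, z2) = (1 + t z1) + (t + t z1 + z1^2) z2 = 0 for z2 exactly.
z1 = np.exp(rng.uniform(-12, 12, 100000) + 1j * rng.uniform(0, 2*np.pi, 100000))
z2 = -(1 + t * z1) / (t + t * z1 + z1**2)

# The amoeba is the image under (log|z1|, log|z2|).
amoeba = np.column_stack([np.log(np.abs(z1)), np.log(np.abs(z2))])
print(amoeba.shape)   # scatter-plot these points to see the thin graph
```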
Before getting into the details, let us recall the general result on the construction of Lagrangian torus fibration (theorem 8.1) in . In the following, we will rephrase this general theorem according to our special situation here. Assume that
$$X_{\mathrm{inv}}=X_\psi \cap X_{\infty },\ \ \text{in particular}\ \ C=X_\psi \cap \mathrm{Sing}(X_{\infty }).$$
For any subset $`I\subset \{1,2,3,4,5\}`$, let
$$D_I=\{z\in CP^4\,|\,z_i=0,\ z_j\neq 0,\ \text{for }i\in I,\ j\in \{1,2,3,4,5\}\backslash I\}.$$
In this section, we will always use $`I`$ to denote a subset of $`\{1,2,3,4,5\}`$. Let $`|I|`$ denote the cardinality of $`I`$; then
$$X_{\infty }=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ 0<\left|I\right|<5\end{array}}{\bigcup }D_I.$$
Let $`\mathrm{\Delta }`$ denote the standard 4-simplex, whose vertices are identified with $`\{1,2,3,4,5\}`$. We will use $`\mathrm{\Delta }_I`$ to denote the subface of $`\mathrm{\Delta }`$, whose vertices are not in $`I`$. Then we have
$$\mathrm{\Delta }=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ 0<\left|I\right|<5\end{array}}{\bigcup }\mathrm{\Delta }_I.$$
###### Definition 2.1
Assume that $`(M_1,\omega _1)`$ is a smooth symplectic manifold and $`(M_2,\omega _2)`$ is a symplectic variety. Then a piecewise smooth map $`H:M_1\to M_2`$ is called a symplectic morphism if $`H^{\ast }\omega _2=\omega _1`$. If $`(M_2,\omega _2)`$ is also a smooth symplectic manifold and $`H`$ is a diffeomorphism, then the symplectic morphism $`H`$ is also called a symplectomorphism.
Remark: In this paper, we will only deal with the case of normal crossing symplectic varieties. More specifically, in our case $`M_2=X_{\infty }`$ and $`\omega _2`$ is taken to be the restriction to $`X_{\infty }`$ of a symplectic form on $`\mathbb{CP}^4`$. Therefore, we will not venture into the concept of general symplectic varieties and symplectic forms on them.
###### Definition 2.2
When $`X`$ and $`B`$ are smooth, a fibration $`F:X\to B`$ is called topologically smooth if $`F`$ is locally a topological product. When $`X`$ and $`B`$ are stratified into unions of smooth strata
$$X=\underset{i}{\bigcup }X_i,\ \ B=\underset{i}{\bigcup }B_i,$$
such that $`F(X_i)=B_i`$, $`F:X\to B`$ is called topologically smooth (with respect to the stratifications) if $`F`$ is continuous and $`F|_{X_i}:X_i\to B_i`$ is topologically smooth for all $`i`$.
Notice that the topological smoothness of $`F`$ depends on the stratifications of $`X`$ and $`B`$. Specializing to our case of torus fibrations, a torus fibration $`F_{\infty }:X_{\infty }\to \partial \mathrm{\Delta }`$ is topologically smooth if for each $`I`$, $`\mathrm{\Delta }_I=F_{\infty }(D_I)`$ and $`F_{\infty }|_{D_I}:D_I\to \mathrm{\Delta }_I`$ is a topologically smooth fibration, whose fibers are necessarily $`(4-|I|)`$-dimensional tori. Using our notation, theorem 8.1 of can be rephrased in our situation as follows.
###### Theorem 2.1
Start with a topologically smooth Lagrangian torus fibration $`F_{\infty }:X_{\infty }\to \partial \mathrm{\Delta }`$. We can construct a symplectic morphism $`H_\psi :X_\psi \to X_{\infty }`$ such that $`F_\psi =F_{\infty }\circ H_\psi :X_\psi \to \partial \mathrm{\Delta }`$ is a Lagrangian torus fibration with singular set $`C=X_\psi \cap \mathrm{Sing}(X_{\infty })`$ and singular locus $`\mathrm{\Gamma }=F_{\infty }(C)`$. For $`b\notin \mathrm{\Gamma }`$, $`F_\psi ^{-1}(b)`$ is a real $`3`$-torus. For $`b\in \mathrm{\Gamma }`$, $`F_\psi ^{-1}(b)`$ is singular. For $`b\in \mathrm{\Gamma }\cap \mathrm{\Delta }_I`$, $`F_\psi ^{-1}(b)\cap C=F_{\infty }^{-1}(b)\cap C`$ and $`H_\psi :F_\psi ^{-1}(b)\backslash C\to F_{\infty }^{-1}(b)\backslash C`$ is a topologically smooth $`(|I|-1)`$-torus fibration.
$`\square `$
Remark: Since our map $`F_\psi `$ is not $`C^{\infty }`$, by saying a point is in the singular set of $`F_\psi `$, we mean that in a neighborhood of that point the fibration $`F_\psi `$ is not a topological product.
Remark on notation: We generally use $`\mathrm{\Gamma }`$ to denote the singular locus. When we discuss Lagrangian torus fibrations with codimension 1 singular locus and the related Lagrangian torus fibrations with codimension 2 singular locus together, we will use $`\stackrel{~}{\mathrm{\Gamma }}`$ to denote the codimension 1 singular locus and $`\mathrm{\Gamma }`$ to denote the codimension 2 singular locus.
From theorem 2.1, the construction of the Lagrangian fibration for $`X_\psi `$ reduces to the construction of the topologically smooth Lagrangian torus fibration $`F_{\infty }:X_{\infty }\to \partial \mathrm{\Delta }`$. The singular locus $`\mathrm{\Gamma }=F_{\infty }(C)`$ is determined by $`F_{\infty }`$ and $`C`$. The singular fibers are determined by the way the fibers of $`F_{\infty }`$ intersect $`C`$. For a fixed quintic family $`\{X_\psi \}`$, $`C`$ is fixed. Therefore, from now on our discussion will mainly focus on the construction of $`F_{\infty }`$ in various situations.
The most natural choice for $`F_{\infty }`$ is the restriction to $`X_{\infty }`$ of the moment map $`F_{\mathrm{FS}}:\mathbb{CP}^4\to \mathrm{\Delta }`$ with respect to the Fubini-Study Kähler form. The fibers of $`F_{\infty }=F_{\mathrm{FS}}|_{X_{\infty }}:X_{\infty }\to \partial \mathrm{\Delta }`$ are exactly the orbits of the real torus action. For this reason, we may use any toric Kähler form on $`\mathbb{CP}^4`$: they all give the same torus fibration for $`X_{\infty }`$, up to a reparametrization of the base. $`F_{\infty }`$ is clearly a topologically smooth Lagrangian torus fibration. By theorem 2.1, we can construct a Lagrangian torus fibration $`F_\psi :X_\psi \to \partial \mathrm{\Delta }`$ with singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)`$.
For $`I\subset \{1,2,3,4,5\}`$ with $`|I|=2`$, let
$$C_I=\{[z]\in \mathbb{CP}^4\mid p_a(z)=0,\ z_l=0\ \mathrm{for}\ l\in I\}.$$
$`C_I`$ is a genus 6 curve. We have
$$C=\mathrm{Sing}(X_{\infty })\cap X_\psi =\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }C_I.$$
The singular locus
$$\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }\stackrel{~}{\mathrm{\Gamma }}_I=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }F_{\infty }(C_I).$$
At this point, we have already constructed a Lagrangian fibration $`F_\psi :X_\psi \to \partial \mathrm{\Delta }`$ with codimension 1 singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)`$. To connect with the SYZ picture, especially to construct a Lagrangian torus fibration with graph singular locus, a more detailed understanding of the structure of the singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)`$ is needed.
What we need is to realize the singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)`$ as a fattening of some graph $`\mathrm{\Gamma }`$, and to explicitly construct a symplectic deformation that deforms the Lagrangian fibration $`F_{\infty }`$ to $`\widehat{F}`$ such that $`\widehat{F}(C)=\mathrm{\Gamma }`$. According to section 9 of , this is equivalent to symplectically deforming $`C`$ to a symplectic curve $`\widehat{C}`$ that satisfies $`F_{\infty }(\widehat{C})=\mathrm{\Gamma }`$. Since $`C`$ is reducible, and each irreducible component $`C_I`$ lies in $`\overline{D_I}\cong \mathbb{CP}^2`$, similarly to the argument in section 9 of , the problem can be localized to each $`\mathbb{CP}^2`$ and reduced to the following problem. We will temporarily use $`\mathrm{\Delta }`$ to denote a 2-simplex in the following.
Problem: Let $`F:\mathbb{CP}^2\to \mathrm{\Delta }`$ be the standard moment map with respect to a certain toric metric on $`\mathbb{CP}^2`$. We need to find quintic curves $`C`$ in $`\mathbb{CP}^2`$ such that $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$ is a fattening of some graph $`\mathrm{\Gamma }`$. We also want to explicitly construct a symplectic deformation of $`\mathbb{CP}^2`$ which deforms $`C`$ to a symplectic curve $`\widehat{C}`$ with $`F(\widehat{C})=\mathrm{\Gamma }`$.
Clearly, one cannot expect every quintic curve to have such nice properties. As we mentioned earlier, it turns out that when the quintic Calabi-Yau is generic and close to the large complex limit in a suitable sense, the corresponding quintic curves $`C_{ijk}`$ do have the properties described above. Curves of this kind, and more general situations, were discussed intensively in our paper . To describe the results from that paper, we first introduce some notation.
Consider $`\mathbb{CP}^2`$ with homogeneous coordinates $`[z_0,z_1,z_2]`$ and inhomogeneous coordinates $`(x_1,x_2)=(z_1/z_0,z_2/z_0)`$. Let
$$M=\{x^m=x_1^{m_1}x_2^{m_2}\mid m=(m_1,m_2)\in \mathbb{Z}^2\}\cong \mathbb{Z}^2.$$
Consider a general quintic polynomial
$$p(x)=\underset{|m|\le 5}{\sum }a_mx^m,$$
where $`|m|=m_1+m_2`$. Then the Newton polygon of quintic polynomials is
$$\mathrm{\Delta }_5=\{m\in M\mid m_1,m_2\ge 0,\ |m|\le 5\}.$$
Let $`w=(w_m)_{m\in \mathrm{\Delta }_5}`$ be a function on $`\mathrm{\Delta }_5`$ (regarding $`\mathrm{\Delta }_5`$ as an integral triangle in the lattice $`M`$).
###### Definition 2.3
$`w=(w_m)_{m\in \mathrm{\Delta }_5}`$ is called convex on $`\mathrm{\Delta }_5`$ if for any $`m^{\prime }\in \mathrm{\Delta }_5`$, there exists an affine function $`n`$ such that $`n(m^{\prime })=w_{m^{\prime }}`$ and $`n(m)\le w_m`$ for $`m\in \mathrm{\Delta }_5\backslash \{m^{\prime }\}`$.
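A simple example of such a weight function (ours, for illustration): $`w_m=|m|^2=m_1^2+m_2^2`$ is convex on $`\mathrm{\Delta }_5`$, since for each $`m^{\prime }`$ the affine function $`n(m)=2\langle m^{\prime },m\rangle -\langle m^{\prime },m^{\prime }\rangle `$ satisfies $`n(m^{\prime })=w_{m^{\prime }}`$ and $`w_m-n(m)=|m-m^{\prime }|^2>0`$ for $`m\ne m^{\prime }`$.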
We will always assume that $`w`$ is convex. With $`w`$ we can define the moment map
$$F_{t^w}(x)=\underset{m\in \mathrm{\Delta }_5}{\sum }\frac{|x^m|_{t^w}^2}{|x|_{t^w}^2}m,$$
which maps $`\mathbb{CP}^2`$ to $`\mathrm{\Delta }_5`$, where $`t>0`$ is a parameter and $`|x^m|_{t^w}^2=|t^{w_m}x^m|^2`$, $`|x|_{t^w}^2=\sum _{m\in \mathrm{\Delta }_5}|x^m|_{t^w}^2`$.
With this moment map, it is convenient to endow $`\mathbb{CP}^2`$ with the associated toric Kähler metric $`\omega _{t^w}`$. The Kähler potential of $`\omega _{t^w}`$ is $`\mathrm{log}|x|_{t^w}^2`$.
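The role of the weights can be read off directly from this formula (a simple limiting computation, ours): the coefficients $`|x^m|_{t^w}^2/|x|_{t^w}^2`$ are non-negative and sum to 1, so at a point $`x`$ where one term $`|x^{m^{\prime }}|_{t^w}`$ dominates all the others,
$$F_{t^w}(x)=m^{\prime }+\underset{m\ne m^{\prime }}{\sum }\frac{|x^m|_{t^w}^2}{|x|_{t^w}^2}(m-m^{\prime })\approx m^{\prime }.$$
As $`t\to 0`$ these dominance regions grow, which is what squeezes the image $`F_{t^w}(C)`$ of a curve $`C`$ toward a neighborhood of a graph.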
$`\mathrm{\Delta }_5`$ can also be viewed as a real triangle in $`M_{\mathbb{R}}=M\otimes \mathbb{R}`$. Then $`w=(w_m)_{m\in \mathrm{\Delta }_5}`$ defines a function on the integral points in $`\mathrm{\Delta }_5`$. If $`w`$ is convex, $`w`$ can be extended to a piecewise linear convex function on the real triangle $`\mathrm{\Delta }_5`$; we denote the extension also by $`w`$. A generic such $`w`$ will determine a simplicial decomposition of $`\mathrm{\Delta }_5`$, with 0-simplices the integral points in $`\mathrm{\Delta }_5`$. In this case, we say the piecewise linear convex function $`w`$ is compatible with the simplicial decomposition of $`\mathrm{\Delta }_5`$, and that the simplicial decomposition of $`\mathrm{\Delta }_5`$ is determined by $`w`$. Conversely, as pointed out to me by Yi Hu, not every simplicial decomposition of $`\mathrm{\Delta }_5`$ with 0-simplices the integral points in $`\mathrm{\Delta }_5`$ possesses a compatible piecewise linear convex function. We will restrict our discussion to those simplicial decompositions that possess compatible piecewise linear convex functions. Consider the barycentric subdivision of such a simplicial decomposition of $`\mathrm{\Delta }_5`$. Let $`\mathrm{\Gamma }_w`$ denote the union of the simplices in the barycentric subdivision that do not intersect the integral points in $`\mathrm{\Delta }_5`$. Then it is not hard to see that $`\mathrm{\Gamma }_w`$ is a one-dimensional graph. $`\mathrm{\Gamma }_w`$ divides $`\mathrm{\Delta }_5`$ into regions, with a unique integral point of $`\mathrm{\Delta }_5`$ at the center of each region. In particular, we can think of the regions as parametrized by the integral points in $`\mathrm{\Delta }_5`$.
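As a concrete check (our count): $`\mathrm{\Delta }_5`$ contains 21 integral points — the exponents of the 21 monomials of degree at most 5 in $`x_1,x_2`$ — of which 6 are interior, matching the genus 6 of a smooth plane quintic (cf. the curves $`C_I`$ above). Accordingly, $`\mathrm{\Gamma }_w`$ divides $`\mathrm{\Delta }_5`$ into 21 regions, one for each integral point.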
Let $`C`$ denote the quintic curve defined by the quintic polynomial
$$p(x)=\underset{|m|\le 5}{\sum }a_mx^m,$$
with $`|a_m|=t^{w_m}`$. Combining theorem 3.1, proposition 5.7, theorem 5.1 and the remark following theorem 5.1 in , theorem 9.1 in can be generalized to the case of general quintics as follows.
###### Theorem 2.2
For $`w=(w_m)_{m\in \mathrm{\Delta }_5}`$ convex and positive, and $`t`$ small enough, $`\stackrel{~}{\mathrm{\Gamma }}_w=F_{t^w}(C)`$ will be a fattening of the graph $`\mathrm{\Gamma }_w`$. There exists a family $`\{H_s\}_{s\in [0,1]}`$ of piecewise smooth Lipschitz continuous symplectic automorphisms of $`(\mathbb{CP}^2,\omega _{t^w})`$ that restrict to the identity on the three coordinate $`\mathbb{CP}^1`$’s, such that $`H_0=\mathrm{id}`$ and $`\widehat{C}=H_1(C)`$ is a piecewise smooth symplectic curve satisfying $`F_{t^w}(\widehat{C})=\mathrm{\Gamma }_w`$.
$`\square `$
Remark: The moment map $`F_{t^w}`$ is invariant under the real 2-torus action. For any other moment map $`F`$ that is invariant under the real 2-torus action, $`F(\widehat{C})`$ is a 1-dimensional graph.
Although not in a form we can directly apply, there are earlier results on the structure of amoebas, with very different applications in mind, that are more or less equivalent to theorem 3.1 of , which is restated in the first sentence of theorem 2.2. References for these earlier works can be found in and .
Example: For the standard simplicial decomposition of the Newton polygon $`\mathrm{\Delta }_5`$ of quintic polynomials (shown in Figure 1),
Figure 1: the standard simplicial decomposition
we have the corresponding $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$ (Figure 2),
Figure 2: $`\stackrel{~}{\mathrm{\Gamma }}`$ for the standard simplicial decomposition
which is a fattening of the graph $`\mathrm{\Gamma }`$ shown in Figure 3. (We should remark here that Figure 2 is a rough topological illustration of the image $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$. Some parts of the edges of the image that appear straight or convex could be curved or concave in a more accurate picture. Of course, such inaccuracy affects neither our mathematical argument nor the fact that $`\stackrel{~}{\mathrm{\Gamma }}=F(C)`$ is a fattening of a graph.) By the results in , we can symplectically deform $`C`$ to a symplectic curve $`\widehat{C}`$ such that $`F(\widehat{C})=\mathrm{\Gamma }`$.
Figure 3: $`\mathrm{\Gamma }`$ for the standard simplicial decomposition
If the simplicial decomposition is changed to the one shown in Figure 4, we obtain the corresponding $`\mathrm{\Gamma }`$ shown there as well.
Figure 4: alternative simplicial decomposition and corresponding $`\mathrm{\Gamma }`$
With this preparation, we are now ready to address the meaning of being near the large complex limit. From this point on we resume our usual notation, rather than the temporary 2-dimensional notation. Consider the quintic Calabi-Yau family $`\{X_\psi \}`$ in $`\mathbb{CP}^4`$ defined by
$$p_\psi (z)=p_a(z)+\psi \underset{k=1}{\overset{5}{\prod }}z_k=\underset{m\ne m_0,|m|=5}{\sum }a_mz^m+\psi \underset{k=1}{\overset{5}{\prod }}z_k=0,$$
where $`|a_m|=t^{w_m}`$, $`t>0`$ is a parameter and $`\{w_m\}_{m\in \mathrm{\Delta }}`$ is a function defined on the Newton polyhedron $`\mathrm{\Delta }`$ of the quintic polynomials on $`\mathbb{CP}^4`$, which is
(2.1)
$$\mathrm{\Delta }=\{z^m=z_1^{m_1}\cdots z_5^{m_5}\mid |m|=5,\ m=(m_1,\dots ,m_5)\in \mathbb{Z}_{\ge 0}^5\}.$$
$`\mathrm{\Delta }`$ is a reflexive polyhedron in the lattice
(2.2)
$$M=\{m=(m_1,\dots ,m_5)\in \mathbb{Z}^5\mid |m|=5\}\cong \{m=(m_1,\dots ,m_5)\in \mathbb{Z}^5\mid |m|=0\},$$
identifying $`m_0=(1,1,1,1,1)`$ with the origin.
Let $`\mathrm{\Delta }^0`$ denote the set of integral points in the 2-skeleton of $`\mathrm{\Delta }`$; in other words, the integral points of $`\mathrm{\Delta }`$ that are not in the interiors of 3-faces and are not the center $`m_0=(1,1,1,1,1)`$. For $`m\in \mathrm{\Delta }`$, let $`w_m^{\prime }=w_m-w_{m_0}`$ (recall $`\psi =a_{m_0}`$). We have two functions $`w=(w_m)_{m\in \mathrm{\Delta }}`$ and $`w^{\prime }=(w_m^{\prime })_{m\in \mathrm{\Delta }}`$ defined on the integral points of $`\mathrm{\Delta }`$.
###### Definition 2.4
$`w^{\prime }=(w_m^{\prime })_{m\in \mathrm{\Delta }}`$ is called convex with respect to $`\mathrm{\Delta }^0`$ if for any $`m^{\prime }\in \mathrm{\Delta }^0`$ there exists a linear function $`n`$ such that $`n(m^{\prime })=w_{m^{\prime }}^{\prime }`$ and $`n(m)\le w_m^{\prime }`$ for $`m\in \mathrm{\Delta }^0\backslash \{m^{\prime }\}`$.
The two convexities we defined are closely related. For $`I\subset \{1,2,3,4,5\}`$ with $`|I|=2`$, let
$$\mathrm{\Delta }_I=\{z^{m_I}=\underset{i\notin I}{\prod }z_i^{m_i}\mid |m_I|=5,\ m_I=(m_i)_{i\notin I}\in \mathbb{Z}_{\ge 0}^3\}.$$
Then
$$\mathrm{\Delta }^0=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }\mathrm{\Delta }_I.$$
Let
$$w_I=w|_{\mathrm{\Delta }_I}.$$
Then we have the following lemma:
###### Lemma 2.1
For $`w_{m_0}`$ large enough, $`w^{\prime }=(w_m^{\prime })_{m\in \mathrm{\Delta }}`$ is convex with respect to $`\mathrm{\Delta }^0`$ if and only if $`w_I`$ is convex on $`\mathrm{\Delta }_I`$ in the sense of definition 2.3 for all $`I\subset \{1,2,3,4,5\}`$ with $`|I|=2`$.
Proof: The “only if” part is trivial. We will prove the “if” part. For any $`m^{\prime }\in \mathrm{\Delta }^0`$, there exists a smallest subface $`\mathrm{\Delta }_{m^{\prime }}\subset \mathrm{\Delta }`$ such that $`m^{\prime }\in \mathrm{\Delta }_{m^{\prime }}`$. Since $`\mathrm{\Delta }`$ is convex, there exists a linear function $`n_{m^{\prime }}`$ such that
$$n_{m^{\prime }}|_{\mathrm{\Delta }_{m^{\prime }}}=1,\ \ n_{m^{\prime }}|_{\mathrm{\Delta }\backslash \mathrm{\Delta }_{m^{\prime }}}>1.$$
By assumption, $`w_I`$ is convex on $`\mathrm{\Delta }_I`$ in the sense of definition 2.3 for all $`I\subset \{1,2,3,4,5\}`$ with $`|I|=2`$. Clearly, $`\mathrm{\Delta }_{m^{\prime }}`$ is contained in one of the $`\mathrm{\Delta }_I`$. Therefore $`w|_{\mathrm{\Delta }_{m^{\prime }}}`$ is convex on $`\mathrm{\Delta }_{m^{\prime }}`$ in the sense of definition 2.3. Namely, there exists an affine function $`n^{\prime }`$ such that
$$n^{\prime }(m^{\prime })=w_{m^{\prime }},\ \mathrm{and}\ n^{\prime }(m)\le w_m\ \mathrm{for}\ m\in \mathrm{\Delta }_{m^{\prime }}\backslash \{m^{\prime }\}.$$
Let $`n=n^{\prime }-w_{m_0}n_{m^{\prime }}`$; then it is easy to see that for $`w_{m_0}`$ large enough, $`n(m^{\prime })=w_{m^{\prime }}^{\prime }`$ and $`n(m)\le w_m^{\prime }`$ for $`m\in \mathrm{\Delta }^0\backslash \{m^{\prime }\}`$; namely, $`w^{\prime }=(w_m^{\prime })_{m\in \mathrm{\Delta }}`$ is convex with respect to $`\mathrm{\Delta }^0`$.
$`\square `$
With this lemma in mind, we can make the concept of being near the large complex limit precise in the following definition.
###### Definition 2.5
The quintic Calabi-Yau hypersurface $`X_\psi `$ is said to be near the large complex limit if $`w^{\prime }=(w_m^{\prime })_{m\in \mathrm{\Delta }}`$ is convex with respect to $`\mathrm{\Delta }^0`$ and $`t`$ is small.
When $`X_\psi `$ is near the large complex limit, for $`I\subset \{1,2,3,4,5\}`$ with $`|I|=2`$ the corresponding $`w_I`$ is convex on $`\mathrm{\Delta }_I`$. When $`w`$ is generic, by the previous construction $`w_I`$ determines a 1-dimensional graph $`\mathrm{\Gamma }_I`$ on $`\mathrm{\Delta }_I`$. Let
$$\mathrm{\Gamma }=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }\mathrm{\Gamma }_I.$$
At this point, in order to apply theorem 2.2, we need to adopt the toric metric with the Kähler potential $`\mathrm{log}|x|_{t^w}^2`$ on $`\mathbb{CP}^4`$. The corresponding moment map is
$$F_{t^w}(x)=\underset{m\in \mathrm{\Delta }^0}{\sum }\frac{|x^m|_{t^w}^2}{|x|_{t^w}^2}m,$$
which maps $`\mathbb{CP}^4`$ to $`\mathrm{\Delta }`$, where $`|x^m|_{t^w}^2=|t^{w_m}x^m|^2`$, $`|x|_{t^w}^2=\sum _{m\in \mathrm{\Delta }^0}|x^m|_{t^w}^2`$.
By theorem 2.2, $`\stackrel{~}{\mathrm{\Gamma }}_I=F_{t^w}(C_I)`$ is a fattening of $`\mathrm{\Gamma }_I`$. Therefore,
$$\stackrel{~}{\mathrm{\Gamma }}=\underset{\begin{array}{c}I\subset \{1,2,3,4,5\}\\ \left|I\right|=2\end{array}}{\bigcup }\stackrel{~}{\mathrm{\Gamma }}_I$$
is a fattening of the 1-dimensional graph $`\mathrm{\Gamma }`$. There are 3 types of singular fibers over different parts of $`\stackrel{~}{\mathrm{\Gamma }}`$. Let $`\stackrel{~}{\mathrm{\Gamma }}^2`$ denote the interior of $`\stackrel{~}{\mathrm{\Gamma }}`$, and
$$\stackrel{~}{\mathrm{\Gamma }}^1=\partial \stackrel{~}{\mathrm{\Gamma }}\cap \left(\underset{|I|=2}{\bigcup }\mathrm{\Delta }_I^{\circ }\right),\ \ \stackrel{~}{\mathrm{\Gamma }}^0=\stackrel{~}{\mathrm{\Gamma }}\cap \left(\underset{|I|=3}{\bigcup }\mathrm{\Delta }_I\right).$$
Then
(2.3)
$$\stackrel{~}{\mathrm{\Gamma }}=\stackrel{~}{\mathrm{\Gamma }}^0\cup \stackrel{~}{\mathrm{\Gamma }}^1\cup \stackrel{~}{\mathrm{\Gamma }}^2.$$
Let $`F_{\infty }=F_{t^w}|_{X_{\infty }}:X_{\infty }\to \partial \mathrm{\Delta }`$ be the moment map Lagrangian torus fibration. Results in imply that the inverse image under $`F_{\infty }`$ of a point in $`\stackrel{~}{\mathrm{\Gamma }}^0`$ is a circle that intersects $`C`$ at one point; of a point in $`\stackrel{~}{\mathrm{\Gamma }}^1`$, a 2-torus that intersects $`C`$ at one point; of a point in $`\stackrel{~}{\mathrm{\Gamma }}^2`$, a 2-torus that intersects $`C`$ at two points. Then by theorem 2.1, we have the following theorem.
###### Theorem 2.3
When $`X_\psi `$ is near the large complex limit, starting with the topologically smooth Lagrangian torus fibration $`F_{\infty }:X_{\infty }\to \partial \mathrm{\Delta }`$, the gradient flow method will produce a Lagrangian torus fibration $`F_\psi :X_\psi \to \partial \mathrm{\Delta }`$ with codimension 1 singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\infty }(C)`$, a fattening of the graph $`\mathrm{\Gamma }`$. There are 4 types of fibers.
(i). For $`p\in \partial \mathrm{\Delta }\backslash \stackrel{~}{\mathrm{\Gamma }}`$, $`F_\psi ^{-1}(p)`$ is a smooth Lagrangian 3-torus.
(ii). For $`p\in \stackrel{~}{\mathrm{\Gamma }}^2`$, $`F_\psi ^{-1}(p)`$ is a Lagrangian 3-torus with two circles collapsed to two singular points.
(iii). For $`p\in \stackrel{~}{\mathrm{\Gamma }}^1`$, $`F_\psi ^{-1}(p)`$ is a Lagrangian 3-torus with one circle collapsed to one singular point.
(iv). For $`p\in \stackrel{~}{\mathrm{\Gamma }}^0`$, $`F_\psi ^{-1}(p)`$ is a Lagrangian 3-torus with one 2-torus collapsed to one singular point.
$`\square `$
Let $`\mathrm{\Gamma }=\mathrm{\Gamma }^1\cup \mathrm{\Gamma }^2\cup \mathrm{\Gamma }^3`$, where $`\mathrm{\Gamma }^1`$ is the smooth part of $`\mathrm{\Gamma }`$, $`\mathrm{\Gamma }^2`$ is the singular part of $`\mathrm{\Gamma }`$ in the interior of the 2-skeleton of $`\mathrm{\Delta }`$, and $`\mathrm{\Gamma }^3`$ is the singular part of $`\mathrm{\Gamma }`$ in the 1-skeleton of $`\mathrm{\Delta }`$. Applying theorem 2.1, with the help of the second part of theorem 2.2 and some other results in , we can construct a Lagrangian torus fibration $`\widehat{F}_\psi :X_\psi \to \partial \mathrm{\Delta }`$ with codimension 2 singular locus $`\mathrm{\Gamma }=\widehat{F}_\psi (C)`$ in the following theorem. For the definition of type I, II, III singular fibers, please refer to section 8.
###### Theorem 2.4
When $`X_\psi `$ is near the large complex limit, the gradient flow method will produce a Lagrangian torus fibration $`\widehat{F}_\psi :X_\psi \to \partial \mathrm{\Delta }`$ with the graph singular locus $`\mathrm{\Gamma }=\widehat{F}_\psi (C)`$. There are 4 types of fibers.
(i). For $`p\in \partial \mathrm{\Delta }\backslash \mathrm{\Gamma }`$, $`\widehat{F}_\psi ^{-1}(p)`$ is a smooth Lagrangian 3-torus.
(ii). For $`p\in \mathrm{\Gamma }^1`$, $`\widehat{F}_\psi ^{-1}(p)`$ is a type I singular fiber.
(iii). For $`p\in \mathrm{\Gamma }^2`$, $`\widehat{F}_\psi ^{-1}(p)`$ is a type II singular fiber.
(iv). For $`p\in \mathrm{\Gamma }^3`$, $`\widehat{F}_\psi ^{-1}(p)`$ is a type III singular fiber.
Proof: According to theorem 2.2, on each $`\overline{D_I}\cong \mathbb{CP}^2`$ in the 2-skeleton of $`X_{\infty }`$, there exists a family $`\{H_s\}_{s\in [0,1]}`$ of piecewise smooth Lipschitz continuous symplectic automorphisms of $`\mathbb{CP}^2`$ that restrict to the identity on the three coordinate $`\mathbb{CP}^1`$’s, such that $`H_0=\mathrm{id}`$ and $`\widehat{C}_I=H_1(C_I)`$ is a piecewise smooth symplectic curve satisfying $`F_{\infty }(\widehat{C}_I)=\mathrm{\Gamma }_I`$. These families $`\{H_s\}_{s\in [0,1]}`$ can be patched together to form a family of symplectic automorphisms of the 2-skeleton of $`X_{\infty }`$. Using the extension theorem (theorem 6.8) from , we may extend these automorphisms to $`X_{\infty }`$ to form a family (still denoted by $`\{H_s\}_{s\in [0,1]}`$) of piecewise smooth Lipschitz continuous symplectic automorphisms of $`X_{\infty }`$ such that $`H_0=\mathrm{id}`$ and $`\widehat{C}=H_1(C)`$ is a normal crossing union of piecewise smooth symplectic curves satisfying $`F_{\infty }(\widehat{C})=\mathrm{\Gamma }`$.
Clearly, $`\widehat{F}_{\infty }=F_{\infty }\circ H_1:X_{\infty }\to \partial \mathrm{\Delta }`$ is also a topologically smooth Lagrangian fibration, and it satisfies $`\widehat{F}_{\infty }(C)=\mathrm{\Gamma }`$. Applying theorem 2.1 to the topologically smooth Lagrangian fibration $`\widehat{F}_{\infty }:X_{\infty }\to \partial \mathrm{\Delta }`$, we get the desired result.
$`\square `$
Remark: Although the Lagrangian fibrations produced by this theorem have codimension 2 singular locus, the fibration map is not a $`C^{\infty }`$ map (merely Lipschitz). I believe that a small perturbation of this map will give an almost $`C^{\infty }`$ Lagrangian fibration with the same topological structure. (At least $`C^{\infty }`$ away from type II singular fibers.) In , we will discuss partial smoothing of the Lagrangian torus fibration constructed in theorem 2.4.
## 3 Complex moduli, Kähler moduli and monomial-divisor mirror map
In this section we discuss the partial compactification and chamber decomposition of the complex moduli space of quintics near the large complex limit, based on the structure of the Lagrangian torus fibrations. We will show that this partial compactification is equivalent to the well-known secondary fan compactification (). We will also review the quotient construction of the mirror of quintics, the Kähler moduli of the mirror of quintics, and the monomial-divisor mirror map. These will be used in later sections to establish the symplectic SYZ correspondence. Most of the material in this section is well known (see ). The main purpose of our presentation here is to review the facts and fix the notation, with the exception of our new interpretation of the secondary fan compactification in terms of Lagrangian torus fibrations, and the slicing theorem, which removes some restrictions in the results of so that they can be applied in our situation.
### 3.1 Slicing theorem and the complex moduli space of quintic Calabi-Yau manifolds
For quintic Calabi-Yau hypersurfaces, the complex moduli can be seen as the space of homogeneous quintic polynomials on $`\mathbb{C}^5`$ modulo the action of $`GL(5,\mathbb{C})`$. Recall from (2.1) and (2.2) the Newton polyhedron $`\mathrm{\Delta }\subset M`$ of quintic polynomials. The space of quintic polynomials $`Q`$ can be viewed as
$$Q=\mathrm{Span}_{\mathbb{C}}(\mathrm{\Delta }).$$
To understand the quotient of $`Q`$ by $`GL(5,\mathbb{C})`$, classically we would have to use geometric invariant theory. This is fairly complicated, and the compactification is not unique. For the purpose of mirror symmetry, the GIT approach is a bit unnatural. For example, the large complex limit
$$p_{\infty }(z)=\underset{k=1}{\overset{5}{\prod }}z_k$$
is not invariant under the action of $`GL(5,\mathbb{C})`$. The stabilizer of $`p_{\infty }(z)`$ (at least infinitesimally) is merely the Cartan subgroup $`T\subset SL(5,\mathbb{C})`$. On the other hand, mirror symmetry mainly concerns the behavior of the complex moduli near the large complex limit $`p_{\infty }(z)`$, with the meaning of “near” suitably specified.
For these reasons, it would be much nicer if we could find a canonical $`T`$-invariant slice $`Q_0`$ in $`Q`$ that passes through the large complex limit $`p_{\infty }(z)`$ and intersects each orbit of $`GL(5,\mathbb{C})`$ exactly in an orbit of $`T`$. This seems quite impossible to do on the whole complex moduli. But near the large complex limit $`p_{\infty }(z)`$ it can be done very nicely, by the following slicing theorem.
Recall that $`\mathrm{\Delta }^0`$ is the set of integral points in the 2-skeleton of $`\mathrm{\Delta }`$. Let
$$Q_0=m_0+\mathrm{Span}_{\mathbb{C}}(\mathrm{\Delta }^0).$$
Then we have
###### Theorem 3.1
Near the large complex limit $`p_{\infty }`$, $`Q_0=m_0+\mathrm{Span}_{\mathbb{C}}(\mathrm{\Delta }^0)`$ is a slice of $`Q=\mathrm{Span}_{\mathbb{C}}(\mathrm{\Delta })`$ that contains the large complex limit $`p_{\infty }`$, is invariant under the toric automorphism group $`T`$ (the maximal torus in $`SL(5,\mathbb{C})`$), and intersects each nearby orbit of the automorphism group $`GL(5,\mathbb{C})`$ in a finite set of orbits of $`T`$ that is parametrized by a quotient of the Weyl group of $`SL(5,\mathbb{C})`$.
Proof: Let
$$F(z)=\underset{m(\ne m_0)\in \mathrm{\Delta }}{\sum }a_mz^m+\psi z^{m_0},$$
then the theorem asserts that when $`\psi `$ is large, through a linear transformation of $`z`$, $`F(z)`$ can be reduced to the following standard form:
$$F_0(z)=\underset{m\in \mathrm{\Delta }^0}{\sum }a_mz^m+\psi z^{m_0}.$$
In general, this seems a rather hard question. But when $`\psi `$ is large (near the large complex limit), the question is much easier. Let us consider the linear transformation
$$L_1:z\mapsto (I-\frac{1}{\psi }B)z,$$
where $`b_{jk}=a_{m_{jk}}`$, $`m_{jk}=m_0-e_j+e_k`$. Then
$$\begin{array}{ccl}F_1(z)=F(L_1z)& =& \psi z^{m_0}+\underset{m(\ne m_0)\in \mathrm{\Delta }}{\sum }a_mz^m-\underset{j,k}{\sum }b_{jk}z^{m_{jk}}+O(\frac{1}{\psi })\\ & =& \psi z^{m_0}+\underset{m\in \mathrm{\Delta }^0}{\sum }a_mz^m+\underset{j,k}{\sum }a_{m_{jk}}z^{m_{jk}}-\underset{j,k}{\sum }b_{jk}z^{m_{jk}}+O(\frac{1}{\psi })\\ & =& \psi z^{m_0}+\underset{m\in \mathrm{\Delta }^0}{\sum }a_mz^m+O(\frac{1}{\psi })\end{array}$$
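The reason a single matrix $`B`$ suffices here (an easy check): the integral points of $`\mathrm{\Delta }`$ outside $`\mathrm{\Delta }^0\cup \{m_0\}`$ are exactly the interior integral points of the five 3-faces, and these are precisely the 20 points $`m_{jk}=m_0-e_j+e_k`$ with $`j\ne k`$. Choosing $`b_{jk}=a_{m_{jk}}`$ therefore cancels all terms outside the slice to leading order in $`1/\psi `$.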
Adjusting $`\psi `$ and the $`a_m`$ accordingly, the above equation can also be expressed as
$$F_1(z)=\underset{m(\ne m_0)\in \mathrm{\Delta }}{\sum }a_mz^m+\psi z^{m_0},$$
where $`a_m=O(1/\psi )`$ for $`m\in \mathrm{\Delta }\backslash \mathrm{\Delta }^0`$. Repeating the above process, consider
$$L_2:z\mapsto (I-\frac{1}{\psi }B)z.$$
Then
$$F_2(z)=F_1(L_2z)=F(L_1L_2z)=\underset{m(\ne m_0)\in \mathrm{\Delta }}{\sum }a_mz^m+\psi z^{m_0},$$
where $`a_m=O(1/\psi ^2)`$ for $`m\in \mathrm{\Delta }\backslash \mathrm{\Delta }^0`$. Repeating this process inductively, define
$$L=L_1L_2\cdots .$$
Since $`L_l-I=O(1/\psi ^l)`$, the infinite product converges when $`\psi `$ is large. Then
$$F_0(z)=F(Lz)=\underset{m\in \mathrm{\Delta }^0}{\sum }a_mz^m+\psi z^{m_0}$$
is in the standard form (and belongs to our slice after division by $`\psi `$).
To show that near $`p_{\infty }`$, $`Q_0`$ intersects each $`GL(5,\mathbb{C})`$ orbit in a finite set of orbits of $`T`$, notice that any group element in $`GL(5,\mathbb{C})\backslash T`$ near $`T`$, when acting on elements of $`Q_0`$ near $`p_{\infty }`$, will produce terms in $`\mathrm{\Delta }\backslash (\mathrm{\Delta }^0\cup \{m_0\})`$ or alter the $`m_0`$ term. This fact, together with the fact that the stabilizer of $`p_{\infty }`$ is exactly the normalizer of $`T\subset SL(5,\mathbb{C})`$, implies our conclusion.
$`\square `$
Remark: A similar slicing theorem is also true in the general toric case, which will be discussed in . Given the elementary nature of the theorem, we believe it may have appeared in the literature in some form, although we are not aware of a reference. In any case, this kind of slicing theorem is very important for understanding the complex moduli of Calabi-Yau hypersurfaces in a toric variety near the large complex limit.
As we know, the normalizer of $`T`$ in $`SL(5,\mathbb{C})`$ is a semi-direct product of $`T`$ and the Weyl group $`S_5`$ of $`SL(5,\mathbb{C})`$. The construction of the complex moduli near the large complex limit involves first constructing a certain “quotient” of $`Q_0`$ by $`T`$ that is invariant under $`S_5`$, then taking the further quotient by $`S_5`$.
$`Q_0`$ is naturally an affine space with the natural toric structure determined by the open complex torus $`T_{Q_0}\subset Q_0`$. The action of $`T`$ is compatible with this toric structure. We expect the “quotient” of $`Q_0`$ by $`T`$ to be a toric variety, a toric partial compactification of the quotient complex torus $`T_{Q_0}/T`$. Before getting into the details, we first introduce some notation. Let
$$\stackrel{~}{M}_0=\{a^I=\underset{m\in \mathrm{\Delta }^0}{\prod }a_m^{i_m}\mid I=(i_m)_{m\in \mathrm{\Delta }^0}\in \mathbb{Z}^{\mathrm{\Delta }^0}\}\cong \mathbb{Z}^{\mathrm{\Delta }^0},$$
then its dual $`\stackrel{~}{N}_0`$ is naturally the space of all the weights
$$\stackrel{~}{N}_0=\{w=(w_m)_{m\in \mathrm{\Delta }^0}\in \mathbb{Z}^{\mathrm{\Delta }^0}\}\cong \mathbb{Z}^{\mathrm{\Delta }^0}.$$
Following the toric geometry convention, let $`N`$ denote the dual lattice of $`M`$. Then $`n\mapsto (\langle m,n\rangle )_{m\in \mathrm{\Delta }^0}`$ defines an embedding $`N\to \stackrel{~}{N}_0`$. Let $`W\subset \stackrel{~}{N}_0`$ be the image of this embedding, $`\stackrel{~}{M}=W^{\perp }`$; then $`\stackrel{~}{N}=\stackrel{~}{N}_0/W`$ is dual to $`\stackrel{~}{M}`$. $`\stackrel{~}{M}`$ can be viewed as the lattice of monomials on the quotient complex torus $`T_{Q_0}/T`$. Therefore, the fan of the “quotient” of $`Q_0`$ by $`T`$ should live in $`\stackrel{~}{N}`$. For a convex cone $`\sigma _0\subset \stackrel{~}{N}_0`$, let $`\sigma ^{\vee }=\sigma _0^{\vee }\cap \stackrel{~}{M}`$. Then $`\sigma =(\sigma ^{\vee })^{\vee }`$ is naturally the projection of $`\sigma _0`$ to $`\stackrel{~}{N}`$.
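As a consistency check (our count): $`\mathrm{\Delta }`$ contains 126 integral points (the degree 5 monomials in five variables), of which 20 lie in the interiors of the five 3-faces and one is the center $`m_0`$, so $`|\mathrm{\Delta }^0|=105`$. Since $`W\cong N`$ has rank 4, $`\stackrel{~}{N}`$ has rank $`105-4=101`$, which is exactly $`h^{2,1}`$ of the quintic, i.e. the dimension of its complex moduli.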
Our discussion of the notion of being near the large complex limit (definition 2.5), which was needed for the construction of the Lagrangian fibration, naturally suggests a chamber decomposition of the complex moduli and a related fan structure in $`\stackrel{~}{N}`$ that determines a “canonical” partial compactification of the complex moduli near the large complex limit.
Recall from definition 2.5 that the quintic hypersurface $`X`$ defined by
$$p(z)=z^{m_0}+\underset{m\in \mathrm{\Delta }^0}{\sum }a_mz^m\in T_{Q_0}$$
with $`|a_m|=t^{w_m}`$ is near the large complex limit if $`\{w_m\}_{m\in \mathrm{\Delta }^0}`$ is strictly convex and $`t>0`$ is small. (A generic convex $`w=(w_m)_{m\in \mathrm{\Delta }^0}`$ will determine a simplicial decomposition $`Z`$ of $`\mathrm{\Delta }^0`$ with the integral points of $`\mathrm{\Delta }^0`$ as vertices of the simplices in $`Z`$. In this situation, we say $`w=\{w_m\}_{m\in \mathrm{\Delta }^0}`$ is compatible with $`Z`$.) Notice that in our discussion we are repeatedly using the notation $`\mathrm{\Delta }^0`$ in two slightly different senses: first, for the set of integral points in the 2-skeleton of the polyhedron $`\mathrm{\Delta }`$; second, for the real 2-skeleton of the polyhedron $`\mathrm{\Delta }`$.
According to theorem 2.4, the gradient flow method will produce a Lagrangian torus fibration for $`X`$ with singular locus $`\mathrm{\Gamma }_Z`$, determined by the simplicial decomposition $`Z`$ of $`\mathrm{\Delta }^0`$ that is compatible with the strictly convex function $`\{w_m\}_{m\in \mathrm{\Delta }^0}`$. For each $`Z`$, the quintic polynomials in $`T_{Q_0}`$ determining the same $`Z`$ form a chamber $`\stackrel{~}{U}_Z`$ in $`T_{Q_0}`$. (The closure of $`\stackrel{~}{U}_Z`$ forms a chamber in $`Q_0`$.) For Calabi-Yau quintics in $`\stackrel{~}{U}_Z`$ near the large complex limit, the corresponding Lagrangian fibrations have singular loci of the same combinatorial type $`\mathrm{\Gamma }_Z`$. Since convex functions $`\{w_m\}_{m\in \mathrm{\Delta }^0}`$ differing by a linear function on $`M`$ determine the same $`Z`$, $`\stackrel{~}{U}_Z`$ is invariant under the $`T`$-action, and $`U_Z=\stackrel{~}{U}_Z/T`$ forms a chamber in the quotient complex torus $`T_{Q_0}/T`$.
Let $`\stackrel{~}{Z}`$ denote the set of all simplicial decompositions $`Z`$ of $`\mathrm{\Delta }^0`$ with the integral points of $`\mathrm{\Delta }^0`$ as vertices of the simplices in $`Z`$. Then we get a chamber decomposition $`\{\stackrel{~}{U}_Z\}_{Z\in \stackrel{~}{Z}}`$ of $`T_{Q_0}`$ near the large complex limit $`m_0`$ that is invariant under $`T`$ and permuted by the action of the Weyl group of $`SL(5,\mathbb{C})`$; hence it descends to a chamber decomposition $`\{U_Z\}_{Z\in \stackrel{~}{Z}}`$ of the quotient complex torus $`T_{Q_0}/T`$, whose quotient by the Weyl group of $`SL(5,\mathbb{C})`$ forms a Zariski open set of the complex moduli of the quintic.
On the other hand, for any $`Z\in \stackrel{~}{Z}`$,
$$\tau _Z=\{w=(w_m)_{m\in \mathrm{\Delta }^0}\in (\stackrel{~}{N}_0)_{\mathbb{R}}\mid w\ \mathrm{is}\ \mathrm{convex}\ \mathrm{and}\ \mathrm{compatible}\ \mathrm{with}\ Z\}/W$$
is a top-dimensional cone in $`\stackrel{~}{N}_{\mathbb{R}}`$. Let $`\stackrel{~}{\mathrm{\Sigma }}`$ denote the fan consisting of the subcones of the $`\tau _Z`$ for $`Z\in \stackrel{~}{Z}`$. Then the support of the fan $`\stackrel{~}{\mathrm{\Sigma }}`$ is
$$\tau =\underset{Z\in \stackrel{~}{Z}}{\bigcup }\tau _Z.$$
The corresponding toric variety $`P_{\stackrel{~}{\mathrm{\Sigma }}}`$ is our model for the “quotient” of $`Q_0`$ by $`T`$, a partial compactification of the quotient complex torus $`T_{Q_0}/T`$, which is compatible with the chamber decomposition in the sense that $`U_Z`$ can be naturally identified with the complexification of $`\tau _Z`$. It is easy to see that $`\stackrel{~}{\mathrm{\Sigma }}`$ is invariant under the natural action of the Weyl group of $`SL(5,\mathbb{C})`$. The actual complex moduli space of quintics near the large complex limit is $`P_{\stackrel{~}{\mathrm{\Sigma }}}`$ modulo the action of the Weyl group $`S_5`$ of $`SL(5,\mathbb{C})`$.
###### Theorem 3.2
$`P_{\stackrel{~}{\mathrm{\Sigma }}}/S_5`$ is a natural compactification of the moduli space of quintic Calabi-Yau hypersurfaces near the large complex limit.
$`\square `$
Remark: $`\stackrel{~}{\mathrm{\Sigma }}`$ is the so-called secondary fan. Our discussion in this subsection provides an intrinsic geometric reconstruction of the secondary fan compactification based on the construction of Lagrangian torus fibrations in the case of quintics. More detailed justification for the general case of Calabi-Yau hypersurfaces in toric varieties will be discussed in .
### 3.2 The mirror of quintics and their Kähler moduli
All the material in this subsection is well known and has appeared in the literature (). For the reader’s convenience, and to fix the notation for our application, we will review these results in the form we need. We first recall the quotient construction of the mirror of quintics.
Let $`(e_1,\dots ,e_5)`$ be the standard basis of $`M_0\cong \mathbb{Z}^5`$, and $`(e^1,\dots ,e^5)`$ the dual basis of $`N_0\cong \mathbb{Z}^5`$. Recall from (2.1) and (2.2) the Newton polyhedron $`\mathrm{\Delta }\subset M`$ of quintic polynomials.
$$N=M^{\ast }=\{n=(n_1,\dots ,n_5)\in \mathbb{Z}^5\}/\mathbb{Z}n_0,$$
with $`n_0=(1,1,1,1,1)`$, is naturally the dual lattice of $`M`$. Let $`\mathrm{\Delta }^{\ast }`$ be the dual polyhedron of $`\mathrm{\Delta }`$ in $`N`$. Then the simplex $`\mathrm{\Delta }^{\ast }\subset N`$ has vertices $`\{n^i\}_{i=1}^5`$, and the dual simplex $`\mathrm{\Delta }\subset M`$ has vertices $`\{m^i\}_{i=1}^5`$, where
$$n^i=[e^i],\ \ m^i=5e_i-m_0.$$
Interestingly, $`\{n^i\}_{i=1}^5`$ and $`\{m^i\}_{i=1}^5`$ satisfy the same linear relation:
$$\underset{k=1}{\overset{5}{\sum }}n^k=0,\ \ \underset{k=1}{\overset{5}{\sum }}m^k=0.$$
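Indeed (a one-line check): $`\sum _{k=1}^5m^k=5\sum _{k=1}^5e_k-5m_0=5m_0-5m_0=0`$, while $`\sum _{k=1}^5n^k=[\sum _{k=1}^5e^k]=[n_0]=0`$ in $`N=\mathbb{Z}^5/\mathbb{Z}n_0`$.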
Since $`N`$ is generated by $`\{n^i\}_{i=1}^5`$, there is a natural map $`Q:N\to M`$ that maps $`n^i`$ to $`m^i`$. The following lemma is straightforward to check.
###### Lemma 3.1
$$M/Q(N)\cong (\mathbb{Z}_5)^3$$
$`\square `$
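Since the lemma is stated without proof, here is one way to carry out the check (our computation). In the model $`M\cong \{m\in \mathbb{Z}^5\mid |m|=0\}`$, take the basis $`f_i=e_i-e_5`$, $`i=1,\dots ,4`$. Then $`Q(N)`$ is generated by $`m^i-m^5=5f_i`$ $`(i=1,\dots ,4)`$ together with $`m^5=-(f_1+f_2+f_3+f_4)`$, so
$$M/Q(N)\cong \mathbb{Z}^4/\langle 5f_1,\dots ,5f_4,\ f_1+f_2+f_3+f_4\rangle \cong (\mathbb{Z}_5)^4/\langle (1,1,1,1)\rangle \cong (\mathbb{Z}_5)^3.$$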
Let $`\mathrm{\Sigma }_\mathrm{\Delta }`$ (resp. $`\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}`$) be the fan in $`N`$ (resp. $`M`$) whose cones are spanned by the faces of $`\mathrm{\Delta }^{\ast }`$ (resp. $`\mathrm{\Delta }`$) from the origin. The corresponding toric variety $`P_{\mathrm{\Sigma }_\mathrm{\Delta }}`$ (resp. $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$) is the anti-canonical model; namely, the anti-canonical class is ample on $`P_{\mathrm{\Sigma }_\mathrm{\Delta }}`$ (resp. $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$). The map $`Q:N\to M`$ gives us the well-known equivalence between the Batyrev mirror construction () and the quotient interpretation of the mirror ().
###### Corollary 3.1
$$P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}\cong P_{\mathrm{\Sigma }_\mathrm{\Delta }}/(\mathbb{Z}_5)^3$$
$`\square `$
In this way, Calabi-Yau hypersurfaces in $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$, via pullback, are equivalent to $`(\mathbb{Z}_5)^3`$-invariant Calabi-Yau hypersurfaces in $`P_{\mathrm{\Sigma }_\mathrm{\Delta }}`$, namely the Fermat-type quintics $`X_\psi `$ defined by
$$\underset{k=1}{\overset{5}{\sum }}z_k^5+5\psi \underset{k=1}{\overset{5}{\prod }}z_k=0.$$
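Concretely (in one standard presentation of the action), the $`(\mathbb{Z}_5)^3`$ in question acts by $`z_k\mapsto \mu ^{a_k}z_k`$ with $`\mu =e^{2\pi i/5}`$ and $`\sum _ka_k\equiv 0\ (\mathrm{mod}\ 5)`$, taken modulo the diagonal; both $`z_k^5`$ and $`\prod _kz_k`$ are invariant, and the group $`\{a\in (\mathbb{Z}_5)^5\mid \sum _ka_k=0\}/\langle (1,1,1,1,1)\rangle `$ is isomorphic to $`(\mathbb{Z}_5)^3`$.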
Denote the quotient of $`X_\psi `$ in $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ by $`Y_\psi ^0`$. Then $`Y_\psi ^0`$ corresponds to the mirror of quintics, equivalently according to both the Batyrev and the quotient mirror constructions. We are interested in Calabi-Yau hypersurfaces near the large complex limit, which corresponds to large $`\psi `$.
Calabi-Yau hypersurfaces in $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ are not yet exactly the mirrors, since they are singular. To get smooth mirror Calabi-Yau manifolds, it is necessary to perform a crepant resolution. This can be done by resolving the singularities of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ suitably; the pullback of $`Y_\psi ^0`$ to the resolution, denoted by $`Y_\psi `$, then gives the corresponding smooth Calabi-Yau manifold.
In general, a resolution of singularities of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ corresponds to a subdivision of the fan $`\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}`$. An arbitrary resolution of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ can destroy the Calabi-Yau property of $`Y_\psi `$. The admissible resolutions, which preserve the Calabi-Yau property of $`Y_\psi `$, correspond to those subdivision fans $`\mathrm{\Sigma }^{\prime }`$ of $`\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}`$ whose generators of 1-dimensional cones lie in $`\mathrm{\Delta }`$. In other words, admissible resolutions of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ correspond to polyhedral subdivisions of $`\mathrm{\Delta }`$ with the vertices of the polyhedra being integral points of $`\mathrm{\Delta }`$. Different subdivisions $`\mathrm{\Sigma }^{\prime }`$ of $`\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}`$ correspond to different birational models $`P_{\mathrm{\Sigma }^{\prime }}`$ of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$.
For each particular birational model $`P_{\mathrm{\Sigma }^{\prime }}`$, with birational map $`\widehat{\pi }:P_{\mathrm{\Sigma }^{\prime }}\to P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$, the Kähler cone of $`P_{\mathrm{\Sigma }^{\prime }}`$ can be understood as the set of piecewise linear convex functions on $`M`$ that are compatible with $`\mathrm{\Sigma }^{\prime }`$, modulo linear functions on $`M`$. We are actually interested in the Kähler cone of the corresponding Calabi-Yau hypersurface $`Y_\psi =\widehat{\pi }^{-1}(Y_\psi ^0)\subset P_{\mathrm{\Sigma }^{\prime }}`$. By the Kähler cone of $`Y_\psi `$ we refer in this paper only to those Kähler forms of $`Y_\psi `$ that come from restrictions of Kähler forms on $`P_{\mathrm{\Sigma }^{\prime }}`$, although presumably there might be other Kähler forms not coming from such restrictions.
As a toric variety, $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ can be seen as a union of complex tori of various dimensions (orbits of $`T_M`$). Under the moment map, these complex tori are mapped to the interiors of subfaces of $`\mathrm{\Delta }^{\ast }`$ of various dimensions. There is a duality between subfaces of $`\mathrm{\Delta }`$ and subfaces of $`\mathrm{\Delta }^{\ast }`$. For $`\alpha `$ a subface of $`\mathrm{\Delta }`$, the dual face is
$$\alpha ^{\ast }=\{n\in \mathrm{\Delta }^{\ast }\mid \langle m,n\rangle =-1\ \mathrm{for}\ \mathrm{all}\ m\in \alpha \}.$$
Clearly, $`(\alpha ^{\ast })^{\ast }=\alpha `$ and $`\mathrm{dim}\alpha +\mathrm{dim}\alpha ^{\ast }=3`$. $`P_{\mathrm{\Sigma }^{\prime }}`$ is a toric resolution of $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$. Toric divisors in $`P_{\mathrm{\Sigma }^{\prime }}`$ that dominate the complex torus in $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ corresponding to a subface $`\alpha `$ of $`\mathrm{\Delta }^{\ast }`$ are parametrized by the integral points in the interior of $`\alpha ^{\ast }`$.
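For instance (a direct check with the vertices): for $`\alpha =\{m^i\}`$ a vertex of $`\mathrm{\Delta }`$, the pairing gives $`\langle m^i,n^j\rangle =\langle 5e_i-m_0,e^j\rangle =5\delta _{ij}-1`$, which equals $`-1`$ exactly when $`j\ne i`$; hence $`\alpha ^{\ast }`$ is the 3-face of $`\mathrm{\Delta }^{\ast }`$ spanned by $`\{n^j\}_{j\ne i}`$, in accordance with $`\mathrm{dim}\alpha +\mathrm{dim}\alpha ^{\ast }=3`$.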
Notice that $`Y_\psi ^0`$ does not intersect the points (0-dimensional tori) in $`P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ corresponding to the vertices (0-dimensional subfaces) of $`\mathrm{\Delta }^{\ast }`$. A toric divisor in $`P_{\mathrm{\Sigma }^{\prime }}`$ restricts non-trivially to $`Y_\psi `$ if and only if the corresponding integral point in $`\mathrm{\Delta }`$ lies in the 2-skeleton $`\mathrm{\Delta }^0\subset \mathrm{\Delta }`$. Let $`Z`$ denote the simplicial decomposition of $`\mathrm{\Delta }^0`$ determined by the fan $`\mathrm{\Sigma }^{\prime }`$. Then we have
###### Proposition 3.1
The Kähler cone of $`Y_\psi \subset P_{\mathrm{\Sigma }^{\prime }}`$ can be naturally identified with the set of $`w=(w_m)_{m\in \mathrm{\Delta }^0}`$ (understood as piecewise linear functions on $`M`$) that are convex with respect to $`\mathrm{\Delta }^0`$ in the sense of definition 2.4 and compatible with $`Z`$, modulo linear functions on $`M`$.
$`\square `$
Notice that $`Y_\psi `$ is the same for all $`\mathrm{\Sigma }^{\prime }`$ that determine the same simplicial decomposition $`Z`$ of $`\mathrm{\Delta }^0`$, and the Kähler cone of $`Y_\psi `$ determined in proposition 3.1 is exactly the cone $`\tau _Z`$ introduced in the previous subsection. From the point of view of mirror symmetry, it is more natural to consider the $`Y_\psi `$ for different $`Z`$ together, as different birational models of the mirror Calabi-Yau, and to think of the Kähler moduli of the mirror Calabi-Yau as the union of the Kähler cones of all these different birational models $`Y_\psi `$. This union is usually called the movable cone of $`Y_\psi `$ in algebraic geometry.
###### Corollary 3.2
The Kähler moduli of the mirror Calabi-Yau $`Y_\psi `$ can be naturally identified with
$$\tau =\underset{Z\in \stackrel{~}{Z}}{\bigcup }\tau _Z,$$
the set of $`w=(w_m)_{m\in \mathrm{\Delta }^0}`$ (understood as piecewise linear functions on $`M`$) that are convex with respect to $`\mathrm{\Delta }^0`$ in the sense of definition 2.4, modulo linear functions on $`M`$.
$`\square `$
### 3.3 Monomial-divisor mirror map
Combining the discussions of the last two subsections, we see that the Kähler cones of the various birational models of $`Y_\psi `$ naturally correspond to the top-dimensional cones in the secondary fan for the complex moduli of $`X_\psi `$.
Mirror symmetry actually requires a precise identification of the complex moduli $`P_{\stackrel{~}{\mathrm{\Sigma }}}`$ near the large complex limit with the complexified Kähler moduli $`(\stackrel{~}{N}_{\mathbb{R}}+i\tau )/\stackrel{~}{N}`$ near the large radius limit.
As a toric variety, the complex moduli $`P_{\stackrel{~}{\mathrm{\Sigma }}}`$ can be understood as a partial compactification of the complex torus $`T_{\stackrel{~}{N}}=\stackrel{~}{N}_{\mathbb{C}}/\stackrel{~}{N}`$. Combining the natural embeddings $`\stackrel{~}{N}_{\mathbb{R}}+i\tau \subset \stackrel{~}{N}_{\mathbb{C}}`$ and $`T_{\stackrel{~}{N}}\subset P_{\stackrel{~}{\mathrm{\Sigma }}}`$, we get the so-called monomial-divisor mirror map
$$\mathrm{\Phi }:(\stackrel{~}{N}_{\mathbb{R}}+i\tau )/\stackrel{~}{N}\to P_{\stackrel{~}{\mathrm{\Sigma }}}.$$
More precisely, for $`u=(\eta _m+iw_m)_{m\in \mathrm{\Delta }^0}\in \stackrel{~}{N}_{\mathbb{R}}+i\tau `$, we have the corresponding quintic polynomial
$$p_u(z)=\underset{m\in \mathrm{\Delta }^0}{\sum }e^{2\pi i\eta _m}e^{-2\pi w_m}z^m+z^{m_0}.$$
(Notice that $`w_m`$ here will correspond to $`-\frac{\mathrm{log}t}{2\pi }w_m`$ under the notation of section 2.)
Under the monomial-divisor mirror map, the complexified Kähler cone $`\tau _Z^{\mathbb{C}}=(\stackrel{~}{N}_{\mathbb{R}}+i\tau _Z)/\stackrel{~}{N}`$ is naturally identified with the chamber $`U_Z`$. In section 2, we constructed the Lagrangian torus fibration for a quintic Calabi-Yau $`X`$ in $`U_Z`$ near the large complex limit. In the following sections, we will construct the Lagrangian torus fibration for the mirror Calabi-Yau manifold $`Y`$ (more precisely, the monomial-divisor map image of $`X`$) with Kähler class in $`\tau _Z`$ and prove the symplectic SYZ duality for the two Lagrangian torus fibrations.
There are several nice properties that make the secondary fan compactification a very desirable canonical partial compactification of the complex moduli of Calabi-Yau hypersurfaces in toric varieties. Firstly, the secondary fan compactification is the canonical minimal partial compactification that dominates all GIT/symplectic reduction partial compactifications. Secondly, the secondary fan compactification is the minimal partial compactification such that the discriminant locus is ample. (Please consult for mention of, and references for, these results.) provides the first justification that the secondary fan compactification suits the purposes of mirror symmetry, by identifying the top-dimensional cones in the secondary fan with the Kähler cones of the different birationally equivalent models of the mirror, thereby establishing the monomial-divisor mirror map that identifies the complexified Kähler moduli of the mirror, with its Kähler cone decomposition, with the complex moduli of Calabi-Yau hypersurfaces in a toric variety near the large complex limit, with the chamber decomposition associated with the secondary fan compactification. Our discussion in this section gives an intrinsic geometric explanation of this chamber decomposition, and consequently provides a direct justification of the secondary fan compactification without going to the mirror. This interpretation can potentially be generalized to determine the chamber decomposition and partial compactification of the complex moduli near the large complex limit for more general Calabi-Yau manifolds in non-toric situations. On the other hand, this interpretation also indicates the central importance of the SYZ picture in understanding mirror symmetry.
Remark: The monomial-divisor mirror map as first introduced in was subject to certain restrictions; therefore the results there cannot be directly applied to our situation. As discussed in , the actual mirror map is not exactly the monomial-divisor map, but rather a perturbation of it.
## 4 Lagrangian torus fibrations for the mirror manifolds of quintics
To establish the symplectic topological SYZ correspondence for quintic Calabi-Yau hypersurfaces, we need to construct Lagrangian torus fibrations for their mirror manifolds. The construction of Lagrangian torus fibrations of quintics was carried out in section 2, where the fibration structure is essentially determined by the complex structure of the quintic. In particular, the singular locus of the fibration map arises naturally from the “string diagram” structure of the singular set curves near the large complex limit. On the other side, the mirrors of quintics start out as singular Calabi-Yau manifolds. Through crepant resolution, we get smooth models for the mirrors of quintics. Different Kähler structures determine different crepant resolutions, which in turn determine different combinatorial structures of the singular locus of the Lagrangian torus fibration for the mirror of quintic Calabi-Yau manifolds. After a suitable crepant resolution, the construction of Lagrangian torus fibrations for the mirrors of quintics will be carried out using the gradient flow. The construction is somewhat easier in this case, because most of the singular locus is automatically 1-dimensional.
Take a generic $`w=(w_m)_{m\in \mathrm{\Delta }^0}\in \tau =\bigcup _{Z\in \stackrel{~}{Z}}\tau _Z`$ (the Kähler moduli of the mirror $`Y_\psi `$). In general, $`w=(w_m)_{m\in \mathrm{\Delta }^0}`$ naturally defines a piecewise linear convex function $`p_w`$ on $`M_{\mathbb{R}}`$ that satisfies $`p_w(m)=w_m`$. $`p_w`$ naturally determines a fan $`\mathrm{\Sigma }^w`$ for $`M`$ that is compatible with $`\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}`$. For our generic choice of $`w`$, we may assume that the fan $`\mathrm{\Sigma }^w`$ is simplicial. $`p_w`$ also naturally determines a real polyhedron $`\mathrm{\Delta }_w\subset N_{\mathbb{R}}`$ consisting of those $`n\in N_{\mathbb{R}}`$ that, as linear functions on $`M_{\mathbb{R}}`$, are greater than or equal to $`p_w`$. As a real polyhedron, $`\mathrm{\Delta }_w`$ has a dual real polyhedron $`\mathrm{\Delta }_w^{\ast }\subset M_{\mathbb{R}}`$.
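In formula (our paraphrase of the definition):
$$\mathrm{\Delta }_w=\{n\in N_{\mathbb{R}}\mid \langle m,n\rangle \ge p_w(m)\ \mathrm{for}\ \mathrm{all}\ m\in M_{\mathbb{R}}\},$$
and since $`p_w`$ is piecewise linear on the cones of $`\mathrm{\Sigma }^w`$, it suffices to test the inequality at the integral points $`m\in \mathrm{\Delta }^0`$, where $`p_w(m)=w_m`$.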
### 4.1 Lagrangian torus fibration with part of singular locus being codimension one
Recall from the last section that the anti-canonical model of the mirror of the quintic is the quotient of the Fermat-type quintic:
$$Y_\psi ^0=X_\psi /(\mathbb{Z}_5)^3\subset P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}\cong \mathbb{CP}^4/(\mathbb{Z}_5)^3.$$
$`w`$ determines a simplicial fan $`\mathrm{\Sigma }^w`$ in $`M`$, and $`\widehat{\pi }:P_{\mathrm{\Sigma }^w}\to P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ is a resolution. The pullback $`Y_\psi =\widehat{\pi }^{-1}(Y_\psi ^0)`$ is the mirror Calabi-Yau with Kähler class corresponding to $`w=(w_m)_{m\in \mathrm{\Delta }^0}`$.
On $`P_{\mathrm{\Sigma }^w}`$ we may take any toric metric that represents the Kähler class of $`w`$. For example, we may use the toric metric resulting from symplectic reduction, as discussed in Guillemin’s paper , whose Kähler potential $`h_w`$ is the Legendre transform of
$$v_w(n)=\underset{m\in \mathrm{\Delta }^0}{\sum }(\langle m,n\rangle +w_m)(\mathrm{log}(\langle m,n\rangle +w_m)-1).$$
Let $`\mathrm{\Delta }_w^{(k)}`$ denote the smooth open part of the $`k`$-skeleton of $`\partial \mathrm{\Delta }_w`$. Similarly, let $`Y_{\infty }^{(k)}`$ denote the smooth open part of the complex $`k`$-skeleton of $`Y_{\infty }`$. Then we have the stratifications
$$\partial \mathrm{\Delta }_w=\underset{k=0}{\overset{3}{\bigcup }}\mathrm{\Delta }_w^{(k)},\ \ Y_{\infty }=\underset{k=0}{\overset{3}{\bigcup }}Y_{\infty }^{(k)}.$$
A torus fibration $`F_{\infty }:Y_{\infty }\to \partial \mathrm{\Delta }_w`$ is called topologically smooth if $`\mathrm{\Delta }_w^{(k)}=F_{\infty }(Y_{\infty }^{(k)})`$ and $`F_{\infty }|_{Y_{\infty }^{(k)}}:Y_{\infty }^{(k)}\to \mathrm{\Delta }_w^{(k)}`$ is a topologically smooth fibration, whose fibers are necessarily $`k`$-dimensional tori.
###### Proposition 4.1
The moment map $`F_w`$ maps $`P_{\mathrm{\Sigma }^w}`$ to $`\mathrm{\Delta }_w`$, and maps $`Y_{\infty }\subset P_{\mathrm{\Sigma }^w}`$ to $`\partial \mathrm{\Delta }_w`$. $`F_w|_{Y_{\infty }}:Y_{\infty }\to \partial \mathrm{\Delta }_w`$ is a topologically smooth Lagrangian torus fibration with respect to the symplectic form $`\omega =\partial \overline{\partial }\mathrm{log}h_w`$ on $`P_{\mathrm{\Sigma }^w}`$.
$`\square `$
With the Lagrangian fibration of $`Y_{\infty }`$ in hand, just as in the case of quintics, we can apply theorem 8.1 in to produce the Lagrangian torus fibration $`F_\psi :Y_\psi \to \partial \mathrm{\Delta }_w`$ for $`Y_\psi `$. As in the quintic case, we will rephrase the general theorem 8.1 of in the situation of the mirror of quintics.
###### Theorem 4.1
Start with a topologically smooth Lagrangian torus fibration $`F_{\infty }:Y_{\infty }\to \partial \mathrm{\Delta }_w`$. We can construct a symplectic morphism $`H_\psi :Y_\psi \to Y_{\infty }`$ such that $`F_\psi =F_{\infty }\circ H_\psi :Y_\psi \to \partial \mathrm{\Delta }_w`$ is a Lagrangian torus fibration with singular set $`C=Y_\psi \cap \mathrm{Sing}(Y_{\infty })`$ and singular locus $`\mathrm{\Gamma }=F_{\infty }(C)`$. For $`b\notin \mathrm{\Gamma }`$, $`F_\psi ^{-1}(b)`$ is a real $`3`$-torus. For $`b\in \mathrm{\Gamma }`$, $`F_\psi ^{-1}(b)`$ is singular. For $`b\in \mathrm{\Gamma }\cap \mathrm{\Delta }_w^{(k)}`$, $`F_\psi ^{-1}(b)\cap C=F_{\infty }^{-1}(b)\cap C`$ and $`H_\psi :F_\psi ^{-1}(b)\backslash C\to F_{\infty }^{-1}(b)\backslash C`$ is a topologically smooth $`(3-k)`$-torus fibration.
$`\square `$
We will start with the natural $`F_{\infty }=F_w|_{Y_{\infty }}`$. The task now is to understand the structure of the singular fibers and the singular locus of the corresponding fibration $`F_\psi `$. The singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_\psi (C)`$ is a fattening of some graph $`\mathrm{\Gamma }`$. To describe this graph $`\mathrm{\Gamma }`$, let us recall from the last section of that the singular locus of the Lagrangian fibration of $`Y_\psi ^0\subset P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ is a fattening of a graph $`\widehat{\mathrm{\Gamma }}\subset \partial \mathrm{\Delta }^{\ast }`$, where
$$\widehat{\mathrm{\Gamma }}=\underset{\{ijklm\}=\{12345\}}{\bigcup }\overline{P_{ij}P_{klm}}.$$
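Note that $`\widehat{\mathrm{\Gamma }}`$ consists of 10 such segments, one for each way of splitting $`\{1,2,3,4,5\}`$ into a pair $`\{i,j\}`$ and the complementary triple $`\{k,l,m\}`$.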
The following is a picture (from ) of a face of $`\mathrm{\Delta }^{\ast }`$.
Figure 5: $`\widehat{\mathrm{\Gamma }}`$ in a face of $`\mathrm{\Delta }^{\ast }`$
It is interesting to observe that
$$\mathrm{Sing}(P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}})=\mathrm{Sing}(Y_{\infty }).$$
Hence
$$\mathrm{Sing}(Y_\psi ^0)=C=Y_\psi ^0\cap \mathrm{Sing}(Y_{\infty }).$$
Let $`\stackrel{~}{P}_{ij}`$ be the point in $`C`$ that maps to $`P_{ij}\in \mathrm{\Delta }^{\ast }`$. Notice that $`\mathrm{Sing}(C)=\{\stackrel{~}{P}_{ij}\}`$. Along the smooth part of $`C`$, $`Y_\psi ^0`$ has a transverse $`A_4`$-singularity (the quotient singularity $`\mathbb{C}^2/\mathbb{Z}_5`$). Under the unique crepant resolution, $`C`$ is turned into 5 copies of $`C`$. Around $`\stackrel{~}{P}_{ij}\in \mathrm{Sing}(C)`$, the singularity of $`Y_\psi ^0`$ is much more complicated and the crepant resolution is not unique. The following is a picture (from ) of the fan of such a singularity and the subdivision fan of the standard crepant resolution.
Figure 6: the standard crepant resolution of the singularity at $`\stackrel{~}{P}_{ij}`$
$`\widehat{\pi }:P_{\mathrm{\Sigma }^w}\to P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$ naturally induces a map $`\pi :\mathrm{\Delta }_w\to \mathrm{\Delta }^{\ast }`$. We may take $`\mathrm{\Gamma }`$ to be the 1-skeleton of $`\pi ^{-1}(\widehat{\mathrm{\Gamma }})`$. In a small neighborhood of $`P_{ij}`$, $`\widehat{\mathrm{\Gamma }}\subset \partial \mathrm{\Delta }^{\ast }`$ is indicated in the following picture (Figure 7):
Figure 7: $`\widehat{\mathrm{\Gamma }}\subset \partial \mathrm{\Delta }^{\ast }`$ near $`P_{ij}`$
Under the standard crepant resolution, we get $`\mathrm{\Gamma }\subset \mathrm{\Delta }_w`$ as indicated in the following picture (Figure 8):
Figure 8: $`\mathrm{\Gamma }`$ for the standard crepant resolution
For a different crepant resolution, we get an alternative picture of $`\mathrm{\Gamma }\subset \mathrm{\Delta }_w`$ (Figure 9).
Figure 9: $`\mathrm{\Gamma }`$ for alternative crepant resolution
These singular locus graphs clearly resemble the singular locus graphs of section 2 obtained via the string diagram (amoeba) construction, although the two constructions are quite different.
###### Proposition 4.2
There exists a piecewise smooth automorphism of $`\mathrm{\Delta }_w`$ that preserves the subfaces of $`\mathrm{\Delta }_w`$, such that under this automorphism the 1-skeleton $`\mathrm{\Gamma }`$ of $`\pi ^{-1}(\widehat{\mathrm{\Gamma }})`$ is identified with the union of the 1-simplices in the barycentric subdivision of the 2-skeleton of $`\mathrm{\Delta }_w`$ that under the map $`\pi `$ do not meet the vertices of $`\mathrm{\Delta }^{\ast }`$.
Proof: Clearly, one only needs to construct such an automorphism on each of the 2-subfaces of $`\mathrm{\Delta }_w`$ that contain part of $`\mathrm{\Gamma }`$, and then extend it piecewise smoothly to the whole $`\mathrm{\Delta }_w`$. There are only two kinds of such 2-subfaces: those mapping to 1-subsimplices and those mapping to 2-subsimplices of $`\mathrm{\Delta }^{\ast }`$ under $`\pi `$. In both cases the required automorphisms are rather trivial to construct.
$`\square `$
Let $`\stackrel{~}{\mathrm{\Gamma }}_0`$ denote the singular locus for the fibration of $`Y_\psi ^0\subset P_{\mathrm{\Sigma }_{\mathrm{\Delta }^{\ast }}}`$. Similarly to the decomposition (2.3) in the quintic case, we have
$$\stackrel{~}{\mathrm{\Gamma }}_0=\stackrel{~}{\mathrm{\Gamma }}_0^0\cup \stackrel{~}{\mathrm{\Gamma }}_0^1\cup \stackrel{~}{\mathrm{\Gamma }}_0^2.$$
Consider the map $`\pi |_{\stackrel{~}{\mathrm{\Gamma }}}:\stackrel{~}{\mathrm{\Gamma }}\to \stackrel{~}{\mathrm{\Gamma }}_0`$. For $`k=1,2`$, define $`\stackrel{~}{\mathrm{\Gamma }}^k=(\pi |_{\stackrel{~}{\mathrm{\Gamma }}})^{-1}(\stackrel{~}{\mathrm{\Gamma }}_0^k)`$. Let $`\mathrm{\Gamma }^{\prime }=(\pi |_{\stackrel{~}{\mathrm{\Gamma }}})^{-1}(\stackrel{~}{\mathrm{\Gamma }}_0^0)`$. $`\mathrm{\Gamma }^{\prime }`$ represents the part of $`\mathrm{\Gamma }`$ that is not fattened in $`\stackrel{~}{\mathrm{\Gamma }}`$. $`\mathrm{\Gamma }^{\prime }`$ has the natural decomposition $`\mathrm{\Gamma }^{\prime }=(\mathrm{\Gamma }^{\prime })^1\cup \mathrm{\Gamma }^3`$, where $`(\mathrm{\Gamma }^{\prime })^1`$ is the smooth part of $`\mathrm{\Gamma }^{\prime }`$ and $`\mathrm{\Gamma }^3`$ is the singular part of $`\mathrm{\Gamma }`$ in the 1-skeleton of $`\mathrm{\Delta }_w`$. Together we have the decomposition
$$\stackrel{~}{\mathrm{\Gamma }}=(\mathrm{\Gamma }^{\prime })^1\cup \mathrm{\Gamma }^3\cup \stackrel{~}{\mathrm{\Gamma }}^1\cup \stackrel{~}{\mathrm{\Gamma }}^2.$$
Start with the natural $`F_{\mathrm{}}=F_w|_Y_{\mathrm{}}`$ in proposition 4.1, apply theorem 4.1, we have
###### Theorem 4.2
For $`Y_\psi P_{\mathrm{\Sigma }^w}`$, when $`w=(w_m)_{m\mathrm{\Delta }^0}\tau `$ is generic and near the large radius limit of $`\tau `$, the gradient flow method will produce a Lagrangian fibration $`F_\psi :Y_\psi \mathrm{\Delta }_w`$ with singular locus $`\stackrel{~}{\mathrm{\Gamma }}=F_{\mathrm{}}(C)`$ as fattening of graph $`\mathrm{\Gamma }`$. There are five types of fibers.
(i). For $`p\mathrm{\Delta }_w\backslash \stackrel{~}{\mathrm{\Gamma }}`$, $`F_\psi ^1(p)`$ is a smooth Lagrangian 3-torus.
(ii). For $`p\stackrel{~}{\mathrm{\Gamma }}^2`$, $`F_\psi ^1(p)`$ is a Lagrangian 3-torus with two circles collapsed to two singular points.
(iii). For $`p\stackrel{~}{\mathrm{\Gamma }}^1`$, $`F_\psi ^1(p)`$ is a Lagrangian 3-torus with one circle collapsed to one singular point.
(iv). For $`p\mathrm{\Gamma }^3`$, $`F_\psi ^1(p)`$ is a type III singular fiber.
(v). For $`p(\mathrm{\Gamma }^{})^1`$, $`F_\psi ^1(p)`$ is a type I singular fiber.
$`\mathrm{}`$
Remark: From the construction of this Lagrangian fibration $`F_\psi `$, one can already see the hint of SYZ mirror correspondence to the Lagrangian fibrations of quintics. Although the singular locus $`\stackrel{~}{\mathrm{\Gamma }}`$ of $`F_\psi `$ is of codimension one, $`\stackrel{~}{\mathrm{\Gamma }}`$ is only fattening the part of $`\mathrm{\Gamma }`$ which is mapped to the interior of 2-simplices of $`\mathrm{\Delta }^{}`$ under the map $`\pi `$. A significant part $`\mathrm{\Gamma }^{}`$ of $`\mathrm{\Gamma }`$ that results from the crepant resolution is not fattened. The SYZ mirror correspondence is already quite apparent on this part of the fibration.
### 4.2 Non-Lagrangian torus fibration with codimension two singular locus
The commuting diagram
gives rise to
Let $`C^0=Y_\psi ^0\mathrm{Sing}(Y_{\mathrm{}}^0)`$. In , the Lagrangian fibration of $`X_{\mathrm{}}`$ is modified to yield Lagrangian torus fibration with codimension two singular locus for $`X_\psi `$. Notice that the modified Lagrangian fibration of $`X_{\mathrm{}}`$ and the gradient flow of the Fermat type quintic Calabi-Yau family $`\{X_\psi \}`$ are invariant under the action of $`(_5)^3`$. The quotient gives us the corresponding gradient flow on $`P_{\mathrm{\Sigma }_\mathrm{\Delta }^{}}^4/(_5)^3`$ of the family $`\{Y_\psi ^0\}`$, which produces the Lagrangian torus fibration with codimension 2 singular locus for $`Y_\psi ^0`$ under the quotient orbifold Kähler metric. Here we will only make use of the modified torus fibration $`\widehat{F}:Y_{\mathrm{}}^0\mathrm{\Delta }^{}`$ that satisfies $`\widehat{F}(C^0)=\widehat{\mathrm{\Gamma }}`$.
###### Lemma 4.1
There exists torus fibration $`\widehat{F}_w:Y_{\mathrm{}}\mathrm{\Delta }_w`$ that makes the following diagram commute.
Proof: For any $`x\mathrm{\Delta }^{}`$ and $`y\widehat{F}^1(x)`$, naturally $`F_w:\widehat{\pi }^1(y)\pi ^1(F(y))`$. Observe that $`\widehat{F}`$ respects the stratification of $`Y_{\mathrm{}}`$ and $`\mathrm{\Delta }_w`$. $`F(y)`$ and $`x`$ are in the same strata. There is a natural identification between $`\pi ^1(F(y))`$ and $`\pi ^1(x)`$. Compose this identification with $`F_w`$, we get $`\widehat{F}_w:\widehat{\pi }^1(y)\pi ^1(x)`$. Piece together the definition for all $`(x,y)`$, we get the desired $`\widehat{F}_w`$.
$`\mathrm{}`$
Remark: $`\widehat{F}_w`$ so constructed is not Lagrangian fibration in general.
For $`C=Y_\psi \mathrm{Sing}(Y_{\mathrm{}})`$, the $`\widehat{F}_w`$ so constructed satisfies that $`\widehat{F}_w(C)=\mathrm{\Gamma }`$ is a graph. Let $`\mathrm{\Gamma }=\mathrm{\Gamma }^1\mathrm{\Gamma }^2\mathrm{\Gamma }^3`$, where $`\mathrm{\Gamma }^1`$ is the smooth part of $`\mathrm{\Gamma }`$, $`\mathrm{\Gamma }^3`$ is the singular part of $`\mathrm{\Gamma }`$ in the 1-skeleton of $`\mathrm{\Delta }_w`$, $`\mathrm{\Gamma }^2`$ is the rest of singular part of $`\mathrm{\Gamma }`$. Apply theorem 4.1 without the Lagrangian condition, we have
###### Theorem 4.3
For $`Y_\psi P_{\mathrm{\Sigma }^w}`$, when $`w=(w_m)_{m\mathrm{\Delta }^0}\tau `$ is generic and near the large radius limit of $`\tau `$, start with torus fibration $`\widehat{F}_w`$ the gradient flow method will produce a torus fibration $`\widehat{F}_\psi :Y_\psi \mathrm{\Delta }_w`$ with codimension 2 singular locus $`\mathrm{\Gamma }=\widehat{F}_w(C)`$. There are 4 types of fibers.
(i). For $`p\mathrm{\Delta }_w\backslash \mathrm{\Gamma }`$, $`\widehat{F}_\psi ^1(p)`$ is a 3-torus.
(ii). For $`p\mathrm{\Gamma }^1`$, $`\widehat{F}_\psi ^1(p)`$ is a type I singular fiber.
(iii). For $`p\mathrm{\Gamma }^2`$, $`\widehat{F}_\psi ^1(p)`$ is a type II singular fiber.
(iv). For $`p\mathrm{\Gamma }^3`$, $`\widehat{F}_\psi ^1(p)`$ is a type III singular fiber.
$`\mathrm{}`$
Remark: Lagrangian torus fibration is much more difficult to construct than non-Lagrangian torus fibration. If our purpose is merely to construct non-Lagrangian fibration, the kind of non-Lagrangian torus fibration in the above theorem can be constructed with much simpler method. Of course to get an elegant and canonical construction is another matter.
### 4.3 Lagrangian torus fibration with codimension two singular locus
Take a generic $`w=(w_m)_{m\mathrm{\Delta }^0}\tau =_{Z\stackrel{~}{Z}}\tau _Z`$ (the Kähler moduli of mirror $`Y_\psi `$). Since $`\mathrm{\Delta }`$ is a simplex, without loss of generality, we can normalize $`w`$ to assume that $`w_m=w^0`$ are all the same for $`m`$ being the vertices of $`\mathrm{\Delta }`$. It is easy to see that $`\mathrm{\Delta }_ww^0\mathrm{\Delta }^{}N`$, and $`\mathrm{\Delta }_w`$ geometrically can be viewed as $`w^0\mathrm{\Delta }^{}`$ with some corners chopped off.
According to the proof of theorem 2.4, the construction of Lagrangian torus fibration with codimension two singular locus can be reduced to establishing a symplectic isotopy from $`C=Y_\psi \mathrm{Sing}(Y_{\mathrm{}})`$ to $`\widehat{C}`$ such that $`F_w(\widehat{C})=\mathrm{\Gamma }`$ is a graph. As we know, $`\stackrel{~}{\mathrm{\Gamma }}=F_w(C)`$ is mostly graph. The non-graph part of $`\stackrel{~}{\mathrm{\Gamma }}`$ is a disjoint union of many curved triangle pieces (50 pieces to be exact). Each of such curved triangle is mapped via $`\pi `$ into a 2-subsimplex in $`\mathrm{\Delta }^{}`$. Each of such 2-subsimplex corresponds to a sub-$`^2`$ in $`P_{\mathrm{\Sigma }_\mathrm{\Delta }^{}}`$.
Let us concentrate on one of these $`^2`$’s. For convenience of notation, we will temporarily suspend all our notation convention and use $`\mathrm{\Delta }`$ and $`\mathrm{\Delta }_w`$ to denote the moment map ($`F`$ and $`F_w`$) images of $`(^2,\omega _{FS})`$ and its toric blow up $`(\widehat{}^2,\omega _w)`$. ($`\mathrm{\Delta }_w`$ geometrically can be viewed as $`w^0\mathrm{\Delta }`$ chopping off some corners.) We will use $`C_0`$ to denote both the curve $`\{z_1+z_2+1=0\}(^2,\omega _{FS})`$ and its proper transformation in $`(\widehat{}^2,\omega _w)`$. In such way, our construction is reduced to the following problem on $`^2`$.
Problem: Find a symplectic isotopy from $`C_0`$ to symplectic curve $`\widehat{C}(\widehat{}^2,\omega _w)`$ such that $`F_w(\widehat{C})`$ is a “Y” shaped graph in $`\mathrm{\Delta }_w`$.
Naturally $`\mathrm{\Delta }_ww^0\mathrm{\Delta }`$. Introduce the family of toric Kähler forms $`\omega _t=(1t)w^0\omega _{FS}+t\omega _w`$. Then we have the family $`F_t:(\widehat{}^2,\omega _t)\mathrm{\Delta }_t`$. $`F_t=(1t)w^0F+tF_w`$. Assume $`F_0(C_0)\mathrm{\Delta }_w`$. Since $`F_w(C_0)\mathrm{\Delta }_w`$, we have $`F_t(C_0)\mathrm{\Delta }_w`$ for all $`0t1`$. Take a neighborhood $`U`$ of $`_tF_t(C_0)`$ in $`\mathrm{\Delta }_w`$. Then we have:
###### Lemma 4.2
The natural maps $`f_t:(F_t^1(U),\omega _t)(F_1^1(U),\omega _1)`$ are symplectomorphisms.
Proof: Assume $`F_t=(h_1^t,h_2^t)`$, then
$$\omega _t=d\theta _1dh_1^t+d\theta _2dh_2^t.$$
The natural map $`f_t`$ satisfies $`F_1f_t=F_t`$. Namely $`h_i^1f_t=h_i^t`$. Therefore $`f_t^{}\omega _1=\omega _t`$.
$`\mathrm{}`$
###### Proposition 4.3
There exists a symplectic isotopy from $`C_0`$ to symplectic curve $`\widehat{C}(\widehat{}^2,\omega _w)`$ such that $`F_w(\widehat{C})`$ is a “Y” shaped graph in $`\mathrm{\Delta }_w`$.
Proof: According to theorem 4.1 in , there exists a symplectic isotopy $`\{C_t\}_{t[0,1]}`$ from $`C_0`$ to $`\widehat{C}_0`$ in $`^2`$ such that $`F(\widehat{C}_0)`$ is a “Y” shaped graph in $`\mathrm{\Delta }`$. Let $`\widehat{C}=f_0(\widehat{C}_0)`$. $`\{f_0(C_t)\}_{t[0,1]}`$ is a symplectic isotopy from $`f_0(C_0)`$ to $`\widehat{C}`$.
On the other hand, $`\{f_t(C_0)\}_{t[0,1]}`$ is a symplectic isotopy from $`f_0(C_0)`$ to $`C_0`$. Combine the two families we get a symplectic isotopy from $`C_0`$ to symplectic curve $`\widehat{C}(\widehat{}^2,\omega _w)`$ such that $`F_w(\widehat{C})=F_0(\widehat{C}_0)`$ is a “Y” shaped graph in $`\mathrm{\Delta }_w`$.
$`\mathrm{}`$
Now let us get back to the discussion of the mirror of quintic and resume our notation. Based on what we just proved, we can see that there exists a symplectic isotopy from $`C=Y_\psi \mathrm{Sing}(Y_{\mathrm{}})`$ to $`\widehat{C}`$ such that $`F_w(\widehat{C})=\mathrm{\Gamma }`$ is a graph in $`\mathrm{\Delta }_w`$. Use similar argument as in the proof of theorem 9.1 in , we have
###### Theorem 4.4
There exists a family $`\{H_s\}_{s[0,1]}`$ of piecewise smooth Lipschitz continuous symplectic automorphisms of $`\widehat{}^2`$ that restrict to identity on the three coordinate $`^1`$’s, such that $`H_0=\mathrm{id}`$ and $`\widehat{C}=H_1(C)`$.
$`\mathrm{}`$
Let $`\mathrm{\Gamma }=\mathrm{\Gamma }^1\mathrm{\Gamma }^2\mathrm{\Gamma }^3`$, where $`\mathrm{\Gamma }^1`$ is the smooth part of $`\mathrm{\Gamma }`$, $`\mathrm{\Gamma }^3`$ is the singular part of $`\mathrm{\Gamma }`$ in the 1-skeleton of $`\mathrm{\Delta }_w`$, $`\mathrm{\Gamma }^2`$ is the rest of singular part of $`\mathrm{\Gamma }`$. As in the proof of the theorem 2.4, we can use theorem 4.4 to produce a topologically smooth Lagrangian fibration $`\widehat{F}_{\mathrm{}}:Y_{\mathrm{}}\mathrm{\Delta }_w`$ such that $`\widehat{F}_{\mathrm{}}(C)=\mathrm{\Gamma }`$. Apply theorem 4.1, we have
###### Theorem 4.5
For $`Y_\psi P_{\mathrm{\Sigma }^w}`$, when $`w=(w_m)_{m\mathrm{\Delta }^0}\tau `$ is generic and near the large radius limit of $`\tau `$, start with Lagrangian fibration $`\widehat{F}_{\mathrm{}}`$, the gradient flow method will produce a Lagrangian fibration $`\widehat{F}_\psi :Y_\psi \mathrm{\Delta }_w`$ with the graph singular locus $`\mathrm{\Gamma }=\widehat{F}_{\mathrm{}}(C)`$. There are 4 types of fibers.
(i). For $`p\mathrm{\Delta }_w\backslash \mathrm{\Gamma }`$, $`\widehat{F}_\psi ^1(p)`$ is a smooth Lagrangian 3-torus.
(ii). For $`p\mathrm{\Gamma }^1`$, $`\widehat{F}_\psi ^1(p)`$ is a type I singular fiber.
(iii). For $`p\mathrm{\Gamma }^2`$, $`\widehat{F}_\psi ^1(p)`$ is a type II singular fiber.
(iv). For $`p\mathrm{\Gamma }^3`$, $`\widehat{F}_\psi ^1(p)`$ is a type III singular fiber.
$`\mathrm{}`$
Remark: The advantage of the approach here is that we may start with any Kähler form $`\omega _w`$ in the Kähler class of $`w`$. The disadvantage is that we have to impose restriction that $`F_0(C_0)\mathrm{\Delta }_w`$ for each curved triangular piece of the singular locus. This condition essentially requires the Kähler class $`w`$ to be not too far away from the ray of multiple of the anti-canonical class. To remove such restriction and to be able to construct Lagrangian fibrations with codimension two singular locus for Calabi-Yau hypersurfaces in more general toric varieties, we will use a different method based on a careful construction of the Kähler form $`\omega _w`$ so that the argument of deformation to codimension 2 will still work directly under such Kähler form. More details on this alternative construction of Lagrangian torus fibration of the mirror of quintic Calabi-Yau can be found in , where the construction of Lagrangian torus fibration and their singular locus and singular fibers are discussed for general Calabi-Yau hypersurfaces in toric varieties.
## 5 Symplectic topological SYZ construction for quintic Calabi-Yau manifolds
According to SYZ conjecture, on each Calabi-Yau manifold, there should be a special Lagrangian torus fibration. The mirror Calabi-Yau manifold is the moduli space of special Lagrangian torus together with a flat $`U(1)`$ bundle on the special Lagrangian torus, with suitable compactification. In particular, the special Lagrangian torus fibration on the mirror should be naturally dual to the special Lagrangian torus fibration on the original Calabi-Yau manifold over the smooth part of the fibration. For our purpose, we will concern the corresponding statement for Lagrangian torus fibrations.
After the Lagrangian torus fibrations of a Calabi-Yau manifold and its mirror are constructed, to justify the SYZ conjecture, one first need to find a natural identification of the base spaces (topologically $`S^3`$) of the Lagrangian fibrations, under which the singular locus of the two Lagrangian fibrations are naturally identified. Then one can compute the monodromy of the two fibration to see if they are dual to each other.
### 5.1 Identification of the bases
Apriori the two singular locus arise via very different mechanism and do not seem to match. Nevertheless, there is a natural identification of the base spaces of the two fibrations that naturally identify the singular locus as stated in theorem 5.1. We will carry out the identification of fibers and compute the monodromy in the next subsection.
For any $`u=(u_m)_{m\mathrm{\Delta }^0}(\stackrel{~}{N}_{}+i\tau )/\stackrel{~}{N}`$, consider the quintic Calabi-Yau $`X_u`$ defined by
$$p_u(z)=\underset{m\mathrm{\Delta }^0}{}e^{2\pi iu_m}z^m+z^{m_0}=0.$$
Let $`w_m=\mathrm{Im}(u_m)`$, then $`w=(w_m)_{m\mathrm{\Delta }^0}\tau ={\displaystyle \underset{Z\stackrel{~}{Z}}{}}\tau _Z`$. Assume that $`w`$ is generic, then $`w`$ determines a simplicial decomposition $`Z`$ of $`\mathrm{\Delta }^0`$, and $`w\tau _Z`$. Recall that $`Z`$ determines a graph $`\mathrm{\Gamma }_Z\mathrm{\Delta }^0`$ as the union of simplices in the baricenter subdivision of the simplicial decomposition $`Z`$ of $`\mathrm{\Delta }^0`$, without any integral points of $`\mathrm{\Delta }^0`$ as vertex. According to theorem 2.4, when $`w`$ is near the large complex limit in $`\tau _Z`$, (consequently, $`X_u`$ is in the chamber $`U_Z\tau _Z^{}`$,) by gradient flow method, we can construct Lagrangian torus fibration of $`X_u`$ over $`\mathrm{\Delta }`$ such that the singular locus is exactly $`\mathrm{\Gamma }_Z`$.
In the mirror side, $`w=(w_m)_{m\mathrm{\Delta }^0}`$ can be viewed as representing the Kähler class of the mirror of quintic. As discussed at the beginning of section 4, $`w=(w_m)_{m\mathrm{\Delta }^0}`$ naturally determines a real polyhedron $`\mathrm{\Delta }_wN`$. Theorem 4.5 asserts that by gradient flow method, we can construct Lagrangian fibration of $`Y_w`$ (mirror of quintic) over $`\mathrm{\Delta }_w`$ with the singular locus $`\mathrm{\Gamma }_Z^{}`$. According to proposition 4.2, The singular locus $`\mathrm{\Gamma }_Z^{}`$ is naturally the union of 1-simplices in the baricenter subdivision of $`\mathrm{\Delta }_w`$ that under the map $`\pi `$ do not meet the vertices of $`\mathrm{\Delta }^{}`$.
As a real polyhedron, $`\mathrm{\Delta }_w`$ has a dual real polyhedron $`\mathrm{\Delta }_w^{}M`$. $`\mathrm{\Delta }_w^{}`$ has another very simple description, as the convex hull of $`\{w_m^1m\}_{m\mathrm{\Delta }}`$. $`\{w_m^1m\}_{m\mathrm{\Delta }\backslash \{m_0\}}`$ is exactly the set of vertices of $`\mathrm{\Delta }_w^{}`$. According to this interpretation, there is a natural piecewise linear map $`h:\mathrm{\Delta }\mathrm{\Delta }_w^{}`$ that maps integral points in $`\mathrm{\Delta }`$ to vertices of $`\mathrm{\Delta }_w^{}`$. The map is actually a simplicial isomorphism from $`\mathrm{\Delta }`$ to $`\mathrm{\Delta }_w^{}`$ with respect to the simplicial decomposition $`Z`$ of $`\mathrm{\Delta }`$. Namely $`Z`$ is exactly the pullback of the simplicial decomposition of $`\mathrm{\Delta }_w^{}`$ via the piecewise linear map $`\mathrm{\Delta }\mathrm{\Delta }_w^{}`$ . From this point of view, the Lagrangian fibration of $`X_u`$ can also be thought of as a Lagrangian fibration over $`\mathrm{\Delta }_w^{}`$. Let $`(\mathrm{\Delta }_w^{})^0`$ denote the image of the 2-skeleton $`\mathrm{\Delta }^0`$ of $`\mathrm{\Delta }`$ into $`\mathrm{\Delta }_w^{}`$. The image of $`\mathrm{\Gamma }_Z`$, still denoted by $`\mathrm{\Gamma }_Z`$, is the union of simplices in the baricenter subdivision of $`(\mathrm{\Delta }_w^{})^0`$ that do not meet vertices of $`(\mathrm{\Delta }_w^{})^0`$.
Above discussions have reduced the identification of the base spaces of the Lagrangian fibrations for the quintic Calabi-Yau $`X_uU_Z`$ (with the singular locus $`\mathrm{\Gamma }_Z`$) and its mirror $`Y_w\tau _Z`$ (with the singular locus $`\mathrm{\Gamma }_Z^{}`$) to the problem of identifying $`(\mathrm{\Gamma }_Z,\mathrm{\Delta }_w^{})`$ and $`(\mathrm{\Gamma }_Z^{},\mathrm{\Delta }_w)`$.
In general, for any convex polyhedron $`\mathrm{\Delta }`$, there is always a natural piecewise linear identification of $`\mathrm{\Delta }`$ and its dual polyhedron $`\mathrm{\Delta }^{}`$. For any face $`\alpha `$ of $`\mathrm{\Delta }`$, recall the dual face of $`\alpha `$ in $`\mathrm{\Delta }^{}`$ is denoted as $`\alpha ^{}`$. Let $`\widehat{\alpha }`$ ($`\widehat{\alpha }^{}`$) denote the baricenter of $`\alpha `$ ($`\alpha ^{}`$). Then we have the well known correspondence:
###### Proposition 5.1
The dual correspondence naturally induces a piecewise linear homeomorphism $`\mathrm{\Delta }\mathrm{\Delta }^{}`$ with respect to the baricenter subdivision. Under this homomorphism the baricenter $`\widehat{\alpha }`$ of a face $`\alpha `$ of $`\mathrm{\Delta }`$ is mapped to the baricenter $`\widehat{\alpha }^{}`$ of the dual face $`\alpha ^{}`$ on $`\mathrm{\Delta }^{}`$, and simplex $`\{\widehat{\alpha }_k\}_{k=0}^l`$ is mapped to simplex $`\{\widehat{\alpha }_k^{}\}_{k=0}^l`$ linearly.
$`\mathrm{}`$
Applying this general result to $`\mathrm{\Delta }_w`$, we get our first topological result on SYZ conjecture.
###### Theorem 5.1
There is a natural piecewise linear homeomorphism $`\mathrm{\Delta }_w\mathrm{\Delta }_w^{}`$ that identifies the singular locus $`\mathrm{\Gamma }_Z^{}`$ and $`\mathrm{\Gamma }_Z`$.
Proof: The two vertices of a 1-simplex in $`\mathrm{\Gamma }_Z^{}`$ are the baricenters of a 1-dimensional face and a 2-dimensional face of $`\mathrm{\Delta }_w`$ that do not map entirely to a vertex of $`\mathrm{\Delta }^{}`$ under $`\pi `$.
The two vertices of a 1-simplex in $`\mathrm{\Gamma }_Z`$ are the baricenters of a 1-dimensional simplex and a 2-dimensional simplex in $`(\mathrm{\Delta }_w^{})^0`$.
Under the dual map, one and two dimensional faces of $`\mathrm{\Delta }_w`$ that do not map entirely to a vertex of $`\mathrm{\Delta }^{}`$ under $`\pi `$ are naturally dual to two and one dimensional simplices in $`(\mathrm{\Delta }_w^{})^0`$.
$`\mathrm{}`$
### 5.2 Duality of fibers and monodromy
To fully establish SYZ construction, we also need to establish the duality relation of the Lagrangian torus fibers in the Lagrangian fibrations of quintic Calabi-Yau hypersurfaces and their mirror manifolds. There are many ways to see this, for example, one may compute monodromy operators of the two fibrations, and show that they are dual to each other. We will use a more direct method, we will establish an almost canonical characterization of Lagrangian torus fibers of the Lagrangian fibrations of quintics and the mirror manifolds, and prove that under this almost canonical characterization the two fibers are naturally dual to each other.
Consider the Calabi-Yau quintic $`X`$ defined by the quintic polynomial
$$p_u(z)=\underset{m\mathrm{\Delta }^0}{}e^{2\pi iu_m}z^m+z^{m_0}=\underset{m\mathrm{\Delta }^0}{}e^{2\pi i\eta _m}t^{w_m}z^m+z^{m_0}=0,$$
where $`u=\{u_m\}_{m\mathrm{\Delta }^0}`$ and $`u_m=\eta _mi\frac{\mathrm{log}t}{2\pi }w_m`$. Lagrangian torus fibration $`X\mathrm{\Delta }`$ is constructed by deforming the natural Lagrangian torus fibration of the large complex limit $`X_{\mathrm{}}`$ via gradient flow, where $`X_{\mathrm{}}`$ is defined by
$$p_{\mathrm{}}(z)=z^{m_0}=0.$$
For any vertex $`n`$ of $`\mathrm{\Delta }^{}N`$, there is a corresponding 3-dimensional face $`\alpha _n`$ of $`\mathrm{\Delta }`$ defined as
$$\alpha _n=\{m\mathrm{\Delta }|m,n=1\}.$$
Namely $`n`$ is the unique supporting function of $`\alpha _n`$. Clearly, fibers of the fibration $`X_{\mathrm{}}\mathrm{\Delta }`$ over $`\alpha _n^0`$ (interior of $`\alpha _n`$) are naturally identified with
$$T_n(N_n_{})/N_n,$$
where $`N_n=N/\{n\}`$. Since Lagrangian torus fibration $`X\mathrm{\Delta }`$ is a deformation of fibration $`X_{\mathrm{}}\mathrm{\Delta }`$, we have
###### Proposition 5.2
3-torus fibers of the Lagrangian fibration $`X\mathrm{\Delta }`$ over the interior of $`\alpha _n`$ can be naturally identified with $`T_n`$.
$`\mathrm{}`$
The dual torus of $`T_n`$ is naturally
$$T_n^{}(M_n_{})/M_n,$$
where $`M_n=n^{}M`$.
Similarly, for any integral $`m\mathrm{\Delta }`$, we can define
$$T_m^{}(N_m_{})/N_m,$$
where $`N_m=m^{}N`$. By multiplication, we have the map $`m:T_N^{}S^1`$. Clearly, $`T_m^{}=m^1(0)`$. Similarly, for $`\eta _mS^1`$, we can define $`T_{m,\eta _m}^{}=m^1(\eta _m)`$. We have
###### Proposition 5.3
When $`X`$ is near the large complex limit, 3-torus fiber $`X_b`$ of the Lagrangian fibration $`X\mathrm{\Delta }`$ over $`b`$ in a small neighborhood $`U_m`$ of integral $`m\mathrm{\Delta }`$ can be identified with $`T_{m,\eta _m}^{}`$. In addition, the identification of $`X_b`$ and $`T_n`$ (as discussed in proposition 5.2) can be modified by automorphisms of $`T_n`$ close to the identity map and smoothly depending on $`b\alpha _n^0`$, such that for $`b\alpha _n^0U_m`$ the following diagram commutes.
$$\begin{array}{ccc}X_b& & T_{m,\eta _m}^{}\\ & & \\ T_n& & T_N^{}\end{array}$$
Proof: Recall that the moment map
$$F_w(x)=\underset{m\mathrm{\Delta }}{}\frac{|x^m|_w^2}{|x|_w^2}m$$
maps $`^4`$ to $`\mathrm{\Delta }`$. For integral $`m^{}\mathrm{\Delta }`$, apply propositions 3.2, 3.3, 3.4 in to the simplex $`S=\{m_0,m^{}\}`$, we have a $`\delta >0`$ and a small neighborhood $`U_m^{}`$ of $`m^{}\mathrm{\Delta }`$ such that
$$e^{2\pi iu_m}z^m=O(t^{w_m^{}+\delta }),\mathrm{for}m\mathrm{\Delta }^0\backslash \{m^{}\}\mathrm{and}zF_w^1(U_m^{}).$$
Let $`p_u=p_{\mathrm{}}+\stackrel{~}{p}_u`$, where $`p_{\mathrm{}}=z^{m_0}`$. When $`X`$ is near the large complex limit, (namely, $`\{w_m\}_{m\mathrm{\Delta }^0}`$ is convex with respect to $`\mathrm{\Delta }^0`$ and $`t`$ is small,) for $`zF_w^1(U_m^{})`$ near $`F_w^1(m^{})`$, we have
$$\stackrel{~}{p}_u(z)=\underset{m\mathrm{\Delta }^0}{}e^{2\pi iu_m}z^m=e^{2\pi iu_m^{}}z^m^{}(1+O(t^\delta )).$$
The gradient flow will flow $`X=X_1`$ to $`X_{\mathrm{}}`$ through the family $`\{X_\psi \}_{\psi [1,+\mathrm{})}`$, where $`X_\psi `$ is defined by $`p_\psi (z)=\psi z^{m_0}+\stackrel{~}{p}_u(z)`$. More precisely, we actually use the normalized flow of $`V=\frac{f}{|f|^2}`$, where $`f=\mathrm{Re}(q)`$ and
$$q=\frac{p_{\mathrm{}}(z)}{\stackrel{~}{p}_u(z)}=e^{2\pi iu_m^{}}z^{m_0m^{}}\rho (z),\mathrm{where}\rho (z)=1+O(t^\delta ).$$
The flow produces Lagrangian fibrations $`F_\psi :X_\psi \mathrm{\Delta }`$ from $`F_{\mathrm{}}:X_{\mathrm{}}\mathrm{\Delta }`$.
Since $`\stackrel{~}{p}_u(z)`$ is non-vanishing near $`F_w^1(m^{})`$, the flow of $`V`$ moves Lagrangian fibers of $`X_{\mathrm{}}`$ near $`F_w^1(m^{})`$ entirely away from $`X_{\mathrm{}}`$ to Lagrangian 3-torus fibers of $`X_\psi `$ inside the large complex torus $`T_N`$. For monodromy computation, we need to show that these 3-torus fibers in $`T_N`$ are of the same homotopy class as $`T_m^{}^{}T_N`$. Our construction is more precise here, actually identifying these 3-torus fibers with $`T_{m,\eta _m}^{}`$.
Recall that $`T_N=(N_{})/N`$. Let $`T_N^{}=(N_{})/N`$. Then there is the natural map $`P:T_NT_N^{}`$. For any $`bU_m^{}`$, the fiber $`X_b=F_1^1(b)`$ deforms through $`X_{\psi ,b}=F_\psi ^1(b)`$ according to the inverse flow. Recall from that before running the gradient flow, we need to perturb the Kähler metric to toroidal metric near $`C=X_\psi \mathrm{Sing}(X_{\mathrm{}})`$. In the region we are discussing about, $`X_\psi `$ will move, therefore is away from $`C`$ and the Kähler metric does not need to be modified and is still the original toric metric corresponding to the moment map $`F_w`$. The local structure of the normalized gradient flow is summarized in theorems 5.3 and 5.4 in , which implies that $`P(X_{\psi ,b})`$ will converge to
$$P(X_{\mathrm{},b})=\left\{\theta T_N^{}\right|m^{}m_0,\theta =\eta _m^{}+\frac{\mathrm{Im}(\mathrm{log}\rho (z))}{2\pi }\}$$
along the inverse flow when $`\psi `$ approaches $`\mathrm{}`$. In such a way, $`X_b`$ can be identified with $`P(X_{\mathrm{},b})T_N^{}`$. (The identification in proposition 5.2 also factor through this identification.)
Notice that
$$T_{m^{},\eta _m^{}}^{}=\left\{\theta T_N^{}\right|m^{}m_0,\theta =\eta _m^{}\}$$
and $`\mathrm{log}\rho (z)=O(t^\delta )`$ can be arbitrarily small when $`X`$ is near the large complex limit. Namely, $`P(X_{\mathrm{},b})`$ are small perturbations of $`T_{m^{},\eta _m^{}}^{}`$ that smoothly depend on $`bU_m^{}`$. A projection along some direction transverse to $`T_{m^{},\eta _m^{}}^{}`$ will identify all these $`P(X_{\mathrm{},b})`$ with $`T_{m^{},\eta _m^{}}^{}`$ smoothly depending on $`bU_m^{}`$. In this way, we get the desired identifications $`X_bT_{m^{},\eta _m^{}}^{}`$ for $`bU_m^{}`$.
Since both identifications $`X_bT_{m^{},\eta _m^{}}^{}`$ and $`X_bT_n`$ factor through $`P(X_{\mathrm{},b})`$, the diagram in the proposition is equivalent to the following diagram
$$\begin{array}{ccc}P(X_{\mathrm{},b})& & T_{m,\eta _m}^{}\\ & & \\ T_n& & T_N^{}\end{array}$$
Since $`P(X_{\mathrm{},b})`$ is a small perturbation of $`T_{m,\eta _m}^{}`$, and the map $`P(X_{\mathrm{},b})T_n`$ also factor through $`T_N^{}`$ naturally, the automorphism of $`T_n`$ defined as composition of the 4 arrows in the diagram (reversing the arrow of the map $`P(X_{\mathrm{},b})T_n`$) will be as desired and is close to the identity map of $`T_n`$.
$`\mathrm{}`$
Now, let us consider the mirror Calabi-Yau $`YP_{\mathrm{\Sigma }^w}`$. Also by gradient flow method, we can construct Lagrangian fibration $`Y\mathrm{\Delta }_w`$. For any $`m\mathrm{\Delta }M`$, there is a corresponding 3-dimensional face $`\alpha _m`$ of $`\mathrm{\Delta }_w`$ defined as
$$\alpha _m=\{n\mathrm{\Delta }_w|w_m^1m,n=1\}.$$
Namely $`w_m^1m`$ is the unique supporting function of $`\alpha _m`$. Clearly, fibers of the fibration $`Y_{\mathrm{}}\mathrm{\Delta }_w`$ over $`\alpha _m^0`$ (interior of $`\alpha _m`$) are naturally identified with
$$T_m(M_m_{})/M_m,$$
where $`M_m=M/\{m\}`$. Since Lagrangian torus fibration $`Y\mathrm{\Delta }_w`$ is a deformation of fibration $`Y_{\mathrm{}}\mathrm{\Delta }_w`$, we have
###### Proposition 5.4
3-torus fibers of the Lagrangian fibration $`Y\mathrm{\Delta }_w`$ over interior of $`\alpha _m`$ can be naturally identified with $`T_m`$.
$`\mathrm{}`$
Recall the natural map $`\pi :\mathrm{\Delta }_w\mathrm{\Delta }^{}`$. For any vertex of $`n`$ of $`\mathrm{\Delta }^{}`$, we have
###### Proposition 5.5
3-torus fiber $`Y_b`$ of the Lagrangian fibration $`Y\mathrm{\Delta }_w`$ over $`b`$ in a small neighborhood $`U_n`$ of $`\pi ^1(n)\mathrm{\Delta }_w`$ can be naturally identified with $`T_n^{}`$. In addition, if $`b\alpha _m^0`$, then the following diagram commutes.
$$\begin{array}{ccc}Y_b& & T_n^{}\\ & & \\ T_m& & T_M^{}\end{array}$$
$`\mathrm{}`$
This proposition can be proved in a way similar to proposition 5.3. For detail, please see , where a more general result is proved.
Recall from the theorem 5.1, we have the natural piecewise linear identification of the two base of the Lagrangian fibrations
$$s:\mathrm{\Delta }_w\mathrm{\Delta }_w^{}\mathrm{\Delta }.$$
Let $`\alpha _n^0`$, $`\alpha _m^0`$ denote the interior of $`\alpha _n`$, $`\alpha _m`$. Then we have
###### Proposition 5.6
$`U_m`$, $`U_n`$ can be suitably chosen such that
$$s(U_n)=\alpha _n^0,s(\alpha _m^0)=U_m.$$
And
$$\mathrm{\Delta }_w\backslash \mathrm{\Gamma }^{}=\left(\underset{m\mathrm{\Delta }^0}{}\alpha _m^0\right)\left(\underset{n\mathrm{\Delta }^{}}{}U_n\right),$$
$$\mathrm{\Delta }\backslash \mathrm{\Gamma }=\left(\underset{n\mathrm{\Delta }^{}}{}\alpha _n^0\right)\left(\underset{m\mathrm{\Delta }^0}{}U_m\right).$$
Proof: When we enlarge $`U_m`$, we need to extend the identification of the fibers with $`T_{m,\eta _m}^{}`$ and modify identification of fibers over $`\alpha _n^0`$ with $`T_n`$ accordingly to make sure that the diagram still commute (as in proposition 5.3). Such operations are very easy to do, because $`U_m`$ is contractible and the fibration of $`X_\psi `$ over $`U_m`$ is trivial and smooth.
$`\mathrm{}`$
We are now ready to establish the dual relation of the fibers. Our construction of the torus fibration for $`Y`$ only depends on the real Kähler moduli, which under the monomial-divisor map corresponds to the moduli of quintics with real coefficients. Therefore, we will restrict our duality discussion to quintic $`X`$ in such real moduli space, where duality relation is much more precise and without shift. Notice that when the quintic has real coefficients, we have $`\eta _m=0`$ for all $`m`$ and $`T_{m,\eta _m}^{}=T_m^{}`$. For any $`b\mathrm{\Delta }_w\backslash \mathrm{\Gamma }^{}`$, let $`Y_b`$ ($`X_{s(b)}`$) denote the fiber of the Lagrangian fibration $`Y\mathrm{\Delta }_w`$ ($`X\mathrm{\Delta }`$) over $`b\mathrm{\Delta }_w\backslash \mathrm{\Gamma }^{}`$ ($`s(b)\mathrm{\Delta }\backslash \mathrm{\Gamma }`$). Then we have
###### Theorem 5.2
For any $`b\mathrm{\Delta }_w\backslash \mathrm{\Gamma }^{}`$, $`Y_b`$ is naturally dual to $`X_{s(b)}`$.
Proof: Based on above propositions, duality is easy to establish. The only thing that need to be addressed is that duality defined in two ways according to $`U_m`$ or $`\alpha _n^0`$ for $`bU_m\alpha _n^0`$ coincide. For this purpose, one only need to show that the following diagrams commute.
$$\begin{array}{ccccccc}X_{s(b)}& & T_m^{}& & Y_b& & T_n^{}\\ & & & & & & \\ T_n& & T_N^{}& & T_m& & T_M^{}\end{array}$$
This is proved in proposition 5.3 and 5.5.
$`\mathrm{}`$
With the explicit identification of fibers in place, monodromy computation becomes a piece of cake! Consider the path $`\gamma _{nmn^{}m^{}}=\alpha _n^0U_m\alpha _n^{}^0U_m^{}\alpha _n^0`$ on $`\mathrm{\Delta }`$, where $`n,n^{}\mathrm{\Delta }^{}`$, $`m,m^{}\mathrm{\Delta }^0`$ satisfy
$$m,n=m,n^{}=m^{},n=m^{},n^{}=1.$$
This condition implies that $`\alpha _n`$ and $`\alpha _n^{}`$ have common face that contains $`m,m^{}`$. Correspondingly we have the diagram
$$\begin{array}{ccc}N_n& & N_m\\ & & \\ N_m^{}& & N_n^{}\end{array}$$
$$xx+m,xnx+m,xn+m^{},x+m,xnn^{}=x+m,xn+m^{}m,xn^{}$$
Compose the four operators and modulo $`n`$, we get
###### Theorem 5.3
The monodromy operator along $`\gamma _{nmn^{}m^{}}`$ is
$$[x][x]+m^{}m,x[n^{}]\mathrm{for}[x]N_n.$$
$`\mathrm{}`$
Remark: Now we have found an extremely simple way to compute monodromy. All our monodromy computation in can be much more easily performed by this method. Since monodromy computation is becoming so trivial, I will omit the corresponding computation of monodromy operator for the mirror fibration, which is naturally dual to the monodromy operator for the fibration of quintic.
## 6 Singular fibers of Lagrangian torus fibration
In the previous section we established the duality between the smooth torus fibers of the Lagrangian torus fibrations for quintics and their mirrors. In this section we will discuss the generic singular fibers and their duality.
Let us start with some facts for $`C^{\mathrm{}}`$ Lagrangian fibrations. Let $`(X,\omega )`$ be a smooth symplectic manifold. A fibration $`F:XB`$ is called a $`C^l`$-Lagrangian fibration, if $`F`$ is a $`C^l`$ map and the smooth part of each fiber is Lagrangian and belongs to the regular point set of map $`F`$. The following well known results were discussed in the section 2 of .
###### Theorem 6.1
Let $`F:XB`$ be a $`C^{1,1}`$-Lagrangian fibration, then for any $`bB`$, there is an action of $`T_b^{}B`$ on $`X_b=f^1(b)`$.
$`\mathrm{}`$
Remark: In particular, this theorem applies to $`C^{\mathrm{}}`$ Lagrangian fibrations.
###### Corollary 6.1
For any $`bB`$,
$$\mathrm{Reg}(X_b)=\mathrm{Reg}(F^1(b))=\underset{l}{}O_l$$
is a disjoint union of orbits of $`T_b^{}B`$, where each $`O_l`$ is diffeomorphic to $`(S^1)^k\times ^m`$ for some $`k+m=dimB`$.
$`\mathrm{}`$
Remark: When $`F`$ is generic in certain sense, we expect $`F^1(b)={\displaystyle \underset{l}{}}O_l`$ to be a disjoint union of finitely many orbits of $`T_b^{}B`$, where each $`O_l`$ is diffeomorphic to $`(S^1)^k\times ^m`$ for some $`k+mdimB`$.
From these facts, one can see that $`C^{\mathrm{}}`$ Lagrangian fibrations put rather strict restriction on the topology of singular fibers. In the following, we will discuss certain generic singular fibers that include the singular fibers which appeared in the Lagrangian torus fibrations we constructed for generic quintics and Fermat type quintics. Although these singular fibers all satisfy the topological constraints in corollary 6.1, it is not immediately clear whether they can be realized as singular fibers for $`C^{\mathrm{}}`$ Lagrangian torus fibrations, especially the type two singular fibers.
As we know, even for elliptic fibrations the singular fibers in general can be quite complicated. It is crucial to restrict our attention to some classes of stable singular fibers with certain generic nature.
In three dimensions, singular fibers conceivably can be even more complicated. To have a meaningful discussion, it is crucial for us to first concentrate on certain classes of “generic” singular fibers. We will restrict our discussion to three types of singular fibers.
Type one singular fiber comes from the product of a 2-dimensional singular fiber with a circle. It has one vanishing 1-cycle. In particular, we denote the product of 2-dimensional $`A_n`$ singular fiber with a circle by $`I_n`$.
Type two singular fiber has one vanishing 1-cycle, and has a natural map to a 2-torus with fiber being either a point or a circle representing the vanishing cycle.
Type three singular fiber has two independent vanishing 1-cycles, and has a natural map to a circle with fiber being either a point or a 2-torus representing the vanishing cycles.
###### Proposition 6.1
Type three singular fibers are parametrized by positive integers, denoted by $`III_n`$. $`n`$ is the number of point fibers for the map to the circle. $`III_n`$ fiber has Euler number equal to $`n`$. In particular, type $`III(=III_1)`$ fiber is the generic singular fiber with Euler number 1.
$`\mathrm{}`$
Figure 10: Type $`III_5`$ and type $`III`$ fibers
Type two singular fiber has a map to a 2-torus $`T^2`$. The set of points on the 2-torus with point fiber is typically a graph $`\mathrm{\Gamma }`$ on the 2-torus, which divides the 2-torus into several regions. In general a graph $`\mathrm{\Gamma }`$ that divides $`T^2`$ into $`n`$ region has Euler number equal to $`n`$. We have
###### Proposition 6.2
For a type two singular fiber, if the corresponding graph $`\mathrm{\Gamma }T^2`$ devides the 2-torus $`T^2`$ into $`n`$ regions, then the Euler number for the singular fiber is equal to $`n`$.
$`\mathrm{}`$
We are interested in the generic type two singular fiber with Euler number equal to $`1`$. We have
###### Proposition 6.3
There are only two type two singular fibers with Euler number equal to $`1`$, the type $`II`$ fiber corresponding to parallel sexgon and type $`\stackrel{~}{II}`$ fiber corresponding to parallelgram.
Proof: Since Euler number equals to $`1`$, by the previous proposition, $`\mathrm{\Gamma }`$ divides $`T^2`$ into only one region. Going to the universal cover of $`T^2`$, this one region gives a filling of $`^2`$ by only one type of polygon. It is well known that there are only two types of polygon that can fill the plane, the parallel sexgon and the parallelgram.
$`\mathrm{}`$
Figure 11: Type $`\stackrel{~}{II}`$ fiber
Figure 12: Type $`II_{5\times 5}`$ and type $`II`$ fibers
$`n\times m`$ cover of type $`II`$ ($`\stackrel{~}{II}`$) singular fiber are type two singular fiber with Euler number equal to $`mn`$. We will denote this kind of singlar fiber by $`II_{n\times m}`$ ($`\stackrel{~}{II}_{n\times m}`$).
In our generic Lagrangian torus fibrations for quintics, we have generic singular fiber type $`I`$, $`II`$, $`III`$. Type $`I`$ fiber is dual to type $`I`$ fiber, type $`II`$ fiber is dual to type $`III`$ fiber. Clearly, the dual relations change only the sign of the Euler number of the singular fiber, while keeping the absolute value unchanged. This apparently is in accordance with mirror symmetry.
In our discussion of Lagrangian torus fibrations of Fermat type quintics, we have singular fibers of type $`I_5`$, $`II_{5\times 5}`$, $`III_5`$.
Summarize our results, we have proved the symplectic topological version of SYZ conjecture for quintic Calabi-Yau hypersurfaces, including the refinement involving the singular fibers. Recall that $`\stackrel{~}{Z}`$ denotes the set of integral simplicial decompositions of $`\mathrm{\Delta }^0`$ (the 2-skeleton of $`\mathrm{\Delta }`$).
###### Theorem 6.2
For $`Z\stackrel{~}{Z}`$, consider generic quintic Calabi-Yau hypersurface $`X`$ in the chamber $`U_Z`$ near the large complex limit and its mirror Calabi-Yau manifold $`Y`$ with the Kähler moduli $`w\tau _Z`$ near the large radius limit, there exist corresponding Lagrangian torus fibrations
$$\begin{array}{ccccccc}X_{s(b)}& & X& & Y_b& & Y\\ & & & & & & \\ & & \mathrm{\Delta }& & & & \mathrm{\Delta }_w\end{array}$$
with singular locus $`\mathrm{\Gamma }_Z\mathrm{\Delta }`$ and $`\mathrm{\Gamma }_Z^{}\mathrm{\Delta }_w`$, where $`s:\mathrm{\Delta }_w\mathrm{\Delta }`$ is a natural homeomorphism and $`s(\mathrm{\Gamma }_Z^{})=\mathrm{\Gamma }_Z`$. For $`b\mathrm{\Delta }_w\backslash \mathrm{\Gamma }_Z^{}`$, the corresponding fibers $`X_{s(b)}`$ and $`Y_b`$ are naturally dual to each other.
Singular fibers over the smooth part of $`\mathrm{\Gamma }_Z`$ ($`\mathrm{\Gamma }_Z^{}`$) are of type I. Singular fibers over the vertices of $`\mathrm{\Gamma }_Z`$ ($`\mathrm{\Gamma }_Z^{}`$) are of type II or III. Type I fiber is dual to type $`I`$ fiber, type II fiber and type III fiber are dual to each other.
$`\mathrm{}`$
The original SYZ mirror conjecture was rather sketchy in nature, with no mentioning of singular locus, singular fibers and duality of singular fibers, which is essential if one wants, for example, to use SYZ to construct mirror manifold. Our construction of Lagrangian torus fibrations and proof of symplectic topological SYZ for generic Calabi-Yau quintic hypersurfaces explicitly produce the 3 types of generic singular fibers (type $`I`$, $`II`$, $`III`$) and determine the way they are dual to each other under mirror symmetry. Our construction clearly indicates what should happen in general. In particular, it suggests that type II singular fiber with Euler number $`1`$ should be dual to type III singular fiber with Euler number $`1`$. This together with the knowledge of singular locus from our construction enable us to give a more precise formulation of SYZ mirror conjecture (in symplectic category). This precise formulation naturally suggests a way to construct mirror manifold in general from a generic Lagrangian torus fibration of a Calabi-Yau manifold.
Precise symplectic SYZ mirror conjecture For any Calabi-Yau 3-fold $`X`$, with Calabi-Yau metric $`\omega _g`$ and holomorphic volume form $`\mathrm{\Omega }`$, there exists a Lagrangian torus fibration of $`X`$ over $`S^3`$
$$\begin{array}{ccc}T^3& & X\\ & & \\ & & S^3\end{array}$$
with a Lagrangian section and codimension 2 singular locus $`\mathrm{\Gamma }S^3`$, such that general fibers (over $`S^3\backslash \mathrm{\Gamma }`$) are 3-torus. For generic such fibration, $`\mathrm{\Gamma }`$ is a graph with only 3-valent vertices. Let $`\mathrm{\Gamma }=\mathrm{\Gamma }^1\mathrm{\Gamma }^2\mathrm{\Gamma }^3`$, where $`\mathrm{\Gamma }^1`$ is the smooth part of $`\mathrm{\Gamma }`$, $`\mathrm{\Gamma }^2\mathrm{\Gamma }^3`$ is the set of the vertices of $`\mathrm{\Gamma }`$. For any leg $`\gamma \mathrm{\Gamma }^1`$, the monodromy of $`H_1(X_b)`$ of fiber under suitable basis is
$$T_\gamma =\left(\begin{array}{ccc}1& 1& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right).$$
Singular fiber along $`\gamma `$ is of type $`I`$.
Consider a vertex $`P\mathrm{\Gamma }^2\mathrm{\Gamma }^3`$ with legs $`\gamma _1`$, $`\gamma _2`$, $`\gamma _3`$. Correspondingly, we have monodromy operators $`T_1`$, $`T_2`$, $`T_3`$.
For $`P\mathrm{\Gamma }^2`$, under suitable basis we have
$$T_1=\left(\begin{array}{ccc}1& 1& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),T_2=\left(\begin{array}{ccc}1& 0& 1\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),T_3=\left(\begin{array}{ccc}1& 1& 1\\ 0& 1& 0\\ 0& 0& 1\end{array}\right).$$
Singular fiber over $`P`$ is of type $`II`$.
For $`P\mathrm{\Gamma }^3`$, under suitable basis we have
$$T_1=\left(\begin{array}{ccc}1& 0& 0\\ 1& 1& 0\\ 0& 0& 1\end{array}\right),T_2=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 1& 0& 1\end{array}\right),T_3=\left(\begin{array}{ccc}1& 0& 0\\ 1& 1& 0\\ 1& 0& 1\end{array}\right).$$
Singular fiber over $`P`$ is of type $`III`$.
The Lagrangian fibration for the mirror Calabi-Yau manifold $`Y`$ has the same base $`S^3`$ and singular locus $`\mathrm{\Gamma }S^3`$. For $`bS^3\backslash \mathrm{\Gamma }`$, $`Y_b`$ is the dual torus of $`X_bT^3`$. In another word, the $`T^3`$-fibrations
$$\begin{array}{ccc}T^3& & X\\ & & \\ & & S^3\backslash \mathrm{\Gamma }\end{array}\begin{array}{ccc}T^3& & Y\\ & & \\ & & S^3\backslash \mathrm{\Gamma }\end{array}$$
are dual to each other. In particular the monodromy operators will be dual to each other.
For the fibration of $`Y`$, singular fibers over $`\mathrm{\Gamma }^1`$ should be type $`I`$, singular fibers over $`\mathrm{\Gamma }^2`$ should be type $`III`$, singular fibers over $`\mathrm{\Gamma }^2`$ should be type $`II`$. Namely, dual singular fiber of a type $`I`$ singular fiber is still type $`I`$. Type $`II`$ and $`III`$ singular fibers are dual to each other.
Conjecture: Type $`II`$, $`\stackrel{~}{II}`$, $`III`$ singular fibers are the only possible generic singular fibers with Euler number equal to $`\pm 1`$.
Remark: The last piece of the SYZ puzzle we have not yet discussed in detail is the construction of a section of the Lagrangian fibration. With the explicit description of Lagrangian fibers in section 5, it is not hard to construct the section. When the coefficients of the quintic are all real, one can take the identity section on each piece with explicit description as in proposition 5.3. they piece together to form an global section that can be extended to be over the compliment of the singular locus. With a little more care, such section can be made Lagrangian. We will describe the precise construction of global sections in , where more general Lagrangian cycles in Calabi-Yau hypersurfaces will also be disscussed.
Acknowledgement: I would like to thank Qin Jing for many very stimulating discussions during the course of my work, and helpful suggestions while carefully reading my early draft. I would also like to thank Prof. S.-T. Yau for his constant encouragement. This work was originally done while I was in Columbia University. I am very grateful to Columbia University for excellent research environment.
|
no-problem/9909/cond-mat9909313.html
|
ar5iv
|
text
|
# Iordanskii Force and the Gravitational Aharonov-Bohm effect for a Moving Vortex
## I Introduction
The problem of computing the transverse force acting on a vortex in a superfluid has recently engendered a certain amount of controversy. If the vortex moves at a velocity $`𝐯_L`$ while the superfluid and normal components have asymptotic velocities $`𝐯_s`$ and $`𝐯_n`$ respectively, then the most general form of the transverse force per unit length that is consistent with galilean invariance (i.e. depends only on velocity differences) can be written
$$𝐅=A\kappa \widehat{𝐳}\times (𝐯_L𝐯_s)+B\kappa \widehat{𝐳}\times (𝐯_L𝐯_n).$$
(1)
Here $`\kappa `$ is the magnitude of the quantum of circulation about the vortex,
$$\kappa =\frac{h}{m},$$
(2)
with $`m`$ the mass of a helium atom, and $`\widehat{𝐳}`$ a unit tangent to the vortex line in the direction of the circulation. In the absence of any normal component, elementary fluid mechanics shows that the momentum flux into the vortex core is
$$𝐅_M=\rho _{tot}\kappa \widehat{𝐳}\times (𝐯_L𝐯_s),$$
(3)
where $`\rho _{tot}`$ is the total mass density. This is the Magnus force. Once a normal component is present, however, a variety of different expressions for the coefficients $`A`$ and $`B`$ have been given in the literature.
It is generally accepted that the coefficient $`A`$ is $`\rho _s`$. An appealing thermodynamic argument for this has recently been given by Wexler. The controversy stems from the claim of Wexler, and Thouless et al. that the coefficient of $`𝐯_L`$ in $`𝐅`$, is also equal to $`\rho _s`$. Since this coefficient is $`A+B`$, their claim, if true, would force $`B`$ to be zero — thus ruling out the existence of the second term, the Iordanskii force, which is supposed to originate in a left/right asymmetry in the scattering of quasi-particles by the vortex line. Sonin, on the other hand, has presented a detailed review of the scattering of phonons by a vortex line at rest with respect to the superflow. His analysis (which has been challenged by Wexler and Thouless, but which I believe to be correct) shows that the asymmetry arises from a fluid dynamical analog of the Bohm-Aharonov effect, and gives the coefficient of $`𝐯_n`$ as $`\rho _n`$. Thus $`B=\rho _n`$. Combining this value of $`B`$ with the accepted value for $`A`$ gives the transverse part of the force per unit length as
$$𝐅=\rho _s\kappa \widehat{𝐳}\times (𝐯_L𝐯_s)+\rho _n\kappa \widehat{𝐳}\times (𝐯_L𝐯_n).$$
(4)
This is the most commonly accepted expression for the force. When it is written in this form the first term is usually refered to as the superfluid Magnus force and the second term as the Iordanskii force.
Since the total density is $`\rho _{tot}=\rho _s+\rho _n`$, equation (4) may equally well be written
$$𝐅=\rho _{tot}\kappa \widehat{𝐳}\times (𝐯_L𝐯_s)+\rho _n\kappa \widehat{𝐳}\times (𝐯_s𝐯_n).$$
(5)
The first part of (5) is the momentum transfer to the vortex due to the condensate motion (and possibly this should be called the superfluid Magnus force), so the second term must be the force on the vortex due to phonon scattering. Part of the phonon force is responsible for reducing the coefficient of $`𝐯_s`$ from $`\rho _{tot}`$ to $`\rho _s`$. Notice that this expression for the phonon force does not depend on the motion of the vortex line relative to either component of the fluid. Although Sonin, who works in the frame $`𝐯_L=0`$, writes $`\rho _n(𝐯_s𝐯_n)`$ in his expression for the phonon force, his analysis of the scattering process is restricted to the situation where $`𝐯_s=0`$, i.e to the case where there is no relative motion between the vortex line and the condensate. The $`𝐯_s`$ part of the force is inferred from the thermodynamic and galilean invariance argument given above. A direct demonstration that the phonon force is independent of the relative velocity of the vortex and the condensate, and hence that coefficient of $`𝐯_s`$ in the phonon force is indeed equal to $`\rho _n`$, would provide a useful consistency test of the conventional scattering-theory derivation of (4), because the galilean invariance that went into deducing this coefficient is not manifest in the linearized sound wave equation. In this paper I will provide such a demonstration. In obtaining the result we will find it useful to consider the analogy, first pointed out by Volovik, of phonon vortex scattering with the gravitational Bohm-Aharonov effect where particles are scattered by a spinning cosmic string.
In the next section I will review the acoustic Bohm-Aharonov effect and rederive Sonin’s results for the phonon force in the case that the vortex is at rest with respect to the condensate. In section three I will extend these results to the case in which the vortex moves relative to the condensate, and relate the momentum given to the phonons by the vortex to the time delay of signals passing on either side of a cosmic string. In the last section I will discuss the apparent conflict between our results and the claims of Thouless et al., and some possible resolutions.
## II The Acoustic Bohm-Aharonov Effect
The scattering of phonons by a superfluid vortex was first studied by Fetter. The wave equation used by most recent authors is
$$\frac{^2\varphi }{t^2}=c^2^2\varphi 2(𝐯)\frac{\varphi }{t}.$$
(6)
Here $`\varphi `$ is the velocity potential, $`𝐯=𝐯_v`$ is the velocity field of the vortex
$$((v_v)_x,(v_v)_y)=\frac{\kappa }{2\pi }(\frac{y}{x^2+y^2},\frac{x}{x^2+y^2}),$$
(7)
and $`c`$ the speed of sound.
When the sound field $`\varphi `$ has harmonic time dependence, $`\varphi (𝐫,t)=e^{i\omega t}\varphi (𝐫)`$, equation (6) becomes
$$\omega ^2\varphi =c^2^2\varphi +2i\omega (𝐯_v)\varphi .$$
(8)
We will be interested in effects only to first order in the circulation $`\kappa `$, therefore it is natural to add harmless $`O(𝐯_v^2)`$ terms to (8) so that it becomes the Schrödinger equation for unit charge particles minimally coupled to a gauge field $`𝐀=\frac{\omega }{c^2}𝐯_v`$, viz.
$$\omega ^2\varphi =c^2\left(+i\frac{\omega }{c^2}𝐯_v\right)^2\varphi .$$
(9)
Notice that this rewriting requires $`𝐯_v=0`$.
Equation (9) describes the Bohm-Aharonov interaction of particles with a threadlike tube of magnetic flux in the gauge where $`𝐀=0`$. The total flux in the tube is
$$\mathrm{\Phi }=𝐀𝑑𝐫=\frac{\omega }{c^2}𝐯_v𝑑𝐫=\frac{\omega }{c^2}\kappa .$$
(10)
Here the integration contour surrounds the vortex, which we have taken to be at the origin of our coordinate system. We will use the symbol $`\alpha `$ to denote the ratio of this flux to the Dirac flux quantum, $`\mathrm{\Phi }_0=2\pi `$.
In their original paper Aharonov and Bohm provided a partial-wave series expansion for scattering of a plane wave by the flux tube. Figures 1 and 2 are numerical plots of the sum of the first forty terms in this expansion for the cases where a plane wave is incident from the right on flux tubes with $`\alpha `$ equal to $`0.25`$ and $`0.5`$ respectively. These plots should be compared to the ripple tank photographs of surface waves interacting with a “bathtub” vortex in . The most noticable feature in both the photographs and the plots is the “seam” or “tear” in the wavefunction downstream of the flux tube. The incident plane wave is cut in two by the flux tube and, apart from diffraction effects, the upper and lower halves of the incident wave propagate parallel to each other but with a relative phase shift of a fraction $`\alpha `$ of a wavelength. It is in the region of the seam that the transverse momentum imparted to the beam by the flux resides. Indeed in Fig 1 one can plainly see that the wavefronts are directed slightly upwards in this region.
The time-average momentum density in the sound wave is $`\rho _{(1)}𝐯_{(1)}`$ where $`𝐯_{(1)}=\varphi `$ is the fluid velocity due to the sound wave and
$$\rho _{(1)}=\frac{\rho _{(0)}}{c^2}\{\dot{\varphi }+𝐯\varphi \}.$$
(11)
The angular brackets denote a time average. (See the appendix for a derivation the expression for $`\rho _{(1)}`$.) For a plane wave
$$\varphi (𝐫,t)=\mathrm{Re}\left\{\varphi _0e^{i𝐤𝐫i\omega t}\right\},$$
(12)
and with the background flow $`𝐯`$ vanishing, we have
$$\rho _{(1)}𝐯_{(1)}=\frac{1}{2}\rho _{(0)}\frac{\omega }{c^2}|\varphi _0|^2𝐤.$$
(13)
More generally we find
$$\rho _{(1)}𝐯_{(1)}=\frac{1}{4i}\frac{\rho _{(0)}\omega _r}{c^2}\left(\varphi ^{}\varphi (\varphi ^{})\varphi \right),$$
(14)
where $`\omega _r`$ is the frequency of the wave relative to the fluid. (Notice that (14) is not the “gauge invariant” form of the current for our minimally coupled Schrödinger equation.)
Once we are out of the region where $`𝐯`$ is significant, we can write the momentum density as
$$\rho _{(1)}𝐯_{(1)}=\frac{1}{2}\rho _{(0)}\frac{\omega }{c^2}|\varphi _0|^2\chi ,$$
(15)
where $`\varphi (x)=|\varphi _0|e^{i\chi (𝐫)}`$. If we temporarily ignore the reduction in the amplitude of the sound wave in the seam region, we can find the total transverse momentum by integrating the $`y`$ component of this momentum from one side of the seam to the other along a line parallel to the $`y`$ axis. The total transverse momentum per unit length at abscissa $`x`$ is therefore
$$p_y=\frac{1}{2}\rho _{(0)}\frac{\omega }{c^2}|\varphi _0|^2\mathrm{\Delta }\chi (x),$$
(16)
where $`\mathrm{\Delta }\chi (x)`$, the phase difference across the seam, is zero long before the sound wave interacts with the vortex, and
$$\mathrm{\Delta }\chi (x)=2\pi \alpha =\frac{\omega }{c^2}\kappa $$
(17)
well after the sound waves have passed the vortex. In this manner the transverse momentum is found by examining the wave at large (but not infinite) impact parameter, and the result is insensitive to details such as diffraction effects.
The transverse momentum per unit length of the seam can also be written
$$\left(\frac{1}{2}\rho _{(0)}\frac{\omega }{c^2}|\varphi _0|^2k\right)\frac{1}{k}\frac{\omega }{c^2}\kappa =j_{\mathrm{ph}}\frac{1}{k}\frac{\omega }{c^2}\kappa ,$$
(18)
where $`j_{\mathrm{ph}}`$ is the mass current, or momentum density in the unperturbed sound wave. If a finite pulse of sound is sent past the vortex then a length of seam equal to the group velocity of the waves (here $`c`$) is created every second. The transverse force is the rate of creation of transverse momentum and this is
$$\dot{P}_{}=j_{\mathrm{ph}}\frac{1}{k}\frac{\omega }{c^2}\kappa c=j_{\mathrm{ph}}\kappa $$
(19)
in agreement with ref .
The mass flux due to phonons in the two fluid model is
$$𝐣_{\mathrm{ph}}=\rho _n(𝐯_n𝐯_s),$$
(20)
and we can find the thermal average of the phonon force by substituting this in (19). Since we have so far assumed that $`𝐯_s`$ is zero we have, however, only established the $`\rho _n\kappa 𝐯_n`$ part of the Iordanskii force.
A more rigorous approach to computing the momentum given to the phonons exploits the momentum flux tensor $`\mathrm{\Pi }_{ij}^{\mathrm{phon}}`$. The relevant terms are
$$\mathrm{\Pi }_{ij}^{\mathrm{phon}}=\rho _{(0)}v_{(1)i}v_{(1)j}+(v_v)_i\rho _{(1)}v_{(1)j}+(v_v)_j\rho _{(1)}v_{(1)i}.$$
(21)
The only terms contributing to $`\mathrm{\Pi }_{xy}^{\mathrm{phon}}=\mathrm{\Pi }_{yx}^{\mathrm{phon}}`$ to first order in $`𝐯_v`$ turn out to be
$$\mathrm{\Pi }_{xy}^{\mathrm{phon}}=\left(\mathrm{\Pi }_{xy}^{\mathrm{phon}}\right)_{𝐯_v=0}+\frac{c}{k}\left(_y\chi +\frac{k}{c}(v_v)_y\right)(j_{\mathrm{ph}})_x,$$
(22)
while $`\mathrm{\Pi }_{yy}^{\mathrm{phon}}`$ is of at least second order in $`𝐯_v`$ and can be neglected.
For most of the $`x,y`$ plane we may use the eikonal approximation for the phase $`\chi `$,
$$\chi (x,y)=kx\frac{\omega }{c^2}_{\mathrm{}}^{(x,y)}(v_v)_x𝑑x^{}.$$
(23)
Here the integral is taken along the line from the point $`(\mathrm{},y)`$ to $`(x,y)`$. (The eikonal approximation will be described further in the next section.) From the formulae for $`(v_v)_x,(v_v)_y`$ in (7) we find
$$_{\mathrm{}}^x(v_v)_x𝑑x=\frac{\kappa }{2\pi }\frac{y}{|y|}\left(\mathrm{tan}^1\frac{x}{|y|}+\frac{\pi }{2}\right).$$
(24)
This expression is continuous across the negative $`x`$ axis, but jumps discontinuously by $`\kappa `$ across the positive $`x`$ axis. We therefore find that the eikonal phase has the expected $`\omega \kappa /c^2`$ discontinuity across the seam. The physical wave fronts, of course, smoothly interpolate the phase across the seam. We immediately see that outside this interpolating zone, in the region where the eikonal approximation to the phase is valid, we have
$$_y\chi +\frac{k}{c}(v_v)_y=0.$$
(25)
This means that $`\mathrm{\Pi }_{xy}`$ is zero outside the seam region. Thus the flux of $`p_y`$ through any curve vanishes unless it intersects the seam. Indeed we find that the only regions that have a net transverse momentum flux out of them are those that include the vortex. For these the momentum flux is entirely due to the interpolating phase and comes out to be $`(j_{\mathrm{ph}})_x\kappa `$ as found by the previous, more intuitive, method.
The transverse momentum flux tensor vanishes because the transverse component of $`𝐤`$ acquired by interaction with the flow is cancelled by the transverse component of $`𝐯_v`$ when they are combined to form the group velocity
$$\frac{\partial \omega }{\partial 𝐤}=\frac{c}{k}𝐤+𝐯_v.$$
(26)
The classical phonon trajectories therefore remain straight, and, just as in the ordinary Bohm-Aharonov effect, the transverse momentum is a consequence of the wave-particle duality.
A more detailed analysis would take into account the reduction of the amplitude of the sound wave in the seam region. It is well understood from the theory of Bohm-Aharonov scattering that the effect of this is to replace $`\mathrm{\Phi }`$ by $`\mathrm{sin}\mathrm{\Phi }`$ in the force equation. (Inspection of Fig. 2, where $`\mathrm{\Phi }=\pi `$, shows that the sound wave amplitude is exactly zero in the seam.) Using $`c=230\mathrm{ms}^{-1}`$ for the speed of sound in liquid helium, we find
$$\alpha =\frac{\omega }{c^2}\frac{\mathrm{\hbar }}{m}=0.035E_{\mathrm{phon}}$$
(27)
where $`E_{\mathrm{phon}}`$ is the energy of the phonon measured in degrees Kelvin. We see that $`\mathrm{\Phi }`$ is small at temperatures below $`0.2`$K where phonons dominate the scattering process, so the correction will be unimportant.
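As a quick numerical cross-check of this estimate (ours; it uses only $`\omega =E_{\mathrm{phon}}/\mathrm{\hbar }`$ and $`\kappa =h/m_4`$ with standard values of the constants, and the precise coefficient depends on the value adopted for $`c`$):

```python
import math

# alpha = (omega/c^2)(hbar/m) = E_phon/(m4 c^2), with E_phon = k_B * T for T in kelvin
k_B = 1.380649e-23   # J/K
m4  = 6.6465e-27     # mass of a 4He atom, kg
c   = 230.0          # speed of sound used in the text, m/s

coeff = k_B / (m4 * c**2)                  # alpha per kelvin of phonon energy
print(f"alpha per kelvin ~ {coeff:.3f}")   # ~0.04, same order as the quoted 0.035
for T in (0.1, 0.2):
    # Phi = 2*pi*alpha stays well below 1 in the phonon-dominated regime
    print(f"T = {T} K: Phi = {2 * math.pi * coeff * T:.3f}")
```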
## III Moving the Vortex
So far we have merely reproduced the results of . We now extend our analysis to the case in which the vortex moves with respect to the condensate. So as to retain a time-independent equation we will keep the vortex fixed at the origin, but allow a non-zero asymptotic $`𝐯_s`$. For simplicity of description we will initially consider only the case where the uniform $`𝐯_s`$ is in the direction of propagation of the sound wave, which, as before, we take to be the $`+x`$ direction. We will write $`𝐯_s=U\widehat{𝐱}`$. In this case the length of seam created per second is $`c+U`$. We need to find the phase shift between the two halves of the wavefront to complete the computation.
It is tempting to simply replace the $`𝐯_v`$ in (9) by $`𝐯=U\widehat{𝐱}+𝐯_v`$, but this will not serve to give the correct answer. Because of the Doppler shift, the frequency is now related to the wavelength by $`\omega =(c+U)k`$, so for a wave with the same $`k`$ the frequency, and hence the effective flux
$$\mathrm{\Phi }=\frac{\omega }{c^2}\oint 𝐯_v\cdot d𝐫$$
(28)
is increased, and this makes the phase shift larger. This is not what we expect, and is incorrect. The terms added to the sound-wave equation to make it into the minimally coupled Schrödinger equation are no longer harmless. This is because even when we work only to $`O(v_v)`$ accuracy, we must not neglect $`O(U^2)`$ terms.
We require a more accurate equation. In the appendix we show that the relevant equation is that given by Unruh
$$\left(\frac{\partial }{\partial t}+\nabla \cdot 𝐯\right)\frac{\rho }{c^2}\left(\frac{\partial }{\partial t}+𝐯\cdot \nabla \right)\varphi =\nabla \cdot (\rho \nabla \varphi ),$$
(29)
who interprets his equation as that of a scalar field propagating in a curved space-time background.
We can find the phase offset by working at large impact parameter where $`\varphi `$ is essentially independent of $`y`$, and where also $`\partial _x𝐯_v`$ is negligible. We can therefore set
$$\varphi =\phi (x)e^{ikxi\omega t}$$
(30)
with $`\omega =(c+U)k`$, and expect $`\phi `$ to be slowly varying. The equation for $`\varphi `$ becomes
$$-\omega ^2\varphi -2i\omega v_x\partial _x\varphi +v_x^2\partial _{xx}^2\varphi =c^2\partial _{xx}^2\varphi $$
(31)
where $`v_x=(v_v)_x+U`$. Since $`\phi (x)`$ is slowly varying we can ignore $`\partial _{xx}^2\phi (x)`$. Doing so we find that (31) reduces to
$$(v_v)_xk\phi -i(U+c)\partial _x\phi =0.$$
(32)
(We have ignored terms containing $`v_v`$ in the coefficient of $`\partial _x\phi `$ because they will not affect our result to $`O(\kappa )`$.) This gives
$$\phi (x)=e^{i\chi (x)}$$
(33)
with
$$\chi (x)=-\frac{k}{U+c}\int _{-\mathrm{\infty }}^x(v_v)_x\,dx^{\prime }.$$
(34)
We see that as $`x+\mathrm{}`$ the phase offset of the two wavefronts becomes
$$\mathrm{\Delta }\chi =\frac{k}{U+c}\oint 𝐯_v\cdot d𝐫=\frac{k}{U+c}\kappa .$$
(35)
The factor of $`U+c`$ cancels against the length of seam being created per second to give
$$\dot{P}_{\perp }=j_{\mathrm{ph}}\kappa $$
(36)
as before.
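For the reader who wants the intermediate algebra behind the reduction from (31) to (32), it can be spelled out as follows (a check of ours; only $`\omega =(c+U)k`$ and $`v_x=U+(v_v)_x`$ are used, and $`O(v_v^2,v_vU)`$ pieces in the coefficient of $`\partial _x\phi `$ are dropped):

```latex
% Substitute \varphi = \phi(x)\,e^{ikx - i\omega t} into (31) and drop \partial_{xx}^2\phi:
%   coefficient of \phi:
%     k^2\bigl[-(c+U)^2 + 2U(c+U) - U^2\bigr] + 2ck^2\,(v_v)_x = 2ck^2\,(v_v)_x ,
%   coefficient of \partial_x\phi:
%     -2ik\bigl[(c+U)U - U^2 + c^2\bigr] = -2ick\,(U+c) .
% Dividing the surviving terms by 2ck gives
\[
  (v_v)_x\,k\,\phi \;-\; i\,(U+c)\,\partial_x\phi \;=\; 0 ,
\]
% which is (32).
```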
This result can be confirmed by examining the momentum flux tensor. The $`O(𝐯_v)`$ part of $`\mathrm{\Pi }_{xy}`$ is now
$$\mathrm{\Pi }_{xy}=\left((c+U)\frac{1}{k}\partial _y\chi +(v_v)_y\right)(j_{\mathrm{ph}})_x,$$
(37)
where we have included a non-zero contribution from $`U\rho _{(1)}v_{(1)y}`$. Once again we see that outside the seam region the gradient of $`\chi `$ cancels the $`(v_v)_y`$ advective term, and that the discontinuity across the seam provides the momentum flux obtained in the previous paragraph.
The wavefront offset can also be calculated from the time delay between phonons passing on either side of the vortex. Since the phonon trajectories with large impact parameters are hardly deflected, we can find them as the null geodesics of the simplified form of the Unruh metric (A18)
$$ds^2=\frac{\rho }{c}\left\{-\left(dx-(v+c)dt\right)\left(dx-(v-c)dt\right)-dy^2-dz^2\right\}.$$
(38)
The null geodesics are given by
$$\frac{dx}{dt}=v\pm c.$$
(39)
In our present case, the time of arrival of a signal at the point $`(x,y)`$ is
$$t(x,y)=\int _{-\mathrm{\infty }}^{(x,y)}\frac{dx}{U+c+(v_v)_x}=\mathrm{const.}-\frac{1}{(U+c)^2}\int _{-\mathrm{\infty }}^{(x,y)}(v_v)_x𝑑x+O(v_v^2).$$
(40)
We convert the time delay to a phase shift by multiplying by $`\omega =(U+c)k`$. Again we find the relative phase shift between waves that pass above and below the vortex to be
$$\mathrm{\Delta }\chi =\frac{k}{U+c}\kappa .$$
(41)
As described by Volovik this separation in the time of arrival for signals passing arbitrarily far from the vortex is characteristic of a “spinning cosmic string”.
Matters become slightly more complicated when the uniform background superflow $`𝐯_s`$ is not oriented parallel (or anti-parallel) to the incident phonon flux. After a little work we find that the eikonal equation becomes
$$(𝐔^g\cdot \nabla )\chi +𝐤\cdot 𝐯_v=0,$$
(42)
where $`𝐔^g=c\widehat{𝐤}+𝐯_s`$. The phase discontinuity seam no longer lies exactly along the $`x`$ axis, but instead points in the direction of $`𝐔^g`$. The rate of transverse momentum production does depend on the angle this vector makes with the $`x`$ axis, but the effects are $`O(U^2/c^2)`$. They do not seem to be worth working out in detail since at this order we should also include compressibility effects and the dependence of $`\rho _n`$ and $`\rho _s`$ on $`(𝐯_n-𝐯_s)^2/c^2`$.
## IV Discussion
We have computed the force on a vortex due to the scattering of phonons in the case where the vortex is moving with respect to the condensate. We have shown that to first order in $`|𝐯_s-𝐯_L|/c`$ the rate of transverse momentum production is independent of the relative velocity of the vortex and the condensate, and that the Bohm-Aharonov phase shift of the phonon wavefront passing on either side of the vortex leads to a force
$$𝐅_{\mathrm{phon}}=\rho _n\kappa \widehat{𝐳}\times (𝐯_s-𝐯_n),$$
(43)
which is consistent with galilean invariance. This supports the view that the Bohm-Aharonov analysis of the phonon force is correct.
The expression we have obtained for the phonon force leads to the commonly accepted form of the Iordanskii force and so disagrees with the claims of Thouless et al. that it vanishes identically. Their claim is based on earlier work by Thouless, Ao and Niu (TAN) which establishes a general theorem relating the force on a vortex to the circulation of momentum at infinity. Since there are no impurities present, this theorem should hold here and we have a puzzle that needs to be resolved.
In response to criticism of their claims, Thouless et al. have given two possible explanations for the discrepancy. The first suggests that the computation of the cross section asymmetry is mathematically flawed because a conditionally convergent series is mistreated. If this were correct, their objection would also apply to the case where $`𝐯_L=𝐯_n`$, but we have shown here that the scattering asymmetry correctly predicts the reduction of the coefficient of $`𝐯_s`$ from $`\rho _{tot}`$ to $`\rho _s`$. This explanation therefore seems unlikely.
The second possible explanation is more subtle. In the calculations presented here, and in all earlier work on phonon scattering, the incident flux of phonons is calculated by assuming a thermal phonon distribution derived from the asymptotic $`𝐯_n`$ and $`𝐯_s`$, i.e. a distribution that does not seem to take into account the effect of the local vortex flow field $`𝐯_v`$. Because the assumed phonon flux to the left and right of the vortex is the same, it appears that the phonons make an equal reduction in the fluid momentum on either side of the vortex. In other words they appear not to change the value of the total momentum circulation
$$K_{tot}=\oint \rho 𝐯\cdot d𝐫,$$
(44)
so that it remains $`\rho _{tot}h/m`$ instead of being reduced to $`\rho _sh/m`$. If this were correct there would be circulation in the normal fluid as well as in the superfluid component. If we include this unphysical circulation in the TAN formula, then it agrees with the calculated Iordanskii force.
The problem with this explanation is that the motion of the phonons can be derived from a Hamiltonian, $`H=c|𝐤|+𝐯_v\cdot 𝐤`$. Consequently Liouville’s theorem applies to the distribution function. Even in the absence of phonon-phonon interactions, a phonon distribution function that is in thermal equilibrium will remain in equilibrium, and so phonons arriving from infinity will have their local distribution modified by the flow field so as to correctly bring about the expected reduction in the circulation. Although there are an equal number of phonons passing to the right or to the left of the vortex, those moving against the superflow are going slower, so their number density is higher. This means that the phonon momentum opposing the $`\rho _{tot}𝐯_v`$ is left-right asymmetric, as it should be.
There remains a contradiction, therefore.
What are we to conclude? The TAN arguments implicitly make use of the change in the total momentum of the medium outside the vortex. In fluid mechanics it is well known that the total momentum associated with a system of vortices is ill-defined, being given by a conditionally convergent integral. This problem is usually dealt with by defining the impulse of the vortex system. The impulse is defined in terms of the velocity potential in the vicinity of the vortex system, and does not have contributions from the effects of distant boundaries (the ultimate origin of the ill-defined momentum). Perhaps this is at the root of the problem. On the other hand the present calculation might also be described as naïve. We are not distinguishing between the true Newtonian momentum and the pseudo-momentum possessed by the phonons. However pseudo-momentum is usually exactly what is needed for computing forces on immersed objects. Clearly, more work is needed to resolve the paradox.
## V Acknowledgements
This work was supported by grant NSF-DMR-98-17941. I would like to thank Edouard Sonin and Andrei Shelankov for useful e-mail discussions and also David Thouless, Ping Ao and Šimon Kos for many useful conversations.
## A The Unruh Equation
The homentropic potential flow of a fluid is derivable from the action
$$S=\int d^4x\left\{\rho \dot{\varphi }+\frac{1}{2}\rho (\nabla \varphi )^2+V(\rho )\right\}.$$
(A1)
Here $`\rho `$ is the mass density and $`\nabla \varphi =𝐯`$, the fluid velocity. Varying $`S`$ with respect to $`\varphi `$ gives the continuity equation
$$\dot{\rho }+\nabla \cdot (\rho \nabla \varphi )=0,$$
(A2)
while varying with respect to $`\rho `$ gives Bernoulli’s equation
$$\dot{\varphi }+\frac{1}{2}(\nabla \varphi )^2+\mu (\rho )=0,$$
(A3)
with $`\mu (\rho )=dV/d\rho `$.
In order to consider the propagation of sound waves in the background flow, set
$`\varphi `$ $`=`$ $`\varphi _{(0)}+\varphi _{(1)}`$ (A4)
$`\rho `$ $`=`$ $`\rho _{(0)}+\rho _{(1)}+\mathrm{\cdots }`$ (A5)
where $`\varphi _{(0)}`$ and $`\rho _{(0)}`$ obey the equations of motion, and $`\varphi _{(1)}`$ and $`\rho _{(1)}`$ are small amplitude perturbations. Expanding $`S`$ to quadratic order in the perturbations gives
$$S=S_0+\int d^4x\left\{\rho _{(1)}\dot{\varphi }_{(1)}+\frac{1}{2}\left(\frac{c^2}{\rho _{(0)}}\right)\rho _{(1)}^2+\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2+\rho _{(1)}𝐯\cdot \nabla \varphi _{(1)}\right\}.$$
(A6)
(The terms linear in the perturbations vanish because of our assumption that the zeroth order variables obey the equation of motion.) Here $`𝐯\equiv 𝐯_{(0)}=\nabla \varphi _{(0)}`$. The speed of sound, $`c`$, is defined by
$$\frac{c^2}{\rho _{(0)}}=\frac{d\mu }{d\rho }|_{\rho _{(0)}},$$
(A7)
or more familiarly
$$c^2=\frac{dP}{d\rho }.$$
(A8)
Since the new action is quadratic in $`\rho _{(1)}`$, we can eliminate it via its equation of motion
$$\rho _{(1)}=-\frac{\rho _{(0)}}{c^2}\{\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)}\}.$$
(A9)
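Explicitly, writing $`X\equiv \dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)}`$, the substitution collapses the $`\rho _{(1)}`$-dependent terms of (A6) as follows (a short check of ours):

```latex
\[
  \rho_{(1)} X + \frac{c^2}{2\rho_{(0)}}\rho_{(1)}^2
  \;\xrightarrow{\;\rho_{(1)} = -(\rho_{(0)}/c^2)\,X\;}\;
  -\frac{\rho_{(0)}}{c^2}X^2 + \frac{\rho_{(0)}}{2c^2}X^2
  = -\frac{\rho_{(0)}}{2c^2}\bigl(\dot{\varphi}_{(1)} + \mathbf{v}\cdot\nabla\varphi_{(1)}\bigr)^2 ,
\]
% together with the untouched (1/2)\rho_{(0)}(\nabla\varphi_{(1)})^2 term this yields (A10)
```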
We find the effective action for the sound waves in the background flow $`𝐯`$ to be
$$S_{(2)}=\int d^4x\left\{\frac{1}{2}\rho _{(0)}(\nabla \varphi _{(1)})^2-\frac{\rho _{(0)}}{2c^2}(\dot{\varphi }_{(1)}+𝐯\cdot \nabla \varphi _{(1)})^2\right\}.$$
(A10)
After changing an overall sign for convenience, we can write this as
$$S=\int d^4x\frac{1}{2}\sqrt{g}g^{\mu \nu }\partial _\mu \varphi _{(1)}\partial _\nu \varphi _{(1)},$$
(A11)
where
$$\sqrt{g}g^{\mu \nu }=\frac{\rho _{(0)}}{c^2}\left(\begin{array}{cc}1,& 𝐯^T\\ 𝐯,& \mathrm{𝐯𝐯}^T-c^2\mathrm{𝟏}\end{array}\right).$$
(A12)
(We use the convention that greek letters run over all four space-time indices with $`0t`$, while roman indices refer to the spatial components.)
The resultant equation of motion
$$\frac{1}{\sqrt{g}}\partial _\mu \sqrt{g}g^{\mu \nu }\partial _\nu \varphi _{(1)}=0,$$
(A13)
is
$$\left(\frac{\partial }{\partial t}+\nabla \cdot 𝐯\right)\frac{\rho _{(0)}}{c^2}\left(\frac{\partial }{\partial t}+𝐯\cdot \nabla \right)\varphi _{(1)}=\nabla \cdot (\rho _{(0)}\nabla \varphi _{(1)}).$$
(A14)
This equation and its interpretation as the wave equation for a scalar field propagating in the background space-time metric (A12) is due to Unruh .
In four dimensions we have $`\sqrt{g}=\rho _{(0)}^2/c`$ and
$$g_{\mu \nu }=\frac{\rho _{(0)}}{c}\left(\begin{array}{cc}c^2-v^2,& 𝐯^T\\ 𝐯,& -\mathrm{𝟏}\end{array}\right).$$
(A15)
The associated space-time interval is therefore
$$ds^2=\frac{\rho _{(0)}}{c}\left\{c^2dt^2-\delta _{ij}(dx^i-v^idt)(dx^j-v^jdt)\right\}.$$
(A16)
Up to the overall conformal factor $`\frac{\rho _{(0)}}{c}`$ we see that $`c`$ and $`v^i`$ play the role of the lapse function and shift vector appearing in the Arnowitt-Deser-Misner (ADM) formalism of general relativity. A conformal factor does not affect null geodesics, and so variations in $`\rho _{(0)}`$ do not influence the ray tracing for the sound waves.
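These algebraic statements are mechanical to verify. The following computer-algebra sketch (ours, not part of the original derivation) checks that the matrix in (A15) is the inverse of the metric defined by (A12) and that $`\sqrt{g}=\rho _{(0)}^2/c`$:

```python
import sympy as sp

rho, c, vx, vy, vz = sp.symbols('rho c v_x v_y v_z', positive=True)
v = sp.Matrix([vx, vy, vz])
v2 = v.dot(v)

# sqrt(g) g^{mu nu}, eq. (A12)
upper = (rho / c**2) * sp.BlockMatrix([[sp.Matrix([[1]]), v.T],
                                       [v, v * v.T - c**2 * sp.eye(3)]]).as_explicit()
# g_{mu nu}, eq. (A15)
lower = (rho / c) * sp.BlockMatrix([[sp.Matrix([[c**2 - v2]]), v.T],
                                    [v, -sp.eye(3)]]).as_explicit()

sqrt_g = rho**2 / c              # claimed value of sqrt(-det g)
g_upper = upper / sqrt_g         # the inverse metric g^{mu nu}

print(sp.simplify(g_upper * lower))            # -> 4x4 identity matrix
print(sp.simplify(-lower.det() - sqrt_g**2))   # -> 0, i.e. -det(g) = (rho^2/c)^2
```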
It is also sometimes convenient to write
$$ds^2=\frac{\rho _{(0)}}{c}\left\{(c^2-v^2)\left(dt+\frac{v^idx^i}{c^2-v^2}\right)^2-\left(\delta _{ij}+\frac{v^iv^j}{c^2-v^2}\right)dx^idx^j\right\}.$$
(A17)
When $`𝐯`$ is in the $`x`$ direction only, we can also rewrite $`ds^2`$ as
$$ds^2=\frac{\rho _{(0)}}{c}\left\{-\left(dx-(v+c)dt\right)\left(dx-(v-c)dt\right)-dy^2-dz^2\right\}.$$
(A18)
This shows that the $`xt`$ plane null geodesics coincide with the expected characteristics of the wave equation in the background flow.
# On the singular spectrum of the Almost Mathieu operator. Arithmetics and Cantor spectra of integrable models.
## Abstract
I review recent progress towards the solution of the Almost Mathieu equation (A.G. Abanov, J.C. Talstra, P.B. Wiegmann, Nucl. Phys. B 525, 571, 1998), also known as Harper’s equation or the Azbel-Hofstadter problem. The spectrum of this equation is known to be a purely singular continuum with a rich hierarchical structure. A few years ago it was found that the almost Mathieu operator is integrable. An asymptotic solution of this operator became possible due to an analysis of the Bethe Ansatz equations.
1. Introduction In this lecture I review recent progress towards decoding one of the most puzzling strange sets generated by a quasiperiodic Schrödinger operator :
$$\psi _{n+1}+\psi _{n-1}+2\lambda \mathrm{cos}(\theta +2\pi n\eta )\psi _n=E\psi _n$$
(1)
The history of this equation, as well as its applications in different branches of physics and mathematics, is rich. This equation, known as Harper’s equation, describes the electronic spectrum of a one-dimensional quasicrystal (a particle in a quasiperiodic potential) and is often used to study the localization-delocalization transition (see e.g. ). It also describes a Bloch particle in a uniform magnetic field and is then known as the Azbel-Hofstadter problem . It is a standard example of an operator with a singular continuum spectrum, also known as the almost Mathieu operator . This list may be continued.
The spectrum of this equation is complex. In the commensurate case, when $`\eta `$ is a rational number
$$\eta =P/Q,$$
one may impose the Bloch condition:
$$\psi _n=e^{ik^{\prime }}\psi _{n+Q}$$
Then, the spectrum consists of $`Q`$ bands, separated by $`Q-1`$ gaps. In the incommensurate limit, when $`\eta `$ is an irrational number<sup>*</sup><sup>*</sup>*Although some properties of the spectrum depend on the type of irrational number $`\eta `$, here we consider typical Diophantine numbers. These numbers have full Lebesgue measure and are thus sufficient for almost all physical applications. ($`P\to \mathrm{\infty },Q\to \mathrm{\infty }`$) the spectrum becomes an infinite Cantor set (a closed, nowhere dense set with no isolated points) with total bandwidth (Lebesgue measure of the spectrum) $`4|\lambda -1|`$ .
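At rational $`\eta `$ this band structure is straightforward to generate numerically. The sketch below (our illustration; the values of $`P/Q`$, $`\lambda `$ and the phase grids are arbitrary choices) writes $`\psi _n=e^{ik^{\prime }n}u_n`$ with $`u_{n+Q}=u_n`$ and diagonalizes the resulting $`Q\times Q`$ Bloch Hamiltonian over the two phases:

```python
import numpy as np

def harper_bands(P, Q, lam=1.0, nphase=24):
    """Q bands of Harper's equation at eta = P/Q, as (min, max) energy pairs."""
    eta = P / Q
    levels = []
    for theta in np.linspace(0.0, 2*np.pi, nphase, endpoint=False):
        for kp in np.linspace(0.0, 2*np.pi/Q, nphase, endpoint=False):
            H = np.zeros((Q, Q), dtype=complex)
            for n in range(Q):
                H[n, n] = 2.0 * lam * np.cos(theta + 2*np.pi*n*eta)
                H[n, (n+1) % Q] += np.exp(1j*kp)    # e^{+ik'} u_{n+1}
                H[(n+1) % Q, n] += np.exp(-1j*kp)   # e^{-ik'} u_{n-1}
            levels.append(np.linalg.eigvalsh(H))
    levels = np.sort(np.array(levels), axis=1)
    return [(levels[:, b].min(), levels[:, b].max()) for b in range(Q)]

for lo, hi in harper_bands(P=1, Q=5):
    print(f"band: [{lo:+.4f}, {hi:+.4f}]")   # 5 bands separated by 4 gaps
```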
The most interesting “critical” case appears at $`|\lambda |=1`$. Then the spectrum becomes a purely singular continuum . In this case wave functions lose their extended character; they are not yet localized, but exhibit a power-law scaling. Moreover, there is numerical evidence, and almost a consensus, that in this case ($`\lambda =1`$) the spectrum and wave functions are multifractal .
Multifractal sets exhibit a sort of conformal invariance and are expected to be described by methods of conformal field theory. This theory, however, is yet to be developed and scaling properties of sets generated by dynamical systems and by closely related quasiperiodic systems are far from being understood.
Since the empirical observations of Hofstadter , the evidence has been mounting that the spectrum of (1) (the Hofstadter butterfly), as well as those of other quasiperiodic equations with different potentials, are regular and universal rather than erratic or “chaotic”. A few years ago it was shown that, despite the complexity of the spectrum, the Harper-Azbel-Hofstadter-almost Mathieu equation (1) at any rational $`\eta =P/Q`$ is integrable and can be “solved” by employing the Bethe Ansatz (BA) methods of integrable systems . This had opened the possibility of describing the complex behavior of an incommensurate system as a limit of a sequence of integrable models. I hope that this solution will help to apply conformal field theory to dynamical systems.
The symmetries of the problem which eventually lead to its integrability are most transparent in its ”magnetic” interpretation. Consider a particle on a two dimensional square lattice in a magnetic field with a flux $`\mathrm{\Phi }=2\pi \eta `$ per plaquette. Its Hamiltonian is:
$$H=T_x+T_x^{-1}+\lambda (T_y+T_y^{-1}),$$
where operators $`T_x`$ and $`T_y`$ describe translations of the particle in $`x`$ and $`y`$ direction by a lattice site. In magnetic field translations form a Weyl pair:
$$T_xT_y=q^2T_yT_x,$$
(2)
where
$$q^2=e^{i\mathrm{\Phi }}.$$
The Harper’s equation, then appears as a result of representation of translation operators as a shift and multiplication:
$`T_x\psi _n=\psi _{n+1},`$ (3)
$`T_y\psi _n=q^{2n}e^{ik}\psi _n`$ (4)
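At rational $`q^2=e^{2\pi iP/Q}`$ the Weyl algebra (2) is realized by the familiar $`Q\times Q`$ clock and shift matrices. A minimal numerical check (our illustration; the Bloch phase $`e^{ik}`$ is set to one):

```python
import numpy as np

P, Q = 3, 7
q2 = np.exp(2j * np.pi * P / Q)          # q^2 = e^{i Phi}, Phi = 2*pi*P/Q

Tx = np.roll(np.eye(Q), -1, axis=0)      # shift: (Tx psi)_n = psi_{n+1}
Ty = np.diag(q2 ** np.arange(Q))         # clock: (Ty psi)_n = q^{2n} psi_n

print(np.allclose(Tx @ Ty, q2 * (Ty @ Tx)))   # True: Tx Ty = q^2 Ty Tx
```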
2. Hierarchical tree. I begin by describing the scaling hypothesis for the spectrum of an incommensurate (quasiperiodic) operator with a purely singular continuum spectrum. To do so, we need a notion of the hierarchical tree.
Let us consider a sequence of rational approximants $`\eta ^{(j)}=P_j/Q_j`$ with increasing $`Q_j`$ to an irrational flux $`\eta `$, so that $`|\eta ^{(j)}-\eta |<c(Q_j)^{-2}`$, where $`c`$ is a $`j`$-independent constant<sup>†</sup><sup>†</sup>†This sequence always exists and can be constructed, e.g., from the Farey series. The Harper equation taken at each $`\eta ^{(j)}`$ is a generation of the hierarchy. Let us consider a graph (with no loops) which connects the $`k`$-th band of the generation $`\eta ^{(j)}`$ (the daughter generation) to a certain band $`k^{\prime }`$ of some previous (parent) generation $`\eta ^{(j-1)}`$. We call it a hierarchical tree if the energies $`E_j(𝒥)`$ belonging to any branch $`𝒥`$ of the tree form a sequence converging to a point $`E(𝒥)`$ of the spectrum in such a way that the sequence $`Q^{2-ϵ_𝒥}|E_j(𝒥)-E(𝒥)|`$ is bounded but does not converge to zero.
The set of numbers $`ϵ_𝒥`$ are anomalous exponents. In a multifractal spectrum, anomalous dimensions depend on the branch. They and the tree characterize ultrametric properties of the spectrum.
Let us stress that the very existence of the hierarchical tree is a hypothesis, and the tree constructed below is a conjecture. We call it the scaling hypothesis.
To construct the hierarchical tree it is necessary to find the sequence of generations and a rule to determine the parent generation and a parent band out of a given band of a given generation. In other words, the hierarchical tree is determined by a sequence of rational approximants $`\eta ^{(j)}\to \eta `$ and by a mapping
$$(k,P,Q)\to (k^{\prime },P^{\prime },Q^{\prime }).$$
(5)
where $`k`$ and $`k^{\prime }`$ are labels of the daughter’s band and of the parent’s band of generations $`P/Q`$ and $`P^{\prime }/Q^{\prime }`$ respectively. To describe the hierarchical tree we will need the notion of a discrete spectral flow and of its rate, the Hall conductance.
A heuristic definition of the spectral flow is as follows. Let us consider the spectrum of the problem with a given $`\eta `$ and choose some (big) gap, which we label by $`k`$. Now let us change $`\eta `$ by a small $`\delta \eta `$, such that the newly appearing gaps in the vicinity of the edge of the big gap are smaller than the big gap $`k`$. Then we can look at the new levels that appear within the big gap, close to its bottom. The number of these levels, i.e., the number of levels $`\delta N_k`$ crossing an energy $`E`$ lying inside the gap, close to its lower edge, is the spectral flow. One can also treat the spectral flow as the number of levels that appear within a ”big” band adjacent to the gap from below. The rate of the spectral flow,
$$\sigma _k=\frac{\delta N_k}{\delta \eta },$$
(6)
is known to be the Hall conductance of the gap (Streda’s formula) for two-dimensional particles in a magnetic field. This number is an integer and depends on the gap. The difference between the Hall conductances of neighbouring gaps, i.e., the spectral flow into the k-th band,
$$\sigma (k)=\sigma _k-\sigma _{k+1}$$
(7)
is the Hall conductance of the band.
The index theorem identifies the Hall conductance of the band with the number of zeros of the wave function $`\psi _n(k,k^{\prime })`$ within the Brillouin zone: $`0<k<2\pi ,\mathrm{\hspace{0.17em}0}<k^{\prime }<2\pi /Q`$, i.e., with the Chern class of a band:
$$\sigma (k)=\frac{1}{2\pi }\oint _{\partial B}\vec{\nabla }\mathrm{ln}\psi _n(k,k^{\prime })\cdot d\vec{k},$$
where the integral goes over the boundary of the magnetic Brillouin zone of the $`k`$-th band.
The Hall conductivity of the k-th gap varies in the range $`-Q/2<\sigma _k\le Q/2`$ and obeys the Diophantine equation
$$P\sigma _k=k(\text{mod}Q).$$
(8)
In its turn, the Hall conductance of a band $`\sigma (k)`$ is allowed to take only two values. They can be found explicitly. Let us consider a continued fraction expansion of
$$\eta ^{(j)}=\frac{1}{n_1+\frac{1}{n_2+\frac{1}{n_3+\mathrm{\cdots }}}}\equiv [n_1,n_2,n_3,\mathrm{\dots },n_j]$$
(9)
Then the Hall conductance of the gap $`k`$ is
$$\sigma _k=\frac{Q_j}{2}-Q_j\left\{(-1)^j\frac{Q_{j-1}}{Q_j}k+\frac{1}{2}\right\},$$
(10)
while the two values of the Hall conductance of the band $`k`$ may be
$$\sigma (k)=(-1)^{j-1}Q_{j-1},\qquad \text{or}\qquad (-1)^j(Q_j-Q_{j-1}),$$
(11)
Here $`\{x\}`$ denotes the fractional part of $`x`$.
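These statements are easy to tabulate. The sketch below (our illustration) solves the Diophantine equation (8) for $`\eta =8/13`$, a golden-mean approximant, and forms band conductances as differences of adjacent gap conductances; since the sign convention depends on how the gaps are enumerated, only the magnitudes are compared with (11):

```python
P, Q = 8, 13                  # a golden-mean approximant
Pinv = pow(P, -1, Q)          # P^{-1} mod Q (Python >= 3.8)

def window(s, Q):
    """Reduce s mod Q into the interval (-Q/2, Q/2]."""
    s %= Q
    return s - Q if s > Q / 2 else s

sigma = {k: window(Pinv * k, Q) for k in range(Q + 1)}   # sigma_0 = sigma_Q = 0
bands = [sigma[k] - sigma[k - 1] for k in range(1, Q + 1)]
print("gap conductances :", [sigma[k] for k in range(1, Q)])
print("band conductances:", bands)
print("distinct |sigma| :", sorted({abs(b) for b in bands}))   # [5, 8]: Fibonacci numbers
```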
Now let us turn to the hierarchical tree. We conjecture that the hierarchical tree is the spectral hierarchy, an integral version of the spectral flow. Let us consider two close rationals $`P/Q`$ and $`P^{\prime }/Q^{\prime }`$ with $`Q^{\prime }<Q`$. The number of states per lattice site in a band is $`1/Q`$ and $`1/Q^{\prime }`$ respectively. If there is a band $`k`$ of the problem with flux $`P/Q`$, such that its Hall conductance equals the ratio between the difference of the numbers of states and the difference of the fluxes,
$$\frac{1}{Q}-\frac{1}{Q^{\prime }}=\sigma (k)\left(\frac{P}{Q}-\frac{P^{\prime }}{Q^{\prime }}\right)$$
(12)
then we say that the band $`k`$ of the generation $`P/Q`$ has a ”parent” band in the generation $`P^{\prime }/Q^{\prime }`$. The absolute value of the Hall conductance is the difference between the number of states in the parent and the daughter bands:
$$Q-Q^{\prime }=|\sigma (k)|$$
(13)
This formula may be viewed as an integrated Streda formula (6). It determines the flux $`P^{\prime }/Q^{\prime }`$ and, by virtue of an iterative procedure, generates a sequence of rational approximants (generations of the hierarchical tree) $`\eta ^{(j)}`$ to $`\eta `$.
The integrated Streda formula alone is not enough to determine the tree. We complete it by the adiabatic principle, which states that the levels do not cross each other along a tree. This proposition may be put in symbols. Let us enumerate all states from the bottom of the spectrum and characterize them by a fraction $`\nu =\text{(number of the state)}/Q`$. All states in the k-th band have fractions $`(k-1)/Q<\nu <k/Q`$. The adiabatic principle assumes that a state in the middle of the band $`k`$, with fraction $`(k-1/2)/Q`$, has a parent somewhere in the parent band $`k^{\prime }`$, i.e.
$$(k^{\prime }-1)/Q^{\prime }<(k-1/2)/Q<k^{\prime }/Q^{\prime }.$$
(14)
This, together with the integrated Streda formula (12) and the formulas for the Hall conductance (7,8,11), determines the mapping (5) and thus the hierarchical tree. For quadratic irrationals this algorithm allows one to find the tree analytically. I do not concentrate here on this matter, but would like to make two comments:
Intermediate fractions. Rational approximants (generations of the tree) generated by this procedure appear to differ from the truncated continued fractions of $`\eta `$. They are:
$$\eta ^{(j-1)}=\{\begin{array}{cc}[n_1,\mathrm{\dots },n_j-1],\hfill & \text{if}\sigma (k)=(-1)^{j-1}Q_{j-1}\hfill \\ [n_1,\mathrm{\dots },n_{j-1}],\hfill & \text{if}\sigma (k)=(-1)^j(Q_j-Q_{j-1})\hfill \end{array}$$
(15)
Thus the parent generation is obtained either by subtracting 1 from the last quotient $`n_j`$ of the continued fraction or by truncating the fraction. The sequence of generations produced by these iterations is known as the intermediate fractions. The golden mean $`\eta =\frac{\sqrt{5}-1}{2}`$ is the only number whose rational approximants are truncated continued fractions.
Takahashi-Suzuki numbers. The entire set of Hall conductances generated by the flux $`P/Q`$ are numbers less than $`Q`$ of the form
$$\{Q_{i-2}+mQ_{i-1}|m=1,\mathrm{\dots },n_i,i=1,2,\mathrm{\dots }\}$$
(16)
These numbers (denominators of intermediate fractions) are known in integrable models related to $`U_q(SL_2)`$ as Takahashi-Suzuki numbers . They are the allowed lengths of string solutions of the Bethe-Ansatz equations. These numbers are also the set of possible dimensions of irreducible highest weight representations of $`U_q(SL_2)`$ with definite parity .
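A compact way to generate the set (16) from the continued-fraction quotients $`[n_1,n_2,\mathrm{\dots }]`$ (our illustration; the second example is an arbitrary fraction):

```python
def takahashi_suzuki(quotients):
    """Numbers Q_{i-2} + m*Q_{i-1}, m = 1..n_i, from continued-fraction quotients,
    seeded with the denominators Q_{-1} = 0, Q_0 = 1."""
    Qm2, Qm1, out = 0, 1, []
    for n in quotients:
        out.extend(Qm2 + m * Qm1 for m in range(1, n + 1))
        Qm2, Qm1 = Qm1, out[-1]          # Q_i = Q_{i-2} + n_i * Q_{i-1}
    return out

print(takahashi_suzuki([1] * 8))    # golden mean: [1, 2, 3, 5, 8, 13, 21, 34] -- Fibonacci
print(takahashi_suzuki([2, 1, 3]))  # 4/11 = [2,1,3]: [1, 2, 3, 5, 8, 11]
```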
Each path of the tree may also be characterized by a fraction $`\nu _k^{(j)}=k/Q_j`$ lying on the path and converging to a given irrational fraction $`\nu `$. According to eq.(14) the parent fraction is determined by the daughter one as $`\nu ^{(j-1)}=\frac{1}{Q_{j-1}}\left(\left[Q_{j-1}(\nu ^{(j)}-\frac{1}{2Q_j})\right]+1\right)`$ with $`Q_{j-1}=Q_j-|\sigma (k)|`$. The sequence $`\nu ^{(j)}`$ converges to the irrational $`\nu `$ faster than $`Q_j^{-1}`$, i.e., $`|\nu ^{(j)}-\nu |<cQ_j^{-1}`$. In terms of the fractions one may reformulate the scaling hypothesis as $`|E_j-E|<c|\nu ^{(j)}-\nu |^{\alpha (𝒥)}`$, defining the scaling exponent $`\alpha (𝒥)`$.
The hierarchical tree we have just described has been suggested by the Bethe Ansatz equations for the Harper equation. However, it seems plausible that the construction is universal and valid for a general quasiperiodic equation, regardless of whether it is integrable or not. A set of Hall conductances is the only input of the algorithm.
3. Integrability. The Harper equation (1) is integrable as soon as $`\eta `$ is rational. Here I adopt a restricted definition of the integrability of a linear equation: there is an isospectral transformation which turns all Bloch solutions of the Harper equation into discrete polynomials of degree $`Q`$. In symbols
$$\psi _n=e^{ik^{\prime }n}\sum _{m=0}^{Q-1}c_{nm}\mathrm{\Psi }_m,$$
(17)
where $`c_{nm}`$ is a unitary $`Q\times Q`$ matrix and $`\mathrm{\Psi }(z)`$ is a polynomial of degree $`Q-1`$. In other words, there is a gauge (a choice of the gauge potential), or a representation of the algebra of translations in a magnetic field (2), in which all wave functions are polynomials.
In this sense the Harper equation appears to be integrable at any point of the Brillouin zone $`0\le k^{\prime }<2\pi /Q,\mathrm{\hspace{0.33em}}0\le k<2\pi `$ , although the Bethe Ansatz equations look especially simple at the so-called rational points of the Brillouin zone. The latter correspond to the centers and edges of bands. The study of these points is sufficient for our purposes. In this case
$$\mathrm{\Psi }_n=\sum _{j=0}^{Q-1}a_j(\rho q^n)^j$$
where $`a_j`$ do not depend on $`n`$ and $`\rho `$ is a constant.
It appears to be convenient to parameterize polynomials by their roots:
$$\mathrm{\Psi }(z)\equiv \sum _{j=0}^{Q-1}a_jz^j=\prod _{i=0}^{Q-1}(z-z_i).$$
(18)
Below we sketch the results of the Bethe-Ansatz solution and skip all aspects of integrability related to cyclic representations of $`U_q(Sl_2)`$ .
Rational points form a zoo. To characterize them we introduce parameters $`\tau ,\kappa ,\mu =\pm 1`$. The choice $`\tau =-1`$ yields levels at the center of bands, while $`\tau =1`$ corresponds to edges of bands. The Chambers relation
$$\mathrm{\Lambda }(k^{\prime },k)\equiv \text{det}H=2\mathrm{cos}Qk^{\prime }+2\lambda \mathrm{cos}Qk$$
(19)
implies that the energy depends on $`k^{\prime }`$ and $`k`$ via $`\mathrm{\Lambda }(k^{\prime },k)`$. Therefore the edges of the energy bands are given by extrema of $`\mathrm{\Lambda }`$, which assumes a minimum/maximum given by $`\mathrm{\Lambda }=\pm (2+2\lambda )`$. The middle of bands corresponds to $`\mathrm{\Lambda }=0`$. If $`P`$ is even ($`Q`$ is odd) the rational points are labeled by additional discrete parameters $`\kappa ,\mu =\pm 1`$. The middle points of bands at $`\kappa =\pm 1`$ are given by the equation $`\frac{\mathrm{cos}\frac{Q}{2}(k_x+\pi \frac{P}{2Q})}{\mathrm{sin}\frac{Q}{2}(k_y+\pi \frac{P}{2Q})}=\nu (-1)^{\frac{Q-1}{2}}`$. The edges ($`\tau =1`$) of bands $`k^{\prime }=\frac{\pi }{Q}\frac{1-(-1)^{\frac{P}{2}}}{2}`$ and $`k=\frac{\pi }{Q}\left(\frac{1-(-1)^{\frac{P}{2}}}{2}+2l\right)`$ are distinguished by the parameter $`\mu `$. Counted from the bottom of the spectrum, these edges are ordered as bottom-top-bottom$`\mathrm{\dots }`$ if $`\mu =(-1)^{\frac{P}{2}}`$ and top-bottom-top$`\mathrm{\dots }`$ if $`\mu =-(-1)^{\frac{P}{2}}`$ (see for details).
For the rational points the transformation (17) is given by “quantum dilogarithms”
$$c_{nm}=\prod _{j=0}^{m-1}\left(e^{ik}q^{2n+1/2}\frac{\lambda ^{1/2}+\tau \kappa \rho ^{-1}q^{-j-1/2}}{\lambda ^{1/2}+\kappa \rho q^{j+1/2}}\right),$$
(20)
where $`\rho =i\mathrm{exp}(i\frac{k_x+k_y-\pi P}{2})`$. Under this transformation the Harper equation becomes:
$`iq\left(z^{\frac{1}{2}}+i\tau \kappa ({\displaystyle \frac{1}{\lambda qz}})^{\frac{1}{2}}\right)\left(z^{\frac{1}{2}}-i\kappa ({\displaystyle \frac{\lambda }{qz}})^{\frac{1}{2}}\right)\mathrm{\Psi }(qz)`$ (21)
$`-iq^{-1}\left(z^{\frac{1}{2}}-i\tau \kappa ({\displaystyle \frac{q}{\lambda z}})^{\frac{1}{2}}\right)\left(z^{\frac{1}{2}}+i\kappa ({\displaystyle \frac{\lambda q}{z}})^{\frac{1}{2}}\right)\mathrm{\Psi }(zq^{-1})=`$ (22)
$`\mu \kappa \lambda ^{1/2}E\mathrm{\Psi }(z),`$ (23)
where one is supposed to set $`z=\rho q^n`$. However, there is a certain advantage in considering the difference equation for $`\mathrm{\Psi }(z)`$ in the complex $`z`$ plane.
The integrability now reads: all Bloch solutions of the difference equation (21) are polynomials.
I try to unmask this transformation with a comment below; however, it becomes fully meaningful only in the $`U_q(sl_2)`$ setup.
Let us represent translation operators (4) by another Weyl pair
$$UV=qVU,$$
(24)
and setting
$$T_x=UV\frac{U+a}{U+b},\qquad T_y=VU^{-1}\frac{U+a}{U+b},$$
(25)
where
$`a`$ $`=`$ $`i\nu q^{\frac{1}{2}}\lambda ^{1/2}`$ (26)
$`b`$ $`=`$ $`i\tau \nu q^{\frac{1}{2}}\lambda ^{1/2}`$ (27)
Equation (21) appears, by choosing a standard representation of $`U`$ and $`V`$:
$$(U\mathrm{\Psi })_n=\rho ^{-1}q^n\mathrm{\Psi }_n,\qquad (V\mathrm{\Psi })_n=i\tau \nu \mu \mathrm{\Psi }_{n+1}.$$
4. The Bethe Ansatz. Knowing that the solutions of Eq.(21) are polynomials, we may evaluate it at any root $`z_i`$ of the polynomial. This gives the Bethe-Ansatz (BA) equations:
$$q^Q\prod _{k=1,k\ne i}^{Q-1}\frac{qz_i-z_k}{z_i-qz_k}=\frac{\left(z_i-i\tau \kappa \lambda ^{\frac{1}{2}}q^{\frac{1}{2}}\right)\left(z_i+i\kappa \lambda ^{\frac{1}{2}}q^{\frac{1}{2}}\right)}{\left(q^{\frac{1}{2}}z_i+i\tau \kappa \lambda ^{\frac{1}{2}}\right)\left(q^{\frac{1}{2}}z_i-i\kappa \lambda ^{\frac{1}{2}}\right)}.$$
(28)
Solutions of the BA equations give the wave functions of the Harper equation at band edges and centers. Their energy is given by
$$E=i\mu \lambda ^{\frac{1}{2}}q^Q(q-q^{-1})\left[\kappa \sum _{i=1}^{Q-1}z_i-i\frac{\lambda ^{\frac{1}{2}}-\tau \lambda ^{-\frac{1}{2}}}{q^{1/2}-q^{-1/2}}\right].$$
(29)
The latter is obtained by evaluating the leading terms at $`z\to \mathrm{\infty }`$ in Eq.(21).
At first glance, the BA equations (28) look even more complicated than the original Harper equation. This is true as long as $`Q`$ is not large. However, the BA equations (28) provide a better description of the problem in the most interesting, incommensurate, limit $`P,Q\to \mathrm{\infty }`$, $`\eta \to `$ irrational number.
At $`P`$ odd the BA equations admit an exact zero mode solution for $`E=0`$ . For a quasiclassical analysis of the BA equations at $`\eta \to 0`$, see .
Below, we consider the most interesting case $`\lambda =1`$.
5. String hypothesis. Here we formulate the string hypothesis which allows us to obtain the solutions of the BA equations (at $`\lambda =1`$) with an accuracy $`𝒪(Q^{-2})`$. The hypothesis is based on the analysis of singularities of the BA, and is supported by extensive numerics . Here we just formulate the string hypothesis and present some immediate consequences. To proceed, we need the notion of strings.
A string of spin $`l`$, parity $`v_l=\pm 1`$ and center $`x_l`$ is a set of $`2l+1`$ complex numbers:
$$z_m^{(l)}=v_lx_lq_l^m,\qquad m=-l,-l+1,\mathrm{\dots },l,$$
(30)
which have a common modulus $`x_l>0`$ (the center of the string) and a parity $`v_l=\pm 1`$, and differ by powers of $`q_l`$.
Now we are ready to formulate the string hypothesis — a central concept of this analysis:
* At large $`Q`$ each solution of the BA consists of strings.
* Each solution can be labeled by spins $`\{l_j,l_{j-1},\mathrm{\dots }\}`$ and parities $`\{v_j,v_{j-1},\mathrm{\dots }\}`$ of strings, such that the total number of roots is $`\sum _{i=1}^k(2l_i+1)=Q-1`$. We refer to the set of lengths and parities of the strings constituting the solution for a given energy level as the string content of this level. Not more than two strings with a given length and parity may be found in the string content of a solution.
* The length of the longest string in a string content of a given energy level is the Hall conductance of the corresponding band: $`2l+1=|\sigma (k)|`$. The period of this string $`q_l=e^{i\pi \eta _l}`$ is uniquely determined by the requirement that $`\eta _l=\frac{P_l}{2l+1}`$ is the best approximant for the period $`\eta `$, so that $`q_l^{2l+1}=\pm 1`$.
* The parity of the longest string is $`v_l=iq_l^{l+1/2}\nu =(-1)^{\left[\eta l\right]}\nu `$. The center of the longest string is $`x_l=1+𝒪(1/l)`$.
* The remaining roots of the state form a solution of the BA equations for the parent state of the parent generation.
The string hypothesis states that
$$\mathrm{\Psi }^{\mathrm{daughter}}(z)\simeq \prod _{m=-l}^{l}(z-x_lv_lq_l^m)\,\mathrm{\Psi }^{\mathrm{parent}}(z).$$
(31)
and that $`2l+1=|\sigma (k)|`$ is the absolute value of the Hall conductance of the ”daughter” band.
This simple hypothesis allows one to construct a complete set of wave functions by virtue of the iterative procedure:
Starting from an irrational $`\eta `$, we first generate a hierarchical tree. Let us choose a branch of the tree $`J`$ and find the Hall conductances of the bands belonging to the branch, down to the origin. This determines the lengths, periods and parities of the string content, and therefore the zeros of the wave function, for the chosen branch.
The only unknowns are the centers of the strings. They, however, approach 1 with accuracy $`𝒪(1/l)`$. The accuracy of the recursive eq.(31) is $`𝒪(l^{-2})`$. The string content of a state (i.e., lengths and parities of strings) is a topological characteristic, while the centers of strings are not.
The string hierarchy has been obtained by an analysis of singularities of the BA (see ). As expected, the set of possible lengths of strings is the set of Takahashi-Suzuki numbers known in the Bethe-Ansatz literature. Eq.(8) provides a relation between them and the Hall conductances.
To illustrate the iterative procedure let us consider the bottom edges of the lowest band of the spectrum and choose $`\eta =\frac{\sqrt{5}-1}{2}`$ to be the golden mean. The sequence of rational approximants is given by ratios of subsequent Fibonacci numbers $`\eta _i=\frac{F_{i-1}}{F_i}`$, where the $`F_i`$ are Fibonacci numbers ($`F_i=F_{i-2}+F_{i-1}`$ and $`F_0=F_1=1`$). The set of Hall conductances = Takahashi-Suzuki numbers = allowed lengths of strings again consists of Fibonacci numbers: $`Q_{k-1}=F_{k-1}`$. The considered branch of the hierarchical tree connects edges ($`\tau =1,\kappa =1,\mu =1`$) of the lowest bands of the generations $`\eta _{3k}=F_{3k-1}/F_{3k}`$. Their string content consists of pairs of strings with lengths $`2l_n+1=F_{3n+1}`$, $`n=0,1,\mathrm{\dots },k-1`$, parities $`+1`$ and inverted centers $`x_k`$ and $`x_k^{-1}`$. According to the string hypothesis the wave function of this state is
$`\mathrm{\Psi }(z|\eta _{3k})\simeq {\displaystyle \prod _{n=0}^{k-1}}{\displaystyle \prod _{j=-l_n}^{l_n}}(z-x_{l_n}q_{l_n}^j)(z-x_{l_n}^{-1}q_{l_n}^j).`$ (32)
The centers of the strings $`x_{l_n}`$ are close to 1 but cannot be obtained from the string hypothesis alone.
6. Gaps. A direct application of the string hypothesis, suggested by J. Bellissard, is the calculation of the gap distribution $`\rho (D)`$, i.e., the number of gaps with magnitude between $`D`$ and $`D+dD`$. The result is $`\rho (𝒟)\sim 𝒟^{-3/2}`$, which essentially means that the width of the smallest gaps scales as $`D_{\mathrm{min}}\sim 1/Q^2`$. This result confirms the numerical analysis of Ref. .
7. Scaling hypothesis and finite size corrections: stating the problem. The string hypothesis solves the Bethe Ansatz equations with an accuracy $`𝒪(Q^{-2})`$, i.e., is asymptotically exact in the incommensurate limit $`Q\to \mathrm{\infty }`$. It alone gives the explicit asymptotically exact form of wave functions and provides the hierarchical tree and the topology of the Cantor-set spectrum. However, the most interesting quantitative characteristics of the spectrum are hidden in the finite size corrections of order $`Q^{-2}`$ to the bare values of the strings. Among them are the anomalous dimensions of the spectrum $`ϵ_𝒥`$. They depend on the branch and on the arithmetic of $`\eta `$ (according to ref. the exponents $`ϵ_J`$ vary between $`0.171`$ and $`0.374`$ for the golden mean $`\eta =\frac{\sqrt{5}-1}{2}`$). Can anomalous dimensions be found analytically, by computing finite size corrections to the string solutions? This is a technically involved but fascinating and important problem. Its solution would provide the ultimate information on the spectrum and on the most interesting physical properties of the system. It also may suggest a conformal bootstrap and operator algebra approach, which has been proven effective for finding the finite size corrections of integrable systems without actually solving the Bethe Ansatz.
8. I would like to acknowledge inspiring numerics provided by Y. Hatsugai during the initial stages of formulating the string hypothesis, and useful discussions with Y. Avron, G. Huber, M. Kohmoto, R. Seiler and A. Zabrodin. I also would like to thank M. Ninomiya for the hospitality I received at YITP, where the text of this lecture was written.
# DESY 99-136 hep-ph/9909338 Theory and Phenomenology of Instantons at HERA Contribution to the Ringberg Workshop “New Trends in HERA Physics”, Ringberg Castle, Tegernsee, Germany, May 30 - June 4, 1999; to be published in the Proceedings.
## 1 Introduction
It is a remarkable fact that non-Abelian gauge fields in four Euclidean space-time dimensions carry an integer topological charge. Instantons (anti-instantons) are classical solutions of the Euclidean Yang-Mills equations and also represent the simplest non-perturbative fluctuations of gauge fields with topological charge $`+1`$ ($`-1`$). In QCD, instantons are widely believed to play an essential rôle at long distance: They provide a solution of the axial $`U(1)`$ problem , and there seems to be some evidence that they induce chiral symmetry breaking and affect the light hadron spectrum . Nevertheless, a direct experimental observation of instanton-induced effects is still lacking up to now.
Deep-inelastic scattering at HERA offers a unique window to discover QCD-instanton induced events directly through their characteristic final-state signature and a sizeable rate, calculable within instanton perturbation theory . It is the purpose of the present contribution to review our theoretical and phenomenological investigation of the prospects to trace QCD-instantons at HERA.
The outline of this review is as follows:
We start in Sect. 2 with a short introduction to instanton physics, contentrating especially on two important building blocks of instanton perturbation theory, namely the instanton size distribution and the instanton-anti-instanton interaction. A recent comparison of the perturbative predictions of these quantities with their non-perturbative measurements on the lattice is emphasized. It allows to extract important information about the range of validity of instanton perturbation theory. The special rôle of deep-inelastic scattering in instanton physics is outlined in Sect. 3: The Bjorken variables of instanton induced hard scattering processes probe the instanton size distribution and the instanton-anti-instanton interaction . By final state cuts in these variables it is therefore possible to stay within the region of applicability of instanton perturbation theory, inferred from our comparison with the lattice above. Moreover, within this fiducial kinematical region, one is able to predict the rate and the (partonic) final state. We discuss the properties of the latter as inferred from our Monte Carlo generator QCDINS . In Sect. 4, we report on a possible search strategy for instanton-induced processes in deep-inelastic scattering at HERA .
## 2 Instantons in the QCD Vacuum
In this section let us start with a short introduction to instantons and their properties, both in the perturbative as well as in the non-perturbative regime. We shall concentrate on those aspects that will be important for the description of instanton-induced scattering processes in deep-inelastic scattering in Sect. 3. In particular, we shall report on our recent determination of the region of applicability of instanton perturbation theory for the instanton size distribution and the instanton-anti-instanton interaction . Furthermore, we elucidate the connection of instantons with the axial anomaly.
Instantons , being solutions of the Yang-Mills equations in Euclidean space, are minima of the Euclidean action $`S`$. Therefore, they appear naturally as generalized saddle-points in the Euclidean path integral formulation of QCD, according to which the expectation value of an observable $`𝒪`$ is given by
$`\langle 𝒪[A,\psi ,\overline{\psi }]\rangle =\frac{1}{Z}\int [dA][d\psi ][d\overline{\psi }]\,𝒪[A,\psi ,\overline{\psi }]\,\mathrm{e}^{-S[A,\psi ,\overline{\psi }]},`$ (1)
where the normalization,
$`Z=\int [dA][d\psi ][d\overline{\psi }]\,\mathrm{e}^{-S[A,\psi ,\overline{\psi }]},`$ (2)
denotes the partition function. Physical observables (e.g. $`S`$-matrix elements) are obtained from the Euclidean expectation values (1) by analytical continuation to Minkowski space-time. In particular, the partition function (2) corresponds physically to the vacuum-to-vacuum amplitude.
Instanton perturbation theory results from the generalized saddle-point expansion of the path integral (1) about non-trivial minima of the Euclidean action<sup>1</sup><sup>1</sup>1Perturbative QCD is obtained from an expansion about the perturbative vacuum solution, i.e. vanishing gluon field and vanishing quark fields and thus vanishing Euclidean action.. It can be shown that these non-trivial solutions have integer topological charge,
$`Q\equiv \frac{\alpha _s}{2\pi }\int d^4x\frac{1}{2}\mathrm{tr}(F_{\mu \nu }\stackrel{~}{F}_{\mu \nu })=\pm 1,\pm 2,\mathrm{\dots },`$ (3)
and that their action is a multiple of $`2\pi /\alpha _s`$,
$`S\equiv \int d^4x\frac{1}{2}\mathrm{tr}(F_{\mu \nu }F_{\mu \nu })=\frac{2\pi }{\alpha _s}|Q|=\frac{2\pi }{\alpha _s}(1,2,\mathrm{\dots }).`$ (4)
In the weak coupling regime, $`\alpha _s\ll 1`$, the dominant saddle-point has $`|Q|=1`$. The solution corresponding to $`Q=1`$ is given by<sup>2</sup><sup>2</sup>2In Eq. (5) and throughout the paper we use the abbreviations, $`v\equiv v_\mu \sigma ^\mu `$, $`\overline{v}\equiv v_\mu \overline{\sigma }^\mu `$ for any four-vector $`v_\mu `$. (singular gauge)
$`A_\mu ^{(I)}(x;\rho ,U,x_0)=\frac{\mathrm{i}}{g}\frac{\rho ^2}{(x-x_0)^2}U\frac{\sigma _\mu (\overline{x}-\overline{x}_0)-(x_\mu -x_{0\mu })}{(x-x_0)^2+\rho ^2}U^{\dagger },`$ (5)
where the “collective coordinates” $`\rho `$, $`x_0`$ and $`U`$ denote the size, position and colour orientation of the solution. The solution (5) has been called “instanton” ($`I`$), since it is localized in Euclidean space and time (“instantaneous”), as can be seen from its Lagrange density,
$`\mathcal{L}\left(A_\mu ^{(I)}(x;\rho ,U,x_0)\right)=\frac{12}{\pi \alpha _s}\frac{\rho ^4}{((x-x_0)^2+\rho ^2)^4},\qquad S\left[A_\mu ^{(I)}\right]=\frac{2\pi }{\alpha _s}.`$ (6)
It appears as a spherically symmetric bump of size $`\rho `$ centred at $`x_0`$.
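The normalization quoted in (6) is a one-line check: integrating the Lagrange density over Euclidean space with the radial measure $`d^4x=2\pi ^2r^3dr`$ indeed gives $`2\pi /\alpha _s`$ (our verification):

```python
import sympy as sp

r, rho, alpha_s = sp.symbols('r rho alpha_s', positive=True)

# Lagrange density (6) of an instanton centred at the origin
L = 12 / (sp.pi * alpha_s) * rho**4 / (r**2 + rho**2)**4

# integrate over 3-spheres of radius r: surface area 2*pi^2*r^3
S = sp.integrate(2 * sp.pi**2 * r**3 * L, (r, 0, sp.oo))
print(sp.simplify(S))   # -> 2*pi/alpha_s
```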
The natural starting point of instanton perturbation theory is the evaluation of the instanton contribution to the partition function (2), by expanding the path integral about the instanton (5). Since the action is independent of the collective coordinates, one has to integrate over them and obtains the $`I`$-contribution $`Z^{(I)}`$, normalized to the topologically trivial perturbative contribution $`Z^{(0)}`$, in the form<sup>3</sup><sup>3</sup>3For notational simplicity, we call the $`I`$-position in the following $`x`$ (instead of $`x_0`$).
$`\frac{1}{Z^{(0)}}\frac{dZ^{(I)}}{d^4x}=\int _0^{\mathrm{\infty }}𝑑\rho \,D_m(\rho )\int 𝑑U.`$ (7)
The size distribution $`D_m(\rho )`$ is known in the framework of $`I`$-perturbation theory for small $`\alpha _s(\mu _r)\mathrm{ln}(\rho \mu _r)`$ and small $`\rho m_i(\mu _r)`$, where $`m_i(\mu _r)`$ are the running quark masses and $`\mu _r`$ denotes the renormalization scale. After its pioneering evaluation at 1-loop for $`N_c=2`$ and its generalization to arbitrary $`N_c`$, it is meanwhile available in 2-loop renormalization-group (RG) invariant form, i.e. $`D^{-1}dD/d\mathrm{ln}(\mu _r)=𝒪(\alpha _s^2)`$,
$`{\displaystyle \frac{dn_I}{d^4xd\rho }}=D_m(\rho )=D(\rho ){\displaystyle \underset{i=1}{\overset{n_f}{}}}(\rho m_i(\mu _r))(\rho \mu _r)^{n_f\gamma _0\frac{\alpha _{\overline{\mathrm{MS}}}(\mu _r)}{4\pi }},`$ (8)
with the reduced size distribution
$`D(\rho )={\displaystyle \frac{d_{\overline{\mathrm{MS}}}}{\rho ^5}}\left({\displaystyle \frac{2\pi }{\alpha _{\overline{\mathrm{MS}}}(\mu _r)}}\right)^{2N_c}\mathrm{exp}\left({\displaystyle \frac{2\pi }{\alpha _{\overline{\mathrm{MS}}}(\mu _r)}}\right)(\rho \mu _r)^{\beta _0+(\beta _14N_c\beta _0)\frac{\alpha _{\overline{\mathrm{MS}}}(\mu _r)}{4\pi }}.`$ (9)
Here, $`\gamma _0`$ is the leading anomalous dimension coefficient, $`\beta _i`$ ($`i=0,1`$) denote the leading and next-to-leading $`\beta `$-function coefficients and $`d_{\overline{\mathrm{MS}}}`$ is a known constant.
The powerlaw behaviour of the (reduced) $`I`$-size distribution,
$$D(\rho )\sim \rho ^{\beta _0-5+𝒪(\alpha _s)},$$
(10)
generically causes the dominant contributions to the $`I`$-size integrals (e.g. Eq. (7)) to originate from the infrared (IR) regime (large $`\rho `$) and thus often spoils the applicability of $`I`$-perturbation theory. Since the $`I`$-size distribution not only appears in the vacuum-to-vacuum amplitude (7), but also in generic instanton-induced scattering amplitudes (c.f. Sect. 3) and matrix elements, it is extremely important to know the region of validity of the perturbative result (9).
Crucial information on the range of validity comes from a recent high-quality lattice investigation on the topological structure of the QCD vacuum (for $`n_f=0`$). In order to make $`I`$-effects visible in lattice simulations with given lattice spacing $`a`$, the raw data have to be “cooled” first. This procedure is designed to filter out (dominating) fluctuations of short wavelength $`𝒪(a)`$, while affecting the topological fluctuations of much longer wavelength $`\rho \gg a`$ comparatively little. After cooling, an ensemble of $`I`$’s and $`\overline{I}`$’s can clearly be seen (and studied) as bumps in the Lagrange density and in the topological charge density (c.f. Fig. 1).
Figure 2 (top) illustrates the striking agreement in shape and normalization of $`2D(\rho )`$ with the continuum limit of the UKQCD lattice data for $`dn_{I+\overline{I}}/d^4xd\rho `$, for $`\rho \lesssim 0.3-0.35`$ fm. The predicted normalization of $`D(\rho )`$ is very sensitive to $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(0)}`$, for which we took the most accurate (non-perturbative) result from ALPHA . The theoretically favoured choice $`\mu _r\rho =𝒪(1)`$ in Fig. 2 (top) optimizes the range of agreement, extending right up to the peak around $`\rho \simeq 0.5`$ fm. However, due to its two-loop renormalization-group invariance, $`D(\rho )`$ is almost independent of $`\mu _r`$ for $`\rho \lesssim 0.3`$ fm over a wide $`\mu _r`$ range. Hence, for $`\rho \lesssim 0.3`$ fm, there is effectively no free parameter involved.
Turning back to the perturbative size distribution (8) in QCD with $`n_f\ne 0`$ light quark flavours, we would like to comment on the apparent suppression of the instanton-induced vacuum-to-vacuum amplitude (7) for small quark masses, $`\rho m_i\ll 1`$. It is related to the axial anomaly, according to which any gauge field fluctuation with topological charge $`Q`$ must be accompanied by a corresponding change in chirality,
$$\mathrm{\Delta }Q_{5i}=2Q;\qquad i=1,\mathrm{\dots },n_f.$$
(11)
Thus, pure vacuum-to-vacuum transitions induced by instantons are expected to be rare. On the other hand, scattering amplitudes or Green’s functions corresponding to anomalous chirality violation (c.f. Fig. 3) are expected to receive their main contribution due to instantons and do not suffer from any mass suppression.
Let us illustrate this by the simplest example of one light flavour ($`n_f=1`$): The instanton contribution to the fermionic two-point function can be written as
$`\langle \psi (x_1)\overline{\psi }(x_2)\rangle ^{(I)}\sim \int d^4x\int _0^{\mathrm{\infty }}𝑑\rho \,D(\rho )\int 𝑑U\,(\rho m)\,S^{(I)}(x_1,x_2;x,\rho ,U).`$ (12)
Expressing the quark propagator in the $`I`$-background, $`S^{(I)}`$, in terms of the spectrum of the Dirac operator in the $`I`$-background, which has exactly one right-handed zero mode<sup>4</sup><sup>4</sup>4According to an index theorem , the number $`n_{R/L}`$ of right/left-handed zero modes of the Dirac operator in the background of a gauge field with topological charge $`Q`$ satisfies $`n_R-n_L=Q`$. For the instanton: $`n_R=Q=1`$; $`n_L=0`$. $`\kappa _0`$ ,
$`\mathrm{i}\not{D}^{(I)}\kappa _n`$ $`=`$ $`\lambda _n\kappa _n;\mathrm{with}\lambda _0=0\mathrm{and}\lambda _n\ne 0\mathrm{for}n\ne 0,`$ (13)
$`S^{(I)}(x_1,x_2;\mathrm{\dots })`$ $`=`$ $`\frac{\kappa _0(x_1;\mathrm{\dots })\kappa _0^{\dagger }(x_2;\mathrm{\dots })}{m}+\underset{n\ne 0}{\sum }\frac{\kappa _n(x_1;\mathrm{\dots })\kappa _n^{\dagger }(x_2;\mathrm{\dots })}{m+i\lambda _n},`$ (14)
we see that for $`m\to 0`$ only the zero mode contribution survives in Eq. (12),
$`\langle \psi (x_1)\overline{\psi }(x_2)\rangle ^{(I)}\sim \int d^4x\int _0^{\mathrm{\infty }}𝑑\rho \,D(\rho )\int 𝑑U\,\rho \,\kappa _0(x_1;x,\rho ,U)\kappa _0^{\dagger }(x_2;x,\rho ,U).`$ (15)
Note that $`\kappa _0\kappa _0^{\dagger }`$ has $`Q_5=2`$, exactly as required by the anomaly (11). For the realistic case of three light flavours ($`n_f=3`$), the generalization of Eq. (15) leads to non-vanishing, chirality violating six-point functions corresponding to the anomalous processes shown in Fig. 3.
Finally, let us turn to the interaction between instantons and anti-instantons. In the instanton-anti-instanton ($`I\overline{I}`$) valley approach it is determined in the following way: Starting from the infinitely separated ($`R\to \mathrm{\infty }`$) $`I\overline{I}`$-pair,
$`A_\mu ^{(I\overline{I})}(x;\rho ,\overline{\rho },U,R)\stackrel{R\to \mathrm{\infty }}{=}A_\mu ^{(I)}(x;\rho ,\mathrm{𝟏})+A_\mu ^{(\overline{I})}(x-R;\overline{\rho },U)`$ (16)
one looks for a constrained solution, which is the minimum of the action for fixed collective coordinates, $`\rho ,\overline{\rho },U,R`$. The valley equations have meanwhile been solved for arbitrary separation $`R`$ and arbitrary relative color orientation $`U`$ . Due to classical conformal invariance, the $`I\overline{I}`$-action $`S^{(I\overline{I})}`$ and the interaction $`\mathrm{\Omega }`$,
$`S[A_\mu ^{(I\overline{I})}]={\displaystyle \frac{4\pi }{\alpha _s}}S^{(I\overline{I})}(\xi ,U)={\displaystyle \frac{4\pi }{\alpha _s}}\left(1+\mathrm{\Omega }(\xi ,U)\right)`$ (17)
depend on the sizes and the separation only through the “conformal separation”,
$$\xi =\frac{R^2}{\rho \overline{\rho }}+\frac{\overline{\rho }}{\rho }+\frac{\rho }{\overline{\rho }}.$$
(18)
Because of the smaller action, the most attractive relative orientation (c.f. Fig. 4) dominates in the weak coupling regime. Thus, in this regime, nothing prevents instantons and anti-instantons from approaching each other and annihilating.
From a perturbative expansion of the path integral about the $`I\overline{I}`$-valley, one obtains the contribution of the $`I\overline{I}`$-valley to the partition function (2) in the form
$`\frac{1}{Z^{(0)}}\frac{dZ^{(I\overline{I})}}{d^4x}=\int d^4R\int _0^{\mathrm{\infty }}𝑑\rho \int _0^{\mathrm{\infty }}𝑑\overline{\rho }\,D_{I\overline{I}}(R,\rho ,\overline{\rho }),`$ (19)
where the group-averaged distribution of $`I\overline{I}`$-pairs, $`D_{I\overline{I}}(R,\rho ,\overline{\rho })`$, is known, for small $`\alpha _s`$, $`m_i`$, and for sufficiently large $`R`$ ,
$`\frac{dn_{I\overline{I}}}{d^4xd^4Rd\rho d\overline{\rho }}\equiv D_{I\overline{I}}(R,\rho ,\overline{\rho })=`$
$`D(\rho )D(\overline{\rho })\int 𝑑U\,\mathrm{exp}\left[-\frac{4\pi }{\alpha _{\overline{\mathrm{MS}}}(s_{I\overline{I}}/\sqrt{\rho \overline{\rho }})}\mathrm{\Omega }(\frac{R^2}{\rho \overline{\rho }},\frac{\overline{\rho }}{\rho },U)\right]\omega (\xi ,U)^{2n_f}.`$ (20)
Here, the scale factor $`s_{I\overline{I}}=𝒪(1)`$ parametrizes the residual scheme dependence and
$`\omega =\int d^4x\,\kappa _{0I}^{\dagger }(x;\mathrm{\dots })[\mathrm{i}\not{D}^{(I\overline{I})}]\kappa _{0\overline{I}}(x-R;\mathrm{\dots })`$ (21)
denotes the fermionic interaction induced by the quark zero modes.
We will see below in Sect. 3 that the distribution (20) is a crucial input for instanton-induced scattering cross sections. Thus, it is extremely welcome that the range of validity of (20) can be inferred from a comparison with recent lattice data. Fig. 2 (bottom) displays the continuum limit of the UKQCD data for the distance distribution of $`I\overline{I}`$-pairs, $`dn_{I\overline{I}}/d^4xd^4R`$, along with the theoretical prediction . The latter involves (numerical) integrations of $`\mathrm{exp}(-4\pi /\alpha _s\,\mathrm{\Omega })`$ over the $`I\overline{I}`$ relative color orientation $`(U)`$, as well as $`\rho `$ and $`\overline{\rho }`$. For the respective weight $`D(\rho )D(\overline{\rho })`$, a Gaussian fit to the lattice data was used in order to avoid convergence problems at large $`\rho ,\overline{\rho }`$. We note a good agreement with the lattice data down to $`I\overline{I}`$-distances $`R/\rho \simeq 1`$. These results imply first direct support for the validity of the “valley”-form of the interaction $`\mathrm{\Omega }`$ between $`I\overline{I}`$-pairs.
In summary: The striking agreement of the UKQCD lattice data with $`I`$-perturbation theory is a very interesting result by itself. The extracted lattice constraints on the range of validity of $`I`$-perturbation theory can be directly translated into a “fiducial” kinematical region for our predictions in deep-inelastic scattering, as shall be discussed in the next section.
## 3 Instantons in Deep-Inelastic Scattering
In this section we shall elucidate the special rôle of deep-inelastic scattering for instanton physics. We shall outline that only small size instantons, which are theoretically under control, are probed in deep-inelastic scattering . Furthermore, we shall show that suitable cuts in the Bjorken variables of instanton-induced scattering processes<sup>5</sup><sup>5</sup>5Our approach, focussing on the $`I`$-induced final state, differs substantially from an exploratory paper on the $`I`$-contribution to the (inclusive) parton structure functions. Ref. involves implicit integrations over the Bjorken variables of the $`I`$-induced scattering process. Unlike our approach, the calculations in Ref. are therefore bound to break down in the interesting domain of smaller $`x_{\mathrm{Bj}}\lesssim 0.3`$, where most of the data are located. allow us to stay within the range of validity of instanton perturbation theory, as inferred from the lattice . We review the basic theoretical inputs to QCDINS, a Monte Carlo generator for instanton-induced processes in deep-inelastic scattering . Finally, we discuss the final state characteristics of instanton-induced events.
Let us consider a generic $`I`$-induced process in deep-inelastic scattering (DIS),
$`\gamma ^{*}+g\to {\displaystyle \sum _{\mathrm{flavours}}^{n_f}}\left[\overline{q}_R+q_R\right]+n_g\,g,`$ (22)
which violates chirality according to the anomaly (11). The corresponding scattering amplitude is calculated as follows : The respective Green’s function is first set up according to instanton perturbation theory in Euclidean position space, then Fourier transformed to momentum space, LSZ amputated, and finally continued to Minkowski space where the actual on-shell limits are taken. Again, the amplitude appears in the form of an integral over the collective coordinates ,
$`𝒯_\mu ^{(I)(2n_f+n_g)}={\displaystyle \int _0^{\mathrm{\infty }}d\rho \,D(\rho )\int dU\,𝒜_\mu ^{(I)(2n_f+n_g)}(\rho ,U)}.`$ (23)
In leading order, the momentum dependence of the amplitude for fixed $`\rho `$ and $`U`$,
$`𝒜_\mu ^{(I)(2n_f+n_g)}(q,p;k_1,k_2,\mathrm{},k_{2n_f},p_1,\mathrm{},p_{n_g};\rho ,U),`$ (24)
factorizes, as illustrated in Fig. 5 for the case $`n_f=1`$: The amplitude decomposes into a product of Fourier transforms of classical fields (instanton gauge fields; quark zero modes, e.g. as in Eq. (15)) and effective photon-quark “vertices” $`𝒱_\mu ^{(t(u))}(q,k_{1(2)};\rho ,U)`$, involving the (non-zero mode) quark propagator in the instanton background. These vertices are most important in the following argumentation since they are the only place where the space-like virtuality $`-q^2=Q^2>0`$ of the photon enters.
After a long and tedious calculation one finds for these vertices,
$`𝒱_\mu ^{(t)}(q,k_1;\rho ,U)`$ $`=`$ $`2\pi \mathrm{i}\rho ^{3/2}\left[ϵ\sigma _\mu \overline{V}(q,k_1;\rho )U^{\dagger }\right],`$ (25)
$`𝒱_\mu ^{(u)}(q,k_2;\rho ,U)`$ $`=`$ $`2\pi \mathrm{i}\rho ^{3/2}\left[UV(q,k_2;\rho )\overline{\sigma }_\mu ϵ\right],`$ (26)
where
$`V(q,k;\rho )`$ $`=`$ $`\left[\frac{(\not{q}-\not{k})}{(q-k)^2}+\frac{\not{k}}{2\,q\cdot k}\right]\rho \sqrt{-(q-k)^2}\,K_1\left(\rho \sqrt{-(q-k)^2}\right)`$
$`-\frac{\not{k}}{2\,q\cdot k}\,\rho \sqrt{-q^2}\,K_1\left(\rho \sqrt{-q^2}\right).`$ (27)
Here comes the crucial observation: Due to the (large) space-like virtualities $`Q^2=-q^2>0`$ and $`Q^{\prime \,2}=-(q-k)^2\geq 0`$ in DIS and the exponential decrease of the Bessel $`K`$-function for large arguments in Eq. (27), the $`I`$-size integration in our perturbative expression (23) for the amplitude is effectively cut off. Only small size instantons, $`\rho \lesssim 1/𝒬`$, are probed in DIS and the predictivity of $`I`$-perturbation theory is retained for sufficiently large $`𝒬=\mathrm{min}(Q,Q^{\prime })`$.
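The cutoff can be made quantitative with a few lines of code (ours; the hard scale $`Q=10`$ GeV below is purely illustrative). The weight $`\rho \sqrt{Q^2}K_1(\rho \sqrt{Q^2})`$ multiplying the size integrand tends to 1 for $`\rho Q\to 0`$ and falls like $`e^{-\rho Q}`$ once $`\rho `$ exceeds $`1/Q`$:

```python
from scipy.special import k1  # modified Bessel function K_1

def size_weight(rho, Q):
    """rho*Q*K_1(rho*Q): -> 1 for rho*Q -> 0, ~ exp(-rho*Q) for rho*Q >> 1."""
    x = rho * Q
    return x * k1(x)

Q = 10.0        # GeV, illustrative hard scale
hbarc = 0.1973  # GeV*fm, to convert sizes from fm to GeV^-1
for rho_fm in (0.02, 0.05, 0.1, 0.2, 0.3):
    print(f"rho = {rho_fm:4.2f} fm -> weight = {size_weight(rho_fm / hbarc, Q):.2e}")
```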
The leading<sup>6</sup><sup>6</sup>6$`I`$-induced processes initiated by a quark from the proton are suppressed by a factor of $`\alpha _s^2`$ with respect to the gluon initiated process . This fact, together with the high gluon density in the relevant kinematical domain at HERA, justifies neglecting quark initiated processes. instanton-induced process in the DIS regime of $`e^\pm P`$ scattering for large photon virtuality $`Q^2`$ is illustrated in Fig. 6. The inclusive $`I`$-induced cross section can be expressed as a convolution , involving integrations over the target-gluon density, $`f_g`$, the virtual photon flux, $`P_\gamma ^{}`$, and the known flux $`P_q^{}^{(I)}`$ of the virtual quark $`q^{\prime }`$ in the $`I`$-background (c.f. Fig. 6).
The crucial instanton-dynamics resides in the so-called instanton-subprocess (c.f. dashed box in Fig. 6) with its associated total cross section $`\sigma _{q^{\prime }g}^{(I)}(Q^{\prime },x^{\prime })`$, depending on its own Bjorken variables,
$$Q^{\prime \,2}=-q^{\prime \,2}\geq 0;\qquad x^{\prime }=\frac{Q^{\prime \,2}}{2\,p\cdot q^{\prime }}\leq 1.$$
(28)
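For concreteness, the subprocess variables of Eq. (28) are plain Minkowski invariants; a minimal helper (ours, with an illustrative momentum assignment) reads:

```python
def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def subprocess_variables(p, qp):
    """(Q'^2, x') of Eq. (28) for gluon momentum p and virtual-quark momentum q'."""
    Q2p = -mdot(qp, qp)                      # space-like q': Q'^2 = -q'^2 >= 0
    return Q2p, Q2p / (2.0 * mdot(p, qp))

p  = (5.0, 0.0, 0.0, 5.0)   # massless gluon along +z (illustrative)
qp = (2.0, 0.0, 0.0, -3.0)  # space-like virtual quark, q'^2 = -5
print(subprocess_variables(p, qp))          # (5.0, 0.1): indeed 0 <= x' <= 1
```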
The cross section is obtained in the form of an integral over $`I\overline{I}`$ collective coordinates<sup>7</sup><sup>7</sup>7Both an instanton and an anti-instanton enter here, since cross sections result from taking the modulus squared of an amplitude in the single $`I`$-background. In the present context, the $`I\overline{I}`$-interaction $`\mathrm{\Omega }`$ takes into account the exponentiation of final state gluons .,
$`\sigma _{q^{\prime }g}^{(I)}`$ $`\sim `$ $`{\displaystyle \int d^4R\int _0^{\mathrm{\infty }}d\rho \int _0^{\mathrm{\infty }}d\overline{\rho }\,D(\rho )D(\overline{\rho })\int dU\,\mathrm{e}^{-\frac{4\pi }{\alpha _s}\mathrm{\Omega }\left(\frac{R^2}{\rho \overline{\rho }},\frac{\overline{\rho }}{\rho },U\right)}\,\omega \left(\frac{R^2}{\rho \overline{\rho }},\frac{\overline{\rho }}{\rho },U\right)^{2n_f-1}}`$ (29)
$`\times \mathrm{e}^{-Q^{\prime }(\rho +\overline{\rho })}\,\mathrm{e}^{\mathrm{i}(p+q^{\prime })\cdot R}\,\{\mathrm{\dots }\}.`$
Thus, as anticipated in Sect. 2, the group averaged distribution of $`I\overline{I}`$-pairs (2) is closely related to the instanton-induced cross section. The lattice constraints on this quantity are therefore extremely useful.
Again, the quark virtuality $`Q^{\prime \,2}`$ cuts off large instantons. Hence, the integrals in (29) are finite. In fact, they are dominated by a unique saddle-point ,
$`U^{*}`$ $`=`$ $`\text{most attractive relative orientation};`$
$`\rho ^{*}`$ $`=`$ $`\overline{\rho }^{*}\sim 1/Q^{\prime };\qquad R^{*\,2}\sim 1/(p+q^{\prime })^2\;\Rightarrow \;{\displaystyle \frac{R^{*}}{\rho ^{*}}}\approx \sqrt{{\displaystyle \frac{x^{\prime }}{1-x^{\prime }}}},`$ (30)
from which it becomes apparent (c.f. Fig. 7) that the virtuality $`Q^{\prime }`$ controls the effective $`I`$-size, while $`x^{\prime }`$ determines the effective $`I\overline{I}`$-distance (in units of the size $`\rho ^{*}`$). By means of the discussed saddle-point correspondence (30), the lattice constraints may be converted into a “fiducial” region for our cross section predictions in DIS ,
$$\left.\begin{array}{l}\rho ^{*}\;\lesssim \;0.3\text{–}0.35\ \mathrm{fm};\\ \frac{R^{*}}{\rho ^{*}}\;\gtrsim \;1\end{array}\right\}\Leftrightarrow \left\{\begin{array}{l}Q^{\prime }/\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(n_f)}\;\gtrsim \;30.8;\\ x^{\prime }\;\gtrsim \;0.35.\end{array}\right.$$
(31)
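The translation between the two sides of (31) can be followed with elementary kinematics: for a massless gluon, $`(p+q^{\prime })^2=2p\cdot q^{\prime }+q^{\prime \,2}=Q^{\prime \,2}(1-x^{\prime })/x^{\prime }`$, so the saddle point (30) ties $`R^{*}/\rho ^{*}`$ to $`x^{\prime }`$ alone. The sketch below (ours) shows only this parametric trend; the precise numbers in (31) carry the $`𝒪(1)`$ factors of the full saddle-point analysis in the references:

```python
import math

def R_over_rho(xp):
    """Parametric saddle-point estimate R*/rho* ~ sqrt(x'/(1-x')), Eq. (30).
    O(1) prefactors of the full saddle point are omitted."""
    return math.sqrt(xp / (1.0 - xp))

for xp in (0.35, 0.5, 0.75, 0.9):
    print(f"x' = {xp:4.2f} -> R*/rho* ~ {R_over_rho(xp):.2f}")
# Larger x' means a larger I-Ibar distance in units of the size, i.e. a more
# dilute, more perturbative pair -- hence the lower cut on x' in (31).
```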
As illustrated in Fig. 7, $`\sigma _{q^{\prime }g}^{(I)}(Q^{\prime },x^{\prime })`$ grows very steeply for decreasing values of $`Q^{\prime \,2}`$ and $`x^{\prime }`$, respectively. The constraints (31) from lattice simulations are extremely valuable for making concrete predictions. Note that the fiducial region (31) and thus all our predictions for HERA never involve values of the $`I\overline{I}`$-interaction $`\mathrm{\Omega }`$ smaller than $`-0.5`$ (c.f. Fig. 4), a value often advocated as a lower reliability bound .
Let us present an update of our published prediction of the $`I`$-induced cross section at HERA. For the following *modified* standard cuts,
$`𝒞_{\mathrm{std}}`$ $`=`$ $`\{x^{\prime }\geq 0.35,\;Q^{\prime }\geq 30.8\,\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(n_f)},\;x_{\mathrm{Bj}}\geq 10^{-3},`$
$`0.1\leq y_{\mathrm{Bj}}\leq 0.9,\;Q\geq 30.8\,\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(n_f)}\},`$ (32)
involving the minimal cuts (31) extracted from lattice simulations, and an update of $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}`$ to the 1998 world average , we obtain
$`\sigma _{\mathrm{HERA}}^{(I)}(𝒞_{\mathrm{std}})`$ $`=`$ $`29.2_{-8.1}^{+9.9}\ \mathrm{pb}.`$ (33)
Note that the quoted errors in the cross section (33) only reflect the uncertainty in $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=219_{-23}^{+25}`$ MeV , on which $`\sigma ^{(I)}`$ is known to depend very strongly . We have also used now the 3-loop formalism to perform the flavour reduction of $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(n_f)}`$ from 5 to 3 light flavours. Finally, the value of $`\sigma ^{(I)}`$ is substantially reduced compared to the one in Ref. , since we preferred to introduce a further cut in $`Q^2`$, with $`Q_{\mathrm{min}}^2=Q_{\mathrm{min}}^{\prime \,2}`$, in order to ensure the smallness of the $`I`$-size $`\rho `$ in contributions associated with the second term in Eq. (27).
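To quantify “very strongly”: treating the quoted uncertainty as coming solely from $`\mathrm{\Lambda }`$, as stated, a back-of-the-envelope power-law fit (our arithmetic, not from the original) gives an effective exponent of roughly 3:

```python
import math

sigma, sig_up, sig_dn = 29.2, 9.9, 8.1    # pb, Eq. (33)
lam, lam_up, lam_dn = 219.0, 25.0, 23.0   # MeV, Lambda_MSbar^(5)

# Effective exponent n in sigma ~ Lambda^n, from each side of the error band:
n_up = math.log((sigma + sig_up) / sigma) / math.log((lam + lam_up) / lam)
n_dn = math.log(sigma / (sigma - sig_dn)) / math.log(lam / (lam - lam_dn))
print(f"effective exponent: {n_up:.1f} (upper), {n_dn:.1f} (lower)")  # ~2.7 and ~2.9
```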
Based on the predictions of $`I`$-perturbation theory, a Monte Carlo generator for simulating QCD-instanton induced scattering processes in DIS, QCDINS, has been developed . It is designed as an “add-on” hard process generator interfaced by default to the Monte Carlo generator HERWIG . Optionally, an interface to JETSET is also available for the final hadronization step.
QCDINS incorporates the essential characteristics that have been derived theoretically for the hadronic final state of $`I`$-induced processes: notably, the isotropic production of the partonic final state in the $`I`$-rest system ($`q^{}g`$ center of mass system in Fig. 6), flavour “democracy”, energy weight factors different for gluons and quarks, and a high average multiplicity $`2n_f+𝒪(1/\alpha _s)`$ of produced partons with an (approximate) Poisson distribution of the gluon multiplicity.
The characteristic features of the $`I`$-induced final state are illustrated in Fig. 8 displaying the lego plot of a typical event from QCDINS (c. f. also Fig. 6): Besides a single (not very hard) current jet, one expects an accompanying densely populated “hadronic band”. For $`x_{\mathrm{Bj}\,\mathrm{min}}\simeq 10^{-3}`$, say, it is centered around $`\overline{\eta }\simeq 2`$ and has a width of $`\mathrm{\Delta }\eta \simeq \pm 1`$. The band directly reflects the isotropic production of an $`I`$-induced “fireball” of $`𝒪(10)`$ partons in the $`I`$-rest system. Both the total transverse energy $`E_T\simeq 15`$ GeV and the charged particle multiplicity $`n_c\simeq 13`$ in the band are far higher than in normal DIS events. Finally, each $`I`$-induced event has to contain strangeness such that the number of $`K^0`$’s amounts to $`\simeq 2.2`$/event.
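The band itself is a purely kinematic consequence of boosted isotropy and can be reproduced with a toy simulation (ours; the multiplicity, boost and energy spectrum below are illustrative stand-ins, not QCDINS parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n_partons, y_boost = 10, 2.0   # illustrative fireball multiplicity and boost rapidity

def band_etas(n_events=2000):
    etas = []
    for _ in range(n_events):
        cos_t = rng.uniform(-1.0, 1.0, n_partons)   # isotropic in the I-rest frame
        e = rng.exponential(1.0, n_partons)         # toy energy spectrum (massless)
        pz, pt = e * cos_t, e * np.sqrt(1.0 - cos_t**2)
        # longitudinal boost by rapidity y_boost
        pz_lab = pz * np.cosh(y_boost) + e * np.sinh(y_boost)
        p_lab = np.sqrt(pz_lab**2 + pt**2)
        etas.extend(0.5 * np.log((p_lab + pz_lab) / (p_lab - pz_lab)))
    return np.asarray(etas)

eta = band_etas()
print(f"band center ~ {eta.mean():.2f}, width ~ {eta.std():.2f}")  # about 2.0 and 0.9
```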
## 4 Search Strategies
In a recent detailed study , based on QCDINS and standard DIS event generators, a number of basic (experimental) questions have been investigated: How to isolate an $`I`$-enriched data sample by means of cuts to a set of observables? How large are the dependencies on Monte-Carlo models, both for $`I`$-induced (INS) and normal DIS events? Can the Bjorken-variables $`(Q^{\prime },x^{\prime })`$ of the $`I`$-subprocess be reconstructed?
All the studies presented in Ref. were performed in the hadronic center of mass frame, which is a suitable frame of reference in view of a good distinction between $`I`$-induced and normal DIS events (c. f. Ref. ). The results are based on a study of the hadronic final state, with typical acceptance cuts of a HERA detector being applied.
Let us briefly summarize the main results of Ref. . While the “$`I`$-separation power”= $`\mathrm{INS}_{\mathrm{eff}(\mathrm{iciency})}/\mathrm{DIS}_{\mathrm{eff}(\mathrm{iciency})}`$ typically does not exceed $`𝒪(20)`$ for single observable cuts, a set of six observables (among $`\sim 30`$ investigated in Ref. ) with much improved joint $`I`$-separation power $`=𝒪(130)`$ could be found, see Fig. 9. These are (a) the $`p_T`$ of the current jet, (b) $`Q^2`$ as reconstructed from the final state, (c) the transverse energy and (d) the number of charged particles in the $`I`$-band region<sup>8</sup><sup>8</sup>8With the prime in Fig. 9 (c,d,e) indicating that the hadrons from the current jet have been subtracted., and (e,f) two shape observables that are sensitive to the event isotropy.
The systematics induced by varying the modelling of $`I`$-induced events remains surprisingly small (Fig. 10). In contrast, the modelling of normal DIS events in the relevant region of phase space turns out to depend quite strongly on the generators and parameters used . Despite a relatively high expected rate for $`I`$-events in the fiducial DIS region , a better understanding of the tails of distributions for normal DIS events turns out to be quite important.
IF-UFRJ/99
HAWKING RADIATION IN THE DILATON GRAVITY
WITH A NON-MINIMALLY COUPLED
SCALAR FIELD
M. ALVES
Instituto de Fisica - UFRJ
Rio de Janeiro-RJ
Brazil
ABSTRACT
We discuss the two-dimensional dilaton gravity with a scalar field as the source matter, where the coupling with gravity is given, besides the minimal one, through an external field. This coupling generalizes the conformal anomaly in the same way as those found in recent literature, but with a different motivation. The modification to the Hawking radiation is calculated explicitly and shows an additional term that introduces a dependence on the (effective) mass of the black hole.
PACS: 04.60.+n; 11.17.+y; 97.60.Lf
e-mail: MSALVES@if.ufrj.br
1 INTRODUCTION
It is widely recognized that two-dimensional models of gravity can give us a better understanding of the gravitational quantum effects. These models, derived either from a string motivated effective action or from some low-dimensional version of the Einstein equations , have a rich structure in spite of their relative simplicity. Gravitational collapse, black holes and quantum effects are examples of subjects whose description is rather complicated in four-dimensional gravity while their lower-dimensional versions turn out to be more treatable, sometimes completely solved. This is the case of the seminal work of Callan, Giddings, Harvey and Strominger (CGHS), where black-hole solutions are found and analysed semi-classically, giving us a two dimensional version of the Hawking effect.
In the CGHS model, the starting point is the four-dimensional Einstein-Hilbert action in the spherically symmetric metric, where the Schwarzchild one is the simplest case. Then, the assumption of dependence in two variables for all the fields is done and, with a suitable form of the metric, it is possible to integrate the angular dependence. The final result is a two-dimensional action with a new field, besides the gravitational one, called the dilaton, that can be taken as the relic of the integrated coordinates. This is the two-dimensional dilaton gravity. Some improvements have been done to circumvent difficulties that arise from its quantum version, but leaving the initial purpose unchanged.
Black-hole solutions are found to be formed from non-singular initial conditions, namely a source of scalar matter coupled minimally with the two-dimensional gravity sector (the terminology will be clarified later). There is also a linear vacuum region, which turns out to be relevant to these models.
Since these results are classical, the next step is the search for quantum effects: it is well known that whereas classical black holes radiate nothing, the semi-classical ones give off thermal radiation by a process called the Hawking-Bekenstein effect. An important feature of the semi-classical version of this effect is the conformal anomaly, since the Hawking radiation can be derived from this quantity. Recently, a series of works has given attention to the generalization of the conformal anomaly to the CGHS model, with different couplings between the scalar field and gravitation.
By the semi-classical version of this theory we mean the scalar field quantized in the curved, classical, background. However, as pointed out before , we must be careful with the field variable to be considered when we use the Fujikawa method. In the present case we make a redefinition of the scalar field variable that, in the two dimensional case, results in a more general expression for the trace anomaly. We discuss here how this redefinition generates the same expression for the generalized anomaly found in , in a simple and direct way. This is one of the results of this work.
On the other hand, since the anomaly is used to calculate the Hawking radiation in the two dimensional case, we can expect new contributions to this quantity from the new terms of the conformal anomaly. In the CGHS model, these new terms give a generalization of the expression for the Hawking radiation that depends on the effective mass of the black hole, absent in the original calculation.
This article is organized as follows: in the next section we discuss the motivation for the definition of a new field variable, the modification in the original action and the resulting expression for the trace anomaly in a direct way. After that, we present without details the main results of the CGHS model concerning the black hole radiation to fix notation, then, following the same steps as in , we calculate the new expression for the Hawking radiation derived from the modified anomaly. Discussions and final remarks are in the conclusion.
2 THE 2d DILATON GRAVITY WITH NON-MINIMAL COUPLING
The semi-classical quantization via functional methods requires integration over the fields (the scalar one in this particular case). On the other hand, we are interested in theories whose full quantized version is free of anomalies. This paradigm can be worked out via the BRST analysis of the theory by using Fujikawa’s technique . In this framework, the quantized theory is anomaly-free provided the functional measure is BRST-invariant. It follows that this invariance requires a redefinition of the field variables, the so-called gravitational dressing, and we must consider these new fields as the variables of the model that we are studying. We stress that without this modification the conservation of the quantized energy-momentum tensor (EMT) is not verified . Another remarkable fact is that the trace anomaly would be null (only in the 2d case).
For the present case, 2d scalar field $`f`$, this redefinition means
$$f\to \stackrel{~}{f}=(-g)^{\frac{1}{4}}f,$$
(1)
where $`g=detg_{\mu \nu }`$.
The original action can be rewritten in terms of this new variable:
$$S[f,g_{\mu \nu }]=\frac{1}{2\pi }\int d^2x\sqrt{-g}\,\partial _\mu f\partial ^\mu f\;\to \;S[\stackrel{~}{f},g_{\mu \nu }]=\frac{1}{2\pi }\int d^2x\,\partial _\mu \stackrel{~}{f}\partial ^\mu \stackrel{~}{f}$$
(2)
It is straightforward to see that the action for the new variables is not conformally invariant, since under a transformation
$$g_{\mu \nu }^{\prime }=e^{2\alpha }g_{\mu \nu }$$
(3)
the field $`\stackrel{~}{f}`$ transforms as
$$\stackrel{~}{f}\to \stackrel{~}{f}^{\prime }=e^\alpha \stackrel{~}{f}.$$
(4)
Now, let us use the conformal gauge
$$g_{\mu \nu }=e^{2\rho }\eta _{\mu \nu }\;\Rightarrow \;(-g)^{\frac{1}{4}}=e^\rho $$
(5)
and write
$$\stackrel{~}{f}^{\prime }=(e^\rho f)^{\prime }=e^\alpha e^\rho f$$
(6)
so, in this gauge, a conformal transformation on $`\stackrel{~}{f}`$ is equivalent to making
$$\rho \to \rho ^{\prime }=\rho +\alpha $$
(7)
Using these relations, we can define a non-minimal coupling to the field variable through a gauge type field, namely
$$\stackrel{~}{\mathrm{\nabla }}_\mu \stackrel{~}{f}=(\partial _\mu -A_\mu )\stackrel{~}{f}$$
(8)
where
$$A_\mu =\partial _\mu \rho $$
(9)
transforms as
$$A_\mu ^{\prime }=A_\mu +\partial _\mu \alpha $$
(10)
With the definition (8), we can write a conformally invariant action to the fields $`\stackrel{~}{f}`$ and $`g_{\mu \nu }`$:
$$S[\stackrel{~}{f},g_{\mu \nu }]=\frac{1}{2\pi }\int d^2x\,\stackrel{~}{\mathrm{\nabla }}^\mu \stackrel{~}{f}\,\stackrel{~}{\mathrm{\nabla }}_\mu \stackrel{~}{f}$$
(11)
or
$$S[\stackrel{~}{f},g_{\mu \nu }]=-\frac{1}{2\pi }\int d^2x\,\stackrel{~}{f}\left(\partial ^\mu \partial _\mu -A^\mu A_\mu +\partial ^\mu A_\mu \right)\stackrel{~}{f}$$
(12)
This modification leads to a generalization of the value of trace anomaly which is easily calculated, since the field $`A_\mu `$ is not quantized and can be considered as an external field. The resulting anomaly is :
$$R\to R_{(generalized)}=R+\beta \left(A^\mu A_\mu -\partial ^\mu A_\mu \right)$$
(13)
Here, the parameter $`\beta `$ shows us the presence of non-minimal coupling ($`\beta =1`$) or its absence ($`\beta =0`$).
Anticipating a result from the next section, namely that the conformal factor $`\rho `$ equals the dilaton field $`\varphi `$, we have:
$$R_{(generalized)}=R+\beta \left(\partial ^\mu \varphi \,\partial _\mu \varphi +\partial _\mu \partial ^\mu \varphi \right)$$
(14)
There are many recent works dealing with this general value for the 2d trace anomaly, some of them with different but related motivations , rendering different numerical values for the extra terms in (14). This does not change our analysis.
3 THE CGHS MODEL FOR THE 2d DILATON GRAVITY AND THE HAWKING RADIATION
With an eye to comparing results, we present in this section the CGHS model for two dimensional gravity and, specifically, the expression for the black-hole radiation, derived from the semi-classical version following the same steps as in .
The starting point is the action
$$S=\frac{1}{2\pi }\int d^2x\sqrt{-g}\,e^{-2\varphi }\left\{R+4(\mathrm{\nabla }\varphi )^2+4\lambda ^2\right\}.$$
(15)
Here, $`R`$ is the bidimensional scalar curvature and $`\lambda `$ is to be considered as a cosmological constant. As mentioned before, the dilaton field $`\varphi `$, which in two dimensional space-time is a scalar, comes from the angular part of the original 4d metric , so that this action must be considered as the pure gravitational part. This is underlined by the possibility of setting the dilaton equal to the gravitational field, at least classically.
Using the light-cone coordinates and the conformal gauge, the metric becomes
$$g_{+-}=-\frac{1}{2}e^{2\rho },\qquad g_{++}=g_{--}=0.$$
(16)
This two dimensional model has a black-hole-type solution when a scalar field $`\phi `$ is taken as the matter source: a shock wave at $`x^+=x_0^+`$ traveling in the $`x^{-}`$ direction, with intensity proportional to the constant $`a`$.
The solutions for the resulting equations are:
$$e^{-2\rho }=e^{-2\varphi }=\frac{M}{\lambda }-\lambda ^2x^{+}x^{-}$$
(17)
for $`x^+>x_0^+`$ and for $`x^+<x_0^+`$ we have the vacuum. $`M=ax_0^+\lambda `$ is identified with the mass of the hole. Note that $`\rho =\varphi `$ is a consequence of the calculation and it will be used later.
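A short symbolic check (ours; we assume the usual conformal-gauge curvature formula $`R=8e^{-2\rho }\partial _{+}\partial _{-}\rho `$ for $`ds^2=-e^{2\rho }dx^{+}dx^{-}`$) confirms the black-hole interpretation of (17): the scalar curvature comes out as $`4M\lambda /(M/\lambda -\lambda ^2x^{+}x^{-})`$, singular where $`e^{-2\varphi }`$ vanishes and identically zero for $`M=0`$:

```python
import sympy as sp

M, lam, xp, xm = sp.symbols('M lambda x_plus x_minus', real=True)

u = M / lam - lam**2 * xp * xm      # e^{-2 rho} from Eq. (17)
rho = -sp.log(u) / 2
R = sp.simplify(8 * u * sp.diff(rho, xp, xm))   # R = 8 e^{-2 rho} d_+ d_- rho
print(R)   # 4*M*lambda/(M/lambda - lambda**2*x_plus*x_minus), up to simplification
```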
Up to this point, all the results are classical and to obtain information about the Hawking radiation we must consider quantum effects. This can be done through the relation between the trace anomaly and the components of the vacuum expectation value (VEV) of the energy momentum tensor (EMT) . In two dimensions, the VEV for the massless field is given by
$$\langle T_\mu ^{\;\mu }\rangle =\frac{N}{24}R$$
(18)
where $`R`$ is the scalar curvature and $`N`$ is related to the number of fields in the model. Of course, the RHS of this equation is only due to the quantum corrections, since the classical expression for the trace of the EMT is null for zero mass fields. Up to numerical factors, the expression for this anomaly is the same for all kinds of fields.
The conservation of the EMT must be imposed, so
$$\mathrm{\nabla }^\nu \langle T_{\mu \nu }\rangle =0.$$
(19)
The solutions of these equations are, in the conformal gauge,
$$T_{--}=\partial _{-}\rho \,\partial _{-}\rho -\partial _{-}^2\rho +t_{-}(x^{-})$$
(20)
and
$$T_{++}=\partial _{+}\rho \,\partial _{+}\rho -\partial _{+}^2\rho +t_{+}(x^{+})$$
(21)
The limits of (20) and (21) at the asymptotic regions, $`I^+`$ and $`I^{-}`$, give us the values of $`t_+`$ and $`t_{-}`$. In order to compare with the literature, we use the same coordinates as in \[CGHS\]:
$$x^{+}=\frac{1}{\lambda }e^{\lambda y^{+}}\qquad \text{and}\qquad x^{-}=-\frac{1}{\lambda }e^{-\lambda y^{-}}-\frac{a}{\lambda ^2}.$$
(22)
These definitions result in a new metric that is still conformally related to the flat one or, in other words, they preserve the conformal gauge. This new metric is obtained by writing (17) in terms of the variables $`y^+`$ and $`y^{-}`$, giving the conformal factor as
$$e^{2\rho }=\begin{array}{cc}\left(1+\frac{a}{\lambda }e^{\lambda y^{-}}\right)^{-1}\hfill & \text{for}\;y^+<y_0^+\hfill \\ \left(1+\frac{a}{\lambda }e^{\lambda (y^{-}-y^{+}+y_0^{+})}\right)^{-1}\hfill & \text{for}\;y^+>y_0^+,\hfill \end{array}$$
(23)
where $`\lambda x_0^+=e^{\lambda y_0^+}`$.
The requirement that these expressions vanish in the vacuum region ($`x^+<x_0^+`$) gives us the values of $`t_+`$ and $`t_{-}`$, which can be calculated in the limits $`e^{\lambda y^{-}}\to \mathrm{\infty }`$ and $`e^{\lambda y^{-}}\to \frac{a}{\lambda }`$. This condition applied in (20) gives us
$$t_{-}=\frac{\lambda ^2}{4}\left(1-\left(1+\frac{a}{\lambda }e^{\lambda y^{-}}\right)^{-2}\right).$$
(24)
The value of the $`T_{--}`$ component at $`y^+\to \mathrm{\infty }`$ ($`x^+\to \mathrm{\infty }`$) is the flux across future null infinity and, taking this limit again in (20), we see that the remaining term is just $`t_{-}`$.
The Hawking radiation is the value of $`T_{--}`$ near the horizon (at $`y^{-}\to -\mathrm{\infty }`$ it vanishes):
$$x^{-}\to -\frac{a}{\lambda ^2}\qquad \text{or}\qquad e^{-\lambda y^{-}}\to 0,$$
(25)
Using (24) and (20), the final expression is:
$$T_{--}^{horizon}=\frac{\lambda ^2}{4}.$$
(26)
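Both limits used above are quick to verify symbolically; the following sketch (ours) checks that (24) interpolates between vanishing flux in the far past and the constant value of (26) at the horizon:

```python
import sympy as sp

lam, a, y = sp.symbols('lambda a y', positive=True)  # y stands for y^-

t_minus = lam**2 / 4 * (1 - (1 + (a / lam) * sp.exp(lam * y))**(-2))  # Eq. (24)

print(sp.limit(t_minus, y, -sp.oo))  # 0          : flux vanishes as y^- -> -infinity
print(sp.limit(t_minus, y, sp.oo))   # lambda**2/4: horizon value, Eq. (26)
```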
At this point, it is worth mentioning that the absence of the mass in (26) is peculiar to the CGHS model.
Now, let us consider the modifications to the expression for the anomaly and show how this result affects the expression for the Hawking radiation. Our strategy will be to follow the same steps as those shown above, using the expression (13) in (18) and (19).
With our choice of gauge, the trace anomaly is now given by
$$R=e^{-2\rho }\left(8\partial _{+}\partial _{-}\rho -6\beta \left(16\partial _{+}\partial _{-}\rho +64\partial _{+}\rho \,\partial _{-}\rho \right)\right).$$
(27)
The non-zero components of relation (18) turn out to be
$$T_{+-}=\alpha \,\partial _{+}\partial _{-}\rho +\beta \,\partial _{+}\rho \,\partial _{-}\rho ,$$
(28)
where the constants $`\alpha `$ and $`\beta `$ were redefined in terms of those of (13). In this way, it is simple to see that the new contribution comes from the second term on the RHS of (28).
Following the same steps as before, namely using (28) in (18) and (19), we have, e.g., for the $`T_{--}`$ component
$$T_{--}=T_{--}^{\beta =0}+\beta \left[\frac{1}{2}(\partial _{-}\rho )^2+\rho \,\partial _{-}^2\rho -2\rho \,(\partial _{-}\rho )^2\right]+t(y^{-})$$
(29)
where $`T_{--}^{\beta =0}`$ is given by (20) or (21), depending on which portion of the space-time we are considering: in the $`y^+<y_0^+`$ region there is no extra contribution, since the new terms do not depend on $`y^+`$. Consequently $`t(y^{-})`$ has the same value calculated previously.
As before, the expression for $`t(y^{-})`$ at $`y^+>y_0^+`$ is obtained by applying the asymptotic limit in the vacuum region, $`y^{-}\to 0`$:
$$t(y^{-})=t(y^{-})^{\beta =0}+\beta \frac{\lambda }{4}\left(\frac{1}{2}+\frac{1}{2}e^{4\rho }-2\rho \right),$$
(30)
with $`t(y^{-})^{\beta =0}`$ given by (24) and $`\rho `$ by (23).
Finally, taking the appropriate limits, we arrive at the desired expression,
$$T_{--}^{horizon}=\frac{\lambda ^2}{4}\left[\alpha +\beta \,\mathrm{ln}\left(\frac{M}{a}\right)\right]$$
(31)
where the constants $`\alpha `$ and $`\beta `$ were redefined again for simplicity.
The modification of the expression for the Hawking radiation (31) was already expected, since the expression for the anomaly was modified. The dependence on the effective mass arises due to the nonlinearity of the extra term in (27) and is a direct consequence of the new couplings introduced before. This is another result of this work.
4 CONCLUSIONS AND FINAL REMARKS
In this paper, we use the non-minimal coupling between the scalar field and 2d gravitation that gives rise to a generalization of the trace anomaly. The non-minimal coupling includes a gauge-type field, given in terms of the gravitational field (or the dilaton in the CGHS model). The motivation for introducing this extra coupling is the requirement of conformal invariance of the action for the redefined field which, unlike the original one, is not conformally invariant.
The semiclassical quantization allows us to consider this gauge field as an external one, rendering the calculation of the anomaly very simple: we just need to add the new terms that appear in the equation of motion derived from (12).
The expression for the auxiliary field is due to the fact that in two dimensions we can always use the conformal gauge. In higher dimensions, this choice would be very restrictive but, in that case, it is not necessary to redefine the matter field to satisfy (19), and the definition of the conformal gauge field is straightforward . Actually, in the 4d case, (19) is used to fix some of the numerical values of the EMT .
In two dimensional space-time the Hawking radiation is easily obtained via its relation with the trace anomaly . The calculations using the CGHS model yield an expression for the black-hole radiation that does not depend on the mass of the hole. On the contrary, for the Schwarzschild black hole there is such a dependence, with important consequences for the behaviour of these objects.
When we use the generalized conformal anomaly, the resulting expression for the Hawking radiation depends on the mass. The modification in the Hawking radiation was expected because of relations (13) and (18), and we can expect a more general behaviour for these structures. We mention also that, in the CGHS model, the dependence of the radiation on the temperature is assured by the relation with the $`\lambda `$ parameter, since the temperature in this case is given by :
$$T=\frac{\lambda }{2\pi }$$
(32)
and it is found to be the same as in the four dimensional case, to wit, $`T_{-}\propto T^2`$ (in this case, $`T\propto M^{-1}`$ and $`T_{-}\propto M^{-2}`$). However, the modification in (31) breaks the relation between radiation and temperature, so this result must be taken as a higher order term and not as the complete expression.
Finally, it would be interesting to study other models using the non-minimal coupling to see what the consequences of this choice are. Works in this direction are in progress.
ACKNOWLEDGEMENTS
The author is grateful to Prof. Carlos Farina for reading the manuscript and useful comments. This work was partially supported by Fundação Universitária José Bonifácio, FUJB.
REFERENCES
C.G. Callan, S.B. Giddings and J.A. Strominger, Phys. Rev. D 45 (1992) R1005; J.A. Strominger, in Les Houches Lectures on Black Holes (1994), hep-th/9501071.
R. Mann, A. Shiekm and L. Tarasov, Nucl. Phys. B341 (1992) 134; R. Jackiw, in Quantum Theory of Gravity, ed. S.Christensen (Adam Hilger,Bristol, 1984), p.403; C. Teitelboim, ibid, p. 327.
J. Russo, L. Susskind and L. Thorlacius, Phys. Rev. D 45 (1992) 3444; 47 (1993) 533
J. Maharana and J.H. Schwarz, Nucl. Phys. B390 (1993) 3; J.Scherk and J.H. Schwarz, Nucl. Phys. B153 (1979) 61; J. Maharana, Phys. Rev. Lett. 75,2 (1995) 205.
N.D. Birrel and P.C. Davies, in Quantum Fields in Curved Spacetime (Cambridge University Press, Cambridge, 1984)
K. Fujikawa in Quantum Gravity and Cosmology, ed. H.Sato and T.Inami (Singapore: World scientific); Phys. Rev. D 25 (1982) 2584.
K. Fujikawa, U. Lindstrom, N.K. Rocek and P.van Nieuwenhuizen, Phys. Rev. D 37 (1988) 391.
M.Alves and C.Farina, Class.Quantum Grav. 9 (1992)1841; M.Alves, Class.Quantum Grav. 13 (1996) 171.
S.M. Christensen and S.A. Fulling, Phys. Rev. D 15 (1977) 2088.
S.Hawking and R.Boussos, hep-th/9705236; J.S.Dowker, hep-th/9802029; S.Ichinoise and S.Odintsov, hep-th/9802043.
S.Hawking, Commun.math.Phys.43,199(1975);G.Gibbons and S.Hawking, Phys.Rev.D 15(1976)2738.
See, for example, A.Gosh, hep-th/9604056 and references therein.
M.Alves and J.Barcelos-Neto, Class.Quantum Grav. 5(1988)377.
# Hard Thermal Loops and the Sphaleron Rate on the LatticePresented by K. Rummukainen at the conference LATTICE ’99, Pisa, Italy, July 1999.
## 1 MOTIVATION
Baryon number is not a conserved quantity in the Standard Model: due to the anomaly, the violation is related to the (Minkowski time) topological susceptibility of the SU(2) weak group. While at low temperatures the violation is totally negligible , at temperatures above the electroweak symmetry restoration temperature ($`\sim 100`$ GeV) the rate of the baryon number violation (sphaleron rate) $`\mathrm{\Gamma }`$ is large. This can have significant repercussions for baryon number generation in the early Universe, and it opens the avenue for purely electroweak baryogenesis.
Even though the weak coupling constant is small, at high temperatures the sphaleron processes are dominated by IR momenta $`k\sim g^2T`$ and are thus inherently non-perturbative. Moreover, the IR modes behave essentially classically, which is signalled, for example, by the large occupation numbers of the Bose fields: $`n(k\sim g^2T)=(e^{k/T}-1)^{-1}\approx T/k\sim 1/g^2\gg 1`$. This has motivated the much utilized method of using the classical equations of motion to calculate $`\mathrm{\Gamma }`$ in hot SU(2) theories (the Higgs and fermionic degrees of freedom effectively decouple in the hot EW phase). For recent reviews, see , .
The success of the classical method hinges on the efficient decoupling of the almost-classical IR modes relevant for the sphaleron processes and the strongly non-classical UV modes. However, as argued by Arnold, Son and Yaffe , this decoupling is not complete. A step beyond the classical approximation is the hard thermal loop (HTL) effective theory , which incorporates the leading order effects of the UV modes. The HTL theory can be cast in various forms; most practical for lattice computations is the one where the hard modes are described by including a large number of classical massless particles with adjoint charge moving on the background of IR fields. This field + particles system can be put on a lattice as such, and it has been successfully used in simulations . In this work we use an alternative Boltzmann-Vlasov approach, where the particles are described with local density functions $`n(t,\stackrel{}{x},\stackrel{}{k})`$. For full description, see .
## 2 HTL THEORY ON THE LATTICE
Let us consider a system consisting of the HTL particles moving on the background of IR gauge fields. The particle density functions $`n(t,\stackrel{}{x},\stackrel{}{k})`$ obey the Vlasov equation
$`{\displaystyle \frac{\mathrm{d}_{\mathrm{conv}}n}{\mathrm{d}t}}=0`$ $`=`$ $`\partial _0\delta n+\stackrel{}{v}\cdot \stackrel{}{D}\delta n+\partial _0\stackrel{}{k}\cdot {\displaystyle \frac{\partial n}{\partial \stackrel{}{k}}}`$ (1)
$`=`$ $`v\cdot D\,\delta n+g\,v_iF^{0i}{\displaystyle \frac{\partial n_0}{\partial k}},`$
where $`n_0=(e^{k/T}-1)^{-1}`$, $`n=n_0+\delta n^a`$, and the Lorentz force $`\stackrel{}{v}\times \stackrel{}{B}`$ has been neglected. The IR gauge fields evolve according to the Yang-Mills equations:
$$D_\mu F^{\mu \nu }=j_{\mathrm{hard}}^\nu =4g\int \frac{d^3k}{(2\pi )^3}\,v^\nu \,\delta n,$$
(2)
where the 4-velocity $`v=(1,\stackrel{}{k}/k)`$. These equations can be further simplified by factorizing $`\delta n^a=gW^a(x,\stackrel{}{v})\,(\partial n_0/\partial k)`$ and integrating over the amplitude $`|\stackrel{}{k}|`$ :
$`D_\mu F^{\mu \nu }`$ $`=`$ $`m_D^2{\displaystyle \int \frac{d\mathrm{\Omega }}{4\pi }\,v^\nu \,W(x,\stackrel{}{v})}`$
$`v^\mu D_\mu W(x,\stackrel{}{v})`$ $`=`$ $`v_iF^{0i}`$ (3)
Here $`d\mathrm{\Omega }`$ integration is over the directions of the 4-velocity $`v`$. The field $`W^a(x,\stackrel{}{v})`$ is proportional to the flux of the particles at point $`x`$ to direction $`\stackrel{}{v}`$.
In order to perform lattice simulations the field $`W`$ has to be regularized in space (standard lattice) and on the $`\stackrel{}{v}`$-sphere. We do this by expanding $`W`$ in spherical harmonics: $`W^a(x,\stackrel{}{v})=\sum _{lm}W_{lm}^a(x)Y_{lm}(\stackrel{}{v})`$, and truncating the expansion to $`l\leq l_{\mathrm{max}}`$. In terms of $`W_{lm}`$, the equations (3) finally become
$`D_iF^{i0}`$ $`=`$ $`(m_D^2/\sqrt{4\pi })W_{00}`$ (4)
$`D_\mu F^{\mu i}`$ $`=`$ $`(m_D^2/4\pi )V_m^iW_{1m}`$ (5)
$`D_0W_{lm}`$ $`=`$ $`-C_{lm;i}^{l^{\prime }m^{\prime }}D_iW_{l^{\prime }m^{\prime }}+\delta _{l,1}V_m^iF^{0i}.`$ (6)
Here the coefficients $`C_{lm;i}^{l^{\prime }m^{\prime }}=\int d\mathrm{\Omega }\,Y_{lm}^{*}\,v^i\,Y_{l^{\prime }m^{\prime }}`$ and $`V_m^i=\int d\mathrm{\Omega }\,Y_{1m}\,v^i`$. Eq. (4) is the Gauss law, and, as long as it is satisfied by the initial configuration, it is preserved by Eqs. (5) and (6).
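The coefficients are ordinary angular integrals and can be generated once and for all; a small symbolic sketch (ours) reproduces two representative entries:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
v = (sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta))

def Y(l, m):
    return sp.Ynm(l, m, theta, phi).expand(func=True)

def ang_int(expr):
    return sp.integrate(sp.integrate(expr * sp.sin(theta), (theta, 0, sp.pi)),
                        (phi, 0, 2 * sp.pi))

# V^z_0 = Int dOmega Y_10 v^z = sqrt(4*pi/3)
print(sp.simplify(ang_int(Y(1, 0) * v[2])))
# C^{10}_{00;z} = Int dOmega Y*_00 v^z Y_10 = 1/sqrt(3)
print(sp.simplify(ang_int(sp.conjugate(Y(0, 0)) * v[2] * Y(1, 0))))
```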
With a finite $`l_{\mathrm{max}}`$, these equations can be readily discretized: the SU(2) gauge field is defined on the links of the lattice, and the $`(l_{\mathrm{max}}+1)^2`$ adjoint $`W_{lm}^a`$ fields are on lattice sites. The discretization and the properties of the theory on the lattice are discussed in detail in .
## 3 THE SPHALERON RATE
The measurement of the sphaleron rate proceeds along similar lines to the purely classical theory: first, we generate an ensemble of initial thermalized configurations (which satisfy the Gauss law), and then evolve these with Eqs. (5),(6). We then obtain $`\mathrm{\Gamma }`$ by measuring the rate of the Chern-Simons number diffusion .
We have to check how $`\mathrm{\Gamma }`$ depends on (a) $`l_{\mathrm{max}}`$, (b) lattice spacing and (c) the Debye mass $`m_D`$. Only the last parameter is physical (it depends on the particle content of the theory).
Let us first consider the $`l_{\mathrm{max}}`$ dependence. In Fig. 1 we show $`\mathrm{\Gamma }`$ measured from a set of lattices with $`l_{\mathrm{max}}\leq 10`$. We note that when $`l_{\mathrm{max}}`$ is even, the rate remains remarkably constant (much better than indicated by naive arguments ). The behaviour at odd $`l_{\mathrm{max}}`$ can be understood by considering the properties of the gauge field propagator . Thus, we conclude that modest values of $`l_{\mathrm{max}}4`$–6 are sufficient in order to obtain the $`l_{\mathrm{max}}\to \mathrm{\infty }`$ behaviour within reasonable statistical errors.
Dimensionally, one would expect that $`\mathrm{\Gamma }\alpha ^4T^4`$, the non-perturbative scale to the fourth power. However, as argued by Arnold, Son and Yaffe , the evolution of the IR fields is Landau damped by the UV modes, and the rate is slower by one further factor of $`\alpha `$, parametrically
$$\mathrm{\Gamma }=\kappa ^{\prime }\frac{g^2T^2}{m_D^2}\alpha ^5T^4,$$
(7)
where $`\kappa ^{\prime }`$ is a constant to be determined by lattice measurements. In Fig. 2 we show the behaviour of $`\mathrm{\Gamma }`$ against $`g^2T^2/m_D^2`$, measured using various lattice spacings $`a\sim 1/\beta _0`$. The rate is clearly not constant when the Debye mass is varied, and it goes to zero when $`m_D^2\to \mathrm{\infty }`$, as predicted by Eq. (7). We have not observed any significant dependence on the lattice spacing (provided that it is small enough). It should be noted that the physical $`m_D^2`$ is not only the ‘bare’ $`m_D^2`$ which appears in Eq. (5), but it is a sum of the bare $`m_D^2`$ and a contribution $`\sim 1/a`$ due to the UV lattice gauge field modes .
In Fig. 3 we compare the coefficient $`\kappa ^{\prime }`$ of the scaling law (7) as measured in this work, with the particles method , and with only the classical SU(2) gauge theory evolution without any added HTL degrees of freedom . In the last case the physical $`m_D^2`$ arises solely through the lattice UV modes; here the lattice spacing is up to a factor of $`\sim 4`$ smaller than in the two HTL approaches. The consistency of the results is remarkable, considering the very different treatments used.
To conclude, the sphaleron rate in hot SU(2) gauge theory is now settled. Inserting the Standard Model value $`m_D^2=\frac{11}{6}g^2T^2`$, we obtain for the rate the value $`\mathrm{\Gamma }=(25\pm 2)\alpha ^5T^4`$.
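For reference, inverting the quoted numbers through the scaling law (7) (simple arithmetic, ours) fixes the constant $`\kappa ^{\prime }`$:

```python
# Gamma = kappa' (g^2 T^2 / m_D^2) alpha^5 T^4 with m_D^2 = (11/6) g^2 T^2
# gives Gamma = kappa' * (6/11) * alpha^5 T^4; from Gamma = (25 +- 2) alpha^5 T^4:
kappa_prime = 25.0 * 11.0 / 6.0
print(f"kappa' ~ {kappa_prime:.1f} (+- {2.0 * 11.0 / 6.0:.1f})")  # ~45.8 +- 3.7
```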